Permanently Deleted

  • Frank [he/him]
    ·
    3 months ago

    Here's the apparent buried lede:

    This conjecture builds on the notion that a name is a stereotype that carries social meanings and expectations (6–9), eventually enabling a shared representation regarding what the “right name” most likely is for a specific face (10). As years go by, people internalize the characteristics and expectations associated with their name and embrace them, consciously or unconsciously, in their identity and choices (2, 11, 12). Facial appearance may be affected by this process directly, as when a person chooses specific features according to these expectations [e.g., hairstyle, glasses, make-up; (13)], or indirectly, via other behaviors that affect one’s facial appearance [e.g., facial expressions; (14, 15)]. The hypothesis behind the face–name matching effect is that facial appearance may change over time to eventually represent how we “should” look.

    They're talking about haircut, glasses, and makeup, not bone structure and fat distribution.

    This is still, at best, sketchy.

    • iie [they/them, he/him]
      hexagon
      ·
      edit-2
      3 months ago

      They're talking about haircut, glasses, and makeup, not bone structure and fat distribution.

      Obviously not bone structure. I don't know why that would constitute a lede to be buried.

      But they're not just talking about haircut, glasses, and makeup either. They found effects even for grey-scale images with hair cropped out of the photos—see the quote below. Other studies have found that a person's personality can affect their face—probably through the facial expressions they tend to make. If your name might affect your personality, and your personality might affect your face, it seems reasonable to investigate if your name can affect your face. The researchers provide multiple lines of evidence suggesting this might be the case.

      Across the machine learning studies (Studies 3 and 4B), while the facial images included facial accessories (e.g., glasses, etc.), the images were cropped around the face itself such that hardly any hair was included. Prior to feeding the images into the neural network, we preprocessed the facial images using several steps to ensure accuracy and consistency. Initially, OpenCV’s deep learning face detector, which is based on the single shot detector (SSD) framework with a ResNet base network, was employed to crop faces from the images. All cropped faces were manually verified to ensure the accuracy of the detection. Subsequently, the images were converted to grayscale, normalized to have pixel values between 0 and 1, and resized to 128 × 128 pixels. This preprocessing approach is supported by several studies that highlight the importance of consistent face detection and preprocessing for improving neural network performance (30–32).
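
      For anyone curious, the normalization steps they describe (grayscale, scale to [0, 1], resize to 128 × 128) boil down to something like the sketch below. This is plain NumPy, not the paper's actual code: the SSD/ResNet face detector is omitted because it needs pretrained model files, and I'm using nearest-neighbour resizing where OpenCV would interpolate.

      ```python
      import numpy as np

      def preprocess_face(crop: np.ndarray, size: int = 128) -> np.ndarray:
          """Grayscale -> [0, 1] -> size x size.

          `crop` is assumed to be an already-detected face region as an
          RGB array (H, W, 3); the detection step itself is not shown.
          """
          # RGB -> grayscale using ITU-R BT.601 luminance weights
          gray = crop[..., :3] @ np.array([0.299, 0.587, 0.114])
          # Normalize 8-bit pixel values to [0, 1]
          gray = gray / 255.0
          # Nearest-neighbour resize to size x size
          h, w = gray.shape
          rows = np.arange(size) * h // size
          cols = np.arange(size) * w // size
          return gray[np.ix_(rows, cols)]

      # Dummy "face crop" standing in for a detector's output
      face = np.random.randint(0, 256, (240, 200, 3)).astype(np.float64)
      out = preprocess_face(face)
      print(out.shape)  # (128, 128)
      ```

      The point being: after this, the network only ever sees a hairless, colorless 128 × 128 patch of face, so hairstyle and color-dependent cues like makeup can't be what it's picking up on.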