Article

Assessing the Role of Facial Symmetry and Asymmetry between Partners in Predicting Relationship Duration: A Pilot Deep Learning Analysis of Celebrity Couples

1 Maxillofacial Surgery, University Hospital Ruppin-Brandenburg, Fehrbelliner Straße 38, 16816 Neuruppin, Germany
2 Department of Oral and Maxillofacial Plastic Surgery, University Hospital of Würzburg, 97070 Würzburg, Germany
3 Department of Oral and Maxillofacial Surgery, Tuebingen University Hospital, Osianderstrasse 2-8, 72076 Tuebingen, Germany
4 Department of Orthopedics and Trauma Surgery, Medical Centre-Albert-Ludwigs-University of Freiburg, Faculty of Medicine, Albert-Ludwigs-University of Freiburg, 79106 Freiburg, Germany
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(2), 176; https://doi.org/10.3390/sym16020176
Submission received: 5 December 2023 / Revised: 23 January 2024 / Accepted: 29 January 2024 / Published: 2 February 2024
(This article belongs to the Section Life Sciences)

Abstract
Prevailing studies on romantic relationships often emphasize facial symmetry as a factor in partner selection and marital satisfaction. This study aims to explore the inverse of this hypothesis—the relationship between facial dissimilarity and partnership duration among celebrity couples. Utilizing the CELEB-A dataset, which includes 202,599 images of 10,177 celebrities, we conducted an in-depth analysis using advanced artificial intelligence-based techniques. Deep learning and machine learning methods were employed to process and evaluate facial images, focusing on dissimilarity across various facial regions. Our sample comprised 1822 celebrity couples. The predictive analysis, incorporating models like Linear Regression, Ridge Regression, Random Forest, Support Vector Machine, and a Neural Network, revealed varying degrees of effectiveness in estimating partnership duration based on facial features and partnership status. However, the most notable performance was observed in Ridge Regression (Mean R2 = 0.0623 for whole face), indicating a moderate predictive capability. The study found no significant correlation between facial dissimilarity and partnership duration. These findings emphasize the complexity of predicting relationship outcomes based solely on facial attributes and suggest that other nuanced factors might play a more critical role in determining relationship dynamics. This study contributes to the understanding of the intricate nature of partnership dynamics and the limitations of facial attributes as predictors.

1. Introduction

The longstanding fascination with why certain individuals form and sustain romantic partnerships extends from personal curiosity to academic inquiry. One popular theory is that physical resemblance, particularly in facial features, plays a pivotal role in romantic relationships [1,2,3,4,5]. This belief, which has roots in various cultures, posits that similar-looking couples are viewed as better matches. Previous research has further fueled this notion by demonstrating higher facial resemblance among couples than non-couples, as well as a correlation between facial similarity and marital satisfaction [6].
The mechanisms behind this observed similarity are diverse, ranging from evolutionary advantages such as genetic compatibility [6] to the psychological tendency toward narcissistic mate selection [1]. Additionally, theories have been proposed to explain the development of similar facial features over time through long-term exposure to a partner’s facial expressions [5]. What ties these theories together is the focus on stable, intrinsic facial features like the shape of the eyes, nose, mouth, and chin, as opposed to more transient, extrinsic features like hairstyle or makeup [6,7].
Deep convolutional neural networks (DCNNs) trained for facial recognition have matched or surpassed human performance [8]. These networks seem to inherently encode various facial attributes. Studies have shown a strong resemblance between the advanced layers of DCNNs’ object and face representations and the neural activities in the brains of primates [9,10]. Similar parallels have been observed in human brains [11,12,13]. Most of these studies relied on brief displays of static images; only a few used longer clips of moving faces, finding a limited correlation between DCNN and brain representations [8,14]. Extended exposure to facial images and videos may engage diverse cognitive aspects. Humans instinctively judge new faces for traits like trustworthiness or attractiveness, which can alter how these faces are mentally represented [15,16]. Knowledge about a person significantly affects how familiar faces are processed [17,18,19]. Familiarity can skew these representations, and the resemblance of new faces to known ones can impact perception and judgment [8]. Faces also influence where attention is directed, and various factors like personal traits, familiarity, and memory can affect neural responses to faces [20,21]. Further research is needed to disentangle the impact of these social and cognitive elements on the mental representation of faces. Similarly, the integration of dynamic and social cues (such as facial expressions and eye movements) in machine vision systems could improve their effectiveness in human–computer interactions.
Facial attractiveness, crucial for various biological advantages including mating success [22], earning potential [23], and longevity [24], is consistently valued across different ages and cultures [25,26]. While there is a general consensus on what makes a face attractive, some individual and cross-cultural differences do exist. Facial attractiveness is often quantified using ideal ratios such as neoclassical canons [27,28], golden proportions [29,30], facial thirds [31,32], and new golden ratios [33,34]. These ratios define attractiveness through the spatial relationships of facial features. The golden ratio, an irrational number approximately equal to 1.618 (often quoted by its reciprocal, 0.618), is frequently cited in aesthetics for its harmonious proportions and is considered a universal standard in facial attractiveness, particularly in fields like plastic surgery [30,35]. Research shows that these ratios have a neural basis, with studies like Shen et al. (2016) demonstrating that attractive facial proportions trigger responses in brain areas linked to rewards, such as the orbitofrontal cortex and amygdala [36]. Computer models have also been used to assess facial attractiveness, utilizing these ratios to generate attractiveness scores [27,37,38]. For instance, Schmid et al. (2008) developed a model using a feature vector of 77 putative ratios, showing a significant correlation with human attractiveness ratings [27]. Deep neural networks (DNNs) have furthered this field by learning higher-level features from vast numbers of face images, leading to more accurate predictions of facial attractiveness [39,40,41]. Unlike traditional methods that use handcrafted features, DNNs autonomously learn from data, as seen in Rothe et al. (2016), who used thousands of images and millions of internet ratings to train a convolutional neural network (CNN) for this purpose [41]. Interestingly, recent research has noted parallels between DNNs and biological vision systems in their operational patterns [42,43,44,45,46,47,48,49]. For example, Cichy et al. (2016a) found that DNNs mirror the stages of human visual processing in time and space, suggesting that these networks could provide insights into biological visual perception [44]. However, most studies focus on object recognition, with only a few, like those by McCurrie et al. (2018) and Parde et al. (2019), exploring DNNs’ implicit feature representations in understanding high-level perception, such as facial attractiveness [50,51].
Various brain regions work together to recognize and respond to faces, underlining their significance in human communication and survival. Humans identify faces remarkably fast, taking only about 70 milliseconds post-stimulus [52]. Quick assessments of traits like trustworthiness and aggressiveness also happen within the first 100 milliseconds [53]. Research shows that even infants exhibit a preference for faces over other stimuli from as young as two months [54], with a bias for faces of their own race appearing at three months [55]. Face recognition involves analyzing both three-dimensional structures and surface reflections [56], and studies support a shape-based approach to facial processing [57]. This early development and rapid judgment process highlight the role of facial recognition in human interaction and survival, emphasizing its significance in how humans, a visually and socially oriented species, interact with the world. People often quickly make assumptions about someone’s trustworthiness based on facial appearance [15,58,59], even though these impressions may not accurately predict actual behavior [60]. The environment influences these judgments by shaping exposure to certain facial feature distributions [61,62]. However, this facial recognition expertise is less developed for computer-generated faces, as they are less frequently encountered in everyday life [63].
Understanding what drives people to form and sustain long-term romantic relationships is a complex issue with significant repercussions for individuals, families, and society at large. Previous research has identified a wide range of similarities between romantic partners, including physical, physiological, demographic, and psychological characteristics [64,65,66,67,68,69,70]. Two primary sets of mechanisms have been proposed to account for this phenomenon: initial selection based on similarity through mechanisms like homophily [71,72], dating market dynamics [73], or social homogamy [70], and the development or maintenance of similarity over time due to shared experiences and environments [5,74]. However, it appears that while couples tend to be similar from the outset, they do not grow more similar over time [75].
Prevailing studies on romantic relationships often emphasize facial symmetry as a factor in partner selection and marital satisfaction. Given these intriguing but inconclusive findings, our study aims to explore a hitherto underexamined facet of romantic relationships: the relationship between facial dissimilarity and partnership duration. We leverage advanced artificial intelligence methods, specifically machine learning and deep learning techniques, on a comprehensive dataset of celebrity facial images to provide new insights into this complex issue.

2. Materials and Methods

2.1. Data Source

The primary dataset employed in this study was the CELEB-A dataset, generously provided by the Multimedia Lab at The Chinese University of Hong Kong for non-commercial research purposes [76]. This extensive dataset consists of 202,599 original web facial images of celebrities, colloquially referred to as “In-The-Wild” images. The celebrities included in the CELEB-A dataset were sourced from a diverse array of countries and regions across the globe, ensuring a wide-ranging representation of facial attributes and partnership dynamics. These images were further processed by aligning and cropping, thereby yielding 202,599 face images suitable for analysis. The dataset encompasses 10,177 unique identity labels, corresponding to 10,177 distinct celebrities. In instances where multiple images were available for a given celebrity, a single image was selected at random for analysis. Additionally, it features meticulously annotated boundary box coordinates for key facial features—left eye, right eye, nose, left mouth region, and right mouth region.
To enrich the dataset and make it more pertinent to the study’s objectives, a rigorous web search was undertaken by two independent researchers to identify celebrities within the CELEB-A dataset who were reported to be in a romantic partnership or marriage. Search engines employed for this information retrieval included Google and DuckDuckGo. The dataset already included unique identifiers (names) and associated image IDs for each celebrity, aiding in accurate data retrieval and association.
Partnerships were eligible for inclusion only if reliable information could be garnered regarding the duration of the partnership, spanning from the date of first reporting up until May 2023. Only heterosexual couples were considered. Instances where the relationship or marital status was confirmed but no corresponding duration data were available were excluded from the dataset. For couples reported as married, the duration was calculated from the initial reporting of the partnership, even though the marriage itself may have been equal in length to or shorter than the overall partnership.
A structured dataset was then compiled, incorporating several variables:
  • Name: Identifier for each celebrity.
  • image_id_1 and image_id_2: The image IDs corresponding to the two partners.
  • Duration_of_Partnership_in_months_until_2023: The calculated duration of each partnership.
  • Married/Partnership: A binary indicator representing whether the partnership was a marriage (coded as 1) or a non-marital partnership (coded as 0).
Subsequent to the data extraction and augmentation processes, the Married/Partnership column was mapped to a numerical format to facilitate computational analysis. This curated dataset served as the foundation for all subsequent analyses.
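For illustration, a minimal pandas sketch of this encoding step is shown below; the file name is a hypothetical placeholder, while the column names follow the variable list above.

```python
import pandas as pd

# Hypothetical file name; columns follow the variable list above.
df = pd.read_excel("celebrity_couples.xlsx")

# Map the marital-status column to a binary indicator:
# 1 = marriage, 0 = non-marital partnership.
df["Married/Partnership"] = df["Married/Partnership"].map({"Married": 1, "Partner": 0})

print(df[["Name", "image_id_1", "image_id_2",
          "Duration_of_Partnership_in_months_until_2023",
          "Married/Partnership"]].head())
```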

2.2. Deep Learning-Based Analysis of Dissimilarity

The analytical pipeline began with meticulous data preprocessing. Our data source was the structured dataset described above, stored as an Excel file containing the paths and IDs of the facial images. From this file, a list of image pairs of partners, along with their corresponding IDs, was generated for further processing.
To prepare the images for the deep learning model, multiple preprocessing steps were executed. Initially, images were read from their file paths, decoded, and resized to uniform dimensions (e.g., 218 × 178 pixels). Pixel values were then normalized to a range of [−1, 1] to facilitate the computational efficiency of the neural network. To augment the dataset and introduce robustness to the model, we applied a series of random transformations including contrast adjustment, brightness variation, and rotation. We also implemented a denormalization function to revert pixel values back to the original range [0, 255] for potential visualization.
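The following TensorFlow sketch illustrates these preprocessing steps under stated assumptions: the augmentation ranges are ours (the paper does not report them), and arbitrary-angle rotation would require an additional layer beyond tf.image.

```python
import tensorflow as tf

def preprocess(path, training=True):
    """Read, decode, resize, and normalize one face image to [-1, 1]."""
    img = tf.io.read_file(path)
    img = tf.io.decode_jpeg(img, channels=3)
    img = tf.image.resize(img, [218, 178])            # uniform dimensions
    img = tf.cast(img, tf.float32) / 127.5 - 1.0      # normalize to [-1, 1]
    if training:                                      # random augmentations
        img = tf.image.random_contrast(img, 0.8, 1.2)
        img = tf.image.random_brightness(img, 0.1)
        # Arbitrary-angle rotation is not part of tf.image; a small random
        # rotation could be added with tf.keras.layers.RandomRotation.
    return img

def denormalize(img):
    """Revert pixel values from [-1, 1] back to [0, 255] for visualization."""
    return tf.cast((img + 1.0) * 127.5, tf.uint8)
```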
Our study employed the DenseNet architecture to assess image dissimilarity. The architecture begins with a convolutional layer equipped with 2k filters (twice the growth rate k) and a 7 × 7 kernel size, optionally succeeded by max pooling. Subsequent layers are organized into dense blocks, each featuring multiple dense units comprising batch normalization, convolutional layers, and the concatenation of feature maps. These dense blocks are bridged by transition layers consisting of batch normalization, convolution, and average pooling. The network concludes with a global average pooling layer to reduce the spatial dimensions, followed by a logits layer, a fully connected layer that outputs the final classification scores.
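A minimal Keras sketch of this topology follows; the growth rate, block sizes, and the decision to expose the pooled embedding (used below for dissimilarity) rather than the classification logits are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def dense_unit(x, growth_rate):
    """One dense unit: batch normalization, convolution, then concatenation
    of the new feature maps onto the existing ones."""
    y = layers.BatchNormalization()(x)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(growth_rate, 3, padding="same")(y)
    return layers.Concatenate()([x, y])

def transition(x):
    """Transition layer bridging dense blocks: BN, 1x1 conv, average pooling."""
    x = layers.BatchNormalization()(x)
    x = layers.Conv2D(int(x.shape[-1]) // 2, 1)(x)
    return layers.AveragePooling2D(2)(x)

def densenet_embedding(inputs, growth_rate=12, blocks=(4, 4, 4)):
    # Initial convolution with 2k filters and a 7x7 kernel, then max pooling.
    x = layers.Conv2D(2 * growth_rate, 7, strides=2, padding="same")(inputs)
    x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
    for i, n_units in enumerate(blocks):
        for _ in range(n_units):
            x = dense_unit(x, growth_rate)
        if i < len(blocks) - 1:
            x = transition(x)
    # Global average pooling collapses the spatial dimensions; the resulting
    # vector serves as the embedding compared between partners.
    return layers.GlobalAveragePooling2D()(x)

inputs = tf.keras.Input(shape=(218, 178, 3))
embedding_model = tf.keras.Model(inputs, densenet_embedding(inputs))
```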
Training the DenseNet model involved the use of a contrastive loss function designed to quantify dissimilarity between paired images. We calculated the Euclidean distance between the high-dimensional embeddings of each image pair. This dissimilarity was then employed to compute the contrastive loss, incorporating a margin to distinctly separate dissimilar pairs.
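A sketch of such a loss is shown below, assuming the standard contrastive formulation with a unit margin (the paper does not report its margin value).

```python
import tensorflow as tf

def contrastive_loss(emb_a, emb_b, label, margin=1.0):
    """Contrastive loss over a batch of embedding pairs.
    label = 1 marks pairs that should be close, 0 marks pairs that should be
    separated by at least `margin` (margin value assumed, not reported)."""
    # Euclidean distance between the high-dimensional embeddings of each pair;
    # this distance is the dissimilarity score used in the analyses.
    dist = tf.sqrt(tf.reduce_sum(tf.square(emb_a - emb_b), axis=1) + 1e-9)
    loss = label * tf.square(dist) + \
           (1.0 - label) * tf.square(tf.maximum(margin - dist, 0.0))
    return tf.reduce_mean(loss), dist
```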
The training regimen utilized a cyclical learning rate, dynamically adjusting the learning rate and momentum across training cycles for optimized performance. At the onset, the DenseNet architecture was initialized. Following this, the test dataset was prepared in a manner consistent with the training data preprocessing steps. The model was then employed for inference on this test dataset. During this phase, contrastive loss and mean dissimilarity values were computed for the pairs. Lastly, both images and their dissimilarity scores were merged with the initial dataset for further statistical analysis.
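The cyclical schedule could, for example, be realized as a Keras callback implementing a triangular policy; the learning-rate bounds and step size below are illustrative assumptions, not the paper's values.

```python
import numpy as np
import tensorflow as tf

class TriangularCyclicalLR(tf.keras.callbacks.Callback):
    """Triangular cyclical learning-rate policy (after Smith, 2017).
    base_lr, max_lr, and step_size here are illustrative assumptions."""
    def __init__(self, base_lr=1e-4, max_lr=1e-2, step_size=2000):
        super().__init__()
        self.base_lr, self.max_lr, self.step_size = base_lr, max_lr, step_size
        self.iteration = 0

    def on_train_batch_begin(self, batch, logs=None):
        cycle = np.floor(1 + self.iteration / (2 * self.step_size))
        x = np.abs(self.iteration / self.step_size - 2 * cycle + 1)
        lr = self.base_lr + (self.max_lr - self.base_lr) * max(0.0, 1 - x)
        self.model.optimizer.learning_rate.assign(lr)  # update optimizer LR
        self.iteration += 1
```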

2.3. Machine and Deep Learning-Based Prediction of Relationship Duration

To predict the duration of relationships, a comprehensive dataset, including dissimilarity values, was imported into a DataFrame using the pandas library in Python. The categorical variable ‘Married/Partnership’ was encoded into numerical format (‘Married’: 1, ‘Partner’: 0). We transformed the target variable, ‘Duration_of_Partnership_in_months_until_2023’, using a logarithmic transformation to improve model fit. The feature matrix, X, was constructed from the ‘Married/Partnership’ and dissimilarity value columns. Polynomial features were generated to explore nonlinear relationships.
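A minimal sketch of this feature construction is given below; the file name and the dissimilarity column name are hypothetical, and log1p is used here as a safe variant of the logarithmic transformation the paper reports.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import PolynomialFeatures

df = pd.read_excel("couples_with_dissimilarity.xlsx")   # hypothetical file
df["Married/Partnership"] = df["Married/Partnership"].map({"Married": 1,
                                                           "Partner": 0})

# Log-transform the target; log1p keeps very short durations well behaved.
y = np.log1p(df["Duration_of_Partnership_in_months_until_2023"])

# Feature matrix: marital status plus the whole-face dissimilarity score
# (the column name "dissimilarity" is an assumption).
X = df[["Married/Partnership", "dissimilarity"]].to_numpy()
X_poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
```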
A 5-Fold Cross-Validation scheme was utilized for model assessment. This involved dividing the dataset into five equal parts, using each part in turn for testing while training on the remainder. We evaluated four models: Linear Regression, Ridge Regression (with alpha values of 0.1, 1.0, and 10.0), Support Vector Regressor (SVM) with a linear kernel, and a Random Forest Regressor. The Random Forest model underwent hyperparameter tuning using GridSearchCV, optimizing parameters such as the number of trees, tree depth, and minimum samples for splits and leaf nodes (Table 1). A manual grid search was applied to the other models. Additionally, a simple deep-learning model was developed using TensorFlow’s Keras API. This model consisted of an input layer, one hidden layer with 64 neurons, a second hidden layer with 32 neurons, and an output layer. The Rectified Linear Unit (ReLU) activation function was applied, and the model was compiled using the Adam optimizer and mean squared error (MSE) as the loss function. The neural network was trained for 10 epochs for each fold in the cross-validation.
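The cross-validation loop might look like the following sketch; the Random Forest grid shown is an illustrative subset, not the full grid of Table 1.

```python
import tensorflow as tf
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVR

models = {
    "linear": LinearRegression(),
    "ridge": Ridge(alpha=1.0),           # alpha searched over {0.1, 1.0, 10.0}
    "svm": SVR(kernel="linear"),
    "forest": GridSearchCV(              # illustrative subset of the grid
        RandomForestRegressor(random_state=0),
        {"n_estimators": [100, 300], "max_depth": [3, 5, None]}, cv=3),
}

def build_nn(n_features):
    nn = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(n_features,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    nn.compile(optimizer="adam", loss="mse")
    return nn

for train_idx, test_idx in KFold(n_splits=5, shuffle=True,
                                 random_state=0).split(X_poly):
    X_tr, X_te = X_poly[train_idx], X_poly[test_idx]
    y_tr, y_te = y.iloc[train_idx], y.iloc[test_idx]
    for name, model in models.items():
        pred = model.fit(X_tr, y_tr).predict(X_te)
        print(name, mean_squared_error(y_te, pred), r2_score(y_te, pred))
    nn = build_nn(X_tr.shape[1])
    nn.fit(X_tr, y_tr, epochs=10, verbose=0)      # 10 epochs per fold
    print("nn", r2_score(y_te, nn.predict(X_te).ravel()))
```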
For each fold and each model, evaluation metrics including Mean Squared Error (MSE) and R-squared (R2) score were computed. These metrics were averaged over all folds to provide an overall measure of model performance. R2 quantifies the proportion of variance in the partnership duration that is accounted for by our predictive models. To provide a meaningful context for the R2 values reported, we calculated a baseline R2 score by predicting the mean partnership duration. The baseline model uses this constant mean value as its prediction for all instances in the test data, regardless of the input features. The baseline R2 score was calculated over the same test data used for the more complex models. This baseline model represents a simplistic approach that does not consider any facial features but instead predicts the average partnership duration for all couples. We then compared the R2 values obtained by our machine learning models to this baseline R2 value. A model with an R2 value close to or lower than the baseline suggests that it provides predictions similar to those of a model predicting the mean partnership duration, indicating limited explanatory power. Conversely, if the model’s R2 value significantly surpasses the baseline R2, it implies that the model contributes substantially to explaining the variation in partnership duration beyond what a simple average prediction can achieve.
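Concretely, the baseline can be computed per fold as in this short sketch (continuing the fold variables from the loop above):

```python
import numpy as np
from sklearn.metrics import r2_score

# Baseline: predict the mean duration for every couple in the test fold,
# ignoring all input features.
baseline_pred = np.full(len(y_te), y_tr.mean())
print("baseline R^2:", r2_score(y_te, baseline_pred))
```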

2.4. Landmark-Based Subanalyses

In addition to the aforementioned primary analyses, we conducted landmark-based subanalyses to further scrutinize the impact of facial features on relationship duration and similarity metrics. These subanalyses leveraged the boundary box coordinates provided in the CELEB A dataset to focus on key facial landmarks—specifically, the left eye, right eye, nose, left mouth region, and right mouth region. The boundary boxes corresponding to each of these landmarks were used to crop the original facial images, isolating each facial feature for separate analysis. This step aimed to investigate whether certain facial landmarks might carry a disproportionate influence on the perceived similarity and, potentially, the duration of relationships.
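A hypothetical sketch of this cropping step is shown below; the annotation file layout and column names are assumptions for illustration, as CELEB-A's actual annotation files are formatted differently.

```python
import pandas as pd
from PIL import Image

# Hypothetical annotation layout: one row per image with a box per landmark.
boxes = pd.read_csv("landmark_boxes.csv", index_col="image_id")

def crop_landmark(image_path, image_id, landmark="left_eye"):
    """Isolate one facial region (e.g. the left eye) for separate analysis."""
    x, y, w, h = boxes.loc[image_id, [f"{landmark}_x", f"{landmark}_y",
                                      f"{landmark}_w", f"{landmark}_h"]]
    return Image.open(image_path).crop((x, y, x + w, y + h))
```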

2.5. Statistical Analysis

All aforementioned analyses were performed using Python for computational tasks. The finalized dataset, which incorporated both whole-face and landmark-specific dissimilarity values, was subsequently imported into SPSS version 27.0 (IBM Corp., Armonk, NY, USA) for in-depth statistical evaluation. For continuous variables, mean values and 95% confidence intervals (95% CI) were computed. For categorical variables, frequency counts and corresponding percentages were tabulated. To assess the distributional properties of the data, the Shapiro–Wilk test was employed to test for normality. For examining associations between variables, Spearman’s rank-order correlation was utilized. Between-group comparisons involving continuous variables—specifically, duration of partnership and dissimilarity values—were conducted using the Mann–Whitney U test. A significance level of p < 0.05 was considered statistically significant for all statistical tests.
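For readers reproducing the analysis outside SPSS, equivalent tests are available in SciPy; the sketch below assumes the DataFrame and column names from the prediction pipeline above.

```python
from scipy import stats

# SciPy equivalents of the SPSS tests described above.
w, p_normal = stats.shapiro(df["dissimilarity"])              # normality
rho, p_rho = stats.spearmanr(                                 # association
    df["dissimilarity"],
    df["Duration_of_Partnership_in_months_until_2023"])
married = df.loc[df["Married/Partnership"] == 1, "dissimilarity"]
partner = df.loc[df["Married/Partnership"] == 0, "dissimilarity"]
u, p_u = stats.mannwhitneyu(married, partner, alternative="two-sided")
```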

3. Results

3.1. Comparative Analyses

A total of 1822 celebrity couples were included in the analyses. The overall mean duration of partnership for the entire cohort was 108.75 months (95% CI: 103.75–113.74), with a median of 84 months. For the partnership group, the mean duration was 83.27 months (95% CI: 76.66–89.88), with a median of 48 months; for the married group, it was 133.51 months (95% CI: 126.38–140.65), with a median of 108 months (p < 0.001) (Figure 1).
The analysis revealed varying degrees of dissimilarity across different facial regions, both for partners in partnerships and married couples. Overall, the highest mean dissimilarity value was observed for the whole face (mean = 1.88, 95% CI: 1.85–1.91), followed by the left eye, right eye, nose, right mouth region, and left mouth region (Table 2 and Figure 2). Within partnerships, the highest dissimilarity was noted for the whole face (mean = 1.90, 95% CI: 1.86–1.94). Conversely, married couples exhibited slightly lower mean dissimilarity values for the whole face (mean = 1.86, 95% CI: 1.82–1.90). The differences in dissimilarity values between partnerships and married couples were not statistically significant for any facial region (p > 0.05 for all). Specifically, the p-values ranged from 0.071 for the right mouth region to 0.848 for the left mouth region, indicating no significant divergence in facial features between the two groups across the regions examined. This lack of statistical significance casts doubt on the capacity of these facial regions to predict the duration or type of partnership. Therefore, while the data offer a comprehensive overview of facial dissimilarity across different relationship statuses, they do not support a predictive relationship between facial dissimilarity and partnership duration.
The Spearman’s rho correlation coefficients indicate weak and statistically non-significant associations between the duration of partnership and dissimilarity values across various facial regions (Table 3 and Figure 3). Specifically, for the whole face images, the correlation coefficient was −0.045 with a p-value of 0.055, suggesting a lack of significant correlation. Notably, none of the facial regions demonstrated a significant correlation with the duration of the partnership. The correlation coefficients ranged from −0.045 for the whole face to −0.010 for the right mouth region, all of which are statistically non-significant (p > 0.05).
Intriguingly, some facial regions exhibited weak but statistically significant correlations with each other. For instance, dissimilarity values for the left eye and the whole face had a correlation coefficient of 0.129 (p < 0.001), and the nose and right eye regions showed a correlation coefficient of 0.179 (p < 0.001). These suggest some level of correlation among facial features but do not point to a meaningful relationship with the duration of the partnership.
Additionally, when the data were segmented into married couples and those in a partnership, no significant correlations were observed for either group. For married couples, Spearman’s rho was −0.055 (p = 0.094), and for those in partnerships, Spearman’s rho was −0.021 (p = 0.520). Overall, these findings suggest that facial dissimilarity, whether evaluated for the whole face or for individual landmark regions, does not serve as a robust predictor for the duration of partnerships.

3.2. Prediction Modelling

In our comprehensive approach to predicting the duration of partnerships, we employed a diverse set of machine learning and deep learning models, including Linear Regression, Ridge Regression, Random Forest, Support Vector Machine (SVM), and a Neural Network. These models were evaluated on their Mean Squared Error (MSE) and R-squared (R2) values, calculated through a 5-Fold Cross-Validation process.
For the whole-face analysis, Ridge Regression emerged as the most effective model, with a mean MSE of 1.124 and a mean R2 of 0.0623, closely followed by Linear Regression (mean MSE of 1.128, mean R2 of 0.0587). The Neural Network achieved a mean MSE of 1.172 and a mean R2 of 0.0227. The Random Forest model, after extensive hyperparameter tuning, yielded a mean MSE of 1.198 and a mean R2 of 0.0011, while SVM performed consistently, recording a mean MSE of 1.162 and a mean R2 of 0.0296.
In the left eye region, the models displayed similar performance patterns. The Neural Network reported a mean MSE of 1.115 and a mean R2 of 0.0684, while Linear Regression and Ridge Regression showed closely matched results, with mean MSEs of 1.126 and 1.122 and mean R2 values of 0.0599 and 0.0631, respectively. Random Forest and SVM exhibited mean MSEs of 1.179 and 1.172 and mean R2 values of 0.0164 and 0.0215, respectively. Analyses of the left mouth, nose, right eye, and right mouth regions exhibited a similar trend: Linear Regression and Ridge Regression consistently presented the lowest MSE and highest R2 values, underlining their robustness across different facial areas, and the Neural Network, despite some variability in performance, remained competitive. Random Forest improved after hyperparameter optimization, although its predictive power remained limited.
The results demonstrate a notable variance in the effectiveness of the models across different facial regions, suggesting that the relationship between facial features and the duration of partnerships might be complex, with each region contributing differently to the predictive models. Table 4 presents a detailed summary of the results, including the mean MSE and mean R2 values for each model across all the facial regions under study.
These findings shed light on the predictive power of facial features regarding partnership duration. While the R2 values across the various models generally exceed the baseline model’s R2 of 0.0025, they remain modest. This indicates that our models, including Linear Regression, Ridge Regression, and the Neural Network, do capture more variance in partnership duration than a simple average prediction; however, the overall low R2 values suggest that the relationship between facial features and partnership duration is complex and not easily modeled. Linear Regression and Ridge Regression consistently outperformed the other models, hinting at their suitability for this type of data, but even their modest R2 values imply that while facial features provide some predictive power, they do not fully explain the variation in partnership duration.
The relatively higher performance of these models compared to the baseline underscores their ability to leverage facial feature data, albeit within the constraints of the data’s inherent complexity. Random Forest and Neural Network models, while surpassing the baseline, showed limited predictive power. This might reflect the challenges in modeling the nuanced relationships in the data or could indicate that the current feature set and model architectures are not fully capturing the underlying patterns. The variations in R2 values across different facial regions also highlight the diverse contributions of these regions to predicting partnership duration. In summary, the results demonstrate a nuanced relationship between facial features and partnership duration. The exceedance of the baseline R2 by most models validates the relevance of facial features in predicting partnership duration, yet the overall low R2 values across models suggest a complex interplay that is not entirely captured by the current modeling approach.

4. Discussion

The present study is the first to harness the potential of artificial intelligence-based imaging analysis to explore facial dissimilarities in the context of romantic relationships. Utilizing a comprehensive dataset that incorporated images from 1822 celebrity couples, we sought to investigate the correlation between facial dissimilarity and the duration of romantic partnerships. Our analyses did not reveal significant associations between these variables. Interestingly, our analysis also found no statistically significant difference in facial dissimilarity between married and non-married couples, indicating that facial characteristics might have limited predictive utility in distinguishing these types of partnerships. While the analyses, employing a range of machine learning and deep learning models, indicated varying degrees of predictive effectiveness, they notably revealed that certain models, particularly Linear Regression and Ridge Regression, exhibited a reasonable level of predictive ability. This suggests a nuanced relationship between facial features, partnership status, and relationship duration, although the overall predictability remains moderate. These findings were consistently observed across different facial regions, indicating that while facial dissimilarity metrics offer some predictive insights, their capacity to definitively determine relationship duration is limited.
The study stands out for its innovative application of artificial intelligence in analyzing romantic relationships. Employing AI-based imaging analysis, particularly leveraging the DenseNet architecture, allows for a level of precision and scale that is novel in the field of relationship studies. The methodology might serve as a foundation for future endeavors aiming to merge computational science and relationship psychology. However, the initial results suggest that while the technology can analyze facial features with high accuracy, the theoretical underpinnings linking facial dissimilarity to relationship duration may need to be reconsidered.
The roots of the phenomenon of assortative mating are still unclear. Some explanations encompass the idea of selecting mates with similar genetic makeup for evolutionary advantages (like enhanced fitness and communication) [7], the possibility of seeking partners who resemble oneself due to narcissistic tendencies [1], and even the notion that over time, individuals may develop similar facial features through prolonged exposure to their spouse’s facial expressions [5]. In contrast to the present results, previous studies have demonstrated that couples tend to have more facial similarities than people who are not in a romantic relationship. Additionally, it has been suggested that the degree of facial similarity between partners can be used to predict how satisfied they are in their marriage [1,2,3,4,7,74]. The findings of previous studies are based on the analysis of a small number of photographs or pictures and subjective ratings of similarity. The present study stands out as it utilized a large amount of data and objective artificial intelligence-based analysis of similarities.
The majority of earlier studies on facial similarity collected ratings from independent participant judges, where, for example, the participants were asked to rate the given face in comparison to target faces using a 4-point scale [77]. In a study conducted by Milord [78], participants rated pairs of faces on a 7-point scale to assess their similarity or difference. Harmon [79] created an early computer-based face recognition system by relying on ratings of face descriptors. Typically, these ratings of similarity were obtained globally: the participants were instructed to rate faces on a single scale, ranging from “not at all similar” to “very similar”. Researchers have not investigated whether the terms “highly similar” and “easily mistakable” are synonymous or correlated, nor have they systematically examined the underlying dimensions that influence judgments of similarity. The psychometric properties of these similarity ratings are also infrequently reported, despite indications that this is a significant omission. For instance, Lindsay [80] noted that facial similarity judgments exhibit considerable interparticipant variability.
In other studies on assortative mating in humans, the primary focus was on identifying correlations between various anthropometric traits in couples. For instance, Spuhler’s [81] research involved taking 43 physical measurements from 205 married couples. Notably, he observed substantial positive correlations in 29 of these measurements, which included 7 out of 15 facial or cranial measurements. Griffiths and Kunz [2] conducted a study in which they captured photographs of married couples. They then instructed participants to correctly pair these photos with their actual partners from a limited selection of faces. Interestingly, participants were able to match couples who had been married for less than 10 years and those married for over 20 years at levels exceeding random chance. However, when it came to couples married for between ten and twenty years, the participants could not successfully make the connections.
In another investigation by Hinsz [3], the focus was on examining facial resemblance in genuine couples. This study involved photographs of engaged couples and couples who had been married for at least 25 years. Participants were presented with pairs of opposite-sex photos and asked to assess the similarity between the two faces. Half of these pairs were actual couples, while the other half consisted of randomly generated couples. The findings revealed that real couples were consistently rated as significantly more alike in appearance compared to randomly generated couples. It is worth noting that unlike the study conducted by Zajonc et al. [74], the duration of the couples’ relationships did not lead to differences in how similar they were perceived to be. Consequently, studies on facial similarity suggest that couples are generally seen as more facially similar to each other than would be expected by chance, although the connection between the length of a partnership and facial similarity has not been investigated until now.
A study by Little [4] did not find that individuals in longer marriages exhibit greater physical resemblance in terms of height, weight, perceived attractiveness, masculinity, or distinctiveness. Nevertheless, a significant trend emerged indicating that partners appeared more alike in terms of personality traits the longer they had been married. The authors suppose that the increase in partner similarity over time arises because the individuals grow more alike in perceived personality as their time together lengthens, as suggested by Zajonc et al. [5]. This may occur because shared expressions and experiences become visible in their facial features. Alternatively, individuals who already exhibit similarity in personality traits may have a propensity to sustain longer marriages. Selecting a partner who is similar to oneself may enhance marital stability, as exemplified by Hill’s [82] discovery that couples who shared various physical and psychological traits were more likely to stay together compared to dissimilar partners.
There are some limitations associated with the present study. The dataset employed consisted of celebrity couples, which may limit the generalizability of the results to broader populations. The CELEB-A dataset nonetheless offers a unique combination of diversity and scale: it encompasses a wide range of individuals, from less-known to well-known celebrities, providing a broad spectrum of facial features for analysis, and its size allows for robust statistical analysis. One of the critical aspects of our study is the availability of detailed personal information, such as marital/relationship status, partner information, and relationship duration. Such data are readily accessible and verifiable for celebrities through in-depth web searches, which is often not the case with the general population. Utilizing the CELEB-A dataset therefore represents the most feasible and practical first step in examining our research question. The alternative approach, involving the collection and analysis of similar data from the general population (“ordinary people”, “non-celebrities”), presents significant logistical and ethical challenges, and defining what constitutes “ordinary” is itself subjective and can introduce additional biases.
This study represents a novel application of the CELEB-A dataset in exploring the relationship between facial features and partnership dynamics. By using a dataset not previously applied in this manner, we are breaking new ground in this area of research. Our findings, indicating no association within the dataset used, provide valuable insights that can guide future resource allocation in research. This initial exploration sets the stage for subsequent studies, which may include more diverse and representative samples, to further investigate the research question. While the generalizability of findings from a celebrity sample to the general population is a valid concern, our primary objective was to explore whether there is an observable association in this particular dataset; the findings can inform and refine future research that includes more representative samples.
Additionally, our analysis focused on dissimilarity metrics alone; other variables such as personality traits, shared interests, or social factors were not considered. These could be incorporated into future multi-modal analyses to create a more comprehensive model of relationship duration. Given the nascent stage of AI application in this domain, there is ample room for further fine-tuning and adaptation of the methods used. Another limitation is the cross-sectional nature of the dataset, which does not capture the temporal dynamics of relationships. Future studies could aim for a longitudinal approach, tracking couples over time to gather more nuanced data.
The present study contributes to the scientific field in multiple ways. It leverages advanced artificial intelligence techniques, specifically deep learning and machine learning, to analyze a substantial dataset of celebrity facial images. This approach marks a significant methodological advancement, moving beyond the subjective assessments traditionally used in facial resemblance studies. To our knowledge, this is the first study to apply AI-based techniques to this question: although there are studies that apply AI techniques to investigate social behavior, none have utilized these techniques to explore the relationship between facial features and aspects of relationships. We believe this represents a valid scientific inquiry, particularly considering that previous studies have only examined this topic using subjective rating methods, without objectively analyzing facial features.
The use of the CELEB-A dataset, known for its size and diversity, lays a solid foundation for our analysis and enables us to derive more generalizable insights than would be possible with the smaller, less varied samples obtained in usual settings; this study used the most extensive dataset yet applied to this research question. Our research challenges the conventional wisdom of facial resemblance among partners, providing a fresh viewpoint on partner selection criteria. The exploration of the correlation between facial dissimilarity and relationship duration adds a new dimension to the understanding of relationship dynamics. The findings could contribute to sociological and psychological discourses, particularly in theories related to mate selection and relationship psychology, and can influence the broader understanding of interpersonal relationships.
We employed a range of robust analytical techniques, including various predictive models and statistical tests, ensuring the reliability of our findings. The comprehensive examination of facial features, covering different facial regions, adds depth and granularity to our analysis. There was no subjective rating of facial features that could bias the examination, which is a significant advancement in this field. While our study focuses on a specific dataset, it is intended as a stepping stone for future research in this area; we believe our work opens the door to a more extensive exploration of physical attributes in relationship dynamics. The interdisciplinary nature of our study, straddling artificial intelligence, psychology, and sociology, underscores its potential to inspire further research across these fields.
It is also worth highlighting the innovative application of our methods in the context of this research. While individual models such as deep learning and machine learning algorithms are well established in various fields, their application in studying the correlation between facial features and relationship dynamics is novel. This study not only integrates a diverse set of sophisticated techniques, ranging from neural networks to machine learning algorithms like Linear Regression, Ridge Regression, Random Forest, and SVM, but also adapts and fine-tunes them to analyze a unique aspect of human relationships. The use of deep learning for nuanced facial feature analysis in relation to relationship dynamics is particularly novel, demonstrating the potential of these techniques in uncovering new insights in social science research.
This interdisciplinary approach, blending advanced computational methods with relationship science, marks a significant step forward in exploring uncharted territories of human interactions. While it is acknowledged that the research question may be perceived as aligning with ‘popular science’ due to its focus on a subject matter often associated with emotional interpretations, we assert that investigating the potential associations between facial features, dissimilarity, and partnership dynamics through objective measurements constitutes a legitimate scientific inquiry. The potential for a topic to be interpreted within the realm of popular science does not diminish its validity nor the necessity of exploring it through rigorous, objective methodologies.

5. Conclusions

Our pioneering approach establishes an initial framework for harnessing artificial intelligence-based imaging analysis in exploring romantic relationships. While the predictive utility of facial dissimilarity in determining relationship duration was not conclusively supported, our study paves the way for future research and methodological innovations at the nexus of AI, facial recognition technology, and relationship studies. This multidisciplinary endeavor, despite its preliminary limitations, unveils promising opportunities for a more comprehensive and nuanced understanding of the dynamics within relationships. In conclusion, our study highlights the intricacies involved in predicting relationship outcomes solely based on facial attributes, as the results do not provide substantial evidence to support a direct correlation between facial dissimilarity and partnership duration. This underscores the intricate nature of partnership dynamics, suggesting that other nuanced factors may play a more crucial role in determining the course of relationships.

Author Contributions

Conceptualization, V.S.; Data curation, V.S. and B.S.; Formal analysis, V.S., A.V., C.S., M.V., G.M.L. and B.S.; Investigation, A.V. and M.V.; Methodology, V.S. and B.S.; Project administration, C.S. and G.M.L.; Resources, C.S.; Supervision, C.S. and G.M.L.; Validation, A.V., M.V. and G.M.L.; Writing—original draft, V.S. and B.S.; Writing—review and editing, A.V., C.S., M.V. and G.M.L. All authors have read and agreed to the published version of the manuscript.

Funding

The article processing charge was funded by the Baden-Wuerttemberg Ministry of Science, Research and Art, and the University of Freiburg in the funding program Open Access Publishing.

Data Availability Statement

We used an online dataset (CelebA), which is available for non-commercial research purposes from the Multimedia Laboratory of The Chinese University of Hong Kong, China. The imaging data and landmark boundary information are available from https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html (accessed on 26 December 2023). The Python code and algorithm structures are available from https://github.com/Freiburg-AI-Research (accessed on 26 December 2023).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Alvarez, L.; Jaffe, K. Narcissism Guides Mate Selection: Humans Mate Assortatively, as Revealed by Facial Resemblance, Following an Algorithm of “self Seeking Like”. Evol. Psychol. 2004, 2, 177–194. [Google Scholar] [CrossRef]
  2. Griffiths, R.W.; Kunz, P.R. Assortative Mating: A Study of Physiognomic Homogamy. Soc. Biol. 1973, 20, 448–453. [Google Scholar] [CrossRef] [PubMed]
  3. Hinsz, V.B. Facial Resemblance in Engaged and Married Couples. J. Soc. Pers. Relat. 1989, 6, 223–229. [Google Scholar] [CrossRef]
  4. Little, A.C.; Burt, D.M.; Perrett, D.I. Assortative Mating for Perceived Facial Personality Traits. Pers. Individ. Dif. 2006, 40, 973–984. [Google Scholar] [CrossRef]
  5. Zajonc, R.B.; Adelmann, P.K.; Murphy, S.T.; Niedenthal, P.M. Convergence in the Physical Appearance of Spouses. Motiv. Emot. 1987, 11, 335–346. [Google Scholar] [CrossRef]
  6. Wong, Y.K.; Wong, W.W.; Lui, K.F.H.; Wong, A.C.-N. Revisiting Facial Resemblance in Couples. PLoS ONE 2018, 13, e0191456. [Google Scholar] [CrossRef]
  7. Thiessen, D.; Gregg, B. Human Assortative Mating and Genetic Equilibrium: An Evolutionary Perspective. Ethol. Sociobiol. 1980, 1, 111–140. [Google Scholar] [CrossRef]
  8. Jiahui, G.; Feilong, M.; Visconti Di Oleggio Castello, M.; Nastase, S.A.; Haxby, J.V.; Gobbini, M.I. Modeling Naturalistic Face Processing in Humans with Deep Convolutional Neural Networks. Proc. Natl. Acad. Sci. USA 2023, 120, e2304085120. [Google Scholar] [CrossRef]
  9. Schrimpf, M.; Kubilius, J.; Hong, H.; Majaj, N.J.; Rajalingham, R.; Issa, E.B.; Kar, K.; Bashivan, P.; Prescott-Roy, J.; Geiger, F. Brain-Score: Which Artificial Neural Network for Object Recognition Is Most Brain-Like? BioRxiv 2018, 407007. [Google Scholar] [CrossRef]
  10. Yamins, D.L.K.; Hong, H.; Cadieu, C.F.; Solomon, E.A.; Seibert, D.; DiCarlo, J.J. Performance-Optimized Hierarchical Models Predict Neural Responses in Higher Visual Cortex. Proc. Natl. Acad. Sci. USA 2014, 111, 8619–8624. [Google Scholar] [CrossRef]
  11. Dobs, K.; Martinez, J.; Kell, A.J.; Kanwisher, N. Brain-like Functional Specialization Emerges Spontaneously in Deep Neural Networks. Sci. Adv. 2022, 8, eabl8913. [Google Scholar] [CrossRef]
  12. Grossman, S.; Gaziv, G.; Yeagle, E.M.; Harel, M.; Mégevand, P.; Groppe, D.M.; Khuvis, S.; Herrero, J.L.; Irani, M.; Mehta, A.D.; et al. Convergent Evolution of Face Spaces across Human Face-Selective Neuronal Groups and Deep Convolutional Networks. Nat. Commun. 2019, 10, 4934. [Google Scholar] [CrossRef]
  13. Ratan Murty, N.A.; Bashivan, P.; Abate, A.; DiCarlo, J.J.; Kanwisher, N. Computational Models of Category-Selective Brain Regions Enable High-Throughput Tests of Selectivity. Nat. Commun. 2021, 12, 5540. [Google Scholar] [CrossRef]
  14. Park, S.H.; Russ, B.E.; McMahon, D.B.T.; Koyano, K.W.; Berman, R.A.; Leopold, D.A. Functional Subpopulations of Neurons in a Macaque Face Patch Revealed by Single-Unit fMRI Mapping. Neuron 2017, 95, 971–981.e5. [Google Scholar] [CrossRef]
  15. Oosterhof, N.N.; Todorov, A. The Functional Basis of Face Evaluation. Proc. Natl. Acad. Sci. USA 2008, 105, 11087–11092. [Google Scholar] [CrossRef]
  16. Todorov, A.; Said, C.P.; Engell, A.D.; Oosterhof, N.N. Understanding Evaluation of Faces on Social Dimensions. Trends Cogn. Sci. 2008, 12, 455–460. [Google Scholar] [CrossRef]
  17. Visconti di Oleggio Castello, M.; Halchenko, Y.O.; Guntupalli, J.S.; Gors, J.D.; Gobbini, M.I. The Neural Representation of Personally Familiar and Unfamiliar Faces in the Distributed System for Face Perception. Sci. Rep. 2017, 7, 12237. [Google Scholar] [CrossRef]
  18. Visconti di Oleggio Castello, M.; Haxby, J.V.; Gobbini, M.I. Shared Neural Codes for Visual and Semantic Information about Familiar Faces in a Common Representational Space. Proc. Natl. Acad. Sci. USA 2021, 118, e2110474118. [Google Scholar] [CrossRef] [PubMed]
  19. Ramon, M.; Gobbini, M.I. Familiarity Matters: A Review on Prioritized Processing of Personally Familiar Faces. Vis. Cogn. 2018, 26, 179–195. [Google Scholar] [CrossRef]
  20. Carlin, J.D.; Calder, A.J.; Kriegeskorte, N.; Nili, H.; Rowe, J.B. A Head View-Invariant Representation of Gaze Direction in Anterior Superior Temporal Sulcus. Curr. Biol. 2011, 21, 1817–1821. [Google Scholar] [CrossRef]
  21. Hoffman, E.A.; Haxby, J.V. Distinct Representations of Eye Gaze and Identity in the Distributed Human Neural System for Face Perception. Nat. Neurosci. 2000, 3, 80–84. [Google Scholar] [CrossRef] [PubMed]
  22. Pashos, A.; Niemitz, C. Results of an Explorative Empirical Study on Human Mating in Germany: Handsome Men, Not High-Status Men, Succeed in Courtship. Anthropol. Anz. 2003, 61, 331–341. [Google Scholar] [CrossRef] [PubMed]
  23. Frieze, I.H.; Olson, J.E.; Russell, J. Attractiveness and Income for 680 Men and Women in Management 1. J. Appl. Soc. Psychol. 1991, 21, 1039–1057. [Google Scholar] [CrossRef]
  24. Henderson, J.J.; Anglin, J.M. Facial Attractiveness Predicts Longevity. Evol. Hum. Behav. 2003, 24, 351–356. [Google Scholar] [CrossRef]
  25. Perrett, D.I.; May, K.A.; Yoshikawa, S. Facial Shape and Judgements of Female Attractiveness. Nature 1994, 368, 239. [Google Scholar] [CrossRef] [PubMed]
  26. Rubenstein, A.J.; Langlois, J.H.; Roggman, L.A. What Makes a Face Attractive and Why: The Role of Averageness in Defining Facial Beauty; Rhodes, G., Zebrowitz, L.A., Eds.; Ablex Publishing: Westport, CT, USA, 2002. [Google Scholar]
  27. Schmid, K.; Marx, D.; Samal, A. Computation of a Face Attractiveness Index Based on Neoclassical Canons, Symmetry, and Golden Ratios. Pattern Recognit. 2008, 41, 2710–2717. [Google Scholar] [CrossRef]
  28. Jayaratne, Y.S.; Deutsch, C.K.; McGrath, C.P.; Zwahlen, R.A. Are Neoclassical Canons Valid for Southern Chinese Faces? PLoS ONE 2012, 7, e52593. [Google Scholar] [CrossRef]
  29. Borissavlievitch, M.; Hautecœr, L. The Golden Number and the Scientific Aesthetics of Architecture; Alec Tiranti Ltd.: London, UK, 1958. [Google Scholar]
  30. Jefferson, Y. Facial Beauty-Establishing a Universal Standard. Int. J. Orthod. 2004, 15, 9–26. [Google Scholar]
  31. Farkas, L.G.; Schendel, S.A. Anthropometry of the Head and Face. Plast. Reconstr. Surg. 1995, 96, 480. [Google Scholar]
  32. Farkas, L.G.; Kolar, J.C. Anthropometrics and Art in the Aesthetics of Women’s Faces. Clin. Plast. Surg. 1987, 14, 599–616. [Google Scholar] [CrossRef] [PubMed]
  33. Pallett, P.M.; Link, S.; Lee, K. New “Golden” Ratios for Facial Beauty. Vis. Res. 2010, 50, 149–154. [Google Scholar] [PubMed]
  34. Bóo, F.L.; Rossi, M.A.; Urzúa, S.S. The Labor Market Return to an Attractive Face: Evidence from a Field Experiment. Econ. Lett. 2013, 118, 170–172. [Google Scholar]
  35. Holland, E. Marquardt’s Phi Mask: Pitfalls of Relying on Fashion Models and the Golden Ratio to Describe a Beautiful Face. Aesthetic Plast. Surg. 2008, 32, 200–208. [Google Scholar] [CrossRef]
  36. Shen, H.; Chau, D.K.; Su, J.; Zeng, L.-L.; Jiang, W.; He, J.; Fan, J.; Hu, D. Brain Responses to Facial Attractiveness Induced by Facial Proportions: Evidence from an fMRI Study. Sci. Rep. 2016, 6, 35905. [Google Scholar] [CrossRef]
  37. Gunes, H.; Piccardi, M. Assessing Facial Beauty through Proportion Analysis by Image Processing and Supervised Learning. Int. J. 2006, 64, 1184–1199. [Google Scholar] [CrossRef]
  38. Chen, F.; Zhang, D. Evaluation of the Putative Ratio Rules for Facial Beauty Indexing; IEEE: Piscataway, NJ, USA, 2014. [Google Scholar]
  39. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  40. Wang, S.; Shao, M.; Fu, Y. Attractive or Not?: Beauty Prediction with Attractiveness-Aware Encoders and Robust Late Fusion. In Proceedings of the 22nd ACM International Conference on Multimedia, Orlando, FL, USA, 3–7 November 2014. [Google Scholar]
  41. Rothe, R.; Timofte, R.; Van Gool, L. Some Like It Hot: Visual Guidance for Preference Prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 5553–5561. [Google Scholar]
  42. Cadieu, C.F.; Hong, H.; Yamins, D.L.; Pinto, N.; Ardila, D.; Solomon, E.A.; Majaj, N.J.; DiCarlo, J.J. Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition. PLoS Comput. Biol. 2014, 10, e1003963. [Google Scholar] [CrossRef]
  43. Yamins, D.L.; DiCarlo, J.J. Using Goal-Driven Deep Learning Models to Understand Sensory Cortex. Nat. Neurosci. 2016, 19, 356. [Google Scholar] [CrossRef]
  44. Cichy, R.M.; Pantazis, D.; Oliva, A. Similarity-Based Fusion of MEG and fMRI Reveals Spatio-Temporal Dynamics in Human Cortex during Visual Object Recognition. Cereb. Cortex 2016, 26, 3563–3579. [Google Scholar] [CrossRef] [PubMed]
  45. Cichy, R.M.; Khosla, A.; Pantazis, D.; Torralba, A.; Oliva, A. Comparison of Deep Neural Networks to Spatio-Temporal Cortical Dynamics of Human Visual Object Recognition Reveals Hierarchical Correspondence. Sci. Rep. 2016, 6, 27755. [Google Scholar] [CrossRef]
  46. Wang, P.; Cottrell, G.W. Central and Peripheral Vision for Scene Recognition: A Neurocomputational Modeling Exploration. J. Vis. 2017, 17, 9. [Google Scholar] [CrossRef]
  47. Seeliger, K.; Fritsche, M.; Güçlü, U.; Schoenmakers, S.; Schoffelen, J.-M.; Bosch, S.; Gerven, M. Convolutional Neural Network-Based Encoding and Decoding of Visual Object Recognition in Space and Time. NeuroImage 2018, 180, 253–266. [Google Scholar] [CrossRef] [PubMed]
  48. O’Toole, A.J.; Castillo, C.D.; Parde, C.J.; Hill, M.Q.; Chellappa, R. Face Space Representations in Deep Convolutional Neural Networks. Trends Cogn. Sci. 2018, 22, 794–809. [Google Scholar] [CrossRef]
  49. Kietzmann, T.C.; Spoerer, C.J.; Sörensen, L.K.; Cichy, R.M.; Hauk, O.; Kriegeskorte, N. Recurrence Is Required to Capture the Representational Dynamics of the Human Visual System. Proc. Natl. Acad. Sci. USA 2019, 116, 21854–21863. [Google Scholar] [CrossRef] [PubMed]
  50. McCurrie, M.; Beletti, F.; Parzianello, L.; Westendorp, A.; Anthony, S.; Scheirer, W.J. Convolutional Neural Networks for Subjective Face Attributes. Image Vis. Comput. 2018, 78, 14–25. [Google Scholar] [CrossRef]
  51. Parde, C.J.; Hu, Y.; Castillo, C.; Sankaranarayanan, S.; O’Toole, A.J. Social Trait Information in Deep Convolutional Neural Networks Trained for Face Identification. Cogn. Sci. 2019, 43, e12729. [Google Scholar] [CrossRef]
  52. Nemrodov, D.; Niemeier, M.; Mok, J.N.Y.; Nestor, A. The Time Course of Individual Face Recognition: A Pattern Analysis of ERP Signals. NeuroImage 2016, 132, 469–476. [Google Scholar] [CrossRef]
  53. Willis, J.; Todorov, A. First Impressions: Making Up Your Mind After a 100-Ms Exposure to a Face. Psychol. Sci. 2006, 17, 592–598. [Google Scholar] [CrossRef] [PubMed]
  54. Collins, J.A.; Olson, I.R. Beyond the FFA: The Role of the Ventral Anterior Temporal Lobes in Face Processing. Neuropsychologia 2014, 61, 65–79. [Google Scholar] [CrossRef]
  55. Liu, S.; Quinn, P.C.; Wheeler, A.; Xiao, N.; Ge, L.; Lee, K. Similarity and Difference in the Processing of Same- and Other-Race Faces as Revealed by Eye Tracking in 4- to 9-Month-Olds. J. Exp. Child Psychol. 2011, 108, 180–189. [Google Scholar] [CrossRef]
  56. Jiang, F.; Blanz, V.; Rossion, B. Holistic Processing of Shape Cues in Face Identification: Evidence from Face Inversion, Composite Faces, and Acquired Prosopagnosia. Vis. Cogn. 2011, 19, 1003–1034. [Google Scholar] [CrossRef]
  57. Riesenhuber, M.; Jarudi, I.; Gilad, S.; Sinha, P. Face Processing in Humans Is Compatible with a Simple Shape–Based Model of Vision. Proc. R. Soc. Lond. B 2004, 271 (Suppl. S6), S448–S450. [Google Scholar] [CrossRef]
  58. Jones, B.C.; DeBruine, L.M.; Flake, J.K.; Liuzza, M.T.; Antfolk, J.; Arinze, N.C.; Ndukaihe, I.L.G.; Bloxsom, N.G.; Lewis, S.C.; Foroni, F.; et al. To Which World Regions Does the Valence-Dominance Model of Social Perception Apply? Nat. Hum. Behav. 2021, 5, 159–169. [Google Scholar] [CrossRef]
  59. Todorov, A.; Pakrashi, M.; Oosterhof, N.N. Evaluating Faces on Trustworthiness After Minimal Time Exposure. Soc. Cogn. 2009, 27, 813–833. [Google Scholar] [CrossRef]
  60. Todorov, A.; Olivola, C.Y.; Dotsch, R.; Mende-Siedlecki, P. Social Attributions from Faces: Determinants, Consequences, Accuracy, and Functional Significance. Annu. Rev. Psychol. 2015, 66, 519–545. [Google Scholar] [CrossRef] [PubMed]
  61. Dotsch, R.; Hassin, R.R.; Todorov, A. Statistical Learning Shapes Face Evaluation. Nat. Hum. Behav. 2016, 1, 0001. [Google Scholar] [CrossRef]
  62. Ng, W.-J.; Lindsay, R.C.L. Cross-Race Facial Recognition: Failure of the Contact Hypothesis. J. Cross-Cult. Psychol. 1994, 25, 217–232. [Google Scholar] [CrossRef]
  63. Crookes, K.; Ewing, L.; Gildenhuys, J.; Kloth, N.; Hayward, W.G.; Oxner, M.; Pond, S.; Rhodes, G. How Well Do Computer-Generated Faces Tap Face Expertise? PLoS ONE 2015, 10, e0141353. [Google Scholar] [CrossRef]
  64. Luo, S. Assortative Mating and Couple Similarity: Patterns, Mechanisms, and Consequences. Soc. Pers. Psychol. Compass 2017, 11, e12337. [Google Scholar] [CrossRef]
  65. Watson, D.; Klohnen, E.C.; Casillas, A.; Nus Simms, E.; Haig, J.; Berry, D.S. Match Makers and Deal Breakers: Analyses of Assortative Mating in Newlywed Couples. J. Personal. 2004, 72, 1029–1068. [Google Scholar] [CrossRef]
  66. Buss, D.M. Marital Assortment for Personality Dispositions: Assessment with Three Different Data Sources. Behav. Genet. 1984, 14, 111–123. [Google Scholar] [CrossRef]
  67. Schwartz, C.R.; Graf, N.L. Assortative Matching among Same-Sex and Different-Sex Couples in the United States, 1990–2000. Demogr. Res. 2009, 21, 843. [Google Scholar] [CrossRef] [PubMed]
  68. Robinson, M.R.; Kleinman, A.; Graff, M.; Vinkhuyzen, A.A.; Couper, D.; Miller, M.B.; Peyrot, W.J.; Abdellaoui, A.; Zietsch, B.P.; Nolte, I.M. Genetic Evidence of Assortative Mating in Humans. Nat. Hum. Behav. 2017, 1, 0016. [Google Scholar] [CrossRef]
  69. Vandenberg, S.G. Assortative Mating, or Who Marries Whom? Behav. Genet. 1972, 2, 127–157. [Google Scholar] [CrossRef]
  70. Epstein, E.; Guttman, R. Mate Selection in Man: Evidence, Theory, and Outcome. Soc. Biol. 1984, 31, 243–278. [Google Scholar] [CrossRef]
  71. Hitsch, G.J.; Hortaçsu, A.; Ariely, D. What Makes You Click?—Mate Preferences in Online Dating. Quant. Mark. Econ. 2010, 8, 393–427. [Google Scholar] [CrossRef]
  72. Watson, D.; Beer, A.; McDade-Montez, E. The Role of Active Assortment in Spousal Similarity. J. Pers. 2014, 82, 116–129. [Google Scholar] [CrossRef]
  73. Xie, Y.; Cheng, S.; Zhou, X. Assortative Mating without Assortative Preference. Proc. Natl. Acad. Sci. USA 2015, 112, 5974–5978. [Google Scholar] [CrossRef]
  74. Zajonc, R.B. Emotion and Facial Efference: A Theory Reclaimed. Science 1985, 228, 15–21. [Google Scholar] [CrossRef] [PubMed]
  75. Tea-makorn, P.P.; Kosinski, M. Spouses’ Faces Are Similar but Do Not Become More Similar with Time. Sci. Rep. 2020, 10, 17001. [Google Scholar] [CrossRef]
  76. Liu, Z.; Luo, P.; Wang, X.; Tang, X. Deep Learning Face Attributes in the Wild. arXiv 2014, arXiv:1411.7766. [Google Scholar] [CrossRef]
  77. Bruce, V. Stability from Variation: The Case of Face Recognition. The M.D. Vernon Memorial Lecture. Q. J. Exp. Psychol. A 1994, 47, 5–28. [Google Scholar] [CrossRef] [PubMed]
  78. Milord, J.T. Aesthetic Aspects of Faces: A (Somewhat) Phenomenological Analysis Using Multidimensional Scaling Methods. J. Pers. Soc. Psychol. 1978, 36, 205–216. [Google Scholar] [CrossRef]
  79. Harmon, L.D. The Recognition of Faces. Sci. Am. 1973, 229, 71–82. [Google Scholar] [CrossRef]
  80. Lindsay, R.C.L. Biased Lineups: Where Do They Come From? In Adult Eyewitness Testimony; Ross, D.F., Read, J.D., Toglia, M.P., Eds.; Cambridge University Press: Cambridge, UK, 1994; pp. 182–200. ISBN 978-0-521-43255-9. [Google Scholar]
  81. Spuhler, J.N. Assortative Mating with Respect to Physical Characteristics. Eugen. Q. 1968, 15, 128–140. [Google Scholar] [CrossRef]
  82. Hill, C.T.; Rubin, Z.; Peplau, L.A. Breakups Before Marriage: The End of 103 Affairs. J. Soc. Issues 1976, 32, 147–168. [Google Scholar] [CrossRef]
Figure 1. Comparative analysis of partnership duration (in months) among unmarried and married couples (p < 0.001). Values more than 1.5 × the interquartile range (IQR) below Q1 or above Q3 are represented by circles; values more than 3.0 × IQR below Q1 or above Q3 are represented by asterisks.
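The circle/asterisk convention in Figure 1 corresponds to Tukey's standard fences for mild and extreme outliers. As a minimal sketch of how these thresholds can be computed (NumPy assumed; the function and variable names are illustrative, not the authors' code):

```python
import numpy as np

def tukey_outlier_flags(durations: np.ndarray):
    """Flag mild (>1.5 x IQR) and extreme (>3.0 x IQR) outliers,
    matching the circle/asterisk convention of Figure 1."""
    q1, q3 = np.percentile(durations, [25, 75])
    iqr = q3 - q1
    mild = (durations < q1 - 1.5 * iqr) | (durations > q3 + 1.5 * iqr)
    extreme = (durations < q1 - 3.0 * iqr) | (durations > q3 + 3.0 * iqr)
    return mild, extreme
```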
Figure 2. Comparative analysis of dissimilarity values in whole-face images and landmark regions among unmarried and married couples.
Figure 3. Scatter plots depicting the relationship between partnership duration (in months) and facial dissimilarity values in whole-face images and landmark regions for the entire cohort, differentiating between married and unmarried couples.
Table 1. Models and Hyperparameters.
Model | Hyperparameter | Values Considered
Random Forest | Number of Trees | [50, 100, 400, 700]
Random Forest | Maximum Tree Depth | [None, 10, 20, 40, 60]
Random Forest | Min Samples for Split | [2, 5, 10, 20, 30]
Random Forest | Min Samples for Leaf | [1, 2, 4, 7, 9]
Support Vector Machine (SVM) | Regularization Strength (C) | [0.1, 1, 10]
Support Vector Machine (SVM) | Kernel Type | ['linear', 'rbf', 'poly']
Support Vector Machine (SVM) | Kernel Coefficient (Gamma) | ['scale', 'auto', 0.1, 1]
Linear Regression | – | –
Ridge Regression | Regularization Strength (Alpha) | [0.1, 1, 10]
Deep Learning | Number of Epochs | [10, 20, 30]
Deep Learning | Batch Size | [32, 64, 128]
Deep Learning | Number of Hidden Units | [32, 64, 128]
Deep Learning | Learning Rate | [0.001, 0.01, 0.1]
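The grids in Table 1 map directly onto a standard grid search. Below is a minimal sketch using scikit-learn's GridSearchCV; the estimator instances, 5-fold setting, and R2 scoring are assumptions for illustration, not the authors' exact pipeline. The deep learning grid (epochs, batch size, hidden units, learning rate) would be searched separately in a neural network framework and is omitted here.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

# (estimator, hyperparameter grid) pairs mirroring Table 1; Linear
# Regression has no tunable hyperparameters and is fit directly.
SEARCH_SPACES = {
    "Random Forest": (RandomForestRegressor(), {
        "n_estimators": [50, 100, 400, 700],
        "max_depth": [None, 10, 20, 40, 60],
        "min_samples_split": [2, 5, 10, 20, 30],
        "min_samples_leaf": [1, 2, 4, 7, 9],
    }),
    "SVM": (SVR(), {
        "C": [0.1, 1, 10],
        "kernel": ["linear", "rbf", "poly"],
        "gamma": ["scale", "auto", 0.1, 1],
    }),
    "Ridge Regression": (Ridge(), {"alpha": [0.1, 1, 10]}),
}

def tune_models(X, y):
    """Grid-search each model; X = facial dissimilarity features,
    y = partnership duration."""
    best = {}
    for name, (estimator, grid) in SEARCH_SPACES.items():
        search = GridSearchCV(estimator, grid, scoring="r2", cv=5)
        best[name] = search.fit(X, y).best_estimator_
    return best
```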
Table 2. Mean dissimilarity values and medians for various facial regions. p-values represent the statistical comparison between unmarried (partnership) and married couples.
Facial Region | Overall Mean (95% CI) | Overall Median | Partnership (Unmarried) Mean (95% CI) | Partnership Median | Married Mean (95% CI) | Married Median | p-Value
Whole Face | 1.88 (1.85–1.91) | 1.97 | 1.90 (1.86–1.94) | 1.99 | 1.86 (1.82–1.90) | 1.95 | 0.319
Left Eye | 0.93 (0.91–0.95) | 0.86 | 0.91 (0.88–0.94) | 0.84 | 0.94 (0.91–0.98) | 0.89 | 0.194
Right Eye | 0.90 (0.88–0.92) | 0.85 | 0.91 (0.88–0.94) | 0.86 | 0.89 (0.86–0.92) | 0.82 | 0.449
Nose | 0.86 (0.84–0.88) | 0.80 | 0.87 (0.84–0.90) | 0.82 | 0.85 (0.82–0.88) | 0.78 | 0.374
Right Mouth Region | 0.74 (0.72–0.76) | 0.68 | 0.72 (0.70–0.75) | 0.66 | 0.75 (0.73–0.78) | 0.70 | 0.071
Left Mouth Region | 0.66 (0.65–0.68) | 0.61 | 0.66 (0.64–0.68) | 0.61 | 0.67 (0.65–0.69) | 0.62 | 0.848
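Table 2 does not restate the test behind its p-values. Assuming a two-sided nonparametric comparison (consistent with the reported medians and the boxplot conventions of Figure 1), a sketch of the per-region group comparison could look as follows; the choice of the Mann-Whitney U test here is an assumption, not confirmed by the table itself:

```python
from scipy.stats import mannwhitneyu

def compare_region(unmarried_vals, married_vals):
    """Two-sided comparison of dissimilarity values between unmarried
    (partnership) and married couples for one facial region.
    NOTE: Mann-Whitney U is an assumed test; Table 2 does not restate
    which test produced its p-values."""
    stat, p = mannwhitneyu(unmarried_vals, married_vals,
                           alternative="two-sided")
    return stat, p
```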
Table 3. Spearman’s rho correlation coefficients and p-values for the relationship between duration of partnership and facial dissimilarity values in different facial regions.
Variable | Statistic | Duration of Partnership | Whole Face | Left Eye | Right Eye | Nose | Left Mouth | Right Mouth
Duration of partnership | rho | 1.000 | −0.045 | 0.020 | 0.006 | −0.026 | −0.007 | −0.010
Duration of partnership | p-value | – | 0.055 | 0.392 | 0.797 | 0.262 | 0.780 | 0.657
Dissimilarity value, whole face | rho | −0.045 | 1.000 | 0.129 ** | 0.048 * | 0.049 * | −0.001 | −0.027
Dissimilarity value, whole face | p-value | 0.055 | – | 0.000 | 0.042 | 0.036 | 0.963 | 0.255
Dissimilarity value, left eye | rho | 0.020 | 0.129 ** | 1.000 | 0.037 | 0.010 | 0.055 * | 0.040
Dissimilarity value, left eye | p-value | 0.392 | 0.000 | – | 0.115 | 0.666 | 0.019 | 0.092
Dissimilarity value, right eye | rho | 0.006 | 0.048 * | 0.037 | 1.000 | 0.179 ** | −0.005 | 0.011
Dissimilarity value, right eye | p-value | 0.797 | 0.042 | 0.115 | – | 0.000 | 0.846 | 0.625
Dissimilarity value, nose | rho | −0.026 | 0.049 * | 0.010 | 0.179 ** | 1.000 | −0.004 | 0.019
Dissimilarity value, nose | p-value | 0.262 | 0.036 | 0.666 | 0.000 | – | 0.868 | 0.427
Dissimilarity value, left mouth | rho | −0.007 | −0.001 | 0.055 * | −0.005 | −0.004 | 1.000 | 0.244 **
Dissimilarity value, left mouth | p-value | 0.780 | 0.963 | 0.019 | 0.846 | 0.868 | – | 0.000
Dissimilarity value, right mouth | rho | −0.010 | −0.027 | 0.040 | 0.011 | 0.019 | 0.244 ** | 1.000
Dissimilarity value, right mouth | p-value | 0.657 | 0.255 | 0.092 | 0.625 | 0.427 | 0.000 | –
Note: Asterisks indicate statistical significance. ** Correlation is significant at the 0.01 level (2-tailed). * Correlation is significant at the 0.05 level (2-tailed).
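A correlation matrix of this shape is straightforward to reproduce. A minimal sketch with SciPy and pandas follows; the DataFrame column names are hypothetical placeholders, not the study's variable names:

```python
import pandas as pd
from scipy.stats import spearmanr

# One row per couple; column names are hypothetical placeholders.
COLS = ["duration", "whole_face", "left_eye", "right_eye",
        "nose", "left_mouth", "right_mouth"]

def spearman_matrix(df: pd.DataFrame):
    """Pairwise Spearman's rho and two-tailed p-values, as in Table 3."""
    rho, p = spearmanr(df[COLS])  # each result is a 7 x 7 array
    return (pd.DataFrame(rho, index=COLS, columns=COLS),
            pd.DataFrame(p, index=COLS, columns=COLS))
```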
Table 4. Results of prediction modeling using Random Forest, Linear Regression, Ridge Regression, Support Vector Machine (SVM), and a deep learning (neural network) algorithm. Mean MSE: mean squared error averaged over all folds. Mean R2: R2 averaged over all folds.
Feature | Algorithm | Mean MSE | Mean R2
Whole Face | Neural Network | 1.172 | 0.0227
Whole Face | Linear Regression | 1.128 | 0.0587
Whole Face | Ridge Regression | 1.124 | 0.0623
Whole Face | Random Forest | 1.198 | 0.0011
Whole Face | SVM | 1.162 | 0.0296
Left Eye | Neural Network | 1.115 | 0.0684
Left Eye | Linear Regression | 1.126 | 0.0599
Left Eye | Ridge Regression | 1.122 | 0.0631
Left Eye | Random Forest | 1.179 | 0.0164
Left Eye | SVM | 1.172 | 0.0215
Left Mouth | Neural Network | 1.149 | 0.0410
Left Mouth | Linear Regression | 1.127 | 0.0599
Left Mouth | Ridge Regression | 1.124 | 0.0617
Left Mouth | Random Forest | 1.168 | 0.0254
Left Mouth | SVM | 1.168 | 0.0247
Nose | Neural Network | 1.156 | 0.0358
Nose | Linear Regression | 1.127 | 0.0595
Nose | Ridge Regression | 1.124 | 0.0619
Nose | Random Forest | 1.166 | 0.0255
Nose | SVM | 1.169 | 0.0231
Right Eye | Neural Network | 1.150 | 0.0388
Right Eye | Linear Regression | 1.138 | 0.0505
Right Eye | Ridge Regression | 1.132 | 0.0549
Right Eye | Random Forest | 1.173 | 0.0218
Right Eye | SVM | 1.176 | 0.0184
Right Mouth | Neural Network | 1.165 | 0.0307
Right Mouth | Linear Regression | 1.128 | 0.0592
Right Mouth | Ridge Regression | 1.125 | 0.0614
Right Mouth | Random Forest | 1.169 | 0.0262
Right Mouth | SVM | 1.169 | 0.0235
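The mean MSE and mean R2 values in Table 4 are averages over cross-validation folds. A minimal sketch of such an evaluation loop follows; the fold count, shuffling, and random seed are assumptions, and X/y stand for one region's dissimilarity features and the partnership duration:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import KFold

def cv_scores(model, X, y, n_splits=5):
    """Mean MSE and mean R2 over folds, as reported in Table 4.
    NOTE: n_splits=5 is an assumption; the fold count is not
    restated in this section."""
    mses, r2s = [], []
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in kf.split(X):
        model.fit(X[train], y[train])
        pred = model.predict(X[test])
        mses.append(mean_squared_error(y[test], pred))
        r2s.append(r2_score(y[test], pred))
    return float(np.mean(mses)), float(np.mean(r2s))

# Example usage (names hypothetical):
# mse, r2 = cv_scores(Ridge(alpha=1.0), X_whole_face, y_duration)
```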
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
