
Automatic classification of human facial features based on their appearance

  • Felix Fuentes-Hurtado,

    Roles Formal analysis, Investigation, Methodology, Software, Validation, Writing – review & editing

    Affiliation I3B - Institute for Research and Innovation in Bioengineering, Universitat Politècnica de València, Valencia, Spain

  • Jose A. Diego-Mas,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Software, Validation, Writing – original draft, Writing – review & editing

    jodiemas@dpi.upv.es

    Affiliation I3B - Institute for Research and Innovation in Bioengineering, Universitat Politècnica de València, Valencia, Spain

  • Valery Naranjo,

    Roles Investigation, Methodology, Resources, Supervision, Writing – review & editing

    Affiliation I3B - Institute for Research and Innovation in Bioengineering, Universitat Politècnica de València, Valencia, Spain

  • Mariano Alcañiz

    Roles Investigation, Methodology, Resources, Supervision, Writing – review & editing

    Affiliation I3B - Institute for Research and Innovation in Bioengineering, Universitat Politècnica de València, Valencia, Spain

Abstract

Classification or typology systems used to categorize different human body parts have existed for many years. Nevertheless, there are very few taxonomies of facial features. Ergonomics, forensic anthropology, crime prevention, new human-machine interaction systems and online activities, like e-commerce, e-learning, games, dating or social networks, are fields in which classifications of facial features are useful, for example, to create digital interlocutors that optimize the interactions between humans and machines. However, classifying isolated facial features is difficult for human observers. Previous works reported low inter-observer and intra-observer agreement in the evaluation of facial features. This work presents a computer-based procedure to automatically classify facial features based on their global appearance. This procedure deals with the difficulties associated with classifying features using judgements from human observers, and facilitates the development of taxonomies of facial features. Taxonomies obtained through this procedure are presented for eyes, mouths and noses.

Introduction

Humans have especially developed their perceptual capacity to process faces and to extract information from facial features [1,2]. Our brain has a specialized neural network for processing facial information [3] that allows us to identify people, their gender, age, and race, or even to judge their emotions. Using our behavioral capacity to perceive faces, we make attributions such as personality, intelligence or trustworthiness based on facial appearance [4]. Therefore, faces play a central role in our relationships with other people and in our everyday decisions [5,6].

For centuries, artists and researchers have tried to develop procedures to measure and classify human faces. Anthropometric facial analysis is used in different fields like surgery [7–9], forensic science [10–12], art [13,14], face recognition [15], emotion recognition [16], and facial feature judgments [17–20]. In recent decades, new technologies have opened up ways to automatically evaluate facial features and gestures, and computational methods for the analysis of facial information are now applied to classify faces based on anthropometric or emotional criteria [21].

Classification or typology systems used to categorize different human body parts have existed for many years. In 1940, William Sheldon developed somatotypes to describe the constitution of an individual. Sheldon proposed a classification system in which all possible body types were characterized based on the degree to which they matched these somatotypes [22]. Other taxonomies have been developed for the shape of the body [23,24], hands [25], feet [26] or head [27]. Taxonomies, as classification systems, allow us to use a common terminology to define body part configurations while providing a standardized way to describe them, and are widely used in many fields such as ergonomics and biomechanics [28,29], criminalistics [12], sports [30,31], medicine [32], design or the apparel industry [23]. In general, these kinds of typology systems are intended for qualitative categorization based on the global appearance of body parts, although, in some cases, a quantitative analysis of some selected features is performed to obtain the classification.

In the case of facial features, taxonomies are useful, for example, in ergonomics, forensic anthropology, crime prevention, human-machine interaction or online activities. E-commerce, e-learning, games, dating or social networks are fields in which classifications of facial features are needed. In these activities it is common to use human digital representations that symbolize the user’s presence or that act as a virtual interlocutor [33]. The importance of the communicative behaviors of avatars in new interaction systems [34–37] has led to an increasing interest in creating realistic avatars able to convey appropriate sensations to users. In this context, it is common to synthesize faces and facial expressions by combining facial features [38–41].

Several taxonomies of facial features can be found in the literature. For example, Vanezis’s atlas [42] classifies 23 facial features, the Disaster Victim Identification Form (DVI) by Interpol categorizes 6, and the DVM database [43,44] categorizes 45 facial traits. In [45], different shapes of the human nose are classified into 14 groups based on the analysis of 1,793 pictures of noses. A similar approach was used to classify human chins [46]. In these works, a large set of photographs was analyzed and classified based on the similarity of the features.

This approach, while intuitively logical, has several problems, not only in the development of taxonomies but also in their subsequent use. The classification of facial features is obtained from the opinion of a limited group of human observers. Classic behavioral work has shown that the human brain integrates facial features into a gestalt whole when it processes facial information (holistic face processing) [47], decreasing our ability to process individual features or parts of faces [48]. This part-whole effect makes it difficult, for example, to recognize familiar faces from isolated features [49–51]. Moreover, individual differences exist in face recognition ability [52], and some factors, like the race of the face, influence the performance in processing features and the configuration of facial information [53,54]. This is reflected in low inter-observer and intra-observer agreement in the evaluation of facial features [12]. Finally, apart from the difficulties of processing parts of faces, creating this kind of taxonomy implies classifying a very large set of elements (the number of possible different features) into an undefined number of groups, and this kind of task easily exceeds our capacity for information processing [55,56]. To deal with these problems, we propose a new procedure to develop taxonomies of facial features based on their appearance, using computational methods to automatically classify the features.

Recently, the analysis of facial images has become a major research topic, and new computational methods for the analysis of facial information have been developed. A comparison of these techniques shows two different approaches to dealing with facial information [19]. The first one (the structural approach) automatically encodes the geometry of faces using several significant points and the relationships between them, carrying out a metric or morphological assessment of facial features [57]. Examples of these kinds of techniques are those based on SIFT feature descriptors [58,59], point distribution models [60,61] or local binary patterns [62–64]. On the other hand, the holistic approach uses appearance-based representations, considering all available information and encompassing the global nature of the faces. Holistic techniques include, for example, fisherfaces [65] or eigenfaces [66]. Some work on facial feature characterization has combined structural and holistic techniques [67].

Classification methods for facial features are needed in order to develop taxonomies. Research using computational methods is usually focused on the characterization of complete faces; less effort has been devoted to classifying facial features based on their appearance. In this work, we use an appearance-based method to obtain a relatively low-dimensional vector of characteristics for facial features. On this basis, large sets of three facial features (noses, mouths, and eyes) of varying ethnicity (Asian, Black, Latino, and White) were characterized. Using this characterization, the features were clustered, obtaining new taxonomies for each ethnic group. The procedure followed avoids the problems related to human limitations in classifying facial features. On the one hand, the characterization and clustering of the features were not based on human judgements. On the other hand, classifying new features into one of the groups of the taxonomies can be done automatically. Finally, the procedure was tested by comparing human opinions with the automatically generated groups of facial features.

The next section describes the preliminary image-processing step used to obtain large sets of facial features from photographs of complete faces. Afterwards, we used eigenfaces to characterize large sets of photographs of three facial features (noses, mouths, and eyes). This holistic technique seems to be more consistent and reliable for categorization than those that imply subjective judgements [19]. The clustering process used to group the features is also described. Next, we present the classifications obtained and the agreement between human judgements and these automatically generated taxonomies. Finally, the results are discussed and conclusions are drawn.

Whole face image preprocessing

Our first objective was to obtain a large database of facial features of different ethnic groups with a neutral expression. Many real face databases are accessible for research purposes [68]; however, to the best of our knowledge, there are no large public databases of real facial features available. Therefore, we developed an algorithm to process images from a whole-face database and to extract images of the facial features.

The available datasets differ in the size and resolution of the images, the pose and orientation of the faces, the uniformity of the background, the illumination, and other important aspects. After reviewing several well-known databases, we selected the Chicago Face Database [69] to extract images of the facial features. After its second revision, this database contains high-resolution standardized images of real faces of Asian, Black, Latino, and White males and females with several expressions (including neutral). A total of 290 images of males with a neutral expression (93 Black, 52 Asian, 52 Latino, and 93 White) were used to create four subsets of face images (one per ethnic group).

The inputs to the facial feature extraction algorithm were the RGB full-face photographs. Initially, the images were converted to gray-scale. Next, the facial landmarks of each feature (eyes, mouth, and nose) were detected and each feature was extracted separately into images of the same size for each feature. To achieve this, the CHEHRA facial key-point detector [70] was used. The outcome was a set of 49 landmarks distributed as shown in Fig 1 (A). Based on these landmarks, a mask for each feature was automatically created (Fig 1 (D)). Using these masks, the part of the image corresponding to each facial feature was separated. The procedure to extract the features from the whole face photographs is detailed as a pseudo-code algorithm in Fig 2.
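As an illustration of this step, the sketch below (written in Python with OpenCV rather than the Matlab implementation used by the authors) builds a polygonal mask from the landmarks of one feature and crops the masked region. The landmark index groups in FEATURE_IDX are hypothetical placeholders; the exact 49-point CHEHRA indexing is not reproduced here.

```python
# Minimal sketch of landmark-based feature extraction (not the authors' code).
import cv2
import numpy as np

# Hypothetical landmark index groups; the real 49-point CHEHRA indexing may differ.
FEATURE_IDX = {"nose": range(10, 19), "right_eye": range(19, 25),
               "left_eye": range(25, 31), "mouth": range(31, 49)}

def extract_feature(gray_img, landmarks, feature):
    """Cut out one facial feature given a gray image and a (49, 2) landmark array."""
    pts = landmarks[list(FEATURE_IDX[feature])].astype(np.int32)
    mask = np.zeros(gray_img.shape, dtype=np.uint8)
    cv2.fillPoly(mask, [pts], 255)                       # polygonal mask of the feature
    masked = cv2.bitwise_and(gray_img, gray_img, mask=mask)
    x, y, w, h = cv2.boundingRect(pts)                   # tight bounding box of the polygon
    return masked[y:y + h, x:x + w]

# Usage (hypothetical file name):
# gray = cv2.cvtColor(cv2.imread("face_001.jpg"), cv2.COLOR_BGR2GRAY)
# nose = extract_feature(gray, landmarks, "nose")
```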

Fig 1. Masks creation for feature extraction.

(a) Landmarks distribution. (b) Mask created from landmarks. (c) Thickened mask. (d) Independent masks for each feature. (e) Right eye. (f) Mirrored left eye. (g) Extracted nose. (h) Original mouth. (i) Shaved mouth.

https://doi.org/10.1371/journal.pone.0211314.g001

Fig 2. Pseudo-code of the algorithm to extract the features from the whole face photographs.

https://doi.org/10.1371/journal.pone.0211314.g002

Once the features of the faces were available in independent files, each family of features (i.e., eyes, noses, and mouths) went through a set of operations. The first process performed on the feature images was an alignment operation. For every feature, a polygon was formed using the previously acquired landmarks and its centroid was computed. Then, all the features were aligned using the previously calculated centroids as reference. After that, the size of the bounding box of the polygon created by the landmarks was computed, and a mask was created to crop all features to the size of the biggest bounding box. In this way, the cropping rectangle fits the feature as tightly as possible, discarding as much skin as possible to avoid noise in the clustering step. This procedure was performed for each kind of feature, obtaining the results shown in Fig 1.
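A minimal sketch of this alignment and cropping step is shown below, assuming each feature image comes with its (x, y) landmark polygon. The array conventions are assumptions rather than the authors' implementation, and border handling near the image edges is omitted for brevity.

```python
# Align features on their landmark-polygon centroids and crop to the largest bounding box.
import numpy as np

def align_and_crop(images, polygons):
    """images: list of 2-D arrays; polygons: list of (N, 2) landmark arrays in (x, y) order."""
    centroids = [poly.mean(axis=0) for poly in polygons]
    extents = np.array([poly.max(axis=0) - poly.min(axis=0) for poly in polygons])
    box_w, box_h = np.ceil(extents.max(axis=0)).astype(int)   # biggest bounding box of the family
    crops = []
    for img, (cx, cy) in zip(images, centroids):
        x0 = int(round(cx - box_w / 2))                        # centroid-centred window
        y0 = int(round(cy - box_h / 2))
        crops.append(img[y0:y0 + box_h, x0:x0 + box_w])        # no edge clamping in this sketch
    return crops
```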

Before saving them as independent files, eyes and mouths required special treatment. On the one hand, two eyes were obtained from each face. Except in very particular cases, one person's eyes are highly symmetrical, and both should be classified in the same group when appearance is used as the criterion to cluster the eyes. Therefore, they can be used as an indicator of the correctness of a clustering process, and we decided to use both eyes of each face. To homogenize the appearance of the eyes, the left-eye images were mirrored horizontally before saving them (Fig 1 (F)). On the other hand, hair around the mouth is common in men. In our first tests we detected that the presence of hair greatly affected the process of grouping the mouths; therefore, we decided to remove the surroundings of the original mouth (Fig 1 (H)), obtaining a “shaved” mouth (Fig 1 (I)).

The procedure followed to “shave” the mouths was as follows: first, the outer landmarks of the mouth were selected to form a polygon. Then, this polygon was enlarged by 5 pixels in every direction to ensure that the whole mouth was included in the mask. Finally, a Gaussian blur filter (sigma = 2) [71] was applied to the mask in order to smooth the transition between the skin and the black background of the image (Fig 1 (I)).
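The snippet below sketches this “shaving” operation under the same assumptions as before (Python instead of the authors' Matlab, hypothetical landmark array): the polygon is dilated by roughly 5 pixels and the mask edge is softened with a Gaussian filter of sigma 2.

```python
# Sketch of the mouth "shaving" step: keep only the (slightly enlarged) mouth polygon.
import cv2
import numpy as np
from scipy.ndimage import gaussian_filter

def shave_mouth(mouth_img, outer_landmarks, margin=5, sigma=2.0):
    """mouth_img: 2-D gray image; outer_landmarks: (N, 2) outer mouth contour points."""
    mask = np.zeros(mouth_img.shape, dtype=np.uint8)
    cv2.fillPoly(mask, [outer_landmarks.astype(np.int32)], 255)
    kernel = np.ones((2 * margin + 1, 2 * margin + 1), np.uint8)
    mask = cv2.dilate(mask, kernel)                                  # enlarge polygon by ~5 px
    soft = gaussian_filter(mask.astype(float) / 255.0, sigma=sigma)  # smooth skin/background edge
    return (mouth_img * soft).astype(mouth_img.dtype)                # black outside the mouth area
```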

Proposed procedure for automatic classification of facial features

At this stage, sets of 290 noses, 290 “shaved” mouths, and 580 eyes (Fig 3) were available. Several techniques could be used for data reduction and feature extraction, and to group the facial features. Holistic models based on principal component analysis, like fisherfaces and eigenfaces, have proved their suitability in face detection, recognition and face judgements, and are currently used in applications in which process speed and resource consumption are critical [72–76]. On the other hand, artificial neural networks, support vector machines and deep learning methods [77,78] are currently able to jointly optimize feature extraction and clustering, yielding better results than sequentially applying them [79].

Fig 3. Dataset of 580 images of eyes obtained using the extraction algorithm.

https://doi.org/10.1371/journal.pone.0211314.g003

Our objective was to develop taxonomies of human facial features in a simple and automated way; therefore, our criteria for selecting the most suitable techniques were efficiency and simplicity. We tested different combinations of procedures: eigenfaces, fisherfaces and autoencoders [80] for feature extraction; hybrid PCA/multilayer perceptron networks and convolutional neural networks for joint feature extraction and clustering; and K-means, G-means [81] and DBScan [82] for clustering. Our initial tests found that the results obtained by sequentially applying eigenfaces and K-means were almost equal to those obtained using more complex processes. Given our criteria of efficiency and simplicity, we finally selected eigenfaces and K-means. Both are well-known techniques that are easy to implement, fast and efficient, and have only a few parameters to tune. As a drawback, eigenfaces is a global appearance method that is less robust to face misalignment and background variations than other procedures. However, in the previous image preprocessing stage, the facial features were aligned and the background removed.

Therefore, eigenfaces were used to characterize each feature of each dataset (we retain the term eigenfaces although the technique was applied to facial features). Finally, the K-Means clustering algorithm [51] was used to cluster the features using their eigenvalues as characteristics.

Using eigenfaces on features.

The eigenfaces approach is a method to efficiently represent pictures of faces by a relatively low-dimensional vector. A principal component analysis can be used on an ensemble of face images to form a set of basis features [83]. These basis images, known as eigenpictures, can be linearly combined to reconstruct images in the original set.

In mathematical terms, the eigenfaces method aims to find the principal components of the distribution of faces, or the eigenvectors of the covariance matrix of the set of face images, treating each image as a vector in a very high-dimensional space. These eigenvectors (or eigenfaces) can be thought of as a set of features that together characterize the variation between images, and they are ordered by the amount of variance they explain. Each individual face can be represented exactly as a linear combination of the eigenfaces, or approximately using only the "best" eigenfaces (those that explain the largest variances, and therefore account for the most variation within the set of images). The best M eigenfaces span an M-dimensional subspace of all possible images. Applying this procedure to each set of features made it possible to characterize each feature by a set of M eigenvalues, reducing the quantity of information used to describe the features. This holistic approach was selected to characterize the features because the objective was to classify them based on their global appearance rather than on their geometrical characteristics (structural approach). This procedure allows us to consider the global appearance of faces while summarizing the central information needed to characterize them.
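The following sketch shows how such an eigenfeature characterization can be computed, assuming every cropped feature image has been resized to a common shape, normalized, and flattened into a row of the data matrix X. The authors used Matlab; this illustration relies on scikit-learn's PCA instead.

```python
# Characterize each feature image by its projection onto the first M eigenfeatures.
from sklearn.decomposition import PCA

def characterize(X, n_components=45):
    """X: (n_samples, n_pixels) matrix of flattened, normalized feature images."""
    pca = PCA(n_components=n_components)
    coeffs = pca.fit_transform(X)                     # M projection coefficients ("eigenvalues") per feature
    explained = pca.explained_variance_ratio_.sum()   # about 0.85 or higher for the datasets in Table 1
    reconstructed = pca.inverse_transform(coeffs)     # approximate images, as in the Fig 4 example
    return pca, coeffs, explained, reconstructed
```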

The eigenfaces method was applied to each subset of facial features. To facilitate the subsequent clustering process, the same number of eigenfaces (45) was selected for each subset, bearing in mind that the explained variances were about 85% or higher in all cases (Table 1).

Table 1. Percentages of variance explained by 45 eigenfaces for each dataset.

https://doi.org/10.1371/journal.pone.0211314.t001

At this stage, the appearance of each feature could be characterized using 45 real values (eigenvalues). As an example of the feature information captured by the eigenfaces, Fig 4 shows a reduced set of original mouths (a) and the same set of mouths reconstructed using 45 eigenvalues before de-normalization (b).

Fig 4. Original and reconstructed mouths before de-normalization using 45 eigenfaces.

(a) Original mouths. (b) Reconstructed mouths.

https://doi.org/10.1371/journal.pone.0211314.g004

Clustering the facial features.

The K-Means clustering algorithm [51] was selected to cluster the features using their eigenvalues as characteristics. A drawback of using this method is that the number of clusters (K) must be predefined. The approach used to deal with this problem was to perform several K-Means executions varying K, and to calculate the Dunn’s Index [53] for each set of clusters. The Dunn’s Index measures the compactness and separation of the clusters obtained for each K. A higher Dunn’s Index points to a small intra-cluster variance and a high inter-cluster distance, i.e. the features included in each cluster are more similar to each other, and more different from the features belonging to other clusters. Therefore, the number of clusters for each feature was selected as the K that maximized the Dunn’s Index.
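A minimal sketch of this selection loop is given below, assuming coeffs is the (n_samples, 45) eigenvalue matrix from the previous step; Dunn's Index is computed here as the smallest between-cluster distance divided by the largest within-cluster diameter.

```python
# Run K-Means for a range of K and score each clustering with Dunn's Index.
import numpy as np
from scipy.spatial.distance import cdist, pdist
from sklearn.cluster import KMeans

def dunn_index(X, labels):
    clusters = [X[labels == k] for k in np.unique(labels)]
    min_inter = min(cdist(a, b).min() for i, a in enumerate(clusters) for b in clusters[i + 1:])
    max_intra = max(pdist(c).max() if len(c) > 1 else 0.0 for c in clusters)
    return min_inter / max(max_intra, 1e-12)            # guard against all-singleton clusterings

def dunn_per_k(coeffs, k_range=range(5, 31), n_runs=10):
    scores, labels_per_k = {}, {}
    for k in k_range:
        runs = [KMeans(n_clusters=k, n_init=10).fit_predict(coeffs) for _ in range(n_runs)]
        scores[k] = float(np.mean([dunn_index(coeffs, lab) for lab in runs]))
        labels_per_k[k] = runs[0]                       # keep one representative run per K
    return scores, labels_per_k
```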

Results

The four subsets (Asian, Black, Latino, and White) of the three facial features (eyes, noses, and mouths) previously obtained were grouped according to their appearance, measured through 45 eigenvalues, using the K-Means clustering algorithm. In order to determine the most suitable number of clusters, several runs of the algorithm were performed increasing K from 5 to 30, and the Dunn's Index for each obtained set of clusters was calculated. The results of iterative clustering algorithms like K-Means can vary depending on the initialization, which consists of selecting random initial positions for the cluster centers. This could yield different results in each execution; therefore, a round of 10 K-Means runs for each K was performed to check the consistency of the results across executions. The experiment was implemented using Matlab R2016a on a PC with an Intel(R) Core(TM) i7-4770S processor at 3.10 GHz and 16 GB of RAM.

As an example of how the number of clusters was selected for each subset, Fig 5 shows the Dunn's Index obtained for each K for the case of white mouths, and the number of clusters with a single element (SEC) per total number of clusters. As can be seen, high Dunn's Index values tend to be associated with high values of K; however, the number of SECs also increases with K. SECs were usually formed by features that had some problem in the previous automatic preprocessing of the image (centering, cropping or resizing), and can be considered outliers. For these reasons, the optimal number of clusters was selected as the K that produced the highest Dunn's Index with two or fewer SECs. After that, the SECs were reviewed and eliminated if their elements were considered outliers. For the mouths and the noses, SECs were those formed by only one mouth or one nose. For the eyes, SECs were those formed by no more than a pair of eyes: clusters containing only one individual eye, only the two eyes of the same person, or only two eyes of different people were all considered SECs.
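The rule described above can be written as a small helper, sketched here for mouths and noses (for eyes the SEC test would use the broader definition given in the text); labels_per_k and dunn_scores are assumed to come from a loop like the previous sketch.

```python
# Pick the K with the highest Dunn's Index among clusterings with two or fewer SECs.
import numpy as np

def count_secs(labels):
    _, counts = np.unique(labels, return_counts=True)
    return int((counts == 1).sum())                    # single-element clusters

def select_k(labels_per_k, dunn_scores, max_secs=2):
    admissible = [k for k, lab in labels_per_k.items() if count_secs(lab) <= max_secs]
    return max(admissible, key=lambda k: dunn_scores[k])
```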

In the case of the white mouths, the highest Dunn's Index with two or fewer SECs was obtained for K = 11. Fig 5 shows the images of the mouths belonging to the two SECs. One of them was considered an outlier because its size was very large with respect to the size of the image, and the other because it was rotated with respect to the horizontal axis. Therefore, these clusters were not considered, and only 9 clusters were used for this subset.

Fig 5. Dunn’s Index and clusters with a single element per number of clusters for white mouths.

https://doi.org/10.1371/journal.pone.0211314.g005

The same procedure was performed for each subset. Table 2 shows the number of clusters finally obtained for each feature and ethnic group. The percentage of elements in each cluster over the total number of elements in each subset was calculated, and the clusters were sorted from highest to lowest percentage. To identify the clusters, a code composed of four characters was assigned to each one. The first character was A (Asian), B (Black), L (Latino) or W (White). The second was M (mouth), N (nose) or E (eye). The last two digits were the order of the cluster in its subset. For example, cluster AM01 was the most populated cluster of mouths for Asian ethnicity, and WN12 the least populated cluster of noses for White people. Finally, the features closest to the center of their clusters were selected as representatives of their groups. Figs 6–8 show the obtained classification for each feature, and Figs 9–11 present the complete set of clusters for eyes, noses and mouths. The images of all the clusters are available for download at https://www.ergonautas.upv.es/lab/facial_features/clusters/.
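The cluster naming and the choice of representatives can be sketched as follows, reusing the coeffs and labels arrays assumed in the earlier snippets; the codes produced match the scheme just described (e.g. "AM01").

```python
# Sort clusters by size, code them (e.g. "AM01"), and pick the element closest to each centre.
import numpy as np

def name_and_represent(coeffs, labels, ethnicity="A", feature="M"):
    ids, counts = np.unique(labels, return_counts=True)
    order = ids[np.argsort(-counts)]                              # most populated cluster first
    taxonomy = {}
    for rank, k in enumerate(order, start=1):
        members = np.where(labels == k)[0]
        centre = coeffs[members].mean(axis=0)
        rep = members[np.argmin(np.linalg.norm(coeffs[members] - centre, axis=1))]
        taxonomy[f"{ethnicity}{feature}{rank:02d}"] = int(rep)    # cluster code -> representative index
    return taxonomy
```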

Fig 6. Taxonomy of mouths.

The name, the representative feature, and the membership percentage of each cluster are shown for each ethnic group.

https://doi.org/10.1371/journal.pone.0211314.g006

Fig 7. Taxonomy of noses.

The name, the representative feature, and the membership percentage of each cluster are shown for each ethnic group.

https://doi.org/10.1371/journal.pone.0211314.g007

Fig 8. Taxonomy of eyes.

The name, the representative feature, and the membership percentage of each cluster are shown for each ethnic group.

https://doi.org/10.1371/journal.pone.0211314.g008

Fig 10. Clusters of Black, White, Latino and Asian noses.

https://doi.org/10.1371/journal.pone.0211314.g010

Fig 11. Clusters of Black, White, Latino and Asian mouths.

https://doi.org/10.1371/journal.pone.0211314.g011

Table 2. Number of clusters for each feature for each ethnic group.

https://doi.org/10.1371/journal.pone.0211314.t002

The codification employed for the features was expanded to classify whole faces according to their features. In this case, the first character indicates the ethnic group of the face, i.e., A (Asian), B (Black), L (Latino) or W (White). After a hyphen, three groups of three characters indicate the mouth, nose, and eye clusters. As an example, in Fig 12 four faces were composed using the representative features of the most populated clusters for each ethnic group (A-M01N01E01, B-M01N01E01, W-M01N01E01, and L-M01N01E01). The representative features of the most populated clusters are illustrative of the most typical features in the face database employed to obtain this taxonomy.
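As a trivial illustration of this whole-face codification (a sketch, not part of the authors' software):

```python
# Compose a whole-face code from the ethnic group and the three feature cluster codes.
def face_code(ethnicity, mouth, nose, eye):
    return f"{ethnicity}-{mouth}{nose}{eye}"

# face_code("A", "M01", "N01", "E01")  ->  "A-M01N01E01"
```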

Fig 12. Codification of faces composed using the representative facial features of the most populated clusters for each ethnic group.

https://doi.org/10.1371/journal.pone.0211314.g012

Validation of the procedure

This work proposes an automatic procedure to classify features based on their appearance. This procedure was used to group features of faces extracted from the Chicago Face Database. The intuitively logical approach to validating the procedure would be to compare the obtained taxonomies with those generated by human evaluators. However, as mentioned in the Introduction, this approach has important drawbacks. Classifying a large set of features into an undefined number of groups is a hard task considering human capabilities for information processing [55,56]. Other important problems of this approach are the part-whole effect [48], which decreases the human ability to process individual features, and the influence of the race of the face on the performance in processing facial information [53,54]. Previous works have reported low inter-observer and intra-observer agreement in the evaluation of facial features [12]; therefore, a different approach must be used to validate the proposed procedure.

Instead of comparing the obtained taxonomies with those generated by humans, we measured the agreement of human evaluators with the proposed taxonomies. The main objectives were to reduce the number of features presented simultaneously to the human evaluators and to simplify the decision that had to be made. To do this, a survey composed of several stages was developed. Initially, the image of one feature was selected at random from the entire dataset (the target feature). Four different representative features were randomly selected (representative features are those designated as representatives of their groups in the obtained taxonomy). In the first stage of the survey, the five features were presented to the evaluator in a web form (Fig 13 (A)). The target feature was in the center of the form, and the four representative features were at the corners. The evaluator was asked to select the representative feature most similar to the target feature by clicking on it with the mouse. The request presented to the participants was: “Please select the eye/nose/mouth most similar to the one shown in the center of the screen”. Once the participant made the decision, the selected representative feature passed to the second stage, in which a new form was composed as in Fig 13 (B). The target feature was in the center again, and the selected representative feature was at one corner of the form. Three new representative features were randomly selected and placed in the three remaining corners. This process was repeated until each representative feature had been shown at least once. The cluster of the representative feature selected in the last stage was considered the result of the survey (i.e., the cluster to which the target feature belongs according to the opinion of the respondent). Using this procedure, the decision-making process was simplified because the number of simultaneous alternatives was reduced to four. As a drawback, the probability of a representative feature being finally selected depends slightly on the stage in which it is shown.
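The staged logic of the survey can be sketched as below, where ask(target, options) is a hypothetical placeholder for the participant's choice in the web form of Fig 13 and representatives is the list of representative features of one taxonomy (at least four are assumed).

```python
# Sketch of the staged ("tournament") presentation used in the validation survey.
import random

def run_survey(target, representatives, ask):
    pool = list(representatives)
    random.shuffle(pool)
    options = [pool.pop() for _ in range(4)]                  # first form: four random representatives
    winner = ask(target, options)                             # participant picks the most similar one
    while pool:                                               # later forms: previous winner + three new ones
        options = [winner] + [pool.pop() for _ in range(min(3, len(pool)))]
        winner = ask(target, options)
    return winner                                             # cluster assigned to the target feature
```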

Twenty-one white males and 11 white females aged between 25 and 46 years participated in three surveys (mouths, eyes and noses). The Comité de Ética en la Investigación (Institutional Review Board of the Universidad Politécnica de Valencia) reviewed and approved these studies. Participants were recruited from May to July 2017 through internal media coverage of the study in the university. Participants gave written informed consent according to the procedures of the Universidad Politécnica de Valencia. The surveys were carried out at the Instituto de Investigación e Innovación en Bioingeniería in Valencia, Spain. In each survey, 200 target features were selected at random from the corresponding white features dataset, excluding the representative features. The target features were presented in the survey web form, and the cluster of the representative feature finally selected by the evaluators was registered.

Table 3 shows the results of the survey. The first column of this table presents the cluster finally selected. In this column, Expected refers to the cluster in which the target feature was grouped by the automatic procedure. A total of 82 target mouths, 62 target eyes and 93 target noses were classified in the expected cluster. The distance between clusters can be measured through the eigenvalues of their representative features; therefore, it is possible to determine the distance from the expected cluster to each of the other clusters. The closer two clusters are, the more similar are the features they contain. In Table 3, 1st closest is the cluster nearest to the expected cluster, 2nd closest is the second-nearest cluster to the expected cluster, and so on. The number, the percentage and the cumulative percentage of features classified in each cluster are shown. The percentages of features classified in the expected cluster or in the three clusters closest to it were 75.5% for mouths, 73.0% for eyes and 81.0% for noses.
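The ranking used in Table 3 can be reproduced with a small helper like the one below, assuming rep_coeffs maps each cluster code to the 45-dimensional eigenvalue vector of its representative feature (an illustrative sketch, not the authors' analysis script).

```python
# Rank clusters by the distance between the eigenvalue vectors of their representatives.
import numpy as np

def closeness_rank(expected, selected, rep_coeffs):
    """Return 0 if the selected cluster is the expected one, 1 for the 1st closest, and so on."""
    if selected == expected:
        return 0
    dists = {c: float(np.linalg.norm(np.asarray(rep_coeffs[c]) - np.asarray(rep_coeffs[expected])))
             for c in rep_coeffs if c != expected}
    ranking = sorted(dists, key=dists.get)                    # 1st closest, 2nd closest, ...
    return ranking.index(selected) + 1
```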

Discussion

Classification systems used to categorize human body parts, or taxonomies obtained from them, provide a standardized way to describe or configure the human body, and much work has been done to categorize many different body parts. Describing facial features using a common terminology is essential in disciplines such as ergonomics, forensics, surgery or criminology. Moreover, the growth of new technologies that use virtual interlocutors or avatars has led to an increasing interest in synthesizing faces and facial expressions that symbolize the user’s presence in new human-machine interaction systems and online activities.

However, there are very few classification systems or taxonomies for facial features, probably due to the complexity of this task and to the limited human capacity for processing individual features compared to the capacity for processing whole faces. Classifying the appearance of facial features requires a holistic approach that considers all visible information. Therefore, encoding the geometry and carrying out a metric or morphological assessment is not enough to obtain taxonomies of facial features based on appearance. In this work, appearance-based representations (eigenfaces) were used to classify the facial features. The developed procedure forms groups of features taking into account all available information and encompassing their global nature.

This procedure was used to classify the facial features of 290 images of males with a neutral expression from the Chicago Face Database, obtaining taxonomies of eyes, mouths, and noses for several ethnic groups. To validate the procedure, the agreement of human evaluators with the proposed taxonomies was measured. Out of 200 cases for each feature, 41.0% of mouths, 31.0% of eyes and 46.5% of noses were classified by humans in the same cluster as by the automatic procedure. More than 73.0% of the features were classified in the expected cluster or in the three clusters closest to it (75.5% of mouths, 73.0% of eyes and 81.0% of noses).

To the best of our knowledge, there are no similar studies with which to compare these results. In [12], the applicability and feasibility of the DMV atlas [43] were tested by measuring the inter-observer and intra-observer errors when classifying several morphological features of male faces (e.g., head shape, nose bridge length, chin shape). As an example, in this test the shape of the chin was classified into three classes. Despite the low number of classes, the inter-observer error was approximately 39%, while the intra-observer error was 30% for inexperienced observers. These results reflect the subjectivity and the wide variability when judging facial features; every observer showed a specific recognition pattern for the individual facial features. Moreover, this study also concluded that the morphologic assessment of faces is affected by cultural variables. Although more tests must be carried out, in light of these results it can be concluded that the proposed automatic procedure is a good approach to classify facial features.

Nevertheless, this study has some limitations. The experiment carried out employed 290 images of males with neutral expression from the Chicago Face Database. Therefore, the taxonomies obtained are only representative of the features of the faces belonging to this database. The representativeness of these taxonomies with respect to other populations must be carefully analyzed before their use. The objective of this work was not to obtain the taxonomies but to develop the automatic procedure to classify facial features based on their appearance. A more comprehensive face database can be used to obtain more representative taxonomies. Therefore, our future work will be focused on increasing the sample size of faces used to develop the taxonomies. At the same time, we will test the performance of the proposed system when classifying new faces not used to develop the taxonomies, comparing the results with the classification of human observers.

In the same way, the validation of the proposed procedure was performed only for the White facial features. The results obtained for Latino, Asian and Black facial features must still be tested, and future work must be done to extend this procedure to other facial features, like eyebrows, chins or hair, and to obtain taxonomies of facial features from female faces.

Conclusions

Although judging the similarity of facial features is a subjective process with wide inter-observer and intra-observer variability, the results of the validation survey developed in this work show that the proposed procedure can be considered appropriate for the automatic classification of facial features based on their appearance. This procedure deals with the difficulties associated with classifying features using judgements from human observers, and facilitates the development of taxonomies of facial features.

Acknowledgments

This study was developed using the Chicago Face Database developed at the University of Chicago by Debbie S. Ma, Joshua Correll, and Bernd Wittenbrink.

References

1. Damasio AR. Prosopagnosia. Trends Neurosci. 1985; 132–135.
2. Bruce V, Young A. Understanding face recognition. Br J Psychol. 1986;77: 305–327. pmid:3756376
3. Kanwisher N, McDermott J, Chun MM. The Fusiform Face Area: A module in human extrastriate cortex specialized for the perception of faces. J Neurosci. 1997;17: 4302–4311. Available: http://www.ncbi.nlm.nih.gov/pubmed/9151747 pmid:9151747
4. Bruce V, Young A. Face perception. New York, NY: Psychology Press; 2012.
5. Todorov A. Evaluating Faces on Social Dimensions. Social Neuroscience: Toward Understanding the Underpinnings of the Social Mind. 2011.
6. Little AC, Burriss RP, Jones BC, Roberts SC. Facial appearance affects voting decisions. Evol Hum Behav. 2007;28: 18–27.
7. Porter JP, Olson KL. Anthropometric facial analysis of the African American woman. Arch Facial Plast Surg. 2001;3: 191–197. pmid:11497505
8. Gündüz Arslan S, Genç C, Odabaş B, Devecioǧlu Kama J. Comparison of facial proportions and anthropometric norms among Turkish young adults with different face types. Aesthetic Plast Surg. 2008;32: 234–242. pmid:17952492
9. Ferring V, Pancherz H. Divine proportions in the growing face. Am J Orthod Dentofac Orthop. 2008;134: 472–479. pmid:18929263
10. Mane DR, Kale AD, Bhai MB, Hallikerimath S. Anthropometric and anthroposcopic analysis of different shapes of faces in group of Indian population: A pilot study. J Forensic Leg Med. 2010;17: 421–425. pmid:21056876
11. Ritz-Timme S, Gabriel P, Tutkuviene J, Poppa P, Obertov Z, Gibelli D, et al. Metric and morphological assessment of facial features: A study on three European populations. Forensic Sci Int. 2011;207: 239. pmid:21388762
12. Ritz-Timme S, Gabriel P, Obertovà Z, Boguslawski M, Mayer F, Drabik A, et al. A new atlas for the evaluation of facial features: Advantages, limits, and applicability. Int J Legal Med. 2011;125: 301–306. pmid:20369248
13. Robins G. Analysis of facial proportions in Egyptian art. Goettinger Miszellen Beitraege zur aegyptologischen Diskuss. 1984; 31–41.
14. Hochscheid H, Hamel R. Shaping space: facial asymmetries in fifth-century Greek sculpture. Art Mak Antiq. 2015;
15. Kong SG, Heo J, Abidi BR, Paik J, Abidi MA. Recent advances in visual and infrared face recognition—A review. Computer Vision and Image Understanding. 2005. pp. 103–135.
16. Tavares G, Mourão A, Magalhães J. Crowdsourcing facial expressions for affective-interaction. Comput Vis Image Underst. 2016;147: 102–113.
17. Buckingham G, DeBruine LM, Little AC, Welling LLM, Conway CA, Tiddeman BP, et al. Visual adaptation to masculine and feminine faces influences generalized preferences and perceptions of trustworthiness. Evol Hum Behav. 2006;27: 381–389.
18. Boberg M, Piippo P, Ollila E. Designing Avatars. DIMEA ‘08 Proc 3rd Int Conf Digit Interact Media Entertain Arts. ACM; 2008; 232–239. https://doi.org/10.1145/1413634.1413679
19. Rojas MM, Masip D, Todorov A, Vitria J. Automatic prediction of facial trait judgments: Appearance vs. structural models. PLoS One. 2011;6. pmid:21858069
20. Laurentini A, Bottino A. Computer analysis of face beauty: A survey. Comput Vis Image Underst. 2014;125: 184–199.
21. Li SZ, Jain AK. Handbook of Face Recognition. Handbook of face recognition. 2005.
22. Sheldon W. Atlas of Men: A Guide for Somatotyping the Adult Image of All Ages. Macmillan Pub. Co; 1970.
23. Alemany S, Gonzalez J, Nacher B, Soriano C, Arnaiz C, Heras H. Anthropometric survey of the Spanish female population aimed at the apparel industry. Proceedings of the 2010 Intl Conference on 3D Body scanning Technologies. 2010. pp. 307–315.
24. Vinué G, Epifanio I, Alemany S. Archetypoids: A new approach to define representative archetypal data. Comput Stat Data Anal. 2015;87: 102–115.
25. Jee SC, Yun MH. An anthropometric survey of Korean hand and hand shape types. Int J Ind Ergon. 2016;53: 10–18.
26. Kim N-S, Do W-H. Classification of Elderly Women’s Foot Type. J Korean Soc Cloth Text. The Korean Society of Clothing and Textiles; 2014;38: 305–320.
27. Sarakon P, Charoenpong T, Charoensiriwath S. Face shape classification from 3D human data by using SVM. The 7th 2014 Biomedical Engineering International Conference. IEEE; 2014. pp. 1–5. https://doi.org/10.1109/BMEiCON.2014.7017382
28. Preston TA, Singh M. Redintegrated Somatotyping. Ergonomics. Taylor & Francis Group; 1972;15: 693–700. pmid:4652867
29. Lin Y-L, Lee K-L. Investigation of anthropometry basis grouping technique for subject classification. Ergonomics. Taylor & Francis Group; 1999;42: 1311–1316.
30. Massidda M, Toselli S, Brasili P, Calò CM. Somatotype of elite Italian gymnasts. Coll Antropol. 2013;37: 853–7. Available: http://www.ncbi.nlm.nih.gov/pubmed/24308228 pmid:24308228
31. Malousaris GG, Bergeles NK, Barzouka KG, Bayios IA, Nassis GP, Koskolou MD. Somatotype, size and body composition of competitive female volleyball players. J Sci Med Sport. 2008;11: 337–344. pmid:17697797
32. Koleva M, Nacheva A, Boev M. Somatotype and disease prevalence in adults. Rev Env Heal. 2002;17: 65–84.
33. Davis A, Murphy J, Owens D, Khazanchi D, Zigurs I. Avatars, people, and virtual worlds: Foundations for research in metaverses. J Assoc Inf Syst. 2009;10: 90–117. doi:1660426061
34. Carvalho PVR, dos Santos IL, Gomes JO, Borges MRS, Guerlain S. Human factors approach for evaluation and redesign of human-system interfaces of a nuclear power plant simulator. Displays. 2008;29: 273–284.
35. Fabri M, Moore D. The use of emotionally expressive avatars in Collaborative Virtual Environments. AISB’05 Convention: Proceedings of the Joint Symposium on Virtual Social Agents: Social Presence Cues for Virtual Humanoids Empathic Interaction with Synthetic Characters Mind Minding Agents. 2005. pp. 88–94. doi:citeulike-article-id:790934
36. Orvalho V, Miranda J, Sousa AA. Facial Synthesys of 3D Avatars for Therapeutic Applications. Stud Health Technol Inform. 2009;144: 96–98. pmid:19592739
37. Yee N, Bailenson J. The proteus effect: The effect of transformed self-representation on behavior. Hum Commun Res. 2007;33: 1–38.
38. Albin-Clark A, Howard T. Automatically Generating Virtual Humans using Evolutionary Algorithms. EG UK Theory Pract Comput Graph. Wen Tang John Collomosse; 2009;
39. Diego-Mas JA, Alcaide-Marzal J. A computer based system to design expressive avatars. Comput Human Behav. 2015;
40. Sukhija P, Behal S, Singh P. Face Recognition System Using Genetic Algorithm. Procedia Comput Sci. 2016;85: 410–417.
41. Trescak T, Bogdanovych A, Simoff S, Rodriguez I. Generating diverse ethnic groups with genetic algorithms. Proceedings of the 18th ACM symposium on Virtual reality software and technology—VRST ‘12. New York, New York, USA: ACM Press; 2012. p. 1. https://doi.org/10.1145/2407336.2407338
42. Vanezis P, Lu D, Cockburn J, Gonzalez A, McCombe G, Trujillo O, et al. Morphological classification of facial features in adult Caucasian males based on an assessment of photographs of 50 subjects. J Forensic Sci. 1996;41: 786–91. Available: http://www.ncbi.nlm.nih.gov/pubmed/8789838 pmid:8789838
43. Asmann S, Nohrden D, Schmitt R, Gabriel P, Ritz-Timme S. Anthropological atlas of male facial features. Frankfurt: Verlag für Polizeiwissenschaft; 2007.
44. Ohlrogge S, Arent T, Huckenbeck W, Gabriel P, Ritz-Timme S. Anthropological atlas of female facial features. Frankfurt: Verlag für Polizeiwissenschaft; 2009.
45. Tamir A. Numerical Survey of the Different Shapes of the Human Nose. J Craniofac Surg. 2011;22: 1104–1107. pmid:21586956
46. Tamir A. Numerical Survey of the Different Shapes of Human Chin. J Craniofac Surg. 2013;24: 1657–1659. pmid:24036746
47. Richler JJ, Cheung OS, Gauthier I. Holistic processing predicts face recognition. Psychol Sci. 2011;22: 464–471. pmid:21393576
48. Taubert J, Apthorp D, Aagten-Murphy D, Alais D. The role of holistic processing in face perception: Evidence from the face inversion effect. Vision Res. 2011;51: 1273–1278. pmid:21496463
49. Donnelly N, Davidoff J. The mental representations of faces and houses: Issues concerning parts and wholes. Vis cogn. 1999;6: 319–343.
50. Davidoff J, Donnelly N. Object superiority: A comparison of complete and part probes. Acta Psychol (Amst). 1990;73: 225–243.
51. Tanaka JW, Farah MJ. Parts and wholes in face recognition. Q J Exp Psychol. 1993;46: 225–245.
52. Wang R, Li J, Fang H, Tian M, Liu J. Individual differences in holistic processing predict face recognition ability. Psychol Sci. 2012;23: 169–177. pmid:22222218
53. Hayward WG, Rhodes G, Schwaninger A. An own-race advantage for components as well as configurations in face recognition. Cognition. 2008. pmid:17524388
54. Rhodes G, Ewing L, Hayward WG, Maurer D, Mondloch CJ, Tanaka JW. Contact and other-race effects in configural and component processing of faces. Br J Psychol. Blackwell Publishing Ltd; 2009;100: 717–728. pmid:19228441
55. Miller G. The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychol Rev. 1956;101: 343–352.
56. Scharff A, Palmer J, Moore CM. Evidence of fixed capacity in visual object categorization. Psychon Bull Rev. 2011;18: 713–721. pmid:21538202
57. Shyam R, Singh YN. Identifying individuals using multimodal face recognition techniques. Procedia Computer Science. 2015.
58. Meyers E, Wolf L. Using biologically inspired features for face processing. Int J Comput Vis. 2008;76: 93–104.
59. Wu J, Cui Z, Sheng VS, Zhao P, Su D, Gong S. A comparative study of SIFT and its variants. Meas Sci Rev. 2013;
60. Cootes TF, Edwards GJ, Taylor CJ. Active appearance models. IEEE Trans Pattern Anal Mach Intell. 2001;23: 681–685.
61. Ashraf AB, Lucey S, Cohn JF, Chen T, Ambadar Z, Prkachin K, et al. The Painful Face—Pain Expression Recognition Using Active Appearance Models. IEEE Trans Syst Man Cybern. 2007; pmid:22837587
62. Ahonen T, Hadid A, Pietikäinen M. Face description with local binary patterns: Application to face recognition. IEEE Trans Pattern Anal Mach Intell. 2006;28: 2037–2041. pmid:17108377
63. Liu L, Fieguth P, Zhao G, Pietikäinen M, Hu D. Extended local binary patterns for face recognition. Inf Sci (Ny). 2016;
64. Tang H, Yin B, Sun Y, Hu Y. 3D face recognition using local binary patterns. Signal Processing. 2013;
65. Belhumeur PN, Hespanha JP, Kriegman DJ. Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Trans Pattern Anal Mach Intell. 1997;19: 711–720.
66. Turk M, Pentland A. Eigenfaces for Recognition. Journal of Cognitive Neuroscience. 1991. pp. 71–86. pmid:23964806
67. Klare B, Jain AK. On a taxonomy of facial features. IEEE 4th International Conference on Biometrics: Theory, Applications and Systems, BTAS 2010. IEEE; 2010. pp. 1–8. https://doi.org/10.1109/BTAS.2010.5634533
68. Chihaoui M, Elkefi A, Bellil W, Ben Amar C. A Survey of 2D Face Recognition Techniques. Computers. Multidisciplinary Digital Publishing Institute; 2016;5: 21.
69. Ma DS, Correll J, Wittenbrink B. The Chicago face database: A free stimulus set of faces and norming data. Behav Res Methods. 2015;47: 1122–1135. pmid:25582810
70. Asthana A, Zafeiriou S, Cheng S, Pantic M. Incremental face alignment in the wild. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2014. pp. 1859–1866. https://doi.org/10.1109/CVPR.2014.240
71. Shapiro LG, Stockman GC. Computer Vision. Upper Saddle River, New Jersey: Prentice Hall, Inc; 2001. https://doi.org/10.1525/jer.2008.3.1.toc
72. Bag S, Barik S, Sen P, Sanyal G. A statistical nonparametric approach of face recognition: combination of eigenface & modified k-means clustering. Proceedings Second International Conference on Information Processing. 2008. p. 198.
73. Siregar STM, Syahputra MF, Rahmat RF. Human face recognition using eigenface in cloud computing environment. IOP Conf Ser Mater Sci Eng. 2018;
74. Šušteršič T, Vulović A, Filipović N, Peulić A. FPGA implementation of face recognition algorithm. Lect Notes Inst Comput Sci Soc Telecommun Eng LNICST. 2018;
75. Doukas C, Maglogiannis I. A fast mobile face recognition system for android OS based on Eigenfaces decomposition. IFIP Advances in Information and Communication Technology. 2010. pp. 295–302.
76. Dharejo FA, Jatoi MA, Hao Z, Tunio MA. PCA based improved face recognition system. Frontiers in Artificial Intelligence and Applications. 2017.
77. Huang P, Huang Y, Wang W, Wang L. Deep embedding network for clustering. Proceedings—International Conference on Pattern Recognition. 2014. pp. 1532–1537. https://doi.org/10.1109/ICPR.2014.272
78. Dizaji KG, Herandi A, Deng C, Cai W, Huang H. Deep Clustering via Joint Convolutional Autoencoder Embedding and Relative Entropy Minimization. Proceedings of the IEEE International Conference on Computer Vision. 2017. https://doi.org/10.1109/ICCV.2017.612
79. Xie J, Girshick R, Farhadi A. Unsupervised deep embedding for clustering analysis. Proceedings of the 33rd International Conference on International Conference on Machine Learning—Volume 48. JMLR.org; 2016. pp. 478–487. Available: https://dl.acm.org/citation.cfm?id=3045442
80. Nousi P, Tefas A. Discriminatively trained autoencoders for fast and accurate face recognition. Communications in Computer and Information Science. 2017.
81. Hamerly G, Elkan C. Learning the k in k means. Adv neural Inf Process. 2004; doi:10.1.1.9.3574
82. Ester M, Kriegel H, Sander J, Xu X. A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. Computer (Long Beach Calif). 1996; doi:10.1.1.71.1980
83. Sirovich L, Kirby M. Low-dimensional procedure for the characterization of human faces. Journal of the Optical Society of America. A, Optics and image science. 1987. pp. 519–524. pmid:3572578