
Image Generation Using Bidirectional Integral Features for Face Recognition with a Single Sample per Person

  • Yonggeol Lee,

    Affiliation Department of Computer Science and Engineering, Dankook University, 126, Jukjeon-dong, Suji-gu, Yongin-si, Gyeonggi-do, 448–701, Korea

  • Minsik Lee,

    Affiliation Division of Electrical Engineering, Hanyang University, 55 Hanyangdaehak-ro, Sangnok-gu, Ansan-si, Gyeonggi-do, 426–791, Korea

  • Sang-Il Choi

    choisi@dankook.ac.kr

    Affiliation Department of Computer Science and Engineering, Dankook University, 126, Jukjeon-dong, Suji-gu, Yongin-si, Gyeonggi-do, 448–701, Korea

Abstract

In face recognition, most appearance-based methods require several images of each person to construct the feature space for recognition. However, in the real world it is difficult to collect multiple images per person, and in many cases there is only a single sample per person (SSPP). In this paper, we propose a method to generate new images with various illuminations from a single image taken under frontal illumination. Motivated by the integral image, which was developed for face detection, we extract the bidirectional integral feature (BIF) to obtain the characteristics of the illumination condition under which the picture was taken. The experimental results for various face databases show that the proposed method improves recognition performance under illumination variation.

Introduction

Face recognition is used to identify individuals from facial images by using a face database labeled with people’s identities [1]. Compared to other types of biometric recognition, face recognition is less invasive and does not require a subject to be in proximity to or in contact with a sensor, which makes it widely applicable in areas including user identification, e-commerce, access control, surveillance, and human-computer interaction [1]. For this reason, face recognition has received extensive attention from many researchers over the last two decades.

The motivation for using appearance-based methods, the most widely adopted approach in the face recognition field [2–6], is their ability to construct a small-scale, low-dimensional feature subspace that maintains the intrinsic characteristics of the original face samples by using supervised, semi-supervised, or unsupervised learning. Beginning from the most basic schemes such as the Eigenface [2] and PCA+LDA [3] methods, appearance-based methods have continuously evolved to produce others including Discriminative Common Vector (DCV) [4], Direct Linear Discriminant Analysis (Direct LDA) [5], Eigenfeature Regularization and Extraction (ERE) [7], and Marginal Fisher Analysis (MFA) [8]. These methods transform each image into a vector form, and then extract appropriate features from various kinds of covariance matrices based on a statistical analysis. The methods based on discriminant analysis, such as the variants of LDA (linear discriminant analysis), seek the linear transformation that minimizes the within-class variation while simultaneously maximizing the between-class variation; however, this is only effective when the within-class variance is small and the between-class variance is large. It is important to note that numerous problems still need to be overcome to develop a robust face recognition system.

In a real-world setting, facial data can be acquired in many different environments, so a sufficient number of facial images is needed to construct a face recognition system that is reliable under various conditions. Numerous methods have been proposed under the assumption that a number of images are available for each individual, but in most real-world applications only a much smaller number of training images can be acquired [9]. For example, large-scale identification applications such as law enforcement and passport identification typically use databases that contain a single training sample per person (SSPP). Additionally, due to the high cost of capturing additional samples, further training images are rarely added to an individual’s profile; furthermore, even when several training samples of an individual are available, they are useful only if they were taken under a variety of conditions that cover different variations [9].

The small number of training samples for each person raises several problems for appearance-based face recognition systems. If the feature dimension of the face samples is larger than the number of training images, LDA cannot be applied directly because the within-class scatter matrix becomes singular; this issue is known as the small sample size (SSS) problem. Even though variants of LDA, such as PCA+LDA, DCV, and Direct LDA, were proposed as solutions to the SSS problem, the number of training samples per person still exerts a major influence on the performance of appearance-based methods. In particular, when the number of training samples per person is much smaller than the feature dimension, the within-class variance in LDA cannot be estimated accurately and the discriminative information cannot be fully exploited [10].

To address the problems that stem from the small number of training samples per person, several methods have been introduced. In [11–15], new representational methods for mining more information from a single image were proposed. In [11], representational oriented component analysis (ROCA) was presented; this method applies several linear and non-linear filters to each gallery image to produce 150 representations of it. The method in [13] uses a singular value decomposition (SVD) perturbation to extract the greatest amount of information possible from a single training image. In the E(PC)2A+ method [14], new images are generated by linearly combining the original image and its corresponding 1/2-, first-, and second-order projected images. In [15], many samples are synthesized from pairs of real images and their weighted combinations. However, since the images generated by the above methods are highly correlated, the new images cannot be considered as independent training images [16].

Other methods generate virtual face images to enlarge the training set. In [17], virtual images are produced by using the symmetry transform for intra-class samples and linear combinations for inter-class samples, while the symmetrical PCA method in [18] uses even and odd symmetrical image sets. The method in [19] also uses the symmetrical structure of a face to generate new training samples. However, most of the above methods focus mainly on enlarging the training set and do not consider the variations, such as illumination variation, that are likely to occur in uncontrolled conditions.

In this paper, we propose a novel approach to generate new face images from a single training image to solve the SSPP problem. We first propose the bidirectional integral features (BIF) based on the idea of the integral image [20]. Since the value at (x, y) of the integral image is the sum of pixels above and to the left of (x, y) in the original image, its first-order derivatives represent the distribution of gray-level intensities in the sub-region, i.e., the first-order derivatives in the corresponding region of the integral image are small for the dark region of the original image, while those for the bright region are large. We defined two kinds of integral images, which are the left and right integral images, depending on the direction of the light source. The values of the left and right integral images were obtained by calculating the sum from the left-top to the right-bottom, and from the right-top to the left-bottom, respectively. Then, we extracted BIF by normalizing each integral image for values ranging between 0 and 1.

In terms of shape, human faces are typically similar, consisting of two eyes, one nose, and one mouth. Each of these facial components casts shadows on the face, whose form depends on the location of the light source. When the light source is located at one side of the face, the shadows occur on the opposite side of the face. As the light source moves away from the frontal direction, the attached and cast shadows become larger and more severe. Since shadowed regions lead to small BIF values, we extract illumination-variation information from the BIF. Based on this illumination information, we generate several new face images that resemble images taken under various illuminations, derived from only a single image taken under frontal illumination.

Our proposed method for the SSPP problem has several advantages over comparable algorithms. Once the BIF for the pre-defined light-direction categories are obtained, we can simply generate new images from a face image for each of the pre-defined categories. As a result, we not only solve the SSS problem but can also effectively deal with face recognition under illumination variation. Additionally, the proposed method does not rely on the choice of a particular appearance-based face recognition algorithm, and a single training sample per person is sufficient to improve face recognition performance under illumination variations, as shown in the experimental results.

The remainder of this paper is organized as follows. In the next section, we provide preliminaries for the proposed method. Then, we present the BIF and how they can be used to generate new face images with shadows. Finally, the experimental results are described and the conclusion follows.

Preliminaries to the SSPP Problem in Appearance-Based Methods

The SSPP problem, which is an extreme case of the SSS problem in classification, is a key problem in face recognition because only a single sample is available for each person. In this section, we provide a brief overview of the feature extraction methods used in appearance-based face recognition and explain how the SSPP problem affects them. LDA, which originated in supervised learning, has been widely adopted for its ability to reduce the dimensionality of the feature space [21, 22]. Consider a set of N samples, each of which belongs to one of Nc subjects or classes. Each sample x ∈ ℝⁿ can be represented as a point in the n-dimensional vector space. Let xij denote the jth sample belonging to the ith class. The ith class consists of Ni samples, and N = ∑i Ni. The LDA method finds the optimal projection matrix in accordance with Fisher’s criterion to maximize the ratio of the between-class scatter matrix (SB) to the within-class scatter matrix (SW):

SB = ∑i Ni (mi − m)(mi − m)^T,  SW = ∑i ∑j (xij − mi)(xij − mi)^T,  Wopt = arg maxW |W^T SB W| / |W^T SW W|, (1)

where m is the mean of all the samples and mi is the mean of the samples belonging to class i. The column vectors of W = [w1, .., wNc−1] are the generalized eigenvectors associated with the largest generalized eigenvalues satisfying

SB wk = λk SW wk, (2)

where k = 1, .., Nc − 1 [23]. They can be obtained by the simultaneous diagonalization of SB and SW if SW is nonsingular [21].
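For illustration, the projection defined by Eqs 1 and 2 can be sketched in a few lines of NumPy/SciPy (a minimal sketch; the function name, data layout, and default dimensionality are illustrative choices, not part of the original paper):

```python
# Minimal LDA sketch for Eqs 1-2 (illustrative; assumes S_W is nonsingular,
# which is exactly what fails in the SSS/SSPP settings discussed below).
import numpy as np
from scipy.linalg import eigh

def lda_projection(X, y, n_components=None):
    """X: (N, n) sample matrix, y: (N,) class labels. Returns W with Nc-1 columns by default."""
    classes = np.unique(y)
    m = X.mean(axis=0)                      # global mean
    n = X.shape[1]
    S_B = np.zeros((n, n))                  # between-class scatter (Eq 1)
    S_W = np.zeros((n, n))                  # within-class scatter (Eq 1)
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        d = (mc - m)[:, None]
        S_B += Xc.shape[0] * (d @ d.T)
        S_W += (Xc - mc).T @ (Xc - mc)
    # Generalized eigenproblem S_B w = lambda S_W w (Eq 2)
    evals, evecs = eigh(S_B, S_W)
    order = np.argsort(evals)[::-1]
    k = n_components or (len(classes) - 1)
    return evecs[:, order[:k]]
```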

In face recognition problems, since the dimension of the input space (n) is usually much larger than the number of available samples (N), SW becomes singular, resulting in the SSS problem [21]. To avoid the SSS problem, several variants of LDA have been proposed [25], including PCA+LDA, Direct LDA, DCV, and ERE. However, even though the SSS problem is solved in terms of computation, some issues regarding the SSPP problem remain. Firstly, in the case of the SSPP problem, SW cannot actually be computed. Also, LDA performs best on data that satisfy the assumption that the samples in each class are normally distributed [24]; it has been shown that maximizing the objective function in Eq 1 is equivalent to maximizing the Euclidean distance between the class means [25]. Therefore, to effectively overcome the SSPP problem, it is important to secure several images for each class so that the samples in each class approximately follow a normal distribution.

In the following sections, we present the BIF and the virtual face image-generation method based on them, and demonstrate the resulting improvement in face recognition performance.

Proposed Method

Bidirectional Integral Features

The integral image was originally designed to compute rectangular features very rapidly for face detection. It is defined as follows [20]:

A(x, y) = ∑x′≤x, y′≤y I(x′, y′), (3)

where A(x, y) is the integral image and I(x′, y′) is the original image. In Eq 3, the value of the integral image at position (x, y) is the sum of the pixels above and to the left of (x, y) in the original image. A(x, y) increases monotonically as x and y increase because all of the pixels of I(x, y) have non-negative values.
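As a concrete illustration of Eq 3, the integral image is simply a double cumulative sum (a minimal NumPy sketch; the function name and the random test image are illustrative):

```python
import numpy as np

def integral_image(img):
    """A(x, y): sum of img over all pixels above and to the left of (x, y) (Eq 3)."""
    return np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1)

img = np.random.randint(0, 256, size=(80, 80))  # stand-in for an 80 x 80 face image
A = integral_image(img)
assert A[-1, -1] == img.sum()                   # the last entry is the sum of all pixels
```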

Since the outline of the human face is roughly convex along the horizontal (azimuth) direction, we divided the light directions into the categories {Cl ∣ −L ≤ l ≤ L} (here, L = 3), from the left side to the right side. C0 denotes a frontal light source. We denote a face image of the mth individual under frontal illumination and under right-side illumination as Im^0 (l = 0) and Im^−3 (l = −3), respectively, and the corresponding integral images as Am^0 and Am^−3. Fig 1 shows the integral images obtained from two face images for each of two individuals (m = 1, 2) from the CMU-PIE database [26]; one of the face images was taken under frontal illumination and the other under right-side illumination. The images in Fig 1 are scaled to have values ranging between 0 and 255. The patterns of the integral images depend more on the differences in illumination conditions than on the unique features of the individuals. As shown in Fig 1, A1^0 and A2^0 of different individuals are similar to each other, whereas Am^0 and Am^−3 of the same individual under different illumination conditions are different.

Fig 1. The patterns of integral images.

(a) individual images. (b) integral images.

https://doi.org/10.1371/journal.pone.0138859.g001

We characterized the illumination conditions depending on the direction of the light source by using integral images. However, for an image under right-side illumination (Fig 2), the shadows occur on the right side of the image, where the corresponding pixels have small intensity values, so the values of the integral image become saturated (flattened at large values) beyond the middle of the image. To effectively capture the characteristics of both left- and right-side illumination, we defined the left integral image BL(x, y) and the right integral image BR(x, y) as follows:

BL(x, y) = ∑x′≤x, y′≤y I(x′, y′),  BR(x, y) = ∑x′≥x, y′≤y I(x′, y′). (4)

We call this pair of integral images “bidirectional integral images”. We applied the left integral image BL(x, y) to categories C−3 to C−1 and the right integral image BR(x, y) to categories C1 to C3.
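A minimal sketch of the bidirectional integral images in Eq 4, assuming the orientation convention described above (accumulation from the left-top and from the right-top corner, respectively); the helper names are illustrative:

```python
import numpy as np

def left_integral_image(img):
    """B_L(x, y): accumulate from the left-top toward the right-bottom (Eq 4)."""
    return np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1)

def right_integral_image(img):
    """B_R(x, y): accumulate from the right-top toward the left-bottom (Eq 4)."""
    flipped = img[:, ::-1]                        # mirror the columns
    return left_integral_image(flipped)[:, ::-1]  # accumulate, then mirror back

img = np.random.randint(0, 256, size=(80, 80))
B_L, B_R = left_integral_image(img), right_integral_image(img)
```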

Fig 2. The saturated pattern of integral images from right side illumination.

https://doi.org/10.1371/journal.pone.0138859.g002

The bidirectional integral images BL(x, y) and BR(x, y) of a single person are insufficient for extracting information about the illumination condition because, in addition to the illumination information, they also contain features that are partially unique to each individual. To extract the characteristics of the illumination conditions, we need to eliminate the influence of the features that are innate to each individual. For this, we performed the following steps. Firstly, to avoid the influence of overall skin tone and race, we normalized the integral images of each face image to have values ranging between 0 and 1:

B̄L(x, y) = BL(x, y) / max(BL),  B̄R(x, y) = BR(x, y) / max(BR), (5)

where max(·) denotes the maximum value of the corresponding integral image.

Secondly, we defined the average integral images B̄L^l and B̄R^l for the category Cl as follows:

B̄L^l(x, y) = (1/M) ∑m B̄L,m^l(x, y),  B̄R^l(x, y) = (1/M) ∑m B̄R,m^l(x, y), (6)

where the subscript m (= 1, 2, .., M) denotes the mth individual in the category Cl. Since these average bidirectional integral images represent the general illumination characteristics of the category Cl, they can be applied regardless of the individual. The average bidirectional integral images in Fig 3a were made from the images belonging to each category Cl for 40 subjects in the CMU-PIE database. Table 1 shows the face images used to obtain the average bidirectional integral images for each category Cl.
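A minimal sketch of Eqs 5 and 6: each integral image is normalized and the normalized images of the M subjects in a category are averaged. The division-by-maximum normalization and the helper names are assumptions made for illustration, not the authors' exact formulation:

```python
import numpy as np

def left_integral_image(img):
    return np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1)

def normalized_integral(img, integral_fn=left_integral_image):
    B = integral_fn(img)
    return B / B.max()                       # Eq 5 (assumed: scale to (0, 1])

def average_integral(images, integral_fn=left_integral_image):
    """images: face images of the M subjects in one category Cl (Eq 6)."""
    return np.mean([normalized_integral(im, integral_fn) for im in images], axis=0)

# Usage: avg_l = average_integral(category_images[l]) for each category Cl.
```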

Table 1. Classification for illumination of frontal face in CMU-PIE database.

https://doi.org/10.1371/journal.pone.0138859.t001

Fig 3. Feature generation.

(a) average bidirectional integral images. (b) bidirectional integral features (BIF).

https://doi.org/10.1371/journal.pone.0138859.g003

On the other hand, B̄L^l and B̄R^l contain the overall illumination information for the category Cl, which includes not only the distinctive characteristics of Cl but also the information about the ambient illumination (here, frontal illumination). To counteract the effects of the ambient illumination in B̄L^l and B̄R^l, we extracted the bidirectional integral features (FL^l, FR^l) by dividing B̄L^0 and B̄R^0 into B̄L^l and B̄R^l, respectively, as follows:

FL^l(x, y) = B̄L^l(x, y) / B̄L^0(x, y),  FR^l(x, y) = B̄R^l(x, y) / B̄R^0(x, y). (7)

We call these features BIF, which are used to generate new images in the following section. Fig 3b shows the BIF for each category.
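A minimal sketch of the BIF extraction in Eq 7, assuming avg_l and avg_0 are the average integral images of categories Cl and C0 from Eq 6 (the epsilon guard is an added assumption for numerical safety):

```python
import numpy as np

def extract_bif(avg_l, avg_0, eps=1e-8):
    """F^l(x, y): average integral image of Cl divided element-wise by that of C0 (Eq 7)."""
    return avg_l / (avg_0 + eps)
```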

Generation of New Images from a Single Face Image

In face recognition, since illumination variation is not closely related to the identity of a face, it acts like noise that hampers the recognition process. We assume that an original image taken under frontal illumination (an image in C0) is corrupted by illumination variation to become a shadowed face image. We represent the corrupted image (ICor) in terms of the original image (IOri) by using the following noise model [27]:

ICor(x, y) = c(x, y) IOri(x, y) + b(x, y), (8)

where c(x, y) and b(x, y) are the contrast and brightness factors at (x, y), respectively.

In Eq 7, the BIF represent the illumination characteristics of Cl relative to those of C0. By treating FL^l and FR^l as the changes of contrast caused by the illumination variation, we generate new face images (Il, l ≠ 0), which correspond to Cl, from the image taken under frontal illumination (I0) as follows:

Il(x, y) = F^l(x, y) I0(x, y), (9)

where F^l = FL^l for l < 0 and F^l = FR^l for l > 0.
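A minimal sketch of the image generation in Eq 9: the BIF of a category acts as a pixel-wise contrast map applied to the frontal-illumination image (cf. Eq 8 with b(x, y) = 0). The clipping to the 8-bit range is an added assumption:

```python
import numpy as np

def generate_image(I0, bif_l):
    """I0: face image under frontal illumination; bif_l: BIF for category Cl (Eq 7)."""
    out = I0.astype(np.float64) * bif_l      # Eq 9: pixel-wise contrast change
    return np.clip(out, 0, 255).astype(np.uint8)
```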

The procedure of the proposed method is summarized as follows:

  • Step 1: Define the categories of light direction Cl.
  • Step 2: After obtaining the normalized bidirectional integral images (B̄L,m^l, B̄R,m^l) for each category Cl, compute the average integral images (B̄L^l, B̄R^l).
  • Step 3: Extract the BIF (FL^l, FR^l) by dividing B̄L^0 and B̄R^0 into B̄L^l and B̄R^l, respectively.
  • Step 4: Generate new face images for each category Cl from I0 by using Eq 9.

Once the BIF are obtained for each category Cl, they can be used to generate new face images for any individual because the BIF depend solely on the direction of the light source. Fig 4 shows the overall procedure of the proposed method, including the images at each step: a single collected face image (under frontal illumination), the BIF for each light category Cl, and the generated face images for each category.

Experimental Results

Image Generation

To see how well the proposed method generates new images under different illumination variations, we compared the images generated by the proposed method with the raw images from the CMU-PIE database, which were actually taken under different illumination conditions.

We selected 40 subjects, with 7 images per subject under different illumination variations (‘27_04′, ‘27_05′, ‘27_06′, ‘27_11′, ‘27_12′, ‘27_14′, and ‘27_15′), one for each light category C−3 to C3 (Fig 5a), to obtain the BIF (see Fig 3). Fig 5b shows the images generated from a single image by using the proposed method. As can be seen in Fig 5a and 5b, the proposed method generated new images for each category Cl as if they had been taken under different illumination conditions.

Fig 5. Image generation from a single image (I0) by using BIF.

(a) images for each light category (C−3 to C3). (b) generated images.

https://doi.org/10.1371/journal.pone.0138859.g005

We compared the proposed method to other methods dealing with the SSPP problem, namely ICR [15], E(PC)2A+ [14], SPCA+ [13], and SLC [17]. For 7 subjects in the CMU-PIE database, after generating images from a single image taken under a normal condition (frontal illumination) with each method, we plotted the image samples in the two-dimensional discriminative common vector (DCV) feature space [4]. Compared with the distribution of the raw images in the CMU-PIE database (Fig 6a), the distribution of the images generated by the proposed method is the most similar to that of the raw images: samples of the same subject are clustered closely, and there is less overlap between samples of different subjects. Meanwhile, in the distributions produced by the other methods, some of the samples belonging to the same subject are widely scattered, and some samples belonging to different subjects overlap with each other (Fig 6c–6f).

Fig 6. Sample distributions of 7 subjects generated from a single image in the two-dimensional discriminative common vector (DCV) feature space.

(a) Raw. (b) BIF (Proposed method). (c) ICR. (d) E(PC)2A+. (e) SPCA+. (f) SLC.

https://doi.org/10.1371/journal.pone.0138859.g006

Face Recognition

To demonstrate the effectiveness of the proposed method, we evaluated face recognition performance on the CMU-PIE, Yale B [28], 3D [29, 30], and Yale [31] databases. The characteristics of the databases are presented in Table 2. For each database, we compared the face recognition performance of the proposed method with that of the above methods, namely ICR, E(PC)2A+, SPCA+, and SLC. The center of each eye was manually located in all of the images, and the eyes were then aligned horizontally by rotation, as in [32]. All of the face images were cropped and rescaled to 80 × 80 pixels so that the centers of the eyes were located at fixed positions. Then, histogram equalization [32] was applied to the resized images. The one-nearest-neighbor (1-NN) rule with the l2 norm was used as the classifier. As the feature extraction method for recognition, DCV was used for SLC and the proposed method, while PCA was used for ICR, E(PC)2A+, and SPCA+ because they are motivated by the PCA method.
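For concreteness, the preprocessing and matching described above can be sketched with OpenCV and NumPy as follows (a simplified illustration; the eye-anchored cropping is approximated by a plain resize, and the helper names and parameters are assumptions):

```python
import numpy as np
import cv2

def preprocess(gray, left_eye, right_eye, size=80):
    """Rotate so the eyes are horizontal, rescale to size x size, and equalize the histogram."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    rotated = cv2.warpAffine(gray, M, (gray.shape[1], gray.shape[0]))
    face = cv2.resize(rotated, (size, size))   # stand-in for eye-anchored cropping
    return cv2.equalizeHist(face)

def nearest_neighbor(probe_feat, gallery_feats, gallery_labels):
    """1-NN classification with the l2 norm in the chosen feature space."""
    dists = np.linalg.norm(gallery_feats - probe_feat, axis=1)
    return gallery_labels[int(np.argmin(dists))]
```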

To evaluate face recognition performance under illumination variation, we experimented on the CMU-PIE, Yale B, and 3D databases. We selected 65 of the 68 subjects in the CMU-PIE database, each with 21 illumination variations; the remaining subjects were excluded because their images contained defects or some illumination-variation types were missing. Excluding the 40 subjects that were used to obtain the BIF, 25 subjects were used to evaluate the recognition rates. One image of each subject taken under frontal illumination (‘27_11′) was used for training, while the other 20 images were used for testing; there was no overlap between the training and testing sets. From the Yale B database, which consists of 10 subjects under 64 illumination variations, we used 45 face images of the subjects in the frontal pose (YaleB/Pose00). To evaluate the recognition rates, one image of each subject under frontal illumination (‘A+000E+00′) was selected for training, and the other images were used for testing. The 3D database consists of 106 subjects with 24 illumination variations, and each subject was captured in two sessions with an average gap of 60 days. In this experiment, we selected 72 subjects with 23 illuminations in session 2 because some subjects were missing or did not include all types of illumination variations. One image of each subject under frontal illumination (‘frame1′) was selected for training, while the other 22 images were used for testing. We then conducted experiments on the Yale database to evaluate face recognition performance under various other types of variation. The Yale database contains 165 grayscale images of 15 subjects with different illumination variations, facial expressions, and with or without eyeglasses. Among them, the image of each subject labeled “normal” was selected for the training set, while the others were used as the test set.

The proposed method generated 6 virtual images with different illuminations for each subject. In ICR, a different number of synthesized images is generated for each subject by using the inter-class relationship, depending on the database, because the number of nearest neighbors k differs in each database (k = 55, 46, 85, and 52 for the CMU-PIE, Yale B, 3D, and Yale databases, respectively). The E(PC)2A+ method generated 3 training images from the original image, corresponding to its 1/2-, first-, and second-order projected images. In SPCA+, 7 images per subject were generated for training, obtained by using different orders of singular values, and in SLC, 11 images per subject were added to the training set, corresponding to the symmetric images and the linear-combination virtual images.

Fig 7 shows that the proposed method outperformed the other methods on every database. For illumination variation, as shown in Fig 7a–7c, the proposed method gives recognition rates of 99.00%, 88.86%, and 99.05%, which are 4.00% ∼ 57.00%, 7.73% ∼ 27.27%, and 0.06% ∼ 15.40% higher than those of the other methods for the CMU-PIE, Yale B, and 3D face databases, respectively. Similarly, as shown in Fig 7d, the proposed method outperforms the other methods with a recognition rate of 86.67% in the presence of different types of variations.

Fig 7. Face recognition results.

(a) CMU-PIE database (b) Yale B database (c) 3D face database (d) Yale database.

https://doi.org/10.1371/journal.pone.0138859.g007

Conclusions

The number of images for each subject is an important factor that affects the recognition performance of appearance-based methods, which are widely used in face recognition. In practice, one frequently encounters the SSPP problem, i.e., a situation in which only a single stored sample per person is accessible, owing to issues such as the difficulty of collecting samples and limited storage capacity. In this paper, we proposed a novel method to generate new images from a single image to address the SSPP problem. We extracted the BIF, which reflect the characteristics of various illumination conditions, and produced new images with six different illumination variations from a single image taken under frontal illumination. The experimental results showed that the images generated by the proposed method are distributed similarly to real images taken under different illumination conditions. The proposed method thereby improved face recognition performance compared with the other methods on the CMU-PIE, Yale B, Yale, and 3D face databases, all of which contain images with various types of variations.

Acknowledgments

The present research was conducted by the research fund of Dankook University in 2013.

Author Contributions

Conceived and designed the experiments: YL SC. Performed the experiments: YL. Analyzed the data: YL SC. Contributed reagents/materials/analysis tools: YL. Wrote the paper: YL SC ML. Discussed idea in this paper: YL SC ML.

References

  1. Choi SI. Face recognition based on 2D images under various conditions. Seoul National University; 2010.
  2. Turk MA, Pentland AP. Face recognition using eigenfaces. In: Computer Vision and Pattern Recognition, 1991. Proceedings CVPR’91., IEEE Computer Society Conference on. IEEE; 1991. p. 586–591.
  3. Belhumeur PN, Hespanha JP, Kriegman D. Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. Pattern Analysis and Machine Intelligence, IEEE Transactions on. 1997;19(7):711–720.
  4. Cevikalp H, Neamtu M, Wilkes M, Barkana A. Discriminative common vectors for face recognition. Pattern Analysis and Machine Intelligence, IEEE Transactions on. 2005;27(1):4–13.
  5. Yu H, Yang J. A direct LDA algorithm for high-dimensional data—with application to face recognition. Pattern Recognition. 2001;34(10):2067–2070.
  6. Kim C, Choi CH. Image covariance-based subspace method for face recognition. Pattern Recognition. 2007;40(5):1592–1604.
  7. Jiang X, Mandal B, Kot A. Eigenfeature regularization and extraction in face recognition. Pattern Analysis and Machine Intelligence, IEEE Transactions on. 2008;30(3):383–394.
  8. Yan S, Liu J, Tang X, Huang TS. A parameter-free framework for general supervised subspace learning. Information Forensics and Security, IEEE Transactions on. 2007;2(1):69–76.
  9. Tan X, Chen S, Zhou ZH, Zhang F. Face recognition from a single image per person: A survey. Pattern Recognition. 2006;39(9):1725–1745.
  10. Lu J, Tan YP, Wang G. Discriminative multimanifold analysis for face recognition from a single training sample per person. Pattern Analysis and Machine Intelligence, IEEE Transactions on. 2013;35(1):39–51.
  11. De la Torre F, Gross R, Baker S, Kumar BV. Representational oriented component analysis (ROCA) for face recognition with one sample image per training class. In: Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on. vol. 2. IEEE; 2005. p. 266–273.
  12. Wu J, Zhou ZH. Face recognition with one training image per person. Pattern Recognition Letters. 2002;23(14):1711–1719.
  13. Zhang D, Chen S, Zhou ZH. A new face recognition method based on SVD perturbation for single example image per person. Applied Mathematics and Computation. 2005;163(2):895–907.
  14. Chen S, Zhang D, Zhou ZH. Enhanced (PC)2A for face recognition with one training image per person. Pattern Recognition Letters. 2004;25(10):1173–1181.
  15. Li Q, Wang HJ, You J, Li ZM, Li JX. Enlarge the Training Set Based on Inter-Class Relationship for Face Recognition from One Image per Person. PLoS ONE. 2013;8(7):e68539. pmid:23874661
  16. Martínez AM. Recognizing imprecisely localized, partially occluded, and expression variant faces from a single sample per class. Pattern Analysis and Machine Intelligence, IEEE Transactions on. 2002;24(6):748–763.
  17. Zhang T, Li X, Guo RZ. Producing virtual face images for single sample face recognition. Optik-International Journal for Light and Electron Optics. 2014;125(17):5017–5024.
  18. Yang Q, Ding X. Symmetrical PCA in face recognition. In: Image Processing. 2002. Proceedings. 2002 International Conference on. vol. 2. IEEE; 2002. p. II–97.
  19. Xu Y, Zhu X, Li Z, Liu G, Lu Y, Liu H. Using the original and ‘symmetrical face’ training samples to perform representation based two-step face recognition. Pattern Recognition. 2013;46(4):1151–1158.
  20. Viola P, Jones MJ. Robust real-time face detection. International Journal of Computer Vision. 2004;57(2):137–154.
  21. Fukunaga K. Introduction to statistical pattern recognition. Access Online via Elsevier; 1990.
  22. Choi SI, Oh J, Choi CH, Kim C. Input variable selection for feature extraction in classification problems. Signal Processing. 2012;92(3):636–648.
  23. Oh J, Choi SI, Kim C, Cho J, Choi CH. Selective generation of Gabor features for fast face recognition on mobile devices. Pattern Recognition Letters. 2013;34(13):1540–1547.
  24. Choi SI, Jeong GM, Kim C. Classification of odorants in the vapor phase using composite features for a portable e-nose system. Sensors. 2012;12(12):16182–16193. pmid:23443373
  25. Kim C, Choi SI, Turk M, Choi CH. A new biased discriminant analysis using composite vectors for eye detection. Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on. 2012;42(4):1095–1106.
  26. Sim T, Baker S, Bsat M. The CMU pose, illumination, and expression database. Pattern Analysis and Machine Intelligence, IEEE Transactions on. 2003;25(12):1615–1618.
  27. Jung HC, Hwang BW, Lee SW. Authenticating corrupted face image based on noise model. In: Automatic Face and Gesture Recognition, 2004. Proceedings. Sixth IEEE International Conference on. IEEE; 2004. p. 272–277.
  28. Georghiades AS, Belhumeur PN, Kriegman D. From few to many: Illumination cone models for face recognition under variable lighting and pose. Pattern Analysis and Machine Intelligence, IEEE Transactions on. 2001;23(6):643–660.
  29. Mian A. Illumination invariant recognition and 3D reconstruction of faces using desktop optics. Optics Express. 2011;19(8):7491–7506. pmid:21503057
  30. Mian AS. Shade face: multiple image-based 3D face recognition. In: Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on. IEEE; 2009. p. 1833–1839.
  31. Georghiades A. Yale face database. Center for Computational Vision and Control at Yale University [Online]. Available: http://cvc.yale.edu/projects/yalefaces/yalefaces.html. 1997.
  32. Choi SI, Choi CH, Jeong GM, Kwak N. Pixel selection based on discriminant features with application to face recognition. Pattern Recognition Letters. 2012;33(9):1083–1092.