The authors have declared that no competing interests exist.
Reviewed the manuscript: IAT MWA XW. Conceived and designed the experiments: UIB. Performed the experiments: UIB. Analyzed the data: UIB IAT. Wrote the paper: UIB IAT MWA XW.
Face recognition has emerged as the fastest growing biometric technology and has expanded a lot in the last few years. Many new algorithms and commercial systems have been proposed and developed. Most of them use Principal Component Analysis (PCA) as a base for their techniques. Different and even conflicting results have been reported by researchers comparing these algorithms. The purpose of this study is to have an independent comparative analysis considering both performance and computational complexity of six appearance based face recognition algorithms namely PCA, 2DPCA, A2DPCA, (2D)^{2}PCA, LPP and 2DLPP under equal working conditions. This study was motivated due to the lack of unbiased comprehensive comparative analysis of some recent subspace methods with diverse distance metric combinations. For comparison with other studies, FERET, ORL and YALE databases have been used with evaluation criteria as of FERET evaluations which closely simulate real life scenarios. A comparison of results with previous studies is performed and anomalies are reported. An important contribution of this study is that it presents the suitable performance conditions for each of the algorithms under consideration.
Due to growing requirements of noninvasive recognition systems, Face Recognition has recently become a very popular area of research. A variety of algorithms for face recognition have been proposed and a few evaluation methodologies have also been used to evaluate these algorithms. However, current systems still need to be improved to be practically implementable in real life problems.
A recent comprehensive study
Another recent and robust face recognition algorithm
A large variety of subspace face recognition algorithms have been proposed in different studies including some recently proposed methods. An interesting observation about these studies is that each proposed method claims to give the best recognition rates. However, since every study use their own datasets and implementation parameters specifically designed to highlight their own performance, individual performance analysis are misleading. Therefore it is of great significance that an unbiased comparative analysis of these algorithms under equal and testing working conditions is done. The evaluation methodology is therefore very important and it should be designed to simulate the real world problems. It is very difficult to find such comprehensive evaluation methodologies in the literature, the only exemplary evaluation method being that of the FERET evaluations run by National Institute of Standards and Technology (NIST)
A comparative analysis should be fair not only in terms of the databases and testing methodology but also in terms of operating conditions such as trying a complete group of classifiers for all candidate subspace methods. Trying different classifiers/distance metrics may actually bring out the strengths of a subspace projection algorithm, which may not be visible on a single metric. However, very few studies been directed towards comparative analysis of subspace based algorithms and even fewer studied the effect of different distance metrics on the algorithms for their comparison.
One of the early studies
This study was motivated due to the lack of comprehensive comparative study of many subspace methods with many distance metric combinations. Comparative studies found in the literature are limited in their scope in terms of the testing methodology and the number of test vectors and test parameters being used in the analysis. This study, unlike earlier studies, compares different algorithms based on theoretical aspects, such as resultant data structure sizes and algorithm complexity, as well as recognition rates on different facial databases. Three different databases have been used, namely FERET, YALE
Six subspace projection methods have been included in the comparison, which are evaluated using four distance metrics. These methods include, 1DPCA
The rest of the paper is organized as follows: Section 2 describes the subspace algorithms under consideration, Section 3 explains the evaluation methodology followed, Section 4 presents the results and related discussion and section 5 concludes the whole study and proposes future work to be done.
Three basic steps of recognition system are training, projection, and recognition. During the training phase, the basis vectors of the subspace for each algorithm are calculated and saved. During projection, these basis vectors are loaded and then all the database images are projected onto these basis vectors, which convert them to the lower dimensional subspace. These projected images are saved as templates to be later used in distance calculation in the recognition phase. The whole process is shown in
Since all the algorithms used in this study are well known, they will be described briefly for the sake of completeness. These algorithms are referred to as subspace methods because they project the images to lower dimensional space to perform recognition task which is not computationally feasible to be done in high dimensional space. These algorithms retain maximum possible discriminative information while converting the images to lower dimensional space. Their property of retaining maximum discriminative information is what prioritizes them over each other.
Matrix Dimensions/Size  Algorithm Complexity  
Algorithm  Training Images  Covariance Matrix  Projection Matrix  Projected Images/Templates  Training Time  Testing Time  Memory Space 

mn x N  NxN  mn x d  d x M  O(m^{2}n^{2}d)  O(MNd)  O(m^{2}n^{2}) 

m x n x N  n x n  n x d  m x d x M  O(n^{2}d)  O(mMNd)  O(n^{2}) 

m x n x N  m x m  m x d  n x d x M  O(m^{2}d)  O(nMNd)  O(m^{2}) 

m x n x N  n x n & m x m  n x d_{1}& m x d_{2}  d_{2} x d_{1} x M  O(n^{2}d_{1}+ m^{2}d_{2})  O(d_{2}MNd_{1})  O(m^{2}+ n^{2}) 

mn x N  NxN (at PCA step)  mn x d_{LPP}  d_{LPP} x M  O(m^{2}n^{2}d + mnN^{2})  O(MNd)  O(m^{2}n^{2}) 

m x n x N  N/A  n x d  m x d x M  O(n^{2}d + mnN^{2})  O(mMNd)  O(n^{2}) 
Principal Component Analysis (PCA)
Suppose there are
In the projection phase the desired
In 2D PCA
Suppose there are
In case of A2DPCA, it works in column direction of images; therefore the difference is in calculating the image covariance matrix
Therefore for A2DPCA, the projection matrix
As discussed above, 2DPCA and A2DPCA preserve the variance between rows and between columns of the image respectively. The disadvantage of 2DPCA and A2DPCA is that they have a relatively bigger template size as compared to that of PCA which is evident from
Suppose there are
In the projection phase the two dimensional images
Laplacianfaces (LPP) algorithm
In
In the projection phase, using
Two Dimensional Laplacianfaces (2DLPP)
The
The evaluation methodology followed in this study is explained by addressing the training and projection method and the testing variables used in the evaluation. A MATLAB based evaluation platform that is constructed as a result of this study is also described.
Four basic modules of the evaluation methodology include Training, Projection, Distance Calculation and Result Calculation as shown in
For FERET, the training, gallery and probe sets are already defined by FERET evaluation tests
Prior to training, FERET and Yale images have been preprocessed by first alignment using eye coordinates to compensate head tilt, then illumination adjustment using histogram equalization, then cropping using an elliptic mask so that only face is visible, and finally resizing to 150×130 pixels. ORL isn’t processed because it has minimal background variation and limited head tilt. In case of FERET the eye coordinate file is supplied along with the database. For YALE database, eye coordinates are manually selected and a similar eyecoordinates file is maintained.
During the training phase, the projection matrix is trained using the images from the training image list of a particular database by the projection algorithm to be evaluated. The size of the projection matrix is determined by the retained percentage of basis vectors.
In the projection phase, the images listed in the “all” image list of the specific database are projected onto the face subspace using the projection matrix and saved as the output of this phase. The training and projection operation along with the rest of operations is shown in
In distance calculation phase, the distances between a projected probe image and all other projected images in the gallery are calculated and written in a file named after the name of the projected image. The same is repeated for every projected image against the distance metric of our choice. These distance files are later used in the result calculation phase.
In the result calculation phase, the gallery and probe image lists are read and the distance file for each probe image is loaded to check if the closest match is among the images named in gallery list. Here the match scores are calculated against each Rank. Rank 1 means the first match and Rank 50 means 50^{th} match. The results are calculated for all the probe sets and saved.
Databases  FERET  ORL  YALE  
Probe Sets  fafb  fafc  dup1  dup2  probe  probe  
Algorithms  PCA  2DPCA  A2DPCA  (2D)^{2}PCA  LPP  2DLPP  
Distance Metrics  Euclidean  Cosine  Mahalanobis  Mahalanobis Cosine 
Three databases are selected for our comparative study, namely FERET, YALE and ORL. The description and reasons for choosing these databases is given in the following paragraphs.
FERET database has been extensively used by FERET evaluation tests, face recognition vendor tests (FRVT) and by many researchers for different research algorithms as well as commercial face recognition systems
Evaluation Against  Probe Set Names  No. of Gallery Images  No. of Images in Probe Set 
FERET Database  
Expression  Fafb  1196  1195 
Illumination  fafc  1196  194 
Aging  dup 1  1196  722 
Aging  dup2  1196  234 

501  

1196  

1 to 25  

3368  
ORL Database  
General Evaluation  probe  40  200 

200  

40  

10  

400  
YALE Database  
General Evaluation  probe  15  75 

90  

15  

11  

165 
The ORL database
The YALE database
Four distance metrics are chosen including Euclidean (L2) and Cosine for the image space and their counter parts in Mahalanobis space, Mahalanobis (L2) & Mahalanobis Cosine. These metrics are referred throughout the study as Euc, Cos, Maha and MahCos, respectively. The Mahalanobis space based distance metrics are applied by transforming the templates from image space to Mahalanobis space. For each vector pair u and v in image space the transformed vector pair m and n in Mahalanobis space is given as in
For the sake of completeness, mathematical description of each distance metric is given below.
Higher similarity means higher score in this case; therefore the actual distance is calculated by subtracting the above calculated value from 1 as in
Similar to Mahalanobis, the actual distance is calculated by subtracting the above calculated value from 1 as in
As discussed in section 3.2.2, it is important to compute the standard deviation/spread to be used in calculating Mahalanobis space based distance metrics. The variance of the face data along its principal component directions is determined by the Eigen values of the image covariance matrix along all the dimensions. Therefore the spread in a specific dimension will be the square root of the Eigen value corresponding to that dimension. For PCA based algorithms, the Eigen values of the initial covariance matrix can be used as the spread at later stage to calculate Mahalanobis based distance metrics but for Laplacianfaces the Eigen values of initial covariance doesn’t represent the actual spread of Laplacian projected images. Therefore the Eigen values of the covariance matrix (of projected images) have to be calculated, to finally get to the spread.
It has been confirmed that the Eigen values of the initial covariance matrix and the Eigen values of the covariance matrix of projected images are same. An exception exists for 1D PCA, if the vectors of projection matrix/basis vectors are not normalized then the spread of projected images is square of the spread of training images. Therefore basis vectors in 1D PCA are normalized before further usage.
For the sake of similarity and generalization in the platform code, in case of 2D algorithms, the projected images are reshaped into vectors first. It is confirmed that it yields the same result either two dimensional projected images are used directly or if they are reshaped into vectors first.
As a part of this study, a MATLAB based platform FaceRecEval has also been implemented which serves the purpose of evaluating and comparing different algorithms. This platform is developed being inspired by the CSUFaceIdEval System
All the main functionalities described in section 3.1 including training, projection, distance calculation and result calculation are incorporated in form of modules. The result calculation module calculates the results as described at start of this section. The reason behind projecting all the images and calculating the distances between all projected images is to accommodate any changes in gallery and probe image lists, because no rework prior to this module will be needed.
For the sake of completion and to avoid confusion due to diversity of testing parameters, the results and discussions have been grouped based on recognition tasks, facial databases, distance metrics, algorithms, memory and computational complexities and comparison to previous work.
FERET Database  FAFC  % of basis vectors  FAFB  % of basis vectors  DUP1  % of basis vectors  DUP2  % of basis vectors  
Algorithm  Classifier  5  10  25  50  75  5  10  25  50  75  5  10  25  50  75  5  10  25  50  75 
PCA  Cos  0.05  0.07  0.09  0.10 

0.79  0.83  0.86 

0.87  0.39  0.44 

0.47  0.47  0.22  0.26 

0.28  0.27 
Euc  0.06  0.08  0.10 

0.11  0.79  0.83  0.86 

0.87  0.39  0.44 

0.46  0.46  0.22  0.26 

0.26  0.26  
Maha  0.21  0.41  0.47 

0.47  0.77  0.86 

0.84  0.78  0.43 

0.48  0.39  0.30  0.24 

0.27  0.20  0.16  
MahCos  0.18  0.40  0.51 

0.52  0.77  0.86 

0.91  0.86  0.44  0.53 

0.49  0.37  0.23  0.31 

0.28  0.21  
2DPCA  Cos  0.13  0.19 

0.21  0.15  0.84 

0.82  0.69  0.57  0.36 

0.38  0.27  0.19 

0.13  0.12  0.07  0.05 
Euc  0.08  0.10 

0.10  0.09  0.90 

0.91  0.89  0.87  0.49 

0.47  0.43  0.41 

0.29  0.25  0.21  0.18  
Maha  0.07  0.11 

0.06  0.03 

0.90  0.76  0.51  0.34 

0.45  0.35  0.18  0.10 

0.23  0.13  0.04  0.01  
MahCos  0.07  0.10 

0.09  0.06  0.89 

0.81  0.63  0.49 

0.45  0.38  0.23  0.15 

0.20  0.13  0.05  0.03  
A2DPCA  Cos  0.14  0.32 

0.47  0.38 

0.82  0.79  0.71  0.67  0.44 

0.53  0.44  0.37  0.28  0.42 

0.34  0.24 
Euc  0.09  0.16 

0.16  0.15  0.89 

0.90  0.89  0.87  0.46  0.51 

0.48  0.45  0.27  0.34 

0.30  0.26  
Maha  0.13  0.25 

0.12  0.06  0.87 

0.80  0.51  0.31  0.45 

0.45  0.20  0.10  0.28 

0.32  0.08  0.01  
MahCos  0.13  0.26 

0.13  0.06  0.87 

0.81  0.55  0.34  0.45 

0.46  0.23  0.11  0.28 

0.33  0.10  0.03  
(2D)2PCA  Cos  0.07  0.15 

0.23  0.15  0.78  0.82 

0.69  0.58  0.28  0.37 

0.28  0.19  0.12 

0.13  0.08  0.05 
Euc  0.05  0.09 

0.10  0.09  0.87  0.90 

0.89  0.87  0.43  0.48 

0.44  0.41  0.24 

0.26  0.21  0.19  
Maha  0.11 

0.21  0.04  0.01  0.87 

0.68  0.27  0.11  0.44 

0.32  0.08  0.03  0.25 

0.16  0.01  0.00  
MahCos  0.11  0.24 

0.17  0.09  0.87 

0.79  0.52  0.36  0.44 

0.39  0.19  0.11  0.25 

0.20  0.05  0.02  
LPP  Cos  0.11  0.12  0.29  0.40 

0.39  0.52  0.71  0.80 

0.19  0.26  0.33 

0.38  0.07  0.11  0.15  0.17 

Euc  0.13  0.12  0.26  0.32 

0.35  0.48  0.61  0.66 

0.17  0.23 

0.25  0.22  0.06  0.08  0.10  0.11 


Maha  0.03  0.03  0.05  0.06 

0.15  0.17  0.18  0.20 


0.10  0.10  0.07  0.06  0.05  0.05  0.07  0.08 


MahCos  0.02  0.03  0.06  0.08 

0.16  0.18  0.20  0.20 

0.11  0.12 

0.13  0.12  0.04  0.05  0.08  0.10 


2DLPP  Cos  0.16  0.21  0.26  0.27 

0.36  0.45  0.46  0.47 

0.08  0.12  0.12  0.13 

0.02 

0.03  0.02  0.02 
Euc  0.01 

0.02  0.02  0.02  0.21 

0.27  0.27  0.26  0.03 

0.05  0.05  0.05 

0.01  0.01  0.01  0.01  
Maha  0.11 

0.09  0.11  0.08  0.76  0.75  0.79 

0.78  0.39  0.36 

0.41  0.41  0.27  0.23  0.29  0.27 


MahCos  0.09  0.09  0.11  0.14 

0.79 

0.80  0.80  0.79  0.43  0.43 

0.43  0.43  0.31  0.28  0.31 

0.32 
YALE Database  % of basis vectors  ORL Database  % of basis vectors  
Algorithm  Classifier  5  10  25  50  75  Algorithm  Classifier  5  10  25  50  75 
PCA  Cos  0.62  0.70  0.73 

0.74  PCA  Cos  0.65  0.68 

0.68  0.67 
Euc  0.59  0.68 

0.72  0.72  Euc  0.68  0.70 

0.70  0.69  
Maha  0.56 

0.69  0.65  0.55  Maha  0.67 

0.65  0.46  0.35  
MahCos  0.61 

0.73  0.72  0.62  MahCos  0.66 

0.64  0.49  0.39  
2DPCA  Cos 

0.66  0.61  0.59  0.57  2DPCA  Cos 

0.60  0.54  0.47  0.42 
Euc 

0.76  0.75  0.74  0.74  Euc 

0.76  0.75  0.71  0.68  
Maha 

0.73  0.64  0.62  0.59  Maha 

0.70  0.55  0.31  0.20  
MahCos 

0.73  0.66  0.63  0.61  MahCos 

0.68  0.53  0.38  0.32  
A2DPCA  Cos 

0.71  0.61  0.60  0.57  A2DPCA  Cos  0.65 

0.60  0.52  0.48 
Euc 

0.77  0.74  0.73  0.72  Euc  0.73 

0.74  0.71  0.69  
Maha 

0.73  0.60  0.53  0.45  Maha  0.73 

0.61  0.32  0.18  
MahCos 

0.73  0.61  0.53  0.43  MahCos  0.73 

0.61  0.46  0.34  
(2D)2PCA  Cos 

0.67  0.61  0.59  0.57  (2D)2PCA  Cos  0.55 

0.54  0.46  0.42 
Euc 

0.77  0.75  0.75  0.74  Euc  0.74 

0.75  0.71  0.68  
Maha 

0.70  0.51  0.53  0.49  Maha 

0.69  0.42  0.13  0.08  
MahCos 

0.71  0.55  0.56  0.54  MahCos 

0.70  0.54  0.35  0.29  
LPP  Cos  0.40  0.42  0.46  0.47 

LPP  Cos  0.23  0.26  0.28  0.25 

Euc  0.25  0.45 

0.44  0.45  Euc  0.17  0.18  0.30  0.26 


Maha  0.08  0.09  0.10  0.10 

Maha  0.05  0.05  0.07  0.08 


MahCos  0.09  0.12  0.13  0.12 

MahCos  0.05  0.06  0.08  0.08 


2DLPP  Cos  0.36  0.40  0.47 

0.48  2DLPP  Cos  0.21  0.22  0.21  0.24 

Euc  0.49  0.48  0.53 

0.53  Euc  0.17 

0.18  0.16  0.17  
Maha 

0.69  0.69  0.68  0.68  Maha  0.58  0.54  0.56  0.64 


MahCos  0.72  0.73 

0.74  0.72  MahCos  0.58  0.63  0.67  0.67 

Starting with algorithm performance against illumination variations, it can be noted from
Two dimensional PCA algorithms perform relatively better when 25% of basis vectors are retained. LPP along with simple Cosine and Euclidean distance metrics achieves good recognition rates against this task but it is while retaining the highest percentage of basis vectors. PCA along with Mahalanobis distance variants generally perform the best for different percentages of retained basis vectors against this task. The best performing algorithms for this task are PCAMahCos, with 50% retained basis vectors, and A2DPCACos, with 25% retained basis vectors.
The FAFB set is used to evaluate performance of an algorithm against change in expression. This is the easiest task with highest recognition rates as evident from
Dup 1 and Dup 2 are the two sets provided to test the performance of algorithms against temporal changes. Dup 2 being the harder task has lower recognition rates as compared to that of Dup 1. PCA based algorithms perform generally better for both Dup 1 and Dup 2, as compared to LPP based algorithms. The best performing algorithm is A2DPCACos, with 25% retained basis vectors, for both Dup 1 and Dup 2 sets.
PCAMahCos and A2DPCACos are generally the best performers on the FERET database as they each achieve the top recognition rates in three out of four of the face recognition tasks. They perform well on YALE and ORL database too, but the top recognition rates are achieved by 2DPCAEuc on ORL and 2D^{2}PCAMahCos on YALE.
For FERET, the best algorithms that perform equally well on all probe sets are PCAMahCos and A2DPCACos. For YALE, the best performing algorithm is 2D^{2}PCAMahCos. PCAMahCos and A2DPCACos along with 2DPCAEuc are close too. ORL images include slight pose variations and here the best performing algorithm is 2DPCAEuc. Other algorithms close in performance are 2D^{2}PCAEuc and A2DPCAEuc. The algorithms performing the best on average over all databases are A2DPCACos and PCAMahCos.
Though variants of Mahalanobis distance metric did not work well with 1D LPP on all three databases, yet they perform well with all PCA based algorithms and 2D LPP for all face recognition tasks on all databases. The need for experimenting with variants of Mahalanobis distance metrics was pointed out in
It is worth noting that the Euclidean distance metric works well against the expression task which actually leads to the local geometrical distortions in a facial image. On the other hand, Cosine distance metric which is close to the correlation of image vectors, works well against illumination changes which are nongeometrical distortions. This general trend is evident from the results in
It should be noted that there is quite a lot of variations in the performance of different algorithms and thus in the performance ranking for different type of datasets. 2D^{2}PCA generally gives the highest recognition rates on both YALE and ORL database as well as for the expression test set on FERET. PCA recognition rates are highest for FERET database. A2DPCA is on average the best algorithm over all the three databases. The reason is, because this algorithm works along the rows of images. All the images of the three databases have more rows than columns, therefore this algorithm had chance to retain more information as compared to 2DPCA which works along columns. For the same number of retained basis vectors, A2DPCA consumes lesser testing time as compared to 2DPCA, because length of rows is lesser than the length of columns. To conclude, PCA based algorithms perform the best overall on all the three databases, though 2DPCA based algorithms give better recognition rates than PCA on average but with bigger template sizes. Another thing worth noting from
The sizes of covariance matrix, projection matrix, templates and the time and memory complexity of each algorithm are summarized in
The training time complexity depends upon both the size of the covariance matrices and the number of retained basis vectors. Therefore for PCA it is O(m^{2}n^{2}d) and for 2DPCA it is O(n^{2}d) due to a smaller covariance matrix. The A2DPCA has O(m^{2}d) because it works along the columns and for 2D^{2}PCA it is O(n^{2}d_{1}+ m^{2}d_{2}) because it has to calculate two covariance matrices, one along rows and other along columns. For LPP and 2DLPP an extra cost O(mnN^{2}) to construct the adjacency matrix
The testing time is calculated by the number of tests to perform and the time complexity for each test. This time also reflects the computational complexity during recognition which is very critical especially for identification systems. This turns out to be O(MN) for the number of tests, and time complexity for each test is O(d) for one dimensional algorithms and O(md) for two dimensional algorithms. So PCA and LPP have the time complexity of O(MNd), 2DPCA and 2DLPP have O(mMNd), A2DPCA has O(nMNd) and 2D^{2}PCA has O(d_{2}MNd_{1}).
The memory cost depends on the size of the covariance matrices. Therefore for PCA and LPP it is O(m^{2}n^{2}), for 2DPCA and 2DLPP it is O(n^{2}) and for A2DPCA it is O(m^{2}). The 2D^{2}PCA algorithm has a memory cost O(m^{2}+ n^{2}) due to the fact that it calculates two Eigen equations.
To summarize the above discussion, it is obvious that PCA variants are computationally efficient as compared to LPP variants. In an identification system the training and projection is usually done offline, while the distance calculation and recognition is done online, mostly real time, which has critical timing constraints. The above analysis shows that the training time and memory space complexity for 1D PCA, which generally demonstrates better recognition rates, is higher due to bigger covariance matrix. However it is very efficient at matching stage due to smaller template size and thus suitable for identification systems. On the other hand A2DPCA is efficient during training due to smaller covariance matrix but has a bigger template size and needs more online processing time during recognition as compared to PCA. 2D^{2}PCA on average has a comparatively smaller template size and it is also efficient during matching, therefore it is the most efficient in both respects among the two dimensional PCA algorithms.
For comparing the results of this study, similar studies which have used one or more of the algorithm and distance metric combinations are considered here. Variation of results as compared to previous studies can be attributed to different preprocessing technique and the standard testing methodology not used by most of these studies. But as this study is an independent comparative analysis, it serves the purpose.
Regarding FERET evaluation methodology tasks, we found that the FAFB task is the easiest with highest recognition rates which is consistent with
PCA with variants of Mahalanobis based distance metrics is experimented as more investigation was recommended by
While using Euclidean as a distance metric, the recognition rates of all the two dimensional PCA algorithms on all three databases are pretty close to each other which is in agreement with
Regarding the 2DLPP algorithm, our results are not in agreement with
The aim of this study was to independently compare and analyze the relative performance of famous subspace face recognition algorithms against the same working conditions. As mentioned in the testing methodology section, we have followed the FERET evaluations methodology which closely simulates real life scenarios. Six popular subspace face recognition algorithms were tested accompanied with four popular distance metrics.
An important and novel contribution of this study is that it introduced an unbiased comparative analysis of popular subspace algorithms under equal and testing working conditions, such as same preprocessing steps, same testing criteria, same testing and training sets and also introduced the favorable performance conditions for each of these algorithms. After thorough experimentations it was shown that Algorithm 1D PCA performed best with distance metric MahalanobisCosine, and 2DPCA variants and 1D LPP performed generally much better with simple Euclidean and Cosine distance metrics. Similarly 2DLPP performed much better with distance metrics Mahalanobis and MahalanobisCosine. In addition to this it was shown that Cosine based distance metrics, MahCos and Cos, gave better results than Euclidean based metrics. The algorithmmetric combination of PCAMahCos was clearly ahead in performance under difficult conditions of illumination changes. As evident from
A thorough computational complexity analysis was also performed on the subject algorithms. It was shown that though 2D algorithms have lower complexity during training, they need more computations during recognition which is critical for identification systems. On the other hand 1D algorithms have higher computational complexity during training but generally require less computations during recognition stage.
It was also noted that the performance variations are very significant for different databases. Any algorithm alone cannot be qualified as the best performing algorithm for all the variations of a facial image. To extract the optimal performance on all facial variations, it may be necessary to combine several subspace techniques in a computationally economical unified classifier which makes a good research topic for future.
A MATLAB based evaluation platform was also constructed in result of this study which may serve as a useful tool for researchers in this field.
Portions of the research in this paper use the FERET database of facial images collected under the FERET programme, sponsored by the DOD Counterdrug Technology Development Program Office. The authors are thankful to Mr. Xiaofei He