Quaternion-Based Discriminant Analysis Method for Color Face Recognition

  • Yong Xu

    Affiliations Bio-computing Research Center, Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, Guangdong, People's Republic of China, Key Laboratory of Network Oriented Intelligent Computation, Shenzhen, China

Pattern recognition techniques have been used to automatically recognize objects and personal identities, predict protein function, determine cancer categories, identify lesions, perform product inspection, and so on. In this paper we propose a novel quaternion-based discriminant method that represents and classifies color images in a simple and mathematically tractable way. The proposed method is suitable for a wide range of real-world applications, such as color face recognition and classification of ground targets in multispectral remote-sensing images. The method first uses quaternions to denote the pixels of a color image and represents the whole image as a quaternion vector. It then applies the linear discriminant analysis algorithm to transform the quaternion vector into a lower-dimensional quaternion vector and performs classification in this space. The experimental results show that the proposed method obtains a very high accuracy for color face recognition.


Color images provide a large amount of appearance information about real-world objects and allow objects to be described more accurately than grey-scale images [1]–[3]. In the field of face recognition, many studies have shown that color face recognition usually obtains a higher accuracy than conventional face recognition based on gray-scale face images. There are three kinds of color face recognition methods. The first kind usually converts the 3-D color space into a new lower-dimensional space and then performs classification in the new space. For example, Neagoe proposed an optimum conversion to transform the 3-D color space into a 2-D color space [4], and showed that the obtained 2-D color space was better for face recognition. Jones and Abbott proposed converting the original 3-D color space to a 1-D space using Karhunen-Loeve (KL) analysis, linear regression, and genetic algorithms [5]. Yang et al. proposed an optimal discriminant model of color face images [6]. The second kind focuses on transforming the original color space into a new color space that yields better classification results. For example, Kittler and Sadeghi proposed the IG(R-G) color space for face verification [7]. This color space includes the following three channels: an intensity channel (the mean of the R, G, B channels), a chromaticity channel (normalized G), and an opponent chromaticity channel (normalized R-G). Shih and Liu proposed an optimal color configuration for color face recognition whose color components are drawn from different color spaces [8]. Liu proposed the so-called uncorrelated color space (UCS), independent color space (ICS), and discriminating color space (DCS) for color face recognition [9]; by using these spaces, a very high face recognition accuracy can be obtained [9]. Wang et al. used a sparse tensor discriminant color space (STDCS) model to represent the color image as a third-order tensor [10]. This model is able to preserve the underlying spatial structure of color images and to enhance robustness. The third kind integrates color information and texture information for face recognition. For instance, Liu et al. used a hybrid color and frequency feature (CFF) method to perform color face recognition [11]. Liu et al. also fused multiple global and local features derived from a hybrid color space [12]. Choi et al. proposed color local Gabor wavelets (CLGWs) and color local binary patterns (CLBP) for face recognition [13]. The color local texture features proposed in [13] exploit the discriminative information contained in the spatiochromatic texture patterns of different spectral channels.

Color images require more storage space than grey-scale images, and their transmission also needs a larger bandwidth. The amount of data in a color image, such as an RGB, HSI, or YCbCr image, is usually three times that of a grey-scale image of the same size. As a result, it is crucial to seek a way to effectively represent the color image in a low-dimensional space.

We note that classical image processing algorithms are not able to deal with the three channels of a color image simultaneously within a single mathematical framework. Instead, when dealing with a color image, previous methods first separate it into three channels and then apply traditional image processing algorithms to each of the three channels separately.

The quaternion can be used to represent a color pixel consisting of three components [14]–[20]. Indeed, the quaternion allows the three components of a color pixel to be denoted compactly by a single "number". Moreover, a color image can be directly represented by a quaternion matrix. Because a quaternion is composed of four real numbers, a simple means of representing a color pixel by a quaternion is to set three of the real numbers equal to the three color components of the pixel and to set the remaining real number to zero.

The quaternion representation of color has been used in the context of color texture region segmentation [17]. Shi et al. also used a Hessian matrix defined on the basis of quaternions to measure curvature in color images [18]. Angulo exploited the structure tensor of color quaternion image representations to perform feature extraction [19]. Besides color images, the quaternion has also been used for greyscale images [15]. Moreover, a quaternion matrix can also be used to represent multi-spectrum images such as three- or four-channel remote-sensing images.

A concise survey of matrices with quaternion entries is presented in [14]. Denis et al. studied the quaternionic Fourier spectrum and attempted to explain the color information contained in the new domain, as well as how the different real and imaginary parts of the spectral quaternionic domain interact with the pure quaternion component chosen to encode colors in the spatial domain [16]. The quaternion Fourier transform has also been proposed as a frequency analysis tool [20]–[23]. The quaternion has further been used to separate polarized waves [25], [31] and in block truncation coding [24]. Miron et al. considered the problem of estimating the direction of arrival and polarization parameters of multiple polarized sources impinging on a vector-sensor array and devised a MUSIC-like algorithm to estimate the direction of arrival of waves [32].

We also note that, based on the quaternion algebra developed in recent years, matrix operations can easily be applied to quaternion matrices. This means that once we denote a color image by a quaternion matrix, we can directly exploit quaternion algebra to implement image processing techniques such as image denoising, image segmentation, edge detection, and image transforms [26]–[32].

In this paper, we aim at transforming the color image into lower-dimensional data for color face recognition. To do so, we extend the widely used linear discriminant analysis technique [33], [34] to the quaternion matrix that represents the color image. Previous research has shown that dimensionality reduction is usually helpful for classification, as it extracts the important and robust features of the image and neglects its trivial and non-salient information. The experimental results show that the proposed method can not only describe the color image with a lower-dimensional quaternion array but also achieve a high recognition accuracy. The proposed method has the following rationale: as it is a feature-level fusion method, it can convey much richer information than matching-score-level and decision-level fusion methods [35]–[37]. The proposed method differs from the method proposed in [38] as follows. First, the proposed method and the method in [38] use a quaternion representation and a real third-order tensor, respectively, to denote the color face image. Second, as the proposed method and the method in [38] perform their calculations in a quaternion space and a real space, respectively, the three color components are fused in two very different ways.

Materials and Methods


A real quaternion, simply referred to as a quaternion, is defined as
$$q = a + b\,i + c\,j + d\,k, \quad (1)$$
where $a, b, c, d$ denote real coefficients. We note that the quaternion is defined in a four-dimensional vector space with the ordered basis $\{1, i, j, k\}$. The conjugate of $q$ is defined by
$$\bar{q} = a - b\,i - c\,j - d\,k. \quad (2)$$
Alternatively, we can also represent $q$ and $\bar{q}$ defined in (1) and (2) by $(a, b, c, d)$ and $(a, -b, -c, -d)$, respectively. $q$ and $\bar{q}$ have the same norm $|q| = |\bar{q}| = \sqrt{a^2 + b^2 + c^2 + d^2}$.

The product of any two of the quaternion units $i$, $j$, $k$ is defined by
$$i^2 = j^2 = k^2 = ijk = -1, \quad (3)$$
$$ij = -ji = k, \quad (4)$$
$$jk = -kj = i, \quad (5)$$
$$ki = -ik = j. \quad (6)$$
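The multiplication rules (3)–(6) can be made concrete with a minimal sketch; the tuple layout $(a, b, c, d)$ and the function name `qmul` are illustrative assumptions, not notation from the paper.

```python
# Hamilton product of two quaternions stored as tuples (a, b, c, d),
# i.e. a + b*i + c*j + d*k. A minimal sketch with no dependencies.

def qmul(p, q):
    """Return the Hamilton product p*q using the rules
    i^2 = j^2 = k^2 = ijk = -1, ij = -ji = k, jk = -kj = i, ki = -ik = j."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

# Quaternion multiplication is not commutative: i*j = k but j*i = -k.
i = (0.0, 1.0, 0.0, 0.0)
j = (0.0, 0.0, 1.0, 0.0)
print(qmul(i, j))  # -> (0.0, 0.0, 0.0, 1.0), i.e. k
print(qmul(j, i))  # -> (0.0, 0.0, 0.0, -1.0), i.e. -k
```

The non-commutativity shown here is exactly why, later in the method, the order of factors in quaternion matrix products must be respected.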

Basis of the quaternion-based discriminant method

We first denote each color pixel of a color image by a quaternion as follows: the coefficients $b$, $c$, $d$ of the quaternion are set to the first, second and third components of the color pixel, respectively, and the coefficient $a$ is set to zero.
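This pixel encoding can be sketched as follows; the `(H, W, 4)` array layout with channel order (real, i, j, k) and the function name are assumptions made for illustration.

```python
import numpy as np

# Encode an H x W RGB image as an array of pure quaternions: each pixel
# (R, G, B) becomes 0 + R*i + G*j + B*k. The (H, W, 4) layout storing
# (real, i, j, k) per pixel is an illustrative choice.

def rgb_to_quaternion(img):
    h, w, _ = img.shape
    q = np.zeros((h, w, 4), dtype=float)
    q[..., 1:] = img          # real part stays zero: a pure quaternion
    return q

img = np.arange(2 * 2 * 3, dtype=float).reshape(2, 2, 3)  # toy 2x2 image
q = rgb_to_quaternion(img)
# Concatenating the rows yields the quaternion vector used by the method.
qvec = q.reshape(-1, 4)
```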

Suppose that there are $c$ classes. Let $A_i$ represent the quaternion matrix of the $i$th color image. We can convert $A_i$ into a quaternion vector $x_i$ by concatenating the rows of $A_i$ in sequence.

We define the generative matrix of the discriminant algorithm as
$$S = \sum_{i=1}^{c} (m_i - m)(m_i - m)^{H}, \quad (7)$$
where $H$ stands for conjugate transpose, $m$ represents the mean of the quaternion vectors of all the training samples, and $m_i$ denotes the mean of the quaternion vectors of the training samples of the $i$th class. $S$ is indeed the covariance matrix of the class means of the quaternion arrays corresponding to the training samples. The eigen-equation of the quaternion matrix $S$ is as follows:
$$S w = \lambda w. \quad (8)$$
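To illustrate the structure of Eq. (7), the following sketch computes the same scatter for ordinary complex vectors; the quaternion case keeps the same form, with quaternion entries and the quaternion conjugate transpose. The function name and toy class means are illustrative assumptions.

```python
import numpy as np

# Between-class scatter in the form of Eq. (7), for complex vectors:
# S = sum_i (m_i - m)(m_i - m)^H, where m is the mean of the class means.

def between_class_scatter(class_means):
    M = np.asarray(class_means, dtype=complex)   # shape (c, n), one mean per row
    m = M.mean(axis=0)                           # mean of the class means
    D = M - m                                    # one deviation vector per row
    return D.T @ D.conj()                        # equals sum_i d_i d_i^H

means = np.array([[1 + 1j, 0], [0, 1 - 1j], [1, 1]], dtype=complex)
S = between_class_scatter(means)
# S is Hermitian by construction (S equals its conjugate transpose).
```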

Once we obtain the eigenvalues and eigenvectors of $S$, we can select the eigenvectors corresponding to the $d$ largest eigenvalues as the transform axes. We extract features from a sample represented by a quaternion array by projecting this quaternion array onto each of the transform axes.

Algorithm of the quaternion-based discriminant method

We note that, as $S$ is a quaternion matrix, it is hard to solve for its eigenvalues and eigenvectors directly. However, we can solve this problem in the following way: first we construct the equivalent complex matrix of $S$; then we solve for the eigenvalues and eigenvectors of this equivalent complex matrix.

Corollary 1. If the quaternion matrix $Q$ is written as $Q = A_1 + A_2 j$, where $A_1$ and $A_2$ are complex matrices, then the equivalent complex matrix of $Q$ is defined as follows:
$$\chi_Q = \begin{pmatrix} A_1 & A_2 \\ -\bar{A}_2 & \bar{A}_1 \end{pmatrix}. \quad (9)$$
If $Q$ is an $n \times n$ quaternion matrix, then $\chi_Q$ is a $2n \times 2n$ complex matrix.

For example, if $Q$ is the $1 \times 1$ quaternion matrix $(a + b\,i + c\,j + d\,k)$, then we have $A_1 = (a + b\,i)$ and $A_2 = (c + d\,i)$. As a result, the equivalent complex matrix of $Q$ is the following $2 \times 2$ complex matrix:
$$\chi_Q = \begin{pmatrix} a + b\,i & c + d\,i \\ -c + d\,i & a - b\,i \end{pmatrix}. \quad (10)$$
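The block construction of Eq. (9) can be sketched as below, writing a quaternion matrix as $Q = A_1 + A_2 j$ with complex blocks $A_1$ and $A_2$; the function name and the 1×1 example values are illustrative assumptions.

```python
import numpy as np

# Build the 2n x 2n equivalent complex matrix of an n x n quaternion
# matrix Q = A1 + A2*j, following the block form of Eq. (9).

def equivalent_complex(A1, A2):
    return np.block([[A1, A2],
                     [-A2.conj(), A1.conj()]])

# 1 x 1 example: q = 1 + 2i + 3j + 4k gives A1 = 1 + 2i and A2 = 3 + 4i.
A1 = np.array([[1 + 2j]])
A2 = np.array([[3 + 4j]])
chi = equivalent_complex(A1, A2)   # 2 x 2 complex matrix as in Eq. (10)
```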

From [14], we know that the complex eigenvalues of $\chi_Q$ always appear in conjugate pairs, and so do the corresponding complex eigenvectors of $\chi_Q$. In other words, $\chi_Q$ has the eigenvalues $\lambda_1, \bar{\lambda}_1, \dots, \lambda_n, \bar{\lambda}_n$ and the eigenvectors $v_1, v'_1, \dots, v_n, v'_n$, where $v'_t$ is the adjoint vector of $v_t$, $t = 1, \dots, n$. From the above context, we know that if $Q = A_1 + A_2 j$, then $Q$ has the equivalent complex matrix $\chi_Q$ given in (9). We also note that the eigenvalues and eigenvectors of a quaternion matrix and those of its equivalent complex matrix have the following relationship [14]: the complex eigenvalues of a quaternion matrix are the same as the eigenvalues of its equivalent complex matrix. In addition, if $v = (x^T, y^T)^T$ is an eigenvector with respect to the eigenvalue $\lambda$ of the equivalent complex matrix $\chi_Q$, then $x - \bar{y} j$ is an eigenvector with respect to the eigenvalue $\lambda$ of the quaternion matrix $Q$. Hereafter the overbar denotes the complex conjugate.
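The conjugate-pair structure of the eigenvalues of an equivalent complex matrix can be checked numerically; the random test matrix, size, and tolerance below are illustrative assumptions.

```python
import numpy as np

# Numerical check: for a random quaternion matrix Q = A1 + A2*j, the
# eigenvalues of its equivalent complex matrix occur in conjugate pairs.

rng = np.random.default_rng(0)
n = 3
A1 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A2 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
chi = np.block([[A1, A2], [-A2.conj(), A1.conj()]])   # Eq. (9)

w = np.linalg.eigvals(chi)
# Every eigenvalue's complex conjugate is (numerically) also an eigenvalue.
paired = all(min(abs(w - lam.conjugate())) < 1e-8 for lam in w)
```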

The main steps of the algorithm to implement the discriminant method are as follows.

Step 1. Represent each pixel of every color image by a quaternion and denote each color image by a quaternion vector.

Step 2. Use Eq. (7) to calculate the covariance matrix of the class means of the quaternion arrays corresponding to the training samples. Then construct the equivalent complex matrix of this covariance matrix.

Step 3. Compute the eigenvalues and eigenvectors of the equivalent complex matrix. Suppose that the eigenvalues satisfy $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_{2n}$. We exploit the eigenvectors corresponding to the $d$ largest eigenvalues to construct a complex transform matrix $W$.

Step 4. Convert the quaternion array corresponding to each color image into its equivalent complex matrix. For a quaternion array $x$, its equivalent complex matrix is $\chi_x$. Project $\chi_x$ onto $W$ to generate the features $W^H \chi_x$.

Step 5. Exploit the nearest neighbor classifier to perform classification.

The method presented above is referred to as quaternion LDA.
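Steps 1–4 can be sketched end to end in the equivalent complex domain, using the fact that the map from a quaternion matrix to its equivalent complex matrix preserves products and conjugate transposes. All names, sizes, and the toy data below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def chi(X1, X2):
    """Equivalent complex matrix of the quaternion array X1 + X2*j (Eq. 9)."""
    return np.block([[X1, X2], [-X2.conj(), X1.conj()]])

rng = np.random.default_rng(1)
n, c = 6, 3                                # pixels per image, classes
# Class means of the quaternion vectors, stored as complex pairs (m1, m2).
m1 = rng.standard_normal((c, n)) + 1j * rng.standard_normal((c, n))
m2 = rng.standard_normal((c, n)) + 1j * rng.standard_normal((c, n))
d1 = m1 - m1.mean(axis=0)                  # deviations from the overall mean
d2 = m2 - m2.mean(axis=0)

# Step 2: chi of the scatter of Eq. (7), accumulated per class; the
# homomorphism gives chi(d d^H) = chi(d) @ chi(d)^H.
S = np.zeros((2 * n, 2 * n), dtype=complex)
for t in range(c):
    D = chi(d1[t, None].T, d2[t, None].T)  # 2n x 2 equivalent matrix
    S += D @ D.conj().T

# Step 3: eigenvectors of the Hermitian matrix, largest eigenvalues first.
w, V = np.linalg.eigh(S)
W = V[:, ::-1][:, :2]                      # keep the top 2 transform axes

# Step 4: project a sample's equivalent complex matrix onto W.
x = chi(m1[0, None].T, m2[0, None].T)      # toy "sample": a class mean
features = W.conj().T @ x                  # low-dimensional representation
```

A nearest neighbor classifier (Step 5) would then compare such feature arrays by a distance measure.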


In this section we use the Georgia Tech face database to test our method. The Georgia Tech face database (GTFB) was built at the Georgia Institute of Technology. GTFB contains images of 50 people taken in two or three sessions. Each person in the database is represented by 15 color JPEG images with a cluttered background, taken at a resolution of 640×480 pixels. The pictures show frontal and/or tilted faces with different facial expressions, lighting conditions and scales. Each image was manually labeled to determine the position of the face in it. We used the face images with the background removed.

Since these images have different sizes, we first resized them to the same size of 40×30. We used the first 12 images of each subject as training samples and treated the remaining images as testing samples. Figure 1 shows the classification accuracies of our method and several other methods. Here 'Fusion PCA' denotes the method that first performs PCA on each of the three channels of the color image and then uses the sum rule to fuse the matching scores of the PCA feature extraction results of the three channels for the ultimate face recognition. 'Fusion LDA' is defined similarly, except that the feature extraction procedure is LDA rather than PCA. From Figure 1, we see that our method, quaternion LDA, obtains a much higher classification accuracy than quaternion PCA, fusion LDA and fusion PCA. Figure 2 shows the classification accuracies of LDA using a single color channel on the GTFB database.
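The sum-rule score fusion behind the 'Fusion PCA' and 'Fusion LDA' baselines can be sketched as follows; the toy distance scores, gallery size, and names are illustrative assumptions.

```python
import numpy as np

# Sum-rule fusion: each color channel yields one matching (distance) score
# per gallery class; the per-channel scores are summed and the gallery
# class with the smallest fused distance is the prediction.

rng = np.random.default_rng(2)
n_gallery = 5
scores = {ch: rng.random(n_gallery) for ch in "RGB"}   # per-channel distances
fused = sum(scores.values())                           # sum rule
predicted = int(np.argmin(fused))                      # nearest gallery class
```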

Figure 1. Classification accuracies of our method and several other methods on the GTFB database.

Figure 2. Classification accuracy of LDA using a single color channel on the GTFB database.


This paper, for the first time, proposes a quaternion-based discriminant analysis method. This method can represent color images in a simple and tractable way. Since the proposed method is a feature-level fusion method, it is able to convey richer information about the color image than score-level and decision-level fusion methods. Moreover, even when the method transforms the color image into a very low-dimensional space, it can still represent the image well. The experiments show that the proposed method performs very well in color face recognition.

Author Contributions

Conceived and designed the experiments: YX. Performed the experiments: YX. Analyzed the data: YX. Contributed reagents/materials/analysis tools: YX. Wrote the paper: YX.


  1. Suhre A, Kose K, Cetin AE, Gurcan MN (2011) Content-adaptive color transform for image compression. Optical Engineering 50.
  2. Tsagaris V (2009) Objective evaluation of color image fusion methods. Optical Engineering 48.
  3. Yihui Y, Junju Z, Benkang C, Yiyong H (2011) Objective quality evaluation of visible and infrared color fusion image. Optical Engineering 50.
  4. Neagoe VE (2006) An optimum color feature space and its applications for pattern recognition. WSEAS Transactions on Signal Processing 2.
  5. Jones CF, Abbott AL (2004) Optimization of color conversion for face recognition. EURASIP Journal on Applied Signal Processing 2004: 522–529.
  6. Yang J, Liu C (2008) Color image discriminant models and algorithms for face recognition. IEEE Transactions on Neural Networks 19: 2088–2098.
  7. Kittler J, Sadeghi MT (2004) Physics-based decorrelation of image data for decision level fusion in face verification. In: Multiple Classifier Systems, Proceedings. pp. 354–363.
  8. Shih P, Liu C (2006) Improving the face recognition grand challenge baseline performance using color configurations across color spaces. In: 2006 IEEE International Conference on Image Processing (ICIP 2006), Proceedings. pp. 1001–1004.
  9. Chengjun L (2008) Learning the uncorrelated, independent, and discriminating color spaces for face recognition. IEEE Transactions on Information Forensics and Security 3.
  10. Wang SJ, Yang J, Sun MF, Peng XJ, Sun MM, et al. (2012) Sparse tensor discriminant color space for face verification. IEEE Transactions on Neural Networks and Learning Systems 23: 876–888.
  11. Liu Z, Liu C (2008) A hybrid color and frequency features method for face recognition. IEEE Transactions on Image Processing 17: 1975–1980.
  12. Zhiming L, Chengjun L (2010) Fusion of color, local spatial and global frequency information for face recognition. Pattern Recognition 43.
  13. Choi JY, Ro YM, Plataniotis KN (2012) Color local texture features for color face recognition. IEEE Transactions on Image Processing 21: 1366–1380.
  14. Zhang F (1997) Quaternions and matrices of quaternions. Linear Algebra and its Applications 251: 21–57.
  15. Bülow T (1999) Hypercomplex Spectral Signal Representations for the Processing and Analysis of Images. Christian-Albrechts-University of Kiel.
  16. Denis P, Carre P, Fernandez-Maloigne C (2007) Spatial and spectral quaternionic approaches for colour images. Computer Vision and Image Understanding 107: 74–87.
  17. Shi L, Funt B (2007) Quaternion color texture segmentation. Computer Vision and Image Understanding 107: 88–96.
  18. Lilong S, Brian F, Ghassan H, Simon F (2008) Quaternion color curvature. pp. 338–341.
  19. Angulo J (2009) Structure tensor of colour quaternion image representations for invariant feature extraction. In: Computational Color Imaging. pp. 91–100.
  20. Felsberg M, Sommer G (1999) Optimized fast algorithms for the quaternionic Fourier transform. In: Computer Analysis of Images and Patterns. pp. 209–216.
  21. Soo-Chang P, Ja-Han C, Jian-Jiun D (2004) Commutative reduced biquaternions and their Fourier transform for signal and image processing applications. IEEE Transactions on Signal Processing 52.
  22. Soo-Chang P, Jian-Jiun D, Ja-Han C (2001) Efficient implementation of quaternion Fourier transform, convolution, and correlation by 2-D complex FFT. IEEE Transactions on Signal Processing 49.
  23. Bulow T, Sommer G (2001) Hypercomplex signals - a novel extension of the analytic signal to the multidimensional case. IEEE Transactions on Signal Processing 49: 2844–2852.
  24. Soo-Chang P, Ching-Min C (1997) A novel block truncation coding of color images using a quaternion-moment-preserving principle. IEEE Transactions on Communications 45.
  25. Le Bihan N, Mars J (2004) Singular value decomposition of quaternion matrices: a new tool for vector-sensor signal processing. Signal Processing 84: 1177–1199.
  26. Sangwine SJ (1998) Colour image edge detector based on quaternion convolution. Electronics Letters 34: 969–971.
  27. Le Bihan N, Sangwine SJ (2003) Quaternion principal component analysis of color images. In: Proceedings 2003 International Conference on Image Processing.
  28. Soo-Chang P, Ja-Han C, Jian-Jiun D (2003) Quaternion matrix singular value decomposition and its applications for color image processing. In: Proceedings 2003 International Conference on Image Processing.
  29. Sangwine SJ, Le Bihan N (2006) Quaternion singular value decomposition based on bidiagonalization to a real or complex matrix using quaternion Householder transformations. Applied Mathematics and Computation 182: 727–738.
  30. Janovská D, Opfer G (2008) Matrix decompositions for quaternions. World Academy of Science, Engineering and Technology 47: 141–142.
  31. Le Bihan N, Mars J (2002) Quaternion subspace method for vector-sensor wave separation. Toulouse, France. pp. 637–640.
  32. Miron S, Le Bihan N, Mars JI (2006) Quaternion-MUSIC for vector-sensor array processing. IEEE Transactions on Signal Processing 54: 1218–1229.
  33. Yong X, Jing-yu Y, Zhong J (2003) Theory analysis on FSLDA and ULDA. Pattern Recognition 36.
  34. Zhang D, Song F, Xu Y, Liang Z (2008) Advanced Pattern Recognition Technologies with Applications to Biometrics. Hershey: Medical Information Science Reference.
  35. Xu Y, Zhang D, Song F, Yang J-Y, Jing Z, et al. (2007) A method for speeding up feature extraction based on KPCA. Neurocomputing 70: 1056–1061.
  36. Yong X, Zhang D, Jian Y, Jing-Yu Y (2008) An approach for directly extracting features from matrix data and its application in face recognition. Neurocomputing 71.
  37. Yong X, Zhang D, Jing-Yu Y (2010) A feature extraction method for use with bimodal biometrics. Pattern Recognition 43.
  38. Wang S-J, Yang J, Zhang N, Zhou C-G (2011) Tensor discriminant color space for face recognition. IEEE Transactions on Image Processing 20: 2490–2501.