
Multispectral image fusion for illumination-invariant palmprint recognition

  • Longbin Lu ,

    baqihuti@stu.xjtu.edu.cn

    Affiliation MOE Key Lab for Intelligent Networks and Network Security, School of Electronics and Information Engineering, Xi’an Jiaotong University, Xi’an, China

  • Xinman Zhang,

    Affiliation MOE Key Lab for Intelligent Networks and Network Security, School of Electronics and Information Engineering, Xi’an Jiaotong University, Xi’an, China

  • Xuebin Xu,

    Affiliation Guangdong Xi'an Jiaotong University Academy, Foshan, China

  • Dongpeng Shang

    Affiliation MOE Key Lab for Intelligent Networks and Network Security, School of Electronics and Information Engineering, Xi’an Jiaotong University, Xi’an, China

Abstract

Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is built on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is used to construct the fusion coefficients at the decomposition level so that the images can be separated correctly in the fusion space. The image fusion framework shows strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has a fast learning speed and satisfactory recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method achieves favorable results. For testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting condition is unsatisfactory.

1 Introduction

Nowadays, biometrics [1–3] plays an increasingly important role in the modern information society and has drawn more and more research attention throughout the world. As an emerging and promising biometric characteristic, the palmprint possesses some remarkable advantages such as high distinguishability, excellent user-friendliness and strong stability. Generally speaking, palmprint recognition [4–7] is to verify the identity of a person based on palm information including principal lines, wrinkles and fine ridges. In contrast to password cards or identification cards, palmprint recognition is much more convenient, efficient and reliable, with extensive and successful applications [8]. However, it still faces challenges in real noisy environments, where the illumination condition may be unsatisfactory or even corrupted, in which case the performance of a palmprint recognition system based only on the visible spectrum degrades quickly. In addition, traditional methods obtain features from a single spectral band and consequently cannot capture enough discriminative identity information. In recent research, there is a growing trend to use multispectral images instead of a single spectral image to improve the accuracy of a palmprint recognition system [9–12]. Images are captured at the Blue, Green, Red and Near-infrared (NIR) spectral bands respectively, each of which commonly highlights different specific and complementary palm features. It has been demonstrated that the utilization of multispectral images has made palmprint recognition one of the most reliable and successful personal identification approaches.

Multispectral palmprint analysis is mainly focused on two separate directions, i.e., fusing multispectral information either at the image level or at the matching score level. For the first approach, the basic idea is to perform a multiscale decomposition on each source image, then integrate all these decompositions to form a composite representation, and finally reconstruct the fused image to be recognized by performing an inverse transform. Two major kinds of multiscale techniques, namely pyramid decomposition and wavelet decomposition, have been investigated in multispectral palmprint image fusion. A comparative study on multispectral palmprint image fusion was conducted in [13], where the wavelet transform (WT), gradient pyramid (GP), morphological pyramid (MP) and curvelet transform (CT) were evaluated on two different spectral bands. Qualitative analysis demonstrated that CT-based image fusion could achieve a higher recognition accuracy. Besides, some other innovative methods, such as the nonsubsampled contourlet transform (NSCT) [14, 15], discrete wavelet transform (DWT) [11, 12, 16], shift-invariant digital wavelet transform (SIDWT) [17, 18] and digital shearlet transform (DST) [19, 20], have been widely and successfully used in multispectral palmprint image fusion. Alternatively, in the case of fusion at the matching score level, palmprint features are extracted from the different spectral bands separately, followed by a comparator to obtain a matching score. These matching scores in turn are fused using a sum rule, and verification is then carried out on the fusion results. For example, Zhang et al. [21] employed orientation-based coding (OC) for feature extraction at each spectral band and proposed a matching rule robust to the effect of information overlapping. In [22], Khan et al. applied the contour code (CC) for the representation of multispectral images before performing the matching score-level fusion. In [23], sum and weighted sum rules were utilized at the fusion stage. Additionally, some other explorations have been made in recent years. In [24], Hong et al. developed a novel hierarchical approach based on the block dominant orientation code (BDOC) and the block-based histogram of oriented gradients (BHOG) for feature-level fusion. Instead of using a fusion strategy, Xu et al. [25] presented a new method from a different perspective by utilizing the quaternion principal component analysis (QPCA) and the quaternion discrete wavelet transform (QDWT), which could fully extract the multispectral information.

Among the abovementioned works, the image fusion based scheme appears to be more attractive because it can effectively remove the noise that may be present during the acquisition process of palmprint images. Thus in this paper, we mainly concentrate on developing a novel method for illumination-invariant palmprint recognition by fusing multispectral information at the image level. Firstly, the fast and adaptive bidimensional empirical mode decomposition (FABEMD) [26–28] is applied to each image captured at the different spectral bands respectively, and then the fused image can be represented by the weighted sum of some bidimensional intrinsic mode functions (BIMFs). Secondly, a weighted Fisher criterion [29, 30] is introduced to select the proper fusion weights such that the fused image contains enough discriminative information. Finally, to improve the recognition accuracy and reduce the computation cost, a novel tensor-based extreme learning machine (TELM) [31–33] mechanism is proposed for the classification of two-dimensional (2D) images. Extensive experiments under even or uneven illumination conditions are carried out on the PolyU multispectral palmprint database [11, 21, 34, 35] to show the superiority of our proposed method.

The rest of this paper is organized as follows: Section 2 introduces the multispectral imaging device and the region of interest (ROI) extraction method. Section 3 provides a schematic diagram of the proposed method and describes the FABEMD, the weighted Fisher criterion based image fusion strategy and the TELM in detail. Section 4 introduces the database and presents an experimental analysis. Finally, some concluding remarks are reported in Section 5.

2 Multispectral palmprint imaging and acquisition

Before elaborating on the proposed method, we briefly introduce how the multispectral palmprint images are acquired and how the ROIs are located. Fig 1 shows the structure of the imaging device, which consists of a computer, an A/D converter, a CCD camera, a multispectral ring light source and a light controller. With signals from the light controller, the ring light source can successively generate four kinds of uniform illuminators at multiple spectral bands, i.e., Blue (470 nm), Green (525 nm), Red (660 nm) and NIR (880 nm). The four illuminators are switched so quickly that a user's multispectral palmprint images can be captured almost at the same time. Therefore the translation or rotation between two images is very small, making registration unnecessary for image fusion. During the acquisition process, users are required to put their palms on the device panel, where several pegs fix the placement of the hands. The CCD camera then acquires the palmprint images under the generated illuminators. Afterwards, by an A/D converter, the analog image signals are converted to digital ones stored in the computer. Fig 2 illustrates a typical multispectral palmprint sample.

Fig 1. Structure of the multispectral palmprint imaging device.

https://doi.org/10.1371/journal.pone.0178432.g001

Fig 2. A typical multispectral palmprint sample: (a) Blue, (b) Green, (c) Red, and (d) NIR.

The white square is the ROI of the image.

https://doi.org/10.1371/journal.pone.0178432.g002

Extracting an ROI from the acquired image is an essential step for multispectral palmprint recognition, as it efficiently reduces the effect of rotation and translation of the palm. As shown in Fig 2(a), by finding the two key points (E1, E2) located at the troughs between the fingers, a coordinate system is built at the Blue band to crop the ROI. Here the line passing through E3 and E4 is the perpendicular bisector. Once the coordinate system is established, it is applied to the other spectral bands. The detailed steps are described in [34]. Fig 3 shows the extracted ROIs with a size of 128×128. In particular, the palmprint images in color are also exhibited so that we can visually judge which channel is more informative.

Fig 3. Extracted ROIs: (a) Color, (b) Blue, (c) Green, (d) Red, and (e) NIR.

https://doi.org/10.1371/journal.pone.0178432.g003

3 Proposed multispectral palmprint image recognition method

Fig 4 illustrates the outline of the proposed palmprint recognition method, which mainly consists of three key steps: performing the FABEMD on the multispectral images and applying illumination compensation to the extracted BIMFs, determining the appropriate fusion coefficients based on the weighted Fisher criterion, and verifying the identities of palmprint images with a TELM classifier. Initially, for each spectral channel, the palmprint image is decomposed into several BIMFs and a residue using the FABEMD technique, where the residue can be considered an estimate of the illumination condition at that spectral band. Based on the residue, the BIMFs are adjusted with an illumination compensation operation. Afterwards, the fusion of the multispectral images is completed by calculating a weighted sum of all the adjusted BIMFs from the Blue, Green, Red and NIR spectral bands. An improved Fisher criterion that considers neighborhood information is utilized to solve for the fusion coefficients. By this means, the training samples in the fusion space contain highly discriminative information. In other words, the ratio of the between-class distance to the within-class distance in the fusion space tends to be maximized. Finally, the training fusion images are prepared in a tensor format and then employed to learn a TELM model. TELM combines tensor representation and extreme learning machine theory to determine the input weights and output weights of a single-hidden-layer feedforward neural network, by which the testing samples are classified.

3.1 Fast and adaptive bidimensional empirical mode decomposition

Fast and adaptive bidimensional empirical mode decomposition (FABEMD) is a data-driven signal analysis method that decomposes a 2D signal into its characteristic hierarchical components known as bidimensional intrinsic mode functions (BIMFs) [26]. It is based on an iterative sifting process, where the local extrema of the signal are first detected and the envelopes are then estimated from the detected results. FABEMD adopts two kinds of order-statistics filters, namely MAX and MIN filters, to obtain the upper and lower envelopes, where the filter size is derived from the data.

Given a 2D signal I, FABEMD can represent it by
$$I = \sum_{i=1}^{K} S_i + R \tag{1}$$

Here, K is the number of BIMFs decomposed from I, $S_i$ denotes the ith BIMF, and R is the residue. In the sifting process, $S_i$ is extracted from its source signal $J_i$, where $J_i = J_{i-1} - S_{i-1}$ and $J_1 = I$. The detailed steps are explained as follows:

Step 1: Set i = 1 and initialize Ji = I.

Step 2: Identify the local maxima and minima maps $M_i$, $N_i$ of $J_i$ by exploiting a neighboring window search strategy as shown in Fig 5. A data point is regarded as a local extremum if its value is strictly higher or lower than all of the neighbors within the window. In particular, when finding extrema at the boundary or corners, the neighbors outside the image are neglected. Usually a window with a size of 3×3 is preferred for optimal results.

Fig 5. Demonstration of local maxima and minima maps: (a) source signal, (b) local maxima map, and (c) local minima map.

https://doi.org/10.1371/journal.pone.0178432.g005

Step 3: Determine the proper window size for the order-statistics filters based on the local maxima and minima maps $M_i$, $N_i$. For each local maximum point in $M_i$, the Euclidean distance to the nearest other local maximum point is computed and stored in an adjacent maxima distance vector denoted as $d_{adj\text{-}max}$. Similarly, an adjacent minima distance vector denoted as $d_{adj\text{-}min}$ is calculated as well. The number of elements in the maxima (minima) adjacent vector is equal to the number of local maxima (minima) points. Considering a square window, the gross window size $w_{en\text{-}g}$ for the order-statistics filters can be determined in two different ways as shown below:
$$w_{en\text{-}g} = d_1 = \min\{\min(d_{adj\text{-}max}),\ \min(d_{adj\text{-}min})\} \quad \text{or} \quad w_{en\text{-}g} = d_2 = \max\{\max(d_{adj\text{-}max}),\ \max(d_{adj\text{-}min})\} \tag{2}$$

The final window size wen is obtained by rounding weng to the nearest odd integer. Here, the range of wen used for the multispectral palmprint images is from 3 to 69.

Step 4: Generate the upper and lower envelopes $UE_i$, $LE_i$ by applying order-statistics and smoothing average filters. MAX and MIN filters with a window size of $w_{en} \times w_{en}$ are employed to form the upper and lower envelopes respectively according to the following equations:
$$UE_i(m,n) = \max_{(s,t)\in Z_{mn}} J_i(s,t), \qquad LE_i(m,n) = \min_{(s,t)\in Z_{mn}} J_i(s,t) \tag{3}$$
where the value $UE_i(m,n)$ of the upper envelope at any point (m, n) is simply the maximum value of the elements of $J_i$ in the region defined by $Z_{mn}$, the square region of size $w_{en} \times w_{en}$ centered at the point (m, n). Similarly, the value $LE_i(m,n)$ of the lower envelope at any point (m, n) is simply the minimum value of the elements of $J_i$ in the region defined by $Z_{mn}$. To attain smooth continuous surfaces for the upper and lower envelopes, smoothing operations are performed on both $UE_i(m,n)$ and $LE_i(m,n)$, which may be stated as
$$UE_i(m,n) \leftarrow \frac{1}{w_{en} \times w_{en}} \sum_{(s,t)\in Z_{mn}} UE_i(s,t), \qquad LE_i(m,n) \leftarrow \frac{1}{w_{en} \times w_{en}} \sum_{(s,t)\in Z_{mn}} LE_i(s,t) \tag{4}$$

Step 5: Compute the mean envelope $ME_i = (UE_i + LE_i)/2$ and extract the ith BIMF as $S_i = J_i - ME_i$. Set $i \leftarrow i + 1$ and $J_i = J_{i-1} - S_{i-1}$ (i.e., the new source signal is the previous mean envelope). Repeat Steps 2 to 5 until the number of extracted BIMFs is K.

Based on the above steps, a 2D signal is decomposed into K BIMFs Si, i = 1, …, K. Then the residue R can be calculated according to (Eq 1). The decomposition results of a palmprint image using FABEMD are shown in Fig 6.
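To make the procedure concrete, the following is a minimal Python/NumPy sketch of Steps 1 to 5, assuming the d1 window rule and a single sifting iteration per BIMF. The function names and the SciPy-based filtering are our own illustration, not the authors' implementation (the reported experiments used MATLAB).

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter
from scipy.spatial import cKDTree

def strict_extrema(J, win=3):
    # Step 2: a pixel is a local max/min if strictly greater/less than all
    # neighbors in a win x win window; cval=+/-inf effectively ignores
    # neighbors that fall outside the image, as the paper prescribes.
    foot = np.ones((win, win), dtype=bool)
    foot[win // 2, win // 2] = False                 # exclude the center pixel
    nb_max = maximum_filter(J, footprint=foot, mode='constant', cval=-np.inf)
    nb_min = minimum_filter(J, footprint=foot, mode='constant', cval=np.inf)
    return J > nb_max, J < nb_min

def window_size_d1(maxima, minima):
    # Step 3, d1 variant: smallest adjacent-extremum distance over the
    # maxima map and the minima map, rounded to the nearest odd integer.
    def nearest(pts):
        d, _ = cKDTree(pts).query(pts, k=2)          # d[:, 1] = nearest other point
        return d[:, 1].min()
    w = int(round(min(nearest(np.argwhere(maxima)), nearest(np.argwhere(minima)))))
    return max(3, w if w % 2 else w + 1)

def fabemd(I, K=4):
    # Decompose image I into K BIMFs and a residue (Eq 1).
    J = I.astype(float)
    bimfs = []
    for _ in range(K):
        mx, mn = strict_extrema(J)                               # Step 2
        wen = window_size_d1(mx, mn)                             # Step 3
        UE = uniform_filter(maximum_filter(J, size=wen), wen)    # Eqs 3-4
        LE = uniform_filter(minimum_filter(J, size=wen), wen)
        mean_env = (UE + LE) / 2
        bimfs.append(J - mean_env)                               # Step 5: BIMF
        J = mean_env                                             # next source signal
    return bimfs, J                                              # J is the residue R
```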

Fig 6. Decompositions of a palmprint image using FABEMD: (a) the source image, (b) the 1st BIMF, (c) the 2nd BIMF, (d) the 3rd BIMF, (e) the 4th BIMF, and (f) the residue.

https://doi.org/10.1371/journal.pone.0178432.g006

In practical applications, the process of multispectral image acquisition is not always as tightly controlled as described in Fig 1. For example, the images may be acquired in an open environment using a multispectral camera. The lighting condition is usually uncontrolled, and the images may not be uniformly illuminated. However, the results of FABEMD are very sensitive to lighting variation, as demonstrated in Fig 7. In order to extract stable BIMFs, an illumination compensation method based on the residue of FABEMD is applied. As seen from Figs 6(f) and 7(f), the residue can be considered a trend of the illumination. After an average filtering operation, the obtained smooth residue $R_s$ is taken as the approximate illumination estimate. Then the adjusted BIMFs are computed by
$$S_i^{adj}(m,n) = \frac{S_i(m,n)}{R_s(m,n) + eps} \tag{5}$$
where $S_i^{adj}(m,n)$ and $R_s(m,n)$ are the values of $S_i^{adj}$ and $R_s$ at any point (m, n), and $eps$ stands for a very small offset (in our paper, we set it to $10^{-5}$).
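A minimal sketch of this compensation step, assuming Eq (5) is the pointwise division of each BIMF by the smoothed residue as described above; the default smooth=10 mirrors the 10×10 average filter mentioned in Fig 8.

```python
from scipy.ndimage import uniform_filter

def compensate_bimfs(bimfs, residue, smooth=10, eps=1e-5):
    # Eq 5: take the average-filtered residue Rs as the illumination
    # estimate and divide each BIMF by it pointwise.
    Rs = uniform_filter(residue, size=smooth, mode='nearest')
    return [S / (Rs + eps) for S in bimfs]
```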

Fig 7. Decompositions of the noised image in Fig 6(a) using FABEMD: (a) the noised image, (b) the 1st BIMF, (c) the 2nd BIMF, (d) the 3rd BIMF, (e) the 4th BIMF, and (f) the residue.

https://doi.org/10.1371/journal.pone.0178432.g007

Fig 8 shows the adjusted BIMFs. Compared with the decompositions exhibited in Fig 7, it can be seen that the uneven lighting condition is obviously improved by the illumination compensation operation. By this means, we can extract stable BIMFs which could be utilized to reconstruct the original image.

Fig 8. Demonstration of the adjusted results of Fig 7: (a) the smooth residue using an average filter with a size of 10×10, (b) the 1st BIMF, (c) the 2nd BIMF, (d) the 3rd BIMF, (e) the 4th BIMF, and (f) the reconstructed image by summing the K BIMFs.

https://doi.org/10.1371/journal.pone.0178432.g008

3.2 Image fusion based on the weighted Fisher criterion

Image fusion aims to combine the complementary information of multisource images and make the fused image more understandable and purposeful. For multispectral palmprint recognition [9–12], the task of image fusion is to preserve the useful features and remove the confusing identity information in each fusion component so that the images can be separated well in the fusion space. For this purpose, an improved weighted Fisher criterion is applied to the BIMFs extracted from the multispectral images.

For the jth palmprint sample, the corresponding vectorized BIMFs decomposed from the Blue, Green, Red and NIR bands are denoted by $b_j^i$, $g_j^i$, $r_j^i$ and $n_j^i$ respectively. Here j = 1, 2, …, N and i = 1, 2, …, K, where N is the number of palmprint samples and K is the number of BIMFs that an image is decomposed into; $b_j^i$, $g_j^i$, $r_j^i$ and $n_j^i$ are the ith adjusted BIMFs of the images captured at the Blue, Green, Red and NIR bands of the jth palmprint sample. A general image fusion framework can be described by
$$V_j = X_j \varphi \tag{6}$$
where $X_j = [b_j^1, \ldots, b_j^K, g_j^1, \ldots, g_j^K, r_j^1, \ldots, r_j^K, n_j^1, \ldots, n_j^K]$ and $\varphi = [\varphi_1, \varphi_2, \ldots, \varphi_{4K}]^T$. Fusion based on the classic Fisher criterion [29] is to construct a set of fusion coefficients φ that maximizes the between-class distance and simultaneously minimizes the within-class distance in the fusion space, that is
$$\varphi^* = \arg\max_{\varphi} \frac{\varphi^T D \varphi}{\varphi^T D_w \varphi} \tag{7}$$
$$D = \sum_{l=1}^{m} N_l (\bar{X}_l - \bar{X})^T (\bar{X}_l - \bar{X}), \qquad D_w = \sum_{l=1}^{m} \sum_{j=1}^{N_l} (X_j^l - \bar{X}_l)^T (X_j^l - \bar{X}_l) \tag{8}$$
where $\bar{X}$ is the mean over all samples $X_j$, j = 1, 2, …, N, $\bar{X}_l$ is the mean of the samples belonging to the lth class (here, a class means the identity of a palmprint), $X_j^l$ is the jth sample of the lth class, $N_l$ is the number of samples of the lth class, m is the number of classes and $N = \sum_{l=1}^{m} N_l$. Then the fusion coefficient vector φ is obtained by solving the generalized eigenvalue decomposition
$$D\varphi = \lambda D_w \varphi \tag{9}$$

Here, φ is the eigenvector corresponding to the largest eigenvalue.

A drawback of the traditional Fisher criterion is that it pays equal attention to every sample when constructing the fusion coefficient vector. In fact, the samples near a class center remain relatively static in the projection from the decomposition space to the fusion subspace, while the samples close to the border should be projected towards their corresponding class centers and pushed farther away from the points of other classes. In other words, the closer to the class center a sample is, the less it contributes to the projection; conversely, the farther from the class center and the closer to the border a sample is, the more it contributes. Inspired by this fact, a contribution factor $\mu_j$ of a sample $V_j$ is defined as
$$\mu_j = \frac{\sum_{V_p \in \Psi_j'} \exp(-\|V_j - V_p\|^2/\delta)}{\sum_{V_p \in \Psi_j} \exp(-\|V_j - V_p\|^2/\delta)} \tag{10}$$
where $\Psi_j$ is the set of the k-nearest neighbors of the jth sample $V_j$, $\Psi_j'$ is the subset of $\Psi_j$ whose classes differ from that of $V_j$, and δ is the spread of the Gaussian. From this definition, it can be inferred that when a sample is located inside its class with no between-class samples among its neighbors, the contribution factor $\mu_j$ is zero. When a sample is near the border and its k-nearest neighbors are not all from the same class, the value of $\mu_j$ is nonzero. Moreover, as the number of between-class neighbors increases and their distances decrease, the value of $\mu_j$ becomes larger and the contribution of this sample is greater. The extreme case is $\mu_j = 1$, meaning that all the k-nearest neighbors are from other classes.
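The following sketch computes the contribution factors under this reading of Eq (10); the function name, the brute-force distance computation and the NumPy data layout are illustrative assumptions.

```python
import numpy as np

def contribution_factors(V, labels, k=6, delta=5.0):
    # mu_j (Eq 10): Gaussian-weighted share of between-class samples among
    # the k nearest neighbors of V_j; 0 deep inside a class, 1 when all
    # k neighbors come from other classes. V: (N, ...) samples; labels: (N,).
    V = V.reshape(len(V), -1).astype(float)
    sq = (V ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * V @ V.T   # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                     # a sample is not its own neighbor
    mu = np.empty(len(V))
    for j in range(len(V)):
        nn = np.argsort(d2[j])[:k]                   # Psi_j: k nearest neighbors
        w = np.exp(-d2[j, nn] / delta)               # Gaussian affinities
        mu[j] = w[labels[nn] != labels[j]].sum() / w.sum()
    return mu
```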

Based on the contribution factor, a weighted Fisher criterion is proposed. A large weight is assigned to a sample located close to the border and a small weight is given to a sample near the class center:
$$\varphi^* = \arg\max_{\varphi} \frac{\varphi^T D \varphi}{\varphi^T D_w \varphi} \tag{11}$$
where $\mu_j^l$ is the contribution factor of the jth sample of the lth class, and D and $D_w$ are redefined as
$$D = \sum_{l=1}^{m} \Big(\sum_{j=1}^{N_l} \mu_j^l\Big) (\bar{X}_l - \bar{X})^T (\bar{X}_l - \bar{X}), \qquad D_w = \sum_{l=1}^{m} \sum_{j=1}^{N_l} \mu_j^l (X_j^l - \bar{X}_l)^T (X_j^l - \bar{X}_l) \tag{12}$$

The fusion vector φ can be computed with a generalized eigenvalue decomposition according to (Eq 9), and then the fused image is obtained by (Eq 6). Fig 9 shows the weighted Fisher criterion based image fusion results under different illumination conditions. As expected, the fused images include all the detailed information from each spectral band. Moreover, we can see that the fused images are hardly influenced by the lighting change.
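Below is a sketch of the coefficient computation under our reconstruction of Eqs (11) and (12). The per-class weighting of the between-class scatter and the small ridge added to Dw for numerical stability are our assumptions, not details given in the text.

```python
import numpy as np
from scipy.linalg import eigh

def fusion_weights(X, labels, mu, ridge=1e-8):
    # X: (N, n, 4K) stack of per-sample BIMF matrices X_j; mu: contribution
    # factors from Eq (10). Returns phi, the generalized eigenvector of
    # D phi = lambda Dw phi (Eq 9) with the largest eigenvalue.
    X = np.asarray(X, dtype=float)
    Xbar = X.mean(axis=0)                            # grand mean over all samples
    dim = X.shape[2]
    D = np.zeros((dim, dim))
    Dw = np.zeros((dim, dim))
    for c in np.unique(labels):
        Xc, muc = X[labels == c], mu[labels == c]
        Xc_bar = Xc.mean(axis=0)                     # class mean
        B = Xc_bar - Xbar
        D += muc.sum() * B.T @ B                     # weighted between-class scatter
        for Xj, m in zip(Xc, muc):
            W = Xj - Xc_bar
            Dw += m * W.T @ W                        # weighted within-class scatter
    Dw += ridge * np.eye(dim)                        # keep Dw invertible (our safeguard)
    vals, vecs = eigh(D, Dw)                         # generalized eigenproblem (Eq 9)
    return vecs[:, -1]                               # eigenvector of the largest eigenvalue

# Fused image of sample j (Eq 6): V_j = X[j] @ phi, reshaped to the ROI size.
```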

Fig 9. Demonstration of the weighted Fisher criterion based image fusion under different illumination conditions.

Each row illustrates a multispectral palmprint sample and the corresponding fusion image.

https://doi.org/10.1371/journal.pone.0178432.g009

3.3 Tensor-based extreme learning machine

Extreme learning machine (ELM) is a training method for single-hidden-layer feedforward neural networks (SLFNs) in which the hidden nodes are randomly assigned and then fixed without iterative tuning [32]. It has gained widespread interest due to its fast learning speed, good generalization ability and ease of implementation. However, ELM was originally proposed for one-order tensor (i.e., vector) classification. Higher-order signals must first be vectorized, which may lose some structure information and degrade the final classification performance. In our work, we improve the traditional ELM based on tensor decomposition. All input training samples are represented by a higher-order tensor. The input weights of an SLFN are calculated by applying a higher-order singular value decomposition (HOSVD) technique [31], and then the output weights are analytically determined by a simple generalized inverse operation.

For N distinct samples $\{x_i, t_i\}$, i = 1, 2, …, N, $x_i$ is a 1×n input vector and $t_i$ is a 1×m output vector. In our work, $x_i$ represents the vectorized fused image of the ith sample and n denotes the number of pixels in the fused image; $t_i$ is the class label and m denotes the number of classes. Training an SLFN with $\tilde{N}$ hidden nodes is to find suitable input weights and output weights such that
$$\sum_{j=1}^{\tilde{N}} \beta_j\, g(\tilde{x}_i \alpha_j) = t_i, \quad i = 1, 2, \ldots, N \tag{13}$$
where $\alpha_j$ is an (n+1)×1 vector denoting the weights connecting the input nodes to the jth hidden node, $\beta_j$ is a 1×m vector denoting the weights connecting the jth hidden node to the output nodes, $\tilde{x}_i$ is the augmented vector of $x_i$ with the format $[x_i\ 1] \in \mathbb{R}^{n+1}$, and g(x) is the activation function (e.g., sigmoid or threshold). Here, we select the sigmoid as the activation function. The above formula can be written compactly as
$$H\beta = T \tag{14}$$
where $H = \begin{bmatrix} g(\tilde{x}_1\alpha_1) & \cdots & g(\tilde{x}_1\alpha_{\tilde{N}}) \\ \vdots & \ddots & \vdots \\ g(\tilde{x}_N\alpha_1) & \cdots & g(\tilde{x}_N\alpha_{\tilde{N}}) \end{bmatrix}$, $\beta = [\beta_1^T, \beta_2^T, \ldots, \beta_{\tilde{N}}^T]^T$ and $T = [t_1^T, t_2^T, \ldots, t_N^T]^T$. Furthermore, H can be described in a more compact way as
$$H = g(\tilde{X}\alpha) \tag{15}$$
where $\tilde{X} = [\tilde{x}_1^T, \tilde{x}_2^T, \ldots, \tilde{x}_N^T]^T$ and $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_{\tilde{N}}]$.

The ELM theory has proved that, if the activation function is infinitely differentiable, the hidden layer output matrix H can be obtained by using a random map α with (Eq 15). Afterwards, the output weight matrix β is calculated by
$$\beta = H^{\dagger} T \tag{16}$$
where $H^{\dagger}$ is the Moore–Penrose generalized inverse of H.
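For reference, a minimal sketch of the standard ELM training just described, with the sigmoid activation the text selects; the function names and the random seed handling are ours.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_train(X, T, n_hidden, seed=0):
    # X: (N, n) vectorized fused images; T: (N, m) one-hot class labels.
    rng = np.random.default_rng(seed)
    Xa = np.hstack([X, np.ones((len(X), 1))])             # augmented inputs [x_i 1]
    alpha = rng.standard_normal((Xa.shape[1], n_hidden))  # random input weights
    H = sigmoid(Xa @ alpha)                               # hidden layer output (Eq 15)
    beta = np.linalg.pinv(H) @ T                          # output weights (Eq 16)
    return alpha, beta

def elm_predict(X, alpha, beta):
    Xa = np.hstack([X, np.ones((len(X), 1))])
    return np.argmax(sigmoid(Xa @ alpha) @ beta, axis=1)
```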

Different from ELM, which is designed only for signals in vector format, our proposed tensor-based ELM (TELM) is an extension for higher-order signals. Here, tensors are the generalization of vectors to orders higher than one: a tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_p}$ has order p, where $I_1, I_2, \ldots, I_p$ represent the number of elements along each dimension. Instead of using a random map, a HOSVD-based method is employed in TELM to construct the multidimensional feature projection matrices, by which the input training data are mapped into the hidden layer.

Firstly, we introduce two basic operations in HOSVD. The matrix unfolding of a tensor $\mathcal{A}$ along dimension q rearranges its elements into a matrix whose rows are indexed by the qth dimension:
$$A_{(q)} \in \mathbb{R}^{I_q \times (I_1 I_2 \cdots I_{q-1} I_{q+1} \cdots I_p)} \tag{17}$$

The product between a tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_p}$ and a matrix $B \in \mathbb{R}^{J \times I_q}$ is denoted by
$$\mathcal{C} = \mathcal{A} \times_q B \tag{18}$$
where $\mathcal{C} \in \mathbb{R}^{I_1 \times \cdots \times I_{q-1} \times J \times I_{q+1} \times \cdots \times I_p}$ is a tensor with the elements computed by
$$c_{i_1 \cdots i_{q-1}\, j\, i_{q+1} \cdots i_p} = \sum_{i_q = 1}^{I_q} a_{i_1 i_2 \cdots i_p}\, b_{j\, i_q} \tag{19}$$

Note that the matrix unfolding $C_{(q)}$ along dimension q is the product between B and $A_{(q)}$, that is
$$C_{(q)} = B A_{(q)} \tag{20}$$
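Both operations are easy to state in code. The sketch below uses NumPy's row-major ordering for the unfolding columns, which may differ from the paper's convention, but the identity of Eq (20) holds for any consistent ordering.

```python
import numpy as np

def unfold(A, q):
    # Mode-q matrix unfolding A_(q) (Eq 17): dimension q becomes the rows.
    return np.moveaxis(A, q, 0).reshape(A.shape[q], -1)

def mode_product(A, B, q):
    # Tensor-matrix product A x_q B (Eqs 18, 19).
    C = np.tensordot(B, A, axes=(1, q))     # contract columns of B with mode q of A
    return np.moveaxis(C, 0, q)

# Sanity check of Eq (20): unfold(A x_q B, q) == B @ unfold(A, q)
A = np.random.rand(4, 5, 6)
B = np.random.rand(7, 5)
assert np.allclose(unfold(mode_product(A, B, 1), 1), B @ unfold(A, 1))
```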

Given N distinct training samples in higher-order format $\{x_i, t_i\}$, where $x_i \in \mathbb{R}^{I_1 \times I_2}$ is the 2D fused image in our work and $t_i$ is the 1×m label vector, the core task for TELM is to construct the multidimensional projection matrices. For this purpose, we first prepare the input training tensor by stacking the samples along the first dimension:
$$\Gamma \in \mathbb{R}^{N \times I_1 \times I_2}, \qquad \Gamma(i, :, :) = x_i, \quad i = 1, 2, \ldots, N \tag{21}$$

Then the HOSVD decomposes the training tensor Γ as
$$\Gamma = \mathcal{Z} \times_2 U_2 \times_3 U_3 \tag{22}$$
where $U_2$, $U_3$ are the multidimensional projection matrices and $\mathcal{Z}$ is the hidden layer input tensor. For i = 2, 3, $U_i$ can be computed from the standard SVD of the unfolding matrix $\Gamma_{(i)}$, i.e., $\Gamma_{(i)} = U_i \Sigma_i V_i^T$. $U_i$ is the orthogonal matrix containing the orthonormal vectors that span the column space of the matrix unfolding $\Gamma_{(i)}$. Then the tensor $\mathcal{Z}$ can be obtained by using the inversion formula
$$\mathcal{Z} = \Gamma \times_2 U_2^T \times_3 U_3^T \tag{23}$$

Actually, we use the simple truncation of the first $\tilde{N}_1$ and $\tilde{N}_2$ columns of the matrices $U_2$, $U_3$ to calculate the hidden layer output matrix H, that is
$$H = \big(\Gamma \times_2 \tilde{U}_2^T \times_3 \tilde{U}_3^T\big)_{(1)} \tag{24}$$
where $\tilde{U}_2$ and $\tilde{U}_3$ consist of the first $\tilde{N}_1$ and $\tilde{N}_2$ columns of $U_2$ and $U_3$ respectively.

This truncation operation not only maintains the discriminative multidimensional projections but also greatly reduces the computational cost. With the multidimensional projection matrices, the input tensors are mapped into the feature subspace. Finally, we obtain the output weight matrix β of the hidden layer through (Eq 16).

In the TELM algorithm, the multidimensional feature projection matrices are utilized as the input weights, which effectively preserves the structure information of the input tensors. The output weights are calculated by solving the generalized inverse. There are no iterative learning steps, so the learning speed is very fast.
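Putting the pieces together, here is a sketch of TELM training and prediction under our reading of Eqs (21) to (24). Note that the hidden layer in Eq (24) is a linear projection in this reading (no activation g is applied), and the truncation sizes n1, n2 stand in for the paper's Ñ1 and Ñ2.

```python
import numpy as np

def telm_train(Gamma, T, n1, n2):
    # Gamma: (N, I1, I2) training tensor (Eq 21); T: (N, m) one-hot labels.
    G2 = np.moveaxis(Gamma, 1, 0).reshape(Gamma.shape[1], -1)  # Gamma_(2)
    G3 = np.moveaxis(Gamma, 2, 0).reshape(Gamma.shape[2], -1)  # Gamma_(3)
    U2 = np.linalg.svd(G2, full_matrices=False)[0][:, :n1]     # truncated U2
    U3 = np.linalg.svd(G3, full_matrices=False)[0][:, :n2]     # truncated U3
    Z = np.einsum('nij,ia,jb->nab', Gamma, U2, U3)  # Gamma x2 U2^T x3 U3^T (Eq 23)
    H = Z.reshape(len(Gamma), -1)                   # mode-1 unfolding of Z (Eq 24)
    beta = np.linalg.pinv(H) @ T                    # output weights (Eq 16)
    return U2, U3, beta

def telm_predict(x, U2, U3, beta):
    # Project a single I1 x I2 image with the learned matrices and classify.
    h = (U2.T @ x @ U3).reshape(1, -1)
    return int(np.argmax(h @ beta))
```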

4 Experiments

In this section, we report the experimental results and evaluate the performance of the proposed method. The recognition accuracy (RA) indicator is used as the assessment standard and is defined as
$$RA = \frac{Num_c}{Num} \times 100\% \tag{25}$$
where $Num_c$ stands for the number of correctly recognized samples and $Num$ is the total number of testing samples.

All the experiments were conducted on a machine with a 2.50 GHz Intel Core processor and 8 GB of memory. MATLAB 2012a was used as the simulation software.

4.1 Multispectral palmprint database

We conducted the experiments on the PolyU multispectral palmprint database offered by the Hong Kong Polytechnic University [11, 21, 34, 35]. All the images were collected from 250 volunteers (195 males and 55 females) aged from 20 to 60 years old. The acquisition was completed in two separate sessions, with an interval of about 9 days between them. In each session, every subject provided 6 images of the left palm and 6 of the right palm. The palmprint images were acquired at four spectral bands, i.e., Red, Green, Blue and NIR. For each band, there are 6,000 images obtained from 500 different palms in total. Fig 10 shows some multispectral palmprint samples in the PolyU database.

Fig 10. Demonstration of multispectral palmprint images in the PolyU database.

Each row shows a multispectral palmprint sample.

https://doi.org/10.1371/journal.pone.0178432.g010

All the original images in the database were illuminated uniformly. To verify the robustness of our method against illumination variation, we manually generated noised data by multiplying the palmprint images by an uneven illumination image, as shown in Fig 11. In the experiments, the 12,000 original palmprint images captured at the four spectral bands in the first session were used as the training samples, while the remaining ones, with the lighting noise added, were taken as the testing samples.

Fig 11. Demonstration of how to generate a noised palmprint image.

https://doi.org/10.1371/journal.pone.0178432.g011

4.2 Parameter discussion

We conducted several experiments to investigate the effects of different settings in FABEMD. The results are shown in Fig 12. To test the influence of the number K of BIMFs, we gradually increased it from 1 to 5. Within the FABEMD pipeline, an illumination compensation operation was performed based on the residue; to verify its actual contribution, a comparison was made between two experiments, with and without illumination compensation. We also compared the two ways (d1 or d2) of determining the gross window size $w_{en\text{-}g}$ for the order-statistics filters. All these trials were carried out on the original images and the noised images, respectively.

Fig 12. Demonstration of the effects of different settings in FABEMD: (a) experiments without illumination compensation, and (b) experiments with illumination compensation.

https://doi.org/10.1371/journal.pone.0178432.g012

From Fig 12(a) and 12(b), we can conclude that the illumination compensation operation significantly improves the robustness of the method against variations in the lighting condition. As shown in Fig 12(a), without this operation, the recognition accuracy drops sharply when the images are corrupted by uneven illumination. Meanwhile, when illumination compensation is applied, the results on the original images and the noised images are nearly the same (Fig 12(b)). It can also be seen that the recognition accuracy increases rapidly as K becomes larger, and the results tend to be optimal when K is 4. In addition, the accuracy obtained by using d1 as the window size is slightly higher than that obtained by using d2.

Table 1 lists the recognition accuracies for different parameters of the weighted Fisher criterion. Here, k is the number of nearest neighbors used for calculating the contribution factor and δ indicates the spread of the Gaussian. It can be inferred that the value of k has a great influence on the recognition accuracy. As shown in Table 1, within the given range, a larger k usually yields a higher recognition accuracy; the highest recognition accuracy is obtained when k = 6 and δ = 5.

Table 1. Recognition results with different parameters in the weighted Fisher criterion.

https://doi.org/10.1371/journal.pone.0178432.t001

Fig 13 shows the performance of the tensor-based extreme learning machine with different numbers of hidden nodes. The recognition accuracy grows as the number of hidden nodes increases progressively, and it converges to the optimal accuracy when the truncation sizes $\tilde{N}_1$ and $\tilde{N}_2$ are large enough. In our experiments, we set $\tilde{N}_1$ and $\tilde{N}_2$ to such sufficiently large values.

Fig 13. Performance of the tensor-based extreme learning machine with different numbers of hidden nodes.

https://doi.org/10.1371/journal.pone.0178432.g013

4.3 Results analysis of the proposed method

Each spectral band may capture some specific and complementary palm features, providing different information for palmprint recognition. Table 2 illustrates the quantitative results of the proposed method tested with different combinations of the four spectral bands. Several findings can be obtained from the table. In terms of palmprint recognition based on a single spectral band, the Red and NIR bands achieve higher recognition accuracies than the Blue and Green bands. This is because the images captured at the Red and NIR spectral bands contain additional palm vein information, which plays an important role in distinguishing images that share similar palm lines. In addition, it can be observed that the recognition accuracy of fusing multiple spectral bands is higher than that of any single band. For multispectral fusion, however, using more bands does not always achieve better recognition accuracy. For example, the accuracy of the combination of Red and NIR is 99.47%, which is higher than the result of fusing the Blue, Green and NIR bands. We can also find that the performance of the proposed method is seldom affected by the uneven lighting condition.

Table 2. Recognition results by different combinations of the spectral bands.

https://doi.org/10.1371/journal.pone.0178432.t002

In order to verify the effectiveness of the proposed fusion strategy, a comparison was made with the sum rule based and the Fisher criterion based image fusion. The results are reported in Table 3. It is evident that for any fusion combination, the proposed weighted Fisher criterion consistently and significantly outperforms both comparison methods.

Table 3. Performance comparison with different fusion rules.

https://doi.org/10.1371/journal.pone.0178432.t003

Another comparison was made by using different classifiers. The KNN, ELM and TELM were compared in terms of recognition accuracy and computational time. Table 4 depicts the recognition accuracies with different fusion combinations. It can be found that the TELM yields the highest recognition accuracy. For any spectral band combination, the result of TELM is much higher than that of ELM, so we can conclude that TELM is an effective improvement over ELM. Compared with KNN, the TELM also maintains an obvious advantage. As for the computational cost shown in Table 5 (here, time refers to the computational time over the entire database), it can be seen that the TELM costs the least computational time. In comparison with ELM, the TELM tends to be optimized with fewer hidden nodes, resulting in much less computational time. Although the KNN does not need a training process, it executes a matching operation against each reference sample during classification, making it the most computationally expensive. Overall, the TELM outperforms both the KNN and ELM.

Table 4. Performance comparison with different classifiers.

https://doi.org/10.1371/journal.pone.0178432.t004

In order to further evaluate the proposed fusion rule and classification method, Cumulative Match Characteristic curves were generated using the sum rule, the Fisher criterion and the weighted Fisher criterion for image fusion, and the KNN, the ELM and the TELM for classification, respectively. As seen from Fig 14, the proposed method (weighted Fisher criterion + TELM) has the highest rank-1 recognition accuracy. At the same time, its curve lies closer to the upper left corner of the plots than those of the other methods. So we can conclude that the weighted Fisher criterion based image fusion and the TELM classifier are superior to all the other methods, which is quite consistent with the results reported in Tables 3 and 4.

Fig 14. Performance comparison of different fusion and classification methods in terms of Cumulative Match Characteristic curves.

https://doi.org/10.1371/journal.pone.0178432.g014

Table 6 shows the comparison results with some state-of-the-art multispectral palmprint recognition methods, including an image-level fusion method, a matching score-level fusion method and a quaternion matrix based method. We can find that the performance of the method in [25] degrades seriously when the palmprint images are corrupted by uneven illumination. Although the methods in [11] and [21] are rarely affected by the illumination change, they cannot provide a recognition accuracy as high as ours. Among the four methods, the proposed one attains the highest recognition accuracy under both even and uneven illumination conditions.

Table 6. Performance comparison with different multispectral palmprint recognition methods.

https://doi.org/10.1371/journal.pone.0178432.t006

Table 7 gives the average execution time of each step when using the proposed method to recognize the identity of a single multispectral palmprint sample. It should be noted that the results for image fusion and palmprint classification are testing times, with the fusion coefficients and the TELM model calculated in advance. These parameters only need to be computed once, so their one-off training times are excluded. As shown in the table, the proposed method is fast enough for real-time applications.

5 Conclusions

In this paper, we have investigated an illumination-invariant multispectral palmprint recognition method. It combined the information across multiple spectral bands (Blue, Green, Red and NIR) by performing fusion at the image level. Each image captured at a single spectral band was decomposed into several BIMFs and a residue using FABEMD. The residue was then used to estimate the illumination condition of the palmprint, based on which the BIMFs were adjusted. To keep the final recognition accuracy of images in the fusion space as high as possible, a weighted Fisher criterion considering the different contributions of image samples was proposed to find the fusion coefficients. Furthermore, an improved extreme learning machine based on tensor decomposition was utilized for feature extraction and classification. It employed a higher-order singular value decomposition technique to determine the input weights of a single-hidden-layer feedforward neural network, which could fully maintain the structure features of two-dimensional signals. Experiments carried out on the PolyU multispectral palmprint database under different illumination conditions showed that our proposed method could achieve very competitive results with great robustness against illumination variation.

Supporting information

Acknowledgments

The authors would like to thank the Hong Kong Polytechnic University for sharing the multispectral palmprint database.

Author Contributions

  1. Conceptualization: LBL XBX.
  2. Data curation: XMZ.
  3. Formal analysis: XMZ DPS.
  4. Funding acquisition: XMZ.
  5. Investigation: LBL DPS.
  6. Methodology: LBL XBX.
  7. Project administration: XMZ LBL.
  8. Resources: XBX.
  9. Software: LBL.
  10. Supervision: XMZ.
  11. Validation: XBX.
  12. Visualization: LBL DPS.
  13. Writing – original draft: LBL XMZ.
  14. Writing – review & editing: LBL XMZ.

References

  1. Jain AK. Technology: biometric recognition. Nature. 2007; 449(7158):38–40. pmid:17805286
  2. Jain AK, Flynn P, Ross AA. Handbook of Biometrics. New York: Springer; 2007.
  3. O'Gorman L. Comparing passwords, tokens, and biometrics for user authentication. Proceedings of the IEEE. 2003; 91(12):2021–2040.
  4. Dai J, Zhou J. Multifeature-based high-resolution palmprint recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2011; 33(5):945–957.
  5. Cappelli R, Ferrara M, Maio D. A fast and accurate palmprint recognition system based on minutiae. IEEE Transactions on Systems Man and Cybernetics Part B: Cybernetics. 2012; 42(3):956–962.
  6. Qian Z, Kumar A, Gang P. A 3D feature descriptor recovered from a single 2D palmprint image. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2016; 38(6):1272–1279. pmid:27164564
  7. Zhang D, Kanhangad V, Luo N, Kumar A. Robust palmprint verification using 2D and 3D features. Pattern Recognition. 2010; 43(1):358–368.
  8. Kong A, Zhang D, Kamel M. A survey of palmprint recognition. Pattern Recognition. 2009; 42(7):1408–1418.
  9. Guo Z, Zhang D, Zhang L, Liu W. Feature band selection for online multispectral palmprint recognition. IEEE Transactions on Information Forensics and Security. 2012; 7(3):1094–1099.
  10. Xu Y, Fan Z, Qiu M, Zhang D, Yang JY. A sparse representation method of bimodal biometrics and palmprint recognition experiments. Neurocomputing. 2013; 103(2):164–171.
  11. Han D, Guo Z, Zhang D. Multispectral palmprint recognition using wavelet-based image fusion. In: Proceedings of the IEEE International Conference on Signal Processing (ICSP), Beijing, China, 26–29 October 2008, pp. 2074–2077.
  12. Bouchemha A, Doghmane N, Naithamoud MC, Naitali A. Multispectral palmprint recognition methodology based on multiscale representation. Journal of Electronic Imaging. 2015; 24(4):043005.
  13. Hao Y, Sun Z, Tan T, Ren C. Multispectral palm image fusion for accurate contact-free palmprint recognition. In: Proceedings of the 15th IEEE International Conference on Image Processing (ICIP), San Diego, California, USA, 12–15 October 2008, pp. 281–284.
  14. Cunha ALD, Zhou J, Do MN. The nonsubsampled contourlet transform: theory, design, and applications. IEEE Transactions on Image Processing. 2006; 15(10):3089–3101. pmid:17022272
  15. Masood H, Asim M, Mumtaz M, Mansoor AB. Combined contourlet and non-subsampled contourlet transforms based approach for personal identification using palmprint. In: Proceedings of Digital Image Computing: Techniques and Applications (DICTA), Melbourne, Australia, 1–3 December 2009, pp. 408–415.
  16. Raghavendra R, Busch C. Novel image fusion scheme based on dependency measure for robust multispectral palmprint recognition. Pattern Recognition. 2014; 47(6):2205–2221.
  17. Hao Y, Sun Z, Tan T. Comparative studies on multispectral palm image fusion for biometrics. In: Proceedings of the Asian Conference on Computer Vision (ACCV), Tokyo, Japan, 18–22 November 2007, pp. 12–21.
  18. Gopinath R. The phaselet transform-an integral redundancy nearly shift-invariant wavelet transform. IEEE Transactions on Signal Processing. 2003; 51(7):1792–1805.
  19. Lim WQ. The discrete shearlet transform: a new directional transform and compactly supported shearlet frames. IEEE Transactions on Image Processing. 2010; 19(5):1166–1180. pmid:20106737
  20. Xu X, Lu L, Zhang X, Lu H, Deng W. Multispectral palmprint recognition using multiclass projection extreme learning machine and digital shearlet transform. Neural Computing and Applications. 2014; 27(1):1–11.
  21. Zhang D, Guo Z, Gong Y. An online system of multispectral palmprint verification. IEEE Transactions on Instrumentation and Measurement. 2010; 59(2):480–490.
  22. Khan Z, Mian A, Hu Y. Contour Code: robust and efficient multispectral palmprint encoding for human recognition. In: Proceedings of the International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011, pp. 1935–1942.
  23. Zhang D, Guo Z, Lu G, Zhang L, Liu Y, Zuo W, et al. Online joint palmprint and palmvein verification. Expert Systems with Applications. 2011; 38(3):2621–2631.
  24. Hong D, Liu W, Su J, Pan Z, Wang G. A novel hierarchical approach for multispectral palmprint recognition. Neurocomputing. 2015; 151:511–521.
  25. Xu X, Guo Z, Song C, Li Y. Multispectral palmprint recognition using a quaternion matrix. Sensors. 2012; 12(4):4633–4647. pmid:22666049
  26. Bhuiyan SMA, Adhami RR, Khan JF. Fast and adaptive bidimensional empirical mode decomposition using order-statistics filter based envelope estimation. EURASIP Journal on Advances in Signal Processing. 2008; 2008(164):1–18.
  27. Demir B, Erturk S. Empirical mode decomposition of hyperspectral images for support vector machine classification. IEEE Transactions on Geoscience and Remote Sensing. 2010; 48(11):4071–4084.
  28. Ahmed MU, Mandic DP. Image fusion based on fast and adaptive bidimensional empirical mode decomposition. In: Proceedings of the International Conference on Information Fusion (ICIF), Edinburgh, UK, 26–29 July 2010, pp. 1–6.
  29. Guo H, Zhang Q, Nandi AK. Feature extraction and dimensionality reduction by genetic programming based on the Fisher criterion. Expert Systems. 2008; 25(5):444–459.
  30. Zhong D, Han J, Zhang X, Liu Y. Neighborhood discriminant embedding in face recognition. Optical Engineering. 2010; 49(7):253–258.
  31. Costantini R, Sbaiz L, Süsstrunk S. Higher order SVD analysis for dynamic texture synthesis. IEEE Transactions on Image Processing. 2008; 17(1):42–52. pmid:18229803
  32. Huang G, Zhu Q, Siew CK. Extreme learning machine: theory and applications. Neurocomputing. 2006; 70(1–3):489–501.
  33. Huang G, Zhou H, Ding X, Zhang R. Extreme learning machine for regression and multiclass classification. IEEE Transactions on Systems Man and Cybernetics Part B: Cybernetics. 2012; 42(2):513–529.
  34. Zhang D, Kong W, You J, Wong M. On-line palmprint identification. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2003; 25(9):1041–1050.
  35. Zhang D, Guo Z, Gong Y. Empirical study of light source selection for palmprint recognition. Pattern Recognition Letters. 2010; 32(2):120–126.