Correction
19 Dec 2014: The PLOS ONE Staff (2014) Correction: Palm-Vein Classification Based on Principal Orientation Features. PLOS ONE 9(12): e116446. https://doi.org/10.1371/journal.pone.0116446
Abstract
Personal recognition using palm–vein patterns has emerged as a promising alternative for human recognition because of its uniqueness, stability, live-body identification, flexibility, and resistance to spoofing. With the expanding application of palm–vein pattern recognition, the corresponding growth of databases has resulted in long response times. To shorten the response time of identification, this paper proposes a simple and useful classification for palm–vein identification based on principal direction features. In the registration process, the Gaussian-Radon transform is adopted to extract the orientation matrix, from which the principal direction of a palm–vein image is computed. The database can be classified into six bins based on the value of the principal direction. In the identification process, the principal direction of the test sample is first extracted to ascertain the corresponding bin. One-by-one matching with the training samples is then performed in that bin. To improve recognition efficiency while maintaining good recognition accuracy, the two neighborhood bins of the corresponding bin are also searched to identify the input palm–vein image. Evaluation experiments are conducted on three different databases, namely, PolyU, CASIA, and the database of this study. Experimental results show that the searching range of one test sample in PolyU, CASIA, and our database by the proposed method for palm–vein identification can be reduced to 14.29%, 14.50%, and 14.28%, with retrieval accuracies of 96.67%, 96.00%, and 97.71%, respectively. With 10,000 training samples in the database, the execution time of the identification process by the traditional method is 18.56 s, while that by the proposed approach is 3.16 s. The experimental results confirm that the proposed approach is more efficient than the traditional method, especially for a large database.
Citation: Zhou Y, Liu Y, Feng Q, Yang F, Huang J, Nie Y (2014) Palm-Vein Classification Based on Principal Orientation Features. PLoS ONE 9(11): e112429. https://doi.org/10.1371/journal.pone.0112429
Editor: Ian McLoughlin, The University of Science and Technology of China, China
Received: August 20, 2014; Accepted: September 28, 2014; Published: November 10, 2014
Copyright: © 2014 Zhou et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The authors confirm that, for approved reasons, some access restrictions apply to the data underlying the findings. Data are available from the following locations: Figshare (DOI: http://dx.doi.org/10.6084/m9.figshare.1204711); the Chinese Academy of Sciences' Institute of Automation (CASIA) (available: http://biometrics.idealtest.org/), where readers can download the data without restriction; and the PolyU Multispectral Palmprint Database (PolyU database) (available: http://www4.comp.polyu.edu.hk/~biometrics/).
Funding: This work was supported in part by the Natural Science Foundation of China (http://www.nsfc.gov.cn/) under Grant No. 61271155. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. There is no other funding to declare.
Competing interests: The authors have declared that no competing interests exist.
Introduction
The development and popularity of computers and the Internet, particularly electronic commerce, have made biometrics-based automated human identification increasingly important and indispensable [1]. Vein recognition is an automated human identification technology based on the vein pattern, the vast network of blood vessels under the skin of the human hand [2]–[5]. Compared with other biometric technologies, such as fingerprint [6], [7], palmprint [8]–[10], [26], and iris [11] recognition, palm–vein recognition [5], [39], [40] has the advantages of unique and abundant identity information, live-body identification, and resistance to counterfeiting. These advantages make palm–vein recognition a promising and effective technology with the merits of high accuracy and a wide application range.
Existing palm–vein identification algorithms focus on improving the accuracy of one-to-one matching and can be broadly divided into three categories: (1) subspace learning approaches [13], [14], such as locality preserving projection (LPP) in [13] and the scale-invariant feature transform (SIFT) in [14]; (2) statistics-based methods [15]–[17], such as image-invariant moments [15], [16] and the local binary pattern (LBP) with its variant, the local derivative pattern (LDP) [17]; and (3) texture-based coding [12], [18]–[25], such as the Radon transform [12], the Gaussian function and its variants [18]–[21], and Gabor-based methods [22]–[25]. Among the various palm–vein matching schemes, orientation-based coding, as a valid representation of palm–vein patterns, has the advantages of high accuracy, robustness to illumination, and fast implementation.
The algorithms described in the literature are designed for delicate one-to-one comparisons in palm–vein verification, most achieving accuracy above 95% with a response time within 1 s, which basically meets the real-time requirements of palm–vein verification. In one-to-many applications of palm–vein identification, however, the input palm–vein pattern is matched against all palm–vein patterns in a database. If the database is very large, real-time requirements may not be fulfilled. For example, the Gabor-based method in [18] has the shortest response time of 0.70 ms for one matching; for a database of 10^6 palm–vein patterns, the response time of identification would be nearly 11.67 min. To shorten the response time of palm–vein identification, palm–vein images should be classified based on certain features: palm–vein images can be assigned to several bins rather than to only one, so the input palm–vein pattern must be matched against only a subset of the palm–vein patterns in the entire database. The problems of fingerprint classification [27]–[29] and palmprint classification [30], [31] have been addressed in the literature, but studies remain limited in the area of palm–vein identification. Besides, approaches proposed for fingerprints or palmprints are difficult to adapt to palm–vein recognition because of particular characteristics of the palm–vein, such as irregular texture structures and low contrast.
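The response-time figure above is simple arithmetic; a minimal sanity check, assuming 0.70 ms per one-to-one matching and a database of 10^6 templates:

```python
# Back-of-the-envelope check of the exhaustive-search response time:
# 10**6 one-to-one matchings at 0.70 ms each.
match_time_ms = 0.70
db_size = 10 ** 6

total_seconds = match_time_ms * db_size / 1000.0  # 700 s
total_minutes = total_seconds / 60.0
print(round(total_minutes, 2))  # -> 11.67
```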
To solve the previously mentioned problems, this paper proposes a simple and useful classification for palm–vein identification based on principal direction features. The flowchart of the palm–vein identification approach is shown in Fig. 1. First, the input palm–vein image undergoes pre-processing steps, including ROI extraction and nonlinear enhancement, to extract a stable ROI image of size 128×128. Second, the principal direction and the orientation matrix of the ROI image are identified. An input palm–vein image is assigned to one of the pre-specified bins based on the principal direction and is uniquely identified by one-to-one matching based on the orientation matrix. The main contributions of this paper can be summarized as follows:
- Local features (the orientation matrix) extraction: Exploiting the continuous line-like features, the Gaussian-Radon transform is proposed to effectively determine the local orientation of each pixel within its neighborhood. The orientation matrix, composed of the local orientations of all pixels in the palm–vein image, is subsequently obtained.
- Global features (the principal direction) extraction: An image-pyramid composed of three palm–vein images is designed in a proportion of 2 to 1 to 0.5. At the same time, a filter-pyramid of three Gaussian-Radon filters is designed with the same proportions as the image-pyramid. Three orientation matrices are then obtained from the image-pyramid and the Gaussian-Radon filter-pyramid and are used to compute the principal direction of the input palm–vein image. The principal direction can thus be used to classify all palm–vein images into specific sub-classes.
The rest of this paper is organized as follows. In Method, Section 1 describes the principal direction extraction, Section 2 details the palm–vein classification method, and Section 3 discusses the retrieval efficiency and the response time of the classification method. Experimental results are then presented and discussed, followed by the conclusions.
Method
1. Extraction of principal orientation features
Palm–vein images contain significant continuous line-like characteristics and can be viewed as texture images. Fig. 2 shows some images from the database of this study. Thus, texture-based coding methods can be used to extract palm–vein features. Among these, orientation-based coding is one of the most efficient representations of palm–vein images because of its high accuracy, robustness to illumination, and fast matching [26].
By repeated observation, the palm–vein image is found to have two main types of orientation-based features (Fig. 3). First is the global orientation-based feature: in general, the continuous line-like features of a palm–vein image exhibit a certain tendency in direction, called the principal direction. For example, as shown in Fig. 3, the direction of the red lines is dominant, at 120° with respect to the x-axis. Second is the local orientation-based feature: the veins in the neighborhood of a pixel have directions called the local orientation. The matrix composed of the local orientations of all pixels is called the orientation matrix. The principal direction of the palm–vein image is used to classify all palm–vein images into specific bins, while the orientation matrix is adopted to uniquely identify the input palm–vein image by one-to-one matching with the candidates in the corresponding bin. Section 1.1 introduces the extraction of the orientation matrix of the palm–vein image based on the Gaussian-Radon transform, while Section 1.2 describes the calculation of the principal direction of the palm–vein image based on the orientation matrix.
1.1 Orientation matrix extraction based on the Gaussian-Radon transform.
According to the characteristics of the palm–vein image and based on the Radon transform [33], the literature [12], [32] proposed the Modified Finite Radon Transform (MFRAT), defined over the finite grid Z_p × Z_p, where Z_p = {0, 1, …, p−1} and p is a positive integer. The MFRAT of a real function f[i, j] on the finite grid Z_p × Z_p is defined as:

r[L_k] = MFRAT_f(k) = Σ_{(i,j)∈L_k} f[i, j],  (1)

where L_k denotes the set of points that comprise a line on the finite grid Z_p × Z_p:

L_k = {(i, j) : j = k(i − i_0) + j_0, i ∈ Z_p},  (2)

where (i_0, j_0) denotes the center point of Z_p × Z_p, and k represents the corresponding slope/orientation of L_k. Experiments have shown that six orientations give the best results [34]. Finally, the direction of the center point (i_0, j_0) of Z_p × Z_p can be calculated by a winner-take-all rule (veins are darker than the background, so the smallest summed response wins):

k* = arg min_k r[L_k], k = 1, 2, …, 6,  (3)

where k* is the winner index of the orientation of the center point (i_0, j_0). The orientations of all pixels can be calculated by moving the center of the lattice Z_p × Z_p over the image pixel-by-pixel. Fig. 4 shows an example of MFRAT of size 16×16 at the directions of 0°, π/6, 2π/6, 3π/6, 4π/6, and 5π/6. In the 16×16 MFRAT, each line L_k is 4 pixels wide.
The MFRAT method has the following steps. First, the MFRAT filters at the directions of 0°, π/6, 2π/6, 3π/6, 4π/6, and 5π/6 are designed: the line L_k (the black and gray pixels in MFRAT, as shown in Fig. 4) is assigned the gray value of 1, and all other pixels are 0. Second, the convolution of the six filters with a palm–vein image is performed. Finally, the winner index of the orientation of each pixel is obtained using the winner-take-all rule and is taken as the local orientation of that pixel.
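The steps above can be sketched as follows. This is a minimal illustration, not the authors' code: the lattice size, the line-drawing routine, and the use of `argmin` (veins are darker than the background) are assumptions of this sketch.

```python
import numpy as np

def mfrat_filters(p=16, directions=6):
    """Binary line masks L_k through the center of a p x p lattice,
    one per orientation k * pi / directions (0, pi/6, ..., 5*pi/6)."""
    filters = []
    c = (p - 1) / 2.0  # center of the lattice
    for k in range(directions):
        theta = k * np.pi / directions
        mask = np.zeros((p, p))
        # march along the line through the center at angle theta
        for t in np.linspace(-p, p, 4 * p):
            i = int(np.round(c + t * np.sin(theta)))
            j = int(np.round(c + t * np.cos(theta)))
            if 0 <= i < p and 0 <= j < p:
                mask[i, j] = 1.0
        filters.append(mask)
    return filters

def pixel_orientation(patch, filters):
    """Winner-take-all rule of Equation (3): veins are dark, so the
    line with the smallest summed gray level wins."""
    responses = [np.sum(patch * f) for f in filters]
    return int(np.argmin(responses))

# Toy example: a patch with a dark horizontal stroke through the center.
patch = np.full((16, 16), 200.0)
patch[7:9, :] = 10.0                      # dark line at ~0 degrees
filters = mfrat_filters()
print(pixel_orientation(patch, filters))  # -> 0 (index of the 0-degree line)
```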
The MFRAT method can effectively extract finite-length lines with an adjustable line width. Thus, the MFRAT method is more suitable than the Radon method for extracting the orientation of the lines in the palm–vein image. However, MFRAT has the following limitations. First, the lattice in MFRAT is a square. As the lines L_k change with the orientation k, the initial response of MFRAT differs across directions; that is, the total number of pixels with the value of 1 changes with direction. The initial conditions of the competitive coding then differ, which influences the accuracy of the coding. For example, the initial response of MFRAT is smallest at the directions of 0° and 3π/6 and largest near 45°. Second, the pixels in the lattice contribute equally to the center pixel regardless of their distance from it, which affects the coding value of the center pixel. For example, as shown in Fig. 3, when the center pixel moves to region A, the contribution of distant veins remains as large as that of nearby veins.
Considering the previously described limitations, the Gaussian-Radon transform is proposed. The Gaussian-Radon transform of a pixel (i_0, j_0) in palm–vein image I is defined as:

g[L_k] = Σ_{(i,j)∈L_k} G(i, j) · f[i, j],  (4)

where (i, j) denotes the pixels in the neighborhood of (i_0, j_0), and L_k refers to the lines with the slope of k, which satisfy Equation (2). f[i, j] is defined as:

f[i, j] = I(i, j),  (5)

where I(i, j) is the gray level of the pixel (i, j) in palm–vein image I. And G(i, j) is defined as:

G(i, j) = (1/C) · exp(−((i − i_0)² + (j − j_0)²) / (2σ²)),  (6)

where σ is the standard deviation of the Gaussian envelope and C is a normalization constant. The weighting rule of the Gaussian function ensures that the contribution of a pixel in the neighborhood of the center point decreases as its distance from the center point increases. Besides, the normalization makes the initial responses of the Gaussian-Radon filters the same at all directions. Fig. 5 illustrates an example of Gaussian-Radon filters, whose size is 63×63, at six directions. In these Gaussian-Radon filters, each line L_k is 5 pixels wide and σ is 14.
Similarly, the convolution of the six Gaussian-Radon filters with a palm–vein image is performed. The orientation of the center point (i_0, j_0) can then be calculated by the winner-take-all rule, as expressed in Equation (7), where k* is the winner index of the orientation at the center point (i_0, j_0). With six Gaussian-Radon filters, the value range of the orientation k* is from 1 to 6.

k* = arg min_k g[L_k], k = 1, 2, …, 6.  (7)

The orientation matrix (OM) is obtained by moving the center point over the image pixel-by-pixel:

OM = (k*_{u,v}), 1 ≤ u ≤ m/c, 1 ≤ v ≤ n/c,  (8)

where m×n is the size of the image, (m/c)×(n/c) is the size of OM, and c denotes the line width of the lines L_k. The size of OM is inversely proportional to the line width.
1.2 Computing the principal direction of the palm–vein image.
Because of the irregular texture structures of the palm–vein and the low contrast of palm–vein images, the approaches used in fingerprint and palmprint classification are difficult to employ for palm–vein classification. However, repeated observation reveals a specific tendency in the directions of the continuous line-like features of a palm–vein image. As shown in Fig. 6(a), most of the directions of the palm–vein texture structures are horizontal, that is, 0° with respect to the x-axis. Those in Figs. 6(b), 6(c), 6(d), 6(e), and 6(f) prefer the directions of 30°, 60°, 90°, 120°, and 150°, respectively. This tendency of direction is regarded as the principal direction of the palm–vein image. Principal direction features are extracted using the following algorithm:
- The image-pyramid construction: An input palm–vein image is defined as the original image I_0, whose size is 128×128. I_0 is doubled to obtain I_1, with the size of 256×256, by bicubic interpolation, and I_0 is reduced to the half-size image I_2, with the size of 64×64, by down-sampling. The image-pyramid composed of I_1, I_0, and I_2 in a proportion of 2 to 1 to 0.5 is thus acquired.
- The Gaussian-Radon filter-pyramid construction: Three Gaussian-Radon filters, G_1, G_0, and G_2, are designed, with their corresponding parameters in the same proportion as the image-pyramid.
- Orientation matrix extraction: Three orientation matrices, OM_1, OM_0, and OM_2, are obtained by the convolution of the image-pyramid with the Gaussian-Radon filter-pyramid, based on the extraction method detailed in Section 1.1.
- The statistical distribution of the local orientations in OM: Using the original palm–vein image I_0 as an example, the statistical distribution of the local orientations in OM_0 is obtained, and the global orientation of the image I_0 is then calculated by:

d_0 = arg max_t Σ_{u,v} δ(OM_0(u, v), t), t = 1, 2, …, 6,  (9)

where δ(·,·) is the Kronecker delta, d_0 denotes the winner index of the global orientation of the image I_0, m×n is the original size of I_0, and t represents the possible values of the local orientation, i.e., 1–6. Similarly, the winner indexes d_1 and d_2 of the global orientations of the images I_1 and I_2 in the image-pyramid can be calculated.
- Calculating the principal direction: Finally, the principal direction d of the original palm–vein image I_0 is defined as

d = mode(d_0, d_1, d_2),  (10)

where the mode operation returns the most frequent value in the set [37].

Computing the principal direction of the original palm–vein image over the image-pyramid in this way ensures the stability of the feature against noise, the contraction and relaxation of veins, and the elasticity of the palm.
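The mode step of Equation (10) can be sketched as follows. The tie-breaking choice (falling back to the original image's index when all three pyramid levels disagree) is an assumption of this sketch, not stated in the paper.

```python
from collections import Counter

def principal_direction(d0, d1, d2):
    """Mode of the three global-orientation indexes from the
    image-pyramid (Equation (10)). When all three differ, fall back
    to d0, the index of the original image (an assumed tie-break)."""
    top, freq = Counter([d0, d1, d2]).most_common(1)[0]
    return top if freq > 1 else d0

# Two pyramid levels agree on orientation index 4 (i.e. 90 degrees),
# so the image is assigned principal direction 4 despite the noisy level.
print(principal_direction(4, 4, 2))  # -> 4
```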
2. Palm–vein classification
In the existing literature, approaches for palm–vein identification place all palm–vein images into one database. During the identification process, the correct correspondence of an input palm–vein image is therefore obtained by matching the input image against all the samples in the database. This exhaustive search, called the traditional method here, has difficulty meeting the real-time requirements of a palm–vein identification system, especially with a large database.
To solve the identified problems, a simple and useful classification method for palm–vein identification based on principal direction features is proposed. In the registration phase, the registration samples in the database are assigned to several bins. In the identification phase, the test sample is only required to match one-by-one with the samples in the corresponding bin. With this palm–vein classification approach, the number of matchings for one test sample is significantly reduced and high recognition speed is achieved, while good recognition accuracy is maintained. The classification process is as follows:
The principal direction d of the palm–vein image is extracted by the method described in Section 1.2. Based on the principal direction d, the palm–vein database (DB) is classified into six sub-classes, expressed as C_1, C_2, …, C_6. The construction of the six sub-classes is shown in Fig. 7. C_1 denotes the bin that contains the palm–vein images with the principal direction d = 1; the palm–vein images in the bin C_1 have the orientation tendency of 0°. Similarly, C_2, …, C_6 denote the bins containing images with the principal directions d = 2, …, 6, and the palm–vein images in the bins C_2, …, C_6 have the orientation tendencies of 30°, 60°, 90°, 120°, and 150°, respectively.
Because of particular characteristics of the palm–vein, such as the low contrast and biased information of palm–vein images under different collection conditions, the principal directions of palm–vein images from the same palm may not be consistent; indeed, palm–vein images from the same palm may exhibit two or more principal directions. However, repeated experiments reveal an adjacency characteristic in the principal directions of palm–vein images from the same palm; that is, such images are assigned to adjacent bins. Therefore, in the identification process, the searching range is enlarged to the neighborhood bins when the true correspondence between the input and a database palm–vein image is not found in the corresponding bin.
Accuracy can clearly be traded off against efficiency: searching beyond the top class provides higher accuracy but lower efficiency. To improve recognition efficiency while maintaining good recognition accuracy, the following rules are formulated: (1) The search ends when the true correspondence between the input and a database palm–vein image is found. (2) If the true correspondence is not found in the corresponding bin C_j, the neighborhood bins C_{j−1} and C_{j+1} of C_j are searched. Adjacency of the sub-classes is defined based on the principal direction; because orientation is circular, principal directions of 30° and 150° are both neighbors of 0°. Therefore, the sub-classes C_2 and C_6 are in the neighborhood of C_1. Similarly, the sub-classes C_1 and C_3 are in the neighborhood of C_2, and so on.
3. Evaluation analysis
3.1 Retrieval efficiency and retrieval accuracy.
In palm–vein identification over a large database, uniquely identifying the input palm–vein image is a retrieval problem. In this study, retrieval refers to filtering out the bins of candidate palm–vein patterns for finer one-to-one matching of a given input palm–vein image until the palm–vein image is identified. In one-to-one matching, OM is regarded as the feature template, and the similarity measure of [18] is used to compute the matching score between two feature templates. If the matching score is larger than a predefined threshold, the search ends, and the retrieved candidate is taken as the true correspondence of the input image. The predefined threshold is a value near the matching score corresponding to the EER and is determined by trial and error.
Given that the purpose of the retrieval is to find the true correspondence between the input and the database palm–vein image, retrieval accuracy and retrieval efficiency must also be considered to evaluate the proposed method.
If one of the retrieved candidates originates from the same palm as the input, the retrieval is successful for this input palm–vein image. Otherwise, the retrieval is a failure. Therefore, retrieval accuracy is calculated by the percentage of the retrieval success of input palm–vein images. The higher the retrieval accuracy, the better the performance of the system.
For one test sample, retrieval efficiency is the percentage of palm–vein patterns in the database that must be retrieved for that one input pattern. As mentioned in Section 2, the search ends when the true correspondence between the input and a database palm–vein image is found; therefore, the number of retrieved palm–vein patterns varies with each query. Hence, over a test set, the average percentage of retrieved palm–vein patterns from the database across all input palm–vein patterns represents the retrieval efficiency. Because retrieval efficiency measures the searching range, a smaller searching range (lower retrieval efficiency) indicates better system performance.
To demonstrate the validity of the retrieval method, let N denote the total number of samples in the entire database and N_j denote the total number of samples in bin C_j (j = 1, 2, …, 6). The relationship between N and N_j is expressed as:

N = Σ_{j=1}^{6} N_j.  (11)

Let M_i^T denote the matching number of the ith sample by the traditional method; because every database sample must be matched,

M_i^T = N.  (12)

With M_i^P as the matching number of the ith sample by the proposed method, and based on the rules described in Section 2, the number of retrieved candidates differs for each input sample, resulting in different values of M_i^P among input samples:

M_i^P ≤ N_{j−1} + N_j + N_{j+1}, Q ≤ 3,  (13)

where Q is the number of bins that must be searched, N_j denotes the total number of samples in the corresponding bin C_j, and N_{j−1} and N_{j+1} denote the total numbers of samples in the neighborhood bins C_{j−1} and C_{j+1} of C_j. According to Equation (11), we can conclude:

N_{j−1} + N_j + N_{j+1} ≤ N,  (14)

that is, even a search over three bins by the proposed method requires no more matchings than the traditional method. Therefore, using Equations (12), (13), and (14):

M_i^P ≤ M_i^T,  (15)

which means that the matching number of the ith sample by the proposed method is no larger than that by the traditional method.

Given the total number of test samples S, the sums of matching numbers for the S test samples using the traditional and proposed methods can be represented by M^T and M^P, respectively, where

M^T = Σ_{i=1}^{S} M_i^T, M^P = Σ_{i=1}^{S} M_i^P.  (16)

Based on the above definition of retrieval efficiency, the retrieval efficiencies of the traditional and proposed methods, E^T and E^P, can be calculated as follows:

E^T = M^T / (S·N) × 100%,  (17)
E^P = M^P / (S·N) × 100%.  (18)

Therefore, using Equations (15)–(18), we can conclude:

E^P ≤ E^T,  (19)

which means that the retrieval efficiency (searching range) of S test samples by the proposed method is smaller than that by the traditional method, showing the superiority of the proposed method (see Section 3.2 in Results and Discussion for details). If the database has millions of samples, the matching number of one test sample is significantly reduced by the proposed method, considerably improving the speed of identification.
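A toy numerical check of Equations (11)–(15), with made-up bin sizes (the real distributions are in Table 1):

```python
# Hypothetical bin sizes N_j for C_1..C_6; N is their sum (Equation (11)).
bins = [295, 287, 260, 189, 241, 228]
N = sum(bins)

def matches_proposed(j):
    """Worst case of Equation (13): search bin C_j plus both neighbors
    (adjacency is circular over the six orientation bins)."""
    return bins[j] + bins[(j - 1) % 6] + bins[(j + 1) % 6]

worst = max(matches_proposed(j) for j in range(6))
# Equations (14)-(15): even the worst-case three-bin search stays below
# the exhaustive matching number N of the traditional method.
assert worst <= N
print(worst, N)  # -> 842 1500
```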
3.2 The response time of the identification process.
In the palm–vein identification process, the execution time lengthens as the number of samples in the database increases, making it difficult to meet the real-time requirement of the system. Whether by the traditional or the proposed method, the response time of the identification process is the sum of the durations of pre-processing, feature extraction, and one-to-many matching. We assume that t_pre, t_fe1, t_fe2, and t_match represent the response times of pre-processing, feature extraction by the traditional method, feature extraction by the proposed method, and one one-to-one matching, respectively. Therefore, the durations of the identification process for one test sample by the traditional and proposed methods are determined as follows:

Traditional method: T1 = t_pre + t_fe1 + t_M1,  (20)
Proposed method: T2 = t_pre + t_fe2 + t_M2,  (21)

where t_M1 and t_M2 denote the times of one-to-many matching for one test sample by the traditional method and the proposed method, respectively. The time of one-to-many matching for one test sample is the matching number of the test sample multiplied by the time of a single one-to-one matching, i.e., t_M1 = M_i^T · t_match and t_M2 = M_i^P · t_match. Using Equation (15), we can obtain:

T2 ≤ T1.  (22)

The duration of the identification process is mainly determined by the time of one-to-many matching. If the database has millions of samples, the matching time of one test sample by the proposed method becomes much shorter than that by the traditional method (see Section 3.3 in Results and Discussion for details).
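The response-time model of Equations (20)–(21) can be sketched numerically. The per-stage timings below are hypothetical values chosen only to illustrate the model (they are not the paper's measurements); the classification only changes the matching count.

```python
def identification_time(t_pre, t_fe, n_matches, t_match):
    """Response time for one query, following Equations (20)-(21):
    pre-processing + feature extraction + one-to-many matching."""
    return t_pre + t_fe + n_matches * t_match

# Hypothetical per-stage timings in seconds.
t_pre, t_fe, t_match = 0.05, 0.10, 0.0018
db_size = 10_000
retrieval_efficiency = 0.1436  # mean searching range of the proposed method

t_traditional = identification_time(t_pre, t_fe, db_size, t_match)
t_proposed = identification_time(
    t_pre, t_fe, round(db_size * retrieval_efficiency), t_match)
print(round(t_traditional, 2), round(t_proposed, 2))  # -> 18.15 2.73
```

With these assumed timings, the one-to-many matching term dominates, which is why shrinking the searching range shrinks the total response time almost proportionally.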
Results and Discussion
1. Ethics Statements
This study was approved by the Ethics Committees of Guangdong Wicrown Information Technology Co., Ltd and Southern Medical University. Participants' records/information were anonymized and de-identified prior to analysis; therefore, written informed consent was not obtained from the participants.
2. Database
Evaluation experiments are conducted on three different databases, including contact-based and non-contact databases. Two contact-based databases are employed. One is the PolyU Multispectral Palmprint Database (PolyU database) [35], in which all 6,000 images were acquired using a constrained device with finger-pegs in two sessions (six images in each session), with an average interval of nine days between the sessions. The other is the database created for this study, in which 1,224 images were also acquired using a constrained device, but without finger- or palm-pegs. The images in the second database were captured in two sessions (six images in each session), with an average interval of thirty days between the sessions.
The non-contact database in this study is the CASIA Multi-Spectral Palmprint Database [36] in which all the 1,200 images were acquired using the non-contact device in two sessions (three images in each session), with an average interval of one month between the two sessions.
Matching samples from the same session tends to yield better scores than matching across sessions because of the small intra-session variation, which would lead to an unreliable (over-optimistic) performance estimate. Therefore, the samples from the first session are used as database samples, while the remaining images serve as test samples.
The experiments were conducted using Matlab 2011a on an Intel Core i3-3240 CPU at 3.4 GHz with 4 GB RAM.
3. Evaluation experiments on the proposed classification method
The proposed classification method is evaluated by the distribution of the palm–vein images, retrieval efficiency and accuracy, and the response time of the identification process.
3.1 The distribution of palm–vein images.
The distribution of palm–vein images over the six sub-classes in the three databases by the proposed approach is shown in Table 1. The proposed classification method distributes images fairly evenly across sub-classes regardless of the database type. In particular, the proportions of the six categories in the PolyU database, which contains 6,000 samples, are 19.65%, 19.13%, 17.35%, 12.60%, 16.06%, and 15.20%, which is close to a uniform distribution.
3.2 Retrieval efficiency and accuracy.
The predefined thresholds in PolyU, CASIA, and the database used in this study are set to 0.24, 0.27, and 0.34, respectively. Comparison results on retrieval efficiency and accuracy by the traditional and proposed methods are shown in Table 2. Table 2 indicates that, in both retrieval efficiency and accuracy, the proposed approach is outstanding in all three databases. For example, with the total number of test samples S = 3,000 and the total number of database samples N = 1,500, the sum of matching numbers by the traditional method is S × N = 4,500,000, while that by the proposed method is 643,377. The retrieval efficiency is thus 643,377/4,500,000 ≈ 14.29%, and the retrieval accuracy is 96.67%. Besides, the retrieval efficiency is 14.50% and 14.28% in the other two databases, with retrieval accuracies of 96.00% and 97.71%, respectively. Thus, the proposed method is superior in the identification process.
No other palm–vein classification methods are currently available for direct comparison. However, our classification process is also applicable to other state-of-the-art coding methods for extracting orientation features, such as the competitive code [41], the Radon transform, and the Gaussian transform. We therefore perform comparative experiments using these different coding methods with the same classification process discussed in Sections 1.2 and 2 of Method. Table 3 presents the comparison among the four coding methods on the three databases under the same conditions as in Table 2. The table indicates that the proposed approach provides the best results on all three databases, in both retrieval efficiency and accuracy.
3.3 The response time of the identification process.
We assume that the feature extraction process of the traditional method uses the Gaussian-Radon transform to extract the orientation features of a 128×128 palm–vein image. The different coding methods yield ROI images and OMs of the same size in the three databases. Therefore, the execution times for preprocessing and one-to-one matching are approximately the same, whereas those for feature extraction differ, as shown in Tables 4 and 5. The execution time for matching lengthens considerably as the number of samples in the database increases, so the traditional method may not be able to meet the speed requirements of a palm–vein identification system, especially with a very large database.
The matching number is proportional to retrieval efficiency. Hence, the matching numbers from the proposed method, competitive code, Radon transform, and Gaussian transform can be reduced roughly by a factor of 6.96 (100%/14.36%, where 14.36% is the mean retrieval efficiency value in the three databases), 5.59 (100%/17.89%), 5.34 (100%/18.73%), and 4.20 (100%/23.80%), respectively, based on the retrieval efficiency results obtained via the different coding methods listed in Table 3.
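The reduction factors quoted above follow directly from the mean retrieval efficiencies, as this small check confirms:

```python
# Reduction factor of the matching number = 100 / mean retrieval efficiency (%),
# using the mean efficiencies over the three databases from Table 3.
efficiencies = {
    "proposed": 14.36,
    "competitive code": 17.89,
    "Radon transform": 18.73,
    "Gaussian transform": 23.80,
}
factors = {name: round(100.0 / e, 2) for name, e in efficiencies.items()}
print(factors)  # proposed -> 6.96, competitive code -> 5.59, ...
```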
The identification time for one test sample via the different coding methods in a large database can be calculated using Equations (20) and (21) and the computation times listed in Tables 4 and 5. Fig. 8 shows the response time of the identification process for one test sample via the different coding methods and the traditional method at different database sizes. The proposed approach is evidently more efficient than the traditional method for large databases. With 10,000 training samples in the database, the execution times of the identification process via the traditional method, competitive code, Radon transform, and Gaussian transform are 18.56, 4.27, 3.95, and 5.07 s, respectively, whereas the proposed approach requires only 3.16 s.
4. Evaluation experiments on the one-to-one matching algorithm
To verify the effectiveness of the Gaussian-Radon transform in the one-to-one matching algorithm, evaluation experiments are performed on the three databases. The ability to achieve high performance with a small number of registration samples is highly desirable in any biometric system [38]. Among palm-vein recognition algorithms, the Neighborhood Matching Radon Transform (NMRT) [12] has demonstrated the best results. Therefore, in this study, only comparison experiments between the proposed method and NMRT using one registration sample are considered, as shown in Tables 6–8. The results confirm the superiority of the proposed method.
Conclusions
To address the long response time of palm-vein identification in a large database, this paper proposed a simple and effective classification based on principal direction features. The Gaussian-Radon transform was employed to extract the orientation matrix and compute the principal direction of each image. Using the principal direction as the classification index, the database was partitioned into six bins. In the identification process, an input palm-vein image was first assigned to one of the bins and then matched one-by-one with the candidates in that bin. In addition, a neighborhood rule was adopted to speed up the search while maintaining relatively high accuracy. Experiments on the three databases showed the advantages of the proposed method over traditional methods in retrieval efficiency and identification time, especially for large palm-vein databases.
Acknowledgments
The authors sincerely thank The Hong Kong Polytechnic University for providing the PolyU Multispectral Palmprint Database (PolyU database) and the Institute of Automation, Chinese Academy of Sciences (CASIA) for providing the CASIA-MS-PalmprintV1 database used in this work.
Author Contributions
Conceived and designed the experiments: YJZ YQL QJF FY JH. Performed the experiments: YJZ YQL. Analyzed the data: YJZ YQL QJF FY. Contributed reagents/materials/analysis tools: YJZ YQL QJF JH. Contributed to the writing of the manuscript: YJZ YQL YXN.
References
- 1. Jain AK, Ross A, Prabhakar S (2004) An introduction to biometric recognition[J]. Circuits and Systems for Video Technology, IEEE Transactions on 14(1): 4–20.
- 2. Wilson C (2011) Vein pattern recognition: a privacy-enhancing biometric[M]. CRC Press.
- 3. Song W, Kim T, Kim HC, Choi JH, Kong HJ, et al. (2011) A finger-vein verification system using mean curvature[J]. Pattern Recognition Letters 32(11): 1541–1547.
- 4. Kumar A, Prathyusha KV (2009) Personal authentication using hand vein triangulation and knuckle shape[J]. Image Processing, IEEE Transactions on 18(9): 2127–2136.
- 5. Watanabe M, Endoh T, Shiohara M, Sasaki S (2005) Palm vein authentication technology and its applications[C]//Proceedings of the Biometric Consortium Conference: 19–21.
- 6. Wang Y, Hu J (2011) Global ridge orientation modeling for partial fingerprint identification[J]. Pattern Analysis and Machine Intelligence, IEEE Transactions on 33(1): 72–87.
- 7. Peralta D, Triguero I, Sanchez-Reillo R, Herrera F, Benítez JM (2014) Fast fingerprint identification for large databases[J]. Pattern Recognition 47(2): 588–602.
- 8. Dai J, Feng J, Zhou J (2012) Robust and efficient ridge-based palmprint matching[J]. Pattern Analysis and Machine Intelligence, IEEE Transactions on 34(8): 1618–1632.
- 9. Kong A, Zhang D, Kamel M (2009) A survey of palmprint recognition[J]. Pattern Recognition 42(7): 1408–1418.
- 10. Zhang D, Kong WK, You J, Wong M (2003) Online palmprint identification[J]. Pattern Analysis and Machine Intelligence, IEEE Transactions on 25(9): 1041–1050.
- 11. Pillai JK, Patel VM, Chellappa R, Ratha NK (2011) Secure and robust iris recognition using random projections and sparse representations[J]. Pattern Analysis and Machine Intelligence, IEEE Transactions on 33(9): 1877–1893.
- 12. Zhou Y, Kumar A (2011) Human identification using palm-vein images[J]. Information Forensics and Security, IEEE Transactions on 6(4): 1259–1274.
- 13. Wang JG, Yau WY, Suwandy A, Sung E (2008) Person recognition by fusing palmprint and palm vein images based on “Laplacianpalm” representation[J]. Pattern Recognition 41(5): 1514–1527.
- 14. Ladoux PO, Rosenberger C, Dorizzi B (2009) Palm vein verification system based on SIFT matching[J]. In ICB, LNCS 5558: 1290–1298.
- 15. Xueyan L, Shuxu G, Fengli G, Ye L (2007) Vein pattern recognitions by moment invariants[C]//Bioinformatics and Biomedical Engineering, 2007. ICBBE 2007. The 1st International Conference on. IEEE: 612–615.
- 16. Qian X, Guo S, Li X, Zhong F, Shao X (2009) Finger-vein recognition based on the score level moment invariants fusion[C]//Computational Intelligence and Software Engineering, 2009. CiSE 2009. International Conference on. IEEE: 1–4.
- 17. Mirmohamadsadeghi L, Drygajlo A (2011) Palm vein recognition with local binary patterns and local derivative patterns[C]//Biometrics (IJCB), 2011 International Joint Conference on. IEEE: 1–6.
- 18. Wu KS, Lee JC, Lo TM, Chang KC, Chang CP (2013) A secure palm vein recognition system[J]. Journal of Systems and Software 86(11): 2870–2876.
- 19. Hao Y, Sun Z, Tan T, Ren C (2008) Multispectral palm image fusion for accurate contact-free palmprint recognition[C]//15th IEEE International Conference on Image Processing, San Diego, USA: 281–284.
- 20. Zhang YB, Li Q, You J, Bhattacharya P (2007) Palm vein extraction and matching for personal authentication[C]//Advances in Visual Information Systems. Springer Berlin Heidelberg: 154–164.
- 21. Chen H, Lu G, Wang R (2009) A new palm vein matching method based on ICP algorithm[C]//Proceedings of the 2nd International Conference on Interaction Sciences: Information Technology, Culture and Human, Seoul, Korea: 1207–1211.
- 22. Lee JC (2012) Palm Vein Feature Extraction Using 2-D Gabor Filters[J]. Advanced Science Letters 8(1): 807–812.
- 23. Lee JC (2012) Palm vein recognition using adaptive Gabor filter[J]. Expert Systems with Applications 39: 13225–13234.
- 24. Lee JC (2012) A novel biometric system based on palm vein image[J]. Pattern Recognition Letters 33: 1520–1528.
- 25. Wang R, Wang G, Chen Z, Zeng Z, Wang Y (2014) A palm vein identification system based on Gabor wavelet features[J]. Neural Computing and Applications 24(1): 161–168.
- 26. Zhang D, Guo Z, Lu G, Zuo W (2010) An online system of multispectral palmprint verification[J]. Instrumentation and Measurement, IEEE Transactions on 59(2): 480–490.
- 27. Maltoni D, Maio D, Jain AK, Prabhakar S (2009) Handbook of fingerprint recognition[M]. Springer.
- 28. Karu K, Jain AK (1996) Fingerprint classification[J]. Pattern Recognition 29(3): 389–404.
- 29. Jain AK, Prabhakar S, Hong L (1999) A multichannel approach to fingerprint classification[J]. Pattern Analysis and Machine Intelligence, IEEE Transactions on 21(4): 348–359.
- 30. You J, Li W, Zhang D (2002) Hierarchical palmprint identification via multiple feature extraction[J]. Pattern Recognition 35(4): 847–859.
- 31. You J, Kong WK, Zhang D, Cheung KH (2004) On hierarchical palmprint coding with multiple features for personal identification in large databases[J]. Circuits and Systems for Video Technology, IEEE Transactions on 14(2): 234–243.
- 32. Jia W, Huang DS, Zhang D (2008) Palmprint verification based on robust line orientation code[J]. Pattern Recognition 41(5): 1504–1513.
- 33. Radon J (1986) On the determination of functions from their integral values along certain manifolds[J]. Medical Imaging, IEEE Transactions on 5(4): 170–176.
- 34. Yue F, Zuo W, Zhang D, Wang K (2009) Orientation selection using modified FCM for competitive code-based palmprint recognition[J]. Pattern Recognition 42(11): 2841–2849.
- 35. Biometrics Research Centre (BRC) Website. PolyU Multispectral Palmprint Database. Available: http://www.comp.polyu.edu.hk/~biometrics/MultispectralPalmprint/MSP.htm. Accessed 2014 Oct 19.
- 36. Biometrics Ideal Test Website. CASIA Multi-Spectral Palmprint Database. Available: http://biometrics.idealtest.org/. Accessed 2014 Oct 19.
- 37. Han J, Kamber M, Pei J (2006) Data mining: concepts and techniques[M]. Morgan Kaufmann.
- 38. Phillips PJ, Grother P, Micheals R, Blackburn DM, Tabassi E, et al. (2003) Face recognition vendor test 2002[C]//Analysis and Modeling of Faces and Gestures, 2003. AMFG 2003. IEEE International Workshop on. IEEE: 44.
- 39. Kang W, Liu Y, Wu Q, Yue X (2014) Contact-Free Palm-Vein Recognition Based on Local Invariant Features[J]. PloS one 9(5): e97548.
- 40. Raut SD, Humbe VT (2014) Review of Biometrics: Palm Vein Recognition System[J]. IBMRD's Journal of Management & Research 3(1): 217–223.
- 41. Kong AWK, Zhang D (2004) Competitive coding scheme for palmprint verification[C]//Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on. IEEE 1: 520–523.