
BPKEM: A biometric-based private key encryption and management framework for blockchain

  • Hao Cai,

    Roles Conceptualization, Funding acquisition, Investigation, Project administration, Resources, Supervision, Validation, Writing – original draft

    Affiliation Department of Computer Science, Shantou University, Shantou, China

  • Han Li,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Software, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Department of Computer Science, Shantou University, Shantou, China

  • Jianlong Xu,

    Roles Conceptualization, Funding acquisition, Investigation, Methodology, Resources, Supervision, Writing – original draft, Writing – review & editing

    xujianlong@stu.edu.cn

    Affiliation Department of Computer Science, Shantou University, Shantou, China

  • Linfeng Li,

    Roles Data curation, Project administration, Supervision

    Affiliation Department of Computer Science, Shantou University, Shantou, China

  • Yue Zhang

    Roles Conceptualization, Methodology, Visualization

    Affiliation Department of Computer Science, Shantou University, Shantou, China

Abstract

Blockchain, the fundamental technology behind Bitcoin, has been studied and applied in a variety of industries, especially finance. The security of a blockchain is extremely important: it affects the assets of clients and is a lifeline feature of the entire system that must be guaranteed. Currently, there is a lack of a methodical approach to guaranteeing the security and dependability of the private key throughout its whole lifecycle, and there is no quick, easy, and secure way to create the encryption key. A biometric-based private key encryption and management framework (BPKEM) for blockchain is proposed that not only solves the private key lifecycle management problem but also maintains compatibility with existing blockchain systems. For the problem of private key encryption, a biometric-based stable key generation method is proposed. By exploiting the relative invariance among facial and fingerprint feature points, this method converts feature points into stable and distinguishable descriptors, and then uses a reusable fuzzy extractor to create a stable key. The correctness and efficiency of the proposed biometric-based blockchain encryption technique are validated in the experiments.

1 Introduction

To meet security requirements and establish ownership, asymmetric encryption is the cryptographic method integrated into the blockchain. The public key and the private key are two distinct keys commonly utilized during encryption and decryption. The public key is the number produced from the private key through the elliptic curve cryptography algorithm (ECC), and it is used to generate the address used in Bitcoin transactions. The derivation of the public key is one-way, making it computationally infeasible to recover the private key from the public key. Participants in a blockchain system hold various keys based on their role. The public key is made available to the transaction verifier, who uses it to confirm the legitimacy of the transaction as well as the identity of the transaction publisher. The private key must be kept confidential by the users themselves due to the decentralized nature of the blockchain system, and a user digitally signs transactions with the private key in order to manage their assets. Due to the immutability of the blockchain, the loss of private keys through inappropriate use and storage would result in a significant loss of benefits for blockchain users. The security of the private key has a direct impact on the security of all the user's assets and the system as a whole. Therefore, how to secure the private key in a blockchain system is a highly important research topic.

There are many threats to private key security today. First, there are still great hidden dangers in the security of the generation of public-private key pairs. Since quantum computing poses a great threat to the security of blockchain cryptographic algorithms, key generation based on elliptic curves and digital signatures cannot resist quantum computing [1]. Therefore, it is necessary to introduce a quantum-resistant cryptographic method to protect the private key. Traditional keys are typically generated randomly by the system or chosen by the user, but randomly generated keys are difficult to memorize, and user-chosen keys are susceptible to dictionary attacks and exhaustive-search attacks. Biometric features are unique, difficult to forge, and private; keys derived from them are therefore difficult to crack, portable, and non-repudiable. Second, there are no efficient management solutions for the secure storage and recovery of private keys [2]. Current management mechanisms for private keys mainly include human memory, local storage, offline storage, escrow wallets, threshold wallets, and other methods. Because the private key is long and irregular, and forgetting any single bit makes it unusable, memorizing the private key is a high-risk and difficult challenge. Local storage faces risks such as malicious reads and damage to physical devices; offline storage still needs to be connected to the Internet when used, so malware attacks cannot be completely avoided; escrow wallets undermine the principle of blockchain decentralization and suffer from central-node trust problems and attacks on the hosting servers. Threshold wallets distribute private keys across multiple devices using threshold secret sharing, which is difficult to design, complex, and not scalable.

Therefore, it is of great practical significance to study the use of biometric cryptography to encrypt private keys, and to explore an efficient and systematic solution to ensure the security of private keys from generation, encryption and storage to recovery.

For the above problems, we propose a biometric-based stable key generation (BKG) method to encrypt the blockchain’s private key. BKG mainly extracts stable and distinguishable descriptors through facial features and fingerprint features to further encrypt the blockchain’s private key. Additionally, a thorough and organized plan is put out based on BKG to guarantee the security of the entire process of creating private keys, encrypting them, and recovering them.

The contributions of this work are as follows:

  1. To provide an efficient management solution for the secure storage and recovery of private keys, we develop a comprehensive management scheme for the blockchain's private key system based on BKG. To evaluate the effectiveness of the scheme, experiments were conducted on a blockchain.
  2. For the problem of safe and convenient encryption of blockchain private keys, we propose a reliable and accurate method for creating distinguishable descriptors based on several biometric factors. It can effectively screen fingerprint and facial biometrics, since the majority of a person's biometric characteristics vary over time. Using feature points to generate the descriptors effectively resolves the problem that a single biometric feature is easily cracked and decreases the error of the feature descriptors. Reusable fuzzy extractors are also used in key creation to protect the security of the users' biometric features.

The structure of this paper is as follows: Section 2 presents related work. Section 3 introduces some preliminaries. The fundamental architecture, a biometrics-based private key encryption technique, and a blockchain private key security management scheme are all thoroughly introduced in Section 4. Section 5 shows the security and performance analysis about the scheme. The method and scheme are then experimentally validated in Section 6. Finally, Section 7 provides conclusions.

2 Related work

There is a burgeoning body of research on blockchain private key encryption. Presently, widely researched encryption technologies comprise searchable encryption, homomorphic encryption, and biometric encryption. In this manuscript, we examine these encryption methods and their strengths and limitations. On searchable encryption: in 2017, Li et al. [3] proposed a blockchain-based symmetric searchable encryption scheme that can complete multi-keyword searches, allowing users to obtain search results automatically without verification [4]. Yan et al. [5] amalgamated symmetric searchable encryption with attribute encryption to realize one-to-many searchable encryption and implemented fine-grained access control using a ciphertext-policy attribute encryption mechanism for shared keys. However, these schemes are afflicted with high complexity [4] and low efficiency [5]. On homomorphic encryption: Zheng et al. [6] utilized homomorphic encryption and the threshold Paillier cryptosystem to encrypt private keys, facilitating shared data trading and protecting transaction information with a (p, t)-threshold Paillier cryptosystem. Wang et al. [7] proposed a copyright-blockchain privacy protection scheme based on lightweight homomorphic encryption and zero-knowledge proofs. Nonetheless, existing homomorphic encryption algorithms still suffer from low efficiency [7], excessive key size [6], and ciphertext explosion. Biometric encryption is one of the most extensively studied recent techniques. Zhu et al. [8] proposed an efficient user login scheme for biometric authentication against any domain server based on blockchain nodes. Negin Hamian et al. [9] proposed a blockchain-based re-registration scheme for a biometric-based authentication system; the scheme employs Shamir's secret sharing, ElGamal encryption, and Schnorr's digital signature to calculate the secret biometric share. However, its description of the biometric application is too rudimentary. Aydar et al. [2] used fingerprints as biometrics to encrypt blockchain private keys. Carmen Bisogni et al. [10] proposed a blockchain private key encryption scheme based on face biometrics, which employs a CNN to encode face features and then fuses them with RSA keys. Nevertheless, this scheme has relatively high lighting and background requirements. Bao et al. [11] proposed a novel identity authentication scheme combining fingerprint features and blockchain; however, the efficiency of the fuzzy extractor employed is relatively low, and there is still room for improvement in security. To conclude, while there have been numerous strides in blockchain private key encryption, there are still challenges to surmount. Searchable encryption is beset with complexity and efficiency issues, homomorphic encryption encounters challenges with efficiency and key size, and biometric encryption necessitates improvement in its applications and security. Further research is indispensable to devise more efficient and secure blockchain private key encryption methods.

In addition to encryption, the storage and recovery of blockchain private keys is an essential guarantee of private key security. The private key storage stage includes various methods such as local storage, account custody [12, 13], offline storage [14, 15], cloud storage, and encrypted-wallet protection schemes. Although some researchers have proposed methods to encrypt and store private keys, limitations still exist. For instance, Lusetti et al. [12] used symmetric and asymmetric methods to encrypt medical image files taken by forensics, uploaded them to an online platform, and stored them in the blockchain; however, this method has some limitations. Xiao et al. [14] proposed a multi-signature scheme based on the Gamma signature. However, if the multi-signature is tampered with or forged, the nodes in the tree need to verify part of the response top-down to find the malicious signer, which increases the running cost. Maria et al. [16] proposed a blockchain-based anonymous authentication scheme for providing secure communication in VANETs. In the proposed scheme, a Merkle Hash Tree (MHT) is used to maintain the real-time authentication record, and blockchain-assisted V2R anonymous handover authentication is performed when the vehicle user becomes a valid member of the in-vehicle network through initial anonymous authentication. Rapid re-authentication of the vehicle is achieved through secure verification-code transmission between successive RSUs. Fan et al. [17] propose a safe and reliable data transmission scheme for blockchain-based Internet of Things environments. In the scheme, the key generation center generates a private key for each base station and IoT node. After an IoT node enters the network for the first time, it and the base station perform mutual identity-signature verification. After successful verification, the base station generates a negotiated key, encrypts the symmetric key, and sends it to the node.
The node then uses the inherent key and symmetric key encryption information to send to the base station, and the information is uploaded to the blockchain after decoding. In the private key recovery stage, Xiong et al. [18] proposed a blockchain key protection scheme based on secret sharing by introducing a private key distribution method to recover lost private keys. Li et al. [19] proposed a secret sharing scheme based on double-threshold key protection to recover the private key in 2021. However, this method has a high computational overhead, and the secure transmission of shadow sharing between blockchain nodes cannot be guaranteed. Many scholars have conducted in-depth research on a certain part of them, but there is no comprehensive and effective security solution for the generation, encryption, storage, and recovery of private keys.

In conclusion, private key encryption and management is a crucial factor in ensuring the security of blockchain-based systems. While various methods exist, they do not provide a comprehensive and effective security solution for the entire process of private key generation, encryption, storage, and recovery. Considering all the above issues, we propose a biometric-based blockchain private key encryption and management framework. Our framework achieves an appropriate trade-off between security and efficiency and can be a significant step forward in the field of blockchain security.

3 Preliminaries

3.1 Fuzzy extractor

The fuzzy extractor [3] is a cryptographic technique used to extract secret keys from imperfect biometric data. Traditional biometric technologies, such as fingerprint recognition and facial recognition, usually provide only limited usable information and thus cannot be used directly as key material. Fuzzy extractors solve this problem by converting biometric data into high-entropy security keys: a fuzzy extractor can convert a random source with a certain amount of noise into a uniformly random and accurately reproducible key. As shown in Fig 1, a fuzzy extractor includes two parts, a generation algorithm and a regeneration algorithm, which can be expressed as Eq (1): (R, P) ← Gen(w), R′ ← Rep(w′, P). (1)

In Fig 1, Gen means the generative algorithm, the generation algorithm Gen takes the input string w (one sample of the random source), and outputs a random value R and public auxiliary data P. Rep means the regeneration algorithm, the regeneration algorithm Rep takes as input w′ (another sample of the random source) and public auxiliary data P, and outputs the string R′. The fuzzy extractor generates the same random value R when the Hamming distance of the two samples is close enough, which means R = R′. The security requirement of the fuzzy extractor is that R is uniformly random if the random source has enough entropy.
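As a concrete illustration of the Gen/Rep interface above, the following minimal sketch uses a code-offset construction with a bitwise repetition code. This is a toy stand-in for a real fuzzy extractor, not the construction used in the paper; the repetition length `REP` and the function names are illustrative assumptions:

```python
import hashlib
import secrets

REP = 5   # each key bit is repeated 5 times; majority vote corrects up to 2 flips

def gen(w):
    """Gen: sample a random codeword c, publish P = w XOR c, output R = H(bits)."""
    bits = [secrets.randbelow(2) for _ in range(len(w) // REP)]
    c = [b for b in bits for _ in range(REP)]         # repetition encoding
    P = [wi ^ ci for wi, ci in zip(w, c)]             # public helper data
    R = hashlib.sha256(bytes(bits)).hexdigest()       # extracted key
    return R, P

def rep(w2, P):
    """Rep: shift by P, majority-decode each block, rehash to recover R."""
    c2 = [wi ^ pi for wi, pi in zip(w2, P)]
    bits = [int(sum(c2[i:i + REP]) * 2 > REP) for i in range(0, len(c2), REP)]
    return hashlib.sha256(bytes(bits)).hexdigest()

w = [1, 0, 1, 1, 0] * 5                    # 25-bit "biometric" reading
R, P = gen(w)
w2 = list(w)
w2[3] ^= 1
w2[17] ^= 1                                # noisy re-reading: two bit flips
assert rep(w2, P) == R                     # the same key R is regenerated
```

Here the two readings differ within the code's error-correction radius, so Rep reproduces exactly the same R, mirroring the R = R′ condition described above.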

3.2 Chinese remainder theorem

In mathematics, the Chinese Remainder Theorem (CRT) states that if one knows the remainders of the Euclidean division of an integer n by several integers, then one can uniquely determine the remainder of the division of n by the product of these integers, under the condition that the divisors are pairwise coprime (no two divisors share a common factor other than 1). CRT is a theorem that gives a unique solution to simultaneous linear congruences with pairwise coprime moduli. Several versions of the Chinese remainder theorem have been proposed. The following is the general Chinese remainder theorem [20]: let m1, m2, ⋯, mk be pairwise coprime positive integers forming the modulus set β = {m1, m2, ⋯, mk}, and let M = m1 m2 ⋯ mk. When X < M, X can be represented by the unique set φ = {a1, a2, ⋯, ak}, where X ≡ ai (mod mi), (i = 1, 2, ⋯, k). When β and φ are known, X can be recovered as X = (a1 M1 y1 + a2 M2 y2 + ⋯ + ak Mk yk) mod M, with Mi = M/mi and yi = Mi^(−1) (mod mi), (i = 1, 2, ⋯, k).
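The recovery formula above can be sketched directly in a few lines; `crt` below is an illustrative helper that reconstructs X from the residue set φ and modulus set β:

```python
def crt(residues, moduli):
    """Recover X (mod M) from X ≡ a_i (mod m_i), with m_i pairwise coprime."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for a, m in zip(residues, moduli):
        Mi = M // m                        # M_i = M / m_i
        x += a * Mi * pow(Mi, -1, m)       # y_i = M_i^(-1) (mod m_i)
    return x % M

# X = 23 with modulus set beta = {3, 5, 7}: residue set phi = (2, 3, 2)
assert crt([2, 3, 2], [3, 5, 7]) == 23
```

`pow(Mi, -1, m)` computes the modular inverse yi directly (Python 3.8+).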

3.3 Learning with errors

Elliptic-curve-based algorithms can be broken by quantum algorithms, so post-quantum cryptography with resistance to quantum computing is necessary to resist quantum attacks. The Learning With Errors (LWE) problem [21], proposed by Regev, is widely recognized as resistant to quantum attacks and is a foundation of post-quantum cryptography.

In linear algebra, a linear space can be described by finding a set of basis vectors representing that space. If a basis of a linear space is known, then any vector in the space can be decomposed into a linear combination of the basis vectors. If a constraint is placed on this linear space so that the coefficients of all linear combinations must be integers, then the vectors generated by varying the coefficients form a regular, lattice-like discrete set, as shown in Fig 2.

Fig 2. Integer lattice, generating space sets based on discrete basis vectors.

https://doi.org/10.1371/journal.pone.0286087.g002

Generating a set of points from integer combinations of discrete basis vectors is known as an integer lattice. Since an arbitrary target vector v′ cannot, in general, be represented exactly by an integer linear combination in the lattice, the natural task is to find a set of integer coefficients giving the lattice vector v closest to v′. This type of problem, approximating a target vector in a discrete linear set, is collectively known as the Closest Vector Problem (CVP). The Learning With Errors (LWE) problem is derived from this problem. Given a matrix A and a vector b = As + e, where e is a random noise vector sampled over a fixed small range, the LWE problem is to recover the vector s from the matrix A and the vector b. In LWE-based asymmetric key generation algorithms, the vector s is used as the private key, and the matrix A and the vector b are used as the public key. Due to the added random noise, Gaussian elimination cannot be used; the only known classical way to find s is brute-force search, trying the possible values of s one by one. No quantum algorithm is currently known to solve this problem efficiently, and it is therefore believed to be quantum-computationally secure.

4 Proposed scheme

4.1 Framework

Fig 3 shows the core architecture of the private key security management scheme proposed in this article. The relevant research content is mainly divided into four parts:

  1. Private key security generation. We use the LWE algorithm based on lattice theory to generate the public-private key pair.
  2. Generation of a private key encryption key. We use the user's biometrics to generate a stable and unique key, use a secret sharing scheme based on the CRT to achieve secret distribution and secret reconstruction of the private key, and then fuse the biometric keys to achieve reliable encryption of the private key.
  3. Distributed storage of encrypted private keys. We use a blockchain to store the encrypted private key, which ensures the consistency of storage based on the blockchain's own mechanisms and realizes secure and practical storage.
  4. Recovery of the private key. If the private key is lost, it is recovered by querying and comparison.

The whole process is as follows: first, the asymmetric public and private keys are generated by LWE, and the multi-biometric key is generated from the facial and fingerprint features using the fuzzy extractor. Then, the private key is reliably encrypted based on the CRT and the biometric key to generate secret fragments, and the secret fragments are stored on the blockchain. If the private key needs to be recovered, freshly extracted biometric features are used as input to decrypt the ciphertext and recover the private key.

4.2 Private key security generation

In order to resist quantum computing attacks, we use the LWE problem based on lattice theory to ensure the quantum-attack resistance of the blockchain system, and generate a public-private key pair based on the LWE public-key cryptography scheme, as shown in Alg 1. In the LWE-based asymmetric key generation algorithm, n is the input security parameter; the private key is a random vector s generated based on system noise and time; q is a prime number between n² and 2n², and all operations are carried out in Zq, that is, modulo q; m is the number of samples; α is the noise parameter; χ is the probability distribution on Zq parameterized by α. Select m vectors a1, ⋯, am ∈ Zq^n from the uniform distribution, select m noise elements e1, ⋯, em ∈ Zq according to χ, compute the public values bi = ⟨ai, s⟩ + ei (mod q), and finally output the public-private key pair (s, (A, b)).

Algorithm 1: LWE-based Asymmetric Key Generation Algorithm

Input: security parameter n

1 define private key s ∈ Zq^n;

2 q = a prime number between n² and 2n²;

3 m = (1 + ε)(n + 1) log q, ε > 0;

4 A = ( ), b = ( );

5 for i = 1; i ≤ m do

6  ai ← Zq^n uniformly at random;

7  ei ← χ, ei ∈ Zq;

8  bi = ⟨ai, s⟩ + ei (mod q);

Output: public-private key pair (s, (A, b))

We use the LWE-based asymmetric key generation algorithm to generate the public-private key pair of the blockchain (sk1, pk1), where the private key sk1 is used to encrypt the hash value of the transaction, and the public key pk1 is used to generate the transaction address. The private key sk1 is used to sign the transaction, and the public key pk1 is used to disclose to each node to verify the validity of the transaction. Users only need a digital signature and the signer’s public key to verify the authenticity and integrity of the data. Therefore, the transaction owner can sign the transaction using their private key. Other nodes in the network are then able to verify owner and transaction integrity using only the transaction sender’s public key.
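The key generation of Algorithm 1 can be sketched as follows with toy parameters. These sizes are far too small to be secure; `lwe_keygen` and the noise range {−2, ⋯, 2} standing in for χ are illustrative assumptions:

```python
import math
import random

def lwe_keygen(n, seed=None):
    """Toy sketch of Algorithm 1 (parameters far too small to be secure)."""
    rng = random.Random(seed)
    # q: a prime between n^2 and 2n^2 (trial division is fine at toy sizes)
    q = next(p for p in range(n * n, 2 * n * n)
             if all(p % d for d in range(2, math.isqrt(p) + 1)))
    m = int((1 + 0.1) * (n + 1) * math.log2(q))     # m = (1 + eps)(n + 1) log q
    s = [rng.randrange(q) for _ in range(n)]        # private key s in Zq^n
    A, b = [], []
    for _ in range(m):
        a_i = [rng.randrange(q) for _ in range(n)]  # uniform sample a_i
        e_i = rng.randrange(-2, 3)                  # small noise e_i from chi
        A.append(a_i)
        b.append((sum(x * y for x, y in zip(a_i, s)) + e_i) % q)
    return s, (A, b, q)

s, (A, b, q) = lwe_keygen(8, seed=42)
# each b_i differs from <a_i, s> mod q only by the small noise e_i
assert all((bi - sum(x * y for x, y in zip(ai, s))) % q in {0, 1, 2, q - 1, q - 2}
           for ai, bi in zip(A, b))
```

The final assertion checks the defining LWE relation bi = ⟨ai, s⟩ + ei (mod q); it is the small noise ei that blocks Gaussian elimination on (A, b).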

4.3 Generation of a private key encryption key

In order to realize reliable generation of the encryption key for the blockchain private key, we propose a stable key generation method based on biometric features. First, the facial and fingerprint features are extracted and converted into stable and distinguishable descriptors based on feature points. Then, to eliminate residual errors, a stable key generation step based on a fuzzy extraction algorithm produces a key from the multiple biometrics. Finally, the private key is reliably encrypted based on the Chinese remainder theorem and the multi-biometric key to ensure its security.

4.3.1 Stable distinguishable descriptor generation for private key encryption keys.

In order to solve the problem of stable distinguishable descriptor generation for private key encryption keys, we propose a universal and stable distinguishable-descriptor extraction method based on the pixel coordinates of two-dimensional feature points. Fig 4 depicts the flow of our method. We first extract the biometric feature points of faces and fingerprints and calculate their pixel coordinates in the two-dimensional image coordinate system. Second, to reduce the error of the generated descriptors, we use the inherent properties of faces and fingerprints to screen out the unstable feature points and keep the stable ones. Finally, we calculate stable distinguishable descriptors based on the retained feature points; the descriptors are universal at any image resolution, which ensures that stable and distinguishable descriptors can be generated from biometric data collected from the same organism at different times.

Fig 4. Stable distinguishable description of the sub-extraction process.

https://doi.org/10.1371/journal.pone.0286087.g004

We define the n × 2 matrix Pn×2 to represent the biometric feature points extracted from the image, where each row of the matrix corresponds to the two-dimensional pixel coordinates of a single feature point. The pixel coordinates of the k-th feature point are denoted as pk. This definition is illustrated in Eq (2): Pn×2 = (p1, p2, ⋯, pn)ᵀ, pk = (xk, yk). (2)

To describe the n 2D feature-point pixel coordinates, this paper employs three control points: each feature point's true 2D coordinate pi can be derived by applying weights to the 2D coordinates of the three control points Cj, as shown in Eq (3). The weighting factor αij determines the contribution of each control point in determining the true coordinate of the feature point: pi = αi1 C1 + αi2 C2 + αi3 C3, with αi1 + αi2 + αi3 = 1. (3)

Expanding Eq (3) into matrix form: Pn×2 = αn×3 C3×2. (4)

The weighting factor αij for a given organism's i-th feature point pi is unique once the 2D coordinates of the three control points Cj, j = 1, 2, 3, are determined, provided the control points are not collinear. As a result, the weighting factor αij can serve as a universal, stable, and distinguishable biometric descriptor.

The crucial step is to obtain three non-collinear 2D control points from the n 2D pixel feature points. To ensure that the control points are not collinear, we use the fact that eigenvectors corresponding to different eigenvalues of a symmetric matrix are orthogonal, and hence not aligned. The first step is to find the control point C1 using Eq (5), and then construct a new matrix A using Eq (6): C1 = (1/n) Σᵢ₌₁ⁿ pi, (5) A = (p1 − C1, ⋯, pn − C1)ᵀ. (6)

The matrix AᵀA is invertible, and therefore its eigenvalues λi and eigenvectors vi, i = 1, 2, can easily be computed. Since eigenvectors corresponding to distinct eigenvalues of a symmetric matrix are orthogonal, the remaining two control points C2, C3 can be obtained from the eigenvectors, as shown in Eq (7): C2 = C1 + v1, C3 = C1 + v2. (7)

It can be observed that the control points C1, C2, C3 obtained from the previous steps are non-collinear, thus ensuring that the weighting factor αij is unique for the same organism. Therefore, for a given feature point pi, its corresponding weighting factors αij can be computed by solving the linear system in Eq (8): (C1ᵀ C2ᵀ C3ᵀ; 1 1 1)(αi1, αi2, αi3)ᵀ = (piᵀ, 1)ᵀ. (8)

The stable distinguishable descriptor extraction algorithm is shown in Alg 2. Fig 5 shows the facial feature points (left) and the fingerprint feature points (right).

Fig 5. Distribution of pixel coordinates of facial (left) and fingerprint (right) feature points.

https://doi.org/10.1371/journal.pone.0286087.g005

Algorithm 2: Stable distinguishable descriptor extraction algorithm

Input: Feature point coordinates Pn×2 = (p1, ⋯, pn)ᵀ

1 define control points C = {C1, C2, C3};

2 define weighting coefficients αij, with Σⱼ₌₁³ αij = 1;

3 define C1 = (1/n) Σᵢ₌₁ⁿ pi and A = (p1 − C1, ⋯, pn − C1)ᵀ;

4 define λ = (λ1, λ2) as the eigenvalues of matrix AᵀA;

5 define V = (v1, v2) as the eigenvectors of matrix AᵀA;

6 C2 = C1 + v1, C3 = C1 + v2;

7 solve the linear system of Eq (8) with C and Pn×2 to get the weight factors αij;

8 convert the weight factors αij to a binary descriptor w;

Output: binary descriptor w

Using the algorithm described in this paper, the three non-collinear control points C1, C2, C3 can be obtained from the facial feature points and fingerprint feature points in the pixel coordinate system. The distribution of these control points and feature points is illustrated in Fig 6. To determine the weighting factor αij for a given feature point pi, the 3 × 3 linear system described in Eq (8) is solved using the two-dimensional pixel coordinates of the control points and feature points. The binary form of the resulting unique weighting coefficients αij is then used as the final descriptor.

Fig 6. Distribution of feature and control points, with feature points in blue and calculated control points in red.

https://doi.org/10.1371/journal.pone.0286087.g006
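The control-point and weight computation of Eqs (5)-(8) and Alg 2 can be sketched as follows. This is an illustrative reading of the method under the assumption that C2 and C3 are offset from C1 along the unit eigenvectors of AᵀA; `control_points`, `descriptor`, and the sample coordinates are hypothetical names and values, not from the paper:

```python
import math

def control_points(pts):
    """C1 = centroid (Eq 5); C2, C3 = C1 + unit eigenvectors of A^T A (Eqs 6-7)."""
    n = len(pts)
    c1 = (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)
    A = [(x - c1[0], y - c1[1]) for x, y in pts]
    sxx = sum(ax * ax for ax, _ in A)
    syy = sum(ay * ay for _, ay in A)
    sxy = sum(ax * ay for ax, ay in A)
    # eigenvalues of the symmetric 2x2 matrix A^T A
    t, d = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(t * t - 4 * d, 0.0))
    ctrls = [c1]
    for lam in ((t + disc) / 2, (t - disc) / 2):
        # eigenvector for lam satisfies (sxx - lam) vx + sxy vy = 0
        v = (sxy, lam - sxx) if abs(sxy) > 1e-12 else \
            ((1.0, 0.0) if abs(lam - sxx) <= abs(lam - syy) else (0.0, 1.0))
        norm = math.hypot(v[0], v[1])
        ctrls.append((c1[0] + v[0] / norm, c1[1] + v[1] / norm))
    return ctrls

def descriptor(p, c):
    """Barycentric weights alpha of p w.r.t. the three control points (Eq 8)."""
    (x1, y1), (x2, y2), (x3, y3) = c
    x, y = p
    den = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    a1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / den
    a2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / den
    return (a1, a2, 1.0 - a1 - a2)

pts = [(10.0, 20.0), (40.0, 25.0), (30.0, 60.0), (15.0, 45.0)]
C = control_points(pts)
for p in pts:
    a = descriptor(p, C)
    assert abs(sum(a) - 1.0) < 1e-9        # weights sum to 1 (Eq 3 constraint)
    # the weights reconstruct the feature point: p = sum_j alpha_j * C_j
    assert abs(sum(ai * ci[0] for ai, ci in zip(a, C)) - p[0]) < 1e-6
    assert abs(sum(ai * ci[1] for ai, ci in zip(a, C)) - p[1]) < 1e-6
```

Since the two eigenvectors are orthogonal, the three control points are never collinear, so the solved weights are unique, which is the property the descriptor relies on.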

4.3.2 Multiple biometric key generation.

A stable distinguishable descriptor created from the face and fingerprint features still carries an error within a certain range: each calculation result will not be exactly the same, so the descriptor cannot be used as the key directly. Additionally, to ensure the security of the encryption algorithm, the key should be made up of random and uniform bits. Therefore, after the biometrics generate stable and distinguishable descriptors, we introduce a fuzzy extraction algorithm to produce a stable, distinguishable biometric key that can be accurately regenerated and is uniformly random.

Use the stable distinguishable descriptor as the input random source w = W1, W2, ⋯, Wn (in this paper, a string of length n over the alphabet {0, 1}) for the generation algorithm of the reusable fuzzy extractor. First choose a random number R as the output of the fuzzy extractor, then randomly sample w to form a subset v1 = Wj1, Wj2, ⋯, Wjk and use v1 to create a digital locker that hides R. Repeat this process to produce L digital lockers, all of which contain R and can be unlocked with v1, v2, ⋯, vL respectively. The generation algorithm takes the face descriptor Dfa and fingerprint descriptor Dfp, computed by the stable distinguishable descriptor generation method, as input, and outputs a random key and a public helper string for key recovery for each, namely (Rfa, Pfa) ← Gen(Dfa), (Rfp, Pfp) ← Gen(Dfp), where R is the key that can be used for encryption and P is the public helper string that needs to be saved. If the above generation algorithm is repeated many times, multiple sets of (Rfa, Pfa) or (Rfp, Pfp) can be generated from the same source, which realizes the reusability of the fuzzy extractor.

When the user enters the biometric again, the input random source is w′ = W′1, W′2, ⋯, W′n. First randomly sample w′ to form subsets v′1, v′2, ⋯, v′L, and then use each v′i in turn to try to unlock the digital lockers until R is successfully obtained. The newly generated descriptors D′fa and D′fp are then used, together with the saved public helper strings Pfa and Pfp, as input to the regeneration algorithm, namely R′fa ← Rep(D′fa, Pfa), R′fp ← Rep(D′fp, Pfp). When the Hamming distance between D and D′ is less than t and the correct P is used, the random number R can be recovered, i.e., R′ = R.
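The sample-then-lock process described above can be sketched with hash-based digital lockers. This is a toy illustration: the subset size, the locker count L = 8, and all function names are assumptions rather than the paper's parameters:

```python
import hashlib
import random
import secrets

def lock(v, R):
    """Digital locker: hide a 16-byte R under value v."""
    nonce = secrets.token_bytes(16)
    pad = hashlib.sha256(nonce + bytes(v)).digest()          # 32-byte pad
    body = bytes(a ^ b for a, b in zip(pad, R + b"\x00" * 16))
    return nonce, body

def unlock(v, locker):
    """Returns R only if v matches; 16 trailing zero bytes signal success."""
    nonce, body = locker
    pad = hashlib.sha256(nonce + bytes(v)).digest()
    plain = bytes(a ^ b for a, b in zip(pad, body))
    return plain[:16] if plain[16:] == b"\x00" * 16 else None

R = secrets.token_bytes(16)                        # the extractor's output key
w = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]           # enrollment descriptor bits
rng = random.Random(7)
lockers = []                                       # L = 8 lockers, all hiding R
for _ in range(8):
    idx = sorted(rng.sample(range(len(w)), 6))     # random subset of positions
    lockers.append((idx, lock([w[i] for i in idx], R)))

w2 = list(w)
w2[2] ^= 1                                         # noisy re-reading: one flip
recovered = None
for idx, lk in lockers:                            # try each locker in turn
    r = unlock([w2[i] for i in idx], lk)
    if r is not None:
        recovered = r
        break
```

Recovery succeeds whenever at least one sampled subset avoids the flipped position; with enough lockers this happens with high probability, which is the intuition behind the Hamming-distance condition above.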

4.3.3 Secret sharing based on Chinese remainder theorem.

In order to protect private key security, we use secret sharing technology to realize secret distribution and secret reconstruction of the private key: the private key is divided into several secret fragments that are stored on different nodes. In the secret sharing technique, when the number of secret fragments is insufficient to reconstruct the secret, no information about the secret can be obtained; when a sufficient number of secret fragments are obtained, the reconstruction can be completed and the complete secret information recovered. In this paper, the (k, n) threshold construction is realized using a secret sharing scheme based on the CRT, comprising secret distribution and secret reconstruction.

In the secret distribution stage, we input the secret S and the parameters k and n to be shared, select (m1, m2, ⋯, mn) ∈ Z+ that meet the conditions, and then calculate the secret fragments sc1, ⋯, scn as output. In the secret reconstruction stage, at least k secret fragments must be entered; from any k pairs (mi, sci), the secret S can be recovered.

The secret distribution algorithm splits the private key sk1 into n secret fragments SK1, SK2, ⋯, SKn. When the private key is restored, at least k secret fragments are collected and the secret reconstruction algorithm regains the private key, as shown in Fig 7. We divide the private key into 20 fragments, of which 10 or more must be obtained to recover the full private key.
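The distribution and reconstruction steps can be sketched with a simple CRT-based (k, n) scheme. The sketch below uses the Mignotte variant (share i is simply S mod mi) rather than the paper's Asmuth-Bloom construction, purely to show the CRT mechanics; the moduli and threshold are toy values.

```python
from math import prod

def crt(remainders, moduli):
    """Standard CRT: find x with x ≡ r_i (mod m_i) for pairwise-coprime m_i."""
    M = prod(moduli)
    x = 0
    for r, m in zip(remainders, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m) is the modular inverse
    return x % M

def distribute(secret, moduli):
    """Mignotte-style (k, n) sharing: share i is secret mod m_i."""
    return [secret % m for m in moduli]

def reconstruct(shares, moduli):
    """Recover the secret from any k shares (with their moduli)."""
    return crt(shares, moduli)
```

For a (3, 5) example with pairwise-coprime moduli [11, 13, 17, 19, 23], the secret must lie between the product of the 2 largest moduli (437) and the product of the 3 smallest (2431); any 3 shares then reconstruct it exactly, while 2 shares yield only the secret modulo a product that is too small.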

Fig 7. Private key encryption and private key recovery process.

https://doi.org/10.1371/journal.pone.0286087.g007

4.3.4 Biometric-based private key encryption.

To achieve reliable encryption of private keys, we propose a biometric-based private key encryption algorithm (Alg 3). Firstly, a random value Rfa is generated from the facial features by the fuzzy extractor. Secondly, the LWE-based asymmetric key generation algorithm generates the blockchain public-private key pair (sk1, pk1), and the key pair (sk2, pk2) is generated from the fingerprint features; the secret sharing scheme then divides sk1 into n secret fragments SK1, SK2, ⋯, SKn. These fragments are encrypted using keys derived from the fused biometrics. Salting refers to adding a salt value to the information to be encrypted, thereby enhancing password security. We use the hash value of Rfa to salt the n secret fragments SK1, SK2, ⋯, SKn respectively, obtaining n salted ciphertexts; these are then asymmetrically encrypted with the public key pk2 to obtain the ciphertexts C1, C2, ⋯, Cn.

Algorithm 3: Biometric-based private key encryption algorithm

Input: face key Rfa, fingerprint key Rfp

1 generate (sk1, pk1) and (sk2, pk2) based on LWE;

2 hash the face key Rfa to get hk1: hk1 = hash(Rfa);

3 salt the N shares with hk1: Salt(N(mi, sci));

4 use hash(Rfp) as the private key sk2;

5 generate the corresponding public key pk2 using sk2;

6 use (sk2, pk2) to asymmetrically encrypt the N salted shares: Encrypt{Salt(N(mi, sci))};

Output: N secure key encrypted fragments
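As a sketch of the salting step in Alg 3, each fragment can be combined reversibly with a pad derived from hk1 = hash(Rfa), so that recovery can later remove the salt. The XOR-pad construction and all names here are our assumptions for illustration (the paper does not specify the exact salting function), and the subsequent pk2 asymmetric encryption step is omitted.

```python
import hashlib

def _pad(seed: bytes, n: int) -> bytes:
    # Expand a seed into n pad bytes via counter-mode SHA-256.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def salt(fragment: bytes, hk: bytes, index: int) -> bytes:
    # Reversible salting: XOR the fragment with a pad derived from hk
    # and the fragment index, so equal fragments salt differently.
    pad = _pad(hk + index.to_bytes(4, "big"), len(fragment))
    return bytes(a ^ b for a, b in zip(fragment, pad))

desalt = salt  # XOR salting is its own inverse
```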

4.4 Distributed storage for encrypted private keys

Leveraging the blockchain's cryptographically chained structure, distributed nodes, and consensus algorithm, we store the secret fragments on the chain through multiple transactions to achieve secure and convenient storage of private keys.

The secret fragments are encrypted into the ciphertexts C1, C2, ⋯, Cn; each is packaged together with the information needed to restore the secret sk1 and stored as a block transaction. The information to be stored includes the publicly available helper strings Pfa and Pfp, which help recover Rfa and Rfp, and the public key pk2, which helps authenticate, constructing the n groups of information (C1, Pfa, Pfp, pk2), ⋯, (Cn, Pfa, Pfp, pk2). Because all block data on the blockchain can be seen and shared by every node, an attacker could maliciously collect the n groups (C1, Pfa, Pfp, pk2), ⋯, (Cn, Pfa, Pfp, pk2) to mount an attack; we therefore issue them in n separate transactions stored in different blocks, while recording the indices of the blocks and transactions where the n groups are stored.
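The n transaction payloads described above could be assembled as follows; the JSON layout and field names are illustrative assumptions, not the paper's wire format.

```python
import json

def build_payloads(ciphertexts, P_fa, P_fp, pk2):
    """One payload per ciphertext fragment; each is issued in its own transaction."""
    payloads = []
    for i, c in enumerate(ciphertexts):
        payloads.append(json.dumps({
            "index": i,        # position of the fragment, recorded for later lookup
            "cipher": c,       # C_i: salted, pk2-encrypted secret fragment
            "P_fa": P_fa,      # public helper string for recovering R_fa
            "P_fp": P_fp,      # public helper string for recovering R_fp
            "pk2": pk2,        # public key used for authentication
        }))
    return payloads
```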

4.5 Recovery of private key

According to the stored indices, or by querying the transactions corresponding to one's own address, obtain the information stored in at least k transactions and verify that it has not been tampered with; record these as k groups (Ci, Pfa, Pfp, pk2). The steps to recover the private key are as follows:

  1. get the ciphertexts Ci, the public helper strings (Pfa, Pfp) used to recover (Rfa, Rfp), and the public key pk2 of the key pair.
  2. obtain the user's face image and fingerprint image, extract the new face descriptor D′fa and fingerprint descriptor D′fp as input, and use the public helper strings Pfa and Pfp for key recovery, namely Rfa ← Rep(D′fa, Pfa), Rfp ← Rep(D′fp, Pfp).
  3. hash Rfp with the SHA-256 algorithm to get hash(Rfp), then use the LWE public-private key pair generation algorithm to calculate the key pair (sk′2, pk′2), where the private key sk′2 is hash(Rfp). Compare the generated public key with pk2 to confirm that the generated key pair is correct, namely pk′2 = pk2. The desired private key sk2 is thus obtained.
  4. use the private key sk2 to decrypt the obtained ciphertexts Ci to obtain the salted ciphertexts.
  5. use hash(Rfa) to remove the salt, obtaining the secret fragments SKi.
  6. CRT-based secret reconstruction uses the k fragments to reconstruct the secret, that is, the private key sk1.
  7. use the LWE public-private key pair generation algorithm to calculate the key pair (sk′1, pk′1) from the reconstructed private key, and compare the generated public key with pk1 to confirm that the key pair is correct, that is, pk′1 = pk1. The user's private key sk1 is then successfully recovered.

5 Security and performance analysis

5.1 Security analysis

In this section, we briefly analyze the security of the authentication schemes of the proposed framework, satisfying the following performance and security properties.

5.1.1 Anonymity.

The proposed scheme ensures the user’s biometric data is not stored on any server. Only the stable and distinguishable descriptor generated from the user’s biometric feature is used, which is converted into a random value through the fuzzy extractor. Therefore, the user’s original biometric feature cannot be restored from the random value.

5.1.2 Collusion resistance.

The Asmuth-Bloom threshold scheme based on the Chinese remainder theorem is used for secret sharing. If the secret is distributed into n fragments, it is assumed that at least k fragments are required for secret recovery. Even if k − 1 nodes collude with each other, they cannot change the calculation result or achieve recovery of the private key.

5.1.3 Anti-tampering.

The blockchain's tamper-resistance, combined with the fact that all private key fragments are stored on the blockchain, makes it impossible for attackers to tamper with the protocol after its execution.

5.1.4 Disaster recovery.

Since the fragments are distributed to different nodes for storage during the transactions, even if the data on k − 1 nodes is lost, the user can still recover the private key from the fragments on the remaining nodes.

5.1.5 Forward secrecy.

The user generates multiple biological feature points through the face and fingerprint, and generates random values for the feature descriptor through the fuzzy extractor. We use the LWE public-private key pair generation method, hash function, and salt processing to encrypt the private key, ensuring forward secrecy.

5.1.6 Resisting quantum attacks.

The authentication scheme designed on lattice-based public key cryptography can resist quantum attacks, since no known algorithm solves the underlying lattice problem; the literature has also shown that the LWE problem resists quantum attacks. Our proposed authentication scheme is based on the LWE problem, ensuring resistance to quantum attacks.

5.1.7 Resistance to man-in-the-middle attacks.

In our scheme, two-way authentication between users and blockchain nodes is considered. By using asymmetric encryption to store information on blockchain nodes, even if attackers intercept information, they cannot generate keys or tamper with information due to the one-way nature of the hash function and the complexity of LWE, ensuring resistance to man-in-the-middle attacks.

5.1.8 Resistance to replay attacks.

In the proposed scheme, the salted hash function and LWE are used multiple times in the encryption process to enhance the security of the key fragments. Therefore, even if an attacker intercepts the information transmitted during the authentication process and attempts to carry out replay attacks, they will not be able to obtain any useful information for calculating other related keys through the key. There are generally two scenarios for replay attacks: one is to replay the information transmitted by the device to the server, and the other is to replay the information transmitted by the server to the device. However, due to the robustness of the proposed scheme, both types of replay attacks are effectively prevented.

We also compare this work with related work shown in Table 1, which confirms the good performance of the proposed method in terms of security properties. We use “Y” to denote the scheme satisfies the property; otherwise it is denoted by “N”.

5.2 Performance analysis

5.2.1 Storage cost.

The information to be stored includes the fragmented encrypted ciphertext C; the public information Pfa and Pfp that helps restore Rfa and Rfp, respectively; and the public key pk2 used to help with identity verification. With the number of ciphertext fragments set to n, the total cost of storing a complete ciphertext is: C + n × (Pfa + Pfp + pk2).

5.2.2 Computing costs.

TL denotes the time of the LWE public-private key pair generation algorithm, TH the time of hash-based encryption and decryption, TN the time to divide the ciphertext into n pieces using the Chinese remainder theorem, TK the time to recover the secret from k shards, TS the time of salt processing, and TF the time to find k fragments on the blockchain for secret recovery. The total computing cost is: TL + TH + TN + TK + TS + TF.

6 Experimental results

In this section, we will conduct experiments to verify the reliability and feasibility of the proposed method. Our experiments are intended to address the following questions:

  1. RQ1: How should suitable feature points be selected to ensure sufficient stability and distinguishability?
  2. RQ2: What is the success rate of fuzzy extraction from the stable discriminative descriptors?
  3. RQ3: Is private key encryption based on BKG and the Chinese-remainder-theorem secret sharing algorithm feasible?

The tests were performed on a personal computer with an AMD Ryzen 7 5800X 8-core processor and 16 GB of RAM, running Ubuntu 18.04.

6.1 Data set description

The dataset about the facial features is the face recognition dataset of Extended Yale Face Database B [27]. This dataset collects images of the same face in different lighting environments, which fully meets the data type required for this project. We selected the data of four groups of faces (B11, B12, B13, and B14) under different lighting conditions for experiments.

We conduct experimental verification based on the SOCOFing fingerprint dataset [28]. SOCOFing consists of fingerprint images of multiple ethnicities, including the original fingerprint data and the rotated fingerprint data.

6.2 Metrics

6.2.1 Hamming distance.

The Hamming distance indicates the number of differing digits/letters in a pair of messages. To calculate the Hamming distance between two strings a and b, we perform their XOR operation, xor = a ⊕ b, and then count the total number of 1s in the resulting string.
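The XOR-and-count definition above can be written directly:

```python
def hamming(a: str, b: str) -> int:
    """Hamming distance of two equal-length bit strings: XOR, then count the 1s."""
    assert len(a) == len(b), "strings must have equal length"
    xor = int(a, 2) ^ int(b, 2)
    return bin(xor).count("1")
```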

6.2.2 Success rate.

Success rate is the fraction or percentage of successes among a number of attempts to perform a procedure or task. In our experiments, Success Rate (SR) is defined as Eq (9): SR = (number of successful attempts / total number of attempts) × 100% (9)

6.2.3 Average Hamming distance.

Average Hamming distance is the average of the Hamming distances obtained over n randomly sampled comparisons. Average Hamming distance (AH) is defined as Eq (10): AH = (1/n) Σᵢ₌₁ⁿ HDᵢ (10)

6.2.4 Decline rate.

Decline rate is the percentage decline, after executing the program or task, relative to the original value. In our experiments, Decline Rate (DR) is defined as Eq (11): DR = ((original value − value after execution) / original value) × 100% (11)
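The three metrics, as defined in words above, reduce to one-liners (a sketch; the variable names are ours):

```python
def success_rate(successes: int, attempts: int) -> float:
    # SR: percentage of successful attempts
    return successes / attempts * 100

def average_hamming(distances) -> float:
    # AH: mean Hamming distance over the sampled comparisons
    return sum(distances) / len(distances)

def decline_rate(before: float, after: float) -> float:
    # DR: percentage drop relative to the original value
    return (before - after) / before * 100
```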

6.3 Experiment on stable discriminative descriptors based on biometrics (RQ1)

6.3.1 Experiment on stable discriminative descriptors based on facial features.

Three steps are involved in extracting stable and distinguishable descriptors for facial features. First, the face image is extracted from the input image using the HOG (histogram of oriented gradients) algorithm; next, 82 specific feature points are extracted from the face image using the Dlib library [29]; the distribution and number of these feature points are shown in Table 2. Finally, the feature points are classified and selected, and the stable distinguishable descriptor is determined from the chosen feature points.

Following the stable and distinguishable descriptor extraction process, the face is first extracted with the HOG algorithm. As shown in Table 3, for the four sets of data selected in this paper, the accuracy of HOG-based face detection exceeds 80%; a successfully detected face image is shown in Fig 8.

After obtaining a successfully detected face image, the stably distributed facial feature points are extracted with the Dlib library, as shown in Fig 9; the horizontal and vertical axes represent image-pixel coordinates of the extracted feature points. As facial expression changes, the feature points near the eyes and lips fluctuate greatly, causing large changes in descriptor error. In view of this, we screen the different types of feature points on the face. Fig 10 shows the two-dimensional feature points remaining after screening the points in Fig 9a and 9b: the outer contour of the face and the prominent nose region are largely preserved, while the feature points near the eyes and lips are filtered out.

Fig 9. Distribution of all feature points.

a) and b) represent the changes of feature points under different expressions of the same person.

https://doi.org/10.1371/journal.pone.0286087.g009

Fig 10. Characteristic point distribution after screening.

https://doi.org/10.1371/journal.pone.0286087.g010

Different biometric acquisition devices produce images that differ greatly in size, angle, relative distance, and the like. However, for the same individual, the relative layout of the biometric features is invariant, so the features can be described uniformly regardless of image size. Therefore, this paper uses control points to obtain a fixed-dimension descriptor from any image; the stable distinguishable descriptor computation is shown in Alg 2. For the same face, the error between descriptors should be within an acceptable range, while for different faces it should be much larger than that range. The descriptors extracted in this paper are binary, so the Hamming distance is used to measure the error between descriptors.

When the key is generated and then regenerated from stable and distinguishable descriptors of the same individual at different times, there is a certain probability of failed regeneration depending on the Hamming distance between the two readings. Assume the Hamming distance between the two extractions is at most t, that is, dis(w, w′) ≤ t. For each i, the probability that the sampled subset v′i matches vi is at least (1 − t/n)^k. The probability that the regeneration algorithm finds no matching locker and outputs an error is therefore at most (1 − (1 − t/n)^k)^L. In this paper, the input n is 128 bits, t = 10 bits are allowed to differ, and the output unique key is k = 16 bits. With L = 20 samplings, the key regeneration error probability is 6.28e−4; with L = 100 samplings, it is 9.773e−12.

The descriptor calculated in this paper is 128 bits, and based on the fuzzy extractor's basic principle the key can be recovered with 99.99% probability when the Hamming distance is within 10. A descriptor taken from the facial feature points in Fig 9 does not meet the stability requirement because of the large Hamming distance; to fully meet the error range necessary for fuzzy extraction, the descriptor is extracted from the landmarks depicted in Fig 10. We randomly sampled 9 times and calculated the Hamming distance of the descriptors before and after feature-point screening, as shown in Fig 11 and Table 4. Before screening, the Hamming distance between descriptors is extremely unstable, with errors of up to 30 bits; after screening, the error is reduced to less than 10. From the data in the figure, the average Hamming distance difference of the four groups of descriptors before and after removing feature points is 36.89, 14.67, 14.66, and 12.00 respectively, with decline rates of 84.47%, 98.52%, 84.06%, and 59.36%, satisfying the requirements for the fuzzy extractor to generate a unique key. Fig 11 and Table 4 show the stability of the descriptors obtained by screening feature points. This paper also analyzes the distinguishability between different faces.

Fig 11. Hamming distance error before and after screening feature points.

https://doi.org/10.1371/journal.pone.0286087.g011

Table 4. Hamming distance error before and after screening feature points.

https://doi.org/10.1371/journal.pone.0286087.t004

The Hamming distance of the descriptors is calculated pairwise among the four sets of face data. Fig 12 and Table 5 show the Hamming distance of the descriptors between B11 and B12, B11 and B13, B11 and B14, and B12 and B14. From the data in the figure, the average Hamming distance of the four groups of descriptors between different persons is 89.71, 82.57, 88, and 80.14 respectively, all obviously much greater than 10.

Fig 12. Hamming distance of the descriptor between four sets of experimental data.

https://doi.org/10.1371/journal.pone.0286087.g012

Table 5. Hamming distance of the descriptor between four sets of experimental data.

https://doi.org/10.1371/journal.pone.0286087.t005

From the above experimental analysis, it can be seen that the descriptor error between different face data is much larger than the threshold. Therefore, the proposed stable and distinguishable descriptor based on control points not only has sufficient stability for the same face data in different periods, but also has distinguishability between different faces.

6.3.2 Experiment on stable discriminative descriptors based on fingerprint features.

To improve the quality of the raw fingerprint data, the dataset is enhanced with an appropriate algorithm. The fingerprint enhancement algorithm exploits the well-defined frequency properties of the fingerprint image, targeting the locally stable frequency variations in its valleys and ridges. To achieve this, Gabor filters tuned to the appropriate frequency and orientation are used, removing noise between valleys and ridges while retaining a clear ridge-valley structure. Fig 13 provides a visual representation of the enhanced fingerprint image.

Fig 13. Enhanced fingerprint image.

fingerprints of 6 different people.

https://doi.org/10.1371/journal.pone.0286087.g013

The corresponding feature points are extracted from the enhanced fingerprint image based on the pixel characteristics of ridge endings and bifurcation points. Fig 14 shows the fingerprints of 6 different people. The enhanced fingerprint image has a very fine structure, and the figure illustrates that feature-point extraction is extremely unstable around the edges of the fingerprint image. Since a normally acquired fingerprint image does not contain structural mutations at its edges, the feature points at the image edges must be removed. We calculate the stable and distinguishable descriptors of the fingerprint image based on Alg 2, and analyze the errors of the fingerprint descriptors before and after feature-point screening.

Fig 14. Fingerprint image feature point extraction results.

fingerprints of 6 different people.

https://doi.org/10.1371/journal.pone.0286087.g014

We randomly sampled 7 times and calculated the Hamming distance of the descriptors. Fig 15 and Table 6 show the Hamming distance error between the descriptors, where A, B, C, and D represent fingerprints from four different people. Before screening out the edge feature points, the Hamming distance error between descriptors can reach 30; after screening, it remains within 10. From the data in the figure, the average Hamming distance difference of the four groups of fingerprint descriptors before and after removing feature points is 14.00, 16.57, 16.29, and 14.86 respectively, with decline rates of 66.23%, 70.30%, 69.53%, and 65.00%, achieving stability of the descriptor for the same fingerprint.

Fig 15. Fingerprint description sub-Hamming distance before and after screening.

https://doi.org/10.1371/journal.pone.0286087.g015

Table 6. Fingerprint description sub-Hamming distance before and after screening feature points.

https://doi.org/10.1371/journal.pone.0286087.t006

As shown in Fig 16 and Table 7, where A, B, C, and D represent fingerprints from four different people, the distance between descriptors of different fingerprints is well beyond the error range. From the data in the figure, the average Hamming distance of the four groups of descriptors between different persons is 74.00, 81.29, 81.00, and 77.57 respectively, fully satisfying the requirement for distinguishability between fingerprint descriptors.

Fig 16. The sub-Hamming distance between different biometric fingerprints.

https://doi.org/10.1371/journal.pone.0286087.g016

Table 7. The sub-Hamming distance between different biometric fingerprints.

https://doi.org/10.1371/journal.pone.0286087.t007

6.4 Stable discrimination descriptor fuzzy extraction experiment (RQ2)

From the aforementioned experiments, 128-bit binary descriptors can be derived from the face data and from the fingerprint data, respectively. However, even with the filtered feature points, the calculated descriptors still exhibit a Hamming distance error of up to 10 bits, so a fuzzy extractor is needed to remove this error and obtain a unique biometric key. We examined the fuzzy extraction success rate. The success rate of face descriptor fuzzy regeneration is displayed in Table 8: in 1000 experiments, the success rate is almost 100% when the Hamming distance error is less than 10.

Similar to the previous experiment, we ran the same test on fingerprint descriptors and achieved a 100% regeneration success rate when the Hamming distance error is within 10. It is thus established that both face data and fingerprint data can produce distinctive keys.

6.5 Private key sharing and encryption experiment based on BKG and CRT (RQ3)

Ethereum is an open-source blockchain platform that enables developers to build blockchain-based applications. In this article, we constructed a private blockchain system using the Ethereum Geth platform and conducted experiments to verify the proposed private key encryption and decryption algorithm based on biometrics. Table 9 shows the equipment used in the experiment, and we set the parameters of the Ethereum creation block as indicated in Table 10. To facilitate effective testing of the algorithm, we set transaction fees to zero and reduced the hash calculation difficulty required for block mining.

As shown in Fig 17, this article finally built a command-line Ethereum blockchain.

Fig 17. Schematic diagram of a private blockchain system based on Ethereum.

https://doi.org/10.1371/journal.pone.0286087.g017

The public-private key pair is generated based on the LWE algorithm, as shown in Table 11. According to Experiment 1 with t = 10, choosing L = 20 reduces the time consumed, since the success rate is similar for sampling counts between 20 and 200. The generated private key is secretly shared based on the CRT: as shown in Table 12, we split the private key into 20 secret fragments and set the minimum threshold for recovering the private key to 10 fragments.

Table 11. Public-private key pair generated based on LWE.

https://doi.org/10.1371/journal.pone.0286087.t011

The unique key of the face data is used to salt the 20 private key fragments, and the unique key of the fingerprint data is used as the new private key sk2, as shown in Table 11. With the public key pk2 paired with this fingerprint-derived private key, (sk2, pk2) is used to encrypt the N fragments generated from the private key sk1, yielding the salted and encrypted private key fragments shown in Table 13.

After salted encryption, the confidential fragments are distributed over the blockchain nodes. When restoring the private key, only 10 or more of the 20 fragments must be obtained from the blockchain nodes. The private key fragments are then desalted and decrypted using newly captured face and fingerprint data, and processed with the CRT to produce the final recovered private key, as shown in Table 14. With only 9 fragments, the recovered private key is wholly incorrect; once 10 fragments are available, the private key can be fully recovered.

7 Conclusion and future work

It is especially crucial to ensure the security of the private key because, in blockchain technology, the security of the user’s assets and the security of the entire blockchain system are directly impacted by the security of the private key. For the whole process of private key generation, encryption, storage and recovery, this paper proposes a biometric-based private key encryption and management framework for blockchain (BPKEM), and conducts experimental verification.

We discuss the security of the entire generation, encryption, storage, and recovery of private keys, and propose the biometric-based private key encryption and management framework (BPKEM). To achieve secure generation of private keys against quantum computing, the blockchain system first uses the LWE-based asymmetric key generation algorithm to generate public-private key pairs. Then, secret distribution and encryption of the private key are achieved using Chinese-remainder-theorem secret sharing and the biometric key. After encryption, reliable storage of the private key fragments is implemented on the blockchain system itself. Finally, a reliable private key recovery scheme is realized, and systematic experiments demonstrate the capability of private key regeneration and recovery.

For the private key encryption problem, since conventional keys face problems such as storage security, a biometric-based stable key generation (BKG) method is proposed, and the key generated from multiple biometrics is used to encrypt the private key. Bio-key generation follows this paper's method based on stable and distinguishable descriptors; key generation from facial features and fingerprint features is experimented with and discussed. Computing the stable distinguishable descriptor involves three steps. First, we obtain face and fingerprint images from the relevant datasets, extract the biometric points of the face and fingerprint, and calculate their pixel coordinates in the two-dimensional image coordinate system. Second, to reduce the error of the generated descriptors, unstable feature points are screened out and the stable ones retained, exploiting the inherent properties of faces and fingerprints. Finally, to handle errors caused by the angle and resolution of images collected at different times, a control-point-based stable distinguishable descriptor extraction algorithm is proposed that uses the invariance of the relative distribution of facial and fingerprint feature points, keeping the errors of descriptors extracted from the same individual at different times within a certain range. Using the descriptor as the input of the fuzzy extractor, the success rate of key regeneration is discussed.

Building on this work, possible follow-up improvements and research directions are as follows. First, biometric key generation could be further improved and strengthened with deep-learning-based methods. Second, for the private key security management scheme, in which distributed storage is based on the blockchain, one could explore identity-authentication-based transaction storage and retrieval schemes built on smart contracts or other mechanisms.

References

  1. 1. Fedorov Aleksey K., Kiktenko Evgeniy O., Lvovsky Alexander I. Quantum computers put blockchain security at risk. Nature. 2018 Nov; p465–467. pmid:30451981
  2. 2. Mehmet Aydar, Salih Cemil Cetin, Serkan Ayvaz, Betul Aygun. Private key encryption and recovery in blockchain. arXiv preprint arXiv:1907.04156.2019
  3. 3. Li Huige, Zhang Fangguo, He Jiejie, Tian Haibo. A Searchable Symmetric Encryption Scheme using BlockChain. arXiv preprint arXiv:1711.01030.2017
  4. 4. Huige Li, Haibo Tian, Fangguo Zhang, Jiejie He. Blockchain-based searchable symmetric encryption scheme. Computers Electrical Engineering. 2019 Jan; Volume 73, p32–45.
  5. 5. Yan XX, Yuan HX, Tang YL, Chen YL. Verifiable attribute-based searchable encryption scheme based on blockchain. Journal on Communications. 2020;41(02): p187–198.
  6. 6. Baokun Zheng, Liehuang Zhu, Meng Shen, Feng Gao, Chuan Zhang, Yandong Li, et al. Scalable and privacy-preserving data sharing based on blockchain. Journal of Computer Science and Technology. 2018;33(03): p557–567.
  7. 7. Wang RJ, Tang YC, Zhang WQ, ZHANG Fengli. Privacy protection scheme for internet of vehicles based on homomorphic encryption and block chain technology. Chin. J. Netw. Inf. Secur. 2020;6(01): p46–53.
  8. 8. Hongfeng Zhu, Zexi Li. An Efficient Biometric Authenticated Protocol for Arbitrary-domain-server with Blockchain Technology. International Journal of Network Security. 2021 May;23(03): p386–394.
  9. Hamian Negin, Bayat Majid, Alaghband Mahdi R., Hatefi Zahra, Pournaghi Seyed Morteza. Blockchain-based User Re-enrollment for Biometric Authentication Systems. International Journal of Electronics and Information Engineering. 2022 Jun;14(01): p18–38.
  10. Bisogni Carmen, Iovane Gerardo, Landi Riccardo Emanuele, Nappi Michele. ECB2: A novel encryption scheme using face biometrics for signing blockchain transactions. Journal of Information Security and Applications. 2021 Jun. ISSN 2214-2126.
  11. Bao Di, You Lin. Two-factor identity authentication scheme based on blockchain and fuzzy extractor. Soft Comput. 2021 Jul. ISSN 1433-7479.
  12. Lusetti Monia, Salsi Luca, Dallatana Andrea. A blockchain based solution for the custody of digital files in forensic medicine. Forensic Science International: Digital Investigation. 2020 Dec.
  13. Guri Mordechai. Beatcoin: Leaking private keys from air-gapped cryptocurrency wallets. IEEE. 2018: p1308–1316.
  14. Yue Xiao, Zhang Peng, Liu Yuhong. Secure and efficient multi-signature schemes for fabric: An enterprise blockchain platform. IEEE Transactions on Information Forensics and Security. 2020: p1782–1794.
  15. Pal Om, Alam Bashir, Thakur Vinay, Singh Surendra. Key management for blockchain technology. ICT Express. 2021 Mar;7(1): p76–80.
  16. Maria Azees, Pandi Vijayakumar, Lazarus Jeatha Deborah, Karuppiah Marimuthu, Christo Mary Subaja. BBAAS: Blockchain-based anonymous authentication scheme for providing secure communication in VANETs. Security and Communication Networks. 2021 Feb: p1–11.
  17. Fan Qing, Chen Jianhua, Deborah Lazarus Jegatha, Luo Min. A secure and efficient authentication and data sharing scheme for Internet of Things based on blockchain. Journal of Systems Architecture. 2021 Aug: p102–112.
  18. Xiong F, Xiao R, Ren W, Zheng R, Jiang J. A key protection scheme based on secret sharing for blockchain-based construction supply chain system. IEEE Access. 2019 Aug: p126773–126786. ISSN: 2169-3536.
  19. Li Guojia, Lin You. A Consortium Blockchain Wallet Scheme Based on Dual-Threshold Key Sharing. Symmetry. 2021;13(8).
  20. Ore Oystein. The general Chinese remainder theorem. The American Mathematical Monthly. 1952: p365–370.
  21. Regev Oded. On lattices, learning with errors, random linear codes, and cryptography. Journal of the ACM (JACM). 2009 Sep: p1–40.
  22. Zhou Zhicheng, Li Lixin, Guo Song, Li Zuohui. Biometric and password two-factor cross domain authentication scheme based on blockchain technology. Journal of Computer Applications. 2022;31(3): p38–47.
  23. Maria Azees, Pandi Vijayakumar, Lazarus Jeatha Deborah. EAAP: Efficient anonymous authentication with conditional privacy-preserving scheme for vehicular ad hoc networks. IEEE Transactions on Intelligent Transportation Systems. 2017 Sep: p2467–2476.
  24. Kim Semin, Mun Hyung-Jin, Hong Sunghyuck. Multi-Factor Authentication with Randomly Selected Authentication Methods with DID on a Random Terminal. Applied Sciences. 2022 Feb;12(5): 2301.
  25. Alaghband Mahdi R., Hatefi Zahra, Pournaghi Seyed Morteza, Bayat Majid, Hamian Negin. Blockchain-based User Re-enrollment for Biometric Authentication Systems. International Journal of Electronics and Information Engineering. 2021 Nov 5: p18–38.
  26. Li Guojia, You Lin, Hu Gengran, Hu Liqin. Recoverable Private Key Scheme for Consortium Blockchain Based on Verifiable Secret Sharing. KSII Transactions on Internet and Information Systems (TIIS). 2021 Aug 31;15(8): p2865–2878.
  27. Georghiades A, Belhumeur P, Kriegman D. From few to many: illumination cone models for face recognition under variable lighting and pose. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2001;23(6): p643–660. Available from: http://vision.ucsd.edu/iskwak/ExtYaleDatabase/ExtYaleB.html
  28. Yahaya Isah Shehu, Ariel Ruiz-Garcia, Vasile Palade, Anne James. Sokoto Coventry Fingerprint Dataset. arXiv preprint arXiv:1807.10609, 2018. Available from: https://www.kaggle.com/datasets/ruizgara/socofing.
  29. King Davis E. Dlib-ml: A machine learning toolkit. Journal of Machine Learning Research. 2009 Sep: p1755–1758.