Biometric-Based Key Generation and User Authentication Using Acoustic Characteristics of the Outer Ear and a Network of Correlation Neurons
Abstract
1. Introduction
2. Related Works: Common Terms
3. Materials and Methods
3.1. Theory
3.1.1. A Curved Feature Space: The Informativeness and Cross-Correlation of Features
3.1.2. The Bayes–Minkowski Meta-Feature Space
3.1.3. Assessment of Bayes–Minkowski’s Meta-Feature Informativeness Using Synthetic Datasets
- A correlation between features can carry more information than the features themselves. If the initial features are more informative (I ≈ 0.5) but independent (|Cj,t| < 0.3), then the accuracy of image identification in the meta-feature space is lower than when the initial features are less informative (I ≈ 0.15) but strongly correlated (0.95 < Cj,t < 1).
- If the initial features are independent, the meta-features only add noise: identification accuracy is higher when using the initial independent features alone than when combining them with the meta-features;
- The “transition” to the meta-feature space does not trigger the “curse of dimensionality” if the features are strongly correlated. The curse of dimensionality is the problem of an exponential growth of the required training set and of the related computations as the dimension of the feature space grows linearly. As Figure 7 shows, with the same training set (10 images), higher accuracy can be achieved in a space of much greater dimension (n’ = 435) than in the initial one (n = 30). At the same time, the number of operations for computing the a posteriori (“Bayesian”) probabilities grows linearly with the dimension of the feature space, and the number of meta-features and the cost of computing the correlation matrix grow not exponentially but according to a power law (10).
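The effect described in the first bullet can be reproduced on synthetic data: two features with identical marginal distributions are individually uninformative, yet their pairwise product (a Bayes–Minkowski-style meta-feature) separates strongly correlated pairs from independent ones. A minimal sketch with illustrative data and parameters, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_pairs = 2000, 10

# "Genuine" images: feature pairs with identical N(0, 1) marginals but strong
# cross-correlation (~0.95); "Impostor" images: the same marginals, but the
# features are independent.  Any single feature is therefore uninformative.
x1g = rng.standard_normal((n_images, n_pairs))
x2g = 0.95 * x1g + np.sqrt(1 - 0.95**2) * rng.standard_normal((n_images, n_pairs))
x1i = rng.standard_normal((n_images, n_pairs))
x2i = rng.standard_normal((n_images, n_pairs))

# Meta-features: pairwise products.  Averaged over the pairs of one image,
# they estimate the cross-correlation carried by that image.
mg = (x1g * x2g).mean(axis=1)  # concentrates near 0.95
mi = (x1i * x2i).mean(axis=1)  # concentrates near 0

t = 0.5 * (mg.mean() + mi.mean())
accuracy = 0.5 * ((mg > t).mean() + (mi <= t).mean())
print(round(accuracy, 2))  # ~0.9: the correlations, not the features, carry the information
```

A per-feature classifier cannot beat chance here, since the class marginals coincide; the product meta-feature makes the hidden correlation structure directly observable.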
3.2. C-Neuro-Extractor
3.2.1. Correlation Neuron Model for Biometric Authentication
3.2.2. Synthesis and Automatic Training of C-Neuro-Extractors
- Calculation of the feature correlation matrix.
- Counting pairs of negatively correlated features (Cj,t < C-). If the number of pairs is less than η·N-, then C- is increased by 0.05 and this step is repeated.
- Counting pairs of positively correlated features (Cj,t > C+). If the number of pairs is less than η·N+, then C+ is decreased by 0.05 and this step is repeated.
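The threshold-adaptation steps above can be sketched as follows; the function and parameter names (`select_thresholds`, `n_neg`/`n_pos` standing in for N-/N+) are illustrative, not the author's implementation:

```python
import numpy as np

def select_thresholds(X, eta, n_neg, n_pos, c_minus=-0.95, c_plus=0.95, step=0.05):
    """Adapt the correlation thresholds C- and C+ as in the steps above.

    X is the matrix of training images (rows) by features (columns).
    n_neg / n_pos stand in for N- and N+ from the paper.
    """
    C = np.corrcoef(X, rowvar=False)         # feature correlation matrix
    corr = C[np.triu_indices_from(C, k=1)]   # each unordered feature pair once

    # Too few negatively correlated pairs (C_jt < C-): relax C- upward by 0.05
    while (corr < c_minus).sum() < eta * n_neg:
        c_minus += step
    # Too few positively correlated pairs (C_jt > C+): relax C+ downward by 0.05
    while (corr > c_plus).sum() < eta * n_pos:
        c_plus -= step
    return c_minus, c_plus

# Demo on synthetic features: columns 0-2 are strongly (anti-)correlated,
# columns 3-5 are independent noise.
rng = np.random.default_rng(1)
base = rng.standard_normal((200, 1))
X = np.hstack([
    base,
    0.9 * base + 0.1 * rng.standard_normal((200, 1)),
    -0.9 * base + 0.1 * rng.standard_normal((200, 1)),
    rng.standard_normal((200, 3)),
])
print(select_thresholds(X, eta=1, n_neg=1, n_pos=1))
```

With enough strongly correlated pairs already present, the thresholds are returned unchanged; otherwise they relax in 0.05 steps until η·N- and η·N+ candidate pairs are available.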
- correlation neurons are not affected by the problem of training-set imbalance (the «Impostors» training set is much larger than the «Genuine» training set);
- the process of setting up the correlation network is robust (overfitting does not occur);
- the length of the key associated with the c-neuro-extractor is potentially much larger than those associated with the fuzzy extractors and the base model of the neuro-extractor;
- this model should be far more resistant to adversarial attacks [16,17] than a classical deep network with a softmax activation function at the output, at least with respect to the FAR indicator: adding noise or other modifications to an «Impostor» image is unlikely to bring its feature correlations close to those of the «Genuine» image.
4. Experiments and Results
4.1. Data Set
4.2. Image Preprocessing
4.3. Autoencoder Training for Feature Extraction
- EER = 0.03041 (FRR = 0.2288 at FAR < 0.001) with a key length L = 716; the size of the «Genuine» training set was KG = 8 and the size of the «Impostors» training set was KI = 49.
5. Discussion
6. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
Algorithm A1: Synthesis and training of the correlation neuron

```
Initiate C-, C+, AUCMAX, b
// Select an array Ψ of random pairs of features (not involved in any neuron)
// with a cross-correlation level of more than C- but less than C+
for ι from 1 to η do
    Ψ[ι] = GetRandomUniqueFeaturesPair(C-, C+)
end
p = 0.9
// Calculation of an array W of the neuron's weight coefficients using Formula (18);
// the meta-features are calculated according to Formula (12)
for ι from 1 to η do
    W[ι] = GetSynapsesWeights(Ψ[ι], p)
end
// Calculation of the arrays YG and YI of neuron responses
// to the «Genuine» and «Impostors» training images according to Formula (17)
for k from 1 to KG do
    YG[k] = GetNeuronResponse(āG,k, Ψ, W, p)
end
for k from 1 to KI do
    YI[k] = GetNeuronResponse(āI,k, Ψ, W, p)
end
ξG = GetMathematicalExpectation(YG)
ςG = GetStandardDeviation(YG, ξG)
ξI = GetMathematicalExpectation(YI)
ςI = GetStandardDeviation(YI, ξI)
yImin = ξI − 4ςI
yImax = ξI + 4ςI
ΔyG = 4ςG
yGmin = ξG − ΔyG
yGmax = ξG + ΔyG
FGmax = F(yGmax)    // Formula (21)
FGmin = F(yGmin)    // Formula (21)
ΔFG = FGmax − FGmin
if AUCMAX > AUC(ՓG(y), ՓI(y)) then    // Formula (22)
    if ΔFG < 0.4 then
        while ΔFG < 0.1 do
            ΔyG = 1.05ΔyG
            yGmin = ξG − ΔyG
            yGmax = ξG + ΔyG
            FGmax = F(yGmax)    // Formula (21)
            FGmin = F(yGmin)    // Formula (21)
            ΔFG = FGmax − FGmin
        end
        if FGmin < 0.1 then
            if 1 − FGmax > 0.6 then
                ɸG = 3
                Tleft = yGmax
                ΔT = (yImax − yGmax)/4
                Tmiddle = Tleft + ΔT
                Tright = Tmiddle + ΔT
                H = GetHashTransformationNumber(ɸG, b)
                return {H, Ψ, W, Tleft, Tmiddle, Tright}
            else
                return NULL    // Neuron is not trained and must be removed
        else if FGmin > 0.4 then
            if 1 − FGmax < 0.1 then
                if 1 − FGmin < 0.4 then
                    ɸG = 0
                    ΔT = (yGmin − yImin)/4
                    Tleft = yGmin − 2ΔT
                    Tmiddle = yGmin − ΔT
                    Tright = yGmin
                    H = GetHashTransformationNumber(ɸG, b)
                    return {H, Ψ, W, Tleft, Tmiddle, Tright}
                else
                    return NULL    // Neuron is not trained and must be removed
            else
                ɸG = 1
                ΔT = (yGmin − yImin)/3
                Tleft = yGmin − ΔT
                Tmiddle = yGmin
                Tright = yGmax
                H = GetHashTransformationNumber(ɸG, b)
                return {H, Ψ, W, Tleft, Tmiddle, Tright}
        else
            ɸG = 2
            ΔT = (yImax − yGmax)/3
            Tleft = yGmin
            Tmiddle = yGmax
            Tright = Tmiddle + ΔT
            H = GetHashTransformationNumber(ɸG, b)
            return {H, Ψ, W, Tleft, Tmiddle, Tright}
    else
        return NULL    // Neuron is not trained and must be removed
else
    return NULL    // Neuron is not trained and must be removed
```
References
- Sulavko, A.E.; Samotuga, A.E.; Kuprik, I.A. Personal Identification Based on Acoustic Characteristics of the Outer Ear Using Cepstral Analysis, Bayesian Classifier and Artificial Neural Networks. IET Biom. 2021, 10, 692–705.
- Catak, F.O.; Yayilgan, S.Y.; Abomhara, M. Privacy-Preserving Fully Homomorphic Encryption and Parallel Computation Based Biometric Data Matching. Math. Comput. Sci. 2020.
- Gomez-Barrero, M. Multi-biometric template protection based on Homomorphic Encryption. Pattern Recognit. 2017, 67, 149–163.
- Ma, Y.; Wu, L.; Gu, X. A secure face-verification scheme based on homomorphic encryption and deep neural networks. IEEE Access 2017, 5, 16532–16538.
- Rathgeb, C.; Tams, B.; Wagner, J.; Busch, C. Unlinkable improved multibiometric iris fuzzy vault. EURASIP J. Inf. Secur. 2016, 1, 26.
- Hine, G.E.; Maiorana, E.; Campisi, P. A zero-leakage fuzzy embedder from the theoretical formulation to real data. IEEE Trans. Inf. Forensics Secur. 2017, 12, 1724–1734.
- Ponce-Hernandez, W.; Blanco-Gonzalo, R.; Liu-Jimenez, J.; Sanchez-Reillo, R. Fuzzy vault scheme based on fixed-length templates applied to dynamic signature verification. IEEE Access 2020, 8, 11152–11164.
- Sun, Y.; Lo, B. An artificial neural network framework for gait-based biometrics. IEEE J. Biomed. Health Inform. 2018, 23, 987–998.
- Elrefaei, L.A.; Mohammadi, A.A.-M. Machine vision gait-based biometric cryptosystem using a fuzzy commitment scheme. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 204–217.
- Akhmetov, B.; Doszhanova, A.; Ivanov, A. Biometric Technology in Securing the Internet Using Large Neural Network Technology. World Acad. Sci. Eng. Technol. 2013, 7, 129–139.
- Malygin, A.; Seilova, N.; Boskebeev, K.; Alimseitova, Z. Application of artificial neural networks for handwritten biometric images recognition. Comput. Model. New Technol. 2017, 21, 31–38.
- Bogdanov, D.S.; Mironkin, V.O. Data recovery for a neural network-based biometric authentication scheme. Mat. Vopr. Kriptografii 2019, 10, 61–74.
- Marshalko, G.B. On the security of a neural network-based biometric authentication scheme. Mat. Vopr. Kriptografii 2014, 5, 87–98.
- Pandey, R.K.; Zhou, Y.; Kota, B.U.; Govindaraju, V. Deep secure encoding for face template protection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 77–83.
- Jindal, A.K.; Chalamala, S.; Jami, S.K. Face template protection using deep convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 462–470.
- Alcorn, M.A. Strike (With) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 4840–4849.
- Hafemann, L.G.; Sabourin, R.; Oliveira, L.S. Characterizing and evaluating adversarial examples for Offline Handwritten Signature Verification. IEEE Trans. Inf. Forensics Secur. 2019, 14, 2153–2166.
- Boddeti, V.N. Secure Face Matching Using Fully Homomorphic Encryption. In Proceedings of the 9th International Conference on Biometrics Theory, Applications and Systems (BTAS), Redondo Beach, CA, USA, 22–25 October 2018; pp. 1–10.
- Nautsch, A.; Isadskiy, S.; Kolberg, J. Homomorphic Encryption for Speaker Recognition: Protection of Biometric Templates and Vendor Model Parameters. In Proceedings of Odyssey 2018: The Speaker and Language Recognition Workshop, Les Sables d’Olonne, France, 26–29 June 2018; pp. 16–23.
- Sulavko, A.E. Bayes–Minkowski measure and building on its basis immune machine learning algorithms for biometric facial identification. J. Phys. Conf. Ser. 2020, 1546, 012103.
- Sulavko, A.E.; Samotuga, A.E.; Stadnikov, D.G.; Pasenchuk, V.A.; Zhumazhanova, S.S. Biometric authentication on the basis of electroencephalogram parameters. In Proceedings of the III International Scientific Conference “Mechanical Science and Technology Update”, Omsk, Russia, 23–24 April 2019; p. 022011.
- Probst, R.; Grevers, G.; Iro, H. Basic Otorhinolaryngology: A Step-by-Step Learning Guide, 2nd ed.; Cambridge University Press: Cambridge, UK, 2017; p. 430.
- Nagrani, A.; Chung, J.S.; Xie, W.; Zisserman, A. VoxCeleb: Large-scale speaker verification in the wild. Comput. Speech Lang. 2020, 60, 101027.
- Akkermans, T.H.; Kevenaar, T.A.; Schobben, D.W. Acoustic ear recognition. In Proceedings of the International Conference on Biometrics, Hong Kong, China, 5–7 January 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 697–705.
- Gao, Y.; Wang, W.; Phoha, V.V.; Sun, W.; Jin, Z. EarEcho: Using Ear Canal Echo for Wearable Authentication. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2019, 3, 1–24.
- Mahto, S. Ear Acoustic Biometrics Using Inaudible Signals and Its Application to Continuous User Authentication. In Proceedings of the 26th European Signal Processing Conference (EUSIPCO), Rome, Italy, 3–7 September 2018; pp. 1407–1411.
Hash Transformation № | ɸ(y) = 0 | ɸ(y) = 1 | ɸ(y) = 2 | ɸ(y) = 3
---|---|---|---|---
1 | «11» | «00» | «01» | «10» |
2 | «11» | «00» | «10» | «01» |
3 | «11» | «01» | «00» | «10» |
4 | «11» | «01» | «10» | «00» |
5 | «11» | «10» | «00» | «01» |
6 | «11» | «10» | «01» | «00» |
7 | «00» | «01» | «11» | «10» |
8 | «00» | «01» | «10» | «11» |
9 | «00» | «10» | «01» | «11» |
10 | «00» | «10» | «11» | «01» |
11 | «00» | «11» | «10» | «01» |
12 | «00» | «11» | «01» | «10» |
13 | «01» | «00» | «11» | «10» |
14 | «01» | «00» | «10» | «11» |
15 | «01» | «10» | «00» | «11» |
16 | «01» | «10» | «11» | «00» |
17 | «01» | «11» | «10» | «00» |
18 | «01» | «11» | «00» | «10» |
19 | «10» | «00» | «01» | «11» |
20 | «10» | «00» | «11» | «01» |
21 | «10» | «01» | «11» | «00» |
22 | «10» | «01» | «00» | «11» |
23 | «10» | «11» | «00» | «01» |
24 | «10» | «11» | «01» | «00» |
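To illustrate how this table is used: a trained correlation neuron quantizes its response y into ɸ(y) ∈ {0, 1, 2, 3} by the thresholds Tleft, Tmiddle, Tright produced by Algorithm A1, and its assigned hash transformation (one row of the table) maps ɸ(y) to two key bits. A sketch with made-up thresholds and row №1; the left-to-right bin semantics of ɸ(y) is an assumption consistent with Algorithm A1, not the author's code:

```python
def phi(y, t_left, t_middle, t_right):
    """Quantization bin of the neuron response (assumed semantics of phi(y))."""
    if y < t_left:
        return 0
    if y < t_middle:
        return 1
    if y < t_right:
        return 2
    return 3

# Row No. 1 of the table: output codes for phi(y) = 0, 1, 2, 3
HASH_1 = ("11", "00", "01", "10")

def key_bits(y, thresholds, hash_row):
    """Two key bits emitted by one neuron for response y."""
    return hash_row[phi(y, *thresholds)]

print(key_bits(0.2, (-0.5, 0.0, 0.5), HASH_1))  # "01": y falls into bin 2
```

Because each of the 24 rows is a different permutation of the four 2-bit codes, the hash transformation number H acts as a per-neuron secret that decouples the response statistics from the emitted key bits.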
Architecture №1 | | Architecture №2 |
---|---|---|---|
Layer Type | Layer Parameters | Layer Type | Layer Parameters
Input | shape = 2048 | Input | shape = 2048 |
Conv1D | filters = 8, kernel_size = 12, strides = 4, activation = relu, initializer = glorot | Conv1D | filters = 8, kernel_size = 9, strides = 2, activation = relu, initializer = he |
Conv1D | filters = 16, kernel_size = 3, strides = 2, activation = relu, initializer = glorot | Conv1D | filters = 16, kernel_size = 3, strides = 2, activation = relu, initializer = he |
Batch normalization | Batch normalization | ||
Conv1D | filters = 16, kernel_size = 4, strides = 2, activation = relu, initializer = glorot | Conv1D | filters = 16, kernel_size = 4, strides = 2, activation = relu, initializer = he |
Conv1D | filters = 32, kernel_size = 3, strides = 2, activation = relu, initializer = glorot | Conv1D | filters = 32, kernel_size = 3, strides = 2, activation = relu, initializer = he |
Batch normalization | Batch normalization | ||
Conv1D | filters = 32, kernel_size = 3, strides = 2, activation = relu, initializer = glorot | Conv1D | filters = 32, kernel_size = 3, strides = 1,2, activation = relu, initializer = he |
Conv1D | filters = 64, kernel_size = 3, strides = 2, activation = relu, initializer = glorot | Conv1D | filters = 64, kernel_size = 3, strides = 2, activation = relu, initializer = he |
Batch normalization | Batch normalization | ||
Conv1D | filters = 64, kernel_size = 3, strides = 2, activation = relu, initializer = glorot | Conv1D | filters = 64, kernel_size = 3, strides = 2, activation = relu, initializer = he |
Conv1D | filters = 64, kernel_size = 3, strides = 2, activation = relu, initializer = glorot | Conv1D | filters = 64, kernel_size = 3, strides = 2, activation = relu, initializer = he |
Batch normalization | Batch normalization | ||
Conv1D | filters = 128, kernel_size = 3, strides = 2, activation = relu, initializer = glorot | Conv1D | filters = 128, kernel_size = 3, strides = 2, activation = relu, initializer = he |
Conv1D | filters = 256, kernel_size = 3, strides = 2, activation = relu, initializer = glorot | Conv1D | filters = 160, kernel_size = 3, strides = 2, activation = relu, initializer = he |
Fully connected | units = 128, activation = linear, initializer = glorot | Batch normalization | |
Conv1D | filters = 192, kernel_size = 3, strides = 2, activation = relu, initializer = he | ||
Fully connected | units = 128, activation = linear, initializer = glorot |
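A quick sanity check of the encoder geometry of Architecture №1, under the assumption (not stated in the table) that the Conv1D layers use 'same' padding, so each layer divides the sequence length by its stride, rounding up:

```python
import math

# Architecture No. 1 applies strides 4, then 2 nine times, so the 2048-sample
# input collapses to a single 256-channel vector before the 128-unit
# fully connected layer.
length = 2048
for stride in (4, 2, 2, 2, 2, 2, 2, 2, 2, 2):
    length = math.ceil(length / stride)
print(length)  # 1
```

The strides multiply to 4 · 2⁹ = 2048, exactly the input length, which is why the final fully connected layer can be applied to a flat 128-dimensional code.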
Layer Type | Layer Parameters |
---|---|
Input | shape = 128 |
Conv1D Transpose | filters = 160, kernel_size = 8, strides = 4, activation = relu, initializer = glorot |
Conv1D Transpose | filters = 128, kernel_size = 3, strides = 2, activation = relu, initializer = glorot |
Batch normalization | |
Conv1D Transpose | filters = 64, kernel_size = 5, strides = 2, activation = relu, initializer = glorot |
Conv1D Transpose | filters = 32, kernel_size = 3, strides = 2, activation = relu, initializer = glorot |
Batch normalization | |
Conv1D Transpose | filters = 32, kernel_size = 3, strides = 2, activation = relu, initializer = glorot |
Conv1D Transpose | filters = 16, kernel_size = 3, strides = 2, activation = relu, initializer = glorot |
Batch normalization | |
Conv1D Transpose | filters = 8, kernel_size = 3, strides = 2, activation = relu, initializer = glorot |
Conv1D Transpose | filters = 8, kernel_size = 3, strides = 2, activation = relu, initializer = glorot |
Batch normalization | |
Conv1D Transpose | filters = 4, kernel_size = 3, strides = 2, activation = relu, initializer = glorot |
Conv1D Transpose | filters = 1, kernel_size = 3, strides = 2, activation = sigmoid, initializer = glorot |
Type of Spectra | L | N | n | η | EER1 | EER2 |
---|---|---|---|---|---|---|
Blackman | 128 | 64 | 128 | 4 | 0.0863 | 0.08828 |
Hamming | 128 | 64 | 128 | 4 | 0.08747 | 0.07332 |
Triangular | 128 | 64 | 128 | 4 | 0.09133 | 0.08995 |
Rectangular | 128 | 64 | 128 | 4 | 0.08122 | 0.07686 |
Gauss | 128 | 64 | 128 | 4 | 0.09047 | 0.08039 |
Gaussian parametric | 128 | 64 | 128 | 4 | 0.08465 | 0.07629 |
Laplace | 128 | 64 | 128 | 4 | 0.07452 | 0.08589 |
Rectangular + Hamming | 128 | 64 | 256 | 4 | 0.04351 | 0.04272 |
Rectangular + Hamming + triangular | 256 | 128 | 384 | 4 | 0.04245 | 0.04218 |
Rectangular + Hamming + triangular + Laplace + Blackman | 512 | 256 | 640 | 4 | 0.04061 | 0.03812 |
All window functions | 512 | 256 | 896 | 4 | 0.04031 | 0.03574 |
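The combined rows of the table (“Rectangular + Hamming”, …) enlarge the feature dimension n by concatenating the spectra obtained under different window functions. A minimal sketch of this idea; the synthetic signal, frame length, and 128-bin cut are illustrative, not the paper's preprocessing:

```python
import numpy as np

# The same frame is weighted by each window function and the magnitude
# spectra are concatenated into one feature vector.
frame = np.sin(2 * np.pi * 440 * np.arange(2048) / 16000)  # stand-in for an ear-canal echo frame

windows = {
    "rectangular": np.ones(2048),
    "hamming": np.hamming(2048),
    "blackman": np.blackman(2048),
}
spectra = [np.abs(np.fft.rfft(frame * w))[:128] for w in windows.values()]
features = np.concatenate(spectra)  # n grows linearly with the number of windows
print(features.shape)  # (384,)
```

Each window trades main-lobe width against side-lobe leakage differently, so the concatenated spectra are correlated but not redundant, which matches the accuracy gains in the combined rows.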
Type of Spectra | L | N | n | η | EER |
---|---|---|---|---|---|
Hamming | 128 | 64 | 256 | 4 | 0.03591 |
Rectangular + Hamming | 256 | 128 | 512 | 5 | 0.03823
Rectangular + Hamming + triangular + Laplace + Blackman | 512 | 256 | 1280 | 5 | 0.03474
All window functions | 512 | 256 | 1792 | 5 | 0.03955 |
All window functions | 1024 | 512 | 1792 | 5 | 0.0366 |
All window functions | 1024 | 512 | 1792 | 20 | 0.03834 |
All window functions | 2048 | 1024 | 1792 | 5 | 0.03466 |
All window functions | 2048 | 1024 | 1792 | 10 | 0.03635 |
All window functions | 4096 | 2048 | 1792 | 5 | 0.03573 |
All window functions | 4096 | 2048 | 1792 | 10 | 0.04124 |
L | N | n | η | EER | C+ | C- | KG | AUCMAX |
---|---|---|---|---|---|---|---|---|
4096 | 2048 | 3584 | 5 | 0.02865 | 0.5 | −0.5 | 8 | 0.3 |
6144 | 3072 | 3584 | 5 | 0.02811 | 0.5 | −0.5 | 8 | 0.3 |
8192 | 4096 | 3584 | 7 | 0.02956 | 0.5 | −0.5 | 8 | 0.3 |
8192 | 4096 | 3584 | 6 | 0.02584 | 0.5 | −0.5 | 8 | 0.3 |
8192 | 4096 | 3584 | 5 | 0.02552 | 0.5 | −0.5 | 8 | 0.3 |
8192 | 4096 | 3584 | 4 | 0.02561 | 0.5 | −0.5 | 8 | 0.3 |
8192 | 4096 | 3584 | 3 | 0.0273 | 0.5 | −0.5 | 8 | 0.3 |
10,240 | 5120 | 3584 | 5 | 0.02729 | 0.5 | −0.5 | 8 | 0.3 |
12,288 | 6144 | 3584 | 5 | 0.02725 | 0.5 | −0.5 | 8 | 0.3 |
8192 | 4096 | 3584 | 5 | 0.03025 | 0.5 | −0.5 | 8 | 0.4 |
8192 | 4096 | 3584 | 5 | 0.02878 | 0.5 | −0.5 | 8 | 0.35 |
8192 | 4096 | 3584 | 5 | 0.02703 | 0.5 | −0.5 | 8 | 0.25 |
8192 | 4096 | 3584 | 5 | 0.03673 | 0.5 | −0.5 | 8 | 0.2 |
8192 | 4096 | 3584 | 5 | 0.02619 | 0.3 | −0.3 | 8 | 0.3 |
8192 | 4096 | 3584 | 5 | 0.026 | 0.4 | −0.4 | 8 | 0.3 |
8192 | 4096 | 3584 | 5 | 0.02563 | 0.45 | −0.45 | 8 | 0.3 |
8192 | 4096 | 3584 | 5 | 0.02754 | 0.55 | −0.55 | 8 | 0.3 |
8192 | 4096 | 3584 | 5 | 0.02855 | 0.6 | −0.6 | 8 | 0.3 |
8192 | 4096 | 3584 | 5 | 0.03274 | 0.7 | −0.7 | 8 | 0.3 |
8192 | 4096 | 3584 | 5 | 0.02785 | 0.5 | −0.5 | 7 | 0.3 |
8192 | 4096 | 3584 | 5 | 0.0238 | 0.5 | −0.5 | 6 | 0.3 |
8192 | 4096 | 3584 | 5 | 0.03868 | 0.5 | −0.5 | 5 | 0.3 |
8192 | 4096 | 3584 | 5 | 0.04098 | 0.5 | −0.5 | 4 | 0.3 |
Feature Extraction Method | Classification Method | Data Set | Error Probabilities | |
---|---|---|---|---|
Filter of saddle-point method | Determination of the correlation coefficient between the template and the image | Mobile phone: 17 test subjects, 8 images each | 0.055 ≤ EER ≤ 0.18 [24] | |
In-ear headphones: 31 test subjects, 8 images each | 0.01 ≤ EER ≤ 0.06 [24] | |||
On-ear headphones: 31 test subjects, 8 images each | 0.008 ≤ EER ≤ 0.08 [24] | |||
Short-time Fourier transform | k-nearest neighbors | 20 test subjects, 600 images each, 11,900 images in total | 0.05 ≤ FRR ≤ 0.1 0.07 ≤ FAR ≤ 0.15 [25] | |
Decision trees | 0.09 ≤ FRR ≤ 0.152 0.09 ≤ FAR ≤ 0.145 [25] | |||
Naive Bayes | 0.03 ≤ FRR ≤ 0.08 0.09 ≤ FAR ≤ 0.275 [25] | |||
Multilayer perceptron | 0.03 ≤ FRR ≤ 0.098 0.04 ≤ FAR ≤ 0.08 [25] | |||
Support vector machine | 0.04 ≤ FRR ≤ 0.07 0.03 ≤ FAR ≤ 0.07 [25] | |||
Author’s technology EarEcho | EER = 0.0484 [25] | |||
MFCCs, LDA | Cosine similarity | 25 test subjects | EER = 0.0447 [26] | |
Cepstrograms | Naive Bayes | AIC-ears-75 75 test subjects, 15 images for each ear, 2250 images in total | «Genuine» training set of 8 images | EER = 0.0053 FRR = 0.1028 FAR < 0.001 [1] |
Convolutional neural networks | EER = 0.0285 [1] | |||
Average spectrum | Fully connected shallow neural networks | EER = 0.0266 [1] | ||
Average spectrum, training of autoencoders on voice images | Base model of neuro-extractor (GOST R 52633.5) | «Genuine» training set of 8 images | EER = 0.03041 FRR = 0.2288 FAR < 0.001 |
C-neuro-extractor | «Genuine» training set of 6 images | EER = 0.0238 FRR = 0.093 FAR < 0.001 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Sulavko, A. Biometric-Based Key Generation and User Authentication Using Acoustic Characteristics of the Outer Ear and a Network of Correlation Neurons. Sensors 2022, 22, 9551. https://doi.org/10.3390/s22239551