Multi-Block Color-Binarized Statistical Images for Single-Sample Face Recognition
Abstract
1. Introduction
2. Related Work
2.1. Virtual Sample Generating Methods
2.2. Generic Learning Methods
2.3. Image Partitioning Methods
2.4. Deep Learning Methods
3. Proposed Method
3.1. Preprocessing
3.2. MB-C-BSIF-Based Feature Extraction
3.3. K-Nearest Neighbors (K-NN) Classifier
Algorithm 1 SSFR based on MB-C-BSIF and K-NN

Input: Facial image X
1. Apply histogram normalization on X
2. Apply median filtering on X
3. Divide X into three components (red, green, blue): X_R, X_G, X_B
4. for each component c in {R, G, B} do
5.   Divide X_c into L equivalent blocks: X_c^1, ..., X_c^L
6.   for j = 1 to L do
7.     Compute BSIF on the block-component X_c^j: h_c^j = BSIF(X_c^j)
8.   end for
9.   Concatenate the computed MB-BSIF features of the component X_c:
10.  h_c = [h_c^1, ..., h_c^L]
11. end for
12. Concatenate the computed MB-C-BSIF features: h = [h_R, h_G, h_B]
13. Apply K-NN associated with a metric distance
Output: Identification decision
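For concreteness, a minimal Python sketch of Algorithm 1 is given below. It assumes the pretrained ICA filter bank of Kannala and Rahtu [19] is available as a NumPy array `filters` of shape (n, l, l) (e.g., n = 12 bits, l = 17); loading those filters, and the min-max normalization used here as a stand-in for histogram normalization, are illustrative choices rather than part of the original method.

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import convolve2d

def bsif_histogram(block, filters):
    """Normalized histogram of BSIF codes for one block.

    filters: array of shape (n_bits, l, l) holding pretrained ICA filters.
    """
    n_bits = filters.shape[0]
    codes = np.zeros(block.shape, dtype=np.int64)
    for i, f in enumerate(filters):
        response = convolve2d(block, f, mode="same", boundary="wrap")
        codes |= (response > 0).astype(np.int64) << i  # binarize, pack as bit i
    hist, _ = np.histogram(codes, bins=2 ** n_bits, range=(0, 2 ** n_bits))
    return hist / max(hist.sum(), 1)

def mb_c_bsif(image_rgb, filters, grid=4):
    """MB-C-BSIF feature vector: per-channel, per-block BSIF histograms."""
    img = image_rgb.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)  # contrast normalization
    features = []
    for c in range(3):                                  # R, G, B components
        channel = median_filter(img[:, :, c], size=3)   # noise filtering
        h, w = channel.shape
        bh, bw = h // grid, w // grid
        for by in range(grid):                          # grid x grid equivalent blocks
            for bx in range(grid):
                block = channel[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
                features.append(bsif_histogram(block, filters))
    return np.concatenate(features)                     # final MB-C-BSIF descriptor
```

Identification then reduces to nearest-neighbor matching of these vectors; a matching sketch appears at the end of Section 4.2.3.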
4. Experimental Analysis
4.1. Experiments on the AR Database
4.1.1. Database Description
4.1.2. Setups
4.1.3. Experiment #1 (Effects of BSIF Parameters)
4.1.4. Experiment #2 (Effects of Distance)
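The three distances compared in this experiment can be computed directly with SciPy. A minimal sketch follows (the feature vectors are random placeholders, and binarizing at 0.5 for the Hamming distance is an illustrative choice):

```python
import numpy as np
from scipy.spatial import distance

rng = np.random.default_rng(0)
a = rng.random(4096)   # query MB-C-BSIF feature vector (placeholder)
b = rng.random(4096)   # gallery feature vector (placeholder)

d_euclidean = distance.euclidean(a, b)            # L2 norm of a - b
d_cityblock = distance.cityblock(a, b)            # L1 norm; best performer here
d_hamming = distance.hamming(a > 0.5, b > 0.5)    # fraction of differing bits
print(d_hamming, d_euclidean, d_cityblock)
```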
4.1.5. Experiment #3 (Effects of Image Segmentation)
- For the facial-expression subsets, segmentation brings only a small change because the previous experiment's results were already strong (e.g., subsets A, B, D, N, and P). However, accuracy rises from 71% to 76% for subset Q, which is characterized by significant changes in facial expression.
- For the occluded subsets, recognition accuracy increases significantly as the number of blocks grows. For instance, in going from 1 block to 16 blocks (4 × 4), accuracy rises from 31% to 77% for subset Z, from 46% to 79% for subset W, and from 48% to 84% for subset Y.
- In the case of partial occlusion, local information is therefore essential: it captures fine details of the facial structure (nose, eyes, mouth) as well as positional relationships (nose to mouth, eye to eye, and so on).
- Finally, the 4 × 4 segmentation provided the optimal configuration, giving the best accuracy on the facial-expression, sunglasses-occlusion, and scarf-occlusion subsets; a note on the resulting feature dimensionality follows this list.
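The cost of this locality is dimensionality. Assuming a 12-bit BSIF code (4096-bin histograms) over the three color components, the descriptor length grows quadratically with the grid size; a quick check:

```python
# MB-C-BSIF feature length: 3 components x g*g blocks x 2**12 histogram bins
for g in (1, 2, 4):
    print(f"{g} x {g} blocks -> {3 * g * g * 2 ** 12} dimensions")
# 1 x 1 -> 12288, 2 x 2 -> 49152, 4 x 4 -> 196608
```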
4.1.6. Experiment #4 (Effects of Color Texture Information)
- For the facial-expression subsets, results are almost identical across all tested color-spaces: HSV brings a slight improvement, while slight degradations are observed with both RGB and YCbCr.
- For the sunglasses-occlusion subsets, every color-space improves recognition accuracy over the grayscale baseline. RGB performs best, raising the average accuracy from 91.83% to 93.50%.
- For the scarf-occlusion subsets, HSV regresses slightly, whereas both RGB and YCbCr improve on the grayscale baseline; RGB again performs best.
- Most significantly, the RGB color-space markedly improves performance on the V, W, Y, and Z subsets (from 81% to 85% for V; 79% to 84% for W; 84% to 88% for Y; and 77% to 81% for Z). Images in these occluded subsets exhibit lighting degradation (to the right or left, as shown in Figure 6).
- Overall, the optimal color-space, offering the best balance between lighting restoration and identification accuracy, was RGB; an extraction sketch follows this list.
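As an illustration, the sketch below builds the per-channel features in each tested color-space with OpenCV; it reuses the hypothetical `bsif_histogram` helper from the Section 3 sketch and omits block division for brevity, so it is a simplified view of the actual pipeline.

```python
import cv2
import numpy as np

def color_texture_features(image_bgr, filters, space="RGB"):
    """Concatenated per-channel BSIF histograms in a chosen color-space."""
    conversions = {
        "RGB": cv2.COLOR_BGR2RGB,
        "HSV": cv2.COLOR_BGR2HSV,
        "YCbCr": cv2.COLOR_BGR2YCrCb,  # OpenCV orders the channels as Y, Cr, Cb
    }
    converted = cv2.cvtColor(image_bgr, conversions[space])
    return np.concatenate([bsif_histogram(ch.astype(np.float64), filters)
                           for ch in cv2.split(converted)])
```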
4.1.7. Comparison #1 (Protocol I)
4.1.8. Comparison #2 (Protocol II)
- The BSIF descriptor scans the image pixel by pixel, i.e., we exploit local information.
- The image is decomposed into several blocks, i.e., we exploit regional information.
- The BSIF code occurrences are accumulated in a global histogram, i.e., we manipulate global information.
- The MB-BSIF is applied to all three RGB image components, i.e., we exploit color texture information.
Comparison with state-of-the-art methods on the AR database (Protocol II):

Authors | Year | Method | Occlusion (H + K) (%) | Lighting + Occlusion (J + M) (%) | Average Accuracy (%)
---|---|---|---|---|---|
Zhang et al. [35] | 2011 | CRC | 58.10 | 23.80 | 40.95 |
Deng et al. [31] | 2012 | ESRC | 83.10 | 68.60 | 75.85 |
Zhu et al. [34] | 2012 | PCRC | 95.60 | 81.30 | 88.45 |
Yang et al. [32] | 2013 | SVDL | 86.30 | 79.40 | 82.85 |
Lu et al. [36] | 2012 | DMMA | 46.90 | 30.90 | 38.90 |
Zhu et al. [33] | 2014 | LGR | 98.80 | 96.30 | 97.55 |
Ref. [67] | 2016 | SeetaFace | 63.13 | 55.63 | 59.39 |
Zeng et al. [41] | 2017 | DCNN | 96.50 | 88.30 | 92.20 |
Chu et al. [65] | 2019 | MFSA+ | 91.30 | 79.00 | 85.20 |
Cuculo et al. [68] | 2019 | SSLD | 90.18 | 82.02 | 86.10 |
Zhang et al. [39] | 2020 | DNNC | 92.50 | 79.50 | 86.00 |
Du and Da [44] | 2020 | BDL | 93.03 | 91.55 | 92.29 |
Our method | 2021 | MB-C-BSIF | 99.50 | 98.50 | 99.00 |
4.2. Experiments on the LFW Database
4.2.1. Database Description
4.2.2. Experimental Protocol
4.2.3. Limitations of SSFR Systems
- BSIF descriptor with filter size 17 × 17 and bit string length n = 12.
- K-NN classifier associated with the city-block distance (a matching sketch follows this list).
- Segmentation of the image into blocks of 40 × 40 and 20 × 20 pixels.
- RGB color-space.
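Because each enrolled identity contributes a single template, identification amounts to a 1-nearest-neighbor search under the city-block (L1) distance. A minimal scikit-learn sketch (gallery and probe arrays are placeholders):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
gallery = rng.random((100, 12288))    # one MB-C-BSIF vector per identity (placeholder)
labels = np.arange(100)               # identity of each gallery template
probes = rng.random((10, 12288))      # features of the test images (placeholder)

knn = KNeighborsClassifier(n_neighbors=1, metric="cityblock")
knn.fit(gallery, labels)
decisions = knn.predict(probes)       # identification decisions
```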
5. Conclusions and Perspectives
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Alay, N.; Al-Baity, H.H. Deep Learning Approach for Multimodal Biometric Recognition System Based on Fusion of Iris, Face, and Finger Vein Traits. Sensors 2020, 20, 5523.
- Pagnin, E.; Mitrokotsa, A. Privacy-Preserving Biometric Authentication: Challenges and Directions. Secur. Commun. Netw. 2017, 2017, 1–9.
- Mahfouz, A.; Mahmoud, T.M.; Sharaf Eldin, A. A Survey on Behavioral Biometric Authentication on Smartphones. J. Inf. Secur. Appl. 2017, 37, 28–37.
- Ferrara, M.; Cappelli, R.; Maltoni, D. On the Feasibility of Creating Double-Identity Fingerprints. IEEE Trans. Inf. Forensics Secur. 2017, 12, 892–900.
- Thompson, J.; Flynn, P.; Boehnen, C.; Santos-Villalobos, H. Assessing the Impact of Corneal Refraction and Iris Tissue Non-Planarity on Iris Recognition. IEEE Trans. Inf. Forensics Secur. 2019, 14, 2102–2112.
- Benzaoui, A.; Bourouba, H.; Boukrouche, A. System for Automatic Faces Detection. In Proceedings of the 3rd International Conference on Image Processing, Theory, Tools, and Applications (IPTA), Istanbul, Turkey, 15–18 October 2012; pp. 354–358.
- Phillips, P.J.; Flynn, P.J.; Scruggs, T.; Bowyer, K.W.; Chang, J.; Hoffman, K.; Marques, J.; Min, J.; Worek, W. Overview of the Face Recognition Grand Challenge. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–26 June 2005; pp. 947–954.
- Femmam, S.; M'Sirdi, N.K.; Ouahabi, A. Perception and Characterization of Materials Using Signal Processing Techniques. IEEE Trans. Instrum. Meas. 2001, 50, 1203–1211.
- Ring, T. Humans vs Machines: The Future of Facial Recognition. Biom. Technol. Today 2016, 4, 5–8.
- Phillips, P.J.; Yates, A.N.; Hu, Y.; Hahn, A.C.; Noyes, E.; Jackson, K.; Cavazos, J.G.; Jeckeln, G.; Ranjan, R.; Sankaranarayanan, S.; et al. Face Recognition Accuracy of Forensic Examiners, Superrecognizers, and Face Recognition Algorithms. Proc. Natl. Acad. Sci. USA 2018, 115, 6171–6176.
- Kortli, Y.; Jridi, M.; Al Falou, A.; Atri, M. Face Recognition Systems: A Survey. Sensors 2020, 20, 342.
- Ouahabi, A.; Taleb-Ahmed, A. Deep Learning for Real-Time Semantic Segmentation: Application in Ultrasound Imaging. Pattern Recognit. Lett. 2021, 144, 27–34.
- Rahman, J.U.; Chen, Q.; Yang, Z. Additive Parameter for Deep Face Recognition. Commun. Math. Stat. 2019, 8, 203–217.
- Fan, Z.; Jamil, M.; Sadiq, M.T.; Huang, X.; Yu, X. Exploiting Multiple Optimizers with Transfer Learning Techniques for the Identification of COVID-19 Patients. J. Healthc. Eng. 2020, 2020, 1–13.
- Benzaoui, A.; Boukrouche, A. Ear Recognition Using Local Color Texture Descriptors from One Sample Image per Person. In Proceedings of the 4th International Conference on Control, Decision and Information Technologies (CoDIT), Barcelona, Spain, 5–7 April 2017; pp. 827–832.
- Vapnik, V.N.; Chervonenkis, A. Learning Theory and Its Applications. IEEE Trans. Neural Netw. 1999, 10, 985–987.
- Vezzetti, E.; Marcolin, F.; Tornincasa, S.; Ulrich, L.; Dagnes, N. 3D Geometry-Based Automatic Landmark Localization in Presence of Facial Occlusions. Multimed. Tools Appl. 2017, 77, 14177–14205.
- Echeagaray-Patron, B.A.; Miramontes-Jaramillo, D.; Kober, V. Conformal Parameterization and Curvature Analysis for 3D Facial Recognition. In Proceedings of the 2015 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 7–9 December 2015; pp. 843–844.
- Kannala, J.; Rahtu, E. BSIF: Binarized Statistical Image Features. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR), Tsukuba, Japan, 11–15 November 2012; pp. 1363–1366.
- Djeddi, M.; Ouahabi, A.; Batatia, H.; Basarab, A.; Kouamé, D. Discrete Wavelet for Multifractal Texture Classification: Application to Medical Ultrasound Imaging. In Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 637–640.
- Ouahabi, A. Multifractal Analysis for Texture Characterization: A New Approach Based on DWT. In Proceedings of the 10th International Conference on Information Science, Signal Processing and their Applications (ISSPA 2010), Kuala Lumpur, Malaysia, 10–13 May 2010; pp. 698–703.
- Ouahabi, A. Signal and Image Multiresolution Analysis, 1st ed.; ISTE-Wiley: London, UK, 2012.
- Ouahabi, A. A Review of Wavelet Denoising in Medical Imaging. In Proceedings of the 8th International Workshop on Systems, Signal Processing and their Applications (WoSSPA), Tipaza, Algeria, 12–15 May 2013; pp. 19–26.
- Sidahmed, S.; Messali, Z.; Ouahabi, A.; Trépout, S.; Messaoudi, C.; Marco, S. Nonparametric Denoising Methods Based on Contourlet Transform with Sharp Frequency Localization: Application to Low Exposure Time Electron Microscopy Images. Entropy 2015, 17, 3461–3478.
- Kumar, N.; Garg, V. Single Sample Face Recognition in the Last Decade: A Survey. Int. J. Pattern Recognit. Artif. Intell. 2019, 33, 1956009.
- Vetter, T. Synthesis of Novel Views from a Single Face Image. Int. J. Comput. Vis. 1998, 28, 103–116.
- Zhang, D.; Chen, S.; Zhou, Z.H. A New Face Recognition Method Based on SVD Perturbation for Single Example Image per Person. Appl. Math. Comput. 2005, 163, 895–907.
- Gao, Q.X.; Zhang, L.; Zhang, D. Face Recognition Using FLDA with Single Training Image per Person. Appl. Math. Comput. 2008, 205, 726–734.
- Hu, C.; Ye, M.; Ji, S.; Zeng, W.; Lu, X. A New Face Recognition Method Based on Image Decomposition for Single Sample per Person Problem. Neurocomputing 2015, 160, 287–299.
- Dong, X.; Wu, F.; Jing, X.Y. Generic Training Set Based Multimanifold Discriminant Learning for Single Sample Face Recognition. KSII Trans. Internet Inf. Syst. 2018, 12, 368–391.
- Deng, W.; Hu, J.; Guo, J. Extended SRC: Undersampled Face Recognition via Intraclass Variant Dictionary. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1864–1870.
- Yang, M.; Van Gool, L.; Zhang, L. Sparse Variation Dictionary Learning for Face Recognition with a Single Training Sample per Person. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Sydney, Australia, 1–8 December 2013; pp. 689–696.
- Zhu, P.; Yang, M.; Zhang, L.; Lee, L. Local Generic Representation for Face Recognition with Single Sample per Person. In Proceedings of the Asian Conference on Computer Vision (ACCV), Singapore, 1–5 November 2014; pp. 34–50.
- Zhu, P.; Zhang, L.; Hu, Q.; Shiu, S.C.K. Multi-Scale Patch Based Collaborative Representation for Face Recognition with Margin Distribution Optimization. In Proceedings of the European Conference on Computer Vision (ECCV), Florence, Italy, 7–13 October 2012; pp. 822–835.
- Zhang, L.; Yang, M.; Feng, X. Sparse Representation or Collaborative Representation: Which Helps Face Recognition? In Proceedings of the International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 471–478.
- Lu, J.; Tan, Y.P.; Wang, G. Discriminative Multimanifold Analysis for Face Recognition from a Single Training Sample per Person. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 39–51.
- Zhang, W.; Xu, Z.; Wang, Y.; Lu, Z.; Li, W.; Liao, Q. Binarized Features with Discriminant Manifold Filters for Robust Single-Sample Face Recognition. Signal Process. Image Commun. 2018, 65, 1–10.
- Gu, J.; Hu, H.; Li, H. Local Robust Sparse Representation for Face Recognition with Single Sample per Person. IEEE/CAA J. Autom. Sin. 2018, 5, 547–554.
- Zhang, Z.; Zhang, L.; Zhang, M. Dissimilarity-Based Nearest Neighbor Classifier for Single-Sample Face Recognition. Vis. Comput. 2020, 1–12.
- Mimouna, A.; Alouani, I.; Ben Khalifa, A.; El Hillali, Y.; Taleb-Ahmed, A.; Menhaj, A.; Ouahabi, A.; Ben Amara, N.E. OLIMP: A Heterogeneous Multimodal Dataset for Advanced Environment Perception. Electronics 2020, 9, 560.
- Zeng, J.; Zhao, X.; Qin, C.; Lin, Z. Single Sample per Person Face Recognition Based on Deep Convolutional Neural Network. In Proceedings of the 3rd IEEE International Conference on Computer and Communications (ICCC), Chengdu, China, 13–16 December 2017; pp. 1647–1651.
- Ding, C.; Bao, T.; Karmoshi, S.; Zhu, M. Single Sample per Person Face Recognition with KPCANet and a Weighted Voting Scheme. Signal Image Video Process. 2017, 11, 1213–1220.
- Zhang, Y.; Peng, H. Sample Reconstruction with Deep Autoencoder for One Sample per Person Face Recognition. IET Comput. Vis. 2018, 11, 471–478.
- Du, Q.; Da, F. Block Dictionary Learning-Driven Convolutional Neural Networks for Few-Shot Face Recognition. Vis. Comput. 2020, 1–10.
- Stone, J.V. Independent Component Analysis: An Introduction. Trends Cogn. Sci. 2002, 6, 59–64.
- Ataman, E.; Aatre, V.; Wong, K. A Fast Method for Real-Time Median Filtering. IEEE Trans. Acoust. Speech Signal Process. 1980, 28, 415–421.
- Benzaoui, A.; Hadid, A.; Boukrouche, A. Ear Biometric Recognition Using Local Texture Descriptors. J. Electron. Imaging 2014, 23, 053008.
- Zehani, S.; Ouahabi, A.; Oussalah, M.; Mimi, M.; Taleb-Ahmed, A. Bone Microarchitecture Characterization Based on Fractal Analysis in Spatial Frequency Domain Imaging. Int. J. Imaging Syst. Technol. 2020, 1–19.
- Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987.
- Ojansivu, V.; Heikkilä, J. Blur Insensitive Texture Classification Using Local Phase Quantization. In Proceedings of the 3rd International Conference on Image and Signal Processing (ICSIP), Paris, France, 7–8 July 2012; pp. 236–243.
- Martinez, A.M.; Benavente, R. The AR Face Database. CVC Tech. Rep. 1998, 24, 1–10.
- Huang, G.B.; Mattar, M.; Berg, T.; Learned-Miller, E. Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments; Technical Report 07-49; University of Massachusetts: Amherst, MA, USA, 2007; pp. 7–49.
- Mehrasa, N.; Ali, A.; Homayun, M. A Supervised Multimanifold Method with Locality Preserving for Face Recognition Using Single Sample per Person. J. Cent. South Univ. 2017, 24, 2853–2861.
- Ji, H.K.; Sun, Q.S.; Ji, Z.X.; Yuan, Y.H.; Zhang, G.Q. Collaborative Probabilistic Labels for Face Recognition from Single Sample per Person. Pattern Recognit. 2017, 62, 125–134.
- Turk, M.; Pentland, A. Eigenfaces for Recognition. J. Cogn. Neurosci. 1991, 3, 71–86.
- Wu, J.; Zhou, Z.H. Face Recognition with One Training Image per Person. Pattern Recognit. Lett. 2002, 23, 1711–1719.
- Chen, S.; Zhang, D.; Zhou, Z.H. Enhanced (PC)2A for Face Recognition with One Training Image per Person. Pattern Recognit. Lett. 2004, 25, 1173–1181.
- Yang, J.; Zhang, D.; Frangi, A.F.; Yang, J.Y. Two-Dimensional PCA: A New Approach to Appearance-Based Face Representation and Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 131–137.
- Gottumukkal, R.; Asari, V.K. An Improved Face Recognition Technique Based on Modular PCA Approach. Pattern Recognit. Lett. 2004, 25, 429–436.
- Chen, S.; Liu, J.; Zhou, Z.H. Making FLDA Applicable to Face Recognition with One Sample per Person. Pattern Recognit. 2004, 37, 1553–1555.
- Zhang, D.; Zhou, Z.H. (2D)2PCA: Two-Directional Two-Dimensional PCA for Efficient Face Representation and Recognition. Neurocomputing 2005, 69, 224–231.
- Tan, X.; Chen, S.; Zhou, Z.H.; Zhang, F. Recognizing Partially Occluded, Expression Variant Faces from Single Training Image per Person with SOM and Soft K-NN Ensemble. IEEE Trans. Neural Netw. 2005, 16, 875–886.
- He, X.; Yan, S.; Hu, Y.; Niyogi, P.; Zhang, H.J. Face Recognition Using Laplacian Faces. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 328–340.
- Deng, W.; Hu, J.; Guo, J.; Cai, W.; Fenf, D. Robust, Accurate and Efficient Face Recognition from a Single Training Image: A Uniform Pursuit Approach. Pattern Recognit. 2010, 43, 1748–1762.
- Chu, Y.; Zhao, L.; Ahmad, T. Multiple Feature Subspaces Analysis for Single Sample per Person Face Recognition. Vis. Comput. 2019, 35, 239–256.
- Pang, M.; Cheung, Y.; Wang, B.; Liu, R. Robust Heterogeneous Discriminative Analysis for Face Recognition with Single Sample per Person. Pattern Recognit. 2019, 89, 91–107.
- SeetaFaceEngine. 2016. Available online: https://github.com/seetaface/SeetaFaceEngine (accessed on 1 September 2020).
- Cuculo, V.; D'Amelio, A.; Grossi, G.; Lanzarotti, R.; Lin, J. Robust Single-Sample Face Recognition by Sparsity-Driven Sub-Dictionary Learning Using Deep Features. Sensors 2019, 19, 146.
- Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust Face Recognition via Sparse Representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–227.
- Su, Y.; Shan, S.; Chen, X.; Gao, W. Adaptive Generic Learning for Face Recognition from a Single Sample per Person. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 2699–2706.
- Zhou, D.; Yang, D.; Zhang, X.; Huang, S.; Feng, S. Discriminative Probabilistic Latent Semantic Analysis with Application to Single Sample Face Recognition. Neural Process. Lett. 2019, 49, 1273–1298.
- Zeng, J.; Zhao, X.; Gan, J.; Mai, C.; Zhai, Y.; Wang, F. Deep Convolutional Neural Network Used in Single Sample per Person Face Recognition. Comput. Intell. Neurosci. 2018, 2018, 1–11.
- Adjabi, I.; Ouahabi, A.; Benzaoui, A.; Taleb-Ahmed, A. Past, Present, and Future of Face Recognition: A Review. Electronics 2020, 9, 1188.
- Sadiq, M.T.; Yu, X.; Yuan, Z.; Aziz, M.Z. Motor Imagery BCI Classification Based on Novel Two-Dimensional Modelling in Empirical Wavelet Transform. Electron. Lett. 2020, 56, 1367–1369.
- Sadiq, M.T.; Yu, X.; Yuan, Z.; Fan, Z.; Rehman, A.U.; Li, G.; Xiao, G. Motor Imagery EEG Signals Classification Based on Mode Amplitude and Frequency Components Using Empirical Wavelet Transform. IEEE Access 2019, 7, 127678–127692.
- Sadiq, M.T.; Yu, X.; Yuan, Z. Exploiting Dimensionality Reduction and Neural Network Techniques for the Development of Expert Brain-Computer Interfaces. Expert Syst. Appl. 2021, 164, 114031.
- Khaldi, Y.; Benzaoui, A. A New Framework for Grayscale Ear Images Recognition Using Generative Adversarial Networks under Unconstrained Conditions. Evol. Syst. 2020.
- Nguyen, B.P.; Tay, W.L.; Chui, C.K. Robust Biometric Recognition from Palm Depth Images for Gloved Hands. IEEE Trans. Hum. Mach. Syst. 2015, 45, 799–804.
Effect of BSIF parameters (filter size, bit length n) on the facial-expression subsets of the AR database (accuracy, %):

Filter Size | Bit Length (n) | B | C | D | N | O | P | Q | Average Accuracy (%)
---|---|---|---|---|---|---|---|---|---|
3 × 3 | 5 | 70 | 72 | 38 | 36 | 20 | 24 | 14 | 39.14 |
5 × 5 | 9 | 94 | 97 | 59 | 75 | 60 | 66 | 30 | 68.71 |
9 × 9 | 12 | 100 | 100 | 91 | 95 | 90 | 92 | 53 | 88.71 |
11 × 11 | 8 | 97 | 99 | 74 | 85 | 70 | 75 | 43 | 77.57 |
15 × 15 | 12 | 100 | 100 | 96 | 97 | 96 | 96 | 73 | 94.00 |
17 × 17 | 12 | 100 | 100 | 98 | 97 | 96 | 97 | 71 | 94.14 |
Effect of BSIF parameters on the sunglasses-occlusion subsets (accuracy, %):

Filter Size | Bit Length (n) | H | I | J | U | V | W | Average Accuracy (%)
---|---|---|---|---|---|---|---|---|
3 × 3 | 5 | 29 | 8 | 4 | 12 | 4 | 3 | 10.00 |
5 × 5 | 9 | 70 | 24 | 14 | 28 | 14 | 8 | 26.50 |
9 × 9 | 12 | 98 | 80 | 61 | 80 | 38 | 30 | 61.50 |
11 × 11 | 8 | 78 | 34 | 23 | 48 | 26 | 15 | 37.33 |
15 × 15 | 12 | 100 | 84 | 85 | 87 | 50 | 46 | 75.33 |
17 × 17 | 12 | 100 | 91 | 87 | 89 | 58 | 46 | 78.50 |
Effect of BSIF parameters on the scarf-occlusion subsets (accuracy, %):

Filter Size | Bit Length (n) | K | L | M | X | Y | Z | Average Accuracy (%)
---|---|---|---|---|---|---|---|---|
3 × 3 | 5 | 7 | 4 | 2 | 3 | 2 | 2 | 3.33 |
5 × 5 | 9 | 22 | 9 | 6 | 12 | 6 | 2 | 9.50 |
9 × 9 | 12 | 88 | 54 | 34 | 52 | 31 | 15 | 45.67 |
11 × 11 | 8 | 52 | 12 | 90 | 22 | 9 | 7 | 32.00 |
15 × 15 | 12 | 97 | 69 | 64 | 79 | 48 | 37 | 65.67 |
17 × 17 | 12 | 98 | 80 | 63 | 90 | 48 | 31 | 68.33 |
Effect of the matching distance on the facial-expression subsets (accuracy, %):

Distance | B | C | D | N | O | P | Q | Average Accuracy (%)
---|---|---|---|---|---|---|---|---|
Hamming | 63 | 79 | 9 | 69 | 23 | 40 | 6 | 41.29 |
Euclidean | 99 | 100 | 80 | 90 | 83 | 82 | 43 | 82.43 |
City block | 100 | 100 | 98 | 97 | 96 | 97 | 71 | 94.14 |
Effect of the matching distance on the sunglasses-occlusion subsets (accuracy, %):

Distance | H | I | J | U | V | W | Average Accuracy (%)
---|---|---|---|---|---|---|---|
Hamming | 37 | 5 | 6 | 11 | 4 | 2 | 10.83 |
Euclidean | 96 | 68 | 42 | 68 | 31 | 17 | 53.67 |
City block | 100 | 91 | 87 | 89 | 58 | 46 | 78.50 |
Effect of the matching distance on the scarf-occlusion subsets (accuracy, %):

Distance | K | L | M | X | Y | Z | Average Accuracy (%)
---|---|---|---|---|---|---|---|
Hamming | 34 | 5 | 8 | 20 | 4 | 4 | 12.50 |
Euclidean | 79 | 32 | 16 | 41 | 22 | 5 | 32.50 |
City block | 98 | 80 | 63 | 90 | 48 | 31 | 68.33 |
Effect of image segmentation (block grid) on the facial-expression subsets (accuracy, %):

Segmentation | B | C | D | N | O | P | Q | Average Accuracy (%)
---|---|---|---|---|---|---|---|---|
(1 × 1) | 100 | 100 | 98 | 97 | 96 | 97 | 71 | 94.14 |
(2 × 2) | 100 | 100 | 95 | 98 | 92 | 91 | 60 | 90.86 |
(4 × 4) | 100 | 100 | 99 | 98 | 92 | 97 | 76 | 94.57 |
Effect of image segmentation on the sunglasses-occlusion subsets (accuracy, %):

Segmentation | H | I | J | U | V | W | Average Accuracy (%)
---|---|---|---|---|---|---|---|
(1 × 1) | 100 | 91 | 87 | 89 | 58 | 46 | 78.50 |
(2 × 2) | 100 | 99 | 98 | 91 | 83 | 71 | 90.33 |
(4 × 4) | 100 | 99 | 99 | 93 | 81 | 79 | 91.83 |
Effect of image segmentation on the scarf-occlusion subsets (accuracy, %):

Segmentation | K | L | M | X | Y | Z | Average Accuracy (%)
---|---|---|---|---|---|---|---|
(1 × 1) | 98 | 80 | 63 | 90 | 48 | 31 | 68.33 |
(2 × 2) | 98 | 95 | 92 | 92 | 79 | 72 | 88.00 |
(4 × 4) | 99 | 98 | 95 | 93 | 84 | 77 | 91.00 |
Effect of color texture information on the facial-expression subsets (accuracy, %):

Color-Space | B | C | D | N | O | P | Q | Average Accuracy (%)
---|---|---|---|---|---|---|---|---|
Gray Scale | 100 | 100 | 99 | 98 | 92 | 97 | 76 | 94.57 |
RGB | 100 | 100 | 95 | 97 | 92 | 93 | 67 | 92.00 |
HSV | 100 | 100 | 99 | 97 | 96 | 95 | 77 | 94.86 |
YCbCr | 100 | 100 | 96 | 98 | 93 | 93 | 73 | 93.29 |
Effect of color texture information on the sunglasses-occlusion subsets (accuracy, %):

Color-Space | H | I | J | U | V | W | Average Accuracy (%)
---|---|---|---|---|---|---|---|
Gray Scale | 100 | 99 | 99 | 93 | 81 | 79 | 91.83 |
RGB | 100 | 99 | 100 | 93 | 85 | 84 | 93.50 |
HSV | 100 | 97 | 99 | 96 | 82 | 80 | 92.33 |
YCbCr | 100 | 99 | 98 | 93 | 81 | 80 | 91.83 |
Effect of color texture information on the scarf-occlusion subsets (accuracy, %):

Color-Space | K | L | M | X | Y | Z | Average Accuracy (%)
---|---|---|---|---|---|---|---|
Gray Scale | 99 | 98 | 95 | 93 | 84 | 77 | 91.00 |
RGB | 99 | 97 | 97 | 94 | 88 | 81 | 92.67 |
HSV | 99 | 96 | 90 | 95 | 75 | 74 | 88.17 |
YCbCr | 98 | 98 | 96 | 93 | 87 | 78 | 91.67 |
Comparison with state-of-the-art methods on the AR database (Protocol I, accuracy, %):

Authors | Year | Method | B | C | D | N | O | P | Average Accuracy (%)
---|---|---|---|---|---|---|---|---|---|
Turk, Pentland [55] | 1991 | PCA | 97.00 | 87.00 | 60.00 | 77.00 | 76.00 | 67.00 | 77.33 |
Wu and Zhou [56] | 2002 | (PC)2A | 97.00 | 87.00 | 62.00 | 77.00 | 74.00 | 67.00 | 77.33 |
Chen et al. [57] | 2004 | E(PC)2A | 97.00 | 87.00 | 63.00 | 77.00 | 75.00 | 68.00 | 77.83 |
Yang et al. [58] | 2004 | 2DPCA | 97.00 | 87.00 | 60.00 | 76.00 | 76.00 | 67.00 | 77.17 |
Gottumukkal and Asari [59] | 2004 | Block-PCA | 97.00 | 87.00 | 60.00 | 77.00 | 76.00 | 67.00 | 77.33 |
Chen et al. [60] | 2004 | Block-LDA | 85.00 | 79.00 | 29.00 | 73.00 | 59.00 | 59.00 | 64.00 |
Zhang and Zhou [61] | 2005 | (2D)2PCA | 98.00 | 89.00 | 60.00 | 71.00 | 76.00 | 66.00 | 76.70 |
Tan et al. [62] | 2005 | SOM | 98.00 | 88.00 | 64.00 | 73.00 | 77.00 | 70.00 | 78.30 |
He et al. [63] | 2005 | LPP | 94.00 | 87.00 | 36.00 | 86.00 | 74.00 | 78.00 | 75.83 |
Zhang et al. [27] | 2005 | SVD-LDA | 73.00 | 75.00 | 29.00 | 75.00 | 56.00 | 58.00 | 61.00 |
Deng et al. [64] | 2010 | UP | 98.00 | 88.00 | 59.00 | 77.00 | 74.00 | 66.00 | 77.00 |
Lu et al. [36] | 2012 | DMMA | 99.00 | 93.00 | 69.00 | 88.00 | 85.00 | 79.00 | 85.50 |
Mehrasa et al. [53] | 2017 | SLPMM | 99.00 | 94.00 | 65.00 | - - | - - | - - | - - |
Ji et al. [54] | 2017 | CPL | 92.22 | 88.06 | 83.61 | 83.59 | 77.95 | 72.82 | 83.04 |
Zhang et al. [37] | 2018 | DMF | 100.00 | 99.00 | 66.00 | - - | - - | - - | - - |
Chu et al. [65] | 2019 | MFSA+ | 100.00 | 100.00 | 74.00 | 93.00 | 85.00 | 86.00 | 89.66 |
Pang et al. [66] | 2019 | RHDA | 97.08 | 97.00 | 96.25 | - - | - - | - - | - - |
Zhang et al. [39] | 2020 | DNNC | 100.00 | 98.00 | 69.00 | 92.00 | 76.00 | 85.00 | 86.67 |
Our method | 2021 | MB-C-BSIF | 100.00 | 100.00 | 95.00 | 97.00 | 92.00 | 93.00 | 96.17 |
Authors | Year | Method | Accuracy (%) |
---|---|---|---|
Chen et al. [60] | 2004 | Block LDA | 16.40 |
Zhang et al. [27] | 2005 | SVD-FLDA | 15.50 |
Wright et al. [69] | 2009 | SRC | 20.40 |
Su et al. [70] | 2010 | AGL | 19.20 |
Zhang et al. [35] | 2011 | CRC | 19.80 |
Deng et al. [31] | 2012 | ESRC | 27.30 |
Zhu et al. [34] | 2012 | PCRC | 24.20 |
Yang et al. [32] | 2013 | SVDL | 28.60 |
Lu et al. [36] | 2012 | DMMA | 17.80 |
Zhu et al. [33] | 2014 | LGR | 30.40 |
Ji et al. [54] | 2017 | CPL | 25.20 |
Dong et al. [30] | 2018 | KNNMMDL | 32.30 |
Chu et al. [65] | 2019 | MFSA+ | 26.23 |
Pang et al. [66] | 2019 | RHDA | 32.89 |
Zhou et al. [71] | 2019 | DpLSA | 37.55 |
Our method | 2021 | MB-C-BSIF | 38.01 |
Parkhi et al. [12] | 2015 | Deep-Face | 62.63 |
Zeng et al. [72] | 2018 | TDL | 74.00 |
Share and Cite
Adjabi, I.; Ouahabi, A.; Benzaoui, A.; Jacques, S. Multi-Block Color-Binarized Statistical Images for Single-Sample Face Recognition. Sensors 2021, 21, 728. https://doi.org/10.3390/s21030728