A New Approach to Recognize Faces Amidst Challenges: Fusion Between the Opposite Frequencies of the Multi-Resolution Features
Abstract
1. Introduction
- We proposed a new image fusion combination that pairs opposite-frequency components: discrete wavelet transform sub-bands fused with Gaussian-filtered or Difference-of-Gaussian images. The proposed method requires only a single input image to produce the opposite-frequency components, which removes the need for inputs from multiple sources and satisfies the requirement that fused features be as complementary as possible.
- We examined the effects of Laplacian pyramid image fusion and discrete wavelet transform image fusion to improve recognition performance.
- We investigated the variation of parameters inside the methods, such as the wavelet family employed in the multi-resolution analysis with discrete wavelet transform, and the wavelet family and decomposition level employed in discrete wavelet transform image fusion.
- Because many other combinations are possible inside our proposed method, including pixel-level and feature-level fusion of the features, we showed that our chosen combination outperformed the alternatives in recognition performance.
- The classification methods were compared using a support vector machine with linear, quadratic, and cubic kernels; nearest neighbor; and neural network.
- Our proposed method was tested against challenges from six face datasets (four datasets and two sub-datasets) and compared with other image fusion and non-fusion methods.
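The opposite-frequency pairing described above can be sketched as follows. This is a minimal illustration, assuming PyWavelets and SciPy; the Haar wavelet, the Gaussian sigmas, and the simple stride-2 downsampling are illustrative choices, not the paper's exact settings:

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def opposite_frequency_pairs(img, wavelet="haar", sigma=2.0):
    """Decompose one image into DWT sub-bands and pair each sub-band with an
    opposite-frequency counterpart derived from the SAME image."""
    img = img.astype(float)
    # One-level DWT: A is the low-frequency sub-band; H, V, D are high-frequency details.
    A, (H, V, D) = pywt.dwt2(img, wavelet)
    # Low-frequency counterpart: Gaussian-filtered image, downsampled to sub-band size.
    G = gaussian_filter(img, sigma)[::2, ::2]
    # High-frequency counterpart: Difference of Gaussian, downsampled the same way.
    dog = (gaussian_filter(img, sigma) - gaussian_filter(img, 2 * sigma))[::2, ::2]
    # Each pair combines a low-frequency component with a high-frequency one.
    return {"AL": (A, dog), "HG": (H, G), "VG": (V, G), "DG": (D, G)}

# Example on an AT&T-sized (92 x 112) grayscale image.
pairs = opposite_frequency_pairs(np.random.rand(112, 92))
print({name: (a.shape, b.shape) for name, (a, b) in pairs.items()})
```

Each returned pair is then passed to an image fusion step (LP-IF or DWT/IDWT-IF) to produce the fused coefficients AL, HG, VG, and DG.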
2. Materials and Methods
2.1. Face Datasets
- The Database of Faces (AT&T) [40]: The Database of Faces (AT&T Laboratories, Cambridge), formerly known as The ORL Database of Faces, consists of 400 face images from 40 people. The images are grayscale, 92 × 112 pixels each. The database has variations in lighting conditions and facial expressions, and some subjects appear with and without glasses. Several people were photographed at different times. Each image has a dark background and was taken from the front in an upright position. Figure 1a displays examples of images from this database.
- The BeautyREC Dataset (BeautyREC) [41]: The BeautyREC Dataset is a face makeup transfer dataset containing 3000 color images of various people. It is worth noting that this dataset is imbalanced. Each image is 512 × 512 pixels; we resized the images to 100 × 100 pixels and converted them to grayscale. The makeup transfer method is detailed in [41]. The dataset encompasses various makeup styles, face poses, and races. Figure 1b shows examples of images from this dataset.
- The Extended Yale B (EYB) Face dataset [42,43] contains 2414 frontal face images of 38 people, with variations in pose and lighting conditions. In this research, we employed only 58 images from each person, excluding the extremely dark images. Each image is 168 × 192 pixels in grayscale. Figure 1c displays examples of images from this dataset.
- To challenge the method with dark images, we separated four images per person from the EYB Face dataset to create the EYB-Dark subset. The four images were chosen randomly but had to satisfy one strict rule: one image under a slight illumination variation and three dark images. This sub-dataset therefore totals 38 people × 4 images = 152 images. The image size is the same as in the EYB Face dataset, and all images are grayscale. Figure 1d shows examples of images from this dataset.
- The FEI Face Database (FEI) [44]: The FEI Face Database contains 2800 images of 200 people, with 14 images per person. Each image is a color image captured in an upright frontal position with a 180-degree rotation of the profile against a uniform white background. The scale may differ by approximately 10%; each image was originally 640 × 480 pixels. The database is a collection of face images of students and employees, aged 19 to 40, from the Artificial Intelligence Laboratory of FEI in São Bernardo do Campo, São Paulo, Brazil. All people have distinct appearances, hairstyles, and jewelry, and the numbers of men and women are approximately equal. In this research, we resized the images to 320 × 240 pixels and converted them to grayscale. Figure 1e shows examples of images from FEI.
- The FEI Face Database (FEI) Subset Frontal and Expression (FEI-FE) [44]: The FEI Face Database has a subset that contains only two images per person, 400 images in total: one frontal face image with a neutral (non-smiling) expression and one frontal face image with a smiling expression. In this research, we resized the images to 180 × 130 pixels and converted them to grayscale. Figure 1f shows examples of images from FEI-FE.
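The resize-and-grayscale preprocessing applied to the color datasets (BeautyREC, FEI, FEI-FE) can be sketched with Pillow; the synthetic RGB array below stands in for a real 512 × 512 BeautyREC sample:

```python
import numpy as np
from PIL import Image

def to_grayscale_resized(img, size):
    """Convert a PIL image to 8-bit grayscale, resize to (width, height),
    and return a float array scaled to [0, 1]."""
    return np.asarray(img.convert("L").resize(size), dtype=np.float64) / 255.0

# A synthetic 512 x 512 RGB image stands in for a BeautyREC sample.
rgb = Image.fromarray((np.random.rand(512, 512, 3) * 255).astype(np.uint8))
gray = to_grayscale_resized(rgb, (100, 100))
print(gray.shape)  # (100, 100)
```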
2.2. Methods
2.2.1. Proposed Design
2.2.2. Multi-Resolution Analysis with Discrete Wavelet Transform (MRA-DWT)
2.2.3. Gaussian Filtering and the Difference of Gaussian
2.2.4. Image Fusion
- Laplacian pyramid image fusion (LP-IF)
- Discrete wavelet transform image fusion (DWT/IDWT-IF)
- Fused A (a low-frequency component) with L (a high-frequency component), resulting in AL;
- Fused H (a high-frequency component) with G1_1 (a low-frequency component), resulting in HG;
- Fused V (a high-frequency component) with G1_1 (a low-frequency component), resulting in VG;
- Fused D (a high-frequency component) with G1_1 (a low-frequency component), resulting in DG.
- LP-IF: averaging for the base low-pass and highest absolute value for the fused high-pass coefficient;
- DWT/IDWT-IF: max-min, min-max, and mean-mean rules with variations in wavelet family and level of decomposition.
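The DWT/IDWT-IF rules above can be sketched as follows. This is one plausible reading of the rule names, in which the first half of the rule (e.g., "max" in max-min) is applied to the approximation coefficients and the second half to the detail coefficients; the paper's exact pairing and the Haar wavelet here are assumptions:

```python
import numpy as np
import pywt

def dwt_fuse(img1, img2, wavelet="haar", level=1, rule=("max", "min")):
    """DWT image fusion: decompose both inputs, combine approximation and
    detail coefficients with separate rules, then reconstruct with IDWT."""
    ops = {"max": lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b),
           "min": lambda a, b: np.where(np.abs(a) <= np.abs(b), a, b),
           "mean": lambda a, b: (a + b) / 2.0}
    c1 = pywt.wavedec2(img1, wavelet, level=level)
    c2 = pywt.wavedec2(img2, wavelet, level=level)
    fused = [ops[rule[0]](c1[0], c2[0])]                 # approximation band
    for d1, d2 in zip(c1[1:], c2[1:]):                   # (H, V, D) per level
        fused.append(tuple(ops[rule[1]](a, b) for a, b in zip(d1, d2)))
    return pywt.waverec2(fused, wavelet)

a, b = np.random.rand(64, 64), np.random.rand(64, 64)
fused = dwt_fuse(a, b, rule=("mean", "mean"))
print(fused.shape)  # (64, 64)
```

With the mean-mean rule and an orthogonal wavelet, linearity of the DWT makes the fused output exactly the pixel-wise average of the two inputs, which is a handy sanity check.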
2.2.5. Inverse Discrete Wavelet Transform for Multi-Resolution Analysis (MRA-IDWT)
2.2.6. Histogram of Oriented Gradient (HoG)
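HoG feature extraction from the enhanced reconstructed image can be sketched with scikit-image; the orientation count and cell/block sizes below are illustrative assumptions, not the paper's reported settings:

```python
import numpy as np
from skimage.feature import hog

# HoG descriptor of a face-sized grayscale image (AT&T dimensions, 92 x 112).
img = np.random.rand(112, 92)
features = hog(img,
               orientations=9,           # gradient orientation bins
               pixels_per_cell=(8, 8),   # histogram computed per cell
               cells_per_block=(2, 2),   # block-wise contrast normalization
               feature_vector=True)      # flatten to a 1-D descriptor
print(features.shape)
```

The resulting 1-D descriptor is what the classifiers in Section 2.2.7 consume.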
2.2.7. Classification and Experiment Setup
- Multi-resolution analysis with discrete wavelet transform, to divide the input image into four different sub-bands, and inverse discrete wavelet transform, to produce an enhanced reconstructed image after fusion (MRA-DWT/IDWT);
- The discrete wavelet transform and inverse discrete wavelet transform as a method of image fusion (DWT/IDWT-IF).
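The MRA-DWT/IDWT design of Experiments 5 and 6 can be sketched end to end: decompose the image into sub-bands, fuse each with an opposite-frequency component from the same image, and invert the transform to obtain the enhanced reconstructed image. The mean fusion rule and Gaussian parameters here are stand-ins for the variants compared in the paper:

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def enhanced_reconstruction(img, wavelet="haar", sigma=2.0):
    """Sketch of the MRA-DWT -> fuse -> MRA-IDWT pipeline on one image."""
    img = img.astype(float)
    A, (H, V, D) = pywt.dwt2(img, wavelet)                    # MRA-DWT sub-bands
    g = gaussian_filter(img, sigma)[::2, ::2]                 # low-frequency counterpart
    dog = g - gaussian_filter(img, 2 * sigma)[::2, ::2]       # high-frequency counterpart
    AL = (A + dog) / 2.0                                      # low fused with high
    HG, VG, DG = ((X + g) / 2.0 for X in (H, V, D))           # high fused with low
    return pywt.idwt2((AL, (HG, VG, DG)), wavelet)            # enhanced image I'

out = enhanced_reconstruction(np.random.rand(168, 192))       # EYB-sized input
print(out.shape)  # (168, 192)
```

The reconstructed image keeps the input size, so HoG extraction and classification proceed exactly as for an unprocessed image.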
3. Results and Discussion
3.1. Results for AT&T Face Dataset
3.1.1. Results for All Experiment Designs
3.1.2. Results for Different Wavelet Families
3.2. Results for Other Face Datasets
3.2.1. Results for EYB and EYB-Dark Face Datasets
3.2.2. Results for BeautyREC Face Dataset
3.2.3. Results for FEI and FEI-FE Face Database
3.3. Comparisons with Other Methods
4. Conclusions
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Li, H.; Hu, J.; Yu, J.; Yu, N.; Wu, Q. Ufacenet: Research on multi-task face recognition algorithm based on CNN. Algorithms 2021, 14, 268. [Google Scholar] [CrossRef]
- Adjabi, I.; Ouahabi, A.; Benzaoui, A.; Taleb-Ahmed, A. Past, present, and future of face recognition: A review. Electronics 2020, 9, 1188. [Google Scholar] [CrossRef]
- Zhong, Y.; Oh, S.; Moon, H.C. Service transformation under industry 4.0: Investigating acceptance of facial recognition payment through an extended technology acceptance model. Technol. Soc. 2021, 64, 101515. [Google Scholar] [CrossRef]
- Li, C.; Li, H. Disentangling facial recognition payment service usage behavior: A trust perspective. Telemat. Inform. 2023, 77, 101939. [Google Scholar] [CrossRef]
- Kortli, Y.; Jridi, M.; Al Falou, A.; Atri, M. Face recognition systems: A survey. Sensors 2020, 20, 342. [Google Scholar] [CrossRef]
- Muwardi, R.; Qin, H.; Gao, H.; Ghifarsyam, H.U.; Hajar, M.H.I.; Yunita, M. Research and Design of Fast Special Human Face Recognition System. In Proceedings of the 2nd International Conference on Broadband Communications, Wireless Sensors and Powering (BCWSP), Yogyakarta, Indonesia, 28–30 September 2020; pp. 68–73. [Google Scholar] [CrossRef]
- Setiawan, H.; Alaydrus, M.; Wahab, A. Multibranch Convolutional Neural Network for Gender and Age Identification Using Multiclass Classification And FaceNet Model. In Proceedings of the 7th International Conference on Informatics and Computing (ICIC), Denpasar, Indonesia, 8–9 December 2022. [Google Scholar] [CrossRef]
- Kaur, H.; Koundal, D.; Kadyan, V. Image Fusion Techniques: A Survey. Arch. Comput. Methods Eng. 2021, 28, 4425–4447. [Google Scholar] [CrossRef]
- Karim, S.; Tong, G.; Li, J.; Qadir, A.; Farooq, U.; Yu, Y. Current advances and future perspectives of image fusion: A comprehensive review. Inf. Fusion 2023, 90, 185–217. [Google Scholar] [CrossRef]
- Singh, S.; Gyaourova, A.; Bebis, G.; Pavlidis, I. Infrared and visible image fusion for face recognition. In Proceedings of the Biometric Technology for Human Identification, Orlando, FL, USA, 12–16 April 2004; Volume 5404, pp. 585–596. [Google Scholar] [CrossRef]
- Heo, J.; Kong, S.G.; Abidi, B.R.; Abidi, M.A. Fusion of visual and thermal signatures with eyeglass removal for robust face recognition. In Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop, Washington, DC, USA, 27 June–2 July 2004. [Google Scholar] [CrossRef]
- Chen, X.; Wang, H.; Liang, Y.; Meng, Y.; Wang, S. A novel infrared and visible image fusion approach based on adversarial neural network. Sensors 2022, 22, 304. [Google Scholar] [CrossRef]
- Hüsken, M.; Brauckmann, M.; Gehlen, S.; von der Malsburg, C. Strategies and benefits of fusion of 2D and 3D face recognition. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05)—Workshops, San Diego, CA, USA, 21–23 September 2005. [Google Scholar] [CrossRef]
- Kusuma, G.P.; Chua, C.S. Image level fusion method for multimodal 2D + 3D face recognition. In Image Analysis and Recognition, Proceedings of the 5th International Conference, ICIAR 2008, Póvoa de Varzim, Portugal, 25–27 June 2008; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar] [CrossRef]
- Ouamane, A.; Belahcene, M.; Benakcha, A.; Bourennane, S.; Taleb-Ahmed, A. Robust multimodal 2D and 3D face authentication using local feature fusion. Signal Image Video Process. 2016, 10, 129–137. [Google Scholar] [CrossRef]
- Sarangi, P.P.; Nayak, D.R.; Panda, M.; Majhi, B. A feature-level fusion based improved multimodal biometric recognition system using ear and profile face. J. Ambient. Intell. Humaniz. Comput. 2022, 13, 1867–1898. [Google Scholar] [CrossRef]
- Alay, N.; Al-Baity, H.H. Deep learning approach for multimodal biometric recognition system based on fusion of iris, face, and finger vein traits. Sensors 2020, 20, 5523. [Google Scholar] [CrossRef] [PubMed]
- Safavipour, M.H.; Doostari, M.A.; Sadjedi, H. A hybrid approach to multimodal biometric recognition based on feature-level fusion of face, two irises, and both thumbprints. J. Med. Signals Sens. 2022, 12, 177–191. [Google Scholar] [CrossRef] [PubMed]
- Byahatti, P.; Shettar, M.S. Fusion Strategies for Multimodal Biometric System Using Face and Voice Cues. IOP Conf. Ser. Mater. Sci. Eng. 2020, 925, 012031. [Google Scholar] [CrossRef]
- AL-Shatnawi, A.; Al-Saqqar, F.; El-Bashir, M.; Nusir, M. Face Recognition Model based on the Laplacian Pyramid Fusion Technique. Int. J. Adv. Soft Comput. Its Appl. 2021, 13, 27–46. [Google Scholar]
- Alfawwaz, B.M.; Al-Shatnawi, A.; Al-Saqqar, F.; Nusir, M.; Yaseen, H. Face recognition system based on the multi-resolution singular value decomposition fusion technique. Int. J. Data Netw. Sci. 2022, 6, 1249–1260. [Google Scholar] [CrossRef]
- Alfawwaz, B.M.; Al-Shatnawi, A.; Al-Saqqar, F.; Nusir, M. Multi-Resolution Discrete Cosine Transform Fusion Technique Face Recognition Model. Data 2022, 7, 80. [Google Scholar] [CrossRef]
- Pong, K.H.; Lam, K.M. Multi-resolution feature fusion for face recognition. Pattern Recognit. 2014, 47, 556–567. [Google Scholar] [CrossRef]
- Zhang, J.; Yan, X.; Cheng, Z.; Shen, X. A face recognition algorithm based on feature fusion. Concurr. Comput. Pract. Exp. 2022, 34, e5748. [Google Scholar] [CrossRef]
- Zhu, Y.; Jiang, Y. Optimization of face recognition algorithm based on deep learning multi feature fusion driven by big data. Image Vis. Comput. 2020, 104, 104023. [Google Scholar] [CrossRef]
- Karanwal, S. Improved local descriptor (ILD): A novel fusion method in face recognition. Int. J. Inf. Technol. 2023, 15, 1885–1894. [Google Scholar] [CrossRef]
- Meng, L.; Yan, C.; Li, J.; Yin, J.; Liu, W.; Xie, H.; Li, L. Multi-Features Fusion and Decomposition for Age-Invariant Face Recognition. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; pp. 3146–3154. [Google Scholar] [CrossRef]
- Li, Y.; Gao, M. Face Recognition Algorithm Based on Multiscale Feature Fusion Network. Comput. Intell. Neurosci. 2022, 2022, 5810723. [Google Scholar] [CrossRef] [PubMed]
- Charoqdouz, E.; Hassanpour, H. Feature Extraction from Several Angular Faces Using a Deep Learning Based Fusion Technique for Face Recognition. Int. J. Eng. Trans. B Appl. 2023, 36, 1548–1555. [Google Scholar] [CrossRef]
- Kumar, P.M.A.; Raj, L.A.; Sagayam, K.M.; Ram, N.S. Expression invariant face recognition based on multi-level feature fusion and transfer learning technique. Multimed. Tools Appl. 2022, 81, 37183–37201. [Google Scholar] [CrossRef]
- Tiong, L.C.O.; Kim, S.T.; Ro, Y.M. Multimodal facial biometrics recognition: Dual-stream convolutional neural networks with multi-feature fusion layers. Image Vis. Comput. 2020, 102, 103977. [Google Scholar] [CrossRef]
- Zhang, W.; Zhou, L.; Zhuang, P.; Li, G.; Pan, X.; Zhao, W.; Li, C. Underwater Image Enhancement via Weighted Wavelet Visual Perception Fusion. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 2469–2483. [Google Scholar] [CrossRef]
- Ding, I.J.; Zheng, N.W. CNN Deep Learning with Wavelet Image Fusion of CCD RGB-IR and Depth-Grayscale Sensor Data for Hand Gesture Intention Recognition. Sensors 2022, 22, 803. [Google Scholar] [CrossRef]
- Bellamkonda, S.; Gopalan, N.P. An enhanced facial expression recognition model using local feature fusion of Gabor wavelets and local directionality patterns. Int. J. Ambient. Comput. Intell. 2020, 11, 48–70. [Google Scholar] [CrossRef]
- Huang, Z.H.; Li, W.J.; Wang, J.; Zhang, T. Face recognition based on pixel-level and feature-level fusion of the top-level’s wavelet sub-bands. Inf. Fusion 2015, 22, 95–104. [Google Scholar] [CrossRef]
- Wenjing, T.; Fei, G.; Renren, D.; Yujuan, S.; Ping, L. Face recognition based on the fusion of wavelet packet sub-images and fisher linear discriminant. Multimed. Tools Appl. 2017, 76, 22725–22740. [Google Scholar] [CrossRef]
- Chai, P.; Luo, X.; Zhang, Z. Image Fusion Using Quaternion Wavelet Transform and Multiple Features. IEEE Access 2017, 5, 6724–6734. [Google Scholar] [CrossRef]
- Ye, S. A Face Recognition Method Based on Multifeature Fusion. J. Sens. 2022, 2022, 2985484. [Google Scholar] [CrossRef]
- Dey, A.; Chowdhury, S.; Sing, J.K. Performance evaluation on image fusion techniques for face recognition. Int. J. Comput. Vis. Robot. 2018, 8, 455–475. [Google Scholar] [CrossRef]
- Samaria, F.S.; Harter, A.C. Parameterisation of a stochastic model for human face identification. In Proceedings of the 1994 IEEE Workshop on Applications of Computer Vision, Sarasota, FL, USA, 5–7 December 1994; pp. 138–142. [Google Scholar] [CrossRef]
- Yan, Q.; Guo, C.; Zhao, J.; Dai, Y.; Loy, C.C.; Li, C. Beautyrec: Robust, efficient, and component-specific makeup transfer. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vancouver, BC, Canada, 17–24 June 2023; pp. 1102–1110. [Google Scholar] [CrossRef]
- Georghiades, A.S.; Belhumeur, P.N.; Kriegman, D.J. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 643–660. [Google Scholar] [CrossRef]
- Lee, K.C.; Ho, J.; Kriegman, D.J. Acquiring linear subspaces for face recognition under variable lighting. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 684–698. [Google Scholar] [CrossRef]
- Thomaz, C.E.; Giraldi, G.A. A new ranking method for principal components analysis and its application to face image analysis. Image Vis. Comput. 2010, 28, 902–913. [Google Scholar] [CrossRef]
- Starosolski, R. Hybrid adaptive lossless image compression based on discrete wavelet transform. Entropy 2020, 22, 751. [Google Scholar] [CrossRef]
- Sundararajan, D. Discrete Wavelet Transform: A Signal Processing Approach; John Wiley & Sons: Singapore, 2015. [Google Scholar]
- Burger, W.; Burge, M.J. Principles of Digital Image Processing: Advanced Methods; Springer: London, UK, 2013. [Google Scholar]
- Lionnie, R.; Apriono, C.; Gunawan, D. Eyes versus Eyebrows: A Comprehensive Evaluation Using the Multiscale Analysis and Curvature-Based Combination Methods in Partial Face Recognition. Algorithms 2022, 15, 208. [Google Scholar] [CrossRef]
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
- Li, J.; Zhang, J.; Yang, C.; Liu, H.; Zhao, Y.; Ye, Y. Comparative Analysis of Pixel-Level Fusion Algorithms and a New High-Resolution Dataset for SAR and Optical Image Fusion. Remote Sens. 2023, 15, 5514. [Google Scholar] [CrossRef]
- Liu, Y.; Wang, L.; Cheng, J.; Li, C.; Chen, X. Multi-focus image fusion: A Survey of the state of the art. Inf. Fusion 2020, 64, 71–91. [Google Scholar] [CrossRef]
- Burt, P.J.; Adelson, E.H. The Laplacian Pyramid as a Compact Image Code. IEEE Trans. Commun. 1983, 31, 532–540. [Google Scholar] [CrossRef]
- Burt, P.J.; Adelson, E.H. Merging Images Through Pattern Decomposition. In Applications of Digital Image Processing VIII, Proceedings of the 29th Annual Technical Symposium, San Diego, CA, USA, 20–23 August 1985; SPIE: Bellingham, WA, USA, 1985; Volume 0575. [Google Scholar] [CrossRef]
- Liu, Y.; Liu, S.; Wang, Z. A general framework for image fusion based on multi-scale transform and sparse representation. Inf. Fusion 2015, 24, 147–164. [Google Scholar] [CrossRef]
- Li, H.; Manjunath, B.S.; Mitra, S.K. Multisensor Image Fusion Using the Wavelet Transform. Graph. Models Image Process. 1995, 57, 235–245. [Google Scholar] [CrossRef]
- Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–26 June 2005; Volume 1, pp. 886–893. [Google Scholar] [CrossRef]
- Teng, J.H.; Ong, T.S.; Connie, T.; Anbananthen, K.S.M.; Min, P.P. Optimized Score Level Fusion for Multi-Instance Finger Vein Recognition. Algorithms 2022, 15, 161. [Google Scholar] [CrossRef]
- Lionnie, R.; Apriono, C.; Chai, R.; Gunawan, D. Curvature Best Basis: A Novel Criterion to Dynamically Select a Single Best Basis as the Extracted Feature for Periocular Recognition. IEEE Access 2022, 10, 113523–113542. [Google Scholar] [CrossRef]
- Lionnie, R.; Hermanto, V. Human vs machine learning in face recognition: A case study from the travel industry. SINERGI 2024, in editing. [Google Scholar]
- Aleem, S.; Yang, P.; Masood, S.; Li, P.; Sheng, B. An accurate multi-modal biometric identification system for person identification via fusion of face and finger print. World Wide Web 2020, 23, 1299–1317. [Google Scholar] [CrossRef]
- Miakshyn, O.; Anufriiev, P.; Bashkov, Y. Face Recognition Technology Improving Using Convolutional Neural Networks. In Proceedings of the 2021 IEEE 3rd International Conference on Advanced Trends in Information Theory (ATIT), Kyiv, Ukraine, 15–17 December 2021; pp. 116–120. [Google Scholar] [CrossRef]
- Hung, B.T.; Khang, N.N. Student Attendance System Using Face Recognition. In Proceedings of the Integrated Intelligence Enable Networks and Computing. Algorithms for Intelligent Systems, Gopeshwar, India, 25–27 May 2020; Springer: Singapore, 2021. [Google Scholar] [CrossRef]
- Bahrami, S.; Dornaika, F.; Bosaghzadeh, A. Joint auto-weighted graph fusion and scalable semi-supervised learning. Inf. Fusion 2021, 66, 213–228. [Google Scholar] [CrossRef]
- Zhang, Y.; Zheng, S.; Zhang, X.; Cui, Z. Multi-resolution dictionary learning method based on sample expansion and its application in face recognition. Signal Image Video Process. 2021, 15, 307–313. [Google Scholar] [CrossRef]
- Kas, M.; El-Merabet, Y.; Ruichek, Y.; Messoussi, R. A comprehensive comparative study of handcrafted methods for face recognition LBP-like and non LBP operators. Multimed. Tools Appl. 2020, 79, 375–413. [Google Scholar] [CrossRef]
- Nikan, S.; Ahmadi, M. Local gradient-based illumination invariant face recognition using local phase quantisation and multi-resolution local binary pattern fusion. IET Image Process. 2015, 9, 12–21. [Google Scholar] [CrossRef]
- Curtidor, A.; Baydyk, T.; Kussul, E. Analysis of random local descriptors in face recognition. Electronics 2021, 10, 1358. [Google Scholar] [CrossRef]
- Talab, M.A.; Awang, S.; Ansari, M.D. A Novel Statistical Feature Analysis-Based Global and Local Method for Face Recognition. Int. J. Opt. 2020, 2020, 4967034. [Google Scholar] [CrossRef]
- Al-Ghrairi, A.H.T.; Mohammed, A.A.; Sameen, E.Z. Face detection and recognition with 180 degree rotation based on principal component analysis algorithm. IAES Int. J. Artif. Intell. 2022, 11, 593–602. [Google Scholar] [CrossRef]
- Al-Shebani, Q.; Premaratne, P.; Vial, P.J. A hybrid feature extraction technique for face recognition. In Proceedings of the International Proceedings of Computer Science and Information Technology, Shanghai, China, 27–28 March 2014; pp. 166–170. Available online: https://ro.uow.edu.au/eispapers/2231/ (accessed on 14 November 2024).
| Exp. | Flowchart | Fusion Level | Method of Fusion * | Coefficients |
|---|---|---|---|---|
| 1 | Figure S1 | Pixel-level | LP-IF | AL, HG, VG, DG (choose one) |
| 2 | Figure S2 | Pixel- and feature-level | LP-IF | Concatenated vector (c-AL+HG+VG+DG) |
| 3 | Figure S1 | Pixel-level | DWT-IF ** | AL, HG, VG, DG (choose one) |
| 4 | Figure S2 | Pixel- and feature-level | DWT-IF ** | Concatenated vector (c-AL+HG+VG+DG) |
| 5 | Figure 2 | Pixel-level | LP-IF | Enhanced reconstructed image using MRA-IDWT |
| 6 | Figure 2 | Pixel-level | DWT-IF ** | Enhanced reconstructed image using MRA-IDWT |
| Dataset | Challenges |
|---|---|
| AT&T | Lighting conditions, facial expressions, glasses and no glasses, image acquisition with a time gap |
| EYB | Pose, variation in lighting conditions |
| EYB-Dark | Very dark face images (1 normal and 3 dark images) |
| BeautyREC | Face makeup (transfer), imbalanced dataset |
| FEI | 180-degree rotation of the face profile |
| FEI-FE | Neutral and smiling expression |
| Exp. | Coefficients | 1-NN | SVM (Linear) | SVM (Quadratic) | SVM (Cubic) | NN-ReLU |
|---|---|---|---|---|---|---|
| 1 | AL | 97.0 | 97.5 | 97.8 | 97.8 | 96.2 |
| | HG | 96.8 | 95.8 | 97.8 | 98.0 | 96.5 |
| | VG | 96.5 | 96.0 | 97.5 | 97.2 | 96.5 |
| | DG | 96.2 | 96.8 | 98.0 | 98.2 | 97.0 |
| 2 | (c-AL+HG+VG+DG) | 97.2 | 97.5 | 98.2 | 98.2 | 97.8 |
| 3 | AL | 97 | 97.8 | 98.2 | 98.2 | 97 |
| | HG | 96.8 | 96.2 | 97.8 | 97.8 | 96.8 |
| | VG | 97 | 96.2 | 97.8 | 97.5 | 96.2 |
| | DG | 96 | 96.8 | 98.2 | 97.5 | 97 |
| 4 | (c-AL+HG+VG+DG) | 96.2 | 97.5 | 98.2 | 98.2 | 97.5 |
| 5 | I’ | 96.8 | 98.2 | 99.2 | 99.2 | 97.2 |
| 6 | I’ | 96 | 97.2 | 98 | 97.8 | 95.5 |
| Exp. | Number of Images | SVM Kernel | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|---|---|
| 5 | 10 | Quadratic | 98.5 | 98.5 | 98.64 | 98.57 |
| | | Cubic | 98.5 | 98.5 | 98.64 | 98.57 |
| | 5 | Quadratic | 93 | 93 | 93.87 | 93.43 |
| | | Cubic | 93.5 | 93.5 | 94.82 | 94.16 |
| 6 | 10 | Quadratic | 97.75 | 97.75 | 97.93 | 97.84 |
| | | Cubic | 97.75 | 97.75 | 97.95 | 97.85 |
| | 5 | Quadratic | 94 | 94 | 95.16 | 94.57 |
| | | Cubic | 94 | 94 | 95.04 | 94.52 |
| Exp. | Dataset | SVM Kernel | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|---|---|
| 5 | Imbalanced | Quadratic | 28.82 | 28.82 | 38.84 | 33.09 |
| | | Cubic | 30.7 | 30.7 | 43.76 | 36.09 |
| | Balanced | Quadratic | 53.33 | 53.33 | 53.33 | 53.33 |
| | | Cubic | 50.67 | 50.67 | 48.79 | 49.71 |
| 6 | Imbalanced | Quadratic | 21.17 | 21.17 | 25.7 | 23.22 |
| | | Cubic | 22.77 | 22.77 | 30.09 | 25.92 |
| | Balanced | Quadratic | 53.33 | 53.33 | 53.33 | 53.33 |
| | | Cubic | 52 | 52 | 50.03 | 50.99 |
| Dataset | Research | Fusion/Non-Fusion | Methods | Accuracy/Recognition Rate (%) |
|---|---|---|---|---|
| AT&T (ORL) | [60] | Fusion | Extended local binary patterns and reduction with local non-negative matrix factorization (on unimodal face) | 97.20 |
| | [22] | Fusion | Multi-resolution discrete cosine transform fusion | 97.70 |
| | [21] | Fusion | Multi-resolution singular value decomposition fusion | 97.78 |
| | [61] | Non-fusion | Convolutional neural network | 98 |
| | [20] | Fusion | Laplacian pyramid fusion | 98.2 |
| | [62] | Non-fusion | Haar-cascade and convolutional neural network | 98.57 |
| | Ours | Fusion | Opposite-frequency features of MRA-DWT/IDWT and image fusion with LP-IF or DWT/IDWT-IF | 99.2 |
| EYB | [63] | Fusion (graph) | Auto-weighted multi-view semi-supervised learning method | 88.57 |
| | [64] | Non-fusion | Multi-resolution dictionary learning method based on sample expansion | 89.59 |
| | [65] | Non-fusion | Local neighborhood difference pattern | 97.64 |
| | [66] | Fusion | Pre-processing and local phase quantization and multi-scale local binary pattern with score-level fusion and decision-level fusion | 98.30 |
| | Ours | Fusion | Opposite-frequency features of MRA-DWT/IDWT and image fusion with LP-IF or DWT/IDWT-IF | 99.8 |
| EYB-Dark | [59] | Non-fusion | Histogram of oriented gradient and contrast-limited adaptive histogram equalization | 95.4 |
| | Ours | Fusion | Opposite-frequency features of MRA-DWT/IDWT and image fusion with LP-IF or DWT/IDWT-IF | 96.1 |
| FEI | [67] | Non-fusion | Permutation coding neural classifier based on random local descriptor | 93.57 |
| | [68] | Non-fusion | Integration of the binary-level occurrence matrix and the fuzzy local binary pattern and neural network classifier | 95.27 |
| | [69] | Non-fusion | Principal component analysis | 96 |
| | Ours | Fusion | Opposite-frequency features of MRA-DWT/IDWT and image fusion with LP-IF or DWT/IDWT-IF | 97.5 |
| FEI-FE | [70] | Non-fusion | Using eye region with Gabor transform and nearest neighbor | 97 |
| | Ours | Fusion | Opposite-frequency features of MRA-DWT/IDWT and image fusion with LP-IF or DWT/IDWT-IF | 99.5 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Lionnie, R.; Andika, J.; Alaydrus, M. A New Approach to Recognize Faces Amidst Challenges: Fusion Between the Opposite Frequencies of the Multi-Resolution Features. Algorithms 2024, 17, 529. https://doi.org/10.3390/a17110529