FRMDB: Face Recognition Using Multiple Points of View
Abstract
1. Introduction
The main contributions of this paper are the following:
- It proposes a novel dataset, the FRMDB, composed of 39 subjects with mugshots taken from 28 different perspectives plus 5 surveillance videos taken from 5 different perspectives. The dataset is open-access and freely released in a GitHub repository (the proposed dataset is available at: https://github.com/airtlab/face-recognition-from-mugshots-database, accessed on 30 December 2022).
- It presents a literature review of existing databases for face recognition, analyzing their potential in benchmarking techniques for verification and identification in surveillance scenarios. Although existing surveys and reviews about face recognition also include a detailed description of available databases, such as in [25,26,27,28], we analyze datasets considering the availability of images and clips suitable to test recognition in video surveillance conditions.
- It compares the results of two well-established CNNs for face recognition on the proposed dataset and the Surveillance Cameras Face (SCFace) database [29]. Such a comparison is useful to validate the goal of the FRMDB, i.e., testing face recognition on security camera frames when different mugshots are available for the identification.
- It provides an initial benchmark for the proposed dataset, starting to analyze the performance of face recognition when different subsets of mugshots, taken from various POVs, are available as a reference. The source code of the experiments is published in an open-access GitHub repository (the source code of the tests is available at: https://github.com/airtlab/tests-on-the-FRMDB, accessed on 30 December 2022).
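The identification protocol ranks the reference mugshots by their similarity to each security camera frame. A minimal sketch of such a ranking, assuming the frame and the mugshots have already been turned into embedding vectors by a face-recognition CNN (the function names below are illustrative, not from the paper's source code):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar_mugshots(probe_embedding, mugshot_embeddings):
    """Rank gallery mugshots by similarity to a probe embedding.

    Returns the gallery indices sorted from most to least similar.
    """
    sims = [cosine_similarity(probe_embedding, m) for m in mugshot_embeddings]
    return sorted(range(len(sims)), key=lambda i: -sims[i])
```

With the frame embedded as `[1, 0]` and three mugshot embeddings `[[0, 1], [1, 0.1], [1, 0]]`, the ranking is `[2, 1, 0]`: the identical vector first, the near-identical one second, the orthogonal one last.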
2. Literature Review
2.1. Databases for Face Recognition
2.2. Evolution of Face Recognition Techniques
3. Materials and Methods
3.1. The Proposed Dataset
- A total of 28 mugshots, i.e., 28 color pictures taken from different points of view with the subject posing during the acquisition.
- A total of 5 security camera videos, taken from 5 points of view. In addition, a mosaic video including all 5 clips at the same time is available.
- An additional frontal picture (1920 × 1080 pixels, JPEG) for each subject, taken under different lighting with a camera placed in front of the subject.
- For 12 out of the 39 subjects, a second set of 5 videos from the security cameras (plus the mosaic) is available. For these subjects, the second set of security videos differs because the subject wears accessories on their head, such as glasses, sunglasses, hats, and bandanas. The subjects do not wear such accessories in the mugshots.
- For 3 out of the 39 subjects, a second set of 28 mugshots taken with the subject smiling is available.
- A text file for each subject containing the subject’s sex, age, and the accessories worn in the second set of security videos, if available.
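As a quick illustration of the dataset's structure, the 28 points of view can be enumerated from the angle grid reported in the paper (horizontal plane from −135° to +135° in 45° steps; vertical plane from −60° to +30° in 30° steps). This is a sketch of the grid only; the actual file naming in the FRMDB repository may differ:

```python
# Enumerate the 28 mugshot points of view as (horizontal, vertical) degrees.
HORIZONTAL = [-135, -90, -45, 0, 45, 90, 135]  # 7 horizontal angles, 45° step
VERTICAL = [-60, -30, 0, 30]                   # 4 vertical angles, 30° step

POVS = [(h, v) for v in VERTICAL for h in HORIZONTAL]
assert len(POVS) == 28  # one posed color picture per point of view

# The frontal mugshot corresponds to the (0, 0) point of view.
assert (0, 0) in POVS
```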
3.2. The Compared CNNs
3.3. Experimental Protocol and Evaluation Metrics
- The “Test F” subset, composed of the frontal picture only, i.e., the one at (0, 0) for both databases. The name “Test F” comes from the SCFace database, where “F” is the label given to the frontal pictures.
- The “Test F-L1-R1”, composed of the frontal picture, the left angle nearest to the frontal picture (which is (−22.5, 0) for the SCFace database and (−45, 0) for the FRMDB), and the right angle nearest to the frontal picture (SCFace: (22.5, 0); FRMDB: (45, 0)). The name “Test F-L1-R1” comes from the SCFace database, as F, L1 and R1 are the image labels used for the included pictures.
- The “Test 1” subset, composed of the frontal picture and the right profile picture, i.e., (90, 0), for both databases. This subset reproduces the only mugshots currently available in the databases of most police forces.
- The “Test 2” subset, composed of the frontal picture, the right profile picture, and the left profile picture, i.e., (−90, 0).
- The “Test 2” pictures plus the pictures one step closer to the frontal picture starting from the right and left profiles, which are (67.5, 0) and (−67.5, 0) for the SCFace database, and (45, 0) and (−45, 0) for the FRMDB. We called these subsets “Test 3”.
- The “Test 3” pictures plus the pictures at (45, 0) and (−45, 0) for the SCFace database and the pictures at (135, 0) and (−135, 0) for the FRMDB. We called these subsets “Test 4”. Indeed, “Test 4” includes all the pictures at 0° on the vertical plane of the proposed dataset.
- All 9 mugshots for the SCFace database, and the “Test 4” pictures plus all the mugshots with a 30° angle on the vertical plane for the FRMDB. We called these subsets “Test 5”.
- All 28 mugshots of the FRMDB. We called this subset “Test 6”.
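For reference, the FRMDB subsets listed above can be written out as lists of (horizontal, vertical) angle pairs. This is an illustrative sketch, not the authors' published code; the subset names follow the paper:

```python
# FRMDB mugshot subsets as (horizontal°, vertical°) pairs.
TEST_F = [(0, 0)]                                # frontal picture only
TEST_F_L1_R1 = [(0, 0), (-45, 0), (45, 0)]       # frontal plus nearest left/right
TEST_1 = [(0, 0), (90, 0)]                       # classic photo-signaling pair
TEST_2 = TEST_1 + [(-90, 0)]                     # plus the left profile
TEST_3 = TEST_2 + [(45, 0), (-45, 0)]            # plus one step toward frontal
TEST_4 = TEST_3 + [(135, 0), (-135, 0)]          # all 7 mugshots at 0° vertical
TEST_5 = TEST_4 + [(h, 30) for h, _ in TEST_4]   # plus the whole 30° vertical row
# "Test 6" uses all 28 mugshots, i.e., every combination of the angle grid.

assert len(TEST_4) == 7 and len(TEST_5) == 14
```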
- As the number of security camera images for which the correct subject was among the top-1, top-3, top-5, and top-10 most similar mugshots, divided by the total number of security camera images (accuracy over the most similar mugshots).
- As the number of security camera images for which the correct subject was among the top-1, top-3, top-5, and top-10 nearest identities, divided by the total number of security camera images (accuracy over the nearest identities).
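The two accuracy variants above can be sketched as follows, assuming one similarity score per (security camera image, mugshot) pair from the face-recognition CNN. Setting `by_identity=True` collapses duplicate identities in rank order, giving the "nearest identities" variant; all names are illustrative, not from the paper's source code:

```python
import numpy as np

def top_k_hit(similarities, gallery_ids, true_id, k, by_identity=False):
    """True if the correct subject is within the top-k matches for one probe.

    similarities: one score per gallery mugshot (higher = more similar).
    gallery_ids: subject id of each gallery mugshot.
    """
    order = np.argsort(-np.asarray(similarities, dtype=float))  # best first
    ranked = [gallery_ids[j] for j in order]
    if by_identity:  # keep only the first (best-ranked) mugshot per identity
        seen, dedup = set(), []
        for sid in ranked:
            if sid not in seen:
                seen.add(sid)
                dedup.append(sid)
        ranked = dedup
    return true_id in ranked[:k]

def top_k_accuracy(all_sims, gallery_ids, true_ids, k, by_identity=False):
    """Fraction of security camera images whose correct subject is in the top-k."""
    hits = [top_k_hit(s, gallery_ids, t, k, by_identity)
            for s, t in zip(all_sims, true_ids)]
    return sum(hits) / len(hits)
```

For example, with gallery identities `["a", "a", "b", "c"]` and scores `[0.9, 0.1, 0.8, 0.2]`, a probe of subject `"b"` is a top-2 hit over mugshots but not a top-1 hit, since the best-scoring mugshot belongs to `"a"`.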
4. Results and Discussion
4.1. Results and Discussion on the SCFace Database
4.2. Results and Discussion on the FRMDB
4.3. Limitations
5. Conclusions
- The proposed dataset is adequate to benchmark face recognition techniques for the identification of subjects in the videos using mugshots, taking into account different points of view. The lower accuracy with respect to the SCFace database highlighted the challenging nature of the dataset. In addition, the subset of mugshots composed of the frontal face only did not show the same predominance it scored on SCFace, as the FRMDB includes surveillance videos from multiple points of view.
- With both datasets, the traditional photo-signaling pictures, i.e., the frontal image and the right profile, are outperformed by other subsets of mugshots. Specifically, with the proposed FRMDB, the subset composed of the frontal picture and the pictures at ±45° on the horizontal plane achieves the best accuracy in most of the tests.
- Further research is needed to determine an ideal number of mugshots, balancing the gain in accuracy against the additional tools (and storage space) law enforcement agencies would need to collect more mugshot pictures. For more general results, more techniques need to be tested, including those for Pose-Invariant Face Recognition (PIFR) and pose estimation, in order to pick the mugshot with the pose nearest to the security camera frames before the comparison.
Author Contributions
Funding
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
- Raaijmakers, S. Artificial Intelligence for Law Enforcement: Challenges and Opportunities. IEEE Secur. Priv. 2019, 17, 74–77.
- Rademacher, T. Artificial Intelligence and Law Enforcement. In Regulating Artificial Intelligence; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 225–254.
- Sernani, P.; Falcionelli, N.; Tomassini, S.; Contardo, P.; Dragoni, A.F. Deep Learning for Automatic Violence Detection: Tests on the AIRTLab Dataset. IEEE Access 2021, 9, 160580–160595.
- Vrskova, R.; Hudec, R.; Kamencay, P.; Sykora, P. A New Approach for Abnormal Human Activities Recognition Based on ConvLSTM Architecture. Sensors 2022, 22, 2946.
- Bhatti, M.T.; Khan, M.G.; Aslam, M.; Fiaz, M.J. Weapon Detection in Real-Time CCTV Videos Using Deep Learning. IEEE Access 2021, 9, 34366–34382.
- Berardini, D.; Galdelli, A.; Mancini, A.; Zingaretti, P. Benchmarking of Dual-Step Neural Networks for Detection of Dangerous Weapons on Edge Devices. In Proceedings of the 2022 18th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications (MESA), Taipei, Taiwan, 29–31 August 2022; pp. 1–6.
- Yuan, Z.; Zhou, X.; Yang, T. Hetero-ConvLSTM: A Deep Learning Approach to Traffic Accident Prediction on Heterogeneous Spatio-Temporal Data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD ’18), London, UK, 19–23 August 2018; pp. 984–992.
- Rossi, L.; Paolanti, M.; Pierdicca, R.; Frontoni, E. Human trajectory prediction and generation using LSTM models and GANs. Pattern Recognit. 2021, 120, 108136.
- Xu, Z.; Hu, C.; Mei, L. Video structured description technology based intelligence analysis of surveillance videos for public security applications. Multimed. Tools Appl. 2016, 75, 12155–12172.
- Khairwa, A.; Abhishek, K.; Prakash, S.; Pratap, T. A comprehensive study of various biometric identification techniques. In Proceedings of the 2012 Third International Conference on Computing, Communication and Networking Technologies (ICCCNT’12), Karur, India, 26–28 July 2012; pp. 1–6.
- Gomez-Barrero, M.; Drozdowski, P.; Rathgeb, C.; Patino, J.; Todisco, M.; Nautsch, A.; Damer, N.; Priesnitz, J.; Evans, N.; Busch, C. Biometrics in the Era of COVID-19: Challenges and Opportunities. IEEE Trans. Technol. Soc. 2022, 3, 307–322.
- Turk, M.; Pentland, A. Face recognition using eigenfaces. In Proceedings of the 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR ’91), Maui, HI, USA, 3–6 June 1991; pp. 586–591.
- Belhumeur, P.; Hespanha, J.; Kriegman, D. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 711–720.
- Guo, G.; Zhang, N. A survey on deep learning based face recognition. Comput. Vis. Image Underst. 2019, 189, 102805.
- Crouse, D.; Han, H.; Chandra, D.; Barbello, B.; Jain, A.K. Continuous authentication of mobile user: Fusion of face image and inertial Measurement Unit data. In Proceedings of the 2015 International Conference on Biometrics (ICB), Sassari, Italy, 7–8 June 2015; pp. 135–142.
- Opitz, A.; Kriechbaum-Zabini, A. Evaluation of face recognition technologies for identity verification in an eGate based on operational data of an airport. In Proceedings of the 2015 12th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Karlsruhe, Germany, 25–28 August 2015; pp. 1–5.
- Ammour, B.; Boubchir, L.; Bouden, T.; Ramdani, M. Face–Iris Multimodal Biometric Identification System. Electronics 2020, 9, 85.
- Forti, M. AI-driven migration management procedures: Fundamental rights issues and regulatory answers. Biolaw J. 2021, 2021, 433–451.
- Ding, C.; Tao, D. A Comprehensive Survey on Pose-Invariant Face Recognition. ACM Trans. Intell. Syst. Technol. 2016, 7, 1–42.
- Ahmed, S.; Ali, S.; Ahmad, J.; Adnan, M.; Fraz, M. On the frontiers of pose invariant face recognition: A review. Artif. Intell. Rev. 2020, 53, 2571–2634.
- Hassaballah, M.; Aly, S. Face recognition: Challenges, achievements and future directions. IET Comput. Vis. 2015, 9, 614–626.
- Contardo, P.; Sernani, P.; Falcionelli, N.; Dragoni, A.F. Deep Learning for Law Enforcement: A Survey about Three Application Domains. In Proceedings of the 4th International Conference on Recent Trends and Applications in Computer Science and Information Technology, Tirana, Albania, 21–22 May 2021; CEUR Workshop Proceedings. Volume 2872, pp. 36–45.
- Contardo, P.; Lorenzo, E.D.; Falcionelli, N.; Dragoni, A.F.; Sernani, P. Analyzing the impact of police mugshots in face verification for crime investigations. In Proceedings of the 2022 IEEE International Conference on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), Rome, Italy, 26–28 October 2022; pp. 236–241.
- Adjabi, I.; Ouahabi, A.; Benzaoui, A.; Taleb-Ahmed, A. Past, Present, and Future of Face Recognition: A Review. Electronics 2020, 9, 1188.
- Kortli, Y.; Jridi, M.; Al Falou, A.; Atri, M. Face Recognition Systems: A Survey. Sensors 2020, 20, 342.
- Taskiran, M.; Kahraman, N.; Erdem, C.E. Face recognition: Past, present and future (a review). Digit. Signal Process. 2020, 106, 102809.
- Wang, M.; Deng, W. Deep face recognition: A survey. Neurocomputing 2021, 429, 215–244.
- Grgic, M.; Delac, K.; Grgic, S. SCface—Surveillance Cameras Face Database. Multimed. Tools Appl. 2011, 51, 863–879.
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, 7–9 May 2015.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Parkhi, O.M.; Vedaldi, A.; Zisserman, A. Deep Face Recognition; British Machine Vision Association: Durham, UK, 2015.
- Cao, Q.; Shen, L.; Xie, W.; Parkhi, O.M.; Zisserman, A. VGGFace2: A Dataset for Recognising Faces across Pose and Age. In Proceedings of the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi’an, China, 15–19 May 2018; pp. 67–74.
- Wang, Z.; Wang, G.; Huang, B.; Xiong, Z.; Hong, Q.; Wu, H.; Yi, P.; Jiang, K.; Wang, N.; Pei, Y.; et al. Masked Face Recognition Dataset and Application. arXiv 2020, arXiv:2003.09093.
- Wang, C.; Fang, H.; Zhong, Y.; Deng, W. MLFW: A Database for Face Recognition on Masked Faces. In Proceedings of the Biometric Recognition; Springer Nature Switzerland: Cham, Switzerland, 2022; pp. 180–188.
- Samaria, F.; Harter, A. Parameterisation of a stochastic model for human face identification. In Proceedings of the 1994 IEEE Workshop on Applications of Computer Vision, Seattle, WA, USA, 21–23 June 1994; pp. 138–142.
- Best-Rowden, L.; Han, H.; Otto, C.; Klare, B.F.; Jain, A.K. Unconstrained Face Recognition: Identifying a Person of Interest From a Media Collection. IEEE Trans. Inf. Forensics Secur. 2014, 9, 2144–2157.
- Huang, G.B.; Ramesh, M.; Berg, T.; Learned-Miller, E. Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments; Technical Report 07-49; University of Massachusetts: Amherst, MA, USA, 2007.
- Huang, G.B.; Learned-Miller, E. Labeled Faces in the Wild: Updates and New Reporting Procedures; Technical Report UM-CS-2014-003; University of Massachusetts: Amherst, MA, USA, 2014.
- Wolf, L.; Hassner, T.; Maoz, I. Face recognition in unconstrained videos with matched background similarity. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 529–534.
- Viola, P.; Jones, M. Robust Real-Time Face Detection. Int. J. Comput. Vis. 2004, 57, 137–154.
- Yi, D.; Lei, Z.; Liao, S.; Li, S.Z. Learning Face Representation from Scratch. arXiv 2014, arXiv:1411.7923.
- Kemelmacher-Shlizerman, I.; Seitz, S.M.; Miller, D.; Brossard, E. The MegaFace Benchmark: 1 Million Faces for Recognition at Scale. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 4873–4882.
- Nech, A.; Kemelmacher-Shlizerman, I. Level Playing Field for Million Scale Face Recognition. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 3406–3415.
- Phillips, P.; Wechsler, H.; Huang, J.; Rauss, P.J. The FERET database and evaluation procedure for face-recognition algorithms. Image Vis. Comput. 1998, 16, 295–306.
- Phillips, P.; Moon, H.; Rizvi, S.; Rauss, P. The FERET evaluation methodology for face-recognition algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1090–1104.
- Blanz, V.; Vetter, T. A Morphable Model for the Synthesis of 3D Faces. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’99), Los Angeles, CA, USA, 8–13 August 1999; ACM Press/Addison-Wesley Publishing Co.: Cambridge, MA, USA, 1999; pp. 187–194.
- Georghiades, A.; Belhumeur, P.; Kriegman, D. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 643–660.
- Lee, K.C.; Ho, J.; Kriegman, D. Acquiring linear subspaces for face recognition under variable lighting. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 684–698.
- Bon-Woo, H.; Byun, H.; Myoung-Cheol, R.; Seong-Whan, L. Performance Evaluation of Face Recognition Algorithms on the Asian Face Database, KFDB. In Proceedings of the Audio- and Video-Based Biometric Person Authentication; Kittler, J., Nixon, M.S., Eds.; Springer: Berlin/Heidelberg, Germany, 2003; pp. 557–565.
- Gao, W.; Cao, B.; Shan, S.; Zhou, D.; Zhang, X.; Zhao, D. The CAS-PEAL Large Scale Chinese Face Database and Evaluation Protocols; Technical Report JDL-TR_04_FR_001; ICT-ISVISION Joint Research & Development Laboratory for Face Recognition, Chinese Academy of Sciences: Beijing, China, 2004.
- Gross, R.; Matthews, I.; Cohn, J.; Kanade, T.; Baker, S. Multi-PIE. Image Vis. Comput. 2010, 28, 807–813.
- Watson, C.; Flanagan, P. NIST Special Database 18. NIST Mugshot Identification Database (MID); Technical Report; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2016.
- Wong, Y.; Chen, S.; Mau, S.; Sanderson, C.; Lovell, B.C. Patch-based Probabilistic Image Quality Assessment for Face Selection and Improved Video-based Face Recognition. In Proceedings of the IEEE Biometrics Workshop, Computer Vision and Pattern Recognition (CVPR) Workshops, Colorado Springs, CO, USA, 20–25 June 2011; pp. 81–88.
- Ahonen, T.; Hadid, A.; Pietikainen, M. Face Description with Local Binary Patterns: Application to Face Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 2037–2041.
- Masi, I.; Wu, Y.; Hassner, T.; Natarajan, P. Deep Face Recognition: A Survey. In Proceedings of the 2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Paraná, Brazil, 29 October–1 November 2018; pp. 471–478.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2012; Volume 2, pp. 1097–1105.
- Taigman, Y.; Yang, M.; Ranzato, M.; Wolf, L. DeepFace: Closing the Gap to Human-Level Performance in Face Verification. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1701–1708.
- Learned-Miller, E.; Huang, G.B.; RoyChowdhury, A.; Li, H.; Hua, G. Labeled Faces in the Wild: A Survey. In Advances in Face Detection and Facial Image Analysis; Springer International Publishing: Cham, Switzerland, 2016; pp. 189–248.
- Schroff, F.; Kalenichenko, D.; Philbin, J. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 815–823.
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7132–7141.
- You, M.; Han, X.; Xu, Y.; Li, L. Systematic evaluation of deep face recognition methods. Neurocomputing 2020, 388, 144–156.
- Hassner, T.; Harel, S.; Paz, E.; Enbar, R. Effective face frontalization in unconstrained images. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 4295–4304.
- Tran, L.; Yin, X.; Liu, X. Disentangled Representation Learning GAN for Pose-Invariant Face Recognition. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1283–1292.
- Tran, L.; Yin, X.; Liu, X. Representation Learning by Rotating Your Faces. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 3007–3021.
- Zhao, J.; Cheng, Y.; Xu, Y.; Xiong, L.; Li, J.; Zhao, F.; Jayashree, K.; Pranata, S.; Shen, S.; Xing, J.; et al. Towards Pose Invariant Face Recognition in the Wild. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2207–2216.
- Xiang, J.; Zhu, G. Joint Face Detection and Facial Expression Recognition with MTCNN. In Proceedings of the 2017 4th International Conference on Information Science and Control Engineering (ICISCE), Changsha, China, 21–23 July 2017; pp. 424–427.
- Hazra, D.; Byun, Y.C. Upsampling Real-Time, Low-Resolution CCTV Videos Using Generative Adversarial Networks. Electronics 2020, 9, 1312.
Database | # Subjects | # Face Images | Posed/In the Wild | Multiple POVs (°) | Images/Videos From Security Cams | Availability |
---|---|---|---|---|---|---|
AT&T [36] | 40 | 400 (grayscale) | Posed | none | none | Not available |
LFW [38,39] | 5749 | 13,233 (color) | In the wild | none | none | Open-access |
YouTube Faces [40] | 1688 | 3425 (color videos) | In the wild | none | none | Open-access |
CASIA-Webface [42] | 10,575 | 494,414 (color) | In the wild | none | none | Upon request |
MegaFace [43,44] | 672,057 | 4.7 million (color) | In the wild | none | none | Not available |
VGGFace [32] | 2622 | 982,803 (color) | In the wild | none | none | Open-access |
VGGFace2 [33] | 9131 | 3.31 million (color) | In the wild | none | none | Open-access |
FERET [45,46] | 1199 | 14,051 (color) | Posed | Horizontal plane: −60°, −40°, −25°, −15°, 0°, 15°, 25°, 40°, 60° Vertical plane: none | none | Upon request |
MPI [47] | 200 | 1400 (color) | Posed | Horizontal plane: from −90° to +90°, 30° step Vertical plane: none | none | Not available |
Extended Yale [48,49] | 28 | 16,128 (grayscale) | Posed | Horizontal plane: 0°, 12°, 24° Vertical plane: none | none | Open-access |
KFDB [50] | 1000 | 52,000 (color) | Posed | Horizontal plane: from −45° to +45°, 15° step Vertical plane: none | none | Not available |
CAS-PEAL [51] | 1040 | 30,900 (color) | Posed | Horizontal plane: from −67.5° to +67.5°, 22.5° step Vertical plane: −30° to +30°, 30° step | none | Upon request |
Multi-PIE [52] | 337 | 755,370 (color) | Posed | Horizontal plane: from −90° to +90°, 15° step Vertical plane: 2 pictures on a different unknown angle | none | Upon request |
NIST MID [53] | 1573 | 3288 (color) | Posed | Horizontal plane: −90°, 0°, +90° Vertical plane: none | none | Upon request |
ChokePoint [54] | 25–29 | 48 (color videos) | Security Cams | Horizontal plane: 3 unknown angles Vertical plane: none | 48 Videos from 3 POVs in total | Open-access |
SCFace [29] | 130 | 4160 (color and IR) | Posed + Security Cams | Horizontal plane: from −90° to +90°, 22.5° step Vertical plane: none | 23 Frontal Face Images per subject | Upon request |
FRMDB (proposed) | 39 | 1092 (color) 195 (color videos) | Posed + Security Cams | Horizontal plane: from −135° to +135°, 45° step Vertical plane: −60° to +30°, 30° step | 5 Videos from multiple POVs per subject | Open-access |
Subset Name | Mugshots (SCFace) | Mugshots (FRMDB) |
---|---|---|
“Test F” | (0, 0) | (0, 0) |
“Test F-L1-R1” | (0, 0), (−22.5, 0), (22.5, 0) | (0, 0), (−45, 0), (45, 0) |
“Test 1” | (0, 0), (90, 0) | (0, 0), (90, 0) |
“Test 2” | (0, 0), (90, 0), (−90, 0) | (0, 0), (90, 0), (−90, 0) |
“Test 3” | (0, 0), (90, 0), (−90, 0), (67.5, 0), (−67.5, 0) | (0, 0), (90, 0), (−90, 0), (45, 0), (−45, 0) |
“Test 4” | (0, 0), (90, 0), (−90, 0), (67.5, 0), (−67.5, 0), (45, 0), (−45, 0) | (0, 0), (135, 0), (−135, 0), (90, 0), (−90, 0), (45, 0), (−45, 0) |
“Test 5” | (0, 0), (90, 0), (−90, 0), (67.5, 0), (−67.5, 0), (45, 0), (−45, 0), (−22.5, 0), (22.5, 0) | (0, 0), (135, 0), (−135, 0), (90, 0), (−90, 0), (45, 0), (−45, 0), (0, 30), (135, 30), (−135, 30), (90, 30), (−90, 30), (45, 30), (−45, 30) |
“Test 6” | None | (0, 0), (135, 0), (−135, 0), (90, 0), (−90, 0), (45, 0), (−45, 0), (0, 30), (135, 30), (−135, 30), (90, 30), (−90, 30), (45, 30), (−45, 30), (0, 60), (135, 60), (−135, 60), (90, 60), (−90, 60), (45, 60), (−45, 60), (0, −30), (135, −30), (−135, −30), (90, −30), (−90, −30), (45, −30), (−45, −30) |
[Figure residue: an example ranking of mugshots by similarity, listed as subject ID and (horizontal°, vertical°) point of view: 1. 008 (0°, 0°); 2. 009 (0°, 0°); 3. 009 (45°, 0°); 4. 008 (−45°, 30°); 5. 005 (0°, 0°); 6. 001 (0°, 0°); 7. 005 (45°, 0°); 8. 005 (45°, 30°); 9. 002 (0°, 0°); …]
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Citation: Contardo, P.; Sernani, P.; Tomassini, S.; Falcionelli, N.; Martarelli, M.; Castellini, P.; Dragoni, A.F. FRMDB: Face Recognition Using Multiple Points of View. Sensors 2023, 23, 1939. https://doi.org/10.3390/s23041939