Iris Liveness Detection for Biometric Authentication: A Systematic Literature Review and Future Directions
Abstract
1. Introduction
1.1. Significance and Relevance
1.2. Evolution of Iris Biometric Authentication System
1.3. Prior Research
- Only one SLR is available in this domain.
- The prevailing literature does not scrutinize the generalizability of spoofing-attack detection techniques across the diverse types of attacks.
- The different performance measures used are not discussed in the prevailing assessments.
- Few studies survey the feature extraction techniques and the datasets available for iris liveness detection.
1.4. Motivation
- The prevailing ILD techniques and their limitations in recognizing spoofing attacks.
- A comparative analysis of the different types of spoofing attacks used against iris liveness detection.
- Publicly available datasets for iris biometric detection.
- The different performance measures used in implementing iris liveness detection.
1.5. Research Goals
1.6. Contributions of the Study
- A comparison of the methodologies employed in the literature for detecting and classifying iris liveness.
- A study of the feature extraction techniques used with ILD in the literature.
- A survey of all the types of iris spoofing attacks and of the literary works addressing them.
- A study and comparison of the datasets constructed in the literature for ILD and spoofing-attack detection.
- An analysis of ILD methods using various evaluation metrics.
- Figure 2 depicts how our SLR is organized into distinct segments.
2. Research Methodology
2.1. Selection Criterion for Research Studies
2.2. Inclusion and Exclusion Criteria
2.3. Study Selection Results
2.4. Quality Assessment Criteria for the Research Studies
3. Focus Areas in Study of Iris Liveness Detection Literature
3.1. Feature Extraction Techniques Used for Iris Liveness Detection
3.2. Iris Spoofing Attacks
- Print attacks—The imposter presents a printed image of a validated iris to the biometric sensor.
- Contact lens attacks—The imposter wears contact lenses on which the pattern of a genuine iris is printed.
- Video attacks—“Imposter plays the video of registered identity in front of a biometric system” [14].
- Cadaver attacks—The imposter uses the eye of a dead person in front of the biometric system.
- Synthetic attacks—“Embedding the iris region into the real images makes the synthesized images more realistic” [22].
3.3. Datasets Used for Iris Liveness Detection
- Real-time datasets—Some authors created their own datasets for testing their iris liveness detection models.
- Standard benchmark datasets—Many universities created and published datasets so that anyone can use them for implementation. Standard available datasets for ILD are explained in detail in Section 4.3.
3.4. Performance Measures
4. Outcome of Survey
4.1. Feature Extraction Techniques Used for Iris Liveness Detection (RQ1)
4.1.1. Handcrafted Feature Extraction
- A. Local Features Extraction
- B. Global Features Extraction
- (A) Local Features Extraction
- (1) Texture Features
- (2) Statistical Features
- (3) LBP
- (4) SIFT
- (B) Global Features Extraction
- (1) BSIF
- (2) Histogram
- (3) Image Quality Measures
- (a) Error Sensitivity Measures: Traditional iris image quality assessment approaches are based on measuring the errors between the distorted and the real iris images. These features are simple to calculate and typically have very low computational complexity [33].
- (b) Pixel Difference Measures: These features quantify the distortion between two iris images based on their pixelwise differences. They include the Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Signal-to-Noise Ratio (SNR), Structural Content (SC), Maximum Difference (MD), Average Difference (AD), Normalized Absolute Error (NAE), R-Averaged Maximum Difference (RAMD), and Laplacian Mean Squared Error (LMSE). Formal definitions and formulas are discussed in [33].
- (c) Correlation-based Measures: The resemblance between two digital images can also be quantified in terms of the correlation function. One family of correlation-based measures is obtained by considering the statistics of the angles between the pixel vectors of the original and distorted images. These features comprise the Normalized Cross-Correlation (NXC), Mean Angle Similarity (MAS), and Mean Angle-Magnitude Similarity (MAMS). Formal definitions and formulas are discussed in [33].
- (d) Edge-based Measures: Edges and corners are some of the most informative parts of an image, and structural alteration of an iris image is closely connected with its edge degradation. Two edge-related quality measures are used: Total Edge Difference (TED) and Total Corner Difference (TCD) [34].
- (e) Spectral Distance Measures: The Fourier transform is another traditional image processing tool that has been applied to image quality assessment. To extract spectral-related IQ features, the Spectral Magnitude Error (SME) and the Spectral Phase Error (SPE) are used. Formal definitions and formulas are discussed in [33].
- (f) Gradient-based Measures: Many of the distortions that can affect an image are reflected in a change of its gradient. Consequently, using such information, structural and contrast changes can be effectively captured. Two simple gradient-based features are included in the biometric protection system: Gradient Magnitude Error (GME) and Gradient Phase Error (GPE); formal definitions and formulas are discussed in [33].
- (g) Structural Similarity Measures: The human visual system is highly adapted to extracting structural information from the viewing field. Hence, distortions in an image caused by variations in lighting, such as contrast or brightness changes (nonstructural distortions), should be treated differently from structural distortions.
- (h) Information-Theoretic Measures: The central idea of these approaches is that an image source communicates with a receiver through a channel that limits the amount of information that can flow through it, thereby introducing distortions. The goal is to relate the visual quality of the test image to the amount of information shared between the test and the reference signals. To extract information-theoretic features, the Visual Information Fidelity (VIF) and the Reduced Reference Entropic Difference index (RRED) are used, as discussed in [33].
- (i) Distortion-Specific Approaches: These techniques rely on previously acquired knowledge about the form of visual quality loss caused by a specific distortion. The final quality measure is computed according to a model trained on clean images and on images affected by this particular distortion. Two such measures are the JPEG Quality Index and the high-low frequency index, as discussed in [33].
- (j) Training-based Approaches: In this technique, a model is trained using clean and distorted images. The quality score is then computed from a number of features extracted from the test image and related to the general model. Unlike the former approaches, these metrics aim to offer a general quality score that is not associated with a specific distortion. The Blind Image Quality Index (BIQI) follows a two-stage framework in which the measures of different distortion-specific experts are combined to create one global quality score.
- (k) Natural Scene Statistic Approaches: This approach is exemplified by the Natural Image Quality Evaluator (NIQE). The NIQE is a completely blind image quality analyzer based on the construction of a quality-aware collection of statistical features derived from a corpus of natural undistorted images, fitted to a multivariate Gaussian natural scene statistical model.
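Several of the pixel-difference and correlation measures above reduce to a few lines of arithmetic. As a minimal pure-Python sketch (illustrative simplifications on toy grayscale images, not the exact formulations of [33]), MSE, PSNR, AD, MD, and NXC can be computed as:

```python
import math

def mse(ref, dist):
    """Mean Squared Error between two equal-sized grayscale images (lists of rows)."""
    n = len(ref) * len(ref[0])
    return sum((r - d) ** 2 for rr, rd in zip(ref, dist)
               for r, d in zip(rr, rd)) / n

def psnr(ref, dist, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB; infinite for identical images."""
    m = mse(ref, dist)
    return float("inf") if m == 0 else 10 * math.log10(max_val ** 2 / m)

def avg_diff(ref, dist):
    """Average Difference (AD): mean of the signed pixelwise differences."""
    n = len(ref) * len(ref[0])
    return sum(r - d for rr, rd in zip(ref, dist) for r, d in zip(rr, rd)) / n

def max_diff(ref, dist):
    """Maximum Difference (MD): largest absolute pixelwise difference."""
    return max(abs(r - d) for rr, rd in zip(ref, dist) for r, d in zip(rr, rd))

def nxc(ref, dist):
    """Normalized Cross-Correlation (NXC): sum(ref*dist) / sum(ref^2)."""
    num = sum(r * d for rr, rd in zip(ref, dist) for r, d in zip(rr, rd))
    den = sum(r * r for rr in ref for r in rr)
    return num / den

# Toy 2x2 "images": a reference iris patch and a slightly distorted copy.
real = [[100, 110], [120, 130]]
fake = [[102, 108], [121, 129]]
print(mse(real, fake))  # → 2.5
```

Identical images give MSE = 0 (hence infinite PSNR) and NXC = 1; the larger the distortion introduced by an artifact, the further these scores drift from those reference values.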
- (C) Drawbacks of Handcrafted Methods:
- The design of handcrafted feature extractors depends profoundly on the researchers’ expertise in the problem.
- Handcrafted features frequently reflect restricted aspects of the problem and are often sensitive to changing acquisition conditions, such as camera devices, lighting conditions, and Presentation Attack Instruments (PAIs).
- Detection accuracy differs significantly among different databases, indicating that handcrafted features have poor generalization ability. Therefore, they fail to provide a complete solution to the PAD problem.
- The available cross-database tests in the literature suggest that the performance of hand-engineered texture-based techniques can degrade drastically when operating in unfamiliar circumstances [35].
4.1.2. Self-Learned Feature Extraction
- A. Deep Learning Model
- (1) Convolutional Neural Network (CNN):
- (a) The main building component of a CNN, the convolution layer [33], performs the majority of the intensive computation. This layer’s parameters consist of filter banks (kernels) that extract increasingly complex features. The input image is convolved with these kernels: the dot product of the filter entries and the input image is computed, creating a feature map for each filter kernel. Accordingly, the network learns filters that activate when a specific type of feature appears at a given spatial position in the input.
- (b) The max-pooling layer reduces the size of the representation and the number of parameters in the network, which reduces computing overhead. This layer typically operates with filters of size 2 × 2. It also helps control the over-fitting problem.
- (c) A fully connected layer combines all of the features to obtain information about the image’s overall shape. The final layer calculates a probability score over the number of classes for which the network has been trained.
- B. Pre-Trained Model
- (1) Very Deep Convolution Network (VGG):
- (2) Inception-v3 (GoogLeNet):
- (3) Deep Residual Network Architectures (ResNet):
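The convolution and max-pooling operations described in Section 4.1.2 can be illustrated mechanically. The following pure-Python sketch (a toy illustration of the layer arithmetic with a hand-picked edge-detecting kernel, not a trainable network) applies one "valid" convolution followed by 2 × 2 max pooling:

```python
def conv2d(image, kernel):
    """'Valid' 2D convolution (cross-correlation, as in CNN frameworks):
    slide the kernel over the image, taking dot products -> one feature map."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def maxpool2x2(fmap):
    """2x2 max pooling with stride 2: halves each spatial dimension."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# A 5x5 "image" with a bright vertical stripe, and a kernel that responds
# to horizontal intensity changes (a hypothetical edge detector).
img = [[0, 0, 9, 0, 0] for _ in range(5)]
kernel = [[1, -1], [1, -1]]
fmap = conv2d(img, kernel)    # 4x4 feature map; peaks at the stripe's edges
pooled = maxpool2x2(fmap)     # 2x2 after pooling: representation shrinks
```

Each feature-map entry is the dot product of the kernel with one image patch, so the map "activates" (here, value 18) exactly where the stripe's edge appears; pooling then keeps the strongest response in each 2 × 2 neighborhood while shrinking the representation, as described for the max-pooling layer above.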
4.2. Iris Spoofing Attacks (RQ2)
4.2.1. Print Attacks
4.2.2. Contact Lenses Attacks
4.2.3. Synthetic Iris Attacks
4.2.4. Video Attacks
4.2.5. Cadavers Eyes Attacks
4.3. Iris Datasets (RQ3)
4.3.1. Standard Benchmark Datasets
- A. Controlled Environment Datasets
- The conditions during image capture include:
- The spectrum at which the iris is captured (Near-InfraRed (NIR) or visible spectrum)
- The size of the iris in the image
- The influence of eyeglasses and the specular reflections they cause
- Environmental factors such as:
- Light
- Illumination
- Sound (in the case of iris video datasets)
- B. Uncontrolled Environment/Visible Light Datasets
- C. Smartphone/Mobile Datasets
- D. Multi/Cross-Sensor Iris Datasets
- E. Iris Spoofing and Liveness Detection Datasets
- CL denotes iris images captured while wearing contact lenses
- PP denotes iris images of paper printouts
- PM denotes images of cadaver irises
- PD denotes pupil dynamics iris images
4.3.2. Common Properties/Observation/Findings of Popular Datasets
- The first thing to notice is that not all datasets include both real and fake samples. For instance, datasets such as IIITD Iris Spoofing, Post-Mortem-Iris v1.0, CASIA-Iris-Syn V4, synthetic iris texture-based, and synthetic iris model-based offer only fake samples. In contrast, datasets such as Pupil-Dynamics v1.0 and CAVE offer only authentic samples. These datasets are still helpful and, when combined with other datasets, can provide an additional source of samples.
- The second point to consider is the diversity of spoofed images in the datasets. This diversity arises due to:
- ∘ Capturing techniques, in the case of print attacks.
- ∘ Vendor-specific techniques, in the case of contact lens attacks.
- The IIITD Iris Spoofing dataset is the only benchmark that combines multiple attack means: it includes photographs of paper printouts of people wearing textured contact lenses. However, the authors report lower comparison scores against authentic eyes for these hybrid attacks than for either textured contact lenses alone or paper printouts of living eyes. Hence, it seems that this hybrid way of preparing the artifacts does not improve the effectiveness of the attack.
4.3.3. Challenges/Issues with Existing Datasets
- A. Limited Access Restrictions
- B. Scale
- C. Need of a Standard Format for Presenting a Dataset
- D. Confidentiality
- E. Attack-Specific Datasets
4.4. Performances Measures (RQ4)
- A. FAR (False Acceptance Rate)
- B. FRR (False Rejection Rate)
- C. APCER (Attack Presentation Classification Error Rate)
- D. NPCER (Normal Presentation Classification Error Rate)
- E. False Positive Rate
- F. BPCER (Bona-Fide Presentation Classification Error Rate)
- G. Accuracy
- H. Precision
- I. Recall/True Positive Rate
- J. F1-measure/F1-Score
- K. CCR (Correct Classification Rate)
- L. ACA (Average Classification Accuracy)
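Most of the measures listed above derive from the four confusion-matrix counts. A minimal sketch (with illustrative counts, treating bona-fide samples as the positive class; the exact positive-class convention varies across papers):

```python
def ild_metrics(tp, fp, tn, fn):
    """Common ILD measures from confusion-matrix counts.
    Positives = bona-fide (live) samples; negatives = presentation attacks.
    APCER: fraction of attacks wrongly accepted as live;
    BPCER: fraction of bona-fide samples wrongly rejected as attacks."""
    return {
        "APCER": fp / (fp + tn),
        "BPCER": fn / (fn + tp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),  # true positive rate
    }

def f1(precision, recall):
    """F1-score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

m = ild_metrics(tp=90, fp=5, tn=95, fn=10)
# With these counts: APCER = 5/100 = 0.05, BPCER = 10/100 = 0.10,
# accuracy = 185/200 = 0.925.
```

Under this convention, FAR corresponds to the rate of accepted attacks (APCER-like) and FRR to the rate of rejected genuine users (BPCER/NPCER-like), which is why the two groups of names often appear interchangeably in the ILD literature.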
4.5. Summary of Survey
5. Prototype/Framework for Iris Liveness Detection
6. Discussions
6.1. RQ1. What Are the Diverse Feature Extraction Techniques Available for Iris Liveness Detection?
6.2. RQ2. What Are the Different Types of Spoofing Attacks Done on Iris Liveness Detection?
6.3. RQ3. Which Are Relevant Datasets Available for Iris Liveness Detection?
- Each dataset uses different sensors to capture the iris images.
- The quality of the images varies depending on the sensors used.
- The sensor type was not declared for some datasets.
- The position of the sensor, that is, its distance from the eyes, was not revealed in the dataset documentation.
6.4. RQ4. What Are All the Different Evaluation Measures Used for Iris Liveness Detection?
7. Threats to Validity
8. Limitations of the Study
9. Conclusions
10. Future Work and Opportunities
10.1. Feature Extraction
10.2. Spoofing Attacks
10.3. Iris Attacks Specific Datasets
10.4. Limited Publicly Available Datasets
10.5. Standard Format for Presenting the Iris Datasets
Author Contributions
Funding
Conflicts of Interest
Abbreviations
N/A | Not Applicable
VIS | Visible Light
N/R | Not Reported
MS | Multi-Sensor
PP | Live + Paper Printouts
CL | Live + Textured Contact Lenses
PM | Post-Mortem (cadaver) Iris
MB | Mobile Datasets
A | IrisGuard AD100 Sensor
L2 | LG 2200 Sensor
L3 | LG Iris Access EOU3000 Sensor
L4 | LG 4000 Sensor
V | Vista Imaging VistaFA2E Sensor
BM | CMTech BMT-20 f/3.5–5.6 Zoom Lens Sensor
IS | IriTech Irishield M2120U Sensor
C | Cogent CIS 202 Sensor
PE | Live + Prosthetic Eyes
CON | Controlled Environment
SY | Live + Synthetic Irises
SS | Single Sensor
RA | Live + Replay Attack
PD | Pupil Dynamics
EM | Eye Movement Tracking
EV | Eyes Video
LY | Lytro Light Field Camera Sensor
IP | iPhone 5S Sensor
NL | Nokia Lumia 1020 Sensor
GS | Galaxy Samsung IV Sensor
DA | Dalsa (Unknown Model) Sensor
GT | Galaxy Tablet 2 Sensor
CN3 | Canon EOS Rebel T3i with EF-S 18–135mm IS Sensor
H | IrisGuard H100 Sensor
References
- Khade, S.; Thepade, S.D. Fingerprint Liveness Detection with Machine Learning Classifiers Using Feature Level Fusion of Spatial and Transform Domain Features. In Proceedings of the 5th International Conference on Computing, Communication, Control and Automation (ICCUBEA), Pune, India, 12–21 September 2019; pp. 1–6.
- Khade, S.; Thepade, S. Novel Fingerprint Liveness Detection with Fractional Energy of Cosine Transformed Fingerprint Images and Machine Learning Classifiers. In 2018 IEEE Punecon; IEEE: Piscataway, NJ, USA, 2018; pp. 1–7.
- Gupta, R.; Sehgal, P. A survey of attacks on iris biometric systems. Int. J. Biom. 2016, 8, 145.
- Islam, I.; Munim, K.M.; Islam, M.N.; Karim, M. A Proposed Secure Mobile Money Transfer System for SME in Bangladesh: An Industry 4.0 Perspective. In Proceedings of the International Conference on Sustainable Technologies for Industry 4.0 (STI), Dhaka, Bangladesh, 24–25 December 2019; pp. 1–6.
- Jeon, B.; Jeong, B.; Jee, S.; Huang, Y.; Kim, Y.; Park, G.H.; Kim, J.; Wufuer, M.; Jin, X.; Kim, S.W.; et al. A Facial Recognition Mobile App for Patient Safety and Biometric Identification: Design, Development, and Validation. JMIR mHealth uHealth 2019, 7, e11472.
- Gusain, R.; Jain, H.; Pratap, S. Enhancing bank security system using Face Recognition, Iris Scanner and Palm Vein Technology. In Proceedings of the 3rd International Conference on Internet of Things: Smart Innovation and Usages (IoT-SIU), Bhimtal, India, 23–24 February 2018; pp. 1–5.
- Hsiao, C.-S.; Fan, C.-P. EfficientNet Based Iris Biometric Recognition Methods with Pupil Positioning by U-Net. In Proceedings of the 3rd International Conference on Computer Communication and the Internet (ICCCI), Nagoya, Japan, 25–27 June 2021; pp. 1–5.
- Xu, L.; Jiao, N.T. The Design of Hotel Management System Based on Iris Recognition Research. Appl. Mech. Mater. 2014, 543–547, 4565–4568.
- Su, L.; Shimahara, T. Advanced iris recognition using fusion techniques. NEC Tech. J. 2019, 13, 74–77.
- Kaur, B.; Singh, S.; Kumar, J. Cross-sensor iris spoofing detection using orthogonal features. Comput. Electr. Eng. 2018, 73, 279–288.
- Choudhary, M.; Tiwari, V. An approach for iris contact lens detection and classification using ensemble of customized DenseNet and SVM. Fut. Gener. Comput. Syst. 2019, 101, 1259–1270.
- Kaur, J.; Jindal, N. A secure image encryption algorithm based on fractional transforms and scrambling in combination with multimodal biometric keys. Multimedia Tools Appl. 2018, 78, 11585–11606.
- Kuehlkamp, A.; Pinto, A.; Rocha, A.; Bowyer, K.W.; Czajka, A. Ensemble of Multi-View Learning Classifiers for Cross-Domain Iris Presentation Attack Detection. IEEE Trans. Inf. Forensics Secur. 2018, 14, 1419–1431.
- Chen, Y.; Zhang, W. Iris Liveness Detection: A Survey. In Proceedings of the IEEE Fourth International Conference on Multimedia Big Data (BigMM), Xi’an, China, 13–16 September 2018; pp. 1–7.
- Taylor, P.J.; Dargahi, T.; Dehghantanha, A.; Parizi, R.M.; Choo, K.-K.R. A systematic literature review of blockchain cyber security. Digit. Commun. Netw. 2020, 6, 147–156.
- Dharmadhikari, S.C.; Ingle, M.; Kulkarni, P. Empirical Studies on Machine Learning Based Text Classification Algorithms. Adv. Comput. Int. J. 2011, 2, 161–169.
- Raheem, E.A.; Ahmad, S.M.S.; Adnan, W.A.W. Insight on face liveness detection: A systematic literature review. Int. J. Electr. Comput. Eng. (IJECE) 2019, 9, 5165–5175.
- Nguyen, K.; Fookes, C.; Jillela, R.; Sridharan, S.; Ross, A. Long range iris recognition: A survey. Pattern Recognit. 2017, 72, 123–143.
- Dronky, M.R.; Khalifa, W.; Roushdy, M. A Review on Iris Liveness Detection Techniques. In Proceedings of the Ninth International Conference on Intelligent Computing and Information Systems (ICICIS), Cairo, Egypt, 8–10 December 2019; pp. 48–59.
- Rattani, A.; Derakhshani, R. Ocular biometrics in the visible spectrum: A survey. Image Vis. Comput. 2017, 59, 1–16.
- Kohli, N.; Yadav, D.; Vatsa, M.; Singh, R.; Noore, A. Detecting medley of iris spoofing attacks using DESIST. In Proceedings of the IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS), Niagara Falls, NY, USA, 6–9 September 2016; pp. 1–6.
- Fathy, W.S.-A.; Ali, H.S. Entropy with Local Binary Patterns for Efficient Iris Liveness Detection. Wirel. Pers. Commun. 2017, 102, 2331–2344.
- Armi, L.; Fekri-Ershad, S. Texture image analysis and texture classification methods—A review. arXiv 2019, arXiv:1904.06554.
- Agarwal, R.; Jalal, A.S.; Arya, K.V. A multimodal liveness detection using statistical texture features and spatial analysis. Multimed. Tools Appl. 2020, 79, 13621–13645.
- Zhang, H.; Sun, Z.; Tan, T. Contact Lens Detection Based on Weighted LBP. In Proceedings of the 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 4279–4282.
- He, Z.; Sun, Z.; Tan, T.; Wei, Z. Efficient Iris Spoof Detection via Boosted Local Binary Patterns; Springer: Berlin/Heidelberg, Germany, 2009; pp. 1080–1090.
- Geng, J.; Li, Y.; Chian, T. SIFT based iris feature extraction and matching. In Geoinformatics 2007: Geospatial Information Science; International Society for Optics and Photonics: Bellingham, WA, USA, 2007; Volume 6753, p. 67532F.
- Raja, K.B.; Raghavendra, R.; Busch, C. Binarized Statistical Features for Improved Iris and Periocular Recognition in Visible Spectrum. In Proceedings of the 2nd International Workshop on Biometrics and Forensics, Valletta, Malta, 27–28 March 2014; pp. 1–6.
- McGrath, J.; Bowyer, K.W.; Czajka, A. Open Source Presentation Attack Detection Baseline for Iris Recognition. arXiv 2018, arXiv:1809.10172.
- Raghavendra, R.; Raja, K.B.; Busch, C. Ensemble of Statistically Independent Filters for Robust Contact Lens Detection in Iris Images. In Proceedings of the Indian Conference on Computer Vision Graphics and Image Processing, Bangalore, India, 14–18 December 2014.
- Demirel, H.; Anbarjafari, G. Iris recognition system using combined histogram statistics. In Proceedings of the 23rd International Symposium on Computer and Information Sciences, Istanbul, Turkey, 27–29 October 2008; pp. 1–4.
- Yambay, D.; Czajka, A.; Ii, F. LivDet-Iris 2015—Iris Liveness Detection Competition 2015. In Proceedings of the IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), New Delhi, India, 22–24 January 2015.
- Galbally, J.; Marcel, S.; Fierrez, J. Image Quality Assessment for Fake Biometric Detection: Application to Iris, Fingerprint, and Face Recognition. IEEE Trans. Image Process. 2013, 23, 710–724.
- Vasantha, K.; Ravichander, J. Image quality assessment for fake biometric detection: Application to iris, fingerprint, and face recognition. Int. J. Recent Technol. Eng. 2019, 8, 63–67.
- El-Din, Y.S.; Moustafa, M.N.; Mahdi, H. Deep convolutional neural networks for face and iris presentation attack detection: Survey and case study. IET Biom. 2020, 9, 179–193.
- Czajka, A. Pupil Dynamics for Iris Liveness Detection. IEEE Trans. Inf. Forensics Secur. 2015, 10, 726–735.
- Nguyen, K.; Fookes, C.; Ross, A.; Sridharan, S. Iris Recognition with Off-the-Shelf CNN Features: A Deep Learning Perspective. IEEE Access 2017, 6, 18848–18855.
- System, R. Deep Learning Approach for Multimodal Biometric. Sensors 2020, 19, 5523.
- Abed, E.A.; Mohammed, R.J.; Shihab, D.T. Intelligent multimodal identification system based on local feature fusion between iris and finger vein. Indones. J. Electr. Eng. Comput. Sci. 2021, 21, 224–232.
- Tan, C.-W.; Kumar, A. Integrating ocular and iris descriptors for fake iris image detection. In Proceedings of the 2nd International Workshop on Biometrics and Forensics (IWBF), Valletta, Malta, 27–28 March 2014; pp. 1–4.
- Raja, K.B.; Raghavendra, R.; Busch, C. Presentation attack detection using Laplacian decomposed frequency response for visible spectrum and Near-Infra-Red iris systems. In Proceedings of the IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS), Arlington, VA, USA, 8–11 September 2015; pp. 1–8.
- Silva, P.; Luz, E.; Baeta, R.; Pedrini, H.; Falcao, A.X.; Menotti, D. An Approach to Iris Contact Lens Detection Based on Deep Image Representations. In Proceedings of the 28th SIBGRAPI Conference on Graphics, Patterns and Images, Salvador, Brazil, 26–29 August 2015; pp. 157–164.
- Poster, D.; Nasrabadi, N.; Riggan, B. Deep sparse feature selection and fusion for textured contact lens detection. In Proceedings of the International Conference of the Biometrics Special Interest Group (BIOSIG), Darmstadt, Germany, 26–28 September 2018; pp. 1–5.
- Sequeira, A.; Murari, J.; Cardoso, J.S. Iris Liveness Detection Methods in Mobile Applications. In Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, 5–8 January 2014; pp. 22–33.
- Menotti, D.; Chiachia, G.; Pinto, A.; Schwartz, W.; Pedrini, H.; Falcao, A.X.; Rocha, A. Deep Representations for Iris, Face, and Fingerprint Spoofing Detection. IEEE Trans. Inf. Forensics Secur. 2015, 10, 864–879.
- Gupta, P.; Behera, S.; Vatsa, M.; Singh, R. On Iris Spoofing Using Print Attack. In Proceedings of the 22nd International Conference on Pattern Recognition, Stockholm, Sweden, 24–28 August 2014; pp. 1681–1686.
- CCC. Chaos Computer Clubs Breaks Iris Recognition System of the Samsung Galaxy S8. Available online: https://www.ccc.de/en/updates/2017/iriden (accessed on 12 May 2021).
- Yes, Cops Are Now Opening iPhones with Dead People’s Fingerprints. Available online: https://www.forbes.com/sites/thomasbrewster/2018/03/22/yes-cops-are-now-opening-iphones-with-dead-peoples-fingerprints/?sh=5ebb6c25393e (accessed on 12 May 2021).
- Raghavendra, R.; Busch, C. Robust Scheme for Iris Presentation Attack Detection Using Multiscale Binarized Statistical Image Features. IEEE Trans. Inf. Forensics Secur. 2015, 10, 703–715.
- Agarwal, R.; Jalal, A.S.; Arya, K.V. Local binary hexagonal extrema pattern (LBHXEP): A new feature descriptor for fake iris detection. Vis. Comput. 2020, 37, 1357–1368.
- Hu, Y.; Sirlantzis, K.; Howells, G. Iris liveness detection using regional features. Pattern Recognit. Lett. 2016, 82, 242–250.
- Arora, S.; Bhatia, M.P.S. Presentation attack detection for iris recognition using deep learning. Int. J. Syst. Assur. Eng. Manag. 2020, 11, 232–238.
- Söllinger, D.; Trung, P.; Uhl, A. Non-reference image quality assessment and natural scene statistics to counter biometric sensor spoofing. IET Biom. 2018, 7, 314–324.
- Yan, Z.; He, L.; Zhang, M.; Sun, Z.; Tan, T. Hierarchical Multi-class Iris Classification for Liveness Detection. In Proceedings of the International Conference on Biometrics (ICB), Gold Coast, QLD, Australia, 20–23 February 2018; pp. 47–53.
- Umer, S.; Sardar, A.; Dhara, B.C.; Rout, R.K.; Pandey, H.M. Person identification using fusion of iris and periocular deep features. Neural Netw. 2019, 122, 407–419.
- Trokielewicz, M.; Czajka, A.; Maciejewicz, P. Post-mortem iris recognition with deep-learning-based image segmentation. Image Vis. Comput. 2019, 94, 103866.
- Hoffman, S.; Sharma, R.; Ross, A. Convolutional Neural Networks for Iris Presentation Attack Detection: Toward Cross-Dataset and Cross-Sensor Generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 1701–17018.
- Trokielewicz, M.; Czajka, A.; Maciejewicz, P. Presentation Attack Detection for Cadaver Iris. In Proceedings of the IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS), Redondo Beach, CA, USA, 22–25 October 2018; pp. 1–10.
- Trokielewicz, M.; Czajka, A.; Maciejewicz, P. Human iris recognition in post-mortem subjects: Study and database. In Proceedings of the IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS), Niagara Falls, NY, USA, 6–9 September 2016; pp. 1–6.
- Das, A.; Pal, U.; Ferrer, M.A.; Blumenstein, M. A framework for liveness detection for direct attacks in the visible spectrum for multimodal ocular biometrics. Pattern Recognit. Lett. 2016, 82, 232–241.
- Yadav, D.; Kohli, N.; Agarwal, A.; Vatsa, M.; Singh, R.; Noore, A. Fusion of Handcrafted and Deep Learning Features for Large-Scale Multiple Iris Presentation Attack Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 685–6857.
- Nguyen, D.T.; Pham, T.D.; Lee, Y.W.; Park, K.R. Deep Learning-Based Enhanced Presentation Attack Detection for Iris Recognition by Combining Features from Local and Global Regions Based on NIR Camera Sensor. Sensors 2018, 18, 2601.
- Kimura, G.; Lucio, D.A.B., Jr.; Menotti, D. CNN Hyperparameter Tuning Applied to Iris Liveness Detection. In Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications—Volume 5: VISAPP, Valletta, Malta, 27–29 February 2020; pp. 428–434.
- Long, M.; Zeng, Y. Detecting Iris Liveness with Batch Normalized Convolutional Neural Network. Comput. Mater. Contin. 2019, 58, 493–504.
- Boyd, A.; Fang, Z.; Czajka, A.; Bowyer, K. Iris presentation attack detection: Where are we now? Pattern Recognit. Lett. 2020, 138, 483–489.
- Kohli, N.; Yadav, D.; Vatsa, M.; Singh, R.; Noore, A. Synthetic iris presentation attack using iDCGAN. In Proceedings of the IEEE International Joint Conference on Biometrics (IJCB), Denver, CO, USA, 1–4 October 2017; pp. 674–680.
- Center for Biometrics and Security Research. Available online: http://www.cbsr.ia.ac.cn/english/IrisDatabase.asp (accessed on 12 May 2021).
- He, L.; Li, H.; Liu, F.; Liu, N.; Sun, Z.; He, Z. Multi-patch convolution neural network for iris liveness detection. In Proceedings of the IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS), Niagara Falls, NY, USA, 6–9 September 2016; pp. 1–7.
- Thavalengal, S.; Nedelcu, T.; Bigioi, P.; Corcoran, P. Iris liveness detection for next generation smartphones. IEEE Trans. Consum. Electron. 2016, 62, 95–102.
- Omelina, L.; Goga, J.; Pavlovicova, J.; Oravec, M.; Jansen, B. A survey of iris datasets. Image Vis. Comput. 2021, 108, 104109.
- Biometrics Ideal Test. Available online: http://biometrics.idealtest.org/findTotalDbByMode.do?mode=Iris (accessed on 5 May 2021).
- UBIRIS. Available online: http://iris.di.ubi.pt/index.html (accessed on 5 May 2021).
- Fang, Z.; Czajka, A.; Bowyer, K.W. Robust Iris Presentation Attack Detection Fusing 2D and 3D Information. IEEE Trans. Inf. Forensics Secur. 2020, 16, 510–520.
- Trokielewicz, M.; Czajka, A.; Maciejewicz, P. Iris Recognition After Death. IEEE Trans. Inf. Forensics Secur. 2018, 14, 1501–1514.
- Kinnison, J.; Trokielewicz, M.; Carballo, C.; Czajka, A.; Scheirer, W. Learning-Free Iris Segmentation Revisited: A First Step Toward Fast Volumetric Operation Over Video Samples. In Proceedings of the International Conference on Biometrics (ICB), Crete, Greece, 4–7 June 2019; pp. 1–8.
- Yambay, D.; Becker, B.; Kohli, N.; Yadav, D.; Czajka, A.; Bowyer, K.W.; Schuckers, S.; Singh, R.; Vatsa, M.; Noore, A.; et al. LivDet Iris 2017—Iris Liveness Detection Competition 2017. In Proceedings of the IEEE International Joint Conference on Biometrics (IJCB), Denver, CO, USA, 1–4 October 2017; pp. 733–741.
- Image Analysis and Biometrics Lab. IIITD Contact Lens Iris Database, Iris Combined Spoofing Database. 2016. Available online: http://iab-rubric.org/resources.html (accessed on 4 June 2021).
- Kohli, N.; Yadav, D.; Vatsa, M.; Singh, R. Revisiting iris recognition with color cosmetic contact lenses. In Proceedings of the International Conference on Biometrics (ICB), Madrid, Spain, 4–7 June 2013; pp. 1–7.
- Doyle, J.S.; Bowyer, K.; Flynn, P.J. Variation in accuracy of textured contact lens detection based on sensor and lens pattern. In Proceedings of the IEEE Sixth International Conference on Biometrics: Theory, Applications and Systems (BTAS), Arlington, VA, USA, 29 September–2 October 2013; pp. 1–7.
- Doyle, J.S.; Bowyer, K.W. Robust Detection of Textured Contact Lenses in Iris Recognition Using BSIF. IEEE Access 2015, 3, 1672–1683.
- Holland, C.D.; Komogortsev, O.V. Complex Eye Movement Pattern Biometrics: The Effects of Environment and Stimulus. IEEE Trans. Inf. Forensics Secur. 2013, 8, 2115–2126.
- Rathgeb, C.; Uhl, A. Attacking Iris Recognition: An Efficient Hill-Climbing Technique. In Proceedings of the 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 1217–1220.
- Czajka, A.; Fang, Z.; Bowyer, K. Iris Presentation Attack Detection Based on Photometric Stereo Features. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 7–11 January 2019; pp. 877–885. [Google Scholar] [CrossRef] [Green Version]
- Yadav, D.; Kohli, N.; Vatsa, M.; Singh, R.; Noore, A. Detecting Textured Contact Lens in Uncontrolled Environment Using DensePAD. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; pp. 2336–2344. [Google Scholar] [CrossRef]
- Galbally, J.; Ortiz-Lopez, J.; Fierrez, J.; Ortega-Garcia, J. Iris liveness detection based on quality related features. In Proceedings of the 5th IAPR International Conference on Biometrics (ICB), New Delhi, India, 29 March–1 April 2012; pp. 271–276. [Google Scholar] [CrossRef]
- Czajka, A. Database of iris printouts and its application: Development of liveness detection method for iris recognition. In Proceedings of the 18th International Conference on Methods & Models in Automation & Robotics (MMAR), Miedzyzdroje, Poland, 26–29 August 2013; pp. 28–33. [Google Scholar] [CrossRef]
- Doyle, J.S.; Flynn, P.J.; Bowyer, K.W. Automated classification of contact lens type in iris images. In Proceedings of the International Conference on Biometrics (ICB), Madrid, Spain, 4–7 June 2013; pp. 1–6. [Google Scholar] [CrossRef]
- Sun, Z.; Zhang, H.; Tan, T.; Wang, Y. Iris Image Classification Based on Hierarchical Visual Codebook. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 36, 1120–1133. [Google Scholar] [CrossRef] [PubMed]
- De Marsico, M.; Nappi, M.; Riccio, D.; Wechsler, H. Mobile Iris Challenge Evaluation (MICHE)-I, biometric iris dataset and protocols. Pattern Recognit. Lett. 2015, 57, 17–23. [Google Scholar] [CrossRef]
- MICHE I—Mobile Iris CHallenge Evaluation—Part, I. Available online: http://biplab.unisa.it/MICHE/index_miche.htm (accessed on 5 May 2021).
- Khan, F.F.; Akif, A.; Haque, M.A. Iris recognition using machine learning from smartphone captured images in visible light. In Proceedings of the IEEE International Conference on Telecommunications and Photonics (ICTP), Dhaka, Bangladesh, 26–28 December 2017; pp. 33–37. [Google Scholar] [CrossRef]
- International Conference of Information Science and Management Engineering. ISME 2013, 2, 2014.
- Iris Database. Available online: http://phoenix.inf.upol.cz/iris/ (accessed on 5 May 2021).
- Ali Jahanian’s Website. Available online: http://facultymembers.sbu.ac.ir/eshghi/index.html (accessed on 5 May 2021).
- Busch, C. Standards for biometric presentation attack detection. In Handbook of Biometric Anti-Spoofing; Springer: Cham, Switzerland, 2019; pp. 503–514. [Google Scholar]
- Biometric Performance Metrics: Select the Right Solution. Available online: https://www.bayometric.com/biometric-performance-metrics-select-right-solution/ (accessed on 9 June 2021).
- Arora, S.; Bhatia, M.P.S.; Kukreja, H. A Multimodal Biometric System for Secure User Identification Based on Deep Learning; Springer: Berlin, Germany, 2020; pp. 95–103. [Google Scholar] [CrossRef]
- Wang, K.; Kumar, A. Cross-spectral iris recognition using CNN and supervised discrete hashing. Pattern Recognit. 2018, 86, 85–98. [Google Scholar] [CrossRef]
- Pradeepa, S.; Anisha, R.; Jenkin, W.J. Classifiers in IRIS Biometrics for Personal Authentication. In Proceedings of the 2nd International Conference on Signal Processing and Communication (ICSPC), Coimbatore, India, 29–30 March 2019; pp. 352–355. [Google Scholar] [CrossRef]
- Marra, F.; Poggi, G.; Sansone, C.; Verdoliva, L. A deep learning approach for iris sensor model identification. Pattern Recognit. Lett. 2018, 113, 46–53. [Google Scholar] [CrossRef]
- Accuracy, Precision, Recall or F1? Available online: https://towardsdatascience.com/accuracy-precision-recall-or-f1-331fb37c5cb9 (accessed on 5 May 2021).
- Lin, Y.-N.; Hsieh, T.-Y.; Huang, J.-J.; Yang, C.-Y.; Shen, V.R.L.; Bui, H.H. Fast Iris localization using Haar-like features and AdaBoost algorithm. Multimed. Tools Appl. 2020, 79, 34339–34362. [Google Scholar] [CrossRef]
- Alay, N.; Al-Baity, H.H. Deep Learning Approach for Multimodal Biometric Recognition System Based on Fusion of Iris, Face, and Finger Vein Traits. Sensors 2020, 20, 5523. [Google Scholar] [CrossRef]
- Abdellatef, E.; Omran, E.M.; Soliman, R.F.; Ismail, N.A.; Elrahman, S.E.S.E.A.; Ismail, K.N.; Rihan, M.; El-Samie, F.E.A.; Eisa, A.A. Fusion of deep-learned and hand-crafted features for cancelable recognition systems. Soft Comput. 2020, 24, 15189–15208. [Google Scholar] [CrossRef]
- Wang, H.; Zheng, H. Positive Predictive Value. In Encyclopedia of Systems Biology; Springer: New York, NY, USA, 2013; pp. 1723–1724. [Google Scholar]
- Wang, H.; Zheng, H. True Positive Rate. In Encyclopedia of Systems Biology; Springer: New York, NY, USA, 2013; pp. 2302–2303. [Google Scholar]
- Garg, M.; Arora, A.; Gupta, S. An Efficient Human Identification Through Iris Recognition System. J. Signal. Process. Syst. 2021, 93, 701–708. [Google Scholar] [CrossRef]
- Nguyen, D.T.; Baek, N.R.; Pham, T.D.; Park, K.R. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor. Sensors 2018, 18, 1315. [Google Scholar] [CrossRef] [Green Version]
- Adamović, S.; Miškovic, V.; Maček, N.; Milosavljević, M.; Šarac, M.; Saračević, M.; Gnjatović, M. An efficient novel approach for iris recognition based on stylometric features and machine learning techniques. Fut. Gener. Comput. Syst. 2020, 107, 144–157. [Google Scholar] [CrossRef]
- Naqvi, R.A.; Lee, S.-W.; Loh, W.-K. Ocular-Net: Lite-Residual Encoder Decoder Network for Accurate Ocular Regions Segmentation in Various Sensor Images. In Proceedings of the IEEE International Conference on Big Data and Smart Computing (BigComp), Busan, Korea, 19–22 February 2020; pp. 121–124. [Google Scholar] [CrossRef]
- Pala, F.; Bhanu, B. Iris Liveness Detection by Relative Distance Comparisons. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 664–671. [Google Scholar] [CrossRef]
- Gragnaniello, D.; Sansone, C.; Poggi, G.; Verdoliva, L. Biometric Spoofing Detection by a Domain-Aware Convolutional Neural Network. In Proceedings of the 12th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Naples, Italy, 28 November–1 December 2016; pp. 193–198. [Google Scholar] [CrossRef]
- De Gibert, O.; Perez, N.; García-Pablos, A.; Cuadros, M. Hate Speech Dataset from a White Supremacy Forum. arXiv 2018, preprint. arXiv:1809.04444. [Google Scholar]
- Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Imag. 2018, 9, 611–629. [Google Scholar] [CrossRef] [Green Version]
Application Areas | Usage |
---|---|
Finance and Banking | Authentication systems in the banking domain [4]. |
Healthcare and welfare | Tracking patient registration and repeat treatments; supporting national or private health insurance cards [5]. |
Immigration and border control | Deployed at airports in the United States and Canada, in the Netherlands, and at Heathrow Airport in London [6]. |
Public safety | Used by law enforcement agencies to track and identify criminals [6]. |
Point of sale and ATM | Used by bank ATMs, retail merchants and restaurants [7]. |
Hospitality and tourism | Iris scanning door entry system for guest identification and access control [8]. |
RQ No | Research Questions (RQ) | Objective/Discussion |
---|---|---|
RQ1 | What are the different feature extraction techniques for iris liveness detection? | Identify the different feature extraction techniques used for iris liveness detection. |
RQ2 | What are the different types of spoofing attacks performed on iris liveness detection systems? | This question identifies the types of attacks that need to be considered when implementing an ILD system; covering all attack types increases the security of biometric systems. |
RQ3 | Which relevant datasets are available for iris liveness detection? | Identify publicly available datasets that can serve as benchmarks, allow the performance of different approaches to be compared, and give new researchers a head start. |
RQ4 | What are the different evaluation measures used for iris liveness detection? | Discuss the standards and metrics most frequently used for liveness detection. |
Database | Query | Search Result |
---|---|---|
Scopus | TITLE-ABS-KEY (“Biometric” AND (“Recognition” OR “Identification” OR “detection” OR “Liveness” OR “Classification” OR “Feature Extraction”) AND (“Iris” OR “Multimodal”) AND (“Deep Learning” OR “artificial intelligence” OR “machine learning” OR “AI” OR “network analysis”)) | 586 |
Web of Science | TOPIC: ((“Biometric” AND (“Recognition” OR “Identification” OR “detection” OR “Liveness” OR “Classification” OR “Feature Extraction”) AND (“Iris” OR “Multimodal”) AND (“Deep Learning” OR “artificial intelligence” OR “machine learning” OR “AI” OR “network analysis”)) | 178 |
ACM | [All: “biometric”] AND [[All: “recognition “] OR [All: “identification”] OR [All: “detection”] OR [All: “liveness”] OR [All: “classification”] OR [All: “feature extraction”]] AND [[All: “iris”] OR [All: “multimodal”]] AND [[All: “deep learning”] OR [All: “artificial intelligence”] OR [All: “machine learning”] OR [All: “ai”] OR [All: “network analysis”]] | 331 |
Inclusion Criteria | Exclusion Criteria |
---|---|
A context in iris liveness detection, either broad or tailored to a certain application domain. | Papers not peer-reviewed; duplicate papers. |
Aimed at software-based liveness detection approaches. | Papers written in languages other than English. |
Aimed at deep learning- and machine learning-based ILD approaches. | Full text not available. |
Quality Assessment Criteria | Score |
---|---|
Have the studies provided findings and results? | Yes = 1 No = 0 |
Has the study provided empirical proof of its findings? | Yes = 1 No = 0 |
Are the research objectives and arguments well justified in the paper? | Yes = 1 No = 0 |
Is the study well written and cited? | Yes = 1 No = 0 |
Scheme | Data |
---|---|
1 | Title |
2 | Year of the document |
3 | Conference or journal name |
4 | Keywords in the document |
5 | AI category (we created the categories: Machine Learning and Deep Learning) |
6 | Machine/deep learning algorithm used (e.g., CNN, DNN, texture analysis) |
7 | Datasets used |
8 | Type of attack identified (we created the categories: Print, Contact, Synth, Video, Cadaver). |
9 | Performance Measure used (ACE, Accuracy) |
Type | Sub-Type | Name |
---|---|---|
Error Sensitivity Measures | Difference Based | Mean Squared Error |
Error Sensitivity Measures | Difference Based | Peak Signal to Noise Ratio |
Error Sensitivity Measures | Difference Based | Signal to Noise Ratio |
Error Sensitivity Measures | Difference Based | Structural Content |
Error Sensitivity Measures | Difference Based | Maximum Difference |
Error Sensitivity Measures | Difference Based | Average Difference |
Error Sensitivity Measures | Difference Based | Normalized Absolute Error |
Error Sensitivity Measures | Difference Based | R-Averaged Maximum Difference |
Error Sensitivity Measures | Difference Based | Laplacian MSE |
Error Sensitivity Measures | Correlation Based | Normalized Cross-Correlation |
Error Sensitivity Measures | Correlation Based | Mean Angle Similarity |
Error Sensitivity Measures | Correlation Based | Mean Angle Magnitude Similarity |
Error Sensitivity Measures | Edge Based | Total Edge Difference |
Error Sensitivity Measures | Edge Based | Total Corner Difference |
Error Sensitivity Measures | Spectral Based | Spectral Magnitude Error |
Error Sensitivity Measures | Spectral Based | Spectral Phase Error |
Error Sensitivity Measures | Gradient Based | Gradient Magnitude Error |
Error Sensitivity Measures | Gradient Based | Gradient Phase Error |
Structural Similarity Measures | | Structural Similarity Index |
Information Theoretic Measures | | Visual Information Fidelity |
Information Theoretic Measures | | Reduced-Reference Entropic Difference |
Distortion Specific Measures | | JPEG Quality Index |
Distortion Specific Measures | | High-Low Frequency Index |
Training Based Measures | | Blind Image Quality Index |
Natural Scene Statistics Measures | | Naturalness Image Quality Estimator |
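The difference-based measures above are simple to compute. As an illustrative sketch only (following the standard textbook definitions, not any surveyed paper's exact implementation), MSE and PSNR for 8-bit images can be written as:

```python
import numpy as np

def mse(ref, test):
    """Mean Squared Error between a reference and a test image."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    return float(np.mean((ref - test) ** 2))

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB; infinite for identical images."""
    e = mse(ref, test)
    return float("inf") if e == 0 else float(10.0 * np.log10(max_val ** 2 / e))
```

For example, two patches that differ by 10 gray levels everywhere give MSE = 100 and PSNR ≈ 28.13 dB.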
Sr. No | Paper ID | Dataset Name | Environment Used | Sensors Used | Capturing Sensor | Types of Images | Live Images | Fake Images | Total Images |
---|---|---|---|---|---|---|---|---|---|
1 | [73] | ND Iris3D | CON | MS | LG4000, IrisGuard AD 100 | CL | 3458 | 3392 | 6850 |
2 | [74] | Warsaw-BioBase-Postmortem-Iris-v2 | CON | MS | IriShield MK2120U, Olympus TG-3 | PM | 1200 | 1200 | 2400 |
3 | [74] | Warsaw-BioBase-Postmortem-Iris-v3 | CON | MS | IriShield MK2120U, Olympus TG-3 | PM | 0 | 785 | 785 |
4 | [75] | Warsaw-BioBase-Pupil-Dynamics v3.0 | CON | MS | SD | PD | 117,117 | 0 | 117,117 |
5 | [76] | LivDet-Iris Clarkson 2017 | CON | MS | L2, DA, IP | PP, CL | 3954 | 4141 | 8095 |
6 | [77] | IIITD-WVU4 | CON | MS | C, V, IS, HP, KM | PP, CL | 2952 | 4507 | 7459 |
7 | [78] | IIITD Contact Lens Iris | CON | MS | C, V | PP, CL | NR | NR | 6570 |
8 | [77] | IIITD Iris Spoofing | CON | MS | C, V, HP | PP, CL | 0 | 4848 | 4848 |
9 | [21] | IIITD Combined Spoofing | CON | MS | C, V, HP | PP, CL, SY | 9325 | 11,368 | 20,693 |
10 | [79] | ND CLD 2013 | CON | MS | A, L4 | CL | 3400 | 1700 | 5100 |
11 | [80] | ND CLD 2015 | CON | MS | A, L4 | CL | 4800 | 2500 | 7300 |
12 | [81] | EMBD v2 | CON | MS | TX, EL, PS | EM | 1808 | 0 | 1808 |
13 | [58] | ETPAD v1 | CON | MS | EL, BM | EM, PP | 400 | 800 | 1200 |
14 | [58] | ETPAD v2 | CON | MS | EL, BM | EM, PP | 800 | 800 | 1600 |
15 | [82] | CASIA-Iris-Syn V4 | CON | N/A | N/A | SY | 0 | 10,000 | 10,000 |
16 | [44] | MobBIOfake | CON | MB | AT | PP | 800 | 800 | 1600 |
17 | [71] | CASIA-IrisV4-Thousand | CON | MS | Irisking IKEMB-100 | NR | 20,000 | NR | 20,000 |
18 | [71] | CASIA-IrisV4-Lamp | CON | SS | OKI Irispass-H | NR | 16,212 | NR | 16,212 |
19 | [71] | CASIA-IrisV4-Interval | CON | SS | Irisking IKEMB-100 | NR | 2639 | NR | 2639 |
20 | [70] | IITD-V1 | CON | SS | NR | CL | 1120 | NR | 1120 |
21 | [83] | ND WACV 2019 | CON | SS | LG4000 | CL | 1404 | 2664 | 4068 |
22 | [84] | WVU Un-MIPA | CON | SS | IrisShield BK 2121U, CMITECH EMX-30, Irishield MK2120U | CL | 9319 | 9387 | 18,706 |
23 | [32] | LivDet-Iris Clarkson 2015 Dalsa | CON | SS | DA | PP, CL | 1078 | 3177 | 4255 |
24 | [85] | ATVS-Fir | CON | SS | L3 | PP | 800 | 800 | 1600 |
25 | [86] | LivDet-Iris Warsaw 2013 | CON | SS | A | PP | 852 | 815 | 1667 |
26 | [32] | LivDet-Iris Warsaw 2015 | CON | SS | A | PP | 2854 | 4705 | 7559 |
27 | [76] | LivDet-Iris Warsaw 2017 | CON | SS | A, PWUT-1 | PP | 5168 | 6845 | 12,013 |
28 | [36] | Pupil-Dynamics v1.013 | CON | SS | PWUT-2 | PD | 204 | 0 | 204 |
29 | [87] | ND CCL 2012 | CON | SS | L4 | CL | 2800 | 1400 | 4200 |
30 | [88] | CASIA-Iris-Fake | CON | SS | H | PP, CL, PE, SY | 6000 | 4120 | 10,120 |
31 | [89] | VISSIV | VIS | MS | NL, IP | RA | 248 | 248 | 496 |
32 | [90] | MICHE-I | VIS | MS | GS, IP, GT | PP | 800 | 800 | 1600 |
33 | [72] | CAVE | VIS | MS | EV | 5880 | 0 | 5880 | |
34 | [72] | ND-CrossSensor-Iris-2013 | VIS | MS | LG2200 EOU, LG iCam 4000 | NR | 146,550 | NR | 146,550 |
35 | [91] | IIITD-WVU Iris Spoofing | VIS | MS | Cogent dual iris sensor (CIS 202), VistaFA2E, Irishield MK2120U | CL | 2250 | 4000 | 6250 |
36 | [90] | MICHE DB | VIS | MB | NR | PP | 3732 | 0 | 3732 |
37 | [92] | CASIA Iris M1 (mobile) | VIS | MB | CL | MB | 11,000 | 0 | 11,000 |
38 | [92] | CASIA BTAS | VIS | MB | MB | 4500 | 0 | 4500 | |
39 | [93] | ND-CrossSensor-Iris-2012 | VIS | NR | LG2200 EOU, LG iCam 4000 | 147,442 | NR | 147,442 | |
40 | [93] | UPOL | VIS | NR | 384 | NR | 384 | ||
41 | [94] | Eye SBU | VIS | SS | NR | 70 | NR | 70 | |
42 | [72] | UBIRIS-V2 | VIS | SS | NR | 11,102 | NR | 11,102 | |
43 | [72] | UBIRIS-V1 | VIS | SS | NR | 1877 | NR | 1877 | |
44 | [59] | Post-Mortem-Iris v1.0 | VIS | SS | NR | PM | 0 | 480 | 480 |
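Several of these datasets are strongly imbalanced between live and fake samples (some, such as CASIA-IrisV4-Thousand, contain no fake images at all), which is one reason plain accuracy can be misleading for ILD evaluation. A small helper, using (live, fake) counts copied from the table above, makes the imbalance explicit:

```python
# (live, fake) image counts taken from the dataset table above
COUNTS = {
    "ND Iris3D": (3458, 3392),
    "IIITD Combined Spoofing": (9325, 11368),
    "CASIA-Iris-Fake": (6000, 4120),
    "ATVS-Fir": (800, 800),
}

def fake_share(live, fake):
    """Fraction of all images that are spoofed/fake."""
    return fake / (live + fake)

for name, (live, fake) in COUNTS.items():
    print(f"{name}: {fake_share(live, fake):.1%} fake")
```

Datasets far from a 50/50 split call for class-aware metrics (e.g., APCER/BPCER) rather than raw accuracy.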
Dataset Name | Types of Images | Live Images | Fake Images | Total Images |
---|---|---|---|---|
ND WACV 2019 [83] | CL | 1404 | 2664 | 4068 |
ND Iris3D [73,76] | CL | 3458 | 3392 | 6850 |
Warsaw-BioBase-Postmortem-Iris-v2 [74] | PM | 1200 | 1200 | 2400 |
WVU Un-MIPA [8] | CL | 9319 | 9387 | 18,706 |
LivDet-Iris Clarkson 2015 Dalsa [32] | PP, CL | 1078 | 3177 | 4255 |
LivDet-Iris Clarkson 2017 [76] | PP, CL | 3954 | 4141 | 8095 |
IIITD-WVU4 [77] | PP, CL | 2952 | 4507 | 7459 |
IIITD Combined Spoofing [21] | PP, CL, SY | 9325 | 11,368 | 20,693 |
ND CCL 2012 [87] | CL | 2800 | 1400 | 4200 |
ND CLD 2013 [79] | CL | 3400 | 1700 | 5100 |
ND CLD 2015 [80] | CL | 4800 | 2500 | 7300 |
ATVS-Fir [85] | PP | 800 | 800 | 1600 |
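Local binary patterns (LBP) recur throughout the ILD literature summarized below (e.g., [22,51]). As a rough illustration only, and not any surveyed paper's exact pipeline, a basic 3×3 LBP code image and its normalized histogram (which would then feed a classifier such as an SVM) can be sketched as:

```python
import numpy as np

def lbp_3x3(img):
    """Basic 3x3 LBP: compare each pixel's 8 neighbours to the centre pixel."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= center).astype(np.int32) << bit
    return codes

def lbp_histogram(img):
    """256-bin normalized LBP histogram, usable as a texture feature vector."""
    hist, _ = np.histogram(lbp_3x3(img), bins=256, range=(0, 256))
    return hist / hist.sum()
```

The surveyed papers use many LBP variants (MLBP, W-LBP, LBP with variance); this sketch shows only the basic operator they all build on.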
Paper ID | Authors/Year | Feature Extraction | Attacks Identified | Datasets | Classifiers | Performances |
---|---|---|---|---|---|---|
[98] | Arora et al., 2021 | VGGNet, LeNet, ConvNet | NR | IIITD Iris Dataset | Softmax | FAR, Accuracy. |
[108] | Garg et al., 2021 | 2DPCA, GA, SIFT | NR | CASIA-Iris-Interval | BPNN | Accuracy = 96.40%; FAR, FRR, F-measure, Recall, Precision, MCC |
[109] | Nguyen et al., 2020 | MLBP + CNN | Print, Contact | Warsaw 2017, ND 2015 | SVM | APCER |
[110] | Adamović et al., 2020 | Stylometric features | NR | IITD and MMU | Random Forest, DT, NB, SVM | Accuracy, Precision, Recall, F-score, AUC |
[103] | Lin et al., 2020 | Haar features | NR | CASIA1, 2 and MMU1, 2 | AdaBoost | Accuracy = 95.3% |
[24] | Agarwal et al., 2020 | Texture features, GLCM | NR | ATVS (iris), LivDet 2011 (finger), IIIT-D CLI (iris) | SVM | ACA = 96.3% |
[50] | Agarwal et al., 2020 | Local binary hexagonal extrema pattern | Contact | IIIT-D CLI, ATVS-FIr | SVM | AER = 1.8% |
[10] | B. Kaur et al., 2019 | Orthogonal rotation-invariant feature set comprising ZMs and PHTs | Print + scan, Print + capture, Patterned contact lenses | IIITD-CLI, IIS, Clarkson LivDet-Iris 2015, Warsaw LivDet-Iris 2015 | KNN | Accuracy = 98.49% (varies across datasets) |
[22] | Fathy and Ali, 2018 | Wavelet packets (WPs), local binary pattern (LBP), entropy | Print, Synthetic | ATVS-Fir, CASIA-Iris-Syn | SVM | ACA = 99.92%; Recall, Precision, F1 |
[53] | Söllinger et al., 2018 | Non-reference image quality measures (IQM), natural scene statistics (NSS) | NR | SDUMLA-HMT | KNN, SVM | ACER (IQM features): kNN = 7.09%, SVM = 2.22%; ACER (NSS features): kNN = 0.88%, SVM = 0.06% |
[69] | Thavalengal et al., 2016 | Pupil localization with distance metrics | NR | Real-time datasets | Binary tree classifier | ACER = 0% |
[60] | Das et al., 2016 | Image quality features | Contact lens | Real-time dataset | Euclidean distance classifier | ACA = 95% |
[51] | Hu et al., 2016 | LBP, histogram, SID | Contact lenses | Clarkson, Warsaw, Notre Dame, MobBIOfake | SVM | ER: Clarkson = 7.87%, Warsaw = 6.15%, ND = 0.08%, MobBIOfake = 1.50% |
[21] | Naman Kohli et al., 2016 | Multi-order dense Zernike moments, LBP with variance | Print + scan, Print + capture, Synthetic, Textured contact lens, Soft contact lens | IIIT-Delhi CLI, IIITD IIS, IIT Delhi Iris, Synthetic DB, Multi-sensor iris DB | ANN classifier | Mean classification accuracy = 82.20%, std. dev = 1.29 |
[41] | Kiran B. Raja et al., n.d. | Laplacian pyramids, STFT | Video | Real-time Presentation Attack Video Iris Database (PAVID), LivDet-Iris 2013 | SVM | ACER = 0.64% |
[44] | Sequeira et al., 2014 | High-frequency power, local contrast, global contrast, frequency distribution rates, statistical texture analysis | Print, Contact lens | MobBIOfake, Clarkson, Biosec | DA, KNN, SVM | Best average classification error: MobBIOfake (SVM) = 12.50%; Clarkson (SVM) = 5.69%; Biosec (KNN) = 0.37% |
[33] | Galbally et al., 2014 | Image quality measures | NR | ATVS, CASIA-IrisV1 (real images), WVU synthetic iris (spoofed images) | LDA, QDA classifiers | ER = 0.3% |
[57] | Mateusz Trokielewicz et al., 2020 | Self-learned | Cadaver | Warsaw-BioBase-Postmortem-Iris-v1.1, -v2, -v3 | DCNN | Accuracy, EER = 1% |
[55] | Umer et al., 2020 | Self-learned | NR | MMU1, UPOL, CASIA-Iris-Distance, and UBIRIS.v2 | VGG16, ResNet50, Inception-v3 CNN | CCR = 99.64% |
[52] | Arora and Bhatia, 2020 | Self-learned | Print, Contact | IIITD-WVU dataset of LivDet-Iris 2017 | DCNN | ACER = 26.19% |
[105] | Abdellatef et al., 2020 | LBPs, ICA (with tuned mini-batch size and learning rate) | NR | CASIA-IrisV3 | CNN | Accuracy, Specificity, Precision, Recall, F-score |
[104] | Alay and Al-Baity, 2020 | CNN | NR | SDUMLA-HMT, IIT Delhi, FERET | CNN | Accuracy = 99.35% |
[63] | Kimura et al., 2020 | CNN | Print, Contact | Clarkson, Warsaw, IIITD-WVU, Notre Dame | APCER = 4.18%, BPCER = 0% |
[111] | Naqvi et al., 2020 | CNN model with a lite-residual encoder-decoder network | NR | NICE-II dataset, SBVPI | CNN | Average segmentation error = 0.0061 |
[11] | Choudhary et al., 2019 | DenseNet | Contact lens | ND Contact Lens 2013 Database, IIIT-Delhi (IIITD) Contact Lens | SVM, DenseNet | Accuracy = 99.10% |
[13] | Kuehlkamp et al., 2019 | Statistical features (BSIF), CNN | Print, Contact | Clarkson, IIITD + WVU, Notre Dame, Warsaw | SVM, CNN | HTER: Clarkson = 9.45%, IIITD + WVU = 14.92%, Notre Dame = 3.28%, Warsaw = 0.68% |
[64] | Long and Zeng, 2019 | BNCNN | Synthetic, Contact | CASIA-Iris-Lamp, CASIA-Iris-Syn, ND Contact | BNCNN | Correct recognition rate = 100% |
[61] | Yadav et al., 2018 | LBP, W-LBP, DESIST, AlexNet | Contact lens | MUIPAD database | SVM | Total error = 1.01%, APCER = 18.58% |
[57] | Hoffman et al., 2018 | CNN | Print, Contact, Plastic | LivDet-Iris Warsaw 2015, CASIA-Iris-Fake, BERC-Iris-Fake | CNN | TDR: LivDet-Iris Warsaw 2015 = 95.11%; printed PAs of CASIA-Iris-Fake = 100%; plastic CASIA = 43.75%; contact PAs of CASIA = 9.30% |
[64] | D. T. Nguyen et al., 2018 | Local and global regions of the iris image used for feature extraction with CNN | Print, Contact | LivDet-Iris 2017 Warsaw, Notre Dame Contact Lens Detection (NDCLD 2015) | SVM | APCER, BPCER, ACER: Warsaw 2017 = 0.016%, NDCLD 2015 = 0.292% |
[59] | Mateusz Trokielewicz et al., 2018 | VGG-16 | Cadaver | Real-time dataset | CNN | CCR = 99% |
[54] | Yan et al., 2018 | Hierarchical multi-class iris classification, GoogLeNet | Print, Contact, Synthetic | ND Contact, CASIA-Iris-Interval, CASIA-Iris-Syn, and LivDet-Iris-2017-Warsaw | CNN | CCR = 100%, FAR = 0%, FRR = 0% |
[43] | Poster et al., 2017 | Eight-layer CNN and multi-layer perceptrons, VGG-based network | Contact lens | Clarkson LivDet 2013, Notre Dame 1 and 2, Cogent and Vista IIITD Contact Lens | CNN | ACER: Clarkson = 3.25%, Cogent = 1.57%, Vista IIITD = 0.22%, Notre Dame 1 = 0.1%, Notre Dame 2 = 0.0% |
[112] | Pala & Bhanu, 2017 | CNN | Print, Contact | Iris-2013-Warsaw, IIIT Cogent and Vista | CNN | ACE = 0.0% |
[113] | Gragnaniello et al., 2017 | CNN, local descriptors and bag-of-words | Print, Contact | Cogent, Vista, Notre Dame | CNN | HTER = 1.03%, Accuracy = 99.05% |
[68] | He et al., 2016 | MCNN | Print, Contact, Synthetic, Plastic | ND Contact, CASIA-Iris-Interval, CASIA-Iris-Syn, LivDet-Iris-2013-Warsaw, CASIA-Iris-Fake | MCNN | CCR = 100%, FAR = 0%, FRR = 0% |
[45] | Menotti et al., 2015 | Texture analysis, deep learning (neural networks) | NR | Biosec, Warsaw, MobBIOfake | SVM | ER = 0.9% |
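Many rows above report APCER, BPCER, and ACER, the presentation attack detection metrics standardized in ISO/IEC 30107-3: APCER is the fraction of attack presentations accepted as bona fide, BPCER the fraction of bona fide presentations rejected, and ACER their average. A minimal sketch (illustrative convention, not any surveyed paper's code) of computing them from ground truth and classifier decisions:

```python
def pad_metrics(y_true, y_pred):
    """Return (APCER, BPCER, ACER).

    Convention: 1 = bona fide (live), 0 = presentation attack,
    for both the true labels and the classifier's decisions.
    """
    attacks = [p for t, p in zip(y_true, y_pred) if t == 0]
    bona_fide = [p for t, p in zip(y_true, y_pred) if t == 1]
    apcer = sum(1 for p in attacks if p == 1) / len(attacks)      # attacks accepted as live
    bpcer = sum(1 for p in bona_fide if p == 0) / len(bona_fide)  # live samples rejected
    return apcer, bpcer, (apcer + bpcer) / 2                      # ACER = mean of the two
```

For example, four attacks of which one is accepted and four live samples of which one is rejected give APCER = BPCER = ACER = 0.25.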
RQ. No | Area | Popular Techniques | Ref. | Merit | Demerits | Research Gaps |
---|---|---|---|---|---|---|
RQ1 | Feature Extraction Techniques | | [11,24,43,50,51,56,59,60,61,63,76,98,104,105,111] | | | |
RQ2 | Iris Spoofing Attacks | | [10,13,21,22,24,44,50,51,52,53,57,61] | | | |
RQ3 | Iris Datasets | | [65,70,71,72,73,74,75,76,77,79,80,83,84,85,87,91,92] | | | |
RQ4 | Performance Measures | | [21,22,24,54,55,60,64,68] | | | |
Khade, S.; Ahirrao, S.; Phansalkar, S.; Kotecha, K.; Gite, S.; Thepade, S.D. Iris Liveness Detection for Biometric Authentication: A Systematic Literature Review and Future Directions. Inventions 2021, 6, 65. https://doi.org/10.3390/inventions6040065