Synthetic Iris Images: A Comparative Analysis between Cartesian and Polar Representation
Abstract
1. Introduction
2. State of the Art in Synthetic Iris Data Generation
3. Training Iris Image Databases
4. Generating Synthetic Iris Images
4.1. Our Approach
4.2. Synthetic Iris Images in Cartesian Representation
4.3. Bypassing Preprocessing Steps: Synthetic Iris Images in Polar Representation
4.4. Visual Similarity
5. Quality Assessment of Synthetic Data
5.1. FID Assessment for Authentic vs. Synthetic Image Quality
- Authentic vs. authentic—to establish a baseline for what “true” looks like and to obtain a reference threshold for the subsequent metric comparisons, the FID within the set of authentic images was computed first. Two sets of 2000 images were drawn at random, ensuring that no image appeared in both of the compared sets. The resulting FID score was 11.82.
- Synthetic vs. synthetic—the next step of the evaluation was to measure the differences within sets consisting only of generated images. For each checkpoint, the set of generated images was split in two and the halves were compared with each other. For the snapshot from the beginning of training (200 kimg), the FID was 15.79; thereafter, it remained at about 19 until the last snapshot (23,200 kimg).
- Authentic vs. synthetic—having established how the FID behaves for subcollections from the same source (authentic or synthetic), the generated images were compared with the original ones: the distance between the generated and the authentic images was computed for each of the 25 snapshots. (A minimal sketch of the FID computation itself is given after this list.)
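As a concrete illustration, the following is a minimal sketch of an FID computation between two image sets, assuming Inception-v3 pool features (one row per image) have already been extracted for each set; the function name and the feature-extraction step are illustrative, not the authors' exact tooling:

```python
import numpy as np
from scipy import linalg

def fid_score(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Frechet inception distance between two sets of Inception-v3
    feature vectors (shape: n_images x feature_dim)."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Matrix square root of the covariance product; numerical error can
    # introduce a tiny imaginary component, which is discarded.
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_a - mu_b
    # FID = ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^(1/2))
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

A call such as `fid_score(feats_real_1, feats_real_2)` on features from the two disjoint sets of 2000 authentic images corresponds to the authentic vs. authentic baseline reported above.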
5.2. Image Sharpness
- S—phase strength, a parameter that controls how strongly the PST emphasizes structures in the phase domain of the image. Too small a value of S leads to low sensitivity to certain image features, while too large a value makes the transform overly sensitive to fine detail, which can result in a noisy output;
- W—warp strength, which controls the degree of curvature applied during the transformation to the phase domain, thus introducing flexibility into the transform. Too small a value leaves the PST poorly adapted to the structures and features of the image, causing instability in the transformation results; too large a value distorts the image so heavily that its structures and details no longer reflect their true appearance, resulting in a loss of information and making reliable interpretation of the results impossible. (A sketch showing where S and W enter the PST kernel follows this list.)
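To make the roles of the two parameters concrete, below is a simplified numpy sketch of a basic PST phase kernel. It is a stand-in written for illustration, not the PhyCV implementation cited in this work; the low-pass filter and the kernel normalization are simplifying assumptions:

```python
import numpy as np

def pst_phase(img: np.ndarray, S: float, W: float,
              sigma_lpf: float = 0.2) -> np.ndarray:
    """Phase output of a basic Phase Stretch Transform (PST).
    S (phase strength) scales the whole phase kernel, so larger S
    emphasizes structure more strongly; W (warp strength) bends the
    radial frequency axis inside the kernel."""
    h, w = img.shape
    u = np.fft.fftfreq(h)[:, None]   # vertical frequency axis
    v = np.fft.fftfreq(w)[None, :]   # horizontal frequency axis
    r = np.sqrt(u ** 2 + v ** 2)     # radial frequency

    # PST kernel: W warps the radial axis, S sets the overall strength.
    kernel = W * r * np.arctan(W * r) - 0.5 * np.log1p((W * r) ** 2)
    kernel = S * kernel / kernel.max()

    # Gentle Gaussian low-pass smoothing applied in the frequency domain.
    lpf = np.exp(-0.5 * (r / sigma_lpf) ** 2)

    spectrum = np.fft.fft2(img) * lpf * np.exp(-1j * kernel)
    return np.angle(np.fft.ifft2(spectrum))
```

Too small an S makes the kernel nearly flat (little phase contrast), while too large an S or W exaggerates high-frequency content, matching the noise and distortion effects described above.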
5.3. Texture Density (Richness)
5.4. Discussion
6. Biometric Performance Evaluation on Synthetic Iris Images
6.1. Proposed Methodology
- Cartesian representation: The first experiment utilized synthetic data generated in VGA format. The eye images underwent the full processing chain: iris segmentation on the VGA images using a dedicated iris segmentation model, transformation to the polar domain (sketched after this list), and feature extraction using OSIRIS.
- Polar representation: The second experiment involved synthetic iris images generated in the polar domain. The generated images underwent semantic segmentation using a custom model and were subsequently encoded using the OSIRIS feature extractor.
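For the Cartesian pipeline, the transformation to the polar domain can be pictured as a Daugman-style rubber-sheet unwrapping. The sketch below assumes circular pupil and iris boundaries and nearest-pixel sampling; OSIRIS performs its own normalization internally, and the 64 × 512 output size is only an example:

```python
import numpy as np

def unwrap_iris(img: np.ndarray,
                pupil_xy: tuple[float, float], pupil_r: float,
                iris_xy: tuple[float, float], iris_r: float,
                out_h: int = 64, out_w: int = 512) -> np.ndarray:
    """Daugman-style rubber-sheet normalization: sample the annulus
    between the pupil and iris boundaries onto a fixed polar rectangle
    (rows = radial position, columns = angle)."""
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radii = np.linspace(0.0, 1.0, out_h)
    polar = np.zeros((out_h, out_w), dtype=img.dtype)
    for j, th in enumerate(thetas):
        # Boundary points along this angle on the pupil and iris circles.
        xp = pupil_xy[0] + pupil_r * np.cos(th)
        yp = pupil_xy[1] + pupil_r * np.sin(th)
        xi = iris_xy[0] + iris_r * np.cos(th)
        yi = iris_xy[1] + iris_r * np.sin(th)
        # Linear interpolation between the two boundaries, nearest pixel.
        xs = np.clip(((1 - radii) * xp + radii * xi).round().astype(int),
                     0, img.shape[1] - 1)
        ys = np.clip(((1 - radii) * yp + radii * yi).round().astype(int),
                     0, img.shape[0] - 1)
        polar[:, j] = img[ys, xs]
    return polar
```

Generating synthetic images directly in the polar domain, as in the second experiment, skips exactly this step.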
- Within-class real image matching (authentic genuine pairs),
- Between-class real image matching from different irises (authentic impostors),
- Matching between authentic and synthetic data (authentic to synthetic samples),
- Matching between synthetic data (synthetic to synthetic samples). (A sketch of the iris-code comparison score underlying all four protocols follows this list.)
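All four protocols reduce to comparing binary iris codes. A minimal sketch of the classic XOR-based fractional Hamming distance (after Daugman) is shown below; OSIRIS computes its comparison scores in this spirit, but this is not its exact code, and the rotation-compensation helper is illustrative:

```python
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray,
                     mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Fractional Hamming distance between two boolean iris codes:
    the fraction of disagreeing bits among bits valid in both masks."""
    valid = mask_a & mask_b
    n_valid = valid.sum()
    if n_valid == 0:
        return 1.0  # no comparable bits; treat as maximally distant
    disagree = (code_a ^ code_b) & valid
    return float(disagree.sum() / n_valid)

def best_score(code_a, code_b, mask_a, mask_b, max_shift: int = 8) -> float:
    """Compensate for eye rotation by circularly shifting one code along
    the angular (column) axis and keeping the lowest distance."""
    return min(hamming_distance(np.roll(code_a, s, axis=1), code_b,
                                np.roll(mask_a, s, axis=1), mask_b)
               for s in range(-max_shift, max_shift + 1))
```

Genuine pairs (same iris) should then yield low distances and impostor pairs distances near 0.5, which is the behavior the FAR/FRR evaluation in Section 6.2 quantifies.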
6.2. Results for the Biometric Evaluation
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
GAN | Generative adversarial network
XOR | Exclusive OR
FID | Fréchet inception distance
FAR | False accept rate
FRR | False reject rate
References
- Makrushin, A.; Uhl, A.; Dittmann, J. A Survey on Synthetic Biometrics: Fingerprint, Face, Iris and Vascular Patterns. IEEE Access 2023, 11, 33887–33899.
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Advances in Neural Information Processing Systems; Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., Weinberger, K., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2014; Volume 27.
- Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; Hochreiter, S. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. In Advances in Neural Information Processing Systems; Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2017; Volume 30.
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. arXiv 2015.
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
- Kynkäänniemi, T.; Karras, T.; Aittala, M.; Aila, T.; Lehtinen, J. The Role of ImageNet Classes in Fréchet Inception Distance. arXiv 2023.
- Daugman, J. How iris recognition works. IEEE Trans. Circuits Syst. Video Technol. 2004, 14, 21–30.
- Karras, T.; Aittala, M.; Laine, S.; Härkönen, E.; Hellsten, J.; Lehtinen, J.; Aila, T. Alias-Free Generative Adversarial Networks. arXiv 2021.
- Othman, N.; Dorizzi, B.; Garcia-Salicetti, S. OSIRIS: An open source iris recognition software. Pattern Recognit. Lett. 2016, 82, 124–131.
- IRIS: Iris Recognition Inference System of the Worldcoin Project. 2023. Available online: https://github.com/worldcoin/open-iris (accessed on 21 January 2024).
- Joshi, I.; Grimmer, M.; Rathgeb, C.; Busch, C.; Bremond, F.; Dantcheva, A. Synthetic Data in Human Analysis: A Survey. arXiv 2022.
- Kohli, N.; Yadav, D.; Vatsa, M.; Singh, R.; Noore, A. Synthetic Iris Presentation Attack using iDCGAN. arXiv 2017.
- Minaee, S.; Abdolrashidi, A. Iris-GAN: Learning to Generate Realistic Iris Images Using Convolutional GAN. arXiv 2018.
- Lee, M.B.; Kim, Y.H.; Park, K.R. Conditional Generative Adversarial Network-Based Data Augmentation for Enhancement of Iris Recognition Accuracy. IEEE Access 2019, 7, 122134–122152.
- Yadav, S.; Chen, C.; Ross, A. Synthesizing Iris Images Using RaSGAN With Application in Presentation Attack Detection. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; pp. 2422–2430.
- Yadav, S.; Ross, A. CIT-GAN: Cyclic Image Translation Generative Adversarial Network With Application in Iris Presentation Attack Detection. arXiv 2020.
- Tinsley, P.; Czajka, A.; Flynn, P.J. Haven’t I Seen You Before? Assessing Identity Leakage in Synthetic Irises. In Proceedings of the 2022 IEEE International Joint Conference on Biometrics (IJCB), Abu Dhabi, United Arab Emirates, 10–13 October 2022; pp. 1–9.
- Yadav, S.; Ross, A. iWarpGAN: Disentangling Identity and Style to Generate Synthetic Iris Images. arXiv 2023.
- Khan, S.K.; Tinsley, P.; Mitcheff, M.; Flynn, P.; Bowyer, K.W.; Czajka, A. EyePreserve: Identity-Preserving Iris Synthesis. arXiv 2023.
- Kakani, V.; Jin, C.B.; Kim, H. Segmentation-based ID preserving iris synthesis using generative adversarial networks. Multimed. Tools Appl. 2024, 83, 27589–27617.
- CASIA Iris Image Database V3.0. Chinese Academy of Sciences. Available online: http://www.cbsr.ia.ac.cn/english/IrisDatabase.asp (accessed on 21 July 2021).
- Phillips, P.J.; Scruggs, W.T.; O’Toole, A.J.; Flynn, P.J.; Bowyer, K.W.; Schott, C.L.; Sharpe, M. FRVT 2006 and ICE 2006 Large-Scale Experimental Results. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 831–846.
- Trokielewicz, M.; Bartuzi, E.; Michowska, K.; Andrzejewska, A.; Selegrat, M. Exploring the feasibility of iris recognition for visible spectrum iris images obtained using smartphone camera. In Proceedings of the Symposium on Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments (WILGA), Wilga, Poland, 25–31 May 2015.
- Monro, D.; Rakshit, S.; Zhang, D. UK Iris Image Database. Available online: http://www.cbsr.ia.ac.cn/china/Iris%20Databases%20CH.asp (accessed on 21 July 2021).
- Fierrez, J.; Ortega-Garcia, J.; Torre Toledano, D.; Gonzalez-Rodriguez, J. Biosec baseline corpus: A multimodal biometric database. Pattern Recognit. 2007, 40, 1389–1392.
- Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587.
- ISO/IEC 39794-6:2021; Information technology – Extensible biometric data interchange formats – Part 6: Iris image data. International Organization for Standardization: Geneva, Switzerland, 2021.
- Melekhov, I.; Kannala, J.; Rahtu, E. Siamese network features for image matching. In Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016; pp. 378–383.
- Tan, M.; Le, Q. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; Volume 97, pp. 6105–6114.
- Suthar, M.; Asghari, H.; Jalali, B. Feature Enhancement in Visually Impaired Images. IEEE Access 2018, 6, 1407–1415.
- Zhou, Y.; MacPhee, C.; Suthar, M.; Jalali, B. PhyCV: The First Physics-inspired Computer Vision Library. arXiv 2023.
- Ma, L.; Tan, T.; Wang, Y.; Zhang, D. Personal identification based on iris texture analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1519–1533.