A Future Picture: A Review of Current Generative Adversarial Neural Networks in Vitreoretinal Pathologies and Their Future Potentials
Abstract
1. Introduction
2. Fundamental Concepts on Generative Adversarial Networks
2.1. Basic Terminology of GANs and Neural Networks
2.2. How Do GANs Work? Explanation with the Counterfeiter’s Analogy
3. Generative Adversarial Network Model Training
3.1. Discriminator Training
3.2. Generator Training
3.3. Alternate Training
3.4. Categories of Input
4. Evaluating GANs
4.1. Evaluative Measures
4.2. Qualitative vs. Quantitative Methods (Pixel-Wise Loss)
4.3. Other Objective Methods
5. Loss Function
5.1. Minimax Loss
5.2. Modified Minimax Loss and Wasserstein Loss
5.3. Other Loss Functions
6. Overview of GAN Applications in Ophthalmology
6.1. Image Quality Improvement
6.2. Inpainting
6.3. Conditional GANs
6.4. Multimodal GANs
6.5. Translational GANs
7. Current Generative Adversarial Network Applications with Different Imaging and Functional Testing Modalities in Ophthalmology and Retina
7.1. Fundus Autofluorescence
7.2. Fluorescein Angiography
7.3. Indocyanine Green Angiography
7.4. Optical Coherence Tomography
7.5. Optical Coherence Tomography-Angiography
7.6. Electroretinogram
7.7. Visual Fields
8. Future Perspectives
9. Limitations
10. Tackling the Challenges Associated with Generative Adversarial Networks
10.1. Vanishing Gradients
10.2. Failure to Converge
10.3. Mode Collapse
10.4. Class Imbalance
10.5. Unintended Bias
10.6. Hyperparameter Sensitivity
10.7. Data Dependence
10.8. Summary
11. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Bohr, A.; Memarzadeh, K. The rise of artificial intelligence in healthcare applications. In Artificial Intelligence in Healthcare; Elsevier: Amsterdam, The Netherlands, 2020; pp. 25–60. [Google Scholar]
- Ahuja, A.S. The impact of artificial intelligence in medicine on the future role of the physician. PeerJ 2019, 7, e7702. [Google Scholar] [CrossRef] [PubMed]
- Aloysius, N.; Geetha, M. A review on deep convolutional neural networks. In Proceedings of the 2017 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 6–8 April 2017; pp. 0588–0592. [Google Scholar]
- Li, Z.; Liu, F.; Yang, W.; Peng, S.; Zhou, J. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 6999–7019. [Google Scholar] [CrossRef] [PubMed]
- Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; Chen, T. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377. [Google Scholar] [CrossRef]
- Zou, J.; Han, Y.; So, S.S. Overview of artificial neural networks. Artif. Neural Netw. Methods Appl. 2009, 458, 14–22. [Google Scholar]
- Yegnanarayana, B. Artificial Neural Networks; PHI Learning Pvt. Ltd.: Delhi, India, 2009. [Google Scholar]
- Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
- Myles, A.J.; Feudale, R.N.; Liu, Y.; Woody, N.A.; Brown, S.D. An introduction to decision tree modeling. J. Chemom. A J. Chemom. Soc. 2004, 18, 275–285. [Google Scholar] [CrossRef]
- Çelik, E.; İnan, E. Artificial Intelligence in Ophthalmology Clinical Practices. Izmir Democr. Univ. Health Sci. J. 2023, 6, 445–459. [Google Scholar] [CrossRef]
- Chaurasia, A.K.; Greatbatch, C.J.; Hewitt, A.W. Diagnostic accuracy of artificial intelligence in glaucoma screening and clinical practice. J. Glaucoma 2022, 31, 285–299. [Google Scholar] [CrossRef]
- Ting, D.S.W.; Peng, L.; Varadarajan, A.V.; Keane, P.A.; Burlina, P.M.; Chiang, M.F.; Schmetterer, L.; Pasquale, L.R.; Bressler, N.M.; Webster, D.R.; et al. Deep learning in ophthalmology: The technical and clinical considerations. Prog. Retin. Eye Res. 2019, 72, 100759. [Google Scholar] [CrossRef]
- Padhy, S.K.; Takkar, B.; Chawla, R.; Kumar, A. Artificial intelligence in diabetic retinopathy: A natural step to the future. Indian J. Ophthalmol. 2019, 67, 1004–1009. [Google Scholar]
- Cleland, C.R.; Rwiza, J.; Evans, J.R.; Gordon, I.; MacLeod, D.; Burton, M.J.; Bascaran, C. Artificial intelligence for diabetic retinopathy in low-income and middle-income countries: A scoping review. BMJ Open Diabetes Res. Care 2023, 11, e003424. [Google Scholar] [CrossRef]
- Rajalakshmi, R. The impact of artificial intelligence in screening for diabetic retinopathy in India. Eye 2020, 34, 420–421. [Google Scholar] [CrossRef]
- Chen, J.S.; Coyner, A.S.; Ostmo, S.; Sonmez, K.; Bajimaya, S.; Pradhan, E.; Valikodath, N.; Cole, E.D.; Al-Khaled, T.; Chan, R.V.P.; et al. Deep Learning for the Diagnosis of Stage in Retinopathy of Prematurity. Ophthalmol. Retin. 2021, 5, 1027–1035. [Google Scholar] [CrossRef] [PubMed]
- Leng, X.; Shi, R.; Wu, Y.; Zhu, S.; Cai, X.; Lu, X.; Liu, R. Deep learning for detection of age-related macular degeneration: A systematic review and meta-analysis of diagnostic test accuracy studies. PLoS ONE 2023, 18, e0284060. [Google Scholar] [CrossRef] [PubMed]
- Nagasato, D.; Tabuchi, H.; Ohsugi, H.; Masumoto, H.; Enno, H.; Ishitobi, N.; Sonobe, T.; Kameoka, M.; Niki, M.; Hayashi, K.; et al. Deep Neural Network-Based Method for Detecting Central Retinal Vein Occlusion Using Ultrawide-Field Fundus Ophthalmoscopy. J. Ophthalmol. 2018, 2018, 1875431. [Google Scholar] [CrossRef] [PubMed]
- Ren, X.; Feng, W.; Ran, R.; Gao, Y.; Lin, Y.; Fu, X.; Tao, Y.; Wang, T.; Wang, B.; Ju, L.; et al. Artificial intelligence to distinguish retinal vein occlusion patients using color fundus photographs. Eye 2023, 37, 2026–2032. [Google Scholar] [CrossRef]
- Chen, Q.; Yu, W.H.; Lin, S.; Liu, B.S.; Wang, Y.; Wei, Q.J.; He, X.X.; Ding, F.; Yang, G.; Chen, Y.X.; et al. Artificial intelligence can assist with diagnosing retinal vein occlusion. Int. J. Ophthalmol. 2021, 14, 1895–1902. [Google Scholar] [CrossRef]
- Cai, L.; Hinkle, J.W.; Arias, D.; Gorniak, R.J.; Lakhani, P.C.; Flanders, A.E.; Kuriyan, A.E. Applications of Artificial Intelligence for the Diagnosis, Prognosis, and Treatment of Age-related Macular Degeneration. Int. Ophthalmol. Clin. 2020, 60, 147–168. [Google Scholar] [CrossRef]
- Bogunović, H.; Mares, V.; Reiter, G.S.; Schmidt-Erfurth, U. Predicting treat-and-extend outcomes and treatment intervals in neovascular age-related macular degeneration from retinal optical coherence tomography using artificial intelligence. Front. Med. 2022, 9, 958469. [Google Scholar] [CrossRef]
- Schmidt-Erfurth, U.; Waldstein, S.M.; Klimscha, S.; Sadeghipour, A.; Hu, X.; Gerendas, B.S.; Osborne, A.; Bogunovic, H. Prediction of Individual Disease Conversion in Early AMD Using Artificial Intelligence. Investig. Ophthalmol. Vis. Sci. 2018, 59, 3199. [Google Scholar] [CrossRef]
- Prahs, P.; Radeck, V.; Mayer, C.; Cvetkov, Y.; Cvetkova, N.; Helbig, H.; Märker, D. OCT-based deep learning algorithm for the evaluation of treatment indication with anti-vascular endothelial growth factor medications. Graefes Arch. Clin. Exp. Ophthalmol. 2018, 256, 91–98. [Google Scholar] [CrossRef] [PubMed]
- Zhang, C.; Cheng, J.; Li, C.; Tian, Q. Image-Specific Classification with Local and Global Discriminations. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 4479–4486. [Google Scholar] [CrossRef] [PubMed]
- Wang, T.C.; Liu, M.Y.; Zhu, J.Y.; Tao, A.; Kautz, J.; Catanzaro, B. High-resolution image synthesis and semantic manipulation with conditional gans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8798–8807. [Google Scholar]
- Li, Z.; Xia, B.; Zhang, J.; Wang, C.; Li, B. A comprehensive survey on data-efficient GANs in image generation. arXiv 2022, arXiv:2204.08329. [Google Scholar]
- Osokin, A.; Chessel, A.; Carazo Salas, R.E.; Vaggi, F. GANs for biological image synthesis. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2233–2242. [Google Scholar]
- Liu, S.; Wang, T.; Bau, D.; Zhu, J.Y.; Torralba, A. Diverse image generation via self-conditioned gans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 14286–14295. [Google Scholar]
- Siarohin, A.; Sangineto, E.; Lathuiliere, S.; Sebe, N. Deformable gans for pose-based human image generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3408–3416. [Google Scholar]
- Harshvardhan, G.; Gourisaria, M.K.; Pandey, M.; Rautaray, S.S. A comprehensive survey and analysis of generative models in machine learning. Comput. Sci. Rev. 2020, 38, 100285. [Google Scholar]
- Adams, L.C.; Busch, F.; Truhn, D.; Makowski, M.R.; Aerts, H.J.W.L.; Bressem, K.K. What Does DALL-E 2 Know About Radiology? J. Med. Internet Res. 2023, 25, e43110. [Google Scholar] [CrossRef]
- You, A.; Kim, J.K.; Ryu, I.H.; Yoo, T.K. Application of generative adversarial networks (GAN) for ophthalmology image domains: A survey. Eye Vis. 2022, 9, 6. [Google Scholar] [CrossRef]
- Niu, Y.; Wang, Y.D.; Mostaghimi, P.; Swietojanski, P.; Armstrong, R.T. An innovative application of generative adversarial networks for physically accurate rock images with an unprecedented field of view. Geophys. Res. Lett. 2020, 47, e2020GL089029. [Google Scholar] [CrossRef]
- Chen, Y.; Shi, F.; Christodoulou, A.G.; Xie, Y.; Zhou, Z.; Li, D. Efficient and accurate MRI super-resolution using a generative adversarial network and 3D multi-level densely connected network. In Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2018; pp. 91–99. [Google Scholar]
- Wolterink, J.M.; Mukhopadhyay, A.; Leiner, T.; Vogl, T.J.; Bucher, A.M.; Išgum, I. Generative Adversarial Networks: A Primer for Radiologists. RadioGraphics 2021, 41, 840–857. [Google Scholar] [CrossRef]
- Lim, S.; Nam, H.; Shin, H.; Jeong, S.; Kim, K.; Lee, Y. Noise Reduction for a Virtual Grid Using a Generative Adversarial Network in Breast X-ray Images. J. Imaging 2023, 9, 272. [Google Scholar] [CrossRef]
- Sun, Y.; Liu, X.; Cong, P.; Li, L.; Zhao, Z. Digital radiography image denoising using a generative adversarial network. J. X-Ray Sci. Technol. 2018, 26, 523–534. [Google Scholar] [CrossRef]
- Lyu, Q.; Wang, G. Conversion between CT and MRI images using diffusion and score-matching models. arXiv 2022, arXiv:2209.12104. [Google Scholar]
- Kadambi, S.; Wang, Z.; Xing, E. WGAN domain adaptation for the joint optic disc-and-cup segmentation in fundus images. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 1205–1213. [Google Scholar] [CrossRef]
- Yang, J.; Dong, X.; Hu, Y.; Peng, Q.; Tao, G.; Ou, Y.; Cai, H.; Yang, X. Fully Automatic Arteriovenous Segmentation in Retinal Images via Topology-Aware Generative Adversarial Networks. Interdiscip. Sci. 2020, 12, 323–334. [Google Scholar] [CrossRef]
- Khan, Z.K.; Umar, A.I.; Shirazi, S.H.; Rasheed, A.; Qadir, A.; Gul, S. Image based analysis of meibomian gland dysfunction using conditional generative adversarial neural network. BMJ Open Ophthalmol. 2021, 6, e000436. [Google Scholar] [CrossRef] [PubMed]
- Tavakkoli, A.; Kamran, S.A.; Hossain, K.F.; Zuckerbrod, S.L. A novel deep learning conditional generative adversarial network for producing angiography images from retinal fundus photographs. Sci. Rep. 2020, 10, 21580. [Google Scholar] [CrossRef] [PubMed]
- Liu, Y.; Yang, J.; Zhou, Y.; Wang, W.; Zhao, J.; Yu, W.; Zhang, D.; Ding, D.; Li, X.; Chen, Y. Prediction of OCT images of short-term response to anti-VEGF treatment for neovascular age-related macular degeneration using generative adversarial network. Br. J. Ophthalmol. 2020, 104, 1735–1740. [Google Scholar] [CrossRef] [PubMed]
- Helm, J.M.; Swiergosz, A.M.; Haeberle, H.S.; Karnuta, J.M.; Schaffer, J.L.; Krebs, V.E.; Spitzer, A.I.; Ramkumar, P.N. Machine learning and artificial intelligence: Definitions, applications, and future directions. Curr. Rev. Musculoskelet. Med. 2020, 13, 69–76. [Google Scholar] [CrossRef]
- Packin, N.G.; Lev-Aretz, Y. Learning algorithms and discrimination. In Research Handbook on the Law of Artificial Intelligence; Edward Elgar Publishing: Cheltenham, UK, 2018; pp. 88–113. [Google Scholar]
- Castelli, M.; Manzoni, L. Generative models in artificial intelligence and their applications. Appl. Sci. 2022, 12, 4127. [Google Scholar] [CrossRef]
- Jin, L.; Tan, F.; Jiang, S. Generative Adversarial Network Technologies and Applications in Computer Vision. Comput. Intell. Neurosci. 2020, 2020, 1459107. [Google Scholar] [CrossRef]
- Munro, P. Backpropagation. In Encyclopedia of Machine Learning; Sammut, C., Webb, G.I., Eds.; Springer: Boston, MA, USA, 2010; p. 73. [Google Scholar] [CrossRef]
- Kornblith, S.; Chen, T.; Lee, H.; Norouzi, M. Why do better loss functions lead to less transferable features? Adv. Neural Inf. Process. Syst. 2021, 34, 28648–28662. [Google Scholar]
- Komatsuzaki, A. One epoch is all you need. arXiv 2019, arXiv:1906.06669. [Google Scholar]
- Ying, X. An overview of overfitting and its solutions. J. Phys. Conf. Ser. 2019, 1168, 022022. [Google Scholar] [CrossRef]
- Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 53. [Google Scholar] [CrossRef]
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. arXiv 2014, arXiv:1406.2661. Available online: https://arxiv.org/abs/1406.2661 (accessed on 4 September 2024).
- Feizi, S.; Farnia, F.; Ginart, T.; Tse, D. Understanding GANs: The LQG Setting. arXiv 2017, arXiv:1710.10793. Available online: https://arxiv.org/abs/1710.10793 (accessed on 4 September 2024).
- Sorin, V.; Barash, Y.; Konen, E.; Klang, E. Creating Artificial Images for Radiology Applications Using Generative Adversarial Networks (GANs)—A Systematic Review. Acad. Radiol. 2020, 27, 1175–1185. [Google Scholar] [CrossRef] [PubMed]
- Wang, Z.; Lim, G.; Ng, W.Y.; Keane, P.A.; Campbell, J.P.; Tan, G.S.W.; Schmetterer, L.; Wong, T.Y.; Liu, Y.; Ting, D.S.W. Generative adversarial networks in ophthalmology: What are these and how can they be used? Curr. Opin. Ophthalmol. 2021, 32, 459–467. [Google Scholar] [CrossRef] [PubMed]
- Arjovsky, M.; Bottou, L. Towards Principled Methods for Training Generative Adversarial Networks. arXiv 2017, arXiv:1701.04862. Available online: https://arxiv.org/abs/1701.04862 (accessed on 4 September 2024).
- Little, C.; Elliot, M.; Allmendinger, R.; Samani, S.S. Generative adversarial networks for synthetic data generation: A comparative study. arXiv 2021, arXiv:2112.01925. [Google Scholar]
- Chavdarova, T.; Fleuret, F. SGAN: An Alternative Training of Generative Adversarial Networks. arXiv 2017, arXiv:1712.02330. Available online: https://arxiv.org/abs/1712.02330 (accessed on 4 September 2024).
- Pan, Z.; Yu, W.; Wang, B.; Xie, H.; Sheng, V.S.; Lei, J. Loss Functions of Generative Adversarial Networks (GANs): Opportunities and Challenges. IEEE Trans. Emerg. Top Comput. Intell. 2020, 4, 500–522. [Google Scholar] [CrossRef]
- Borji, A. Pros and cons of GAN evaluation measures. Comput. Vis. Image Underst. 2019, 179, 41–65. [Google Scholar] [CrossRef]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
- Barratt, S.; Sharma, R. A Note on the Inception Score. arXiv 2018, arXiv:1801.01973. Available online: https://arxiv.org/abs/1801.01973 (accessed on 4 September 2024).
- Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; Chen, X. Improved Techniques for Training GANs. arXiv 2016, arXiv:1606.03498. Available online: https://arxiv.org/abs/1606.03498 (accessed on 4 September 2024).
- Lucic, M.; Kurach, K.; Michalski, M.; Gelly, S.; Bousquet, O. Are GANs Created Equal? A Large-Scale Study. arXiv 2017, arXiv:1711.10337. Available online: https://arxiv.org/abs/1711.10337 (accessed on 4 September 2024).
- Brock, A.; Donahue, J.; Simonyan, K. Large Scale GAN Training for High Fidelity Natural Image Synthesis. arXiv 2018, arXiv:1809.11096. Available online: https://arxiv.org/abs/1809.11096 (accessed on 4 September 2024).
- Soloveitchik, M.; Diskin, T.; Morin, E.; Wiesel, A. Conditional Frechet Inception Distance. arXiv 2021, arXiv:2103.11521. Available online: https://arxiv.org/abs/2103.11521 (accessed on 4 September 2024).
- Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein GAN. arXiv 2017, arXiv:1701.07875. Available online: https://arxiv.org/abs/1701.07875 (accessed on 4 September 2024).
- Frogner, C.; Zhang, C.; Mobahi, H.; Araya-Polo, M.; Poggio, T. Learning with a Wasserstein Loss. arXiv 2015, arXiv:1506.05439. Available online: https://arxiv.org/abs/1506.05439 (accessed on 4 September 2024).
- Ledig, C.; Theis, L.; Huszar, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. arXiv 2016, arXiv:1609.04802. Available online: https://arxiv.org/abs/1609.04802 (accessed on 4 September 2024).
- Gupta, R.; Sharma, A.; Kumar, A. Super-Resolution using GANs for Medical Imaging. Procedia Comput. Sci. 2020, 173, 28–35. [Google Scholar] [CrossRef]
- Ha, A.; Sun, S.; Kim, Y.K.; Lee, J.; Jeoung, J.W.; Kim, H.C.; Park, K.H. Deep-learning-based enhanced optic-disc photography. PLoS ONE 2020, 15, e0239913. [Google Scholar] [CrossRef]
- Fu, J.; Cao, L.; Wei, S.; Xu, M.; Song, Y.; Li, H.; You, Y. A GAN-based deep enhancer for quality enhancement of retinal images photographed by a handheld fundus camera. Adv. Ophthalmol. Pract. Res. 2022, 2, 100077. [Google Scholar] [CrossRef] [PubMed]
- Das, V.; Dandapat, S.; Bora, P.K. Unsupervised Super-Resolution of OCT Images Using Generative Adversarial Network for Improved Age-Related Macular Degeneration Diagnosis. IEEE Sens. J. 2020, 20, 8746–8756. [Google Scholar] [CrossRef]
- Yeh, R.A.; Chen, C.; Lim, T.Y.; Schwing, A.G.; Hasegawa-Johnson, M.; Do, M.N. Semantic Image Inpainting with Deep Generative Models. arXiv 2016, arXiv:1607.07539. Available online: https://arxiv.org/abs/1607.07539 (accessed on 4 September 2024).
- DeVries, T.; Romero, A.; Pineda, L.; Taylor, G.W.; Drozdzal, M. On the Evaluation of Conditional GANs. arXiv 2019, arXiv:1907.08175. Available online: https://arxiv.org/abs/1907.08175 (accessed on 4 September 2024).
- Mirza, M.; Osindero, S. Conditional Generative Adversarial Nets. arXiv 2014, arXiv:1411.1784. Available online: https://arxiv.org/abs/1411.1784 (accessed on 4 September 2024).
- Sricharan, K.; Bala, R.; Shreve, M.; Ding, H.; Saketh, K.; Sun, J. Semi-supervised Conditional GANs. arXiv 2017, arXiv:1708.05789. Available online: https://arxiv.org/abs/1708.05789 (accessed on 4 September 2024).
- Lan, L.; You, L.; Zhang, Z.; Fan, Z.; Zhao, W.; Zeng, N.; Chen, Y.; Zhou, X. Generative Adversarial Networks and Its Applications in Biomedical Informatics. Front. Public Health 2020, 8, 164. [Google Scholar] [CrossRef]
- Agarwal, R.; Tripathi, A. Current Modalities for Low Vision Rehabilitation. Cureus 2021, 13, e16561. Available online: https://www.cureus.com/articles/64479-current-modalities-for-low-vision-rehabilitation (accessed on 4 September 2024). [CrossRef]
- Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. arXiv 2016, arXiv:1611.07004. Available online: https://arxiv.org/abs/1611.07004 (accessed on 4 September 2024).
- Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. arXiv 2017, arXiv:1703.10593. Available online: https://arxiv.org/abs/1703.10593 (accessed on 4 September 2024).
- Huang, D.; Swanson, E.A.; Lin, C.P.; Schuman, J.S.; Stinson, W.G.; Chang, W.; Hee, M.R.; Flotte, T.; Gregory, K.; Puliafito, C.A. Optical coherence tomography. Science 1991, 254, 1178–1181. [Google Scholar] [CrossRef]
- Fujimoto, J.G.; Pitris, C.; Boppart, S.A.; Brezinski, M.E. Optical Coherence Tomography: An Emerging Technology for Biomedical Imaging and Optical Biopsy. Neoplasia 2000, 2, 9–25. Available online: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1531864/ (accessed on 6 June 2024). [CrossRef]
- Gómez-Benlloch, A.; Garrell-Salat, X.; Cobos, E.; López, E.; Esteve-Garcia, A.; Ruiz, S.; Vázquez, M.; Sararols, L.; Biarnés, M. Optical Coherence Tomography in Inherited Macular Dystrophies: A Review. Diagnostics 2024, 14, 878. [Google Scholar] [CrossRef]
- Wang, J.; Li, W.; Chen, Y.; Fang, W.; Kong, W.; He, Y.; Shi, G. Weakly supervised anomaly segmentation in retinal OCT images using an adversarial learning approach. Biomed. Opt. Express. 2021, 12, 4713. [Google Scholar]
- Ouyang, J.; Mathai, T.S.; Lathrop, K.; Galeotti, J. Accurate tissue interface segmentation via adversarial pre-segmentation of anterior segment OCT images. Biomed. Opt. Express. 2019, 10, 5291. [Google Scholar] [CrossRef] [PubMed]
- Menten, M.J.; Holland, R.; Leingang, O.; Bogunović, H.; Hagag, A.M.; Kaye, R.; Riedl, S.; Traber, G.L.; Hassan, O.N.; Pawlowski, N.; et al. Exploring Healthy Retinal Aging with Deep Learning. Ophthalmol. Sci. 2023, 3, 100294. [Google Scholar] [CrossRef] [PubMed]
- Sun, L.C.; Pao, S.I.; Huang, K.H.; Wei, C.Y.; Lin, K.F.; Chen, P.N. Generative adversarial network-based deep learning approach in classification of retinal conditions with optical coherence tomography images. Graefes. Arch. Clin. Exp. Ophthalmol. 2023, 261, 1399–1412. [Google Scholar] [CrossRef]
- Assaf, J.F.; Abou Mrad, A.; Reinstein, D.Z.; Amescua, G.; Zakka, C.; Archer, T.J.; Yammine, J.; Lamah, E.; Haykal, M.; Awwad, S.T. Creating realistic anterior segment optical coherence tomography images using generative adversarial networks. Br. J. Ophthalmol. 2024, 108, bjo-2023-324633. [Google Scholar] [CrossRef]
- Assaf, J.F.; Yazbeck, H.; Reinstein, D.Z.; Archer, T.J.; Arbelaez, J.; Bteich, Y.; Arbelaez, M.C.; Abou Mrad, A.; Awwad, S.T. Enhancing the Automated Detection of Implantable Collamer Lens Vault Using Generative Adversarial Networks and Synthetic Data on Optical Coherence Tomography. J. Refract. Surg. 2024, 40, e199–e207. Available online: https://journals.healio.com/doi/10.3928/1081597X-20240214-01 (accessed on 22 May 2024). [CrossRef]
- Zheng, C.; Ye, H.; Yang, J.; Fei, P.; Qiu, Y.; Xie, X.; Wang, Z.; Chen, J.; Zhao, P. Development and Clinical Validation of Semi-Supervised Generative Adversarial Networks for Detection of Retinal Disorders in Optical Coherence Tomography Images Using Small Dataset. Asia-Pac. J. Ophthalmol. 2022, 11, 219–226. [Google Scholar] [CrossRef]
- He, X.; Fang, L.; Rabbani, H.; Chen, X.; Liu, Z. Retinal optical coherence tomography image classification with label smoothing generative adversarial network. Neurocomputing 2020, 405, 37–47. [Google Scholar] [CrossRef]
- Kugelman, J.; Alonso-Caneiro, D.; Read, S.A.; Vincent, S.J.; Chen, F.K.; Collins, M.J. Data augmentation for patch-based OCT chorio-retinal segmentation using generative adversarial networks. Neural Comput. Appl. 2021, 33, 7393–7408. [Google Scholar] [CrossRef]
- Ni, G.; Wu, R.; Zheng, F.; Li, M.; Huang, S.; Ge, X. Toward ground-truth optical coherence tomography via three-dimensional unsupervised deep learning processing and data. IEEE Trans. Med. Imaging 2024, 43, 2395–2407. [Google Scholar] [CrossRef]
- Mehdizadeh, M.; Saha, S.; Alonso-Caneiro, D.; Kugelman, J.; MacNish, C.; Chen, F. Employing texture loss to denoise OCT images using generative adversarial networks. Biomed. Opt. Express. 2024, 15, 2262. [Google Scholar] [CrossRef] [PubMed]
- Liang, K.; Liu, X.; Chen, S.; Xie, J.; Qing Lee, W.; Liu, L.; Kuan Lee, H. Resolution enhancement and realistic speckle recovery with generative adversarial modeling of micro-optical coherence tomography. Biomed. Opt. Express. 2020, 11, 7236. [Google Scholar] [CrossRef] [PubMed]
- Cheong, H.; Devalla, S.K.; Pham, T.H.; Zhang, L.; Tun, T.A.; Wang, X.; Perera, S.; Schmetterer, L.; Aung, T.; Boote, C.; et al. DeshadowGAN: A Deep Learning Approach to Remove Shadows from Optical Coherence Tomography Images. Trans. Vis. Sci. Tech. 2020, 9, 23. [Google Scholar] [CrossRef]
- Halupka, K.J.; Antony, B.J.; Lee, M.H.; Lucy, K.A.; Rai, R.S.; Ishikawa, H.; Wollstein, G.; Schuman, J.S.; Garnavi, R. Retinal optical coherence tomography image enhancement via deep learning. Biomed. Opt. Express. 2018, 9, 6205. [Google Scholar] [CrossRef] [PubMed]
- Ren, M.; Dey, N.; Fishbaugh, J.; Gerig, G. Segmentation-Renormalized Deep Feature Modulation for Unpaired Image Harmonization. IEEE Trans. Med. Imaging 2021, 40, 1519–1530. [Google Scholar] [CrossRef]
- Wu, Y.; Olvera-Barrios, A.; Yanagihara, R.; Kung, T.H.; Lu, R.; Leung, I.; Mishra, A.V.; Nussinovitch, H.; Grimaldi, G.; Blazes, M.; et al. Training Deep Learning Models to Work on Multiple Devices by Cross-Domain Learning with No Additional Annotations. Ophthalmology 2023, 130, 213–222. [Google Scholar] [CrossRef]
- Chen, S.; Ma, D.; Lee, S.; Yu, T.T.L.; Xu, G.; Lu, D.; Popuri, K.; Ju, M.J.; Sarunic, M.V.; Beg, M.F. Segmentation-guided domain adaptation and data harmonization of multi-device retinal optical coherence tomography using cycle-consistent generative adversarial networks. Comput. Biol. Med. 2023, 159, 106595. [Google Scholar] [CrossRef]
- Lazaridis, G.; Lorenzi, M.; Ourselin, S.; Garway-Heath, D. Improving statistical power of glaucoma clinical trials using an ensemble of cyclical generative adversarial networks. Med. Image Anal. 2021, 68, 101906. [Google Scholar] [CrossRef]
- Romo-Bucheli, D.; Seeböck, P.; Orlando, J.I.; Gerendas, B.S.; Waldstein, S.M.; Schmidt-Erfurth, U.; Bogunović, H. Reducing image variability across OCT devices with unsupervised unpaired learning for improved segmentation of retina. Biomed. Opt. Express. 2020, 11, 346. [Google Scholar] [CrossRef]
- Tripathi, A.; Kumar, P.; Mayya, V.; Tulsani, A. Generating OCT B-Scan DME images using optimized Generative Adversarial Networks (GANs). Heliyon 2023, 9, e18773. [Google Scholar] [CrossRef]
- Liu, S.; Hu, W.; Xu, F.; Chen, W.; Liu, J.; Yu, X.; Wang, Z.; Li, Z.; Li, Z.; Yang, X.; et al. Prediction of OCT images of short-term response to anti-VEGF treatment for diabetic macular edema using different generative adversarial networks. Photodiagnosis Photodyn. Ther. 2023, 41, 103272. [Google Scholar] [CrossRef] [PubMed]
- Lee, H.; Kim, S.; Kim, M.A.; Chung, H.; Kim, H.C. Post-treatment prediction of optical coherence tomography using a conditional generative adversarial network in age-related macular degeneration. Retina 2021, 41, 572–580. [Google Scholar] [CrossRef] [PubMed]
- Xu, F.; Yu, X.; Gao, Y.; Ning, X.; Huang, Z.; Wei, M.; Zhai, W.; Zhang, R.; Wang, S.; Li, J. Predicting OCT images of short-term response to anti-VEGF treatment for retinal vein occlusion using generative adversarial network. Front. Bioeng. Biotechnol. 2022, 10, 914964. [Google Scholar] [CrossRef]
- Ehrlich, R.; Harris, A.; Wentz, S.M.; Moore, N.A.; Siesky, B.A. Anatomy and Regulation of the Optic Nerve Blood Flow. In Reference Module in Neuroscience and Biobehavioral Psychology; Elsevier: Amsterdam, The Netherlands, 2017. Available online: https://linkinghub.elsevier.com/retrieve/pii/B9780128093245013018 (accessed on 26 May 2024).
- Javed, A.; Khanna, A.; Palmer, E.; Wilde, C.; Zaman, A.; Orr, G.; Kumudhan, D.; Lakshmanan, A.; Panos, G.D. Optical coherence tomography angiography: A review of the current literature. J. Int. Med. Res. 2023, 51, 03000605231187933. [Google Scholar] [CrossRef] [PubMed]
- de Carlo, T.E.; Romano, A.; Waheed, N.K.; Duker, J.S. A review of optical coherence tomography angiography (OCTA). Int. J. Retin. Vitr. 2015, 1, 5. [Google Scholar] [CrossRef]
- Badhon, R.H.; Thompson, A.C.; Lim, J.I.; Leng, T.; Alam, M.N. Quantitative Characterization of Retinal Features in Translated OCTA. medRxiv 2024. Available online: http://medrxiv.org/lookup/doi/10.1101/2024.02.23.24303275 (accessed on 25 May 2024). [CrossRef]
- Coronado, I.; Pachade, S.; Trucco, E.; Abdelkhaleq, R.; Yan, J.; Salazar-Marioni, S.; Jagolino-Cole, A.; Bahrainian, M.; Channa, R.; Sheth, S.A.; et al. Synthetic OCT-A blood vessel maps using fundus images and generative adversarial networks. Sci. Rep. 2023, 13, 15325. [Google Scholar]
- Cao, J.; Xu, Z.; Xu, M.; Ma, Y.; Zhao, Y. A two-stage framework for optical coherence tomography angiography image quality improvement. Front. Med. 2023, 10, 1061357. [Google Scholar] [CrossRef]
- Jiang, Z.; Huang, Z.; Qiu, B.; Meng, X.; You, Y.; Liu, X.; Liu, G.; Zhou, C.; Yang, K.; Maier, A.; et al. Comparative study of deep learning models for optical coherence tomography angiography. Biomed. Opt. Express. 2020, 11, 1580. [Google Scholar] [CrossRef]
- Kornblau, I.S.; El-Annan, J.F. Adverse reactions to fluorescein angiography: A comprehensive review of the literature. Surv. Ophthalmol. 2019, 64, 679–693. [Google Scholar] [CrossRef]
- Maguire, A.M.; Bennett, J. Fluorescein elimination in human breast milk. Arch. Ophthalmol. 1988, 106, 718–719. [Google Scholar] [CrossRef]
- Huang, K.; Li, M.; Yu, J.; Miao, J.; Hu, Z.; Yuan, S.; Chen, Q. Lesion-aware generative adversarial networks for color fundus image to fundus fluorescein angiography translation. Comput. Methods Programs Biomed. 2023, 229, 107306. [Google Scholar] [CrossRef] [PubMed]
- Li, P.; He, Y.; Wang, P.; Wang, J.; Shi, G.; Chen, Y. Synthesizing multi-frame high-resolution fluorescein angiography images from retinal fundus images using generative adversarial networks. BioMed. Eng. OnLine 2023, 22, 16. [Google Scholar] [CrossRef] [PubMed]
- Li, W.; He, Y.; Kong, W.; Wang, J.; Deng, G.; Chen, Y. SequenceGAN: Generating Fundus Fluorescence Angiography Sequences from Structure Fundus Image. In Simulation and Synthesis in Medical Imaging; Svoboda, D., Burgos, N., Wolterink, J.M., Zhao, C., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2021; Volume 12965, pp. 110–120. Available online: https://link.springer.com/10.1007/978-3-030-87592-3_11 (accessed on 25 May 2024).
- Kamran, S.A.; Fariha Hossain, K.; Tavakkoli, A.; Zuckerbrod, S.; Baker, S.A.; Sanders, K.M. Fundus2Angio: A Conditional GAN Architecture for Generating Fluorescein Angiography Images from Retinal Fundus Photography. In Advances in Visual Computing; Bebis, G., Yin, Z., Kim, E., Bender, J., Subr, K., Kwon, B.C., Zhao, J., Kalkofen, D., Baciu, G., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2020; Volume 12510, pp. 125–138. Available online: https://link.springer.com/10.1007/978-3-030-64559-5_10 (accessed on 25 May 2024).
- Kamran, S.A.; Hossain, K.F.; Tavakkoli, A.; Zuckerbrod, S.L. Attention2AngioGAN: Synthesizing Fluorescein Angiography from Retinal Fundus Images using Generative Adversarial Networks. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 9122–9129. Available online: https://ieeexplore.ieee.org/document/9412428/ (accessed on 25 May 2024).
- Kamran, S.A.; Hossain, K.F.; Tavakkoli, A.; Zuckerbrod, S.L.; Baker, S.A. VTGAN: Semi-supervised Retinal Image Synthesis and Disease Prediction using Vision Transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; Available online: https://arxiv.org/abs/2104.06757 (accessed on 25 May 2024).
- Kamran, S.A.; Hossain, K.F.; Ong, J.; Waisberg, E.; Zaman, N.; Baker, S.A.; Lee, A.G.; Tavakkoli, A. FA4SANS-GAN: A Novel Machine Learning Generative Adversarial Network to Further Understand Ophthalmic Changes in Spaceflight Associated Neuro-Ocular Syndrome (SANS). Ophthalmol. Sci. 2024, 4, 100493. [Google Scholar] [CrossRef]
- Shi, D.; Zhang, W.; He, S.; Chen, Y.; Song, F.; Liu, S.; Wang, R.; Zheng, Y.; He, M. Translation of Color Fundus Photography into Fluorescein Angiography Using Deep Learning for Enhanced Diabetic Retinopathy Screening. Ophthalmol. Sci. 2023, 3, 100401. [Google Scholar] [CrossRef]
- Shi, D.; He, S.; Yang, J.; Zheng, Y.; He, M. One-shot Retinal Artery and Vein Segmentation via Cross-modality Pretraining. Ophthalmol. Sci. 2024, 4, 100363. [Google Scholar] [CrossRef]
- Ge, R.; Fang, Z.; Wei, P.; Chen, Z.; Jiang, H.; Elazab, A.; Li, W.; Wan, X.; Zhang, S.; Wang, C. UWAFA-GAN: Ultra-Wide-Angle Fluorescein Angiography Transformation via Multi-scale Generation and Registration Enhancement. IEEE J. Biomed. Health Inform. 2024, 28, 4820–4829. [Google Scholar] [CrossRef]
- Wang, X.; Ji, Z.; Ma, X.; Zhang, Z.; Yi, Z.; Zheng, H.; Fan, W.; Chen, C. Automated Grading of Diabetic Retinopathy with Ultra-Widefield Fluorescein Angiography and Deep Learning. J. Diabetes Res. 2021, 2021, 2611250. [Google Scholar] [CrossRef]
- Abdelmotaal, H.; Sharaf, M.; Soliman, W.; Wasfi, E.; Kedwany, S.M. Bridging the resources gap: Deep learning for fluorescein angiography and optical coherence tomography macular thickness map image translation. BMC Ophthalmol. 2022, 22, 355. [Google Scholar] [CrossRef]
- Pole, C.; Ameri, H. Fundus Autofluorescence and Clinical Applications. J. Ophthalmic. Vis. Res. 2021, 16, 432–461. [Google Scholar] [CrossRef]
- Wu, M.; Cai, X.; Chen, Q.; Ji, Z.; Niu, S.; Leng, T.; Rubin, D.L.; Park, H. Geographic atrophy segmentation in SD-OCT images using synthesized fundus autofluorescence imaging. Comput. Methods Programs Biomed. 2019, 182, 105101. [Google Scholar] [CrossRef]
- Su, J.; She, K.; Song, L.; Jin, X.; Li, R.; Zhao, Q.; Xiao, J.; Chen, D.; Cheng, H.; Lu, F.; et al. In vivo base editing rescues photoreceptors in a mouse model of retinitis pigmentosa. Mol. Ther.-Nucleic Acids 2023, 31, 596–609. [Google Scholar] [CrossRef] [PubMed]
- Veturi, Y.A.; Woof, W.; Lazebnik, T.; Moghul, I.; Woodward-Court, P.; Wagner, S.K.; Cabral de Guimarães, T.A.; Daich Varela, M.; Liefers, B.; Patel, P.J.; et al. SynthEye: Investigating the Impact of Synthetic Data on Artificial Intelligence-assisted Gene Diagnosis of Inherited Retinal Disease. Ophthalmol. Sci. 2023, 3, 100258. [Google Scholar] [CrossRef] [PubMed]
- Asanad, S.; Karanjia, R. Full-Field Electroretinogram. In StatPearls [Internet]; StatPearls Publishing: Treasure Island, FL, USA, 2024. Available online: http://www.ncbi.nlm.nih.gov/books/NBK557483/ (accessed on 9 June 2024).
- Asanad, S.; Karanjia, R. Multifocal Electroretinogram. In StatPearls [Internet]; StatPearls Publishing: Treasure Island, FL, USA, 2024. Available online: https://www.ncbi.nlm.nih.gov/books/NBK564322/ (accessed on 9 June 2024).
- Kulyabin, M.; Zhdanov, A.; Maier, A.; Loh, L.; Estevez, J.J.; Constable, P.A. Generating synthetic electroretinogram waveforms using Artificial Intelligence to improve classification of retinal conditions in under-represented populations. arXiv 2024, arXiv:2404.11842. Available online: https://arxiv.org/abs/2404.11842 (accessed on 27 May 2024). [CrossRef]
- Muraleedharan, S.; Tripathy, K. Indocyanine Green (ICG) Angiography. In StatPearls [Internet]; StatPearls Publishing: Treasure Island, FL, USA, 2024. Available online: http://www.ncbi.nlm.nih.gov/books/NBK580479/ (accessed on 9 June 2024).
- Chen, R.; Zhang, W.; Song, F.; Yu, H.; Cao, D.; Zheng, Y.; He, M.; Shi, D. Translating color fundus photography to indocyanine green angiography using deep-learning for age-related macular degeneration screening. npj Digit. Med. 2024, 7, 34. [Google Scholar] [CrossRef]
- Jiang, H.; Chen, X.; Shi, F.; Ma, Y.; Xiang, D.; Ye, L.; Su, J.; Li, Z.; Chen, Q.; Hua, Y.; et al. Improved cGAN based linear lesion segmentation in high myopia ICGA images. Biomed. Opt. Express. 2019, 10, 2355. [Google Scholar] [CrossRef]
- Coscas, G.; Soubrane, G. Pathologic Myopia. In Retinal Imaging; Elsevier: Amsterdam, The Netherlands, 2006; pp. 164–174. Available online: https://linkinghub.elsevier.com/retrieve/pii/B9780323023467500181 (accessed on 26 May 2024).
- Rao, H.L.; Yadav, R.K.; Begum, V.U.; Addepalli, U.K.; Choudhari, N.S.; Senthil, S.; Garudadri, C.S. Role of Visual Field Reliability Indices in Ruling Out Glaucoma. JAMA Ophthalmol. 2015, 133, 40. [Google Scholar] [CrossRef]
- Kang, H.; Ko, S.; Kim, J.-C.; Le, D.T.; Bum, J.; Han, J.C.; Choo, H. Visual Field Prediction for Fundus Image with Generative AI. In Proceedings of the 2024 18th International Conference on Ubiquitous Information Management and Communication (IMCOM), Kuala Lumpur, Malaysia, 3–5 January 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–3. Available online: https://ieeexplore.ieee.org/document/10418344/ (accessed on 27 May 2024).
- Ranga, V.; Dave, M.; Verma, A.K. Modified Max-Min Algorithm for Game Theory. In Proceedings of the 2015 Fifth International Conference on Advanced Computing & Communication Technologies, Haryana, India, 20–21 February 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 153–156. Available online: http://ieeexplore.ieee.org/document/7079070/ (accessed on 4 September 2024).
- Roth, K.; Lucchi, A.; Nowozin, S.; Hofmann, T. Stabilizing Training of Generative Adversarial Networks through Regularization. arXiv 2017, arXiv:1705.09367. Available online: https://arxiv.org/abs/1705.09367 (accessed on 4 September 2024).
- Chong, P.; Ruff, L.; Kloft, M.; Binder, A. Simple and Effective Prevention of Mode Collapse in Deep One-Class Classification. arXiv 2020, arXiv:2001.08873. Available online: https://arxiv.org/abs/2001.08873 (accessed on 4 September 2024).
- Thanh-Tung, H.; Tran, T. On Catastrophic Forgetting and Mode Collapse in Generative Adversarial Networks. arXiv 2018, arXiv:1807.04015. Available online: https://arxiv.org/abs/1807.04015 (accessed on 4 September 2024).
- Metz, L.; Poole, B.; Pfau, D.; Sohl-Dickstein, J. Unrolled Generative Adversarial Networks. arXiv 2016, arXiv:1611.02163. Available online: https://arxiv.org/abs/1611.02163 (accessed on 4 September 2024).
- Guo, X.; Yin, Y.; Dong, C.; Yang, G.; Zhou, G. On the Class Imbalance Problem. In Proceedings of the 2008 Fourth International Conference on Natural Computation, Jinan, China, 18–20 October 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 192–201. Available online: http://ieeexplore.ieee.org/document/4667275/ (accessed on 4 September 2024).
- Dixon, L.; Li, J.; Sorensen, J.; Thain, N.; Vasserman, L. Measuring and Mitigating Unintended Bias in Text Classification. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New Orleans, LA, USA, 27 December 2018; ACM: New York, NY, USA, 2018; pp. 67–73. Available online: https://dl.acm.org/doi/10.1145/3278721.3278729 (accessed on 4 September 2024).
- Hutchinson, B.; Prabhakaran, V.; Denton, E.; Webster, K.; Zhong, Y.; Denuyl, S. Unintended machine learning biases as social barriers for persons with disabilities. SIGACCESS Access. Comput. 2020, 125, 1558–2337. [Google Scholar] [CrossRef]
- Suresh, H.; Guttag, J.V. A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle. arXiv 2019, arXiv:1901.10002. Available online: https://arxiv.org/abs/1901.10002 (accessed on 4 September 2024).
- Dumont, V.; Ju, X.; Mueller, J. Hyperparameter Optimization of Generative Adversarial Network Models for High-Energy Physics Simulations. arXiv 2022, arXiv:2208.07715. Available online: https://arxiv.org/abs/2208.07715 (accessed on 4 September 2024).
- Krähenbühl, P.; Doersch, C.; Donahue, J.; Darrell, T. Data-dependent Initializations of Convolutional Neural Networks. arXiv 2015, arXiv:1511.06856. Available online: https://arxiv.org/abs/1511.06856 (accessed on 4 September 2024).
- Olden, J.D.; Joy, M.K.; Death, R.G. An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data. Ecol. Model. 2004, 178, 389–397. [Google Scholar] [CrossRef]
Term | Explanation | References |
---|---|---|
AI model | A program that has been trained to perform a task without further human intervention. This task can be as simple as determining whether an input number is greater than or less than a given integer (e.g., 5). | [44] |
Neural network/artificial neural network | A type of AI model that uses several layers of nodes (artificial neurons) to perform a task. The first layer of nodes is the input layer, followed by one or several layers of “hidden” nodes, and the last layer is the output. Nodes in one layer are connected to nodes in the previous layer by weights and biases that combine the incoming signal, and an activation function determines whether that signal is transmitted. | [6,7] |
Convolutional Neural Network (CNN) | A type of neural network adapted for image-type inputs and other grid-structured data. Its defining feature is the convolution operation, in which a small filter matrix (e.g., a 3 × 3 grid) is applied across the input data. | [4,5] |
Deep convolutional neural networks | A type of neural network similar to CNNs but with a larger architecture, allowing the model to perform more complex tasks. This gain in accuracy and complexity comes at the cost of greater computational power. | [3] |
Discriminative models | AI models that perform classification and output labels. These models handle discriminative tasks such as, but not limited to, distinguishing healthy versus pathological conditions, benign versus malignant lesions, or disease A versus disease B. | [45] |
Generative models | AI models that learn patterns from a dataset and output new but similar data. In imaging applications, the output of a generative model is often a new image rather than a predefined label. | [46,47] |
Generator | The first component of a GAN; it learns an internal distribution of the imaging data and uses it to create candidate images that closely resemble real-world counterparts. The objective of training is to maximize the generator’s performance. | [47] |
Discriminator | The second component of a GAN; it tries to distinguish real data from the fake data produced by the generator. Training maximizes the discriminator’s classification performance, while the generator is ultimately trained until the discriminator can no longer tell its output apart from real data (see the training-loop sketch after this table). | [47] |
Backpropagation | A mathematical update of the weights from the last to the first layer of the AI model based on the loss function. The partial derivatives of the loss function with respect to each weight are used in the weight update. This optimization method is called gradient descent. The objective of backpropagation and gradient descent is to maximize the model’s accuracy by minimizing the loss function. | [48] |
Loss function | A function that evaluates the algorithm’s performance by comparing its output to the correct answers (ground truth); incorrect answers yield a high loss value. The objective of training is to minimize the loss function. | [49] |
Epoch | One complete pass of the entire training dataset through the algorithm during training. | [50] |
Overfitting | Undesirable behavior in which an algorithm fits the training data too closely and fails to generalize, leaving it unable to make accurate predictions on new data. | [51] |
Convergence | The point at which the loss function is successfully minimized and the training parameters reach a stable state, allowing the algorithm to make accurate predictions with the learned parameters. | [52] |
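To make the generator, discriminator, loss function, backpropagation, and epoch entries above concrete, the following is a minimal alternating-training sketch written with PyTorch. It is an illustrative toy under stated assumptions (flattened 64 × 64 images, small fully connected networks, binary cross-entropy as a stand-in for the minimax loss discussed in Section 5), not the architecture or configuration of any model reviewed here.

```python
# Minimal GAN training sketch (PyTorch). All sizes, names, and hyperparameters
# are illustrative assumptions, not values from any study cited in this review.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM = 64 * 64, 100   # hypothetical flattened 64x64 grayscale images

generator = nn.Sequential(           # maps random noise to a fake image
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(       # maps an image to a probability of being real
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()                   # loss function: compares output to ground truth
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: maximize its ability to separate real from fake.
    fake_images = generator(torch.randn(batch, NOISE_DIM)).detach()  # generator frozen here
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fake_images), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()                # backpropagation: gradients of the loss w.r.t. each weight
    opt_d.step()                     # gradient-descent-style weight update

    # Generator step: try to fool the (now fixed) discriminator.
    g_loss = bce(discriminator(generator(torch.randn(batch, NOISE_DIM))), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# One epoch = one full pass over the training dataset (dummy random data here).
dummy_loader = [torch.rand(32, IMG_DIM) * 2 - 1 for _ in range(10)]
for epoch in range(3):
    for real_batch in dummy_loader:
        train_step(real_batch)
```

The discriminator and generator steps alternate as described in Section 3.3, and convergence corresponds to the point at which both losses stabilize and the discriminator can no longer reliably separate real from generated images.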