A Comprehensive Performance Analysis of Transfer Learning Optimization in Visual Field Defect Classification
Abstract
1. Introduction
- Right/left homonymous hemianopia—A VF defect in which half of the visual field is lost, typically the same half in both eyes. Hemianopia can indicate a brain hemorrhage, tumor, or pus collection.
- Right/left/upper/lower quadrantanopia—A VF defect affecting one quarter of the visual field (upper or lower, right or left). This defect indicates an abnormality in the temporal or parietal lobes of the brain and can result from stroke, hemorrhage, tumor, or pus collection.
- Superior/inferior field defect—A VF defect occupying the upper or lower half of the VF. This defect can signal retinal detachment or malignancy in the eye.
- Central scotoma—A defect pattern that appears as a large or small spot at the center of the VF of either eye. This vision impairment is associated with a greater risk of macular disease.
- Tunnel vision—A VF defect associated with glaucoma, a disease that manifests as peripheral VF loss in its early stage, progressively constricting the field until only tunnel vision remains before total blindness occurs.
- Normal VF—Included in the study as a baseline condition.
- What is the performance of transfer learning models in visual field defect classification?
- What is the performance of transfer learning after applying Bayesian optimization?
- How does combining hyperparameter tuning with layer fine-tuning through Bayesian optimization affect the performance of the transfer learning models in visual field defect classification?
- How does the fine-tuning of network layers affect the performance of the transfer learning models in visual field defect classification?
2. Related Works
3. Dataset Characteristics
4. Framework
4.1. Pre-Processing
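The experiments in Section 5.1 compare two input resolutions, 224 and 256. As a minimal sketch of the kind of pre-processing such a pipeline needs, the snippet below resizes an image array to a square target size and scales pixel values to [0, 1]; the nearest-neighbour resizing and the scaling choice are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def preprocess(image, size=224):
    """Nearest-neighbour resize to (size, size) and scale pixels to [0, 1].

    `image` is an H x W x C uint8 array. The 224/256 target sizes mirror
    the input resolutions compared in Section 5.1; the resizing and
    scaling choices here are assumptions for illustration.
    """
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size   # source row index for each output row
    cols = np.arange(size) * w // size   # source column index for each output column
    resized = image[rows][:, cols]
    return resized.astype(np.float32) / 255.0

# Example: a dummy 300 x 400 RGB visual field image
vf = np.random.randint(0, 256, (300, 400, 3), dtype=np.uint8)
x = preprocess(vf, size=224)
```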
4.2. Pre-Trained Models
- Alleviate the vanishing gradient problem.
- Strengthen feature propagation.
- Encourage feature reuse.
- Substantially reduce the number of parameters.
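Each pre-trained backbone above is adapted to the VF task by freezing or unfreezing groups of layers; the search in Section 5.2 treats the upper (late) and lower (early) parts of each network as two boolean fine-tuning choices. A framework-agnostic sketch of that scheme follows; the `Layer` stand-in class and the 50/50 split point are assumptions for illustration.

```python
class Layer:
    """Stand-in for a framework layer exposing a trainable flag."""
    def __init__(self, name):
        self.name = name
        self.trainable = True

def apply_fine_tuning(layers, tune_upper, tune_lower):
    """Freeze or unfreeze the two halves of a backbone.

    `tune_lower` controls the lower (early, generic-feature) half and
    `tune_upper` the upper (late, task-specific) half, mirroring the
    TRUE/FALSE "Upper Layer"/"Lower Layer" flags searched in Section 5.2.
    The even split point is an illustrative assumption.
    """
    mid = len(layers) // 2
    for layer in layers[:mid]:
        layer.trainable = tune_lower
    for layer in layers[mid:]:
        layer.trainable = tune_upper
    return layers

# Freeze the generic lower half, fine-tune only the upper half.
backbone = [Layer(f"conv_{i}") for i in range(10)]
apply_fine_tuning(backbone, tune_upper=True, tune_lower=False)
```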
4.3. Bayesian Optimization
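As a concrete illustration of the search strategy named in this section, the sketch below runs a tiny Gaussian-process Bayesian optimization loop with an expected-improvement acquisition over a single hyperparameter on [0, 1]. The RBF kernel, its length scale, the candidate grid, and the toy objective (a stand-in for validation accuracy) are all assumptions for illustration, not the paper's actual search space.

```python
import numpy as np
from math import erf, sqrt, pi, exp

def rbf(a, b, length=0.2):
    # Squared-exponential kernel between two 1-D point sets.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(x_obs, y_obs, x_new, noise=1e-6):
    # Standard GP regression posterior mean/std at the query points.
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_obs, x_new)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ y_obs
    var = 1.0 - np.sum(Ks * (K_inv @ Ks), axis=0)  # prior variance is 1.0
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI for maximization: (mu - best) * Phi(z) + sigma * phi(z).
    ei = np.zeros_like(mu)
    for i, (m, s) in enumerate(zip(mu, sigma)):
        if s > 0:
            z = (m - best) / s
            cdf = 0.5 * (1 + erf(z / sqrt(2)))
            pdf = exp(-z * z / 2) / sqrt(2 * pi)
            ei[i] = (m - best) * cdf + s * pdf
    return ei

def objective(x):
    # Toy stand-in for validation accuracy as a function of one hyperparameter.
    return np.exp(-((x - 0.3) / 0.15) ** 2)

grid = np.linspace(0, 1, 201)                     # candidate hyperparameter values
x_obs = np.array([0.0, 0.25, 0.5, 0.75, 1.0])     # initial design
y_obs = objective(x_obs)
for _ in range(10):                               # BO loop: fit GP, maximize EI, evaluate
    mu, sigma = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y_obs.max()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))
best_x = x_obs[np.argmax(y_obs)]
```

The same loop generalizes to the mixed discrete/continuous space of Section 5.2 (optimizer, learning rate, batch size, epoch, dropout rate, fine-tuning flags) by encoding each configuration as a vector.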
4.4. Model Evaluation
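Section 5.3 reports precision, recall, F1, and accuracy for each model. A minimal sketch of those metrics computed from a multiclass confusion matrix follows; the macro averaging and the 3-class example matrix are assumptions for illustration.

```python
import numpy as np

def classification_metrics(cm):
    """Macro-averaged precision/recall/F1 plus overall accuracy.

    `cm[i, j]` counts samples of true class i predicted as class j.
    Macro averaging (unweighted mean over classes) is assumed here.
    """
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / np.maximum(cm.sum(axis=0), 1e-12)  # per predicted class
    recall = tp / np.maximum(cm.sum(axis=1), 1e-12)     # per true class
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return {
        "precision": precision.mean(),
        "recall": recall.mean(),
        "f1": f1.mean(),
        "accuracy": tp.sum() / cm.sum(),
    }

# Hypothetical 3-class confusion matrix (150 samples, made up for illustration)
m = classification_metrics([[48, 2, 0], [1, 45, 4], [0, 3, 47]])
```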
5. Experimental Results and Discussion
5.1. Part I: Validation Results and Analysis before Bayesian Optimization
5.2. Part II: Validation Results and Analysis after Bayesian Optimization
5.3. Part III: Classification Results and Analysis after Bayesian Optimization
6. Conclusions and Future Works
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Moses, S. Neurologic Anatomy of the Eye. Family Practice Notebook. 2022. Available online: https://fpnotebook.com/eye/Anatomy/NrlgcAntmyOfThEy.htm (accessed on 13 February 2022).
- Kucur, Ş.S.; Holló, G.; Sznitman, R. A Deep Learning Approach to Automatic Detection of Early Glaucoma from Visual Fields. PLoS ONE 2018, 13, e0206081.
- Chakravarty, A.; Sivaswamy, J. Joint optic disc and cup boundary extraction from monocular fundus images. Comput. Methods Programs Biomed. 2017, 147, 51–61.
- Park, K.; Kim, J.; Lee, J. Visual Field Prediction using Recurrent Neural Network. Sci. Rep. 2019, 9, 8385.
- Patel, R.; Chaware, A. Transfer Learning with Fine-Tuned MobileNetV2 for Diabetic Retinopathy. In Proceedings of the 2020 International Conference for Emerging Technology (INCET), Belgaum, India, 5–7 June 2020; pp. 1–4.
- Shankar, K.; Zhang, Y.; Liu, Y.; Wu, L.; Chen, C.H. Hyperparameter Tuning Deep Learning for Diabetic Retinopathy Fundus Image Classification. IEEE Access 2020, 8, 118164–118173.
- Abu, M.; Zahri, N.A.H.; Amir, A.; Ismail, M.I.; Kamarudin, L.M.; Nishizaki, H. Classification of Multiple Visual Field Defects using Deep Learning. J. Phys. Conf. Ser. 2021, 1755, 012014.
- Chakrabarty, N. A Deep Learning Method for The Detection of Diabetic Retinopathy. In Proceedings of the 2018 5th IEEE Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering (UPCON), Gorakhpur, India, 2–4 November 2018; pp. 1–5.
- Vaghefi, E.; Yang, S.; Hill, S.; Humphrey, G.; Walker, N.; Squirrell, D. Detection of Smoking Status from Retinal Images; A Convolutional Neural Network Study. Sci. Rep. 2019, 9, 1–9.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2015, arXiv:1409.1556.
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861.
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 4510–4520.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269.
- Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359.
- Weiss, K.; Khoshgoftaar, T.M.; Wang, D.D. A Survey of Transfer Learning; Springer International Publishing: Berlin/Heidelberg, Germany, 2016.
- Tan, C.; Sun, F.; Kong, T.; Zhang, W.; Yang, C.; Liu, C. A survey on deep transfer learning. In Proceedings of the International Conference on Artificial Neural Networks, Rhodes, Greece, 4–7 October 2018; pp. 270–279.
- Karthikeyan, S.; Kumar, P.S.; Madhusudan, R.J.; Sundaramoorthy, S.K.; Namboori, P.K.K. Detection of Multiclass Retinal Diseases Using Artificial Intelligence: An Expeditious Learning Using Deep CNN with Minimal Data. Biomed. Pharmacol. J. 2019, 12, 3.
- Naik, N. Eye Disease Detection Using RESNET. Int. Res. J. Eng. Technol. 2016, 7, 3331–3335.
- Nazir, T.; Nawaz, M.; Rashid, J.; Mahum, R.; Masood, M.; Mehmood, A.; Ali, F.; Kim, J.; Kwon, H.; Hussain, A. Detection of Diabetic Eye Disease from Retinal Images Using a Deep Learning Based CenterNet Model. Sensors 2021, 21, 5238.
- Mu, N.; Wang, H.; Zhang, Y.; Jiang, J.; Tang, J. Progressive global perception and local polishing network for lung infection segmentation of COVID-19 CT images. Pattern Recognit. 2021, 120, 108168.
- He, J.; Zhu, Q.; Zhang, K.; Yu, P.; Tang, J. An evolvable adversarial network with gradient penalty for COVID-19 infection segmentation. Appl. Soft Comput. 2021, 113, 107947.
- Miranda, M.; Valeriano, K.; Sulla-Torres, J. A Detailed Study on the Choice of Hyperparameters for Transfer Learning in COVID-19 Image Datasets using Bayesian Optimization. Int. J. Adv. Comput. Sci. Appl. 2021, 12, 327–335.
- Dewancker, I.; McCourt, M.; Clark, S. Bayesian Optimization Primer. 2015. Available online: https://static.sigopt.com/b/20a144d208ef255d3b981ce419667ec25d8412e2/static/pdf/SigOpt_Bayesian_Optimization_Primer.pdf (accessed on 12 February 2022).
- Wang, Y.; Plested, J.; Gedeon, T. MultiTune: Adaptive Integration of Multiple Fine-Tuning Models for Image Classification. In Proceedings of the 27th International Conference on Neural Information Processing (ICONIP 2020), Bangkok, Thailand, 18–22 November 2020.
- Vrbančič, G.; Podgorelec, V. Transfer Learning with Adaptive Fine-Tuning. IEEE Access 2020, 8, 196197–196211.
- Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? arXiv 2014, arXiv:1411.1792.
- Zaremba, W.; Sutskever, I.; Vinyals, O. Recurrent Neural Network Regularization. arXiv 2014, arXiv:1409.2329.
- Google Brain. Messidor DR Dataset. Kaggle, 2018. Available online: https://www.kaggle.com/google-brain/messidor2-dr-grades (accessed on 12 February 2022).
- Loey, M.; El-Sappagh, S.; Mirjalili, S. Bayesian-based optimized deep learning model to detect COVID-19 patients using chest X-ray image data. Comput. Biol. Med. 2022, 142, 105213.
- Monshi, M.M.A.; Poon, J.; Chung, V.; Monshi, F.M. CovidXrayNet: Optimizing data augmentation and CNN hyperparameters for improved COVID-19 detection from CXR. Comput. Biol. Med. 2021, 133, 104375.
- Loey, M.; Mirjalili, S. COVID-19 cough sound symptoms classification from scalogram image representation using deep learning models. Comput. Biol. Med. 2021, 139, 105020.
- Guo, Y.; Shi, H.; Kumar, A.; Grauman, K.; Rosing, T.; Feris, R. SpotTune: Transfer Learning Through Adaptive Fine-Tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 4800–4809.
- Maji, S.; Kannala, J.; Rahtu, E.; Blaschko, M.; Vedaldi, A. Fine-Grained Visual Classification of Aircraft. arXiv 2013, arXiv:1306.5151.
- Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images. 2009. Available online: https://www.cs.toronto.edu/kriz/learning-features-2009-TR.pdf (accessed on 12 February 2022).
- Google. Dataset Search. Available online: https://datasetsearch.research.google.com/ (accessed on 23 September 2021).
- Gessesse, G.W.; Tamrat, L.; Damji, K.F. 10–2 Humphrey SITA standard visual field test and white on black Amsler grid test results among 200 eyes [Data set]. PLoS ONE 2020, 15, e0230017.
- Bryan, S.R.; Eilers, P.H.; Lesaffre, E.M.; Lemij, H.G.; Vermeer, K.A. Longitudinal Glaucomatous Visual Field Data. Rotterdam Ophthalmic Data Repository. Investig. Ophthalmol. Vis. Sci. 2015, 56, 4283–4289. Available online: http://www.rodrep.com/longitudinal-glaucomatous-vf-data---description.html (accessed on 23 September 2021).
- Erler, N.S.; Bryan, S.R.; Eilers, P.H.C.; Lesaffre, E.M.E.H.; Lemij, H.G.; Vermeer, K.A. Optimizing Structure-function Relationship by Maximizing Correspondence between Glaucomatous Visual Fields and Mathematical Retinal Nerve Fiber Models. Investig. Ophthalmol. Vis. Sci. 2014, 55, 2350–2357.
- Kucur, Ş.S. Early Glaucoma Identification. GitHub. Available online: https://github.com/serifeseda/early-glaucoma-identification (accessed on 29 September 2021).
- Lifferth, A.; Fisher, B.; Stursma, A.; Cordes, S.; Carter, S.; Perkins, T. 10-2 Visual Field Testing: A Tool for All Glaucoma Stages. Rev. Optom. 2017, 154, 54–59. Available online: https://www.reviewofoptometry.com/article/ro0717-102-visual-field-testing-a-tool-for-all-glaucoma-stages (accessed on 29 September 2021).
- Jiang, Z.; Zhang, H.; Wang, Y.; Ko, S. Retinal blood vessel segmentation using fully convolutional network with transfer learning. Comput. Med. Imaging Graph. 2018, 68, 1–15.
- Lei, Z.; Gan, Z.H.; Jiang, M.; Dong, K. Artificial robot navigation based on gesture and speech recognition. In Proceedings of the 2014 IEEE International Conference on Security, Pattern Analysis, and Cybernetics (SPAC), Wuhan, China, 18–19 October 2014; pp. 323–327.
- Li, Z.; He, Y.; Keel, S.; Meng, W.; Chang, R.T.; He, M. Efficacy of a Deep Learning System for Detecting Glaucomatous Optic Neuropathy Based on Color Fundus Photographs. Ophthalmology 2018, 125, 1199–1206.
- Hosny, K.M.; Kassem, M.A.; Foaud, M.M. Skin Cancer Classification using Deep Learning and Transfer Learning. In Proceedings of the 2018 9th Cairo International Biomedical Engineering Conference (CIBEC), Cairo, Egypt, 20–22 December 2018; pp. 90–93.
- Mahiba, C.; Jayachandran, A. Severity analysis of diabetic retinopathy in retinal images using hybrid structure descriptor and modified CNNs. Measurement 2019, 135, 762–767.
- Lin, M.; Chen, Q.; Yan, S. Network in Network. arXiv 2013, arXiv:1312.4400.
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
- Frazier, P.I. A Tutorial on Bayesian Optimization. arXiv 2018, arXiv:1807.02811.
- Rasmussen, C.E.; Williams, C.K.I. Gaussian Processes for Machine Learning. 2006. Available online: http://www.gaussianprocess.org/gpml/ (accessed on 12 February 2022).
- Shahriari, B.; Swersky, K.; Wang, Z.; Adams, R.P.; de Freitas, N. Taking the Human Out of the Loop: A Review of Bayesian Optimization. Proc. IEEE 2016, 104, 148–175.
- Joy, T.T.; Rana, S.; Gupta, S.; Venkatesh, S. A flexible transfer learning framework for Bayesian optimization with convergence guarantee. Expert Syst. Appl. 2019, 115, 656–672.
- Das, A.; Giri, R.; Chourasia, G.; Bala, A.A. Classification of Retinal Diseases Using Transfer Learning Approach. In Proceedings of the 2019 International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 17–19 July 2019.
- Mitra, A.; Banerjee, P.S.; Roy, S.; Roy, S.; Setua, S.K. The region of interest localization for glaucoma analysis from retinal fundus image using deep learning. Comput. Methods Programs Biomed. 2018, 165, 25–35.
- Abu, M.; Amir, A.; Yen, H.L.; Zahri, N.A.H.; Azemi, S.A. The Performance Analysis of Transfer Learning for Steel Defect Detection by Using Deep Learning. In Proceedings of the 5th International Conference on Electronic Design (ICED), Perlis, Malaysia, 19 August 2020; p. 1755.
- Zhang, C.; Benz, P.; Argaw, D.M.; Lee, S.; Kim, J.; Rameau, F.; Bazin, J.C.; Kweon, I.S. ResNet or DenseNet? Introducing dense shortcuts to ResNet. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2021; pp. 3549–3558.
- Hoffer, E.; Hubara, I.; Soudry, D. Train longer, generalize better: Closing the generalization gap in large batch training of neural networks. Adv. Neural Inf. Process. Syst. 2017, 1732–1742.
Type of VF Defect | No. of Records
---|---
Central scotoma | 188 |
Right/Left hemianopia | 205 |
Right/left/upper/lower quadrantanopia | 150 |
Normal | 273 |
Tunnel vision | 207 |
Superior/inferior defect field | 177 |
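The six classes above are mildly imbalanced (150 to 273 records, 1200 in total). One common mitigation when training on such data is inverse-frequency class weighting; the sketch below computes those weights from the counts in the table. The weighting scheme itself is an assumption for illustration, not something this excerpt attributes to the authors.

```python
# Record counts taken from the dataset table above.
counts = {
    "central_scotoma": 188,
    "hemianopia": 205,
    "quadrantanopia": 150,
    "normal": 273,
    "tunnel_vision": 207,
    "superior_inferior_defect": 177,
}
total = sum(counts.values())  # 1200 records in all

# Inverse-frequency weights, normalized so a perfectly balanced
# dataset would give every class a weight of 1.0.
class_weights = {k: total / (len(counts) * n) for k, n in counts.items()}
```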
(Table of representative VF images for each defect type: central scotoma, right/left hemianopia, right/left/upper/lower quadrantanopia, tunnel vision, and superior/inferior defect field. The images are not reproduced here.)
Model | Image Size | Parameters (without VF) | Parameters (with VF) | Validation Accuracy (%)
---|---|---|---|---
VGG-16 | 224 | 14,714,688 | 14,865,222 | 97.63
VGG-16 | 256 | 14,714,688 | 14,911,302 | 96.55
VGG-19 | 224 | 20,024,384 | 20,174,918 | 96.34
VGG-19 | 256 | 20,024,384 | 20,220,998 | 17.69
MobileNet | 224 | 3,228,864 | 3,529,926 | 88.79
MobileNet | 256 | 3,228,864 | 3,622,086 | 94.41
MobileNetV2 | 224 | 2,257,984 | 2,634,310 | 70.91
MobileNetV2 | 256 | 2,257,984 | 2,749,510 | 39.24
ResNet50 | 224 | 23,587,712 | 24,189,830 | 90.46
ResNet50 | 256 | 23,587,712 | 24,374,150 | 86.66
ResNet101 | 224 | 42,658,176 | 43,260,294 | 95.69
ResNet101 | 256 | 42,658,176 | 43,444,614 | 92.91
DenseNet121 | 224 | 7,037,504 | 7,338,566 | 74.72
DenseNet121 | 256 | 7,037,504 | 7,430,726 | 94.20
DenseNet169 | 224 | 12,642,880 | 13,132,102 | 97.20
DenseNet169 | 256 | 12,642,880 | 13,281,862 | 93.27
Model | Feature Map | Filter Size | Activation Function | Pool Size | Optimizer | Learning Rate | Batch Size | Epoch | Dropout Rate | Fine-Tune Upper Layer | Fine-Tune Lower Layer | Validation Accuracy (%)
---|---|---|---|---|---|---|---|---|---|---|---|---
VGG-16 | 64 | 3 | ReLU | 2 | ADAM | 0.001 | 32 | 200 | 0.2 | FALSE | FALSE | 97.72
VGG-16 | 43 | 2 | Sigmoid | 1 | RMSprop | 0.0002 | 29 | 42 | 0.6 | TRUE | TRUE | 20.69
VGG-16 | 52 | 2 | ReLU | 2 | ADAM | 0.0006 | 19 | 54 | 0.7 | FALSE | FALSE | 98
VGG-16 | 48 | 1 | Sigmoid | 1 | RMSprop | 0.0161 | 27 | 92 | 0.2 | TRUE | FALSE | 20.69
VGG-16 | 53 | 2 | Sigmoid | 2 | ADAM | 0.0081 | 15 | 103 | 0.8 | FALSE | TRUE | 17.24
VGG-16 | 52 | 2 | Sigmoid | 2 | Adadelta | 0.0507 | 18 | 69 | 0.3 | TRUE | TRUE | 20.69
VGG-16 | 39 | 3 | Sigmoid | 2 | RMSprop | 0.0513 | 11 | 13 | 0.6 | TRUE | TRUE | 18.1
VGG-16 | 53 | 1 | ReLU | 1 | ADAM | 0.0046 | 9 | 11 | 0.8 | FALSE | FALSE | 20.69
VGG-16 | 55 | 3 | ReLU | 1 | Adadelta | 0.0813 | 31 | 24 | 0.3 | FALSE | TRUE | 93.97
VGG-16 | 34 | 2 | Sigmoid | 2 | RMSprop | 0.0002 | 15 | 66 | 0.8 | TRUE | TRUE | 20.69
VGG-16 | 32 | 2 | ReLU | 1 | SGD | 0.0001 | 1 | 200 | 0.1 | TRUE | FALSE | 95.73
VGG-19 | 64 | 3 | ReLU | 2 | ADAM | 0.001 | 32 | 200 | 0.2 | FALSE | FALSE | 20.69
VGG-19 | 60 | 2 | Sigmoid | 2 | ADAM | 0.0031 | 9 | 144 | 0.3 | TRUE | FALSE | 20.69
VGG-19 | 60 | 2 | Sigmoid | 1 | SGD | 0.0082 | 23 | 169 | 0.6 | TRUE | FALSE | 20.69
VGG-19 | 33 | 2 | ReLU | 2 | ADAM | 0.0004 | 31 | 166 | 0.4 | FALSE | FALSE | 97.84
VGG-19 | 35 | 2 | ReLU | 1 | SGD | 0.0338 | 23 | 57 | 0.2 | TRUE | TRUE | 93.84
VGG-19 | 46 | 1 | Sigmoid | 1 | ADAM | 0.0008 | 19 | 163 | 0.1 | FALSE | FALSE | 20.69
VGG-19 | 54 | 2 | ReLU | 2 | RMSprop | 0.0048 | 30 | 193 | 0.5 | FALSE | FALSE | 20.69
VGG-19 | 40 | 3 | ReLU | 2 | ADAM | 0.0248 | 6 | 101 | 0.7 | TRUE | FALSE | 20.69
VGG-19 | 35 | 2 | Sigmoid | 1 | ADAM | 0.0002 | 30 | 73 | 0.2 | FALSE | FALSE | 20.69
VGG-19 | 40 | 3 | ReLU | 1 | RMSprop | 0.0046 | 31 | 36 | 0.1 | TRUE | TRUE | 20.69
VGG-19 | 41 | 1 | ReLU | 1 | RMSprop | 0.0002 | 13 | 79 | 0.8 | TRUE | FALSE | 53.28
MobileNet | 64 | 3 | ReLU | 2 | ADAM | 0.001 | 32 | 200 | 0.2 | FALSE | FALSE | 93.58
MobileNet | 45 | 2 | Sigmoid | 2 | SGD | 0.0322 | 15 | 96 | 0.6 | TRUE | FALSE | 18.44
MobileNet | 60 | 3 | Sigmoid | 1 | RMSprop | 0.0018 | 18 | 138 | 0.3 | FALSE | FALSE | 29.26
MobileNet | 41 | 2 | Sigmoid | 2 | Adadelta | 0.0003 | 9 | 61 | 0.8 | TRUE | TRUE | 20.69
MobileNet | 48 | 2 | Sigmoid | 2 | ADAM | 0.0002 | 9 | 135 | 0.4 | TRUE | FALSE | 15.51
MobileNet | 57 | 1 | ReLU | 2 | Adadelta | 0.0055 | 7 | 106 | 0.2 | TRUE | FALSE | 94.4
MobileNet | 45 | 2 | ReLU | 2 | Adadelta | 0.001 | 13 | 154 | 0.4 | TRUE | FALSE | 93.62
MobileNet | 57 | 3 | Sigmoid | 1 | SGD | 0.0001 | 20 | 65 | 0.3 | TRUE | FALSE | 20.69
MobileNet | 32 | 2 | Sigmoid | 1 | ADAM | 0.0531 | 2 | 45 | 0.9 | FALSE | FALSE | 20.69
MobileNet | 57 | 1 | ReLU | 2 | RMSprop | 0.0005 | 32 | 21 | 0.4 | TRUE | TRUE | 95.13
MobileNet | 52 | 2 | Sigmoid | 1 | RMSprop | 0.0319 | 5 | 133 | 0.2 | FALSE | FALSE | 17.16
MobileNetV2 | 64 | 3 | ReLU | 2 | ADAM | 0.001 | 32 | 200 | 0.2 | FALSE | FALSE | 92.5
MobileNetV2 | 58 | 1 | ReLU | 1 | RMSprop | 0.0053 | 14 | 16 | 0.7 | FALSE | FALSE | 20.82
MobileNetV2 | 49 | 3 | ReLU | 1 | Adadelta | 0.0009 | 17 | 189 | 0.8 | TRUE | FALSE | 36.42
MobileNetV2 | 44 | 3 | ReLU | 1 | RMSprop | 0.0003 | 26 | 43 | 0.4 | TRUE | TRUE | 86.47
MobileNetV2 | 44 | 3 | ReLU | 1 | SGD | 0.01 | 14 | 65 | 0.6 | TRUE | TRUE | 81.77
MobileNetV2 | 36 | 2 | ReLU | 2 | RMSprop | 0.0001 | 11 | 67 | 0.4 | FALSE | FALSE | 95.6
MobileNetV2 | 64 | 2 | ReLU | 2 | RMSprop | 0.0134 | 7 | 119 | 0.7 | FALSE | TRUE | 20.69
MobileNetV2 | 51 | 2 | Sigmoid | 2 | RMSprop | 0.0015 | 24 | 181 | 0.1 | TRUE | FALSE | 74.05
MobileNetV2 | 51 | 1 | Sigmoid | 1 | RMSprop | 0.0006 | 6 | 168 | 0.5 | TRUE | TRUE | 76.38
MobileNetV2 | 52 | 2 | ReLU | 2 | SGD | 0.0022 | 14 | 66 | 0.3 | FALSE | FALSE | 52.93
MobileNetV2 | 46 | 3 | ReLU | 1 | RMSprop | 0.0003 | 21 | 33 | 0.1 | FALSE | TRUE | 84.01
ResNet-50 | 64 | 3 | ReLU | 2 | ADAM | 0.001 | 32 | 200 | 0.2 | FALSE | FALSE | 97.33
ResNet-50 | 63 | 3 | ReLU | 2 | RMSprop | 0.0601 | 25 | 101 | 0.4 | TRUE | TRUE | 47.85
ResNet-50 | 50 | 3 | ReLU | 1 | RMSprop | 0.0005 | 30 | 174 | 0.8 | FALSE | TRUE | 97.28
ResNet-50 | 50 | 1 | Sigmoid | 1 | ADAM | 0.0051 | 16 | 18 | 0.8 | FALSE | FALSE | 21.25
ResNet-50 | 54 | 2 | Sigmoid | 1 | ADAM | 0.0048 | 7 | 157 | 0.2 | FALSE | TRUE | 74.18
ResNet-50 | 41 | 1 | Sigmoid | 2 | ADAM | 0.0364 | 24 | 129 | 0.3 | FALSE | TRUE | 22.63
ResNet-50 | 45 | 1 | ReLU | 1 | RMSprop | 0.0189 | 13 | 12 | 0.3 | FALSE | TRUE | 85.6
ResNet-50 | 41 | 2 | Sigmoid | 2 | Adadelta | 0.0142 | 11 | 66 | 0.9 | TRUE | TRUE | 97.46
ResNet-50 | 50 | 2 | ReLU | 2 | ADAM | 0.0017 | 23 | 178 | 0.3 | TRUE | TRUE | 95.6
ResNet-50 | 46 | 2 | ReLU | 2 | Adadelta | 0.0049 | 25 | 13 | 0.8 | TRUE | FALSE | 75
ResNet-50 | 51 | 2 | ReLU | 2 | RMSprop | 0.0076 | 32 | 200 | 0.1 | TRUE | TRUE | 96.25
ResNet-101 | 64 | 3 | ReLU | 2 | ADAM | 0.001 | 32 | 200 | 0.2 | FALSE | FALSE | 96.07
ResNet-101 | 42 | 2 | ReLU | 2 | RMSprop | 0.061 | 15 | 110 | 0.6 | FALSE | TRUE | 75.47
ResNet-101 | 52 | 2 | Sigmoid | 1 | RMSprop | 0.003 | 32 | 106 | 0.2 | FALSE | TRUE | 19.01
ResNet-101 | 61 | 3 | ReLU | 2 | Adadelta | 0.0023 | 20 | 54 | 0.7 | TRUE | FALSE | 93.36
ResNet-101 | 45 | 2 | Sigmoid | 2 | ADAM | 0.0001 | 10 | 87 | 0.3 | FALSE | FALSE | 45.13
ResNet-101 | 36 | 2 | Sigmoid | 1 | SGD | 0.0029 | 19 | 133 | 0.1 | FALSE | TRUE | 96.67
ResNet-101 | 38 | 2 | ReLU | 1 | ADAM | 0.0011 | 12 | 130 | 0.7 | TRUE | FALSE | 96.29
ResNet-101 | 44 | 3 | ReLU | 2 | SGD | 0.0002 | 16 | 182 | 0.6 | FALSE | TRUE | 96.8
ResNet-101 | 58 | 2 | Sigmoid | 2 | SGD | 0.0001 | 18 | 32 | 0.4 | FALSE | FALSE | 96.77
ResNet-101 | 32 | 2 | Sigmoid | 2 | Adadelta | 0.1 | 18 | 66 | 0.9 | TRUE | FALSE | 93.92
ResNet-101 | 36 | 1 | ReLU | 2 | ADAM | 0.0016 | 7 | 113 | 0.1 | TRUE | FALSE | 96.94
DenseNet-121 | 64 | 3 | ReLU | 2 | ADAM | 0.001 | 32 | 200 | 0.2 | FALSE | FALSE | 97.96
DenseNet-121 | 63 | 1 | Sigmoid | 1 | ADAM | 0.0329 | 24 | 61 | 0.1 | TRUE | TRUE | 71.38
DenseNet-121 | 38 | 2 | Sigmoid | 2 | RMSprop | 0.0687 | 22 | 38 | 0.8 | FALSE | FALSE | 83.58
DenseNet-121 | 44 | 1 | ReLU | 2 | SGD | 0.0324 | 9 | 167 | 0.7 | FALSE | TRUE | 97.89
DenseNet-121 | 41 | 2 | Sigmoid | 2 | ADAM | 0.0003 | 15 | 67 | 0.6 | FALSE | FALSE | 76.59
DenseNet-121 | 51 | 3 | ReLU | 2 | Adadelta | 0.0091 | 17 | 195 | 0.8 | FALSE | FALSE | 98.45
DenseNet-121 | 49 | 1 | ReLU | 1 | Adadelta | 0.0333 | 11 | 85 | 0.7 | FALSE | TRUE | 98.06
DenseNet-121 | 60 | 2 | ReLU | 1 | ADAM | 0.0217 | 18 | 111 | 0.1 | TRUE | FALSE | 89.44
DenseNet-121 | 34 | 1 | Sigmoid | 2 | SGD | 0.0024 | 10 | 136 | 0.4 | TRUE | FALSE | 82.46
DenseNet-121 | 47 | 2 | Sigmoid | 1 | RMSprop | 0.01 | 14 | 67 | 0.7 | TRUE | FALSE | 95.39
DenseNet-121 | 55 | 2 | Sigmoid | 2 | ADAM | 0.0044 | 18 | 142 | 0.7 | TRUE | FALSE | 76.72
DenseNet-169 | 64 | 3 | ReLU | 2 | ADAM | 0.001 | 32 | 200 | 0.2 | FALSE | FALSE | 94.27
DenseNet-169 | 52 | 2 | ReLU | 1 | ADAM | 0.0183 | 25 | 52 | 0.5 | FALSE | FALSE | 87.93
DenseNet-169 | 40 | 2 | Sigmoid | 1 | RMSprop | 0.0108 | 14 | 69 | 0.8 | TRUE | TRUE | 96.29
DenseNet-169 | 38 | 2 | ReLU | 1 | Adadelta | 0.0022 | 24 | 143 | 0.3 | FALSE | FALSE | 98.43
DenseNet-169 | 52 | 2 | ReLU | 2 | ADAM | 0.0005 | 11 | 188 | 0.3 | TRUE | TRUE | 97.76
DenseNet-169 | 43 | 3 | ReLU | 1 | ADAM | 0.0013 | 24 | 76 | 0.2 | TRUE | TRUE | 96.85
DenseNet-169 | 42 | 1 | ReLU | 2 | ADAM | 0.0004 | 19 | 156 | 0.6 | FALSE | TRUE | 97.93
DenseNet-169 | 54 | 2 | ReLU | 1 | RMSprop | 0.0049 | 4 | 176 | 0.8 | FALSE | FALSE | 91.98
DenseNet-169 | 45 | 3 | Sigmoid | 2 | RMSprop | 0.0003 | 25 | 20 | 0.5 | FALSE | FALSE | 52.84
DenseNet-169 | 49 | 1 | Sigmoid | 1 | SGD | 0.0061 | 2 | 178 | 0.4 | FALSE | FALSE | 96.9
DenseNet-169 | 35 | 3 | ReLU | 1 | ADAM | 0.0621 | 19 | 109 | 0.4 | TRUE | FALSE | 16.72
Method | Precision (%) | Recall (%) | F1 (%) | Accuracy (%) | Loss |
---|---|---|---|---|---
VGG-16 | 97.66 | 97.66 | 97.50 | 98.28 | 0.0760 |
VGG-19 | 96.66 | 96.83 | 96.66 | 97.84 | 0.1701 |
MobileNet | 92.00 | 93.83 | 91.50 | 92.45 | 0.3170 |
MobileNetV2 | 97.66 | 97.93 | 97.66 | 97.84 | 0.3087 |
ResNet-50 | 97.33 | 97.83 | 97.33 | 97.41 | 0.0792 |
ResNet-101 | 96.66 | 96.33 | 96.33 | 96.55 | 0.1346 |
DenseNet-121 | 99.83 | 99.83 | 99.66 | 99.57 | 0.0048 |
DenseNet-169 | 98.83 | 98.83 | 98.66 | 98.92 | 0.0774 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Abu, M.; Zahri, N.A.H.; Amir, A.; Ismail, M.I.; Yaakub, A.; Anwar, S.A.; Ahmad, M.I. A Comprehensive Performance Analysis of Transfer Learning Optimization in Visual Field Defect Classification. Diagnostics 2022, 12, 1258. https://doi.org/10.3390/diagnostics12051258