An Efficient Deep Learning Approach to Automatic Glaucoma Detection Using Optic Disc and Optic Cup Localization
Abstract
1. Introduction
- We present a robust model, EfficientDet-D0 with EfficientNet-B0 as the backbone for keypoint extraction, to improve glaucoma recognition performance while reducing model training and execution time.
- The presented technique can accurately identify glaucomatous regions in human eyes owing to the robustness of the EfficientDet framework.
- Accurate detection and classification of glaucoma-affected images, as the EfficientDet model mitigates overfitting to the training data.
- The model is computationally efficient, as EfficientDet uses a one-stage object detection procedure.
- Extensive performance evaluations were conducted on two datasets, ORIGA and HRF, which are diverse in lesion color, size, and position and contain samples with several distortions, to demonstrate the robustness of the proposed solution.
2. Related Work
3. Proposed Methodology
Algorithm 1: Steps of the presented method.
INPUT: TrD, Ann
OUTPUT: Localized RoI, EfficientDet, Classified glaucoma-diseased portion
TrD — training data.
Ann — position of the glaucomatous region in suspected images.
Localized RoI — glaucomatous area in the output.
EfficientDet — EfficientNet-B0-based EfficientDet network.
Classified glaucoma-diseased portion — class of the identified suspected region.

imageSize ← [x, y]
// Bbox calculation
µ ← AnchorsCalculation(TrD, Ann)
// EfficientDet model
EfficientDet ← EfficientNet-B0-based EfficientDet(imageSize, µ)
[dr, dt] ← split the database into training and testing sets
// Training module of glaucoma recognition
For each sample s in dr
    Extract EfficientNet-B0 keypoints → ds
    Perform feature fusion(ds) → Fs
End For
Train EfficientDet on Fs and compute processing time t_Edet
η_Edet ← DetermineDiseasedPortion(Fs)
Ap_Edet ← Evaluate_AP(EfficientNet-B0, η_Edet)
// Testing module of glaucoma recognition
For each image S in dt
    (a) Compute keypoints via the trained network € → βI
    (b) [Bbox, localization_score, class] ← Predict(βI)
    (c) Output the sample together with Bbox and class
    (d) η ← [η Bbox]
End For
Ap_€ ← evaluate model € employing η
Output_class ← EfficientDet(Ap_€)
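The flow of Algorithm 1 can be sketched in plain Python. Every helper below (`split_dataset`, `extract_keypoints`, `fuse_features`, `predict`) is a hypothetical stand-in for the corresponding EfficientDet component, not the authors' published code; the numeric values are placeholders.

```python
# Illustrative sketch of Algorithm 1; all helpers are hypothetical stand-ins.

def split_dataset(samples, train_ratio=0.8):
    """Split the database into training (dr) and testing (dt) sets."""
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]

def extract_keypoints(sample):
    """Stand-in for EfficientNet-B0 keypoint extraction (ds)."""
    return {"id": sample["id"], "features": sample["pixels"]}

def fuse_features(keypoints):
    """Stand-in for the BiFPN feature-fusion step (Fs)."""
    return sum(keypoints["features"]) / len(keypoints["features"])

def predict(fused):
    """Stand-in for the box/class prediction head: Bbox, score, class."""
    bbox = [10, 10, 50, 50]               # placeholder localized RoI
    score = min(1.0, abs(fused) / 255.0)  # placeholder confidence score
    label = "glaucoma" if score >= 0.5 else "healthy"
    return bbox, score, label

def run_algorithm_1(samples):
    dr, dt = split_dataset(samples)
    # Training module: extract and fuse keypoints for each training sample.
    fused_train = [fuse_features(extract_keypoints(s)) for s in dr]
    # Testing module: localize and classify each test image.
    results = []
    for s in dt:
        bbox, score, label = predict(fuse_features(extract_keypoints(s)))
        results.append({"id": s["id"], "bbox": bbox, "score": score, "class": label})
    return fused_train, results

samples = [{"id": i, "pixels": [i * 40] * 4} for i in range(5)]
_, predictions = run_algorithm_1(samples)
for p in predictions:
    print(p["id"], p["class"], round(p["score"], 3))
```

The one-stage character of the pipeline is visible here: localization and classification come out of a single `predict` call per image, rather than a region-proposal stage followed by a separate classifier.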
3.1. Annotations
3.2. EfficientDet
3.2.1. Feature Extraction through EfficientNet-B0
3.2.2. BiFPN
3.2.3. Box/Class Prediction Network
3.3. Detection Procedure
4. Experimental Results
4.1. Dataset
4.2. Evaluation Metrics
4.3. Proposed Technique Evaluation
4.4. Comparison with Other Object Detection Approaches
4.5. Comparison with State-of-the-Art
4.6. Cross Dataset Validation
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Moreno, M.V.; Houriet, C.; Grounauer, P.A. Ocular Phantom-Based Feasibility Study of an Early Diagnosis Device for Glaucoma. Sensors 2021, 21, 579.
- Xu, Y.L.; Lu, S.; Li, H.X.; Li, R.R. Mixed maximum loss design for optic disc and optic cup segmentation with deep learning from imbalanced samples. Sensors 2019, 19, 4401.
- Syed, H.H.; Tariq, U.; Armghan, A.; Alenezi, F.; Khan, J.A.; Rho, S.; Kadry, S.; Rajinikanth, V. A Rapid Artificial Intelligence-Based Computer-Aided Diagnosis System for COVID-19 Classification from CT Images. Behav. Neurol. 2021, 2021, 2560388.
- Quigley, H.A.; Broman, A.T. The number of people with glaucoma worldwide in 2010 and 2020. Br. J. Ophthalmol. 2006, 90, 262–267.
- Marsden, J. Glaucoma: The silent thief of sight. Nurs. Times 2014, 110, 20–22.
- Khan, M.A.; Akram, T.; Zhang, Y.-D.; Sharif, M. Attributes based skin lesion detection and recognition: A mask RCNN and transfer learning-based deep learning framework. Pattern Recognit. Lett. 2021, 143, 58–66.
- Razzak, M.I.; Naz, S.; Zaib, A. Deep learning for medical image processing: Overview, challenges and the future. Classif. BioApps 2018, 2, 323–350.
- Rehman, A.; Naz, S.; Razzak, M.I.; Akram, F.; Imran, M. A deep learning-based framework for automatic brain tumors classification using transfer learning. Circuits Syst. Signal Process. 2020, 39, 757–775.
- Akram, T.; Attique, M.; Gul, S.; Shahzad, A.; Altaf, M.; Naqvi, S.S.R.; Damaševičius, R.; Maskeliūnas, R. A novel framework for rapid diagnosis of COVID-19 on computed tomography scans. Pattern Anal. Appl. 2021, 24, 951–964.
- Tham, Y.-C.; Li, X.; Wong, T.Y.; Quigley, H.A.; Aung, T.; Cheng, C.Y. Global prevalence of glaucoma and projections of glaucoma burden through 2040: A systematic review and meta-analysis. Ophthalmology 2014, 121, 2081–2090.
- Nawaz, M.; Mehmood, Z.; Nazir, T.; Naqvi, R.A.; Rehman, A.; Iqbal, M.; Saba, T. Skin cancer detection from dermoscopic images using deep learning and fuzzy k-means clustering. Microsc. Res. Tech. 2021, 85, 339–351.
- Khan, T.M.; Khan, M.A.; Rehman, N.U.; Naveed, K.; Afridi, I.U.; Naqvi, S.S.; Raazak, I. Width-wise vessel bifurcation for improved retinal vessel segmentation. Biomed. Signal Process. Control 2022, 71, 103169.
- Dromain, C.; Boyer, B.; Ferré, R.; Canale, S.; Delaloge, S.; Balleyguier, C. Computed-aided diagnosis (CAD) in the detection of breast cancer. Eur. J. Radiol. 2013, 82, 417–423.
- Mehmood, A.; Iqbal, M.; Mehmood, Z.; Irtaza, A.; Nawaz, M.; Nazir, T.; Masood, M. Prediction of Heart Disease Using Deep Convolutional Neural Networks. Arab. J. Sci. Eng. 2021, 46, 3409–3422.
- Arshad, M.; Khan, M.A.; Tariq, U.; Armghan, A.; Alenezi, F.; Younus Javed, M.; Aslam, S.M.; Kadry, S. A Computer-Aided Diagnosis System Using Deep Learning for Multiclass Skin Lesion Classification. Comput. Intell. Neurosci. 2021, 2021, 9619079.
- Khan, I.A.; Moustafa, N.; Razzak, I.; Tanveer, M.; Pi, D.; Pan, Y.; Ali, B.S. XSRU-IoMT: Explainable simple recurrent units for threat detection in Internet of Medical Things networks. Future Gener. Comput. Syst. 2022, 127, 181–193.
- Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10781–10790.
- EfficientDet. Available online: https://github.com/xuannianz/EfficientDet (accessed on 5 September 2021).
- Shoba, S.G.; Therese, A.B. Detection of glaucoma disease in fundus images based on morphological operation and finite element method. Biomed. Signal Process. Control 2020, 62, 101986.
- Pruthi, J.; Khanna, K.; Arora, S. Optic Cup segmentation from retinal fundus images using Glowworm Swarm Optimization for glaucoma detection. Biomed. Signal Process. Control 2020, 60, 102004.
- Kirar, B.S.; Reddy, G.R.S.; Agrawal, D.K. Glaucoma Detection Using SS-QB-VMD-Based Fine Sub-Band Images from Fundus Images. IETE J. Res. 2021, 1–12.
- Qureshi, I.; Khan, M.A.; Sharif, M.; Saba, T.; Ma, J. Detection of glaucoma based on cup-to-disc ratio using fundus images. Int. J. Intell. Syst. Technol. Appl. 2020, 19, 1–16.
- Guo, J.; Azzopardi, G.; Shi, C.; Jansonius, N.M.; Petkov, N. Automatic Determination of Vertical Cup-to-Disc Ratio in Retinal Fundus Images for Glaucoma Screening. IEEE Access 2019, 7, 8527–8541.
- Martins, J.; Cardoso, J.; Soares, F. Offline computer-aided diagnosis for Glaucoma detection using fundus images targeted at mobile devices. Comput. Methods Programs Biomed. 2020, 192, 105341.
- Nayak, D.R.; Das, D.; Majhi, B.; Bhandary, S.V.; Acharya, U.R. ECNet: An evolutionary convolutional network for automated glaucoma detection using fundus images. Biomed. Signal Process. Control 2021, 67, 102559.
- Shinde, R. Glaucoma detection in retinal fundus images using U-Net and supervised machine learning algorithms. Intell. Med. 2021, 5, 100038.
- Song, W.T.; Lai, I.-C.; Su, Y.-Z. A Statistical Robust Glaucoma Detection Framework Combining Retinex, CNN, and DOE Using Fundus Images. IEEE Access 2021, 9, 103772–103783.
- Hemelings, R.; Elen, B.; Barbosa-Breda, J.; Lemmens, S.; Meire, M.; Pourjavan, S.; Vandewalle, E.; Van De Veire, S.; Blaschko, M.B.; De Boever, P.; et al. Accurate prediction of glaucoma from colour fundus images with a convolutional neural network that relies on active and transfer learning. Acta Ophthalmol. 2019, 98, e94–e100.
- Ovreiu, S.; Paraschiv, E.-A.; Ovreiu, E. Deep Learning & Digital Fundus Images: Glaucoma Detection using DenseNet. In Proceedings of the 2021 13th International Conference on Electronics, Computers and Artificial Intelligence (ECAI), Pitesti, Romania, 1–3 July 2021.
- Serte, S.; Serener, A. Graph-based saliency and ensembles of convolutional neural networks for glaucoma detection. IET Image Process. 2020, 15, 797–804.
- Nazir, T.; Irtaza, A.; Starovoitov, V. Optic Disc and Optic Cup Segmentation for Glaucoma Detection from Blur Retinal Images Using Improved Mask-RCNN. Int. J. Opt. 2021, 2021, 6641980.
- Nazir, T.; Irtaza, A.; Javed, A.; Malik, H.; Hussain, D.; Naqvi, R.A. Retinal Image Analysis for Diabetes-Based Eye Disease Detection Using Deep Learning. Appl. Sci. 2020, 10, 6185.
- Yu, S.; Xiao, D.; Frost, S.; Kanagasingam, Y. Robust optic disc and cup segmentation with deep learning for glaucoma detection. Comput. Med. Imaging Graph. 2019, 74, 61–71.
- Gómez-Valverde, J.J.; Antón, A.; Fatti, G.; Liefers, B.; Herranz, A.; Santos, A.; Sánchez, C.I.; Ledesma-Carbayo, M.J. Automatic glaucoma classification using color fundus images based on convolutional neural networks and transfer learning. Biomed. Opt. Express 2019, 10, 892–913.
- Bajwa, M.N.; Malik, M.I.; Siddiqui, S.A.; Dengel, A.; Shafait, F.; Neumeier, W.; Ahmed, S. Two-stage framework for optic disc localization and glaucoma classification in retinal fundus images using deep learning. BMC Med. Inform. Decis. Mak. 2019, 19, 136.
- Zhao, R.; Liao, W.; Zou, B.; Chen, Z.; Li, S. Weakly-Supervised Simultaneous Evidence Identification and Segmentation for Automated Glaucoma Diagnosis. Proc. Conf. AAAI Artif. Intell. 2019, 33, 809–816.
- Liao, W.; Zou, B.; Zhao, R.; Chen, Y.; He, Z.; Zhou, M. Clinical Interpretable Deep Learning Model for Glaucoma Diagnosis. IEEE J. Biomed. Health Inform. 2019, 24, 1405–1412.
- Aceto, G.; Ciuonzo, D.; Montieri, A.; Pescapé, A. Toward effective mobile encrypted traffic classification through deep learning. Neurocomputing 2020, 409, 306–315.
- Hinton, G.E.; Osindero, S.; Teh, Y.-W. A Fast Learning Algorithm for Deep Belief Nets. Neural Comput. 2006, 18, 1527–1554.
- Aceto, G.; Ciuonzo, D.; Montieri, A.; Pescapé, A. MIMETIC: Mobile encrypted traffic classification using multimodal deep learning. Comput. Netw. 2019, 165, 106944.
- Alsajri, M.; Ismail, M.A.; Abdul-Baqi, S. A review on the recent application of Jaya optimization algorithm. In Proceedings of the 2018 1st Annual International Conference on Information and Sciences (AiCIS), Fallujah, Iraq, 20–21 November 2018.
- Ibraheem, H.R.; Hussain, Z.F.; Ali, S.M.; Aljanabi, M.; Mohammed, M.A.; Sutikno, T. A new model for large dataset dimensionality reduction based on teaching learning-based optimization and logistic regression. TELKOMNIKA Telecommun. Comput. Electron. Control. 2020, 18, 1688–1694.
- Fu, H.; Cheng, J.; Xu, Y.; Zhang, C.; Wong, D.W.K.; Liu, J.; Cao, X. Disc-Aware Ensemble Network for Glaucoma Screening From Fundus Image. IEEE Trans. Med. Imaging 2018, 37, 2493–2501.
- Fumero, F.; Alayón, S.; Sigut, J.; Sánchez, J.L.; Sánchez, J.; González, M.; Gonzalez-Hern, M. RIM-ONE: An open retinal image database for optic nerve evaluation. In Proceedings of the 2011 24th International Symposium on Computer-Based Medical Systems (CBMS), Bristol, UK, 27–30 June 2011.
- Batista, F.J.F.; Diaz-Aleman, T.; Sigut, J.; Alayon, S.; Arnay, R.; Angel-Pereira, D. RIM-ONE DL: A Unified Retinal Image Database for Assessing Glaucoma Using Deep Learning. Image Anal. Ster. 2020, 39, 161–167.
- Muhammad, K.; Sharif, M.; Akram, T.; Kadry, S. Intelligent fusion-assisted skin lesion localization and classification for smart healthcare. Neural Comput. Appl. 2021, 31, 1–16.
- Rashid, M.; Sharif, M.; Javed, K.; Akram, T. Classification of gastrointestinal diseases of stomach from WCE using improved saliency-based method and discriminant features selection. Multimed. Tools Appl. 2019, 78, 27743–27770.
- Sharif, M.; Akram, T.; Kadry, S.; Hsu, C.H. A two-stream deep neural network-based intelligent system for complex skin cancer types classification. Int. J. Intell. Syst. 2021, 2, 1–26.
- Imran, T.; Sharif, M.; Tariq, U.; Zhang, Y.-D.; Nam, Y.; Nam, Y.; Kang, B.-G. Malaria Blood Smear Classification Using Deep Learning and Best Features Selection. Comput. Mater. Contin. 2021, 71, 1–15.
- Zia, F.; Irum, I.; Qadri, N.N.; Nam, Y.; Khurshid, K.; Ali, M.; Ashraf, I. A Multilevel Deep Feature Selection Framework for Diabetic Retinopathy Image Classification. Comput. Mater. Contin. 2022, 70, 2261–2276.
Reference | Technique | Accuracy | Limitation |
---|---|---|---|
ML-based | | | |
[19] | CED and FEM along with the SVM classifier | 93.22% | The model is tested on a small dataset. |
[20] | Glowworm Swarm Optimization algorithm | 94.86% | The work is unable to compute the cup-to-disc ratio. |
[21] | SS-QB-VMD along with the LS-SVM classifier | 92.67% | The classification accuracy requires further improvement. |
[22] | Pixel-based threshold along with the watershed transformation | 96.1% | The approach is not robust to scale and rotation alterations in the input image. |
[23] | Disk-selective COSFIRE filters along with the GMLVQ classifier | 97.78% | The work is not robust to noisy samples. |
DL-based | | | |
[24] | MobileNetV2 with a CNN classifier | 88% | The work requires extensive data for model training. |
[25] | ECNet along with the KNN, SVM, BPNN, and ELM classifiers | 96.37% | The technique is computationally expensive. |
[27] | CNN | 98% | The approach needs evaluation on a standard dataset. |
[28] | ResNet-50 | NA | The work is not robust to noise and blurring in the suspected images. |
[29] | DenseNet-201 | 97% | The approach requires further performance improvement. |
[30] | AlexNet, ResNet-50, and ResNet-152 | 88% | The work requires extensive processing power. |
[31] | Mask-RCNN | 96.5% | The work needs further performance improvement. |
[32] | FRCNN along with FKM | 95% | The work is computationally inefficient. |
[33] | U-Net | 96.44% | Detection accuracy depends on the quality of the fundus samples. |
[34] | VGG-16 | 83.03% | The model needs extensive training data. |
[35] | Faster-RCNN | 96.14% | The work is not robust to color variations in the input images. |
[36] | WSMTL | NA | The classification performance requires improvement. |
[37] | ResNet | 88% | The method is not robust to blurry images. |
Model Parameters | Value |
---|---|
No. of epochs | 60 |
Learning rate | 0.01 |
Selected batch size | 90 |
Confidence score value | 0.5 |
Unmatched score value | 0.5 |
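For illustration only, the training settings listed above could be gathered into a single configuration object; the field names below are our own shorthand, not from any released configuration file of the authors.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrainConfig:
    """Training settings reported for the EfficientDet-D0 model
    (field names are illustrative, not from a published config)."""
    epochs: int = 60
    learning_rate: float = 0.01
    batch_size: int = 90
    confidence_score: float = 0.5   # detections below this score are discarded
    unmatched_score: float = 0.5    # anchors below this overlap stay unmatched

cfg = TrainConfig()
print(cfg.epochs, cfg.learning_rate, cfg.batch_size)
```

Freezing the dataclass makes the hyperparameters immutable once a run starts, which keeps an experiment's settings reproducible.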
Model | mAP | Test Time (s/img) |
---|---|---|
RCNN | 0.913 | 0.30 |
Faster-RCNN | 0.940 | 0.25 |
Mask-RCNN | 0.942 | 0.24 |
DenseNet77-based Mask-RCNN | 0.965 | 0.23 |
Proposed | 0.971 | 0.20 |
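The mAP figures above rest on intersection-over-union (IoU) between predicted and ground-truth boxes: a detection counts as a true positive only when its overlap is high enough. A minimal IoU routine (boxes as [x1, y1, x2, y2]; the paper's exact matching rules are an assumption here) might look like:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes [x1, y1, x2, y2]."""
    # Coordinates of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Union = sum of areas minus the shared intersection.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A common convention treats a prediction as correct when IoU >= 0.5.
print(iou([0, 0, 10, 10], [5, 5, 15, 15]))
```

Average precision then summarizes, per class, the precision–recall trade-off over all detections ranked by confidence; mAP is the mean of these per-class values.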
Approach | AUC | Recall | Time (s) |
---|---|---|---|
Liao et al. [37] | 0.880 | - | - |
Fu et al. [43] | 0.910 | 0.920 | - |
Bajwa et al. [35] | 0.868 | 0.710 | - |
Nazir et al. [32] | 0.941 | 0.945 | 0.90 |
Nazir et al. [31] | 0.970 | 0.963 | 0.55 |
Proposed | 0.979 | 0.970 | 0.20 |
Dataset | ORIGA (Test) | HRF (Test) | RIM-ONE DL (Test) |
---|---|---|---|
ORIGA (trained) | 97.20% | 98.21% | 97.96% |
RIM-ONE DL (trained) | 97.83% | 98.19% | 97.85% |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Nawaz, M.; Nazir, T.; Javed, A.; Tariq, U.; Yong, H.-S.; Khan, M.A.; Cha, J. An Efficient Deep Learning Approach to Automatic Glaucoma Detection Using Optic Disc and Optic Cup Localization. Sensors 2022, 22, 434. https://doi.org/10.3390/s22020434