Article

Untangling Computer-Aided Diagnostic System for Screening Diabetic Retinopathy Based on Deep Learning Techniques

by
Muhammad Shoaib Farooq
1,*,
Ansif Arooj
2,
Roobaea Alroobaea
3,
Abdullah M. Baqasah
4,
Mohamed Yaseen Jabarulla
5,
Dilbag Singh
5 and
Ruhama Sardar
1
1
Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore 54000, Pakistan
2
Division of Science and Technology, University of Education, Lahore 54000, Pakistan
3
Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
4
Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
5
School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju 61005, Korea
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(5), 1803; https://doi.org/10.3390/s22051803
Submission received: 15 December 2021 / Revised: 16 February 2022 / Accepted: 17 February 2022 / Published: 24 February 2022
(This article belongs to the Special Issue Machine Learning and AI for Medical Data Analysis)

Abstract

Diabetic Retinopathy (DR) is a predominant cause of visual impairment and loss. Approximately 285 million people worldwide are affected by diabetes, and one-third of these patients show symptoms of DR. It tends to affect patients who have had diabetes for 20 years or more, but its impact can be reduced by early detection and proper treatment. Diagnosis of DR using manual methods is a time-consuming and expensive task that requires trained ophthalmologists to observe and evaluate digital fundus images of the retina. This study aims to systematically find and analyze high-quality research work on the diagnosis of DR using deep learning approaches. It covers DR grading and staging protocols and also presents a DR taxonomy. Furthermore, it identifies, compares, and investigates the deep learning-based algorithms, techniques, and methods for classifying DR stages. Various publicly available datasets used for deep learning have also been analyzed and summarized for descriptive and empirical understanding of real-time DR applications. Our in-depth study shows that in the last few years there has been an increasing inclination towards deep learning approaches: 35% of the studies used Convolutional Neural Networks (CNNs), 26% implemented Ensemble CNNs (ECNNs), and 13% used Deep Neural Networks (DNNs), which are amongst the most used algorithms for DR classification. Thus, using deep learning algorithms for DR diagnostics has future research potential for solutions based on early detection and prevention of DR.

1. Introduction

Diabetes, also called diabetes mellitus, is a disease that occurs when the human body accumulates a large amount of glucose in the blood. Glucose is the main source of energy and comes from the food that the human body consumes daily. Diabetes has become the cause of many complications such as heart disease, stroke, nerve damage, foot problems, gum disease, and many more [1]. The eye is also one of the major organs affected by diabetes. The diabetic problem associated with the eye is called Diabetic Retinopathy (DR). A comparison of a normal and a DR eye is illustrated in Figure 1, inspired by [2]. DR is the primary cause of blindness, mostly in adults [3]. It disturbs the retina, the light-sensitive part of the eye, and can become the cause of blindness if it is not diagnosed in the early stages or not treated well. The chances of suffering from DR become higher the longer a patient has had the disease; a patient with a 20-year history of diabetes has an 80% chance of developing DR [4]. It is also notable that a DR patient may have no symptoms or only a mild vision problem. Any damage or abnormal change in the tissue of an organ caused by disease is called a lesion. A consensus of international experts [5] formulated the International Clinical Diabetic Retinopathy (ICDR) and Diabetic Macular Edema (DME) Disease Severity Scales to standardize DR classification. ICDR improves the coordination and communication among physicians caring for patients with diabetes [6,7].

1.1. Retinal Lesions

Microaneurysms (MA), Hemorrhages (HM), and soft and hard exudates (SE and HE) are frequently observed by ophthalmologists in retinal fundus images [8]. The presence of these lesions is a pathognomonic sign of Diabetic Retinopathy (DR).
(i) Microaneurysms: The earliest symptom of DR is the presence of microaneurysms in the retina of a diabetic patient. They may appear as groups of tiny, dark red spots or look like tiny hemorrhages within the light-sensitive area of the retina. They vary in size, mostly between 10 and 100 microns but less than 150 microns [8,9,10]. In the early stage they do not threaten the eyesight, but they need to be diagnosed and treated early.
(ii) Hemorrhages: A primary symptom of DR is leakage of blood into the retina of the eye, which may appear anywhere in the retina, with any shape and size [11]. Hemorrhages are of many types, such as flame hemorrhages or dot and blot hemorrhages.
(iii) Soft Exudates: Soft exudates are also known as cotton wool spots [12]. They are often pale yellow or buttery in color and round or oval in shape, and are due to capillary occlusions that cause permanent damage to the function of the retina.
(iv) Hard Exudates: This is the most advanced abnormality of DR. Hard exudates can range in size from tiny fragments to larger patches and become the cause of visual impairment by blocking light from reaching the retina [8]. Figure 2 demonstrates the lesions in retinal images.
Automatic detection of these lesions could help in early DR monitoring and in controlling progression in an efficient and optimal way. For many decades, researchers have put their efforts into building effective methods that can identify different abnormalities with high accuracy. However, extracting accurate features is a challenging task. Moreover, the selection of suitable machine learning (ML) algorithms at early stages leads to better results in early DR detection and can thus be considered the foremost step toward pervasive solutions. All algorithms have their pros and cons, but no method can be regarded as superior in all stages [13]. Recently, deep learning networks have received great attention due to their ability to automatically extract features and their strong abstract representation learning. Like machine learning, this paradigm is also based on learning from data, but instead of using handcrafted features, which are very difficult and time-consuming to design, DL provides many advantages such as improved performance on training data and the exclusion of the cumbersome feature extraction process.
With the huge development of computing power and digital imaging, the algorithmic methods for detecting the severity of DR have improved over the past 5 years. A significant amount of work has already been carried out on automatic DR detection using fundus photographs, but only one comprehensive review has been produced to date [14]. Previous studies and reviews have discussed traditional methods of DR detection [9,15,16], but expert knowledge is required for the hand-engineered characteristics of this highly sensitive investigation with several parameter settings. The recent era of computer-based medical applications and solutions has provided a huge amount of data and the hardware capabilities to process these datasets. For instance, rapid advances in graphics processing units (GPUs) have encouraged many researchers to apply computer vision and image processing to achieve optimal performance. These developments have gained the interest of many researchers and market contributors over traditional methods. The work completed in this paper is divided into three main parts: (i) an overview of different tasks to diagnose DR, (ii) a summary of publicly available datasets, and (iii) an outline of DL methods for blood vessel and optic disc segmentation and for the detection and classification of various DR lesions. Existing reviews, however, do not address the questions we aim to answer in this study. A comparison of these studies is presented in Table 1.

1.2. Purpose of This Study

This article presents a descriptive review exploring significant work related to detecting DR by classifying its different stages, for instance mild, severe, and PDR, using deep learning techniques. The proposed study also covers the public as well as private datasets used in the primary selected studies. Although several literature reviews have previously been presented compiling deep learning work on DR detection [14,19,20], some limitations have been found in these studies; these are addressed in this study.
Contributions: This study is conducted systematically and follows the practice of the Cochrane methodology to synthesize evidence from randomized trials for addressing the research questions [21]. The primary aim of this study is to provide the following state-of-the-art features:
  • A comprehensive overview of DR grading protocols and taxonomy to provide a general understanding of DR.
  • To identify the need for automatic detection of DR and ascertain how deep learning can ease the traditional detection process for early detection and faster recovery.
  • To conduct a meta-analysis of 61 research studies addressing deep learning techniques for DR classification problems, together with a quality assessment evaluation of these studies.
  • To identify the quality research gaps in the body of knowledge in order to recommend the research directions to address these identified gaps. To the best of our knowledge, this study is the first work that focuses on preparing the taxonomy of DR problems and identification of the deep learning techniques for DR classifications based on the grading protocols that have efficient and highly accurate results.
The rest of the paper is structured in the following manner: Section 2 outlines the methodology used to conduct this study. It includes three phases, planning, conducting, and documenting; moreover, the classification criteria used to categorize studies according to the scoring criteria are also part of this section. Section 3 describes the mapping study and findings, which discuss the most used deep learning algorithms as well as the publicly available datasets for diagnosing DR stages. Section 4 contains the discussion and explains the validity threats that could affect this study. Section 5 outlines future directions, and the last section, Section 6, provides the conclusion with suggestions for future work.

2. Research Methodologies

2.1. Systematic Literature Review Protocol

We have developed a research protocol for identifying, planning, synthesizing, and screening all the published work relevant to the research questions of this review. The main idea of this review is to explore the body of knowledge on using deep learning for DR. For this review, we have adopted the guidelines of the Cochrane methodology as a reference [21]. According to the authors, the study is composed of three basic steps: (i) planning, (ii) conducting, and (iii) documenting the review. Thus, we have developed the research protocol inspired by their methodology, presented in Figure 3. The main objective of our study is to explore well-reputed digital libraries in search of answers to the research questions using the search terms.

2.2. Planning Phase

The initial phase of the study is to devise the protocol for review in a rigorous and well-described manner to preserve the primary rationale of the study. In this phase, we have prepared the review procedure from identification, evaluation, and interpretation of all available resources for deep learning in DR. Moreover, research objectives, research questions, and searching processes have been the main segments of this part of the research.

2.2.1. Research Objectives

The objectives of building this study are:
  • RO1: To identify the clinical features of retinal images for DR detection.
  • RO2: To consolidate the DR stages and grading protocols for DR classification.
  • RO3: To propose a state-of-the-art taxonomy highlighting the deep learning methods, approaches, and techniques used for DR, and to investigate the deep learning methods used to detect DR.
  • RO4: To consolidate the available DR datasets at well-reputed repositories for validating the deep learning approaches to classify different DR stages.
  • RO5: To identify the research gaps in terms of challenges and open issues and propose the solution in each DR research domain.

2.2.2. Research Questions

In order to gain an in-depth view of the domain, the Research Questions (RQ) are presented in Table 2 together with their motivations. These questions will help us classify the current research on detecting DR with deep learning and also provide future directions for the research domain.

2.2.3. Search Process and String

A well-managed plan is needed to prepare the searching process and narrow down the targeted results to focused solutions and answers. For an effective approach, we executed automatic and manual searching strategies on well-reputed repositories such as IEEE Xplore, Science Direct, arXiv, ResearchGate, ACM, LANCET, PLOS, PubMed, and SpringerLink. After a careful and rigorous review, the following search string was prepared for automatic retrieval of studies from the selected search engines, as demonstrated in Figure 4:
("diabetic" AND "retinopathy") AND ("Computer" OR "Automate*" OR "assisted" OR "early detect*" OR "identifi*" OR "screening") AND ("deep" OR "learning" OR "neural network”)
However, to retrieve the most relevant research papers, we performed a preliminary retrieval on the selected search engines to confirm that adequate results were returned. Furthermore, for optimal retrieval, the search terms need to be constrained, because overly broad terms, with machine learning as a base, can distort the results; the coverage of the terms was therefore limited to deep learning only. The conditions for the searching process are described as:
  • Retrieval of relevant keywords in deep learning for DR that can satisfy all research questions of this study;
  • Identification of domain synonyms and related terms;
  • Formulation of a state-of-the-art search string consisting of key and substantial terms joined with "AND" or "OR" Boolean operators (a small illustrative sketch follows the list).
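As an illustration of how such a Boolean query can be assembled programmatically, the following Python sketch composes the search string shown above from its keyword groups; the helper function is purely hypothetical and not part of the review protocol.

```python
# Minimal sketch: composing the Boolean search string used for automatic retrieval.
# The keyword groups are taken from the string reported above; the helper itself
# is illustrative only and not part of the review protocol.

def group(terms, op):
    """Join a list of terms with the given Boolean operator and wrap in parentheses."""
    return "(" + f" {op} ".join(f'"{t}"' for t in terms) + ")"

query = " AND ".join([
    group(["diabetic", "retinopathy"], "AND"),
    group(["Computer", "Automate*", "assisted", "early detect*", "identifi*", "screening"], "OR"),
    group(["deep", "learning", "neural network"], "OR"),
])
print(query)  # reproduces the search string listed in Section 2.2.3
```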

2.3. Conducting Phase

The second phase of the research methodology of this study is the execution of the search. It deals with the collection of all existing research papers and studies on deep learning for DR. Before conducting the systematic literature review, however, we created the inclusion and exclusion criteria for the study, along with the data extraction and synthesis strategy.

2.3.1. Inclusion/Exclusion Criteria

The preliminary step after retrieving all papers from the well-reputed repositories and search engines is to eliminate redundant titles and all irrelevant papers. Based on Table 3, retrieved papers that met any of the exclusion criteria were excluded.

2.3.2. Study Selection

This step aimed to identify all papers that were most significant to the objectives of this systematic review. To eliminate duplicate inclusion, we considered each paper only once, even when it was retrieved from several search engines or repositories. Every selected paper was reviewed on the basis of its title, abstract, keywords, and conclusion. Table 4 gives details of the number of search results found in the digital libraries and the research papers selected after applying the inclusion/exclusion criteria.

2.3.3. Quality Assessment Criteria

In the final set, each publication was assessed for its quality. The quality assessment was performed during the data extraction phase and ensures that the selected studies make a valuable contribution to the study. To fulfill this purpose, a questionnaire inspired by [22] was designed. The criteria are labeled (a), (b), (c), (d), and (e). The final score of each paper is calculated by assigning an individual score for each criterion and summing them. The quality assessment and scoring criteria are presented in Table 5.
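As a worked example of the accumulative scoring just described, the short Python sketch below sums hypothetical per-criterion scores for one paper; the numbers are illustrative and not taken from Table 5.

```python
# Hypothetical per-criterion scores for one paper, following the (a)-(e) labels above.
criterion_scores = {"a": 1.0, "b": 0.5, "c": 1.0, "d": 0.0, "e": 1.0}

# The final quality score is simply the accumulated sum of the individual scores.
final_score = sum(criterion_scores.values())
print(f"Final quality score: {final_score}")  # 3.5 for this hypothetical paper
```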

2.3.4. Data Extraction Strategy and Syntheses Method

Data extraction strategy has been defined in Table A1 in Appendix A to collect the relevant information which was required to address research questions defined in this systematic mapping study. Data extraction ID from DE1 to DE3 collects the basic information related to the papers. These data extraction features include study identifier, publication type, source of the publications, and the publication location. The remaining IDs from DE4 to DE6 were extracted after studying the papers.
Figure 5 shows the step-by-step procedure for searching for relevant studies. We retrieved a total of 5120 studies after searching the selected digital libraries. In the first step, we eliminated studies that were not relevant to the research questions listed earlier and also removed duplicates. After this initial step, a rigorous manual screening based on the title, abstract, and content of the initially screened studies was performed; excluding studies that focused only on detecting lesions of diabetic retinopathy removed a further 755 studies. In the last step, only 61 studies were selected for further consideration. Importantly, 90% of the selected studies were published in JCR-listed journals and 10% in conference proceedings. Figure 6 presents the year-wise publication status, with 16% of papers selected from 2016 and 2017, 10% from 2018, 31% from 2019, and 20% from 2021.

2.3.5. Classification Criteria

The classification criteria were divided into categories that were established through selected primary studies as below.
RQ1 categories include:
  • Deep learning techniques to identify DR severity levels or stages;
  • Stages of DR;
  • Highest accuracy rate achieved by the deep learning algorithms.
RQ2 categories include:
  • Publicly available datasets;
  • Privately available datasets.
The last question, RQ3, includes:
  • Limitation of the study;
  • Open issues and challenges;
  • Future Directions.

2.4. Documenting Phase

After conducting the study, we present the analysis of the results in a comprehensive format and have also developed the meta-analysis in Table 6, Table 7, Table 8 and Table 9 in order to present the information in a precise, comprehensive, and easy-to-understand format.

3. Mapping Results and Findings

The step-wise process of examination, review, and extraction is presented in Figure 5 and Figure A1. These figures demonstrate the step-by-step observation and assessment of the selected studies. After in-depth examination, six more papers were eliminated due to low relevance to the research questions of this study. Finally, 61 studies were selected after automatic and manual screening and evaluation. This section aims to review, analyze, and present the selected studies to answer the research questions listed in Table 2. Each research question is addressed individually along with an in-depth review. Table 10 and Table 11 present the quality assessment of each selected paper with respect to the quality assessment criteria in Table 5. The results show that this area of study and research has potential for future enhancement and optimization of health care solutions involving image processing and deep learning.

3.1. Research Question 1: What Clinical Features of Retinal Images Are Required for DR Detection and Classification and Which Deep Learning Methods Are Mostly Used to Classify DR Problems?

To answer RQ1, the deep learning techniques used to detect DR in its early stages have been reviewed. Furthermore, architectures, tools, libraries, and performance metrics have been analyzed and reviewed. However, before describing the deep learning methods, the clinical features and taxonomy required to detect DR need to be understood. In the subsections below, we summarize some of the important DR clinical features and also present the lesions and DR grading protocols. Furthermore, the taxonomy of DR detection and classification is demonstrated in Figure 7.

3.1.1. DR Stages and Grading

According to the lesions, the severity of DR is mainly divided into five stages [17] and illustrated in Figure 8 and Figure 9:
  • No DR;
  • Mild non-proliferative DR (Class 1);
  • Moderate non-proliferative DR (Class 2);
  • Severe non-proliferative DR (Class 3);
  • Proliferative DR (Class 4).
In no DR, the eye is clear of any problem caused by DR. In mild diabetic retinopathy, the retina has balloon-like swellings called micro-aneurysms; this is the early stage of the disease. As the disease progresses, it converts into a more severe stage called moderate diabetic retinopathy, in which the blood vessels nurturing the retina may swell and distort, losing their ability to transport blood and changing the appearance of the retina. In severe DR, many blood vessels have become blocked, cutting off the blood supply to areas of the retina.
The most advanced stage of DR is Proliferative Diabetic Retinopathy (PDR), which may cause sudden blindness due to vitreous hemorrhage. If DR is not detected and treated properly in its severe stage, it can turn into PDR [13]. Demonstrations of the different stages of DR are shown in Figure 7, and Figure 8 presents the taxonomy of DR with respect to the severity of the problem. A summary of the diabetic retinopathy grading protocols is given in Table 12, and the nomenclature is presented in the Abbreviations.

3.1.2. DR Screening & Detection Techniques

Detection and diagnosis of DR in its early stage can help to prevent or slow down the process of visual impairment or blindness. Expert ophthalmologists use standard grading protocols for grading diabetic retinopathy, and screening can enable earlier DR detection. In a screening session, the retinas of both eyes are observed by expert ophthalmologists, and if DR is detected, the patient is referred for further treatment [83]. The DR screening methods are listed below:
  • Visual Acuity Test;
  • Ophthalmoscopy or fundus photography;
  • Fundus Fluorescein angiography (FFA);
  • Retinal vessel analysis;
  • Optical coherence tomography (OCT).

3.1.3. Clinical Features of Retinal Images for DR Detection

(1) Retinal Blood Vessel Classification
Retinal blood vessels form a complex anatomical structure of the human eye and can only be observed by non-invasive methods. The structure of these vessels and its changes reflect the impact of the DR stages and help to understand the progression of the disease. Moreover, a cataract is a cloudy and thick area of the eye. It starts with protein transformation in the eye, which makes the image of any object foggy or frosty and also causes pain. It is graded into four classes: normal, mild, moderate, and severe [82]. Given this clinical picture, vessel segmentation and classification is tedious work due to several features of the vessels such as the length, width, position, and tortuosity of branches. Therefore, automatic vessel segmentation and classification is necessary for early DR detection and recovery. Several deep learning studies have been undertaken to achieve accurate and better methods and models for retinal blood vessel classification. For instance, the Deep Supervision and Smoothness Regularization Network (DSSRN) was proposed for deep observation of the regular network of blood vessels; DSSRN was built on VGG16, and Conditional Random Fields were used to enforce the smoothness of the results, achieving higher accuracy but low sensitivity [84]. Patch-based ensemble models have been applied to the DRIVE, STARE, and CHASE datasets with optimal accuracy and sensitivity. Moreover, several classification methods transfer semantic knowledge between side output layers; the ResNet-101 and VGGNet CNN models were used for experiments and achieved better accuracy on the STARE and DRIVE datasets, although these models cannot be used for micro-vessel segmentation with wider angles [50,64]. In another study, a CNN model with 12 variations was used to differentiate vessel and non-vessel segments, with the DRIVE dataset used for evaluation [84]. A DCNN was used on three datasets, DRIVE, STARE, and CHASE, for vessel segmentation with extensive pre-processing, i.e., normalization and data augmentation by geometric feature transformation [65]. Regression analysis was applied with a pre-trained VGG model on vessel images by modifying the VGG layers on the DRIVE and STARE datasets [85]. The performance analysis of all the proposed segmentation models is presented in Table 6, Table 7, Table 8 and Table 9.
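Several of the cited approaches are patch-based: the fundus image is cut into small patches and each patch is labeled by the vessel mask at its center pixel. The following Python sketch illustrates this preprocessing step under stated assumptions; the patch size, stride, and random stand-in data are hypothetical and not taken from the cited studies.

```python
import numpy as np

def extract_patches(image, vessel_mask, patch_size=27, stride=9):
    """Cut a fundus image into square patches; label each patch by the vessel
    mask value at its centre pixel (1 = vessel, 0 = background).
    Patch size and stride are illustrative only."""
    half = patch_size // 2
    patches, labels = [], []
    h, w = vessel_mask.shape
    for y in range(half, h - half, stride):
        for x in range(half, w - half, stride):
            patches.append(image[y - half:y + half + 1, x - half:x + half + 1])
            labels.append(int(vessel_mask[y, x] > 0))
    return np.stack(patches), np.array(labels)

# Usage with random stand-in data shaped like a DRIVE image (584 x 565 pixels):
rng = np.random.default_rng(0)
img = rng.random((584, 565, 3))
mask = (rng.random((584, 565)) > 0.9).astype(np.uint8)
X, y = extract_patches(img, mask)
print(X.shape, y.mean())  # (n_patches, 27, 27, 3), fraction of vessel-centred patches
```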
(2) Optic Disc Feature Segmentation
The optic disc feature can be extracted by considering conversion and contrast as parameters and then developing the DR diagnostic algorithm. Importantly, localization and classification of optic disc segments are two well-known operations for detecting DR. During the survey conducted for this study, it was observed that boundary classification, edge detection, and orbital approximation are the major tasks in deep learning optic disc feature classification for DR. It was also observed that patch-based CNN, multi-scale CNN, FCN, and RCNN are the most commonly used deep learning methods for segmentation and localization. The detailed description and performance analysis of this feature is presented in Table 6, Table 7, Table 8 and Table 9.
(3) Lesion Detection and Classification
Referable DR identification and classification is not possible without information about lesions. State-of-the-art machines can identify DR without lesion information, but their predictions then lack clinical explanation despite having high accuracy. It has been observed that recent enhancements in visualization methods have encouraged researchers to work intensively on DR detection; generic heat maps and gradient models are the major contribution to lesion detection and classification to date. However, detection of multiple lesions has still not achieved acceptable accuracy. During the study, we intensively reviewed lesion-based DR work. In total, we retrieved 23 papers on DR lesion classification, covering micro-aneurysms (11 studies), hard exudates (9 articles), and macular edema and hemorrhages (3 studies). The details of these studies are described in Table 6, Table 7, Table 8 and Table 9.

3.1.4. Deep Learning Methods for DR Problems

(1) Convolutional Neural Network (CNN)
A convolutional neural network (CNN) is a category of neural network heavily used in the field of computer vision. It consists of an input layer, hidden layers, and a fully connected output layer. It derives its name from the convolutional layer, which is part of the hidden layers. The hidden layers of a CNN typically consist of convolutional, pooling, normalization, and fully connected layers [54,86,87]. The convolutional layer is the main building block of any CNN architecture. CNN is the most common type of architecture used for detecting DR in its early stages due to its neural and hierarchical structure. The authors in [23,29,39,43,88] demonstrate the effectiveness of CNNs for five-stage classification of DR, and the authors in [39,40,44] demonstrate CNN performance parameters for classifying two or three stages of DR. Refs. [46,51,54,57,60] used the CNN architecture either to improve the results of already published studies or to detect the five different stages of DR [51,53,54,58,59,69,77,89]. Experiments with a small fraction of the data can achieve state-of-the-art performance in referable or non-referable DR classification.
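As a concrete illustration of the layer types mentioned above (convolutional, pooling, normalization, and fully connected), the following PyTorch sketch defines a small CNN for five-class DR grading. The architecture and all layer sizes are minimal assumptions for demonstration and are not taken from any of the cited studies.

```python
import torch
import torch.nn as nn

class SmallDRNet(nn.Module):
    """Minimal CNN for 5-class DR grading (No DR, mild, moderate, severe, PDR).
    Purely illustrative; layer sizes are assumptions, not from the cited studies."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolutional layer
            nn.BatchNorm2d(16),                            # normalization layer
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                               # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, num_classes),                    # fully connected output layer
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Forward pass on a dummy batch of 224x224 fundus crops:
logits = SmallDRNet()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 5])
```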
(2) Deep Neural Network (DNN)
DNNs have also shown remarkable performance in many computer vision tasks. The term "deep" generally refers to the use of a few or several convolutional layers in a network, and deep learning refers to the use of methodologies that train the model to learn essential parameters automatically from data representations to solve problems in a specific domain of interest [10,55,72,90]. The major benefit of this algorithm is that, as the number of samples in the training set increases, classification accuracy also increases. This algorithm has been used in [25,30,35,44,46,49,51] to improve classification accuracy among different DR stages. In [40], the authors build a DCNN model with three fully connected layers to automatically learn the image's local features and to generate a classification model.
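A hedged sketch of such a design, several convolutional blocks followed by three fully connected layers, is shown below; the filter counts and layer widths are assumptions chosen for illustration and are not the configuration reported in [40].

```python
import torch
import torch.nn as nn

# Illustrative DCNN with three fully connected layers, loosely following the
# description above; all sizes are assumptions, not the settings from [40].
dcnn = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
    nn.Flatten(),
    nn.Linear(128 * 4 * 4, 256), nn.ReLU(),   # fully connected layer 1
    nn.Linear(256, 64), nn.ReLU(),            # fully connected layer 2
    nn.Linear(64, 5),                         # fully connected layer 3 (5 DR grades)
)
print(dcnn(torch.randn(1, 3, 128, 128)).shape)  # torch.Size([1, 5])
```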
(3) Ensemble Convolutional Neural Network
Combining two or more different models to obtain better training results is not a new technique in machine learning; for example, a decision tree model can be upgraded to a random forest model. In deep learning, an ensemble of neural networks is formed by collecting several networks with the same configuration, assigning initial weights randomly, and training each on the dataset. Every model makes its own predictions, and in the end the actual prediction is calculated by taking the average of all predictions. The number of models in an ensemble is kept as small as three, five, or ten trained models due to the computational cost and the diminishing returns in performance. Ensemble techniques have been used to improve classification accuracy and model performance [14,26,32,37,38,50,53,55,63]. Based on the primary selected studies, some other DL methods have also been applied to detect DR on different data sets; these include graph neural networks, Hopfield neural networks, deep multiple instance learning, BNCNN, GoogLeNet, and densely connected neural networks.
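The prediction-averaging step described above can be written compactly. The sketch below assumes a list of already-trained PyTorch models and simply averages their softmax outputs; the model names in the usage comment are hypothetical.

```python
import torch

def ensemble_predict(models, images):
    """Average the softmax predictions of several independently trained models,
    as in the ensemble CNN approach described above (illustrative only)."""
    with torch.no_grad():
        probs = [torch.softmax(m(images), dim=1) for m in models]
    avg = torch.stack(probs).mean(dim=0)   # average over the ensemble members
    return avg.argmax(dim=1)               # predicted DR grade per image

# Usage with, e.g., three models sharing the same architecture but trained from
# different random initialisations (names are placeholders):
# preds = ensemble_predict([model_a, model_b, model_c], batch_of_fundus_images)
```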
Figure 10 helps to answer RQ1. The chart shows that researchers mostly prefer the CNN architecture (31%) for detecting the complex stages of DR, while Deep CNN (20%) has also gained good attention and results. Based on the final selected studies, three major algorithms, CNN, DCNN, and ensemble CNN, were most commonly implemented for DR detection. Moreover, for red lesion detection, YOLOv3 has also been proposed [76] together with CNN and DCNN. Deep belief networks, Recurrent Neural Networks (RNN), autoencoder-based methods, and stacked autoencoder methods have also been used for DR classification problems. Some researchers [26,27,28,32,37,46,63] ensembled CNN variations to achieve better performance and results; the authors in [35] used the Adaboost algorithm to combine ensemble CNN models and demonstrated optimization with higher accuracy. Moreover, the authors in [40,50,53,60,61,73,80,91,92] combined two or more CNN models to strengthen the training process for DR detection. An in-depth analysis with the major contribution of each selected study is presented in Table 9 and Table 10.

3.2. Research Question 2: Which DR Datasets Have Been Acquired, Managed, and Classified to Identify Several Stages of DR?

To answer RQ2, we thoroughly studied each final selected study. Each of the 61 studies used different sets of private or publicly available data to validate their results. This section starts by reviewing the most popular publicly available datasets for the task of detecting different severity levels of DR; thereafter, we focus especially on the datasets that were used by the researchers in the selected studies. Figure 11 represents the rapid increase in DR datasets for classification and segmentation [87].

Publicly Available Datasets

There are many datasets publicly available online for various eye conditions such as glaucoma, age-related macular degeneration, and diabetic retinopathy. These publicly available datasets aim to provide good retinal images for training and testing purposes. Such datasets are very important for the advancement of science, as they lead to the development of better algorithms and support the transfer of technology from research laboratories to clinical practice. The descriptions of these datasets are listed in Table 13. There are several reasons for providing datasets publicly. (1) Sharing and availability of data encourages researchers to process and analyze the data, thus leading to better solutions for DR-based applications; diversity and availability of datasets can have remarkable results. In previous studies, it has been observed that, due to the unavailability of relevant data, many solutions were not able to achieve their intended goals, whereas these solutions could achieve better results when paired with relevant and updated datasets. (2) Almost all research in the same area of study requires the same type of data; replicating data collection would not be necessary if these datasets were publicly available, which is beneficial in economic and financial terms as well as saving time and effort. (3) Publicly available data and their re-processing and re-use for diverse analyses are fundamental resources for innovation. Research and experiments on these datasets can provide new insights and opportunities for researchers to collaborate with medical practitioners to develop data-driven applications. Moreover, these datasets statistically empower the solutions and encourage multidisciplinary researchers to come up with new analyses using their expertise, thus leading to comparative analyses and optimal solutions.
(i) DRIVE Dataset: DRIVE, abbreviated from Digital Retinal Images for Vessel Extraction, is a publicly available dataset consisting of 40 color fundus images used for automated vessel detection algorithms. The DRIVE images were taken in the Netherlands during a DR screening program. The total screening population consisted of 400 patients aged 25–90 years. From these data, 40 images were selected randomly; 33 images show no signs of diabetic retinopathy, while 7 images show early signs of DR such as exudates and hemorrhages. Each image is stored in compressed JPEG format, and all images were acquired with a Canon CR5 camera. The set of 40 images is divided equally into 20 images for training and 20 images for testing. For the training images, a single manual segmentation of the vasculature is available; for the test images, two manual segmentations are available. Extraction or segmentation of the blood vessels from the retinal image is the task many researchers have tried to automate, as it is a very difficult and time-consuming process. The studies [37,64,65,67,68,69,70,73] used the DRIVE dataset for their research work.
(ii) STARE Dataset: Structured Analysis of the Retina (STARE) was initiated by Michael Goldbaum at the University of California. It contains 20 digitized images captured with a TopCon TRV-50 camera. The STARE website provides 20 images for blood vessel detection with labeled ground truth and 81 images for optic disc localization with ground truth. The performance of vessel detection is measured using ROC curve analysis, where sensitivity is the proportion of correctly classified blood vessel pixels and specificity is the proportion of correctly classified normal pixels. In the case of the optic disc, localization performance is measured against the correctly localized optic disc, and localization is successful if the algorithm detects the optic disc within 60 pixels of the ground truth. Refs. [27,64,65,68,72] have used the STARE dataset for results validation.
(iii) MESSIDOR Dataset: Messidor is the largest publicly available dataset online and contains 1200 color eye fundus retinal images. It was acquired by three ophthalmology departments using a color video 3CCD camera. All images are stored in TIFF format. Eight hundred images were acquired with dilation of the pupil and the other 400 without dilation. Its primary aim is to support analysis of the complexity of diabetic retinopathy through the evaluation and comparison of algorithms. The severity of DR is measured based on the existence and number of diabetic lesions and on their distance from the macula. Many studies, for instance [10,32,35,39,40,45,46,57,58,59,65,68,70,79,89], have used the Messidor dataset for deep learning and classification.
(iv) DIARETDB Dataset: This is a publicly available dataset for the detection of diabetic retinopathy from fundus images. The dataset consists of 130 images, of which 20 show no sign of retinopathy and the remainder show signs such as exudates, hemorrhages, and micro-aneurysms. All images were captured using a 50-degree field-of-view fundus camera with unknown camera settings. The main aims of designing this dataset were to define the data unambiguously and to provide a testing protocol that can be used as a benchmark for diabetic retinopathy detection methods. DIARETDB has further versions, including DIARETDB0 and DIARETDB1. Refs. [32,60] used the DIARETDB dataset.
(v) EYEPACS Dataset: EyePACS consists of nearly 10,000 retinal images and provides a free platform for retinopathy screening. The images were taken under a variety of retinal conditions, and every subject provides left and right fields. A clinician rated each image according to the presence of diabetic retinopathy. The images in this dataset were taken with different types and models of camera. Due to its large size, the dataset has been divided into separate files with multi-part archives such as train.zip, test.zip, and sample.zip. Refs. [10,40,46,58,65,68,70,78] have used this dataset in their research.
(vi) KAGGLE Dataset: The dataset released by Kaggle in 2015 consists of 82 GB of image files, containing a total of 88,702 color fundus images. Each image was rated by an ophthalmologist for the presence of lesions using severity levels from 0 to 4, where level 0 contains images with no sign of retinopathy and level 4 contains images with advanced stages of retinopathy. The training set is highly unbalanced, with 25,811 images for level 0, 2444 images for level 1, 5293 images for level 2, 874 images for level 3, and 709 images for level 4. This dataset contains challenging images of poor quality and focus, which makes it difficult for any algorithm to classify them correctly according to the severity of retinopathy. Many researchers, for instance [10,22,26,32,35,40,44,46,53,77,90], have used the Kaggle dataset.
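One common way to cope with the class imbalance just described is to weight each class inversely to its frequency; the Python sketch below computes such weights from the image counts listed above. The inverse-frequency weighting scheme is a generic choice for illustration, not a recipe prescribed by the cited studies.

```python
# Class counts of the Kaggle training set as reported above (severity levels 0-4).
counts = {0: 25811, 1: 2444, 2: 5293, 3: 874, 4: 709}
total = sum(counts.values())

# Inverse-frequency weights, scaled so a perfectly balanced dataset would give 1.0 per class.
weights = {lvl: total / (len(counts) * n) for lvl, n in counts.items()}
for lvl, w in weights.items():
    print(f"level {lvl}: weight {w:.2f}")
# Such weights can be passed, e.g., to a weighted cross-entropy loss during training.
```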
(vii) Others: Instead of using only publicly available datasets, some researchers [23,24,28,29,30,31,33,34,36,37,39,40,47,55,60,72,74,80,81,93] have preferred their own data to classify DR stages, and a few researchers [40,50] combined two datasets, one public and one private, to create a new dataset. To answer RQ2, this section has provided detailed information about the existing datasets most used to detect two to five stages of DR. Figure 8 shows that, rather than only using public datasets, some researchers prefer to collect their own data (34%), either from the screening process or from hospitals or research laboratories. The Kaggle dataset (29%) also receives considerable attention for detecting all five stages of DR. The Messidor dataset was used in 20% of the cases, EyePACS in 9%, and the DIARETDB, STARE, and DRIVE datasets in only 2% each. Table 13 lists some of the publicly available datasets with their description format and download links.

3.3. Research Question 3: What Are the Open Issues and Challenges of Using Deep Learning Methods to Detect DR?

RQ3 is associated with identifying the main challenges faced by researchers while conducting experiments and further highlights the open issues of DL in the area of detecting DR automatically.

3.3.1. DR Detection Challenges

After thoroughly reviewing and processing the final selected papers, we have highlighted some of the main challenges faced by the researchers. Generally, these challenges include the size of the dataset, dataset bias, dataset quality, computation cost, and ethical and legal issues.
(i) Dataset Bias: The annotation work is based on the experience of the trained ophthalmologists or physicians and on how they grade the images [28,40,58,83]. As a result, the algorithm may perform differently when an image containing features that the majority of clinicians could not identify (microaneurysms, exudates, and other important biomarkers of DR) is fed into the network, which may lead to various errors in the results [20,30,42,46,94].
(ii) Dataset Quality Issue: The quality of a DR imaging dataset is mainly affected by how the dataset is collected, i.e., the quality and type of camera and overexposure to light [33]. A low-quality camera may miss important information, and dark dots caused by dust on the camera may lead to misclassified DR lesions [30,42,58].
(iii) Computational Cost: Deep neural networks are computationally intensive, multilayered algorithms consisting of millions of parameters. Therefore, the convergence of such algorithms requires considerable computational power as well as running time [29,38,95]. Although there is no strict rule about how much data are required to optimally train a neural network, empirical studies suggest that a tenfold increase in training data produces a more effective model. Furthermore, training on a large number of high-quality images requires a powerful GPU, which can be very costly [48].
(iv) Dataset Size: Managing dataset size is also a common challenge faced by many researchers. Algorithms applied to a small dataset may not be able to accurately identify the severity grades of DR, while choosing an overly large dataset may lead to over-fitting [29,31,46,48].
(v) Interpretability: The power of DL methods to map complex, non-linear functions makes them hard to interpret. Deep learning algorithms act like a black box in which the algorithm automatically extracts discriminative features from the images and associated grades [23,26,36]; therefore, the specific features chosen by the algorithm are unknown. To date, understanding the features used by deep learning to perform its calculations is an active area of research [49,83].
(vi) Legal Issues: Medical malpractice rules govern standards of clinical practice to ensure that patients receive proper care. However, no standard rules have yet been established to assign blame when an algorithm provides bad predictions or poor recommendations for treatment. The creation of such rules is an essential prerequisite for the worldwide adoption of deep learning algorithms in the medical field.
(vii) Ethical Issues: Some DR imaging data sets are publicly accessible to researchers. However, retrieving and collecting private datasets without a formal agreement raises several ethical issues, particularly when the research contains sensitive information about the patients.

3.3.2. Open Issues

Deep learning is a promising technique that can be used for diagnosing DR. However, researchers should pay attention when selecting the DL technique. DL models often require a large amount of data to be trained efficiently, which is not always publicly available or sometimes requires permission from hospitals or research labs. This restricts work in this field to researchers based at large academic medical centers where such data are available, and largely excludes the core deep learning community that has the essential algorithmic and theoretical background to advance the field. Moreover, the training data used become essential for checking the performance of a model; it is currently almost impossible to compare new approaches proposed in the literature with each other if the training data are not shared publicly when publishing the manuscript. Additionally, when training a DL model there is often no clear methodology for deciding how many layers should be added or which algorithm will give appropriate results for detecting DR automatically. Setting hyper-parameters while training the model also affects its performance, and it is difficult to judge which hyper-parameter settings to choose; sometimes, retraining the parameters in the model causes a decrease in accuracy [39,94,96].

4. Discussion

This study reviewed deep learning techniques used to detect the five main stages of DR. A taxonomy of the severity levels of DR according to the number of lesions present in the retinal fundus image is demonstrated in Figure 9. Given the damage DR can cause, diagnostic tools that can diagnose DR automatically with less involvement of experts are a major requirement. This literature review applies a systematic approach to estimate the diagnostic accuracy of DL methods for DR classification in patients with long-term diabetes. Compared to previously tested machine learning techniques, an advantage of DL is an extensive reduction in the manpower required for extracting features by hand, as DL algorithms learn to extract features automatically. Moreover, incorporating DL algorithms might enhance the performance of automatic DR diagnostic systems.
We collected 39 studies that explore the use of deep learning for automatic DR-detection problems. Eight studies report accuracy above 95% [20,24,28,31,36,45,48,52], three studies achieve accuracy above 90% [25,27,37], and four studies report accuracy above 80% [19,21,23,26,46]. All these studies show the performance rate required for detecting DR automatically and accurately. Seven studies report specificity and sensitivity varying between 90 and 98% [40,41,43,45,56,57,58,94,96].
One study achieves the highest Area Under the Curve (AUC) score of 100% [52]. All the mentioned studies reach high performance rates, demonstrating the potential of DL approaches to detect DR efficiently, while keeping in mind that the included studies vary with respect to target condition, the dataset used for training, and the reference standards. A direct comparison is possible between 6 of the 39 selected studies, as they use the same validation dataset (Kaggle) to detect the five severity levels of DR using CNNs [21,22,38,42,83].
Of these studies, one [21] achieves the highest accuracy score of 80.8% on Kaggle, and [42] reaches 92.2% specificity and 80.28% sensitivity at a high-sensitivity set point. The CNN models are judged against a strong reference standard, because most experts expect them to improve the performance and accuracy of diagnosis compared with detection by a human grader. Nevertheless, some aspects are still changing, and it is difficult to say whether a high performance score is due to the algorithm itself or to a combination of multiple factors. Besides the Kaggle dataset, ten studies used the Messidor-2 dataset to validate their results and achieved the highest performance scores.
DL algorithms can be integrated into a screening program in numerous ways. Turning to the limitations of the primary studies, it is worth noting that the DL models could only be validated using high-quality images and also needed a huge number of images for training, e.g., the Messidor-2, STARE, and DRIVE datasets. The images present in a dataset are not necessarily of good quality and may include noise and distortion, which could lead to misclassification and degrade the model's performance. Secondly, the majority of the included studies used privately collected datasets that are not publicly accessible, so other researchers could not validate their reported accuracy. Lastly, training on a large number of images requires a GPU, which is sometimes prohibitively expensive [22,28]. Given these limitations, improvements are needed before the manual system can be completely replaced with CAD tools. While our results are based on an evidence-based methodology, this study still has some limitations. Firstly, the search strategy could have been more sensitive; the search string was run on search engines to collect studies belonging to JCR journals and CORE conferences. Moreover, applying a snow-balling approach to collect papers might have led to the inclusion of some further studies. Lastly, because the studies are not directly comparable, it was difficult to estimate how a DL algorithm would perform in the real world. The purpose of this study is to provide an increased focus on the development of different DL algorithms and their performance in classifying the various stages of DR, as the included studies show great promise.
The focus of this study was only on detecting the different stages of DR using deep learning techniques, and not on lesion detection, DME, or other retinal diseases. In the future, it would be interesting to see whether combining DR classification with other detection techniques improves the performance of the algorithms. Moreover, studies should report sensitivity, AUC, specificity, the confusion matrix, and false positive and false negative values, because these are important for assessing patient safety.
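For completeness, the metrics recommended above can be reported with a few lines of scikit-learn, as sketched below; the label and probability arrays are placeholder values for illustration, not results from any of the reviewed studies.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Placeholder binary labels (1 = referable DR) and predicted probabilities.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_prob = np.array([0.1, 0.4, 0.8, 0.7, 0.3, 0.2, 0.9, 0.6])
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)          # true positive rate (recall)
specificity = tn / (tn + fp)          # true negative rate
auc = roc_auc_score(y_true, y_prob)   # area under the ROC curve
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, AUC={auc:.2f}")
```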
In conclusion, deep learning is currently a very active topic (as can be seen from the selected studies, the majority of the work was completed in 2019) and the leading method for automatically classifying DR in patients with long-term diabetes based on diagnostic performance. Even though CAD systems cannot completely outperform humans, it is suggested to take advantage of deep learning and build semi-automated deep learning models for use in screening programs to reduce the burden on ophthalmologists.

Threats to Validity

This study was conducted in May 2021, so studies that appeared after that date have not been captured. Reflecting on the methodology, this study has the following limitations.
Construct Validity: The search string was built using keywords intended to extract all relevant papers, but there is a possibility that adding some extra keywords could alter the final results. The search string was run on IEEE Xplore, the ACM digital library, PLOS, ResearchGate, ScienceDirect, arXiv, PubMed, and Springer. These digital libraries were considered a major source of data extraction in our area of interest. We did not run the search string on Google Scholar to find related studies; we believe that most studies on detecting DR automatically can be found in these digital libraries, covering all ranks of conferences and journals.
Internal validity: Internal validity deals with the data analysis and extraction process. Some studies make overlapping contributions and areas of focus; generally, it has also been noted that a study usually focuses on only a single component of the research area. For example, a previously published study may be improved by adding one or more features [22,28]. In such cases, the study claiming the contribution in that area of focus has been considered for categorization. In some studies, different titles were used for DR detection; after reading the whole paper, those studies were categorized based on the DR severity stages.
External validity: Papers from various source engines were examined and selected by the authors using the university's database access, but some papers might have been missed due to limited access to digital libraries. This threat was managed by requesting the full article from the original authors, although some authors did not reply. While shortlisting the articles most relevant to the area of focus, some papers might have been discarded due to meeting one or more exclusion criteria. A time limit was introduced in the search for published studies, so the representation of the selected studies might be affected.
Conclusion validity: Threats to conclusion validity were managed by a clear presentation of each step of the systematic study. While examining the studies, care was taken not to identify incorrect relationships that might lead to incorrect conclusions. In the authors' opinion, slight publication misclassification and selection bias would not change the main conclusions drawn from the 61 studies selected in our systematic study.

5. Future Directions

This study has revealed that deep learning can be useful for DR detection and classification, but several aspects remain open for research. Many deep learning and advanced computation techniques have been used to solve DR problems; however, due to the gap between the medical and technical fields, the interpretation and accuracy of solutions with respect to clinical understanding are still under discussion. Therefore, the result of this study does not include clinical accuracy and effectiveness. Another challenge is designing deep-learning models, because these models require a huge amount of data. As described in the previous section, the availability of datasets is no longer a problem; rather, image annotation is one of the major issues, because expert ophthalmologists are required to develop accurate annotations and thus enable better deep-learning models. Data augmentation is also required for real-time DR datasets, and the development of knowledge-based data augmentation features is required for robust deep learning techniques. Moreover, image classification for DR problems requires a variety of fundus image samples, but the available datasets are not appropriate for every kind of DR problem; in particular, these images create a bias for DR classification. Creating morphological lesion variations and extensive data augmentation methods for preserving and classifying different kinds of DR problems is therefore another research direction. The summary of DR-related research issues and their solutions is presented in Table 14.
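As a simple illustration of the kind of augmentation mentioned above, the following torchvision sketch applies geometric and photometric transforms commonly used on fundus images; the specific transform set and parameters are assumptions for demonstration, not a validated recipe from the cited work.

```python
from torchvision import transforms

# Illustrative augmentation pipeline for fundus images; the transform choices
# and parameters are assumptions, not a recommendation from the reviewed studies.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(degrees=15),                 # small rotations
    transforms.ColorJitter(brightness=0.1, contrast=0.1),  # mild photometric changes
    transforms.RandomResizedCrop(224, scale=(0.9, 1.0)),
    transforms.ToTensor(),
])
# Applying `augment` to a PIL fundus image yields a randomly perturbed tensor
# each epoch, effectively enlarging a small training set.
```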

6. Conclusions

Diabetic retinopathy is a leading cause of severe eye conditions that damage the retinal vessels and cause vision problems. Late detection of DR can damage the retinal vessels dangerously and thus lead to blindness, whereas early detection can prevent this potential damage. Computational methods, including image processing, machine learning, and deep learning, have been used for early DR detection. Deep learning has opened a new paradigm for designing and developing models for the identification of DR complications, including segmentation and classification, and numerous deep learning techniques have been implemented to detect the different, complex stages of diabetic retinopathy. In this study, we have performed a review to accurately present the state of the art in deep learning methods for detecting and classifying DR. We have presented the taxonomy and DR grading schemes with an in-depth analysis of all the related DR problems, and we have identified the major challenges and accomplishments of the studies. Eight major publication portals were used to search for relevant articles. The analysis and review have presented the main DL techniques used for multi-class or binary classification of DR, together with a thorough comparison of studies based on the architecture, performance, and tools used. We have also highlighted the primary data sets used by the research community according to dataset type, image type, and source. Lastly, we have provided insight into the open issues and challenges researchers face while detecting diabetic retinopathy automatically with DL methods, which also point to potential future research directions in this area. Generally, deep learning methods have overtaken traditional manual detection methods. This study has provided a comprehensive understanding of the importance of deep learning methods for early detection, which can advise patients according to the severity of the problem, and of future insights: 35% of the studies used Convolutional Neural Networks (CNNs), 26% implemented Ensemble CNNs (ECNNs), and 13% used Deep Neural Networks (DNNs) for DR classification. An in-depth analysis of the algorithms used for DR classification has also been presented. This review gives a comprehensive view of state-of-the-art deep-learning-based methods for DR diagnosis and will help researchers conduct further research on this problem.

Author Contributions

Initial identification, preparation of the research statement and research questions: M.S.F., A.A. and R.S. Evaluation and selection of literature: R.A., A.M.B. and A.A. Organization of literature: M.S.F., A.A., M.Y.J. and D.S. Wrote the paper: M.S.F., A.A. and R.S. Revised the manuscript: M.S.F., A.A., A.M.B. and M.Y.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Taif University Researchers Supporting Project, Taif University, Taif, Saudi Arabia under Grant TURSP-2020/36.

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

Not Applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

BDR: Background diabetic retinopathy
DMI: Diabetic macular ischemia
PDR: Proliferative diabetic retinopathy
NPDR: Non-Proliferative Diabetic Retinopathy
CVF: Central Visual Field
IRMA: Intraretinal Microvascular Abnormalities
BC: Base Curve
MA: Microaneurysm
DM: Diabetes mellitus
R2: Pre-proliferative Retinopathy
DME: Diabetic Macular Edema
ETDRS: Early Treatment Diabetic Retinopathy Study
HMs: Hemorrhages
EXs: Exudates
HEs: Hard Exudates
SEs: Soft Exudates
VB: Venous Beading
NV: Neovascularization
ME: Macular Edema
OD: Optic Disc
AMD: Age-Related Macular Degeneration
AAO: American Academy of Ophthalmology
ICDR: International Clinical Diabetic Retinopathy

Appendix A

Figure A1. The obtained results from the execution of the study filtering process.
Table A1. Data Extraction Strategy.
DE ID | Data Extraction | Description | Type
DE1 | Study Identifier | Unique identity for every study | General
DE2 | Publication Type | Nature of publication (journal/conference/symposium, etc.) | General
DE3 | Bibliographic References | Title, author names, publication year, location | General
DE4 | Dataset | Type of dataset, total number of images in each dataset, source | RQ2
DE5 | Deep Learning Architecture | Type of deep learning methods used in selected studies | RQ1
DE6 | Type of Classification | Multi-class or binary | RQ2

References

  1. Diabetes, Heart Disease, and Stroke. February 2019. Available online: https://www.niddk.nih.gov/health-information/diabetes/overview/preventing-problems/heart-disease-stroke (accessed on 14 December 2021).
  2. ‘Diabetic Retinopathy-Symptoms and Causes’, Mayo Clinic. Available online: https://www.mayoclinic.org/diseases-conditions/diabetic-retinopathy/symptoms-causes/syc-20371611 (accessed on 14 December 2021).
  3. Introduction to Diabetes and Diabetic Retinopathy. Available online: https://www.visionaware.org/info/your-eye-condition/diabetic-retinopathy/1 (accessed on 14 December 2021).
  4. Quellec, G.; Lamard, M.; Josselin, P.M.; Cazuguel, G.; Cochener, B.; Roux, C. Optimal wavelet transform for the detection of microaneurysms in retina photographs. IEEE Trans. Med Imaging 2008, 27, 1230–1241. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Abràmoff, M.D.; Folk, J.C.; Han, D.P.; Walker, J.D.; Williams, D.F.; Russell, S.R.; Massin, P.; Cochener, B.; Gain, P.; Tang, L.; et al. Automated Analysis of Retinal Images for Detection of Referable Diabetic Retinopathy. JAMA Ophthalmol. 2013, 131, 351–357. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Wilkinson, C.P.; Ferris, F.L.; Klein, R.E.; Lee, P.P.; Agardh, C.D.; Davis, M.; Dills, D.; Kampik, A.; Pararajasegaram, R.; Verdaguer, J.T.; et al. Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales. Ophthalmology 2003, 110, 1677–1682. [Google Scholar] [CrossRef]
  7. Wong, T.Y.; Cheung, C.M.G.; Larsen, M.; Sharma, S.; Simó, R. Diabetic retinopathy. Nat. Rev. Dis. Prim. 2016, 2, 1–16. [Google Scholar] [CrossRef] [PubMed]
  8. EyeRound.org. October 2010. Available online: https://webeye.ophth.uiowa.edu/eyeforum/tutorials/Diabetic-Retinopathy-Med-Students/Classification.htm (accessed on 14 December 2021).
  9. Shilpa, J.; Karule, P.T. A review on exudates detection methods for diabetic retinopathy. Biomed. Pharmacother. 2018, 97, 1454–1460. [Google Scholar]
  10. Beede, E.; Baylor, E.; Hersch, F.; Iurchenko, A.; Wilcox, L.; Ruamviboonsuk, P.; Vardoulakis, L.M. A human-centered evaluation of a deep learning system deployed in clinics for the detection of diabetic retinopathy. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–12. [Google Scholar]
  11. American Academy of Ophthalmology. Available online: https://www.aao.org/eyenet/article/vitreous-hemorrhage-diagnosis-treatment-2 (accessed on 14 December 2021).
  12. Brown, G.C.; Brown, M.M.; Hiller, T.Y.R.I.E.; Fischer, D.A.V.I.D.; Benson, W.E.; Magargal, L.E. Cotton-wool spots. Retina 1985, 5, 206–214. [Google Scholar] [CrossRef] [PubMed]
  13. Difference Between Machine Learning And Deep Learning. Available online: http://www.iamwire.com/2017/11/difference-between-machine-learning-and-deep-learning/169100 (accessed on 14 December 2021).
  14. Asiri, N.; Hussain, M.; Abualsamh, H.A. Deep Learning based Computer-Aided Diagnosis Systems for Diabetic Retinopathy: A Survey. arXiv 2018, arXiv:1811.01238. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Zhao, L.; Ren, H.; Zhang, J.; Cao, Y.; Wang, Y.; Meng, D.; Li, L. Diabetic retinopathy, classified using the lesion-aware deep learning system, predicts diabetic end-stage renal disease in chinese patients. Endocr. Pract. 2020, 26, 429–443. [Google Scholar] [CrossRef]
  16. Mansour, R.F. Evolutionary computing enriched computer-aided diagnosis system for diabetic retinopathy: A survey. IEEE Rev. Biomed. Eng. 2017, 10, 334. [Google Scholar] [CrossRef]
  17. National Eye Institute. August 2019. Available online: https://www.nei.nih.gov/learn-about-eye-health/eye-conditions-and-diseases/diabetic-retinopathy (accessed on 14 December 2021).
  18. Pires, R.; Avila, S.; Wainer, J.; Valle, E.; Abramoff, M.D.; Rocha, A. A data-driven approach to referable diabetic retinopathy detection. Artif. Intell. Med. 2019, 96, 93–106. [Google Scholar] [CrossRef]
  19. Nielsen, K.B.; Lautrup, M.L.; Andersen, J.K.; Savarimuthu, T.R.; Grauslund, J. Deep learning–based algorithms in screening of diabetic retinopathy: A systematic review of diagnostic performance. Ophthalmol. Retin. 2019, 3, 294–304. [Google Scholar] [CrossRef] [PubMed]
  20. Alyoubi, W.L.; Shalash, W.M.; Abulkhair, M.F. Diabetic retinopathy detection through deep learning techniques: A review. Inform. Med. Unlocked 2020, 100377. [Google Scholar] [CrossRef]
  21. Chler, J.; Hopewell, S. Cochrane methods-twenty years experience in developing systematic review methods. Syst. Rev. 2013, 2, 76. [Google Scholar] [CrossRef] [Green Version]
  22. Ouhbi, S.; Idri, A.; Fernández-Alemán, J.L.; Toval, A. Requirements engineering education: A systematic mapping study. Requir. Eng. 2015, 20, 119–138. [Google Scholar] [CrossRef]
  23. Zeng, X.; Chen, H.; Luo, Y.; Ye, W. Automated Diabetic Retinopathy Detection Based on Binocular Siamese-Like Convolutional Neural Network. IEEE Access 2019, 7, 30744–30753. [Google Scholar] [CrossRef]
  24. Sun, Y. The Neural Network of One-Dimensional Convolution-An Example of the Diagnosis of Diabetic Retinopathy. IEEE Access 2019, 7, 69657–69666. [Google Scholar] [CrossRef]
  25. Gao, Z.; Li, J.; Guo, J.; Chen, Y.; Yi, Z.; Zhong, J. Diagnosis of Diabetic Retinopathy Using Deep Neural Networks. IEEE Access 2018, 7, 3360–3370. [Google Scholar] [CrossRef]
  26. Qummar, S.; Khan, F.G.; Shah, S.; Khan, A.; Shamshirb, S.; Rehman, Z.U.; Jadoon, W. A Deep Learning Ensemble Approach for Diabetic Retinopathy Detection. IEEE Access 2019, 7, 150530–150539. [Google Scholar] [CrossRef]
  27. Zhang, W.; Zhong, J.; Yang, S.; Gao, Z.; Hu, J.; Chen, Y.; Yi, Z. Automated identification and grading system of diabetic retinopathy using deep neural networks. Knowl.-Based Syst. 2019, 175, 12–25. [Google Scholar] [CrossRef]
  28. Liu, Y.P.; Li, Z.; Xu, C.; Li, J.; Liang, R. Referable diabetic retinopathy identification from eye fundus images with weighted path for convolutional neural network. Artif. Intell. Med. 2019, 99, 101694. [Google Scholar] [CrossRef]
  29. Li, T.; Gao, Y.; Wang, K.; Guo, S.; Liu, H.; Kang, H. Diagnostic Assessment of Deep Learning Algorithms for Diabetic Retinopathy Screening. Inf. Sci. 2019, 501, 511–522. [Google Scholar] [CrossRef]
  30. Li, X.; Shen, L.; Shen, M.; Tan, F.; Qiu, C.S. Deep learning based early stage diabetic retinopathy detection using optical coherence tomography. Neurocomputing 2019, 369, 134–144. [Google Scholar] [CrossRef]
  31. Takahashi, H.; Tampo, H.; Arai, Y.; Inoue, Y.; Kawashima, H. Applying artificial intelligence to disease staging: Deep learning for improved staging of diabetic retinopathy. PLoS ONE 2017, 12, e0179790. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  32. Bellemo, V.; Lim, Z.W.; Lim, G.; Nguyen, Q.D.; Xie, Y.; Yip, M.Y.; Lee, M.L. Artificial intelligence using deep learning to screen for referable and vision-threatening diabetic retinopathy in Africa: A clinical validation study. Lancet Digit. Health 2019, 1, e35–e44. [Google Scholar] [CrossRef] [Green Version]
  33. Zhou, L.; Zhao, Y.; Yang, J.; Yu, Q.; Xu, X. Deep multiple instance learning for automatic detection of diabetic retinopathy in retinal images. IET Image Process. 2017, 12, 563–571. [Google Scholar] [CrossRef]
  34. Li, Q.; Fan, S.; Chen, C. An Intelligent Segmentation and Diagnosis Method for Diabetic Retinopathy Based on Improved U-NET Network. J. Med. Syst. 2019, 43, 304. [Google Scholar] [CrossRef] [PubMed]
  35. Yip, M.Y.T.; Lim, Z.W.; Lim, G.; Quang, N.D.; Hamzah, H.; Ho, J.; Hsu, W. Enhanced Detection of Referable Diabetic Retinopathy via DCNNs and Transfer Learning. In Proceedings of the Asian Conference on Computer Vision, Perth, WA, Australia, 2–6 December 2018; Springer: Cham, Switzerland, 2018; pp. 282–288. [Google Scholar]
  36. Chakravarthy, S.N.; Singhal, H.; RP, N.Y. DR-NET: A Stacked Convolutional Classifier Framework for Detection of Diabetic Retinopathy. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–7. [Google Scholar]
  37. Jiang, H.; Yang, K.; Gao, M.; Zhang, D.; Ma, H.; Qian, W. An Interpretable Ensemble Deep Learning Model for Diabetic Retinopathy Disease Classification. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 2045–2048. [Google Scholar]
  38. Suriyal, S.; Druzgalski, C.; Gautam, K. Mobile assisted diabetic retinopathy detection using deep neural network. In Proceedings of the 2018 Global Medical Engineering Physics Exchanges/Pan American Health Care Exchanges (GMEPE/PAHCE), Porto, Portugal, 19–24 March 2018; pp. 1–4. [Google Scholar]
  39. Raumviboonsuk, P.; Krause, J.; Chotcomwongse, P.; Sayres, R.; Raman, R.; Widner, K.; Silpa-Acha, S. Deep Learning vs. Human Graders for Classifying Severity Levels of Diabetic Retinopathy in a Real-World Nationwide Screening Program. arXiv 2018, arXiv:1810.08290. [Google Scholar]
  40. Voets, M.; Møllersen, K.; Bongo, L.A. Replication study: Development and validation of deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. arXiv 2018, arXiv:1803.04337. [Google Scholar]
  41. Sakaguchi, A.; Wu, R.; Kamata, S.I. Fundus Image Classification for Diabetic Retinopathy Using Disease Severity Grading. In Proceedings of the 2019 9th International Conference on Biomedical Engineering and Technology, Tokyo, Japan, 28–30 March 2019; pp. 190–196. [Google Scholar]
  42. Romero-Aroca, P.; Verges-Puig, R.; de la Torre, J.; Valls, A.; Relaño-Barambio, N.; Puig, D.; Baget-Bernaldiz, M. Validation of a Deep Learning Algorithm for Diabetic Retinopathy. Telemed. e-Health 2019, 26, 1001–1009. [Google Scholar] [CrossRef]
  43. Nagasawa, T.; Tabuchi, H.; Masumoto, H.; Enno, H.; Niki, M.; Ohara, Z.; Mitamura, Y. Accuracy of ultrawide-field fundus ophthalmoscopy-assisted deep learning for detecting treatment-naïve proliferative diabetic retinopathy. Int. Ophthalmol. 2019, 1–7. [Google Scholar] [CrossRef] [Green Version]
  44. Raju, M.; Pagidimarri, V.; Barreto, R.; Kadam, A.; Kasivajjala, V.; Aswath, A. Development of a deep learning algorithm for automatic diagnosis of diabetic retinopathy. In MEDINFO 2017: Precision Healthcare through Informatics; IOS Press: Amsterdam, The Netherlands, 2017; pp. 559–563. [Google Scholar]
  45. Ramach, R.N.; Hong, S.C.; Sime, M.J.; Wilson, G.A. Diabetic retinopathy screening using deep neural network. Clin. Exp. Ophthalmol. 2018, 46, 412–416. [Google Scholar]
  46. Hemanth, D.J.; Anitha, J.; Mittal, M. Diabetic retinopathy diagnosis from retinal images using modified hopfield neural network. J. Med Syst. 2018, 42, 247. [Google Scholar] [CrossRef] [PubMed]
  47. Li, Y.-H.; Yeh, N.-N.; Chen, S.-J.; Chung, Y.-C. Computer-Assisted Diagnosis for Diabetic Retinopathy Based on Fundus Images Using Deep Convolutional Neural Network. Mob. Inf. Syst. 2019, 14, 6142839. [Google Scholar] [CrossRef]
  48. Gargeya, R.; Leng, T. Automated identification of diabetic retinopathy using deep learning. Ophthalmology 2017, 124, 962–969. [Google Scholar] [CrossRef]
  49. Mansour, R.F. Deep-learning-based automatic computer-aided diagnosis system for diabetic retinopathy. Biomed. Eng. Lett. 2018, 8, 41–57. [Google Scholar] [CrossRef]
  50. Abràmoff, M.D.; Lou, Y.; Erginay, A.; Clarida, W.; Amelon, R.; Folk, J.C.; Niemeijer, M. Improved automated detection of diabetic retinopathy on a publicly available dataset through integration of deep learning. Investig. Ophthalmol. Vis. Sci. 2016, 57, 5200–5206. [Google Scholar] [CrossRef] [Green Version]
  51. Sahlsten, J.; Jaskari, J.; Kivinen, J.; Turunen, L.; Jaanio, E.; Hietala, K.; Kaski, K. Deep Learning Fundus Image Analysis for Diabetic Retinopathy and Macular Edema Grading. arXiv 2019, arXiv:1904.08764. [Google Scholar] [CrossRef] [Green Version]
  52. Swapna, G.; Vinayakumar, R.; Soman, K.P. Diabetes detection using deep learning algorithms. ICT Express 2018, 4, 243–246. [Google Scholar]
  53. Pratt, H.; Coenen, F.; Broadbent, D.M.; Harding, S.P.; Zheng, Y. Convolutional neural networks for diabetic retinopathy. Procedia Comput. Sci. 2016, 90, 200–205. [Google Scholar] [CrossRef] [Green Version]
  54. Riaz, H.; Park, J.; Choi, H.; Kim, H.; Kim, J. Deep and Densely Connected Networks for Classification of Diabetic Retinopathy. Diagnostics 2020, 10, 24. [Google Scholar] [CrossRef] [Green Version]
  55. Keel, S.; Wu, J.; Lee, P.Y.; Scheetz, J.; He, M. Visualizing Deep Learning Models for the Detection of Referable Diabetic Retinopathy and Glaucoma. JAMA Ophthalmol. 2019, 137, 288–292. [Google Scholar] [CrossRef] [PubMed]
  56. Ting, D.S.W.; Cheung, C.Y.L.; Lim, G.; Tan, G.S.W.; Quang, N.D.; Gan, A.; Wong, E.Y.M. Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. JAMA 2017, 318, 2211–2223. [Google Scholar] [CrossRef]
  57. Abbas, Q.; Fondon, I.; Sarmiento, A.; Jiménez, S.; Alemany, P. Automatic recognition of severity level for diagnosis of diabetic retinopathy using deep visual features. Med Biol. Eng. Comput. 2017, 55, 1959–1974. [Google Scholar] [CrossRef] [PubMed]
  58. Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Kim, R. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016, 316, 2402–2410. [Google Scholar] [CrossRef] [PubMed]
  59. Shankar, K.; Zhang, Y.; Liu, Y.; Wu, L.; Chen, C.H. Hyperparameter tuning deep learning for diabetic retinopathy fundus image classification. IEEE Access 2020, 8, 118164–118173. [Google Scholar] [CrossRef]
  60. Gadekallu, T.R.; Khare, N.; Bhattacharya, S.; Singh, S.; Maddikunta, P.K.R.; Ra, I.H.; Alazab, M. Early detection of diabetic retinopathy using PCA-firefly based deep learning model. Electronics 2020, 9, 274. [Google Scholar] [CrossRef] [Green Version]
  61. Tymchenko, B.; Marchenko, P.; Spodarets, D. Deep Learning Approach to Diabetic Retinopathy Detection. arXiv 2020, arXiv:2003.02261. [Google Scholar]
  62. Thakur, N.; Juneja, M. Survey on segmentation and classification approaches of optic cup and optic disc for diagnosis of glaucoma. Biomed. Signal Process. Control 2018, 42, 162–189. [Google Scholar] [CrossRef]
  63. Maji, D.; Santara, A.; Mitra, P.; Sheet, D. Ensemble of deep convolutional neural networks for learning to detect retinal vessels in fundus images. arXiv 2016, arXiv:1603.04833. [Google Scholar]
  64. Liskowski, P.; Krawiec, K. Segmenting retinal blood vessels with deep neural networks. IEEE Trans. Med Imaging 2016, 35, 2369–2380. [Google Scholar] [CrossRef]
  65. Maninis, K.K.; Pont-Tuset, J.; Arbeláez, P.; Gool, L.V. Deep retinal image understanding. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 140–148. [Google Scholar]
  66. Wu, A.; Xu, Z.; Gao, M.; Buty, M.; Mollura, D.J. Deep vessel tracking: A generalized probabilistic approach via deep learning. In Proceedings of the 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 16 June 2016; pp. 1363–1367. [Google Scholar]
  67. Dasgupta, A.; Singh, S. A fully convolutional neural network based structured prediction approach towards the retinal vessel segmentation. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, VIC, Australia, 18–21 April 2017; pp. 248–251. [Google Scholar]
  68. Fu, H.; Xu, Y.; Wong, D.W.K.; Liu, J. Retinal vessel segmentation via deep learning network and fully-connected conditional random fields. In Proceedings of the 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; pp. 698–701. [Google Scholar]
  69. Mo, J.; Zhang, L. Multi-level deep supervised networks for retinal vessel segmentation. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 2181–2193. [Google Scholar] [CrossRef] [PubMed]
  70. Lahiri, A.; Roy, A.G.; Sheet, D.; Biswas, P.K. Deep neural ensemble for retinal vessel segmentation in fundus images towards achieving label-free angiography. In Proceedings of the 2016 IEEE 38th Annual International Conference of the Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 1340–1343. [Google Scholar]
  71. Li, Q.; Feng, B.; Xie, L.; Liang, P.; Zhang, H.; Wang, T. A cross-modality learning approach for vessel segmentation in retinal images. IEEE Trans. Med Imaging 2016, 35, 109–118. [Google Scholar] [CrossRef] [PubMed]
  72. Qiao, L.; Zhu, Y.; Zhou, H. Diabetic retinopathy detection using prognosis of microaneurysm and early diagnosis system for non-proliferative diabetic retinopathy based on deep learning algorithms. IEEE Access 2020, 8, 104292–104302. [Google Scholar] [CrossRef]
  73. Hacisoftaoglu, R.E.; Karakaya, M.; Sallam, A.B. Deep learning frameworks for diabetic retinopathy detection with smartphone-based retinal imaging systems. Pattern Recognit. Lett. 2020, 135, 409–417. [Google Scholar] [CrossRef]
  74. Nawaz, F.; Ramzan, M.; Mehmood, K.; Khan, H.U.; Khan, S.H.; Bhutta, M.R. Early Detection of Diabetic Retinopathy Using Machine Intelligence through Deep Transfer and Representational Learning. CMC Comput. Mater. Contin 2021, 66, 1631–1645. [Google Scholar] [CrossRef]
  75. Mishra, S.; Hanchate, S.; Saquib, Z. Diabetic Retinopathy Detection using Deep Learning. In Proceedings of the 2020 International Conference on Smart Technologies in Computing, Electrical and Electronics (ICSTCEE), Bengaluru, India, 9–10 October 2020; pp. 515–520. [Google Scholar]
  76. Das, S.; Kharbanda, K.; Suchetha, M.; Raman, R.; Dhas, E. Deep learning architecture based on segmented fundus image features for classification of diabetic retinopathy. Biomed. Signal Process. Control 2021, 68, 102600. [Google Scholar] [CrossRef]
  77. Li, X.; Hu, X.; Yu, L.; Zhu, L.; Fu, C.W.; Heng, P.A. CANet: Cross-disease Attention Network for Joint Diabetic Retinopathy and Diabetic Macular Edema Grading. IEEE Trans. Med. Imaging 2020, 39, 1483–1493. [Google Scholar] [CrossRef] [Green Version]
  78. Bodapati, J.D.; Shaik, N.S.; Naralasetti, V. Composite deep neural network with gated-attention mechanism for diabetic retinopathy severity classification. J. Ambient. Intell. Hum. Comput. 2021, 12, 9825–9839. [Google Scholar] [CrossRef]
  79. Hsieh, Y.T.; Chuang, L.M.; Jiang, Y.D.; Chang, T.J.; Yang, C.M.; Yang, C.H.; Chan, L.W.; Kao, T.Y.; Chen, T.C.; Lin, H.C.; et al. Application of deep learning image assessment software VeriSee™ for diabetic retinopathy screening. J. Formos. Med Assoc. 2021, 120, 165–171. [Google Scholar] [CrossRef]
  80. Wang, J.; Luo, J.; Liu, B.; Feng, R.; Lu, L.; Zou, H. Automated diabetic retinopathy grading and lesion detection based on the modified R-FCN object-detection algorithm. IET Comput. Vis. 2020, 14, 1–8. [Google Scholar] [CrossRef]
  81. Ahmad, P.; Jin, H.; Alroobaea, R.; Qamar, S.; Zheng, R.; Alnajjar, F.; Aboudi, F. MH UNet: A Multi-Scale Hierarchical Based Architecture for Medical Image Segmentation. IEEE Access 2021, 9, 148384–148408. [Google Scholar] [CrossRef]
  82. Yang, J.-J.; Li, J.; Shen, R.; Zeng, Y.; He, J.; Bi, J.; Li, Y.; Zhang, Q.; Peng, L.; Wang, Q. Exploiting ensemble learning for automatic cataract detection and grading. Comput. Methods Programs Biomed. 2016, 124, 45–57. [Google Scholar] [CrossRef] [PubMed]
  83. Cataracts. June 2018. Available online: https://www.mayoclinic.org/diseases-conditions/cataracts/diagnosis-treatment/drc-20353795 (accessed on 14 December 2021).
  84. Lin, Y.; Zhang, H.; Hu, G. Automatic retinal vessel segmentation via deeply supervised and smoothly regularized network. IEEE Access 2018, 7, 57717–57724. [Google Scholar] [CrossRef]
  85. Kumar, D.; Taylor, G.W.; Wong, A. Discovery Radiomics With CLEAR-DR: Interpretable Computer Aided Diagnosis of Diabetic Retinopathy. IEEE Access 2019, 7, 25891–25896. [Google Scholar] [CrossRef]
  86. Kathiresan, S.; Sait, A.R.W.; Gupta, D.; Lakshmanaprabu, S.K.; Khanna, A.; Pandey, H.M. Automated detection and classification of fundus diabetic retinopathy images using synergic deep learning model. Pattern Recognit. Lett. 2020, 133, 210–216. [Google Scholar]
  87. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Available online: http://www.deeplearningbook.org (accessed on 14 December 2021).
  88. Pal, P.; Kundu, S.; Dhara, A.K. Detection of red lesions in retinal fundus images using YOLO V3. Curr. Indian Eye Res. J. Ophthalmic Res. Group 2020, 7, 49. [Google Scholar]
  89. Gonzalez, R.C. Deep convolutional neural networks [Lecture Notes]. IEEE Signal Process. Mag. 2018, 35, 79–87. [Google Scholar] [CrossRef]
  90. Islam, M.M.; Yang, H.C.; Poly, T.N.; Jian, W.S.; Li, Y.C.J. Deep learning algorithms for detection of diabetic retinopathy in retinal fundus photographs: A systematic review and meta-analysis. Comput. Methods Programs Biomed. 2020, 191, 105320. [Google Scholar] [CrossRef]
  91. Wang, Z.; Yang, J. Diabetic retinopathy detection via deep convolutional networks for discriminative localization and visual explanation. arXiv 2017, arXiv:1703.10757. [Google Scholar]
  92. Malik, H.; Farooq, M.S.; Khelifi, A.; Abid, A.; Qureshi, J.N.; Hussain, M. A Comparison of Transfer Learning Performance Versus Health Experts in Disease Diagnosis From Medical Imaging. IEEE Access 2020, 8, 139367–139386. [Google Scholar] [CrossRef]
  93. Ahn, S.; Pham, Q.; Shin, J.; Song, S.J. Future Image Synthesis for Diabetic Retinopathy Based on the Lesion Occurrence Probability. Electronics 2021, 10, 726. [Google Scholar] [CrossRef]
  94. O’Shea, K.; Nash, R. An introduction to convolutional neural networks. arXiv 2015, arXiv:1511.08458. [Google Scholar]
  95. Farooq, S.; Khan, Z. A Survey of Computer Aided Diagnosis (Cad) of Liver in Medical Diagnosis. VAWKUM Trans. Comput. Sci. 2019, 8, 23–29. [Google Scholar] [CrossRef]
  96. Sopharak, A.; Uyyanonvara, B.; Barman, S. Simple hybrid method for fine microaneurysm detection from non-dilated diabetic retinopathy retinal images. Comput. Med Imaging Graph. 2013, 37, 394–402. [Google Scholar] [CrossRef]
Figure 1. Comparison between Normal and DR eye [2].
Figure 2. Lesions in retinal fundus image.
Figure 3. Systematic Literature Review Methodology.
Figure 4. Target combination of words to form Search String.
Figure 5. Studies Selection Process and Results.
Figure 6. Year-wise Publication Results.
Figure 7. Taxonomy of Diabetic Retinopathy (DR).
Figure 8. Five stages of Diabetic Retinopathy.
Figure 9. Five stages of Diabetic Retinopathy.
Figure 10. Deep Learning based Models used for DR Classification.
Figure 11. Dataset Size Over Time [87].
Table 1. Comparison with other studies.
Description | [17] | [18] | [19] | This Paper
Diabetic Retinopathy Grading ProtocolsXX
DR Screening Detection TechniquesXXX
Taxonomy of Diabetic Retinopathy (DR)XXX
Clinical Features of Retinal image for DR DetectionXX
Open IssuesXX
Solution to the DR based research ProblemsXXX
DR datasetX
Year | 2019 | 2020 | 2018 | 2022
Table 2. Research Questions.
RQ No | Question | Motivation
RQ1 | What clinical features of retinal images are required for DR detection and classification, and which deep learning methods are most used to classify DR problems? | An overview of current trends in deep learning for DR classification has been reviewed. The answer to this question will help researchers select the best deep learning technique to use as a baseline in their research.
RQ2 | Which DR datasets have been acquired, managed, and classified to identify the several stages of DR? | Identify available datasets so that researchers can use them as benchmarks and compare the performance of their own work.
RQ3 | What are the open issues and challenges of using deep learning methods to detect DR? | This question allows researchers to recognize open research challenges and future directions for detecting DR with more advanced deep learning techniques.
Table 3. Exclusion and Inclusion Criteria.
Inclusion Criteria | Exclusion Criteria
IC1: Studies that focus only on deep learning algorithms to detect diabetic retinopathy severity levels | EC1: Papers that do not focus on detecting DR by using deep learning techniques.
IC2: Full-text articles | EC2: Studies that examine only lesions, e.g., micro-aneurysms, hemorrhages, exudates, cotton wool spots, etc.
IC3: Papers written in the English language | EC3: Papers not published in a complete form or published in the form of a book, tutorial, symposium, workshop, presentation, or essay.
EC4: Papers not presented in the English language.
EC5: Publication date is before the year 2015.
Table 4. Selection and Results.
Digital Library | No. of Studies Found | No. of Studies Selected
IEEE Xplore1259
Science Direct535
PLOS One101
ACM91
Springer255
Arxiv124
Research Gate199
PubMed105
Table 5. Quality Assessment Criteria.
Criterion | Rank: Score
(a) The study provides a clear contribution to detect DR by using deep learning methods | Yes: 1; No: 0
(b) The study documented the clear limitations of the work while detecting DR | Yes: 1; No: 0
(c) The study was methodologically explained so that it can be trusted | Yes: 1; No: 0; Partially: 0.5
(d) The size of the selected dataset and the collection methods are mentioned | Yes: 1; No: 0
(e) Journal/Conference/Symposium ranking | Q1: 2.5; Q2: 2; Q3: 1.5; Q4 & Q5: 1; Core A: 1.5; Core B: 1; Core C: 0.75; IEEE/ACM Sponsored: 0.25; Others: 0
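For readability, the following minimal sketch (our own illustration, not part of the review protocol's tooling) shows how the per-study totals reported in Tables 10 and 11 follow from the criteria in Table 5: criteria (a), (b), and (d) contribute 0 or 1 point, criterion (c) contributes 0, 0.5, or 1, and criterion (e) adds a fixed venue weight.

VENUE_SCORE = {
    "Q1": 2.5, "Q2": 2.0, "Q3": 1.5, "Q4": 1.0, "Q5": 1.0,
    "Core A": 1.5, "Core B": 1.0, "Core C": 0.75,
    "IEEE/ACM Sponsored": 0.25, "Other": 0.0,
}

def quality_score(a, b, c, d, venue):
    """a, b, d are 0 or 1; c is 0, 0.5, or 1; venue is a key of VENUE_SCORE."""
    return a + b + c + d + VENUE_SCORE[venue]

# A study meeting all four criteria and published in a Q1 venue totals 6.5,
# which matches the highest totals reported in Tables 10 and 11.
print(quality_score(1, 1, 1, 1, "Q1"))  # 6.5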
Table 6. DR Detection Tools, Techniques, Methodologies, and Performance Evaluation (I).
Ref. No | DL Method | Major Focus | Environment | Performance Criteria
[14]CNNPropose CLEAR-DR CAD
system via deep
radiomic sequencer
NoAccuracy = 73.2%
[23]CNNPropose Siamese-like CNN
architecture which accepts
input as binocular fundus
images
NoAUC = 95.1%,
kappa score = 82.9
[24]BNCNNRedesign the LeNet model
by adding batch normalization
layer with CNN to
effectively preventing gradient
diffusion to improve model
accuracy
NoAccuracy = 97.56%
[25]DNNProposed modification of
Inception-V3 model
to grade four severity
levels of DR
MXNETAccuracy = 88.73%,
precision = 95.77,
Recall = 94.84
[26]Ensemble CNNCombine five models; Resnet50,
Inceptionv3, Xception,
Dense121, and Dense169
Keras,
Tensorflow
Accuracy = 80.8%,
Precision = 86.72,
Recall = 51.5,
F-score = 63.85
[27]Ensemble CNNCombine three models: inceptionv3,
Xception, and inceptionResNetV2
KerasAccuracy = 97.15%,
Precision = 0.96,
Recall = 0.96,
F1-score = 0.96
[28]WP-CNNBuild various weighted
path CNN networks and
optimized by backpropagation.
WP-CNN105 achieves
the highest accuracy.
NoAccuracy = 94.23%,
F1-score = 0.9087
[29]Ensemble CNNFive ensemble models VGG-16,
ResNet-18, SE-BN-inception,
GoogleNet, and DenseNet
were used as benchmark
for DR grading
CaffeAccuracy = 82.84%
[30]OCTD-NetDevelop novel deep network
OCTD-NET. Consist of two
features one for feature
extraction and other for
retinal layer information
KerasAccuracy = 92%
[31]GoogLeNetPropose modification of
GoogLeNet
convolutional neural network
NoAccuracy = 98%
[32]Ensemble CNNEnsemble CNN
VGG
net and ResNet models
used as ensemble
NoAUC = 97.3%
[33]Deep Multi-Instance
Learning
Image patches extracted from
the preprocessing step regularly
and then fed into CNN
based patch level
classifier
MatConvNetPrecision = 86.3,
F1-score = 92.1
[34]Fully connected NetworkConstruct U-Net based
regional segmentation and
diagnosis model
KerasPM coefficient is 2.55%
lower
[35]DCNNTransfer learning used for
initial weight initialization
and for feature extraction
NoAccuracy = 93.6%, 95.8%
[36]DR-NetDevelop DR-Net framework
by fully stacked convolution
network to reduce overfitting
and to improve performance
imageMagick,
OpenCV
Accuracy = 81.0%
[37]Ensemble CNNThree models, inception V3,
Resnet152, and inception-Resnet-v3
put together that work
individually and Adaboost
algorithm is used to merge
them.
UbuntuAccuracy = 88.21%
[38]DNNNeural network with 28
convolutional layers, after each
layer batch normalization
and ReLu applied except
the last one Network
trained with inception-v3 model
Tensorflow,
Android studio
Accuracy = 73.3%
[39]CNNNetwork consists of range
of convolutional layers that
converts pixel intensities to
local features before
converting them to global
features
NoAccuracy = 97.8%
[40]CNNPropose CNN model with
the addition of regression
activation map
NoAccuracy = 94%, 80%
Table 7. DR Detection Tools, Techniques, Methodologies, and Performance Evaluation (II).
Ref. No | DL Method | Major Focus | Environment | Performance Criteria
[41]Graph-NNPropose GNN model which
consists of two features.
One is to extract region-of-interest
focusing only regions to
remove noise while preprocessing
and others in applying GNN
for classification
NoAccuracy = 80%
[42]CNNConstructed a model in
which artificial neurons are
organized in a hierarchical
manner which are able
to learn multiple level
of abstraction
NoAccuracy = 79.3%
[43]Deep CNNUse VGG-16 DCNN to
automatically detect local
features and to generate
a classification model
NoSpecificity 97%,
Sensitivity 96.7%
[44]CNNDemonstrates the potential of
CNN to classify DR fundus
images based of severity
in real times
NoAUC 96.6%,
Specificity = 97.2 %,
Sensitivity = 94.7%
[45]Ensemble
DNN
Build high quality medical
imaging dataset of DR
also propose a grading and
identification system called DeepDR
and evaluate the model
using nine validity matrices
NoSpecificity = 92.29%,
Sensitivity = 80.28%
[46]Modified Hopfield
NN
Propose Modify Hopfield neural
network to handling drawbacks
of conventional HNN where
weigh values changed based
on the output values
in order to avoid the
local minima
KerasAUC = 90.1 %,
Sensitivity = 84.6, 90.6,
Specificity = 79.9, 90 %
[47]Deep CNNDCNN pooling layer is replaced
with fractional max pooling
to drive more discriminative
features for classification
also use SVM to classify
underlying boundary of distribution.
Furthermore build an app
called Deep Retina
NoAccuracy = 99.25%,
Specificity = 99.0%
[48]DCNNData driven features learned
from deep learning network
through dataset and then
these deep features were
propagated into a tree
based classification model
that output a final
diagnostic disease
NoAccuracy = 86.71,
Specificity = 90.89,
Sensitivity = 89.30
[49]DNNPropose Alex Net DNN
with caffeNet model to
extract multi-dimensional
features at fully connected
DNN layers and use SVM
for optimal five class
DR classification
NoAUC = 97%,
Sensitivity = 94,
Specificity = 98
[50]CNNBuild an automated system
to detect DR IDX-DR
X2.1 composed of client software
and analysis software.
The device applied
a set of CNN based detectors
to examine each image
NoAccuracy = 97.93
[51]DCNNProposed a systematic computation
model using DCNN for DR
classification and assessed
performance on non-open dataset
and found that model
achieves better results
with only a small fraction
of training set images
NoAUC = 98.0%,
Sensitivity = 96.8,
Specificity = 87.0
[52]CNNEmploy LSTM, CNN, and
their combination for extracting
complex features to input
into heart rate variability
dataset.
NoSensitivity = 88.3,
Specificity = 98.0
Table 8. DR Detection Tools, Techniques, Methodologies, and Performance Evaluation (III).
Ref. No | DL Method | Major Focus | Environment | Performance Criteria
[53]CNNBuild a network using CNN
and data augmentation to
identify the intricate
features like Micro-aneurysms,
exudates, cotton wool spots
to automatically diagnosis
DR without user input
NoAccuracy= 95.7%
[54]Deep Densely
Connected NN
Pioneer work to use densely
connected NN to classify
DR with the motivation
behind to deploy
network with more
deep supervision to
extract comprehensive
features from the
images.
Scikit-learnAccuracy= 75,
Sensitivity= 95,
AUC = 100,
Precision = 0.95,
Recall= 0.98,
F1-score = 0.97,
Specificity = 0.98
[55]CNNAdopted CNN-independent
adaptive kernel
visualization technique to
validate deep learning model
for the detection of referable
diabetic retinopathy
KerasAUC= 0.93,
Specificity = 91.6,
Sensitivity = 90.5
[56]CNNDeploy CNN to evaluate the
performance of deep
learning system
in detecting RDR by using
10 different dataset
NoAUC = 0.924,
Sensitivity = 92.180,
Specificity = 94.50
[57]Deep Visual FeaturesPropose a novel method to
detect SLDR without
performing pre and post
processing steps on
retinal images
through learning of deep
visual features and
gradient location
orientation histogram
MatlabAUC = 0.924,
Sensitivity = 92.180,
Specificity = 94.50
[58]CNNUse CNN model that uses
a function that combine
nearby pixels into local
features and then combined
it into global
features.
The algorithm does not
explicitly detect lesions
but recognize them using local
features
NoAUC = 97.4,
Sensitivity = 97.5,
specificity = 93.4
[59]Tuning Inception-v4
principal component analysis (PCA)
Comparison of different CNN
algorithms to test
the extensive
DR image classification .
NoAccuracy= 99.49,
Sensitivity= 98.83,
Specificity = 99.68
[60]Principal Component Analysis (PCA)
DNN-PCAFirefly
Performed DR and NDPR
dataset analysis for early
detection of DR to
prevent the damages.
NoAccuracy= 97,
Sensitivity= 96,
Specificity = 96
[61]Synergic Deep Learning
(SDL)
Prepare a classifier and
model as SDL for fundus
DR detection.
NoAccuracy= 99.28,
Sensitivity= 98,
Specificity = 99
[62]Lesion-aware Deep Learning
System (RetinalNET)
Developed a model and
classifier for prediction
of end stage DR in Chinese
patients.
NoHR 2.18, 95%
confidence interval
(CI) 1.05–4.53,
p = 0.04
[63]Ensemble DCNNPrepared a classifier for
detection of DR by
using fundus images.
NoAccuracy= 99.28,
Sensitivity=98
[64]Patch-based CNNUse filters for fundus
dataset pre-processing
to train patch
based CNN classifier.
NoAccuracy= 95.35,
Sensitivity=97
[65]FCN VGG-16Developed the deep
learning FCN VGG 16
Trainer for understanding
of Fundus images for
early detection of DR.
NoRPR=0.822,
RPR=0.831
[66]Patch Based CNN/PCAPrepare a Deep learning
patch based CNN/PCA
classifier for vessel
tracking in DRIVE
dataset for DR image
learning.
NoAccuracy= 0.9701
Table 9. DR Detection Tools, Techniques, Methodologies, and Performance Evaluation (IV).
Ref. No | DL Method | Major Focus | Environment | Performance Criteria
[67]Patch Based FCNSegmented the DR vessel data by patch
based FCN for Deep DR prediction
NoSN = 76.91,
SP = 98.01,
AUC = 0.974,
ACC = 95.33
[68]FCN/CRFBiomedical image processing by FCN and
CRF deep learning models to train
and test the DRIVE and STARE dataset.
NoSN = 72.94,
ACC = 94.70,
SN = 71.40,
ACC = 95.45
[69]Multi-level FCNUsed DRIVE, STARE, and CHASE dataset
for deep segmentation of retinal vessels
also prepared comparison of the
optimization model of these three dataset.
NoDRIVE[SN = 77.79,
SP = 97.80,
AUC = 0.9782],
STARE[SN = 95.21,
SP = 81.47, AUC = 98.44]
CHASE[SN = 96.76,
SP = 76.61, AUC = 98.16]
[70]Patch-based DSAEDeep learning based ensembling
of classifier was used
to achieve label-free
DR angiography
for efficient retinal
classification and segmentation.
NoAccuracy = 95.3
[71]Patch-based SDAEUsed deep learning to
understand the segmentation
of retinal vessel in DR
on DRIVE,
CHASE, and STARE dataset.
NoSN = 75.6,
SP = 98,
AUC = 0.9738
ACC = 95.27
[72]Deep CNNDeep Learning based PMNPDR
diagnostic system has
been proposed for
non-proliferative DR
NoSensitivity dark
lesions = 97.4, 98.4
and 95.1,
Sensitivity bright
lesions = 96.8,
97.1 and 95.3
[10]Deep CNNPrepared a human-centric
evaluation on the dataset
of several clinics to
detect the DR by using
deep learning. This dataset
has been retrieved
from the systems deployed
in the clinic.
NoSensitivity>90
[73]CNN-based AlexNet,
GoogLeNet,
and ResNet50
Introduced smartphone
diagnostic system for
detecting the DR and
used Deep
Learning classifiers frameworks.
Noaccuracy of 98.6,
sensitivity and a 99.1
[74]Inception-v4Detection of early DR symptoms
and severity by
recognizing features.
Noaccuracy of 96.11,
Kappa Score 89.81
[75]VGG16,
DenseNet121
Prepared a model based
on high resolution
images to classify and
early detection.
Noaccuracy of 96.11,
Kappa Score 89.81
[76]CNNProposed the architecture
by using nonlinear
ReLU function and
batch normalization
by CNN.
Noaccuracy of 98.7,
sensitivity 0.996
[77]CNNProposed a method to
join to DR and
DME by using CANet
Noaccuracy of 65.1
[78]DCNNPrepared an architecture
using DCCN via gated
attention for classification
of DR images.
Noaccuracy 82.54,
Kappa score 79
[79]VeriSeeDevelop a deep learning
based assessment software to
validate the DR severity.
Noaccuracy 89.2,
Specificity 90.1 and,
AUC 0.95
[80]RFCN, SDD-515,
VGG16
Prepared a deep learning
based model for enhancing
the small object detection
for better classification.
Noaccuracy 98,
Specificity 99.39, and
precision 92.15
[81]ResNet50, EfficientNet-b0,
Se-ResNet50
Proposed a framework to
train the fundus images
future prediction
of lesions and other DR
issues.
Noaccuracy 94,
Specificity 95, and
sensitivity 92
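Tables 6 to 9 show that ImageNet-pretrained CNN backbones fine-tuned for DR grading recur across the reviewed studies. The following is a minimal sketch of that transfer-learning pattern, assuming PyTorch and torchvision; the backbone (ResNet-50), learning rate, and classification head are illustrative assumptions rather than the exact configuration of any single reviewed study.

import torch
import torch.nn as nn
from torchvision import models

NUM_GRADES = 5  # e.g., no DR, mild, moderate, severe NPDR, PDR

# Start from ImageNet weights and replace the final layer with a 5-way head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_GRADES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of preprocessed fundus images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

Ensemble approaches such as those in [26,27,37] repeat this recipe with several backbones and then combine their predictions, for example by averaging class probabilities or by boosting.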
Table 10. Quality Assessment Results (I).
Ref. No | Study Type | Dataset | GPU | Scoring: (a) (b) (c) (d) (e) | Total
[14]ExperimentKaggleNo100.512.55
[23]ExperimentKaggleNVIDIA
GeForce
11112.56.5
[24]ExperimentCollect 6 billion
records from
301 hospitals
No11112.56.5
[25]ExperimentCollect 447 images
from three different
clinical departments
2 Tesla
K40 GPUs
and 64G of
RAM
11112.56.5
[26]ExperimentKaggleNVIDIA
Tesla K40
10112.55.5
[27]ExperimentNine medical
records
were used
NVIDIA Tesla
K40 GPU
and 64 GB
RAM
11112.56.5
[28]ExperimentSTARENo100.512.55
[29]ExperimentFundus image dataset
collected from Chinese
population
NVIDIA
Tesla k40
100.512.55
[30]ExperimentOCT images provided
by Wenzhou Medical
university
GeForce GTX
Titan X and
12 GB RAM
10112.55.5
[31]ExperimentCollect 9939 posterior
photos from jichi
Medical University
Titan X with
12 GB RAM
100.512.55
[32]Case study76,370 fundus images
from Singapore
integrated DR program
No100.5124.5
[33]ExperimentKaggle,
Messidor,
DIARETDR1
4 NVIDIA
GeForce
Titan X
111126
[34]ExperimentNo556 images from
china west
hospital
110.5024.5
[35]ExperimentObtained from
SiDRP
No100012
[36]ExperimentKaggle, Drive,
Messidor
NVIDIA
Tesla k20
10011.53.5
[37]Experiment30,244 fundus
images from Beijing
Tongren Eye center
NVIDIA
Tesla P40 of
24 GB RAM
100.500.251.75
[38]Model16,789 fundus
images
No100.500.251.75
[39]Experiment25,326 fundus
images from screening
program in Thailand
No10111.54.5
[40]Case StudyMessidor 2No11111.55.5
[41]ExperimentKaggleNo10111.54.5
[42]ExperimentIDRiDTesla p10010111.54.5
[43]ExperimentEyePACS, Messidor-2,
19,230 images from
DM population
No11112.56.5
[44]ExperimentObtained 132 images
from Saneikai Tsukazaki
Hospital and Tokushima
University Hospital
No111126
[45]ExperimentKaggleNVIDIA
GTX980Ti,
Amazon EC2
instance
containing
NVIDIA
10111.54.5
[46]Experiment13,767 images
obtained from
ophthalmology,
endocrinology and
physical examination
centers
NVIDIA
TeslaK40
11112.56.5
[47]ExperimentModified
Hopfield NN
No100.512.55
Table 11. Quality Assessment Results (II).
Ref. No | Study Type | Dataset | GPU | Scoring: (a) (b) (c) (d) (e) | Total
[48]ExperimentKaggleIntel dual
core processor
with 4 GB RAM
100.5124.5
[49]ExperimentMessidor-2, EyePACS,
E-Ophtha
No11111.55.5
[50]ExperimentKaggleIntel dual core
processor,
iphone 5
11112.56.5
[51]ExperimentObtained from
EyeCheck
project and university
of lowa, Messidor-2
No11111.55.5
[52]ExperimentPrivately collected
Data
No11112.56.5
[53]Experiment41122 colour DR
images from screening
process in finland
No11112.56.5
[54]ExperimentCollect ECG of
20 people
No100.5124.5
[55]ExperimentKaggleNVIDIA
k40c
111115
[56]Experiment66,790 images collected
from Label database
in China
No110.5125.5
[57]ExperimentHalf a million images
were collected
from 10 different
private locations
No11112.56.5
[58]ExperimentMessidor, DIARETDR,
retinopathies were
collected from HUPM
Spain
No111126
[59]ExperimenteyePACS, Messidor-2No11112.56.5
[60]ExperimentMESSIDORTI-V4(PCA)11112.56.5
[61]ExperimentDiabetic Retinopathy
Debrecen dataset
from UCI machine
learning repository
No11112.56.5
[62]ExperimentMessidorNo11112.56.5
[63]ExperimenteyePACS, Messidor-2No11112.56.5
[64]ExperimentDRIVENo11112.56.5
[65]ExperimentDRIVE, STARE
and CHASE
No11112.56.5
[66]ExperimentDRIVE, STARENo11112.56.5
[67]ExperimentDRIVENo11112.56.5
[68]ExperimentDRIVENo11112.56.5
[69]ExperimentDRIVE, STARENo11112.56.5
[70]ExperimenteyePACS, Messidor-2No11112.56.5
[71]ExperimentDRIVENo11112.56.5
[72]ExperimenteyePACS, Messidor-2No11112.56.5
[10]ExperimentDRIVE, STARE
and CHASE
No11112.56.5
[73]Case studyreal time-ThailandNo11112.56.5
[74]ExperimentEyePACS,
Messidor,
Messidor-2,
IDRiD,UoA-DR
No11112.56.5
[75]Experiment207,130 retinal
images
No10112.55.5
[76]Experiment3662 retinal
images
No100.5001.5
[77]Experiment89 retinal
images
No101024
[78]ExperimentIDRiD, MessidorNo11112.56.5
[79]ExperimentKaggleNo10112.55.5
[80]ExperimentEyePACSNo100.512.55
[81]ExperimentMessidorNo101125
[82]ExperimentKangbuk Samsung
Hospital dataset
No100.512.55
Table 12. Diabetic Retinopathy Grading Protocols.
Grading Protocol | Rank
American Academy of Ophthalmology (AAO) & International Clinical Diabetic Retinopathy (ICDR) | No DR, Very Mild, Mild, Moderate, Severe, Very Severe
Early Treatment of Diabetic Retinopathy Study (ETDRS) | 0 - Level 10; 1 - Level 20; 2 - Levels 35, 43, 47; 3 - Level 53; 4 - Levels 61, 65, 71, 75, 81, 85
Scottish DR Grading | R0 - No DR; R1 - Mild NPDR; R2 & R3 - Moderate and Severe (pre-proliferative) NPDR; R4 - PDR
National Screening Committee (UK) | R0 - No DR; R1 - Mild NPDR; R2 - Moderate and Severe NPDR; R3 - PDR with pre-retinal fibrosis
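As a minimal sketch (our illustration, not taken from any reviewed study) of how such grading scales are typically encoded for the multi-class and binary classification tasks compared in this review, the five-stage scale can be mapped to integer labels and collapsed into a "referable DR" flag, which is commonly defined as moderate NPDR or worse.

ICDR_GRADES = {
    0: "No DR",
    1: "Mild NPDR",
    2: "Moderate NPDR",
    3: "Severe NPDR",
    4: "PDR",
}

def to_referable(grade: int) -> int:
    """Binarize a five-level grade into referable (1) vs. non-referable (0) DR."""
    return int(grade >= 2)

# Multi-class studies predict the grade directly; binary studies predict this flag.
assert [to_referable(g) for g in ICDR_GRADES] == [0, 0, 1, 1, 1]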
Table 13. Publicly Available Datasets to Detect DR.
Dataset | No. of Images | Format | Provided By
DRIVE | 40 | JPEG | Screening program in the Netherlands
Download URL: https://drive.grand-challenge.org/Download/ (accessed on 14 December 2021)
KAGGLE | 80,000 | JPEG | EyePACS
Download URL: http://Kaggle.diabetic-retinopathy-detection/data (accessed on 14 December 2021)
DIARETDB 0 & 1 | 89 | GT: PNG | ImageRet Project
Download URL: http://www2.it.lut.fi/project/imageret/diaretdb1/#DATA (accessed on 14 December 2021)
MESSIDOR | 1200 | TIFF | Messidor Program Partners
Download URL: http://www.adcis.net/en/third-party/messidor/ (accessed on 14 December 2021)
STARE | 402 | PPM | Shiley Eye Centre
Download URL: http://cecas.clemson.edu (accessed on 14 December 2021)
AREDS | 72,000 | JPEG | National Eye Institute (NEI)
Download URL: https://www.ncbi.nlm.nih.gov/gap/ (accessed on 14 December 2021)
EyePACS-1 | 9963 | JPEG | U.S. screening program
Download URL: http://www.eyepacs.com/data-analysis/ (accessed on 14 December 2021)
e-ophtha | 463 | JPEG, PNG | French Research Agency (ANR)
Download URL: https://www.adcis.net/en/third-party/e-ophtha/ (accessed on 14 December 2021)
ORIGA | 625 | BMP | Institute for Infocomm Research, A*STAR, Singapore
Download URL: available free on request
SCES | 1676 | BMP | Singapore Eye Research Institute
Download URL: http://biomisa.org/index.php/glaucoma-database/ (accessed on 14 December 2021)
SIDRP 2014–2015 | 71,896 | BMP | Singapore National DR Screening Program, Singapore
Download URL: http://messidor.crihan.fr/index-en.php (accessed on 14 December 2021)
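As a practical starting point, the following minimal sketch shows one way to wrap a locally downloaded copy of such a dataset for training, assuming an image folder plus a CSV of image-level grades (the Kaggle/EyePACS release follows roughly this layout); the file paths, column names, and JPEG extension are illustrative assumptions.

import csv
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset

class FundusDataset(Dataset):
    """Pairs fundus images with their DR grade read from a label CSV."""

    def __init__(self, image_dir, label_csv, transform=None):
        self.image_dir = Path(image_dir)
        self.transform = transform
        with open(label_csv, newline="") as f:
            self.items = [(row["image"], int(row["level"]))
                          for row in csv.DictReader(f)]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        name, grade = self.items[idx]
        image = Image.open(self.image_dir / f"{name}.jpeg").convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        return image, grade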
Table 14. Research Gaps and Future Directions.
Research Gap & Issue | Description | Solution
Clinical Results | Ophthalmologists' feedback is required to check the accuracy of the deep learning predictor. | Cross-database validation
Data Augmentation | Accurate data augmentation is an expensive solution, and expert ophthalmologist services are required for every angle of a lesion image. | Generative Adversarial Networks (GANs); new data augmentation techniques with fewer learnable parameters
Class Imbalance | The number of DR cases is much lower than the number of normal cases. | Data augmentation techniques; geometric transformations
Lack of Uniformity | Image angles are not uniform, images are out of focus, and light is diffused in the retina. | Generative Adversarial Networks (GANs); new augmentation techniques
Translation Effect | Variability across screening programs that do not follow a common standard causes issues. | Translation standards are required
Race Scaling | It has been observed that the vascular properties of darker retinas differ from those of lighter-toned retinas. | Heterogeneous cohort parameters
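For the class-imbalance gap listed in Table 14, one common mitigation besides augmentation is to oversample the minority grades during training. The following is a minimal sketch, assuming PyTorch; the batch size and inverse-frequency weighting scheme are illustrative assumptions.

from collections import Counter

from torch.utils.data import DataLoader, WeightedRandomSampler

def balanced_loader(dataset, labels, batch_size=32):
    """Oversample rare DR grades so every batch sees severe cases more often."""
    counts = Counter(labels)                      # images per DR grade
    weights = [1.0 / counts[y] for y in labels]   # rarer grade -> larger weight
    sampler = WeightedRandomSampler(weights, num_samples=len(labels),
                                    replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)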
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
