Review

A Systematic Literature Review on Diabetic Retinopathy Using an Artificial Intelligence Approach

1 Department of Artificial Intelligence and Machine Learning, Symbiosis Centre for Applied Artificial Intelligence (SCAAI), Symbiosis Institute of Technology, Symbiosis International (Deemed University) (SIU), Lavale, Pune 412115, India
2 Natasha Eye Care and Research Centre, Pimple Saudagar, Pune 411027, India
* Authors to whom correspondence should be addressed.
Big Data Cogn. Comput. 2022, 6(4), 152; https://doi.org/10.3390/bdcc6040152
Submission received: 9 October 2022 / Revised: 17 November 2022 / Accepted: 30 November 2022 / Published: 8 December 2022

Abstract: Diabetic retinopathy occurs due to long-term diabetes with fluctuating blood glucose levels and has become the most common cause of vision loss worldwide. It has become a severe problem among the working-age group that needs to be addressed early to avoid future vision loss. Artificial intelligence-based technologies have been utilized to detect and grade diabetic retinopathy at the initial level. Early detection allows for proper treatment and, as a result, eyesight complications can be avoided. This in-depth analysis details the various methods for diagnosing diabetic retinopathy using blood vessels, microaneurysms, exudates, the macula, optic discs, and hemorrhages. Most studies use fundus images of the retina, taken with a fundus camera. This survey discusses the basics of diabetes, its prevalence, its complications, and artificial intelligence approaches for the early detection and classification of diabetic retinopathy. The research also discusses artificial intelligence-based techniques such as machine learning and deep learning. New research fields such as transfer learning using generative adversarial networks, domain adaptation, multitask learning, and explainable artificial intelligence in diabetic retinopathy are also considered. Existing datasets, screening systems, performance measurements, biomarkers in diabetic retinopathy, and potential issues and challenges faced in ophthalmology are discussed, followed by the future scope and conclusion. To the authors' knowledge, no other literature review has analyzed recent state-of-the-art techniques using the PRISMA approach with artificial intelligence as the core.

1. Introduction

Ophthalmology is a medical specialty that focuses on the scientific study of eye diseases and on diagnosing and treating various eye disorders. Ophthalmologists used to diagnose eye problems manually, which took a long time [1]. Diabetes is a long-term illness that interferes with the body's normal capacity to process food. Most of our food is broken down into glucose, which enters the bloodstream. When blood sugar levels rise, the pancreas is prompted to secrete insulin. Insulin is the hormone that permits blood glucose to enter the body's cells to be used as energy. When a person develops diabetes, the body either does not produce enough insulin or does not use it effectively. When insulin is insufficient, or cells stop responding to it, excess glucose remains in the blood. Complications of diabetes [1] include diabetic retinopathy (eye damage), neuropathy (nerve damage), nephropathy (kidney disease), cardiomyopathy (heart problems), gastroparesis, skin problems, etc. [1,2,3,4]. Eye problems are the leading cause of blindness, primarily in elderly populations.
Furthermore, according to a World Health Organization (WHO) report, as the world's population ages, the number of patients suffering from ocular disorders is predicted to increase [5,6]. As a result, there is a lot of interest in applying artificial intelligence (AI) to improve ocular treatment while simultaneously cutting healthcare costs, especially when telemedicine is included [7,8]. Compared to the number of medical facilities available, the proportion of people suffering from eye disease is vast [9]. The most common causes of visual impairment are diabetic retinopathy, age-related macular degeneration, and glaucoma. Cataracts are a clouding of the eye's lens, and macular edema is a swelling that affects the retina. Choroidal neovascularization (CNV), retinal detachment, refractive errors, amblyopia, and strabismus are some of the other ocular problems that may lead to a poor visual prognosis.

1.1. Applications of AI in Retina Images

There are three primary use case scenarios in retina image applications: classification, segmentation, and prediction, as shown in Figure 1.
  • Classification: Categorization cases are common in binary or multi-class retinal image analysis, such as automated screening or detection of the disease stage or type. ML and DL methods are applicable here, depending on the level of interpretability required and the size of the available dataset.
  • Segmentation: The fundamental goal of segmentation-based approaches is to subdivide the objects in an image. The primary purpose of these techniques is to investigate morphological features or to extract a meaningful pattern or feature of interest from an image, such as borders in 2D or 3D imaging. Segmentation of pigment epithelial detachment (PED), for example, is used to diagnose chorioretinal diseases.
  • Prediction: Most prediction scenarios are concerned with illness progression, future treatment outcomes based on an image, etc. The prediction approach can also be used to depict the local retention region.

1.2. Diabetic Retinopathy (DR)

DR is a sight-threatening illness resulting from damage to the retinal vessels, and its prevalence is increasing at an exponential rate. Diabetic problems are widespread and, as a result, this condition affects blood vessels, which then affect the retina's light-sensitive components. The primary cause of this disease's progression is a deficiency of oxygen delivered to the retina [10]. Persons with long-term poor glycemic control are much more susceptible to developing it; whether an individual has type 1 or type 2 diabetes, the risk rises with age [11,12]. DR is a hidden illness that often only emerges in its later stages, when therapy is no longer effective. Frequent retinal scanning is necessary for diabetic individuals so that DR can be treated at an earlier stage and disability averted [13]. Automated screening is essential to reduce manual work, because the cost of manual screening is high [14]. Additionally, because most of the affected population is above 45 years of age, a non-invasive procedure would be beneficial. According to the researchers, fundus imaging is a comfortable and non-invasive technology that optometrists use to determine DR severity levels. Parameters such as microaneurysms (MAs), hemorrhages (HEMs), exudates (EXs), and cotton wool spots (CWS) are examined for the detection of DR. There is a need for technology that allows a non-technical person to take a picture on a mobile phone [15,16,17,18] and email it to ophthalmologists, who can then advise their patients by looking at the picture on their phone.
Early detection can overcome DR. In the current circumstances, AI, along with related approaches in computer science such as ML and DL, has proved to be a powerful tool for detecting complicated patterns in ocular illnesses. Computerized DR detection techniques [19] save money and effort and are much more effective than traditional diagnoses [20]. Figure 2 shows the difference between a normal retinal image and one with DR.
The occurrence of various types of lesions in the retina can be used to diagnose a DR image, as shown in Figure 3. Lesions include microaneurysms (MA), hemorrhages (HM), and hard and soft exudates (EX) [2,3,4].
Microaneurysms (MA): Microaneurysms in the fundus image are an early clinical sign of DR, causing retinal dysfunction due to blood/fluid leaking onto the retina [21,22]. They appear as small red spots on the retina [23] and may be encircled by a yellow lipid ring of hard exudates. They have sharp borders and are less than 125 μm in diameter. Endothelial dysfunction in these microaneurysms can result in leakage and retinal edema, leading to visual loss. The criterion standard for correctly detecting MAs is fluorescein angiography (FA); MA shapes fall into various categories such as focal bulge, saccular, fusiform, mixed, pedunculated, and irregular [24].
  • Hemorrhages (HM) appear as patches on the retina, larger than 125 μm in diameter with an uneven edge. Its two categories are flame (superficial HM) and blot (deep HM) [23].
  • Hard exudates: Hard exudates, which typically appear as bright yellow areas on the retina, are caused by the leakage of plasma lipids. They are found in the outer retinal layers and have clear boundaries.
  • Soft exudates: White spots on the retina generated by nerve fiber swelling are called soft exudates (cotton wool spots). They are oval or circular. Soft and hard exudates (EX) constitute white lesions, whereas MAs and HMs are red lesions. A sample image of various stages of DR is provided in Figure 4. DR is classified as non-proliferative DR (NPDR) and proliferative DR (PDR). Further, NPDR is classified as mild, moderate, and severe, as shown in Figure 5.
Figure 4. Stages of DR [25].
Later phases, as opposed to earlier phases such as mild and moderate NPDR, are much more extreme versions in which additional fragile vessels emerge. These vessels bleed into the eye, impairing a person's sight. Blurred vision, decreased color recognition, dark spots or threads floating across the field of vision, fluctuating vision, etc., are all symptoms of this disorder [10]. A normal retina is characterized by the absence of DR lesions and of fluid leakage from the vessels.
In mild NPDR, at minimum one MA, with or without HEMs, exudates, and CWS, can be seen. Moderate NPDR presents MAs, HEMs, exudates, and CWS that are less serious than in severe NPDR. Severe NPDR is characterized by more than a MA and HEM in four quadrants, venous beading in two quadrants, and intraretinal microvascular abnormalities in one quadrant. PDR is identified by neovascularization of the optic disc, cup, or blood vessels, or by pre-retinal or vitreous hemorrhage, together with DR abnormalities [26,27]. Furthermore, DR is the most significant cause of visual impairment in most developing and industrialized countries, especially among working individuals [28,29]. Moreover, manual DR evaluation by a specialist is subjective and depends firmly on individual technical experience. As a result, computerized technologies are urgently needed to quantify DR reliably on more extensive datasets and reduce inter- and intra-reader variability [30,31].

1.3. Evolution of DR Using AI

In the United States, Europe, and Asia, it is estimated that around one-third of patients with diabetes (34.6%) have DR to some degree [32]. It is also worth noting that one out of every ten people (10.2%) has vision-threatening DR [33]. Vision loss caused by DR has been proven preventable with timely treatment. Enormous work has been performed in this field.
Additionally, there are various methods for detecting DR. Multiple DL-based fully automated DR monitoring methodologies have been introduced in recent years. This section discusses some of the present research performed on DR with AI. The evolution of DR with artificial intelligence is shown in Figure 6. Several pioneering works mark this evolution. One of them, presented by Gardner et al. [34] in 1996, stated that neural networks can detect diabetic features in fundus images and compared the network against an ophthalmologist screening a set of fundus images. Cree et al. [35] proved that computer vision techniques were suitable for detecting microaneurysms. Their experiments relied on simple morphological and thresholding techniques using eight features, including pixel area and total pixel intensity, measured on each candidate. The proposed method achieved results similar to those obtained by clinicians and proved that automated detection can be used for diagnostic purposes. Franklin [36] proposed a novel retinal imaging method that segments the blood vessels automatically from retinal images. This method labels each image pixel as vessel or non-vessel, which, in turn, is used for automatic recognition of the vasculature in retinal images. Retinal blood vessels were identified by means of a multilayer perceptron neural network, for which the inputs were derived from Gabor and moment invariant-based features. Later, the methods evolved to detect not only microaneurysms in the fundus, but also the stage of diabetic retinopathy using ML and DL approaches [37]. A detailed literature review is given in Section 3.1 and Section 3.2.

1.4. Prior Research

To our understanding, only a few systematic literature review (SLR) publications on DR using AI technology are available. Jahangir R. [15] covers the main areas: datasets, preprocessing methods, ML- and DL-based approaches, and the performance evaluations used in DR detection tactics. Contrast enhancement paired with green channel extraction yielded the best classification results among image preprocessing approaches. Among the several features considered for DR detection, structural, textural, and statistical features were determined to be the most discriminative [25]. Compared to other ML classifiers, the artificial neural network has proven to be the most effective. Among DL networks, convolutional neural networks outperformed the others.
Pragathi P [38] suggested an integrated ML approach that incorporates support vector machines (SVMs), principal component analysis (PCA), and moth-flame optimization. The DR dataset is first subjected to the ML algorithms decision tree (DT), SVM, random forest (RF), and naïve Bayes (NB). The SVM algorithm outperforms the others, with an average performance of 76.96%. The moth-flame optimization technique is then combined with SVM and PCA to increase the performance of the ML systems. This proposed approach outperforms all previous ML algorithms with an average performance of 85.61%, and accurate categorization of class labels is accomplished. Wei Zhang [39] discusses DeepDR, which uses transfer learning and ensemble learning to detect DR in fundus images. It consists of cutting-edge neural networks built from common convolutional neural networks and customized standard deep neural networks. DeepDR was developed by creating a high-quality dataset of DR medical images and then having clinical ophthalmologists label it. The primary objective of this review is to study the existing scholarly articles and related conclusions on the specified study topic. Table 1 lists the study topics that will aid in focusing this SLR. To the authors' knowledge, this is the first detailed SLR to cover methodologies for the detection of DR based on AI for robust and reliable detection.
The major highlights of this paper are:
  • Datasets in the discipline of DR detection that are accessible online are enumerated, along with their availability.
  • An exhaustive survey of widely used ML and DL methodologies for DR detection is discussed.
  • Feature extraction and classification techniques used in DR are discussed.
  • Future research concepts such as domain adaptation, multitask learning, and explainable AI in DR detection are discussed.

1.5. Motivation

Working in the discipline of ophthalmology is primarily motivated by a concern for human health. The eye is among the most essential sensory organs, since it receives data from the world and transmits it to the central nervous system. The mind subsequently converts the data collected by the eyes into usable information. As a result, our eyes are crucial senses which provide us with knowledge of the world, enabling us to acquire new knowledge, participate in creative processes, and form lovely memories. There is plenty of work performed in today's industrialized world utilizing various modern personal digital assistants, laptops, smartphones, etc. Additionally, because of the influence of COVID-19 over the last two years, most people who work from home utilize various internet platforms. As a result of all these factors, many people are experiencing vision problems. Various disorders such as obesity, cardiovascular disease, hypertension, strokes, and depression are more prevalent in persons with vision problems [37]. They are also more likely to stumble, get injured, or become depressed.
As per recent publications, surveys [42], and clinical information, many people have been diagnosed with DR, AMD, cataracts, glaucoma, CNV, drusen, corneal scarring, and a variety of other eye ailments [33,38]. AI-related techniques have been used to diagnose eye-related disorders, and there is still a lot of AI potential to be discovered. Because AI and related approaches could fundamentally alter vision care, they present a significant opportunity for the healthcare business, as the potential of AI is only beginning to be unveiled. As a result, there is a lot of interest in using AI to improve ophthalmologic treatment while lowering healthcare costs. This review further highlights a variety of allied methodologies and datasets to keep up with the field of ophthalmology's rapid growth. This publication aims to help young researchers gain a greater understanding of retinal problems and to support work in ophthalmology toward developing a self-contained platform.

1.6. Research Goals

Based on earlier studies and their outcomes, this research surveys DR detection with AI and, accordingly, research questions are proposed to obtain a comprehensive survey of DR detection using AI. Table 2 shows the grouped survey items, enlisting the research questions considered during the SLR to make this study more comprehensive.
Specific perspectives must be available to help academics generate innovative thinking by evaluating related studies. The first research question examines previously published work and the most common AI-based DR detection methods. The goal of research question 2 is to create a list of all feature extraction techniques used in DR [43]. Research question 3 outlines relevant datasets for DR exploration. Research question 4 looks at a few prominent evaluation measures in DR utilizing AI methodologies. Research question 5 lists the limitations of existing methodologies and constraints on future directions.

1.7. Contribution of the Study

Our systematic literature review made the following contributions:
  • To explore available datasets that have been used for detecting DR.
  • To investigate artificial intelligence strategies that have been employed in the literature for DR detection.
  • To explore feature extraction and classification.
  • To study multiple assessment metrics to analyze DR detection and categorization.
  • To highlight the scope of future research, including concepts such as domain adaptation, multitask learning, and explainable AI in DR detection.
This survey includes the research methodology for artificial intelligence-based ophthalmic analysis and the study's contribution in Section 2. The literature on techniques-based ophthalmology analysis is discussed at length in Section 3. In addition, a comparative examination of approaches and findings achieved by various ophthalmology researchers is reviewed. Section 4 examines the potential difficulties and implications of related procedures in the ophthalmology domain. Section 5 clarifies the overview by outlining clinical applications, future research, and avenues for studying various diseases, diagnoses, and treatments for eye disorders. Section 6 discusses performance measures, Section 7 introduces biomarkers in DR, and Section 8 highlights research challenges with future research directions. Figure 7 demonstrates the organization of the systematic literature review.

2. Research Mechanism of Study

A descriptive analysis was carried out on the eligible study items using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method. PRISMA comprises a list of protocols for longitudinal studies and other data-driven concepts, including writing and formatting guidelines. A three-step technique is employed to conduct a systematic review: the creation of research questions, the identification of online databases, and guidelines for accepting and discarding scientific papers; these research analysis processes are presented in the following pages.
The analysis is arranged systematically to categorize and evaluate existing publications so as to encompass the study's breadth. The first step is defining the study topics so that the inclusion ratio of prior work can be precisely calculated; examining related groundwork can also help scientists develop innovative ideas. Table 2 provides a list of the research questions used in the SLR. The identification of information sources is the second phase in our SLR. Databases such as Scopus and Web of Science were used to find related papers. Some general and specific keywords that can be used to create search queries for finding research publications are included in Table 3. The third step is to develop techniques for evaluating the technical and scientific documents. This process uncovered publications related to our topic. The proposed approach is divided into two parts: (I) choose queries that are used to search for and collect all relevant data using Boolean AND/OR, and (II) use the Boolean operators AND/OR to derive search keywords from the survey questions and note the topics. Table 4 lists the web searches used for this research. Figure 8 shows the distribution of relevant papers by (a) data source, (b) year, and (c) document type.

Paradigms for Inclusion and Exclusion

A set of study protocols for selecting and excluding scientific studies was developed to pick relevant scholarly articles for the literature review (Table 5). Three inclusion criteria phases are employed in the screening procedure:
(a)
Abstract screening: Insignificant scientific studies were weeded out depending on the information and terms found in study summaries. Summaries of scientific papers that fulfilled a minimum of 40% of the inclusion criteria (IC) were retained for the subsequent stages.
(b)
Full-text screening: Articles with summaries that only address limited elements of the keyword search are rejected if they do not reference or connect to a particular keyword in Table 3.
(c)
Quality assurance step: The remaining scientific studies were subjected to a qualitative review, and those that did not meet any of the eligibility principles were eliminated.
  • RC1: Recommendations and results must be included in research articles.
  • RC2: Scientific data must be included in scientific papers to support their conclusions.
  • RC3: The aims and findings of the research must be expressed.
  • RC4: For scientific studies, citations must be proper and adequate.

3. RQ1 Artificial Intelligence for DR Detection

The first half of this section discusses artificial intelligence-based DR detection tools, ML techniques, DL techniques, and transfer learning (Table 6). DR is a leading cause of preventable blindness worldwide, and artificial intelligence technologies can help with early detection, monitoring, and treatment. The lack of ground truth criteria in retinal exam datasets is a concern for supervised artificial intelligence algorithms that require high-quality exams. Various groups have used AI methods, such as ML and DL, to construct automated DR detection tools [44]. Such AI-based technology has been demonstrated to reduce costs, improve detection ability, and increase universal access to DR screening. The latest DL studies in ophthalmology suggest that it could substantially replace human evaluators while retaining good accuracy.

3.1. Machine Learning Techniques in DR Detection

Several ML techniques have been developed for DR classification. They are discussed below, and a diagrammatic explanation is given in Figure 9.
(a)
Linear Discriminant Analysis: Local linear discriminant analysis (LLDA) is among the most extensively utilized classification and dimensionality reduction methods. It can be used to discriminate between multiple classes. It finds a projection onto a line that allows samples from various classes to be separated [15]. In the primary ML investigations, LLDA was utilized only once. Wu and Xin [52] used the LLDA algorithm to detect microaneurysms and compared the results with the SVM and k-NN on the ROC dataset. The authors found that the LLDA algorithm failed to perform, and the SVM gave better results than both the LLDA and k-NN in terms of accuracy.
(b)
Decision Trees: A fundamental tool for solving classification tasks. Its structure is tree-like and hierarchical, where an internal node represents a test on an attribute, a branch represents a test outcome, and a terminal node carries a class label. The root node is the topmost node in a tree. It is used to represent decisions in decision analysis. One of the benefits of a DT is that it does not necessitate much data preparation. One downside of a DT is that it can occasionally result in overly complicated trees, a problem known as overfitting. With the DIARETDB0 and DIARETDB1 datasets, Rahim and Jayne [53] detected microaneurysms from retinopathy images using SVM and k-NN; 90% of the total photos were used for training, while the remaining 10% were used to test the classification algorithms.
(c)
Support Vector Machines: The support vector machine (SVM) is a supervised model for categorizing data. It generates a binary classifier from the dataset examples (support vectors and a hyperplane). The A+ and A− categories denote the nearest distances to the extreme positive and negative examples. A hyperplane is a plane that separates the A+ and A− classes, with A+ on one side of the hyperplane and A− on the other [15]. In various studies, researchers successfully employed SVM techniques to classify distinct DR conditions, including [54,55,56,57,58]. Furthermore, the authors claim that the SVM improves classification performance. Exudates within retinal photos were detected and classified using SVM, SCG-BPN, and GRN techniques by Vanithamani and Renee Christina [59]. The DIARETDB1 dataset, consisting of 40 training and 40 testing images, was used. The experimental findings revealed that the SVM algorithm outperformed the SCG-BPN and GRN algorithms in classification performance.
(d)
Naïve Bayes: The naïve Bayes (NB) classifier is probabilistically based. It builds a model from numerical data and needs significantly less numeric data to predict a classification; as a result, it is a quick and straightforward categorization algorithm. Wang and Tang [60] examined three classification systems for microaneurysm detection: NB, k-NN, and SVM. Tests were conducted on private and public datasets. In their research, k-NN outperformed the NB method.
(e)
K-Nearest Neighbor: One of the most basic and straightforward ML techniques is the k-nearest neighbor (k-NN) methodology. It classifies items in a feature set using the nearest instances in the training set. The parameter "k" denotes the number of nearest neighbors the classifier uses to build its prediction. Among the 40 publications about ML, the k-NN algorithm was employed in numerous investigations [15]. The k-NN method was utilized by Nijalingappa and Sandeep [61] to classify DR into severity stages. They employed 169 photos from the Messidor and DIARETDB1 datasets and a unique dataset [53] in their research. They used 119 photos to train their ML method and 50 photos to test it. The classification results produced by k-NN were satisfactory.
(f)
Random Forest: Random forest is an effective and successful ML classification method. It forms a forest of decision trees (DT). The projections will be much more accurate if there are enough trees in the forest. Every tree casts a vote on how to categorize a new instance depending on its characteristics, and the forest chooses the category with the most votes. To put it differently, the RF classification technique is comparable to the bagging technique: a subset of the training dataset is formed in RF, and a DT is created for each subset. On the testing set, every input sequence is classified by all the DTs, and the forest chooses the outcome with the best score [15]. The RF classifier was only used once in the experiments that were chosen. The RF classifier was used by Xiao and Yu [62] to detect hemorrhages in retinal images. They used 35 photos from a unique dataset and 55 images from DIARETDB1. They employed 70% of the photos for training the ML network, and the remaining 30% were used for testing and classification with the RF technique. The RF algorithm achieved good sensitivity, according to the findings of the experiments.
(g)
Artificial Neural Networks: This classifier comprises three layers: an input layer, a hidden layer, and an output layer. There are numerous nodes in the input and hidden layers, but only one in the output layer. A neuron is the activating unit in a neural network. Patterns are presented to the input layer and passed to the hidden layer, which does the actual processing. The hidden units are initially allocated random weights. The output node receives the hidden layer's result and produces the outcome. The network is similar to a perceptron in that it takes numerous inputs and creates a single output.
Considering DR imaging, several researchers applied a single ANN classification technique and obtained outstanding results. The authors of [63,64,65,66,67,68] employed a single ANN model and discovered that it was a superior diabetes classification strategy.
(h)
Unsupervised Classification: Unsupervised classification is employed when prior knowledge is unavailable. In this circumstance, only the set of information and characteristics that correspond to specific occurrences is available. In the chosen papers, unsupervised classification techniques were used many times. Zhou and Wu [22,69] used the ROC dataset with 100 images to perform unsupervised classification for microaneurysm identification. 50% of the photos were used for training, and the other 50% were used for testing in their experiments. The authors found that unsupervised classifiers performed reasonably well. Unsupervised classification methods were used by Kusakunniran and Wu [70], and Biyani and Patre [53], to identify exudates in DR scans, with sensitivities of 89% and 88%, respectively.
(i)
Ensemble Classifiers: Ensemble learning, also called group learning, combines different classification methods to create a more accurate model [71]. There are three ways to do it: bagging, boosting, and stacking. In bagging, many classification techniques work in parallel, and their outputs are voted on at the end; the final classifier is the one that receives the most votes. Boosting is a technique that employs a sequence of classification algorithms. The weight of every model is adjusted based on the prior iteration. The data are split into many segments, each of which is checked with the help of the others, and so on [72]. Stacking comprises base models, often known as level-0 models, as well as a meta-model that combines the level-0 model predictions. Stacking contrasts with boosting in that a meta-model learns how to effectively combine the predictions from the base models, rather than a series of models correcting former models' prediction mistakes [73].
HDT with FFNN was used by Mane and Jadhav [74] to generate a categorization mechanism. The authors used two datasets, DIARETDB0 and DIARETDB1, to evaluate their DR image categorization capabilities over HDT and LMNN independently, and concluded that it was 98% reliable. Fraz and Jahangir [75] developed a classifier model using the DIARETDB1, e-Ophtha Ex, and Messidor datasets. 137 images were used for training their ML system and 341 images were used to evaluate their ensemble-based classifier. 98% accuracy was achieved in their experimentation.
(j)
Adaptive Boosting: AdaBoost is a systematic way to combine a wide range of weak learners into a strong classifier. It works step by step, wherein each tree is fitted to a modified version of the original dataset before producing a robust classifier. This technique was utilized once in the chosen significant research. The AdaBoost method was used by Prentasic and Loncaric [76], wherein exudates were detected in DR images and a sensitivity of 75% was achieved, according to their experiments.
(k)
Self-Adaptive Resource Allocation Network Classifier: It selects training data based on a self-regulating criterion and then discards redundant data, requiring less memory and computing capacity. The network is then trained using the selected samples that carry more information. Although the SRAN method was used twice in the primary ML tests, it did not perform as well as other categorization techniques. The authors of [22,77] compared the SRAN classification method with the McNN and SVM classifiers to identify and track different ocular illnesses. A dataset from the Lotus Eye Hospital in Coimbatore, India, was used in their study.
In ML-based techniques, the best performance is given by statistically-based features, followed by shape and structure features [78]. ANN gives better classification performance than SVM and, among ML techniques, ensemble classifiers perform best. Deep learning is a part of machine learning that works with artificial neural networks. It requires a huge amount of data and gives more accurate results compared to classical ML techniques. In DL, CNNs are primarily applied to extract features from and categorize DR images automatically. The following section discusses DL and DR in detail.
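To make the shallow-classifier pipeline above concrete, the following minimal sketch compares several of the classifiers discussed (SVM, k-NN, naïve Bayes, random forest, and a majority-vote ensemble) with scikit-learn. The feature matrix, labels, and split sizes are synthetic placeholders for illustration, not data from any study cited here.

```python
# Minimal sketch: comparing shallow classifiers on hand-crafted DR features.
# X would hold features such as lesion area, perimeter, or GLCM statistics;
# here both X and the labels y are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))          # 12 hand-crafted features per image
y = rng.integers(0, 2, size=500)        # 0 = no DR, 1 = DR (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

classifiers = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "k-NN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
# Bagging-style ensemble: hard majority vote over the individual classifiers.
classifiers["Voting Ensemble"] = VotingClassifier(
    estimators=[(name, clf) for name, clf in classifiers.items()])

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(f"{name}: accuracy = {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```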

3.2. Deep Learning in DR Screening

DL is a subclass of ML that has grown into a more robust, valuable, and practical methodology. A DL model comprises a complex architectural style with a multidimensional framework. In medical image analysis, DL [79] is used to categorize, localize, segment, and identify medical images. With its multiple methodologies, DL delivers more spectacular and promising DR disease diagnosis and categorization results. Convolutional neural networks (CNN), deep Boltzmann machines (DBM), autoencoders, deep neural networks (DNN), recurrent neural networks (RNN), deep belief networks (DBN), and generative adversarial networks (GAN) are only a few of the DL-based techniques [80]. CNNs are more widely employed in medical imaging than other DL approaches. The three layer types within a CNN architecture are convolutional, pooling, and fully connected layers. A CNN's dimensions, depth, and number of filters can be changed to suit the author's requirements. First, several filters combine in the convolutional layers to extract image features and build feature maps. Second, the average- or max-pooling strategy is commonly used to reduce the volume of the feature maps in pooling layers. Finally, the whole image feature set is created using fully connected layers. Subsequently, the data are categorized using one of two activation functions, sigmoid (binary classification) or softmax (multi-class classification) [80].
Whenever the sample data are insufficient, the data can be preprocessed and augmented as needed to increase the number and variety of image features before being examined. To sort Kaggle color images into five categories depending on DR scales, Pratt et al. [22] used a CNN containing ten convolutional layers, eight max-pooling layers, three fully connected layers, and a softmax classifier. Fundus images were resized and adjusted. L2 regularization and dropout methods proved helpful for the authors to minimize overfitting. They achieved a specificity of 95%, accuracy of 75%, and sensitivity of 30%.
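As a rough illustration of this style of architecture, the sketch below builds a small Keras CNN with ten convolutional layers, pooling after each block, three fully connected layers, a softmax output for five DR grades, and L2 regularization plus dropout. The input size, filter counts, and pooling arrangement are assumptions for illustration; this does not reproduce the exact published network.

```python
# Minimal sketch of a Pratt-style CNN for 5-class DR grading (illustrative,
# not the published network): stacked conv/max-pool blocks, three dense
# layers, and a softmax head, with L2 regularization and dropout.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_dr_cnn(input_shape=(512, 512, 3), num_classes=5, l2=1e-4):
    reg = regularizers.l2(l2)
    model = tf.keras.Sequential([tf.keras.Input(shape=input_shape)])
    # Ten convolutional layers arranged in five blocks, each followed by pooling.
    for filters in (32, 64, 128, 256, 512):
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu",
                                kernel_regularizer=reg))
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu",
                                kernel_regularizer=reg))
        model.add(layers.MaxPooling2D(2))
    model.add(layers.Flatten())
    # Three fully connected layers in total, counting the softmax output.
    for units in (1024, 512):
        model.add(layers.Dense(units, activation="relu", kernel_regularizer=reg))
        model.add(layers.Dropout(0.5))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_dr_cnn()
model.summary()
```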
Wang et al. [81] combined handcrafted features with CNN features. They then used the random forest (RF) classifier to develop a classifier that identifies hard exudates using the HEI-MED and E-ophtha datasets. Three convolutional and pooling layers were utilized to create the CNN, and only one FC layer was used for feature finding. HEI-MED and E-ophtha achieved AUCs of 0.9323 and 0.9644, respectively, and sensitivities of 0.9477 and 0.8990. The DRIVE, STARE, and CHASE DB1 datasets were used to create a complete CNN model to capture blood vessels, as noted by Oliveira et al. [70]. The color fundus images were first refined. The dataset images were normalized using the stationary wavelet transform (SWT) after extracting the green channel. Finally, before CNN processing, patches were extracted and augmented. On the DRIVE, STARE, and CHASE DB1 databases, the model achieved AUCs of 0.9821, 0.9905, and 0.9855, respectively. Chudzik et al. [82] used eighteen convolution layers, batch normalization layers, three max-pooling layers, up-sampling layers, and four skip connections to construct a CNN model. The authors employed three datasets to identify microaneurysms: E-ophtha, DIARETDB1, and ROC. Images were preprocessed before CNN processing.
Yan et al. [83] introduced a technique for identifying DR lesions on DIARETDB1 by combining standard handcrafted features, improved LeNet features, and a classifier built on a combination of U-Net and an improved LeNet. During the preparatory step, green channels were extracted, CLAHE was used to enhance contrast, a Gaussian filter was employed to reduce noise, and morphological approaches were applied. The U-Net design was utilized to segment the blood vessels when detecting red lesions. The LeNet architecture was upgraded with four convolutional layers, three max-pooling layers, and a fully connected layer to produce 48.71% sensitivity. DeepDR, a self-contained technique that combines the pre-trained models ResNet and Inception-V3, was proposed by Zhang et al. [39]. It has a sensitivity of 97.5% and a specificity of 97.3%, with an AUC of 97.7%.
DL models (DLMs) are a recent type of ML that use a multi-layered artificial neural network (ANN) to learn richer representations from the information. DL has become the most common technique for detection, prediction, forecasting, and classification problems in various domains in recent years. It opens numerous prospects for preventing such a terrible condition in the healthcare profession, especially DR [12]. DLMs have been highly influential in various computer vision and biomedical image analysis applications [39,84] and have proven to be an effective method of categorizing medical data. The CNN model plays a vital role in NPDR category classification, achieving excellent specificity and sensitivity using fundus photos. This has also improved the effectiveness, availability, and affordability of DR grading systems, which have been tested on several challenging images and scenarios [83,85]. Figure 10 shows the diagrammatic explanation of DR using DL.
Apart from the other DL approaches presented in the above section, transfer learning is a novel strategy that allows a previously trained model to be employed on a new problem statement. Transfer learning for DR is covered below.

3.3. Transfer Learning in DR

Transfer learning reuses a pretrained network, also known as a stored network, and is a typical DL approach when only a limited set of examples is available. It improves the learning performance on a target task by transferring knowledge learned in a pre-trained network [86,87]. Transfer learning is utilized when the data available for training the target task are inadequate, and even when a perfect option for the problem at hand is unavailable. Applying this understanding to the target problem, transfer learning is employed to train a CNN without re-initializing the CNN weights [87]. Weights are instead loaded from a different CNN trained on a large dataset. The standard dataset from which trained weights are transferred is ImageNet [88].
There are two parts to a pre-trained network. The initial section consists of a succession of convolutional and pooling layers, followed by a densely connected classifier. Convolutional feature maps preserve object locations in an input image; the densely connected layers at the top of the convolutional base, on the other hand, discard this spatial information and are therefore of little use for object localization tasks. A pretrained network has been trained on a vast dataset, typically for a large-scale image classification task. VGG19, Xception, MobileNet, ResNet, DenseNet, and Inception-V3 are a few such networks. The ImageNet dataset, which consists primarily of everyday items and animals, was used to train these networks. Because they learn a spatial hierarchy of features, pre-trained systems can be used as generic models if the source data are large and general enough [88]. A pretrained model can be used in two ways: feature extraction and fine-tuning. For feature extraction, the convolutional base of a pretrained system is combined with new classifiers, and the dataset is processed through it. Since it learns general representations useful for various tasks, the convolutional foundation is reusable. Fine-tuning, performed in conjunction with feature extraction, entails unfreezing portions of the model's top layers (used for feature extraction). The top layers and the classifier of the system are then trained together [85,88]. Figure 11 depicts transfer learning from a pre-trained CNN model. The learned convolutional base can be used to extract features, which are then fed into a new classifier. Negative transfer and overfitting are significant difficulties when using full-scale, fine-tuned transfer learning systems. Whenever the domain of the pre-trained system and the target domain do not match, negative transfer happens, leading to low efficiency of the transfer learning model [89]. This can be due to the complexity of the characteristics captured in the last layers, which are unsuitable for the target domain. Fine-tuning later layers, on the other hand, causes overfitting issues: to adjust later-level features to our target domain, we must train layers with a huge set of parameters. This is impractical because the common pretrained network Inception-V3 contains 21,802,784 parameters, while ResNet152 contains 58,370,944 parameters; whenever these large-scale networks are trained, there is a risk of overfitting [87,88]. The success of using transfer learning for DR classification may be analyzed by comparing models trained from scratch with their fine-tuned variants.
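The two-stage recipe described above — feature extraction with a frozen convolutional base, then cautious fine-tuning of its top layers — can be sketched as follows with an ImageNet-pretrained Inception-V3 in Keras. The five-class head, the number of unfrozen layers, and the learning rates are illustrative assumptions, not values from any cited study.

```python
# Minimal sketch of transfer learning for DR grading: reuse an ImageNet-
# pretrained convolutional base, first as a frozen feature extractor, then
# fine-tune its top layers with a small learning rate.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 5  # assumption: five DR severity grades

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # stage 1: feature extraction with a frozen base

model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # new classifier head
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # train head only

# Stage 2: fine-tuning — unfreeze only the last layers of the base and
# retrain slowly, limiting the overfitting/negative-transfer risks noted above.
base.trainable = True
for layer in base.layers[:-30]:   # keep all but the top 30 layers frozen
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```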
Several researchers [90,91,92] concluded that utilizing transfer learning improves a model's accuracy in categorizing DR [93]. Many photos are required to train a DL model, but the number of images in DR image datasets is limited. Collecting DR photos and correctly labeling them is a time-consuming, experience-intensive, and resource-intensive operation [73].

4. RQ2 Feature Extraction Techniques for DR

4.1. Explicit or Traditional Feature Extraction Methods

Image processing and image segmentation are researched extensively in the literature. Feature extraction plays a vital role in these methodologies; feature extraction methods for DR can be classified as explicit (traditional) methods and direct or implicit methods (based on CNNs). Traditional methods use shallow ML classifiers to diagnose DR, using image processing-based feature detectors to measure blood vessels and the optic disc and to count abnormalities such as lesions, including hard exudates, soft exudates, microaneurysms, and hemorrhages, in an image. Shape, color, intensity, statistical, and texture-based features are examples of these characteristics. The following paragraphs summarize these characteristics, which are also shown in Figure 12; Table 7 lists the traditional feature extraction methods used in DR.
(a) Characteristics based on shape and structure: The shape and size of various DR lesions, such as hard and soft exudates, hemorrhages, and microaneurysms, are among these characteristics. For example, Zhou and Wu [94] used shape-based criteria to detect microaneurysms: area and perimeter, axis length, circularity, and compactness.
(b) Features based on RGB (color): These characteristics are based on the image’s red, green, and blue color planes. For example, authors Jaya and Dheeba [95] employed color fundus images to detect hard exudates and used four color-based characteristics. They used RGB color space to build color histograms.
(c) Features based on intensity: The pixel intensity in the red, green, and blue planes is called intensity. The authors of [11] employed intensity characteristics to detect cotton-wool patches in DR images. Similarly, the authors of [16] employed intensity characteristics to determine hard and soft exudates by computing maximum and lowest pixel intensities.
(d) Statistical characteristics: Statistical parameters are calculated from the pixels of a DR image. The authors of [96] used statistical and color features to identify hemorrhages in retinal images. The statistical parameters employed were the maximum, minimum, and mean with standard deviation.
(e) Texture-based characteristics: These provide texture-related information on DR images. Entropy, cluster shade, dissimilarity, and correlation were four GLCM-based features that Vanithamani and Renee Christina identified [97]. Nijalingappa and Sandeep [61] employed the GLCM to extract textural features such as contrast, correlation, energy, homogeneity, similarity entropy, sum variance, difference variance, sum entropy, difference entropy, sum average, and inverse difference moment. In the selected papers, authors used several features in ML methodologies, such as shape, color, intensity, statistical, and texture-based characteristics. Shape and statistical features are the most frequently employed combination of features.
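As a minimal sketch of these traditional features, the code below computes GLCM-based texture descriptors (contrast, correlation, energy, homogeneity) and simple shape statistics (area, perimeter, circularity) with scikit-image. The input patch, the intensity threshold standing in for a lesion detector, and all parameter choices are placeholders rather than any cited study's settings.

```python
# Minimal sketch of traditional feature extraction on a fundus image patch:
# GLCM texture descriptors plus shape statistics of candidate lesion regions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.measure import label, regionprops

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # synthetic patch

# Texture: gray-level co-occurrence matrix at distance 1, four angles.
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)
texture = {prop: graycoprops(glcm, prop).mean()
           for prop in ("contrast", "correlation", "energy", "homogeneity")}

# Shape: area, perimeter, and circularity of candidate lesion regions
# (a crude intensity threshold stands in for a real MA/exudate detector).
mask = patch > 200
shape_features = []
for region in regionprops(label(mask)):
    circularity = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-8)
    shape_features.append((region.area, region.perimeter, circularity))

print(texture)
print(shape_features[:3])
```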

4.2. Direct Methods

Direct methods are also called implicit methods. In recent works, direct methods tend to use deep CNNs. These methods do not need manually extracted features; instead, they learn the patterns of DR anomalies and deliver grading results according to the grading criteria. CNN architectures such as AlexNet, GoogLeNet, and Inception-V3, typically pretrained on ImageNet, are all used to train on DR pictures in the literature. Furthermore, with direct methods we do not need to create a feature vector. These techniques represent newer research in the literature.
Zago et al. [113] used two CNNs (a pre-trained VGG16 and a custom CNN) to build an approach for detecting DR vs. non-DR color images based on the probability of lesion regions. The DIARETDB1 dataset was utilized for training. The IDRiD, Messidor, Messidor-2, DDR, DIARETDB0, and Kaggle datasets were used for testing. The Messidor data yielded the highest scores, with an AUC of 0.912 and a sensitivity of 0.94. Jiang et al. [114] generated a method that classified a fundus image dataset as referable DR or non-referable DR using three CNNs (Inception-v3, ResNet152, and Inception-ResNet-v2). The photos were scaled, enhanced, and augmented before CNN training. The AdaBoost approach was then used to combine the CNNs. All network weights were updated using the Adam optimizer, and the model achieved an accuracy of 88.21% and an AUC of 0.946.
To estimate the five-stage DR and evaluate the performance of CNNs, Wang et al. [75] used the Kaggle fundus dataset and three distinct CNNs (pre-trained VGG16, AlexNet, and Inception-v3). With all three pre-trained models, the fundus images were resized to different forms, yielding 63.23%, 50.03%, and 37.43% using Inception-v3, VGG16, and AlexNet, respectively.
Hua et al. [111] used the DRIVE dataset photos to extract the retinal blood vessels. The authors chose four feature maps using a pre-trained ResNet-101 network; the individual feature maps were then blended to create a single map. Before CNN processing, the fundus images were enhanced. With the best feature maps merged, the accuracy was 0.951, the sensitivity was 0.793, the AUC was 0.9732, and the specificity was 0.9741. The retinal blood vessels were also extracted using a CNN created by Wu et al. [112] from three well-known databases: STARE, DRIVE, and CHASE. In the preprocessing phase, the color images were transformed into grayscale images, normalized, and augmented, and the image contrast was increased using CLAHE. For the STARE, DRIVE, and CHASE datasets, the model attained AUCs of 98.75%, 98.30%, and 98.94%, respectively, and accuracies of 96.72%, 95.82%, and 96.88%.
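The preprocessing steps that recur in these pipelines — green-channel extraction, CLAHE contrast enhancement, Gaussian denoising, resizing, and normalization — can be sketched with OpenCV as follows. The file path and all parameter values are illustrative assumptions, not settings from the cited works.

```python
# Minimal sketch of common fundus preprocessing before CNN training.
import cv2
import numpy as np

img = cv2.imread("fundus.png")       # BGR color fundus image (placeholder path)
green = img[:, :, 1]                 # green channel: best vessel/lesion contrast

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(green)        # local contrast enhancement

denoised = cv2.GaussianBlur(enhanced, (5, 5), 0)   # suppress sensor noise

resized = cv2.resize(denoised, (512, 512))         # common input size (assumption)
normalized = resized.astype(np.float32) / 255.0    # scale to [0, 1] for a CNN
```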

5. RQ3 Datasets Available for DR

An enormous number of public datasets are available for DR. Training, validation, and testing can be performed using various datasets, and the performance of different systems can be compared and examined. Fundus color photography and optical coherence tomography (OCT) are retinal imaging methods. OCT captures the internal structure of the retina and is available in 2D and 3D. In contrast, images taken with fundus cameras are 2D photographs of the retina [115]. Table 8 lists various fundus image datasets:

6. RQ4 Evaluation Measures Used for DR Detection

Performance measure parameters play a vital role in evaluating a model on future or unseen data to estimate the model's generalization accuracy. A few of them are listed below.

6.1. False Positive Rate (FPR)

It is the proportion of cases in which segmentation of retinal images produces positive instead of negative findings.
It can be written as follows:
FPR = FP/(TN + FP)

6.2. False Negative Rate (FNR)

It is the proportion of cases in which segmentation of retinal images produces negative instead of positive findings.
It can be written as follows:
FNR = FN/(TP + FN)

6.3. Accuracy [89]

It is the ratio of correctly assigned pixels in the segmented image to the total number of pixels.
It is given as:
A = (TN + TP)/(TN + FN + FP + TP)

6.4. Specificity

It is the ratio of correctly detected non-vessels to the total number of non-vessels.
Spec = TN/(FP + TN)

6.5. Sensitivity/Recall Rate

It is the ratio of accurately identified vessels to the total number of vessels.
Sen = TP/(FN + TP)

6.6. F-Score

The F-score is a measure of test accuracy. It is the harmonic mean of precision and recall.
F-Score = 2 × (Recall × Precision)/(Recall + Precision)
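The measures above can be computed directly from the confusion-matrix counts. The following sketch (the helper name dr_metrics and the counts are illustrative, not from any cited study) implements the formulas verbatim:

```python
# Minimal sketch: the evaluation measures defined above, computed from the
# confusion-matrix counts TP, FP, TN, FN (synthetic example values below).
def dr_metrics(tp, fp, tn, fn):
    fpr = fp / (tn + fp)                        # false positive rate
    fnr = fn / (tp + fn)                        # false negative rate
    accuracy = (tn + tp) / (tn + fn + fp + tp)
    specificity = tn / (fp + tn)
    sensitivity = tp / (fn + tp)                # recall
    precision = tp / (tp + fp)
    f_score = 2 * sensitivity * precision / (sensitivity + precision)
    return dict(FPR=fpr, FNR=fnr, accuracy=accuracy,
                specificity=specificity, sensitivity=sensitivity,
                F_score=f_score)

print(dr_metrics(tp=80, fp=10, tn=95, fn=15))
```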

6.7. ROC

It is a graph showing classifier performance at all conceivable thresholds. The graph depicts the true positive rate (on the Y-axis) against the false positive rate (on the X-axis).

6.8. Positive Predictive Value (PPV)

It is the probability that a region segmented as positive in a fundus image is truly positive: PPV = TP/(TP + FP).

6.9. Negative Predictive Value (NPV)

It is the probability that a region segmented as negative in a fundus image is truly negative: NPV = TN/(TN + FN).

6.10. False Discovery Rate (FDR)

It is the expected proportion of false positives among all results flagged positive: FDR = FP/(FP + TP).

6.11. Confusion Matrix

A confusion matrix is used to find out what our ML algorithm achieved and where it went wrong. It is a matrix used for evaluating a classification model's performance on a given set of test data. It can only be determined if the true values for the test data are known. It is also known as an error matrix, since it displays the flaws in the model's performance as a matrix [157].
The following are some characteristics:
(a) Rows correspond to what is predicted and columns correspond to the known truth or actual values. Here, a matrix for the prediction of two classes for a classifier is given by a 2 × 2 table, three classes by a 3 × 3 table, etc.
(b) Actual values are the actual values for the given observations, whereas projected values are predicted by the model.
(c) The following Table 9 gives the values:
The following cases are listed in the table above.
  • True Negative: when the model’s predicted and the actual value is No.
  • True Positive: when the model’s predicted and the actual value is Yes.
  • False Negative: when the model's predicted value is No, and the actual value is Yes. It is also known as a Type-II error.
  • False Positive: when the model's predicted value is Yes, and the actual value is No. A Type-I error is another name for it.

6.12. Kappa Value

Cohen's Kappa is a common statistic for determining how well two raters agree. It can also be used to evaluate a classification model's performance by comparing the ML model's predictions to manually established grades. Similar to many other evaluation measures, Cohen's Kappa is calculated from the confusion matrix. Unlike overall accuracy, however, Cohen's Kappa accounts for imbalances in the class distribution, and it can be more challenging to interpret [158].
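A minimal scikit-learn sketch deriving the confusion matrix, Cohen's Kappa, and the ROC AUC discussed above for a binary DR screen follows; the labels and scores are synthetic illustrations, not results from any study in this review.

```python
# Minimal sketch: confusion matrix, Cohen's Kappa, and ROC AUC for a
# binary DR classifier (all values below are synthetic).
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score, roc_auc_score

y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])   # actual DR labels
y_pred  = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])   # model decisions
y_score = np.array([.9, .2, .8, .4, .1, .6, .7, .3, .95, .05])  # probabilities

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TN={tn} FP={fp} FN={fn} TP={tp}")
print("Cohen's Kappa:", cohen_kappa_score(y_true, y_pred))
print("ROC AUC:", roc_auc_score(y_true, y_score))
```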

7. EMR and Biomarkers in DR

Biomarkers are biological elements found in blood, other bodily fluids, or tissues that indicate the presence of a normal or aberrant process, or of a condition or disease. They act as indicators and can be used to assess how well the body responds to treatment for a condition. They play an essential role in identifying the physiological state or detecting disease. Measurable indicators such as blood pressure, temperature, and C-reactive protein are examples of biomarkers. Circulating biomarkers may be beneficial in diagnosing initial retinal illness before structural changes are visible with existing imaging technology [159]. Personalized diabetes vision care precisely forecasts the threat of diabetic retinopathy (DR) development and loss of vision in real time [160]. The utilization of electronic medical records (EMR) provides a framework for the incorporation of artificial intelligence (AI) algorithms that anticipate DR development into healthcare decisions [161]. The threat of retinopathy evolution and vision problems can be projected using an algorithm applied to each patient's information, enabling patients to obtain prompt therapy. Hemoglobin A1c (HbA1c) levels are among the most well-known indicators for glycemic control. Hemoglobin A1c has been demonstrated to have a high correlation with the evolution of systemic symptoms of diabetes, particularly DR [162]. The early treatment diabetic retinopathy study (ETDRS) scale and the diabetic retinopathy severity scale (DRSS) are traditional biomarkers for DR. Longitudinal clinical studies show that DRSS-based fundus photographic assessment effectively represents the projected risk of disease development, responsiveness to treatments, and long-term visual results [3]. The DRSS has been applied to DR patients globally, employing a scale ranging from no diabetic retinopathy to severe proliferative diabetic retinopathy (PDR). Other ocular biomarkers include various parameters found in ocular coherence tomography (OCT), retinal blood flow, retinal oxygen saturation, vascular endothelial growth factor (VEGF), neural retina assessments (electroretinograms), and retinal vessel geometry [161,163]. Some novel biomarkers include:
  • Genetics: The investigation of genes associated with the development of advanced DR, vascular endothelial growth factor (VEGF), lipoproteins, and inflammation. Genome-wide association studies and single nucleotide polymorphisms (SNPs) have been linked to an increased risk of sight-threatening retinopathy [164].
  • Epigenetics: The study of how environmental variables interact with genes. DNA methylation, histone modification, and microRNAs are among the biomarkers being investigated [165,166].
  • Proteomics: The study of protein structure and function in cultured cells and tissues. Current studies show that diabetic patients have higher levels of transport proteins (vitamin D binding protein), arginine N-methyltransferase 5, and inflammatory proteins (leucine-rich alpha-2-glycoprotein) [167,168].
  • Metabolomics: The study of chemical traces left by biological activities. Studies using mass spectrometry report increased levels of the metabolites cytidine, cytosine, and thymidine in DR patients. These nucleotide concentrations may be relevant in monitoring DR progression and evaluating therapy [169].

8. RQ5 Challenges and Future Research Directions

This section highlights several scientific issues that previous diabetic detection investigations have not addressed. Much effort is required to improve the effectiveness of various diabetic detection systems. Various research challenges and their workable solutions are listed below.
Challenge 1: The inner workings of DL models are frequently unknown; hence, they are viewed as black boxes. This creates a need for an automatic (parameter) optimization strategy. It is also challenging to determine the best configuration and values for the number of layers and the number of nodes in each layer. Basic domain knowledge is likewise required for selecting parameters such as the number of epochs, the learning rate, and the regularization strength. As a result, automatic optimization methodologies for the various parts of a DL architecture may be introduced for specific DM datasets and additional clinical datasets; a minimal random-search sketch is given below.
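As a minimal sketch of one such automatic strategy, the following Python snippet performs a random search over the parameters named above; the search space and the train_and_evaluate helper are hypothetical placeholders, not part of any cited system:

```python
import random

# Hypothetical search space covering the hyperparameters named above;
# the candidate values are illustrative, not recommendations.
search_space = {
    "num_layers":      [2, 3, 4, 5],
    "nodes_per_layer": [64, 128, 256],
    "epochs":          [20, 50, 100],
    "learning_rate":   [1e-2, 1e-3, 1e-4],
    "weight_decay":    [0.0, 1e-4, 1e-3],   # regularization strength
}

def train_and_evaluate(config):
    # Placeholder: in practice, build and train the DL model with `config`
    # on the DM dataset and return its validation accuracy.
    return random.random()

best_config, best_score = None, float("-inf")
for _ in range(20):                                    # 20 random trials
    config = {k: random.choice(v) for k, v in search_space.items()}
    score = train_and_evaluate(config)
    if score > best_score:
        best_config, best_score = config, score
print(best_config, best_score)
```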
Future Research Direction: Explainable artificial intelligence (XAI).
Recent developments in DL techniques have aroused interest in using AI technology in every field; however, the methods’ opacity has raised concerns regarding their use in security applications. The “explainability” component is critical since it demonstrates how black-box methods operate and provides the accountability and accessibility that regulators, consumers, and network operators care about. Explainable artificial intelligence (XAI) is a collection of technologies and methodologies for transforming so-called black-box AI algorithms into white-box algorithms, wherein the outcomes of the methodologies, and the variables, parameters, and steps the algorithm adopts to reach those outcomes, are transparent and straightforward [170]. There are three dimensions to evaluate when analyzing the comprehensibility of AI models, as stated below.
Explainability is a learning model feature that allows the model’s processes to be explained in detail; the strategy is to make the learning model’s internal workings increasingly transparent. It is worth mentioning that sensitive applications necessitate explainability both for scientific interest and because the risk component takes priority over other factors whenever human lives are threatened. Interpretability, as opposed to explainability, is the property that helps people understand a learning model and make logical sense of it. Transparency is typically associated with understandability: a learning model is said to be transparent if it can be understood without the need for an additional interface. The term “transparent” thus denotes a model that may be comprehended without additional elements.
For the objective of categorizing DR severity using color fundus photography (CFP), G. Quellec [151] describes an explanatory artificial intelligence (XAI) approach that achieves the same level of performance as black-box AI. The algorithm learns to segment and categorize lesions in images, and the final image-level classification is derived directly from these multivariate lesion segmentations. The peculiarity of this explanatory framework is that, similar to black-box AI algorithms, it is trained from beginning to end with only image supervision; the notions of lesions and lesion categories develop independently. Se-In Jang [171] describes a neural-symbolic, learning-based explainable DR classification model (ExplainDR). To accomplish explainability, the authors develop a human-readable symbolic representation that follows a taxonomy of DR characteristics connected to eye health issues. The disease prediction then incorporates the human-readable information gained from the symbolic representation. There are various XAI models, such as LIME, the What-If Tool, DeepLIFT, Skater, SHAP, AIX360, Activation Atlases, Rulex Explainable AI, and GradCAM [172]. The XAI model Shapley additive explanations (SHAP) with guided backpropagation (GBP) and the Inception-V3 framework have been used for ophthalmic diagnosis [173]. A diagrammatic explanation of DR using XAI is shown in Figure 13.
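As one concrete illustration of the saliency-style XAI tools listed above, the following PyTorch sketch computes a Grad-CAM heatmap; the ResNet-18 backbone, the five-grade output head, and the random tensor standing in for a preprocessed fundus image are all assumptions made for demonstration, not the setup of any cited study:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Assumed setup: a ResNet-18 with a 5-grade DR output head.
model = models.resnet18(num_classes=5)
model.eval()

activations, gradients = {}, {}
# Hook the last convolutional block; its spatial maps drive the heatmap.
model.layer4.register_forward_hook(
    lambda m, i, o: activations.update(value=o.detach()))
model.layer4.register_full_backward_hook(
    lambda m, gi, go: gradients.update(value=go[0].detach()))

image = torch.randn(1, 3, 224, 224)            # stand-in for a fundus image
scores = model(image)
scores[0, scores.argmax()].backward()          # gradient of the top DR grade

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)  # pooled gradients
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                    align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # heatmap in [0, 1]
```

The resulting heatmap can be overlaid on the fundus image to show which retinal regions drove the predicted grade.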
Challenge 2: Insufficient and unlabeled data availability.
Future Research Direction: For training, DL algorithms often require a large amount of labeled diabetes data; hence, there is a need to develop sufficiently large labeled datasets for training. When the training set is restricted, it is impossible to achieve sufficient precision. This problem can be approached in two ways. First, low-shot (few-shot) learning algorithms can be used to cope with scarce training data. Second, various augmentation techniques can be applied, including cropping, rotating, flipping, and color casting, as sketched below. More research is needed to generate more precise training data so that the DL model can learn more consistent and distinguishing features.
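As an illustrative sketch of the second approach, the following torchvision pipeline applies exactly these augmentations; the parameter values are arbitrary examples rather than tuned settings:

```python
from torchvision import transforms

# Augmentations mirroring the techniques listed above.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # cropping
    transforms.RandomRotation(degrees=15),                 # rotating
    transforms.RandomHorizontalFlip(p=0.5),                # flipping
    transforms.RandomVerticalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2,
                           saturation=0.2, hue=0.05),      # color casting
    transforms.ToTensor(),
])
# Usage: augmented = augment(pil_fundus_image)
```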
Challenge 3: Practitioners, especially ophthalmologists, prefer making decisions by referring to OCT, fundus, and other modalities. Hence, using multimodality in DR will help in detecting the severity of DR.
Future Research Direction: Currently, fundus images are used to diagnose eye-related disorders such as DR and glaucoma, while OCT is used to detect other eye-related disorders such as diabetic macular edema and age-related macular degeneration [174]. There is considerable scope to develop an architecture extensible enough to accommodate both fundus images and OCT scans for DR identification. Most available investigations used a single modality to build a DR detection model. In the future, however, a multi-modal concept can be used to view DR detection from more than one data perspective, which will boost practitioners’ confidence in detecting DR early. A minimal two-branch sketch is given below.
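The sketch below shows one way such a two-branch, multi-modal architecture could look; the ResNet-18 encoders, embedding sizes, and three-channel OCT input are illustrative assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

class DualModalDRNet(nn.Module):
    """Two encoders (fundus, OCT) whose embeddings are concatenated
    and classified jointly; backbone and sizes are illustrative."""
    def __init__(self, num_grades=5):
        super().__init__()
        self.fundus_encoder = models.resnet18(num_classes=128)
        self.oct_encoder = models.resnet18(num_classes=128)
        self.classifier = nn.Linear(256, num_grades)

    def forward(self, fundus, oct_scan):
        f = self.fundus_encoder(fundus)      # (B, 128)
        o = self.oct_encoder(oct_scan)       # (B, 128)
        return self.classifier(torch.cat([f, o], dim=1))

model = DualModalDRNet()
logits = model(torch.randn(2, 3, 224, 224),  # fundus batch
               torch.randn(2, 3, 224, 224))  # OCT batch (3-channel here)
```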
Challenge 4: Lack of self-supervised or unsupervised approaches.
Future Research Direction: Domain adaptation applies a model learned in one domain to a new target domain. Duy M. H. Nguyen [175] tackles domain adaptation for DR grading by creating a novel self-supervised task, based on retinal vessel image reconstruction and inspired by medical domain knowledge, to learn invariant target-domain features. A benchmark of current state-of-the-art unsupervised domain adaptation approaches on the DR problem is then offered, and it is demonstrated that their method outperforms the other domain adaptation methodologies. In [176], Ruoxian Song proposed domain-adaptation multi-instance learning for DR grading, which formulates weakly supervised DR grading as a multi-instance learning problem. Cross-domain transfer produces labeled examples that filter out irrelevant instances in the target domain. To model the link between suspicious instances and bag labels, multi-instance learning with an attention mechanism can capture the location information of highly suspicious lesions and predict the DR grade; a pooling sketch is given below. The proposed technique was tested on the Messidor dataset, yielding an average accuracy of 0.764 and an AUC value of 0.749. Domain adaptation in DR is shown diagrammatically in Figure 14.
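The attention-based multi-instance pooling described above can be sketched as follows; the feature dimension, patch count, and five-grade head are illustrative assumptions rather than the configuration used in [176]:

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Sketch of attention-based MIL: patch ("instance") features are
    weighted by learned suspicion scores and pooled into one bag
    embedding used to grade the whole image."""
    def __init__(self, feat_dim=256, hidden=128, num_grades=5):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.classifier = nn.Linear(feat_dim, num_grades)

    def forward(self, instances):                  # (num_patches, feat_dim)
        weights = torch.softmax(self.attention(instances), dim=0)
        bag = (weights * instances).sum(dim=0)     # attention-weighted pooling
        return self.classifier(bag), weights       # grade logits + locations

patches = torch.randn(36, 256)   # e.g., features of 36 retina patches
logits, attn = AttentionMILPooling()(patches)
```

The attention weights double as a coarse lesion-localization signal, which is what lets this formulation point at highly suspicious patches while grading the image.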
Challenge 5: Improved data efficiency, less overfitting through common representations, and fast learning using auxiliary information are required in DR.
Future Research Direction: Multi-task learning (MTL) is a subfield of ML in which a shared model learns many tasks simultaneously. Improved data efficiency, less overfitting through common representations, and fast learning using auxiliary information are advantages of such techniques. On the other hand, simultaneously learning many tasks poses modern design and optimization issues, and deciding which tasks should be learned together is a non-trivial problem in and of itself [177]. Although DL for DR screening has demonstrated impressive accuracy in separating referable from non-referable DR, extra fine-grained grading of the DR severity level and automated segmentation of lesions (if any) in retina images are still necessary. To conduct the DR grading and lesion segmentation tasks, A. Foo and W. Hsu [178] used a multi-task learning strategy based on MTUnet. Due to the lack of ground truths for lesion segmentation masks, they proposed a semi-supervised learning technique to acquire segmentation masks for enormous datasets. A. Foo and W. Hsu [178] discovered that the DR severity level of an image could be influenced by the existence and prevalence of several lesions. Experiments on publicly accessible datasets and records, together with data produced via screening programs, indicate the efficacy of the multi-task approach over the state-of-the-art network. A minimal shared-encoder sketch is given below.
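The following sketch shows the shared-encoder pattern behind such multi-task models; the tiny encoder and head sizes are illustrative assumptions and not the MTUnet of [178]:

```python
import torch
import torch.nn as nn

class MultiTaskDRNet(nn.Module):
    """Sketch of MTL: a shared encoder feeds a DR-grading head and a
    lesion-segmentation head; both losses are optimized jointly."""
    def __init__(self, num_grades=5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.grade_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_grades))
        self.seg_head = nn.Conv2d(64, 1, 1)    # per-pixel lesion mask logits

    def forward(self, x):
        feats = self.encoder(x)                # representation shared by tasks
        return self.grade_head(feats), self.seg_head(feats)

x = torch.randn(2, 3, 128, 128)
grade_logits, mask_logits = MultiTaskDRNet()(x)
# total loss, e.g.: ce(grade_logits, grades) + lam * bce(mask_logits, masks)
```

In practice, the two losses (cross-entropy for grading, pixel-wise binary cross-entropy for lesion masks) are combined with a weighting factor chosen on validation data.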
Challenge 6: There is currently no unified standard for assessing and validating AI algorithms. The testing sets used in numerous studies vary substantially. A few studies did not use independent external testing sets but instead used internal validation sets to test the algorithm’s sensitivity, specificity, and AUC. The sensitivity, specificity, AUC, and other indicators reported by various studies are therefore simply not comparable. As a result, standard testing is necessary to analyze each algorithm.
Future Research Direction: The need for a standard testing technique to analyze algorithms.
Challenge 7: In the most recent findings, one AI system can only identify one disease, implying that a patient can only be evaluated for a single issue during a fundus examination. However, almost all retinal vascular diseases can be examined through the fundus image modality, unless the media are hazy and the image is therefore unclear [35]. The eye examination process would be greatly simplified if an AI system could diagnose multiple diseases. Studies have reported detecting other eye diseases, such as age-related macular degeneration, at the same time as DR screening.
Future Research Direction: The need for a simplified AI system to diagnose multiple diseases.

9. Discussion

A total of 178 studies were reviewed for this survey, all of which concern DR screening systems using artificial intelligence techniques. There is a substantial need for automated, reliable systems for DR screening due to the substantial increase in DR patients worldwide. This survey considered publications from January 2014 to June 2022. It discusses various AI tools for DR, followed by ML and DL techniques in DR. Studies that created their own CNN frameworks differ slightly from those that reuse existing structures, such as VGG, ResNet, or AlexNet, via transfer learning. Creating a CNN from scratch takes time and resources, whereas employing transfer learning is easier and faster; however, the overall performance of custom CNN architectures is greater than that of systems built on existing structures. Researchers should take this point into consideration, and further research should be conducted to differentiate between the two tendencies. Implicit and explicit feature extraction techniques are discussed, which help develop models with reduced feature vector sizes and less machine effort, leading to better performance and speed. Publicly available datasets are discussed in detail with their different properties. Performance measures serve as metrics to evaluate the quality and accuracy of the research and play a vital role in determining whether the desired outcome is achieved. The availability of a robust DR detection technique capable of detecting all sorts of lesions and DR stages leads to a better follow-up strategy for DR patients, thereby avoiding loss of vision. The lack of technologies that can predict the five DR levels and detect DR lesions is a gap that still needs to be filled and could be viewed as a contemporary research question for researchers to pursue.

10. Conclusions

Automated methods for DR identification play a significant role in the early diagnosis of DR. A detailed review has been carried out, including 178 research studies found in Scopus, WOS, ophthalmology journals, JAMA, PubMed, etc., to find the primary studies using the PRISMA approach. This review critically focuses on publicly available datasets, classification techniques used in ML and DL, and various traditional and currently used feature extraction methods, followed by the performance metrics used in DR. Traditional and novel biomarkers used in DR are highlighted. This study discovered and reported on several publicly available datasets with distinctive properties. In ML-based techniques, statistical features give the best performance, followed by shape and structure features. ANNs give better classification performance than SVMs, and among ML techniques, ensemble classifiers perform best. In DL, CNNs were primarily applied to automatically extract features from and categorize the DR images. Accuracy, sensitivity, specificity, and area under the curve are the most widely used performance metrics in DR. This review also described seven research challenges in the DR detection field. This comprehensive review provides a profound overview of DR detection approaches and helpful insights to researchers working in this field. The scope of the evaluation can be expanded in the future to overcome limitations. Concepts such as transfer learning, ensemble learning, explainable AI, multi-task learning, and domain adaptation can be widely used in the future to detect DR at its early stages. Intelligent health monitoring technologies decrease the time to diagnosis, sparing ophthalmologists’ time and cost and allowing patients to communicate more quickly. The authors strongly believe that scientists and medical practitioners working in DR detection will benefit from this review, and that its readers will gain the desired knowledge and future research directions to extend their own work.

Author Contributions

Conceptualization, P.B. and S.G.; methodology, P.B., S.G. and K.K.; writing—original draft preparation, P.B.; writing—review and editing, P.B., S.G., K.P. and K.K.; visualization, S.G.; supervision, S.G., K.P. and K.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Khatri, M. Diabetes Complications. Available online: https://www.webmd.com/diabetes/diabetes-complications (accessed on 18 May 2022).
  2. Chakrabarti, R.; Harper, C.A.; Keeffe, J.E. Diabetic Retinopathy Management Guidelines. Expert Rev. Ophthalmol. 2012, 7, 417–439. [Google Scholar] [CrossRef]
  3. Early Treatment Diabetic Retinopathy Study Research Group. Grading diabetic retinopathy from stereoscopic color fundus photographs- an extension of the modified Airlie House classification. Ophthalmology 2020, 127, S99–S119. [Google Scholar] [CrossRef] [PubMed]
  4. Scanlon, P.H.; Wilkinson, C.P.; Aldington, S.J.; Matthews, D.R. A Practical Manual of Diabetic Retinopathy Management, 1st ed.; Wiley-Blackwell: Hoboken, NJ, USA, 2009; pp. 1–214. [Google Scholar] [CrossRef]
  5. Ravelo, J.L. Aging and Population Growth, Challenges for Vision Care: WHO Report. 2019. Available online: https://www.devex.com/news/aging-and-population-growth-challenges-for-vision-care-who-report-95763 (accessed on 3 January 2022).
  6. WHO. World Report on Vision, 2019. Available online: https://www.who.int/publications/i/item/9789241516570 (accessed on 3 January 2022).
  7. Kumar, R.; Pal, R. India achieves WHO recommended doctor population ratio: A call for a paradigm shift in public health discourse! J. Fam. Med. Prim. Care 2018, 7, 841–844. [Google Scholar] [CrossRef] [PubMed]
  8. WHO. Global Data on Visual Impairment. 2010. Available online: http://www.who.int/blindness/GLOBALDATAFINALforweb.pdf (accessed on 5 May 2022).
  9. Centers for Disease Control and Prevention. Common Eye Disorders and Diseases. 2020. Available online: https://www.cdc.gov/visionhealth/basics/ced/index.html (accessed on 10 May 2022).
  10. Malik, U. Most Common Eye Problems—Signs, Symptoms and Treatment Options. 2021. Available online: https://irisvision.com/most-common-eye-problems-signs-symptoms-and-treatment/ (accessed on 3 April 2022).
  11. Stoitsis, J.; Valavanis, I.; Mougiakakou, S.G.; Golemati, S.; Nikita, A.; Nikita, K.S. Computer aided diagnosis based on medical image processing and artificial intelligence methods. Nucl. Instrum. Methods Phys. Res. Sect. A 2006, 569, 591–595. [Google Scholar] [CrossRef]
  12. Mushtaq, G.; Siddiqui, F. Detection of diabetic retinopathy using deep learning methodology. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1070, 012049. [Google Scholar] [CrossRef]
  13. Taylor, R.; Batey, D. Handbook of Retinal Screening in Diabetes: Diagnosis and Management, 2nd ed.; Wiley-Blackwell: Chichester, UK, 2012; pp. 89–103. [Google Scholar]
  14. Gupta, A.; Chhikara, R. Diabetic Retinopathy: Present and Past. Procedia Comput. Sci. 2018, 132, 1432–1440. [Google Scholar] [CrossRef]
  15. Ishtiaq, U.; Kareem, S.A.; Abdullah, E.R.M.F.; Mujtaba, G.; Jahangir, R.; Ghafoor, H.Y. Diabetic retinopathy detection through artificial intelligent techniques: A review and open issues. Multimedia Tools Appl. 2019, 79, 15209–15252. [Google Scholar] [CrossRef]
  16. Lin, J.; Yu, L.; Weng, Q.; Zheng, X. Retinal image quality assessment for diabetic retinopathy screening: A survey. Multimedia Tools Appl. 2020, 79, 16173–16199. [Google Scholar] [CrossRef]
  17. Qureshi, I. Glaucoma Detection in Retinal Images Using Image Processing Techniques: A Survey. Int. J. Adv. Netw. Appl. 2015, 7, 2705–2718. Available online: http://www.ijana.in/papers/V7I2-10.pdf (accessed on 5 April 2022).
  18. Wang, Z.; Yin, Y.; Shi, J.; Fang, W.; Li, H.; Wang, X. Zoom-in-Net: Deep Mining Lesions for Diabetic Retinopathy Detection. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Quebec City, QC, Canada, 11–13 September 2017; pp. 267–275. [Google Scholar]
  19. Scotland, G.S.; McNamee, P.; Fleming, A.D.; Goatman, K.A.; Philip, S.; Prescott, G.J.; Sharp, P.F.; Williams, G.J.; Wykes, W.; Leese, G.P.; et al. Costs and consequences of automated algorithms versus manual grading for the detection of referable diabetic retinopathy. Br. J. Ophthalmol. 2010, 94, 712–719. [Google Scholar] [CrossRef]
  20. Difference between Normal Vision and DR Vision. Available online: https://www.researchgate.net/publication/350930649_DRISTI_a_hybrid_deep_neural_network_for_diabetic_retinopathy_diagnosis/figures?lo=1 (accessed on 20 May 2022).
  21. Li, T.; Gao, Y.; Wang, K.; Guo, S.; Liu, H.; Kang, H. Diagnostic assessment of deep learning algorithms for diabetic retinopathy screening. Inf. Sci. 2019, 501, 511–522. [Google Scholar] [CrossRef]
  22. Pratt, H.; Coenen, F.; Broadbent, D.M.; Harding, S.P.; Zheng, Y. Convolutional neural networks for diabetic retinopathy. Procedia Comput. Sci. 2016, 90, 200–205. [Google Scholar] [CrossRef] [Green Version]
  23. Alyoubi, W.L.; Shalash, W.M.; Abulkhair, M.F. Diabetic retinopathy detection through deep learning techniques: A review. Inform. Med. Unlocked 2020, 20, 100377. [Google Scholar] [CrossRef]
  24. Arrigo, A.; Teussink, M.; Aragona, E.; Bandello, F.; Parodi, M.B. MultiColor imaging to detect different subtypes of retinal microaneurysms in diabetic retinopathy. Eye 2020, 35, 277–281. [Google Scholar] [CrossRef] [PubMed]
  25. Yasin, S.; Iqbal, N.; Ali, T.; Draz, U.; Alqahtani, A.; Irfan, M.; Rehman, A.; Glowacz, A.; Alqhtani, S.; Proniewska, K.; et al. Severity Grading and Early Retinopathy Lesion Detection through Hybrid Inception-ResNet Architecture. Sensors 2021, 21, 6933. [Google Scholar] [CrossRef] [PubMed]
  26. Guo, S.; Li, T.; Kang, H.; Li, N.; Zhang, Y.; Wang, K. An end-to-end unified framework for multi-lesion segmentation offundus images. Neurocomput 2019, 349, 52–63. [Google Scholar] [CrossRef]
  27. Li, B.; Li, H.K. Automated Analysis of Diabetic Retinopathy Images: Principles, Recent Developments, and Emerging Trends. Curr. Diabetes Rep. 2013, 13, 453–459. [Google Scholar] [CrossRef] [PubMed]
  28. Mishra, A.; Singh, L.; Pandey, M. Short Survey on machine learning techniques used for diabetic retinopathy detection. In Proceedings of the IEEE 2021 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), Greater Noida, India, 19–20 February 2021; pp. 601–606. [Google Scholar] [CrossRef]
  29. Oh, K.; Kang, H.M.; Leem, D.; Lee, H.; Seo, K.Y.; Yoon, S. Early detection of diabetic retinopathy based on deep learning and ultra-wide-field fundus images. Sci. Rep. 2021, 11, 1–9. [Google Scholar] [CrossRef]
  30. Kermany, D.S.; Goldbaum, M.; Cai, W.; Valentim, C.C.S.; Liang, H.; Baxter, S.L.; McKeown, A.; Yang, G.; Wu, X.; Yan, F.; et al. Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning. Cell 2018, 172, 1122–1131.e9. [Google Scholar] [CrossRef]
  31. Mansour, R.F. Deep-learning-based automatic computer-aided diagnosis system for diabetic retinopathy. Biomed. Eng. Lett. 2017, 8, 41–57. [Google Scholar] [CrossRef]
  32. Pepose, J.S. A prospective randomized clinical evaluation of 3 presbyopia-correcting intraocular lenses after cataract extraction. Am. J. Ophthalmol. 2014, 3–9. [Google Scholar] [CrossRef] [PubMed]
  33. Boudry, C.; Denion, E.; Mortemousque, B.; Mouriaux, F. Trends and topics in eye disease research in PubMed from 2010 to 2014. PeerJ 2016, 4, e1557. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Gardner, G.G.; Keating, D.; Williamson, T.H.; Elliott, A.T. Automatic detection of diabetic retinopathy using an artificial neural network: A screening tool. Br. J. Ophthalmol. 1996, 80, 940–944. [Google Scholar] [CrossRef] [Green Version]
  35. Shi, C.; Lee, J.; Wang, G.; Dou, X.; Yuan, F.; Zee, B. Assessment of image quality on color fundus retinal images using the automatic retinal image analysis. Sci. Rep. 2022, 12, 1–11. [Google Scholar] [CrossRef] [PubMed]
  36. Franklin, S.W.; Rajan, S.E. An automated retinal imaging method for the early diagnosis of diabetic retinopathy. Technol. Health Care 2013, 21, 557–569. [Google Scholar] [CrossRef]
  37. Li, S.; Zhao, R.; Zou, H. Artificial intelligence for diabetic retinopathy. Chin. Med. J. 2021, 135, 253–260. [Google Scholar] [CrossRef]
  38. Pragathi, P.; Rao, A.N. An effective integrated machine learning approach for detecting diabetic retinopathy. Open Comput. Sci. 2022, 12, 83–91. [Google Scholar] [CrossRef]
  39. Zhang, W.; Zhong, J.; Yang, S.; Gao, Z.; Hu, J.; Chen, Y.; Yi, Z. Automated identification and grading system of diabetic retinopathy using deep neural networks. Knowl. Based Syst. 2019, 175, 12–25. [Google Scholar] [CrossRef]
  40. Asiri, N.; Hussain, M.; Al Adel, F.; Alzaidi, N. Deep learning based computer-aided diagnosis systems for diabetic retinopathy: A survey. Artif. Intell. Med. 2019, 99, 101701. [Google Scholar] [CrossRef]
  41. Kamal, M.M.; Shanto, M.H.I.; Mirza Mahmud Hossan, M.; Hasnat, A.; Sultana, S.; Biswas, M. A Comprehensive Review on the Diabetic Retinopathy, Glaucoma and Strabismus Detection Techniques Based on Machine Learning and Deep Learning. Eur. J. Med. Health Sci. 2022, 24–40. [Google Scholar] [CrossRef]
  42. Aoun, S.M.; Breen, L.J.; Oliver, D.; Henderson, R.D.; Edis, R.; O’Connor, M.; Howting, D.; Harris, R.; Birks, C. Family carers’ experiences of receiving the news of a diagnosis of Motor Neurone Disease: A national survey. J. Neurol. Sci. 2017, 372, 144–151. [Google Scholar] [CrossRef] [PubMed]
  43. Khade, S.; Ahirrao, S.; Phansalkar, S.; Kotecha, K.; Gite, S.; Thepade, S.D. Iris Liveness Detection for Biometric Authentication: A Systematic Literature Review and Future Directions. Inventions 2021, 6, 65. [Google Scholar] [CrossRef]
  44. Grzybowski, A.; Brona, P.; Lim, G.; Ruamviboonsuk, P.; Tan, G.S.W.; Abramoff, M.; Ting, D.S.W. Artificial intelligence for diabetic retinopathy screening: A review. Eye 2020, 34, 451–460. [Google Scholar] [CrossRef] [PubMed]
  45. Ribeiro, L.; Oliveira, C.M.; Neves, C.; Ramos, J.D.; Ferreira, H.; Cunha-Vaz, J. Screening for Diabetic Retinopathy in the Central Region of Portugal. Added Value of Automated ‘Disease/No Disease’ Grading. Ophthalmologica 2015, 233, 96–103. [Google Scholar] [CrossRef] [PubMed]
  46. Ipp, E.; Liljenquist, D.; Bode, B.; Shah, V.N.; Silverstein, S.; Regillo, C.D.; Lim, J.I.; Sadda, S.; Domalpally, A.; Gray, G.; et al. Pivotal Evaluation of an Artificial Intelligence System for Autonomous Detection of Referrable and Vision-Threatening Diabetic Retinopathy. JAMA Netw. Open 2021, 4, e2134254. [Google Scholar] [CrossRef]
  47. Larsen, N.; Godt, J.; Grunkin, M.; Lund-Andersen, H.; Larsen, M. Automated Detection of Diabetic Retinopathy in a Fundus Photographic Screening Population. Investig. Opthalmology Vis. Sci. 2003, 44, 767–771. [Google Scholar] [CrossRef] [Green Version]
  48. Ting, D.S.W.; Cheung, C.Y.-L.; Lim, G.; Tan, G.S.W.; Quang, N.D.; Gan, A.; Hamzah, H.; Garcia-Franco, R.; Yeo, I.Y.S.; Lee, S.Y.; et al. Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images from Multiethnic Populations With Diabetes. JAMA 2017, 318, 2211–2223. [Google Scholar] [CrossRef]
  49. Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Venugopalan, S.; Widner, K.; Madams, T.; Cuadros, J.; et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA 2016, 316, 2402–2410. [Google Scholar] [CrossRef]
  50. Abràmoff, M.D.; Lavin, P.T.; Birch, M.; Shah, N.; Folk, J.C. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit. Med. 2018, 39, 1–8. [Google Scholar] [CrossRef]
  51. Dong, X.; Du, S.; Zheng, W.; Cai, C.; Liu, H.; Zou, J. Evaluation of an Artificial Intelligence System for the Detection of Diabetic Retinopathy in Chinese Community Healthcare Centers. Front. Med. 2022, 9, 840024. [Google Scholar] [CrossRef]
  52. Wu, J.; Xin, J.; Hong, L.; You, J.; Zheng, N. New hierarchical approach for microaneurysms detection with matched filter and machine learning. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; IEEE: Milan, Italy, 2015; pp. 4322–4325. [Google Scholar] [CrossRef] [Green Version]
  53. Biyani, R.S.; Patre, B.M. A clustering approach for exudates detection in screening of diabetic retinopathy. In Proceedings of the 2016 International Conference on Signal and Information Processing (IConSIP), Nanded, India, 6–8 October 2016; pp. 1–5. [Google Scholar] [CrossRef]
  54. Naqvi, S.A.G.; Zafar, M.F.; Haq, I.U. Referral system for hard exudates in eye fundus. Comput. Biol. Med. 2015, 64, 217–235. [Google Scholar] [CrossRef] [PubMed]
  55. Rahimy, E. Deep learning applications in ophthalmology. Curr. Opin. Ophthalmol. 2018, 29, 254–260. [Google Scholar] [CrossRef] [PubMed]
  56. Sisodia, D.S.; Nair, S.; Khobragade, P. Diabetic Retinal Fundus Images: Preprocessing and Feature Extraction for Early Detection of Diabetic Retinopathy. Biomed. Pharmacol. J. 2017, 10, 615–626. [Google Scholar] [CrossRef]
  57. Xiao, Z.; Zhang, X.; Geng, L.; Zhang, F.; Wu, J.; Tong, J.; Ogunbona, P.O.; Shan, C. Automatic non-proliferative diabetic retinopathy screening system based on color fundus image. Biomed. Eng. Online 2017, 16, 122. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  58. Srivastava, R.; Duan, L.; Wong, D.W.; Liu, J.; Wong, T.Y. Detecting retinal microaneurysms and hemorrhages with robustness to the presence of blood vessels. Comput. Methods Programs Biomed. 2017, 138, 83–91. [Google Scholar] [CrossRef] [PubMed]
  59. Vanithamani, R.; Renee Christina, R. Exudates in detection and classification of diabetic retinopathy. In International Conference on Soft Computing and Pattern Recognition; Springer: Cham, Germany, 2016; pp. 252–261.
  60. Wang, S.; Tang, H.L.; Al Turk, L.I.; Hu, Y.; Sanei, S.; Saleh, G.M.; Peto, T. Localizing Microaneurysms in Fundus Images Through Singular Spectrum Analysis. IEEE Trans. Biomed. Eng. 2016, 64, 990–1002. [Google Scholar] [CrossRef] [PubMed]
  61. Nijalingappa, P.; Sandeep, B. Machine learning approach for the identification of diabetes retinopathy and its stages. In Proceedings of the 2015 International Conference on Applied and Theoretical Computing and Communication Technology (iCATccT), Davangere, India, 29–31 October 2016; pp. 653–658. [Google Scholar] [CrossRef]
  62. Xiao, D.; Yu, S.; Vignarajan, J.; An, D.; Tay-Kearney, M.-L.; Kanagasingam, Y. Retinal hemorrhage detection by rule-based and machine learning approach. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju Island, Republic of Korea, 11–15 July 2017; pp. 660–663. [Google Scholar] [CrossRef]
  63. Almotiri, J.; Elleithy, K.; Elleithy, A. Retinal Vessels Segmentation Techniques and Algorithms: A Survey. Appl. Sci. 2018, 8, 155. [Google Scholar] [CrossRef] [Green Version]
  64. Bui, T.; Maneerat, N.; Watchareeruetai, U. Detection of cotton wool for diabetic retinopathy analysis using neural network. In Proceedings of the IEEE 10th International Workshop on Computational Intelligence and Applications, Hiroshima, Japan, 11–12 November 2017; pp. 203–206. [Google Scholar] [CrossRef]
  65. Franklin, S.W.; Rajan, S.E. Computerized screening of diabetic retinopathy employing blood vessel segmentation in retinal images. Biocybern. Biomed. Eng. 2014, 34, 117–124. [Google Scholar] [CrossRef]
  66. Hanđsková, V.; Pavlovičova, J.; Oravec, M.; Blaško, R. Diabetic Rethinopathy Screening by Bright Lesions Extraction from Fundus Images. J. Electr. Eng. 2013, 64, 311–316. [Google Scholar] [CrossRef] [Green Version]
  67. Kavitha, M.; Palani, S. Hierarchical classifier for soft and hard exudates detection of retinal fundus images. J. Intell. Fuzzy Syst. 2014, 27, 2511–2528. [Google Scholar] [CrossRef]
  68. Paing, M.P.; Choomchuay, S.; Yodprom, M.D.R. Detection of lesions and classification of diabetic retinopathy using fundus images. In Proceedings of the 2016 9th Biomedical engineering international conference (BMEiCON), Laung Prabang, Laos, 7–9 December 2016; pp. 1–5. [Google Scholar] [CrossRef]
  69. Zhou, W.; Wu, C.; Chen, D.; Yi, Y.; Du, W. Automatic Microaneurysm Detection Using the Sparse Principal Component Analysis-Based Unsupervised Classification Method. IEEE Access 2017, 5, 2563–2572. [Google Scholar] [CrossRef]
  70. Kusakunniran, W.; Wu, Q.; Ritthipravat, P.; Zhang, J. Hard exudates segmentation based on learned initial seeds and iterative graph cut. Comput. Methods Programs Biomed. 2018, 158, 173–183. [Google Scholar] [CrossRef] [PubMed]
  71. Khade, S.; Gite, S.; Thepade, S.D.; Pradhan, B.; Alamri, A. Detection of Iris Presentation Attacks Using Feature Fusion of Thepade’s Sorted Block Truncation Coding with Gray-Level Co-Occurrence Matrix Features. Sensors 2021, 21, 7408. [Google Scholar] [CrossRef] [PubMed]
  72. Wen, L.; Hughes, M. Coastal Wetland Mapping Using Ensemble Learning Algorithms: A Comparative Study of Bagging, Boosting and Stacking Techniques. Remote Sens. 2020, 12, 1683. [Google Scholar] [CrossRef]
  73. Brownlee, J. Stacking Ensemble Machine Learning with Python. In Machine Learning Mastery; Machine Learning Mastery: San Francisco, CA, USA, 2020; Available online: https://machinelearningmastery.com/stacking-ensemble-machine-learning-with-python/ (accessed on 21 May 2022).
  74. Mane, V.M.; Jadhav, D.V.; Shirbahadurkar, S.D. Hybrid classifier and region-dependent integrated features for detection of diabetic retinopathy. J. Intell. Fuzzy Syst. 2017, 32, 2837–2844. [Google Scholar] [CrossRef]
  75. Fraz, M.M.; Jahangir, W.; Zahid, S.; Hamayun, M.M.; Barman, S.A. Multiscale segmentation of exudates in retinal images using contextual cues and ensemble classification. Biomed. Signal Process. Control. 2017, 35, 50–62. [Google Scholar] [CrossRef] [Green Version]
  76. Prentašić, P.; Lončarić, S. Detection of exudates in fundus photographs using deep neural networks and anatomical landmark detection fusion. Comput. Methods Programs Biomed. 2016, 137, 281–292. [Google Scholar] [CrossRef]
  77. Bala, M.P.; Vijayachitra, S. Early detection and classification of microaneurysms in retinal fundus images using sequential learning methods. Int. J. Biomed. Eng. Technol. 2014, 15, 128. [Google Scholar] [CrossRef]
  78. Sopharak, A.; Uyyanonvara, B.; Barman, S.; Williamson, T. Comparative Analysis of Automatic Exudate Detection between Machine Learning and Traditional Approaches. IEICE Trans. Inf. Syst. 2009, 92, 2264–2271. [Google Scholar] [CrossRef] [Green Version]
  79. Srinivasan, R.; Surya, J.; Ruamviboonsuk, P.; Chotcomwongse, P.; Raman, R. Influence of Different Types of Retinal Cameras on the Performance of Deep Learning Algorithms in Diabetic Retinopathy Screening. Life 2022, 12, 1610. [Google Scholar] [CrossRef]
  80. Valarmathi, S.; Vijayabhanu, R. A Survey on Diabetic Retinopathy Disease Detection and Classification using Deep Learning Techniques. In Proceedings of the 2021 IEEE 7th International Conference on Bio Signals, Images and Instrumentation, ICBSII, Chennai, India, 25–27 March 2021; pp. 1–4. [Google Scholar] [CrossRef]
  81. Wang, X.; Lu, Y.; Wang, Y.; Chen, W.-B. Diabetic Retinopathy Stage Classification Using Convolutional Neural Networks. In Proceedings of the 2018 IEEE 19th International Conference on Information Reuse and Integration for Data Science, IRI, Salt Lake City, UT, USA, 7–9 July 2018; pp. 465–471. [Google Scholar] [CrossRef]
  82. Chudzik, P.; Majumdar, S.; Calivá, F.; Al-Diri, B.; Hunter, A. Microaneurysm detection using fully convolutional neural networks. Comput. Methods Programs Biomed. 2018, 158, 185–192. [Google Scholar] [CrossRef]
  83. Yan, Y.; Gong, J.; Liu, Y. A Novel Deep Learning Method for Red Lesions Detection Using Hybrid Feature. In Proceedings of the 31st Chinese Control and Decision Conference, CCDC, Nanchang, China, 3–5 June 2019; pp. 2287–2292. [Google Scholar] [CrossRef]
  84. Guo, Y.; Liu, Y.; Oerlemans, A.; Lao, S.; Wu, S.; Lew, M.S. Deep learning for visual understanding: A review. Neurocomputing 2016, 187, 27–48. [Google Scholar] [CrossRef]
  85. Abbas, Q.; Ibrahim, M.E.A.; Jaffar, M.A. Video scene analysis: An overview and challenges on deep learning algorithms. Multimed. Tools Appl. 2017, 77, 20415–20453. [Google Scholar] [CrossRef]
  86. Gurcan, O.F.; Beyca, O.F.; Dogan, O. A Comprehensive Study of Machine Learning Methods on Diabetic Retinopathy Classification. Int. J. Comput. Intell. Syst. 2021, 14, 1132–1141. [Google Scholar] [CrossRef]
  87. Khade, S.; Gite, S.; Pradhan, B. Iris Liveness Detection Using Multiple Deep Convolution Networks. Big Data Cogn. Comput. 2022, 6, 67. [Google Scholar] [CrossRef]
  88. Ketkar, N.; Moolayil, J. Deep Learning with Python; Manning Publications: New York, NY, USA, 2021. [Google Scholar] [CrossRef]
  89. Olivas, E.S.; Guerrero, J.D.M.; Martinez-Sober, M.; Magdalena-Benedito, J.R.; Serrano López, A.J. Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques; IGI Global: Hershey, PA, USA, 2010. [Google Scholar]
  90. Masood, S.; Luthra, T.; Sundriyal, H.; Ahmed, M. Identification of diabetic retinopathy in eye images using transfer learning. In Proceedings of the International Conference on Computing, Communication and Automation (ICCCA), Greater Noida, India, 5–6 May 2017; pp. 1183–1187. [Google Scholar] [CrossRef]
  91. Xu, X.; Lin, J.; Tao, Y.; Wang, X. An Improved DenseNet Method Based on Transfer Learning for Fundus Medical Images. In Proceedings of the 2018 7th international conference on digital home (ICDH), Guilin, China, 30 November–1 December 2018; pp. 137–140. [Google Scholar] [CrossRef]
  92. Lian, C.; Liang, Y.; Kang, R.; Xiang, Y. Deep Convolutional Neural Networks for Diabetic Retinopathy Classification. ACM Int. Conf. Proceeding Ser. 2018, 72, 68–72. [Google Scholar] [CrossRef]
  93. Blakely, M. ‘The Importance of Sight and Vision,’ Marvel Optics. 2015. Available online: https://www.marveloptics.com/blog/the-importance-of-sight-and-vision-molly-blakely/ (accessed on 7 April 2022).
  94. Quellec, G.; Charrière, K.; Boudi, Y.; Cochener, B.; Lamard, M. Deep image mining for diabetic retinopathy screening. Med. Image Anal. 2017, 39, 178–193. [Google Scholar] [CrossRef] [PubMed]
  95. Oliveira, A.; Pereira, S.; Silva, C. Retinal vessel segmentation based on Fully Convolutional Neural Networks. Expert Syst. Appl. 2018, 112, 229–242. [Google Scholar] [CrossRef] [Green Version]
  96. Orlando, J.I.; Prokofyeva, E.; del Fresno, M.; Blaschko, M.B. An ensemble deep learning based approach for red lesion detection in fundus images. Comput. Methods Programs Biomed. 2018, 153, 115–127. [Google Scholar] [CrossRef] [Green Version]
  97. Mahendran, G.; Dhanasekaran, R. Investigation of the severity level of diabetic retinopathy using supervised classifier algorithms. Comput. Electr. Eng. 2015, 45, 312–323. [Google Scholar] [CrossRef]
  98. Wu, D.; Zhang, M.; Liu, J.-C.; Bauman, W. On the Adaptive Detection of Blood Vessels in Retinal Images. IEEE Trans. Biomed. Eng. 2006, 53, 341–343. [Google Scholar] [CrossRef]
  99. Sánchez, C.I.; García, M.; Mayo, A.; Lopez, M.I.; Hornero, R. Retinal image analysis based on mixture models to detect hard exudates. Med. Image Anal. 2009, 13, 650–658. [Google Scholar] [CrossRef] [PubMed]
  100. García, M.; Sánchez, C.I.; López, M.I.; Abásolo, D.; Hornero, R. Neural network based detection of hard exudates in retinal images. Comput. Methods Programs Biomed. 2009, 93, 9–19. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  101. Sánchez, C.I.; Hornero, R.; López, M.I.; Aboy, M.; Poza, J.; Abásolo, D. A novel automatic image processing algorithm for detection of hard exudates based on retinal image analysis. Med. Eng. Phys. 2008, 30, 350–357. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  102. Quellec, G.; Lamard, M.; Abramoff, M.; Decencière, E.; Lay, B.; Erginay, A.; Cochener, B.; Cazuguel, G. A multiple-instance learning framework for diabetic retinopathy screening. Med. Image Anal. 2012, 16, 1228–1240. [Google Scholar] [CrossRef]
  103. Köse, C.; Şevik, U.; Ikibaş, C.; Erdöl, H. Simple methods for segmentation and measurement of diabetic retinopathy lesions in retinal fundus images. Comput. Methods Programs Biomed. 2012, 107, 274–293. [Google Scholar] [CrossRef]
  104. Giancardo, L.; Meriaudeau, F.; Karnowski, T.P.; Li, Y.; Garg, S.; Tobin, K.W.; Chaum, E. Exudate-based diabetic macular edema detection in fundus images using publicly available datasets. Med. Image Anal. 2012, 16, 216–226. [Google Scholar] [CrossRef]
  105. Zhang, B.; Karray, F.; Li, Q.; Zhang, L. Sparse Representation Classifier for microaneurysm detection and retinal blood vessel extraction. Inf. Sci. 2012, 200, 78–90. [Google Scholar] [CrossRef]
  106. Qureshi, R.J.; Kovacs, L.; Harangi, B.; Nagy, B.; Peto, T.; Hajdu, A. Combining algorithms for automatic detection of optic disc and macula in fundus images. Comput. Vis. Image Underst. 2012, 116, 138–145. [Google Scholar] [CrossRef]
  107. Noronha, K.; Acharya, U.R.; Nayak, K.P.; Kamath, S.; Bhandary, S.V. Decision support system for diabetic retinopathy using discrete wavelet transform. Proc. Inst. Mech. Eng. Part H J. Eng. Med. 2012, 227, 251–261. [Google Scholar] [CrossRef]
  108. Gharaibeh, N.; Al-Hazaimeh, O.M.; Abu-Ein, A.; Nahar, K.M. A Hybrid SVM NAÏVE-BAYES Classifier for Bright Lesions Recognition in Eye Fundus Images. Int. J. Electr. Eng. Inform. 2021, 13, 530–545. [Google Scholar] [CrossRef]
  109. Al Hazaimeh, O.M.; Nahar, K.M.; Al Naami, B.; Gharaibeh, N. An effective image processing method for detection of diabetic retinopathy diseases from retinal fundus images. Int. J. Signal Imaging Syst. Eng. 2018, 11, 206. [Google Scholar] [CrossRef]
  110. Akram, M.U.; Khalid, S.; Khan, S.A. Identification and classification of microaneurysms for early detection of diabetic retinopathy. Pattern Recognit. 2013, 46, 107–116. [Google Scholar] [CrossRef]
  111. Harini, R.; Sheela, N. Feature extraction and classification of retinal images for automated detection of Diabetic Retinopathy. In Proceedings of the 2016 Second International Conference on Cognitive Computing and Information Processing (CCIP), Mysuru, India, 12–13 August 2016; Nagabhusan, T.N., Sundararajan, N., Suresh, S., Eds.; Sri Jayachamarajendra College of Engineering, JSS TI Campus: Mysuru, India, 2016; p. 570006. [Google Scholar]
  112. Umapathy, A.; Sreenivasan, A.; Nairy, D.S.; Natarajan, S.; Rao, B.N. Image Processing, Textural Feature Extraction and Transfer Learning based detection of Diabetic Retinopathy. In Proceedings of the 2019 9th International Conference on Bioscience, Biochemistry and Bioinformatics, Singapore, 7–9 January 2019; pp. 17–21. [Google Scholar] [CrossRef]
  113. Zago, G.T.; Andreão, R.V.; Dorizzi, B.; Salles, E.O.T. Diabetic retinopathy detection using red lesion localization and convolutional neural networks. Comput. Biol. Med. 2019, 116, 103537. [Google Scholar] [CrossRef]
  114. Jiang, H.; Yang, K.; Gao, M.; Zhang, D.; Ma, H.; Qian, W. An Interpretable Ensemble Deep Learning Model for Diabetic Retinopathy Disease Classification. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 2045–2048. [Google Scholar] [CrossRef]
  115. Abràmoff, M.D.; Garvin, M.K.; Sonka, M. Retinal Imaging and Image Analysis. IEEE Rev. Biomed. Eng. 2010, 3, 169–208. [Google Scholar] [CrossRef]
  116. Gargeya, R.; Leng, T. Automated Identification of Diabetic Retinopathy Using Deep Learning. Ophthalmology 2017, 124, 962–969. [Google Scholar] [CrossRef]
  117. Doshi, D.; Shenoy, A.; Sidhpura, D.; Gharpure, P. Diabetic retinopathy detection using deep convolutional neural networks. In Proceedings of the 2016 International Conference on Computing, Analytics and Security Trends (CAST), Pune, India, 11 July 2016; pp. 261–266. [Google Scholar]
  118. Ghosh, R.; Ghosh, K.; Maitra, S. Automatic detection and classification of diabetic retinopathy stages using CNN. In Proceedings of the 2017 4th International Conference on Signal Processing and Integrated Networks (SPIN), Delhi, India, 26–27 August 2017; pp. 550–554. [Google Scholar] [CrossRef]
  119. Gondal, W.M.; Kohler, J.M.; Grzeszick, R.; Fink, G.A.; Hirsch, M. Weakly-supervised localization of diabetic retinopathy lesions in retinal fundus images. In Proceedings of the 2017 IEEE international conference on image processing (ICIP), Beijing, China, 17–20 September 2017; pp. 2069–2073. [Google Scholar] [CrossRef] [Green Version]
  120. Jiang, Y.; Wu, H.; Dong, J. Automatic Screening of Diabetic Retinopathy Images with Convolution Neural Network Based on Caffe Framework. In Proceedings of the 1st International Conference on Medical and Health Informatics 2017, Taichung city, Taiwan, 20–22 May 2017; pp. 90–94. [Google Scholar] [CrossRef]
  121. Prentasic, P.; Loncaric, S. Weighted ensemble based automatic detection of exudates in fundus photographs. IEEE 2014, 2014, 138–141. [Google Scholar] [CrossRef]
  122. Roy, P.; Tennakoon, R.; Cao, K.; Sedai, S.; Mahapatra, D.; Maetschke, S.; Garnavi, R. A novel hybrid approach for severity assessment of Diabetic Retinopathy in colour fundus images. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, Australia, 18–21 April 2017; pp. 1078–1082. [Google Scholar] [CrossRef]
  123. Xu, K.; Feng, D.; Mi, H. Deep Convolutional Neural Network-Based Early Automated Detection of Diabetic Retinopathy Using Fundus Image. Molecules 2017, 22, 2054. [Google Scholar] [CrossRef] [Green Version]
  124. Yang, Y.; Li, T.; Li, W.; Wu, H.; Fan, W.; Zhang, W. Lesion Detection and Grading of Diabetic Retinopathy via Two-Stages Deep Convolutional Neural Networks. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Germany, 2017; pp. 533–540. [Google Scholar] [CrossRef] [Green Version]
  125. van Grinsven, M.J.J.P.; van Ginneken, B.; Hoyng, C.B.; Theelen, T.; Sanchez, C.I. Fast Convolutional Neural Network Training Using Selective Data Sampling: Application to Hemorrhage Detection in Color Fundus Images. IEEE Trans. Med. Imaging 2016, 35, 1273–1284. [Google Scholar] [CrossRef]
  126. Budak, U.; Şengür, A.; Guo, Y.; Akbulut, Y. A novel microaneurysms detection approach based on convolutional neural networks with reinforcement sample learning algorithm. Health Inf. Sci. Syst. 2017, 5, 14. [Google Scholar] [CrossRef]
  127. Zhou, W.; Wu, C.; Chen, D.; Wang, Z.; Yi, Y.; Du, W. Automatic Microaneurysms Detection Based on Multifeature Fusion Dictionary Learning. Comput. Math. Methods Med. 2017, 2017, 1–11. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  128. Barkana, B.D.; Sariçiçek, I.; Yildirim, B. Performance analysis of descriptive statistical features in retinal vessel segmentation via fuzzy logic, ANN, SVM, and classifier fusion. Knowl. Based Syst. 2017, 118, 165–176. [Google Scholar] [CrossRef]
  129. Dasgupta, A.; Singh, S. A fully convolutional neural network based structured prediction approach towards the retinal vessel segmentation. In Proceedings of the 14th International Symposium on Biomedical Imaging (ISBI), Melbourne, VIC, Australia, 18–21 April 2017; pp. 248–251. [Google Scholar]
  130. Mo, J.; Zhang, L. Multi-level deep supervised networks for retinal vessel segmentation. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 2181–2193. [Google Scholar] [CrossRef] [PubMed]
  131. Tan, J.H.; Acharya, U.R.; Bhandary, S.V.; Chua, K.C.; Sivaprasad, S. Segmentation of optic disc, fovea, and retinal vasculature using a single convolutional neural network. J. Comput. Sci. 2017, 20, 70–79. [Google Scholar] [CrossRef] [Green Version]
  132. Vega, R.; Sanchez-Ante, G.; Falcon-Morales, L.E.; Sossa, H.; Guevara, E. Retinal vessel extraction using lattice neural networks with dendritic processing. Comput. Biol. Med. 2015, 58, 20–30. [Google Scholar] [CrossRef] [PubMed]
  133. Wang, S.; Yin, Y.; Cao, G.; Wei, B.; Zheng, Y.; Yang, G. Hierarchical retinal blood vessel segmentation based on feature and ensemble learning. Neurocomputing 2015, 149, 708–717. [Google Scholar] [CrossRef]
  134. Choi, J.Y.; Yoo, T.K.; Seo, J.G.; Kwak, J.; Um, T.T.; Rim, T.H. Multi-categorical deep learning neural network to classify retinal images: A pilot study employing small database. PLoS ONE 2017, 12, 16. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  135. Mumtaz, R.; Hussain, M.; Sarwar, S.; Khan, K.; Mumtaz, S.; Mumtaz, M. Automatic detection of retinal hemorrhages by exploiting image processing techniques for screening retinal diseases in diabetic patients. Int. J. Diabetes Dev. Ctries. 2017, 38, 80–87. [Google Scholar] [CrossRef]
  136. Santhi, D.; Manimegalai, D.; Parvathi, S.; Karkuzhali, S. Segmentation and classification of bright lesions to diagnose diabetic retinopathy in retinal images. Biomed. Eng. Biomed. Tech. 2016, 61, 443–453. [Google Scholar] [CrossRef]
  137. Li, G.; Zheng, S.; Li, X. Exudate Detection in Fundus Images via Convolutional Neural Network. In International Forum on Digital TV and Wireless Multimedia Communications; Springer: Singapore, 2018; pp. 193–202. [Google Scholar] [CrossRef]
  138. Bala, P.; Vijayachitra, S. A Sequential learning method for detection and classification of exudates in retinal images to assess diabetic retinopathy. J. Biol. Syst. 2014, 22, 413–428. [Google Scholar] [CrossRef]
  139. Rahim, S.S.; Jayne, C.; Palade, V.; Shuttleworth, J. Automatic detection of microaneurysms in colour fundus images for diabetic retinopathy screening. Neural Comput. Appl. 2015, 27, 1149–1164. [Google Scholar] [CrossRef]
  140. Omar, M.; Khelifi, F.; Tahir, M.A. Detection and classification of retinal fundus images exudates using region based multiscale LBP texture approach. In Proceedings of the 2016 International Conference on Control, Decision and Information Technologies (CoDIT), Saint Julian’s, Malta, 6–8 April 2016; pp. 227–232. [Google Scholar] [CrossRef]
  141. Abbas, Q.; Fondon, I.; Sarmiento, A.; Jiménez, S.; Alemany, P. Automatic recognition of severity level for diagnosis of diabetic retinopathy using deep visual features. Med. Biol. Eng. Comput. 2017, 55, 1959–1974. [Google Scholar] [CrossRef] [PubMed]
  142. Ouyang, W.; Luo, P.; Zeng, X.; Qiu, S.; Tian, Y.; Li, H.; Yang, S.; Wang, Z.; Xiong, Y.; Qian, C.; et al. Deepid-net: Multi-stage and deformable deep convolutional neural networks for object detection. arXiv 2014, arXiv:1409.3505. [Google Scholar]
  143. Shan, J.; Li, L. A deep learning method for microaneurysm detection in fundus images. In Proceedings of the IEEE First International Conference on Connected Health: Applications, Systems, and Engineering Technologies, Washington, DC, USA, 27–29 June 2016; pp. 357–358. [Google Scholar]
  144. Shirbahadurkar, S.D.; Mane, V.M.; Jadhav, D.V. Early Stage Detection of Diabetic Retinopathy Using an Optimal Feature Set. In Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2017; Volume 678, pp. 15–23. [Google Scholar] [CrossRef]
  145. SK, S. A Machine Learning Ensemble Classifier for Early Prediction of Diabetic Retinopathy. J. Med. Syst. 2017, 41, 201. [Google Scholar] [CrossRef]
  146. Abràmoff, M.D.; Lou, Y.; Erginay, A.; Clarida, W.; Amelon, R.; Folk, J.C.; Niemeijer, M. Improved Automated Detection of Diabetic Retinopathy on a Publicly Available Dataset Through Integration of Deep Learning. Investig. Ophthalmol. Vis. Sci. 2016, 57, 5200–5206. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  147. Antal, B.; Hajdu, A. An ensemble-based system for automatic screening of diabetic retinopathy. Knowl. Based Syst. 2014, 60, 20–27. [Google Scholar] [CrossRef] [Green Version]
  148. Carrera, E.V.; Gonzalez, A.; Carrera, R. Automated detection of diabetic retinopathy using SVM. In Proceedings of the IEEE XXIV International Conference on Electronics, Electrical Engineering and Computing (INTERCON), Cusco, Peru, 15–18 August 2017; pp. 1–4. [Google Scholar] [CrossRef]
  149. Gegundez-Arias, M.E.; Marin, D.; Ponte, B.; Alvarez, F.; Garrido, J.; Ortega, C.; Vasallo, M.J.; Bravo, J.M. A tool for automated diabetic retinopathy pre-screening based on retinal image computer analysis. Comput. Biol. Med. 2017, 88, 100–109. [Google Scholar] [CrossRef]
  150. Li, X.; Pang, T.; Xiong, B.; Liu, W.; Liang, P.; Wang, T. Convolutional neural networks based transfer learning for diabetic retinopathy fundus image classification. In Proceedings of the 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Shanghai, China, 14–16 October 2017. [Google Scholar]
  151. Arunkumar, R.; Karthigaikumar, P. Multi-retinal disease classification by reduced deep learning features. Neural Comput. Applic. 2017, 28, 329–334. [Google Scholar] [CrossRef]
  152. Tan, J.H.; Fujita, H.; Sivaprasad, S.; Bhandary, S.V.; Rao, A.K.; Chua, K.C.; Acharya, U.R. Automated segmentation of exudates, hemorrhages, and microaneurysms using a single convolutional neural network. Inf. Sci. 2017, 420, 66–76. [Google Scholar] [CrossRef]
  153. Takahashi, H.; Tampo, H.; Arai, Y.; Inoue, Y.; Kawashima, H. Applying artificial intelligence to disease staging: Deep learning for the improved staging of diabetic retinopathy. PLoS ONE 2017, 12, e0179790. [Google Scholar] [CrossRef] [Green Version]
  154. Hemanth, D.J.; Anitha, J.; Indumathy, A. Diabetic Retinopathy Diagnosis in Retinal Images Using Hopfield Neural Network. IETE J. Res. 2016, 62, 893–900. [Google Scholar] [CrossRef]
  155. Sharma, S.; Singh, G.; Sharma, M. A comprehensive review and analysis of supervised-learning and soft computing techniques for stress diagnosis in humans. Comput. Biol. Med. 2021, 134, 104450. [Google Scholar] [CrossRef] [PubMed]
  156. Lakshminarayanan, V.; Kheradfallah, H.; Sarkar, A.; Balaji, J.J. Automated Detection and Diagnosis of Diabetic Retinopathy: A Comprehensive Survey. J. Imaging 2021, 7, 165. [Google Scholar] [CrossRef]
  157. Shultz, T.R.; Fahlman, S.E.; Craw, S.; Andritsos, P.; Tsaparas, P.; Silva, R.; Drummond, C.; Lanzi, P.L.; Gama, J.; Wiegand, R.P. Confusion Matrix. Encycl. Mach. Learn. 2011, 61, 209. [Google Scholar]
  158. Cohen’s Kappa: What It Is, When to Use It, and How to Avoid Its Pitfalls. Available online: https://thenewstack.io/cohens-kappa-what-it-is-when-to-use-it-and-how-to-avoid-its-pitfalls (accessed on 12 March 2022).
  159. Hernández, C.; Porta, M.; Bandello, F.; Grauslund, J.; Harding, S.P.; Aldington, S.J.; Egan, C.; Frydkjaer-Olsen, U.; García-Arumí, J.; Gibson, J.; et al. The Usefulness of Serum Biomarkers in the Early Stages of Diabetic Retinopathy: Results of the EUROCONDOR Clinical Trial. J. Clin. Med. 2020, 9, 1233. [Google Scholar] [CrossRef] [PubMed]
  160. Jacoba, C.M.P.; Celi, L.A.; Silva, P.S. Biomarkers for Progression in Diabetic Retinopathy: Expanding Personalized Medicine through Integration of AI with Electronic Health Records. Semin. Ophthalmol. 2021, 36, 250–257. [Google Scholar] [CrossRef]
  161. Jacoba, C.M.P.; Celi, L.A.; Silva, P.S. Biomarkers for Progression in Diabetic Retinopathy: Expanding Personalized Medicine through Integration of AI with Electronic Health Records. Semin. Ophthalmol. 2021, 36, 250–257. [Google Scholar]
  162. The Diabetes Control and Complications Trial Research Group. Progression of Retinopathy with Intensive versus Conventional Treatment in the Diabetes Control and Complications Trial. Ophthalmology 1995, 102, 647–661. [Google Scholar] [CrossRef]
  163. The ACCORD Study Group; The ACCORD Eye Study Group; Chew, E.Y.; Ambrosius, W.T.; Davis, M.D.; Danis, R.P.; Gangaputra, S.; Greven, C.M.; Hubbard, L.; Esser, B.A.; et al. Effects of Medical Therapies on Retinopathy Progression in Type 2 Diabetes. N. Engl. J. Med. 2010, 363, 233–244. [Google Scholar] [CrossRef]
  164. Kuo, J.Z.; Wong, T.Y.; Rotter, J.I. Challenges in elucidating the genetics of diabetic retinopathy. JAMA Ophthalmol. 2014, 132, 96–107. [Google Scholar] [CrossRef] [Green Version]
  165. Mastropasqua, R.; Toto, L.; Cipollone, F.; Santovito, D.; Carpineto, P.; Mastropasqua, L. Role of microRNAs in the modulation of diabetic retinopathy. Prog. Retin. Eye Res. 2014, 43, 92–107. [Google Scholar] [CrossRef] [PubMed]
  166. Cooper, M.E.; El-Osta, A. Epigenetics. Circ. Res. 2010, 107, 1403–1413. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  167. Torok, Z.; Peto, T.; Csosz, E.; Tukacs, E.; Molnar, A.M.; Berta, A.; Tozser, J.; Hajdu, A.; Nagy, V.; Domokos, B.; et al. Combined Methods for Diabetic Retinopathy Screening, Using Retina Photographs and Tear Fluid Proteomics Biomarkers. J. Diabetes Res. 2015, 2015, 1–8. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  168. Lu, C.-H.; Lin, S.-T.; Chou, H.-C.; Lee, Y.-R.; Chan, H.-L. Proteomic analysis of retinopathy-related plasma biomarkers in diabetic patients. Arch. Biochem. Biophys. 2013, 529, 146–156. [Google Scholar] [CrossRef]
  169. Xia, J.-F.; Wang, Z.-H.; Liang, Q.-L.; Wang, Y.-M.; Li, P.; Luo, G.-A. Correlations of six related pyrimidine metabolites and diabetic retinopathy in Chinese type 2 diabetic patients. Clin. Chim. Acta 2011, 412, 940–945. [Google Scholar] [CrossRef] [PubMed]
  170. Hussain, F.; Hussain, R.; Hossain, E. Explainable Artificial Intelligence (XAI): An Engineering Perspective. arXiv 2021, arXiv:2101.03613. Available online: http://arxiv.org/abs/2101.03613 (accessed on 21 March 2022).
171. Jang, S.I.; Girard, M.J.; Thiéry, A.H. Explainable diabetic retinopathy classification based on neural-symbolic learning. CEUR Workshop Proc. 2021, 2986, 104–113. [Google Scholar]
  172. Deshpande, N.M.; Gite, S.; Pradhan, B.; Assiri, M.E. Explainable Artificial Intelligence–A New Step towards the Trust in Medical Diagnosis with AI Frameworks: A Review. Comput. Model. Eng. Sci. 2022, 133, 1–30. [Google Scholar] [CrossRef]
173. Leopold, H.A.; Singh, A.; Sengupta, S.; Zelek, J.S.; Lakshminarayanan, V. Recent advances in deep learning applications for retinal diagnosis using OCT. In State of the Art in Neural Networks; Elsevier: New York, NY, USA, 2020. [Google Scholar]
  174. Liu, R.; Li, Q.; Xu, F.; Wang, S.; He, J.; Cao, Y.; Shi, F.; Chen, X.; Chen, J. Application of artificial intelligence-based dual-modality analysis combining fundus photography and optical coherence tomography in diabetic retinopathy screening in a community hospital. Biomed. Eng. Online 2022, 21, 1–11. [Google Scholar] [CrossRef]
175. Nguyen, D.M.H.; Mai, T.T.N.; Than, N.T.T.; Prange, A.; Sonntag, D. Self-supervised Domain Adaptation for Diabetic Retinopathy Grading Using Vessel Image Reconstruction. In German Conference on Artificial Intelligence (Künstliche Intelligenz); Springer: Cham, Switzerland, 2021; pp. 349–361. [Google Scholar] [CrossRef]
  176. Song, R.; Cao, P.; Yang, J.; Zhao, D.; Zaiane, O.R. A Domain Adaptation Multi-instance Learning for Diabetic Retinopathy Grading on Retinal Images. In Proceedings of the 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Seoul, Republic of Korea, 16–19 December 2020; pp. 743–750. [Google Scholar] [CrossRef]
  177. Crawshaw, M. Multi-task learning with deep neural networks: A survey. arXiv 2020, arXiv:2009.09796. [Google Scholar]
  178. Foo, A.; Hsu, W.; Lee, M.L.; Lim, G.; Wong, T.Y. Multi-Task Learning for Diabetic Retinopathy Grading and Lesion Segmentation. Proc. Conf. AAAI Artif. Intell. 2020, 34, 13267–13272. [Google Scholar] [CrossRef]
Figure 1. Applications of AI in retina imaging.
Figure 2. Normal vision and DR vision.
Figure 3. Representation of a fundus image with the lesion annotations.
Figure 5. Classification of diabetic retinopathy.
Figure 6. Evolution of DR with artificial intelligence.
Figure 7. Organization of our SLR into various sections.
Figure 8. Relevant paper distribution based on (a) data source, (b) year, and (c) document type.
Figure 9. Machine learning methods.
Figure 10. DR recognition and classification using deep learning.
Figure 11. DR detection and classification using transfer learning.
Figure 12. Feature extraction parameters.
Figure 13. Diabetic retinopathy using XAI.
Figure 14. Diabetic retinopathy using domain adaptation.
Table 1. Highlights of earlier research related to DR.

| Ref. No | Objectives and Topic | Discussions | Type |
| [15] | Presents DR screening methodologies as five components: datasets, image preparation methods, ML-based methods, DL-based strategies, and evaluation metrics. | Did not follow the PRISMA approach. Considers studies released between January 2013 and March 2018. | Review |
| [39] | Discusses DeepDR, an automated DR identification and grading system that uses transfer learning and ensemble learning to detect the presence and severity of DR in fundus images. | Did not follow the PRISMA approach. Experimental results indicate the importance and effectiveness of the ideal number and combination of component classifiers in model performance. | Review |
| [38] | Discusses an integrated ML approach that incorporates support vector machines (SVMs), principal component analysis (PCA), and moth flame optimization for DR. | Did not follow the PRISMA approach. Utilizing PCA to reduce the dimensions had a detrimental impact on the performance of most ML algorithms. | Review |
| [40] | Presents the latest DL algorithms used in DR detection, highlighting the contributions and challenges of recent research papers. | Did not follow the PRISMA approach. Robust deep-learning methods must be developed to give satisfactory performance in cross-database evaluation, i.e., trained on one dataset and tested on another. | Review |
| [41] | Presents a comprehensive survey of automated eye disease detection systems covering available datasets, image preprocessing techniques, and deep learning models. | Did not follow the PRISMA approach. Considers studies from January 2016 to June 2021. | Review |
Table 2. Research questions.

| RQ No. | Research Question | Objective/Discussion |
| 1 | What are the most common artificial intelligence-based methods for DR detection? | Helps determine the artificial intelligence algorithms currently most relevant for DR diagnosis applications. |
| 2 | What are the various feature extraction techniques for DR? | Lists the various feature extraction techniques used for DR. |
| 3 | What are the relevant datasets for DR? | Identifies publicly available datasets that may be used as benchmarks to compare and assess the performance of various methodologies, and gives new researchers a head start. |
| 4 | What are the various evaluation measures used for DR detection? | Reviews the most widely used standards and metrics for DR detection. |
| 5 | What are the potential solutions for a robust and reliable DR detection system? | Makes it easier to identify significant research areas to be studied. |
Table 3. Direct and indirect keywords used.

| Fundamental keyword | "Diabetic Retinopathy" |
| Direct keywords | "Artificial Intelligence"; "Machine Learning"; "Deep Learning" |
| Indirect keywords | "Ophthalmology"; "Fundus Images"; "DR Stages"; "OCT" |
Table 4. Search terms employed.

| Database | Query | Initial Outcome |
| Scopus | (Diabetic AND Retinopathy AND Artificial AND Intelligence AND Machine AND Learning AND Deep AND Learning) | 149 |
| Web of Science | (same query) | 79 |
Table 5. Inclusion and exclusion criteria considered.

Inclusion criteria:
- Articles must be primary research papers rather than reviews or survey pieces.
- Scholarly articles published between 2014 and April 2022.
- Query terms must appear in the titles, abstracts, or full body of peer-reviewed publications.
- Articles that address at least one research question.
- The developed solution should aim at resolving issues in diabetic retinopathy detection using AI.

Exclusion criteria:
- Articles written in languages other than English.
- Duplicate publications of the same study.
- Articles for which the complete paper is not available.
- Research papers not related to diabetic retinopathy using AI.
Table 6. Major automated diabetic retinopathy tools.

Bosch [44]
- Sample size and population: 1128; DR patients aged 18+.
- Device: Bosch Mobile Eye Care fundus camera; single-field, non-mydriatic.
- Grading/mechanism: ETDRS.
- Limitation: in some of the eyes diagnosed as normal, the other eye may have had early evidence of DR. While the study reports the findings of DR, it would be useful to know how accurate the software is for individual lesions, such as exudates, microaneurysms, and macular edema.
- Software mechanism: CNN-based AI software.
- Where used: DR screening in India.
- Accuracy: sensitivity 91%; specificity 96%; positive predictive value (PPV) 94%; negative predictive value (NPV) 95%.

Retmarker DR [45]
- Sample size and population: 45,148; screening of diabetic patients.
- Device: non-mydriatic cameras, mainly a Canon CR6-45NM with a Sony DXC-950P 3CCD color video camera; other cameras, such as the Nidek AFC-330 and CSO Cobra, were used temporarily.
- Grading/mechanism: Coimbra Ophthalmology Reading Centre (CORC).
- Limitation: short study duration (2 years) and lack of more detailed information on systemic parameters, such as lipid stratification.
- Software mechanism: feature-based ML algorithms.
- Where used: local DR screening in Portugal (Aveiro, Coimbra, Leiria, Viseu, Castelo Branco, and Cova da Beira).
- Accuracy: R0 71.5%; RL 22.7%; M 2.2%; RP 0.1%; NC 3.5%; human grading burden reduced by 48.42%.

EyeArt [46]
- Sample size and population: 78,685; cross-sectional diagnostic study of individuals with diabetes.
- Device: two-field undilated fundus photography; two-field retinal CFP images (one disc-centered and one macula-centered) per eye (Canon CR-2 AF or Canon CR-2 Plus AF; Canon USA Inc.).
- Grading/mechanism: ETDRS.
- Limitation: optical coherence tomography was not used to determine clinically significant macular edema, although color fundus photography (CFP) is an accurate, sufficient, and widely accepted clinical reference standard, including by the FDA.
- Software mechanism: AI algorithm.
- Where used: Canada, for detection of both mtmDR and vtDR without physician assistance.
- Accuracy: sensitivity 91.7%; specificity 91.5%.

Retinalyze [47]
- Sample size and population: 260; retrospective cross-sectional study of diabetic patients attending routine screening.
- Device: mydriatic 60° fundus photography on 35-mm color transparency film, with a single fovea-centered field (CF-60UV fundus camera; Canon Europa NV, Amstelveen, The Netherlands).
- Grading/mechanism: routine grading based on visual examination of slide-mounted transparencies; reference grading performed with specific emphasis on achieving high sensitivity.
- Limitation: commercially unavailable for a long time until reintroduced in its web-based form with DL improvements.
- Software mechanism: deep learning based.
- Where used: Europe, to a greater extent.
- Accuracy: sensitivity 93.1%; specificity 71.6%.

Singapore SERI-NUS [48]
- Sample size and population: 76,370 from the SIDRP between 2010 and 2013 (SIDRP 2010–2013); patients with diabetes.
- Device: FundusVue, Canon, Topcon, and Carl Zeiss non-mydriatic cameras.
- Grading/mechanism: grading completed by a certified ophthalmologist and retina specialist.
- Limitation: identification of diabetic macular edema from fundus photographs may not identify all cases appropriately without clinical examination and optical coherence tomography.
- Software mechanism: deep learning system.
- Where used: Singapore.
- Accuracy: sensitivity 90.5%; specificity 91.6%; AUC 0.936.

Google [49]
- Sample size and population: 128,175 (Aravind Eye Hospital, Sankara Nethralaya, and Narayana Nethralaya); macula-centered retinal fundus images retrospectively obtained from EyePACS in the United States and three eye hospitals in India among patients presenting for DR screening.
- Device: two sets of 9963 EyePACS images from Centervue DRS, Optovue iCam, Canon CR1/DGi/CR2, and Topcon NW cameras with 45° FOV, 40% acquired with pupil dilation; 1748 Messidor-2 images from a Topcon TRC NW6 non-mydriatic camera with 45° FOV, 44% acquired with pupil dilation.
- Grading/mechanism: DR severity (none, mild, moderate, severe, or proliferative) graded according to the International Clinical Diabetic Retinopathy scale.
- Limitation: further research is necessary to determine the feasibility of applying this algorithm in the clinical setting and whether its use could lead to improved care and outcomes compared with current ophthalmologic assessment.
- Software mechanism: CNN based; Inception-v3 architecture.
- Where used: North Carolina, to a greater extent.
- Accuracy: sensitivity 97.5%; specificity 93.4%.

IDx-DR [50]
- Sample size and population: 900; patients with no history of DR.
- Device: widefield stereoscopic mydriatic photography.
- Grading/mechanism: Wisconsin Fundus Photograph Reading Center (FPRC) and ETDRS.
- Limitation: the prevalence of referable retinopathy in this population is small, which limits comparison to other populations with higher disease prevalence.
- Software mechanism: AI-based logistic regression model.
- Where used: Dutch diabetic care system (1410).
- Accuracy: sensitivity 87.2%; specificity 90.7%.

Comprehensive Artificial Intelligence Retinal Expert (CARE) system [51]
- Sample size and population: 443 subjects (848 eyes); previously diagnosed diabetic patients.
- Device: one-field color fundus photography (CFP, macula-centered with a 50° field of view) of both eyes using a non-mydriatic fundus camera (RetiCam 3100, China) by three trained ophthalmologists in dark rooms.
- Grading/mechanism: International Clinical Diabetic Retinopathy (ICDR) classification criteria.
- Limitation: drawbacks in detecting severe PDR and DME: (1) poor fundus imaging (ghost images and fuzzy lesions with leukoplakia, lens opacity, and small pupils) creates difficulty in AI identification; (2) differences in results caused by the study's insufficient sample size; (3) some lesions were overlooked by the 50-degree fundus photography focused on the macula.
- Software mechanism: AI-based.
- Where used: Chinese community health care centers.
- Accuracy: sensitivity 75.19%; specificity 93.99%.
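The sensitivity/specificity and PPV/NPV figures reported in Table 6 are linked through disease prevalence: PPV and NPV follow from the test characteristics via Bayes' rule. A minimal Python sketch of that relationship is given below; the prevalence and test characteristics in the example are illustrative assumptions, not values taken from any study in the table.

```python
# Minimal sketch: deriving PPV and NPV from sensitivity, specificity,
# and an assumed disease prevalence. All numbers here are illustrative
# placeholders, not results from the studies in Table 6.

def ppv_npv(sensitivity: float, specificity: float, prevalence: float):
    """Return (PPV, NPV) for a screening test via Bayes' rule."""
    tp = sensitivity * prevalence              # population fraction: true positives
    fp = (1 - specificity) * (1 - prevalence)  # population fraction: false positives
    fn = (1 - sensitivity) * prevalence        # population fraction: false negatives
    tn = specificity * (1 - prevalence)        # population fraction: true negatives
    ppv = tp / (tp + fp)  # P(disease | positive test)
    npv = tn / (tn + fn)  # P(no disease | negative test)
    return ppv, npv

if __name__ == "__main__":
    # Example: a screen with 91% sensitivity and 96% specificity,
    # under an assumed 20% DR prevalence among screened patients.
    ppv, npv = ppv_npv(0.91, 0.96, 0.20)
    print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
```

Because PPV and NPV shift with prevalence, tools deployed in populations with low referable-DR prevalence (as noted for IDx-DR above) can report high NPV even when sensitivity is moderate.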
Table 7. Traditional feature extraction methods used in DR.

[98] Wu, Zhang, Liu, and Bauman (2006)
- Feature selected: blood vessels in the retina.
- Technique: Gabor filters.
- Weakness: requires high processing time owing to the large feature vector dimension.
- Database: STARE.
- Performance: tested on 20 images; for normal images, TPR 80–91% and FPR 2.8–5.5%; for abnormal images, TPR 73.8–86.5% and FPR 2.1–5.3%.

[99] Sanchez et al. (2009)
- Feature selected: hard exudates, distinguished from cotton wool spots and other artifacts.
- Technique: edge detection and mixture models.
- Weakness: the diversity of brightness and size makes hard exudates difficult to detect; the method may fail when very few appear in the retina.
- Database: 80 retinal images with variable color, brightness, and quality.
- Performance: lesion-based sensitivity of 90.2% and positive predictive value of 96.8%; image-based sensitivity of 100% and specificity of 90%.

[100] Garcia, Sanchez, Lopez, Abasolo, and Hornero (2009)
- Feature selected: red lesion image and shape features.
- Technique: neural networks with multilayer perceptron (MLP), radial basis function (RBF), and support vector machine (SVM) classifiers.
- Weakness: the black-box nature of ANNs; higher accuracy requires more data.
- Database: 117 images with variable color, brightness, and quality (50 for training and 67 for testing).
- Performance: lesion-based sensitivity/positive predictive value (%): MLP 88.1/80.72, RBF 88.49/77.41, SVM 87.61/83.51; image-based sensitivity/specificity (%): MLP 100/92.59, RBF 100/81.48, SVM 100/77.78.

[101] Sanchez et al. (2008)
- Feature selected: hard exudates.
- Technique: color information and Fisher's linear discriminant analysis.
- Weakness: may perform poorly when only a few very faint HEs are present in the retina; more images are required for better results.
- Database: 58 retinal images with variable color, brightness, and quality from the Instituto de Oftalmobiología Aplicada, University of Valladolid, Spain.
- Performance: lesion-based sensitivity of 88% with a mean of 4.83 ± 4.64 false positives per image; image-based sensitivity of 100% and specificity of 100%.

[102] Quellec et al. (2012)
- Feature selected: abnormal patterns in fundus images.
- Technique: multiple-instance learning.
- Weakness: the training procedure is complex and time-consuming.
- Database: Messidor (1200 images) and e-ophtha (25,000 images).
- Performance: area under the ROC curve of Az = 0.881 on Messidor and Az = 0.761 on e-ophtha.

[103] Köse, Şevik, İkibaş, and Erdöl (2012)
- Feature selected: image pixel information.
- Technique: inverse segmentation using region growing, adaptive region growing, and Bayesian approaches.
- Weakness: difficult to choose the correct prior.
- Database: 328 images at 760 × 570 resolution from the Department of Ophthalmology, Faculty of Medicine, Karadeniz Technical University.
- Performance: identifies and localizes over 97% of ODs and segments around 95% of DR lesions.

[104] Giancardo et al. (2012)
- Feature selected: exudates in fundus images.
- Technique: feature vector generated from an exudate probability map, color analysis, and wavelet analysis.
- Weakness: computationally intensive.
- Database: HEI-MED, Messidor, and DIARETDB1.
- Performance: AUC between 0.88 and 0.94, depending on the dataset/features used.

[105] Zhang, Karray, Li, and Zhang (2012)
- Feature selected: microaneurysm and blood vessel detection.
- Technique: MAs located using multi-scale Gaussian correlation filtering (MSC) with dictionary learning and a sparse representation classifier (SRC).
- Weakness: dictionaries for vessel extraction are artificially generated using Gaussian functions, which can lower the discriminative ability of the SRC; a larger dataset is also required.
- Database: STARE and DRIVE.
- Performance: STARE, FPR 0.00480, TPR 0.73910, PPV 0.740888; DRIVE, FPR 0.0028, TPR 0.5766, PPV 0.8467.

[106] Qureshi et al. (2012)
- Feature selected: macula and optic disc (OD) identification.
- Technique: ensemble algorithm combining edge detectors, the Hough transform, and pyramidal decomposition.
- Weakness: difficult to determine the best approach, since good results were reported for healthy retinas but less precise results on a difficult dataset.
- Database: Diaretdb0, Diaretdb1, and DRIVE; 40% of the images from each benchmark used for training and 60% for testing.
- Performance: average detection rate of 96.7% for the macula and 98.6% for the OD.

[107] Noronha and Nayak (2013)
- Feature selected: two energy features and six energy values in three orientations.
- Technique: wavelet transforms and support vector machine (SVM) kernels.
- Weakness: performance depends on factors such as the size and quality of the training features, the robustness of the training, and the features extracted.
- Database: fundus images.
- Performance: accuracy, sensitivity, and specificity above 99%.

[108] Gharaibeh, N. (2021)
- Feature selected: cotton wool spots and exudates; nineteen features extracted from the fundus image.
- Technique: unsupervised particle swarm optimization-based relative reduct algorithm (US-PSO-RR) with SVM and naïve Bayes classifiers.
- Weakness: detection and elimination of the optic disc from fundus images is difficult, making lesion detection challenging.
- Database: Image-Ret.
- Performance: sensitivity of 99%, specificity of 99%, and accuracy of 98.60%.

[109] Gharaibeh, N. (2018)
- Feature selected: microaneurysms, hemorrhages, and exudates.
- Technique: co-occurrence matrix and SVM.
- Weakness: could be tried on larger datasets.
- Database: DIARETDB1.
- Performance: sensitivity of 99%, specificity of 96%, and accuracy of 98.4%.

[110] Akram, Khalid, and Khan (2013)
- Feature selected: image shape and statistics.
- Technique: Gaussian mixture models, support vector machine, and a Gabor filter bank.
- Weakness: needs evaluation on a large dataset.
- Database: 438 fundus images.
- Performance: accuracy of 99.4%, sensitivity of 98.64%, and specificity of 99.40%.

[111] Harini, R. and Sheela, N. (2016)
- Feature selected: blood vessels, microaneurysms, and exudates.
- Technique: gray-level co-occurrence matrix (GLCM) for textural feature extraction; classification with SVM.
- Weakness: training SVMs on large datasets requires more time.
- Database: 75 fundus images (45 for training and 30 for testing).
- Performance: accuracy of 96.67%, sensitivity of 100%, and specificity of 95.83%.

[112] Umapathy, Sreenivasan, and Nairy (2019)
- Feature selected: exudates and red lesions in the fundus image.
- Technique: decision tree classifier.
- Weakness: requires more training time and is prone to overfitting.
- Database: STARE, HRF, MESSIDOR, and a novel dataset created from the Retina Institute of Karnataka.
- Performance: accuracy of 94.4%.
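To make the traditional pipeline in Table 7 concrete, the sketch below passes GLCM texture features to an SVM, in the spirit of [111]. It is a minimal illustration under stated assumptions, not the authors' original code: it assumes fundus images are already loaded as 8-bit grayscale arrays, and it substitutes random placeholder arrays and labels so the script runs end to end.

```python
# A minimal sketch of a GLCM-texture + SVM pipeline (cf. [111]).
# Assumptions: fundus images pre-loaded as 8-bit grayscale NumPy arrays;
# binary labels 0 (no DR) / 1 (DR). Placeholders stand in for real data.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def glcm_features(gray_img: np.ndarray) -> np.ndarray:
    """Extract a small GLCM texture descriptor from one grayscale image."""
    glcm = graycomatrix(gray_img, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    # One value per (property, angle): a 16-dimensional feature vector.
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Placeholder data: 40 random 128x128 "images" with random binary labels.
images = [np.random.randint(0, 256, (128, 128), dtype=np.uint8) for _ in range(40)]
labels = np.random.randint(0, 2, 40)

X = np.vstack([glcm_features(img) for img in images])
X_train, X_test, y_train, y_test = train_test_split(X, labels,
                                                    test_size=0.3, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))
```

The design mirrors the table's weakness column: the feature vector is tiny and fast to classify, but SVM training time grows quickly as the image count increases.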
Table 8. Datasets.

| Sr. No | Dataset Name | Description | References | Availability | Link |
| 1 | Kaggle | Supplied by EyePACS for the DR detection challenge; 88,702 photos (35,126 for training and 53,576 for testing) [116]. | [31,49,55,56,116,117,118,119,120,121,122,123,124,125] | Free | https://www.kaggle.com/c/diabetic-retinopathy-detection/data (accessed on 2 May 2022) |
| 2 | ROC (Retinopathy Online Challenge) | 100 photos; Canon CR5-45NM, Topcon NW 100, and NW 200 cameras were used. | [52,57,60,69,82,126,127] | Free | http://webeye.ophth.uiowa.edu/ROC/ (accessed on 2 May 2022) |
| 3 | DRIVE | 40 photos from a DR screening program in the Netherlands (split into training and testing, 20 images each); Canon CR5 non-mydriatic 3CCD camera with a 45-degree field of view (FOV). | [57,65,128,129,130,131,132,133] | Free | https://www.isi.uu.nl/Research/Databases/DRIVE/ (accessed on 2 May 2022) |
| 4 | STARE | 400 photos in total; Topcon TRV-50 fundus camera with a 35-degree field of view. | [57,128,130,132,133,134,135,136] | Free | http://www.cecas.clemson.edu/~ahoover/stare/ (accessed on 3 May 2022) |
| 5 | E-Ophtha | Created by the OPHDIAT telemedical network; comprises two datasets, E-Ophtha MA and E-Ophtha EX, with 381 and 82 photos, respectively. | [55,70,75,82,96,137,138,139] | Free | http://www.adcis.net/en/Download-Third-Party/E-Ophtha.html (accessed on 3 May 2022) |
| 6 | DIARETDB0 | 130 photos (normal = 20, with DR symptoms = 110), obtained with a fundus camera with a 50-degree field of view. | [55,61,74,140] | Free | http://www.it.lut.fi/project/imageret/diaretdb0/ (accessed on 3 May 2022) |
| 7 | DIARETDB1 | 89 photos (normal = 5, with at least mild DR = 84), obtained with a fundus camera with a 50-degree field of view. | [53,55,57,58,59,60,62,63,64,67,70,74,75,82,96,97,119,135,137,139,141,142,143,144,145] | Free | http://www.it.lut.fi/project/imageret/diaretdb1/index.html (accessed on 4 May 2022) |
| 8 | Messidor-2 | 1748 photos collected with a Topcon TRC NW6 non-mydriatic fundus camera with a 45-degree field of view. | [146] | On demand | http://www.latim.univ-brest.fr/indexfce0.html (accessed on 3 May 2022) |
| 9 | Messidor | 1200 photos collected with a Topcon TRC NW6 non-mydriatic fundus camera with a 45-degree field of view. | [58,61,66,75,97,125,137,141,147,148,149,150] | Free | http://www.adcis.net/en/Download-Third-Party/Messidor.html (accessed on 3 May 2022) |
| 10 | DRiDB | 50 photos; accessible upon request. | [76,94] | On demand | https://www.ipg.fer.hr/ipg/resources/image_database (accessed on 3 May 2022) |
| 11 | DR1 | Created by the Department of Ophthalmology of the Federal University of Sao Paulo (UNIFESP); 234 images captured with a TRX-50X mydriatic camera with a 45-degree FOV. | [54,150] | Free | http://www.recod.ic.unicamp.br/site/asdr (accessed on 4 May 2022) |
| 12 | DR2 | Also from the Department of Ophthalmology at UNIFESP; 520 photographs taken with a TRC-NW8 non-mydriatic camera with a 45-degree field of view. | [54] | Free | http://www.recod.ic.unicamp.br/site/asdr (accessed on 3 May 2022) |
| 13 | ARIA | 143 images; Zeiss FF450+ fundus camera with a 50-degree field of view. | [151] | Free | http://www.damianjjfarnell.com/?page_id=276 (accessed on 5 May 2022) |
| 14 | FAZ (Foveal Avascular Zone) | 60 photos (25 normal and 35 with DR). | [141] | Free | http://www.biosigdata.com/?download=Zone (accessed on 5 May 2022) |
| 15 | CHASE-DB1 | 28 photos of 14 children (one image per eye); part of the Child Heart and Health Study (CHASE) in England. | [130] | Free | https://www.blogs.kingston.ac.uk/retinal/chasedb1/ (accessed on 5 May 2022) |
| 16 | Tianjin Medical University Metabolic Diseases Hospital | 414 fundus images. | [57] | Not publicly available | http://eng.tmu.edu.cn/ResearchCenter/list.htm (accessed on 5 May 2022) |
| 17 | Moorfields Eye Hospital | Data from countries such as Kenya, Botswana, Mongolia, China, Saudi Arabia, Italy, Lithuania, and Norway, collected at Moorfields Eye Hospital in London. | [60] | Not publicly available | https://www.moorfields.nhs.uk/research-and-development (accessed on 5 May 2022) |
| 18 | CLEOPATRA | 298 fundus images from 15 hospitals across the United Kingdom, used to diagnose DR. | [152] | Not publicly available | Not available |
| 19 | Jichi Medical University | 9939 posterior-pole fundus images of diabetic patients; NIDEK Co., Ltd. (Aichi, Japan) AFC-230 camera with a 45-degree field of view. | [153] | Not publicly available | https://www.jichi.ac.jp/ (accessed on 5 May 2022) |
| 20 | Singapore National DR Screening Program | Collected during the Singapore National Diabetic Retinopathy Screening Program (SIDRP) between 2010 and 2013; 197,085 retinal images in total. | [97] | Not publicly available | Not available |
| 21 | Lotus Eye Care Hospital, Coimbatore, India | 122 fundus images (normal = 28, DR = 94); Canon non-mydriatic Zeiss fundus camera with a 90-degree FOV. | [22,77,154] | Not publicly available | https://www.lotuseye.org/centers/sitra/ (accessed on 5 May 2022) |
| 22 | Department of Ophthalmology, Kasturba Medical College, Manipal, India | 340 images (normal = 170, with retinopathy = 170); TOPCON non-mydriatic retinal camera. | [155] | Not publicly available | https://manipal.edu/kmc-manipal/department-faculty/department-list/ophthalmology.html (accessed on 5 May 2022) |
| 23 | HUPM, Cádiz, Spain | 250 fundus photos from Hospital Puerta del Mar, Spain (50 normal and 200 with DR symptoms). | [156] | Not publicly available | https://hospitalpuertadelmar.com/ (accessed on 5 May 2022) |
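As a starting point with the datasets in Table 8, the sketch below pairs Kaggle EyePACS fundus photos with their severity grades. The trainLabels.csv file with "image" and "level" columns and the .jpeg file naming reflect the competition's published layout, but treat them as assumptions to verify against the actual download.

```python
# A minimal sketch of loading the Kaggle EyePACS data (Table 8, row 1).
# The directory name, CSV name, and column names below are assumptions
# based on the competition layout; verify them against the download.
import os

import pandas as pd
from PIL import Image

DATA_DIR = "diabetic-retinopathy-detection"  # assumed extraction folder

# Assumed label file: one row per image, with DR grade 0-4 in "level".
labels = pd.read_csv(os.path.join(DATA_DIR, "trainLabels.csv"))
print(labels["level"].value_counts())  # grade distribution across the set

def load_example(row: pd.Series):
    """Open one fundus photo and return it with its severity grade."""
    path = os.path.join(DATA_DIR, "train", row["image"] + ".jpeg")
    return Image.open(path), int(row["level"])

img, grade = load_example(labels.iloc[0])
print(img.size, "grade:", grade)
```

Printing the grade distribution first is deliberate: the EyePACS grades are heavily skewed toward grade 0, so any train/test split or loss weighting should account for that imbalance.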
Table 9. Confusion matrix.

| N = Total Predictions | Actual: No | Actual: Yes |
| Predicted: No | True Negative (TN) | False Negative (FN) |
| Predicted: Yes | False Positive (FP) | True Positive (TP) |
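The evaluation measures most often reported for DR screening (sensitivity, specificity, and accuracy) and Cohen's kappa [157,158] all follow directly from the four cells of Table 9. A minimal sketch with illustrative counts (placeholders, not study results):

```python
# Deriving the common DR evaluation metrics from the Table 9 cells.
# The counts below are illustrative placeholders, not study results.
tn, fp, fn, tp = 850, 40, 30, 80  # hypothetical screening outcomes

sensitivity = tp / (tp + fn)            # recall on the DR class
specificity = tn / (tn + fp)            # recall on the healthy class
accuracy = (tp + tn) / (tp + tn + fp + fn)

# Cohen's kappa [158]: agreement beyond chance between grader and model.
n = tp + tn + fp + fn
p_observed = (tp + tn) / n
p_yes = ((tp + fp) / n) * ((tp + fn) / n)  # chance agreement on "DR"
p_no = ((tn + fn) / n) * ((tn + fp) / n)   # chance agreement on "no DR"
p_chance = p_yes + p_no
kappa = (p_observed - p_chance) / (1 - p_chance)

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} "
      f"accuracy={accuracy:.3f} kappa={kappa:.3f}")
```

Note how kappa penalizes chance agreement: with these counts, accuracy is 0.93 while kappa is only about 0.66, which is why kappa is preferred when the DR class is rare.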