Article

Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs?

1 Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
2 CAPM Company Limited, Bonani, Dhaka 1213, Bangladesh
3 Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
4 Faculty of Information Technology, Brno University of Technology, 61200 Brno, Czech Republic
* Author to whom correspondence should be addressed.
Life 2022, 12(7), 973; https://doi.org/10.3390/life12070973
Submission received: 27 April 2022 / Revised: 25 May 2022 / Accepted: 1 June 2022 / Published: 28 June 2022
(This article belongs to the Collection Retinal Disease and Metabolism)

Abstract

Color fundus photographs are the most common type of image used for the automatic diagnosis of retinal diseases and abnormalities. Like all color photographs, these images contain information about three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel on the automatic diagnosis of retinal diseases and abnormalities. To this end, the existing works are surveyed extensively to explore which color channel is used most commonly for automatically detecting four leading causes of blindness and one retinal abnormality, along with segmenting three retinal landmarks. From this survey, it is clear that all channels together are typically used for neural network-based systems, whereas for non-neural network-based systems, the green channel is most commonly used. However, from the previous works, no conclusion can be drawn regarding the importance of the different channels. Therefore, systematic experiments are conducted to analyse this. A well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.

1. Introduction

Diagnosing retinal diseases at their earliest stage can save a patient’s vision since, at an early stage, the diseases are more likely to be treatable. However, ensuring regular retina checkups for every citizen by ophthalmologists is infeasible, not only in developing countries with huge populations but also in developed countries with small populations. The main reason is that the number of ophthalmologists relative to the population is very small. This is particularly true for low-income and lower-middle-income countries with huge populations, such as Bangladesh and India. For example, according to a survey conducted by the International Council of Ophthalmology (ICO) in 2010 [1], there were only four ophthalmologists per million people in Bangladesh; for India, the number was 11. Even for high-income countries with small populations, such as Switzerland and Norway, the numbers of ophthalmologists per million were not very high (91 and 68, respectively). More than a decade later, in 2021, these numbers remained roughly the same. Moreover, the number of people aged 60 and above (who are generally at high risk of retinal diseases) is increasing in most countries. The shortage of ophthalmologists and the necessity of regular retina checkups at low cost have inspired researchers to develop computer-aided systems to detect retinal diseases automatically.
Different kinds of imaging technologies (e.g., color fundus photography, monochromatic retinal photography, wide-field imaging, autofluorescence imaging, indocyanine green angiography, scanning laser ophthalmoscopy, Heidelberg retinal tomography, and optical coherence tomography) have been developed for the clinical care and management of patients with retinal diseases [2]. Among them, color fundus photography is available and affordable in most parts of the world. A color fundus photograph can be captured using a non-mydriatic fundus camera, handled by non-professional personnel, and delivered online to major ophthalmic institutions for follow-up in case a disease is suspected. Moreover, there are many publicly available data sets of color fundus photographs, such as CHASE_DB1 [3,4], DRIVE [5], HRF [6], IDRiD [7], the Kaggle EyePACS data set [8], Messidor [9], STARE [10,11], and UoA_DR [12], which help researchers compare the performances of their proposed approaches. Therefore, color fundus photography is used more widely than other retinal imaging techniques for automatically diagnosing retinal diseases.
In color fundus photographs, the intensities of the light reflected from the retina are recorded in three color channels: red, green, and blue. In this paper, we investigate which color channel is better for the automatic detection of retinal diseases as well as the segmentation of retinal landmarks. Although the detection of retinal diseases is the main objective of computer-aided diagnostic (CAD) systems, segmentation is also an important part of many CAD systems. For example, structural changes in the central retinal blood vessels (CRBVs) may indicate diabetic retinopathy (DR); therefore, a technique for segmenting CRBVs is often an important step in DR detection systems. Similarly, optic disc (OD) segmentation is important for some glaucoma detection algorithms.
In this work, we first extensively survey the usage of the different color channels in previous works. Specifically, we investigate works on four retinal diseases (i.e., glaucoma, age-related macular degeneration (AMD), DR, and diabetic macular edema (DME)), which are major causes of blindness [13,14,15,16], as well as works on the segmentation of retinal landmarks, such as the OD, macula/fovea, and CRBVs, and retinal atrophy. We notice that the focus of the previous works was not to investigate which of the different channels (or combination of channels) is best for the automatic analysis of fundus photographs. At the same time, there does not seem to be complete consensus on this, since different studies used different channels (or combinations of channels). Therefore, to better understand the importance of the different color channels, we develop color channel-specific U-shaped deep neural networks (i.e., U-Nets [17]) for segmenting the OD, macula, and CRBVs. We also develop U-Nets for segmenting retinal atrophy. The U-Net is well known for its excellent performance in medical image segmentation tasks and can segment images in great detail even when trained on very few images. It is shown in [17] that a U-Net trained using only 30 images outperformed a sliding-window convolutional neural network in the ISBI 2012 challenge on segmenting neuronal structures in electron microscopy (EM) stacks.
To the best of our knowledge, a systematic exploration of the importance of different color channels for the automatic processing of color fundus photographs has not been undertaken before. Naturally, a better understanding of the effectiveness of the different color channels can reduce the development time of future algorithms. In the long term, it may also affect the design of new fundus cameras and the procedures for capturing fundus photographs, e.g., the appropriate light conditions.
The organization of this paper is as follows: Section 2 briefly describes the different color channels of a color fundus photograph; Section 3 surveys which color channels were used in previous works for the automatic detection of retinal diseases and for segmentation; Section 4 describes the setup for our U-Net-based experiments; Section 5 presents the performance of the color channel-specific U-Nets; and Section 6 draws conclusions about our findings. Some image pre-processing steps are described in more detail, and additional experiments are reported, in the Appendices.

2. Fundus Photography

Our retina does not have any illumination power. Moreover, it is a minimally reflective surface. Therefore, a fundus camera, which is a complex optical system, needs to illuminate the retina and capture the weak reflected light simultaneously while imaging [18]. Most commonly, a single image sensor coated with a color filter array (CFA) is used to capture the reflected light in a fundus camera. In a CFA, the color filters are generally arranged following the Bayer pattern [19], developed by the Eastman Kodak company, as shown in Figure 1a. Instead of using three filters for capturing the three primary colors (i.e., red, green, and blue) reflected from the retina, only one filter is used per pixel to capture one primary color in the Bayer pattern. In this pattern, the number of green filters is twice the number of blue and red filters. Different kinds of demosaicing techniques are applied to obtain full-color fundus photographs [20,21,22]. Some sophisticated and expensive fundus cameras do not use a CFA with a Bayer pattern to distinguish colors; rather, they use a direct imaging sensor with three layers of photosensitive elements, as shown in Figure 1b. No demosaicing technique is necessary to obtain full-color fundus photographs from such fundus cameras.
As shown in Figure 2, in a color fundus photograph, we can see the major retinal landmarks, such as the optic disc (OD), macula, and central retinal blood vessels (CRBVs), on the colored foreground surrounded by the dark background. As can be seen in Figure 3, different color channels highlight different things in color fundus photographs. We can see the boundary of the OD more clearly and the choroid in more detail in the red channel. The red channel helps us segment the OD more accurately and see the choroidal blood vessels and choroidal lesions such as nevi or tumors more clearly than the other two color channels. The CRBVs and hemorrhages can be seen in the green channel with excellent contrast. The blue channel allows us to see the retinal nerve fiber layer (RNFL) defects and epiretinal membranes more clearly than the other two color channels.

3. Previous Works on Diagnosing Retinal Disease Automatically

Many diseases can cause retinal damage, such as glaucoma, age-related macular degeneration (AMD), diabetic retinopathy (DR), diabetic macular edema (DME), retinal artery occlusion, retinal vein occlusion, hypertensive retinopathy, macular hole, epiretinal membrane, retinal hemorrhage, lattice degeneration, retinal tear, retinal detachment, intraocular tumors, penetrating ocular trauma, pediatric and neonatal retinal disorders, cytomegalovirus retinal infection, uveitis, infectious retinitis, central serous retinopathy, retinoblastoma, endophthalmitis, and retinitis pigmentosa. Among them, glaucoma, AMD, DR, and DME have drawn the most attention from researchers working on color fundus photograph-based automation. One reason could be that, in many cases, these diseases lead to irreversible complete vision loss, i.e., blindness, if they are left undiagnosed and untreated. According to the information reported in [23,24], glaucoma, AMD, and DR are among the five most common causes of vision impairment in adults. Among the 7.79 billion people living in 2020, 295.09 million people experienced moderate or severe vision impairment (MSVI) and 43.28 million people were blind. Glaucoma caused MSVI in 4.14 million people, AMD in 6.23 million, and DR in 3.28 million; glaucoma caused blindness in 3.61 million people, AMD in 1.85 million, and DR in 1.07 million [24]. Therefore, in our literature survey, we investigate the color channels used in previously published studies for automatically diagnosing glaucoma, DR, AMD, and DME. We also survey works on the segmentation of retinal landmarks, such as the OD, macula/fovea, and CRBVs, and retinal atrophy.
We consider both original studies and reviews as sources of information. However, our survey includes only original studies written in English and published in SJR-ranked Q1 and Q2 journals. Note that the SJR (SCImago Journal Rank) is an indicator developed by SCImago based on the widely known Google PageRank algorithm [25]; it reflects the visibility of the journals contained in the Scopus database since 1996. We used different keywords, such as ‘automatic retinal disease detection’, ‘automatic diabetic retinopathy detection’, ‘automatic glaucoma detection’, ‘detect retinal disease by deep learning’, ‘segment macula’, ‘segment optic disc’, and ‘segment central retinal blood vessels’, in the Google search engine to find previous studies. After finding a paper, we checked the SJR rank of the journal. We also followed the reference lists of papers published in Q1/Q2 journals and especially benefited from review papers related to our area of interest.
In this paper, we include our findings based on information reported in 199 journal papers. As shown in Table 1, the green channel dominates non-neural network-based previous works, whereas RGB images (i.e., red, green, and blue channels together) dominate neural network-based previous works. Few works were based on the red and blue channels and they were mainly for atrophy segmentation. See Table 2, Table 3, Table 4, Table 5 and Table 6 for the color channel distribution in our studied previous works.

4. Experimental Setup

4.1. Hardware & Software Tools

We performed all experiments using TensorFlow’s Keras API 2.0.0, OpenCV 4.2.0, and Python 3.6.9. We used a standard PC with 32 GB memory, Intel 10th Gen Core i5-10400 Processor with six cores per socket, and Intel UHD Graphics 630 (CML GT2).

4.2. Data Sets

We used RGB color fundus photographs from seven publicly available data sets: (1) the Child Heart and Health Study in England (CHASE) data set [3,4], (2) the Digital Retinal Images for Vessel Extraction (DRIVE) data set [5], (3) the High-Resolution Fundus (HRF) data set [6], (4) the Indian Diabetic Retinopathy Image Dataset (IDRiD) [7], (5) the Pathologic Myopia Challenge (PALM) data set [218], (6) the STructured Analysis of the Retina (STARE) data set [10,11], and (7) the University of Auckland Diabetic Retinopathy (UoA-DR) data set [12]. Images in these data sets were captured by different fundus cameras for different kinds of research objectives, as shown in Table 7.
Since not all seven data sets have manually segmented reference images for every retinal landmark and for atrophy, we cannot use all of them for every segmentation task. Therefore, we used five data sets for the CRBV segmentation experiments, three for the OD, two for the macula, and only one for retinal atrophy. To obtain reliable results, we used the majority of the data (i.e., 55%) as test data. We prepared one training set and one validation set: the training set combines 25% of the data from each data set, and the validation set combines 20% of the data from each data set. The remaining 55% of each data set forms an individual test set for each type of segmentation. See Table 8 for the number of images in the training, validation, and test sets. Note that the training set is used to tune the parameters of the U-Net (i.e., weights and biases), the validation set is used to tune the hyperparameters (such as the number of epochs, learning rate, and activation function), and the test set is used to evaluate the performance of the U-Net.

4.3. Image Pre-Processing

We prepared four types of 2D fundus photographs: I_R, I_G, I_B, and I_Gr. By splitting 3D color fundus photographs into three color channels (i.e., red, green, and blue), we prepared I_R, I_G, and I_B. Moreover, by performing a weighted summation of I_R, I_G, and I_B, we prepared the grayscale image I_Gr. By a grayscale image, we generally mean an image whose pixels have only one value representing the amount of light; it can be visualized as different shades of gray. An 8-bit grayscale image has pixel values in the range 0–255. There are many ways to convert a color image into a grayscale image. In this paper, we use a function from the OpenCV library where each gray pixel is computed as I_Gr = 0.299 × I_R + 0.587 × I_G + 0.114 × I_B. This conversion scheme is frequently used in computer vision and is implemented in different toolboxes, e.g., GIMP and MATLAB [219], as well as OpenCV.
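For illustration, a minimal OpenCV sketch of this channel splitting and grayscale conversion could look as follows; the file name is a placeholder, and note that OpenCV loads images in BGR order:

```python
import cv2

# Load a color fundus photograph; OpenCV returns the channels in BGR order.
bgr = cv2.imread("fundus.jpg")            # placeholder path
I_B, I_G, I_R = cv2.split(bgr)            # the three single-channel images

# OpenCV's grayscale conversion applies I_Gr = 0.299*R + 0.587*G + 0.114*B.
I_Gr = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
```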
The background of a fundus photograph does not contain any information about the retina that could be helpful for manual or automatic retina-related tasks; sometimes, background noise can even be misleading. To avoid the interference of background noise in any decision, we need a binary background mask, which has zero for the pixels of the background and 2^n − 1 for the pixels of the foreground, where n is the number of bits used for the intensity of each pixel. For an 8-bit image, 2^n − 1 = 255. Background masks are provided only for the DRIVE and HRF data sets, not for the other five data sets. Therefore, we followed the steps described in Appendix A to generate background masks for all data sets; we also generated binary background masks for the DRIVE and HRF data sets in order to keep the same setup for all data sets. Overall, I_R has higher intensity than I_G and I_B in all data sets, whereas I_B has lower intensity than I_R and I_G. Moreover, in I_R, the foreground is less likely to overlap with the background noise than in I_G and I_B; in I_B, the foreground intensity is the most likely to overlap with the intensity of the background noise, as shown in Figure 4. Therefore, we use I_R (i.e., the red channel image) for generating the binary background masks.
We used the generated background masks and followed the steps described in Appendix B to crop out as much background as possible and to remove background noise outside the field-of-view (FOV). Since the cropped fundus photographs of different data sets have different resolutions, as shown in Table 7, we re-sized all masked and cropped fundus photographs to 256 × 256 by bicubic interpolation so that we could use a single U-Net. After resizing the fundus photographs, we applied contrast limited adaptive histogram equalization (CLAHE) [220] to improve the contrast of each single-channel image. Then we re-scaled the pixel values to [0, 1]. Note that re-scaling pixel values to [0, 1] is not strictly necessary for fundus photographs; however, we did it to keep the input and output in the same range. We did not apply any other pre-processing techniques to the images.
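A minimal sketch of this pre-processing chain for one channel is given below; the CLAHE clip limit and tile grid size are illustrative assumptions, as they are not reported in the text:

```python
import cv2
import numpy as np

def preprocess_channel(channel_img):
    """Resize a single-channel fundus image to 256 x 256, apply CLAHE,
    and re-scale the pixel values to [0, 1]."""
    resized = cv2.resize(channel_img, (256, 256), interpolation=cv2.INTER_CUBIC)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # assumed parameters
    enhanced = clahe.apply(resized)
    return enhanced.astype(np.float32) / 255.0
```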
Similar to the fundus photographs, the reference masks provided by the data sets for segmenting the OD, CRBVs, and retinal atrophy can have an unnecessary and noisy background. We, therefore, cropped out the unnecessary background of the provided reference masks and removed noise outside the field-of-view area by following the steps described in Appendix B. Since some provided masks are not binary masks, we turned them into 2D binary masks by following the steps described in Appendix C. No data set provides binary masks for segmenting the macula; instead, the center of the macula is provided by PALM and UoA-DR. We generated binary masks for segmenting the macula using the macula center values and the OD masks of PALM and UoA-DR by following the steps described in Appendix D. We re-sized all binary masks to 256 × 256 by bicubic interpolation. We then re-scaled the pixel values to [0, 1], since we used the sigmoid function as the activation function in the output layer of the U-Net and the range of this function is [0, 1].

4.4. Setup for U-Net

We trained color-specific U-Nets with the architecture shown in Table A3 of Appendix E. To train our U-Nets, we used the Jaccard coefficient loss (JCL) as the loss function, RMSProp with a learning rate of 0.0001 as the optimizer, and a mini-batch size of 8. We reduced the learning rate if the validation loss did not change for more than 30 consecutive epochs, and we stopped the training if the validation loss did not change for 100 consecutive epochs. We trained all color-specific U-Nets five times to avoid the effect of randomness, caused by different factors including weight initialization and dropout, on the U-Net’s performance. That means, in total, we trained 100 U-Nets: 25 for OD segmentation (i.e., five models each for RGB, gray, red, green, and blue), 25 for macula segmentation, 25 for CRBV segmentation, and 25 for atrophy segmentation. We estimate the performance of each model separately and then report mean ± standard deviation of the performance for each category.
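A Keras sketch of this training setup is shown below. The loss, optimizer, learning rate, batch size, and patience values follow the description above; the `build_unet` helper, the training arrays, and the learning-rate reduction factor are assumptions for illustration, since the reduction factor is not reported:

```python
import tensorflow as tf
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping

def jaccard_loss(y_true, y_pred, smooth=1.0):
    # Soft Jaccard (IoU) loss; `smooth` avoids division by zero.
    intersection = tf.reduce_sum(y_true * y_pred)
    union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) - intersection
    return 1.0 - (intersection + smooth) / (union + smooth)

model = build_unet()  # hypothetical helper returning the U-Net of Appendix E
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
              loss=jaccard_loss)

callbacks = [
    ReduceLROnPlateau(monitor="val_loss", patience=30, factor=0.5),  # factor is assumed
    EarlyStopping(monitor="val_loss", patience=100),
]
model.fit(x_train, y_train, batch_size=8, epochs=1000,  # epochs is only an upper bound
          validation_data=(x_val, y_val), callbacks=callbacks)
```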

4.5. Evaluation Metrics

In segmentation, the U-Net should predict whether a pixel is part of the object in question (e.g., the OD) or not. Ideally, it should therefore output:
pixel_label = 1 if the pixel belongs to the targeted retinal landmark or atrophy, and pixel_label = 0 otherwise.
However, instead of 0/1, the output of the U-Net is in the range [0, 1] for each pixel since we use sigmoid as the activation function in the last layer. The output can be interpreted as the probability that the pixel is part of the mask. To obtain a hard prediction (0/1), we use a threshold of 0.5. By comparing the hard prediction to the reference, it is decided whether the prediction is a true positive (TP), true negative (TN), false-positive (FP), or false negative (FN). Using those results for each pixel in the test set, we estimated the performance of the U-Net using four metrics. We used three metrics that are commonly used in classification tasks (i.e., precision, recall, and area-under-curve (AUC)) and one metric which is commonly used in image segmentation tasks (i.e., mean intersection-over-union (MIoU), also known as Jaccard index or Jaccard similarity coefficient). We computed precision = TP / (TP + FP) and recall = TP / (TP + FN) for both semantic classes together. On the other hand, we computed IoU = TP / (TP + FP + FN) for each semantic class (i.e., 0/1) and then averaged over the classes to estimate MIoU. We estimated the AUC for the receiver operating characteristic (ROC) curve using a linearly spaced set of thresholds. Note that AUC is a threshold-independent metric, unlike precision, recall, and MIoU, which are threshold-dependent metrics.
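The following sketch shows how these pixel-wise metrics could be computed from a reference mask and the U-Net’s sigmoid output; it uses scikit-learn’s roc_auc_score for the AUC rather than the linearly spaced set of thresholds described above:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def segmentation_metrics(y_true, y_prob, threshold=0.5):
    """Precision, recall, AUC, and mean IoU for one binary segmentation task."""
    y_true = y_true.ravel().astype(bool)
    y_pred = y_prob.ravel() >= threshold          # hard 0/1 prediction

    tp = np.sum(y_pred & y_true)
    fp = np.sum(y_pred & ~y_true)
    fn = np.sum(~y_pred & y_true)
    tn = np.sum(~y_pred & ~y_true)

    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    iou_fg = tp / (tp + fp + fn)                  # IoU of class 1 (foreground)
    iou_bg = tn / (tn + fn + fp)                  # IoU of class 0 (background)
    miou = (iou_fg + iou_bg) / 2.0
    auc = roc_auc_score(y_true, y_prob.ravel())
    return precision, recall, auc, miou
```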

5. Performance of Color Channel Specific U-Net

Comparing the results shown in Table 9, Table 10, Table 11 and Table 12, we can say that the U-Net is most successful at segmenting the OD and least successful at segmenting CRBVs for all channels. The U-Net performs better when all three color channels (i.e., RGB images) are used together than when the color channels are used individually. For segmenting the OD, the red and gray channels are better than the green and blue channels (see Table 9). For segmenting CRBVs, the green channel performs better than the other single channels, whereas both the red and blue channels perform poorly (see Table 10). For macula segmentation, there is no clear winner between the gray and green channels. Although the blue channel is a bad choice for segmenting the CRBVs, it is reasonably good at segmenting the macula (see Table 11). For segmenting retinal atrophy, the green channel is better than the other single channels, and the blue channel is also a good choice (see Table 12).
To better understand the performance of the U-Nets, we manually inspect all images together with their reference and predicted masks. As shown in Table 13, we see that in the majority of cases, all color-specific U-Nets can generate at least partially accurate masks for segmenting the OD and macula. When retinal atrophy severely affects a retina, no channel-specific U-Net can generate accurate masks for segmenting the OD and macula, as shown in Figure 5 and Figure 6. In many cases, multiple areas in the generated masks are marked as OD (see Figure 5d–f) or macula (see Figure 6d). As shown in Table 14, this happens more often in the gray channel for the macula and in the green channel for the OD.
We find that our U-Nets trained for the RGB, gray, and green channel images can segment thick vessels quite well, whereas they are in general not good at segmenting thin blood vessels. As shown in Figure 7b,e, Figure 7c,f, and Figure 7h,k, discontinuity occurs in the thin vessels segmented by our U-Nets.
The performance of U-Nets also depends to some extent on how accurately CRBVs are marked in the reference masks. Among the five data sets, the reference masks of the DRIVE data set are very accurate for both thick and thin vessels. That could be one reason we get the best performance for this data set. On the contrary, we get the worst performance for the UoA-DR data set because of the inaccurate reference masks (see Appendix F for more details). If the reference masks have inaccurate information, then the estimated performance of the U-Nets will be lower than what it should be. Two things can happen when reference masks are inaccurate. The first thing is that inaccurate reference masks in the training set may deteriorate the performance of the U-Net. However, if most reference masks are accurate enough, the deterioration may be small. The second thing is that inaccurate reference masks in the test set can generate inaccurate values for the estimated metrics. These two cases happen for the UoA-DR data set. Our U-Nets can tackle the negative effect of inaccurate reference masks in the training set of the UoA-DR. Our U-Nets learn to predict the majority of the thick vessels and some parts of thin vessels quite accurately for the UoA-DR data set. However, because of the inaccurate reference masks of the test data, the precision and recall are extremely low for all channels for the UoA-DR data set.
We also notice that, quite often, the red channel is affected by overexposure, whereas the blue channel is affected by underexposure (see Table 15). Both kinds of inappropriate exposure wash out retinal information, which results in low entropy. Therefore, the generated masks for segmenting CRBVs do not have vessel lines in the inappropriately exposed parts of a fundus photograph (see the overexposed part of the red channel in Figure 7j and the underexposed part of the blue channel in Figure 7l). Note that histograms of inappropriately exposed images are highly skewed and have low entropy (as shown in Figure 8).
It is not surprising that using all three color channels (i.e., RGB images) as input to the U-Net performs best, since the convolutional layers of the U-Net are flexible enough to use the information from the three color channels appropriately. By using multiple filters in each convolutional layer, U-Nets can extract multiple features from the retinal images, many of which are appropriate for segmentation. As discussed in Section 3, previous works based on non-neural network models usually used one color channel, most likely because these models could not benefit from the information contained in all three channels. The fact that individual color channels perform well in certain situations raises two questions regarding camera design:
  • Would it be worth it to develop cameras with only one color channel rather than red, green, and blue, possibly customized for retina analysis?
  • Could a more detailed representation of the spectrum than RGB improve the automatic analysis of retinas? The RGB representation captures the information from the spectrum that the human eye can recognize. Perhaps this is not all information from the spectrum that an automatic system could have used.
To fully answer those questions, many hardware developments would be needed. However, an initial analysis to address the first question could be to tune the weights used to produce the grayscale image from the RGB images.

6. Conclusions

We conduct an extensive survey to investigate which color channel in color fundus photographs is most commonly preferred for automatically diagnosing retinal diseases. We find that green channel images dominate previous non-neural network-based works, while all three color channels together, i.e., RGB images, dominate neural network-based works. In non-neural network-based works, researchers almost ignored the red and blue channels, reasoning that these channels are prone to poor contrast, noise, and inappropriate exposure. However, no work provided a conclusive experimental comparison of the performance of the different color channels. To fill that gap, we conduct systematic experiments. We use a well-known U-shaped deep neural network (U-Net) to investigate which color channel is best for segmenting retinal atrophy and three retinal landmarks (i.e., central retinal blood vessels, optic disc, and macula). In our U-Net-based segmentation approach, we see that retinal landmarks and retinal atrophy can be segmented more accurately when RGB images are used than when a single channel is used. We also notice that, as a single channel, the red channel is bad for segmenting the central retinal blood vessels but better than the other single channels for optic disc segmentation. Although the blue channel is a bad choice for segmenting the central retinal blood vessels, it is reasonably good for segmenting the macula and very good for segmenting retinal atrophy. In all cases, RGB images perform best, which indicates that the red and blue channels can provide supplementary information to the green channel. Therefore, we conclude that all color channels are important in color fundus photographs.

Author Contributions

Conceptualization, S.B.; methodology, S.B.; formal analysis, S.B. and J.R.; investigation, M.I.A.K., M.T.H. and A.B.; resources, M.I.A.K., M.T.H. and A.B.; data curation, M.I.A.K., M.T.H. and A.B.; writing—original draft preparation, S.B.; writing—review and editing, S.B. and J.R.; funding acquisition, S.B., M.I.A.K. and T.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by “Faculty of Engineering, University of Rajshahi, Bangladesh, grant number 71/5/52/R.U./Engg.-08/2020-2021, and 70/5/52/R.U./Engg.-08/2020-2021”.

Institutional Review Board Statement

Not applicable. We only used publicly available data sets prepared by other organizations; these data sets are standard for the automatic diagnosis of retinal diseases.

Informed Consent Statement

Not applicable. We only used publicly available data sets prepared by other organizations; these data sets are standard for the automatic diagnosis of retinal diseases.

Data Availability Statement

All data sets used in this work are publicly available as described in Section 4.2.

Conflicts of Interest

We declare no conflict of interest. Author Angkan Biswas was employed by the company CAPM Company Limited. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A. Generating Background Mask

The background of a fundus photograph can be noisy, i.e., the background pixels can have non-zero values. Noisy background pixels are, in general, invisible to the naked eye because of their low intensity, although exceptions occur; for example, images in the STARE data set have visible background noise. Moreover, non-retinal information, such as the image capture date and time and the patient’s name, can sometimes be present with high intensities in the background (e.g., images in the UoA-DR data set). This kind of information is also considered noise when it is not useful for any decision. Whether the background noise is visible or invisible to human eyes, and whether the intensity of the background pixels is high or low, global binary thresholding with threshold θ = 0 reveals the presence of noisy background pixels in almost all data sets, as shown in Figure A1.
Figure A1. Noisy pixels in the background of the data sets we experiment on are highlighted by binary thresholding with a threshold equal to 0.
Using a background mask, we can get rid of background noise. A simple method for creating a background mask would be to consider all pixels with an intensity lower than or equal to a threshold, θ, to be part of the background and the other pixels to be part of the foreground. When the image is noiseless, setting θ = 0 (i.e., keeping zero-valued pixels unchanged while setting pixels with non-zero intensities to 2^n − 1) is good enough to generate the background mask. However, for a noisy background, if we set the threshold, θ, to a very small value (i.e., a value lower than the intensities of the noise), then the background mask will treat parts of the background as foreground, as shown in Figure A2c–i. On the other hand, if we set θ to a very high value (i.e., a value higher than the intensities of some foreground pixels), then some parts of the foreground may get lost in the background mask, as shown in Figure A2k,l. Of course, in reality, some background pixels may have a higher intensity than some foreground pixels, so that no threshold would accurately separate the foreground from the background. Further, the optimal threshold may depend on the data set.
Figure A2. Effect of the threshold value, θ, on the binary background mask when the background is noisy. (a) RGB image, (b) red channel, and (c–l) binary background masks for different θ: (c) θ = 0, (d) θ = 1, (e) θ = 5, (f) θ = 10, (g) θ = 15, (h) θ = 20, (i) θ = 25, (j) θ = 35, (k) θ = 55, and (l) θ = 65. Depending on the intensity of the background noise, we need to decide the value of θ. If we assign too small a value to θ, then some noisy background pixels may be considered as foreground (see (c–i)); if we set too high a value, then some area of the foreground will be considered as background (see (k,l)). Source of fundus photograph: STARE data set, image file: im0240.ppm.
As a more robust procedure for generating background masks for removing background noise, we apply the following steps:
  • Step-1: Generate a preliminary background mask, B_1, by global binary thresholding, i.e., by setting the pixel intensity, p, of a single-channel image, I, to 0 or 2^n − 1 in the following way:
    p = 0 if p ≤ θ, and p = 2^n − 1 if p > θ,
    where n is the number of bits used for the intensity of p (see Figure A3c). For an 8-bit image, 2^n − 1 = 255. Note that we use the red channel image, I_R. By trial and error, we finally set θ to 15, 40, 35, 35, 5, 35, and 5 to get good preliminary background masks for the CHASE_DB1, DRIVE, HRF, IDRiD, PALM, STARE, and UoA-DR data sets, respectively.
  • Step-2: Determine the boundary contour of the retina by finding the contour with the maximum area. Note that a contour is a closed curve joining all the continuous points having the same color or intensity (see Figure A3d).
  • Step-3: Set the pixels inside the boundary contour to 2^n − 1 and the pixels outside it to zero in order to generate the final background mask, B_2 (see Figure A3e). A code sketch of these three steps follows the list.
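A minimal OpenCV sketch of Steps 1–3 could look as follows; the function name is ours, and the input is assumed to be an 8-bit red channel image:

```python
import cv2
import numpy as np

def generate_background_mask(red_channel, theta):
    """Generate the final background mask B2 from an 8-bit red channel image."""
    # Step-1: preliminary mask B1 by global binary thresholding with threshold theta.
    _, b1 = cv2.threshold(red_channel, theta, 255, cv2.THRESH_BINARY)

    # Step-2: the boundary contour of the retina is the contour with the maximum area.
    contours, _ = cv2.findContours(b1, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boundary = max(contours, key=cv2.contourArea)

    # Step-3: fill the boundary contour to obtain the final background mask B2.
    b2 = np.zeros_like(red_channel)
    cv2.drawContours(b2, [boundary], -1, color=255, thickness=cv2.FILLED)
    return b2
```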
Figure A3. Steps of cropping unnecessary background and removing background noise of a colored fundus photograph. (a) RGB fundus photograph [Source of fundus photograph: STARE data set and image file: im0291.ppm.], (b) Red channel of the fundus photograph, (c) Background mask generated by global thresholding using θ = 35 , (d) Boundary contour, (e) Background mask generated by filling boundary contour, (f) Background mask with minimum bounding rectangle (MBR), (g) Cropped RGB fundus photograph generated by using the width, height, and position of the MBR, (h) Cropped background mask generated by using the width, height and position of the MBR, and (i) Cropped RGB fundus photograph after removing background noise by using cropped background mask.
Figure A4 shows seven examples of generated binary background masks and Figure A5 illustrates the benefit of using B 2 instead of B 1 for masking out the high-intensity background noise caused by text information in an image.
Figure A4. Generated foreground masks of seven fundus photographs of seven data sets.
Figure A5. Effect of the background masks B_1 and B_2 on an RGB fundus photograph. (a) Background mask, B_1, generated by global binary thresholding, (b) retinal image masked by B_1, (c) background mask, B_2, generated by filling the boundary contour, and (d) retinal image masked by B_2. When the intensity of the background noise is very high, global thresholding cannot generate B_1 with zero values for all background pixels; therefore, the masked image is not noise free. On the other hand, B_2 is free of background noise, and so is the masked image. Source of fundus photograph: UoA-DR data set, image file: 195.jpg.
Using the provided masks of the DRIVE and HRF data sets, we estimate the performance of our approach of generating binary background masks. As shown in Table A1, our approach is highly successful.
Table A1. Performance of our approach of generating background masks.
Data Set | Precision | Recall | AUC | MIoU
DRIVE | 0.997 | 0.997 | 0.996 | 0.995
HRF | 1.000 | 1.000 | 1.000 | 1.000

Appendix B. Cropping Out Background

The background of an image, I_x, does not contain any information about the retina that could be helpful for automatic retina-related tasks. Note that I_x can be an RGB image, a single-channel image, or a binary mask for segmenting the OD, macula, CRBVs, or retinal atrophy. As a robust procedure for cropping the unnecessary background and removing background noise from I_x, we apply the following steps (a code sketch follows the list):
  • Step-1: Generate the background mask, B_2, using the steps described in Appendix A.
  • Step-2: Determine the minimum bounding rectangle (MBR) which minimally covers the background mask, B_2 (see Figure A3f).
  • Step-3: Crop I_x and B_2 to the MBR (see Figure A3g,h).
  • Step-4: Remove background noise from the cropped I_x by masking it with the cropped B_2 (see Figure A3i).
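A minimal OpenCV sketch of Steps 2–4 might look as follows (the function and variable names are ours):

```python
import cv2

def crop_and_clean(i_x, b2):
    """Crop I_x to the minimum bounding rectangle (MBR) of B2 and mask out
    the background noise; i_x may be an RGB image, a single channel, or a mask."""
    # Step-2: MBR of the foreground of B2, taken from its largest contour.
    contours, _ = cv2.findContours(b2, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))

    # Step-3: crop both the image and the background mask to the MBR.
    i_x_cropped = i_x[y:y + h, x:x + w]
    b2_cropped = b2[y:y + h, x:x + w]

    # Step-4: remove background noise by masking with the cropped B2.
    cleaned = cv2.bitwise_and(i_x_cropped, i_x_cropped, mask=b2_cropped)
    return cleaned, b2_cropped
```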

Appendix C. Turning Provided Reference Masks into Binary Masks

Although reference masks used for segmentation need to be binary masks (i.e., having only two pixel intensities, e.g., zero for the background pixels and 255 for the foreground pixels of an 8-bit image), we notice that two data sets (i.e., HRF and UoA-DR) do not fulfill this requirement, as shown in Table A2. Three out of the 45 provided masks of the HRF data set and all 200 provided masks of the UoA-DR data set have pixels with multiple intensities. There are two cases: in the first case, noisy background pixels that are supposed to be 0 have non-zero intensities; in the second case, foreground pixels that are supposed to be 255 have intensities other than 255. We also notice that although the provided masks of the IDRiD data set are binary masks, their maximum intensity is 29 instead of 255.
We turn all provided masks into binary masks with pixel intensities of 0 and 255 by global binary thresholding with threshold θ = 127. Before binarization, we remove noisy pixels outside the field-of-view area by using the estimated background mask, B_2 (see Figure A6b for an example). As shown in Figure A6c, there can still be noisy pixels inside the FOV area; for those, we apply binary thresholding and generate the final binary mask, as shown in Figure A6d.
Table A2. Distribution of provided binary and non-binary masks for segmenting CRBVs, optic discs, macula and retinal atrophy. n: total number of provided masks, m: number of provided binary masks.
Segmentation Type | CHASE_DB1 (n/m) | DRIVE (n/m) | HRF (n/m) | IDRiD (n/m) | PALM (n/m) | STARE (n/m) | UoA-DR (n/m)
CRBVs | 28/0 | 40/0 | 45/0 | 0/0 | 0/0 | 40/0 | 200/200
Optic Disc | 0/0 | 0/0 | 0/0 | 81/0 | 400/0 | 0/0 | 200/200
Macula | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0
Retinal Atrophy | 0/0 | 0/0 | 0/0 | 0/0 | 311/0 | 0/0 | 0/0
Figure A6. Effect of binarization on provided mask for segmenting CRBVs. (a) Provided mask, (b) Provided mask after removing the background noise outside the field-of-view (FOV) area but having invisible background noise inside the FOV area, (c) Binary mask after highlighting invisible background noise inside the FOV area by global binary thresholding using θ = 0 , and (d) Binary mask generated by global binary thresholding using θ = 0 for segmenting CRBVs.

Appendix D. Generating Binary Masks for Segmenting Macula

Even though three data sets (i.e., IDRiD, PALM, and UoA-DR) provide reference masks for segmenting the optic disc (OD), five data sets (i.e., CHASE_DB1, DRIVE, HRF, STARE, and UoA-DR) for CRBVs, and one data set (i.e., PALM) for retinal atrophy, none of the seven data sets provides reference masks for segmenting the macula. However, two data sets (PALM and UoA-DR) provide the center of the macula. The average size of the macula in humans is around 5.5 mm, the average clinical size of the macula is 1.5 mm, and the average size of the OD is 1.825 mm (1.88 mm vertically and 1.77 mm horizontally). We assume that the size of the macula is equal to that of the OD, and using the provided center values we generate binary masks for segmenting the macula by the following steps (a code sketch follows the list):
  • Step-1: Get the corresponding reference mask, R_OD, of a color fundus photograph for segmenting the OD.
  • Step-2: Generate the background mask, B_2, by following the steps described in Appendix A.
  • Step-3: Remove the background noise outside the foreground of R_OD by masking it with B_2.
  • Step-4: Turn R_OD into a binary mask, R_OD_Binary, by global thresholding.
  • Step-5: Find the boundary contour of the foreground of R_OD_Binary.
  • Step-6: Determine the radius, r, of the minimum enclosing circle of R_OD_Binary.
  • Step-7: Draw a circle with radius r at the provided center of the macula.
  • Step-8: Set the pixels inside the circle to 2^n − 1 and those outside the circle to 0 in order to generate the final reference mask, R_Macula_Binary.
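A minimal OpenCV sketch of Steps 5–8 could look as follows; the function name and the (x, y) format of the provided macula center are assumptions:

```python
import cv2
import numpy as np

def generate_macula_mask(r_od_binary, macula_center):
    """Generate R_Macula_Binary from the binarized OD mask and the macula center,
    assuming the macula is as large as the OD."""
    # Steps 5-6: boundary contour of the OD and radius of its minimum enclosing circle.
    contours, _ = cv2.findContours(r_od_binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    od_contour = max(contours, key=cv2.contourArea)
    _, radius = cv2.minEnclosingCircle(od_contour)

    # Steps 7-8: draw a filled circle of the same radius at the provided macula center.
    mask = np.zeros_like(r_od_binary)
    center = (int(round(macula_center[0])), int(round(macula_center[1])))
    cv2.circle(mask, center, int(round(radius)), color=255, thickness=cv2.FILLED)
    return mask
```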

Appendix E. Architecture of U-Net

Our color-specific U-Nets have the architecture shown in Table A3. Similar to the original U-Net proposed in [17], our U-Nets consist of two parts: a contracting side and an expansive side. Neither side has any fully connected layers; instead, both sides consist mainly of convolutional layers. Unlike the original U-Net, we use convolutional layers with stride two instead of max pooling layers for down-sampling on the contracting side. Instead of using unpadded convolutions, we use same-padding convolutions on both the contracting and expansive sides. Note that with same padding, the output size is the same as the input size; therefore, we do not need the cropping on the expansive side which was needed in the original work due to the loss of border pixels in every convolution. We use the Exponential Linear Unit (ELU) instead of the Rectified Linear Unit (ReLU) as the activation function in each convolutional layer except the output layer. In the output layer, we use the sigmoid function as the activation function; an alternative would have been the softmax function with two outputs. On both the contracting and expansive sides, the two padded convolutional layers are separated by a drop-out layer, which we use in order to avoid over-fitting. There are 23 convolutional layers in the original U-Net, whereas in our U-Nets there are 29 convolutional layers. In the original U-Net, there are four down-sampling blocks on the contracting side and four up-sampling blocks on the expansive side, whereas our U-Nets have five down-sampling and five up-sampling blocks. In total, each of our U-Nets has 5,939,521 trainable parameters.
Table A3. Architecture of our U-Net. #Params: Number of parameters.
Layer | Output Shape | # Params
Input | (256, 256, 1) | 0
Convolution (strides = (1, 1), filters = 16, kernel = (3, 3), activation = ELU) | (256, 256, 16) | 160
Dropout (0.1) | (256, 256, 16) | 0
Convolution (strides = (1, 1), filters = 16, kernel = (3, 3), activation = ELU, name = C1) | (256, 256, 16) | 2320
Convolution (strides = (2, 2), filters = 16, kernel = (3, 3), activation = ELU) | (128, 128, 16) | 2320
Convolution (strides = (1, 1), filters = 32, kernel = (3, 3), activation = ELU) | (128, 128, 32) | 4640
Dropout (0.1) | (128, 128, 32) | 0
Convolution (strides = (1, 1), filters = 32, kernel = (3, 3), activation = ELU, name = C2) | (128, 128, 32) | 9248
Convolution (strides = (2, 2), filters = 32, kernel = (3, 3), activation = ELU) | (64, 64, 32) | 9248
Convolution (strides = (1, 1), filters = 64, kernel = (3, 3), activation = ELU) | (64, 64, 64) | 18,496
Dropout (0.2) | (64, 64, 64) | 0
Convolution (strides = (1, 1), filters = 64, kernel = (3, 3), activation = ELU, name = C3) | (64, 64, 64) | 36,928
Convolution (strides = (2, 2), filters = 64, kernel = (3, 3), activation = ELU) | (32, 32, 64) | 36,928
Convolution (strides = (1, 1), filters = 128, kernel = (3, 3), activation = ELU) | (32, 32, 128) | 73,856
Dropout (0.2) | (32, 32, 128) | 0
Convolution (strides = (1, 1), filters = 128, kernel = (3, 3), activation = ELU, name = C4) | (32, 32, 128) | 147,584
Convolution (strides = (2, 2), filters = 128, kernel = (3, 3), activation = ELU) | (16, 16, 128) | 147,584
Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU) | (16, 16, 256) | 295,168
Dropout (0.3) | (16, 16, 256) | 0
Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU, name = C5) | (16, 16, 256) | 590,080
Convolution (strides = (2, 2), filters = 256, kernel = (3, 3), activation = ELU) | (8, 8, 256) | 590,080
Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU) | (8, 8, 256) | 590,080
Dropout (0.3) | (8, 8, 256) | 0
Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU) | (8, 8, 256) | 590,080
Transposed Convolution (strides = (2, 2), filters = 256, kernel = (2, 2), activation = ELU, name = U1) | (16, 16, 256) | 262,400
Concatenation (C5, U1) | (16, 16, 512) | 0
Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU) | (16, 16, 256) | 1,179,904
Dropout (0.3) | (16, 16, 256) | 0
Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU) | (16, 16, 256) | 590,080
Transposed Convolution (strides = (2, 2), filters = 128, kernel = (2, 2), activation = ELU, name = U2) | (32, 32, 128) | 131,200
Concatenation (C4, U2) | (32, 32, 256) | 0
Convolution (strides = (1, 1), filters = 128, kernel = (3, 3), activation = ELU) | (32, 32, 128) | 295,040
Dropout (0.2) | (32, 32, 128) | 0
Convolution (strides = (1, 1), filters = 128, kernel = (3, 3), activation = ELU) | (32, 32, 128) | 147,584
Transposed Convolution (strides = (2, 2), filters = 64, kernel = (2, 2), activation = ELU, name = U3) | (64, 64, 64) | 32,832
Concatenation (C3, U3) | (64, 64, 128) | 0
Convolution (strides = (1, 1), filters = 64, kernel = (3, 3), activation = ELU) | (64, 64, 64) | 73,792
Dropout (0.2) | (64, 64, 64) | 0
Convolution (strides = (1, 1), filters = 64, kernel = (3, 3), activation = ELU) | (64, 64, 64) | 36,928
Transposed Convolution (strides = (2, 2), filters = 32, kernel = (2, 2), activation = ELU, name = U4) | (128, 128, 32) | 8224
Concatenation (C2, U4) | (128, 128, 64) | 0
Convolution (strides = (1, 1), filters = 32, kernel = (3, 3), activation = ELU) | (128, 128, 32) | 18,464
Dropout (0.1) | (128, 128, 32) | 0
Convolution (strides = (1, 1), filters = 32, kernel = (3, 3), activation = ELU) | (128, 128, 32) | 9248
Transposed Convolution (strides = (2, 2), filters = 16, kernel = (2, 2), activation = ELU, name = U5) | (256, 256, 16) | 2064
Concatenation (C1, U5) | (256, 256, 32) | 0
Convolution (strides = (1, 1), filters = 16, kernel = (3, 3), activation = ELU) | (256, 256, 16) | 4624
Dropout (0.1) | (256, 256, 16) | 0
Convolution (strides = (1, 1), filters = 16, kernel = (3, 3), activation = ELU) | (256, 256, 16) | 2320
Convolution (strides = (1, 1), filters = 1, kernel = (1, 1), activation = Sigmoid, name = Output) | (256, 256, 1) | 17
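As a rough illustration of the block structure described above (not a line-for-line reproduction of Table A3), a Keras sketch of one down-sampling block and one up-sampling block could look as follows; the remaining blocks would follow the filter counts and dropout rates in the table:

```python
import tensorflow as tf
from tensorflow.keras import layers

def down_block(x, filters, dropout):
    """Two same-padded ELU convolutions separated by dropout, followed by a
    stride-2 convolution for down-sampling (used instead of max pooling)."""
    x = layers.Conv2D(filters, 3, padding="same", activation="elu")(x)
    x = layers.Dropout(dropout)(x)
    skip = layers.Conv2D(filters, 3, padding="same", activation="elu")(x)
    down = layers.Conv2D(filters, 3, strides=2, padding="same", activation="elu")(skip)
    return skip, down

def up_block(x, skip, filters, dropout):
    """Transposed convolution for up-sampling, concatenation with the skip
    connection, and two same-padded ELU convolutions separated by dropout."""
    x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same", activation="elu")(x)
    x = layers.Concatenate()([skip, x])
    x = layers.Conv2D(filters, 3, padding="same", activation="elu")(x)
    x = layers.Dropout(dropout)(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="elu")(x)
    return x

inputs = layers.Input((256, 256, 1))
s1, d1 = down_block(inputs, 16, 0.1)   # deeper blocks use 32, 64, 128, and 256 filters
# ... the remaining four down-sampling blocks, the bottleneck, and four
# up-sampling blocks are omitted here for brevity ...
u5 = up_block(d1, s1, 16, 0.1)
outputs = layers.Conv2D(1, 1, activation="sigmoid")(u5)
model = tf.keras.Model(inputs, outputs)
```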

Appendix F. Inaccurate Masks in UoA_DR for Segmenting CRBVs

Among the five data sets we experiment on, the UoA-DR data set has the largest number of masks for segmenting CRBVs. Even though it could be a good data set for training and testing U-Nets, the performance of every color-specific U-Net on the UoA-DR test set is the worst among all data sets, regardless of whether the U-Nets are trained by combining the data of the five data sets or by using data only from the UoA-DR data set. The reason is that all the reference masks provided by the UoA-DR data set for segmenting CRBVs are inaccurate: they usually do not match the real blood vessels well; in many places, vessels are marked in the wrong positions; thick vessels are marked with thinner lines and thin vessels with thicker lines; and in some reference masks, even clearly visible thin vessels are not marked, as shown in Figure A7.
Figure A7. A fundus photograph of the UoA-DR data set overlaid by the reference binary mask. Rectangles point out some inaccurately marked places with the green border. (a) Mismatched thickness between the real blood vessels (dark red colored) and marked line (white-colored), (b) Inaccurate marked line (white-colored) gone to the Foveal Avascular Zone (FAZ), which is a region in the center of the macula that is entirely lacking in CRBVs, (c) Unmarked clearly visible CRBV, (d) Inaccurate marked line (white-colored) whose position is different from the original vessel (dark red colored), and (e) Inaccurate marked line which goes to the background. Source of image: UoA-DR data set and image file: 163.jpg.

Appendix G. Performance of U-Nets Trained and Tested on Individual Data Set

Since different fundus cameras capture the retinal images of different data sets in different experimental setups, different data sets may have different levels of difficulty. We, therefore, do experiments on the data sets individually, i.e., training and testing on the same data set, for segmenting CRBVs. Table A4 and Table A5 show the CRBV segmentation results for five data sets: CHASE_DB1, DRIVE, HRF, STARE, and UoA_DR. The first and second blocks in these tables show the results of U-Nets for which 25% of the data is used for training, whereas the third block shows the results of U-Nets for which 55% of the data is used for training. In the first block, 55% of the data is used for testing, whereas in the second and third blocks, only 25% of the data is used for testing. In all three cases, 20% of the data is used as the validation set. It should be noted that the individual test sets prepared by taking 25% of the data are fairly small, so these results may not be very reliable. However, the results in the first and second blocks are fairly similar, which indicates that the results are reasonably stable. Overall, we see a substantial improvement in the third block compared to the second, suggesting that the U-Nets benefit from more training data. We also notice that both in Table 10 (same training data for all sets) and Table A4 (set-specific training data), there is a large difference in the results for the different data sets, which indicates that different data sets have different levels of difficulty.
Table A4. Effect of different amounts of training data on the performance (mean ± standard deviation) of U-Nets trained using different color channels for segmenting CRBVs. Note that the CLAHE is applied in the pre-processing stage.
CHASE_DB1
Data Split: 25% Training, 20% Validation, 55% Test
Color | Precision | Recall | AUC | MIoU
RGB | 0.569 ± 0.203 | 0.448 ± 0.041 | 0.729 ± 0.059 | 0.537 ± 0.046
GRAY | 0.615 ± 0.081 | 0.412 ± 0.041 | 0.735 ± 0.024 | 0.503 ± 0.051
RED | 0.230 ± 0.030 | 0.332 ± 0.053 | 0.613 ± 0.010 | 0.474 ± 0.006
GREEN | 0.782 ± 0.026 | 0.526 ± 0.020 | 0.792 ± 0.007 | 0.606 ± 0.045
BLUE | 0.451 ± 0.114 | 0.370 ± 0.018 | 0.683 ± 0.032 | 0.485 ± 0.008

Data Split: 25% Training, 20% Validation, 25% Test
Color | Precision | Recall | AUC | MIoU
RGB | 0.571 ± 0.207 | 0.441 ± 0.045 | 0.724 ± 0.062 | 0.538 ± 0.048
GRAY | 0.624 ± 0.080 | 0.407 ± 0.036 | 0.731 ± 0.023 | 0.502 ± 0.050
RED | 0.244 ± 0.037 | 0.342 ± 0.046 | 0.619 ± 0.009 | 0.474 ± 0.007
GREEN | 0.791 ± 0.026 | 0.515 ± 0.022 | 0.787 ± 0.008 | 0.602 ± 0.044
BLUE | 0.449 ± 0.116 | 0.362 ± 0.017 | 0.677 ± 0.033 | 0.484 ± 0.008

Data Split: 55% Training, 20% Validation, 25% Test
Color | Precision | Recall | AUC | MIoU
RGB | 0.816 ± 0.012 | 0.541 ± 0.024 | 0.784 ± 0.012 | 0.684 ± 0.018
GRAY | 0.803 ± 0.002 | 0.515 ± 0.026 | 0.775 ± 0.010 | 0.671 ± 0.016
RED | 0.389 ± 0.039 | 0.363 ± 0.027 | 0.680 ± 0.021 | 0.504 ± 0.028
GREEN | 0.838 ± 0.005 | 0.583 ± 0.017 | 0.806 ± 0.009 | 0.687 ± 0.038
BLUE | 0.648 ± 0.019 | 0.383 ± 0.012 | 0.698 ± 0.006 | 0.601 ± 0.010
DRIVE
Data SplitColorPrecisionRecallAUCMIoU
25% Training,
20% Validation,
55% Test
RGB0.796 ± 0.0360.443 ± 0.0650.749 ± 0.0280.622 ± 0.072
GRAY0.835 ± 0.0160.419 ± 0.0220.739 ± 0.0090.590 ± 0.066
RED0.362 ± 0.0980.342 ± 0.0720.628 ± 0.0150.476 ± 0.007
GREEN0.846 ± 0.0100.463 ± 0.0250.758 ± 0.0090.671 ± 0.027
BLUE0.537 ± 0.0780.297 ± 0.0280.660 ± 0.0220.512 ± 0.026
25% Training,
20% Validation,
25% Test
RGB0.839 ± 0.0350.442 ± 0.0680.749 ± 0.0300.626 ± 0.073
GRAY0.874 ± 0.0180.413 ± 0.0230.737 ± 0.0090.592 ± 0.068
RED0.400 ± 0.1080.352 ± 0.0730.637 ± 0.0140.476 ± 0.009
GREEN0.896 ± 0.0090.462 ± 0.0250.760 ± 0.0090.676 ± 0.028
BLUE0.575 ± 0.0800.300 ± 0.0240.663 ± 0.0200.512 ± 0.027
55% Training,
20% Validation,
25% Test
RGB0.896 ± 0.0050.539 ± 0.0100.787 ± 0.0060.732 ± 0.014
GRAY0.895 ± 0.0040.528 ± 0.0120.781 ± 0.0050.731 ± 0.006
RED0.660 ± 0.0850.316 ± 0.0370.674 ± 0.0170.520 ± 0.038
GREEN0.904 ± 0.0030.533 ± 0.0080.786 ± 0.0030.718 ± 0.024
BLUE0.783 ± 0.0420.386 ± 0.0440.705 ± 0.0210.645 ± 0.037
HRF
Data SplitColorPrecisionRecallAUCMIoU
25% Training,
20% Validation,
55% Test
RGB0.792 ± 0.0060.537 ± 0.0210.799 ± 0.0130.597 ± 0.024
GRAY0.776 ± 0.0040.497 ± 0.0170.781 ± 0.0110.579 ± 0.025
RED0.204 ± 0.0240.258 ± 0.0170.591 ± 0.0140.467 ± 0.002
GREEN0.821 ± 0.0130.578 ± 0.0120.824 ± 0.0060.624 ± 0.037
BLUE0.155 ± 0.0020.361 ± 0.0100.580 ± 0.0010.482 ± 0.008
25% Training,
20% Validation,
25% Test
RGB0.759 ± 0.0060.535 ± 0.0230.797 ± 0.0140.593 ± 0.023
GRAY0.741 ± 0.0050.503 ± 0.0170.782 ± 0.0110.576 ± 0.025
RED0.197 ± 0.0210.245 ± 0.0170.586 ± 0.0130.467 ± 0.002
GREEN0.794 ± 0.0160.581 ± 0.0130.824 ± 0.0060.619 ± 0.036
BLUE0.149 ± 0.0040.368 ± 0.0130.578 ± 0.0020.480 ± 0.007
55% Training,
20% Validation,
25% Test
RGB0.781 ± 0.0080.608 ± 0.0050.824 ± 0.0040.693 ± 0.013
GRAY0.768 ± 0.0100.573 ± 0.0170.807 ± 0.0090.677 ± 0.022
RED0.512 ± 0.0090.271 ± 0.0210.641 ± 0.0130.536 ± 0.011
GREEN0.788 ± 0.0060.647 ± 0.0090.846 ± 0.0030.674 ± 0.060
BLUE0.274 ± 0.1100.341 ± 0.0470.620 ± 0.0320.500 ± 0.019
STARE
Data SplitColorPrecisionRecallAUCMIoU
25% Training,
20% Validation,
55% Test
RGB0.556 ± 0.2040.300 ± 0.0730.659 ± 0.0730.478 ± 0.008
GRAY0.619 ± 0.0500.283 ± 0.0580.680 ± 0.0330.478 ± 0.017
RED0.148 ± 0.0030.222 ± 0.0330.516 ± 0.0090.468 ± 0.000
GREEN0.600 ± 0.2420.351 ± 0.0300.680 ± 0.0820.483 ± 0.019
BLUE0.167 ± 0.0360.145 ± 0.0340.518 ± 0.0210.469 ± 0.001
25% Training,
20% Validation,
25% Test
RGB0.531 ± 0.1950.334 ± 0.0820.672 ± 0.0820.482 ± 0.009
GRAY0.607 ± 0.0550.314 ± 0.0660.691 ± 0.0390.483 ± 0.020
RED0.143 ± 0.0030.231 ± 0.0380.512 ± 0.0110.471 ± 0.000
GREEN0.587 ± 0.2430.376 ± 0.0480.688 ± 0.0920.488 ± 0.024
BLUE0.164 ± 0.0320.142 ± 0.0390.517 ± 0.0200.472 ± 0.001
55% Training,
20% Validation,
25% Test
RGB0.756 ± 0.0140.448 ± 0.0310.749 ± 0.0150.610 ± 0.038
GRAY0.748 ± 0.0100.504 ± 0.0260.770 ± 0.0100.656 ± 0.017
RED0.181 ± 0.0200.293 ± 0.0690.558 ± 0.0080.474 ± 0.006
GREEN0.749 ± 0.0130.550 ± 0.0250.795 ± 0.0120.659 ± 0.038
BLUE0.163 ± 0.0070.324 ± 0.0590.547 ± 0.0060.469 ± 0.004
UoA_DR
Data SplitColorPrecisionRecallAUCMIoU
25% Training,
20% Validation,
55% Test
RGB0.320 ± 0.0110.398 ± 0.0080.699 ± 0.0060.541 ± 0.015
GRAY0.315 ± 0.0110.353 ± 0.0160.675 ± 0.0070.526 ± 0.017
RED0.203 ± 0.0130.260 ± 0.0160.614 ± 0.0060.516 ± 0.005
GREEN0.332 ± 0.0070.415 ± 0.0180.705 ± 0.0090.534 ± 0.014
BLUE0.237 ± 0.0120.260 ± 0.0080.620 ± 0.0070.526 ± 0.006
25% Training,
20% Validation,
25% Test
RGB0.313 ± 0.0110.395 ± 0.0080.697 ± 0.0050.540 ± 0.015
GRAY0.306 ± 0.0110.350 ± 0.0160.673 ± 0.0080.524 ± 0.017
RED0.201 ± 0.0130.259 ± 0.0150.614 ± 0.0060.516 ± 0.005
GREEN0.326 ± 0.0070.412 ± 0.0170.704 ± 0.0090.532 ± 0.014
BLUE0.232 ± 0.0110.257 ± 0.0070.618 ± 0.0060.524 ± 0.005
55% Training,
20% Validation,
25% Test
RGB0.333 ± 0.0050.445 ± 0.0120.717 ± 0.0040.557 ± 0.007
GRAY0.330 ± 0.0030.413 ± 0.0140.700 ± 0.0060.559 ± 0.004
RED0.289 ± 0.0110.299 ± 0.0070.641 ± 0.0040.543 ± 0.003
GREEN0.335 ± 0.0020.470 ± 0.0100.728 ± 0.0040.564 ± 0.004
BLUE0.281 ± 0.0120.280 ± 0.0130.630 ± 0.0060.540 ± 0.004
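For reference, the per-image precision, recall, AUC, and MIoU values summarized in Table A4 (and in Table A5 below) can be computed from a predicted probability map and a reference binary mask roughly as in the sketch below. The 0.5 threshold and the averaging of the vessel and background IoU values are our assumptions about a typical setup, not details taken from the original experiments.

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def segmentation_metrics(prob_map, reference_mask, threshold=0.5):
    """Compute precision, recall, AUC, and mean IoU for one image.

    prob_map: 2-D array of vessel probabilities in [0, 1].
    reference_mask: 2-D binary array (1 = vessel pixel).
    """
    pred = (prob_map >= threshold).astype(np.uint8)
    ref = reference_mask.astype(np.uint8)

    tp = np.sum((pred == 1) & (ref == 1))
    fp = np.sum((pred == 1) & (ref == 0))
    fn = np.sum((pred == 0) & (ref == 1))
    tn = np.sum((pred == 0) & (ref == 0))

    precision = tp / (tp + fp + 1e-8)
    recall = tp / (tp + fn + 1e-8)
    auc = roc_auc_score(ref.ravel(), prob_map.ravel())

    iou_vessel = tp / (tp + fp + fn + 1e-8)       # foreground (vessel) class
    iou_background = tn / (tn + fp + fn + 1e-8)   # background class
    mean_iou = (iou_vessel + iou_background) / 2.0

    return precision, recall, auc, mean_iou
```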

Appendix H. Effect of CLAHE

Different data sets have different image qualities, which leads to different levels of difficulty. One reason for poor-quality images is inappropriate contrast. Histogram equalization techniques such as Contrast Limited Adaptive Histogram Equalization (CLAHE) are commonly applied to enhance the local contrast of fundus photographs. In this work, we also apply CLAHE in the pre-processing stage of the experiments mentioned above. To investigate the effect of CLAHE on the different data sets, we repeat the experiments on the fundus photographs without applying CLAHE. Table A5 shows the results when CLAHE is not applied. These results are obtained using the same training/validation/test splits as in the third blocks of Table A4. Overall, CLAHE improves the results on the STARE data set substantially and also yields clear improvements on the DRIVE and HRF data sets. For the CHASE_DB1 data set, the results are mixed, depending on the metric. For the UoA-DR data set, CLAHE does not seem to help at all.
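As an illustration of this pre-processing step, the sketch below applies CLAHE to each channel of a fundus photograph using OpenCV. The clip limit and tile size shown are common default-style values and are our assumptions, not the exact settings used in the experiments.

```python
import cv2


def apply_clahe(bgr_image, clip_limit=2.0, tile_grid_size=(8, 8)):
    """Apply Contrast Limited Adaptive Histogram Equalization (CLAHE)
    to each color channel of a fundus photograph independently."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid_size)
    channels = cv2.split(bgr_image)
    equalized = [clahe.apply(channel) for channel in channels]
    return cv2.merge(equalized)


# Usage: enhance a DRIVE image before extracting a single channel.
# image = cv2.imread("drive/21_training.tif")
# enhanced = apply_clahe(image)
# green = enhanced[:, :, 1]   # green channel of the CLAHE-enhanced image
```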
Table A5. Performance (mean ± standard deviation) of U-Nets trained using different color channels for segmenting CRBVs when CLAHE is not applied to the retinal images in the pre-processing stage. Note that 55% of the data is used for training, 20% for validation, and 25% for testing.
Database | Color | Precision | Recall | AUC | MIoU
CHASE_DB1 | RGB | 0.676 ± 0.057 | 0.419 ± 0.037 | 0.727 ± 0.020 | 0.576 ± 0.051
CHASE_DB1 | GRAY | 0.629 ± 0.078 | 0.406 ± 0.052 | 0.714 ± 0.025 | 0.570 ± 0.060
CHASE_DB1 | RED | 0.217 ± 0.012 | 0.353 ± 0.026 | 0.611 ± 0.006 | 0.476 ± 0.009
CHASE_DB1 | GREEN | 0.802 ± 0.017 | 0.530 ± 0.019 | 0.781 ± 0.009 | 0.672 ± 0.023
CHASE_DB1 | BLUE | 0.589 ± 0.023 | 0.373 ± 0.016 | 0.690 ± 0.006 | 0.556 ± 0.050
DRIVE | RGB | 0.856 ± 0.024 | 0.470 ± 0.017 | 0.750 ± 0.010 | 0.693 ± 0.011
DRIVE | GRAY | 0.855 ± 0.021 | 0.464 ± 0.030 | 0.746 ± 0.015 | 0.693 ± 0.024
DRIVE | RED | 0.297 ± 0.009 | 0.376 ± 0.017 | 0.619 ± 0.003 | 0.472 ± 0.010
DRIVE | GREEN | 0.886 ± 0.006 | 0.509 ± 0.010 | 0.771 ± 0.005 | 0.722 ± 0.004
DRIVE | BLUE | 0.504 ± 0.171 | 0.331 ± 0.043 | 0.642 ± 0.031 | 0.551 ± 0.071
HRF | RGB | 0.757 ± 0.014 | 0.533 ± 0.023 | 0.784 ± 0.010 | 0.664 ± 0.026
HRF | GRAY | 0.730 ± 0.010 | 0.520 ± 0.011 | 0.776 ± 0.006 | 0.655 ± 0.011
HRF | RED | 0.164 ± 0.002 | 0.311 ± 0.010 | 0.577 ± 0.001 | 0.483 ± 0.005
HRF | GREEN | 0.791 ± 0.007 | 0.603 ± 0.008 | 0.820 ± 0.003 | 0.705 ± 0.008
HRF | BLUE | 0.153 ± 0.004 | 0.347 ± 0.022 | 0.576 ± 0.003 | 0.476 ± 0.006
STARE | RGB | 0.579 ± 0.077 | 0.348 ± 0.030 | 0.696 ± 0.020 | 0.497 ± 0.023
STARE | GRAY | 0.379 ± 0.146 | 0.312 ± 0.067 | 0.624 ± 0.041 | 0.487 ± 0.032
STARE | RED | 0.157 ± 0.004 | 0.444 ± 0.055 | 0.558 ± 0.010 | 0.456 ± 0.017
STARE | GREEN | 0.592 ± 0.085 | 0.442 ± 0.021 | 0.742 ± 0.010 | 0.517 ± 0.033
STARE | BLUE | 0.164 ± 0.005 | 0.327 ± 0.056 | 0.546 ± 0.013 | 0.474 ± 0.003
UoA-DR | RGB | 0.323 ± 0.003 | 0.411 ± 0.004 | 0.699 ± 0.002 | 0.555 ± 0.004
UoA-DR | GRAY | 0.319 ± 0.003 | 0.372 ± 0.019 | 0.679 ± 0.009 | 0.556 ± 0.005
UoA-DR | RED | 0.238 ± 0.017 | 0.220 ± 0.014 | 0.598 ± 0.008 | 0.522 ± 0.005
UoA-DR | GREEN | 0.328 ± 0.009 | 0.438 ± 0.019 | 0.713 ± 0.008 | 0.563 ± 0.004
UoA-DR | BLUE | 0.262 ± 0.012 | 0.261 ± 0.008 | 0.619 ± 0.004 | 0.535 ± 0.002

References

  1. Resnikoff, S.; Felch, W.; Gauthier, T.M.; Spivey, B. The number of ophthalmologists in practice and training worldwide: A growing gap despite more than 200000 practitioners. Br. J. Ophthalmol. 2012, 96, 783–787. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Abràmoff, M.D.; Garvin, M.K.; Sonka, M. Retinal Imaging and Image Analysis. IEEE Rev. Biomed. Eng. 2010, 3, 169–208. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Owen, C.G.; Rudnicka, A.R.; Mullen, R.; Barman, S.A.; Monekosso, D.; Whincup, P.H.; Ng, J.; Paterson, C. Measuring Retinal Vessel Tortuosity in 10-Year-Old Children: Validation of the Computer-Assisted Image Analysis of the Retina (CAIAR) Program. Investig. Ophthalmol. Vis. Sci. 2009, 50, 2004–2010. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Fraz, M.; Remagnino, P.; Hoppe, A.; Uyyanonvara, B.; Rudnicka, A.; Owen, C.; Barman, S. An Ensemble Classification-Based Approach Applied to Retinal Blood Vessel Segmentation. IEEE Trans. Biomed. Eng. 2012, 59, 2538–2548. [Google Scholar] [CrossRef]
  5. Staal, J.J.; Abramoff, M.D.; Niemeijer, M.; Viergever, M.A.; van Ginneken, B. Ridge based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23, 501–509. [Google Scholar] [CrossRef]
  6. Budai, A.; Bock, R.; Maier, A.; Hornegger, J.; Michelson, G. Robust Vessel Segmentation in Fundus Images. Int. J. Biomed. Imaging 2013, 2013, 154860. [Google Scholar] [CrossRef] [Green Version]
  7. Porwal, P.; Pachade, S.; Kamble, R.; Kokare, M.; Deshmukh, G.; Sahasrabuddhe, V.; Meriaudeau, F. Indian Diabetic Retinopathy Image Dataset (IDRiD): A Database for Diabetic Retinopathy Screening Research. Data 2018, 3, 25. [Google Scholar] [CrossRef] [Green Version]
  8. Cuadros, J.; Bresnick, G. EyePACS: An Adaptable Telemedicine System for Diabetic Retinopathy Screening. J. Diabetes Sci. Technol. 2009, 3, 509–516. [Google Scholar] [CrossRef] [Green Version]
  9. Decencière, E.; Zhang, X.; Cazuguel, G.; Lay, B.; Cochener, B.; Trone, C.; Gain, P.; Ordonez, R.; Massin, P.; Erginay, A.; et al. Feedback on a publicly distributed database: The Messidor database. Image Anal. Stereol. 2014, 33, 231–234. [Google Scholar] [CrossRef] [Green Version]
  10. Hoover, A.; Kouznetsova, V.; Goldbaum, M. Locating Blood Vessels in Retinal Images by Piece-wise Threshold Probing of a Matched Filter Response. IEEE Trans. Med. Imaging 2000, 19, 203–210. [Google Scholar] [CrossRef] [Green Version]
  11. Hoover, A.; Goldbaum, M. Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels. IEEE Trans. Med. Imaging 2003, 22, 951–958. [Google Scholar] [CrossRef] [Green Version]
  12. Abdulla, W.; Chalakkal, R.J. University of Auckland Diabetic Retinopathy (UoA-DR) Database; University of Auckland: Auckland, New Zealand, 2018. [Google Scholar] [CrossRef]
  13. Davis, B.M.; Crawley, L.; Pahlitzsch, M.; Javaid, F.; Cordeiro, M.F. Glaucoma: The retina and beyond. Acta Neuropathol. 2016, 132, 807–826. [Google Scholar] [CrossRef] [Green Version]
  14. Ferris, F.L.; Fine, S.L.; Hyman, L. Age-Related Macular Degeneration and Blindness due to Neovascular Maculopathy. JAMA Ophthalmol. 1984, 102, 1640–1642. [Google Scholar] [CrossRef]
  15. Wykoff, C.C.; Khurana, R.N.; Nguyen, Q.D.; Kelly, S.P.; Lum, F.; Hall, R.; Abbass, I.M.; Abolian, A.M.; Stoilov, I.; To, T.M.; et al. Risk of Blindness Among Patients with Diabetes and Newly Diagnosed Diabetic Retinopathy. Diabetes Care 2021, 44, 748–756. [Google Scholar] [CrossRef]
  16. Romero-Aroca, P. Managing diabetic macular edema: The leading cause of diabetes blindness. World J. Diabetes 2011, 2, 98–104. [Google Scholar] [CrossRef]
  17. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In International Conference on Medical Image Computing and Computer Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar] [CrossRef] [Green Version]
  18. DeHoog, E.; Schwiegerling, J. Fundus camera systems: A comparative analysis. Appl. Opt. 2009, 48, 221–228. [Google Scholar] [CrossRef] [Green Version]
  19. Bayer, B.E. Color Imaging Array. U.S. Patent 3971065, 1976. Available online: https://patentimages.storage.googleapis.com/89/c6/87/c4fb7fbb6d0a0d/US3971065.pdf (accessed on 17 June 2022).
  20. Zhang, L.; Wu, X. Color demosaicking via directional linear minimum mean square-error estimation. IEEE Trans. Image Process. 2005, 14, 2167–2178. [Google Scholar] [CrossRef]
  21. Chung, K.; Chan, Y. Color Demosaicing Using Variance of Color Differences. IEEE Trans. Image Process. 2006, 15, 2944–2955. [Google Scholar] [CrossRef] [Green Version]
  22. Chung, K.; Yang, W.; Yan, W.; Wang, C. Demosaicing of Color Filter Array Captured Images Using Gradient Edge Detection Masks and Adaptive Heterogeneity-Projection. IEEE Trans. Image Process. 2008, 17, 2356–2367. [Google Scholar] [CrossRef]
  23. Flaxman, S.R.; Bourne, R.R.A.; Resnikoff, S.; Ackland, P.; Braithwaite, T.; Cicinelli, M.V.; Das, A.; Jonas, J.B.; Keeffe, J.; Kempen, J.H.; et al. Global causes of blindness and distance vision impairment 1990–2020: A systematic review and meta-analysis. Lancet Glob. Health 2017, 5, 1221–1234. [Google Scholar] [CrossRef] [Green Version]
  24. Burton, M.J.; Ramke, J.; Marques, A.P.; Bourne, R.R.A.; Congdon, N.; Jones, I.; Tong, B.A.M.A.; Arunga, S.; Bachani, D.; Bascaran, C.; et al. The Lancet Global Health Commission on Global Eye Health: Vision beyond 2020. Lancet Glob. Health 2021, 9, 489–551. [Google Scholar] [CrossRef]
  25. Guerrero-Bote, V.P.; Moya-Anegón, F. A further step forward in measuring journals’ scientific prestige: The SJR2 indicator. J. Inf. 2012, 6, 674–688. [Google Scholar] [CrossRef] [Green Version]
  26. Hipwell, J.H.; Strachan, F.; Olson, J.A.; Mchardy, K.C.; Sharp, P.F.; Forrester, J.V. Automated detection of microaneurysms in digital red-free photographs: A diabetic retinopathy screening tool. Diabet. Med. 2000, 17, 588–594. [Google Scholar] [CrossRef] [PubMed]
  27. Walter, T.; Klein, J.C.; Massin, P.; Erginay, A. A contribution of image processing to the diagnosis of diabetic retinopathy—Detection of exudates in color fundus images of the human retina. IEEE Trans. Med. Imaging 2002, 21, 1236–1243. [Google Scholar] [CrossRef]
  28. Klein, R.; Meuer, S.M.; Moss, S.E.; Klein, B.E.K.; Neider, M.W.; Reinke, J. Detection of Age-Related Macular Degeneration Using a Nonmydriatic Digital Camera and a Standard Film Fundus Camera. JAMA Arch. Ophthalmol. 2004, 122, 1642–1646. [Google Scholar] [CrossRef] [Green Version]
  29. Scott, I.U.; Edwards, A.R.; Beck, R.W.; Bressler, N.M.; Chan, C.K.; Elman, M.J.; Friedman, S.M.; Greven, C.M.; Maturi, R.K.; Pieramici, D.J.; et al. A Phase II Randomized Clinical Trial of Intravitreal Bevacizumab for Diabetic Macular Edema. Am. Acad. Ophthalmol. 2007, 114, 1860–1867. [Google Scholar] [CrossRef] [Green Version]
  30. Kose, C.; Sevik, U.; Gencalioglu, O. Automatic segmentation of age-related macular degeneration in retinal fundus images. Comput. Biol. Med. 2008, 38, 611–619. [Google Scholar] [CrossRef]
  31. Abramoff, M.D.; Niemeijer, M.; Suttorp-Schultan, M.S.A.; Viergever, M.A.; Russell, S.R.; Ginneken, B.V. Evaluation of a System for Automatic Detection of Diabetic Retinopathy From Color Fundus Photographs in a Large Population of Patients With Diabetes. Diabetes Care 2008, 31, 193–198. [Google Scholar] [CrossRef] [Green Version]
  32. Gangnon, R.E.; Davis, M.D.; Hubbard, L.D.; Aiello, L.M.; Chew, E.Y.; Ferris, F.L.; Fisher, M.R. A Severity Scale for Diabetic Macular Edema Developed from ETDRS Data. Investig. Ophthalmol. Vis. Sci. 2008, 49, 5041–5047. [Google Scholar] [CrossRef]
  33. Bock, R.; Meier, J.; Nyul, L.G.; Hornegger, J.; Michelson, G. Glaucoma risk index: Automated glaucoma detection from color fundus images. Med. Image Anal. 2010, 14, 471–481. [Google Scholar] [CrossRef] [Green Version]
  34. Kose, C.; Sevik, U.; Gencalioglu, O.; Ikibas, C.; Kayikicioglu, T. A Statistical Segmentation Method for Measuring Age-Related Macular Degeneration in Retinal Fundus Images. J. Med. Syst. 2010, 34, 1–13. [Google Scholar] [CrossRef]
  35. Muramatsu, C.; Hayashi, Y.; Sawada, A.; Hatanaka, Y.; Hara, T.; Yamamoto, T.; Fujita, H. Detection of retinal nerve fiber layer defects on retinal fundus images for early diagnosis of glaucoma. J. Biomed. Opt. 2010, 15, 016021. [Google Scholar] [CrossRef]
  36. Joshi, G.D.; Sivaswamy, J.; Krishnadas, S.R. Optic Disk and Cup Segmentation From Monocular Color Retinal Images for Glaucoma Assessment. IEEE Trans. Med. Imaging 2011, 30, 1192–1205. [Google Scholar] [CrossRef]
  37. Agurto, C.; Barriga, E.S.; Murray, V.; Nemeth, S.; Crammer, R.; Bauman, W.; Zamora, G.; Pattichis, M.S.; Soliz, P. Automatic Detection of Diabetic Retinopathy and Age-Related Macular Degeneration in Digital Fundus Images. Investig. Ophthalmol. Vis. Sci. 2011, 52, 5862–5871. [Google Scholar] [CrossRef]
  38. Fadzil, M.H.A.; Izhar, L.I.; Nugroho, H.; Nugroho, H.A. Analysis of retinal fundus images for grading of diabetic retinopathy severity. Med. Biol. Eng. Comput. 2011, 49, 693–700. [Google Scholar] [CrossRef]
  39. Mookiah, M.R.K.; Acharya, U.R.; Lim, C.M.; Petznick, A.; Suri, J.S. Data mining technique for automated diagnosis of glaucoma using higher order spectra and wavelet energy features. Knowl.-Based Syst. 2012, 33, 73–82. [Google Scholar] [CrossRef]
  40. Hijazi, M.H.A.; Coenen, F.; Zheng, Y. Data mining techniques for the screening of age-related macular degeneration. Knowl.-Based Syst. 2012, 29, 83–92. [Google Scholar] [CrossRef] [Green Version]
  41. Deepak, K.S.; Sivaswamy, J. Automatic Assessment of Macular Edema From Color Retinal Images. IEEE Trans. Med. Imaging 2012, 31, 766–776. [Google Scholar] [CrossRef] [Green Version]
  42. Akram, M.U.; Khalid, S.; Tariq, A.; Javed, M.Y. Detection of neovascularization in retinal images using multivariate m-Mediods based classifier. Comput. Med. Imaging Graph. 2013, 37, 346–357. [Google Scholar] [CrossRef]
  43. Oh, E.; Yoo, T.K.; Park, E. Diabetic retinopathy risk prediction for fundus examination using sparse learning: A cross-sectional study. Med. Inform. Decis. Mak. 2013, 13, 106. [Google Scholar] [CrossRef] [Green Version]
  44. Fuente-Arriaga, J.A.D.L.; Felipe-Riveron, E.M.; Garduno-Calderon, E. Application of vascular bundle displacement in the optic disc for glaucoma detection using fundus images. Comput. Biol. Med. 2014, 47, 27–35. [Google Scholar] [CrossRef] [PubMed]
  45. Akram, M.U.; Khalid, S.; Tariq, A.; Khan, S.A.; Azam, F. Detection and classification of retinal lesions for grading of diabetic retinopathy. Comput. Biol. Med. 2014, 45, 161–171. [Google Scholar] [CrossRef] [PubMed]
  46. Noronha, K.P.; Acharya, U.R.; Nayak, K.P.; Martis, R.J.; Bhandary, S.V. Automated classification of glaucoma stages using higher order cumulant features. Biomed. Signal Process. Control 2014, 10, 174–183. [Google Scholar] [CrossRef]
  47. Mookiah, M.R.K.; Acharya, U.R.; Koh, J.E.W.; Chua, C.K.; Tan, J.H.; Chandran, V.; Lim, C.M.; Noronha, K.; Laude, A.; Tong, L. Decision support system for age-related macular degeneration using discrete wavelet transform. Med. Biol. Eng. Comput. 2014, 52, 781–796. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  48. Casanova, R.; Saldana, S.; Chew, E.Y.; Danis, R.P.; Greven, C.M.; Ambrosius, W.T. Application of Random Forests Methods to Diabetic Retinopathy Classification Analyses. PLoS ONE 2014, 9, e98587. [Google Scholar] [CrossRef]
  49. Issac, A.; Sarathi, M.P.; Dutta, M.K. An Adaptive Threshold Based Image Processing Technique for Improved Glaucoma Detection and Classification. Comput. Methods Programs Biomed. 2015, 122, 229–244. [Google Scholar] [CrossRef]
  50. Mookiah, M.R.K.; Acharya, U.R.; Chandran, V.; Martis, R.J.; Tan, J.H.; Koh, J.E.W.; Chua, C.K.; Tong, L.; Laude, A. Application of higher-order spectra for automated grading of diabetic maculopathy. Med. Biol. Eng. Comput. 2015, 53, 1319–1331. [Google Scholar] [CrossRef] [Green Version]
  51. Jaya, T.; Dheeba, J.; Singh, N.A. Detection of Hard Exudates in Colour Fundus Images Using Fuzzy Support Vector Machine-Based Expert System. J. Digit. Imaging 2015, 28, 761–768. [Google Scholar] [CrossRef] [Green Version]
  52. Oh, J.E.; Yang, H.K.; Kim, K.G.; Hwang, J.M. Automatic Computer-Aided Diagnosis of Retinal Nerve Fiber Layer Defects Using Fundus Photographs in Optic Neuropathy. Investig. Ophthalmol. Vis. Sci. 2015, 56, 2872–2879. [Google Scholar] [CrossRef] [Green Version]
  53. Singh, A.; Dutta, M.K.; ParthaSarathi, M.; Uher, V.; Burget, R. Image Processing Based Automatic Diagnosis of Glaucoma using Wavelet Features of Segmented Optic Disc from Fundus Image. Comput. Methods Programs Biomed. 2016, 124, 108–120. [Google Scholar] [CrossRef]
  54. Acharya, U.R.; Mookiah, M.R.K.; Koh, J.E.W.; Tan, J.H.; Noronha, K.; Bhandary, S.V.; Rao, A.K.; Hagiwara, Y.; Chua, C.K.; Laude, A. Novel risk index for the identification of age-related macular degeneration using radon transform and DWT features. Comput. Biol. Med. 2016, 73, 131–140. [Google Scholar] [CrossRef]
  55. Bhaskaranand, M.; Ramachandra, C.; Bhat, S.; Cuadros, J.; Nittala, M.G.; Sadda, S.; Solanki, K. Automated Diabetic Retinopathy Screening and Monitoring Using Retinal Fundus Image Analysis. J. Diabetes Sci. Technol. 2016, 10, 254–261. [Google Scholar] [CrossRef] [Green Version]
  56. Phan, T.V.; Seoud, L.; Chakor, H.; Cheriet, F. Automatic Screening and Grading of Age-Related Macular Degeneration from Texture Analysis of Fundus Images. J. Ophthalmol. 2016, 2016, 5893601. [Google Scholar] [CrossRef] [Green Version]
  57. Wang, Y.T.; Tadarati, M.; Wolfson, Y.; Bressler, S.B.; Bressler, N.M. Comparison of Prevalence of Diabetic Macular Edema Based on Monocular Fundus Photography vs Optical Coherence Tomography. JAMA Ophthalmol. 2016, 134, 222–228. [Google Scholar] [CrossRef]
  58. Acharya, U.R.; Bhat, S.; Koh, J.E.W.; Bhandary, S.V.; Adeli, H. A novel algorithm to detect glaucoma risk using texton and local configuration pattern features extracted from fundus images. Comput. Biol. Med. 2017, 88, 72–83. [Google Scholar] [CrossRef]
  59. Acharya, U.R.; Mookiah, M.R.K.; Koh, J.E.W.; Tan, J.H.; Bhandary, S.V.; Rao, A.K.; Hagiwara, Y.; Chua, C.K.; Laude, A. Automated Diabetic Macular Edema (DME) Grading System using DWT, DCT Features and Maculopathy Index. Comput. Biol. Med. 2017, 84, 59–68. [Google Scholar] [CrossRef]
  60. Leontidis, G. A new unified framework for the early detection of the progression to diabetic retinopathy from fundus images. Comput. Biol. Med. 2017, 90, 98–115. [Google Scholar] [CrossRef] [Green Version]
  61. Maheshwari, S.; Pachori, R.B.; Acharya, U.R. Automated Diagnosis of Glaucoma Using Empirical Wavelet Transform and Correntropy Features Extracted From Fundus Images. IEEE J. Biomed. Health Inform. 2017, 21, 803–813. [Google Scholar] [CrossRef]
  62. Maheshwari, S.; Pachori, R.B.; Kanhangad, V.; Bhandary, S.V.; Acharya, R. Iterative variational mode decomposition based automated detection of glaucoma using fundus images. Comput. Biol. Med. 2017, 88, 142–149. [Google Scholar] [CrossRef]
  63. Saha, S.K.; Fernando, B.; Cuadros, J.; Xiao, D.; Kanagasingam, Y. Automated Quality Assessment of Colour Fundus Images for Diabetic Retinopathy Screening in Telemedicine. J. Digit. Imaging 2018, 31, 869–878. [Google Scholar] [CrossRef]
  64. Colomer, A.; Igual, J.; Naranjo, V. Detection of Early Signs of Diabetic Retinopathy Based on Textural and Morphological Information in Fundus Images. Sensors 2020, 20, 5. [Google Scholar] [CrossRef] [Green Version]
  65. Gardner, G.G.; Keating, D.; Williamson, T.H.; Elliott, A.T. Automatic detection of diabetic retinopathy using an artificial neural network: A screening tool. Br. J. Ophthalmol. 1996, 80, 940–944. [Google Scholar] [CrossRef] [Green Version]
  66. Nayak, J.; Acharya, U.R.; Bhat, P.S.; Shetty, N.; Lim, T.C. Automated Diagnosis of Glaucoma Using Digital Fundus Images. J. Med. Syst. 2009, 33, 337–346. [Google Scholar] [CrossRef]
  67. Ganesan, K.; Martis, R.J.; Acharya, U.R.; Chua, C.K.; Min, L.C.; Ng, E.Y.K.; Laude, A. Computer-aided diabetic retinopathy detection using trace transforms on digital fundus images. Med. Biol. Eng. Comput. 2014, 52, 663–672. [Google Scholar] [CrossRef] [PubMed]
  68. Mookiah, M.R.K.; Acharya, U.R.; Fujita, H.; Koh, J.E.W.; Tan, J.H.; Noronha, K.; Bhandary, S.V.; Chua, C.K.; Lim, C.M.; Laude, A.; et al. Local Configuration Pattern Features for Age-Related Macular Degeneration Characterisation and Classification. Comput. Biol. Med. 2015, 63, 208–218. [Google Scholar] [CrossRef] [PubMed]
  69. Asaoka, R.; Murata, H.; Iwase, A.; Araie, M. Detecting Preperimetric Glaucoma with Standard Automated Perimetry Using a Deep Learning Classifier. Ophthalmology 2016, 123, 1974–1980. [Google Scholar] [CrossRef] [PubMed]
  70. Abramoff, M.D.; Lou, Y.; Erginay, A.; Clarida, W.; Amelon, R.; Folk, J.C.; Niemeijer, M. Improved Automated Detection of Diabetic Retinopathy on a Publicly Available Dataset Through Integration of Deep Learning. Investig. Ophthalmol. Vis. Sci. 2016, 57, 5200–5206. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  71. Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Venugopalan, S.; Widner, K.; Madams, T.; Cuadros, J.; et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA 2016, 316, 2402–2410. [Google Scholar] [CrossRef]
  72. Zilly, J.; Buhmann, J.M.; Mahapatra, D. Glaucoma Detection Using Entropy Sampling And Ensemble Learning For Automatic Optic Cup And Disc Segmentation. Comput. Med. Imaging Graph. 2017, 55, 28–41. [Google Scholar] [CrossRef]
  73. Burlina, P.M.; Joshi, N.; Pekala, M.; Pacheco, K.D.; Freund, D.E.; Bressler, N.M. Automated Grading of Age-Related Macular Degeneration From Color Fundus Images Using Deep Convolutional Neural Networks. JAMA Ophthalmol. 2017, 135, 1170–1176. [Google Scholar] [CrossRef]
  74. Abbas, Q.; Fondon, I.; Sarmiento, A.; Jimenez, S.; Alemany, P. Automatic recognition of severity level for diagnosis of diabetic retinopathy using deep visual features. Med. Biol. Eng. Comput. 2017, 55, 1959–1974. [Google Scholar] [CrossRef]
  75. Ting, D.S.W.; Cheung, C.Y.; Lim, G.; Tan, G.S.W.; Quang, N.D.; Gan, A.; Hamzah, H.; Garcia-Franco, R.; Yeo, I.Y.S.; Lee, S.Y.; et al. Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images from Multiethnic Populations with Diabetes. JAMA 2017, 318, 2211–2223. [Google Scholar] [CrossRef]
  76. Burlina, P.; Pacheco, K.D.; Joshi, N.; Freund, D.E.; Bressler, N.M. Comparing humans and deep learning performance for grading AMD: A study in using universal deep features and transfer learning for automated AMD analysis. Comput. Biol. Med. 2017, 82, 80–86. [Google Scholar] [CrossRef] [Green Version]
  77. Gargeya, R.; Leng, T. Automated Identification of Diabetic Retinopathy Using Deep Learning. Ophthalmology 2017, 124, 962–969. [Google Scholar] [CrossRef]
  78. Quellec, G.; Charriere, K.; Boudi, Y.; Cochener, B.; Lamard, M. Deep Image Mining for Diabetic Retinopathy Screening. Med. Image Anal. 2017, 39, 178–193. [Google Scholar] [CrossRef] [Green Version]
  79. Ferreira, M.V.D.S.; Filho, A.O.D.C.; Sousa, A.D.D.; Silva, A.C.; Gattass, M. Convolutional neural network and texture descriptor-based automatic detection and diagnosis of Glaucoma. Expert Syst. Appl. 2018, 110, 250–263. [Google Scholar] [CrossRef]
  80. Grassmann, F.; Mengelkamp, J.; Brandl, C.; Harsch, S.; Zimmermann, M.E.; Linkohr, B.; Peters, A.; Heid, I.M.; Palm, C.; Weber, B.H.F. A Deep Learning Algorithm for Prediction of Age-Related Eye Disease Study Severity Scale for Age-Related Macular Degeneration from Color Fundus Photography. Am. Acad. Ophthalmol. 2018, 125, 1410–1420. [Google Scholar] [CrossRef] [Green Version]
  81. Khojasteh, P.; Aliahmad, B.; Kumar, D.K. Fundus images analysis using deep features for detection of exudates, hemorrhages and microaneurysms. BMC Ophthalmol. 2018, 18, 288. [Google Scholar] [CrossRef] [Green Version]
  82. Raghavendra, U.; Fujita, H.; Bhandary, S.V.; Gudigar, A.; Tan, J.H.; Acharya, U.R. Deep Convolution Neural Network for Accurate Diagnosis of Glaucoma Using Digital Fundus Images. Inf. Sci. 2018, 441, 41–49. [Google Scholar] [CrossRef]
  83. Burlina, P.M.; Joshi, N.; Pacheco, K.D.; Freund, D.E.; Kong, J.; Bressler, N.M. Use of Deep Learning for Detailed Severity Characterization and Estimation of 5-Year Risk Among Patients with Age-Related Macular Degeneration. JAMA Ophthalmol. 2018, 136, 1359–1366. [Google Scholar] [CrossRef] [Green Version]
  84. Lam, C.; Yu, C.; Huang, L.; Rubin, D. Retinal Lesion Detection with Deep Learning Using Image Patches. Investig. Ophthalmol. Vis. Sci. 2018, 59, 590–596. [Google Scholar] [CrossRef]
  85. Li, Z.; He, Y.; Keel, S.; Meng, W.; Chang, R.T.; He, M. Efficacy of a Deep Learning System for Detecting Glaucomatous Optic Neuropathy Based on Color Fundus Photographs. Ophthalmology 2018, 125, 1199–1206. [Google Scholar] [CrossRef] [Green Version]
  86. Fu, H.; Cheng, J.; Xu, Y.; Zhang, C.; Wong, D.W.K.; Liu, J.; Cao, X. Disc-Aware Ensemble Network for Glaucoma Screening From Fundus Image. IEEE Trans. Med. Imaging 2018, 37, 2493–2501. [Google Scholar] [CrossRef] [Green Version]
  87. Liu, S.; Graham, S.L.; Schulz, A.; Kalloniatis, M.; Zangerl, B.; Cai, W.; Gao, Y.; Chua, B.; Arvind, H.; Grigg, J.; et al. A Deep Learning-Based Algorithm Identifies Glaucomatous Discs Using Monoscopic Fundus Photographs. Ophthalmol. Glaucoma 2018, 1, 15–22. [Google Scholar] [CrossRef]
  88. Liu, H.; Li, L.; Wormstone, I.M.; Qiao, C.; Zhang, C.; Liu, P.; Li, S.; Wang, H.; Mou, D.; Pang, R.; et al. Development and Validation of a Deep Learning System to Detect Glaucomatous Optic Neuropathy Using Fundus Photographs. JAMA Ophthalmol. 2019, 137, 1353–1360. [Google Scholar] [CrossRef]
  89. Keel, S.; Li, Z.; Scheetz, J.; Robman, L.; Phung, J.; Makeyeva, G.; Aung, K.; Liu, C.; Yan, X.; Meng, W.; et al. Development and validation of a deep-learning algorithm for the detection of neovascular age-related macular degeneration from colour fundus photographs. Clin. Exp. Ophthalmol. 2019, 47, 1009–1018. [Google Scholar] [CrossRef] [Green Version]
  90. Li, F.; Liu, Z.; Chen, H.; Jiang, M.; Zhang, X.; Wu, Z. Automatic Detection of Diabetic Retinopathy in Retinal Fundus Photographs Based on Deep Learning Algorithm. Transl. Vis. Sci. Technol. 2019, 8, 4. [Google Scholar] [CrossRef] [Green Version]
  91. Diaz-Pinto, A.; Morales, S.; Naranjo, V.; Kohler, T.; Mossi, J.M.; Navea, A. CNNs for automatic glaucoma assessment using fundus images: An extensive validation. BMC Biomed. Eng. Online 2019, 18, 29. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  92. Peng, Y.; Dharssi, S.; Chen, Q.; Keenan, T.D.; Agron, E.; Wong, W.T.; Chew, E.Y.; Lu, Z. DeepSeeNet: A Deep Learning Model for Automated Classification of Patient-based Age-related Macular Degeneration Severity from Color Fundus Photographs. Ophthalmology 2019, 126, 565–575. [Google Scholar] [CrossRef] [PubMed]
  93. Zeng, X.; Chen, H.; Luo, Y.; Ye, W. Automated Diabetic Retinopathy Detection Based on Binocular Siamese-Like Convolutional Neural Network. IEEE Access 2019, 4, 30744–30753. [Google Scholar] [CrossRef]
  94. Matsuba, S.; Tabuchi, H.; Ohsugi, H.; Enno, H.; Ishitobi, N.; Masumoto, H.; Kiuchi, Y. Accuracy of ultra-wide-field fundus ophthalmoscopy-assisted deep learning, a machine-learning technology, for detecting age-related macular degeneration. Int. Ophthalmol. 2019, 39, 1269–1275. [Google Scholar] [CrossRef] [Green Version]
  95. Raman, R.; Srinivasan, S.; Virmani, S.; Sivaprasad, S.; Rao, C.; Rajalakshmi, R. Fundus photograph-based deep learning algorithms in detecting diabetic retinopathy. Eye 2019, 33, 97–109. [Google Scholar] [CrossRef] [Green Version]
  96. Singh, R.K.; Gorantla, R. DMENet: Diabetic Macular Edema diagnosis using Hierarchical Ensemble of CNNs. PLoS ONE 2020, 15, e0220677. [Google Scholar] [CrossRef] [Green Version]
  97. Gonzalez-Gonzalo, C.; Sanchez-Gutierrez, V.; Hernandez-Martinez, P.; Contreras, I.; Lechanteur, Y.T.; Domanian, A.; Ginneken, B.V.; Sanchez, C.I. Evaluation of a deep learning system for the joint automated detection of diabetic retinopathy and age-related macular degeneration. Acta Ophthalmol. 2020, 98, 368–377. [Google Scholar] [CrossRef]
  98. Gheisari, S.; Shariflou, S.; Phu, J.; Kennedy, P.J.; Ashish, A.; Kalloniatis, M.; Golzan, S.M. A combined convolutional and recurrent neural network for enhanced glaucoma detection. Sci. Rep. 2021, 11, 1945. [Google Scholar] [CrossRef]
  99. Chaudhuri, S.; Chatterjee, S.; Katz, N.; Nelson, M.; Goldbaum, M. Detection of blood vessels in retinal images using two-dimensional matched filter. IEEE Trans. Med. Imaging 1989, 8, 263–269. [Google Scholar] [CrossRef] [Green Version]
  100. Sinthanayothin, C.; Boyce, J.; Cook, H.; Williamson, J. Automated localization of the optic disc, fovea and retinal blood vessels from digital color fundus images. Br. J. Ophthalmol. 1999, 83, 902–910. [Google Scholar] [CrossRef]
  101. Lowell, J.; Hunter, A.; Steel, D.; Basu, A.; Ryder, R.; Fletcher, E.; Kennedy, L. Optic Nerve Head Segmentation. IEEE Trans. Med. Imaging 2004, 23, 256–264. [Google Scholar] [CrossRef] [Green Version]
  102. Li, H.; Chutatape, O. Automated feature extraction in color retinal images by a model based approach. IEEE Trans. Biomed. Eng. 2004, 51, 246–254. [Google Scholar] [CrossRef]
  103. Soares, J.V.B.; Leandro, J.J.G.; Cesar, R.M.; Jelinek, H.F.; Cree, M.J. Retinal Vessel Segmentation Using the 2-D Gabor Wavelet and Supervised Classification. IEEE Trans. Med. Imaging 2006, 25, 1214–1222. [Google Scholar] [CrossRef] [Green Version]
  104. Xu, J.; Chutatape, O.; Sung, E.; Zheng, C.; Kuan, P.C.T. Optic disk feature extraction via modified deformable model technique for glaucoma analysis. Pattern Recognit. 2007, 40, 2063–2076. [Google Scholar] [CrossRef]
  105. Niemeijer, M.; Abramoff, M.D.; Ginneken, B.V. Segmentation of the Optic Disc, Macula and Vascular Arch in Fundus Photographs. IEEE Trans. Med. Imaging 2007, 26, 116–127. [Google Scholar] [CrossRef]
  106. Ricci, E.; Perfetti, R. Retinal Blood Vessel Segmentation Using Line Operators and Support Vector Classification. IEEE Trans. Med. Imaging 2007, 26, 1357–1365. [Google Scholar] [CrossRef]
  107. Abràmoff, M.D.; Alward, W.L.M.; Greenlee, E.C.; Shuba, L.; Kim, C.Y.; Fingert, J.H.; Kwon, Y.H. Automated segmentation of the optic disc from stereo color photographs using physiologically plausible features. Investig. Ophthalmol. Vis. Sci. 2007, 48, 1665–1673. [Google Scholar] [CrossRef]
  108. Tobin, K.W.; Chaum, E.; Govindasamy, V.P.; Karnowski, T.P. Detection of Anatomic Structures in Human Retinal Imagery. IEEE Trans. Med. Imaging 2007, 26, 1729–1739. [Google Scholar] [CrossRef] [PubMed]
  109. Youssif, A.; Ghalwash, A.Z.; Ghoneim, A. Optic Disc Detection From Normalized Digital Fundus Images by Means of a Vessels’ Direction Matched Filter. IEEE Trans. Med. Imaging 2008, 27, 11–18. [Google Scholar] [CrossRef] [PubMed]
  110. Niemeijer, M.; Abramoff, M.D.; Ginneken, B.V. Fast Detection of the Optic Disc and Fovea in Color Fundus Photographs. Med. Image Anal. 2009, 13, 859–870. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  111. Cinsdikici, M.; Aydin, D. Detection of blood vessels in ophthalmoscope images using MF/ant (matched filter/ant colony) algorithm. Comput. Methods Programs Biomed. 2009, 96, 85–95. [Google Scholar] [CrossRef]
  112. Welfer, D.; Scharcanski, J.; Kitamura, C.M.; Pizzol, M.M.D.; Ludwig, L.W.B.; Marinho, D.R. Segmentation of the optic disk in color eye fundus images using an adaptive morphological approach. Comput. Biol. Med. 2010, 40, 124–137. [Google Scholar] [CrossRef]
  113. Aquino, A.; Gegundez-Arias, M.E.; Marín, D. Detecting the Optic Disc Boundary in Digital Fundus Images Using Morphological, Edge Detection, and Feature Extraction Techniques. IEEE Trans. Med. Imaging 2010, 29, 1860–1869. [Google Scholar] [CrossRef] [Green Version]
  114. Zhu, X.; Rangayyan, R.M.; Ells, A.L. Detection of the Optic Nerve Head in Fundus Images of the Retina Using the Hough Transform for Circles. J. Digit. Imaging 2010, 23, 332–341. [Google Scholar] [CrossRef] [Green Version]
  115. Lu, S. Accurate and Efficient Optic Disc Detection and Segmentation by a Circular Transformation. IEEE Trans. Med. Imaging 2011, 30, 2126–2133. [Google Scholar] [CrossRef]
  116. Welfer, D.; Scharcanski, J.; Marinho, D.R. Fovea center detection based on the retina anatomy and mathematical morphology. Comput. Methods Programs Biomed. 2011, 104, 397–409. [Google Scholar] [CrossRef]
  117. Cheung, C.; Butty, Z.; Tehrani, N.; Lam, W.C. Computer-assisted image analysis of temporal retinal vessel width and tortuosity in retinopathy of prematurity for the assessment of disease severity and treatment outcome. Am. Assoc. Pediatr. Ophthalmol. Strabismus 2011, 15, 374–380. [Google Scholar] [CrossRef]
  118. Kose, C.; Ikibas, C. A personal identification system using retinal vasculature in retinal fundus images. Expert Syst. Appl. 2011, 38, 13670–13681. [Google Scholar] [CrossRef]
  119. You, X.; Peng, Q.; Yuan, Y.; Cheung, Y.; Lei, J. Segmentation of retinal blood vessels using the radial projection and semi-supervised approach. Pattern Recognit. 2011, 44, 2314–2324. [Google Scholar] [CrossRef]
  120. Bankhead, P.; Scholfield, N.; Mcgeown, G.; Curtis, T. Fast Retinal Vessel Detection and Measurement Using Wavelets and Edge Location Refinement. PLoS ONE 2012, 7, e32435. [Google Scholar] [CrossRef] [Green Version]
  121. Qureshi, R.J.; Kovacs, L.; Harangi, B.; Nagy, B.; Peto, T.; Hajdu, A. Combining algorithms for automatic detection of optic disc and macula in fundus images. Comput. Vis. Image Underst. 2012, 116, 138–145. [Google Scholar] [CrossRef]
  122. Fraz, M.; Barman, S.A.; Remagnino, P.; Hoppe, A.; Basit, A.; Uyyanonvara, B.; Rudnicka, A.R.; Owen, C. An approach to localize the retinal blood vessels using bit planes and centerline detection. Comput. Methods Programs Biomed. 2012, 108, 600–616. [Google Scholar] [CrossRef]
  123. Li, Q.; You, J.; Zhang, D. Vessel segmentation and width estimation in retinal images using multiscale production of matched filter responses. Expert Syst. Appl. 2012, 39, 7600–7610. [Google Scholar] [CrossRef]
  124. Lin, K.S.; Tsai, C.L.; Sofka, M.; Chen, S.J.; Lin, W.Y. Retinal Vascular Tree Reconstruction with Anatomical Realism. IEEE Trans. Biomed. Eng. 2012, 59, 3337–3347. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  125. Moghimirad, E.; Rezatofighi, S.H.; Soltanian-Zadeh, H. Retinal vessel segmentation using a multi-scale medialness function. Comput. Biol. Med. 2012, 42, 50–60. [Google Scholar] [CrossRef] [PubMed]
  126. Morales, S.; Naranjo, V.; Angulo, J.; Alcaniz, M. Automatic Detection of Optic Disc Based on PCA and Mathematical Morphology. IEEE Trans. Med. Imaging 2013, 32, 786–796. [Google Scholar] [CrossRef] [PubMed]
  127. Chin, K.S.; Trucco, E.; Tan, L.L.; Wilson, P.J. Automatic Fovea Location in Retinal Images Using Anatomical Priors and Vessel Density. Pattern Recognit. Lett. 2013, 34, 1152–1158. [Google Scholar] [CrossRef]
  128. Akram, M.; Khan, S. Multilayered thresholding-based blood vessel segmentation for screening of diabetic retinopathy. Eng. Comput. 2013, 29, 165–173. [Google Scholar] [CrossRef]
  129. Gegundez, M.E.; Marin, D.; Bravo, J.M.; Suero, A. Locating the fovea center position in digital fundus images using thresholding and feature extraction techniques. Comput. Med. Imaging Graph. 2013, 37, 386–393. [Google Scholar] [CrossRef]
  130. Badsha, S.; Reza, A.W.; Tan, K.G.; Dimyati, K. A New Blood Vessel Extraction Technique Using Edge Enhancement and Object Classification. J. Digit. Imaging 2013, 26, 1107–1115. [Google Scholar] [CrossRef] [Green Version]
  131. Fathi, A.; Naghsh-Nilchi, A. Automatic wavelet-based retinal blood vessels segmentation and vessel diameter estimation. Biomed. Signal Process. Control 2013, 8, 71–80. [Google Scholar] [CrossRef]
  132. Fraz, M.; Basit, A.; Barman, S.A. Application of Morphological Bit Planes in Retinal Blood Vessel Extraction. J. Digit. Imaging 2013, 26, 274–286. [Google Scholar] [CrossRef] [Green Version]
  133. Nayebifar, B.; Moghaddam, H.A. A novel method for retinal vessel tracking using particle filters. Comput. Biol. Med. 2013, 43, 541–548. [Google Scholar] [CrossRef]
  134. Nguyen, U.T.V.; Bhuiyan, A.; Park, L.A.F.; Ramamohanarao, K. An effective retinal blood vessel segmentation method using multi-scale line detection. Pattern Recognit. 2013, 46, 703–715. [Google Scholar] [CrossRef]
  135. Wang, Y.; Ji, G.; Lin, P. Retinal vessel segmentation using multiwavelet kernels and multiscale hierarchical decomposition. Pattern Recognit. 2013, 46, 2117–2133. [Google Scholar] [CrossRef]
  136. Giachetti, A.; Ballerini, L.; Trucco, E. Accurate and reliable segmentation of the optic disc in digital fundus images. J. Med. Imaging 2014, 1, 024001. [Google Scholar] [CrossRef] [Green Version]
  137. Kao, E.F.; Lin, P.C.; Chou, M.C.; Jaw, T.S.; Liu, G.C. Automated detection of fovea in fundus images based on vessel-free zone and adaptive Gaussian template. Comput. Methods Programs Biomed. 2014, 117, 92–103. [Google Scholar] [CrossRef]
  138. Bekkers, E.; Duits, R.; Berendschot, T.; Romeny, B.T.H. A Multi-Orientation Analysis Approach to Retinal Vessel Tracking. J. Math. Imaging Vis. 2014, 49, 583–610. [Google Scholar] [CrossRef] [Green Version]
  139. Aquino, A. Establishing the macular grading grid by means of fovea centre detection using anatomical-based and visual-based features. Comput. Biol. Med. 2014, 55, 61–73. [Google Scholar] [CrossRef]
  140. Cheng, E.; Du, L.; Wu, Y.; Zhu, Y.J.; Megalooikonomou, V.; Ling, H. Discriminative vessel segmentation in retinal images by fusing context-aware hybrid features. Mach. Vis. Appl. 2014, 25, 1779–1792. [Google Scholar] [CrossRef]
  141. Miri, M.S.; Abràmoff, M.D.; Lee, K.; Niemeijer, M.; Wang, J.K.; Kwon, Y.H.; Garvin, M.K. Multimodal Segmentation of Optic Disc and Cup From SD-OCT and Color Fundus Photographs Using a Machine-Learning Graph-Based Approach. IEEE Trans. Med. Imaging 2015, 34, 1854–1866. [Google Scholar] [CrossRef] [Green Version]
  142. Dai, P.; Luo, H.; Sheng, H.; Zhao, Y.; Li, L.; Wu, J.; Zhao, Y.; Suzuki, K. A New Approach to Segment Both Main and Peripheral Retinal Vessels Based on Gray-Voting and Gaussian Mixture Model. PLoS ONE 2015, 10, e0127748. [Google Scholar] [CrossRef]
  143. Mary, M.C.V.S.; Rajsingh, E.B.; Jacob, J.K.K.; Anandhi, D.; Amato, U.; Selvan, S.E. An empirical study on optic disc segmentation using an active contour model. Biomed. Signal Process. Control 2015, 18, 19–29. [Google Scholar] [CrossRef]
  144. Hassanien, A.E.; Emary, E.; Zawbaa, H.M. Retinal blood vessel localization approach based on bee colony swarm optimization, fuzzy c-means and pattern search. J. Vis. Commun. Image Represent. 2015, 31, 186–196. [Google Scholar] [CrossRef]
  145. Harangi, B.; Hajdu, A. Detection of the Optic Disc in Fundus Images by Combining Probability Models. Comput. Biol. Med. 2015, 65, 10–24. [Google Scholar] [CrossRef] [PubMed]
  146. Imani, E.; Javidi, M.; Pourreza, H.R. Improvement of Retinal Blood Vessel Detection Using Morphological Component Analysis. Comput. Methods Programs Biomed. 2015, 118, 263–279. [Google Scholar] [CrossRef] [PubMed]
  147. Lazar, I.; Hajdu, A. Segmentation of retinal vessels by means of directional response vector similarity and region growing. Comput. Biol. Med. 2015, 66, 209–221. [Google Scholar] [CrossRef]
  148. Roychowdhury, S.; Koozekanani, D.D.; Parhi, K.K. Iterative Vessel Segmentation of Fundus Images. IEEE Trans. Biomed. Eng. 2015, 62, 1738–1749. [Google Scholar] [CrossRef]
  149. Pardhasaradhi, M.; Kande, G. Segmentation of optic disk and optic cup from digital fundus images for the assessment of glaucoma. Biomed. Signal Process. Control 2016, 24, 34–46. [Google Scholar] [CrossRef]
  150. Medhi, J.P.; Dandapat, S. An effective Fovea detection and Automatic assessment of Diabetic Maculopathy in color fundus images. Comput. Biol. Med. 2016, 74, 30–44. [Google Scholar] [CrossRef]
  151. Aslani, S.; Sarnel, H. A new supervised retinal vessel segmentation method based on robust hybrid features. Biomed. Signal Process. Control 2016, 30, 1–12. [Google Scholar] [CrossRef]
  152. Roychowdhury, S.; Koozekanani, D.; Kuchinka, S.; Parhi, K. Optic Disc Boundary and Vessel Origin Segmentation of Fundus Images. J. Biomed. Health Inform. 2016, 20, 1562–1574. [Google Scholar] [CrossRef]
  153. Onal, S.; Chen, X.; Satamraju, V.; Balasooriya, M.; Dabil-Karacal, H. Automated and simultaneous fovea center localization and macula segmentation using the new dynamic identification and classification of edges model. J. Med. Imaging 2016, 3, 034002. [Google Scholar] [CrossRef] [Green Version]
  154. Bahadarkhan, K.; Khaliq, A.A.; Shahid, M. A Morphological Hessian Based Approach for Retinal Blood Vessels Segmentation and Denoising Using Region Based Otsu Thresholding. PLoS ONE 2016, 11, e0158996. [Google Scholar] [CrossRef] [Green Version]
  155. Sarathi, M.P.; Dutta, M.K.; Singh, A.; Travieso, C.M. Blood vessel inpainting based technique for efficient localization and segmentation of optic disc in digital fundus images. Biomed. Signal Process. Control 2016, 25, 108–117. [Google Scholar] [CrossRef]
  156. Christodoulidis, A.; Hurtut, T.; Tahar, H.B.; Cheriet, F. A Multi-scale Tensor Voting Approach for Small Retinal Vessel Segmentation in High Resolution Fundus Images. Comput. Med. Imaging Graph. 2016, 52, 28–43. [Google Scholar] [CrossRef]
  157. Orlando, J.I.; Prokofyeva, E.; Blaschko, M.B. A Discriminatively Trained Fully Connected Conditional Random Field Model for Blood Vessel Segmentation in Fundus Images. IEEE Trans. Biomed. Eng. 2016, 64, 16–27. [Google Scholar] [CrossRef] [Green Version]
  158. Ramani, R.G.; Balasubramanian, L. Macula segmentation and fovea localization employing image processing and heuristic based clustering for automated retinal screening. Comput. Methods Programs Biomed. 2018, 160, 153–163. [Google Scholar] [CrossRef]
  159. Khan, K.B.; Khaliq, A.A.; Jalil, A.; Shahid, M. A robust technique based on VLM and Frangi filter for retinal vessel extraction and denoising. PLoS ONE 2018, 13, e0192203. [Google Scholar] [CrossRef] [Green Version]
  160. Chalakkal, R.J.; Abdulla, W.H.; Thulaseedharan, S.S. Automatic detection and segmentation of optic disc and fovea in retinal images. IET Image Process. 2018, 12, 2100–2110. [Google Scholar] [CrossRef]
  161. Xia, H.; Jiang, F.; Deng, S.; Xin, J.; Doss, R. Mapping Functions Driven Robust Retinal Vessel Segmentation via Training Patches. IEEE Access 2018, 6, 61973–61982. [Google Scholar] [CrossRef]
  162. Thakur, N.; Juneja, M. Optic disc and optic cup segmentation from retinal images using hybrid approach. Expert Syst. Appl. 2019, 127, 308–322. [Google Scholar] [CrossRef]
  163. Khawaja, A.; Khan, T.M.; Naveed, K.; Naqvi, S.S.; Rehman, N.U.; Nawaz, S.J. An Improved Retinal Vessel Segmentation Framework Using Frangi Filter Coupled With the Probabilistic Patch Based Denoiser. IEEE Access 2019, 7, 164344–164361. [Google Scholar] [CrossRef]
  164. Naqvi, S.S.; Fatima, N.; Khan, T.M.; Rehman, Z.U.; Khan, M.A. Automatic Optic Disc Detection and Segmentation by Variational Active Contour Estimation in Retinal Fundus Images. Signal Image Video Process. 2019, 13, 1191–1198. [Google Scholar] [CrossRef]
  165. Wang, X.; Jiang, X.; Ren, J. Blood Vessel Segmentation from Fundus Image by a Cascade Classification Framework. Pattern Recognit. 2019, 88, 331–341. [Google Scholar] [CrossRef]
  166. Dharmawan, D.A.; Ng, B.P.; Rahardja, S. A new optic disc segmentation method using a modified Dolph-Chebyshev matched filter. Biomed. Signal Process. Control 2020, 59, 101932. [Google Scholar] [CrossRef]
  167. Carmona, E.J.; Molina-Casado, J.M. Simultaneous segmentation of the optic disc and fovea in retinal images using evolutionary algorithms. Neural Comput. Appl. 2020, 33, 1903–1921. [Google Scholar] [CrossRef]
  168. Saroj, S.K.; Kumar, R.; Singh, N.P. Fréchet PDF based Matched Filter Approach for Retinal Blood Vessels Segmentation. Comput. Methods Programs Biomed. 2020, 194, 105490. [Google Scholar] [CrossRef] [PubMed]
  169. Guo, X.; Wang, H.; Lu, X.; Hu, X.; Che, S.; Lu, Y. Robust Fovea Localization Based on Symmetry Measure. J. Biomed. Health Inform. 2020, 24, 2315–2326. [Google Scholar] [CrossRef]
  170. Zhang, Y.; Lian, J.; Rong, L.; Jia, W.; Li, C.; Zheng, Y. Even faster retinal vessel segmentation via accelerated singular value decomposition. Neural Comput. Appl. 2020, 32, 1893–1902. [Google Scholar] [CrossRef]
  171. Zhou, C.; Zhang, X.; Chen, H. A New Robust Method for Blood Vessel Segmentation in Retinal fundus Images based on weighted line detector and Hidden Markov model. Comput. Methods Programs Biomed. 2020, 187, 105231. [Google Scholar] [CrossRef]
  172. Kim, G.; Lee, S.; Kim, S.M. Automated segmentation and quantitative analysis of optic disc and fovea in fundus images. Multimed. Tools Appl. 2021, 80, 24205–24220. [Google Scholar] [CrossRef]
  173. Marin, D.; Aquino, A.; Gegundez, M.; Bravo, J.M. A New Supervised Method for Blood Vessel Segmentation in Retinal Images by Using Gray-Level and Moment Invariants-Based Features. IEEE Trans. Med. Imaging 2011, 30, 146–158. [Google Scholar] [CrossRef] [Green Version]
  174. Wang, S.; Yin, Y.; Cao, G.; Wei, B.; Zheng, Y.; Yang, G. Hierarchical retinal blood vessel segmentation based on feature and ensemble learning. Neurocomputing 2015, 149, 708–717. [Google Scholar] [CrossRef]
  175. Liskowski, P.; Krawiec, K. Segmenting Retinal Blood Vessels With Deep Neural Networks. IEEE Trans. Med. Imaging 2016, 35, 2369–2380. [Google Scholar] [CrossRef]
  176. Barkana, B.D.; Saricicek, I.; Yildirim, B. Performance analysis of descriptive statistical features in retinal vessel segmentation via fuzzy logic, ANN, SVM, and classifier fusion. Knowl.-Based Syst. 2017, 118, 165–176. [Google Scholar] [CrossRef]
  177. Mo, J.; Zhang, L. Multi-level deep supervised networks for retinal vessel segmentation. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 2181–2193. [Google Scholar] [CrossRef]
  178. Fu, H.; Cheng, J.; Xu, Y.; Wong, D.W.K.; Liu, J.; Cao, X. Joint Optic Disc and Cup Segmentation Based on Multi-Label Deep Network and Polar Transformation. IEEE Trans. Med. Imaging 2018, 37, 1597–1605. [Google Scholar] [CrossRef] [Green Version]
  179. Al-Bander, B.; Al-Nuaimy, W.; Williams, B.M.; Zheng, Y. Multiscale sequential convolutional neural networks for simultaneous detection of fovea and optic disc. Biomed. Signal Process. Control 2018, 40, 91–101. [Google Scholar] [CrossRef]
  180. Guo, Y.; Budak, U.; Sengur, A. A Novel Retinal Vessel Detection Approach Based on Multiple Deep Convolution Neural Networks. Comput. Methods Programs Biomed. 2018, 167, 43–48. [Google Scholar] [CrossRef]
  181. Guo, Y.; Budak, U.; Vespa, L.J.; Khorasani, E.S.; Şengur, A. A Retinal Vessel Detection Approach Using Convolution Neural Network with Reinforcement Sample Learning Strategy. Measurement 2018, 125, 586–591. [Google Scholar] [CrossRef]
  182. Hu, K.; Zhang, Z.; Niu, X.; Zhang, Y.; Cao, C.; Xiao, F.; Gao, X. Retinal vessel segmentation of color fundus images using multiscale convolutional neural network with an improved cross-entropy loss function. Neurocomputing 2018, 309, 179–191. [Google Scholar] [CrossRef]
  183. Jiang, Z.; Zhang, H.; Wang, Y.; Ko, S.B. Retinal blood vessel segmentation using fully convolutional network with transfer learning. Comput. Med. Imaging Graph. 2018, 68, 1–15. [Google Scholar] [CrossRef]
  184. Oliveira, A.; Pereira, S.; Silva, C.A. Retinal Vessel Segmentation based on Fully Convolutional Neural Networks. Expert Syst. Appl. 2018, 112, 229–242. [Google Scholar] [CrossRef] [Green Version]
  185. Sangeethaa, S.N.; Maheswari, P.U. An Intelligent Model for Blood Vessel Segmentation in Diagnosing DR Using CNN. J. Med. Syst. 2018, 42, 175. [Google Scholar] [CrossRef]
  186. Wang, L.; Liu, H.; Lu, Y.; Chen, H.; Zhang, J.; Pu, J. A coarse-to-fine deep learning framework for optic disc segmentation in fundus images. Biomed. Signal Process. Control 2019, 51, 82–89. [Google Scholar] [CrossRef]
  187. Jebaseeli, T.J.; Durai, C.A.D.; Peter, J.D. Retinal Blood Vessel Segmentation from Diabetic Retinopathy Images using Tandem PCNN Model and Deep Learning Based SVM. Optik 2019, 199, 163328. [Google Scholar] [CrossRef]
  188. Chakravarty, A.; Sivaswamy, J. RACE-net: A Recurrent Neural Network for Biomedical Image Segmentation. J. Biomed. Health Inform. 2019, 23, 1151–1162. [Google Scholar] [CrossRef]
  189. Lian, S.; Li, L.; Lian, G.; Xiao, X.; Luo, Z.; Li, S. A Global and Local Enhanced Residual U-Net for Accurate Retinal Vessel Segmentation. IEEE/ACM Trans. Comput. Biol. Bioinform. 2019, 18, 852–862. [Google Scholar] [CrossRef] [PubMed]
  190. Gu, Z.; Cheng, J.; Fu, H.; Zhou, K.; Hao, H.; Zhao, Y.; Zhang, T.; Gao, S.; Liu, J. CE-Net: Context Encoder Network for 2D Medical Image Segmentation. IEEE Trans. Med. Imaging 2019, 38, 2281–2292. [Google Scholar] [CrossRef] [Green Version]
  191. Noh, K.J.; Park, S.J.; Lee, S. Scale-Space Approximated Convolutional Neural Networks for Retinal Vessel Segmentation. Comput. Methods Programs Biomed. 2019, 178, 237–246. [Google Scholar] [CrossRef]
  192. Jiang, Y.; Tan, N.; Peng, T. Optic Disc and Cup Segmentation Based on Deep Convolutional Generative Adversarial Networks. IEEE Access 2019, 7, 64483–64493. [Google Scholar] [CrossRef]
  193. Wang, C.; Zhao, Z.; Ren, Q.; Xu, Y.; Yu, Y. Dense U-net Based on Patch-Based Learning for Retinal Vessel Segmentation. Entropy 2019, 21, 168. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  194. Jiang, Y.; Duan, L.; Cheng, J.; Gu, Z.; Xia, H.; Fu, H.; Li, C.; Liu, J. JointRCNN: A Region-based Convolutional Neural Network for Optic Disc and Cup Segmentation. IEEE Trans. Biomed. Eng. 2019, 67, 335–343. [Google Scholar] [CrossRef]
195. Gao, J.; Jiang, Y.; Zhang, H.; Wang, F. Joint disc and cup segmentation based on recurrent fully convolutional network. PLoS ONE 2020, 15, e0238983.
196. Feng, S.; Zhuo, Z.; Pan, D.; Tian, Q. CcNet: A Cross-connected Convolutional Network for Segmenting Retinal Vessels Using Multi-scale Features. Neurocomputing 2020, 392, 268–276.
197. Jin, B.; Liu, P.; Wang, P.; Shi, L.; Zhao, J. Optic Disc Segmentation Using Attention-Based U-Net and the Improved Cross-Entropy Convolutional Neural Network. Entropy 2020, 22, 844.
198. Tamim, N.; Elshrkawey, M.; Azim, G.A.; Nassar, H. Retinal Blood Vessel Segmentation Using Hybrid Features and Multi-Layer Perceptron Neural Networks. Symmetry 2020, 12, 894.
199. Sreng, S.; Maneerat, N.; Hamamoto, K.; Win, K.Y. Deep Learning for Optic Disc Segmentation and Glaucoma Diagnosis on Retinal Images. Appl. Sci. 2020, 10, 4916.
200. Bian, X.; Luo, X.; Wang, C.; Liu, W.; Lin, X. Optic Disc and Optic Cup Segmentation Based on Anatomy Guided Cascade Network. Comput. Methods Programs Biomed. 2020, 197, 105717.
201. Almubarak, H.; Bazi, Y.; Alajlan, N. Two-Stage Mask-RCNN Approach for Detecting and Segmenting the Optic Nerve Head, Optic Disc, and Optic Cup in Fundus Images. Appl. Sci. 2020, 10, 3833.
202. Tian, Z.; Zheng, Y.; Li, X.; Du, S.; Xu, X. Graph convolutional network based optic disc and cup segmentation on fundus images. Biomed. Opt. Express 2020, 11, 3043–3057.
203. Zhang, L.; Lim, C.P. Intelligent optic disc segmentation using improved particle swarm optimization and evolving ensemble models. Appl. Soft Comput. 2020, 92, 106328.
204. Xie, Z.; Ling, T.; Yang, Y.; Shu, R.; Liu, B.J. Optic Disc and Cup Image Segmentation Utilizing Contour-Based Transformation and Sequence Labeling Networks. J. Med. Syst. 2020, 44, 96.
205. Bengani, S.; Jothi, J.A.A. Automatic segmentation of optic disc in retinal fundus images using semi-supervised deep learning. Multimed. Tools Appl. 2021, 80, 3443–3468.
206. Hasan, M.K.; Alam, M.A.; Elahi, M.T.E.; Roy, S.; Martí, R. DRNet: Segmentation and localization of optic disc and Fovea from diabetic retinopathy image. Artif. Intell. Med. 2021, 111, 102001.
207. Gegundez-Arias, M.E.; Marin-Santos, D.; Perez-Borrero, I.; Vasallo-Vazquez, M.J. A new deep learning method for blood vessel segmentation in retinal images based on convolutional kernels and modified U-Net model. Comput. Methods Programs Biomed. 2021, 205, 106081.
208. Veena, H.N.; Muruganandham, A.; Kumaran, T.S. A Novel Optic Disc and Optic Cup Segmentation Technique to Diagnose Glaucoma using Deep Learning Convolutional Neural Network over Retinal Fundus Images. J. King Saud Univ. Comput. Inf. Sci. 2021; in press.
209. Wang, L.; Gu, J.; Chen, Y.; Liang, Y.; Zhang, W.; Pu, J.; Chen, H. Automated segmentation of the optic disc from fundus images using an asymmetric deep learning network. Pattern Recognit. 2021, 112, 107810.
210. Lu, C.K.; Tang, T.B.; Laude, A.; Deary, I.J.; Dhillon, B.; Murray, A.F. Quantification of parapapillary atrophy and optic disc. Investig. Ophthalmol. Vis. Sci. 2011, 52, 4671–4677.
211. Cheng, J.; Tao, D.; Liu, J.; Wong, D.W.K.; Tan, N.M.; Wong, T.Y.; Saw, S.M. Peripapillary atrophy detection by sparse biologically inspired feature manifold. IEEE Trans. Med. Imaging 2012, 31, 2355–2365.
212. Lu, C.K.; Tang, T.B.; Laude, A.; Dhillon, B.; Murray, A.F. Parapapillary atrophy and optic disc region assessment (PANDORA): Retinal imaging tool for assessment of the optic disc and parapapillary atrophy. J. Biomed. Opt. 2012, 17, 106010.
213. Septiarini, A.; Harjoko, A.; Pulungan, R.; Ekantini, R. Automatic detection of peripapillary atrophy in retinal fundus images using statistical features. Biomed. Signal Process. Control 2018, 45, 151–159.
214. Li, H.; Li, H.; Kang, J.; Feng, Y.; Xu, J. Automatic detection of parapapillary atrophy and its association with children myopia. Comput. Methods Programs Biomed. 2020, 183, 105090.
215. Chai, Y.; Liu, H.; Xu, J. A new convolutional neural network model for peripapillary atrophy area segmentation from retinal fundus images. Appl. Soft Comput. 2020, 86, 105890.
216. Son, J.; Shin, J.Y.; Kim, H.D.; Jung, K.H.; Park, K.H.; Park, S.J. Development and Validation of Deep Learning Models for Screening Multiple Abnormal Findings in Retinal Fundus Images. Ophthalmology 2020, 127, 85–94.
217. Sharma, A.; Agrawal, M.; Roy, S.D.; Gupta, V.; Vashisht, P.; Sidhu, T. Deep learning to diagnose Peripapillary Atrophy in retinal images along with statistical features. Biomed. Signal Process. Control 2021, 64, 102254.
218. Fu, H.; Li, F.; Orlando, J.I.; Bogunović, H.; Sun, X.; Liao, J.; Xu, Y.; Zhang, S.; Zhang, X. PALM: PAthoLogic Myopia Challenge. IEEE Dataport 2019.
219. Kanan, C.; Cottrell, G.W. Color-to-Grayscale: Does the Method Matter in Image Recognition? PLoS ONE 2012, 7, e29740.
220. Zuiderveld, K.J. Contrast Limited Adaptive Histogram Equalization. In Graphics Gems; Heckbert, P.S., Ed.; Elsevier: Amsterdam, The Netherlands, 1994; pp. 474–485.
Figure 1. Sensors used in fundus cameras: (a) the commonly used single-layered sensor coated with a color filter array in a Bayer pattern and (b) the less commonly used three-layered direct imaging sensor. R: Red, G: Green, B: Blue.
Figure 2. A color fundus photograph. We can see the retinal landmarks, i.e., the optic disc, the macula, and the central retinal blood vessels, on the circular and colored foreground, surrounded by a dark background. Source of image: publicly available DRIVE data set, image file 21_training.tif.
Figure 3. Pros and cons of different color channels. First column, i.e., (a,e,i,m): RGB fundus photographs; second column, i.e., (b,f,j,n): red channel images; third column, i.e., (c,g,k,o): green channel images; and fourth column, i.e., (d,h,l,p): blue channel images. Choroidal blood vessels are clearly visible in the red channel, as shown inside the red box in (b). Lens flares are more visible in the blue channel, as shown inside the blue box in (d). Atrophy- and diabetic retinopathy-affected areas are more clearly visible in the green channel, as shown inside the green boxes in (g,k). As shown inside the blue box in (l), the blue channel is prone to underexposure. The red channel is prone to overexposure, as shown inside the red box in (m). Source of fundus photographs: (a) PALM/PALM-Training400/H0025.jpg, (e) PALM/PALM-Training400/P0010.jpg, (i) UoA_DR/94/94.jpg, and (m) CHASE_DB1/images/Image_11L.jpg.
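For readers who wish to reproduce the channel views shown in Figure 3, extracting the individual channels and a weighted grayscale image can be sketched as follows. This is only a minimal illustration: the file name is a placeholder, and the luminance weights are the conventional BT.601 values rather than a choice prescribed by this work (see Kanan and Cottrell [219] for a comparison of color-to-grayscale mappings).

```python
# Minimal sketch: splitting an RGB fundus photograph into its red, green, and
# blue channels and computing a weighted grayscale image.
# The file name is a placeholder; the 0.299/0.587/0.114 weights are the
# conventional BT.601 luminance weights, not a choice prescribed by this work.
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("fundus.jpg").convert("RGB"), dtype=np.float64)

red, green, blue = rgb[..., 0], rgb[..., 1], rgb[..., 2]
gray = 0.299 * red + 0.587 * green + 0.114 * blue

# Save each single-channel image for visual inspection, as in Figure 3.
for name, channel in (("red", red), ("green", green), ("blue", blue), ("gray", gray)):
    Image.fromarray(channel.astype(np.uint8)).save(f"fundus_{name}.png")
```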
Figure 4. There is a noticeable overlap between the histograms of the foreground and the background in the blue channel. The histograms overlap slightly in the green channel. In the red channel, the histograms do not overlap and are easily separable. Therefore, by setting pixels lower than the threshold value θ to 0 and pixels higher than θ to 255, we can easily generate the background mask from the red-channel image. Source of fundus photograph: STARE data set, image file im0139.ppm.
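The background-mask generation described in the caption of Figure 4 amounts to a simple global threshold on the red channel. A minimal sketch follows, assuming an 8-bit red channel; the numeric value of θ used here is illustrative only and would in practice be chosen from the gap between the two histogram modes.

```python
# Sketch of the background-mask generation described in Figure 4: pixels of the
# red channel below the threshold theta become 0 (background), pixels above it
# become 255 (foreground). The value of theta here is illustrative only.
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("im0139.ppm").convert("RGB"))
red = rgb[..., 0]

theta = 40  # assumed value; choose it from the gap between the two histogram modes
mask = np.where(red > theta, 255, 0).astype(np.uint8)

Image.fromarray(mask).save("background_mask.png")
```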
Figure 5. Failure case of OD segmentation. (a) RGB image overlaid with the reference OD mask; (b) RGB image, (c) grayscale image, (d) red channel image, (e) green channel image, and (f) blue channel image, each overlaid with an inaccurately predicted OD mask. Source of image: PALM/P0159.jpg.
Figure 6. Failure case of macula segmentation. (a) RGB image overlaid with the reference macula mask; (b) RGB image, (c) grayscale image, (d) red channel image, (e) green channel image, and (f) blue channel image, each overlaid with an inaccurately predicted macula mask. Source of image: PALM/P0159.jpg.
Figure 7. Examples of masks generated by the color-specific U-Nets for segmenting the CRBVs. The reference mask and the generated masks are shown in the first and third rows, whereas the different color channels overlaid with these masks are shown in the second and fourth rows. (a) Reference mask and (d) RGB fundus photograph overlaid with it; (b) mask generated by the U-Net trained on RGB fundus photographs and (e) RGB image overlaid with it; (c) mask generated by the U-Net trained on grayscale fundus photographs and (f) grayscale image overlaid with it; (g) mask generated by the U-Net trained on red channel fundus photographs and (j) red channel image overlaid with it; (h) mask generated by the U-Net trained on green channel fundus photographs and (k) green channel image overlaid with it; (i) mask generated by the U-Net trained on blue channel fundus photographs and (l) blue channel image overlaid with it. Source of image: CHASE_DB1/Image_14R.jpg.
Figure 8. Example of an overexposed red channel and an underexposed blue channel of a retinal image. The first row shows the different channels of a fundus photograph, and the second row shows their corresponding histograms. Histograms of inappropriately exposed images are highly skewed and have low entropy. Source of image: CHASE_DB1/Image_11R.jpg.
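The two statistics referred to in the caption of Figure 8, and later used to count inappropriately exposed channels in Table 15, can be computed per channel as sketched below. The use of scipy.stats.skew over all pixels and of base-2 entropy over a 256-bin histogram is an assumption of this sketch; the exact implementation details are not specified here.

```python
# Sketch: per-channel exposure statistics as discussed in Figure 8.
# |skewness| > 6.0 or entropy < 3.0 flags a channel as inappropriately exposed
# (thresholds as reported in Table 15). Histogram binning and the entropy base
# are assumptions, not necessarily those of the original implementation.
import numpy as np
from scipy.stats import skew

def exposure_stats(channel):
    pixels = channel.astype(np.float64).ravel()
    skewness = skew(pixels)

    hist, _ = np.histogram(pixels, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    return skewness, entropy

def is_inappropriately_exposed(channel, skew_thresh=6.0, entropy_thresh=3.0):
    s, h = exposure_stats(channel)
    return abs(s) > skew_thresh or h < entropy_thresh
```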
Table 1. Color distribution in previous works for the automatic detection of retinal diseases and segmentation of retinal landmarks and atrophy. NN: Neural network-based approaches, Non-NN: Non-neural network-based approaches. Each cell lists the number of papers as Total/Q1/Q2; the parenthesized values in the column headers give the total number of surveyed papers in each group.
| Color | Disease Detection, Non-NN (42/30/12) | Disease Detection, NN (35/28/7) | Segmentation, Non-NN (77/56/21) | Segmentation, NN (37/28/9) |
| RGB | 18/9/9 | 29/24/5 | 14/10/4 | 28/22/6 |
| R | 7/5/2 | 2/1/1 | 15/9/6 | 0/0/0 |
| G | 22/11/11 | 4/2/2 | 59/43/16 | 10/8/2 |
| B | 3/3/0 | 1/1/0 | 8/7/1 | 0/0/0 |
| Gr | 6/3/3 | 5/4/1 | 7/5/2 | 3/0/3 |
Table 2. Color channel used in non-neural network (Non-NN) based previous works for automatically detecting diseases in retina. DR: Diabetic Retinopathy, AMD: Age-related Macular Degeneration, DME: Diabetic Macular Edema, R: Red, G: Green, B: Blue, Gr: Grayscale weighted summation of Red, Green and Blue.
YearGlaucomaAMD & DMEDR
ReferenceColorReferenceColorReferenceColor
2000 Hipwell [26]G, B
2002 Walter [27]G
2004 Klein [28]RGB
2007 Scott [29]RGB
2008 Kose [30]RGBAbramoff [31]RGB
Gangnon [32]RGB
2010Bock [33]GKose [34]Gr
Muramatsu [35]R, G
2011Joshi [36]RAgurto [37]GFadzil [38]RGB
2012Mookiah [39]GrHijazi [40]RGB
Deepak [41]RGB, G
2013 Akram [42]RGB
Oh [43]RGB
2014Fuente-Arriaga [44]R, G Akram [45]RGB
Noronha [46]RGBMookiah [47]GCasanova [48]RGB
2015Issac [49]R, GMookiah [50]R, GJaya [51]RGB
Oh [52]G, Gr
2016Singh [53]G, GrAcharya [54]GBhaskaranand [55]RGB
Phan [56]G
Wang [57]RGB
2017Acharya [58]GrAcharya [59]GLeontidis [60]RGB
Maheshwari [61]R, G, B, Gr
Maheshwari [62]G
2018 Saha [63]G, RGB
2020 Colomer [64]G
Table 3. Color channel used in neural network (NN) based previous works for automatically detecting diseases in retina. DR: Diabetic Retinopathy, AMD: Age-related Macular Degeneration, DME: Diabetic Macular Edema, Gr: Grayscale weighted summation of Red, Green and Blue, R: Red, G: Green, B: Blue.
YearGlaucomaAMD & DMEDR
ReferenceColorReferenceColorReferenceColor
1996 Gardner [65]RGB
2009Nayak [66]R, G
2014 Ganesan [67]Gr
2015 Mookiah [68]G
2016Asoka [69]Gr Abramoff [70]RGB
Gulshan [71]RGB
2017Zilly [72]G, GrBurlina [73]RGBAbbas [74]RGB
Ting [75]RGBBurlina [76]RGBGargeya [77]RGB
Quellec [78]RGB
2018Ferreira [79]RGB, GrGrassmann [80]RGBKhojasteh [81]RGB
Raghavendra [82]RGBBurlina [83]RGBLam [84]RGB
Li [85]RGB
Fu [86]RGB
Liu [87]RGB
2019Liu [88]R, G, B, GrKeel [89]RGBLi [90]RGB
Diaz-Pinto [91]RGBPeng [92]RGBZeng [93]RGB
Matsuba [94]RGBRaman [95]RGB
2020 Singh [96]RGB
Gonzalez-Gonzalo [97]RGB
2021Gheisari [98]RGB
Table 4. Color channel used in non-neural network (Non-NN) based previous works for segmenting retinal landmarks. OD: Optic Disc, CRBVs: Central Retinal Blood Vessels, Gr: Grayscale weighted summation of Red, Green and Blue, R: Red, G: Green, B: Blue.
YearODMacula/FoveaCRBVs
ReferenceColorReferenceColorReferenceColor
1989 Chaudhuri [99]G
1999 Sinthanayothin [100]RGB
2000 Hoover [10]RGB
2004Lowell [101]GrLi [102]RGB
2006 Soares [103]G
2007Xu [104]RGBNiemeijer [105]GRicci [106]G
Abramoff [107]R, G, BTobin [108]G
2008Youssif [109]RGB
2009 Niemeijer [110]GCinsdikici [111]G
2010Welfer [112]G
Aquino [113]R, G
Zhu [114]RGB
2011Lu [115]R, GWelfer [116]GCheung [117]RGB
Kose [118]RGB
You [119]G
2012 Bankhead [120]G
Qureshi [121]GFraz [4]G
Fraz [122]G
Li [123]RGB
Lin [124]G
Moghimirad [125]G
2013Morales [126]GrChin [127]RGBAkram [128]G
Gegundez [129]GBadsha [130]Gr
Budai [6]G
Fathi [131]G
Fraz [132]G
Nayebifar [133]G, B
Nguyen [134]G
Wang [135]G
2014Giachetti [136]G, GrKao [137]GBekkers [138]G
Aquino [139]R, GCheng [140]G
2015Miri [141]R, G, B Dai [142]G
Mary [143]R Hassanien [144]G
Harangi [145]RGB, G Imani [146]G
Lazar [147]G
Roychowdhury [148]G
2016Mittapalli [149]RGBMedhi [150]RAslani [151]G
Roychowdhury [152]GOnal [153]GrBahadarkhan [154]G
Sarathi [155]R, G Christodoulidis [156]G
Orlando [157]G
2018 Ramani [158]GKhan [159]G
Chalakkal [160]RGBXia [161]G
2019Thakur [162]Gr Khawaja [163]G
Naqvi [164]R, G Wang [165]RGB
2020Dharmawan [166]R, G, BCarmona [167]GSaroj [168]Gr
Guo [169]GZhang [170]G
Zhou [171]G
2021 Kim [172]G
Table 5. Color channel used in neural network (NN) based previous works for segmenting retinal landmarks. OD: Optic Disc, CRBVs: Central Retinal Blood Vessels, Gr: Grayscale weighted summation of Red, Green and Blue, R: Red, G: Green, B: Blue.
YearODMacula/FoveaCRBVs
ReferenceColorReferenceColorReferenceColor
2011 Marin [173]G
2015 Wang [174]G
2016 Liskowski [175]G
2017 Barkana [176]G
Mo [177]RGB
2018Fu [178]RGBAl-Bander [179]GrGuo [180]G
Guo [181]RGB
Hu [182]RGB
Jiang [183]RGB
Oliveira [184]G
Sangeethaa [185]G
2019Wang [186]RGB, Gr Jebaseeli [187]G
Chakravarty [188]RGB Lian [189]RGB
Gu [190]RGB Noh [191]RGB
Tan [192]RGB Wang [193]Gr
Jiang [194]RGB
2020Gao [195]RGB Feng [196]G
Jin [197]RGB Tamim [198]G
Sreng [199]RGB
Bian [200]RGB
Almubarak [201]RGB
Tian [202]RGB
Zhang [203]RGB
Xie [204]RGB
2021Bengani [205]RGBHasan [206]RGBGegundez-Arias [207]RGB
Veena [208]RGB
Wang [209]RGB
Table 6. Color channel used for automatically detecting atrophy in retina. R: Red, G: Green, B: Blue.
YearNon-NNNN
ReferenceColorReferenceColor
2011Lu [210]R, B
2012Cheng [211]R, G, B
Lu [212]R, B
2018Septiarini [213]R, G
2020Li [214]R, G, BChai [215]RGB
Son [216]RGB
2021 Sharma [217]RGB
Table 7. Data sets used in our experiments.
| Data Set | Height × Width | Field-of-View | Fundus Camera | Number of Images |
| CHASE_DB1 | 960 × 999 | 30° | Nidek NM-200-D | 28 |
| DRIVE | 584 × 565 | 45° | Canon CR5-NM 3CCD | 40 |
| HRF | 3264 × 4928 | 45° | Canon CR-1 | 45 |
| IDRiD | 2848 × 4288 | 50° | Kowa VX-10α | 81 |
| PALM | 1444 × 1444 and 2056 × 2124 | 45° | Zeiss VISUCAM 500 NM | 400 |
| STARE | 605 × 700 | 35° | TopCon TRV-50 | 20 |
| UoA-DR | 2056 × 2124 | 45° | Zeiss VISUCAM 500 | 200 |
Table 8. Training, validation and test sets used in our experiments. Values are the numbers of images in each split.
| Segmentation of | Data Set | Training Set | Validation Set | Test Set |
| CRBVs | CHASE_DB1 | 7 | 5 | 16 |
| CRBVs | DRIVE | 10 | 8 | 22 |
| CRBVs | HRF | 11 | 9 | 25 |
| CRBVs | STARE | 5 | 4 | 11 |
| CRBVs | UoA-DR | 50 | 40 | 110 |
| Optic Disc | IDRiD | 20 | 16 | 45 |
| Optic Disc | PALM | 100 | 80 | 220 |
| Optic Disc | UoA-DR | 50 | 40 | 110 |
| Macula | PALM | 100 | 80 | 220 |
| Macula | UoA-DR | 50 | 40 | 110 |
| Atrophy | PALM | 100 | 80 | 220 |
Table 9. Performance (mean ± standard deviation) of U-Nets using different color channels for segmenting optic disc.
| Color | Dataset | Precision | Recall | AUC | MIoU |
| RGB | IDRiD | 0.897 ± 0.018 | 0.877 ± 0.010 | 0.940 ± 0.005 | 0.896 ± 0.003 |
| RGB | PALM | 0.859 ± 0.009 | 0.862 ± 0.013 | 0.933 ± 0.006 | 0.873 ± 0.003 |
| RGB | UoA_DR | 0.914 ± 0.012 | 0.868 ± 0.006 | 0.936 ± 0.003 | 0.895 ± 0.004 |
| Gray | IDRiD | 0.868 ± 0.020 | 0.902 ± 0.016 | 0.952 ± 0.007 | 0.892 ± 0.004 |
| Gray | PALM | 0.758 ± 0.020 | 0.737 ± 0.025 | 0.870 ± 0.011 | 0.788 ± 0.009 |
| Gray | UoA_DR | 0.907 ± 0.007 | 0.840 ± 0.005 | 0.923 ± 0.002 | 0.876 ± 0.008 |
| Red | IDRiD | 0.892 ± 0.006 | 0.872 ± 0.008 | 0.936 ± 0.004 | 0.892 ± 0.004 |
| Red | PALM | 0.798 ± 0.004 | 0.824 ± 0.012 | 0.912 ± 0.006 | 0.837 ± 0.003 |
| Red | UoA_DR | 0.900 ± 0.007 | 0.854 ± 0.006 | 0.928 ± 0.003 | 0.885 ± 0.003 |
| Green | IDRiD | 0.837 ± 0.023 | 0.906 ± 0.009 | 0.953 ± 0.004 | 0.882 ± 0.008 |
| Green | PALM | 0.708 ± 0.012 | 0.718 ± 0.013 | 0.859 ± 0.006 | 0.771 ± 0.004 |
| Green | UoA_DR | 0.895 ± 0.009 | 0.821 ± 0.010 | 0.912 ± 0.005 | 0.869 ± 0.006 |
| Blue | IDRiD | 0.810 ± 0.038 | 0.715 ± 0.011 | 0.858 ± 0.005 | 0.799 ± 0.010 |
| Blue | PALM | 0.662 ± 0.032 | 0.692 ± 0.019 | 0.845 ± 0.009 | 0.748 ± 0.008 |
| Blue | UoA_DR | 0.873 ± 0.012 | 0.800 ± 0.009 | 0.901 ± 0.004 | 0.851 ± 0.002 |
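For completeness, the per-image metrics reported in Table 9 and in the following Tables 10, 11 and 12 can be computed from a predicted probability map and a binary reference mask roughly as sketched below. The 0.5 binarization threshold, the use of scikit-learn's roc_auc_score, and the averaging of the IoU over the foreground and background classes are illustrative assumptions, not necessarily the exact implementation behind these tables.

```python
# Sketch: per-image segmentation metrics of the kind reported in Tables 9-12.
# The binarization threshold, the AUC implementation, and the MIoU definition
# (mean of foreground and background IoU) are assumptions of this sketch.
import numpy as np
from sklearn.metrics import roc_auc_score

def segmentation_metrics(prob_map, reference, threshold=0.5):
    pred = (prob_map >= threshold).astype(np.uint8).ravel()
    ref = reference.astype(np.uint8).ravel()

    tp = np.sum((pred == 1) & (ref == 1))
    fp = np.sum((pred == 1) & (ref == 0))
    fn = np.sum((pred == 0) & (ref == 1))
    tn = np.sum((pred == 0) & (ref == 0))

    precision = tp / (tp + fp + 1e-8)
    recall = tp / (tp + fn + 1e-8)
    # AUC is computed on the raw probabilities
    # (assumes both classes are present in the reference mask).
    auc = roc_auc_score(ref, prob_map.ravel())

    # Mean IoU over the foreground and background classes.
    iou_fg = tp / (tp + fp + fn + 1e-8)
    iou_bg = tn / (tn + fp + fn + 1e-8)
    miou = (iou_fg + iou_bg) / 2.0
    return precision, recall, auc, miou
```

Averaging such per-image values over a test set, and repeating the procedure over several training runs, yields mean ± standard deviation figures of the kind reported in these tables.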
Table 10. Performance (mean ± standard deviation) of U-Nets using different color channels for segmenting CRBVs.
| Color | Dataset | Precision | Recall | AUC | MIoU |
| RGB | CHASE_DB1 | 0.795 ± 0.005 | 0.638 ± 0.004 | 0.840 ± 0.002 | 0.696 ± 0.018 |
| RGB | DRIVE | 0.851 ± 0.007 | 0.519 ± 0.009 | 0.781 ± 0.004 | 0.696 ± 0.013 |
| RGB | HRF | 0.730 ± 0.017 | 0.633 ± 0.007 | 0.838 ± 0.005 | 0.651 ± 0.021 |
| RGB | STARE | 0.822 ± 0.009 | 0.488 ± 0.010 | 0.766 ± 0.006 | 0.654 ± 0.011 |
| RGB | UoA_DR | 0.373 ± 0.003 | 0.341 ± 0.008 | 0.669 ± 0.005 | 0.556 ± 0.004 |
| Gray | CHASE_DB1 | 0.757 ± 0.019 | 0.635 ± 0.016 | 0.834 ± 0.009 | 0.648 ± 0.040 |
| Gray | DRIVE | 0.864 ± 0.014 | 0.529 ± 0.014 | 0.786 ± 0.008 | 0.673 ± 0.032 |
| Gray | HRF | 0.721 ± 0.032 | 0.617 ± 0.008 | 0.825 ± 0.005 | 0.605 ± 0.038 |
| Gray | STARE | 0.810 ± 0.021 | 0.522 ± 0.022 | 0.784 ± 0.011 | 0.619 ± 0.031 |
| Gray | UoA_DR | 0.373 ± 0.007 | 0.298 ± 0.022 | 0.648 ± 0.012 | 0.540 ± 0.009 |
| Red | CHASE_DB1 | 0.507 ± 0.018 | 0.412 ± 0.007 | 0.703 ± 0.005 | 0.602 ± 0.001 |
| Red | DRIVE | 0.713 ± 0.026 | 0.391 ± 0.016 | 0.705 ± 0.010 | 0.637 ± 0.005 |
| Red | HRF | 0.535 ± 0.027 | 0.349 ± 0.014 | 0.680 ± 0.008 | 0.581 ± 0.004 |
| Red | STARE | 0.646 ± 0.040 | 0.271 ± 0.011 | 0.649 ± 0.008 | 0.563 ± 0.005 |
| Red | UoA_DR | 0.304 ± 0.011 | 0.254 ± 0.012 | 0.621 ± 0.006 | 0.539 ± 0.002 |
| Green | CHASE_DB1 | 0.781 ± 0.017 | 0.676 ± 0.021 | 0.858 ± 0.007 | 0.691 ± 0.059 |
| Green | DRIVE | 0.862 ± 0.011 | 0.541 ± 0.026 | 0.794 ± 0.012 | 0.703 ± 0.047 |
| Green | HRF | 0.754 ± 0.018 | 0.662 ± 0.020 | 0.856 ± 0.008 | 0.647 ± 0.077 |
| Green | STARE | 0.829 ± 0.018 | 0.558 ± 0.028 | 0.806 ± 0.011 | 0.662 ± 0.052 |
| Green | UoA_DR | 0.384 ± 0.007 | 0.326 ± 0.023 | 0.662 ± 0.012 | 0.552 ± 0.011 |
| Blue | CHASE_DB1 | 0.581 ± 0.024 | 0.504 ± 0.023 | 0.751 ± 0.010 | 0.638 ± 0.004 |
| Blue | DRIVE | 0.771 ± 0.016 | 0.449 ± 0.015 | 0.736 ± 0.008 | 0.657 ± 0.007 |
| Blue | HRF | 0.473 ± 0.016 | 0.279 ± 0.016 | 0.633 ± 0.007 | 0.558 ± 0.004 |
| Blue | STARE | 0.446 ± 0.014 | 0.242 ± 0.018 | 0.608 ± 0.007 | 0.535 ± 0.003 |
| Blue | UoA_DR | 0.316 ± 0.010 | 0.271 ± 0.015 | 0.630 ± 0.007 | 0.540 ± 0.002 |
Table 11. Performance (mean ± standard deviation) of U-Nets using different color channels for segmenting macula.
| Color | Dataset | Precision | Recall | AUC | MIoU |
| RGB | PALM | 0.732 ± 0.016 | 0.649 ± 0.029 | 0.825 ± 0.014 | 0.753 ± 0.009 |
| RGB | UoA_DR | 0.804 ± 0.027 | 0.713 ± 0.043 | 0.858 ± 0.021 | 0.794 ± 0.012 |
| Gray | PALM | 0.712 ± 0.024 | 0.638 ± 0.016 | 0.819 ± 0.007 | 0.744 ± 0.003 |
| Gray | UoA_DR | 0.811 ± 0.017 | 0.712 ± 0.018 | 0.858 ± 0.008 | 0.796 ± 0.005 |
| Red | PALM | 0.719 ± 0.013 | 0.648 ± 0.015 | 0.823 ± 0.007 | 0.749 ± 0.005 |
| Red | UoA_DR | 0.768 ± 0.006 | 0.726 ± 0.013 | 0.863 ± 0.006 | 0.790 ± 0.003 |
| Green | PALM | 0.685 ± 0.020 | 0.641 ± 0.004 | 0.820 ± 0.002 | 0.739 ± 0.005 |
| Green | UoA_DR | 0.791 ± 0.013 | 0.693 ± 0.011 | 0.848 ± 0.005 | 0.783 ± 0.005 |
| Blue | PALM | 0.676 ± 0.020 | 0.637 ± 0.019 | 0.817 ± 0.009 | 0.734 ± 0.002 |
| Blue | UoA_DR | 0.801 ± 0.035 | 0.649 ± 0.013 | 0.826 ± 0.006 | 0.769 ± 0.012 |
Table 12. Performance (mean ± standard deviation) of U-Nets using different color channels for segmenting atrophy.
| Color | Dataset | Precision | Recall | AUC | MIoU |
| RGB | PALM | 0.719 ± 0.033 | 0.638 ± 0.030 | 0.814 ± 0.014 | 0.707 ± 0.019 |
| Gray | PALM | 0.630 ± 0.021 | 0.571 ± 0.025 | 0.777 ± 0.012 | 0.658 ± 0.039 |
| Red | PALM | 0.514 ± 0.010 | 0.430 ± 0.029 | 0.705 ± 0.013 | 0.596 ± 0.015 |
| Green | PALM | 0.695 ± 0.009 | 0.627 ± 0.032 | 0.808 ± 0.015 | 0.714 ± 0.011 |
| Blue | PALM | 0.711 ± 0.015 | 0.578 ± 0.016 | 0.785 ± 0.008 | 0.687 ± 0.018 |
Table 13. Number of cases where a U-Net marks OD and macula correctly in the masks. N: Total number of fundus photographs in the test set.
| Segmentation for | N | RGB | Gray | Red | Green | Blue |
| Optic Disc (OD) | 375 | 329 | 324 | 316 | 303 | 297 |
| Macula | 330 | 270 | 265 | 271 | 265 | 267 |
Table 14. Number of cases where a U-Net marks multiple places as OD and macula in the masks. N: Total number of fundus photographs in the test set.
| Segmentation for | N | RGB | Gray | Red | Green | Blue |
| Optic Disc (OD) | 375 | 29 | 26 | 43 | 46 | 43 |
| Macula | 330 | 17 | 25 | 14 | 17 | 14 |
Table 15. Number of inappropriately exposed fundus photographs. N: Total number of RGB fundus photographs in the test set of a specific data set.
Data SetNNumber of Cases in Each Color Channel
Where | skewness | > 6.0 Where entropy < 3.0
GrayRedGreenBlueGrayRedGreenBlue
CHASE_DB1280100130403
DRIVE40012000103
HRF4500000002
IDRiD81020600023
PALM40000140002121
STARE20020100004
UoA-DR2000002200088