Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs?
Abstract
1. Introduction
2. Fundus Photography
3. Previous Works on Diagnosing Retinal Disease Automatically
4. Experimental Setup
4.1. Hardware & Software Tools
4.2. Data Sets
4.3. Image Pre-Processing
4.4. Setup for U-Net
4.5. Evaluation Metrics
5. Performance of Color Channel Specific U-Net
- Would it be worthwhile to develop cameras with only one color channel rather than red, green, and blue, possibly customized for retinal analysis?
- Could a representation of the spectrum more detailed than RGB improve the automatic analysis of retinas? RGB captures the part of the spectrum that the human eye can perceive, which may not be all the spectral information an automatic system could exploit.
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Generating Background Mask
- Step-1: Generate a preliminary background mask by global binary thresholding, i.e., by setting each pixel, p, of the single-channel image, I, to zero if its intensity falls below a global threshold and to the maximum intensity otherwise.
- Step-2: Determine the boundary contour of the retina by finding the contour with the maximum area. Note that a contour is a closed curve joining all the continuous points along a boundary that have the same color or intensity (see Figure A3d).
- Step-3: Set the pixels inside the boundary contour to the maximum intensity and those outside to zero in order to generate the final background mask (see Figure A3e).
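The three steps above can be sketched in Python. As a hedge, this sketch substitutes SciPy's connected-component labelling and hole filling for explicit contour extraction (keeping the largest connected region and filling its interior is equivalent to keeping the maximum-area contour and setting its inside to foreground), and the threshold value is an illustrative assumption, not the one used in the paper.

```python
import numpy as np
from scipy import ndimage

def background_mask(gray, threshold=15):
    """Generate a background mask for a single-channel fundus image.

    Step 1: global binary thresholding -> preliminary mask.
    Step 2: keep only the largest connected foreground region
            (stands in for finding the maximum-area contour).
    Step 3: fill the region's interior so every pixel inside the
            retinal boundary is foreground and everything outside is 0.
    """
    # Step 1: preliminary mask by global thresholding.
    prelim = gray > threshold

    # Step 2: label connected regions and keep the largest one.
    labels, n = ndimage.label(prelim)
    if n == 0:
        return np.zeros_like(gray, dtype=np.uint8)
    sizes = ndimage.sum(prelim, labels, range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)

    # Step 3: fill holes inside the retinal boundary.
    filled = ndimage.binary_fill_holes(largest)
    return (filled * 255).astype(np.uint8)

# Toy example: a bright disc on a dark background, with one dark
# pixel inside the retina that the hole filling should recover.
h = w = 64
yy, xx = np.mgrid[:h, :w]
disc = ((yy - 32) ** 2 + (xx - 32) ** 2) <= 20 ** 2
img = (disc * 200).astype(np.uint8)
img[32, 32] = 0
mask = background_mask(img)
```

The dark pixel at (32, 32) falls below the threshold in Step 1 but ends up inside the boundary, so the final mask marks it as foreground.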
Data Set | Precision | Recall | AUC | MIoU |
---|---|---|---|---|
DRIVE | 0.997 | 0.997 | 0.996 | 0.995 |
HRF | 1.000 | 1.000 | 1.000 | 1.000 |
Appendix B. Cropping Out Background
- Step-1: Generate the background mask using the steps described in Appendix A.
- Step-2: Determine the minimum bounding rectangle (MBR) which minimally covers the foreground of the background mask (see Figure A3f).
- Step-3: Crop both the image and the background mask to the MBR (see Figure A3g,h).
- Step-4: Remove background noise from the cropped image by masking it with the cropped background mask (see Figure A3i).
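The cropping steps reduce to a few NumPy operations once the background mask is available. The following is a minimal sketch; the function and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def crop_to_mbr(image, mask):
    """Crop image and mask to the minimum bounding rectangle (MBR)
    of the mask's foreground (Steps 2-3), then zero out residual
    background noise by applying the cropped mask (Step 4)."""
    rows = np.any(mask > 0, axis=1)
    cols = np.any(mask > 0, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]   # first/last foreground row
    c0, c1 = np.where(cols)[0][[0, -1]]   # first/last foreground col
    cropped_img = image[r0:r1 + 1, c0:c1 + 1]
    cropped_mask = mask[r0:r1 + 1, c0:c1 + 1]
    # Step 4: keep only pixels inside the retina.
    cleaned = np.where(cropped_mask > 0, cropped_img, 0)
    return cleaned, cropped_mask

# Toy example: 10x10 image with a foreground square at rows/cols 2..7.
img = np.full((10, 10), 50, dtype=np.uint8)
msk = np.zeros((10, 10), dtype=np.uint8)
msk[2:8, 2:8] = 255
cleaned, cmask = crop_to_mbr(img, msk)   # both become 6x6
```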
Appendix C. Turning Provided Reference Masks into Binary Masks
Segmentation Type | CHASE_DB1 (n/m) | DRIVE (n/m) | HRF (n/m) | IDRiD (n/m) | PALM (n/m) | STARE (n/m) | UoA-DR (n/m)
---|---|---|---|---|---|---|---
CRBVs | 28/0 | 40/0 | 45/0 | 0/0 | 0/0 | 40/0 | 200/200
Optic Disc | 0/0 | 0/0 | 0/0 | 81/0 | 400/0 | 0/0 | 200/200
Macula | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0
Retinal Atrophy | 0/0 | 0/0 | 0/0 | 0/0 | 311/0 | 0/0 | 0/0
Appendix D. Generating Binary Masks for Segmenting Macula
- Step-1: Get the reference mask of a color fundus photograph provided for segmenting the OD.
- Step-2: Generate the background mask by following the steps described in Appendix A.
- Step-3: Remove the background noise outside the foreground of the reference mask by masking it with the background mask.
- Step-4: Turn the reference mask into a binary mask by global thresholding.
- Step-5: Find the boundary contour of the foreground of the binary mask.
- Step-6: Determine the radius, r, of the minimum enclosing circle of the boundary contour.
- Step-7: Draw a circle of radius r at the provided center of the macula.
- Step-8: Set the pixels inside the circle to the maximum intensity and those outside the circle to 0 in order to generate the final reference mask.
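Steps 5–8 can be sketched as follows. As a simplification, the OD radius is estimated as the largest distance from the OD centroid to any OD pixel, a stand-in for the minimum enclosing circle of the boundary contour; the function names and the toy geometry are assumptions for illustration only.

```python
import numpy as np

def macula_mask(od_mask, macula_center):
    """Generate a binary macula mask from a binary optic-disc (OD) mask.

    The OD radius r is approximated by the farthest OD pixel from the
    OD centroid (in place of the minimum enclosing circle, Step 6);
    a circle of radius r is then drawn at the provided macula center
    and binarized (Steps 7-8).
    """
    ys, xs = np.nonzero(od_mask)
    cy, cx = ys.mean(), xs.mean()
    r = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2).max()

    h, w = od_mask.shape
    yy, xx = np.mgrid[:h, :w]
    my, mx = macula_center
    inside = ((yy - my) ** 2 + (xx - mx) ** 2) <= r ** 2
    return (inside * 255).astype(np.uint8)

# Toy example: OD is a filled disc of radius 5 at (20, 20);
# the macula center is provided at (40, 40).
h = w = 64
yy, xx = np.mgrid[:h, :w]
od = ((((yy - 20) ** 2 + (xx - 20) ** 2) <= 5 ** 2) * 255).astype(np.uint8)
mmask = macula_mask(od, (40, 40))
```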
Appendix E. Architecture of U-Net
Layer | Output Shape | # Params |
---|---|---|
Input | (256, 256, 1) | 0 |
Convolution (strides = (1, 1), filters = 16, kernel = (3, 3), activation = ELU) | (256, 256, 16) | 160 |
Dropout (0.1) | (256, 256, 16) | 0 |
Convolution (strides = (1, 1), filters = 16, kernel = (3, 3), activation = ELU, name = C1) | (256, 256, 16) | 2320 |
Convolution (strides = (2, 2), filters = 16, kernel = (3, 3), activation = ELU) | (128, 128, 16) | 2320 |
Convolution (strides = (1, 1), filters = 32, kernel = (3, 3), activation = ELU) | (128, 128, 32) | 4640 |
Dropout (0.1) | (128, 128, 32) | 0 |
Convolution (strides = (1, 1), filters = 32, kernel = (3, 3), activation = ELU, name = C2) | (128, 128, 32) | 9248 |
Convolution (strides = (2, 2), filters = 32, kernel = (3, 3), activation = ELU) | (64, 64, 32) | 9248 |
Convolution (strides = (1, 1), filters = 64, kernel = (3, 3), activation = ELU) | (64, 64, 64) | 18,496 |
Dropout (0.2) | (64, 64, 64) | 0 |
Convolution (strides = (1, 1), filters = 64, kernel = (3, 3), activation = ELU, name = C3) | (64, 64, 64) | 36,928 |
Convolution (strides = (2, 2), filters = 64, kernel = (3, 3), activation = ELU) | (32, 32, 64) | 36,928 |
Convolution (strides = (1, 1), filters = 128, kernel = (3, 3), activation = ELU) | (32, 32, 128) | 73,856 |
Dropout (0.2) | (32, 32, 128) | 0 |
Convolution (strides = (1, 1), filters = 128, kernel = (3, 3), activation = ELU, name = C4) | (32, 32, 128) | 147,584 |
Convolution (strides = (2, 2), filters = 128, kernel = (3, 3), activation = ELU) | (16, 16, 128) | 147,584 |
Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU) | (16, 16, 256) | 295,168 |
Dropout (0.3) | (16, 16, 256) | 0 |
Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU, name = C5) | (16, 16, 256) | 590,080 |
Convolution (strides = (2, 2), filters = 256, kernel = (3, 3), activation = ELU) | (8, 8, 256) | 590,080 |
Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU) | (8, 8, 256) | 590,080 |
Dropout (0.3) | (8, 8, 256) | 0 |
Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU) | (8, 8, 256) | 590,080 |
Transposed Convolution (strides = (2, 2), filters = 256, kernel = (2, 2), activation = ELU, name = U1) | (16, 16, 256) | 262,400 |
Concatenation (C5, U1) | (16, 16, 512) | 0 |
Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU) | (16, 16, 256) | 1,179,904 |
Dropout (0.3) | (16, 16, 256) | 0 |
Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU) | (16, 16, 256) | 590,080 |
Transposed Convolution (strides = (2, 2), filters = 128, kernel = (2, 2), activation = ELU, name = U2) | (32, 32, 128) | 131,200 |
Concatenation (C4, U2) | (32, 32, 256) | 0 |
Convolution (strides = (1, 1), filters = 128, kernel = (3, 3), activation = ELU) | (32, 32, 128) | 295,040 |
Dropout (0.2) | (32, 32, 128) | 0 |
Convolution (strides = (1, 1), filters = 128, kernel = (3, 3), activation = ELU) | (32, 32, 128) | 147,584 |
Transposed Convolution (strides = (2, 2), filters = 64, kernel = (2, 2), activation = ELU, name = U3) | (64, 64, 64) | 32,832 |
Concatenation (C3, U3) | (64, 64, 128) | 0 |
Convolution (strides = (1, 1), filters = 64, kernel = (3, 3), activation = ELU) | (64, 64, 64) | 73,792 |
Dropout (0.2) | (64, 64, 64) | 0 |
Convolution (strides = (1, 1), filters = 64, kernel = (3, 3), activation = ELU) | (64, 64, 64) | 36,928 |
Transposed Convolution (strides = (2, 2), filters = 32, kernel = (2, 2), activation = ELU, name = U4) | (128, 128, 32) | 8224 |
Concatenation (C2, U4) | (128, 128, 64) | 0 |
Convolution (strides = (1, 1), filters = 32, kernel = (3, 3), activation = ELU) | (128, 128, 32) | 18,464 |
Dropout (0.1) | (128, 128, 32) | 0 |
Convolution (strides = (1, 1), filters = 32, kernel = (3, 3), activation = ELU) | (128, 128, 32) | 9248 |
Transposed Convolution (strides = (2, 2), filters = 16, kernel = (2, 2), activation = ELU, name = U5) | (256, 256, 16) | 2064 |
Concatenation (C1, U5) | (256, 256, 32) | 0 |
Convolution (strides = (1, 1), filters = 16, kernel = (3, 3), activation = ELU) | (256, 256, 16) | 4624 |
Dropout (0.1) | (256, 256, 16) | 0 |
Convolution (strides = (1, 1), filters = 16, kernel = (3, 3), activation = ELU) | (256, 256, 16) | 2320 |
Convolution (strides = (1, 1), filters = 1, kernel = (1, 1), activation = Sigmoid, name = Output) | (256, 256, 1) | 17 |
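The table above can be reproduced as a compact Keras builder. This is a minimal sketch assembled from the table alone: ELU activations, strided 3×3 convolutions for downsampling (no max pooling), 2×2 transposed convolutions for upsampling, and skip connections C1–C5. Anything beyond the table (loss, optimizer, initializers) is unspecified here.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_unet(input_shape=(256, 256, 1)):
    """Build the U-Net exactly as listed in the architecture table."""
    def conv(x, f, strides=1):
        # 3x3 'same' convolution with ELU, as used throughout the table.
        return layers.Conv2D(f, 3, strides=strides, padding="same",
                             activation="elu")(x)

    inputs = layers.Input(input_shape)
    x, skips = inputs, []
    # Encoder: two 3x3 convs (with dropout in between), then a
    # strided 3x3 conv halves the spatial resolution.
    for f, rate in zip((16, 32, 64, 128, 256),
                       (0.1, 0.1, 0.2, 0.2, 0.3)):
        x = conv(x, f)
        x = layers.Dropout(rate)(x)
        x = conv(x, f)            # skip connections C1..C5
        skips.append(x)
        x = conv(x, f, strides=2)
    # Bottleneck at 8x8.
    x = conv(x, 256)
    x = layers.Dropout(0.3)(x)
    x = conv(x, 256)
    # Decoder: transposed conv (U1..U5), concatenate the matching
    # skip, then two 3x3 convs with dropout in between.
    for f, rate, skip in zip((256, 128, 64, 32, 16),
                             (0.3, 0.2, 0.2, 0.1, 0.1),
                             reversed(skips)):
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same",
                                   activation="elu")(x)
        x = layers.Concatenate()([x, skip])
        x = conv(x, f)
        x = layers.Dropout(rate)(x)
        x = conv(x, f)
    # 1x1 sigmoid convolution produces the segmentation map.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)

model = build_unet()
```

Summing the table's parameter column gives 5,939,521 trainable parameters, which the built model should match layer for layer.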
Appendix F. Inaccurate Masks in UoA-DR for Segmenting CRBVs
Appendix G. Performance of U-Nets Trained and Tested on Individual Data Set
CHASE_DB1

Data Split | Color | Precision | Recall | AUC | MIoU
---|---|---|---|---|---
25% Training, 20% Validation, 55% Test | RGB | 0.569 ± 0.203 | 0.448 ± 0.041 | 0.729 ± 0.059 | 0.537 ± 0.046
 | GRAY | 0.615 ± 0.081 | 0.412 ± 0.041 | 0.735 ± 0.024 | 0.503 ± 0.051
 | RED | 0.230 ± 0.030 | 0.332 ± 0.053 | 0.613 ± 0.010 | 0.474 ± 0.006
 | GREEN | 0.782 ± 0.026 | 0.526 ± 0.020 | 0.792 ± 0.007 | 0.606 ± 0.045
 | BLUE | 0.451 ± 0.114 | 0.370 ± 0.018 | 0.683 ± 0.032 | 0.485 ± 0.008
25% Training, 20% Validation, 25% Test | RGB | 0.571 ± 0.207 | 0.441 ± 0.045 | 0.724 ± 0.062 | 0.538 ± 0.048
 | GRAY | 0.624 ± 0.080 | 0.407 ± 0.036 | 0.731 ± 0.023 | 0.502 ± 0.050
 | RED | 0.244 ± 0.037 | 0.342 ± 0.046 | 0.619 ± 0.009 | 0.474 ± 0.007
 | GREEN | 0.791 ± 0.026 | 0.515 ± 0.022 | 0.787 ± 0.008 | 0.602 ± 0.044
 | BLUE | 0.449 ± 0.116 | 0.362 ± 0.017 | 0.677 ± 0.033 | 0.484 ± 0.008
55% Training, 20% Validation, 25% Test | RGB | 0.816 ± 0.012 | 0.541 ± 0.024 | 0.784 ± 0.012 | 0.684 ± 0.018
 | GRAY | 0.803 ± 0.002 | 0.515 ± 0.026 | 0.775 ± 0.010 | 0.671 ± 0.016
 | RED | 0.389 ± 0.039 | 0.363 ± 0.027 | 0.680 ± 0.021 | 0.504 ± 0.028
 | GREEN | 0.838 ± 0.005 | 0.583 ± 0.017 | 0.806 ± 0.009 | 0.687 ± 0.038
 | BLUE | 0.648 ± 0.019 | 0.383 ± 0.012 | 0.698 ± 0.006 | 0.601 ± 0.010

DRIVE

Data Split | Color | Precision | Recall | AUC | MIoU
---|---|---|---|---|---
25% Training, 20% Validation, 55% Test | RGB | 0.796 ± 0.036 | 0.443 ± 0.065 | 0.749 ± 0.028 | 0.622 ± 0.072
 | GRAY | 0.835 ± 0.016 | 0.419 ± 0.022 | 0.739 ± 0.009 | 0.590 ± 0.066
 | RED | 0.362 ± 0.098 | 0.342 ± 0.072 | 0.628 ± 0.015 | 0.476 ± 0.007
 | GREEN | 0.846 ± 0.010 | 0.463 ± 0.025 | 0.758 ± 0.009 | 0.671 ± 0.027
 | BLUE | 0.537 ± 0.078 | 0.297 ± 0.028 | 0.660 ± 0.022 | 0.512 ± 0.026
25% Training, 20% Validation, 25% Test | RGB | 0.839 ± 0.035 | 0.442 ± 0.068 | 0.749 ± 0.030 | 0.626 ± 0.073
 | GRAY | 0.874 ± 0.018 | 0.413 ± 0.023 | 0.737 ± 0.009 | 0.592 ± 0.068
 | RED | 0.400 ± 0.108 | 0.352 ± 0.073 | 0.637 ± 0.014 | 0.476 ± 0.009
 | GREEN | 0.896 ± 0.009 | 0.462 ± 0.025 | 0.760 ± 0.009 | 0.676 ± 0.028
 | BLUE | 0.575 ± 0.080 | 0.300 ± 0.024 | 0.663 ± 0.020 | 0.512 ± 0.027
55% Training, 20% Validation, 25% Test | RGB | 0.896 ± 0.005 | 0.539 ± 0.010 | 0.787 ± 0.006 | 0.732 ± 0.014
 | GRAY | 0.895 ± 0.004 | 0.528 ± 0.012 | 0.781 ± 0.005 | 0.731 ± 0.006
 | RED | 0.660 ± 0.085 | 0.316 ± 0.037 | 0.674 ± 0.017 | 0.520 ± 0.038
 | GREEN | 0.904 ± 0.003 | 0.533 ± 0.008 | 0.786 ± 0.003 | 0.718 ± 0.024
 | BLUE | 0.783 ± 0.042 | 0.386 ± 0.044 | 0.705 ± 0.021 | 0.645 ± 0.037

HRF

Data Split | Color | Precision | Recall | AUC | MIoU
---|---|---|---|---|---
25% Training, 20% Validation, 55% Test | RGB | 0.792 ± 0.006 | 0.537 ± 0.021 | 0.799 ± 0.013 | 0.597 ± 0.024
 | GRAY | 0.776 ± 0.004 | 0.497 ± 0.017 | 0.781 ± 0.011 | 0.579 ± 0.025
 | RED | 0.204 ± 0.024 | 0.258 ± 0.017 | 0.591 ± 0.014 | 0.467 ± 0.002
 | GREEN | 0.821 ± 0.013 | 0.578 ± 0.012 | 0.824 ± 0.006 | 0.624 ± 0.037
 | BLUE | 0.155 ± 0.002 | 0.361 ± 0.010 | 0.580 ± 0.001 | 0.482 ± 0.008
25% Training, 20% Validation, 25% Test | RGB | 0.759 ± 0.006 | 0.535 ± 0.023 | 0.797 ± 0.014 | 0.593 ± 0.023
 | GRAY | 0.741 ± 0.005 | 0.503 ± 0.017 | 0.782 ± 0.011 | 0.576 ± 0.025
 | RED | 0.197 ± 0.021 | 0.245 ± 0.017 | 0.586 ± 0.013 | 0.467 ± 0.002
 | GREEN | 0.794 ± 0.016 | 0.581 ± 0.013 | 0.824 ± 0.006 | 0.619 ± 0.036
 | BLUE | 0.149 ± 0.004 | 0.368 ± 0.013 | 0.578 ± 0.002 | 0.480 ± 0.007
55% Training, 20% Validation, 25% Test | RGB | 0.781 ± 0.008 | 0.608 ± 0.005 | 0.824 ± 0.004 | 0.693 ± 0.013
 | GRAY | 0.768 ± 0.010 | 0.573 ± 0.017 | 0.807 ± 0.009 | 0.677 ± 0.022
 | RED | 0.512 ± 0.009 | 0.271 ± 0.021 | 0.641 ± 0.013 | 0.536 ± 0.011
 | GREEN | 0.788 ± 0.006 | 0.647 ± 0.009 | 0.846 ± 0.003 | 0.674 ± 0.060
 | BLUE | 0.274 ± 0.110 | 0.341 ± 0.047 | 0.620 ± 0.032 | 0.500 ± 0.019

STARE

Data Split | Color | Precision | Recall | AUC | MIoU
---|---|---|---|---|---
25% Training, 20% Validation, 55% Test | RGB | 0.556 ± 0.204 | 0.300 ± 0.073 | 0.659 ± 0.073 | 0.478 ± 0.008
 | GRAY | 0.619 ± 0.050 | 0.283 ± 0.058 | 0.680 ± 0.033 | 0.478 ± 0.017
 | RED | 0.148 ± 0.003 | 0.222 ± 0.033 | 0.516 ± 0.009 | 0.468 ± 0.000
 | GREEN | 0.600 ± 0.242 | 0.351 ± 0.030 | 0.680 ± 0.082 | 0.483 ± 0.019
 | BLUE | 0.167 ± 0.036 | 0.145 ± 0.034 | 0.518 ± 0.021 | 0.469 ± 0.001
25% Training, 20% Validation, 25% Test | RGB | 0.531 ± 0.195 | 0.334 ± 0.082 | 0.672 ± 0.082 | 0.482 ± 0.009
 | GRAY | 0.607 ± 0.055 | 0.314 ± 0.066 | 0.691 ± 0.039 | 0.483 ± 0.020
 | RED | 0.143 ± 0.003 | 0.231 ± 0.038 | 0.512 ± 0.011 | 0.471 ± 0.000
 | GREEN | 0.587 ± 0.243 | 0.376 ± 0.048 | 0.688 ± 0.092 | 0.488 ± 0.024
 | BLUE | 0.164 ± 0.032 | 0.142 ± 0.039 | 0.517 ± 0.020 | 0.472 ± 0.001
55% Training, 20% Validation, 25% Test | RGB | 0.756 ± 0.014 | 0.448 ± 0.031 | 0.749 ± 0.015 | 0.610 ± 0.038
 | GRAY | 0.748 ± 0.010 | 0.504 ± 0.026 | 0.770 ± 0.010 | 0.656 ± 0.017
 | RED | 0.181 ± 0.020 | 0.293 ± 0.069 | 0.558 ± 0.008 | 0.474 ± 0.006
 | GREEN | 0.749 ± 0.013 | 0.550 ± 0.025 | 0.795 ± 0.012 | 0.659 ± 0.038
 | BLUE | 0.163 ± 0.007 | 0.324 ± 0.059 | 0.547 ± 0.006 | 0.469 ± 0.004

UoA-DR

Data Split | Color | Precision | Recall | AUC | MIoU
---|---|---|---|---|---
25% Training, 20% Validation, 55% Test | RGB | 0.320 ± 0.011 | 0.398 ± 0.008 | 0.699 ± 0.006 | 0.541 ± 0.015
 | GRAY | 0.315 ± 0.011 | 0.353 ± 0.016 | 0.675 ± 0.007 | 0.526 ± 0.017
 | RED | 0.203 ± 0.013 | 0.260 ± 0.016 | 0.614 ± 0.006 | 0.516 ± 0.005
 | GREEN | 0.332 ± 0.007 | 0.415 ± 0.018 | 0.705 ± 0.009 | 0.534 ± 0.014
 | BLUE | 0.237 ± 0.012 | 0.260 ± 0.008 | 0.620 ± 0.007 | 0.526 ± 0.006
25% Training, 20% Validation, 25% Test | RGB | 0.313 ± 0.011 | 0.395 ± 0.008 | 0.697 ± 0.005 | 0.540 ± 0.015
 | GRAY | 0.306 ± 0.011 | 0.350 ± 0.016 | 0.673 ± 0.008 | 0.524 ± 0.017
 | RED | 0.201 ± 0.013 | 0.259 ± 0.015 | 0.614 ± 0.006 | 0.516 ± 0.005
 | GREEN | 0.326 ± 0.007 | 0.412 ± 0.017 | 0.704 ± 0.009 | 0.532 ± 0.014
 | BLUE | 0.232 ± 0.011 | 0.257 ± 0.007 | 0.618 ± 0.006 | 0.524 ± 0.005
55% Training, 20% Validation, 25% Test | RGB | 0.333 ± 0.005 | 0.445 ± 0.012 | 0.717 ± 0.004 | 0.557 ± 0.007
 | GRAY | 0.330 ± 0.003 | 0.413 ± 0.014 | 0.700 ± 0.006 | 0.559 ± 0.004
 | RED | 0.289 ± 0.011 | 0.299 ± 0.007 | 0.641 ± 0.004 | 0.543 ± 0.003
 | GREEN | 0.335 ± 0.002 | 0.470 ± 0.010 | 0.728 ± 0.004 | 0.564 ± 0.004
 | BLUE | 0.281 ± 0.012 | 0.280 ± 0.013 | 0.630 ± 0.006 | 0.540 ± 0.004
Appendix H. Effect of CLAHE
Database | Color | Precision | Recall | AUC | MIoU
---|---|---|---|---|---
CHASE_DB1 | RGB | 0.676 ± 0.057 | 0.419 ± 0.037 | 0.727 ± 0.020 | 0.576 ± 0.051
 | GRAY | 0.629 ± 0.078 | 0.406 ± 0.052 | 0.714 ± 0.025 | 0.570 ± 0.060
 | RED | 0.217 ± 0.012 | 0.353 ± 0.026 | 0.611 ± 0.006 | 0.476 ± 0.009
 | GREEN | 0.802 ± 0.017 | 0.530 ± 0.019 | 0.781 ± 0.009 | 0.672 ± 0.023
 | BLUE | 0.589 ± 0.023 | 0.373 ± 0.016 | 0.690 ± 0.006 | 0.556 ± 0.050
DRIVE | RGB | 0.856 ± 0.024 | 0.470 ± 0.017 | 0.750 ± 0.010 | 0.693 ± 0.011
 | GRAY | 0.855 ± 0.021 | 0.464 ± 0.030 | 0.746 ± 0.015 | 0.693 ± 0.024
 | RED | 0.297 ± 0.009 | 0.376 ± 0.017 | 0.619 ± 0.003 | 0.472 ± 0.010
 | GREEN | 0.886 ± 0.006 | 0.509 ± 0.010 | 0.771 ± 0.005 | 0.722 ± 0.004
 | BLUE | 0.504 ± 0.171 | 0.331 ± 0.043 | 0.642 ± 0.031 | 0.551 ± 0.071
HRF | RGB | 0.757 ± 0.014 | 0.533 ± 0.023 | 0.784 ± 0.010 | 0.664 ± 0.026
 | GRAY | 0.730 ± 0.010 | 0.520 ± 0.011 | 0.776 ± 0.006 | 0.655 ± 0.011
 | RED | 0.164 ± 0.002 | 0.311 ± 0.010 | 0.577 ± 0.001 | 0.483 ± 0.005
 | GREEN | 0.791 ± 0.007 | 0.603 ± 0.008 | 0.820 ± 0.003 | 0.705 ± 0.008
 | BLUE | 0.153 ± 0.004 | 0.347 ± 0.022 | 0.576 ± 0.003 | 0.476 ± 0.006
STARE | RGB | 0.579 ± 0.077 | 0.348 ± 0.030 | 0.696 ± 0.020 | 0.497 ± 0.023
 | GRAY | 0.379 ± 0.146 | 0.312 ± 0.067 | 0.624 ± 0.041 | 0.487 ± 0.032
 | RED | 0.157 ± 0.004 | 0.444 ± 0.055 | 0.558 ± 0.010 | 0.456 ± 0.017
 | GREEN | 0.592 ± 0.085 | 0.442 ± 0.021 | 0.742 ± 0.010 | 0.517 ± 0.033
 | BLUE | 0.164 ± 0.005 | 0.327 ± 0.056 | 0.546 ± 0.013 | 0.474 ± 0.003
UoA-DR | RGB | 0.323 ± 0.003 | 0.411 ± 0.004 | 0.699 ± 0.002 | 0.555 ± 0.004
 | GRAY | 0.319 ± 0.003 | 0.372 ± 0.019 | 0.679 ± 0.009 | 0.556 ± 0.005
 | RED | 0.238 ± 0.017 | 0.220 ± 0.014 | 0.598 ± 0.008 | 0.522 ± 0.005
 | GREEN | 0.328 ± 0.009 | 0.438 ± 0.019 | 0.713 ± 0.008 | 0.563 ± 0.004
 | BLUE | 0.262 ± 0.012 | 0.261 ± 0.008 | 0.619 ± 0.004 | 0.535 ± 0.002
- Abbas, Q.; Fondon, I.; Sarmiento, A.; Jimenez, S.; Alemany, P. Automatic recognition of severity level for diagnosis of diabetic retinopathy using deep visual features. Med. Biol. Eng. Comput. 2017, 55, 1959–1974. [Google Scholar] [CrossRef]
- Ting, D.S.W.; Cheung, C.Y.; Lim, G.; Tan, G.S.W.; Quang, N.D.; Gan, A.; Hamzah, H.; Garcia-Franco, R.; Yeo, I.Y.S.; Lee, S.Y.; et al. Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images from Multiethnic Populations with Diabetes. JAMA 2017, 318, 2211–2223. [Google Scholar] [CrossRef]
- Burlina, P.; Pacheco, K.D.; Joshi, N.; Freund, D.E.; Bressler, N.M. Comparing humans and deep learning performance for grading AMD: A study in using universal deep features and transfer learning for automated AMD analysis. Comput. Biol. Med. 2017, 82, 80–86. [Google Scholar] [CrossRef] [Green Version]
- Gargeya, R.; Leng, T. Automated Identification of Diabetic Retinopathy Using Deep Learning. Ophthalmology 2017, 124, 962–969. [Google Scholar] [CrossRef]
- Quellec, G.; Charriere, K.; Boudi, Y.; Cochener, B.; Lamard, M. Deep Image Mining for Diabetic Retinopathy Screening. Med. Image Anal. 2017, 39, 178–193. [Google Scholar] [CrossRef] [Green Version]
- Ferreira, M.V.D.S.; Filho, A.O.D.C.; Sousa, A.D.D.; Silva, A.C.; Gattass, M. Convolutional neural network and texture descriptor-based automatic detection and diagnosis of Glaucoma. Expert Syst. Appl. 2018, 110, 250–263. [Google Scholar] [CrossRef]
- Grassmann, F.; Mengelkamp, J.; Brandl, C.; Harsch, S.; Zimmermann, M.E.; Linkohr, B.; Peters, A.; Heid, I.M.; Palm, C.; Weber, B.H.F. A Deep Learning Algorithm for Prediction of Age-Related Eye Disease Study Severity Scale for Age-Related Macular Degeneration from Color Fundus Photography. Ophthalmology 2018, 125, 1410–1420. [Google Scholar] [CrossRef] [Green Version]
- Khojasteh, P.; Aliahmad, B.; Kumar, D.K. Fundus images analysis using deep features for detection of exudates, hemorrhages and microaneurysms. BMC Ophthalmol. 2018, 18, 288. [Google Scholar] [CrossRef] [Green Version]
- Raghavendra, U.; Fujita, H.; Bhandary, S.V.; Gudigar, A.; Tan, J.H.; Acharya, U.R. Deep Convolution Neural Network for Accurate Diagnosis of Glaucoma Using Digital Fundus Images. Inf. Sci. 2018, 441, 41–49. [Google Scholar] [CrossRef]
- Burlina, P.M.; Joshi, N.; Pacheco, K.D.; Freund, D.E.; Kong, J.; Bressler, N.M. Use of Deep Learning for Detailed Severity Characterization and Estimation of 5-Year Risk Among Patients with Age-Related Macular Degeneration. JAMA Ophthalmol. 2018, 136, 1359–1366. [Google Scholar] [CrossRef] [Green Version]
- Lam, C.; Yu, C.; Huang, L.; Rubin, D. Retinal Lesion Detection with Deep Learning Using Image Patches. Investig. Ophthalmol. Vis. Sci. 2018, 59, 590–596. [Google Scholar] [CrossRef]
- Li, Z.; He, Y.; Keel, S.; Meng, W.; Chang, R.T.; He, M. Efficacy of a Deep Learning System for Detecting Glaucomatous Optic Neuropathy Based on Color Fundus Photographs. Ophthalmology 2018, 125, 1199–1206. [Google Scholar] [CrossRef] [Green Version]
- Fu, H.; Cheng, J.; Xu, Y.; Zhang, C.; Wong, D.W.K.; Liu, J.; Cao, X. Disc-Aware Ensemble Network for Glaucoma Screening From Fundus Image. IEEE Trans. Med. Imaging 2018, 37, 2493–2501. [Google Scholar] [CrossRef] [Green Version]
- Liu, S.; Graham, S.L.; Schulz, A.; Kalloniatis, M.; Zangerl, B.; Cai, W.; Gao, Y.; Chua, B.; Arvind, H.; Grigg, J.; et al. A Deep Learning-Based Algorithm Identifies Glaucomatous Discs Using Monoscopic Fundus Photographs. Ophthalmol. Glaucoma 2018, 1, 15–22. [Google Scholar] [CrossRef]
- Liu, H.; Li, L.; Wormstone, I.M.; Qiao, C.; Zhang, C.; Liu, P.; Li, S.; Wang, H.; Mou, D.; Pang, R.; et al. Development and Validation of a Deep Learning System to Detect Glaucomatous Optic Neuropathy Using Fundus Photographs. JAMA Ophthalmol. 2019, 137, 1353–1360. [Google Scholar] [CrossRef]
- Keel, S.; Li, Z.; Scheetz, J.; Robman, L.; Phung, J.; Makeyeva, G.; Aung, K.; Liu, C.; Yan, X.; Meng, W.; et al. Development and validation of a deep-learning algorithm for the detection of neovascular age-related macular degeneration from colour fundus photographs. Clin. Exp. Ophthalmol. 2019, 47, 1009–1018. [Google Scholar] [CrossRef] [Green Version]
- Li, F.; Liu, Z.; Chen, H.; Jiang, M.; Zhang, X.; Wu, Z. Automatic Detection of Diabetic Retinopathy in Retinal Fundus Photographs Based on Deep Learning Algorithm. Transl. Vis. Sci. Technol. 2019, 8, 4. [Google Scholar] [CrossRef] [Green Version]
- Diaz-Pinto, A.; Morales, S.; Naranjo, V.; Kohler, T.; Mossi, J.M.; Navea, A. CNNs for automatic glaucoma assessment using fundus images: An extensive validation. Biomed. Eng. Online 2019, 18, 29. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Peng, Y.; Dharssi, S.; Chen, Q.; Keenan, T.D.; Agron, E.; Wong, W.T.; Chew, E.Y.; Lu, Z. DeepSeeNet: A Deep Learning Model for Automated Classification of Patient-based Age-related Macular Degeneration Severity from Color Fundus Photographs. Ophthalmology 2019, 126, 565–575. [Google Scholar] [CrossRef] [PubMed]
- Zeng, X.; Chen, H.; Luo, Y.; Ye, W. Automated Diabetic Retinopathy Detection Based on Binocular Siamese-Like Convolutional Neural Network. IEEE Access 2019, 7, 30744–30753. [Google Scholar] [CrossRef]
- Matsuba, S.; Tabuchi, H.; Ohsugi, H.; Enno, H.; Ishitobi, N.; Masumoto, H.; Kiuchi, Y. Accuracy of ultra-wide-field fundus ophthalmoscopy-assisted deep learning, a machine-learning technology, for detecting age-related macular degeneration. Int. Ophthalmol. 2019, 39, 1269–1275. [Google Scholar] [CrossRef] [Green Version]
- Raman, R.; Srinivasan, S.; Virmani, S.; Sivaprasad, S.; Rao, C.; Rajalakshmi, R. Fundus photograph-based deep learning algorithms in detecting diabetic retinopathy. Eye 2019, 33, 97–109. [Google Scholar] [CrossRef] [Green Version]
- Singh, R.K.; Gorantla, R. DMENet: Diabetic Macular Edema diagnosis using Hierarchical Ensemble of CNNs. PLoS ONE 2020, 15, e0220677. [Google Scholar] [CrossRef] [Green Version]
- Gonzalez-Gonzalo, C.; Sanchez-Gutierrez, V.; Hernandez-Martinez, P.; Contreras, I.; Lechanteur, Y.T.; Domanian, A.; Ginneken, B.V.; Sanchez, C.I. Evaluation of a deep learning system for the joint automated detection of diabetic retinopathy and age-related macular degeneration. Acta Ophthalmol. 2020, 98, 368–377. [Google Scholar] [CrossRef]
- Gheisari, S.; Shariflou, S.; Phu, J.; Kennedy, P.J.; Ashish, A.; Kalloniatis, M.; Golzan, S.M. A combined convolutional and recurrent neural network for enhanced glaucoma detection. Sci. Rep. 2021, 11, 1945. [Google Scholar] [CrossRef]
- Chaudhuri, S.; Chatterjee, S.; Katz, N.; Nelson, M.; Goldbaum, M. Detection of blood vessels in retinal images using two-dimensional matched filter. IEEE Trans. Med. Imaging 1989, 8, 263–269. [Google Scholar] [CrossRef] [Green Version]
- Sinthanayothin, C.; Boyce, J.; Cook, H.; Williamson, J. Automated localization of the optic disc, fovea and retinal blood vessels from digital color fundus images. Br. J. Ophthalmol. 1999, 83, 902–910. [Google Scholar] [CrossRef]
- Lowell, J.; Hunter, A.; Steel, D.; Basu, A.; Ryder, R.; Fletcher, E.; Kennedy, L. Optic Nerve Head Segmentation. IEEE Trans. Med. Imaging 2004, 23, 256–264. [Google Scholar] [CrossRef] [Green Version]
- Li, H.; Chutatape, O. Automated feature extraction in color retinal images by a model based approach. IEEE Trans. Biomed. Eng. 2004, 51, 246–254. [Google Scholar] [CrossRef]
- Soares, J.V.B.; Leandro, J.J.G.; Cesar, R.M.; Jelinek, H.F.; Cree, M.J. Retinal Vessel Segmentation Using the 2-D Gabor Wavelet and Supervised Classification. IEEE Trans. Med. Imaging 2006, 25, 1214–1222. [Google Scholar] [CrossRef] [Green Version]
- Xu, J.; Chutatape, O.; Sung, E.; Zheng, C.; Kuan, P.C.T. Optic disk feature extraction via modified deformable model technique for glaucoma analysis. Pattern Recognit. 2007, 40, 2063–2076. [Google Scholar] [CrossRef]
- Niemeijer, M.; Abramoff, M.D.; Ginneken, B.V. Segmentation of the Optic Disc, Macula and Vascular Arch in Fundus Photographs. IEEE Trans. Med. Imaging 2007, 26, 116–127. [Google Scholar] [CrossRef]
- Ricci, E.; Perfetti, R. Retinal Blood Vessel Segmentation Using Line Operators and Support Vector Classification. IEEE Trans. Med. Imaging 2007, 26, 1357–1365. [Google Scholar] [CrossRef]
- Abràmoff, M.D.; Alward, W.L.M.; Greenlee, E.C.; Shuba, L.; Kim, C.Y.; Fingert, J.H.; Kwon, Y.H. Automated segmentation of the optic disc from stereo color photographs using physiologically plausible features. Investig. Ophthalmol. Vis. Sci. 2007, 48, 1665–1673. [Google Scholar] [CrossRef]
- Tobin, K.W.; Chaum, E.; Govindasamy, V.P.; Karnowski, T.P. Detection of Anatomic Structures in Human Retinal Imagery. IEEE Trans. Med. Imaging 2007, 26, 1729–1739. [Google Scholar] [CrossRef] [PubMed]
- Youssif, A.; Ghalwash, A.Z.; Ghoneim, A. Optic Disc Detection From Normalized Digital Fundus Images by Means of a Vessels’ Direction Matched Filter. IEEE Trans. Med. Imaging 2008, 27, 11–18. [Google Scholar] [CrossRef] [PubMed]
- Niemeijer, M.; Abramoff, M.D.; Ginneken, B.V. Fast Detection of the Optic Disc and Fovea in Color Fundus Photographs. Med. Image Anal. 2009, 13, 859–870. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Cinsdikici, M.; Aydin, D. Detection of blood vessels in ophthalmoscope images using MF/ant (matched filter/ant colony) algorithm. Comput. Methods Programs Biomed. 2009, 96, 85–95. [Google Scholar] [CrossRef]
- Welfer, D.; Scharcanski, J.; Kitamura, C.M.; Pizzol, M.M.D.; Ludwig, L.W.B.; Marinho, D.R. Segmentation of the optic disk in color eye fundus images using an adaptive morphological approach. Comput. Biol. Med. 2010, 40, 124–137. [Google Scholar] [CrossRef]
- Aquino, A.; Gegundez-Arias, M.E.; Marín, D. Detecting the Optic Disc Boundary in Digital Fundus Images Using Morphological, Edge Detection, and Feature Extraction Techniques. IEEE Trans. Med. Imaging 2010, 29, 1860–1869. [Google Scholar] [CrossRef] [Green Version]
- Zhu, X.; Rangayyan, R.M.; Ells, A.L. Detection of the Optic Nerve Head in Fundus Images of the Retina Using the Hough Transform for Circles. J. Digit. Imaging 2010, 23, 332–341. [Google Scholar] [CrossRef] [Green Version]
- Lu, S. Accurate and Efficient Optic Disc Detection and Segmentation by a Circular Transformation. IEEE Trans. Med. Imaging 2011, 30, 2126–2133. [Google Scholar] [CrossRef]
- Welfer, D.; Scharcanski, J.; Marinho, D.R. Fovea center detection based on the retina anatomy and mathematical morphology. Comput. Methods Programs Biomed. 2011, 104, 397–409. [Google Scholar] [CrossRef]
- Cheung, C.; Butty, Z.; Tehrani, N.; Lam, W.C. Computer-assisted image analysis of temporal retinal vessel width and tortuosity in retinopathy of prematurity for the assessment of disease severity and treatment outcome. J. Am. Assoc. Pediatr. Ophthalmol. Strabismus 2011, 15, 374–380. [Google Scholar] [CrossRef]
- Kose, C.; Ikibas, C. A personal identification system using retinal vasculature in retinal fundus images. Expert Syst. Appl. 2011, 38, 13670–13681. [Google Scholar] [CrossRef]
- You, X.; Peng, Q.; Yuan, Y.; Cheung, Y.; Lei, J. Segmentation of retinal blood vessels using the radial projection and semi-supervised approach. Pattern Recognit. 2011, 44, 2314–2324. [Google Scholar] [CrossRef]
- Bankhead, P.; Scholfield, N.; Mcgeown, G.; Curtis, T. Fast Retinal Vessel Detection and Measurement Using Wavelets and Edge Location Refinement. PLoS ONE 2012, 7, e32435. [Google Scholar] [CrossRef] [Green Version]
- Qureshi, R.J.; Kovacs, L.; Harangi, B.; Nagy, B.; Peto, T.; Hajdu, A. Combining algorithms for automatic detection of optic disc and macula in fundus images. Comput. Vis. Image Underst. 2012, 116, 138–145. [Google Scholar] [CrossRef]
- Fraz, M.; Barman, S.A.; Remagnino, P.; Hoppe, A.; Basit, A.; Uyyanonvara, B.; Rudnicka, A.R.; Owen, C. An approach to localize the retinal blood vessels using bit planes and centerline detection. Comput. Methods Programs Biomed. 2012, 108, 600–616. [Google Scholar] [CrossRef]
- Li, Q.; You, J.; Zhang, D. Vessel segmentation and width estimation in retinal images using multiscale production of matched filter responses. Expert Syst. Appl. 2012, 39, 7600–7610. [Google Scholar] [CrossRef]
- Lin, K.S.; Tsai, C.L.; Sofka, M.; Chen, S.J.; Lin, W.Y. Retinal Vascular Tree Reconstruction with Anatomical Realism. IEEE Trans. Biomed. Eng. 2012, 59, 3337–3347. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Moghimirad, E.; Rezatofighi, S.H.; Soltanian-Zadeh, H. Retinal vessel segmentation using a multi-scale medialness function. Comput. Biol. Med. 2012, 42, 50–60. [Google Scholar] [CrossRef] [PubMed]
- Morales, S.; Naranjo, V.; Angulo, J.; Alcaniz, M. Automatic Detection of Optic Disc Based on PCA and Mathematical Morphology. IEEE Trans. Med. Imaging 2013, 32, 786–796. [Google Scholar] [CrossRef] [PubMed]
- Chin, K.S.; Trucco, E.; Tan, L.L.; Wilson, P.J. Automatic Fovea Location in Retinal Images Using Anatomical Priors and Vessel Density. Pattern Recognit. Lett. 2013, 34, 1152–1158. [Google Scholar] [CrossRef]
- Akram, M.; Khan, S. Multilayered thresholding-based blood vessel segmentation for screening of diabetic retinopathy. Eng. Comput. 2013, 29, 165–173. [Google Scholar] [CrossRef]
- Gegundez, M.E.; Marin, D.; Bravo, J.M.; Suero, A. Locating the fovea center position in digital fundus images using thresholding and feature extraction techniques. Comput. Med. Imaging Graph. 2013, 37, 386–393. [Google Scholar] [CrossRef]
- Badsha, S.; Reza, A.W.; Tan, K.G.; Dimyati, K. A New Blood Vessel Extraction Technique Using Edge Enhancement and Object Classification. J. Digit. Imaging 2013, 26, 1107–1115. [Google Scholar] [CrossRef] [Green Version]
- Fathi, A.; Naghsh-Nilchi, A. Automatic wavelet-based retinal blood vessels segmentation and vessel diameter estimation. Biomed. Signal Process. Control 2013, 8, 71–80. [Google Scholar] [CrossRef]
- Fraz, M.; Basit, A.; Barman, S.A. Application of Morphological Bit Planes in Retinal Blood Vessel Extraction. J. Digit. Imaging 2013, 26, 274–286. [Google Scholar] [CrossRef] [Green Version]
- Nayebifar, B.; Moghaddam, H.A. A novel method for retinal vessel tracking using particle filters. Comput. Biol. Med. 2013, 43, 541–548. [Google Scholar] [CrossRef]
- Nguyen, U.T.V.; Bhuiyan, A.; Park, L.A.F.; Ramamohanarao, K. An effective retinal blood vessel segmentation method using multi-scale line detection. Pattern Recognit. 2013, 46, 703–715. [Google Scholar] [CrossRef]
- Wang, Y.; Ji, G.; Lin, P. Retinal vessel segmentation using multiwavelet kernels and multiscale hierarchical decomposition. Pattern Recognit. 2013, 46, 2117–2133. [Google Scholar] [CrossRef]
- Giachetti, A.; Ballerini, L.; Trucco, E. Accurate and reliable segmentation of the optic disc in digital fundus images. J. Med. Imaging 2014, 1, 024001. [Google Scholar] [CrossRef] [Green Version]
- Kao, E.F.; Lin, P.C.; Chou, M.C.; Jaw, T.S.; Liu, G.C. Automated detection of fovea in fundus images based on vessel-free zone and adaptive Gaussian template. Comput. Methods Programs Biomed. 2014, 117, 92–103. [Google Scholar] [CrossRef]
- Bekkers, E.; Duits, R.; Berendschot, T.; Romeny, B.T.H. A Multi-Orientation Analysis Approach to Retinal Vessel Tracking. J. Math. Imaging Vis. 2014, 49, 583–610. [Google Scholar] [CrossRef] [Green Version]
- Aquino, A. Establishing the macular grading grid by means of fovea centre detection using anatomical-based and visual-based features. Comput. Biol. Med. 2014, 55, 61–73. [Google Scholar] [CrossRef]
- Cheng, E.; Du, L.; Wu, Y.; Zhu, Y.J.; Megalooikonomou, V.; Ling, H. Discriminative vessel segmentation in retinal images by fusing context-aware hybrid features. Mach. Vis. Appl. 2014, 25, 1779–1792. [Google Scholar] [CrossRef]
- Miri, M.S.; Abràmoff, M.D.; Lee, K.; Niemeijer, M.; Wang, J.K.; Kwon, Y.H.; Garvin, M.K. Multimodal Segmentation of Optic Disc and Cup From SD-OCT and Color Fundus Photographs Using a Machine-Learning Graph-Based Approach. IEEE Trans. Med. Imaging 2015, 34, 1854–1866. [Google Scholar] [CrossRef] [Green Version]
- Dai, P.; Luo, H.; Sheng, H.; Zhao, Y.; Li, L.; Wu, J.; Zhao, Y.; Suzuki, K. A New Approach to Segment Both Main and Peripheral Retinal Vessels Based on Gray-Voting and Gaussian Mixture Model. PLoS ONE 2015, 10, e0127748. [Google Scholar] [CrossRef]
- Mary, M.C.V.S.; Rajsingh, E.B.; Jacob, J.K.K.; Anandhi, D.; Amato, U.; Selvan, S.E. An empirical study on optic disc segmentation using an active contour model. Biomed. Signal Process. Control 2015, 18, 19–29. [Google Scholar] [CrossRef]
- Hassanien, A.E.; Emary, E.; Zawbaa, H.M. Retinal blood vessel localization approach based on bee colony swarm optimization, fuzzy c-means and pattern search. J. Vis. Commun. Image Represent. 2015, 31, 186–196. [Google Scholar] [CrossRef]
- Harangi, B.; Hajdu, A. Detection of the Optic Disc in Fundus Images by Combining Probability Models. Comput. Biol. Med. 2015, 65, 10–24. [Google Scholar] [CrossRef] [PubMed]
- Imani, E.; Javidi, M.; Pourreza, H.R. Improvement of Retinal Blood Vessel Detection Using Morphological Component Analysis. Comput. Methods Programs Biomed. 2015, 118, 263–279. [Google Scholar] [CrossRef] [PubMed]
- Lazar, I.; Hajdu, A. Segmentation of retinal vessels by means of directional response vector similarity and region growing. Comput. Biol. Med. 2015, 66, 209–221. [Google Scholar] [CrossRef]
- Roychowdhury, S.; Koozekanani, D.D.; Parhi, K.K. Iterative Vessel Segmentation of Fundus Images. IEEE Trans. Biomed. Eng. 2015, 62, 1738–1749. [Google Scholar] [CrossRef]
- Pardhasaradhi, M.; Kande, G. Segmentation of optic disk and optic cup from digital fundus images for the assessment of glaucoma. Biomed. Signal Process. Control 2016, 24, 34–46. [Google Scholar] [CrossRef]
- Medhi, J.P.; Dandapat, S. An effective Fovea detection and Automatic assessment of Diabetic Maculopathy in color fundus images. Comput. Biol. Med. 2016, 74, 30–44. [Google Scholar] [CrossRef]
- Aslani, S.; Sarnel, H. A new supervised retinal vessel segmentation method based on robust hybrid features. Biomed. Signal Process. Control 2016, 30, 1–12. [Google Scholar] [CrossRef]
- Roychowdhury, S.; Koozekanani, D.; Kuchinka, S.; Parhi, K. Optic Disc Boundary and Vessel Origin Segmentation of Fundus Images. IEEE J. Biomed. Health Inform. 2016, 20, 1562–1574. [Google Scholar] [CrossRef]
- Onal, S.; Chen, X.; Satamraju, V.; Balasooriya, M.; Dabil-Karacal, H. Automated and simultaneous fovea center localization and macula segmentation using the new dynamic identification and classification of edges model. J. Med. Imaging 2016, 3, 034002. [Google Scholar] [CrossRef] [Green Version]
- Bahadarkhan, K.; Khaliq, A.A.; Shahid, M. A Morphological Hessian Based Approach for Retinal Blood Vessels Segmentation and Denoising Using Region Based Otsu Thresholding. PLoS ONE 2016, 11, e0158996. [Google Scholar] [CrossRef] [Green Version]
- Sarathi, M.P.; Dutta, M.K.; Singh, A.; Travieso, C.M. Blood vessel inpainting based technique for efficient localization and segmentation of optic disc in digital fundus images. Biomed. Signal Process. Control 2016, 25, 108–117. [Google Scholar] [CrossRef]
- Christodoulidis, A.; Hurtut, T.; Tahar, H.B.; Cheriet, F. A Multi-scale Tensor Voting Approach for Small Retinal Vessel Segmentation in High Resolution Fundus Images. Comput. Med. Imaging Graph. 2016, 52, 28–43. [Google Scholar] [CrossRef]
- Orlando, J.I.; Prokofyeva, E.; Blaschko, M.B. A Discriminatively Trained Fully Connected Conditional Random Field Model for Blood Vessel Segmentation in Fundus Images. IEEE Trans. Biomed. Eng. 2016, 64, 16–27. [Google Scholar] [CrossRef] [Green Version]
- Ramani, R.G.; Balasubramanian, L. Macula segmentation and fovea localization employing image processing and heuristic based clustering for automated retinal screening. Comput. Methods Programs Biomed. 2018, 160, 153–163. [Google Scholar] [CrossRef]
- Khan, K.B.; Khaliq, A.A.; Jalil, A.; Shahid, M. A robust technique based on VLM and Frangi filter for retinal vessel extraction and denoising. PLoS ONE 2018, 13, e0192203. [Google Scholar] [CrossRef] [Green Version]
- Chalakkal, R.J.; Abdulla, W.H.; Thulaseedharan, S.S. Automatic detection and segmentation of optic disc and fovea in retinal images. IET Image Process. 2018, 12, 2100–2110. [Google Scholar] [CrossRef]
- Xia, H.; Jiang, F.; Deng, S.; Xin, J.; Doss, R. Mapping Functions Driven Robust Retinal Vessel Segmentation via Training Patches. IEEE Access 2018, 6, 61973–61982. [Google Scholar] [CrossRef]
- Thakur, N.; Juneja, M. Optic disc and optic cup segmentation from retinal images using hybrid approach. Expert Syst. Appl. 2019, 127, 308–322. [Google Scholar] [CrossRef]
- Khawaja, A.; Khan, T.M.; Naveed, K.; Naqvi, S.S.; Rehman, N.U.; Nawaz, S.J. An Improved Retinal Vessel Segmentation Framework Using Frangi Filter Coupled With the Probabilistic Patch Based Denoiser. IEEE Access 2019, 7, 164344–164361. [Google Scholar] [CrossRef]
- Naqvi, S.S.; Fatima, N.; Khan, T.M.; Rehman, Z.U.; Khan, M.A. Automatic Optic Disc Detection and Segmentation by Variational Active Contour Estimation in Retinal Fundus Images. Signal Image Video Process. 2019, 13, 1191–1198. [Google Scholar] [CrossRef]
- Wang, X.; Jiang, X.; Ren, J. Blood Vessel Segmentation from Fundus Image by a Cascade Classification Framework. Pattern Recognit. 2019, 88, 331–341. [Google Scholar] [CrossRef]
- Dharmawan, D.A.; Ng, B.P.; Rahardja, S. A new optic disc segmentation method using a modified Dolph-Chebyshev matched filter. Biomed. Signal Process. Control 2020, 59, 101932. [Google Scholar] [CrossRef]
- Carmona, E.J.; Molina-Casado, J.M. Simultaneous segmentation of the optic disc and fovea in retinal images using evolutionary algorithms. Neural Comput. Appl. 2020, 33, 1903–1921. [Google Scholar] [CrossRef]
- Saroj, S.K.; Kumar, R.; Singh, N.P. Fréchet PDF based Matched Filter Approach for Retinal Blood Vessels Segmentation. Comput. Methods Programs Biomed. 2020, 194, 105490. [Google Scholar] [CrossRef] [PubMed]
- Guo, X.; Wang, H.; Lu, X.; Hu, X.; Che, S.; Lu, Y. Robust Fovea Localization Based on Symmetry Measure. IEEE J. Biomed. Health Inform. 2020, 24, 2315–2326. [Google Scholar] [CrossRef]
- Zhang, Y.; Lian, J.; Rong, L.; Jia, W.; Li, C.; Zheng, Y. Even faster retinal vessel segmentation via accelerated singular value decomposition. Neural Comput. Appl. 2020, 32, 1893–1902. [Google Scholar] [CrossRef]
- Zhou, C.; Zhang, X.; Chen, H. A New Robust Method for Blood Vessel Segmentation in Retinal fundus Images based on weighted line detector and Hidden Markov model. Comput. Methods Programs Biomed. 2020, 187, 105231. [Google Scholar] [CrossRef]
- Kim, G.; Lee, S.; Kim, S.M. Automated segmentation and quantitative analysis of optic disc and fovea in fundus images. Multimed. Tools Appl. 2021, 80, 24205–24220. [Google Scholar] [CrossRef]
- Marin, D.; Aquino, A.; Gegundez, M.; Bravo, J.M. A New Supervised Method for Blood Vessel Segmentation in Retinal Images by Using Gray-Level and Moment Invariants-Based Features. IEEE Trans. Med. Imaging 2011, 30, 146–158. [Google Scholar] [CrossRef] [Green Version]
- Wang, S.; Yin, Y.; Cao, G.; Wei, B.; Zheng, Y.; Yang, G. Hierarchical retinal blood vessel segmentation based on feature and ensemble learning. Neurocomputing 2015, 149, 708–717. [Google Scholar] [CrossRef]
- Liskowski, P.; Krawiec, K. Segmenting Retinal Blood Vessels With Deep Neural Networks. IEEE Trans. Med. Imaging 2016, 35, 2369–2380. [Google Scholar] [CrossRef]
- Barkana, B.D.; Saricicek, I.; Yildirim, B. Performance analysis of descriptive statistical features in retinal vessel segmentation via fuzzy logic, ANN, SVM, and classifier fusion. Knowl.-Based Syst. 2017, 118, 165–176. [Google Scholar] [CrossRef]
- Mo, J.; Zhang, L. Multi-level deep supervised networks for retinal vessel segmentation. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 2181–2193. [Google Scholar] [CrossRef]
- Fu, H.; Cheng, J.; Xu, Y.; Wong, D.W.K.; Liu, J.; Cao, X. Joint Optic Disc and Cup Segmentation Based on Multi-Label Deep Network and Polar Transformation. IEEE Trans. Med. Imaging 2018, 37, 1597–1605. [Google Scholar] [CrossRef] [Green Version]
- Al-Bander, B.; Al-Nuaimy, W.; Williams, B.M.; Zheng, Y. Multiscale sequential convolutional neural networks for simultaneous detection of fovea and optic disc. Biomed. Signal Process. Control 2018, 40, 91–101. [Google Scholar] [CrossRef]
- Guo, Y.; Budak, U.; Sengur, A. A Novel Retinal Vessel Detection Approach Based on Multiple Deep Convolution Neural Networks. Comput. Methods Programs Biomed. 2018, 167, 43–48. [Google Scholar] [CrossRef]
- Guo, Y.; Budak, U.; Vespa, L.J.; Khorasani, E.S.; Şengur, A. A Retinal Vessel Detection Approach Using Convolution Neural Network with Reinforcement Sample Learning Strategy. Measurement 2018, 125, 586–591. [Google Scholar] [CrossRef]
- Hu, K.; Zhang, Z.; Niu, X.; Zhang, Y.; Cao, C.; Xiao, F.; Gao, X. Retinal vessel segmentation of color fundus images using multiscale convolutional neural network with an improved cross-entropy loss function. Neurocomputing 2018, 309, 179–191. [Google Scholar] [CrossRef]
- Jiang, Z.; Zhang, H.; Wang, Y.; Ko, S.B. Retinal blood vessel segmentation using fully convolutional network with transfer learning. Comput. Med. Imaging Graph. 2018, 68, 1–15. [Google Scholar] [CrossRef]
- Oliveira, A.; Pereira, S.; Silva, C.A. Retinal Vessel Segmentation based on Fully Convolutional Neural Networks. Expert Syst. Appl. 2018, 112, 229–242. [Google Scholar] [CrossRef] [Green Version]
- Sangeethaa, S.N.; Maheswari, P.U. An Intelligent Model for Blood Vessel Segmentation in Diagnosing DR Using CNN. J. Med. Syst. 2018, 42, 175. [Google Scholar] [CrossRef]
- Wang, L.; Liu, H.; Lu, Y.; Chen, H.; Zhang, J.; Pu, J. A coarse-to-fine deep learning framework for optic disc segmentation in fundus images. Biomed. Signal Process. Control 2019, 51, 82–89. [Google Scholar] [CrossRef]
- Jebaseeli, T.J.; Durai, C.A.D.; Peter, J.D. Retinal Blood Vessel Segmentation from Diabetic Retinopathy Images using Tandem PCNN Model and Deep Learning Based SVM. Optik 2019, 199, 163328. [Google Scholar] [CrossRef]
- Chakravarty, A.; Sivaswamy, J. RACE-net: A Recurrent Neural Network for Biomedical Image Segmentation. IEEE J. Biomed. Health Inform. 2019, 23, 1151–1162. [Google Scholar] [CrossRef]
- Lian, S.; Li, L.; Lian, G.; Xiao, X.; Luo, Z.; Li, S. A Global and Local Enhanced Residual U-Net for Accurate Retinal Vessel Segmentation. IEEE/ACM Trans. Comput. Biol. Bioinform. 2019, 18, 852–862. [Google Scholar] [CrossRef] [PubMed]
- Gu, Z.; Cheng, J.; Fu, H.; Zhou, K.; Hao, H.; Zhao, Y.; Zhang, T.; Gao, S.; Liu, J. CE-Net: Context Encoder Network for 2D Medical Image Segmentation. IEEE Trans. Med. Imaging 2019, 38, 2281–2292. [Google Scholar] [CrossRef] [Green Version]
- Noh, K.J.; Park, S.J.; Lee, S. Scale-Space Approximated Convolutional Neural Networks for Retinal Vessel Segmentation. Comput. Methods Programs Biomed. 2019, 178, 237–246. [Google Scholar] [CrossRef]
- Jiang, Y.; Tan, N.; Peng, T. Optic Disc and Cup Segmentation Based on Deep Convolutional Generative Adversarial Networks. IEEE Access 2019, 7, 64483–64493. [Google Scholar] [CrossRef]
- Wang, C.; Zhao, Z.; Ren, Q.; Xu, Y.; Yu, Y. Dense U-net Based on Patch-Based Learning for Retinal Vessel Segmentation. Entropy 2019, 21, 168. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Jiang, Y.; Duan, L.; Cheng, J.; Gu, Z.; Xia, H.; Fu, H.; Li, C.; Liu, J. JointRCNN: A Region-based Convolutional Neural Network for Optic Disc and Cup Segmentation. IEEE Trans. Biomed. Eng. 2019, 67, 335–343. [Google Scholar] [CrossRef]
- Gao, J.; Jiang, Y.; Zhang, H.; Wang, F. Joint disc and cup segmentation based on recurrent fully convolutional network. PLoS ONE 2020, 15, e0238983. [Google Scholar] [CrossRef]
- Feng, S.; Zhuo, Z.; Pan, D.; Tian, Q. CcNet: A Cross-connected Convolutional Network for Segmenting Retinal Vessels Using Multi-scale Features. Neurocomputing 2020, 392, 268–276.
- Jin, B.; Liu, P.; Wang, P.; Shi, L.; Zhao, J. Optic Disc Segmentation Using Attention-Based U-Net and the Improved Cross-Entropy Convolutional Neural Network. Entropy 2020, 22, 844.
- Tamim, N.; Elshrkawey, M.; Azim, G.A.; Nassar, H. Retinal Blood Vessel Segmentation Using Hybrid Features and Multi-Layer Perceptron Neural Networks. Symmetry 2020, 12, 894.
- Sreng, S.; Maneerat, N.; Hamamoto, K.; Win, K.Y. Deep Learning for Optic Disc Segmentation and Glaucoma Diagnosis on Retinal Images. Appl. Sci. 2020, 10, 4916.
- Bian, X.; Luo, X.; Wang, C.; Liu, W.; Lin, X. Optic Disc and Optic Cup Segmentation Based on Anatomy Guided Cascade Network. Comput. Methods Programs Biomed. 2020, 197, 105717.
- Almubarak, H.; Bazi, Y.; Alajlan, N. Two-Stage Mask-RCNN Approach for Detecting and Segmenting the Optic Nerve Head, Optic Disc, and Optic Cup in Fundus Images. Appl. Sci. 2020, 10, 3833.
- Tian, Z.; Zheng, Y.; Li, X.; Du, S.; Xu, X. Graph convolutional network based optic disc and cup segmentation on fundus images. Biomed. Opt. Express 2020, 11, 3043–3057.
- Zhang, L.; Lim, C.P. Intelligent optic disc segmentation using improved particle swarm optimization and evolving ensemble models. Appl. Soft Comput. 2020, 92, 106328.
- Xie, Z.; Ling, T.; Yang, Y.; Shu, R.; Liu, B.J. Optic Disc and Cup Image Segmentation Utilizing Contour-Based Transformation and Sequence Labeling Networks. J. Med. Syst. 2020, 44, 96.
- Bengani, S.; Jothi, J.A.A. Automatic segmentation of optic disc in retinal fundus images using semi-supervised deep learning. Multimed. Tools Appl. 2021, 80, 3443–3468.
- Hasan, M.K.; Alam, M.A.; Elahi, M.T.E.; Roy, S.; Martí, R. DRNet: Segmentation and localization of optic disc and Fovea from diabetic retinopathy image. Artif. Intell. Med. 2021, 111, 102001.
- Gegundez-Arias, M.E.; Marin-Santos, D.; Perez-Borrero, I.; Vasallo-Vazquez, M.J. A new deep learning method for blood vessel segmentation in retinal images based on convolutional kernels and modified U-Net model. Comput. Methods Programs Biomed. 2021, 205, 106081.
- Veena, H.N.; Muruganandham, A.; Kumaran, T.S. A Novel Optic Disc and Optic Cup Segmentation Technique to Diagnose Glaucoma using Deep Learning Convolutional Neural Network over Retinal Fundus Images. J. King Saud Univ. Comput. Inf. Sci. 2021; in press.
- Wang, L.; Gu, J.; Chen, Y.; Liang, Y.; Zhang, W.; Pu, J.; Chen, H. Automated segmentation of the optic disc from fundus images using an asymmetric deep learning network. Pattern Recognit. 2021, 112, 107810.
- Lu, C.K.; Tang, T.B.; Laude, A.; Deary, I.J.; Dhillon, B.; Murray, A.F. Quantification of parapapillary atrophy and optic disc. Investig. Ophthalmol. Vis. Sci. 2011, 52, 4671–4677.
- Cheng, J.; Tao, D.; Liu, J.; Wong, D.W.K.; Tan, N.M.; Wong, T.Y.; Saw, S.M. Peripapillary atrophy detection by sparse biologically inspired feature manifold. IEEE Trans. Med. Imaging 2012, 31, 2355–2365.
- Lu, C.K.; Tang, T.B.; Laude, A.; Dhillon, B.; Murray, A.F. Parapapillary atrophy and optic disc region assessment (PANDORA): Retinal imaging tool for assessment of the optic disc and parapapillary atrophy. J. Biomed. Opt. 2012, 17, 106010.
- Septiarini, A.; Harjoko, A.; Pulungan, R.; Ekantini, R. Automatic detection of peripapillary atrophy in retinal fundus images using statistical features. Biomed. Signal Process. Control 2018, 45, 151–159.
- Li, H.; Li, H.; Kang, J.; Feng, Y.; Xu, J. Automatic detection of parapapillary atrophy and its association with children myopia. Comput. Methods Programs Biomed. 2020, 183, 105090.
- Chai, Y.; Liu, H.; Xu, J. A new convolutional neural network model for peripapillary atrophy area segmentation from retinal fundus images. Appl. Soft Comput. J. 2020, 86, 105890.
- Son, J.; Shin, J.Y.; Kim, H.D.; Jung, K.H.; Park, K.H.; Park, S.J. Development and Validation of Deep Learning Models for Screening Multiple Abnormal Findings in Retinal Fundus Images. Ophthalmology 2020, 127, 85–94.
- Sharma, A.; Agrawal, M.; Roy, S.D.; Gupta, V.; Vashisht, P.; Sidhu, T. Deep learning to diagnose Peripapillary Atrophy in retinal images along with statistical features. Biomed. Signal Process. Control 2021, 64, 102254.
- Fu, H.; Li, F.; Orlando, J.I.; Bogunović, H.; Sun, X.; Liao, J.; Xu, Y.; Zhang, S.; Zhang, X. PALM: PAthoLogic Myopia Challenge. IEEE Dataport 2019.
- Kanan, C.; Cottrell, G.W. Color-to-Grayscale: Does the Method Matter in Image Recognition? PLoS ONE 2012, 7, e29740.
- Zuiderveld, K.J. Contrast Limited Adaptive Histogram Equalization. In Graphics Gems; Heckbert, P.S., Ed.; Elsevier: Amsterdam, The Netherlands, 1994; pp. 474–485.
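The tables below compare methods by the channel representation they operate on: the full RGB image, a single red (R), green (G), or blue (B) channel, or a grayscale (Gr) conversion. As a minimal NumPy sketch (not from the paper) of how these variants are obtained from a color fundus photograph, using the common ITU-R BT.601 luma weights for Gr — Kanan and Cottrell (above) discuss how the choice of grayscale conversion can itself affect recognition:

```python
import numpy as np

def channel_variants(rgb):
    """Split an H x W x 3 uint8 RGB fundus photograph into the channel
    representations compared in the tables below: R, G, B, and grayscale (Gr).

    Gr uses the ITU-R BT.601 luma weights (0.299 R + 0.587 G + 0.114 B);
    other grayscale conversions are possible.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    gray = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
    return r, g, b, gray
```

Each returned array is a single-channel H x W image that can be fed to a segmentation network in place of the three-channel input.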
| Color | Disease Detection, Non-NN: Total (42) | Q1 (30) | Q2 (12) | Disease Detection, NN: Total (35) | Q1 (28) | Q2 (7) | Segmentation, Non-NN: Total (77) | Q1 (56) | Q2 (21) | Segmentation, NN: Total (37) | Q1 (28) | Q2 (9) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| RGB | 18 | 9 | 9 | 29 | 24 | 5 | 14 | 10 | 4 | 28 | 22 | 6 |
| R | 7 | 5 | 2 | 2 | 1 | 1 | 15 | 9 | 6 | 0 | 0 | 0 |
| G | 22 | 11 | 11 | 4 | 2 | 2 | 59 | 43 | 16 | 10 | 8 | 2 |
| B | 3 | 3 | 0 | 1 | 1 | 0 | 8 | 7 | 1 | 0 | 0 | 0 |
| Gr | 6 | 3 | 3 | 5 | 4 | 1 | 7 | 5 | 2 | 3 | 0 | 3 |
| Year | Glaucoma | Color | AMD & DME | Color | DR | Color |
|---|---|---|---|---|---|---|
| 2000 | Hipwell [26] | G, B | | | | |
| 2002 | Walter [27] | G | | | | |
| 2004 | Klein [28] | RGB | | | | |
| 2007 | Scott [29] | RGB | | | | |
| 2008 | Kose [30] | RGB | Abramoff [31] | RGB | | |
| | Gangnon [32] | RGB | | | | |
| 2010 | Bock [33] | G | Kose [34] | Gr | | |
| | Muramatsu [35] | R, G | | | | |
| 2011 | Joshi [36] | R | Agurto [37] | G | Fadzil [38] | RGB |
| 2012 | Mookiah [39] | Gr | Hijazi [40] | RGB | | |
| | Deepak [41] | RGB, G | | | | |
| 2013 | Akram [42] | RGB | | | | |
| | Oh [43] | RGB | | | | |
| 2014 | Fuente-Arriaga [44] | R, G | Akram [45] | RGB | | |
| | Noronha [46] | RGB | Mookiah [47] | G | Casanova [48] | RGB |
| 2015 | Issac [49] | R, G | Mookiah [50] | R, G | Jaya [51] | RGB |
| | Oh [52] | G, Gr | | | | |
| 2016 | Singh [53] | G, Gr | Acharya [54] | G | Bhaskaranand [55] | RGB |
| | Phan [56] | G | | | | |
| | Wang [57] | RGB | | | | |
| 2017 | Acharya [58] | Gr | Acharya [59] | G | Leontidis [60] | RGB |
| | Maheshwari [61] | R, G, B, Gr | | | | |
| | Maheshwari [62] | G | | | | |
| 2018 | Saha [63] | G, RGB | | | | |
| 2020 | Colomer [64] | G | | | | |
| Year | Glaucoma | Color | AMD & DME | Color | DR | Color |
|---|---|---|---|---|---|---|
| 1996 | Gardner [65] | RGB | | | | |
| 2009 | Nayak [66] | R, G | | | | |
| 2014 | Ganesan [67] | Gr | | | | |
| 2015 | Mookiah [68] | G | | | | |
| 2016 | Asoka [69] | Gr | Abramoff [70] | RGB | | |
| | Gulshan [71] | RGB | | | | |
| 2017 | Zilly [72] | G, Gr | Burlina [73] | RGB | Abbas [74] | RGB |
| | Ting [75] | RGB | Burlina [76] | RGB | Gargeya [77] | RGB |
| | Quellec [78] | RGB | | | | |
| 2018 | Ferreira [79] | RGB, Gr | Grassmann [80] | RGB | Khojasteh [81] | RGB |
| | Raghavendra [82] | RGB | Burlina [83] | RGB | Lam [84] | RGB |
| | Li [85] | RGB | | | | |
| | Fu [86] | RGB | | | | |
| | Liu [87] | RGB | | | | |
| 2019 | Liu [88] | R, G, B, Gr | Keel [89] | RGB | Li [90] | RGB |
| | Diaz-Pinto [91] | RGB | Peng [92] | RGB | Zeng [93] | RGB |
| | Matsuba [94] | RGB | Raman [95] | RGB | | |
| 2020 | Singh [96] | RGB | | | | |
| | Gonzalez-Gonzalo [97] | RGB | | | | |
| 2021 | Gheisari [98] | RGB | | | | |
| Year | OD | Color | Macula/Fovea | Color | CRBVs | Color |
|---|---|---|---|---|---|---|
| 1989 | Chaudhuri [99] | G | | | | |
| 1999 | Sinthanayothin [100] | RGB | | | | |
| 2000 | Hoover [10] | RGB | | | | |
| 2004 | Lowell [101] | Gr | Li [102] | RGB | | |
| 2006 | Soares [103] | G | | | | |
| 2007 | Xu [104] | RGB | Niemeijer [105] | G | Ricci [106] | G |
| | Abramoff [107] | R, G, B | Tobin [108] | G | | |
| 2008 | Youssif [109] | RGB | | | | |
| 2009 | Niemeijer [110] | G | Cinsdikici [111] | G | | |
| 2010 | Welfer [112] | G | | | | |
| | Aquino [113] | R, G | | | | |
| | Zhu [114] | RGB | | | | |
| 2011 | Lu [115] | R, G | Welfer [116] | G | Cheung [117] | RGB |
| | Kose [118] | RGB | | | | |
| | You [119] | G | | | | |
| 2012 | Bankhead [120] | G | | | | |
| | Qureshi [121] | G | Fraz [4] | G | | |
| | Fraz [122] | G | | | | |
| | Li [123] | RGB | | | | |
| | Lin [124] | G | | | | |
| | Moghimirad [125] | G | | | | |
| 2013 | Morales [126] | Gr | Chin [127] | RGB | Akram [128] | G |
| | Gegundez [129] | G | Badsha [130] | Gr | | |
| | Budai [6] | G | | | | |
| | Fathi [131] | G | | | | |
| | Fraz [132] | G | | | | |
| | Nayebifar [133] | G, B | | | | |
| | Nguyen [134] | G | | | | |
| | Wang [135] | G | | | | |
| 2014 | Giachetti [136] | G, Gr | Kao [137] | G | Bekkers [138] | G |
| | Aquino [139] | R, G | Cheng [140] | G | | |
| 2015 | Miri [141] | R, G, B | Dai [142] | G | | |
| | Mary [143] | R | Hassanien [144] | G | | |
| | Harangi [145] | RGB, G | Imani [146] | G | | |
| | Lazar [147] | G | | | | |
| | Roychowdhury [148] | G | | | | |
| 2016 | Mittapalli [149] | RGB | Medhi [150] | R | Aslani [151] | G |
| | Roychowdhury [152] | G | Onal [153] | Gr | Bahadarkhan [154] | G |
| | Sarathi [155] | R, G | Christodoulidis [156] | G | | |
| | Orlando [157] | G | | | | |
| 2018 | Ramani [158] | G | Khan [159] | G | | |
| | Chalakkal [160] | RGB | Xia [161] | G | | |
| 2019 | Thakur [162] | Gr | Khawaja [163] | G | | |
| | Naqvi [164] | R, G | Wang [165] | RGB | | |
| 2020 | Dharmawan [166] | R, G, B | Carmona [167] | G | Saroj [168] | Gr |
| | Guo [169] | G | Zhang [170] | G | | |
| | Zhou [171] | G | | | | |
| 2021 | Kim [172] | G | | | | |
| Year | OD | Color | Macula/Fovea | Color | CRBVs | Color |
|---|---|---|---|---|---|---|
| 2011 | Marin [173] | G | | | | |
| 2015 | Wang [174] | G | | | | |
| 2016 | Liskowski [175] | G | | | | |
| 2017 | Barkana [176] | G | | | | |
| | Mo [177] | RGB | | | | |
| 2018 | Fu [178] | RGB | Al-Bander [179] | Gr | Guo [180] | G |
| | Guo [181] | RGB | | | | |
| | Hu [182] | RGB | | | | |
| | Jiang [183] | RGB | | | | |
| | Oliveira [184] | G | | | | |
| | Sangeethaa [185] | G | | | | |
| 2019 | Wang [186] | RGB, Gr | Jebaseeli [187] | G | | |
| | Chakravarty [188] | RGB | Lian [189] | RGB | | |
| | Gu [190] | RGB | Noh [191] | RGB | | |
| | Tan [192] | RGB | Wang [193] | Gr | | |
| | Jiang [194] | RGB | | | | |
| 2020 | Gao [195] | RGB | Feng [196] | G | | |
| | Jin [197] | RGB | Tamim [198] | G | | |
| | Sreng [199] | RGB | | | | |
| | Bian [200] | RGB | | | | |
| | Almubarak [201] | RGB | | | | |
| | Tian [202] | RGB | | | | |
| | Zhang [203] | RGB | | | | |
| | Xie [204] | RGB | | | | |
| 2021 | Bengani [205] | RGB | Hasan [206] | RGB | Gegundez-Arias [207] | RGB |
| | Veena [208] | RGB | | | | |
| | Wang [209] | RGB | | | | |
| Year | Non-NN | Color | NN | Color |
|---|---|---|---|---|
| 2011 | Lu [210] | R, B | | |
| 2012 | Cheng [211] | R, G, B | | |
| | Lu [212] | R, B | | |
| 2018 | Septiarini [213] | R, G | | |
| 2020 | Li [214] | R, G, B | Chai [215] | RGB |
| | | | Son [216] | RGB |
| 2021 | | | Sharma [217] | RGB |
| Data Set | Height × Width | Field-of-View | Fundus Camera | Number of Images |
|---|---|---|---|---|
| CHASE_DB1 | | | Nidek NM-200-D | 28 |
| DRIVE | | | Canon CR5-NM 3CCD | 40 |
| HRF | | | Canon CR-1 | 45 |
| IDRiD | | | Kowa VX-10 | 81 |
| PALM | | | Zeiss VISUCAM 500 NM | 400 |
| STARE | | | TopCon TRV-50 | 20 |
| UoA-DR | | | Zeiss VISUCAM 500 | 200 |
| Segmentation of | Data Set | Training Set | Validation Set | Test Set |
|---|---|---|---|---|
| CRBVs | CHASE_DB1 | 7 | 5 | 16 |
| | DRIVE | 10 | 8 | 22 |
| | HRF | 11 | 9 | 25 |
| | STARE | 5 | 4 | 11 |
| | UoA-DR | 50 | 40 | 110 |
| Optic Disc | IDRiD | 20 | 16 | 45 |
| | PALM | 100 | 80 | 220 |
| | UoA-DR | 50 | 40 | 110 |
| Macula | PALM | 100 | 80 | 220 |
| | UoA-DR | 50 | 40 | 110 |
| Atrophy | PALM | 100 | 80 | 220 |
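The splits in the table above correspond to roughly 25% training, 20% validation, and 55% test images per data set. A simple way to reproduce such a split is sketched below; this is illustrative only (the paper does not publish its exact partitioning code), and the seeded shuffle is an assumption:

```python
import random

def split_dataset(image_ids, train_frac=0.25, val_frac=0.20, seed=42):
    """Shuffle a list of image identifiers and split it into training,
    validation, and test subsets, approximating the ~25%/20%/55%
    proportions used in the table above."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for repeatability
    n_train = round(train_frac * len(ids))
    n_val = round(val_frac * len(ids))
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]
```

For example, applying this to the 40 DRIVE images yields subsets of size 10, 8, and 22, matching the table row for DRIVE.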
| Color | Dataset | Precision | Recall | AUC | MIoU |
|---|---|---|---|---|---|
| RGB | IDRiD | 0.897 ± 0.018 | 0.877 ± 0.010 | 0.940 ± 0.005 | 0.896 ± 0.003 |
| | PALM | 0.859 ± 0.009 | 0.862 ± 0.013 | 0.933 ± 0.006 | 0.873 ± 0.003 |
| | UoA_DR | 0.914 ± 0.012 | 0.868 ± 0.006 | 0.936 ± 0.003 | 0.895 ± 0.004 |
| Gray | IDRiD | 0.868 ± 0.020 | 0.902 ± 0.016 | 0.952 ± 0.007 | 0.892 ± 0.004 |
| | PALM | 0.758 ± 0.020 | 0.737 ± 0.025 | 0.870 ± 0.011 | 0.788 ± 0.009 |
| | UoA_DR | 0.907 ± 0.007 | 0.840 ± 0.005 | 0.923 ± 0.002 | 0.876 ± 0.008 |
| Red | IDRiD | 0.892 ± 0.006 | 0.872 ± 0.008 | 0.936 ± 0.004 | 0.892 ± 0.004 |
| | PALM | 0.798 ± 0.004 | 0.824 ± 0.012 | 0.912 ± 0.006 | 0.837 ± 0.003 |
| | UoA_DR | 0.900 ± 0.007 | 0.854 ± 0.006 | 0.928 ± 0.003 | 0.885 ± 0.003 |
| Green | IDRiD | 0.837 ± 0.023 | 0.906 ± 0.009 | 0.953 ± 0.004 | 0.882 ± 0.008 |
| | PALM | 0.708 ± 0.012 | 0.718 ± 0.013 | 0.859 ± 0.006 | 0.771 ± 0.004 |
| | UoA_DR | 0.895 ± 0.009 | 0.821 ± 0.010 | 0.912 ± 0.005 | 0.869 ± 0.006 |
| Blue | IDRiD | 0.810 ± 0.038 | 0.715 ± 0.011 | 0.858 ± 0.005 | 0.799 ± 0.010 |
| | PALM | 0.662 ± 0.032 | 0.692 ± 0.019 | 0.845 ± 0.009 | 0.748 ± 0.008 |
| | UoA_DR | 0.873 ± 0.012 | 0.800 ± 0.009 | 0.901 ± 0.004 | 0.851 ± 0.002 |
| Color | Dataset | Precision | Recall | AUC | MIoU |
|---|---|---|---|---|---|
| RGB | CHASE_DB1 | 0.795 ± 0.005 | 0.638 ± 0.004 | 0.840 ± 0.002 | 0.696 ± 0.018 |
| | DRIVE | 0.851 ± 0.007 | 0.519 ± 0.009 | 0.781 ± 0.004 | 0.696 ± 0.013 |
| | HRF | 0.730 ± 0.017 | 0.633 ± 0.007 | 0.838 ± 0.005 | 0.651 ± 0.021 |
| | STARE | 0.822 ± 0.009 | 0.488 ± 0.010 | 0.766 ± 0.006 | 0.654 ± 0.011 |
| | UoA_DR | 0.373 ± 0.003 | 0.341 ± 0.008 | 0.669 ± 0.005 | 0.556 ± 0.004 |
| Gray | CHASE_DB1 | 0.757 ± 0.019 | 0.635 ± 0.016 | 0.834 ± 0.009 | 0.648 ± 0.040 |
| | DRIVE | 0.864 ± 0.014 | 0.529 ± 0.014 | 0.786 ± 0.008 | 0.673 ± 0.032 |
| | HRF | 0.721 ± 0.032 | 0.617 ± 0.008 | 0.825 ± 0.005 | 0.605 ± 0.038 |
| | STARE | 0.810 ± 0.021 | 0.522 ± 0.022 | 0.784 ± 0.011 | 0.619 ± 0.031 |
| | UoA_DR | 0.373 ± 0.007 | 0.298 ± 0.022 | 0.648 ± 0.012 | 0.540 ± 0.009 |
| Red | CHASE_DB1 | 0.507 ± 0.018 | 0.412 ± 0.007 | 0.703 ± 0.005 | 0.602 ± 0.001 |
| | DRIVE | 0.713 ± 0.026 | 0.391 ± 0.016 | 0.705 ± 0.010 | 0.637 ± 0.005 |
| | HRF | 0.535 ± 0.027 | 0.349 ± 0.014 | 0.680 ± 0.008 | 0.581 ± 0.004 |
| | STARE | 0.646 ± 0.040 | 0.271 ± 0.011 | 0.649 ± 0.008 | 0.563 ± 0.005 |
| | UoA_DR | 0.304 ± 0.011 | 0.254 ± 0.012 | 0.621 ± 0.006 | 0.539 ± 0.002 |
| Green | CHASE_DB1 | 0.781 ± 0.017 | 0.676 ± 0.021 | 0.858 ± 0.007 | 0.691 ± 0.059 |
| | DRIVE | 0.862 ± 0.011 | 0.541 ± 0.026 | 0.794 ± 0.012 | 0.703 ± 0.047 |
| | HRF | 0.754 ± 0.018 | 0.662 ± 0.020 | 0.856 ± 0.008 | 0.647 ± 0.077 |
| | STARE | 0.829 ± 0.018 | 0.558 ± 0.028 | 0.806 ± 0.011 | 0.662 ± 0.052 |
| | UoA_DR | 0.384 ± 0.007 | 0.326 ± 0.023 | 0.662 ± 0.012 | 0.552 ± 0.011 |
| Blue | CHASE_DB1 | 0.581 ± 0.024 | 0.504 ± 0.023 | 0.751 ± 0.010 | 0.638 ± 0.004 |
| | DRIVE | 0.771 ± 0.016 | 0.449 ± 0.015 | 0.736 ± 0.008 | 0.657 ± 0.007 |
| | HRF | 0.473 ± 0.016 | 0.279 ± 0.016 | 0.633 ± 0.007 | 0.558 ± 0.004 |
| | STARE | 0.446 ± 0.014 | 0.242 ± 0.018 | 0.608 ± 0.007 | 0.535 ± 0.003 |
| | UoA_DR | 0.316 ± 0.010 | 0.271 ± 0.015 | 0.630 ± 0.007 | 0.540 ± 0.002 |
| Color | Dataset | Precision | Recall | AUC | MIoU |
|---|---|---|---|---|---|
| RGB | PALM | 0.732 ± 0.016 | 0.649 ± 0.029 | 0.825 ± 0.014 | 0.753 ± 0.009 |
| | UoA_DR | 0.804 ± 0.027 | 0.713 ± 0.043 | 0.858 ± 0.021 | 0.794 ± 0.012 |
| Gray | PALM | 0.712 ± 0.024 | 0.638 ± 0.016 | 0.819 ± 0.007 | 0.744 ± 0.003 |
| | UoA_DR | 0.811 ± 0.017 | 0.712 ± 0.018 | 0.858 ± 0.008 | 0.796 ± 0.005 |
| Red | PALM | 0.719 ± 0.013 | 0.648 ± 0.015 | 0.823 ± 0.007 | 0.749 ± 0.005 |
| | UoA_DR | 0.768 ± 0.006 | 0.726 ± 0.013 | 0.863 ± 0.006 | 0.790 ± 0.003 |
| Green | PALM | 0.685 ± 0.020 | 0.641 ± 0.004 | 0.820 ± 0.002 | 0.739 ± 0.005 |
| | UoA_DR | 0.791 ± 0.013 | 0.693 ± 0.011 | 0.848 ± 0.005 | 0.783 ± 0.005 |
| Blue | PALM | 0.676 ± 0.020 | 0.637 ± 0.019 | 0.817 ± 0.009 | 0.734 ± 0.002 |
| | UoA_DR | 0.801 ± 0.035 | 0.649 ± 0.013 | 0.826 ± 0.006 | 0.769 ± 0.012 |
| Color | Dataset | Precision | Recall | AUC | MIoU |
|---|---|---|---|---|---|
| RGB | PALM | 0.719 ± 0.033 | 0.638 ± 0.030 | 0.814 ± 0.014 | 0.707 ± 0.019 |
| Gray | PALM | 0.630 ± 0.021 | 0.571 ± 0.025 | 0.777 ± 0.012 | 0.658 ± 0.039 |
| Red | PALM | 0.514 ± 0.010 | 0.430 ± 0.029 | 0.705 ± 0.013 | 0.596 ± 0.015 |
| Green | PALM | 0.695 ± 0.009 | 0.627 ± 0.032 | 0.808 ± 0.015 | 0.714 ± 0.011 |
| Blue | PALM | 0.711 ± 0.015 | 0.578 ± 0.016 | 0.785 ± 0.008 | 0.687 ± 0.018 |
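The tables above report precision, recall, AUC, and mean intersection-over-union (MIoU) per color channel. As a reference for how such figures can be computed from a predicted binary mask and its ground truth, here is a minimal sketch; it assumes MIoU is averaged over the foreground and background classes, which is one common convention and may differ in detail from this paper's implementation. AUC is omitted because it requires the network's continuous output scores rather than a thresholded mask.

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Precision, recall, and mean IoU for a binary segmentation result.

    pred, target: boolean NumPy arrays of the same shape
    (True = foreground, e.g. optic disc pixels; False = background).
    """
    tp = np.logical_and(pred, target).sum()        # true positives
    fp = np.logical_and(pred, ~target).sum()       # false positives
    fn = np.logical_and(~pred, target).sum()       # false negatives
    tn = np.logical_and(~pred, ~target).sum()      # true negatives

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0

    # Mean IoU here averages the IoU of the foreground and background
    # classes (one common two-class convention).
    iou_fg = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    iou_bg = tn / (tn + fp + fn) if tn + fp + fn else 0.0
    miou = (iou_fg + iou_bg) / 2.0
    return precision, recall, miou
```

In practice these per-image values would be averaged over the test set, which is how the means and standard deviations in the tables above are typically obtained.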
| Segmentation for | N | Number of cases: RGB | Gray | Red | Green | Blue |
|---|---|---|---|---|---|---|
| Optic Disc (OD) | 375 | 329 | 324 | 316 | 303 | 297 |
| Macula | 330 | 270 | 265 | 271 | 265 | 267 |
| Segmentation for | N | Number of cases: RGB | Gray | Red | Green | Blue |
|---|---|---|---|---|---|---|
| Optic Disc (OD) | 375 | 29 | 26 | 43 | 46 | 43 |
| Macula | 330 | 17 | 25 | 14 | 17 | 14 |
| Data Set | N | Where …: Gray | Red | Green | Blue | Where …: Gray | Red | Green | Blue |
|---|---|---|---|---|---|---|---|---|---|
| CHASE_DB1 | 28 | 0 | 10 | 0 | 13 | 0 | 4 | 0 | 3 |
| DRIVE | 40 | 0 | 12 | 0 | 0 | 0 | 1 | 0 | 3 |
| HRF | 45 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
| IDRiD | 81 | 0 | 2 | 0 | 6 | 0 | 0 | 0 | 23 |
| PALM | 400 | 0 | 0 | 1 | 40 | 0 | 0 | 2 | 121 |
| STARE | 20 | 0 | 2 | 0 | 10 | 0 | 0 | 0 | 4 |
| UoA-DR | 200 | 0 | 0 | 0 | 22 | 0 | 0 | 0 | 88 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Biswas, S.; Khan, M.I.A.; Hossain, M.T.; Biswas, A.; Nakai, T.; Rohdin, J. Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs? Life 2022, 12, 973. https://doi.org/10.3390/life12070973