Review

Application of Deep Learning to Retinal-Image-Based Oculomics for Evaluation of Systemic Health: A Review

by Jo-Hsuan Wu 1 and Tin Yan Alvin Liu 2,*
1 Shiley Eye Institute and Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92093, USA
2 Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD 21287, USA
* Author to whom correspondence should be addressed.
J. Clin. Med. 2023, 12(1), 152; https://doi.org/10.3390/jcm12010152
Submission received: 20 November 2022 / Revised: 17 December 2022 / Accepted: 22 December 2022 / Published: 24 December 2022

Abstract

The retina is a window to the human body. Oculomics is the study of the correlations between ophthalmic biomarkers and systemic health or disease states. Deep learning (DL) is currently the cutting-edge machine learning technique for medical image analysis, and in recent years, DL techniques have been applied to analyze retinal images in oculomics studies. In this review, we summarize oculomics studies that used DL models to analyze retinal images; most of the published studies to date involved color fundus photographs, while others focused on optical coherence tomography images. These studies showed that some systemic variables, such as age, sex and cardiovascular disease events, could be predicted consistently and robustly, while others, such as thyroid function and blood cell count, could not be. DL-based oculomics has demonstrated fascinating, "super-human" predictive capabilities in certain contexts, but it remains to be seen how these models will be incorporated into clinical care and whether management decisions influenced by these models will lead to improved clinical outcomes.

1. Introduction

The retina is considered a window to the human body [1,2,3,4], as many systemic conditions have ocular manifestations, particularly in the retina. These extensive correlations between retinal findings and systemic conditions can be attributed to two facts: the human retina is a direct extension of the central nervous system during embryonic development [5], and the retina is one of the most vascularized and metabolically active tissues in the human body [6]. Characterizing and quantifying retinal-systemic correlations is particularly valuable for gaining new clinical insights, especially since the retina can be readily imaged non-invasively using a variety of technologies. The term "oculomics" was coined to describe the clinical insights gained by correlating ophthalmic biomarkers with systemic health and disease [1,7].
The most common retinal imaging modalities used in oculomics are color fundus photography and optical coherence tomography (OCT). Briefly, OCT performs high-resolution cross-sectional imaging of tissue structures in situ and in real time by measuring the time delay of light echoed from the tissue under examination [8,9]. The most common groups of diseases studied in oculomics are cardiovascular diseases (CVD) and neurodegenerative diseases (NDD) [1,10,11].
Oculomics studies concerning CVD typically involve color fundus photographs. For example, prior studies have shown that retinal vascular morphologies, such as vessel caliber and tortuosity, can help predict CVD risk factors [12], CVD mortality [13,14], and various major CVD events [15,16,17,18]. Similarly, retinal microvascular changes have been linked to higher risks of other systemic vascular diseases, such as kidney diseases and preeclampsia [19,20,21].
Oculomics studies concerning NDD typically involve OCT. For example, retinal thickness measurements based on OCT have been used to diagnose and monitor multiple sclerosis (MS) [22,23,24]. Other studies have demonstrated an association between a thinner retinal nerve fiber layer (RNFL) and the diagnosis of Alzheimer’s disease (AD) [25,26,27,28,29], which accounts for more than 60% of clinical dementia cases. A major area of OCT-based oculomics is the early detection of pre-clinical NDDs.
Historically, retinal image annotation and feature labeling in oculomics were performed either manually or semi-automatically, a process that is time-consuming, labor-intensive and limited by intra- and inter-reader imprecision. Recently, the advent of deep learning (DL) has revolutionized the field. Briefly, DL, a subtype of machine learning (ML), is a representation learning method that uses multilayered neural networks (NN) that iteratively adjust their parameters to improve performance [30,31,32,33]. DL outperforms classical ML techniques in image analysis and has emerged as the leading ML technique for medical image classification.
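To make this setup concrete, the sketch below shows the pattern commonly used in the studies reviewed here: a convolutional NN pretrained on natural images is fine-tuned to map a retinal photograph to a systemic target. This is a minimal, hypothetical example (the ResNet backbone, learning rate, binary target and a recent torchvision API are assumptions for illustration, not any specific study's configuration).

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained ImageNet backbone; the final layer is replaced so the network
# outputs a single logit for a binary systemic target (e.g., event vs. no event).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient update on a batch of fundus images (N, 3, 224, 224)."""
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```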
Medical subspecialties such as ophthalmology, with access to large amounts of imaging data, have been at the forefront of the DL revolution. Notably, DL has been shown to be on par with human experts in classifying various retinal diseases such as age-related macular degeneration and diabetic retinopathy [33,34,35,36,37,38,39], and the first FDA-approved fully autonomous system in any medical field is a DL-based system that detects diabetic retinopathy from color fundus photographs [40].
The retinal-systemic associations in oculomics were traditionally established using conventional statistical models or classical ML techniques. Given that oculomics primarily involves correlating ophthalmic biomarkers captured in retinal imaging with systemic conditions and that DL is the leading ML technique to analyze retinal images, the goal of this review is to summarize the latest literature in DL-based oculomics involving color fundus photography and OCT.

2. Literature Search Methods

The PubMed and Google Scholar databases were searched for published studies through July 2022, using individual and combined search terms relevant to this review. Major keywords included: (1) deep-learning-related: “deep learning”, “machine learning”, and “neural network”; (2) retinal-imaging-related: “ocular biomarkers”, “oculomics”, “ocular imaging”, “retinal imaging”, “fundus photographs”, “optical coherence tomography”; (3) systemic disease/health-related: “age”, “sex”, “demographic”, “systemic disease”, “systemic biomarkers”, “cardiovascular disease”, “neurodegenerative disease”, “stroke”, “multiple sclerosis”, “atherosclerosis”, “blood pressure”, “myocardial ischemia”, “dementia”, “Alzheimers disease”, “diabetes”, “renal disease”, “kidney disease”, etc.
No filter for publication year, language, or study type was applied. The references of identified records were also checked. Studies applying DL to retinal-image-based oculomics to assess, predict, or diagnose systemic diseases and health biomarkers were considered relevant to the current review. Abstracts of non-English articles with relevant information were also included.

3. Results and Discussion

The following text is organized based on the imaging modality (fundus photography first, then OCT), and each sub-section is organized by the systemic parameter considered, with CVDs and their risk factors being the major focus.

3.1. Retinal Fundus Photography

Using retinal color fundus photographs from the UK Biobank and EyePACS, Poplin et al. published one of the first oculomics studies demonstrating the ability of DL to predict systemic disease states and biomarkers [41]. In their study, a deep neural network (NN) showed reasonably robust performance in predicting major CVD events, with an area under the receiver operating characteristic curve (AUC) of 0.70. For reference, an AUC of 1.0 indicates perfect predictions, while an AUC of 0.5 indicates predictions no better than random chance. The deep NN was also capable of robust prediction of age (mean absolute error [MAE] ≤ 3.3 years), sex (AUC = 0.97), and smoking status (AUC = 0.71), among other variables. Regions of the color fundus photographs most activated during decision making by the deep NN were highlighted using attention maps [41]. For example, strong activation centered on the retinal blood vessels was seen during prediction of age and smoking status, while strong activation at the optic disc, retinal blood vessels and macula was seen during prediction of sex.
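The two metrics quoted throughout this review, AUC and MAE, can be computed as in the short sketch below (the numbers are toy values for illustration only; they are not drawn from any cited study).

```python
from sklearn.metrics import roc_auc_score, mean_absolute_error

# Binary outcome (e.g., 5-year major CVD event) with predicted probabilities.
y_event = [0, 0, 1, 1, 0, 1]
p_event = [0.20, 0.60, 0.65, 0.90, 0.30, 0.55]
print(roc_auc_score(y_event, p_event))          # 1.0 = perfect, 0.5 = chance level

# Continuous outcome (e.g., age in years) with point predictions.
age_true = [54, 61, 47, 70]
age_pred = [51, 65, 49, 68]
print(mean_absolute_error(age_true, age_pred))  # average absolute error, in years
```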

3.1.1. Risk Assessment of CVD

Chang et al. presented a model that generates a fundus atherosclerosis score (FAS) from DL-based retinal image analysis. The DL-generated FAS was compared to the ground truth, a physician-graded score based on carotid ultrasonographic images, and the DL model achieved an AUC of 0.71 in predicting the presence of carotid atherosclerosis [42]. Furthermore, by using the FAS to risk-stratify patients, the authors found that cases in the top tertile (FAS > 0.66) had a significantly increased risk of CVD mortality (hazard ratio = 8.33) compared to cases in the bottom tertile (FAS < 0.33). A similar CVD risk stratification study was performed by Son et al. [43], who presented a model that generates a coronary artery calcium score (CACS) using DL-based retinal image analysis. The DL-generated CACS was compared to the cardiac computed tomography-derived CACS, and the model achieved an AUC > 0.82 in identifying cases with a high CACS (CACS > 100).
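As an illustration of this kind of score-based risk stratification, the hedged sketch below cuts a hypothetical DL-derived score into tertiles and estimates the top-versus-bottom-tertile hazard ratio with a Cox model. The synthetic data and the use of the lifelines package are assumptions for illustration; this is not the cited studies' analysis code.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic data: one row per patient with a DL-derived score, follow-up
# time in years, and whether a CVD death was observed during follow-up.
df = pd.DataFrame({
    "score": [0.05, 0.12, 0.20, 0.40, 0.55, 0.60, 0.70, 0.85, 0.95],
    "time":  [10.0, 6.0, 10.0, 8.0, 7.5, 9.0, 3.0, 8.0, 1.5],
    "event": [0, 1, 0, 0, 1, 0, 1, 0, 1],
})
df["tertile"] = pd.qcut(df["score"], 3, labels=["low", "mid", "high"])

# Keep only the top and bottom tertiles and fit a Cox proportional hazards model.
sub = df[df["tertile"] != "mid"].copy()
sub["high_vs_low"] = (sub["tertile"] == "high").astype(int)

cph = CoxPHFitter()
cph.fit(sub[["time", "event", "high_vs_low"]],
        duration_col="time", event_col="event")
print(cph.hazard_ratios_)  # top-vs-bottom-tertile hazard ratio (cf. HR = 8.33 above)
```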
In the study by Khan et al., a DL model was trained to predict the presence of cardiac disease from fundus photographs. With the electronic health record (EHR) as the ground truth, their model reached an AUC of 0.7 [44]. In another study, Cheung et al. used a convolutional neural network (CNN) to segment the retinal vessels from fundus photographs and measured the vessel calibers [45]. They correlated the vessel calibers derived from the DL-based segmentation with incident CVD events (defined as newly diagnosed clinical stroke, myocardial infarction or CVD mortality in the EHR) and found that narrower calibers in certain vascular zones were associated with increased incident CVD risk. Lastly, a recent Chinese study trained a DL model to predict 10-year ischemic CVD risk from retinal image analysis [46]. The DL estimation was compared with the output of a previously validated 10-year Chinese CVD risk prediction model, and AUCs of 0.86 and 0.88 were reported for predicting 10-year ischemic CVD risks of ≥5% and ≥7.5%, respectively.

3.1.2. Blood Pressure and Hypertension

In the study by Poplin et al., the DL model predicted diastolic blood pressure (DBP) and systolic blood pressure (SBP) with MAEs of 6.42 mmHg and 11.23 mmHg, respectively [41]. Subsequent studies published by different groups showed similar results in that, in general, the MAE for DBP (range: 6–9 mmHg) was smaller than that for SBP (range: 9–15 mmHg) [47,48]. Of note, a weak-to-moderate R2, ranging from 0.20 to 0.50, was observed for most DL models for BP prediction. Other studies attempted to train DL models to identify patients with hypertension [44,49,50]. The best result was reported by Zhang and colleagues using a cross-sectional Chinese dataset and an NN model [49]; their model achieved an AUC of 0.77 in classifying patients with self-reported hypertension.

3.1.3. Hyperglycemia and Dyslipidemia

The overall performance of DL models in estimating outcomes associated with hyperglycemia and dyslipidemia using retinal images was not robust. For the fundus-based prediction of HbA1c, the MAE reported in different studies ranged from 0.33% to 1.39%, with a low R2 (<0.10) in most studies [41,47,48]. Similarly poor model performance and low R2 values were observed for most DL models trained to predict blood glucose levels and lipid profiles [47,48]. An exception was a model developed by Zhang et al., which discriminated patients with self-reported hyperglycemia and dyslipidemia from normal controls with AUCs of 0.88 and 0.70, respectively [49].

3.1.4. Sex

Most DL studies predicting sex only performed internal validation, and in these studies, the models typically achieved an AUC of >0.95 during internal validation [41,47,48,51]. A notable exception was the study by Rim et al., in which the model was trained to predict multiple biomarkers, including sex. During external validation with 4 datasets obtained from patients of different ethnicities, this particular model predicting sex achieved an AUC ranging from 0.80 to 0.91 [47]. In the study by Korot et al., external validation was also performed using another local dataset, and their model achieved an accuracy of 78.6% [51].

3.1.5. Age

For retinal-image-based prediction of age, most studies reported similar MAEs in internal validation, ranging from 2.43 to 3.55 years [41,47,48,52]. Khan et al. also trained a DL model to predict age > 70 years and reported an AUC of 0.90 for this task [44]. Interestingly, Zhu et al. further calculated the retinal age gap, the difference between the DL-predicted age and chronological age [52]. Using mortality data in the national EHR, they found that each 1-year increase in the retinal age gap was associated with a 2% increase in the risk of all-cause mortality (hazard ratio [HR] = 1.02, p = 0.020). This novel finding suggests that DL-based retinal “age” may be a better marker of tissue-level senescence than chronological age.
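As a worked example of how such a per-year hazard ratio compounds (toy numbers; the sign convention of predicted minus chronological age is an assumption for illustration):

```python
# Retinal age gap: DL-predicted retinal age minus chronological age (assumed
# convention), with each extra year of gap multiplying the all-cause mortality
# hazard by ~1.02, per the reported HR [52].
hr_per_year = 1.02

def relative_hazard(predicted_age: float, chronological_age: float) -> float:
    gap = predicted_age - chronological_age
    return hr_per_year ** gap

# A retina that "looks" 5 years older than the patient's chronological age:
print(round(relative_hazard(68.0, 63.0), 2))  # ~1.10, i.e., ~10% higher hazard
```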

3.1.6. Other Systemic Biomarkers and Disease Status

Other systemic biomarkers examined in DL-based oculomics include ethnicity, medication use, body composition, systemic organ function, hematological parameters, and smoking status. Khan et al.’s model predicted ethnicity (Hispanic/Latino, non-Hispanic/Latino, other) from fundus photographs using the EHR as the ground truth and reached an AUC of 0.93 [44]. Their model also showed a modest ability (AUC = 0.78–0.82) to identify patients taking specific classes of medications, such as angiotensin II receptor blockers and angiotensin-converting enzyme (ACE) inhibitors. In the study by Mitani et al., DL models were trained to predict hemoglobin (Hb) and anemia, defined as Hb < 12 g/dL for women and <13 g/dL for men based on World Health Organization (WHO) guidelines, using three types of input: retinal fundus images only, participant metadata only (race/ethnicity, age, sex and BP), and the combination of retinal images and metadata (multimodal data) [53]. The multimodal training data yielded the best model performance, with an AUC of 0.88 for anemia prediction and an MAE of 0.63 g/dL for Hb estimation; in contrast, the model trained only with retinal images yielded an AUC of 0.74 for anemia prediction and an MAE of 0.73 g/dL for Hb estimation. For the prediction of self-reported smoking status from fundus photographs, past studies [41,44,48,49,54] have reported models with AUCs ranging from 0.70 to 0.86. As for the prediction of body mass index (BMI), most studies reported an MAE within 2–4 kg/m² and a low R2 (<0.30) [41,47,48].
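A minimal sketch of this kind of multimodal training is given below, assuming a simple late-fusion design in which CNN image features are concatenated with tabular metadata before two output heads (one classification logit, one regression value). The architecture, dimensions and metadata fields are illustrative assumptions, not those of the cited study.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultimodalNet(nn.Module):
    """Late fusion: a CNN image embedding concatenated with tabular metadata."""
    def __init__(self, n_meta_features: int):
        super().__init__()
        backbone = models.resnet18(weights=None)   # image encoder (untrained here)
        backbone.fc = nn.Identity()                # expose the 512-d embedding
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(512 + n_meta_features, 64),
            nn.ReLU(),
            nn.Linear(64, 2),                      # [anemia logit, Hb estimate]
        )

    def forward(self, image, metadata):
        fused = torch.cat([self.backbone(image), metadata], dim=1)
        out = self.head(fused)
        return out[:, 0], out[:, 1]                # classification and regression outputs

model = MultimodalNet(n_meta_features=4)           # e.g., age, sex, ethnicity, BP
anemia_logit, hb_estimate = model(torch.randn(2, 3, 224, 224), torch.randn(2, 4))
```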
Of note, Rim et al. reported an ambitious study that trained NN models to predict a total of 47 systemic biomarkers from retinal fundus photographs [47]. Although satisfactory results were achieved for the prediction of sex (AUC = 0.96 in internal validation; 0.80–0.91 in external validation) and age (MAE = 2.43 years in internal validation; 3.4–4.5 years in external validation), the predictions of height (MAE = 5.5–7.1 cm), weight (MAE = 8.3–11.8 kg), BMI (MAE = 2.4–3.5 kg/m2), and creatinine (MAE = 0.11–0.17 mg/dL) showed limited accuracy and generalizability in external validation with datasets of other ethnicities (R2 < 0.30 for all). Other biomarkers, such as C-reactive protein, thyroid function, and blood cell counts, could not be predicted from retinal fundus images using DL in this study.
For chronic kidney disease (CKD) prediction, Sabanayagam et al. presented DL models that predicted the presence of CKD, defined as an estimated glomerular filtration rate (eGFR) < 60 mL/min per 1.73 m2, via retinal image analysis [55]. In their study, three model variations were trained: one using only retinal fundus images, one using only selected clinical data, and one using both retinal images and clinical data (multimodal data). In external validation, AUCs of 0.73–0.84 and 0.81–0.86 were achieved for the retinal-image-only model and the multimodal model, respectively. Zhang et al. [56] presented a similar study that used three DL model variations to predict CKD; in external validation, AUCs of 0.87–0.89 and 0.88–0.90 were reported for the retinal-image-only model and the multimodal model, respectively. An additional analysis was performed to predict eGFR values from fundus photographs, and the DL models achieved an MAE of 11–13 mL/min per 1.73 m2 (R2: 0.33–0.48) in external validation [56].
Tian et al. used retinal fundus images and DL techniques to predict the presence of AD [57]. Patients diagnosed with AD were identified based on ICD codes in the EHR. The authors used DL to segment the retinal vessels, and the resulting segmentation maps were then classified with a support vector machine (SVM). An overall accuracy of 82% (sensitivity: 79%, specificity: 85%) for discriminating normal subjects from subjects with AD was achieved. Saliency map analysis demonstrated that small retinal vessels were more prominently activated than large retinal vessels during decision making.
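The hedged sketch below illustrates the second stage of such a two-stage ("modular") pipeline: stage 1 (not shown) segments the retinal vessels with a DL model; stage 2 classifies AD versus control from features of the vessel segmentation map using an SVM. The feature choices and synthetic inputs are assumptions for illustration, not the cited study's method.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def vessel_map_features(vessel_map: np.ndarray) -> np.ndarray:
    """Toy descriptors of a binary vessel segmentation map."""
    density = vessel_map.mean()                 # fraction of vessel pixels
    row_profile = vessel_map.mean(axis=0)       # coarse spatial profile
    return np.concatenate([[density], row_profile[::64]])

# Synthetic stand-ins for segmentation maps and AD labels.
maps = [np.random.rand(512, 512) > 0.9 for _ in range(20)]
X = np.stack([vessel_map_features(m.astype(float)) for m in maps])
y = np.random.randint(0, 2, 20)                 # 1 = AD, 0 = control (synthetic)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, y)
print(clf.predict_proba(X[:3]))
```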

3.2. Optical Coherence Tomography

3.2.1. Multiple Sclerosis (MS)

Compared with color fundus photographs, OCT is less commonly used in DL-based oculomics. Among the DL-based oculomics studies involving OCT, MS is the most studied systemic condition. In the study by Montolío et al., the performances of different ML algorithms, including linear regression, SVM, decision tree, k-nearest neighbors, naïve Bayes, an ensemble classifier and a long short-term memory recurrent NN, in diagnosing MS and predicting the long-term disability course of MS were compared [58]. The diagnosis of MS was extracted from the EHR and based on standard clinical and neuroimaging criteria (the McDonald criteria) [59], and the long-term disability ground truth was based on the expanded disability status scale (EDSS) score. All the ML models were trained with both clinical data and OCT-measured RNFL thickness. The ensemble classifier, which makes predictions based on the weighted votes of various individual classifiers [60], showed the best results for diagnosing MS (accuracy = 88%, AUC = 0.88), while the recurrent NN model showed the best prediction of long-term disability (accuracy = 82%, AUC = 0.82). In another study, by López-Dorado et al., an NN model was also trained to diagnose MS using OCT images, with the ground truth determined by a neurologist based on the McDonald criteria [61]. Their model achieved a diagnostic accuracy of >90%. Additionally, they found the OCT-measured ganglion cell layer and whole retinal thicknesses to be the most discriminative features for diagnosing MS.
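For illustration, the sketch below builds a small weighted soft-voting ensemble of the kind described above [60]. The base classifiers, weights and synthetic features are assumptions for illustration, not the configuration used by Montolío et al.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Synthetic inputs: e.g., RNFL thickness sectors plus clinical variables.
X = np.random.rand(100, 5)
y = np.random.randint(0, 2, 100)    # 1 = MS, 0 = healthy control (synthetic)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("nb", GaussianNB()),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="soft",          # average the predicted probabilities
    weights=[2, 1, 1],      # weighted votes across the base classifiers
)
ensemble.fit(X, y)
print(ensemble.predict_proba(X[:3]))
```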

3.2.2. Age and Sex

Using OCT images centered on the optic nerve head and fovea, DL-based age prediction achieved MAEs ranging between 3.3 and 6 years [62,63,64,65], with the best result reported by Hassan et al. [65]. Notably, in the study by Shigueoka et al., the CNN model revealed different correlations between the different retinal layers and age [62], but this finding was not replicated in the study by Chueh et al. [64]. As for OCT-based prediction of sex, reported accuracies and AUCs ranged from 68% to 86% [63,64,65]. One study further compared the performance of DL models predicting sex from the OCT foveal contour, OCT macular thickness, and infrared fundus photography, and showed that the OCT foveal contour was the most predictive [64].
Generally, as compared to color fundus photograph studies, OCT studies produced less robust DL models in predicting systemic biomarkers. Furthermore, most published OCT studies lacked external, independent validations.

4. Conclusions and Future Direction

Most of the published studies to date used only a single imaging modality, i.e., either color fundus photography or OCT, for model training. Ideally, multiple imaging modalities should be used simultaneously. For example, in a recent study published in 2022 by Wisely et al., multimodal retinal imaging consisting of OCT, OCT angiography, ultra-widefield pseudo-color and ultra-widefield autofluorescence images was used to train a CNN model to predict symptomatic AD [66]. In addition to multimodal retinal imaging, tabular clinical data can also be incorporated into model training. For example, in the studies by Sabanayagam et al. and Zhang et al., incorporating relevant demographic data such as age, sex and ethnicity was found to improve the prediction of CKD from color fundus photographs [55,56]. However, incorporating multimodal retinal imaging and different data types into model training will inevitably increase the technical complexity from a machine learning point of view. Detailed analysis of the salient retinal regions/features associated with DL predictions will provide further insight into ocular-systemic relationships. Such information was provided by only a limited number of the studies included in this review, most of which used DL to predict age, sex and CVD from color fundus images (Table 1). Looking forward, it remains to be seen how these deep-learning-based oculomics models will be incorporated into clinical care and whether management decisions influenced by these models will lead to improved clinical outcomes.

Author Contributions

Study conception and design: J.-H.W. and T.Y.A.L. Literature search and data collection: J.-H.W. Analysis and interpretation of data: J.-H.W. and T.Y.A.L. Drafting of the manuscript: J.-H.W. and T.Y.A.L. Critical revision of the manuscript: J.-H.W. and T.Y.A.L. Supervision of study conduction: T.Y.A.L. Approval of the final version for submission: J.-H.W. and T.Y.A.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wagner, S.K.; Fu, D.J.; Faes, L.; Liu, X.; Huemer, J.; Khalid, H.; Ferraz, D.; Korot, E.; Kelly, C.; Balaskas, K.; et al. Insights into Systemic Disease through Retinal Imaging-Based Oculomics. Transl. Vis. Sci. Technol. 2020, 9, 6. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Gupta, K.; Reddy, S. Heart, Eye, and Artificial Intelligence: A Review. Cardiol. Res. 2021, 12, 132–139. [Google Scholar] [CrossRef] [PubMed]
  3. Vujosevic, S.; Parra, M.M.; Hartnett, M.E.; O’Toole, L.; Nuzzi, A.; Limoli, C.; Villani, E.; Nucci, P. Optical coherence tomography as retinal imaging biomarker of neuroinflammation/neurodegeneration in systemic disorders in adults and children. Eye 2022. [Google Scholar] [CrossRef]
  4. MacGillivray, T.J.; Trucco, E.; Cameron, J.R.; Dhillon, B.; Houston, J.G.; van Beek, E.J. Retinal imaging as a source of biomarkers for diagnosis, characterization and prognosis of chronic illness or long-term conditions. Br. J. Radiol. 2014, 87, 20130832. [Google Scholar] [CrossRef] [Green Version]
  5. London, A.; Benhar, I.; Schwartz, M. The retina as a window to the brain—From eye research to CNS disorders. Nat. Rev. Neurol. 2013, 9, 44–53. [Google Scholar] [CrossRef]
  6. Country, M.W. Retinal metabolism: A comparative look at energetics in the retina. Brain Res. 2017, 1672, 50–57. [Google Scholar] [CrossRef]
  7. Honavar, S.G. Oculomics—The eyes talk a great deal. Indian J. Ophthalmol. 2022, 70, 713. [Google Scholar] [CrossRef]
  8. Fujimoto, J.G.; Pitris, C.; Boppart, S.A.; Brezinski, M.E. Optical coherence tomography: An emerging technology for biomedical imaging and optical biopsy. Neoplasia 2000, 2, 9–25. [Google Scholar] [CrossRef] [Green Version]
  9. Bille, J.F. (Ed.) High Resolution Imaging in Microscopy and Ophthalmology: New Frontiers in Biomedical Optics; Springer: Cham, Switzerland, 2019. [Google Scholar]
  10. Snyder, P.J.; Alber, J.; Alt, C.; Bain, L.J.; Bouma, B.E.; Bouwman, F.H.; DeBuc, D.C.; Campbell, M.C.W.; Carrillo, M.C.; Chew, E.Y.; et al. Retinal imaging in Alzheimer’s and neurodegenerative diseases. Alzheimer’s Dement. 2021, 17, 103–111. [Google Scholar] [CrossRef]
  11. Christinaki, E.; Kulenovic, H.; Hadoux, X.; Baldassini, N.; Van Eijgen, J.; De Groef, L.; Stalmans, I.; van Wijngaarden, P. Retinal imaging biomarkers of neurodegenerative diseases. Clin. Exp. Optom. 2022, 105, 194–204. [Google Scholar] [CrossRef]
  12. Owen, C.G.; Rudnicka, A.R.; Welikala, R.A.; Fraz, M.M.; Barman, S.A.; Luben, R.; Hayat, S.A.; Khaw, K.T.; Strachan, D.P.; Whincup, P.H.; et al. Retinal Vasculometry Associations with Cardiometabolic Risk Factors in the European Prospective Investigation of Cancer-Norfolk Study. Ophthalmology 2019, 126, 96–106. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Liew, G.; Mitchell, P.; Rochtchina, E.; Wong, T.Y.; Hsu, W.; Lee, M.L.; Wainwright, A.; Wang, J.J. Fractal analysis of retinal microvasculature and coronary heart disease mortality. Eur. Heart J. 2010, 32, 422–429. [Google Scholar] [CrossRef] [PubMed]
  14. Witt, N.; Wong, T.Y.; Hughes, A.D.; Chaturvedi, N.; Klein, B.E.; Evans, R.; McNamara, M.; Thom, S.A.M.; Klein, R. Abnormalities of Retinal Microvascular Structure and Risk of Mortality from Ischemic Heart Disease and Stroke. Hypertension 2006, 47, 975–981. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. McGeechan, K.; Liew, G.; Macaskill, P.; Irwig, L.; Klein, R.; Klein, B.E.; Wang, J.J.; Mitchell, P.; Vingerling, J.R.; Dejong, P.T.; et al. Meta-analysis: Retinal vessel caliber and risk for coronary heart disease. Ann. Intern. Med. 2009, 151, 404–413. [Google Scholar] [CrossRef] [PubMed]
  16. McGeechan, K.; Liew, G.; Macaskill, P.; Irwig, L.; Klein, R.; Klein, B.E.; Wang, J.J.; Mitchell, P.; Vingerling, J.R.; de Jong, P.T.; et al. Prediction of incident stroke events based on retinal vessel caliber: A systematic review and individual-participant meta-analysis. Am. J. Epidemiol. 2009, 170, 1323–1332. [Google Scholar] [CrossRef]
  17. Wong, T.Y.; Klein, R.; Couper, D.J.; Cooper, L.S.; Shahar, E.; Hubbard, L.D.; Wofford, M.R.; Sharrett, A.R. Retinal microvascular abnormalities and incident stroke: The Atherosclerosis Risk in Communities Study. Lancet 2001, 358, 1134–1140. [Google Scholar] [CrossRef]
  18. Wong, T.Y.; Klein, R.; Sharrett, A.R.; Manolio, T.A.; Hubbard, L.D.; Marino, E.K.; Kuller, L.; Burke, G.; Tracy, R.P.; Polak, J.F.; et al. The prevalence and risk factors of retinal microvascular abnormalities in older persons: The Cardiovascular Health Study. Ophthalmology 2003, 110, 658–666. [Google Scholar] [CrossRef]
  19. Lim, L.S.; Cheung, C.Y.-l.; Sabanayagam, C.; Lim, S.C.; Tai, E.S.; Huang, L.; Wong, T.Y. Structural Changes in the Retinal Microvasculature and Renal Function. Investig. Ophthalmol. Vis. Sci. 2013, 54, 2970–2976. [Google Scholar] [CrossRef] [Green Version]
  20. Liew, G.; Mitchell, P.; Wong, T.Y.; Wang, J.J. Retinal microvascular signs are associated with chronic kidney disease in persons with and without diabetes. Kidney Blood Press Res. 2012, 35, 589–594. [Google Scholar] [CrossRef]
  21. Lupton, S.J.; Chiu, C.L.; Hodgson, L.A.; Tooher, J.; Ogle, R.; Wong, T.Y.; Hennessy, A.; Lind, J.M. Changes in retinal microvascular caliber precede the clinical onset of preeclampsia. Hypertension 2013, 62, 899–904. [Google Scholar] [CrossRef]
  22. Petzold, A.; de Boer, J.F.; Schippling, S.; Vermersch, P.; Kardon, R.; Green, A.; Calabresi, P.A.; Polman, C. Optical coherence tomography in multiple sclerosis: A systematic review and meta-analysis. Lancet Neurol. 2010, 9, 921–932. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  23. Britze, J.; Frederiksen, J.L. Optical coherence tomography in multiple sclerosis. Eye 2018, 32, 884–888. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Paul, F.; Calabresi, P.A.; Barkhof, F.; Green, A.J.; Kardon, R.; Sastre-Garriga, J.; Schippling, S.; Vermersch, P.; Saidha, S.; Gerendas, B.S.; et al. Optical coherence tomography in multiple sclerosis: A 3-year prospective multicenter study. Ann. Clin. Transl. Neurol. 2021, 8, 2235–2251. [Google Scholar] [CrossRef] [PubMed]
  25. Marziani, E.; Pomati, S.; Ramolfo, P.; Cigada, M.; Giani, A.; Mariani, C.; Staurenghi, G. Evaluation of Retinal Nerve Fiber Layer and Ganglion Cell Layer Thickness in Alzheimer’s Disease Using Spectral-Domain Optical Coherence Tomography. Investig. Ophthalmol. Vis. Sci. 2013, 54, 5953–5958. [Google Scholar] [CrossRef] [PubMed]
  26. Wang, M.; Zhu, Y.; Shi, Z.; Li, C.; Shen, Y. Meta-analysis of the relationship of peripheral retinal nerve fiber layer thickness to Alzheimer’s disease and mild cognitive impairment. Shanghai Arch. Psychiatry 2015, 27, 263–279. [Google Scholar]
  27. Lian, T.-H.; Jin, Z.; Qu, Y.-Z.; Guo, P.; Guan, H.-Y.; Zhang, W.-J.; Ding, D.-Y.; Li, D.-N.; Li, L.-X.; Wang, X.-M.; et al. The Relationship Between Retinal Nerve Fiber Layer Thickness and Clinical Symptoms of Alzheimer’s Disease. Front. Aging Neurosci. 2021, 12, 584244. [Google Scholar] [CrossRef]
  28. Ko, F.; Muthy, Z.A.; Gallacher, J.; Sudlow, C.; Rees, G.; Yang, Q.; Keane, P.A.; Petzold, A.; Khaw, P.T.; Reisman, C.; et al. Association of Retinal Nerve Fiber Layer Thinning With Current and Future Cognitive Decline: A Study Using Optical Coherence Tomography. JAMA Neurol. 2018, 75, 1198–1205. [Google Scholar] [CrossRef]
  29. Mutlu, U.; Colijn, J.M.; Ikram, M.A.; Bonnemaijer, P.W.M.; Licher, S.; Wolters, F.J.; Tiemeier, H.; Koudstaal, P.J.; Klaver, C.C.W.; Ikram, M.K. Association of Retinal Neurodegeneration on Optical Coherence Tomography with Dementia: A Population-Based Study. JAMA Neurol. 2018, 75, 1256–1263. [Google Scholar] [CrossRef]
  30. Chan, H.P.; Samala, R.K.; Hadjiiski, L.M.; Zhou, C. Deep Learning in Medical Image Analysis. Adv. Exp. Med. Biol. 2020, 1213, 3–21. [Google Scholar]
  31. Shen, D.; Wu, G.; Suk, H.I. Deep Learning in Medical Image Analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221–248. [Google Scholar] [CrossRef] [Green Version]
  32. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  33. Wu, J.H.; Liu, T.Y.A.; Hsu, W.T.; Ho, J.H.; Lee, C.C. Performance and Limitation of Machine Learning Algorithms for Diabetic Retinopathy Screening: Meta-analysis. J. Med. Internet Res. 2021, 23, e23863. [Google Scholar] [CrossRef] [PubMed]
  34. Abràmoff, M.D.; Lou, Y.; Erginay, A.; Clarida, W.; Amelon, R.; Folk, J.C.; Niemeijer, M. Improved Automated Detection of Diabetic Retinopathy on a Publicly Available Dataset Through Integration of Deep Learning. Invest. Ophthalmol. Vis. Sci. 2016, 57, 5200–5206. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  35. De Fauw, J.; Ledsam, J.R.; Romera-Paredes, B.; Nikolov, S.; Tomasev, N.; Blackwell, S.; Askham, H.; Glorot, X.; O’Donoghue, B.; Visentin, D.; et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat. Med. 2018, 24, 1342–1350. [Google Scholar] [CrossRef]
  36. Lee, C.S.; Baughman, D.M.; Lee, A.Y. Deep Learning Is Effective for Classifying Normal versus Age-Related Macular Degeneration OCT Images. Ophthalmol. Retin. 2017, 1, 322–327. [Google Scholar] [CrossRef]
  37. Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Venugopalan, S.; Widner, K.; Madams, T.; Cuadros, J.; et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA 2016, 316, 2402–2410. [Google Scholar] [CrossRef]
  38. Grassmann, F.; Mengelkamp, J.; Brandl, C.; Harsch, S.; Zimmermann, M.E.; Linkohr, B.; Peters, A.; Heid, I.M.; Palm, C.; Weber, B.H.F. A Deep Learning Algorithm for Prediction of Age-Related Eye Disease Study Severity Scale for Age-Related Macular Degeneration from Color Fundus Photography. Ophthalmology 2018, 125, 1410–1420. [Google Scholar] [CrossRef] [Green Version]
  39. Ting, D.S.W.; Cheung, C.Y.; Lim, G.; Tan, G.S.W.; Quang, N.D.; Gan, A.; Hamzah, H.; Garcia-Franco, R.; San Yeo, I.Y.; Lee, S.Y.; et al. Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images from Multiethnic Populations with Diabetes. JAMA 2017, 318, 2211–2223. [Google Scholar] [CrossRef]
  40. Abràmoff, M.D.; Lavin, P.T.; Birch, M.; Shah, N.; Folk, J.C. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. npj Digit. Med. 2018, 1, 39. [Google Scholar] [CrossRef] [Green Version]
  41. Poplin, R.; Varadarajan, A.V.; Blumer, K.; Liu, Y.; McConnell, M.V.; Corrado, G.S.; Peng, L.; Webster, D.R. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat. Biomed. Eng. 2018, 2, 158–164. [Google Scholar] [CrossRef] [Green Version]
  42. Chang, J.; Ko, A.; Park, S.M.; Choi, S.; Kim, K.; Kim, S.M.; Yun, J.M.; Kang, U.; Shin, I.H.; Shin, J.Y.; et al. Association of Cardiovascular Mortality and Deep Learning-Funduscopic Atherosclerosis Score derived from Retinal Fundus Images. Am. J. Ophthalmol. 2020, 217, 121–130. [Google Scholar] [CrossRef] [PubMed]
  43. Son, J.; Shin, J.Y.; Chun, E.J.; Jung, K.-H.; Park, K.H.; Park, S.J. Predicting High Coronary Artery Calcium Score from Retinal Fundus Images With Deep Learning Algorithms. Transl. Vis. Sci. Technol. 2020, 9, 28. [Google Scholar] [CrossRef] [PubMed]
  44. Khan, N.C.; Perera, C.; Dow, E.R.; Chen, K.M.; Mahajan, V.B.; Mruthyunjaya, P.; Do, D.V.; Leng, T.; Myung, D. Predicting Systemic Health Features from Retinal Fundus Images Using Transfer-Learning-Based Artificial Intelligence Models. Diagnostics 2022, 12, 1714. [Google Scholar] [CrossRef]
  45. Cheung, C.Y.; Xu, D.; Cheng, C.-Y.; Sabanayagam, C.; Tham, Y.-C.; Yu, M.; Rim, T.H.; Chai, C.Y.; Gopinath, B.; Mitchell, P.; et al. A deep-learning system for the assessment of cardiovascular disease risk via the measurement of retinal-vessel calibre. Nat. Biomed. Eng. 2021, 5, 498–508. [Google Scholar] [CrossRef] [PubMed]
  46. Ma, Y.; Xiong, J.; Zhu, Y.; Ge, Z.; Hua, R.; Fu, M.; Li, C.; Wang, B.; Dong, L.; Zhao, X.; et al. Development and validation of a deep learning algorithm using fundus photographs to predict 10-year risk of ischemic cardiovascular diseases among Chinese population. medRxiv 2021. medRxiv:2021.04.15.21255176. [Google Scholar]
  47. Rim, T.H.; Lee, G.; Kim, Y.; Tham, Y.C.; Lee, C.J.; Baik, S.J.; Kim, Y.A.; Yu, M.; Deshmukh, M.; Lee, B.K.; et al. Prediction of systemic biomarkers from retinal photographs: Development and validation of deep-learning algorithms. Lancet Digit. Health 2020, 2, e526–e536. [Google Scholar] [CrossRef]
  48. Gerrits, N.; Elen, B.; Craenendonck, T.V.; Triantafyllidou, D.; Petropoulos, I.N.; Malik, R.A.; De Boever, P. Age and sex affect deep learning prediction of cardiometabolic risk factors from retinal images. Sci. Rep. 2020, 10, 9432. [Google Scholar] [CrossRef]
  49. Zhang, L.; Yuan, M.; An, Z.; Zhao, X.; Wu, H.; Li, H.; Wang, Y.; Sun, B.; Li, H.; Ding, S.; et al. Prediction of hypertension, hyperglycemia and dyslipidemia from retinal fundus photographs via deep learning: A cross-sectional study of chronic diseases in central China. PLoS ONE 2020, 15, e0233166. [Google Scholar] [CrossRef] [PubMed]
  50. Dai, G.; He, W.; Xu, L.; Pazo, E.E.; Lin, T.; Liu, S.; Zhang, C. Exploring the effect of hypertension on retinal microvasculature using deep learning on East Asian population. PLoS ONE 2020, 15, e0230111. [Google Scholar] [CrossRef] [Green Version]
  51. Korot, E.; Pontikos, N.; Liu, X.; Wagner, S.K.; Faes, L.; Huemer, J.; Balaskas, K.; Denniston, A.K.; Khawaja, A.; Keane, P.A. Predicting sex from retinal fundus photographs using automated deep learning. Sci. Rep. 2021, 11, 10286. [Google Scholar] [CrossRef]
  52. Zhu, Z.; Shi, D.; Guankai, P.; Tan, Z.; Shang, X.; Hu, W.; Liao, H.; Zhang, X.; Huang, Y.; Yu, H.; et al. Retinal age gap as a predictive biomarker for mortality risk. Br. J. Ophthalmol. 2022. [Google Scholar] [CrossRef] [PubMed]
  53. Mitani, A.; Huang, A.; Venugopalan, S.; Corrado, G.S.; Peng, L.; Webster, D.R.; Hammel, N.; Liu, Y.; Varadarajan, A.V. Detection of anaemia from retinal fundus images via deep learning. Nat. Biomed. Eng. 2020, 4, 18–27. [Google Scholar] [CrossRef] [PubMed]
  54. Vaghefi, E.; Yang, S.; Hill, S.; Humphrey, G.; Walker, N.; Squirrell, D. Detection of smoking status from retinal images; a Convolutional Neural Network study. Sci. Rep. 2019, 9, 7180. [Google Scholar] [CrossRef] [PubMed]
  55. Sabanayagam, C.; Xu, D.; Ting, D.S.W.; Nusinovici, S.; Banu, R.; Hamzah, H.; Lim, C.; Tham, Y.C.; Cheung, C.Y.; Tai, E.S.; et al. A deep learning algorithm to detect chronic kidney disease from retinal photographs in community-based populations. Lancet Digit. Health 2020, 2, e295–e302. [Google Scholar] [CrossRef]
  56. Zhang, K.; Liu, X.; Xu, J.; Yuan, J.; Cai, W.; Chen, T.; Wang, K.; Gao, Y.; Nie, S.; Xu, X.; et al. Deep-learning models for the detection and incidence prediction of chronic kidney disease and type 2 diabetes from retinal fundus images. Nat. Biomed. Eng. 2021, 5, 533–545. [Google Scholar] [CrossRef]
  57. Tian, J.; Smith, G.; Guo, H.; Liu, B.; Pan, Z.; Wang, Z.; Xiong, S.; Fang, R. Modular machine learning for Alzheimer’s disease classification from retinal vasculature. Sci. Rep. 2021, 11, 238. [Google Scholar] [CrossRef]
  58. Montolío, A.; Martín-Gallego, A.; Cegoñino, J.; Orduna, E.; Vilades, E.; Garcia-Martin, E.; Palomar, A.P.d. Machine learning in diagnosis and disability prediction of multiple sclerosis using optical coherence tomography. Comput. Biol. Med. 2021, 133, 104416. [Google Scholar] [CrossRef]
  59. McDonald, W.I.; Compston, A.; Edan, G.; Goodkin, D.; Hartung, H.P.; Lublin, F.D.; McFarland, H.F.; Paty, D.W.; Polman, C.H.; Reingold, S.C. Recommended diagnostic criteria for multiple sclerosis: Guidelines from the International Panel on the diagnosis of multiple sclerosis. Ann. Neurol. Off. J. Am. Neurol. Assoc. Child Neurol. Soc. 2001, 50, 121–127. [Google Scholar] [CrossRef]
  60. Dietterich, T.G. (Ed.) Ensemble Methods in Machine Learning. Multiple Classifier Systems; Springer: Berlin/Heidelberg, Germany, 2000. [Google Scholar]
  61. López-Dorado, A.; Ortiz, M.; Satue, M.; Rodrigo, M.J.; Barea, R.; Sánchez-Morla, E.M.; Cavaliere, C.; Rodríguez-Ascariz, J.M.; Orduna-Hospital, E.; Boquete, L.; et al. Early Diagnosis of Multiple Sclerosis Using Swept-Source Optical Coherence Tomography and Convolutional Neural Networks Trained with Data Augmentation. Sensors 2022, 22, 167. [Google Scholar] [CrossRef]
  62. Shigueoka, L.S.; Mariottoni, E.B.; Thompson, A.C.; Jammal, A.A.; Costa, V.P.; Medeiros, F.A. Predicting Age From Optical Coherence Tomography Scans With Deep Learning. Transl. Vis. Sci. Technol. 2021, 10, 12. [Google Scholar] [CrossRef]
  63. Mendoza, L.; Christopher, M.; Brye, N.; Proudfoot, J.A.; Belghith, A.; Bowd, C.; Rezapour, J.; Fazio, M.A.; Goldbaum, M.H.; Weinreb, R.N.; et al. Deep Learning Predicts Demographic and Clinical Characteristics from Optic Nerve Head OCT Circle and Radial Scans. Investig. Ophthalmol. Vis. Sci. 2021, 62, 2120. [Google Scholar]
  64. Chueh, K.-M.; Hsieh, Y.-T.; Chen, H.H.; Ma, I.H.; Huang, S.-L. Identification of Sex and Age from Macular Optical Coherence Tomography and Feature Analysis Using Deep Learning. Am. J. Ophthalmol. 2022, 235, 221–228. [Google Scholar] [CrossRef] [PubMed]
  65. Hassan, O.N.; Menten, M.J.; Bogunovic, H.; Schmidt-Erfurth, U.; Lotery, A.; Rueckert, D. (Eds.) Deep Learning Prediction Of Age And Sex From Optical Coherence Tomography. In Proceedings of the 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), Nice, France, 13–16 April 2021. [Google Scholar]
  66. Wisely, C.E.; Wang, D.; Henao, R.; Grewal, D.S.; Thompson, A.C.; Robbins, C.B.; Yoon, S.P.; Soundararajan, S.; Polascik, B.W.; Burke, J.R.; et al. Convolutional neural network to identify symptomatic Alzheimer’s disease using multimodal retinal imaging. Br. J. Ophthalmol. 2022, 106, 388–395. [Google Scholar] [CrossRef] [PubMed]
Table 1. Salient retinal fundus regions/features associated with deep learning predictions.

Study, Publication Year (Country) | Prediction Targets | Salient Regions/Features Identified
Cardiovascular diseases (CVD) and CVD risk factors
Poplin et al., 2018 [41] (United States of America [USA]) | 5-year major adverse cardiovascular events | Retinal vessels (for major CVD risk factors)
Chang et al., 2020 [42] (Korea) | Carotid artery atherosclerosis | Optic disc and retinal vessels
Son et al., 2020 [43] (Korea) | Accumulation of coronary artery calcium | Central main retinal vessel branches
Age
Poplin et al., 2018 [41] (USA) | Age | Retinal vessels
Rim et al., 2020 [47] (Singapore) | Age | Optic disc and retinal vessels
Zhu et al., 2022 [52] (China) | Age | Peri-vascular regions
Sex
Poplin et al., 2018 [41] (USA) | Sex | Optic disc and retinal vessels
Rim et al., 2020 [47] (Singapore) | Sex | Optic disc and retinal vessels
Korot et al., 2021 [51] (United Kingdom) | Sex | Fovea, optic nerve and vascular arcades