1. Introduction
In addition, another group of 325 million people was at risk of Type II diabetes in 2017, and their numbers are also progressively increasing throughout the world [1]. Most of the people in this category belong to the 40 to 59 years age group, and roughly 1 out of every 2 of the 212 million affected people is completely unaware and uninformed of the disease. Hence it is quite evident that diabetic retinopathy may soon become a major health issue throughout the world. Obesity, an unhealthy diet and physical inactivity are the primary factors responsible for Type 2 diabetes. It is important to understand, however, that diabetic retinopathy typically develops only after a patient has had diabetes for at least 10 years while remaining unaware and untreated, without proper eye examination. Diabetic retinopathy can be prevented if it is detected early enough through regular health check-ups and systematic treatment of diabetes [2,3].
The duration of diabetes is one of the primary causes of patients developing retinopathy: the longer the duration, the higher the probability of occurrence of the disease. Retinopathy thus sets in when the patient has had diabetes for a long time while remaining unaware, uninformed and untreated regarding its natural progression towards diabetic retinopathy [4]. The onset of diabetes is marked by abnormal fluctuation of the blood sugar level. Normally, glucose in the body is transformed into energy that supports regular human activities. When the blood sugar level shoots up abnormally, however, the excess blood sugar accumulates in the blood vessels of various organs of the body, including the human eye [3]. This phenomenon is called hyperglycemia. Diabetic eye disease, or retinopathy, has two stages, namely Non-proliferative Diabetic Retinopathy (NPDR) and Proliferative Diabetic Retinopathy (PDR). In NPDR the retina swells, a condition known as macular edema, due to accumulation of glucose leading to blood vessel leakages in the eyes. The swelling can become so severe that the vessels get completely blocked, resulting in macular ischemia. In all of these instances the patient loses vision partially or completely, or suffers from blurred vision. PDR occurs at a much more advanced stage of diabetes, when new blood vessels start growing in the retina, a condition known as neovascularization. These new blood vessels are extremely thin and fragile and are therefore prone to haemorrhage. The blood from such haemorrhages leads to partial or complete vision loss. The newly created blood vessels also form scar tissue, which can detach the retina, resulting in loss of central or peripheral vision. The symptoms of NPDR and PDR include blurred vision, haemorrhages, cotton wool spots, double vision, corneal abnormalities, intra-retinal abrasions, microvascular abnormalities, microaneurysms and an increase in retinal permeability [5].
The popular diagnostic procedures for diabetic retinopathy include fluorescein angiography and optical coherence tomography. In fluorescein angiography, the physician injects a dye into a vein in the patient's arm, and pictures are taken as the dye flows through the blood vessels of the eyes, revealing blockages, leakages and haemorrhages. In optical coherence tomography, cross-sectional images of the retina are taken, which help to identify fluid leakages or damage to the retinal tissue [6].
It is thus evident that early detection of the disease plays a major role in saving patients from vision loss: the longer the disease lingers unnoticed or untreated, the more severe the consequences can be. Machine learning algorithms have become a prevalent choice for the prediction of various diseases. The concept of machine learning was framed by Arthur Samuel in 1959 as a technique for computers to learn automatically, without programming interventions, and to make decisions from the experience gained through learning. Deep neural networks (DNNs) build on the concepts of machine learning and artificial neural networks [7,8,9,10]. DNNs have successfully contributed to analysis and decision making in fields such as computer vision, speech recognition, drug design and medical image processing. The implementation of advanced machine learning approaches such as DNNs has contributed significantly to pathological screening and disease prediction, thereby reducing the burden of human interpretation. Given these encouraging results in other areas of healthcare, applying the same techniques to the detection of diabetic retinopathy is a natural point of interest, with the objective of reducing the occurrence of this disease [11,12]. Thus the motivation of the present study was:
Early detection of diabetic retinopathy, giving medical practitioners the opportunity to treat and cure the disease at an early stage with higher accuracy.
Focusing on the most significant factors of the disease and eliminating the irrelevant ones, ensuring more accurate classification.
In the present study, a deep neural network model is used in combination with Principal Component Analysis (PCA) and the firefly algorithm for the classification of a diabetic retinopathy dataset. The dataset is collected from the publicly available UCI machine learning repository. Being collected from the public domain, the data includes attributes that are irrelevant, and including them would only burden the ML model. Hence, the Principal Component Analysis (PCA) algorithm is applied for feature extraction from the DR dataset. To further improve the classification results, the firefly algorithm is applied for dimensionality reduction. The resulting reduced dataset is fed into the deep neural network model, which generates an enhanced classification of the diabetic retinopathy dataset. The results of the proposed model are evaluated against traditional state-of-the-art models to establish its superiority in terms of accuracy, specificity, precision, recall and sensitivity.
The rest of the paper is organized as follows:
Section 2 presents an explicit literature review,
Section 3 discusses the preliminaries and experimental setting,
Section 4 describes the methodology,
Section 5 highlights the results and
Section 6 provides conclusion and scope of future work.
2. Related Work
The study in [13] developed a deep learning system for the identification of diabetic retinopathy with higher accuracy than existing studies. The analysis was performed on a small percentage of images with higher resolutions. The results highlighted the ability of deep learning models to diagnose the disease at the desired performance level while also taking cost limitations into account.
The study in [14] implemented adjudication for the quantification of errors in diabetic retinopathy (DR) grading using a deep learning algorithm. The kappa score was measured, and the performance of the model was compared on the basis of sensitivity, accuracy and area under the curve (AUC).
The research work conducted in [15] developed a data-oriented deep learning model for DR detection, wherein coloured fundus images [16] of the disease were processed and the classification model helped to segregate diseased images from healthy ones.
In [17], a deep learning model was designed to detect diabetic retinopathy and macular edema from retinal fundus images. A deep convolutional neural network [18] was trained on a retinal image dataset consisting of 128,175 images. The sensitivity and specificity scores in the study helped to detect referable diabetic retinopathy (RDR) among diabetic patients using a deep neural network model.
The study by Swapna et al. [19] designed a deep learning model for the classification of diabetes using HRV data. The dynamic features of the HRV data were extracted using a combination of long short-term memory (LSTM) and convolutional neural networks. The model achieved high accuracy in detecting diabetes from HRV data.
The study in [20] presented a hybrid technique incorporating image processing and deep learning for the detection and classification of diabetic retinopathy. The model was validated on a retinal fundus dataset consisting of 400 images from the MESSIDOR database, yielding good results.
The study in [21] developed a computer-aided screening system to analyse fundus images with different illuminations and views. The study detected the severity level of DR using ML models: an AdaBoost classifier was used for feature extraction, and the data were analysed using Gaussian Mixture Models, KNN and SVM to separate retinopathy lesions from non-lesions.
The studies in [22,23] developed FFBAT-based algorithms for the classification of diabetes. Their unique contribution was the use of the LPP algorithm with fuzzy rules for feature extraction and the FFBAT-ANN model for classification. This combination helped to achieve better classification results, yielding improved accuracy.
Various studies have also used Probabilistic Neural Networks (PNN), Bayesian classification and Support Vector Machines (SVM) for the classification of the NPDR and PDR types of diabetic retinopathy. Images of haemorrhages [24] of blood vessels are analysed using image processing techniques, and the extracted features, when fed into the classifiers, help to classify the types of DR disease [25,26,27].
It is quite evident from the related work that the majority of the work on diabetic retinopathy detection revolves around the use of various machine learning models and the comparison of their performance. It is also observed that less emphasis has been placed on improving the quality of the diabetic retinopathy dataset, which could lead to more accurate results. It is important to highlight that the reliability of the results generated by a machine learning model depends on the features of the dataset. Extracting the most significant features of the dataset and using appropriate dimensionality reduction techniques help to enhance the accuracy of the prediction results of machine learning models. The present study focuses on this aspect and adopts a two-layered dimensionality reduction approach followed by a deep neural network model for classification. The unique contributions of the proposed work include:
A three-layered, rigorous pre-processing approach is adopted to enhance the quality of the dataset and to retain only the relevant and significant attributes for training the proposed model.
The implementation of PCA + Firefly significantly reduces the training time of the ML-based models.
3. Preliminaries and Experimental Setting
This section discusses the methodologies used in the proposed model, namely the PCA and firefly algorithms. The detailed architecture of the proposed model is also presented.
3.1. Principal Component Analysis
The concept of PCA is based on the objective of reducing the dimensionality of a data set consisting of multiple correlated variables while retaining the maximum variability in the data set [28,29]. The algorithm transforms the variables of the data set into a new set of orthogonal principal components, ordered such that the variation of the original variables retained by each component decreases as one moves down the order. Hence, the first principal component retains the maximum variation present in the original variables. The principal components are the orthogonal eigenvectors of the covariance matrix. The data set used in PCA needs to be scaled, and the method summarizes the data, producing results that are sensitive to the relative scaling. A principal component is defined as a "linear combination of optimally weighted observed variables". The output of PCA is a set of principal components whose number is less than or equal to the number of original variables.
The steps involved in applying PCA start with normalization of the data: the respective means are subtracted from each column of the data set, producing a data set with zero mean. The second step is the calculation of the covariance matrix. The eigenvalues and eigenvectors of the covariance matrix are then computed. The eigenvalues are sorted in descending order to give the order of significance of the components, and the dimensionality is reduced by choosing the first few eigenvalues and ignoring the rest. The corresponding eigenvectors are assembled into a matrix to form the feature vector. In the final step, the principal components are obtained by multiplying the transpose of the feature vector with the transpose of the scaled version of the data set. The dimensionality reduction performed by PCA makes it useful in facial recognition, computer vision and image compression. It also has a wide spectrum of applications in pattern identification in high-dimensional data in the fields of finance, data mining, bio-informatics and psychology [30,31,32,33].
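To make these steps concrete, the following is a minimal NumPy sketch of the procedure; the function and variable names are ours and the data is randomly generated for illustration, since the study itself relies on a library implementation of PCA.

```python
import numpy as np

def pca_reduce(X, k):
    """Project X (n_samples x n_features) onto its first k principal components."""
    X_centered = X - X.mean(axis=0)              # normalization step: zero-mean columns
    cov = np.cov(X_centered, rowvar=False)       # covariance matrix of the features
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues/eigenvectors of the symmetric matrix
    order = np.argsort(eigvals)[::-1]            # order of significance: descending eigenvalues
    feature_vector = eigvecs[:, order[:k]]       # keep the top-k eigenvectors, drop the rest
    return X_centered @ feature_vector           # project the scaled data onto the components

X = np.random.rand(100, 19)       # illustrative data with 19 features, as in the DR dataset
X_reduced = pca_reduce(X, k=5)    # reduced to 5 principal components
```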
3.2. Firefly Algorithm
The firefly algorithm is a "nature-inspired" algorithm based on the flashing behaviour of fireflies. Nature-inspired algorithms are extensively used in several stages of the machine learning process [34,35]. Fireflies emit natural light from their bodies, which helps them to attract or find other flying mates [36,37,38]. It also helps them to catch prey and protect themselves from predators. The algorithm is based on three primary assumptions [39]:
The artificial fireflies are unisex, and their attraction does not depend on gender.
The attractiveness of a firefly is proportional to the brightness of the light it emits, and hence it decreases as the fireflies move away from each other, due to absorption of the light by the air. Since all fireflies emit light, the one emitting the brightest light attracts most of its neighbours. If there is no brighter firefly nearby, the fireflies move around randomly in any direction.
The brightness of the flashing light, being the criterion for attraction, is the objective function to be optimized in the algorithm.
The basic scheme followed in the algorithm is summarized in Algorithm 1.
Algorithm 1: Pseudo code of the firefly algorithm [36].
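As an illustration of this scheme, the following is a minimal Python sketch of the standard firefly update rule, in which attractiveness decays with distance as beta0 * exp(-gamma * r^2) and a small random step drives exploration; the parameter values and the sphere objective are illustrative and are not taken from [36] or from the present study.

```python
import numpy as np

def firefly_optimize(objective, dim, n_fireflies=20, n_gen=100,
                     alpha=0.2, beta0=1.0, gamma=1.0, bounds=(-5.0, 5.0)):
    """Minimal firefly algorithm for minimization: brighter (better) fireflies attract dimmer ones."""
    rng = np.random.default_rng(0)
    low, high = bounds
    pos = rng.uniform(low, high, size=(n_fireflies, dim))
    intensity = np.array([objective(p) for p in pos])   # lower objective value = brighter firefly

    for _ in range(n_gen):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if intensity[j] < intensity[i]:          # firefly j is brighter, so i moves towards j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                    step = alpha * (rng.random(dim) - 0.5)   # random exploration component
                    pos[i] = np.clip(pos[i] + beta * (pos[j] - pos[i]) + step, low, high)
                    intensity[i] = objective(pos[i])
    best = np.argmin(intensity)
    return pos[best], intensity[best]

# Example: minimize the sphere function in 5 dimensions.
best_x, best_f = firefly_optimize(lambda x: float(np.sum(x ** 2)), dim=5)
```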
4. Experimental Setting
The experimental setting of the proposed methodology is illustrated in Figure 1. The dataset used in this work has 19 contributing attributes, whose values span different ranges. This variation in the range of attribute values may give some instances disproportionate weight, which may result in biased predictions. To avoid such heterogeneity, the StandardScaler method is used for pre-processing in the proposed work: it normalizes the data to a common range, eliminating bias in the prediction results. The Principal Component Analysis algorithm is then applied to this normalized data. The main reason for using PCA is to eliminate the insignificant attributes from consideration when training the DNN. To further strengthen the feature engineering process, one of the popular nature-inspired algorithms, the Firefly Optimization Algorithm, is used in this work. The main strength of the firefly algorithm is that it tunes the parameters such that the optimal parameters are chosen, with a fast convergence rate and without getting stuck in local minima. This property makes the firefly algorithm an ideal choice for feature engineering, selecting optimal parameters that influence the classification positively and thereby reducing training time. The dimensionally reduced dataset is then fed to the DNN for classification of the diabetic retinopathy dataset. The Adam optimizer and the softsign activation function were used at each layer except the output layer. The output layer used the sigmoid activation function to classify the diabetic retinopathy dataset, since it is a binary classifier. For backpropagation, the root mean square propagation (RMSprop) error was used. The dataset was split in an 8:2 ratio for training and testing respectively. Instead of training on the entire 80 percent of the data and then testing the model on the remaining 20 percent in one go, for every epoch a batch of 64 records was fed to the model, out of which 80 percent of the records were used to train the model and the remaining 20 percent to test it. The proposed model is summarized as follows (a minimal code sketch of the first two steps is given after the summary):
Input: Diabetic Retinopathy Dataset
Output: Classification of class label
Data Transformation: Normalization of the input dataset is done using StandardScaler.
Dimensionality Reduction: Input the transformed dataset to PCA for dimensionality reduction. To further refine the feature engineering, use the firefly optimization algorithm.
Classification: Feed the extracted features to the DNN for classifying the Diabetic Retinopathy dataset.
Evaluation: Evaluate the performance of the model using several measures like Accuracy, Precision, Recall, Sensitivity and Specificity.
Comparison: Comparison of the experimental results of the proposed model with traditional ML algorithms.
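A minimal sketch of the data transformation and PCA steps is given below, assuming the scikit-learn implementations of StandardScaler and PCA; the random arrays merely stand in for the Debrecen dataset, and the firefly-based refinement is indicated only as a placeholder comment since its exact parameterization is not detailed here.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

# Placeholder arrays standing in for the 1151 x 19 Debrecen feature matrix and binary labels.
X = np.random.rand(1151, 19)
y = np.random.randint(0, 2, size=1151)

X_scaled = StandardScaler().fit_transform(X)             # data transformation (normalization)
X_pca = PCA(n_components=0.99).fit_transform(X_scaled)   # PCA: retain 99% of the variance
# A firefly search would further select/weight the retained components at this point.

# 8:2 train/test split used before feeding the reduced data to the DNN.
X_train, X_test, y_train, y_test = train_test_split(X_pca, y, test_size=0.2, random_state=42)
```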
5. Results and Discussion
This section discusses the dataset used, the experimental framework, the metrics used and the experimental results.
The diabetic retinopathy dataset used in this study has 1151 instances and 20 attributes. The attributes of the dataset are described in
Table 1. The softsign activation function was used in all layers except the output layer.
The experimentation was carried out on Diabetic Retinopathy Debrecen dataset from UCI machine learning repository [
40]. The attributes in this dataset were the features extracted from the Messidor image dataset. A personal computer with 8 GB RAM was used for performing the experimentation using Python.
5.1. Metrics for Evaluation of the Model
The following metrics are used to evaluate the proposed model.
Accuracy: the percentage of correct predictions made by the classifier compared to the actual value of the label in the testing phase; equivalently, the ratio of the number of correct assessments to the number of all assessments. Accuracy is calculated using Equation (1):
Accuracy = (TP + TN)/(TP + TN + FP + FN)        (1)
where TP is true positives, TN is true negatives, FP is false positives and FN is false negatives.
If the class label of a record is positive and the classifier predicts it as positive, it is a true positive. If the class label is negative and the classifier predicts it as negative, it is a true negative. If the class label is positive but the classifier predicts it as negative, it is a false negative. If the class label is negative but the classifier predicts it as positive, it is a false positive.
Sensitivity: the percentage of true positives correctly identified by the classifier during testing, calculated using Equation (2):
Sensitivity = TP/(TP + FN)        (2)
Specificity: the percentage of true negatives correctly identified by the classifier during testing, calculated using Equation (3):
Specificity = TN/(TN + FP)        (3)
Precision: a measure of exactness; it gives the percentage of instances labelled as positive by the classifier that are actually positive, out of all predicted positive instances, as shown in Equation (4):
Precision = TP/(TP + FP)        (4)
Recall: a measure of completeness, i.e., the percentage of actual positive instances identified by the classifier as positive. Recall is the metric of choice when there is a high cost associated with false negatives, as shown in Equation (5):
Recall = TP/(TP + FN)        (5)
F1-measure (F1 or F1-score): the harmonic mean of precision and recall, as shown in Equation (6):
F1 = 2 × (Precision × Recall)/(Precision + Recall)        (6)
The F1 score is useful when a balance between precision and recall is needed. Accuracy is mainly driven by a large number of true negatives, whereas false negatives and false positives usually carry business costs (tangible and intangible). The F1 score is therefore often a better measure than accuracy when the class distribution is uneven (a large number of actual negatives).
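For reference, these measures can be computed from a confusion matrix as in the short scikit-learn sketch below; the label vectors are purely illustrative.

```python
from sklearn.metrics import (confusion_matrix, accuracy_score,
                             precision_score, recall_score, f1_score)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # illustrative ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # illustrative classifier predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)        # Equation (2); identical to recall
specificity = tn / (tn + fp)        # Equation (3)

print(accuracy_score(y_true, y_pred), precision_score(y_true, y_pred),
      recall_score(y_true, y_pred), f1_score(y_true, y_pred),
      sensitivity, specificity)
```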
5.2. Performance Analysis
For evaluating the proposed model, a sequential model was used to build the DNN-PCA model. For cross-validation, the dataset was split into two parts: 80% for training and 20% for validation/testing, for every 64 records (batch size). To identify the activation function best suited to the dataset, experiments were performed with several activation functions, namely relu, elu, tanh, softmax, selu, softplus and softsign, using 50 epochs and a batch size of 64. The results of these experiments are shown in
Figure 2. As observed from
Figure 2, the softsign activation function gave the best average training and testing accuracy. Hence, the softsign activation function is chosen for the dense layers when evaluating the model.
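Such a comparison can be scripted as in the sketch below (Keras/TensorFlow assumed; X_train and y_train come from the pre-processing sketch in Section 4, and the two hidden layers of 64 units inside the loop are only an illustrative architecture, not the one reported in the paper).

```python
from tensorflow import keras
from tensorflow.keras import layers

activations = ["relu", "elu", "tanh", "softmax", "selu", "softplus", "softsign"]
results = {}
for act in activations:
    m = keras.Sequential([
        layers.Dense(64, activation=act, input_shape=(X_train.shape[1],)),
        layers.Dense(64, activation=act),
        layers.Dense(1, activation="sigmoid"),     # binary DR label
    ])
    m.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    h = m.fit(X_train, y_train, epochs=50, batch_size=64, validation_split=0.2, verbose=0)
    results[act] = (h.history["accuracy"][-1], h.history["val_accuracy"][-1])

print(results)   # training/testing accuracy per activation function
```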
To choose the best optimizer for the layers of the deep neural network, experiments were conducted on the dataset using several optimizers, namely Adam, Nadam, SGD, RMSprop, Adagrad, Adadelta and Adamax, with 50 epochs and a batch size of 64. The results of these experiments are shown in
Figure 3. As per
Figure 3, the Adam optimizer provided the best accuracy. Hence, the Adam optimizer is chosen for the input layer and the other dense layers, and the sigmoid activation function is used at the output layer.
To choose the number of layers in the deep neural network, experiments were run on the DR dataset with varying numbers of layers, using the softsign activation function and the Adam optimizer for the input and dense layers, the sigmoid activation function at the output layer, 50 epochs and a batch size of 64. As shown in
Figure 4, the model had the best training and testing accuracy with 5 layers, with the accuracy starting to dip at 6 layers. Hence a deep neural network with 5 layers was used for the experiments.
To choose the number of epochs, experiments were run on the DR dataset using 5 intermediate layers with the softsign activation function, the Adam optimizer for the input and dense layers, the sigmoid activation function at the output layer and a batch size of 64. As shown in
Figure 5, the model achieved the best average training and testing accuracy with 600 epochs, with the testing accuracy starting to dip at 650 epochs. Hence, the deep neural network was trained with 600 epochs.
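Putting the selected settings together, a minimal Keras sketch of the final network is shown below; the width of the hidden layers (64 units) is our assumption, since the paper does not report it, and X_train/y_train again refer to the pre-processing sketch in Section 4.

```python
from tensorflow import keras
from tensorflow.keras import layers

# 5 hidden layers with the softsign activation, sigmoid output for the binary DR label.
model = keras.Sequential(
    [layers.Dense(64, activation="softsign", input_shape=(X_train.shape[1],))]
    + [layers.Dense(64, activation="softsign") for _ in range(4)]
    + [layers.Dense(1, activation="sigmoid")]
)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Selected configuration: 600 epochs, batch size 64, 80/20 train/validation split per batch.
history = model.fit(X_train, y_train, epochs=600, batch_size=64,
                    validation_split=0.2, verbose=0)
```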
In the experimental work, the number of components for the PCA was set through the explained variance ratio, i.e., the retained components preserve 99 percent of the information.
Figure 6,
Figure 7,
Figure 8,
Figure 9,
Figure 10 and
Figure 11 illustrate the performance evaluation of the ML models based on accuracy, precision, recall, sensitivity and specificity. It is evident from these figures that the PCA-Firefly based ML models outperform the other two cases, i.e., ML with PCA alone and ML without dimensionality reduction. Considering the inclusion and non-inclusion of dimensionality reduction and feature engineering with the ML algorithms, it is observed that the proposed model, DNN-PCA-Firefly, performs better than the other hybrid ML algorithms considered.
The results obtained based on the experimentation are tabulated in
Table 2.
The highlights of the results pertinent to the proposed model are:
The DNN-PCA-Firefly model outperforms the other popular hybrid ML models considered for comparison.
Applying PCA alone to the DNN and the other ML algorithms results in a slight deterioration of the performance measures, but reduces the training time.
The implementation of PCA + Firefly, on the contrary, enhances the performance of the ML algorithms while further reducing the training time, as illustrated in
Figure 12.
When the original dataset was used, the model succumbed to overfitting, which had a negative effect on the testing accuracy. However, when the number of records in the dataset was increased by resampling, the performance improved, with higher testing accuracy.
6. Conclusions and Future Work
In the present study, a hybrid Principal Component Analysis (PCA)-Firefly based deep neural network model is used for the classification of a diabetic retinopathy dataset. The dataset is collected from the publicly available UCI machine learning repository and, in its raw state, contained redundant and irrelevant attributes. Rigorous pre-processing was the prime focus of the study, and hence a three-layered pre-processing framework was adopted. First, the StandardScaler technique was employed to normalize the dataset; then Principal Component Analysis (PCA) was used for feature selection; finally, the firefly algorithm was used for dimensionality reduction. The reduced dataset was fed into the deep neural network (DNN), which generated classification results with enhanced accuracy. The results of the model were also evaluated against prominent machine learning approaches, and they confirm the superiority of the model in terms of accuracy, precision, recall, sensitivity and specificity. A major benefit of the model is its potential to be applied to high-dimensional datasets in various other domains. The same performance may not, however, be observed on low-dimensional datasets, where the model may overfit; this is its main limitation. As part of future work, the proposed model could be applied to datasets from other domains; its performance motivates similar studies on other high-dimensional data. The approach could also be used for eliminating noisy data in Magnetoencephalography (MEG) data analysis, contributing towards better prediction in healthcare.
Author Contributions
Conceptualization, T.R.G., M.A.; Data curation, S.S., P.K.R.M.; Formal analysis, N.K.; Investigation, S.B.; Methodology, T.R.G., I.-H.R.; Project administration, M.A., I.-H.R.; Resources, N.K.; Software, T.R.G., S.S.; Visualization, P.K.R.M.; Writing—S.B., M.A., N.K.; Writing—review and editing, T.R.G., S.B., S.S. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by the Institute for Information and communications Technology Promotion (IITP) grant funded by the Korean government (MSIT) (No. 2018-0-00508), Development of blockchain-based embedded devices and platform for MG security and operational efficiency.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Solomon, S.D.; Chew, E.; Duh, E.J.; Sobrin, L.; Sun, J.K.; VanderBeek, B.L.; Wykoff, C.C.; Gardner, T.W. Diabetic retinopathy: A position statement by the American Diabetes Association. Diabetes Care 2017, 40, 412–418.
- Luo, H.; Bell, R.A.; Garg, S.; Cummings, D.M.; Patil, S.P.; Jones, K. Trends and Racial/Ethnic Disparities in Diabetic Retinopathy Among Adults with Diagnosed Diabetes in North Carolina, 2000–2015. N. C. Med. J. 2019, 80, 76–82.
- Duh, E.J.; Sun, J.K.; Stitt, A.W. Diabetic retinopathy: Current understanding, mechanisms, and treatment strategies. JCI Insight 2017, 2, e93751.
- Stitt, A.W.; Curtis, T.M.; Chen, M.; Medina, R.J.; McKay, G.J.; Jenkins, A.; Gardiner, T.A.; Lyons, T.J.; Hammes, H.P.; Simo, R.; et al. The progress in understanding and treatment of diabetic retinopathy. Prog. Retin. Eye Res. 2016, 51, 156–186.
- Ting, D.S.W.; Cheung, G.C.M.; Wong, T.Y. Diabetic retinopathy: Global prevalence, major risk factors, screening practices and public health challenges: A review. J. Clin. Exp. Ophthalmol. 2016, 44, 260–277.
- Soares, M.; Neves, C.; Marques, I.P.; Pires, I.; Schwartz, C.; Costa, M.Â.; Santos, T.; Durbin, M.; Cunha-Vaz, J. Comparison of diabetic retinopathy classification using fluorescein angiography and optical coherence tomography angiography. Br. J. Ophthalmol. 2017, 101, 62–68.
- Cuzzocrea, A.; Bosco, G.L.; Pilato, G.; Schicchi, D. Multi-class Text Complexity Evaluation via Deep Neural Networks. In Proceedings of the International Conference on Intelligent Data Engineering and Automated Learning, Manchester, UK, 14–16 November 2019; pp. 313–322.
- Vinayakumar, R.; Alazab, M.; Soman, K.; Poornachandran, P.; Al-Nemrat, A.; Venkatraman, S. Deep learning approach for intelligent intrusion detection system. IEEE Access 2019, 7, 41525–41550.
- Vinayakumar, R.; Alazab, M.; Soman, K.; Poornachandran, P.; Venkatraman, S. Robust intelligent malware detection using deep learning. IEEE Access 2019, 7, 46717–46738.
- Bhattacharya, S.; Kaluri, R.; Singh, S.; Alazab, M.; Tariq, U. A Novel PCA-Firefly based XGBoost classification model for Intrusion Detection in Networks using GPU. Electronics 2020, 9, 219.
- Mansour, R.F. Deep-learning-based automatic computer-aided diagnosis system for diabetic retinopathy. Biomed. Eng. Lett. 2018, 8, 41–57.
- Venkatraman, S.; Alazab, M. Use of data visualisation for zero-day Malware detection. Secur. Commun. Netw. 2018, 2018.
- Sahlsten, J.; Jaskari, J.; Kivinen, J.; Turunen, L.; Jaanio, E.; Hietala, K.; Kaski, K. Deep learning fundus image analysis for diabetic retinopathy and macular edema grading. Sci. Rep. 2019, 9, 1–11.
- Krause, J.; Gulshan, V.; Rahimy, E.; Karth, P.; Widner, K.; Corrado, G.S.; Peng, L.; Webster, D.R. Grader variability and the importance of reference standards for evaluating machine learning models for diabetic retinopathy. Ophthalmology 2018, 125, 1264–1272.
- Li, X.; Pang, T.; Xiong, B.; Liu, W.; Liang, P.; Wang, T. Convolutional neural networks based transfer learning for diabetic retinopathy fundus image classification. In Proceedings of the 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Shanghai, China, 14–16 October 2017; pp. 1–11.
- Lahmiri, S.; Shmuel, A. Variational mode decomposition based approach for accurate classification of color fundus images with hemorrhages. Opt. Laser. Technol. 2017, 96, 243–248.
- Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Venugopalan, S.; Widner, K.; Madams, T.; Cuadros, J.; et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016, 316, 2402–2410.
- Castellano, G.; Castiello, C.; Mencar, C.; Vessio, G. Crowd Detection for Drone Safe Landing Through Fully-Convolutional Neural Networks. In Proceedings of the International Conference on Current Trends in Theory and Practice of Informatics, Dortmund, Germany, 17–21 February 2020; pp. 301–312.
- Swapna, G.; Vinayakumar, R.; Soman, K. Diabetes detection using deep learning algorithms. ICT Express 2018, 4, 243–246.
- Hemanth, D.J.; Deperlioglu, O.; Kose, U. An enhanced diabetic retinopathy detection and classification approach using deep convolutional neural network. Neural Comput. Appl. 2020, 32, 707–721.
- Lunscher, N.; Chen, M.L.; Jiang, N.; Zelek, J. Automated screening for diabetic retinopathy using compact deep networks. Int. J. Imaging Syst. Technol. 2017, 3, 1–3.
- Reddy, G.T.; Khare, N. Hybrid firefly-bat optimized fuzzy artificial neural network based classifier for diabetes diagnosis. IJIES 2017, 10, 18–27.
- Reddy, G.T.; Khare, N. Heart disease classification system using optimised fuzzy rule based algorithm. IJBET 2018, 27, 183–202.
- Lahmiri, S. High-frequency-based features for low and high retina haemorrhage classification. Healthc. Technol. 2017, 4, 20–24.
- Kanungo, Y.S.; Srinivasan, B.; Choudhary, S. Detecting diabetic retinopathy using deep learning. In Proceedings of the 2017 2nd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT), Bengaluru, India, 19–20 May 2017; pp. 801–804.
- Shen, L.; Chen, H.; Yu, Z.; Kang, W.; Zhang, B.; Li, H.; Yang, B.; Liu, D. Evolving support vector machines using fruit fly optimization for medical data classification. Knowl.-Based Syst. 2016, 96, 61–75.
- Shanthi, T.; Sabeenian, R. Modified Alexnet architecture for classification of diabetic retinopathy images. Comput. Electr. Eng. 2019, 76, 56–64.
- Jolliffe, I.T.; Cadima, J. Principal component analysis: A review and recent developments. Philos. Trans. A Math. Phys. Eng. Sci. 2016, 374, 20150202.
- Sapuppo, F.; Umana, E.; Frasca, M.; La Rosa, M.; Shannahoff-Khalsa, D.; Fortuna, L.; Bucolo, M. Complex spatio-temporal features in meg data. Math. Biosci. Eng. 2006, 3, 697.
- Mohsen, H.; El-Dahshan, E.S.A.; El-Horbaty, E.S.M.; Salem, A.B.M. Classification using deep learning neural networks for brain tumors. Future Comput. Inform. J. 2018, 3, 68–71.
- Zisselman, E.; Adler, A.; Elad, M. Compressed learning for image classification: A deep neural network approach. In Handbook of Numerical Analysis; Elsevier: Amsterdam, The Netherlands, 2018; Volume 19, pp. 3–17.
- Diaz, M.; Ferrer, M.A.; Impedovo, D.; Pirlo, G.; Vessio, G. Dynamically enhanced static handwriting representation for Parkinson’s disease detection. Pattern Recogn. Lett. 2019, 128, 204–210.
- Casalino, G.; Castellano, G.; Consiglio, A.; Liguori, M.; Nuzziello, N.; Primiceri, D. A Predictive Model for MicroRNA Expressions in Pediatric Multiple Sclerosis Detection. In Proceedings of the International Conference on Modeling Decisions for Artificial Intelligence, Berlin, Germany, 26–30 August 2019; pp. 177–188.
- Reddy, G.T.; Reddy, M.P.K.; Lakshmanna, K.; Rajput, D.S.; Kaluri, R.; Srivastava, G. Hybrid genetic algorithm and a fuzzy logic classifier for heart disease diagnosis. Evol. Intell. 2019, 1–12.
- Reddy, M.P.K.; Babu, M.R. Implementing self adaptiveness in whale optimization for cluster head section in Internet of Things. Cluster Comput. 2019, 22, 1361–1372.
- Wang, H.; Wang, W.; Zhou, X.; Sun, H.; Zhao, J.; Yu, X.; Cui, Z. Firefly algorithm with neighborhood attraction. Inform. Sci. 2017, 382, 374–387.
- Thippa Reddy, G.; Khare, N. FFBAT-optimized rule based fuzzy logic classifier for diabetes. Int. J. Eng. Res. Afr. Trans. Tech. Publ. 2016, 24, 137–152.
- Reddy, G.T.; Khare, N. An efficient system for heart disease prediction using hybrid OFBAT with rule-based fuzzy logic model. J. Circuit. Syst. Comp. 2017, 26, 1750061.
- Sánchez, D.; Melin, P.; Castillo, O. Optimization of modular granular neural networks using a firefly algorithm for human recognition. Eng. Appl. Artif. Intel. 2017, 64, 172–186.
- Antal, B.; Hajdu, A. An ensemble-based system for automatic screening of diabetic retinopathy. Knowl.-Based Syst. 2014, 60, 20–27.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).