Article

Automatic Classification of Hypertensive Retinopathy by Gray Wolf Optimization Algorithm and Naïve Bayes Classification

by
Usharani Bhimavarapu
1,*,
Gopi Battineni
2,3 and
Nalini Chintalapudi
2
1
Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vijayawada 520002, India
2
Clinical Research Center, School of Medicinal and Health Products Sciences, University of Camerino, 62032 Camerino, Italy
3
The Research Centre of the ECE Department, V. R. Siddhartha Engineering College, Vijayawada 520007, India
*
Author to whom correspondence should be addressed.
Axioms 2023, 12(7), 625; https://doi.org/10.3390/axioms12070625
Submission received: 17 April 2023 / Revised: 1 June 2023 / Accepted: 21 June 2023 / Published: 24 June 2023

Abstract

:
Retinal blood vessels are affected by a variety of eye diseases, including hypertensive retinopathy (HR) and diabetic retinopathy (DR). A person with HR needs regular eye examinations, which motivates the use of computer vision methods that analyze fundus images and assist ophthalmologists automatically. Automated diagnostic systems are useful for diagnosing different retinal diseases for ophthalmologists and patients, which calls for an automated HR detection and classification system based on retinal images. In this work, a sliding band filter was used to enhance the fundus images and small convex regions in order to develop an automated system for detecting and classifying HR severity levels. Image classification was performed with an improved wolf optimization algorithm combined with a Naïve Bayes classifier. The model was tested on publicly available datasets, and its results were compared to those of existing models. The results show that the improved Naïve Bayes model classified the different HR severity levels on the optimized features and produced a maximum accuracy of 100%, outperforming the compared classifiers.

1. Introduction

Hypertension, i.e., chronically elevated blood pressure, leads to many cardiovascular diseases. High blood pressure is expected to cost the United States $274 billion by 2030 [1]. Complications caused by high blood pressure put the patient’s health and life at serious risk. Hypertensive complications are a significant cause of death, leading to organ injury and other complications [2,3,4,5].
While hypertensive retinopathy (HR) can cause considerable disturbance of the ocular retina, HR shows no symptoms during the early stages, and more than 90% of HR patients can prevent the disease from becoming vision threatening if the proper treatment is initiated at the right time. The only way to detect this disease is to carry out regular eye examinations using color fundus images obtained with a fundus camera. Computer vision is one of the main tools used to automate the diagnosis of many diseases. Because the computer system ignores all factors unrelated to the diagnosis of the disease, the possibility of error is reduced, and a better diagnosis is offered with great accuracy and agility [6]. However, correct region boundaries cannot be derived from simple image traits alone when detailed image information is lacking, which motivates dedicated segmentation methods.
For fundus image segmentation, threshold segmentation is an attractive option, as it offers benefits such as easy implementation, low computational complexity, and good performance [7]. Additionally, fundus images can be classified based on threshold values and segmented with a single threshold or multiple thresholds. In thresholding, the segmentation thresholds are determined by optimizing a criterion, such as maximizing the between-class variance [3].
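As a minimal illustration of multi-threshold segmentation (the function name and values below are illustrative, not from the paper), each pixel can be labelled by the intensity interval it falls into:

```python
import numpy as np

def threshold_segment(img, thresholds):
    """Assign each pixel the index of the intensity interval it falls in.

    With n thresholds, pixels are split into n + 1 classes; class k
    contains the intensities t_{k-1} <= x < t_k.
    """
    return np.digitize(img, np.sort(np.asarray(thresholds)))

# Two thresholds produce three regions (e.g., background, vessel, bright lesion).
img = np.array([[10, 120], [200, 90]], dtype=np.uint8)
labels = threshold_segment(img, [100, 180])  # labels == [[0, 1], [2, 0]]
```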
When it comes to image segmentation, the thresholding technique shows remarkable performance. However, when dealing with complex image segmentation problems, a high number of thresholds increases the algorithm’s complexity. Multilevel thresholding using Kapur’s entropy has been optimized with the dragonfly optimization algorithm to minimize computational complexity [8]. A grey wolf optimizer for multilevel image thresholding was created by adapting Kapur’s entropy and Otsu’s method to determine segmentation thresholds [9]. For diagnosing hypertension, ophthalmologists use the ratio of the retinal arterial blood vessels (ARVS) to determine the presence of hypertension. Changes in retinal blood vessels may follow retinal disturbances over time, and an abnormal vein width leads to a low arteriolar-to-venular mean diameter ratio (AVR). It has been reported that fundus image segmentation helps in the provision of better treatment [10].
Existing literature has proposed systems for the early detection of HR in which the vessels were segmented using multi-scale filtering [11,12]. One study estimated the vessel width in the region of interest and calculated the AVR to detect HR, achieving an accuracy of about 93.1–93.7% [11]. HR detection by segmenting the vessels using moment-based and grey-level features with a support vector machine (SVM) for classification has also been presented [12]. Segmentation of the blood vessels using top-hat transformation and a double-ring filter produced 75% accuracy [13]. Similarly, another study that segmented vessels by extracting intensity- and color-based characteristics to classify vessels as arteries and veins produced a maximum accuracy of 96% [14].
Despite the significant results generated by existing works, the discussed models are complex, as they segment grey-scale images instead of color images and achieve low accuracy. This work presents a wolf optimization algorithm that reduces the computational complexity and improves the segmentation accuracy. We extracted several prominent features to classify the severity levels of HR using an improved Naïve Bayes classifier. These distinct features were selected based on repeated experiments and are statistically significant for the classification task.

2. Materials and Methods

In this study, we aimed to present a classifier for the classification of HR with improved wolf optimization for the segmentation of the fundus images. The image pre-processing was conducted by applying a sliding band filter for the fundus image enhancement. In the segmentation phase, we applied improved wolf optimization algorithms to the pre-processed image and distinguished the candidate regions.
The block diagram of the experimental framework has been visualized in Figure 1. It is decomposed into five stages, including (1) fundus image collection, (2) image preprocessing, (3) image segmentation, (4) feature extraction, and (5) HR severity classification.

2.1. Image Collection and Preprocessing

Using the publicly available datasets ODIR [15], DRIVE [16], STARE [17], and VICAVR [18], 1200 fundus images were collected. The STARE dataset consists of 400 images, the DRIVE dataset consists of 40 fundal images, the VICAVR dataset consists of 58 fundal images, and the ODIR dataset consists of 6426 fundal images. These images are categorized into six classes by classifiers trained according to international standards for the classification of hypertensive retinopathy. Images of poor quality that do not show clearly visible lesions are considered ungradable. Therefore, the six levels, i.e., normal, mild, moderate, severe, malignant, and ungradable, were included. In the present study, we focused solely on the five-class classification task for HR classification, i.e., we did not use images belonging to the ungradable class. Table 1 shows the breakdown of the collection of data by HR category.
We used a sliding band filter to enhance the fundus image, a multi-scale segmentation for feature extraction, and a multi-class Naïve Bayes classifier for image classification. If the retinal lesions cannot be completely identified, regardless of the quality of the model, the missing lesions cannot be detected at the final detection stage. Essentially, a non-linear gamma transformation was applied to each pixel of the fundus image, with an individual gamma parameter calculated from the pixel and its neighbors. The process was wrapped in a multiscale approach to enhance the differently sized lesions in the fundus image. The sliding band filter (SBF) relies on gradient convergence, which identifies lighting changes and detects low-contrast candidates in the noisy retina image. It depends on maximizing the convergence index at every point in the retinal image. The SBF eases the detection of the shape and size of the candidates [19].
The mathematical representation of the SBF is:
$$ \mathrm{SBF}(p) = \frac{1}{N} \sum_{t=0}^{N-1} \max_{B_{\min} \le b \le B_{\max}} \left( \frac{1}{k+1} \sum_{v=b-k/2}^{b+k/2} \mathrm{Conv}(t,v) \right) $$
$$ \mathrm{Conv}(t,v) = \cos\left( \theta_t - \alpha(\theta_t, v) \right) $$
$$ \theta_t = \frac{2\pi}{N}(t-1) $$
$$ \alpha(\theta_t, v) = \arctan\left( \frac{G_v^C}{G_v^R} \right) $$
where $N$ is the number of support region lines; $B_{\min}$ and $B_{\max}$ are the inner and outer sliding limits of the band; $G_v^C$ and $G_v^R$ are the column and row gradients at the corresponding fundus image position; $k$ is the sliding band width; $b$ is the position of the band center in the support region line, ranging from $B_{\min}$ to $B_{\max}$; and $\theta_t - \alpha(\theta_t, v)$ is the angle between the gradient vector at $(\theta_t, v)$ and the direction $\theta_t$. The experimental framework was implemented on a 2.2 GHz Intel Core i7 processor with 16 GB RAM using Python 3.7.2.
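The SBF can be sketched in plain NumPy. This is a simplified, unoptimized reading of the formula above (support lines cast radially, a band of width k + 1 slid between B_min and B_max, convergence measured against the local gradient angle); it is a sketch under these assumptions, not the authors' implementation.

```python
import numpy as np

def sliding_band_filter(img, n_lines=8, b_min=2, b_max=6, k=2):
    """Simplified sliding-band filter.

    For each pixel, n_lines support lines are cast outward; along each
    line, the band of width k + 1 whose average gradient convergence is
    maximal is selected, and the per-line maxima are averaged.
    """
    g_r, g_c = np.gradient(img.astype(float))   # row/column gradients
    grad_angle = np.arctan2(g_c, g_r)           # alpha at every pixel
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    thetas = 2 * np.pi * np.arange(n_lines) / n_lines

    for y in range(h):
        for x in range(w):
            acc = 0.0
            for theta in thetas:
                dy, dx = np.cos(theta), np.sin(theta)
                # Convergence index sampled along this support line.
                conv = []
                for r in range(b_min - k // 2, b_max + k // 2 + 1):
                    yy, xx = int(round(y + r * dy)), int(round(x + r * dx))
                    if 0 <= yy < h and 0 <= xx < w:
                        conv.append(np.cos(theta - grad_angle[yy, xx]))
                    else:
                        conv.append(0.0)
                # Slide a band of width k + 1 and keep the best average.
                best = max(np.mean(conv[i:i + k + 1])
                           for i in range(len(conv) - k))
                acc += best
            out[y, x] = acc / n_lines
    return out
```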

2.2. Fundus Segmentation

Image segmentation is a critical technique in image processing that splits a given fundus image into regions with distinct features, followed by the extraction of the object of interest. Threshold-based segmentation extracts regions according to pixel intensity values. In [20], pixels with intensities greater than a threshold value are replaced with white pixels. The kurtosis-based multi-thresholding weighted whale optimization algorithm (KMWWOA) selects the optimal n-level thresholds for retinal image segmentation.
For instance, suppose there are M intensity levels for each channel in the enhanced input RGB image, in the range 0, 1, 2, …, M − 1. The probability distribution is represented as
$$ P_s^h = \frac{I_s^h}{N}, \quad h \in \{R, G, B\} $$
$$ \sum_{s=1}^{M} P_s^h = 1, \quad h \in \{R, G, B\} $$
where $s$ denotes the intensity level, ranging from 0 to M − 1; $h$ denotes the image channel; $N$ is the total number of pixels in the retina image; and $I_s^h$ is the pixel count for intensity level $s$ in channel $h$. The overall kurtosis $K_Q^h$ of each image is represented as
$$ K_Q^h = \sum_{s=1}^{M} \left( P_s^h \left( \frac{s\,P_s^h}{B_z^h} \right)^2 \right)^2, \quad h \in \{R, G, B\} $$
The optimal threshold values for the retina image can be evaluated by maximizing the kurtosis between any two classes, which is represented as
$$ T_c^h = \sum_{z=1}^{m} B_z^h \left( K_z^h - K_Q^h \right), \quad h \in \{R, G, B\} $$
where $K_z^h$ and $B_z^h$ are the kurtosis and the probability of occurrence of class $z$, and $z$ indexes the $m$ classes.
The occurrence probability $B_z^h$ of the $m$ classes $C_1^h, \ldots, C_m^h$ is modelled as
$$ B_z^h = \begin{cases} \sum_{s=1}^{k_1^h} P_s^h, & z = 1 \\[1ex] \sum_{s=k_{z-1}^h+1}^{k_z^h} P_s^h, & 1 < z < m \\[1ex] \sum_{s=k_{m-1}^h+1}^{L} P_s^h, & z = m \end{cases} \qquad h \in \{R, G, B\} $$
The kurtosis $K_z^h$ of each class is computed as
$$ K_z^h = \begin{cases} \dfrac{\sum_{s=1}^{k_1^h} \left( P_s^h\, \frac{s P_s^h}{B_z^h} \right)^4}{\left[ \sum_{s=1}^{k_1^h} \left( P_s^h \left( \frac{s P_s^h}{B_z^h} \right)^2 \right)^2 \right]}, & z = 1 \\[3ex] \dfrac{\sum_{s=k_{z-1}^h+1}^{k_z^h} \left( P_s^h\, \frac{s P_s^h}{B_z^h} \right)^4}{\left[ \sum_{s=k_{z-1}^h+1}^{k_z^h} \left( P_s^h \left( \frac{s P_s^h}{B_z^h} \right)^2 \right)^2 \right]}, & 1 < z < m \\[3ex] \dfrac{\sum_{s=k_{m-1}^h+1}^{L} \left( P_s^h\, \frac{s P_s^h}{B_z^h} \right)^4}{\left[ \sum_{s=k_{m-1}^h+1}^{L} \left( P_s^h \left( \frac{s P_s^h}{B_z^h} \right)^2 \right)^2 \right]}, & z = m \end{cases} $$
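The per-channel probabilities $P_s^h$ on which this criterion builds can be computed directly from the histogram of each RGB channel; a small sketch (function name ours):

```python
import numpy as np

def channel_probabilities(img):
    """Return P_s^h = I_s^h / N for each channel h in {R, G, B}.

    `img` is an H x W x 3 uint8 image; each returned distribution has
    256 bins and sums to 1.
    """
    n = img.shape[0] * img.shape[1]
    return {h: np.bincount(img[..., i].ravel(), minlength=256) / n
            for i, h in enumerate("RGB")}
```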
The search space agents update their positions using the current optimum, and the behavior of an agent is represented as
$$ Y(i+1) = Y^*(i) - G \cdot C $$
$$ C = \left| B \cdot Y^*(i) - Y(i) \right| $$
Here, $G$ and $B$ are coefficient vectors, and $i$ is the current iteration. $Y^*$ is the best position vector obtained so far. $G = 2g \cdot r - g$ and $B = 2r$, where $g$ is a vector that decreases from 2 to 0 over the iterations, and $r$ is a random vector in the range 0 to 1 that lets the agent explore the search space.
$$ Y(i+1) = C \cdot e^{f v} \cos(2\pi v) + Y^*(i) $$
Here, $C$ is the distance between the whale and the prey; $f$ defines the shape of the spiral; and $v$ is a random value in the range −1 to 1. The whale position update is mathematically represented as
$$ Y(i+1) = \begin{cases} Y^*(i) - G \cdot C, & p < 0.5 \\ C \cdot e^{f v} \cos(2\pi v) + Y^*(i), & p \ge 0.5 \end{cases} $$
During exploration, the search agent position is updated as
$$ C = \left| B \cdot x_{\mathrm{rand}} - Y \right| $$
$$ Y(i+1) = x_{\mathrm{rand}} - G \cdot C $$
where $x_{\mathrm{rand}}$ is a random position vector.
The optimum thresholds are evaluated as
$$ \eta^h = \max_{1 < k_1^h < \cdots < k_{m-1}^h < L} T_c^h\!\left(k_z^h\right) $$
The kurtosis-based, multi-thresholding whale optimization algorithm is presented in Algorithm 1.
Algorithm 1: Kurtosis-based, multi-thresholding whale optimization algorithm.
Input: Enhanced retina image
Output: Segmented retina image
1. Initialize the population of solutions $x_i$ (i = 1, …, 250)
2. Evaluate the fitness of each solution using $\eta^h = \max_{1 < k_1^h < \cdots < k_{m-1}^h < L} T_c^h(k_z^h)$
3. Initialize $Y^*$ with the best solution
   while (d < maximum iterations)
     for each search agent
       update $g$, $G$, $B$, $v$, and $p$
4.     if (p < 0.5 and |G| < 1)
         update the current search agent position using $Y(i+1) = Y^*(i) - G \cdot C$
5.     else if (p < 0.5 and |G| ≥ 1)
         select $x_{\mathrm{rand}}$ and update the current search agent position using $Y(i+1) = x_{\mathrm{rand}} - G \cdot C$
6.     else if (p ≥ 0.5)
         update the current search agent position using $Y(i+1) = C \cdot e^{f v} \cos(2\pi v) + Y^*(i)$
       repeat for all search agents
7. Evaluate the fitness of every search agent and update $Y^*$ if a better solution is found
8. Segment the retina image with the best thresholds, which maximize the kurtosis criterion
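Algorithm 1 can be sketched as follows. The fitness below uses a simple between-class variance as a stand-in for the kurtosis criterion $T_c^h$, and all names follow the equations above rather than the authors' code, which is not public; treat it as an illustrative sketch under those assumptions.

```python
import numpy as np

def woa_multilevel_threshold(hist, n_thresh=2, agents=20, iters=60, seed=0):
    """Whale-optimization search for multilevel thresholds on a histogram."""
    rng = np.random.default_rng(seed)
    levels = np.arange(len(hist))
    prob = hist / hist.sum()

    def fitness(t):
        # Between-class variance of the classes induced by thresholds t
        # (an Otsu-style stand-in for the kurtosis criterion T_c^h).
        t = np.sort(np.clip(t, 1, len(hist) - 2).astype(int))
        edges = np.concatenate(([0], t, [len(hist)]))
        mu_total = (levels * prob).sum()
        score = 0.0
        for a, b in zip(edges[:-1], edges[1:]):
            w = prob[a:b].sum()             # class weight B_z
            if w > 0:
                mu = (levels[a:b] * prob[a:b]).sum() / w
                score += w * (mu - mu_total) ** 2
        return score

    # Whale positions = candidate threshold vectors.
    Y = rng.uniform(1, len(hist) - 2, size=(agents, n_thresh))
    fits = np.array([fitness(y) for y in Y])
    best, best_fit = Y[np.argmax(fits)].copy(), fits.max()

    for i in range(iters):
        g = 2 - 2 * i / iters                    # g decreases from 2 to 0
        for j in range(agents):
            r = rng.random(n_thresh)
            G, B = 2 * g * r - g, 2 * rng.random(n_thresh)
            if rng.random() < 0.5:
                if np.all(np.abs(G) < 1):        # encircle the best whale
                    Y[j] = best - G * np.abs(B * best - Y[j])
                else:                            # explore around a random whale
                    x_rand = Y[rng.integers(agents)]
                    Y[j] = x_rand - G * np.abs(B * x_rand - Y[j])
            else:                                # spiral update (f = 1)
                v = rng.uniform(-1, 1)
                Y[j] = np.abs(best - Y[j]) * np.exp(v) * np.cos(2 * np.pi * v) + best
            f = fitness(Y[j])
            if f > best_fit:
                best_fit, best = f, Y[j].copy()
    return np.sort(np.clip(best, 1, len(hist) - 2).astype(int))
```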

2.3. Feature Extraction and Selection

In CAD systems, features are combined to characterize lesions in a manner similar to traditional visual diagnosis, with high per-feature sensitivities and specificities. In computational environments, background image pixels provide sufficient information. Feature extraction methods based on color and texture are widely used to evaluate candidates. The amount of training data can be reduced drastically by selecting the most powerful features and incorporating them into the learning algorithm. Three types of features were used: Hu’s moments [21], the gray-level co-occurrence matrix (GLCM), and sliding band features. Hu’s seven moments, invariant to translation, scale, and rotation, are represented as
$$ \mu_{m,n} = \sum_{a} \sum_{b} (a - a_p)^m (b - b_p)^n f(a,b) $$
where $(a_p, b_p)$ is the object’s center and $f(a,b)$ is the pixel intensity. The normalized moments are $\eta_{m,n} = \mu_{m,n} / \mu_{0,0}^{(m+n)/2+1}$, and the seven evaluated moments are listed below:
(a) $\phi_1 = \eta_{2,0} + \eta_{0,2}$
(b) $\phi_2 = (\eta_{2,0} - \eta_{0,2})^2 + 4\eta_{1,1}^2$
(c) $\phi_3 = (\eta_{3,0} - 3\eta_{1,2})^2 + (3\eta_{2,1} - \eta_{0,3})^2$
(d) $\phi_4 = (\eta_{3,0} + \eta_{1,2})^2 + (\eta_{2,1} + \eta_{0,3})^2$
(e) $\phi_5 = (\eta_{3,0} - 3\eta_{1,2})(\eta_{3,0} + \eta_{1,2})\left[ (\eta_{3,0} + \eta_{1,2})^2 - 3(\eta_{2,1} + \eta_{0,3})^2 \right] + (3\eta_{2,1} - \eta_{0,3})(\eta_{2,1} + \eta_{0,3})\left[ 3(\eta_{3,0} + \eta_{1,2})^2 - (\eta_{2,1} + \eta_{0,3})^2 \right]$
(f) $\phi_6 = (\eta_{2,0} - \eta_{0,2})\left[ (\eta_{3,0} + \eta_{1,2})^2 - (\eta_{2,1} + \eta_{0,3})^2 \right] + 4\eta_{1,1}(\eta_{3,0} + \eta_{1,2})(\eta_{2,1} + \eta_{0,3})$
(g) $\phi_7 = (3\eta_{2,1} - \eta_{0,3})(\eta_{3,0} + \eta_{1,2})\left[ (\eta_{3,0} + \eta_{1,2})^2 - 3(\eta_{2,1} + \eta_{0,3})^2 \right] - (\eta_{3,0} - 3\eta_{1,2})(\eta_{2,1} + \eta_{0,3})\left[ 3(\eta_{3,0} + \eta_{1,2})^2 - (\eta_{2,1} + \eta_{0,3})^2 \right]$
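The seven moments can be computed directly from a grayscale region. A compact NumPy sketch (ours, not the authors' code) follows; the test below checks the rotation invariance the moments are designed for:

```python
import numpy as np

def hu_moments(img):
    """Hu's seven invariant moments of a 2-D grayscale region."""
    img = img.astype(float)
    a, b = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    ac, bc = (a * img).sum() / m00, (b * img).sum() / m00  # centroid

    def eta(m, n):
        # Central moment about the centroid, scale-normalized.
        mu = (((a - ac) ** m) * ((b - bc) ** n) * img).sum()
        return mu / m00 ** ((m + n) / 2 + 1)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
        (n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
        (n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
        + 4 * n11 * (n30 + n12) * (n21 + n03),
        (3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
    ])
```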
GLCM highlights the texture information of an image, since it computes texture features by considering the spatial relationships of the pixels, and its features are inexpensive to compute. Due to space constraints, we showcase only some of the features in this paper; Table 2 presents the list of GLCM features.
In addition to detecting all candidates of convex shape, SBF features can help differentiate between arteries and veins. The mean SBF output was computed inside the candidate region ($\mu_s^{\mathrm{feature}}$) and in its neighborhood ($\mu_s^{\mathrm{neigh}}$); the feature is the difference between the two:
$$ \mu_s^{\mathrm{feature}} - \mu_s^{\mathrm{neigh}} $$
Because they carry oxygenated blood, arteries are thinner than veins and have a stronger central reflex; contrast and color variations between veins and arteries are small. Based on color and statistical features, segmented vessels are classified into arteries and veins for the AVR computation. For each segmented vessel, center-line pixels were described by features based on pixel intensity, vessel profiles, and vessel segments. A total of 59 features were extracted for AV classification. A feature vector containing all the extracted features was created for each center-line pixel in the segmented vessels and its surrounding pixels. The feature vector of each pixel was fed into the proposed classifier, which identified whether the vessel belonged to the arterial class or the vein class. This process was repeated for all vessel segments in the fundus images. Arterial and venous segments were rendered in red and blue, respectively. Feature vectors combining textural features, disc margin obscuration features, and vessel features from segmented fundus images, together with the AVR value, were used to detect papilledema.
$$ \mathrm{AVR} = \frac{\sqrt{0.87\,a_w^2 + 1.01\,b_w^2 - 0.22\,a_w b_w - 10.76}}{\sqrt{0.72\,a_w^2 + 0.91\,b_w^2 + 450.05}} $$
where $a_w$ is the smaller and $b_w$ the larger of the paired vessel widths.
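A sketch of an AVR computation in this style, iteratively combining the narrowest and widest branch widths into trunk equivalents before taking the ratio. The pairing procedure is a Knudtson/Parr–Hubbard-style assumption on our part; the paper does not spell it out, and the function name is ours.

```python
import math

def parr_hubbard_avr(artery_widths, vein_widths):
    """Arteriolar-to-venular ratio from lists of branch widths."""
    def combine(widths, c1, c2, c3, const):
        # Repeatedly merge the smallest (a_w) and largest (b_w) widths
        # into one equivalent trunk width until a single value remains.
        widths = sorted(widths)
        while len(widths) > 1:
            a_w, b_w = widths.pop(0), widths.pop(-1)
            widths.append(math.sqrt(c1 * a_w ** 2 + c2 * b_w ** 2
                                    + c3 * a_w * b_w + const))
            widths.sort()
        return widths[0]

    crae = combine(artery_widths, 0.87, 1.01, -0.22, -10.76)  # arteriolar trunk
    crve = combine(vein_widths, 0.72, 0.91, 0.0, 450.05)      # venular trunk
    return crae / crve
```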
Selecting the right features is crucial for successful image recognition. When classifiers are trained with a limited set of learning samples, a peaking phenomenon occurs; thus, feature selection was used. As the number of features increases beyond this peak, the classification rate of the classifiers decreases [22]. An improved ant colony optimization was used to select the features in this study [23].

2.4. Classification Using Improved Naïve Bayes Classifier

Once the affected region was segmented and its features were extracted, a decision support system helped to grade the hypertensive retinopathy. Naïve Bayes (NB) classifiers are probabilistic classifiers that assume independence among features. This assumption has proved effective in many practical applications, including medical diagnosis and system performance management [24,25]. Given a data point $x = \{x_1, x_2, \ldots, x_m\}$ of $m$ features, an improved NB classifier was proposed to predict the class $C_q$ for $x$ based on the probability $P(C_q \mid x) = P(C_q \mid x_1, x_2, \ldots, x_m)$ for $q = 1, 2, \ldots, Q$.
After applying the Bayes theorem, this becomes:
$$ P(C_q \mid x) = \frac{P(x \mid C_q)\, P(C_q)}{P(x)} = \frac{P(x_1, x_2, \ldots, x_m \mid C_q)\, P(C_q)}{P(x_1, x_2, \ldots, x_m)} $$
By the chain rule, the term $P(x_1, x_2, \ldots, x_m \mid C_q)$ decomposes as
$$ P(x_1, \ldots, x_m \mid C_q) = P(x_1 \mid x_2, \ldots, x_m, C_q)\, P(x_2 \mid x_3, \ldots, x_m, C_q) \cdots P(x_{m-1} \mid x_m, C_q)\, P(x_m \mid C_q) $$
Under the Naïve Bayes conditional independence assumption, the features in $x$ are independent of each other, and the decomposition simplifies to:
$$ P(x_1, x_2, \ldots, x_m \mid C_q) = \prod_{s=1}^{m} P(x_s \mid C_q) $$
thus,
$$ P(C_q \mid x_1, \ldots, x_m) \;\propto\; P(C_q, x_1, \ldots, x_m) \;\propto\; P(C_q)\, P(x_1, \ldots, x_m \mid C_q) \;\propto\; P(C_q) \prod_{s=1}^{m} P(x_s \mid C_q) $$
where $\propto$ denotes proportionality. The distribution over the class $C_q$ is represented as
$$ P(C_q \mid x_1, \ldots, x_m) = \frac{1}{Z}\, P(C_q) \prod_{s=1}^{m} P(x_s \mid C_q) $$
where $Z = P(x) = \sum_{q=1}^{Q} P(C_q)\, P(x \mid C_q)$ is a scaling factor depending on $x_1, x_2, \ldots, x_m$.
The data point $x = \{x_1, x_2, \ldots, x_m\}$ of $m$ features is assigned to the most relevant class by evaluating $P(C_q) \prod_{s=1}^{m} P(x_s \mid C_q)$ for $q = 1, 2, \ldots, Q$; the class with the maximum value is assigned to $x$:
$$ \hat{C} = \underset{q \in \{1, \ldots, Q\}}{\arg\max}\; R(x_s)\, P(C_q) \prod_{s=1}^{m} P(x_s \mid C_q) $$
where the kurtosis-based weight is
$$ R(x_s) = \frac{\frac{1}{M} \sum_{k=1}^{M} \left( x_k - \frac{1}{M} \sum_{k=1}^{M} x_k \right)^4}{\left( \frac{1}{M} \sum_{k=1}^{M} x_k \right)^4} $$
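The decision rule can be sketched with class-conditional Gaussians in log space. This is a minimal sketch, not the authors' improved classifier: the kurtosis weight $R(x_s)$, which as written does not depend on the class index $q$ and therefore does not change the argmax, is omitted here.

```python
import numpy as np

class GaussianNaiveBayes:
    """Minimal Gaussian Naive Bayes: argmax_q P(C_q) * prod_s p(x_s | C_q)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_, self.means_, self.vars_ = [], [], []
        for c in self.classes_:
            Xc = X[y == c]
            self.priors_.append(len(Xc) / len(X))
            self.means_.append(Xc.mean(axis=0))
            self.vars_.append(Xc.var(axis=0) + 1e-9)  # variance smoothing
        return self

    def predict(self, X):
        scores = []
        for p, m, v in zip(self.priors_, self.means_, self.vars_):
            # log P(C_q) + sum_s log N(x_s; mu_qs, sigma_qs^2)
            ll = -0.5 * (np.log(2 * np.pi * v) + (X - m) ** 2 / v).sum(axis=1)
            scores.append(np.log(p) + ll)
        return self.classes_[np.argmax(np.array(scores), axis=0)]
```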

3. Results

Experiments were undertaken to assess the performance of the proposed model for vessel segmentation. The proposed model was run with threshold levels from 2 to 5. Table 3 tabulates the accuracies obtained at the different threshold levels; higher threshold values led to more accurate results.
The evaluation function was applied to the proposed model at threshold levels between 2 and 5. Table 4 compares the proposed model with various optimization algorithms at these threshold levels. The proposed model provided more precise and robust segmentation and required less computation time as the number of thresholds increased.
Table 5 presents the population stability over 50–100 iterations, including the point at which the PSO model reached stability [26]. In addition, the high level of exploration prevented the model from dropping into local minima and allowed it to be more effective [27]. Compared to the state-of-the-art models, the proposed model required fewer iterations to reach the best value, and the number of iterations makes it easy to identify which algorithm converged quickly. For 50 iterations and a population size of 100, the proposed model achieved stability faster than the state-of-the-art models at the same threshold value.
The parameter values chosen for the number of iterations and the population size were based on experiments with many search agents. A sufficient number of iterations approximates the global optimum better and yields faster convergence, while other values weaken it. From Figure 2, it is evident that raising the threshold value produced more accurate results.
Table 6 tabulates the performance of the proposed improved WOA compared to the state-of-the-art models. Compared with the alternative models, it produced accurate, detailed segmentation results including tiny vessels, making it a good choice for automatic CAD systems that rely on vessel segmentation results for their estimates.
Figure 3 shows the vessel segmentation and detection by the proposed model on the STARE and DRIVE datasets. We used an improved Naïve Bayes classifier for HR classification, and precision, recall, and F1 score were used to quantify its performance. At test time, we obtained a predictive posterior distribution. By avoiding multiple training sessions, this method required fewer resources and improved the accuracy of the classification and detection of hypertensive retinopathy. Multi-class classification requires predicting the likelihood of several mutually exclusive classes; based on the clinical criteria applied, these classes represented four or five pathology grades [25,32].
A severity classification confusion matrix is shown in Figure 4, along with samples that were correctly and incorrectly classified. Almost all sample classes (i.e., no HR, mild, moderate, severe, and proliferative) were correctly classified. Once these statistical values of the classifier were calculated, they were compared against the state-of-the-art models to assess robustness and efficiency. Table 7 compares the proposed model with the state-of-the-art models on the multi-class classification measures of accuracy, precision, recall, and F1 score.

4. Discussion

Many people around the world suffer from HR, which is caused by high blood pressure affecting the retinal blood vessels. However, HR patients are often unaware of the disease, and its severity can only be detected by an ophthalmologic eye examination. Blindness or vision loss often results when HR is diagnosed at a late stage. This work focuses on HR detection using a nature-inspired optimization algorithm: hypertensive retinopathy vessels are segmented more accurately by whale optimization. HR diagnosis requires a deep analysis of the retinal vasculature, so a feature map that captures the most informative features of retinal blood vessels was designed in this study to detect them in unconstrained scenarios.
We proposed an automated CAD system to detect and classify hypertensive retinopathy. Processing starts with contrast enhancement, performed using sliding band filtering. In addition to enhancing and highlighting the lesion region, these preprocessing steps make the region easier to classify. We used the KMWWOA algorithm to segment the vessels, which improved the segmentation by targeting small vessels. As it was applied to different datasets with different conditions, it proved robust to changes in the input fundus images: whether the input image was normal or abnormal with exudates or microaneurysms, the accuracy was preserved.
The number of iterations has an essential impact on the multi-thresholding model’s performance. We tested the effect of the number of iterations on the best threshold level value and on convergence using both the proposed model and state-of-the-art models such as PSO and WOA. The proposed algorithm had a high level of exploration, which allowed it to search the entire search space and prevented it from falling into local minima. The results show that the proposed solution achieved the highest value with fewer iterations than PSO. PSO leaned more heavily on exploitation, which can trap it in local minima, whereas the proposed option offered a better balance of exploration and exploitation. As the number of iterations increased, the thresholding level improved, up to the point at which further iterations no longer affected the best threshold level. From the number of iterations, it could be determined that the suggested algorithm converged faster. Both the proposed algorithm and PSO almost reached the same highest intensity threshold level of 25, but the proposed method converged faster at the same threshold level value.
Using these features, the developed model produced 96.7% accuracy, a specificity of 0.935, and a sensitivity of 0.998 during feature extraction. Compared with the state-of-the-art models on the DRIVE, STARE, and VICAVR datasets, the developed model saved computational time and achieved accurate, detailed segmentation results including tiny vessels. Consequently, this model is suitable for classifying HR severity levels based on vessel segmentation and estimation. It exceeded most state-of-the-art models in terms of overall accuracy, and the proposed classifier produced fewer misclassifications.

5. Conclusions

A novel vessel segmentation model based on a kurtosis-based, multi-threshold WOA for fundus images was presented in this study. Using KMWWOA, optimum n-level thresholds were automatically selected for retinal image segmentation. The results were close to the ground truth, indicating high accuracy compared to state-of-the-art models. In this study, we detected and classified the five stages of HR (i.e., no HR, mild, moderate, severe, and malignant) based on the improved segmentation technique. The performance metrics approximately reached 100% with the proposed classifier.

Author Contributions

Conceptualization, N.C.; data curation, U.B.; funding acquisition, U.B. and G.B.; investigation, U.B. and N.C.; methodology, U.B. and G.B.; resources, U.B. and G.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created.

Conflicts of Interest

No author has any conflict of interest.

References

  1. Mozaffarian, D.; Benjamin, E.J.; Go, A.S.; Arnett, D.K.; Blaha, M.J.; Cushman, M.; De Ferranti, S.; Després, J.P.; Fullerton, H.J.; Howard, V.J.; et al. Heart disease and stroke statistics—2015 update: A report from the American Heart Association. Circulation 2015, 131, e29–e322. [Google Scholar] [CrossRef] [Green Version]
  2. Mensah, G.A. Hypertension and Target Organ Damage: Don’t Believe Everything You Think! Ethn Dis. 2016, 26, 275–278. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Sundström, J.; Neovius, M.; Tynelius, P.; Rasmussen, F. Association of blood pressure in late adolescence with subsequent mortality: Cohort study of Swedish male conscripts. BMJ 2011, 342, d643. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Battistoni, A.; Canichella, F.; Pignatelli, G.; Ferrucci, A.; Tocci, G.; Volpe, M. Hypertension in young people: Epidemiology, diagnostic assessment and therapeutic approach. High Blood Press. Cardiovasc. Prev. 2015, 22, 381–388. [Google Scholar] [CrossRef] [PubMed]
  5. Mensah, G.A.; Croft, J.B.; Giles, W.H. The heart, kidney, and brain as target organs in hypertension. Cardiol. Clin. 2002, 20, 225–247. [Google Scholar] [CrossRef]
  6. Rodrigues, M.B.; Da Nobrega, R.V.M.; Alves, S.S.A.; Reboucas Filho, P.P.; Duarte, J.B.F.; Sangaiah, A.K.; De Albuquerque, V.H.C. Health of things algorithms for malignancy level classification of lung nodules. IEEE Access 2018, 6, 18592–18601. [Google Scholar] [CrossRef]
  7. Wong, T.; Mitchell, P. The eye in hypertension. Lancet 2007, 369, 425–435. [Google Scholar] [CrossRef]
  8. Sambandam, R.K.; Jayaraman, S. Self-adaptive dragonfly based optimal thresholding for multilevel segmentation of digital images. J. King Saud Univ. Comput. Inf. Sci. 2018, 30, 449–461.
  9. Khairuzzaman, A.K.M.; Chaudhury, S. Multilevel thresholding using grey wolf optimizer for image segmentation. Expert Syst. Appl. 2017, 86, 64–76.
  10. Usher, D.; Dumskyj, M.; Himaga, M.; Williamson, T.H.; Nussey, S.; Boyce, J. Automated detection of diabetic retinopathy in digital retinal images: A tool for diabetic retinopathy screening. Diabet. Med. 2004, 21, 84–90.
  11. Manikis, G.C.; Sakkalis, V.; Zabulis, X.; Karamaounas, P.; Triantafyllou, A.; Douma, S.; Zamboulis, C.; Marias, K. An image analysis framework for the early assessment of hypertensive retinopathy signs. In Proceedings of the 2011 E-Health and Bioengineering Conference (EHB), Iași, Romania, 24–26 November 2011; pp. 1–6.
  12. Narasimhan, K.; Neha, V.C.; Vijayarekha, K. Hypertensive retinopathy diagnosis from fundus images by estimation of AVR. Procedia Eng. 2012, 38, 980–993.
  13. Muramatsu, C.; Hatanaka, Y.; Iwase, T.; Hara, T.; Fujita, H. Automated detection and classification of major retinal vessels for determination of diameter ratio of arteries and veins. In Medical Imaging 2010: Computer-Aided Diagnosis; SPIE: Bellingham, WA, USA, 2010; Volume 7624, pp. 153–160.
  14. Mirsharif, Q.; Tajeripour, F.; Pourreza, H. Automated characterization of blood vessels as arteries and veins in retinal images. Comput. Med. Imaging Graph. 2013, 37, 607–617.
  15. Kaggle. Available online: https://www.kaggle.com/datasets/andrewmvd/ocular-disease-recognition-odir5k (accessed on 4 February 2023).
  16. DRIVE. Available online: https://drive.grand-challenge.org/ (accessed on 4 February 2023).
  17. STARE. Available online: https://cecas.clemson.edu/~ahoover/stare/ (accessed on 4 February 2023).
  18. VICAVR. Available online: http://www.varpa.es/research/ophtalmology.html#vicavr (accessed on 4 February 2023).
  19. Quelhas, P.; Marcuzzo, M.; Mendonça, A.M.; Campilho, A. Cell nuclei and cytoplasm joint segmentation using the sliding band filter. IEEE Trans. Med. Imaging 2010, 29, 1463–1473.
  20. Kulkarni, R.V.; Venayagamoorthy, G.K. Bio-inspired algorithms for autonomous deployment and localization of sensor nodes. IEEE Trans. Syst. Man Cybern. Part C 2010, 40, 663–675.
  21. Krause, J.; Gulshan, V.; Rahimy, E.; Karth, P.; Widner, K.; Corrado, G.S.; Peng, L.; Webster, D.R. Grader variability and the importance of reference standards for evaluating machine learning models for diabetic retinopathy. Ophthalmology 2018, 125, 1264–1272.
  22. Binkley, K.J.; Hagiwara, M. Balancing exploitation and exploration in particle swarm optimization: Velocity-based reinitialization. Inf. Media Technol. 2008, 3, 103–111.
  23. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  24. Al Shalchi, N.F.A.; Rahebi, J. Human retinal optic disc detection with grasshopper optimization algorithm. Multimed. Tools Appl. 2022, 81, 24937–24955.
  25. Arnay, R.; Fumero, F.; Sigut, J. Ant colony optimization-based method for optic cup segmentation in retinal images. Appl. Soft Comput. 2017, 52, 409–417.
  26. Jadhav, A.S.; Patil, P.B.; Biradar, S. Optimal feature selection-based diabetic retinopathy detection using improved rider optimization algorithm enabled with deep learning. Evol. Intell. 2021, 14, 1431–1448.
  27. Chakraborty, S.; Pradhan, R.; Ashour, A.S.; Moraru, L.; Dey, N. Grey-Wolf-Based Wang's Demons for retinal image registration. Entropy 2020, 22, 659.
  28. Decencière, E.; Zhang, X.; Cazuguel, G.; Lay, B.; Cochener, B.; Trone, C.; Gain, P.; Ordonez, R.; Massin, P.; Erginay, A.; et al. Feedback on a publicly distributed image database: The Messidor database. Image Anal. Stereol. 2014, 33, 231–234.
  29. Agurto, C.; Joshi, V.; Nemeth, S.; Soliz, P.; Barriga, S. Detection of hypertensive retinopathy using vessel measurements and textural features. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 5406–5409.
  30. Irshad, S.; Akram, M.U. Classification of retinal vessels into arteries and veins for detection of hypertensive retinopathy. In Proceedings of the 2014 Cairo International Biomedical Engineering Conference (CIBEC), Giza, Egypt, 11–13 December 2014; pp. 133–136.
  31. Akbar, S.; Akram, M.U.; Sharif, M.; Tariq, A.; Ullah Yasin, U. Arteriovenous ratio and papilledema based hybrid decision support system for detection and grading of hypertensive retinopathy. Comput. Methods Programs Biomed. 2018, 154, 123–141.
  32. Domingos, P.; Pazzani, M. On the optimality of the simple Bayesian classifier under zero-one loss. Mach. Learn. 1997, 29, 103–130.
  33. Hellerstein, J.L.; Jayram, T.S.; Rish, I. Recognizing End-User Transactions in Performance Management; IBM Thomas J. Watson Research Division: Hawthorne, NY, USA, 2000; pp. 596–602.
  34. Jain, A.K.; Waller, W.G. On the optimal number of features in the classification of multivariate Gaussian data. Pattern Recognit. 1978, 10, 365–374.
  35. Peng, H.; Ying, C.; Tan, S.; Hu, B.; Sun, Z. An improved feature selection algorithm based on ant colony optimization. IEEE Access 2018, 6, 69203–69209.
Figure 1. Block diagram of the proposed system.
Figure 2. Segmentation results with threshold levels from STARE dataset: (a) ground truth, (b) threshold 2 segmented image, (c) threshold 3 segmented image, (d) threshold 4 segmented image, and (e) threshold 5 segmented image.
Figure 3. Vessel Segmentation using Improved WOA for STARE (left) and DRIVE (right) datasets.
Figure 4. Confusion matrix for multiclass classification.
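The per-class metrics reported later in Table 7 (precision, recall, F-score, and accuracy) follow directly from a confusion matrix like the one in Figure 4. The sketch below computes them with NumPy; the 5 × 5 matrix of counts is hypothetical and purely illustrative, not the paper's data.

```python
import numpy as np

# Hypothetical 5-class confusion matrix (rows = true, cols = predicted),
# classes ordered: No HR, Mild, Moderate, Severe, Malignant.
cm = np.array([[48,  1,  1,  0,  0],
               [ 2, 95,  2,  1,  0],
               [ 1,  2, 45,  2,  0],
               [ 0,  1,  2, 46,  1],
               [ 0,  0,  1,  1, 48]])

tp = np.diag(cm).astype(float)
precision = tp / cm.sum(axis=0)   # column sums = everything predicted as class c
recall    = tp / cm.sum(axis=1)   # row sums = everything truly in class c
f_score   = 2 * precision * recall / (precision + recall)
accuracy  = tp.sum() / cm.sum()   # overall fraction of correct predictions
```

Each class gets its own precision/recall pair, which is why Table 7 reports one row per severity grade rather than a single scalar per model.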
Table 1. Dataset distribution for this study.

| Category  | Count |
|-----------|-------|
| Normal    | 200   |
| Mild      | 400   |
| Moderate  | 200   |
| Severe    | 200   |
| Malignant | 200   |
Table 2. GLCM feature lists.

| N | Feature | Formula |
|---|---------|---------|
| F1 | Angular second moment | $\sum_x \sum_y [p(x,y)]^2$ |
| F2 | Contrast | $\sum_{q=0}^{M-1} q^2\, p_{i-j}(q)$ |
| F3 | Correlation | $\dfrac{\sum_x \sum_y (xy)\, p(x,y) - \mu_i \mu_j}{\sigma_i \sigma_j}$ |
| F4 | Inverse difference moment | $\sum_x \sum_y \dfrac{1}{1 + (x-y)^2}\, p(x,y)$ |
| F5 | Sum average | $\sum_{x=2}^{2K} x\, p_{i+j}(x)$ |
| F6 | Sum variance | $\sum_{x=2}^{2K} (x - F_7)^2\, p_{i+j}(x)$ |
| F7 | Sum entropy | $-\sum_{x=2}^{2K} p_{i+j}(x) \log[p_{i+j}(x)]$ |
| F8 | Entropy | $-\sum_x \sum_y p(x,y) \log[p(x,y)]$ |
| F9 | Difference variance | Variance of $p_{i-j}$ |
| F10 | Difference entropy | $-\sum_{x=0}^{K-1} p_{i-j}(x) \log[p_{i-j}(x)]$ |
| F11 | Info. measure of correlation | $\dfrac{A_{IJ} - A_{IJ1}}{\max[A_I, A_J]}$ |
| F12 | Max. correlation coefficient | $(\text{second largest eigenvalue of } B)^{1/2}$, where $B(x,j) = \sum_k \dfrac{p(x,k)\, p(j,k)}{p_i(x)\, p_j(k)}$ |
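The formulas in Table 2 all operate on a normalized gray-level co-occurrence matrix. A minimal NumPy sketch computing a few of them (F1, F2, F8) on a toy 4-level image follows; the `glcm` helper and the single horizontal offset are illustrative simplifications, not the paper's exact extraction pipeline.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized symmetric gray-level co-occurrence matrix for one offset."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            a, b = img[y, x], img[y + dy, x + dx]
            m[a, b] += 1
            m[b, a] += 1  # count both directions for symmetry
    return m / m.sum()

# Toy 4-level image.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])

p = glcm(img, levels=4)
i, j = np.indices(p.shape)

asm      = np.sum(p ** 2)                        # F1: angular second moment
contrast = np.sum((i - j) ** 2 * p)              # F2: contrast
entropy  = -np.sum(p[p > 0] * np.log(p[p > 0]))  # F8: entropy
```

In practice a library routine such as scikit-image's `graycomatrix`/`graycoprops` computes several of these properties directly; the hand-rolled version above just makes the definitions concrete.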
Table 3. Accuracies and processing times of the proposed model for different segmentation results.

| Threshold | Accuracy (%) | Processing Time (ms) |
|-----------|--------------|----------------------|
| 2 | 98.67 | 0.2474 |
| 3 | 98.85 | 0.2479 |
| 4 | 99.04 | 0.2482 |
| 5 | 99.36 | 0.2485 |
Table 4. Comparison of average accuracy for different segmentation models at different threshold levels.

| Threshold Level | PSO | WOA | KMWWOA |
|-----------------|-----|-----|--------|
| 2 | 96.34 | 97.25 | 98.67 |
| 3 | 96.79 | 97.37 | 98.85 |
| 4 | 97.35 | 98.33 | 99.04 |
| 5 | 97.47 | 98.62 | 99.36 |
Table 5. Comparison of the proposed model's performance with PSO for different population sizes and iteration counts.

| Population Size | Iterations | PSO Avg. Threshold Value | Proposed Best Threshold Value |
|-----------------|------------|--------------------------|-------------------------------|
| 50  | 1   | 31 | 24 |
| 50  | 25  | 28 | 21 |
| 50  | 50  | 27 | 20 |
| 50  | 75  | 32 | 26 |
| 50  | 100 | 35 | 25 |
| 50  | 150 | 27 | 25 |
| 50  | 200 | 25 | 25 |
| 100 | 1   | 21 | 18 |
| 100 | 25  | 25 | 21 |
| 100 | 50  | 26 | 25 |
| 100 | 75  | 21 | 25 |
| 100 | 100 | 22 | 25 |
| 100 | 150 | 25 | 25 |
| 100 | 200 | 25 | 25 |
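For context on how threshold values like those in Table 5 are produced, the sketch below shows a generic grey wolf optimizer maximizing Otsu's between-class variance for multilevel thresholding. This is the textbook GWO update, not the paper's improved variant, and the histogram and parameter values (`wolves`, `iters`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def between_class_variance(hist, thresholds):
    """Otsu-style objective: sum over classes of w_k * (mu_k - mu_total)^2."""
    bins = np.arange(len(hist))
    p = hist / hist.sum()
    mu_total = np.sum(bins * p)
    edges = [0] + sorted(int(t) for t in thresholds) + [len(hist)]
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = np.sum(bins[lo:hi] * p[lo:hi]) / w
            var += w * (mu - mu_total) ** 2
    return var

def gwo_thresholds(hist, n_thresh=2, wolves=20, iters=50):
    """Generic grey wolf optimizer searching for threshold positions."""
    lo, hi = 1, len(hist) - 1
    X = rng.uniform(lo, hi, (wolves, n_thresh))
    fit = np.array([between_class_variance(hist, x) for x in X])
    for t in range(iters):
        order = np.argsort(fit)[::-1]  # descending: we maximize the objective
        alpha, beta, delta = (X[k].copy() for k in order[:3])
        a = 2.0 * (1 - t / iters)      # exploration factor decays to 0
        for i in range(wolves):
            new = np.zeros(n_thresh)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(n_thresh), rng.random(n_thresh)
                A, C = 2 * a * r1 - a, 2 * r2
                new += (leader - A * np.abs(C * leader - X[i])) / 3
            X[i] = np.clip(new, lo, hi)
        fit = np.array([between_class_variance(hist, x) for x in X])
    return sorted(int(round(v)) for v in X[np.argmax(fit)])

# Illustrative trimodal histogram over 256 gray levels.
hist = np.zeros(256)
hist[30:60], hist[100:140], hist[200:230] = 50, 80, 60
th = gwo_thresholds(hist, n_thresh=2)
```

Each wolf encodes one candidate set of thresholds, and the alpha/beta/delta leaders pull the pack toward the best solutions found so far, which is how population size and iteration count end up affecting the final threshold values reported in Table 5.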
Table 6. Comparison of segmentation results with the state-of-the-art models.

| Model | Accuracy (%) | Sensitivity (%) | Specificity (%) |
|-------|--------------|-----------------|-----------------|
| Grasshopper OA [24] | 95.24 | 82.64 | 97.42 |
| Ant CO [25] | 94.63 | 79.46 | 97.21 |
| Rider OA [26] | 96.72 | 89.46 | 97.46 |
| Grey WOA [27] | 97.36 | 91.63 | 98.42 |
| Proposed | 99.36 | 93.56 | 99.85 |
Table 7. Performance measures for multiclass classification.

| Project | Classifier | Accuracy (%) | Precision (%) | Recall (%) | F-Score (%) | AUC | Class |
|---------|------------|--------------|---------------|------------|-------------|-----|-------|
| Agurto et al. [29] | Partial Least Squares | 96 | 87 | 89 | 88 | 0.97 | No HR |
| | | 96 | 94 | 94 | 94 | 0.97 | Mild |
| | | 95 | 86 | 83 | 84 | 0.95 | Moderate |
| | | 95 | 83 | 87 | 85 | 0.95 | Severe |
| | | 96 | 89 | 84 | 87 | 0.96 | Malignant |
| Irshad et al. [30] | SVM | 98 | 94 | 95 | 95 | 0.97 | No HR |
| | | 97 | 95 | 97 | 96 | 0.97 | Mild |
| | | 97 | 92 | 89 | 90 | 0.98 | Moderate |
| | | 97 | 90 | 94 | 92 | 0.98 | Severe |
| | | 97 | 96 | 89 | 92 | 0.97 | Malignant |
| Akbar et al. [31] | SVM-RBF | 99 | 99 | 97 | 96 | 0.98 | No HR |
| | | 99 | 99 | 99 | 99 | 0.98 | Mild |
| | | 99 | 97 | 95 | 96 | 0.98 | Moderate |
| | | 99 | 97 | 96 | 97 | 0.98 | Severe |
| | | 99 | 96 | 98 | 97 | 0.99 | Malignant |
| Proposed | Improved Naïve Bayes | 100 | 99 | 99 | 99 | 0.99 | No HR |
| | | 100 | 100 | 100 | 100 | 0.99 | Mild |
| | | 100 | 99 | 100 | 99 | 0.99 | Moderate |
| | | 100 | 99 | 99 | 99 | 0.99 | Severe |
| | | 100 | 100 | 99 | 100 | 0.99 | Malignant |
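The proposed classifier in Table 7 is an improved Naïve Bayes; the standard Gaussian form it builds on can be sketched in a few lines of NumPy. This is the textbook classifier under the conditional-independence assumption, not the authors' improved variant, and the two-class toy data is purely illustrative.

```python
import numpy as np

class SimpleGaussianNB:
    """Textbook Gaussian naive Bayes (not the paper's improved variant)."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        # Small floor on the variance avoids division by zero.
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([(y == c).mean() for c in self.classes])
        return self

    def predict(self, X):
        # log P(c) + sum over features of log N(x | mu, var); summing the
        # per-feature terms is exactly the naive independence assumption.
        ll = np.log(self.prior) - 0.5 * np.sum(np.log(2 * np.pi * self.var), axis=1)
        ll = ll[None, :] - 0.5 * (((X[:, None, :] - self.mu[None]) ** 2)
                                  / self.var[None]).sum(axis=2)
        return self.classes[np.argmax(ll, axis=1)]

# Two well-separated toy classes.
X = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 4.9]])
y = np.array([0, 0, 1, 1])
pred = SimpleGaussianNB().fit(X, y).predict(np.array([[0.05, 0.0], [5.0, 5.0]]))
```

With the optimized GLCM features as inputs, each HR severity grade becomes one class in this scheme, and the predicted label is simply the class with the highest posterior log-likelihood.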
Share and Cite

MDPI and ACS Style

Bhimavarapu, U.; Battineni, G.; Chintalapudi, N. Automatic Classification of Hypertensive Retinopathy by Gray Wolf Optimization Algorithm and Naïve Bayes Classification. Axioms 2023, 12, 625. https://doi.org/10.3390/axioms12070625
