Article

Retinal Blood Vessel Segmentation Using Hybrid Features and Multi-Layer Perceptron Neural Networks

Faculty of Computers and Information, Suez Canal University, Ismailia 41522, Egypt
* Authors to whom correspondence should be addressed.
Symmetry 2020, 12(6), 894; https://doi.org/10.3390/sym12060894
Submission received: 5 April 2020 / Revised: 17 May 2020 / Accepted: 22 May 2020 / Published: 1 June 2020

Abstract

Segmentation of retinal blood vessels is the first step in several computer-aided diagnosis (CAD) systems, not only for the diagnosis of ocular diseases such as diabetic retinopathy (DR) but also for non-ocular diseases such as hypertension, stroke and cardiovascular disease. In this paper, a supervised learning-based method, using a multi-layer perceptron neural network and a carefully selected vector of features, is proposed. In particular, for each pixel of a retinal fundus image, we construct a 24-D feature vector encoding information on the local intensity, morphological transformations, principal moments of phase congruency, Hessian components, and difference of Gaussian values. A post-processing technique based on mathematical morphological operators is used to optimise the segmentation. Moreover, the selected feature vector succeeds in capturing the symmetric characteristics that provide the final blood vessel probability as a binary map image. The proposed method is tested on three well-known datasets: the Digital Retinal Images for Vessel Extraction (DRIVE), Structure Analysis of the Retina (STARE), and CHASE_DB1 datasets. The experimental results, both visual and quantitative, testify to the robustness of the proposed method. The proposed method achieved accuracy, sensitivity, and specificity of 0.9607, 0.7542, and 0.9843 on DRIVE; 0.9632, 0.7806, and 0.9825 on STARE; and 0.9577, 0.7585, and 0.9846 on CHASE_DB1. Furthermore, the results show that the method compares favourably with seven similar state-of-the-art methods.

1. Introduction

In this paper, an automatic DR diagnosis system is proposed. According to the 'fact sheet' of the World Health Organisation (WHO), over 2.2 billion people worldwide have a vision impairment or blindness. One billion of these cases could be prevented, or at least controlled, if they were detected at an early stage. Diabetic retinopathy (DR) is the fifth leading cause of vision impairment, estimated at 4.5 million of the one billion cases [1,2,3,4]. Currently, the need for eye disease screening and check-ups has increased rapidly. Despite that, there is a notable shortfall of ophthalmologists, particularly in developing countries. Hence, automating DR detection with supervised procedures is beneficial and can significantly help address this issue [5,6].
In the last couple of decades, the clinical importance of segmenting retinal blood vessels from fundus images has attracted researchers, as it supports the diagnosis of many diseases. Consequently, the segmentation of retinal blood vessels is an attractive field of research, helping in retinal image analysis and computer-assisted diagnosis [7,8].
Retinal vessel investigation is noninvasive, as the vessels can be observed directly without any surgical intervention [9]. Additionally, it is an indispensable element of any automatic diagnosis system for ophthalmic disorders. Variation in the structure of retinal blood vessels can serve as an indicator of serious diseases such as stroke, glaucoma, hypertension, arteriosclerosis, cardiovascular disease, age-related macular degeneration (AMD), refractive error and DR [7,10]. The retinal medical community commonly uses 2D colour fundus images and 3D Optical Coherence Tomography (OCT) images as clinical indicators for DR diagnosis because of their low cost and computational simplicity [11,12]. Fundus image modality is used more often than OCT in both screening and clinical diagnosis systems. Retinal lesions such as micro-aneurysms (MA), neovascularisation, haemorrhages, hard exudates, and cotton-wool spots can be more prominent in fundus retinal images. In retinal fundus image diagnosis, there are two objects to be segmented: the anatomical structures (optic disc, blood vessels, macula, and fovea) and the lesions caused by diseases. Identifying the anatomical structures (main regions) of the retinal image can assist in the analysis of conditions that affect these regions. For example, glaucoma affects the optic disc by making it more dilated, hypertension affects the vessels through tortuosity, and proliferative DR affects the number of new vessels [13]. The latter is the focal point of the present work. The anatomical attributes of retinal blood vessels, such as width, length, tortuosity, local branching patterns, and angles, play an important role in diagnostic results. They are widely used not only to predict ophthalmological disorders (i.e., DR, glaucoma, and age-related macular degeneration), but also non-ophthalmological disorders (i.e., hypertension, arteriosclerosis, obesity, and cardiovascular disease) [14,15]. Therefore, the analysis of the vasculature structure in retinal fundus images allows both monitoring and evaluation of the progress of these disorders.
Manual segmentation of the vascular tree in retinal fundus images is a tedious task for many reasons [16]. First, the contrast between blood vessel pixels and background in fundus images can be very small. Second, the presence of noise and differences in illumination conditions can cause problems. Third, blood vessel variations, such as width, shape, branching angles, and brightness, may not be readily observable. Finally, the presence of lesions and haemorrhages in the background of retinal images can introduce confusion. Hence, automatic segmentation is a necessity.
A substantial amount of work is reported in the literature on detecting blood vessels in colour retinal fundus images [9,10,17]. Since retinal image analysis is complicated, it usually fails when a stand-alone image processing technique is used [18]. According to [19], the classical segmentation techniques for fundus images encompass six domains: (i) pattern recognition, (ii) mathematical morphology, (iii) matched filter, (iv) vessel tracking, (v) model-based, and (vi) parallel-hardware. Pattern recognition is considered the most accurate and robust [18]. Parallel-hardware methods can accelerate execution, but accuracy may be relatively lower [20]. Hybrid methods, which combine one or more different domains to improve system performance, are in common use. Mathematical morphology, matched filter, vessel tracking and model-based techniques can contribute to each other or be combined with pattern recognition techniques and/or parallel-hardware methods to build an efficient, automated, comprehensive blood vessel segmentation system.
Machine learning, supervised and unsupervised, can be used in retinal blood vessel segmentation. In supervised learning, where each input has a corresponding output label, one can use a feature vector and a gold standard (GS) dataset. The latter includes manual markings for blood vessels, used to train a classifier to divide retinal colour image pixels into vessel and non-vessel pixels. On the other hand, where labels do not exist, unsupervised methods explore patterns in the input data. Unsupervised methods require no prior knowledge of the pattern before segmenting; moreover, they run faster than supervised methods. Although supervised approaches are more accurate than unsupervised ones, they require careful feature engineering and many extracted features to obtain good results in retinal blood vessel segmentation [21,22,23,24].
In the last five years, a new era has dawned, based on deep learning for automatic feature extraction, such as deep convolutional neural networks (CNNs). CNNs show the highest robustness and effectiveness in segmenting blood vessels. Unlike conventional supervised approaches, deep neural network (DNN) methods learn features automatically using predefined filters applied to the input image. In addition, transfer learning, an advanced deep learning approach, has recently been used. Rather than building and training a network to learn features from scratch, transfer learning uses a model pre-trained on vast numbers of different images; such a model has already learned a tremendous number of features and only needs fine-tuning for the relevant application. Since the proposed work belongs to the former category, previous work using supervised learning is reviewed next.
A supervised ensemble method based on a feature-based AdaBoost classifier (FABC) using a 41-D feature vector was constructed for segmentation, but a relatively high number of features is necessary to obtain accurate results [25]. In [26], a 7-D feature vector comprising grey-level and moment-invariant-based features is classified by an artificial neural network (ANN); the approach is unable to achieve the desired results without sophisticated preprocessing steps. An ensemble system of bagged and boosted decision trees uses a 9-D feature vector constructed from the gradient vector field, morphological transformations, line strength measures, and Gabor filter responses to handle healthy and pathological colour retinal images; the outputs of weak learners are combined using bootstrap aggregation to increase segmentation accuracy [27]. Retinal blood vessel morphology has also been exploited to identify the successive stages of DR severity: a technique based on an ANN with one input layer, five hidden layers, and one output layer is used to automate DR screening, with inputs derived from Gabor filter and moment-invariant-based features after the image is processed with mathematical morphology operators, mean, and Gaussian filters. This technique is used to diagnose diseases such as hypertension and diabetes mellitus, for which a change in the retinal vasculature is a primary symptom, but it is applied to only one dataset and fails on another [17]. Retinal vessels have been extracted using lattice neural networks with dendritic processing, which gives relatively better vessel performance; nevertheless, the final results are low on the DRIVE and STARE datasets [28]. Extensive preprocessing steps, such as zero-phase whitening, global contrast normalisation and gamma correction of fundus image patches, have been used as inputs to a CNN; although the predictions on DRIVE, STARE, and CHASE_DB1 showed reduced vessel misclassification and mitigated the central vessel reflex problem, the results depend mainly on the preprocessing steps [29]. A five-stage DNN with an autoencoder transforms selected RGB patches of the retinal image into a vessel map image; even though the prediction is made at every pixel in a patch, the final results rely on extensive cross-modality training across datasets [24]. A supervised hierarchical retinal blood vessel segmentation method has been presented to diagnose several ailments, such as cardiovascular diseases, DR, and hypertension; it uses two main layers, a convolutional neural network as a deep hierarchical feature extractor and a random forest as a trainable classifier. This method is tested on only two datasets, and although it outperforms the aforementioned methods, its time cost is relatively high [30]. A supervised method based on a simple extreme learning machine uses a 39-D feature vector constructed from local features (29 features), morphological transformation operators (6 features), phase congruency (1 feature), Hessian components (2 features) and divergence of a vector field (1 feature); although the method is tested on only one public dataset and an unknown private dataset, the obtained results are relatively low compared with those reported on the other publicly available datasets [12].
Information obtained from a stationary wavelet transform combined with a multi-scale convolutional neural network can capture tiny vessel pixels; this method uses a rotation operator as the basis of a joint strategy for data augmentation and prediction [31]. A hybrid feature set and hierarchical classification have been used to segment retinal blood vessels from colour retinal images, with six groups of discriminating features composing the feature vector: local intensity variation features, local binary pattern features, histogram of gradient features, vector field divergence features, morphological transformation features, and high-order local auto-correlation features [32]. Moreover, a random forest-based technique was used for feature selection, and two hierarchical trainers, a support vector machine (SVM) and a random forest, served as weak classifiers, while the final results come from a Bayesian network. The final output therefore depends completely on the random forest feature selection in addition to the hierarchical classification, and performance was much lower when each classifier was tested separately. A hybrid segmentation method with a self-generated mask schema was used to extract the retinal blood vessels from the fundus retinal image; however, its performance was very low on the DRIVE dataset [4]. A transfer learning model, VGG16, used as a pre-trained model with multilevel/multi-scale deep supervision layers, has been incorporated to segment the retinal blood vessels. The VGG16 model was improved by convolving a vessel-specific Gaussian kernel with two pre-initialised scales; the final output is an activation map with well-learned vessel-specific features at multiple scales, obtained progressively from the symmetric feature maps. Despite using VGG16 pre-trained on ImageNet (15 million high-resolution labelled images), the method requires better post-processing steps [6]. Despite the increasing use of deep learning, it has some drawbacks: first, the need for copious numbers of training and testing images, which are unavailable in our case; second, the high processing power required of a graphics processing unit (GPU); and finally, a complicated, deep structure that demands considerable experience, training and effort.
To fulfil the retinal blood vessel segmentation task, this paper uses hand-crafted features and a multi-layer perceptron neural network (MLP), with the following contributions.
  • 24 carefully hand-crafted features are outlined.
  • A special multi-layer perceptron neural network structure for full-scale fundus retinal image evaluation.
  • A cross-entropy loss function.
  • A transformed green channel image.
  • Morphological transformations are applied twice. First, white top-hat and black bottom-hat are used as filters to extract features and to suppress noise during the extraction. Second, a closing operator is used to connect isolated vessels and to fill gap areas in the segmented images during the post-processing step.
  • A model for automatic feature extraction and classification of blood vessels in retinal fundus images.
  • Maximum and minimum moments of phase congruency are used for the first time.
  • Retinal vessel segmentation with improved generalisation on multiple datasets.
  • In-depth visual analysis plus comparison with the state-of-the-art methods.
This paper is organised as follows. Section 2 outlines the datasets and the methodology, where feature extracting steps are discussed in detail. In Section 3, the results of the proposed method are presented. The discussion of the results is in Section 4. The conclusions of the paper are described in Section 5.

2. Materials and Methods

Three datasets are commonly used in vessel segmentation research: the Digital Retinal Images for Vessel Extraction (DRIVE) dataset [33], the Structure Analysis of the Retina (STARE) dataset [34] and the CHASE_DB1 dataset [35]. Because we perform some early processing on the STARE and CHASE_DB1 datasets to create the mask images for each, the differences between the three datasets are explained as follows.
1. DRIVE dataset
The DRIVE dataset consists of 40 images (7 of which show pathology), divided into 20 images for training and 20 for testing (3 of which show pathology). The images are 565 × 584 pixels, captured by a Canon CR5 non-mydriatic 3-CCD (charge-coupled-device) camera at a 45° field of view (FOV). Each image has a FOV approximately 540 pixels in diameter. The DRIVE dataset provides a mask for every image to facilitate the identification of the FOV area, as shown in Figure 1.
Every image is accompanied by a manual segmentation label, created manually by three experts (observers) and validated by an experienced ophthalmologist (annotator). The training set is segmented once and the testing set twice, resulting in 577,649 pixels (12.7%) marked as vessel in the first set (first observer), against 556,532 pixels (12.3%) in the second set (second observer). In this paper, the first set (first observer) is used as the ground truth (label) for the performance evaluation, while the second set serves as a human observer reference for the performance comparison. The DRIVE dataset is depicted in Figure 2 and Figure 3.
2. STARE dataset
The STARE (Structure Analysis of the Retina) dataset comprises 20 images (10 of which present pathology) of size 700 × 605 pixels, captured by a TopCon TRV-50 fundus camera at a 35° FOV. The dataset is manually segmented twice by two observers; the first segments 10.4% of pixels as vessels and the second 14.9%. Since the blood vessels of the second observer are much thinner than those of the first, the first is considered the ground truth. Figure 4 illustrates a sample of the STARE dataset. Unlike DRIVE, STARE does not offer masks to determine the FOV, so the method described in [11,34] is leveraged to create them.
3. The CHASE_DB1 dataset
The CHASE_DB1 dataset includes 28 images of size 990 × 960 pixels, captured from both the left and right eyes of 14 school children by a hand-held Nidek NM-200-D fundus camera at a 30° FOV. The segmentation results of the first observer are used as the ground truth. This dataset is characterised by nonuniform background illumination; thus, the contrast of blood vessels is inferior, and central vessel reflexes are present. The CHASE_DB1 dataset is pictured in Figure 5. Like STARE, CHASE_DB1 does not offer mask images, so the method described in [11,34] is again leveraged to create them.
The proposed model to segment retinal blood vessels from a colour retinal image contains four stages: (i) green channel extraction, (ii) feature extraction, (iii) evaluation of segmentation, and (iv) post-processing. In the first stage, the green channel is extracted from the RGB retinal fundus image. The green channel is chosen because the blood vessel tree is more vivid in this channel than in the others. In the second stage, a 24-D feature vector is constructed for pixel representation. The features of this vector encode information on the local intensity value (1 feature), morphological white top-hat values at six different scales (6 features), morphological black bottom-hat values at six different scales (6 features), maximum moment of phase congruency (1 feature), minimum moment of phase congruency (1 feature), Hessian component values (4 features) and difference of Gaussian values at five different scales (5 features). At the end of this stage, a feature matrix based on the feature vector and the corresponding manual label is constructed for each pixel in the green channel image. Later, a training sample from this feature matrix is randomly selected and used as input to the classifier to perform the segmentation. In the third stage, a supervised learning approach based on a multi-layer perceptron neural network is adopted; the rule for blood vessel extraction is learned by the algorithm from the provided training sample. The segmented reference images are usually termed label or gold standard images. The last stage, post-processing, is used to enhance classifier performance and acts to reduce two types of artefacts. The proposed method is tested on the three publicly available datasets, DRIVE, STARE, and CHASE_DB1, which contain digital colour retinal images with 8 bits per colour channel. A complete block diagram of the proposed model is shown in Figure 6.
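As an illustration of the second stage, the following minimal sketch (not the authors' code) shows how the 24 per-pixel feature images could be stacked into a feature matrix restricted to the FOV; the function name and arguments are hypothetical.

```python
# A minimal sketch of assembling the per-pixel feature matrix, assuming each
# feature extractor returns a 2-D array the size of the green-channel image.
import numpy as np

def build_feature_matrix(feature_maps, label_image, fov_mask):
    """Stack the 24 feature images into an (n_pixels, 24) matrix restricted to the FOV."""
    stack = np.stack(feature_maps, axis=-1)        # (H, W, 24)
    inside = fov_mask.astype(bool)
    X = stack[inside]                              # one 24-D row per FOV pixel
    y = label_image[inside].astype(np.uint8)       # 1 = vessel, 0 = background (gold standard)
    return X, y
```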
The remainder of this section is dedicated to explaining the method in detail.

2.1. Feature Extraction

As mentioned above, in the second stage of our proposed method, a 24-D feature vector is constructed for pixel representation. The features composing the vector are as follows.

2.1.1. Local Intensity Feature

In a retinal fundus image, blood vessel pixels appear much darker than the background pixels. So, the intensity value for each pixel in the retinal green channel is considered as a local intensity feature. Figure 7 depicts an image after a local intensity feature is extracted from the colour image.

2.1.2. Morphological White Top-Hat Transformation

In general, mathematical morphology transformations in image processing combine (merge) two inputs, the first related to the image and the second a predefined linear structuring element. One such transformation, the white top-hat, is an operator used for light objects on a dark background. First, the opening operator removes objects smaller than the structuring element; then the difference between the original image and the output of the opening operator is computed. Specifically, the opening operator uses the predefined structuring element, oriented at a particular angle θ, to remove a vessel, or part of it, when the vessel's width is smaller than the element and the two are orthogonal [15].
Let I be the green image to be processed, and S be the linear structuring element. Then the white top-hat transformation feature of I is given by
$T_c^{\theta} = I - (I \circ S_c^{\theta}),$
where ∘ is the opening operator, c ∈ {3, 7, 11, 15, 19, 23} is the length of the structuring element, and θ is the rotation angle of S, spanning [0, π] in steps of π/12. The result of the white top-hat transformation over all directions is given by
$T_c = \sum_{\theta \in A} T_c^{\theta},$
where $A = \{\theta \mid 0 \le \theta \le \pi \ \text{and}\ \theta \bmod (\pi/12) = 0\}$ is the set of orientation angles.
The length of S is critical for extracting the vessel with the largest diameter. Therefore, a set of different lengths is used to formulate six different features. When the opening operator along a class of S is considered, summing the white top-hat transformation over all directions brightens the vessels regardless of their orientation. Applying this summation to the entire retinal image enhances all vessels, including the small (tortuous) vessels in the bright zone. Figure 8 shows the image after applying the white top-hat operator.
Following the white top-hat feature extractor computation proposed in [15], six different features are extracted. Not only do vessel-like structures become much lighter but they also acquire intensity values relative to the local background. So, the intensity values of extracted blood vessels are relative to the local intensity of neighbourhood pixels in the original image [36]. The final response of the white top-hat transformation operator is a corrected uniform image with enhanced edge information.

2.1.3. Morphological Black Bottom-Hat Transformation

The counterpart of morphological white top-hat is the morphological black bottom-hat transformation. While the former is used for light objects on a dark background, the latter is used for the opposite. Black bottom-hat transformation is defined as the residue of the closing operator and the original image. In other words, it is a closing • of the image minus the image [12]. The black bottom-hat transformation feature can be defined as follows
Let I be the green image to be processed, S the structuring element, • the closing operator and θ an orientation angle at a given scale c. Then the black bottom-hat transformation feature on image I is given by
$B_c^{\theta} = (I \bullet S_c^{\theta}) - I,$
Now,
$B_c = \sum_{\theta \in A} B_c^{\theta},$
where $B_c$ is the summation of the black bottom-hat transformation over all directions of S at different θ. Both c and θ are defined as in the white top-hat transformation.
The image is simplified by both of the above morphological filters, so the cross-curvature computation is easy and the vessel segments are linearly coherent. Figure 9 shows the image after applying the black bottom-hat operator.
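The multi-scale, multi-orientation top-hat and bottom-hat features described in the last two subsections could be computed along the following lines. This is a minimal sketch using SciPy and is not the authors' implementation; `line_kernel` is a hypothetical helper, and the rotated line footprints are only approximate because rotation is performed on a fixed-size binary grid.

```python
import numpy as np
from scipy import ndimage as ndi

def line_kernel(length, angle_rad):
    """Approximate line-shaped structuring element of a given length and angle."""
    k = np.zeros((length, length), dtype=np.uint8)
    k[length // 2, :] = 1                                   # horizontal line
    k = ndi.rotate(k, np.degrees(angle_rad), order=0, reshape=False)
    return k.astype(bool)

def tophat_features(green, lengths=(3, 7, 11, 15, 19, 23), n_angles=12):
    """Sum of top-hat/bottom-hat responses over orientations, one feature image per length."""
    img = green.astype(float)
    angles = np.arange(n_angles) * np.pi / n_angles         # 0 to pi in steps of pi/12
    white, black = [], []
    for length in lengths:
        w = np.zeros_like(img)
        b = np.zeros_like(img)
        for a in angles:
            se = line_kernel(length, a)
            w += img - ndi.grey_opening(img, footprint=se)  # white top-hat response
            b += ndi.grey_closing(img, footprint=se) - img  # black bottom-hat response
        white.append(w)
        black.append(b)
    return white, black                                     # 6 + 6 feature images
```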

2.1.4. The Principal Moments of Phase Congruency

Instead of extracting features with intensity-gradient-based methods in the spatial domain, the phase congruency method is based on Fourier components. Its importance stems from its ability to detect edges and corner points in an image effectively, invariant to changes in contrast and illumination conditions. Furthermore, phase congruency has a significant advantage over other feature detectors in that it can correctly mark edge-like features (blood vessels). It is based on the local energy model, reported by Morrone et al. [37], which suggests that features such as lines and edges are perceived at points where the Fourier components are maximally in phase. The model is defined as the ratio of local energy to the overall path length used up by the Fourier components from head to tail.
Let x be some location in a signal whose nth Fourier component has amplitude $A_n(x)$, and let |E(x)| be the magnitude at that point of the vector from head to tail (considered as the local energy). Then the phase congruency, PC, of the signal is given by
$PC(x) = \frac{|E(x)|}{\sum_{n} A_n(x)}.$
If all the Fourier components are in phase, all the complex vectors are aligned and PC equals 1; otherwise it tends towards 0. Since the model is not based on intensity gradients for detecting features, a dimensionless measure PC can be constructed at any point in the image. The most notable drawbacks of this model are its poor localisation and its sensitivity to noise. A modified model was proposed by Kovesi [38,39] to compensate for image noise. It provides proper localisation, since PC is represented through the phase difference between the weighted mean local phase angle $\bar{\phi}(x)$ and the local phase $\phi_n(x)$ of the nth Fourier component, both evaluated at point x. The modified model can be expressed as follows.
$PC(x,y) = \frac{\sum_{i \in I} \sum_{k \in K} w_i(x,y) \left\lfloor A_{i,k}(x,y)\left(\cos(\phi_{i,k}(x,y) - \bar{\phi}(x,y)) - \left|\sin(\phi_{i,k}(x,y) - \bar{\phi}(x,y))\right|\right) - T \right\rfloor}{\sum_{i \in I} \sum_{k \in K} A_{i,k}(x,y) + \eta},$
where (x, y) is the (green channel) pixel coordinate, I is the total number of orientations, K is the total number of scales, $w_i(x,y)$ is a weighting factor for the frequency spread at orientation i (because PC computed from many frequencies is significantly more reliable than PC over a few frequencies), T is the estimated noise influence (recommended value 3), and η is a small constant incorporated to avoid division by zero if the local energy vanishes; $\lfloor \cdot \rfloor$ denotes that the enclosed quantity equals itself when positive and zero otherwise. In Equation (6), the cosine of the phase difference between $\phi_{i,k}(x,y)$ and $\bar{\phi}(x,y)$ is used, with the absolute sine subtracted, because when the frequency components are aligned (in phase) with the local energy, the phase difference is $\phi_{i,k} - \bar{\phi} = 0$, the cosine is 1 and the sine is 0, resulting in a maximum PC. The weighting function w used to compute frequencies in ranges is defined as a sigmoid function
$w(x,y) = \frac{1}{1 + \exp\left(\rho\,(c - s(x,y))\right)},$
where c is the filter response cut-off value, below which PC values are ignored, ρ is the gain factor that controls the sharpness of the cut-off (set to 10), and the fractional frequency spread s(x,y) is calculated by taking the amplitude of the sum of the filter responses divided by the maximum amplitude $A_{\max}(x,y)$ at each pixel (x, y) of the image as follows.
$s(x,y) = \frac{1}{K}\left(\frac{\sum_{k \in K} A_k(x,y)}{A_{\max}(x,y) + \eta}\right).$
Herein, Equation (6) represents PC over all orientations in [0, π] and over all K scales in a 2D image. Unlike the method proposed in [12], the modified model in [39,40] is used here: PC is calculated independently for each orientation using Equation (6), and then the moments of the PC covariance are computed. Hence, the variation of the moment with orientation is taken into account. In this case, the principal axis corresponds to the axis about which the magnitude of the moment is minimum (the minimum moment), which is a good indicator of the orientation of the feature. On the other hand, the magnitude of the maximum moment, taken about the axis perpendicular to the principal axis, indicates the significance of the feature. Accordingly, the maximum moment of PC is used directly to establish whether a point in the retinal image lies on a significant edge feature (blood vessel), where its magnitude is large. If the minimum moment of PC is also large, this is a strong indication that the feature point has a strong 2D component and can be considered a corner.
The calculation of the maximum and minimum moments of PC follows [39,40] and is based on classical moment analysis. Firstly, the PC information is combined in an orientation matrix $O_m$ given by
$O_m = \begin{pmatrix} \sum_o (PC_o \cos\theta_o)^2 & \sum_o (PC_o \cos\theta_o)(PC_o \sin\theta_o) \\ \sum_o (PC_o \sin\theta_o)(PC_o \cos\theta_o) & \sum_o (PC_o \sin\theta_o)^2 \end{pmatrix},$
evaluated over all orientations and formulated from the PC covariance matrix $C_m$ given by
$C_m = \begin{pmatrix} PC_x^2 & PC_x PC_y \\ PC_y PC_x & PC_y^2 \end{pmatrix},$
where $PC_o$ is the amplitude of PC determined at orientation o, and the sum is performed over all six orientations. Figure 10 depicts the phase congruency filter response at the six different orientations. According to the above description of the PC model, when all Fourier components are in phase, all the complex vectors are aligned, indicating that the feature point has a robust 2D component and can be considered an edge-like feature.
A singular value decomposition of the PC covariance matrix is performed, and the maximum moment, M, and minimum moment, m, are obtained as follows.
$M = \frac{1}{2}\left(\gamma + \alpha + \sqrt{\beta^2 + (\alpha - \gamma)^2}\right), \qquad m = \frac{1}{2}\left(\gamma + \alpha - \sqrt{\beta^2 + (\alpha - \gamma)^2}\right),$
where
$\alpha = \sum_o (PC_o \cos\theta_o)^2, \qquad \beta = 2\sum_o (PC_o \cos\theta_o)(PC_o \sin\theta_o), \qquad \gamma = \sum_o (PC_o \sin\theta_o)^2.$
Figure 11 visualises the extracted feature images of the two moments, M and m.
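Assuming per-orientation phase congruency maps have already been computed (for example with a log-Gabor filter bank, which is not shown here), the moment calculation above reduces to a few array operations. The sketch below is illustrative only; the function name and arguments are hypothetical.

```python
import numpy as np

def pc_moments(pc_per_orientation, angles):
    """Maximum (M) and minimum (m) moments of phase congruency over orientations."""
    a = b = g = 0.0
    for pc, theta in zip(pc_per_orientation, angles):
        c, s = pc * np.cos(theta), pc * np.sin(theta)
        a = a + c * c                    # alpha
        b = b + 2.0 * c * s              # beta
        g = g + s * s                    # gamma
    root = np.sqrt(b ** 2 + (a - g) ** 2)
    M = 0.5 * (g + a + root)             # maximum moment: edge (vessel) significance
    m = 0.5 * (g + a - root)             # minimum moment: corner strength
    return M, m
```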

2.1.5. Multi-Scale Second-Order Local Image Structure (Hessian Matrix)

The Hessian matrix H ( x , y ) for a point I ( x i , y j ) in the 2D retinal image is used for vessel detection and is given by
$H(x,y) = \begin{pmatrix} \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \, \partial y} \\ \frac{\partial^2 f}{\partial y \, \partial x} & \frac{\partial^2 f}{\partial y^2} \end{pmatrix} = \begin{pmatrix} f_{xx} & f_{xy} \\ f_{yx} & f_{yy} \end{pmatrix},$
where the second partial derivative components in the neighbourhood of the point (x, y) are denoted $f_{xx}$, $f_{xy}$, $f_{yx}$, and $f_{yy}$. The Hessian matrix describes the second-order structure of the intensity variation around a pixel of the retinal image; the information obtained from H describes the local curvature of the data in a small neighbourhood surrounding each pixel. The approach employed here is inspired by the theory of linear scale-space [36]. Accordingly, a multi-scale algorithm is designed to identify vascular structure by investigating the signs and magnitudes of the eigenvalues of H, which reflect specific shapes of structures in the retinal image as well as their brightness.
Let I(x, y) be the intensity of the 2D green retinal image at a particular point (x, y), and let $G_{2D}(x,y,\sigma)$ be a 2D Gaussian kernel. Then the components of the 2 × 2 Hessian matrix $\tilde{H}(x,y,\sigma)$, formed from the second-order derivatives of $G_{2D}(x,y,\sigma)$ convolved with I(x, y) at scale σ, are defined as:
$f_{xx} = I(x,y) * \frac{\partial^2}{\partial x^2} G_{2D}(x,y,\sigma),$
$f_{xy} = f_{yx} = I(x,y) * \frac{\partial^2}{\partial x \, \partial y} G_{2D}(x,y,\sigma),$
$f_{yy} = I(x,y) * \frac{\partial^2}{\partial y^2} G_{2D}(x,y,\sigma),$
where the 2 D Gaussian kernel is given by
$G_{2D}(x,y,\sigma) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right),$
with σ the standard deviation of the Gaussian distribution, referring to the scale of the filter, and ∗ the convolution operator. Different values of σ were tested experimentally, and the optimal values were found to be 1/2, 1, 2, and 2√2. The response of the second-order derivative of a Gaussian at scale σ represents a probe kernel that measures the contrast between regions inside and outside the range (−σ, σ) in the derivative direction of the retinal image [41,42]. In this respect, blood vessel pixels give the highest response at a suitable scale σ, while background pixels give a low response. It is noteworthy that the second-order derivative of the Hessian matrix is invariant to linear grey-level variation of the retinal image pixels because of the smooth nature of the Gaussian kernel, so only local contrast is assessed rather than the actual greyscale values.
As soon as the elements of the Hessian matrix are obtained, the corresponding eigenvalues and eigenvectors are determined. The eigenvalues $\lambda_i$ play an essential role in discriminating the local orientation pattern: they represent not only the magnitude of the maximal local contrast change but also the magnitude of the local intensity change in the perpendicular direction. Consequently, the eigenvalue decomposition of the Hessian can be exploited to distinguish between edge-like features in a fundus retinal image and the background. Concerning the blood vessel skeleton, darker features are considered; we are interested only in the case where one eigenvalue has a positive and high magnitude. Table 1 illustrates all possible variants of structure shape and brightness that can be identified through the analysis of Hessian eigenvalues in a 2D medical image. We are only interested in the fundus retinal image modality (in our case, the blob-like structures are dark).
Geometric information for the blob-like structure is obtained through a dissimilarity measure based on the second-order derivatives, called the blobness ratio, $R_b$, given by
$R_b = \frac{\lambda_1}{\lambda_2}.$
This ratio is maximal for a blob-like structure and invariant to the grey level. Since the intensity value of the blood vessel pixels is lower (darker) than the background pixels, the magnitude of the second-derivative (eigenvalues) will be high at the blob-like structure and small at the background. So the ratio will be higher at the blood vessel pixels and lower at the background pixels.
The magnitude of the second-order derivatives at the background is small, distinguishing background structure from the blood vessels. The norm of Hessian is used to quantify this property in terms of the structureness ratio, S, and is given by
$S = \|H\|_F = \sqrt{f_{xx}^2 + 2 f_{xy}^2 + f_{yy}^2},$
where $\|\cdot\|_F$ is the Frobenius norm of the components of the Hessian matrix. Intuitively, the structureness ratio S is small for background pixels, mostly because of their lack of structure, and higher for blood vessel pixels.
Finally, a vesselness function combines the blobness ratio $R_b$ and the structureness ratio S to map these features into a probability-like estimate $V_o(S)$ of being a vessel, given by
$V_o(S) = \begin{cases} 0, & \lambda_2 > 0 \\ \exp\left(-\frac{R_b^2}{2\beta^2}\right)\left(1 - \exp\left(-\frac{S^2}{2c^2}\right)\right), & \text{elsewhere}, \end{cases}$
where β equals 0.5 and c is half the maximum Frobenius norm S. Both V and S are computed at different scales σ. Herein, $R_b$ accounts for the eccentricity of the second-order ellipse. The maximum values of V and S over the scales are then taken as Hessian features for each pixel and are given by:
$V_{Max} = \max\left(V_{0.5},\, V_1,\, V_2,\, V_{2\sqrt{2}}\right),$
$S_{Max} = \max\left(S_{0.5},\, S_1,\, S_2,\, S_{2\sqrt{2}}\right).$
$V_{Max}$, $S_{Max}$, $\lambda_1$ and $\lambda_2$ are all used as features for every pixel of the green image.
The visualisation of the four features derived from the Hessian matrix components is delineated in Figure 12.
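A minimal single-scale sketch of the Hessian features is given below, using Gaussian derivative filters from SciPy; it is not the authors' code. The parameter β = 0.5 and c = half the maximum of S follow the text, the zero case is applied exactly as written in the vesselness equation above, and everything else (function name, argument handling) is an assumption.

```python
import numpy as np
from scipy import ndimage as ndi

def hessian_features(green, sigma, beta=0.5):
    img = green.astype(float)
    fxx = ndi.gaussian_filter(img, sigma, order=(0, 2))     # d2/dx2 (x = columns)
    fyy = ndi.gaussian_filter(img, sigma, order=(2, 0))     # d2/dy2 (y = rows)
    fxy = ndi.gaussian_filter(img, sigma, order=(1, 1))     # mixed derivative

    # Eigenvalues of [[fxx, fxy], [fxy, fyy]], ordered so that |l1| <= |l2|.
    root = np.sqrt((fxx - fyy) ** 2 + 4.0 * fxy ** 2)
    l1 = 0.5 * (fxx + fyy - root)
    l2 = 0.5 * (fxx + fyy + root)
    swap = np.abs(l1) > np.abs(l2)
    l1[swap], l2[swap] = l2[swap], l1[swap]

    Rb = np.abs(l1) / (np.abs(l2) + 1e-12)                  # blobness ratio
    S = np.sqrt(fxx ** 2 + 2.0 * fxy ** 2 + fyy ** 2)       # structureness (Frobenius norm)
    c = 0.5 * S.max()
    V = np.exp(-Rb ** 2 / (2.0 * beta ** 2)) * (1.0 - np.exp(-S ** 2 / (2.0 * c ** 2)))
    V[l2 > 0] = 0.0                                         # zero case of the vesselness equation
    return V, S, l1, l2                                     # per-scale quantities
```

$V_{Max}$ and $S_{Max}$ would then be taken as the element-wise maxima of V and S over the four scales, as in the equations above.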

2.1.6. Difference of Gaussian

Difference of Gaussian (DoG) kernels are exploited as edge detectors [16]. A DoG kernel closely approximates the second-order derivative of the Gaussian function, described in detail in Section 2.1.5. To characterise a pixel's features with respect to its neighbours, a DoG filter is applied at five different smoothing scales, σ ∈ {√2/2, 1, √2, 2, 2√2}, with the base scale set to σ = 0.5. Using the DoG as a band-pass filter not only removes noise in the image but also vividly depicts the homogeneous areas. The visualisation of the five DoG features is depicted in Figure 13.
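One possible reading of the DoG feature computation, in which each of the five smoothing scales is differenced against the base scale σ = 0.5, is sketched below. This is illustrative only; the exact scale set and the differencing against the base scale are assumptions reconstructed from the text.

```python
import numpy as np
from scipy import ndimage as ndi

def dog_features(green, base_sigma=0.5,
                 scales=(np.sqrt(2) / 2, 1.0, np.sqrt(2), 2.0, 2 * np.sqrt(2))):
    img = green.astype(float)
    base = ndi.gaussian_filter(img, base_sigma)
    return [ndi.gaussian_filter(img, s) - base for s in scales]   # 5 feature images
```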
In this paper, a 24-D feature vector is constructed for each pixel in a retinal image and is used to differentiate vessel pixels, lesions, and background. Five groups of features are used. The RGB image is transformed into a green channel image because the contrast between blood vessel pixels and the background is clearer and more prominent than in the other channels, so the intensity value of each pixel in the green channel image is used as a feature. The moments of phase congruency emphasise the edges of the retinal blood vessels and are used here for the first time as features on fundus retinal images. The Hessian matrix components emphasise the zero crossings. The white top-hat transformation, with multi-scale, multi-orientation structuring elements, is efficient in producing a corrected, uniform image. The black bottom-hat transformation, with multi-scale, multi-orientation structuring elements, is used as a filter to suppress noise. Finally, the difference of Gaussian is used, and the filter response is a homogeneous image.

2.2. Segmentation

The procedure comprises three steps: first, scaling the features by normalisation; second, the design and training of the neural network, in which the configuration is decided and the network learns; third, testing, in which each pixel is identified as vessel or non-vessel to obtain the final binary image.

2.2.1. Normalisation

At the end of the feature extraction process, each pixel in the green channel image is characterised by a vector V(x, y) in a 24-D feature space, given by
$V(x,y) = \left(v_1(x,y),\, v_2(x,y),\, v_3(x,y),\, \ldots,\, v_{24}(x,y)\right),$
where the features $v_j$ have distinct ranges and values. Normalisation, by narrowing the range differences between the features, is necessary and leads to an easier classification procedure of assigning each candidate pixel to the suitable class: class $K_1$ (blood vessel pixel) or class $K_2$ (non-vessel pixel). A normalised version $\bar{v}_j$ of feature $v_j$ is obtained by
$\bar{v}_j = \frac{v_j - \mu_j}{\sigma_j},$
where $\mu_j$ is the mean value of the jth feature and $\sigma_j$ its standard deviation. Finally, each feature is independently normalised to zero mean and unit variance. The resulting features are used in the classification procedure.
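The per-feature z-score normalisation of Equation (23) can be written as follows; scikit-learn's StandardScaler performs the same operation. This snippet is illustrative only.

```python
import numpy as np

def normalise_features(X):
    """Zero-mean, unit-variance scaling of each of the 24 feature columns."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0) + 1e-12        # guard against constant features
    return (X - mu) / sigma
```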
According to [26], using linear classifiers to separate the two classes in vessel segmentation gives poor results, because the distribution of the training dataset in the feature space is nonlinear. Consequently, segmentation can be performed more accurately and efficiently using a nonlinear classifier, such as a support vector machine (SVM) [32], a neural network [24], a multi-layer feed-forward neural network [17], a decision tree [43], a random forest [32], an extreme learning machine [12] or a convolutional neural network (CNN) [44].

2.2.2. Design of the Neural Network

In the present work, we use a modified MLP neural network with one input layer (24 input units), three hidden layers, and one output layer as a binary classifier. To find the optimal topology of the hidden layers, several topologies and different numbers of neurons were tested; the best results were obtained when each of the three hidden layers contains 27 units. A rectified linear unit (ReLU) activation function is selected for the hidden layers [45]. Further, cross-entropy (CE) was found to be a better loss function than mean squared error (MSE) because of the unbalanced distribution of vessel and non-vessel pixels [45]; CE measures the closeness of two probability distributions.
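For reference, the described topology maps naturally onto scikit-learn's MLPClassifier, which minimises log-loss (cross-entropy) for classification. The solver, iteration limit and random seed below are assumptions, not values reported by the authors.

```python
from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(hidden_layer_sizes=(27, 27, 27),  # three hidden layers of 27 units
                    activation='relu',                # ReLU in the hidden layers
                    solver='adam',
                    max_iter=500,
                    random_state=42)
```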

2.2.3. Training

To train the classifier, a set $S_{Train}$ of M candidate pixels is used, for which both the feature vector V (Equation (22)) and the classification result, $K_1$ (vessel pixel) or $K_2$ (non-vessel pixel), are known. The sample that forms $S_{Train}$ is collected randomly from the manually labelled vessel and non-vessel pixels of the training images of each dataset, as proposed in [26,43]. Once the classifier has completed the training step, it is ready to classify images whose results are not known a priori.

2.2.4. Application of Neural Network (Testing)

The trained neural network is used to test "unseen" dataset images. Specifically, a binary image is generated in which blood vessel pixels are distinguished from background pixels. The feature vectors describing the pixels are individually input to the MLP; in particular, the input layer units receive the vector V of normalised features, as in Equations (22) and (23). The ReLU function has a strong out-of-sample prediction ability due to its simplicity, and it also alleviates the vanishing gradient problem in the learning phase of the ANN [46]. Finally, we obtain a binary image containing only the extracted blood vessel and background pixels.
In this paper, we use the multi-layer perceptron (MLP) as a binary classifier, since there are only two classes, blood vessel and non-vessel. The MLP is an extension of the artificial neural network (ANN) that provides a more nonlinear mapping from a set of input variables to a set of output variables [47]. Also, it makes no assumption regarding probabilistic information about the pattern under consideration.
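Putting the previous sketches together, training and testing could look like the snippet below, where clf is the classifier configured above and X_train, y_train, X_test and fov_mask are assumed to come from the earlier feature extraction and sampling sketches.

```python
import numpy as np

clf.fit(X_train, y_train)                          # learn vessel vs. non-vessel

pred = clf.predict(X_test)                         # 1 = vessel, 0 = background
vessel_map = np.zeros(fov_mask.shape, dtype=np.uint8)
vessel_map[fov_mask.astype(bool)] = pred           # scatter predictions back onto the image grid
```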

2.3. Post-Processing

In the post-processing stage, two steps are used to reduce two types of classification artefacts: first, discontinuous vessel pixels, which are surrounded by vessel neighbours but are misclassified as background pixels (FN); second, non-vessel pixels, which appear as small isolated regions and are misclassified as vessels (FP). To overcome both artefacts, mathematical morphological operators are used. For the first artefact, a morphological closing operator is applied to fill any holes smaller than a predefined structure (a disk of radius 9 in our experiments) in the vessel map. For the second artefact, isolated pixels are grouped into 8-connected regions, and any region smaller than 100 pixels is removed, as proposed in [9]. Once both artefacts are corrected, the affected pixels are reclassified correctly, as vessel pixels for the first artefact and background pixels for the second.
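A compact sketch of this post-processing using scikit-image is shown below. The disk radius of 9 and the 100-pixel threshold follow the text; the function name is hypothetical.

```python
import numpy as np
from skimage.morphology import binary_closing, disk, remove_small_objects

def postprocess(vessel_map, disk_radius=9, min_size=100):
    closed = binary_closing(vessel_map.astype(bool), disk(disk_radius))   # fill small gaps (FN)
    cleaned = remove_small_objects(closed, min_size=min_size,
                                   connectivity=2)                        # drop isolated 8-connected regions (FP)
    return cleaned.astype(np.uint8)
```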
The experimental results in Section 3 show that the combination of the five groups of features (the 24-D feature vector), fed to the modified MLP, yields strong performance. Our experiments were run on a 2.4 GHz Intel Core i7-7700HQ CPU with 16 GB of memory using Scikit-learn, an open-source machine learning library [48].

3. Results

The segmentation performance of the proposed method on the fundus image is assessed by comparing the segmentation testing results with the GS label image. To this end, we coded our method and carried out numerous experiments on the three publicly available datasets DRIVE, STARE, and CHASE_DB1 described in Section 2.

3.1. Performance Measures

The classification result of every pixel can be one of the four types shown in the confusion matrix of Table 2, where the entries are as follows. True positive ( T P ) indicates a vessel pixel classified correctly as a vessel pixel. False-positive ( F P ) indicates a non-vessel pixel classified wrongly as a vessel pixel. True negative ( T N ) indicates a non-vessel pixel classified correctly as a non-vessel pixel. False negative ( F N ) indicates a vessel pixel classified wrongly as a non-vessel pixel.
We started by training our classifier using a set of images selected randomly from the three datasets: DRIVE, STARE and CHASE_DB1. From each dataset, a sample of around 30% of $S_{Train}$ is used as the training sample set. Firstly, the RGB image is transformed into a green channel image, since the green channel shows the best contrast between vessel and background [11,18,19,20,25]. Then, for every pixel in the image, a collection of hybrid features, including the local intensity value, morphological top-hat transformation, morphological bottom-hat transformation, minimum/maximum moments of phase congruency, Hessian matrix components and difference of Gaussian features, is extracted from the green band image to generate a 24-D feature vector, i.e., 24 different features for each pixel.
The design of the feature vector characterises every pixel in terms of some quantifiable measurements that can be easily used to differentiate effectively between a vessel and non-vessel pixel in the classification step, as we described in detail in Section 2. The extracted features are collected in a feature matrix, which represents all features from training images for each dataset. The ( S T r a i n ) are randomly selected from the feature matrix to teach the classifier how to differentiate between vessel pixels and background. After the MLP is trained, it is used to identify and differentiate between the two different patterns in the test images, blood vessel pixel and the background pixel.
To quantify the classification results, we use the following standard metrics: sensitivity ( S e ), specificity ( S p ), positive prediction value ( P p v ), negative prediction value ( N p v ), and accuracy ( A c c ), which are given by the equations below [49]. It should be noted, however, that the values of the variables in those equations are cumulative, i.e., denote the total number of pixels classified. For example, T P in those equations denotes the total number of vessel pixels correctly classified as vessel pixels.
$S_e = \frac{TP}{TP + FN},$
$S_p = \frac{TN}{TN + FP},$
$P_{pv} = \frac{TP}{TP + FP},$
$N_{pv} = \frac{TN}{TN + FN},$
$A_{cc} = \frac{TP + TN}{TP + FP + TN + FN}.$
Additionally, the F1 metric is suggested for evaluating classification performance when the dataset is unbalanced, as is the case here, and is given by
$F1 = \frac{2\,TP}{FP + FN + 2\,TP}.$
All the above metrics lie in [0, 1], with 0 being the worst and 1 the best (though the best is generally not practically attainable). In our tests, all these metrics are evaluated for each image, and then the average over all tested images from each dataset is calculated. In our experiments, we tested images from all three datasets: DRIVE, STARE, and CHASE_DB1. For each image tested, the feature vectors were calculated for all pixels and applied consecutively to the classifier. Based on the classifier's decision for each pixel, and on our knowledge of the ground truth, we calculated the total numbers of TP, FP, TN, and FN cases for the image as a whole. These numbers are then plugged into the above equations to obtain the corresponding metrics.
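The cumulative counting described above can be implemented directly from the predicted and ground-truth masks, as in the following sketch (illustrative only; names are hypothetical).

```python
import numpy as np

def segmentation_metrics(pred, truth, fov_mask):
    """Se, Sp, Ppv, Npv, Acc and F1 from cumulative pixel counts inside the FOV."""
    p = pred[fov_mask.astype(bool)].astype(bool)
    t = truth[fov_mask.astype(bool)].astype(bool)
    TP = np.sum(p & t);  FP = np.sum(p & ~t)
    TN = np.sum(~p & ~t); FN = np.sum(~p & t)
    return {
        'Se':  TP / (TP + FN),
        'Sp':  TN / (TN + FP),
        'Ppv': TP / (TP + FP),
        'Npv': TN / (TN + FN),
        'Acc': (TP + TN) / (TP + FP + TN + FN),
        'F1':  2 * TP / (FP + FN + 2 * TP),
    }
```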

3.2. The Proposed Method Evaluation

The final results are obtained from the probability map image (binary image) and the five performance metrics on DRIVE, STARE, and CHASE_DB1, respectively. The averages of the selected measures are tabulated in Table 3, Table 4 and Table 5. We start with the DRIVE dataset, from which 20 images were randomly selected for testing. The results for the performance metrics are given in Table 3.
Table 4 and Table 5 show, respectively, the same metrics for the STARE and CHASE_DB1 datasets, where 20 images from the former and 14 from the latter were tested. The averages of the metrics for each dataset are placed at the bottom of the corresponding table. The best and worst segmentation results identified from the obtained metrics on the 20 images of the STARE dataset are presented in Table 4.
The best and worst segmentation results on the CHASE_DB1 dataset, identified from the obtained metrics, are presented in Table 5.
For the DRIVE dataset, the best-case segmentation values of Se, Sp, Ppv, Npv, Acc, and F1 are 0.8282, 0.9951, 0.9391, 0.9782, 0.9711, and 0.8200, respectively, while the worst-case values are 0.5888, 0.9699, 0.7696, 0.9429, 0.9535, and 0.5829. For the STARE dataset, the best-case values of Se, Sp, Ppv, Npv, Acc, and F1 are 0.9678, 0.9975, 0.9416, 0.9888, 0.9757, and 0.9660, respectively, while the worst-case values are 0.6218, 0.9636, 0.7350, 0.9461, 0.9421, and 0.6196. For the CHASE_DB1 dataset, the best-case values are 0.8672, 0.9963, 0.9797, 0.9718, and 0.8585, while the worst-case values are 0.6240, 0.9585, 0.6085, 0.9559, 0.9422, and 0.6177. Table 6 summarises the best and worst results on all datasets under study using the proposed method.

3.3. Comparison with State-of-the-Art Methods

The proposed method, with hand-crafted features and a modified MLP trained with CE, is compared with seven other state-of-the-art methods of the same category. The comparison is carried out on the three datasets mentioned above. Table 7, Table 8 and Table 9 show the comparison results in terms of the sensitivity ($S_e$), specificity ($S_p$), accuracy ($A_{cc}$), and F1 metrics for the three datasets. In all three tables, the results for the proposed method are shown in the last row. From the tables, it is evident that the proposed method outperforms all the other methods on all metrics, except for the F1 metric in Table 9. Moreover, visual evidence of the results, with examples of the best and worst cases, is provided in Figure 14, Figure 15, Figure 16, Figure 17, Figure 18 and Figure 19. It is worth mentioning that using different FOVs may give significantly different results; so, to ensure a fair comparison, segmentation metrics are computed on pixels inside the FOV only for all methods, and FOV masks were created for the STARE and CHASE_DB1 datasets as mentioned earlier.
Table 7 shows the quantitative results of the proposed method with comparison to the relevant state-of-the-art methods on the DRIVE dataset.
Table 7, Table 8 and Table 9 show an overview of segmentation results for the DRIVE, STARE and CHASE_DB1 datasets. The performance results of the proposed method in terms of average accuracy ( A c c ), sensitivity ( S e ), specificity ( S p ), and F 1 score outperform the seven state-of-the-art methods.
A sample of the visual results for the best and worst cases of the three datasets, in two different images each, is shown in Figure 14 and Figure 15 for DRIVE, Figure 16 and Figure 17 for STARE, and Figure 18 and Figure 19 for CHASE_DB1. However, the method performs worse on CHASE_DB1 than on DRIVE and STARE: a small decrease in performance (see Table 9) is noticed for all results, and for F1 in particular. This may be due to the high dimensions of this dataset, the nonuniform background illumination of the images, the poor contrast between vessels and background, and the presence of central vessel reflexes.
Table 8 shows the quantitative results of the proposed method in comparison with the relevant state-of-the-art methods on the STARE dataset.
A sample of the visual results of the best and worst results on the STARE dataset in two different images is shown in Figure 16 and Figure 17.
Table 9 shows the quantitative results of the proposed method in comparison with the relevant state-of-the-art methods on the CHASE_DB1 dataset.
A sample of the visual results of the best and worst results on the CHASE_DB1 dataset in two different images is shown in Figure 18 and Figure 19.
The evaluation performance of the proposed method is computed on pixels inside the FOV only. As with most of the previous work in the DRIVE dataset, the FOV mask is used, while the FOV for both STARE and CHASE_DB1 are created by the same method as that used in [11,15,26,30].
It is worth mentioning that using different FOVs may give significantly different results. To compare our results with other retinal vessel segmentation algorithms, they are compared with those of state-of-the-art methods on the DRIVE, STARE, and CHASE_DB1 datasets, respectively. The comparison shows the robustness of the proposed method in improving the accuracy, sensitivity, and specificity measures on all datasets used in this study.

4. Discussion

4.1. The Performance Model

While experimenting, we observed that the proposed method performs better than conventional supervised models and faster than models with automatic feature extraction. We proposed an automatic system to extract and differentiate between blood vessel and background pixels in the fundus image. A modified MLP neural network was employed to act as a semi-deep network, strengthened with ReLU, which is used mainly in convolutional neural networks (CNNs), and the CE loss function. CE is the most appropriate loss function because the distribution of vessel and background pixels does not follow a Gaussian prior but can be treated as independent and identically distributed (i.i.d.). The modified MLP learns from the 24 hand-crafted features how to successfully detect and differentiate between blood vessel and background pixels. The proposed method depends profoundly on the variety of the selected feature types and on the design of the network itself; in other words, we took care to balance the features so that the resulting feature vector represents each pixel adequately. Although preprocessing is usually the first stage in most state-of-the-art vessel segmentation models, we preferred to offset it by using five groups of features that homogenise and correct the image intensity and handle its noise. For example, a blood vessel pixel has a blob-like edge, which is detected effectively using the maximum and minimum moments of phase congruency; the second-derivative Hessian features identify the zero crossings in the final images; the five Gaussian-scaled features render a more homogeneous image; the white top-hat features correct the uniformity of the image; and the black bottom-hat features suppress the noise in the image. Because of the differences in image dimensions between datasets, each dataset was evaluated and tested independently. We used only one classifier, the MLP, and the final results depend on it alone, without any feature selection procedure or any ensemble or hierarchical classifier. In the introduction, we pointed out gaps in previous work, primarily the complicated structure of those systems and the high time cost of segmenting the vessels. The datasets present many challenges, such as scarcity, the small number of available images, and the variety of image illumination and dimensions. Hence, for more generalised DR diagnosis, more retinal fundus images with good illumination are necessary.

4.2. The Medical Implications of the Model

The modified MLP model acts as a light and rapid automated system that provides the ophthalmologist with primary information and knowledge about the blood vessels (e.g., size, tortuosity, crossings, lesion structure, hard and soft exudates, cotton-wool spots) and quick assistance. This system cannot replace the role of ophthalmologists, but it can help in understanding the diagnosis results, enable early diagnosis, and increase result accuracy as well. Despite the good results of the proposed model, there is still room for further improvement.

5. Conclusions

In this paper, we have proposed an integrated method to segment blood vessels in retinal images. The experimental results show that the method succeeds both in absolute terms and in comparison with seven other state-of-the-art methods on three well-known, publicly available datasets. The proposed method encompasses several elements that jointly contribute to its success. The first element is the feature choice, where 24 carefully selected features are employed. The second element is the choice of the classifier, namely an MLP neural network, and the design of this classifier. The third element is the way the classifier is trained, where randomly chosen images from all three datasets participate in the training. The fourth element is post-processing, where classification artefacts are mitigated. Classification results are provided both visually and quantitatively, where, in the latter case, five standard classification metrics are employed. Both the visual and quantitative results clearly show the good performance of the proposed method despite the wide variation of vessel sizes; indeed, the method detects both wide and tiny blood vessels with the same efficiency. We avoid preprocessing in order to preserve the blood vessel structure and avoid losing any information. Furthermore, our results are based only on the careful choice of features and the correction of outlier values by normalisation. Hence, concentrating on the handling of outlier values during feature engineering may reduce the reliance on feature selection techniques or hierarchical classification as a means of increasing system performance. Our experimental results contradict the previous belief that feature selection and/or hierarchical classification are the only ways to improve performance; on the contrary, they suggest that carefully engineered, hand-crafted features can give promising results. In the retinal disease diagnosis community, relevant datasets are scarce. In other words, obtaining real images from patients who suffer from eye ailments for research purposes is difficult, particularly in developing countries, so the only available choice is to work on the most popular publicly available datasets. The average accuracy, sensitivity, and specificity values achieved by the proposed method are 0.9607, 0.7542, and 0.9843 on DRIVE, 0.9632, 0.7806, and 0.9825 on STARE, and 0.9577, 0.7585, and 0.9846 on CHASE_DB1, which are comparable with current blood vessel segmentation techniques. Although a 24-D feature vector is constructed for every pixel in the image, the ensuing space and time complexity is offset by the simplicity of the method compared with cutting-edge methods such as deep learning. The fast implementation and simplicity of the proposed method make it a handy tool for the early prediction of DR.
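To illustrate how such a per-pixel feature stack can be assembled, the sketch below computes a subset of the features (green-channel intensity, multi-scale white top-hat and black bottom-hat, and difference-of-Gaussian responses) with scikit-image. The file name, structuring-element sizes, and DoG scales are assumptions for illustration; the paper's morphological features sum oriented line elements over all orientations, whereas disk elements are used here for brevity, and the phase-congruency and Hessian features are omitted.

```python
import numpy as np
from skimage import io, morphology
from skimage.filters import difference_of_gaussians

rgb = io.imread('fundus.png')                 # hypothetical input fundus image
green = rgb[:, :, 1].astype(float)            # green channel: vessels are most vivid here

features = [green]                            # local intensity feature

# Multi-scale morphology; disk footprints replace the paper's orientation-summed
# line elements, and the sizes are illustrative.
for size in (3, 7, 11, 15, 19, 23):
    footprint = morphology.disk(size)
    features.append(morphology.white_tophat(green, footprint))  # corrects non-uniform illumination
    features.append(morphology.black_tophat(green, footprint))  # suppresses background noise

# Difference-of-Gaussian responses at a few scales (illustrative values).
for low, high in [(1, 2), (2, 4), (4, 8)]:
    features.append(difference_of_gaussians(green, low, high))

# Stack into an (n_pixels, n_features) matrix ready for the pixel-wise classifier.
X = np.stack([f.ravel() for f in features], axis=1)
```

The remaining feature groups would be appended in the same way to form the full 24-D vector per pixel.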

Author Contributions

Conceptualization, N.T.; methodology, N.T. and M.E.; software, N.T.; validation, N.T.; formal analysis, M.E., G.A.A. and H.N.; investigation, N.T. and M.E.; resources, N.T.; data curation, N.T.; writing—original draft preparation, N.T. and H.N.; writing—review and editing, N.T. and H.N.; visualization, N.T., M.E., G.A.A. and H.N.; supervision, M.E., G.A.A. and H.N.; project administration, M.E., G.A.A. and H.N.; funding acquisition, N.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors are thankful to the authors of DRIVE, STARE and CHASE_DB1 for making their datasets publicly available online.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. World Health Organisation. Blindness and Vision Impairment. 2020. Available online: http://www.who.int/health-topics/blindness-and-vision-loss#tab=tab_1 (accessed on 7 May 2020).
2. Bourne, R.R.; Flaxman, S.R.; Braithwaite, T.; Cicinelli, M.V.; Das, A.; Jonas, J.B.; Keeffe, J.; Kempen, J.H.; Leasher, J.; Limburg, H.; et al. Magnitude, temporal trends, and projections of the global prevalence of blindness and distance and near vision impairment: A systematic review and meta-analysis. Lancet Glob. Health 2017, 5, e888–e897.
3. Imran, A.; Li, J.; Pei, Y.; Yang, J.J.; Wang, Q. Comparative Analysis of Vessel Segmentation Techniques in Retinal Images. IEEE Access 2019, 7, 114862–114887.
4. Sundaram, R.; KS, R.; Jayaraman, P. Extraction of Blood Vessels in Fundus Images of Retina through Hybrid Segmentation Approach. Mathematics 2019, 7, 169.
5. Resnikoff, S.; Felch, W.; Gauthier, T.M.; Spivey, B. The number of ophthalmologists in practice and training worldwide: A growing gap despite more than 200000 practitioners. Br. J. Ophthalmol. 2012, 96, 783–787.
6. Samuel, P.M.; Veeramalai, T. Multilevel and Multiscale Deep Neural Network for Retinal Blood Vessel Segmentation. Symmetry 2019, 11, 946.
7. Baniasadi, N.; Wang, M.; Wang, H.; Mahd, M.; Elze, T. Associations between optic nerve head–related anatomical parameters and refractive error over the full range of glaucoma severity. Transl. Vis. Sci. Technol. 2017, 6, 9.
8. Wang, M.; Jin, Q.; Wang, H.; Li, D.; Baniasadi, N.; Elze, T. The interrelationship between refractive error, blood vessel anatomy, and glaucomatous visual field loss. Transl. Vis. Sci. Technol. 2018, 7, 4.
9. Li, J.; Hu, Q.; Imran, A.; Zhang, L.; Yang, J.J.; Wang, Q. Vessel Recognition of Retinal Fundus Images Based on Fully Convolutional Network. In Proceedings of the 2018 IEEE 42nd Annual Computer Software and Applications Conference (COMPSAC), Tokyo, Japan, 23–27 July 2018; Volume 2, pp. 413–418.
10. Newman, A.; Andrew, N.; Casson, R. Review of the association between retinal microvascular characteristics and eye disease. Clin. Exp. Ophthalmol. 2018, 46, 531–552.
11. Abràmoff, M.D.; Garvin, M.K.; Sonka, M. Retinal imaging and image analysis. IEEE Rev. Biomed. Eng. 2010, 3, 169–208.
12. Zhu, C.; Zou, B.; Zhao, R.; Cui, J.; Duan, X.; Chen, Z.; Liang, Y. Retinal vessel segmentation in colour fundus images using Extreme Learning Machine. Comput. Med. Imaging Graph. 2017, 55, 68–77.
13. Qureshi, I.; Ma, J.; Abbas, Q. Recent Development on Detection Methods for the Diagnosis of Diabetic Retinopathy. Symmetry 2019, 11, 749.
14. Poplin, R.; Varadarajan, A.V.; Blumer, K.; Liu, Y.; McConnell, M.V.; Corrado, G.S.; Peng, L.; Webster, D.R. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat. Biomed. Eng. 2018, 2, 158.
15. Fraz, M.M.; Remagnino, P.; Hoppe, A.; Uyyanonvara, B.; Rudnicka, A.R.; Owen, C.G.; Barman, S.A. Blood vessel segmentation methodologies in retinal images–a survey. Comput. Methods Programs Biomed. 2012, 108, 407–433.
16. Azzopardi, G.; Strisciuglio, N.; Vento, M.; Petkov, N. Trainable COSFIRE filters for vessel delineation with application to retinal images. Med. Image Anal. 2015, 19, 46–57.
17. Franklin, S.W.; Rajan, S.E. Retinal vessel segmentation employing ANN technique by Gabor and moment invariants-based features. Appl. Soft Comput. 2014, 22, 94–100.
18. GeethaRamani, R.; Balasubramanian, L. Retinal blood vessel segmentation employing image processing and data mining techniques for computerized retinal image analysis. Biocybern. Biomed. Eng. 2016, 36, 102–118.
19. Almotiri, J.; Elleithy, K.; Elleithy, A. Retinal vessels segmentation techniques and algorithms: A survey. Appl. Sci. 2018, 8, 155.
20. Jiang, Z.; Yepez, J.; An, S.; Ko, S. Fast, accurate and robust retinal vessel segmentation system. Biocybern. Biomed. Eng. 2017, 37, 412–421.
21. Aslani, S.; Sarnel, H. A new supervised retinal vessel segmentation method based on robust hybrid features. Biomed. Signal Process. Control 2016, 30, 1–12.
22. Zhang, J.; Chen, Y.; Bekkers, E.; Wang, M.; Dashtbozorg, B.; Ter Haar Romeny, B.M. Retinal vessel delineation using a brain-inspired wavelet transform and random forest. Pattern Recognit. 2017, 69, 107–123.
23. Guo, Y.; Budak, Ü.; Şengür, A.; Smarandache, F. A retinal vessel detection approach based on shearlet transform and indeterminacy filtering on fundus images. Symmetry 2017, 9, 235.
24. Li, Q.; Feng, B.; Xie, L.; Liang, P.; Zhang, H.; Wang, T. A cross-modality learning approach for vessel segmentation in retinal images. IEEE Trans. Med. Imaging 2015, 35, 109–118.
25. Lupascu, C.A.; Tegolo, D.; Trucco, E. FABC: Retinal vessel segmentation using AdaBoost. IEEE Trans. Inf. Technol. Biomed. 2010, 14, 1267–1274.
26. Marín, D.; Aquino, A.; Gegúndez-Arias, M.E.; Bravo, J.M. A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features. IEEE Trans. Med. Imaging 2010, 30, 146–158.
27. Fraz, M.M.; Remagnino, P.; Hoppe, A.; Uyyanonvara, B.; Rudnicka, A.R.; Owen, C.G.; Barman, S.A. An ensemble classification-based approach applied to retinal blood vessel segmentation. IEEE Trans. Biomed. Eng. 2012, 59, 2538–2548.
28. Vega, R.; Sanchez-Ante, G.; Falcon-Morales, L.E.; Sossa, H.; Guevara, E. Retinal vessel extraction using lattice neural networks with dendritic processing. Comput. Biol. Med. 2015, 58, 20–30.
29. Liskowski, P.; Krawiec, K. Segmenting retinal blood vessels with deep neural networks. IEEE Trans. Med. Imaging 2016, 35, 2369–2380.
30. Wang, S.; Yin, Y.; Cao, G.; Wei, B.; Zheng, Y.; Yang, G. Hierarchical retinal blood vessel segmentation based on feature and ensemble learning. Neurocomputing 2015, 149, 708–717.
31. Oliveira, A.; Pereira, S.; Silva, C.A. Retinal vessel segmentation based on fully convolutional neural networks. Expert Syst. Appl. 2018, 112, 229–242.
32. Khowaja, S.A.; Khuwaja, P.; Ismaili, I.A. A framework for retinal vessel segmentation from fundus images using hybrid feature set and hierarchical classification. Signal Image Video Process. 2019, 13, 379–387.
33. Staal, J.; Abràmoff, M.D.; Niemeijer, M.; Viergever, M.A.; Van Ginneken, B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23, 501–509.
34. Hoover, A.; Kouznetsova, V.; Goldbaum, M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans. Med. Imaging 2000, 19, 203–210.
35. Owen, C.G.; Rudnicka, A.R.; Mullen, R.; Barman, S.A.; Monekosso, D.; Whincup, P.H.; Ng, J.; Paterson, C. Measuring retinal vessel tortuosity in 10-year-old children: Validation of the computer-assisted image analysis of the retina (CAIAR) program. Investig. Ophthalmol. Vis. Sci. 2009, 50, 2004–2010.
36. Lindeberg, T. Edge detection and ridge detection with automatic scale selection. In Proceedings of the CVPR IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 18–20 June 1996; pp. 465–470.
37. Morrone, M.C.; Owens, R.A. Feature detection from local energy. Pattern Recognit. Lett. 1987, 6, 303–313.
38. Kovesi, P. Image features from phase congruency. Videre J. Comput. Vis. Res. 1999, 1, 1–26.
39. Kovesi, P. Phase congruency detects corners and edges. In The Australian Pattern Recognition Society Conference: DICTA; Csiro Publishing: Clayton, Australia, 2003; Volume 2003.
40. Shariatmadar, Z.S.; Faez, K. Visual saliency detection via integrating bottom-up and top-down information. Optik 2019, 178, 1195–1207.
41. Frangi, A.F.; Niessen, W.J.; Vincken, K.L.; Viergever, M.A. Multiscale vessel enhancement filtering. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 1998; pp. 130–137.
42. Jerman, T.; Pernuš, F.; Likar, B.; Špiclin, Ž. Blob enhancement and visualization for improved intracranial aneurysm detection. IEEE Trans. Vis. Comput. Graph. 2015, 22, 1705–1717.
43. Fraz, M.M.; Rudnicka, A.R.; Owen, C.G.; Barman, S.A. Delineation of blood vessels in pediatric retinal images using decision trees-based ensemble classification. Int. J. Comput. Assist. Radiol. Surg. 2014, 9, 795–811.
44. Bendaoudi, H.; Cheriet, F.; Manraj, A.; Tahar, H.B.; Langlois, J.P. Flexible architectures for retinal blood vessel segmentation in high-resolution fundus images. J. Real-Time Image Process. 2018, 15, 31–42.
45. Li, Y.; Yuan, Y. Convergence analysis of two-layer neural networks with ReLU activation. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 597–607.
46. Hansson, M.; Olsson, C. Feedforward Neural Networks with ReLU Activation Functions Are Linear Splines. Bachelor's Thesis, Mathematical Sciences, Lund University, Lund, Sweden, 2017.
47. Pandya, M.D.; Shah, P.D.; Jardosh, S. Medical image diagnosis for disease detection: A deep learning approach. In U-Healthcare Monitoring Systems; Elsevier: Amsterdam, The Netherlands, 2019; pp. 37–60.
48. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
49. Moccia, S.; De Momi, E.; El Hadji, S.; Mattos, L.S. Blood vessel segmentation algorithms—Review of methods, datasets and evaluation metrics. Comput. Methods Programs Biomed. 2018, 158, 71–91.
50. Zhao, Y.Q.; Wang, X.H.; Wang, X.F.; Shih, F.Y. Retinal vessels segmentation based on level set and region growing. Pattern Recognit. 2014, 47, 2437–2446.
51. Soomro, T.A.; Afifi, A.J.; Gao, J.; Hellwich, O.; Khan, M.A.; Paul, M.; Zheng, L. Boosting sensitivity of a retinal vessel segmentation algorithm with convolutional neural network. In Proceedings of the 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Sydney, NSW, Australia, 29 November–1 December 2017; pp. 1–8.
52. Fu, H.; Xu, Y.; Wong, D.W.K.; Liu, J. Retinal vessel segmentation via deep learning network and fully-connected conditional random fields. In Proceedings of the 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; pp. 698–701.
Figure 1. A sample image from the Digital Retinal Images for Vessel Extraction (DRIVE) dataset. (a) The image. (b) The mask.
Figure 2. A sample from the DRIVE dataset. (a) The training image; (b) the manual segmentation of the training image; (c) the mask of the training image.
Figure 3. The test image of the DRIVE dataset. (a) A sample test image from the DRIVE dataset; (b) the first manual segmentation of image 2_test (02_manual1); (c) the second manual segmentation of image 2_test (02_manual2); (d) the mask image for image 2_test.
Figure 4. A sample from the Structure Analysis of the Retina (STARE) data set. (a) An image; (b) the first manual segmentation; (c) the second manual segmentation.
Figure 5. A sample from the CHASE_DB1 dataset. (a) An image; (b) the first manual segmentation; (c) the second manual segmentation.
Figure 6. Block diagram of the proposed method.
Figure 7. The extracted local intensity feature. (a) RGB image; (b) Red channel; (c) Blue channel; (d) Green channel, where the blood vessels are more vivid.
Figure 8. White top-hat features at six scales, extracted from the DRIVE dataset. Every image is the summation of the white top-hat filter at all orientations: (a) white top-hat image at c = 3; (b) white top-hat image at c = 7; (c) white top-hat image at c = 11; (d) white top-hat image at c = 15; (e) white top-hat image at c = 19; (f) white top-hat image at c = 23.
Figure 9. Black bottom-hat features at six scales, extracted from the DRIVE dataset. Every image is the summation of the black bottom-hat filter at all orientations: (a) black bottom-hat image at c = 3; (b) black bottom-hat image at c = 7; (c) black bottom-hat image at c = 11; (d) black bottom-hat image at c = 15; (e) black bottom-hat image at c = 19; (f) black bottom-hat image at c = 23.
Figure 10. A list of phase congruency images at different orientations: (a) PC at orientation π / 6 ; (b) PC at orientation π / 3 ; (c) PC at orientation π / 2 ; (d) PC at orientation 2 π / 3 ; (e) PC at orientation 5 π / 6 ; (f) PC at orientation π .
Figure 11. The moment features of phase congruency: (a) the image of the maximum moment of PC; (b) the image of the minimum moment of PC.
Figure 12. Hessian Features. (a) The vesselness features image; (b) the structureness features image; (c) the λ 1 image features; (d) The λ 2 image features.
Figure 13. The difference of Gaussian (DoG) features at five scales: (a) the DoG image at σ = √2/2; (b) the DoG image at σ = 1; (c) the DoG image at σ = √2; (d) the DoG image at σ = 2; (e) the DoG image at σ = 2√2.
Figure 14. Pictorial results of the best accuracy on the DRIVE dataset in 2 different images in each row: (a) The colour retinal images. (b) The transformed green channel images. (c) The manual label image. (d) The segmented vessel map images.
Figure 15. Pictorial results of the worst accuracy on the DRIVE dataset in 2 different images in each row: (a) The colour retinal images. (b) The transformed green channel images. (c) The manual label image. (d) The segmented vessel map images.
Figure 16. Pictorial results of the best accuracy on the STARE dataset in 2 different images in each row: (a) The colour retinal images. (b) The transformed green channel images. (c) The manual label image. (d) The segmented vessel map images.
Figure 17. Pictorial results of the worst accuracy on the STARE dataset in 2 different images in each row: (a) The colour retinal images. (b) The transformed green channel images. (c) The manual label image. (d) The segmented vessel map images.
Figure 18. Pictorial results of the best accuracy on the CHASE_DB1 dataset in 2 different images in each row: (a) The colour retinal images. (b) The transformed green channel images. (c) The manual label image. (d) The segmented vessel map images.
Figure 19. Pictorial results of the worst accuracy on the CHASE_DB1 dataset in 2 different images in each row: (a) The colour retinal images. (b) The transformed green channel images. (c) The manual label image. (d) The segmented vessel map images.
Table 1. The relations between the eigenvalues for different features in any 2D image, adopted from Frangi's schema, where H = high value; L = low value; N = noisy, usually small; +/− = positive or negative eigenvalue; dark = dark feature on a bright background (a blood vessel in our case) [41]. The structure we are interested in is indicated with a *.

λ1 | λ2 | The Pattern (2D image)
N | N | Noisy, no preferred direction
L | H+ | Tubular structure (dark)
L | H− | Tubular structure (bright)
H+ | H+ | Blob-like structure (dark) *
H− | H− | Blob-like structure (bright)
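For reference, the eigenvalue patterns in Table 1 are typically combined into a single vesselness response in Frangi's schema [41]. A minimal sketch of the standard 2D form is given below; β and c are the usual sensitivity parameters, the formula is written for bright structures (λ2 < 0), and the sign test is reversed for dark vessels on a bright background, so this illustrates the standard filter rather than the exact variant used in this work.

```latex
% Standard 2D Frangi vesselness (sketch following [41]); written for bright
% structures (\lambda_2 < 0); the sign test is reversed for dark vessels.
\[
  R_B = \frac{\lambda_1}{\lambda_2}, \qquad
  S = \sqrt{\lambda_1^{2} + \lambda_2^{2}},
\]
\[
  \mathcal{V} =
  \begin{cases}
    0, & \lambda_2 > 0, \\
    \exp\!\left(-\dfrac{R_B^{2}}{2\beta^{2}}\right)
    \left(1 - \exp\!\left(-\dfrac{S^{2}}{2c^{2}}\right)\right), & \text{otherwise}.
  \end{cases}
\]
```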
Table 2. The contingency table for vessel segmentation.

Prediction | GS Vessel Pixels | GS Background Pixels
Vessel pixels (predicted) | True positive (TP) | False positive (FP)
Background pixels (predicted) | False negative (FN) | True negative (TN)
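The performance metrics reported in Tables 3–6 are computed from the counts in Table 2 using the standard definitions, summarised below for convenience.

```latex
\[
  Se = \frac{TP}{TP + FN}, \qquad
  Sp = \frac{TN}{TN + FP}, \qquad
  Ppv = \frac{TP}{TP + FP}, \qquad
  Npv = \frac{TN}{TN + FN},
\]
\[
  Acc = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
  F_1 = \frac{2 \cdot Ppv \cdot Se}{Ppv + Se}.
\]
```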
Table 3. The performance results on 20 images of the DRIVE dataset.

Image | Se | Sp | Ppv | Npv | Acc | F1
1 | 0.8198 | 0.9764 | 0.8354 | 0.9723 | 0.9611 | 0.8091
2 | 0.8171 | 0.9839 | 0.8975 | 0.9681 | 0.9651 | 0.8117
3 | 0.7592 | 0.9773 | 0.8319 | 0.9635 | 0.9535 | 0.7516
4 | 0.7940 | 0.9829 | 0.8699 | 0.9694 | 0.9636 | 0.7861
5 | 0.7916 | 0.9876 | 0.8943 | 0.9717 | 0.9605 | 0.6572
6 | 0.6639 | 0.9915 | 0.9232 | 0.9489 | 0.9548 | 0.8117
7 | 0.8189 | 0.9843 | 0.7852 | 0.9782 | 0.9598 | 0.7993
8 | 0.5888 | 0.9951 | 0.9391 | 0.9474 | 0.9605 | 0.5829
9 | 0.7434 | 0.9785 | 0.7877 | 0.9717 | 0.9565 | 0.7359
10 | 0.8063 | 0.9782 | 0.8099 | 0.9767 | 0.9630 | 0.7983
11 | 0.8181 | 0.9792 | 0.8425 | 0.9742 | 0.9592 | 0.8100
12 | 0.7675 | 0.9824 | 0.8504 | 0.9689 | 0.9623 | 0.7598
13 | 0.6566 | 0.9912 | 0.9275 | 0.9429 | 0.9567 | 0.6500
14 | 0.8143 | 0.9843 | 0.7852 | 0.9782 | 0.9598 | 0.8062
15 | 0.8282 | 0.9699 | 0.7696 | 0.9779 | 0.9625 | 0.8200
16 | 0.7588 | 0.9873 | 0.8931 | 0.9659 | 0.9621 | 0.7512
17 | 0.6754 | 0.9894 | 0.8845 | 0.9611 | 0.9572 | 0.6686
18 | 0.7093 | 0.9919 | 0.9302 | 0.9560 | 0.9609 | 0.7022
19 | 0.7765 | 0.9862 | 0.9038 | 0.9622 | 0.9711 | 0.7687
20 | 0.6782 | 0.9895 | 0.9084 | 0.9511 | 0.9647 | 0.6714
Average | 0.7542 | 0.9843 | 0.8634 | 0.9653 | 0.9607 | 0.7475
Table 4. Performance results on 20 images of the STARE dataset.

Image | Se | Sp | Ppv | Npv | Acc | F1
1 | 0.7904 | 0.9803 | 0.7939 | 0.9556 | 0.9484 | 0.7834
2 | 0.6933 | 0.9765 | 0.7446 | 0.9683 | 0.9562 | 0.6869
3 | 0.8612 | 0.9787 | 0.7629 | 0.9794 | 0.9683 | 0.8527
4 | 0.6868 | 0.9961 | 0.9029 | 0.9582 | 0.9628 | 0.6792
5 | 0.7536 | 0.9734 | 0.7772 | 0.9542 | 0.9421 | 0.7467
6 | 0.8253 | 0.9821 | 0.8076 | 0.9794 | 0.9716 | 0.8214
7 | 0.8369 | 0.9727 | 0.8009 | 0.9864 | 0.9705 | 0.8265
8 | 0.7654 | 0.9655 | 0.7472 | 0.9888 | 0.9659 | 0.7577
9 | 0.8752 | 0.9842 | 0.8646 | 0.9821 | 0.9752 | 0.8426
10 | 0.8512 | 0.9636 | 0.7350 | 0.9776 | 0.9545 | 0.8504
11 | 0.8762 | 0.9800 | 0.8168 | 0.9825 | 0.9728 | 0.8741
12 | 0.8877 | 0.9816 | 0.8423 | 0.9838 | 0.9757 | 0.8679
13 | 0.8382 | 0.9841 | 0.8754 | 0.9721 | 0.9680 | 0.8294
14 | 0.7546 | 0.9871 | 0.8974 | 0.9707 | 0.9694 | 0.7498
15 | 0.6491 | 0.9924 | 0.9150 | 0.9499 | 0.9538 | 0.6371
16 | 0.7546 | 0.9857 | 0.8843 | 0.9487 | 0.9484 | 0.7395
17 | 0.6591 | 0.9825 | 0.8665 | 0.9735 | 0.9679 | 0.6487
18 | 0.6218 | 0.9975 | 0.9416 | 0.9696 | 0.9726 | 0.6196
19 | 0.9678 | 0.9951 | 0.8726 | 0.9710 | 0.9742 | 0.9660
20 | 0.6644 | 0.9913 | 0.8346 | 0.9461 | 0.9473 | 0.6548
Average | 0.7806 | 0.9825 | 0.8341 | 0.9698 | 0.9632 | 0.7717
Table 5. Performance results on 14 images of the CHASE_DB1 dataset.

Image | Se | Sp | Ppv | Npv | Acc | F1
1 | 0.7833 | 0.9871 | 0.7762 | 0.9658 | 0.9560 | 0.7754
2 | 0.8358 | 0.9778 | 0.7184 | 0.9710 | 0.9526 | 0.8274
3 | 0.6551 | 0.9889 | 0.8381 | 0.9673 | 0.9697 | 0.6485
4 | 0.6240 | 0.9812 | 0.8720 | 0.9652 | 0.9680 | 0.6177
5 | 0.8046 | 0.9702 | 0.6458 | 0.9703 | 0.9449 | 0.7965
6 | 0.8015 | 0.9585 | 0.6085 | 0.9786 | 0.9422 | 0.8924
7 | 0.8455 | 0.9927 | 0.7891 | 0.9785 | 0.9718 | 0.8370
8 | 0.7824 | 0.9963 | 0.8157 | 0.9785 | 0.9708 | 0.7745
9 | 0.7826 | 0.9829 | 0.7542 | 0.9741 | 0.9507 | 0.7747
10 | 0.8672 | 0.9809 | 0.7642 | 0.9638 | 0.9562 | 0.8585
11 | 0.6560 | 0.9951 | 0.9797 | 0.9714 | 0.9530 | 0.6495
12 | 0.6280 | 0.9934 | 0.7784 | 0.9595 | 0.9513 | 0.6220
13 | 0.7390 | 0.9958 | 0.8422 | 0.9559 | 0.9602 | 0.7318
14 | 0.8145 | 0.9849 | 0.7326 | 0.9626 | 0.9604 | 0.8063
Average | 0.7585 | 0.9846 | 0.7796 | 0.9687 | 0.9577 | 0.7580
Table 6. Segmentation results for the best and the worst cases.

Dataset | Case | Se | Sp | Ppv | Npv | Acc | F1
DRIVE | Best (max) | 0.8282 | 0.9951 | 0.9391 | 0.9782 | 0.9711 | 0.8200
DRIVE | Worst (min) | 0.5888 | 0.9699 | 0.7696 | 0.9429 | 0.9535 | 0.5829
STARE | Best (max) | 0.9678 | 0.9975 | 0.9416 | 0.9888 | 0.9757 | 0.9660
STARE | Worst (min) | 0.6218 | 0.9636 | 0.7350 | 0.9461 | 0.9421 | 0.6196
CHASE_DB1 | Best (max) | 0.8672 | 0.9963 | 0.9797 | 0.9786 | 0.9718 | 0.8585
CHASE_DB1 | Worst (min) | 0.6240 | 0.9585 | 0.6085 | 0.9559 | 0.9422 | 0.6177
Table 7. Comparison with state-of-the-art methods on the DRIVE dataset. N.A.: not available from the authors.

Method | Year | Se | Sp | Acc | F1
Marin et al. [26] | 2010 | 0.7067 | 0.9801 | 0.9452 | N.A.
Fraz et al. [27] | 2014 | 0.7406 | 0.9807 | 0.9480 | N.A.
Zhao et al. [50] | 2014 | 0.7187 | 0.9789 | 0.9509 | N.A.
Fu et al. [28] | 2016 | 0.7444 | 0.9600 | 0.9412 | 0.6884
Soomro et al. [51] | 2017 | 0.7382 | 0.9123 | 0.9421 | N.A.
Khowaja et al. [32] | 2019 | 0.7437 | 0.9596 | 0.9410 | 0.6877
Sundaram et al. [4] | 2019 | 0.6909 | 0.9401 | 0.9301 | N.A.
Proposed Method |  | 0.7542 | 0.9843 | 0.9607 | 0.7475
Table 8. Comparison with state-of-the-art methods on the STARE dataset. N.A.: not available from the authors.

Method | Year | Se | Sp | Acc | F1
Marin et al. [26] | 2010 | 0.6944 | 0.9820 | 0.9526 | N.A.
Fraz et al. [43] | 2014 | 0.7548 | 0.9763 | 0.9534 | N.A.
Zhao et al. [50] | 2014 | 0.7187 | 0.9767 | 0.9509 | N.A.
Fu et al. [52] | 2016 | 0.7402 | 0.9479 | 0.9336 | 0.6187
Soomro et al. [51] | 2019 | 0.7382 | 0.9123 | 0.9421 | N.A.
Khowaja et al. [32] | 2019 | 0.7155 | 0.9746 | 0.9483 | 0.7224
Proposed Method |  | 0.7806 | 0.9825 | 0.9632 | 0.7717
Table 9. Comparison with state-of-the-art methods on the CHASE_DB1 dataset. N.A.: not available from the authors.

Method | Year | Se | Sp | Acc | F1
Fraz et al. [27] | 2012 | 0.7224 | 0.9711 | 0.9469 | N.A.
Fraz et al. [43] | 2014 | 0.7259 | 0.9770 | 0.9524 | N.A.
Azzopardi et al. [16] | 2015 | 0.7585 | 0.9587 | 0.9587 | N.A.
Fu et al. [52] | 2016 | 0.7130 | N.A. | 0.9486 | 0.7304
Khowaja et al. [32] | 2019 | 0.7559 | 0.9758 | 0.9518 | 0.7646
Sundaram et al. [4] | 2019 | 0.7105 | 0.9601 | 0.9501 | N.A.
Proposed Method |  | 0.7585 | 0.9846 | 0.9577 | 0.7580
