Article

Skin Lesion Extraction Using Multiscale Morphological Local Variance Reconstruction Based Watershed Transform and Fast Fuzzy C-Means Clustering

Ranjita Rout, Priyadarsan Parida, Youseef Alotaibi, Saleh Alghamdi and Osamah Ibrahim Khalaf

1 Department of Electronics and Communication Engineering, GIET University, Rayagada 765022, Odisha, India
2 Department of Computer Science, College of Computer and Information Systems, Umm Al-Qura University, Makkah 21955, Saudi Arabia
3 Department of Information Technology, College of Computers and Information Technology, Taif University, Taif 21944, Saudi Arabia
4 Al-Nahrain Nanorenewable Energy Research Center, Al-Nahrain University, Baghdad 64074, Iraq
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(11), 2085; https://doi.org/10.3390/sym13112085
Submission received: 3 October 2021 / Revised: 29 October 2021 / Accepted: 31 October 2021 / Published: 3 November 2021

Abstract

Early identification of melanocytic skin lesions increases the survival rate of skin cancer patients. Automated melanocytic skin lesion extraction from dermoscopic images using computer vision is challenging, as lesions can differ in color, size and shape, and the contrast near lesion boundaries can vary. Lesion extraction from dermoscopic images is therefore a fundamental step for automated melanoma identification. In this article, a watershed transform combined with a fast fuzzy c-means (FCM) clustering algorithm is proposed for the extraction of melanocytic skin lesions from dermoscopic images. Initially, the proposed method removes the artifacts from the dermoscopic images and enhances the texture regions. The images are then filtered with a Gaussian filter and a local variance filter to enhance the lesion boundary regions. Next, the watershed transform based on MMLVR (multiscale morphological local variance reconstruction) is introduced to acquire superpixels of the image with accurate boundary regions. Finally, the fast FCM clustering technique is applied to the superpixels of the image to attain the final lesion extraction result. The proposed method is tested on three publicly available skin lesion image datasets, i.e., ISIC 2016, ISIC 2017 and ISIC 2018. Experimental evaluation shows that the proposed method achieves good results.


1. Introduction

Skin cancer is quite prevalent throughout the world and affects both males and females of all ages. According to the Skin Cancer Foundation, an estimated 207,390 cases will be diagnosed in the U.S. alone in 2021. Melanoma is the most dangerous of all skin cancers because it spreads quickly to other organs of the body, so early detection is a key factor for effective melanoma care. Melanomas have distinct features such as asymmetry, uneven borders, varied colors, large diameter and frequently changing shape and size. These features, summarized as the ABCDE (asymmetry, border, color, diameter and evolving) rule, help experts identify melanoma during visual inspection. However, it is still challenging for experts to identify melanoma with the naked eye. Therefore, computer-vision-assisted diagnosis systems [1] are used to help experts detect melanocytic lesions accurately and in a timely manner, providing a better path toward diagnosis. Such a system uses dermoscopic images, and its diagnosis pipeline comprises stages such as preprocessing, lesion extraction and lesion classification; it detects melanoma by ignoring the artifacts present in the affected region and segregating the skin lesion accurately from healthy skin.
Lesion extraction is a fundamental step that helps experts detect and classify lesions in the acquired images. Various lesion extraction approaches have been developed to assist experts in efficiently identifying and classifying lesions using computer-vision-assisted diagnostic systems. However, because of the location of lesions on the human body, variations in colors, shapes and sizes, and the contrast in lesion boundary regions, extracting lesions from dermoscopic images remains a hard task: these factors increase computational time and can result in inaccurate lesion extraction. Many supervised, unsupervised and deep learning methods have been developed to overcome these challenges and extract lesions with better accuracy.
The literature shows that there are several challenges in effective lesion extraction; to overcome them, the proposed method uses the local variance method instead of gradient-based boundary detection. Here, lesion extraction is performed by hybridizing superpixels and FCM. The proposed method extracts the lesions by removing undesired artifacts and enhancing the lesion regions relative to the healthy skin regions.
The proposed approach, being an unsupervised one, extracts the skin lesion effectively because it uses the following process:
(1) Preprocessing comprises hair removal and texture enhancement. To remove hairs from the dermoscopic images, one of the popular hair-removal approaches, DullRazor [2], is used. It removes the hairs from the input images and facilitates further processing.
(2) Due to the large intensity variations in dermoscopic images, it is very difficult to segregate the lesion regions from healthy skin regions. Therefore, to enhance the lesion regions, the hair-removed images are processed with a contrast enhancement technique known as dominant orientation-based texture histogram equalization (DOTHE), which enhances the lesion regions of the dermoscopic images based on histogram equalization.
(3) Further, the preprocessed images are passed through MMLVR-WT to generate superpixels of the images, and the histogram of the superpixel image is computed to achieve fast fuzzy c-means (FCM) clustering. The proposed method uses the local variance method for accurate detection of boundary regions, which helps to separate the lesions from healthy skin regions effectively.
(4) Later, a postprocessing step removes the undesired pixel regions from the lesion regions.
The rest of the paper is organized as follows: Section 2 presents the related works. Section 3 provides an idea of the datasets used for experimentation work. The proposed method is discussed elaborately in Section 4. Section 5 and Section 6 discuss the proposed method’s performance analysis and results. Finally, Section 7 concludes the paper.

2. Related Work

Skin lesion extraction is carried out non-invasively using dermoscopic images, in which the lesion regions are segregated from the healthy skin regions. The lesion extraction methods available in the literature are broadly categorized into supervised and unsupervised approaches. The supervised approaches [3,4,5,6,7,8] use prior knowledge of lesions and non-lesions in dermoscopic images for accurate identification of melanocytic skin lesions. This process requires large image datasets with ground truths annotated by experts in order to build an accurate detection model. Most supervised approaches presently use deep convolutional neural networks for segmentation [9]. Popular CNN architectures applied to dermoscopic images include U-Net [10] by Ronneberger et al., SegNet [11] by Badrinarayanan et al., the DeepLab-based approach of Bagheri et al. [12] and the deep fully convolutional network coupled with a shallow network with textons proposed by Zhang et al. [13]. Furthermore, to improve lesion extraction accuracy, hybrid combinations of supervised and unsupervised methods have been developed for skin lesion extraction.
Ünver and Ayan [14] combined the deep neural network YOLO with the unsupervised GrabCut approach, followed by morphological operations, to extract melanocytic lesions. Nida et al. [15] used an RCNN for lesion localization, followed by fuzzy c-means clustering for segmentation. Banerjee et al. [16] extracted the lesions using a combination of the deep network YOLO and L-type fuzzy number based approximations. Methods such as [14,16] thus use a hybrid of supervised and unsupervised techniques for lesion extraction.
These hybrid schemes attain good detection accuracy, but there is still scope for improvement. It should be noted that hybrid schemes require an enormous amount of annotated data for the supervised part, whereas unsupervised approaches rely on human visual attention models for detecting lesions.
Among the various unsupervised approaches, such as thresholding, region-based methods [17], edge detection [18] and clustering [19], clustering is one of the most widely used, as it applies to both grayscale and color images. Fuzzy c-means (FCM) clustering is an unsupervised approach typically used in image segmentation because of its overall success in feature analysis and clustering. It divides the n feature vectors into c fuzzy groups, evaluates the clustering center for each group and minimizes a non-similarity index function. Kumar et al. [20] developed a method that uses fuzzy c-means to differentiate homogeneous image regions for segmentation and a DE-ANN for classification of skin cancers.
To achieve better segmentation accuracy, researchers have combined fuzzy c-means clustering with existing state-of-the-art methods. FCM relies on a membership function that divides the image into various regions. Lee and Chen [21] developed a classical FCM clustering segmentation method for various skin cancers. The authors of [22] integrated local spatial membership features into the FCM objective function, achieving satisfactory segmentation results. An updated FCM algorithm was introduced by Liu et al. [23], in which the distance among regions obtained by mean-shift and the distance among pixels were integrated into the objective function. To address FCM's inability to handle ambiguous data, the neutrosophic set (NS) and FCM frameworks were combined by Guo et al. [24].
The unsupervised clustering approach, particularly FCM, is often used in combination with supervised approaches to yield better lesion extraction results. More recently, the superpixel approach in combination with deep learning frameworks has been used to achieve high accuracy in skin lesion extraction. Several superpixel generation approaches are available in the literature.
To generate a superpixel image with accurate boundaries, Lei et al. [25] proposed an algorithm based on multiscale morphological gradient reconstruction (MMGR), followed by a fast FCM method for color image segmentation. The gradient-based method is suitable for detecting the boundary region accurately in natural images. However, for dermoscopic images, accurate boundary detection with the gradient-based method is challenging because of their uneven boundary regions.
In this context, Ali et al. [26] proposed an automated approach to detect and measure border irregularity, training a combination of a CNN and a Gaussian naïve Bayes classifier to determine automatically whether a lesion's border region is regular or irregular. Afza et al. [27] provided a three-step superpixel approach for lesion extraction from dermoscopic images. A boundary detection method proposed by Liu et al. [28] combined a CNN with edge prediction in dermoscopic images for better lesion extraction. Ali et al. [29] used the Feret's diameter method for the prediction of asymmetry parameters, along with an improved Otsu thresholding method, for extraction of skin lesions from dermoscopic images. A stochastic region-merging and pixel-based Markov random field approach was proposed [30] that decomposes the likelihood function into the product of a stochastic region-merging likelihood and a pixel likelihood for skin lesion extraction. The lesion extraction results obtained from existing boundary detection methods are shown in Figure 1.
In Figure 1a, an original dermoscopic image is shown, and Figure 1h depicts the corresponding ground truth (GT). The lesion region has irregular intensity variations near the lesion border. The lesion extracted using the AD method [29] is shown in Figure 1b, and the corresponding lesion mask in Figure 1i. Although Figure 1b shows that the lesion is effectively extracted, comparing the lesion mask in Figure 1i with the GT (Figure 1h) shows that some background region is included.
Similarly, for the ADR method [26], the extracted lesion is shown in Figure 1c and the corresponding mask in Figure 1j. From Figure 1j, it can be observed that there is some loss of lesion regions and inclusion of background healthy skin. Figure 1d,k demonstrates the extracted lesion and lesion mask of the AT approach [28]. Although it uses a CNN, it can be observed from Figure 1d that the lesion extraction is not accurate: the lesion is extracted along with healthy skin regions. Figure 1e,l shows the lesion and mask for the HTSDL method [27].
From Figure 1e, it can be observed that some lesion regions are missed, which is not desirable. Similarly, Figure 1f,m represents the lesion extraction result and lesion mask of the MRF method [30]. Although the mask in Figure 1m resembles the GT in Figure 1h, there is extensive inclusion of healthy skin regions, as shown in Figure 1f. In contrast, Figure 1g,n, which shows the lesion extraction result and lesion mask of the proposed method, demonstrates that the proposed approach has minimal loss of lesion region and less inclusion of healthy skin compared to the other recent state-of-the-art approaches.

3. Datasets

For experimentation purposes, three publicly available datasets, i.e., ISIC 2016 [31], ISIC 2017 [32] and ISIC 2018 [33], were used. The dermoscopic images in these datasets have varying intensities, resolutions and sizes, and ground truth images are provided. The proposed method used the dermoscopic images as they were, with no modification of size or resolution. The RGB image sizes ranged from 542 × 718 to 2848 × 4288 pixels for the ISIC 2016 dataset, and from 576 × 768 to 6748 × 4499 pixels for the ISIC 2017 and ISIC 2018 datasets.

4. Proposed Method

The proposed method segregates the lesions from dermoscopic images. The entire lesion extraction process comprises four basic stages: preprocessing, filtering-based watershed transform, fast fuzzy c-means (FCM) clustering and morphological postprocessing. The proposed method's architecture is illustrated in Figure 2, and the subsequent subsections explain each stage in more detail.

4.1. Input Image

For experimentation, the proposed method used the three publicly available skin lesion image datasets, i.e., ISIC 2016, ISIC 2017 and ISIC 2018. The input images are RGB dermoscopic images; an example is shown in Figure 3a.

4.2. Preprocessing Techniques

Preprocessing plays a vital role in the automated melanocytic skin lesion extraction process. It is the first step and makes the dermoscopic images ready for further analysis by enhancing certain features and removing undesired artifacts. It comprises two basic steps: hair removal and texture enhancement.

4.2.1. Hair Removal

Undesired artifacts such as hairs in the captured dermoscopic images increase the computational cost and lead to inaccurate skin lesion extraction. Several hair-removal methods [2,34] exist that detect hair pixels and remove them, thus reducing the computational time. The proposed method uses one of the popular hair-removal approaches, DullRazor [2], to remove hairs from the input RGB images. Figure 3 illustrates the process. Initially, from the original RGB image in Figure 3a, the hair mask shown in Figure 3b is generated using the DullRazor algorithm. Then, using image inpainting, the hair regions marked by the binary mask of Figure 3b are replaced with the nearest non-hair pixel intensity values, as shown in Figure 3c. Finally, the result is smoothed to obtain a clean, hair-removed image; Figure 3d shows the hair-removed image obtained from the original RGB image of Figure 3a.
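The exact DullRazor implementation is described in [2]; the following is only a minimal sketch of a DullRazor-style pipeline using OpenCV, in which thin dark hairs are highlighted with a morphological black-hat filter, thresholded into a hair mask and replaced by inpainting. The kernel size and threshold are illustrative choices, not values from the original algorithm.

```python
import cv2
import numpy as np

def remove_hairs(rgb, kernel_size=17, thresh=10):
    """rgb: uint8 RGB image. Returns a hair-removed image."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    # Black-hat highlights thin dark structures (hairs) on a brighter background.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    # Threshold the black-hat response to obtain a binary hair mask.
    _, hair_mask = cv2.threshold(blackhat, thresh, 255, cv2.THRESH_BINARY)
    # Replace hair pixels with values interpolated from nearby non-hair pixels.
    return cv2.inpaint(rgb, hair_mask, 3, cv2.INPAINT_TELEA)
```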

4.2.2. Texture Enhancement

The input RGB images have large intensity variations, which makes it difficult to segregate the lesion regions from healthy skin; this leads to improper extraction of skin lesions and also affects the diagnosis process [35]. Therefore, it is essential to enhance the texture regions accurately before further processing. Dominant orientation-based texture histogram equalization (DOTHE) [36] is a contrast-enhancement technique that enhances texture regions based on histogram equalization. The DOTHE algorithm comprises the following six steps:
(i) Initially, the image to be enhanced is divided into a number of blocks.
(ii) A variance threshold is applied to each block to classify it as smooth or rough.
(iii) The rough blocks are further divided into dominant and non-dominant orientation blocks, based on the singular value decomposition (SVD) of the gradient vectors of each block.
(iv) The intensity distribution (histogram) is computed from the dominant orientation (texture) blocks of the image.
(v) Based on the cumulative density function (CDF) of the input image, the texture histogram is mapped onto a new dynamic range.
(vi) Finally, the texture-enhanced image is obtained using the mapped histogram.
The texture-enhanced image obtained using the DOTHE algorithm is shown in Figure 4: Figure 4a shows the hair-removed image, and Figure 4b the texture-enhanced image.
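To make steps (i)-(vi) concrete, the following is a simplified sketch of a DOTHE-style enhancement, not the exact algorithm of [36]; the block size, variance threshold and singular-value dominance ratio are assumed values.

```python
import numpy as np

def dothe(gray, block=16, var_thresh=50.0, dom_ratio=2.0):
    """gray: uint8 grayscale image. Returns a texture-enhanced image."""
    h, w = gray.shape
    gx, gy = np.gradient(gray.astype(float))
    texture_hist = np.zeros(256)
    for i in range(0, h - block + 1, block):            # (i) divide into blocks
        for j in range(0, w - block + 1, block):
            patch = gray[i:i + block, j:j + block]
            if patch.var() < var_thresh:                # (ii) skip smooth blocks
                continue
            # (iii) SVD of the block's gradient vectors; a large ratio of the
            # two singular values indicates one dominant orientation.
            g = np.stack([gx[i:i + block, j:j + block].ravel(),
                          gy[i:i + block, j:j + block].ravel()], axis=1)
            s = np.linalg.svd(g, compute_uv=False)
            if s[1] > 0 and s[0] / s[1] > dom_ratio:
                # (iv) accumulate the histogram of texture blocks only
                texture_hist += np.bincount(patch.ravel(), minlength=256)
    if texture_hist.sum() == 0:                         # no texture blocks found
        return gray
    cdf = np.cumsum(texture_hist) / texture_hist.sum() # (v) CDF-based mapping
    mapping = np.round(255 * cdf).astype(np.uint8)
    return mapping[gray]                                # (vi) enhanced image
```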

4.3. Filtering-Based Watershed Transform

This stage comprises four operational steps: Gaussian filtering, local variance computation, MMLVR (multiscale morphological local variance reconstruction) and the watershed transform (WT). The preprocessed output of the previous stage is processed by these sequential steps for lesion extraction. The operations are explained in detail in the following subsections.

4.3.1. Gaussian Filter

The presence of irregular texture patches in the texture-enhanced dermoscopic images is one of the limiting factors for the accurate extraction of the skin lesion. Therefore, smoothing was performed using a 2-D Gaussian filter [37,38]. The 2D Gaussian filter kernel is given as
$$G_{2D}(x, y) = \frac{1}{2 \pi \sigma^2} \, e^{-\frac{x^2 + y^2}{2 \sigma^2}}$$
This kernel is convolved with the enhanced image $I_E(x, y)$ obtained in Section 4.2.2 to get the following result:

$$I_G = I_E(x, y) * G_{2D}(x, y)$$
A Gaussian filter of size 3 × 3 is used in the proposed method. It reduces the effect of irregular texture regions and provides smoothed intensity values, so the output of this step is a Gaussian-blurred image. The selection of the Gaussian kernel parameters is discussed in detail in Section 6.
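For instance, with SciPy this smoothing step can be written as the sketch below, where `I_E` stands for the texture-enhanced image of Section 4.2.2 (a random placeholder here). With sigma = 1 and truncate = 1, `gaussian_filter` clips the kernel at radius 1, i.e., the 3 × 3 window used in this work.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

I_E = np.random.rand(256, 256)   # placeholder for the texture-enhanced image
# truncate=1.0 with sigma=1.0 clips the kernel at radius 1 -> 3x3 window
I_G = gaussian_filter(I_E, sigma=1.0, truncate=1.0)
```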

4.3.2. Local Variance for Boundary Region Extraction

The output of the former step is a Gaussian-blurred image ($I_G$) in which the irregular texture patches have been smoothed out. For lesion extraction, boundary identification is an important step that segregates the lesions from the healthy skin regions. Boundaries occur at the transition points of the intensity image obtained after the smoothing operation. However, the smoothing also blurs the edges, which makes conventional gradient-based techniques such as Canny, Prewitt and Sobel difficult to apply. Therefore, we use local variance for boundary region identification. The local variance technique depends on the statistical intensity distribution of the image rather than on the intensity gradient. Compared to the flat regions of the image, the value of local variance differs across the edges, swinging from minimum to maximum and vice versa. The local variance of a pixel is computed as
$$I_{LV}(i, j) = \frac{1}{n^2 - 1} \sum_{x=1}^{n} \sum_{y=1}^{n} \left( m(x, y) - \bar{m} \right)^2$$
where $(x, y)$ are the local coordinates within the $n \times n$ neighbourhood of pixel $m$ and $\bar{m}$ is the mean of the neighbourhood. To determine the local variance feature of the image, this operation is performed over the whole image, sliding vertically and horizontally:

$$I_{LV} = \begin{bmatrix} I_{LV}(1,1) & I_{LV}(1,2) & \cdots & I_{LV}(1,Q) \\ I_{LV}(2,1) & I_{LV}(2,2) & \cdots & I_{LV}(2,Q) \\ \vdots & \vdots & \ddots & \vdots \\ I_{LV}(P,1) & I_{LV}(P,2) & \cdots & I_{LV}(P,Q) \end{bmatrix}$$
where P × Q is the size of the original image. The mean of the local variance of the pixels is used to determine the image boundary in the proposed method; the variance yields a high value near the boundary regions. The boundary region extraction using the local variance method is shown in Figure 5a.
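A sketch of this computation using box filters follows, via the identity Var(m) = E[m²] − (E[m])²; note that it computes the population variance (divisor n²) rather than the 1/(n² − 1) sample variance above, a negligible difference for boundary detection.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, n=3):
    """Local variance over an n x n sliding window (n = 3 as in Section 6)."""
    img = img.astype(float)
    mean = uniform_filter(img, size=n)          # E[m] per window
    mean_sq = uniform_filter(img ** 2, size=n)  # E[m^2] per window
    # High values near lesion boundaries, low values in flat regions.
    return mean_sq - mean ** 2
```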

4.3.3. Multiscale Morphological Local Variance Reconstruction (MMLVR) Based on Watershed Transform

The local variance image ($I_{LV}$) obtained in the former step segregates the boundary region from the healthy skin. The $I_{LV}$ image is further processed using the multiscale morphological local variance reconstruction (MMLVR) operation. It smooths the lesion region of the image so that the boundary of the lesion regions is protected [39], which overcomes oversegmentation [25] while removing useless gradient details. Thus, a binary image is generated, denoted $I_B$. The basic operations of morphological reconstruction are dilation and erosion [39], performed with respect to structuring elements (SEs). Dilation of the image $I_B$ expands it according to the (four- or eight-connected) SE, whereas erosion performs the reverse operation and shrinks $I_B$ based on the SE. For a grayscale image defined on $\mathbb{Z} \times \mathbb{Z}$, the dilation $\delta$ and erosion $\varepsilon$ of $I_B$ by SE are given below:
$$\delta(I_B)(s, t) = (I_B \oplus SE)(s, t) = \max\left\{ I_B(s - x, t - y) + SE(x, y) \mid (s - x), (t - y) \in \mathbb{Z};\ (x, y) \in \mathbb{Z} \right\}$$

$$\varepsilon(I_B)(s, t) = (I_B \ominus SE)(s, t) = \min\left\{ I_B(s + x, t + y) - SE(x, y) \mid (s + x), (t + y) \in \mathbb{Z};\ (x, y) \in \mathbb{Z} \right\}$$
The opening and closing of the image $I_B$ by SE are symbolized as $I_B \circ SE$ and $I_B \bullet SE$, respectively, and are represented as follows:
$$\Psi(I_B) = I_B \circ SE = (I_B \ominus SE) \oplus SE$$

$$\varphi(I_B) = I_B \bullet SE = (I_B \oplus SE) \ominus SE$$
The object (or lesion region) is smoothed using the morphological closing-by-partial-reconstruction operator $\Phi^{rec}$ applied to the dilated image $\delta(I_B)$, with the reference image $\varphi^k(I_B)$ obtained by closing the preprocessed image $k$ times. This is given by

$$MF(I_B) = \Phi^{rec}\left( \delta(I_B),\ \varphi^k(I_B) \right), \quad 0 \le k \le n$$
where n defines the size of the SE.
Further, the proposed method uses the watershed transform (WT) to generate superpixels of the MMLVR-reconstructed image. The superpixel technique oversegments the image into a number of confined regions and thus helps to improve the efficiency of lesion extraction. The WT operates on the regional minima of the reconstructed image to achieve the pre-lesion extraction. The output of the MMLVR-based WT is shown in Figure 5b.
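The following sketch illustrates the MMLVR-WT idea with scikit-image; it follows the MMGR-WT scheme of [25] with the gradient image replaced by the local variance image. The disk-shaped SE matches Section 6, but the scale range is an assumption.

```python
import numpy as np
from skimage.morphology import disk, dilation, reconstruction
from skimage.segmentation import watershed

def mmlvr_wt(I_LV, max_scale=3):
    """I_LV: float local-variance image. Returns a superpixel label map."""
    rec = None
    for r in range(1, max_scale + 1):
        # Closing-by-reconstruction at scale r: dilate with a disk SE, then
        # reconstruct by erosion using the original image as the mask.
        seed = dilation(I_LV, disk(r))
        cbr = reconstruction(seed, I_LV, method='erosion')
        # Pointwise maximum accumulates the multiscale reconstruction.
        rec = cbr if rec is None else np.maximum(rec, cbr)
    # Watershed of the reconstructed variance image; with no markers given,
    # its regional minima seed the catchment basins (one superpixel each).
    return watershed(rec)
```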

4.4. Fast Fuzzy C-Means Clustering

The output of the former step is a pre-lesion extraction (superpixel) result based on MMLVR-WT. For the final lesion extraction, fast fuzzy c-means (FCM) [22] is applied using the histogram of the superpixel image; this histogram is the key factor in achieving fast color lesion extraction. The proposed method uses both MMLVR-WT and the fast FCM method for accurate skin lesion extraction: MMLVR-WT operates on the local features of the image, whereas FCM needs the global features of the image. Thus, by combining the two, a better lesion extraction result is obtained. The lesion extracted by fast FCM is displayed in Figure 5c.
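A sketch of the superpixel-histogram idea, in the spirit of [22,25]: each superpixel is reduced to its mean color and weighted by its pixel count (the superpixel histogram), so the clustering iterates over a few hundred samples rather than millions of pixels. c = 2 clusters are used, matching Section 6; the fuzzifier m and the iteration count are assumptions.

```python
import numpy as np

def fast_fcm(rgb, sp_labels, c=2, m=2.0, iters=50, eps=1e-9):
    """rgb: (H, W, 3) image; sp_labels: (H, W) superpixel label map."""
    ids = np.unique(sp_labels)
    # One feature vector per superpixel: its mean RGB colour.
    feats = np.array([rgb[sp_labels == k].mean(axis=0) for k in ids])
    # Superpixel histogram: the number of pixels each superpixel covers.
    w = np.array([(sp_labels == k).sum() for k in ids], dtype=float)
    u = np.random.dirichlet(np.ones(c), size=len(ids))  # initial memberships
    for _ in range(iters):
        um = (u ** m) * w[:, None]                      # size-weighted u^m
        centers = (um.T @ feats) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2) + eps
        inv = d ** (-2.0 / (m - 1.0))                   # standard FCM update
        u = inv / inv.sum(axis=1, keepdims=True)
    hard = u.argmax(axis=1)                             # defuzzify per superpixel
    out = np.zeros(sp_labels.shape, dtype=int)
    for k, lab in zip(ids, hard):
        out[sp_labels == k] = lab
    return out                                          # pixel-level label map
```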

4.5. Postprocessing

The lesion extraction result obtained from the former step is binarized and then postprocessed using morphological operations, followed by extraction of the biggest blob. The postprocessing techniques are described in the subsequent subsections.

4.5.1. Morphological Operation

Morphological processing is an essential step for the extraction of lesions. The clustered image obtained from the previous step contains undesired tiny pixel regions (the encircled regions 'A' and 'B' in Figure 6a) that affect the shape and texture regions of the image. Therefore, to remove the undesired pixel regions, thinning and region-filling operations are applied to the binary image.

4.5.2. Extraction of the Biggest Blob

The binary image obtained from Section 4.5.1 still contains undesired pixel components that should be discarded to extract the skin lesion accurately. To achieve this, the image is further processed to extract the biggest blob, ignoring all the undesired pixel components. Finally, the skin lesion mask is obtained by keeping the largest connected component and discarding all smaller ones. The lesion mask obtained using the biggest blob is shown in Figure 6b.
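A minimal sketch of the postprocessing chain of Sections 4.5.1 and 4.5.2: fill holes in the binarized clustering output, then keep the largest connected component as the lesion mask.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.measure import label

def postprocess(binary_mask):
    """binary_mask: boolean array from the binarized FCM output."""
    filled = binary_fill_holes(binary_mask)  # region filling
    lab = label(filled)                      # connected-component labelling
    if lab.max() == 0:                       # nothing detected
        return filled
    sizes = np.bincount(lab.ravel())
    sizes[0] = 0                             # ignore the background label
    return lab == sizes.argmax()             # keep the biggest blob
```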

5. Performance Analysis

For performance analysis, the proposed method is evaluated using the metrics accuracy (Acc), dice coefficient (DC), Jaccard index (JI), sensitivity (SN) and specificity (SP). To compute these metrics, the binary lesion mask extracted by the proposed method is compared with the ground truth binary image provided in the dataset. From the two images, a confusion matrix is built, where TP, TN, FP and FN denote the true positives, true negatives, false positives and false negatives. The metrics are defined as follows:
$$Accuracy = \frac{TP + TN}{TP + FP + TN + FN}$$

$$DC = \frac{2 \times TP}{(TP + FP) + (TP + FN)}$$

$$JI = \frac{TP}{TP + FP + FN}$$

$$SN = \frac{TP}{TP + FN}$$

$$SP = \frac{TN}{TN + FP}$$
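These definitions translate directly into code; the sketch below computes all five metrics from a predicted lesion mask and its ground truth (both boolean arrays of the same shape).

```python
import numpy as np

def evaluate(pred, gt):
    """pred, gt: boolean masks. Returns the five metrics as fractions."""
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    return {
        'Acc': (tp + tn) / (tp + fp + tn + fn),
        'DC':  2 * tp / ((tp + fp) + (tp + fn)),
        'JI':  tp / (tp + fp + fn),
        'SN':  tp / (tp + fn),
        'SP':  tn / (tn + fp),
    }
```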

6. Results and Discussion

Three publicly available dermoscopic image datasets, i.e., ISIC 2016 [31], ISIC 2017 [32] and ISIC 2018 [33], were used to test and validate the proposed method. The entire experiment was carried out in MATLAB R2018b on a PC with a Core i3 processor and 8 GB RAM.
The proposed method uses a Gaussian filter of size 3 × 3 with σ = 1 and a local variance window of size 3 × 3. The structuring element (SE) used in Section 4.3.3 is a disk of size 3. The number of iterations for MMLVR-WT is 50; increasing the number of iterations further increases the computation time of the proposed method. The number of classes for fast FCM is 2; a larger number of classes is not required, since the objective is to separate the lesion from healthy skin.
For validating the above-mentioned values for different parameters, the proposed method was also tested by varying the different parameters, such as size of the Gaussian filter, SE, local variance window size and the sigma values. The performance measurements for the different metrics obtained from the proposed method by varying the values of different parameters are shown in Table 1 and Table 2.
Table 1 represents the performance measurements obtained from the proposed method when SE was 3, local variance window size was 3 × 3 and different kernel sizes and sigma values were used. The results obtained by changing the SE to 2 with the same local variance size and different kernel sizes and sigma values are shown in Table 2.
From Table 1 and Table 2, it can be observed that the best results were obtained when the SE size was 3, the sigma value was 1 (σ = 1), a 3 × 3 Gaussian kernel was used and the local variance window size was 3 × 3. The best values are shown in bold in Table 1.
The proposed method comprises three main processing stages, i.e., preprocessing, the filtering-based watershed transform and postprocessing, with fast FCM performing the lesion extraction; the FCM step is one of the major contributors to the overall cost. The time complexity of FCM is O(ndc²i), where n is the number of data points, d the number of dimensions, c the number of clusters and i the number of iterations. Keeping the data points constant, we assumed n = 50, d = 3 and i = 50 and varied the number of clusters. We did not compare the complexity of the proposed method with that of other approaches, as this information is not available in the relevant literature.
After the validation of different parameters, the proposed method was further evaluated by considering the variety of images from the three publicly available datasets, i.e., ISIC 2016, ISIC 2017 and ISIC 2018, and the performance measurement metrics were obtained.
For the ISIC 2016 dataset, the performance measurements obtained for the different metrics in the proposed method are presented in Table 3, which was compared with the different supervised and unsupervised approaches.
Although the proposed method is an unsupervised technique, it gives better accuracy, as shown in Table 3, where the best value of each metric is marked in bold. The proposed method provides an accuracy of 95.4%, a dice coefficient of 94.5% and a Jaccard index of 93.2%, which are higher than those of the existing approaches. The sensitivity of 94.7% and specificity of 98.5% achieved by the proposed method are the second highest.
Figure 7 shows the lesions extracted from the ISIC 2016 dataset using the proposed method. The proposed method is capable of accurately extracting skin lesions from a wide range of dermoscopic images, as shown in Figure 7b. Figure 7c,d shows the ground truths available in the dataset and the lesion masks obtained by the proposed method; comparing them, it can be seen that the proposed method segregates the lesion regions accurately.
A bar plot is shown in Figure 8 for better analysis of the proposed method using the different metrics, such as accuracy, dice coefficient and Jaccard index. It shows the effective performance of the proposed method in the case of the ISIC 2016 dataset.
Further, the proposed method was tested on the ISIC 2017 dataset, considering a variety of images, e.g., images with hairs, ruler marks, low illumination in texture regions and irregularity in shape and structure. The metrics obtained by the proposed method on the ISIC 2017 dataset are compared with those of current supervised, unsupervised and deep learning approaches in Table 4.
The bold values indicate the best result for each performance parameter. The proposed method gives an accuracy of 97.8%, a dice coefficient of 93.2%, a Jaccard index of 87.1% and a specificity of 99.8%, which are the highest among the compared recent approaches. As shown in Table 4, the proposed method has a sensitivity of 96.8%, which is the third highest.
The lesion mask obtained from the proposed method using the ISIC 2017 dataset is illustrated in Figure 9. The extracted lesions obtained from ISIC 2017 are shown in Figure 9b. Figure 9c,d represents the ground truth from the dataset and the lesion masks obtained from the proposed method.
Figure 10 represents the performance analysis of Table 4 as a bar plot by considering the values of accuracy, dice coefficient and Jaccard index, which proved the superiority of the proposed method as compared to the existing methods in the ISIC 2017 dataset.
Finally, the proposed method was tested on the ISIC 2018 dataset with a variety of images. The evaluated performance measures are compared with those of existing supervised, unsupervised and deep learning approaches in Table 5; the best values are marked in bold for each performance metric.
An accuracy of 96.9%, a dice coefficient of 93.0%, a Jaccard index of 87.0% and a specificity of 98.6% were obtained, which are the highest among the compared state-of-the-art methods, while the sensitivity of the proposed method is 95.8%.
The skin lesions extracted by the proposed method from the ISIC 2018 dataset are shown in Figure 11, which demonstrates how well the proposed method works in the presence of undesired artifacts. Some of the dermoscopic images in the datasets have low illumination in the lesion regions, making it very difficult to segregate the lesion regions from the healthy skin. The proposed method uses a texture enhancement technique that enhances the lesion regions in the dermoscopic images and thereby helps to segregate them accurately from the healthy regions. Figure 11b shows the lesions extracted from the ISIC 2018 dataset using the proposed method, and the corresponding lesion masks are shown in Figure 11d.
The performance of the proposed method on the ISIC 2018 dataset, in terms of accuracy, dice coefficient and Jaccard index, is shown in Figure 12. From the bar plot in Figure 12, it can be seen that the proposed method performs better than the other methods on the ISIC 2018 dataset images.

7. Conclusions

This paper presents an unsupervised method for the extraction of lesions from dermoscopic images using fast fuzzy c-means (FCM) clustering based on MMLVR-WT. The proposed method uses MMLVR-WT to generate superpixels of the images and computes the histogram of the superpixel images to achieve fast FCM. The method was tested on three publicly available datasets, i.e., ISIC 2016, ISIC 2017 and ISIC 2018, considering a wide variety of images. Although the proposed method is an unsupervised approach, it extracts the lesions accurately, giving an overall accuracy of 96.7%, dice coefficient of 93.56%, Jaccard index of 89.1%, sensitivity of 95.76% and specificity of 98.96%. Analysis of these measures shows that the overall accuracy, sensitivity and specificity are better than the overall dice coefficient and Jaccard index; this is due to the inclusion of the most challenging images from the different datasets, with very low resolution, hairs, gels, ruler marks, etc. Nevertheless, the dice coefficient and Jaccard index obtained by the proposed method are still better than those of the individual existing state-of-the-art approaches, including deep learning methods. Thus, although unsupervised, the proposed method is comparable with supervised approaches such as deep learning, and there is scope for further improvement in lesion detection accuracy through integration with deep neural networks.

Author Contributions

Conceptualization: R.R. and P.P.; methodology: P.P. and R.R.; validation: P.P. and Y.A.; formal analysis: S.A. and P.P.; investigation: R.R. and P.P.; resources: P.P.; data curation: R.R.; writing—original draft preparation: R.R.; writing—review and editing: P.P. and Y.A.; visualization: O.I.K.; supervision: P.P. and Y.A.; project administration: P.P., Y.A. and S.A.; funding acquisition: S.A., Y.A. and O.I.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research is funded by Taif University, TURSP-2020/313.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The study did not report any data.

Acknowledgments

We deeply acknowledge Taif University for supporting this study through Taif University Researchers Supporting Project Number (TURSP-2020/313), Taif University, Taif, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zortea, M.; Flores, E.; Scharcanski, J. A simple weighted thresholding method for the segmentation of pigmented skin lesions in macroscopic images. Pattern Recognit. 2017, 64, 92–104.
2. Lee, T.; Ng, V.; Gallagher, R.; Coldman, A.; McLean, D. DullRazor®: A software approach to hair removal from images. Comput. Biol. Med. 1997, 27, 533–543.
3. Li, G.; Liu, F.; Sharma, A.; Khalaf, O.I.; Alotaibi, Y.; Alsufyani, A.; Alghamdi, S. Research on the Natural Language Recognition Method Based on Cluster Analysis Using Neural Network. Math. Probl. Eng. 2021, 2021, 1–13.
4. Dalal, S.; Khalaf, O.I. Prediction of Occupation Stress by Implementing Convolutional Neural Network Techniques. J. Cases Inf. Technol. 2021, 23, 27–42.
5. Tavera Romero, C.A.; Ortiz, J.H.; Khalaf, O.I.; Ríos Prado, A. Business Intelligence: Business Evolution after Industry 4.0. Sustainability 2021, 13, 10026.
6. Khalaf, O.I.; Romero, C.A.T.; Azhagu Jaisudhan Pazhani, A.; Vinuja, G. VLSI Implementation of a High-Performance Nonlinear Image Scaling Algorithm. J. Healthc. Eng. 2021, 2021, 1–10.
7. Javed Awan, M.; Shafry Mohd Rahim, M.; Nobanee, H.; Yasin, A.; Ibrahim Khalaf, O.; Ishfaq, U. A Big Data Approach to Black Friday Sales. Intell. Autom. Soft Comput. 2021, 27, 785–797.
8. Zheng, X.; Ping, F.; Pu, Y.; Wang, Y.; Montenegro-Marin, C.E.; Khalaf, O.I. Recognize and regulate the importance of work-place emotions based on organizational adaptive emotion control. Aggress. Violent Behav. 2021, 101557.
9. Li, X.; Yu, L.; Fu, C.W.; Heng, P.A. Deeply supervised rotation equivariant network for lesion segmentation in dermoscopy images. In Lecture Notes in Computer Science; Springer: Berlin, Germany, 2018; Volume 11041, pp. 235–243.
10. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin, Germany, 2015; pp. 234–241.
11. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
12. Bagheri, F.; Tarokh, M.J.; Ziaratban, M. Skin lesion segmentation from dermoscopic images by using Mask R-CNN, Retina-Deeplab, and graph-based methods. Biomed. Signal Process. Control 2021, 67, 102533.
13. Zhang, L.; Yang, G.; Ye, X. Automatic skin lesion segmentation by coupling deep fully convolutional networks and shallow network with textons. J. Med. Imaging 2019, 6, 1.
14. Ünver, H.M.; Ayan, E. Skin Lesion Segmentation in Dermoscopic Images with Combination of YOLO and GrabCut Algorithm. Diagnostics 2019, 9, 72.
15. Nida, N.; Irtaza, A.; Javed, A.; Yousaf, M.H.; Mahmood, M.T. Melanoma lesion detection and segmentation using deep region based convolutional neural network and fuzzy C-means clustering. Int. J. Med. Inform. 2019, 124, 37–48.
16. Banerjee, S.; Singh, S.K.; Chakraborty, A.; Das, A.; Bag, R. Melanoma Diagnosis Using Deep Learning and Fuzzy Logic. Diagnostics 2020, 10, 577.
17. Zhou, Y.M.; Jiang, S.Y.; Yin, M.L. A region-based image segmentation method with mean-shift clustering algorithm. In Proceedings of the 2008 Fifth International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), Jinan, China, 18–20 October 2008; Volume 2, pp. 366–370.
18. Fa, F.; Peixoto, S.A.; Rebouc, P.P. Automatic skin lesions segmentation based on a new morphological approach via geodesic active contour. Cogn. Syst. Res. 2019, 55, 44–59.
19. Pereira, P.M.M.; Fonseca-Pinto, R.; Paiva, R.P.; Assuncao, P.A.A.; Tavora, L.M.N.; Thomaz, L.A.; Faria, S.M.M. Dermoscopic skin lesion image segmentation based on Local Binary Pattern Clustering: Comparative study. Biomed. Signal Process. Control 2020, 59, 101924.
20. Kumar, M.; Alshehri, M.; AlGhamdi, R.; Sharma, P.; Deep, V. A DE-ANN Inspired Skin Cancer Detection Approach Using Fuzzy C-Means Clustering. Mob. Netw. Appl. 2020, 25, 1319–1329.
21. Lee, H.; Chen, Y.-P.P. Skin cancer extraction with optimum fuzzy thresholding technique. Appl. Intell. 2014, 40, 415–426.
22. Cai, W.; Chen, S.; Zhang, D. Fast and robust fuzzy c-means clustering algorithms incorporating local information for image segmentation. Pattern Recognit. 2007, 40, 825–838.
23. Liu, G.; Zhang, Y.; Wang, A. Incorporating Adaptive Local Information Into Fuzzy Clustering for Image Segmentation. IEEE Trans. Image Process. 2015, 24, 3990–4000.
24. Guo, Y.; Ashour, A.; Smarandache, F. A Novel Skin Lesion Detection Approach Using Neutrosophic Clustering and Adaptive Region Growing in Dermoscopy Images. Symmetry 2018, 10, 119.
25. Lei, T.; Jia, X.; Zhang, Y.; Liu, S.; Meng, H.; Nandi, A.K. Superpixel-Based Fast Fuzzy C-Means Clustering for Color Image Segmentation. IEEE Trans. Fuzzy Syst. 2019, 27, 1753–1766.
26. Ali, A.-R.; Li, J.; Yang, G.; O’Shea, S.J. A machine learning approach to automatic detection of irregularity in skin lesion border using dermoscopic images. PeerJ Comput. Sci. 2020, 6, e268.
27. Afza, F.; Sharif, M.; Mittal, M.; Khan, M.A.; Jude Hemanth, D. A hierarchical three-step superpixels and deep learning framework for skin lesion classification. Methods 2021.
28. Liu, L.; Tsui, Y.Y.; Mandal, M. Skin Lesion Segmentation Using Deep Learning with Auxiliary Task. J. Imaging 2021, 7, 67.
29. Ali, A.-R.; Li, J.; O’Shea, S.J. Towards the automatic detection of skin lesion shape asymmetry, color variegation and diameter in dermoscopic images. PLoS ONE 2020, 15, e0234352.
30. Salih, O.; Viriri, S. Skin Lesion Segmentation Using Stochastic Region-Merging and Pixel-Based Markov Random Field. Symmetry 2020, 12, 1224.
31. Xie, F.; Yang, J.; Liu, J.; Jiang, Z.; Zheng, Y.; Wang, Y. Skin lesion segmentation using high-resolution convolutional neural network. Comput. Methods Programs Biomed. 2020, 186, 105241.
32. das Chagas, J.V.S.; Ivo, R.F.; Guimarães, M.T.; de Rodrigues, D.A.; de Rebouças, S.E.; Filho, P.P. Fast fully automatic skin lesions segmentation probabilistic with Parzen window. Comput. Med. Imaging Graph. 2020, 85, 101774.
33. Arora, R.; Raman, B.; Nayyar, K.; Awasthi, R. Automated skin lesion segmentation using attention-based deep convolutional neural network. Biomed. Signal Process. Control 2021, 65, 102358.
34. Abbas, Q.; Celebi, M.E.; García, I.F. Hair removal methods: A comparative study for dermoscopy images. Biomed. Signal Process. Control 2011, 6, 395–404.
35. Singh, N.; Kaur, L.; Singh, K. Histogram equalization techniques for enhancement of low radiance retinal images for early detection of diabetic retinopathy. Eng. Sci. Technol. Int. J. 2019, 22, 736–745.
36. Singh, K.; Vishwakarma, D.K.; Walia, G.S.; Kapoor, R. Contrast enhancement via texture region based histogram equalization. J. Mod. Opt. 2016, 63, 1444–1450.
37. Nandan, D.; Kanungo, J.; Mahajan, A. An error-efficient Gaussian filter for image processing by using the expanded operand decomposition logarithm multiplication. J. Ambient Intell. Humaniz. Comput. 2018, 4, 38.
38. Suryanarayana, G.; Chandran, K.; Khalaf, O.I.; Alotaibi, Y.; Alsufyani, A.; Alghamdi, S.A. Accurate Magnetic Resonance Image Super-Resolution Using Deep Networks and Gaussian Filtering in the Stationary Wavelet Domain. IEEE Access 2021, 9, 71406–71417.
39. Nallaperumal, K.; Krishnaveni, K.; Varghese, J.; Saudia, S.; Annam, S.; Kumar, P. An efficient Multiscale Morphological Watershed Segmentation using Gradient and Marker Extraction. In Proceedings of the 2006 Annual IEEE India Conference, New Delhi, India, 10–12 April 2006; pp. 1–6.
40. Garcia-Arroyo, J.L.; Garcia-Zapirain, B. Segmentation of skin lesions in dermoscopy images using fuzzy classification of pixels and histogram thresholding. Comput. Methods Programs Biomed. 2019, 168, 11–19.
41. Moradi, N.; Mahdavi-Amiri, N. Kernel sparse representation based model for skin lesions segmentation and classification. Comput. Methods Programs Biomed. 2019, 182, 105038.
42. Yu, L.; Chen, H.; Dou, Q.; Qin, J.; Heng, P.-A. Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks. IEEE Trans. Med. Imaging 2017, 36, 994–1004.
43. Bozorgtabar, B.; Sedai, S.; Roy, P.K.; Garnavi, R. Skin lesion segmentation using deep convolution networks guided by local unsupervised learning. IBM J. Res. Dev. 2017, 61, 6:1–6:8.
44. Tajeddin, N.Z.; Asl, B.M. A general algorithm for automatic lesion segmentation in dermoscopy images. In Proceedings of the 2016 23rd Iranian Conference on Biomedical Engineering and 2016 1st International Iranian Conference on Biomedical Engineering (ICBME), Tehran, Iran, 24–25 November 2016; pp. 134–139.
45. Soudani, A.; Barhoumi, W. An image-based segmentation recommender using crowdsourcing and transfer learning for skin lesion extraction. Expert Syst. Appl. 2019, 118, 400–410.
46. Adegun, A.A.; Viriri, S. Deep Learning-Based System for Automatic Melanoma Detection. IEEE Access 2020, 8, 7160–7172.
47. Hasan, M.K.; Dahal, L.; Samarakoon, P.N.; Tushar, F.I.; Martí, R. DSNet: Automatic dermoscopic skin lesion segmentation. Comput. Biol. Med. 2020, 120, 103738.
48. Xie, Y.; Zhang, J.; Xia, Y.; Shen, C. A Mutual Bootstrapping Model for Automated Skin Lesion Segmentation and Classification. IEEE Trans. Med. Imaging 2020, 39, 2482–2493.
49. Kaymak, R.; Kaymak, C.; Ucar, A. Skin lesion segmentation using fully convolutional networks: A comparative experimental study. Expert Syst. Appl. 2020, 161, 113742.
50. Li, Y.; Shen, L. Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network. Sensors 2018, 18, 556.
51. Pezhman Pour, M.; Seker, H. Transform domain representation-driven convolutional neural networks for skin lesion segmentation. Expert Syst. Appl. 2020, 144, 113129.
52. Öztürk, Ş.; Özkaya, U. Skin Lesion Segmentation with Improved Convolutional Neural Network. J. Digit. Imaging 2020, 33, 958–970.
53. Zafar, K.; Gilani, S.O.; Waris, A.; Ahmed, A.; Jamil, M.; Khan, M.N.; Sohail Kashif, A. Skin Lesion Segmentation from Dermoscopic Images Using Convolutional Neural Network. Sensors 2020, 20, 1601.
54. Azad, R.; Asadi-Aghbolaghi, M.; Fathy, M.; Escalera, S. Bi-Directional ConvLSTM U-Net with Densley Connected Convolutions. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Korea, 27–28 October 2019.
55. Lei, B.; Xia, Z.; Jiang, F.; Jiang, X.; Ge, Z.; Xu, Y.; Qin, J.; Chen, S.; Wang, T.; Wang, S. Skin lesion segmentation via generative adversarial networks with dual discriminators. Med. Image Anal. 2020, 64, 101716.
56. Ali, R.; Hardie, R.C.; Narayanan Narayanan, B.; De Silva, S. Deep Learning Ensemble Methods for Skin Lesion Analysis towards Melanoma Detection. In Proceedings of the 2019 IEEE National Aerospace and Electronics Conference (NAECON), Dayton, OH, USA, 15–19 July 2019; pp. 311–316.
57. Jin, Q.; Cui, H.; Sun, C.; Meng, Z.; Su, R. Cascade knowledge diffusion network for skin lesion diagnosis and segmentation. Appl. Soft Comput. 2021, 99, 106881.
Figure 1. (a) Original image; (b) extracted lesion by AD method [29]; (c) extracted lesion by ADR method [26]; (d) extracted lesion by AT method [28]; (e) extracted lesion by HTSDL method [27]; (f) extracted lesion by MRF method [30]; (g) extracted lesion by the proposed method; (h) ground truth; (i) lesion mask of AD method [29]; (j) lesion mask of ADR method [26]; (k) lesion mask of AT method [28]; (l) lesion mask of HTSDL method [27]; (m) lesion mask of MRF method [30]; (n) lesion mask of the proposed method.
Figure 2. Architecture of the proposed method.
Figure 3. (a) Input image; (b) hair mask from (a); (c) after v-channel inpainting; (d) hair-removed image.
Figure 4. (a) Hair-removed image; (b) texture-enhanced image using DOTHE algorithm.
Figure 5. (a) Boundary region extraction using local variance; (b) MMLVR-WT output from (a); (c) lesion extraction output obtained after fast FCM.
Figure 6. (a) Binary image; (b) lesion mask after using biggest blob.
Figure 7. Results of ISIC 2016: (a) input image from dataset; (b) extracted lesions from (a); (c) available ground truth from dataset; (d) lesion mask obtained using proposed method.
Figure 8. Performance of proposed method as compared to that of existing methods for ISIC 2016.
Figure 9. Results of ISIC 2017: (a) input image from dataset; (b) extracted lesions from (a); (c) available ground truth from dataset; (d) lesion mask obtained using proposed method.
Figure 10. Performance of proposed method compared to that of existing methods for ISIC 2017.
Figure 11. Results of ISIC 2018: (a) input image from dataset; (b) extracted lesions from (a); (c) available ground truth from dataset; (d) lesion mask obtained using proposed method.
Figure 12. Performance of proposed method compared to that of existing methods for ISIC 2018.
Table 1. Performance measures when SE = 3, local variance window size = 3 × 3 and for different kernel sizes and sigma values.
| Gaussian Filter Kernel Size | Sigma | Acc | DC | JI | SN | SP |
|---|---|---|---|---|---|---|
| 3 × 3 | 1 | **96.41** | **95.61** | **91.95** | **99.63** | **94.76** |
| | 2 | 95.96 | 95.08 | 90.62 | 99.12 | 93.91 |
| | 3 | 96.10 | 95.24 | 90.91 | 99.11 | 94.15 |
| | 4 | 96.12 | 95.26 | 90.85 | 99.11 | 94.18 |
| | 5 | 96.02 | 95.15 | 90.75 | 99.09 | 94.03 |
| | 6 | 96.03 | 95.15 | 90.76 | 99.09 | 94.04 |
| | 7 | 96.03 | 95.15 | 90.76 | 99.09 | 94.05 |
| | 8 | 96.03 | 95.15 | 90.76 | 99.09 | 94.04 |
| | 9 | 96.03 | 95.15 | 90.76 | 99.09 | 94.04 |
| 5 × 5 | 1 | 96.02 | 95.16 | 90.77 | 99.30 | 93.90 |
| | 2 | 96.11 | 95.26 | 90.75 | 99.22 | 94.10 |
| | 3 | 96.02 | 95.21 | 91.11 | 99.03 | 94.16 |
| | 4 | 95.99 | 95.11 | 90.68 | 99.28 | 93.85 |
| | 5 | 96.16 | 95.21 | 91.05 | 99.27 | 94.14 |
| | 6 | 96.10 | 95.24 | 90.91 | 99.16 | 94.11 |
| | 7 | 96.16 | 95.21 | 91.05 | 99.15 | 94.02 |
| | 8 | 96.12 | 95.26 | 90.95 | 99.20 | 94.12 |
| | 9 | 96.09 | 95.23 | 90.89 | 99.20 | 94.07 |
| 7 × 7 | 1 | 95.98 | 95.11 | 90.67 | 99.28 | 93.14 |
| | 2 | 93.15 | 95.29 | 91.00 | 98.90 | 94.04 |
| | 3 | 96.09 | 95.22 | 90.89 | 99.07 | 94.02 |
| | 4 | 96.17 | 95.20 | 91.03 | 98.76 | 94.39 |
| | 5 | 96.16 | 95.29 | 91.01 | 98.77 | 94.17 |
| | 6 | 96.02 | 95.06 | 91.13 | 98.93 | 94.05 |
| | 7 | 96.17 | 95.12 | 91.06 | 98.97 | 94.06 |
| | 8 | 96.13 | 95.10 | 91.11 | 98.97 | 94.18 |
| | 9 | 96.12 | 95.03 | 91.26 | 98.95 | 94.13 |
| 9 × 9 | 1 | 95.98 | 95.11 | 90.67 | 99.28 | 93.84 |
| | 2 | 96.02 | 95.22 | 91.24 | 99.15 | 94.07 |
| | 3 | 96.15 | 95.29 | 91.01 | 98.96 | 94.13 |
| | 4 | 96.16 | 95.29 | 91.01 | 98.84 | 94.11 |
| | 5 | 96.02 | 95.07 | 91.16 | 98.91 | 94.08 |
| | 6 | 96.15 | 95.36 | 91.33 | 98.83 | 94.04 |
| | 7 | 96.17 | 95.02 | 91.25 | 98.90 | 94.16 |
| | 8 | 96.11 | 95.40 | 91.21 | 98.88 | 94.05 |
| | 9 | 96.02 | 95.40 | 91.21 | 98.88 | 94.14 |
Table 2. Performance measures when SE = 2, local variance window size = 3 × 3 and for different kernel sizes and sigma values.
| Gaussian Filter Kernel Size | Sigma | Acc | DC | JI | SN | SP |
|---|---|---|---|---|---|---|
| 3 × 3 | 1 | 96.18 | 95.33 | 91.07 | 99.17 | 94.23 |
| | 2 | 96.21 | 95.37 | 91.15 | 99.31 | 94.19 |
| | 3 | 96.18 | 95.34 | 91.10 | 99.31 | 94.15 |
| | 4 | 96.16 | 95.32 | 91.05 | 99.31 | 94.12 |
| | 5 | 96.16 | 95.32 | 91.05 | 99.31 | 94.12 |
| | 6 | 96.16 | 95.31 | 91.05 | 99.31 | 94.11 |
| | 7 | 96.18 | 95.34 | 91.10 | 99.31 | 94.15 |
| | 8 | 96.18 | 95.34 | 91.09 | 99.31 | 94.15 |
| | 9 | 96.18 | 95.34 | 91.09 | 99.31 | 94.15 |
| 5 × 5 | 1 | 96.21 | 95.49 | 91.37 | 99.27 | 94.39 |
| | 2 | 96.24 | 95.41 | 91.21 | 99.19 | 94.33 |
| | 3 | 96.24 | 95.41 | 91.21 | 99.21 | 94.32 |
| | 4 | 96.29 | 95.46 | 91.32 | 99.16 | 94.43 |
| | 5 | 96.22 | 95.37 | 91.16 | 99.18 | 94.29 |
| | 6 | 96.26 | 95.43 | 91.26 | 99.19 | 94.36 |
| | 7 | 96.25 | 95.42 | 91.24 | 99.19 | 94.35 |
| | 8 | 96.28 | 95.45 | 91.29 | 99.19 | 94.39 |
| | 9 | 96.25 | 95.41 | 91.22 | 99.09 | 94.40 |
| 7 × 7 | 1 | 96.30 | 95.48 | 91.35 | 99.22 | 94.41 |
| | 2 | 96.26 | 95.42 | 91.25 | 99.15 | 94.38 |
| | 3 | 96.34 | 95.53 | 91.43 | 99.19 | 94.50 |
| | 4 | 96.34 | 95.52 | 91.42 | 99.22 | 94.47 |
| | 5 | 96.24 | 95.41 | 91.22 | 99.20 | 94.33 |
| | 6 | 96.21 | 95.37 | 91.15 | 99.15 | 94.31 |
| | 7 | 96.23 | 95.39 | 91.19 | 99.15 | 94.34 |
| | 8 | 96.23 | 95.40 | 91.20 | 99.16 | 94.34 |
| | 9 | 96.20 | 95.36 | 91.13 | 99.19 | 94.26 |
| 9 × 9 | 1 | 96.04 | 95.37 | 90.21 | 99.18 | 93.94 |
| | 2 | 96.19 | 95.13 | 91.12 | 99.05 | 93.07 |
| | 3 | 96.22 | 95.42 | 91.06 | 98.96 | 94.03 |
| | 4 | 96.16 | 95.25 | 90.26 | 98.94 | 94.11 |
| | 5 | 96.04 | 95.41 | 91.04 | 98.91 | 93.08 |
| | 6 | 96.29 | 95.26 | 91.30 | 98.73 | 94.04 |
| | 7 | 96.07 | 95.02 | 91.21 | 98.93 | 94.16 |
| | 8 | 96.11 | 95.30 | 91.07 | 98.38 | 94.25 |
| | 9 | 96.12 | 95.40 | 91.21 | 98.88 | 94.24 |
Table 3. Performance of proposed method on the ISIC 2016 dataset.
| Method | Acc | DC | JI | SN | SP |
|---|---|---|---|---|---|
| Nida et al. [15] | 94.2 | 94.0 | 93.0 | **95.0** | 94.0 |
| Garcia-Arroyo and Garcia-Zapirain [40] | 93.4 | 86.9 | 79.1 | 87.0 | 97.8 |
| Moradi and Mahdavi-Amiri [41] | 93.0 | 91.2 | 83.6 | 92.1 | 91.5 |
| Yu et al. [42] | 94.9 | 89.7 | 82.9 | 91.1 | 95.7 |
| Bozorgtabar et al. [43] | 92.3 | 89.2 | 80.6 | -- | -- |
| Tajeddin and Asl [44] | 94.6 | 88.8 | 81.0 | 83.2 | **98.7** |
| Xie et al. [31] | 93.8 | 91.8 | 85.8 | 87.0 | 96.4 |
| Proposed method | **95.4** | **94.5** | **93.2** | 94.7 | 98.5 |
The bold values indicate the best result.
Table 4. Performance of proposed method on the ISIC 2017 dataset.
| Method | Acc | DC | JI | SN | SP |
|---|---|---|---|---|---|
| Ünver and Ayan [14] | 93.3 | 84.2 | 74.8 | 90.8 | 92.6 |
| Soudani and Barhoumi [45] | 94.9 | 88.1 | 78.9 | 85.8 | 95.6 |
| Adegun and Viriri [46] | 95.0 | 92.0 | -- | 97.0 | 96.0 |
| Banerjee et al. [16] | 97.3 | 93.0 | 86.9 | 91.4 | 98.7 |
| Hasan et al. [47] | 95.3 | -- | -- | 87.5 | 95.5 |
| Xie et al. [48] | 94.7 | 87.8 | 80.4 | 87.4 | 96.8 |
| Kaymak et al. [49] | 93.9 | 84.1 | 72.5 | -- | -- |
| Guo et al. [24] | 95.3 | 90.4 | 83.2 | **97.5** | 88.8 |
| Li and Shen [50] | 95.0 | 83.9 | 75.3 | 85.5 | 97.4 |
| Pezhman Pour and Seker [51] | 94.5 | 87.1 | 78.2 | 88.3 | 98.1 |
| Öztürk and Özkaya [52] | 95.3 | 88.6 | 78.3 | 85.4 | 98.0 |
| Bagheri et al. [12] | 94.1 | 87.4 | 80.0 | 88.3 | 96.5 |
| Liu et al. [28] | 94.32 | -- | 79.46 | 88.76 | -- |
| Zafar et al. [53] | -- | 85.8 | 77.2 | -- | -- |
| Zhang et al. [13] | 92.7 | 81.8 | 72.9 | 83.7 | 96.4 |
| Chagas et al. [32] | 95.7 | 89.3 | 82.5 | 84.7 | 99.3 |
| Proposed method | **97.8** | **93.2** | **87.1** | 96.8 | **99.8** |
The bold values indicate the best result.
Table 5. Performance of proposed method on the ISIC 2018 dataset.
| Method | Acc | DC | JI | SN | SP |
|---|---|---|---|---|---|
| Azad et al. [54] | 93.7 | -- | -- | 78.5 | 98.2 |
| Lei et al. [55] | 92.9 | 88.5 | 82.4 | 95.3 | 91.1 |
| Ali et al. [56] | 93.6 | 88.7 | 81.5 | -- | -- |
| Arora et al. [33] | 95.0 | 91.0 | 83.0 | 94.0 | 95.0 |
| Ali et al. [26] | 93.6 | -- | -- | **100** | 92.5 |
| Li and Shen [50] | 95.0 | 83.9 | 75.3 | 85.5 | 97.4 |
| Jin et al. [57] | 93.4 | 87.7 | 79.4 | 96.7 | 90.4 |
| Salih and Viriri [30] | 89.47 | 80.67 | 72.45 | 79.45 | 95.09 |
| Proposed method | **96.9** | **93.0** | **87.0** | 95.8 | **98.6** |
The bold values indicate the best result.
