Article

A Dermoscopic Inspired System for Localization and Malignancy Classification of Melanocytic Lesions

Sameena Pathan, Tanweer Ali, Shweta Vincent, Yashwanth Nanjappa, Rajiv Mohan David and Om Prakash Kumar
1 Department of Information and Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
2 Department of Electronics and Communication Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
3 Department of Mechatronics, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
* Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(9), 4243; https://doi.org/10.3390/app12094243
Submission received: 23 February 2022 / Revised: 6 April 2022 / Accepted: 13 April 2022 / Published: 22 April 2022
(This article belongs to the Special Issue Advances in Biomedical Image Processing and Analysis)

Abstract

This study aims at developing a clinically oriented automated diagnostic tool for distinguishing malignant melanocytic lesions from benign melanocytic nevi in diverse image databases. The accuracy of such systems is hampered by the presence of artifacts, smooth lesion boundaries, and subtle diagnostic features. The proposed framework therefore improves the accuracy of melanoma detection by incorporating the clinical aspects of dermoscopy. Two steps are adopted to achieve this objective. First, artifact removal and lesion localization are performed. Second, clinically significant features such as shape, color, texture, and pigment network are detected. The features are then reduced by testing their individual statistical significance (i.e., hypothesis testing), and the reduced feature vectors are classified using an SVM classifier. Domain-specific features, rather than abstract image features, are used in this design, allowing the system to complement the domain knowledge of an expert. The proposed approach is implemented on a multi-source dataset (PH2 + ISBI 2016 and 2017) of 515 annotated images, resulting in sensitivity, specificity, and accuracy of 83.8%, 88.3%, and 86%, respectively. The experimental results are promising and can be applied to detect the asymmetry, pigment network, colors, and texture of lesions.

1. Introduction

1.1. Motivation

Melanoma is one of the worst forms of skin cancer, resulting in increased morbidity and a medical expenditure of nearly $3.3 billion [1]. Although simple observation aids in detecting changes in melanocytic nevi, the disease can spread to other parts of the body by metastasizing. The spread of the disease is mainly determined by skin tumor thickness: melanoma prognosis is inversely related to tumor thickness, so the greater the thickness, the lower the survival rate. However, measuring the spread and thickness of the tumor requires a biopsy, which is a painful experience for the patient. Careful observation of the melanoma characteristics can also be performed, since the lesion is visible on the skin; even so, the lesion remains liable to metastasize and spread to the lymph nodes, increasing the level of malignancy. According to the Fitzpatrick skin classification, there are six skin types [2]; types I and II are more prone to melanoma than the other skin types. Melanocytes produce a pigment termed melanin, which gives the skin its natural color. There are two kinds of melanin, eumelanin and pheomelanin, present in the dark-skinned and lighter-skinned populations, respectively. Eumelanin is insoluble, and thus the skin-darkening effects it produces last relatively longer than the skin-reddening effects produced by pheomelanin. A dermoscope aids dermatologists in the preliminary analysis of melanocytic skin lesions [2]. Owing to a dearth of experience and differences in visual perception, the prognosis of melanoma remains subjective in spite of the availability of well-established medical methods. This fosters the need for an objective evaluation tool. Computer Aided Diagnostic (CAD) tools were introduced for dermoscopic images to provide a quantitative and objective assessment of the skin lesion and to help clinicians in diagnostic and prognostic undertakings. Due to inter- and intra-observer variability, the determination of melanoma is inherently subjective; a CAD tool thus eliminates the subjectivity in the diagnosis and prognosis of melanoma and aids in the early detection of melanoma in situ, thereby improving the accuracy of detection and reducing the mortality rate. This paper describes a clinical framework that can reliably identify the lesion properties and provide a diagnosis. The system incorporates the knowledge of an experienced dermatologist to correlate the extracted features with their histopathological relevance, and it further incorporates certain statistical features to achieve promising results.

1.2. Related Work

The literature reports numerous studies on designing CAD systems for the diagnosis of melanoma. Based on the features used for the prediction of the lesion, approaches for the diagnosis of melanoma can be broadly classified into three types: (i) methods inspired by the dermoscopic criteria (ABCD), which take into account the global and local lesion features [3,4,5]; (ii) methods based on the characteristic properties of images [6,7,8]; and (iii) combinations of the aforementioned methods. These methods can be used either to develop a cumulative score [9] or to train models that make predictions. The approach proposed in this paper belongs to the third category. The literature also indicates a few approaches that combine the first two categories [10,11]. Celebi et al. [10] used color, texture, and the presence of blue-white veil for the classification of skin lesions; however, the lesions were segmented manually to separate the issue of feature extraction from automated border detection. The smooth boundary between the lesion and the surrounding skin poses difficulties for automated border detection. Abuzaghleh et al. [11] classified the lesions using abstract image features and two dermoscopic features, but the method is computationally complex due to the extraction of large feature sets. The method proposed in [5] concentrates mainly on the clinical aspects of color in dermoscopic images and on texture features, while overlooking the important role of shape features. Several studies also report the use of complex deep learning architectures for segmentation and classification [12,13]; however, these methods need to tackle the vanishing gradient and degradation problems.

1.3. Problem Statement

Based on the literature, a clinical framework for the diagnosis of melanoma should encompass the requirements mentioned below.
  • Provide automated localization of the lesions.
  • Extract features that hold clinical significance.
  • Achieve a balance between sensitivity and specificity in distinguishing the lesion classes.
The aforementioned issues are addressed in this work.

1.4. Contributions

The proposed system constitutes a clinically inspired framework for the diagnosis of melanocytic lesions, designed to be informative from the perspective of a dermatologist. In contrast to the methods proposed in the literature [3,4,5,6,7,8,9,10,11,12,13,14], the proposed system considers lesion-specific properties in order to distinguish benign from malignant melanocytic lesions. Additionally, rather than extracting abstract image features, algorithms are developed for the extraction of features specific to melanocytic lesions.

1.5. Summary

The manuscript is organized as follows: Section 2 provides the technical details of the methodologies developed for the extraction of hair shafts, segmentation of lesion masks, dermoscopic feature extraction, and classification. Section 3 reports the quantitative results obtained. The manuscript concludes with the Discussion and Conclusions sections, which provide key observations on the methodologies developed and future research perspectives.

2. Methods

This section describes the proposed framework, an overview of which is illustrated in Figure 1.
Initially, preprocessing of the dermoscopic images is performed to eliminate artifacts, viz., dark corners, ruler markings, hair, and dark frames. To eliminate dark corners, circular masks are created with the radius and centroid co-ordinates computed as given in (1).
$$\mathrm{Radius} = \frac{\max(r, c)}{2}, \qquad \mathrm{Centroid} = \left(\frac{r}{2}, \frac{c}{2}\right) \tag{1}$$
where $r$ and $c$ correspond to the rows and columns of the dermoscopic image. Figure 2B illustrates the circular mask created for the corresponding dermoscopic image in Figure 2A.
These circular masks are multiplied with the initial contour prior to curve deformation to eliminate the dark corners.
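To make the mask construction in (1) concrete, the following minimal NumPy sketch builds the circular corner mask; the function name and the example image size are illustrative and not taken from the authors' MATLAB implementation.

```python
import numpy as np

def circular_corner_mask(rows, cols):
    """Binary mask per Eq. (1): Radius = max(r, c)/2, Centroid = (r/2, c/2).
    Pixels inside the circle are kept (1); the dark corners fall outside (0)."""
    radius = max(rows, cols) / 2.0
    cy, cx = rows / 2.0, cols / 2.0
    yy, xx = np.ogrid[:rows, :cols]
    return ((yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2).astype(np.uint8)

# Multiply this mask with the initial contour before curve deformation,
# e.g., for a 576 x 768 dermoscopic image:
mask = circular_corner_mask(576, 768)
```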

2.1. Detection and Removal of Hair

Numerous techniques have been proposed for the detection and exclusion of hair [11,15,16,17,18]. These techniques were designed under the assumption that the hair is much darker than the skin and lesion, and the properties of dermoscopic hair shafts were not considered. Owing to the localization of melanin in the upper and lower epidermis, most skin lesions are either brown or black in color. Therefore, relying on the color variation between hair and surrounding skin can be erroneous, leading to an overlap between the attributes of the lesion and the hair. Hence, a hair detection algorithm needs to include attributes specific to dermoscopic hair.
The success of geometric deformable models depends on the initial conditions and the evolution of the speed function. As the color of melanin depends on the extent of its localization in the skin, the attribute of color is vital in the creation of this framework [19,20,21,22]. Therefore, the segmentation approach adopted in this study considers the chroma component, as opposed to the RGB channels used in conventional systems. Figure 3 illustrates the results of dermoscopic hair detection: for the dermoscopic images in Figure 3A,B, the hair masks obtained are shown in Figure 3C,D, and Figure 3E,F show the corresponding images after hair inpainting.
Figure 4 illustrates the segmentation process; Figure 4D shows the segmented border obtained by the proposed approach together with the boundary of the ground truth.

2.2. Extraction and Classification of Features

2.2.1. Color Features

The role of color in dermoscopy is indispensable. The most important chromophore of the skin is melanin. Benign lesions exhibit one or two colors, whereas malignant lesions, being localized within the deeper structures of the skin, tend to exhibit three or more colors. To study the color properties of the lesions, six groups of features were computed, namely color asymmetry, color similarity index, color variance, color entropy, color correlation coefficients, and principal component projections (the last four constituting the statistical color features). The statistical color features are derived for two categories: (i) the region of interest (ROI), and (ii) the entire image comprising the ROI and the non-ROI (NROI). The color features are delineated as follows:
The color asymmetry and color similarity index draw their inspiration from the ABCD rule of dermoscopy. Color asymmetry is quantified by the difference between the opposite halves of the lesion along the x-axis ($C_{x1}$, $C_{x2}$) and the y-axis ($C_{y1}$, $C_{y2}$). The perceived color difference $\Delta E$ is calculated in the CIE L*a*b* color space. The four halves are divided as indicated in Figure 5, and correspondingly, four asymmetry indices are computed as given in (2).
$$C_{x1} = \Delta E_1 - \Delta E_3, \quad C_{x2} = \Delta E_2 - \Delta E_4, \quad C_{y1} = \Delta E_1 - \Delta E_2, \quad C_{y2} = \Delta E_3 - \Delta E_4 \tag{2}$$
The color similarity index indicates the presence of six suspicious colors in a lesion: light-brown (LB), dark-brown (DB), black (K), white (W), red (R), and blue-gray (BG). It is computed as the Euclidean distance between the lesion RGB values and each of the six suspicious colors; since the index depends on the color of the lesion pixels, the lesion masks are used in its computation. The color similarity index and the Euclidean distance are inversely related, as given in (3).
$$\text{Euclidean Distance} \propto \frac{1}{\text{Color Similarity}} \tag{3}$$
Algorithm 1 summarizes the steps used for calculating the color similarity score. The threshold $Th$ is determined from the RGB values of two opposite colors, black and white [9]. Further, a score of 1 is assigned if the suspicious color covers a sufficient fraction of the lesion area (20% in Algorithm 1).
Algorithm 1. Color Similarity Index Calculation
```
Input:  I(x, y) = (R(x, y), G(x, y), B(x, y))
Output: score
Lesion ROI: M(x, y) ⊆ I(x, y)
S = (R_S, G_S, B_S)                                  // suspicious color
score = 0
Th = 0.5 * ((R_w - R_k)^2 + (G_w - G_k)^2 + (B_w - B_k)^2)^0.5
C = 0
for each pixel (R_Mi, G_Mi, B_Mi) in M do
    D = ((R_Mi - R_S)^2 + (G_Mi - G_S)^2 + (B_Mi - B_S)^2)^0.5
    if D <= Th then
        C = C + 1
if C >= 0.2 * sum(M) then
    score = 1
return score
```
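A minimal Python translation of Algorithm 1 is sketched below, assuming an RGB image array and a binary lesion mask; the vectorized form replaces the per-pixel loop but follows the same threshold and 20% area rule, and the example blue-gray RGB triple is illustrative.

```python
import numpy as np

def color_similarity_score(image_rgb, lesion_mask, suspicious_rgb):
    """Sketch of Algorithm 1: score 1 if enough lesion pixels lie within
    Euclidean distance Th of the suspicious color, where Th is half the
    distance between the opposite colors white and black."""
    th = 0.5 * np.linalg.norm(np.array([255.0, 255.0, 255.0]))  # |white - black| / 2
    lesion_pixels = image_rgb[lesion_mask.astype(bool)].astype(float)  # (N, 3)
    d = np.linalg.norm(lesion_pixels - np.asarray(suspicious_rgb, float), axis=1)
    count = np.count_nonzero(d <= th)
    return 1 if count >= 0.2 * lesion_pixels.shape[0] else 0

# One score per suspicious color, e.g., an assumed blue-gray reference:
# score_bg = color_similarity_score(img, mask, (120, 135, 160))
```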
The color variance is computed from the red, green, blue, and grayscale values of both the lesion and the entire image, resulting in eight features (VR, VG, VB, and VK, plus their whole-image counterparts). The degree of randomness, quantified by the color entropy (E), is computed similarly for the red and blue values of the lesion and the entire image, leading to four features (ER, EB, ERI, and EBI). The correlation coefficients, signifying the direction and degree of closeness of the intra- and inter-channel linear relations between the RGB channels and the grayscale image, yield twelve features (CRG, CGB, CBR, CRK, CGK, and CBK, plus their whole-image counterparts). Along with these, the lesion RGB values are projected onto the three principal components (PC1, PC2, PC3). Therefore, a total of 37 color features were computed.
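The statistical color features lend themselves to a compact implementation. The sketch below computes the ROI variants (variance, entropy, and inter-channel correlation), assuming the usual luminance weights for the grayscale conversion; the whole-image variants follow by omitting the mask.

```python
import numpy as np

def roi_color_statistics(image_rgb, lesion_mask):
    """Sketch: per-channel variance (VR, VG, VB, VK), entropy of the red and
    blue channels (ER, EB), and pairwise correlations (CRG ... CBK)."""
    roi = image_rgb[lesion_mask.astype(bool)].astype(float)
    r, g, b = roi[:, 0], roi[:, 1], roi[:, 2]
    k = 0.299 * r + 0.587 * g + 0.114 * b  # assumed luminance conversion

    variances = [np.var(ch) for ch in (r, g, b, k)]

    def entropy(ch):
        hist, _ = np.histogram(ch, bins=256, range=(0, 256))
        p = hist[hist > 0] / hist.sum()
        return -np.sum(p * np.log2(p))

    entropies = [entropy(r), entropy(b)]
    pairs = [(r, g), (g, b), (b, r), (r, k), (g, k), (b, k)]
    correlations = [np.corrcoef(u, v)[0, 1] for u, v in pairs]
    return variances, entropies, correlations
```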

2.2.2. Texture Features

In the opinion of expert dermatologists, malignant melanocytic lesions are characterized by a coarse texture, inhomogeneous contrast, and irregular patterns. Since the texture features of Tamura et al. [23] are based on visual perception, a set of three such features, namely coarseness (T1), contrast (T2), and directionality (T3), was computed. A larger coarseness value indicates a greater degree of roughness. Coarseness is calculated as the average of the best window size that gives the maximum difference between non-overlapping neighborhoods in the horizontal and vertical directions. Directionality is computed from the gradients of neighboring pixels, as given in (4).
$$|\Delta G| = \frac{|\Delta_H| + |\Delta_V|}{2} \tag{4}$$
where $\Delta G$ is the edge strength, and $\Delta_H$ and $\Delta_V$ indicate the horizontal and vertical changes in direction. Further, the contrast is calculated from the statistical distribution of the pixel gray values.
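As an illustration of (4), the edge strength can be computed with simple difference operators; the sketch below uses Prewitt gradients as a stand-in, since the derivative kernel is not specified here.

```python
import numpy as np
from scipy import ndimage

def edge_strength(gray):
    """Sketch of Eq. (4): |dG| = (|dH| + |dV|) / 2, with dH and dV the
    horizontal and vertical gradients (Prewitt differences assumed)."""
    g = gray.astype(float)
    dh = ndimage.prewitt(g, axis=1)  # horizontal change
    dv = ndimage.prewitt(g, axis=0)  # vertical change
    return (np.abs(dh) + np.abs(dv)) / 2.0
```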

2.2.3. Shape Features

The shape symmetry index is computed using the lesion mask. Since the lesions are not aligned with the center of the image, the lesion centroid is first moved to the image centroid using the difference in centroid positions given in (5).
$$\Delta_{(x,y)} = \left\{(C_{Ix} - C_{Lx}),\ (C_{Iy} - C_{Ly})\right\} \tag{5}$$
where $C_{I(x,y)}$ and $C_{L(x,y)}$ indicate the image and lesion centroids, respectively. The image is divided into two halves with respect to the x-axis and the y-axis, as illustrated in Figure 6, to determine the asymmetry along each axis. The maximum possible asymmetry index of the lesion, $AI$, is given in (6).
$$AI = \max\left(A_x = x_1 \oplus x_2,\ A_y = y_1 \oplus y_2\right)/2 \tag{6}$$
where $x_1$ and $x_2$ are the two halves with respect to the x-axis, and $y_1$ and $y_2$ are the two halves with respect to the y-axis. Since asymmetry quantifies malignancy, the proposed method considers the maximum possible asymmetry to minimize the classification error.
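The centering and half-splitting steps of (5) and (6) can be sketched as follows; the XOR reading of the operator in (6) and the normalization by lesion area are assumptions made for illustration.

```python
import numpy as np

def asymmetry_index(lesion_mask):
    """Sketch of Eqs. (5)-(6): shift the lesion centroid to the image
    centre, then XOR each half with the mirror of the opposite half."""
    mask = lesion_mask.astype(bool)
    rows, cols = mask.shape
    ys, xs = np.nonzero(mask)
    dy = int(round(rows / 2.0 - ys.mean()))  # Eq. (5): centroid difference
    dx = int(round(cols / 2.0 - xs.mean()))
    mask = np.roll(np.roll(mask, dy, axis=0), dx, axis=1)  # cyclic shift

    top, bottom = mask[: rows // 2], np.flipud(mask[rows - rows // 2:])
    left, right = mask[:, : cols // 2], np.fliplr(mask[:, cols - cols // 2:])
    a_x = np.logical_xor(top, bottom).sum()
    a_y = np.logical_xor(left, right).sum()
    return max(a_x, a_y) / 2.0 / mask.sum()  # area normalisation assumed
```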
A multi-scale method termed the fractal dimension (FD) is used to quantify border irregularity. It is computed by dividing the image into small grids of size $r \times r$, as given in (7) [9].
$$\log\left(\frac{1}{N_r}\right) = fd \times \log r - \log \lambda \tag{7}$$
where $N_r$ gives the contour length and $\lambda$ indicates the scaling constant. The circularity of the lesion is measured using the compactness metric (8) [24].
$$CI = \frac{P_L^2}{4\pi A_L} \tag{8}$$
where $P_L$ indicates the perimeter of the lesion and $A_L$ indicates the area of the lesion.
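Both border descriptors reduce to a few lines given a binary mask. In the sketch below, scikit-image's region properties supply the perimeter and area for (8), while the box-counting grid sizes for (7) are illustrative choices.

```python
import numpy as np
from skimage import measure

def compactness(lesion_mask):
    """Eq. (8): CI = P_L^2 / (4 * pi * A_L)."""
    props = measure.regionprops(lesion_mask.astype(int))[0]
    return props.perimeter ** 2 / (4.0 * np.pi * props.area)

def fractal_dimension(contour_mask, sizes=(2, 4, 8, 16, 32, 64)):
    """Box-counting sketch of Eq. (7): fd is the slope of
    log N(r) against log(1/r)."""
    counts = []
    for r in sizes:
        h, w = contour_mask.shape
        grid = contour_mask[: h - h % r, : w - w % r].reshape(h // r, r, w // r, r)
        counts.append(np.count_nonzero(grid.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```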

2.2.4. Detection of the Pigment Network

The pigment network is a honeycomb-like structure characterized by linear strokes and directional shapes. Histopathologically, the presence of a pigment network indicates melanin in the keratinocytes and melanocytes at the dermo-epidermal junction [25]. Additionally, the lines of the pigment network have diverse orientations. Thus, the proposed approach for pigment network detection is similar to the hair detection method of Section 2.1, with a few additional steps and a change in the Gabor parameter $f$. The green channel is used for processing due to its relatively greater contrast. After median filtering, a second-order derivative Laplacian operator ($3 \times 3$ mask) is used to enhance the finer details in the image, and the enhanced image is convolved with the 2D Gabor filter. The empirically determined values of $\sigma_x$ and $\sigma_y$ are the same; however, the thickness parameter $t$ is set to 3.3, since the lines of the pigment network are considerably thinner than the hair shafts.
For post-processing, Adaptive Histogram Equalization (AHE) is followed by threshold determination to extract the pigment network efficiently. The threshold is computed by fitting a fourth-degree polynomial, as given in (9), to the contrast-enhanced image.
$$n = mc^4 + ac^3 + bc^2 + dc + e \tag{9}$$
where $m$, $a$, $b$, $d$, and $e$ are the coefficients of the curve fitted over $c$ distinct co-ordinates. The pigment network mask serves to calculate five distinct features ($f_1$, $f_2$, $f_3$, $f_4$, $f_5$), as given in [25]. Figure 7 illustrates the pigment network detection process, and a comparison of the proposed method with the method proposed by Barata et al. [15] is illustrated in Figure 8. It can be observed from Figure 8B,C that the proposed method identifies honeycomb-like pigment network structures more accurately than the method in [15]. An overview of the features extracted is summarized in Table 1.
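The pigment-network enhancement chain can be approximated with standard filters, as sketched below; the Gabor frequency, the number of orientations, and the median window are illustrative stand-ins for the empirically tuned parameters ($f$, $t$, $\sigma_x$, $\sigma_y$) of the proposed method.

```python
import numpy as np
from scipy import ndimage
from skimage import exposure, filters

def pigment_network_response(image_rgb):
    """Sketch: green channel -> median filter -> 3x3 Laplacian sharpening ->
    maximum response over a bank of Gabor orientations -> AHE."""
    green = image_rgb[:, :, 1].astype(float)
    smooth = ndimage.median_filter(green, size=5)
    sharpened = smooth - ndimage.laplace(smooth)  # second-derivative enhancement
    thetas = np.linspace(0, np.pi, 6, endpoint=False)  # diverse line orientations
    responses = [filters.gabor(sharpened, frequency=0.3, theta=t)[0] for t in thetas]
    response = np.max(responses, axis=0)
    normalised = (response - response.min()) / (np.ptp(response) + 1e-9)
    return exposure.equalize_adapthist(normalised)  # AHE before thresholding
```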

2.2.5. Classification and Diagnosis

The features selected from the groups $f_{shape}$, $f_{color}$, $f_{texture}$, and $f_{PN}$ are concatenated to benefit from the complementary information captured by the feature types. The observations are classified into the two classes (benign and malignant) by selecting the class $C$ that yields the largest probability $P^G$, using a probabilistic SVM, as given in (10).
$$C = \max\left(P^G_{shape},\ P^G_{color},\ P^G_{texture},\ P^G_{PN}\right) \tag{10}$$
where $P^G_{shape}$, $P^G_{color}$, $P^G_{texture}$, and $P^G_{PN}$ indicate the cumulative probabilities of the shape, color, texture, and pigment network features. The concatenated features are used to train a single SVM classifier. Platt's method is used for the computation of the posterior probabilities [26,27], and a linear kernel is used to map the scores.
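In scikit-learn terms, the described classifier corresponds to a linear-kernel SVM with Platt-scaled outputs; the sketch below is a schematic equivalent, with feature standardization added as a common (assumed) preprocessing step.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_lesion_classifier(f_shape, f_color, f_texture, f_pn, labels):
    """Concatenate the four feature groups and train a probabilistic SVM;
    probability=True invokes Platt scaling on the linear-kernel scores."""
    X = np.hstack([f_shape, f_color, f_texture, f_pn])
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
    clf.fit(X, labels)
    return clf

# clf.predict_proba(X_test) then yields P(benign) and P(malignant) per lesion.
```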

3. Results

3.1. Dataset and Evaluation Metrics

A multi-source dataset of 515 images taken from PH2 [28] and ISBI 2016 and 2017 [29,30] has been used for the experiments in this article. The dataset consists of 304 benign and 211 melanoma lesions with annotated ground truths. In the initial stage, the images are pre-processed to remove dermoscopic hair and dark corners. The algorithms have been implemented in MATLAB 2016®. Three metrics, viz., sensitivity (SE), specificity (SP), and accuracy (ACC), have been used to evaluate ROI detection and lesion classification. In addition, the overlap error between the ground truth and the segmented mask is computed to evaluate lesion segmentation. Null hypothesis testing has been performed using the Wilcoxon rank-sum statistic, a non-parametric test. The null hypothesis is stated below:
H0. 
The extracted features for benign and malignant lesions have equal medians.
The null hypothesis is tested against the alternative hypothesis, which states that the extracted features do not have equal medians and are hence statistically significant at the 5% significance level. Among the 48 p-values, 34 satisfied the alternative hypothesis, and the corresponding features were therefore used for classification. A hold-out set of 25% is used for testing. The classification metrics are computed by repeating the training and test procedures ten times with stratified training and test sets.
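The feature-reduction step maps directly onto SciPy's rank-sum test; a minimal sketch, assuming a samples-by-features matrix with binary labels (0 = benign, 1 = malignant):

```python
import numpy as np
from scipy.stats import ranksums

def select_significant_features(X, y, alpha=0.05):
    """Keep features whose benign and malignant samples have significantly
    different medians under the Wilcoxon rank-sum test (p <= alpha)."""
    keep = [j for j in range(X.shape[1])
            if ranksums(X[y == 0, j], X[y == 1, j]).pvalue <= alpha]
    return X[:, keep], keep
```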

3.2. Evaluation of Hair Detection and Lesion Segmentation: Results

Hair detection and exclusion are performed prior to lesion segmentation to eliminate artifacts, thereby increasing the segmentation accuracy. The positive effect of pre-processing (hair detection + black frame removal) on the segmentation accuracy for the combined dataset can be observed in Figure 9. The overlap error with hair detection applied prior to segmentation was 0.07, compared with 0.15 without it, confirming that hair detection improves the segmentation accuracy.
The proposed segmentation method resulted in sensitivity, specificity, accuracy, and overlap error of 92.5%, 96.7%, 95.7%, and 8.2%, respectively, for the combined datasets. The overlap segmentation error of the modified Chan-Vese (proposed) method and the original Chan-Vese method for the combined dataset is illustrated in Figure 10. The overlap error (OE) is calculated as given in (11).
$$OE = \frac{\mathrm{Area}(G \oplus S)}{\mathrm{Area}(G)} \tag{11}$$
where $G$ is the ground truth, $S$ is the segmented binary image, and $\oplus$ denotes the pixel-wise exclusive OR (the area of disagreement).
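Under this XOR reading of (11), the overlap error reduces to a one-line computation over the two binary masks:

```python
import numpy as np

def overlap_error(ground_truth, segmented):
    """Eq. (11): disagreement (XOR) area over the ground-truth area."""
    g, s = ground_truth.astype(bool), segmented.astype(bool)
    return np.logical_xor(g, s).sum() / float(g.sum())
```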

3.3. Evaluation of Features Extracted and Lesion Classification: Results

Various experiments were conducted to select the classifier, with the major goals of assessing the best subset of features and comparing the performance of an SVM trained with a single feature set against one trained with concatenated features. Figure 11 plots the features of the combined dataset against their p-values. A good score has $p \le 0.05$, whereas $p > 0.05$ is considered a bad score. It can be seen from Figure 11 that the shape and pigment network features have good scores, whereas the color asymmetry indices ($x_2$, $x_3$, $x_4$) have comparably lower scores. Insignificant p-values are observed for the color similarity index of red, light brown, and dark brown. This is consistent with the fact that the presence of red is due to vascularization of blood vessels irrespective of the lesion class, and that shades of brown are common to both lesion types (benign and malignant). Among the texture features, T2 (contrast) performed comparably poorly relative to T1 and T3. Similarly, the role of the statistical color features for the ROI and NROI can be interpreted from Figure 11. Table 2 provides the mean and standard deviation values of the features with significant p-values for the combined datasets.
Based on the p-values, the feature set was reduced from 48 to 34 features. The roles of the single and combined features in lesion diagnosis are given in Table 3. It can be inferred from Table 3 that the color features are predominant for lesion diagnosis, followed by the pigment network features. Interestingly, the best overall results were obtained for a combination of the features. Table 4 shows the results of applying the proposed framework to the individual datasets, and the corresponding ROC curves are illustrated in Figure 12. It can be observed from Table 4 that acceptable results were obtained across the datasets, with the PH2 dataset yielding the highest accuracy. The generalization ability of the extracted features was tested by training on the ISBI dataset and testing on the PH2 dataset, and vice-versa; the results are presented in Table 5.

4. Discussion

The major objective of this study is the development of an automated computer-aided melanoma diagnosis system using the clinical aspects of dermoscopy on a diverse dataset. In this regard, the diagnosis system was built using a sequence of algorithms for pre-processing, ROI extraction, feature extraction, and classification. Since these steps are sequential in nature, the accuracy of classification relies mainly on the efficiency of the preceding steps. The hair detection algorithm considers the dermoscopic knowledge of hair shafts, thereby avoiding an overlap between the attributes of the lesion and the hair. Such an algorithm prevents the loss of lesion-specific information and efficiently eliminates both light and dark hair, which subsequently improves the lesion segmentation accuracy. Border irregularity is a major indication of malignancy of melanocytic lesions; thus, while localizing the ROI, appropriate care has to be taken to prevent the loss of lesion border details. Geometric deformable models incorporating color information provided promising segmentation accuracy even in the presence of background noise and poor contrast. The segmentation step was followed by the extraction of a set of 48 features specific to shape, color, texture, and pigment network from the segmented ROIs to facilitate the identification of benign and malignant lesions. The non-parametric Wilcoxon rank-sum statistic was used to obtain p-values for the extracted features with the goal of finding the best features.
Of late, deep learning techniques have been used extensively for skin lesion classification [12,13,31]. Although these architectures have increased the classification accuracy by learning from large amounts of data, the optimization of network parameters for reducing computational complexity remains unexplored. A quantitative comparison of the proposed method with the state-of-the-art methods reported above may be tenuous due to the diversity of the datasets involved; however, a comparative analysis of the studies carried out on the same datasets is given in Table 6. Sensitivity indicates the rate of correct classification of melanoma lesions, specificity indicates the rate of correct classification of benign lesions, and accuracy gives a cumulative score over benign and malignant lesions. In [5], the sensitivity was higher than the specificity, since the main focus was on the color features of the lesions. An imbalance between sensitivity and specificity was also obtained by Yu et al. [31], who employed a deep learning-based architecture. A methodological approach to detect pigmented skin lesions was proposed in [32]. Pennisi et al. [33] used standard color, shape, and texture features for classification after applying Delaunay-triangulation-based segmentation. Nonetheless, the comparison provides relevant information about the significance of the proposed method, which employs domain-specific features, thereby improving the accuracy of classifying benign and malignant lesions. However, the study did not consider the thickness of the lesion due to the lack of three-dimensional image data and ground truth; thickness would be an important parameter for rating the stage of malignancy once lesion malignancy has been detected by the classification model. Another limitation of the study is the processing time: it takes approximately 90 s on an average system (8 GB RAM, 1.60 GHz clock frequency) to provide a diagnosis once a dermoscopic image is given as input. The trained system can be employed in a clinical scenario by using a dermoscope-based image capturing system, since a dermoscope enhances the resolution of the lesions and thus aids analysis better than a normal image capturing device.

5. Conclusions

This paper presents the development of a clinically oriented framework for melanoma diagnosis. The regions are segmented on the basis of the color characteristics of the lesion. It can be observed from Table 3 that the role of color in melanoma detection is evident relative to the other features. However, the color features can be severely affected by variations in image acquisition modalities; hence, while acquiring real-time images, appropriate illumination correction techniques should be employed to eliminate the effects of non-uniform illumination.
The experimental results are promising and can be applied to detect the asymmetry, pigment network, colors, and texture of the lesions. Finally, the detected criteria are combined into a cumulative model that exhibits sensitivity, specificity, and accuracy of 83.8%, 88.3%, and 86%, respectively.

Author Contributions

Conceptualization, S.P., T.A., S.V., Y.N., R.M.D. and O.P.K.; Formal analysis, S.P., T.A., R.M.D. and O.P.K.; Investigation, S.P. and S.V.; Methodology, S.P., Y.N. and O.P.K.; Software, S.P.; Visualization, T.A. and S.V.; Writing—original draft, S.P.; Writing—review & editing, T.A., Y.N., R.M.D. and O.P.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. The Medical Futurist. Amazing Technologies Changing the Future of Dermatology. 2017. Available online: http://medicalfuturist.com/future-of-dermatology/ (accessed on 24 September 2017).
  2. Pathan, S.; Prabhu, K.G.; Siddalingaswamy, P.C. Techniques and algorithms for computer aided diagnosis of pigmented skin lesions—A review. Biomed. Signal Process. Control 2018, 39, 237–262.
  3. Abbas, Q.; Celebi, M.E.; García, I.F. Skin tumor area extraction using an improved dynamic programming approach. Skin Res. Technol. 2011, 18, 133–142.
  4. Abbas, Q.; Celebi, M.E.; Garcia, I.F.; Ahmad, W. Melanoma recognition framework based on expert definition of ABCD for dermoscopic images. Skin Res. Technol. 2013, 19, e93–e102.
  5. Barata, C.; Celebi, M.E.; Marques, J. Development of a clinically oriented system for melanoma diagnosis. Pattern Recognit. 2017, 69, 270–285.
  6. Garnavi, R.; Aldeen, M.; Bailey, J. Computer-Aided Diagnosis of Melanoma Using Border- and Wavelet-Based Texture Analysis. IEEE Trans. Inf. Technol. Biomed. 2012, 16, 1239–1252.
  7. Kostopoulos, S.A.; Asvestas, P.A.; Kalatzis, I.K.; Sakellaropoulos, G.C.; Sakkis, T.H.; Cavouras, D.A.; Glotsos, D.T. Adaptable pattern recognition system for discriminating Melanocytic Nevi from Malignant Melanomas using plain photography images from different image databases. Int. J. Med. Inform. 2017, 105, 1–10.
  8. Ferris, L.K.; Harkes, J.A.; Gilbert, B.; Winger, D.G.; Golubets, K.; Akilov, O.; Satyanarayanan, M. Computer-aided classification of melanocytic lesions using dermoscopic images. J. Am. Acad. Dermatol. 2015, 73, 769–776.
  9. Kasmi, R.; Mokrani, K. Classification of malignant melanoma and benign skin lesions: Implementation of automatic ABCD rule. IET Image Process. 2016, 10, 448–455.
  10. Celebi, M.E.; Kingravi, H.A.; Uddin, B.; Iyatomi, H.; Aslandogan, Y.A.; Stoecker, W.V.; Moss, R.H. A methodological approach to the classification of dermoscopy images. Comput. Med. Imaging Graph. 2007, 31, 362–373.
  11. Abuzaghleh, O.; Barkana, B.D.; Faezipour, M. Noninvasive Real-Time Automated Skin Lesion Analysis System for Melanoma Early Detection and Prevention. IEEE J. Transl. Eng. Health Med. 2015, 3, 4300212.
  12. Bozorgtabar, B.; Sedai, S.; Roy, P.K.; Garnavi, R. Skin lesion segmentation using deep convolution networks guided by local unsupervised learning. IBM J. Res. Dev. 2017, 61, 6:1–6:8.
  13. Premaladha, J.; Ravichandran, K.S. Novel Approaches for Diagnosing Melanoma Skin Lesions Through Supervised and Deep Learning Algorithms. J. Med. Syst. 2016, 40, 96.
  14. Celebi, M.E.; Iyatomi, H.; Stoecker, W.V.; Moss, R.H.; Rabinovitz, H.S.; Argenziano, G.; Soyer, H.P. Automatic detection of blue-white veil and related structures in dermoscopy images. Comput. Med. Imaging Graph. 2008, 32, 670–677.
  15. Barata, C.; Marques, J.S.; Rozeira, J. A System for the Detection of Pigment Network in Dermoscopy Images Using Directional Filters. IEEE Trans. Biomed. Eng. 2012, 59, 2744–2754.
  16. Abbas, Q.; Celebi, M.E.; García, I.F. Hair removal methods: A comparative study for dermoscopy images. Biomed. Signal Process. Control 2011, 6, 395–404.
  17. Toossi, M.T.B.; Pourreza, H.R.; Zare, H.; Sigari, M.-H.; Layegh, P.; Azimi, A. An effective hair removal algorithm for dermoscopy images. Skin Res. Technol. 2013, 19, 230–235.
  18. Liu, Z.-Q.; Cai, J.-H.; Buse, R. Handwriting Recognition: Soft Computing and Probabilistic Approaches, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2003.
  19. Rakowska, A. Trichoscopy (hair and scalp videodermoscopy) in the healthy female. Method standardization and norms for measurable parameters. J. Dermatol. Case Rep. 2019, 3, 14.
  20. Ma, Z.; Tavares, J.M.R.S. A Novel Approach to Segment Skin Lesions in Dermoscopic Images Based on a Deformable Model. IEEE J. Biomed. Health Inform. 2016, 20, 615–623.
  21. Weatherall, I.L.; Coombs, B.D. Skin Color Measurements in Terms of CIELAB Color Space Values. J. Investig. Dermatol. 1992, 99, 468–473.
  22. Chan, T.F.; Vese, L.A. Active contours without edges. IEEE Trans. Image Process. 2001, 10, 266–277.
  23. Tamura, H.; Mori, S.; Yamawaki, T. Textural Features Corresponding to Visual Perception. IEEE Trans. Syst. Man Cybern. 1978, 8, 460–473.
  24. Lee, T.K.; McLean, D.I.; Atkins, M.S. Irregularity index: A new border irregularity measure for cutaneous melanocytic lesions. Med. Image Anal. 2002, 7, 47–64.
  25. Eltayef, K.; Li, Y.; Liu, X. Detection of Pigment Networks in Dermoscopy Images. J. Phys. Conf. Ser. 2017, 787, 012033.
  26. Platt, J. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In Advances in Large Margin Classifiers; The MIT Press: Cambridge, MA, USA, 2000; pp. 61–74.
  27. Pathan, S.; Prabhu, K.G.; Siddalingaswamy, P.C. Hair detection and lesion segmentation in dermoscopic images using domain knowledge. Med. Biol. Eng. Comput. 2018, 56, 2051–2065.
  28. Mendonca, T.; Ferreira, P.M.; Marques, J.S.; Marcal, A.R.S.; Rozeira, J. PH2-A dermoscopic image database for research and benchmarking. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 5437–5440.
  29. ISIC 2016: Skin Lesion Analysis Towards Melanoma Detection. Available online: https://challenge.kitware.com/#challenge/n/ISBI_2016%3A_Skin_Lesion_Analysis_Towards_Melanoma_Detection (accessed on 24 September 2017).
  30. Codella, N.C.F.; Gutman, D.; Celebi, M.E.; Helba, B.; Marchetti, M.A.; Dusza, S.W.; Kalloo, A.; Liopyris, K.; Mishra, N.; Kittler, H.; et al. Skin Lesion Analysis Toward Melanoma Detection: A Challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), Hosted by the International Skin Imaging Collaboration (ISIC). arXiv 2017, arXiv:1710.05006.
  31. Yu, L.; Chen, H.; Dou, Q.; Qin, J.; Heng, P.-A. Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks. IEEE Trans. Med. Imaging 2017, 36, 994–1004.
  32. Pathan, S.; Prabhu, K.G.; Siddalingaswamy, P. A methodological approach to classify typical and atypical pigment network patterns for melanoma diagnosis. Biomed. Signal Process. Control 2018, 44, 25–37.
  33. Pennisi, A.; Bloisi, D.D.; Nardi, D.; Giampetruzzi, A.R.; Mondino, C.; Facchiano, A. Skin lesion image segmentation using Delaunay Triangulation for melanoma detection. Comput. Med. Imaging Graph. 2016, 52, 89–103.
Figure 1. Overview of the proposed system.
Figure 2. Mask created: (A) dermoscopic image; (B) mask for image (A).
Figure 3. Hair shaft detection and exclusion method: (A,B) dermoscopic images; (C,D) hair shafts detected; (E,F) dermoscopic images after inpainting.
Figure 4. Illustration of the proposed segmentation approach: (A) original images; (B) chroma component; (C) segmented images; (D) boundaries of the ground truth and segmented region overlapped on the original images (yellow corresponds to ground truth, white corresponds to segmented output).
Figure 5. Color asymmetry calculation: (A) dermoscopic image; (B) the four halves of the ROI.
Figure 6. AI calculation: (A) dermoscopic image; (B) left half of the image; (C) right half of the image; (D) asymmetric region over the y-axis.
Figure 7. Detection of pigment network: (A) dermoscopic image; (B) pigment network mask detected; (C) corresponding mask ((A) overlaid on (B)).
Figure 8. Comparison of pigment network detection: (A) dermoscopic images with pigment network marked; (B) pigment network masks detected by Barata et al. [15]; (C) pigment network masks detected by the proposed method.
Figure 9. Effect of pre-processing on segmentation accuracy.
Figure 10. Overlap segmentation error for the modified Chan-Vese and Chan-Vese algorithms.
Figure 11. Plot of features extracted versus p-values: (A) lesion-specific features; (B) statistical color features (CRG, CGB, CBR, CGK, and CBK indicate correlation between red (R), green (G), blue (B), and gray (K) values; V indicates color variance; E indicates entropy).
Figure 12. ROC curves: (a) PH2 data; (b) ISBI data; (c) combined datasets.
Table 1. Overview of the features extracted.

Feature Type            Description (Number)
Shape                   Shape Asymmetry Index (1), Compactness Index (1), Fractal Dimension (1)
Color                   Color Asymmetry Index (4), Color Similarity Score (6), Color Variance (8), Color Entropy (4), Color Correlation (12), PCA (3)
Texture                 Coarseness (1), Contrast (1), Directionality (1)
Dermoscopic Structure   Pigment Network (5)
Table 2. Mean and standard deviation values of the features with significant p-values.

F       Mean      SD        F       Mean      SD
AI      0.69      0.94      VRI     1509.015  1368.13
CI      2.63      3.21      VGI     1834.916  1303.59
FD      26.31     9.30      PC1     2910.31   1911.63
T1      39.80     17.70     PC2     116.10    100.96
T3      13.42     12.77     PC3     11.62     7.95
Cx1     13.57     12.89     ER      6.54      0.65
W       0.10      0.31      EB      6.62      0.44
K       0.24      0.42      ERI     6.19      0.75
BG      0.94      0.21      EBI     6.80      0.47
CRG     0.01      0.14      F1      7361.83   22,929.7
CGB     0.94      0.05      F2      0.08      0.17
CBR     0.95      0.09      F3      0.52      0.39
CRK     0.85      0.10      F4      0.06      0.43
CGK     0.99      0.05      F5      0.14      0.16
CBK     0.94      0.06
CRGI    0.93      0.05
CBRI    0.86      0.10
VR      1032.02   832.46
VG      1032.52   702.47
VK      974.10    652.87
Note: F—Feature, SD—Standard deviation, AI—Asymmetry Index, CI—Compactness Index, FD—Fractal dimensions, T1—Coarseness, T3—Directionality, Cx1—Color symmetry index, W—White, K—Black, BG—Blue Gray, CRG—Color Variation between Red and Green, CGB—Color Variation between Green and Blue, CBR—Color Variation between Blue and Red, CRK—Color Variation between Red and Grey, CGK—Color Variation between Green and Grey, CBK—Color Variation between Blue and Grey, CRGI—Color Variation between Red and Green for entire image, CBRI—Color Variation between Blue and Red for entire image, VR—Color Variance for Red, VG—Color Variance for Green, VK—Color Variance for Grey, VRI—Color Variance for Red for entire image, VGI—Color Variance for Green for entire image, PC1, PC2, PC3—Three Principal Components, ER—Entropy for Red, EB—Entropy for Blue, ERI—Entropy for Red for entire image, EBI—Entropy for Blue for entire image, F1–F5—Pigment network features.
Table 3. Contribution of features for lesion diagnosis (PH2 dataset).

Set-Up       SE (%)   SP (%)   ACC (%)
F_shape      90.4     82.7     83.5
F_color      88.8     92.8     91.9
F_texture    78.7     85.4     84.4
F_PN         88.7     84.2     86.5
F_combined   95.6     95.1     95.3
Table 4. Classifier performance for different datasets.

Dataset            SE (%)   SP (%)   ACC (%)
PH2                95.6     95.1     95.3
ISBI 2016 + 2017   83.4     93.7     85.4
Combined           83.8     88.3     86
Table 5. Classifier performance depicting the classifier generalization ability.

Dataset       SE (%)   SP (%)   ACC (%)
ISBI on PH2   80.5     81.5     80.7
PH2 on ISBI   90       75       81.2
Table 6. Comparative analysis of lesion classification methods with the state-of-the-art.

Dataset            Ref.                  SE (%)   SP (%)   ACC (%)
PH2                Barata et al. [5]     100      88.2     -
                   Pennisi et al. [33]   93.5     87.1     -
                   Proposed              95.6     95.1     95.3
ISBI 2016 + 2017   Yu et al. [31]        54.7     93.1     85
                   Proposed              83.4     93.7     85.4
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

