Article

Skin Lesion Classification Based on Surface Fractal Dimensions and Statistical Color Cluster Features Using an Ensemble of Machine Learning Techniques

by Simona Moldovanu 1,2, Felicia Anisoara Damian Michis 2,3, Keka C. Biswas 4, Anisia Culea-Florescu 5 and Luminita Moraru 2,3,*
1 Department of Computer Science and Information Technology, Faculty of Automation, Computers, Electrical Engineering and Electronics, Dunarea de Jos University of Galati, 47 Domneasca Str., 800008 Galati, Romania
2 The Modelling & Simulation Laboratory, Dunarea de Jos University of Galati, 47 Domneasca Str., 800008 Galati, Romania
3 Department of Chemistry, Physics & Environment, Faculty of Sciences and Environment, Dunarea de Jos University of Galati, 47 Domneasca Str., 800008 Galati, Romania
4 Department of Biological Sciences, University of Alabama at Huntsville, Huntsville, AL 35899, USA
5 Department of Electronics and Telecommunications, Faculty of Automation, Computers, Electrical Engineering and Electronics, Dunarea de Jos University of Galati, 47 Domneasca Str., 800008 Galati, Romania
* Author to whom correspondence should be addressed.
Cancers 2021, 13(21), 5256; https://doi.org/10.3390/cancers13215256
Submission received: 3 October 2021 / Revised: 18 October 2021 / Accepted: 18 October 2021 / Published: 20 October 2021
(This article belongs to the Collection Artificial Intelligence in Oncology)

Simple Summary

This study aimed to investigate the efficacy of novel skin surface fractal dimension features as an auxiliary diagnostic method for melanoma recognition. We therefore examined the skin lesion classification accuracy of the kNN-CV algorithm and of the proposed Radial basis function neural network model. We found an increased classification accuracy when fractal analysis was added to the classical color distribution analysis. Our results indicate that a reliable classifier offers more opportunities to detect cancerous skin lesions in a timely manner.

Abstract

(1) Background: An approach for skin cancer recognition and classification, implementing a novel combination of features and two classifiers as an auxiliary diagnostic method, is proposed. (2) Methods: The predictions are made by a k-nearest neighbor algorithm with 5-fold cross validation and a neural network model to assist dermatologists in the diagnosis of cancerous skin lesions. As its main contribution, this work proposes a descriptor that combines skin surface fractal dimensions and relevant color area features for skin lesion classification purposes. The surface fractal dimension is computed using a 2D generalization of Higuchi’s method. A clustering method allows for the selection of the relevant color distribution in skin lesion images by determining the average percentage of color areas within the nevi and melanoma lesion areas. In the classification stage, the Higuchi fractal dimensions (HFDs) and the color features are classified, separately, using a kNN-CV algorithm. In addition, these features serve as prototypes for a Radial basis function neural network (RBFNN) classifier. The efficiency of our algorithms was verified on images belonging to the 7-Point, Med-Node, and PH2 databases. (3) Results: Experimental results show that the accuracy of the proposed RBFNN model in skin cancer classification is 95.42% for 7-Point, 94.71% for Med-Node, and 94.88% for PH2, all significantly better than that of the kNN algorithm. (4) Conclusions: 2D Higuchi’s surface fractal features have not been previously used for skin lesion classification purposes. We combined the fractal features with color features to create an RBFNN classifier that provides high classification accuracy.

Graphical Abstract

1. Introduction

The increasing number of skin cancer patients has revealed the need for decision support systems that can detect lesions with high accuracy. In Europe, more than 100,000 melanoma cases and around 22,000 deaths associated with melanoma are reported each year [1]. In Australia, more than 14,000 people with melanoma are reported yearly [2]. In the US, in 2018, more than 91,000 new cases and more than 13,000 deaths were reported [3]. In addition, the number of melanoma cases increases annually: an increase of 225% has been reported in the US over the last 30 years [3,4].
The primary method of melanoma detection is visual clinical examination. Visual inspection is an unaided method and thus a challenging and subjective task with limited accuracy, as it is strongly dependent on the expertise of the individual. A properly trained dermatologist uses various screening methods such as the 7-point checklist (describing the symptoms of an atypical pigment network, grey-blue areas, atypical vascular patterns, blotches, streaks, irregular dots, and globules) [5], the ABCDE rule (i.e., A is for asymmetry, B for border irregularity, C for color variations, D for diameter, and E for evolution) [6], and other techniques such as infrared imaging, multispectral imaging, and confocal microscopy [7,8,9,10] for melanoma detection. These serve as adjuncts for physicians and provide satisfactory performance, but they are laborious and prone to human error. The accurate diagnosis of skin cancer in its early stages may reduce the mortality rate by up to 90% [11,12]. Research on medical image processing has led to the development of artificial intelligence, especially deep learning systems, to support the diagnosis of various skin lesions.
The strategy for early diagnosis is recognizing new skin lesions or changes in the shape, borders, color, and geometry of existing lesions. Most of the existing methods for automated skin lesion detection from dermoscopy images incorporate the following steps: image preprocessing, segmentation, feature extraction, and classification. Segmentation, or lesion border detection, refers to identifying the lesion’s position. Manual segmentation is tedious, time-consuming, and susceptible to high inter- and intra-observer variability due to the great variety of lesion shapes, textures, and colors. In addition, differing imaging conditions disturb both the segmentation and feature extraction processes [13,14]. Skin lesion color is another salient feature for diagnosing melanoma. Certain colors are mostly correlated with melanocytic lesions, including brown or black, shades of tan, and occasional patches of red, white, or blue [15,16]. However, the variability due to skin color and the presence of hairs or veins may lead to variability in the appearance of the melanoma in terms of color and texture. Various color features can be extracted from the color channels by searching for certain statistical values, such as color asymmetry or centroid distance; this can also be done using global and local descriptors. Six shades of color are present in skin lesions: light brown, dark brown, white, red, blue, and black [17], but there are difficulties and discrepancies in color perception among dermatologists. It would be beneficial to use an algorithm that can identify clusters of color that closely match dermatologists’ color perception. The choice of a suitable color cluster composition is a major bottleneck in these studies.
The border’s irregularity (i.e., the B feature of the ABCDE rule) can be objectively measured using a fractal dimension. Fractal dimensions are related to the complexity of shapes and have been used to characterize skin lesion border irregularities [18,19]. Ali et al. [20] proposed a deep learning algorithm to learn and classify border irregularities by viewing them as subtle structures of mole and melanoma images. The box-counting method was employed to estimate the fractal dimension of the irregular outlines. Melanomas have irregular borders and are therefore anticipated to have a higher fractal dimension than regular nevi. A 1D HFD was effectively used to compare brain EEG signal complexities in experiments carried out on rats. A 2D HFD was proposed in [21] for cell and tissue plant images and in [22] for histological images for an efficient differentiation of tumors.
Artificial intelligence (AI) tools facilitate substantial improvements in all areas of healthcare (direct healthcare as well as drug development and ambient assisted living), from earlier diagnosis and customized treatments to monitoring and efficient follow-ups. However, deep learning solutions are still relatively new to clinical applications in terms of facilitating and enhancing clinicians’ work in the skin cancer diagnosis process. AI solutions involve medical visualization above human-level capabilities and have an important impact on the current medical practice of dermatology, where patients are examined by dermoscopy. The goal is to augment certain features to facilitate skin cancer classification that is equal to or better than dermatologists’ performance.
Recently, various types of deep-learning neural network image classifiers have been proposed to achieve better diagnostic performance. Although a growing body of research suggests deep learning approaches are the most successful applications in image classification, they are so-called “black-box” models with huge predictive power whose inference mechanisms are often difficult, if not impossible, to grasp intuitively. This is an important obstacle to implementing deep-learning neural network classifiers in routine medical practice. The computational complexity is also a challenge. In our previous study [23], we showed the feasibility of a feedforward back propagation network algorithm based on asymmetry, moment of inertia, and histograms to differentiate between non-melanoma and melanoma. Specifically, 60 different net architectures were trained and tested on a dataset collected from four databases (7-Point, PH2, MED-NODE, and PAD-UFES-20). The model had an average accuracy of 96.7% and a mean square error (MSE) of 0.0203.
There have been very few studies about the implementation of the 2D HFD as a new descriptor for the study of skin lesions and as input data for a classifier. In this paper, we expand on our previous study [23] and propose a method to derive and cluster the color features of skin lesions, to derive the fractal dimension of the whole skin lesion image by implementing the 2D Higuchi’s surface fractal dimension, and to use machine learning models to automatically identify benignity or malignancy based on these measurements. The main contributions are as follows:
(i) Determining the representative average percentage of color areas of skin lesions for each considered dataset.
(ii) Proposing a descriptor for the investigation of the skin surface fractal dimensions of the channels of RGB color images, i.e., the 2D Higuchi fractal dimension, as an objective quantitative measure.
(iii) Implementing two distinctive machine learning classifiers, namely a kNN-CV algorithm and an RBFNN approach as a non-linear classifier, to generate the predictions. A dynamic partitioning of the data is carried out using the 5-fold cross validation (CV) method. These machine learning classifiers belong to different classification paradigms.
To the knowledge of the authors, the proposed techniques are novel and offer room for improvement over the existing methods presented in the literature.
The rest of this paper has the following structure: Section 2 presents a review of the current diagnostic tools; Section 3 presents the proposed methodology, introduces the features to be used, describes the classifier architectures and implementation details, and discusses the image datasets; Section 4 shows the experimental results and discussions; finally, Section 5 gives some concluding remarks and some future work directions.

2. Related Works

Deep learning and neural networks devoted to image segmentation have been increasingly applied in various fields, from satellite imagery and medical scans to cultural heritage preservation and biomechanics [24,25,26,27,28,29,30,31]. Remarkable progress in machine learning methods has been reported for dermatologist-level skin cancer classification. Here, we summarize the current state-of-the-art developments.
Recently, AI and deep learning techniques have been employed in skin cancer classification. The color distributions of skin lesions have been grouped into six shades of color, which arise due to various biological processes. Various AI methods are employed to correlate melanocytic lesions with certain colors such as brown or black, shades of tan, and occasional patches of red, white, or blue. Khan et al. [16] proposed a hybrid super-feature vector containing textural and color features to realize an intelligent system able to discriminate melanoma from nevus. A support vector machine (SVM) was utilized as a classifier on the DERMIS dataset with 397 skin cancer images. The proposed method achieved 96% accuracy. In another study [32], six different color spaces were used to extract color as a global feature (i.e., mean, standard deviation, variation, and skewness) to classify melanoma and non-melanoma, and an accuracy of 90.32% was reported. Seidenari et al. [33] described the colors in melanocytic lesion images by means of the distribution of 23 color clusters for RGB images. A significant difference in the presence of the selected color clusters between the nevi and melanoma populations was reported, with a sensitivity of 85.5% based on only one parameter (color clusters). A combined approach of deep learning techniques with clinical criteria (color, texture, and shape) for the automated diagnosis of skin lesions is reported in [34].
The 1D Higuchi fractal dimension (HFD) [35] has been used as a discriminative feature for various classifiers to classify modulated signals. This fractal dimension shows superior noise immunity despite a higher computational complexity. A 2D generalization of the HFD was proposed in [21,22]; this 2D fractal dimension was only used for histological images. Despite the efficacy of fractal dimensions as meaningful features in machine learning applications, there has been limited research using 2D HFD data sequences as a measure of the irregularity of skin lesion borders. Čukić et al. [36] used the 1D HFD together with sample entropy to analyze brain neuronal activity in depressive disorders. These two relatively uncorrelated features provided a better classification accuracy for all employed machine learning techniques. In [37], 4- or 8-connectivity of neighboring data points is considered and the 2D Higuchi fractal dimensions of both greyscale and color images are computed; very good performance was reported in the case of color images.
Despite advances in feature extraction and classification accuracy, a few limitations exist, such as considerable variability in the quality of dermoscopy images, along with the low resolution that characterizes many of these images. In addition, no machine learning technique is yet available that can extract and learn the most important and discriminant features from a high-dimensional dataset to obtain a higher classification accuracy.

3. Proposed Methodology

The proposed method consists of the following primary steps, which can be seen in Figure 1.
The first step is image pre-processing for noise reduction, hair removal, and image segmentation (Figure 2).
Then, a clustering method allows for the selection of the relevant color distribution in the melanocytic lesion images and the computation of the average percentage of color areas within the nevi and melanoma lesion areas. A t-test is used for feature selection and to reduce overfitting in the feature space. In a third step, the fractal dimension of the surface is computed using a 2D generalization of the HFD, applied to the R, G, and B color channels of the RGB images associated with the skin lesions. Both the color and fractal features are classified, separately, using a kNN algorithm with 5-fold cross validation. The same discriminant features are utilized for classification by the RBFNN approach as a non-linear classifier. A performance analysis of these different classification paradigms is performed.

3.1. Analysis of Relevant Color Distribution in Melanocytic Lesion Images and Color Clusters Selection

The method proposed in [33] is used to determine the average percentage of color areas within the nevi and melanoma lesion areas. For this purpose, we analyze the color histogram of each image associated with melanoma and nevus lesions. A clustering method allows for the selection of the relevant color distribution in melanocytic lesion images. In the color histogram of the lesion, the pixels are mapped to their relative color clusters. We considered twenty-three color clusters, denoted cl1, …, cl23, characterized by the largest differences between the minimum and maximum intensity in each R, G, and B channel, according to the data provided in [33]. For each image, the percentage of each color cluster is computed as the ratio between the number of pixels within the lesion belonging to the specified color cluster and the total number of pixels in the lesion. These average percentages of color areas represent relevant statistical features for distinguishing among melanocytic lesions, and a feature space is built from them. However, many features are correlated, and the redundant information worsens the classification performance. To avoid this issue, a t-test is used for feature selection, removing features with no predictive information; this also reduces overfitting in the feature space. A value of p < 0.05 is considered statistically significant, i.e., the selected samples differ from one another in a statistically significant way. The selected features are universally accepted and used for malignancy detection, but they do not yield satisfactory performance on their own. An example of the color detection, the color prototypes in the RGB image, and the quantized color clusters after this process is completed is shown in Figure 3.
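As an illustration of this step, the sketch below computes the percentage of lesion pixels falling in each color cluster. It assumes the RGB image and the segmented lesion mask are already available; the two cluster boxes shown are made-up placeholders, since the 23 published cluster boundaries from [33] are not reproduced here.

```python
import numpy as np

# Hypothetical cluster definitions: each cluster is a box in RGB space
# given by per-channel (min, max) bounds, in the spirit of [33].
# These two example boxes are placeholders, not the published values.
EXAMPLE_CLUSTERS = {
    "cl1": ((60, 120), (30, 80), (20, 60)),    # (R range, G range, B range)
    "cl2": ((120, 180), (80, 130), (60, 100)),
}

def color_cluster_percentages(rgb, lesion_mask, clusters):
    """Percentage of lesion pixels falling inside each color cluster box."""
    lesion_pixels = rgb[lesion_mask]               # shape (n_pixels, 3)
    total = lesion_pixels.shape[0]
    r, g, b = lesion_pixels[:, 0], lesion_pixels[:, 1], lesion_pixels[:, 2]
    percentages = {}
    for name, ((rlo, rhi), (glo, ghi), (blo, bhi)) in clusters.items():
        inside = ((r >= rlo) & (r <= rhi) &
                  (g >= glo) & (g <= ghi) &
                  (b >= blo) & (b <= bhi))
        percentages[name] = 100.0 * inside.sum() / total
    return percentages
```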

3.2. Higuchi’s Surface Fractal Dimension (HFD)

HFD is proposed as a tool for detecting and classifying skin lesions based on the complexities present in the images. The Higuchi algorithm runs from the minimal scale k = 1 to the maximal scale kmax = 8. A surface $f$ in 3D Euclidean space allows for a graphical representation $\gamma_{ij} = f(i, j)$, $i = 1, \dots, M$, $j = 1, \dots, N$. According to [20], $\gamma_{ij}$ denotes the intensity levels of a 2D image, and these values are the elements of an $(M \times N)$ matrix denoted $X$. To define the elements of the matrix $X$, the plane surface $f$ is tessellated by triangles, i.e., each consecutive triplet of vertices defines a new triangle, and a triangular grid is obtained. This triangular grid is important in the sense that it has a simple mathematical representation, it has several symmetry axes, and a rotation of 90° will map the grid into itself. In addition, the triangles can be enlarged into similar shapes by using a step-size parameter. This process changes the resolution of the images and facilitates the fractal analysis. The number of triangles linked together generates a new triangle shape that influences the efficacy of the surface area estimation. The sum of the areas of all the triangles within it estimates the surface area.
According to the method presented in [21], a pre-defined fractal scaling parameter of size k = 1 is considered to build a matrix $X_1$. In the next step, k = 2 and a similar but twice-magnified triangle shape is considered, and a new matrix $X_2$ is formed. This process is repeated until the fractal scaling parameter reaches k = 8. For k = 1, all the points $(i, j, \gamma_{ij})$ of the analyzed surface are considered when approximating the surface $f$. Then, for k = 2, we increase the step size and repeat the process. We chose k = 8 so that the analysis still preserves informative details. In addition, this resolution has little effect on the error of the calculated fractal dimension.
Figure 4 depicts a sample of results consisting of the R, G, and B color channels in the first row and two triangle shapes (k = 1 and k = 4) for the tessellation of each color channel in the last two rows. This intermediate step allows for the computation of the $X_{k_c}^m$ matrices and, finally, of the 2D Higuchi's surface fractal dimension, as discussed in this section.
An RGB image is separated into its R, G, and B color channels, and each channel is analyzed as an intensity image. For each R, G, and B color channel, a matrix $X_{k_c}^m$ is formed as follows:

$$X_{k_c}^{m} = \begin{bmatrix} \gamma_{m,m} & \gamma_{m+k_c,\,m} & \gamma_{m+2k_c,\,m} & \cdots & \gamma_{m+pk_c,\,m} \\ \gamma_{m,\,m+k_c} & \gamma_{m+k_c,\,m+k_c} & \gamma_{m+2k_c,\,m+k_c} & \cdots & \gamma_{m+pk_c,\,m+k_c} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \gamma_{m,\,m+sk_c} & \gamma_{m+k_c,\,m+sk_c} & \gamma_{m+2k_c,\,m+sk_c} & \cdots & \gamma_{m+pk_c,\,m+sk_c} \end{bmatrix}$$

where $k_c = 1, \dots, 8$ and $c$ denotes the R, G, or B color channel; $m = 1, 2, \dots, k_c$; $p = \left[\frac{M-m}{k_c}\right]$; $s = \left[\frac{N-m}{k_c}\right]$; and $[\cdot]$ represents the integer part.
The area of the surface $f$ will be approximated as the sum of the triangle areas $A_m(k_c)$, where $c = R, G, B$, whose vertices are given by the matrices $X_{k_c}^m$. By multiplying the absolute differences of neighboring pixel intensities at the triangle vertices, the area is estimated as follows [36]:

$$A_m(k_c) = \xi_c \sum_{i=1}^{\left[\frac{M-m}{k_c}\right]} \sum_{j=1}^{\left[\frac{N-m}{k_c}\right]} \Big\{ \big|\gamma_{m+(i-1)k_c,\,m+jk_c} - \gamma_{m+(i-1)k_c,\,m+(j-1)k_c}\big| \cdot \big|\gamma_{m+ik_c,\,m+jk_c} - \gamma_{m+(i-1)k_c,\,m+jk_c}\big| + \big|\gamma_{m+ik_c,\,m+jk_c} - \gamma_{m+ik_c,\,m+(j-1)k_c}\big| \cdot \big|\gamma_{m+ik_c,\,m+(j-1)k_c} - \gamma_{m+(i-1)k_c,\,m+(j-1)k_c}\big| \Big\}$$

where $\xi_c = \frac{1}{2k_c^4} \cdot \frac{N-1}{\left[\frac{N-m}{k_c}\right]} \cdot \frac{M-1}{\left[\frac{M-m}{k_c}\right]}$ is a normalization factor. Furthermore, the $A_m(k_c)$ are averaged over all $m = 1, 2, \dots, k_c$. The average surface area $A(k_c)$ is:
$$A(k_R) = \frac{1}{k_R}\sum_{m=1}^{k_R} A_m(k_R), \qquad A(k_G) = \frac{1}{k_G}\sum_{m=1}^{k_G} A_m(k_G), \qquad A(k_B) = \frac{1}{k_B}\sum_{m=1}^{k_B} A_m(k_B), \qquad k_c = 1, \dots, k_{max}.$$
Each collection of points $\big(\ln(1/k_c^2),\, \ln A(k_c)\big)$ forms a double logarithmic curve that is fitted by the least squares linear method. The slope of the resulting best-fit line, $S_c = \Delta\ln A(k_c) / \Delta\ln(1/k_c^2)$, allows for the computation of the Higuchi fractal dimension for each color channel as $HFD_c = S_c + 1$. The average of the $HFD_c$ values provides the HFD feature. The 2D HFD values range from 2 to 3.
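To make the computation above concrete, the following is a minimal Python/NumPy sketch of the 2D Higuchi procedure. The paper's own implementation was written in Matlab and is not published; the vectorized difference scheme and the handling of small subsampled grids below are simplifying assumptions, not the authors' exact code.

```python
import numpy as np

def higuchi_2d(channel, k_max=8):
    """2D Higuchi surface fractal dimension of one intensity channel,
    following the construction in Section 3.2. A readable sketch, not
    the authors' Matlab implementation."""
    gamma = channel.astype(float)
    M, N = gamma.shape
    log_area, log_scale = [], []
    for k in range(1, k_max + 1):
        areas = []
        for m in range(1, k + 1):
            # Subsample the surface with step k starting at offset m
            # (an approximation of the X_k^m matrices).
            sub = gamma[m - 1::k, m - 1::k]
            P, Q = sub.shape
            if P < 2 or Q < 2:
                continue
            # Products of absolute differences of neighboring vertex
            # intensities, as in the area formula above.
            d1 = np.abs(sub[:-1, 1:] - sub[:-1, :-1])
            d2 = np.abs(sub[1:, 1:] - sub[:-1, 1:])
            d3 = np.abs(sub[1:, 1:] - sub[1:, :-1])
            d4 = np.abs(sub[1:, :-1] - sub[:-1, :-1])
            s = np.sum(d1 * d2 + d3 * d4)
            xi = (N - 1) * (M - 1) / (2.0 * k**4 * (Q - 1) * (P - 1))
            areas.append(xi * s)
        log_area.append(np.log(np.mean(areas)))   # ln A(k)
        log_scale.append(np.log(1.0 / k**2))      # ln(1/k^2)
    slope = np.polyfit(log_scale, log_area, 1)[0]  # least squares fit
    return slope + 1.0                             # HFD_c = S_c + 1

def hfd_rgb(rgb, k_max=8):
    """Average HFD over the R, G, and B channels of an RGB image."""
    return float(np.mean([higuchi_2d(rgb[..., c], k_max) for c in range(3)]))
```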

3.3. K-Nearest Neighbor (kNN) with 5-Fold Cross Validation (kNN-CV)

kNN is a machine learning algorithm that assumes that the samples of each group are predominantly surrounded by samples from the same group [38]. Usually, kNN has training and classification steps. The training step keeps the features and class label of the training samples. Then, kNN classifies new instances based on a similarity operation. The Euclidean distance is the most frequently used similarity measure.
When only a small dataset is available, the cross validation technique is an alternative to data augmentation, and its sampling cost is low. The kNN with 5-fold cross validation (kNN-CV) algorithm uses a subset of the data for validation but still uses all the data for testing. The input data is split into five folds. The algorithm is trained on four of the folds and tested on the single left-out fold. This process is repeated until each fold has served as the test set and all data have been evaluated. The average training time complexity is O(d × n × log n), where n denotes the number of samples in the training dataset and d the data dimensionality. For 5-fold cross validation kNN, the computations are repeated, which increases the time complexity. This relatively higher computation time can be seen as a limitation of the kNN-CV algorithm, but it is compensated for by the reduction in overfitting. Moreover, a correct estimation of the test error is its main advantage, and it provides a good level of accuracy for the model [39].
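A minimal sketch of the kNN-CV stage using scikit-learn is shown below, assuming the feature matrix and labels have already been assembled; the placeholder data, the choice of 5 neighbors, and the scoring metric are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))       # placeholder feature matrix
y = rng.integers(0, 2, size=120)     # placeholder labels (0 = nevus, 1 = melanoma)

# kNN with Euclidean distance, evaluated by 5-fold cross validation.
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
scores = cross_val_score(knn, X, y, cv=5, scoring="accuracy")
print(f"5-fold accuracies: {np.round(scores, 3)}, mean = {scores.mean():.3f}")
```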

3.4. RBFNN Classifier

Artificial neural networks consist of a set of linked nodes that collaborate to solve problems. The RBF neural network is a three-layer feedforward architecture. It is simple and fast and shows a very good tolerance to input noise. Each hidden node applies a nonlinear activation function, which is a radially symmetric function. The radial basis function is centered on a vector in the feature space, and its response varies monotonically with the distance from that central point. RBFNN training can be formulated as the minimization of an MSE function. A critical point in an RBF network implementation is the determination of the RBF centers and weights. The main advantages of the RBFNN are as follows: (i) it has a superior approximation ability for interpolation, as it provides predictions in between the training samples; (ii) it is a local approximation network, as the outputs are provided by specified hidden neurons; and (iii) it uses hyperspheres to separate clusters. Thus, for an appropriately localized RBFNN, an input generates a significant activation only in a small region, so that the issue of local minima is avoided [40,41]. These characteristics recommend RBFNNs for modeling complex systems.
Figure 5 depicts the RBF net architecture. The input layer allocates the $c_k$ and $HFD_i$ features to the nodes of the hidden layer, where $c_k$ is the average percentage of color area feature and $HFD_i$ is Higuchi's fractal dimension for each dataset. The employed radially symmetric functions are the Gaussian functions $h_1, h_2, \dots, h_n$. The output layer consists of a summation over the number of possible output classes. The classifier's prediction for an output class is the "spike" output of the summation.
Once the RBF centers and nonlinearities in the hidden layer are determined, the weights are computed based on the linear regression of the hidden layer outputs to the desired outputs or target values. The RBFNN classifier is trained and validated using 5-fold cross validation to strengthen the capabilities of the predictive model [38]. This approach separates the datasets into training and testing groups and avoids net overfitting.
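The following is a compact sketch of such an RBF network, assuming centers chosen by k-means, a shared Gaussian width set by an average inter-center distance heuristic, and output weights obtained by linear least squares; the paper does not publish its exact training procedure, so these choices are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

class SimpleRBFNN:
    """Three-layer RBF network sketch: k-means centers, Gaussian hidden
    units, linear output weights by least squares. Center selection and
    width heuristic are assumptions, not the authors' procedure."""

    def __init__(self, n_hidden=50):
        self.n_hidden = n_hidden

    def _design(self, X):
        # Gaussian activations h_j(x) = exp(-||x - c_j||^2 / (2 sigma^2))
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.sigma**2))

    def fit(self, X, y):
        km = KMeans(n_clusters=self.n_hidden, n_init=10, random_state=0).fit(X)
        self.centers = km.cluster_centers_
        # Width heuristic: average distance between centers.
        diffs = self.centers[:, None, :] - self.centers[None, :, :]
        self.sigma = np.sqrt((diffs**2).sum(-1)).mean() + 1e-12
        H = self._design(X)
        # Linear regression of hidden outputs to target values.
        self.w, *_ = np.linalg.lstsq(H, y.astype(float), rcond=None)
        return self

    def predict(self, X):
        return (self._design(X) @ self.w > 0.5).astype(int)
```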

3.5. Dataset Description

The proposed methodology was trained, validated, and tested on 248 nevi and 407 melanoma images collected from three databases, i.e., 7-Point (68 nevi/297 melanoma), PH2 (80 nevi/40 melanoma), and Med-Node (100 nevi/40 melanoma) [5,42,43]. The classifiers used do not require data augmentation techniques to mitigate the unbalanced data among the classes of skin lesions. The features (i.e., HFD and the average percentage of color areas/color clusters) were determined using a custom program written in Matlab (Matlab v. R2020b, Mathworks). Statistical analyses were completed using SPSS software (IBM SPSS Statistics 20). The hardware used was a computer with the following specifications: Intel(R) Core(TM) i7-8550U CPU @ 1.80 GHz; 8 GB DDR4 RAM; GeForce MX150 4 GB video card; 500 GB SSD. Segmentation and feature extraction had a processing time in the range of a couple of seconds. Overall, the processing time for each dataset was between 120 and 200 s.
A sound analysis based on the sensitivity, accuracy, precision, AUC (area under the curve), and Dice score metrics was used to determine the performance of the skin lesions classification.
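For reference, the sketch below computes the threshold-based metrics from a binary confusion matrix with melanoma as the positive class; AUC is omitted because it requires continuous scores (e.g., scikit-learn's roc_auc_score).

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Sensitivity, accuracy, precision, and Dice score for binary
    labels (melanoma = 1, nevus = 0)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sensitivity = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, accuracy, precision, dice
```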

4. Results and Discussion

The first step is image pre-processing for hair and noise removal and image segmentation. The second step performs the feature selection. Integrating the color and fractal features in the analysis results in high feature dimensionality, including a high level of feature redundancy. To reduce the redundant features in the feature space, the t-test is used to optimize the classification model. The selected features from each dataset are shown in Table 1.
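A sketch of this selection step is given below, assuming a two-sample t-test per feature column with a 0.05 significance level; the authors report using SPSS, so this Python version is an illustrative stand-in.

```python
import numpy as np
from scipy.stats import ttest_ind

def select_features_ttest(X, y, alpha=0.05):
    """Keep feature columns whose means differ significantly between
    the nevus (y == 0) and melanoma (y == 1) groups."""
    keep = []
    for j in range(X.shape[1]):
        _, p = ttest_ind(X[y == 0, j], X[y == 1, j], equal_var=False)
        if p < alpha:          # p < 0.05: statistically significant
            keep.append(j)
    return keep
```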
We separately tested and compared the efficacy of selected color clusters and fractal features by using a 5-fold cross validation and kNN classifier. We did not find any studies comparing classification performance of skin cancer diagnoses based on 2D Higuchi’s surface fractal features in the literature, so we compared the two classifiers in terms of accuracy.
The prediction performance of kNN based on the average percentage of the feature descriptors of colored areas differed significantly across color clusters and datasets (Figure 6). The highest classification accuracies of 82.47% (clusters cl10 and cl15), 81.44% (cl23), and 80.41% (cl20) belong to the PH2 dataset. The second highest classification accuracy of 75.91% (cl5, cl7, and cl14) belongs to the Med-Node dataset. The color clusters cl3, cl11, and cl13 from the 7-Point dataset yielded lower accuracy results. The color clusters cl8, cl16, cl17, cl19, and cl22 did not contain any relevant average percentage of feature descriptors of color areas.
As introduced in Section 3.2, the 2D Higuchi’s surface fractal descriptor was computed for each image with the fractal scaling parameter varying from k = 1 to k = 8. The performance of classification based on the 2D Higuchi’s surface fractal descriptor is displayed in Table 2. Classification results show that the 2D Higuchi’s surface fractal descriptor produces a higher classification accuracy (79.38%) than the average percentage of feature descriptors of color areas for the PH2 dataset.
Further, the selected discriminant features were classified by a 5-fold cross validation and RBFNN approach. The input data was split into five folds; one subset was the test set and the other four subsets were for training. The employed Gaussian functions h1, h2, …, hg were as follows: g = 15 for the 7-Point dataset; g = 11 for the Med-Node dataset, and g = 10 for the PH2 dataset.
The investigation was devoted to establishing the best performance of the RBFNN classifier with different inputs, up to 50 neurons in the hidden layer, and two outputs. The number of hidden neurons varied from 0 to 50, and new hidden nodes were automatically incorporated by the net. The experimental verification indicated that 50 neurons in the hidden layer assured the best classification performance, and the classifier achieved the global optimal solution characterized by the lowest MSE. Our specified mean squared error goal was 0.01. The diagnostic performance of the models was assessed in terms of the sensitivity, accuracy, precision, AUC, Dice score, and MSE provided by the RBFNN classifier. To highlight that the two kinds of features can reinforce each other in the diagnostic process, the average percentage of color areas and the HFD features were evaluated together and separately, and the performance changes are presented in Table 3.
Table 3 shows that for all datasets under investigation, when the input of the neural network consisted of both the average percentage of color areas/color clusters and the 2D Higuchi's surface fractal descriptors, the diagnostic performance improved in terms of accuracy, AUC, Dice score, and MSE. In addition, the proposed RBFNN proved to be more accurate and efficient than the kNN algorithm in the recognition and classification of skin cancer. The RBFNN's errors were smaller when the input data were color clusters and HFD descriptors, confirming that our working hypothesis of using the 2D Higuchi's surface fractal descriptors to improve the classification performance was correct. In addition, the skin lesion analysis was more precise when the proposed RBFNN was employed.
Traditionally, skin lesions are monitored in order to evaluate their type and evolution towards a potential cancerogenic pathology. Sometimes this is a lengthy process, and it can be quite stressful for the patient. In addition, as reported in the literature [44,45,46], melanomas and nevi may show different biological manifestations according to their anatomic locations (e.g., UV-shielded and unshielded-site melanomas), and this can add to the complexity of the problem. The lesion features addressed in the proposed method could be used to investigate both of these kinds of cancerous cutaneous lesions. By using the proposed method, it is possible to quickly determine the skin lesions' characteristics and establish the type of lesion, thus avoiding a lengthy monitoring process and assuring a correct and early diagnosis. In addition, the proposed solution is able to analyze various image types, provided by dermoscopy, clinical imaging, histopathology, or confocal microscopy [46]. Current research [47,48] shows that several cases make diagnosis difficult, especially melanomas hidden by tattoos or atypical (dysplastic) nevi [49]. In addition, the evolution of melanoma can include subtle changes that cannot be observed by a physician but can play an important role in establishing the type of skin lesion. The proposed surface fractal analysis overcomes this drawback.
To the best of our knowledge, 2D Higuchi's surface fractal descriptors have not been previously used to recognize and classify nevi and melanomas in a neural network model. A comparative analysis against other reported results can only be done qualitatively, with reference to the classification accuracy (Table 4). We compared our results to both machine learning and neural network methods. The classification accuracy of kNN-CV with 2D Higuchi's surface fractal features is comparable to that provided by other classifiers. It can be noted that the proposed RBFNN algorithm achieved a significant accuracy improvement in all cases.
There are some limitations of this study. The number of images was still limited despite using the cross-validation method. In addition, this study was devoted to only one method, 2D Higuchi’s surface fractal computation. To overcome this limitation, other methods for determination of 2D fractal dimensions are possible. However, the decorrelated features, such as color clusters and HFD, utilized together in the classification task allow for an accurate classification using a relatively small number of images.

5. Conclusions

In this paper, a study based on the skin surface fractal dimensions and the relevant average percentages of the color area features was conducted for skin lesion classification. The results indicated that appropriately generated color features, further correlated with fractal features, allowed the RBFNN classifier to achieve high classification accuracy.
To the best of our knowledge, 2D Higuchi's surface fractal features have not been previously used for skin lesion classification. In future work, other image features will be considered together with the surface fractal features. In addition, in order to improve the classification accuracy, a more balanced dataset will be considered (either by populating it with an increased number of images or by using augmentation methods).

Author Contributions

Conceptualization, L.M.; Formal analysis, S.M., F.A.D.M., K.C.B. and L.M.; Investigation, S.M., F.A.D.M., L.M. and A.C.-F.; Methodology, S.M. and L.M.; Software, S.M., F.A.D.M. and L.M.; Validation, S.M., F.A.D.M., L.M. and A.C.-F.; Writing—original draft preparation, S.M., K.C.B., F.A.D.M. and A.C.-F.; Writing—review and editing, K.C.B. and L.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The APC was funded by Contract nr. RF 3627/30.09.2021 “Supporting and developing Research, Innovation and Technology Transfer activities at the University Dunărea de Jos of Galați”.

Acknowledgments

This research was partially supported under Contract nr. RF 3627/30.09.2021 “Supporting and developing Research, Innovation and Technology Transfer activities at the University Dunărea de Jos of Galați”.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bray, F.; Ferlay, J.; Soerjomataram, I.; Siegel, R.L.; Torre, L.A.; Jemal, A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2018, 68, 394–424. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Australian Government. Melanoma of the Skin Statistics. Available online: https://melanoma.canceraustralia.gov.au/statistics (accessed on 29 March 2019).
  3. Siegel, R.L.; Miller, K.D.; Jemal, A. Cancer statistics 2018. CA Cancer J. Clin. 2018, 68, 7–30. [Google Scholar] [CrossRef]
  4. Silverberg, E.; Boring, C.C.; Squires, T.S. Cancer Statistics. CA Cancer J. Clin. 1990, 40, 9–26. [Google Scholar] [CrossRef] [PubMed]
  5. Kawahara, J.; Daneshvar, S.; Argenziano, G.; Hamarneh, G. Seven-Point checklist and skin lesion classification using multitask multimodal neural nets. IEEE J. Biomed. Health Inform. 2019, 23, 538–546. [Google Scholar] [CrossRef] [PubMed]
  6. Goldsmith, S.; Solomon, A. A series of melanomas smaller than 4mm and implications for the ABCDE rule. J. Eur. Acad. Dermatol. Venereol. 2007, 21, 929–934. [Google Scholar] [CrossRef]
  7. Turani, Z.; Fatemizadeh, E.; Blumetti, T.; Daveluy, S.; Moraes, A.F.; Chen, W.; Mehregan, D.; Andersen, P.; Nisiriavanaki, M. Optical radiomic signatures derived from optical coherence tomography images to improve identification of melanoma. Cancer Res. 2019, 79, 2021–2030. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Codella, N.; Nguyen, Q.-B.; Pankanti, S.; Gutman, D.; Helba, B.; Halpern, A.; Smith, J. Deep learning ensembles for melanoma recognition in dermoscopy images. IBM J. Res. Dev. 2017, 61, 5:1–5:15. [Google Scholar] [CrossRef] [Green Version]
  9. Codella, N.; Gutman, D.; Celebi, M.E.; Helba, B.; Marchetti, M.; Dusza, S.; Kalloo, A.; Liopyris, K.; Mishra, N.K.; Kittler, H.; et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), hosted by the International Skin Imaging Collaboration (ISIC). In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 168–172. [Google Scholar]
  10. Marchetti, M.; Codella, N.; Dusza, S.; Gutman, D.; Helba, B.; Kalloo, A.; Mishra, N.; Carrera, C.; Celebi, E.; DeFazio, J.; et al. Results of the 2016 International Skin Imaging Collaboration International Symposium on Biomedical Imaging Challenge: Comparison of the accuracy of computer algorithms to dermatologists for the diagnosis of melanoma from dermoscopic images. J. Am. Acad. Dermatol. 2018, 78, 270–277. [Google Scholar] [CrossRef] [Green Version]
  11. Kasmi, R.; Mokrani, K. Classification of malignant melanoma and benign skin lesions: Implementation of automatic ABCD rule. IET Image Process. 2016, 10, 448–455. [Google Scholar] [CrossRef]
  12. Damian, F.A.; Moldovanu, S.; Dey, N.; Ashour, A.S.; Moraru, L. Feature Selection of Non-Dermoscopic Skin Lesion Images for Nevus and Melanoma Classification. Computation 2020, 8, 41. [Google Scholar] [CrossRef]
  13. Barata, C.; Celebi, M.E.; Marques, J.S. Towards a robust analysis of dermoscopy images acquired under different conditions. In Dermoscopy Image Analysis; Celebi, M.E., Mendonca, T., Marques, J.S., Eds.; CRC Press: Boca Raton, FL, USA, 2015; pp. 1–22. [Google Scholar]
  14. Barata, C.; Celebi, M.E.; Marques, J.S. A survey of feature extraction in dermoscopy image analysis of skin cancer. IEEE J. Biomed. Health Inform. 2019, 23, 1096–1109. [Google Scholar] [CrossRef]
  15. Oliveira, R.B.; Papa, J.P.; Pereira, A.S.; Tavares, J.M.R.S. Computational methods for pigmented skin lesion classification in images: Review and future trends. Neural. Comput. Appl. 2018, 29, 613–636. [Google Scholar] [CrossRef] [Green Version]
  16. Khan, M.Q.; Hussain, A.; Rehman, S.U.; Khan, U.; Maqsood, M.; Mehmood, K.; Khan, M.A. Classification of Melanoma and Nevus in Digital Images for Diagnosis of Skin Cancer. IEEE Access 2019, 7, 90132–90144. [Google Scholar] [CrossRef]
  17. Hintz-Madsen, M.; Hansen, L.K.; Larsen, J.; Drzewiecki, K.T. A Probabilistic Neural Network Framework for the Detection of Malignant Melanoma. In Artificial Neural Networks in Cancer Diagnosis, Prognosis, and Patient Management, 1st ed.; Naguib, R.N.G., Sherbet, G.A., Eds.; CRC Press: Boca Raton, FL, USA, 2019. [Google Scholar]
  18. Piantanelli, A.; Maponi, P.; Scalise, L.; Serresi, S.; Cialabrini, A.; Basso, A. Fractal characterisation of boundary irregularity in skin pigmented lesions. Med Biol. Eng. Comput. 2005, 43, 436–442. [Google Scholar] [CrossRef]
  19. Spasic, S.; Kesic, S.; Kalauzi, A.; Saponjic, J. Different anaesthesia in rat induces distinct inter-structure brain dynamic detected by Higuchi fractal dimension. Fractals 2011, 19, 113–123. [Google Scholar] [CrossRef]
  20. Ali, A.-R.; Li, J.; Yang, G.; O’Shea, S.J. A machine learning approach to automatic detection of irregularity in skin lesion border using dermoscopic images. PeerJ Comput. Sci. 2020, 6, e268. [Google Scholar] [CrossRef]
  21. Spasic, S. On 2D generalization of Higuchi’s fractal dimension. Chaos Solitons Fractals 2014, 69, 179–187. [Google Scholar] [CrossRef]
  22. Klonowski, W.; Stepien, P.; Stepien, R.; Sedivy, R.; Ahammer, H.; Spasic, S. Analysis of Anal Intraepithelial Neoplasia images using 1D and 2D Higuchi’s fractal dimension methods. Fractals 2018, 26, 1850021. [Google Scholar] [CrossRef]
  23. Moldovanu, S.; Obreja, C.-D.; Biswas, K.C.; Moraru, L. Towards Accurate Diagnosis of Skin Lesions Using Feedforward Back Propagation Neural Networks. Diagnostics 2021, 11, 936. [Google Scholar] [CrossRef] [PubMed]
  24. Mohanty, S.P.; Czakon, J.; Kaczmarek, K.A.; Pyskir, A.; Tarasiewicz, P.; Kunwar, S.; Rohrbach, J.; Luo, D.; Prasad, M.; Fleer, S.; et al. Deep Learning for Understanding Satellite Imagery: An Experimental Survey. Front. Artif. Intell. 2020, 3, 534696. [Google Scholar] [CrossRef]
  25. Fiorucci, M.; Khoroshiltseva, M.; Pontil, M.; Traviglia, A.; Del Bue, A.; James, S. Machine Learning for Cultural Heritage: A Survey. Pattern Recognit. Lett. 2020, 133, 102–108. [Google Scholar] [CrossRef]
  26. Rundo, L.; Han, C.; Nagano, Y.; Zhang, J.; Hataya, R.; Militello, C.; Tangherloni, A.; Nobile, M.; Ferretti, C.; Besozzi, D.; et al. USE-Net: Incorporating Squeeze-and-Excitation blocks into U-Net for prostate zonal segmentation of multi-institutional MRI datasets. Neurocomputing 2019, 365, 31–43. [Google Scholar] [CrossRef] [Green Version]
  27. Zhang, Z.; Liu, Q.; Wang, Y. Road extraction by deep residual u-net. IEEE Geosci. Remote Sens. Lett. 2018, 15, 749–753. [Google Scholar] [CrossRef] [Green Version]
  28. Bardis, M.; Houshyar, R.; Chantaduly, C.; Tran-Harding, K.; Ushinsky, A.; Chahine, C.; Rupasinghe, M.; Chow, D.; Chang, P. Segmentation of the Prostate Transition Zone and Peripheral Zone on MR Images with Deep Learning. Radiol. Imaging Cancer 2021, 3, e200024. [Google Scholar] [CrossRef] [PubMed]
  29. Leopold, H.A.; Orchard, J.; Zelek, J.S.; Lakshminarayanan, V. PixelBNN: Augmenting the PixelCNN with batch normalization and the presentation of a fast architecture for retinal vessel segmentation. J. Imaging 2019, 5, 26. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Wang, D.; Li, Z.; Cao, L.; Balas, V.E.; Dey, N.; Ashour, A.S.; McCauley, P.; Shi, F. Multi-scale plantar pressure imaging data fusion incorporating improved Gaussian mixture operator and fuzzy weighted evaluation system. IEEE Sens. J. 2017, 17, 1407–1420. [Google Scholar] [CrossRef]
  31. Wang, D.; Li, Z.; Dey, N.; Ashour, A.S.; Moraru, L.; Biswas, A.; Shi, F. Optical pressure sensors based plantar image segmenting using an improved fully convolutional network. Optik 2019, 179, 99–114. [Google Scholar] [CrossRef]
  32. Almansour, E.; Jaffar, M.A. Classification of Dermoscopic skin cancer images using color and hybrid texture features. Int. J. Comput. Sci. Netw. Secur. 2016, 16, 135–139. [Google Scholar]
  33. Seidenari, S.; Grana, C.; Pellacani, G. Colour Clusters for Computer Diagnosis of Melanocytic Lesions. Dermatology 2007, 214, 137–143. [Google Scholar] [CrossRef]
  34. Li, X.; Wu, J.; Jiang, H.; Chen, E.Z.; Dong, X.; Rong, R. Skin Lesion Classification Via Combining Deep Learning Features and Clinical Criteria Representations. bioRxiv 2018. [Google Scholar] [CrossRef]
  35. Shi, C.-T. Signal Pattern Recognition Based on Fractal Features and Machine Learning. Appl. Sci. 2018, 8, 1327. [Google Scholar] [CrossRef] [Green Version]
  36. Čukić, M.; Stokić, M.; Simić, S.V.; Pokrakjak, D. The successful discrimination of depression from EEG could be attributed to proper feature extraction and not to a particular classification method. Cogn. Neurodynamics 2020, 14, 443–455. [Google Scholar] [CrossRef] [Green Version]
  37. Ahammer, H.; Sabathiel, N.; Reiss, M.A. Is a two-dimensional generalization of the Higuchi algorithm necessary? Chaos 2015, 25, 073104. [Google Scholar] [CrossRef]
  38. Bonyad, M.; Tieng, Q.; Reutens, D. Optimization of distributions differences for classification. IEEE Trans. Neural. Netw. Learn. Syst. 2019, 30, 511–523. [Google Scholar] [CrossRef] [Green Version]
  39. Tembusai, Z.R.; Mawengkang, H.; Zarlis, M. K-Nearest Neighbor with K-Fold Cross Validation and Analytic Hierarchy Process on Data Classification. Int. J. Adv. Data Inf. Syst. 2021, 2, 1–8. [Google Scholar] [CrossRef]
  40. Hashemi Fath, A.; Madanifar, F.; Abbasi, M. Implementation of multilayer perceptron (MLP) and radial basis function (RBF) neural networks to predict solution gas-oil ratio of crude oil systems. Petroleum 2020, 6, 80–91. [Google Scholar] [CrossRef]
  41. Alexandridis, A.; Chondrodima, E. A medical diagnostic tool based on radial basis function classifiers and evolutionary simulated annealing. J. Biomed. Informatics. 2014, 49, 61–72. [Google Scholar] [CrossRef] [PubMed]
  42. Mendonça, T.; Ferreira, P.M.; Marques, J.S.; Marcal, A.R.; Rozeira, J. PH2—A dermoscopic image database for research and benchmarking. In Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 5437–5440. [Google Scholar]
  43. Giotis, I.; Molders, N.; Land, S.; Biehl, M.; Jonkman, M.F.; Petkov, N. MED-NODE: A computer-assisted melanoma diagnosis system using non-dermoscopic images. Expert Syst. Appl. 2015, 42, 6578–6585. [Google Scholar] [CrossRef]
  44. Paolino, G.; Panetta, C.; Cota, C.; Didona, D.; Moliterni, E.; Di Mattia, C.; De Vita, G.; Bottoni, U.; Donati, P.; Calvieri, S. Vitamin D receptor immunohistochemistry variability in sun-exposed and non-sun-exposed melanomas. Melanoma Res. 2017, 27, 17–23. [Google Scholar] [CrossRef]
  45. Paolino, G.; Moliterni, E.; Didona, D.; Garelli, V.; Corsetti, P.; Lopez, T.; Richetta, A.G.; Cantisani, C.; Bottoni, U.; Calvieri, S. Clinicopathological features, vitamin D serological levels and prognosis in cutaneous melanoma of shield-sites: An update. Med. Oncol. 2015, 32, 451. [Google Scholar] [CrossRef]
  46. Mercuri, S.R.; Rizzo, N.; Bellinzona, F.; Pampena, R.; Brianti, P.; Moffa, G.; Colombo Flink, L.; Bearzi, P.; Longo, C.; Paolino, G. Digital ex-vivo confocal imaging for fast Mohs surgery in nonmelanoma skin cancers: An emerging technique in dermatologic surgery. Dermatol. Ther. 2019, 32, e13127. [Google Scholar] [CrossRef]
  47. Russo, T.; Piccolo, V.; Lallas, A.; Jacomel, J.; Moscarella, E.; Alfano, R.; Argenziano, G. Dermoscopy of Malignant Skin Tumours: What’s New? Dermatology 2017, 233, 64–73. [Google Scholar] [CrossRef] [PubMed]
  48. Skvara, H.; Teban, L.; Fiebiger, M.; Binder, M.; Kittler, H. Limitations of Dermoscopy in the Recognition of Melanoma. Arch. Dermatol. 2005, 141, 155–160. [Google Scholar] [CrossRef] [PubMed]
  49. Goldstein, B.G.; Goldstein, A.O. Diagnosis and management of malignant melanoma. Am. Fam. Physician 2001, 63, 1359–1368, Erratum in 2001, 64, 1682. [Google Scholar]
  50. Nasiri, S.; Helsper, J.; Jung, M.; Fathi, M. DePicT Melanoma Deep-CLASS: A deep convolutional neural networks approach to classify skin lesion images. BMC Bioinform. 2020, 21, 84. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  51. Kavitha, J.C.; Suruliandi, A.; Nagarajan, D.; Nadu, T. Melanoma Detection in Dermoscopic Images using Global and Local Feature Extraction. Int. J. Multimed. Ubiquit. Eng. 2017, 12, 19–28. [Google Scholar] [CrossRef]
  52. Al-masni, M.A.; Kim, D.-H.; Kim, T.-S. Multiple skin lesions diagnostics via integrated deep convolutional networks for segmentation and classification. Comput. Methods Programs Biomed. 2020, 190, 105351. [Google Scholar] [CrossRef]
  53. Seeja, R.D.; Suresh, A. Deep learning-based skin lesion segmentation and classification of melanoma using support vector machine (SVM). Asian Pac. J. Cancer Prev. 2019, 20, 1555–1561. [Google Scholar]
  54. Khan, M.A.; Sharif, M.; Akram, T.; Bukhari, S.A.C.; Nayak, R.S. Developed Newton-Raphson based deep features selection framework for skin lesion recognition. Pattern Recognit. Lett. 2020, 129, 293–303. [Google Scholar] [CrossRef]
Figure 1. Processing steps for classifying skin lesions.
Figure 2. Illustration of pre-processing for hair removal and segmentation. Rows 1 and 2: nevi image (PH2 database). Rows 3 and 4: melanoma image (7-Point database in row 3 and Med-Node database in row 4). First column: original image. Second column: image after the hair is removed. Third column: segmented image. Med-Node contains non-dermoscopic (or simple digital) images acquired via cross-polarized light. Skin lesions in PH2 were imaged via polarized noncontact dermoscopy by using conventional cross-polarized light. Skin lesions in 7-Point were imaged via fluid immersion and non-polarized dermoscopy.
Figure 3. Example of measuring the color content and clustering method to select the relevant color distribution in melanocytic lesion images. 23 color clusters, denoted as cl1, …, cl23, are presented. Data for the minimum and maximum intensity in each R, G, and B channel are indicated for each color cluster. Lower right: the segmented image.
Figure 4. An example of the HFD computation. (a–c) R, G, and B color channels for a digital image that belongs to the PH2 dataset. (d–i) An illustration of the tessellation patterns for each color channel. The triangle shapes for k = 1 are presented in (d–f); the fractal scaling parameter k = 4 is shown in (g–i). They are used to compute the $X_{k_c}^m$ matrices. (d,g) are R channel images. (e,h) are G channel images. (f,i) are B channel images.
Figure 5. The structure of an RBFNN classifier.
Figure 6. The prediction performance for the 5-fold cross validation and kNN classifier for different average percentage of color areas/color cluster descriptors.
Table 1. Selected features and their numbers used in the experimental analysis.

| Dataset | Color Cluster Features | 2D Higuchi's Surface Fractal |
|---|---|---|
| 7-Point | 15 average percentages of color areas (cl1, cl2, cl3, cl4, cl5, cl6, cl9, cl10, cl11, cl12, cl13, cl14, cl15, cl20, cl23): 5475 features | 365 features |
| Med-Node | 10 average percentages of color areas (cl5, cl6, cl7, cl10, cl11, cl14, cl15, cl18, cl20, cl21): 1400 features | 140 features |
| PH2 | 9 average percentages of color areas (cl1, cl3, cl4, cl7, cl10, cl15, cl20, cl21, cl23): 1080 features | 120 features |
Table 2. Accuracy for the 5-fold cross validation and kNN classifier for 2D Higuchi's surface fractal descriptors.

| Dataset | Sensitivity (%) | Accuracy (%) | Precision (%) | AUC | Dice Score |
|---|---|---|---|---|---|
| 7-Point | 80.77 | 71.43 | 73.26 | 0.6948 | 0.7683 |
| Med-Node | 30.19 | 64.23 | 57.14 | 0.6423 | 0.3951 |
| PH2 | 83.33 | 79.38 | 62.50 | 0.8047 | 0.7143 |
Table 3. Performance of the RBFNN classifier in the testing experiments.

| Dataset | RBFNN Inputs | Sensitivity (%) | Accuracy (%) | Precision (%) | AUC | Dice Score | No. of Hidden Neurons | MSE |
|---|---|---|---|---|---|---|---|---|
| 7-Point | Color clusters | 97.77 | 95.12 | 94.32 | 0.9412 | 0.9603 | 50 | 0.1904 |
| 7-Point | Color clusters and HFD | 98.01 | 95.42 | 94.44 | 0.9422 | 0.9630 | 50 | 0.0924 |
| Med-Node | Color clusters | 96.22 | 94.12 | 88.61 | 0.9550 | 0.9333 | 50 | 0.1789 |
| Med-Node | Color clusters and HFD | 96.42 | 94.71 | 87.50 | 0.9588 | 0.9396 | 50 | 0.1662 |
| PH2 | Color clusters | 100.00 | 94.17 | 85.03 | 0.9553 | 0.9195 | 50 | 0.1372 |
| PH2 | Color clusters and HFD | 100.00 | 94.88 | 85.62 | 0.9685 | 0.9211 | 50 | 0.1128 |
Table 4. The comparison of accuracy results of the proposed method with those of existing methods.

| Authors | Accuracy (%) and Details |
|---|---|
| Nasiri et al. [50] | 64% (1st test: kNN (300, 100) and spot features); 67% (2nd test: kNN (1346, 450) and spot features) |
| Kavitha et al. [51] | 78.2% (kNN and GLCM features) |
| Al-masni et al. [52] | 81.79% (Inception-ResNet-v2, ISIC 2016 dataset); 81.57% (ResNet-50, ISIC 2017 dataset); 89.29% (ResNet-50, ISIC 2018 dataset) |
| Seeja & Suresh [53] | 79.26% (kNN, LBP and edge histograms, HOG, Gabor filter) |
| Khan et al. [54] | 94.50% (feedforward neural network, sigmoid function, 3 hidden layers, ISBI2016 dataset, 70:30 training/testing); 94.20% (same architecture, ISBI2017 dataset, 70:30 training/testing) |
| Proposed kNN-CV | 71.43% (7-Point), 64.23% (Med-Node), and 79.38% (PH2) for 2D Higuchi's surface fractal features |
| Proposed RBFNN (color clusters and HFD) | 95.42% (7-Point); 94.71% (Med-Node); 94.88% (PH2) |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
