Article

Clinically Inspired Skin Lesion Classification through the Detection of Dermoscopic Criteria for Basal Cell Carcinoma

1 Dpto. Teoría de la Señal y Comunicaciones, Universidad de Sevilla, Camino de los Descubrimientos s/n, 41092 Seville, Spain
2 Hospital Universitario Virgen Macarena, Calle Dr. Fedriani, 3, 41009 Seville, Spain
3 Hospitales Quironsalud Infanta Luisa y Sagrado Corazón, Calle San Jacinto, 87, 41010 Seville, Spain
4 Hospital Universitario de Cabueñes, Los Prados, 395, 33394 Gijón, Spain
* Author to whom correspondence should be addressed.
J. Imaging 2022, 8(7), 197; https://doi.org/10.3390/jimaging8070197
Submission received: 8 June 2022 / Revised: 5 July 2022 / Accepted: 8 July 2022 / Published: 12 July 2022
(This article belongs to the Topic Medical Image Analysis)

Abstract

Background and Objective. Skin cancer is the most common cancer worldwide. One of the most common non-melanoma tumors is basal cell carcinoma (BCC), which accounts for 75% of all skin cancers. There are many benign lesions that can be confused with these types of cancers, leading to unnecessary biopsies. In this paper, a new method to identify the different BCC dermoscopic patterns present in a skin lesion is presented. In addition, this information is applied to classify skin lesions into BCC and non-BCC. Methods. The proposed method combines the information provided by the original dermoscopic image, introduced in a convolutional neural network (CNN), with deep and handcrafted features extracted from color and texture analysis of the image. This color analysis is performed by transforming the image into a uniform color space and into a color appearance model. To demonstrate the validity of the method, a comparison between the classification obtained employing exclusively a CNN with the original image as input and the classification with additional color and texture features is presented. Furthermore, an exhaustive comparison of classification employing different color and texture measures derived from different color spaces is presented. Results. Results show that the classifier with additional color and texture features outperforms a CNN whose input is only the original image. Another important achievement is that a new color cooccurrence matrix, proposed in this paper, improves the results obtained with other texture measures. Finally, sensitivity of 0.99, specificity of 0.94 and accuracy of 0.97 are achieved when lesions are classified into BCC or non-BCC. Conclusions. To the best of our knowledge, this is the first time that a methodology to detect all the possible patterns that can be present in a BCC lesion is proposed. This detection leads to a clinically explainable classification into BCC and non-BCC lesions. In this sense, the classification of the proposed tool is based on the detection of the dermoscopic features that dermatologists employ for their diagnosis.

1. Introduction

Skin cancer is the most common cancer worldwide [1]. There are two main types of skin cancer: melanoma and non-melanoma. The most common non-melanoma tumors are basal cell carcinoma (BCC) and squamous cell carcinoma (SCC). BCC accounts for 75% of all skin cancers and is the most common malignant tumor in white populations [2]. This cancer is detected through visual inspection by a skilled dermatologist, but many benign lesions can be confused with these types of cancer, leading to unnecessary biopsies at a rate of roughly five biopsies per actual cancer case [3].

1.1. Related Work in the Literature

Ferrante di Ruffano et al. published a comparative study of 24 computer-aided diagnosis (CAD) tools for skin cancer detection and concluded that CAD systems obtain high sensitivity and could be used as a back-up for specialist diagnosis in a carefully selected patient population, but that there is no evidence they will be useful to assist clinicians in daily clinical practice; in this sense, prospective comparative studies are required [4]. Marka et al. presented a review of techniques focused on the automatic detection of non-melanoma skin cancer [5]. Their conclusion is that the overall quality of evidence for the diagnostic accuracy of the automatic classification of non-melanoma skin cancer is moderate. The most common limitations of the studies included in the review are overlapping training and test sets, non-biopsy-proven reference standards and non-consecutive samples.
In recent years, the use of artificial intelligence (AI) based on deep neural networks for image classification and object recognition has increased. Specifically, convolutional neural networks (CNNs) have become a powerful classification tool. The large databases available to the research community and the improvements in graphics processing unit (GPU) capabilities have contributed to this success. Despite all the above, there is still a wide margin for improvement before CAD tools become reliable. Most of the AI works published in the field of skin cancer detection have focused on melanoma [6,7,8,9]. On the other hand, works devoted to detecting non-melanoma skin cancer are uncommon in the literature. Usually, these papers classify skin lesions into different classes of diseases [10,11,12,13,14,15].
The main concern about CAD tools based on deep learning (DL) is the lack of explainability of the classification result. Deep neural networks are considered black boxes, which output a label for each class without revealing the internal decisions taken to reach it. Lately, different efforts have been directed towards the development of CAD tools for a clinically explainable classification of skin cancer [16,17,18,19]. In this regard, in this paper, a CAD tool that provides an explainable detection of BCC is presented.
There are different types of images that physicians use to diagnose skin lesions (spectroscopy [20], optical coherence tomography, thermography, multispectral images, wide-field images [21], ultrasonography [22,23], etc.), but the most widely used and simplest is digital dermoscopy—that is, digital color photography enhanced by a dermoscope [24].
Dermatologists diagnose BCC from dermoscopic images by detecting different high-level features or dermoscopic criteria. BCC has the most clearly defined clinical criteria [2,25]. Dermoscopic criteria for BCC are branching and linear vessels (arborising and superficial telangiectasia), multiple erosions, ulceration, bluish-gray clods of variable size (ovoid nests, globules and focused dots), radial lines connected to a common base (leaf-like areas), radial lines converging to a central dot or clod (spoke-wheel areas), clods within a clod (concentric structure) and absence of brown reticular lines (pigment network) [2,26].
In the last few years, some works have focused on the detection of BCC in dermoscopic images. However, to the best of our knowledge, there are very few works devoted to detecting some of the dermoscopic features that dermatologists employ to diagnose BCC, and none of them cover the detection of all the BCC dermoscopic features. Specifically, Cheng et al. detected telangiectasia [27]. Kharazmi et al. analyzed vascular structures to classify dermoscopic images into BCC or non-BCC [28,29]. Guvenc et al. [30] put their effort into detecting blue-gray ovoids, furthering the goal of automatic BCC detection; they extracted 24 color features and concluded that color features allow accurate localization of these structures. Cheng et al. used different types of features (patient age, gender, lesion size and location, dermoscopic patterns) to classify BCC vs. benign lesions with a neural network [31]. Kefel et al. extracted measures of smoothness and brightness in the red, green, blue and luminance bands to detect semitranslucent and pink blush areas in BCCs [32]. One of the problems encountered while trying to solve this issue is the lack of databases with annotated data (weakly annotated or pixelwise annotated) regarding the dermoscopic criteria found in BCC.
Color is a very important feature to be considered in order to detect and classify images of skin lesions [33,34,35,36,37]. In the particular case of detecting BCC dermoscopic criteria, the color information of the image is crucial. To the best of our knowledge, papers found in the literature that apply DL techniques to classify BCC do not analyze the color information of the image, but they use RGB BCC images without any color processing as inputs to a CNN. None of the papers found in the literature include a study of the influence of color on the classification results.
Lessons learned from this literature review are summarized below:
  • There are works in the literature that detect one or several BCC patterns, but none of them detect all the BCC patterns that dermatologists employ to diagnose;
  • There are works in the literature that employ deep neural networks (DNN) to classify BCC versus other dermatological lesions, but attempts to ensure the clinical explainability of the classification are limited;
  • Color analysis is crucial to analyze pigmented lesions;
  • Existing CAD tools for skin cancer detection lack a prospective study that validates their results.
The authors have worked for more than two decades in skin lesion analysis. Specifically, most of their publications have been focused on burn images and dermoscopic images. They have developed computer-aided diagnosis (CAD) tools for applications focusing on the detection of both color and texture features [38,39,40,41].
Color and texture analysis of this type of image has been crucial in order to obtain good classification and segmentation results. The authors have developed many different methods that calculate new color and texture features [34,42,43].
Lately, they have also applied methods based on DL methodology [44,45], but they concluded that the classification results attained with these methods are not easy to explain and that the optimum architecture to obtain them is not easy to determine. This is why, in the proposed method, together with the classification into BCC or non-BCC, we provide an explanation of this classification.
In this paper, we apply the findings of color science to the detection of BCC dermoscopic features and the classification of the lesion into BCC and non-BCC. The intended use of the proposed method is as a prioritization tool to evaluate all the images received at the hospital via teledermatology, whose volume has recently increased significantly due to the high incidence of skin cancer, resulting in an overload for dermatologists.

1.2. Our Contributions

The main contributions of the paper to the state-of-the-art can be summarized as follows:
(1) Detection of all BCC dermoscopic features.
To the best of our knowledge, no other works in the literature have developed a methodology to detect all the possible patterns that can be present in a BCC lesion.
(2) Clinically inspired classification of lesions into BCC/non-BCC.
To the best of our knowledge, no other works in the literature have attained this goal. The detection of BCC dermoscopic patterns has been used in the clinically inspired classification of a lesion into BCC/non-BCC. Physicians prefer to identify the patterns present in the lesion, instead of only the binary classification into BCC/non-BCC; that is, they need an explainable diagnosis. Most of the works in the literature perform a binary classification.
There are some works in the literature that, in other applications, have tried to explain the classification by identifying the main areas of activation of the DNN. In contrast, in our approach, we try to explain the classification based on the clinical signs found in the image.
(3) For BCC dermoscopic feature classification, we propose to combine color and texture analysis.
New parameters extracted from a new, perceptually inspired color cooccurrence matrix have been designed. To the best of our knowledge, no other works in the literature have considered uniform color spaces and recent advances in color appearance models for the computation of texture parameters extracted from a cooccurrence matrix.
(4) An annotated database of the main BCC dermoscopic features has been developed, together with a software application designed so that dermatologists can annotate the images.
There are no public annotated databases of all the BCC dermoscopic criteria. In this project, a new database composed of 692 BCC images and 2221 different patterns has been collected.

2. Methods

2.1. Design Considerations

The first design consideration is related to the development of the annotated database. Due to the high care load, physicians have limited time to annotate the database. Thus, an annotation tool with a user-friendly interface has been designed, where physicians select, with a simple mouse click, the pattern that they find in the lesion. This tool has been installed in the computer room of the Dermatology Unit at Hospital Universitario Virgen Macarena, Seville (Spain).
Due to the small database, a DNN trained exclusively with the BCC pattern images did not attain good results. Thus, it was decided to add color and texture features to improve the performance.
On the other hand, a conventional machine learning tool based only on color and texture features was also tested. However, a DL approach that combined the image with color and texture information improved the classification results.
The drawbacks of DL are the high computational cost and the requirement of a large database. The first problem was overcome with a computer equipped with a high-performance GPU. The second problem was overcome with transfer learning and by adding color and texture handcrafted features, as explained above.

2.2. Database

The first problem encountered is the limited availability of public databases with BCC images. In the 2018 and 2019 challenges organized by the International Skin Imaging Collaboration (ISIC) project, sets of 514 and 3323 BCC dermoscopic images were made available, respectively [46,47,48,49]. In Figure 1, three examples of BCC images from the ISIC Challenge 2018 are shown. There are no public databases with segmentations of the different patterns that can be found inside a BCC lesion.
In order to collect a database composed of images representing BCC dermoscopic structures, we have developed a user-friendly program that has been installed on computers located at the Hospital Universitario Virgen Macarena, Seville, Spain. Using this software, the experts have been able to manually segment 256 × 256 pixel images where one or more patterns are present. As this is a tedious task, such a software tool can be helpful and time-saving. Images were collected over approximately nine months. All the BCC images used in the evaluation were excised and biopsied. The time interval between dermoscopic image acquisition and the dermatologist’s diagnosis was less than 10 days. The biopsy was performed less than 90 days after image acquisition. The eligibility criterion for selecting BCC cases was a BCC diagnosis in a teledermatology consultation, subsequently confirmed by biopsy. Patients belonged to the Andalusian Health System and attended medical centers assigned to the area of influence of Hospital Universitario Virgen Macarena, Seville (Spain). The inclusion criterion was an age over 18 years (non-pediatric).
All the lesions that were not BCC were extracted from the HAM10000 database, where more than 50% of lesions have been confirmed by pathology, while the ground truth for the rest of the cases was either follow-up, expert consensus or confirmation by in vivo confocal microscopy [49]. Most non-BCC lesions were pigmented benign keratosis, but other non-BCC lesions have been chosen as well to build the non-BCC database (nevus, lentigo, seborrheic keratosis, actinic keratosis, melanoma, squamous cell carcinoma and hemangioma).
In Figure 2, examples of the different patterns that can be found in BCC lesions extracted with the developed software tool are shown. All of them are positive criteria, except for the pigment network, which represents a negative criterion. In Table 1, the number of images for each pattern in the dataset is presented. In total, in the database, there are 692 BCCs and 671 non-BCC images.
In order to overcome the limitations imposed by our small database, data augmentation was performed. The number of images in the training database has been multiplied by 4 (from 1371 images to 5484). The transformation applied to each image consisted of 90°, 180° and 270° rotations and random horizontal flipping and vertical flipping.
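The exact pairing of operations is not spelled out beyond the list above, but a minimal NumPy sketch of such an augmentation step could look as follows (the combination of each rotation with random flips is an assumption):

```python
import numpy as np

def augment_patch(image, rng):
    """Return four copies of a patch (0°, 90°, 180° and 270° rotations),
    each with random horizontal and vertical flipping."""
    out = []
    for k in range(4):                       # the four rotations
        img = np.rot90(image, k=k, axes=(0, 1))
        if rng.random() < 0.5:               # random horizontal flip
            img = img[:, ::-1]
        if rng.random() < 0.5:               # random vertical flip
            img = img[::-1, :]
        out.append(np.ascontiguousarray(img))
    return out

rng = np.random.default_rng(0)
patch = np.zeros((256, 256, 3), dtype=np.uint8)  # one 256 x 256 training patch
augmented = augment_patch(patch, rng)            # 4 images per original patch
```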
As stated by Barata et al. [16], the features extracted by CNNs are color-sensitive, confirming the findings of Mahbod et al. [50], who showed that color normalization has a positive impact on the performance of a CNN. However, in our case, as the images have been taken under the same conditions, color normalization has not been necessary.
We consider that color transformations should not be used for data augmentation when dealing with medical images whose diagnosis is based on color.

2.3. Color Processing

In this paper, a color analysis is applied in order to test how color influences the classification of BCC lesions. For this purpose, an analysis of the main colors present in a lesion is performed. Different color spaces and color distance metrics are tested.

2.3.1. Uniform Color Spaces and Perceptual Color Differences

Uniform color spaces are color systems where Euclidean distances correlate well with perceived color differences. In 1976, the Commission Internationale de l’Eclairage (CIE) standardized two color spaces, L*u*v* and L*a*b*, with the aim of providing a tool to measure color differences perceived by human observers [51].
The mathematical definition of the CIELAB color difference between two colors, with color coordinates $(L_1^*, a_1^*, b_1^*)$ and $(L_2^*, a_2^*, b_2^*)$, is
$$\Delta E_{ab}^* = \left[(L_2^* - L_1^*)^2 + (a_2^* - a_1^*)^2 + (b_2^* - b_1^*)^2\right]^{1/2} = \left[\Delta L^{*2} + \Delta a^{*2} + \Delta b^{*2}\right]^{1/2} \tag{1}$$
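In code, this difference is simply the Euclidean norm of the coordinate difference; a minimal NumPy sketch with illustrative skin-tone values:

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIELAB color difference, Equation (1): Euclidean distance in L*a*b*."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

# Two similar skin-tone colors in L*a*b* coordinates (illustrative values)
print(delta_e_ab((62.0, 18.0, 15.0), (60.0, 20.0, 14.0)))  # 3.0
```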
The unsatisfactory uniformity of the CIE L*a*b* space prompted researchers to investigate better color difference formulas and new color systems.
CIE color systems can accurately predict whether two colors will match for an average observer, but cannot provide information about the color appearance of these stimuli. Colors, usually, do not appear isolated in a scene, and the color appearance is strongly affected by the viewing conditions and the surroundings of color stimuli [52]. In this work, the color appearance of the different regions present in a lesion is targeted for analysis. Thus, a color appearance model is applied in this analysis. Specifically, the recent CIECAM16 color appearance model, recommended by CIE to replace CIECAM02, is used [53].
CIECAM16 needs as inputs the X and Y coordinates of the stimuli from XYZ color space, the color coordinates of the illuminant and parameters about the viewing surroundings and background. Then, it performs an illuminant and color adaptation and it obtains the following color appearance correlates: lightness J, chroma C, hue composition H, hue angle h, colorfulness M, saturation s and brightness Q [53]. Following the color appearance terminology, brightness is the attribute of a visual sensation according to which an area appears to emit more or less light, and lightness is the brightness of an area judged relative to the brightness of a similarly illuminated area that appears to be white or highly transmitting. Colorfulness is the attribute of a visual sensation according to which the perceived color of an area appears to be more or less chromatic; chroma is the colorfulness of an area, judged as a proportion of the brightness of a similarly illuminated area that appears white or highly transmitting; and saturation is the colorfulness of an area judged in proportion to its brightness. Hue is the attribute of a visual sensation according to which an area appears to be similar to one of the perceived colors—red, yellow, green and blue—or to a combination of two of them. Hue composition describes the perceived hue in terms of the percentages of two of the unique hues, whereas the hue angle describes the perceived hue as a quantity between 0° and 360° [52].
From this color appearance model, a uniform color space, CAM16-UCS, can be defined:
$$J' = \frac{1.7\,J}{1 + 0.007\,J} \tag{2}$$
$$M' = \frac{\ln(1 + 0.0228\,M)}{0.0228} \tag{3}$$
$$a' = M' \cos(h) \tag{4}$$
$$b' = M' \sin(h) \tag{5}$$
The color difference between two samples can be computed as the Euclidean distance between them in CAM16-UCS,
$$\Delta E' = \left[\Delta J'^2 + \Delta a'^2 + \Delta b'^2\right]^{1/2} \tag{6}$$
For more information about the CIECAM16 color appearance model, please refer to Appendix A of Li et al. [53].
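Computing the correlates J, M and h from XYZ requires the full CIECAM16 model (implemented, e.g., in the open-source colour-science package); assuming those correlates are available, the UCS mapping and color difference of Equations (2)–(6) reduce to a few lines:

```python
import numpy as np

def cam16_to_ucs(J, M, h_deg):
    """CAM16-UCS coordinates (J', a', b') from the CIECAM16 correlates
    lightness J, colorfulness M and hue angle h, Equations (2)-(5)."""
    J_p = 1.7 * J / (1.0 + 0.007 * J)
    M_p = np.log(1.0 + 0.0228 * M) / 0.0228
    h = np.deg2rad(h_deg)
    return np.array([J_p, M_p * np.cos(h), M_p * np.sin(h)])

def delta_e_cam16ucs(jmh1, jmh2):
    """Perceptual color difference in CAM16-UCS, Equation (6)."""
    return float(np.linalg.norm(cam16_to_ucs(*jmh1) - cam16_to_ucs(*jmh2)))

# Illustrative (J, M, h) correlates of two stimuli
print(delta_e_cam16ucs((45.0, 30.0, 20.0), (47.0, 28.0, 25.0)))
```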

2.3.2. Perceptual Clustering and Relevant Color Identification

In order to improve and facilitate the learning process of the DNN classifier, we propose to introduce, as additional information, the main colors present in the lesion. Due to the small database, we consider that the training process might be facilitated through the introduction of the significant colors of the dermoscopic patterns.
Quantization of the colors of the images involves the selection of a clustering algorithm, a color space and a distance metric. The use of Euclidean distances in uniform color spaces provides a perceptual clustering—that is, colors that are perceived as similar by a human observer will be assigned to the same cluster.
The K-means algorithm is the most popular partitional clustering algorithm [54]. When the distance metric in the partitioning space is Euclidean, cluster centers coincide with geometric centers (centroids), so updating the centroids is trivial. However, when non-Euclidean distance formulae are used, updating the cluster centers requires specific algorithms. As the color spaces utilized in this paper are perceptually uniform, clustering with the Euclidean distance is the most appropriate way to perform a perceptual clustering.
Identification of the main colors present in each pattern is performed in two steps (a code sketch of this procedure is given after the list):
(1) In an initial step, a color clustering is performed over all the training images for each BCC dermoscopic pattern. The cluster centers are initialized randomly. After this step, 18 color centroids are obtained for each pattern—that is, a total of 126 color centroids are determined.
(2) Many color centroids from different patterns are very similar. For this reason, in a second step, two centroids are merged if their distance is below a threshold. This threshold is automatically adjusted so that 20 color centroids are retained at the end.
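A sketch of the two-step procedure, assuming pixels are already expressed in a uniform color space; the automatic threshold adjustment of step (2) is approximated here by cutting a merge hierarchy at exactly 20 clusters:

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster

def main_colors(pixels_per_pattern, per_pattern=18, n_final=20, seed=0):
    """Two-step color palette extraction. `pixels_per_pattern` maps each
    of the 7 patterns to an (N, 3) array of pixel colors in a uniform
    color space (CIELAB or CAM16-UCS), so Euclidean K-means is perceptual."""
    # Step 1: 18 centroids per dermoscopic pattern (18 x 7 = 126 in total)
    cents = np.vstack([
        KMeans(n_clusters=per_pattern, n_init=10, random_state=seed)
        .fit(p).cluster_centers_
        for p in pixels_per_pattern.values()
    ])
    # Step 2: merge near-duplicate centroids across patterns
    labels = fcluster(linkage(cents, method='average'),
                      t=n_final, criterion='maxclust')
    return np.array([cents[labels == k].mean(axis=0)
                     for k in range(1, n_final + 1)])   # (20, 3) palette
```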
Figure 3 shows the most representative colors of each BCC dermoscopic pattern in RGB color space.
In Figure 4, an example of the quantization into the main BCC representative colors is presented. As can be observed in Figure 4, the pink color detected in the RGB color quantization does not match exactly the pink color in the original image: some pink regions in the original image are assigned to a gray centroid in the RGB quantization. These problems are not observed in the two uniform color spaces, CIECAM16 and CIELAB, both of which preserve the color appearance of the image well.

2.4. Texture Analysis

In the analysis of BCC patterns, texture plays an important role. Thus, the introduction of texture features as additional information in the neural network could help in the classification.
The gray-level cooccurrence matrix (GLCM) is a very useful tool in texture analysis because it is based on an estimation of second-order gray-level statistics [55]. A GLCM defines the frequency of the cooccurrence of two gray levels at a given relative location in an image. Specifically, given a gray level i at a particular position and a gray level j at a particular spatial relationship with the first one, the GLCM provides an estimate, P_ij, of the probability of having pixel values i and j in these relative positions. However, some authors [56,57] have demonstrated that the introduction of color information facilitates the classification of color texture.
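Such a GLCM on the lightness channel can be computed with scikit-image; a minimal sketch (the displacement distances and angles are assumptions, since the paper does not list them):

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.feature import graycomatrix, graycoprops

def glcm_features_L(rgb_image):
    """GLCM texture parameters computed on the L* channel of CIELAB."""
    L = rgb2lab(rgb_image)[..., 0]                    # L* in [0, 100]
    L8 = np.clip(L * 2.55, 0, 255).astype(np.uint8)   # rescale to 8 bits
    glcm = graycomatrix(L8, distances=[1],
                        angles=[0, np.pi / 2, np.pi, 3 * np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean()            # average over offsets
            for p in ('homogeneity', 'contrast', 'correlation', 'energy')}
```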
In this paper, two different methods to combine color and texture have been investigated:
(1) GLCM applied to the L* channel of the uniform color space L*a*b*, along with color information introduced to the network via the image quantized into the main colors. The L* channel represents the perceived relative brightness, and thus spatial distribution information can be captured with a GLCM computed on L*.
(2) A new color cooccurrence matrix (CCM) applied to the images color-quantized according to color appearance information. The cooccurrence of the main colors present in the image is analyzed. As 20 main colors are detected, a 20 × 20 matrix is obtained. For each element of the matrix, the probability P_ij of the cooccurrence of color index i and color index j in a particular pair of relative spatial positions is estimated. In this case, when the different parameters extracted from the cooccurrence matrix are calculated, color information and color distances are taken into account. Instead of calculating differences such as |i − j|, where i and j are the color indexes of the quantized image, color differences ΔE between the color with index i, C_i, and the color with index j, C_j, are calculated (ΔE is defined in Equations (1) and (6) for the L*a*b* and CAM16-UCS color spaces, respectively). Thus, the main parameters extracted from this cooccurrence matrix are calculated as follows:
Homogeneity:
$$H = \sum_{i,j}^{20,20} \frac{P_{ij}}{1 + \left(\Delta E(C_i, C_j)\right)^2} \tag{7}$$
Mean:
$$\mu = \sum_{i,j}^{20,20} C_i\, P_{ij} \tag{8}$$
Variance:
$$\sigma^2 = \sum_{i,j}^{20,20} \left(\Delta E(C_i, \mu_i)\right)^2 P_{ij} \tag{9}$$
Correlation:
$$\rho = \sum_{i,j}^{20,20} \frac{\Delta E(C_i, \mu_i)\,\Delta E(C_j, \mu_j)}{\sqrt{\sigma_i^2\, \sigma_j^2}}\, P_{ij} \tag{10}$$
Entropy:
$$S = -\sum_{i,j}^{20,20} P_{ij} \ln(P_{ij}) \tag{11}$$
Homogeneity measures the spatial closeness of the distribution of elements in the CCM to the diagonal. The CCM mean coincides with the image mean. CCM variance is a measure of the contrast between a pixel and its neighbors. Correlation is a measurement of how a pixel correlates to its neighbor across the entire image. Entropy is a measure of variability and it is 0 for a constant image.
In this sense, perceptual color differences are taken into account, and a new color cooccurrence matrix has been defined. The cooccurrence matrix parameters are adapted to color perception because, in all of them, perceptually uniform color differences, ΔE, are applied.
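A compact NumPy sketch of this CCM and two of its perceptually adapted parameters (Equations (7) and (11)), assuming a quantized index image and a 20-color palette expressed in a uniform color space:

```python
import numpy as np

def color_cooccurrence(idx, dx=1, dy=0, n_colors=20):
    """Cooccurrence probabilities P_ij of the color indexes of the
    quantized image for one relative displacement (dx, dy)."""
    P = np.zeros((n_colors, n_colors))
    h, w = idx.shape
    src = idx[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    dst = idx[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    np.add.at(P, (src.ravel(), dst.ravel()), 1)   # count index pairs
    return P / P.sum()

def ccm_homogeneity(P, palette):
    """Equation (7): homogeneity weighted by perceptual differences dE
    between palette colors C_i, C_j (palette in CIELAB or CAM16-UCS)."""
    dE = np.linalg.norm(palette[:, None, :] - palette[None, :, :], axis=-1)
    return float((P / (1.0 + dE ** 2)).sum())

def ccm_entropy(P, eps=1e-12):
    """Equation (11): entropy of the cooccurrence distribution."""
    return float(-(P * np.log(P + eps)).sum())
```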
Texture information is then employed in a conventional machine learning module that is combined with the deep learning architecture. Two different configurations of this machine learning module are tested:
(1) The inputs to the module are the GLCM parameters calculated from the L* channel;
(2) The inputs to the module are the new CCM parameters.
Both architectures are described in Section 2.5, Test 3.

2.5. Classification

For the classification into the different BCC patterns, three different architectures are proposed. These architectures employ the convolutional layers of a deep convolutional network, fully connected layers optimized for this application and a multilayer perceptron to concatenate the outputs of the different blocks, as shown in Figure 5 and Figure 6.
To choose the convolutional layers, different neural networks were tested: Inception V3, VGG16, ConvNet and EfficientNetB0. VGG16 obtained the best classification results, so the VGG16 network was chosen [58]. Specifically, its first convolutional layers were pre-trained on the ImageNet dataset [59] and have been employed as building blocks in the different proposed architectures.
Subsequently, new fully connected layers (FCL) designed specifically for this problem have been added. These FCLs, along with the last convolutional layers, were fine-tuned for this specific problem. The number of neurons of these FCLs has been optimized to obtain the best classification results.
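The paper does not state the deep learning framework; as an illustration, the pretrained-backbone-plus-new-FCL scheme could be set up in PyTorch as follows (the number of frozen blocks and the hidden-layer sizes are placeholders, not the optimized values):

```python
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained VGG16 convolutional layers, early blocks frozen,
# plus new fully connected layers ending in 7 outputs (one per pattern).
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
backbone = vgg.features
for p in backbone[:17].parameters():   # freeze the first three conv blocks;
    p.requires_grad = False            # the last blocks remain fine-tunable

head = nn.Sequential(                  # new FCLs for this problem
    nn.AdaptiveAvgPool2d((7, 7)),
    nn.Flatten(),
    nn.Linear(512 * 7 * 7, 256), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(256, 7),                 # one logit per BCC dermoscopic pattern
)
model = nn.Sequential(backbone, head)
```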

2.5.1. Architecture 1: Classification with Original RGB Images

In the first scenario, the inputs to the CNN were the original RGB images (256 × 256 pixel images, where one or more patterns are present). The images were weakly annotated by specialists. The CNN in this architecture consists of the convolutional layers of a VGG16 and one fully connected layer.

2.5.2. Architecture 2: Classification with Original RGB Images along with Color-Quantized Images in Different Color Spaces (Dual Classification)

In the dual configuration, the original RGB image and the color-quantized one are the inputs to the convolutional layers of two VGG16 CNNs, respectively, followed by one fully connected layer, as shown in Figure 5. The outputs of the two CNNs are then concatenated and feed a multilayer perceptron (MLP) classifier.
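A sketch of this dual configuration in PyTorch (branch embedding and MLP sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn
from torchvision import models

class DualNet(nn.Module):
    """Architecture 2 sketch: one VGG16 branch for the original RGB image
    and one for the color-quantized image, concatenated into an MLP."""
    def __init__(self, n_patterns=7, embed=256):
        super().__init__()
        self.rgb_branch = self._branch(embed)
        self.quant_branch = self._branch(embed)
        self.mlp = nn.Sequential(nn.Linear(2 * embed, 128), nn.ReLU(),
                                 nn.Linear(128, n_patterns))

    @staticmethod
    def _branch(embed):
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        return nn.Sequential(vgg.features,
                             nn.AdaptiveAvgPool2d((7, 7)), nn.Flatten(),
                             nn.Linear(512 * 7 * 7, embed), nn.ReLU())

    def forward(self, rgb, quantized):
        z = torch.cat([self.rgb_branch(rgb),
                       self.quant_branch(quantized)], dim=1)
        return self.mlp(z)             # logits; sigmoid applied per pattern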

2.5.3. Architecture 3: Classification with Original RGB Images, Color-Quantized Images in Different Color Spaces and Texture Features (Triple Classification)

In an attempt to overcome the problems of a small database, we introduce a conventional machine learning module, where texture features are extracted. The classification architecture is described in Figure 6. The texture features used in this test are described in Section 2.4. Thus, two tests have been performed. In the first triple configuration test, the GLCM texture features were calculated. In the second triple configuration test, the CCM texture features were computed. Before concatenating them with the dual classification module, the texture descriptors pass through an MLP classifier. The architecture of this MLP has been optimized by classifying the training images exclusively with these texture parameters.
As multiple BCC dermoscopic features can be found in the same image, the three proposed architectures have been trained as follows. A common architecture with 7 neurons in the last layer, corresponding to the 7 BCC patterns given in Table 1, was utilized. Each neuron in this last layer provides a probability, which is thresholded to provide the final classification. The area under the curve (AUC) is determined by varying the threshold. This network was trained with binary cross-entropy loss to take into account that multiple classes can be assigned to the same image.
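In miniature, this multi-label setup amounts to a sigmoid per pattern, binary cross-entropy, a probability threshold for the final decision, and a per-pattern AUC; a sketch with stand-in tensors:

```python
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

logits = torch.randn(8, 7)                          # stand-in network outputs
targets = torch.cat([torch.ones(4, 7), torch.zeros(4, 7)])  # multi-hot labels

loss = nn.BCEWithLogitsLoss()(logits, targets)      # binary cross-entropy:
                                                    # several labels may coexist
probs = torch.sigmoid(logits)                       # per-pattern probabilities
preds = (probs > 0.5).int()                         # sweeping this threshold
                                                    # traces the ROC curve/AUC
aucs = roc_auc_score(targets.numpy(), probs.numpy(), average=None)
```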

2.5.4. Classification of BCC and Non-BCC

According to Peris et al. and Menzies et al. [2,25,60], dermoscopic criteria for BCC are the absence of brown reticular lines (pigment network) together with the presence of any of the following 6 dermoscopic features: telangiectasia, ulceration, blue-gray ovoid nests and globules, leaf-like areas, spoke-wheel areas and clods.
Thus, in the proposed algorithm, once these 7 dermoscopic features have been identified, all the lesions are classified into BCC or non-BCC following this clinical criterion.
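This clinical decision rule is simple enough to express directly; a sketch with illustrative pattern keys:

```python
def classify_bcc(patterns):
    """Rule of Section 2.5.4: BCC if the pigment network is absent and at
    least one of the six positive criteria is present. `patterns` maps
    each of the 7 dermoscopic features to a detected True/False flag."""
    positive = ('telangiectasia', 'ulceration', 'ovoid_nests_globules',
                'leaf_like', 'spoke_wheel', 'clods')
    return (not patterns['pigment_network']) and \
        any(patterns[p] for p in positive)

# A lesion showing ulceration and no pigment network -> classified as BCC
lesion = dict.fromkeys(('telangiectasia', 'ovoid_nests_globules', 'leaf_like',
                        'spoke_wheel', 'clods', 'pigment_network'), False)
lesion['ulceration'] = True
print(classify_bcc(lesion))  # True
```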

2.6. Description of the Hardware and Software Used

The proposed method has been implemented on a computer with an Intel Core i9 CPU and an Nvidia Titan RTX 24 GB GPU.
The code corresponding to the algorithms described in this paper can be found at https://github.com/ManuelL4z0/dermaBCC (accessed on 7 July 2022). The software has been developed with Python 3.8.5, the Anaconda 4.9.2 environment and Spyder 4.1.5.

3. Results

A summary of the flow diagram followed by the participants in the tests conducted in this work is presented in Figure 7.
The four tests carried out are as follows:
Test 1: Architecture 1 is used with the original RGB images as inputs. The objective is to classify the dermoscopic patterns present in the lesions.
Test 2: Architecture 2 is used with the original RGB images and the color-quantized images as inputs. The objective is to classify the dermoscopic patterns present in the lesions.
Test 3: Architecture 3 is used, where the inputs are the original RGB images, the color-quantized images and the texture features extracted from the cooccurrence matrix. The objective is to classify the dermoscopic patterns present in the lesions.
Test 4: Classification of the images into BCC and non-BCC lesions. If one or more BCC patterns are detected in the images using one of the above tests, the image is classified as BCC. If a pigment network pattern or no pattern is detected using one of the above tests, the image is classified as non-BCC.
For the training of the CNNs, a batch size of 32 was used. The learning rate was adaptive and the optimization algorithm was AdaGrad. The loss function was the binary cross-entropy.
Table 2 provides an analysis of how the information about the distribution of colors in the image influences the final classification. To this aim, in Test 1, whose results are shown in the first row, the network was fed exclusively with the original images. The rest of the rows show the classification results when, along with the original image, the quantized image with the distribution of the main colors was introduced as input to the neural network. Three different color spaces were analyzed.
As shown in Table 2, the configuration of Test 1 attained poor results. In this regard, sensitivity of 0.48 is achieved for the spoke-wheel dermoscopic features and specificity under 0.7 is attained for three dermoscopic features.
When the configuration of Test 1 is compared to Test 2, the best classification results are attained with the dual network when it is fed with the original image plus the CIECAM16-quantized image, which improves the AUC obtained with CIELAB quantization for all the BCC dermoscopic features except for spoke-wheel. The poor results obtained for spoke-wheel and multi-globules, with AUC under 0.8, can be explained by the limited training data available for these BCC dermoscopic features. In the same way, the sensitivity obtained for these dermoscopic features is poor, which can be explained by the class imbalance present in the database.
As described in the Methods section, due to the small number of training images, it was considered that the classification results could be improved if handcrafted texture features were introduced to the classifier along with the image. The first few rows of Table 3 show the results obtained when the handcrafted features introduced to the classifier were GLCM parameters extracted from the lightness component, L*. In the last few rows of Table 3, the handcrafted features introduced to the network are the proposed CCM features.
As can be observed in Table 3, the introduction of GLCM parameters to the classifier slightly improves the classification results. Again, when CIECAM16 quantization is employed, the results are slightly better.
Another important observation is that CCM parameters improve the classification. First, the AUC is over 0.8 for all BCC feature classifications when CCM features are applied. Second, the sensitivity, specificity and AUC are higher than those obtained with GLCM parameters.
Finally, in Table 4, results when classifying all the lesions into BCC versus non-BCC are shown, wherein the BCC dermoscopic features have been estimated with the triple network and CCM features. As can be observed, the best results are attained when color differences are estimated in CIECAM16, achieving sensitivity of 0.9934. Figure 8 shows the confusion matrix of the classification into BCC versus non-BCC.
In addition, the ROC curve was calculated when CIECAM16 was used for color quantization, which is shown in Figure 9. The AUC was also computed and was equal to 0.997.

4. Discussion

This work is focused on the detection of specific patterns belonging to BCC. Given the presence or absence of these patterns, dermatologists diagnose BCC. In this sense, this work tries to emulate the dermatologist’s assessment in order to detect BCC.
To the best of our knowledge, no other work in the literature detects all the BCC dermoscopic clinical features that clinicians use to diagnose BCC lesions. There are some works focusing on the detection of one or several dermoscopic criteria [29,61], but none of them use them to give an explainable classification into BCC and non-BCC.
Most of the works found in the literature classify skin lesions into several diseases, including BCC as one of these diseases. Usually, they employ conventional machine learning algorithms [62,63] or deep learning-based methods [10,64]. These methods do not provide an explanation of this classification, which could assist the physician in their assessment.
Color is one of the main attributes on which physicians base their diagnosis of skin lesions. Different authors have employed color features to classify skin lesions. We demonstrate that the introduction of the spatial distribution of the main colors present in the image, i.e., a quantized image, improves the classification results compared to the results using only the original images (AUC = 0.85 vs. 0.83). It should be noted that this color quantization is perceptual, in the sense that colors that are perceptually similar have been grouped within the same color cluster.
In a recent prospective study, Sies et al. compared a conventional machine learning method with a CNN to classify skin lesions [65]. They concluded that the CNN outperformed methods based on handcrafted features. However, in this paper, we demonstrate that the addition of handcrafted features to a CNN architecture can improve the results, especially when the number of training samples is small. Specifically, an AUC of 0.92 has been achieved, versus an AUC of 0.85 when the classifier is a CNN architecture whose input is exclusively the original image.
Different authors have demonstrated that color–texture features outperform texture features [56]. Thus, the investigation of new color–texture features is of great interest in color image classification. In this paper, new color–texture features based on a color cooccurrence matrix resulting from the perceptually quantized color image are proposed. The improvement of this proposed color cooccurrence matrix (CCM) versus the gray-level cooccurrence matrix (GLCM), even when color information has been previously introduced to the network, has been demonstrated (AUC = 0.92 vs. 0.88).
Finally, the classification into BCC and non-BCC based on the patterns found in the lesions has been performed. The high classification rate obtained (SE = 0.99) demonstrates that a classification based on the clinical diagnostic criteria can attain very good results. In addition, this classifier provides an explainable classification to the clinician.
In this paper, the STARD 2015 updated list of essential items for reporting diagnostic accuracy studies has been followed [66].

4.1. Future Plans

The proposed work shows promising results. However, the small database has limited the quality of the results. Thus, we are working on a new version of the annotation software tool with the ability to create segmentation masks, which will facilitate and accelerate the annotation task and, consequently, will allow the database to grow. This tool will provide not only a weakly annotated database, but also the location of each BCC feature within the image.
If the database keeps improving and growing, the classification capability of the methods shown should also increase, making their integration with the current teledermatology system easier and more worthwhile. If the system becomes operational, given that around 300 BCC cases are processed at Hospital Universitario Virgen Macarena per month, the algorithm could become increasingly efficient and generalizable over time.
We intend to carry out a prospective study. The developed tool will be installed on the computers of the hospital and employed to evaluate all the images received at the hospital by the teledermatology team, and the classification results will be compared with the dermatologist’s diagnosis and with the clinical judgement of the general practitioner who acquires the dermoscopic image at the primary health center. At the Virgen Macarena Hospital and its healthcare area, involved in this project, there are on average 315 dermatology teleconsultations per month, and around 100 of these result in a BCC diagnosis. This gives an estimate of the amount of data that could be available for a prospective study.

4.2. Strengths and Limitations

In summary, the main strength of this study is that it provides a classification accompanied by an explanation, which includes information about the BCC dermoscopic features found in the lesion. Physicians prefer an explained classification rather than a binary classification.
On the other hand, the main limitation is that all the results have been obtained with images collected retrospectively. Most of the results published in the literature are based on retrospective studies. Thus, although the accuracy of computer-aided diagnosis for skin lesion detection is comparable to that of experts, the real-world applicability of these systems is unknown.

5. Conclusions

In this paper, we propose a DNN architecture to detect the dermoscopic patterns that clinicians employ to discriminate between BCC and non-BCC skin lesions. In this architecture, together with the original image, a color-quantized image and color–texture handcrafted features are introduced as inputs to the network, improving the classification results. The color images have been quantized according to perceived color differences in a uniform color space derived from CIECAM16 [53]. The color–texture handcrafted features are new, perceptually inspired features derived from a color cooccurrence matrix.
The improvement achieved with this methodology over a DNN fed only with the original RGB images is the following: specificity has increased from 0.75 to 0.82, sensitivity from 0.77 to 0.90 and the AUC from 0.83 to 0.92.
Finally, when the lesion is classified into BCC or non-BCC based on the dermoscopic features found, sensitivity of 0.9934 is achieved.
Thus, the proposed tool can provide an accurate and explainable classification to the physician.

Author Contributions

Conceptualization, C.S., B.A., A.S., T.T.-P. and R.B.-T.; methodology, C.S. and B.A.; software, M.L.; validation, A.S., T.T.-P. and R.B.-T.; formal analysis, M.L., C.S. and B.A.; investigation, C.S., B.A., A.S., T.T.-P. and R.B.-T.; resources, C.S., B.A., A.S., T.T.-P. and R.B.-T.; data curation, A.S., T.T.-P. and R.B.-T.; writing—original draft preparation, M.L., C.S. and B.A.; writing—review and editing, C.S. and B.A.; visualization, M.L.; supervision, C.S. and B.A.; project administration, C.S. and B.A.; funding acquisition, C.S. and B.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been funded by the Spanish Government, Economy and Competitiveness Ministry (Plan Estatal 2013–2016. Retos-Proyectos I + D + i: DPI2016-81103-R) and by FEDER Universidad de Sevilla and Junta de Andalucía (US-1381640).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of University Hospital Virgen Macarena (Seville, Spain).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Skin Cancer Foundation, Skin Cancer Facts and Statistics. 2021. Available online: https://www.skincancer.org/skin-cancer-information/skin-cancer-facts/#:~:text=Skin%20cancer%20is%20the%20most,doubles%20your%20risk%20for%20melanoma (accessed on 30 March 2021).
  2. Peris, K.; Fargnoli, M.C.; Garbe, C.; Kaufmann, R.; Bastholt, L.; Seguin, N.B.; Bataille, V.; Marmol, V.D.; Dummer, R.; Harwood, C.A.; et al. Diagnosis and treatment of basal cell carcinoma: European consensus-based interdisciplinary guidelines. Eur. J. Cancer 2019, 118, 10–34. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Breitbart, E.W.; Waldmann, A.; Nolte, S.; Capellaro, M.; Greinert, R.; Volkmer, B.; Katalinic, A. Systematic skin cancer screening in northern Germany. J. Am. Acad. Dermatol. 2012, 66, 201–211. [Google Scholar] [CrossRef] [PubMed]
  4. Ferrante di Ruffano, L.; Takwoingi, Y.; Dinnes, J.; Chuchu, N.; Bayliss, S.E.; Davenport, C.; Matin, R.N.; Godfrey, K.; O'Sullivan, C.; Gulati, A.; et al. Cochrane Skin Cancer Diagnostic Test Accuracy Group. Computer-assisted diagnosis techniques (dermoscopy and spectroscopy-based) for diagnosing skin cancer in adults. Cochrane Database Syst. Rev. 2018, 12, 13186. [Google Scholar] [CrossRef]
  5. Marka, A.; Carter, J.B.; Toto, E.; Hassanpour, S. Automated detection of nonmelanoma skin cancer using digital images: A systematic review. BMC Med. Imaging 2019, 19, 21. [Google Scholar] [CrossRef] [PubMed]
  6. Kaymak, S.; Esmaili, P.; Serener, A. Deep Learning for Two-Step Classification of Malignant Pigmented Skin Lesions. In Proceedings of the 14th Symposium on Neural Networks and Applications (NEUREL), Belgrade, Serbia, 20–21 November 2018. [Google Scholar] [CrossRef]
  7. Tschandl, P.; Sinz, C.; Kittler, H. Domain-specific classification pretrained fully convolutional network encoders for skin lesion segmentation. Comput. Biol. Med. 2019, 104, 111–116. [Google Scholar] [CrossRef] [PubMed]
  8. Pérez, E.; Reyes, O.; Ventura, S. Convolutional neural networks for the automatic diagnosis of melanoma: An extensive experimental study. Med. Image Anal. 2021, 67, 101858. [Google Scholar] [CrossRef]
  9. Curiel-Lewandrowski, C.; Novoa, R.A.; Berry, E.; Celebi, M.E.; Codella, N.; Giuste, F.; Gutman, D.; Halpern, A.; Leachman, S.; Liu, Y.; et al. Artificial Intelligence Approach in Melanoma; Fisher, D., Bastian, B., Eds.; Melanoma Springer: New York, NY, USA, 2019. [Google Scholar] [CrossRef]
  10. Al-masni, M.A.; Kim, D.H.; Kim, T.S. Multiple skin lesions diagnostics via integrated deep convolutional networks for segmentation and classification. Comput. Methods Progr. Biomed. 2020, 190, 105351. [Google Scholar] [CrossRef]
  11. Codella, N.C.F.; Nguyen, Q.-B.; Pankanti, S.; Gutman, D.A.; Helba, B.; Halpern, A.C.; Smith, J.R. Deep learning ensembles for melanoma recognition in dermoscopy images. IBM J. Res. Dev. 2017, 6, 5:1–5:15. [Google Scholar] [CrossRef] [Green Version]
  12. Han, S.S.; Kim, M.S.; Lim, W.; Park, G.H.; Park, I.; Chang, S.E. Classification of the Clinical Images for Benign and Malignant Cutaneous Tumors Using a Deep Learning Algorithm. J. Investig. Dermatol. 2018, 138, 1529–1538. [Google Scholar] [CrossRef] [Green Version]
  13. Carcagnì, P. Classification of Skin Lesions by Combining Multilevel Learnings in a DenseNet Architecture. In Lecture Notes in Computer Science 11751; Ricci, E., Rota Bulò, S., Snoek, C., Lanz, O., Messelodi, S., Sebe, N., Eds.; Springer: Cham, Switzerland, 2019. [Google Scholar] [CrossRef]
  14. Zhou, H.; Xie, F.; Jiang, Z.; Liu, J.; Wang, S.; Zhu, C. Multi-Classification of Skin Diseases for Dermoscopy Images Using Deep Learning. In Proceedings of the IEEE International Conference on Imaging Systems and Techniques (IST), Rockville, MD, USA, 18–20 October 2017. [Google Scholar] [CrossRef]
  15. Albahar, M.A. Skin Lesion Classification Using Convolutional Neural Network with Novel Regularizer. IEEE Access 2019, 7, 38306–38313. [Google Scholar] [CrossRef]
  16. Barata, C.; Celebi, E.; Marques, J.S. Explainable skin lesion diagnosis using taxonomies. Pattern Recognit. 2021, 110, 107413. [Google Scholar] [CrossRef]
  17. González-Díaz, I. DermaKNet: Incorporating the Knowledge of Dermatologists to Convolutional Neural Networks for Skin Lesion Diagnosis. IEEE J. Biomed. Health Inform. 2019, 23, 547–559. [Google Scholar] [CrossRef]
  18. Codella, N.C.F.; Lin, C.C.; Halpern, A.; Hind, M.; Feris, R.; Smith, J.R. Collaborative Human-AI (CHAI): Evidence-Based Interpretable Melanoma Classification in Dermoscopic Images. In Lecture Notes in Computer Science 11038; Stoyanov, D., Ed.; Springer: Cham, Switzerland, 2018. [Google Scholar] [CrossRef] [Green Version]
  19. Barata, C.; Celebi, M.E.; Marques, J.S. A Survey of Feature Extraction in Dermoscopy Image Analysis of Skin Cancer. IEEE J. Biomed. Health Inform. 2019, 23, 1096–1109. [Google Scholar] [CrossRef] [PubMed]
  20. Fried, L.; Tan, A.; Bajaj, S.; Liebman, T.N.; Polsky, D.; Stein, J.A. Technological advances for the detection of melanoma. Advances in diagnostic techniques. J. Am. Acad. Dermatol. 2020, 83, 983–992. [Google Scholar] [CrossRef] [PubMed]
  21. Birkenfeld, J.S.; Tucker-Schwartz, J.M.; Soenksen, L.R.; Aviles-Izquierdo, J.A.; Marti-Fuster, B. Computer-aided classification of suspicious pigmented lesions using wide-field images. Comput. Methods Progr. Biomed. 2020, 195, 105631. [Google Scholar] [CrossRef] [PubMed]
  22. Oranges, T.; Janowska, A.; Vitali, S.; Loggini, B.; Izzetti, R.; Romanelli, M.; Dini, V. Dermatoscopic and ultra-high frequency ultrasound evaluation in cutaneous postradiation angiosarcoma. J. Eur. Acad. Derm. Venereol 2020, 34, e741–e743. [Google Scholar] [CrossRef]
  23. Izzetti, R.; Oranges, T.; Janowska, A.; Gabriele, M.; Graziani, F.; Romanelli, M. The Application of Ultra-High-Frequency Ultrasound in Dermatology and Wound Management. Int. J. Low Extrem. Wounds 2020, 19, 334–340. [Google Scholar] [CrossRef]
  24. Celebi, M.E.; Codella, N.; Halpern, A. Dermoscopy Image Analysis: Overview and Future Directions. IEEE J. Biomed. Health Inform. 2019, 23, 474–478. [Google Scholar] [CrossRef]
  25. Menzies, S.W.; Westerhoff, K.; Rabinovitz, H.; Kopf, A.W.; McCarthy, W.H.; Katz, B. Surface microscopy of pigmented basal cell carcinoma. Arch. Dermatol. 2000, 136, 1012–1016. [Google Scholar] [CrossRef] [Green Version]
  26. Kittler, H.; Marghoob, A.A.; Argenziano, G.; Carrera, C.; Curiel-Lewandrowski, C.; Hofmann-Wellenhof, R.; Malvehy, J.; Menzies, S.; Puig, S.; Rabinovitz, H.; et al. Standardization of terminology in dermoscopy/dermatoscopy: Results of the third consensus conference of the International Society of Dermoscopy. J. Am. Acad. Dermatol. 2016, 74, 1093–1106. [Google Scholar] [CrossRef] [Green Version]
  27. Cheng, B.; Stanley, R.J.; Stoecker, W.V.; Hinton, K. Automatic telangiectasia analysis in dermoscopy images using adaptive critic design. Skin Res. Technol. 2012, 18, 389–396. [Google Scholar] [CrossRef] [PubMed]
  28. Kharazmi, P.; Aljasser, M.I.; Lui, H.; Wang, Z.J.; Lee, T.K. Automated Detection and Segmentation of Vascular Structures of Skin Lesions Seen in Dermoscopy, with an Application to Basal Cell Carcinoma Classification. IEEE J. Biomed. Health Inform. 2017, 21, 1675–1684. [Google Scholar] [CrossRef] [PubMed]
  29. Kharazmi, P.; Kalia, S.; Lui, H.; Wang, Z.J.; Lee, T.K. A feature fusion system for basal cell carcinoma detection through data-driven feature learning and patient profile. Skin Res. Technol. 2018, 24, 256–264. [Google Scholar] [CrossRef]
  30. Guvenc, P.; LeAnder, R.W.; Kefel, S.; Stoecker, W.V.; Rader, R.K.; Hinton, K.A.; Stricklin, S.M.; Rabinovitz, H.S.; Oliviero, M.; Moss, R.H. Sector expansion and elliptical modeling of blue-gray ovoids for basal cell carcinoma discrimination in dermoscopy images. Skin Res. Technol. 2013, 19, 532–536. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Cheng, B.; Stanley, R.J.; Stoecker, W.V.; Stricklin, S.M.; Hinton, K.A.; Nguyen, T.K.; Rader, R.K.; Rabinovitz, H.S.; Oliviero, M.; Moss, R.H. Analysis of clinical and dermoscopic features for basal cell carcinoma neural network classification. Skin Res. Technol. 2013, 19, e217–e222. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  32. Kefel, S.; Kefel, S.P.; LeAnder, R.W.; Kaur, R.; Kasmi, R.; Mishra, N.K.; Rader, R.K.; Cole, J.G.; Woolsey, Z.T.; Stoecker, W.V. Adaptable texture-based segmentation by variance and intensity for automatic detection of semitranslucent and pink blush areas in basal cell carcinoma. Skin Res. Technol. 2016, 22, 412–422. [Google Scholar] [CrossRef] [PubMed]
  33. Celebi, M.E.; Kingravi, H.A.; Uddin, B.; Iyatomi, H.; Aslandogan, Y.A.; Stoecker, W.V.; Moss, R.H. A methodological approach to the classification of dermoscopy images. Comput. Med. Imaging Graph. 2007, 31, 362–373. [Google Scholar] [CrossRef] [Green Version]
  34. Serrano, C.; Acha, B. Pattern analysis of dermoscopic images based on Markov random fields. Pattern Recognit. 2009, 42, 1052–1057. [Google Scholar] [CrossRef]
  35. Sáez, A.; Acha, B.; Serrano, A.; Serrano, C. Statistical detection of colors in dermoscopic images with a texton-based estimation of probabilities. IEEE J. Biomed. Health Inform. 2019, 23, 560–568. [Google Scholar] [CrossRef]
  36. Madooei, A.; Drew, M.S. Incorporating Colour Information for Computer-Aided Diagnosis of Melanoma from Dermoscopy Images: A Retrospective Survey and Critical Analysis. Int. J. Biomed. Imaging 2016, 2016, 4868305. [Google Scholar] [CrossRef]
  37. Celebi, M.E.; Zornberg, A. Automated Quantification of Clinically Significant Colors in Dermoscopy Images and Its Application to Skin Lesion Classification. IEEE Syst. J. 2014, 8, 980–984. [Google Scholar] [CrossRef]
  38. Acha, B.; Serrano, C.; Fondón, I.; Gómez-Cía, T. Burn depth analysis using multidimensional scaling applied to psychophysical experiment data. IEEE Trans. Med. Imaging 2013, 32, 1111–1120. [Google Scholar] [CrossRef] [PubMed]
  39. Sáez, A.; Serrano, C.; Acha, B. Model-Based Classification Methods of Global Patterns in Dermoscopic Images. IEEE Trans. Med. Imaging 2014, 33, 1137–1147. [Google Scholar] [CrossRef] [PubMed]
  40. Serrano, C.; Acha, B.; Gómez-Cía, T.; Acha, J.I.; Roa, L. A computer assisted diagnosis tool for the classification of burns by depth of injury. Burns 2005, 31, 275–281. [Google Scholar] [CrossRef] [PubMed]
  41. Acha, B.; Serrano, C.; Acha, J.I.; Roa, L. Segmentation and classification of burn images by color and texture information. J. Biomed. Opt. 2005, 10, 034014-1–034014-11. [Google Scholar] [CrossRef] [Green Version]
  42. Serrano, C.; Boloix-Tortosa, R.; Gómez-Cía, T.; Acha, B. Features identification for automatic burn classification. Burns 2015, 41, 1883–1890. [Google Scholar] [CrossRef]
  43. Abbas, Q.; Celebi, M.E.; Serrano, C.; Garcia, I.F. Pattern classification of dermoscopy images: A perceptually uniform model. Pattern Recognit. 2013, 46, 86–97. [Google Scholar]
  44. Vélez, P.A.; Serrano, C.; Acha, B.; Pérez Carrasco, J.A. Dermoscopic Image Segmentation: A Comparison of Methodologies. In Proceedings of the 15th Mediterranean Conference on Medical and Biological Engineering and Computing (MEDICON2019), Coimbra, Portugal, 26–28 September 2019. [Google Scholar]
  45. Vélez, P.; Miranda, M.; Serrano, C.; Acha, B. Does a Previous Segmentation Improve the Automatic Detection of Basal Cell Carcinoma Using Deep Neural Networks? Appl. Sci. 2022, 12, 2092. [Google Scholar] [CrossRef]
  46. Codella, N.C.F.; Gutman, D.; Celebi, M.E.; Helba, B.; Marchetti, M.A.; Dusza, S.W.; Kalloo, A.; Liopyris, K.; Mishra, N.; Kittler, H.; et al. Skin Lesion Analysis Toward Melanoma Detection: A Challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), Hosted by the International Skin Imaging Collaboration (ISIC). In Proceedings of the IEEE International Symposium on Biomedical Imaging (ISBI), Washington, DC, USA, 4–7 April 2018; pp. 168–172. [Google Scholar]
  47. Codella, N.; Rotemberg, V.; Tschandl, P.; Celebi, M.E.; Dusza, S.; Gutman, D.; Helba, B.; Kalloo, A.; Liopyris, K.; Marchetti, M.; et al. Skin Lesion Analysis Toward Melanoma Detection 2018: A Challenge Hosted by the International Skin Imaging Collaboration (ISIC). arXiv 2019, arXiv:1902.03368. [Google Scholar]
  48. Combalia, M.; Codella, N.C.F.; Rotemberg, V.; Helba, B.; Vilaplana, V.; Reiter, O.; Halpern, A.C.; Puig, S.; Malvehy, J. BCN20000: Dermoscopic Lesions in the Wild. arXiv 2019, arXiv:1908.02288. [Google Scholar]
  49. Tschandl, P.; Rosendahl, C.; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 2018, 5, 180161. [Google Scholar] [CrossRef]
  50. Mahbod, A.; Schaefer, G.; Ellinger, I.; Ecker, R.; Pitiot, A.; Wang, C. Fusing fine-tuned deep features for skin lesion classification. Comput. Med. Imaging Graph. 2019, 71, 19–29. [Google Scholar] [CrossRef] [PubMed] [Green Version]
51. Rangayyan, R.M.; Acha, B.; Serrano, C. Color Image Processing with Biomedical Applications; SPIE Press: Bellingham, WA, USA, 2011. [Google Scholar]
  52. Fairchild, M.D. Color Appearance Models, 2nd ed.; Wiley: West Sussex, UK, 2005. [Google Scholar]
53. Li, C.; Li, Z.; Wang, Z.; Xu, Y.; Luo, M.R.; Cui, G.; Melgosa, M.; Brill, M.H.; Pointer, M. Comprehensive color solutions: CAM16, CAT16 and CAM16-UCS. Color Res. Appl. 2017, 42, 703–718. [Google Scholar] [CrossRef]
54. Lloyd, S.P. Least squares quantization in PCM. IEEE Trans. Inf. Theory 1982, 28, 129–137. [Google Scholar] [CrossRef] [Green Version]
  55. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef] [Green Version]
  56. Cernadas, E.; Fernández-Delgado, M.; González-Rufino, E.; Carrión, P. Influence of normalization and color space to color texture classification. Pattern Recognit. 2017, 61, 120–138. [Google Scholar] [CrossRef]
57. Arvis, V.; Debain, C.; Berducat, M.; Benassi, A. Generalization of the cooccurrence matrix for colour images: Application to colour texture classification. Image Anal. Stereol. 2004, 23, 63–72. [Google Scholar] [CrossRef] [Green Version]
  58. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
59. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the IEEE Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009. [Google Scholar]
  60. Peris, K.; Altobelli, E.; Ferrari, A.; Fargnoli, M.C.; Piccolo, D.; Esposito, M.; Chimenti, S. Interobserver agreement on dermoscopic features of pigmented basal cell carcinoma. Derm. Surg. 2002, 28, 643–645. [Google Scholar] [CrossRef]
  61. Huang, H.; Kharazmi, P.; McLean, D.I.; Lui, H.; Wang, Z.J.; Lee, T.K. Automatic detection of translucency using a deep learning method from patches of clinical basal cell carcinoma images. In Proceedings of the APSIPA Annual Summit and Conference, Honolulu, HI, USA, 12–15 November 2018; pp. 685–688. [Google Scholar]
  62. Wahba, M.A.; Ashour, A.S.; Guo, Y.; Napoleon, S.A. A novel cumulative level difference mean based GLDM and modified ABCD features ranked using eigenvector centrality approach for four skin lesion types classification. Comput. Methods Progr. Biomed. 2018, 165, 163–174. [Google Scholar] [CrossRef]
  63. Chatterjee, S.; Dey, D.; Munshi, S. Integration of morphological preprocessing and fractal based feature extraction with recursive feature elimination for skin lesion types classification. Comput. Methods Progr. Biomed. 2019, 178, 201–218. [Google Scholar] [CrossRef]
64. Qin, Z.; Liu, Z.; Zhu, P.; Xue, Y. A GAN-based image synthesis method for skin lesion classification. Comput. Methods Progr. Biomed. 2020, 195, 105568. [Google Scholar]
  65. Sies, K.; Winkler, J.K.; Fink, C.; Bardehle, F.; Toberer, F.; Buhl, T.; Enk, A.; Blum, A.; Rosenberger, A.; Haenssle, H.A. Past and present of computer-assisted dermoscopic diagnosis: Performance of a conventional image analyser versus a convolutional neural network in a prospective data set of 1,981 skin lesions. Eur. J. Cancer 2020, 135, 39–46. [Google Scholar] [CrossRef] [PubMed]
  66. Bossuyt, P.M.; Reitsma, J.B.; Bruns, D.E.; Gatsonis, C.A.; Glasziou, P.P.; Irwig, L.; Lijmer, J.G.; Moher, D.; Rennie, D.; De Vet, H.C.W.; et al. STARD 2015: An updated list of essential items for reporting diagnostic accuracy studies. BMJ 2015, 351, h5527. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. ISIC BCC image samples (Challenge 2018).
Figure 2. Top row, from left to right: telangiectasia, multiple B/G globules, ulceration and pigment network; bottom row: spoke-wheel, blue-gray ovoids and maple leaf.
Figure 3. Representative colors of each BCC dermoscopic pattern in RGB color space. (a) Pigment network, (b) ulceration, (c) blue-gray ovoids, (d) multiple B/G globules, (e) maple leaf, (f) spoke-wheel, (g) telangiectasia, (h) the 20 final color centroids selected.
Figure 4. Color quantization in different color spaces. (a) Original image. (b) Image quantized in RGB color space (20 colors). (c) Image quantized in L*a*b* color space (18 colors). (d) Image quantized in CIECAM16-UCS (18 colors).
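For readers who want to reproduce the quantization illustrated in Figure 4, the following minimal sketch applies k-means clustering (Lloyd's algorithm [54]) to the pixels of a dermoscopic image in a uniform color space. It is illustrative only: the function name and parameter choices are ours, and CIELAB stands in for CIECAM16-UCS, which common imaging libraries do not implement.

```python
# Illustrative sketch (not the authors' code): color quantization by k-means
# clustering in a uniform color space, as in Figure 4c. CIELAB stands in for
# CIECAM16-UCS, which scikit-image does not provide.
import numpy as np
from skimage import io, color
from sklearn.cluster import KMeans

def quantize_colors(image_path: str, n_colors: int = 18) -> np.ndarray:
    """Replace every pixel by the centroid of its k-means color cluster."""
    rgb = io.imread(image_path)[..., :3] / 255.0        # drop alpha if present
    lab = color.rgb2lab(rgb)                            # to perceptually uniform CIELAB
    pixels = lab.reshape(-1, 3)

    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    lab_q = km.cluster_centers_[km.labels_].reshape(lab.shape)

    rgb_q = np.clip(color.lab2rgb(lab_q), 0.0, 1.0)     # back to displayable RGB
    return (rgb_q * 255).astype(np.uint8)

# Example: an 18-color version of a lesion image, as in Figure 4c.
# quantized = quantize_colors("lesion.jpg", n_colors=18)
```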
Figure 5. Dual classification. The original RGB and color-quantized images are used as inputs to the VGG16 CNNs, whose features are concatenated and fed into an MLP classifier.
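As a rough illustration of this dual design, the sketch below builds two VGG16 feature extractors, one per input image, and concatenates their pooled features before an MLP head. It is a schematic under our own assumptions: the layer sizes, pooling, and sigmoid multi-label outputs are ours, not taken from the paper.

```python
# Illustrative PyTorch sketch of the dual architecture in Figure 5: two VGG16
# backbones (one for the original RGB image, one for the color-quantized
# image) whose feature vectors are concatenated and fed to an MLP head.
# Layer sizes are hypothetical, not taken from the paper.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class DualVGG16(nn.Module):
    def __init__(self, n_patterns: int = 7):
        super().__init__()
        # Two independent ImageNet-pretrained feature extractors.
        self.branch_orig = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features
        self.branch_quant = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        # MLP head; sigmoid outputs because several BCC patterns may coexist.
        self.head = nn.Sequential(
            nn.Linear(2 * 512, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, n_patterns), nn.Sigmoid(),
        )

    def forward(self, x_orig: torch.Tensor, x_quant: torch.Tensor) -> torch.Tensor:
        f1 = self.pool(self.branch_orig(x_orig)).flatten(1)    # (B, 512)
        f2 = self.pool(self.branch_quant(x_quant)).flatten(1)  # (B, 512)
        return self.head(torch.cat([f1, f2], dim=1))
```

In the Test 2 setup, the quantized branch would receive the CIELAB- or CIECAM16-quantized image; in this sketch both branches simply share the same input shape.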
Figure 6. Triple classification. The original RGB image, the color-quantized image, and the texture descriptors are used as inputs to the MLP classifier.
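The texture branch can be approximated with standard gray-level cooccurrence features [55]. The sketch below, under our own assumptions about distances, angles, and quantization levels, computes Haralick-style statistics on the L* channel; the paper's color cooccurrence matrix (CCM) generalizes the same idea to pairs of quantized colors.

```python
# Sketch of a texture-descriptor input for Figure 6: a gray-level
# cooccurrence matrix (GLCM, ref. [55]) computed on the L* channel.
# Distances, angles and level count are illustrative choices, not the paper's.
import numpy as np
from skimage import color
from skimage.feature import graycomatrix, graycoprops

def glcm_features(rgb: np.ndarray, levels: int = 32) -> np.ndarray:
    """Haralick-style statistics from the L* channel of an RGB image."""
    L = color.rgb2lab(rgb)[..., 0]                       # lightness in [0, 100]
    Lq = np.round(L / 100 * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(Lq, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```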
Figure 7. Flow diagram followed in the four tests analyzed in this work.
Figure 8. Confusion matrix of the classification into BCC versus non-BCC.
Figure 9. ROC curve when classifying BCC versus non-BCC.
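A ROC curve such as the one in Figure 9 is obtained by sweeping a decision threshold over the classifier's BCC scores. A minimal sketch with hypothetical scores (scikit-learn):

```python
# Sketch of how a ROC curve and AUC (as in Figure 9 and Tables 2-3) are
# computed from classifier scores. The labels and scores are hypothetical.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])          # 1 = BCC, 0 = non-BCC
y_score = np.array([0.93, 0.81, 0.40, 0.77, 0.12, 0.55, 0.88, 0.25])

fpr, tpr, thresholds = roc_curve(y_true, y_score)    # one point per threshold
print("AUC =", roc_auc_score(y_true, y_score))       # area under the ROC curve
```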
Table 1. Number of images for each pattern in the database.
Pattern                        Number of Occurrences
Pigment Network                614
Ulceration                     352
Blue-Gray Ovoid Nests          338
Multiple B/G Globules          150
Maple Leaf                     177
Spoke-Wheel                    64
Arborizing Telangiectasia      526
Table 2. Evaluation results for Test 1 and Test 2 classifications. SPEC: specificity; SENS: sensitivity; AUC: area under the curve.
                                         SPEC    SENS    AUC
Original RGB
  Pigment network                        0.89    0.91    0.95
  Ulceration                             0.74    0.87    0.87
  Ovoid nest                             0.61    0.83    0.77
  Multiple globules                      0.69    0.66    0.74
  Maple leaf                             0.69    0.76    0.79
  Spoke-wheel                            0.90    0.48    0.83
  A. telangiectasia                      0.71    0.88    0.88
  Average                                0.75    0.77    0.83
Dual: original + RGB quantization
  Pigment network                        0.95    0.97    0.99
  Ulceration                             0.82    0.70    0.85
  Ovoid nest                             0.70    0.68    0.77
  Multiple globules                      0.74    0.63    0.75
  Maple leaf                             0.86    0.72    0.86
  Spoke-wheel                            0.95    0.36    0.88
  A. telangiectasia                      0.77    0.79    0.85
  Average                                0.83    0.69    0.85
Dual: original + CIELAB quantization
  Pigment network                        0.94    0.95    0.98
  Ulceration                             0.81    0.86    0.91
  Ovoid nest                             0.65    0.73    0.76
  Multiple globules                      0.60    0.68    0.71
  Maple leaf                             0.78    0.72    0.82
  Spoke-wheel                            0.84    0.73    0.85
  A. telangiectasia                      0.73    0.88    0.87
  Average                                0.77    0.79    0.84
Dual: original + CIECAM16 quantization
  Pigment network                        0.95    0.96    0.98
  Ulceration                             0.86    0.79    0.92
  Ovoid nest                             0.71    0.76    0.80
  Multiple globules                      0.65    0.72    0.77
  Maple leaf                             0.69    0.78    0.83
  Spoke-wheel                            0.80    0.61    0.78
  A. telangiectasia                      0.75    0.87    0.88
  Average                                0.78    0.78    0.85
Table 3. Evaluation results for Test 3 classification. SPEC: specificity; SENS: sensitivity; AUC: area under the curve.
                                         SPEC    SENS    AUC
Triple: original + CIELAB quantization + GLCM L*
  Pigment network                        0.92    0.99    0.99
  Ulceration                             0.80    0.82    0.90
  Ovoid nest                             0.71    0.68    0.80
  Multiple globules                      0.78    0.77    0.86
  Maple leaf                             0.77    0.69    0.81
  Spoke-wheel                            0.72    0.86    0.89
  A. telangiectasia                      0.72    0.85    0.87
  Average                                0.77    0.81    0.87
Triple: original + CIECAM16 quantization + GLCM L*
  Pigment network                        0.97    0.97    0.98
  Ulceration                             0.76    0.81    0.87
  Ovoid nest                             0.60    0.88    0.79
  Multiple globules                      0.78    0.79    0.86
  Maple leaf                             0.80    0.77    0.85
  Spoke-wheel                            0.79    0.71    0.89
  A. telangiectasia                      0.73    0.79    0.89
  Average                                0.78    0.82    0.88
Triple: original + CIELAB quantization + CIELAB CCM
  Pigment network                        0.98    0.97    0.99
  Ulceration                             0.82    0.75    0.89
  Ovoid nest                             0.74    0.84    0.86
  Multiple globules                      0.78    0.68    0.80
  Maple leaf                             0.78    0.68    0.85
  Spoke-wheel                            0.89    0.97    0.96
  A. telangiectasia                      0.76    0.87    0.91
  Average                                0.82    0.82    0.89
Triple: original + CIECAM16 quantization + CIECAM16 CCM
  Pigment network                        0.98    0.97    0.99
  Ulceration                             0.86    0.92    0.94
  Ovoid nest                             0.85    0.83    0.91
  Multiple globules                      0.79    0.87    0.89
  Maple leaf                             0.72    0.82    0.82
  Spoke-wheel                            0.87    0.93    0.96
  A. telangiectasia                      0.68    0.99    0.91
  Average                                0.82    0.90    0.92
Table 4. Results of the classification into BCC versus non-BCC. ACC: accuracy; PPV: positive predictive value; SPEC: specificity; SENS: sensitivity.
             ACC       PPV       SPEC      SENS
CIELAB       0.9685    0.9789    0.9703    0.9673
CIECAM16     0.9699    0.9527    0.9423    0.9934
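For clarity, the four metrics in Table 4 derive directly from the binary confusion matrix (Figure 8). The sketch below uses hypothetical counts chosen only to land near the CIECAM16 row; they are not the paper's actual data.

```python
# Worked example: ACC, PPV, SPEC and SENS from binary confusion-matrix counts.
# The counts below are hypothetical, picked to approximate Table 4's CIECAM16 row.
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "ACC":  (tp + tn) / (tp + fp + tn + fn),   # accuracy
        "PPV":  tp / (tp + fp),                    # positive predictive value
        "SPEC": tn / (tn + fp),                    # specificity
        "SENS": tp / (tp + fn),                    # sensitivity (recall)
    }

print(binary_metrics(tp=301, fp=15, tn=245, fn=2))
# -> ACC ~0.970, PPV ~0.953, SPEC ~0.942, SENS ~0.993
```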
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
