Article

A Novel Multi-Task Learning Network Based on Melanoma Segmentation and Classification with Skin Lesion Images

1 Department of Electrical Engineering, College of Engineering, Jouf University, Sakaka 72388, Saudi Arabia
2 Department of Electrical and Electronics Engineering, Bolu Abant Izzet Baysal University, Bolu 14280, Turkey
* Authors to whom correspondence should be addressed.
Diagnostics 2023, 13(2), 262; https://doi.org/10.3390/diagnostics13020262
Submission received: 23 December 2022 / Revised: 4 January 2023 / Accepted: 9 January 2023 / Published: 10 January 2023
(This article belongs to the Special Issue Lesion Detection and Analysis Using Artificial Intelligence)

Abstract

Melanoma is known worldwide as a malignant tumor and the fastest-growing skin cancer type. It is a life-threatening disease with a high mortality rate. Automatic melanoma detection improves the early detection of the disease and the survival rate. To this end, we present a multi-task learning approach based on melanoma recognition with dermoscopy images. Firstly, an effective pre-processing approach based on max pooling, contrast, and shape filters is used to eliminate hair details and to perform image enhancement operations. Next, the lesion region is segmented with a VGGNet model-based FCNLayer architecture using the enhanced images. Later, a cropping process is performed for the detected lesions. Then, the cropped images are converted to the input size of the classifier model using the very deep super-resolution neural network approach, minimizing the decrease in image resolution. Finally, a deep learning network approach based on pre-trained convolutional neural networks is developed for melanoma classification. We used the International Skin Imaging Collaboration (ISIC) dataset, a publicly available dermoscopic skin lesion dataset, in the experimental studies. The accuracy, specificity, precision, and sensitivity obtained for lesion-region segmentation were 96.99%, 92.53%, 97.65%, and 98.41%, respectively, while the corresponding values for classification were 97.73%, 99.83%, 99.83%, and 95.67%.

1. Introduction

Skin cancer has a higher incidence than other types of cancer. There are two types of skin cancer: melanoma and non-melanoma. Melanoma is caused by the uncontrolled growth of pigmented cells (melanocytes). Deaths from melanoma skin cancer have steadily increased each year over the past years [1,2]. Based on these data, it can be said that this increase poses a significant threat to public health. Early detection is important in order to save human lives, and melanoma skin cancer has a high cure rate when detected early. For this reason, the importance of methods used in the early detection of the disease has increased. Under normal conditions, the distinction between lesioned and non-lesioned areas in melanoma skin cancer images is complex; making this distinction is not easy and requires expertise.
This difficulty may therefore cause differences of opinion among dermatologists. It is argued that, in order to solve this problem and to enable dermatologists to make both an accurate and a rapid diagnosis, an automated analysis system is needed [3,4,5,6]. Automatic segmentation of the skin surrounding melanomas is an essential step in the computerized analysis of dermoscopic images [7].
As today’s technology develops, the prevalence of deep neural networks has increased rapidly. Convolutional neural network (CNN) architectures, usually chosen for computer vision applications, possess deep-learning-based feature extraction and classification abilities [8,9,10,11]. In recent years, high-performance results have been obtained for pattern recognition and segmentation with CNNs [12,13,14,15,16]. In this vein, many studies have been carried out on the segmentation and classification of lesions occurring on the skin. In these studies [17,18,19,20,21,22,23,24], pre-trained CNN models based on transfer learning were used to classify skin lesions. On the other hand, pixel-level lesion regions were generally detected by deep learning approaches such as UNet [25,26,27,28,29,30,31], Mask R-CNN [32,33], fully convolutional networks [34,35], feature pyramid networks [36], SegNet [37,38], and transformers [39,40].
As in the current study, many studies based on segmentation and classification processes have been carried out to identify skin lesions in dermoscopy images. These studies primarily segment the lesions from dermoscopy images and subsequently classify the detected lesions. Seeja and Suresh (2019) proposed a mixed model for melanoma classification and segmentation with dermoscopic images. They first detected lesions with the UNet model and then extracted shape, color, and texture features from the segmented images. Finally, naive Bayes, SVM, and k-nearest neighbor machine learning classifiers were used for melanoma classification with these features. This approach provided 85.19% accuracy in classifying melanoma and a 77.5% Dice coefficient for lesion segmentation [41]. Ding et al. (2022) proposed a two-stage deep neural network, applying five deep architectures to lesion images detected by the UNet architecture; they achieved 90.9% accuracy on the ISIC 2017 dataset [42]. Jojoa Acosta et al. (2021) first cropped the lesions from the skin image using the Mask R-CNN architecture and then used the pre-trained ResNet152 architecture to classify the cropped lesions. This model produced 90.4% accuracy on the ISIC 2017 dataset [43]. In another study, Malibari et al. (2022) presented a deep model based on skin lesion detection and classification with a deep convolutional neural network. While pre-trained SqueezeNet architecture-based whale optimization was used for the classification step, the UNet architecture was utilized for the segmentation step. As a result, they reached 99% accuracy on the ISIC 2019 dataset for melanoma classification [44]. Jayapriya and Jacob (2020) used a hybrid structure with a deep convolutional network; they proposed a method that extracts features from lesions segmented by pre-trained fully convolutional networks and classifies them with an SVM. Experimental works showed that this hybrid approach produced an accuracy of 85.3% on ISIC 2017 and 88.92% on ISIC 2016 [45].
This paper proposes a multi-task learning network based on melanoma segmentation and classification of skin lesion images. In the segmentation phase of the proposed model, pre-processing methods were used: first, the hair details in the image were removed, and the image was clarified. Later, the lesions in the images were segmented using the VGGNet model-based FCNLayer architecture. Finally, these lesions were cropped, and the image resolution was increased using a very deep super-resolution neural network. The high-resolution images obtained were fed to the input of the classifier model. The classifier model presents a new approach based on three powerful pre-trained deep models. The ISIC database was used to test the performance of the proposed approaches. The experimental results show that the multi-task learning network model developed based on segmentation and classification provides high performance.
The contributions of the proposed deep network approach are as follows:
  • Lesion images, cropped from the lesions detected in the segmentation process, were converted to the input size of the classifier model using the very deep super-resolution (VDSR) neural network approach, and the resolution of the lesion images was thereby increased.
  • Lesions were correctly located in all dermoscopy images with the VGGNet-based FCNLayers approach. The numerical and visual results obtained from the experimental studies confirm this.
  • An effective deep network architecture is proposed, based on the combination of deep models with different structures. In experimental studies, the proposed approach was observed to achieve outstanding success in classifying melanoma.
The remainder of the study proceeds as follows. The methodology and theoretical framework are given in Section 2. The experimental results and information about the dataset are presented in Section 3. Section 4 discusses the results of the presented model in relation to previous studies, and the study is concluded in Section 5.

2. Materials and Methods

In the current study, we present a multi-task learning network based on melanoma recognition with dermoscopy images. The proposed system consists of two main stages: segmentation and classification. The segmentation phase includes operations such as the removal of hair details and the detection and cropping of the lesion region. The classification phase includes obtaining high-resolution images and classifying melanoma with a deep neural network. The general representation of the proposed system, which includes all these processes, is given in Figure 1.

2.1. Segmentation

In this study, a deep learning approach based on the segmentation of high-resolution images has been developed. This approach consists of two stages: pre-processing and detection. The pre-processing stage includes image enhancement operations that improve the segmentation results. Firstly, maximum pooling, contrast, and sharpening methods were applied to remove hair details from the skin lesion images and to clarify the images.
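As an illustration of this style of pre-processing, the sketch below (a minimal Python/OpenCV version under our own assumptions, not the authors' exact pipeline) realizes the max-pooling-like step as a morphological closing, the contrast step as CLAHE, and the sharpening step as unsharp masking; all kernel sizes and weights are illustrative.

```python
import cv2

def preprocess_dermoscopy(image_bgr, kernel_size=7):
    # Morphological closing (a local-maximum filter followed by erosion)
    # suppresses thin dark hair strands, playing the role of the
    # max-pooling-style filter described above.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    dehaired = cv2.morphologyEx(image_bgr, cv2.MORPH_CLOSE, kernel)

    # Contrast enhancement: CLAHE applied to the L channel of LAB space.
    l, a, b = cv2.split(cv2.cvtColor(dehaired, cv2.COLOR_BGR2LAB))
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    enhanced = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

    # Sharpening via unsharp masking.
    blurred = cv2.GaussianBlur(enhanced, (0, 0), sigmaX=3)
    return cv2.addWeighted(enhanced, 1.5, blurred, -0.5, 0)
```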
Then, the VGGNet-based FCNLayer approach was used to detect the lesion region in the enhanced dermoscopy images. This architecture [46] performs pixel-wise semantic segmentation with a fully convolutional network; its general structure is given in Figure 2. In the structure shown in Figure 2, the gridded rectangles represent the pooling and prediction layers, while the vertical lines represent the intermediate layers. Three models are based on the FCNLayer architecture, known as FCN-8s, FCN-16s, and FCN-32s. In the first row of Figure 2, FCN-32s upsamples the stride-32 predictions back to the image size in a single step. In the second row, FCN-16s adds a 1 × 1 convolution layer to the fourth pooling layer to create additional predictions, combines them with a ×2 upsampling of the predictions computed in the seventh convolution layer, and then upsamples the fused stride-16 predictions back to the image size. In the third row, FCN-8s upsamples the fused FCN-16s predictions by a further ×2, combines them with predictions obtained by adding a 1 × 1 convolution to the third pooling layer, and upsamples the resulting stride-8 predictions back to the image size [46].
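This skip-fusion scheme can be summarized with the following minimal PyTorch sketch of an FCN-8s head on VGG16; the torchvision layer indices, the two-class output, and the deconvolution kernel sizes are our assumptions, not the authors' exact implementation.

```python
import torch.nn as nn
from torchvision.models import vgg16

class VGGFCN8s(nn.Module):
    """Sketch of an FCN-8s head on VGG16, fusing pool3/pool4/conv7 scores."""
    def __init__(self, num_classes=2):
        super().__init__()
        features = vgg16(weights="IMAGENET1K_V1").features
        self.to_pool3 = features[:17]    # conv blocks 1-3 (stride 8)
        self.to_pool4 = features[17:24]  # conv block 4   (stride 16)
        self.to_pool5 = features[24:]    # conv block 5   (stride 32)
        # conv6/conv7 replace VGG's fully connected layers.
        self.conv67 = nn.Sequential(
            nn.Conv2d(512, 4096, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(4096, 4096, 1), nn.ReLU(inplace=True))
        # 1x1 score layers for each tapped feature map.
        self.score7 = nn.Conv2d(4096, num_classes, 1)
        self.score4 = nn.Conv2d(512, num_classes, 1)
        self.score3 = nn.Conv2d(256, num_classes, 1)
        self.up2a = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)
        self.up2b = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)
        self.up8 = nn.ConvTranspose2d(num_classes, num_classes, 16, stride=8, padding=4)

    def forward(self, x):                     # x: (N, 3, H, W), H,W % 32 == 0
        p3 = self.to_pool3(x)                 # 1/8 resolution
        p4 = self.to_pool4(p3)                # 1/16
        p7 = self.conv67(self.to_pool5(p4))   # 1/32
        fused16 = self.up2a(self.score7(p7)) + self.score4(p4)  # FCN-16s fusion
        fused8 = self.up2b(fused16) + self.score3(p3)           # FCN-8s fusion
        return self.up8(fused8)               # back to input resolution
```

FCN-16s corresponds to dropping the `score3` branch and upsampling `fused16` by ×16; FCN-32s upsamples `self.score7(p7)` by ×32 directly.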
In the current study, the detection of the lesion area in skin images was realized using the VGGNet-based FCN-32s, FCN-16s, and FCN-8s approaches. Then, cropping was performed for the detected lesions. Finally, the cropped lesion images needed to be enlarged to the input size of the classifier model. Conventionally, this operation converts images to the desired size using interpolation; however, resizing the image in this way also degrades its resolution. Accordingly, we used a deep learning approach based on a very deep super-resolution (VDSR) neural network [47]. This architecture aims to raise the image quality by re-inserting the lost details into the image, and it improves model performance by combining low-level and high-level features through a skip connection [48]. The VDSR architecture consists of cascaded convolutional layers of size 3 × 3 × 64, and the image patch it considers is 41 by 41. The general structure of this network architecture is given in Figure 3.
An example illustration of this process is given in Figure 4. In this example, a 48 × 64 cropped lesion image was converted to a 224 × 224 size by applying bilinear interpolation (Figure 4b) and the proposed approach (Figure 4c). As a result, it was observed that the image resolution was better with the proposed VDSR approach.
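A minimal sketch of this idea follows, assuming a 20-layer VDSR-style network with 3 × 3 × 64 filters and a 224 × 224 classifier input (the layer count and filter size follow the text; everything else, including the RGB input, is an illustrative assumption):

```python
import torch.nn as nn
import torch.nn.functional as F

class VDSR(nn.Module):
    """VDSR-style super-resolution sketch: the network predicts a residual
    that is added back to an interpolated (coarse) upscaled input."""
    def __init__(self, depth=20, channels=64):
        super().__init__()
        layers = [nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 3, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, lowres, out_size=(224, 224)):
        # Interpolate the cropped lesion up to the classifier's input size...
        coarse = F.interpolate(lowres, size=out_size,
                               mode="bicubic", align_corners=False)
        # ...then restore the high-frequency detail lost by interpolation
        # via the learned residual (the skip connection in Figure 3).
        return coarse + self.body(coarse)
```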

2.2. Classification

This paper proposes a deep approach based on image super-resolution and multiple pre-trained convolutional neural networks to classify skin lesions. The general flow diagram of the proposed model is given in Figure 5.
In the classification model, the learned weights of pre-trained deep architectures are used instead of developing and training a CNN model from scratch [49,50,51,52,53,54,55,56]. For this purpose, the high-performing DenseNet, GoogleNet, and MobileNet architectures were used. These architectures have structures that differ from one another; detailed information about each is given below:
  • DenseNet201: The DenseNet model is a network architecture in which every layer connects directly to the layers that follow it [57]. This architecture can reuse the features of different layers, which increases the diversity of the input to the next layer and improves performance [58]. It also provides a direct connection between any two layers with the same feature-map size and allows features to be reused while learning the model [59]. Each layer's feature maps are passed as inputs to all subsequent layers, while the feature maps of all preceding layers are treated as separate inputs. In addition, in the DenseNet model, pooling and bottleneck layers are used as transition layers to make the feature parameters more efficient and to reduce computational complexity [60,61]. The ResNet and DenseNet architectures are similar; however, in ResNet, each block receives knowledge only from the preceding block, while in DenseNet, each layer receives knowledge from all preceding layers. This difference means that the DenseNet model densely connects each layer in a feed-forward fashion [60].
  • GoogleNet: This network was developed in 2015 as a wider and deeper CNN model [62]. GoogleNet has inception modules (with 1 × 1, 3 × 3, and 5 × 5 convolution sublayers) that perform convolutions at different scales and concatenate the filter outputs for the next layer. It also has a 3 × 3 maximum pooling layer capable of operating in parallel [63,64]. These layers take data from the preceding layers and then perform these operations in parallel. To reduce computational cost, a 1 × 1 convolution is performed before these operations; in the maximum pooling branch of the inception module, however, the 1 × 1 convolution sublayer is placed after the pooling layer. In each branch of the inception layer, features that may differ from the previous data are computed, and every output is then concatenated as an input to the following layers of the CNN. This model uses inception modules instead of fully connected layers. Maximum pooling is carried out between some layers of this network to condense the information coming from important layers. In addition, in GoogleNet, an average pooling layer is placed at the end of the network [64,65,66].
  • MobileNetv2: This network implements a technique called depthwise separable convolution (DSC) and uses linear bottlenecks to alleviate the information loss that occurs in the nonlinear layers of convolution blocks [67,68]. It also introduces a new structure, called inverted residuals, to preserve information. The MobileNet architecture is based on depthwise separable convolution: each input channel is first convolved with its own spatial filter (the depthwise step), producing one output channel per filter; these channels are then stacked, and a 1 × 1 pointwise convolution combines them. Although this method produces outputs comparable to standard convolution, it reduces the number of parameters and increases efficiency [67,69]. A minimal sketch of this building block is given after this list.
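The following sketch shows the depthwise separable block described in the MobileNetv2 item above (a generic PyTorch version; the ReLU6 activation and batch normalization are standard MobileNet conventions, not details reported in this paper). For a 3 × 3 kernel, it needs 9·C_in + C_in·C_out weights instead of the 9·C_in·C_out of a standard convolution.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel spatial filter
    followed by a 1x1 pointwise channel mix."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution combines the stacked channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn1, self.bn2 = nn.BatchNorm2d(in_ch), nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU6(inplace=True)

    def forward(self, x):
        x = self.act(self.bn1(self.depthwise(x)))
        return self.act(self.bn2(self.pointwise(x)))
```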
In the proposed approach, first, the final fully connected (FC) layers of these architectures were used, and 1000 deep features were extracted from each dermoscopy image per network. The pseudocode based on these FC layers is given in Equation (1).
$$
\begin{aligned}
\mathrm{feat\_Dense}_k &= \mathrm{activation}\big(\mathrm{DenseNet(pretrained\ parameters)},\ \mathrm{image}_k,\ \text{'fc1000'}\big)\\
\mathrm{feat\_Google}_k &= \mathrm{activation}\big(\mathrm{GoogleNet(pretrained\ parameters)},\ \mathrm{image}_k,\ \text{'loss3-classifier'}\big)\\
\mathrm{feat\_Mobile}_k &= \mathrm{activation}\big(\mathrm{MobileNet(pretrained\ parameters)},\ \mathrm{image}_k,\ \text{'Logits'}\big),\qquad k = 1, 2, 3, \ldots, N
\end{aligned}
\tag{1}
$$
where N represents the number of images in the dataset. The deep features obtained using Equation (1) were combined using a global average pooling layer, so that 1000 features were obtained for each image (Equation (2)).
$$
\mathrm{feat}_{k,i} = \frac{1}{3}\left(\mathrm{feat\_Dense}_{k,i} + \mathrm{feat\_Google}_{k,i} + \mathrm{feat\_Mobile}_{k,i}\right),\qquad i = 1, \ldots, 1000,\quad k = 1, \ldots, N
\tag{2}
$$
Finally, the N × 1000 feature matrix is given as input to a feature input layer. This layer is followed by fully connected, ReLU, fully connected, and softmax layers, respectively. As a result, the training process was carried out using the developed deep learning network architecture.
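Putting Equations (1) and (2) and the classifier head together, a minimal PyTorch sketch of the fusion model might look as follows. The hidden width of the FC layer is our assumption, since the paper does not report it, and the 1000-way ImageNet heads of the three torchvision backbones stand in for the named FC layers of Equation (1); the final softmax is folded into the cross-entropy loss during training.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet201, googlenet, mobilenet_v2

class FusionClassifier(nn.Module):
    """Sketch of the fusion scheme: the 1000-dim outputs of three
    pre-trained backbones (Eq. (1)) are averaged element-wise (Eq. (2))
    and fed to an FC-ReLU-FC(-softmax) head."""
    def __init__(self, num_classes=2, hidden=256):
        super().__init__()
        self.backbones = nn.ModuleList([
            densenet201(weights="IMAGENET1K_V1"),
            googlenet(weights="IMAGENET1K_V1"),
            mobilenet_v2(weights="IMAGENET1K_V1"),
        ])  # each ends in a 1000-way ImageNet classifier
        self.head = nn.Sequential(
            nn.Linear(1000, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, num_classes))  # softmax applied in the loss

    def forward(self, x):                    # x: (N, 3, 224, 224)
        feats = torch.stack([net(x) for net in self.backbones], dim=0)
        fused = feats.mean(dim=0)            # element-wise average, Eq. (2)
        return self.head(fused)
```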

3. Results

In the current study, we presented a multi-task learning network based on melanoma segmentation and classification with dermoscopy images. In the experimental works, the confusion matrix was used to calculate the performance of the proposed segmentation and classification models.
In the experimental studies, the test and training sets for the two datasets were randomly divided, one time only, as 20% and 80%, respectively. In this way, by using the same test and training sets for all applications, the effect of the random split on performance was minimized.

3.1. Dataset

In the experimental studies, the widely used, publicly available HAM10000 dataset was used to evaluate the performance of the proposed classification and segmentation models. This dataset consists of a total of 10,015 dermoscopic images belonging to seven classes: benign keratosis, melanoma, basal cell carcinoma, vascular lesion, dermatofibroma, melanocytic nevi, and actinic keratosis. It is also an unbalanced dataset, containing a different number of images for each class. In the current study, we performed experimental studies for two classes: 1113 melanoma images and 8902 non-melanoma images. The data imbalance between these two classes can lead to overfitting during the training phase. Therefore, we used data augmentation methods (rotation, flipping, contrast, and brightness adjustment) to equalize the class sizes. In this process, the separation of training and test data was performed on the raw dataset, and each of these two subsets was then balanced using the data augmentation methods. After these processes, the number of melanoma images was increased to 8904, so that a dataset with a total of 17,806 images was obtained. Figure 6 shows sample dermoscopy images: (a) melanoma, (b) non-melanoma.
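The balancing step can be sketched as follows (torchvision transforms on PIL images; the augmentation ranges are illustrative assumptions, while the target count of 8904 follows the text). Note that it is applied only after the train/test separation, so augmented copies never leak across the split.

```python
import torchvision.transforms as T

# Augmentations of the kind described above (rotation, flip, contrast,
# brightness); the exact ranges are illustrative assumptions.
augment = T.Compose([
    T.RandomRotation(degrees=30),
    T.RandomHorizontalFlip(p=0.5),
    T.RandomVerticalFlip(p=0.5),
    T.ColorJitter(brightness=0.2, contrast=0.2),
])

def balance_minority(melanoma_images, target_count=8904):
    """Oversample the minority class with augmented copies until it
    matches the majority class size."""
    out = list(melanoma_images)
    i = 0
    while len(out) < target_count:
        out.append(augment(melanoma_images[i % len(melanoma_images)]))
        i += 1
    return out
```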

3.2. Result of Skin Lesion Segmentation

In this experimental study, we used the VGGNet-FCN-8s, VGGNet-FCN-16s, and VGGNet-FCN-32s models, based on the pre-trained VGG16 architecture, for lesion segmentation. These approaches were trained with an epoch size of 200, a batch size of 1, and the Adam optimization method. The TP, FP, FN, and TN values obtained from the confusion matrix for each model are given in Table 1.
Performance measures such as accuracy, precision, and sensitivity were calculated from the confusion matrices given in Table 1; the results are given in Table 2.
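These measures follow directly from the pixel counts in Table 1; for example, the following sketch reproduces the VGGNet-FCN16s row of Table 2:

```python
def segmentation_metrics(tp, fp, fn, tn):
    # Pixel-wise metrics computed from the confusion matrix.
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    return accuracy, precision, sensitivity

# VGGNet-FCN16s counts from Table 1:
acc, prec, sens = segmentation_metrics(48_483_463, 1_168_338,
                                        782_699, 14_481_438)
print(f"{acc:.2%} {prec:.2%} {sens:.2%}")  # 96.99% 97.65% 98.41%
```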
According to the results given in Table 2, it was observed that the best performance among the proposed approaches was obtained with VGGNet-FCN16s. On the other hand, the VGGNet-FCN32s model produced 96.11% accuracy, 96.71% precision, and 98.17% sensitivity values, while VGGNet-FCN8s produced 93.61%, 92.59%, and 98.99%, respectively. In addition, sample visual prediction results, based on VGGNet-FCN approaches, are given in Figure 7.
According to the visual estimation results given in Figure 7, it is clearly observed that the VGGNet-FCN16s approach is more successful than other approaches. In addition, while the VGGNet-FCN8s model correctly detected the locations of lesion regions, it also detected non-lesion regions as lesions.

3.3. Result of Skin Lesion Classification

In the classification stage, the individual performances of pre-trained deep architectures such as DenseNet, MobileNet, and GoogleNet, based on the transfer learning approach, were calculated using the cropped images obtained from the segmentation process. These results are given in Table 3. The image size given to the input of each deep architecture was adjusted using the VDSR network approach, so that a possible resolution reduction was prevented. In these experimental studies, the epoch size, batch size, and optimization method were set to 100, 32, and SGDM (stochastic gradient descent with momentum), respectively.
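A minimal training-loop sketch with these settings follows; the learning rate, momentum value, and data loader are our assumptions, as the paper reports only the epoch size, batch size, and optimizer family.

```python
import torch

model = FusionClassifier(num_classes=2)  # fusion sketch from Section 2.2
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)  # SGDM
criterion = torch.nn.CrossEntropyLoss()

# train_loader: an assumed torch.utils.data.DataLoader over the balanced
# training split, with batch_size=32.
for epoch in range(100):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```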
According to the results given in Table 3, the best performance among the deep models was obtained with DenseNet at 95.51%. In addition, the MobileNet and GoogleNet models produced 95.06% and 93.07% accuracy, respectively. The confusion matrices of these models are given in Figure 8.
Finally, the three deep architectures with different structures used in this study were combined according to the proposed classifier model given in Figure 5, and the performances of the combinations were calculated. The obtained performance values are given in Table 4.
As can be seen from Table 4, the deep learning network approach developed based on the three deep models achieved the best accuracy of 97.73%. The second-best score was obtained by combining the MobileNet and DenseNet architectures. In addition, the confusion matrix and ROC diagram of the proposed approach (D+G+M) are given in Figure 9.

4. Discussion

Considering the outstanding achievements of deep learning algorithms, many studies have been carried out in the last 3–4 years on the segmentation and classification of melanoma. In these studies, segmentation and classification processes based on deep convolutional neural networks were generally performed using ISIC datasets. The performances of previous studies based on the HAM10000 dataset are compared with the proposed model in Table 5.
When the previous studies given in Table 5 are examined, it can be seen that either classification or segmentation was generally performed on dermoscopy images. Deep learning models such as UNet, FCN, and SegNet were generally used in studies based on segmentation. As the experimental findings of the current study show, the FCNLayer architecture produced more successful results than the other models considered. On the other hand, it is known that networks trained from scratch provide lower performance than pre-trained deep architectures; therefore, pre-trained deep models were used in most studies based on skin lesion classification. In the current study, pre-trained deep models based on the transfer learning approach were likewise preferred. Accordingly, the developed hybrid deep learning network model, with an accuracy score of 97.73%, was more successful than those of other studies.
This paper presents a multi-task learning network covering both segmentation and classification processes. There are a few studies similar to the proposed approach, such as those of Khan et al. (2021) [78] and Khan et al. (2021) [79]. When the results obtained for both processes are examined against these studies, it is clearly observed that the proposed approach provides superior performance.

5. Conclusions

The current study proposed a novel approach, based on a multi-task learning network, for melanoma recognition with dermoscopy images. This model includes a hybrid approach based on segmentation and classification. In the segmentation phase, hair details in dermoscopy images were removed, and lesion regions were detected with the VGGNet-based FCNLayers approach. The experimental results showed high performance, with 97.65% precision and 98.41% sensitivity scores. In addition, when the visual prediction results were examined, it was observed that the developed approach correctly detected the positions of the lesions in all images. On the other hand, lesion images cropped from the images detected in the segmentation process were converted to the input size of the classifier model using the very deep super-resolution (VDSR) neural network approach, and the resolution of the lesion images was thereby raised. Then, the proposed classifier model, based on three powerful pre-trained deep architectures with different structures, was tested using the ISIC dataset. The experimental results displayed high performance, with an accuracy score of approximately 97.73%. As a result, the deep learning network approaches proposed for the segmentation and classification processes were observed to be more successful than those in previous studies.
In future studies, we will focus on optimization methods for the variables used in the proposed approach and for the parameters that affect performance. In addition, the transformer structure will be examined, and adapting the proposed approach to it will be considered.

Author Contributions

Conceptualization, F.A. and A.A.; formal analysis, K.P.; investigation, K.P.; resources, A.A.; writing—original draft preparation, F.A.; writing—review and editing, K.P.; visualization, A.A.; supervision, F.A. and K.P.; project administration, F.A. and A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Deanship of Scientific Research at Jouf University under Grant Number (DSR2022-RG-0112).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The introduced public datasets are available: HAM10000 dataset at https://www.kaggle.com/datasets/kmader/skin-cancer-mnist-ham10000 (accessed on 1 August 2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Global Coalition; Euromelanoma. 2020 Melanoma Skin Cancer Report: Stemming the Global Epidemic. Available online: https://melanomapatients.org.au/wp-content/uploads/2020/04/2020-campaign-report-GC%20version-MPA_1.pdf (accessed on 14 July 2022).
  2. Wen, H. II-FCN for skin lesion analysis towards melanoma detection. arXiv 2017, arXiv:1702.08699.
  3. di Ruffano, L.F.; Takwoingi, Y.; Dinnes, J.; Chuchu, N.; E Bayliss, S.; Davenport, C.; Matin, R.N.; Godfrey, K.; O’Sullivan, C.; Gulati, A.; et al. Computer-assisted diagnosis techniques (dermoscopy and spectroscopy-based) for diagnosing skin cancer in adults. Cochrane Database Syst. Rev. 2018, 2018, CD013186.
  4. Bi, L.; Kim, J.; Ahn, E.; Kumar, A.; Fulham, M.; Feng, D. Dermoscopic Image Segmentation via Multistage Fully Convolutional Networks. IEEE Trans. Biomed. Eng. 2017, 64, 2065–2074.
  5. Daldal, N.; Cömert, Z.; Polat, K. Automatic determination of digital modulation types with different noises using Convolutional Neural Network based on time–frequency information. Appl. Soft Comput. 2020, 86, 105834.
  6. Daldal, N.; Sengur, A.; Polat, K.; Cömert, Z. A novel demodulation system for base band digital modulation signals based on the deep long short-term memory model. Appl. Acoust. 2020, 166, 107346.
  7. Yuan, Y.; Chao, M.; Lo, Y.-C. Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks with Jaccard Distance. IEEE Trans. Med. Imaging 2017, 36, 1876–1886.
  8. Alqudah, A.; Alqudah, A.M. Artificial Intelligence Hybrid System for Enhancing Retinal Diseases Classification Using Automated Deep Features Extracted from OCT Images. Int. J. Intell. Syst. Appl. Eng. 2021, 9, 91–100.
  9. Mahbub, K.; Biswas, M.; Gaur, L.; Alenezi, F.; Santosh, K. Deep features to detect pulmonary abnormalities in chest X-rays due to infectious diseaseX: Covid-19, pneumonia, and tuberculosis. Inf. Sci. 2022, 592, 389–401.
  10. Masad, I.S.; Alqudah, A.; Alqudah, A.M.; Almashaqbeh, S. A hybrid deep learning approach towards building an intelligent system for pneumonia detection in chest X-ray images. Int. J. Electr. Comput. Eng. 2021, 11, 2088–8708.
  11. Obeidat, Y.; Alqudah, A.M. A Hybrid Lightweight 1D CNN-LSTM Architecture for Automated ECG Beat-Wise Classification. Trait. du Signal 2021, 38, 1281–1291.
  12. Alqudah, A.M.; Algharib, H.M.; Algharib, A.M.; Algharib, H.M. Computer aided diagnosis system for automatic two stages classification of breast mass in digital mammogram images. Biomed. Eng. Appl. Basis Commun. 2019, 31, 1950007.
  13. Abu Qasmieh, I.; Alquran, H.; Alqudah, A.M. Occluded iris classification and segmentation using self-customized artificial intelligence models and iterative randomized Hough transform. Int. J. Electr. Comput. Eng. (IJECE) 2021, 11, 4037–4049.
  14. Alqudah, A.M. Ovarian Cancer Classification Using Serum Proteomic Profiling and Wavelet Features A Comparison of Machine Learning and Features Selection Algorithms. J. Clin. Eng. 2019, 44, 165–173.
  15. Al-Issa, Y.; Alqudah, A.M. A lightweight hybrid deep learning system for cardiac valvular disease classification. Sci. Rep. 2022, 12, 1–20.
  16. Alqudah, A.; Alqudah, A.M.; Alquran, H.; Al-Zoubi, H.R.; Al-Qodah, M.; Al-Khassaweneh, M.A. Recognition of handwritten arabic and hindi numerals using convolutional neural networks. Appl. Sci. 2021, 11, 1573.
  17. Benyahia, S.; Meftah, B.; Lézoray, O. Multi-features extraction based on deep learning for skin lesion classification. Tissue Cell 2021, 74, 101701.
  18. Alenezi, F.; Armghan, A.; Polat, K. A multi-stage melanoma recognition framework with deep residual neural network and hyperparameter optimization-based decision support in dermoscopy images. Expert Syst. Appl. 2023, 215, 119352.
  19. Abayomi-Alli, O.O.; Damasevicius, R.; Misra, S.; Maskeliunas, R.; Abayomi-Alli, A. Malignant skin melanoma detection using image augmentation by oversampling in nonlinear lower-dimensional embedding manifold. Turk. J. Electr. Eng. Comput. Sci. 2021, 29, 2600–2614.
  20. Alqudah, A.M.; Alquraan, H.; Abu Qasmieh, I. Segmented and Non-Segmented Skin Lesions Classification Using Transfer Learning and Adaptive Moment Learning Rate Technique Using Pretrained Convolutional Neural Network. J. Biomimetics Biomater. Biomed. Eng. 2019, 42, 67–78.
  21. Khan, M.A.; Sharif, M.; Akram, T.; Bukhari, S.A.C.; Nayak, R.S. Developed Newton-Raphson based deep features selection framework for skin lesion recognition. Pattern Recognit. Lett. 2019, 129, 293–303.
  22. Ratul, M.A.R.; Mozaffari, M.H.; Lee, W.S.; Parimbelli, E. Skin lesions classification using deep learning based on dilated convolution. BioRxiv 2020, 860700.
  23. Mahbod, A.; Schaefer, G.; Wang, C.; Ecker, R.; Ellinge, I. Skin lesion classification using hybrid deep neural networks. In Proceedings of the ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 1229–1233.
  24. Alenezi, F.; Armghan, A.; Polat, K. Wavelet transform based deep residual neural network and ReLU based Extreme Learning Machine for skin lesion classification. Expert Syst. Appl. 2023, 213, 119064.
  25. Li, W.; Raj, A.N.J.; Tjahjadi, T.; Zhuang, Z. Digital hair removal by deep learning for skin lesion segmentation. Pattern Recognit. 2021, 117, 107994.
  26. Phan, T.-D.-T.; Kim, S.H. Skin Lesion Segmentation by U-Net with Adaptive Skip Connection and Structural Awareness. Appl. Sci. 2021, 11, 4528.
  27. Nguyen, D.K.; Tran, T.-T.; Nguyen, C.P.; Pham, V.-T. Skin Lesion Segmentation based on Integrating EfficientNet and Residual block into U-Net Neural Network. In Proceedings of the 2020 5th International Conference on Green Technology and Sustainable Development (GTSD), Ho Chi Minh City, Vietnam, 27–28 November 2020; pp. 366–371.
  28. Thanh, D.N.; Hai, N.H.; Hieu, L.M.; Tiwari, P.; Prasath, V.S. Skin lesion segmentation method for dermoscopic images with convolutional neural networks and semantic segmentation. Comput. Opt. 2021, 45, 122–129.
  29. Al Nazi, Z.; Abir, T.A. Automatic skin lesion segmentation and melanoma detection: Transfer learning approach with u-net and dcnn-svm. In Proceedings of International Joint Conference on Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2020; pp. 371–381.
  30. Zafar, K.; Gilani, S.O.; Waris, A.; Ahmed, A.; Jamil, M.; Khan, M.N.; Sohail Kashif, A. Skin Lesion Segmentation from Dermoscopic Images Using Convolutional Neural Network. Sensors 2020, 20, 1601.
  31. Tong, X.; Wei, J.; Sun, B.; Su, S.; Zuo, Z.; Wu, P. ASCU-Net: Attention Gate, Spatial and Channel Attention U-Net for Skin Lesion Segmentation. Diagnostics 2021, 11, 501.
  32. Khan, M.A.; Zhang, Y.D.; Sharif, M.; Akram, T. Pixels to classes: Intelligent learning framework for multi-class skin lesion localization and classification. Comput. Electr. Eng. 2021, 90, 106956.
  33. Goyal, M.; Oakley, A.; Bansal, P.; Dancey, D.; Yap, M.H. Skin Lesion Segmentation in Dermoscopic Images with Ensemble Deep Learning Methods. IEEE Access 2019, 8, 4171–4181.
  34. Shan, P.; Wang, Y.; Fu, C.; Song, W.; Chen, J. Automatic skin lesion segmentation based on FC-DPN. Comput. Biol. Med. 2020, 123, 103762.
  35. Kaymak, R.; Kaymak, C.; Ucar, A. Skin lesion segmentation using fully convolutional networks: A comparative experimental study. Expert Syst. Appl. 2020, 161, 113742.
  36. Khouloud, S.; Ahlem, M.; Fadel, T.; Amel, S. W-net and inception residual network for skin lesion segmentation and classification. Appl. Intell. 2021, 52, 3976–3994.
  37. Brahmbhatt, P.; Rajan, S.N. Skin Lesion Segmentation using SegNet with Binary CrossEntropy. In Proceedings of the International Conference on Artificial Intelligence and Speech Technology (AIST2019), Delhi, India, 14–15 November 2019; pp. 14–15.
  38. Saini, S.; Jeon, Y.S.; Feng, M. B-SegNet: Branched-SegMentor network for skin lesion segmentation. In Proceedings of the Conference on Health, Inference, and Learning, Virtual, 8–10 April 2021; pp. 214–221.
  39. Wang, J.; Wei, L.; Wang, L.; Zhou, Q.; Zhu, L.; Qin, J. Boundary-aware transformers for skin lesion segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Virtual, 27 September–1 October 2021; pp. 206–216.
  40. Wu, H.; Chen, S.; Chen, G.; Wang, W.; Lei, B.; Wen, Z. FAT-Net: Feature adaptive transformers for automated skin lesion segmentation. Med. Image Anal. 2021, 76, 102327.
  41. Seeja, R.D.; Suresh, A. Deep learning based skin lesion segmentation and classification of Melanoma using support vector machine (SVM). Asian Pac. J. Cancer Prev. 2019, 20, 1555.
  42. Ding, J.; Song, J.; Li, J.; Tang, J.; Guo, F. Two-Stage Deep Neural Network via Ensemble Learning for Melanoma Classification. Front. Bioeng. Biotechnol. 2022, 9, 758495.
  43. Jojoa Acosta, M.F.; Caballero Tovar, L.Y.; Garcia-Zapirain, M.B.; Percybrooks, W.S. Melanoma diagnosis using deep learning techniques on dermatoscopic images. BMC Med. Imaging 2021, 21, 6.
  44. Malibari, A.A.; Alzahrani, J.S.; Eltahir, M.M.; Malik, V.; Obayya, M.; Al Duhayyim, M.; Neto, A.V.L.; de Albuquerque, V.H.C. Optimal deep neural network-driven computer aided diagnosis model for skin cancer. Comput. Electr. Eng. 2022, 103, 108318.
  45. Jayapriya, K.; Jacob, I.J. Hybrid fully convolutional networks-based skin lesion segmentation and melanoma detection using deep feature. Int. J. Imaging Syst. Technol. 2019, 30, 348–357.
  46. Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
  47. Kim, J.; Lee, J.K.; Lee, K.M. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654.
  48. Ooi, Y.; Ibrahim, H. Deep Learning Algorithms for Single Image Super-Resolution: A Systematic Review. Electronics 2021, 10, 867.
  49. Imak, A.; Celebi, A.; Siddique, K.; Turkoglu, M.; Sengur, A.; Salam, I. Dental Caries Detection Using Score-Based Multi-Input Deep Convolutional Neural Network. IEEE Access 2022, 10, 18320–18329.
  50. Alqudah, A.; Alqudah, A.M. Sliding window based deep ensemble system for breast cancer classification. J. Med. Eng. Technol. 2021, 45, 313–323.
  51. Turkoglu, M. Defective egg detection based on deep features and Bidirectional Long-Short-Term-Memory. Comput. Electron. Agric. 2021, 185, 106152.
  52. Alqudah, A.M.; Qazan, S.; Al-Ebbini, L.; Alquran, H.; Abu Qasmieh, I. ECG heartbeat arrhythmias classification: A comparison study between different types of spectrum representation and convolutional neural networks architectures. J. Ambient Intell. Humaniz. Comput. 2021, 13, 4877–4907.
  53. Türkoğlu, M. Brain Tumor Detection using a combination of Bayesian optimization based SVM classifier and fine-tuned based deep features. Eur. J. Sci. Technol. 2021, 27, 251–258.
  54. Alenezi, F.; Öztürk, Ş.; Armghan, A.; Polat, K. An effective hashing method using W-Shaped contrastive loss for imbalanced datasets. Expert Syst. Appl. 2022, 204, 117612.
  55. Ağdaş, M.T.; Türkoğlu, M.; Gülseçen, S. Deep Neural Networks Based on Transfer Learning Approaches to Classification of Gun and Knife Images. Sak. Univ. J. Comput. Inf. Sci. 2021, 4, 131–141.
  56. Uzen, H.; Turkoglu, M.; Hanbay, D. Texture defect classification with multiple pooling and filter ensemble based on deep neural network. Expert Syst. Appl. 2021, 175, 114838.
  57. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 June 2017; pp. 4700–4708.
  58. Jaiswal, A.; Gianchandani, N.; Singh, D.; Kumar, V.; Kaur, M. Classification of the COVID-19 infected patients using DenseNet201 based deep transfer learning. J. Biomol. Struct. Dyn. 2020, 39, 5682–5689.
  59. Jasil, S.G.; Ulagamuthalvi, V. Skin lesion classification using pre-trained DenseNet201 deep neural network. In Proceedings of the 2021 3rd International Conference on Signal Processing and Communication (ICPSC), Tamil Nadu, India, 13–14 May 2021; pp. 393–396.
  60. Nguyen, L.D.; Lin, D.; Lin, Z.; Cao, J. Deep CNNs for microscopic image classification by exploiting transfer learning and feature concatenation. In Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018.
  61. Goceri, E. Deep learning based classification of facial dermatological disorders. Comput. Biol. Med. 2020, 128, 104118.
  62. Szegedy, C. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
  63. Ballester, P.; Araujo, R. On the Performance of GoogLeNet and AlexNet Applied to Sketches. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016.
  64. Singla, A.; Yuan, L.; Ebrahimi, T. Food/Non-food Image Classification and Food Categorization using Pre-Trained GoogLeNet Model. In Proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management, Amsterdam, The Netherlands, 16 October 2016.
  65. Anand, R.; Shanthi, T.; Nithish, M.S.; Lakshman, S. Face Recognition and Classification Using GoogleNET Architecture. In Soft Computing for Problem Solving; Springer: Singapore, 2020; pp. 261–269.
  66. Yilmaz, E.; Trocan, M. A modified version of GoogLeNet for melanoma diagnosis. J. Inf. Telecommun. 2021, 5, 395–405.
  67. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520.
  68. Indraswari, R.; Rokhana, R.; Herulambang, W. Melanoma image classification based on MobileNetV2 network. Procedia Comput. Sci. 2022, 197, 198–207.
  69. Dong, K.; Zhou, C.; Ruan, Y.; Li, Y. MobileNetV2 model for image classification. In Proceedings of the 2020 2nd International Conference on Information Technology and Computer Application (ITCA), Guangzhou, China, 18–20 December 2020; pp. 476–480.
  70. Alam, T.M.; Shaukat, K.; Khan, W.A.; Hameed, I.A.; Almuqren, L.A.; Raza, M.A.; Aslam, M.; Luo, S. An Efficient Deep Learning-Based Skin Cancer Classifier for an Imbalanced Dataset. Diagnostics 2022, 12, 2115.
  71. Srinivasu, P.N.; SivaSai, J.G.; Ijaz, M.F.; Bhoi, A.K.; Kim, W.; Kang, J.J. Classification of skin disease using deep learning neural networks with MobileNet V2 and LSTM. Sensors 2021, 21, 2852.
  72. Dhivyaa, C.R.; Sangeetha, K.; Balamurugan, M.; Amaran, S.; Vetriselvi, T.; Johnpaul, P. Skin lesion classification using decision trees and random forest algorithms. J. Ambient Intell. Humaniz. Comput. 2020, 1–13.
  73. Bibi, A.; Khan, M.A.; Javed, M.Y.; Tariq, U.; Kang, B.-G.; Nam, Y.; Mostafa, R.R.; Sakr, R.H. Skin Lesion Segmentation and Classification Using Conventional and Deep Learning Based Framework. Comput. Mater. Contin. 2022, 71, 2477–2495.
  74. Barın, S.; Güraksın, G.E. An automatic skin lesion segmentation system with hybrid FCN-ResAlexNet. Eng. Sci. Technol. Int. J. 2022, 34, 101174.
  75. Jin, Q.; Cui, H.; Sun, C.; Meng, Z.; Su, R. Cascade knowledge diffusion network for skin lesion diagnosis and segmentation. Appl. Soft Comput. 2020, 99, 106881.
  76. Lei, B.; Xia, Z.; Jiang, F.; Jiang, X.; Ge, Z.; Xu, Y.; Qin, J.; Chen, S.; Wang, T.; Wang, S. Skin lesion segmentation via generative adversarial networks with dual discriminators. Med. Image Anal. 2020, 64, 101716.
  77. Hussain, R.; Basak, H. RecU-Net++: Improved Utilization of Receptive Fields in U-Net++ for Skin Lesion Segmentation. In Proceedings of the 2021 IEEE 18th India Council International Conference (INDICON), Guwahati, India, 19–21 December 2021; pp. 1–6.
  78. Khan, M.; Sharif, M.; Akram, T.; Damaševičius, R.; Maskeliūnas, R. Skin Lesion Segmentation and Multiclass Classification Using Deep Learning Features and Improved Moth Flame Optimization. Diagnostics 2021, 11, 811.
  79. Khan, M.A.; Muhammad, K.; Sharif, M.; Akram, T.; de Albuquerque, V.H.C. Multi-Class Skin Lesion Detection and Classification via Teledermatology. IEEE J. Biomed. Health Inform. 2021, 25, 4267–4275.
Figure 1. The proposed multi-task learning network.
Figure 2. The structure of the FCNLayer architecture [46].
Figure 3. The structure of the VDSR architecture [47].
Figure 4. Size enhancement samples: (a) original image; (b) bilinear interpolation; (c) the proposed VDSR approach.
Figure 5. The structure of the proposed classifier model.
Figure 6. Sample dermoscopy images: (a) melanoma; (b) non-melanoma.
Figure 7. Visual prediction results based on sample dermoscopy images.
Figure 8. The confusion matrices of the pre-trained deep models: (a) DenseNet; (b) MobileNet; (c) GoogleNet.
Figure 9. The confusion matrix (a) and ROC diagram (b) of the proposed classifier approach.
Table 1. Confusion matrix values (pixels) of the proposed approaches.

                 TP          FP         FN       TN
VGGNet-FCN8s     45,969,975  3,681,826  466,798  14,797,339
VGGNet-FCN16s    48,483,463  1,168,338  782,699  14,481,438
VGGNet-FCN32s    48,019,878  1,631,923  896,535  14,367,602
Table 2. The performance results (%) of the VGGNet-FCN-based approaches.

                 Accuracy  Precision  Sensitivity
VGGNet-FCN8s     93.61     92.59      98.99
VGGNet-FCN16s    96.99     97.65      98.41
VGGNet-FCN32s    96.11     96.71      98.17
Table 3. Classification results (%) of the pre-trained deep models.

           Accuracy  Specificity  Precision  Sensitivity
DenseNet   95.51     97.05        97.02      94.01
MobileNet  95.06     97.67        97.60      92.51
GoogleNet  93.07     98.01        97.85      88.24
Table 4. Classification results (%) of the proposed deep approaches (DenseNet: D, GoogleNet: G, MobileNet: M).

             Accuracy  Specificity  Precision  Sensitivity
D+G          95.84     99.61        99.56      91.32
G+M          96.35     98.45        98.33      94.17
M+D          97.16     99.78        99.76      94.46
D+G+M (our)  97.73     99.83        99.83      95.67
Table 5. Comparison of the proposed model's segmentation and classification results (%) with previous studies.

References                      Task            Accuracy  Specificity  Precision  Sensitivity
Alam et al. (2022) [70]         Classification  91        -            -          -
Srinivasu et al. (2021) [71]    Classification  90.21     95.1         -          92.24
Dhivyaa et al. (2020) [72]      Classification  97.3      -            -          -
Bibi et al. (2022) [73]         Classification  96.7      -            94.48      -
Barın and Güraksın (2022) [74]  Segmentation    94.65     87.86        -          95.85
Wu et al. (2022) [40]           Segmentation    95.78     96.99        -          91
Jin et al. (2021) [75]          Segmentation    93.4      90.4         -          96.7
Lei et al. (2020) [76]          Segmentation    92.9      91.1         -          95.3
Hussain and Basak (2021) [77]   Segmentation    -         93.8         -          94.3
Khan et al. (2021) [78]         Segmentation    92.69     -            -          -
                                Classification  90.67     -            -          90.2
Khan et al. (2021) [79]         Segmentation    92.25     -            -          -
                                Classification  88.39     -            -          -
Our method (2022)               Segmentation    96.99     92.53        97.65      98.41
                                Classification  97.73     99.83        99.83      95.67
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
