A Novel Multi-Task Learning Network Based on Melanoma Segmentation and Classification with Skin Lesion Images
Abstract
1. Introduction
- Lesion images, cropped from the regions detected in the segmentation stage, were upscaled to the input size of the classifier model using the very-deep super-resolution neural network approach, raising the resolution of the lesion images.
- Lesions were correctly localized in all dermoscopy images with the VGGNet-based FCN approach; the numerical and visual results obtained from the experimental studies confirm this.
- In this paper, an effective deep network architecture is proposed, based on combining deep models with different structures. In the experimental studies, the proposed approach achieved outstanding success in classifying melanoma.
2. Materials and Methods
2.1. Segmentation
2.2. Classification
- DenseNet201: DenseNet is a network architecture in which every layer is directly connected to all subsequent layers [57]. This architecture reuses the features of different layers, which increases the diversity of the input to each subsequent layer and improves performance [58]. It also provides a direct connection between any two layers with the same feature-map size and allows features to be reused while training the model [59]. Each layer's feature maps are passed as inputs to all subsequent layers, and the feature maps of all preceding layers are treated as separate inputs. In addition, the DenseNet model uses pooling layers and bottleneck blocks in its transition layers to make the feature parameters more efficient and reduce computational complexity [60,61]. ResNet and DenseNet have similar architectures; however, in the ResNet architecture each block receives information only from the preceding block, whereas in the DenseNet architecture each layer keeps receiving information from all preceding layers. The distinguishing feature of the DenseNet model is that it connects each layer to every subsequent layer in a feed-forward fashion [60].
- GoogleNet: This network was introduced in 2015 as a wider and deeper CNN model [62]. GoogLeNet uses inception modules, whose 1 × 1, 3 × 3, and 5 × 5 convolution sublayers perform convolutions at different scales and concatenate their filter outputs for the next layer. Each module also contains a 3 × 3 maximum pooling layer, and the branches operate in parallel [63,64]. These branches receive data from the preceding layers and process it concurrently. To reduce computational cost, a 1 × 1 convolution is applied before the larger convolutions, while in the pooling branch the 1 × 1 convolution is placed after the maximum pooling layer. Each branch of the inception module computes features that may differ from the previous data, and every branch output is concatenated to form the input of the subsequent layers of the CNN. The model uses inception modules instead of fully connected layers. Maximum pooling between some layers reduces the spatial size of the feature maps passed on, and an average pooling layer is placed at the end of the network [64,65,66].
- MobileNetv2: This network implements depthwise separable convolutions (DSC) and uses linear bottlenecks to mitigate the information loss that nonlinear layers cause in the convolution blocks [67,68]. It also introduces a new structure, called inverted residuals, to preserve information. The MobileNet architecture is based on depthwise separable convolution, which factorizes a standard convolution into two steps: a depthwise convolution that applies a single filter to each input channel, producing one output channel per filter, and a 1 × 1 pointwise convolution that combines the resulting stacked channels into the output channels. Although this method produces outputs comparable to a standard convolution, it reduces the number of parameters and increases efficiency [67,69].
3. Results
3.1. Dataset
3.2. Result of Skin Lesion Segmentation
3.3. Result of Skin Lesion Classification
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Global Coalition. 2020 Melanoma Skin Cancer Report: Stemming the Global Epidemic. Euromelanoma, n.d. Available online: https://melanomapatients.org.au/wp-content/uploads/2020/04/2020-campaign-report-GC%20version-MPA_1.pdf (accessed on 14 July 2022).
- Wen, H. II-FCN for skin lesion analysis towards melanoma detection. arXiv 2017, arXiv:1702.08699. [Google Scholar]
- di Ruffano, L.F.; Takwoingi, Y.; Dinnes, J.; Chuchu, N.; E Bayliss, S.; Davenport, C.; Matin, R.N.; Godfrey, K.; O’Sullivan, C.; Gulati, A.; et al. Computer-assisted diagnosis techniques (dermoscopy and spectroscopy-based) for diagnosing skin cancer in adults. Cochrane Database Syst. Rev. 2018, 2018, CD013186. [Google Scholar] [CrossRef]
- Bi, L.; Kim, J.; Ahn, E.; Kumar, A.; Fulham, M.; Feng, D. Dermoscopic Image Segmentation via Multistage Fully Convolutional Networks. IEEE Trans. Biomed. Eng. 2017, 64, 2065–2074. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Daldal, N.; Cömert, Z.; Polat, K. Automatic determination of digital modulation types with different noises using Convolutional Neural Network based on time–frequency information. Appl. Soft Comput. 2020, 86, 105834. [Google Scholar] [CrossRef]
- Daldal, N.; Sengur, A.; Polat, K.; Cömert, Z. A novel demodulation system for base band digital modulation signals based on the deep long short-term memory model. Appl. Acoust. 2020, 166, 107346. [Google Scholar] [CrossRef]
- Yuan, Y.; Chao, M.; Lo, Y.-C. Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks with Jaccard Distance. IEEE Trans. Med. Imaging 2017, 36, 1876–1886. [Google Scholar] [CrossRef]
- Alqudah, A.; Alqudah, A.M. Artificial Intelligence Hybrid System for Enhancing Retinal Diseases Classification Using Automated Deep Features Extracted from OCT Images. Int. J. Intell. Syst. Appl. Eng. 2021, 9, 91–100. [Google Scholar] [CrossRef]
- Mahbub, K.; Biswas, M.; Gaur, L.; Alenezi, F.; Santosh, K. Deep features to detect pulmonary abnormalities in chest X-rays due to infectious diseaseX: Covid-19, pneumonia, and tuberculosis. Inf. Sci. 2022, 592, 389–401. [Google Scholar] [CrossRef]
- Masad, I.S.; Alqudah, A.; Alqudah, A.M.; Almashaqbeh, S. A hybrid deep learning approach towards building an intelligent system for pneumonia detection in chest X-ray images. Int. J. Electr. Comput. Eng. 2021, 11, 2088–8708. [Google Scholar] [CrossRef]
- Obeidat, Y.; Alqudah, A.M. A Hybrid Lightweight 1D CNN-LSTM Architecture for Automated ECG Beat-Wise Classification. Trait. du Signal 2021, 38, 1281–1291. [Google Scholar] [CrossRef]
- Alqudah, A.M.; Algharib, H.M.; Algharib, A.M.; Algharib, H.M. Computer aided diagnosis system for automatic two stages classification of breast mass in digital mammogram images. Biomed. Eng. Appl. Basis Commun. 2019, 31, 1950007. [Google Scholar] [CrossRef]
- Abu Qasmieh, I.; Alquran, H.; Alqudah, A.M. Occluded iris classification and segmentation using self-customized artificial intelligence models and iterative randomized Hough transform. Int. J. Electr. Comput. Eng. (IJECE) 2021, 11, 4037–4049. [Google Scholar] [CrossRef]
- Alqudah, A.M. Ovarian Cancer Classification Using Serum Proteomic Profiling and Wavelet Features A Comparison of Machine Learning and Features Selection Algorithms. J. Clin. Eng. 2019, 44, 165–173. [Google Scholar] [CrossRef]
- Al-Issa, Y.; Alqudah, A.M. A lightweight hybrid deep learning system for cardiac valvular disease classification. Sci. Rep. 2022, 12, 1–20. [Google Scholar] [CrossRef]
- Alqudah, A.; Alqudah, A.M.; Alquran, H.; Al-Zoubi, H.R.; Al-Qodah, M.; Al-Khassaweneh, M.A. Recognition of handwritten arabic and hindi numerals using convolutional neural networks. Appl. Sci. 2021, 11, 1573. [Google Scholar] [CrossRef]
- Benyahia, S.; Meftah, B.; Lézoray, O. Multi-features extraction based on deep learning for skin lesion classification. Tissue Cell 2021, 74, 101701. [Google Scholar] [CrossRef]
- Alenezi, F.; Armghan, A.; Polat, K. A multi-stage melanoma recognition framework with deep residual neural network and hyperparameter optimization-based decision support in dermoscopy images. Expert Syst. Appl. 2023, 215, 119352. [Google Scholar] [CrossRef]
- Abayomi-Alli, O.O.; Damasevicius, R.; Misra, S.; Maskeliunas, R.; Abayomi-Alli, A. Malignant skin melanoma detection using image augmentation by oversampling in nonlinear lower-dimensional embedding manifold. Turk. J. Electr. Eng. Comput. Sci. 2021, 29, 2600–2614. [Google Scholar] [CrossRef]
- Alqudah, A.M.; Alquraan, H.; Abu Qasmieh, I. Segmented and Non-Segmented Skin Lesions Classification Using Transfer Learning and Adaptive Moment Learning Rate Technique Using Pretrained Convolutional Neural Network. J. Biomimetics, Biomater. Biomed. Eng. 2019, 42, 67–78. [Google Scholar] [CrossRef]
- Khan, M.A.; Sharif, M.; Akram, T.; Bukhari, S.A.C.; Nayak, R.S. Developed Newton-Raphson based deep features selection framework for skin lesion recognition. Pattern Recognit. Lett. 2019, 129, 293–303. [Google Scholar] [CrossRef]
- Ratul, M.A.R.; Mozaffari, M.H.; Lee, W.S.; Parimbelli, E. Skin lesions classification using deep learning based on dilated convolution. BioRxiv 2020, 860700. [Google Scholar] [CrossRef]
- Mahbod, A.; Schaefer, G.; Wang, C.; Ecker, R.; Ellinge, I. Skin lesion classification using hybrid deep neural networks. In Proceedings of the ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 1229–1233. [Google Scholar]
- Alenezi, F.; Armghan, A.; Polat, K. Wavelet transform based deep residual neural network and ReLU based Extreme Learning Machine for skin lesion classification. Expert Syst. Appl. 2023, 213, 119064. [Google Scholar] [CrossRef]
- Li, W.; Raj, A.N.J.; Tjahjadi, T.; Zhuang, Z. Digital hair removal by deep learning for skin lesion segmentation. Pattern Recognit. 2021, 117, 107994. [Google Scholar] [CrossRef]
- Phan, T.-D.-T.; Kim, S.H. Skin Lesion Segmentation by U-Net with Adaptive Skip Connection and Structural Awareness. Appl. Sci. 2021, 11, 4528. [Google Scholar] [CrossRef]
- Nguyen, D.K.; Tran, T.-T.; Nguyen, C.P.; Pham, V.-T. Skin Lesion Segmentation based on Integrating EfficientNet and Residual block into U-Net Neural Network. In Proceedings of the 2020 5th International conference on green technology and sustainable development (GTSD), Ho Chi Minh City, Vietnam, 27–28 November 2020; pp. 366–371. [Google Scholar] [CrossRef]
- Thanh, D.N.; Hai, N.H.; Hieu, L.M.; Tiwari, P.; Prasath, V.S. Skin lesion segmentation method for dermoscopic images with convolutional neural networks and semantic segmentation. Comput. Opt. 2021, 45, 122–129. [Google Scholar] [CrossRef]
- Al Nazi, Z.; Abir, T.A. Automatic skin lesion segmentation and melanoma detection: Transfer learning approach with u-net and dcnn-svm. In Proceedings of International Joint Conference on Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2020; pp. 371–381. [Google Scholar] [CrossRef]
- Zafar, K.; Gilani, S.O.; Waris, A.; Ahmed, A.; Jamil, M.; Khan, M.N.; Sohail Kashif, A. Skin Lesion Segmentation from Dermoscopic Images Using Convolutional Neural Network. Sensors 2020, 20, 1601. [Google Scholar] [CrossRef] [Green Version]
- Tong, X.; Wei, J.; Sun, B.; Su, S.; Zuo, Z.; Wu, P. ASCU-Net: Attention Gate, Spatial and Channel Attention U-Net for Skin Lesion Segmentation. Diagnostics 2021, 11, 501. [Google Scholar] [CrossRef]
- Khan, M.A.; Zhang, Y.D.; Sharif, M.; Akram, T. Pixels to classes: Intelligent learning framework for multi-class skin lesion localization and classification. Comput. Electr. Eng. 2021, 90, 106956. [Google Scholar] [CrossRef]
- Goyal, M.; Oakley, A.; Bansal, P.; Dancey, D.; Yap, M.H. Skin Lesion Segmentation in Dermoscopic Images with Ensemble Deep Learning Methods. IEEE Access 2019, 8, 4171–4181. [Google Scholar] [CrossRef]
- Shan, P.; Wang, Y.; Fu, C.; Song, W.; Chen, J. Automatic skin lesion segmentation based on FC-DPN. Comput. Biol. Med. 2020, 123, 103762. [Google Scholar] [CrossRef]
- Kaymak, R.; Kaymak, C.; Ucar, A. Skin lesion segmentation using fully convolutional networks: A comparative experimental study. Expert Syst. Appl. 2020, 161, 113742. [Google Scholar] [CrossRef]
- Khouloud, S.; Ahlem, M.; Fadel, T.; Amel, S. W-net and inception residual network for skin lesion segmentation and classification. Appl. Intell. 2021, 52, 3976–3994. [Google Scholar] [CrossRef]
- Brahmbhatt, P.; Rajan, S.N. Skin Lesion Segmentation using SegNet with Binary CrossEntropy. In Proceedings of the International Conference on Artificial Intelligence and Speech Technology (AIST2019), Delhi, India, 14–15 November 2019; pp. 14–15. [Google Scholar]
- Saini, S.; Jeon, Y.S.; Feng, M. B-SegNet: Branched-SegMentor network for skin lesion segmentation. In Proceedings of the Conference on Health, Inference, and Learning, Virtual, 8–10 April 2021; pp. 214–221. [Google Scholar]
- Wang, J.; Wei, L.; Wang, L.; Zhou, Q.; Zhu, L.; Qin, J. Boundary-aware transformers for skin lesion segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Virtual, 27 September–1 October 2021; pp. 206–216. [Google Scholar]
- Wu, H.; Chen, S.; Chen, G.; Wang, W.; Lei, B.; Wen, Z. FAT-Net: Feature adaptive transformers for automated skin lesion segmentation. Med. Image Anal. 2021, 76, 102327. [Google Scholar] [CrossRef]
- Seeja, R.D.; Suresh, A. Deep learning based skin lesion segmentation and classification of Melanoma using support vector machine (SVM). Asian Pac. J. Cancer Prev. APJCP 2019, 20, 1555. [Google Scholar]
- Ding, J.; Song, J.; Li, J.; Tang, J.; Guo, F. Two-Stage Deep Neural Network via Ensemble Learning for Melanoma Classification. Front. Bioeng. Biotechnol. 2022, 9, 758495. [Google Scholar] [CrossRef]
- Jojoa Acosta, M.F.; Caballero Tovar, L.Y.; Garcia-Zapirain, M.B.; Percybrooks, W.S. Melanoma diagnosis using deep learning techniques on dermatoscopic images. BMC Med. Imaging 2021, 21, 6. [Google Scholar] [CrossRef]
- Malibari, A.A.; Alzahrani, J.S.; Eltahir, M.M.; Malik, V.; Obayya, M.; Al Duhayyim, M.; Neto, A.V.L.; de Albuquerque, V.H.C. Optimal deep neural network-driven computer aided diagnosis model for skin cancer. Comput. Electr. Eng. 2022, 103, 108318. [Google Scholar] [CrossRef]
- Jayapriya, K.; Jacob, I.J. Hybrid fully convolutional networks-based skin lesion segmentation and melanoma detection using deep feature. Int. J. Imaging Syst. Technol. 2019, 30, 348–357. [Google Scholar] [CrossRef]
- Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
- Kim, J.; Lee, J.K.; Lee, K.M. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654. [Google Scholar]
- Ooi, Y.; Ibrahim, H. Deep Learning Algorithms for Single Image Super-Resolution: A Systematic Review. Electronics 2021, 10, 867. [Google Scholar] [CrossRef]
- Imak, A.; Celebi, A.; Siddique, K.; Turkoglu, M.; Sengur, A.; Salam, I. Dental Caries Detection Using Score-Based Multi-Input Deep Convolutional Neural Network. IEEE Access 2022, 10, 18320–18329. [Google Scholar] [CrossRef]
- Alqudah, A.; Alqudah, A.M. Sliding window based deep ensemble system for breast cancer classification. J. Med. Eng. Technol. 2021, 45, 313–323. [Google Scholar] [CrossRef] [PubMed]
- Turkoglu, M. Defective egg detection based on deep features and Bidirectional Long-Short-Term-Memory. Comput. Electron. Agric. 2021, 185, 106152. [Google Scholar] [CrossRef]
- Alqudah, A.M.; Qazan, S.; Al-Ebbini, L.; Alquran, H.; Abu Qasmieh, I. ECG heartbeat arrhythmias classification: A comparison study between different types of spectrum representation and convolutional neural networks architectures. J. Ambient. Intell. Humaniz. Comput. 2021, 13, 4877–4907. [Google Scholar] [CrossRef]
- Türkoğlu, M. Brain Tumor Detection using a combination of Bayesian optimization based SVM classifier and fine-tuned based deep features. Eur. J. Sci. Technol. 2021, 27, 251–258. [Google Scholar] [CrossRef]
- Alenezi, F.; Öztürk, Ş.; Armghan, A.; Polat, K. An effective hashing method using W-Shaped contrastive loss for imbalanced datasets. Expert Syst. Appl. 2022, 204, 117612. [Google Scholar] [CrossRef]
- Ağdaş, M.T.; Türkoğlu, M.; Gülseçen, S. Deep Neural Networks Based on Transfer Learning Approaches to Classification of Gun and Knife Images. Sak. Univ. J. Comput. Inf. Sci. 2021, 4, 131–141. [Google Scholar] [CrossRef]
- Uzen, H.; Turkoglu, M.; Hanbay, D. Texture defect classification with multiple pooling and filter ensemble based on deep neural network. Expert Syst. Appl. 2021, 175, 114838. [Google Scholar] [CrossRef]
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 June 2017; pp. 4700–4708. [Google Scholar]
- Jaiswal, A.; Gianchandani, N.; Singh, D.; Kumar, V.; Kaur, M. Classification of the COVID-19 infected patients using DenseNet201 based deep transfer learning. J. Biomol. Struct. Dyn. 2020, 39, 5682–5689. [Google Scholar] [CrossRef]
- Jasil, S.G.; Ulagamuthalvi, V. Skin lesion classification using pre-trained DenseNet201 deep neural network. In Proceedings of the 2021 3rd International Conference on Signal Processing and Communication (ICPSC), Tamil Nadu, India, 13–14 May 2021; pp. 393–396. [Google Scholar]
- Nguyen, L.D.; Lin, D.; Lin, Z.; Cao, J. Deep CNNs for microscopic image classification by exploiting transfer learning and feature concatenation. In Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018. [Google Scholar] [CrossRef]
- Goceri, E. Deep learning based classification of facial dermatological disorders. Comput. Biol. Med. 2020, 128, 104118. [Google Scholar] [CrossRef]
- Szegedy, C.; et al. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
- Ballester, P.; Araujo, R. On the Performance of GoogLeNet and AlexNet Applied to Sketches. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016. [Google Scholar] [CrossRef]
- Singla, A.; Yuan, L.; Ebrahimi, T. Food/Non-food Image Classification and Food Categorization using Pre-Trained GoogLeNet Model. In Proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management, Amsterdam, The Netherlands, 16 October 2016. [Google Scholar] [CrossRef] [Green Version]
- Anand, R.; Shanthi, T.; Nithish, M.S.; Lakshman, S. Face Recognition and Classification Using GoogleNET Architecture. In Soft Computing for Problem Solving; Springer: Singapore, 2020; pp. 261–269. [Google Scholar] [CrossRef]
- Yilmaz, E.; Trocan, M. A modified version of GoogLeNet for melanoma diagnosis. J. Inf. Telecommun. 2021, 5, 395–405. [Google Scholar] [CrossRef]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
- Indraswari, R.; Rokhana, R.; Herulambang, W. Melanoma image classification based on MobileNetV2 network. Procedia Comput. Sci. 2022, 197, 198–207. [Google Scholar] [CrossRef]
- Dong, K.; Zhou, C.; Ruan, Y.; Li, Y. Mobilenetv2 model for image classification. In Proceedings of the 2020 2nd International Conference on Information Technology and Computer Application (ITCA), Guangzhou, China, 18–20 December 2020; pp. 476–480. [Google Scholar]
- Alam, T.M.; Shaukat, K.; Khan, W.A.; Hameed, I.A.; Almuqren, L.A.; Raza, M.A.; Aslam, M.; Luo, S. An Efficient Deep Learning-Based Skin Cancer Classifier for an Imbalanced Dataset. Diagnostics 2022, 12, 2115. [Google Scholar] [CrossRef]
- Srinivasu, P.N.; SivaSai, J.G.; Ijaz, M.F.; Bhoi, A.K.; Kim, W.; Kang, J.J. Classification of skin disease using deep learning neural networks with MobileNet V2 and LSTM. Sensors 2021, 21, 2852. [Google Scholar] [CrossRef]
- Dhivyaa, C.R.; Sangeetha, K.; Balamurugan, M.; Amaran, S.; Vetriselvi, T.; Johnpaul, P. Skin lesion classification using decision trees and random forest algorithms. J. Ambient. Intell. Humaniz. Comput. 2020, 1–13. [Google Scholar] [CrossRef]
- Bibi, A.; Khan, M.A.; Javed, M.Y.; Tariq, U.; Kang, B.-G.; Nam, Y.; Mostafa, R.R.; Sakr, R.H. Skin Lesion Segmentation and Classification Using Conventional and Deep Learning Based Framework. Comput. Mater. Contin. 2022, 71, 2477–2495. [Google Scholar] [CrossRef]
- Barın, S.; Güraksın, G.E. An automatic skin lesion segmentation system with hybrid FCN-ResAlexNet. Eng. Sci. Technol. Int. J. 2022, 34, 101174. [Google Scholar] [CrossRef]
- Jin, Q.; Cui, H.; Sun, C.; Meng, Z.; Su, R. Cascade knowledge diffusion network for skin lesion diagnosis and segmentation. Appl. Soft Comput. 2020, 99, 106881. [Google Scholar] [CrossRef]
- Lei, B.; Xia, Z.; Jiang, F.; Jiang, X.; Ge, Z.; Xu, Y.; Qin, J.; Chen, S.; Wang, T.; Wang, S. Skin lesion segmentation via generative adversarial networks with dual discriminators. Med. Image Anal. 2020, 64, 101716. [Google Scholar] [CrossRef]
- Hussain, R.; Basak, H. RecU-Net++: Improved Utilization of Receptive Fields in U-Net++ for Skin Lesion Segmentation. In Proceedings of the 2021 IEEE 18th India Council International Conference (INDICON), Guwahati, India, 19–21 December 2021; pp. 1–6. [Google Scholar]
- Khan, M.; Sharif, M.; Akram, T.; Damaševičius, R.; Maskeliūnas, R. Skin Lesion Segmentation and Multiclass Classification Using Deep Learning Features and Improved Moth Flame Optimization. Diagnostics 2021, 11, 811. [Google Scholar] [CrossRef]
- Khan, M.A.; Muhammad, K.; Sharif, M.; Akram, T.; de Albuquerque, V.H.C. Multi-Class Skin Lesion Detection and Classification via Teledermatology. IEEE J. Biomed. Health Inform. 2021, 25, 4267–4275. [Google Scholar] [CrossRef]
Model | TP | FP | FN | TN
---|---|---|---|---
VGGNet-FCN8s | 45,969,975 | 3,681,826 | 466,798 | 14,797,339
VGGNet-FCN16s | 48,483,463 | 1,168,338 | 782,699 | 14,481,438
VGGNet-FCN32s | 48,019,878 | 1,631,923 | 896,535 | 14,367,602
Model | Accuracy (%) | Precision (%) | Sensitivity (%)
---|---|---|---
VGGNet-FCN8s | 93.61 | 92.59 | 98.99
VGGNet-FCN16s | 96.99 | 97.65 | 98.41
VGGNet-FCN32s | 96.11 | 96.71 | 98.17
Model | Accuracy (%) | Specificity (%) | Precision (%) | Sensitivity (%)
---|---|---|---|---
DenseNet | 95.51 | 97.05 | 97.02 | 94.01
MobileNet | 95.06 | 97.67 | 97.60 | 92.51
GoogleNet | 93.07 | 98.01 | 97.85 | 88.24
Model | Accuracy (%) | Specificity (%) | Precision (%) | Sensitivity (%)
---|---|---|---|---
D+G | 95.84 | 99.61 | 99.56 | 91.32
G+M | 96.35 | 98.45 | 98.33 | 94.17
M+D | 97.16 | 99.78 | 99.76 | 94.46
D+G+M (ours) | 97.73 | 99.83 | 99.83 | 95.67

(D = DenseNet, G = GoogleNet, M = MobileNet)
References | Task | Accuracy (%) | Specificity (%) | Precision (%) | Sensitivity (%)
---|---|---|---|---|---
Alam et al. (2022) [70] | Classification | 91 | - | - | -
Srinivasu et al. (2021) [71] | Classification | 90.21 | 95.1 | - | 92.24
Dhivyaa et al. (2020) [72] | Classification | 97.3 | - | - | -
Bibi et al. (2022) [73] | Classification | 96.7 | 94.48 | - | -
Barın and Güraksın (2022) [74] | Segmentation | 94.65 | 87.86 | 95.85 | -
Wu et al. (2022) [40] | Segmentation | 95.78 | 96.99 | 91 | -
Jin et al. (2021) [75] | Segmentation | 93.4 | 90.4 | 96.7 | -
Lei et al. (2020) [76] | Segmentation | - | 93.8 | - | 94.3
Hussain and Basak (2021) [77] | Segmentation | - | 93.8 | - | 94.3
Khan et al. (2021) [78] | Segmentation | 92.69 | - | - | -
Khan et al. (2021) [78] | Classification | 90.67 | - | - | 90.2
Khan et al. (2021) [79] | Segmentation | 92.25 | - | - | -
Khan et al. (2021) [79] | Classification | 88.39 | - | - | -
Our method (2022) | Segmentation | 96.99 | 92.53 | 97.65 | 98.41
Our method (2022) | Classification | 97.73 | 99.83 | 99.83 | 95.67
Share and Cite
Alenezi, F.; Armghan, A.; Polat, K. A Novel Multi-Task Learning Network Based on Melanoma Segmentation and Classification with Skin Lesion Images. Diagnostics 2023, 13, 262. https://doi.org/10.3390/diagnostics13020262