Skin Cancer Detection Using Deep Learning—A Review
Abstract
1. Introduction
2. Convolutional Neural Networks (CNNs) for Image Classification
2.1. Commonly Used CNN Architectures for Image Classification
2.2. AlexNet
2.3. VGG
2.4. ResNet
2.5. DenseNet
2.6. MobileNet
3. Deep-Learning-Based Classification of Skin Cancers
4. Types of Skin Cancer and Commonly Used Datasets for Skin Cancers
4.1. Type of Skin Cancer
4.1.1. Melanoma
4.1.2. Dysplastic Nevi
4.1.3. Basal Cell Carcinoma (BCC)
4.1.4. Squamous Cell Carcinoma (SCC)
4.1.5. Actinic Keratoses (AKs)
4.2. Datasets
4.2.1. HAM10000
4.2.2. PH2
4.2.3. ISIC
4.2.4. ISIC 2016
4.2.5. ISIC 2017
4.2.6. ISIC 2018
4.2.7. ISIC 2019
4.2.8. ISIC 2020
4.2.9. Atlas of Dermoscopy
4.2.10. Dermofit
4.2.11. BCN20000
4.2.12. PAD-UFES-20
5. Resources Required for Training Proposed DL Algorithms
6. Conclusions and Discussion
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Skin Cancer. Available online: https://tinyurl.com/ptp97uzv (accessed on 1 May 2023).
- Kittler, H.; Pehamberger, H.; Wolff, K.; Binder, M. Diagnostic accuracy of dermoscopy. Lancet Oncol. 2002, 3, 159–165. [Google Scholar] [CrossRef] [PubMed]
- Rosendahl, C.; Tschandl, P.; Cameron, A.; Kittler, H. Diagnostic accuracy of dermatoscopy for melanocytic and nonmelanocytic pigmented lesions. J. Am. Acad. Dermatol. 2011, 64, 1068–1073. [Google Scholar] [CrossRef] [PubMed]
- Jalalian, A.; Mashohor, S.; Mahmud, R.; Karasfi, B.; Saripan, M.I.B.; Ramli, A.R.B. Foundation and methodologies in computer-aided diagnosis systems for breast cancer detection. EXCLI J. 2017, 16, 113. [Google Scholar] [PubMed]
- Fan, H.; Xie, F.; Li, Y.; Jiang, Z.; Liu, J. Automatic segmentation of dermoscopy images using saliency combined with Otsu threshold. Comput. Biol. Med. 2017, 85, 75–85. [Google Scholar] [CrossRef]
- Hasan, M.K.; Dahal, L.; Samarakoon, P.N.; Tushar, F.I.; Martí, R. DSNet: Automatic dermoscopic skin lesion segmentation. Comput. Biol. Med. 2020, 120, 103738. [Google Scholar] [CrossRef]
- Korotkov, K.; Garcia, R. Computerized analysis of pigmented skin lesions: A review. Artif. Intell. Med. 2012, 56, 69–90. [Google Scholar] [CrossRef]
- Hasan, M.K.; Elahi, M.T.E.; Alam, M.A.; Jawad, M.T.; Martí, R. DermoExpert: Skin lesion classification using a hybrid convolutional neural network through segmentation, transfer learning, and augmentation. Inform. Med. Unlocked 2022, 28, 100819. [Google Scholar] [CrossRef]
- Mishra, N.K.; Celebi, M.E. An overview of melanoma detection in dermoscopy images using image processing and machine learning. arXiv 2016, arXiv:1601.07843. [Google Scholar]
- Pacheco, A.G.; Krohling, R.A. Recent advances in deep learning applied to skin cancer detection. arXiv 2019, arXiv:1912.03280. [Google Scholar]
- Lucieri, A.; Dengel, A.; Ahmed, S. Deep Learning Based Decision Support for Medicine—A Case Study on Skin Cancer Diagnosis. arXiv 2021, arXiv:2103.05112. [Google Scholar]
- Adegun, A.; Viriri, S. Deep learning techniques for skin lesion analysis and melanoma cancer detection: A survey of state-of-the-art. Artif. Intell. Rev. 2021, 54, 811–841. [Google Scholar] [CrossRef]
- Dildar, M.; Akram, S.; Irfan, M.; Khan, H.U.; Ramzan, M.; Mahmood, A.R.; Alsaiari, S.A.; Saeed, A.H.M.; Alraddadi, M.O.; Mahnashi, M.H. Skin cancer detection: A review using deep learning techniques. Int. J. Environ. Res. Public Health 2021, 18, 5479. [Google Scholar] [CrossRef]
- Gilani, S.Q.; Marques, O. Skin lesion analysis using generative adversarial networks: A review. Multimed. Tools Appl. 2023, 1–42. [Google Scholar] [CrossRef]
- Available online: https://www.mathworks.com/discovery/convolutional-neural-network-matlab.html (accessed on 1 May 2023).
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Inthiyaz, S.; Altahan, B.R.; Ahammad, S.H.; Rajesh, V.; Kalangi, R.R.; Smirani, L.K.; Hossain, M.A.; Rashed, A.N.Z. Skin disease detection using deep learning. Adv. Eng. Softw. 2023, 175, 103361. [Google Scholar] [CrossRef]
- Gajera, H.K.; Nayak, D.R.; Zaveri, M.A. A comprehensive analysis of dermoscopy images for melanoma detection via deep CNN features. Biomed. Signal Process. Control 2023, 79, 104186. [Google Scholar] [CrossRef]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
- Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114. [Google Scholar]
- Gutman, D.; Codella, N.C.; Celebi, E.; Helba, B.; Marchetti, M.; Mishra, N.; Halpern, A. Skin lesion analysis toward melanoma detection: A challenge at the international symposium on biomedical imaging (ISBI) 2016, hosted by the international skin imaging collaboration (ISIC). arXiv 2016, arXiv:1605.01397. [Google Scholar]
- Codella, N.C.; Gutman, D.; Celebi, M.E.; Helba, B.; Marchetti, M.A.; Dusza, S.W.; Kalloo, A.; Liopyris, K.; Mishra, N.; Kittler, H.; et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (isbi), hosted by the international skin imaging collaboration (isic). In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 168–172. [Google Scholar]
- Mendonça, T.; Celebi, M.; Mendonca, T.; Marques, J. Ph2: A public database for the analysis of dermoscopic images. In Dermoscopy Image Analysis; CRC Press: Boca Raton, FL, USA, 2015. [Google Scholar]
- Tschandl, P.; Rosendahl, C.; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 2018, 5, 1–9. [Google Scholar] [CrossRef] [PubMed]
- Alenezi, F.; Armghan, A.; Polat, K. Wavelet transform based deep residual neural network and ReLU based Extreme Learning Machine for skin lesion classification. Expert Syst. Appl. 2023, 213, 119064. [Google Scholar] [CrossRef]
- Shinde, R.K.; Alam, M.S.; Hossain, M.B.; Md Imtiaz, S.; Kim, J.; Padwal, A.A.; Kim, N. Squeeze-MNet: Precise Skin Cancer Detection Model for Low Computing IoT Devices Using Transfer Learning. Cancers 2022, 15, 12. [Google Scholar] [CrossRef] [PubMed]
- Alenezi, F.; Armghan, A.; Polat, K. A multi-stage melanoma recognition framework with deep residual neural network and hyperparameter optimization-based decision support in dermoscopy images. Expert Syst. Appl. 2023, 215, 119352. [Google Scholar] [CrossRef]
- Redmon, J. Darknet: Open Source Neural Networks in C. 2013–2016. Available online: http://pjreddie.com/darknet/ (accessed on 1 May 2023).
- Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360. [Google Scholar]
- Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4510–4520. [Google Scholar]
- Abbas, Q.; Gul, A. Detection and Classification of Malignant Melanoma Using Deep Features of NASNet. SN Comput. Sci. 2022, 4, 21. [Google Scholar] [CrossRef]
- Zoph, B.; Vasudevan, V.; Shlens, J.; Le, Q.V. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8697–8710. [Google Scholar]
- Gouda, W.; Sama, N.U.; Al-Waakid, G.; Humayun, M.; Jhanjhi, N.Z. Detection of skin cancer based on skin lesion images using deep learning. Healthcare 2022, 10, 1183. [Google Scholar] [CrossRef]
- Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Change Loy, C. Esrgan: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018. [Google Scholar]
- Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31. [Google Scholar]
- Alwakid, G.; Gouda, W.; Humayun, M.; Sama, N.U. Melanoma Detection Using Deep Learning-Based Classifications. Healthcare 2022, 10, 2481. [Google Scholar] [CrossRef]
- Bassel, A.; Abdulkareem, A.B.; Alyasseri, Z.A.A.; Sani, N.S.; Mohammed, H.J. Automatic malignant and benign skin cancer classification using a hybrid deep learning approach. Diagnostics 2022, 12, 2472. [Google Scholar] [CrossRef]
- Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
- Ho, T.K. Random decision forests. In Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, QC, Canada, 14–16 August 1995; Volume 1, pp. 278–282. [Google Scholar]
- Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558. [Google Scholar] [CrossRef]
- Fix, E. Discriminatory Analysis: Nonparametric Discrimination, Consistency Properties; USAF School of Aviation Medicine: Randolph Field, TX, USA, 1985; Volume 1. [Google Scholar]
- Kousis, I.; Perikos, I.; Hatzilygeroudis, I.; Virvou, M. Deep Learning Methods for Accurate Skin Cancer Recognition and Mobile Application. Electronics 2022, 11, 1294. [Google Scholar] [CrossRef]
- Shorfuzzaman, M. An explainable stacked ensemble of deep learning models for improved melanoma skin cancer detection. Multimed. Syst. 2022, 28, 1309–1323. [Google Scholar] [CrossRef]
- Reis, H.C.; Turk, V.; Khoshelham, K.; Kaya, S. InSiNet: A deep convolutional approach to skin cancer detection and segmentation. Med. Biol. Eng. Comput. 2022, 60, 643–662. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
- Wang, S.H.; Zhang, Y.D. DenseNet-201-based deep neural network with composite learning factor and precomputation for multiple sclerosis classification. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM) 2020, 16, 1–19. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Identity mappings in deep residual networks. In Proceedings of the Computer Vision—ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part IV 14. Springer: Berlin/Heidelberg, Germany, 2016; pp. 630–645. [Google Scholar]
- Fraiwan, M.; Faouri, E. On the Automatic Detection and Classification of Skin Cancer Using Deep Transfer Learning. Sensors 2022, 22, 4963. [Google Scholar] [CrossRef]
- Ghosh, P.; Azam, S.; Quadir, R.; Karim, A.; Shamrat, F.J.M.; Bhowmik, S.K.; Jonkman, M.; Hasib, K.M.; Ahmed, K. SkinNet-16: A deep learning approach to identify benign and malignant skin lesions. Front. Oncol. 2022, 12, 931141. [Google Scholar] [CrossRef]
- Abdi, H.; Williams, L.J. Principal component analysis. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 433–459. [Google Scholar] [CrossRef]
- Maniraj, S.; Maran, P.S. A hybrid deep learning approach for skin cancer diagnosis using subband fusion of 3D wavelets. J. Supercomput. 2022, 78, 12394–12409. [Google Scholar] [CrossRef]
- Alam, M.J.; Mohammad, M.S.; Hossain, M.A.F.; Showmik, I.A.; Raihan, M.S.; Ahmed, S.; Mahmud, T.I. S2C-DeLeNet: A parameter transfer based segmentation-classification integration for detecting skin cancer lesions from dermoscopic images. Comput. Biol. Med. 2022, 150, 106148. [Google Scholar] [CrossRef]
- Mazoure, B.; Mazoure, A.; Bédard, J.; Makarenkov, V. DUNEScan: A web server for uncertainty estimation in skin cancer detection with deep neural networks. Sci. Rep. 2022, 12, 179. [Google Scholar] [CrossRef] [PubMed]
- Malibari, A.A.; Alzahrani, J.S.; Eltahir, M.M.; Malik, V.; Obayya, M.; Al Duhayyim, M.; Neto, A.V.L.; de Albuquerque, V.H.C. Optimal deep neural network-driven computer aided diagnosis model for skin cancer. Comput. Electr. Eng. 2022, 103, 108318. [Google Scholar] [CrossRef]
- Rashid, J.; Ishfaq, M.; Ali, G.; Saeed, M.R.; Hussain, M.; Alkhalifah, T.; Alturise, F.; Samand, N. Skin cancer disease detection using transfer learning technique. Appl. Sci. 2022, 12, 5714. [Google Scholar] [CrossRef]
- Aljohani, K.; Turki, T. Automatic Classification of Melanoma Skin Cancer with Deep Convolutional Neural Networks. Ai 2022, 3, 512–525. [Google Scholar] [CrossRef]
- Bian, X.; Pan, H.; Zhang, K.; Li, P.; Li, J.; Chen, C. Skin lesion image classification method based on extension theory and deep learning. Multimed. Tools Appl. 2022, 81, 16389–16409. [Google Scholar] [CrossRef]
- Demir, A.; Yilmaz, F.; Kose, O. Early detection of skin cancer using deep learning architectures: Resnet-101 and inception-v3. In Proceedings of the 2019 Medical Technologies Congress (TIPTEKNO), Izmir, Turkey, 3–5 October 2019; pp. 1–4. [Google Scholar]
- Jain, S.; Singhania, U.; Tripathy, B.; Nasr, E.A.; Aboudaif, M.K.; Kamrani, A.K. Deep learning-based transfer learning for classification of skin cancer. Sensors 2021, 21, 8142. [Google Scholar] [CrossRef]
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
- Kausar, N.; Hameed, A.; Sattar, M.; Ashraf, R.; Imran, A.S.; Abidin, M.Z.U.; Ali, A. Multiclass skin cancer classification using ensemble of fine-tuned deep learning models. Appl. Sci. 2021, 11, 10593. [Google Scholar] [CrossRef]
- Bechelli, S.; Delhommelle, J. Machine learning and deep learning algorithms for skin cancer classification from dermoscopic images. Bioengineering 2022, 9, 97. [Google Scholar] [CrossRef]
- Khan, M.A.; Sharif, M.; Akram, T.; Damaševičius, R.; Maskeliūnas, R. Skin lesion segmentation and multiclass classification using deep learning features and improved moth flame optimization. Diagnostics 2021, 11, 811. [Google Scholar] [CrossRef]
- Adegun, A.A.; Viriri, S.; Yousaf, M.H. A probabilistic-based deep learning model for skin lesion segmentation. Appl. Sci. 2021, 11, 3025. [Google Scholar] [CrossRef]
- Lu, X.; Firoozeh Abolhasani Zadeh, Y. Deep learning-based classification for melanoma detection using XceptionNet. J. Healthc. Eng. 2022, 2022, 2196096. [Google Scholar] [CrossRef]
- Qasim Gilani, S.; Syed, T.; Umair, M.; Marques, O. Skin Cancer Classification Using Deep Spiking Neural Network. J. Digit. Imaging 2023, 1–11. [Google Scholar] [CrossRef]
- Available online: https://github.com/fangwei123456/spikingjelly (accessed on 8 March 2023).
- Khan, M.A.; Muhammad, K.; Sharif, M.; Akram, T.; Kadry, S. Intelligent fusion-assisted skin lesion localization and classification for smart healthcare. Neural Comput. Appl. 2021, 1–16. [Google Scholar] [CrossRef]
- Abdar, M.; Samami, M.; Mahmoodabad, S.D.; Doan, T.; Mazoure, B.; Hashemifesharaki, R.; Liu, L.; Khosravi, A.; Acharya, U.R.; Makarenkov, V.; et al. Uncertainty quantification in skin cancer classification using three-way decision-based Bayesian deep learning. Comput. Biol. Med. 2021, 135, 104418. [Google Scholar] [CrossRef]
- Available online: https://www.kaggle.com/datasets/fanconic/skin-cancer-malignant-vs-benign (accessed on 10 March 2023).
- Available online: https://tinyurl.com/4fetuczx (accessed on 10 March 2023).
- Available online: https://tinyurl.com/5x3ftap6 (accessed on 10 March 2023).
- 6 December 2022 (Last Revised). Available online: https://tinyurl.com/ydsscvkk (accessed on 10 March 2023).
- Available online: https://www.isic-archive.com/#!/topWithHeader/onlyHeaderTop/gallery (accessed on 10 March 2023).
- Mendonça, T.; Ferreira, P.M.; Marques, J.S.; Marcal, A.R.; Rozeira, J. PH 2-A dermoscopic image database for research and benchmarking. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 5437–5440. [Google Scholar]
- Codella, N.; Rotemberg, V.; Tschandl, P.; Celebi, M.E.; Dusza, S.; Gutman, D.; Helba, B.; Kalloo, A.; Liopyris, K.; Marchetti, M.; et al. Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the international skin imaging collaboration (isic). arXiv 2019, arXiv:1902.03368. [Google Scholar]
- Combalia, M.; Codella, N.C.; Rotemberg, V.; Helba, B.; Vilaplana, V.; Reiter, O.; Carrera, C.; Barreiro, A.; Halpern, A.C.; Puig, S.; et al. Bcn20000: Dermoscopic lesions in the wild. arXiv 2019, arXiv:1908.02288. [Google Scholar]
- Rotemberg, V.; Kurtansky, N.; Betz-Stablein, B.; Caffery, L.; Chousakos, E.; Codella, N.; Combalia, M.; Dusza, S.; Guitera, P.; Gutman, D.; et al. A patient-centric dataset of images and metadata for identifying melanomas using clinical context. Sci. Data 2021, 8, 34. [Google Scholar] [CrossRef]
- Lio, P.A.; Nghiem, P. Interactive Atlas of Dermoscopy: Giuseppe Argenziano, MD, H. Peter Soyer, MD, et al.; Edra Medical Publishing and New Media: Milan, Italy, 2000. J. Am. Acad. Dermatol. 2004, 50, 807–808. [Google Scholar]
- Ballerini, L.; Fisher, R.B.; Aldridge, B.; Rees, J. A color and texture based hierarchical K-NN approach to the classification of non-melanoma skin lesions. In Color Medical Image Analysis; Springer: Berlin/Heidelberg, Germany, 2013; pp. 63–86. [Google Scholar]
- Pacheco, A.G.; Lima, G.R.; Salomao, A.S.; Krohling, B.; Biral, I.P.; de Angelo, G.G.; Alves, F.C., Jr.; Esgario, J.G.; Simora, A.C.; Castro, P.B.; et al. PAD-UFES-20: A skin lesion dataset composed of patient data and clinical images collected from smartphones. Data Brief 2020, 32, 106221. [Google Scholar] [CrossRef]
Paper | Year | Scope |
---|---|---|
Pacheco and Krohling [10] | 2019 | Reviewed deep learning models for skin cancer classification |
Lucieri et al. [11] | 2021 | Reviewed deep-learning-based decision support in skin cancer diagnosis |
Adegun and Viriri [12] | 2021 | Reviewed deep learning techniques for skin lesion analysis and melanoma cancer detection |
Dildar et al. [13] | 2021 | Reviewed deep learning algorithms for skin cancer classification |
Gilani and Marques [14] | 2023 | Reviewed skin lesion classification and segmentation using generative adversarial networks (GANs) |
Paper | Architecture | Year |
---|---|---|
Krizhevsky et al. [16] | AlexNet | 2012 |
Simonyan and Zisserman [18] | VGG | 2015 |
He et al. [20] | ResNet | 2016 |
Huang et al. [21] | DenseNet | 2017 |
Howard et al. [22] | MobileNet | 2017 |
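MobileNet's efficiency relative to the other architectures in the table above comes from depthwise separable convolutions, which factor a standard convolution into a per-channel spatial filter followed by a 1×1 channel mixer. A minimal sketch (illustrative channel sizes, bias terms omitted) comparing parameter counts:

```python
# Parameter count of a standard k x k convolution versus MobileNet's
# depthwise separable factorization (depthwise k x k + pointwise 1 x 1).
# Bias terms are omitted for simplicity.

def standard_conv_params(c_in: int, c_out: int, k: int = 3) -> int:
    # Every output channel has its own k x k filter over all input channels.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int = 3) -> int:
    depthwise = c_in * k * k   # one k x k spatial filter per input channel
    pointwise = c_in * c_out   # 1 x 1 convolution mixes channels
    return depthwise + pointwise

c_in, c_out = 256, 256
std = standard_conv_params(c_in, c_out)        # 589,824 parameters
sep = depthwise_separable_params(c_in, c_out)  # 67,840 parameters
print(f"reduction: {std / sep:.1f}x")          # roughly 8.7x fewer parameters
```

The same factorization cuts the multiply–accumulate count by a similar factor, which is why MobileNet-style models recur in the mobile and IoT deployments surveyed below.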
Paper | Dataset | Model | Performance |
---|---|---|---|
Inthiyaz et al. [23] | Xiangya-Derm | CNN | AUC = 0.87 |
Gajera et al. [24] | ISIC 2016, ISIC 2017, PH2, HAM10000 | Eight pre-trained CNNs (AlexNet, VGG-16, VGG-19, DenseNet-121, etc.) | Accuracy = 98.33%, F1-score = 0.96
Alenezi et al. [31] | ISIC 2017, HAM10000 | deep residual network | Accuracy = 96.971%, F1-score = 0.95 |
Shinde et al. [32] | ISIC | Squeeze-MNet | Accuracy = 99.36% |
Alenezi et al. [33] | ISIC 2019, ISIC 2020 | ResNet-101 with SVM | Accuracy = 96.15% (ISIC 2019), 97.15% (ISIC 2020) |
Abbas and Gul [38] | ISIC 2020 | NASNet | Accuracy = 97.7%, F1-score = 0.97 |
Gouda et al. [40] | ISIC 2018 | CNN | Accuracy = 83.2% |
Alwakid et al. [43] | HAM10000 | CNN, ResNet-50 | F1-score = 0.859 (CNN), 0.852 (ResNet-50)
Bassel et al. [44] | ISIC | ResNet50, Xception, and VGG16 | Accuracy = 90.9%, F1-score = 0.89
Kousis et al. [49] | ISIC 2019 | Eleven CNN architectures, with DenseNet169 giving the best results | Accuracy = 92.25%, F1-score = 0.932
Shorfuzzaman [50] | ISIC archive | DenseNet121, Xception, EfficientNetB0 | Accuracy = 95.76%, F1-score = 0.957
Reis et al. [51] | HAM10000 (ISIC 2018), ISIC 2019, ISIC 2020 | InSiNet, U-Net | Accuracy = 94.59% (ISIC 2018), 91.89% (ISIC 2019), and 90.54% (ISIC 2020) |
Fraiwan and Faouri [55] | HAM10000 | Thirteen CNN architectures, with DenseNet201 giving the best result | Accuracy = 82.9%, F1-score = 0.744
Ghosh et al. [56] | HAM10000, ISIC archive | SkinNet-16 | Accuracy = 95.51% (HAM10000), 99.19% (ISIC) |
Maniraj and Maran [58] | PH2 | VGG-based hybrid architecture | Accuracy = 93.33%
Alam et al. [59] | HAM10000 | S2C-DeLeNet | Mean Accuracy = 91.03%, Mean Dice = 0.9494 |
Mazoure et al. [60] | ISIC | Inception-v3, ResNet-50, MobileNetV2, EfficientNet, BYOL, SwAV | Class prediction probability = 1.00 (Mel)
Malibari et al. [61] | ISIC 2019 | DNN | Average accuracy = 99.90%, F1-score = 0.990 |
Rashid et al. [62] | ISIC 2020 | MobileNetV2-based transfer learning | Average accuracy = 98.20% |
Aljohani and Turki [63] | ISIC 2019 | DenseNet201, MobileNetV2, ResNet50V2, ResNet152V2, Xception, VGG16, VGG19, and GoogleNet | Accuracy = 76.09% |
Bian et al. [64] | ISBI 2016 | YoDyCK | Accuracy = 96.2% |
Demir et al. [65] | ISIC archive | ResNet-101, Inception-v3 | F1-score = 84.09% (ResNet-101), 87.42% (Inception-v3) |
Jain et al. [66] | HAM10000 | Transfer learning-based VGG19, InceptionV3, InceptionResNetV2, ResNet50, Xception, and MobileNet | Accuracy = 90.48% (Xception) |
Bechelli and Delhommelle [69] | Kaggle dataset, HAM10000 | CNN, pre-trained VGG-16, Xception, ResNet50 | Accuracy = 88% (VGG-16), F1-score = 0.88 (VGG-16) |
Khan et al. [70] | Segmentation (ISBI 2016, ISBI 2017, ISIC 2018, PH2), classification (HAM10000) | ResNet101, DenseNet201 | Accuracy = 98.70% (segmentation, PH2), 98.70% (classification)
Adegun et al. [71] | ISBI 2017, PH2 | Fully convolutional neural network | Accuracy = 97% (ISBI 2017)
Qasim Gilani et al. [73] | HAM10000 | Spiking VGG-13 | Accuracy = 89.57%, F1-score = 0.9007 |
Lu and Firoozeh Abolhasani Zadeh [72] | HAM10000 | Xception | Accuracy = 100%, F1-score = 95.55% |
Khan et al. [75] | ISBI 2016, ISIC 2017, ISBI 2018, ISIC 2019, HAM10000 | A hybrid framework with 20- and 17-layer CNNs for segmentation and a 30-layer CNN for feature extraction | Segmentation accuracy = 92.70% (ISIC 2018), classification accuracy = 87.02% (HAM10000)
Abdar et al. [76] | ISIC 2019 [77] | ResNet152V2, MobileNetV2, DenseNet201 | Best accuracy = 89% [77], F1-score = 0.91 [77]
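Most entries above report F1-score alongside accuracy because the common skin lesion datasets are heavily imbalanced. A toy illustration (synthetic labels, not taken from any cited dataset) of why accuracy alone can mislead:

```python
# 90 benign (0) and 10 melanoma (1) samples; a degenerate classifier
# that always predicts "benign" still reaches 90% accuracy.
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# The F1-score for the melanoma class exposes the failure.
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(accuracy, f1)  # 0.9 accuracy, but F1 = 0.0 for melanoma
```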
Paper | Objective | Summary |
---|---|---|
Inthiyaz et al. [23] | Used a pre-trained model for feature extraction; classification was performed using a softmax classifier. | This work was tested on a very small dataset, so the results cannot be generalized to large datasets. Inthiyaz et al. [23] achieved an AUC of 0.87, which can still be improved, and used the deep ResNet-50 architecture, which increases computational cost.
Gajera et al. [24] | Used eight pre-trained CNN models for the classification of dermoscopy images. | The proposed methods were evaluated on PH2, ISIC 2016, and ISIC 2017 with only 200, 900, and 2000 training images, respectively. Using deep architectures such as DenseNet-121 on small datasets may result in overfitting. Classification performance on the HAM10000 dataset was low.
Alenezi et al. [31] | A wavelet-transform-based deep residual neural network was used for the classification of skin cancer images. | Limited generalizability; weak classification performance on lesion images of varying sizes, colors, etc.
Shinde et al. [32] | A lightweight model was proposed for the classification of skin cancer images on IoT devices. | The proposed model had lower sensitivity and specificity than other baseline models. Since the model is intended for classification on IoT devices, it should have fewer training parameters than baselines such as MobileNetV2; however, its parameter count and training time were still greater than MobileNetV2's.
Alenezi et al. [33] | A multi-stage deep model was used to extract features from skin cancer images. | Dataset 1 contained only 1168 images. Deep architectures such as ResNet-101 were used for feature extraction, which may result in overfitting. The extracted features were fed to an SVM for classification, which adds the time required for SVM parameter selection.
Abbas and Gul [38] | Proposed a NASNet-based architecture for the classification of skin cancers. | The proposed NASNet extracts generalizable features for the classification of skin cancer images.
Gouda et al. [40] | Pre-trained models were used for the classification of skin cancer images; ESRGAN was used for augmenting the dataset. | The proposed work was tested on a small dataset of 3533 images from ISIC 2018. The best classification accuracy of 0.8576, obtained using Inception50, is still low compared to the accuracy of dermoscopy.
Alwakid et al. [43] | Data augmentation and lesion segmentation were used to improve classification performance. | ESRGAN was used for data augmentation, and segmentation was performed to isolate lesions for accurate classification; a CNN-based architecture was proposed for classification. The proposed work achieved an accuracy of 86%, lower than the accuracy of dermoscopy.
Bassel et al. [44] | Pre-trained models were used for extracting features. A stacked CV technique consisting of five different classifiers was used for the classification of skin cancer images. | The proposed model was trained and tested on a small dataset of 2637 training images and 660 test images. The stacked CV algorithm gave the best classification accuracy of 90.9% on the features extracted using Xception. The model may not perform well on large datasets, as training on a very small dataset limits generalizability.
Kousis et al. [49] | Evaluated the performance of eleven CNNs on the skin cancer classification task and created a mobile application using the best model. | Among the eleven architectures, DenseNet169 gave the best classification accuracy of 92.25%. Deploying DenseNet169 for skin cancer classification is not computationally efficient.
Shorfuzzaman [50] | An explainable CNN-based stacked ensemble framework was proposed for the classification of melanoma images. | The proposed work combined deep models (DenseNet121, Xception, and EfficientNetB0) to classify skin cancer images. A total of 3297 images from ISIC 2018 were used for training, and an accuracy of 95.76% was achieved. The method was tested only on the melanoma versus non-melanoma problem and needs evaluation on larger datasets; combining three deep models is also computationally expensive.
Reis et al. [51] | A deep CNN, InSiNet, was proposed for the classification of skin cancer images. | Very deep models were trained on only 1323 images to classify melanoma versus non-melanoma; the proposed model cannot be expected to generalize to large datasets, as it was trained on 1323 images only.
Fraiwan and Faouri [55] | Evaluated thirteen transfer learning models for the classification of skin cancers. | DenseNet201 gave the best accuracy of 82.9% and F1-score of 0.744. The F1-score is better suited for performance evaluation because HAM10000 is an imbalanced dataset, and the F1-score of 0.744 achieved in this work was quite low. Precision and recall, which are also important metrics in skin diagnosis, were likewise quite low.
Ghosh et al. [56] | Proposed SkinNet-16, a CNN for the classification of skin cancers; PCA was used for feature selection. | Two different datasets were used for evaluation: dataset 1 consists of only 3297 images and dataset 2 of 1954 images. The method was tested only for melanoma versus non-melanoma cases. Skin cancer images were classified with very high accuracy.
Maniraj and Maran [58] | A multi-stage hybrid deep learning model employing 3D wavelets was proposed. | The proposed model was tested on only 200 images and cannot yet be used to aid skin cancer diagnosis; its performance may degrade when trained and tested on larger datasets.
Alam et al. [59] | Proposed S2C-DeLeNet for the segmentation and classification of skin cancer images. | S2C-DeLeNet was implemented in two stages: in the first, EfficientNet-B4 was used as the encoder of a U-Net for segmentation, and in the second, the encoder–decoder network was used for feature extraction. Tested on the HAM10000 dataset of roughly 10,000 images from seven classes, it performed well on both tasks.
Mazoure et al. [60] | CNN-based webserver was developed for the detection of skin cancers. | Among six deep learning networks trained in this work, ResNet-50 gave the best class prediction probability of 1.00. The web server was developed only for benign versus malignant cases. |
Malibari et al. [61] | A CNN-based optimal method for detecting and classifying skin cancer images. | The proposed model, trained on ISIC 2019 (25,331 images), performed well on all five metrics (accuracy, F1-score, precision, recall, and specificity) and achieved an impressive accuracy of 99.99%.
Rashid et al. [62] | A MobileNetV2-based transfer learning framework was proposed for the skin cancer classification problem. | Addressed the class imbalance problem. The proposed model performed well on all four metrics used in the study (accuracy, recall, F1-score, and precision) and achieved an average accuracy of 98.2%. The model was tested only on a binary classification problem.
Aljohani and Turki [63] | Evaluated seven different deep learning models on the skin cancer classification problem. | The models were evaluated on a dataset comprising 7146 images from two classes. The best test-set accuracy, 76.08% with GoogLeNet, was quite low. |
Bian et al. [64] | YoDyCK, a YOLOv3 model optimized with a dynamic convolution kernel and trained on skin cancer images collected from Asian countries, was proposed. WGAN was used for data augmentation. | Addressed the problem of data bias in skin lesion datasets by training the proposed model on images collected from Asian countries. |
Demir et al. [65] | Classified skin cancer images using Inception-v3 and ResNet-101. | Inception-v3 trained on 2437 images from two classes gave the best F1 score of 87.02%. |
Jain et al. [66] | Used different transfer learning models for feature extraction and classification of skin cancers. | Xception gave the best accuracy, but its computation time was greater than that of the other networks trained in this study. MobileNet's accuracy was slightly lower than Xception's, but it required less training time. |
Bechelli and Delhommelle [69] | The performance of different machine learning and deep learning algorithms was evaluated on skin cancer datasets. | Better accuracy and F1 scores were obtained on the smaller datasets. Deep learning models trained on the HAM10000 dataset achieved an F1-score of 0.70 and a precision of 0.68, lower than the scores obtained on the smaller dataset. |
Khan et al. [70] | Proposed a CNN-based fully automated method for the classification and segmentation of skin lesion images. | The classification accuracy of the proposed model trained on HAM10000 was high, but the best segmentation performance was obtained on a dataset of only 200 images; the effectiveness of the proposed method should be evaluated by testing it on larger datasets. |
Adegun et al. [71] | Improved fully convolutional network segmentation using a probabilistic model. | The proposed model, trained with fewer parameters, achieved good classification accuracy on the ISBI dataset, but it required more time to train. |
Lu and Firoozeh Abolhasani Zadeh [72] | Improved XceptionNet for the classification of skin cancer images. | The proposed model achieved 100% accuracy and an F1-score of 95.3% on the HAM10000 dataset. Precision and sensitivity were also greatly improved compared with other networks. |
Qasim Gilani et al. [73] | Used a spiking neural network (SNN) for the classification of skin cancer images. | The SNN, trained with fewer parameters, achieved a higher accuracy and F1-score than the deep learning models, although VGG-13 gave higher specificity and precision. The SNN used in this work is well suited to hardware implementation because of its power-efficient behavior. |
Khan et al. [75] | Developed an automated system for collecting skin lesion images, uploading them to the cloud, and performing classification and segmentation. | Information fusion and improved segmentation methods used in this work improved the performance. However, the use of information fusion increased the feature dimensionality, resulting in increased computational cost. |
Abdar et al. [76] | Proposed a hybrid deep learning model for the classification of skin cancer images. | The proposed work assessed the performance of the uncertainty quantification methods Monte Carlo (MC) dropout, ensemble MC dropout (EMC), and deep ensemble (DE), and selected the best-performing models for skin cancer diagnosis. |
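A recurring building block in the studies above is uncertainty quantification via Monte Carlo dropout, as used by Abdar et al. [76]: dropout is kept active at inference time, several stochastic forward passes are run, and their softmax outputs are aggregated into a predictive mean and an uncertainty score. The snippet below is a minimal NumPy sketch of that aggregation step only, not the authors' implementation; the per-pass softmax outputs are assumed to be given.

```python
import numpy as np

def mc_dropout_summary(probs: np.ndarray):
    """Aggregate softmax outputs from T stochastic forward passes.

    probs: array of shape (T, n_classes); each row is one forward pass
    with dropout active at inference time.
    Returns the predictive mean and the predictive entropy, a common
    uncertainty score (higher entropy = less certain prediction).
    """
    mean_p = probs.mean(axis=0)                         # predictive distribution
    entropy = -np.sum(mean_p * np.log(mean_p + 1e-12))  # predictive entropy
    return mean_p, entropy

# Example: 4 stochastic passes of a binary (benign vs. malignant) classifier.
confident = np.array([[0.97, 0.03], [0.95, 0.05], [0.98, 0.02], [0.96, 0.04]])
uncertain = np.array([[0.55, 0.45], [0.40, 0.60], [0.65, 0.35], [0.45, 0.55]])
_, h_conf = mc_dropout_summary(confident)
_, h_unc = mc_dropout_summary(uncertain)
# The scattered passes yield a higher entropy than the consistent ones.
```

In a diagnostic setting, predictions with entropy above a chosen threshold can be deferred to a dermatologist, which is the practical motivation for comparing MC dropout, EMC, and DE in [76].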
Paper | Resources Required for Training Deep Learning Algorithms Covered in This Paper |
---|---
Gajera et al. [24] | Intel Core i7-7700 (8) CPU @ 4.20 GHz and 16 GB RAM with a single NVIDIA GeForce GTX 1050Ti GPU |
Alenezi et al. [31] | 32 GB RAM and an NVIDIA Quadro P4000 card |
Shinde et al. [32] | Intel Core i5-7500 3.40 GHz processor, 32 GB of RAM, and an NVIDIA GeForce GTX 1050Ti GPU; Raspberry Pi 4 with a 64-GB SD card, spy camera, and NeoPixel ring
Alenezi et al. [33] | Intel Xeon processor, 64 GB of RAM, and 8 GB-P4000 GPU. |
Abbas and Gul [38] | 12 GB GPU and 25 GB of RAM. |
Gouda et al. [40] | Linux PC with GPU RTX3060 and 8 GB of RAM. |
Alwakid et al. [43] | Linux PC with RTX3060 and 8 GB of RAM. |
Bassel et al. [44] | Core Intel4 processor with 12 GB RAM. |
Kousis et al. [49] | Linux system with a GTX 1060 6 GB graphics card. |
Shorfuzzaman [50] | NVIDIA Tesla P100 GPU with 16 GB RAM |
Reis et al. [51] | Intel i5 processor, 6 GB of RAM, and a GTX 940MX NVidia GPU with 2 GB of VRAM |
Fraiwan and Faouri [55] | HP OMEN 30L desktop GT13 with 64 GB RAM, an NVIDIA GeForce RTX 3080 GPU, an Intel Core i7-10700K CPU @ 3.80 GHz, and a 1TB SSD. |
Alam et al. [59] | Ryzen 5600 CPU and Nvidia RTX3060Ti GPU (8 GB VRAM). |
Mazoure et al. [60] | NVIDIA P40 GPU with 4 CPUs. |
Malibari et al. [61] | i5–8600k, GeForce 1050Ti 4 GB, 16 GB RAM, 250 GB SSD, and 1 TB HDD |
Bian et al. [64] | i7-8700k CPU and two 1080ti GPUs |
Khan et al. [70] | 16 GB RAM, a 256 GB SSD, and a 16 GB graphics card
Lu and Firoozeh Abolhasani Zadeh [72] | Intel® Core™ i7-4720HQ CPU @ 1.60 GHz (1.99 GHz), 16 GB RAM
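The hardware entries above vary widely in how they are reported. When publishing training resources, the basic host details can be collected programmatically; below is a minimal Python standard-library sketch (the dictionary keys are illustrative, not a standard). GPU details are vendor-specific and are typically read from tools such as nvidia-smi instead.

```python
import os
import platform

def training_environment() -> dict:
    """Collect basic host details comparable to the entries in the
    table above (architecture, CPU model, core count, OS)."""
    return {
        "machine": platform.machine(),      # e.g. 'x86_64'
        "processor": platform.processor(),  # CPU model string (may be empty on Linux)
        "cpu_count": os.cpu_count(),        # logical cores
        "os": f"{platform.system()} {platform.release()}",
    }

env = training_environment()
```

Recording such a snapshot alongside reported results makes the runtime comparisons across papers in this table easier to interpret.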
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Naqvi, M.; Gilani, S.Q.; Syed, T.; Marques, O.; Kim, H.-C. Skin Cancer Detection Using Deep Learning—A Review. Diagnostics 2023, 13, 1911. https://doi.org/10.3390/diagnostics13111911