Bucket of Deep Transfer Learning Features and Classification Models for Melanoma Detection
Abstract
1. Introduction
- A deep and ensemble learning-based framework that simultaneously addresses inter-class variation and class imbalance in melanoma classification.
- A classification phase that builds multiple image representation models in parallel, based on features extracted through deep transfer learning.
- A demonstration of how combining multiple features enriches the image representation, guiding the lesion assessment in the manner of a skilled dermatologist.
- Experimental improvements over existing methods on several state-of-the-art melanoma detection datasets.
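The ensemble idea in the contributions above can be illustrated with a minimal majority-vote sketch: each feature/classifier model emits a label for a lesion image, and the bucket's final decision is the most frequent label. This is an illustrative sketch, not the paper's exact fusion rule; the label names are hypothetical.

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse the binary labels produced by several feature/classifier models.

    predictions: list of per-model labels for one image,
    e.g. ['melanoma', 'nevus', 'melanoma'].
    Ties are broken arbitrarily by insertion order.
    """
    counts = Counter(predictions)
    # The most common label across the bucket of models wins.
    return counts.most_common(1)[0][0]

# One hypothetical lesion scored by three feature/classifier pairs.
votes = ['melanoma', 'nevus', 'melanoma']
print(majority_vote(votes))  # melanoma
```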
2. Related Work
3. Materials and Methods
3.1. Data Balancing
3.2. Image Resize
3.3. Transfer Learning and Features Extraction
3.4. Network Design
3.5. Ensemble Learning
3.6. Train and Test Strategy: Bootstrapping
4. Experimental Results
4.1. Datasets
4.2. Settings
4.3. Discussion
5. Conclusions and Future Works
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Codella, N.; Cai, J.; Abedini, M.; Garnavi, R.; Halpern, A.; Smith, J.R. Deep learning, sparse coding, and SVM for melanoma recognition in dermoscopy images. In International Workshop on Machine Learning in Medical Imaging; Springer: Berlin/Heidelberg, Germany, 2015; pp. 118–126.
- Mishra, N.K.; Celebi, M.E. An overview of melanoma detection in dermoscopy images using image processing and machine learning. arXiv 2016, arXiv:1601.07843.
- Binder, M.; Schwarz, M.; Winkler, A.; Steiner, A.; Kaider, A.; Wolff, K.; Pehamberger, H. Epiluminescence microscopy: A useful tool for the diagnosis of pigmented skin lesions for formally trained dermatologists. Arch. Dermatol. 1995, 131, 286–291.
- Barata, C.; Celebi, M.E.; Marques, J.S. A survey of feature extraction in dermoscopy image analysis of skin cancer. IEEE J. Biomed. Health Inform. 2018, 23, 1096–1109.
- Celebi, M.E.; Kingravi, H.A.; Uddin, B.; Iyatomi, H.; Aslandogan, Y.A.; Stoecker, W.V.; Moss, R.H. A methodological approach to the classification of dermoscopy images. Comput. Med. Imaging Graph. 2007, 31, 362–373.
- Tommasi, T.; La Torre, E.; Caputo, B. Melanoma recognition using representative and discriminative kernel classifiers. In International Workshop on Computer Vision Approaches to Medical Image Analysis; Springer: Berlin/Heidelberg, Germany, 2006; pp. 1–12.
- Pathan, S.; Prabhu, K.G.; Siddalingaswamy, P. A methodological approach to classify typical and atypical pigment network patterns for melanoma diagnosis. Biomed. Signal Process. Control 2018, 44, 25–37.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems; ACM Digital Library: New York, NY, USA, 2012; pp. 1097–1105.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
- Yu, L.; Chen, H.; Dou, Q.; Qin, J.; Heng, P.A. Automated melanoma recognition in dermoscopy images via very deep residual networks. IEEE Trans. Med. Imaging 2016, 36, 994–1004.
- Shie, C.K.; Chuang, C.H.; Chou, C.N.; Wu, M.H.; Chang, E.Y. Transfer representation learning for medical image analysis. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 711–714.
- Shin, H.C.; Roth, H.R.; Gao, M.; Lu, L.; Xu, Z.; Nogues, I.; Yao, J.; Mollura, D.; Summers, R.M. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans. Med. Imaging 2016, 35, 1285–1298.
- Mahdiraji, S.A.; Baleghi, Y.; Sakhaei, S.M. Skin lesion images classification using new color pigmented boundary descriptors. In Proceedings of the 3rd International Conference on Pattern Recognition and Image Analysis (IPRIA 2017), Shahrekord, Iran, 19–20 April 2017.
- Amelard, R.; Glaister, J.; Wong, A.; Clausi, D.A. Melanoma decision support using lighting-corrected intuitive feature models. In Computer Vision Techniques for the Diagnosis of Skin Cancer; Springer: Berlin/Heidelberg, Germany, 2013; pp. 193–219.
- Mahdiraji, S.A.; Baleghi, Y.; Sakhaei, S.M. BIBS, a new descriptor for melanoma/non-melanoma discrimination. In Proceedings of the Iranian Conference on Electrical Engineering (ICEE), Mashhad, Iran, 8–10 May 2018; pp. 1397–1402.
- Amelard, R.; Glaister, J.; Wong, A.; Clausi, D.A. High-level intuitive features (HLIFs) for intuitive skin lesion description. IEEE Trans. Biomed. Eng. 2015, 62, 820–831.
- Karabulut, E.; Ibrikci, T. Texture analysis of melanoma images for computer-aided diagnosis. In Proceedings of the International Conference on Intelligent Computing, Computer Science & Information Systems (ICCSIS 16), Pattaya, Thailand, 28–29 April 2016; Volume 2, pp. 26–29.
- Giotis, I.; Molders, N.; Land, S.; Biehl, M.; Jonkman, M.; Petkov, N. MED-NODE: A computer-assisted melanoma diagnosis system using non-dermoscopic images. Expert Syst. Appl. 2015, 42, 6578–6585.
- Nasr-Esfahani, E.; Samavi, S.; Karimi, N.; Soroushmehr, S.M.R.; Jafari, M.H.; Ward, K.; Najarian, K. Melanoma detection by analysis of clinical images using convolutional neural network. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 1373–1376.
- Albert, B.A. Deep learning from limited training data: Novel segmentation and ensemble algorithms applied to automatic melanoma diagnosis. IEEE Access 2020, 8, 31254–31269.
- Pereira, P.M.; Fonseca-Pinto, R.; Paiva, R.P.; Assuncao, P.A.; Tavora, L.M.; Thomaz, L.A.; Faria, S.M. Skin lesion classification enhancement using border-line features—The melanoma vs. nevus problem. Biomed. Signal Process. Control 2020, 57, 101765.
- Sultana, N.N.; Puhan, N.B.; Mandal, B. DeepPCA based objective function for melanoma detection. In Proceedings of the 2018 International Conference on Information Technology (ICIT), Bhubaneswar, India, 19–21 December 2018; pp. 68–72.
- Ge, Y.; Li, B.; Zhao, Y.; Guan, E.; Yan, W. Melanoma segmentation and classification in clinical images using deep learning. In Proceedings of the 2018 10th International Conference on Machine Learning and Computing, Macau, China, 26–28 February 2018; pp. 252–256.
- Jafari, M.H.; Samavi, S.; Karimi, N.; Soroushmehr, S.M.R.; Ward, K.; Najarian, K. Automatic detection of melanoma using broad extraction of features from digital images. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 1357–1360.
- Do, T.; Hoang, T.; Pomponiu, V.; Zhou, Y.; Chen, Z.; Cheung, N.; Koh, D.; Tan, A.; Tan, S. Accessible melanoma detection using smartphones and mobile image analysis. IEEE Trans. Multimed. 2018, 20, 2849–2864.
- Astorino, A.; Fuduli, A.; Veltri, P.; Vocaturo, E. Melanoma detection by means of multiple instance learning. Interdiscip. Sci. Comput. Life Sci. 2020, 12, 24–31.
- Vocaturo, E.; Zumpano, E.; Giallombardo, G.; Miglionico, G. DC-SMIL: A multiple instance learning solution via spherical separation for automated detection of dysplastic nevi. In Proceedings of the 24th Symposium on International Database Engineering & Applications, Incheon (Seoul), South Korea, 12–18 August 2020; pp. 1–9.
- Fuduli, A.; Veltri, P.; Vocaturo, E.; Zumpano, E. Melanoma detection using color and texture features in computer vision systems. Adv. Sci. Technol. Eng. Syst. J. 2019, 4, 16–22.
- Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
- Kobayashi, T.; Watanabe, K.; Otsu, N. Logistic label propagation. Pattern Recognit. Lett. 2012, 33, 580–588.
- Dasarathy, B.V. Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques; IEEE Computer Society Press: Los Alamitos, CA, USA, 1991; ISBN 978-0-8186-8930-7.
- Likas, A.; Vlassis, N.; Verbeek, J.J. The global k-means clustering algorithm. Pattern Recognit. 2003, 36, 451–461.
- Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. J. Big Data 2019, 6, 60.
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
- LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1989, 1, 541–551.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
- Munteanu, C.; Cooclea, S. Spotmole Melanoma Control System. 2009. Available online: https://play.google.com/store/apps/details?id=com.spotmole&hl=en=AU (accessed on 18 November 2020).
- Zagrouba, E.; Barhoumi, W. A preliminary approach for the automated recognition of malignant melanoma. Image Anal. Stereol. 2004, 23, 121–135.
- Mandal, B.; Sultana, N.; Puhan, N. Deep residual network with regularized Fisher framework for detection of melanoma. IET Comput. Vis. 2018, 12, 1096.
- Jafari, M.H.; Samavi, S.; Soroushmehr, S.M.R.; Mohaghegh, H.; Karimi, N.; Najarian, K. Set of descriptors for skin cancer diagnosis using non-dermoscopic color images. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 2638–2642.
- Amelard, R.; Wong, A.; Clausi, D.A. Extracting high-level intuitive features (HLIF) for classifying skin lesions using standard camera images. In Proceedings of the 2012 Ninth Conference on Computer and Robot Vision, Toronto, ON, Canada, 28–30 May 2012; pp. 396–403.
- Mendonca, T.; Celebi, M.; Mendonca, T.; Marques, J. PH2: A public database for the analysis of dermoscopic images. Dermoscopy Image Anal. 2015, 419–439.
- Barata, C.; Ruela, M.; Francisco, M.; Mendonça, T.; Marques, J.S. Two systems for the detection of melanomas in dermoscopy images using texture and color features. IEEE Syst. J. 2013, 8, 965–979.
Network | Depth | Size (MB) | Parameters (Millions) | Input Size | Features Layer |
---|---|---|---|---|---|
Alexnet | 8 | 227 | 61 | 227 × 227 | fc7 |
Googlenet | 22 | 27 | 7 | 224 × 224 | pool5-7x7_s1 |
Resnet18 | 18 | 44 | 11.7 | 224 × 224 | pool5 |
Resnet50 | 50 | 96 | 25.6 | 224 × 224 | avg_pool |
Algorithms | Setting |
---|---|
SVM [29] | KernelFunction:polynomial, KernelScale: auto |
SVM [29] | KernelFunction: Gaussian, KernelScale: auto |
LLP [30] | KernelFunction: rbf, Regularization parameter: 1, init: 0, maxiter: 1000 |
KNN [31] | NumNeighbors: 3, Distance: spearman |
KNN [31] | NumNeighbors: 4, Distance: correlation |
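The settings table above can be approximated with scikit-learn as a small "bucket" of classifiers. This is a hedged sketch, not the authors' code: LLP (logistic label propagation) has no scikit-learn implementation and is omitted, and the paper's spearman/correlation KNN distances are replaced here by the default Euclidean metric.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def build_bucket():
    """Bucket of classifiers loosely mirroring the settings table."""
    return [
        SVC(kernel='poly', gamma='scale'),    # polynomial-kernel SVM
        SVC(kernel='rbf', gamma='scale'),     # Gaussian-kernel SVM
        KNeighborsClassifier(n_neighbors=3),  # 3-NN
        KNeighborsClassifier(n_neighbors=4),  # 4-NN
    ]

# Tiny two-class demo standing in for deep-transfer-learning feature vectors.
X = [[0.0, 0.0], [0.2, 0.1], [0.1, 0.2], [5.0, 5.0], [5.2, 5.1], [5.1, 5.2]]
y = [0, 0, 0, 1, 1, 1]
bucket = [clf.fit(X, y) for clf in build_bucket()]
votes = [clf.predict([[5.1, 5.0]])[0] for clf in bucket]
print(votes)
```

Each classifier in the bucket votes independently; in the paper's framework these votes are then fused in the ensemble phase.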
Metric | Equation |
---|---|
True Positive Rate | TPR = TP / (TP + FN) |
True Negative Rate | TNR = TN / (TN + FP) |
Positive Predictive Value | PPV = TP / (TP + FP) |
Negative Predictive Value | NPV = TN / (TN + FN) |
Accuracy | ACC = (TP + TN) / (TP + TN + FP + FN) |
F1-Score (Positive) | F1P = 2 · PPV · TPR / (PPV + TPR) |
F1-Score (Negative) | F1N = 2 · NPV · TNR / (NPV + TNR) |
Matthews Correlation Coefficient | MCC = (TP · TN - FP · FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN)) |
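These are the standard confusion-matrix definitions, and they can be computed directly from the four counts (a minimal sketch; the variable names and example counts are ours):

```python
import math

def metrics(tp, fn, tn, fp):
    """Compute the evaluation metrics from confusion-matrix counts."""
    tpr = tp / (tp + fn)                   # sensitivity / recall
    tnr = tn / (tn + fp)                   # specificity
    ppv = tp / (tp + fp)                   # precision
    npv = tn / (tn + fn)
    acc = (tp + tn) / (tp + tn + fp + fn)
    f1_pos = 2 * ppv * tpr / (ppv + tpr)
    f1_neg = 2 * npv * tnr / (npv + tnr)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {'TPR': tpr, 'TNR': tnr, 'PPV': ppv, 'NPV': npv,
            'ACC': acc, 'F1+': f1_pos, 'F1-': f1_neg, 'MCC': mcc}

# Hypothetical counts for illustration.
m = metrics(tp=45, fn=5, tn=90, fp=10)
print({k: round(v, 2) for k, v in m.items()})
```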
Method | TPR | TNR | PPV | NPV | ACC | F1-Score (P) | F1-Score (N) | MCC |
---|---|---|---|---|---|---|---|---|
MED-NODE annotated [18] | 0.78 | 0.59 | 0.56 | 0.80 | 0.66 | 0.65 | 0.68 | 0.36 |
Spotmole [39] | 0.82 | 0.57 | 0.56 | 0.83 | 0.67 | 0.67 | 0.68 | 0.39 |
Barhoumi and Zagrouba [40] | 0.46 | 0.87 | 0.70 | 0.71 | 0.70 | 0.56 | 0.78 | 0.37 |
MED-NODE color [18] | 0.74 | 0.72 | 0.64 | 0.81 | 0.73 | 0.69 | 0.76 | 0.45 |
MED-NODE texture [18] | 0.62 | 0.85 | 0.74 | 0.77 | 0.76 | 0.67 | 0.81 | 0.49 |
Jafari et al. [24] | 0.90 | 0.72 | 0.70 | 0.91 | 0.79 | 0.79 | 0.80 | 0.61 |
MED-NODE combined [18] | 0.80 | 0.81 | 0.74 | 0.86 | 0.81 | 0.77 | 0.83 | 0.61 |
Nasr Esfahani et al. [19] | 0.81 | 0.80 | 0.75 | 0.86 | 0.81 | 0.78 | 0.83 | 0.61 |
Benjamin Albert [20] | 0.89 | 0.93 | 0.92 | 0.93 | 0.91 | 0.89 | 0.92 | 0.83 |
Pereira et al. [21] ght/svm-smo/f23-32 | 0.45 | 0.92 | - | - | 0.73 | - | - | - |
Pereira et al. [21] ght/svm-smo/f1-32 | 0.56 | 0.86 | - | - | 0.74 | - | - | - |
Pereira et al. [21] lbpc/svm-smo/f23-32 | 0.49 | 0.93 | - | - | 0.75 | - | - | - |
Pereira et al. [21] lbpc/svm-smo/f1-32 | 0.58 | 0.91 | - | - | 0.78 | - | - | - |
Pereira et al. [21] ght/svm-sda/f23-32 | 0.66 | 0.83 | - | - | 0.76 | - | - | - |
Pereira et al. [21] ght/svm-sda/f1-32 | 0.66 | 0.86 | - | - | 0.78 | - | - | - |
Pereira et al. [21] lbpc/svm-isda/f23-32 | 0.69 | 0.83 | - | - | 0.77 | - | - | - |
Pereira et al. [21] lbpc/svm-isda/f1-32 | 0.65 | 0.88 | - | - | 0.79 | - | - | - |
Pereira et al. [21] ght/ffn/f23-32 | 0.63 | 0.84 | - | - | 0.76 | - | - | - |
Pereira et al. [21] ght/ffn/f1-32 | 0.63 | 0.84 | - | - | 0.76 | - | - | - |
Pereira et al. [21] lbpc/ffn/f23-32 | 0.64 | 0.83 | - | - | 0.75 | - | - | - |
Pereira et al. [21] lbpc/ffn/f1-32 | 0.66 | 0.86 | - | - | 0.77 | - | - | - |
Sultana et al. [22] | 0.73 | 0.86 | 0.77 | 0.83 | 0.81 | - | - | - |
Ge et al. [23] | 0.94 | 0.93 | - | - | 0.92 | - | - | - |
Mandal et al. [41] Case 1 | 0.61 | 0.65 | 0.74 | 0.87 | 0.65 | - | - | - |
Mandal et al. [41] Case 2 | 0.80 | 0.73 | 0.74 | 0.87 | 0.71 | - | - | - |
Mandal et al. [41] Case 3 | 0.84 | 0.66 | 0.68 | 0.86 | 0.71 | - | - | - |
Jafari et al. [42] | 0.82 | 0.71 | 0.67 | 0.85 | 0.76 | - | - | - |
T. Do et al. [25] Color | 0.81 | 0.73 | 0.66 | 0.85 | 0.75 | - | - | - |
T. Do et al. [25] Texture | 0.66 | 0.85 | 0.75 | 0.79 | 0.78 | - | - | - |
T. Do et al. [25] Color and Texture | 0.84 | 0.72 | 0.70 | 0.87 | 0.77 | - | - | - |
E. Nasr-Esfahani et al. [19] | 0.81 | 0.80 | 0.75 | 0.86 | 0.81 | - | - | - |
Resnet50+Resnet18 | 0.80 | 1.00 | 1.00 | 0.83 | 0.90 | 0.88 | 0.90 | 0.81 |
Resnet50+Googlenet+Alexnet | 0.90 | 0.97 | 0.97 | 0.90 | 0.93 | 0.93 | 0.94 | 0.87 |
Method | TPR | TNR | PPV | NPV | ACC | F1-Score (P) | F1-Score (N) | MCC |
---|---|---|---|---|---|---|---|---|
Texture analysis [17] | 0.87 | 0.71 | 0.76 | - | 0.75 | - | - | - |
HLIFs [16] | 0.96 | 0.73 | - | - | 0.83 | - | - | - |
BIBS [15] | 0.92 | 0.88 | 0.91 | - | 0.90 | - | - | - |
Decision Support [14] | 0.84 | 0.79 | - | - | 0.81 | - | - | - |
Color pigment boundary [13] | 0.95 | 0.88 | 0.92 | - | 0.82 | - | - | - |
R. Amelard et al. [43] Asymmetry | 0.73 | 0.64 | - | - | 0.69 | - | - | - |
R. Amelard et al. [43] Proposed HLIFs | 0.79 | 0.68 | - | - | 0.75 | - | - | - |
R. Amelard et al. [43] Cavalcanti feature set | 0.84 | 0.78 | - | - | 0.82 | - | - | - |
R. Amelard et al. [43] Modified | 0.86 | 0.75 | - | - | 0.72 | - | - | - |
R. Amelard et al. [43] Combined | 0.91 | 0.80 | - | - | 0.86 | - | - | - |
Resnet50+Resnet18 | 0.84 | 0.92 | 0.91 | 0.85 | 0.88 | 0.87 | 0.88 | 0.76 |
Resnet50+Googlenet+Alexnet | 0.87 | 0.65 | 0.71 | 0.84 | 0.76 | 0.78 | 0.73 | 0.54 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Manzo, M.; Pellino, S. Bucket of Deep Transfer Learning Features and Classification Models for Melanoma Detection. J. Imaging 2020, 6, 129. https://doi.org/10.3390/jimaging6120129