A Robust Ensemble of Convolutional Neural Networks for the Detection of Monkeypox Disease from Skin Images
Abstract
1. Introduction
- Designing a dataset of superficial skin photographs containing images of monkeypox cases, healthy people, and people with other types of diseases that produce skin rashes. This dataset is provided publicly to the community.
- Studying several classifier alternatives based on convolutional neural networks to distinguish between the three classes described above. These classifiers are built by applying transfer learning techniques to pre-trained models.
- Combining different classifier models to form ensemble systems that perform the same task, comparing the results with those obtained previously.
- Evaluating the results obtained by the classifiers using explainable AI (xAI) techniques. To the best of our knowledge, this is the first time such techniques have been applied to an ensemble classifier for this task.
2. Background
- Ali et al. [24]: In this work, a binary classification of monkeypox and other skin diseases is performed using skin images taken by users. The authors tested several convolutional neural network models, such as VGG-16, ResNet50, and Inception-V3. The dataset contains 102 monkeypox images and 126 images of other skin diseases, but no images of healthy skin tissue. The best classifier achieves an accuracy greater than 82%.
- Ahsan et al. [25]: This work uses a custom dataset formed by the four classes “healthy”, “measles”, “chickenpox” and “monkeypox”, containing 54, 17, 47, and 43 images, respectively. Although the authors apply data augmentation, the dataset has very few images for some classes and is quite imbalanced. Moreover, the developed classifiers are trained for only two classes (“monkeypox” versus “others”, and “monkeypox” versus “chickenpox”). Using a VGG-16 CNN for each implemented system, the authors report 83% accuracy on the training subset for the “monkeypox” vs. “others” study, and 78% on the training subset for the “monkeypox” vs. “chickenpox” study.
3. Materials and Methods
3.1. Dataset
- Classes: the dataset needs to distinguish between healthy tissue and tissue affected by monkeypox. In addition, it is essential to include other skin diseases, so that the classifier can determine whether a lesion is specific to monkeypox or caused by another condition.
- Number of images: It is important to develop a balanced dataset with the same number of images for each class.
- Type of images: to develop a classifier that can be integrated into a mobile device, all classes require images with similar characteristics that highlight the skin lesions (avoiding images taken from a distance or of whole body parts).
- Ali et al. [24]: the main problem of this dataset is its classes. It has only two, and there is no “Healthy” class. This absence can lead to failures when the classifier is evaluated with healthy-tissue images: since no class contains them, every image is labeled as damaged tissue (by monkeypox or another disease). This problem was previously addressed in other works [28].
- Ahsan et al. [25]: this second dataset contains a good number of classes (including a “Healthy” one), but it has two main drawbacks. The first is the total number of images: only the “Healthy” class surpasses 50 images (54), and the “Measles” class has only 17. The second drawback is class balance: in the worst case, the “Healthy” class has more than three times as many images as the “Measles” class.
3.2. Classifiers
- The results show that this approach required complex architectures and very long training to converge. In those preliminary tests, an architecture with 10 convolutional layers and 3 dense layers needed a total of 100K epochs (around ten days) to reach an accuracy of about 80%. We therefore estimate that this type of architecture would require even more complexity and training time to obtain acceptable results (at least 95%); in fact, extrapolating the training trend, reaching 85% accuracy would take more than two months, and 90% more than a year.
- Reviewing previous works, we observed that their systems were based on pre-trained models. Therefore, to compare fairly with those works, we had to use similar systems, which also allows us to measure the improvement obtained.
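The transfer-learning setup described above can be illustrated with a minimal numpy sketch: a fixed random projection stands in for the frozen pre-trained base (in the real systems this is an ImageNet-trained CNN), and only a new softmax classification head is trained. All data, dimensions, and hyperparameters here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# A frozen random projection stands in for the pre-trained convolutional base:
# in real transfer learning its weights come from ImageNet and are NOT updated.
W_base = rng.normal(size=(64, 32))

def features(x):
    """Frozen feature extractor (ReLU of a fixed projection), L2-normalized."""
    f = np.maximum(x @ W_base, 0.0)
    return f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-8)

# Toy 3-class data (stand-in for Healthy / Monkeypox / Other skin damages).
n = 150
X = rng.normal(size=(n, 64))
y = rng.integers(0, 3, size=n)
X[np.arange(n), y] += 4.0              # inject a class-dependent signal

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Trainable head: a single dense softmax layer; only W_head is updated.
W_head = np.zeros((32, 3))
F, Y = features(X), np.eye(3)[y]
for _ in range(500):                   # full-batch gradient descent, cross-entropy
    P = softmax(F @ W_head)
    W_head -= 1.0 * (F.T @ (P - Y)) / n

train_acc = (softmax(F @ W_head).argmax(axis=1) == y).mean()
```

The key point is that the base's weights never change during the loop, so only the small head has to be fitted on the limited medical dataset.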
3.2.1. Individual Models
- VGG-16: a convolutional neural network proposed by researchers at Oxford University [29]; it gained notoriety by winning the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) in 2014. It is composed of 13 convolutional layers, 5 pooling layers, and 3 dense layers.
- VGG-19: a variant with more convolutional layers than VGG-16 and, therefore, heavier in memory storage and computational requirements. The number of pooling and dense layers is the same, but the convolutional layers increase to 16.
- ResNet50: introduced by Microsoft, it won the ILSVRC (ImageNet Large-Scale Visual Recognition Challenge) competition in 2015 [30]. It increases the number of layers by introducing residual connections that skip directly to a later layer, improving the learning process. This model is much more complex than the previous ones: it has almost 50 convolutional layers, 2 pooling layers, and one dense layer.
- MobileNet-V2: a convolutional neural network architecture that seeks to perform well on mobile devices. It is based on an inverted residual structure in which the residual connections are between the bottleneck layers [31]. In this case, the number of convolutional layers is similar to that used by ResNet50 (around 50), as well as the pooling layers (1) and the dense layers (1).
- EfficientNet-B0: a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a compound coefficient [32]. The base EfficientNet-B0 network is built on the inverted bottleneck residual blocks of MobileNet-V2, in addition to squeeze-and-excitation blocks. In this case, the number of convolutional layers is reduced to 33, with 17 pooling layers and 33 dense layers.
3.2.2. Ensemble Classifiers
- Ensemble 1: VGG-16 + VGG-19 + ResNet50
- Ensemble 2: VGG-16 + ResNet50 + EfficientNet-B0
- Ensemble 3: ResNet50 + EfficientNet-B0 + MobileNet-V2
- Concatenation Ensemble: This is the most common technique to merge different data sources. A concatenation ensemble receives different inputs, whatever their dimensions, and concatenates them along a given axis. This operation can disperse the information, preventing the final part of the network from learning what matters, or it can result in overfitting.
- Average Ensemble: this can be considered the opposite of the concatenation operation. The outputs of the networks are passed through dense layers with a fixed number of neurons to equalize their dimensions, and then their element-wise average is computed. The drawback of this approach is the loss of information inherent to the averaging operation.
- Weighted Ensemble: This is a special form of averaging in which the tensor outputs are multiplied by weights and then linearly combined. These weights determine the contribution of each model to the final result; they are not fixed, but optimized during the training process. For the weighted ensemble, we created a function that computes this weighting on the fly: it averages the outputs with adaptive weights, trains those weights by backpropagation just like those of any other layer, and normalizes them with a softmax so that they always add up to 1. This mechanism has been used in previous works such as the one recently published in [33].
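A minimal numpy sketch of the three fusion strategies, assuming three 3-class base classifiers (the probability vectors and weight values below are illustrative; in the real weighted ensemble the weights are trained by backpropagation):

```python
import numpy as np

def softmax(w):
    """Normalize raw weight logits so the ensemble weights sum to 1."""
    e = np.exp(w - np.max(w))
    return e / e.sum()

# Outputs of three base classifiers for one image (3-class probabilities).
p1 = np.array([0.7, 0.2, 0.1])
p2 = np.array([0.6, 0.3, 0.1])
p3 = np.array([0.8, 0.1, 0.1])

# Concatenation ensemble: stack the raw outputs; a dense head then learns
# from the resulting 9-dimensional vector.
concat = np.concatenate([p1, p2, p3])          # shape (9,)

# Average ensemble: element-wise mean of the three outputs.
avg = np.mean([p1, p2, p3], axis=0)

# Weighted ensemble: learnable logits mapped through softmax so the
# per-model weights always add up to 1.
w_logits = np.array([0.5, 1.0, 1.5])           # trained by backpropagation in practice
w = softmax(w_logits)
weighted = w[0] * p1 + w[1] * p2 + w[2] * p3
```

The weighted variant reduces to the simple average when all logits are equal, which is why it can be seen as a generalization of the average ensemble.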
3.3. Evaluation Metrics
- Accuracy: proportion of correctly classified samples among all samples (see Equation (1)).
- Specificity: proportion of true negatives among all cases that do not belong to the class (see Equation (2)).
- Precision: proportion of true positives among all cases classified as the class (see Equation (3)).
- Sensitivity (or Recall): proportion of true positives among all cases that belong to the class (see Equation (4)).
- F1: it considers both the precision and the sensitivity (recall) of the test; it is the harmonic mean of both parameters (see Equation (5)).
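These one-vs-rest metrics follow directly from the confusion-matrix counts; a small sketch with hypothetical counts (not taken from our experiments):

```python
def one_vs_rest_metrics(tp, fp, tn, fn):
    """Per-class metrics from one-vs-rest confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)       # Equation (1)
    specificity = tn / (tn + fp)                     # Equation (2)
    precision = tp / (tp + fp)                       # Equation (3)
    sensitivity = tp / (tp + fn)                     # Equation (4), a.k.a. recall
    f1 = 2 * precision * sensitivity / (precision + sensitivity)  # Equation (5)
    return accuracy, specificity, precision, sensitivity, f1

# Hypothetical counts: 20 images of the target class (18 detected, 2 missed)
# and 40 images of the other classes, none misclassified as the target.
acc, spe, pre, sen, f1 = one_vs_rest_metrics(tp=18, fp=0, tn=40, fn=2)
```

Global values are then obtained by averaging these per-class figures over the three classes.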
3.4. Explainable AI
4. Results and Discussion
4.1. Classification Results
4.1.1. Individual CNN Results
VGG-16
VGG-19
ResNet50
MobileNet-V2
EfficientNet-B0
4.1.2. Ensemble CNN Results
VGG16 + VGG19 + ResNet50
VGG16 + ResNet50 + EfficientNetB0
ResNet50 + EfficientNet + MobileNetV2
4.2. Explainable AI Results
- For the Monkeypox images, most individual systems focus on the central pustule, although in some cases they focus on the skin wrinkles it produces. The Grad-CAM applied to the ensemble network resolves this by highlighting the aspects common to all networks: the resulting heatmap lies entirely on the pustule.
- For the normal skin case, discrepancies are observed between all the models individually, causing the ensemble heatmap to be more dispersed.
- Finally, for the case of other skin damage, an example in which multiple spots and pimples appear, each model focuses its attention on a particular part of the image, although all of them concentrate on groups of skin spots and/or pimples. Notably, the model with the best individual results (ResNet50) focuses on the area with the highest concentration of bumps, and the ensemble heatmap also focuses on this group of bumps.
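The Grad-CAM heatmaps discussed above weight each feature map of the last convolutional layer by the global-average-pooled gradient of the class score. A minimal numpy sketch, with random arrays standing in for real activations and gradients, and a simple average of the member heatmaps as one plausible (assumed, not necessarily the exact) way to form an ensemble heatmap:

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from the last conv layer.
    activations, gradients: arrays of shape (K, H, W) for the target class."""
    alphas = gradients.mean(axis=(1, 2))              # global-average-pooled grads
    cam = np.maximum((alphas[:, None, None] * activations).sum(axis=0), 0)
    return cam / cam.max() if cam.max() > 0 else cam  # normalize to [0, 1]

# Random stand-ins for three ensemble members' last-layer tensors (512 maps, 7x7).
A = [rng.random((512, 7, 7)) for _ in range(3)]
G = [rng.random((512, 7, 7)) for _ in range(3)]
cams = [grad_cam(a, g) for a, g in zip(A, G)]
ensemble_cam = np.mean(cams, axis=0)                  # one simple fusion option
```

Regions where all members produce high activations survive the averaging, which matches the observation that the ensemble heatmap concentrates on the features common to all networks.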
4.3. Comparison with Previous Works
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- United Nations. What Is Monkeypox? 2022. Available online: https://news.un.org/en/story/2022/07/1123212 (accessed on 2 August 2022).
- World Health Organization. Monkeypox Details. 2022. Available online: https://www.who.int/news-room/fact-sheets/detail/monkeypox (accessed on 2 August 2022).
- Centers for Disease Control and Prevention. Technical Report: Multi-National Monkeypox Outbreak. 2022. Available online: https://www.cdc.gov/poxvirus/monkeypox/clinicians/technical-report.html (accessed on 2 August 2022).
- The Guardian. Spain Reports Second Death Related to Monkeypox. 2022. Available online: https://www.theguardian.com/world/2022/jul/30/spain-reports-second-death-related-to-monkeypox (accessed on 2 August 2022).
- Torres-Soto, J.; Ashley, E.A. Multi-task deep learning for cardiac rhythm detection in wearable devices. NPJ Digit. Med. 2020, 3, 116.
- Rim, B.; Sung, N.-J.; Min, S.; Hong, M. Deep learning in physiological signal data: A survey. Sensors 2020, 20, 969.
- Zhu, H.; Samtani, S.; Brown, R.; Chen, H. A deep learning approach for recognizing activity of daily living (ADL) for senior care: Exploiting interaction dependency and temporal patterns. Forthcom. MIS Q. 2020. Available online: https://ssrn.com/abstract=3595738 (accessed on 2 August 2022).
- Escobar-Linero, E.; Domínguez-Morales, M.; Sevillano, J.L. Worker’s physical fatigue classification using neural networks. Expert Syst. Appl. 2022, 198, 116784.
- Roncato, C.; Perez, L.; Brochet-Guégan, A.; Allix-Béguec, C.; Raimbeau, A.; Gautier, G.; Agard, C.; Ploton, G.; Moisselin, S.; Lorcerie, F.; et al. Colour Doppler ultrasound of temporal arteries for the diagnosis of giant cell arteritis: A multicentre deep learning study. Clin. Exp. Rheumatol. 2020, 38, S120–S125.
- Liu, Z.; Yang, C.; Huang, J.; Liu, S.; Zhuo, Y.; Lu, X. Deep learning framework based on integration of S-Mask R-CNN and Inception-v3 for ultrasound image-aided diagnosis of prostate cancer. Future Gener. Comput. Syst. 2021, 114, 358–367.
- Civit-Masot, J.; Luna-Perejón, F.; María Rodríguez Corral, J.; Domínguez-Morales, M.; Morgado-Estévez, A.; Civit, A. A study on the use of Edge TPUs for eye fundus image segmentation. Eng. Appl. Artif. Intell. 2021, 104, 104384.
- Bakkouri, I.; Afdel, K. MLCA2F: Multi-Level Context Attentional Feature Fusion for COVID-19 lesion segmentation from CT scans. Signal Image Video Process. 2022, 17, 1181–1188.
- Kundu, R.; Das, R.; Geem, Z.W.; Han, G.-T.; Sarkar, R. Pneumonia detection in chest X-ray images using an ensemble of deep learning models. PLoS ONE 2021, 16, e0256630.
- Lotter, W.; Diab, A.R.; Haslam, B.; Kim, J.G.; Grisot, G.; Wu, E.; Wu, K.; Onieva Onieva, K.; Boyer, Y.; Boxerman, J.L.; et al. Robust breast cancer detection in mammography and digital breast tomosynthesis using an annotation-efficient deep learning approach. Nat. Med. 2021, 27, 244–249.
- Thomas, S.M.; Lefevre, J.G.; Baxter, G.; Hamilton, N.A. Interpretable deep learning systems for multi-class segmentation and classification of non-melanoma skin cancer. Med. Image Anal. 2021, 68, 101915.
- Civit-Masot, J.; Bañuls-Beaterio, A.; Domínguez-Morales, M.; Rivas-Pérez, M.; Muñoz-Saavedra, L.; Corral, J.M.R. Non-small cell lung cancer diagnosis aid with histopathological images using Explainable Deep Learning techniques. Comput. Methods Programs Biomed. 2022, 226, 107108.
- Bakkouri, I.; Afdel, K. Computer-aided diagnosis (CAD) system based on multi-layer feature fusion network for skin lesion recognition in dermoscopy images. Multimed. Tools Appl. 2020, 79, 20483–20518.
- Wright, A.; Ai, A.; Ash, J.; Wiesen, J.F.; Hickman, T.-T.T.; Aaron, S.; McEvoy, D.; Borkowsky, S.; Dissanayake, P.I.; Embi, P.; et al. Clinical decision support alert malfunctions: Analysis and empirically derived taxonomy. J. Am. Med. Inform. Assoc. 2018, 25, 496–506.
- Von-Eschenbach, W.J. Transparency and the black box problem: Why we do not trust AI. Philos. Technol. 2021, 34, 1607–1622.
- Singh, A.; Sengupta, S.; Lakshminarayanan, V. Explainable deep learning models in medical image analysis. J. Imaging 2020, 6, 52.
- Angelov, P.; Soares, E. Towards explainable deep neural networks (xDNN). Neural Netw. 2020, 130, 185–194.
- Xue, Q.; Chuah, M.C. Explainable deep learning based medical diagnostic system. Smart Health 2019, 13, 100068.
- Brunese, L.; Mercaldo, F.; Reginelli, A.; Santone, A. Explainable deep learning for pulmonary disease and coronavirus COVID-19 detection from X-rays. Comput. Methods Programs Biomed. 2020, 196, 105608.
- Ali, S.N.; Ahmed, M.; Paul, J.; Jahan, T.; Sani, S.; Noor, N.; Hasan, T. Monkeypox Skin Lesion Detection Using Deep Learning Models: A Feasibility Study. arXiv 2022, arXiv:2207.03342.
- Ahsan, M.M.; Uddin, M.R.; Farjana, M.; Sakib, A.N.; Momin, K.A.; Luna, S.A. Image Data collection and implementation of deep learning-based model in detecting Monkeypox disease using modified VGG16. arXiv 2022, arXiv:2206.01862.
- Ahsan, M.M.; Uddin, M.R.; Luna, S.A. Monkeypox Image Data collection. arXiv 2022, arXiv:2206.01774.
- Domínguez-Morales, M.; Escobar-Linero, E.; Civit-Masot, J.; Luna-Perejón, F.; Civit, A. MonkeypoxSkin Dataset. 2022. Available online: https://github.com/mjdominguez/MonkeypoxSkinImages (accessed on 2 August 2022).
- Muñoz-Saavedra, L.; Civit-Masot, J.; Luna-Perejón, F.; Domínguez-Morales, M.; Civit, A. Does Two-Class Training Extract Real Features? A COVID-19 Case Study. Appl. Sci. 2021, 11, 1424.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4510–4520.
- Tan, M.; Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. Int. Conf. Mach. Learn. 2019, 97, 6105–6114.
- Talukder, M.S.H.; Sarkar, A.K. Nutrients deficiency diagnosis of rice crop by weighted average ensemble learning. Smart Agric. Technol. 2023, 4, 100155.
- Sokolova, M.; Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 2009, 45, 427–437.
- Hoo, Z.H.; Candlish, J.; Teare, D. What is an ROC curve? Emerg. Med. J. 2017, 34, 6.
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626.
Dataset | Classes | Number of Images | Type of Images |
---|---|---|---|
Ali (2022) [24] | 2: Monkeypox, Others | 102, 126 | Full body, Limbs, Face, Trunk |
Ahsan (2022) [25,26] | 4: Healthy, Monkeypox, Chickenpox, Measles | 54, 43, 47, 17 | Full body, Limbs, Face, Trunk |
MonkeypoxSkin dataset (this work) [27] | 3: Healthy, Monkeypox, Other skin diseases | 100, 100, 100 | Close skin tissue |
Class | Train (Orig.) | Validation (Orig.) | Test (Orig.) | Train (Aug.) | Validation (Aug.) | Test (Aug.)
---|---|---|---|---|---|---
Healthy | 60 | 20 | 20 | 3000 | 20 | 20
Monkeypox | 60 | 20 | 20 | 3000 | 20 | 20
Other skin damages | 60 | 20 | 20 | 3000 | 20 | 20
TOTAL | 180 | 60 | 60 | 9000 | 60 | 60
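The augmentation step that expands each class's 60 training images to 3000 can be sketched as follows; the specific transforms below (flips, 90-degree rotations, brightness jitter) are common illustrative choices, not necessarily the exact operations used:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(img, k):
    """One augmented variant of an RGB image in [0, 1]: optional horizontal
    flip, 90-degree rotation, and mild brightness jitter (illustrative)."""
    out = img[:, ::-1] if k % 2 else img       # horizontal flip on odd k
    out = np.rot90(out, k % 4)                 # 0/90/180/270-degree rotation
    return np.clip(out * rng.uniform(0.9, 1.1), 0.0, 1.0)

# 60 original training images per class -> 3000 augmented samples
# (each original yields 50 variants; small stand-in images for brevity).
originals = [rng.random((32, 32, 3)) for _ in range(60)]
augmented = [augment(img, k) for img in originals for k in range(50)]
```

Note that only the training subset is augmented; the validation and test subsets keep their 20 original images per class so the evaluation is not inflated.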
VGG-16 results per class (values in %):

Class | Accuracy | Specificity | Precision | Sensitivity | F1 |
---|---|---|---|---|---|
Monkeypox | 96.67 | 100 | 100 | 90 | 94.73 |
Healthy | 95 | 95 | 90.47 | 95 | 92.68 |
Other skin damages | 91.67 | 92.5 | 85.71 | 90 | 87.8 |
Global | 91.67 | 95.83 | 91.67 | 91.67 | 91.67 |
VGG-19 results per class (values in %):

Class | Accuracy | Specificity | Precision | Sensitivity | F1 |
---|---|---|---|---|---|
Monkeypox | 96.67 | 97.5 | 95 | 95 | 95 |
Healthy | 95 | 97.5 | 94.73 | 90 | 92.31 |
Other skin damages | 95 | 95 | 90.47 | 95 | 92.68 |
Global | 93.33 | 96.67 | 93.33 | 93.33 | 93.33 |
ResNet50 results per class (values in %):

Class | Accuracy | Specificity | Precision | Sensitivity | F1 |
---|---|---|---|---|---|
Monkeypox | 96.67 | 95 | 90.91 | 100 | 95.24 |
Healthy | 96.67 | 100 | 100 | 90 | 94.74 |
Other skin damages | 96.67 | 97.5 | 95 | 95 | 95 |
Global | 95 | 97.75 | 95 | 95 | 95 |
MobileNet-V2 results per class (values in %):

Class | Accuracy | Specificity | Precision | Sensitivity | F1 |
---|---|---|---|---|---|
Monkeypox | 91.67 | 95 | 89.47 | 85 | 87.18 |
Healthy | 95 | 95 | 90.48 | 95 | 92.68 |
Other skin damages | 90 | 92.5 | 85 | 85 | 85 |
Global | 88.33 | 94.17 | 88.33 | 88.33 | 88.33 |
EfficientNet-B0 results per class (values in %):

Class | Accuracy | Specificity | Precision | Sensitivity | F1 |
---|---|---|---|---|---|
Monkeypox | 96.67 | 97.5 | 95 | 95 | 95 |
Healthy | 91.67 | 90 | 82.61 | 95 | 88.37 |
Other skin damages | 91.67 | 97.5 | 94.12 | 80 | 86.49 |
Global | 90 | 95 | 90 | 90 | 90 |
Ensemble 1 (VGG-16 + VGG-19 + ResNet50), concatenation ensemble, results per class (values in %):

Class | Accuracy | Specificity | Precision | Sensitivity | F1 |
---|---|---|---|---|---|
Monkeypox | 83.33 | 85 | 72.72 | 80 | 76.19 |
Healthy | 91.67 | 95 | 89.47 | 85 | 87.18 |
Other skin damages | 88.33 | 92.5 | 84.21 | 80 | 82.05 |
Global | 81.67 | 90.83 | 81.67 | 81.67 | 81.67 |
Ensemble 1 (VGG-16 + VGG-19 + ResNet50), average ensemble, results per class (values in %):

Class | Accuracy | Specificity | Precision | Sensitivity | F1 |
---|---|---|---|---|---|
Monkeypox | 85 | 82.5 | 72 | 90 | 80 |
Healthy | 95 | 97.5 | 94.74 | 90 | 92.31 |
Other skin damages | 86.67 | 95 | 87.5 | 70 | 77.78 |
Global | 83.3 | 91.67 | 83.33 | 83.33 | 83.33 |
Ensemble 1 (VGG-16 + VGG-19 + ResNet50), weighted ensemble, results per class (values in %):

Class | Accuracy | Specificity | Precision | Sensitivity | F1 |
---|---|---|---|---|---|
Monkeypox | 93.33 | 95 | 90 | 90 | 90 |
Healthy | 96.67 | 100 | 100 | 90 | 94.74 |
Other skin damages | 93.33 | 92.5 | 86.36 | 95 | 90.48 |
Global | 91.67 | 95.83 | 91.67 | 91.67 | 91.67 |
Ensemble 2 (VGG-16 + ResNet50 + EfficientNet-B0), concatenation ensemble, results per class (values in %):

Class | Accuracy | Specificity | Precision | Sensitivity | F1 |
---|---|---|---|---|---|
Monkeypox | 91.67 | 97.5 | 94.12 | 80 | 86.49 |
Healthy | 93.33 | 95 | 90 | 90 | 90 |
Other skin damages | 95 | 92.5 | 89.96 | 100 | 93.02 |
Global | 90 | 95 | 90 | 90 | 90 |
Ensemble 2 (VGG-16 + ResNet50 + EfficientNet-B0), average ensemble, results per class (values in %):

Class | Accuracy | Specificity | Precision | Sensitivity | F1 |
---|---|---|---|---|---|
Monkeypox | 91.67 | 95 | 89.47 | 85 | 87.18 |
Healthy | 91.67 | 95 | 89.47 | 85 | 87.18 |
Other skin damages | 93.33 | 92.5 | 86.36 | 95 | 90.48 |
Global | 88.33 | 94.17 | 88.33 | 88.33 | 88.33 |
Ensemble 2 (VGG-16 + ResNet50 + EfficientNet-B0), weighted ensemble, results per class (values in %):

Class | Accuracy | Specificity | Precision | Sensitivity | F1 |
---|---|---|---|---|---|
Monkeypox | 93.33 | 95 | 90 | 90 | 90 |
Healthy | 96.67 | 100 | 100 | 90 | 94.74 |
Other skin damages | 93.33 | 92.5 | 86.36 | 95 | 90.48 |
Global | 91.67 | 95.83 | 91.67 | 91.67 | 91.67 |
Ensemble 3 (ResNet50 + EfficientNet-B0 + MobileNet-V2), concatenation ensemble, results per class (values in %):

Class | Accuracy | Specificity | Precision | Sensitivity | F1 |
---|---|---|---|---|---|
Monkeypox | 95 | 100 | 100 | 85 | 91.89 |
Healthy | 96.67 | 95 | 90.91 | 100 | 95.24 |
Other skin damages | 95 | 95 | 90.48 | 95 | 92.68 |
Global | 93.33 | 96.67 | 93.33 | 93.33 | 93.33 |
Ensemble 3 (ResNet50 + EfficientNet-B0 + MobileNet-V2), average ensemble, results per class (values in %):

Class | Accuracy | Specificity | Precision | Sensitivity | F1 |
---|---|---|---|---|---|
Monkeypox | 98.33 | 97.5 | 95.24 | 100 | 97.56 |
Healthy | 100 | 100 | 100 | 100 | 100 |
Other skin damages | 98.33 | 100 | 100 | 95 | 97.44 |
Global | 98.33 | 99.17 | 98.33 | 98.33 | 98.33 |
Ensemble 3 (ResNet50 + EfficientNet-B0 + MobileNet-V2), weighted ensemble, results per class (values in %):

Class | Accuracy | Specificity | Precision | Sensitivity | F1 |
---|---|---|---|---|---|
Monkeypox | 98.33 | 97.5 | 95.24 | 100 | 97.56 |
Healthy | 100 | 100 | 100 | 100 | 100 |
Other skin damages | 98.33 | 100 | 100 | 95 | 97.44 |
Global | 98.33 | 99.17 | 98.33 | 98.33 | 98.33 |
Accuracy results (%) of all individual and ensemble classifiers:

Classifier | Concatenated | Simple Avg. | Weighted Avg. | Best |
---|---|---|---|---|
VGG-16 | − | − | − | 91.67 |
VGG-19 | − | − | − | 93.33 |
ResNet50 | − | − | − | 95 |
EfficientNet-B0 | − | − | − | 90 |
MobileNet-V2 | − | − | − | 88.33 |
VGG16 + VGG19 + ResNet | 81.67 | 83.3 | 91.67 | 91.67 |
VGG16 + ResNet + EfficientNet | 90 | 88.33 | 91.67 | 91.67 |
ResNet + EfficientNet + MobileNet | 93.33 | 98.33 | 98.33 | 98.33 |
Work | Dataset | Classes | Classifier | Results (%) |
---|---|---|---|---|
Ali et al. [24] | Own | 2: Monkeypox, Others | VGG-16 ResNet50 Inception-V3 Ensemble | [VGG16] Acc: 81.48, Pre: 85, Sen: 81, F1: 83 [ResNet] Acc: 82.96, Pre: 87, Sen: 83, F1: 84 [Inception] Acc: 74.03, Pre: 74, Sen: 81, F1: 78 [Ensemble] Acc: 79.26, Pre: 84, Sen: 79, F1: 81 |
Ahsan et al. [25] | Own | 2: Monkeypox, Chickenpox 2: Monkeypox, Others | VGG-16 | [Case 1] Acc: 83, Pre: 88, Sen: 83, Spe: 66, F1: 83 [Case 2] Acc: 78, Pre: 75, Sen: 75, Spe: 83, F1: 75 |
This work (2022) | MonkeypoxSkin | 3: Healthy, Monkeypox, Other skin diseases | VGG-16 VGG-19 ResNet50 MobileNet-V2 EfficientNet-B0 Ensemble 1 Ensemble 2 Ensemble 3 | [VGG16] Acc: 91.67, Pre: 92, Sen: 92, Spe: 96, F1: 92 [VGG19] Acc: 93.33, Pre: 93, Sen: 93, Spe: 97, F1: 93 [ResNet] Acc: 95, Pre: 95, Sen: 95, Spe: 97.75, F1: 95 [MobileNet] Acc: 88.33, Pre: 88, Sen: 88, Spe: 94, F1: 88 [EfficientNet] Acc: 90, Pre: 90, Sen: 90, Spe: 95, F1: 90 [Ensemble1] Acc: 91.67, Pre: 92, Sen: 92, Spe: 96, F1: 92 [Ensemble2] Acc: 91.67, Pre: 92, Sen: 92, Spe: 96, F1: 92 [Ensemble3] Acc: 98.33, Pre: 98, Sen: 98, Spe: 99, F1: 98 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Muñoz-Saavedra, L.; Escobar-Linero, E.; Civit-Masot, J.; Luna-Perejón, F.; Civit, A.; Domínguez-Morales, M. A Robust Ensemble of Convolutional Neural Networks for the Detection of Monkeypox Disease from Skin Images. Sensors 2023, 23, 7134. https://doi.org/10.3390/s23167134