Skin Lesion Synthesis and Classification Using an Improved DCGAN Classifier
Abstract
1. Introduction
- Construct and train an improved DCGAN classifier using customized synthetic augmentation techniques, fine-tuning its parameters so that it can accurately classify skin lesions;
- Investigate whether the synthetic images generated by a multi-layered convolutional generator accurately reflect the distribution of the original image dataset, while a multi-layered discriminator perceptron attempts to distinguish fake from real image samples;
- Evaluate the performance of the improved DCGAN Classifier compared with existing state-of-the-art classifiers for skin lesion classification.
2. Related Work
3. Methods
3.1. Skin Cancer Dataset
- Image name: a unique identifier that refers to the filename of the corresponding image;
- Patient ID: a unique identifier assigned to each patient;
- Sex: the patient’s sex, or a blank field if unknown;
- Approximate age: the patient’s approximate age at the time the imaging was conducted;
- Anatomical site: the location of the imaged site on the patient’s body;
- Diagnosis: detailed diagnostic information (only included in the training data);
- Benign/malignant: whether the imaged lesion is benign or malignant;
- Target: a binarized form of the target variable.
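As an illustration only, the metadata fields above can be loaded and split into training and test subsets as sketched below. This Python/pandas snippet is not the authors' code; the CSV file name and column labels follow the ISIC challenge convention and are assumptions.

```python
# Illustrative only: load the metadata fields listed above and create a
# 70/30 split. The CSV file name and column labels are assumptions that
# follow the ISIC challenge convention, not the authors' code.
import pandas as pd
from sklearn.model_selection import train_test_split

meta = pd.read_csv("ISIC_training_metadata.csv")  # hypothetical file name

cols = ["image_name", "patient_id", "sex", "age_approx",
        "anatom_site_general_challenge", "diagnosis",
        "benign_malignant", "target"]
meta = meta[[c for c in cols if c in meta.columns]]

# 70% training / 30% testing, stratified on the binarized target
train_df, test_df = train_test_split(
    meta, test_size=0.30, stratify=meta["target"], random_state=42)
print(len(train_df), "training images,", len(test_df), "test images")
```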
3.2. Proposed Framework of the DCGAN-Based Classifier
3.3. Image Preprocessing Techniques
3.4. DCGAN Architecture
- The generator uses five deconvolutional (transposed convolution) layers instead of four;
- Deterministic spatial pooling layers such as global average pooling are replaced with 2 × 2 fractionally strided convolutions in the generator, allowing the network to learn its own spatial upsampling;
- Fully connected hidden layers are eliminated to avoid model instability and to stabilize convergence;
- The generator weights are updated using backpropagation and the SGDM optimizer with a constant learning rate of 0.01 instead of 0.0002;
- Batch normalization is used to stabilize the learning of the generator;
- All generator layers use the ReLU activation except the output layer, which employs tanh to scale the output to the range [−1, 1].
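For illustration, a minimal PyTorch sketch of a generator following these guidelines is given below. The paper's implementation appears to be in MATLAB; the 64 × 64 output resolution, kernel sizes, and channel widths here are assumptions rather than the published configuration.

```python
# Illustrative PyTorch sketch of a five-layer DCGAN generator in the spirit of
# the list above (not the authors' exact network; layer widths, kernel sizes,
# and the 64x64 output resolution are assumptions).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=100, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # 1: latent vector -> 4x4 feature maps (fractionally strided conv)
            nn.ConvTranspose2d(latent_dim, 512, 4, 1, 0, bias=False),
            nn.BatchNorm2d(512), nn.ReLU(True),
            # 2: 4x4 -> 8x8
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),
            nn.BatchNorm2d(256), nn.ReLU(True),
            # 3: 8x8 -> 16x16
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.ReLU(True),
            # 4: 16x16 -> 32x32
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(True),
            # 5: 32x32 -> 64x64; tanh scales the output to [-1, 1]
            nn.ConvTranspose2d(64, channels, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):                      # z: (batch, latent_dim, 1, 1)
        return self.net(z)
```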
- The discriminator uses five convolutional layers instead of four;
- Deterministic spatial pooling layers such as max pooling are replaced with 2 × 2 strided convolutions in the discriminator, allowing the network to learn its own spatial downsampling;
- Fully connected hidden layers are eliminated to avoid model instability and to stabilize convergence;
- The discriminator weights are updated using backpropagation and an optimization step;
- Batch normalization is used to stabilize the learning of the discriminator;
- The LeakyReLU activation is used in all discriminator layers except the output layer, so that gradients can still flow backwards through negative activations;
- The final layer acts as the classifier and uses the softmax activation for classification.
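A companion sketch of the discriminator/classifier is shown below under the same assumptions (input resolution, kernel sizes, and channel widths are illustrative). The softmax described above is folded into the cross-entropy loss during training and applied explicitly only at inference.

```python
# Companion sketch of the five-layer discriminator/classifier: strided
# convolutions for downsampling, LeakyReLU throughout, and N + 1 outputs
# (N lesion classes plus one "generated/fake" class). Layer widths and the
# 64x64 input size are assumptions, not the authors' exact configuration.
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, num_classes, channels=3):
        super().__init__()
        self.features = nn.Sequential(
            # 1: 64x64 -> 32x32 (strided conv replaces pooling)
            nn.Conv2d(channels, 64, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            # 2: 32x32 -> 16x16
            nn.Conv2d(64, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),
            # 3: 16x16 -> 8x8
            nn.Conv2d(128, 256, 4, 2, 1, bias=False),
            nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True),
            # 4: 8x8 -> 4x4
            nn.Conv2d(256, 512, 4, 2, 1, bias=False),
            nn.BatchNorm2d(512), nn.LeakyReLU(0.2, inplace=True),
            # 5: 4x4 -> 1x1, one logit per lesion class plus the "fake" class
            nn.Conv2d(512, num_classes + 1, 4, 1, 0, bias=False),
        )

    def forward(self, x):
        # Returns logits of shape (batch, N + 1); the softmax named above is
        # applied inside the cross-entropy loss during training, or via
        # torch.softmax at inference time.
        return self.features(x).flatten(1)
```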
3.4.1. Model Training and Classification
Algorithm 1 Training the Classification Model Based on the DCGAN-Based Classifier

Input: ISIC2017_Training_Data (S17)
1. Load the dataset S17.
2. Split the dataset: 70% training and 30% testing.
3. Preprocess S17: interp2(), histeq(), imsharpen(), imfilter(), rgb2lab(), gaussian_median_filter().
4. Initialize the networks: generator G(latent_noise) and discriminator D().
5. Create optimizers that update the weights by backpropagation: sgdmupdate() with the chosen learning rate.
6. Train G on latent noise and D on real and G-generated images for a number of epochs.
7. Use ReLU and tanh activations for G; LeakyReLU and softmax for D.
8. Calculate the loss functions and repeat steps 5–7.
9. D acts as the classifier with N + 1 outputs.
Output:
1. N + 1 class outputs.
2. Confusion matrix of the classification; AUC-ROC plots.
3. Return Accuracy, Recall, Precision, Specificity, and F1-Score.
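To make the flow of Algorithm 1 concrete, the following is a compressed, illustrative sketch of the adversarial training loop using the Generator and Discriminator classes sketched in Section 3.4. The paper trains with MATLAB's sgdmupdate(); SGD with momentum at a learning rate of 0.01 is used here as the closest analogue, and the (N + 1)-class labeling scheme and the data loader are assumptions, not the authors' exact procedure.

```python
# Compressed sketch of the training loop in Algorithm 1 (steps 4-9) using the
# Generator / Discriminator classes sketched above. SGD with momentum stands
# in for MATLAB's sgdmupdate(); real classes use indices 0..N-1 and index N
# marks generated ("fake") images. These details are assumptions.
import torch
import torch.nn as nn

def train_dcgan_classifier(G, D, loader, num_classes, latent_dim=100,
                           epochs=50, device="cpu"):
    G, D = G.to(device), D.to(device)
    opt_g = torch.optim.SGD(G.parameters(), lr=0.01, momentum=0.9)
    opt_d = torch.optim.SGD(D.parameters(), lr=0.01, momentum=0.9)
    ce = nn.CrossEntropyLoss()                 # softmax folded into the loss
    fake_label = num_classes                   # the extra (N + 1)-th class

    for _ in range(epochs):
        for images, labels in loader:          # labels in 0..N-1 (real classes)
            images, labels = images.to(device), labels.to(device)
            b = images.size(0)
            z = torch.randn(b, latent_dim, 1, 1, device=device)
            fakes = G(z)

            # Discriminator step: real images keep their lesion class,
            # generated images are assigned the extra "fake" class.
            opt_d.zero_grad()
            loss_d = ce(D(images), labels) + \
                     ce(D(fakes.detach()),
                        torch.full((b,), fake_label, dtype=torch.long,
                                   device=device))
            loss_d.backward()
            opt_d.step()

            # Generator step: push generated images toward the real classes.
            opt_g.zero_grad()
            loss_g = ce(D(fakes), labels)
            loss_g.backward()
            opt_g.step()
    return G, D
```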
4. Experimental Results and Discussion
4.1. Performance Metrics
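For reference, the metrics reported in the tables below follow the standard confusion-matrix definitions, where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives:

$$
\mathrm{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN},\qquad
\mathrm{Recall}=\frac{TP}{TP+FN},\qquad
\mathrm{Precision}=\frac{TP}{TP+FP},
$$
$$
\mathrm{Specificity}=\frac{TN}{TN+FP},\qquad
\mathrm{F1\text{-}Score}=\frac{2\cdot\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}}.
$$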
4.2. Results of Image Preprocessing Techniques
4.2.1. Image Scaling
4.2.2. Histogram Equalization
4.2.3. Unsharp Masking and Gaussian High-Pass Filtering
4.2.4. Color Space Transformation
4.2.5. Median Filter
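An illustrative Python equivalent of the preprocessing chain evaluated in Sections 4.2.1–4.2.5 is sketched below. The paper uses MATLAB functions such as interp2(), histeq(), imsharpen(), imfilter(), and rgb2lab(); the scikit-image functions, filter sizes, sigma values, and target resolution here are assumptions, not the published settings.

```python
# Illustrative Python equivalent of the preprocessing steps named in
# Sections 4.2.1-4.2.5. Parameter values and the 64x64 target resolution
# are assumptions, not the authors' published settings.
import numpy as np
from skimage import color, exposure, filters, transform
from scipy.ndimage import median_filter

def preprocess(rgb, out_size=(64, 64)):
    """rgb: float RGB image in [0, 1] with shape (H, W, 3)."""
    # 4.2.1 Image scaling with bicubic interpolation (order=3)
    img = transform.resize(rgb, out_size, order=3, anti_aliasing=True)
    # 4.2.2 Histogram equalization to spread the intensity distribution
    img = exposure.equalize_hist(img)
    # 4.2.3 Unsharp masking followed by a Gaussian high-pass emphasis
    img = filters.unsharp_mask(img, radius=2, amount=1.0, channel_axis=-1)
    img = np.clip(img + (img - filters.gaussian(img, sigma=3,
                                                channel_axis=-1)), 0, 1)
    # 4.2.4 Colour space transformation from RGB to CIELAB
    lab = color.rgb2lab(img)
    # 4.2.5 Median filtering of each channel to suppress residual noise
    lab = np.stack([median_filter(lab[..., c], size=3) for c in range(3)],
                   axis=-1)
    return lab
```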
4.3. Results of Improved DCGAN-Based Classifier
4.4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Hoffman, M. Picture of the Skin: Human Anatomy. 2014. Available online: https://www.webmd.com/skin-problems-and-treatments/picture-of-the-skin (accessed on 22 April 2023).
- Stöppler, M.C. Medical Definition of Skin. 2021. Available online: https://www.medicinenet.com/skin/definition.htm (accessed on 22 April 2023).
- Anonymous. Skin Cancer-Index 2018. 2018. Available online: https://derma.plus/en/skin-cancer-index-2018/ (accessed on 22 April 2023).
- Amarathunga, A.; Ellawala, E.P.W.C.; Abeysekara, G.; Amalraj, C.R.J. Expert system for diagnosis of skin diseases. Int. J. Sci. Technol. Res. 2015, 4, 174–178. [Google Scholar]
- Ambad, P.S.; Shirsat, A.S. An image analysis system to detect skin diseases. IOSR J. VLSI Signal Process. 2016, 6, 17–25. [Google Scholar] [CrossRef]
- ALEnezi, N.S.A. A method of skin disease detection using image processing and machine learning. Procedia Comput. Sci. 2019, 163, 85–92. [Google Scholar] [CrossRef]
- Wu, H.; Yin, H.; Chen, H.; Sun, M.; Liu, X.; Yu, Y.; Tang, Y.; Long, H.; Zhang, B.; Zhang, J.; et al. A deep learning, image-based approach for automated diagnosis for inflammatory skin diseases. Ann. Transl. Med. 2020, 8, 581. [Google Scholar] [CrossRef] [PubMed]
- Liu, J.; Wang, W.; Chen, J.; Sun, G.; Yang, A. Classification and research of skin lesions based on machine learning. Comput. Mater. Contin. 2020, 62, 1187–1200. [Google Scholar] [CrossRef]
- Yan, Y.; Kawahara, J.; Hamarneh, G. Melanoma recognition via visual attention. In Information Processing in Medical Imaging, Proceedings of the 26th International Conference, IPMI 2019, Hong Kong, China, 2–7 June 2019; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2019; pp. 793–804. [Google Scholar]
- Duan, M.; Li, K.; Liao, X.; Li, K. A parallel multiclassification algorithm for big data using an extreme learning machine. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 2337–2351. [Google Scholar] [CrossRef] [PubMed]
- Dildar, M.; Akram, S.; Irfan, M.; Khan, H.U.; Ramzan, M.; Mahmood, A.R.; Alsaiari, S.A.; Saeed, A.H.M.; Alraddadi, M.O.; Mahnashi, M.H. Skin Cancer Detection: A Review Using Deep Learning Techniques. Int. J. Environ. Res. Public Health 2021, 18, 5479. [Google Scholar] [CrossRef]
- Wu, Y.; Chen, B.; Zeng, A.; Pan, D.; Wang, R.; Zhao, S. Skin Cancer Classification with Deep Learning: A Systematic Review. Front. Oncol. 2022, 12, 893972. [Google Scholar] [CrossRef]
- Ali, A.R.; Li, J.; Yang, G.; O’Shea, S.J. A Machine Learning Approach to Automatic Detection of Irregularity in Skin Lesion Border Using Dermoscopic Images. PeerJ Comput. Sci. 2020, 6, e268. [Google Scholar] [CrossRef]
- Kaur, R.; GholamHosseini, H.; Sinha, R. Synthetic Images Generation Using Conditional Generative Adversarial Network for Skin Cancer Classification. In Proceedings of the TENCON 2021–2021 IEEE Region 10 Conference (TENCON), Auckland, New Zealand, 7–10 December 2021; Institute of Electrical and Electronics Engineers (IEEE): Manhattan, NY, USA, 2021; pp. 381–386. [Google Scholar]
- ASRT. The ASRT Practice Standards for Medical Imaging and Radiation Therapy; American Society of Radiologic Technologists: Albuquerque, NM, USA, 2019. [Google Scholar]
- ISO 12052:2017; Digital Imaging and Communication in Medicine (DICOM), Including Workflow and Data Management. Health Informatics. ISO: Geneva, Switzerland, 2017. Available online: https://www.iso.org/obp/ui/#iso:std:iso:12052:ed-2v1:en (accessed on 6 April 2023).
- Cassidy, B.; Kendrick, C.; Brodzicki, A.; Jaworek-Korjakowska, J.; Yap, M.H. Analysis of the ISIC image datasets: Usage, benchmarks, and recommendations. Med. Image Anal. 2022, 75, 102305. [Google Scholar] [CrossRef]
- Sun, X.; Yang, J.; Sun, M.; Wang, K. A benchmark for automatic visual classification of clinical skin disease images. In Computer Vision—ECCV 2016; Springer International Publishing: New York, NY, USA, 2016. [Google Scholar]
- Srinivasu, P.N.; SivaSai, J.G.; Ijaz, M.F.; Bhoi, A.K.; Kim, W.; Kang, J.J. Classification of skin disease using deep learning neural networks with MobileNet V2 and LSTM. Sensors 2021, 21, 2852. [Google Scholar] [CrossRef]
- Vulli, A.; Srinivasu, P.N.; Sashank, M.S.; Shafi, J.; Choi, J.; Ijaz, M.F. Fine-tuned DenseNet-169 for breast cancer metastasis prediction using FastAI and 1-cycle policy. Sensors 2022, 22, 2988. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef] [Green Version]
- Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118. [Google Scholar] [CrossRef] [PubMed]
- WHO Regional Office for Africa. Handbook for Cancer Research in Africa; WHO/AFRO: Brazzaville, Republic of Congo, 2013; p. 147. Available online: https://apps.who.int/iris/handle/10665/100065 (accessed on 22 April 2023).
- Ibbott, G.S.; Van Dyk, J. Quality Assurance for Treatment Planning (IEC 62083 and IAEA Report); RPC: Toronto, ON, Canada, 2017. [Google Scholar]
- Romero-Lopez, A.; Giro, X.; Burdick, J.; Marques, O. Skin lesion classification from dermoscopic images using deep learning techniques. In Proceedings of the 2017 13th IASTED International Conference on Biomedical Engineering (BioMed), Innsbruck, Austria, 20–21 February 2017; ACTA Press: Innsbruck, Austria, 2017; pp. 1–6. [Google Scholar] [CrossRef] [Green Version]
- Krawczyk, B. Learning from imbalanced data: Open challenges and future directions. Prog. Artif. Intell. 2016, 5, 221–232. [Google Scholar] [CrossRef] [Green Version]
- Rice, L.; Wong, E.; Kolter, J.Z. Overfitting in adversarially robust deep learning. In Proceedings of the 37th International Conference on Machine Learning, Virtual Event, 13–18 July 2020; Volume 119, pp. 8093–8104. [Google Scholar]
- Stutz, D.; Hein, M.; Schiele, B. Disentangling Adversarial Robustness and Generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 6969–6980. [Google Scholar] [CrossRef] [Green Version]
- Bissoto, A.; Perez, F.; Valle, E.; Avila, S. Skin Lesion Synthesis with Generative Adversarial Networks. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Granada, Spain, 16–20 September 2018. [Google Scholar] [CrossRef] [Green Version]
- Goodfellow, I.J.; Mirza, M.; Xu, B.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. Adv. Neural Inf. Process. Syst. 2014, 2, 2672–2680. [Google Scholar] [CrossRef]
- Iqbal, T.; Ali, H. Generative adversarial network for medical images (MI-GAN). J. Med. Syst. 2018, 42, 231. [Google Scholar] [CrossRef] [Green Version]
- Bi, L.; Feng, D.D.; Fulham, M.; Kim, J. Multi-Label classification of multi-modality skin lesion via hyper-connected convolutional neural network. Pattern Recognit. 2020, 107, 107502. [Google Scholar] [CrossRef]
- Mahbod, A.; Schaefer, G.; Wang, C.; Ecker, R.; Ellinge, I. Skin Lesion Classification Using Hybrid Deep Neural Networks. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 1229–1233. [Google Scholar] [CrossRef] [Green Version]
- Ayan, E.; Ünver, H.M. Data augmentation importance for classification of skin lesions via deep learning. In Proceedings of the 2018 Electric Electronics, Computer Science, Biomedical Engineerings’ Meeting (EBBT), Istanbul, Turkey, 18–19 April 2018; pp. 1–4. [Google Scholar] [CrossRef]
- Zhang, J.; Xie, Y.; Wu, Q.; Xia, Y. Skin Lesion Classification in Dermoscopy Images Using Synergic Deep Learning. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; Volume 11071. [Google Scholar] [CrossRef]
- Motamed, S.; Rogalla, P.; Khalvati, F. Data augmentation using Generative Adversarial Networks (GANs) for GAN-based detection of Pneumonia and COVID-19 in chest X-ray images. Inform. Med. Unlocked 2021, 27, 100779. [Google Scholar] [CrossRef]
- Diamant, I.; Klang, E.; Amitai, M.; Goldberger, J.; Greenspan, H. GAN-based Synthetic Medical Image Augmentation for increased CNN Performance in Liver Lesion Classification. Neurocomputing 2018, 321, 321–331. [Google Scholar] [CrossRef] [Green Version]
- Wu, M.; Wang, S.; Pan, S.; Terentis, A.C.; Strasswimmer, J.; Zhu, X. Deep learning data augmentation for Raman spectroscopy cancer tissue classification. Sci. Rep. 2021, 11, 23842. [Google Scholar] [CrossRef]
- Mumuni, A.; Mumuni, F. Data augmentation: A comprehensive survey of modern approaches. Array 2022, 16, 100258. [Google Scholar] [CrossRef]
- Razghandi, M.; Zhou, H.; Turgut, D. Variational Autoencoder Generative Adversarial Network for Synthetic Data Generation in Smart Home. In Proceedings of the ICC 2022—IEEE International Conference on Communications, Seoul, Republic of Korea, 16–20 May 2022. [Google Scholar] [CrossRef]
- Yi, X.; Walia, E.; Babyn, P. Generative adversarial network in medical imaging: A review. Med. Image Anal. 2019, 58, 101552. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Kazeminia, S.; Baur, C.; Kuijper, A.; van Ginneken, B.; Navab, N.; Albarqouni, S.; Mukhopadhyay, A. GANs for medical image analysis. Artif. Intell. Med. 2020, 109, 101938. [Google Scholar] [CrossRef]
- Sampath, V.; Maurtua, I.; Aguilar Martín, J.J.; Gutierrez, A. A survey on generative adversarial networks for imbalance problems in computer vision tasks. J. Big Data 2021, 8, 27. [Google Scholar] [CrossRef]
- Bissoto, A.; Avila, S. Improving Skin Lesion Analysis with Generative Adversarial Networks. In Proceedings of the Anais Estendidos do XXXIII Conference on Graphics, Patterns and Images, Virtual Conference, 7–10 November 2020; pp. 70–76. [Google Scholar]
- Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. In Proceedings of the International Conference on Learning Representations, San Juan, PR, USA, 2–4 May 2016. [Google Scholar] [CrossRef]
- Gurumurthy, S.; Sarvadevabhatla, R.K.; Babu, R.V. DeLiGAN: Generative adversarial networks for diverse and limited data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 166–174. [Google Scholar] [CrossRef]
- Ma, Y.; Zhong, G.; Wang, Y.; Liu, W. MetaCGAN: A Novel GAN Model for Generating High Quality and Diversity Images with Few Training Data. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–7. [Google Scholar] [CrossRef]
- Van Den Oord, A.; Kalchbrenner, N.; Kavukcuoglu, K. Pixel recurrent neural networks. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; Volume 4, pp. 2611–2620. [Google Scholar] [CrossRef]
- Van Den Oord, A.; Kalchbrenner, N.; Vinyals, O.; Espeholt, L.; Graves, A.; Kavukcuoglu, K. Conditional image generation with PixelCNN decoders. In Proceedings of the 30th International Conference On Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016. [Google Scholar] [CrossRef]
- Yi, X.; Walia, E.; Babyn, P. Unsupervised and semi-supervised learning with Categorical Generative Adversarial Networks assisted by Wasserstein distance for dermoscopy image Classification. arXiv 2018, arXiv:1804.03700. [Google Scholar] [CrossRef]
- Springenberg, J.T. Unsupervised and semi-supervised learning with categorical generative adversarial networks. In Proceedings of the International Conference on Learning Representations, San Juan, PR, USA, 2–4 May 2016. [Google Scholar] [CrossRef]
- Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A. Improved training of Wasserstein GANs. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS’17), Long Beach, CA, USA, 4–9 December 2017; Curran Associates Inc.: Red Hook, NY, USA, 2017; pp. 5769–5779. [Google Scholar] [CrossRef]
- Gutman, D.; Codella, N.C.; Celebi, E.; Helba, B.; Marchetti, M.; Mishra, N.; Halpern, A. Skin Lesion Analysis Toward Melanoma Detection: A Challenge at The International Symposium on Biomedical Imaging (ISBI) 2016, Hosted by The International Skin Imaging Collaboration (ISIC). In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 168–172. [Google Scholar] [CrossRef] [Green Version]
- Liu, Y.; Chen, A.; Shi, H.; Huang, S.; Zheng, W.; Liu, Z.; Zhang, Q.; Yang, X. CT Synthesis from MRI Using Multi-Cycle GAN For Head-And-Neck Radiation Therapy. Comput. Med. Imaging Graph. 2021, 91, 101953. [Google Scholar] [CrossRef]
- Baur, C.; Albarqouni, S.; Navab, N. MelanoGANs: High-resolution skin lesion synthesis with GANs. In Proceedings of the 1st Conference on Medical Imaging with Deep Learning (MIDL2018), Amsterdam, The Netherlands, 4–6 July 2018. [Google Scholar] [CrossRef]
- Yan, S.; Liu, Y.; Li, J.; Xiao, H. DDGAN: Double Discriminators GAN for Accurate Image Colorization. In Proceedings of the 2020 6th International Conference on Big Data and Information Analytics (BigDIA), Shenzhen, China, 4–6 December 2020; pp. 214–219. [Google Scholar] [CrossRef]
- Denton, E.L.; Chintala, S.; Szalm, A.; Fergus, R. Deep generative image models using a Laplacian pyramid of adversarial networks. Adv. Neural Inf. Process. Syst. 2015, 28, 1486–1494. [Google Scholar]
- Fossen-Romsaas, S.; Storm-Johannessen, A.; Lundervold, A.S. Synthesizing skin Lesion images using CycleGANs—A case Study. In Proceedings of the NIK-2020 Conference, Online, 24–25 November 2020. [Google Scholar]
- Karras, T.; Aila, T.; Laine, S.; Lehtinen, J. Progressive Growing of GANs for Improved Quality, Stability, And Variation. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018), Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
- Baur, C.; Albarqouni, S.; Navab, N. Generating Highly Realistic Images of Skin Lesions with GANs. In OR 2.0 Context-Aware Operating Theaters, Computer-Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis; CARE CLIP OR 2.0 ISIC 2018; Springer: Cham, Switzerland, 2018; Volume 11041, pp. 260–267. [Google Scholar] [CrossRef] [Green Version]
- Jiang, M.; Zhi, M.; Wei, L.; Yang, X.; Zhang, J.; Li, Y.; Wang, P.; Huang, J.; Yang, G. FA-GAN: Fused Attentive Generative Adversarial Networks for MRI Image Super-Resolution. Comput. Med. Imaging Graph. 2021, 92, 101969. [Google Scholar] [CrossRef]
- Wang, T.; Liu, M.; Zhu, J.; Tao, A.; Kautz, J.; Catanzaro, B. High-resolution image synthesis and semantic manipulation with conditional GANs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
- Beynek, B.; Bora, Ş.; Evren, V.; Ugur, A. Synthetic Skin Cancer Image Data Generation Using Generative Adversarial Neural Network. Int. J. Multidiscip. Stud. Innov. Technol. 2021, 5, 147–150. [Google Scholar]
- Pang, T.; Wong, J.H.D.; Ng, W.L.; Chan, C.S. Semi-supervised GAN-based Radiomics Model for Data Augmentation in Breast Ultrasound Mass Classification. Comput. Methods Programs Biomed. 2021, 203, 106018. [Google Scholar] [CrossRef]
- Shahsavari, A.; Ranjbari, S.; Khatibi, T. Proposing a novel Cascade Ensemble Super-Resolution Generative Adversarial Network (CESR-GAN) method for the reconstruction of super-resolution skin lesion images. Inform. Med. Unlocked 2021, 24, 100628. [Google Scholar] [CrossRef]
- Rashid, H.; Tanveer, M.A.; Khan, H.A. Skin lesion classification using GAN-based data augmentation. In Proceedings of the 41st Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Berlin, Germany, 23–27 July 2019. [Google Scholar]
- Adhikari, A. Skin Cancer Detection Using Generative Adversarial Network and an Ensemble of Deep Convolutional Neural Networks. Master’s Thesis, The University of Toledo, Toledo, OH, USA, 2019. [Google Scholar]
- Tschandl, P.; Rosendahl, C.; Kittler, H. The HAM10000 dataset: A large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 2018, 5, 180161. [Google Scholar] [CrossRef]
- Huang, G.; Liu, Z.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
- Mutepfe, F.; Kalejahi, B.K.; Meshgini, S.; Danishvar, S. Generative Adversarial Network Image Synthesis Method for Skin Lesion Generation and Classification. J. Med. Signals Sens. 2021, 11, 237–252. [Google Scholar] [CrossRef]
- Qin, Z.; Liu, Z.; Zhu, P.; Xue, Y. A GAN-based image synthesis method for skin lesion classification. Comput. Methods Programs Biomed. 2020, 195, 105568. [Google Scholar] [CrossRef]
- Heenaye-Mamode Khan, M.; Gooda Sahib-Kaudeer, N.; Dayalen, M.; Mahomedaly, F.; Sinha, G.R.; Nagwanshi, K.K.; Taylor, A. Multiclass Skin Problem Classification Using Deep Generative Adversarial Network (DGAN). Comput. Intell. Neurosci. 2022, 2022, 1797471. [Google Scholar] [CrossRef]
- Zhao, C.; Shuai, R.; Ma, L.; Liu, W.; Hu, D.; Wu, M. Dermoscopy Image Classification Based on StyleGAN and DenseNet201. IEEE Access 2021, 9, 8659–8679. [Google Scholar] [CrossRef]
- Wei, L.S.; Gan, Q.; Ji, T. Skin Disease Recognition Method Based on Image Color and Texture Features. Comput. Math Methods Med. 2018, 2018, 8145713. [Google Scholar] [CrossRef] [Green Version]
- Devaraj, S.J. Emerging Paradigms in Transform-Based Medical Image Compression for Telemedicine Environment. In Telemedicine Technologies; Academic Press: Cambridge, MA, USA, 2019; pp. 15–29. [Google Scholar] [CrossRef]
- Zhu, Y.; Dai, Y.; Han, K.; Wang, J.; Hu, J. An efficient bicubic interpolation implementation for real-time image processing using hybrid computing. J. Real-Time Image Proc. 2022, 19, 1211–1223. [Google Scholar] [CrossRef]
- Rajarapollu, P.R.; Mankar, V.R. Bicubic Interpolation Algorithm Implementation for Image Appearance Enhancement. Int. J. Comput. Sci. Technol. 2017, 8, 23–26. [Google Scholar]
- Nuno-Maganda, M.A.; Arias-Estrada, M.O. Real-time FPGA-based architecture for bicubic interpolation: An application for digital image scaling. In Proceedings of the 2005 International Conference on Reconfigurable Computing and FPGAs (ReConFig’05), Puebla, Mexico, 28–30 September 2005; p. 8. [Google Scholar] [CrossRef]
- Triwijoyo, B.; Adil, A. Analysis of Medical Image Resizing Using Bicubic Interpolation Algorithm. J. Ilmu Komput. 2021, 14, 20–29. [Google Scholar] [CrossRef]
- Yuan, S.; Abe, M.; Taguchi, A.; Kawamata, M. High Accuracy Bicubic Interpolation Using Image Local Features. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2007, E90-A, 1611–1615. [Google Scholar] [CrossRef]
- Xie, Y.; Ning, L.; Wang, M.; Li, C. Image Enhancement Based on Histogram Equalization. J. Phys. Conf. Ser. 2019, 1314, 012161. [Google Scholar] [CrossRef] [Green Version]
- Gaddam, P.C.S.K.; Sunkara, P. Advanced Image Processing Using Histogram Equalization and Android Application Implementation. Master’s Thesis, Blekinge Institute of Technology, Karlskrona, Sweden, 2016. [Google Scholar]
- Atta, M.; Ahmed, O.; Rashed, A.; Ahmed, M. Image Enhancement for Performance Improvement: Mathematics, Machine Learning and Deep Learning Solutions. Adv. Image Enhanc. 2021, 1–14. [Google Scholar]
- Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Pearson International Edition; Prentice Hall Press: Boca Raton, FL, USA, 2008; pp. 184–186. [Google Scholar]
- Wubuli, A.; Zhen-Hong, J.; Xi-Zhong, Q.; Jie, Y.; Kasabov, N. Medical image enhancement based on shearlet transform and unsharp masking. J. Med. Imaging Health Inform. 2014, 4, 814–818. [Google Scholar] [CrossRef]
- Reza, A.M. Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement. J. VLSI Signal Process. Syst. Signal Image Video Technol. 2004, 38, 35–44. [Google Scholar] [CrossRef]
- Polesel, A.; Ramponi, G.; Mathews, V.J. Image enhancement via adaptive unsharp masking. IEEE Trans. Image Process. 2000, 9, 505–510. [Google Scholar] [CrossRef] [Green Version]
- Munadi, K.; Muchtar, K.; Maulina, N.; Pradhan, B. Image Enhancement for Tuberculosis Detection Using Deep Learning. IEEE Access 2020, 8, 217897–217907. [Google Scholar] [CrossRef]
- Agaian, S.S.; Panetta, K.; Grigoryan, A.M. Transform-based image enhancement algorithms with performance measure. IEEE Trans. Image Process. 2001, 10, 367–382. [Google Scholar] [CrossRef] [Green Version]
- Nevils, B.; Mimbs, T.; Sailesh, A.; Naheed, N. High Frequency Emphasis Filter Instead of Homomorphic Filter. In Proceedings of the 2018 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 12–14 December 2018; pp. 482–484. [Google Scholar] [CrossRef]
- Santhosh, B.; Rishikesan, J.; Sundar, K.; Kalaiyarasi, M. Filters in Medical Image Processing. Suraj Punj. J. Multidiscip. Res. 2021, 11, 135–140. [Google Scholar]
- Rodríguez-Rodríguez, J.A.; Molina-Cabello, M.A.; Benítez-Rochel, R.; López-Rubio, E. The effect of image enhancement algorithms on convolutional neural networks. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 3084–3089. [Google Scholar] [CrossRef]
- Hasan, M.K.; Dahal, L.; Samarakoon, P.N.; Tushar, F.I.; Martí, R. DSNet: Automatic dermoscopic skin lesion segmentation. Comput. Biol. Med. 2020, 120, 103738. [Google Scholar] [CrossRef] [Green Version]
- Hoshyar, A.N.; Al-Jumaily, A.; Hoshyar, A.N. The Beneficial Techniques in Preprocessing Step of Skin Cancer Detection System Comparing. Procedia Comput. Sci. 2014, 42, 25–31. [Google Scholar] [CrossRef] [Green Version]
- International Color Consortium. Specification ICC.1: 2004-10 (Profile Version 4.2.0.0) Image Technology Colour Management—Architecture, Profile Format, and Data Structure, International Color Consortium, 2006, Revised 2019. Available online: https://www.color.org/icc_specs2.xalter (accessed on 23 May 2023).
- Al-saleem, R.M.; Al-Hilali, B.M.; Abboud, I.K. Mathematical Representation of Color Spaces and Its Role in Communication Systems. J. Appl. Math. 2020, 2020, 4640175. [Google Scholar] [CrossRef]
- Ruslau, M.F.V.; Pratama, R.A.; Nurhayati; Asmal, S. Edge detection in noisy images with different edge types. IOP Conf. Ser. Earth Environ. Sci. 2019, 343, 012198. [Google Scholar] [CrossRef]
- Church, J.C.; Chen, Y.; Stephen, V.; Rice, A. Spatial Median Filter for Noise Removal in Digital Images; Rice Department of Computer and Information Science, University of Mississippi: Oxford, MS, USA, 2009; pp. 618–623. [Google Scholar]
- Rajlaxmi Chouhan, C.; Pradeep, K.C.; Kumar, R. Contrast enhancement of dark images using stochastic resonance in the wavelet domain. Int. J. Mach. Learn. Comput. 2012, 2, 671–676. [Google Scholar]
- Janani, P.; Premaladha, J.; Ravichandran, K.S. Image Enhancement Techniques: A Study. Indian J. Sci. Technol. 2015, 8, 1–12. [Google Scholar] [CrossRef] [Green Version]
- Yadav, S.S.; Jadhav, S.M. Deep convolutional neural network based medical image classification for disease diagnosis. J. Big Data 2019, 6, 113. [Google Scholar] [CrossRef] [Green Version]
- Wang, J.; Yang, Y.; Wang, T.; Sherratt, R.S.; Zhang, J. Big data service architecture: A survey. J. Internet Technol. 2020, 21, 393–405. [Google Scholar]
- Liu, B.; Lv, J.; Fan, X.; Luo, J.; Zou, T. Application of an Improved DCGAN for Image Generation. Mob. Inf. Syst. 2022, 2022, 9005552. [Google Scholar] [CrossRef]
- Zhong, G.; Gao, W.; Liu, Y.; Yang, Y. Generative Adversarial Networks with Decoder-Encoder Output Noise. Neural Netw. 2020, 127, 19–28. [Google Scholar] [CrossRef]
- Nilsson, J. Understanding SSIM. arXiv 2020, arXiv:2006.13846. [Google Scholar] [CrossRef]
- Behara, K.; Bhero, E.; Agee, J.T.; Gonela, V. Artificial Intelligence in Medical Diagnostics: A Review from a South African Context. Sci. Afr. 2022, 17, e01360. [Google Scholar] [CrossRef]
- Ghassemi, N.; Shoeibi, A.; Rouhani, M. Deep neural network with generative adversarial networks training for brain tumour classification based on MR images. Biomed. Signal Process. Control 2020, 57, 101678. [Google Scholar] [CrossRef]
| Authors | Techniques | Dataset | Observations | Accuracy (%) |
|---|---|---|---|---|
| [23] | Pix2Pix GAN | ISIC 2017 | The image-to-image translation was done via binary classification using a combination of semantic and instance mappings. | 84.7 |
| [38] | GAN with Raman spectroscopy | Raman spectroscopy | The authors created a data augmentation module that uses a GAN to generate RS data comparable to the training data classes. | 92 |
| [50] | cGAN and WGAN | ISIC 2016 | The authors proposed a categorical generative adversarial network, both unsupervised and semi-supervised, to automatically learn the feature representation of dermoscopy images. | 81 |
| [55] | DDGAN | ISIC 2017 | High-resolution skin lesion synthesis was demonstrated; however, the synthetic images were visually low in contrast. | 72 |
| [58] | ACGAN, CycleGAN and Path-Rank-Filter | ISIC 2019 | The research showed that random noise and image translation can create high-quality images that look real to the untrained eye; however, these images did not increase the classifier’s performance. | 85.6 |
| [63] | DCGAN | ISIC 2016–2021 | Conducted a Turing test on 7000 generated images. | 58.72 |
| [66] | GAN | ISIC 2018 | Created a GAN-based classifier by fine-tuning an existing deep neural architecture. | 86.1 |
| [71] | DCGAN | ISIC | The bilateral filter improved training feature recognition and extraction, and fine-tuning the Deep Convolutional Generative Adversarial Network (DCGAN) improved its performance. Optimization selected the best network and hyperparameter combinations, although fine-tuning hyperparameter settings takes time and GPU power. | 93.5 |
| [72] | StyleGAN | ISIC 2018 | The generator and discriminator are modified to synthesize high-quality skin lesion images by changing the generator’s style control and noise input structure. Transfer learning on a pre-trained deep neural network classifies the images. Finally, style-based GAN synthetic skin lesion images are added to the training set to improve classifier performance. | 95.2 |
| [73] | DGAN | PH2, SD-198, Interactive Atlas of Dermoscopy, DermNet | A multiclass technique was utilized to address the dataset’s class imbalance. Improving the DGAN model’s stability during training was one of the primary challenges. | 91.1 |
| [74] | SLA-StyleGAN | ISIC 2019 | The proposed approach outperforms GANs and StyleGANs on key quantitative assessment parameters and quickly produces high-quality skin lesion images by rebuilding the StyleGAN generator and discriminator structures. Shortcoming: two skin lesions in one photograph can make classification difficult and raise the risk of misdiagnosis. | 93.64 |
| Metric | Nearest Neighbor | Bilinear | Bicubic |
|---|---|---|---|
| SSIM | 0.88 | 0.91 | 0.98 |
| PSNR (dB) | 31.23 | 34.62 | 39.68 |
| MSE | 0.0087 | 0.0089 | 0.0001 |
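The SSIM, PSNR, and MSE scores in this and the following tables can be reproduced with standard reference implementations; the sketch below uses scikit-image and a placeholder image pair rather than the authors' data.

```python
# Minimal sketch of how SSIM / PSNR / MSE scores of the kind tabulated above
# can be computed with scikit-image. The image pair is a placeholder
# (a bicubic down/up-scaling round trip), not the authors' data.
from skimage import data, img_as_float, transform
from skimage.metrics import (structural_similarity,
                             peak_signal_noise_ratio,
                             mean_squared_error)

original = img_as_float(data.astronaut())              # stand-in RGB image
downscaled = transform.resize(original, (128, 128), order=3)   # bicubic
restored = transform.resize(downscaled, original.shape[:2], order=3)

print("SSIM:", structural_similarity(original, restored,
                                     channel_axis=-1, data_range=1.0))
print("PSNR:", peak_signal_noise_ratio(original, restored, data_range=1.0))
print("MSE: ", mean_squared_error(original, restored))
```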
| Color Space | SSIM | PSNR (dB) | MSE |
|---|---|---|---|
| CIELAB | 0.86 | 96.92 | 9.07 |
| Metric | Salt and Pepper Noise | Poisson Noise | Speckle Noise | Gaussian Noise |
|---|---|---|---|---|
| MSE | 7.26 | 47.65 | 103.65 | 6.61 |
| PSNR (dB) | 36.64 | 28.47 | 25.09 | 37.05 |
| Learning Rate | Time Elapsed (hh:mm:ss) | Mini-Batch Accuracy (%) | Validation Accuracy (%) | Mini-Batch Loss | Validation Loss |
|---|---|---|---|---|---|
| 0.01 | 00:13:08 | 100 | 99.38 | 0.0007 | 0.0293 |
| 0.001 | 00:16:27 | 100 | 98.44 | 0.0039 | 0.0312 |
| 0.0002 | 00:09:17 | 100 | 96.04 | 0.0366 | 0.1127 |
| Batch Size | Time Elapsed (hh:mm:ss) | Mini-Batch Accuracy (%) | Validation Accuracy (%) | Mini-Batch Loss | Validation Loss |
|---|---|---|---|---|---|
| 64 | 00:13:08 | 100 | 99.38 | 0.0007 | 0.0293 |
| 128 | 00:08:58 | 100 | 99.79 | 0.0040 | 0.0059 |
| 256 | 00:09:07 | 100 | 99.69 | 0.0003 | 0.0099 |
| Performance Metric | Learning Rate 0.01 (%) | Learning Rate 0.001 (%) | Learning Rate 0.0002 (%) |
|---|---|---|---|
| BAS | 99 | 99 | 97 |
| Accuracy | 99.38 | 99.06 | 97.08 |
| Recall | 99 | 100 | 98 |
| Precision | 99 | 98 | 96 |
| Specificity | 99 | 98 | 96 |
| F1-Score | 99 | 99 | 97 |