Synthetic Medical Imaging Generation with Generative Adversarial Networks for Plain Radiographs
Abstract
1. Introduction
2. Materials and Methods
2.1. Generative Adversarial Networks (GANs)
2.2. Selecting a GAN
2.3. Pipeline
2.3.1. Diagram Overview
2.3.2. Preprocessing
2.3.3. Training and Generation
2.3.4. Evaluation (FID Score, GAN-Train/GAN-Test)
2.4. Data
2.5. Experimentation
2.5.1. Initial Experimentation
2.5.2. Hyperparameter Tuning
2.5.3. Threshold Data Requirements
2.5.4. Clinician Evaluation
2.5.5. GAN-Train/GAN-Test
3. Results
3.1. Training Results: FID Score Convergence across X-ray Types
3.2. Image Generation Results: Qualitative Analysis of Image Artifacts
3.3. Hyperparameter Tuning
3.4. Training Dataset Size
3.5. Clinician Evaluation
3.6. GAN-Train and GAN-Test Evaluation
4. Discussion
4.1. Further Research
4.2. Further Development
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Name of Dataset | Location | Type of Data | Image Count | Shortcomings/Challenges |
|---|---|---|---|---|
| KneeXrayOA-simple | Kaggle | Osteoarthritic Knee X-rays (JPG) | 10k | |
| MURA | Stanford | Musculoskeletal Radiographs (PNG) | 5k (elbow) | |
| UMD Elbow | University of Maryland | Elbow X-rays (DICOM) | <1k | |
| Model | Approach | Accuracy | Precision | Recall | F1 Score |
|---|---|---|---|---|---|
| Inception | Base | 78.36 | 78.43 | 78.37 | 78.35 |
| Inception | GAN-train | 65.87 | 68.78 | 65.87 | 64.49 |
| Inception | GAN-test | 74.03 | 76.56 | 74.04 | 73.41 |
| ResNet101 | Base | 82.69 | 83.00 | 82.69 | 82.65 |
| ResNet101 | GAN-train | 69.71 | 73.77 | 69.71 | 68.36 |
| ResNet101 | GAN-test | 65.86 | 69.94 | 65.86 | 64.02 |
| VGG19 | Base | 82.21 | 83.10 | 82.21 | 82.09 |
| VGG19 | GAN-train | 68.27 | 73.21 | 68.27 | 66.48 |
| VGG19 | GAN-test | 59.61 | 63.54 | 59.61 | 56.46 |
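The table reports accuracy, precision, recall, and F1 score for each classifier under the base, GAN-train, and GAN-test protocols. As a minimal sketch of how such figures are derived from classifier predictions (not the authors' evaluation code; the function name and macro-averaging choice are illustrative assumptions), the four metrics can be computed as follows:

```python
# Hypothetical sketch: compute accuracy and macro-averaged
# precision, recall, and F1 from true vs. predicted class labels.
from collections import Counter

def classification_metrics(y_true, y_pred):
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1          # correct prediction for class t
        else:
            fp[p] += 1          # predicted p, but true class was t
            fn[t] += 1          # missed an instance of class t
    accuracy = sum(tp.values()) / len(y_true)
    precisions, recalls, f1s = [], [], []
    for c in labels:
        prec = tp[c] / (tp[c] + fp[c]) if (tp[c] + fp[c]) else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if (tp[c] + fn[c]) else 0.0
        f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
        precisions.append(prec); recalls.append(rec); f1s.append(f1)
    n = len(labels)
    return accuracy, sum(precisions) / n, sum(recalls) / n, sum(f1s) / n
```

In the GAN-train setting the classifier is trained on synthetic images and evaluated on real ones; in GAN-test the roles are reversed, so these metrics quantify how interchangeable the synthetic and real radiographs are for a downstream classifier.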
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
McNulty, J.R.; Kho, L.; Case, A.L.; Slater, D.; Abzug, J.M.; Russell, S.A. Synthetic Medical Imaging Generation with Generative Adversarial Networks for Plain Radiographs. Appl. Sci. 2024, 14, 6831. https://doi.org/10.3390/app14156831