Generative Adversarial Networks in Brain Imaging: A Narrative Review
Abstract
1. Introduction
2. Search Strategy
- What is a GAN?
- What are the principal applications of GANs in medical imaging?
- How are GANs employed in brain imaging?
- Are there any limitations of GANs in this field?
- technical explanation of GANs’ structure;
- focus on at least one application of GANs in brain imaging;
- focus on at least one limitation of GANs in medical imaging.
3. GANs in Brain Imaging
3.1. Generative Adversarial Network: A Brief Overview
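In the adversarial setup proposed by Goodfellow et al., a generator maps random noise to synthetic images while a discriminator learns to distinguish real from generated samples; the two networks are optimized in alternation until the generator's outputs become hard to tell apart from real data. As a rough, minimal sketch of this alternating training loop (toy one-dimensional data; the architecture and hyperparameters are illustrative assumptions, not taken from any of the reviewed studies):

```python
# Minimal GAN training loop on toy 1-D data (illustrative sketch only; the
# reviewed brain-imaging papers use convolutional 2D/3D generators and
# discriminators, but the alternating optimization is the same).
import torch
import torch.nn as nn

latent_dim, batch = 16, 128

# Generator: noise z -> synthetic sample; Discriminator: sample -> P(real).
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1))
D = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(batch, 1) * 0.5 + 3.0        # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(batch, latent_dim))        # generated samples

    # Discriminator update: label real samples 1 and generated samples 0.
    opt_D.zero_grad()
    loss_D = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    loss_D.backward()
    opt_D.step()

    # Generator update: try to make the discriminator output 1 on fakes.
    opt_G.zero_grad()
    loss_G = bce(D(fake), torch.ones(batch, 1))
    loss_G.backward()
    opt_G.step()
```

In the brain-imaging applications summarized below, the same scheme is applied to convolutional (often conditional or cycle-consistent) generators and discriminators operating on 2D slices or 3D volumes.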
3.2. Applications of GAN in Brain Imaging
- Image-to-image translation and cross-modality synthesis;
- Image reconstruction: Super-resolution and artifact removal;
- Image segmentation;
- Image synthesis and data augmentation;
- Disease progression modeling;
- Brain decoding.
3.2.1. Image-To-Image Translation and Cross-Modality Synthesis
3.2.2. Image Reconstruction: Super-Resolution and Artifact Removal
- Rigid motion, caused by the movement of a solid part of the body whose deformation is zero or small enough to be neglected, such as arm, knee, and head motion (a toy simulation of the resulting k-space corruption is sketched after this list); and
- Non-rigid motion, which arises from those parts of the body that do not retain any consistent shape, like cardiac motion.
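As a rough illustration of how rigid motion degrades MRI, the sketch below simulates an in-plane translation occurring partway through the acquisition: by the Fourier shift theorem, the k-space lines acquired after the movement pick up a linear phase ramp, which appears as ghosting in the reconstructed image (the square phantom, shift, and corrupted fraction are illustrative assumptions, not a model from any reviewed paper):

```python
# Toy simulation of rigid-motion (translation) artifacts in MRI via k-space
# phase corruption; purely illustrative.
import numpy as np

def simulate_rigid_motion(image, shift_pixels=4, corrupted_fraction=0.3):
    ny, nx = image.shape
    kspace = np.fft.fft2(image)
    # Phase-encode lines assumed to be acquired after the patient moved.
    moved = np.random.rand(ny) < corrupted_fraction
    kx = np.fft.fftfreq(nx)                                   # cycles per pixel
    phase_ramp = np.exp(-2j * np.pi * kx * shift_pixels)      # horizontal shift
    kspace[moved, :] *= phase_ramp
    return np.abs(np.fft.ifft2(kspace))

phantom = np.zeros((128, 128))
phantom[32:96, 32:96] = 1.0                                   # simple square "head"
corrupted = simulate_rigid_motion(phantom)                    # exhibits ghosting artifacts
```

GAN-based artifact-removal methods are trained to map such corrupted images back toward their motion-free counterparts.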
3.2.3. Image Segmentation
3.2.4. Image Synthesis and Data Augmentation
3.2.5. Brain Decoding
3.2.6. Disease Progression Modeling
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Acknowledgments
Conflicts of Interest
References
- Sorin, V.; Barash, Y.; Konen, E.; Klang, E. Creating Artificial Images for Radiology Applications Using Generative Adversarial Networks (GANs)—A Systematic Review. Acad. Radiol. 2020, 27, 1175–1185.
- Yu, B.; Wang, Y.; Wang, L.; Shen, D.; Zhou, L. Medical Image Synthesis via Deep Learning. Adv. Exp. Med. Biol. 2020, 1213, 23–44.
- Wolterink, J.M.; Mukhopadhyay, A.; Leiner, T.; Vogl, T.J.; Bucher, A.M.; Išgum, I. Generative adversarial networks: A primer for radiologists. Radiographics 2021, 41, 840–857.
- Qiao, K.; Chen, J.; Wang, L.; Zhang, C.; Tong, L.; Yan, B. BigGAN-based Bayesian Reconstruction of Natural Images from Human Brain Activity. Neuroscience 2020, 444, 92–105.
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27, 2672–2680. Available online: http://www.github.com/goodfeli/adversarial (accessed on 27 July 2021).
- Elazab, A.; Wang, C.; Gardezi, S.J.S.; Bai, H.; Hu, Q.; Wang, T.; Chang, C.; Lei, B. GP-GAN: Brain tumor growth prediction using stacked 3D generative adversarial networks from longitudinal MR Images. Neural Netw. 2020, 132, 321–332.
- Kazuhiro, K.; Werner, R.A.; Toriumi, F.; Javadi, M.S.; Pomper, M.G.; Solnes, L.B.; Verde, F.; Higuchi, T.; Rowe, S.P. Generative Adversarial Networks for the Creation of Realistic Artificial Brain Magnetic Resonance Images. Tomography 2018, 4, 159–163.
- Borji, A. Pros and Cons of GAN Evaluation Measures. Comput. Vis. Image Underst. 2018, 179, 41–65. Available online: https://arxiv.org/abs/1802.03446v5 (accessed on 11 November 2021).
- Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; Hochreiter, S. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. Adv. Neural Inf. Process. Syst. 2017, 30, 6627–6638. Available online: https://arxiv.org/abs/1706.08500v6 (accessed on 11 November 2021).
- Yuan, W.; Wei, J.; Wang, J.; Ma, Q.; Tasdizen, T. Unified generative adversarial networks for multimodal segmentation from unpaired 3D medical images. Med. Image Anal. 2020, 64, 101731.
- Oh, K.T.; Lee, S.; Lee, H.; Yun, M.; Yoo, S.K. Semantic Segmentation of White Matter in FDG-PET Using Generative Adversarial Network. J. Digit. Imaging 2020, 33, 816–825.
- Zhou, X.; Qiu, S.; Joshi, P.S.; Xue, C.; Killiany, R.J.; Mian, A.Z.; Chin, S.P.; Au, R.; Kolachalama, V.B. Enhancing magnetic resonance imaging-driven Alzheimer’s disease classification performance using generative adversarial learning. Alzheimer’s Res. Ther. 2021, 13, 60.
- Armanious, K.; Hepp, T.; Küstner, T.; Dittmann, H.; Nikolaou, K.; La Fougère, C.; Yang, B.; Gatidis, S. Independent attenuation correction of whole body [18F]FDG-PET using a deep learning approach with Generative Adversarial Networks. EJNMMI Res. 2020, 10, 53.
- Cheng, D.; Qiu, N.; Zhao, F.; Mao, Y.; Li, C. Research on the Modality Transfer Method of Brain Imaging Based on Generative Adversarial Network. Front. Neurosci. 2021, 15, 655019.
- Yurt, M.; Dar, S.U.; Erdem, A.; Erdem, E.; Oguz, K.K.; Çukur, T. Mustgan: Multi-stream generative adversarial networks for MR image synthesis. Med. Image Anal. 2021, 70, 101944.
- Jin, C.-B.; Kim, H.; Liu, M.; Jung, W.; Joo, S.; Park, E.; Ahn, Y.S.; Han, I.H.; Lee, J.I.; Cui, X. Deep CT to MR Synthesis Using Paired and Unpaired Data. Sensors 2019, 19, 2361.
- Kazemifar, S.; Montero, A.M.B.; Souris, K.; Rivas, S.T.; Timmerman, R.; Park, Y.K.; Jiang, S.; Geets, X.; Sterpin, E.; Owrangi, A. Dosimetric evaluation of synthetic CT generated with GANs for MRI-only proton therapy treatment planning of brain tumors. J. Appl. Clin. Med. Phys. 2020, 21, 76–86.
- Maspero, M.; Bentvelzen, L.G.; Savenije, M.H.; Guerreiro, F.; Seravalli, E.; Janssens, G.O.; Berg, C.A.V.D.; Philippens, M.E. Deep learning-based synthetic CT generation for paediatric brain MR-only photon and proton radiotherapy. Radiother. Oncol. 2020, 153, 197–204.
- Liu, Y.; Nacewicz, B.M.; Zhao, G.; Adluru, N.; Kirk, G.R.; Ferrazzano, P.A.; Styner, M.A.; Alexander, A.L. A 3D Fully Convolutional Neural Network With Top-Down Attention-Guided Refinement for Accurate and Robust Automatic Segmentation of Amygdala and Its Subnuclei. Front. Neurosci. 2020, 14, 260.
- Yang, H.; Qian, P.; Fan, C. An Indirect Multimodal Image Registration and Completion Method Guided by Image Synthesis. Comput. Math. Methods Med. 2020, 2020, 2684851.
- Lan, H.; Toga, A.W.; Sepehrband, F.; Initiative, T.A.D.N. Three-dimensional self-attention conditional GAN with spectral normalization for multimodal neuroimaging synthesis. Magn. Reson. Med. 2021, 86, 1718–1733.
- Song, T.-A.; Chowdhury, S.R.; Yang, F.; Dutta, J. PET image super-resolution using generative adversarial networks. Neural Netw. 2020, 125, 83–91.
- Gong, K.; Yang, J.; Larson, P.E.Z.; Behr, S.C.; Hope, T.A.; Seo, Y.; Li, Q. MR-Based Attenuation Correction for Brain PET Using 3-D Cycle-Consistent Adversarial Network. IEEE Trans. Radiat. Plasma Med. Sci. 2020, 5, 185–192.
- Zaitsev, M.; Maclaren, J.; Herbst, M. Motion artifacts in MRI: A complex problem with many partial solutions. J. Magn. Reson. Imaging 2015, 42, 887–901. Available online: https://pubmed.ncbi.nlm.nih.gov/25630632/ (accessed on 22 August 2021).
- Ouyang, J.; Chen, K.T.; Gong, E.; Pauly, J.; Zaharchuk, G. Ultra-low-dose PET reconstruction using generative adversarial network with feature matching and task-specific perceptual loss. Med. Phys. 2019, 46, 3555–3564.
- Chen, K.T.; Gong, E.; Macruz, F.B.D.C.; Xu, J.; Boumis, A.; Khalighi, M.; Poston, K.L.; Sha, S.J.; Greicius, M.D.; Mormino, E.; et al. Ultra–Low-Dose 18F-Florbetaben Amyloid PET Imaging Using Deep Learning with Multi-Contrast MRI Inputs. Radiology 2018, 290, 649–656. Available online: https://pubs.rsna.org/doi/abs/10.1148/radiol.2018180940 (accessed on 28 July 2021).
- Zhao, K.; Zhou, L.; Gao, S.; Wang, X.; Wang, Y.; Zhao, X.; Wang, H.; Liu, K.; Zhu, Y.; Ye, H. Study of low-dose PET image recovery using supervised learning with CycleGAN. PLoS ONE 2020, 15, e0238455.
- Sundar, L.K.S.; Iommi, D.; Muzik, O.; Chalampalakis, Z.; Klebermass, E.M.; Hienert, M.; Rischka, L.; Lanzenberger, R.; Hahn, A.; Pataraia, E.; et al. Conditional Generative Adversarial Networks Aided Motion Correction of Dynamic 18F-FDG PET Brain Studies. J. Nucl. Med. 2021, 62, 871–880.
- Delannoy, Q.; Pham, C.-H.; Cazorla, C.; Tor-Díez, C.; Dollé, G.; Meunier, H.; Bednarek, N.; Fablet, R.; Passat, N.; Rousseau, F. SegSRGAN: Super-resolution and segmentation using generative adversarial networks—Application to neonatal brain MRI. Comput. Biol. Med. 2020, 120, 103755.
- Shaul, R.; David, I.; Shitrit, O.; Raviv, T.R. Subsampled brain MRI reconstruction by generative adversarial neural networks. Med. Image Anal. 2020, 65, 101747.
- Zhang, H.; Shinomiya, Y.; Yoshida, S. 3D MRI Reconstruction Based on 2D Generative Adversarial Network Super-Resolution. Sensors 2021, 21, 2978.
- Islam, J.; Zhang, Y. GAN-based synthetic brain PET image generation. Brain Inform. 2020, 7, 3.
- Hirte, A.U.; Platscher, M.; Joyce, T.; Heit, J.J.; Tranvinh, E.; Federau, C. Realistic generation of diffusion-weighted magnetic resonance brain images with deep generative models. Magn. Reson. Imaging 2021, 81, 60–66.
- Kossen, T.; Subramaniam, P.; Madai, V.I.; Hennemuth, A.; Hildebrand, K.; Hilbert, A.; Sobesky, J.; Livne, M.; Galinovic, I.; Khalil, A.A.; et al. Synthesizing anonymized and labeled TOF-MRA patches for brain vessel segmentation using generative adversarial networks. Comput. Biol. Med. 2021, 131, 104254.
- Barile, B.; Marzullo, A.; Stamile, C.; Durand-Dubief, F.; Sappey-Marinier, D. Data augmentation using generative adversarial neural networks on brain structural connectivity in multiple sclerosis. Comput. Methods Programs Biomed. 2021, 206, 106113.
- Li, Q.; Yu, Z.; Wang, Y.; Zheng, H. TumorGAN: A Multi-Modal Data Augmentation Framework for Brain Tumor Segmentation. Sensors 2020, 20, 4203.
- Kim, H.W.; Lee, H.E.; Lee, S.; Oh, K.T.; Yun, M.; Yoo, S.K. Slice-selective learning for Alzheimer’s disease classification using a generative adversarial network: A feasibility study of external validation. Eur. J. Nucl. Med. Mol. Imaging 2020, 47, 2197–2206.
- Ren, Z.; Li, J.; Xue, X.; Li, X.; Yang, F.; Jiao, Z.; Gao, X. Reconstructing seen image from brain activity by visually-guided cognitive representation and adversarial learning. NeuroImage 2021, 228, 117602.
- Huang, W.; Yan, H.; Wang, C.; Yang, X.; Li, J.; Zuo, Z.; Zhang, J.; Chen, H. Deep Natural Image Reconstruction from Human Brain Activity Based on Conditional Progressively Growing Generative Adversarial Networks. Neurosci. Bull. 2021, 37, 369–379.
- Al-Tahan, H.; Mohsenzadeh, Y. Reconstructing feedback representations in the ventral visual pathway with a generative adversarial autoencoder. PLoS Comput. Biol. 2021, 17, 1–19.
- Han, C.; Rundo, L.; Murao, K.; Noguchi, T.; Shimahara, Y.; Milacski, Z.Á.; Koshino, S.; Sala, E.; Nakayama, H.; Satoh, S. MADGAN: Unsupervised medical anomaly detection GAN using multiple adjacent brain MRI slice reconstruction. BMC Bioinform. 2021, 22 (Suppl. S2), 31.
- Cohen, J.P.; Luck, M.; Honari, S. Distribution Matching Losses Can Hallucinate Features in Medical Image Translation. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2018; Frangi, A., Schnabel, J., Davatzikos, C., Alberola-López, C., Fichtinger, G., Eds.; Springer: Cham, Switzerland, 2018; Volume 11070. Available online: https://link.springer.com/chapter/10.1007/978-3-030-00928-1_60 (accessed on 22 August 2021).
- Mirsky, Y.; Mahler, T.; Shelef, I.; Elovici, Y. CT-GAN: Malicious Tampering of 3D Medical Imagery using Deep Learning. arXiv 2019, arXiv:1901.03597v3. Available online: https://arxiv.org/abs/1901.03597v3 (accessed on 11 November 2021).
- Kingma, D.P.; Welling, M. Auto-encoding variational bayes. arXiv 2013, arXiv:1312.6114.
- Kingma, D.P.; Welling, M. An introduction to variational autoencoders. arXiv 2019, arXiv:1906.02691.
- Bank, D.; Koenigstein, N.; Giryes, R. Autoencoders. arXiv 2020, arXiv:2003.05991.
- Blei, D.M.; Kucukelbir, A.; McAuliffe, J.D. Variational inference: A review for statisticians. J. Am. Stat. Assoc. 2017, 112, 859–877.
Author | Year | Application | Population (No. of Patients) | Imaging Modality | ML Model | Results |
---|---|---|---|---|---|---|
Jin | 2019 | Image-to-Image translation and cross-modality synthesis | 202 patients | MRI from CT image | MR-GAN | MAE: 19.36 PSNR: 65.35 SSIM: 0.25 |
Kazemifar | 2019 | Image-to-Image translation and cross-modality synthesis | 66 patients | CT from MRI | GAN | mean absolute difference below 0.5% (0.3 Gy) |
Dai | 2020 | Image-to-Image translation and cross-modality synthesis | 274 subjects (54 patients with low-grade glioma, and 220 patients with high-grade glioma) | MRI | Unified generative adversarial network for multimodal MR image synthesis | NMAEs for the generated T1c, T2, Flair: 0.034 ± 0.005, 0.041 ± 0.006, and 0.041 ± 0.006. PSNRs: 32.353 ± 2.525 dB, 30.016 ± 2.577 dB, and 29.091 ± 2.795 dB. SSIMs: 0.974 ± 0.059, 0.969 ± 0.059, and 0.959 ± 0.05. |
Hamghalam | 2020 | Image-to-Image translation and cross-modality synthesis | Various datasets | MRI-HTC | Cycle-GAN | Dice similarity scores: 0.8% (whole tumor), 0.6% (tumor core), 0.5% (enhancing tumor). |
Maspero | 2020 | Image-to-Image translation and cross-modality synthesis | 60 pediatric patients | SynCT from T1-weighted MRI | cGANs | mean absolute error of 61 ± 14 HU pass-rate of 99.5 ± 0.8% and 99.2 ± 1.1% |
Sanders | 2020 | Image-to-Image translation and cross-modality synthesis | 109 brain tumor patients | relative cerebral blood volume (rCBV) maps (normally computed from DSC MRI) from DCE MRI of brain tumors | cGANs | Pearson correlation analysis showed strong correlation (ρ = 0.87, p < 0.05 and ρ = 0.86, p < 0.05). |
Wang | 2020 | Image-to-Image translation and cross-modality synthesis | 20 patients | MRI-PET | cycleGANs | PSNR > 24.3 SSIM > 0.817 MSE ≤ 0.036. |
Lan | 2021 | Image-to-Image translation and cross-modality synthesis | 265 subjects | PET-MRI | 3D self-attention conditional GAN (SC-GAN) | NRMSE: 0.076 ± 0.017 PSNR: 32.14 ± 1.10 SSIM: 0.962 ± 0.008 |
Bourbonne | 2021 | Image-to-Image translation and cross-modality synthesis | 184 patients with brain metastases | CT-MRI | 2D-GAN (2D U-Net) | mean global gamma analysis passing rate: 99.7% |
Cheng | 2021 | Image-to-Image translation and cross-modality synthesis | 17 adults | Two-dimensional fMRI images from two-dimensional EEG images | BMT-GAN | MSE: 128.6233 PSNR: 27.0376 SSIM: 0.8627 VIF: 0.3575 IFC: 2.4794 |
La Rosa | 2021 | Image-to-Image translation and cross-modality synthesis | 12 healthy controls and 44 patients diagnosed with Multiple Sclerosis | MRI (MP2RAGE uniform images (UNI) from MPRAGE) | GAN | PSNR: 31.39 ± 0.96 NRMSE: 0.13 ± 0.01 SSIM: 0.98 ± 0.01 |
Lin | 2021 | Image-to-Image translation and cross-modality synthesis | AD: 362 subjects (647 images); CN: 308 subjects (707 images); pMCI: 183 subjects (326 images); sMCI: 233 subjects (396 images) | MRI-PET | Reversible Generative Adversarial Network (RevGAN) | Synthetic PET: PSNR: 29.42 SSIM: 0.8176 PSNR: 24.97 SSIM: 0.6746 |
Liu | 2021 | Image-to-Image translation and cross-modality synthesis | 12 brain cancer patients | SynCT images from T1-weighted postgadolinium MR | GAN model with a residual network (ResNet) | Average gamma passing rates at 1%/1 mm and 2%/2 mm were 99.0 ± 1.5% and 99.9 ± 0.2% |
Tang | 2021 | Image-to-Image translation and cross-modality synthesis | 37 brain cancer patients | SynCT from T1-weighted MRI | GAN | Average gamma passing rates at 3%/3 mm and 2%/2 mm criteria were 99.76% and 97.25% |
Uzunova | 2021 | Image-to-Image translation and cross-modality synthesis | Various datasets | MRI (T1/Flair to T2, healthy to pathological) | GAN | T1 → T2 SSIM: 0.911 MAE: 0.017 MSE: 0.003 PSNR: 26.0 Flair → T2 SSIM: 0.905 MAE: 0.021 MSE: 0.004 PSNR: 24.6 |
Yang | 2021 | Image-to-Image translation and cross-modality synthesis | 9 subjects | Multimodal MRI-CT registration into monomodal sCT-CT registration | CAE-GAN | MAE: 99.32 |
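For orientation, the image-similarity metrics that recur in these summary tables (MAE, PSNR, SSIM) and the Dice coefficient used for segmentation results can be computed as in the sketch below (the arrays are random placeholders, not data from any reviewed study):

```python
# Illustrative computation of MAE, PSNR, SSIM and the Dice coefficient;
# "real" and "synthetic" are placeholder arrays standing in for a reference
# image and a GAN output of the same size.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
real = rng.random((256, 256)).astype(np.float32)
synthetic = real + 0.05 * rng.standard_normal((256, 256)).astype(np.float32)

data_range = real.max() - real.min()
mae = float(np.mean(np.abs(real - synthetic)))
psnr = peak_signal_noise_ratio(real, synthetic, data_range=data_range)
ssim = structural_similarity(real, synthetic, data_range=data_range)

def dice(mask_a, mask_b):
    """Dice overlap between two binary segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

print(f"MAE={mae:.4f}  PSNR={psnr:.2f} dB  SSIM={ssim:.4f}  "
      f"Dice={dice(real > 0.5, synthetic > 0.5):.3f}")
```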
Author | Year | Application | Population (No. of Patients) | Imaging Modality | ML Model | Results |
---|---|---|---|---|---|---|
Ouyang | 2019 | Image reconstruction | 39 participants | PET/MRI | GAN | MAE: 8/80 |
Song | 2020 | Image reconstruction | 30 HRRT scans from the ADNI database. Validation dataset = 12 subjects | low-resolution PET and high-resolution MRI images | Self-supervised SR (SSSR) GAN | Various results |
Shaul | 2020 | Image reconstruction | 490 3D brain MRI of a healthy human adult; 64 patients from Longitudinal MS Lesion Segmentation Challenge (T1, T2, PD, and FLAIR); 14 DCE-MRI acquisitions of Stroke and brain tumor | MRI | GAN | PSNR: 40.09 ± 3.24 SSIM: 0.98 ± 0.01 MSE: 0.0021 ± 0.036 |
Zhao | 2020 | Image reconstruction | 109 patients | PET | S-CycleGAN | Average coincidence: 110 ± 23 |
Zhang | 2021 | Image reconstruction | 581 healthy adults | MRI | noise-based super-resolution network (nESRGAN) | SSIM: 0.09710 ± 0.0022 |
Sundar | 2021 | Image reconstruction | 10 healthy adults | PET/MRI | cGAN | AUC: 0.9 ± 0.7% |
Zhou | 2021 | Image reconstruction | 151 patients with Alzheimer’s Disease | MRI | GAN | Image quality: 9.6% |
Lv | 2021 | Image reconstruction | 17 participants with a brain tumor | MRI | PI-GAN | SSIM: 0.96 ± 0.01 RMSE: 1.54 ± 0.33 |
Delannoy | 2020 | Image reconstruction and segmentation | dHCP dataset = 40; Epirmex dataset = 1500 | MRI | SegSRGAN | Dice 0.050 |
Author | Year | Application | Population (No. of Patients) | Imaging Modality | ML Model | Results |
---|---|---|---|---|---|---|
Liu | 2020 | Image segmentation | 14 subjects | MRI | cycle-consistent generative adversarial network (CycleGAN) | Dice 75.5%; ASSD: 1.2 |
Oh | 2020 | Image segmentation | 192 subjects | 18 F-FDG PET/CT and MRI | GAN | AUC-PR: 0.869 ± 0.021 |
Yuan | 2020 | Image segmentation | 484 brain tumor scans | MRI | GAN | Dice: 42.35% |
Author | Year | Application | Population (No. of Patients) | Imaging Modality | ML Model | Results |
---|---|---|---|---|---|---|
Kazuhiro | 2018 | Image synthesis | 30 healthy individuals and 33 patients with cerebrovascular accident | MRI | DCGAN | 45% and 71% were identified as real images by neuroradiologists. |
Islam | 2020 | Image synthesis | 479 patients | PET | DCGAN | SSIM 77.48 |
Kim | 2020 | Image synthesis | 139 patients with Alzheimer’s Disease and 347 cognitively normal participants. | PET/CT | Boundary Equilibrium Generative Adversarial Network (BEGAN) | Accuracy: 94.82; Sensitivity: 92.11; Specificity: 97.45; AUC: 0.98 |
Li | 2020 | Image synthesis | 226 patients (166 with high-grade gliomas (HGG), 60 with low-grade gliomas (LGG)) used as the training set | MRI (FLAIR, T1, T1CE) | TumorGAN | Dice 0.725 |
Barile | 2021 | Image synthesis | 29 relapsing-remitting and 19 secondary-progressive MS patients. | MRI | GAN AAE | F1 score 81% |
Hirte | 2021 | Image synthesis | 2029 patients with normal brain MRI | MRI | GAN | Data similarity 0.0487 |
Kossen | 2021 | Image synthesis | 121 patients with Cerebrovascular disease | MRA | 3 GANs: (1) Deep convolutional GAN, (2) Wasserstein-GAN with gradient penalty (WGAN-GP), and (3) WGAN-GP with spectral normalization (WGAN-GP-SN). | FID 37.01 |
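The Fréchet Inception Distance (FID) reported above for synthetic-image quality (Heusel et al.) measures the distance between Gaussians fitted to deep features of real and generated images; a rough sketch, assuming the feature matrices have already been extracted with a pretrained Inception network:

```python
# Sketch of the FID between two sets of feature vectors; the feature matrices
# here are random placeholders standing in for Inception activations.
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):          # drop tiny imaginary parts from sqrtm
        covmean = covmean.real
    return float(np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2.0 * covmean))

feats_real = np.random.randn(500, 64)     # placeholder features of real images
feats_fake = np.random.randn(500, 64)     # placeholder features of GAN outputs
print(f"FID = {fid(feats_real, feats_fake):.2f}")
```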
Author | Year | Application | Population (No. of Patients) | Image Modality | ML Model | Results |
---|---|---|---|---|---|---|
Qiao | 2020 | Brain decoding | 1750 training samples and 120 testing samples | fMRI | GAN-based Bayesian visual reconstruction model (GAN-BVRM) | PSM: 0.381 ± 0.082 |
Ren | 2021 | Brain decoding | Various datasets | MRI | Dual-Variational Autoencoder/Generative Adversarial Network (D-VAE/GAN) | Mean identification accuracy: 87% |
Huang | 2021 | Brain decoding | Five volunteers (3 males and 2 females) | fMRI | CAE, LSTM, and conditional progressively growing GAN (C-PG-GAN) | Various results for each participant |
Al-Tahan | 2021 | Brain decoding | 50 healthy right-handed participants | fMRI | Adversarial Autoencoder (AAE) framework | MAE 0.49 ± 0.024 |
Author | Year | Application | Population (No. of Patients) | Imaging Modality | ML Model | Results |
---|---|---|---|---|---|---|
Elazab | 2020 | Disease progression modeling | 9 subjects | MRI | growth prediction GAN (GP-GAN) | Dice: 88.26 Jaccard Index: 78.99 |
Han | 2021 | Disease progression modeling | 408 subjects/1133 scans/57,834 slices | MRI | medical anomaly detection generative adversarial network (MADGAN) | Cognitive impairment: AUC: 0.727 AD late stage: AUC 0.894 Brain metastases: AUC 0.921 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).