Improving Structural MRI Preprocessing with Hybrid Transformer GANs
Abstract
1. Introduction
2. Related Work
Ref. | Input Resolution | Output Resolution | PSNR (dB) | SSIM | Dataset | Model |
---|---|---|---|---|---|---|
[21] | 16 × 16 (×8) | 128 × 128 | 24.63 | 0.784 | Amsterdam open MRI [28] | U-Net with self-attention |
[20] | 60 × 60 (×4) | 240 × 240 | TCGA (36.98), ATLAS (29.02) | TCGA (0.996), ATLAS (0.951) | TCGA [29], ATLAS [30] | U-Net and CNN hybrid |
[23] | 128 × 128 (×2) | 256 × 256 | 32.45 | 0.935 | - | ESRGAN |
[22] | 128 × 128 (×2) | 256 × 256 | Kirby 21 (37.16), NAMIC (35.56) | Kirby 21 (0.990), NAMIC (0.982) | Kirby 21 [32], NAMIC [33] | ResNet |
[19] | 128 × 128 × 128 (×2) | 256 × 256 × 256 | Kirby 21 (38.93), NAMIC (38.06) | Kirby 21 (0.9797), NAMIC (0.9767) | Kirby 21 [32], NAMIC [33] | Deep 3D CNN with skip connections |
[18] | 93 × 93 × 93 (×2, ×3, ×4) | 186 × 186 × 186, 279 × 279 × 279, 372 × 372 × 372 | HCP ×2 (35.97), HCP ×3 (32.63), HCP ×4 (30.64) | HCP ×2 (0.9827), HCP ×3 (0.9671), HCP ×4 (0.9519) | Human Connectome Project (HCP) [38] | 3D regression-based filters |
[17] | 40 × 40 (×2) | 80 × 80 | Kirby 21 (43.68), ANVIL-adult (40.96), MSSEG (41.22) | Kirby 21 (0.9965), ANVIL-adult (0.9906), MSSEG (0.9978) | Kirby 21 [32], ANVIL-adult [39], MSSEG [40] | CNN and ResNet hybrid |
[16] | 20 × 20 (×2, ×3, ×4) | 40 × 40, 60 × 60, 80 × 80 | BrainWeb ×2 (46.58), BrainWeb ×3 (40.97), BrainWeb ×4 (35.20), NAMIC ×2 (38.32), NAMIC ×3 (33.76), NAMIC ×4 (30.84) | BrainWeb ×2 (0.999), BrainWeb ×3 (0.995), BrainWeb ×4 (0.986), NAMIC ×2 (0.945), NAMIC ×3 (0.872), NAMIC ×4 (0.811) | BrainWeb [41], NAMIC [33] | ResNet |
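For reference, the PSNR and SSIM values in the table above are computed between a restored image and its high-resolution ground truth. The sketch below shows one minimal way to obtain such numbers with scikit-image [63]; the file name, the ×2 downscaling factor, and the bicubic upscaling used as a stand-in for a model output are illustrative assumptions, not the setup of any cited work.

```python
# Minimal sketch: simulate a x2 degradation of an HR slice, restore it with a
# naive bicubic baseline, and score the result with PSNR/SSIM as in the table.
# The file name and bicubic stand-in are illustrative assumptions.
import numpy as np
from skimage import io, img_as_float, transform
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr = img_as_float(io.imread("hr_slice.png", as_gray=True))   # e.g., 256 x 256 ground truth

# Simulate the low-resolution input and a baseline restoration.
lr = transform.resize(hr, (hr.shape[0] // 2, hr.shape[1] // 2), order=3, anti_aliasing=True)
sr = transform.resize(lr, hr.shape, order=3)                  # stand-in for a super-resolution model output

psnr = peak_signal_noise_ratio(hr, sr, data_range=1.0)
ssim = structural_similarity(hr, sr, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```

Replacing the bicubic stand-in with a trained model's output reproduces the kind of comparison summarized in the table.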
3. Materials and Methods
3.1. Upscale Network
3.1.1. Degradation
3.1.2. Loss Functions
3.2. Denoise Network
3.3. Evaluation of Results
3.3.1. Objective Evaluation
3.3.2. Subjective Evaluation
4. Results and Discussion
4.1. Experimentation Data
4.2. Implementation Details
4.3. Results
4.4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Krishnapriya, S.; Karuna, Y. A survey of deep learning for MRI brain tumor segmentation methods: Trends, challenges, and future directions. Health Technol. 2023, 13, 181–201. [Google Scholar] [CrossRef]
- Khan, S.U.; Ullah, N.; Ahmed, I.; Ahmad, I.; Mahsud, M.I. MRI imaging, comparison of MRI with other modalities, noise in MRI images and machine learning techniques for noise removal: A review. Curr. Med Imaging 2019, 15, 243–254. [Google Scholar] [CrossRef] [PubMed]
- Odusami, M.; Maskeliūnas, R.; Damaševičius, R. Pixel-Level Fusion Approach with Vision Transformer for Early Detection of Alzheimer’s Disease. Electronics 2023, 12, 1218. [Google Scholar] [CrossRef]
- Praveen, S.P.; Srinivasu, P.N.; Shafi, J.; Wozniak, M.; Ijaz, M.F. ResNet-32 and FastAI for diagnoses of ductal carcinoma from 2D tissue slides. Sci. Rep. 2022, 12, 20804. [Google Scholar] [CrossRef]
- Ullah, I.; Ali, F.; Shah, B.; El-Sappagh, S.; Abuhmed, T.; Park, S.H. A deep learning based dual encoder–decoder framework for anatomical structure segmentation in chest X-ray images. Sci. Rep. 2023, 13, 791. [Google Scholar] [CrossRef]
- Dong, C.; Loy, C.C.; He, K.; Tang, X. Image Super-Resolution Using Deep Convolutional Networks. arXiv 2015, arXiv:1501.00092. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. arXiv 2017, arXiv:1706.03762. [Google Scholar] [CrossRef]
- Cordonnier, J.B.; Loukas, A.; Jaggi, M. On the Relationship between Self-Attention and Convolutional Layers. arXiv 2019, arXiv:1911.03584. [Google Scholar] [CrossRef]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929. [Google Scholar] [CrossRef]
- Chen, X.; Wang, X.; Zhou, J.; Dong, C. Activating More Pixels in Image Super-Resolution Transformer. arXiv 2022, arXiv:2205.04437. [Google Scholar] [CrossRef]
- Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; Timofte, R. SwinIR: Image Restoration Using Swin Transformer. arXiv 2021, arXiv:2108.10257. [Google Scholar] [CrossRef]
- Wang, X.; Xie, L.; Dong, C.; Shan, Y. Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data. arXiv 2021, arXiv:2107.10833. [Google Scholar] [CrossRef]
- Zhang, K.; Liang, J.; Van Gool, L.; Timofte, R. Designing a Practical Degradation Model for Deep Blind Image Super-Resolution. arXiv 2021, arXiv:2103.14006. [Google Scholar] [CrossRef]
- Ahn, N.; Kang, B.; Sohn, K.A. Fast, Accurate, and Lightweight Super-Resolution with Cascading Residual Network. arXiv 2018, arXiv:1803.08664. [Google Scholar] [CrossRef]
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. arXiv 2014, arXiv:1406.2661. [Google Scholar] [CrossRef]
- Zeng, K.; Zheng, H.; Cai, C.; Yang, Y.; Zhang, K.; Chen, Z. Simultaneous single- and multi-contrast super-resolution for brain MRI images based on a convolutional neural network. Comput. Biol. Med. 2018, 99, 133–141. [Google Scholar] [CrossRef] [PubMed]
- Wang, L.; Zhu, H.; He, Z.; Jia, Y.; Du, J. Adjacent slices feature transformer network for single anisotropic 3D brain MRI image super-resolution. Biomed. Signal Process. Control 2022, 72, 103339. [Google Scholar] [CrossRef]
- Park, S.; Gahm, J.K. Super-Resolution of 3D Brain MRI With Filter Learning Using Tensor Feature Clustering. IEEE Access 2022, 10, 4957–4968. [Google Scholar] [CrossRef]
- Pham, C.H.; Tor-Díez, C.; Meunier, H.; Bednarek, N.; Fablet, R.; Passat, N.; Rousseau, F. Multiscale brain MRI super-resolution using deep 3D convolutional networks. Comput. Med Imaging Graph. 2019, 77, 101647. [Google Scholar] [CrossRef]
- Feng, C.M.; Wang, K.; Lu, S.; Xu, Y.; Li, X. Brain MRI super-resolution using coupled-projection residual network. Neurocomputing 2021, 456, 190–199. [Google Scholar] [CrossRef]
- Wu, Z.; Chen, X.; Xie, S.; Shen, J.; Zeng, Y. Super-resolution of brain MRI images based on denoising diffusion probabilistic model. Biomed. Signal Process. Control 2023, 85, 104901. [Google Scholar] [CrossRef]
- Song, L.; Wang, Q.; Liu, T.; Li, H.; Fan, J.; Yang, J.; Hu, B. Deep robust residual network for super-resolution of 2D fetal brain MRI. Sci. Rep. 2022, 12, 406. [Google Scholar] [CrossRef]
- Hongtao, Z.; Shinomiya, Y.; Yoshida, S. 3D Brain MRI Reconstruction based on 2D Super-Resolution Technology. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11–14 October 2020; IEEE: Piscataway, NJ, USA, 2020. [Google Scholar] [CrossRef]
- Buades, A.; Coll, B.; Morel, J.M. Non-Local Means Denoising. Image Process. Line 2011, 1, 208–212. [Google Scholar] [CrossRef]
- Black, M.; Sapiro, G.; Marimont, D.; Heeger, D. Robust anisotropic diffusion. IEEE Trans. Image Process. 1998, 7, 421–432. [Google Scholar] [CrossRef] [PubMed]
- Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. In Proceedings of the Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271), Bombay, India, 7 January 1998; pp. 839–846. [Google Scholar] [CrossRef]
- Liu, H.; Yuan, H.; Hou, J.; Hamzaoui, R.; Gao, W. PUFA-GAN: A Frequency-Aware Generative Adversarial Network for 3D Point Cloud Upsampling. IEEE Trans. Image Process. 2022, 31, 7389–7402. [Google Scholar] [CrossRef] [PubMed]
- Snoek, L.; van der Miesen, M.M.; Beemsterboer, T.; van der Leij, A.; Eigenhuis, A.; Scholte, H.S. The Amsterdam Open MRI Collection, a set of multimodal MRI datasets for individual difference analyses. Sci. Data 2021, 8, 85. [Google Scholar] [CrossRef] [PubMed]
- The Cancer Genome Atlas (TCGA) Research Network Dataset, U.S. Department of Health and Human Services, National Institutes of Health, National Cancer Institute. 2006. Available online: https://portal.gdc.cancer.gov/ (accessed on 8 September 2023).
- Liew, S.L.; Lo, B.P.; Donnelly, M.R.; Zavaliangos-Petropulu, A.; Jeong, J.N.; Barisano, G.; Hutton, A.; Simon, J.P.; Juliano, J.M.; Suri, A.; et al. A large, curated, open-source stroke neuroimaging dataset to improve lesion segmentation algorithms. Sci. Data 2022, 9, 320. [Google Scholar] [CrossRef]
- Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Loy, C.C. ESRGAN: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision Workshops (ECCVW), Munich, Germany, 8–14 September 2018. [Google Scholar]
- Landman, B.A.; Huang, A.J.; Gifford, A.; Vikram, D.S.; Lim, I.A.L.; Farrell, J.A.; Bogovic, J.A.; Hua, J.; Chen, M.; Jarso, S.; et al. Multi-parametric neuroimaging reproducibility: A 3-T resource study. NeuroImage 2011, 54, 2854–2866. [Google Scholar] [CrossRef]
- NAMIC Wiki. Downloads. 2017. Available online: https://www.na-mic.org/wiki/Downloads (accessed on 25 April 2023).
- Li, W.; Wang, Y.; Su, Y.; Li, X.; Liu, A.A.; Zhang, Y. Multi-Scale Fine-Grained Alignments for Image and Sentence Matching. IEEE Trans. Multimed. 2023, 25, 543–556. [Google Scholar] [CrossRef]
- Cong, R.; Sheng, H.; Yang, D.; Cui, Z.; Chen, R. Exploiting Spatial and Angular Correlations With Deep Efficient Transformers for Light Field Image Super-Resolution. IEEE Trans. Multimed. 2023, 1–14. [Google Scholar] [CrossRef]
- Cheng, D.; Chen, L.; Lv, C.; Guo, L.; Kou, Q. Light-Guided and Cross-Fusion U-Net for Anti-Illumination Image Super-Resolution. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 8436–8449. [Google Scholar] [CrossRef]
- Sheng, H.; Wang, S.; Yang, D.; Cong, R.; Cui, Z.; Chen, R. Cross-View Recurrence-based Self-Supervised Super-Resolution of Light Field. IEEE Trans. Circuits Syst. Video Technol. 2023, 1. [Google Scholar] [CrossRef]
- Van Essen, D.C.; Smith, S.M.; Barch, D.M.; Behrens, T.E.; Yacoub, E.; Ugurbil, K. The WU-Minn Human Connectome Project: An overview. NeuroImage 2013, 80, 62–79. [Google Scholar] [CrossRef]
- Kempton, M.J.; Underwood, T.S.; Brunton, S.; Stylios, F.; Schmechtig, A.; Ettinger, U.; Smith, M.S.; Lovestone, S.; Crum, W.R.; Frangou, S.; et al. A comprehensive testing protocol for MRI neuroanatomical segmentation techniques: Evaluation of a novel lateral ventricle segmentation method. NeuroImage 2011, 58, 1051–1059. [Google Scholar] [CrossRef]
- Commowick, O.; Istace, A.; Kain, M.; Laurent, B.; Leray, F.; Simon, M.; Pop, S.C.; Girard, P.; Améli, R.; Ferré, J.C.; et al. Objective Evaluation of Multiple Sclerosis Lesion Segmentation using a Data Management and Processing Infrastructure. Sci. Rep. 2018, 8, 13650. [Google Scholar] [CrossRef]
- Cocosco, C.A.; Kollokian, V.; Kwan, R.K.-S.; Evans, A.C. BrainWeb: Online Interface to a 3D MRI Simulated Brain Database, Neuroimage. In Proceedings of the 3rd International Conference on Functional Mapping of the Human Brain, Copenhagen, Denmark, 19–23 May 1997; Volume 5, p. S425. [Google Scholar]
- Srinivasan, R. Noise: Radiology Reference Article. Radiopaedia. 11 April 2022. Available online: https://doi.org/10.53347/rid-12937 (accessed on 8 September 2023).
- Smith, S.M.; Jenkinson, M.; Woolrich, M.W.; Beckmann, C.F.; Behrens, T.E.; Johansen-Berg, H.; Bannister, P.R.; Luca, M.D.; Drobnjak, I.; Flitney, D.E.; et al. Advances in functional and structural MR image analysis and implementation as FSL. NeuroImage 2004, 23, S208–S219. [Google Scholar] [CrossRef]
- FreeSurfer, An Open-Source Software Suite for Processing Human Brain MRI. 2023. Available online: https://github.com/freesurfer/freesurfer (accessed on 8 September 2023).
- Li, H.; Wang, W.; Wang, M.; Li, L.; Vimlund, V. A review of deep learning methods for pixel-level crack detection. J. Traffic Transp. Eng. 2022, 9, 945–968. [Google Scholar] [CrossRef]
- Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual Losses for Real-Time Style Transfer and Super-Resolution. arXiv 2016, arXiv:1603.08155. [Google Scholar] [CrossRef]
- Wu, B.; Duan, H.; Liu, Z.; Sun, G. SRPGAN: Perceptual Generative Adversarial Network for Single Image Super Resolution. arXiv 2017, arXiv:1712.05927. [Google Scholar] [CrossRef]
- Lai, W.S.; Huang, J.B.; Ahuja, N.; Yang, M.H. Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5835–5843. [Google Scholar] [CrossRef]
- Anagun, Y.; Isik, S.; Seke, E. SRLibrary: Comparing different loss functions for super-resolution over various convolutional architectures. J. Vis. Commun. Image Represent. 2019, 61, 178–187. [Google Scholar] [CrossRef]
- Gatys, L.; Ecker, A.S.; Bethge, M. Texture Synthesis Using Convolutional Neural Networks. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2015; Volume 28. [Google Scholar]
- Gatys, L.A.; Ecker, A.S.; Bethge, M. A Neural Algorithm of Artistic Style. arXiv 2015, arXiv:1508.06576. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar] [CrossRef]
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef]
- Kokaram, A. Practical, Unified, Motion and Missing Data Treatment in Degraded Video. J. Math. Imaging Vis. 2004, 20, 163–177. [Google Scholar] [CrossRef]
- Getreuer, P. Rudin-Osher-Fatemi Total Variation Denoising using Split Bregman. Image Process. Line 2012, 2, 74–95. [Google Scholar] [CrossRef]
- Donoho, D.L.; Johnstone, I.M. Ideal spatial adaptation by wavelet shrinkage. Biometrika 1994, 81, 425–455. [Google Scholar] [CrossRef]
- Pratt, W.K. Digital Image Processing: PIKS Scientific Inside, 4th ed.; Wiley-Interscience: Hoboken, NJ, USA, 2007. [Google Scholar] [CrossRef]
- Zhang, K.; Li, Y.; Liang, J.; Cao, J.; Zhang, Y.; Tang, H.; Timofte, R.; Van Gool, L. Practical Blind Denoising via Swin-Conv-UNet and Data Synthesis. arXiv 2022, arXiv:2203.13278. [Google Scholar] [CrossRef]
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H. Restormer: Efficient Transformer for High-Resolution Image Restoration. arXiv 2021, arXiv:2111.09881. [Google Scholar] [CrossRef]
- Cai, Y.; Hu, X.; Wang, H.; Zhang, Y.; Pfister, H.; Wei, D. Learning to Generate Realistic Noisy Images via Pixel-level Noise-aware Adversarial Training. arXiv 2022, arXiv:2204.02844. [Google Scholar] [CrossRef]
- Chen, L.; Chu, X.; Zhang, X.; Sun, J. Simple Baselines for Image Restoration. arXiv 2022, arXiv:2204.04676. [Google Scholar] [CrossRef]
- Bradski, G. The OpenCV Library. Dr. Dobb’s J. Softw. Tools 2000. [Google Scholar]
- Van der Walt, S.; Schönberger, J.L.; Nunez-Iglesias, J.; Boulogne, F.; Warner, J.D.; Yager, N.; Gouillart, E.; Yu, T. scikit-image: Image processing in Python. PeerJ 2014, 2, e453. [Google Scholar] [CrossRef] [PubMed]
- Virtanen, P.; Gommers, R.; Oliphant, T.E.; Haberland, M.; Reddy, T.; Cournapeau, D.; Burovski, E.; Peterson, P.; Weckesser, W.; Bright, J.; et al. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nat. Methods 2020, 17, 261–272. [Google Scholar] [CrossRef] [PubMed]
- Hore, A.; Ziou, D. Image Quality Metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; IEEE: Piscataway, NJ, USA, 2010. [Google Scholar] [CrossRef]
- Kastryulin, S.; Zakirov, J.; Pezzotti, N.; Dylov, D.V. Image Quality Assessment for Magnetic Resonance Imaging. arXiv 2022, arXiv:2203.07809. [Google Scholar] [CrossRef]
- Zhang, L.; Shen, Y.; Li, H. VSI: A Visual Saliency-Induced Index for Perceptual Image Quality Assessment. IEEE Trans. Image Process. 2014, 23, 4270–4281. [Google Scholar] [CrossRef] [PubMed]
- Lin, H.; Hosu, V.; Saupe, D. KADID-10k: A Large-scale Artificially Distorted IQA Database. In Proceedings of the 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, 5–7 June 2019; pp. 1–3. [Google Scholar] [CrossRef]
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. arXiv 2018, arXiv:1801.03924. [Google Scholar] [CrossRef]
- Lusebrink, F.; Mattern, H.; Yakupov, R.; Acosta-Cabronero, J.; Ashtarayeh, M.; Oeltze-Jafra, S.; Speck, O. Comprehensive Ultrahigh Resolution Whole Brain In Vivo MRI Dataset as a Human Phantom. Sci. Data 2020, 8, 138. [Google Scholar] [CrossRef]
- Schreiber, S.; Bernal, J.; Arndt, P.; Schreiber, F.; Müller, P.; Morton, L.; Braun-Dullaeus, R.C.; Valdés-Hernández, M.D.C.; Duarte, R.; Wardlaw, J.M.; et al. Brain Vascular Health in ALS Is Mediated through Motor Cortex Microvascular Integrity. Cells 2023, 12, 957. [Google Scholar] [CrossRef] [PubMed]
- Betts, M.J.; Perosa, V.; Hämmerer, D.; Düzel, E. Healthy aging and Alzheimer’s disease. In Advances in Magnetic Resonance Technology and Applications; Elsevier: Amsterdam, The Netherlands, 2023; pp. 537–547. [Google Scholar] [CrossRef]
- Naji, N.; Wilman, A. Thin slab quantitative susceptibility mapping. Magn. Reson. Med. 2023. [Google Scholar] [CrossRef]
- Ladd, M.E.; Quick, H.H.; Speck, O.; Bock, M.; Doerfler, A.; Forsting, M.; Hennig, J.; Ittermann, B.; Möller, H.E.; Nagel, A.M.; et al. Germany’s journey toward 14 Tesla human magnetic resonance. Magn. Reson. Mater. Physics Biol. Med. 2023, 36, 191–210. [Google Scholar] [CrossRef] [PubMed]
- Mattern, H.; Lüsebrink, F.; Speck, O. High-resolution structural brain imaging. In Advances in Magnetic Resonance Technology and Applications; Elsevier: Amsterdam, The Netherlands, 2022; pp. 433–448. [Google Scholar] [CrossRef]
- Koenig, L.N.; Day, G.S.; Salter, A.; Keefe, S.; Marple, L.M.; Long, J.; LaMontagne, P.; Massoumzadeh, P.; Snider, B.J.; Kanthamneni, M.; et al. Select Atrophied Regions in Alzheimer disease (SARA): An improved volumetric model for identifying Alzheimer disease dementia. NeuroImage Clin. 2020, 26, 102248. [Google Scholar] [CrossRef]
- Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein GAN. arXiv 2017, arXiv:1701.07875. [Google Scholar] [CrossRef]
- Sahoo, D.K.; Das, A.; Mohanty, M.N.; Mishra, S. Brain tumor detection using inpainting and deep ensemble model. J. Inf. Optim. Sci. 2022, 43, 1925–1933. [Google Scholar] [CrossRef]
- Khosla, M.; Jamison, K.; Kuceyeski, A.; Sabuncu, M.R. Ensemble learning with 3D convolutional neural networks for functional connectome-based prediction. NeuroImage 2019, 199, 651–662. [Google Scholar] [CrossRef] [PubMed]
- Nguyen, D.; Nguyen, H.; Ong, H.; Le, H.; Ha, H.; Duc, N.T.; Ngo, H.T. Ensemble learning using traditional machine learning and deep neural network for diagnosis of Alzheimer’s disease. IBRO Neurosci. Rep. 2022, 13, 255–263. [Google Scholar] [CrossRef]
- Saber, M.; Boulmaiz, T.; Guermoui, M.; Abdrabo, K.I.; Kantoush, S.A.; Sumi, T.; Boutaghane, H.; Hori, T.; Binh, D.V.; Nguyen, B.Q.; et al. Enhancing flood risk assessment through integration of ensemble learning approaches and physical-based hydrological modeling. Geomat. Nat. Hazards Risk 2023, 14. [Google Scholar] [CrossRef]
- Yeganeh, A.; Pourpanah, F.; Shadman, A. An ANN-based ensemble model for change point estimation in control charts. Appl. Soft Comput. 2021, 110, 107604. [Google Scholar] [CrossRef]
Model | Modification | SSIM (%) ↑ | PSNR (dB) ↑ | VSI ↑ | LPIPS ↓ |
---|---|---|---|---|---|
Anisotropic diffusion | Kappa = 60, gamma = 0.0135 | 99.57 | 45.07 | 0.9992 | 0.0048 |
Bilateral filter | | 98.55 | 39.31 | 0.9943 | 0.0209 |
NAFNet | Baseline and width 32 | 97.85 | 36.55 | 0.9978 | 0.0305 |
Non-local means | | 96.25 | 38.44 | 0.9958 | 0.0487 |
Wavelet filter | Wavelet = “sym9” | 96.22 | 34.72 | 0.9984 | 0.0631 |
Restormer | Non-blind and | 96.19 | 35.34 | 0.9970 | 0.0349 |
SCUNet | | 96.18 | 35.35 | 0.9964 | 0.0348 |
SwinIR | | 96.12 | 35.36 | 0.9965 | 0.0376 |
Restormer | Blind and | 96.09 | 35.31 | 0.9966 | 0.0364 |
Gaussian filter | Std = 0.75 | 95.96 | 34.14 | 0.9979 | 0.0489 |
PNGAN | MIRNet | 95.81 | 35.23 | 0.9974 | 0.0587 |
Chambolle filter | Weight = 0.08 | 95.11 | 34.84 | 0.9969 | 0.0886 |
NAFNet | Baseline and width 64 | 94.87 | 34.38 | 0.9976 | 0.1555 |
PNGAN | RIDNet | 94.08 | 34.09 | 0.9971 | 0.0833 |
SCUNet | | 93.94 | 33.29 | 0.9949 | 0.0553 |
Restormer | Non-blind and | 93.85 | 33.24 | 0.9949 | 0.0580 |
SwinIR | | 93.79 | 33.28 | 0.9947 | 0.0636 |
Restormer | Blind and | 93.78 | 33.24 | 0.9947 | 0.0577 |
Median filter | Kernel size = 2 | 93.71 | 30.12 | 0.9947 | 0.0384 |
Bregman filter | Weight = 4.5 | 91.46 | 32.02 | 0.9949 | 0.0991 |
SCUNet | | 91.07 | 31.19 | 0.9903 | 0.0903 |
Restormer | Non-blind and | 90.76 | 31.09 | 0.9899 | 0.0995 |
Restormer | Blind and | 90.68 | 31.09 | 0.9898 | 0.1013 |
SwinIR | | 90.38 | 31.13 | 0.9904 | 0.1157 |
NAFNet | Width 32 | 17.83 | 21.38 | 0.9796 | 0.5353 |
NAFNet | Width 64 | 16.78 | 15.00 | 0.9349 | 0.5685 |
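The classical baselines in this table map onto standard SciPy [64] and scikit-image [63] calls with the parameters listed. The sketch below applies several of them; the iteration count of the Perona-Malik diffusion, the 0–255 intensity scaling implied by kappa = 60, and the input file name are illustrative assumptions, and the deep models are omitted.

```python
# Minimal sketch of the classical denoising baselines from the table, using the
# parameters listed there. The diffusion iteration count, the 0-255 intensity
# assumption for kappa, and the file name are illustrative assumptions.
import numpy as np
from scipy import ndimage
from skimage import io, img_as_float
from skimage.restoration import denoise_wavelet, denoise_tv_chambolle, denoise_tv_bregman

def anisotropic_diffusion(img, niter=10, kappa=60.0, gamma=0.0135):
    """Perona-Malik diffusion with exponential conductance (kappa is on the image's intensity scale)."""
    out = img.astype(np.float64).copy()
    for _ in range(niter):
        # Differences to the four neighbours (np.roll wraps borders; acceptable for a sketch).
        dn = np.roll(out, -1, axis=0) - out
        ds = np.roll(out, 1, axis=0) - out
        de = np.roll(out, -1, axis=1) - out
        dw = np.roll(out, 1, axis=1) - out
        # Edge-stopping weights: large gradients diffuse less, preserving anatomical boundaries.
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        out += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return out

noisy = img_as_float(io.imread("noisy_slice.png", as_gray=True))  # values in [0, 1]

results = {
    # kappa = 60 only makes sense on a 0-255 scale, so rescale around the diffusion step.
    "anisotropic_diffusion": anisotropic_diffusion(noisy * 255.0, kappa=60.0, gamma=0.0135) / 255.0,
    "gaussian":  ndimage.gaussian_filter(noisy, sigma=0.75),
    "median":    ndimage.median_filter(noisy, size=2),
    "wavelet":   denoise_wavelet(noisy, wavelet="sym9"),
    "chambolle": denoise_tv_chambolle(noisy, weight=0.08),
    "bregman":   denoise_tv_bregman(noisy, weight=4.5),
}
for name, img in results.items():
    print(f"{name}: shape={img.shape}, range=[{img.min():.3f}, {img.max():.3f}]")
```

Each output can then be scored against the clean reference slice with the same PSNR/SSIM routine shown earlier to reproduce comparisons of this kind.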
Reference | Description |
---|---|
[71] | Literature review on how high-resolution MRI can aid the detection of amyotrophic lateral sclerosis. The dataset was used to show that certain vascular markers in the brain become identifiable because high-resolution MRI resolves small details, which can be crucial for detecting diseases such as amyotrophic lateral sclerosis. |
[72] | Book chapter discussing the use of high-resolution MRI, in particular how small details in brain imaging support the assessment of neurodegenerative pathophysiology and vascular dysfunction. The dataset was mentioned as an example. |
[73] | The study used the dataset for quantitative susceptibility mapping (QSM) reconstruction from thin slices, with a T1w scan serving as a structural reference, and aimed to improve QSM reconstruction speed and reliability. |
[74] | Literature review conducted to analyze the current state of ultra-high-resolution MRI acquisition in Germany. The dataset was mentioned as one of the sources for high-resolution MRI. |
[75] | Book chapter that discusses state-of-the-art methods and datasets for ultra-high-resolution structural MRI acquisition. The dataset was mentioned as an example. |