Beyond Nyquist: A Comparative Analysis of 3D Deep Learning Models Enhancing MRI Resolution
Abstract
1. Introduction
1.1. Contributions
1.2. Background
1. Extract image features from Y.
2. Map the feature vector to a feature space.
3. Reconstruct X from the feature space.
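The three steps above follow the classic SRCNN formulation [3] extended to 3D. A minimal sketch in PyTorch (the framework used in this work) might look as follows; the channel widths and kernel sizes are illustrative assumptions, not the paper's actual architectures:

```python
import torch
import torch.nn as nn

class Srcnn3d(nn.Module):
    """Minimal 3D network following the three-step recipe:
    (1) extract features from the low-resolution input Y,
    (2) map them non-linearly to a feature space,
    (3) reconstruct the high-resolution estimate X."""
    def __init__(self):
        super().__init__()
        self.extract = nn.Conv3d(1, 32, kernel_size=9, padding=4)
        self.map = nn.Conv3d(32, 16, kernel_size=1)
        self.reconstruct = nn.Conv3d(16, 1, kernel_size=5, padding=2)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, y):
        f = self.relu(self.extract(y))   # step 1: feature extraction
        f = self.relu(self.map(f))       # step 2: non-linear mapping
        return self.reconstruct(f)       # step 3: reconstruction
```

Because the inputs in this work are pre-interpolated to the target size, the network maps a volume to a volume of the same shape.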
2. Methods
2.1. Dataset
Undersampling
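One common way to simulate such undersampling is to retain only the centre of k-space and discard the high-frequency periphery before transforming back to image space. The NumPy sketch below illustrates this idea; the paper's exact undersampling procedure may differ:

```python
import numpy as np

def undersample_kspace(vol, factor):
    """Simulate a lower-resolution acquisition: keep only the central
    1/factor portion of k-space along each axis, zero the rest, and
    inverse-transform. A sketch of one common way to emulate truncated
    sampling, not necessarily the paper's implementation."""
    k = np.fft.fftshift(np.fft.fftn(vol))
    mask = np.zeros_like(k)
    centre = tuple(
        slice(int(n / 2 - n / (2 * factor)), int(n / 2 + n / (2 * factor)))
        for n in vol.shape
    )
    mask[centre] = 1  # retain only the low-frequency centre
    return np.abs(np.fft.ifftn(np.fft.ifftshift(k * mask)))
```

With `factor = 1` the volume is returned unchanged (up to FFT round-off); larger factors blur the volume progressively, mimicking higher acceleration.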
2.2. Network Models
2.2.1. Residual in Residual Dense Block (RRDB)
2.2.2. Structure Preserving Super Resolution (SPSR)
2.2.3. U-Net
2.2.4. U-Net MSS
2.2.5. ShuffleUNet
Contracting (downsampling) path:
1. Double convolution
2. Convolutional decomposition
3. Pixel unshuffle

Expanding (upsampling) path:
1. Pixel shuffle
2. Convolutional decomposition
3. Double convolution
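The pixel unshuffle/shuffle operations above trade spatial resolution for channels and back again. Since `torch.nn.PixelShuffle` is 2D-only, a 3D version is typically written as a reshape/transpose; the NumPy stand-in below illustrates the rearrangement (it is a sketch of the operation, not the paper's implementation):

```python
import numpy as np

def pixel_unshuffle3d(x, r):
    # (C, D, H, W) -> (C*r^3, D/r, H/r, W/r): fold each r x r x r spatial
    # block into the channel dimension (the downsampling step).
    c, d, h, w = x.shape
    x = x.reshape(c, d // r, r, h // r, r, w // r, r)
    x = x.transpose(0, 2, 4, 6, 1, 3, 5)
    return x.reshape(c * r**3, d // r, h // r, w // r)

def pixel_shuffle3d(x, r):
    # Inverse: (C*r^3, D, H, W) -> (C, D*r, H*r, W*r), the upsampling step.
    c, d, h, w = x.shape
    c_out = c // r**3
    x = x.reshape(c_out, r, r, r, d, h, w)
    x = x.transpose(0, 4, 1, 5, 2, 6, 3)
    return x.reshape(c_out, d * r, h * r, w * r)
```

The two operations are exact inverses, so no information is lost during the resolution/channel exchange, unlike pooling.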
2.3. Implementation, Training, and Evaluation
2.3.1. Dataset Split
1. Training: 70 percent of the entire dataset.
2. Testing and validation: 30 percent of the entire dataset.
   (a) Testing: 60 percent of the remaining 30 percent.
   (b) Validation: 40 percent of the remaining 30 percent.
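Assuming integer truncation at each stage (the rounding scheme itself is an assumption), these percentages reproduce the per-dataset subject counts listed in Appendix A, e.g., 406/105/70 for the 581 IXI-T1 subjects:

```python
def split_counts(n_subjects):
    """70% training; the remaining 30% split 60/40 into test/validation.
    Integer (floor) division is assumed for the rounding."""
    n_train = n_subjects * 7 // 10
    remaining = n_subjects - n_train
    n_test = remaining * 6 // 10
    n_val = remaining - n_test
    return n_train, n_test, n_val
```

For 581 subjects this yields (406, 105, 70) and for 577 subjects (403, 104, 70), matching the IXI-T1 and IXI-T2/PD rows of the dataset table.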
2.3.2. 3D Image Patching and Merging
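The paper uses TorchIO's patch-based samplers [18] for this. As a conceptual sketch, overlapping 3D patches can be extracted with a sliding window and merged back by averaging the overlaps; the patch and stride sizes below are illustrative and must tile the volume exactly:

```python
import numpy as np
from itertools import product

def patch_and_merge(vol, patch=32, stride=16):
    """Split a volume into overlapping 3D patches and merge them back by
    averaging the overlaps. In a real pipeline each patch would be passed
    through the super-resolution model before merging."""
    acc = np.zeros_like(vol, dtype=float)
    cnt = np.zeros_like(vol, dtype=float)
    starts = [list(range(0, s - patch + 1, stride)) for s in vol.shape]
    for z, y, x in product(*starts):
        p = vol[z:z+patch, y:y+patch, x:x+patch]  # model would process p here
        acc[z:z+patch, y:y+patch, x:x+patch] += p
        cnt[z:z+patch, y:y+patch, x:x+patch] += 1
    assert (cnt > 0).all(), "patch/stride must tile the volume"
    return acc / cnt
```

Averaging the overlap regions suppresses seam artefacts at patch borders, which is why overlapping (rather than abutting) patches are typically used at inference time.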
2.3.3. Training
2.3.4. Evaluation
2.3.5. Loss Functions
Structural Similarity Index
Mean Absolute Error (L1)
Perceptual Loss
Mixed Gradient Loss [25]
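As a rough sketch of the mixed gradient loss idea from [25], a pixel-wise error term can be combined with a gradient-magnitude difference term that penalises blurred edges. The weighting `lam` and the exact per-term norms below are assumptions, not the paper's precise formulation:

```python
import numpy as np

def mixed_gradient_loss(pred, target, lam=0.5):
    """Pixel-wise MSE plus a gradient-magnitude difference term,
    in the spirit of the mixed gradient loss of [25]."""
    mse = np.mean((pred - target) ** 2)

    def grad_mag(x):
        g = np.gradient(x)                      # finite differences per axis
        return np.sqrt(sum(gi ** 2 for gi in g))

    gdl = np.mean((grad_mag(pred) - grad_mag(target)) ** 2)
    return mse + lam * gdl
```

The gradient term is what distinguishes this loss from plain L1/L2: two reconstructions with the same pixel-wise error score differently if one has sharper edges.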
2.4. Uncertainty Mapping
- Black-box case: where it is not possible to access all the model parameters and the model's internal structure. For these cases, the authors proposed infer-transformation-based uncertainty estimation, in which the images are given tolerable transformations across different dimensions, i.e., random flips and rotations that do not involve major structural changes in the images. Since this research uses anisotropic images, where the voxel size varies across dimensions, flipping along one dimension introduces heavy perturbations across the other dimensions, which alters the structure of the MRIs.
- Grey-box case: where access to the model structure is available. Here, the authors propose manipulating an internal embedding/representation, e.g., by introducing dropout and noise layers at inference time. This is feasible in this research, since there is the freedom to alter both the model structure and the model weights.
Uncertainty Mapping Pipeline
1. Iteratively generate model predictions from the low-resolution images using the trained model weights, but with different dropout rates and Gaussian noise in the intermediate layers.
2. Compile all the generated images.
3. Compute the uncertainty map as the pixel-wise variance across all these generated images.
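The three steps above can be sketched as follows; `noisy_predict` is a dummy stand-in for one stochastic forward pass of the trained network with inference-time dropout and Gaussian noise enabled, so the sketch stays runnable:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_predict(lr_vol, dropout_rate):
    """Placeholder for a stochastic forward pass: a real pipeline would run
    the trained network with dropout and Gaussian noise in its intermediate
    layers; here a dummy perturbation of the input stands in for that."""
    noise = rng.normal(0.0, 0.01, size=lr_vol.shape)
    mask = rng.random(lr_vol.shape) > dropout_rate
    return lr_vol * mask + noise

def uncertainty_map(lr_vol, n_iter=10, dropout_rate=0.1):
    # Step 1-2: collect several stochastic predictions.
    preds = np.stack([noisy_predict(lr_vol, dropout_rate) for _ in range(n_iter)])
    # Step 3: pixel-wise variance across the predictions.
    return preds.var(axis=0)
```

High-variance voxels mark regions where the stochastic passes disagree, i.e., where the super-resolved output is least trustworthy.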
3. Experiments and Evaluation
3.1. Experiments
1. Initial experiments
2. Main experiments
3.1.1. Initial Experiments
1. Comparison of results for different loss functions.
2. Comparison of different 3D CNN models across acceleration factors on the IXI-T1 dataset.
3. Comparison of models on trainable parameters and inference time.
4. Visualisation of individual model results for all acceleration factors.
Comparison of Different 3D CNN Models across Acceleration Factors on the IXI-T1 Dataset
Comparison of Models on Trainable Parameters and Inference Time
1. RRDB vs. UNetMSS for the acceleration factor ∼27 resulted in a p value of 8.63, which indicates that the difference is statistically significant.
2. UNet vs. UNetMSS for the acceleration factor ∼27 resulted in a p value of 0.0193, indicating a statistically significant difference between them.
3. The p value between UNet and UNetMSS increases with higher acceleration factors. (All tables of p values can be found in Appendix A.2.)
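The section above does not restate which statistical test produced these p-values. As an illustration only, a paired Wilcoxon signed-rank test over per-subject SSIM scores (synthetic placeholder data below) could be computed with SciPy:

```python
import numpy as np
from scipy import stats

# Synthetic per-subject SSIM scores for two hypothetical models; in the
# actual study these would be the evaluation scores on the test set.
rng = np.random.default_rng(42)
ssim_a = np.clip(rng.normal(0.93, 0.01, size=100), 0.0, 1.0)
ssim_b = np.clip(rng.normal(0.90, 0.01, size=100), 0.0, 1.0)

# Paired non-parametric test: are the two models' scores drawn from
# distributions with the same median?
stat, p = stats.wilcoxon(ssim_a, ssim_b)
significant = p < 0.05
```

A paired test is appropriate here because both models are evaluated on the same subjects, so the per-subject differences, rather than the pooled score distributions, carry the comparison.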
3.1.2. Main Experiments and Model Comparison
1. Cross contrast experiments
2. Uncertainty mapping
Cross Contrast Experiments
Uncertainty Mapping
3.2. Discussion
1. UNet and UNetMSS perform significantly better than RRDB across all acceleration factors and contrasts, except T2.
2. UNet and UNetMSS are not significantly different across all acceleration factors.
4. Conclusions and Future Work
4.1. Conclusions
4.2. Future Work
- This research used discrete acceleration factors to downsample the images and then pre-interpolated them before training the models. This pre-interpolation could be avoided by performing the upscaling directly within the model training pipeline, so that the model can dynamically super-resolve any acceleration factor.
- Diffusion models [29] are considered the current state of the art, where a model is trained to add noise to a low-resolution image and then performs super-resolution by denoising it. This method has recently been shown to provide good results even for images of very low resolution.
- Test block-wise uncertainty mapping by applying a local segmentation algorithm, since this gave a good loss-variance correlation in the work of [27].
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
Appendix A.1. Results Comparison of Different Loss Functions
Loss Function | SSIM | PSNR | NRMSE
---|---|---|---
MGL | 0.9664 ± 0.0084 | 33.5291 ± 2.0137 | 0.0216 ± 0.0048
SSIM | 0.9666 ± 0.0085 | 33.5574 ± 2.0781 | 0.0216 ± 0.0223
L1 | 0.9297 ± 0.0176 | 31.5964 ± 1.9482 | 0.0270 ± 0.0058
Perceptual L1 | 0.9250 ± 0.0240 | 31.8038 ± 3.1137 | 0.0273 ± 0.0088
Appendix A.2. Main Experiments
Acceleration Factor | p-Value
---|---
2³ | 0.0193
2.5³ | 0.0552
3³ | 0.0735
3.5³ | 0.1464
4³ | 0.195
Contrast | AF (³) | Model | NRMSE Mean | NRMSE Std | SSIM Mean | SSIM Std
---|---|---|---|---|---|---
IXI-T1 | 2 | RRDB | 0.0298 | 0.0057 | 0.9422 | 0.0114
IXI-T1 | 2 | SINC_INPUT | 0.0273 | 0.0088 | 0.9250 | 0.0239
IXI-T1 | 2 | UNet | 0.0293 | 0.0080 | 0.9476 | 0.0113
IXI-T1 | 2 | UNetMSS | 0.0281 | 0.0053 | 0.9438 | 0.0142
IXI-T1 | 2.5 | RRDB | 0.0366 | 0.0086 | 0.9164 | 0.0151
IXI-T1 | 2.5 | SINC_INPUT | 0.0351 | 0.0091 | 0.8947 | 0.0253
IXI-T1 | 2.5 | UNet | 0.0328 | 0.0089 | 0.9308 | 0.0141
IXI-T1 | 2.5 | UNetMSS | 0.0326 | 0.0067 | 0.9228 | 0.0172
IXI-T1 | 3 | RRDB | 0.0430 | 0.0122 | 0.8847 | 0.0249
IXI-T1 | 3 | SINC_INPUT | 0.0454 | 0.0126 | 0.8361 | 0.0409
IXI-T1 | 3 | UNet | 0.0379 | 0.0107 | 0.9049 | 0.0256
IXI-T1 | 3 | UNetMSS | 0.0357 | 0.0089 | 0.9041 | 0.0212
IXI-T1 | 3.5 | RRDB | 0.0487 | 0.0125 | 0.8612 | 0.0268
IXI-T1 | 3.5 | SINC_INPUT | 0.0495 | 0.0125 | 0.8060 | 0.0407
IXI-T1 | 3.5 | UNet | 0.0420 | 0.0105 | 0.8902 | 0.0235
IXI-T1 | 3.5 | UNetMSS | 0.0387 | 0.0096 | 0.8874 | 0.0251
IXI-T1 | 4 | RRDB | 0.0538 | 0.0127 | 0.8307 | 0.0336
IXI-T1 | 4 | SINC_INPUT | 0.0524 | 0.0143 | 0.7676 | 0.0462
IXI-T1 | 4 | UNet | 0.0458 | 0.0101 | 0.8638 | 0.0307
IXI-T1 | 4 | UNetMSS | 0.0421 | 0.0104 | 0.8643 | 0.0298
IXI-T2 | 2 | RRDB | 0.0292 | 0.0098 | 0.9441 | 0.0230
IXI-T2 | 2 | SINC_INPUT | 0.0263 | 0.0064 | 0.9339 | 0.0144
IXI-T2 | 2 | UNet | 0.0309 | 0.0127 | 0.9439 | 0.0409
IXI-T2 | 2 | UNetMSS | 0.0260 | 0.0072 | 0.9496 | 0.0179
IXI-T2 | 2.5 | RRDB | 0.0360 | 0.0074 | 0.9223 | 0.0149
IXI-T2 | 2.5 | SINC_INPUT | 0.0342 | 0.0073 | 0.9029 | 0.0168
IXI-T2 | 2.5 | UNet | 0.0339 | 0.0080 | 0.9334 | 0.0165
IXI-T2 | 2.5 | UNetMSS | 0.0321 | 0.0066 | 0.9298 | 0.0166
IXI-T2 | 3 | RRDB | 0.0461 | 0.0095 | 0.8852 | 0.0221
IXI-T2 | 3 | SINC_INPUT | 0.0490 | 0.0110 | 0.8373 | 0.0238
IXI-T2 | 3 | UNet | 0.0415 | 0.0085 | 0.9038 | 0.0224
IXI-T2 | 3 | UNetMSS | 0.0381 | 0.0080 | 0.9061 | 0.0187
IXI-T2 | 3.5 | RRDB | 0.0506 | 0.0106 | 0.8588 | 0.0231
IXI-T2 | 3.5 | SINC_INPUT | 0.0541 | 0.0129 | 0.8073 | 0.0278
IXI-T2 | 3.5 | UNet | 0.0451 | 0.0102 | 0.8880 | 0.0218
IXI-T2 | 3.5 | UNetMSS | 0.0438 | 0.0093 | 0.8811 | 0.0215
IXI-T2 | 4 | RRDB | 0.0572 | 0.0166 | 0.8289 | 0.0654
IXI-T2 | 4 | SINC_INPUT | 0.0555 | 0.0127 | 0.7873 | 0.0279
IXI-T2 | 4 | UNet | 0.0506 | 0.0154 | 0.8569 | 0.0648
IXI-T2 | 4 | UNetMSS | 0.0477 | 0.0097 | 0.8627 | 0.0289
PD | 2 | RRDB | 0.0278 | 0.0101 | 0.9546 | 0.0119
PD | 2 | SINC_INPUT | 0.0229 | 0.0062 | 0.9423 | 0.0137
PD | 2 | UNet | 0.0308 | 0.0125 | 0.9561 | 0.0132
PD | 2 | UNetMSS | 0.0242 | 0.0074 | 0.9579 | 0.0111
PD | 2.5 | RRDB | 0.0364 | 0.0099 | 0.9322 | 0.0142
PD | 2.5 | SINC_INPUT | 0.0313 | 0.0075 | 0.9111 | 0.0176
PD | 2.5 | UNet | 0.0345 | 0.0092 | 0.9412 | 0.0127
PD | 2.5 | UNetMSS | 0.0276 | 0.0065 | 0.9419 | 0.0130
PD | 3 | RRDB | 0.0460 | 0.0128 | 0.8976 | 0.0247
PD | 3 | SINC_INPUT | 0.0474 | 0.0126 | 0.8502 | 0.0242
PD | 3 | UNet | 0.0401 | 0.0106 | 0.9128 | 0.0238
PD | 3 | UNetMSS | 0.0347 | 0.0089 | 0.9179 | 0.0177
PD | 3.5 | RRDB | 0.0579 | 0.0162 | 0.8741 | 0.0202
PD | 3.5 | SINC_INPUT | 0.0528 | 0.0153 | 0.8196 | 0.0282
PD | 3.5 | UNet | 0.0491 | 0.0143 | 0.8972 | 0.0189
PD | 3.5 | UNetMSS | 0.0427 | 0.0110 | 0.8983 | 0.0171
PD | 4 | RRDB | 0.0598 | 0.0188 | 0.8447 | 0.0625
PD | 4 | SINC_INPUT | 0.0513 | 0.0140 | 0.8055 | 0.0299
PD | 4 | UNet | 0.0491 | 0.0145 | 0.8705 | 0.0542
PD | 4 | UNetMSS | 0.0433 | 0.0101 | 0.8844 | 0.0206
Contrast | AF | Comparison | p Value
---|---|---|---
IXI-T1 | 8 | RRDB vs. UNetMSS | 0.1172
IXI-T1 | 8 | RRDB vs. UNet | 0
IXI-T1 | 8 | RRDB vs. SINC_INPUT | 0
IXI-T1 | 8 | UNetMSS vs. UNet | 0.0002
IXI-T1 | 8 | UNetMSS vs. SINC_INPUT | 0
IXI-T1 | 27 | RRDB vs. UNetMSS | 0
IXI-T1 | 27 | RRDB vs. UNet | 0
IXI-T1 | 27 | RRDB vs. SINC_INPUT | 0
IXI-T1 | 27 | UNetMSS vs. UNet | 0.6798
IXI-T1 | 27 | UNetMSS vs. SINC_INPUT | 0
IXI-T1 | 64 | RRDB vs. UNetMSS | 0
IXI-T1 | 64 | RRDB vs. UNet | 0
IXI-T1 | 64 | RRDB vs. SINC_INPUT | 0
IXI-T1 | 64 | UNetMSS vs. UNet | 0.8103
IXI-T1 | 64 | UNetMSS vs. SINC_INPUT | 0
IXI-T2 | 2 | RRDB vs. UNetMSS | 0.0009
IXI-T2 | 2 | RRDB vs. UNet | 0.9546
IXI-T2 | 2 | RRDB vs. SINC_INPUT | 0
IXI-T2 | 2 | UNetMSS vs. UNet | 0.0253
IXI-T2 | 2 | UNetMSS vs. SINC_INPUT | 0
IXI-T2 | 3 | RRDB vs. UNetMSS | 0
IXI-T2 | 3 | RRDB vs. UNet | 0
IXI-T2 | 3 | RRDB vs. SINC_INPUT | 0
IXI-T2 | 3 | UNetMSS vs. UNet | 0.1721
IXI-T2 | 3 | UNetMSS vs. SINC_INPUT | 0
IXI-T2 | 4 | RRDB vs. UNetMSS | 0
IXI-T2 | 4 | RRDB vs. UNet | 0
IXI-T2 | 4 | RRDB vs. SINC_INPUT | 0
IXI-T2 | 4 | UNetMSS vs. UNet | 0.1562
IXI-T2 | 4 | UNetMSS vs. SINC_INPUT | 0
PD | 2 | RRDB vs. UNetMSS | 0.0005
PD | 2 | RRDB vs. UNet | 0.1346
PD | 2 | RRDB vs. SINC_INPUT | 0
PD | 2 | UNetMSS vs. UNet | 0.0747
PD | 2 | UNetMSS vs. SINC_INPUT | 0
PD | 3 | RRDB vs. UNetMSS | 0
PD | 3 | RRDB vs. UNet | 0
PD | 3 | RRDB vs. SINC_INPUT | 0
PD | 3 | UNetMSS vs. UNet | 0.0027
PD | 3 | UNetMSS vs. SINC_INPUT | 0
PD | 4 | RRDB vs. UNetMSS | 0
PD | 4 | RRDB vs. UNet | 0
PD | 4 | RRDB vs. SINC_INPUT | 0
PD | 4 | UNetMSS vs. UNet | 0
PD | 4 | UNetMSS vs. SINC_INPUT | 0
References
1. Shi, F.; Cheng, J.; Wang, L.; Yap, P.T.; Shen, D. LRTV: MR Image Super-Resolution with Low-Rank and Total Variation Regularizations. IEEE Trans. Med. Imaging 2015, 34, 2459–2466.
2. Glasner, D.; Bagon, S.; Irani, M. Super-Resolution from a Single Image. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 27 September–4 October 2009; pp. 349–356.
3. Dong, C.; Loy, C.C.; He, K.; Tang, X. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 295–307.
4. Dong, C.; Loy, C.C.; Tang, X. Accelerating the super-resolution convolutional neural network. In Proceedings of Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Part II; Springer: Berlin/Heidelberg, Germany, 2016; pp. 391–407.
5. Chatterjee, S.; Breitkopf, M.; Sarasaen, C.; Yassin, H.; Rose, G.; Nürnberger, A.; Speck, O. ReconResNet: Regularised residual learning for MR image reconstruction of undersampled Cartesian and radial data. Comput. Biol. Med. 2022, 143, 105321.
6. Ernst, P.; Chatterjee, S.; Rose, G.; Speck, O.; Nürnberger, A. Sinogram upsampling using Primal–Dual UNet for undersampled CT and radial MRI reconstruction. Neural Netw. 2023, 166, 704–721.
7. Sarasaen, C.; Chatterjee, S.; Breitkopf, M.; Rose, G.; Nürnberger, A.; Speck, O. Fine-tuning deep learning model parameters for improved super-resolution of dynamic MRI with prior-knowledge. Artif. Intell. Med. 2021, 121, 102196.
8. Chatterjee, S.; Sciarra, A.; Dünnwald, M.; Mushunuri, R.V.; Podishetti, R.; Rao, R.N.; Gopinath, G.D.; Oeltze-Jafra, S.; Speck, O.; Nürnberger, A. ShuffleUNet: Super resolution of diffusion-weighted MRIs using deep learning. In Proceedings of the 2021 29th European Signal Processing Conference (EUSIPCO), Dublin, Ireland, 23–27 August 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 940–944.
9. Chatterjee, S.; Sarasaen, C.; Rose, G.; Nürnberger, A.; Speck, O. DDoS-UNet: Incorporating temporal information using dynamic dual-channel UNet for enhancing super-resolution of dynamic MRI. IEEE Access 2024, 12, 99122–99136.
10. Wang, Y.; Teng, Q.; He, X.; Feng, J.; Zhang, T. CT-image of rock samples super resolution using 3D convolutional neural network. Comput. Geosci. 2019, 133, 104314.
11. Ma, C.; Rao, Y.; Cheng, Y.; Chen, C.; Lu, J.; Zhou, J. Structure-Preserving Super Resolution with Gradient Guidance. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020.
12. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Part III; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
13. Zhao, W.; Jiang, D.; Queralta, J.P.; Westerlund, T. MSS U-Net: 3D segmentation of kidneys and tumors from CT images with a multi-scale supervised U-Net. Inform. Med. Unlocked 2020, 19, 100357.
14. Chatterjee, S.; Prabhu, K.; Pattadkal, M.; Bortsova, G.; Sarasaen, C.; Dubost, F.; Mattern, H.; de Bruijne, M.; Speck, O.; Nürnberger, A. DS6, deformation-aware semi-supervised learning: Application to small vessel segmentation with noisy training data. J. Imaging 2022, 8, 259.
15. McCarthy, P.; Cottaar, M.; Webster, M.; Fitzgibbon, S.; Craig, M. fslpy, 2021.
16. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269.
17. Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Change Loy, C. ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018.
18. Pérez-García, F.; Sparks, R.; Ourselin, S. TorchIO: A Python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning. Comput. Methods Programs Biomed. 2021, 208, 106236.
19. Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; Lerer, A. Automatic differentiation in PyTorch. In Proceedings of the NIPS-W, Long Beach, CA, USA, 4–9 December 2017.
20. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2017, arXiv:1412.6980.
21. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
22. Zhao, H.; Gallo, O.; Frosio, I.; Kautz, J. Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 2016, 3, 47–57.
23. Willmott, C.J.; Matsuura, K. Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Clim. Res. 2005, 30, 79–82.
24. Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 694–711.
25. Lu, Z.; Chen, Y. Single image super-resolution based on a modified U-net with mixed gradient loss. Signal Image Video Process. 2022, 15, 1143–1151.
26. Gal, Y.; Ghahramani, Z. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the International Conference on Machine Learning, PMLR, New York, NY, USA, 19–24 June 2016; pp. 1050–1059.
27. Mi, L.; Wang, H.; Tian, Y.; Shavit, N. Training-Free Uncertainty Estimation for Neural Networks. arXiv 2019, arXiv:1910.04858.
28. Chatterjee, S.; Sciarra, A.; Dünnwald, M.; Talagini Ashoka, A.B.; Oeltze-Jafra, S.; Speck, O.; Nürnberger, A. Uncertainty quantification for ground-truth free evaluation of deep learning reconstructions. In Proceedings of the Joint Annual Meeting ISMRM-ESMRMB, London, UK, 7–12 May 2022; p. 5631.
29. Saharia, C.; Ho, J.; Chan, W.; Salimans, T.; Fleet, D.J.; Norouzi, M. Image super-resolution via iterative refinement. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 4713–4726.
30. Kwon, H. MedicalGuard: U-Net Model Robust against Adversarially Perturbed Images. Secur. Commun. Netw. 2021, 2021, 5595026.
31. Kwon, H.; Jeong, J. AdvU-Net: Generating Adversarial Example Based on Medical Image and Targeting U-Net Model. J. Sens. 2022, 2022, 4390413.
Dataset | Training | Validation | Test
---|---|---|---
IXI-T1 | 406 | 70 | 105
IXI-T2 | 403 | 70 | 104
PD | 403 | 70 | 104
SSIM (mean ± std) across acceleration factors:

Method Type | Method Name | 2³ | 2.5³ | 3³ | 3.5³ | 4³
---|---|---|---|---|---|---
Interpolation Methods (non-DL) | Bicubic | 0.90 ± 0.0223 | 0.87 ± 0.032 | 0.81 ± 0.0412 | 0.78 ± 0.0414 | 0.73 ± 0.0477
Interpolation Methods (non-DL) | NN | 0.89 ± 0.0245 | 0.83 ± 0.0371 | 0.77 ± 0.0461 | 0.74 ± 0.0448 | 0.69 ± 0.0496
Interpolation Methods (non-DL) | Sinc | 0.93 ± 0.0239 | 0.89 ± 0.0252 | 0.83 ± 0.0409 | 0.80 ± 0.0407 | 0.77 ± 0.0461
Deep Learning Models | RRDB | 0.95 ± 0.0084 | 0.93 ± 0.0141 | 0.91 ± 0.0206 | 0.88 ± 0.0255 | 0.86 ± 0.0295
Deep Learning Models | SPSR | 0.91 ± 0.0118 | 0.91 ± 0.0124 | 0.87 ± 0.0139 | 0.84 ± 0.0208 | 0.81 ± 0.0255
Deep Learning Models | UNet | 0.97 ± 0.0087 | 0.95 ± 0.0121 | 0.93 ± 0.0179 | 0.92 ± 0.0202 | 0.90 ± 0.0251
Deep Learning Models | UNetMSS | 0.96 ± 0.0111 | 0.94 ± 0.0138 | 0.92 ± 0.0206 | 0.90 ± 0.0216 | 0.88 ± 0.0269
Deep Learning Models | ShuffleUNet | 0.95 ± 0.0112 | 0.94 ± 0.0140 | 0.90 ± 0.0479 | 0.88 ± 0.0246 | 0.88 ± 0.0412
NRMSE (mean ± std) across acceleration factors:

Method Type | Method Name | 2³ | 2.5³ | 3³ | 3.5³ | 4³
---|---|---|---|---|---|---
Interpolation Methods (non-DL) | Bicubic | 0.0378 ± 0.0095 | 0.0437 ± 0.0121 | 0.0544 ± 0.0149 | 0.0577 ± 0.0148 | 0.0676 ± 0.0153
Interpolation Methods (non-DL) | NN | 0.0418 ± 0.0109 | 0.0522 ± 0.0139 | 0.0643 ± 0.017 | 0.0665 ± 0.0165 | 0.0763 ± 0.0174
Interpolation Methods (non-DL) | Sinc | 0.0273 ± 0.0086 | 0.0350 ± 0.009 | 0.0454 ± 0.0125 | 0.0495 ± 0.0124 | 0.0523 ± 0.0142
Deep Learning Models | RRDB | 0.026 ± 0.005 | 0.0302 ± 0.0004 | 0.0352 ± 0.0086 | 0.0394 ± 0.0093 | 0.0429 ± 0.011
Deep Learning Models | SPSR | 0.039 ± 0.0041 | 0.038 ± 0.0073 | 0.045 ± 0.0095 | 0.051 ± 0.0124 | 0.054 ± 0.0136
Deep Learning Models | UNet | 0.021 ± 0.0050 | 0.025 ± 0.0063 | 0.03 ± 0.0082 | 0.035 ± 0.0084 | 0.036 ± 0.0089
Deep Learning Models | UNetMSS | 0.024 ± 0.0052 | 0.029 ± 0.0065 | 0.034 ± 0.0086 | 0.038 ± 0.0083 | 0.04 ± 0.0087
Deep Learning Models | ShuffleUNet | 0.026 ± 0.0072 | 0.032 ± 0.0093 | 0.036 ± 0.0195 | 0.043 ± 0.0115 | 0.046 ± 0.0114
Model | Trainable Parameters | Inference Time (s)
---|---|---
RRDB | 246,865 | 8.40
SPSR | 493,754 | 18.86
UNet/UNetMSS | 5,418,563 | 8.79
ShuffleUNet | 106,957,377 | 44.10
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Chatterjee, S.; Sciarra, A.; Dünnwald, M.; Ashoka, A.B.T.; Vasudeva, M.G.C.; Saravanan, S.; Sambandham, V.T.; Tummala, P.; Oeltze-Jafra, S.; Speck, O.; et al. Beyond Nyquist: A Comparative Analysis of 3D Deep Learning Models Enhancing MRI Resolution. J. Imaging 2024, 10, 207. https://doi.org/10.3390/jimaging10090207