A Comparative Study of Automated Deep Learning Segmentation Models for Prostate MRI
Simple Summary
Abstract
1. Introduction
2. Materials and Methods
2.1. Data
2.2. Prostate Detection
2.3. T2W Pre-Processing and Augmentation
2.4. Segmentation Models
2.5. Training and Evaluation
2.5.1. Detection
2.5.2. Segmentation
2.6. Loss Function
3. Results
3.1. Object Detection
3.2. Gland Segmentation
3.3. Zone Segmentation
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
Gland: p-values for pairwise model comparisons. Rows correspond to models trained on cropped images and columns to models trained on full images (the "Cropped" and "Full" axis labels of the original matrix); each diagonal cell compares the cropped and full variants of the same model.

| Gland | unet | unet++ | d2unet | d2aunet | runet | aunet | daunet | r2unet | r2aunet | segresnet | highresnet | vnet | nnunet |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| unet | 0.6015 | 0.3472 | 0.9168 | 0.7540 | 0.9168 | 0.7540 | 0.6015 | 0.7540 | 0.9168 | 0.1745 | 0.1745 | 0.2506 | 0.009 |
| unet++ | 0.4647 | 0.9168 | 0.4647 | 0.3472 | 0.9168 | 0.7540 | 0.7540 | 0.9168 | 0.9168 | 0.0758 | 0.1172 | 0.1172 | 0.009 |
| d2unet | 0.1172 | 0.0283 | 0.3472 | 0.7540 | 0.9168 | 0.7540 | 0.4647 | 0.9168 | 0.6015 | 0.0758 | 0.2506 | 0.1172 | 0.009 |
| d2aunet | 0.1172 | 0.0283 | 0.9168 | 0.6015 | 0.6015 | 0.2506 | 0.4647 | 0.4647 | 0.6015 | 0.0472 | 0.0758 | 0.0472 | 0.009 |
| runet | 0.7540 | 0.6015 | 0.0758 | 0.0758 | 0.9168 | 0.7540 | 0.7540 | 0.7540 | 0.7540 | 0.0472 | 0.1172 | 0.0472 | 0.009 |
| aunet | 0.6015 | 0.6015 | 0.6015 | 0.2506 | 0.6015 | 0.3472 | 0.9168 | 0.7540 | 0.7540 | 0.3472 | 0.3472 | 0.3472 | 0.009 |
| daunet | 0.6015 | 0.2506 | 0.6015 | 0.4647 | 0.4647 | 0.6015 | 0.2506 | 0.7540 | 0.6015 | 0.3472 | 0.4647 | 0.4647 | 0.009 |
| r2unet | 0.4647 | 0.6015 | 0.7540 | 0.3472 | 0.4647 | 0.7540 | 0.7540 | 0.2506 | 0.9168 | 0.0472 | 0.1745 | 0.0758 | 0.009 |
| r2aunet | 0.2506 | 0.0283 | 0.6015 | 0.4647 | 0.1172 | 0.6015 | 0.7540 | 0.9168 | 0.4647 | 0.1745 | 0.2506 | 0.1745 | 0.009 |
| segresnet | 0.2506 | 0.1172 | 0.0163 | 0.0163 | 0.2506 | 0.1172 | 0.0283 | 0.0283 | 0.0163 | 0.1172 | 0.9168 | 0.7540 | 0.009 |
| highresnet | 0.1172 | 0.0758 | 0.0163 | 0.0163 | 0.1745 | 0.1172 | 0.0283 | 0.1172 | 0.0163 | 0.9168 | 0.4647 | 0.7540 | 0.009 |
| vnet | 0.3472 | 0.3472 | 0.0472 | 0.0472 | 0.4647 | 0.4647 | 0.1172 | 0.2506 | 0.0758 | 0.3472 | 0.4647 | 0.0472 | 0.009 |
| nnunet | 0.1745 | 0.0283 | 0.6015 | 0.6015 | 0.0758 | 0.4647 | 0.6015 | 0.9168 | 0.7540 | 0.0163 | 0.0163 | 0.0472 | 0.009 |
TZ: p-values for pairwise model comparisons. Rows: models trained on cropped images; columns: models trained on full images; diagonal: cropped vs. full variant of the same model.

| TZ | unet | unet++ | d2unet | d2aunet | runet | aunet | daunet | r2unet | r2aunet | segresnet | highresnet | vnet | nnunet |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| unet | 0.009 | 0.7540 | 0.7540 | 0.4647 | 0.4647 | 0.2506 | 0.7540 | 0.7540 | 0.7540 | 0.1172 | 0.9168 | 0.7540 | 0.0163 |
| unet++ | 0.7540 | 0.009 | 0.6015 | 0.6015 | 0.9168 | 0.7540 | 0.9168 | 0.4647 | 0.9168 | 0.1172 | 0.9168 | 0.4647 | 0.009 |
| d2unet | 0.7540 | 0.6015 | 0.009 | 0.3472 | 0.9168 | 0.6015 | 0.7540 | 0.6015 | 0.7540 | 0.0472 | 0.9168 | 0.3472 | 0.009 |
| d2aunet | 0.4647 | 0.6015 | 0.3472 | 0.0163 | 0.2506 | 0.2506 | 0.6015 | 0.9168 | 0.6015 | 0.4647 | 0.6015 | 0.7540 | 0.0163 |
| runet | 0.4647 | 0.9168 | 0.9168 | 0.2506 | 0.0472 | 0.3472 | 0.7540 | 0.6015 | 0.7540 | 0.0472 | 0.7540 | 0.1745 | 0.0163 |
| aunet | 0.2506 | 0.7540 | 0.6015 | 0.2506 | 0.3472 | 0.0163 | 0.7540 | 0.3472 | 0.6015 | 0.0472 | 0.7540 | 0.1745 | 0.0163 |
| daunet | 0.7540 | 0.9168 | 0.7540 | 0.6015 | 0.7540 | 0.7540 | 0.009 | 0.3472 | 0.4647 | 0.1172 | 0.7540 | 0.4647 | 0.009 |
| r2unet | 0.7540 | 0.4647 | 0.6015 | 0.9168 | 0.6015 | 0.3472 | 0.3472 | 0.0283 | 0.6015 | 0.4647 | 0.3472 | 0.9168 | 0.009 |
| r2aunet | 0.7540 | 0.9168 | 0.7540 | 0.6015 | 0.7540 | 0.6015 | 0.4647 | 0.6015 | 0.009 | 0.0758 | 0.7540 | 0.3472 | 0.009 |
| segresnet | 0.1172 | 0.1172 | 0.0472 | 0.4647 | 0.0472 | 0.0472 | 0.1172 | 0.4647 | 0.0758 | 0.0163 | 0.1172 | 0.1745 | 0.009 |
| highresnet | 0.9168 | 0.9168 | 0.9168 | 0.6015 | 0.7540 | 0.7540 | 0.7540 | 0.3472 | 0.7540 | 0.1172 | 0.0283 | 0.7540 | 0.009 |
| vnet | 0.7540 | 0.4647 | 0.3472 | 0.7540 | 0.1745 | 0.1745 | 0.4647 | 0.9168 | 0.3472 | 0.1745 | 0.7540 | 0.0472 | 0.009 |
| nnunet | 0.0163 | 0.009 | 0.009 | 0.0163 | 0.0163 | 0.0163 | 0.009 | 0.009 | 0.009 | 0.009 | 0.009 | 0.009 | 0.0283 |
PZ: p-values for pairwise model comparisons. Rows: models trained on cropped images; columns: models trained on full images; diagonal: cropped vs. full variant of the same model.

| PZ | unet | unet++ | d2unet | d2aunet | runet | aunet | daunet | r2unet | r2aunet | segresnet | highresnet | vnet | nnunet |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| unet | 0.4647 | 0.6015 | 0.9168 | 0.6015 | 0.3472 | 0.4647 | 0.9168 | 0.9168 | 0.3472 | 0.1745 | 0.6015 | 0.1745 | 0.0163 |
| unet++ | 0.9168 | 0.4647 | 0.9168 | 0.1745 | 0.4647 | 0.3472 | 0.7540 | 0.6015 | 0.1745 | 0.1172 | 0.3472 | 0.2506 | 0.0283 |
| d2unet | 0.9168 | 0.9168 | 0.9168 | 0.6015 | 0.3472 | 0.6015 | 0.9168 | 0.7540 | 0.2506 | 0.0758 | 0.7540 | 0.2506 | 0.0163 |
| d2aunet | 0.9168 | 0.9168 | 0.9168 | 0.4647 | 0.0758 | 0.7540 | 0.4647 | 0.2506 | 0.4647 | 0.0758 | 0.7540 | 0.0758 | 0.0163 |
| runet | 0.7540 | 0.6015 | 0.9168 | 0.6015 | 0.2506 | 0.1745 | 0.4647 | 0.3472 | 0.1172 | 0.2506 | 0.3472 | 0.4647 | 0.009 |
| aunet | 0.7540 | 0.6015 | 0.9168 | 0.6015 | 0.9168 | 0.9168 | 0.4647 | 0.3472 | 0.7540 | 0.0758 | 0.7540 | 0.1745 | 0.0163 |
| daunet | 0.9168 | 0.7540 | 0.9168 | 0.7540 | 0.6015 | 0.6015 | 0.6015 | 0.9168 | 0.3472 | 0.1172 | 0.4647 | 0.3472 | 0.0283 |
| r2unet | 0.7540 | 0.9168 | 0.7540 | 0.9168 | 0.9168 | 0.6015 | 0.7540 | 0.3472 | 0.2506 | 0.0758 | 0.3472 | 0.1745 | 0.0163 |
| r2aunet | 0.9168 | 0.9168 | 0.6015 | 0.9168 | 0.7540 | 0.7540 | 0.9168 | 0.7540 | 0.7540 | 0.0758 | 0.6015 | 0.0758 | 0.0163 |
| segresnet | 0.2506 | 0.1745 | 0.2506 | 0.1745 | 0.3472 | 0.4647 | 0.3472 | 0.1745 | 0.2506 | 0.3472 | 0.3472 | 0.3472 | 0.009 |
| highresnet | 0.9168 | 0.9168 | 0.7540 | 0.6015 | 0.3472 | 0.3472 | 0.4647 | 0.9168 | 0.7540 | 0.1172 | 0.3472 | 0.3472 | 0.0283 |
| vnet | 0.4647 | 0.4647 | 0.6015 | 0.4647 | 0.6015 | 0.7540 | 0.4647 | 0.4647 | 0.4647 | 0.6015 | 0.2506 | 0.1745 | 0.009 |
| nnunet | 0.9168 | 0.7540 | 0.7540 | 0.7540 | 0.4647 | 0.6015 | 0.9168 | 0.9168 | 0.7540 | 0.1745 | 0.6015 | 0.2506 | 0.0163 |
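The p-values above come from pairwise statistical comparisons of per-case segmentation scores between model pairs; the specific test is not stated in this appendix. One common choice for paired per-case scores is a two-sided paired permutation (sign-flip) test, sketched below in pure Python with hypothetical Dice scores; the data and function name are illustrative, not the paper's implementation.

```python
import random

def paired_permutation_test(scores_a, scores_b, n_perm=10000, seed=0):
    """Two-sided paired permutation test on per-case score differences.

    Under the null hypothesis the sign of each paired difference is
    exchangeable, so we randomly flip signs and count how often the
    permuted mean difference is at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    observed = abs(sum(diffs) / len(diffs))
    hits = 0
    for _ in range(n_perm):
        flipped = [d if rng.random() < 0.5 else -d for d in diffs]
        if abs(sum(flipped) / len(flipped)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0

# Hypothetical per-case Dice scores for two models on the same test cases:
model_a = [0.91, 0.89, 0.93, 0.90, 0.88, 0.92, 0.90, 0.91]
model_b = [0.88, 0.87, 0.90, 0.89, 0.86, 0.91, 0.88, 0.90]
p = paired_permutation_test(model_a, model_b)
```

Because model A outscores model B on every case here, the test returns a small p-value; with ties or mixed signs it rises toward 1.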
(A) Gland (left block) vs. cropped gland (right block):

| Model | MDS | CI | MHD (mm) | ASD (mm) | MDS | CI | MHD (mm) | ASD (mm) |
|---|---|---|---|---|---|---|---|---|
| unet | 0.9082 | 0.0154 | 11.5690 | 0.8732 | 0.9094 | 0.0064 | 7.0381 | 0.6990 |
| unet++ | 0.9066 | 0.0108 | 9.7262 | 0.9289 | 0.9081 | 0.0037 | 7.9955 | 0.7533 |
| runet | 0.9081 | 0.0078 | 8.8370 | 0.7610 | 0.9083 | 0.0062 | 7.3480 | 0.7186 |
| aunet | 0.9052 | 0.0151 | 11.5549 | 1.0364 | 0.9105 | 0.0079 | 7.0760 | 0.7047 |
| daunet | 0.9023 | 0.0203 | 13.5009 | 1.0892 | 0.9123 | 0.0076 | 7.1538 | 0.7265 |
| d2unet | 0.9078 | 0.0170 | 12.5156 | 1.2563 | 0.9145 | 0.0047 | 6.6846 | 0.6664 |
| d2aunet | 0.9110 | 0.0115 | 9.9763 | 0.7997 | 0.9158 | 0.0068 | 7.6787 | 0.6715 |
| r2unet | 0.9084 | 0.0087 | 9.6704 | 0.8259 | 0.9117 | 0.0081 | 7.0649 | 0.6994 |
| r2aunet | 0.9069 | 0.0110 | 10.1465 | 0.8990 | 0.9130 | 0.0036 | 7.1827 | 0.7096 |
| vnet | 0.8991 | 0.0062 | 12.5448 | 1.0621 | 0.9067 | 0.0070 | 7.2024 | 0.7220 |
| segresnet | 0.8960 | 0.0117 | 9.3760 | 0.8540 | 0.9048 | 0.0051 | 6.9018 | 0.7490 |
| highresnet | 0.8980 | 0.0130 | 31.1897 | 2.1118 | 0.9029 | 0.0084 | 25.8385 | 0.7533 |
| nnunet | 0.9289 | 0.0046 | 5.7155 | 0.9218 | 0.9139 | 0.0044 | 6.2565 | 0.7406 |

(B) Gland (left block) vs. cropped gland (right block):

| Model | MDS | MHD (mm) | ASD (mm) | MDS | MHD (mm) | ASD (mm) |
|---|---|---|---|---|---|---|
| unet | 0.8474 | 16.9045 | 2.3177 | 0.8595 | 9.9819 | 1.2044 |
| unet++ | 0.8544 | 25.7024 | 2.0092 | 0.8472 | 9.0465 | 1.3394 |
| runet | 0.8501 | 19.8740 | 1.9686 | 0.8547 | 9.9693 | 1.2591 |
| aunet | 0.8291 | 25.3664 | 2.5814 | 0.8521 | 9.9022 | 1.3618 |
| daunet | 0.8277 | 21.4430 | 2.1456 | 0.8579 | 10.6797 | 1.2818 |
| d2unet | 0.8528 | 15.2098 | 1.7646 | 0.8567 | 9.1642 | 1.3421 |
| d2aunet | 0.8464 | 13.6260 | 1.9367 | 0.8562 | 8.2389 | 1.3334 |
| r2unet | 0.8517 | 14.9619 | 1.7727 | 0.8546 | 8.7872 | 1.3819 |
| r2aunet | 0.8514 | 16.2630 | 1.9737 | 0.8573 | 10.3648 | 1.7792 |
| vnet | 0.8368 | 21.3305 | 2.7043 | 0.8548 | 9.6195 | 1.3882 |
| segresnet | 0.8479 | 17.4913 | 1.7528 | 0.8468 | 9.7496 | 1.3192 |
| highresnet | 0.7448 | 69.6783 | 11.5506 | 0.8092 | 35.0037 | 3.6740 |
| nnunet | 0.8678 | 10.0231 | 3.3704 | 0.8558 | 9.2565 | 1.6221 |
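In these tables MDS denotes the mean Dice score and MHD/ASD the (mean) Hausdorff and average surface distances in millimetres. As a reference point for the overlap and distance metrics, here is a minimal pure-Python sketch of the Dice coefficient and the symmetric Hausdorff distance on voxel-coordinate sets; the toy masks are illustrative and this is not the evaluation code used in the study (which operates on full 3D volumes with physical spacing).

```python
import math

def dice(a, b):
    """Dice similarity coefficient between two sets of voxel coordinates."""
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two coordinate sets (brute force)."""
    def directed(x, y):
        # largest distance from any point of x to its nearest point in y
        return max(min(math.dist(p, q) for q in y) for p in x)
    return max(directed(a, b), directed(b, a))

# Toy 2D masks given as sets of (row, col) pixels:
pred = {(0, 0), (0, 1), (1, 0), (1, 1)}
truth = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice(pred, truth))       # 3 overlapping pixels out of 4 + 4 -> 0.75
print(hausdorff(pred, truth))  # 1.0
```

The average surface distance reported as ASD is analogous but averages (rather than maximizes) the nearest-neighbour distances between the two boundary surfaces.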
(A) TZ (left block) vs. cropped TZ (right block):

| Model | MDS | CI | MHD (mm) | ASD (mm) | MDS | CI | MHD (mm) | ASD (mm) |
|---|---|---|---|---|---|---|---|---|
| unet | 0.8423 | 0.0243 | 14.0153 | 1.0092 | 0.7790 | 0.0250 | 14.0153 | 1.3668 |
| unet++ | 0.8457 | 0.0254 | 14.0649 | 1.0042 | 0.7715 | 0.0278 | 14.0649 | 1.3840 |
| runet | 0.8469 | 0.0186 | 12.9054 | 0.9450 | 0.7895 | 0.0432 | 12.9054 | 1.3410 |
| aunet | 0.8509 | 0.0164 | 14.8667 | 0.9109 | 0.7543 | 0.1045 | 14.8667 | 1.4419 |
| daunet | 0.8481 | 0.0155 | 14.6603 | 0.9105 | 0.7703 | 0.0311 | 14.6603 | 1.3604 |
| d2unet | 0.8447 | 0.0144 | 13.9448 | 0.9676 | 0.7790 | 0.0483 | 13.9448 | 1.4247 |
| d2aunet | 0.8328 | 0.0456 | 14.5523 | 1.9611 | 0.7724 | 0.0351 | 14.5523 | 1.3411 |
| r2unet | 0.8390 | 0.0366 | 14.4749 | 1.0748 | 0.7788 | 0.0579 | 14.1360 | 1.3977 |
| r2aunet | 0.8456 | 0.0152 | 13.1392 | 1.0938 | 0.7799 | 0.0365 | 13.1392 | 1.3406 |
| vnet | 0.8389 | 0.0172 | 12.4145 | 1.0663 | 0.7998 | 0.0282 | 12.4145 | 1.2589 |
| segresnet | 0.8234 | 0.0199 | 13.7308 | 1.2383 | 0.7729 | 0.0347 | 13.7308 | 1.3995 |
| highresnet | 0.8457 | 0.0186 | 15.0525 | 1.5373 | 0.8083 | 0.0212 | 15.0525 | 1.3386 |
| nnunet | 0.8760 | 0.0099 | 8.7049 | 1.2928 | 0.8561 | 0.0133 | 10.2390 | 1.1226 |

(B) TZ (left block) vs. cropped TZ (right block):

| Model | MDS | MHD (mm) | ASD (mm) | MDS | MHD (mm) | ASD (mm) |
|---|---|---|---|---|---|---|
| unet | 0.7700 | 22.5822 | 2.4379 | 0.6424 | 16.8615 | 2.6838 |
| unet++ | 0.7681 | 19.8427 | 2.3727 | 0.6536 | 16.9354 | 2.4165 |
| runet | 0.7671 | 19.9993 | 2.0027 | 0.6477 | 17.0520 | 2.5169 |
| aunet | 0.7680 | 21.7833 | 2.4354 | 0.6577 | 16.0525 | 2.5444 |
| daunet | 0.7688 | 19.6406 | 2.4507 | 0.6471 | 17.4325 | 2.5915 |
| d2unet | 0.7627 | 21.2574 | 2.4447 | 0.6581 | 17.3590 | 2.7397 |
| d2aunet | 0.7638 | 22.9369 | 2.3869 | 0.6581 | 16.7426 | 2.5895 |
| r2unet | 0.7631 | 19.8119 | 2.5560 | 0.6523 | 17.3374 | 2.8883 |
| r2aunet | 0.7672 | 21.2282 | 2.1350 | 0.6553 | 14.3813 | 2.6186 |
| vnet | 0.7643 | 20.3670 | 2.8237 | 0.6603 | 18.4141 | 2.8833 |
| segresnet | 0.7381 | 19.6969 | 2.6089 | 0.6482 | 18.0439 | 2.7591 |
| highresnet | 0.7209 | 46.6032 | 6.4525 | 0.6381 | 22.3236 | 3.6603 |
| nnunet | 0.7300 | 16.9107 | 7.4144 | 0.7540 | 14.4592 | 2.1459 |
(A) PZ (left block) vs. cropped PZ (right block):

| Model | MDS | CI | MHD (mm) | ASD (mm) | MDS | CI | MHD (mm) | ASD (mm) |
|---|---|---|---|---|---|---|---|---|
| unet | 0.7545 | 0.0300 | 22.1193 | 1.5215 | 0.7656 | 0.0326 | 15.8711 | 1.1443 |
| unet++ | 0.7557 | 0.0363 | 21.6545 | 1.4484 | 0.7666 | 0.0328 | 15.8122 | 1.1031 |
| runet | 0.7472 | 0.0296 | 23.0029 | 1.4938 | 0.7627 | 0.0243 | 15.8278 | 1.1482 |
| aunet | 0.7632 | 0.0280 | 19.6521 | 1.2725 | 0.7611 | 0.0360 | 15.0016 | 1.1684 |
| daunet | 0.7576 | 0.0384 | 20.1854 | 1.3109 | 0.7655 | 0.0349 | 15.1258 | 1.1131 |
| d2unet | 0.7597 | 0.0336 | 22.9687 | 1.4304 | 0.7644 | 0.0327 | 15.5292 | 1.0531 |
| d2aunet | 0.7614 | 0.0219 | 21.2204 | 1.5646 | 0.7677 | 0.0258 | 15.0999 | 1.1449 |
| r2unet | 0.7563 | 0.0280 | 24.6377 | 1.2007 | 0.7700 | 0.0307 | 15.4505 | 1.1609 |
| r2aunet | 0.7656 | 0.0294 | 19.6629 | 1.2778 | 0.7689 | 0.0343 | 14.8106 | 1.1195 |
| vnet | 0.7420 | 0.0315 | 22.0089 | 1.7818 | 0.7573 | 0.0297 | 15.9604 | 1.1659 |
| segresnet | 0.7329 | 0.0285 | 21.4516 | 1.4579 | 0.7456 | 0.0380 | 15.9265 | 1.2347 |
| highresnet | 0.7560 | 0.0301 | 35.7066 | 2.2781 | 0.7719 | 0.0203 | 18.4239 | 1.1279 |
| nnunet | 0.8029 | 0.0063 | 9.8693 | 1.0210 | 0.7686 | 0.0110 | 14.4054 | 1.1633 |

(B) PZ (left block) vs. cropped PZ (right block):

| Model | MDS | MHD (mm) | ASD (mm) | MDS | MHD (mm) | ASD (mm) |
|---|---|---|---|---|---|---|
| unet | 0.6160 | 26.9297 | 2.7719 | 0.6217 | 21.3145 | 2.0339 |
| unet++ | 0.6129 | 37.0787 | 3.5482 | 0.6064 | 19.8930 | 2.3919 |
| runet | 0.5828 | 19.5994 | 2.8543 | 0.6104 | 22.1387 | 2.4339 |
| aunet | 0.6224 | 20.8184 | 2.5188 | 0.6141 | 20.3642 | 2.5048 |
| daunet | 0.6282 | 23.0450 | 3.0015 | 0.6252 | 20.2531 | 2.0680 |
| d2unet | 0.6222 | 18.9975 | 2.5521 | 0.6387 | 16.3358 | 1.9674 |
| d2aunet | 0.6218 | 26.7974 | 2.6237 | 0.6372 | 17.8484 | 2.1356 |
| r2unet | 0.6273 | 27.9654 | 2.5934 | 0.6308 | 21.7236 | 2.2581 |
| r2aunet | 0.6228 | 21.5421 | 2.7400 | 0.6300 | 17.1794 | 2.0054 |
| vnet | 0.5848 | 40.8916 | 4.7720 | 0.5853 | 23.1373 | 2.5561 |
| segresnet | 0.6106 | 21.5214 | 2.5409 | 0.5339 | 41.5031 | 4.4359 |
| highresnet | 0.5669 | 60.2008 | 5.0504 | 0.6075 | 19.7673 | 2.8573 |
| nnunet | 0.6835 | 13.4527 | 1.8224 | 0.6038 | 17.6085 | 2.5589 |
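The CI columns in the metric tables report a per-model confidence interval for the mean Dice score; the exact estimator is not given in this excerpt. A common way to obtain such intervals from per-case scores is a percentile bootstrap, sketched below in pure Python with hypothetical data; the function name and inputs are illustrative.

```python
import random

def bootstrap_ci(scores, n_boot=5000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean of `scores`.

    Resamples the per-case scores with replacement, recomputes the mean
    for each resample, and takes the alpha/2 and 1 - alpha/2 quantiles.
    """
    rng = random.Random(seed)
    n = len(scores)
    means = sorted(
        sum(rng.choice(scores) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical per-case Dice scores for one model:
dice_scores = [0.91, 0.89, 0.93, 0.90, 0.88, 0.92, 0.90, 0.91]
lo, hi = bootstrap_ci(dice_scores)
```

The interval necessarily lies between the minimum and maximum observed score, and narrows as the number of test cases grows.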
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Rodrigues, N.M.; Silva, S.; Vanneschi, L.; Papanikolaou, N. A Comparative Study of Automated Deep Learning Segmentation Models for Prostate MRI. Cancers 2023, 15, 1467. https://doi.org/10.3390/cancers15051467