Hybrid Multiple-Organ Segmentation Method Using Multiple U-Nets in PET/CT Images
Abstract
1. Introduction
1.1. Related Works
1.2. Purpose
1.3. Contribution
2. Methods
2.1. Overview of the Proposed Method
2.2. Dataset
2.2.1. Target Data
2.2.2. Data Preparation
2.3. Organ Segmentation
2.4. Shaping of Output Labels and Composition
2.5. Evaluation Metrics
2.6. Learning Environment and Parameters
3. Results
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2021, 71, 209–249.
- Teramoto, A.; Fujita, H.; Yamamuro, O.; Tamaki, T. Automated detection of pulmonary nodules in PET/CT images: Ensemble false-positive reduction using a convolutional neural network technique. Med. Phys. 2016, 43, 2821–2827.
- Alakwaa, W.; Nassef, M.; Badr, A. Lung cancer detection and classification with 3D convolutional neural network (3D-CNN). Int. J. Adv. Comput. Sci. Appl. (IJACSA) 2017, 8, 409–417.
- Trebeschi, S.; van Griethuysen, J.J.M.; Lambregts, D.M.J.; Lahaye, M.J.; Parmar, C.; Bakers, F.C.H.; Peters, N.H.G.M.; Beets-Tan, R.G.H.; Aerts, H.J.W.L. Deep learning for fully-automated localization and segmentation of rectal cancer on multiparametric MR. Sci. Rep. 2017, 7, 5301.
- Salama, W.M.; Aly, M.H. Deep learning in mammography images segmentation and classification: Automated CNN approach. Alex. Eng. J. 2021, 60, 4701–4709.
- Wolz, R.; Chu, C.; Misawa, K.; Fujiwara, M.; Mori, K.; Rueckert, D. Automated abdominal multi-organ segmentation with subject-specific atlas generation. IEEE Trans. Med. Imaging 2013, 32, 1723–1730.
- Tong, T.; Wolz, R.; Wang, Z.; Gao, Q.; Misawa, K.; Fujiwara, M.; Mori, K.; Hajnal, J.V.; Rueckert, D. Discriminative dictionary learning for abdominal multi-organ segmentation. Med. Image Anal. 2015, 23, 92–104.
- Gauriau, R.; Cuingnet, R.; Lesage, D.; Bloch, I. Multi-organ localization with cascaded global-to-local regression and shape prior. Med. Image Anal. 2015, 23, 70–83.
- Criminisi, A.; Robertson, D.; Konukoglu, E.; Shotton, J.; Pathak, S.; White, S.; Siddiqui, K. Regression forests for efficient anatomy detection and localization in computed tomography scans. Med. Image Anal. 2013, 17, 1293–1303.
- Hu, P.; Wu, F.; Peng, J.; Bao, Y.; Chen, F.; Kong, D. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 399–411.
- Zhou, X.; Takayama, R.; Wang, S.; Hara, T.; Fujita, H. Deep learning of the sectional appearances of 3D CT images for anatomical structure segmentation based on an FCN voting method. Med. Phys. 2017, 44, 5221–5233.
- Roth, H.R.; Oda, H.; Hayashi, Y.; Oda, M.; Shimizu, N.; Fujiwara, M.; Misawa, K.; Mori, K. Hierarchical 3D fully convolutional networks for multi-organ segmentation. arXiv 2017, arXiv:1704.06382.
- Wang, H.; Zhang, N.; Huo, L.; Zhang, B. Dual-modality multi-atlas segmentation of torso organs from [18F]FDG-PET/CT images. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 473–482.
- Zhang, J.; Wang, Y.; Liu, J.; Tang, Z.; Wang, Z. Multiple organ-specific cancers classification from PET/CT images using deep learning. Multimed. Tools Appl. 2022, 81, 16133–16154.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
- Mei, X.; Liu, Z.; Robson, P.M.; Marinelli, B.; Huang, M.; Doshi, A.; Jacobi, A.; Cao, C.; Link, K.E.; Yang, T.; et al. RadImageNet: An open radiologic deep learning research dataset for effective transfer learning. Radiol. Artif. Intell. 2022, 4, e210315.
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. Lect. Notes Comput. Sci. 2015, 9351, 234–241.
- Li, X.; Chen, H.; Qi, X.; Dou, Q.; Fu, C.W.; Heng, P.A. H-DenseUNet: Hybrid densely connected UNet for liver and tumor segmentation from CT volumes. IEEE Trans. Med. Imaging 2018, 37, 2663–2674.
- Taghanaki, S.A.; Zheng, Y.; Zhou, S.K.; Georgescu, B.; Sharma, P.; Xu, D.; Comaniciu, D.; Hamarneh, G. Combo loss: Handling input and output imbalance in multi-organ segmentation. Comput. Med. Imaging Graph. 2019, 75, 24–33.
- Gibson, E.; Giganti, F.; Hu, Y.; Bonmati, E.; Bandula, S.; Gurusamy, K.; Davidson, B.; Pereira, S.P.; Clarkson, M.J.; Barratt, D.C. Automatic multi-organ segmentation on abdominal CT with dense V-networks. IEEE Trans. Med. Imaging 2018, 37, 1822–1834.
| Organ | Evaluation Metric | Statistic | Candidate (LDCT) | Candidate (PET/CT) | Output (Proposed) |
|---|---|---|---|---|---|
| Liver | Dice [%] | Mean | 94.1 | 94.2 | 94.0 |
| | | SD | 4.1 | 4.1 | 4.1 |
| | | Median | 94.9 | 94.7 | 94.7 |
| | | Min | 59.4 | 59.4 | 59.5 |
| | | Max | 97.0 | 96.9 | 96.7 |
| | Sensitivity [%] | | 95.4 | 95.6 | 96.6 |
| | False positive rate [%] | | 7.1 | 7.2 | 8.4 |
| Kidney | Dice [%] | Mean | 94.3 | 94.6 | 94.1 |
| | | SD | 3.7 | 3.1 | 3.6 |
| | | Median | 95.6 | 95.5 | 95.2 |
| | | Min | 70.7 | 76.0 | 72.1 |
| | | Max | 97.5 | 97.8 | 97.3 |
| | Sensitivity [%] | | 96.3 | 96.7 | 97.7 |
| | False positive rate [%] | | 7.5 | 7.3 | 9.1 |
| Spleen | Dice [%] | Mean | 91.2 | 91.2 | 90.9 |
| | | SD | 8.3 | 8.4 | 8.3 |
| | | Median | 92.9 | 92.8 | 92.6 |
| | | Min | 20.0 | 20.0 | 20.7 |
| | | Max | 96.6 | 96.5 | 96.4 |
| | Sensitivity [%] | | 93.9 | 93.8 | 95.4 |
| | False positive rate [%] | | 11.0 | 10.9 | 12.9 |
| Pancreas | Dice [%] | Mean | 69.5 | 70.5 | 71.9 |
| | | SD | 16.2 | 15.7 | 14.0 |
| | | Median | 73.1 | 75.0 | 75.9 |
| | | Min | 0.0 | 22.3 | 21.7 |
| | | Max | 89.1 | 89.6 | 88.4 |
| | Sensitivity [%] | | 67.8 | 69.9 | 75.7 |
| | False positive rate [%] | | 25.0 | 25.8 | 29.2 |
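The Dice index, sensitivity, and false positive rate reported in the tables can be computed from a predicted binary mask and a ground-truth mask. The following is a minimal sketch in Python with NumPy; the paper's exact definition of the false positive rate is not reproduced here, so FP / (TP + FP) is used as an assumption, and the toy 1-D masks are purely illustrative.

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks, in percent."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 100.0 * 2.0 * inter / denom if denom else 100.0

def sensitivity(pred: np.ndarray, gt: np.ndarray) -> float:
    """True-positive rate: fraction of ground-truth voxels recovered, in percent."""
    tp = np.logical_and(pred, gt).sum()
    return 100.0 * tp / gt.sum()

def false_positive_rate(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of predicted voxels lying outside the ground truth, in percent.
    Assumed definition: FP / (TP + FP); the paper may define it differently."""
    fp = np.logical_and(pred, np.logical_not(gt)).sum()
    return 100.0 * fp / pred.sum()

# Toy example: 5-voxel 1-D "masks" (hypothetical data, for illustration only).
gt = np.array([1, 1, 1, 0, 0], dtype=bool)
pred = np.array([1, 1, 0, 1, 0], dtype=bool)
print(round(dice(pred, gt), 1))  # → 66.7
```

In a real evaluation these functions would be applied per case to the 3-D organ masks, with the per-case Dice values then summarized as the mean, SD, median, min, and max shown above.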
| Organ | Evaluation Metric | Statistic | Full-Scratch | ImageNet | RadImageNet |
|---|---|---|---|---|---|
| Liver | Dice [%] | Mean | 94.0 | 94.1 | 93.7 |
| | | SD | 4.1 | 4.2 | 4.1 |
| | | Median | 94.7 | 94.9 | 94.6 |
| | | Min | 59.5 | 58.9 | 59.4 |
| | | Max | 96.7 | 96.8 | 96.7 |
| | Sensitivity [%] | | 96.6 | 97.0 | 96.8 |
| | False positive rate [%] | | 8.4 | 8.6 | 9.1 |
| Kidney | Dice [%] | Mean | 94.1 | 93.9 | 93.1 |
| | | SD | 3.6 | 4.1 | 4.1 |
| | | Median | 95.2 | 95.2 | 94.5 |
| | | Min | 72.1 | 71.1 | 72.1 |
| | | Max | 97.3 | 97.1 | 97.1 |
| | Sensitivity [%] | | 97.7 | 98.0 | 97.9 |
| | False positive rate [%] | | 9.1 | 9.6 | 10.9 |
| Spleen | Dice [%] | Mean | 90.9 | 91.3 | 90.3 |
| | | SD | 8.3 | 8.2 | 7.9 |
| | | Median | 92.6 | 92.6 | 92.0 |
| | | Min | 20.7 | 20.6 | 24.1 |
| | | Max | 96.4 | 96.3 | 95.8 |
| | Sensitivity [%] | | 95.4 | 95.8 | 95.6 |
| | False positive rate [%] | | 12.9 | 12.6 | 14.2 |
| Pancreas | Dice [%] | Mean | 71.9 | 75.1 | 70.6 |
| | | SD | 14.0 | 12.5 | 15.8 |
| | | Median | 75.9 | 78.9 | 75.2 |
| | | Min | 21.7 | 25.1 | 3.6 |
| | | Max | 88.4 | 89.4 | 86.4 |
| | Sensitivity [%] | | 75.7 | 82.3 | 77.3 |
| | False positive rate [%] | | 29.2 | 29.3 | 33.4 |
| Author | Model | Validation | Modality | Contrast Enhancement | Number of Cases | Liver Dice [%] | Kidney Dice [%] | Spleen Dice [%] | Pancreas Dice [%] |
|---|---|---|---|---|---|---|---|---|---|
| Tong et al. (2015) [7] | Atlas | LOO | D-CT | CE | 150 | 94.9 | 93.6 | 92.5 | 71.1 |
| Hu et al. (2017) [10] | 3D CNN | 5-fold CV | D-CT | Mixed | 140 | 96.0 | 95.4 | 94.2 | — |
| Roth et al. (2017) [12] | 3D FCN | Testing | D-CT | CE | 331 | 95.4 | — | 92.8 | 82.2 |
| Gibson et al. (2018) [20] | DenseVNet | 9-fold CV | D-CT | Mixed | 90 | 95 | 93 * | 95 | 75 |
| Wang et al. (2019) [13] | Multi-atlas | LOO | PET/CT | NCE | 69 | 88 | 79 | 74 | — |
| Zhang et al. (2022) [14] | VNet | Hold-out | PET/CT | NCE | 175 | 90.7 | 89.9 | 89.1 | 60.3 |
| Proposed method | DenseUNet | 5-fold CV | PET/CT | NCE | 88 | 94.1 | 93.9 | 91.3 | 75.1 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Suganuma, Y.; Teramoto, A.; Saito, K.; Fujita, H.; Suzuki, Y.; Tomiyama, N.; Kido, S. Hybrid Multiple-Organ Segmentation Method Using Multiple U-Nets in PET/CT Images. Appl. Sci. 2023, 13, 10765. https://doi.org/10.3390/app131910765