Fully Automatic Thoracic Cavity Segmentation in Dynamic Contrast Enhanced Breast MRI Using Deep Convolutional Neural Networks
Abstract
1. Introduction
2. Related Work
2.1. Pixel-Based Approaches
2.2. Atlas-Based Approaches
2.3. Geometrical-Based Approaches
2.4. Deep Learning-Based Approaches
3. Methodology
3.1. Introduction
3.2. Dataset and Labeling Strategy
3.3. Deep Learning Model
3.4. Model Configuration
- ResNet encoders;
- A self-attention layer as part of the upsampling path of the model;
- A blurring algorithm to avoid checkerboard artefacts;
- A bottleneck connection from input to output (a combined sketch of these four components follows this list).
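The sketch below illustrates, under our own assumptions, how these four components can be wired into a U-Net-style segmentation network in PyTorch. It is not the authors' implementation; the class names `SelfAttention2d`, `BlurUp`, and `TinyUNet` and all layer sizes are illustrative only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention over spatial positions (illustrative)."""
    def __init__(self, ch: int):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 8, 1)
        self.k = nn.Conv2d(ch, ch // 8, 1)
        self.v = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.k(x).flatten(2)                   # (b, c//8, hw)
        v = self.v(x).flatten(2)                   # (b, c, hw)
        attn = torch.softmax(q @ k, dim=-1)        # (b, hw, hw)
        out = (v @ attn.transpose(1, 2)).reshape(b, c, h, w)
        return x + self.gamma * out


class BlurUp(nn.Module):
    """Nearest-neighbour upsampling followed by a light average-pool blur,
    avoiding the checkerboard artefacts of strided transposed convolutions."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.pad = nn.ReplicationPad2d((1, 0, 1, 0))
        self.blur = nn.AvgPool2d(2, stride=1)

    def forward(self, x):
        x = F.interpolate(x, scale_factor=2, mode="nearest")
        return self.blur(self.pad(F.relu(self.conv(x))))


class TinyUNet(nn.Module):
    """Toy ResNet18-encoder network with self-attention, blurred upsampling,
    and a direct connection from the input image to the output head."""
    def __init__(self, n_classes: int = 1):
        super().__init__()
        enc = resnet18(weights=None)                          # ResNet encoder
        self.stem = nn.Sequential(enc.conv1, enc.bn1, enc.relu, enc.maxpool)
        self.enc1, self.enc2 = enc.layer1, enc.layer2         # 64 -> 128 channels
        self.attn = SelfAttention2d(128)                      # self-attention layer
        self.dec = nn.Sequential(BlurUp(128, 64), BlurUp(64, 32), BlurUp(32, 16))
        self.head = nn.Conv2d(16 + 3, n_classes, 1)           # input-to-output connection

    def forward(self, x):
        f = self.attn(self.enc2(self.enc1(self.stem(x))))     # 1/8-resolution features
        f = self.dec(f)                                       # back to full resolution
        return self.head(torch.cat([f, x], dim=1))            # concatenate raw input


if __name__ == "__main__":
    logits = TinyUNet()(torch.randn(1, 3, 256, 256))
    print(logits.shape)   # torch.Size([1, 1, 256, 256])
```

The individual components are described in the subsections that follow.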
3.4.1. ResNet Encoders
3.4.2. Self-Attention Layer
3.4.3. Blurring
3.4.4. Bottlenecked Connection
3.5. Training of the Models
3.5.1. Data Augmentation
3.5.2. Hyperparameters
3.5.3. Experiments
4. Results
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Bray, F.; Ferlay, J.; Soerjomataram, I.; Siegel, R.L.; Torre, L.A.; Jemal, A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2018, 68, 394–424. [Google Scholar] [CrossRef]
- Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA Cancer J. Clin. 2021, 71, 209–249. [Google Scholar] [CrossRef]
- Ferlay, J.; Soerjomataram, I.; Dikshit, R.; Eser, S.; Mathers, C.; Rebelo, M.; Parkin, D.M.; Forman, D.; Bray, F. Cancer incidence and mortality worldwide: Sources, methods and major patterns in GLOBOCAN 2012. Int. J. Cancer 2014, 136, 359–386. [Google Scholar] [CrossRef]
- Vinnicombe, S. How I report breast magnetic resonance imaging studies for breast cancer staging and screening. Cancer Imaging 2016, 16, 1–14. [Google Scholar] [CrossRef]
- Cho, N.; Im, S.-A.; Park, I.-A.; Lee, K.-H.; Li, M.; Han, W.; Noh, D.-Y.; Moon, W.K. Breast cancer: Early prediction of response to neoadjuvant chemotherapy using parametric response maps for MR imaging. Radiology 2014, 272, 385–396. [Google Scholar] [CrossRef]
- Zbontar, J.; Knoll, F.; Sriram, A.; Murrell, T.; Huang, Z.; Muckley, M.J.; Defazio, A.; Stern, R.; Johnson, P.; Bruno, M.; et al. fastMRI: An Open Dataset and Benchmarks for Accelerated MRI. arXiv 2018, arXiv:1811.08839. [Google Scholar] [CrossRef]
- Chang, Y.-C.; Huang, Y.-H.; Huang, C.-S.; Chang, P.-K.; Chen, J.-H.; Chang, R.-F. Magnetic Resonance Spectroscopy and Imaging Guidance in Molecular Medicine: Targeting and Monitoring of Choline and Glucose Metabolism in Cancer. Magn. Reson. Imaging 2012, 30, 312–322. [Google Scholar] [CrossRef] [PubMed]
- Piantadosi, G.; Sansone, M.; Fusco, R.; Sansone, C. Multi-planar 3D breast segmentation in MRI via deep convolutional neural networks. Artif. Intell. Med. 2020, 103, 101781. [Google Scholar] [CrossRef] [PubMed]
- Marrone, S.; Piantadosi, G.; Fusco, R.; Petrillo, A.; Sansone, M.; Sansone, C. Breast segmentation using Fuzzy C-Means and anatomical priors in DCE-MRI. In Proceedings of the 23rd IEEE International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016; pp. 1472–1477. [Google Scholar]
- Kayalibay, B.; Jensen, G.; van der Smagt, P. CNN-based Segmentation of Medical Imaging Data. arXiv 2017, arXiv:1701.03056. [Google Scholar] [CrossRef]
- Alshanbari, H.S.; Amin, S.; Shuttleworth, J.; Slman, K.A.; Muslam, S. Automatic Segmentation in Breast Cancer Using Watershed Algorithm. Int. J. Biomed. Eng. 2015, 2, 1–6. [Google Scholar]
- Wang, L.; Platel, B.; Ivanovskaya, T.; Harz, M.; Hahn, H.K. Fully automatic breast segmentation in 3D breast MRI. In Proceedings of the 9th IEEE International Symposium on Biomedical Imaging (ISBI), Barcelona, Spain, 2–5 May 2012. [Google Scholar]
- Vignati, A.; Giannini, V.; de Luca, M.; Morra, L.; Persano, D.; Carbonaro, L.A.; Bertotto, I.; Martincich, L.; Regge, D.; Bert, A.; et al. Performance of a Fully Automatic Lesion Detection System for Breast DCE-MRI. J. Magn. Reson. Imaging 2011, 34, 1341–1351. [Google Scholar] [CrossRef]
- Gallego Ortiz, C.; Martel, A.L. Automatic atlas-based segmentation of the breast in MRI for 3D breast volume computation. Med. Phys. 2012, 39, 5835–5848. [Google Scholar] [CrossRef]
- Gubern-Mérida, A.; Kallenberg, M.; Mann, R.M.; Martí, R.; Karssemeijer, N. Breast segmentation and density estimation in breast MRI: A fully automatic framework. IEEE J. Biomed. Health Inform. 2015, 19, 349–357. [Google Scholar] [CrossRef] [PubMed]
- Khalvati, F.; Gallego-Ortiz, C.; Balasingham, S.; Martel, A.L. Automated Segmentation of Breast in 3D MR Images Using a Robust Atlas. IEEE Trans. Med. Imaging 2015, 34, 116–125. [Google Scholar] [CrossRef]
- Reed, V.K.; Woodward, W.A.; Zhang, L.; Strom, E.A.; Perkins, G.H.; Tereffe, W.; Oh, J.L.; Yu, T.K.; Bedrosian, I.; Whitman, G.J.; et al. Automatic segmentation of whole breast using atlas approach and deformable image registration. Int. J. Radiat. Oncol. Biol. Phys. 2009, 73, 1493–1500. [Google Scholar] [CrossRef]
- Fooladivanda, A.; Shokouhi, S.B.; Mosavi, M.R.; Ahmadinejad, N. Atlas-based automatic breast MRI segmentation using pectoral muscle and chest region model. In Proceedings of the 21st IEEE Iranian Conference on Biomedical Engineering (ICBME), Tehran, Iran, 26–28 November 2014. [Google Scholar]
- Mustra, M.; Bozek, J. Breast border extraction and pectoral muscle detection using wavelet decomposition. In Proceedings of the IEEE EUROCON, St. Petersburg, Russia, 18–23 May 2009. [Google Scholar]
- Wu, S.; Weinstein, S.P.; Conant, E.F.; Schnall, M.D.; Kontos, D. Automated chest wall line detection for whole-breast segmentation in sagittal breast MR images. Med. Phys. 2013, 40, 1–12. [Google Scholar] [CrossRef]
- Cai, L.; Gao, J.; Zhao, D. A review of the application of deep learning in medical image classification and segmentation. Ann. Transl. Med. 2020, 8, 713. [Google Scholar] [CrossRef]
- Dalmış, M.; Litjens, G.; Holland, K.; Setio, A.; Mann, R.; Karssemeijer, N.; Gubern-Mérida, A. Using deep learning to segment breast and fibroglandular tissue in MRI volumes. Med. Phys. 2017, 44, 533–546. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is All you Need. In Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA, 4–8 December 2017. [Google Scholar]
- Iglovikov, V.; Shvets, A. TernausNet. In Computer-Aided Analysis of Gastrointestinal Videos; Bernal, J., Histace, A., Eds.; Springer: Cham, Switzerland, 2021. [Google Scholar] [CrossRef]
- Maitra, D.S.; Bhattacharya, U.; Parui, S.K. CNN based common approach to handwritten character recognition of multiple scripts. In Proceedings of the 13th IEEE International Conference on Document Analysis and Recognition (ICDAR), Tunis, Tunisia, 23–26 August 2015; pp. 1021–1025. [Google Scholar]
- Raghu, M.; Zhang, C.; Kleinberg, J.; Bengio, S. Transfusion: Understanding Transfer Learning for Medical Imaging. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019. [Google Scholar]
- Tan, C.; Sun, F.; Kong, T.; Zhang, W.; Yang, C.; Liu, C. A Survey on Deep Transfer Learning. In Proceedings of the Artificial Neural Networks and Machine Learning—ICANN 2018, Rhodes, Greece, 4–7 October 2018; pp. 270–279. [Google Scholar]
- Serte, S.; Serener, A.; Al-Turjman, F. Deep learning in medical imaging: A brief review. Trans. Emerg. Telecommun. Technol. 2020, 33, e4080. [Google Scholar] [CrossRef]
- Zhang, H.; Dauphin, Y.N.; Ma, T. Fixup Initialization: Residual Learning Without Normalization. arXiv 2019, arXiv:1901.09321. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef]
- Razzak, M.I.; Naz, S.; Zaib, A. Deep Learning for Medical Image Processing: Overview, Challenges and the Future. In Classification in BioApps; Lecture Notes in Computational Vision and Biomechanics; Springer: Cham, Switzerland, 2017; Volume 26, pp. 323–350. [Google Scholar] [CrossRef]
- Bahdanau, D.; Cho, K.; Bengio, Y. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
- Xu, T.; Zhang, P.; Huang, Q.; Zhang, H.; Gan, Z.; Huang, X.; He, X. AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar]
- Yang, Z.; He, X.; Gao, J.; Deng, L.; Smola, A. Stacked attention networks for image question answering. arXiv 2015, arXiv:1511.02274. [Google Scholar]
- Gregor, K.; Danihelka, I.; Graves, A.; Rezende, D.J.; Wierstra, D. Draw: A recurrent neural network for image generation. arXiv 2015, arXiv:1502.04623. [Google Scholar]
- Guan, Q.; Huang, Y.; Zhong, Z.; Zheng, Z.; Zheng, L.; Yang, Y. Diagnose like a Radiologist: Attention Guided Convolutional Neural Network for Thorax Disease Classification. arXiv 2018, arXiv:1801.09927. [Google Scholar] [CrossRef]
- Zhang, H.; Goodfellow, I.; Metaxas, D.; Odena, A. Self-Attention Generative Adversarial Networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 7354–7363. [Google Scholar]
- Odena, A.; Dumoulin, V.; Olah, C. Deconvolution and Checkerboard Artifacts. Distill 2016, 1, e3. [Google Scholar] [CrossRef]
- Mikolajczyk, A.; Grochowski, M. Data augmentation for improving deep learning in image classification problem. In Proceedings of the IEEE International Interdisciplinary PhD Workshop (IIPhDW), Świnouście, Poland, 9–12 May 2018; pp. 117–122. [Google Scholar]
- Wong, S.C.; Gatt, A.; Stamatescu, V.; McDonnell, M.D. Understanding data augmentation for classification: When to warp? In Proceedings of the IEEE International Conference on Digital Image Computing: Techniques and Applications (DICTA), Gold Coast, Australia, 30 November–2 December 2016; pp. 1–6. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Smith, L.N. A disciplined approach to neural network hyper-parameters: Part 1—Learning rate, batch size, momentum, and weight decay. arXiv 2018, arXiv:1803.09820. [Google Scholar]
- Smith, L.N. No more pesky learning rate guessing games. arXiv 2015, arXiv:1506.01186. [Google Scholar]
| Transformation | Parameters | Probability |
|---|---|---|
| Horizontal flip | N/A | 0.5 |
| Rotation | ±10° | 0.75 |
| Cropping | 1.1× magnification | 0.75 |
| Contrast adjustment | ±20% | 0.75 |
| Brightness adjustment | ±10% | 0.75 |
| Perspective warp | ±20% position of the observation plane | 0.75 |
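For illustration, the augmentation policy in the table above could be approximated with torchvision as in the sketch below. The authors' actual pipeline is not reproduced here; the image size of 256 and the `RandomResizedCrop` scale range (≈1/1.1² of the original area) are our assumptions, and geometric transforms would additionally have to be applied with identical random parameters to the segmentation masks.

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                      # horizontal flip, p = 0.5
    transforms.RandomApply(                                      # rotation of up to ±10°
        [transforms.RandomRotation(degrees=10)], p=0.75),
    transforms.RandomApply(                                      # ~1.1x magnification crop
        [transforms.RandomResizedCrop(size=256, scale=(0.83, 1.0))], p=0.75),
    transforms.RandomApply(                                      # ±20% contrast, ±10% brightness
        [transforms.ColorJitter(brightness=0.1, contrast=0.2)], p=0.75),
    transforms.RandomPerspective(distortion_scale=0.2, p=0.75),  # perspective warp
])
```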
| Model Configuration | Mean DSC ± SD | Mean JSC ± SD |
|---|---|---|
| ResNet18 | 0.9253 ± 0.1034 | 0.8760 ± 0.0758 |
| ResNet18 + SA | 0.9296 ± 0.1028 | 0.8757 ± 0.0765 |
| ResNet18 + BL | 0.9273 ± 0.1033 | 0.8795 ± 0.0764 |
| ResNet18 + BC | 0.9283 ± 0.1045 | 0.8742 ± 0.0763 |
| ResNet18 + SA + BL | 0.9348 ± 0.1045 | 0.8846 ± 0.0760 |
| ResNet18 + SA + BL + BC | 0.9293 ± 0.1036 | 0.8755 ± 0.0765 |
| ResNet34 | 0.9244 ± 0.1017 | 0.8721 ± 0.0756 |
| ResNet34 + SA | 0.9230 ± 0.1012 | 0.8652 ± 0.0752 |
| ResNet34 + BL | 0.9227 ± 0.1015 | 0.8714 ± 0.0749 |
| ResNet34 + BC | 0.9292 ± 0.1009 | 0.8754 ± 0.0755 |
| ResNet34 + SA + BL | 0.9337 ± 0.1008 | 0.8780 ± 0.0750 |
| ResNet34 + SA + BL + BC | 0.9359 ± 0.1004 | 0.8874 ± 0.0748 |
| ResNet50 | 0.9240 ± 0.1055 | 0.8717 ± 0.0781 |
| ResNet50 + SA | 0.9210 ± 0.1069 | 0.8670 ± 0.0790 |
| ResNet50 + BL | 0.9233 ± 0.1063 | 0.8708 ± 0.0777 |
| ResNet50 + BC | 0.9257 ± 0.1059 | 0.8730 ± 0.0775 |
| ResNet50 + SA + BL | 0.9278 ± 0.1061 | 0.8727 ± 0.0766 |
| ResNet50 + SA + BL + BC | 0.9289 ± 0.1053 | 0.8740 ± 0.0770 |
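DSC and JSC in the tables denote the Dice and Jaccard similarity coefficients between predicted and ground-truth masks, DSC = 2|A ∩ B| / (|A| + |B|) and JSC = |A ∩ B| / |A ∪ B|. A minimal NumPy sketch of both metrics is given below; the authors' exact evaluation code is not part of this excerpt.

```python
import numpy as np


def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)


def jaccard(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """JSC = |A ∩ B| / |A ∪ B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```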
| Model Configuration | Inference Time (ms/Image) | Batch Inference Time (1260 Images) (s) |
|---|---|---|
| ResNet18 | 26.83 | 30.0 |
| ResNet18 + SA | 33.56 | 42.3 |
| ResNet18 + BL | 31.88 | 40.2 |
| ResNet18 + BC | 27.68 | 34.9 |
| ResNet18 + SA + BL | 33.56 | 42.3 |
| ResNet18 + SA + BL + BC | 33.56 | 42.3 |
| ResNet34 | 31.88 | 40.2 |
| ResNet34 + SA | 33.56 | 42.3 |
| ResNet34 + BL | 33.56 | 42.3 |
| ResNet34 + BC | 30.2 | 38.1 |
| ResNet34 + SA + BL | 33.56 | 42.3 |
| ResNet34 + SA + BL + BC | 33.56 | 42.3 |
| ResNet50 | 179.5 | 226.2 |
| ResNet50 + SA | 194.6 | 245.2 |
| ResNet50 + BL | 196.3 | 247.3 |
| ResNet50 + BC | 194.6 | 245.2 |
| ResNet50 + SA + BL | 196.3 | 247.3 |
| ResNet50 + SA + BL + BC | 198 | 249.5 |
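Per-image timings such as those above are typically obtained by averaging repeated forward passes after a warm-up phase. The sketch below shows one common way to measure this in PyTorch; the hardware, batch size, and exact procedure used by the authors are not given in this excerpt, so the code is illustrative only.

```python
import time
import torch


@torch.no_grad()
def time_inference(model, image, n_runs: int = 100, device: str = "cuda") -> float:
    """Return the mean forward-pass time in ms for a single image tensor (1, C, H, W)."""
    model = model.to(device).eval()
    image = image.to(device)
    for _ in range(10):                      # warm-up iterations (excluded from timing)
        model(image)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_runs):
        model(image)
    if device == "cuda":
        torch.cuda.synchronize()             # wait for all GPU kernels to finish
    return (time.perf_counter() - start) / n_runs * 1000.0
```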
| Model Configuration | DSC ± SD (n = 406) | JSC ± SD (n = 406) |
|---|---|---|
| ResNet18 | 0.9750 ± 0.0451 | 0.9552 ± 0.0667 |
| ResNet18 + SA | 0.9731 ± 0.0449 | 0.9552 ± 0.0669 |
| ResNet18 + BL | 0.9739 ± 0.0447 | 0.9536 ± 0.0701 |
| ResNet18 + BC | 0.9717 ± 0.0450 | 0.9497 ± 0.0670 |
| ResNet18 + SA + BL | 0.9751 ± 0.0448 | 0.9556 ± 0.0665 |
| ResNet18 + SA + BL + BC | 0.9737 ± 0.0447 | 0.9533 ± 0.0674 |
| ResNet34 | 0.9744 ± 0.0420 | 0.9541 ± 0.0682 |
| ResNet34 + SA | 0.9734 ± 0.0419 | 0.9527 ± 0.0628 |
| ResNet34 + BL | 0.9698 ± 0.0417 | 0.9512 ± 0.0642 |
| ResNet34 + BC | 0.9766 ± 0.0422 | 0.9577 ± 0.0677 |
| ResNet34 + SA + BL | 0.9775 ± 0.0425 | 0.9584 ± 0.0633 |
| ResNet34 + SA + BL + BC | 0.9789 ± 0.0411 | 0.9612 ± 0.0621 |
| ResNet50 | 0.9718 ± 0.0493 | 0.9541 ± 0.0627 |
| ResNet50 + SA | 0.9665 ± 0.0502 | 0.9538 ± 0.0721 |
| ResNet50 + BL | 0.9708 ± 0.0488 | 0.9465 ± 0.0704 |
| ResNet50 + BC | 0.9732 ± 0.0499 | 0.9526 ± 0.0706 |
| ResNet50 + SA + BL | 0.9712 ± 0.0501 | 0.9532 ± 0.0710 |
| ResNet50 + SA + BL + BC | 0.9766 ± 0.0497 | 0.9577 ± 0.0708 |
| Model Configuration | Inference Time (ms/Image) | Batch Inference Time (1260 Images) (s) |
|---|---|---|
| ResNet18 | 92.28 | 116.3 |
| ResNet18 + SA | 93.96 | 118.4 |
| ResNet18 + BL | 92.28 | 116.3 |
| ResNet18 + BC | 92.28 | 116.3 |
| ResNet18 + SA + BL | 95.64 | 120.5 |
| ResNet18 + SA + BL + BC | 95.64 | 120.5 |
| ResNet34 | 90.6 | 114.2 |
| ResNet34 + SA | 95.64 | 120.5 |
| ResNet34 + BL | 95.64 | 120.5 |
| ResNet34 + BC | 92.28 | 116.3 |
| ResNet34 + SA + BL | 97.32 | 122.6 |
| ResNet34 + SA + BL + BC | 98.99 | 124.7 |
| ResNet50 | 258.4 | 325.6 |
| ResNet50 + SA | 263.4 | 331.9 |
| ResNet50 + BL | 261.7 | 329.7 |
| ResNet50 + BC | 260.1 | 327.7 |
| ResNet50 + SA + BL | 261.7 | 329.7 |
| ResNet50 + SA + BL + BC | 266.8 | 336.2 |