Enhanced Deep-Learning-Based Automatic Left-Femur Segmentation Scheme with Attribute Augmentation
Abstract
1. Introduction
1.1. Background
1.2. Related Works
2. Deep-Learning-Based Automatic Left-Femur Segmentation Scheme: U-Net Segmentation Model
3. Experimental Dataset and Data Preprocessing
3.1. Assigning xyz Coordinates of the Bounding Box for Cropping
3.2. Contrast Enhancement and Femur Cropping
3.3. Cropped CT Slices Augmented with Attributes (Feature Addition)
3.4. Training, Validation, and Testing Datasets of the Deep-Learning-Based Left-Femur Segmentation Scheme
4. Segmentation Performance and Image Similarity Metrics
5. Results and Discussion
5.1. Performance of the U-Net Femur Segmentation Model
5.2. Comparison and Similarity of 3D Reconstruction Images
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Characteristic | Detail |
|---|---|
| Organ of interest | Left femur |
| Number of patients | 120 |
| Age | 60–80 years old |
| Gender | 60 male and 60 female patients |
| Types of lower abdominal disorders | Cervical cancer, prostate cancer, colorectal cancer, rectosigmoid cancer, and rectum cancer |
| Source of data | Siriraj Hospital, Thailand |
| Hyperparameter (Femur) | Value |
|---|---|
| Number of layers | 5 |
| Epochs | 5000 |
| Learning rate | 0.001 |
| Optimizer | Adam |
| Loss function | Binary cross-entropy |
| Input dimension (pixels) | 352 × 208 |
| Convolution kernel size | 3 × 3 |
| Max-pooling kernel size | 2 × 2 |
| Activation functions | Rectified linear unit (ReLU), sigmoid |
| Initial channels | 64 |
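The architecture parameters in the table can be sanity-checked numerically. The short sketch below is an illustration, not the authors' code: it derives the encoder feature-map sizes and channel counts implied by a 5-level U-Net with a 352 × 208 input, 2 × 2 max pooling between levels, and 64 initial channels that double at each level.

```python
def unet_encoder_shapes(height, width, levels, init_channels):
    """Feature-map size and channel count at each U-Net encoder level.

    Each level after the first halves the spatial resolution (2 x 2 max
    pooling) and doubles the channel count, matching the hyperparameter
    table (5 levels, 352 x 208 input, 64 initial channels).
    """
    shapes = []
    h, w, c = height, width, init_channels
    for _ in range(levels):
        shapes.append((h, w, c))
        h, w, c = h // 2, w // 2, c * 2  # pool spatially, double channels
    return shapes

shapes = unet_encoder_shapes(352, 208, levels=5, init_channels=64)
for i, (h, w, c) in enumerate(shapes, start=1):
    print(f"level {i}: {h} x {w}, {c} channels")
# The bottleneck works out to 22 x 13 with 1024 channels, confirming that
# the 352 x 208 input divides cleanly through four pooling steps.
```

Note that both input dimensions are multiples of 2^4 = 16, which is why cropping to exactly 352 × 208 avoids padding issues in the decoder's upsampling path.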
Performance of the U-Net left-femur segmentation model (%):

| Dataset Category | DSC | IoU |
|---|---|---|
| F-I | 37.76 | 23.90 |
| F-II | 67.96 | 52.32 |
| F-III | 61.37 | 45.46 |
| F-IV | 88.25 | 80.85 |
| F-V | 72.54 | 57.93 |
| F-VI | 51.37 | 41.25 |
| F-VII | 48.62 | 35.18 |
| F-VIII | 45.27 | 31.30 |
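The DSC and IoU figures above are the standard overlap metrics for binary segmentation masks, DSC = 2|A∩B| / (|A| + |B|) and IoU = |A∩B| / |A∪B|. A minimal NumPy sketch (with a hypothetical helper name, not taken from the paper) is:

```python
import numpy as np

def dice_and_iou(pred, truth):
    """Dice similarity coefficient and intersection-over-union
    for two binary segmentation masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dsc = 2.0 * inter / (pred.sum() + truth.sum())
    iou = inter / union
    return dsc, iou

# Toy example: two 2x2 masks that overlap in one of three marked pixels.
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [1, 0]])
dsc, iou = dice_and_iou(a, b)
print(round(dsc, 3), round(iou, 3))  # 0.5 0.333
```

The two metrics are monotonically related (IoU = DSC / (2 − DSC)), which is why the table's IoU column tracks the DSC column while always sitting below it.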
SAM and SSIM between ground-truth and predicted 3D reconstructions of the left femur, by isometric view and dataset category (renderings omitted):

| Dataset Category | Metric | Front | Left | Rear | Right | Top | Bottom |
|---|---|---|---|---|---|---|---|
| F-I | SAM | 0.211 | 0.214 | 0.209 | 0.223 | 0.243 | 0.257 |
| F-I | SSIM | 0.433 | 0.454 | 0.501 | 0.532 | 0.502 | 0.544 |
| F-II | SAM | 0.155 | 0.165 | 0.144 | 0.135 | 0.187 | 0.236 |
| F-II | SSIM | 0.677 | 0.658 | 0.712 | 0.714 | 0.701 | 0.689 |
| F-III | SAM | 0.162 | 0.139 | 0.152 | 0.120 | 0.223 | 0.220 |
| F-III | SSIM | 0.668 | 0.669 | 0.708 | 0.729 | 0.688 | 0.708 |
| F-IV | SAM | 0.142 | 0.138 | 0.129 | 0.117 | 0.204 | 0.215 |
| F-IV | SSIM | 0.702 | 0.706 | 0.725 | 0.732 | 0.701 | 0.710 |
| F-V | SAM | 0.151 | 0.145 | 0.134 | 0.121 | 0.196 | 0.223 |
| F-V | SSIM | 0.683 | 0.675 | 0.722 | 0.729 | 0.700 | 0.702 |
| F-VI | SAM | 0.176 | 0.170 | 0.164 | 0.146 | 0.197 | 0.225 |
| F-VI | SSIM | 0.596 | 0.572 | 0.632 | 0.634 | 0.644 | 0.635 |
| F-VII | SAM | 0.183 | 0.157 | 0.170 | 0.176 | 0.197 | 0.225 |
| F-VII | SSIM | 0.551 | 0.572 | 0.631 | 0.603 | 0.637 | 0.633 |
| F-VIII | SAM | 0.201 | 0.193 | 0.184 | 0.192 | 0.212 | 0.236 |
| F-VIII | SSIM | 0.461 | 0.471 | 0.531 | 0.574 | 0.536 | 0.639 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Citation: Apivanichkul, K.; Phasukkit, P.; Dankulchai, P.; Sittiwong, W.; Jitwatcharakomol, T. Enhanced Deep-Learning-Based Automatic Left-Femur Segmentation Scheme with Attribute Augmentation. Sensors 2023, 23, 5720. https://doi.org/10.3390/s23125720