Estimation of Fractal Dimension and Segmentation of Brain Tumor with Parallel Features Aggregation Network
Abstract
1. Introduction
- This study proposes a robust CAD framework based on radiomic parallel features aggregation for the accurate segmentation of three different BT regions. The framework comprises an encoder–decoder architecture with a novel PFA block, which contains multiple parallel feature-extraction branches that learn discriminative information from the BT scans and aggregate it.
- In the encoder module, the PFA block is integrated at low, intermediate, and high levels to capture a comprehensive representation of the BT and preserve diverse multi-level information throughout the encoding process. The multi-level aggregated features capture the overall characteristics of the BT, incorporating local details, such as small tumor boundaries, particularly for ET, intermediate-level structures, such as the shape of the BT, and high-level global information, such as the overall location and size of the BT.
- In contrast to the encoder module, the decoder module utilizes the PFA block to collectively process upscaled low-level, intermediate-level, and high-level bottleneck-rich semantic features in parallel. Subsequently, the PFA block aggregates these semantic features to ensure that the decoder module has access to a diverse range of information, including fine-grained details and high-level contexts.
- Our proposed PFA-Net surpasses the state-of-the-art methods in the field of heterogeneous dataset analysis in terms of both segmentation performance and computational efficiency, with 19.49 million (M) fewer parameters than the previous method.
- The integration of the FD estimation method into our system provides valuable insights into the distributional characteristics of BTs, thereby enhancing the comprehensiveness of our approach. Moreover, our trained PFA-Net is publicly available for fair comparison via the following link (https://github.com/PFA-Net, accessed on 23 November 2023).
2. Related Work
2.1. Homogeneous Dataset Analysis
2.1.1. Handcrafted Feature-Based Methods
2.1.2. Deep Feature-Based Methods
2.2. Heterogeneous Dataset Analysis
2.2.1. Partially Heterogeneous Dataset-Based Methods
2.2.2. Complete Heterogeneous Dataset-Based Methods
3. Proposed Methodology
3.1. Overview of the Workflow
3.2. Architecture and Workflow of PFA-Net
3.3. Architecture of PFA Block and Loss Function
4. Experimental Results
4.1. Experimental Dataset
4.2. Environmental Setup, Pre-Processing, and Training
4.3. Evaluation Metrics and Fractal Dimension Estimation
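The segmentation metrics used throughout this section, the Dice score (DS) and intersection over union (IoU), follow their standard definitions for a predicted region P and a ground-truth region G:

$$\mathrm{DS} = \frac{2\,|P \cap G|}{|P| + |G|}, \qquad \mathrm{IoU} = \frac{|P \cap G|}{|P \cup G|}$$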
Algorithm 1: Pseudocode for the estimation of fractal dimension.

Input: Is — the segmentation output generated by PFA-Net
Output: fractal dimension (FD)

1: Set the maximum box size and make it a power of 2: e = 2^⌈log(max(size(Is))) / log(2)⌉
2: Zero-pad Is so that its dimensions equal e: if size(Is) < e, pad Is to e × e
3: Pre-allocate the box counts: n = zeros(1, e + 1)
4: Compute the number of boxes N(e) containing at least one BT pixel: n(e + 1) = sum(Is(:))
5: While e > 1:
   a. Reduce the box size: e = e/2
   b. Recalculate N(e)
6: Compute log(N(e)) and log(1/e) for each e
7: Fit a line to [log(1/e), log(N(e))] using least squares
8: The fractal dimension is the slope of the fitted line
Return FD
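A minimal runnable MATLAB sketch of this box-counting procedure is given below. The function name, the zero-padding, and the block-partitioning details are illustrative choices rather than the authors' released implementation; the mask is assumed to be a non-empty 2D binary image.

```matlab
function FD = boxCountFD(Is)
% Box-counting fractal dimension of a binary tumor mask Is.
% Illustrative sketch only; assumes a non-empty 2D mask.
Is = logical(Is);
e  = 2^nextpow2(max(size(Is)));     % max box size, rounded up to a power of 2
P  = false(e, e);                   % zero-pad the mask to e-by-e
P(1:size(Is,1), 1:size(Is,2)) = Is;

sizes  = 2.^(0:log2(e));            % box sizes 1, 2, 4, ..., e
counts = zeros(size(sizes));
for k = 1:numel(sizes)
    s = sizes(k);
    % Partition the padded mask into s-by-s boxes and count the boxes
    % that contain at least one tumor pixel.
    blocks   = reshape(P, s, e/s, s, e/s);
    occupied = squeeze(any(any(blocks, 1), 3));
    counts(k) = nnz(occupied);
end

% FD is the slope of log N(e) versus log(1/e), fitted by least squares.
p  = polyfit(log(1 ./ sizes), log(counts), 1);
FD = p(1);
end
```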
4.4. Testing Results of Homogeneous Dataset Analysis
4.4.1. Ablation Studies
- (1) Absence of PFA block.
- (2a) PFA-block1 at the low level of the encoder and PFA-block2 at the intermediate level of the encoder.
- (2b) PFA-block3 at the high level of the encoder and PFA-block4 in the decoder.
- (2c) PFA-block2 at the intermediate level of the encoder and PFA-block4 in the decoder.
- (2d) PFA-block1 at the low level of the encoder and PFA-block3 at the high level of the encoder.
- (3a) PFA-block2 at the intermediate level of the encoder, PFA-block3 at the high level of the encoder, and PFA-block4 in the decoder.
- (3b) PFA-block1 at the low level, PFA-block2 at the intermediate level, and PFA-block3 at the high level of the encoder.
- (4) PFA-block1 at the low level of the encoder, PFA-block2 at the intermediate level of the encoder, PFA-block3 at the high level of the encoder, and PFA-block4 in the decoder.
- (1) PFA-block1 at the low level of the encoder and PFA-block4 in the decoder.
- (2) PFA-block2 at the intermediate level of the encoder and PFA-block4 in the decoder.
- (3) PFA-block3 at the high level of the encoder and PFA-block4 in the decoder.
- (4) PFA-block1 at the low level of the encoder, PFA-block2 at the intermediate level of the encoder, PFA-block3 at the high level of the encoder, and PFA-block4 in the decoder.
4.4.2. Comparisons with State-of-the-Art Methods
4.5. Testing Results of Heterogeneous Dataset Analysis
4.6. FD Estimation for BTs
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Raghavendra, U.; Gudigar, A.; Paul, A.; Goutham, T.; Inamdar, M.A.; Hegde, A.; Devi, A.; Ooi, C.P.; Deo, R.C.; Barua, P.D.; et al. Brain tumor detection and screening using artificial intelligence techniques: Current trends and future perspectives. Comput. Biol. Med. 2023, 163, 107063. [Google Scholar] [CrossRef] [PubMed]
- Louis, D.N.; Ohgaki, H.; Wiestler, O.D.; Cavenee, W.K.; Burger, P.C.; Jouvet, A.; Scheithauer, B.W.; Kleihues, P. The 2007 WHO classification of tumours of the central nervous system. Acta Neuropathol. 2007, 114, 97–109. [Google Scholar] [CrossRef] [PubMed]
- Vescovi, A.L.; Galli, R.; Reynolds, B.A. Brain tumour stem cells. Nat. Rev. Cancer 2006, 6, 425–436. [Google Scholar] [CrossRef] [PubMed]
- Miller, K.D.; Ostrom, Q.T.; Kruchko, C.; Patil, N.; Tihan, T.; Cioffi, G.; Fuchs, H.E.; Waite, K.A.; Jemal, A.; Siegel, R.L.; et al. Brain and other central nervous system tumor statistics, 2021. CA-Cancer J. Clin. 2021, 71, 381–406. [Google Scholar] [CrossRef] [PubMed]
- Cavaliere, R.; Lopes, M.B.S.; Schiff, D. Low-grade gliomas: An update on pathology and therapy. Lancet Neurol. 2005, 4, 760–770. [Google Scholar] [CrossRef]
- Robbins, M.; Greene-Schloesser, D.; Peiffer, A.; Shaw, E.; Chan, M.; Wheeler, K. Radiation-induced brain injury: A review. Front. Oncol. 2012, 2, 30551. [Google Scholar]
- de Leeuw, C.N.; Vogelbaum, M.A. Supratotal resection in glioma: A systematic review. Neuro-Oncol. 2019, 21, 179–188. [Google Scholar] [CrossRef] [PubMed]
- Barnova, K.; Mikolasova, M.; Kahankova, R.V.; Jaros, R.; Kawala-Sterniuk, A.; Snasel, V.; Mirjalili, S.; Pelc, M.; Martinek, R. Implementation of artificial intelligence and machine learning-based methods in brain–computer interaction. Comput. Biol. Med. 2023, 163, 107135. [Google Scholar] [CrossRef] [PubMed]
- Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R.; et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans. Med. Imaging 2015, 34, 1993–2024. [Google Scholar] [CrossRef] [PubMed]
- Menze, B.; Isensee, F.; Wiest, R.; Wiestler, B.; Maier-Hein, K.; Reyes, M.; Bakas, S. Analyzing magnetic resonance imaging data from glioma patients using deep learning. Comput. Med. Imaging Graph 2021, 88, 101828. [Google Scholar] [CrossRef] [PubMed]
- Haider, A.; Arsalan, M.; Lee, M.B.; Owais, M.; Mahmood, T.; Sultan, H.; Park, K.R. Artificial intelligence-based computer-aided diagnosis of glaucoma using retinal fundus images. Expert Syst. Appl. 2022, 207, 117968. [Google Scholar] [CrossRef]
- Esteva, A.; Chou, K.; Yeung, S.; Naik, N.; Madani, A.; Mottaghi, A.; Liu, Y.; Topol, E.; Dean, J.; Socher, R. Deep learning-enabled medical computer vision. npj Digit Med. 2021, 4, 5. [Google Scholar] [CrossRef] [PubMed]
- Owais, M.; SikYoon, H.; Mahmood, T.; Haider, A.; Sultan, H.; Park, K.R. Light-weighted ensemble network with multilevel activation visualization for robust diagnosis of COVID-19 pneumonia from large-scale chest radiographic database. Appl. Soft Comput. 2021, 108, 107490. [Google Scholar] [CrossRef] [PubMed]
- Sultan, H.; Owais, M.; Park, C.; Mahmood, T.; Haider, A.; Park, K.R. Artificial intelligence-based recognition of different types of shoulder implants in X-ray scans based on dense residual ensemble-network for personalized medicine. J. Pers. Med. 2021, 11, 482. [Google Scholar] [CrossRef] [PubMed]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
- Demirhan, A.; Güler, İ. Combining stationary wavelet transform and self-organizing maps for brain MR image segmentation. Eng. Appl. Artif. Intell. 2011, 24, 358–367. [Google Scholar] [CrossRef]
- Guo, P. Brain tissue classification method for clinical decision-support systems. Eng. Appl. Artif. Intell. 2017, 64, 232–241. [Google Scholar] [CrossRef]
- Liu, D.; Sheng, N.; He, T.; Wang, W.; Zhang, J.; Zhang, J. SGEResU-Net for brain tumor segmentation. Math Biosci. Eng. 2022, 19, 5576–5590. [Google Scholar] [CrossRef] [PubMed]
- Sultan, H.; Owais, M.; Nam, S.H.; Haider, A.; Akram, R.; Usman, M.; Park, K.R. MDFU-Net: Multiscale dilated features up-sampling network for accurate segmentation of tumor from heterogeneous brain data. J. King Saud Univ.-Comput. Inf. Sci. 2023, 35, 101560. [Google Scholar] [CrossRef]
- Rehman, M.U.; Ryu, J.; Nizami, I.F.; Chong, K.T. RAAGR2-Net: A brain tumor segmentation network using parallel processing of multiple spatial frames. Comput. Biol. Med. 2023, 152, 106426. [Google Scholar] [CrossRef] [PubMed]
- Jia, Z.; Zhu, H.; Zhu, J.; Ma, P. Two-Branch network for brain tumor segmentation using attention mechanism and super-resolution reconstruction. Comput. Biol. Med. 2023, 157, 106751. [Google Scholar] [CrossRef] [PubMed]
- Bakas, S.; Akbari, H.; Sotiras, A.; Bilello, M.; Rozycki, M.; Kirby, J.S.; Freymann, J.B.; Farahani, K.; Davatzikos, C. Advancing the cancer genome atlas glioma MRI collections with expert segmentation labels and radiomic features. Sci. Data 2017, 4, 170117. [Google Scholar] [CrossRef] [PubMed]
- Bakas, S.; Reyes, M.; Jakab, A.; Bauer, S.; Rempfler, M.; Crimi, A.; Shinohara, R.T.; Berger, C.; Ha, S.M.; Rozycki, M.; et al. Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge 2019. arXiv 2019, arXiv:1811.02629. [Google Scholar] [CrossRef]
- Havlin, S.; Buldyrev, S.; Goldberger, A.; Mantegna, R.N.; Ossadnik, S.; Peng, C.-K.; Simons, M.; Stanley, H. Fractals in biology and medicine. Chaos Solitons Fractals 1995, 6, 171–201. [Google Scholar] [CrossRef] [PubMed]
- Sabir, Z.; Bhat, S.A.; Wahab, H.A.; Camargo, M.E.; Abildinova, G.; Zulpykhar, Z. A bio inspired learning scheme for the fractional order kidney function model with neural networks. Chaos Solitons Fractals 2024, 180, 114562. [Google Scholar] [CrossRef]
- Chen, Y.; Wang, Y.; Li, X. Fractal dimensions derived from spatial allometric scaling of urban form. Chaos Solitons Fractals 2019, 126, 122–134. [Google Scholar] [CrossRef]
- Zook, J.M.; Iftekharuddin, K.M. Statistical analysis of fractal-based brain tumor detection algorithms. Magn. Reson. Imaging 2005, 23, 671–678. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. arXiv 2015, arXiv:1505.04597. [Google Scholar] [CrossRef]
- Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. arXiv 2016, arXiv:1606.06650. [Google Scholar] [CrossRef]
- Velthuizen, R.P.; Clarke, L.P.; Phuphanich, S.; Hall, L.O.; Bensaid, A.M.; Arrington, J.A.; Greenberg, H.M.; Silbiger, M.L. Unsupervised measurement of brain tumor volume on MR images. J. Magn. Reson. Imaging 1995, 5, 594–605. [Google Scholar] [CrossRef] [PubMed]
- Kamber, M.; Shinghal, R.; Collins, D.L.; Francis, G.S.; Evans, A.C. Model-based 3-D segmentation of multiple sclerosis lesions in magnetic resonance brain images. IEEE Trans. Med. Imaging 1995, 14, 442–453. [Google Scholar] [CrossRef] [PubMed]
- Gibbs, P.; Buckley, D.L.; Blackband, S.J.; Horsman, A. Tumour volume determination from MR images by morphological segmentation. Phys. Med. Biol. 1996, 41, 2437–2446. [Google Scholar] [CrossRef] [PubMed]
- Clark, M.C.; Hall, L.O.; Goldgof, D.B.; Velthuizen, R.; Murtagh, F.R.; Silbiger, M.S. Automatic tumor segmentation using knowledge-based techniques. IEEE Trans. Med. Imaging 1998, 17, 187–201. [Google Scholar] [CrossRef] [PubMed]
- Kaus, M.R.; Warfield, S.K.; Nabavi, A.; Chatzidakis, E.; Black, P.M.; Jolesz, F.A.; Kikinis, R. Segmentation of meningiomas and low grade gliomas in MRI. In Proceedings of the Second International Conference on Medical Image Computing and Computer-Assisted Intervention–MICCAI’99, Berlin, Heidelberg, 19–22 September 1999; pp. 1–10. [Google Scholar] [CrossRef]
- Warfield, S.K.; Kaus, M.; Jolesz, F.A.; Kikinis, R. Adaptive, template moderated, spatially varying statistical classification. Med. Image. Anal. 2000, 4, 43–55. [Google Scholar] [CrossRef] [PubMed]
- Mazzara, G.P.; Velthuizen, R.P.; Pearlman, J.L.; Greenberg, H.M.; Wagner, H. Brain tumor target volume determination for radiation treatment planning through automated MRI segmentation. Int. J. Radiat. Oncol. Biol. Phys. 2004, 59, 300–312. [Google Scholar] [CrossRef] [PubMed]
- Tustison, N.J.; Shrinidhi, K.L.; Wintermark, M.; Durst, C.R.; Kandel, B.M.; Gee, J.C.; Grossman, M.C.; Avants, B.B. Optimal symmetric multimodal templates and concatenated random forests for supervised brain tumor segmentation (simplified) with ANTsR. Neuroinformatics 2015, 13, 209–225. [Google Scholar] [CrossRef]
- Pinto, A.; Pereira, S.; Correia, H.; Oliveira, J.; Rasteiro, D.M.L.D.; Silva, C.A. Brain tumour segmentation based on extremely randomized forest with high-level features. In Proceedings of the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 3037–3040. [Google Scholar] [CrossRef]
- Kikinis, R.; Pieper, S. 3D Slicer as a tool for interactive brain tumor segmentation. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; pp. 6982–6984. [Google Scholar] [CrossRef]
- Gao, Y.; Kikinis, R.; Bouix, S.; Shenton, M.; Tannenbaum, A. A 3D interactive multi-object segmentation tool using local robust statistics driven active contours. Med. Image Anal. 2012, 16, 1216–1227. [Google Scholar] [CrossRef] [PubMed]
- Jun, W.; Haoxiang, X.; Wang, Z. Brain tumor segmentation using dual-path attention U-Net in 3D MRI images. In Proceedings of the 6th International MICCAI Brain Workshop, Lima, Peru, 4 October 2020; Springer International Publishing: Lima, Peru, 2020; pp. 183–193. [Google Scholar] [CrossRef]
- Wang, Y.; Zhang, Y.; Hou, F.; Liu, Y.; Tian, J.; Zhong, C.; Zhang, Y.; He, Z. Modality-pairing learning for brain tumor segmentation. In Proceedings of the 6th International MICCAI Brain Workshop, Lima, Peru, 4 October 2020; Springer International Publishing: Lima, Peru, 2020; pp. 230–240. [Google Scholar] [CrossRef]
- Yuan, Y. Automatic brain tumor segmentation with scale attention network. In Proceedings of the 6th International MICCAI Brain Workshop, Lima, Peru, 4 October 2020; Springer International Publishing: Lima, Peru, 2020; pp. 285–294. [Google Scholar] [CrossRef]
- Henry, T.; Carré, A.; Lerousseau, M.; Estienne, T.; Robert, C.; Paragios, N.; Deutsch, E. Brain tumor segmentation with self-ensembled, deeply-supervised 3D U-Net neural networks: A BraTS 2020 challenge solution. In Proceedings of the 6th International MICCAI Brain Workshop, Lima, Peru, 4 October 2020; Springer International Publishing: Lima, Peru, 2020; pp. 327–339. [Google Scholar] [CrossRef]
- Sundaresan, V.; Griffanti, L.; Jenkinson, M. Brain tumour segmentation using a triplanar ensemble of U-Nets on MR images. In Proceedings of the 6th International MICCAI Brain Workshop, Lima, Peru, 4 October 2020; Springer International Publishing: Lima, Peru, 2020; pp. 340–353. [Google Scholar] [CrossRef]
- Ballestar, L.M.; Vilaplana, V. MRI brain tumor segmentation and uncertainty estimation using 3D-UNet architectures. In Proceedings of the 6th International MICCAI Brain Workshop, Lima, Peru, 4 October 2020; Springer International Publishing: Lima, Peru, 2020; pp. 376–390. [Google Scholar] [CrossRef]
- Zhang, Y.; Wu, J.; Huang, W.; Chen, Y.; Wu, E.X.; Tang, X. Utility of brain parcellation in enhancing brain tumor segmentation and survival prediction. In Proceedings of the 6th International MICCAI Brain Workshop, Lima, Peru, 4 October 2020; Springer International Publishing: Lima, Peru, 2020; pp. 391–400. [Google Scholar] [CrossRef]
- Zhao, C.; Zhao, Z.; Zeng, Q.; Feng, Y. MVP U-Net: Multi-view pointwise U-Net for brain tumor segmentation. In Proceedings of the 6th International MICCAI Brain Workshop, Lima, Peru, 4 October 2020; Springer International Publishing: Lima, Peru, 2020; pp. 93–103. [Google Scholar] [CrossRef]
- Isensee, F.; Jäger, P.F.; Full, P.M.; Vollmuth, P.; Maier-Hein, K.H. nnU-Net for Brain Tumor Segmentation. In Proceedings of the 6th International MICCAI Brain Workshop, Lima, Peru, 4 October 2020; Springer International Publishing: Lima, Peru, 2020; pp. 118–132. [Google Scholar] [CrossRef]
- Isensee, F.; Jaeger, P.F.; Kohl, S.A.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 2021, 18, 203–211. [Google Scholar] [CrossRef] [PubMed]
- Awasthi, N.; Pardasani, R.; Gupta, S. Multi-threshold Attention U-Net (MTAU) based model for multimodal brain tumor segmentation in MRI scans. In Proceedings of the 6th International MICCAI Brain Workshop, Lima, Peru, 4 October 2020; Springer International Publishing: Lima, Peru, 2020; pp. 168–178. [Google Scholar] [CrossRef]
- Agravat, R.R.; Raval, M.S. 3D semantic segmentation of brain tumor for overall survival prediction. In Proceedings of the 6th International MICCAI Brain Workshop, Lima, Peru, 4 October 2020; Springer International Publishing: Lima, Peru, 2020; pp. 215–227. [Google Scholar] [CrossRef]
- Xu, J.H.; Teng, W.P.K.; Wang, X.J.; Nürnberger, A. A deep supervised U-attention net for pixel-wise brain tumor segmentation. In Proceedings of the 6th International MICCAI Brain Workshop, Lima, Peru, 4 October 2020; Springer International Publishing: Lima, Peru, 2020; pp. 278–289. [Google Scholar] [CrossRef]
- Saeed, M.U.; Ali, G.; Bin, W.; Almotiri, S.H.; AlGhamdi, M.A.; Nagra, A.A.; Masood, K.; Amin, R.U. RMU-Net: A novel residual mobile U-Net model for brain tumor segmentation from MR images. Electronics 2021, 10, 1962. [Google Scholar] [CrossRef]
- Cirillo, M.D.; Abramian, D.; Eklund, A. Vox2Vox: 3D-GAN for brain tumour segmentation. In Proceedings of the 6th International MICCAI Brain Workshop, Lima, Peru, 4 October 2020; pp. 274–284. [Google Scholar] [CrossRef]
- Vu, M.H.; Nyholm, T.; Löfstedt, T. Multi-decoder networks with multi-denoising inputs for tumor segmentation. In Proceedings of the 6th International MICCAI Brain Workshop, Lima, Peru, 4 October 2020; Springer International Publishing: Lima, Peru, 2020; pp. 412–423. [Google Scholar] [CrossRef]
- Yang, Q.; Yuan, Y. Learning dynamic convolutions for multi-modal 3D MRI brain tumor segmentation. In Proceedings of the 6th International MICCAI Brain Workshop, Lima, Peru, 4 October 2020; Springer International Publishing: Lima, Peru, 2020; pp. 441–451. [Google Scholar] [CrossRef]
- Guan, X.; Yang, G.; Ye, J.; Yang, W.; Xu, X.; Jiang, W.; Lai, X. 3D AGSE-VNet: An automatic brain tumor MRI data segmentation framework. BMC. Med. Imaging 2022, 22, 6. [Google Scholar] [CrossRef] [PubMed]
- Fang, Y.; Huang, H.; Yang, W.; Xu, X.; Jiang, W.; Lai, X. Nonlocal convolutional block attention module VNet for gliomas automatic segmentation. Int J. Imaging Syst. Technol. 2022, 32, 528–543. [Google Scholar] [CrossRef]
- Zhu, Z.; Sun, M.; Qi, G.; Li, Y.; Gao, X.; Liu, Y. Sparse dynamic volume TransUNet with multi-level edge fusion for brain tumor segmentation. Comput. Biol. Med. 2024, 172, 108284. [Google Scholar] [CrossRef] [PubMed]
- Aboussaleh, I.; Riffi, J.; el Fazazy, K.; Mahraz, A.M.; Tairi, H. 3DUV-NetR+: A 3D hybrid semantic architecture using transformers for brain tumor segmentation with multiModal MR images. Results Eng. 2024, 21, 101892. [Google Scholar] [CrossRef]
- Feng, Y.; Cao, Y.; An, D.; Liu, P.; Liao, X.; Yu, B. DAUnet: A U-shaped network combining deep supervision and attention for brain tumor segmentation. Knowl.-Based Syst. 2024, 285, 111348. [Google Scholar] [CrossRef]
- van der Voort, S.R.; Incekara, F.; Wijnenga, M.M.J.; Kapsas, G.; Gahrmann, R.; Schouten, J.W.; Tewarie, R.N.; Lycklama, G.J.; Hamer, P.C.D.W.; Eijgelaar, R.S.; et al. Combined molecular subtyping, grading, and segmentation of glioma using multi-task deep learning. Neuro-Oncology 2022, 25, 279–289. [Google Scholar] [CrossRef] [PubMed]
- Clark, K.; Vendt, B.; Smith, K.; Freymann, J.; Kirby, J.; Koppel, P.; Moore, S.; Phillips, S.; Maffitt, D.; Pringle, M.; et al. The cancer imaging archive (TCIA): Maintaining and operating a public information repository. J. Digit Imaging 2013, 26, 1045–1057. [Google Scholar] [CrossRef] [PubMed]
- Schmainda, K.; Prah, M. Data from Brain-Tumor-Progression. Cancer Imaging Archive. 2019. Available online: https://www.cancerimagingarchive.net/collection/brain-tumor-progression/ (accessed on 13 May 2024).
- Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the 15th European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar]
- Springenberg, J.T.; Dosovitskiy, A.; Brox, T.; Riedmiller, M. Striving for simplicity: The all convolutional net. arXiv 2015, arXiv:1412.6806. [Google Scholar] [CrossRef]
- Jadon, S. A survey of loss functions for semantic segmentation. In Proceedings of the IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), Via Del Mar, Chile, 27–29 October 2020; pp. 1–7. [Google Scholar] [CrossRef]
- NVIDIA GeForce 10 Series. Available online: https://www.nvidia.com/en-us/geforce/10-series/ (accessed on 24 August 2023).
- MATLAB 2021b. Available online: https://www.mathworks.com/products/matlab.html/ (accessed on 24 August 2023).
- Bertels, J.; Eelbode, T.; Berman, M.; Vandermeulen, D.; Maes, F.; Bisschops, R.; Blaschko, M.B. Optimizing the dice score and jaccard index for medical image segmentation: Theory & practice. In Proceedings of the 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention, Shenzen, China, 13–17 October 2019; pp. 92–100. [Google Scholar] [CrossRef]
- Brouty, X.; Garcin, M. Fractal properties, information theory, and market efficiency. Chaos Solitons Fractals 2024, 180, 114543. [Google Scholar] [CrossRef]
- Yin, J. Dynamical fractal: Theory and case study. Chaos Solitons Fractals 2023, 176, 114190. [Google Scholar] [CrossRef]
- Livingston, E.H. Who was student and why do we care so much about his t-test? J Surg. Res. 2004, 118, 58–65. [Google Scholar] [CrossRef] [PubMed]
- Müller, D.; Soto-Rey, I.; Kramer, F. Towards a guideline for evaluation metrics in medical image segmentation. BMC. Res. Notes 2022, 15, 210. [Google Scholar] [CrossRef] [PubMed]
- Ahuja, S.; Panigrahi, B.K.; Gandhi, T.K. Fully automatic brain tumor segmentation using DeepLabv3+ with variable loss functions. In Proceedings of the 8th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 26–27 August 2021; pp. 522–526. [Google Scholar] [CrossRef]
- Akbar, A.S.; Fatichah, C.; Suciati, N. Single level UNet3D with multipath residual attention block for brain tumor segmentation. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 3247–3258. [Google Scholar] [CrossRef]
- Alqazzaz, S.; Sun, X.; Yang, X.; Nokes, L. Automated brain tumor segmentation on multi-modal MR image using SegNet. Comp. Vis. Media 2019, 5, 209–219. [Google Scholar] [CrossRef]
- Sun, J.; Peng, Y.; Guo, Y.; Li, D. Segmentation of the multimodal brain tumor image used the multi-pathway architecture method based on 3D FCN. Neurocomputing 2021, 423, 34–45. [Google Scholar] [CrossRef]
- Mandelbrot, B. How long is the coast of Britain? Statistical self-similarity and fractional dimension. Science 1967, 156, 636–638. [Google Scholar] [CrossRef] [PubMed]
- Chan, A.; Tuszynski, J.A. Automatic prediction of tumour malignancy in breast cancer with fractal dimension. R. Soc. Open Sci. 2016, 3, 160558. [Google Scholar] [CrossRef] [PubMed]
- Cohen, J. A power primer. Psychol. Bull. 1992, 112, 155–159. [Google Scholar] [CrossRef] [PubMed]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
- Spasić, S. On 2D generalization of Higuchi’s fractal dimension. Chaos Solitons Fractals 2014, 69, 179–187. [Google Scholar] [CrossRef]
- Islam, A.; Reza, S.M.; Iftekharuddin, K.M. Multifractal texture estimation for detection and segmentation of brain tumors. IEEE Trans. Biomed. Eng. 2013, 60, 3204–3215. [Google Scholar] [CrossRef] [PubMed]
- Zaia, A.; Zannotti, M.; Losa, L.; Maponi, P. Fractal features of muscle to quantify fatty infiltration in aging and pathology. Fractal Fract. 2024, 8, 275. [Google Scholar] [CrossRef]
Methods | Networks | References | Advantages | Disadvantages | Results
---|---|---|---|---|---
Handcrafted feature-based | Supervised and unsupervised methods | [30] | Supervised methods were superior in performance over unsupervised ones | Unsupervised methods required large processing time | 94% TPR
 | Model-based and nonmodel-based | [31] | Model-based methods have low error | Errors were manually corrected | 56–82% FP was reduced
 | Morphological | [32] | An appropriate method for measuring tumor volume in a well-defined region | Results were not compared with ground truth and lacked authenticity | Mean difference is 0.8 ± 1.8 cm³
 | Knowledge-based segmentation | [33] | Knowledge-engineering concept was used | Generated a large number of FP cases | Some slices have a 90% matching ratio
 | Statistical rule | [34] | Comparable accuracy to the manual method in segmenting tumor | Method was dependent on an atlas | 99% Acc
 | Adaptive template | [35] | Integration of registration methods with classification methods | Pre-processing and an external template are needed | -
 | kNN | [36] | Performed effectively in a challenging case involving a cystic formation | Less accurate compared to expert oncologists | 56% Acc
 | Concatenated random forests | [37] | Symmetric multivariate templates improve performance | Post-processing is required | 74% for ET, 87% for WT, 78% for TC
 | Extremely random forest | [38] | Extra-Trees classifier utilizes local and contextual features | Post-processing is required | 73% for ET, 83% for WT, 78% for TC
Deep feature-based | Dual-path attention 3D U-Net | [41] | Used a dual-pathway attention gate with a double-pathway residual block | Training strategy of random cropping affects the performance on AT | 82.3% for ET, 91.2% for WT, 87.8% for TC
 | Modality-pairing 3D U-Net | [42] | Employed parallel branches to extract features from different modalities | Computationally complex; requires post-processing | 86.3% for ET, 92.4% for WT, 89.8% for TC
 | 3D SA-Net | [43] | Extension of vanilla U-Net with attention blocks | Computational complexity | 81.25% for ET, 91.51% for WT, 87.73% for TC
 | 3D U-Net | [44] | Ensemble training approach is used | Ensemble approach is computationally expensive | 81.44% for ET, 90.37% for WT, 87.01% for TC
 | Triplanar U-Net | [45] | Proposed an ensemble model utilizing three planes of MRI scans | Requires post-processing | 83% for ET, 93% for WT, 87% for TC
 | Ensemble model | [46] | Ensemble of 3D U-Net and V-Net mutually enhanced performance | Augmentation is applied; requires post-processing; low performance for AT | 77% for ET, 85% for WT, 85% for TC
 | Multiple 3D U-Net | [47] | Employed an ensemble technique with a coarse-to-fine strategy and brain parcellation | Brain parcellation around tumor regions may be compromised; low performance for AT | 79.41% for ET, 92.29% for WT, 87.70% for TC
 | MVP U-Net | [48] | Introduced 2D multi-view layers in a 3D network | Low performance for all three BT regions; three pre-processing steps are required | 60% for ET, 79.9% for WT, 63.5% for TC
 | nnU-Net | [49] | Achieved the top position in the BraTS-2020 challenge | Lacked extensive experimental validation; training spans 1000 epochs | 81.37% for ET, 91.87% for WT, 87.97% for TC
 | MTAU | [51] | Lower memory requirements and shorter training time | Low performance for all three BT regions | 59% for ET, 72% for WT, 61% for TC
 | Dense 3D U-Net | [52] | Integration of a dense connection with each layer module | Two post-processing steps are required; pre-processing is required | 78.2% for ET, 88.2% for WT, 83.2% for TC
 | U-attention Net | [53] | Incorporates an attention gate and multistage layers within U-Net | Performance is negatively impacted by augmentation techniques | 81.79% for ET, 91.90% for WT, 86.35% for TC
 | RMU-Net | [54] | An ensemble CNN integrating the architectures of U-Net and MobileNetV2 | An unfair comparison with cutting-edge methods | 83.26% for ET, 91.35% for WT, 88.13% for TC
 | SGEResU-Net | [18] | A combined model of an attention module together with a residual module | Very low performance for WT compared to that of 3D U-Net | 79.40% for ET, 90.48% for WT, 85.22% for TC
 | 3D-GAN | [55] | Segmentation is performed volume-to-volume | Pre-processing and augmentation are applied; low performance for AT | 79.56% for ET, 91.63% for WT, 89.25% for TC
 | Multi-decoder | [56] | Employed a multi-decoder architecture with a common encoder | Requires pre-processing; low performance for AT | 78.13% for ET, 92.75% for WT, 88.34% for TC
 | Dynamic DMF-Net | [57] | Group convolution and dilated convolution are incorporated to learn multilevel features | Performance for AT remains low despite the multi-branch structure and dynamic modules | 74.58% for ET, 91.36% for WT, 84.91% for TC
 | AGSE-VNet | [58] | The squeeze-and-excitation module is integrated into the encoder for BT segmentation | Low performance for all three BT regions; lack of ablation studies | 70% for ET, 85% for WT, 77% for TC
 | NLCA-VNet | [59] | Trained the model with a small-sized dataset | Statistical differences between VNet and NLCA-VNet are not provided | 74.80% for ET, 90.50% for WT, 88.50% for TC
 | SDV-TUNet | [60] | Fusion of multi-level edges | Volumetric data increase the computational cost | 82.48% for ET, 90.22% for WT, 89.20% for TC
 | 3DUV-NetR+ | [61] | Combination of U-Net and V-Net | Higher computational cost compared to its sub-models | 81.70% for ET, 91.95% for WT, 82.80% for TC
 | DAUnet | [62] | A 3D attention module comprising spatial and channel attention with a residual connection | Supervision loss increases the training time | 83.30% for ET, 90.60% for WT, 89.20% for TC
 | PFA-Net | Proposed | Multi-level parallel features are exploited and aggregated; high accuracy for small-area tumors | Intensive computations are required for parallel feature extraction | 87.54% for ET, 93.42% for WT, 91.02% for TC
Methods | Networks | References | Advantages | Disadvantages | Results (DS) |
---|---|---|---|---|---|
Partially heterogeneous analysis-based | Multi-task CNN | [63] | Concurrently executes multiple tasks | Heterogeneous dataset analysis is partially fulfilled | 84%
Complete heterogeneous analysis-based | MDFU-Net | [19] | Multiscale dilated features are upsampled; performs complete heterogeneous dataset analysis | Pre-processing is required | 62.66%
 | PFA-Net | Proposed | Multi-level parallel features are exploited and aggregated; performs complete heterogeneous dataset analysis | Intensive computations are needed for parallel feature extraction | 64.58%
Layer Name | # Iterations | Input Size | Output Size | Filter Size
---|---|---|---|---
Input | 1 | - | - | -
Conv1 | 1 | - | - | -
Max Pooling | 1 | - | - | -
B1 | 2 | - | - | -
PFA-block1 | 1 | - | - | -
B2 | 2 | - | - | -
PFA-block2 | 1 | - | - | -
B3 | 4 | - | - | -
Upsample1 | 1 | - | - | -
PFA-block3 | 1 | - | - | -
Upsample2 | 1 | - | - | -
Concatenation | 1 | - | - | -
Conv2 | 1 | - | - | -
PFA-block4 | 1 | - | - | -
Conv3 | 1 | - | - | -
Conv4 | 1 | - | - | -
Upsample3 | 1 | - | - | -
SoftMax | 1 | - | - | -
Pixel Classification | 1 | - | - | -
Block | Layer Name | # Iterations | Input Size | Output Size | Filter Size
---|---|---|---|---|---
PFA-block1 | Conv5 | 4 | - | - | -
 | PW Conv1 | 4 | - | - | -
 | BN1 | 4 | - | - | -
 | ReLU1 | 4 | - | - | -
 | Cat1 | 1 | * | - | -
PFA-block2 | Conv6 | 4 | - | - | -
 | PW Conv2 | 4 | - | - | -
 | BN2 | 4 | - | - | -
 | ReLU2 | 4 | - | - | -
 | Cat2 | 1 | * | - | -
PFA-block3 | Conv7 | 4 | - | - | -
 | PW Conv3 | 4 | - | - | -
 | BN3 | 4 | - | - | -
 | ReLU3 | 4 | - | - | -
 | Cat3 | 1 | * | - | -
PFA-block4 | Conv8 | 4 | - | - | -
 | PW Conv4 | 4 | - | - | -
 | BN4 | 4 | - | - | -
 | ReLU4 | 4 | - | - | -
 | Cat4 | 1 | * | - | -
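The repeated Conv → PW Conv → BN → ReLU rows above indicate that each PFA block consists of four parallel branches whose outputs are aggregated by concatenation. A minimal sketch of such a block in MATLAB's Deep Learning Toolbox follows; the input size, filter count, and per-branch dilation factors are illustrative assumptions, since the exact hyperparameters are not recoverable from the table.

```matlab
% Illustrative sketch of one PFA block: four parallel branches of
% conv -> pointwise conv -> batch norm -> ReLU, aggregated by depth
% concatenation. Input size, channel count, and dilations are assumed.
numFilters = 64;                                       % assumed
lg = layerGraph(imageInputLayer([128 128 1], 'Name', 'in'));
for b = 1:4
    branch = [
        convolution2dLayer(3, numFilters, 'Padding', 'same', ...
            'DilationFactor', b, 'Name', sprintf('conv_%d', b))
        convolution2dLayer(1, numFilters, 'Name', sprintf('pw_%d', b))
        batchNormalizationLayer('Name', sprintf('bn_%d', b))
        reluLayer('Name', sprintf('relu_%d', b))];
    lg = addLayers(lg, branch);                        % branch layers connect in series
    lg = connectLayers(lg, 'in', sprintf('conv_%d', b));
end
% Aggregate the four branch outputs along the channel dimension.
lg = addLayers(lg, depthConcatenationLayer(4, 'Name', 'cat'));
for b = 1:4
    lg = connectLayers(lg, sprintf('relu_%d', b), sprintf('cat/in%d', b));
end
```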
Case | PFA-Block1 | PFA-Block2 | PFA-Block3 | PFA-Block4 | DS | IoU |
---|---|---|---|---|---|---|
1 | ✕ | ✕ | ✕ | ✕ | 84.55 | 76.47 |
2a | ◯ | ◯ | ✕ | ✕ | 85.75 | 77.86 |
2b | ✕ | ✕ | ◯ | ◯ | 85.11 | 77.20 |
2c | ✕ | ◯ | ✕ | ◯ | 85.56 | 77.67 |
2d | ◯ | ✕ | ◯ | ✕ | 85.29 | 77.39 |
3a | ✕ | ◯ | ◯ | ◯ | 86.55 | 78.83 |
3b | ◯ | ◯ | ◯ | ✕ | 86.11 | 78.29 |
4 | ◯ | ◯ | ◯ | ◯ | 87.54 | 80.09 |
Case | PFA-Block1 (Encoder) | PFA-Block2 (Encoder) | PFA-Block3 (Encoder) | PFA-Block4 (Decoder) | DS | IoU
---|---|---|---|---|---|---
1 | ◯ | ✕ | ✕ | ◯ | 85.46 | 77.51 |
2 | ✕ | ◯ | ✕ | ◯ | 85.56 | 77.67 |
3 | ✕ | ✕ | ◯ | ◯ | 85.11 | 77.20
4 | ◯ | ◯ | ◯ | ◯ | 87.54 | 80.09 |
Configuration of PFA Block | Number of Learnable Parameters (Million) | DS (%) | IoU (%) |
---|---|---|---|
3 | 28.16 | 86.25 | 78.48 |
4 | 31.48 | 87.54 | 80.09 |
5 | 34.81 | 85.47 | 77.54 |
Loss Function | DS | IoU |
---|---|---|
CE | 91.02 | 84.94 |
WCE | 81.49 | 72.98
DL | 88.02 | 80.87 |
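Here CE, WCE, and DL denote cross-entropy, weighted cross-entropy, and Dice loss. For reference, their standard per-pixel forms over N pixels and C classes are given below; the exact class weights w_c used for WCE are not restated here.

$$L_{\mathrm{CE}} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} g_{i,c}\,\log p_{i,c}, \qquad L_{\mathrm{WCE}} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} w_c\, g_{i,c}\,\log p_{i,c}$$

$$L_{\mathrm{DL}} = 1 - \frac{2\sum_{i=1}^{N} p_i\, g_i}{\sum_{i=1}^{N} p_i + \sum_{i=1}^{N} g_i}$$

where $p$ denotes the predicted probability and $g$ the ground-truth label.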
Network | DS |
---|---|
Dual-path attention 3D U-Net [41] | 82.3 |
Modality-pairing 3D U-Net [42] | 86.3 |
3D SA-Net [43] | 81.25 |
3D U-Net [44] | 81.44 |
Triplanar U-Net [45] | 83 |
Ensemble model [46] | 77 |
Multiple 3D U-Net [47] | 79.41 |
MVP U-Net [48] | 60 |
nnU-Net [49] | 81.37 |
MTAU [51] | 59 |
Dense 3D U-Net [52] | 78.2 |
RMU-Net [54] | 83.26 |
U-attention Net [53] | 81.79 |
SGEResU-Net [18] | 79.40 |
3D-GAN [55] | 79.56 |
Multi-decoder [56] | 78.13 |
Dynamic DMF-Net [57] | 74.58 |
AGSE-VNet [58] | 70 |
NLCA-VNet [59] | 74.8 |
SDV-TUNet [60] | 82.48 |
3DUV-NetR+ [61] | 81.70 |
DAUnet [62] | 83.3 |
PFA-Net (proposed) | 87.54 |
Network | DS |
---|---|
Dual-path attention 3D U-Net [41] | 87.8 |
Modality-pairing 3D U-Net [42] | 89.8 |
3D SA-Net [43] | 87.73 |
3D U-Net [44] | 87.01 |
Triplanar U-Net [45] | 87 |
Ensemble model [46] | 85 |
Multiple 3D U-Net [47] | 87.70 |
MVP U-Net [48] | 63.5 |
nnU-Net [49] | 87.97 |
MTAU [51] | 61 |
Dense 3D U-Net [52] | 83.2 |
RMU-Net [54] | 88.13 |
U-attention Net [53] | 86.35 |
SGEResU-Net [18] | 85.22 |
3D-GAN [55] | 89.25 |
Multi-decoder [56] | 88.34 |
Dynamic DMF-Net [57] | 84.91 |
AGSE-VNet [58] | 77 |
NLCA-VNet [59] | 88.50 |
SDV-TUNet [60] | 89.20 |
3DUV-NetR+ [61] | 82.80 |
DAUnet [62] | 89.2 |
PFA-Net (proposed) | 91.02 |
Network | DS
---|---|
Dual-path attention 3D U-Net [41] | 91.20 |
Modality-pairing 3D U-Net [42] | 92.40 |
3D SA-Net [43] | 91.51 |
3D U-Net [44] | 90.37 |
Triplanar U-Net [45] | 93 |
Ensemble model [46] | 85 |
Multiple 3D U-Net [47] | 92.29 |
MVP U-Net [48] | 79.90 |
nnU-Net [49] | 91.87 |
MTAU [51] | 72 |
Dense 3D U-Net [52] | 88.20 |
RMU-Net [54] | 91.35 |
U-attention Net [53] | 91.90 |
SGEResU-Net [18] | 90.48 |
3D-GAN [55] | 91.63 |
Multi-decoder [56] | 92.75 |
Dynamic DMF-Net [57] | 91.36 |
AGSE-VNet [58] | 85 |
NLCA-VNet [59] | 90.50 |
SDV-TUNet [60] | 90.22 |
3DUV-NetR+ [61] | 91.95 |
DAUnet [62] | 90.6 |
PFA-Net (proposed) | 93.42 |
Network | DS (%) | IoU (%) | Number of Parameters (M) | GFLOPS | Memory Usage (MB) | Processing Time (s) |
---|---|---|---|---|---|---|
DeepLabV3+ (ResNet18) [66,76] | 50.16 | 49.94 | 20.61 | 53.26 | 209.03 | 1.1 |
DeepLabV3+ (ResNet50) [76] | 53.44 | 51.66 | 43.98 | 72.39 | 412 | 2.32 |
DeepLabV3+ (MobileNetV2) [76] | 50.21 | 49.92 | 6.78 | 28.16 | 149.4 | 1.79 |
U-Net [18,44,77] | 51.18 | 49.63 | 31.03 | 141.72 | 354.94 | 0.4 |
SegNet (VGG16) [78] | 48.61 | 47.29 | 29.44 | 112.96 | 525.15 | 1.08 |
FCN (32s) [79] | 49.94 | 49.82 | 134.29 | 173.40 | 1040.78 | 0.52 |
MDFU-Net [19] | 62.66 | 56.96 | 50.97 | 105.85 | 537.8 | 2.3 |
PFA-Net (proposed) | 64.58 | 59.03 | 31.48 | 126.85 | 318.69 | 1.53 |
Results | ET (Figure 11a) | ET (Figure 11b) | TC (Figure 12a) | TC (Figure 12b) | WT (Figure 13a) | WT (Figure 13b) | Heterogeneous (Figure 14a) | Heterogeneous (Figure 14b)
---|---|---|---|---|---|---|---|---
FD | 1.1956 | 1.1312 | 1.2887 | 1.2562 | 1.5601 | 1.4406 | 1.0776 | 1.3711 |
R2 | 0.957 | 0.934 | 0.983 | 0.96 | 0.982 | 0.98 | 0.983 | 0.954 |
C | 0.9783 | 0.9665 | 0.9915 | 0.9799 | 0.9909 | 0.9901 | 0.9913 | 0.9768 |
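The R² and C values reported above appear to be the coefficient of determination and the correlation coefficient of the log–log box-counting fit (note that C² ≈ R² in each column, e.g., 0.9783² ≈ 0.957, as expected for a simple linear fit). A short sketch, reusing the sizes and counts vectors from the boxCountFD example above, is:

```matlab
% Goodness of fit for the box-counting line (sizes and counts come
% from the boxCountFD sketch in Section 4.3).
x = log(1 ./ sizes);  y = log(counts);
p = polyfit(x, y, 1);  yhat = polyval(p, x);
R2 = 1 - sum((y - yhat).^2) / sum((y - mean(y)).^2);  % coefficient of determination
Cm = corrcoef(x, y);   C = Cm(1, 2);                  % Pearson correlation
```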
Tumor Type | Base Model DS (%) | Proposed Model DS (%) | Gain (%) | p-Value | Cohen's d Value
---|---|---|---|---|---
ET | 84.55 | 87.54 | 2.99 | 0.0009 | 1.3267 |
TC | 86.71 | 91.02 | 4.31 | 0.0052 | 1.0012 |
WT | 91.78 | 93.42 | 1.64 | 0.003 | 2.4371 |
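Effect sizes and p-values of this kind can be reproduced from per-image Dice scores as sketched below. The vectors baseDS and propDS are hypothetical per-image score vectors, a paired t-test is assumed, and Cohen's d is computed with the pooled standard deviation; the paper's exact test setup may differ.

```matlab
% Significance and effect size from per-image Dice scores of the base
% and proposed models (baseDS, propDS are hypothetical vectors).
[~, pValue] = ttest(propDS, baseDS);                % paired Student's t-test
pooledSD = sqrt((var(baseDS) + var(propDS)) / 2);   % pooled standard deviation
cohensD  = (mean(propDS) - mean(baseDS)) / pooledSD;
fprintf('p = %.4f, Cohen''s d = %.4f\n', pValue, cohensD);
```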
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).