CCFNet: Collaborative Cross-Fusion Network for Medical Image Segmentation
Abstract
1. Introduction
- We propose CCFNet, a collaborative cross-fusion network that integrates Transformer-based global representations with convolutional local features in a parallel, interactive manner. Unlike prior fusion strategies, the collaborative cross-fusion module (CCFM) not only encodes the hierarchical local and global representations independently but also aggregates them efficiently, exploiting the complementary strengths of the CNN and the Transformer (an illustrative sketch of the fusion layer follows this list).
- Within the CCFM, the CSF block adaptively fuses the correlations between the local tokens and the global tokens and reorganizes the two feature streams, introducing the convolution-specific inductive bias into the Transformer. The spatial feature injector (SFI) block narrows the spatial information gap between local and global features, preventing asymmetry between the extracted representations and injecting the Transformer's global context into the CNN.
- On two publicly available medical image segmentation datasets, CCFNet outperforms competitive segmentation models, demonstrating its effectiveness and superiority.
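To make the parallel interaction concrete, the PyTorch sketch below shows one plausible fusion stage: a cross-attention CSF block that passes CNN information to the Transformer, and an additive SFI block that reshapes the updated tokens back into the CNN feature map. The module names, shapes, and layer choices (`CSFBlock`, `SFIBlock`, `ParallelFusionLayer`, `nn.MultiheadAttention` for the cross-attention) are our own illustrative assumptions, not the authors' implementation; Section 3 defines the actual blocks.

```python
# Minimal PyTorch sketch of one parallel fusion stage in the spirit of the
# CCFM. All module names, shapes, and layer choices are illustrative
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class CSFBlock(nn.Module):
    """CSF sketch: local (CNN) tokens and global (Transformer) tokens are
    fused via cross-attention, injecting convolutional inductive bias."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, local_tokens: torch.Tensor, global_tokens: torch.Tensor) -> torch.Tensor:
        # Queries from the CNN branch; keys/values from the Transformer branch.
        fused, _ = self.attn(local_tokens, global_tokens, global_tokens)
        return self.norm(global_tokens + fused)


class SFIBlock(nn.Module):
    """SFI sketch: global tokens are reshaped to a spatial map and added
    back into the CNN features, closing the spatial information gap."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, cnn_feat: torch.Tensor, global_tokens: torch.Tensor) -> torch.Tensor:
        b, c, h, w = cnn_feat.shape
        g = global_tokens.transpose(1, 2).reshape(b, c, h, w)  # (B,HW,C) -> (B,C,H,W)
        return cnn_feat + self.proj(g)


class ParallelFusionLayer(nn.Module):
    """One stage: the CNN and Transformer branches run in parallel, then
    exchange information through the CSF and SFI blocks."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.BatchNorm2d(dim), nn.ReLU(inplace=True)
        )
        self.transformer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.csf = CSFBlock(dim, heads)
        self.sfi = SFIBlock(dim)

    def forward(self, cnn_feat: torch.Tensor, tokens: torch.Tensor):
        cnn_feat = self.conv(cnn_feat)                       # local path
        tokens = self.transformer(tokens)                    # global path
        local_tokens = cnn_feat.flatten(2).transpose(1, 2)   # (B, H*W, C)
        tokens = self.csf(local_tokens, tokens)              # CNN -> Transformer
        cnn_feat = self.sfi(cnn_feat, tokens)                # Transformer -> CNN
        return cnn_feat, tokens


if __name__ == "__main__":
    layer = ParallelFusionLayer(dim=64)
    x = torch.randn(2, 64, 14, 14)        # CNN feature map
    t = x.flatten(2).transpose(1, 2)      # matching token sequence
    f, tok = layer(x, t)
    print(f.shape, tok.shape)             # (2, 64, 14, 14) and (2, 196, 64)
```

In this sketch the CNN path supplies queries to the Transformer path (CSF) while the updated tokens are projected and added back to the feature map (SFI), mirroring the bidirectional exchange described above.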
2. Related Work
2.1. CNN
2.2. Transformer
3. Method
3.1. CNN Branch
3.2. Transformer Branch
3.3. Parallel Fusion Layer
3.4. Decoder
3.5. Loss Function
4. Experiments
4.1. Dataset
4.2. Evaluation Metrics
4.3. Implementation Details
4.4. Results
4.5. Ablation Studies
4.6. 3D Implementation
4.7. Analysis and Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Wang, R.; Lei, T.; Cui, R.; Zhang, B.; Meng, H.; Nandi, A.K. Medical image segmentation using deep learning: A survey. IET Image Process. 2022, 16, 1243–1267. [Google Scholar] [CrossRef]
- Jia, Y.; Kaul, C.; Lawton, T.; Murray-Smith, R.; Habli, I. Prediction of weaning from mechanical ventilation using convolutional neural networks. Artif. Intell. Med. 2021, 117, 102087. [Google Scholar] [CrossRef] [PubMed]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany, 5–9 October 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
- Tragakis, A.; Kaul, C.; Murray-Smith, R.; Husmeier, D. The Fully Convolutional Transformer for Medical Image Segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 3660–3669. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
- Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual Attention Network for Scene Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 3146–3154. [Google Scholar]
- Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.J.; Heinrich, M.P.; Misawa, K.; Mori, K.; McDonagh, S.G.; Hammerla, N.Y.; Kainz, B.; et al. Attention U-Net: Learning Where to Look for the Pancreas. arXiv 2018, arXiv:1804.03999. [Google Scholar]
- Ding, X.; Zhang, X.; Han, J.; Ding, G. Scaling up your kernels to 31x31: Revisiting large kernel design in CNNs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 11963–11975. [Google Scholar]
- Liu, S.; Chen, T.; Chen, X.; Chen, X.; Xiao, Q.; Wu, B.; Pechenizkiy, M.; Mocanu, D.; Wang, Z. More convnets in the 2020s: Scaling up kernels beyond 51x51 using sparsity. arXiv 2022, arXiv:2207.03620. [Google Scholar]
- Peng, C.; Zhang, X.; Yu, G.; Luo, G.; Sun, J. Large Kernel Matters – Improve Semantic Segmentation by Global Convolutional Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4353–4361. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 5999–6009. [Google Scholar]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
- Zheng, S.; Lu, J.; Zhao, H.; Zhu, X.; Luo, Z.; Wang, Y.; Fu, Y.; Feng, J.; Xiang, T.; Torr, P.H.; et al. Rethinking Semantic Segmentation From a Sequence-to-Sequence Perspective With Transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 6881–6890. [Google Scholar]
- Srinivas, A.; Lin, T.Y.; Parmar, N.; Shlens, J.; Abbeel, P.; Vaswani, A. Bottleneck Transformers for Visual Recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 16519–16529. [Google Scholar]
- Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. TransUNet: Transformers make strong encoders for medical image segmentation. arXiv 2021, arXiv:2102.04306. [Google Scholar]
- Wang, H.; Cao, P.; Wang, J.; Zaiane, O.R. UCTransNet: Rethinking the skip connections in U-Net from a channel-wise perspective with transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, Online, 22 February–1 March 2022; pp. 2441–2449. [Google Scholar]
- Hatamizadeh, A.; Tang, Y.; Nath, V.; Yang, D.; Myronenko, A.; Landman, B.; Roth, H.R.; Xu, D. UNETR: Transformers for 3D medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2022; pp. 574–584. [Google Scholar]
- Wang, H.; Xie, S.; Lin, L.; Iwamoto, Y.; Han, X.H.; Chen, Y.W.; Tong, R. Mixed transformer U-Net for medical image segmentation. In Proceedings of the ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, 23–27 May 2022; pp. 2390–2394. [Google Scholar] [CrossRef]
- Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; Jégou, H. Training data-efficient image transformers & distillation through attention. In Proceedings of the International Conference on Machine Learning (ICML), Virtual Event, 18–24 July 2021; pp. 10347–10357. [Google Scholar]
- Bao, H.; Dong, L.; Wei, F. BEiT: BERT pre-training of image transformers. arXiv 2021, arXiv:2106.08254. [Google Scholar]
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
- Hatamizadeh, A.; Nath, V.; Tang, Y.; Yang, D.; Roth, H.R.; Xu, D. Swin UNETR: Swin transformers for semantic segmentation of brain tumors in MRI images. In Proceedings of the International MICCAI Brainlesion Workshop; Springer: Berlin/Heidelberg, Germany, 2022; pp. 272–284. [Google Scholar]
- Matsoukas, C.; Haslum, J.F.; Söderberg, M.; Smith, K. Is it time to replace CNNs with Transformers for medical images? arXiv 2021, arXiv:2108.09038. [Google Scholar]
- Zhang, Y.; Liu, H.; Hu, Q. TransFuse: Fusing Transformers and CNNs for medical image segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Proceedings, Part I; Springer: Berlin/Heidelberg, Germany, 2021; pp. 14–24. [Google Scholar]
- Heidari, M.; Kazerouni, A.; Soltany, M.; Azad, R.; Aghdam, E.K.; Cohen-Adad, J.; Merhof, D. HiFormer: Hierarchical multi-scale representations using transformers for medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 6202–6212. [Google Scholar]
- Lei, T.; Sun, R.; Wang, X.; Wang, Y.; He, X.; Nandi, A. CiT-Net: Convolutional Neural Networks Hand in Hand with Vision Transformers for Medical Image Segmentation. arXiv 2023, arXiv:2306.03373. [Google Scholar]
- Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-Unet: UNet-like pure Transformer for medical image segmentation. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 205–218. [Google Scholar]
- Milletari, F.; Navab, N.; Ahmadi, S.A. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
- Zhang, Z.; Wu, C.; Coleman, S.; Kerr, D. DENSE-INception U-Net for medical image segmentation. Comput. Methods Programs Biomed. 2020, 192, 105395. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Z.; Liu, Q.; Wang, Y. Road extraction by deep residual U-Net. IEEE Geosci. Remote Sens. Lett. 2018, 15, 749–753. [Google Scholar] [CrossRef]
- Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. UNet++: Redesigning skip connections to exploit multiscale features in image segmentation. IEEE Trans. Med. Imaging 2019, 39, 1856–1867. [Google Scholar] [CrossRef] [PubMed]
- Ibtehaz, N.; Rahman, M.S. MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Netw. 2020, 121, 74–87. [Google Scholar] [CrossRef] [PubMed]
- Kaul, C.; Manandhar, S.; Pears, N. Focusnet: An attention-based fully convolutional network for medical image segmentation. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI), Venice, Italy, 8–11 April 2019; pp. 455–458. [Google Scholar] [CrossRef]
- Isensee, F.; Jaeger, P.F.; Kohl, S.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 2021, 18, 203–211. [Google Scholar] [CrossRef]
- Jiang, M.; Yuan, B.; Kou, W.; Yan, W.; Marshall, H.; Yang, Q.; Syer, T.; Punwani, S.; Emberton, M.; Barratt, D.C.; et al. Prostate cancer segmentation from MRI by a multistream fusion encoder. Med. Phys. 2023, 50, 5489–5504. [Google Scholar] [CrossRef] [PubMed]
- Xu, G.; Zhang, X.; He, X.; Wu, X. LeViT-UNet: Make faster encoders with transformer for medical image segmentation. In Proceedings of the Chinese Conference on Pattern Recognition and Computer Vision (PRCV), Xiamen, China, 13–15 October 2023; pp. 42–53. [Google Scholar]
- Ates, G.C.; Mohan, P.; Celik, E. Dual cross-attention for medical image segmentation. Eng. Appl. Artif. Intell. 2023, 126, 107139. [Google Scholar] [CrossRef]
- Chen, B.; Liu, Y.; Zhang, Z.; Lu, G.; Kong, A.W.K. TransAttUnet: Multi-level attention-guided U-Net with transformer for medical image segmentation. IEEE Trans. Emerg. Top. Comput. Intell. 2023, 8, 55–68. [Google Scholar] [CrossRef]
- Xu, S.; Xiao, D.; Yuan, B.; Liu, Y.; Wang, X.; Li, N.; Shi, L.; Chen, J.; Zhang, J.X.; Wang, Y.; et al. FAFuse: A Four-Axis Fusion framework of CNN and Transformer for medical image segmentation. Comput. Biol. Med. 2023, 166, 107567. [Google Scholar] [CrossRef]
- Yang, H.; Yang, D. CSwin-PNet: A CNN-Swin Transformer combined pyramid network for breast lesion segmentation in ultrasound images. Expert Syst. Appl. 2023, 213, 119024. [Google Scholar] [CrossRef]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Virtual Event, 11–17 October 2021; pp. 10012–10022. [Google Scholar]
- Zhang, N.; Yu, L.; Zhang, D.; Wu, W.; Tian, S.; Kang, X.; Li, M. CT-Net: Asymmetric compound branch Transformer for medical image segmentation. Neural Netw. 2024, 170, 298–311. [Google Scholar] [CrossRef] [PubMed]
- Song, P.; Yang, Z.; Li, J.; Fan, H. DPCTN: Dual path context-aware transformer network for medical image segmentation. Eng. Appl. Artif. Intell. 2023, 124, 106634. [Google Scholar] [CrossRef]
- Lin, A.; Chen, B.; Xu, J.; Zhang, Z.; Lu, G.; Zhang, D. DS-TransUNet: Dual Swin Transformer U-Net for Medical Image Segmentation. IEEE Trans. Instrum. Meas. 2022, 71, 4005615. [Google Scholar] [CrossRef]
- Valanarasu, J.M.J.; Oza, P.; Hacihaliloglu, I.; Patel, V.M. Medical Transformer: Gated axial-attention for medical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Strasbourg, France, 27 September–1 October 2021; pp. 36–46. [Google Scholar]
- Landman, B.; Xu, Z.; Iglesias, J.; Styner, M.; Langerak, T.; Klein, A. MICCAI multi-atlas labeling beyond the cranial vault–workshop and challenge. In Proceedings of the MICCAI Multi-Atlas Labeling Beyond Cranial Vault—Workshop Challenge, Munich, Germany, 5–9 October 2015; Volume 5, p. 12. [Google Scholar]
- Bernard, O.; Lalande, A.; Zotti, C.; Cervenansky, F.; Yang, X.; Heng, P.A.; Cetin, I.; Lekadir, K.; Camara, O.; Gonzalez Ballester, M.A.; et al. Deep Learning Techniques for Automatic MRI Cardiac Multi-Structures Segmentation and Diagnosis: Is the Problem Solved? IEEE Trans. Med. Imaging 2018, 37, 2514–2525. [Google Scholar] [CrossRef]
- Fu, S.; Lu, Y.; Wang, Y.; Zhou, Y.; Shen, W.; Fishman, E.; Yuille, A. Domain adaptive relational reasoning for 3d multi-organ segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Lima, Peru, 4–8 October 2020; pp. 656–666. [Google Scholar]
- Zhou, H.Y.; Guo, J.; Zhang, Y.; Yu, L.; Wang, L.; Yu, Y. nnFormer: Interleaved transformer for volumetric segmentation. arXiv 2021, arXiv:2109.03201. [Google Scholar]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
Comparison on the Synapse multi-organ CT dataset (average DSC in %, average HD in mm; remaining columns report per-organ DSC in %).

Method | Avg. DSC (%) | Avg. HD (mm) | Stomach | Spleen | Pancreas | Liver | Kidney (L) | Kidney (R) | Gallbladder | Aorta
---|---|---|---|---|---|---|---|---|---|---
V-Net [29] | 68.81 | - | 56.98 | 80.56 | 40.05 | 87.84 | 77.10 | 80.75 | 51.87 | 75.34 |
DARR [51] | 69.77 | - | 45.96 | 89.90 | 54.18 | 94.08 | 72.31 | 73.24 | 53.77 | 74.74 |
TransFuse [25] | 77.42 | - | 73.69 | 87.03 | 57.06 | 94.22 | 80.57 | 78.58 | 63.06 | 85.15 |
U-Net [3] | 76.85 | 39.70 | 75.58 | 86.67 | 53.98 | 93.43 | 77.77 | 68.60 | 69.72 | 89.07 |
R50 UNet [3] | 74.68 | 36.87 | 74.16 | 85.87 | 56.90 | 93.74 | 80.60 | 78.19 | 63.66 | 87.74 |
R50 AttnUNet [8] | 75.57 | 36.97 | 74.95 | 87.19 | 49.37 | 93.56 | 79.20 | 72.71 | 63.91 | 55.92 |
AttnUNet [8] | 77.77 | 36.02 | 75.75 | 87.30 | 58.04 | 93.57 | 77.98 | 71.11 | 68.88 | 89.55 |
R50 ViT [13] | 71.29 | 32.87 | 73.95 | 81.99 | 45.99 | 91.51 | 75.80 | 72.20 | 55.13 | 73.73 |
TransUNet [16] | 77.48 | 31.69 | 75.62 | 85.08 | 55.86 | 94.08 | 81.87 | 77.02 | 63.13 | 87.23 |
LeViT-UNet-384 [39] | 78.53 | 16.84 | 72.76 | 88.86 | 59.07 | 93.11 | 84.61 | 80.25 | 62.23 | 87.33 |
MT-UNet [19] | 78.59 | 26.59 | 76.81 | 87.75 | 59.46 | 93.06 | 81.47 | 77.29 | 64.99 | 87.92 |
UCTransNet [17] | 78.23 | 26.75 | 79.42 | 87.84 | 56.22 | 93.17 | 80.19 | 73.18 | 66.97 | 88.86 |
Swin-Unet [28] | 79.13 | 21.55 | 76.60 | 90.66 | 56.58 | 94.29 | 83.28 | 79.61 | 66.53 | 85.47 |
Ours | 81.59 | 14.47 | 80.47 | 88.19 | 56.89 | 95.37 | 87.42 | 83.50 | 72.49 | 88.35 |
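The DSC and HD columns above follow the usual conventions: DSC measures volumetric overlap (higher is better) and HD measures worst-case boundary deviation in millimetres (lower is better). As a hedged illustration, not the evaluation code used in the paper, the two metrics can be computed from binary masks roughly as follows:

```python
# Illustrative DSC / Hausdorff computation for binary segmentation masks.
# A sketch for exposition, not the evaluation code used in the paper.
import numpy as np
from scipy.spatial.distance import directed_hausdorff


def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """DSC = 2|P ∩ G| / (|P| + |G|), in [0, 1]."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom


def hausdorff_distance(pred: np.ndarray, target: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the two masks' point sets,
    in voxel units (multiply by the voxel spacing for millimetres).
    Evaluation pipelines often restrict this to surface voxels or use HD95."""
    p, g = np.argwhere(pred), np.argwhere(target)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])


if __name__ == "__main__":
    pred = np.zeros((64, 64), dtype=np.uint8)
    gt = np.zeros((64, 64), dtype=np.uint8)
    pred[20:40, 20:40] = 1
    gt[22:42, 22:42] = 1
    print(f"DSC: {dice_coefficient(pred, gt):.4f}")       # ~0.81
    print(f"HD:  {hausdorff_distance(pred, gt):.2f} px")  # ~2.83
```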
Comparison on the ACDC cardiac MRI dataset (DSC in %): right ventricle (RV), myocardium (Myo), and left ventricle (LV).

Method | Avg. DSC (%) | RV | Myo | LV
---|---|---|---|---
R50 UNet [3] | 87.55 | 87.10 | 80.63 | 94.92 |
R50 AttnUNet [8] | 86.75 | 87.58 | 79.20 | 93.47 |
R50 ViT [13] | 87.57 | 86.07 | 81.88 | 94.75 |
UNETR [18] | 88.61 | 85.29 | 86.52 | 94.02 |
TransUNet [16] | 89.71 | 88.86 | 84.53 | 95.73 |
Swin-Unet [28] | 90.00 | 88.55 | 85.62 | 95.83 |
MT-UNet [19] | 90.43 | 86.64 | 89.04 | 95.62 |
UCTransNet [17] | 89.69 | 87.92 | 85.43 | 95.71 |
LeViT-UNet-384 [39] | 90.32 | 89.55 | 87.64 | 93.76 |
Ours | 91.07 | 89.78 | 89.30 | 94.11 |
Comparison of model complexity (number of parameters and FLOPs).

Method | Params (M) | FLOPs (G)
---|---|---
U-Net [3] | 31.13 | 55.84 |
Swin-Unet [28] | 96.34 | 42.68 |
TransUNet [16] | 105.32 | 38.52 |
MT-UNet [19] | 79.07 | 44.72 |
UCTransNet [17] | 65.60 | 63.20 |
Ours | 137.36 | 76.18 |
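For context, parameter counts like those above are typically read directly from the model, while FLOPs require a profiler. The sketch below uses PyTorch plus the third-party `thop` package as one possible tool; the stand-in model, input size, and choice of profiler are assumptions, since the paper's measurement setup is not reproduced here.

```python
# Sketch of how Params (M) and FLOPs (G) figures can be obtained for a
# PyTorch model. The stand-in model, input size, and the third-party `thop`
# profiler are illustrative assumptions, not the paper's measurement setup.
import torch
import torch.nn as nn
from thop import profile  # pip install thop


def count_parameters(model: nn.Module) -> float:
    """Trainable parameters, in millions."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6


if __name__ == "__main__":
    # Stand-in network; replace with the segmentation model under test.
    model = nn.Sequential(
        nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 9, 1)
    )
    x = torch.randn(1, 1, 224, 224)  # example 2D input slice
    macs, _ = profile(model, inputs=(x,))
    print(f"Params: {count_parameters(model):.2f} M")
    # thop reports multiply-accumulates; FLOPs conventions vary (often ~2x MACs).
    print(f"MACs:   {macs / 1e9:.2f} G")
```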
Ablation study on Synapse: effect of adding SEConv and the CCFM to the baseline.

Method | Avg. DSC (%) | Avg. HD (mm)
---|---|---
Baseline | 78.91 | 24.08 |
Baseline + SEConv | 80.20 | 19.33 |
Baseline + SEConv + CCFM | 81.59 | 14.47 |
Ablation study on the DFE, CSF, and SFI components (Avg. DSC in %, Avg. HD in mm).

| DFE | CSF | SFI | Avg. DSC (%) | Avg. HD (mm) |
|---|---|---|---|---|
| ✔ | | | 80.20 | 19.33 |
| | ✔ | | 79.93 | 18.44 |
| ✔ | ✔ | | 81.11 | 18.26 |
| ✔ | | ✔ | 79.57 | 17.65 |
| | ✔ | ✔ | 80.37 | 22.40 |
| ✔ | ✔ | ✔ | 81.59 | 14.47 |
Comparison of 3D implementations on the Synapse multi-organ CT dataset (average DSC in %, average HD in mm; remaining columns report per-organ DSC in %).

Method | Avg. DSC (%) | Avg. HD (mm) | Stomach | Spleen | Pancreas | Liver | Kidney (L) | Kidney (R) | Gallbladder | Aorta
---|---|---|---|---|---|---|---|---|---|---
ViT [13] | 67.86 | 36.11 | 70.44 | 81.75 | 42.00 | 91.32 | 74.70 | 67.40 | 45.10 | 70.19 |
R50 ViT [13] | 71.29 | 32.87 | 73.95 | 81.99 | 45.99 | 91.51 | 75.80 | 72.20 | 55.13 | 73.73 |
UNETR [18] | 79.56 | 22.97 | 73.99 | 87.81 | 59.25 | 94.46 | 85.66 | 84.80 | 60.56 | 89.99 |
nnFormer [52] | 86.57 | 10.63 | 86.83 | 90.51 | 83.35 | 96.84 | 86.57 | 86.25 | 70.17 | 92.04 |
Ours | 86.88 | 8.78 | 85.17 | 89.67 | 82.36 | 97.01 | 85.92 | 90.01 | 72.19 | 92.74 |