Edge-Guided Cell Segmentation on Small Datasets Using an Attention-Enhanced U-Net Architecture
Abstract
1. Introduction
2. Edge Attention Enhanced Medical Image Segmentation Method
2.1. Framework Overview
2.2. Feature Encoding–Decoding Branch
2.3. Attention-Enhanced Branch
3. Experiments and Results
3.1. Dataset Details
3.2. Implementation Details
3.3. Metrics
3.4. Results
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| Domain | Dataset | Sample Size |
|---|---|---|
| Medical Datasets | DRIVE | 40 |
| Medical Datasets | MoNuSeg | 42 |
| Medical Datasets | LIDC-IDRI | 1018 |
| Medical Datasets | PanNuke | 2656 |
| Autonomous Driving Datasets | Waymo Open Dataset | 200,000 |
| Autonomous Driving Datasets | Cityscapes | 20,000 |
| Object Detection Datasets | ImageNet | 14,000,000 |
| Object Detection Datasets | COCO | 200,000 |
| Object Detection Datasets | PASCAL VOC | 10,000 |
| Type | Network | F1 (MoNuSeg) | IoU (MoNuSeg) | F1 (PanNuke) | IoU (PanNuke) |
|---|---|---|---|---|---|
| CNN baselines | U-Net | 0.820 | 0.718 | 0.859 | 0.761 |
| CNN baselines | UNet++ | 0.817 | 0.714 | 0.858 | 0.760 |
| CNN baselines | Attention U-Net | 0.832 | 0.732 | 0.844 | 0.741 |
| CNN baselines | Attention U-Net* 1 | 0.837 | 0.739 | 0.852 | 0.751 |
| Transformer baselines | MedT | 0.683 | 0.578 | 0.776 | 0.656 |
| Proposed | AttEUnet | 0.859 | 0.758 | 0.888 | 0.794 |
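For context, the F1 and IoU values above follow the standard definitions for binary segmentation masks: F1 = 2TP / (2TP + FP + FN) and IoU = TP / (TP + FP + FN). The sketch below is a minimal illustration of these formulas, not the authors' evaluation code; the `pred` and `target` masks are hypothetical inputs chosen for illustration.

```python
import numpy as np

def f1_and_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """F1 (Dice) and IoU (Jaccard) for binary masks of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.sum(pred & target)    # predicted foreground that is foreground
    fp = np.sum(pred & ~target)   # predicted foreground, actually background
    fn = np.sum(~pred & target)   # missed foreground
    f1 = 2 * tp / (2 * tp + fp + fn + eps)
    iou = tp / (tp + fp + fn + eps)
    return float(f1), float(iou)

# Hypothetical 4x4 masks, for illustration only.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(f1_and_iou(pred, target))  # approximately (0.857, 0.750)
```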
| Network | Training Time per Epoch (s) | Inference Time (s) | Time to First Reach 0.5 IoU (min) |
|---|---|---|---|
| U-Net | 210 | 9.42 | 606 |
| UNet++ | 523 | 10.51 | 1824 |
| Attention U-Net | 218 | 9.72 | 579 |
| Attention U-Net* 1 | 275 | 11.45 | 1039 |
| MedT | 1073 | 257.08 | - |
| AttEUnet | 231 | 11.32 | 56 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).