Bi-SANet—Bilateral Network with Scale Attention for Retinal Vessel Segmentation
Abstract
1. Introduction
- We propose a bilateral network with a coarse branch and a fine branch, responsible for extracting semantic information and spatial information, respectively. Multi-scale inputs are fed to the network to improve its feature extraction for images at different scales.
- To improve the network's ability to extract vessel semantic information in low-contrast regions, a multi-scale attention module is introduced at the end of the coarse network's down-sampling path. This makes information recovery during up-sampling more targeted.
- The U-shaped fine network is replaced by a module that uses convolution layers with different dilation rates to recover the spatial information lost by the coarse network. This improves segmentation ability while reducing computational complexity. Finally, a feature fusion module aggregates the different levels of information from the coarse and fine networks.
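The core operation behind the third contribution, convolution with different dilation rates, can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code; the 3×3 averaging kernel and the branch rates (1, 2, 4) are hypothetical choices used only to show how dilation enlarges the receptive field without adding parameters.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """Valid-mode, stride-1 2-D convolution with a dilated kernel.

    Dilation inserts (dilation - 1) gaps between kernel taps, so the same
    number of weights covers a larger spatial context.
    """
    kh, kw = kernel.shape
    eh = (kh - 1) * dilation + 1  # effective kernel height
    ew = (kw - 1) * dilation + 1  # effective kernel width
    h, w = x.shape
    out = np.zeros((h - eh + 1, w - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Sample the input at dilated positions and correlate with the kernel.
            patch = x[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = float(np.sum(patch * kernel))
    return out

# Hypothetical parallel branches with dilation rates (1, 2, 4): each branch sees
# progressively more context from the same 3x3 kernel.
image = np.random.rand(16, 16)
kernel = np.full((3, 3), 1.0 / 9.0)
branches = [dilated_conv2d(image, kernel, d) for d in (1, 2, 4)]
```

Note that larger dilation rates shrink the valid output more, which is why real implementations pad the input so branch outputs can be fused at the same resolution.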
2. Methods
2.1. The Network Structure of Bi-SANet
2.2. Coarse Network (Coarse)
Multi-Scale Attention Module
2.3. Fine Network (FineNet)
2.3.1. Spatial Detail Module
2.3.2. Feature Fusion Module
3. Datasets and Evaluation
3.1. Datasets
3.2. Experimental Environment and Parameter Settings
3.3. Performance Evaluation Indicator
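The four indicators reported throughout the tables follow their standard pixel-wise definitions over binary vessel masks. A minimal sketch, assuming 1 = vessel and 0 = background (this is the textbook formulation, not code from the paper):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Pixel-wise metrics for binary vessel masks (1 = vessel, 0 = background)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = int(np.sum(pred & gt))    # vessel pixels correctly detected
    tn = int(np.sum(~pred & ~gt))  # background pixels correctly rejected
    fp = int(np.sum(pred & ~gt))   # background predicted as vessel
    fn = int(np.sum(~pred & gt))   # vessel pixels missed
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    # F-measure is the harmonic mean of precision and sensitivity (recall).
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, accuracy, f_measure
```

Because vessels occupy only a small fraction of fundus pixels, accuracy and specificity are dominated by the background class; sensitivity and F-measure are the more discriminative columns in the comparisons below.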
4. Experiment Results and Analysis
4.1. Discussion of Model Performance at Different Dilation Rates
(d1, d2, d3) | F-Measure | Sensitivity | Specificity | Accuracy |
---|---|---|---|---|
(1, 1, 1) | 0.8274 | 0.8042 | 0.9866 | 0.9706 |
(1, 2, 3) | 0.8266 | 0.8107 | 0.9855 | 0.9702 |
(2, 2, 2) | 0.8282 | 0.8092 | 0.9861 | |
(1, 2, 4) | 0.8293 | 0.8318 | 0.9832 | 0.9700 |
(3, 3, 3) | 0.8253 | 0.7926 | 0.9876 | |
(1, 3, 5) | | | 0.9772 | 0.9693 |
(5, 5, 5) | 0.8270 | 0.8039 | 0.9865 | 0.9705 |
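The dilation-rate tuples in the table translate directly into receptive-field sizes. Assuming the three dilated layers are stacked sequentially with stride-1 3×3 convolutions (the table itself does not state sequential vs. parallel arrangement, so this is a hedged back-of-envelope calculation), each layer adds (k − 1)·d pixels of context:

```python
def receptive_field(dilations, kernel_size=3):
    """Receptive field of a stack of stride-1 convolutions: starting from a
    single pixel, each layer adds (kernel_size - 1) * dilation pixels."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

for rates in [(1, 1, 1), (1, 2, 4), (1, 3, 5), (5, 5, 5)]:
    print(rates, "->", receptive_field(rates))
```

Under this assumption, (1, 1, 1) covers a 7×7 window while (1, 2, 4) covers 15×15, which is consistent with mixed rates capturing more context for thick vessels without losing the small-rate layers needed for capillaries.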
4.2. Structure Ablation
Methods | Sensitivity | Specificity | Accuracy | F-Measure | Params Size (MB) |
---|---|---|---|---|---|
MU-Net | 0.8579 | 0.9799 | 0.9691 | 0.8320 | |
MU-Net+MA | 0.8664 | 0.9787 | 0.9687 | 0.8340 | 44.76 |
MU-Net+MA+FineNet | 0.8846 | 0.9782 | 0.9693 | 0.8376 | 45.46 |
MU-Net+MA+FineNet+FFA | | 0.9772 | | | 46.17 |
Methods | Sensitivity | Specificity | Accuracy | F-Measure | Params Size (MB) |
---|---|---|---|---|---|
MU-Net | 0.8006 | 0.9871 | 0.9753 | 0.8039 | |
MU-Net+MA | 0.8082 | 0.9868 | 0.9755 | 0.8058 | 44.76 |
MU-Net+MA+FineNet | 0.8082 | | 0.9758 | 0.8083 | 45.46 |
MU-Net+MA+FineNet+FFA | | 0.9852 | | | 46.17 |
4.3. Attention Module Ablation
4.4. Model Parameter Quantity and Computation Time Analysis
4.5. Visual Comparison with Different Methods
4.6. Comparison of Segmentation Results with Different Methods
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Abràmoff, M.D.; Folk, J.C.; Han, D.P.; Walker, J.D.; Williams, D.F.; Russell, S.R.; Massin, P.; Cochener, B.; Gain, P.; Tang, L.; et al. Automated analysis of retinal images for detection of referable diabetic retinopathy. JAMA Ophthalmol. 2013, 131, 351–357.
- Robinson, B.E. Prevalence of Asymptomatic Eye Disease. Rev. Can. D’Optométrie 2003, 65, 175.
- Zhang, B.; Zhang, L.; Zhang, L.; Karray, F. Retinal vessel extraction by matched filter with first-order derivative of Gaussian. Comput. Biol. Med. 2010, 40, 438–445.
- Jiang, X.; Mojon, D. Adaptive Local Thresholding by Verification-Based Multithreshold Probing with Application to Vessel Detection in Retinal Images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 131–137.
- Zana, F.; Klein, J.C. Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation. IEEE Trans. Image Process. 2001, 10, 1010–1019.
- Azzopardi, G.; Strisciuglio, N.; Vento, M.; Petkov, N. Trainable COSFIRE filters for vessel delineation with application to retinal images. Med. Image Anal. 2015, 19, 46–57.
- Wang, Y.; Ji, G.; Lin, P.; Trucco, E. Retinal vessel segmentation using multiwavelet kernels and multiscale hierarchical decomposition. Pattern Recognit. 2013, 46, 2117–2133.
- Guo, Z.; Lin, P.; Ji, G.; Wang, Y. Retinal vessel segmentation using a finite element based binary level set method. Inverse Probl. Imaging 2017, 8, 459–473.
- Tolias, Y.A.; Panas, S.M. A fuzzy vessel tracking algorithm for retinal images based on fuzzy clustering. IEEE Trans. Med. Imaging 1998, 17, 263–273.
- Wang, X.-H.; Zhao, Y.-Q.; Liao, M.; Zou, B.-J. Automatic segmentation for retinal vessel based on multiscale 2D Gabor wavelet. Acta Autom. Sin. 2015, 41, 970–980.
- Liang, L.M.; Huang, C.L.; Shi, F.; Wu, J.; Jiang, H.J.; Chen, X.J. Retinal Vessel Segmentation Using Level Set Combined with Shape Priori. Chin. J. Comput. 2018, 41, 1678–1692.
- Khalaf, A.F.; Yassine, I.A.; Fahmy, A.S. Convolutional neural networks for deep feature learning in retinal vessel segmentation. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 385–388.
- Liskowski, P.; Krawiec, K. Segmenting retinal blood vessels with deep neural networks. IEEE Trans. Med. Imaging 2016, 35, 2369–2380.
- Yu, L.; Qin, Z.; Zhuang, T.; Ding, Y.; Qin, Z.; Choo, K.R. A framework for hierarchical division of retinal vascular networks. Neurocomputing 2020, 392, 221–232.
- Fu, H.; Xu, Y.; Lin, S.; Wong, D.W.K.; Liu, J. Deepvessel: Retinal vessel segmentation via deep learning and conditional random field. In Lecture Notes in Computer Science, Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; Springer: Cham, Switzerland, 2016; pp. 132–139.
- Dasgupta, A.; Singh, S. A fully convolutional neural network based structured prediction approach towards the retinal vessel segmentation. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, VIC, Australia, 18–21 April 2017; pp. 248–251.
- Feng, Z.; Yang, J.; Yao, L. Patch-based fully convolutional neural network with skip connections for retinal blood vessel segmentation. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 1742–1746.
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Lecture Notes in Computer Science, Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; pp. 234–241.
- Zhang, Y.; Chung, A.C.S. Deep supervision with additional labels for retinal vessel segmentation task. In Lecture Notes in Computer Science, Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Granada, Spain, 16–20 September 2018; Springer: Cham, Switzerland, 2018; pp. 83–91.
- Wu, Y.; Xia, Y.; Song, Y.; Zhang, Y.; Cai, W. Multiscale network followed network model for retinal vessel segmentation. In Lecture Notes in Computer Science, Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Granada, Spain, 16–20 September 2018; Springer: Cham, Switzerland, 2018; pp. 119–126.
- Wang, K.; Zhang, X.; Huang, S.; Wang, Q.; Chen, F. CTF-Net: Retinal Vessel Segmentation via Deep Coarse-To-Fine Supervision Network. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; pp. 1237–1241.
- Wu, Y.; Xia, Y.; Song, Y.; Zhang, D.; Liu, D.; Zhang, C.; Cai, W. Vessel-Net: Retinal vessel segmentation under multi-path supervision. In Lecture Notes in Computer Science, Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Shenzhen, China, 13–17 October 2019; Springer: Cham, Switzerland, 2019; pp. 264–272.
- Feng, S.; Zhuo, Z.; Pan, D.; Tian, Q. CcNet: A cross-connected convolutional network for segmenting retinal vessels using multi-scale features. Neurocomputing 2020, 392, 268–276.
- Zhang, S.; Fu, H.; Yan, Y.; Zhang, Y.; Wu, Q.; Yang, M.; Tan, M.; Xu, Y. Attention guided network for retinal image segmentation. In Lecture Notes in Computer Science, Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Shenzhen, China, 13–17 October 2019; Springer: Cham, Switzerland, 2019; pp. 797–805.
- Staal, J.; Abràmoff, M.D.; Niemeijer, M.; Viergever, M.A.; van Ginneken, B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23, 501–509.
- Owen, C.G.; Rudnicka, A.R.; Mullen, R.; Barman, S.A.; Monekosso, D.; Whincup, P.H.; Ng, J.; Paterson, C. Measuring retinal vessel tortuosity in 10-year-old children: Validation of the computer-assisted image analysis of the retina (CAIAR) program. Investig. Ophthalmol. Vis. Sci. 2009, 50, 2004–2010.
- Hoover, A.D.; Kouznetsova, V.; Goldbaum, M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans. Med. Imaging 2000, 19, 203–210.
- Zhuang, J. LadderNet: Multi-path networks based on U-Net for medical image segmentation. arXiv 2018, arXiv:1810.07810.
- Jiang, Y.; Zhang, H.; Tan, N.; Chen, L. Automatic retinal blood vessel segmentation based on fully convolutional neural networks. Symmetry 2019, 11, 1112.
- Jiang, Y.; Yao, H.; Wu, C.; Liu, W. A Multi-Scale Residual Attention Network for Retinal Vessel Segmentation. Symmetry 2021, 13, 24.
- Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020.
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
- Li, X.; Zhong, Z.; Wu, J.; Yang, Y.; Lin, Z.; Liu, H. Expectation-maximization attention networks for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 9167–9176.
- Alom, M.Z.; Hasan, M.; Yakopcic, C.; Taha, T.M.; Asari, V.K. Recurrent residual convolutional neural network based on u-net (r2u-net) for medical image segmentation. arXiv 2018, arXiv:1802.06955.
- Ma, Y.; Li, X.; Duan, X.; Peng, Y.; Zhang, Y. Retinal Vessel Segmentation by Deep Residual Learning with Wide Activation. Comput. Intell. Neurosci. 2020, 2020, 8822407.
- Hu, J.; Wang, H.; Wang, J.; Wang, Y.; He, F.; Zhang, J. SA-Net: A scale-attention network for medical image segmentation. PLoS ONE 2021, 16, e0247388.
- Tian, C.; Fang, T.; Fan, Y.; Wu, W. Multi-path convolutional neural network in fundus segmentation of blood vessels. Biocybern. Biomed. Eng. 2020, 40, 583–595.
- Miao, Y.; Cheng, Y. Automatic extraction of retinal blood vessel based on matched filtering and local entropy thresholding. In Proceedings of the 2015 8th International Conference on Biomedical Engineering and Informatics (BMEI), Shenyang, China, 14–16 October 2015; pp. 62–67.
- Marín, D.; Aquino, A.; Gegúndez-Arias, M.E.; Bravo, J.M. A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features. IEEE Trans. Med. Imaging 2010, 30, 146–158.
- Aslani, S.; Sarnel, H. A new supervised retinal vessel segmentation method based on robust hybrid features. Biomed. Signal Process. Control 2016, 30, 1–12.
- Li, L.; Verma, M.; Nakashima, Y.; Nagahara, H.; Kawasaki, R. Iternet: Retinal image segmentation utilizing structural redundancy in vessel networks. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Snowmass Village, CO, USA, 1–5 March 2020; pp. 3656–3665.
- Atli, İ.; Gedik, O.S. Sine-Net: A fully convolutional deep learning architecture for retinal blood vessel segmentation. Eng. Sci. Technol. Int. J. 2021, 24, 271–283.
- Gu, R.; Wang, G.; Song, T.; Huang, R.; Aertsen, M.; Deprest, J.; Ourselin, S.; Vercauteren, T.; Zhang, S. CA-Net: Comprehensive attention convolutional neural networks for explainable medical image segmentation. IEEE Trans. Med. Imaging 2020, 40, 699–711.
- Mo, J.; Zhang, L. Multi-level deep supervised networks for retinal vessel segmentation. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 2181–2193.
- Yan, Z.; Yang, X.; Cheng, K.T. A three-stage deep learning model for accurate retinal vessel segmentation. IEEE J. Biomed. Health Inform. 2018, 23, 1427–1436.
- Jin, Q.; Meng, Z.; Pham, T.D.; Chen, Q.; Wei, L.; Su, R. DUNet: A deformable network for retinal vessel segmentation. Knowl.-Based Syst. 2019, 178, 149–162.
- Wang, D.; Haytham, A.; Pottenburgh, J.; Saeedi, O.; Tao, Y. Hard attention net for automatic retinal vessel segmentation. IEEE J. Biomed. Health Inform. 2020, 24, 3384–3396.
Methods | Sensitivity | Specificity | Accuracy | F-Measure | Params Size (MB) |
---|---|---|---|---|---|
ECA [31]+Baseline | 0.8153 | 0.9851 | 0.9702 | 0.8276 | 46.50 |
SE [32]+Baseline | 0.7979 | 0.9867 | 0.9702 | 0.8245 | 46.43 |
EMA [33]+Baseline | 0.8107 | 0.9856 | | 0.8273 | 46.32 |
MA+Baseline (Ours) | | 0.9772 | 0.9693 | | |
Methods | Sensitivity | Specificity | Accuracy | F-Measure | Params Size (MB) |
---|---|---|---|---|---|
ECA [31]+Baseline | 0.8175 | 0.9854 | 0.9748 | 0.8037 | 46.50 |
SE [32]+Baseline | 0.8269 | 0.9847 | 0.9748 | 0.8056 | 46.43 |
EMA [33]+Baseline | 0.7923 | 0.9874 | 0.9751 | 0.8008 | 46.32 |
MA+Baseline (Ours) | | 0.9852 | | | |
Type | Methods | Year | Sensitivity | Specificity | Accuracy | F-Measure |
---|---|---|---|---|---|---|
Unsupervised methods | Zhang [3] | 2010 | - | - | 0.9382 | - |
 | Wang [10] | 2015 | - | - | 0.9457 | - |
 | Miao [38] | 2015 | 0.7481 | 0.9748 | 0.9597 | - |
Supervised methods | Marín [39] | 2010 | 0.7607 | 0.9801 | 0.9452 | - |
 | Aslani [40] | 2016 | 0.7545 | 0.9801 | 0.9513 | - |
 | Feng [17] | 2017 | 0.7811 | 0.9839 | 0.9560 | - |
 | U-Net [34] | 2018 | 0.7537 | 0.9820 | 0.9531 | 0.8142 |
 | IterNet [41] | 2019 | 0.7735 | 0.9838 | 0.9573 | 0.8205 |
 | Tian [37] | 2019 | 0.8639 | 0.9690 | 0.9580 | - |
 | Sine-Net [42] | 2020 | 0.8260 | 0.9824 | 0.9685 | - |
 | CTF-Net [21] | 2020 | 0.7849 | 0.9813 | 0.9567 | 0.8241 |
 | SA-Net [36] | 2021 | 0.8252 | 0.9764 | 0.9569 | 0.8289 |
 | CA-Net [43] | 2021 | 0.8082 | 0.9858 | | 0.8261 |
 | Ours | 2021 | | 0.9772 | 0.9693 | |
Type | Methods | Year | Sensitivity | Specificity | Accuracy | F-Measure |
---|---|---|---|---|---|---|
Unsupervised methods | Azzopardi [6] | 2015 | 0.7655 | 0.9704 | 0.9442 | - |
Supervised methods | Mo [44] | 2017 | 0.7661 | 0.9816 | 0.9599 | - |
 | Yan [45] | 2018 | 0.7641 | 0.9806 | 0.9607 | - |
 | U-Net [34] | 2018 | 0.8288 | 0.9701 | 0.9578 | 0.7783 |
 | DUNet [46] | 2019 | 0.8155 | 0.9752 | 0.9610 | 0.7883 |
 | Tian [37] | 2020 | 0.8778 | 0.9680 | 0.9601 | - |
 | IterNet [41] | 2020 | 0.7970 | 0.9823 | 0.9655 | 0.8073 |
 | Sine-Net [42] | 2021 | 0.7856 | 0.9845 | 0.9676 | - |
 | CA-Net [43] | 2021 | 0.8138 | 0.9867 | 0.9758 | 0.8093 |
 | Ours | 2021 | 0.8371 | | | |
Type | Methods | Year | Sensitivity | Specificity | Accuracy | F-Measure |
---|---|---|---|---|---|---|
Unsupervised methods | Azzopardi [6] | 2015 | 0.7716 | 0.9701 | 0.9497 | - |
 | Miao [38] | 2015 | 0.7298 | 0.9831 | 0.9532 | - |
 | Wang [10] | 2015 | - | - | 0.9451 | - |
Supervised methods | Mo [44] | 2017 | 0.8147 | 0.9844 | 0.9674 | - |
 | U-Net [34] | 2018 | 0.8270 | 0.9842 | 0.9690 | 0.8373 |
 | IterNet [41] | 2019 | 0.7715 | 0.9886 | 0.9701 | 0.8146 |
 | Sine-Net [42] | 2020 | 0.6776 | 0.9946 | 0.9711 | - |
 | HANet [47] | 2020 | 0.8186 | 0.9844 | 0.9673 | 0.8379 |
 | Ours | 2021 | 0.8290 | | | |
Image | Accuracy | Sensitivity | Specificity | F-Measure |
---|---|---|---|---|
0 | 0.9725 | 0.8365 | 0.9843 | 0.8290 |
1 | 0.9772 | 0.7824 | 0.9911 | 0.8207 |
2 | 0.9826 | 0.8180 | 0.9931 | 0.8493 |
3 | 0.9686 | 0.6457 | 0.9945 | 0.7532 |
4 | 0.9649 | 0.7661 | 0.9847 | 0.7979 |
5 | 0.9776 | 0.8816 | 0.9849 | 0.8462 |
6 | 0.9794 | 0.9150 | 0.9850 | 0.8770 |
7 | 0.9806 | 0.8516 | 0.9910 | 0.8678 |
8 | 0.9833 | 0.8756 | 0.9925 | 0.8920 |
9 | 0.9747 | 0.8831 | 0.9827 | 0.8486 |
10 | 0.9797 | 0.8644 | 0.9886 | 0.8586 |
11 | 0.9829 | 0.9305 | 0.9873 | 0.8937 |
12 | 0.9777 | 0.8560 | 0.9895 | 0.8721 |
13 | 0.9801 | 0.8837 | 0.9897 | 0.8896 |
14 | 0.9767 | 0.8495 | 0.9887 | 0.8629 |
15 | 0.9637 | 0.7248 | 0.9908 | 0.8029 |
16 | 0.9761 | 0.8705 | 0.9865 | 0.8669 |
17 | 0.9858 | 0.8053 | 0.9955 | 0.8519 |
18 | 0.9846 | 0.7650 | 0.9945 | 0.8111 |
19 | 0.9709 | 0.7745 | 0.9849 | 0.7803 |
Average | 0.9770 | 0.8290 | 0.9890 | 0.8436 |
Jiang, Y.; Yao, H.; Ma, Z.; Zhang, J. Bi-SANet—Bilateral Network with Scale Attention for Retinal Vessel Segmentation. Symmetry 2021, 13, 1820. https://doi.org/10.3390/sym13101820