MMNet: A Mixing Module Network for Polyp Segmentation
Abstract
1. Introduction
- We propose a multi-stage transformer-coupled mixing network for improved polyp segmentation, designed to capture long-range dependencies at a reduced computational cost.
- We introduce a feature mixing module that further enhances the global feature map produced by the encoder by highlighting informative features and suppressing irrelevant ones.
- We validate MMNet extensively on five different datasets; it segments polyps accurately and consistently outperforms the previous best methods.
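As a rough illustration of the highlight/suppress idea behind the feature mixing module, the sketch below applies a channel-wise gate to a global feature map. The gating design, the shapes, and the random mixing matrix are illustrative assumptions, not the module described in Section 3.3:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mixing_gate(feat, w):
    """Re-weight a (C, H, W) feature map channel-wise.

    `w` is a (C, C) mixing matrix (random here; learned in practice).
    Channels whose mixed descriptor is large are highlighted (gate near 1);
    the rest are suppressed (gate near 0).
    """
    desc = feat.mean(axis=(1, 2))        # squeeze: one descriptor per channel, (C,)
    gate = sigmoid(w @ desc)             # mix channel descriptors, then gate: (C,)
    return feat * gate[:, None, None]    # broadcast the gate over H and W

rng = np.random.default_rng(0)
f = rng.standard_normal((64, 11, 11))    # stand-in for an encoder feature map
w = rng.standard_normal((64, 64)) / 8.0
out = mixing_gate(f, w)
```

Because the gate lies in (0, 1), every channel is attenuated in proportion to how informative its mixed descriptor is; a learned `w` would decide which channels count as "necessary".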
2. Related Works
2.1. Deep Learning for Image Segmentation
2.2. Attention Mechanism in Image Segmentation
2.3. Feature Selection Approach in Polyp Segmentation
2.4. Transformer and Mixing Models
Algorithm 1 Pseudo-Code for the Mixing Module Network
Input: Polyp image I, ground truth G
Output: Predicted mask M
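Only the input/output signature of Algorithm 1 survives above, so the following is a shape-level sketch of the pipeline implied by Section 3 (encoder → feature enhancer → parallel partial decoder → feature mixing → predicted mask). Every function body is a placeholder assumption, not the paper's implementation; the 352 × 352 input size and the stride set {8, 16, 32} are assumptions as well:

```python
import numpy as np

def encoder(img):
    # Stand-in for the pyramid vision transformer backbone:
    # returns multi-scale feature maps at assumed strides 8/16/32.
    h, w = img.shape[:2]
    return [np.zeros((64, h // s, w // s)) for s in (8, 16, 32)]

def feature_enhancer(feat):
    # Stand-in for a feature enhancer block (FEB); identity here.
    return feat

def parallel_partial_decoder(feats):
    # Stand-in: upsample each enhanced map to the finest of the three
    # scales (by block repetition) and aggregate them into one map.
    th, tw = feats[0].shape[1:]
    up = [np.kron(f, np.ones((1, th // f.shape[1], tw // f.shape[2])))
          for f in feats]
    return sum(up)

def feature_mixing(global_map):
    # Stand-in for the feature mixing module of Section 3.3.
    return global_map

def mmnet_forward(img):
    feats = [feature_enhancer(f) for f in encoder(img)]
    coarse = parallel_partial_decoder(feats)
    mixed = feature_mixing(coarse)
    # Collapse channels and threshold into a binary mask.
    return (mixed.mean(axis=0) > 0).astype(np.uint8)

mask = mmnet_forward(np.zeros((352, 352, 3)))
```

With a 352 × 352 input and stride 8 at the finest scale, the predicted mask comes out at 44 × 44 and would be upsampled to the input resolution in a real implementation.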
3. Methodology
3.1. Overview of the Model
3.2. Feature Enhancer and Parallel Partial Decoder
3.3. Feature Mixing Module
3.4. Loss Function
4. Experiments and Results
4.1. Datasets
4.2. Evaluation Metrics
4.3. Implementation Details
4.4. Evaluation Results
4.5. Ablation Study
5. Discussions and Limitations
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Favoriti, P.; Carbone, G.; Greco, M.; Pirozzi, F.; Pirozzi, R.E.M.; Corcione, F. Worldwide burden of colorectal cancer: A review. Updat. Surg. 2016, 68, 7–11.
- Granados-Romero, J.J.; Valderrama-Treviño, A.I.; Contreras-Flores, E.H.; Barrera-Mera, B.; Herrera Enríquez, M.; Uriarte-Ruíz, K.; Ceballos-Villalba, J.; Estrada-Mata, A.G.; Alvarado Rodríguez, C.; Arauz-Peña, G. Colorectal cancer: A review. Int. J. Res. Med. Sci. 2017, 5, 4667–4676.
- Holme, Ø.; Bretthauer, M.; Fretheim, A.; Odgaard-Jensen, J.; Hoff, G. Flexible sigmoidoscopy versus faecal occult blood testing for colorectal cancer screening in asymptomatic individuals. Cochrane Database Syst. Rev. 2013, 2013, CD009259.
- Tajbakhsh, N.; Gurudu, S.R.; Liang, J. Automated polyp detection in colonoscopy videos using shape and context information. IEEE Trans. Med. Imaging 2015, 35, 630–644.
- Iwahori, Y.; Shinohara, T.; Hattori, A.; Woodham, R.J.; Fukui, S.; Bhuyan, M.K.; Kasugai, K. Automatic Polyp Detection in Endoscope Images Using a Hessian Filter. In Proceedings of the MVA, Kyoto, Japan, 20–23 May 2013; pp. 21–24.
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
- Akbari, M.; Mohrekesh, M.; Nasr-Esfahani, E.; Soroushmehr, S.R.; Karimi, N.; Samavi, S.; Najarian, K. Polyp segmentation in colonoscopy images using fully convolutional network. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 69–72.
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Berlin, Germany, 2015; pp. 234–241.
- Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. Unet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Springer: Berlin, Germany, 2018; pp. 3–11.
- Poudel, S.; Lee, S.W. Deep multi-scale attentional features for medical image segmentation. Appl. Soft Comput. 2021, 109, 107445.
- Jha, D.; Smedsrud, P.H.; Riegler, M.A.; Johansen, D.; De Lange, T.; Halvorsen, P.; Johansen, H.D. Resunet++: An advanced architecture for medical image segmentation. In Proceedings of the 2019 IEEE International Symposium on Multimedia (ISM), San Diego, CA, USA, 9–11 December 2019; pp. 225–2255.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5998–6008.
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16 × 16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929.
- Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. Transunet: Transformers make strong encoders for medical image segmentation. arXiv 2021, arXiv:2102.04306.
- Dong, B.; Wang, W.; Fan, D.P.; Li, J.; Fu, H.; Shao, L. Polyp-pvt: Polyp segmentation with pyramid vision transformers. arXiv 2021, arXiv:2108.06932.
- Valanarasu, J.M.J.; Oza, P.; Hacihaliloglu, I.; Patel, V.M. Medical transformer: Gated axial-attention for medical image segmentation. arXiv 2021, arXiv:2102.10662.
- Tolstikhin, I.O.; Houlsby, N.; Kolesnikov, A.; Beyer, L.; Zhai, X.; Unterthiner, T.; Yung, J.; Steiner, A.; Keysers, D.; Uszkoreit, J.; et al. Mlp-mixer: An all-mlp architecture for vision. In Proceedings of the Advances in Neural Information Processing Systems, Online, 6–14 December 2021; Volume 34.
- Trockman, A.; Kolter, J.Z. Patches are all you need? arXiv 2022, arXiv:2201.09792.
- Yu, T.; Li, X.; Cai, Y.; Sun, M.; Li, P. Rethinking token-mixing mlp for mlp-based vision backbone. arXiv 2021, arXiv:2106.14882.
- Wang, W.; Xie, E.; Li, X.; Fan, D.P.; Song, K.; Liang, D.; Lu, T.; Luo, P.; Shao, L. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 568–578.
- Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention u-net: Learning where to look for the pancreas. arXiv 2018, arXiv:1804.03999.
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
- Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual attention network for scene segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3146–3154.
- Li, X.; Zhong, Z.; Wu, J.; Yang, Y.; Lin, Z.; Liu, H. Expectation-maximization attention networks for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9167–9176.
- Fang, Y.; Chen, C.; Yuan, Y.; Tong, K.Y. Selective feature aggregation network with area-boundary constraints for polyp segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Shenzhen, China, 13–17 October 2019; Springer: Berlin, Germany, 2019; pp. 302–310.
- Fan, D.P.; Ji, G.P.; Zhou, T.; Chen, G.; Fu, H.; Shen, J.; Shao, L. Pranet: Parallel reverse attention network for polyp segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Lima, Peru, 4–8 October 2020; Springer: Berlin, Germany, 2020; pp. 263–273.
- Zhao, X.; Zhang, L.; Lu, H. Automatic polyp segmentation via multi-scale subtraction network. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Strasbourg, France, 27 September–1 October 2021; Springer: Berlin, Germany, 2021; pp. 120–130.
- Kim, T.; Lee, H.; Kim, D. Uacanet: Uncertainty augmented context attention for polyp segmentation. In Proceedings of the 29th ACM International Conference on Multimedia, Online, 20–24 October 2021; pp. 2167–2175.
- Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; Jégou, H. Training data-efficient image transformers & distillation through attention. In Proceedings of the International Conference on Machine Learning—PMLR, Online, 18–24 July 2021; pp. 10347–10357.
- Wu, Z.; Su, L.; Huang, Q. Cascaded partial decoder for fast and accurate salient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3907–3916.
- Liu, S.; Huang, D.; Wang, Y. Receptive field block net for accurate and fast object detection. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 385–400.
- Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning—PMLR, Lille, France, 6–11 July 2015; pp. 448–456.
- Ba, J.L.; Kiros, J.R.; Hinton, G.E. Layer normalization. arXiv 2016, arXiv:1607.06450.
- Hendrycks, D.; Gimpel, K. Gaussian error linear units (gelus). arXiv 2016, arXiv:1606.08415.
- Wei, J.; Wang, S.; Huang, Q. F3Net: Fusion, Feedback and Focus for Salient Object Detection. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 12321–12328.
- Jha, D.; Smedsrud, P.H.; Riegler, M.A.; Halvorsen, P.; de Lange, T.; Johansen, D.; Johansen, H.D. Kvasir-seg: A segmented polyp dataset. In Proceedings of the International Conference on Multimedia Modeling, Daejeon, Republic of Korea, 5–8 January 2020; Springer: Berlin, Germany, 2020; pp. 451–462.
- Bernal, J.; Sánchez, F.J.; Fernández-Esparrach, G.; Gil, D.; Rodríguez, C.; Vilariño, F. WM-DOVA maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians. Comput. Med. Imaging Graph. 2015, 43, 99–111.
- Bernal, J.; Sánchez, J.; Vilarino, F. Towards automatic polyp detection with a polyp appearance model. Pattern Recognit. 2012, 45, 3166–3182.
- Vázquez, D.; Bernal, J.; Sánchez, F.J.; Fernández-Esparrach, G.; López, A.M.; Romero, A.; Drozdzal, M.; Courville, A. A benchmark for endoluminal scene segmentation of colonoscopy images. J. Healthc. Eng. 2017, 2017, 4037190.
- Silva, J.; Histace, A.; Romain, O.; Dray, X.; Granado, B. Toward embedded detection of polyps in wce images for early diagnosis of colorectal cancer. Int. J. Comput. Assist. Radiol. Surg. 2014, 9, 283–293.
- Margolin, R.; Zelnik-Manor, L.; Tal, A. How to evaluate foreground maps? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 248–255.
- Fan, D.P.; Gong, C.; Cao, Y.; Ren, B.; Cheng, M.M.; Borji, A. Enhanced-alignment measure for binary foreground map evaluation. arXiv 2018, arXiv:1805.10421.
- Fan, D.P.; Cheng, M.M.; Liu, Y.; Li, T.; Borji, A. Structure-measure: A new way to evaluate foreground maps. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4548–4557.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Patel, K.; Bur, A.M.; Wang, G. Enhanced u-net: A feature enhancement network for polyp segmentation. In Proceedings of the 2021 18th Conference on Robots and Vision (CRV), Burnaby, BC, Canada, 26–28 May 2021; pp. 181–188.
| Dataset | Methods | mDice ↑ | mIoU ↑ | F_β^w ↑ | S_α ↑ | E_ξ ↑ | MAE ↓ |
|---|---|---|---|---|---|---|---|
| Kvasir-SEG | UNet++ [9] | 0.824 | 0.753 | 0.808 | 0.862 | 0.907 | 0.048 |
| | SFA [25] | 0.725 | 0.619 | 0.670 | 0.782 | 0.828 | 0.075 |
| | PraNet [26] | 0.901 | 0.848 | 0.885 | 0.915 | 0.943 | 0.030 |
| | EU-Net [45] | 0.908 | 0.854 | 0.893 | 0.917 | 0.954 | 0.028 |
| | MSNet [27] | 0.907 | 0.862 | 0.893 | 0.922 | 0.944 | 0.028 |
| | UACANet-S [28] | 0.905 | 0.852 | 0.897 | 0.914 | 0.951 | 0.026 |
| | UACANet-L [28] | 0.912 | 0.859 | 0.902 | 0.917 | 0.958 | 0.025 |
| | Polyp-PVT [20] | 0.917 | 0.864 | 0.911 | 0.925 | 0.962 | 0.023 |
| | MMNet (Ours) | 0.917 | 0.866 | 0.910 | 0.927 | 0.966 | 0.023 |
| CVC-ClinicDB | UNet++ [9] | 0.797 | 0.741 | 0.785 | 0.872 | 0.898 | 0.022 |
| | SFA [25] | 0.698 | 0.615 | 0.647 | 0.793 | 0.816 | 0.042 |
| | PraNet [26] | 0.902 | 0.858 | 0.896 | 0.935 | 0.958 | 0.009 |
| | EU-Net [45] | 0.902 | 0.846 | 0.891 | 0.936 | 0.965 | 0.011 |
| | MSNet [27] | 0.921 | 0.879 | 0.914 | 0.941 | 0.972 | 0.008 |
| | UACANet-S [28] | 0.916 | 0.870 | 0.917 | 0.940 | 0.968 | 0.008 |
| | UACANet-L [28] | 0.926 | 0.880 | 0.928 | 0.943 | 0.976 | 0.006 |
| | Polyp-PVT [20] | 0.937 | 0.889 | 0.936 | 0.949 | 0.989 | 0.006 |
| | MMNet (Ours) | 0.937 | 0.889 | 0.935 | 0.953 | 0.990 | 0.006 |
| Dataset | Methods | mDice ↑ | mIoU ↑ | F_β^w ↑ | S_α ↑ | E_ξ ↑ | MAE ↓ |
|---|---|---|---|---|---|---|---|
| CVC-ColonDB | UNet++ [9] | 0.490 | 0.413 | 0.467 | 0.691 | 0.762 | 0.064 |
| | SFA [25] | 0.467 | 0.351 | 0.379 | 0.634 | 0.648 | 0.094 |
| | PraNet [26] | 0.716 | 0.645 | 0.699 | 0.820 | 0.847 | 0.043 |
| | EU-Net [45] | 0.756 | 0.681 | 0.730 | 0.831 | 0.872 | 0.045 |
| | MSNet [27] | 0.755 | 0.678 | 0.737 | 0.836 | 0.883 | 0.041 |
| | UACANet-S [28] | 0.783 | 0.704 | 0.772 | 0.848 | 0.897 | 0.034 |
| | UACANet-L [28] | 0.751 | 0.678 | 0.746 | 0.835 | 0.878 | 0.039 |
| | Polyp-PVT [20] | 0.808 | 0.727 | 0.795 | 0.865 | 0.919 | 0.031 |
| | MMNet (Ours) | 0.812 | 0.728 | 0.795 | 0.870 | 0.923 | 0.026 |
| CVC-300 | UNet++ [9] | 0.714 | 0.636 | 0.687 | 0.838 | 0.884 | 0.018 |
| | SFA [25] | 0.465 | 0.332 | 0.341 | 0.640 | 0.604 | 0.065 |
| | PraNet [26] | 0.873 | 0.804 | 0.843 | 0.924 | 0.938 | 0.010 |
| | EU-Net [45] | 0.837 | 0.765 | 0.805 | 0.904 | 0.933 | 0.015 |
| | MSNet [27] | 0.869 | 0.807 | 0.849 | 0.925 | 0.943 | 0.010 |
| | UACANet-S [28] | 0.902 | 0.837 | 0.886 | 0.934 | 0.976 | 0.006 |
| | UACANet-L [28] | 0.910 | 0.849 | 0.901 | 0.937 | 0.980 | 0.005 |
| | Polyp-PVT [20] | 0.900 | 0.833 | 0.884 | 0.935 | 0.981 | 0.007 |
| | MMNet (Ours) | 0.901 | 0.834 | 0.885 | 0.938 | 0.977 | 0.006 |
| ETIS | UNet++ [9] | 0.413 | 0.342 | 0.390 | 0.681 | 0.704 | 0.035 |
| | SFA [25] | 0.297 | 0.219 | 0.231 | 0.557 | 0.515 | 0.109 |
| | PraNet [26] | 0.630 | 0.576 | 0.600 | 0.791 | 0.792 | 0.031 |
| | EU-Net [45] | 0.687 | 0.609 | 0.636 | 0.793 | 0.841 | 0.068 |
| | MSNet [27] | 0.719 | 0.664 | 0.678 | 0.840 | 0.830 | 0.020 |
| | UACANet-S [28] | 0.694 | 0.615 | 0.650 | 0.815 | 0.851 | 0.023 |
| | UACANet-L [28] | 0.766 | 0.689 | 0.740 | 0.859 | 0.905 | 0.012 |
| | Polyp-PVT [20] | 0.787 | 0.706 | 0.750 | 0.871 | 0.910 | 0.013 |
| | MMNet (Ours) | 0.807 | 0.752 | 0.771 | 0.880 | 0.923 | 0.012 |
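The mDice, mIoU, and MAE columns in the tables above have standard per-image definitions that can be sketched directly; the weighted F-measure, S-measure, and E-measure follow the cited evaluation protocols and are omitted here, and the paper's exact averaging over a dataset may differ from this per-image sketch:

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def iou(pred, gt, eps=1e-8):
    """Intersection over union between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)

def mae(pred, gt):
    """Mean absolute error between prediction and ground truth."""
    return np.abs(pred.astype(float) - gt.astype(float)).mean()

# Toy 4x4 masks: 3 foreground pixels each, overlapping in 2.
p = np.zeros((4, 4), bool); p[0, :3] = True
g = np.zeros((4, 4), bool); g[0, 1:] = True
# dice(p, g) = 2*2/(3+3) ≈ 0.667, iou(p, g) = 2/4 = 0.5, mae(p, g) = 2/16 = 0.125
```

mDice and mIoU in the tables are these scores averaged over all test images of a dataset; MAE is typically computed on the continuous (pre-threshold) prediction.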
| Datasets | Kvasir-SEG | CVC-ClinicDB | CVC-ColonDB | CVC-300 | ETIS |
|---|---|---|---|---|---|
| Metrics | mDice ± SD | mDice ± SD | mDice ± SD | mDice ± SD | mDice ± SD |
| UNet++ [9] | 0.821 ± 0.040 | 0.794 ± 0.044 | 0.456 ± 0.037 | 0.707 ± 0.053 | 0.401 ± 0.057 |
| SFA [25] | 0.723 ± 0.052 | 0.701 ± 0.054 | 0.444 ± 0.037 | 0.468 ± 0.050 | 0.297 ± 0.025 |
| PraNet [26] | 0.898 ± 0.041 | 0.899 ± 0.048 | 0.712 ± 0.038 | 0.871 ± 0.051 | 0.628 ± 0.036 |
| EU-Net [45] | 0.908 ± 0.042 | 0.902 ± 0.048 | 0.756 ± 0.040 | 0.837 ± 0.049 | 0.687 ± 0.039 |
| UACANet-L [28] | 0.912 ± N/A | 0.926 ± N/A | 0.751 ± N/A | 0.910 ± N/A | 0.766 ± N/A |
| Polyp-PVT [20] | 0.917 ± 0.042 | 0.937 ± 0.050 | 0.808 ± 0.043 | 0.900 ± 0.052 | 0.787 ± 0.044 |
| MMNet (Ours) | 0.917 ± 0.041 | 0.937 ± 0.048 | 0.812 ± 0.042 | 0.901 ± 0.057 | 0.807 ± 0.032 |
| Dataset | Methods | mDice ↑ | mIoU ↑ | F_β^w ↑ | S_α ↑ | E_ξ ↑ | MAE ↓ |
|---|---|---|---|---|---|---|---|
| Kvasir-SEG | Backbone | 0.899 | 0.837 | 0.887 | 0.912 | 0.945 | 0.029 |
| | Backbone + FEB1 | 0.860 | 0.783 | 0.837 | 0.880 | 0.923 | 0.042 |
| | Backbone + FEB2 | 0.901 | 0.838 | 0.884 | 0.914 | 0.955 | 0.031 |
| | Backbone + FEB3 | 0.906 | 0.850 | 0.895 | 0.918 | 0.955 | 0.028 |
| | Backbone + FEB123 + PPD | 0.909 | 0.849 | 0.896 | 0.920 | 0.957 | 0.026 |
| | MMNet (Final) | 0.917 | 0.866 | 0.910 | 0.927 | 0.966 | 0.023 |
| CVC-ClinicDB | Backbone | 0.923 | 0.868 | 0.920 | 0.947 | 0.989 | 0.007 |
| | Backbone + FEB1 | 0.890 | 0.829 | 0.880 | 0.922 | 0.956 | 0.017 |
| | Backbone + FEB2 | 0.905 | 0.847 | 0.900 | 0.930 | 0.969 | 0.017 |
| | Backbone + FEB3 | 0.906 | 0.846 | 0.901 | 0.937 | 0.973 | 0.012 |
| | Backbone + FEB123 + PPD | 0.919 | 0.867 | 0.917 | 0.942 | 0.974 | 0.010 |
| | MMNet (Final) | 0.937 | 0.888 | 0.935 | 0.953 | 0.990 | 0.006 |
| CVC-ColonDB | Backbone | 0.776 | 0.685 | 0.756 | 0.850 | 0.903 | 0.036 |
| | Backbone + FEB1 | 0.695 | 0.603 | 0.666 | 0.800 | 0.859 | 0.047 |
| | Backbone + FEB2 | 0.752 | 0.667 | 0.730 | 0.835 | 0.883 | 0.044 |
| | Backbone + FEB3 | 0.783 | 0.695 | 0.759 | 0.853 | 0.902 | 0.038 |
| | Backbone + FEB123 + PPD | 0.783 | 0.698 | 0.764 | 0.850 | 0.903 | 0.037 |
| | MMNet (Final) | 0.812 | 0.728 | 0.795 | 0.870 | 0.923 | 0.026 |
| CVC-300 | Backbone | 0.878 | 0.807 | 0.857 | 0.928 | 0.971 | 0.007 |
| | Backbone + FEB1 | 0.831 | 0.738 | 0.784 | 0.894 | 0.965 | 0.014 |
| | Backbone + FEB2 | 0.878 | 0.809 | 0.855 | 0.927 | 0.971 | 0.008 |
| | Backbone + FEB3 | 0.869 | 0.792 | 0.841 | 0.919 | 0.969 | 0.011 |
| | Backbone + FEB123 + PPD | 0.878 | 0.807 | 0.853 | 0.925 | 0.967 | 0.011 |
| | MMNet (Final) | 0.901 | 0.834 | 0.885 | 0.938 | 0.977 | 0.006 |
| ETIS | Backbone | 0.753 | 0.663 | 0.707 | 0.856 | 0.908 | 0.016 |
| | Backbone + FEB1 | 0.703 | 0.606 | 0.652 | 0.826 | 0.886 | 0.020 |
| | Backbone + FEB2 | 0.748 | 0.663 | 0.708 | 0.858 | 0.895 | 0.022 |
| | Backbone + FEB3 | 0.762 | 0.674 | 0.716 | 0.861 | 0.894 | 0.022 |
| | Backbone + FEB123 + PPD | 0.790 | 0.708 | 0.744 | 0.878 | 0.895 | 0.022 |
| | MMNet (Final) | 0.807 | 0.752 | 0.771 | 0.880 | 0.923 | 0.012 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ghimire, R.; Lee, S.-W. MMNet: A Mixing Module Network for Polyp Segmentation. Sensors 2023, 23, 7258. https://doi.org/10.3390/s23167258