Real-Time Multi-Label Upper Gastrointestinal Anatomy Recognition from Gastroscope Videos
Abstract
1. Introduction
2. Materials and Methods
2.1. Datasets
2.2. Backbone Structure
2.3. GCN Structure
2.4. LSTM Structure
2.5. Experimental Setups
3. Results
3.1. Evaluation Metrics
3.2. Experimental Results
3.2.1. GCN Structure
3.2.2. GCN with LSTM Structure
3.2.3. Retrospective Analysis of EGD Videos
4. Discussion
4.1. Recognition Evaluation
4.2. Clinical Retrospective Analysis
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
EGD | Esophagogastroduodenoscopy |
GCN | Graph convolutional network |
LSTM | Long short-term memory |
ASGE | American Society for Gastrointestinal Endoscopy |
ACG | American College of Gastroenterology |
ESGE | European Society of Gastrointestinal Endoscopy |
HMMs | Hidden Markov models |
References
Anatomy | CNN-RNN | RNN-Attention | ResNet50 | ResNet-GCN | GL-Net |
---|---|---|---|---|---|
Esophagus | |||||
Squamocolumnar junction | |||||
Cardia I | |||||
Cardia O | |||||
Fundus | |||||
Middle-upper body A | |||||
Middle-upper body L | |||||
Middle-upper body P | |||||
Middle-upper body G | |||||
Lower body A | |||||
Lower body L | |||||
Lower body P | |||||
Lower body G | |||||
Antrum A | |||||
Antrum L | |||||
Antrum P | |||||
Antrum G | |||||
Angulus | |||||
R-middle-upper body A | |||||
R-middle-upper body L | |||||
R-middle-upper body P | |||||
R-middle-upper body G | |||||
Duodenal bulb | |||||
Duodenal descending | |||||
Pylorus |
Anatomy | Miss Rate (%) |
---|---|
Esophagus | |
Squamocolumnar junction | |
Cardia I | |
Cardia O | |
Fundus | |
Middle-upper body A | |
Middle-upper body L | |
Middle-upper body P | |
Middle-upper body G | |
Lower body A | |
Lower body L | |
Lower body P | |
Lower body G | |
Antrum A | |
Antrum L | |
Antrum P | |
Antrum G | |
Angulus | |
R-middle-upper body A | |
R-middle-upper body L | |
R-middle-upper body P | |
R-middle-upper body G | |
Duodenal bulb | |
Duodenal descending | |
Pylorus |
Inspection Type | Mean Time (min) |
---|---|
Regular endoscopy | |
Coverage of all anatomy |
Anatomy | Inspection Time (s) |
---|---|
Esophagus | |
Squamocolumnar junction | |
Cardia I | |
Cardia O | |
Fundus | |
Middle-upper body A | 39 |
Middle-upper body L | |
Middle-upper body P | |
Middle-upper body G | |
Lower body A | |
Lower body L | |
Lower body P | |
Lower body G | |
Antrum A | |
Antrum L | 30 |
Antrum P | |
Antrum G | 42 |
Angulus | 45 |
R-middle-upper body A | 15 |
R-middle-upper body L | |
R-middle-upper body P | |
R-middle-upper body G | |
Duodenal bulb | |
Duodenal descending | |
Pylorus |
Frame Type | Ratio (%) |
---|---|
Effective Frames | |
Invalid Frames |