Localization and Classification of Gastrointestinal Tract Disorders Using Explainable AI from Endoscopic Images
Abstract
1. Introduction
- Development of an encoder–decoder-based model for segmentation and localization of diseases.
- Development of an explainable AI-based model for classifying endoscopic images with drawn contours into four main diseases.
- Development of an efficient and robust framework with improved accuracy, precision, and recall (a minimal pipeline sketch follows this list).
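The contributions above centre on an encoder–decoder segmentation stage followed by classification of the contour-overlaid images into four disease classes. The snippet below is a minimal sketch of such a two-stage pipeline, assuming PyTorch with the segmentation-models-pytorch and timm libraries; the ResNet-34 encoder choice mirrors the "UNet with ResNet-34" entry in the segmentation results table, while the Xception classifier backbone and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: library choices, backbone, and overlay step are assumptions.
import torch
import segmentation_models_pytorch as smp
import timm

# Stage 1: encoder-decoder segmentation (U-Net with a ResNet-34 encoder,
# matching the "UNet with ResNet-34" row in the segmentation results table).
seg_model = smp.Unet(
    encoder_name="resnet34",      # pretrained ResNet-34 as the encoder
    encoder_weights="imagenet",
    in_channels=3,
    classes=1,                    # binary lesion mask
).eval()

# Stage 2: 4-way disease classification of the contour-overlaid frame.
# Xception is assumed here only because it appears in the reference list.
cls_model = timm.create_model("xception", pretrained=True, num_classes=4).eval()

with torch.no_grad():
    x = torch.randn(1, 3, 256, 256)                 # dummy endoscopic frame
    mask = torch.sigmoid(seg_model(x))              # predicted lesion probability map
    contoured = x * (0.7 + 0.3 * (mask > 0.5))      # crude stand-in for drawing contours
    logits = cls_model(contoured)                   # 4-way disease prediction
print(mask.shape, logits.shape)                     # (1, 1, 256, 256) and (1, 4)
```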
2. Literature Review
3. Methodology
3.1. Dataset Collection and Preparation
3.2. Preprocessing
3.3. Segmentation
3.4. Heat Maps
3.5. Feature Extraction and Classification
4. Results
4.1. Segmentation Results
4.2. Classification Results
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Ling, T.; Wu, L.; Fu, Y.; Xu, Q.; An, P.; Zhang, J.; Hu, S.; Chen, Y.; He, X.; Wang, J.; et al. A deep learning-based system for identifying differentiation status and delineating the margins of early gastric cancer in magnifying narrow-band imaging endoscopy. Endoscopy 2021, 53, 469–477. [Google Scholar] [CrossRef]
- Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global cancer statistics 2020: Globocan estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2021, 71, 209–249. [Google Scholar]
- Noor, M.N.; Nazir, M.; Ashraf, I.; Almujally, N.A.; Aslam, M.; Fizzah Jilani, S. GastroNet: A robust attention-based deep learning and cosine similarity feature selection framework for gastrointestinal disease classification from endoscopic images. CAAI Trans. Intell. Technol. 2023, 1–14. [Google Scholar] [CrossRef]
- Available online: https://www.cancer.net/cancer-types/colorectal-cancer/statistics (accessed on 20 April 2023).
- Siegel, R.L.; Miller, K.D.; Jemal, A. Cancer statistics, 2015. CA Cancer J. Clin. 2015, 65, 5–29. [Google Scholar] [CrossRef]
- Korkmaz, M.F. Artificial Neural Network by Using HOG Features HOG_LDA_ANN. In Proceedings of the 15th IEEE International Symposium on Intelligent Systems and Informatics (SISY), Subotica, Serbia, 14–16 September 2017; pp. 327–332. [Google Scholar]
- Li, S.; Cao, J.; Yao, J.; Zhu, J.; He, X.; Jiang, Q. Adaptive aggregation with self-attention network for gastrointestinal image classification. IET Image Process 2022, 16, 2384–2397. [Google Scholar] [CrossRef]
- Azhari, H.; King, J.; Underwood, F.; Coward, S.; Shah, S.; Ho, G.; Chan, C.; Ng, S.; Kaplan, G. The global incidence of peptic ulcer disease at the turn of the 21st century: A study of the organization for economic co-operation and development (oecd). Am. J. Gastroenterol. 2018, 113, S682–S684. [Google Scholar] [CrossRef]
- Kim, N.H.; Jung, Y.S.; Jeong, W.S.; Yang, H.J.; Park, S.K.; Choi, K.; Park, D.I. Miss rate of colorectal neoplastic polyps and risk factors for missed polyps in consecutive colonoscopies. Intest. Res. 2017, 15, 411. [Google Scholar] [CrossRef] [Green Version]
- Iddan, G.; Meron, G.; Glukhovsky, A.; Swain, P. Wireless capsule endoscopy. Nature 2000, 405, 417. [Google Scholar] [CrossRef]
- Khan, M.A.; Sarfraz, M.S.; Alhaisoni, M.; Albesher, A.A.; Wang, S. StomachNet: Optimal deep learning features fusion for stomach abnormalities classification. IEEE Access 2020, 8, 197969–197981. [Google Scholar] [CrossRef]
- Khan, M.A.; Sharif, M.; Akram, T.; Yasmin, M.; Nayak, R.S. Stomach deformities recognition using rank-based deep features selection. J. Med. Syst. 2019, 43, 329. [Google Scholar] [CrossRef]
- Yeh, J.Y.; Wu, T.H.; Tsai, W.J. Bleeding and ulcer detection using wireless capsule endoscopy images. J. Softw. Eng. Appl. 2014, 7, 422. [Google Scholar]
- Dewi, A.K.; Novianty, A.; Purboyo, T.W. Stomach disorder detection through the Iris Image using Backpropagation Neural Network. In Proceedings of the 2016 International Conference on Informatics and Computing (ICIC), Mataram, Indonesia, 28–29 October 2016; pp. 192–197. [Google Scholar]
- Korkmaz, S.A.; Akcicek, A.; Binol, H.; Korkmaz, M.F. Recognition of the stomach cancer images with probabilistic HOG feature vector histograms by using HOG features. In Proceedings of the 2017 IEEE 15th International Symposium on Intelligent Systems and Informatics (SISY), Subotica, Serbia, 14–16 September 2017; pp. 339–342. [Google Scholar]
- De Groen, P.C. Using artificial intelligence to improve adequacy of inspection in gastrointestinal endoscopy. Tech. Innov. Gastrointest. Endosc. 2020, 22, 71–79. [Google Scholar] [CrossRef]
- Wong, G.L.-H.; Ma, A.J.; Deng, H.; Ching, J.Y.-L.; Wong, V.W.-S.; Tse, Y.-K.; Yip, T.C.-F.; Lau, L.H.-S.; Liu, H.H.-W.; Leung, C.-M.; et al. Machine learning model to predict recurrent ulcer bleeding in patients with history of idiopathic gastroduodenal ulcer bleeding. Aliment. Pharmacol. Ther. 2019, 49, 912–918. [Google Scholar]
- Wang, S.; Xing, Y.; Zhang, L.; Gao, H.; Zhang, H. Second glance framework (secG): Enhanced ulcer detection with deep learning on a large wireless capsule endoscopy dataset. In Proceedings of the Fourth International Workshop on Pattern Recognition, Nanjing, China, 31 July 2019; pp. 170–176. [Google Scholar]
- Majid, A.; Khan, M.A.; Yasmin, M.; Rehman, A.; Yousafzai, A.; Tariq, U. Classification of stomach infections: A paradigm of convolutional neural network along with classical features fusion and selection. Microsc. Res. Tech. 2020, 83, 562–576. [Google Scholar] [CrossRef]
- Sun, J.Y.; Lee, S.W.; Kang, M.C.; Kim, S.W.; Kim, S.Y.; Ko, S.J. A novel gastric ulcer differentiation system using convolutional neural networks. In Proceedings of the 2018 IEEE 31st International Symposium on Computer-Based Medical Systems (CBMS), Karlstad, Sweden, 18–21 June 2018; pp. 351–356. [Google Scholar]
- Aoki, T.; Yamada, A.; Aoyama, K.; Saito, H.; Tsuboi, A.; Nakada, A.; Niikura, R.; Fujishiro, M.; Oka, S.; Ishihara, S.; et al. Automatic detection of erosions and ulcerations in wireless capsule endoscopy images based on a deep convolutional neural network. Gastrointest. Endosc. 2019, 89, 357–363. [Google Scholar]
- Sekuboyina, A.K.; Devarakonda, S.T.; Seelamantula, C.S. A convolutional neural network approach for abnormality detection in wireless capsule endoscopy. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, Australia, 18–21 April 2017; pp. 1057–1060. [Google Scholar]
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
- Milletari, F.; Navab, N.; Ahmadi, S.-A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571. [Google Scholar]
- Zhang, Z.; Liu, Q.; Wang, Y. Road extraction by deep residual unet. IEEE Geosci. Remote Sens. Lett. 2018, 15, 749–753. [Google Scholar] [CrossRef] [Green Version]
- Guo, Y.B.; Matuszewski, B. Giana polyp segmentation with fully convolutional dilation neural networks. In Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Prague, Czech Republic, 25–27 February 2019; pp. 632–641. [Google Scholar]
- Alhajlah, M.; Noor, M.N.; Nazir, M.; Mahmood, A.; Ashraf, I.; Karamat, T. Gastrointestinal Diseases Classification Using Deep Transfer Learning and Features Optimization. Comput. Mater. Contin. 2023, 75, 2227–2245. [Google Scholar] [CrossRef]
- Nouman, N.M.; Nazir, M.; Khan, S.A.; Song, O.-Y.; Ashraf, I. Efficient Gastrointestinal Disease Classification Using Pretrained Deep Convolutional Neural Network. Electronics 2023, 12, 1557. [Google Scholar] [CrossRef]
- Jha, D.; Smedsrud, P.H.; Riegler, M.; Halvorsen, P.; Lange, T.D.; Johansen, D.; Johansen, H.D. Kvasir-SEG: A Segmented Polyp Dataset. In Proceedings of the MultiMedia Modeling: 26th International Conference, MMM 2020, Daejeon, South Korea, 5–8 January 2020; Proceedings, Part II 26. Springer International Publishing: Cham, Switzerland, 2020; Volume 11962, pp. 451–462. [Google Scholar]
- Pogorelov, K.; Randel, K.R.; Griwodz, C.; Eskeland, S.L.; de Lange, T.; Johansen, D.; Spampinato, C.; Dang-Nguyen, D.T.; Lux, M.; Schmidt, P.T.; et al. KVASIR: A multi-class image dataset for computer aided gastrointestinal disease detection. In Proceedings of the 8th ACM on Multimedia Systems Conference, Taipei, Taiwan, 20–23 June 2017; pp. 164–169. [Google Scholar]
- Borgli, H.; Thambawita, V.; Smedsrud, P.H.; Hicks, S.; Jha, D.; Eskeland, S.L.; Randel, K.R.; Pogorelov, K.; Lux, M.; Nguyen, D.T.D.; et al. Hyper-Kvasir: A Comprehensive Multi-Class Image and Video Dataset for Gastrointestinal Endoscopy. Sci. Data 2020, 7, 283. [Google Scholar] [CrossRef]
- Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big. Data 2019, 6, 60. [Google Scholar] [CrossRef] [Green Version]
- Ding, Y.; Chen, F.; Zhao, Y.; Wu, Z.; Zhang, C.; Wu, D. A Stacked Multi-Connection Simple Reducing Net for Brain Tumor Segmentation. IEEE Access 2020, 7, 104011–104024. [Google Scholar] [CrossRef]
- Kaiming, H.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
- Livne, M.; Rieger, J.; Aydin, O.U.; Taha, A.A.; Akay, E.M.; Kossen, T.; Sobesky, J.; Kelleher, J.D.; Hildebrand, K.; Frey, D.; et al. A U-Net Deep Learning Framework for High Performance Vessel Segmentation in Patients with Cerebrovascular Disease. Front. Neurosci. 2019, 13, 97. [Google Scholar] [CrossRef] [Green Version]
- Bae, K.; Ryu, H.; Shin, H. Does Adam optimizer keep close to the optimal point? arXiv 2019, arXiv:1911.00289. [Google Scholar]
- Kusakunniran, W.; Karnjanapreechakorn, S.; Siriapisith, T.; Borwarnginn, P.; Sutassananon, K.; Tongdee, T.; Saiviroonporn, P. COVID-19 detection and heatmap generation in chest x-ray images. J. Med. Imaging 2021, 8, 014001. [Google Scholar] [CrossRef]
- van der Velden, B.H.M.; Kuijf, J.H.; Gilhuijs, K.G.A.; Viergever, M.A. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Med. Image Anal. 2022, 79, 102470. [Google Scholar] [CrossRef]
- Noor, M.N.; Khan, T.A.; Haneef, F.; Ramay, M.I. Machine Learning Model to Predict Automated Testing Adoption. Int. J. Softw. Innov. 2022, 10, 1–15. [Google Scholar] [CrossRef]
- Noor, M.N.; Nazir, M.; Rehman, S.; Tariq, J. Sketch-Recognition using Pre-Trained Model. In Proceedings of the National Conference on Engineering and Computing Technology, Islamabad, Pakistan, 12–13 June 2021; Volume 8. [Google Scholar]
- Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
- Ho, Y.; Wookey, S. The real-world-weight cross-entropy loss function: Modeling the costs of mislabeling. IEEE Access 2019, 8, 4806–4813. [Google Scholar] [CrossRef]
- Taha, A.A.; Hanbury, A. Metrics for evaluating 3D medical image segmentation: Analysis, selection, and tool. BMC Med. Imaging 2015, 15, 29. [Google Scholar] [CrossRef] [Green Version]
- Jha, D.; Smedsrud, P.H.; Riegler, M.A.; Johansen, D.; De Lange, T.; Halvorsen, P.; Johansen, H.D. ResUNet++: An Advanced Architecture for Medical Image Segmentation. In Proceedings of the 2019 IEEE International Symposium on Multimedia (ISM), San Diego, CA, USA, 9–11 December 2019; pp. 225–2255. [Google Scholar] [CrossRef] [Green Version]
- Jha, D.; Ali, S.; Tomar, N.K.; Johansen, H.D.; Johansen, D.; Rittscher, J.; Riegler, M.A.; Halvorsen, P. Real-time polyp detection, localization and segmentation in colonoscopy using deep learning. IEEE Access 2021, 9, 40496–40510. [Google Scholar] [CrossRef]
- Huang, C.-H.; Wu, H.-Y.; Lin, Y.-L. Hardnet-mseg: A simple encoder-decoder polyp segmentation neural network that achieves over 0.9 mean dice and 86 fps. arXiv 2021, arXiv:2101.07172. [Google Scholar]
- Fan, D.P.; Ji, G.P.; Zhou, T.; Chen, G.; Fu, H.; Shen, J.; Shao, L. Pranet: Parallel reverse attention network for polyp segmentation. In Proceedings of the International Conference on Medical image Computing and Computer-Assisted Intervention, Lima, Peru, 4–8 October 2020. [Google Scholar]
- Habib, M.; Ramzan, M.; Khan, S.A. A Deep Learning and Handcrafted Based Computationally Intelligent Technique for Effective COVID-19 Detection from X-ray/CT-scan Imaging. J. Grid Comput. 2022, 20, 23. [Google Scholar] [CrossRef]
- Ramzan, M.; Habib, M.; Khan, S.A. Secure and efficient privacy protection system for medical records. Sustain. Comput. Inform. Syst. 2022, 35, 100717. [Google Scholar] [CrossRef]
- Masmoudi, Y.; Ramzan, M.; Khan, S.A.; Habib, M. Optimal feature extraction and ulcer classification from WCE image data using deep learning. Soft Comput. 2022, 26, 7979–7992. [Google Scholar] [CrossRef]
- Riaz, A.; Riaz, N.; Mahmood, A.; Khan, S.A.; Mahmood, I.; Almutiry, O.; Dhahri, H. ExpressionHash: Securing telecare medical information systems using biohashing. Comput. Mater. Contin. 2021, 67, 2747–2764. [Google Scholar] [CrossRef]
- Hussain, A.; Alawairdhi, M.; Alazemi, F.; Khan, S.A.; Ramzan, M. A Hybrid Approach for the Lung (s) Nodule Detection Using the Deformable Model and Distance Transform. Intell. Autom. Soft Comput. 2020, 26, 857–871. [Google Scholar] [CrossRef]
| Sr. No. | Method | Dice | mIoU | Precision | Recall |
|---|---|---|---|---|---|
| 1 | ResUNet [45] | 0.5144 | 0.4364 | 0.7292 | 0.5041 |
| 2 | ColonSegNet [46] | 0.7980 | 0.6980 | 0.8432 | 0.8193 |
| 3 | HarDNet-MSEG [47] | 0.8102 | 0.7459 | 0.8652 | 0.8485 |
| 4 | PraNet [48] | 0.8142 | 0.8796 | 0.9126 | 0.8453 |
| 5 | UNet with ResNet-34 | 0.8208 | 0.9030 | 0.9435 | 0.8597 |
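The table above compares segmentation performance using Dice, mIoU, precision, and recall, the standard overlap metrics for binary masks [44], with mIoU typically reported as the mean IoU over the test images. The snippet below is an illustrative computation of these quantities from a predicted and a ground-truth mask; it follows the standard definitions and is not the authors' evaluation code.

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    """Dice, IoU, precision, and recall for boolean masks of the same shape."""
    tp = np.logical_and(pred, gt).sum()    # overlap pixels
    fp = np.logical_and(pred, ~gt).sum()   # predicted but not in ground truth
    fn = np.logical_and(~pred, gt).sum()   # missed ground-truth pixels
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    iou = tp / (tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return dice, iou, precision, recall

# Toy example: a prediction that covers half of a square lesion.
gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:4] = True
print(overlap_metrics(pred, gt))  # Dice ~0.667, IoU = 0.5, precision = 1.0, recall = 0.5
```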
| Classifier | Precision | Recall | Accuracy |
|---|---|---|---|
| Softmax | 89.62% | 78.25% | 81.06% |
| Linear SVM | 77.16% | 71.33% | 75.94% |
| Quadratic SVM | 88.97% | 78.17% | 80.68% |
| Bayesian | 77.16% | 71.33% | 75.94% |
| Classifier | Precision | Recall | Accuracy |
|---|---|---|---|
| Softmax | 87.67% | 80.13% | 85.27% |
| Linear SVM | 81.92% | 77.42% | 79.19% |
| Quadratic SVM | 86.22% | 80.10% | 85.02% |
| Bayesian | 80.31% | 74.22% | 78.94% |
| Classifier | Precision | Recall | Accuracy |
|---|---|---|---|
| Softmax | 96.94% | 93.22% | 94.68% |
| Linear SVM | 90.35% | 87.01% | 87.19% |
| Quadratic SVM | 91.77% | 82.01% | 86.22% |
| Bayesian | 91.64% | 88.35% | 88.34% |
| Classifier | Precision | Recall | Accuracy |
|---|---|---|---|
| Softmax | 82.56% | 73.69% | 78.06% |
| Linear SVM | 74.22% | 67.42% | 72.39% |
| Quadratic SVM | 82.18% | 72.81% | 77.83% |
| Bayesian | 78.16% | 71.23% | 77.81% |
| Classifier | Precision | Recall | Accuracy |
|---|---|---|---|
| Softmax | 99.68% | 96.13% | 98.32% |
| Linear SVM | 91.72% | 89.29% | 90.07% |
| Quadratic SVM | 99.24% | 95.04% | 97.64% |
| Bayesian | 97.63% | 94.46% | 97.28% |
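The classification tables above report precision, recall, and accuracy for a softmax classifier and for linear SVM, quadratic SVM, and Bayesian classifiers trained on the extracted deep features. The sketch below shows one way such a comparison could be set up with scikit-learn; the feature dimensionality, the random data, and the classifier settings are placeholders for illustration and do not reflect the authors' configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score, accuracy_score

# X: deep features extracted from the classification backbone (one row per image),
# y: labels for the four disease classes. Random data here, purely for illustration.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(400, 2048)), rng.integers(0, 4, 400)
X_test, y_test = rng.normal(size=(100, 2048)), rng.integers(0, 4, 100)

classifiers = {
    "Softmax": LogisticRegression(max_iter=1000),  # multinomial logistic regression ~ softmax layer
    "Linear SVM": SVC(kernel="linear"),
    "Quadratic SVM": SVC(kernel="poly", degree=2),
    "Bayesian": GaussianNB(),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    print(name,
          f"precision={precision_score(y_test, y_pred, average='macro', zero_division=0):.3f}",
          f"recall={recall_score(y_test, y_pred, average='macro', zero_division=0):.3f}",
          f"accuracy={accuracy_score(y_test, y_pred):.3f}")
```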
| Methods | Accuracy |
|---|---|
| Logistic and ridge regression [17] | 83.3% |
| CNN-based framework [18] | 85.69% |
| Various classifiers applied to multiple handcrafted features [13] | 93.64% |
| Modified VGGNet model on preprocessed images [20] | 86.6% |
| Images divided into multiple regions, then a modified DenseNet model applied [22] | 94.03% |
| Optimization of features extracted from two pre-trained models [28] | 96.43% |
| Contrast enhancement approach combined with MobileNet-V2 [29] | 96.40% |
| Attention-based classification with best-feature selection [3] | 98.07% |
| Proposed Model | 98.32% |