Deep-Learning-Based Automated Identification and Visualization of Oral Cancer in Optical Coherence Tomography Images
Abstract
1. Introduction
2. Materials and Methods
2.1. Sample Preparation and Data Acquisition
2.2. Establishment of the Data Set
2.3. CNN Architecture
2.4. Training and Classification
2.5. Evaluation Indicators
2.6. Visualization
3. Results
3.1. Identification Using CNN Alone
3.2. Identification Using CNN + ML
3.3. Performance Evaluation of Two Strategies
3.4. Predictive Visualization
3.5. Grad-CAM Visualization
4. Discussion
- (1) According to the input-size requirements of the networks, the ROI size was set to 256 × 256 pixels, which speeds up network training compared with feeding in the whole image.
- (2) Each ROI must contain the characteristic structures of oral tissue; for normal tissue, for example, both the epithelium and the lamina propria must be included. A size of 256 × 256 pixels not only captures these tissue features but also effectively reduces interference from the background area.
- (3) To make full use of the information in each image, ROIs were selected in an overlapping manner (see the sketch after this list).
- (4) Regions with poor image quality, such as out-of-focus areas caused by large fluctuations of the tissue surface, were discarded.
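For illustration, the following is a minimal sketch of overlapping ROI extraction with a simple quality filter. The 256 × 256 pixel window follows the size stated above, whereas the stride and the standard-deviation threshold are hypothetical values chosen for the example and are not taken from this study.

```python
import numpy as np

def extract_rois(bscan, roi_size=256, stride=128, min_std=5.0):
    """Slide a roi_size x roi_size window over a grayscale OCT B-scan.

    Overlap is controlled by `stride` (stride < roi_size gives overlapping ROIs).
    Patches whose intensity standard deviation falls below `min_std` are treated
    as background or low-quality regions and skipped (illustrative heuristic only).
    """
    h, w = bscan.shape
    rois = []
    for y in range(0, h - roi_size + 1, stride):
        for x in range(0, w - roi_size + 1, stride):
            patch = bscan[y:y + roi_size, x:x + roi_size]
            if patch.std() < min_std:  # mostly empty or featureless area
                continue
            rois.append(patch)
    return rois

# Example: a synthetic 500 x 1000 B-scan yields several overlapping 256 x 256 ROIs.
dummy = np.random.randint(0, 255, (500, 1000), dtype=np.uint8)
print(len(extract_rois(dummy)), "ROIs extracted")
```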
5. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Dataset | Normal * | LEH | OSCC | Total |
|---|---|---|---|---|
| Patients' number | - | 5 | 14 | 19 |
| Age (median [range]) | - | 62 (37–73) | 60 (29–69) | |
| Gender (male/female) | - | 3/2 | 7/7 | 10/9 |
| Training set | | | | |
| Patients' number | - | 3 | 10 | 13 |
| OCT images | 2151 | 3639 | 3947 | 9737 |
| Test set | | | | |
| Patients' number | - | 2 | 4 | 6 |
| OCT images | 1043 | 1601 | 1418 | 4062 |
| Parameter | NM | LEH | OSCC |
|---|---|---|---|
| Precision (%) | 87.8 | 94.0 | 94.5 |
| Sensitivity (%) | 90.7 | 99.5 | 86.0 |
| Specificity (%) | 95.6 | 95.8 | 97.3 |
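The precision, sensitivity, and specificity reported above are the standard one-vs-rest quantities derived from the test-set confusion matrix. The following minimal sketch computes them for a toy three-class confusion matrix with classes ordered NM, LEH, OSCC; the counts are illustrative only and are not this study's results.

```python
import numpy as np

# Toy confusion matrix (rows: true class, columns: predicted class); made-up counts.
cm = np.array([[50,  2,  3],
               [ 1, 60,  4],
               [ 5,  2, 70]])

classes = ["NM", "LEH", "OSCC"]
total = cm.sum()
for i, name in enumerate(classes):
    tp = cm[i, i]
    fp = cm[:, i].sum() - tp      # other classes predicted as class i
    fn = cm[i, :].sum() - tp      # class i samples predicted as another class
    tn = total - tp - fp - fn
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)  # recall
    specificity = tn / (tn + fp)
    print(f"{name}: precision={precision:.1%}, "
          f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}")
```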
| Model | Classifier | LeNet-5 | VGG16 | ResNet18 |
|---|---|---|---|---|
| CNN alone | - | 96.76 | 91.94 | 90.43 |
| CNN + ML | DT | 87.23 | 89.42 | 90.51 |
| CNN + ML | RF | 91.53 | 90.52 | 90.01 |
| CNN + ML | SVM | 92.52 | 91.33 | 89.51 |
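For context, "CNN + ML" denotes the generic strategy of using a trained CNN as a fixed feature extractor and training a classical classifier on the extracted features. The sketch below illustrates this with a pretrained ResNet18 backbone and an SVM; the image tensors, labels, and hyperparameters are placeholders and do not reproduce the authors' pipeline. DT and RF classifiers can be substituted for the SVM in the same way.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# Load an ImageNet-pretrained ResNet18 and drop its final fully connected layer,
# leaving a backbone that maps (N, 3, 224, 224) images to (N, 512, 1, 1) features.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])
feature_extractor.eval()

def extract_features(images):
    """images: tensor of shape (N, 3, 224, 224); returns (N, 512) numpy features."""
    with torch.no_grad():
        feats = feature_extractor(images).flatten(1)
    return feats.numpy()

# Placeholder tensors standing in for preprocessed ROI batches and their labels.
train_images, train_labels = torch.randn(16, 3, 224, 224), torch.randint(0, 3, (16,))
test_images, test_labels = torch.randn(8, 3, 224, 224), torch.randint(0, 3, (8,))

clf = SVC(kernel="rbf")  # a DecisionTreeClassifier or RandomForestClassifier also fits here
clf.fit(extract_features(train_images), train_labels.numpy())
print("toy accuracy:", clf.score(extract_features(test_images), test_labels.numpy()))
```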
| Model | LeNet-5 | VGG16 | ResNet18 |
|---|---|---|---|
| CNN alone | | | |
| Each epoch/s | 228 | 2891 | 1618 |
| Convergence/s | 9120 | 115,640 | 64,720 |
| CNN + ML | | | |
| Feature extraction/s | 86 | 710 | 481 |
| DT/s | 0.57 | 7.12 | 0.88 |
| RF/s | 0.27 | 1.25 | 0.29 |
| SVM/s | 15 | 22 | 1.56 |