Role of Artificial Intelligence in the Early Diagnosis of Oral Cancer: A Scoping Review
Simple Summary
Abstract
1. Introduction
- Supervised learning: the training process is based on labeled data, using an external standard known as the “ground truth”.
- Unsupervised learning: the algorithm analyzes unlabeled data to identify hidden structures. In this case, the algorithm itself seeks to detect patterns in the data for learning, since the system lacks prior labeled data or expected results.
- Reinforcement learning: in this case, the software’s actions receive positive and/or negative reinforcement within a dynamic environment.
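The distinction between the first two paradigms can be sketched in code. The following is a minimal illustrative example, not a method from any study in this review: the toy 1-D data, the nearest-centroid classifier, and the two-means clustering loop are all assumptions chosen for brevity.

```python
# Supervised learning: labeled data ("ground truth") -> learn one centroid per class.
labeled = [(1.0, "benign"), (1.2, "benign"), (4.8, "malignant"), (5.1, "malignant")]
centroids = {}
for label in {"benign", "malignant"}:
    values = [x for x, y in labeled if y == label]
    centroids[label] = sum(values) / len(values)

def predict(x):
    # Assign the label of the nearest learned class centroid.
    return min(centroids, key=lambda label: abs(centroids[label] - x))

# Unsupervised learning: the same points WITHOUT labels -> discover two groups.
unlabeled = [x for x, _ in labeled]
a, b = min(unlabeled), max(unlabeled)   # initial cluster centers
for _ in range(10):                     # simple two-means iterations
    g1 = [x for x in unlabeled if abs(x - a) <= abs(x - b)]
    g2 = [x for x in unlabeled if abs(x - a) > abs(x - b)]
    a, b = sum(g1) / len(g1), sum(g2) / len(g2)
```

The supervised model can only learn because each training point carries a label; the clustering loop recovers a similar two-group structure from the raw values alone.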
2. Materials and Methods
2.1. Protocol and Registration
2.2. Search Strategy
2.3. Eligibility Criteria: Inclusion Criteria
2.4. Exclusion Criteria
2.5. Data Items
2.6. Critical Analysis and Evidence Synthesis
- Q1. In relation to telemedicine (teledentistry or telehealth):
- Q1a. Is there agreement in the diagnosis of oral lesions between the practitioner and experts in Oral Medicine or Oral Cancer?
- Q1b. Would images received by mobile phone (telemedicine) and classified through a neural network corroborate the diagnosis of OPMD and oral cancer?
- Q2. Would the classification of photographic images submitted to AI allow the discrimination of OPMD and oral cancer?
- Q3. Does the application of light-based detection on the lesion improve the AI classification of lesions for decision-making in the diagnosis of OPMD and oral cancer?
- Q4. Does exfoliative cytology offer information for the screening of patients at risk of oral cancer?
- Q5. Do patients’ demographic variables, toxic habits, and clinical parameters, when introduced into AI classification models, provide predictive values for oral cancer?
3. Results
3.1. Selection of Resources/Search Results
3.2. Description of Studies
3.2.1. Mobile Phone Technologies
3.2.2. Medical Imaging Techniques
3.2.3. Fluorescence Imaging
3.2.4. Exfoliative Cytology
3.2.5. Predictor Variables of Datasets
3.3. Artificial Intelligence (AI) Methods Used in Selected Studies
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Conflicts of Interest
References
- International Agency for Research on Cancer 2020. GLOBOCAN 2020. Available online: https://gco.iarc.fr/today/online-analysis-table? (accessed on 15 August 2021).
- Chi, A.C.; Day, T.A.; Neville, B.W. Oral cavity and oropharyngeal squamous cell carcinoma—An update. CA Cancer J. Clin. 2015, 65, 401–421.
- Marur, S.; Forastiere, A.A. Head and neck cancer: Changing epidemiology, diagnosis, and treatment. Mayo Clin. Proc. 2008, 83, 489–501.
- Liao, D.Z.; Schlecht, N.F.; Rosenblatt, G.; Kinkhabwala, C.M.; Leonard, J.A.; Ference, R.S.; Prystowsky, M.; Ow, J.O.; Schiff, B.A.; Smith, R.V.; et al. Association of delayed time to treatment initiation with overall survival and recurrence among patients with head and neck squamous cell carcinoma in an underserved urban population. JAMA Otolaryngol. Head Neck Surg. 2019, 145, 1001–1019.
- van der Waal, I.; de Bree, R.; Brakenhoff, R.; Coebergh, J.W. Early diagnosis in primary oral cancer: Is it possible? Med. Oral Patol. Oral Cir. Bucal. 2011, 16, e300–e305.
- Bagan, J.; Sarrion, G.; Jimenez, Y. Oral cancer: Clinical features. Oral Oncol. 2010, 46, 414–417.
- Seoane, J.; Takkouche, B.; Varela-Centelles, P.; Tomás, I.; Seoane-Romero, J.M. Impact of delay in diagnosis on survival to head and neck carcinomas: A systematic review with meta-analysis. Clin. Otolaryngol. 2012, 37, 99–106.
- Gigliotti, J.; Madathil, S.; Makhoul, N. Delays in oral cavity cancer. Int. J. Oral Maxillofac. Surg. 2019, 48, 1131–1137.
- Warnakulasuriya, S.; Kujan, O.; Aguirre-Urizar, J.M.; Bagan, J.V.; González-Moles, M.Á.; Kerr, A.R.; Lodi, G.; Mello, F.W.; Monteiro, L.; Ogden, G.R.; et al. Oral potentially malignant disorders: A consensus report from an international seminar on nomenclature and classification, convened by the WHO Collaborating Centre for Oral Cancer. Oral Dis. 2020, 31.
- Morikawa, T.; Shibahara, T.; Nomura, T.; Katakura, A.; Takano, M. Non-Invasive Early Detection of Oral Cancers Using Fluorescence Visualization with Optical Instruments. Cancers 2020, 12, 2771.
- Simonato, L.E.; Tomo, S.; Navarro, R.S.; Villaverde, A.G.J.B. Fluorescence visualization improves the detection of oral, potentially malignant, disorders in population screening. Photodiagn. Photodyn. Ther. 2019, 27, 74–78.
- Tomo, S.; Miyahara, G.I.; Simonato, L.E. History and future perspectives for the use of fluorescence visualization to detect oral squamous cell carcinoma and oral potentially malignant disorders. Photodiagn. Photodyn. Ther. 2019, 28, 308–317.
- Farah, C.S.; McIntosh, L.; Georgiou, A.; McCullough, M.J. Efficacy of tissue autofluorescence imaging (VELscope) in the visualization of oral mucosal lesions. Head Neck 2011, 34, 856–862.
- Mehrotra, R.; Singh, M.; Thomas, S.; Nair, P.; Pandya, S.; Nigam, N.S.; Shukla, P. A cross-sectional study evaluating chemiluminescence and autofluorescence in the detection of clinically innocuous precancerous and cancerous oral lesions. J. Am. Dent. Assoc. 2010, 141, 151–156.
- Ilhan, B.; Lin, K.; Guneri, P.; Wilder-Smith, P. Improving Oral Cancer Outcomes with Imaging and Artificial Intelligence. J. Dent. Res. 2020, 99, 241–248.
- Sidey-Gibbons, J.A.M.; Sidey-Gibbons, C.J. Machine learning in medicine: A practical introduction. BMC Med. Res. Methodol. 2019, 19, 64.
- Rajkomar, A.; Dean, J.; Kohane, I. Machine learning in medicine. N. Engl. J. Med. 2019, 380, 1347–1358.
- Cuocolo, R.; Caruso, M.; Perillo, T.; Ugga, L.; Petretta, M. Machine learning in oncology: A clinical appraisal. Cancer Lett. 2020, 481, 55–62.
- Cuocolo, R.; Ugga, L. Imaging applications of artificial intelligence. Health Manag. 2018, 18, 484–487.
- Shimizu, H.; Nakayama, K.I. Artificial intelligence in oncology. Cancer Sci. 2020, 111, 1452–1460.
- Kim, D.W.; Lee, S.; Kwon, S.; Nam, W.; Cha, I.H.; Kim, H.J. Deep learning-based survival prediction of oral cancer patients. Sci. Rep. 2019, 9, 6994.
- Ilhan, B.; Guneri, P.; Wilder-Smith, P. The contribution of artificial intelligence to reducing the diagnostic delay in oral cancer. Oral Oncol. 2021, 116, 105254.
- Mahmood, H.; Shaban, M.; Rajpoot, N.; Khurram, S.A. Artificial intelligence-based methods in head and neck cancer diagnosis: An overview. Br. J. Cancer 2021, 124, 1934–1940.
- Mahmood, H.; Shaban, M.; Indave, B.I.; Santos-Silva, A.R.; Rajpoot, N.; Khurram, S.A. Use of artificial intelligence in diagnosis of head and neck precancerous and cancerous lesions: A systematic review. Oral Oncol. 2020, 110, 104885.
- Kar, A.; Wreesmann, V.B.; Shwetha, V.; Thakur, S.; Rao, V.U.S.; Arakeri, G.; Brennan, P.A. Improvement of oral cancer screening quality and reach: The promise of artificial intelligence. J. Oral Pathol. Med. 2020, 49, 727–730.
- Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.J.; Horsley, T.; Weeks, L.; et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann. Intern. Med. 2018, 169, 467–473.
- Aubreville, M.; Knipfer, C.; Oetter, N.; Jaremenko, C.; Rodner, E.; Denzler, J.; Bohr, C.; Neumann, H.; Stelzle, F.; Maier, A. Automatic Classification of Cancerous Tissue in Laserendomicroscopy Images of the Oral Cavity using Deep Learning. Sci. Rep. 2017, 7, 11979.
- Awais, M.; Ghayvat, H.; Pandarathodiyil, A.K.; Ghani, W.M.N.; Ramanathan, A.; Pandya, S.; Walter, N.; Saad, M.N.; Zain, R.B.; Faye, I. Healthcare Professional in the Loop (HPIL): Classification of Standard and Oral Cancer-Causing Anomalous Regions of Oral Cavity Using Textural Analysis Technique in Autofluorescence Imaging. Sensors 2020, 20, 5780.
- Banerjee, S.; Karri, S.P.; Chatterjee, P.; Pal, M.; Paul, R.R.; Chatterjee, J. Multimodal Diagnostic Segregation of Oral Leukoplakia and Cancer. In International Conference on Systems in Medicine and Biology (ICSMB); Indian Institute of Technology: Kharagpur, India, 2016; pp. 67–70.
- Birur, P.N.; Sunny, S.P.; Jena, S.; Kandasarma, U.; Raghavan, S.; Ramaswamy, B.; Shanmugam, S.P.; Patrick, S.; Kuriakose, R.; Mallaiah, J.; et al. Mobile health application for remote oral cancer surveillance. J. Am. Dent. Assoc. 2015, 146, 886–894.
- Bourass, Y.; Zouaki, H.; Bahri, A. Computer-aided diagnostics of facial and oral cancer. In Proceedings of the 3rd IEEE World Conference on Complex Systems (WCCS), Marrakech, Morocco, 23–25 November 2015.
- Chan, C.H.; Huang, T.T.; Chen, C.Y.; Lee, C.C.; Chan, M.Y.; Chung, P.C. Texture-Map-Based Branch-Collaborative Network for Oral Cancer Detection. IEEE Trans. Biomed. Circuits Syst. 2019, 13, 766–780.
- de Veld, D.C.; Skurichina, M.; Witjes, M.J.; Duin, R.P.; Sterenborg, H.J.; Roodenburg, J.L. Clinical study for classification of benign, dysplastic, and malignant oral lesions using autofluorescence spectroscopy. J. Biomed. Opt. 2004, 9, 940–950.
- Dey, S.; Sarkar, R.; Chatterjee, K.; Datta, P.; Barui, A.; Maity, S.P. Pre-cancer risk assessment in habitual smokers from DIC images of oral exfoliative cells using active contour and SVM analysis. Tissue Cell 2017, 49 Pt 2, 296–306.
- Fu, Q.; Chen, Y.; Li, Z.; Jing, Q.; Hu, C.; Liu, H.; Bao, J.; Hong, Y.; Shi, T.; Li, K.; et al. A deep learning algorithm for detection of oral cavity squamous cell carcinoma from photographic images: A retrospective study. EClinicalMedicine 2020, 27, 100558.
- Halicek, M.; Little, T.; Wang, X.; Patel, M.; Griffith, C.; El-Deiry, M.; Chen, A.; Fei, B. Deep convolutional neural networks for classifying head and neck cancer using hyperspectral imaging. J. Biomed. Opt. 2017, 22, 60503.
- Haron, N.; Zain, R.B.; Nabillah, W.M.; Saleh, A.; Kallarakkal, T.G.; Ramanathan, A.; Sinon, S.H.M.; Razak, I.A.; Cheong, S.C. Mobile phone imaging in low resource settings for early detection of oral cancer and concordance with clinical oral examination. Telemed. J. E Health 2017, 23, 192–199.
- Heintzelman, D.L.; Utzinger, U.U.; Fuchs, H.; Zuluaga, A.; Gossage, K.; Gillenwater, A.M.; Jacob, R.; Kemp, B.; Richards-Kortum, R.R. Optimal excitation wavelengths for in vivo detection of oral neoplasia using fluorescence spectroscopy. Photochem. Photobiol. 2000, 72, 103–113.
- Jeyaraj, P.R.; Nadar, E.R.S. Computer-assisted medical image classification for early diagnosis of oral cancer employing deep learning algorithm. J. Cancer Res. Clin. Oncol. 2019, 145, 829–837.
- Jurczyszyn, K.; Gedrange, T.; Kozakiewicz, M. Theoretical Background to Automated Diagnosing of Oral Leukoplakia: A Preliminary Report. J. Healthc. Eng. 2020, 2020, 8831161.
- Jurczyszyn, K.; Kozakiewicz, M. Differential diagnosis of leukoplakia versus lichen planus of the oral mucosa based on digital texture analysis in intraoral photography. Adv. Clin. Exp. Med. 2019, 28, 1469–1476.
- Kareem, S.A.; Pozos-Parra, P.; Wilson, N. An application of belief merging for the diagnosis of oral cancer. Appl. Soft Comput. 2017, 61, 1105–1112.
- Liu, Y.; Li, Y.; Fu, Y.; Liu, T.; Liu, X.; Zhang, X.; Fu, J.; Guan, X.; Chen, T.; Chen, X.; et al. Quantitative prediction of oral cancer risk in patients with oral leukoplakia. Oncotarget 2017, 8, 46057–46064.
- Majumder, S.K.; Ghosh, N.; Gupta, P.K. Relevance vector machine for optical diagnosis of cancer. Lasers Surg. Med. 2005, 36, 323–333.
- McRae, M.P.; Kerr, A.R.; Janal, M.N.; Thornhill, M.H.; Redding, S.W.; Vigneswaran, N.; Kang, S.K.; Niederman, R.; Christodoulides, N.J.; Trochesset, D.A.; et al. Nuclear F-actin Cytology in Oral Epithelial Dysplasia and Oral Squamous Cell Carcinoma. J. Dent. Res. 2021, 100, 479–486.
- McRae, M.P.; Modak, S.S.; Simmons, G.W.; Trochesset, D.A.; Kerr, A.R.; Thornhill, M.H.; Redding, S.W.; Vigneswaran, N.; Kang, S.K.; et al. Point-of-care oral cytology tool for the screening and assessment of potentially malignant oral lesions. Cancer Cytopathol. 2020, 128, 207–220.
- Mohd, F.; Noor, N.M.M.; Bakar, Z.A.; Rajion, Z.A. Analysis of oral cancer prediction using features selection with machine learning. In Proceedings of the 7th International Conference on Information Technology (ICIT), Amman, Jordan, 12–15 May 2015; pp. 283–288.
- Rosma, M.D.; Sameemii, A.K.; Basir, A.; Mazlipahiv, I.S.; Norzaidi, M.D. The use of artificial intelligence to identify people at risk of oral cancer: Empirical evidence in Malaysian University. Int. J. Sci. Res. Edu. 2010, 3, 10–20.
- Sarkar, R.; Dey, S.; Pal, M.; Paul, R.R.; Chatterjee, J.; RoyChaudhuri, C.; Barui, A. Risk prediction for oral potentially malignant disorders using fuzzy analysis of cytomorphological and autofluorescence alterations in habitual smokers. Future Oncol. 2017, 13, 499–511.
- Shamim, M.Z.M.; Syed, S.; Shiblee, M.; Usman, M.; Ali, S. Automated detection of oral pre-cancerous tongue lesions using deep learning for early diagnosis of oral cavity cancer. arXiv 2019, arXiv:1909.0898.
- Sharma, N.; Om, H. Usage of Probabilistic and General Regression Neural Network for Early Detection and Prevention of Oral Cancer. Sci. World J. 2015, 234191.
- Song, S.; Sunny, S.; Uthoff, R.D.; Patrick, S.; Suresh, A.; Kolur, T.; Kerrthi, G.; Anbarani, A.; Wilder-Smith, P.; Kuriakose, M.; et al. Automatic classification of dual-modality, smartphone-based oral dysplasia and malignancy images using deep learning. Biomed. Opt. Express 2018, 9, 5318–5329.
- Spyridonos, P.; Gaitanis, G.; Bassukas, I.D.; Tzaphlidou, M. Evaluation of vermillion border descriptors and relevance vector machines discrimination model for making probabilistic predictions of solar cheilosis on digital lip photographs. Comput. Biol. Med. 2015, 63, 11–18.
- Sunny, S.; Baby, A.; James, B.L.; Balaji, D.; Aparna, N.V.; Rana, M.H.; Gurpur, P.; Skandarajah, A.; D’Ambrosio, M.; Ramanjinappa, R.D.; et al. A smart tele-cytology point-of-care platform for oral cancer screening. PLoS ONE 2019, 14, e0224885.
- Tetarbe, A.; Choudhury, T.; Teik, T.; Rawat, S. Oral cancer detection using data mining tool. In Proceedings of the 3rd International Conference on Applied and Theoretical Computing and Communication Technology (iCATccT), Tumkur, India, 21–23 December 2017; pp. 35–39.
- Thomas, B.; Kumar, V.; Saini, S. Texture analysis based segmentation and classification of oral cancer lesions in color images using ANN. In Proceedings of the IEEE International Conference on Signal Processing, Computing and Control (ISPCC), Solan, India, 26–28 September 2013; pp. 1–5.
- Uthoff, R.D.; Song, B.; Birur, P.; Kuriakose, M.A.; Sunny, S.; Suresh, A.; Patrick, S.; Anbarani, A.; Spires, O.; Wilder-Smith, P.; et al. Development of a dual modality, dual-view smartphone-based imaging system for oral cancer detection. In Proceedings of the SPIE 10486, Design and Quality for Biomedical Technologies, San Francisco, CA, USA, 27–28 January 2018.
- van Staveren, H.J.; van Veen, R.L.; Speelman, O.C.; Witjes, M.J.; Star, W.M.; Roodenburg, J.L. Classification of clinical autofluorescence spectra of oral leukoplakia using an artificial neural network: A pilot study. Oral Oncol. 2000, 36, 286–293.
- Wang, C.Y.; Tsai, T.; Chen, H.M.; Chen, C.T.; Chiang, C.P. PLS-ANN based classification model for oral submucous fibrosis and oral carcinogenesis. Lasers Surg. Med. 2003, 32, 318–326.
- Wang, X.; Yang, J.; Wei, C.; Zhou, G.; Wu, L.; Gao, Q.; He, X.; Shi, J.; Mei, Y.; Liu, Y.; et al. A personalized computational model predicts cancer risk level of oral potentially malignant disorders and its web application for promotion of non-invasive screening. J. Oral Pathol. Med. 2020, 49, 417–426.
- Welikala, R.A.; Remagnino, P.; Lim, J.H.; Chang, H.S.; Kallarankal, H.C.; Zain, R.B.; Jayasinghe, R.D.; Rimal, J.; Kerr, A.R.; Amtha, R.; et al. Automated Detection and Classification of Oral Lesions Using Deep Learning for Early Detection of Oral Cancer. IEEE Access 2020, 8, 132677–132693.
- Wieslander, H.; Forslid, G.; Bengtsson, E.; Wählby, C.; Hirsch, J.M.; Stark, C.R.; Sadanandan, S.K. Deep Convolutional Neural Networks for Detecting Cellular Changes Due to Malignancy. In Proceedings of the 16th IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 82–89.
- Bray, F.; Ferlay, J.; Soerjomataram, I.; Siegel, R.; Torre, L.; Jemal, A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2018, 68, 394–424.
- Perdoncini, N.N.; Schussel, J.L.; Amenábar, J.; Torres-Pereira, C.C. Use of smartphone video calls in the diagnosis of oral lesions: Teleconsultations between a specialist and patients assisted by a general dentist. J. Am. Dent. Assoc. 2021, 152, 127–135.
- Flores, A.P.D.C.; Lazaro, S.A.; Molina-Bastos, C.G.; Guattini, V.L.O.; Umpierre, R.N.; Gonçalves, M.R.; Carrard, V.C. Teledentistry in the diagnosis of oral lesions: A systematic review of the literature. J. Am. Med. Inform. Assoc. 2020, 27, 1166–1172.
- Hosny, K.M.; Kassem, M.A.; Fouad, M.M. Classification of Skin Lesions into Seven Classes Using Transfer Learning with AlexNet. J. Digit. Imaging 2020, 33, 1325–1334.
- Alsaade, F.W.; Aldhyani, T.H.H.; Al-Adhaileh, M.H. Developing a Recognition System for Diagnosing Melanoma Skin Lesions Using Artificial Intelligence Algorithms. Comput. Math. Methods Med. 2021, 15, 9998379.
- Aggarwal, P.; Papay, F.A. Artificial intelligence image recognition of melanoma and basal cell carcinoma in racially diverse populations. J. Dermatolog. Treat. 2021, 30, 1–6.
- Kim, D.H.; Kim, S.W.; Lee, J.; Hwang, S.H. Narrow-band imaging for screening of oral premalignant or cancerous lesions: A systematic review and meta-analysis. Clin. Otolaryngol. 2021, 46, 501–507.
- Mascharak, S.; Baird, B.J.; Holsinger, F.C. Detecting oropharyngeal carcinoma using multispectral, narrow-band imaging and machine learning. Laryngoscope 2018, 128, 2514–2520.
- Tamashiro, A.; Yoshio, T.; Ishiyama, A.; Tsuchida, T.; Hijikata, K.; Yoshimizu, S.; Horiuchi, Y.; Hirasawa, T.; Seto, A.; Sasaki, T.; et al. Artificial intelligence-based detection of pharyngeal cancer using convolutional neural networks. Dig. Endosc. 2020, 32, 1057–1065.
- Paderno, A.; Piazza, C.; Del Bon, F.; Lancini, D.; Tanagli, S.; Deganello, A.; Peretti, G.; De Momi, E.; Patrini, I.; Ruperti, M.; et al. Deep Learning for Automatic Segmentation of Oral and Oropharyngeal Cancer Using Narrow Band Imaging: Preliminary Experience in a Clinical Perspective. Front. Oncol. 2021, 24, 626602.
- Tiwari, L.; Kujan, O.; Farah, C.S. Optical fluorescence imaging in oral cancer and potentially malignant disorders: A systematic review. Oral Dis. 2020, 26, 491–510.
- Lima, I.F.P.; Brand, L.M.; de Figueiredo, J.A.P.; Steier, L.; Lamers, M.L. Use of autofluorescence and fluorescent probes as a potential diagnostic tool for oral cancer: A systematic review. Photodiagn. Photodyn. Ther. 2021, 33, 102073.
- Walsh, T.; Macey, R.; Kerr, A.R.; Lingen, M.W.; Ogden, G.R.; Warnakulasuriya, S. Diagnostic tests for oral cancer and potentially malignant disorders in patients presenting with clinically evident lesions. Cochrane Database Syst. Rev. 2021, 20, CD010276.
- Rivera, S.C.; Liu, X.; Chan, A.W.; Denniston, A.K.; Calvert, M.J.; SPIRIT-AI and CONSORT-AI Working Group. Guidelines for clinical trial protocols for interventions involving artificial intelligence: The SPIRIT-AI extension. Lancet Digit. Health 2020, 2, e549–e560.
- Collins, G.S.; Moons, K.G.M. Reporting of artificial intelligence prediction models. Lancet 2019, 393, 20.
Term | Interpretation |
---|---|
Artificial intelligence (AI) | A field of science and engineering concerned with developing machines that can learn from data in order to solve problems. |
Machine learning (ML) | A subfield of AI in which algorithms are trained to perform tasks by learning patterns from data, rather than by explicit programming. |
Deep learning (DL) | A subset of machine learning. DL constructs a neural network that automatically identifies patterns to improve feature detection, extracting features from successive layers of filters. |
Neural network | A set of algorithms that compute signals via artificial neurons, connected in networks loosely modeled on the human brain. |
Probabilistic systems | Incorporate the rates of diseases in a population and the likelihood of various clinical findings in order to calculate the most likely explanation for a particular clinical case. |
Supervised learning | A model is built from labeled training data, using a known external standard called the “ground truth”. |
Unsupervised learning | A model is built by learning common features from an unlabeled set of training data; the algorithm itself seeks patterns, since the system lacks prior labels or expected results. |
True positive | An abnormal lesion is correctly categorized as abnormal. |
True negative | A normal lesion is correctly categorized as normal. |
False positive | A normal lesion is wrongly categorized as abnormal. |
False negative | An abnormal lesion is wrongly categorized as normal. |
Accuracy | The proportion of correctly predicted results among all samples. With a test accuracy of 0.90, the model correctly classified 90% of samples. |
Sensitivity (recall) | The proportion of truly positive cases identified by the model: true positives divided by all truly positive samples. |
Specificity | The proportion of truly negative cases identified by the model: true negatives divided by all truly negative samples. |
Precision (positive predictive value) | The proportion of cases selected by the model that are truly positive: true positives divided by all samples predicted as positive. |
F1 score | The harmonic mean of precision and recall. |
Receiver operating characteristic (ROC) | A curve plotting the true positive rate against the false positive rate, used to estimate the predictive ability of a model. |
Training | The data subset used to generate (fit) a model. |
Validation | The data subset used to estimate the model’s effectiveness or prediction error. |
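The diagnostic-performance metrics defined above can be sketched as follows. This is a minimal illustration from confusion-matrix counts; the example counts are illustrative assumptions, not results from any study in this review.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Compute common diagnostic metrics from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # correct predictions among all samples
    sensitivity = tp / (tp + fn)                 # recall: positives found among all truly positive
    specificity = tn / (tn + fp)                 # negatives found among all truly negative
    precision = tp / (tp + fp)                   # positive predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)  # harmonic mean
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision, "f1": f1}

# Hypothetical screening result: 85 true positives, 5 false negatives,
# 80 true negatives, 10 false positives.
m = diagnostic_metrics(tp=85, fp=10, tn=80, fn=5)
```

Note that the F1 score simplifies to 2·TP / (2·TP + FP + FN), which is why it ignores true negatives, unlike accuracy.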
Authors, Year, Country, Reference 1 | Aim | Method. Classifier | Sample | Outcomes: Diagnostic Performance (%) |
---|---|---|---|---|
Birur et al., 2015, India [30] | To determine the effectiveness of a mobile phone–based surveillance program (Oncogrid) connecting primary care dental practitioners and frontline health workers (FHW) with oral cancer specialists for oral cancer screening. | The specialist reviewed each image and judged it as interpretable or not interpretable. The interpretable images were clinically stratified as non-neoplastic, potentially malignant, or malignant. Oncogrid mobile phone network. Sana platform (computer & AI) | Oral cancer specialist. Targeted screening, FHW (n = 4): 2000 patients. Opportunistic screening, dentists: 1440 patients | Concordance with dentists: 100; positive predictive value: 100. Concordance with FHW: positive predictive value: 45 |
Haron et al., 2017, Malaysia [37] | To examine the concordance in clinical diagnosis of OPMD and referral decisions between dentists and oral medicine specialists (OMS) | Mobile: 3 types of phones with different cameras. Dentists (n = 3); oral medicine specialists (OMS) (n = 2) | OPMD: 8; non-OPMD or normal: 8 | Concordance between OMS: Presence of lesion: sensitivity: 70, specificity: 100. Category of lesion: sensitivity: 75, specificity: 100. Referral decision: sensitivity: 81, specificity: 100 |
Song et al., 2018, India [52] | To screen high-risk populations for oral cancer using a smartphone-based intraoral dual-modality autofluorescence imaging platform and classification of the images obtained; in addition, to compare the performance of different CNNs and transfer learning. | Android smartphone. Luxeon LED to enable autofluorescence imaging (AFI) and white light imaging (WLI). CNN toolbox. Luxeon UV. Transfer learning: VGG-CNN-M; VGG-CNN-S; VGG-CNN-16 | Training/validation: Normal: 66/20; Suspicious 2: 64/20 | Best performance with AFI + WLI: VGG-CNN-M. Accuracy: 86.9; sensitivity: 85.0; specificity: 88.7 |
Uthoff et al., 2018, India [57] | To use the smartphone’s data transmission capabilities to upload images to a cloud server, where a remote specialist can access them and make a diagnosis; furthermore, to classify images as suspicious or non-suspicious. | Android smartphone. Luxeon LED with AFI and WLI. CNN: VGG-CNN-M | Suspicious 3 vs. non-suspicious. Normal: 33; suspicious: 60; OSCC: 6. CNN: normal: 86/suspicious: 84 | Remote specialist/CNN: Sensitivity: 92.59/85.00; specificity: 86.67/88.75 |
Welikala et al., 2020, United Kingdom [61] | To detect and classify oral lesions as low risk and high risk, first in a collection phase with bounding-box annotations from clinicians, and then by deep learning classification. | Mobile Mouth Screening Anywhere (MeMoSA). Classification: ResNet-101 CNN. Detection: Faster R-CNN | 2155 images captured by the MeMoSA app (normal, benign, OPMD, malignant). Clinicians: 3–7 experts. Training: 1744 (backpropagation/stochastic gradient). Validation: 207. Testing: 204 | Identification of images containing a lesion (test): precision: 84.77; recall: 89.51; F1 score: 87.07. Identification of images requiring referral (test): precision: 67.1; recall: 93.8; F1 score: 78.3. Referral—low-risk OPMD/cancer or high-risk OPMD (test): precision: 26.4/14.7; recall: 43.9/56.0; F1 score: 33/23.3 |
Authors, Year, Country, Reference | Aims of Study | Method. Classifier | Sample | Outcomes: Diagnostic Performance (%) |
---|---|---|---|---|
Bourass et al., 2015, Morocco [31] | To develop a computer-aided diagnostic system that aims at providing a classification of suspicious regions using content-based image retrieval (CBIR). | SURF: Speeded-Up Robust Features. Hierarchical SVM vs. RGB histogram | Facial & oral cancer database: 4160 images. | Hierarchical SVM model with feedback, precision: 82 |
Chan et al., 2019, Taiwan [32] | To develop a texture-map-based branch-collaborative network model to allow detection of cancerous regions and marking of the ROI | SMOTE. Texture-map-based branch network. Wavelet transformation. Gabor filtering. Fully Convolutional Network (FCN). Feature Pyramid Network (FPN) | (Training/validation)/testing: Cancer: 25/5; normal: 45/5 | Branch network/Gabor filter (ROI): Sensitivity: 0.9687/0.9314; specificity: 0.7129/0.9475 |
Fu et al., 2020, China [35] | To develop a rapid, non-invasive and easy-to-use DL approach to identifying OSCC using photographic images. | CNN | Training/internal validation: n = 1469 images from hospitals; n = 420 images from journals. External validation: n = 666 | Algorithm/OC expert/medical student/non-medical student: Accuracy: 92.3/92.4/87.0/77.2; sensitivity: 91.0/91.7/83.1/76.6; specificity: 93.5/93.1/90.7/77.8 |
Jeyaraj & Nadar, 2019, India [39] | To develop a DL algorithm for an automated, computer-aided oral cancer detection system by investigating patient hyperspectral images | Partitioned deep CNN. SVM. Deep belief network | OC vs. benign: partitioned CNN vs. expert oncologist (n = 100 images). OC vs. normal: partitioned CNN vs. expert oncologist (n = 500 images) | Accuracy: 91.4; sensitivity: 94; specificity: 91. Accuracy: 94.5; sensitivity: 94; specificity: 98 |
Jurczyszyn & Kozakiewicz, 2019, Poland [40] | To formulate a differential diagnosis for leukoplakia vs. lichen planus of the oral mucosa based on digital texture analysis in intraoral macrophotography | Probabilistic neural network (PNN). Run-length (short-run emphasis) matrix. Co-occurrence matrix | Oral leukoplakia: 21; oral lichen planus: 21; normal: 21 | Sensitivity: 57; specificity: 74. Sensitivity: 38; specificity: 81. Sensitivity: 94; specificity: 88 |
Jurczyszyn et al., 2020, Poland [41] | To propose an effective texture analysis algorithm for oral leukoplakia diagnosis | PNN. Run-length matrix (short/long). Co-occurrence matrix (entropy/difference entropy). Haar wavelet transformation (energy 5.6) | Oral leukoplakia: 35 | Sensitivity: 100; specificity: 97 |
Shamim et al., 2019, Saudi Arabia [50] | To apply and evaluate the efficacy of six models for identifying pre-cancerous tongue lesions directly from photographic images. | Deep CNN. Transfer learning: AlexNet; GoogLeNet; VGG19; Inceptionv3; ResNet50; SqueezeNet | Training (160 images, 80%); validation (40 images, 20%). Tongue diseases (Internet images). Physician with more than 15 years of clinical practice | Best (benign or precancerous: VGG19)/(4 benign and 1 precancerous: ResNet): Accuracy: 98/97; sensitivity: 89; specificity: 97 |
Spyridonos et al., 2015, Greece [53] | To determine robust macro-morphological descriptors of the vermillion border from non-standardized digital photographs and to exploit a probabilistic model for solar cheilosis recognition | Relevance vector machine | Solar cheilosis: 75; non-solar cheilosis: 75 | Sensitivity: 94.6; specificity: 96 |
Thomas et al., 2017, India [56] | To distinguish between carcinomas of different areas of the oral cavity using selected grey-level features. | Backpropagation artificial NN (to validate): Grey Level Co-occurrence Matrix (GLCM) 1; Grey Level Run-Length Matrix (GLRL) 2; boxplot analysis | Oral cancer vs. normal: Training: n = 12; validation: 4; image sections: 192 | Accuracy: Selected 11 features: 97.9; all 61 features: 91.6; GLCM: 89.5; GLRL: 85.4 |
Authors, Year, Country, Reference | Aim/no. of Predictor Variables | Method. Classifier | Sample | Outcomes: Diagnostic Performance (%) |
---|---|---|---|---|
Aubreville et al., 2017. Germany [27] | To diagnose OSCC using deep learning on confocal laser endomicroscopy (CLE) images | CLE Patch extraction of images CNN RF-LBP; RF-GLCM | OSCC: 12 | Patch extraction (validation) Accuracy: 88.3 Sensitivity: 86.6 Specificity: 90 |
Awais et al., 2020. China [28] | To propose a method for the classification of OML and OPMD based on GLCM texture features to guide biopsy | VELscope (ROI) GLCM LDA K-NN | n = 22 OML, OPMD | Accuracy: 83 Sensitivity: 85 Specificity: 84 |
de Veld et al., 2004. Netherlands [33] | To develop and compare algorithms for lesion classification and to examine the potential for detecting invisible tissue alterations | Xe lamp PCA ANN KLLC | Receiver-operator characteristic areas under the curve (ROC-AUC) Patients: 155 Healthy: 96 | PCA/ANN Accuracy: 96.5/98.3 Sensitivity: 92.9/96.5 Specificity: 100/100 Could not distinguish benign vs. premalignant lesions |
Halicek et al., 2017. United States [36] | To compare automatic labeling of cancer and normal tissue using hyperspectral images for intraoperative cancer detection | Xenon white light CNN SVM, k-NN, LR, DTC, LDA | 37 OSCC | External validation/training CNN Accuracy: 77 ± 21/96.8 Sensitivity: 77 ± 19/96.1 Specificity: 78 ± 16/96.4 |
Heintzelman et al., 2000. United States [38] | To determine optimal excitation–emission wavelength combinations to discriminate normal and precancerous/cancerous tissue, and estimate the performance of algorithms based on fluorescence | Xenon (λ: 472/350/499 nm) PCA | OPMD/malignant: 11/17 | Training//validation Sensitivity: FlS: 90/CI: 100//FlS: 100/CI: 100 Specificity: FlS: 88/CI: 83//FlS: 98/CI: 100 |
Majumder et al., 2005. India [44] | To compare the diagnostic efficacy of the relevance vector machine (RVM) and the support vector machine (SVM) | N2 laser (λ: 337 nm; emission 375–700 nm) (linear vs. RBF kernels) RVM (Bayesian framework) vs. SVM (non-Bayesian) | OSCC: 16 Normal: 13 | RVM (linear/RBF)//SVM (linear/RBF) Training Sensitivity (84/88)//(86/93) Specificity (93/95)//(91/96) Validation Sensitivity (86/91)//(86/93) Specificity (96/95)//(92/95) |
van Staveren et al., 2000. Netherlands [58] | To apply an artificial neural network to autofluorescence spectra for classifying leukoplakia as homogeneous or non-homogeneous | Xe lamp (λ: 420 nm) Fully connected NN | Leukoplakia: 22 Normal: 6 | Abnormal vs. normal//Homogeneous/Non-homogeneous/Normal Sensitivity: 86//73/64/100 Specificity: 100//82/94/86 |
Wang C et al., 2003. China [59] | To evaluate whether the algorithm could discriminate premalignant (ED) and malignant (SCC) tissues from "benign" ones | Xenon (λ: 330 nm) PLS-ANN (partial least squares and ANN) | Normal: 15, OSF: 30, EH: 26, ED: 13, SCC: 13 | Sensitivity: 81 Specificity: 96 Accuracy: Normal: 93 OSF: 97 EH: 62 ED & OSCC: 77 |
Wang X et al., 2020. China [60] | To develop a personalized computational model to predict the cancer risk level of OPMDs and explore a potential web application in OPMD screening | VELscope TB staining Gini index | n = 266; follow-up 3 yr Training (3/5)/test (2/5) Model B (baseline) Model P (personalized) Expert | Training Model B/Model P/experts Sensitivity: 81 Specificity (low grade): 92.31 Test Specificity (low grade) Model B/Model P: 91.78 Experts: 86.30 |
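Several of the light-based studies above (e.g., de Veld et al., Heintzelman et al.) reduce fluorescence spectra with PCA before classification. The following self-contained numpy sketch reproduces that pipeline on synthetic spectra; the peak positions, noise level, and nearest-centroid classifier are illustrative assumptions, not data or parameters from any reviewed study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "autofluorescence spectra": two classes with shifted
# emission peaks plus noise (invented stand-ins, not real tissue data).
wavelengths = np.linspace(375, 700, 120)
normal = np.exp(-((wavelengths - 480) / 40) ** 2) + rng.normal(0, 0.05, (30, 120))
lesion = np.exp(-((wavelengths - 520) / 40) ** 2) + rng.normal(0, 0.05, (30, 120))
X = np.vstack([normal, lesion])
y = np.array([0] * 30 + [1] * 30)

# PCA via eigendecomposition of the covariance of mean-centered spectra.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)
vals, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
components = vecs[:, ::-1][:, :2]         # top-2 principal components
scores = Xc @ components                  # spectra projected to 2-D

# Nearest-centroid classification in the reduced PC space.
centroids = np.array([scores[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((scores[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

The dimensionality reduction is the point: 120 correlated intensity channels collapse to two scores that a simple classifier (here a centroid rule; the reviewed studies used ANN, k-NN, or SVM) can separate.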
Authors, Year, Country, Reference | Aim | Method. Classifier | Sample | Outcomes: Diagnostic Performance (%) |
---|---|---|---|---|
Banerjee et al., 2016. India [29] | To classify cells and nuclei for the diagnosis of oral leukoplakia and cancer 1 | SVM MATLAB | OLK: 16 OSCC: 23 | Cell/nucleus Sensitivity: 100/89.9 Specificity: 100/100 |
Dey et al., 2016. India [34] | To classify cellular abnormalities in smokers vs. non-smokers and precancer patients 2 | SVM Texture features: GLCM (energy, homogeneity, correlation, contrast, DIC) Morphological features 4 Gradient vector flow snake model k-means clustering | Non-smokers: 30 Smokers: 63 Pre-cancer: 26 | Accuracy: 85.71 Sensitivity: 80.0 Specificity: 88.89 |
Liu et al., 2017. China [43] | To improve the performance of the risk index of a preexisting model for oral cancer risk assessment in OLK 3 | SVM Peaks—Random Forest SVM full KNN, CF; RF OCRI | Training/Validation Normal: 18/102 OLK: 28/82 OSCC: 41/93 Follow-up: 23 ± 20 months | Peaks RF Sensitivity: 100 Specificity: 100 |
McRae et al., 2020 (a). USA [45] | To describe cytopathology tools, including machine learning algorithms, clinical algorithms, and test reports developed to assist pathologists and clinicians with PMOL evaluation, using a POCOCT platform 3 | SVM Lasso logistic regression Training: PCA Validation: K-NN | Benign OPMD Oral epithelial dysplasia OSCC | Accuracy: 99.3 Clinical algorithm. AUC Benign vs. mild dysplasia: 0.81 No lesion vs. malignancy: 0.97 |
McRae et al., 2020 (b). USA [46] | To classify the spectrum of oral epithelial dysplasia and OSCC and to determine the utility of cytological signatures, including nuclear F-actin cell phenotypes 3 | Lasso logistic regression Training: PCA Validation: K-NN | OPMD OSCC Healthy | AUC Early disease 5: 0.82 Late disease 6: 0.93 |
Sarkar et al., 2016. India [49] | To develop a novel non-invasive method for detecting early cancer trends in habitual smokers 1 | DIC Fluorescence microscopy Fuzzy inference (Mamdani): risk of OPMD in smokers | OPMD smokers: 40 Non-smokers: 40 Control: 40 | Positive correlation of smoking duration with early cancer risk: Correlation coefficient: 0.86 Accuracy: 96 Sensitivity: 96 Specificity: 96 |
Sunny et al., 2019. India [52] | To evaluate the efficacy of a telecytology system in comparison with conventional cytology 3 | Manual telecytology 2 vs. ANN (Inception V3, implemented in Python) RF, LR, linear discriminant analysis, KNN | Training/Validation OPML: 3 OSCC: 3 | SVM (best accuracy) Sensitivity: 88 (malignant lesions: 93; high-grade OPML: 73) Specificity: 93 |
Wieslander et al., 2017. Sweden [62] | To present a pilot study applying the PAP-based screening method for early detection of oral cancer and to compare two network architectures 3 | Classifier: CNN Evaluation: ResNet VGGNet | Herlev dataset Normal: 3 OSCC: 3 | ResNet/VGGNet OSCC vs. normal Accuracy: 78.3/80.6 vs. 82.3/80.8 Precision: 72.4/75.0 vs. 82.4/82.4 Recall: 79.0/80.6 vs. 82.5/79.8 F-score: 75.5/77.6 vs. 82.5/81.0 |
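Several of the cytology studies above use k-nearest-neighbour voting for validation (e.g., the McRae et al. and Sunny et al. rows). The numpy sketch below shows just that voting step on hypothetical per-cell morphometric features; the feature names, values, and class labels are invented for illustration and come from none of the reviewed datasets.

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=3):
    """Plain k-nearest-neighbour majority vote (Euclidean distance)."""
    # Squared distances: each test row vs. every training row.
    d = ((test_X[:, None, :] - train_X[None, :, :]) ** 2).sum(-1)
    nearest = np.argsort(d, axis=1)[:, :k]   # indices of k closest
    votes = train_y[nearest]                 # their class labels
    # Majority vote per test sample.
    return np.array([np.bincount(v).argmax() for v in votes])

# Hypothetical per-cell features: [nuclear area, nucleus/cytoplasm ratio].
train_X = np.array([[40, 0.20], [45, 0.25], [42, 0.22],   # benign-like
                    [80, 0.55], [85, 0.60], [90, 0.58]])  # dysplastic-like
train_y = np.array([0, 0, 0, 1, 1, 1])
test_X = np.array([[43, 0.21], [88, 0.57]])
print(knn_predict(train_X, train_y, test_X))  # → [0 1]
```

In practice the reviewed pipelines first standardize or PCA-project the features so that no single scale (here, nuclear area) dominates the distance.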
Authors, Year, Country, Reference | Aim | Method. Classifier | Sample (Prediction Factors) | Outcomes: Diagnostic Performance (%) |
---|---|---|---|---|
Karem et al., 2017. Malaysia [42] | To guide oral cancer diagnosis using a real-world medical dataset with a prediction model. | Training (PS-Merge) Fuzzy NN Fuzzy regression Fuzzy logic Logistic regression | Oral cancer: 171 (n = 8) | Best 7 factors Accuracy: 78.95 Sensitivity: 100 Specificity: 58.62 |
Mohd et al., 2015. Malaysia [47] | To predict the presence of oral cancer more accurately with a reduced number of attributes. | SMOTE Feature-selection algorithm SVM Updatable Naïve Bayes Multilayer perceptron K-nearest neighbors | Re-sampled Oral cancer: 201 (n = 25) | Accuracy No. of features: NB/MLP/SVM/KNN 25: 91.9/94.2/93.3/86.1 14: 94.7/94.7/92.3/90.9 |
Rosma et al., 2010. Malaysia [48] | To evaluate the ability of a fuzzy neural network (FNN) model and a fuzzy regression (FR) model to predict the likelihood of an individual developing oral cancer based on their risk habits and demographic profile. | Prediction models: FNN FR | Oral cancer: 84 Non-cancer: 87 (n = 5) | p value (factors: 1 or 2/3 or 4) FR vs. FNN: 1/1 FR vs. OCC: 1/0.043 FNN vs. OCC: 1/0.02 |
Sharma & Om, 2015. India [51] | To design a data mining model using probabilistic and general regression neural networks for early detection and prevention of oral malignancy. | Probabilistic NN/General regression NN | Oral cancer: 1025 (n = 12) | Benign vs. malignant Accuracy: 99.0 Sensitivity: 99.3 Specificity: 98.0 |
Tetarbe et al., 2017. India [55] | To analyze and classify data from an oral cancer dataset to build an accurate prognosis model. | WEKA Naïve Bayes J48 tree SMO algorithm REPTree Random tree | Oral cancer: 48 (n = 18) | Explorer/Experimenter Accuracy Naïve Bayes: 60.4/64.9 J48 tree: 75.0/77.6 SMO algorithm: 70.8/NA REPTree: 79.1/78.72 Random tree: 72.8/68.3 |
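The prediction studies above feed demographic variables and toxic habits into classifiers such as logistic regression (Karem et al.) or its fuzzy variants. As a minimal illustration of that idea, the sketch below trains a logistic model by gradient descent on invented binary risk factors; the factor names, data, and labels are illustrative placeholders, not values from the reviewed datasets.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic regression weights by batch gradient descent."""
    X = np.hstack([np.ones((len(X), 1)), X])  # prepend a bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))          # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)      # gradient of log-loss
    return w

def predict(w, X):
    X = np.hstack([np.ones((len(X), 1)), X])
    return (1 / (1 + np.exp(-X @ w)) > 0.5).astype(int)

# Hypothetical binary risk factors: [smoking, alcohol, betel quid].
X = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 0], [0, 0, 0],
              [0, 1, 0], [0, 0, 1], [1, 1, 1], [0, 0, 0]])
y = np.array([1, 1, 1, 0, 0, 0, 1, 0])  # invented cancer labels
w = fit_logistic(X, y)
print(predict(w, X))
```

The fitted weights are interpretable as log-odds contributions per factor, which is one reason regression-style models remain attractive for risk-habit questionnaires despite the higher accuracies sometimes reported for neural networks in the table above.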
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
García-Pola, M.; Pons-Fuster, E.; Suárez-Fernández, C.; Seoane-Romero, J.; Romero-Méndez, A.; López-Jornet, P. Role of Artificial Intelligence in the Early Diagnosis of Oral Cancer. A Scoping Review. Cancers 2021, 13, 4600. https://doi.org/10.3390/cancers13184600