Application of Artificial Intelligence in Orthodontics: Current State and Future Perspectives
Abstract
1. Introduction
2. Application of AI in Orthodontics
2.1. Diagnosis
2.1.1. Cephalometric Analysis
2.1.2. Dental Analysis
2.1.3. Facial Analysis
2.1.4. Skeletal Maturation Determination
2.1.5. Upper-Airway Obstruction Assessment
2.2. Treatment Planning
2.2.1. Decision Making for Extractions
2.2.2. Decision Making for Orthognathic Surgery
2.2.3. Treatment Outcome Prediction
2.3. Clinical Practice
2.3.1. Practice Guidance
2.3.2. Remote Care
2.3.3. Clinical Documentation
3. Limitations and Future Perspectives
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Conflicts of Interest
References
- Kulikowski, C.A. An Opening Chapter of the First Generation of Artificial Intelligence in Medicine: The First Rutgers AIM Workshop, June 1975. Yearb. Med. Inform. 2015, 10, 227–233.
- Moravčík, M.; Schmid, M.; Burch, N.; Lisý, V.; Morrill, D.; Bard, N.; Davis, T.; Waugh, K.; Johanson, M.; Bowling, M. DeepStack: Expert-level artificial intelligence in heads-up no-limit poker. Science 2017, 356, 508–513.
- Wang, X.-L.; Liu, J.; Li, Z.-Q.; Luan, Z.-L. Application of physical examination data on health analysis and intelligent diagnosis. BioMed Res. Int. 2021, 2021, 8828677.
- Sharif, M.S.; Abbod, M.; Amira, A.; Zaidi, H. Artificial Neural Network-Based System for PET Volume Segmentation. Int. J. Biomed. Imaging 2010, 2010, 105610.
- Wang, D.; Yang, J.S. Analysis of Sports Injury Estimation Model Based on Mutation Fuzzy Neural Network. Comput. Intell. Neurosci. 2021, 2021, 3056428.
- Ding, H.; Wu, J.; Zhao, W.; Matinlinna, J.P.; Burrow, M.F.; Tsoi, J.K. Artificial intelligence in dentistry—A review. Front. Dent. Med. 2023, 4, 1085251.
- Chiu, Y.C.; Chen, H.H.; Gorthi, A.; Mostavi, M.; Zheng, S.; Huang, Y.; Chen, Y. Deep learning of pharmacogenomics resources: Moving towards precision oncology. Brief. Bioinform. 2020, 21, 2066–2083.
- Taye, M.M. Understanding of Machine Learning with Deep Learning: Architectures, Workflow, Applications and Future Directions. Computers 2023, 12, 91.
- Mohammad-Rahimi, H.; Nadimi, M.; Rohban, M.H.; Shamsoddin, E.; Lee, V.Y.; Motamedian, S.R. Machine learning and orthodontics, current trends and the future opportunities: A scoping review. Am. J. Orthod. Dentofac. Orthop. 2021, 160, 170–192.e174.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
- Li, Z.; Liu, F.; Yang, W.; Peng, S.; Zhou, J. A Survey of Convolutional Neural Networks: Analysis, Applications, and Prospects. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 6999–7019.
- Tomè, D.; Monti, F.; Baroffio, L.; Bondi, L.; Tagliasacchi, M.; Tubaro, S. Deep convolutional neural networks for pedestrian detection. Signal Process. Image Commun. 2016, 47, 482–489.
- Zou, J.; Meng, M.; Law, C.S.; Rao, Y.; Zhou, X. Common dental diseases in children and malocclusion. Int. J. Oral Sci. 2018, 10, 7.
- Borzabadi-Farahani, A.; Borzabadi-Farahani, A.; Eslamipour, F. Malocclusion and occlusal traits in an urban Iranian population. An epidemiological study of 11- to 14-year-old children. Eur. J. Orthod. 2009, 31, 477–484.
- Peter, E.; Monisha, J.; Edward Benson, P.; Ani George, S. Does orthodontic treatment improve the Oral Health-Related Quality of Life when assessed using the Malocclusion Impact Questionnaire? A 3-year prospective longitudinal cohort study. Eur. J. Orthod. 2023.
- Ribeiro, L.G.; Antunes, L.S.; Küchler, E.C.; Baratto-Filho, F.; Kirschneck, C.; Guimarães, L.S.; Antunes, L.A.A. Impact of malocclusion treatments on Oral Health-Related Quality of Life: An overview of systematic reviews. Clin. Oral Investig. 2023, 27, 907–932.
- Silva, T.P.D.; Lemos, Y.R.; Filho, M.V.; Carneiro, D.P.A.; Vedovello, S.A.S. Psychosocial impact of malocclusion in the school performance. A Hierarchical Analysis. Community Dent. Health 2022, 39, 211–216.
- Cenzato, N.; Nobili, A.; Maspero, C. Prevalence of Dental Malocclusions in Different Geographical Areas: Scoping Review. Dent. J. 2021, 9, 117.
- Borzabadi-Farahani, A.; Borzabadi-Farahani, A.; Eslamipour, F. The relationship between the ICON index and the dental and aesthetic components of the IOTN index. World J. Orthod. 2010, 11, 43–48.
- Monill-González, A.; Rovira-Calatayud, L.; d’Oliveira, N.G.; Ustrell-Torrent, J.M. Artificial intelligence in orthodontics: Where are we now? A scoping review. Orthod. Craniofacial Res. 2021, 24, 6–15.
- Albalawi, F.; Alamoud, K.A. Trends and Application of Artificial Intelligence Technology in Orthodontic Diagnosis and Treatment Planning—A Review. Appl. Sci. 2022, 12, 11864.
- Proffit, W.R.; Fields, H.W.; Larson, B.; Sarver, D.M. Contemporary Orthodontics, E-Book; Elsevier Health Sciences: Amsterdam, The Netherlands, 2018.
- Yue, W.; Yin, D.; Li, C.; Wang, G.; Xu, T. Automated 2-D cephalometric analysis on X-ray images by a model-based approach. IEEE Trans. Biomed. Eng. 2006, 53, 1615–1623.
- Kim, J.; Kim, I.; Kim, Y.J.; Kim, M.; Cho, J.H.; Hong, M.; Kang, K.H.; Lim, S.H.; Kim, S.J.; Kim, Y.H. Accuracy of automated identification of lateral cephalometric landmarks using cascade convolutional neural networks on lateral cephalograms from nationwide multi-centres. Orthod. Craniofacial Res. 2021, 24, 59–67.
- Baumrind, S.; Frantz, R.C. The reliability of head film measurements: 1. Landmark identification. Am. J. Orthod. 1971, 60, 111–127.
- Durão, A.P.R.; Morosolli, A.; Pittayapat, P.; Bolstad, N.; Ferreira, A.P.; Jacobs, R. Cephalometric landmark variability among orthodontists and dentomaxillofacial radiologists: A comparative study. Imaging Sci. Dent. 2015, 45, 213–220.
- Cohen, A.M.; Ip, H.H.; Linney, A.D. A preliminary study of computer recognition and identification of skeletal landmarks as a new method of cephalometric analysis. Br. J. Orthod. 1984, 11, 143–154.
- Payer, C.; Štern, D.; Bischof, H.; Urschler, M. Integrating spatial configuration into heatmap regression based CNNs for landmark localization. Med. Image Anal. 2019, 54, 207–219.
- Nishimoto, S.; Sotsuka, Y.; Kawai, K.; Ishise, H.; Kakibuchi, M. Personal Computer-Based Cephalometric Landmark Detection With Deep Learning, Using Cephalograms on the Internet. J. Craniofacial Surg. 2019, 30, 91–95.
- Zhong, Z.; Li, J.; Zhang, Z.; Jiao, Z.; Gao, X. An attention-guided deep regression model for landmark detection in cephalograms. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, 13–17 October 2019; Proceedings, Part VI.
- Park, J.H.; Hwang, H.W.; Moon, J.H.; Yu, Y.; Kim, H.; Her, S.B.; Srinivasan, G.; Aljanabi, M.N.A.; Donatelli, R.E.; Lee, S.J. Automated identification of cephalometric landmarks: Part 1-Comparisons between the latest deep-learning methods YOLOV3 and SSD. Angle Orthod. 2019, 89, 903–909.
- Moon, J.H.; Hwang, H.W.; Yu, Y.; Kim, M.G.; Donatelli, R.E.; Lee, S.J. How much deep learning is enough for automatic identification to be reliable? Angle Orthod. 2020, 90, 823–830.
- Hwang, H.W.; Park, J.H.; Moon, J.H.; Yu, Y.; Kim, H.; Her, S.B.; Srinivasan, G.; Aljanabi, M.N.A.; Donatelli, R.E.; Lee, S.J. Automated identification of cephalometric landmarks: Part 2-Might it be better than human? Angle Orthod. 2020, 90, 69–76.
- Oh, K.; Oh, I.S.; Le, V.N.T.; Lee, D.W. Deep Anatomical Context Feature Learning for Cephalometric Landmark Detection. IEEE J. Biomed. Health Inform. 2021, 25, 806–817.
- Kim, H.; Shim, E.; Park, J.; Kim, Y.J.; Lee, U.; Kim, Y. Web-based fully automated cephalometric analysis by deep learning. Comput. Methods Programs Biomed. 2020, 194, 105513.
- Kunz, F.; Stellzig-Eisenhauer, A.; Zeman, F.; Boldt, J. Artificial intelligence in orthodontics: Evaluation of a fully automated cephalometric analysis using a customized convolutional neural network. J. Orofac. Orthop. 2020, 81, 52–68.
- Alqahtani, H. Evaluation of an online website-based platform for cephalometric analysis. J. Stomatol. Oral Maxillofac. Surg. 2020, 121, 53–57.
- Lee, J.H.; Yu, H.J.; Kim, M.J.; Kim, J.W.; Choi, J. Automated cephalometric landmark detection with confidence regions using Bayesian convolutional neural networks. BMC Oral Health 2020, 20, 270.
- Yu, H.J.; Cho, S.R.; Kim, M.J.; Kim, W.H.; Kim, J.W.; Choi, J. Automated Skeletal Classification with Lateral Cephalometry Based on Artificial Intelligence. J. Dent. Res. 2020, 99, 249–256.
- Li, W.; Lu, Y.; Zheng, K.; Liao, H.; Lin, C.; Luo, J.; Cheng, C.-T.; Xiao, J.; Lu, L.; Kuo, C.-F. Structured landmark detection via topology-adapting deep graph learning. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part IX.
- Tanikawa, C.; Lee, C.; Lim, J.; Oka, A.; Yamashiro, T. Clinical applicability of automated cephalometric landmark identification: Part I-Patient-related identification errors. Orthod. Craniofacial Res. 2021, 24 (Suppl. S2), 43–52.
- Zeng, M.; Yan, Z.; Liu, S.; Zhou, Y.; Qiu, L. Cascaded convolutional networks for automatic cephalometric landmark detection. Med. Image Anal. 2021, 68, 101904.
- Hwang, H.W.; Moon, J.H.; Kim, M.G.; Donatelli, R.E.; Lee, S.J. Evaluation of automated cephalometric analysis based on the latest deep learning method. Angle Orthod. 2021, 91, 329–335.
- Bulatova, G.; Kusnoto, B.; Grace, V.; Tsay, T.P.; Avenetti, D.M.; Sanchez, F.J.C. Assessment of automatic cephalometric landmark identification using artificial intelligence. Orthod. Craniofacial Res. 2021, 24 (Suppl. S2), 37–42.
- Jeon, S.; Lee, K.C. Comparison of cephalometric measurements between conventional and automatic cephalometric analysis using convolutional neural network. Prog. Orthod. 2021, 22, 14.
- Hong, M.; Kim, I.; Cho, J.H.; Kang, K.H.; Kim, M.; Kim, S.J.; Kim, Y.J.; Sung, S.J.; Kim, Y.H.; Lim, S.H.; et al. Accuracy of artificial intelligence-assisted landmark identification in serial lateral cephalograms of Class III patients who underwent orthodontic treatment and two-jaw orthognathic surgery. Korean J. Orthod. 2022, 52, 287–297.
- Le, V.N.T.; Kang, J.; Oh, I.S.; Kim, J.G.; Yang, Y.M.; Lee, D.W. Effectiveness of Human-Artificial Intelligence Collaboration in Cephalometric Landmark Detection. J. Pers. Med. 2022, 12, 387.
- Mahto, R.K.; Kafle, D.; Giri, A.; Luintel, S.; Karki, A. Evaluation of fully automated cephalometric measurements obtained from web-based artificial intelligence driven platform. BMC Oral Health 2022, 22, 132.
- Uğurlu, M. Performance of a Convolutional Neural Network-Based Artificial Intelligence Algorithm for Automatic Cephalometric Landmark Detection. Turk. J. Orthod. 2022, 35, 94–100.
- Yao, J.; Zeng, W.; He, T.; Zhou, S.; Zhang, Y.; Guo, J.; Tang, W. Automatic localization of cephalometric landmarks based on convolutional neural network. Am. J. Orthod. Dentofac. Orthop. 2022, 161, e250–e259.
- Lu, G.; Zhang, Y.; Kong, Y.; Zhang, C.; Coatrieux, J.L.; Shu, H. Landmark Localization for Cephalometric Analysis Using Multiscale Image Patch-Based Graph Convolutional Networks. IEEE J. Biomed. Health Inform. 2022, 26, 3015–3024.
- Tsolakis, I.A.; Tsolakis, A.I.; Elshebiny, T.; Matthaios, S.; Palomo, J.M. Comparing a Fully Automated Cephalometric Tracing Method to a Manual Tracing Method for Orthodontic Diagnosis. J. Clin. Med. 2022, 11, 6854.
- Duran, G.S.; Gökmen, Ş.; Topsakal, K.G.; Görgülü, S. Evaluation of the accuracy of fully automatic cephalometric analysis software with artificial intelligence algorithm. Orthod. Craniofacial Res. 2023, 26, 481–490.
- Ye, H.; Cheng, Z.; Ungvijanpunya, N.; Chen, W.; Cao, L.; Gou, Y. Is automatic cephalometric software using artificial intelligence better than orthodontist experts in landmark identification? BMC Oral Health 2023, 23, 467.
- Ueda, A.; Tussie, C.; Kim, S.; Kuwajima, Y.; Matsumoto, S.; Kim, G.; Satoh, K.; Nagai, S. Classification of Maxillofacial Morphology by Artificial Intelligence Using Cephalometric Analysis Measurements. Diagnostics 2023, 13, 2134.
- Bao, H.; Zhang, K.; Yu, C.; Li, H.; Cao, D.; Shu, H.; Liu, L.; Yan, B. Evaluating the accuracy of automated cephalometric analysis based on artificial intelligence. BMC Oral Health 2023, 23, 191.
- Kim, M.J.; Liu, Y.; Oh, S.H.; Ahn, H.W.; Kim, S.H.; Nelson, G. Evaluation of a multi-stage convolutional neural network-based fully automated landmark identification system using cone-beam computed tomography-synthesized posteroanterior cephalometric images. Korean J. Orthod. 2021, 51, 77–85.
- Takeda, S.; Mine, Y.; Yoshimi, Y.; Ito, S.; Tanimoto, K.; Murayama, T. Landmark annotation and mandibular lateral deviation analysis of posteroanterior cephalograms using a convolutional neural network. J. Dent. Sci. 2021, 16, 957–963.
- Lee, S.M.; Kim, H.P.; Jeon, K.; Lee, S.H.; Seo, J.K. Automatic 3D cephalometric annotation system using shadowed 2D image-based machine learning. Phys. Med. Biol. 2019, 64, 055002.
- Torosdagli, N.; Liberton, D.K.; Verma, P.; Sincan, M.; Lee, J.S.; Bagci, U. Deep Geodesic Learning for Segmentation and Anatomical Landmarking. IEEE Trans. Med. Imaging 2019, 38, 919–931.
- Yun, H.S.; Jang, T.J.; Lee, S.M.; Lee, S.H.; Seo, J.K. Learning-based local-to-global landmark annotation for automatic 3D cephalometry. Phys. Med. Biol. 2020, 65, 085018.
- Kang, S.H.; Jeon, K.; Kang, S.H.; Lee, S.H. 3D cephalometric landmark detection by multiple stage deep reinforcement learning. Sci. Rep. 2021, 11, 17509.
- Ghowsi, A.; Hatcher, D.; Suh, H.; Wile, D.; Castro, W.; Krueger, J.; Park, J.; Oh, H. Automated landmark identification on cone-beam computed tomography: Accuracy and reliability. Angle Orthod. 2022, 92, 642–654.
- Dot, G.; Schouman, T.; Chang, S.; Rafflenbeul, F.; Kerbrat, A.; Rouch, P.; Gajny, L. Automatic 3-Dimensional Cephalometric Landmarking via Deep Learning. J. Dent. Res. 2022, 101, 1380–1387.
- Blum, F.M.S.; Möhlhenrich, S.C.; Raith, S.; Pankert, T.; Peters, F.; Wolf, M.; Hölzle, F.; Modabber, A. Evaluation of an artificial intelligence-based algorithm for automated localization of craniofacial landmarks. Clin. Oral Investig. 2023, 27, 2255–2265.
- Yang, J.; Ling, X.; Lu, Y.; Wei, M.; Ding, G. Cephalometric image analysis and measurement for orthognathic surgery. Med. Biol. Eng. Comput. 2001, 39, 279–284.
- Schwendicke, F.; Chaurasia, A.; Arsiwala, L.; Lee, J.H.; Elhennawy, K.; Jost-Brinkmann, P.G.; Demarco, F.; Krois, J. Deep learning for cephalometric landmark detection: Systematic review and meta-analysis. Clin. Oral Investig. 2021, 25, 4299–4309.
- Arık, S.; Ibragimov, B.; Xing, L. Fully automated quantitative cephalometry using convolutional neural networks. J. Med. Imaging 2017, 4, 014501.
- Cao, L.; He, H.; Hua, F. Deep Learning Algorithms Have High Accuracy for Automated Landmark Detection on 2D Lateral Cephalograms. J. Evid. Based Dent. Pract. 2022, 22, 101798.
- Meriç, P.; Naoumova, J. Web-based Fully Automated Cephalometric Analysis: Comparisons between App-aided, Computerized, and Manual Tracings. Turk. J. Orthod. 2020, 33, 142–149.
- Montúfar, J.; Romero, M.; Scougall-Vilchis, R.J. Automatic 3-dimensional cephalometric landmarking based on active shape models in related projections. Am. J. Orthod. Dentofac. Orthop. 2018, 153, 449–458.
- Zhang, J.; Liu, M.; Wang, L.; Chen, S.; Yuan, P.; Li, J.; Shen, S.G.; Tang, Z.; Chen, K.C.; Xia, J.J.; et al. Context-guided fully convolutional networks for joint craniomaxillofacial bone segmentation and landmark digitization. Med. Image Anal. 2020, 60, 101621.
- Dot, G.; Rafflenbeul, F.; Arbotto, M.; Gajny, L.; Rouch, P.; Schouman, T. Accuracy and reliability of automatic three-dimensional cephalometric landmarking. Int. J. Oral Maxillofac. Surg. 2020, 49, 1367–1378.
- Ghesu, F.C.; Georgescu, B.; Mansi, T.; Neumann, D.; Hornegger, J.; Comaniciu, D. An artificial agent for anatomical landmark detection in medical images. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2016: 19th International Conference, Athens, Greece, 17–21 October 2016; Proceedings, Part III.
- Ghesu, F.C.; Georgescu, B.; Grbic, S.; Maier, A.; Hornegger, J.; Comaniciu, D. Towards intelligent robust detection of anatomical structures in incomplete volumetric data. Med. Image Anal. 2018, 48, 203–213.
- Chen, S.; Wu, S. Deep Q-networks with web-based survey data for simulating lung cancer intervention prediction and assessment in the elderly: A quantitative study. BMC Med. Inform. Decis. Mak. 2022, 22, 1.
- Talaat, S.; Kaboudan, A.; Talaat, W.; Kusnoto, B.; Sanchez, F.; Elnagar, M.H.; Bourauel, C.; Ghoneima, A. The validity of an artificial intelligence application for assessment of orthodontic treatment need from clinical images. Semin. Orthod. 2021, 27, 164–171.
- Ryu, J.; Kim, Y.H.; Kim, T.W.; Jung, S.K. Evaluation of artificial intelligence model for crowding categorization and extraction diagnosis using intraoral photographs. Sci. Rep. 2023, 13, 5177.
- Im, J.; Kim, J.Y.; Yu, H.S.; Lee, K.J.; Choi, S.H.; Kim, J.H.; Ahn, H.K.; Cha, J.Y. Accuracy and efficiency of automatic tooth segmentation in digital dental models using deep learning. Sci. Rep. 2022, 12, 9429.
- Woodsend, B.; Koufoudaki, E.; Lin, P.; McIntyre, G.; El-Angbawi, A.; Aziz, A.; Shaw, W.; Semb, G.; Reesu, G.V.; Mossey, P.A. Development of intra-oral automated landmark recognition (ALR) for dental and occlusal outcome measurements. Eur. J. Orthod. 2022, 44, 43–50.
- Woodsend, B.; Koufoudaki, E.; Mossey, P.A.; Lin, P. Automatic recognition of landmarks on digital dental models. Comput. Biol. Med. 2021, 137, 104819.
- Zhao, Y.; Zhang, L.; Liu, Y.; Meng, D.; Cui, Z.; Gao, C.; Gao, X.; Lian, C.; Shen, D. Two-Stream Graph Convolutional Network for Intra-Oral Scanner Image Segmentation. IEEE Trans. Med. Imaging 2022, 41, 826–835.
- Wu, T.H.; Lian, C.; Lee, S.; Pastewait, M.; Piers, C.; Liu, J.; Wang, F.; Wang, L.; Chiu, C.Y.; Wang, W.; et al. Two-Stage Mesh Deep Learning for Automated Tooth Segmentation and Landmark Localization on 3D Intraoral Scans. IEEE Trans. Med. Imaging 2022, 41, 3158–3166.
- Liu, Z.; He, X.; Wang, H.; Xiong, H.; Zhang, Y.; Wang, G.; Hao, J.; Feng, Y.; Zhu, F.; Hu, H. Hierarchical Self-Supervised Learning for 3D Tooth Segmentation in Intra-Oral Mesh Scans. IEEE Trans. Med. Imaging 2023, 42, 467–480.
- Rao, G.K.L.; Srinivasa, A.C.; Iskandar, Y.H.P.; Mokhtar, N. Identification and analysis of photometric points on 2D facial images: A machine learning approach in orthodontics. Health Technol. 2019, 9, 715–724.
- Yurdakurban, E.; Duran, G.S.; Görgülü, S. Evaluation of an automated approach for facial midline detection and asymmetry assessment: A preliminary study. Orthod. Craniofacial Res. 2021, 24 (Suppl. S2), 84–91.
- Rousseau, M.; Retrouvey, J.M. Machine learning in orthodontics: Automated facial analysis of vertical dimension for increased precision and efficiency. Am. J. Orthod. Dentofac. Orthop. 2022, 161, 445–450.
- Kim, H.; Kim, C.S.; Lee, J.M.; Lee, J.J.; Lee, J.; Kim, J.S.; Choi, S.H. Prediction of Fishman’s skeletal maturity indicators using artificial intelligence. Sci. Rep. 2023, 13, 5870.
- Lee, H.; Tajmir, S.; Lee, J.; Zissen, M.; Yeshiwas, B.A.; Alkasab, T.K.; Choy, G.; Do, S. Fully Automated Deep Learning System for Bone Age Assessment. J. Digit. Imaging 2017, 30, 427–441.
- Kim, J.R.; Shim, W.H.; Yoon, H.M.; Hong, S.H.; Lee, J.S.; Cho, Y.A.; Kim, S. Computerized Bone Age Estimation Using Deep Learning Based Program: Evaluation of the Accuracy and Efficiency. AJR Am. J. Roentgenol. 2017, 209, 1374–1380.
- Kök, H.; Izgi, M.S.; Acilar, A.M. Determination of growth and development periods in orthodontics with artificial neural network. Orthod. Craniofacial Res. 2021, 24 (Suppl. S2), 76–83.
- Franchi, L.; Baccetti, T.; McNamara, J.A., Jr. Mandibular growth as related to cervical vertebral maturation and body height. Am. J. Orthod. Dentofac. Orthop. 2000, 118, 335–340.
- Flores-Mir, C.; Burgess, C.A.; Champney, M.; Jensen, R.J.; Pitcher, M.R.; Major, P.W. Correlation of skeletal maturation stages determined by cervical vertebrae and hand-wrist evaluations. Angle Orthod. 2006, 76, 1–5.
- Kucukkeles, N.; Acar, A.; Biren, S.; Arun, T. Comparisons between cervical vertebrae and hand-wrist maturation for the assessment of skeletal maturity. J. Clin. Pediatr. Dent. 1999, 24, 47–52.
- McNamara, J.A., Jr.; Franchi, L. The cervical vertebral maturation method: A user’s guide. Angle Orthod. 2018, 88, 133–143.
- Kim, D.W.; Kim, J.; Kim, T.; Kim, T.; Kim, Y.J.; Song, I.S.; Ahn, B.; Choo, J.; Lee, D.Y. Prediction of hand-wrist maturation stages based on cervical vertebrae images using artificial intelligence. Orthod. Craniofacial Res. 2021, 24 (Suppl. S2), 68–75.
- Gandini, P.; Mancini, M.; Andreani, F. A comparison of hand-wrist bone and cervical vertebral analyses in measuring skeletal maturation. Angle Orthod. 2006, 76, 984–989.
- Akay, G.; Akcayol, M.A.; Özdem, K.; Güngör, K. Deep convolutional neural network—The evaluation of cervical vertebrae maturation. Oral Radiol. 2023, 39, 629–638.
- Kök, H.; Acilar, A.M.; İzgi, M.S. Usage and comparison of artificial intelligence algorithms for determination of growth and development by cervical vertebrae stages in orthodontics. Prog. Orthod. 2019, 20, 41.
- Makaremi, M.; Lacaule, C.; Mohammad-Djafari, A. Deep Learning and Artificial Intelligence for the Determination of the Cervical Vertebra Maturation Degree from Lateral Radiography. Entropy 2019, 21, 1222.
- Amasya, H.; Yildirim, D.; Aydogan, T.; Kemaloglu, N.; Orhan, K. Cervical vertebral maturation assessment on lateral cephalometric radiographs using artificial intelligence: Comparison of machine learning classifier models. Dentomaxillofac. Radiol. 2020, 49, 20190441.
- Amasya, H.; Cesur, E.; Yıldırım, D.; Orhan, K. Validation of cervical vertebral maturation stages: Artificial intelligence vs human observer visual analysis. Am. J. Orthod. Dentofac. Orthop. 2020, 158, e173–e179.
- Seo, H.; Hwang, J.; Jeong, T.; Shin, J. Comparison of Deep Learning Models for Cervical Vertebral Maturation Stage Classification on Lateral Cephalometric Radiographs. J. Clin. Med. 2021, 10, 3591.
- Zhou, J.; Zhou, H.; Pu, L.; Gao, Y.; Tang, Z.; Yang, Y.; You, M.; Yang, Z.; Lai, W.; Long, H. Development of an Artificial Intelligence System for the Automatic Evaluation of Cervical Vertebral Maturation Status. Diagnostics 2021, 11, 2200.
- Kim, E.G.; Oh, I.S.; So, J.E.; Kang, J.; Le, V.N.T.; Tak, M.K.; Lee, D.W. Estimating Cervical Vertebral Maturation with a Lateral Cephalogram Using the Convolutional Neural Network. J. Clin. Med. 2021, 10, 5400.
- Mohammad-Rahimi, H.; Motamadian, S.R.; Nadimi, M.; Hassanzadeh-Samani, S.; Minabi, M.A.S.; Mahmoudinia, E.; Lee, V.Y.; Rohban, M.H. Deep learning for the classification of cervical maturation degree and pubertal growth spurts: A pilot study. Korean J. Orthod. 2022, 52, 112–122.
- Radwan, M.T.; Sin, Ç.; Akkaya, N.; Vahdettin, L. Artificial intelligence-based algorithm for cervical vertebrae maturation stage assessment. Orthod. Craniofacial Res. 2023, 26, 349–355.
- Rojas, E.; Corvalán, R.; Messen, E.; Sandoval, P. Upper airway assessment in Orthodontics: A review. Odontoestomatologia 2017, 19, 40–51.
- Shen, Y.; Li, X.; Liang, X.; Xu, H.; Li, C.; Yu, Y.; Qiu, B. A deep-learning-based approach for adenoid hypertrophy diagnosis. Med. Phys. 2020, 47, 2171–2181.
- Zhao, T.; Zhou, J.; Yan, J.; Cao, L.; Cao, Y.; Hua, F.; He, H. Automated Adenoid Hypertrophy Assessment with Lateral Cephalometry in Children Based on Artificial Intelligence. Diagnostics 2021, 11, 1386.
- Liu, J.L.; Li, S.H.; Cai, Y.M.; Lan, D.P.; Lu, Y.F.; Liao, W.; Ying, S.C.; Zhao, Z.H. Automated Radiographic Evaluation of Adenoid Hypertrophy Based on VGG-Lite. J. Dent. Res. 2021, 100, 1337–1343.
- Sin, Ç.; Akkaya, N.; Aksoy, S.; Orhan, K.; Öz, U. A deep learning algorithm proposal to automatic pharyngeal airway detection and segmentation on CBCT images. Orthod. Craniofacial Res. 2021, 24 (Suppl. S2), 117–123.
- Leonardi, R.; Lo Giudice, A.; Farronato, M.; Ronsivalle, V.; Allegrini, S.; Musumeci, G.; Spampinato, C. Fully automatic segmentation of sinonasal cavity and pharyngeal airway based on convolutional neural networks. Am. J. Orthod. Dentofac. Orthop. 2021, 159, 824–835.e821.
- Shujaat, S.; Jazil, O.; Willems, H.; Van Gerven, A.; Shaheen, E.; Politis, C.; Jacobs, R. Automatic segmentation of the pharyngeal airway space with convolutional neural network. J. Dent. 2021, 111, 103705.
- Jeong, Y.; Nang, Y.; Zhao, Z. Automated Evaluation of Upper Airway Obstruction Based on Deep Learning. BioMed Res. Int. 2023, 2023, 8231425.
- Dong, W.; Chen, Y.; Li, A.; Mei, X.; Yang, Y. Automatic detection of adenoid hypertrophy on cone-beam computed tomography based on deep learning. Am. J. Orthod. Dentofac. Orthop. 2023, 163, 553–560.e553.
- Jin, S.; Han, H.; Huang, Z.; Xiang, Y.; Du, M.; Hua, F.; Guan, X.; Liu, J.; Chen, F.; He, H. Automatic three-dimensional nasal and pharyngeal airway subregions identification via Vision Transformer. J. Dent. 2023, 136, 104595.
- Soldatova, L.; Otero, H.J.; Saul, D.A.; Barrera, C.A.; Elden, L. Lateral Neck Radiography in Preoperative Evaluation of Adenoid Hypertrophy. Ann. Otol. Rhinol. Laryngol. 2020, 129, 482–488.
- Duan, H.; Xia, L.; He, W.; Lin, Y.; Lu, Z.; Lan, Q. Accuracy of lateral cephalogram for diagnosis of adenoid hypertrophy and posterior upper airway obstruction: A meta-analysis. Int. J. Pediatr. Otorhinolaryngol. 2019, 119, 1–9.
- Fujioka, M.; Young, L.W.; Girdany, B.R. Radiographic evaluation of adenoidal size in children: Adenoidal-nasopharyngeal ratio. AJR Am. J. Roentgenol. 1979, 133, 401–404.
- Xie, X.; Wang, L.; Wang, A. Artificial neural network modeling for deciding if extractions are necessary prior to orthodontic treatment. Angle Orthod. 2010, 80, 262–266.
- Jung, S.K.; Kim, T.W. New approach for the diagnosis of extractions with neural network machine learning. Am. J. Orthod. Dentofac. Orthop. 2016, 149, 127–133.
- Li, P.; Kong, D.; Tang, T.; Su, D.; Yang, P.; Wang, H.; Zhao, Z.; Liu, Y. Orthodontic treatment planning based on artificial neural networks. Sci. Rep. 2019, 9, 2037.
- Suhail, Y.; Upadhyay, M.; Chhibber, A.; Kshitiz. Machine learning for the diagnosis of orthodontic extractions: A computational analysis using ensemble learning. Bioengineering 2020, 7, 55.
- Etemad, L.; Wu, T.H.; Heiner, P.; Liu, J.; Lee, S.; Chao, W.L.; Zaytoun, M.L.; Guez, C.; Lin, F.C.; Jackson, C.B.; et al. Machine learning from clinical data sets of a contemporary decision for orthodontic tooth extraction. Orthod. Craniofacial Res. 2021, 24 (Suppl. S2), 193–200.
- Shojaei, H.; Augusto, V. Constructing Machine Learning models for Orthodontic Treatment Planning: A comparison of different methods. In Proceedings of the 2022 IEEE International Conference on Big Data (Big Data), Osaka, Japan, 17–20 December 2022.
- Real, A.D.; Real, O.D.; Sardina, S.; Oyonarte, R. Use of automated artificial intelligence to predict the need for orthodontic extractions. Korean J. Orthod. 2022, 52, 102–111.
- Leavitt, L.; Volovic, J.; Steinhauer, L.; Mason, T.; Eckert, G.; Dean, J.A.; Dundar, M.M.; Turkkahraman, H. Can we predict orthodontic extraction patterns by using machine learning? Orthod. Craniofacial Res. 2023, 26, 552–559.
- Prasad, J.; Mallikarjunaiah, D.R.; Shetty, A.; Gandedkar, N.; Chikkamuniswamy, A.B.; Shivashankar, P.C. Machine Learning Predictive Model as Clinical Decision Support System in Orthodontic Treatment Planning. Dent. J. 2022, 11, 1.
- Knoops, P.G.M.; Papaioannou, A.; Borghi, A.; Breakey, R.W.F.; Wilson, A.T.; Jeelani, O.; Zafeiriou, S.; Steinbacher, D.; Padwa, B.L.; Dunaway, D.J.; et al. A machine learning framework for automated diagnosis and computer-assisted planning in plastic and reconstructive surgery. Sci. Rep. 2019, 9, 13597.
- Choi, H.-I.; Jung, S.-K.; Baek, S.-H.; Lim, W.H.; Ahn, S.-J.; Yang, I.-H.; Kim, T.-W. Artificial intelligent model with neural network machine learning for the diagnosis of orthognathic surgery. J. Craniofacial Surg. 2019, 30, 1986–1989.
- Lee, K.-S.; Ryu, J.-J.; Jang, H.S.; Lee, D.-Y.; Jung, S.-K. Deep convolutional neural networks based analysis of cephalometric radiographs for differential diagnosis of orthognathic surgery indications. Appl. Sci. 2020, 10, 2124.
- Jeong, S.H.; Yun, J.P.; Yeom, H.G.; Lim, H.J.; Lee, J.; Kim, B.C. Deep learning based discrimination of soft tissue profiles requiring orthognathic surgery by facial photographs. Sci. Rep. 2020, 10, 16235.
- Shin, W.; Yeom, H.G.; Lee, G.H.; Yun, J.P.; Jeong, S.H.; Lee, J.H.; Kim, H.K.; Kim, B.C. Deep learning based prediction of necessity for orthognathic surgery of skeletal malocclusion using cephalogram in Korean individuals. BMC Oral Health 2021, 21, 130.
- Kim, Y.H.; Park, J.B.; Chang, M.S.; Ryu, J.J.; Lim, W.H.; Jung, S.K. Influence of the Depth of the Convolutional Neural Networks on an Artificial Intelligence Model for Diagnosis of Orthognathic Surgery. J. Pers. Med. 2021, 11, 356.
- Lee, H.; Ahmad, S.; Frazier, M.; Dundar, M.M.; Turkkahraman, H. A novel machine learning model for class III surgery decision. J. Orofac. Orthop. 2022.
- Woo, H.; Jha, N.; Kim, Y.-J.; Sung, S.-J. Evaluating the accuracy of automated orthodontic digital setup models. Semin. Orthod. 2023, 29, 60–67.
- Park, J.H.; Kim, Y.-J.; Kim, J.; Kim, J.; Kim, I.-H.; Kim, N.; Vaid, N.R.; Kook, Y.-A. Use of artificial intelligence to predict outcomes of nonextraction treatment of Class II malocclusions. Semin. Orthod. 2021, 27, 87–95.
- Tanikawa, C.; Yamashiro, T. Development of novel artificial intelligence systems to predict facial morphology after orthognathic surgery and orthodontic treatment in Japanese patients. Sci. Rep. 2021, 11, 15853.
- Park, Y.S.; Choi, J.H.; Kim, Y.; Choi, S.H.; Lee, J.H.; Kim, K.H.; Chung, C.J. Deep Learning-Based Prediction of the 3D Postorthodontic Facial Changes. J. Dent. Res. 2022, 101, 1372–1379.
- Xu, L.; Mei, L.; Lu, R.; Li, Y.; Li, H.; Li, Y. Predicting patient experience of Invisalign treatment: An analysis using artificial neural network. Korean J. Orthod. 2022, 52, 268–277.
- Ribarevski, R.; Vig, P.; Vig, K.D.; Weyant, R.; O’Brien, K. Consistency of orthodontic extraction decisions. Eur. J. Orthod. 1996, 18, 77–80.
- Drucker, H.; Wu, D.; Vapnik, V.N. Support vector machines for spam categorization. IEEE Trans. Neural Netw. 1999, 10, 1048–1054.
- Khozeimeh, F.; Sharifrazi, D.; Izadi, N.H.; Joloudari, J.H.; Shoeibi, A.; Alizadehsani, R.; Tartibi, M.; Hussain, S.; Sani, Z.A.; Khodatars, M.; et al. RF-CNN-F: Random forest with convolutional neural network features for coronary artery disease diagnosis based on cardiac magnetic resonance. Sci. Rep. 2022, 12, 11178.
- Ahsan, M.M.; Luna, S.A.; Siddique, Z. Machine-Learning-Based Disease Diagnosis: A Comprehensive Review. Healthcare 2022, 10, 541.
- Rabie, A.B.; Wong, R.W.; Min, G.U. Treatment in Borderline Class III Malocclusion: Orthodontic Camouflage (Extraction) Versus Orthognathic Surgery. Open Dent. J. 2008, 2, 38–48.
- Alhammadi, M.S.; Almashraqi, A.A.; Khadhi, A.H.; Arishi, K.A.; Alamir, A.A.; Beleges, E.M.; Halboub, E. Orthodontic camouflage versus orthodontic-orthognathic surgical treatment in borderline class III malocclusion: A systematic review. Clin. Oral Investig. 2022, 26, 6443–6455.
- Eslami, S.; Faber, J.; Fateh, A.; Sheikholaemmeh, F.; Grassia, V.; Jamilian, A. Treatment decision in adult patients with class III malocclusion: Surgery versus orthodontics. Prog. Orthod. 2018, 19, 28.
- Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv 2014, arXiv:1411.1784.
- Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017.
- Vu, H.; Vo, P.T.; Kim, H.D. Gender modified association of oral health indicators with oral health-related quality of life among Korean elders. BMC Oral Health 2022, 22, 168.
- El-Dawlatly, M.M.; Abdelmaksoud, A.R.; Amer, O.M.; El-Dakroury, A.E.; Mostafa, Y.A. Evaluation of the efficiency of computerized algorithms to formulate a decision support system for deepbite treatment planning. Am. J. Orthod. Dentofac. Orthop. 2021, 159, 512–521.
- Tao, T.; Zou, K.; Jiang, R.; He, K.; He, X.; Zhang, M.; Wu, Z.; Shen, X.; Yuan, X.; Lai, W.; et al. Artificial intelligence-assisted determination of available sites for palatal orthodontic mini implants based on palatal thickness through CBCT. Orthod. Craniofacial Res. 2023, 26, 491–499.
- Hu, X.; Zhao, Y.; Yang, C. Evaluation of root position during orthodontic treatment via multiple intraoral scans with automated registration technology. Am. J. Orthod. Dentofac. Orthop. 2023, 164, 285–292.
- Lee, S.C.; Hwang, H.S.; Lee, K.C. Accuracy of deep learning-based integrated tooth models by merging intraoral scans and CBCT scans for 3D evaluation of root position during orthodontic treatment. Prog. Orthod. 2022, 23, 15.
- Hansa, I.; Semaan, S.J.; Vaid, N.R. Clinical outcomes and patient perspectives of Dental Monitoring® GoLive® with Invisalign®-a retrospective cohort study. Prog. Orthod. 2020, 21, 16.
- Strunga, M.; Urban, R.; Surovková, J.; Thurzo, A. Artificial Intelligence Systems Assisting in the Assessment of the Course and Retention of Orthodontic Treatment. Healthcare 2023, 11, 683.
- Hansa, I.; Katyal, V.; Semaan, S.J.; Coyne, R.; Vaid, N.R. Artificial Intelligence Driven Remote Monitoring of orthodontic patients: Clinical applicability and rationale. Semin. Orthod. 2021, 27, 138–156.
- Ryu, J.; Lee, Y.S.; Mo, S.P.; Lim, K.; Jung, S.K.; Kim, T.W. Application of deep learning artificial intelligence technique to the classification of clinical orthodontic photos. BMC Oral Health 2022, 22, 454.
- Li, S.; Guo, Z.; Lin, J.; Ying, S. Artificial Intelligence for Classifying and Archiving Orthodontic Images. BioMed Res. Int. 2022, 2022, 1473977.
- Keim, R.G. Fine-tuning our treatment of deep bites. J. Clin. Orthod. 2008, 42, 687–688.
- Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2016: 19th International Conference, Athens, Greece, 17–21 October 2016; Proceedings, Part II.
- Möhlhenrich, S.C.; Heussen, N.; Modabber, A.; Bock, A.; Hölzle, F.; Wilmes, B.; Danesh, G.; Szalma, J. Influence of bone density, screw size and surgical procedure on orthodontic mini-implant placement—part B: Implant stability. Int. J. Oral Maxillofac. Surg. 2021, 50, 565–572.
- Poon, Y.C.; Chang, H.P.; Tseng, Y.C.; Chou, S.T.; Cheng, J.H.; Liu, P.H.; Pan, C.Y. Palatal bone thickness and associated factors in adult miniscrew placements: A cone-beam computed tomography study. Kaohsiung J. Med. Sci. 2015, 31, 265–270.
- Dalessandri, D.; Sangalli, L.; Tonni, I.; Laffranchi, L.; Bonetti, S.; Visconti, L.; Signoroni, A.; Paganelli, C. Attitude towards Telemonitoring in Orthodontists and Orthodontic Patients. Dent. J. 2021, 9, 47.
- Sangalli, L.; Savoldi, F.; Dalessandri, D.; Visconti, L.; Massetti, F.; Bonetti, S. Remote digital monitoring during the retention phase of orthodontic treatment: A prospective feasibility study. Korean J. Orthod. 2022, 52, 123–130.
- Sangalli, L.; Alessandri-Bonetti, A.; Dalessandri, D. Effectiveness of dental monitoring system in orthodontics: A systematic review. J. Orthod. 2023, online ahead of print.
- Homsi, K.; Snider, V.; Kusnoto, B.; Atsawasuwan, P.; Viana, G.; Allareddy, V.; Gajendrareddy, P.; Elnagar, M.H. In-vivo evaluation of Artificial Intelligence Driven Remote Monitoring technology for tracking tooth movement and reconstruction of 3-dimensional digital models during orthodontic treatment. Am. J. Orthod. Dentofac. Orthop. 2023, online ahead of print.
- Moylan, H.B.; Carrico, C.K.; Lindauer, S.J.; Tüfekçi, E. Accuracy of a smartphone-based orthodontic treatment-monitoring application: A pilot study. Angle Orthod. 2019, 89, 727–733.
- Ferlito, T.; Hsiou, D.; Hargett, K.; Herzog, C.; Bachour, P.; Katebi, N.; Tokede, O.; Larson, B.; Masoud, M.I. Assessment of artificial intelligence-based remote monitoring of clear aligner therapy: A prospective study. Am. J. Orthod. Dentofac. Orthop. 2023, 164, 194–200.
- Candemir, S.; Nguyen, X.V.; Folio, L.R.; Prevedello, L.M. Training Strategies for Radiology Deep Learning Models in Data-limited Scenarios. Radiol. Artif. Intell. 2021, 3, e210014.
- Ge, Y.; Guo, Y.; Das, S.; Al-Garadi, M.A.; Sarker, A. Few-shot learning for medical text: A review of advances, trends, and opportunities. J. Biomed. Inform. 2023, 144, 104458.
- Langnickel, L.; Fluck, J. We are not ready yet: Limitations of transfer learning for Disease Named Entity Recognition. bioRxiv 2021.
- Frid-Adar, M.; Diamant, I.; Klang, E.; Amitai, M.; Goldberger, J.; Greenspan, H. GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing 2018, 321, 321–331.
- Zhang, K.; Song, X.; Zhang, C.; Yu, S. Challenges and future directions of secure federated learning: A survey. Front. Comput. Sci. 2022, 16, 165817.
- Wolff, J.; Matschinske, J.; Baumgart, D.; Pytlik, A.; Keck, A.; Natarajan, A.; von Schacky, C.E.; Pauling, J.K.; Baumbach, J. Federated machine learning for a facilitated implementation of Artificial Intelligence in healthcare—A proof of concept study for the prediction of coronary artery calcification scores. J. Integr. Bioinform. 2022, 19, 20220032.
- Attaran, M. Blockchain technology in healthcare: Challenges and opportunities. Int. J. Healthc. Manag. 2022, 15, 70–83.
- Tagde, P.; Tagde, S.; Bhattacharya, T.; Tagde, P.; Chopra, H.; Akter, R.; Kaushik, D.; Rahman, M.H. Blockchain and artificial intelligence technology in e-Health. Environ. Sci. Pollut. Res. 2021, 28, 52810–52831.
- Rieke, N.; Hancox, J.; Li, W.; Milletarì, F.; Roth, H.R.; Albarqouni, S.; Bakas, S.; Galtier, M.N.; Landman, B.A.; Maier-Hein, K.; et al. The future of digital health with federated learning. NPJ Digit. Med. 2020, 3, 119.
- Allareddy, V.; Rampa, S.; Venugopalan, S.R.; Elnagar, M.H.; Lee, M.K.; Oubaidin, M.; Yadav, S. Blockchain technology and federated machine learning for collaborative initiatives in orthodontics and craniofacial health. Orthod. Craniofacial Res. 2023.
- Norgeot, B.; Quer, G.; Beaulieu-Jones, B.K.; Torkamani, A.; Dias, R.; Gianfrancesco, M.; Arnaout, R.; Kohane, I.S.; Saria, S.; Topol, E.; et al. Minimum information about clinical artificial intelligence modeling: The MI-CLAIM checklist. Nat. Med. 2020, 26, 1320–1324.
- Singh, A.; Sengupta, S.; Lakshminarayanan, V. Explainable deep learning models in medical image analysis. J. Imaging 2020, 6, 52.
- Naz, Z.; Khan, M.U.G.; Saba, T.; Rehman, A.; Nobanee, H.; Bahaj, S.A. An Explainable AI-Enabled Framework for Interpreting Pulmonary Diseases from Chest Radiographs. Cancers 2023, 15, 314.
- Selvaraju, R.R.; Das, A.; Vedantam, R.; Cogswell, M.; Parikh, D.; Batra, D. Grad-CAM: Why did you say that? arXiv 2016, arXiv:1611.07450.
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017.
- Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Proceedings of the Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Proceedings, Part I.
- Bao, C.; Pu, Y.; Zhang, Y. Fractional-Order Deep Backpropagation Neural Network. Comput. Intell. Neurosci. 2018, 2018, 7361628.
- Wang, K.; Yang, B.; Li, Q.; Liu, S. Systematic Evaluation of Genomic Prediction Algorithms for Genomic Prediction and Breeding of Aquatic Animals. Genes 2022, 13, 2247.
- Xi, J.; Wang, M.; Li, A. Discovering mutated driver genes through a robust and sparse co-regularized matrix factorization framework with prior information from mRNA expression patterns and interaction network. BMC Bioinform. 2018, 19, 214.
- Leite, A.F.; Gerven, A.V.; Willems, H.; Beznik, T.; Lahoud, P.; Gaêta-Araujo, H.; Vranckx, M.; Jacobs, R. Artificial intelligence-driven novel tool for tooth detection and segmentation on panoramic radiographs. Clin. Oral Investig. 2021, 25, 2257–2267.
- Kittichai, V.; Kaewthamasorn, M.; Thanee, S.; Jomtarak, R.; Klanboot, K.; Naing, K.M.; Tongloy, T.; Chuwongin, S.; Boonsang, S. Classification for avian malaria parasite Plasmodium gallinaceum blood stages by using deep convolutional neural networks. Sci. Rep. 2021, 11, 16919.
- Borzabadi-Farahani, A. An insight into four orthodontic treatment need indices. Prog. Orthod. 2011, 12, 132–142.
- Borzabadi-Farahani, A.; Eslamipour, F.; Shahmoradi, M. Functional needs of subjects with dentofacial deformities: A study using the index of orthognathic functional treatment need (IOFTN). J. Plast. Reconstr. Aesthet. Surg. 2016, 69, 796–801.
Author (Year) | Data Type | Dataset Size (Training/Test) | No. of Landmarks/Measurements | Algorithm | Performance |
---|---|---|---|---|---|
Payer et al. (2019) [28] | Lateral cephalograms | 150/250 | 19/0 | CNN | Percentage of landmarks with errors exceeding radii of 2, 2.5, 3, and 4 mm: 26.67%, 21.24%, 16.76%, and 10.25%, respectively. |
Nishimoto et al. (2019) [29] | Lateral cephalograms | 153/66 | 10/12 | CNN | Average prediction error: 17.02 pixels. Median prediction error: 16.22 pixels. |
Zhong et al. (2019) [30] | Lateral cephalograms | 150/100 (with an additional 150 images as the validation set) | 19/0 | U-Net | Test 1: MRE: 1.12 ± 0.88 mm. SDR within 2, 2.5, 3, and 4 mm: 86.91%, 91.82%, 94.88%, and 97.90%, respectively. Test 2: MRE: 1.42 ± 0.84 mm. SDR within 2, 2.5, 3, and 4 mm: 76.00%, 82.90%, 88.74%, and 94.32%, respectively. |
Park et al. (2019) [31] | Lateral cephalograms | 1028/283 | 80/0 | YOLOv3, SSD | YOLOv3 demonstrated overall superiority over SSD in terms of accuracy and computational performance. For YOLOv3, SDR within 2, 2.5, 3, and 4 mm: 80.40%, 87.4%, 92.00%, and 96.2%, respectively. |
Moon et al. (2020) [32] | Lateral cephalograms | Training: 50, 100, 200, 400, 800, 1200, 1600, or 2000. Test: 200. | 19, 40, 80 | CNN (YOLOv3) | The accuracy of the AI was positively correlated with the amount of training data and negatively correlated with the number of detection targets. |
Hwang et al. (2020) [33] | Lateral cephalograms | 1028/283 | 80/0 | CNN (YOLOv3) | Mean detection error: 1.46 ± 2.97 mm. |
Oh et al. (2020) [34] | Lateral cephalograms | 150/100 (with an additional 150 images as the validation set) | 19/8 | CNN (DACFL) | MRE: 14.55 ± 8.22 pixels. SDR within 2, 2.5, 3, and 4 mm: 75.9%, 83.4%, 89.3%, and 94.7%, respectively. Classification accuracy: 83.94%. |
Kim et al. (2020) [35] | Lateral cephalograms | 1675/400 | 23/8 | Stacked hourglass deep learning model. | Point-to-point error: 1.37 ± 1.79 mm. SCR: 88.43%. |
Kunz et al. (2020) [36] | Lateral cephalograms | 1792/50 | 18/12 | CNN | The CNN models showed almost no statistically significant differences from the human gold standard. |
Alqahtani (2020) [37] | Lateral cephalograms | -/30 | 16/16 | Commercially available web-based platform (CephX, https://www.orca-ai.com/, accessed on 23 August 2023) | The results obtained from CephX and manual landmarking did not exhibit clinically significant differences. |
Lee et al. (2020) [38] | Lateral cephalograms | 150/250 | 19/8 | Bayesian CNN | Mean landmark error: 1.53 ± 1.74 mm. SDR within 2, 3, and 4 mm: 82.11%, 92.28%, and 95.95%, respectively. Classification accuracy: 72.69~84.74%. |
Yu et al. (2020) [39] | Lateral cephalograms | A total of 5890 | Four skeletal classification indicators. | Multimodal CNN | Sensitivity, specificity, and accuracy for vertical and sagittal skeletal classification: >90%. |
Li et al. (2020) [40] | Lateral cephalograms | 150/100 (with an additional 150 images as the validation set) | 19/0 | GCN | MRE: 1.43 mm. SDR within 2, 2.5, 3, and 4 mm: 76.57%, 83.68%, 88.21%, and 94.31%, respectively. |
Tanikawa et al. (2021) [41] | Lateral cephalograms | 1755/30 for each subgroup | 26/0 | CNN | Mean success rate: 85~91%. Mean identification error: 1.32~1.50 mm. |
Zeng et al. (2021) [42] | Lateral cephalograms | 150/100 (with an additional 150 images as the validation set) | 19/8 | CNN | MRE: 1.64 ± 0.91 mm. SDR within 2, 2.5, 3, and 4 mm: 70.58%, 79.53%, 86.05%, and 93.32%, respectively. SCR: 79.27%. |
Kim et al. (2021) [24] | Lateral cephalograms | 2610/100 (with an additional 440 images as the validation set) | 20/0 | Cascade CNN | Overall detection error: 1.36 ± 0.98 mm. |
Hwang et al. (2021) [43] | Lateral cephalograms | 1983/200 | 19/8 | CNN (YOLOv3) | SDR within 2, 2.5, 3, and 4 mm: 75.45%, 83.66%, 88.92%, and 94.24%, respectively. SCR: 81.53%. |
Bulatova et al. (2021) [44] | Lateral cephalograms | -/110 | 16/0 | CNN (YOLOv3) (Ceppro software) | A total of 12 of the 16 landmarks showed no statistically significant difference in absolute error between AI and manual landmarking. |
Jeon et al. (2021) [45] | Lateral cephalograms | -/35 | 16/26 | CNN | None of the measurements showed statistically significant differences except the saddle angle and the linear measurements of the maxillary incisor to the NA line and the mandibular incisor to the NB line. |
Hong et al. (2022) [46] | Lateral cephalograms | 3004/184 | 20/ | Cascade CNN | Total mean error: 1.17 mm. Accuracy: 74.2%. |
Le et al. (2022) [47] | Lateral cephalograms | 1193/100 | 41/8 | CNN (DACFL) | MRE: 1.87 ± 2.04 mm. SDR within 2, 2.5, 3, and 4 mm: 73.32%, 80.39%, 85.61%, and 91.68%, respectively. Average SCR: 83.75%. |
Mahto et al. (2022) [48] | Lateral cephalograms | -/30 | 18/12 | Commercially available web-based platform (WebCeph, https://webceph.com, accessed on 23 August 2023) | Intraclass correlation coefficients: >0.9 for 7 parameters (excellent agreement); 0.75~0.9 for 5 parameters (good agreement). |
Uğurlu et al. (2022) [49] | Lateral cephalograms | 1360/180 (with an additional 140 images as the validation set) | 21/0 | CNN (FARNet) | MRE: 3.4 ± 1.57 mm. SDR within 2, 2.5, 3, and 4 mm: 76.2%, 83.5%, 88.2%, and 93.4%, respectively. |
Yao et al. (2022) [50] | Lateral cephalograms | 312/100 (with an additional 100 images as the validation set) | 37/0 | CNN | MRE: 1.038 ± 0.893 mm. SDR within 1, 1.5, 2, 2.5, 3, 3.5, and 4 mm: 54.05%, 91.89%, 97.30%, 100%, 100%, 100%, and 100%, respectively. |
Lu et al. (2022) [51] | Lateral cephalograms | 150/250 | 19/0 | GCN | MRE: 1.19 mm. SDR within 2, 2.5, 3, and 4 mm: 83.20%, 88.93%, 92.88%, and 97.07%, respectively. |
Tsolakis et al. (2022) [52] | Lateral cephalograms | -/100 | 16/18 | CNN (commercially available software: CS imaging V8). | Differences between the AI software (CS imaging V8) and manual landmarking were not clinically significant. |
Duran et al. (2023) [53] | Lateral cephalograms | -/50 | 32/18 | Commercially available web-based platforms (OrthoDx, https://orthodx.phimentum.com; WebCeph, https://webceph.com, accessed on 23 August 2023) | Consistency between the AI software and manual landmarking was statistically significant and good for angular measurements but weak for linear measurements and soft-tissue parameters. |
Ye et al. (2023) [54] | Lateral cephalograms | -/43 | 32/0 | Commercially available software (MyOrthoX, Angelalign, and Digident) | MRE: MyOrthoX: 0.97 ± 0.51 mm. Angelalign: 0.80 ± 0.26 mm. Digident: 1.11 ± 0.48 mm. SDR (%) (within 1/1.5/2 mm): MyOrthoX: 67.02 ± 10.23/82.80 ± 7.36/89.99 ± 5.17. Angelalign: 78.08 ± 14.23/89.29 ± 14.02/93.09 ± 13.64. Digident: 59.13 ± 10.36/78.72 ± 5.97/87.53 ± 4.84. |
Ueda et al. (2023) [55] | Lateral cephalometric data | A total of 220 | 0/8 | RF | Overall accuracy: 0.823 ± 0.060. |
Bao et al. (2023) [56] | Reconstructed lateral cephalograms from CBCT | -/85 | 19/23 | Commercially available software (Planmeca Romexis 6.2) | For landmarks: MRE: 2.07 ± 1.35 mm. SDR within 1, 2, 2.5, 3, and 4 mm: 18.82%, 58.58%, 71.70%, 82.04%, and 91.39%, respectively. For measurements: The rates of consistency within the 95% limits of agreement: 91.76~98.82%. |
Kim et al. (2021) [57] | Reconstructed posteroanterior cephalograms from CBCT | 345/85 | 23/0 | Multi-stage CNN | MRE: 2.23 ± 2.02 mm. SDR within 2 mm: 60.88%. |
Takeda et al. (2021) [58] | Posteroanterior cephalograms | 320/80 | 4/1 | CNN, RF | The CNN showed a higher coefficient of determination and a lower mean absolute error than the RF for the distance from the vertical reference line to menton. The CNN with a stochastic gradient descent optimizer had the best performance. |
Lee et al. (2019) [59] | CBCT | 20/7 | 7/0 | Deep learning | Average point-to-point error: 1.5 mm. |
Torosdagli et al. (2019) [60] | CBCT | A total of 50 | 9/0 | Deep geodesic learning | Errors in the pixel space: <3 pixels for all landmarks. |
Yun et al. (2020) [61] | CBCT | 230/25 | 93/0 | CNN | Average point-to-point error: 3.63 mm. |
Kang et al. (2021) [62] | CT | 20/8 | 16/0 | Multi-stage DRL | Mean detection error: 1.96 ± 0.78 mm. SDR within 2, 2.5, 3, and 4 mm: 58.99%, 75.39%, 86.52%, and 95.70%, respectively. |
Ghowsi et al. (2022) [63] | CBCT | -/100 | 53/0 | Commercially available software (Stratovan Corporation) | Mean absolute error: 1.57 mm. Mean error distance: 3.19 ± 2.6 mm. SDR within 2, 2.5, 3, and 4 mm: 35%, 48%, 59%, and 75%, respectively. |
Dot et al. (2022) [64] | CT | 128/38 (additional 32 images as validation set). | 33/15 | SCN | For landmarks: MRE: 1.0 ± 1.3 mm. SDR within 2, 2.5, and 3 mm: 90.4%, 93.6%, and 95.4%, respectively. For measurements: Mean errors: −0.3 ± 1.3° (angular), −0.1 ± 0.7 mm (linear). |
Blum et al. (2023) [65] | CBCT | 931/114 | 35/0 | CNN | Mean error: 2.73 mm. |
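The MRE and SDR figures reported throughout the table above are simple geometric quantities: the radial error is the Euclidean distance between a predicted and a ground-truth landmark, MRE is its mean, and SDR at a given threshold is the fraction of predictions falling within that distance. As a minimal illustrative sketch (not code from any cited study; the array shapes, landmark counts, and millimetre calibration are assumptions), both metrics can be computed as follows:

```python
import numpy as np

def mre_and_sdr(pred, truth, thresholds=(2.0, 2.5, 3.0, 4.0)):
    """Mean radial error (MRE) and successful detection rate (SDR).

    pred, truth: arrays of shape (n_images, n_landmarks, 2), in mm.
    thresholds: radial-error cutoffs (mm) at which SDR is reported.
    """
    # Radial error = Euclidean distance between predicted and true landmarks.
    errors = np.linalg.norm(pred - truth, axis=-1)  # (n_images, n_landmarks)
    # SDR at t = fraction of landmark predictions with radial error <= t.
    sdr = {t: float((errors <= t).mean()) for t in thresholds}
    return errors.mean(), errors.std(), sdr

# Illustrative usage with synthetic coordinates (not data from any study):
rng = np.random.default_rng(0)
truth = rng.uniform(0, 200, size=(10, 19, 2))           # 10 images, 19 landmarks
pred = truth + rng.normal(scale=1.5, size=truth.shape)  # simulated model noise
mre, sd, sdr = mre_and_sdr(pred, truth)
print(f"MRE: {mre:.2f} ± {sd:.2f} mm")
for t, rate in sorted(sdr.items()):
    print(f"SDR within {t} mm: {rate:.1%}")
```

Because SDR counts errors below a cutoff, it is non-decreasing in the threshold, which is why every SDR series above rises monotonically from 2 mm to 4 mm.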
Author (Year) | Data Type | Dataset Size (Training/Test) | Algorithm | Performance |
---|---|---|---|---|
Kök et al. (2019) [99] | Lateral cephalograms | 240/60 | k-NN, NB, DT, ANN, SVM, RF, LR | Mean rank of accuracy: k-NN: 4.67, NB: 4.50, DT: 3.67, ANN: 2.17, SVM: 2.50, RF: 4.33, LR: 5.83. |
Makaremi et al. (2019) [100] | Lateral cephalograms | Training: 360/600/900/1870; evaluation: 300; testing: 300 | CNN | Performance varied with the number of training images and the pre-processing method. |
Amasya et al. (2020) [101] | Lateral cephalograms | 498/149 | LR, SVM, RF, ANN, DT | Accuracy: LR: 78.69%, SVM: 81.08%, RF: 82.38%, ANN: 86.93%, DT: 85.89%. |
Amasya et al. (2020) [102] | Lateral cephalograms | -/72 | ANN | Average of 58.3% agreement with four human observers. |
Kök et al. (2021) [91] | Lateral cephalograms | A total of 419 | Total of 24 different ANN models | The highest accuracy was 0.9427. |
Seo et al. (2021) [103] | Lateral cephalograms | A total of 600 | ResNet-18, MobileNet-v2, ResNet-50, ResNet-101, Inception-v3, Inception-ResNet-v2 | Accuracy/Precision/Recall/F1 score: ResNet-18: 0.927 ± 0.025/0.808 ± 0.094/0.808 ± 0.065/0.807 ± 0.074. MobileNet-v2: 0.912 ± 0.022/0.775 ± 0.111/0.773 ± 0.040/0.772 ± 0.070. ResNet-50: 0.927 ± 0.025/0.807 ± 0.096/0.808 ± 0.068/0.806 ± 0.075. ResNet-101: 0.934 ± 0.020/0.823 ± 0.113/0.837 ± 0.096/0.822 ± 0.054. Inception-v3: 0.933 ± 0.027/0.822 ± 0.119/0.833 ± 0.100/0.821 ± 0.082. Inception-ResNet-v2: 0.941 ± 0.018/0.840 ± 0.064/0.843 ± 0.061/0.840 ± 0.051. |
Zhou et al. (2021) [104] | Lateral cephalograms | 980/100 | CNN | Mean labeling error: 0.36 ± 0.09 mm. Accuracy: 71%. |
Kim et al. (2021) [105] | Lateral cephalograms | 480/120 | CNN | The three-step model achieved the highest accuracy (62.5%). |
Rahimi et al. (2022) [106] | Lateral cephalograms | 692/99 (with an additional 99 images as the validation set) | ResNet-18, ResNet-50, ResNet-101, ResNet-152, VGG19, DenseNet, ResNeXt-50, ResNeXt-101, MobileNetV2, InceptionV3 | ResNeXt-101 showed the best test accuracy: six-class: 61.62%; three-class: 82.83%. |
Radwan et al. (2023) [107] | Lateral cephalograms | 1201/150 (with an additional 150 images as the validation set) | U-Net, Alex-Net | Segmentation network: global accuracy: 0.99; average Dice score: 0.93. Classification network: accuracy: 0.802. Sensitivity (pre-pubertal/pubertal/post-pubertal): 0.78/0.45/0.98. Specificity (pre-pubertal/pubertal/post-pubertal): 0.94/0.94/0.75. F1 score (pre-pubertal/pubertal/post-pubertal): 0.76/0.57/0.90. |
Akay et al. (2023) [98] | Lateral cephalograms | 352/141 (with an additional 94 images as the validation set) | CNN | Classification accuracy: 58.66%. Precision (stage 1/2/3/4/5/6): 0.82/0.47/0.64/0.52/0.55/0.52. Recall (stage 1/2/3/4/5/6): 0.70/0.74/0.58/0.54/0.37/0.60. F1 score (stage 1/2/3/4/5/6): 0.76/0.57/0.61/0.53/0.44/0.56. |
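Several of the skeletal maturation studies above report overall accuracy alongside per-stage precision, recall, and F1 scores. The sketch below shows how such multi-class metrics are typically derived (the stage labels are invented for illustration, and the use of scikit-learn is an assumption, not taken from any cited study):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Invented cervical-vertebral-maturation (CVM) stage labels, 1-6 (illustration only).
y_true = np.array([1, 2, 2, 3, 4, 4, 5, 6, 6, 3, 1, 5])
y_pred = np.array([1, 2, 3, 3, 4, 5, 5, 6, 5, 3, 1, 5])

print(f"Accuracy: {accuracy_score(y_true, y_pred):.3f}")
# Per-stage precision/recall/F1, mirroring the stage-by-stage reporting above.
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=[1, 2, 3, 4, 5, 6], zero_division=0
)
for stage, (p, r, f) in enumerate(zip(precision, recall, f1), start=1):
    print(f"Stage {stage}: precision {p:.2f}, recall {r:.2f}, F1 {f:.2f}")
```

Per-stage breakdowns matter here because adjacent stages are far harder to separate than the extremes, as the uneven stage-wise recalls in the table suggest.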
Author (Year) | Data Type | Dataset Size (Training/Test) | Algorithm | Purpose | Performance |
---|---|---|---|---|---|
Shen et al. (2020) [109] | Lateral cephalograms | 488/116 (with an additional 64 images as the validation set) | CNN | Adenoid hypertrophy detection | Classification accuracy: 95.6%. Average AN ratio error: 0.026. Macro F1 score: 0.957. |
Zhao et al. (2021) [110] | Lateral cephalograms | 581/160 | CNN | Adenoid hypertrophy detection | Accuracy: 0.919. Sensitivity: 0.906. Specificity: 0.938. Area under the ROC curve: 0.987. |
Liu et al. (2021) [111] | Lateral cephalograms | 923/100 | VGG-Lite | Adenoid hypertrophy detection | Sensitivity: 0.898. Specificity: 0.882. Positive predictive value: 0.880. Negative predictive value: 0.900. F1 score: 0.889. |
Sin et al. (2021) [112] | CBCT | 214/46 (with an additional 46 images as the validation set) | CNN | Pharyngeal airway segmentation | Dice ratio: 0.919. Weighted IoU: 0.993. |
Leonardi et al. (2021) [113] | CBCT | 20/20 | CNN | Sinonasal cavity and pharyngeal airway segmentation | Mean matching percentage (tolerance 0.5 mm/1.0 mm): 85.35 ± 2.59/93.44 ± 2.54. |
Shujaat et al. (2021) [114] | CBCT | 48/25 (with an additional 30 images as the validation set) | 3D U-Net | Pharyngeal airway segmentation | Accuracy: 100%. Dice score: 0.97 ± 0.02. IoU: 0.93 ± 0.03. |
Jeong et al. (2023) [115] | Lateral cephalograms | 1099/120 | CNN | Upper-airway obstruction evaluation | Sensitivity: 0.86. Specificity: 0.89. Positive predictive value: 0.90. Negative predictive value: 0.85. F1 score: 0.88. |
Dong et al. (2023) [116] | CBCT | A total of 87 | HMSAU-Net and 3D-ResNet | Upper-airway segmentation and adenoid hypertrophy detection | Segmentation: Dice value: 0.96. Diagnosis: accuracy: 0.912; sensitivity: 0.976; specificity: 0.867; positive predictive value: 0.837; negative predictive value: 0.981; F1 score: 0.901. |
Jin et al. (2023) [117] | CBCT | A total of 50 | Transformer and U-Net | Nasal and pharyngeal airway segmentation | Precision: 85.88~94.25%. Recall: 93.74~98.44%. Dice similarity coefficient: 90.95~96.29%. IoU: 83.68~92.85%. |
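The airway segmentation studies above rely on volumetric overlap metrics. As a hedged illustration (the binary masks and volume shape below are assumptions, not data from the cited papers), the Dice similarity coefficient and IoU can be computed as:

```python
import numpy as np

def dice_and_iou(pred_mask, true_mask):
    """Dice similarity coefficient and intersection-over-union (IoU)
    for binary segmentation masks of any shape, e.g. a 3D CBCT volume."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    dice = 2.0 * intersection / (pred.sum() + true.sum())
    iou = intersection / np.logical_or(pred, true).sum()
    return float(dice), float(iou)

# Synthetic 3D masks standing in for an airway segmentation (illustration only):
rng = np.random.default_rng(1)
true_mask = rng.random((32, 32, 32)) > 0.7
pred_mask = np.logical_xor(true_mask, rng.random((32, 32, 32)) > 0.97)  # mostly correct
dice, iou = dice_and_iou(pred_mask, true_mask)
print(f"Dice: {dice:.3f}, IoU: {iou:.3f}")
```

For a single foreground class the two metrics are monotonically related (Dice = 2·IoU/(1 + IoU)), so method rankings usually agree between them; a weighted IoU, as in Sin et al. [112], can exceed Dice because the weighting folds in the large, easily classified background.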
Author (Year) | Data Type | Dataset Size (Training/Test) | Algorithm | Purpose | Performance |
---|---|---|---|---|---|
Xie et al. (2010) [121] | Cephalometric variables, cast measurements. | 180/20 | ANN | To predict tooth extraction diagnosis. | Accuracy: 80%. |
Jung et al. (2016) [122] | Cephalometric variables, dental variables, profile variables, and chief complaint of protrusion. | 64/60 (with an additional 32 samples as the validation set) | ANN | To predict tooth extraction diagnosis and extraction patterns. | Success rate: tooth extraction diagnosis: 93%; extraction patterns: 84%. |
Li et al. (2019) [123] | Demographic data, cephalometric data, dental data, and soft tissue data. | A total of 302 samples | MLP (ANN) | To predict tooth extraction diagnosis, extraction patterns and anchorage patterns. | Accuracy: For extraction diagnosis: 94%. For extraction patterns: 84.2%. For anchorage patterns: 92.8%. |
Suhail et al. (2020) [124] | Diagnostic features identified from photographs, models, and X-rays. | A total of 287 samples | ANN, LR, RF | To predict tooth extraction diagnosis and extraction patterns. | For extraction diagnosis: LR outperformed the ANN. For extraction patterns: RF outperformed the ANN. |
Etemad et al. (2021) [125] | Demographic data, cephalometric data, dental data, and soft tissue data. | A total of 838 samples | RF, MLP (ANN) | To predict tooth extraction diagnosis. | Accuracy of RF with 22/117/all inputs: 0.75/0.76/0.75. Accuracy of MLP with 22/117/all inputs: 0.79/0.75/0.79. |
Shojaei et al. (2022) [126] | Medical records, extraoral and intraoral photos, dental model records, and radiographic images. | A total of 126 samples | LR, SVM, DT, RF, Gaussian NB, KNN classifier, ANN | To predict tooth extraction diagnosis, extraction patterns, and anchorage patterns. | Accuracy for extraction decision: ANN: 93%, LR: 86%, SVM: 83%, DT: 76%, RF: 83%, Gaussian NB: 72%, KNN classifier: 72%. Accuracy for extraction pattern: ANN: 89%, RF: 40%. Accuracy for extraction and anchorage pattern: ANN: 81%, RF: 23%. |
Real et al. (2022) [127] | Sex, model variables, cephalometric variables, outcome variables. | -/214 | Commercially available software (Auto-WEKA) | To predict tooth extraction diagnosis. | Accuracy: 93.9% (model and cephalometric data), 87.4% (model data only), 72.7% (cephalometric data only). |
Leavitt et al. (2023) [128] | Cephalometric variables, dental variables, demographic characteristics. | 256/110 | RF, LR, SVM | To predict tooth extraction patterns. | Overall accuracy: RF: 54.55%, SVM: 52.73%, LR: 49.09%. |
Ryu et al. (2023) [78] | Intraoral photographs, extraction decision. | 2736/400 | ResNet (ResNet50, ResNet101), VGGNet (VGG16 and VGG19) | To predict tooth extraction diagnosis. | Accuracy: maxilla: VGG19 (0.922) > ResNet101 (0.915) > VGG16 (0.910) > ResNet50 (0.909); mandible: VGG19 (0.898) = VGG16 (0.898) > ResNet50 (0.895) > ResNet101 (0.890). |
Prasad et al. (2022) [129] | Clinical data, cephalometric data, cast and photographic data. | A total of 700 samples | RF, XGB, LR, DT, K-Neighbors, Linear SVM, NB | To predict the skeletal jaw base, extraction diagnosis for a Class I jaw base, and functional/camouflage/surgical strategies for Class II/III jaw bases. | Different algorithms showed different accuracies in different layers; RF performed best in 3 of the 4 layers. |
Knoops et al. (2019) [130] | 3D face scans | A total of 4261 | SVM for classification; LR, RR, LARS, and LASSO for regression | To predict the surgery/non-surgery decision and surgical outcomes. | For the surgery/non-surgery decision: accuracy: 95.4%; sensitivity: 95.5%; specificity: 95.2%. For surgical outcome simulation: average error: LARS and RR (1.1 ± 0.3 mm), LASSO (1.3 ± 0.3 mm), LR (3.0 ± 1.2 mm). |
Choi et al. (2019) [131] | Lateral cephalometric variables, dental variables, profile variables, chief complaint of protrusion. | 136/112 (with an additional 68 samples as the validation set) | ANN | To predict the surgery/non-surgery decision and extraction/non-extraction for surgical treatment. | Accuracy for the whole dataset: diagnosis of surgery/non-surgery: 96%; diagnosis of extraction/non-extraction for Class II surgery: 97%; diagnosis of extraction/non-extraction for Class III surgery: 88%; diagnosis of extraction/non-extraction for surgery: 91%. |
Lee et al. (2020) [132] | Lateral cephalograms. | 220/40 (with an additional 73 samples as the validation set) | CNN (modified AlexNet, MobileNet, and ResNet50) | To predict the need for orthognathic surgery. | Average accuracy for the whole dataset: modified AlexNet: 96.4%; MobileNet: 95.4%; ResNet50: 95.6%. |
Jeong et al. (2020) [133] | Facial photos (front and right). | A total of 822 samples. Group 1: 207/204. Group 2: 205/206 | CNN | To predict the need for orthognathic surgery. | Accuracy: 0.893. Precision: 0.912. Recall: 0.867. F1 score: 0.889. |
Shin et al. (2021) [134] | Lateral cephalograms and posteroanterior cephalograms. | A total of 840 samples. Group 1: 273/304 (with an additional 30 samples as the validation set). Group 2: 98/109 (with an additional 11 samples as the validation set) | CNN | To predict the diagnosis of orthognathic surgery. | Accuracy: 0.954. Sensitivity: 0.844. Specificity: 0.993. |
Kim et al. (2021) [135] | Lateral cephalograms. | 810/150 | CNN (ResNet-18, 34, 50, 101) | To predict the diagnosis of orthognathic surgery. | Accuracy for test dataset: ResNet-18/34/50/101: 93.80%/93.60%/91.13%/91.33%. |
Lee et al. (2022) [136] | Cephalometric measurements, demographic characteristics, dental analysis, and chief complaint. | 136/60 | RF, LR | To predict the diagnosis of orthognathic surgery. | Accuracy (RF/LR): 90%/78%. Sensitivity (RF/LR): 84%/89%. Specificity (RF/LR): 93%/73%. |
Woo et al. (2023) [137] | Intraoral scan data | -/30 | Three commercially available software packages (Autolign, Outcome Simulator Pro, Ortho Simulation) | To evaluate the accuracy of automated digital tooth setups. | Mean errors across the three software packages: linear movement: 0.39~1.40 mm; angular movement: 3.25~7.80°. |
Park et al. (2021) [138] | Lateral cephalograms | A total of 284 cases | CNN (U-Net) | To predict the cephalometric changes of Class II patients after using modified C-palatal plates. | Total mean error: 1.79 ± 1.77 mm. |
Tanikawa et al. (2021) [139] | 3D facial images | A total of 72 cases in surgery group and 65 cases in extraction group | Deep learning | To predict facial morphology change after orthodontic or orthognathic surgical treatment. | Average system errors: Surgery group: 0.94 ± 0.43 mm; orthodontic group: 0.69 ± 0.28 mm. Success rates (<1 mm): Surgery group: 54%; orthodontic group: 98%. Success rates (<2 mm): Surgery group: 100%; orthodontic group: 100%. |
Park et al. (2022) [140] | CBCT | 268/44 | cGAN | To predict post-orthodontic facial changes. | Mean prediction error: 1.2 ± 1.01 mm. Accuracy within 2 mm: 80.8%. |
Xu et al. (2022) [141] | A total of 17 clinical features | A total of 196 cases | ANN | To predict patient experience of Invisalign treatment. | Predictive success rate: pain: 87.7%; anxiety: 93.4%; quality of life: 92.4%. |
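Most of the decision-support studies above train classical classifiers (RF, LR, SVM, ANN) on tabular cephalometric and dental features. The following sketch shows the typical pipeline on synthetic stand-in data (the feature matrix, labels, and hyperparameters are illustrative assumptions, not those of any cited study; scikit-learn is assumed):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for tabular features (e.g. ANB angle, overjet, crowding);
# the cited studies use real measured variables instead.
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 10))  # 300 cases, 10 features
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0  # extraction yes/no

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"Test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```

The wide accuracy range in the table, from roughly 50% for multi-class extraction patterns up to the mid-90s for binary surgery/non-surgery decisions, reflects task difficulty and class balance at least as much as algorithm choice, which is why several studies compare multiple classifiers on the same features.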