Systematic Review

A Comparative Study of Deep Learning and Manual Methods for Identifying Anatomical Landmarks through Cephalometry and Cone-Beam Computed Tomography: A Systematic Review and Meta-Analysis

1 Orthodontics, Graduate School of Clinical Dental Science, The Catholic University of Korea, Seoul 06591, Republic of Korea
2 Department of Orthodontics, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
3 Medical Library, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
4 Department of Oral and Maxillofacial Surgery, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
5 Department of Periodontics, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
6 Dental Implantology, Graduate School of Clinical Dental Science, The Catholic University of Korea, Seoul 06591, Republic of Korea
7 Department of Medicine, Graduate School, The Catholic University of Korea, Seoul 06591, Republic of Korea
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2024, 14(16), 7342; https://doi.org/10.3390/app14167342
Submission received: 18 July 2024 / Revised: 8 August 2024 / Accepted: 16 August 2024 / Published: 20 August 2024

Abstract

Background: Researchers have noted that the advent of artificial intelligence (AI) heralds a promising era, with the potential to significantly enhance diagnostic and predictive abilities in clinical settings. The aim of this meta-analysis is to evaluate the discrepancies in identifying anatomical landmarks between AI and manual approaches. Methods: A comprehensive search strategy was employed, incorporating controlled vocabulary (MeSH) and free-text terms. This search was conducted by two reviewers to identify published systematic reviews. Three major electronic databases, namely, Medline via PubMed, the Cochrane database, and Embase, were searched up to May 2024. Results: Initially, 369 articles were identified. After applying strict inclusion criteria, a total of ten studies were deemed eligible for inclusion in the meta-analysis. The results showed that the pooled mean difference in detecting anatomical landmarks between artificial intelligence and manual approaches was 0.35, with a 95% confidence interval (CI) ranging from −0.09 to 0.78, and the overall effect between the two groups was not statistically significant. Subgroup analysis of cephalometric radiographs found no significant differences between the two groups in detecting anatomical landmarks, and the cone-beam computed tomography (CBCT) subgroup likewise revealed no significant differences. Conclusions: The use of artificial intelligence is as effective as the manual approach in detecting anatomical landmarks, both in general and in specific contexts such as cephalometric radiographs and CBCT evaluations.

1. Introduction

The development of computer systems with the ability to perform tasks requiring human intelligence is referred to as AI [1]. The emergence of AI heralds a promising era and has the potential to greatly enhance diagnostic and predictive capabilities in clinical settings [2]. AI has made considerable progress in various aspects of dentistry, presenting promising developments for dental professionals and patients alike [3]. In dentistry, AI has been utilized to aid in the acquisition of digital data, including the tasks of scan cleaning, scan assistance, and the automation of the alignment process between the scanned body of an implant and the planning procedures for implants [4]. Deep learning models have proven to be highly effective in detecting temporomandibular joint arthropathies with a high degree of sensitivity and specificity [5].
Similarly, AI has made considerable advancements in orthodontics, and this area has generated significant interest among orthodontists [6]. AI can aid in diagnosing orthodontic issues by analyzing clinical photographs, radiographs, and three-dimensional scans [7]. AI can identify various irregularities, including those related to tooth alignment, jaw relation, and tooth structure [8]. The use of AI in this context expedites the diagnostic process and enhances the accuracy of the results [9]. This study aimed to carry out a meta-analysis to evaluate the disparities in the identification of anatomical landmarks between artificial intelligence and manual approaches.

2. Materials and Methods

2.1. Protocol and Eligibility Criteria

This systematic review adheres to the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [10]. The central question of this review is as follows: how accurate is automated landmarking using deep learning in comparison to manual tracing for cephalometric analysis? The participants are two- and three-dimensional images suitable for landmarking, the intervention is landmarking performed by a deep learning model, and the comparison is manual landmarking. The outcomes of interest are accuracy, precision, and reliability. Studies published in languages other than English were excluded from this review.

2.2. Information Sources and Search Strategy

A single reviewer with library affiliation (N.J.K.) executed an extensive search using a combination of controlled vocabulary (MeSH) and free-text terms, designed to locate published systematic reviews. Additionally, two reviewers (Y.J.L. and J.H.P.) performed comprehensive searches of three major electronic databases (Medline via PubMed, the Cochrane database, and Embase) up to 15 May 2024. The search results were transferred to EndNote reference management software (Version 21, Clarivate, Philadelphia, PA, USA) for deduplication, ensuring that the findings would not be compromised by duplicate or redundant entries. The search strategy was tailored to the specific criteria and nuances of each database to maximize the retrieval of relevant records. Further details of the initial search strategy can be found in Supplementary Material Table S1.

2.3. Study Selection and Data Extraction

The evaluation of the retrieved articles’ titles and abstracts for eligibility criteria was conducted blindly by two reviewers (Y.J.L. and J.H.P.). Any discrepancies were settled through discussions with a third author (S.H.H.). The full text of the remaining articles was then assessed independently and in duplicate by Y.J.L. and J.H.P. before final selection. Data were obtained from the selected studies and organized according to the PICOS question, including general information (author name and publication year), participant details (number of samples and landmarks measured, anatomic landmarks), intervention/comparison (deep learning vs. manual landmarking), and outcomes (mean radial error, successful detection rate).

2.4. Risk-of-Bias Assessment

Two independent reviewers, Y.J.L. and J.H.P., assessed the risk of individual bias in the eligible studies using the QUADAS-2 tool [11]. This tool consists of four domains: patient selection, index test, reference standard, and flow and timing. Each domain is evaluated for the risk of bias, and the first three domains are also assessed for applicability concerns. The domains can be classified as “high risk”, “uncertain risk”, or “low risk”. When the reviewers had divergent opinions, they resolved their differences through discussion. If no consensus was reached, a third author, S.H.H., was consulted to make a final decision.

2.5. Data Synthesis and Analysis

Meta-analysis was performed using R (version 3.5.0; R Project for Statistical Computing). The mean difference (MD) and the 95% confidence interval (CI) were utilized as summary statistics. In conducting the meta-analysis, a random-effects model was adopted, with a significance level of 0.05. To assess the variability among the studies, both the I2 statistic and the chi-squared test were performed.
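The analysis itself was run in R. Purely as an illustration of the computation described above, the following is a minimal Python sketch of DerSimonian-Laird random-effects pooling of mean differences; the study-level values in the example are hypothetical, not the data from the included articles:

```python
import math

def random_effects_md(effects, ses, z=1.96):
    """DerSimonian-Laird random-effects pooling of study mean differences.

    effects: per-study mean differences (e.g., AI minus manual localization error)
    ses: corresponding standard errors
    Returns (pooled MD, CI lower, CI upper, I2 in %, Cochran's Q).
    """
    k = len(effects)
    w = [1.0 / se ** 2 for se in ses]          # fixed-effect (inverse-variance) weights
    sw = sum(w)
    y_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q: the chi-squared heterogeneity statistic with k - 1 df
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, effects))
    # Method-of-moments estimate of the between-study variance tau^2
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi ** 2 for wi in w) / sw))
    # Random-effects weights incorporate tau^2
    w_re = [1.0 / (se ** 2 + tau2) for se in ses]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100.0 if q > 0 else 0.0
    return pooled, pooled - z * se_pooled, pooled + z * se_pooled, i2, q

# Hypothetical example: three studies with mean differences in mm
md, lo, hi, i2, q = random_effects_md([0.2, 0.5, 1.0], [0.1, 0.1, 0.1])
```

An I2 above roughly 75% is conventionally read as substantial heterogeneity, which is why a random-effects rather than a fixed-effect model is appropriate here.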

3. Results

3.1. Study Selection and Data Extraction

The literature search initially yielded 369 articles. Following the exclusion of 86 duplicates, the titles and abstracts of the remaining articles were assessed, resulting in the exclusion of 191 articles that did not meet the inclusion criteria. Thereafter, the full-text versions of the 92 remaining articles were evaluated against the inclusion and exclusion criteria. Of these, 82 articles were found to be ineligible, leaving 10 studies for inclusion in the analysis. The flowchart depicting the screening process is presented in Figure 1, while Supplementary Material Table S2 provides a list of the excluded articles along with the reasons for their exclusion. Table 1 offers an overview of the key characteristics of the included studies.

3.2. Risk of Bias Assessment

The summary of the risk of bias and the overall risk-of-bias score for each domain in the included articles are depicted in Figure 2 and Supplementary Material Table S3. Considering all four bias assessment domains, four studies were found to have a low concern of bias, four studies had unclear concerns regarding bias, and two studies showed a high risk of bias. Most studies presented a low risk of bias for the domains of patient selection (70%, 7/10), index test (90%, 9/10), and reference standard (80%, 8/10). However, for flow and timing, 20% of studies showed a high risk, and 30% of studies indicated an unclear risk of bias due to issues with the time interval. Regarding applicability, nearly all papers received a low-risk evaluation across all domains, with only one paper raising a high concern of applicability in the index test domain. The rationale for each question in the QUADAS-2 assessment is shown in Supplementary Material Table S4.

3.3. Meta-Analysis

A total of ten articles (Shahidi et al., 2014 [12]; Wang et al., 2018 [13]; Hwang et al., 2020 [14]; Muraev et al., 2020 [15]; Kim et al., 2021 [16]; Kim et al., 2021 [17]; Gil et al., 2022 [18]; Le et al., 2022 [19]; Blum et al., 2023 [20]; and Han et al., 2024 [21]) were examined to explore the differences in the identification of anatomical landmarks between deep learning and manual approaches. Taking into account the diverse designs of the studies, a random-effects model was used. The high I2 value (99%; p < 0.01) indicated substantial heterogeneity among the studies. To account for this heterogeneity, a subgroup analysis was performed, dividing the literature into two subgroups by imaging modality: cephalometric radiographs and cone-beam computed tomography. The results of the meta-analysis showed that the pooled mean difference for accuracy in detecting anatomical landmarks between artificial intelligence and manual approaches was 0.35 (95% CI, −0.09 to 0.78), and the overall effect between the groups was insignificant (p > 0.01) (Figure 3).
Seven articles (Han et al., 2024 [21]; Le et al., 2022 [19]; Gil et al., 2022 [18]; Kim et al., 2021 [16]; Muraev et al., 2020 [15]; Hwang et al., 2020 [14]; and Wang et al., 2018 [13]) investigated the accuracy of detecting anatomical landmarks using cephalometric radiographs. The studies revealed a high level of heterogeneity (I2 values of 99%; p < 0.01). The meta-analysis revealed a pooled mean difference of 0.09 (95% CI, −0.18 to 0.36; p > 0.01) between artificial intelligence and manual approaches in detecting anatomical landmarks (Figure 3).
Three articles (Blum et al., 2023 [20]; Kim et al., 2021 [17]; and Shahidi et al., 2014 [12]) assessed the disparities in identifying anatomical landmarks using cone-beam computed tomography. The high I2 value (99%; p < 0.01) indicated substantial heterogeneity among the studies. The meta-analysis showed a pooled mean difference of 0.95 (95% CI, −0.21 to 2.11; p > 0.01) between artificial intelligence and humans in detecting anatomical landmarks. The forest plot did not demonstrate the superiority of either artificial intelligence or humans (Figure 3).
Six articles (Wang et al., 2018 [13]; Kim et al., 2021 [16]; Kim et al., 2021 [17]; Gil et al., 2022 [18]; Le et al., 2022 [19]; and Han et al., 2024 [21]) were examined to assess the successful detection rate (SDR) of deep learning with a margin of error of 2 mm, considering that a margin of error of 2 mm is clinically acceptable (Kim et al., 2021 [16]). Given the diverse designs of the studies, a random-effects model was used. The high I2 value (99%; p < 0.01) indicated very high heterogeneity among the studies. The meta-analysis showed a pooled proportion of 0.77 (95% CI, 0.69 to 0.84). The forest plot demonstrates a relatively consistent SDR of deep learning above 70% with a margin of error of 2 mm, with the exception of one article (Kim et al., 2021 [17]) (Figure 4).
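The pooled SDR of 0.77 is a meta-analysis of proportions. A common approach, sketched below with invented event counts rather than the data from the six articles, pools on the logit scale with the same DerSimonian-Laird between-study variance estimate and back-transforms the result:

```python
import math

def pool_proportions(events, totals, z=1.96):
    """Random-effects pooling of proportions (e.g., SDR within 2 mm) on the
    logit scale, back-transformed to a proportion with its 95% CI.
    Assumes 0 < events < totals for every study (no continuity correction).
    """
    y, v = [], []
    for e, n in zip(events, totals):
        p = e / n
        y.append(math.log(p / (1.0 - p)))        # logit of the study proportion
        v.append(1.0 / (n * p * (1.0 - p)))      # delta-method variance of the logit
    k = len(y)
    w = [1.0 / vi for vi in v]
    sw = sum(w)
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sw
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi ** 2 for wi in w) / sw))
    w_re = [1.0 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))

    def inv_logit(t):
        return 1.0 / (1.0 + math.exp(-t))

    return inv_logit(mu), inv_logit(mu - z * se), inv_logit(mu + z * se)

# Hypothetical example: successful detections out of total landmarks per study
p_hat, p_lo, p_hi = pool_proportions([80, 70, 90], [100, 100, 100])
```

Pooling on the logit scale keeps the back-transformed confidence interval inside (0, 1), which pooling raw proportions does not guarantee.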

3.4. Sensitivity Meta-Analysis

A sensitivity analysis was performed by excluding each study in turn from the total of ten studies; only the exclusion of one study (Shahidi et al., 2014 [12]) was found to affect the aggregate outcomes of the meta-analysis (Figure 5). When comparing the comprehensive meta-analysis encompassing all studies to the meta-analyses performed after individually excluding each of the other studies, the mean difference, confidence intervals, and heterogeneity index remained largely consistent. Hence, a new meta-analysis was conducted excluding the Shahidi et al. (2014) study [12]. The test for the overall effect was insignificant (p > 0.05), and there were no significant differences between subgroups (p > 0.05) (Figure 6).

3.5. Publication Bias Analysis

The funnel plot revealed an asymmetric pattern, indicating potential publication bias (Figure 7). Most studies were clustered around the center, suggesting a concentration near the average effect size. Egger’s regression test yielded a t-value of 1.69 with 8 degrees of freedom and a corresponding p-value of 0.13, suggesting no substantial evidence of publication bias or small study effects in the meta-analysis. Results for the adjusted trim-and-fill analysis are shown in Supplementary Material Figure S1 and Supplementary Material Table S5.
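Egger's test fits an ordinary least-squares regression of the standardized effect (effect/SE) on precision (1/SE) and tests whether the intercept differs from zero; the reported t-value of 1.69 with 8 degrees of freedom corresponds to the ten included studies (df = k - 2). A minimal sketch, with hypothetical inputs rather than the review's data:

```python
import math

def eggers_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses the standardized effect (y / SE) on precision (1 / SE);
    an intercept far from zero suggests small-study effects.
    Returns (intercept, t statistic, degrees of freedom = k - 2).
    """
    k = len(effects)
    x = [1.0 / se for se in ses]                     # precision
    z = [y / se for y, se in zip(effects, ses)]      # standardized effect
    xbar, zbar = sum(x) / k, sum(z) / k
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxz = sum((xi - xbar) * (zi - zbar) for xi, zi in zip(x, z))
    slope = sxz / sxx
    intercept = zbar - slope * xbar
    # Residual variance and the standard error of the intercept
    resid = [zi - (intercept + slope * xi) for xi, zi in zip(x, z)]
    s2 = sum(r * r for r in resid) / (k - 2)
    se_int = math.sqrt(s2 * (1.0 / k + xbar ** 2 / sxx))
    return intercept, intercept / se_int, k - 2

# Hypothetical example with four studies
b0, t_stat, df = eggers_test([0.12, 0.35, 0.48, 0.05], [0.1, 0.2, 0.3, 0.15])
```

The two-sided p-value is then obtained from a t distribution with k - 2 degrees of freedom, as in the p = 0.13 reported above.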

4. Discussion

This systematic review and meta-analysis was conducted to compare the effectiveness of artificial intelligence and manual methods in identifying anatomical landmarks. The goal of this study was to examine the discrepancies in the identification of anatomical landmarks between artificial intelligence and manual approaches. The results of the study showed that artificial intelligence was just as effective as manual methods in detecting anatomical landmarks in both cephalometric radiographs and CBCT.
As manual landmarking is a labor-intensive task, automated detection of landmarks could be greatly beneficial, as it expedites access to cephalometric analysis [22]. Clinicians, particularly those who are not experts in the field, may significantly benefit from using AI for evaluating disorders [5]. Automated landmark identification can expedite the manual analysis process for dental professionals, thereby enabling more prompt diagnosis and treatment planning [14]. By leveraging AI technology in the diagnostic procedure, dental practitioners can optimize their workflow, resulting in a more efficient journey from diagnosis to treatment [23]. The performance of AI models is not solely determined by their ability to provide accurate or complete information, but also by how well they engage, assist, and satisfy users [24]. AI may identify anatomical points more consistently, which minimizes the variability that can arise from manual identification by practitioners with varying levels of expertise [25]. Presently, AI-based automatic diagnosis techniques primarily function as assistive tools, and the need for clinicians to judge or adjust AI results follows from this [26].
Cephalometric analysis is a quantitative diagnostic tool that is frequently utilized by orthodontists to assess skeletal and dentoalveolar relationships, morphometric characteristics, and growth patterns of their patients [22]. To address the limitations of two-dimensional radiographs, such as the superimposition of left and right cranial structures, unequal magnification of bilateral structures, and the potential for distortion of mid-facial structures, three-dimensional CBCT has been implemented to assess craniofacial structures with reduced distortion compared to conventional radiographic images [12]. CBCT is an advanced radiological method that generates detailed three-dimensional reconstructions and slice images [20]. As familiarity with CBCT images grows within the field, diagnostic techniques that optimize their potential are being developed, and over time, through a process of trial and error, the full capabilities of CBCT will be more fully understood and realized [27]. The customary method of evaluating the performance of an automated identification system has been to assess its ability to successfully detect skeletal landmarks with a variance of 2 mm, which has historically been considered a clinically acceptable range of error at AI performance contests [14]. According to another report, the average automated detection error was 1.36 ± 0.98 mm [16]. This study showed that the use of artificial intelligence was as accurate as the manual approach when it comes to detecting anatomical landmarks.
The establishment of the gold standard for anatomical landmarks in cephalometric radiographs or CBCT differed between the studies. In one study, the gold standard was defined as the mean value of the positions of the landmarks identified by two orthodontists with clinical experience [16]. In another study, the initial manual identification of cephalometric landmarks on CBCT-synthesized PA images was established as the standard of truth [17]. In a third, the initial annotation of the points was performed by students, and subsequently an orthodontist and a maxillofacial surgeon collaborated to review and rectify the positions of the points [15]. The mean inter-examiner difference of the landmarks identified by the orthodontists was 1.31 ± 1.13 mm [16]. It was also reported that AI landmark identification demonstrated improved consistency when contrasted with manual identification [17]. In a previous report, the experiment for landmark localization was performed by 20 dental students, as beginners, for human-AI collaboration [19]. In another study, a second-year orthodontic resident identified the anatomic landmarks [16].
The main limitations of this systematic review and meta-analysis are associated with the studies included for qualitative and quantitative assessment. The number of studies included in the review may be restricted due to the relatively recent emergence of AI and automated landmarking [22]. The included studies have different designs, particularly regarding the number and type of landmarks and algorithms, which can introduce variability in the results [28]. Specifically, Shahidi et al. (2014) used a regression analysis model, an early model introduced in the application of AI to medical image analysis [12], which is considered a source of significant heterogeneity relative to the studies published after the introduction of convolutional neural networks (CNNs). Differences in the quality of data, the sophistication of algorithms, the rigor of training methods, and practical application settings can lead to variations in the accuracy of AI-generated content [29,30]. Moreover, it was not possible to draw statistical conclusions for specific landmarks due to the variability in the number and type of landmarks examined across studies, as well as the lack of reporting on localization errors for each specific landmark [31]. Establishing optimal criteria for anatomical landmarks in cephalometric radiographs or CBCT may be necessary [18]. As AI and analysis techniques advance over time, they may produce different diagnostic performance [32], and the rapid progress in artificial intelligence means that certain findings may quickly become outdated as newer and more precise AI models are developed. The analysis in this study may also be limited by the number of eligible studies. Further research is necessary to determine the general applicability of AI-learned techniques in managing orthodontic patients and to assess the effectiveness of the learning process in different clinical settings when treating various patient populations [33].

5. Conclusions

The findings of the study indicate that the application of artificial intelligence was as effective as the manual approach in identifying anatomical landmarks, both in general and when using cephalometric radiographs and CBCT evaluations.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/app14167342/s1. Table S1: Search strategy of the online databases of PubMed. Table S2: Studies excluded after full-text readings and the reasons for exclusion. Table S3: Qualitative assessment on risk of bias and applicability concerns of the included studies. Table S4: Rationale for each question in the QUADAS-2 assessment. Table S5: Analyses for publication bias. Figure S1: Adjustment of the funnel plot after trim-and-fill analysis for publication bias analysis. References [34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114] are cited in the Supplementary Materials.

Author Contributions

Conceptualization, Y.L., J.-H.P., S.-H.H., N.J.K., W.-J.P. and J.-B.P.; formal analysis, Y.L., J.-H.P., S.-H.H., N.J.K., W.-J.P. and J.-B.P.; writing—original draft preparation, Y.L., J.-H.P., S.-H.H., N.J.K., W.-J.P. and J.-B.P.; and writing—review and editing, Y.L., J.-H.P., S.-H.H., N.J.K., W.-J.P. and J.-B.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article and Supplementary Material; further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kim, Y.H.; Kim, I.; Kim, Y.J.; Kim, M.; Cho, J.H.; Hong, M.; Kang, K.H.; Lim, S.H.; Kim, S.J.; Kim, N.; et al. The prediction of sagittal chin point relapse following two-jaw surgery using machine learning. Sci. Rep. 2023, 13, 17005. [Google Scholar] [CrossRef]
  2. Park, J.B.; Park, S.Y.; Park, J.C.; Kim, Y.G.; Ahn, H.T.; Shin, S.Y. Revolutionizing scholarly publishing by integrating artificial intelligence into editorial and peer review processes. J. Periodontal Implant. Sci. 2024, 54, 63–64. [Google Scholar] [CrossRef]
  3. Mahesh Batra, A.; Reche, A. A New Era of Dental Care: Harnessing Artificial Intelligence for Better Diagnosis and Treatment. Cureus 2023, 15, e49319. [Google Scholar] [CrossRef]
  4. Revilla-León, M.; Gómez-Polo, M.; Sailer, I.; Kois, J.C.; Rokhshad, R. An overview of artificial intelligence based applications for assisting digital data acquisition and implant planning procedures. J. Esthet. Restor. Dent. 2024. [Google Scholar] [CrossRef] [PubMed]
  5. Rokhshad, R.; Mohammad-Rahimi, H.; Sohrabniya, F.; Jafari, B.; Shobeiri, P.; Tsolakis, I.A.; Ourang, S.A.; Sultan, A.S.; Khawaja, S.N.; Bavarian, R.; et al. Deep learning for temporomandibular joint arthropathies: A systematic review and meta-analysis. J. Oral Rehabil. 2024, 51, 1632–1644. [Google Scholar] [CrossRef]
  6. Liu, J.; Zhang, C.; Shan, Z. Application of Artificial Intelligence in Orthodontics: Current State and Future Perspectives. Healthcare 2023, 11, 2760. [Google Scholar] [CrossRef] [PubMed]
  7. Kazimierczak, N.; Kazimierczak, W.; Serafin, Z.; Nowicki, P.; Nożewski, J.; Janiszewska-Olszowska, J. AI in Orthodontics: Revolutionizing Diagnostics and Treatment Planning-A Comprehensive Review. J. Clin. Med. 2024, 13, 344. [Google Scholar] [CrossRef] [PubMed]
  8. Zhu, J.; Chen, Z.; Zhao, J.; Yu, Y.; Li, X.; Shi, K.; Zhang, F.; Yu, F.; Shi, K.; Sun, Z.; et al. Artificial intelligence in the diagnosis of dental diseases on panoramic radiographs: A preliminary study. BMC Oral Health 2023, 23, 358. [Google Scholar] [CrossRef] [PubMed]
  9. Karalis, V.D. The Integration of Artificial Intelligence into Clinical Practice. Appl. Biosci. 2024, 3, 14–44. [Google Scholar] [CrossRef]
  10. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Syst. Rev. 2021, 10, 89. [Google Scholar] [CrossRef]
  11. Whiting, P.F.; Rutjes, A.W.; Westwood, M.E.; Mallett, S.; Deeks, J.J.; Reitsma, J.B.; Leeflang, M.M.; Sterne, J.A.; Bossuyt, P.M. QUADAS-2: A revised tool for the quality assessment of diagnostic accuracy studies. Ann. Intern. Med. 2011, 155, 529–536. [Google Scholar] [CrossRef]
  12. Shahidi, S.; Bahrampour, E.; Soltanimehr, E.; Zamani, A.; Oshagh, M.; Moattari, M.; Mehdizadeh, A. The accuracy of a designed software for automated localization of craniofacial landmarks on CBCT images. BMC Med. Imaging 2014, 14, 32. [Google Scholar] [CrossRef]
  13. Wang, S.; Li, H.; Li, J.; Zhang, Y.; Zou, B. Automatic Analysis of Lateral Cephalograms Based on Multiresolution Decision Tree Regression Voting. J. Healthc. Eng. 2018, 2018, 1797502. [Google Scholar] [CrossRef]
  14. Hwang, H.W.; Park, J.H.; Moon, J.H.; Yu, Y.; Kim, H.; Her, S.B.; Srinivasan, G.; Aljanabi, M.N.A.; Donatelli, R.E.; Lee, S.J. Automated identification of cephalometric landmarks: Part 2-Might it be better than human? Angle Orthod. 2020, 90, 69–76. [Google Scholar] [CrossRef]
  15. Muraev, A.A.; Tsai, P.; Kibardin, I.; Oborotistov, N.; Shirayeva, T.; Ivanov, S.; Ivanov, S.; Guseynov, N.; Aleshina, O.; Bosykh, Y.; et al. Frontal cephalometric landmarking: Humans vs artificial neural networks. Int. J. Comput. Dent. 2020, 23, 139–148. [Google Scholar] [PubMed]
  16. Kim, J.; Kim, I.; Kim, Y.J.; Kim, M.; Cho, J.H.; Hong, M.; Kang, K.H.; Lim, S.H.; Kim, S.J.; Kim, Y.H.; et al. Accuracy of automated identification of lateral cephalometric landmarks using cascade convolutional neural networks on lateral cephalograms from nationwide multi-centres. Orthod. Craniofac. Res. 2021, 24 (Suppl. S2), 59–67. [Google Scholar] [CrossRef]
  17. Kim, M.J.; Liu, Y.; Oh, S.H.; Ahn, H.W.; Kim, S.H.; Nelson, G. Evaluation of a multi-stage convolutional neural network-based fully automated landmark identification system using cone-beam computed tomography-synthesized posteroanterior cephalometric images. Korean J. Orthod. 2021, 51, 77–85. [Google Scholar] [CrossRef]
  18. Gil, S.M.; Kim, I.; Cho, J.H.; Hong, M.; Kim, M.; Kim, S.J.; Kim, Y.J.; Kim, Y.H.; Lim, S.H.; Sung, S.J.; et al. Accuracy of auto-identification of the posteroanterior cephalometric landmarks using cascade convolution neural network algorithm and cephalometric images of different quality from nationwide multiple centers. Am. J. Orthod. Dentofac. Orthop. 2022, 161, e361–e371. [Google Scholar] [CrossRef] [PubMed]
  19. Le, V.N.T.; Kang, J.; Oh, I.S.; Kim, J.G.; Yang, Y.M.; Lee, D.W. Effectiveness of Human-Artificial Intelligence Collaboration in Cephalometric Landmark Detection. J. Pers. Med. 2022, 12, 387. [Google Scholar] [CrossRef] [PubMed]
  20. Blum, F.M.S.; Möhlhenrich, S.C.; Raith, S.; Pankert, T.; Peters, F.; Wolf, M.; Hölzle, F.; Modabber, A. Evaluation of an artificial intelligence-based algorithm for automated localization of craniofacial landmarks. Clin. Oral Investig. 2023, 27, 2255–2265. [Google Scholar] [CrossRef]
  21. Han, S.H.; Lim, J.; Kim, J.S.; Cho, J.H.; Hong, M.; Kim, M.; Kim, S.J.; Kim, Y.J.; Kim, Y.H.; Lim, S.H.; et al. Accuracy of posteroanterior cephalogram landmarks and measurements identification using a cascaded convolutional neural network algorithm: A multicenter study. Korean J. Orthod. 2024, 54, 48–58. [Google Scholar] [CrossRef]
  22. Serafin, M.; Baldini, B.; Cabitza, F.; Carrafiello, G.; Baselli, G.; Del Fabbro, M.; Sforza, C.; Caprioglio, A.; Tartaglia, G.M. Accuracy of automated 3D cephalometric landmarks by deep learning algorithms: Systematic review and meta-analysis. Radiol. Med. 2023, 128, 544–555. [Google Scholar] [CrossRef]
  23. Dhopte, A.; Bagde, H. Smart Smile: Revolutionizing Dentistry With Artificial Intelligence. Cureus 2023, 15, e41227. [Google Scholar] [CrossRef] [PubMed]
  24. Daraqel, B.; Wafaie, K.; Mohammed, H.; Cao, L.; Mheissen, S.; Liu, Y.; Zheng, L. The performance of artificial intelligence models in generating responses to general orthodontic questions: ChatGPT vs Google Bard. Am. J. Orthod. Dentofac. Orthop. 2024, 165, 652–662. [Google Scholar] [CrossRef]
  25. Bowness, J.S.; Metcalfe, D.; El-Boghdadly, K.; Thurley, N.; Morecroft, M.; Hartley, T.; Krawczyk, J.; Noble, J.A.; Higham, H. Artificial intelligence for ultrasound scanning in regional anaesthesia: A scoping review of the evidence from multiple disciplines. Br. J. Anaesth. 2024, 132, 1049–1062. [Google Scholar] [CrossRef]
  26. Chen, J.; Che, H.; Sun, J.; Rao, Y.; Wu, J. An automatic cephalometric landmark detection method based on heatmap regression and Monte Carlo dropout. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2023, 2023, 1–4. [Google Scholar] [CrossRef] [PubMed]
  27. Park, J.; Baumrind, S.; Curry, S.; Carlson, S.K.; Boyd, R.L.; Oh, H. Reliability of 3D dental and skeletal landmarks on CBCT images. Angle Orthod. 2019, 89, 758–767. [Google Scholar] [CrossRef]
  28. Pittayapat, P.; Limchaichana-Bolstad, N.; Willems, G.; Jacobs, R. Three-dimensional cephalometric analysis in orthodontics: A systematic review. Orthod. Craniofac. Res. 2014, 17, 69–91. [Google Scholar] [CrossRef] [PubMed]
  29. Ali, S.; Abuhmed, T.; El-Sappagh, S.; Muhammad, K.; Alonso-Moral, J.M.; Confalonieri, R.; Guidotti, R.; Del Ser, J.; Díaz-Rodríguez, N.; Herrera, F. Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence. Inf. Fusion 2023, 99, 101805. [Google Scholar] [CrossRef]
  30. Kelly, C.J.; Karthikesalingam, A.; Suleyman, M.; Corrado, G.; King, D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 2019, 17, 195. [Google Scholar] [CrossRef]
  31. von Cramon-Taubadel, N.; Frazier, B.C.; Lahr, M.M. The problem of assessing landmark error in geometric morphometrics: Theory, methods, and modifications. Am. J. Phys. Anthropol. 2007, 134, 24–35. [Google Scholar] [CrossRef]
  32. Park, S.H.; Han, K. Methodologic Guide for Evaluating Clinical Performance and Effect of Artificial Intelligence Technology for Medical Diagnosis and Prediction. Radiology 2018, 286, 800–809. [Google Scholar] [CrossRef] [PubMed]
  33. Shimizu, Y.; Tanikawa, C.; Kajiwara, T.; Nagahara, H.; Yamashiro, T. The validation of orthodontic artificial intelligence systems that perform orthodontic diagnoses and treatment planning. Eur. J. Orthod. 2022, 44, 436–444. [Google Scholar] [CrossRef]
  34. Wang, C.W.; Huang, C.T.; Hsieh, M.C.; Li, C.H.; Chang, S.W.; Li, W.C.; Vandaele, R.; Marée, R.; Jodogne, S.; Geurts, P.; et al. Evaluation and Comparison of Anatomical Landmark Detection Methods for Cephalometric X-Ray Images: A Grand Challenge. IEEE Trans. Med. Imaging 2015, 34, 1890–1900. [Google Scholar] [CrossRef]
  35. Bermejo, E.; Taniguchi, K.; Ogawa, Y.; Martos, R.; Valsecchi, A.; Mesejo, P.; Ibáñez, O.; Imaizumi, K. Automatic landmark annotation in 3D surface scans of skulls: Methodological proposal and reliability study. Comput. Methods Programs Biomed. 2021, 210, 106380. [Google Scholar] [CrossRef] [PubMed]
  36. Jeong, S.H.; Yun, J.P.; Yeom, H.G.; Kim, H.K.; Kim, B.C. Deep-Learning-Based Detection of Cranio-Spinal Differences between Skeletal Classification Using Cephalometric Radiography. Diagnostics 2021, 11, 591. [Google Scholar] [CrossRef]
  37. Kim, M.; Kim, S.; Kim, M.; Bae, H.J.; Park, J.W.; Kim, N. Realistic high-resolution lateral cephalometric radiography generated by progressive growing generative adversarial network and quality evaluations. Sci. Rep. 2021, 11, 12563. [Google Scholar] [CrossRef]
  38. Kim, Y.H.; Park, J.B.; Chang, M.S.; Ryu, J.J.; Lim, W.H.; Jung, S.K. Influence of the depth of the convolutional neural networks on an artificial intelligence model for diagnosis of orthognathic surgery. J. Pers. Med. 2021, 11, 356. [Google Scholar] [CrossRef] [PubMed]
  39. Zhang, Y.; Qin, H.; Li, P.; Pei, Y.; Guo, Y.; Xu, T.; Zha, H. Deformable registration of lateral cephalogram and cone-beam computed tomography image. Med. Phys. 2021, 48, 6901–6915. [Google Scholar] [CrossRef]
  40. Dolatabadi, N.; Boyd, R.L.; Oh, H. Comparison between a human judge and automatic landmark identification on digital models. Am. J. Orthod. Dentofac. Orthop. 2022, 162, 257–263. [Google Scholar] [CrossRef]
  41. Gokdeniz, S.T.; Kamburoğlu, K. Artificial intelligence in dentomaxillofacial radiology. World J. Radiol. 2022, 14, 55–59. [Google Scholar] [CrossRef] [PubMed]
  42. Lang, Y.; Lian, C.; Xiao, D.; Deng, H.; Thung, K.H.; Yuan, P.; Gateno, J.; Kuang, T.; Alfi, D.M.; Wang, L.; et al. Localization of Craniomaxillofacial Landmarks on CBCT Images Using 3D Mask R-CNN and Local Dependency Learning. IEEE Trans. Med. Imaging 2022, 41, 2856–2866. [Google Scholar] [CrossRef] [PubMed]
  43. Li, S.; Gong, Q.; Li, H.; Chen, S.; Liu, Y.; Ruan, G.; Zhu, L.; Liu, L.; Chen, H. Automatic location scheme of anatomical landmarks in 3D head MRI based on the scale attention hourglass network. Comput. Methods Programs Biomed. 2022, 214, 106564. [Google Scholar] [CrossRef] [PubMed]
  44. Torres, H.R.; Morais, P.; Fritze, A.; Burkhardt, W.; Kaufmann, M.; Oliveira, B.; Veloso, F.; Hahn, G.; Rüdiger, M.; Fonseca, J.C.; et al. Anthropometric Landmarking for Diagnosis of Cranial Deformities: Validation of an Automatic Approach and Comparison with Intra- and Interobserver Variability. Ann. Biomed. Eng. 2022, 50, 1022–1037. [Google Scholar] [CrossRef]
  45. Jiang, C.; Jiang, F.; Xie, Z.; Sun, J.; Sun, Y.; Zhang, M.; Zhou, J.; Feng, Q.; Zhang, G.; Xing, K.; et al. Evaluation of automated detection of head position on lateral cephalometric radiographs based on deep learning techniques. Ann. Anat. 2023, 250, 152114. [Google Scholar] [CrossRef]
  46. Weingart, J.V.; Schlager, S.; Metzger, M.C.; Brandenburg, L.S.; Hein, A.; Schmelzeisen, R.; Bamberg, F.; Kim, S.; Kellner, E.; Reisert, M.; et al. Automated detection of cephalometric landmarks using deep neural patchworks. Dentomaxillofac. Radiol. 2023, 52, 20230059. [Google Scholar] [CrossRef]
  47. Kazimierczak, N.; Kazimierczak, W.; Serafin, Z.; Nowicki, P.; Jankowski, T.; Jankowska, A.; Janiszewska-Olszowska, J. Skeletal facial asymmetry: Reliability of manual and artificial intelligence-driven analysis. Dentomaxillofac. Radiol. 2024, 53, 52–59. [Google Scholar] [CrossRef]
  48. Noothout, J.M.H.; De Vos, B.D.; Wolterink, J.M.; Postma, E.M.; Smeets, P.A.M.; Takx, R.A.P.; Leiner, T.; Viergever, M.A.; Isgum, I. Deep Learning-Based Regression and Classification for Automatic Landmark Localization in Medical Images. IEEE Trans. Med. Imaging 2020, 39, 4011–4022. [Google Scholar] [CrossRef]
  49. Shahidi, S.; Oshagh, M.; Gozin, F.; Salehi, P.; Danaei, S.M. Accuracy of computerized automatic identification of cephalometric landmarks by a designed software. Dentomaxillofac. Radiol. 2013, 42, 20110187. [Google Scholar] [CrossRef]
  50. Gupta, A.; Kharbanda, O.P.; Sardana, V.; Balachandran, R.; Sardana, H.K. A knowledge-based algorithm for automatic detection of cephalometric landmarks on CBCT images. Int. J. Comput. Assist. Radiol. Surg. 2015, 10, 1737–1752. [Google Scholar] [CrossRef]
  51. Gupta, A.; Kharbanda, O.P.; Sardana, V.; Balachandran, R.; Sardana, H.K. Accuracy of 3D cephalometric measurements based on an automatic knowledge-based landmark detection algorithm. Int. J. Comput. Assist. Radiol. Surg. 2016, 11, 1297–1309. [Google Scholar] [CrossRef]
  52. Lindner, C.; Wang, C.W.; Huang, C.T.; Li, C.H.; Chang, S.W.; Cootes, T.F. Fully Automatic System for Accurate Localisation and Analysis of Cephalometric Landmarks in Lateral Cephalograms. Sci. Rep. 2016, 6, 33581. [Google Scholar] [CrossRef] [PubMed]
  53. Mosleh, M.A.; Baba, M.S.; Malek, S.; Almaktari, R.A. Ceph-X: Development and evaluation of 2D cephalometric system. BMC Bioinform. 2016, 17, 499. [Google Scholar] [CrossRef]
  54. Zhang, J.; Gao, Y.; Wang, L.; Tang, Z.; Xia, J.J.; Shen, D. Automatic Craniomaxillofacial Landmark Digitization via Segmentation-Guided Partially-Joint Regression Forest Model and Multiscale Statistical Features. IEEE Trans. Biomed. Eng. 2016, 63, 1820–1829. [Google Scholar] [CrossRef] [PubMed]
  55. Arık, S.; Ibragimov, B.; Xing, L. Fully automated quantitative cephalometry using convolutional neural networks. J. Med. Imaging 2017, 4, 014501. [Google Scholar] [CrossRef]
  56. Codari, M.; Caffini, M.; Tartaglia, G.M.; Sforza, C.; Baselli, G. Computer-aided cephalometric landmark annotation for CBCT data. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 113–121. [Google Scholar] [CrossRef]
  57. Ed-Dhahraouy, M.; Riri, H.; Ezzahmouly, M.; Bourzgui, F.; El Moutaoukkil, A. A new methodology for automatic detection of reference points in 3D cephalometry: A pilot study. Int. Orthod. 2018, 16, 328–337. [Google Scholar] [CrossRef]
  58. Montúfar, J.; Romero, M.; Scougall-Vilchis, R.J. Automatic 3-dimensional cephalometric landmarking based on active shape models in related projections. Am. J. Orthod. Dentofac. Orthop. 2018, 153, 449–458. [Google Scholar] [CrossRef]
  59. Montúfar, J.; Romero, M.; Scougall-Vilchis, R.J. Hybrid approach for automatic cephalometric landmark annotation on cone-beam computed tomography volumes. Am. J. Orthod. Dentofac. Orthop. 2018, 154, 140–150. [Google Scholar] [CrossRef] [PubMed]
  60. Neelapu, B.C.; Kharbanda, O.P.; Sardana, V.; Gupta, A.; Vasamsetti, S.; Balachandran, R.; Sardana, H.K. Automatic localization of three-dimensional cephalometric landmarks on CBCT images by extracting symmetry features of the skull. Dentomaxillofac. Radiol. 2018, 47, 20170054. [Google Scholar] [CrossRef]
  61. Kang, S.H.; Jeon, K.; Kim, H.J.; Seo, J.K.; Lee, S.H. Automatic Three-Dimensional Cephalometric Annotation System Using Three-Dimensional Convolutional Neural Networks. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, S189–S190. [Google Scholar] [CrossRef]
  62. Lee, S.M.; Kim, H.P.; Jeon, K.; Lee, S.H.; Seo, J.K. Automatic 3D cephalometric annotation system using shadowed 2D image-based machine learning. Phys. Med. Biol. 2019, 64, 055002. [Google Scholar] [CrossRef] [PubMed]
  63. Nishimoto, S.; Sotsuka, Y.; Kawai, K.; Ishise, H.; Kakibuchi, M. Personal Computer-Based Cephalometric Landmark Detection With Deep Learning, Using Cephalograms on the Internet. J. Craniofac. Surg. 2019, 30, 91–95. [Google Scholar] [CrossRef] [PubMed]
  64. Park, J.H.; Hwang, H.W.; Moon, J.H.; Yu, Y.; Kim, H.; Her, S.B.; Srinivasan, G.; Aljanabi, M.N.A.; Donatelli, R.E.; Lee, S.J. Automated identification of cephalometric landmarks: Part 1-Comparisons between the latest deep-learning methods YOLOV3 and SSD. Angle Orthod. 2019, 89, 903–909. [Google Scholar] [CrossRef]
  65. Kim, H.; Shim, E.; Park, J.; Kim, Y.J.; Lee, U.; Kim, Y. Web-based fully automated cephalometric analysis by deep learning. Comput. Methods Programs Biomed. 2020, 194, 105513. [Google Scholar] [CrossRef]
  66. Kunz, F.; Stellzig-Eisenhauer, A.; Zeman, F.; Boldt, J. Artificial intelligence in orthodontics: Evaluation of a fully automated cephalometric analysis using a customized convolutional neural network. J. Orofac. Orthop. 2020, 81, 52–68. [Google Scholar] [CrossRef]
  67. Lee, J.H.; Yu, H.J.; Kim, M.J.; Kim, J.W.; Choi, J. Automated cephalometric landmark detection with confidence regions using Bayesian convolutional neural networks. BMC Oral Health 2020, 20, 270. [Google Scholar] [CrossRef] [PubMed]
  68. Wirtz, A.; Lam, J.; Wesarg, S. Automated cephalometric landmark localization using a coupled shape model. Biomed. Tech. 2020, 65, S16. [Google Scholar] [CrossRef]
  69. Yun, H.S.; Jang, T.J.; Lee, S.M.; Lee, S.H.; Seo, J.K. Learning-based local-to-global landmark annotation for automatic 3D cephalometry. Phys. Med. Biol. 2020, 65, 085018. [Google Scholar] [CrossRef]
  70. Bulatova, G.; Kusnoto, B.; Grace, V.; Tsay, T.P.; Avenetti, D.M.; Sanchez, F.J.C. Assessment of automatic cephalometric landmark identification using artificial intelligence. Orthod. Craniofac. Res. 2021, 24 (Suppl. S2), 37–42. [Google Scholar] [CrossRef]
  71. Huang, Y.; Fan, F.; Syben, C.; Roser, P.; Mill, L.; Maier, A. Cephalogram synthesis and landmark detection in dental cone-beam CT systems. Med. Image Anal. 2021, 70, 102028. [Google Scholar] [CrossRef]
  72. Hwang, H.W.; Moon, J.H.; Kim, M.G.; Donatelli, R.E.; Lee, S.J. Evaluation of automated cephalometric analysis based on the latest deep learning method. Angle Orthod. 2021, 91, 329–335. [Google Scholar] [CrossRef]
  73. Jeon, S.; Lee, K.C. Comparison of cephalometric measurements between conventional and automatic cephalometric analysis using convolutional neural network. Prog. Orthod. 2021, 22, 14. [Google Scholar] [CrossRef]
  74. Kang, S.H.; Jeon, K.; Kang, S.H.; Lee, S.H. 3D cephalometric landmark detection by multiple stage deep reinforcement learning. Sci. Rep. 2021, 11, 17509. [Google Scholar] [CrossRef] [PubMed]
  75. Kim, Y.H.; Lee, C.; Ha, E.G.; Choi, Y.J.; Han, S.S. A fully deep learning model for the automatic identification of cephalometric landmarks. Imaging Sci. Dent. 2021, 51, 299–306. [Google Scholar] [CrossRef] [PubMed]
  76. Oh, K.; Oh, I.S.; Le, V.N.T.; Lee, D.W. Deep Anatomical Context Feature Learning for Cephalometric Landmark Detection. IEEE J. Biomed. Health Inform. 2021, 25, 806–817. [Google Scholar] [CrossRef]
  77. Tanikawa, C.; Lee, C.; Lim, J.; Oka, A.; Yamashiro, T. Clinical applicability of automated cephalometric landmark identification: Part I-Patient-related identification errors. Orthod. Craniofac. Res. 2021, 24 (Suppl. S2), 43–52. [Google Scholar] [CrossRef] [PubMed]
  78. Wang, L.; Ma, L.; Li, Y.; Niu, K.; He, Z. A DCNN system based on an iterative method for automatic landmark detection in cephalometric X-ray images. Biomed. Signal Process. Control 2021, 68, 102757. [Google Scholar] [CrossRef]
  79. Zeng, M.; Yan, Z.; Liu, S.; Zhou, Y.; Qiu, L. Cascaded convolutional networks for automatic cephalometric landmark detection. Med. Image Anal. 2021, 68, 101904. [Google Scholar] [CrossRef]
  80. Ahn, J.; Nguyen, T.P.; Kim, Y.J.; Kim, T.; Yoon, J. Automated analysis of three-dimensional CBCT images taken in natural head position that combines facial profile processing and multiple deep-learning models. Comput. Methods Programs Biomed. 2022, 226, 107123. [Google Scholar] [CrossRef]
  81. Ali, S.M.; Saloom, H.F.; Tawfeeq, M.A. Cephalometric Variables Prediction from Lateral Photographs Between Different Skeletal Patterns Using Regression Artificial Neural Networks. Turk. J. Orthod. 2022, 35, 101–111. [Google Scholar] [CrossRef]
  82. Çoban, G.; Öztürk, T.; Hashimli, N.; Yağci, A. Comparison between cephalometric measurements using digital manual and web-based artificial intelligence cephalometric tracing software. Dental Press J. Orthod. 2022, 27, e222112. [Google Scholar] [CrossRef] [PubMed]
  83. Dot, G.; Schouman, T.; Chang, S.; Rafflenbeul, F.; Kerbrat, A.; Rouch, P.; Gajny, L. Automatic 3-Dimensional Cephalometric Landmarking via Deep Learning. J. Dent. Res. 2022, 101, 1380–1387. [Google Scholar] [CrossRef]
  84. Jiang, F.; Guo, Y.; Zhou, Y.; Yang, C.; Xing, K.; Zhou, J.; Lin, Y.; Cheng, F.; Li, J. Automated calibration system for length measurement of lateral cephalometry based on deep learning. Phys. Med. Biol. 2022, 67, 225016. [Google Scholar] [CrossRef]
  85. Mahto, R.K.; Kafle, D.; Giri, A.; Luintel, S.; Karki, A. Evaluation of fully automated cephalometric measurements obtained from web-based artificial intelligence driven platform. BMC Oral Health 2022, 22, 132. [Google Scholar] [CrossRef]
  86. Ristau, B.; Coreil, M.; Chapple, A.; Armbruster, P.; Ballard, R. Comparison of AudaxCeph®’s fully automated cephalometric tracing technology to a semi-automated approach by human examiners. Int. Orthod. 2022, 20, 100691. [Google Scholar] [CrossRef] [PubMed]
  87. Tsolakis, I.A.; Tsolakis, A.I.; Elshebiny, T.; Matthaios, S.; Palomo, J.M. Comparing a Fully Automated Cephalometric Tracing Method to a Manual Tracing Method for Orthodontic Diagnosis. J. Clin. Med. 2022, 11, 6854. [Google Scholar] [CrossRef] [PubMed]
  88. Uğurlu, M. Performance of a Convolutional Neural Network-Based Artificial Intelligence Algorithm for Automatic Cephalometric Landmark Detection. Turk. J. Orthod. 2022, 35, 94–100. [Google Scholar] [CrossRef]
  89. Yao, J.; Zeng, W.; He, T.; Zhou, S.; Zhang, Y.; Guo, J.; Tang, W. Automatic localization of cephalometric landmarks based on convolutional neural network. Am. J. Orthod. Dentofac. Orthop. 2022, 161, e250–e259. [Google Scholar] [CrossRef]
  90. Yun, H.S.; Hyun, C.M.; Baek, S.H.; Lee, S.H.; Seo, J.K. A semi-supervised learning approach for automated 3D cephalometric landmark identification using computed tomography. PLoS ONE 2022, 17, e0275114. [Google Scholar] [CrossRef] [PubMed]
  91. Bao, H.; Zhang, K.; Yu, C.; Li, H.; Cao, D.; Shu, H.; Liu, L.; Yan, B. Evaluating the accuracy of automated cephalometric analysis based on artificial intelligence. BMC Oral Health 2023, 23, 191. [Google Scholar] [CrossRef]
  92. Chang, Q.; Wang, Z.; Wang, F.; Dou, J.; Zhang, Y.; Bai, Y. Automatic analysis of lateral cephalograms based on high-resolution net. Am. J. Orthod. Dentofac. Orthop. 2023, 163, 501–508.e4. [Google Scholar] [CrossRef]
  93. Chen, Z.; Ishikawa, H.; Wollstein, G.; Wang, Y.; Schuman, J.S. Deep-Learning-Based Group Point-Wise Spatial Mapping of Structure to Function in Glaucoma. Investig. Ophthalmol. Vis. Sci. 2023, 64, 344. [Google Scholar]
  94. Duran, G.S.; Gökmen, Ş.; Topsakal, K.G.; Görgülü, S. Evaluation of the accuracy of fully automatic cephalometric analysis software with artificial intelligence algorithm. Orthod. Craniofac. Res. 2023, 26, 481–490. [Google Scholar] [CrossRef] [PubMed]
  95. Gillot, M.; Miranda, F.; Baquero, B.; Ruellas, A.; Gurgel, M.; Al Turkestani, N.; Anchling, L.; Hutin, N.; Biggs, E.; Yatabe, M.; et al. Automatic landmark identification in cone-beam computed tomography. Orthod. Craniofac. Res. 2023, 26, 560–567. [Google Scholar] [CrossRef]
  96. Hong, W.; Kim, S.M.; Choi, J.; Ahn, J.; Paeng, J.Y.; Kim, H. Automated Cephalometric Landmark Detection Using Deep Reinforcement Learning. J. Craniofac. Surg. 2023, 34, 2336–2342. [Google Scholar] [CrossRef]
  97. Indermun, S.; Shaik, S.; Nyirenda, C.; Johannes, K.; Mulder, R. Human examination and artificial intelligence in cephalometric landmark detection-is AI ready to take over? Dentomaxillofac. Radiol. 2023, 52, 20220362. [Google Scholar] [CrossRef] [PubMed]
  98. Jiang, F.; Guo, Y.; Yang, C.; Zhou, Y.; Lin, Y.; Cheng, F.; Quan, S.; Feng, Q.; Li, J. Artificial intelligence system for automated landmark localization and analysis of cephalometry. Dentomaxillofac. Radiol. 2023, 52, 20220081. [Google Scholar] [CrossRef]
  99. Kunz, F.; Stellzig-Eisenhauer, A.; Widmaier, L.M.; Zeman, F.; Boldt, J. Assessment of the quality of different commercial providers using artificial intelligence for automated cephalometric analysis compared to human orthodontic experts. J. Orofac. Orthop. 2023. [Google Scholar] [CrossRef] [PubMed]
  100. Lee, H.; Cho, J.M.; Ryu, S.; Ryu, S.; Chang, E.; Jung, Y.S.; Kim, J.Y. Automatic identification of posteroanterior cephalometric landmarks using a novel deep learning algorithm: A comparative study with human experts. Sci. Rep. 2023, 13, 15506. [Google Scholar] [CrossRef]
  101. Lu, G.; Shu, H.; Bao, H.; Kong, Y.; Zhang, C.; Yan, B.; Zhang, Y.; Coatrieux, J.L. CMF-Net: Craniomaxillofacial landmark localization on CBCT images using geometric constraint and transformer. Phys. Med. Biol. 2023, 68, 095020. [Google Scholar] [CrossRef]
  102. Prince, S.T.T.; Srinivasan, D.; Duraisamy, S.; Kannan, R.; Rajaram, K. Reproducibility of linear and angular cephalometric measurements obtained by an artificial-intelligence assisted software (WebCeph) in comparison with digital software (AutoCEPH) and manual tracing method. Dental Press J. Orthod. 2023, 28, e2321214. [Google Scholar] [CrossRef]
  103. Rashmi, S.; Srinath, S.; Patil, K.; Murthy, P.S.; Deshmukh, S. Lateral Cephalometric Landmark Annotation Using Histogram Oriented Gradients Extracted from Region of Interest Patches. J. Maxillofac. Oral Surg. 2023, 22, 806–812. [Google Scholar] [CrossRef] [PubMed]
  104. Ye, H.; Cheng, Z.; Ungvijanpunya, N.; Chen, W.; Cao, L.; Gou, Y. Is automatic cephalometric software using artificial intelligence better than orthodontist experts in landmark identification? BMC Oral Health 2023, 23, 467. [Google Scholar] [CrossRef] [PubMed]
  105. Zhao, C.; Yuan, Z.; Luo, S.; Wang, W.; Ren, Z.; Yao, X.; Wu, T. Automatic recognition of cephalometric landmarks via multi-scale sampling strategy. Heliyon 2023, 9, e17459. [Google Scholar] [CrossRef]
  106. Gaonkar, P.; Mohammed, I.; Ribin, M.; Kumar, C.D.; Thomas, P.A.; Saini, R. Assessing the Impact of AI-Enhanced Diagnostic Tools on the Treatment Planning of Orthodontic Cases: An RCT. J. Pharm. Bioallied Sci. 2024, 16, S1798–S1800. [Google Scholar] [CrossRef]
  107. Guinot-Barona, C.; Alonso Pérez-Barquero, J.; Galán López, L.; Barmak, A.B.; Att, W.; Kois, J.C.; Revilla-León, M. Cephalometric analysis performance discrepancy between orthodontists and an artificial intelligence model using lateral cephalometric radiographs. J. Esthet. Restor. Dent. 2024, 36, 555–565. [Google Scholar] [CrossRef]
  108. Jiao, Z.; Liang, Z.; Liao, Q.; Chen, S.; Yang, H.; Hong, G.; Gui, H. Deep learning for automatic detection of cephalometric landmarks on lateral cephalometric radiographs using the Mask Region-based Convolutional Neural Network: A pilot study. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2024, 137, 554–562. [Google Scholar] [CrossRef] [PubMed]
  109. Kang, S.; Kim, I.; Kim, Y.J.; Kim, N.; Baek, S.H.; Sung, S.J. Accuracy and clinical validity of automated cephalometric analysis using convolutional neural networks. Orthod. Craniofac. Res. 2024, 27, 64–77. [Google Scholar] [CrossRef]
  110. Dot, G.; Rafflenbeul, F.; Arbotto, M.; Gajny, L.; Rouch, P.; Schouman, T. Accuracy and reliability of automatic three-dimensional cephalometric landmarking. Int. J. Oral Maxillofac. Surg. 2020, 49, 1367–1378. [Google Scholar] [CrossRef]
  111. Lee, M.; Chung, M.; Shin, Y.G. Cephalometric landmark detection via global and local encoders and patch-wise attentions. Neurocomputing 2022, 470, 182–189. [Google Scholar] [CrossRef]
  112. Kolsanov, A.V.; Popov, N.V.; Ayupova, I.O.; Tsitsiashvili, A.M.; Gaidel, A.V.; Dobratulin, K.S. Cephalometric analysis of lateral skull X-ray images using soft computing components in the search for key points. Stomatologiia 2021, 100, 63–67. [Google Scholar] [CrossRef] [PubMed]
  113. Chang, S.; Wang, S.F.; Zuo, F.F.; Wang, F.; Gong, B.W.; Wang, Y.J.; Xie, X.J. Automated diagnostic classification with lateral cephalograms based on deep learning network model. Zhonghua Kou Qiang Yi Xue Za Zhi 2023, 58, 547–553. [Google Scholar] [CrossRef]
  114. Gong, B.W.; Chang, S.; Zuo, F.F.; Xie, X.J.; Wang, S.F.; Wang, Y.J.; Sun, Y.Y.; Guan, X.C.; Bai, Y.X. Automated cephalometric landmark identification and location based on convolutional neural network. Zhonghua Kou Qiang Yi Xue Za Zhi 2023, 58, 1249–1256. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flow chart illustrating the selection process for the articles included in the systematic review.
Figure 2. Results of the QUADAS-2 assessment: graphic representation of the risk of bias and bar plot of the applicability concerns.
Figure 3. Forest plot comparing artificial intelligence with the manual approach in detecting anatomical landmarks [12,13,14,15,16,17,18,19,20,21].
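As a side note on methods, pooled estimates such as the mean difference of 0.35 (95% CI −0.09 to 0.78) shown in this forest plot are conventionally obtained with a random-effects model. A minimal DerSimonian–Laird sketch, using made-up per-study effects and standard errors rather than the review's actual data:

```python
import math

def dersimonian_laird(effects, ses):
    """Pool per-study mean differences under a random-effects model."""
    w = [1.0 / se**2 for se in ses]                        # inverse-variance weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)          # between-study variance
    w_re = [1.0 / (se**2 + tau2) for se in ses]            # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Hypothetical per-study mean differences (mm) and their standard errors
pooled, (lo, hi) = dersimonian_laird([0.2, 0.5, 0.4], [0.1, 0.2, 0.15])
```

A non-significant overall effect corresponds to a 95% confidence interval that crosses zero, as reported for the comparison in Figure 3.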
Figure 4. Forest plot illustrating the SDR of deep learning within a 2 mm margin of error [13,16,17,18,19,21].
Figure 5. Forest plot illustrating the results of sensitivity tests [12,13,14,15,16,17,18,19,20,21].
Figure 6. Forest plot comparing artificial intelligence with humans in detecting anatomical landmarks after the sensitivity analysis [13,14,15,16,17,18,19,20,21].
Figure 7. Funnel plot (without imputed studies) illustrating the publication bias analysis.
Table 1. Main characteristics of the included studies.
| Study, Year | Country | Imaging Examination | Architecture | No. of Experts in Manual Landmarking | Training/Testing | No. of Landmarks | MRE ± SD (mm) | SDR < 2 mm (%) | Results |
|---|---|---|---|---|---|---|---|---|---|
| Shahidi et al., 2014 [12] | Iran | CBCT | Image registration method using the MATLAB software language | 3 | 8/20 | 14 | 3.40 ± 1.48 | NR | The mean errors for all 14 landmarks were less than 4 mm, and over 63% of them had a mean error of less than 3 mm when compared to manual measurements. |
| Wang et al., 2018 [13] | China | Lateral cephalograms | Multiresolution decision tree regression voting using scale-invariant feature transform-based patch features | 2 | 150/150 | 19 | 1.69 ± 1.43 | 73.37 | The algorithm's average 73% successful detection rate, within a precision range of 2.0 mm, was validated on the clinical database. |
| Hwang et al., 2020 [14] | Korea | Lateral cephalograms | You Only Look Once, version 3 (YOLOv3) | 2 | 1028/283 | 80 | 1.46 ± 2.97 | NR | Artificial intelligence achieved an accuracy in identifying cephalometric landmarks on par with that of human examinations. |
| Muraev et al., 2020 [15] | Russia | Frontal cephalograms | ANN | 13 | 300/30 | 45 | 2.87 ± 0.99 | NR | Artificial neural networks attained accuracy levels comparable to human experts in identifying cephalometric landmarks. |
| Kim, J. et al., 2021 [16] | Korea | Lateral cephalograms | A cascaded CNN | 2 | 440/100 | 20 | 1.36 ± 0.98 | 83.6 | The total automatic detection error was 1.36 ± 0.98 mm, and the average error per landmark ranged from 0.46 ± 0.37 mm (maxillary incisor crown tip) to 2.09 ± 1.91 mm (distal root tip of the mandibular first molar). |
| Kim, M. et al., 2021 [17] | Korea | Posteroanterior CBCT | A multi-stage CNN | 1 | 345/85 | 23 | 2.23 ± 2.02 | 60.88 | Automatic identification of cephalometric landmarks using CBCT synthesis did not achieve a clinically acceptable level of accuracy, as errors exceeded the desired threshold of less than 2 mm. |
| Gil et al., 2022 [18] | Korea | Posteroanterior cephalograms | A cascaded CNN | 1 | 2418/99 | 16 | 1.52 ± 1.13 | 83.3 | The cascaded CNN algorithm for automatically identifying posteroanterior cephalometric landmarks demonstrated potential as a viable alternative to manual identification. |
| Le et al., 2022 [19] | Korea | Lateral cephalograms | The deep anatomical context feature learning model | 20 | 1193/10 | 41 | 1.87 ± 2.04 | 73.17 | Beginner-artificial intelligence collaboration was successful in identifying cephalometric landmarks. |
| Blum et al., 2023 [20] | Germany | CBCT | Densilia® (Munich, Germany): software using a CNN algorithm | 4 | 931/114 | 35 | 2.73 ± 2.37 | NR | The accuracy achieved in automatic landmark detection falls within the clinically acceptable range, is comparable to the precision of manual landmark determination, and requires significantly less time. |
| Han et al., 2024 [21] | Korea | Posteroanterior cephalograms | A cascaded CNN | 2 + 2 | 2150/377 | 9 | 1.26 ± 1.94 | 83.2 | The cascaded CNN model can be considered a useful tool for automatically identifying midline landmarks and determining the extent of midline deviation in posteroanterior cephalograms of adult patients. |
ANN, artificial neural network; CBCT, cone-beam computed tomography; CNN, convolutional neural network; MRE, mean radial error; SDR, successful detection rate; NR, not reported.
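For reference, the two accuracy metrics in the table are derived from per-landmark Euclidean distances: MRE is their mean, and SDR is the percentage of landmarks falling within a tolerance (2 mm here). A minimal sketch, assuming predicted and ground-truth coordinates are already in millimetres (implementations differ on whether the threshold comparison is strict or inclusive):

```python
import numpy as np

def mre_and_sdr(pred, truth, threshold_mm=2.0):
    """Mean radial error (mm) and successful detection rate (%) at a threshold.

    pred, truth: (n_landmarks, 2) or (n_landmarks, 3) coordinate arrays in mm.
    """
    radial = np.linalg.norm(pred - truth, axis=1)     # per-landmark Euclidean error
    mre = radial.mean()
    sdr = (radial < threshold_mm).mean() * 100.0      # % of landmarks under threshold
    return mre, sdr

# Toy example: two landmarks with errors of 1 mm and 3 mm -> MRE 2.0 mm, SDR 50%
mre, sdr = mre_and_sdr(np.array([[1.0, 0.0], [0.0, 3.0]]), np.zeros((2, 2)))
```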
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Citation: Lee, Y.; Pyeon, J.-H.; Han, S.-H.; Kim, N.J.; Park, W.-J.; Park, J.-B. A Comparative Study of Deep Learning and Manual Methods for Identifying Anatomical Landmarks through Cephalometry and Cone-Beam Computed Tomography: A Systematic Review and Meta-Analysis. Appl. Sci. 2024, 14, 7342. https://doi.org/10.3390/app14167342