Artificial Intelligence in Oral Health

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: closed (30 April 2022) | Viewed by 67803

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor


Prof. Dr. Jae-Hong Lee
Guest Editor
Department of Periodontology, Daejeon Dental Hospital, Wonkwang University, Daejeon 35233, Korea
Interests: periodontology; implantology; deep learning; oral health

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI), including deep learning and machine learning, is developing rapidly and has garnered substantial public attention in recent years. AI is positioned to become one of the most transformative technologies in medicine, with great potential for improving the analysis of medical imaging datasets such as plain radiographs and three-dimensional imaging modalities. Several AI-based deep learning systems have already been approved by the FDA and are being applied in clinical practice. In the dental field, the usefulness of AI has been assessed for the detection, classification, and segmentation of anatomical structures and pathologies, including orthodontic landmarks, dental caries, periodontal disease, and osteoporosis; however, these applications are still at a very preliminary stage. This Special Issue is intended to lay the foundation for AI applications in oral health, including general dentistry, periodontology, implantology, oral surgery, oral radiology, orthodontics, and prosthodontics, among other fields.

Prof. Dr. Jae-Hong Lee
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial intelligence
  • Biomedical image analysis
  • Clinical data
  • Clinical validation
  • Computer-assisted diagnosis
  • Computer vision
  • Deep learning
  • Dentistry
  • Histopathological images
  • Implantology
  • Machine learning
  • Oral health
  • Personalized medicine

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (15 papers)


Editorial


2 pages, 169 KiB  
Editorial
Special Issue “Artificial Intelligence in Oral Health”
by Jae-Hong Lee
Diagnostics 2022, 12(8), 1866; https://doi.org/10.3390/diagnostics12081866 - 2 Aug 2022
Viewed by 1581
Abstract
I thank all authors, reviewers and the editorial staff who contributed to this Special Issue [...]

Research


13 pages, 853 KiB  
Article
Artificial Intelligence-Based Prediction of Oroantral Communication after Tooth Extraction Utilizing Preoperative Panoramic Radiography
by Andreas Vollmer, Babak Saravi, Michael Vollmer, Gernot Michael Lang, Anton Straub, Roman C. Brands, Alexander Kübler, Sebastian Gubik and Stefan Hartmann
Diagnostics 2022, 12(6), 1406; https://doi.org/10.3390/diagnostics12061406 - 6 Jun 2022
Cited by 11 | Viewed by 3482
Abstract
Oroantral communication (OAC) is a common complication after tooth extraction of upper molars. Profound preoperative panoramic radiography analysis might potentially help predict OAC following tooth extraction. In this exploratory study, we evaluated n = 300 consecutive cases (100 OAC and 200 controls) and trained five machine learning algorithms (VGG16, InceptionV3, MobileNetV2, EfficientNet, and ResNet50) to predict OAC versus non-OAC (binary classification task) from the input images. Further, four oral and maxillofacial experts evaluated the respective panoramic radiographs, and performance metrics (accuracy, area under the curve (AUC), precision, recall, F1-score, and receiver operating characteristic curve) were determined for all diagnostic approaches. Cohen's kappa was used to evaluate the agreement between expert evaluations. The deep learning algorithms reached high specificity (highest specificity 100% for InceptionV3) but low sensitivity (highest sensitivity 42.86% for MobileNetV2). The AUCs from VGG16, InceptionV3, MobileNetV2, EfficientNet, and ResNet50 were 0.53, 0.60, 0.67, 0.51, and 0.56, respectively. Experts 1–4 reached AUCs of 0.550, 0.629, 0.500, and 0.579, respectively. The specificity of the expert evaluations ranged from 51.74% to 95.02%, whereas sensitivity ranged from 14.14% to 59.60%. Cohen's kappa revealed poor agreement among the oral and maxillofacial expert evaluations (Cohen's kappa: 0.1285). The false-negative rate, i.e., the rate of positive cases (OAC) missed by the deep learning algorithms, ranged from 57.14% to 95.24%. Overall, the present data indicate that OAC cannot be sufficiently predicted from preoperative panoramic radiography. Surgeons should not rely solely on panoramic radiography when evaluating the probability of OAC occurrence; clinical testing for OAC is warranted after every upper-molar extraction.
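As an illustration of the evaluation described above (not the authors' code), here is a minimal Python sketch, assuming scikit-learn and toy data, of how binary sensitivity, specificity, AUC, and Cohen's kappa are typically computed:

```python
# A minimal sketch with toy data; thresholds and values are illustrative only.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix, cohen_kappa_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                   # 1 = OAC, 0 = control
y_score = np.array([0.9, 0.2, 0.4, 0.8, 0.3, 0.6, 0.7, 0.1])  # model probabilities
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # recall on the positive (OAC) class
specificity = tn / (tn + fp)
auc = roc_auc_score(y_true, y_score)

# Inter-rater agreement between two experts' binary ratings (toy data)
expert_a = [1, 0, 1, 1, 0, 0, 1, 0]
expert_b = [1, 0, 0, 1, 1, 0, 1, 0]
kappa = cohen_kappa_score(expert_a, expert_b)
print(f"sens={sensitivity:.2f} spec={specificity:.2f} AUC={auc:.2f} kappa={kappa:.2f}")
```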

13 pages, 1029 KiB  
Article
Detecting Proximal Caries on Periapical Radiographs Using Convolutional Neural Networks with Different Training Strategies on Small Datasets
by Xiujiao Lin, Dengwei Hong, Dong Zhang, Mingyi Huang and Hao Yu
Diagnostics 2022, 12(5), 1047; https://doi.org/10.3390/diagnostics12051047 - 21 Apr 2022
Cited by 10 | Viewed by 3065
Abstract
The present study aimed to evaluate the performance of convolutional neural networks (CNNs) trained on small datasets with different strategies for detecting proximal caries of varying severity on periapical radiographs. A small dataset of 800 periapical radiographs was randomly divided into a training and validation dataset (n = 600) and a test dataset (n = 200). A pretrained Cifar-10Net CNN was used in the present study. Three training strategies were used to train the CNN model independently: image recognition (IR), edge extraction (EE), and image segmentation (IS). Metrics such as sensitivity and the area under the receiver operating characteristic curve (AUC) were analysed for the trained CNNs and human observers to evaluate performance in detecting proximal caries. The IR, EE, and IS recognition modes and human observers achieved AUCs of 0.805, 0.860, 0.549, and 0.767, respectively, with the EE recognition mode having the highest values (all p < 0.05). The EE recognition mode was significantly more sensitive in detecting both enamel and dentin caries than human observers (all p < 0.05). The CNN trained with the EE strategy, the best performer in the present study, showed potential utility in detecting proximal caries on periapical radiographs when only small datasets are available.
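The edge-extraction (EE) strategy is the kind of preprocessing that can be sketched in a few lines; the following is an illustrative example assuming OpenCV, with the file name and thresholds chosen arbitrarily rather than taken from the paper:

```python
# A minimal sketch of edge-extraction preprocessing before CNN input.
# File name, thresholds, and the 32x32 target size are assumptions.
import cv2

img = cv2.imread("periapical.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
img = cv2.equalizeHist(img)                               # normalize contrast
edges = cv2.Canny(img, threshold1=50, threshold2=150)     # edge map as CNN input
edges = cv2.resize(edges, (32, 32))                       # Cifar-10-style input size
cv2.imwrite("periapical_edges.png", edges)
```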

13 pages, 2230 KiB  
Article
Deep Learning Based Detection Tool for Impacted Mandibular Third Molar Teeth
by Mahmut Emin Celik
Diagnostics 2022, 12(4), 942; https://doi.org/10.3390/diagnostics12040942 - 9 Apr 2022
Cited by 39 | Viewed by 4614
Abstract
Impacted third molars are a common issue at all ages, potentially causing tooth decay, root resorption, and pain. This study aimed to develop a computer-assisted detection system based on deep convolutional neural networks for detecting impacted mandibular third molars on panoramic radiographs and to evaluate the usefulness and accuracy of the proposed solutions across different architectures. A total of 440 panoramic radiographs from 300 patients were randomly divided. Faster R-CNN with a ResNet50, AlexNet, or VGG16 backbone was used as a two-stage technique, and YOLOv3 as a one-stage technique. As a detector, Faster R-CNN yielded a mAP@0.5 of 0.91 with the ResNet50 backbone, while VGG16 and AlexNet showed slightly lower performance: 0.87 and 0.86, respectively. The other detector, YOLOv3, provided the highest detection efficacy, with a mAP@0.5 of 0.96; its recall and precision were 0.93 and 0.88, respectively, supporting its high performance. Considering the findings across the different architectures, the proposed one-stage YOLOv3 detector showed excellent performance for impacted mandibular third molar detection on panoramic radiographs. These promising results indicate that diagnostic tools based on state-of-the-art deep learning models are reliable and robust for clinical decision-making.
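The mAP@0.5 figures reported above rest on an intersection-over-union (IoU) test: a predicted box counts as a true positive when its IoU with a ground-truth box is at least 0.5. A minimal sketch of that test in plain Python, with toy boxes:

```python
# A minimal sketch of the IoU criterion behind mAP@0.5; boxes are toy values.
def iou(box_a, box_b):
    """Boxes as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

pred, gt = (10, 10, 50, 60), (12, 8, 48, 58)
print(iou(pred, gt) >= 0.5)   # True -> counts as a detection at IoU 0.5
```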

11 pages, 1676 KiB  
Article
Comparison of Tongue Characteristics Classified According to Ultrasonographic Features Using a K-Means Clustering Algorithm
by Ariya Chantaramanee, Kazuharu Nakagawa, Kanako Yoshimi, Ayako Nakane, Kohei Yamaguchi and Haruka Tohara
Diagnostics 2022, 12(2), 264; https://doi.org/10.3390/diagnostics12020264 - 21 Jan 2022
Cited by 5 | Viewed by 2424
Abstract
The precise correlations between tongue function and tongue characteristics remain unknown, and no previous studies have attempted machine learning-based classification of tongue ultrasonography findings. This cross-sectional observational study aimed to investigate the relationship between tongue characteristics and function by classifying ultrasound images of the tongue using a K-means clustering algorithm. During 2017–2018, 236 healthy older participants (mean age 70.8 ± 5.4 years) were enrolled. The optimal number of clusters determined by the elbow method was 3. After analysis of tongue thickness and echo intensity plots, tongues were classified into three groups. One-way ANOVA was used to compare tongue function (tongue pressure and oral diadochokinesis for /ta/ and /ka/) among the groups. There were significant differences in all tongue functions among the three groups. Function was worst in participants with the lowest values for tongue thickness and echo intensity (tongue pressure [P = 0.023], /ta/ [P = 0.007], and /ka/ [P = 0.038]). Our results indicate that ultrasonographic classification of tongue characteristics using K-means clustering may aid clinicians in selecting appropriate treatment strategies. Moreover, ultrasonography provides real-time, non-invasive imaging, which can improve patient follow-up both in the clinic and at home.
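A minimal sketch, assuming scikit-learn, of the clustering workflow the study describes (elbow method, then K = 3 on two sonographic features); the data here are random stand-ins, not study data:

```python
# A minimal sketch: K-means on (thickness, echo intensity) pairs with the elbow method.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(236, 2))   # stand-in for (thickness, echo intensity) per participant

inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(1, 8)]        # plot these; the "elbow" suggests k
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)  # k = 3 as in the study
print(km.labels_[:10])                   # cluster assignment per participant
```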

11 pages, 2734 KiB  
Article
Artificial Intelligence Application in Assessment of Panoramic Radiographs
by Łukasz Zadrożny, Piotr Regulski, Katarzyna Brus-Sawczuk, Marta Czajkowska, Laszlo Parkanyi, Scott Ganz and Eitan Mijiritsky
Diagnostics 2022, 12(1), 224; https://doi.org/10.3390/diagnostics12010224 - 17 Jan 2022
Cited by 36 | Viewed by 8007
Abstract
The aim of this study was to assess the reliability of automatic artificial intelligence (AI) evaluation of panoramic radiographs (PRs). Thirty PRs, each covering at least six teeth and allowing assessment of the marginal and apical periodontium, were uploaded to a Diagnocat (LLC Diagnocat, Moscow, Russia) account, and a radiologic report was generated for each as the basis of the automatic evaluation. The same PRs were manually evaluated by three independent evaluators with 12, 15, and 28 years of experience in dentistry, respectively. The data were collected in a form allowing statistical analysis with SPSS Statistics software (IBM, Armonk, NY, USA). A total of 90 reports were created for the 30 PRs. The AI protocol showed very high specificity (above 0.9) in all assessments compared with the ground truth, except for periodontal bone loss. Statistical analysis showed a high intraclass correlation coefficient (ICC > 0.75) for all inter-evaluator assessments, supporting the credibility of the ground truth and the reproducibility of the reports. Reliability was unacceptable for caries assessment (ICC = 0.681) and periapical lesion assessment (ICC = 0.619). The tested AI system can be helpful as an initial screening evaluation of PRs, providing reports of appropriate credibility and suggesting additional diagnostic methods for more accurate evaluation where needed.

15 pages, 3437 KiB  
Article
Deep Learning-Based Microscopic Diagnosis of Odontogenic Keratocysts and Non-Keratocysts in Haematoxylin and Eosin-Stained Incisional Biopsies
by Roopa S. Rao, Divya B. Shivanna, Kirti S. Mahadevpur, Sinchana G. Shivaramegowda, Spoorthi Prakash, Surendra Lakshminarayana and Shankargouda Patil
Diagnostics 2021, 11(12), 2184; https://doi.org/10.3390/diagnostics11122184 - 24 Nov 2021
Cited by 6 | Viewed by 2840
Abstract
Background: The goal of the study was to create a histopathology image classification automation system that could identify odontogenic keratocysts in hematoxylin and eosin-stained jaw cyst sections. Methods: About 2657 microscopic pictures at 400× magnification were obtained from 54 odontogenic keratocysts, 23 dentigerous cysts, and 20 radicular cysts. The images were annotated by a pathologist and categorized into epithelium, cystic lumen, and stroma of keratocysts and non-keratocysts. Preprocessing was performed in two steps: first, data augmentation, as deep learning techniques (DLT) improve their performance with increased data size; second, selection of the epithelial region as the region of interest. Results: Four experiments were conducted using the DLT. In the first, a pretrained VGG16 was employed for classification after image augmentation. In the second, DenseNet-169 was implemented for image classification on the augmented images. In the third, DenseNet-169 was trained on the two-step preprocessed images. In the last experiment, the results of the second and third experiments were averaged, yielding an accuracy of 93% on OKC and non-OKC images. Conclusions: The proposed algorithm may fit into an automation system for OKC and non-OKC diagnosis. Utmost care was taken in the manual process of image acquisition (a minimum of 28–30 images/slide at 40× magnification covering the entire stretch of the epithelium and stromal component). Further, there is scope to improve the accuracy rate and remove human bias by using a whole-slide imaging scanner for image acquisition.
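The transfer-learning setup used in experiments like these can be sketched briefly; the following assumes PyTorch/torchvision, and freezing the backbone is an illustrative choice, not necessarily the authors' exact configuration:

```python
# A minimal sketch of transfer learning with a pretrained DenseNet-169.
# The frozen backbone and two-class head are assumptions; the data pipeline
# and training loop are omitted.
import torch.nn as nn
from torchvision import models

model = models.densenet169(weights="IMAGENET1K_V1")   # ImageNet-pretrained backbone
for p in model.parameters():
    p.requires_grad = False                           # freeze learned features
model.classifier = nn.Linear(model.classifier.in_features, 2)  # OKC vs non-OKC head
```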

16 pages, 3634 KiB  
Article
Oral Cancer Discrimination and Novel Oral Epithelial Dysplasia Stratification Using FTIR Imaging and Machine Learning
by Rong Wang, Aparna Naidu and Yong Wang
Diagnostics 2021, 11(11), 2133; https://doi.org/10.3390/diagnostics11112133 - 17 Nov 2021
Cited by 9 | Viewed by 2619
Abstract
The Fourier transform infrared (FTIR) imaging technique was used in transmission mode for the evaluation of twelve oral hyperkeratosis (HK), eleven oral epithelial dysplasia (OED), and eleven oral squamous cell carcinoma (OSCC) biopsy samples in the fingerprint region of 1800–950 cm⁻¹. A series of 100 µm × 100 µm FTIR imaging areas were defined in each sample section with reference to the hematoxylin and eosin staining image of an adjacent section of the same sample. After outlier removal, signal preprocessing, and cluster analysis, a representative spectrum was generated for the epithelial tissue in each area. Two representative spectra were selected from each sample to reflect intra-sample heterogeneity, resulting in a total of 68 representative spectra from 34 samples for further analysis. Exploratory analyses using principal component analysis and hierarchical cluster analysis showed good separation between the HK and OSCC spectra and overlaps of OED spectra with either HK or OSCC spectra. Three machine learning discriminant models based on partial least squares discriminant analysis (PLSDA), support vector machine discriminant analysis (SVMDA), and extreme gradient boosting discriminant analysis (XGBDA) were trained using 46 representative spectra from 12 HK and 11 OSCC samples. The PLSDA model achieved 100% sensitivity and 100% specificity, while the SVMDA and XGBDA models both achieved 95% sensitivity and 96% specificity. The PLSDA model was further used to classify the 11 OED samples as HK-grade (6), OSCC-grade (4), or borderline (1) based on their FTIR spectral similarity to the HK or OSCC cases, providing a potential risk stratification strategy for precancerous OED samples. The results of the current study support the application of the FTIR-machine learning technique in early oral cancer detection.
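PLS-DA as used above is commonly implemented as PLS regression against coded class labels with a decision threshold; a minimal sketch with scikit-learn and toy spectra (the component count and threshold are illustrative):

```python
# A minimal PLS-DA sketch: PLS regression on 0/1 labels, thresholded at 0.5.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

X = np.random.rand(46, 400)        # stand-in for preprocessed FTIR spectra
y = np.array([0] * 24 + [1] * 22)  # 0 = HK, 1 = OSCC (two spectra per sample)

pls = PLSRegression(n_components=5).fit(X, y)
y_hat = (pls.predict(X).ravel() >= 0.5).astype(int)  # discriminant decision
print((y_hat == y).mean())                           # training accuracy on toy data
```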

12 pages, 1096 KiB  
Article
Deep Learning for Caries Detection and Classification
by Luya Lian, Tianer Zhu, Fudong Zhu and Haihua Zhu
Diagnostics 2021, 11(9), 1672; https://doi.org/10.3390/diagnostics11091672 - 13 Sep 2021
Cited by 80 | Viewed by 8100
Abstract
Objectives: Deep learning methods have achieved impressive diagnostic performance in the field of radiology. The current study aimed to use deep learning methods to detect caries lesions, classify different radiographic extensions on panoramic films, and compare the classification results with those of expert dentists. Methods: A total of 1160 dental panoramic films were evaluated by three expert dentists. All caries lesions in the films were marked with circles, whose combination was defined as the reference dataset. A training and validation dataset (n = 1071) and a test dataset (n = 89) were then established from the reference dataset. A convolutional neural network, nnU-Net, was applied to detect caries lesions, and DenseNet121 was applied to classify the lesions according to their depths (lesions in the outer, middle, or inner third of the dentin: D1/D2/D3). The performance of the trained nnU-Net and DenseNet121 models on the test dataset was compared with the results of six expert dentists in terms of the intersection over union (IoU), Dice coefficient, accuracy, precision, recall, negative predictive value (NPV), and F1-score metrics. Results: nnU-Net yielded caries lesion segmentation IoU and Dice coefficient values of 0.785 and 0.663, respectively, and its accuracy and recall were 0.986 and 0.821, respectively. The results of the expert dentists and the neural network did not differ in terms of accuracy, precision, recall, NPV, and F1-score. For caries depth classification, DenseNet121 showed an overall accuracy of 0.957 for D1 lesions, 0.832 for D2 lesions, and 0.863 for D3 lesions; the corresponding recall values for D1/D2/D3 lesions were 0.765, 0.652, and 0.918, respectively. All metric values, including accuracy, precision, recall, NPV, and F1-score, were statistically indistinguishable from those of the experienced dentists. Conclusion: In detecting and classifying caries lesions on dental panoramic radiographs, the performance of deep learning methods was similar to that of expert dentists. The impact of applying these well-trained neural networks to disease diagnosis and treatment decision-making should be explored.
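The IoU and Dice coefficient reported for segmentation can be computed directly on binary masks; a minimal NumPy sketch with toy masks standing in for the nnU-Net output and the reference annotation:

```python
# A minimal sketch of segmentation overlap metrics on toy binary masks.
import numpy as np

pred = np.zeros((64, 64), bool); pred[10:30, 10:30] = True   # predicted lesion
gt = np.zeros((64, 64), bool);   gt[12:32, 12:32] = True     # reference lesion

inter = np.logical_and(pred, gt).sum()
union = np.logical_or(pred, gt).sum()
iou = inter / union
dice = 2 * inter / (pred.sum() + gt.sum())
print(f"IoU={iou:.3f}  Dice={dice:.3f}")
```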

9 pages, 1938 KiB  
Article
Artificial Intelligence Model to Detect Real Contact Relationship between Mandibular Third Molars and Inferior Alveolar Nerve Based on Panoramic Radiographs
by Tianer Zhu, Daqian Chen, Fuli Wu, Fudong Zhu and Haihua Zhu
Diagnostics 2021, 11(9), 1664; https://doi.org/10.3390/diagnostics11091664 - 11 Sep 2021
Cited by 28 | Viewed by 3877
Abstract
This study aimed to develop a novel detection model for automatically assessing the real contact relationship between mandibular third molars (MM3s) and the inferior alveolar nerve (IAN) on panoramic radiographs processed with deep learning networks, minimizing pseudo-contact interference and reducing the frequency of cone beam computed tomography (CBCT) use. A deep learning network based on YOLOv4, named MM3-IANnet, was applied to oral panoramic radiographs for the first time. The relationship between MM3s and the IAN in CBCT was considered the real contact relationship. Accuracy metrics were calculated to evaluate and compare the performance of MM3-IANnet, dentists, and a cooperative approach combining dentists with MM3-IANnet. Our results showed that, compared with detection by dentists (AP = 76.45%) or MM3-IANnet alone (AP = 83.02%), the cooperative dentist–MM3-IANnet approach yielded the highest average precision (AP = 88.06%). In conclusion, MM3-IANnet is an encouraging artificial intelligence approach that may assist dentists in detecting the real contact relationship between MM3s and the IAN on panoramic radiographs.

9 pages, 1877 KiB  
Communication
Automatized Detection and Categorization of Fissure Sealants from Intraoral Digital Photographs Using Artificial Intelligence
by Anne Schlickenrieder, Ole Meyer, Jule Schönewolf, Paula Engels, Reinhard Hickel, Volker Gruhn, Marc Hesenius and Jan Kühnisch
Diagnostics 2021, 11(9), 1608; https://doi.org/10.3390/diagnostics11091608 - 3 Sep 2021
Cited by 12 | Viewed by 6377
Abstract
The aim of the present study was to investigate the diagnostic performance of a trained convolutional neural network (CNN) for detecting and categorizing fissure sealants on intraoral photographs, using the expert standard as reference. An image set consisting of 2352 digital photographs of permanent posterior teeth (461 unsealed tooth surfaces/1891 sealed surfaces) was divided into a training set (n = 1881/364/1517) and a test set (n = 471/97/374). All images were scored according to the following categories: unsealed molar and intact, sufficient, or insufficient sealant. Expert diagnoses served as the reference standard for cyclic training and repeated evaluation of the CNN (ResNeXt-101-32x8d), which was trained using image augmentation and transfer learning. A statistical analysis was performed, including the calculation of contingency tables and areas under the receiver operating characteristic curve (AUC). The results showed that the CNN accurately detected sealants in 98.7% of all test images, corresponding to an AUC of 0.996. The diagnostic accuracy and AUC were 89.6% and 0.951, respectively, for intact sealant; 83.2% and 0.888, respectively, for sufficient sealant; and 92.4% and 0.942, respectively, for insufficient sealant. On the basis of these results, it was concluded that good agreement with the reference standard can be achieved by automated sealant detection using artificial intelligence methods. Nevertheless, further research is necessary to improve model performance.

11 pages, 1215 KiB  
Article
Machine Learning Study in Caries Markers in Oral Microbiota from Monozygotic Twin Children
by Esther Alia-García, Manuel Ponce-Alonso, Claudia Saralegui, Ana Halperin, Marta Paz Cortés, María Rosario Baquero, David Parra-Pecharromán, Javier Galeano and Rosa del Campo
Diagnostics 2021, 11(5), 835; https://doi.org/10.3390/diagnostics11050835 - 6 May 2021
Cited by 3 | Viewed by 2443
Abstract
In recent years, the understanding of caries etiology has evolved from a simplistic infectious perspective based on Streptococcus mutans and/or Lactobacillus activity to that of a multifactorial disease involving a complex oral microbiota, the human genetic background, and the environment. The aim of this work was to identify bacterial markers associated with early caries using massive 16S rDNA sequencing. To minimize the influence of other factors, the oral microbiota composition of twins in which only one sibling had caries was compared with that of the healthy sibling. Twenty-one monozygotic twin pairs without a previous diagnosis of caries were recruited in the context of their orthodontic treatment and divided into two categories: (1) a caries group, in which only one of the twins had caries; and (2) a control group, in which neither twin had caries. Each participant contributed a single oral lavage sample, in which the bacterial composition was determined by 16S rDNA amplification and high-throughput sequencing. Data analysis included statistical comparison of alpha and beta diversity, as well as differential taxon abundance between groups. Our results show that twins in the control group have a more similar bacterial composition than those in the caries group. However, statistical differences were not detected, and we were unable to find any particular bacterial marker by 16S rDNA high-throughput sequencing that could be useful for prevention strategies. Although these results should be validated in a larger population, including children from other places or ethnicities, we conclude that the occurrence of caries is not related to an increase in any particular bacterial population.
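The alpha/beta diversity comparison mentioned above can be sketched with SciPy on toy abundance counts; the Shannon index and Bray-Curtis dissimilarity shown here are common choices for such analyses, not necessarily the exact metrics the authors used:

```python
# A minimal sketch of alpha diversity (Shannon) and beta diversity (Bray-Curtis)
# on toy 16S taxa counts for a twin pair.
import numpy as np
from scipy.spatial.distance import braycurtis

counts = np.array([[120, 30, 5, 0],
                   [80, 60, 10, 2]], float)   # taxa counts per twin (toy data)

def shannon(row):
    p = row[row > 0] / row.sum()
    return -(p * np.log(p)).sum()             # alpha diversity per sample

print([round(shannon(r), 3) for r in counts])
print(round(braycurtis(counts[0], counts[1]), 3))  # beta diversity between twins
```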

11 pages, 2355 KiB  
Article
Deep Active Learning for Automatic Segmentation of Maxillary Sinus Lesions Using a Convolutional Neural Network
by Seok-Ki Jung, Ho-Kyung Lim, Seungjun Lee, Yongwon Cho and In-Seok Song
Diagnostics 2021, 11(4), 688; https://doi.org/10.3390/diagnostics11040688 - 12 Apr 2021
Cited by 31 | Viewed by 4303
Abstract
The aim of this study was to segment the maxillary sinus into maxillary bone, air, and lesion, and to evaluate the accuracy of the segmentation by comparison with expert results. We randomly selected 83 cases for deep active learning. Our active learning framework consists of three steps; at each step, new volumes are added to improve the performance of the model despite limited training datasets, with automatic inference performed using the model trained in the previous step. We evaluated the effect of active learning on dental cone-beam computed tomography (CBCT) volumes using our customized 3D nnU-Net across all three steps. The Dice similarity coefficients (DSCs) at each step for air were 0.920 ± 0.17, 0.925 ± 0.16, and 0.930 ± 0.16, respectively. The DSCs at each step for the lesion were 0.770 ± 0.18, 0.750 ± 0.19, and 0.760 ± 0.18, respectively. The time consumed by convolutional neural network (CNN)-assisted, manually modified segmentation decreased by approximately 493.2 s for 30 scans in the second step and by approximately 362.7 s for 76 scans in the last step. In conclusion, this study demonstrates that a deep active learning framework can alleviate annotation effort and cost by training efficiently on limited CBCT datasets.
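The stepwise active-learning loop described above can be summarized structurally; in the sketch below, train, predict, and expert_correct are stand-in stubs and the batch sizes are illustrative, since the point is the loop shape rather than the nnU-Net internals:

```python
# A minimal, self-contained sketch of a stepwise active-learning loop:
# train on the annotated pool, auto-segment a new batch, have experts correct
# the drafts, then retrain. All three helpers are illustrative stubs.
def train(pool):                    # stub for model (re)training
    return {"seen": len(pool)}

def predict(model, volume):         # stub for CNN-assisted draft segmentation
    return f"draft({volume})"

def expert_correct(volume, draft):  # stub for manual correction of the draft
    return f"gt({volume})"

labeled = [f"vol{i}" for i in range(10)]        # initial annotated CBCT scans
unlabeled = [f"vol{i}" for i in range(10, 83)]  # remainder of the 83 cases
for step in range(3):                           # three steps, as in the study
    model = train(labeled)
    batch, unlabeled = unlabeled[:30], unlabeled[30:]
    labeled += [expert_correct(v, predict(model, v)) for v in batch]
print(len(labeled))                             # 83 once all volumes are annotated
```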

12 pages, 4936 KiB  
Article
Deep-Learning-Based Detection of Cranio-Spinal Differences between Skeletal Classification Using Cephalometric Radiography
by Seung Hyun Jeong, Jong Pil Yun, Han-Gyeol Yeom, Hwi Kang Kim and Bong Chul Kim
Diagnostics 2021, 11(4), 591; https://doi.org/10.3390/diagnostics11040591 - 25 Mar 2021
Cited by 8 | Viewed by 3077
Abstract
The aim of this study was to reveal cranio-spinal differences between skeletal classes using convolutional neural networks (CNNs). Transverse and longitudinal cephalometric images of 832 patients (365 males and 467 females) were used for training and testing the CNNs. Labeling was performed such that the jawbone was sufficiently masked, while the parts other than the jawbone were minimally masked. DenseNet was used as the feature extractor. Five random-sampling cross-validations were performed for two datasets. The average and maximum accuracies of the five cross-validations were 90.43% and 92.54% for test 1 (evaluation of the entire posterior-anterior (PA) and lateral cephalometric images) and 88.17% and 88.70% for test 2 (evaluation of the PA and lateral cephalometric images with the mandible obscured). In this study, we found that even when the jawbones of class I (normal mandible), class II (retrognathism), and class III (prognathism) are masked, the classes can be identified through deep learning applied only to the cranio-spinal area. This suggests that cranio-spinal differences exist between the classes.

Review


12 pages, 718 KiB  
Review
Application and Performance of Artificial Intelligence Technology in Oral Cancer Diagnosis and Prediction of Prognosis: A Systematic Review
by Sanjeev B. Khanagar, Sachin Naik, Abdulaziz Abdullah Al Kheraif, Satish Vishwanathaiah, Prabhadevi C. Maganur, Yaser Alhazmi, Shazia Mushtaq, Sachin C. Sarode, Gargi S. Sarode, Alessio Zanza, Luca Testarelli and Shankargouda Patil
Diagnostics 2021, 11(6), 1004; https://doi.org/10.3390/diagnostics11061004 - 31 May 2021
Cited by 47 | Viewed by 7911
Abstract
Oral cancer (OC) is a deadly disease with high mortality and a complex etiology. Artificial intelligence (AI) is one of the outstanding innovations in technology used in dental science. This paper reports on the application and performance of AI in the diagnosis of OC and the prediction of its occurrence. We carried out an electronic search in several renowned databases, mainly PubMed, Google Scholar, Scopus, Embase, Cochrane, Web of Science, and the Saudi Digital Library, for articles published between January 2000 and March 2021. We included 16 articles that met the eligibility criteria, which were critically analyzed using QUADAS-2. AI can precisely analyze enormous datasets of images (fluorescent, hyperspectral, cytology, CT images, etc.) to diagnose OC. AI can also predict the occurrence of OC more accurately than conventional methods by analyzing predisposing factors such as age, gender, tobacco habits, and biomarkers. The precision and accuracy of AI in diagnosis and in predicting occurrence are higher than those of current clinical strategies, as well as of conventional statistics such as Cox regression analysis and logistic regression.
