Article

Deep Learning Based Detection of Missing Tooth Regions for Dental Implant Planning in Panoramic Radiographic Images

1 Gwangju Institute of Science and Technology (GIST), School of Integrated Technology (SIT), Gwangju 61005, Korea
2 Department of Oral and Maxillofacial Surgery, College of Dentistry, Chosun University, Gwangju 61452, Korea
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2022, 12(3), 1595; https://doi.org/10.3390/app12031595
Submission received: 23 December 2021 / Revised: 28 January 2022 / Accepted: 30 January 2022 / Published: 2 February 2022
(This article belongs to the Section Applied Dentistry and Oral Sciences)

Abstract

Dental implantation is a surgical procedure in oral and maxillofacial surgery, and detecting missing tooth regions is essential for planning implant placement. This study proposes an automated method that detects regions of missing teeth in panoramic radiographic images. Accurate detection of a missing tooth region requires tooth instance segmentation, because panoramic radiographic images may contain obstacles such as dental appliances or restorations. We therefore constructed a dataset of 455 panoramic radiographic images with annotations for both tooth instance segmentation and missing tooth region detection. First, the segmentation model segments the teeth in the panoramic radiographic image and generates teeth masks. Second, a detection model takes the teeth masks as input and predicts regions of missing teeth. Finally, the detection model identifies the position and tooth number of the missing teeth in the panoramic radiographic image. We achieved 92.14% mean Average Precision (mAP) for tooth instance segmentation and 59.09% mAP for missing tooth region detection. As a result, this method can assist clinicians in detecting missing tooth regions for implant placement.

1. Introduction

Dental implant placement is a common surgical procedure in oral and maxillofacial surgery [1]. Prior to implant placement surgery, it is essential to establish a surgical plan [2,3,4]. Generally, an implant placement plan is created using a patient’s panoramic radiographic image or cone beam computed tomography (CBCT) image [5,6,7]. Implant placement planning involves finding missing tooth regions and determining a suitable implant product for those regions [8,9]. Therefore, missing tooth region detection precedes implant placement, and automatic detection of missing tooth regions is essential for developing an automatic implant placement plan.
A panoramic X-ray is, along with CBCT, one of the most commonly used diagnostic tools in modern dentistry because it is more cost- and time-efficient than CBCT [10,11,12,13,14,15,16,17]. Furthermore, in some cases only a panoramic radiographic image, rather than CBCT, is used for diagnosis because of the cost of the CBCT device and its imaging [18]. We therefore used panoramic radiographic images in this study, which also reduces the amount of computation.
Due to the lack of automated technologies, clinicians have had to plan dental implant placement manually, and the resulting diagnostic fatigue and burden on clinicians have steadily increased. Therefore, several studies have attempted to automatically detect missing tooth regions in order to generate an implant placement plan [8,9]. One study used deep learning to detect missing teeth in CBCT images for implant placement planning; because the missing tooth regions are predicted from surrounding information, such as the location, tilt, and placement of adjacent teeth, the prediction may be inaccurate when consecutive teeth are lost [8]. Another used a deep neural network to detect a missing left first molar in panoramic radiographic images and generated the position and axis of the implant in a 3D simulation [9]. However, these two deep learning studies can only detect the missing region of a specific tooth and are limited in detecting multiple missing tooth regions simultaneously [8,9]. In actual clinical practice, multiple implants are frequently placed at the same time. Therefore, detecting multiple missing tooth regions is necessary to establish multiple implant placement plans.
Deep learning is a type of computing method that focuses on learning successive layers of increasingly meaningful representations from data [19]. Therefore, a deep learning model can extract meaningful representations from not only linear data but also complex data. As a result, deep learning is applied to data classification, segmentation, and detection in various domains, such as images and signals [20,21,22,23,24,25,26]. In particular, deep learning demonstrates high performance in medical imaging [27,28,29,30]. Deep learning is also used to assist with various tasks, including caries detection and third molar extraction, in dentistry [14,15,31,32].
In this work, we propose a deep learning method that detects missing tooth regions in panoramic images as a step in dental implant planning. The method consists of tooth instance segmentation, which segments all teeth except the third molars by instance in panoramic radiographic images, and missing tooth region detection, which identifies the regions of multiple missing teeth. Previously, no dataset covered both tooth instance segmentation and missing tooth region detection at the same time; therefore, we built such a dataset. Furthermore, this study can support the production of dental implant surgical guides, diagnostic assistance for clinicians, and the education of unskilled apprentices.
The goal of this work is to detect missing tooth regions as a process of automated dental implant placement. We propose a deep-learning-based method for the detection of missing tooth regions and tooth instance segmentation in panoramic radiographic images. The main contributions of this study are summarized as follows:
  • We propose a method that simultaneously detects multiple missing tooth regions for dental implant placement planning using a panoramic radiographic image.
  • We constructed a dataset with annotations for both tooth instance segmentation and missing tooth region detection.
  • By using a dataset composed of various panoramic radiographic images, we ensure consistent performance of our method.

2. Materials and Methods

2.1. Dataset

This study was approved by the Institutional Review Board (IRB) of the Chosun University Dental Hospital (CUDHIRB 2005008) and the Gwangju Institute of Science and Technology (20210217-HR-59-01-02). The dataset combines a public dataset with data we acquired. The public dataset contains 386 panoramic radiographic X-ray images, including images with missing teeth and teeth with restorations and dental appliances [33]. In addition, the data we acquired contain 69 panoramic radiographic X-ray images of patients from Chosun University Dental Hospital. The images exclude the patients’ personal information and were obtained from various panorama devices and settings. Of the 455 panoramic radiographic images, 348 have ground truth for tooth instance segmentation, while 107 have ground truth for both tooth instance segmentation and missing tooth region detection. Therefore, 348 images were used for training, whereas 107 images were used to evaluate the performance of the model. Overall, the dataset was randomly split into 77.5% for training, 7.5% for validation, and 15% for testing.
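As a concrete illustration, the random split described above can be sketched as follows. The function name, the random seed, and the exact rounding scheme are assumptions for illustration, not details reported in the paper.

```python
import random

def split_dataset(image_ids, ratios=(0.775, 0.075, 0.15), seed=42):
    """Randomly split image IDs into train/validation/test subsets.

    The ratios mirror the 77.5%/7.5%/15% split used in this study;
    the seed and rounding are illustrative assumptions.
    """
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    n_train = round(len(ids) * ratios[0])
    n_val = round(len(ids) * ratios[1])
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test

# Example: split the 455 images in this study's dataset.
train, val, test = split_dataset(range(455))
```

Because the split is driven by a fixed seed, the same partition can be reproduced across experiments.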

2.1.1. Tooth Instance Segmentation Dataset

This dataset was used to train and evaluate models for segmenting teeth in the panoramic radiographic image on an instance-by-instance basis. The dataset contains panoramic radiographic images from various cases in which teeth have been lost or which include dental appliances and restorations. Figure 1 shows panoramic radiographic images including dental appliances and restorations. A panoramic radiographic image and valid tooth labels are required to train the tooth instance segmentation model. Therefore, the dataset was constructed by labeling each tooth with a polygon and tagging its tooth number. Labeling was performed on the 28 teeth excluding the third molars, and tooth numbers were assigned according to the Fédération Dentaire Internationale (FDI) notation.

2.1.2. Missing Tooth Regions Detection Dataset

The performance of a deep learning model depends on the amount of training data. Therefore, we constructed a dataset for missing tooth region detection through synthetic data generation. The synthetic dataset was built from teeth masks generated by tooth instance segmentation for 170 patients, as shown in Figure 2. In the teeth masks, pixel values are assigned according to each tooth number. Synthetic teeth masks are generated by randomly removing up to 10 teeth from a mask containing all 28 teeth except the third molars. The ground truths for missing tooth regions are constructed as bounding boxes using the image positions of the removed teeth. As a result, our dataset contains 37,323 synthetic images generated from 170 teeth masks.
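The synthetic-mask generation step above can be sketched as follows. The data structures and function names are illustrative assumptions: teeth masks are represented as a mapping from FDI tooth number to a per-tooth mask, with a parallel mapping to each tooth's bounding box.

```python
import random

# FDI numbers for the 28 teeth, excluding the third molars (#18, #28, #38, #48).
FDI_TEETH = [quadrant * 10 + position
             for quadrant in (1, 2, 3, 4) for position in range(1, 8)]

def make_synthetic_sample(teeth_masks, teeth_boxes, max_removed=10, rng=random):
    """Generate one synthetic training sample by removing random teeth.

    `teeth_masks` maps FDI number -> per-tooth mask and `teeth_boxes`
    maps FDI number -> bounding box; both structures are illustrative.
    The removed teeth's boxes become the missing-tooth-region ground truth.
    """
    n_removed = rng.randint(1, max_removed)
    removed = rng.sample(FDI_TEETH, n_removed)
    # Input: the teeth mask with the chosen teeth deleted.
    input_masks = {t: m for t, m in teeth_masks.items() if t not in removed}
    # Ground truth: bounding boxes of the deleted teeth.
    ground_truth = {t: teeth_boxes[t] for t in removed}
    return input_masks, ground_truth
```

Repeating this sampling over the 170 full-dentition masks yields a large pool of synthetic training pairs without additional annotation effort.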

2.2. Tooth Instance Segmentation Model

The tooth instance segmentation model segments the 28 teeth, excluding the third molars, in the panoramic radiographic image. For tooth instance segmentation, we used Mask R-CNN, which exhibits high performance in image segmentation tasks [34]. ResNet-101 is used as the backbone of the segmentation model [35]. ResNet-101 is a deep neural network architecture commonly used to extract feature maps for image-based tasks such as segmentation or object detection. It consists of 101 layers and contains residual blocks, which mitigate the degradation problem by learning the residual between input and output. Figure 3 shows a schematic of the ResNet-101 architecture, and Figure 4 shows the teeth segmented by instance in the panoramic radiographic image. The teeth mask generated by the tooth instance segmentation model is used as the input to the missing tooth region detection model.
For data augmentation, brightness, contrast, and saturation perturbations were applied to images randomly. The segmentation model was trained using an SGD optimizer with a learning rate of 1 × 10⁻², a batch size of 4, and the Smooth L1 loss function. It was trained for 100,000 iterations and evaluated every 500 iterations; the model has 60.6 M parameters.

2.3. Missing Tooth Region Detection Model

Missing tooth region detection is required for implant placement in the panoramic radiographic image. Through this model, all missing tooth regions, except the third molars, are detected simultaneously. Missing tooth regions are detected as bounding boxes and assigned tooth numbers (#11–#47). For missing tooth region detection, we used Faster R-CNN, which exhibits high performance in object detection. The detection model’s backbone is ResNet-101. The model was trained on the synthetic data, and real data were used for evaluation. To improve training efficiency, the input image was resized to 600 × 300. Figure 4 shows the entire process of detecting missing tooth regions in the panoramic radiographic image. The detection model was trained using an SGD optimizer with a learning rate of 1 × 10⁻², a batch size of 32, and the Smooth L1 loss function. It was trained for 100,000 iterations and evaluated every 500 iterations; the model has 63.3 M parameters.
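Resizing the detector input to 600 × 300, as described above, also requires scaling the ground-truth boxes to the new resolution. A minimal sketch, with a hypothetical helper name:

```python
def resize_box(box, src_size, dst_size=(600, 300)):
    """Scale an (x1, y1, x2, y2) box when its image is resized.

    `src_size` and `dst_size` are (width, height); the 600 x 300 default
    matches the detector input size used in this study. The helper name
    is an illustrative assumption.
    """
    sx = dst_size[0] / src_size[0]
    sy = dst_size[1] / src_size[1]
    x1, y1, x2, y2 = box
    return (x1 * sx, y1 * sy, x2 * sx, y2 * sy)
```

Applying the same scale factors to every annotation keeps boxes aligned with the resized image regardless of the original panorama resolution.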

2.4. Evaluation Metrics

The evaluation of the two modules is connected because the missing tooth region detection model takes the segmented masks from the segmentation model as input. The mean Average Precision (mAP) was used to evaluate both the segmentation and detection models. mAP is the average of the per-class AP values, where AP is the area under the Precision–Recall curve. A prediction is considered correct when the Intersection over Union (IoU) between the ground truth and the prediction exceeds a threshold. mAP (0.5) counts a prediction as correct when IoU > 0.5, and mAP (0.5:0.95) averages the performance over IoU thresholds from 0.5 to 0.95 in steps of 0.05.
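The IoU criterion and the threshold grid for mAP (0.5:0.95) can be written out directly; this is a generic sketch of the standard definitions, not code from the study.

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Thresholds averaged over for mAP (0.5:0.95): 0.50, 0.55, ..., 0.95.
IOU_THRESHOLDS = [round(0.5 + 0.05 * i, 2) for i in range(10)]
```

A prediction matched at IoU 0.6, for example, counts as correct for mAP (0.5) and for the first three thresholds of mAP (0.5:0.95), which is why the latter metric is strictly harder.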

3. Results

In this section, the performances of the tooth instance segmentation model and missing tooth region detection model are evaluated. mAP (0.5) and mAP (0.5:0.95) are used for evaluating the performance of both models.

3.1. Tooth Instance Segmentation Model

The performance of the segmentation model was evaluated on panoramic radiographic images. As shown in Table 1, the segmentation model achieves 92.14% mAP (0.5) and 76.78% mAP (0.5:0.95). Furthermore, Table 2 shows the performance for each tooth number (#11–#47), where #N indicates the tooth number. The segmentation performance for the second molars is lower than for the other teeth: because the third molar’s appearance and location are similar to those of the second molar, the segmentation model confuses the two. Figure 5 visualizes the segmentation results.

3.2. Missing Tooth Regions Detection Model

We also evaluated the missing tooth region detection model on panoramic radiographic images. The detection model uses the teeth masks obtained from the tooth instance segmentation model. Table 3 shows the performance of the model: 59.09% mAP (0.5) and 20.40% mAP (0.5:0.95). In addition, Table 4 shows the performance for each tooth number (#11–#47), where #N indicates the tooth number. Because actual patient data were used as the test set, tooth numbers not present in the test set have no result in Table 4. The performance for the second molars is lower than for the other teeth: because the second molars sit at the end of the dental arch, there are often no adjacent teeth to guide the detection. Figure 6 visualizes the detection results.

4. Discussion

Recently, deep learning has been widely applied in the medical field [27,28,29,30]. In particular, deep learning demonstrates high performance in image classification, segmentation, and detection. Therefore, deep learning is used to diagnose diseases, such as specific diseases and abnormal signs in X-ray, CT, and MRI images [36,37,38,39]. In addition, previous studies demonstrate that deep learning exhibits a reliable performance in the diagnosis of dental disease [40,41,42,43].
Deep learning is a data-driven method of training. However, there are no studies and datasets for the detection of missing tooth regions for implant planning in panoramic radiographic images. Therefore, we constructed a dataset for a missing tooth region detection model and a tooth instance segmentation model.
Several studies have applied deep learning to detect missing tooth regions and segment various anatomical structures for implant planning [8,9]. Bayrakdar et al. developed an AI system that detects the canal, sinus, fossa, and missing teeth in CBCT images for implant planning [8]. Liu et al. proposed a deep learning method for implant planning of the mandibular left first molar [9]. These previous studies established implant placement plans using CBCT images [8,9]. However, CBCT is more expensive than a panoramic radiographic X-ray. Additionally, these methods are not fully automated and can only detect specific missing tooth regions, so they are limited in detecting multiple missing tooth regions simultaneously [8,9].
In clinical practice, implant placement is frequently performed on several teeth at the same time. Thus, we developed a detection model that identifies multiple missing tooth regions simultaneously for implant placement using panoramic radiographic images. Tooth instance segmentation was performed to improve missing tooth region detection by segmenting only the teeth in the panoramic radiographic images. As a result, tooth instance segmentation achieved 92.14% mAP (0.5) and 76.78% mAP (0.5:0.95), while missing tooth region detection achieved 59.09% mAP (0.5) and 20.40% mAP (0.5:0.95). Since oral structure and tooth size vary from person to person, the detection performance for missing tooth regions was limited.
A sufficient amount of data is required to train a high-performance deep neural network, yet collecting and annotating such data is labor-intensive and expensive. Therefore, various previous studies have improved performance by using synthetic data when sufficient real data were unavailable [44,45,46,47,48,49]. Thus, we generated 37,323 synthetic images and used them to achieve high performance. However, the synthetic dataset was created from only 170 teeth masks and is therefore limited in diversity. If more real data are accumulated, we expect the detection performance for missing tooth regions to improve.
For automatic dental implant placement planning, future work is required to develop a method that generates the position, axis, and size of the implant for the missing tooth regions in the panoramic radiographic image. Using a panoramic radiographic image in a deep learning model reduces the computational cost compared with CBCT images; however, CBCT remains essential for planning the dental implant placement itself. Future work is therefore required to generate dental implant placement plans with a deep learning model from the 2D image and then transfer the results to the CBCT image. Through this methodology, we expect to support clinicians in diagnosing more quickly and to provide patients with more transparency in their treatment. In summary, this work addresses the step that precedes generating the implant placement plan, which estimates the position, axis, and size of the implant in the panoramic radiographic image.

5. Conclusions

Detection of missing tooth regions is an essential part of implant placement planning. Therefore, this study proposed a method for detecting missing tooth regions in panoramic radiographic images. The results demonstrate that a deep learning model can contribute greatly to the automation of implant placement. This study also constructed a dataset for tooth instance segmentation and missing tooth region detection in panoramic radiographic images. In the future, more data and improved algorithms will further help automate implant placement.

Author Contributions

Conceptualization, J.P., S.M. and K.L.; data, S.M.; methodology, J.P. and K.L.; validation, J.P. and J.L.; formal analysis J.P., J.L., S.M. and K.L.; investigation, J.P., J.L., S.M. and K.L.; writing—original draft preparation, J.P.; writing—review and editing, J.P., J.L., S.M. and K.L.; supervision, K.L.; project administration, K.L.; funding acquisition, S.M. and K.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Small and Medium Enterprise R&D Sharing Center (SMEBridge), funded by the Ministry of Science and ICT (MSIT), the Republic of Korea, during 2021 (Project No. A0801043001).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Boards of the Chosun University Dental Hospital (CUDHIRB 2005008) and the Gwangju Institute of Science and Technology (20210217-HR-59-01-02).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request, subject to the permission of the Institutional Review Boards of the participating institutions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Elani, H.; Starr, J.; Da Silva, J.; Gallucci, G. Trends in dental implant use in the US, 1999–2016, and projections to 2026. J. Dent. Res. 2018, 97, 1424–1430. [Google Scholar] [CrossRef] [PubMed]
  2. Handelsman, M. Surgical guidelines for dental implant placement. Br. Dent. J. 2006, 201, 139–152. [Google Scholar] [CrossRef] [PubMed]
  3. Sießegger, M.; Schneider, B.T.; Mischkowski, R.A.; Lazar, F.; Krug, B.; Klesper, B.; Zöller, J.E. Use of an image-guided navigation system in dental implant surgery in anatomically complex operation sites. J. Cranio-Maxillofac. Surg. 2001, 29, 276–281. [Google Scholar] [CrossRef] [PubMed]
  4. Spector, L. Computer-aided dental implant planning. Dent. Clin. N. Am. 2008, 52, 761–775. [Google Scholar] [CrossRef] [PubMed]
  5. Jorba-García, A.; Figueiredo, R.; González-Barnadas, A.; Camps-Font, O.; Valmaseda-Castellón, E. Accuracy and the role of experience in dynamic computer guided dental implant surgery: An in-vitro study. Med. Oral Patol. Oral Y Cir. Bucal 2019, 24, e76. [Google Scholar] [CrossRef]
  6. Fokas, G.; Vaughn, V.M.; Scarfe, W.C.; Bornstein, M.M. Accuracy of linear measurements on CBCT images related to presurgical implant treatment planning: A systematic review. Clin. Oral Implant. Res. 2018, 29, 393–415. [Google Scholar] [CrossRef]
  7. Deeb, G.; Antonos, L.; Tack, S.; Carrico, C.; Laskin, D.; Deeb, J.G. Is cone-beam computed tomography always necessary for dental implant placement? J. Oral Maxillofac. Surg. 2017, 75, 285–289. [Google Scholar] [CrossRef] [Green Version]
  8. Bayrakdar, S.K.; Orhan, K.; Bayrakdar, I.S.; Bilgir, E.; Ezhov, M.; Gusarev, M.; Shumilov, E. A deep learning approach for dental implant planning in cone-beam computed tomography images. BMC Med. Imaging 2021, 21, 86. [Google Scholar] [CrossRef]
  9. Liu, Y.; Chen, Z.C.; Chu, C.H.; Deng, F.L. Transfer Learning via Artificial Intelligence for Guiding Implant Placement in the Posterior Mandible: An In Vitro Study. 2021. Available online: https://assets.researchsquare.com/files/rs-986672/v1/a6dedda8-632d-44e0-b417-25552a81c4c7.pdf?c=1642488248 (accessed on 10 January 2022).
  10. Molander, B. Panoramic radiography in dental diagnostics. Swed. Dent. J. Suppl. 1996, 119, 1–26. [Google Scholar]
  11. Katsnelson, A.; Flick, W.G.; Susarla, S.; Tartakovsky, J.V.; Miloro, M. Use of panoramic X-ray to determine position of impacted maxillary canines. J. Oral Maxillofac. Surg. 2010, 68, 996–1000. [Google Scholar] [CrossRef]
  12. Simon, J.H.; Enciso, R.; Malfaz, J.M.; Roges, R.; Bailey-Perry, M.; Patel, A. Differential diagnosis of large periapical lesions using cone-beam computed tomography measurements and biopsy. J. Endod. 2006, 32, 833–837. [Google Scholar] [CrossRef]
  13. Scarfe, W.C.; Levin, M.D.; Gane, D.; Farman, A.G. Use of cone beam computed tomography in endodontics. Int. J. Dent. 2009, 2009, 634567. [Google Scholar] [CrossRef]
  14. Haghanifar, A.; Majdabadi, M.M.; Ko, S.B. Paxnet: Dental caries detection in panoramic X-ray using ensemble transfer learning and capsule classifier. arXiv 2020, arXiv:2012.13666. [Google Scholar]
  15. Yoo, J.H.; Yeom, H.G.; Shin, W.; Yun, J.P.; Lee, J.H.; Jeong, S.H.; Lim, H.J.; Lee, J.; Kim, B.C. Deep learning based prediction of extraction difficulty for mandibular third molars. Sci. Rep. 2021, 11, 1954. [Google Scholar] [CrossRef]
  16. Kim, B.S.; Yeom, H.G.; Lee, J.H.; Shin, W.S.; Yun, J.P.; Jeong, S.H.; Kang, J.H.; Kim, S.W.; Kim, B.C. Deep Learning-Based Prediction of Paresthesia after Third Molar Extraction: A Preliminary Study. Diagnostics 2021, 11, 1572. [Google Scholar] [CrossRef]
  17. Clark, D.; Danforth, R.; Barnes, R.; Burtch, M. Radiation absorbed from dental implant radiography: A comparison of linear tomography, CT scan, and panoramic and intra-oral techniques. J. Oral Implantol. 1990, 16, 156–164. [Google Scholar]
  18. Tang, Z.; Liu, X.; Chen, K. Comparison of digital panoramic radiography versus cone beam computerized tomography for measuring alveolar bone. Head Face Med. 2017, 13, 2. [Google Scholar] [CrossRef] [Green Version]
  19. Chollet, F. Deep Learning with Python; Simon and Schuster: New York, NY, USA, 2021. [Google Scholar]
  20. Seo, H.; Back, S.; Lee, S.; Park, D.; Kim, T.; Lee, K. Intra-and inter-epoch temporal context network (IITNet) using sub-epoch features for automatic sleep scoring on raw single-channel EEG. Biomed. Signal Process. Control 2020, 61, 102037. [Google Scholar] [CrossRef]
  21. Shin, S.; Lee, Y.; Kim, S.; Choi, S.; Kim, J.G.; Lee, K. Rapid and non-destructive spectroscopic method for classifying beef freshness using a deep spectral network fused with myoglobin information. Food Chem. 2021, 352, 129329. [Google Scholar] [CrossRef]
  22. Shvets, A.A.; Rakhlin, A.; Kalinin, A.A.; Iglovikov, V.I. Automatic instrument segmentation in robot-assisted surgery using deep learning. In Proceedings of the 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando, FL, USA, 17–20 December 2018; pp. 624–628. [Google Scholar]
  23. Ramos, S.; Gehrig, S.; Pinggera, P.; Franke, U.; Rother, C. Detecting unexpected obstacles for self-driving cars: Fusing deep learning and geometric modeling. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 1025–1032. [Google Scholar]
  24. Xu, M.; Zhang, Z.; Hu, H.; Wang, J.; Wang, L.; Wei, F.; Bai, X.; Liu, Z. End-to-End Semi-Supervised Object Detection with Soft Teacher. arXiv 2021, arXiv:2106.09018. [Google Scholar]
  25. Dai, X.; Chen, Y.; Xiao, B.; Chen, D.; Liu, M.; Yuan, L.; Zhang, L. Dynamic Head: Unifying Object Detection Heads with Attentions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 7373–7382. [Google Scholar]
  26. Liu, Z.; Hu, H.; Lin, Y.; Yao, Z.; Xie, Z.; Wei, Y.; Ning, J.; Cao, Y.; Zhang, Z.; Dong, L.; et al. Swin Transformer V2: Scaling Up Capacity and Resolution. arXiv 2021, arXiv:2111.09883. [Google Scholar]
  27. Liu, S.; Liu, S.; Cai, W.; Pujol, S.; Kikinis, R.; Feng, D. Early diagnosis of Alzheimer’s disease with deep learning. In Proceedings of the 2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI), Beijing, China, 29 April–2 May 2014; pp. 1015–1018. [Google Scholar]
  28. Panwar, H.; Gupta, P.; Siddiqui, M.K.; Morales-Menendez, R.; Singh, V. Application of deep learning for fast detection of COVID-19 in X-rays using nCOVnet. Chaos Solitons Fractals 2020, 138, 109944. [Google Scholar] [CrossRef]
  29. Back, S.; Lee, S.; Shin, S.; Yu, Y.; Yuk, T.; Jong, S.; Ryu, S.; Lee, K. Robust Skin Disease Classification by Distilling Deep Neural Network Ensemble for the Mobile Diagnosis of Herpes Zoster. IEEE Access 2021, 9, 20156–20169. [Google Scholar] [CrossRef]
  30. Jeong, H.G.; Kim, B.J.; Kim, T.; Kang, J.; Kim, J.Y.; Kim, J.; Kim, J.T.; Park, J.M.; Kim, J.G.; Hong, J.H.; et al. Classification of cardioembolic stroke based on a deep neural network using chest radiographs. EBioMedicine 2021, 69, 103466. [Google Scholar] [CrossRef] [PubMed]
  31. Lee, J.H.; Kim, D.H.; Jeong, S.N.; Choi, S.H. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. J. Dent. 2018, 77, 106–111. [Google Scholar] [CrossRef] [PubMed]
  32. Lee, J.; Park, J.; Moon, S.Y.; Lee, K. Automated Prediction of Extraction Difficulty and Inferior Alveolar Nerve Injury for Mandibular Third Molar Using a Deep Neural Network. Appl. Sci. 2022, 12, 475. [Google Scholar] [CrossRef]
  33. Jader, G.; Fontineli, J.; Ruiz, M.; Abdalla, K.; Pithon, M.; Oliveira, L. Deep instance segmentation of teeth in panoramic X-ray images. In Proceedings of the 2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Parana, Brazil, 29 October–1 November 2018; pp. 400–407. [Google Scholar]
  34. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
  35. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  36. Huang, T.W.; Chen, H.T.; Fujimoto, R.; Ito, K.; Wu, K.; Sato, K.; Taki, Y.; Fukuda, H.; Aoki, T. Age estimation from brain MRI images using deep learning. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, VIC, Australia, 18–21 April 2017; pp. 849–852. [Google Scholar]
  37. Skourt, B.A.; El Hassani, A.; Majda, A. Lung CT image segmentation using deep neural networks. Procedia Comput. Sci. 2018, 127, 109–113. [Google Scholar] [CrossRef]
  38. Hemdan, E.E.D.; Shouman, M.A.; Karar, M.E. Covidx-net: A framework of deep learning classifiers to diagnose covid-19 in X-ray images. arXiv 2020, arXiv:2003.11055. [Google Scholar]
  39. Spampinato, C.; Palazzo, S.; Giordano, D.; Aldinucci, M.; Leonardi, R. Deep learning for automated skeletal bone age assessment in X-ray images. Med. Image Anal. 2017, 36, 41–51. [Google Scholar] [CrossRef]
  40. Takahashi, T.; Nozaki, K.; Gonda, T.; Mameno, T.; Ikebe, K. Deep learning-based detection of dental prostheses and restorations. Sci. Rep. 2021, 11, 1960. [Google Scholar] [CrossRef]
  41. Prajapati, S.A.; Nagaraj, R.; Mitra, S. Classification of dental diseases using CNN and transfer learning. In Proceedings of the 2017 5th International Symposium on Computational and Business Intelligence (ISCBI), Dubai, United Arab Emirates, 11–14 August 2017; pp. 70–74. [Google Scholar]
  42. Lee, J.H.; Kim, D.H.; Jeong, S.N. Diagnosis of cystic lesions using panoramic and cone beam computed tomographic images based on deep learning neural network. Oral Dis. 2020, 26, 152–158. [Google Scholar] [CrossRef]
  43. Ezhov, M.; Gusarev, M.; Golitsyna, M.; Yates, J.M.; Kushnerev, E.; Tamimi, D.; Aksoy, S.; Shumilov, E.; Sanders, A.; Orhan, K. Clinically applicable artificial intelligence system for dental diagnosis with CBCT. Sci. Rep. 2021, 11, 15006. [Google Scholar] [CrossRef]
  44. Abbasnejad, I.; Sridharan, S.; Nguyen, D.; Denman, S.; Fookes, C.; Lucey, S. Using synthetic data to improve facial expression analysis with 3d convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; pp. 1609–1618. [Google Scholar]
  45. Tang, Z.; Naphade, M.; Birchfield, S.; Tremblay, J.; Hodge, W.; Kumar, R.; Wang, S.; Yang, X. Pamtri: Pose-aware multi-task learning for vehicle re-identification using highly randomized synthetic data. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27–28 October 2019; pp. 211–220. [Google Scholar]
  46. Park, D.; Lee, J.; Lee, J.; Lee, K. Deep Learning based Food Instance Segmentation using Synthetic Data. In Proceedings of the 2021 18th International Conference on Ubiquitous Robots (UR), Gangwon-do, Korea, 12–14 July 2021; pp. 499–505. [Google Scholar]
  47. Lin, Y.; Tang, C.; Chu, F.J.; Vela, P.A. Using synthetic data and deep networks to recognize primitive shapes for object grasping. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 10494–10501. [Google Scholar]
  48. Danielczuk, M.; Matl, M.; Gupta, S.; Li, A.; Lee, A.; Mahler, J.; Goldberg, K. Segmenting unknown 3d objects from real depth images using mask r-cnn trained on synthetic data. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 7283–7290. [Google Scholar]
  49. Thalhammer, S.; Patten, T.; Vincze, M. SyDPose: Object detection and pose estimation in cluttered real-world depth images trained using only synthetic data. In Proceedings of the 2019 International Conference on 3D Vision (3DV), Quebec City, QC, Canada, 16–19 September 2019; pp. 106–115. [Google Scholar]
Figure 1. Examples of panoramic radiographic images: (a) an image with 28 teeth; (b) an image with a restoration; (c) an image with a dental appliance.
Figure 2. Generation of synthetic data for missing tooth region detection. (a) Panoramic radiographic image that contains 28 teeth. (b) Teeth mask generated through tooth instance segmentation, excluding the third molars. (c) Synthetic data generated by randomly removing teeth from the teeth mask.
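The random-removal step shown in Figure 2c can be sketched as follows. This is an illustrative implementation only: the paper does not publish code, so the function name, its interface, and the assumption that the teeth mask is a 2-D integer array whose nonzero values are per-tooth labels are ours.

```python
import numpy as np

def make_synthetic_missing(mask, rng, max_missing=4):
    """Randomly remove tooth instances from a labelled teeth mask.

    mask: 2-D integer array; 0 is background, each tooth instance
          carries its own label (e.g., FDI-style numbers 11-47).
    Returns the altered mask plus the bounding boxes of the removed
    teeth, which can serve as ground truth for training a
    missing-tooth-region detector.
    """
    labels = [l for l in np.unique(mask) if l != 0]
    n_missing = int(rng.integers(1, max_missing + 1))
    removed = rng.choice(labels, size=min(n_missing, len(labels)), replace=False)
    out = mask.copy()
    boxes = {}
    for label in removed:
        ys, xs = np.nonzero(mask == label)
        boxes[int(label)] = (xs.min(), ys.min(), xs.max(), ys.max())
        out[mask == label] = 0  # erase the tooth from the mask
    return out, boxes
```

Repeating this over many 28-tooth masks yields a large synthetic training set without manually annotating real missing-tooth cases.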
Figure 3. Schematic of the network architecture of ResNet-101. The network is implemented by repeating the block of three convolutional layers 3, 4, 23, and 3 times, respectively. 3 × 3 pool denotes a pooling layer with a 3 × 3 filter. K and C denote the kernel size and the number of channels of the convolutional layers, respectively.
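The repeat counts in the caption account for the "101" in ResNet-101: each bottleneck block contains three convolutions, and the stem convolution plus the final fully connected layer add two more. A quick arithmetic check (an illustrative snippet, not from the paper):

```python
# ResNet-101 layer count: the four stages repeat a bottleneck block
# (three convolutions each) 3, 4, 23, and 3 times; the initial 7x7
# convolution and the fully connected head are the remaining layers.
stage_repeats = [3, 4, 23, 3]
conv_layers = 3 * sum(stage_repeats)   # 99 convolutions inside the blocks
total_layers = conv_layers + 2         # + stem conv + fc head
print(total_layers)  # 101
```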
Figure 4. Entire process for missing tooth region detection. The tooth instance segmentation model segments 28 teeth, excluding the third molars, in the (a) panoramic radiographic image; (b) shows the result of the tooth instance segmentation model. The teeth mask (c) is then generated from the segmentation results. The missing tooth region detection model detects the regions of missing teeth from the generated teeth mask, and its result is shown in (d).
Figure 5. Visualization of the tooth instance segmentation model results.
Figure 6. Visualization of the missing tooth region detection model results.
Table 1. Performance of tooth instance segmentation model.
Model | AP (0.5) | AP (0.5:0.95)
Mask R-CNN | 92.14% | 76.78%
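For reference, AP (0.5) in Table 1 counts a prediction as correct when its intersection over union (IoU) with the ground truth exceeds 0.5, and AP (0.5:0.95) averages AP over IoU thresholds from 0.5 to 0.95 in steps of 0.05, following the COCO convention. A minimal IoU computation for (x1, y1, x2, y2) boxes, sketched for illustration:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```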
Table 2. Performance of tooth instance segmentation model by tooth number.
#N | AP (0.5) | #N | AP (0.5) | #N | AP (0.5) | #N | AP (0.5)
#11 | 99.67% | #21 | 98.01% | #31 | 94.99% | #41 | 96.76%
#12 | 99.64% | #22 | 100.0% | #32 | 94.72% | #42 | 95.33%
#13 | 99.35% | #23 | 99.08% | #33 | 94.44% | #43 | 92.71%
#14 | 95.14% | #24 | 92.79% | #34 | 95.60% | #44 | 87.51%
#15 | 93.21% | #25 | 87.15% | #35 | 92.97% | #45 | 93.70%
#16 | 91.08% | #26 | 91.70% | #36 | 77.49% | #46 | 82.81%
#17 | 94.60% | #27 | 86.58% | #37 | 78.70% | #47 | 76.58%
Table 3. Performance of the missing tooth region detection model.
Model | AP (0.5) | AP (0.5:0.95)
Faster R-CNN | 59.09% | 20.40%
Table 4. Performance of the missing tooth region detection model by tooth number.
#N | AP (0.5) | #N | AP (0.5) | #N | AP (0.5) | #N | AP (0.5)
#11 | 50.95% | #21 | 50.49% | #31 | - | #41 | -
#12 | - | #22 | 50% | #32 | - | #42 | -
#13 | 100% | #23 | 66.99% | #33 | - | #43 | -
#14 | 46.73% | #24 | 63.99% | #34 | 100% | #44 | 0%
#15 | 67.82% | #25 | 70.26% | #35 | 73.88% | #45 | 82.26%
#16 | 38.5% | #26 | 74.32% | #36 | 91.11% | #46 | 89.74%
#17 | 32.53% | #27 | 26.51% | #37 | 27.50% | #47 | 37.77%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Park, J.; Lee, J.; Moon, S.; Lee, K. Deep Learning Based Detection of Missing Tooth Regions for Dental Implant Planning in Panoramic Radiographic Images. Appl. Sci. 2022, 12, 1595. https://doi.org/10.3390/app12031595


