Article

Profile Photograph Classification Performance of Deep Learning Algorithms Trained Using Cephalometric Measurements: A Preliminary Study

by Duygu Nur Cesur Kocakaya 1, Mehmet Birol Özel 1,*, Sultan Büşra Ay Kartbak 1, Muhammet Çakmak 2 and Enver Alper Sinanoğlu 3

1 Department of Orthodontics, Faculty of Dentistry, Kocaeli University, Kocaeli 41190, Türkiye
2 Department of Computer Engineering, Faculty of Engineering and Architecture, Sinop University, Sinop 57000, Türkiye
3 Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Kocaeli University, Kocaeli 41190, Türkiye
* Author to whom correspondence should be addressed.
Diagnostics 2024, 14(17), 1916; https://doi.org/10.3390/diagnostics14171916
Submission received: 5 July 2024 / Revised: 22 August 2024 / Accepted: 28 August 2024 / Published: 30 August 2024
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

Abstract

Extraoral profile photographs are crucial for orthodontic diagnosis, documentation, and treatment planning. The purpose of this study was to evaluate classifications made on extraoral patient photographs by deep learning algorithms trained using patient pictures grouped according to cephalometric measurements. Cephalometric radiographs and profile photographs of 990 patients from the archives of the Kocaeli University Faculty of Dentistry Department of Orthodontics were used for the study. FH-NA, FH-NPog, FMA and N-A-Pog measurements on the cephalometric radiographs were carried out using WebCeph. Three groups were formed for each parameter according to the cephalometric values. Deep learning algorithms were trained using extraoral photographs of the patients, grouped according to the respective cephalometric measurements. Fourteen deep learning models were trained and tested for accuracy of prediction in classifying patient images. Accuracy rates of up to 96.67% for FH-NA groups, 97.33% for FH-NPog groups, 97.67% for FMA groups and 97.00% for N-A-Pog groups were obtained. This is a pioneering study in which an attempt was made to classify clinical photographs using artificial intelligence architectures trained according to actual cephalometric values, thus eliminating or reducing the need for cephalometric X-rays in future orthodontic diagnostic applications.

1. Introduction

Facial photographs reveal the aesthetic characteristics of the facial shape and its relationship to the teeth, and they can be used in conjunction with radiographic measurements [1,2]. Facial photography allows for evaluation of the patient’s facial aesthetics, visualization of their current condition, creation of an appropriate treatment plan and monitoring of the progress of treatment [3].
Artificial intelligence (AI) technology has made a significant contribution to orthodontic treatment planning [4]. AI-based software systems, which play an important and transformative role in orthodontics, are considered the future of dental applications. AI is utilized in every aspect of orthodontics, from patient communication to diagnosis and treatment processes [5]. Recent orthodontic studies investigating AI have focused on two- and three-dimensional digital radiographs and on tools that support standard patient care, the achievement of treatment goals and clinical decision-making [6,7,8].
AI was described by McCarthy as the science and engineering of making intelligent machines, especially intelligent computer programs [9]. Deep learning (DL) is a branch of machine learning designed to mimic the recognition system of the human brain while leveraging the computing power of graphics processing units [10,11]. As an important subset of DL techniques, convolutional neural networks (CNNs) have gained popularity in the field of graphical image analysis. It has been reported that CNNs are widely used among deep learning algorithms and are well suited for image processing, including medical images [12,13,14]. Accurate diagnosis based on cephalometric measurement is a prerequisite for successful treatment. Lateral cephalometric radiography is widely used as a standard tool in orthodontic evaluation and dentofacial diagnosis [15,16].
As medical images, patient profile photographs have not previously been investigated for their potential to reflect cephalometric data. To the best of our knowledge, this is the first study in which clinical profile photographs were evaluated using DL models for orthodontic planning purposes. The aim of our study was therefore to compare classifications made solely from photographs by different DL models, trained with patient photographs categorized according to cephalometric measurements, against the actual cephalometric classifications.
The null hypothesis of this study was that extraoral photograph classifications are unrelated to cephalometric measurement classifications.

2. Materials and Methods

This study was approved by the Ethics Committee of Kocaeli University (GOKAEK-2023/06.21). The pretreatment cephalometric radiographs and profile photographs of patients were randomly retrieved from the database of Kocaeli University Faculty of Dentistry, Department of Orthodontics, during the period between 2014 and 2018.

2.1. Study Group Selection

The study group consisted of 990 individuals who had not received any orthodontic treatment before and who had both cephalometric radiographs and profile photographs taken in the same session. Individuals who had previously received orthodontic treatment, or who had cleft lip and palate or any other syndromic craniofacial anomaly, were excluded from the study. Although missing teeth were not quantified, severe edentulism that may have affected craniofacial morphology was avoided through the exclusion of syndromic anomalies. Lateral cephalometric radiographs with artefacts and photographs of insufficient image quality for assessment were also excluded.
The lateral cephalometric radiographs were taken with a Morita Veraviewepocs 2D (J. Morita MFG. Corp, Kyoto, Japan) device under standard conditions and with a magnification difference of 1.1 mm, as determined by the manufacturer.
The profile photographs were taken with a Nikon D700 digital camera and a Nikon AF-S VR Micro Nikkor 105 mm f/2.8 G IF ED lens. In order to fully visualize the face, the patient’s hair was gathered behind the ear, and photographs were taken from the right side so that the patient’s profile view corresponded to the cephalometric radiograph.

2.2. Cephalometric Measurements

A total of four angular measurements were performed on the cephalometric radiographs after 10 cephalometric landmarks (point A, point B, Sella, Nasion, Orbita, Porion, Pogonion, Gonion, Gnathion, Menton) were determined (Figure 1).
FH-NA, FH-NPog, FMA and N-A-Pog were measured to evaluate the position of the maxilla, the position of the mandible, the vertical development of the face and the profile convexity, respectively. The Frankfurt horizontal reference plane was selected to evaluate the relationship between the facial features and the cephalometric measurements.
The WebCeph digital cephalometric measurement program (AssembleCircle Corp., Pangyoyeok-ro, Bundang-gu, Seongnam-si, Gyeonggi-do, Republic of Korea) was used to perform the measurements described below:
FH-NA was the angle between the Frankfort horizontal plane passing through the Porion and Orbita points and the plane passing through the Nasion and A points, which was used to assess the position of the maxilla in the sagittal direction (Figure 2a).
FH-NPOG was the angle between the Frankfort horizontal plane passing through the Porion and Orbita points and the plane passing through the Nasion and Pogonion points. This measurement was used to classify the sagittal position of the mandible (Figure 2b).
FMA was constructed as the angle between the Frankfort horizontal plane passing through the Porion and Orbita points and the plane passing through the Gonion and Menton points. The vertical dimension of the face was classified according to this measurement (Figure 2c).
N-A-Pog was the angle between Nasion, point A and the Pogonion point. This angle was used to evaluate the profile convexity of the face (Figure 2d).
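For illustration, the geometry behind such angular measurements can be sketched in a few lines of Python; the landmark coordinates, function name and values below are purely illustrative and do not reflect WebCeph’s internal implementation:

```python
import numpy as np

def angle_between_lines(p1, p2, q1, q2):
    """Unsigned angle in degrees between the line p1-p2 and the line q1-q2."""
    v1 = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    v2 = np.asarray(q2, dtype=float) - np.asarray(q1, dtype=float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Invented landmark coordinates (x, y) in image pixels; not patient data.
porion, orbita = (100.0, 210.0), (260.0, 200.0)   # Frankfort horizontal plane
gonion, menton = (130.0, 330.0), (300.0, 370.0)   # mandibular plane

fma = angle_between_lines(porion, orbita, gonion, menton)
print(f"FMA = {fma:.2f} degrees")
```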
After completion of the cephalometric measurements, 50 randomly selected lateral cephalometric radiographs were re-evaluated two weeks later by the same researcher to assess the repeatability of the measurements. Intraobserver agreement was assessed by the intraclass correlation coefficient, which was found to be between 0.85 and 0.99 for the four angles, indicating high reproducibility.
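As a side note, an intraclass correlation of this kind could be computed, for example, with the pingouin package; the repeated measurements in the sketch below are invented for illustration and are not study data:

```python
import pandas as pd
import pingouin as pg

# Invented repeated measurements: three radiographs, each measured twice by the same observer.
df = pd.DataFrame({
    "radiograph": [1, 1, 2, 2, 3, 3],
    "session":    ["first", "second"] * 3,
    "FMA":        [24.1, 24.4, 31.0, 30.6, 19.8, 20.1],
})
icc = pg.intraclass_corr(data=df, targets="radiograph", raters="session", ratings="FMA")
print(icc[["Type", "ICC"]])
```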

2.3. Profile Photograph Group Formation for DL

The study group was divided into three subgroups according to the cephalometric measurement values. The cephalometric values for the subgroups were as follows:
For FH-NA, the first group (FH-NA1, n = 330) covered measurement values between 79.58 and 90.02, the second group (FH-NA2, n = 330) values between 90.02 and 93.00, and the third group (FH-NA3, n = 330) values between 93.00 and 103.89.
For FH-NPog, the first group (FH-NPOG1, n = 330) covered measurement values between 72.49 and 84.32, the second group (FH-NPOG2, n = 300) values between 84.32 and 87.32, and the third group (FH-NPOG3, n = 300) values between 87.32 and 99.52.
For FMA, the first group (FMA1, n = 330) covered measurement values between 10.38 and 22.48, the second group (FMA2, n = 330) values between 22.48 and 27.75, and the third group (FMA3, n = 330) values between 27.75 and 46.68.
For N-A-Pog, the first group (N-A-POG1, n = 330) covered measurement values between −27.98 and 3.31, the second group (N-A-POG2, n = 330) values between 3.31 and 8.9, and the third group (N-A-POG3, n = 330) values between 8.9 and 26.69.
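The grouping step can be viewed as a simple binning operation over the measured values; the following minimal sketch applies the FH-NA thresholds listed above to a hypothetical table of patients (column names and values are assumptions):

```python
import pandas as pd

# Invented patient values; one row per patient with the measured FH-NA angle.
df = pd.DataFrame({"patient_id": [1, 2, 3], "FH_NA": [85.4, 91.2, 97.8]})

# Thresholds taken from the FH-NA grouping described above.
bins = [79.58, 90.02, 93.00, 103.89]
labels = ["FH-NA1", "FH-NA2", "FH-NA3"]
df["FH_NA_group"] = pd.cut(df["FH_NA"], bins=bins, labels=labels, include_lowest=True)
print(df)
```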
After formation of the subgroups, the profile photographs of each group were submitted for DL analysis (Figure 3).

2.4. Preparation of Deep Learning Models

Image classification procedures were implemented in the laboratory of the Department of Electrical and Electronics Engineering, Karabük University. Data augmentation was applied to the grouped profile photographs for all deep learning models in order to obtain higher accuracy and lower loss values.
The 14 DL models that were employed were as follows:
  • MobileNet V2,
  • Inception V3,
  • DenseNet 121,
  • DenseNet 169,
  • DenseNet 201,
  • EfficientNet B0,
  • Xception,
  • VGG16,
  • VGG19,
  • NasNetMobile,
  • ResNet101,
  • ResNet 152,
  • ResNet 50,
  • EfficientNet V2
The number of photographs in each class, which averaged 330 in the above-mentioned original data sets, was increased to 1000 photographs per group (FH-NA1, FH-NA2 and FH-NA3 for FH-NA; FH-NPOG1, FH-NPOG2 and FH-NPOG3 for FH-NPog; FMA1, FMA2 and FMA3 for FMA; N-A-POG1, N-A-POG2 and N-A-POG3 for N-A-Pog).
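The exact augmentation transforms are not detailed here; the sketch below shows one common way such augmentation might be performed with Keras and written to disk, with folder names, transform ranges and batch counts chosen purely for illustration:

```python
import os
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical augmentation settings; the exact transforms used in the study are not reported.
augmenter = ImageDataGenerator(
    rotation_range=5,
    width_shift_range=0.05,
    height_shift_range=0.05,
    zoom_range=0.05,
    brightness_range=(0.9, 1.1),
    fill_mode="nearest",
)

# Augment one group folder at a time (here FH-NA1) and save the new images to disk,
# repeating until the folder holds roughly 1000 photographs.
out_dir = "photos/FH-NA_augmented/FH-NA1"
os.makedirs(out_dir, exist_ok=True)
flow = augmenter.flow_from_directory(
    "photos/FH-NA",                 # parent folder containing FH-NA1/, FH-NA2/, FH-NA3/
    classes=["FH-NA1"],
    target_size=(224, 224),
    batch_size=32,
    save_to_dir=out_dir,
    save_format="jpeg",
)
for _ in range(21):                 # 21 batches of 32 yield roughly 670 extra images (330 + 670 ≈ 1000)
    next(flow)
```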
Following data augmentation, training, validation and test data sets were created for all deep learning models. These data sets consist of labeled samples. For all deep learning models, the training data set constituted 80% of the entire data set, the validation data set 10%, and the test data set the remaining 10%; the training set therefore contained 800 images per group, and the validation and test sets each contained 100 images per group.
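A minimal sketch of such an 80/10/10 split over the image files of one group is given below; the folder path, file extension and random seed are assumptions:

```python
import random
from pathlib import Path

def split_80_10_10(class_dir, seed=42):
    """Shuffle the image paths of one group and split them 80% / 10% / 10%."""
    paths = sorted(Path(class_dir).glob("*.jpeg"))
    random.Random(seed).shuffle(paths)
    n_train, n_val = int(0.8 * len(paths)), int(0.1 * len(paths))
    return (paths[:n_train],                    # training set (about 800 of 1000 images)
            paths[n_train:n_train + n_val],     # validation set (about 100 images)
            paths[n_train + n_val:])            # test set (about 100 images)

train_files, val_files, test_files = split_80_10_10("photos/FH-NA_augmented/FH-NA1")
```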
After the data sets were arranged, a series of filters (kernels) was applied to the input data in the first convolution layer. Each filter acts on the input data through a convolution operation, multiplying and summing the pixel values in the filtered region; this sum forms one pixel of the feature map.
In the pooling layers, the feature maps are reduced and the features are made more invariant to scale and location changes. After the pooling layers, the feature maps were flattened and passed to the dense layers. After the last dense layer, classification was performed in the output layer using a softmax activation function.
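The layer sequence described above can be illustrated with a minimal Keras model; this is a generic toy network for clarity and not one of the 14 pretrained architectures used in the study:

```python
from tensorflow.keras import layers, models

# A generic toy network illustrating the convolution -> pooling -> flatten -> dense -> softmax order;
# the study itself used 14 pretrained architectures rather than this network.
toy_cnn = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, kernel_size=3, activation="relu"),   # filters produce feature maps
    layers.MaxPooling2D(pool_size=2),                       # pooling shrinks the feature maps
    layers.Conv2D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),                                       # flatten before the dense layers
    layers.Dense(128, activation="relu"),
    layers.Dense(3, activation="softmax"),                  # three cephalometric groups
])
toy_cnn.summary()
```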
The DL models were trained using a GPU-supported system in the Google Cloud environment, on a Tesla T4 GPU and an Intel Xeon CPU with 16 GB of RAM running at 2.20 GHz. Python 3 was used to write all programs for the transfer learning design, and the Keras 2.3.1 framework was used to train the models.
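A hedged transfer-learning sketch for one of the 14 architectures (DenseNet201) is shown below; the pooling head, optimizer, loss and training schedule are assumptions, since these hyperparameters are not reported here:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Transfer-learning sketch with DenseNet201; the classification head, optimizer and
# training schedule are assumptions and not taken from the study.
base = tf.keras.applications.DenseNet201(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                                    # keep the ImageNet features frozen

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(3, activation="softmax"),                # e.g. FH-NA1 / FH-NA2 / FH-NA3
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=30)  # hypothetical datasets from the 80/10/10 split
```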
After completing the training and validation procedures, all photographic images belonging to the cephalometric measurement groups were tested (n = 100 per group) with the 14 DL algorithms (MobileNet V2, Inception V3, DenseNet 121, DenseNet 169, DenseNet 201, EfficientNet B0, Xception, VGG16, VGG19, NasNetMobile, ResNet101, ResNet152, ResNet50 and EfficientNet V2) for their accuracy in predicting the cephalometric classification.
The accuracy value was calculated as the percentage of correctly classified subjects, derived from the confusion matrix:
Accuracy = (TP + TN)/(TP + TN + FP + FN)
TP (True Positive): Number of profile photos correctly classified as belonging to the group by the DL model. TN (True Negative): Number of profile photos correctly classified as not belonging to the evaluated group by the DL model.
FP (False Positive): Number of profile photos incorrectly classified as belonging to the group by the DL model.
FN (False Negative): Number of profile photos incorrectly classified as not belonging to the evaluated group by the DL model.
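Using scikit-learn, the accuracy computation from a confusion matrix can be sketched as follows, with invented labels standing in for the actual test predictions:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Invented labels for a three-group test run; in the study each measurement had 300 test images.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])

cm = confusion_matrix(y_true, y_pred)
accuracy = np.trace(cm) / cm.sum()     # correctly classified photographs (diagonal) over all photographs
print(f"Accuracy = {accuracy:.2%}")
```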
Precision, recall and F1 score values were also calculated for each of the 40 successful prediction sequences.
Precision (positive predictive value) is the proportion of true positive results among all the positive results identified by the DL model, and it was calculated by the formula: Precision = TP/(TP + FP).
Recall (sensitivity or true positive rate) refers to the proportion of positive results correctly identified by the DL model out of all the actual positive cases and it was calculated by the formula: Recall = TP/(TP + FN).
The F1 score is a value between 0 and 1, where 1 indicates a perfect balance between precision and recall. It provides an overall performance metric of the model, considering both precision and recall. The F1 score is the harmonic mean of precision and recall, calculated by the formula: F1 score = 2 × (Recall × Precision)/(Recall + Precision).
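The same metrics can be computed per class, for example with scikit-learn; the labels below are again invented for illustration:

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

# Invented labels for the three groups (0, 1, 2); not study data.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])

precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average=None)
for group, (p, r, f) in enumerate(zip(precision, recall, f1)):
    # f equals 2 * (r * p) / (r + p), the harmonic mean defined above
    print(f"group {group}: precision={p:.2f} recall={r:.2f} F1={f:.2f}")
```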

3. Results

The DL algorithms exhibited varying degrees of accuracy in predicting the four cephalometric traits measured. The null hypothesis was rejected in 40 of the 56 prediction sequences, in which the DL algorithms demonstrated acceptable prediction accuracy; it could not be rejected in the remaining 16 prediction sequences, in which the prediction accuracy was equal to or lower than 36.37%. Profile photograph classification performances of state-of-the-art CNN models trained according to actual cephalometric measurements were evaluated by accuracy, precision, sensitivity and F1 scores. The accuracy of the prediction results is shown in Table 1.
For the FH-NA angle, EfficientNet B0 presented the highest accuracy rate among the 14 DL models, 96.67%, with correct results for 290 of 300 images. VGG16 presented the lowest accuracy rate, 30.67%.
For the FH-NPog angle, DenseNet 201 presented the highest accuracy rate, 97.33%, with correct results for 292 of 300 images. VGG19 and Inception V3 presented the lowest accuracy rate, 31.33%.
For the FMA angle, DenseNet 201 presented the highest accuracy rate, 97.67%, with correct results for 293 of 300 images. VGG19 presented the lowest accuracy rate, 31.33%.
For the N-A-Pog angle, EfficientNet V2 presented the highest accuracy rate, 97.00%, with correct results for 291 of 300 images. VGG19 presented the lowest accuracy rate, 34.67%.
Precision, recall and F1 score values and confusion matrices of the 40 instances have been provided as Supplementary Data owing to the high volume of information.

4. Discussion

The use of medical images for DL training has become a point of interest in recent studies. For intraoral medical images, Tanriver et al. evaluated six DL algorithms and reported that DL-based approaches could be applied for the automatic detection and classification of oral lesions [17]. Warin et al. likewise trained three DL algorithms to develop classification and detection models for oral cancer photographs and stated that DL-based algorithms offer acceptable potential for the classification and detection of lesions in oral images [18,19]. In our study, cephalometric radiographs and profile photographs of 990 patients were evaluated with 14 DL models, which provided a more comprehensive perspective on the potential of the available DL algorithms. Additionally, considering the high accuracy rates for each cephalometric analysis, our results suggest that DL training may offer promising results for orthodontic purposes.
In the literature, the DenseNet201 model is used more often in image classification studies than other models [20,21,22]. In our study, the highest accuracy values were likewise obtained with the DenseNet201 algorithm. Benyahia et al. [20] trained DL models to classify medical images containing skin lesions and to extract lesion features; DenseNet201 was the DL model that gave the highest accuracy for both medical image classification and feature extraction. Thalakottor et al. [21] used three convolutional neural network (CNN) models (VGG19, DenseNet201, ResNet50V2) for medical image classification; of these, DenseNet201 performed better than the other models and achieved the highest accuracy. Meng et al. [22] examined the differentiation of benign and malignant breast lesions on dynamic contrast-enhanced MRI using deep transfer learning, with histopathology as the reference standard, and stated that they preferred the DenseNet201 model because of its ability to reuse features, its mitigation of feature explosion, and its association with fewer vanishing-gradient problems.
In our study, care was taken to select angles that could relate the cephalometric radiographs to the patients’ photographs. To determine the positions of the maxilla and mandible, angles referenced to the FH plane were used instead of SNA and SNB. Studies comparing the Sella-Nasion and Frankfurt horizontal reference planes for cephalometric orientation state that the Frankfurt horizontal plane can be visualized clinically, which provides the opportunity to evaluate its relationship with the face, maxilla and mandible and offers greater reliability [23].
Considering the need for dentofacial photographic records for epidemiological purposes, and in cases where initial examinations with irradiation are contraindicated or should be strictly avoided, DL-trained models analyzing photographs may provide cephalometric information that is sufficient for such purposes [24,25,26].
A limitation of our study is that it was conducted on a relatively small population, which is a particular concern for data-hungry approaches such as deep learning. It can also be speculated that, during pre-processing of the photographs by the software, differences in the eyes, ears and hair of individuals in their profile photographs may have constituted extraneous information for the DL algorithms and may have been misleading for accurate classification decisions. The same may apply to parameters such as patient age, gender, hair color and other facial features that lack a direct relationship with the features investigated by the DL models.
The dataset, consisting of profile photographs, was obtained only from patients of our faculty’s orthodontics department. Since the key to deep learning performance is a large data set, this limitation was addressed by applying data augmentation to the training data set. The data set could be further expanded by collecting cases from different centers or by gathering additional image data through a similar program via telemedicine.
Apart from the angular measurements used in our study, there are hundreds of parameters that can be measured on cephalometric X-rays and used to train other DL models. With further studies, DL models that provide higher accuracy values may be developed.
By cropping the cephalometric X-rays and photographs, feature extraction and classification could be focused on more specific regions, and DL models could be trained accordingly.
It is anticipated that, in future applications, automatic measurements of facial and intraoral photographs will be used as a diagnostic step in clinical orthodontics without the need for cephalometric radiographs.

5. Conclusions

The most successful deep learning models were DenseNet 201 and EfficientNet V2.
Classification according to the FH-NPog angle exhibited the lowest success rate, while classification according to N-A-Pog exhibited the highest.
The results of this study were promising in predicting cephalometric classifications from profile photographs without using cephalometric X-rays.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/diagnostics14171916/s1.

Author Contributions

Conceptualization, M.B.Ö., D.N.C.K., S.B.A.K. and M.Ç.; methodology, M.B.Ö., D.N.C.K., S.B.A.K. and M.Ç.; software, M.Ç.; validation, M.B.Ö., D.N.C.K., S.B.A.K. and M.Ç.; formal analysis, M.B.Ö. and E.A.S.; investigation, D.N.C.K. and S.B.A.K.; resources, M.B.Ö., D.N.C.K., S.B.A.K. and M.Ç.; data curation, M.Ç.; writing—original draft preparation, M.B.Ö., D.N.C.K. and S.B.A.K.; writing—review and editing, M.B.Ö., D.N.C.K., S.B.A.K. and E.A.S.; visualization, D.N.C.K. and S.B.A.K.; supervision, D.N.C.K. and S.B.A.K.; project administration, M.B.Ö. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Our study was found scientifically and ethically appropriate by the Kocaeli University Non-Interventional Clinical Research Ethics Committee with decision number KOU GOKAEK-2023/06.21 and project number 2023/84 (approval date 27 March 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sandler, J.; Murray, A. Digital photography in orthodontics. J. Orthod. 2001, 28, 197–202. [Google Scholar] [CrossRef]
  2. Sandler, J.; Dwyer, J.; Kokich, V.; McKeown, F.; Murray, A.; McLaughlin, R.; O’Brien, C.; O’Malley, P. Quality of clinical photographs taken by orthodontists, professional photographers, and orthodontic auxiliaries. Am. J. Orthod. Dentofac. Orthop. 2009, 135, 657–662. [Google Scholar] [CrossRef]
  3. Alam, M.K.; Abutayyem, H.; Alotha, S.N.; Alsiyat, B.M.H.; Alanazi, S.H.K.; Alrayes, M.H.H.; Alrayes, R.H.; Khalaf Alanazi, D.F.; Alswairki, H.J.; Ali Alfawzan, A.; et al. Impact of Portraiture Photography on Orthodontic Treatment: A Systematic Review and Meta-Analysis. Cureus 2023, 15, e48054. [Google Scholar] [CrossRef]
  4. Monill-González, A.; Rovira-Calatayud, L.; d’Oliveira, N.G.; Ustrell-Torrent, J.M. Artificial intelligence in orthodontics: Where are we now? A scoping review. Orthod. Craniofac. Res. 2021, 24 (Suppl. S2), 6–15. [Google Scholar] [CrossRef]
  5. Akdeniz, S.; Tosun, M.E. A review of the use of artificial intelligence in orthodontics. J. Exp. Clin. Med. 2021, 38, 157–162. [Google Scholar] [CrossRef]
  6. Katne, T.; Kanaparthi, A.; Srikanth Gotoor, S.; Muppirala, S.; Devaraju, R.; Gantala, R. Artificial intelligence: Demystifying dentistry—The future and beyond. Int. J. Contemp. Med. Surg. Radiol. 2019, 4, D6–D9. [Google Scholar] [CrossRef]
  7. Redelmeier, D.A.; Shafir, E. Medical decision making in situations that offer multiple alternatives. J. Am. Med. Assoc. 1995, 273, 302. [Google Scholar] [CrossRef] [PubMed]
  8. Ryu, J.; Lee, Y.S.; Mo, S.P.; Lim, K.; Jung, S.K.; Kim, T.W. Application of deep learning artificial intelligence technique to the classification of clinical orthodontic photos. BMC Oral. Health 2022, 22, 454. [Google Scholar] [CrossRef] [PubMed]
  9. McCarthy, J. What Is Artificial Intelligence? Available online: https://www-formal.stanford.edu/jmc/whatisai.pdf (accessed on 14 June 2024).
  10. Lee, J.G.; Jun, S.; Cho, Y.W.; Lee, H.; Kim, G.B.; Seo, J.B.; Kim, N. Deep Learning in Medical Imaging: General Overview. Korean J. Radiol. 2017, 18, 570–584. [Google Scholar] [CrossRef]
  11. Wan, J.; Wang, D.; Hoi, S.C.H.; Wu, P.; Zhu, J.; Zhang, Y.; Li, J. Deep Learning for Content-Based Image Retrieval: A Comprehensive Study. In Proceedings of the 22nd ACM International Conference on Multimedia, Orlando, FL, USA, 3–7 November 2014; Association for Computing Machinery: New York, NY, USA, 2014; pp. 157–166. [Google Scholar]
  12. Chartrand, G.; Cheng, P.M.; Vorontsov, E.; Drozdzal, M.; Turcotte, S.; Pal, C.J.; Kadoury, S.; Tang, A. Deep Learning: A Primer for Radiologists. Radiographics 2017, 37, 2113–2131. [Google Scholar] [CrossRef] [PubMed]
  13. Schwendicke, F.; Golla, T.; Dreher, M.; Krois, J. Convolutional neural networks for dental image diagnostics: A scoping review. J. Dent. 2019, 91, 103226. [Google Scholar] [CrossRef]
  14. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.; van Ginneken, B.; Sanchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef] [PubMed]
  15. Hurst, C.A.; Eppley, B.L.; Havlik, R.J.; Sadove, A.M. Surgical cephalometrics: Applications and developments. Plast. Reconstr. Surg. 2007, 120, 92e–104e. [Google Scholar] [CrossRef] [PubMed]
  16. Durão, A.R.; Pittayapat, P.; Rockenbach, M.I.; Olszewski, R.; Ng, S.; Ferreira, A.P.; Jacobs, R. Validity of 2D lateral cephalometry in orthodontics: A systematic review. Prog. Orthod. 2013, 14, 31. [Google Scholar] [CrossRef]
  17. Tanriver, G.; Soluk Tekkesin, M.; Ergen, O. Automated Detection and Classification of Oral Lesions Using Deep Learning to Detect Oral Potentially Malignant Disorders. Cancers 2021, 13, 2766. [Google Scholar] [CrossRef]
  18. Warin, K.; Limprasert, W.; Suebnukarn, S.; Jinaporntham, S.; Jantana, P. Automatic classification and detection of oral cancer in photographic images using deep learning algorithms. J. Oral Pathol. Med. 2021, 50, 911–918. [Google Scholar] [CrossRef]
  19. Warin, K.; Limprasert, W.; Suebnukarn, S.; Jinaporntham, S.; Jantana, P. Performance of deep convolutional neural network for classification and detection of oral potentially malignant disorders in photographic images. Int. J. Oral Maxillofac. Surg. 2022, 51, 699–704. [Google Scholar] [CrossRef] [PubMed]
  20. Benyahia, S.; Meftah, B.; Lézoray, O. Multi-features extraction based on deep learning for skin lesion classification. Tissue Cell 2022, 74, 101701. [Google Scholar] [CrossRef]
  21. Thalakottor, L.A.; Shirwaikar, R.D.; Pothamsetti, P.T.; Mathews, L.M. Classification of Histopathological Images from Breast Cancer Patients Using Deep Learning: A Comparative Analysis. Crit. Rev. Biomed. Eng. 2023, 51, 41–62. [Google Scholar] [CrossRef]
  22. Meng, M.; Zhang, M.; Shen, D.; He, G. Differentiation of breast lesions on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) using deep transfer learning based on DenseNet201. Medicine 2022, 101, e31214. [Google Scholar] [CrossRef]
  23. Limprasert, W.; Suebnukarn, S.; Jinaporntham, S.; Jantana, P. Orientation-sella-nasion or Frankfort horizontal. Am. J. Orthod. 1976, 69, 648–654. [Google Scholar]
  24. Manosudprasit, A.; Haghi, A.; Allareddy, V.; Masoud, M.I. Diagnosis and treatment planning of orthodontic patients with 3-dimensional dentofacial records. Am. J. Orthod. Dentofac. Orthop. 2017, 151, 1083–1091. [Google Scholar] [CrossRef]
  25. Patel, D.P.; Trivedi, R. Photography versus lateral cephalogram: Role in facial diagnosis. Indian J. Dent. Res. 2013, 24, 587–592. [Google Scholar] [PubMed]
  26. Jaiswal, P.; Gandhi, A.; Gupta, A.R.; Malik, N.; Singh, S.K.; Ramesh, K. Reliability of Photogrammetric Landmarks to the Conventional Cephalogram for Analyzing Soft-Tissue Landmarks in Orthodontics. J. Pharm. Bioallied Sci. 2021, 13 (Suppl. S1), S171–S175. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Cephalometric reference points.
Figure 2. (a) FH-NA, (b) FH-NPog, (c) FMA, (d) N-A-Pog.
Figure 3. Sample photographs and cephalograms of 3 patients grouped according to cephalometric values.
Table 1. DL algorithms’ prediction accuracy of cephalometric classifications based on profile photographs.
DL Model          FH-NA (%)   FH-NPog (%)   FMA (%)   N-A-Pog (%)
MobileNet V2      90.33       88.33         92.67     89.33
Inception V3      36.37       31.33         89.00     79.00
DenseNet 121      93.00       93.67         95.00     93.00
DenseNet 169      91.67       93.67         92.67     95.00
DenseNet 201      94.67       97.33 *       97.67 *   96.00
EfficientNet B0   96.67 *     96.33         93.33     96.33
Xception          94.00       33.3          33.67     93.00
VGG16             30.67       32.33         37        35.33
VGG19             34.33       31.33         31.33     34.67
NasNetMobile      77.00       80.33         84.00     81.67
ResNet 101        83.00       34.67         34.33     64.00
ResNet 152        67.67       35.33         33.37     64.33
ResNet 50         84.33       84.67         88.67     81.67
EfficientNet V2   95.67       96.00         97.00     97.00 *
* DL models with the highest accuracy value for each measurement; accuracy values in italics were not evaluated.