Deep Fusion Feature Extraction for Caries Detection on Dental Panoramic Radiographs
Abstract
1. Introduction
2. Related Works
- (i) Stability: the method is validated on a large dataset, using cross-validation to measure its behavior across different partitions of the problem;
- (ii) Performance: the method achieves better accuracy and improved specificity, since the balance between sensitivity and specificity is sometimes more important than accuracy alone; other measures are also reported for comparison with previous studies.
3. Materials and Methods
3.1. Radiographs Dataset
3.2. Method
3.2.1. Features Descriptors using Pre-trained CNN Deep Networks
3.2.2. Features Descriptors using Geometric Features
3.2.3. Fusion Features
3.2.4. Classification
4. Experimental Results
4.1. Measures
- True positive (TP): the number of caries images correctly classified as caries;
- True negative (TN): the number of non-caries images correctly classified as non-caries;
- False positive (FP): the number of non-caries images wrongly classified as caries;
- False negative (FN): the number of caries images wrongly classified as non-caries.
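The measures reported below (ACC, SEN, SPEC, PPV, NPV, F1) follow directly from these four counts. A minimal sketch (not the authors' code; the counts passed in are hypothetical):

```python
# Confusion-matrix measures from TP/TN/FP/FN counts.
def metrics(tp, tn, fp, fn):
    acc = (tp + tn) / (tp + tn + fp + fn)   # accuracy
    sen = tp / (tp + fn)                    # sensitivity (recall)
    spec = tn / (tn + fp)                   # specificity
    ppv = tp / (tp + fp)                    # positive predictive value
    npv = tn / (tn + fn)                    # negative predictive value
    f1 = 2 * ppv * sen / (ppv + sen)        # harmonic mean of PPV and SEN
    return acc, sen, spec, ppv, npv, f1

# Hypothetical fold: 23 caries and 30 non-caries images.
print([round(m, 4) for m in metrics(21, 27, 3, 2)])
```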
4.2. Experiment and Result
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Oral Health. World Health Organization, 2020. Available online: https://www.who.int/health-topics/oral-health/ (accessed on 1 October 2020).
2. Cavities. MSD Manual Consumer Version; Porter, R., Ed.; Merck Sharp & Dohme Corp.: Kenilworth, NJ, USA, 2021. Available online: https://www.msdmanuals.com/en-jp/home/mouth-and-dental-disorders/tooth-disorders/cavities (accessed on 26 January 2021).
3. Mosquera-Lopez, C.; Agaian, S.; Velez-Hoyos, A.; Thompson, I. Computer-Aided Prostate Cancer Diagnosis from Digitized Histopathology: A Review on Texture-Based Systems. IEEE Rev. Biomed. Eng. 2015, 8, 98–113.
4. Mansour, R.F. Evolutionary Computing Enriched Computer-Aided Diagnosis System for Diabetic Retinopathy: A Survey. IEEE Rev. Biomed. Eng. 2017, 10, 334–349.
5. Sampathkumar, A.; Hughes, D.A.; Kirk, K.J.; Otten, W.; Longbottom, C. All-optical photoacoustic imaging and detection of early-stage dental caries. In Proceedings of the 2014 IEEE International Ultrasonics Symposium, Chicago, IL, USA, 3–6 September 2014; pp. 1269–1272.
6. Hughes, D.A.; Girkin, J.M.; Poland, S.; Longbottom, C.; Cochran, S. Focused ultrasound for early detection of tooth decay. In Proceedings of the 2009 IEEE International Ultrasonics Symposium, Rome, Italy, 20–23 September 2009; pp. 1–3.
7. Usenik, P.; Bürmen, M.; Fidler, A.; Pernuš, F.; Likar, B. Near-infrared hyperspectral imaging of water evaporation dynamics for early detection of incipient caries. J. Dent. 2014, 42, 1242–1247.
8. Li, S.; Pang, Z.; Song, W.; Guo, Y.; You, W.; Hao, A.; Qin, H. Low-Shot Learning of Automatic Dental Plaque Segmentation Based on Local-to-Global Feature Fusion. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; pp. 664–668.
9. Maslak, E.; Khudanov, B.; Krivtsova, D.; Tsoy, T. Application of Information Technologies and Quantitative Light-Induced Fluorescence for the Assessment of Early Caries Treatment Outcomes. In Proceedings of the 2019 12th International Conference on Developments in eSystems Engineering (DeSE), Kazan, Russia, 7–10 October 2019; pp. 912–917.
10. Angelino, K.; Edlund, D.A.; Shah, P. Near-Infrared Imaging for Detecting Caries and Structural Deformities in Teeth. IEEE J. Transl. Eng. Health Med. 2017, 5, 2300107.
11. Li, W.; Kuang, W.; Li, Y.; Li, Y.; Ye, W. Clinical X-Ray Image Based Tooth Decay Diagnosis using SVM. In Proceedings of the 2007 International Conference on Machine Learning and Cybernetics, Hong Kong, China, 19–22 August 2007; pp. 1616–1619.
12. Yu, Y.; Li, Y.; Li, Y.-J.; Wang, J.-M.; Lin, D.-H.; Ye, W.-P. Tooth Decay Diagnosis using Back Propagation Neural Network. In Proceedings of the 2006 IEEE International Conference on Machine Learning and Cybernetics, Dalian, China, 13–16 August 2006; pp. 3956–3959.
13. Patil, S.; Kulkarni, V.; Bhise, A. Intelligent system with dragonfly optimisation for caries detection. IET Image Process. 2019, 13, 429–439.
14. Pan, W.-T. A new Fruit Fly Optimization Algorithm: Taking the financial distress model as an example. Knowl. Based Syst. 2012, 26, 69–74.
15. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
16. Loog, M.; Duin, R.P.W. Linear dimensionality reduction via a heteroscedastic extension of LDA: The Chernoff criterion. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 732–739.
17. Lazcano, R.; Madroñal, D.; Salvador, R.; Desnos, K.; Pelcat, M.; Guerra, R.; Fabelo, H.; Ortega, S.; Lopez, S.; Callico, G.M.; et al. Porting a PCA-based hyperspectral image dimensionality reduction algorithm for brain cancer detection on a manycore architecture. J. Syst. Archit. 2017, 77, 101–111.
18. Montefusco-Siegmund, R.; Maldonado, P.E.; Devia, C. Effects of ocular artifact removal through ICA decomposition on EEG phase. In Proceedings of the 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER), San Diego, CA, USA, 6–8 November 2013; pp. 1374–1377.
19. Jiao, Z.; Gao, X.; Wang, Y.; Li, J.; Xu, H. Deep Convolutional Neural Networks for mental load classification based on EEG data. Pattern Recognit. 2018, 76, 582–595.
20. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems, Volume 1, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105.
21. Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018.
22. Tiulpin, A.; Thevenot, J.; Rahtu, E.; Lehenkari, P.; Saarakkala, S. Automatic Knee Osteoarthritis Diagnosis from Plain Radiographs: A Deep Learning-Based Approach. Sci. Rep. 2018, 8, 1727.
23. Stuhlsatz, A.; Lippel, J.; Zielke, T. Feature Extraction with Deep Neural Networks by a Generalized Discriminant Analysis. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 596–608.
24. Szegedy, C.; Wei, L.; Yangqing, J.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9.
25. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556.
26. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
27. Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1800–1807.
28. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621.
29. Soh, L.; Tsatsoulis, C. Texture analysis of SAR sea ice imagery using gray level co-occurrence matrices. IEEE Trans. Geosci. Remote Sens. 1999, 37, 780–795.
30. Clausi, D.A. An analysis of co-occurrence texture statistics as a function of grey level quantization. Can. J. Remote Sens. 2002, 28, 45–62.
31. Vapnik, V. The Nature of Statistical Learning Theory; Springer: New York, NY, USA, 1995.
32. Guo, G.; Wang, H.; Bell, D.; Bi, Y. KNN Model-Based Approach in Classification. In On the Move to Meaningful Internet Systems 2003: CoopIS, DOA, and ODBASE; Springer: Berlin/Heidelberg, Germany, 2004.
33. Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; CRC Press: New York, NY, USA, 1984.
34. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning, 2nd ed.; Springer: New York, NY, USA, 2008.
35. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
Network Model | Depth | Size (MB) | Parameters (Millions) | Input Size
---|---|---|---|---
Alexnet | 8 | 227 | 61.0 | 227 × 227 × 3
Googlenet | 22 | 27 | 7.0 | 224 × 224 × 3
VGG16 | 23 | 528 | 138.4 | 224 × 224 × 3
VGG19 | 26 | 549 | 143.7 | 224 × 224 × 3
Resnet18 | 18 | 45 | 11.5 | 224 × 224 × 3
Resnet50 | 50 | 98 | 25.6 | 224 × 224 × 3
Resnet101 | 101 | 171 | 44.7 | 224 × 224 × 3
Xception | 126 | 88 | 22.9 | 299 × 299 × 3
Features | Name | Formula |
---|---|---|
F1 | Mean | |
F2 | Entropy | |
F3 | Autocorrelation | |
F4 | Contrast | |
F5 | Correlation | |
F6 | Cluster prominence | |
F7 | Cluster shade | |
F8 | Dissimilarity | |
F9 | Maximum probability | |
F10 | Sum of square variance | |
F11 | Sum of average | |
F12 | Sum of entropy | |
F13 | Sum of variance | |
F14 | Difference entropy |
Network | Alexnet | Googlenet | VGG16 | VGG19 | Resnet18 | Resnet50 | Resnet101 | Xception |
---|---|---|---|---|---|---|---|---|
Layer | fc8 | pool5-7x7_s1 | fc8 | fc8 | pool5 | avg_pool | pool5 | avg_pool |
ACC | 0.8679 | 0.8302 | 0.9057 | 0.8113 | 0.8491 | 0.8868 | 0.8868 | 0.8868 |
SEN | 0.7826 | 0.8261 | 0.9130 | 0.7391 | 0.8261 | 0.8696 | 0.8261 | 0.9130 |
SPEC | 0.9333 | 0.8333 | 0.9000 | 0.8667 | 0.8667 | 0.9000 | 0.9333 | 0.8667 |
PPV | 0.9000 | 0.7919 | 0.8750 | 0.8095 | 0.8261 | 0.8696 | 0.9048 | 0.8400 |
NPV | 0.8485 | 0.8621 | 0.9310 | 0.8125 | 0.8667 | 0.9000 | 0.8750 | 0.9286 |
F1-score | 0.7200 | 0.6786 | 0.8077 | 0.6296 | 0.7037 | 0.7692 | 0.7600 | 0.7778 |
AUC | 0.9087 | 0.8333 | 0.9587 | 0.8674 | 0.9014 | 0.9565 | 0.9072 | 0.9464 |
Network | Alexnet | Googlenet | VGG16 | VGG19 | Resnet18 | Resnet50 | Resnet101 | Xception |
---|---|---|---|---|---|---|---|---|
ACC | 0.8679 | 0.8679 | 0.9057 | 0.8113 | 0.8868 | 0.8868 | 0.8868 | 0.9245 |
SEN | 0.7826 | 0.8696 | 0.9130 | 0.7826 | 0.8696 | 0.8696 | 0.8261 | 1.0000 |
SPEC | 0.9333 | 0.8667 | 0.9000 | 0.8333 | 0.9000 | 0.9000 | 0.9333 | 0.8667 |
PPV | 0.9000 | 0.8333 | 0.8750 | 0.7826 | 0.8696 | 0.8696 | 0.9048 | 0.8519 |
NPV | 0.8485 | 0.8966 | 0.9310 | 0.8333 | 0.9000 | 0.9000 | 0.8750 | 1.0000 |
F1-score | 0.7200 | 0.7407 | 0.8077 | 0.6429 | 0.7692 | 0.7692 | 0.7600 | 0.8519 |
AUC | 0.9087 | 0.8949 | 0.9594 | 0.8659 | 0.9123 | 0.9580 | 0.9087 | 0.9688 |
Classifier | Measure | Fold-1 | Fold-2 | Fold-3 | Fold-4 | Fold-5 | Mean
---|---|---|---|---|---|---|---
Decision Tree | Accuracy | 0.6415 | 0.6038 | 0.7170 | 0.6038 | 0.6981 | 0.6528
 | Sensitivity | 0.6522 | 0.7826 | 0.7391 | 0.6957 | 0.6087 | 0.6957
 | Specificity | 0.6333 | 0.4667 | 0.7000 | 0.5333 | 0.7667 | 0.6200
 | PPV | 0.5769 | 0.5294 | 0.6538 | 0.5333 | 0.6667 | 0.5920
 | NPV | 0.7037 | 0.7368 | 0.7778 | 0.6957 | 0.7188 | 0.7265
 | F1-score | 0.4412 | 0.4615 | 0.5313 | 0.4324 | 0.4667 | 0.4666
 | AUC | 0.6696 | 0.6507 | 0.7717 | 0.6159 | 0.7043 | 0.6825
K-Nearest Neighbor | Accuracy | 0.8491 | 0.8302 | 0.7736 | 0.7547 | 0.7170 | 0.7849
 | Sensitivity | 0.6522 | 0.6957 | 0.6087 | 0.6087 | 0.6522 | 0.6435
 | Specificity | 1.0000 | 0.9333 | 0.9000 | 0.8667 | 0.7667 | 0.8933
 | PPV | 1.0000 | 0.8889 | 0.8235 | 0.7778 | 0.6818 | 0.8344
 | NPV | 0.7895 | 0.8000 | 0.7500 | 0.7429 | 0.7419 | 0.7649
 | F1-score | 0.6522 | 0.6400 | 0.5385 | 0.5185 | 0.5000 | 0.5698
 | AUC | 0.8261 | 0.8145 | 0.7543 | 0.7377 | 0.7094 | 0.7684
Naïve Bayes | Accuracy | 0.7358 | 0.7333 | 0.7170 | 0.7547 | 0.7547 | 0.7391
 | Sensitivity | 0.6087 | 0.7308 | 0.6087 | 0.6522 | 0.6522 | 0.6505
 | Specificity | 0.8333 | 0.7353 | 0.8000 | 0.8333 | 0.8333 | 0.8071
 | PPV | 0.7368 | 0.6786 | 0.7000 | 0.7500 | 0.7500 | 0.7231
 | NPV | 0.7353 | 0.7813 | 0.7273 | 0.7576 | 0.7576 | 0.7518
 | F1-score | 0.5000 | 0.5429 | 0.4828 | 0.5357 | 0.5357 | 0.5194
 | AUC | 0.8101 | 0.8066 | 0.8043 | 0.7674 | 0.8094 | 0.7996
Random Forest | Accuracy | 0.9057 | 0.8679 | 0.9245 | 0.7736 | 0.7925 | 0.8528
 | Sensitivity | 0.8696 | 0.9565 | 0.9565 | 0.7391 | 0.6522 | 0.8348
 | Specificity | 0.9333 | 0.8000 | 0.9000 | 0.8000 | 0.9000 | 0.8667
 | PPV | 0.9091 | 0.7857 | 0.8800 | 0.7391 | 0.8333 | 0.8295
 | NPV | 0.9032 | 0.9600 | 0.9643 | 0.8000 | 0.7714 | 0.8798
 | F1-score | 0.8000 | 0.7586 | 0.8462 | 0.5862 | 0.5769 | 0.7136
 | AUC | 0.9551 | 0.9261 | 0.9623 | 0.8087 | 0.8652 | 0.9035
Support Vector Machine | Accuracy | 0.9623 | 0.9245 | 0.8868 | 0.8868 | 0.9245 | 0.9170
 | Sensitivity | 0.9565 | 0.8696 | 0.7391 | 0.9565 | 1.0000 | 0.9043
 | Specificity | 0.9667 | 0.9667 | 1.0000 | 0.8333 | 0.8667 | 0.9267
 | PPV | 0.9565 | 0.9524 | 1.0000 | 0.8148 | 0.8519 | 0.9151
 | NPV | 0.9667 | 0.9063 | 0.8333 | 0.9615 | 1.0000 | 0.9336
 | F1-score | 0.9167 | 0.8333 | 0.7391 | 0.7857 | 0.8519 | 0.8253
 | AUC | 0.9971 | 0.9899 | 0.9681 | 0.9652 | 0.9688 | 0.9778
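The five-fold evaluation above can be sketched with scikit-learn. This is not the authors' code: the features and labels below are synthetic stand-ins for the fused deep + geometric vectors, and the feature dimension is an assumption for illustration:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 533 samples (as in the dataset), hypothetical
# 1014-dimensional fused vectors (e.g. 1000 deep + 14 GLCM features).
rng = np.random.default_rng(0)
X = rng.normal(size=(533, 1014))
y = rng.integers(0, 2, size=533)  # caries / non-caries labels

# RBF-kernel SVM with feature standardization, scored over five
# stratified folds so each fold keeps the class balance.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(scores, scores.mean())
```

Swapping `SVC` for `DecisionTreeClassifier`, `KNeighborsClassifier`, `GaussianNB`, or `RandomForestClassifier` reproduces the other rows of the comparison.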
Function Name | Time (s) |
---|---|
Load data | 0.37 |
Deep activated features extraction | 9.99 |
Geometric features extraction | 2.52 |
Fusion features combination | 0.01 |
Training classification model | 0.62 |
Predict and evaluation | 0.28 |
Total | 13.79 |
References | Method | Samples | ACC (%) | SEN (%) | SPEC (%) | PPV (%) | NPV (%)
---|---|---|---|---|---|---|---
[11,13] | | 120 | 53.33 | 59.33 | 6.67 | 73.67 | 6.67
[12,13] | | 120 | 73.33 | 77.67 | 53.33 | 90.33 | 53.33
[13] | | 120 | 90.00 | 94.67 | 63.33 | 91.00 | 63.33
Proposed method | | 533 | 91.70 | 90.43 | 92.67 | 91.51 | 93.36
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Bui, T.H.; Hamamoto, K.; Paing, M.P. Deep Fusion Feature Extraction for Caries Detection on Dental Panoramic Radiographs. Appl. Sci. 2021, 11, 2005. https://doi.org/10.3390/app11052005