Clinical, Research, and Educational Applications of ChatGPT in Dentistry: A Narrative Review
Abstract
1. Introduction
2. Materials and Methods
2.1. Eligibility Criteria
- Opinion pieces, editorial commentaries, and letters to the editor.
- Studies focused on a specific medical specialty.
- Papers not focused on AI language models.
- Studies not in the English language.
- Papers without available full text.
2.2. Data Extraction and Review Process
3. Results
4. Discussion
- It can process a greater number of words at once than ChatGPT 3.5, enabling longer conversations, broader context, and improved text comprehension.
- It provides more accurate responses thanks to a larger and more diverse database.
- It successfully incorporates contextual details and generates responses that are consistent not just with the immediate input but with the entire conversation.
- It reduces the number of plausible but incorrect responses (known as “hallucinations”), providing a crucial improvement in the trustworthiness of the generated results.
4.1. Applications in Dental Research
4.2. Clinical Applications
4.2.1. Diagnostics and Radiology
4.2.2. Traumatology
4.2.3. Oral and Maxillofacial Surgery
4.2.4. Prosthodontics
4.2.5. Periodontology
4.2.6. Endodontics
4.2.7. Orthodontics and Pediatrics
4.2.8. Patients’ Communication and Self-Education
4.3. Administrative Applications
4.4. Educational Enhancements
4.4.1. The Students’ Perspective
4.4.2. The Teachers’ Perspective
4.4.3. Mastery and Expertise Tests
4.5. Ethical and Practical Considerations
4.6. Future Directions
4.7. Alternative AI Models to ChatGPT
4.8. Limitations
5. Conclusions
- Dental research: It has been shown to be useful in study design, abstract generation, draft correction, syntax error correction, translation, and reference formatting. At present, ChatGPT cannot be trusted to generate bibliographic references from a text because of frequent “hallucinations”, errors, and/or outdated information.
- Clinical applications: ChatGPT can be used to assist the diagnostic workflow, but only if properly instructed by the operator. It cannot yet serve directly as a diagnostic tool, and its answers should not be accepted uncritically, not least because an outdated version of the model may be in use.
- Administrative applications: From inputted clinical findings and treatment plans, the tool can generate high-quality reports, spot trends, and manage patient follow-up appointments.
- Educational enhancements: By simulating a realistic conversation, it can help students generate scenarios, questions, answers, and explanations quickly and engagingly. It can also compose ever-changing tests and grade them.
- Ethical and practical considerations: The processing and storage of patients’ sensitive data raise significant privacy concerns, a crucial aspect to consider when deploying such tools in a clinical setting.
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
References
- Chen, M.; Decary, M. Artificial intelligence in healthcare: An essential guide for health leaders. Healthc. Manag. Forum 2020, 33, 10–18. [Google Scholar] [CrossRef] [PubMed]
- Aggarwal, A.; Tam, C.C.; Wu, D.; Li, X.; Qiao, S. Artificial Intelligence-Based Chatbots for Promoting Health Behavioral Changes: Systematic Review. J. Med. Internet Res. 2023, 25, E40789. [Google Scholar] [CrossRef] [PubMed]
- Wailthare, S.; Gaikwad, T.; Khadse, K.; Dubey, P. Artificial intelligence-based chat-bot. Int. Res. J. Eng. Technol. 2018, 5, 2305–2306. [Google Scholar]
- Eysenbach, G. The Role of ChatGPT, Generative Language Models, and Artificial Intelligence in Medical Education: A Conversation With ChatGPT and a Call for Papers. JMIR Med. Educ. 2023, 9, E46885. [Google Scholar] [CrossRef]
- Galvao Gomes da Silva, J.; Kavanagh, D.J.; Belpaeme, T.; Taylor, L.; Beeson, K.; Andrade, J. Experiences of a Motivational Interview Delivered by a Robot: Qualitative Study. J. Med. Internet Res. 2018, 20, E116. [Google Scholar] [CrossRef] [PubMed]
- Stephens, T.N.; Joerin, A.; Rauws, M.; Werk, L.N. Feasibility of pediatric obesity and prediabetes treatment support through Tess, the AI behavioral coaching chatbot. Transl. Behav. Med. 2019, 9, 440–447. [Google Scholar] [CrossRef]
- Milne-Ives, M.; de Cock, C.; Lim, E.; Shehadeh, M.H.; de Pennington, N.; Mole, G.; Normando, E.; Meinert, E. The Effectiveness of Artificial Intelligence Conversational Agents in Health Care: Systematic Review. J. Med. Internet Res. 2020, 22, E20346. [Google Scholar] [CrossRef]
- Krishnan, C.; Gupta, A.; Gupta, A.; Singh, G. Impact of Artificial Intelligence-Based Chatbots on Customer Engagement and Business Growth. In Deep Learning for Social Media Data Analytics; Serrano-Estrada, L., Saxena, A., Biswas, A., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2022; pp. 195–210. [Google Scholar]
- Yala, A.; Lehman, C.; Schuster, T.; Portnoi, T.; Barzilay, R. A Deep Learning Mammography-based Model for Improved Breast Cancer Risk Prediction. Radiology 2019, 292, 60–66. [Google Scholar] [CrossRef]
- Verma, P.; Maan, P.; Gautam, R.; Arora, T. Unveiling the Role of Artificial Intelligence (AI) in Polycystic Ovary Syndrome (PCOS) Diagnosis: A Comprehensive Review. Reprod. Sci. 2024, 31, 2901–2915. [Google Scholar] [CrossRef]
- Paranjape, K.; Schinkel, M.; Nannan Panday, R.; Car, J.; Nanayakkara, P. Introducing Artificial Intelligence Training in Medical Education. JMIR Med. Educ. 2019, 5, E16048. [Google Scholar] [CrossRef]
- Dave, T.; Athaluri, S.A.; Singh, S. ChatGPT in medicine: An overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front. Artif. Intell. 2023, 6, 1169595. [Google Scholar] [CrossRef] [PubMed]
- Prada, P.; Perroud, N.; Thorens, G. Artificial intelligence and psychiatry: Questions from psychiatrists to ChatGPT. Rev. Med. Suisse 2023, 19, 532–536. [Google Scholar] [CrossRef]
- Yang, J.; Xie, Y.; Liu, L.; Xia, B.; Cao, Z.; Guo, C. Automated Dental Image Analysis by Deep Learning on Small Dataset. In Proceedings of the 2018 IEEE 42nd Annual Computer Software and Applications Conference (COMPSAC), Tokyo, Japan, 23–27 July 2018; pp. 492–497. [Google Scholar] [CrossRef]
- Schwendicke, F.; Golla, T.; Dreher, M.; Krois, J. Convolutional neural networks for dental image diagnostics: A scoping review. J. Dent. 2019, 91, 103226. [Google Scholar] [CrossRef] [PubMed]
- Lee, J.H.; Jeong, S.N. Efficacy of deep convolutional neural network algorithm for the identification and classification of dental implant systems, using panoramic and periapical radiographs: A pilot study. Medicine 2020, 99, E20787. [Google Scholar] [CrossRef] [PubMed]
- Qiu, B.; Guo, J.; Kraeima, J.; Glas, H.H.; Borra, R.J.H.; Witjes, M.J.H.; van Ooijen, P.M.A. Automatic segmentation of the mandible from computed tomography scans for 3D virtual surgical planning using the convolutional neural network. Phys. Med. Biol. 2019, 64, 175020. [Google Scholar] [CrossRef]
- Bonny, T.; Al Nassan, W.; Obaideen, K.; Al Mallahi, M.N.; Mohammad, Y.; El-Damanhoury, H.M. Contemporary Role and Applications of Artificial Intelligence in Dentistry. F1000Res 2023, 12, 1179. [Google Scholar] [CrossRef]
- Kattadiyil, M.T.; Mursic, Z.; AlRumaih, H.; Goodacre, C.J. Intraoral scanning of hard and soft tissues for partial removable dental prosthesis fabrication. J. Prosthet. Dent. 2014, 112, 444–448. [Google Scholar] [CrossRef]
- Engels, P.; Meyer, O.; Schonewolf, J.; Schlickenrieder, A.; Hickel, R.; Hesenius, M.; Gruhn, V.; Kuhnisch, J. Automated detection of posterior restorations in permanent teeth using artificial intelligence on intraoral photographs. J. Dent. 2022, 121, 104124. [Google Scholar] [CrossRef]
- Li, H.; Sakai, T.; Tanaka, A.; Ogura, M.; Lee, C.; Yamaguchi, S.; Imazato, S. Interpretable AI Explores Effective Components of CAD/CAM Resin Composites. J. Dent. Res. 2022, 101, 1363–1371. [Google Scholar] [CrossRef]
- Rojek, I.; Mikolajewski, D.; Dostatni, E.; Macko, M. AI-Optimized Technological Aspects of the Material Used in 3D Printing Processes for Selected Medical Applications. Materials 2020, 13, 5437. [Google Scholar] [CrossRef]
- Ng, W.L.; Chan, A.; Ong, Y.S.; Chua, C.K. Deep learning for fabrication and maturation of 3D bioprinted tissues and organs. Virtual Phys. Prototyp. 2020, 15, 340–358. [Google Scholar] [CrossRef]
- Liu, Y.; Shang, X.; Shen, Z.; Hu, B.; Wang, Z.; Xiong, G. 3D Deep Learning for 3D Printing of Tooth Model. In Proceedings of the 2019 IEEE International Conference on Service Operations and Logistics, and Informatics (SOLI), Zhengzhou, China, 6–8 November 2019; pp. 274–279. [Google Scholar] [CrossRef]
- Eggmann, F.; Weiger, R.; Zitzmann, N.; Blatz, M. Implications of large language models such as ChatGPT for dental medicine. J. Esthet. Restor. Dent. 2023, 35, 1098–1102. [Google Scholar] [CrossRef] [PubMed]
- Baig, Z.; Lawrence, D.; Ganhewa, M.; Cirillo, N. Accuracy of Treatment Recommendations by Pragmatic Evidence Search and Artificial Intelligence: An Exploratory Study. Diagnostics 2024, 14, 527. [Google Scholar] [CrossRef]
- Islam, A.; Banerjee, A.; Wati, S.M.; Banerjee, S.; Shrivastava, D.; Srivastava, K.C. Utilizing Artificial Intelligence Application for Diagnosis of Oral Lesions and Assisting Young Oral Histopathologist in Deriving Diagnosis from Provided Features—A Pilot study. J. Pharm. Bioallied Sci. 2024, 16, S1136–S1139. [Google Scholar] [CrossRef]
- Mohammad-Rahimi, H.; Khoury, Z.; Alamdari, M.I.; Rokhshad, R.; Motie, P.; Parsa, A.; Tavares, T.; Sciubba, J.; Price, J.; Sultan, A. Performance of AI chatbots on controversial topics in oral medicine, pathology, and radiology. Oral. Surg. Oral. Med. Oral. Pathol. Oral. Radiol. 2024, 137, 508–514. [Google Scholar] [CrossRef]
- Russe, M.F.; Rau, A.; Ermer, M.A.; Rothweiler, R.; Wenger, S.; Kloble, K.; Schulze, R.K.W.; Bamberg, F.; Schmelzeisen, R.; Reisert, M.; et al. A content-aware chatbot based on GPT 4 provides trustworthy recommendations for Cone-Beam CT guidelines in dental imaging. Dentomaxillofacial Radiol. 2024, 53, 109–114. [Google Scholar] [CrossRef] [PubMed]
- Sahu, P.K.; Benjamin, L.A.; Singh Aswal, G.; Williams-Persad, A. ChatGPT in research and health professions education: Challenges, opportunities, and future directions. Postgrad. Med. J. 2023, 100, 50–55. [Google Scholar] [CrossRef]
- Shikino, K.; Shimizu, T.; Otsuka, Y.; Tago, M.; Takahashi, H.; Watari, T.; Sasaki, Y.; Iizuka, G.; Tamura, H.; Nakashima, K.; et al. Evaluation of ChatGPT-Generated Differential Diagnosis for Common Diseases With Atypical Presentation: Descriptive Research. JMIR Med. Educ. 2024, 10, E58758. [Google Scholar] [CrossRef] [PubMed]
- Silva, T.P.; Andrade-Bortoletto, M.F.S.; Ocampo, T.S.C.; Alencar-Palha, C.; Bornstein, M.M.; Oliveira-Santos, C.; Oliveira, M.L. Performance of a commercially available Generative Pre-trained Transformer (GPT) in describing radiolucent lesions in panoramic radiographs and establishing differential diagnoses. Clin. Oral. Investig. 2024, 28, 204. [Google Scholar] [CrossRef]
- Mohammad-Rahimi, H.; Ourang, S.A.; Pourhoseingholi, M.A.; Dianat, O.; Dummer, P.M.H.; Nosrat, A. Validity and reliability of artificial intelligence chatbots as public sources of information on endodontics. Int. Endod. J. 2024, 57, 305–314. [Google Scholar] [CrossRef] [PubMed]
- Ourang, S.A.; Sohrabniya, F.; Mohammad-Rahimi, H.; Dianat, O.; Aminoshariae, A.; Nagendrababu, V.; Dummer, P.M.H.; Duncan, H.F.; Nosrat, A. Artificial intelligence in endodontics: Fundamental principles, workflow, and tasks. Int. Endod. J. 2024, 57, 1546–1565. [Google Scholar] [CrossRef]
- Qutieshat, A.; Al Rusheidi, A.; Al Ghammari, S.; Alarabi, A.; Salem, A.; Zelihic, M. Comparative analysis of diagnostic accuracy in endodontic assessments: Dental students vs. artificial intelligence. Diagnosis 2024, 11, 259–265. [Google Scholar] [CrossRef]
- Snigdha, N.T.; Batul, R.; Karobari, M.I.; Adil, A.H.; Dawasaz, A.A.; Hameed, M.S.; Mehta, V.; Noorani, T.Y. Assessing the Performance of ChatGPT 3.5 and ChatGPT 4 in Operative Dentistry and Endodontics: An Exploratory Study. Hum. Behav. Emerg. Tech. 2024, 2024, 8. [Google Scholar] [CrossRef]
- Suarez, A.; Diaz-Flores Garcia, V.; Algar, J.; Gomez Sanchez, M.; Llorente de Pedro, M.; Freire, Y. Unveiling the ChatGPT phenomenon: Evaluating the consistency and accuracy of endodontic question answers. Int. Endod. J. 2024, 57, 108–113. [Google Scholar] [CrossRef]
- Acar, A.H. Can natural language processing serve as a consultant in oral surgery? J. Stomatol. Oral. Maxillofac. Surg. 2024, 125, 101724. [Google Scholar] [CrossRef]
- Alten, A.; Gündeş, E.; Tuncer, E.; Kozanoğlu, E.; Akalın, B.E.; Emekli, U. Integrating artificial intelligence in orthognathic surgery: A case study of ChatGPT’s role in enhancing physician-patient consultations for dentofacial deformities. J. Plast. Reconstr. Aesthet. Surg. 2023, 87, 405–407. [Google Scholar] [CrossRef] [PubMed]
- Balel, Y. ScholarGPT’s performance in oral and maxillofacial surgery. J. Stomatol. Oral. Maxillofac. Surg. 2024, 102114. [Google Scholar] [CrossRef] [PubMed]
- Cai, Y.; Zhao, R.; Zhao, H.; Li, Y.; Gou, L. Exploring the use of ChatGPT/GPT-4 for patient follow-up after oral surgeries. Int. J. Oral. Maxillofac. Surg. 2024, 53, 867–872. [Google Scholar] [CrossRef]
- Çoban, E.; Altay, B. ChatGPT May Help Inform Patients in Dental Implantology. Int. J. Oral. Maxillofac. Implant. 2024, 39, 203–208. [Google Scholar] [CrossRef] [PubMed]
- Isik, G.; Kafadar-Gurbuz, I.; Elgun, F.; Kara, R.U.; Berber, B.; Ozgul, S.; Gunbay, T. Is Artificial Intelligence a Useful Tool for Clinical Practice of Oral and Maxillofacial Surgery? J. Craniofacial Surg. 2024, 10–97. [Google Scholar]
- Jacobs, T.; Shaari, A.; Gazonas, C.B.; Ziccardi, V.B. Is ChatGPT an Accurate and Readable Patient Aid for Third Molar Extractions? J. Oral. Maxillofac. Surg. 2024, 82, 1239–1245. [Google Scholar] [CrossRef]
- Abu Arqub, S.; Al-Moghrabi, D.; Allareddy, V.; Upadhyay, M.; Vaid, N.; Yadav, S. Content analysis of AI-generated (ChatGPT) responses concerning orthodontic clear aligners. Angle Orthod. 2024, 94, 263–272. [Google Scholar] [CrossRef]
- Daraqel, B.; Wafaie, K.; Mohammed, H.; Cao, L.; Mheissen, S.; Liu, Y.; Zheng, L. The performance of artificial intelligence models in generating responses to general orthodontic questions: ChatGPT vs Google Bard. Am. J. Orthod. Dentofac. Orthop. 2024, 165, 652–662. [Google Scholar] [CrossRef]
- Dursun, D.; Bilici Geçer, R. Can artificial intelligence models serve as patient information consultants in orthodontics? BMC Med. Inform. Decis. Mak. 2024, 24, 211. [Google Scholar] [CrossRef]
- Lima, N.G.M.; Costa, L.; Santos, P.B. ChatGPT in orthodontics: Limitations and possibilities. Australas. Orthod. J. 2024, 40, 19–21. [Google Scholar] [CrossRef]
- Makrygiannakis, M.A.; Giannakopoulos, K.; Kaklamanos, E.G. Evidence-based potential of generative artificial intelligence large language models in orthodontics: A comparative study of ChatGPT, Google Bard, and Microsoft Bing. Eur. J. Orthod. 2024, cjae017. [Google Scholar] [CrossRef]
- Surovková, J.; Haluzová, S.; Strunga, M.; Urban, R.; Lifková, M.; Thurzo, A. The New Role of the Dental Assistant and Nurse in the Age of Advanced Artificial Intelligence in Telehealth Orthodontic Care with Dental Monitoring: Preliminary Report. Appl. Sci.-Basel 2023, 13, 16. [Google Scholar] [CrossRef]
- Batool, I.; Naved, N.; Kazmi, S.M.R.; Umer, F. Leveraging Large Language Models in the delivery of post-operative dental care: A comparison between an embedded GPT model and ChatGPT. BDJ Open 2024, 10, 48. [Google Scholar] [CrossRef]
- Gugnani, N.; Pandit, I.K.; Gupta, M.; Gugnani, S.; Kathuria, S. Parental concerns about oral health of children: Is ChatGPT helpful in finding appropriate answers? J. Indian Soc. Pedod. Prev. Dent. 2024, 42, 104–111. [Google Scholar] [CrossRef]
- Hassona, Y.; Alqaisi, D.; Al-Haddad, A.; Georgakopoulou, E.A.; Malamos, D.; Alrashdan, M.S.; Sawair, F. How good is ChatGPT at answering patients’ questions related to early detection of oral (mouth) cancer? Oral. Surg. Oral. Med. Oral. Pathol. Oral. Radiol. 2024, 138, 269–278. [Google Scholar] [CrossRef] [PubMed]
- Incerti Parenti, S.; Bartolucci, M.L.; Biondi, E.; Maglioni, A.; Corazza, G.; Gracco, A.; Alessandri-Bonetti, G. Online Patient Education in Obstructive Sleep Apnea: ChatGPT versus Google Search. Healthcare 2024, 12, 1781. [Google Scholar] [CrossRef] [PubMed]
- Vassis, S.; Powell, H.; Petersen, E.; Barkmann, A.; Noeldeke, B.; Kristensen, K.D.; Stoustrup, P. Large-Language Models in Orthodontics: Assessing Reliability and Validity of ChatGPT in Pretreatment Patient Education. Cureus 2024, 16, E68085. [Google Scholar] [CrossRef]
- Yurdakurban, E.; Topsakal, K.G.; Duran, G.S. A comparative analysis of AI-based chatbots: Assessing data quality in orthognathic surgery related patient information. J. Stomatol. Oral. Maxillofac. Surg. 2024, 125, 101757. [Google Scholar] [CrossRef]
- Rokhshad, R.; Zhang, P.; Mohammad-Rahimi, H.; Pitchika, V.; Entezari, N.; Schwendicke, F. Accuracy and consistency of chatbots versus clinicians for answering pediatric dentistry questions: A pilot study. J. Dent. 2024, 144, 104938. [Google Scholar] [CrossRef]
- Alan, R.; Alan, B.M. Utilizing ChatGPT-4 for Providing Information on Periodontal Disease to Patients: A DISCERN Quality Analysis. Cureus 2023, 15, E46213. [Google Scholar] [CrossRef]
- Babayiğit, O.; Tastan Eroglu, Z.; Ozkan Sen, D.; Ucan Yarkac, F. Potential Use of ChatGPT for Patient Information in Periodontology: A Descriptive Pilot Study. Cureus 2023, 15, E48518. [Google Scholar] [CrossRef]
- Danesh, A.; Pazouki, H.; Danesh, F.; Danesh, A.; Vardar-Sengul, S. Artificial intelligence in dental education: ChatGPT’s performance on the periodontic in-service examination. J. Periodontol. 2024, 95, 682–687. [Google Scholar] [CrossRef]
- Tastan Eroglu, Z.; Babayigit, O.; Ozkan Sen, D.; Ucan Yarkac, F. Performance of ChatGPT in classifying periodontitis according to the 2018 classification of periodontal diseases. Clin. Oral. Investig. 2024, 28, 407. [Google Scholar] [CrossRef] [PubMed]
- Freire, Y.; Laorden, A.S.; Perez, J.O.; Sanchez, M.G.; Garcia, V.D.-F.; Suarez, A. ChatGPT performance in prosthodontics: Assessment of accuracy and repeatability in answer generation. J. Prosthet. Dent. 2024, 131, 659.e1–659.e6. [Google Scholar] [CrossRef] [PubMed]
- Rokhshad, R.; Fadul, M.; Zhai, G.; Carr, K.; Jackson, J.G.; Zhang, P. A Comparative Analysis of Responses of Artificial Intelligence Chatbots in Special Needs Dentistry. Pediatr. Dent. 2024, 46, 337–344. [Google Scholar] [PubMed]
- Khan, M.K. Novel applications of artificial intelligence, machine learning, and deep learning-based modalities in dental traumatology: An overview of evidence-based literature. MRIMS J. Health Sci. 2024, 12, 223–227. [Google Scholar] [CrossRef]
- Ozden, I.; Gokyar, M.; Ozden, M.E.; Sazak Ovecoglu, H. Assessment of artificial intelligence applications in responding to dental trauma. Dent. Traumatol. 2024, 40, 722–729. [Google Scholar] [CrossRef]
- Alhaidry, H.M.; Fatani, B.; Alrayes, J.O.; Almana, A.M.; Alfhaed, N.K. ChatGPT in Dentistry: A Comprehensive Review. Cureus 2023, 15, e38317. [Google Scholar] [CrossRef]
- de Souza, L.L.; Pontes, H.A.R.; Martins, M.D.; Fonseca, F.P.; Corrêa, F.; Coracin, F.L.; Khurram, S.A.; Hagag, A.; Santos-Silva, A.R.; Vargas, P.A.; et al. ChatGPT and dentistry: A step toward the future. Gen. Dent. 2024, 72, 72–77. [Google Scholar]
- Huang, H.Y.; Zheng, O.; Wang, D.D.; Yin, J.Y.; Wang, Z.J.; Ding, S.X.; Yin, H.; Xu, C.; Yang, R.J.; Zheng, Q.; et al. ChatGPT for shaping the future of dentistry: The potential of multi-modal large language model. Int. J. Oral. Sci. 2023, 15, 13. [Google Scholar] [CrossRef] [PubMed]
- Al-Moghrabi, D.; Abu Arqub, S.; Maroulakos, M.P.; Pandis, N.; Fleming, P.S. Can ChatGPT identify predatory biomedical and dental journals? A cross-sectional content analysis. J. Dent. 2024, 142, 104840. [Google Scholar] [CrossRef]
- Bagde, H.; Dhopte, A.; Alam, M.K.; Basri, R. A systematic review and meta-analysis on ChatGPT and its utilization in medical and dental research. Heliyon 2023, 9, E23050. [Google Scholar] [CrossRef]
- Demir, G.B.; Sukut, Y.; Duran, G.S.; Topsakal, K.G.; Gorgulu, S. Enhancing systematic reviews in orthodontics: A comparative examination of GPT-3.5 and GPT-4 for generating PICO-based queries with tailored prompts and configurations. Eur. J. Orthod. 2024, 46, cjae011. [Google Scholar] [CrossRef]
- Fatani, B. ChatGPT for Future Medical and Dental Research. Cureus 2023, 15, E37285. [Google Scholar] [CrossRef] [PubMed]
- George Pallivathukal, R.; Kyaw Soe, H.H.; Donald, P.M.; Samson, R.S.; Hj Ismail, A.R. ChatGPT for Academic Purposes: Survey Among Undergraduate Healthcare Students in Malaysia. Cureus 2024, 16, E53032. [Google Scholar] [CrossRef] [PubMed]
- Tiwari, A.; Kumar, A.; Jain, S.; Dhull, K.S.; Sajjanar, A.; Puthenkandathil, R.; Paiwal, K.; Singh, R. Implications of ChatGPT in Public Health Dentistry: A Systematic Review. Cureus 2023, 15, E40367. [Google Scholar] [CrossRef]
- Uribe, S.E.; Maldupa, I. Estimating the use of ChatGPT in dental research publications. J. Dent. 2024, 149, 105275. [Google Scholar] [CrossRef] [PubMed]
- Claman, D.; Sezgin, E. Artificial Intelligence in Dental Education: Opportunities and Challenges of Large Language Models and Multimodal Foundation Models. JMIR Med. Educ. 2024, 10, E52346. [Google Scholar] [CrossRef] [PubMed]
- Roganović, J. Familiarity with ChatGPT Features Modifies Expectations and Learning Outcomes of Dental Students. Int. Dent. J. 2024, 74, 1456–1462. [Google Scholar] [CrossRef]
- Albagieh, H.; Alzeer, Z.O.; Alasmari, O.N.; Alkadhi, A.A.; Naitah, A.N.; Almasaad, K.F.; Alshahrani, T.S.; Alshahrani, K.S.; Almahmoud, M.I. Comparing Artificial Intelligence and Senior Residents in Oral Lesion Diagnosis: A Comparative Study. Cureus 2024, 16, E51584. [Google Scholar] [CrossRef] [PubMed]
- Ali, K.; Barhom, N.; Tamimi, F.; Duggal, M. ChatGPT-A double-edged sword for healthcare education? Implications for assessments of dental students. Eur. J. Dent. Educ. 2024, 28, 206–211. [Google Scholar] [CrossRef]
- Aminoshariae, A.; Nosrat, A.; Nagendrababu, V.; Dianat, O.; Mohammad-Rahimi, H.; O’Keefe, A.; Setzer, F. Artificial Intelligence in Endodontic Education. J. Endod. 2024, 50, 562–578. [Google Scholar] [CrossRef]
- Giannakopoulos, K.; Kavadella, A.; Salim, A.A.; Stamatopoulos, V.; Kaklamanos, E.G. Evaluation of the Performance of Generative AI Large Language Models ChatGPT, Google Bard, and Microsoft Bing Chat in Supporting Evidence-Based Dentistry: Comparative Mixed Methods Study. J. Med. Internet Res. 2023, 25, 15. [Google Scholar] [CrossRef]
- Kunzle, P.; Paris, S. Performance of large language artificial intelligence models on solving restorative dentistry and endodontics student assessments. Clin. Oral Investig. 2024, 28, 575. [Google Scholar] [CrossRef]
- Li, C.; Zhang, J.; Abdul-Masih, J.; Zhang, S.; Yang, J. Performance of ChatGPT and Dental Students on Concepts of Periodontal Surgery. Eur. J. Dent. Educ. 2024. [Google Scholar] [CrossRef]
- Molena, K.F.; Macedo, A.P.; Ijaz, A.; Carvalho, F.K.; Gallo, M.J.D.; Wanderley Garcia de Paula, E.S.F.; de Rossi, A.; Mezzomo, L.A.; Mugayar, L.R.F.; Queiroz, A.M. Assessing the Accuracy, Completeness, and Reliability of Artificial Intelligence-Generated Responses in Dentistry: A Pilot Study Evaluating the ChatGPT Model. Cureus 2024, 16, E65658. [Google Scholar] [CrossRef] [PubMed]
- Praveen, G.; Poornima, U.L.S.; Akkaloori, A.; Bharathi, V. ChatGPT as a Tool for Oral Health Education: A Systematic Evaluation of ChatGPT Responses to Patients’ Oral Health-related Queries. J. Nat. Sci. Med. 2024, 7, 154–157. [Google Scholar] [CrossRef]
- Puladi, B.; Gsaxner, C.; Kleesiek, J.; Hölzle, F.; Röhrig, R.; Egger, J. The impact and opportunities of large language models like ChatGPT in oral and maxillofacial surgery: A narrative review. Int. J. Oral Maxillofac. Surg. 2024, 53, 78–88. [Google Scholar] [CrossRef]
- Sabri, H.; Saleh, M.H.A.; Hazrati, P.; Merchant, K.; Misch, J.; Kumar, P.S.; Wang, H.L.; Barootchi, S. Performance of three artificial intelligence (AI)-based large language models in standardized testing; implications for AI-assisted dental education. J. Periodontal Res. 2024. [Google Scholar] [CrossRef]
- Kavadella, A.; Silva, M.; Kaklamanos, E.G.; Stamatopoulos, V.; Giannakopoulos, K.; Kavadella, A. Evaluation of ChatGPT’s Real-Life Implementation in Undergraduate Dental Education: Mixed Methods Study. JMIR Med. Educ. 2024, 10, 14. [Google Scholar] [CrossRef]
- Saravia-Rojas, M.A.; Camarena-Fonseca, A.R.; Leon-Manco, R.; Geng-Vivanco, R. Artificial intelligence: ChatGPT as a disruptive didactic strategy in dental education. J. Dent. Educ. 2024, 88, 872–876. [Google Scholar] [CrossRef]
- Chau, R.C.W.; Thu, K.M.; Yu, O.Y.; Hsung, R.T.C.; Lo, E.C.M.; Lam, W.Y.H. Performance of Generative Artificial Intelligence in Dental Licensing Examinations. Int. Dent. J. 2024, 74, 616–621. [Google Scholar] [CrossRef] [PubMed]
- Dashti, M.; Ghasemi, S.; Ghadimi, N.; Hefzi, D.; Karimian, A.; Zare, N.; Fahimipour, A.; Khurshid, Z.; Chafjiri, M.M.; Ghaedsharaf, S. Performance of ChatGPT 3.5 and 4 on U.S. dental examinations: The INBDE, ADAT, and DAT. Imaging Sci. Dent. 2024, 54, 271–275. [Google Scholar] [CrossRef] [PubMed]
- Farajollahi, M.; Modaberi, A. Can ChatGPT pass the “Iranian Endodontics Specialist Board” exam? Iran. Endod. J. 2023, 18, 192. [Google Scholar]
- Fuchs, A.; Trachsel, T.; Weiger, R.; Eggmann, F. ChatGPT’s performance in dentistry and allergy-immunology assessments: A comparative study. Swiss Dent. J. 2023, 134, 1–17. [Google Scholar] [CrossRef]
- Jeong, H.; Han, S.S.; Yu, Y.; Kim, S.; Jeon, K.J. How well do large language model-based chatbots perform in oral and maxillofacial radiology? Dentomaxillofac Radiol. 2024, 53, 390–395. [Google Scholar] [CrossRef]
- Jin, H.K.; Lee, H.E.; Kim, E. Performance of ChatGPT-3.5 and GPT-4 in national licensing examinations for medicine, pharmacy, dentistry, and nursing: A systematic review and meta-analysis. BMC Med. Educ. 2024, 24, 1013. [Google Scholar] [CrossRef]
- Kim, W.; Kim, B.C.; Yeom, H.G. Performance of Large Language Models on the Korean Dental Licensing Examination: A Comparative Study. Int. Dent. J. 2024, 5, 5. [Google Scholar] [CrossRef]
- Morishita, M.; Fukuda, H.; Muraoka, K.; Nakamura, T.; Hayashi, M.; Yoshioka, I.; Ono, K.; Awano, S. Evaluating GPT-4V’s performance in the Japanese national dental examination: A challenge explored. J. Dent. Sci. 2024, 19, 1595–1600. [Google Scholar] [CrossRef]
- Ohta, K.; Ohta, S. The Performance of GPT-3.5, GPT-4, and Bard on the Japanese National Dentist Examination: A Comparison Study. Cureus 2023, 15, E50369. [Google Scholar] [CrossRef]
- Revilla-León, M.; Barmak, B.A.; Sailer, I.; Kois, J.C.; Att, W. Performance of an Artificial Intelligence-Based Chatbot (ChatGPT) Answering the European Certification in Implant Dentistry Exam. Int. J. Prosthodont. 2024, 37, 221–224. [Google Scholar] [CrossRef]
- Song, E.S.; Lee, S.P. Comparative Analysis of the Response Accuracies of Large Language Models in the Korean National Dental Hygienist Examination Across Korean and English Questions. Int. J. Dent. Hyg. 2024. [Google Scholar] [CrossRef]
- Takagi, S.; Koda, M.; Watari, T. The Performance of ChatGPT-4V in Interpreting Images and Tables in the Japanese Medical Licensing Exam. JMIR Med. Educ. 2024, 10, E54283. [Google Scholar] [CrossRef]
- Abdaljaleel, M.; Barakat, M.; Alsanafi, M.; Salim, N.A.; Abazid, H.; Malaeb, D.; Mohammed, A.H.; Hassan, B.A.R.; Wayyes, A.M.; Farhan, S.S.; et al. Author Correction: A multinational study on the factors influencing university students’ attitudes and usage of ChatGPT. Sci. Rep. 2024, 14, 8281. [Google Scholar] [CrossRef]
- Alnaim, N.; AlSanad, D.S.; Albelali, S.; Almulhem, M.; Almuhanna, A.F.; Attar, R.W.; Alsahli, M.; Albagmi, S.; Bakhshwain, A.M.; Almazrou, S.; et al. Effectiveness of ChatGPT in remote learning environments: An empirical study with medical students in Saudi Arabia. Nutr. Health 2024, 16, 2601060241273596. [Google Scholar] [CrossRef]
- Kurt Demirsoy, K.; Buyuk, S.K.; Bicer, T. How reliable is the artificial intelligence product large language model ChatGPT in orthodontics? Angle Orthod. 2024, 94, 602–607. [Google Scholar] [CrossRef]
- Rahad, K.; Martin, K.; Amugo, I.; Ferguson, S.; Curtis, A.; Davis, A.; Gangula, P.; Wang, Q. ChatGPT to Enhance Learning in Dental Education at a Historically Black Medical College. Dent. Res. Oral. Health 2024, 7, 8–14. [Google Scholar] [CrossRef]
- Sallam, M.; Salim, N.A.; Barakat, M.; Al-Tammemi, A.B. ChatGPT applications in medical, dental, pharmacy, and public health education: A descriptive study highlighting the advantages and limitations. Narra J. 2023, 3, E103. [Google Scholar] [CrossRef]
- Ahmed, W.M.; Azhari, A.A.; Alfaraj, A.; Alhamadani, A.; Zhang, M.; Lu, C.T. The Quality of AI-Generated Dental Caries Multiple Choice Questions: A Comparative Analysis of ChatGPT and Google Bard Language Models. Heliyon 2024, 10, e28198. [Google Scholar] [CrossRef]
- Brondani, M.; Alves, C.; Ribeiro, C.; Braga, M.M.; Garcia, R.C.M.; Ardenghi, T.; Pattanaporn, K. Artificial intelligence, ChatGPT, and dental education: Implications for reflective assignments and qualitative research. J. Dent. Educ. 2024. [Google Scholar] [CrossRef]
- de Vries, T.J.; Schoenmaker, T.; Peferoen, L.A.N.; Krom, B.P.; Bloemena, E. Design and evaluation of an immunology and pathology course that is tailored to today’s dentistry students. Front. Oral. Health 2024, 5, 1386904. [Google Scholar] [CrossRef]
- Quah, B.; Zheng, L.; Sng, T.J.H.; Yong, C.W.; Islam, I. Reliability of ChatGPT in automated essay scoring for dental undergraduate examinations. BMC Med. Educ. 2024, 24, 962. [Google Scholar] [CrossRef]
- Shamim, M.S.; Zaidi, S.J.A.; Rehman, A. The Revival of Essay-Type Questions in Medical Education: Harnessing Artificial Intelligence and Machine Learning. JCPSP J. Coll. Physicians Surg. Pak. 2024, 34, 595–599. [Google Scholar] [CrossRef]
- Shete, A.; Shete, M.; Chavan, M.; Channe, P.; Sapkal, R.; Buva, K. Evaluation of ChatGPT as a New Assessment Tool in Dental Education. J. Indian Acad. Oral. Med. Radiol. 2024, 36, 259–263. [Google Scholar]
- Uribe, S.E.; Maldupa, I.; Kavadella, A.; El Tantawi, M.; Chaurasia, A.; Fontana, M.; Marino, R.; Innes, N.; Schwendicke, F. Artificial intelligence chatbots and large language models in dental education: Worldwide survey of educators. Eur. J. Dent. Educ. 2024, 28, 865–876. [Google Scholar] [CrossRef]
- Hirosawa, T.; Kawamura, R.; Harada, Y.; Mizuta, K.; Tokumasu, K.; Kaji, Y.; Suzuki, T.; Shimizu, T. ChatGPT-Generated Differential Diagnosis Lists for Complex Case-Derived Clinical Vignettes: Diagnostic Accuracy Evaluation. JMIR Med. Inform. 2023, 11, E48808. [Google Scholar] [CrossRef]
- Alkaissi, H.; McFarlane, S.I. Artificial Hallucinations in ChatGPT: Implications in Scientific Writing. Cureus 2023, 15, E35179. [Google Scholar] [CrossRef]
- Biswas, S. ChatGPT and the Future of Medical Writing. Radiology 2023, 307, e223312. [Google Scholar] [CrossRef]
- Thorp, H.H. ChatGPT is fun, but not an author. Science 2023, 379, 313. [Google Scholar] [CrossRef]
- Haman, M.; Skolnik, M. Using ChatGPT to conduct a literature review. Acc. Res. 2024, 31, 1244–1246. [Google Scholar] [CrossRef]
- Suárez, A.; Jiménez, J.; de Pedro, M.L.; Andreu-Vázquez, C.; García, V.D.F.; Sánchez, M.G.; Freire, Y. Beyond the Scalpel: Assessing ChatGPT’s potential as an auxiliary intelligent virtual assistant in oral surgery. Comp. Struct. Biotechnol. J. 2024, 24, 46–52. [Google Scholar] [CrossRef]
- Stokel-Walker, C.; Van Noorden, R. What ChatGPT and generative AI mean for science. Nature 2023, 614, 214–216. [Google Scholar] [CrossRef]
- Mijwil, M.; Mohammad, A.; Ahmed Hussein, A. ChatGPT: Exploring the Role of Cybersecurity in the Protection of Medical Information. Mesopotamian J. CyberSecurity 2023, 2023, 18–21. [Google Scholar] [CrossRef]
- Hasal, M.; Nowaková, J.; Ahmed Saghair, K.; Abdulla, H.; Snášel, V.; Ogiela, L. Chatbots: Security, privacy, data protection, and social aspects. Concurr. Comput. Pract. Exp. 2021, 33, E6426. [Google Scholar] [CrossRef]
- Gerke, S.; Minssen, T.; Cohen, G. Chapter 12—Ethical and legal challenges of artificial intelligence-driven healthcare. In Artificial Intelligence in Healthcare; Bohr, A., Memarzadeh, K., Eds.; Academic Press: Cambridge, MA, USA, 2020; pp. 295–336. [Google Scholar]
- Anderson, N.; Belavy, D.L.; Perle, S.M.; Hendricks, S.; Hespanhol, L.; Verhagen, E.; Memon, A.R. AI did not write this manuscript, or did it? Can we trick the AI text detector into generated texts? The potential future of ChatGPT and AI in Sports & Exercise Medicine manuscript generation. BMJ Open Sport. Exerc. Med. 2023, 9, E001568. [Google Scholar] [CrossRef]
- ChatGPT Generative Pre-trained Transformer; Zhavoronkov, A. Rapamycin in the context of Pascal’s Wager: Generative pre-trained transformer perspective. Oncoscience 2022, 9, 82–84. [Google Scholar] [CrossRef] [PubMed]
- O’Connor, S. Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse? Nurse Educ. Pract. 2023, 66, 103537. [Google Scholar] [CrossRef] [PubMed]
- Stokel-Walker, C. ChatGPT listed as author on research papers: Many scientists disapprove. Nature 2023, 613, 620–621. [Google Scholar] [CrossRef] [PubMed]
- Gomez-Cabello, C.A.; Borna, S.; Pressman, S.M.; Haider, S.A.; Forte, A.J. Large Language Models for Intraoperative Decision Support in Plastic Surgery: A Comparison between ChatGPT-4 and Gemini. Medicina 2024, 60, 957. [Google Scholar] [CrossRef]
- Rossettini, G.; Rodeghiero, L.; Corradi, F.; Cook, C.; Pillastrini, P.; Turolla, A.; Castellini, G.; Chiappinotto, S.; Gianola, S.; Palese, A. Comparative accuracy of ChatGPT-4, Microsoft Copilot and Google Gemini in the Italian entrance test for healthcare sciences degrees: A cross-sectional study. BMC Med. Educ. 2024, 24, 694. [Google Scholar] [CrossRef]
Topic | Subtopic | Authors (Year) [Reference] | Conclusions |
---|---|---|---|
Administrative applications | | Eggmann, F., et al. (2023) [25] | ChatGPT can streamline administrative workflows and aid in clinical decision support, given comprehensive, unbiased data. However, it raises privacy and cybersecurity concerns and lacks reliability and up-to-date knowledge compared to traditional search engines, especially for health queries. |
Clinical applications | Diagnostics and radiology | Baig, Z., et al. (2024) [26] | AI programs’ treatment recommendations generally matched the current literature with up to 75% agreement, though data sources were often missing, except for Bard. Both GPT-4 and clinician reviews suggested procedures potentially leading to overtreatment. GPT-4 had the highest overall accuracy. |
Clinical applications | Diagnostics and radiology | Islam, A., et al. (2024) [27] | The proficiency of ChatGPT in handling intricate reasoning queries within pathology demonstrated a noteworthy level of relational accuracy. Consequently, its text output created coherent links between elements, producing meaningful responses. This suggests that scholars or students can rely on this program to address reasoning-based inquiries. |
Clinical applications | Diagnostics and radiology | Mohammad-Rahimi, H., et al. (2024) [28] | GPT-4 excelled in providing high-quality information on controversial dental topics. However, developers should incorporate scientific citation authenticators to validate citations due to the high incidence of fabricated references. |
Clinical applications | Diagnostics and radiology | Russe, M.F.D.M., et al. (2024) [29] | A content-aware chatbot using GPT-4 reliably provided recommendations according to current consensus guidelines. The responses were deemed trustworthy and transparent and therefore facilitate the integration of artificial intelligence into clinical decision-making. |
Clinical applications | Diagnostics and radiology | Sahu, P.K., et al. (2023) [30] | A content-aware chatbot using GPT-4 reliably followed current guidelines, providing trustworthy and transparent recommendations. This supports AI’s integration into clinical decision-making. |
Clinical applications | Diagnostics and radiology | Shikino, K., et al. (2024) [31] | ChatGPT-4 demonstrates potential as an auxiliary tool for diagnosing typical and mildly atypical presentations of common diseases. However, its performance declines with greater atypicality. |
Clinical applications | Diagnostics and radiology | Silva, T.P., et al. (2024) [32] | The GPT program’s performance in describing and providing differential diagnoses for radiolucent lesions in panoramic radiographs is variable and, at this stage, of limited use for clinical application. |
Clinical applications | Endodontics | Mohammad-Rahimi, H., et al. (2024) [33] | GPT-3.5 provided more credible information on topics related to endodontics compared to Google Bard and Bing. |
Clinical applications | Endodontics | Ourang, S.A., et al. (2024) [34] | The paper reviews AI concepts in endodontics, focusing on machine learning for diagnosis and computer vision for dental image interpretation. It emphasizes the need for rigorous validation and ethical transparency. AI has significant potential to enhance endodontic research, education, and patient care with interdisciplinary collaboration. |
Clinical applications | Endodontics | Qutieshat, A., et al. (2024) [35] | The study reveals AI’s capability to outperform dental students in diagnostic accuracy regarding endodontic assessments. |
Clinical applications | Endodontics | Snigdha, N.T., et al. (2024) [36] | The results showed no statistically significant differences between the two versions, indicating comparable response accuracy. |
Clinical applications | Endodontics | Suarez, A., et al. (2024) [37] | The answers generated by ChatGPT showed high consistency (85.44%). ChatGPT achieved an average accuracy of 57.33%. However, significant differences in accuracy were observed based on question difficulty, with lower accuracy for easier questions. |
Clinical applications | Oral and maxillofacial surgery | Acar, A.H. (2024) [38] | ChatGPT excels in answering oral surgery-related questions with superior accuracy, completeness, and clarity, making it a valuable tool for detailed information. |
Clinical applications | Oral and maxillofacial surgery | Alten, A., et al. (2023) [39] | ChatGPT-4 can provide valuable information and guidance during orthognathic surgery consultations, but it cannot replace direct medical consultation. |
Clinical applications | Oral and maxillofacial surgery | Balel, Y. (2024) [40] | Scholar GPT excelled in oral and maxillofacial surgery questions, providing more consistent and high-quality responses compared to ChatGPT. Models using academic databases offer more accurate and reliable information. |
Clinical applications | Oral and maxillofacial surgery | Cai, Y., et al. (2024) [41] | ChatGPT/GPT-4 excelled in medical knowledge accuracy and recommendation rationality while also accurately sensing and providing reassurance about patient emotions. It can be used for patient follow-up after oral surgeries but should be supervised by healthcare professionals to consider current limitations. |
Clinical applications | Oral and maxillofacial surgery | Çoban, E., and B. Altay (2024) [42] | The AI platform can educate patients about dental implantology and treatment procedures, but there is concern about potential bias toward specific dental implant brands. |
Clinical applications | Oral and maxillofacial surgery | Isik, G., et al. [43] | The study outcomes emphasized high accuracy and quality in ChatGPT Plus’s responses except for the questions requiring a detailed response or a comment. |
Clinical applications | Oral and maxillofacial surgery | Jacobs, T., et al. (2024) [44] | AI was able to provide mostly accurate responses, and content was closely aligned with AAOMS guidelines. However, responses were too complex for the average third molar extraction patient and were deficient in citations and references. |
Clinical applications | Oral and maxillofacial surgery | Suarez, A., et al. (2024) [37] | Final grade accuracy was found to be 71.7%, and consistency of the experts’ grading across iterations ranged from moderate to almost perfect. |
Clinical applications | Orthodontics | Abu Arqub, S., et al. (2024) [45] | The accuracy of ChatGPT’s responses was generally insufficient, often missing relevant literature citations. Additionally, its capability to provide up-to-date and precise information was limited. |
Clinical applications | Orthodontics | Daraqel, B., et al. (2024) [46] | Both ChatGPT- and Google Bard-generated responses were rated with a high level of accuracy and completeness for the general orthodontic questions posed. However, acquiring answers was generally faster using the Google Bard model. |
Clinical applications | Orthodontics | Dursun, D., and R. Bilici Geçer (2024) [47] | All chatbot models provided generally accurate, moderately reliable, and moderate- to good-quality answers to questions about clear aligners. |
Clinical applications | Orthodontics | Lima, N.G.M., et al. (2024) [48] | AI improves patient communication, diagnosis support, data digitization, and treatment assistance. ChatGPT aids in care, billing, and health information access but may provide nonsensical responses and poses privacy risks. |
Clinical applications | Orthodontics | Makrygiannakis, M.A., et al. (2024) [49] | LLMs hold promise for evidence-based orthodontics, but their limitations can lead to incorrect decisions if not used carefully. They cannot replace orthodontists’ critical thinking and expertise. |
Clinical applications | Orthodontics | Surovková, J., et al. (2023) [50] | The paper introduces an AI-powered orthodontic workflow, highlighting new responsibilities for orthodontic nurses and assessing its use over three years with Dental Monitoring. It concludes that AI enhances dental practice with precise, personalized treatment but raises new ethical and legal issues. |
Clinical applications | Patients’ communication and self-education | Batool, I., et al. (2024) [51] | The embedded GPT model showed better results than ChatGPT in providing guidance on postoperative dental care, underscoring the benefits of embedding and prompt engineering. |
Clinical applications | Patients’ communication and self-education | Gugnani, N., et al. (2024) [52] | Overall, the responses were found to be complete and logical and in clear language, with only some inadequacies being reported in a few of the answers. |
Clinical applications | Patients’ communication and self-education | Hassona, Y. (2024) [53] | ChatGPT is an attractive and potentially useful resource for informing patients about early detection of oral cancer. Nevertheless, concerns do exist about readability and actionability of the offered information. |
Clinical applications | Patients’ communication and self-education | Incerti Parenti, S., et al. (2024) [54] | The study suggests that while ChatGPT-3.5 can be a valuable tool for patient education, efforts to improve readability are necessary to ensure accessibility and utility for all patients. |
Clinical applications | Patients’ communication and self-education | Vassis, S., et al. (2024) [55] | Although patients generally prefer AI-generated information regarding the side effects of orthodontic treatment, the tested prompts fall short of providing thoroughly satisfactory and high-quality education to patients. |
Clinical applications | Patients’ communication and self-education | Yurdakurban, E., et al. (2024) [56] | AI-based chatbots with a variety of features usually provided high-quality and reliable answers to questions, albeit with difficult readability. |
Clinical applications | Pediatrics | Rokhshad, R., et al. (2024) [57] | In the pilot study, chatbots showed lower accuracy than dentists. Chatbots may not be recommended yet for clinical pediatric dentistry. |
Clinical applications | Periodontology | Alan, R., and B.M. Alan (2023) [58] | ChatGPT consistently offered accurate guidance in most responses. |
Clinical applications | Periodontology | Babayiğit, O., et al. (2023) [59] | While ChatGPT may not offer absolute precision without expert supervision, it can still serve as a valuable resource for periodontologists, with some risk of inaccuracies. |
Clinical applications | Periodontology | Danesh, A., et al. (2024) [60] | While ChatGPT 4 showed a higher proficiency compared to ChatGPT 3.5, both chatbot models leave considerable room for misinformation with their responses relating to periodontology. |
Clinical applications | Periodontology | Tastan Eroglu, Z., et al. (2024) [61] | ChatGPT’s present performance in the classification of periodontitis was reasonable. However, additional improvements are expected to increase its effectiveness and broaden its range of functionalities. |
Clinical applications | Prosthodontics | Freire, Y., et al. (2024) [62] | The results show that currently, ChatGPT has limited ability to generate answers related to RDPs and tooth-supported FDPs. |
Clinical applications | Special needs | Rokhshad, R., et al. (2024) [63] | Chatbots exhibit acceptable consistency in responding to questions related to special needs dentistry and better accuracy in responding to true/false questions than diagnostic questions. |
Clinical applications | Traumatology | Khan, M.K. (2024) [64] | AI and its subsets have been applied in a very limited number of fields of dental traumatology. However, the findings from the literature were found favorable and promising. |
Clinical applications | Traumatology | Ozden, I., et al. (2024) [65] | Although ChatGPT and Google Bard are potential knowledge resources, their consistency and accuracy in responding to dental trauma queries remain limited. |
Comprehensive | | Alhaidry, H.M., et al. (2023) [66] | AI has greatly advanced dentistry, particularly in research. ChatGPT can transform dental and healthcare systems, but caution and policies are needed to mitigate hazards, and continuous monitoring is recommended due to ethical concerns and improper reference generation. |
Comprehensive | | de Souza, L.L., et al. (2024) [67] | Integrating ChatGPT in dentistry can be highly beneficial, but it is crucial to address ethical considerations, accuracy, and privacy concerns. |
Comprehensive | | Huang, H.Y., et al. (2023) [68] | While LLMs offer significant potential benefits, challenges such as data privacy, data quality, and model bias need further study. |
Dental research | | Al-Moghrabi, D., et al. (2024) [69] | ChatGPT may effectively distinguish between predatory and legitimate journals, with accuracy rates of 92.5% and 71%, respectively. |
Dental research | | Bagde, H., et al. (2023) [70] | ChatGPT can provide appropriate solutions to questions in the medical and dental fields, but researchers and clinicians should assess its responses cautiously because they might not always be dependable. |
Dental research | | Demir, G.B., et al. (2024) [71] | Both ChatGPT 3.5 and 4 can be pivotal tools for generating PICO-driven queries in orthodontics when optimally configured. However, the precision required in medical research necessitates a judicious and critical evaluation of LLM-generated outputs, advocating for a circumspect integration into scientific investigations. |
Dental research | | Fatani, B. (2023) [72] | ChatGPT can help find and summarize academic papers, generate drafts, and translate content, streamlining and simplifying academic writing. However, its use in scientific writing should be regulated and monitored due to ethical considerations. |
Dental research | | George Pallivathukal, R., et al. (2024) [73] | The study aids in creating guidelines for implementing GAI chatbots in healthcare education, emphasizing benefits and risks, and informing AI developers and educators about ChatGPT’s potential in academia. |
Dental research | | Tiwari, A., et al. (2023) [74] | Studies show ChatGPT helps in scientific and dental research but should not be solely relied on due to ethical concerns and the need for review. |
Dental research | | Uribe, S.E., and I. Maldupa (2024) [75] | GenAI can potentially increase productivity and inclusivity, but it raises concerns such as bias, inaccuracy, and distortion of academic incentives. Therefore, the findings support the need for clear AI guidelines and standards for academic publishing. |
Educational enhancements | | Claman, D., and E. Sezgin (2024) [76] | LLMs can enhance dental education by offering personalized feedback, case scenarios, and educational content. However, they also present challenges such as bias, inaccuracies, privacy issues, and the risk of overreliance. |
Educational enhancements | | Roganović, J. (2024) [77] | A majority of students in the cohort were reluctant to use ChatGPT. Furthermore, familiarity with ChatGPT’s features (through reading about them) appears to alter students’ expectations and enhance their learning performance, suggesting an AI description-related cognitive bias. |
Educational enhancements | General expertise | Albagieh, H., et al. (2024) [78] | No significant difference was found in response scores. However, residents showed low agreement, while LLMs showed high agreement. Dentists should leverage AI for diagnosis and treatment. |
Educational enhancements | General expertise | Ali, K., et al. (2024) [79] | Generative AI can transform virtual learning. Healthcare educators should adapt to its benefits for learners while mitigating dishonest use. |
Educational enhancements | General expertise | Aminoshariae, A., et al. (2024) [80] | AI in endodontic education will support clinical and didactic teaching through individualized feedback; enhanced, augmented, and virtually generated training aids; automated detection and diagnosis; treatment planning and decision support; and AI-based student progress evaluation and personalized education. |
Educational enhancements | General expertise | Giannakopoulos, K., et al. (2023) [81] | Although LLMs show promise in evidence-based dentistry, their limitations can lead to harmful decisions if not used carefully. They should complement, not replace, a dentist’s critical thinking and expertise. |
Educational enhancements | General expertise | Kunzle, P., and S. Paris (2024) [82] | Overall, there are large performance differences among LLMAs. Only the ChatGPT-4 models achieved a success ratio that could be used with caution to support the dental academic curriculum. |
Educational enhancements | General expertise | Li, C., et al. (2024) [83] | For periodontal surgery exams, ChatGPT’s accuracy was not as high as students’, but it shows potential in assisting with the curriculum and helping with clinical letters and reviews. |
Educational enhancements | General expertise | Molena, K.F., et al. (2024) [84] | ChatGPT initially demonstrated good accuracy and completeness, which was further improved with machine learning (ML) over time. However, some inaccurate answers and references persisted. |
Educational enhancements | General expertise | Praveen, G., et al. (2024) [85] | ChatGPT generated clear, scientifically accurate and relevant, comprehensive, and consistent responses to diverse oral health-related queries despite some significant limitations. |
Educational enhancements | General expertise | Puladi, B., et al. (2024) [86] | Classic OMS diseases are underrepresented. The current literature related to LLMs in OMS has a limited evidence level. |
Educational enhancements | General expertise | Sabri, H., et al. (2024) [87] | ChatGPT-4 performed well on AAP in-service exam questions, outperforming Gemini and ChatGPT-3.5. While it shows potential as an educational tool in periodontics and oral implantology, limitations like processing image-based inquiries, inconsistent responses, and not reaching absolute accuracy must be considered. |
Educational enhancements | GPT vs. literature research | Kavadella, A., et al. (2024) [88] | Students using ChatGPT for their learning assignments performed significantly better in the knowledge examination than their fellow students who used the literature research methodology. |
Educational enhancements | GPT vs. literature research | Saravia-Rojas, M.A., et al. (2024) [89] | Dental students highly valued the experience of using ChatGPT for academic tasks. Nonetheless, the traditional method of searching for scientific articles yielded higher scores. |
Educational enhancements | Licensing exam/mastery tests | Chau, R.C.W., et al. (2024) [90] | The newer version of GenAI has shown good proficiency in answering multiple-choice questions from dental licensing examinations. |
Educational enhancements | Licensing exam/mastery tests | Dashti, M., et al. (2024) [91] | Both ChatGPT 3.5 and 4 effectively handled knowledge-based, case history, and comprehension questions, with ChatGPT 4 being more reliable and surpassing the performance of 3.5. ChatGPT 4’s perfect score in comprehension questions underscores its trainability in specific subjects. However, both versions exhibited weaker performance in mathematical analysis. |
Educational enhancements | Licensing exam/mastery tests | Farajollahi, M., and A. Modaberi (2023) [92] | Of the 100 questions posed to ChatGPT, it scored 40. |
Educational enhancements | Licensing exam/mastery tests | Fuchs, A., et al. (2023) [93] | The performance disparity between SFLEDM and EEAACI assessments highlights ChatGPT’s varying proficiency due to differences in training data. Priming can help, but healthcare use must be cautious due to inherent risks. |
Educational enhancements | Licensing exam/mastery tests | Jeong, H., et al. (2024) [94] | The performance of chatbots in oral and maxillofacial radiology was unsatisfactory. Further training using specific, relevant data derived solely from reliable sources is required. |
Educational enhancements | Licensing exam/mastery tests | Jin, H.K., et al. (2024) [95] | The accuracy levels ranged from 36 to 77% for ChatGPT-3.5 and from 64.4 to 100% for GPT-4. Additionally, in the context of health licensing examinations, the ChatGPT models exhibited greater proficiency in the following order: pharmacy, medicine, dentistry, and nursing. |
Educational enhancements | Licensing exam/mastery tests | Kim, W., et al. (2024) [96] | Using the KDLE as a benchmark, the study demonstrates that although LLMs have not yet reached human-level performance in overall scores, both Claude3-Opus and ChatGPT-4 exceed the cut-off scores and perform exceptionally well in specific subjects. |
Educational enhancements | Licensing exam/mastery tests | Morishita, M., et al. (2024) [97] | While innovative, ChatGPT-4V’s image recognition feature exhibited limitations, especially in handling image-intensive and complex clinical practical questions, and is not yet fully suitable as an educational support tool for dental students at its current stage. |
Educational enhancements | Licensing exam/mastery tests | Ohta, K., and S. Ohta (2023) [98] | GPT-4 achieved the highest overall score in the JNDE, followed by Bard and GPT-3.5. However, only Bard surpassed the passing score for essential questions. |
Educational enhancements | Licensing exam/mastery tests | Revilla-León, M., et al. (2024) [99] | The AI-based chatbot tested not only passed the exam but performed better than licensed dentists. |
Educational enhancements | Licensing exam/mastery tests | Song, E.S., and S.P. Lee (2024) [100] | GPT-4 shows great potential for medical education and standardized testing, especially in English. However, performance varies across subjects and languages, highlighting the need for diverse and localized training data to improve effectiveness. |
Educational enhancements | Licensing exam/mastery tests | Takagi, S., et al. (2024) [101] | ChatGPT-4V successfully passed the 117th JMLE, demonstrating proficiency in handling questions, including image- and table-based questions. |
Educational enhancements | The students’ perspective | Abdaljaleel, M., et al. (2024) [102] | The study validated “TAME-ChatGPT” as a useful tool for assessing ChatGPT adoption among university students. |
Educational enhancements | The students’ perspective | Alnaim, N., et al. (2024) [103] | Despite challenges and varied perceptions based on gender and education level, the overwhelmingly positive attitudes toward ChatGPT underscore its potential as a valuable tool in medical education. |
Educational enhancements | The students’ perspective | Kurt Demirsoy, K., et al. (2024) [104] | ChatGPT has significant potential in terms of usability for patient information and education in the field of orthodontics if it is developed and necessary updates are made. |
Educational enhancements | The students’ perspective | Rahad, K., et al. (2024) [105] | The results showed that ChatGPT can assist in dental essay writing and generate relevant content for dental students, in addition to other benefits. |
Educational enhancements | The students’ perspective | Sallam, M., et al. (2023) [106] | ChatGPT has potential in medical, dental, pharmacy, and public health education by improving personalized learning, clinical reasoning, and the understanding of complex concepts. However, concerns include data privacy, biased and inaccurate content, and risks to critical thinking and communication skills, highlighting the need for proper guidelines. |
Educational enhancements | The teachers’ perspective | Ahmed, W.M., et al. (2024) [107] | ChatGPT and Bard can generate numerous questions about dental caries, especially at the knowledge and comprehension levels, making them useful for large-scale exams. However, educators need to review and adapt these questions to ensure they meet their learning objectives. |
Educational enhancements | The teachers’ perspective | Brondani, M., et al. (2024) [108] | Instructors could usually tell if reflections were generated by ChatGPT or students. However, the thematic analysis content from ChatGPT matched that of qualitative researchers. |
Educational enhancements | The teachers’ perspective | de Vries, T.J., et al. (2024) [109] | These methods proved to be appropriate and logical choices for reaching the learning goals of the course. |
Educational enhancements | The teachers’ perspective | Quah, B., et al. (2024) [110] | The study shows the potential of ChatGPT for essay marking. However, an appropriate rubric design is essential for optimal reliability. |
Educational enhancements | The teachers’ perspective | Shamim, M.S., et al. (2024) [111] | AI and ML technologies can potentially supplement human grading in the assessment of essays. |
Educational enhancements | The teachers’ perspective | Shete, A., et al. (2024) [112] | Instead of treating artificial intelligence as a threat, dental educators need to adapt teaching and assessments in dental education for the benefit of learners while mitigating its dishonest use. |
Educational enhancements | The teachers’ perspective | Uribe, S.E., et al. (2024) [113] | A positive yet cautious view towards AI chatbot integration in dental curricula is prevalent, underscoring the need for clear implementation guidelines. |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Puleio, F.; Lo Giudice, G.; Bellocchio, A.M.; Boschetti, C.E.; Lo Giudice, R. Clinical, Research, and Educational Applications of ChatGPT in Dentistry: A Narrative Review. Appl. Sci. 2024, 14, 10802. https://doi.org/10.3390/app142310802