Topic Editors

Headingley Campus, Leeds Beckett University, Leeds LS6 3QS, UK
Department of Sciences and Methods for Engineering, University of Modena and Reggio Emilia, 42100 Reggio Emilia, Italy
School of Computer Science and Engineering, South China University of Technology, Guangzhou 510641, China
Department of Computer Science & Engineering (DISI), University of Bologna, 40136 Bologna, Italy
Artificial Intelligence in Biomedical Imaging Lab (AIBI Lab), Institute of Innovative Research, Tokyo Institute of Technology, Yokohama 226-8503, Japan
Department of English Language & Applied Linguistics, University of Reading, Reading RG6 6AH, UK

AI Chatbots: Threat or Opportunity?

Abstract submission deadline
closed (29 February 2024)
Manuscript submission deadline
closed (30 April 2024)

Topic Information

Dear Colleagues,

ChatGPT, built on GPT-3.5, was launched by OpenAI in November 2022. On their website it is described as ‘a language model … designed to respond to text-based queries and generate natural language responses. It is part of the broader field of artificial intelligence known as natural language processing (NLP), which seeks to teach computers to understand and interpret human language’. More significantly, it is stated that ‘One of the main applications of ChatGPT is in chatbots, where it can be used to provide automated customer service, answer FAQs, or even engage in more free-flowing conversations with users. However, it can also be used in other NLP applications such as text summarization, language translation, and content creation. Overall, ChatGPT represents a significant advancement in the field of NLP and has the potential to revolutionize the way we interact with computers and digital systems’.

These claims, although couched in relatively innocuous terms, have struck many as potentially ominous, with far-reaching ramifications. Teachers, already facing the issues of cut-and-paste-off-the-internet plagiarism, ghost-writing, and contract cheating, foresaw that AI chatbots such as ChatGPT, Bard, and Bing would offer students new and more powerful opportunities to produce work for assessment. For some this was not a problem, but for others it appeared to be the beginning of the end for anything other than in-person assessments, including hand-written exams and vivas.

People began to experiment with ChatGPT, using it to produce computer code, speeches, and academic papers. In some cases, users expressed astonishment at the high quality of the outputs; others were far more skeptical. In the meantime, OpenAI released GPT-4, which is now incorporated into ChatGPT Plus. GPT-5 is expected to be available later this year, and autonomous AI agents such as Auto-GPT and Agent-GPT are already available. These developments, and others in the general area of AI, have prompted calls for a pause, although many doubt that such calls will have any effect.

The issues raised by AI chatbots such as ChatGPT bear upon a range of practices and disciplines, as well as many facets of our everyday lives and interactions. Hence, this invitation to submit work comes from editors associated with a wide variety of MDPI journals, encompassing a range of inter-related perspectives on the topic. We are keen to receive submissions relating to the technologies behind the advances in these AI chatbots, and also to the wider implications of their use in social, technical, and educational contexts.

We are open to all manner of submissions, but to give some indication of the aspects of key interest we list the following questions and issues.

  • The development of AI chatbots has been claimed to herald a new era, offering significant advances in the incorporation of technology into people’s lives and interactions. Is this likely to be the case, and if so, where are these impacts going to be the most pervasive and effective?
  • Is it possible to strike a balance regarding the impact of these technologies so that any potential harms are minimized, while potential benefits are maximized and shared?
  • How should educators respond to the challenge of AI chatbots? Should they welcome this technology and re-orient teaching and learning strategies around it, or seek to safeguard traditional practices from what is seen as a major threat?
  • There is a growing body of evidence that the design and implementation of many AI applications, that is, their underlying algorithms, incorporate bias and prejudice. How can this be countered and corrected?
  • How can publishers and editors recognize the difference between manuscripts that have been written by a chatbot and "genuine" articles written by researchers? Is training to recognize the difference required? If so, who could offer such training?
  • How can the academic world and the wider public be protected against the creation of "alternative facts" by AI? Should researchers be required to submit their data with manuscripts to show that the data are authentic? What is the role of ethics committees in protecting the integrity of research?
  • Can the technology underlying AI chatbots be enhanced to guard against misuse and vulnerabilities?
  • Novel models and algorithms for using AI chatbots in cognitive computing;
  • Techniques for training and optimizing AI chatbots for cognitive computing tasks;
  • Evaluation methods for assessing the performance of AI chatbot-based cognitive computing systems;
  • Case studies and experiences in developing and deploying AI chatbot-based cognitive computing systems in real-world scenarios;
  • Social and ethical issues related to the use of AI chatbots for cognitive computing.

The potential impact of these AI chatbots on the topics covered by journals is twofold: on the one hand, there is a need for research on the technological bases underlying AI chatbots, including the algorithmic aspects behind the AI; on the other hand, there are many aspects related to the support and assistance that these AI chatbots can provide to algorithm designers, code developers and others operating in the many fields and practices encompassed by this collection of journals.

Prof. Dr. Antony Bryant, Editor-in-Chief of Informatics
Prof. Dr. Roberto Montemanni, Section Editor-in-Chief of Algorithms
Prof. Dr. Min Chen, Editor-in-Chief of BDCC
Prof. Dr. Paolo Bellavista, Section Editor-in-Chief of Future Internet
Prof. Dr. Kenji Suzuki, Editor-in-Chief of AI
Prof. Dr. Jeanine Treffers-Daller, Editor-in-Chief of Languages

Keywords

  • ChatGPT
  • OpenAI
  • AI chatbots
  • natural language processing
 

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched | First Decision (median) | APC
AI | 3.1 | 7.2 | 2020 | 17.6 days | CHF 1600
Algorithms | 1.8 | 4.1 | 2008 | 15 days | CHF 1600
Big Data and Cognitive Computing (BDCC) | 3.7 | 7.1 | 2017 | 18 days | CHF 1800
Future Internet | 2.8 | 7.1 | 2009 | 13.1 days | CHF 1600
Informatics | 3.4 | 6.6 | 2014 | 33 days | CHF 1800
Information | 2.4 | 6.9 | 2010 | 14.9 days | CHF 1600
Languages | 0.9 | 1.4 | 2016 | 49.6 days | CHF 1400
Publications | 4.6 | 6.5 | 2013 | 35.8 days | CHF 1400

Preprints.org is a multidisciplinary platform providing preprint services, dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your work with a time-stamped preprint record;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (13 papers)

18 pages, 2039 KiB  
Article
AI Language Models: An Opportunity to Enhance Language Learning
by Yan Cong
Informatics 2024, 11(3), 49; https://doi.org/10.3390/informatics11030049 - 19 Jul 2024
Abstract
AI language models are increasingly transforming language research in various ways. How can language educators and researchers respond to the challenge posed by these AI models? Specifically, how can we embrace this technology to inform and enhance second language learning and teaching? In order to quantitatively characterize and index second language writing, the current work proposes the use of similarities derived from contextualized meaning representations in AI language models. The computational analysis in this work is hypothesis-driven. The current work predicts how similarities should be distributed in a second language learning setting. The results suggest that similarity metrics are informative of writing proficiency assessment and interlanguage development. Statistically significant effects were found across multiple AI models. Most of the metrics could distinguish language learners’ proficiency levels. Significant correlations were also found between similarity metrics and learners’ writing test scores provided by human experts in the domain. However, not all such effects were strong or interpretable. Several results could not be consistently explained under the proposed second language learning hypotheses. Overall, the current investigation indicates that with careful configuration and systematic metrics design, AI language models can be promising tools in advancing language education. Full article
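The core metric in the abstract above can be illustrated with a minimal sketch (not the paper's actual pipeline): cosine similarity between embedding vectors. The toy three-dimensional vectors below are hypothetical stand-ins for the high-dimensional contextualized representations an AI language model would produce.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy stand-ins for contextualized sentence embeddings; in practice these
# would come from a language model via a sentence encoder.
reference_embedding = [0.9, 0.1, 0.3]   # e.g., an expert-written sentence
learner_embedding   = [0.8, 0.2, 0.4]   # e.g., a learner's sentence

score = cosine_similarity(reference_embedding, learner_embedding)
print(round(score, 3))  # → 0.984
```

The paper's hypothesis-driven design then asks how such similarity scores should be distributed across proficiency levels; the code above only shows the metric itself.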
(This article belongs to the Topic AI Chatbots: Threat or Opportunity?)

22 pages, 353 KiB  
Article
GPTs or Grim Position Threats? The Potential Impacts of Large Language Models on Non-Managerial Jobs and Certifications in Cybersecurity
by Raza Nowrozy
Informatics 2024, 11(3), 45; https://doi.org/10.3390/informatics11030045 - 11 Jul 2024
Abstract
ChatGPT, a Large Language Model (LLM) utilizing Natural Language Processing (NLP), has caused concerns about its impact on job sectors, including cybersecurity. This study assesses ChatGPT’s impacts in non-managerial cybersecurity roles using the NICE Framework and Technological Displacement theory. It also explores its potential to pass top cybersecurity certification exams. Findings reveal ChatGPT’s promise to streamline some jobs, especially those requiring memorization. Moreover, this paper highlights ChatGPT’s challenges and limitations, such as ethical implications, LLM limitations, and Artificial Intelligence (AI) security. The study suggests that LLMs like ChatGPT could transform the cybersecurity landscape, causing job losses, skill obsolescence, labor market shifts, and mixed socioeconomic impacts. A shift in focus from memorization to critical thinking, and collaboration between LLM developers and cybersecurity professionals, is recommended. Full article

29 pages, 860 KiB  
Article
ChatGPT Code Detection: Techniques for Uncovering the Source of Code
by Marc Oedingen, Raphael C. Engelhardt, Robin Denz, Maximilian Hammer and Wolfgang Konen
AI 2024, 5(3), 1066-1094; https://doi.org/10.3390/ai5030053 - 2 Jul 2024
Abstract
In recent times, large language models (LLMs) have made significant strides in generating computer code, blurring the lines between code created by humans and code produced by artificial intelligence (AI). As these technologies evolve rapidly, it is crucial to explore how they influence code generation, especially given the risk of misuse in areas such as higher education. The present paper explores this issue by using advanced classification techniques to differentiate between code written by humans and code generated by ChatGPT, a type of LLM. We employ a new approach that combines powerful embedding features (black-box) with supervised learning algorithms including Deep Neural Networks, Random Forests, and Extreme Gradient Boosting to achieve this differentiation with an impressive accuracy of 98%. For the successful combinations, we also examine their model calibration, showing that some of the models are extremely well calibrated. Additionally, we present white-box features and an interpretable Bayes classifier to elucidate critical differences between the code sources, enhancing the explainability and transparency of our approach. Both approaches work well, but provide at most 85–88% accuracy. Tests on a small sample of untrained humans suggest that humans do not solve the task much better than random guessing. This study is crucial in understanding and mitigating the potential risks associated with using AI in code generation, particularly in the context of higher education, software development, and competitive programming. Full article
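The abstract above combines embedding features with supervised classifiers. As a hedged illustration of the general idea only (not the authors' Deep Neural Network, Random Forest, or XGBoost setup), the sketch below classifies a code sample by the nearest class centroid in a toy embedding space; all vectors and labels are hypothetical.

```python
import math

def centroid(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def euclidean(u, v):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Toy 3-d "embedding" vectors standing in for real code embeddings.
human_examples = [[0.1, 0.9, 0.2], [0.2, 0.8, 0.1], [0.0, 1.0, 0.3]]
ai_examples    = [[0.9, 0.1, 0.8], [0.8, 0.2, 0.9], [1.0, 0.0, 0.7]]

centroids = {
    "human": centroid(human_examples),
    "ai": centroid(ai_examples),
}

def classify(embedding):
    """Assign the label whose class centroid is nearest."""
    return min(centroids, key=lambda label: euclidean(centroids[label], embedding))

print(classify([0.85, 0.15, 0.8]))  # → ai
```

The reported 98% accuracy comes from far richer embeddings and stronger classifiers; the nearest-centroid rule here merely makes the "separate the two sources in embedding space" intuition concrete.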

14 pages, 611 KiB  
Article
Analysing the Impact of Generative AI in Arts Education: A Cross-Disciplinary Perspective of Educators and Students in Higher Education
by Sara Sáez-Velasco, Mario Alaguero-Rodríguez, Vanesa Delgado-Benito and Sonia Rodríguez-Cano
Informatics 2024, 11(2), 37; https://doi.org/10.3390/informatics11020037 - 3 Jun 2024
Abstract
Generative AI refers specifically to a class of Artificial Intelligence models that use existing data to create new content that reflects the underlying patterns of real-world data. This contribution presents a study that aims to show what the current perception of arts educators and students of arts education is with regard to generative Artificial Intelligence. It is a qualitative research study using focus groups as a data collection technique in order to obtain an overview of the participating subjects. The research design consists of two phases: (1) generation of illustrations from prompts by students, professionals and a generative AI tool; and (2) focus groups with students (N = 5) and educators (N = 5) of artistic education. In general, the perception of educators and students coincides in the usefulness of generative AI as a tool to support the generation of illustrations. However, they agree that the human factor cannot be replaced by generative AI. The results obtained allow us to conclude that generative AI can be used as a motivating educational strategy for arts education. Full article

20 pages, 3041 KiB  
Article
Artificial Intelligence Chatbots in Chemical Information Seeking: Narrative Educational Insights via a SWOT Analysis
by Johannes Pernaa, Topias Ikävalko, Aleksi Takala, Emmi Vuorio, Reija Pesonen and Outi Haatainen
Informatics 2024, 11(2), 20; https://doi.org/10.3390/informatics11020020 - 18 Apr 2024
Abstract
Artificial intelligence (AI) chatbots are next-word predictors built on large language models (LLMs). There is great interest within the educational field for this new technology because AI chatbots can be used to generate information. In this theoretical article, we provide educational insights into the possibilities and challenges of using AI chatbots. These insights were produced by designing chemical information-seeking activities for chemistry teacher education which were analyzed via the SWOT approach. The analysis revealed several internal and external possibilities and challenges. The key insight is that AI chatbots will change the way learners interact with information. For example, they enable the building of personal learning environments with ubiquitous access to information and AI tutors. Their ability to support chemistry learning is impressive. However, the processing of chemical information reveals the limitations of current AI chatbots not being able to process multimodal chemical information. There are also ethical issues to address. Despite the benefits, wider educational adoption will take time. The diffusion can be supported by integrating LLMs into curricula, relying on open-source solutions, and training teachers with modern information literacy skills. This research presents theory-grounded examples of how to support the development of modern information literacy skills in the context of chemistry teacher education. Full article

14 pages, 960 KiB  
Article
ChatGPT in Education: Empowering Educators through Methods for Recognition and Assessment
by Joost C. F. de Winter, Dimitra Dodou and Arno H. A. Stienen
Informatics 2023, 10(4), 87; https://doi.org/10.3390/informatics10040087 - 29 Nov 2023
Abstract
ChatGPT is widely used among students, a situation that challenges educators. The current paper presents two strategies that do not push educators into a defensive role but can empower them. Firstly, we show, based on statistical analysis, that ChatGPT use can be recognized from certain keywords such as ‘delves’ and ‘crucial’. This insight allows educators to detect ChatGPT-assisted work more effectively. Secondly, we illustrate that ChatGPT can be used to assess texts written by students. The latter topic was presented in two interactive workshops provided to educators and educational specialists. The results of the workshops, where prompts were tested live, indicated that ChatGPT, provided a targeted prompt is used, is good at recognizing errors in texts but not consistent in grading. Ethical and copyright concerns were raised as well in the workshops. In conclusion, the methods presented in this paper may help fortify the teaching methods of educators. The computer scripts that we used for live prompting are available and enable educators to give similar workshops. Full article
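The keyword-recognition strategy described above can be sketched in a few lines. The marker words below follow the abstract's examples ('delves', 'crucial'), but the extra words and the flagging threshold are illustrative assumptions, not values from the paper.

```python
# 'delves' and 'crucial' are the markers named in the abstract; the other
# two words and the threshold below are hypothetical illustrations.
MARKER_WORDS = {"delves", "crucial", "furthermore", "moreover"}

def marker_rate(text):
    """Fraction of tokens that are marker words (case-insensitive)."""
    tokens = [t.strip(".,;:!?()").lower() for t in text.split()]
    tokens = [t for t in tokens if t]
    if not tokens:
        return 0.0
    return sum(t in MARKER_WORDS for t in tokens) / len(tokens)

def flag_for_review(text, threshold=0.02):
    """Flag a text whose marker-word rate exceeds the threshold."""
    return marker_rate(text) > threshold

sample = "This essay delves into a crucial topic. Furthermore, it delves deeper."
print(flag_for_review(sample))  # → True for this marker-heavy sample
```

A flag like this can only prompt a closer look by the educator; the paper's statistical analysis, not a fixed word list, is what grounds the detection claim.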

16 pages, 335 KiB  
Review
AI Chatbots in Digital Mental Health
by Luke Balcombe
Informatics 2023, 10(4), 82; https://doi.org/10.3390/informatics10040082 - 27 Oct 2023
Abstract
Artificial intelligence (AI) chatbots have gained prominence since 2022. Powered by big data, natural language processing (NLP) and machine learning (ML) algorithms, they offer the potential to expand capabilities, improve productivity and provide guidance and support in various domains. Human–Artificial Intelligence (HAI) is proposed to help with the integration of human values, empathy and ethical considerations into AI in order to address the limitations of AI chatbots and enhance their effectiveness. Mental health is a critical global concern, with a substantial impact on individuals, communities and economies. Digital mental health solutions, leveraging AI and ML, have emerged to address the challenges of access, stigma and cost in mental health care. Despite their potential, ethical and legal implications surrounding these technologies remain uncertain. This narrative literature review explores the potential of AI chatbots to revolutionize digital mental health while emphasizing the need for ethical, responsible and trustworthy AI algorithms. The review is guided by three key research questions: the impact of AI chatbots on technology integration, the balance between benefits and harms, and the mitigation of bias and prejudice in AI applications. Methodologically, the review involves extensive database and search engine searches, utilizing keywords related to AI chatbots and digital mental health. Peer-reviewed journal articles and media sources were purposively selected to address the research questions, resulting in a comprehensive analysis of the current state of knowledge on this evolving topic. In conclusion, AI chatbots hold promise in transforming digital mental health but must navigate complex ethical and practical challenges. The integration of HAI principles, responsible regulation and scoping reviews are crucial to maximizing their benefits while minimizing potential risks. 
Collaborative approaches and modern educational solutions may enhance responsible use and mitigate biases in AI applications, ensuring a more inclusive and effective digital mental health landscape. Full article
21 pages, 1002 KiB  
Article
Chatbots Put to the Test in Math and Logic Problems: A Comparison and Assessment of ChatGPT-3.5, ChatGPT-4, and Google Bard
by Vagelis Plevris, George Papazafeiropoulos and Alejandro Jiménez Rios
AI 2023, 4(4), 949-969; https://doi.org/10.3390/ai4040048 - 24 Oct 2023
Abstract
In an age where artificial intelligence is reshaping the landscape of education and problem solving, our study unveils the secrets behind three digital wizards, ChatGPT-3.5, ChatGPT-4, and Google Bard, as they engage in a thrilling showdown of mathematical and logical prowess. We assess the ability of the chatbots to understand the given problem, employ appropriate algorithms or methods to solve it, and generate coherent responses with correct answers. We conducted our study using a set of 30 questions. These questions were carefully crafted to be clear, unambiguous, and fully described using plain text only. Each question has a unique and well-defined correct answer. The questions were divided into two sets of 15: Set A consists of “Original” problems that cannot be found online, while Set B includes “Published” problems that are readily available online, often with their solutions. Each question was presented to each chatbot three times in May 2023. We recorded and analyzed their responses, highlighting their strengths and weaknesses. Our findings indicate that chatbots can provide accurate solutions for straightforward arithmetic, algebraic expressions, and basic logic puzzles, although they may not be consistently accurate in every attempt. However, for more complex mathematical problems or advanced logic tasks, the chatbots’ answers, although they appear convincing, may not be reliable. Furthermore, consistency is a concern as chatbots often provide conflicting answers when presented with the same question multiple times. To evaluate and compare the performance of the three chatbots, we conducted a quantitative analysis by scoring their final answers based on correctness. Our results show that ChatGPT-4 performs better than ChatGPT-3.5 in both sets of questions. Bard ranks third in the original questions of Set A, trailing behind the other two chatbots. However, Bard achieves the best performance, taking first place in the published questions of Set B. 
This is likely due to Bard’s direct access to the internet, unlike the ChatGPT chatbots, which, due to their designs, do not have external communication capabilities. Full article

26 pages, 4052 KiB  
Article
Fluent but Not Factual: A Comparative Analysis of ChatGPT and Other AI Chatbots’ Proficiency and Originality in Scientific Writing for Humanities
by Edisa Lozić and Benjamin Štular
Future Internet 2023, 15(10), 336; https://doi.org/10.3390/fi15100336 - 13 Oct 2023
Abstract
Historically, mastery of writing was deemed essential to human progress. However, recent advances in generative AI have marked an inflection point in this narrative, including for scientific writing. This article provides a comprehensive analysis of the capabilities and limitations of six AI chatbots in scholarly writing in the humanities and archaeology. The methodology was based on tagging AI-generated content for quantitative accuracy and qualitative precision by human experts. Quantitative accuracy assessed the factual correctness in a manner similar to grading students, while qualitative precision gauged the scientific contribution similar to reviewing a scientific article. In the quantitative test, ChatGPT-4 scored near the passing grade (−5) whereas ChatGPT-3.5 (−18), Bing (−21) and Bard (−31) were not far behind. Claude 2 (−75) and Aria (−80) scored much lower. In the qualitative test, all AI chatbots, but especially ChatGPT-4, demonstrated proficiency in recombining existing knowledge, but all failed to generate original scientific content. As a side note, our results suggest that with ChatGPT-4, the size of large language models has reached a plateau. Furthermore, this paper underscores the intricate and recursive nature of human research. This process of transforming raw data into refined knowledge is computationally irreducible, highlighting the challenges AI chatbots face in emulating human originality in scientific writing. Our results apply to the state of affairs in the third quarter of 2023. In conclusion, while large language models have revolutionised content generation, their ability to produce original scientific contributions in the humanities remains limited. We expect this to change in the near future as current large language model-based AI chatbots evolve into large language model-powered software. Full article

16 pages, 872 KiB  
Article
Qualitative Research Methods for Large Language Models: Conducting Semi-Structured Interviews with ChatGPT and BARD on Computer Science Education
by Andreas Dengel, Rupert Gehrlein, David Fernes, Sebastian Görlich, Jonas Maurer, Hai Hoang Pham, Gabriel Großmann and Niklas Dietrich genannt Eisermann
Informatics 2023, 10(4), 78; https://doi.org/10.3390/informatics10040078 - 12 Oct 2023
Abstract
In the current era of artificial intelligence, large language models such as ChatGPT and BARD are being increasingly used for various applications, such as language translation, text generation, and human-like conversation. The fact that these models consist of large amounts of data, including many different opinions and perspectives, could introduce the possibility of a new qualitative research approach: Due to the probabilistic character of their answers, “interviewing” these large language models could give insights into public opinions in a way that otherwise only interviews with large groups of subjects could deliver. However, it is not yet clear if qualitative content analysis research methods can be applied to interviews with these models. Evaluating the applicability of qualitative research methods to interviews with large language models could foster our understanding of their abilities and limitations. In this paper, we examine the applicability of qualitative content analysis research methods to interviews with ChatGPT in English, ChatGPT in German, and BARD in English on the relevance of computer science in K-12 education, which was used as an exemplary topic. We found that the answers produced by these models strongly depended on the provided context, and the same model could produce heavily differing results for the same questions. From these results and the insights throughout the process, we formulated guidelines for conducting and analyzing interviews with large language models. Our findings suggest that qualitative content analysis research methods can indeed be applied to interviews with large language models, but with careful consideration of contextual factors that may affect the responses produced by these models. The guidelines we provide can aid researchers and practitioners in conducting more nuanced and insightful interviews with large language models. 
From an overall view of our results, we generally do not recommend using interviews with large language models for research purposes, due to their highly unpredictable results. However, we suggest using these models as exploration tools for gaining different perspectives on research topics and for testing interview guidelines before conducting real-world interviews. Full article

6 pages, 500 KiB  
Communication
Children of AI: A Protocol for Managing the Born-Digital Ephemera Spawned by Generative AI Language Models
by Dirk H. R. Spennemann
Publications 2023, 11(3), 45; https://doi.org/10.3390/publications11030045 - 21 Sep 2023
Abstract
The recent public release of the generative AI language model ChatGPT has captured the public imagination and has resulted in a rapid uptake and widespread experimentation by the general public and academia alike. The number of academic publications focusing on the capabilities as well as practical and ethical implications of generative AI has been growing exponentially. One of the concerns with this unprecedented growth in scholarship related to generative AI, in particular, ChatGPT, is that, in most cases, the raw data, which is the text of the original ‘conversations,’ have not been made available to the audience of the papers and thus cannot be drawn on to assess the veracity of the arguments made and the conclusions drawn therefrom. This paper provides a protocol for the documentation and archiving of these raw data. Full article

18 pages, 3680 KiB  
Article
Application of ChatGPT-Based Digital Human in Animation Creation
by Chong Lan, Yongsheng Wang, Chengze Wang, Shirong Song and Zheng Gong
Future Internet 2023, 15(9), 300; https://doi.org/10.3390/fi15090300 - 2 Sep 2023
Abstract
Traditional 3D animation creation involves a process of motion acquisition, dubbing, and mouth movement data binding for each character. To streamline animation creation, we propose combining artificial intelligence (AI) with a motion capture system. This integration aims to reduce the time, workload, and cost associated with animation creation. By utilizing AI and natural language processing, the characters can engage in independent learning, generating their own responses and interactions, thus moving away from the traditional method of creating digital characters with pre-defined behaviors. In this paper, we present an approach that employs a digital person’s animation environment. We utilized Unity plug-ins to drive the character’s mouth Blendshape, synchronize the character’s voice and mouth movements in Unity, and connect the digital person to an AI system. This integration enables AI-driven language interactions within animation production. Through experimentation, we evaluated the correctness of the natural language interaction of the digital human in the animated scene, the real-time synchronization of mouth movements, the potential for singularity in guiding users during digital human animation creation, and its ability to guide user interactions through its own thought process. Full article

8 pages, 231 KiB  
Editorial
AI Chatbots: Threat or Opportunity?
by Antony Bryant
Informatics 2023, 10(2), 49; https://doi.org/10.3390/informatics10020049 - 12 Jun 2023
Abstract
In November 2022, OpenAI launched ChatGPT, an AI chatbot that gained over 100 million users by February 2023 [...] Full article