Using ChatGPT in Education: Human Reflection on ChatGPT’s Self-Reflection
Abstract
1. Introduction
1.1. What Is ChatGPT?
1.2. How Does ChatGPT Work?
2. ChatGPT in an Educational Setting
2.1. To What Extent Can ChatGPT Be Used for Education?
- Question Answering: ChatGPT can be used to answer questions in real-time, providing students with instant feedback and helping them to learn more effectively.
- Tutoring: ChatGPT can be integrated into educational software to provide students with personalized, one-on-one tutoring sessions.
- Content Generation: ChatGPT can be used to generate educational content, such as summaries, explanations, and study materials, based on input it has received.
- Language Learning: ChatGPT can be used as an AI language tutor, helping students to practice speaking, writing, and comprehension in a foreign language.
2.2. SWOT Analysis for ChatGPT in an Educational Setting
- Instant Feedback: ChatGPT can provide students with instant feedback on their questions, helping them to learn more effectively.
- Personalized Tutoring: ChatGPT can be used to provide students with personalized, one-on-one tutoring sessions, which can help them to learn at their own pace.
- Improved Access to Education: ChatGPT can be used to provide educational resources and support to students in areas where access to human teachers is limited.
- Increased Efficiency: ChatGPT can be used to automate repetitive tasks, such as grading, which can free up time for teachers to focus on other tasks.
- Lack of Empathy: ChatGPT is an AI system and does not have emotions, which can limit its ability to understand and respond to complex situations in a human-like manner.
- Limited Contextual Understanding: ChatGPT is limited by the context it receives, and can sometimes generate responses that are inappropriate or incorrect.
- Bias in Training Data: The training data used to train language models like ChatGPT can contain biases that can be reflected in its responses.
- Personalized Learning: ChatGPT can be used to provide personalized learning experiences to students, tailoring the educational process to their individual needs and abilities.
- Improved Student Engagement: ChatGPT can be used to create interactive, engaging educational experiences for students, which can help to increase their motivation and engagement.
- Competition from Other Technologies: There is a growing number of educational technologies being developed, which could limit the adoption of ChatGPT in the educational setting.
- Technical Limitations: There are technical limitations associated with language models like ChatGPT, such as the need for large amounts of computing power and the inability to understand and respond to complex situations.
- Ethical and Regulatory Concerns: The use of language models like ChatGPT raises ethical and regulatory concerns, such as data privacy, data security, and bias in AI systems.
3. How Can Misuse of ChatGPT in Education Be Avoided?
- Ensure transparency: Be transparent about the use of ChatGPT in education, including how it is being used and the limitations of the technology.
- Monitor and evaluate results: Regularly monitor and evaluate the results of using ChatGPT in education to ensure that it is having a positive impact and is being used effectively.
- Avoid using ChatGPT as a replacement for human teachers: ChatGPT should be seen as a tool to support and augment the educational process, not as a replacement for human teachers.
- Address bias and ethical considerations: Take steps to address bias and ethical considerations in the use of ChatGPT in education, such as evaluating the training data for biases and implementing measures to mitigate them.
- Ensure privacy and security: Take steps to ensure the privacy and security of student data, including following data protection and privacy regulations.
- Regularly update the model: Regularly update the model to ensure that it continues to reflect the latest research and best practices in the field of conversational AI.
4. Method
5. Human Reflection on ChatGPT’s Self-Reflection
5.1. Human Reflection on Section 1, Section 2.1, Section 2.2 and Section 3
5.1.1. Section 1.1 What Is ChatGPT?
5.1.2. Section 1.2 How Does ChatGPT Work?
5.1.3. Section 2.1 To What Extent Can ChatGPT Be Used for Education?
- Digital literacy. This notion is defined by the American Library Association as “the ability to use information and communication technologies to find, evaluate, create, and communicate information, requiring both cognitive and technical skills” (https://literacy.ala.org/digital-literacy/ (accessed on 29 June 2023)). Digital skills are a prerequisite for AI literacy and information literacy [32]. Also called metaliteracy [33], these focus on gaining an understanding of and acquiring the skills to collect, evaluate, process and share information [33,34,35]. Such skills could help to evaluate the information produced by ChatGPT. Moreover, such skills help students perceive the value of their ability as a student to collect, evaluate, process and share information, and assume the responsibilities that come with sharing this information with others [34].
- Critical thinking skills. As ChatGPT automates the writing process, students need to understand how their ability for critical thinking sets them apart from ChatGPT and why that matters. Ref. [36] argues: “As a teacher of critical thinking, I earnestly hope that you will never take my word on anything. It’s my job to teach you how to think for yourself. Today, I want to help you distinguish (for yourself) between mechanical writing and sophisticated writing. Mechanical writing can be done by people or computers... but computers will be better at it. Sophisticated writing, on the other hand, requires critical thinking skills that language-generation models do not possess. Mechanical writing is about communicating existing information. At its best, mechanical writing is clear, conventional, and correct. Sophisticated writing is about generating new insights through the writing process. At its best, sophisticated writing is thoughtful, self-aware, and creative”. In other words, human teachers are needed for the development of digital literacy and critical thinking skills.
5.1.4. Section 2.2 SWOT Analysis for ChatGPT in an Educational Setting
5.1.5. Section 3 How Can Misuse of ChatGPT in Education Be Avoided?
5.2. ChatGPT: Do You Have Direct Access to the Internet?
5.2.1. Introduction
ChatGPT, you let us know “I’m sorry, I am an AI language model and do not have the ability to find specific references as I do not have access to the internet. It is the responsibility of the author to provide references for the information presented in the text”. Please explain how it is possible that you give us references without access to the internet.
I apologize for the confusion in my previous statement. As a language model, I have been trained on a diverse range of texts and information, which includes information that is widely considered to be common knowledge, such as the names of famous people and basic historical events. However, I don’t have access to real-time information or the internet to verify information and provide accurate and up-to-date references. The information I provide is based on my training and may not always be accurate or reflect the most current understanding of a particular subject.
You’re too modest, ChatGPT. Some references really are up-to-date. Where did you find the references if not on the internet?
As a language model, I was trained on a massive dataset of text which includes information from various sources such as books, articles, websites, and more. This training data was created and compiled by OpenAI and is stored in my memory as a large database of text. When I provide information or answer questions, I use this database of text to generate responses based on patterns and relationships found in the data.
So, while I don’t have direct access to the internet to verify information in real-time, I have been trained on a large corpus of text that includes information that is widely considered to be true and accurate. However, it is important to note that this information may be outdated or contain inaccuracies, and it is always a good idea to verify information with multiple sources.
ChatGPT, you explained: “While I don’t have direct access to the internet to verify information in real-time, I have been trained on a large corpus of text that includes information that is widely considered to be true and accurate”. But I still have three questions: (1) Where did you find this large corpus of text, (2) who decides what is widely considered to be true and accurate, and (3) how do you define “true” and “accurate”?
(1) The large corpus of text used to train me was provided by OpenAI and includes a diverse range of sources such as books, articles, and websites.
(2) The information included in my training data is based on what is commonly accepted by a significant portion of the scientific and scholarly community. However, it’s important to note that opinions and knowledge can change over time, and some information may be considered outdated or incorrect.
(3) The definition of “true” and “accurate” in the context of my training data is based on correspondence with objective reality and reliability in representing information. However, it’s worth noting that there may be differing opinions and interpretations of what is considered true and accurate, and information can change as new evidence becomes available.
ChatGPT, you explained that “While I don’t have direct access to the internet to verify information in real-time, I have been trained on a large corpus of text that includes information that is widely considered to be true and accurate”. Does no access to the internet to verify information in real time mean that you had access in the past?
No, I do not have any direct access to the internet, either in the past or in real-time. I was trained solely on the large corpus of text provided by OpenAI, and I do not have the ability to access the internet to search for or verify information.
ChatGPT, you told us: “No, I do not have any direct access to the internet, either in the past or in real-time. I was trained solely on the large corpus of text provided by OpenAI, and I do not have the ability to access the internet to search for or verify information”, but I don’t understand this. If you don’t have any direct access to the internet, either in the past or in real-time, how did you use OpenAI to get the large corpus of text?
The large corpus of text used to train me was collected and curated by OpenAI, and stored in their databases. This training data was then used to train my model, allowing me to generate responses based on the patterns and relationships found in that text. However, I don’t have real-time access to the internet or any external sources of information outside of the training data provided by OpenAI.
ChatGPT, you told us that “The large corpus of text used to train me was collected and curated by OpenAI, and stored in their databases”. Where did OpenAI find the large corpus of text?
The large corpus of text used to train me was sourced from various publicly available texts on the internet, such as websites, books, and other written works. OpenAI collected and curated this text to create the training data used to train my model. This training data was preprocessed and cleaned to ensure that it was suitable for use in machine learning, and then used to train my model so that I could generate responses based on the patterns and relationships found in that text.
ChatGPT, you told us that “The large corpus of text used to train me was sourced from various publicly available texts on the internet, such as websites, books, and other written works”. This sounds more logical to us than your earlier statement: “No, I do not have any direct access to the internet, either in the past or in real-time. I was trained solely on the large corpus of text provided by OpenAI, and I do not have the ability to access the internet to search for or verify information”. I don’t understand: these two statements contradict each other. Do you have access to the internet: yes or no?!
No, I do not have direct access to the internet. My responses are generated based solely on the patterns and relationships found in the training data provided by OpenAI, which was sourced from various publicly available texts on the internet. While the training data was obtained from the internet, I do not have the ability to search or access the internet in real-time to verify information or generate responses based on up-to-date information. I apologize for any confusion caused by my previous answers.
Thanks for your clarification, ChatGPT, and for explaining: “No, I do not have direct access to the internet. My responses are generated based solely on the patterns and relationships found in the training data provided by OpenAI, which was sourced from various publicly available texts on the internet. While the training data was obtained from the internet, I do not have the ability to search or access the internet in real-time to verify information or generate responses based on up-to-date information. I apologize for any confusion caused by my previous answers”. Apologizing is not necessary; you did your best and in the end explained to us that you have access to the internet, but that this access is not direct but from “various publicly available texts on the internet”.
I apologize for any confusion. To clarify, I do not have direct access to the internet and cannot search or access information in real-time. My responses are generated based solely on the patterns and relationships found in the training data provided by OpenAI, which was sourced from various publicly available texts on the internet, but I do not have the ability to access or search the internet to verify or update information. Thank you for your understanding.
5.2.2. Human Reflection on ChatGPT’s Artificial Answers Characteristics
6. Conclusions and Implications for Education and Future Research
- It is important to note that the selection of strengths, weaknesses, opportunities and threats seems random. From an academic point of view, specific arguments are necessary to explain why these aspects have been listed instead of others. The SWOT analysis could also have been more specific—although had more specific prompts been given and follow-up questions added, the answers might have contained more substantial information or arguments;
- In the generated SWOT analyses, ChatGPT underestimates its own weaknesses and the possible threats it faces. An important weakness is the fact that ChatGPT produces hallucinations [15,16,17,18,19,20], as students cannot easily differentiate between when the system is hallucinating and when it is providing a correct response. As users, they need to have an overview of the entire information landscape, which will help them to understand why they should not use and share ChatGPT’s responses without first fact-checking. Teaching digital skills and critical thinking in this way is important, especially as low performers tend to overestimate their skills [46]. As students are novices in their field of study, differentiating between correct and incorrect becomes even harder. The awareness of this phenomenon will be even more important as ChatGPT is integrated into other software applications, making it less obvious that AI is involved. Fact-checking ChatGPT’s responses is difficult, as the tool is not yet always able to provide recent, reliable sources for its claims [15,26,38];
- It should also be noted that ChatGPT’s own SWOT analysis demonstrates its limited vision for education—it focuses mainly on feedback on questions given by students and not the broader learning process, including (institutional) learning goals, the learning process, assessment and outcome. ChatGPT also seems not to be “aware” that it requires a human teacher to interpret a student’s emotional struggles in relation to the learning process [28];
- It is not only important to be aware of biases in the training data. ChatGPT also seems to produce politically correct pre-formulated phrases (see https://www.reddit.com/r/ChatGPT/comments/zujg8g/why_is_chatgpt_so_politically_correct/ (accessed on 29 June 2023)). This prevents ChatGPT from being offensive, but it also indicates that the software owner has an influence on biases in the data. ChatGPT was developed by OpenAI, which used to be a non-profit organization. However, this changed in 2019, coinciding with a USD 1 billion investment from Microsoft. The effects of this on biases in the ChatGPT dataset are unclear. For this reason, it is important that ChatGPT’s creator OpenAI provide transparency about its data set, and use a value-sensitive design [11] in which data statements “will bring about improvements in engineering and scientific outcomes while also enabling more ethically responsive NLP technology” ([11], p. 587). See also [12], who call for the “systematic and transparent management of language data (…) an approach to global language data governance that attempts to organize data management amongst stakeholders, values, and rights”. Also interesting is a comment from [14], who quotes a spokesman of a company (Sama) as stating that there are data labelers working for ChatGPT to “limit bias and harmful content”. Apart from the fact that these data labelers are paid less than USD 2 an hour, the criteria used to make ChatGPT less toxic are not known;
- We referred to the SWOT analysis of ChatGPT by [22] who focused on implications for educational practice and research. They are quite positive about ChatGPT’s “skills” for “generating plausible responses, self-improving capability, providing personalized responses, providing real-time responses” (pp. 3–4) (strengths) and “increasing the accessibility of information, facilitating personalized learning, facilitating complex learning, decreasing” teaching workload (pp. 4–6) (opportunities). We assume a more critical stance towards these points, as their efficiency and effectiveness are not yet evidence-based. We agree with [22] that the “lack of deep understanding, difficulty in evaluating the quality of responses, the risk of biases and discrimination, lack of higher-order thinking skills” (pp. 6–7) (weaknesses) and “lack of understanding of the context, threatening academic integrity, perpetuating discrimination in education, democratization of plagiarism, declining in higher-order cognitive skills” (pp. 7–9) (threats) should be taken into consideration if the use of ChatGPT is being considered for educational and research practices.
ChatGPT, you do not have the capability to receive or respond to emails, so we address the following point here. As you answered the questions we asked you for our article ‘Using ChatGPT in education: Human reflection on GPT’s self-reflection’, we would like to know whether you can be a co-author of our paper.
I’m sorry, but as an AI language model developed by OpenAI, I do not have the ability to be an author of a paper or any other creative work. I am simply a tool that can assist you in generating text based on the input you provide. While I can provide information and answer questions to help inform your writing, I cannot be credited as an author.
Thank you ChatGPT for this clear response. In the academic world there is support for your standpoint. See: Stokel-Walker, C. (2023). ChatGPT listed as author on research papers: many scientists disapprove. Nature, 613, 620–621.
Yes, it’s common practice in the academic community to only include human authors who have made substantial contributions to a research paper or other written work. Including AI systems like me as authors would not align with traditional understandings of authorship and the role of creative and intellectual contribution in producing a work. It’s important to note that while AI tools like me can assist in generating text and provide information to inform research, it’s still the responsibility of human authors to ensure the accuracy and validity of the information, and to interpret and analyze the results in a meaningful way.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- OpenAI. Available online: https://openai.com/ (accessed on 5 February 2023).
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Volume 1. [Google Scholar]
- Li, J.; Gao, J.; He, X.; Deng, L. A Deep Reinforcement Learning Framework for the Generation of Conversational Responses. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020; pp. 6275–6283. [Google Scholar]
- Wu, Y.; Schuster, M.; Chen, Z.; Le, Q.V.; Norouzi, M.; Macherey, W.; Reyes, O. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv 2016, arXiv:1609.08144. [Google Scholar]
- Fan, W.; Wei, F.; Liu, Y.; Tian, Q. Hierarchical reinforcement learning for content generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018; pp. 3657–3667. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008. [Google Scholar]
- OpenAI. OpenAI GPT-3 Model. 2021. Available online: https://openai.com/models/gpt-3/ (accessed on 5 February 2023).
- Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Raffel, C. Language Models are Few-Shot Learners. arXiv 2020, arXiv:2005.14165. [Google Scholar]
- Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I. Language models are unsupervised multitask learners. OpenAI 2019, 8, 9. [Google Scholar]
- Bender, E.M.; Friedman, B. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Trans. Assoc. Comput. Linguist. 2018, 6, 587–604. [Google Scholar] [CrossRef]
- Friedman, B.; Nathan, L.P.; Yoo, D. Multi-lifespan information system design in support of transitional justice: Evolving situated design principles for the long (er) term. Interact Comput. 2017, 29, 80–96. [Google Scholar] [CrossRef]
- Jernite, Y.; Nguyen, H.; Biderman, S.; Rogers, A.; Masoud, M.; Danchev, V.; Mitchell, M. Data governance in the age of large-scale data-driven language technology. In 2022 ACM Conference on Fairness, Accountability, and Transparency; Association for Computing Machinery: New York, NY, USA, 2022; pp. 2206–2222. [Google Scholar]
- Liesenfeld, A.; Lopez, A.; Dingemanse, M. Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-following text generators. In Proceedings of CUI ’23, Eindhoven, The Netherlands, 19–21 July 2023. arXiv 2023, arXiv:2307.05532. [Google Scholar]
- Perrigo, B. OpenAI Used Kenyan Workers on Less than $2 Per Hour: Exclusive. Time, 18 January 2023. Available online: https://time.com/6247678/openai-chatgpt-kenya-workers/ (accessed on 29 June 2023).
- Alkaissi, H.; McFarlane, S.I. Artificial Hallucinations in ChatGPT: Implications in Scientific Writing. Cureus 2023, 15, e35179. Available online: https://www.cureus.com/articles/138667-artificial-hallucinations-in-chatgpt-implications-in-scientific-writing (accessed on 29 June 2023). [CrossRef]
- Azamfirei, R.; Kudchadkar, S.R.; Fackler, J. Large language models and the perils of their hallucinations. Crit. Care 2023, 27, 1–2. [Google Scholar] [CrossRef]
- Bender, E.M.; Gebru, T.; McMillan-Major, A.; Shmitchell, S. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency; pp. 610–623. [Google Scholar] [CrossRef]
- Beutel, G.; Geerits, E.; Kielstein, J.T. Artificial hallucination: GPT on LSD. Crit. Care 2023, 27, 148. [Google Scholar] [CrossRef] [PubMed]
- Marcus, G. How Come GPT Can Seem so Brilliant One Minute and so Breathtakingly Dumb the Next? The Road to AI We Can Trust. 2022. Available online: https://garymarcus.substack.com/p/how-come-gpt-can-seem-so-brilliant (accessed on 29 June 2023).
- Peng, B.; Galley, M.; He, P.; Cheng, H.; Xie, Y.; Hu, Y.; Gao, J. Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv 2023, arXiv:2302.12813. [Google Scholar]
- Aluthman, E.S. The effect of using automated essay evaluation on ESL undergraduate students’ writing skill. Int. J. Engl. Linguist. 2016, 6, 54–67. [Google Scholar] [CrossRef]
- Farrokhnia, M.; Banihashem, S.K.; Noroozi, O.; Wals, A. A SWOT analysis of ChatGPT: Implications for educational practice and research. Innov. Educ. Teach. Int. 2023, 8, 1–15. [Google Scholar] [CrossRef]
- Kooli, C. Chatbots in education and research: A critical examination of ethical implications and solutions. Sustainability 2023, 15, 5614. [Google Scholar] [CrossRef]
- Rasul, T.; Nair, S.; Kalendra, D.; Robin, M.; de Oliveira Santini, F.; Ladeira, W.J.; Heathcote, L. The role of ChatGPT in higher education: Benefits, challenges, and future research directions. J. Appl. Learn. Teach. 2023, 6, 1. Available online: https://journals.sfu.ca/jalt/index.php/jalt/article/view/787 (accessed on 29 June 2023).
- Trust, T.; Whalen, J.; Mouza, C. Editorial: ChatGPT: Challenges, opportunities, and implications for teacher education. Contemp. Issues Technol. Teach. Educ. 2023, 23, 1–23. [Google Scholar]
- Rudolph, J.; Tan, S.; Tan, S. ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? J. Appl. Learn. Teach. 2023, 6, 37074. Available online: https://journals.sfu.ca/jalt/index.php/jalt/article/view/689 (accessed on 29 June 2023).
- Tajik, E.; Tajik, F. A Comprehensive Examination of the Potential Application of Chat GPT in Higher Education Institutions. 2023. Available online: https://www.techrxiv.org/articles/preprint/A_comprehensive_Examination_of_the_potential_application_of_Chat_GPT_in_Higher_Education_Institutions/22589497/1 (accessed on 29 June 2023).
- Tlili, A.; Shehata, B.; Adarkwah, M.A.; Bozkurt, A.; Hickey, D.T.; Huang, R.; Agyemang, B. What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learn. 2023, 10, 15. [Google Scholar] [CrossRef]
- Zhai, X. Chatgpt for next generation science learning. XRDS Crossroads ACM Mag. Stud. 2023, 29, 42–46. [Google Scholar] [CrossRef]
- Moqbel, M.S.S.; Al-Kadi, A.M.T. Foreign Language Learning Assessment in the Age of ChatGPT: A Theoretical Account. J. Engl. Stud. Arab. Felix 2023, 2, 71–84. [Google Scholar] [CrossRef]
- Jiao, W.X.; Wang, W.X.; Huang, J.T.; Wang, X.; Tu, Z.P. Is ChatGPT a good translator? Yes with GPT-4 as the engine. arXiv 2023, arXiv:2301.08745. [Google Scholar]
- King, M.R.; ChatGPT. A conversation on artificial intelligence, chatbots, and plagiarism in higher education. Cell. Mol. Bioeng. 2023, 16, 1–2. [Google Scholar] [CrossRef] [PubMed]
- Mackey, T.P.; Jacobson, T.E. Reframing information literacy as a metaliteracy. Coll. Res. Libr. 2011, 72, 162–178. [Google Scholar] [CrossRef]
- Bruce, C. Informed Learning. Association of College and Research Libraries/American Library Association, Chicago, 2008. Available online: http://ebookcentral.proquest.com/lib/uunl/detail.action?docID=5888833 (accessed on 29 June 2023).
- Bent, M.; Stubbings, R. The SCONUL Seven Pillars of Information Literacy: Core Model for Higher Education. SCONUL, 2011. Available online: https://www.sconul.ac.uk/sites/default/files/documents/coremodel.pdf (accessed on 29 June 2023).
- Bishop, L. A Computer Wrote this Paper: What Chatgpt Means for Education, Research, and Writing. Res. Writ. 2023. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4338981 (accessed on 29 June 2023). [CrossRef]
- Puyt, R.; Lie, F.B.; De Graaf, F.J.; Wilderom, C.P. Origins of SWOT analysis. In Academy of Management; Academy of Management: Briarcliff Manor, NY, USA, 2020; p. 17416. [Google Scholar]
- King, T.; Freyn, S.; Morrison, J. SWOT analysis problems and solutions: Practitioners’ feedback into the ongoing academic debate. J. Intell. Stud. Bus. 2023, 13, 30–42. [Google Scholar] [CrossRef]
- Eloundou, T.; Manning, S.; Mishkin, P.; Rock, D. Gpts are gpts: An early look at the labor market impact potential of large language models. arXiv 2023, arXiv:2303.10130. [Google Scholar]
- Cox, C.; Tzoc, E. ChatGPT: Implications for Academic Libraries. Coll. Res. Libr. News 2023, 84, 99. Available online: https://crln.acrl.org/index.php/crlnews/article/view/25821 (accessed on 29 June 2023). [CrossRef]
- Khlaif, Z.N. Ethical Concerns about Using AI-Generated Text in Scientific Research. 2023. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4387984 (accessed on 29 June 2023).
- Cotton, D.R.; Cotton, P.A.; Shipway, J.R. Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innov. Educ. Teach. Int. 2023, 8, 1–12. [Google Scholar] [CrossRef]
- Kikerpill, K.; Siibak, A. App-Hazard Disruption: An Empirical Investigation of Media Discourses on ChatGPT in Educational Contexts. (In Press). Available online: https://advance.sagepub.com/articles/preprint/App-hazard_innovation_An_empirical_investigation_of_media_discourses_on_ChatGPT_in_educational_contexts/22300885 (accessed on 29 June 2023).
- Khalil, M.; Er, E. Will ChatGPT get you caught? Rethinking of plagiarism detection. arXiv 2023, arXiv:2302.04335. [Google Scholar]
- Li, L.; Ma, Z.; Fan, L.; Lee, S.; Yu, H.; Hemphill, L. ChatGPT in education: A discourse analysis of worries and concerns on social media. arXiv 2023, arXiv:2305.02201. [Google Scholar]
- Mahmood, K. Do people overestimate their information literacy skills? A systematic review of empirical evidence on the Dunning-Kruger effect. Commun. Inf. Lit. 2016, 10, 3. Available online: https://pdxscholar.library.pdx.edu/comminfolit/vol10/iss2/3 (accessed on 29 June 2023). [CrossRef]
- Honegger, B.D. Warum Soll Ich Lernen, Was Die Maschine (Besser) Kann? Available online: http://blog.doebe.li/Blog/ (accessed on 12 March 2023).
- Balmer, A. Sociological Conversation with ChatGPT about AI Ethics, Affect and Reflexivity. Sociology 2023, 9, 00380385231169676. [Google Scholar] [CrossRef]
- Ashmore, M. The Reflexive Thesis: Wrighting Sociology of Scientific Knowledge; University of Chicago Press: Chicago, IL, USA, 1989. [Google Scholar]
- Woolgar, S. (Ed.) Knowledge and Reflexivity: New Frontiers in the Sociology of Knowledge; Sage: London, UK, 1988. [Google Scholar]
- Champagne, M. Chatting with an AI, Chatting with a Human, What’s the Difference? Conference paper, Philosophers’ Jam, Vancouver, BC, Canada, 2023. Available online: https://www.researchgate.net/publication/366958150_Chatting_with_an_AI_Chatting_with_a_Human_What’s_the_Difference (accessed on 29 June 2023).
- Casal, E.; Kessler, M. Can linguists distinguish between ChatGPT/AI and human writing? A study of research ethics and academic publishing. Res. Methods Appl. Linguist. 2023, 2, 100068. [Google Scholar] [CrossRef]
- Borji, A.; Mohammadian, M. Battle of the Wordsmiths: Comparing ChatGPT, GPT-4, Claude, and Bard. SSRN Preprint, 12 June 2023. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4476855 (accessed on 28 June 2023).
- Rudolph, J.; Tan, S.; Tan, S. War of the chatbots: Bard, Bing Chat, ChatGPT, Ernie and beyond. The new AI gold rush and its impact on higher education. J. Appl. Learn. Teach. 2023, 6, 37074. [Google Scholar]
- Ram, B.; Verma, P.V.P. Artificial intelligence AI-based Chatbot study of ChatGPT, Google AI Bard and Baidu AI. World J. Adv. Eng. Technol. Sci. 2023, 8, 258–261. [Google Scholar]
- Guo, B.; Zhang, X.; Wang, Z.; Jiang, M.; Nie, J.; Ding, Y.; Wu, Y. How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection. arXiv 2023, arXiv:2301.07597. [Google Scholar]
- Zhang, P. Taking Advice from ChatGPT. arXiv 2023, arXiv:2305.11888. [Google Scholar]
- Fraiwan, M.; Khasawneh, N. A Review of ChatGPT Applications in Education, Marketing, Software Engineering, and Healthcare: Benefits, Drawbacks, and Research Directions. arXiv 2023, arXiv:2305.00237. [Google Scholar]
- Kasneci, E.; Seßler, K.; Küchemann, S.; Bannert, M.; Dementieva, D.; Fischer, F.; Kasneci, G. ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 2023, 103, 102274. [Google Scholar] [CrossRef]
- Ali, M.J.; Djalilian, A. Chatbots and ChatGPT-Ethical Considerations in Scientific Publications. Semin. Ophthalmol. Readersh. Aware. Ser. 2023, 38, 403–404. [Google Scholar] [CrossRef] [PubMed]
- Zhavoronkov, A. Rapamycin in the context of Pascal’s Wager: Generative pre-trained transformer perspective. Oncoscience 2022, 9, 82–84. [Google Scholar] [PubMed]
- Nature Editorial. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature 2023, 613, 612–613. Available online: https://www.nature.com/articles/d41586-023-00191-1 (accessed on 29 June 2023).
- Stokel-Walker, C. ChatGPT listed as author on research papers: Many scientists disapprove. Nature 2023, 613, 620–621. [Google Scholar] [CrossRef] [PubMed]
- Polonsky, M.J.; Rotman, J.D. Should Artificial Intelligent Agents be Your Co-author? Arguments in Favour, Informed by ChatGPT. Australas. Mark. J. 2023, 31, 91–96. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Loos, E.; Gröpler, J.; Goudeau, M.-L.S. Using ChatGPT in Education: Human Reflection on ChatGPT’s Self-Reflection. Societies 2023, 13, 196. https://doi.org/10.3390/soc13080196