Promptology: Enhancing Human–AI Interaction in Large Language Models
Abstract
1. Introduction
2. Background
3. Methodology
4. Data Collection
- Assignments and Course Integration: Across the five courses, AI-based assignments required students to engage with AI tools in a variety of ways:
- Research Paper Review: Students used AI to critically analyze research papers;
- Summarization of Research Articles: Students employed AI to create summaries of selected research articles;
- Research Proposal Development: AI tools were used to assist in structuring and drafting research proposals. The data collected through these assignments provided valuable insights into student interactions with AI, including common challenges like hallucination (AI-generated incorrect or fabricated information) and concerns regarding plagiarism.
- Professor Observations: Three faculty members were asked to document their observations of student interactions with AI, focusing on how effectively students utilized AI tools and any challenges they faced. Informal interviews with professors further supplemented this data, providing insights into the faculty’s perceptions of AI’s role in academic tasks and its impact on learning outcomes.
- Learning Management System Discussion Board Reviews: Each course had an online discussion board where students reflected on their AI experiences. These discussion threads provided rich qualitative data on how students engaged with the AI, shared concerns, and collaborated to improve their understanding of AI use in academic work. Key issues like trust in AI-generated summaries and the difficulty of crafting appropriate prompts for the AI were common themes identified in these forums.
5. Data Analysis
- Notes from professor observations and informal interviews were analyzed alongside student feedback, helping to refine the SPARRO framework by ensuring it addressed both faculty and student perspectives;
- Thematic analysis of discussion board posts provided additional insights into how students navigated the use of AI, particularly in relation to trust and academic integrity. These discussions informed key components of the SPARRO framework, such as the ‘Reviewing’ and ‘Refining’ stages, which emphasize the importance of verifying AI-generated content.
6. Developing the SPARRO Framework
7. SPARRO Framework Development
- Strategy addressed the need for planning AI’s role in research with a ‘Declaration of Generative AI Use’ to maintain transparency;
- Prompt Design utilized the CRAFT model (Clarity, Rationale, Audience, Format, Tasks) to create effective prompts tailored to course needs;
- Adopting ensured AI content aligned with assignment objectives, integrating AI outputs seamlessly with human input;
- Reviewing included critical assessments of AI content for accuracy and relevance, maintaining educational standards;
- Refining focused on iterative improvements based on feedback, enhancing content quality;
- Optimizing ensured originality and academic integrity with plagiarism checkers and reference verification tools.
- Ethical Considerations
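The six stages above can be sketched as a simple ordered checklist. The stage names follow the SPARRO framework as described in the article; the data structure and helper methods are illustrative assumptions, not part of the published framework.

```python
from dataclasses import dataclass, field
from typing import Optional, Set

# Stage names taken from the SPARRO framework; ordering is sequential.
SPARRO_STAGES = [
    "Strategy",       # plan AI's role and declare its use
    "Prompt Design",  # craft prompts using the CRAFT model
    "Adopting",       # integrate AI output with human input
    "Reviewing",      # fact-check for accuracy and relevance
    "Refining",       # iterate on language and arguments
    "Optimizing",     # verify originality and references
]

@dataclass
class SparroChecklist:
    """Tracks which SPARRO stages an assignment has completed (illustrative)."""
    completed: Set[str] = field(default_factory=set)

    def mark_done(self, stage: str) -> None:
        if stage not in SPARRO_STAGES:
            raise ValueError(f"Unknown SPARRO stage: {stage}")
        self.completed.add(stage)

    def next_stage(self) -> Optional[str]:
        """Return the first stage not yet completed, or None if all are done."""
        for stage in SPARRO_STAGES:
            if stage not in self.completed:
                return stage
        return None

checklist = SparroChecklist()
checklist.mark_done("Strategy")
print(checklist.next_stage())  # "Prompt Design"
```

A checklist like this could, for example, back a course rubric in which each stage must be marked complete before submission.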
8. Prompt Engineering SPARRO Framework
- Strategy
- Declaration of Generative AI and AI-assisted technologies
- During the preparation of this assignment, I used [NAME TOOL/SERVICE] to perform the following [REASON]. After using this tool/service, I reviewed and edited the content as needed and I take full responsibility for the content of the publication.
- Example Declaration statement
- Prompt Design
- Adoption
- Reviewing
- Refining
- Optimizing
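The declaration statement from the Strategy stage can be generated from a template. The wording below is taken from the example declaration above; the function name and the example tool and reason are hypothetical, and students would substitute their actual tool and task.

```python
# Template wording follows the example declaration statement in the article.
DECLARATION_TEMPLATE = (
    "During the preparation of this assignment, I used {tool} to perform the "
    "following {reason}. After using this tool/service, I reviewed and edited "
    "the content as needed and I take full responsibility for the content of "
    "the publication."
)

def build_declaration(tool: str, reason: str) -> str:
    """Fill the generative-AI use declaration (helper name is illustrative)."""
    return DECLARATION_TEMPLATE.format(tool=tool, reason=reason)

# Hypothetical example values, not drawn from the study data.
print(build_declaration("ChatGPT (GPT-4)", "summarization of research articles"))
```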
9. Discussion
10. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Component | Description | Example from Data |
---|---|---|
Strategy | Develop a clear plan for how AI will be used in the assignment, including its role, limitations, and how it aligns with the objectives. | Students who clearly articulated how AI would assist in their assignments experienced less writer’s block and confusion. In the Health Service Administration course, students who outlined AI’s role had an easier time getting started and structuring their work. It was also a requirement to declare how AI was going to be used. |
Prompt Design | Use the CRAFT model (Clarity, Rationale, Audience, Format, Tasks) to create effective prompts. Higher-level cognitive prompts (e.g., ‘evaluate’ or ‘analyze’) generate more useful content, and specifying the output format enhances relevance. | In Nursing Research Methods, students who used higher-level prompts (e.g., ‘evaluate this article’) received better AI-generated summaries. Specifying the desired output format, such as requesting a table or bullet points, resulted in more applicable responses. |
Adopting | Integrate AI-generated content into the student’s work, ensuring alignment with their voice and the assignment’s requirements. Once a satisfactory draft is reached, the AI output should be transferred from the AI system into editing software. | In Nursing Leadership and Policy, a student modified an AI-generated policy analysis, incorporating critical thinking and aligning the content with her own research to meet the assignment’s objectives. |
Reviewing | Critically assess AI-generated content for accuracy, relevance, and coherence. Verify the content with trusted sources to ensure reliability. | In the Healthcare Research course, students who fact-checked AI summaries against peer-reviewed articles identified inaccuracies such as hallucinations, which improved the quality and accuracy of their final submissions. |
Refining | After verifying accuracy, iterate on AI-generated content by refining language, improving arguments, and adding personal insights. This step ensures the final submission meets academic standards. | In Advanced Clinical Decision Making, a student critically compared AI-generated case analyses with their own clinical reasoning. The human analysis highlighted gaps in the AI’s suggestions, leading to a more comprehensive and accurate final diagnosis. |
Optimizing | Ensure academic integrity by using plagiarism checkers and reference verification tools to confirm the originality and credibility of the AI-assisted work. | In Research Methods, students who used plagiarism detection tools on AI-generated content identified potential issues with originality and refined their work to ensure proper citations and academic integrity. |
Component | Definition |
---|---|
Clarity | Be clear, specific, and unambiguous in your prompt to avoid multiple interpretations and set clear boundaries for AI capabilities. |
Rationale | Specify the context or background for the prompt. This is the underlying rationale within which the prompt is expected to operate. |
Audience | Consider the audience when crafting the prompt. The language, complexity, and tone should be tailored to the intended readership. |
Format | Specify the desired output format (e.g., essay, list, table, flowchart) to tailor the AI’s response for immediate applicability. |
Tasks | Break down the prompt into smaller, manageable sections, each addressing a specific aspect of the complex query. |
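The five CRAFT components above can be assembled into a single prompt string. This is a minimal sketch: the function name, field layout, and example values are assumptions for illustration, not a prescribed template from the framework.

```python
from typing import List

def craft_prompt(clarity: str, rationale: str, audience: str,
                 output_format: str, tasks: List[str]) -> str:
    """Assemble a prompt from the five CRAFT components (illustrative layout)."""
    # Tasks are numbered so a complex query is broken into manageable steps.
    task_lines = "\n".join(f"{i}. {t}" for i, t in enumerate(tasks, 1))
    return (
        f"{clarity}\n"
        f"Context: {rationale}\n"
        f"Audience: {audience}\n"
        f"Output format: {output_format}\n"
        f"Tasks:\n{task_lines}"
    )

# Hypothetical example inspired by the Nursing Research Methods assignments.
prompt = craft_prompt(
    clarity="Evaluate the attached nursing research article.",
    rationale="Course assignment on appraising study methodology.",
    audience="Graduate nursing students.",
    output_format="A table of strengths and weaknesses.",
    tasks=["Summarize the study design", "Identify limitations"],
)
print(prompt)
```

Specifying the output format in the prompt, as in the Format row above, is what makes the AI response immediately applicable to the assignment.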
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Olla, P.; Elliott, L.; Abumeeiz, M.; Mihelich, K.; Olson, J. Promptology: Enhancing Human–AI Interaction in Large Language Models. Information 2024, 15, 634. https://doi.org/10.3390/info15100634