Article

Promptology: Enhancing Human–AI Interaction in Large Language Models

1 College of Health Professionals, University of Detroit Mercy, Detroit, MI 48221, USA
2 Schulich School of Medicine, Western University, London, ON N6A 3K7, Canada
* Author to whom correspondence should be addressed.
Information 2024, 15(10), 634; https://doi.org/10.3390/info15100634
Submission received: 18 September 2024 / Revised: 8 October 2024 / Accepted: 8 October 2024 / Published: 14 October 2024
(This article belongs to the Special Issue Advances in Human-Centered Artificial Intelligence)

Abstract

This ethnographic study explores the integration of generative AI in healthcare and nursing education, detailing the development of the SPARRO framework, a structured approach to improving human–AI interaction in academic settings, based on observations of student and faculty interactions with AI tools across five courses. The study identifies key challenges such as AI hallucination, mistrust of AI-generated summaries, and difficulty in formulating effective prompts. The SPARRO framework addresses these challenges, offering a step-by-step guide for planning, prompt design, reviewing, and refining AI outputs. While the framework shows promise in improving AI integration, future research is needed to validate its applicability across other academic disciplines and to assess its long-term impact on critical thinking and academic integrity. This study contributes to the growing body of research on AI in education, offering practical solutions for ethically and effectively integrating AI tools in academic settings.

1. Introduction

This study examines the role of Generative Artificial Intelligence (GenAI) in higher education, particularly in healthcare and nursing courses, through an ethnographic lens. It observes how students and professors engage with AI, noting challenges regarding academic integrity and practical AI integration. This paper presents the SPARRO framework, developed through detailed observations and data from five different courses, offering a structured approach to the ethical and effective use of AI. There have been previous efforts to enhance AI interaction, including prompt engineering, prompt design [1,2], and promptology [3]. For our work, grounded in the principles of human–computer interaction, we define promptology as:
Promptology is an interdisciplinary field that focuses on designing strategic and secure prompts for generative AI systems. It integrates technical skills, cognitive science, and frameworks to optimize human–AI interactions, ensuring ethical, efficient, and accurate responses across various industries.
Promptology integrates technical skills with human cognition and language to optimize human–AI interfaces across various computing environments. As GenAI becomes ubiquitous in fields such as education, business, healthcare, and research [4], promptology is critical for ensuring safe, ethical, and effective AI use. A key aspect of promptology is learning to create robust and secure prompts that prevent potential misuse or manipulation by malicious actors [5,6]. By fostering the creation of clear, specific prompts, promptology also helps generate cost-effective AI interactions, reducing unnecessary computing time and energy [7,8]. The SPARRO framework applies these principles, offering a structured approach to mitigate AI challenges in academic settings.
Promptology plays a key role in developing best practices for prompt design. It offers a systematic approach that can be applied across different industries, enabling users to achieve optimal outcomes from their AI systems. A key part of this approach is the creation of a taxonomy of prompts, categorizing them based on their characteristics and intended uses. This framework empowers users to harness the full potential of AI and enhances the overall utility of GenAI.
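As a toy illustration of what such a taxonomy might look like in practice (the categories and descriptions below are our own assumptions, not a taxonomy defined by this paper), prompt categories can be mapped to their characteristics and intended uses:

```python
# Toy illustration only: these categories and descriptions are our own
# assumptions, not a taxonomy proposed by this paper.
PROMPT_TAXONOMY = {
    "instructional": {
        "characteristic": "imperative task statement",
        "intended_use": "direct generation (e.g., 'summarize this article')",
    },
    "role-based": {
        "characteristic": "assigns the model a persona",
        "intended_use": "shaping tone and domain vocabulary",
    },
    "few-shot": {
        "characteristic": "includes worked examples",
        "intended_use": "steering output format and style",
    },
    "open-ended": {
        "characteristic": "broad exploratory question",
        "intended_use": "idea generation and brainstorming",
    },
}

def describe(category: str) -> str:
    """Render one taxonomy entry as a single descriptive line."""
    entry = PROMPT_TAXONOMY[category]
    return f"{category}: {entry['characteristic']} -> {entry['intended_use']}"

for name in PROMPT_TAXONOMY:
    print(describe(name))
```

Categorizing prompts this way lets users pick a starting category that matches their goal before refining the wording.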
This paper introduces a foundational approach to incorporating promptology into academic and research settings, aiming to balance the efficiency and workload reduction capabilities of AI with the need to maintain the integrity and quality of intellectual outputs. Through the SPARRO framework’s context-appropriate guidelines and best practices in prompt design, this study contributes to the broader discourse on fostering ethical, safe, and productive human–AI collaboration in critical intellectual domains.
The following sections outline the methodology used to collect and analyze data from five courses, introduce the SPARRO framework developed from these insights, discuss the key findings, and provide implications for future AI integration in education.

2. Background

Generative AI refers to a class of artificial intelligence models that can generate text, images, videos, code, and other content from user prompts. GPT-4 is an example of a GenAI model, designed to interpret and produce human-like text. These models are reshaping the digital landscape, with important implications across a multitude of sectors [9]. GenAI models have moved beyond being mere curiosities of computational linguistics and are now driving significant advancements in fields as diverse as education, healthcare, business, research, and even creative arts [10]. Their ability to generate, comprehend, translate, and summarize text expands their usefulness across any field where language and communication are crucial.
One of the defining characteristics of GenAIs is their interaction mechanism. Unlike traditional software governed by explicit commands and rigid interfaces, GenAIs operate on the principle of ‘prompts’. In the context of AI, a prompt is a user-issued instruction or query that guides the AI’s output. Prompts serve as the primary language for communicating with these models, providing an intuitive and flexible way to leverage their capabilities [11]. To be effective, however, prompts must be constructed using the methodology of prompt engineering. Prompt engineering can be described as the step necessary to transition GenAI models from a pre-train, fine-tune paradigm requiring explicit, pre-trained commands to a pre-train, prompt, and predict paradigm [12]. An effective prompt or query allows the GenAI model to predict the desired user output and generate an appropriate response. The selection of a prompt with sufficient keywords to elicit the desired output therefore becomes paramount: a well-crafted prompt can generate highly accurate, relevant, and creative outputs.
This brings us to the emergent field of ‘Promptology’ [13]. As AI grows more prominent, the ability to interact with these systems efficiently and effectively is becoming a critical skill. Just as the study of linguistics helps us understand the structures and principles of human languages, studying promptology is essential for mastering the ‘language’ of AI. Promptology is the study of prompt design: it focuses on the principles, strategies, and techniques for crafting prompts that maximize the utility, efficiency, and safety of interactions with GenAIs [3]. Several models are currently in use and in development to facilitate the design of improved prompts across a variety of GenAI and other large language model (LLM) systems, as outlined in a survey by Liu et al. [12].
Another important distinction is between manual template engineering and automated template learning [12]. Manual template engineering describes the process of generating prompts through human creativity, reflection, and refinement. Automated template learning describes the automated generation of prompts for target tasks and can be performed through a variety of methods such as using GenAI itself to generate prompts. Depending on the problem at the center of the prompt, manual prompt engineering and automated prompt engineering each have their advantages and disadvantages. Manual prompt engineering allows for a natural approach shaped by human intuition and reasoning, while the automated generation of prompts allows for faster generation and testing of a variety of prompt options with less human workload [12].
Prompt engineering is the process of creating, structuring, fine-tuning, and then integrating the instructions that guide LLMs in accomplishing specific tasks [14]. Prompts are created with a specific purpose and usage framework in mind. Without prompt engineering, GenAI has no reliable way to extract the relevant information from a request, which often makes it challenging to use in specific contexts. As an example of prompt engineering in healthcare, a prompt used to manage administrative responsibilities might include language that provides the GenAI with fields to be filled in for medication reminders, appointment scheduling, or simple health recommendations prior to appointments. Many methodologies can be used to achieve quality prompt engineering. While these vary with the context of the task, general aims include being specific to the problem, providing as much context as possible, experimenting with verbiage, having the GenAI take on specific roles, formulating open-ended prompts, and more.
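The healthcare administration example above can be sketched as a templated prompt with explicit fields. This is a minimal, hypothetical sketch: the field names and template wording are our own assumptions, not materials from the study.

```python
# Illustrative sketch only: the field names and template wording are our
# own assumptions, not taken from the study's materials.
from string import Template

# A reusable administrative prompt with explicit fields to be filled in,
# mirroring the medication-reminder/scheduling example in the text.
APPOINTMENT_PROMPT = Template(
    "You are an administrative assistant for a medical clinic.\n"
    "Draft a reminder message for the patient below.\n"
    "Patient name: $name\n"
    "Appointment: $appointment\n"
    "Medications to mention: $medications\n"
    "Include one simple pre-appointment recommendation and keep the "
    "message under 100 words."
)

def build_reminder_prompt(name: str, appointment: str, medications: str) -> str:
    """Fill the template so every required field is present before the
    prompt is sent to a GenAI model; substitute() raises if one is missing."""
    return APPOINTMENT_PROMPT.substitute(
        name=name, appointment=appointment, medications=medications
    )

prompt = build_reminder_prompt(
    name="A. Patient",
    appointment="2024-11-03 09:30, cardiology follow-up",
    medications="metoprolol, lisinopril",
)
print(prompt)
```

Because the template enforces its fields, the GenAI always receives the same structured context regardless of who composes the request.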
GenAI models can be designed to work with complex, field-specific data sets. One such example is described by Singhal et al. [15], in which they utilize instruction prompt tuning to attempt to improve the performance of large language models (LLMs) on the MultiMedQA, a question dataset composed of questions similar to those in the US Medical Licensing Exam (USMLE). They observed that an LLM refined with prompt tuning focused on medical vocabulary improved the performance of the LLM on the MultiMedQA, demonstrating the importance of domain-specific prompt engineering. While prior frameworks like prompt engineering have focused on creating efficient prompts for AI, the SPARRO framework addresses broader ethical concerns, including the mitigation of AI-generated hallucinations and plagiarism [3].
The phrase ‘garbage in, garbage out’ is relevant here. If the prompts created to extract data from the AI system are unclear, the output from the system is likely to be flawed or unhelpful [13]. This makes the study of promptology not merely an academic interest but a vital component of practical AI usage. As AI systems continue to evolve and their integration into everyday life deepens, a solid understanding of promptology will become increasingly indispensable. Promptology is a mix of technical acumen and creative experimentation, essential for developing input texts that coax optimal performance from AI models. Recent advances have seen the field of prompt engineering transform, with a reduced emphasis on engineering and a greater focus on prompt designing [16].
Academic research involves searching for, interpreting, and drawing conclusions from large swaths of academic text sourced from a variety of places [17]. This review of the data is essential for the development of publications, presentations, and medical and scientific documentation. GenAI applications such as OpenAI’s GPT-4 model, or more domain-specific applications, have the potential to be an incredible time-saver for clinicians and researchers [18]. Domain-specific GPT applications launched in recent years, such as BioGPT and PubMed GPT, have the added advantage of being trained on a more specific dataset than general models such as ChatGPT. GenAIs can generate large portions of text, summarize prompted articles, and edit text. They may also be able to summarize copious amounts of data, such as those sourced from clinical trials or from the record of a complex patient [18]. As research has highlighted the potential ethical risks of GenAI, this study sought to explore these issues through classroom integration, leading to the development of the SPARRO framework based on real-world student and faculty experiences. The next section provides an overview of how GenAI can be safely integrated into the academic process.

3. Methodology

This study utilized an ethnographic approach, focusing on understanding the cultural and social practices of AI integration in healthcare and nursing education. Over one semester, data were collected through participant observation, student surveys, and professor interviews in five courses. The researcher documented classroom interactions, student challenges, and professor feedback, capturing how AI tools were adopted in academic tasks. A thematic analysis of these data sources was conducted to inform the development of the SPARRO framework. The ethnographic methodology enabled the researcher to observe and document the use of AI in the classroom, focusing on the interactions between students and faculty. These observations, combined with data from surveys and interviews, provided a comprehensive understanding of how AI tools were integrated into academic tasks, the challenges encountered, and their impact on enhancing learning in academic environments.

4. Data Collection

Data were collected through multiple ethnographic methods, including learning management system (LMS) observations, anonymous surveys distributed to students, and semi-structured interviews with professors. The observations focused on capturing deeper insights into students’ and professors’ experiences and challenges with AI.
Data were gathered from five healthcare and nursing courses: Healthcare Research Methods, Healthcare Informatics, Advanced Clinical Decision Making, Research Methods and Statistics, and Nursing Leadership and Policy.
Each course integrated AI-based assignments tailored to its specific learning objectives. The assignments included tasks such as using AI to review research papers, summarize research articles, and develop research proposals.
  • Assignments and Course Integration: Across the five courses, AI-based assignments required students to engage with AI tools in a variety of ways:
    Research Paper Review: Students used AI to critically analyze research papers;
    Summarization of Research Articles: Students employed AI to create summaries of selected research articles;
    Research Proposal Development: AI tools were used to assist in structuring and drafting research proposals.
The data collected through these assignments provided valuable insights into student interactions with AI, including common challenges like hallucination (AI-generated incorrect or fabricated information) and concerns regarding plagiarism.
  • Professor Observations: Three faculty members were asked to document their observations of student interactions with AI, focusing on how effectively students utilized AI tools and any challenges they faced. Informal interviews with professors further supplemented these data, providing insights into the faculty’s perceptions of AI’s role in academic tasks and its impact on learning outcomes.
  • Learning Management System Discussion Board Reviews: Each course had an online discussion board where students reflected on their AI experiences. These discussion threads provided rich qualitative data on how students engaged with the AI, shared concerns, and collaborated to improve their understanding of AI use in academic work. Key issues like trust in AI-generated summaries and the difficulty of crafting appropriate prompts for the AI were common themes identified in these forums.

5. Data Analysis

The findings reveal several key patterns in how students and professors engaged with AI tools. Students who clearly outlined AI’s role in their assignments experienced less confusion and writer’s block. Professors noted that higher-level prompts led to more relevant AI outputs, while concerns about plagiarism and AI hallucination were consistently raised in discussion boards. These observations highlighted the need for structured guidelines, which led to the development of the SPARRO framework:
  • Notes from professor observations and informal interviews were analyzed alongside student feedback, helping to refine the SPARRO framework by ensuring it addressed both faculty and student perspectives;
  • Thematic analysis of discussion board posts provided additional insights into how students navigated the use of AI, particularly in relation to trust and academic integrity. These discussions informed key components of the SPARRO framework, such as the ‘Reviewing’ and ‘Refining’ stages, which emphasize the importance of verifying AI-generated content.

6. Developing the SPARRO Framework

Based on these findings, the SPARRO framework was developed to address these challenges and provide a structured approach for integrating AI tools into academic tasks. The framework emphasizes clear planning, prompt design, and iterative refinement to ensure the effective and ethical use of AI in education. For example, the ‘Strategy’ component was influenced by the need for careful planning of AI use in academic assignments, while the ‘Adopting’ component reflects the challenge of aligning AI outputs with academic objectives. The iterative nature of the framework, particularly the ‘Reviewing’ and ‘Refining’ stages, was informed by the continuous need to assess and improve AI-generated content based on both student and faculty feedback.

7. SPARRO Framework Development

The ethnographic insights led to the development of the SPARRO framework, addressing specific challenges faced by students and professors:
  • Strategy addressed the need for planning AI’s role in research with a ‘Declaration of Generative AI Use’ to maintain transparency;
  • Prompt Design utilized the CRAFT model (Clarity, Rationale, Audience, Format, Tasks) to create effective prompts tailored to course needs;
  • Adopting ensured AI content aligned with assignment objectives, integrating AI outputs seamlessly with human input;
  • Reviewing included critical assessments of AI content for accuracy and relevance, maintaining educational standards;
  • Refining focused on iterative improvements based on feedback, enhancing content quality;
  • Optimizing ensured originality and academic integrity with plagiarism checkers and reference verification tools.
  • Ethical Considerations
Transparency was maintained through clear communication, with informed consent obtained from participants. Anonymity and confidentiality were upheld throughout the research process.

8. Prompt Engineering SPARRO Framework

This paper proposes a structural framework to ethically and safely integrate language processors such as ChatGPT into student education. Critics have suggested that language processors undermine critical thinking, and that programs such as ChatGPT remain very prone to error. These conversations have sparked debates about banning these tools [19]. However, excessive fear surrounding these technological advancements could potentially hinder progress in how we engage with and process information. Some professors, fearing AI-generated work, are now opting for pen-and-paper assessments, an overcorrection that may introduce inefficiencies better addressed through alternative frameworks [20,21].
The SPARRO framework (outlined in Table 1 and Figure 1) attempts to guide the accurate, ethical, and reliable use of language processing systems such as ChatGPT in an educational setting. To harness AI responsibly, students must adopt a systematic method that respects academic integrity while leveraging AI’s capabilities. This section introduces the SPARRO framework as a comprehensive approach to facilitating the ethical and effective use of AI in academic settings. Each component—Strategy, Prompt design, Adopting, Reviewing, Refining, and Optimizing—serves as a guideline to ensure the reliability and appropriateness of AI-generated content within scholarly work.
  • Strategy
The foundational aspect of the SPARRO framework is Strategy, which involves developing a comprehensive plan for integrating AI into research or assignments. This plan includes creating a declaration of use statement, determining the role of AI, its scope, and the boundaries of its use. A well-defined strategy ensures that the application of AI aligns with learning objectives and respects the ethical standards of academic work. It compels students to consider the AI’s function as a tool rather than a replacement for critical thinking and creativity.
  • Declaration of Generative AI and AI-assisted technologies
  • During the preparation of this assignment, I used [NAME TOOL/SERVICE] to perform the following [REASON]. After using this tool/service, I reviewed and edited the content as needed and I take full responsibility for the content of the publication.
  • Example Declaration statement
During the preparation of this assignment, I used ChatGPT to perform the following tasks: idea generation and outline structuring. After using the ChatGPT tool, I reviewed and edited the content and authored the full paper without the use of ChatGPT. I take full responsibility for the content of the publication.
  • Prompt Design
Prompt design, drawing on the CRAFT model (outlined in Table 2), is crucial for harnessing AI’s potential effectively. It requires clarity in the communication of tasks to AI, providing a rationale for the context, acknowledging the intended audience, specifying the desired format, and delineating the tasks. This component ensures that the prompts generate relevant and precise responses, tailored to the academic task at hand. This is an iterative approach and may require some tweaking of the CRAFT model to achieve a desirable output.
By looking at each component of the CRAFT model and understanding its purpose, both educators and students can better understand what content is being generated from the AI systems.
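As a minimal sketch (the five fields mirror the CRAFT acronym described above, but this particular assembly into prompt text is our own illustrative assumption), the CRAFT components can be captured in a small structure that renders them into a single prompt:

```python
# Minimal sketch: the five fields follow the CRAFT acronym (Clarity,
# Rationale, Audience, Format, Tasks); assembling them into this exact
# prompt layout is our own illustrative assumption.
from dataclasses import dataclass

@dataclass
class CraftPrompt:
    clarity: str    # the task, stated unambiguously
    rationale: str  # context: why the task is being asked
    audience: str   # who the output is for
    format: str     # desired shape of the response
    tasks: str      # concrete steps or deliverables

    def render(self) -> str:
        """Assemble the five components into one prompt string."""
        return (
            f"Task: {self.clarity}\n"
            f"Context: {self.rationale}\n"
            f"Audience: {self.audience}\n"
            f"Format: {self.format}\n"
            f"Steps: {self.tasks}"
        )

prompt = CraftPrompt(
    clarity="Summarize the attached research article.",
    rationale="The summary supports a nursing research methods assignment.",
    audience="Graduate nursing students new to the topic.",
    format="Three short paragraphs in plain language; do not invent citations.",
    tasks="State the aim, methods, and key findings; flag any limitations.",
).render()
print(prompt)
```

Adjusting a single field (for example, tightening `format`) and re-rendering mirrors the iterative tweaking of the CRAFT model the text describes.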
  • Adoption
The adoption phase involves the careful integration of AI-generated text into the student’s own work. It emphasizes the importance of maintaining a consistent voice and ensuring that the AI output supports the assignment’s objectives. This step requires a discerning approach to distinguish between the value added by AI and the student’s original thought and analysis.
  • Reviewing
Reviewing is a critical examination of the AI-generated content. It necessitates a thorough evaluation of accuracy, relevance, and coherence, comparing the AI’s output to academic standards and the assignment’s criteria. It is during this phase that students must engage deeply with the material, identifying any gaps or inaccuracies and considering the argument’s logical flow.
  • Refining
Refinement is an iterative process, enhancing the language and arguments of the adopted text. Whereas Reviewing evaluates AI-generated content for accuracy and relevance, Refining ensures that the content not only meets academic standards but also reflects the student’s intellectual contribution. It might involve restructuring arguments, expanding on ideas, and incorporating participants’ insights to elevate the work’s quality and authenticity.
  • Optimizing
Finally, Optimizing addresses the originality and integrity of academic work. Students are encouraged to utilize plagiarism checkers and reference verification tools to ascertain that their work is free of inadvertent plagiarism and that all sources are properly cited. This step is pivotal in maintaining the scholarly value of their work and upholding the principles of academic honesty.
The benefit of the SPARRO model is that it transcends any particular chatbot, tool, or AI technology. It is a guidebook that when applied properly will ensure that AI is used accurately, ethically, and safely within the educational system. The future of promptology entails developing new techniques to improve interaction.
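The stage sequence above can also be expressed as a simple checklist that a course could adapt. This is an illustrative sketch: the stage names come from the framework, but the per-stage checklist wording is our own paraphrase of the text above.

```python
# Illustrative sketch: stage names follow the SPARRO framework; the
# per-stage actions are our own paraphrase of the descriptions above.
SPARRO_STAGES = [
    ("Strategy", "Plan AI's role and write a declaration of use statement."),
    ("Prompt design", "Build the prompt using the CRAFT model."),
    ("Adopting", "Integrate AI output so it supports the assignment's aims."),
    ("Reviewing", "Check accuracy, relevance, and coherence against sources."),
    ("Refining", "Iteratively improve language and arguments."),
    ("Optimizing", "Run plagiarism and reference-verification checks."),
]

def sparro_checklist(completed: set[str]) -> list[str]:
    """Return a status line per stage, preserving the framework's
    step-by-step ordering."""
    lines = []
    for name, action in SPARRO_STAGES:
        mark = "x" if name in completed else " "
        lines.append(f"[{mark}] {name}: {action}")
    return lines

for line in sparro_checklist({"Strategy", "Prompt design"}):
    print(line)
```

Because the checklist is independent of any particular chatbot, the same sequence applies whichever AI tool a course adopts.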

9. Discussion

The integration of AI in academic tasks, as observed in the five courses, suggests a cultural shift towards reliance on technology for research and writing. However, the ethnographic data reveal significant concerns about trust, academic integrity, and prompt formulation. These findings emphasize the need for structured AI integration frameworks, such as SPARRO, to address these challenges and improve both student engagement and learning outcomes.
However, while the framework showed promise, the study also revealed key challenges in the integration of AI. One significant finding was the variance in student trust of AI-generated summaries. Some students expressed concerns about the reliability of AI outputs, particularly in terms of accuracy and potential plagiarism. This mistrust underscores the need for further refinement in how AI is incorporated into academic work, particularly in verifying AI-generated content against peer-reviewed sources, as emphasized in the Reviewing and Refining stages of the SPARRO framework.
Moreover, the study primarily focused on short-term outcomes, such as immediate student engagement and task completion. The long-term effects of using AI in academic work—especially its impact on critical thinking, originality, and academic integrity—remain uncertain. Future research should explore these aspects over extended periods to determine whether AI use enhances or diminishes core academic skills. Another challenge lies in the varying levels of student proficiency with AI tools. While some students adapted quickly to using AI in assignments, others struggled, particularly when formulating higher-level cognitive prompts. This disparity highlights the importance of providing ongoing support and training to ensure equitable access to, and effective use of, AI across all student skill levels.
Lastly, while the SPARRO framework provides a useful guide for integrating AI into education, its applicability to other disciplines outside of healthcare and nursing remains to be tested. The framework was developed based on data collected in a specific academic context, and further empirical validation is needed to determine whether the framework can be adapted to other fields. Additionally, the lack of quantitative data in this study means that the findings, while insightful, are primarily based on qualitative feedback. Quantitative validation, especially concerning student performance metrics, would strengthen the case for the broader adoption of the SPARRO framework.

10. Conclusions

This ethnographic study of AI integration in healthcare and nursing education demonstrates both the potential and the challenges of incorporating generative AI tools into academic work. The SPARRO framework, developed through direct observation of student and professor interactions, offers a practical and ethical approach to navigating these challenges, ensuring that AI is used effectively while maintaining academic integrity.
However, this study also revealed important challenges that must be considered for the broader adoption of AI in education. While the framework mitigates some of the risks associated with AI, such as content inaccuracies and academic dishonesty, it remains essential to address the issue of trust in AI-generated content. Students expressed concerns about the reliability of AI outputs, underscoring the need for continued refinement in how AI tools are used and validated in academic settings. Moreover, the long-term effects of AI use on critical thinking, originality, and academic integrity are still unclear and warrant further investigation. Additionally, while the SPARRO framework has shown promise in healthcare and nursing courses, its application across other academic disciplines remains untested. Future research should focus on validating the framework in different contexts and fields to ensure its adaptability and effectiveness beyond the current scope.
In conclusion, the SPARRO framework provides a valuable starting point for the ethical and effective integration of AI in education, but its broader applicability and long-term effects must be carefully considered. Further research should aim to validate the framework across disciplines, assess its long-term impact on student learning, and ensure that students have the necessary support and resources to use AI tools effectively and responsibly.

Author Contributions

Conceptualization, P.O.; Formal analysis, P.O., L.E. and M.A.; Data curation, K.M. and J.O.; Writing—original draft, P.O.; Writing—review & editing, L.E. and M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of the University of Detroit Mercy (Protocol 23-24-25, approved 23 January 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Heston, T.F.; Khun, C. Prompt Engineering in Medical Education. Int. Med. Educ. 2023, 2, 198–205. [Google Scholar] [CrossRef]
  2. Strobelt, H.; Webson, A.; Sanh, V.; Hoover, B.; Beyer, J.; Pfister, H.; Rush, A.M. Interactive and visual prompt engineering for ad-hoc task adaptation with large language models. IEEE Trans. Vis. Comput. Graph. 2022, 29, 1146–1156. [Google Scholar] [CrossRef] [PubMed]
  3. Dobson, J.E. On reading and interpreting black box deep neural networks. Int. J. Digit. Humanit. 2023, 5, 431–449. [Google Scholar] [CrossRef]
  4. Robert, L.P.; Pierce, C.; Marquis, L.; Kim, S.; Alahmad, R. Designing fair AI for managing employees in organizations: A review, critique, and design agenda. Hum.–Comput. Interact. 2020, 35, 545–575. [Google Scholar] [CrossRef]
  5. Ansari, M.F.; Dash, B.; Sharma, P.; Yathiraju, N. The Impact and Limitations of Artificial Intelligence in Cybersecurity: A Literature Review. Int. J. Adv. Res. Comput. Commun. Eng. 2022, 11, 81–90. [Google Scholar] [CrossRef]
  6. Trope, R.L. ‘What a Piece of Work Is AI’: Security and AI Developments. Bus. Law 2020, 76, 289–294. [Google Scholar]
  7. Lo, L.S. The CLEAR path: A framework for enhancing information literacy through prompt engineering. J. Acad. Libr. 2023, 49, 102720. [Google Scholar] [CrossRef]
  8. Svendsen, A.; Garvey, B. An Outline for an Interrogative/Prompt Library to help improve output quality from Generative-AI Datasets (May 2023). Available online: https://ssrn.com/abstract=4495319 (accessed on 6 October 2024).
  9. Javaid, M.; Haleem, A.; Singh, R.P.; Suman, R. Artificial Intelligence Applications for Industry 4.0: A Literature-Based Study. J. Ind. Integr. Manag. 2022, 7, 83–111. [Google Scholar] [CrossRef]
  10. Hajkowicz, S.; Sanderson, C.; Karimi, S.; Bratanova, A.; Naughtin, C. Artificial intelligence adoption in the physical sciences, natural sciences, life sciences, social sciences and the arts and humanities: A bibliometric analysis of research publications from 1960-2021. Technol. Soc. 2023, 74, 102260. [Google Scholar] [CrossRef]
  11. Ronanki, K.; Cabrero-Daniel, B.; Horkoff, J.; Berger, C. Requirements engineering using generative AI: Prompts and prompting patterns. In Generative AI for Effective Software Development; Springer Nature: Cham, Switzerland, 2024; pp. 109–127. [Google Scholar] [CrossRef]
  12. Liu, P.; Yuan, W.; Fu, J.; Jiang, Z.; Hayashi, H.; Neubig, G. Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. ACM Comput. Surv. 2023, 55, 1–35. [Google Scholar] [CrossRef]
  13. Velásquez-Henao, J.D.; Franco-Cardona, C.J.; Cadavid-Higuita, L. Prompt Engineering: A methodology for optimizing interactions with AI-Language Models in the field of engineering. Dyna 2023, 90, 9–17. [Google Scholar] [CrossRef]
  14. Meskó, B. Prompt engineering as an important emerging skill for medical professionals: Tutorial. J. Med. Internet Res. 2023, 25, e50638. [Google Scholar] [CrossRef] [PubMed]
  15. Singhal, K.; Azizi, S.; Tu, T.; Mahdavi, S.S.; Wei, J.; Chung, H.W.; Tanwani, A.; Cole-Lewis, H.; Pfohl, S.; Payne, P.; et al. Large language models encode clinical knowledge. Nature 2023, 620, 172–180. [Google Scholar] [CrossRef] [PubMed]
  16. Dang, H.; Mecke, L.; Lehmann, F.; Goller, S.; Buschek, D. How to prompt? Opportunities and challenges of zero-and few-shot learning for human-AI interaction in creative applications of generative models. arXiv 2022, arXiv:2209.01390. [Google Scholar] [CrossRef]
  17. Esplugas, M. The use of artificial intelligence (AI) to enhance academic communication, education and research: A balanced approach. J. Hand Surg. 2023, 48, 819–822. [Google Scholar] [CrossRef] [PubMed]
  18. Li, H.; Moon, J.T.; Purkayastha, S.; Celi, L.A.; Trivedi, H.; Gichoya, J.W. Ethics of large language models in medicine and medical research. Lancet Digit. Health 2023, 5, e333–e335. [Google Scholar] [CrossRef] [PubMed]
  19. Eke, D.O. ChatGPT and the rise of generative AI: Threat to academic integrity? J. Responsible Technol. 2023, 13, 100060. [Google Scholar] [CrossRef]
  20. Murugesan, S.; Cherukuri, A.K. The rise of generative Artificial Intelligence and its impact on education: The promises and perils. Computer 2023, 56, 116–121. [Google Scholar] [CrossRef]
  21. Xu, W.; Ouyang, F. A systematic review of AI role in the educational system based on a proposed conceptual framework. Educ. Inf. Technol. 2022, 27, 4195–4223. [Google Scholar] [CrossRef]
Figure 1. SPARRO Framework.
Table 1. SPARRO framework and its components.
Component | Description | Example from Data
Strategy | Develop a clear plan for how AI will be used in the assignment, including its role, limitations, and how it aligns with the objectives. | Students who clearly articulated how AI would assist in their assignments experienced less writer’s block and confusion. In the Health Service Administration course, students who outlined AI’s role had an easier time getting started and structuring their work; declaring how AI would be used was also a course requirement.
Prompt Design | Use the CRAFT model (Clarity, Rationale, Audience, Format, Tasks) to create effective prompts. Higher-level cognitive prompts (e.g., ‘evaluate’ or ‘analyze’) generate more useful content, and specifying the output format enhances relevance. | In Nursing Research Methods, students who used higher-level prompts (e.g., ‘evaluate this article’) received better AI-generated summaries. Specifying the desired output format, such as requesting a table or bullet points, resulted in more applicable responses.
Adopting | Integrate AI-generated content into the student’s work, ensuring alignment with their voice and the assignment’s requirements. Once a satisfactory draft is reached, the AI output should be transferred from the AI system into editing software. | In Nursing Leadership and Policy, a student modified an AI-generated policy analysis, incorporating critical thinking and aligning the content with her own research to meet the assignment’s objectives.
Reviewing | Critically assess AI-generated content for accuracy, relevance, and coherence. Verify the content with trusted sources to ensure reliability. | In the Healthcare Research course, students who fact-checked AI summaries against peer-reviewed articles identified inaccuracies such as hallucinations, which improved the quality and accuracy of their final submissions.
Refining | After verifying accuracy, iterate on AI-generated content by refining language, improving arguments, and adding personal insights. This step ensures the final submission meets academic standards. | In Advanced Clinical Decision Making, a student critically compared AI-generated case analyses with their own clinical reasoning. The human analysis highlighted gaps in the AI’s suggestions, leading to a more comprehensive and accurate final diagnosis.
Optimizing | Ensure academic integrity by using plagiarism checkers and reference verification tools to confirm the originality and credibility of the AI-assisted work. | In Research Methods, students who used plagiarism detection tools on AI-generated content identified potential issues with originality and refined their work to ensure proper citations and academic integrity.
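The six SPARRO stages form an ordered workflow. As an illustration only (the names and one-line stage summaries below are our paraphrase, not code from the article), the sequence can be sketched as a simple checklist:

```python
from enum import Enum


class SparroStage(Enum):
    """Hypothetical encoding of the six SPARRO stages, in order."""
    STRATEGY = "Declare how AI will be used and align it with assignment objectives"
    PROMPT_DESIGN = "Write prompts using the CRAFT model"
    ADOPTING = "Move satisfactory AI output into editing software, in the student's voice"
    REVIEWING = "Fact-check AI content against trusted sources for hallucinations"
    REFINING = "Iterate on language and arguments; add personal insight"
    OPTIMIZING = "Run plagiarism and reference checks to confirm integrity"


def sparro_checklist() -> list[str]:
    """Return the stages in definition order as a printable checklist (illustrative)."""
    return [
        f"{i}. {stage.name.title().replace('_', ' ')}: {stage.value}"
        for i, stage in enumerate(SparroStage, 1)
    ]


for line in sparro_checklist():
    print(line)
```

Encoding the stages as an ordered enumeration makes the point that SPARRO is sequential: reviewing and refining only make sense after a prompt has been designed and its output adopted.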
Table 2. CRAFT model and its components.
Component | Definition
Clarity | Be clear, specific, and unambiguous in your prompt to avoid multiple interpretations and set clear boundaries for AI capabilities.
Rationale | Specify the context or background for the prompt. This is the underlying rationale within which the prompt is expected to operate.
Audience | Consider the audience when crafting the prompt. The language, complexity, and tone should be tailored to the intended readership.
Format | Specify the desired output format (e.g., essay, list, table, flowchart) to tailor the AI’s response for immediate applicability.
Tasks | Break down the prompt into smaller, manageable sections, each addressing a specific aspect of the complex query.
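The CRAFT components can be assembled into a single prompt string. The sketch below is our own illustration under the assumption that each component maps to one labeled section of the prompt; the class, field names, and example values are hypothetical, not from the article:

```python
from dataclasses import dataclass


@dataclass
class CraftPrompt:
    """Illustrative container for the five CRAFT components (hypothetical helper)."""
    clarity: str      # the specific, unambiguous request
    rationale: str    # context or background for the task
    audience: str     # intended readership (language, tone, complexity)
    format: str       # desired output format (table, bullet list, essay, ...)
    tasks: list[str]  # the complex query broken into smaller sections

    def render(self) -> str:
        """Compose the components into one prompt, tasks as a numbered list."""
        steps = "\n".join(f"{i}. {t}" for i, t in enumerate(self.tasks, 1))
        return (
            f"{self.clarity}\n"
            f"Context: {self.rationale}\n"
            f"Audience: {self.audience}\n"
            f"Output format: {self.format}\n"
            f"Tasks:\n{steps}"
        )


prompt = CraftPrompt(
    clarity="Evaluate the attached article on nurse staffing ratios.",
    rationale="The evaluation supports a literature review in a nursing research methods course.",
    audience="Graduate nursing students familiar with basic statistics.",
    format="A bullet-point summary followed by a strengths/weaknesses table.",
    tasks=["Summarize the study design", "Assess the sample size", "Identify limitations"],
)
print(prompt.render())
```

Splitting the request into numbered tasks reflects the Tasks component: each sub-question gets its own line rather than being buried in one long sentence.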
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Olla, P.; Elliott, L.; Abumeeiz, M.; Mihelich, K.; Olson, J. Promptology: Enhancing Human–AI Interaction in Large Language Models. Information 2024, 15, 634. https://doi.org/10.3390/info15100634


