Article

Examining the Role of Generative AI in Enhancing Social Work Education: An Analysis of Curriculum and Assessment Design

by
Elizabeth Claire Reimer
Social Work, Faculty of Health, Southern Cross University, East Lismore, NSW 2480, Australia
Soc. Sci. 2024, 13(12), 648; https://doi.org/10.3390/socsci13120648
Submission received: 13 October 2024 / Revised: 19 November 2024 / Accepted: 27 November 2024 / Published: 29 November 2024
(This article belongs to the Special Issue Digital Intervention for Advancing Social Work and Welfare Education)

Abstract

Generative Artificial Intelligence (GAI) holds significant potential to advance the field of social work, yet it also brings considerable challenges and risks. Key concerns include the legal and ethical ramifications of GAI application, as well as its effects on the vital human connections inherent in social work. Nonetheless, educators in this field must ready their students for the evolving digital environment, ensuring they are adept at employing GAI thoughtfully, skillfully, and ethically. This article explores the integration of GAI knowledge and skills within educational settings. It features a case study detailing the author’s redesign of assignments within an undergraduate community welfare/social work course in Queensland, Australia, to include GAI. The discussion extends to curriculum and assessment development processes aimed at leveraging GAI to enhance student learning, knowledge retention, and confidence in applying GAI within their academic and professional pursuits. Furthermore, the article examines the implications for curriculum and assessment design, emphasizing the importance of clear learning objectives, the creation of specific, intricate, and contextualized assessments, the necessity for students to critically evaluate GAI outputs, and the challenge of presenting GAI with tasks beyond its capabilities.

1. Introduction

Artificial intelligence is being rapidly integrated into research, education and practical methodologies within the field of social work (Lehtiniemi 2023; Haider 2024; Singer et al. 2023; Ioakimidis and Maglajlic 2023). Generative AI, often referred to as GAI, is a type of artificial intelligence that utilizes machine learning algorithms to generate new content across various formats, including text, audio, images, code, simulations, and videos (Bearman and Luckin 2020; Mao et al. 2024). It relies on large language models (LLMs) pre-trained on extensive text data, which learn grammar, vocabulary, and other linguistic elements to produce coherent and contextually relevant human-like content in response to complex prompts (Salinas-Navarro et al. 2024; Ogunleye et al. 2024). This form of artificial intelligence is distinguished by its ability to learn and adapt its behaviour based on new information, leveraging advanced machine learning techniques to improve performance with experience. GAI has proven to be a transformative tool in various domains, significantly enhancing efficiency and effectiveness in tasks traditionally performed by humans (Bearman and Luckin 2020; Mao et al. 2024; Haider 2024; Moorhouse et al. 2023).
The vast potential of GAI in social work research and practice is supported by recent editorials in disciplinary journals (Ioakimidis and Maglajlic 2023; Scheyett 2023; Singer et al. 2023). This potential suggests a pressing need for social work professionals to proactively engage with and leverage GAI technologies to stay at the forefront of their field (Haider 2024; Ioakimidis and Maglajlic 2023; Singer et al. 2023; Scheyett 2023). However, although the early pioneers of social work practice were known as early adopters of emergent technologies, social work professionals practicing today have been found to resist emerging digital technologies such as GAI (Haider 2024). This is concerning, considering some argue that the inevitable integration of GAI into social and professional life renders resistance pointless (Haider 2024; Ioakimidis and Maglajlic 2023; Singer et al. 2023). A particular concern is that refusal to engage will likely leave social work professionals unprepared to engage critically in a world influenced by GAI, leading to social policy decisions that reproduce exploitation, disempowerment, inequity and injustice.
With these concerns in mind, social work educators have a responsibility to teach social work students to become literate in digital tools such as GAI, and to promote socially beneficial ways of using new technologies (Haider 2024). Social work educators also need to teach students about the uncertainty, paradox and ambiguity that exist in the work they will be doing, and assess how well they demonstrate learning (Haider 2024). This involves teaching students that they need to understand enough about a topic to accurately and effectively prompt GAI for answers, using it critically (Haider 2024) and ethically (Victor et al. 2023; Haider 2024). However, there is not yet much critical digital literacy scholarship on the use of GAI in social work (Haider 2024). In line with the focus of this Special Issue on developing knowledge and practice skills in classroom settings, this paper aims to discuss and provide practical ideas on how to incorporate GAI into social work education experiences, specifically curricula and assessments.

Assessment of Learning

The emergence of GAI, while creating new possibilities for learning and teaching, has exacerbated existing assessment challenges within higher education (Liu 2023; Moorhouse et al. 2023; Ogunleye et al. 2024; Salinas-Navarro et al. 2024; Smolansky et al. 2023). Because assessment is the means by which teaching academics measure student learning performance, GAI has raised serious concerns about the integrity of student learning in higher education (Mao et al. 2024; Liu 2023; Moorhouse et al. 2023). For example, GAI can easily complete written assessments undertaken out of sight of instructors, producing work that is undetectable as machine generated (Liu 2023). Hence, integrating GAI in education presents a significant challenge, especially in the realm of authentic assessment design (Thanh et al. 2023; Bridgeman et al. 2023; Liu 2023; Moorhouse et al. 2023). A comprehensive re-evaluation of the nature and formulation of assessments is required (Thanh et al. 2023; Liu 2023).
To achieve this, some have proposed rethinking assessment in higher education to integrate more process- and performance-based components (Mao et al. 2024; Liu 2023; Ogunleye et al. 2024). For example, academics at the University of Sydney have developed a ‘two-lane approach’ to assessment in the age of GAI (Liu and Bridgeman 2023). Lane One consists of assessments that are secure from AI misuse, as they are completed in situ under supervision. Lane Two includes assessments that are more vulnerable to GAI use, allowing students to easily utilise GAI to complete tasks (Liu and Bridgeman 2023). The purpose of Lane Two is to teach students how to use GAI effectively and ethically, preparing them for professional practice. Impending in-situ assessments in Lane One serve as a deterrent, motivating students to use AI as a learning tool rather than a means to complete assessments dishonestly. This program-wide approach enables academics to scaffold GAI usage in a way that ensures students are ready to use GAI in their professional lives without compromising learning integrity (Bridgeman et al. 2023; Liu and Bridgeman 2023).
However, while face-to-face and viva-type assessments (a Lane One form of assessment) are commonly used throughout the field education components of social work degrees, they may not be realistic for social work courses seeking authentic assessments, because social work professionals rely heavily on text-based skills. Social work practice requires many text-based tasks, such as report writing, evaluating programs, lobbying/advocacy submissions to government, developing arguments for funding, writing explanations for service participants, journaling critical self-reflection, and preparing written documents for bureaucratic and court requirements. Hence, social workers graduating from tertiary education need to know how to write and form critical arguments in multiple written formats.
This article draws on ideas from the scholarship of teaching and learning (SOTL), which involves systematic inquiry into teaching, informed by scholarship (Kreber 1999; Kreber and Cranton 2000). The article has been developed as a meta-reflection (Thorpe and Garside 2017; Reimer and Whitaker 2019; Humphrey 2009) on the author’s teaching practices coupled with incorporating relevant ideas from scholarly literature in order to inform and develop academic practice. Furthermore, consistent with ideas from the scholarship of teaching and learning, this article is reflective and conceptual rather than empirical (Kreber 1999; Kreber and Cranton 2000).
The aim of this paper is to help tertiary educators adapt their assessment approaches for a digitally evolving educational environment in which students use GAI to complete assessments. The paper draws on educational theory, specifically authentic assessment and Bloom’s taxonomy, and is framed as a meta-reflection on the author’s practice. It includes issues, implications, challenges, opportunities, benefits, and ethical considerations for educators. Furthermore, the paper provides strategies the author devised to teach bachelor-level social work students to understand and use GAI critically, in a way that preserves social work values, and to refocus assessments to assess students using this technology. Readers should note, however, that the practical implementation of these concepts has not yet undergone evaluation.

2. Materials and Methods

This study employs meta-reflection as its methodology to critically reflect on teaching practices, integrating Scholarship of Teaching and Learning (SoTL) ideas from transformative learning theory and relational views of education into designing text-based assessments for social work undergraduate programs. By engaging with current assessment tasks in my coursework and drawing on these SoTL concepts alongside experience- and research-based knowledge, I critically examined and reflected upon my teaching process and assessments. This involved focusing on understanding how Generative AI (GAI) approached these tasks, and critically analysing and redesigning them so that students could not easily complete them using GAI.

2.1. Meta-Reflection

Meta-reflection is a higher-order reflection on the nature of reflection itself (Zuber-Skerritt 2015; Humphrey 2009), in this case analysing not just teaching, but also how the author reflects on teaching. This analysis is complemented by an assessment of the intellectual tools employed in teaching, considering how theories, concepts, and ideas shape reflective processes and student learning experiences (Holdo 2022). Moreover, this meta-reflection is situated within a broader examination of the institutional context, analysing how the higher education setting influences teaching and reflective practices, considering both enabling and constraining factors (Holdo 2022, 2020).
The meta-reflection process begins with a thorough contextual analysis, examining the structural, institutional, and social conditions that influence teaching and learning processes (Wacquant and Bourdieu 1992; Holdo 2020). The data collected through these processes are synthesized and interpreted through the lens of transformative learning theory and Dewey’s relational view. This synthesis aims to provide a comprehensive understanding of teaching practices, the factors that influence them, and the nature of reflection on these practices. By integrating various theoretical perspectives and analytical approaches, this methodology provides a robust framework for reflecting on and improving teaching practices in higher education settings. Applying these ideas to my teaching practice involved critically analysing the assessments I had developed, along with how I taught the content which students applied to the assessments. Initially, I used GAI to complete my assessment tasks, which I assessed using the rubrics. I then learned about how others are addressing GAI in both my institution and the tertiary education sector more broadly, critically analysing my practices in light of what others were saying about GAI in the context of assessments. As discussed below, I applied these findings to rewrite my assessments and reconsider my approach to teaching students to learn how to use GAI in professional practice.

2.2. Scholarship of Teaching and Learning (SoTL)

The approach, which is embedded in SoTL, synthesizes multiple theoretical frameworks and analytical techniques to provide a comprehensive examination of the teaching process. The approach to SoTL adopted here draws on reflection on teaching practice influenced by Mezirow’s transformative learning theory (Mezirow 1990, 1997), Schön’s ideas on reflection-in-action and reflection-on-action (Schön 1983), and Dewey’s theory of relational reflection (Dewey 2008). Reflection in SoTL, which can be informed by experience-based knowledge, research-based knowledge, or both, is concerned with the construction of knowledge (Kreber 1999; Kreber and Cranton 2000). These frameworks emphasise the practical application of reflections to teaching practice, underscoring Dewey’s notion of learning through practical engagement with objects of knowledge.

2.3. Authentic Assessment

Regarding the notion of authentic assessment, the methodology emphasizes the integrated application of knowledge and skills in practical, real-world situations, incorporating social, cognitive, and reflective processes of learning (Brill and Park 2008). This approach aims to enhance higher-order thinking skills, foster transferable skills in contextualized settings, and boost student motivation and overall learning engagement (Ashford-Rowe et al. 2014; Villarroel et al. 2018). By focusing on authentic assessment, the author aims to bridge the gap between academic learning and real-world challenges, aligning education with future career expectations (Neely and Tucker 2012). This methodology not only assesses student learning, but also prepares them for the complexities of professional life, fostering critical thinking, problem-solving, innovation, and creativity (Wiewiora and Kowalkiewicz 2019). Practical application of these ideas involved ensuring that new assessments emphasise the integrated application of knowledge and skills in real-world professional social work situations, including the rising expectation that graduates will be able to use GAI in professional contexts. By aligning assessments with the current capabilities of GAI and these career expectations, I attempted to prepare students for professional complexities, fostering critical thinking, problem-solving, innovation, and creativity.

2.4. Bloom’s Taxonomy

The meta-reflection employs Bloom’s Taxonomy (Bloom et al. 1956) as a framework to evaluate the capabilities of GAI tools in addressing various cognitive levels. Previous research has established that GAI excels at lower-order cognitive tasks, such as remembering facts and understanding basic concepts (Thanh et al. 2023). However, the ability of GAI to perform higher-order tasks requiring creativity, critical thinking, and problem-solving remains a subject of investigation. By aligning GAI capabilities with Bloom’s hierarchical structure, this meta-reflection provides insights into the strengths and limitations of these tools in educational contexts. The author’s undergraduate social work course assessments were evaluated against this framework and are outlined below, where the author is referred to using first-person pronouns.

2.5. Method

The meta-reflection was initiated in response to the November 2022 launch of ChatGPT 3.5 (OpenAI 2022), which garnered significant media attention. My interest was piqued by the widespread public reaction and concerns, particularly regarding the potential for students to utilize GAI in academic assessments. Recognizing the projected impact of GAI on the social work profession, I identified a need to prepare students for its responsible and ethical use in the undergraduate learning contexts with which I was involved.
To assess the capabilities of GAI in an educational setting, during December 2022, I conducted a series of tests on assessment tasks I had designed for undergraduate units related to foundation studies for social work/welfare (WELF1001), community development (WELF1002), youth work (WELF2002), and child protection practice (WELF3002). This involved inputting current course assessments into ChatGPT 3.5 (OpenAI 2022) to evaluate both the ease with which GAI could complete these tasks and the quality of the outputs. The results were concerning, as ChatGPT 3.5 (OpenAI 2022) consistently produced responses that met passing criteria, with some outputs achieving credit-level grades.
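The testing itself was conducted through the ChatGPT web interface. For readers wishing to repeat this kind of audit at scale, a minimal sketch using the OpenAI Python SDK might look like the following; the model name, placeholder task text, and file layout are illustrative assumptions rather than the tool or wording used here.

```python
# A minimal sketch of batch-testing assessment briefs against an LLM.
# Not the method used in this study (which was the ChatGPT web interface);
# the model name, task text, and output layout are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

assessment_tasks = {
    "WELF1001": "<paste the foundation studies assessment brief here>",
    "WELF2002": "<paste the youth work assessment brief here>",
}

for unit, task in assessment_tasks.items():
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed stand-in for ChatGPT 3.5
        messages=[
            {"role": "system",
             "content": "Answer as an undergraduate student would."},
            {"role": "user", "content": task},
        ],
    )
    # Save each output so it can be graded by hand against the unit rubric.
    with open(f"{unit}_gai_response.txt", "w", encoding="utf-8") as f:
        f.write(response.choices[0].message.content)
```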
These exploratory activities and the resultant concerns have since been corroborated by emerging scholarly literature (Liu 2023; Lodge et al. 2023; Thanh et al. 2023). This has validated my approach and concerns, and underscores the relevance of this exploratory investigation in the broader academic discourse on GAI in education. With this in mind, the meta-reflection is founded on learning that emerged from my direct experience of paying critical attention to the assessments in my units and realising they were not GAI-proof. To advance the meta-reflection, I asked the following questions: How susceptible are my assessments to being completed to a satisfactory level by GAI? Where GAI can complete an assessment to a satisfactory level, what changes do I need to make to GAI-proof it?

3. Results

Overall, my findings support other research indicating that GAI is a good complementary tool for research and writing, having demonstrated competent remembering, application, synthesis, summarising, reflection, and classification of general information (Victor et al. 2023; Thanh et al. 2023). However, similar to other research, through this meta-reflection I found GAI to be less competent at certain tasks. I also found that potential misuse of GAI undermines the core purpose of assessments in higher education. Because GAI can produce high-quality, genuine-looking responses, students can easily submit AI-generated work as their own. Misuse of GAI makes it challenging to assess how much students have engaged with the learning materials and to determine their understanding and competency (Liu 2023; Mao et al. 2024; Moorhouse et al. 2023; Smolansky et al. 2023; Souppez et al. 2023). This presents a significant challenge to educators in understanding how much students have engaged with curricula, the depth of their learning (Bridgeman et al. 2023; Liu 2023; Souppez et al. 2023), and academic integrity (Lodge et al. 2023; Souppez et al. 2023). Together, these facets of GAI misuse undermine the integrity of higher education.
Revealingly, GAI was unable to provide attention to detail and coherent thought patterns when an assessment asked for synthesis and complex thinking, for example demonstrating understanding of professional concepts covered in the course materials, or assessment tasks focused on a specific real-life program or context. This finding, related to a GAI limitation in addressing specific information, has since been supported by others (Bearman and Luckin 2020; Victor et al. 2023; Thanh et al. 2023; Ogunleye et al. 2024). Furthermore, students are taught to demonstrate critical thinking by presenting arguments in which they identify, test, and resolve theoretical and practical dilemmas, integrating multiple perspectives and using evidence to form, justify, and articulate those arguments. They are graded on how they demonstrate synthesis, deep reflection and innovative thinking on the curricula, develop and test hypotheses, critique literature, and apply transferable understanding of unit content. While GAI gave a balanced account of aspects of a critical argument, its responses tended to present oversimplified, binary arguments without adequate supporting evidence, and primarily summarised literature rather than developing an integrated analysis. GAI responses also included minor factual errors, and failed to substantiate arguments with evidence or explore the nuanced complexities of the topic. This aligns with other research suggesting that GAI does not outline its reasoning process, making it impossible to hold its logic accountable (Victor et al. 2023), and that GAI is not yet good at specific information retrieval, such as locating accurate sources, current news updates and real-world examples (Victor et al. 2023; Thanh et al. 2023). This is, in part, because the data sourced are bounded by what is already available across the internet and not behind paywalls, such as news media and empirical data in academic journals that are not open access (Victor et al. 2024), although new AI programs are being developed to link to scholarly sources (Victor et al. 2024).
Furthermore, through my meta-reflection I found many inaccuracies in the responses GAI gave, which others have termed “hallucination” (Maleki et al. 2024). These include issues such as GAI not attributing sources, not attributing them correctly, or misattributing the knowledge it uses. However, when planning to future-proof assessments, this finding needs to be considered in light of the fact that GAI trained on specific professional content, together with retrieval-augmented generation (RAG) processes, is reducing this risk (Victor et al. 2024; Thanh et al. 2023; Ogunleye et al. 2024).
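To make the RAG idea concrete, the sketch below illustrates the pattern in miniature: retrieve source passages first, then instruct the model to answer only from those sources so attributions can be checked. The toy corpus and word-overlap scoring are assumptions standing in for the document stores and embedding search a production RAG pipeline would use.

```python
# A minimal, illustrative sketch of retrieval-augmented generation (RAG):
# ground the model's answer in retrieved, attributable passages to reduce
# hallucinated or misattributed sources. Corpus and scoring are toy stand-ins.

CORPUS = {
    "Unit reader, week 3": "Critical reflection involves examining the "
                           "assumptions underlying one's own practice.",
    "AASW Code of Ethics": "Social workers will practise in accordance "
                           "with the values of the profession.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank passages by naive word overlap with the query (a stand-in
    for the embedding-based search a real RAG pipeline would use)."""
    words = set(query.lower().split())
    ranked = sorted(
        CORPUS.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def grounded_prompt(query: str) -> str:
    """Prepend retrieved passages, labelled by source, so the model is
    asked to cite rather than invent attributions."""
    context = "\n".join(f"[{src}] {text}" for src, text in retrieve(query))
    return ("Answer using ONLY the sources below, citing them in brackets.\n"
            f"{context}\n\nQuestion: {query}")

print(grounded_prompt("What does critical reflection involve?"))
```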
Additionally, my meta-reflection found, similarly to other research, that GAI detection tools are often inaccurate (Liu 2023; Mao et al. 2024; Bearman and Luckin 2020). Others have explained this through research findings suggesting that GAI detection exhibits a bias towards native English speakers, meaning that individuals using non-conventional English syntax and grammar are frequently mistaken for students using GAI (Liu 2023; Mao et al. 2024; Bearman and Luckin 2020), and that detection tools are easy to deceive; for example, one can use GAI to generate content and then make minor edits to bypass detection (Liu 2023; Mao et al. 2024; Bearman and Luckin 2020).
Students’ reliance on GAI, which is trained primarily on Western literature, could also lead to inaccurate, incomplete and skewed understandings of history and culture. Lack of representation in the training data could perpetuate a cycle of misinformation, marginalisation and exclusion of First Nations communities and their perspectives. Hence, I critically examined my coursework to further develop learning activities that facilitate critical discourse around systemic bias, colonisation, and labelling theory. This was to help students develop an understanding of the bias within GAI towards Anglo-centric linguistic structures and epistemologies. This critical framework was embedded within the units. For example, in two units, in-class and workbook activities and assessment tasks required students to analyse GAI’s social construction of Australian youth identity, and to integrate Indigenous knowledge systems into community development practice.
Finally, my meta-reflection revealed the potential for GAI to exacerbate educational disadvantage, inequity and unfairness through inequitable access to GAI tools and varying levels of student digital literacy. These must be carefully considered in assessment design, as they can affect student learning and performance. While examining my assessments, it became apparent that I could not access more advanced versions of GAI without paying for them. While this is becoming less of an issue, better-quality GAI tools are mostly still subscription-based, meaning access to the best GAI tools will remain inequitable. Others have since articulated similar concerns (Liu 2023; Lodge et al. 2023; Moorhouse et al. 2023). Additionally, I learned that not everyone wants to use GAI, and that many individuals have valid reasons for not wanting to engage with it. Such a position is related to worldviews about social work practice and values, concerns about online exposure and, as Moorhouse et al. (2023) have also noted, valid ethical concerns people hold about intellectual property, privacy, integrity and confidentiality.
The following section discusses these findings in light of wider research on GAI in tertiary education, and provides the evidence-informed strategies I developed to mitigate these issues in assessments.

4. Discussion and Application

The integration of GAI in educational assessment presents both opportunities and challenges for educators. The following discussion explores the multifaceted implications of GAI for assessment design, focusing on key areas for applying what I learned: prioritisation, contextual relevance, critical thinking and reflection, communication, how rubrics can support learning integrity, and equity. It examines how educators can leverage the strengths of GAI while mitigating its limitations to create more effective, equitable, and meaningful assessments in higher education.

4.1. Students’ Application of Knowledge

Students’ application of knowledge is a critical aspect of education, requiring an approach that maintains academic integrity and ensures that students genuinely meet the required standards. GAI can be used as a collaborative tool to help students with the initial stages of researching, organising ideas, and structuring their writing (Liu and Bridgeman 2023; Souppez et al. 2023). Liu (2023) emphasizes the need to balance the use of GAI with the assurance of learning outcomes, advocating for assessments that ensure student attainment of learning objectives. By leveraging AI’s capabilities, educators can enhance the assessment process, ensure the meaningful application of knowledge, and prepare students with critical skills for the future. This integrated approach not only improves efficiency, but also fosters a deeper, more authentic engagement with knowledge, ultimately benefiting both students and educators. However, integrating GAI requires a shift in thinking about assessing students’ learning: re-prioritising what to assess and contextualising assessment tasks within authentic applications, in particular relevant professional experiences.

4.1.1. Priorities

During my meta-reflection, I realised that it is essential to carefully consider the assessment priorities when designing assessments for a GAI-era. This includes re-prioritising what I worried about assessing, allowing GAI to become a collaborator with students in the early stage of completing their assessments. Souppez et al. (2023) have since agreed, highlighting the role of AI in early-idea generation, which can accelerate the initial stages of coursework and assessment completion. Doing so supports students’ development of professional skills in using GAI, while understanding its limitations. It also allows for more complex assessments earlier in the academic journey, because it frees up time so instructors can set assessments on the more advanced and in-depth aspects of their studies that GAI cannot yet manage.
Others have since argued that the advent of GAI necessitates a re-evaluation of assessment priorities, shifting the focus towards evaluating complex elements of student learning rather than outcomes easily replicated by GAI (Liu and Bridgeman 2023; Lodge et al. 2023). This focus ensures that students engage in genuine intellectual work, applying theories of knowledge to particular domains, rather than merely repeating them.
To get to this point, I needed to carefully and critically think through, and define, what I aimed to evaluate, and to identify aspects that are less critical. Drawing on learning theory related to summative and formative assessment, I recognised that GAI can complicate summative assessment by obscuring students’ contribution to the final product, so a combination of summative and formative assessments may be necessary. Summative assessment focuses on evaluating the final product or outcome of learning, providing a measure of what students have achieved at the end of an instructional period (Wakeford 2009). In contrast, formative assessment involves ongoing feedback throughout the learning process, helping to guide and improve student learning as it happens (Wakeford 2009). Incorporating assessment of the iterative process leading to the outcome—the “working out”—helps ensure a comprehensive evaluation of student learning. Such ideas on process-focused assessments were later supported by others (Liu and Bridgeman 2023; Lodge et al. 2023; Moorhouse et al. 2023; Smolansky et al. 2023). The opacity of AI learning processes further underscores the importance of evaluating the learning process, rather than just the final product, because it is currently virtually impossible to distinguish human- from GAI-generated work (Liu and Bridgeman 2023). I communicated these ideas and expectations to students in my units during recorded workshops. This included instructing students in how to complete the assessment, advising that markers would assess their prompts for synthesis and critical analysis of unit content related to the assessment task, and requesting that they include the GAI chatbot responses as an appendix to their assignment.
By focusing on the process, educators can better understand and support students’ development of critical thinking, judgement, and reflective skills, ensuring that learning remains meaningful and authentic in an AI-enhanced educational landscape.

4.1.2. Highly Contextual

As I explored new ideas for GAI-proof assessments, I realised that GAI’s propensity to generalise knowledge could be an exploitable limitation. Others have since found this approach useful, suggesting that redesigning assessment tasks to incorporate contextual elements, particularly knowledge that is highly contextualized and specific to professional practice, can expose where students have used GAI (Moorhouse et al. 2023; Liu and Bridgeman 2023; Ogunleye et al. 2024; Lodge et al. 2023). This might include requiring students to discuss services available in their local community, or to evaluate a local service in light of relevant theories or practice concepts. Moorhouse et al. (2023) further suggest it is useful to ask real questions that stem from current activities and debates within the discipline. These ideas involve making assessments meaningful and challenging, and promoting deeper engagement and learning.
Guided by these ideas, and drawing on a recent collaboration with a locally based community development program, I trialled an assessment task idea with GAI for the Community Development course I facilitate. The GAI tool could not integrate data about a specific activity with the explicit content of the course and a particular theoretical framework to a passable standard. Figure 1 provides an example I developed for WELF1001.
Furthermore, social work practice is inherently ambiguous and uncertain, and often contains contradictions. Therefore, it is crucial to develop assessments that authentically test students’ ability to learn and navigate through uncertainty, ambiguity, and paradox. To achieve this, I utilised Claude3 (Anthropic 2023) to create a case study for a youth worker that involves these elements of uncertainty, ambiguity, and paradox. I prompted Claude3 (Anthropic 2023) as follows (Figure 2):
The students’ task was to develop an action plan for working with the young person in the GAI-generated case study that aligns with what we have discussed in the unit about youth work ethics and Australia’s Youth Policy Framework. The prompts were specific and contextualized to the unit materials, ensuring that the plan closely mirrors real-world scenarios that social workers may encounter and remains relevant to what the students were learning. Furthermore, using GAI to create a case study with the same aim and purpose, but with new variables and details for each new offering of the course/unit of study, helps reduce potential for students to plagiarise past students’ work in a way that is time- and energy-efficient for educators.
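Regenerating case-study variables for each offering can itself be scripted. The sketch below is a hypothetical illustration of that idea, not my actual workflow: a small prompt template in which details vary each time, with the output pasted into a GAI tool to produce a fresh case study.

```python
# A hypothetical sketch of varying case-study details per course offering,
# so past students' answers cannot be recycled. The variable lists and
# wording are illustrative assumptions, not the unit's actual materials.
import random

AGES = [14, 15, 16, 17]
SETTINGS = ["regional town", "outer suburb", "remote community"]
PRESENTING_ISSUES = ["school refusal", "family conflict", "housing instability"]

def case_study_prompt() -> str:
    """Build a fresh GAI prompt with randomly varied case details."""
    return (
        f"Write a one-page case study of a {random.choice(AGES)}-year-old "
        f"young person in a {random.choice(SETTINGS)}, presenting with "
        f"{random.choice(PRESENTING_ISSUES)}. Include elements of "
        "uncertainty, ambiguity, and paradox relevant to youth work ethics."
    )

print(case_study_prompt())  # paste into the GAI tool for each new offering
```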
This approach not only challenges students to more deeply consider the relationship between theory and practice, but ensures assessments are authentic with respect to the types of activities and professional experiences students need to consider as preparation for real-world professional practice. By fostering higher-order thinking and transferable skills, authentic assessment seeks to align academic learning with professional expectations, thereby preparing students for the complexities of their future careers.

4.2. Layered

Another key learning from my meta-reflection, supported later by others, was that GAI struggles to answer assessment tasks with multiple steps, layers or stages, so it is essential to develop multi-layered assessments (Moorhouse et al. 2023; Liu and Bridgeman 2023). Multi-step assessments can be a valuable tool in mitigating the risks of GAI misuse, promoting higher-order thinking skills, and allowing educators to analyse students’ metacognition and depth of understanding of the subject matter. This includes assessments that focus on the process of learning rather than on the final product, as this allows educators to evaluate students’ understanding and skills at various stages of the learning process, making it more difficult for students to rely solely on GAI for completion (Liu 2023; Lodge et al. 2023). Examples of process assessment include requiring students to provide evidence of their engagement with the material and to demonstrate their critical thinking process (by submitting drafts, outlines, or research notes), and designing assessments where students reflect on their learning process and explain their decisions, challenges, and insights (Liu 2023; Thanh et al. 2023). Educators can assess these alongside the final product. Other multi-layered assessment types include those that integrate GAI into the assessment process, for example, asking students to critique GAI-generated content and/or build upon GAI outputs and provide the GAI-generated material they started with. Educators can then analyse how students have engaged with the material and applied critical thinking and understanding (Moorhouse et al. 2023; Thanh et al. 2023; Liu 2023).
Examples I have used in my coursework include tasking students in tutorials with critically analysing a response generated by GAI based on their prompts and, in the context of child protection coursework, as highlighted by Lehtiniemi (2023), providing an in-class activity where students use AI to generate a case study based on child protection risk and protective factors they have learned, which they then work in groups to critically analyse, using research evidence to support their analysis. Another example of a layered approach in assessments involves students being asked to “chat” with AI around an issue or concept and then using concepts from the unit of study to critically analyse the exchange (see Figure 3).
Linking with ideas related to assessing the process rather than the product, marking/grading layered assessable tasks could involve assessing the extent to which students demonstrate understanding of specific concepts, as defined in the course, through the kinds of items in their chat with the AI program, while another criterion could assess the extent to which students demonstrate their understanding of unit concepts and critical analysis skills through their evaluation of the AI-generated case study. Evaluation criteria could include how well students identified relevant factors and their underlying assumptions. Each of these covers the aspects of students’ application of knowledge outlined earlier.

4.3. Critical Thinking and Reflection

Another useful realisation during my meta-reflection occurred when I asked GAI to complete a critical self-reflection assessment I had already set, and found GAI could not adequately complete the task. As others have noted, GAI faces significant challenges in addressing knowledge that requires creative and critical thinking and reflection, evaluation and judgement, in particular within specific and complex contexts, and related to ethical considerations (Thanh et al. 2023; Bearman and Luckin 2020; Moorhouse et al. 2023; Lodge et al. 2023; Ogunleye et al. 2024). Furthermore, GAI faces significant challenges in addressing knowledge that requires critical reflection, in particular dimensions such as introspection and self-assessment (Bearman and Luckin 2020; Moorhouse et al. 2023; Souppez et al. 2023). Moorhouse et al. (2023) note that AI-generated work may not accurately reflect students’ comprehension or critical thinking capabilities, creating challenges for educators in identifying what to focus on in class. Moreover, misuse of GAI potentially impedes the development of higher-order thinking skills, leading to diminished student ability to demonstrate advanced cognitive processes (Thanh et al. 2023; Ogunleye et al. 2024). Furthermore, students’ overreliance on GAI for information critique and self-reflection can result in unchallenged acceptance of potentially biased, inaccurate, or irrelevant outputs (Mao et al. 2024), and can constrain students’ development of creative thinking, critical analysis, and autonomous judgment formation (Haider 2024). Additionally, relying on GAI to complete assignments diminishes the authenticity and professional relevance of assessment tasks (Mao et al. 2024; Thanh et al. 2023). All of these potentially compromise students’ preparation for real-world practice and undermine the integrity of higher education courses.
Limitations of GAI in critical thinking, judgement and reflection highlight the need for assessments that cultivate these capacities (Liu and Bridgeman 2023; Moorhouse et al. 2023) and necessitate carefully designed assessments that emphasize human-centric skills and contextual understanding. One way to achieve this is to integrate GAI in ways that enhance, rather than replace, human judgement, and to support students to experience the complexities of professional practice. Furthermore, drawing on previous ideas, such assessments can involve students being tasked to ask real questions from current debates and encouraging evidence-based speculation, with markers focusing on assessing deep and complex intellectual engagement. For example, I developed the following assessment (Figure 4), in which students are required to produce a GAI-generated finding, which they must then critically analyse and evaluate, an idea later supported by others (Ogunleye et al. 2024; Liu and Bridgeman 2023). This process not only enhances their critical thinking skills, but also highlights the limitations of GAI in producing high-quality, contextually relevant content (Ogunleye et al. 2024; Liu and Bridgeman 2023).
Such assessments help students internalize their learning experiences, making connections between theoretical knowledge and practical application, and requiring students to apply knowledge in real-world contexts, reflecting on their experiences and learning processes (Ogunleye et al. 2024; Moorhouse et al. 2023). This is particularly important for tertiary education in social work, as critical thinking and reflection is essential for professional practice, emotional and psychological safety in practice, and personal development.

4.4. Communication

Another dimension of GAI-proofing assessments involves the ways we communicate about GAI use. Higher education institutions should foster open communication and dialogue between educators and students about the use of GAI in assessments (Moorhouse et al. 2023). This includes providing instruction on the limitations and potential biases of GAI tools (Haider 2024; Mao et al. 2024), potential data privacy and security concerns (Haider 2024), and how to acknowledge and cite the use of GAI in their work (Moorhouse et al. 2023). It also involves providing clear guidelines that outline what constitutes plagiarism and the ethical considerations in the context of GAI, and emphasizing that submitting AI-generated work as one’s own is both unethical and a violation of academic integrity (Moorhouse et al. 2023).
During my meta-reflection on my assessments, the university in which I was employed was beginning to discuss the importance of clear communication regarding GAI usage guidelines to maintain student academic integrity and foster responsible GAI use. Again, this has since been supported by others, who have found that integrity in learning, including trustworthy assessments, requires clear boundaries on GAI usage through structured, well-communicated guidelines (Liu 2023; Bearman and Luckin 2020). As such, differentiating GAI use for learning versus assessment is crucial, with explicit communication on appropriate and ethical contexts for GAI application (Salinas-Navarro et al. 2024; Liu 2023). Additionally, responsible GAI use should prioritize human agency, learning effectiveness, and ethical considerations, ensuring GAI supports, rather than replaces, human intelligence and accountability (Salinas-Navarro et al. 2024; Liu 2023; Bearman and Luckin 2020). Instructors must communicate the role and boundaries of AI in assessments to foster learning and personal and academic integrity (Liu 2023; Bearman and Luckin 2020; Lodge et al. 2023). This involves providing rationales for assessments, emphasizing human contributions over GAI, clearly defining permissible GAI use, and ensuring students understand their accountability for distinguishing their work from that of the GAI agent they used. Figure 5 shows the wording I used to communicate these ideas to students across my units.

4.5. Rubrics

Ensuring academic integrity requires a comprehensive approach that integrates robust assessment design with effective evaluation and monitoring tools; managing GAI extends beyond the design of assessments to include the use of detection tools and rubrics. The methods and tools we use to evaluate assessments are as crucial as the design of the assessments themselves. One effective approach is the incorporation of GAI considerations into assessment rubrics. Moorhouse et al. (2023) also emphasise the need to understand and assess GAI use by integrating it into rubrics. This involves creating guidelines that explicitly address how GAI tools are used in the completion of assessments. By doing so, educators can ensure that students are not only using these tools ethically, but are also demonstrating their own understanding and skills.
My approach involves developing rubrics that identify poor thinking and research, similar to how we would address any other form of academic dishonesty. I operate under the assumption that every student uses GAI in some capacity, and thus my rubric needs to be sophisticated enough to account for this possibility. This ensures that the assessment process remains rigorous and fair, maintaining academic integrity. To address this, I incorporate critical analysis skills into the rubric, to ensure students can identify and critique known biases within GAI responses. Thanh et al. (2023) provide detailed guidance on crafting authentic assessment questions and developing rubrics tailored for GAI input. These rubrics should be designed to evaluate the quality of AI-generated content critically, ensuring that students engage with the material beyond mere replication. For instance, rubrics can include criteria that assess the student’s ability to refine and improve AI-generated outputs, demonstrating their critical thinking and problem-solving skills.
Furthermore, a significant shortfall of GAI, as noted by Thanh et al. (2023), is its inability to connect answers to real-world situations and provide practical examples. This limitation can be addressed within rubrics by setting a minimum pass level for knowledge and critical analysis that requires students to contextualize their responses. Rubrics should include specific criteria that evaluate the application of theoretical knowledge to practical scenarios, ensuring that students can bridge the gap between AI-generated content and real-world relevance.
To mitigate these issues, rubrics can be designed to give no or low marks to low-difficulty tasks that GAI can easily accomplish. It is important to be clear about what is being assessed. For example, if the goal is to assess learning about digital literacy, students might pass with basic tasks. However, if the assessment focuses on the critical analysis of digital platforms, which requires higher-order thinking related to digital literacy, the rubric needs to be adjusted accordingly, to ensure students demonstrate a deeper level of understanding.
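As a concrete, purely hypothetical illustration of this weighting principle, a rubric fragment might allocate marks as follows, with generic summarising (which GAI accomplishes easily) earning little and contextualised critique carrying most of the grade; the criteria and weights are invented for illustration.

```python
# A hypothetical rubric fragment illustrating the weighting principle
# described above: low marks for tasks GAI can easily do, higher weight
# for contextualised, critical work. Criteria and weights are invented.
RUBRIC_WEIGHTS = {
    "Summarises general concepts accurately":           0.05,
    "Applies concepts to the named local service":      0.30,
    "Critiques the GAI output, identifying its biases": 0.35,
    "Substantiates the argument with cited evidence":   0.30,
}

# Sanity check: the criterion weights should sum to the whole grade.
assert abs(sum(RUBRIC_WEIGHTS.values()) - 1.0) < 1e-9
```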

4.6. Equity

Finally, while GAI offers significant potential for enhancing educational assessments, it also poses risks of exacerbating disadvantage and inequity. Ensuring fair and inclusive assessment practices requires careful consideration of access disparities, ethical concerns, and the design of assessment tasks that emphasize creativity and critical thinking. We need to account for equity factors in our assessments, ensuring that they do not prohibit students from researching and completing their work in traditional ways. To mitigate equity issues, Moorhouse et al. (2023) suggest that instructors redesign assessment tasks to focus on creativity, critical thinking, and authentic assessments. An educator’s role includes teaching students about GAI, modelling its critical and ethical use, and giving students safe and guided opportunities to practice using GAI themselves. For example, if students need to generate a prompt to critically analyse, I provide a prompt that I have generated (Figure 6).
Furthermore, not everyone intuitively knows how to create effective prompts. It is essential to teach students how to write prompts that are specific and contextualized. Students also need to learn about follow-up prompts, or nudges, for when GAI generates a response. Techniques I have demonstrated in tutorials include the following (see the sketch after this list):
  • Providing more context in the prompts yields more useful results, such as specifying “academic” and “scholarly” to refine the output.
  • Concepts like “give more detail on”, “expand”, “what is different”, “explain”, “discuss tensions and conflicts”, and “provide an example that demonstrates this” are useful.
  • Analysis words such as “break down”, “explore”, and “examine the underlying elements/factors/causes” can elevate the conversation with GAI beyond the basics.
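A minimal sketch of this nudging pattern, using the OpenAI Python SDK and keeping the full chat history so each follow-up builds on the last response, is shown below; the model name and prompt wording are illustrative assumptions, not the exact tutorial material.

```python
# A minimal sketch of the follow-up "nudge" pattern described above:
# retain the chat history and append refining prompts one at a time.
# Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def ask(history: list[dict]) -> str:
    """Send the running conversation and append the model's reply."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed; any chat model works
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

history = [{
    "role": "user",
    "content": "Give a scholarly, academic overview of labelling theory "
               "as it applies to Australian youth work.",
}]
ask(history)  # initial, specific and contextualized prompt

# Follow-up nudges that elevate the conversation beyond the basics.
for nudge in [
    "Expand on the underlying assumptions of labelling theory.",
    "Discuss tensions and conflicts with strengths-based practice.",
    "Provide an example that demonstrates this in a youth work setting.",
]:
    history.append({"role": "user", "content": nudge})
    print(ask(history))
```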

5. Conclusions

In conclusion, educators in higher education can maintain course and assessment integrity in the age of GAI by implementing a multi-faceted approach that emphasizes authentic assessment through critical analysis of real-world applications and personal reflection, while shifting the focus to assessing learning processes. Assessment strategies should deliberately incorporate GAI as a learning tool, while requiring students to critically evaluate and refine AI-generated content, thereby developing essential GAI literacy alongside traditional academic skills. Maintaining open communication about GAI use and expectations, while avoiding over-reliance on GAI detection tools, creates an environment that promotes academic integrity and deeper understanding of the role of GAI in education and professional practice.
The key learning from my meta-reflection on my assessment tasks was to approach the issue as I would securing my home from intruders: just as we protect our homes from thieves with multiple layers of protection, we need multiple checkpoints throughout the assessment process to discourage the inappropriate use of GAI. To maintain academic integrity, it is essential to limit the use of GAI and to catch potential misuse at multiple points in the assessment process. While the gold standard for tertiary education might become a “two-lane” approach, this is difficult for individual educators to achieve. In education contexts where institutional systems have not yet caught up with the way students use GAI, and so have not initiated a “two-lane” approach, individual educators need to secure their assessments as best they can. While these ideas remain exploratory, and research is required to determine the extent to which they really do mitigate and catch GAI use, I hope they are useful for others in such a context. I hope that sharing my experience of critically evaluating, reflecting on, and redesigning my undergraduate social work course assessments for the GAI era motivates others to critique the ways they evaluate student performance and learning outcomes in higher education, to better accommodate the evolving needs of the digital age while maintaining a focus on deep, meaningful learning experiences that prepare students for their future professional lives, and to discuss these issues in the broadest context possible.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Anthropic. 2023. Claude (Mar 8 Version) [Large Language Model]. Available online: https://www.anthropic.com/ (accessed on 8 March 2023).
  2. Ashford-Rowe, Kevin, Janice Herrington, and Christine Brown. 2014. Establishing the critical elements that determine authentic assessment. Assessment & Evaluation in Higher Education 39: 205–22. [Google Scholar]
  3. Bearman, Margaret, and Rosemary Luckin. 2020. Preparing University Assessment for a World with AI: Tasks for Human Intelligence. In Re-Imagining University Assessment in a Digital World. Edited by Margaret Bearman, Phillip Dawson, Rola Ajjawi, Joanna Tai and David Boud. Cham: Springer, pp. 49–63. [Google Scholar]
  4. Bhattacharyya, Jnanabrata. 2004. Theorizing Community Development. Community Development 34: 5–34. [Google Scholar]
  5. Bloom, Benjamin S., Max D. Engelhart, Edward J. Furst, Walker H. Hill, and David R. Krathwohl. 1956. Taxonomy of Educational Objectives: Cognitive Domain. New York: McKay. [Google Scholar]
  6. Bridgeman, Adam, Danny Liu, and Ruth Weeks. 2023. Program Level Assessment Design and the Two-Lane Approach. Available online: https://educational-innovation.sydney.edu.au/teaching@sydney/program-level-assessment-two-lane/ (accessed on 16 September 2024).
  7. Brill, Jennifer M., and Yeonjeong Park. 2008. Facilitating engaged learning in the interaction age taking a pedagogically-disciplined approach to innovation with emergent technologies. International Journal of Teaching and Learning in Higher Education 20: 70–78. [Google Scholar]
  8. Dewey, John. 2008. The Later Works of John Dewey, 1925–1953, Volume 8: 1933, Essays and How We Think. Carbondale: Southern Illinois University Press.
  9. Fisher, Kath. 2009. Critical Self-Reflection: What Is It and How Do You Do It? Unpublished manuscript, Lismore, Australia.
  10. Haider, Sharif. 2024. Exploring opportunities and challenges of artificial intelligence in social work education. In The Routledge International Handbook of Social Work Teaching. Edited by Jarosław Przeperski and Rajendra Baikady. London: Routledge, pp. 46–62.
  11. Holdo, Markus. 2020. Meta-deliberation: Everyday acts of critical reflection in deliberative systems. Politics 40: 106–19.
  12. Holdo, Markus. 2022. Critical Reflection: John Dewey’s Relational View of Transformative Learning. Journal of Transformative Education 21: 9–25.
  13. Humphrey, Caroline. 2009. By the light of the Tao. European Journal of Social Work 12: 377–90.
  14. Ioakimidis, Vasilios, and Reima Ana Maglajlic. 2023. Neither ‘Neo-Luddism’ nor ‘Neo-Positivism’; Rethinking Social Work’s Positioning in the Context of Rapid Technological Change. Oxford: Oxford University Press.
  15. Kenny, Susan. 1996. Contestations of Community Development in Australia. Community Development Journal 31: 104–13.
  16. Kreber, Carolin. 1999. A course-based approach to the development of teaching-scholarship: A case study. Teaching in Higher Education 4: 309–25.
  17. Kreber, Carolin, and Patricia A. Cranton. 2000. Exploring the scholarship of teaching. The Journal of Higher Education 71: 476–95.
  18. Lehtiniemi, Tuukka. 2023. Contextual social valences for artificial intelligence: Anticipation that matters in social work. Information, Communication & Society 27: 1110–25.
  19. Liu, Danny. 2023. Responding to Generative AI for Assessments in Semester 2, 2023. Available online: https://unisyd-my.sharepoint.com/:b:/g/personal/danny_liu_sydney_edu_au/EVnXmBOhOMdOrA7_plQLh5kB5rwPKw7fPOAMYvS2FO682Q?e=ztgXRi (accessed on 16 September 2024).
  20. Liu, Danny, and Adam Bridgeman. 2023. What to Do About Assessments If We Can’t Out-Design or Out-Run AI? Available online: https://educational-innovation.sydney.edu.au/teaching@sydney/what-to-do-about-assessments-if-we-cant-out-design-or-out-run-ai/ (accessed on 16 September 2024).
  21. Lodge, Jason, Sarah Howard, and Margaret Bearman. 2023. Assessment Reform for the Age of Artificial Intelligence. Canberra: Tertiary Education Quality and Standards Agency. Available online: https://www.teqsa.gov.au/guides-resources/resources/corporate-publications/assessment-reform-age-artificial-intelligence (accessed on 16 September 2024).
  22. Maleki, Negar, Balaji Padmanabhan, and Kaushik Dutta. 2024. AI hallucinations: A misnomer worth clarifying. Paper presented at the 2024 IEEE Conference on Artificial Intelligence (CAI), Singapore, March 25.
  23. Mao, Jin, Baiyun Chen, and Juhong Christie Liu. 2024. Generative Artificial Intelligence in Education and Its Implications for Assessment. TechTrends 68: 58–66.
  24. Mezirow, Jack. 1990. How Critical Reflection Triggers Transformative Learning. In Fostering Critical Reflection in Adulthood. Edited by Jack Mezirow. San Francisco: Jossey-Bass Publishers, pp. 1–20.
  25. Mezirow, Jack. 1997. Transformative learning: Theory to practice. New Directions for Adult and Continuing Education 1997: 5–12.
  26. Microsoft. 2024. Copilot (April 2024 Version) [Large Language Model]. Available online: https://www.bing.com/chat?form=NTPCHB (accessed on 29 April 2024).
  27. Moorhouse, Benjamin Luke, Marie Alina Yeo, and Yuwei Wan. 2023. Generative AI tools and assessment: Guidelines of the world’s top-ranking universities. Computers and Education Open 5: 100151.
  28. Neely, Pat, and Jan Tucker. 2012. Using business simulations as authentic assessment tools. American Journal of Business Education 5: 449–56.
  29. Ogunleye, Bayode, Kudirat Ibilola Zakariyyah, Oluwaseun Ajao, Olakunle Olayinka, and Hemlata Sharma. 2024. Higher education assessment practice in the era of generative AI tools. arXiv preprint arXiv:2404.01036.
  30. OpenAI. 2022. ChatGPT (December 2 Version) [Large Language Model]. Available online: https://chat.openai.com/ (accessed on 2 December 2022).
  31. Reimer, Elizabeth, and Louise Whitaker. 2019. Exploring the Depths of the Rainforest: A Metaphor for Teaching Critical Reflection. Reflective Practice 20: 175–86.
  32. Salinas-Navarro, David Ernesto, Eliseo Vilalta-Perdomo, Rosario Michel-Villarreal, and Luis Montesinos. 2024. Using Generative Artificial Intelligence Tools to Explain and Enhance Experiential Learning for Authentic Assessment. Education Sciences 14: 83.
  33. Sarantakos, Sotirios. 1998. Social Research. London: Palgrave.
  34. Scheyett, Anna. 2023. A Liminal Moment in Social Work. Social Work 68: 101–2.
  35. Schön, Donald A. 1983. The Reflective Practitioner: How Professionals Think in Action. New York: Basic Books.
  36. Singer, Jonathan B., Johanna Creswell Báez, and Juan A. Rios. 2023. AI creates the message: Integrating AI language learning models into social work education and practice. Journal of Social Work Education 59: 294–302.
  37. Smolansky, Adele, Andrew Cram, Corina Raduescu, Sandris Zeivots, Elaine Huber, and Rene F. Kizilcec. 2023. Educator and student perspectives on the impact of generative AI on assessments in higher education. Paper presented at the Tenth ACM Conference on Learning@Scale, Copenhagen, Denmark, July 20–22.
  38. Souppez, Jean-Baptiste R. G., Debjani Goswami, and Joe Yuen. 2023. Assessment and Feedback in the Generative AI Era: Transformative Opportunities, Novel Assessment Strategies and Policies in Higher Education. Paper presented at the International Federation of National Teaching Fellows Symposathon, Online, December 4–5.
  39. Thanh, Binh Nguyen, Diem Thi Hong Vo, Minh Nguyen Nhat, Thi Thu Tra Pham, Hieu Thai Trung, and Son Ha Xuan. 2023. Race with the machines: Assessing the capability of generative AI in solving authentic assessments. Australasian Journal of Educational Technology 39: 59–81.
  40. Thorpe, Anthony, and Diane Garside. 2017. (Co)meta-reflection as a method for the professional development of academic middle leaders in higher education. Management in Education 31: 111–17.
  41. Victor, Bryan G., Lauri Goldkind, and Brian E. Perron. 2024. Forum: The Limitations of Large Language Models and Emerging Correctives to Support Social Work Scholarship: Selecting the Right Tool for the Task. International Journal of Social Work Values and Ethics 21: 200–7.
  42. Victor, Bryan G., Rebeccah L. Sokol, Lauri Goldkind, and Brian E. Perron. 2023. Recommendations for Social Work Researchers and Journal Editors on the Use of Generative AI and Large Language Models. Journal of the Society for Social Work and Research 14: 563–77.
  43. Villarroel, Verónica, Susan Bloxham, Daniela Bruna, Carola Bruna, and Constanza Herrera-Seda. 2018. Authentic assessment: Creating a blueprint for course design. Assessment & Evaluation in Higher Education 43: 840–54.
  44. Wacquant, Loïc J. D., and Pierre Bourdieu. 1992. An Invitation to Reflexive Sociology. Cambridge: Polity Press.
  45. Wakeford, Richard. 2009. Principles of student assessment. In A Handbook for Teaching and Learning in Higher Education: Enhancing Academic Practice. Edited by Heather Fry, Steve Ketteridge and Stephanie Marshall. Abingdon: Routledge.
  46. Wiewiora, Anna, and Anetta Kowalkiewicz. 2019. The role of authentic assessment in developing authentic leadership identity and competencies. Assessment & Evaluation in Higher Education 44: 415–30.
  47. Zuber-Skerritt, Ortrun. 2015. Critical Reflection. In Professional Learning in Higher Education and Communities: Towards a New Vision for Action Research. London: Palgrave Macmillan UK, pp. 76–101.
Figure 1. Wording of Assessment 1 (WELF1001) (Bhattacharyya 2004; Kenny 1996).
Figure 2. Prompt for the GAI-generated case study (WELF2002).
Figure 3. Assessment task wording for devising a multi-layered assessment (WELF2002) (Sarantakos 1998).
Figure 4. Wording for a critical self-reflection assessment contextualised to a specific task (WELF3002) (Fisher 2009).
Figure 5. Wording used to communicate acceptable GAI use in assessments to students.
Figure 6. Instructions regarding a pre-generated prompt that students can use (Microsoft 2024).