Article

In Search of Alignment between Learning Analytics and Learning Design: A Multiple Case Study in a Higher Education Institution

1 Connected Intelligence Centre, University of Technology Sydney, Sydney, NSW 2007, Australia
2 Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW 2007, Australia
3 Faculty of Arts and Social Sciences, University of Technology Sydney, Sydney, NSW 2007, Australia
4 UTS Business School, University of Technology Sydney, Sydney, NSW 2007, Australia
* Author to whom correspondence should be addressed.
Educ. Sci. 2023, 13(11), 1114; https://doi.org/10.3390/educsci13111114
Submission received: 9 September 2023 / Revised: 23 October 2023 / Accepted: 29 October 2023 / Published: 6 November 2023

Abstract

Learning design (LD) has increasingly been recognized as a significant contextual element for the interpretation and adoption of learning analytics (LA). Yet, few studies have explored how instructors integrate LA feedback into their learning designs, especially within open automated feedback (AF) systems. This research presents a multiple-case study at one higher education institution to unveil instructors’ pilot efforts in using an open AF system to align LA and LD within their unique contexts, with the goal of delivering personalized feedback and tailored support. A notable finding from these cases is that instructors successfully aligned LA with LD for personalized feedback through checkpoint analytics in highly structured courses. Moreover, they relied on checkpoint analytics as a mechanism for evaluating impact. Importantly, students perceived a stronger sense of instructors’ support, reinforcing previous findings on the effectiveness of personalized feedback. This study contributes essential empirical insights to the intersection of learning analytics and learning design, shedding light on practical ways educators align LA and LD for personalized feedback and support.

1. Introduction

While learning analytics (LA) has matured as a field of research, in practice, there is still a dearth of evidence to show its impact. Most notably, within the last 6 years, LA research has seen an exponential increase in the development of personalized feedback interventions, leveraging the availability of data from digital learning environments as well as from academic administration systems [1]. LA feedback intervention systems may be classified as closed, fully-automated systems such as dashboards displaying visualizations of students’ learning activity or progress information, presented to instructors or students; or open, automated feedback (AF) systems where instructors can control various parameters for personalizing feedback, including the data informing feedback, feedback frequency, as well as the feedback message [2]. Compared with the rapid pace of development of LA systems, however, adoption by faculty is not well understood, and, accordingly, research suffers from a lack of evidence showing real-world impact [3,4].
Learning design (LD) has long been acknowledged as an important contextual factor for interpreting learning analytics [5,6] and promoting faculty adoption [7]. In this study, we adopt the widely cited definition of learning design by [8] as “a methodology for enabling teachers/designers to make more informed decisions in how they go about designing learning activities and interventions that are pedagogically informed and make effective use of appropriate resources and technologies” (p.7).
Few studies have described how instructors have aligned LA feedback with LD. Of particular interest is how instructors are using open AF systems for LA-based, personalized feedback within the context of their LD. Open AF tools are worthy of investigation as they are designed to facilitate instructor agency over feedback parameters—such as data, timing, and message—for personalizing feedback in their courses [2]. Thus, they present an opportune way for instructors to use LA to inform LD for the purpose of personalizing feedback. The present research describes a multiple case study highlighting the possibilities of using one such open AF tool to align LA with LD from the perspective of educators. In so doing, this research aims to contribute to much-needed empirical work at the intersection of learning analytics and learning design and to illustrate ways that educators may align LA and LD for personalizing feedback.

2. Literature Review

2.1. Personalized Feedback with Learning Analytics: The Affordance of Open, Automated Feedback (AF) Tools

LA approaches seek to harvest data with the aim of producing meaningful outputs that facilitate actionable insights for stakeholders. However, with the abundance of data in digital learning environments, there is a danger of harvesting ‘low-hanging fruit’ in the form of low-level clickstream data from learning management systems or other technology-enhanced learning tools. Using such data to provide feedback to students may result in feedback that is not meaningful for them, reducing the likelihood that students enact the feedback and thereby rendering it ineffective [9]. Hence, for personalized feedback to be effective, LA approaches should draw on ‘small (but meaningful) data’ [10].
Advances in LA research as well as data mining methodologies have seen exponential growth in the development of AF systems. To date, AF systems may be broadly categorized into one of two types: “closed” systems, which are fully automated and loaded with preset algorithms, and “open” systems, which allow instructors to manipulate any number of feedback parameters that affect the tool’s behavior. As specified in [2], these parameters include: data; algorithms; feedback messages; feedback channels; and the student-driven feedback processes elicited by the system. Two examples of open AF systems are OnTask [11] and the Student Relationship Engagement System, SRES [12]. Because open AF systems enable instructors to make decisions around feedback parameters, they offer a viable opportunity for instructors to strategically design personalized feedback into their courses based on LD. In the next section, we explain what LD means in the context of the present research.
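To make these parameters concrete, the following minimal sketch illustrates in plain Python how an open AF tool might expose them as instructor-editable settings. This is not the actual OnTask or SRES interface; the class, field, and data-column names are illustrative assumptions.

```python
# Illustrative sketch only; not the actual OnTask or SRES API.
# It shows the feedback parameters an "open" AF tool leaves under
# instructor control: data, rule, message, channel, and timing.
from dataclasses import dataclass
from typing import Callable

@dataclass
class FeedbackRule:
    name: str                          # instructor-chosen label for the rule
    condition: Callable[[dict], bool]  # rule evaluated over a student's data row
    message_template: str              # personalized message with placeholders
    channel: str = "email"             # delivery channel
    send_week: int = 1                 # timing within the teaching session

def generate_messages(students: list[dict], rule: FeedbackRule) -> list[tuple[str, str]]:
    """Return (address, message) pairs for students matching the rule's condition."""
    return [
        (s["email"], rule.message_template.format(**s))
        for s in students
        if rule.condition(s)
    ]

# Hypothetical usage with assumed data columns from an instructor-curated export.
rule = FeedbackRule(
    name="missed_week2_quiz",
    condition=lambda s: s["quiz2_attempts"] == 0,
    message_template="Hi {first_name}, I noticed you haven't attempted the Week 2 quiz yet...",
    send_week=3,
)
```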

2.2. Learning Design: Orchestrating the Achievement of Course Objectives

Conceptually, LD refers to the underpinning structure of a course and is demonstrated through the orchestration of learning activities, resources, and assessments, supported by learning technologies, in order to meet course objectives within the context of a course. The significance of LD for LA lies in its affordance as a “critical frame of reference” for understanding the outputs of LA that are harvested from students’ interactions with the course materials [13]. LDs may be graphically presented (e.g., [8,14]) or described categorically with respect to activities (e.g., [15]). Importantly, as highlighted by [16], the rationale for these representations is “to facilitate sharing and reuse of pedagogical plans and to automate some of the design phases” (p. 234).
Effective LA implementation needs to be informed by the pedagogical context [17,18]. This consideration is important given the mission of LA to optimize the learning of all students within their contexts [19]. LA captured within LD provides valuable information regarding the effectiveness of the teaching and learning context. This information can then serve as an evidence-based rationale to replicate effective LD across multiple contexts [5]. To this end, researchers have identified 18 frameworks for learning analytics and learning design in the literature [7], of which a few are outlined below.

2.3. Frameworks for Learning Analytics and Learning Design

Checkpoint and process analytics [5]. Arguably, this was the first framework to demonstrate in practical terms how learning analytics could be aligned with learning design through the use of two types of analytics [20]. This framework focuses on three elements inherent in all LDs—resources, tasks, and supports—and how LA can then be interpreted against these reference points. Checkpoint analytics refer to point-in-time data showing whether students have accessed resources and are making progress through the intended sequence of learning. Examples of checkpoint analytics are course site log-ins, file downloads, or the completion of formative quizzes at the end of a weekly topic. Process analytics involve additional analysis of data harvested from learning management systems and digital learning tools. Central to process analytics are visualizations that graphically represent learning processes, especially social network analysis to represent how students interact with each other in collaborative learning activities. Process analytics also include dashboards that visualize students’ activity patterns, which may offer insights into their self-regulated learning through advanced educational data mining methodologies such as process mining.
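As a rough illustration of this distinction, the sketch below computes simple checkpoint flags and gathers the raw ingredients that process analytics would model further. It assumes a hypothetical LMS event export with student_id, event_type, item, and timestamp columns rather than any particular LMS schema, and the date and item names are placeholders.

```python
# Illustrative sketch assuming a hypothetical LMS event export with columns:
# student_id, event_type, item, timestamp. Not tied to any specific LMS.
import pandas as pd

events = pd.read_csv("lms_events.csv", parse_dates=["timestamp"])

# Checkpoint analytics: point-in-time flags referenced against the learning design,
# e.g. by the end of Week 1, has each student logged in, downloaded the notes,
# and completed the formative quiz?
week1 = events[events["timestamp"] < "2021-08-16"]
checkpoints = pd.DataFrame({
    "logged_in": week1.groupby("student_id").size() > 0,
    "downloaded_notes": (week1[week1["item"] == "week1_notes"]
                         .groupby("student_id").size() > 0),
    "completed_quiz": (week1[(week1["event_type"] == "quiz_submit")
                             & (week1["item"] == "week1_quiz")]
                       .groupby("student_id").size() > 0),
}).fillna(False)

# Process analytics require further modelling of the event stream, for example a
# forum reply network for social network analysis or process mining of activity
# sequences; only the raw per-student sequences are gathered here.
activity_sequences = (events.sort_values("timestamp")
                            .groupby("student_id")["event_type"]
                            .apply(list))
```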
Learning analytics for learning design [13]. This framework was developed from an interview study with instructors. It proposes five dimensions of analytics that demonstrate how LD can serve as a reference point for LA, allowing instructors to “transform learning design into a teacher-led inquiry-based practice” (p.333). These five dimensions are: temporal; tool-specific; cohort dynamics; comparative; and contingency. In relation to personalized feedback and support, the category of contingency and decision support tools is particularly relevant. Possible metrics include weekly metrics and tool-specific metrics that could be analyzed with algorithms to gain insights into students’ learning progress. The main aim of this category of analytics is to develop tools that enable instructors to identify students at risk and then tailor and send out feedback and study advice accordingly.
Analytics layers for learning design, AL4LD [21]. The framework proposes three layers of analytics. At the base of the framework is the LA layer—data drawn from the use of digital tools and platforms that students interact with as part of the course LD. This layer builds on earlier models by [5,13] described above. The researchers further categorize the data in this layer into five classes: Profiles, Checkpoints, Process, Performance, and Satisfaction. While the purpose of this layer is primarily to inform the educator about the impact of a particular learning design, we propose that this layer also lends itself nicely to the design of personalized feedback, as these metrics essentially offer dynamic information about learners’ ongoing progression and performance while still in the course.
The shortlist of LA-LD frameworks outlined above is certainly not exhaustive. However, these three frameworks were selected for their relevance to personalized feedback, which aligns with the interest of the present research in exploring the alignment between LA and LD for effective personalized feedback and tailored support. In particular, the AL4LD was seen as a useful framework to analyze the LD in this study, as its data classes were the most comprehensive of the three frameworks.

2.4. Supporting Student Learning with Learning Analytics and Learning Design: What Has Been Done?

With the growing interest in the intersection of LA and LD, much research has been conducted to demonstrate how LA data can be applied to evaluate the quality of LD. To date, there have been a handful of systematic reviews aiming to synthesize the findings from this strand of research (e.g., [22,23]). However, supporting students’ ongoing learning with fine-grained data collected in alignment with LD is an area that is comparatively under-explored [3].
Lockyer et al.’s [5] checkpoint and process analytics framework was explored by Kaliisa and colleagues [3]. Their case study described how LA might potentially be used to provide evidence-based adaptations of LD in the context of a blended course. The researchers harvested checkpoint analytics in the form of weekly LMS activity logs, namely page views and participation. Process analytics were also employed, in the form of social network analyses derived from student interactions in online discussion forums as well as directed network graphs generated from automated discourse analysis. Interviews with the course instructors revealed that the visualizations from process analytics could be valuable for adapting LDs not just for the next iteration but also for addressing students’ progress at a cohort level while they were still learning in the course.
In a rare institutional-level study, [24] explored LA and its alignment with LD, specifically with respect to personalized feedback and support. However, in that study, no specific LD framework was employed; rather, the researchers referred to learning design elements (Assessment, Assignment, Session Profile, and Forum use) as a way to operationalize LD. In their study, they examined indicators (LA) that were used by instructors to personalize feedback using the OnTask tool at an institution, how these were aligned with the LD elements, and ultimately how they were associated with the academic performance of students. Through an analysis of usage logs in OnTask, the researchers found that the most frequently used LA data came from activities relating to Assessment, Assignment, Session Profile, and Discussion Forums across 99 courses and approximately 20,000 students. While the findings from this institutional study are undoubtedly valuable for understanding feedback mechanisms at an institutional level, as well as for documenting the kinds of analytics used by instructors for personalizing feedback and the impact therefrom, there is still a limited understanding of how the instructors used their own LDs as a reference point for LA-based, personalized feedback.
Overall, research is limited regarding how instructors intentionally use LA to inform personalized feedback within LD. Although aligning LA with LD is conceptually important for meaningful LA, there is still a dearth of research demonstrating how this can be achieved in real-world contexts. This lack of knowledge can impede the wider adoption of LA.

2.5. Purpose of the Research

To address the research gap, the present research aims to explore how instructors reference LD in using LA for personalized feedback and study advice in their real-world contexts. To achieve this aim, we present a multiple case study where instructors used LA data aligned with LD for personalizing feedback. In all these case studies, the instructors were exploring the OnTask tool to generate and communicate personalized feedback and support to their students. All three instructors were also course coordinators, meaning that they were the key decision-makers with regard to the learning design and LA decisions that were made in preparing and communicating personalized feedback. The case studies describe instructors’ pilot attempts at using OnTask to align LA within the contexts of their own LDs with the purpose of supporting students through personalized feedback and tailored advice. In so doing, this exploratory research contributes to the evidence of aligning LA with LD for personalized feedback in practice. This research was guided by the following questions:
  • RQ1. How do instructors align LA with LD in their contexts for the purpose of personalizing feedback and support in their courses?
  • RQ2. How do instructors evaluate the impact of personalized feedback and support when LA and LD are aligned?
  • RQ3. How do students perceive their personalized feedback when LA is aligned with LD?

3. Materials and Methods

3.1. OnTask: Leveraging Learning Analytics for Personalized Feedback and Advice

OnTask is a web-based platform designed to facilitate instructors’ preparation and delivery of personalized learning support actions at scale to course cohorts. As an open automated feedback (AF) tool, the platform enables instructors to make decisions around key parameters of the tool’s behavior, such as: the kinds of data and algorithms used to set rules and conditions for personalization; the content of the message; the timing and frequency of the message; and the student-driven feedback processes arising from personalized feedback. The agency afforded to instructors by the tool is therefore well-placed for instructors’ strategic alignment of LA and LD, especially in view of the kinds of data that are used to inform feedback and the timing of the communication of feedback.

3.2. Case Study Approach

The overarching aim of the research was to gain an in-depth understanding of how instructors worked within their own teaching contexts to align LA and LD. The case study approach was deemed appropriate for two reasons, based on [25]. Firstly, the inquiry was undertaken in a real-world context, and understanding the complexity of the context of LD was integral to the understanding of the case. Secondly, gaining in-depth insights into this phenomenon called for multiple sources of evidence to be gathered. In view of the fact that a single case study may present limitations with regard to generalizability, multiple cases were selected for this analysis.
The cases were selected from a pool of instructor-users at the institution where OnTask was being piloted between 2021 and 2023. This institution was a large research university in Australia. Similar to the institutional context in [24], this personalized communication platform was available to all instructors at the institution. Instructors were trained in the use of the platform, not only to equip them with the necessary skills to use OnTask but also to provide a grounded understanding of feedback, especially emphasizing that effective feedback should be embedded within LD. After instructors attended the introductory training sessions, ongoing support was provided to each of them by a researcher, as needed, while they used the tool. The rationale for taking a qualitative approach in the form of multiple case studies was to obtain an in-depth understanding of the context influencing how instructors aligned LA and LD.

3.3. Data Collection

This research was part of a wider, ethics-approved institutional project evaluating the pilot use of OnTask at the institution. Instructors were informed about the institutional project towards the end of the introductory training and invited to participate in the research. Informed consent was obtained prior to data collection. In order to gain insights into how instructors integrate learning analytics tools into their learning design practices, the following data sources were utilized to capture a holistic view of the phenomenon under investigation:
  • Instructor interviews. Semi-structured interviews were conducted with each participating instructor to delve deeper into their alignment practices. The interviews aimed to elicit instructors’ experiences in integrating learning analytics tools into their learning design, in particular their motivations for using the tool and their decision-making processes behind the alignment strategies. Instructors were also asked about the perceived impact of analytics on any learning outcomes, including their observations from course data. The interviews were structured around the following questions, with room for further in-depth probes: 1. Please tell me about the course in which you implemented OnTask. 2. To date, how many teaching sessions have you implemented OnTask for? 3. What motivated you to use OnTask in your course? 4. What impact do you think your feedback using OnTask had on the students’ learning experience? 5. Do you feel like you’ve had to learn any new skills to use OnTask? 6. Do you feel OnTask saved you time or prompted you to use your teaching and administrative time differently? 7. What were the main challenges you faced in using OnTask in your course?
  • Artifacts demonstrating strategies of alignment between LA and LD. These artifacts were digital documents generated in the course of discussions between the researcher and the instructors during the process of planning the strategic alignment. These are in effect the kind of “design artifacts” noted by [2] that are typically in the form of “documents and visualizations that aid personal and joint cognitive work” (p. 24).
  • Student experience surveys. To gain a holistic understanding of the impact of aligned learning analytics on the learning experience, feedback was gathered from students enrolled in the courses under investigation using a single, unified survey. The survey comprised four items on a 6-point Likert scale to assess students’ perceptions of their feedback, as well as one open-ended question inviting them to input any other comments they had about their feedback. The four items were: (1) This feedback made me feel more supported by my instructor; (2) The feedback and support improved the quality and standard of my work; (3) This feedback and support improved my overall learning experience in this subject; (4) This feedback and support allowed me to complete my tasks and studies more effectively. This was followed by an open-ended prompt: Do you have any other comments about your feedback experience? This data source provided a student-centered view of the alignment’s effectiveness and potential areas for improvement.

3.4. Data Analysis

To answer RQ1 and RQ2, qualitative data from the interviews were analyzed using an inductive approach in order to generate meaningful themes at the semantic level from the data. To further address RQ2, the digital artifacts were analyzed using the lens of the AL4LD framework [21]. To answer RQ3, descriptive analysis was employed on the students’ survey responses. The survey results were then compared across the three cases.
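For transparency, the descriptive analysis of the survey responses can be sketched as follows; the file and column names are hypothetical stand-ins for the actual export of responses.

```python
# Minimal sketch of the descriptive survey analysis, assuming a long-format CSV
# with hypothetical columns: case, item, rating (1-6 Likert). Illustrative only.
import pandas as pd

responses = pd.read_csv("feedback_survey_responses.csv")
summary = (responses.groupby(["case", "item"])["rating"]
                    .agg(M="mean", SD="std", n="count")
                    .round(2))
print(summary)  # per-case means and standard deviations, compared across cases
```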

3.5. Case Studies: Aligning LA and LD for Personalized Feedback and Study Advice

With the aim of this research to investigate the phenomenon of how instructors aligned LA and LD, cases were selected following key principles for case selection recommended by [25]. Foremost, to ensure relevance to the theoretical framework of LD, we drew on the [21] AL4LD framework as a lens for analyzing the learning designs of each case. We chose this framework as it included a class of data specifically on performance, which was missing in other frameworks. In particular, we focused on the learning analytics layer, as that was the scope of this research. For theoretical relevance, we narrowed the selection of cases to instructors who were intentionally strategic in their use of the tool to embed feedback and support within the context of their learning design over the teaching semester; in this sense, we precluded cases where instructors were using the tool mainly for personalizing general course announcements to students’ first names. For replication logic, we identified cases through purposive sampling that differed with respect to discipline, learning design, and approach to alignment. This also allowed for information richness through the collection of in-depth and comprehensive data to address the research questions while allowing for meaningful cross-comparison. Based on these principles, three cases were selected for this research. All three participant instructors were the course coordinators who led the learning design of each course. Case 1 was a large postgraduate course in the Engineering and IT disciplines with a female instructor. Case 2 was a smaller course in the Business discipline with a female instructor. Finally, Case 3 was a much smaller course in the Arts and Social Sciences discipline, at the postgraduate certificate level; this course was led by a male instructor. The instructors in Cases 1 and 2 were piloting the tool for the first time in their courses. The instructor in Case 3 had some experience implementing the tool in a different course and was exploring the use of the tool in the current course. Table 1 summarizes the main characteristics of the three cases.
The rest of the paper is organized as follows. Following the guidelines for multiple case study methodology [26], we first present a detailed analysis of each case in view of the research questions, using the following structure for consistency and to facilitate cross-comparison: Course description and learning design; Motivation for using the tool; Alignment of LA and LD; Perceived impact; and Student perceptions. Next, in the Results and Discussion section, we compare and contrast the findings from each case, identifying similarities and differences, and discuss these to answer the research questions. Finally, we discuss the limitations of this study, suggest future directions for the work, and offer some conclusions from the study.

3.5.1. Case 1: Engineering and IT (First Year, Postgraduate)

Course description and learning design. This postgraduate course focused on the processes of generation, dissemination, retention, application, and distribution of corporate information and knowledge that provide business intelligence to enterprises. Typically, students undertake this subject during their first year (second semester) of study in a two-year Masters in IT program. This was a diverse cohort with different demographics and skill sets. During the Spring semester of 2021, the subject enrolled 101 students, of whom 81% were international, with English as an additional language. Furthermore, due to the COVID-19 pandemic and isolation requirements, many of the international students were learning remotely from their home countries.
This fully online course involved 12 consecutive weeks of study. This was a highly interactive course in which students participated in weekly 3 h, fully online, synchronous, interactive, and collaborative workshops, supported by a range of learning activities they were expected to complete before and after class. The summative assessment structure comprised an early diagnostic quiz, weekly case studies, quizzes and reflections, a blog post, and a group project. The LD comprised the following weekly module structure of activities and assessments:
  • Prepare: Weekly announcements provided a clear outline of what would be covered in the upcoming week to help students plan their time effectively and prioritize their tasks. Students were required to complete pre-reading activities, namely, interactive quizzes and videos, or scenarios that aligned with the weekly topic.
  • Engage: This section of the weekly module encouraged active learning and peer-to-peer interaction by including interactive and collaborative in-class activities (real-life case studies, videos, simulations, or hands-on activities) designed to capture students’ attention and involve them actively with the subject material.
  • Reflect and Progress: This section of the weekly module comprised formative self-assessment quizzes to assess students’ understanding of the subject material and open-ended text questions to encourage active self-reflection on their learning experiences to promote deeper thinking and improved metacognition.
Motivation for using the tool. The course coordinator was interested in exploring the use of LA for timely and personalized feedback to create a more student-centered and adaptive learning environment. Prior to this, the coordinator had been using the analytics feature in the institutional LMS in order to obtain valuable insights into each of her students’ learning behaviors, strengths, and areas for improvement. However, this feature was limited in terms of personalizing communications and providing actionable feedback to students, which was her main purpose for tracking students’ progress with the LMS analytics. Having attended the introductory training sessions on OnTask, the coordinator was drawn to the personalization elements the tool had to offer. This is illustrated by the following quote:
… I realised after attending the training sessions on OnTask that [the personalization] is missing [in the LMS analytics feature]. …. So the first thing is that in [the LMS analytics feature], we cannot address the students by their first names. So that personalization touch is missing from the very beginning. And in OnTask, we have that option available to tag our students in.
Alignment of LA and LD. Figure 1 presents the artifact showing the alignment between LA and LD. At the beginning of the semester, all students received a personal welcome email from the course coordinator acknowledging their enrollment and providing information about the subject structure, expectations, and important resources. After this week, the course coordinator planned to send personalized feedback to students based on the checkpoint analytics around logins, participation in subject activities, and submission of summative assessment tasks. As shown in Figure 1, the timing of the planned feedback emails was always after each checkpoint activity or assessment, so that students would be aware of their ongoing progress rather than only at the end of the course.
Figure 2 shows an example of personalized feedback generated after the first assessment, highlighting the rule-based conditions for messages to (1) students with no submission, (2) students who failed the first assessment, and (3) students who passed. In line with principles of effective feedback [27], the message was thoughtfully crafted to express the course coordinator’s genuine concern for students’ success and confidence in their capabilities, and it offered specific actionable advice on how to improve their performance in the subject.
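The branching logic behind this message can be sketched as follows. This is an illustrative reconstruction in plain Python rather than OnTask's actual condition editor, and the pass mark, column names, and message wording are assumptions.

```python
# Illustrative reconstruction of the rule-based conditions behind the Figure 2
# message; not OnTask's condition syntax. Threshold and wording are assumed.
PASS_MARK = 50  # assumed pass threshold

def assessment1_feedback(student: dict) -> str:
    greeting = f"Hi {student['first_name']},\n\n"
    if not student["a1_submitted"]:
        body = ("I noticed you haven't submitted Assessment 1 yet. You can still "
                "submit within the next five days; please get in touch if anything "
                "is getting in the way.")
    elif student["a1_score"] < PASS_MARK:
        body = ("Thank you for submitting Assessment 1. Your result was below the "
                "pass mark this time, but there is plenty of time to improve. Please "
                "review the worked examples and bring your questions to the next workshop.")
    else:
        body = ("Well done on passing Assessment 1! Keep up the steady progress and "
                "keep engaging with the weekly activities.")
    return greeting + body
```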
Perceived impact. The interview with the course coordinator highlighted three positive outcomes from personalizing feedback through the strategic alignment of LA and LD. Firstly, email replies in response to personalized feedback highlighted students’ greater sense of connection to the course and enhanced motivation. This point is illustrated by the coordinator’s comment:
…in reply emails students they have clearly mentioned that they have felt connected to the subject more, as compared to other subjects. … So, once students, they feel that they are making progress in a subject, that give them the confidence to excel in that subject as well.
Secondly, the course coordinator observed, from the checkpoint analytics, that there were noticeably fewer late submissions of assessments. The effect of this was that, although it was planned to send students personalized feedback at every checkpoint, this was no longer necessary after Week 6. As noted by the coordinator:
Initially, the plan was to send them nine or 10 emails, but we ended up sending only half of the emails. The reason is very positive behind that action … students didn’t give us the opportunity to send them emails. With the (first) assessment tasks, seven of them, I had no clue about them why they haven’t submitted. So I sent them a message that they have a missing assessment task, and that they still have five days to submit. So they submitted within those five days. And with the next assessment tasks, there was no missing submissions. I guess students knew, someone is checking on them. So they want to prove themselves that they should not get late this time.
Thirdly, from the judgment of the course coordinator, students’ assessments were of higher quality than those in previous iterations of the course:
That’s my personal observation that the quality of the submissions, the presentations, the in-class activities, the case study discussion students they were doing, it has much more improved if I compare it, with previous cohorts.
Student perceptions. Forty-one students (40.6% of the cohort) responded to the survey on their experience with their personalized feedback. Student responses were positive, with means ranging from 4.76 to 5.24 across the four items; the highest-rated item was This feedback made me feel more supported by my instructor (M = 5.24, SD = 1.18). This was followed by The feedback and support improved the quality and standard of my work (M = 4.83, SD = 1.38). To a lesser extent, students agreed that This feedback and support improved my overall learning experience in this subject (M = 4.76, SD = 1.16), and This feedback and support allowed me to complete my tasks and studies more effectively (M = 4.76, SD = 1.30).

3.5.2. Case 2: Business (First Year, Postgraduate)

Course description and learning design. The context of this case study was a postgraduate subject in the accounting major of several Masters programs at the institution’s Business School, in which students learned fundamental management accounting and costing practices. Typically, most students attempt this subject in their second year of study. The course usually attracts a small but highly diverse cohort. For example, in the Spring 2021 semester, the cohort comprised 43 students, of whom 83% were international and only 17% spoke English as their first language.
Similar to Case 1, the learning format in this course involved 12 consecutive weeks of study. Students participated in weekly 3 h interactive seminars supported by a range of self-guided learning activities they were expected to complete before and after class. The assessment structure comprised two formative in-class quizzes (in Week 4 and Week 7), a group assignment (due in Week 10), and a final exam.
As a result of the COVID-19 pandemic, early in 2020, the subject shifted to an online delivery mode. The general assessment structure, however, was preserved. Due to border lockdowns, the majority of the cohort studied remotely in their home countries. This change to remote, online learning raised the stakes for students’ capacity for self-directed learning, with the expectation that they would independently complete preparatory tasks before attending online classes. For example, prior to this, students would learn fundamental concepts from an in-person ‘lecture style’ component of the weekly seminar. In contrast, in an online environment, they were now expected to review pre-recorded explanatory videos about the topic content and work through practical demonstration questions independently before attending a weekly Zoom class to work collaboratively through more advanced practical problems.
Motivation for using the tool. The course coordinator was curious about using OnTask to communicate in a different way with students. Having a small cohort afforded the opportunity to pilot a personalized feedback strategy that could then be scaled up to larger cohorts if found effective. Importantly, the course coordinator felt constrained by the limitations of online teaching with respect to identifying and responding to cues around students’ progress. This is described in the following comment:
I get the sense that teaching online is like teaching through a keyhole … So you get a very small glimpse of the student, compared to when we had face to face where you can see when they walk in, what’s their mood, you can see how much preparation they’ve done by just looking on their computer or on their workbook, you can see them interacting, even if they’re not asking you questions. So you get a much more holistic picture of the student and how they’re progressing. And that feeds into how much support you feel you need to give in terms of their learning.
As well, the coordinator perceived that teaching online hindered communications with individual students, as explained in the following quote:
The other part about teaching through the keyhole is just how little the communication is between myself and the students. … When you’ve got 40, 45 people in a zoom class, and even if you’ve got a couple of hours with the class, you can’t really just have an individual conversation with a student there.
Ultimately, the coordinator wanted to trial using LA through the use of OnTask to provide a more personalized approach to feedback that encourages students to engage in effective self-directed learning tasks outside of class.
Alignment of LA and LD. The expectation that students would complete activities in the learning management system before the online seminar created an opportunity for personalizing feedback around checkpoint analytics. The artifacts are presented as: Table 2, which shows the schedule of personalized feedback messages implemented in this case, along with the relevant LA around checkpoints and performance data; and Figure 3, which shows the alignment between LA and LD.
An example of how one of these messages (message 4), the personalized feedback sent after Quiz 1, was constructed in OnTask is provided in Figure 4. The message is constructed in five main sections (sketched schematically after this list):
  • The student’s name and individual quiz 1 score.
  • Text customized to provide commentary on their Quiz 1 outcome (blue).
  • Text customized to provide advice about participating in class based on their attendance pattern (green).
  • Text customized to provide advice about preparing for class based on their pattern in downloading weekly preparation material from the LMS (red).
  • A sign-off that includes an offer of further individual contact with the subject coordinator.
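The assembly of these five sections can be sketched as follows. Again, this is an illustrative reconstruction rather than OnTask's template syntax; the score bands, column names, and wording are assumptions.

```python
# Illustrative sketch of how message 4 (post-Quiz 1 feedback) is assembled from
# conditional sections; not OnTask's template syntax. Thresholds and wording assumed.
def build_quiz1_message(s: dict) -> str:
    parts = [f"Hi {s['first_name']}, your Quiz 1 score was {s['quiz1_score']}/10."]

    # (blue) commentary on the quiz outcome
    if s["quiz1_score"] >= 8:
        parts.append("Great work; your preparation is clearly paying off.")
    elif s["quiz1_score"] >= 5:
        parts.append("A solid result; reviewing the demonstration questions will "
                     "help you push this higher.")
    else:
        parts.append("This result suggests some gaps; please revisit the explanatory "
                     "videos before the next quiz.")

    # (green) advice about participating in class, based on attendance pattern
    if s["classes_attended"] < s["classes_held"]:
        parts.append("I also noticed you have missed some seminars; attending live "
                     "makes it much easier to work through the practical problems together.")

    # (red) advice about preparing for class, based on LMS download pattern
    if not s["downloaded_week_prep"]:
        parts.append("The preparation materials for next week are on the LMS; working "
                     "through them before Monday will set you up well.")

    # sign-off with an offer of further individual contact
    parts.append("As always, feel free to reply to this email or book a time to chat.")
    return "\n\n".join(parts)
```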
Perceived impact. The interview with the course coordinator noted two main positive outcomes. Firstly, there was a moderate improvement in engagement with the self-directed tasks, as observed from checkpoint analytics, namely, completion of the preparatory work. In weeks where reminders were sent, the course coordinator observed that an average of 29.5% of students completed the preparation before class, compared to an average of 25.6% in other weeks with no reminders. Accordingly, the course coordinator felt that the timing of personalized feedback served as an effective nudge to complete the preparation work.
Importantly, the course coordinator’s deliberate alignment of LA and LD meant that students were receiving personalized nudge reminders on a Friday as a lead up to the seminar on the following Monday, resulting in a regular feedback loop. As noted by the coordinator:
I would send it on the Friday, because we had class on the Monday. …most of the emails were a reminder about what to do in the lead up to Monday’s class. Yeah, it just so happened that I kept that kind of basic rhythm. And it’s so it meant that when they did the quiz either on Tuesday or Wednesday, they had a 48-h window to do it. I had marked it by Thursday. And then on the Friday, I sent out the email at the same time as I release the marks on [LMS] as well. … It was a really condensed and short feedback cycle, which I intended on that.
A second and somewhat unexpected outcome noted by the course coordinator was unsolicited individual emails from students in response to personalized feedback. For example, in the week after the feedback message about Quiz 1 results was sent, 20% of the cohort (n = 9) individually wrote follow-up emails. In these email responses, students often expressed appreciation for the feedback, reminders, or suggestions. Students often also expressed their personal feelings about their assessment results, such as feeling “happy”, “sad”, “surprised and shocked” or “disappointed." Their messages also often contained reflective statements about their existing learning strategies and outlined plans for actioning the feedback, such as paying more attention, spending more time preparing for class, following the suggested tips, and being more interactive in class. In responding to these messages, the course coordinator found that both they and the student would often engage in ongoing, individualized dialogue over time about their progress in actioning these plans and progressing in the course.
Student perceptions. Approximately 44% of the cohort (n = 19) completed the anonymous post-evaluation survey, with ratings averaging between 4.16 and 4.37 out of 6. In this course, the highest-rated item was This feedback and support improved my overall learning experience for this subject (M = 4.37, SD = 1.92). This was followed by This feedback and support allowed me to complete my tasks and studies more effectively (M = 4.21, SD = X), and This feedback made me feel more supported by my instructor (M = 4.21, SD = 1.87). The lowest-rated item was The feedback and support improved the quality and standard of my work (M = 4.16, SD = 1.86).

3.5.3. Case 3: Education (Graduate Certificate)

Course description and learning design. This case was conducted in a new graduate certificate program in education at the institution. The program prepared students for future professions in learning design. The program itself began in 2019, and due to the COVID-19 pandemic, it has always been delivered fully online, comprising asynchronous activities and material hosted on the university LMS and synchronous weekly ‘live sessions’ or seminars. Post-pandemic, this modality was retained, as it allowed for participation from a wider pool of students internationally. Adding to the geographical diversity of cohorts was the fact that students entered the course with different prior knowledge of the discipline. Unlike traditional 12-week courses, the program comprised eight 6-week courses. Accordingly, the program was highly intense; students could complete the entire graduate certificate program over eight months by studying one course every six weeks.
The alignment of LA and LD for personalized feedback was conducted over two iterations of one course in the program. This course introduced students to learning theories, how they related to learning design, and the implications of technology on learning theory. As with the other courses within the program, this 6-week fully online course was modular in nature, exploring a different topic over the first five weeks, culminating in a conclusion at Week 6. The LD was replicated over the five weeks in which topics were explored. Pre-class activities (before the “live session”) involved participation in discussion forums, where two or three pre-class readings on the topic were discussed. During the live session, students performed activities in online labs to learn how to use different digital learning technologies. After the live session, students attended webinars hosted by industry professionals to gain a perspective on learning designers at work. Students also summarized their own learning by completing a short 100-200 word reflection on their learning in the week’s module. In addition to the text requirement, this reflection also comprised one or two questions—depending on the number of learning objectives for each module—asking; How confident are you at [Learning Objective 1]?.
Motivation for using the tool. The primary motivation for the course coordinator to use OnTask was to enhance student belonging. While the online delivery of the program was beneficial for widening enrolment, there were some concerns that this online-only modality might have a detrimental effect on the students’ sense of ‘cohortship’ and their sense of belonging in the course. This could potentially lead to a decrease in engagement with both the synchronous and asynchronous material and possibly to increased dropout rates or reduced student achievement. Moving to a face-to-face modality was not a possibility, as the course coordinator wished to retain the wide appeal and accessibility of the program. Therefore, the coordinator was interested in other ways to foster students’ sense of belonging in the course and drive increased engagement, motivation, and achievement. A possible approach was identified as increasing the teacher-student connection.
There were two further considerations that affected the design choices made. Firstly, similar to Case 2, the course coordinator was interested in trialing an approach that might be suitable for much higher-enrolled courses. Therefore, the workflow put into place needed to be efficient and scalable. Secondly, the wider context was a program about learning design, prompting a secondary motive for using OnTask. The course coordinator had previously explored the tool in another course within the program to show the students how this kind of personalized feedback might be incorporated into course design; in other words, they wanted the implementation of this design to be a teachable moment for students. Having already experimented with personalizing feedback in this way, they now wanted to “play around with giving the students a little bit more feedback on how they were going." Notably, the coordinator realized that there was potentially usable data from students’ reflections to tailor feedback and support regarding their progress. This idea is illustrated in the following comment:
Most of the students do the first part (of the reflection). And so I thought that was unused data. You know, so what’s the point of me asking the question if I’m not actually going to do something with it? … And so I thought, what I should actually try to do is use that to help meet their needs. Specifically, if students said, I’m not confident that I understand [topic], well, what can I do to help them do that?
Alignment of LA and LD. The motivation to support students’ learning personalized to their self-reflection was implemented as shown in Figure 5. The structure of the LD lends itself to a weekly cycle of learning activity, reflection, and personalized support. In so doing, the course coordinator wanted to create a sense of regularity in the students’ engagement with their personalized support through OnTask; that is, they would almost come to expect and even look forward to the messages. The cycle was based on the modular structure of the course.
As described above, students would complete a short, reflective survey that asked them to rate how confident they were that they had met the learning objectives for that module. In this survey, students would indicate whether they felt very confident, moderately confident, slightly confident, or not confident at all. These scores were then added together to provide an overall confidence score for that module. This analysis underpinned the alignment between LA and LD in this course. If a student had indicated that they were very confident across all the learning objectives, a message could be sent to them, providing them with further information or resources or encouraging them to deepen their understanding. Alternatively, if a student had indicated that they were not confident at all, then a different message could be sent that might affirm the student’s concerns and reiterate some of the key messages. After some initial experimentation, the course coordinator decided to use videos for the personalized messages (see Figure 6). In this case, students received a personalized email from the coordinator, directing them to a video that was tailored to their level of confidence (see Figure 7). The video format was chosen for two reasons. Firstly, the feedback was intended as a quick ‘check-in’ and affirmation or support, rather than a detailed and comprehensive summary or overview. The focus was on students’ affective disposition rather than their actual performance in the course (there were other designs in place to support performance). It was felt that video was more engaging than text alone, as the students could see their teacher, thus enhancing the connection with that teacher. Secondly, by pre-recording the videos and hosting them on YouTube, it would be possible to track the number of views each video received, thus providing information about whether students were watching the videos.
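The mapping from self-reported confidence to a tailored video can be sketched as follows; the point values, band boundaries, and video links are illustrative assumptions rather than the coordinator's actual configuration.

```python
# Illustrative sketch of the confidence-to-video mapping; point values, bands,
# and URLs are assumptions, and this is plain Python, not OnTask's rule interface.
CONFIDENCE_POINTS = {"not confident at all": 0, "slightly confident": 1,
                     "moderately confident": 2, "very confident": 3}

VIDEO_BY_BAND = {  # hypothetical pre-recorded check-in videos for one module
    "low": "https://youtu.be/<low-confidence-checkin>",
    "mid": "https://youtu.be/<mid-confidence-checkin>",
    "high": "https://youtu.be/<high-confidence-enrichment>",
}

def module_video(ratings: list[str]) -> str:
    """Sum per-objective confidence ratings and return the matching video link."""
    total = sum(CONFIDENCE_POINTS[r] for r in ratings)
    max_total = 3 * len(ratings)
    if total <= max_total / 3:
        band = "low"
    elif total < max_total:
        band = "mid"
    else:
        band = "high"
    return VIDEO_BY_BAND[band]

# e.g. a module with two learning objectives rated by one student
module_video(["moderately confident", "very confident"])  # -> mid-band video
```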
These emails were designed to increase individual students’ feelings of belonging. The course coordinator was inspired by the work of [28], who argued that educators should know each student, both as an individual and as a learner. This would mean that the learner could see that the educator was invested in the success of each student and cared about their interaction and performance in the course. This notion—of knowing every student as individuals and as learners—is fundamental to many models of belonging and relates to ongoing issues like student retention and success (especially with students who are either first-time students or are returning to study after a significant period—as was the case for many students enrolled in the program).
Perceived impact. From the course coordinator’s perspective, two positive outcomes were observed from this approach of aligning LA with LD for personalized learning support for students. Firstly, after each email with the linked personalized video content was sent out, analytics reports from the LMS showed an uptick in student activity. In particular, the coordinator noted an association between the recommended resources and the students’ activities. The following quote illustrates this point with reference to encouraging students to return to recommended readings and the related discussion boards:
I’ve got [LMS] set up for digests emails. So every day it sends me a digest letting me know, participation in my subjects. And so usually on a Thursday or Friday, I’d send out the OnTask email ... And then over the next couple of days, the digest email would come in and say, you know, student A, posted on this discussion board, Student B posted on this discussion board, and so on.
The second way in which the course coordinator monitored impact was in terms of students’ direct replies to the personalized message. The nature of these responses was interesting: for students who had been less active or less confident with the course material, the response was often an apology and a commitment to be more active in upcoming modules, even though the coordinator had deliberately steered away from language that might make students feel guilty. The more active and confident students often responded with their thoughts on the additional materials recommended, indicating an interest in pursuing a discussion about their learning. However, something that was common to all of the responses received was appreciation for the coordinator’s noticeable engagement with their progress through the course; that is, the students liked knowing that the course coordinator knew how they were going.
Student perceptions. As the individual cohorts were small, survey data were aggregated for the two cohorts where personalized video support was implemented. A total of 7 students from the 2021 cohort and 6 students from the 2023 cohort responded to the survey. Students felt most positively that This feedback made me feel more supported by my instructor (M = 5.00, SD = 0.71), followed by This feedback and support improved my overall learning experience for this subject (M = 4.46, SD = 0.97). To a lesser extent, students felt positive that This feedback and support allowed me to complete my tasks and studies more effectively (M = 4.15, SD = 0.80), and that The feedback and support improved the quality and standard of my work (M = 4.00, SD = 1.08).
Students commented that they appreciated the videos and felt as if their lecturer was talking directly to them—almost as if it were a conversation. They also noted that they felt confident undertaking further modules because of this feedback. One interesting outcome was that some students wanted to watch all the offered videos (i.e., from very confident to not confident at all) and not just the one assigned to their responses. Some students were less enthusiastic about the videos themselves, indicating that they would much prefer to read the feedback, as watching the videos “took too long."

4. Results and Discussion

In this section, we draw out commonalities and differences across the three cases, with reference to each research question, and discuss their relevance to existing literature in this area.

4.1. RQ1. How Do Instructors Align LA with LD in Their Contexts for the Purpose of Personalizing Feedback and Support in Their Courses?

The findings from the three cases illustrate the utility of LA-LD alignment within the context of regular “instructional cycles” [29]. In all three cases, the course coordinators employed learning designs that were highly structured around weekly synchronous classes, with preparation activities before and reflective or summary activities to evaluate learning after these classes. The coordinators aligned LA and LD via checkpoint analytics according to the preparation and reflective/summary activities, thereby creating iterative cycles of learning activity and feedback that are valued by students [30].
Case 3 also provides an interesting example of using data other than that automatically harvested from the LMS to align LA and LD. Many personalized feedback interventions in the literature have tended to rely on readily available data from the LMS [1]. This case demonstrates how it is possible to leverage “dangling data” [31] from a simple reflective activity, recognize its affordance as small but meaningful data [32] indicating students’ subjective states of learning [23], and transform it into metrics to personalize learning support. Similar approaches to personalized support informed by students’ self-reports have been reported with other open AF tools like ECoach [33], but this is still relatively rare.
In view of this, it is notable that frameworks for LA and LD rarely, if ever, include student self-reports of progress among their data categories. The AL4LD framework includes student surveys as possible complementary data sources in the learning analytics layer of the framework, but the purpose of this is mainly to provide information about students’ profiles or satisfaction. Case 3 in this research illustrates how a simple survey can be designed as a reflective activity within a learning design, and the data therein can be used to personalize learning content for students. If this approach becomes more common and evidence continues to emerge around the usefulness of these self-reports, then this could motivate an additional data class for LA-LD frameworks.

4.2. RQ2. How Do Instructors Evaluate the Impact of Personalized Feedback and Support When LA and LD Are Aligned?

Across the three cases, the course coordinators used mainly checkpoint analytics and performance data to personalize feedback and support for students in their courses. In evaluating the impact of the intervention, they also monitored changes in student behaviors around those checkpoint analytics. This demonstrates the utility of such analytics to both support students’ learning in a tailored way and evaluate the impact of such support. This dual use of analytics referenced against LD for timely support of student learning as well as for the evaluation of impact has been noted elsewhere. Furthermore, by aligning LA with LD for personalizing feedback, the course coordinators were able to “bridge the gap between the information provided by LA and the pedagogical designs created” ([7], p. 374). For Case 3, the course coordinator also leveraged simple viewer statistics from the videos personalized to students’ self-reported progress to know whether students were responding to the personalized advice. While raw click data are acknowledged to be coarse-grained and may offer limited insight into how students are learning, their value in this case was as an additional checkpoint analytic for evaluating the impact of the personalized intervention.
Because the personalized feedback was delivered directly to students’ inboxes from the course coordinators’ email addresses, students had a direct communication channel with the instructor. As noted in all three cases, the personalized feedback and support messages commenced a trail of meaningful email correspondence that helped the coordinator further understand the student’s progress with the course activities. Because students were replying directly to the personalized message, the coordinator was able to use this as a source of information regarding the impact of their personalized interventions. More importantly, when these email conversations were sustained over a few exchanges, this fostered dialogic feedback processes [34], resulting from an interplay of cognitive, social-affective, and structural dimensions of feedback. The structural dimension of feedback in particular, that is, the integration of LA-based feedback into learning design, can present a significant challenge [35]. This is due to factors such as the complexity of educational systems, an emphasis on analytics over learning, and a focus on LA as a tool rather than a process. However, LA aligned with LD for personalized feedback through open AF tools such as OnTask, as demonstrated in this multiple case study, offers a solution to this challenge.
Surveys eliciting students’ perceptions of their experience also helped course coordinators understand the impact of LA-LD alignment for personalized support at the cohort level. In the present research, the use of a unified survey facilitated the collation of student perceptions of personalized feedback delivered with OnTask, building an evidence base for the continued support and adoption of the tool at the institution.
With evidence from multiple data sources on the impact of personalized feedback and support based on the alignment of LA and LD, the course coordinators in this multiple case study were able to make informed, data-based decisions about their practice.

4.3. RQ3. How Do Students Perceive Their Personalized Feedback When LA Is Aligned with LD?

Figure 8 summarizes students’ ratings on each of the feedback experience survey items for each case, as well as the overall average across the three cases. Due to the small number of survey responses, it was not possible to perform a comparative analysis of differences between the cases. Across the three cases, students rated most highly the sense of support they felt from their personalized feedback when LA was aligned with LD, as well as the improvement in their overall learning experience in the course. This sense of increased instructor support resulting from personalized feedback is consistent with findings from other studies examining the impact of LA-based feedback with open AF systems such as OnTask. For example, [36] documented that a dominant theme in students’ experiences of feedback was the perception of care by the instructor, which enhanced their motivation to learn.
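For transparency about how such cohort-level summaries can be produced, the sketch below aggregates ratings on the survey items by case and overall, in the manner summarized in Figure 8. The file and item names are hypothetical placeholders rather than the actual survey instrument.

```python
# Illustrative sketch only: aggregating feedback-experience survey ratings by
# case and overall. The file name, column names, and rating scale are assumed.

import pandas as pd

responses = pd.read_csv("feedback_survey.csv")
# Assumed columns: case, felt_supported, improved_experience,
# improved_quality, more_effective (each rated on a Likert scale).

items = ["felt_supported", "improved_experience", "improved_quality", "more_effective"]

per_case_means = responses.groupby("case")[items].mean().round(2)
overall_means = responses[items].mean().round(2)

print(per_case_means)
print("Overall:\n", overall_means)
```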
To a lesser extent, students were also satisfied that the feedback and support improved the quality and standard of their work and allowed them to complete their tasks and studies more effectively. These results suggest a possible area of improvement for personalized feedback based on the alignment of LA and LD. The three cases in this research mostly employed checkpoint analytics and, to some extent, performance data. As noted in the LA-LD literature (e.g., [3,5]), checkpoint analytics alone may not be sufficient to illuminate specific learning processes. For example, more fine-grained process analytics of usage behaviors within the quiz environment may reveal where students were having particular difficulties, which could then inform more specific feedback on how to approach the task more effectively. Future studies could compare the student experience of personalized feedback based on checkpoint versus process analytics, to examine whether the latter shifts students’ perceptions of the quality of feedback for improving their work and helping them to be more effective in their studies.

4.4. Limitations and Future Work

While this exploratory multiple case study has provided insights into how educators use LA to inform LD for the purpose of personalizing feedback, we acknowledge that it is not without limitations. Firstly, the results are limited to three case studies conducted in postgraduate courses at one institution. While we captured cases from different disciplines to maximize variability, these cases may not represent all higher education contexts. Secondly, the data for evaluating personalized feedback were drawn mainly from the perspective of the instructor. We acknowledge that the instructors had limited experience with the tool and therefore may not have fully explored the possibilities of aligning LA and LD with it. Additionally, student responses to the feedback experience survey were limited to a small sample, as not all students chose to give feedback. Relatedly, the survey was anonymous, making it difficult to examine more nuanced impacts; for example, in Case 3, how did students who self-reported low confidence in achieving the learning objectives perceive their personalized feedback with respect to the four items? Thirdly, at the time of this research, OnTask was being piloted at the institution and was therefore not fully integrated with the institution’s data systems. We recognize this as a limitation with respect to instructors being able to fully experience the power of an automated system. Notwithstanding these limitations, the focus of this multiple case study was to understand how educators aligned LA and LD for personalized feedback in real-world contexts, which is recognized as a significant gap in the literature. From the findings of the present study, we identify the following avenues for future research:
  • Documenting and gathering data from a wider range of contexts to capture more variation in approaches to LA-LD alignment for personalized feedback.
  • Including student performance data as an additional source of evidence when examining the impact of LA-LD alignment strategies.
  • Comparing the experiences of educators at institutions where OnTask or similar open AF systems are fully integrated with those in the current study, to examine differences in the use of analytics for personalized feedback.
  • Comparing the student experience of personalized feedback based on checkpoint or process analytics, to examine whether the latter shifts students’ perceptions of the quality of feedback for improving their work and helping them to be more effective in their studies.

5. Conclusions

This research has addressed the increasingly acknowledged need for alignment between LA and LD, especially in real-world teaching practice, which remains a gap in the literature. To ensure grounding in the LD literature, we used the AL4LD framework [21] as a lens to analyze alignment. Through three exploratory case studies intentionally selected for their variation, this research provides empirical evidence of instructors’ practices in aligning LA with LD to personalize feedback and support for their cohorts. Our research builds on emerging work on bridging LA and LD in practice to strengthen the connections between LA and pedagogy.

Author Contributions

Conceptualization, L.-A.L.; formal analysis, L.-A.L., A.A., K.H. and N.S.; methodology, L.-A.L.; writing—original draft preparation, L.-A.L., A.A., K.H. and N.S.; writing—review and editing, L.-A.L., A.A., K.H. and N.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of University of Technology Sydney (protocol code ETH17-1395 and date of approval 19 May 2017).

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Avila, A.G.N.; Feraud, I.F.S.; Solano-Quinde, L.D.; Zuniga-Prieto, M.; Echeverria, V.; Laet, T.D. Learning analytics to support the provision of feedback in higher education: A systematic literature review. In Proceedings of the 2022 XVII Latin American Conference on Learning Technologies (LACLO), Armenia, Colombia, 17–21 October 2022; pp. 1–8. [Google Scholar]
  2. Buckingham Shum, S.; Lim, L.-A.; Boud, D.; Bearman, M.; Dawson, P. A comparative analysis of the skilled use of automated feedback tools through the lens of teacher feedback literacy. Int. J. Educ. Technol. High. Educ. 2023, 20, 40. [Google Scholar] [CrossRef]
  3. Kaliisa, R.; Kluge, A.; Mørch, A.I. Combining Checkpoint and Process Learning Analytics to Support Learning Design Decisions in Blended Learning Environments. J. Learn. Anal. 2020, 7, 33–47. [Google Scholar] [CrossRef]
  4. Viberg, O.; Grönlund, Å. Desperately seeking the impact of learning analytics in education at scale: Marrying data analysis with teaching and learning. In Online Learning Analytics; Liebowitz, J., Ed.; Auerbach Publications: New York, NY, USA, 2021. [Google Scholar]
  5. Lockyer, L.; Heathcote, E.; Dawson, S. Informing pedagogical action: Aligning learning analytics with learning design. Am. Behav. Sci. 2013, 57, 1439–1459. [Google Scholar] [CrossRef]
  6. Rienties, B.; Toetenel, L.; Bryan, A. “Scaling up” learning design: Impact of learning design activities on LMS behavior and performance. In Proceedings of the Fifth International Conference on Learning Analytics and Knowledge, Poughkeepsie, NY, USA, 16–20 March 2015; pp. 315–319. [Google Scholar]
  7. Kaliisa, R.; Kluge, A.; Mørch, A.I. Overcoming Challenges to the Adoption of Learning Analytics at the Practitioner Level: A Critical Analysis of 18 Learning Analytics Frameworks. Scand. J. Educ. Res. 2022, 66, 367–381. [Google Scholar] [CrossRef]
  8. Conole, G. Designing for Learning in an Open World; Springer Science & Business Media: Leicester, UK, 2012; Volume 4. [Google Scholar]
  9. Winstone, N.E.; Nash, R.A.; Parker, M.; Rowntree, J. Supporting learners’ agentic engagement with feedback: A systematic review and a taxonomy of recipience processes. Educ. Psychol. 2017, 52, 17–37. [Google Scholar] [CrossRef]
  10. Arthars, N.; Dollinger, M.; Vigentini, L.; Liu, D.Y.-T.; Kondo, E.; King, D.M. Empowering Teachers to Personalize Learning Support. In Utilizing Learning Analytics to Support Study Success; Ifenthaler, D., Mah, D.-K., Yau, J.Y.-K., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 223–248. [Google Scholar]
  11. Pardo, A.; Bartimote-Aufflick, K.; Buckingham Shum, S.; Dawson, S.; Gao, J.; Gašević, D.; Leichtweis, S.; Liu, D.Y.T.; Martinez-Maldonado, R.; Mirriahi, N.; et al. OnTask: Delivering Data-Informed Personalized Learning Support Actions. J. Learn. Anal. 2018, 5, 235–249. [Google Scholar] [CrossRef]
  12. Liu, D.Y.-T.; Bartimote-Aufflick, K.; Pardo, A.; Bridgeman, A.J. Data-driven personalization of student learning support in higher education. In Learning Analytics: Fundaments, Applications, and Trends; Studies in Systems, Decision and Control; Peña-Ayala, A., Ed.; Springer International Publishing: Berlin/Heidelberg, Germany, 2017; Volume 94, pp. 143–169. [Google Scholar]
  13. Bakharia, A.; Corrin, L.; Barba, P.d.; Kennedy, G.; Gašević, D.; Mulder, R.; Williams, D.; Dawson, S.; Lockyer, L. A conceptual framework linking learning design with learning analytics. In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge, Edinburgh, UK, 25–29 April 2016; Association for Computing Machinery: Edinburgh, UK, 2016; pp. 329–338. [Google Scholar]
  14. Caeiro-Rodriguez, M. Making Teaching and Learning Visible: How Can Learning Designs Be Represented? In Proceedings of the Seventh International Conference on Technological Ecosystems for Enhancing Multiculturality, León, Spain, 16–18 October 2019; pp. 265–274. [Google Scholar]
  15. Toetenel, L.; Rienties, B. Analysing 157 learning designs using learning analytic approaches as a means to evaluate the impact of pedagogical decision making. Br. J. Educ. Technol. 2016, 47, 981–992. [Google Scholar] [CrossRef]
  16. Persico, D.; Pozzi, F. Informing learning design with learning analytics to improve teacher inquiry. Br. J. Educ. Technol. 2015, 46, 230–248. [Google Scholar] [CrossRef]
  17. Gašević, D.; Dawson, S.; Siemens, G. Let’s not forget: Learning analytics are about learning. TechTrends 2015, 59, 64–71. [Google Scholar] [CrossRef]
  18. Shibani, A.; Knight, S.; Buckingham Shum, S. Contextualizable Learning Analytics Design: A Generic Model and Writing Analytics Evaluations. In Proceedings of the 9th International Conference on Learning Analytics & Knowledge, Tempe, AZ, USA, 4–8 March 2019; Association for Computing Machinery: Tempe, AZ, USA, 2019; pp. 210–219. [Google Scholar]
  19. Siemens, G.; Long, P. Penetrating the fog: Analytics in learning and education. EDUCAUSE Rev. 2011, 46, 30. [Google Scholar]
  20. Macfadyen, L.P.; Lockyer, L.; Rienties, B. Learning Design and Learning Analytics: Snapshot 2020. J. Learn. Anal. 2020, 7, 6–12. [Google Scholar] [CrossRef]
  21. Hernández-Leo, D.; Martinez-Maldonado, R.; Pardo, A.; Muñoz-Cristóbal, J.A.; Rodríguez-Triana, M.J. Analytics for learning design: A layered framework and tools. Br. J. Educ. Technol. 2019, 50, 139–152. [Google Scholar] [CrossRef]
  22. Mangaroska, K.; Giannakos, M. Learning analytics for learning design: A systematic literature review of analytics-driven design to enhance learning. IEEE Trans. Learn. Technol. 2018, 12, 516–534. [Google Scholar] [CrossRef]
  23. Ahmad, A.; Schneider, J.; Griffiths, D.; Biedermann, D.; Schiffner, D.; Greller, W.; Drachsler, H. Connecting the Dots—A Literature Review on Learning Analytics Indicators from a Learning Design Perspective. J. Comput. Assist. Learn. 2022; early view. [Google Scholar] [CrossRef]
  24. Salehian Kia, F.; Pardo, A.; Dawson, S.; O’Brien, H. Exploring the relationship between personalized feedback models, learning design and assessment outcomes. Assess. Eval. High. Educ. 2023, 48, 860–873. [Google Scholar] [CrossRef]
  25. Yin, R.K. Case Study Research: Design and Methods, 4th ed.; Sage Publications: Thousand Oaks, CA, USA, 2009. [Google Scholar]
  26. Stake, R.E. Multiple Case Study Analysis; The Guilford Press: New York, NY, USA, 2006. [Google Scholar]
  27. Nicol, D.J.; Macfarlane-Dick, D. Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Stud. High. Educ. 2006, 31, 199–218. [Google Scholar] [CrossRef]
  28. Dinham, S. The secondary head of department and the achievement of exceptional student outcomes. J. Educ. Adm. 2007, 45, 62–79. [Google Scholar] [CrossRef]
  29. Pardo, A.; Jovanovic, J.; Dawson, S.; Gašević, D.; Mirriahi, N. Using learning analytics to scale the provision of personalised feedback. Br. J. Educ. Technol. 2019, 50, 128–138. [Google Scholar] [CrossRef]
  30. Lim, L.-A.; Dawson, S.; Gašević, D.; Joksimović, S.; Fudge, A.; Pardo, A.; Gentili, S. Students’ sense-making of personalised feedback based on learning analytics. Australas. J. Educ. Technol. 2020, 36, 15–33. [Google Scholar] [CrossRef]
  31. Sadler, D.R. Formative assessment and the design of instructional systems. Instr. Sci. 1989, 18, 119–144. [Google Scholar] [CrossRef]
  32. Merceron, A.; Blikstein, P.; Siemens, G. Learning Analytics: From Big Data to Meaningful Data. J. Learn. Anal. 2016, 2, 4–8. [Google Scholar] [CrossRef]
  33. Matz, R.L.; Schulz, K.W.; Hanley, E.N.; Derry, H.A.; Hayward, B.T.; Koester, B.P.; Hayward, C.; McKay, T. Analyzing the efficacy of ECoach in supporting gateway course success through tailored support. In Proceedings of the LAK21: 11th International Learning Analytics and Knowledge Conference, Irvine, CA, USA, 12–16 April 2021; ACM: New York, NY, USA, 2021; pp. 216–225. [Google Scholar]
  34. Yang, M.; Carless, D. The feedback triangle and the enhancement of dialogic feedback processes. Teach. High. Educ. 2013, 18, 285–297. [Google Scholar] [CrossRef]
  35. Tsai, Y.-S. Why feedback literacy matters for learning analytics. In Proceedings of the 16th International Conference of the Learning Sciences (ICLS), Online, 6 June 2022; pp. 27–34. [Google Scholar]
  36. Lim, L.; Dawson, S.; Gašević, D.; Joksimović, S.; Pardo, A.; Fudge, A.; Gentili, S. Students’ perceptions of, and emotional responses to, personalised LA-based feedback: An exploratory study of four courses. Assess. Eval. High. Educ. 2021, 46, 339–359. [Google Scholar] [CrossRef]
Figure 1. Case 1 artifact showing alignment of learning analytics and learning design for automated, personalized feedback.
Figure 2. An example of a rule-based personalized feedback email used in Case 1.
Figure 3. Case 2 artifact showing alignment of learning analytics and learning design for automated, personalized feedback.
Figure 4. Message construction in OnTask, showing rules for personalization of feedback based on checkpoint analytics.
Figure 5. Case 3 artifact showing alignment of learning analytics and learning design for automated, personalized feedback.
Figure 6. Still images of the personalized videos used in the alignment of learning analytics and learning design (Case 3).
Figure 7. Personalized video messages when aligning learning analytics with learning design (Case 3). (Left): Conditions with personalized video message; (Right): Student view of personalized message.
Figure 8. Students’ ratings of personalized feedback using LA aligned with LD: Means for 3 cases and overall mean.
Table 1. Summary of case characteristics.

| Case | Faculty | Course Level | Subject Design | Cohort Size | Session Duration | Checkpoints | Process | Performance |
|---|---|---|---|---|---|---|---|---|
| Case 1 | Engineering and IT | Postgraduate, Masters | Blended learning, modular | 101 | 12 weeks | Login by Week 3; weekly pre-reading participation; weekly tutorial submissions; weekly post-class quiz submission; weekly post-class reflection | — | Assessment task 2 performance |
| Case 2 | Business | Postgraduate, Masters | Project-based, flipped learning | 42 | 12 weeks | Attendance at seminars; downloads of preparatory material topic; downloads of revision materials for exams | — | Assessment performance (summative quizzes) |
| Case 3 | Arts and Social Sciences | Postgraduate, Graduate Certificate | Blended, modular | 15 ¹ | 6 weeks | Student reflection self-ratings | — | — |

Note. All courses were conducted online. The Checkpoints, Process, and Performance columns show the data classes used (based on [21]). ¹ Two cohorts were described in this case, due to small cohort sizes.
Table 2. Schedule of personalized feedback and data used in alignment of learning analytics and learning design (Case 2).

| OnTask Email Message | Date Sent | Purpose | Learning Analytics |
|---|---|---|---|
| Welcome message and instructions for Seminar 1 | Week 0 | Remind students about what they need to do to prepare before Seminar 1 | — |
| Nudge to attempt preparatory material | End of Week 1 | Remind students to work through the preparation material and attend the seminar | Checkpoint analytics: attendance at Seminar 1 (Y/N); downloaded preparatory material, topic 1 (Y/N) |
| Follow-up reminder to form assignment groups | End of Week 2 | Remind students to form a group for the assignment and begin interacting on Microsoft Teams | Checkpoint analytics: attendance at Seminar 1 (Y/N); downloaded preparatory material, topic 1 (Y/N) |
| Personalized feedback after Quiz 1 | End of Week 4 | Provide personalized feedback for Quiz 1, with personalized suggested learning strategies | Checkpoint analytics: registered as group (Y/N) |
| Personalized feedback after Quiz 2 | End of mid-semester study break | Provide personalized feedback for Quiz 2 (noting changes from Quiz 1), with personalized suggested learning strategies | Performance data: Quiz 2 score; Quiz 2 result (Exceed Maintain/Exceed Improve/Meet/Below). Checkpoint analytics: download preparatory materials pattern (0–1 weeks, 2–3 weeks) |
| Strategies for revising the subject | Beginning of Week 11 | Remind students about the availability of weekly self-guided revision tasks | Checkpoint analytics: download revision materials pattern (0, 1–4 weeks, 5+ weeks) |
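To illustrate how the Y/N checkpoint conditions in Table 2 might translate into message variants, the sketch below encodes the end-of-Week-1 nudge as a simple rule set. This is an illustrative approximation only; the branching and message wording are hypothetical and do not reproduce the coordinator’s actual OnTask rules.

```python
# Illustrative sketch only: how two Y/N checkpoint flags from Table 2 could
# drive message variants for the end-of-Week-1 nudge. Field names and wording
# are hypothetical, not the actual OnTask configuration.

def week1_nudge(attended_seminar_1: bool, downloaded_topic_1: bool) -> str:
    """Return a message variant based on two checkpoint flags (Y/N)."""
    if not attended_seminar_1 and not downloaded_topic_1:
        return ("We missed you at Seminar 1. The recording and the Topic 1 "
                "preparation materials are on the LMS; working through them "
                "now will help you get ready for the coming weeks.")
    if attended_seminar_1 and not downloaded_topic_1:
        return ("Thanks for coming to Seminar 1. Don't forget to download the "
                "Topic 1 preparation materials before next week's session.")
    if not attended_seminar_1 and downloaded_topic_1:
        return ("Good to see you've started on the Topic 1 materials. Do try "
                "to join the next seminar so you can discuss them live.")
    return "Great start! Keep up the preparation routine for Topic 2."


# Example: a student who attended but has not downloaded the materials
print(week1_nudge(attended_seminar_1=True, downloaded_topic_1=False))
```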
