1. Introduction
Writing proficiency is central to students' success in educational, personal, and vocational contexts, enabling them to function in society, acquire knowledge, and demonstrate what they have learned. However, novice writers find it challenging to transform their ideas into effective compositions because they lack authentic experiences, linguistic and lexical resources, and automatized knowledge [1,2]. The use of augmented reality (AR) provides students with a realistic and immersive writing learning experience [3]. Several empirical studies have demonstrated the great potential of AR in enhancing learners' writing performance [4], engagement [2], writing motivation [1], and critical thinking [5]. Although these studies identified the multiple benefits of introducing AR into writing education, researchers have also reported the challenges imposed by AR technology. One that must be considered is learners' cognitive overload in an AR learning environment [6]. The volume of material and complexity of tasks in an AR environment might overwhelm students, leading to cognitive overload and diminished learning outcomes [7].
To mitigate this issue, researchers have implemented various pedagogical strategies such as inquiry-based learning, collaborative learning, and project-based learning [7,8]. Of these, collaborative learning strategies such as formative peer assessment (FPA) have been found to be the most effective in AR interventions [8]. FPA promotes interaction, dialogue, and collaborative knowledge construction among peers, thereby helping to reduce individuals' cognitive load and enhance learning outcomes [9,10]. However, there is limited research providing robust evidence on how FPA facilitates AR-based learning. Existing studies have examined its effectiveness in AR-based design [11] and geometry learning [12], leaving its role in AR-based writing instruction underexplored. Moreover, FPA can help learners gain a better understanding of the learning content and engage in deeper thinking [10]. Notably, existing studies suggest that providing feedback contributes more to learners' achievement than receiving feedback [13,14,15]. Nevertheless, these studies mainly focus on theoretical explorations of the benefits for feedback providers, with limited empirical evidence supporting these claims. Regarding the role of peer feedback, several studies have found that peer feedback quality and features play a critical role in determining how peer feedback is implemented [16,17,18]. However, research on how these traits affect learners' writing performance remains limited [19], and studies examining the relationship between writing performance and peer feedback are scarce. Therefore, it is important to identify which specific traits of peer feedback are associated with learners' writing performance.
Furthermore, because learners with different cognitive styles differ in personal preferences, behavioral performance, cognitive strategies, and abilities [20], the impact of a pedagogical approach varies between field-independent (FI) and field-dependent (FD) learners [21,22]. The FD–FI style, which reflects individual differences in visual information processing, can influence learners' behavior and performance in AR environments [23,24]. Moreover, cognitive style has been thought to affect learners' acceptance of different types of teacher feedback, thereby impacting their learning outcomes [25]. However, the effect of FPA on the writing performance of FI and FD learners remains unclear. Understanding how FPA interacts with cognitive style in AR-based writing instruction could provide valuable insights into designing effective AR applications in education.
To address these research gaps, this study proposed an AR-FPA learning approach and investigated its effects on the writing performance of FI and FD learners. Additionally, the study examined whether the quality and features of peer feedback provided by FI and FD learners would predict their writing performance. To this end, an experiment was conducted with fifth-grade Chinese students, who utilized either the AR-FPA or conventional FPA approach to empirically explore the following research questions:
(1) How do the learning approaches (AR-FPA vs. FPA) and cognitive styles (FI vs. FD) affect learners' writing performance?
(2) How do the quality and features of peer feedback provided by learners with different cognitive styles relate to their writing performance across different learning approaches?
As the educational environment evolves, integrating emerging technologies with teaching strategies can better align with future educational development trends and promote sustainable innovation in educational tools and methodologies. A significant contribution of this research is that it represents a first attempt to combine AR technology with the FPA strategy in writing education, illuminating how the interaction between learning approach and cognitive style influences writing outcomes. Furthermore, an empirical experiment was conducted to evaluate the effectiveness of this instructional innovation, providing empirical insights for fostering the sustainability of writing learning and teaching practices. Thus, our research contributes not only to understanding how the AR-based FPA approach and students' cognitive styles shape peer feedback and writing achievements but also to encouraging instructors to actively create adaptive learning environments by integrating new technologies into the educational process.
The remainder of this paper is structured as follows: Section 2 reviews the relevant literature on AR-supported writing instruction, the integration of AR with FPA in education, and the interaction effects of cognitive styles and instructional modes. Section 3 introduces the AR-based formative peer assessment system developed for this study. Section 4 details the research method, and Section 5 presents the experimental results. Finally, Section 6 and Section 7 outline our conclusions and directions for future research.
3. Development of an AR-Based Formative Peer Assessment System
3.1. System Structure and Function
Figure 1 shows the structure of the AR-based formative peer assessment system, which consists of an augmented reality learning system, a formative peer assessment mechanism, and a database management mechanism.
The augmented reality learning system enables teachers to prepare learning materials, design learning scripts, and maintain learning portfolios. Students can use their tablet computers to operate the AR application, observe the learning materials and scripts, and complete the learning tasks. The formative peer assessment module allows students to evaluate their peers' work, provide feedback, review feedback from peers, and revise their own essays. Teachers can design evaluation criteria, issue assessment assignments, and monitor assessment status. Moreover, the database management mechanism records students' learning data in the AR materials database, student portfolio database, peer assessment database, and learning portfolio database.
3.2. Augmented Reality Learning System
In this study, the learning content was based on the concept of "A Magical Forest Adventure Journey", which is included in the primary school writing curriculum in China. An AR application called "Explore Wild Animals", accompanied by a book, was used to deliver the learning content. This application is an AR educational tool specifically designed for children. It transforms the dull and challenging knowledge in traditional books into vivid and engaging three-dimensional (3D) animations. This approach enhances students' understanding and retention of information related to the writing theme while stimulating their interest in writing learning. By scanning the accompanying book with a mobile application on their smartphones or tablets, students can observe virtual scenes and interact with virtual objects. In the AR context, audio and text prompts scaffold content learning, guiding students to interact with virtual objects. Furthermore, students can manipulate the progress bar to revisit scenes of interest, pause to observe details, and take screenshots of key scenes for later review during the writing process (Figure 2).
3.3. Formative Peer Assessment Mechanism
The formative peer assessment system was constructed on the open-source Workshop plugin of Moodle. Its custom features allow teachers to design assessment content and structures that align with specific teaching objectives. This personalized approach not only enhances student engagement but also enables real-time adjustments to the assessment tasks. The system supports various interactive methods, such as written comments, ratings, and dialogues, offering students diverse perspectives on their writing themes, thereby deepening their understanding and reflection on their work. Additionally, by displaying students’ work and achievements, the system motivates them to participate more actively, encouraging them to be not just feedback providers but also active learners and collaborators in the FPA process. In this study, the system was embedded with four specific phases: task performance, peer feedback provision, peer feedback reception, and revision, which corresponded to the phases of FPA activity in this study.
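To make this workflow concrete, the following minimal sketch models the four phases as a fixed sequence. It is an illustrative Python model only, not part of the Moodle Workshop plugin or its API; all names are assumptions introduced here.

```python
from enum import Enum, auto
from typing import Optional

class FpaPhase(Enum):
    """The four FPA phases embedded in the system described in this study."""
    TASK_PERFORMANCE = auto()      # students write and submit a composition
    FEEDBACK_PROVISION = auto()    # students rate and comment on peers' compositions
    FEEDBACK_RECEPTION = auto()    # students read the ratings and comments they received
    REVISION = auto()              # students revise and resubmit their compositions

PHASE_ORDER = list(FpaPhase)  # enum members iterate in definition order

def next_phase(current: FpaPhase) -> Optional[FpaPhase]:
    """Return the phase following `current`, or None once the activity is complete."""
    idx = PHASE_ORDER.index(current)
    return PHASE_ORDER[idx + 1] if idx + 1 < len(PHASE_ORDER) else None

# Example: advancing from feedback provision to feedback reception.
print(next_phase(FpaPhase.FEEDBACK_PROVISION))  # FpaPhase.FEEDBACK_RECEPTION
```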
In task performance, students are typically required to complete a task, such as writing an essay. Before initiating the evaluation activity, the teacher configures the evaluation settings, including the criteria form, assessment prompts, the number of works each student needs to evaluate, and anonymity options. To ensure anonymity, participants' names were replaced with numerical IDs, so they neither knew the identity of the peers they were assessing nor from whom they would receive feedback. In this study, students submitted their compositions to the FPA system, which then automatically assigned two of these compositions to each student for assessment (Figure 3).
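The allocation step can be illustrated with a short sketch. The actual assignment was handled by the system itself; the function below is only a hypothetical illustration of a circular allocation that satisfies the two constraints described above (no self-assessment, and every composition reviewed by exactly two peers).

```python
import random
from typing import Dict, List, Optional

def allocate_peer_reviews(student_ids: List[int], reviews_per_student: int = 2,
                          seed: Optional[int] = None) -> Dict[int, List[int]]:
    """Assign each student `reviews_per_student` peers' compositions to assess.

    A circular (round-robin) allocation over a shuffled order guarantees that
    no student reviews their own work and that every composition receives
    exactly `reviews_per_student` reviews.
    """
    if len(student_ids) <= reviews_per_student:
        raise ValueError("Need more students than reviews per student.")
    order = list(student_ids)
    random.Random(seed).shuffle(order)
    n = len(order)
    return {
        reviewer: [order[(i + offset) % n] for offset in range(1, reviews_per_student + 1)]
        for i, reviewer in enumerate(order)
    }

# Example: six anonymized student IDs, each assigned two peers' compositions.
print(allocate_peer_reviews([101, 102, 103, 104, 105, 106], seed=42))
```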
The peer feedback provision stage asks students to rate and comment on their peers' writings based on the assessment criteria. Figure 4 shows the list of assessment criteria and the input fields for each evaluation dimension: accuracy, organization, expression, creativity, and overall feedback.
During the peer feedback reception phase, students can view the ratings and comments on their work provided by two peers (Figure 5). Additionally, this phase is supported by a peer dialog scaffold that allows students to express their acknowledgements, ask questions about the feedback, and indicate agreement or disagreement with the feedback in a kind manner.
Figure 6 illustrates the interface of the peer dialog scaffold.
After completing the peer feedback reception phase, participants transitioned to the revision stage. In this stage, students revised their compositions based on the received peer feedback and submitted their revised drafts.
4. Methods
4.1. Participants
The inclusion criteria for participants were as follows: (1) students must be native Chinese speakers; and (2) students must not have physical disabilities (e.g., visual impairments) that could diminish their AR-based learning experience. Two classes were selected at random to obtain a representative sample. A total of 94 fifth-grade students, aged 10 to 11 years, from these classes at a Chinese primary school were recruited for this study. None of the participants had prior experience with AR or FPA before the treatment. Six students (two from the AR-FPA group and four from the FPA group) were excluded from the final analyses because they did not complete all the learning tasks. One class was randomly assigned as the conventional formative peer assessment (FPA) group with 43 learners (15 males and 28 females), while the other class was randomly designated as the augmented reality-based formative peer assessment (AR-FPA) group with 46 learners (30 males and 16 females). All participants were informed that their participation was voluntary and that they could withdraw from the study at any time.
4.2. Experiment Procedure
Figure 7 illustrates the experiment procedure, and a detailed comparison of the programs for the two groups is shown in Appendix A. The study was conducted over five weeks, from May to June 2023, with two lessons each week. In the first week, before the intervention, all participants completed a writing performance pre-test. Subsequently, the Group Embedded Figures Test (GEFT) was administered to determine the participants' cognitive styles.
From weeks two to four, both groups participated in 90-min writing learning activities each week. The AR-FPA group engaged in these activities within the AR context, while the FPA group studied the same learning content in a conventional PowerPoint environment. In each session, both groups completed paragraph writing tasks and employed FPA to evaluate their work. Based on the peer feedback received, students could revisit the relevant AR or printed materials and reflect on and revise their paragraphs. By the end of the fourth week, both groups had submitted their complete writing scripts.
In the fifth week, a 60-min online FPA activity was conducted to evaluate students’ first drafts. Students in both groups provided ratings and feedback to their peers. Based on the feedback, they revised their own compositions and completed their second drafts. These revised drafts served as the post-test for writing performance.
4.3. Measuring Tools
4.3.1. Evaluation Scale for Chinese Writing Performance
The evaluation scale for Chinese writing performance was adapted from the composition evaluation scale proposed by Yang et al. [26]. As detailed in Appendix B, the scale consists of four dimensions (i.e., accuracy, organization, expression, and creativity). Each dimension is scored from 1 to 25 points, yielding a maximum total score of 100 points. Two Chinese teachers, each with over 5 years of experience in teaching Chinese writing, graded the students' compositions. The intra-class correlation coefficient (ICC) between the two raters demonstrated a high level of consistency (ICC = 0.834, p < 0.001). Therefore, the average of the two raters' scores was adopted as the final score for each student's writing performance.
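The rater-agreement check and score averaging can be sketched as follows. The study's analysis was run in SPSS (see Section 4.4); the snippet below is a hypothetical Python equivalent using the pingouin package, with made-up scores, shown only to illustrate the procedure.

```python
import pandas as pd
import pingouin as pg  # assumed tooling for illustration; the study used SPSS

# Illustrative (made-up) long-format ratings: one row per essay-rater pair, total score 0-100.
scores = pd.DataFrame({
    "essay": [1, 1, 2, 2, 3, 3, 4, 4],
    "rater": ["A", "B"] * 4,
    "score": [78.0, 81.0, 65.0, 62.0, 90.0, 88.0, 72.5, 70.0],
})

# Inter-rater consistency via the intra-class correlation coefficient.
icc = pg.intraclass_corr(data=scores, targets="essay", raters="rater", ratings="score")
print(icc[["Type", "ICC", "pval"]])

# Final writing score for each essay = mean of the two raters' scores.
print(scores.groupby("essay")["score"].mean())
```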
4.3.2. The Group Embedded Figures Test
A Chinese version of the Group Embedded Figures Test (GEFT), revised by the College of Psychology at Beijing Normal University, was employed to classify the participants' cognitive styles as either field-dependent (FD) or field-independent (FI) [27]. The test comprises 25 items divided into three sections. The first section, consisting of 7 items, has a time limit of 2 min and is designed to familiarize participants with the test format; it does not contribute to the final score. The second and third sections, each containing 9 items, contribute to the final score, with one point awarded for each correct answer, giving a maximum total score of 18. Participants scoring below the norm of 11.4 were labeled as FD, while those scoring above it were assigned to the FI group [20].
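A minimal sketch of this scoring and classification rule, with illustrative function names introduced here, is given below.

```python
def geft_score(section2_correct: int, section3_correct: int) -> int:
    """Total GEFT score: only sections 2 and 3 (9 items each) are scored,
    one point per correct answer, for a maximum of 18."""
    if not (0 <= section2_correct <= 9 and 0 <= section3_correct <= 9):
        raise ValueError("Each scored section contains 9 items.")
    return section2_correct + section3_correct

def classify_cognitive_style(score: int, norm: float = 11.4) -> str:
    """Classify a participant as field-independent (FI) or field-dependent (FD)
    relative to the norm of 11.4 used in this study."""
    return "FI" if score > norm else "FD"

# Example: 7 + 6 correct answers give a score of 13, above the norm, hence FI.
print(classify_cognitive_style(geft_score(7, 6)))  # "FI"
```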
4.3.3. Coding Scheme for Peer Feedback
Feedback quality. A two-dimensional measurement scale adapted from He and Gao (2023) [16] was employed to assess whether the feedback aligned with the writing problem and had the potential to lead to writing improvement. Both the accuracy (Kappa = 0.78) and revision potential (Kappa = 0.81) dimensions were rated on a scale from 0 to 3. The definitions and examples of the feedback quality dimensions are presented in Appendix C.
Feedback features. Following the coding scheme of Wu and Schunn (2020) [17], each implementable comment was double-coded by two researchers for the presence or absence of the following five features: identification (Kappa = 0.69), explanation (Kappa = 0.84), suggestion (Kappa = 0.73), solution (Kappa = 0.77), and mitigating praise (Kappa = 0.79) (see Appendix D for definitions and examples). A code of "0" was assigned for absence and "1" for presence.
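For readers unfamiliar with the agreement statistic, the inter-rater agreement for each binary feature can be computed as Cohen's kappa; the snippet below is a hypothetical illustration with made-up codes (the paper does not specify the software used for this step).

```python
from sklearn.metrics import cohen_kappa_score  # assumed tooling for illustration

# Hypothetical double-coding of ten comments for one feature (e.g., "suggestion"):
# 1 = feature present, 0 = absent, coded independently by the two researchers.
coder_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
coder_2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

print(f"Cohen's kappa: {cohen_kappa_score(coder_1, coder_2):.2f}")
```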
4.4. Data Analysis
This study implemented a 2 × 2 factorial design, where the first factor under investigation was the learning approach (AR-FPA vs. FPA) and the second factor was the cognitive style (FI vs. FD). The dependent variable was the students’ writing performance.
All analyses were performed in IBM SPSS version 26.0, with the significance level set at p < 0.05. The assumptions of normality and homogeneity of regression slopes for the dependent variable were satisfied, indicating that analysis of covariance was appropriate. Therefore, a two-way analysis of covariance (ANCOVA) was conducted, using pre-test writing performance as the covariate, to investigate the interaction effects of learning approaches and cognitive styles on learners' writing performance. For feedback quality and features, Levene's test for equality of variance and the Shapiro–Wilk test for normality were violated (p < 0.05); therefore, the Kruskal–Wallis H test was employed instead of a one-way analysis of variance (ANOVA) to examine differences among the conditions, and the Mann–Whitney U test was used as a post-hoc test to explore specific pairwise differences among the groups. Finally, correlation analysis and multiple linear regression analyses were conducted to explore how pre-test writing performance and the quality and features of peer feedback provided by students related to their post-test writing performance.
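The analysis pipeline described above can be sketched as follows. This is not the study's SPSS syntax but a hypothetical Python equivalent run on synthetic data; all variable names, column names, and values are assumptions introduced for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

# Synthetic data for illustration only; not the study's dataset.
rng = np.random.default_rng(0)
n = 80
df = pd.DataFrame({
    "approach": rng.choice(["AR-FPA", "FPA"], n),
    "style": rng.choice(["FI", "FD"], n),
    "pretest": rng.normal(70, 8, n),
})
df["posttest"] = df["pretest"] + rng.normal(5, 6, n)
df["quality"] = rng.integers(0, 4, n)  # feedback quality rating on the 0-3 scale

# Two-way ANCOVA: post-test writing performance by learning approach x cognitive style,
# with pre-test performance as the covariate.
ancova = smf.ols("posttest ~ pretest + C(approach) * C(style)", data=df).fit()
print(sm.stats.anova_lm(ancova, typ=3))

# Kruskal-Wallis H test on feedback quality across the four approach-by-style conditions.
groups = [g["quality"].to_numpy() for _, g in df.groupby(["approach", "style"])]
print(stats.kruskal(*groups))

# Mann-Whitney U test as a post-hoc comparison for one example pair of conditions.
ar_fi = df.query("approach == 'AR-FPA' and style == 'FI'")["quality"]
fpa_fi = df.query("approach == 'FPA' and style == 'FI'")["quality"]
print(stats.mannwhitneyu(ar_fi, fpa_fi))

# Multiple linear regression: post-test performance on pre-test score and feedback quality.
reg = smf.ols("posttest ~ pretest + quality", data=df).fit()
print(reg.summary())
```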
7. Conclusions
Integrating the FPA instructional strategy and AR learning context into writing instruction enhances the writing performance of students with both field-dependent and field-independent cognitive styles. Furthermore, cognitive styles significantly moderate the impact of learning approaches on writing performance; specifically, FI learners benefit more from the AR-FPA approach, while FD learners benefit more from the conventional FPA approach. However, the quality and features of peer feedback provided by students show little to no significant relationship with their writing performance. These findings contribute to the understanding of cognitive style theory and offer a valuable reference for the selection of instructional strategies in writing education. Nevertheless, further investigation into the relationships between peer feedback traits and writing performance is warranted.
In conclusion, the integration of AR with FPA can be considered a valuable approach to the sustainable development of technology-enhanced writing instruction. This research may engage scholars in related areas, such as AR in education, peer assessment, Chinese writing, and educational technology, thereby encouraging their active participation in further exploration of this topic.