Article

Effects of Cognitive and Metacognitive Prompts on Learning Performance in Digital Learning Environments

1 Department of Educational Research, University of Salzburg, Hellbrunnerstr. 34, 5020 Salzburg, Austria
2 Mittelschule Taxham, Franz-Linher-Straße 4, 5020 Salzburg, Austria
* Author to whom correspondence should be addressed.
Knowledge 2023, 3(2), 277-292; https://doi.org/10.3390/knowledge3020019
Submission received: 28 April 2023 / Revised: 4 June 2023 / Accepted: 7 June 2023 / Published: 14 June 2023

Abstract

Self-regulated learning (SRL) requires learners’ active participation, i.e., they need to activate cognitive and metacognitive learning strategies. These strategies can be activated and supported by using cognitive and metacognitive prompts. Extensive research on the effects of prompts on SRL is necessary to determine the connections between these two concepts. Our study investigates the effects of cognitive and metacognitive activities—i.e., prompts—on learning performance during SRL. Therefore, we developed three learning environments that provide cognitive prompts, metacognitive prompts, or no prompts. In addition, we used a questionnaire to examine prior knowledge and post-knowledge. Pre- and post-tests show that self-confidence in prior knowledge has a significant effect on self-confidence in post-knowledge, that cognitive prompts reduce extrinsic motivation, and that knowing how to use cognitive learning strategies enables learners to use cognitive prompts more effectively. These results are partially in line with existing research findings on the effects of prompts in SRL.

1. Introduction

The following study incorporates three different fields: self-regulated learning, learning strategies, and prompts. Due to the rise of digital learning technologies, self-regulated learning has gained significant importance, especially over the last three years [1]. Schools and universities were forced to switch to online teaching, and learners had to adapt to these changes. Consequently, learner autonomy and the regulation of learning processes became integral parts of learners’ skill sets [2]. With regard to learning strategies, using cognitive and metacognitive strategies improves learning outcomes [2,3]. However, research in this field shows inconsistent results; for instance, different cognitive learning strategies, emotions, and motivation can have contradictory effects on learning results [3,4,5]. Therefore, we aim to contribute to this research field in order to reduce this divergence of findings and to provide further insight into self-regulated learning.
Research on cognitive and metacognitive learning strategies provides an accurate and deep understanding of how individuals acquire and retain knowledge [6]. Cognitive learning strategies refer to techniques and approaches that learners use to process information, such as summarizing, note-taking, and elaboration [3]. In contrast, metacognitive learning strategies involve learners’ awareness and control of their own cognitive processes, such as planning, monitoring, and evaluating their own learning [7]. Understanding how learners use these strategies also improves teaching practices and educational interventions [8]. For instance, Broadbent and Poon [7] discovered positive connections between metacognitive strategies and academic achievement. Moreover, research on cognitive and metacognitive strategies can help individuals to become more self-aware and to reflect on their own learning processes, leading to better learning outcomes [3]. Based on these findings, we aim to investigate learners’ use of cognitive and metacognitive learning strategies in self-regulated learning scenarios by providing cognitive and metacognitive prompts.
Based on cognitive and metacognitive learning strategies, cognitive and metacognitive prompts can be used to improve learning, especially in online learning scenarios [9]. Moreover, cognitive and metacognitive prompts facilitate learning and promote problem-solving strategies [10]. Cognitive prompts guide learners to use specific cognitive strategies, such as asking questions or making connections between ideas, whereas metacognitive prompts encourage learners to regulate and reflect on their own learning processes, such as setting goals or monitoring their progress. Using cognitive and metacognitive prompts improves learning outcomes, motivation for learning, and self-efficacy [8]. Research also suggests that cognitive and metacognitive prompts activate cognitive and metacognitive learning strategies [11]. These theoretical examinations and research findings form the basis of our investigation into self-regulated learning with cognitive and metacognitive prompts.

2. Theoretical Framework

Based on current literature, the following section provides an overview of factors influencing self-regulated learning. In addition to learners’ characteristics, we also show that using cognitive and metacognitive strategies as well as integrating prompts can promote the efficacy of self-regulated learning.

2.1. Self-Regulated Learning

Self-regulated learning (SRL) is a key competence for successful learning in digital learning environments [12]. It is a dynamic process characterized by the active participation of learners. During self-regulated learning, individuals need to activate cognitive and metacognitive strategies and be aware of their prior knowledge and skills [7,13,14]. Self-regulated individuals are able to plan their learning, set goals, and autonomously acquire new information [5,15]. Previous research in this domain [16,17] shows that individuals who are able to monitor and regulate their cognition, motivation, and behavior are more likely to engage in deep learning processes and demonstrate higher academic achievement than learners with low self-regulation skills [18].
In order to determine how behavioral, motivational, and cognitive components interact, several models for SRL have been developed. A widely cited framework in the domain of SRL is the 3-phase model of Zimmerman [19]. As the name suggests, the model assumes that one’s SRL is organized in three phases: forethought, performance, and self-reflection. During the forethought phase, learners analyze the learning content, plan their learning behavior, and set goals for the current learning task. Based on their prior knowledge, learners make assumptions about how efficiently they can solve the task and which cognitive and metacognitive strategies are necessary to control learning. Furthermore, motivational beliefs towards the learning task are determined in this phase. During the performance phase, actual SRL takes place. To stay concentrated, learners need to increase their attentional control and use cognitive and metacognitive strategies. The performance phase is generally followed by a self-reflection phase where learners reflect on their performance and ensure that they understand the learning content. Depending on their learning success, decisions regarding further learning actions are made [19,20].
Similar to the model by Zimmerman [19], the Dual Processing Model of Boekaerts [21,22,23] distinguishes between three phases of self-regulation: regulation of the self (choosing goals and resources), regulation of the learning process (using metacognitive knowledge and skills to direct one’s learning), and regulation of the processing modes (choosing cognitive strategies). To control knowledge processing, learners use their learning experience and choose appropriate cognitive strategies to reach their learning goal. To stay focused, learners need to regulate and control their learning processes using metacognitive strategies. In addition to the regulation of the self and of the learning processes, the third level of the model is concerned with learners’ motivational resources. During SRL, individuals need to motivate themselves to pursue their learning goals.
However, while the 3-phase model of Zimmerman [19] is more static in nature, in the model of Boekaerts [21] the order of self-regulation processes is not fixed and can be changed depending on the learning task. The choice of goals and resources can also be placed at the beginning of the self-regulation process, since learners’ motivation is of central importance to actively engage in learning [24,25].
In sum, these models show that individuals need to engage in various controlling, regulation, and monitoring processes during SRL. However, learners can face difficulties in regulating their learning [26,27]. These difficulties can stem from learners’ individual characteristics or from the learning environment. Following the model of good information processing [28], learning can be influenced by interindividual or intraindividual differences. Interindividual differences refer to the fact that learners differ in their learning activities and their learning success. Intraindividual differences describe variations in the performance of a single learner. Consequently, individual performance can be influenced by various factors such as the time it takes to complete a task or the content-related learning settings [29,30]. Hasselhorn et al. [31] and Bosch et al. [32] showed that intraindividual as well as interindividual factors need to interact in a positive way to enable successful learning. This might be even more important in SRL situations, where learners engage in cognitive, metacognitive, and motivational control processes. In order to define the conditions of successful SRL, situational factors and individual learning prerequisites are highlighted in the following sections.

2.1.1. Motivation

Learners’ motivation is a central aspect of successful SRL. It determines students’ engagement in a learning experience and whether the learned information is stored in long-term memory [33,34]. However, learning motivation can be influenced by various aspects. For instance, learners’ motivational orientation—i.e., their intrinsic and extrinsic motivation—is one of these aspects. Intrinsic motivation means that learning is driven by learners’ inherent interest, their enjoyment of task completion, and their internal satisfaction. In contrast, extrinsic motivation means that learners engage in learning for other reasons, such as rewards or punishment [35,36]. Kotera et al. [12,37] as well as Froiland [38] point out the significant role of intrinsic motivation in terms of learning engagement and academic achievement. According to the Expectancy–Value Theory of Eccles and Wigfield [39], task value, perceived self-efficacy, anticipated effort, and task difficulty also play a major role in learners’ motivation. Confronted with a learning task, individuals weigh the costs—invested time and effort—against the value of task completion [40,41]. Whether learners feel capable of solving a task depends on their prior knowledge and their academic self-concept. In general, academic self-concept is a learner’s evaluative perception of their own academic abilities, shaped by past academic achievement. Consequently, it affects subsequent academic achievement through students’ engagement [14,42].

2.1.2. Cognitive Load

Cognitive aspects can also influence the success of SRL. For instance, Cognitive Load Theory (CLT) is concerned with humans’ cognitive architecture [43,44,45]. It assumes that new information needs to be processed in working memory before it is stored in long-term memory. While long-term memory offers nearly unlimited information storage, the processing capacity of working memory is limited [43]. That is, if a task exceeds a learner’s cognitive processing capacity, learning is affected negatively. CLT postulates three categories of cognitive load in order to characterize the demands imposed on working memory: intrinsic cognitive load (ICL), extraneous cognitive load (ECL), and germane cognitive load (GCL). ICL is determined by the number of interacting elements of a task and learners’ prior knowledge. In contrast, ECL describes a learning-irrelevant type of load. GCL is concerned with the connection of new information with long-term memory representations and describes actual learning [46].
When using digital learning environments, students are confronted with various information resources; hence, the Cognitive-Affective Theory of Learning with Media (CATLM; [47]) assumes that cognitive and motivational resources need to be regulated to prevent cognitive overload. Grafe [48] as well as Arnold et al. [49] showed that scaffolding encourages learners to activate cognitive and metacognitive strategies, leading to reduced cognitive load. However, prompts can also have negative effects. For example, Berthold et al. [6] found that using prompts can cause cognitive overload. This might be due to the fact that cognitive load is already increased during SRL: learners have to process new information, plan their actions, and control their learning behavior. Scaffolds also need to be processed in working memory. Therefore, it is possible that instead of triggering metacognitive activation, using prompts exceeds learners’ cognitive capacities and impedes learning [6]. Element interactivity and established knowledge in long-term memory influence the perceived cognitive load. Consequently, learners’ prior knowledge is central for successful SRL.

2.1.3. Prior Knowledge

Prior knowledge influences how much effort a learner needs to invest to solve a task. If task-related information has already been stored in long-term memory, learners with more prior knowledge can integrate the new learning content into already established memory schemes. Learners with little or no prior knowledge, however, cannot rely on long-term memory schemes; hence, the cognitive load imposed by the same task is higher for them [24,45]. For example, Bannert [27] and Taub et al. [50] discovered that learners with little prior knowledge in particular have problems integrating scaffolds into their learning process. This might be because the participants did not have enough working memory capacity to process cognitive and metacognitive scaffolds during learning [51]. In contrast, Kapa [52] showed that learners with less prior knowledge rely more on prompts to regulate learning than learners with more prior knowledge. Thus, studies show divergent results concerning the use of metacognitive scaffolds by learners with little prior knowledge. Moreover, learners with more prior knowledge generally demonstrate more self-reflective and monitoring behavior and are more likely to use learning strategies [53]. In sum, the regulation of motivational, behavioral, and cognitive processes seems to be central for successful SRL. However, Schumacher and Ifenthaler [54] demonstrate that learners struggle to self-regulate learning without guidance. Learning strategies are an effective means to control learning processes; hence, cognitive and metacognitive strategies are presented in the following paragraphs.

2.2. Learning Strategies

Self-regulated learning is gaining importance in modern societies due to changes in the education system, such as the increasing use of digital technologies. For example, information is not presented exclusively by teachers. Instead, it is acquired in autonomous learning environments. In order to control and regulate learning behavior, learners need to use various learning strategies [55,56,57]. Theobald et al. [5] distinguish three types of learning strategies: cognitive strategies, metacognitive strategies, and resource management strategies. While resource-oriented strategies are applied to regulate and maintain the learning process, cognitive and metacognitive strategies are directly influenced by the learning content [31,58,59]. Our study focuses on the effects of cognitive and metacognitive activities on learning performance during SRL; consequently, strategies to support these processes are highlighted.

2.2.1. Cognitive Learning Strategies

Cognitive strategies can help learners to identify, acquire, and process new information and to integrate it into established knowledge structures. Three types of cognitive strategies are distinguished: elaboration strategies, organization strategies, and retry strategies [60]. First, elaboration strategies help learners to integrate new information into long-term memory representations. These strategies aim at finding analogies between new content and learners’ prior knowledge [58]. Second, organization strategies make complex information more accessible for learning. Learners break down complex tasks by paraphrasing the learning content. They also draw paper-and-pencil schemes or diagrams to identify central arguments or dates. Graphical representations help to relate important pieces of information in order to establish new memory schemes and to engage in deep learning processes [61]. Third, retry strategies are applied to restudy the learning content and to strengthen the integration of new information into already established knowledge schemes [5,13,25,62].

2.2.2. Metacognitive Learning Strategies

In contrast to cognitive strategies, metacognitive strategies are applied to control the learning process. Metacognitive strategies include planning and control strategies. They focus on individuals’ abilities to plan, control, and regulate cognitive processes. While planning strategies are generally applied before actual learning begins, control strategies help learners to stay in control of their learning process. During learning, individuals engage in controlling behavior by reflecting on their learning behavior and evaluating whether they understand the content. Hence, controlling processes are often followed by regulation strategies. These strategies help learners to adapt their learning behavior in order to achieve certain learning objectives [63,64,65,66]. In conclusion, cognitive and metacognitive strategies are important for successful learning. However, Schuster [67] showed that using these strategies can be problematic. In SRL scenarios, these difficulties can be explained by interindividual differences such as learning motivation and prior knowledge. Because learners’ motivation and prior knowledge vary depending on the learning domain and the study setting, supporting the use of cognitive and metacognitive strategies is important.

2.2.3. Indirect and Direct Support

There are various ways to enhance the use of learning strategies. Learners’ experience in using these strategies determines which type of support they need. If they have not yet established a repertoire of metacognitive and cognitive strategies, direct support is necessary [64]. That is, learning strategies are explained and learners receive instructions on how and when to use them [66]. Although this process is very time-consuming, Carreti et al. [68] found positive effects of direct learning support on learners’ planning skills, text comprehension, and metacognitive control. In contrast to direct support, indirect methods support individuals in using strategies they already know [69,70]. For instance, prompts are used to script learners’ behavior during the learning process. According to Zumbach et al. [10], scaffolding students’ behavior by using prompts is an effective strategy to foster learning. Consequently, different types of prompts, their use, and their influence on cognitive and metacognitive factors are discussed below.

2.3. Prompts

Research shows that prompting stimulates cognitive, metacognitive, and motivational processes and positively influences learning [65]. The rationale for using prompts is that learners generally know about learning strategies but fail to apply them. Prompts can support learners’ knowledge acquisition and help them to regulate their learning process [27,64]. They aim at triggering the activation of cognitive and metacognitive learning strategies by directing learners’ attention towards relevant information [11,71]. Lin et al. [72] distinguish three types of prompts: process prompts, explicit prompts, and structured prompts. In general, prompts activate acquisition and monitoring processes; process prompts, in particular, are useful in problem-oriented learning contexts. They support learners’ monitoring and the reevaluation of the learning process; therefore, process prompts are of special interest in the present study. In this context, scaffolds can be short questions aiming at directing learners’ attention towards efficient learning processes [73]. That is, process prompts are indirect methods of learning support. In contrast, explicit prompts—such as process models—encourage learners’ use of cognitive and metacognitive strategies. Process models are based on the principle of model learning: problem-solving models guide learners through learning processes. Last, structured prompts can encourage learners to visualize otherwise invisible processes. Scripted process displays help learners to visualize their actions during the learning process and to react to the chosen strategies. Depending on the learning success, strategies can be maintained or study behavior can be readjusted to reach the learning goal.

Effectiveness of Prompts

Positive effects of prompts on learning behavior and performance have been shown in numerous studies [54,65,74,75]. The present study focuses on the efficacy of prompts. Therefore, we provide an overview of articles presenting the effects of cognitive and metacognitive prompts on SRL, learning performance, and cognitive load. Furthermore, prerequisites that help students to use scaffolds to enhance their learning behavior are highlighted. Regarding the effects of prompts on learners’ self-regulation, Daumiller and Dresel [76] showed that metacognitive prompts can positively influence motivation and enhance SRL behavior. Dori et al. [77] showed that metacognitive scaffolds help learners engage in monitoring activities and thus foster deep learning processes. This can increase knowledge gain and improve long-term learning outcomes. The effects of prompts in virtual settings are of central importance since learners need to self-regulate learning in digital learning environments. For example, Castronovo et al. [78] and Engelmann et al. [79] investigated the effects of metacognitive scaffolds on learning behavior in computer-based learning environments. The results showed that problem-solving skills and engagement in deeper analysis of the learning content increased in the experimental groups. Chen et al. [8] and Kriegelstein et al. [80] examined the influence of cognitive and metacognitive prompts on learners’ motivation and cognitive load. Both studies showed that using prompts has positive effects on learners’ motivation. Moreover, scaffolding facilitates learning in dynamic environments and has positive effects on cognitive load. Regarding the influence of cognitive and metacognitive prompts on cognitive load, Chen et al. [8] specified that cognitive prompts did not cause significantly more mental load but did result in higher mental effort when compared with metacognitive prompts.
In sum, prompts positively influence learning behavior by supporting learners’ engagement in deeper learning processes. However, Pieger and Bannert [81] as well as Richey et al. [82] stress that using prompts is not equally beneficial for all learners. For instance, Pieger and Bannert [81] showed that prompts are more beneficial for learners with less learning experience. In contrast, Richey et al. [82] discovered that expert learners are more likely to profit from scaffolds than novice learners. Regarding these divergent results, the purpose of the present study was to examine the influence of prior knowledge, learning motivation, and cognitive load on the efficacy of prompts.

2.4. Open Research Questions

This study sets out to examine the effects of prompts on learning performance as well as motivational and cognitive processes. In addition, the influence of prior knowledge and learners’ metacognitive strategy knowledge on the use of prompts was analyzed. That is, we examined which type of prompt—cognitive or metacognitive—is more effective in improving learning performance. Another objective of the study was to find out whether factors such as prior knowledge, motivation, and metacognitive strategy knowledge influence the perceived cognitive load in prompt-based learning environments. Based on previous research and literature [8,78,80], we assumed that learning with prompts leads to better learning outcomes than learning without prompts (Hypothesis 1; see Figure 1). We divided H1 into two sub-hypotheses (H1a, H1b). H1a assumed that the groups with prompting show better learning outcomes than the control group. H1b stated that metacognitive prompting leads to better learning outcomes in comparison to cognitive prompting. However, previous research showed that learners with less prior knowledge had difficulties using scaffolds to support their learning processes [27,50]. Based on these findings, we expected that learners with more prior knowledge would achieve better results in the knowledge posttest (Hypothesis 2). We also examined the influence of scaffolds on cognitive and motivational processes. As shown by Eccles and Wigfield [40], learners only engage in solving a task if the costs of task completion do not exceed its values. Scaffolds can help learners to regulate their learning behavior by directing their attention toward effective learning behavior [73], so we expected prompts to have positive effects on motivation (Hypothesis 3) and cognitive load (Hypothesis 4). Finally, based on findings by Zumbach et al. [10], we also assumed that using prompts is more effective for learners with more metacognitive strategy knowledge than for learners with less metacognitive knowledge (Hypothesis 5). All hypotheses correspond to previous research in this field [76,80,83].

3. Material and Methods

The objective of the study was to examine the effects of cognitive and metacognitive scaffolds—i.e., prompts—on motivation, cognitive load, and learning outcomes in a prompt-based learning environment. We also analyzed the effects of prior knowledge and metacognitive strategy knowledge on learning motivation, perceived cognitive load, and learning performance.

3.1. Sample

A total of 100 learners (71 female, 29 male; mean age: M = 20.38; SD = 5.52) participated in this study. No rewards were given. The participants were randomly assigned to one of the three experimental conditions. Those three conditions were learning with cognitive prompts, learning with metacognitive prompts, and learning without prompts. Participation was voluntary, and all data were obtained anonymously. All participants were advised about privacy and agreed to the terms and conditions of the study.

3.2. Design

In our study, the presence or absence of prompts served as the independent variable. That is, prompts were either present as metacognitive or cognitive prompts or there were no prompts at all. The group without prompts functioned as the control group. Moreover, all three learning sessions—i.e., the three experimental conditions—were identical in content and differed only in the scaffolding, i.e., the prompts that were used. The following variables were included in the study: academic self-concept, extrinsic and intrinsic motivation, self-confidence in pre- and post-knowledge, and results in the knowledge post-test. Control variables were assessed by questionnaires after completion of the learning task. Covariates were selected based on the results of a correlation analysis with the results of the knowledge post-test and self-confidence in post-knowledge as dependent variables. This process led to the following covariates: results and self-confidence in the knowledge pre-test, deep processing, metacognitive monitoring and planning, cognitive elaboration, and germane and extraneous cognitive load.

3.3. Material

Learning Environment

To test cognitive and metacognitive scaffolding, three different test conditions were developed to which the participants were randomly assigned in a prompt-based learning scenario. The learning environment was created using the online survey tool LimeSurvey. First, the story of a fictional student called Clara was presented to all participants using three videos and a written storyline. In the story, Clara experiences a drop in performance in mathematics class and makes assumptions about its reasons. In addition, statements and assumptions about the drop in performance made by her mother and her mathematics teacher were provided. Next, a learning unit on attribution theory based on the fictional scenario was implemented. This unit was different for each of the three experimental groups. The learning task was divided into five sections. One group was provided with cognitive prompts, the second group received metacognitive prompts, and the last group did not receive any prompts. Participants in both prompting groups were advised to take notes while they worked on the learning task, and text boxes for note taking were provided in the survey. An example of a cognitive prompt reads: “Think about which of the following contents you understood well/not well. What are attributions and control cognitions? What are the differences that cause attributions between individuals?” In contrast, metacognitive prompts were formulated as follows: “Are the previous contents clear to you? Write down the most important terms and briefly explain what they mean.”

3.4. Instruments

For all scales, Cronbach’s alpha (α) was assessed, and items were removed where indicated below.
Prior knowledge and knowledge acquisition. In order to assess prior knowledge and knowledge acquisition, we developed a test based on the learning environments’ objectives. This test was implemented as an online questionnaire. It consisted of 14 single-choice questions, i.e., four response options were given and participants had to choose the right answer. Pre- and post-tests of the study were the same. Additionally, in the pre-test, the option “I don’t know” was given. All correct answers were added up to an overall score.
Learning strategies. We used the LIST questionnaire (Lernstrategien im Studium, i.e., learning strategies at university; [61]) to examine knowledge of learning strategies. This questionnaire includes three scales assessing learners’ cognitive, metacognitive, and resource-oriented learning strategies. We combined three subscales—two metacognitive and one cognitive—to examine learners’ planning, monitoring, and elaboration strategies. Planning was assessed using seven items (α = 0.79), for example, “Before I start to learn, I plan the best way possible to handle the learning content.” Monitoring strategies were investigated using four items (α = 0.71), for instance, “I ask myself questions about the learning content to make sure I understand everything.” One item had to be eliminated from this subscale because it deteriorated Cronbach’s alpha. Cognitive elaboration strategies were measured using eight items (α = 0.79), such as, “I try to connect the learning content to my own experiences.” All items were rated on a 5-point Likert scale ranging from 1 (completely agree) to 5 (completely disagree).
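The paper reports Cronbach’s alpha for each (sub)scale but does not describe how it was computed. As a minimal sketch, assuming each subscale is stored as a block of item columns in a data frame (the column names and the randomly generated responses below are hypothetical), alpha can be obtained from the item variances and the variance of the sum score:

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one scale; rows = respondents, columns = items."""
    items = items.dropna()
    k = items.shape[1]                              # number of items
    item_var_sum = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the sum score
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Hypothetical data: 100 respondents answering seven planning items on a 1-5 scale.
rng = np.random.default_rng(42)
planning = pd.DataFrame(rng.integers(1, 6, size=(100, 7)),
                        columns=[f"plan_{i}" for i in range(1, 8)])
print(f"alpha = {cronbach_alpha(planning):.2f}")
```

Recomputing alpha after dropping one item at a time shows whether removing an item improves internal consistency, which is presumably how the item eliminations reported here were decided.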
Academic self-concept. In order to assess academic self-concept as a control variable, we used the SESSKO questionnaire developed by Dickhäuser et al. [84]. We used five items on a 5-point Likert scale ranging from 1 (completely disagree) to 5 (completely agree) to measure learners’ general academic self-concept (α = 0.80), e.g., “I am intelligent.”
Motivation. In the pre-test, we integrated motivation as a dependent variable using the Motivated Strategies for Learning Questionnaire (MSLQ; [85]). This self-evaluation questionnaire consists of two subscales assessing learners’ motivational strategies and their motivational orientation. We used eight items to assess participants’ intrinsic (α = 0.83) and extrinsic motivation (α = 0.91). A sample item for intrinsic motivation is “I prefer tasks that require learning new things.” Extrinsic motivation was assessed using items such as “I try to get better grades than the rest of the students.” Each of the four items was presented twice in slightly rephrased form. Intrinsic and extrinsic motivation were rated on a 5-point Likert scale ranging from 1 (very often) to 5 (very rarely).
Cognitive load. To assess the cognitive load after completion of the learning task, we used the Naïve Rating Questionnaire provided by Klepsch et al. [86]. This self-evaluation questionnaire consists of three subscales measuring intrinsic (ICL; α = 0.77, e.g., “This task was very complex.”), germane (GCL; α = 0.71, e.g., “For me, it was important to understand the learning content.”), and extraneous cognitive load (ECL; α = 0.81; “During the task, it was exhausting to find the important information.”) using a 5-point Likert scale ranging from 1 (completely disagree) to 5 (completely agree). While ICL was measured by two items, participants’ GCL and ECL were assessed using three items each.
Learning approaches. We also used the Revised Two-Factor Learning Process Questionnaire (R-LPQ-2F; [87]) to assess learning approaches as covariates. This questionnaire consists of two subscales assessing deep and surface learning approaches. Deep processing was assessed using four items (α = 0.69), e.g., “I try to link information of different domains.” In contrast, surface processing was examined using seven items (α = 0.70), e.g., “I generally restrict my study to what is specially set, as I think it is unnecessary to do anything extra.” One item was excluded due to the deterioration of Cronbach’s alpha. The items were rated on a 5-point Likert scale ranging from 1 (completely disagree) to 5 (completely agree).

4. Results

First, we present the descriptive results of our study. Then, a MANCOVA with several covariates was conducted in order to analyze differences between the groups with prompts and the control group, followed by further variance analyses for motivation, academic self-concept, and cognitive load measures. Finally, correlational results concerning learning strategies are reported.

4.1. Descriptive Results

Descriptive results concerning prior knowledge and post-knowledge (Table 1) showed successful learning. That is, comparing pre- and post-tests revealed a significant increase in knowledge. This increase occurred in all three groups, regardless of the type of prompts used or their absence. The metacognitive prompts group showed post-knowledge scores similar to those of the group without prompts, whereas the cognitive prompts group showed the highest scores in both prior knowledge and post-knowledge. Self-confidence in pre- and post-knowledge increased in all three groups; however, the cognitive prompts group demonstrated the highest level of self-confidence in post-knowledge.

4.2. Post-Knowledge and Self-Confidence in Post-Knowledge

A MANCOVA with post-knowledge and self-confidence in post-knowledge as dependent variables was conducted. Covariates were included based on correlations between post-knowledge and the control variables. The covariates included in the analysis were self-confidence in prior knowledge (correlation with post-knowledge: r(100) = 0.31; p = 0.001), prior knowledge (r(100) = 0.38; p < 0.001), deep processing (r(100) = 0.26; p = 0.01), metacognitive planning (r(100) = 0.18; p = 0.06), metacognitive monitoring (r(100) = 0.18; p = 0.08), cognitive elaboration (r(100) = 0.19; p = 0.06), GCL (r(100) = 0.34; p = 0.001), and ECL (r(100) = −0.40; p < 0.001).
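The covariate screening described here amounts to simple bivariate correlations between each control variable and the outcome. The sketch below is one way to reproduce such a screening in Python; the data frame, column names, and the inclusion cut-off of p < 0.10 are assumptions (the covariates retained above include predictors with p-values up to about 0.08), not details taken from the paper.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def screen_covariates(df: pd.DataFrame, outcome: str, candidates: list[str],
                      p_cutoff: float = 0.10) -> list[str]:
    """Correlate each candidate with the outcome and keep those below the cut-off."""
    kept = []
    for var in candidates:
        r, p = pearsonr(df[var], df[outcome])
        print(f"{var}: r = {r:.2f}, p = {p:.3f}")
        if p < p_cutoff:
            kept.append(var)
    return kept

# Hypothetical illustrative data for 100 participants.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(100, 4)),
                  columns=["post_knowledge", "prior_confidence", "deep_processing", "ecl"])
print(screen_covariates(df, "post_knowledge",
                        ["prior_confidence", "deep_processing", "ecl"]))
```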
Multivariate analysis showed significant effects of self-confidence in prior knowledge (F(2, 88) = 8.08, p = 0.001, ηp2 = 0.16) and extraneous cognitive load (F(2, 88) = 5.58, p = 0.005, ηp2 = 0.11) on post-knowledge and self-confidence in post-knowledge. In contrast, we did not find any significant effects of prior knowledge (F(2, 88) = 2.73, p = 0.07, ηp2 = 0.06), deep processing (F(2, 88) = 0.82, p = 0.44, ηp2 = 0.02), metacognitive planning (F(2, 88) = 0.08, p = 0.92, ηp2 = 0.002), metacognitive monitoring (F(2, 88) = 2.70, p = 0.07, ηp2 = 0.06), cognitive elaboration (F(2, 88) = 0.56, p = 0.57, ηp2 = 0.01), or germane cognitive load (F(2, 88) = 0.05, p = 0.95, ηp2 = 0.001). The grouping variable showed a significant main effect (F(2, 88) = 3.17, p = 0.015, ηp2 = 0.07). At the univariate level, self-confidence in prior knowledge had a significant effect on self-confidence in post-knowledge (F(1, 89) = 16.23, p < 0.001, ηp2 = 0.15) but not on post-knowledge (F(1, 89) = 2.81, p = 0.10, ηp2 = 0.03). The univariate effects of extraneous cognitive load were significant for both self-confidence in post-knowledge (F(1, 89) = 7.71, p = 0.007, ηp2 = 0.08) and post-knowledge (F(1, 89) = 7.33, p = 0.008, ηp2 = 0.08).
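As a consistency check on the reported effect sizes: for these fixed-effects tests, partial eta squared follows directly from the F value and its degrees of freedom via the standard identity

ηp2 = (df_effect × F) / (df_effect × F + df_error),

so that, for example, the effect of extraneous cognitive load corresponds to ηp2 = (2 × 5.58) / (2 × 5.58 + 88) ≈ 0.11, the value reported above.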
Regarding the univariate effects of the experimental conditions (cognitive prompts, metacognitive prompts, and no prompts), we found significant effects on post-knowledge (F(2, 89) = 6.71, p = 0.002, ηp2 = 0.13) but not on self-confidence in post-knowledge (F(2, 89) = 1.12, p = 0.33, ηp2 = 0.03). A comparison of the groups’ post-knowledge showed no significant difference (p = 0.58) between the metacognitive prompts group and the group without prompts. However, the metacognitive prompts group and the cognitive prompts group differed significantly in post-knowledge (p = 0.005), as did the cognitive prompts group and the group without prompts (p = 0.001). In sum, the cognitive prompts group achieved significantly greater learning success than the metacognitive prompts group (M = 6.65, SD = 2.82) and the group without prompts.

4.3. Motivational Measures and Academic Self-Concept

In order to investigate differences between the three groups regarding motivational measures (intrinsic and extrinsic motivation) and academic self-concept, we conducted a multivariate analysis of variance. The multivariate results showed a significant difference between the three groups (F(6, 192) = 3.14, p = 0.006, ηp2 = 0.09). Univariate results showed a significant effect of the grouping variable on extrinsic motivation (F(2, 97) = 3.49, p = 0.03, ηp2 = 0.07) but no significant effects on intrinsic motivation (F(2, 97) = 1.71, p = 0.19, ηp2 = 0.03) or academic self-concept (F(2, 97) = 2.11, p = 0.13, ηp2 = 0.04). The cognitive prompts group reached the lowest values of all three groups regarding extrinsic motivation. Moreover, differences in cognitive load between the three experimental groups were examined using a further variance analysis. Here, the grouping variable served as the independent variable and the cognitive load measures (intrinsic, germane, extraneous) functioned as dependent variables. We did not find a significant main effect of the grouping variable on the cognitive load measures (F(2, 97) = 0.70, p = 0.65, ηp2 = 0.02); consequently, no significant univariate effects emerged.
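The analysis reported here corresponds to a one-way MANOVA with the grouping variable as the factor and the three motivational measures as dependent variables, followed by univariate ANOVAs. A minimal sketch of such an analysis with statsmodels is shown below; the synthetic data frame and its column names are hypothetical stand-ins for the study data, not the original dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.multivariate.manova import MANOVA

# Hypothetical illustrative data: 100 participants in three prompt conditions.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": rng.choice(["cognitive", "metacognitive", "none"], size=100),
    "intrinsic_motivation": rng.normal(3.5, 0.7, 100),
    "extrinsic_motivation": rng.normal(3.0, 0.9, 100),
    "academic_self_concept": rng.normal(3.8, 0.6, 100),
})

# Multivariate test of the grouping variable across the three measures.
maov = MANOVA.from_formula(
    "intrinsic_motivation + extrinsic_motivation + academic_self_concept ~ group",
    data=df)
print(maov.mv_test())

# Univariate follow-up for extrinsic motivation (analogous models for the other measures).
uni = ols("extrinsic_motivation ~ C(group)", data=df).fit()
print(sm.stats.anova_lm(uni, typ=2))
```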

4.4. Learning Strategies

Next, we computed Spearman correlations to assess the relationship between the use of metacognitive and cognitive learning strategies (elaboration, planning, and regulation) and the use of cognitive and metacognitive prompts. The use of the cognitive learning strategy of elaboration correlated significantly with the use of cognitive prompts (r(32) = 0.34, p = 0.05). Correlations between the use of metacognitive learning strategies, such as planning (r(31) = 0.03, p = 0.87) and regulation (r(31) = 0.34, p = 0.06), and the use of metacognitive prompts were not significant.
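A rank correlation of this kind can be computed with scipy; the sketch below uses randomly generated stand-ins for the study's elaboration scores and prompt-use measure, so the variable names and values are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical stand-ins: elaboration strategy scores (scale means) and a coded measure
# of prompt use for the 32 learners in the cognitive prompts group.
rng = np.random.default_rng(1)
elaboration = rng.uniform(1, 5, size=32)
prompt_use = rng.integers(0, 6, size=32)

rho, p = spearmanr(elaboration, prompt_use)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```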

5. Discussion

We investigated the effects of prompts (cognitive, metacognitive, and no prompts) on learning outcomes in a prompt-based learning environment (H1). The results showed that cognitive and metacognitive prompting had positive effects on learning outcomes. In both experimental groups, learners reached higher scores in the post-knowledge test than in the prior knowledge test. However, the control group also reached higher scores in the post-knowledge test than in the prior knowledge test. Learners in the cognitive prompts group showed the highest learning outcomes of the three groups. The metacognitive prompts group also showed an increase in learning outcomes, though group comparisons did not show any significant differences. Zumbach et al. [10] reported similar findings concerning group differences between prompted and non-prompted groups. In sum, H1a and H1b cannot be supported; metacognitive prompting did not lead to significantly better learning outcomes than learning with cognitive prompts or no prompts. Moreover, we did not find any significant group differences regarding learning outcomes in the post-knowledge test for all three groups. Therefore, we cannot confirm the findings reported by Daumiller and Dresel [76]. We assume that this outcome results from assigning learners to groups instead of allowing them to choose one; that is, they could not choose their preferred prompting group. Consequently, eliminating individual group choice also eliminated learners’ personal preferences and strengths concerning learning strategies. Prompting was thus not tailored to learners’ needs, which can be done in the classroom (e.g., [88]) but hardly in the course of an experiment.
Second, we expected learners with more prior knowledge to achieve better results in the knowledge post-test (H2). However, we had to reject this hypothesis because prior knowledge did not have a significant influence on post-knowledge, nor did it have a significant impact on self-confidence in post-knowledge. In short, no significant effects of prior knowledge on post-knowledge scores were found. However, self-confidence in prior knowledge did have a significant effect on self-confidence in post-knowledge. Consequently, our results did not support H2, though they are in line with results reported by Richey and Nokes-Malach [82].
Third, we assumed that the use of prompts affects learners’ motivation (H3). Our findings partially supported this assumption. That is, we did find significant group differences regarding extrinsic motivation between the experimental groups and the control group, but we did not find any group differences concerning intrinsic motivation and academic self-concept. In short, descriptive results showed that the group provided with cognitive prompts achieved the lowest values in extrinsic motivation.
Fourth, we investigated differences in the cognitive load measures (intrinsic, germane, and extraneous). However, we were not able to find any significant differences between the groups. That is, prompts did not affect learners’ cognitive load; hence, H4 was rejected. We suspect that deficiencies in the construction of the prompts led to these findings. Even though we tested different prompts, the three conditions turned out to be too similar and too easy for learners. Nevertheless, none of the three groups experienced the experimental condition as disruptive or distracting during the learning situation. However, learners did not use the prompts as extensively as they should have. This result corresponds with findings by Moser et al. [51], who discovered that prompts are only effective when learners use them thoroughly.
Fifth, concerning the effectiveness of prompts, we assumed that prompts are more effective for learners with more metacognitive strategy knowledge than for learners with less metacognitive knowledge (H5). We found that learners who already knew how to use cognitive learning strategies could use cognitive prompts more effectively. However, knowing and using metacognitive learning strategies did not lead to more efficient use of metacognitive prompts. Our results do not support the findings of Bannert and Mengelkamp [65], who pointed out that learners with metacognitive learning strategies were able to use metacognitive prompts successfully. For example, one learner in the metacognitive prompts group commented, “I do not understand why I have to indicate the stated concepts in my own words.” We assume that learners became frustrated while working on the learning tasks and using the metacognitive prompts. Consequently, alterations or improvements regarding the metacognitive prompts should be considered for future studies. We also assume that cognitive prompts fit the learning situation more naturally than metacognitive prompts. Further, the group without prompting showed similar scores in extrinsic motivation compared to the two prompted groups. Therefore, H5 was not fully supported by the results of this study and was rejected.
These results show that further research in the field of learning with prompts is needed. For instance, changes to the research design can contribute to a more diverse approach, as Zhang et al. [75] showed when they examined two groups using cognitive and mixed prompts. Furthermore, time measurements in the learning environment could reveal which tasks learners spend more time on; additional prompts could then be incorporated to scaffold these tasks.
The implications of this study for classroom practice concern self-regulated learning environments, which are usually created within e-learning environments. Self-regulated learning is a key element of lifelong learning and is relevant for students “to cope and operate effectively within our technology rich and fast developing society” [89] (p. 115). Modern teaching styles create learner-centered and explorative learning environments for students, which call for fitting learning strategies. These strategies need to be used thoroughly to be successful. Hence, self-regulated learning has to be instructed and guided correctly by teachers so that learners can achieve better learning outcomes [51]. The findings from this study support this, because scaffolding in the form of cognitive prompts led to better learning outcomes.

Author Contributions

Conceptualization—methodology, formal analysis, investigation, resources, data curation, J.Z. and K.H.; writing—original draft preparation, writing—review and editing, I.Z. and S.H.; review and editing, B.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study due to non-person-related data acquisition. As this study was a voluntary part of geography learning in school, permission by the school administration was granted.

Informed Consent Statement

Informed consent was obtained from all people involved in the study.

Data Availability Statement

The data presented in this study are available on request from Joerg Zumbach ([email protected]).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Boholano, H.B.; Cajes, R.C.; Boholano, G.S. Technology based teaching and learning in junior high school. Res. Psychol. 2021, 11, 98–107. [Google Scholar] [CrossRef]
  2. Edisherashvili, N.; Saks, K.; Pedaste, M.; Leijen, Ä. Supporting self-regulated learning in distance learning contexts at higher education level: Systematic literature review. Front. Psychol. 2022, 12, 792422. [Google Scholar] [CrossRef]
  3. Jiang, L.; Zhang, S.; Li, X.; Luo, F. How grit influences high school students’ academic performance and the mediation effect of academic self-efficacy and cognitive learning strategies. Curr. Psychol. 2021, 42, 94–103. [Google Scholar] [CrossRef]
  4. Hayat, A.A.; Shateri, K.; Amini, M.; Shokrpour, N. Relationships between academic self-efficacy, learning-related emotions, and metacognitive learning strategies with academic performance in medical students: A structural equation model. BMC Med. Educ. 2020, 20, 76. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Theobald, M. Self-regulated learning training programs enhance university students’ academic performance, self-regulated learning strategies, and motivation: A meta-analysis. Contemp. Educ. Psychol. 2021, 66, 101976. [Google Scholar] [CrossRef]
  6. Berthold, K.; Röder, H.; Knörzer, D.; Kessler, W.; Renkl, A. The double-edged effects of explanation prompts. Comput. Hum. Behav. 2011, 27, 69–75. [Google Scholar] [CrossRef]
  7. Broadbent, J.; Poon, W.L. Self-regulated learning strategies & academic achievement in online higher education learning environments: A systematic review. Internet High. Educ. 2015, 27, 1–13. [Google Scholar] [CrossRef]
  8. Chen, C.-H.; Liu, T.-K.; Huang, K. Scaffolding vocational high school students’ computational thinking with cognitive and metacognitive prompts in learning about programmable logic controllers. J. Res. Technol. Educ. 2021, 29, 527–544. [Google Scholar] [CrossRef]
  9. Berthold, K.; Nückles, M.; Renkl, A. Do learning protocols support learning strategies and outcomes? The role of cognitive and metacognitive prompts. Learn. Instr. 2007, 17, 564–577. [Google Scholar] [CrossRef]
  10. Zumbach, J.; Ortler, C.; Deibl, I.; Moser, S. Using prompts to scaffold metacognition in case-based problem solving within the domain of attribution theory. J. Probl.-Based Learn. 2020, 7, 21–31. [Google Scholar] [CrossRef]
  11. Wirth, J. Promoting self-regulated learning through prompts. Z. Padagog. Psychol. 2009, 23, 91–94. [Google Scholar] [CrossRef]
  12. Dent, A.L.; Koenka, A.C. The relation between self-regulated learning and academic achievement across childhood and adolescence: A meta-analysis. Educ. Psychol. Rev. 2016, 28, 425–474. [Google Scholar] [CrossRef]
  13. Broadbent, J. Comparing online and blended learner’s self-regulated learning strategies and academic performance. Internet High. Educ. 2017, 33, 24–32. [Google Scholar] [CrossRef]
  14. Woolfolk, A.; Schönpflug, U. Pädagogische Psychologie, 12th ed.; Pearson: Hallbergmoos, Germany, 2014. [Google Scholar]
  15. Lawson, D. Supporting Students’ Development of Self-Regulated Learning Using a Diagnostic Questionnaire Tool. Res. High. Educ. 2019, 12, 15–23. [Google Scholar]
  16. Yilmaz, R.; Karaoglan Yilmaz, F.G.; Keser, H. Vertical versus shared e-leadership approach in online project-based learning: A comparison of self-regulated learning skills, motivation and group collaboration processes. J. Comput. High. Educ. 2020, 32, 628–654. [Google Scholar] [CrossRef]
  17. Bui, T.H.; Kaur, A.; Trang Vu, M. Effectiveness of technology-integrated project-based approach for self-regulated learning of engineering students. Eur. J. Eng. Educ. 2022, 47, 591–605. [Google Scholar] [CrossRef]
  18. Carter Jr, R.A.; Rice, M.; Yang, S.; Jackson, H.A. Self-regulated learning in online learning environments: Strategies for remote learning. Inf. Learn. Sci. 2020, 121, 321–329. [Google Scholar] [CrossRef]
  19. Zimmerman, B.J. Investigating self-regulation and motivation: Historical background, methodological developments, and future prospects. Am. Educ. Res. J. 2008, 45, 166–183. [Google Scholar] [CrossRef]
  20. Ohtani, K.; Hisasaka, T. Beyond intelligence: A meta-analytic review of the relationship among metacognition, intelligence, and academic performance. Metacogn. Learn. 2018, 13, 179–212. [Google Scholar] [CrossRef]
  21. Boekaerts, M. Self-regulated learning: Where we are today. Int. J. Educ. Res. 1999, 31, 445–457. [Google Scholar] [CrossRef]
  22. Boekaerts, M. Cognitive load and self-regulation: Attempts to build a bridge. Learn. Instr. 2017, 51, 90–97. [Google Scholar] [CrossRef]
  23. Boekaerts, M.; Corno, L. Self-regulation in the classroom: A perspective on assessment and intervention. Appl. Psychol. 2005, 54, 199–231. [Google Scholar] [CrossRef]
  24. Ansell, D.B.; Spencer, N.L.I. “Think about what you’re doing and why you’re doing it”: Coach feedback, athlete self-regulation, and male youth hockey players. J. Appl. Sport Psychol. 2022, 34, 459–478. [Google Scholar] [CrossRef]
  25. Panadero, E. A review of self-regulated learning: Six models and four directions for research. Front. Psychol. 2017, 8, 422. [Google Scholar] [CrossRef] [Green Version]
  26. Artelt, C.; Demmrich, A.; Baumert, J. Selbstreguliertes Lernen. In PISA 2000. Basiskompetenzen von Schülerinnen und Schülern im internationalen Vergleich; Bannert, J., Ed.; Leske + Budrich: Opladen, Germany, 2000; pp. 271–298. [Google Scholar]
  27. Bannert, M. Designing metacognitive support for hypermedia learning. In Instructional Design for Multimedia-Learning; Niegemann, D.L.H., Brünken, R., Eds.; Waxmann: Münster, Germany, 2004; pp. 19–30. [Google Scholar]
  28. Pressley, M.; Borkowski, J.G.; Schneider, W. Good information processing: What it is and how education can promote it. Int. J. Educ. Res. 1989, 13, 857–867. [Google Scholar] [CrossRef] [Green Version]
  29. Krapp, A. Pädagogische Psychologie: Ein Lehrbuch, 5th ed.; Beltz: Weinheim, Germany, 2006. [Google Scholar]
  30. Zumbach, J.; Mohraz, M. Cognitive load in hypermedia reading comprehension: Influence of text type and linearity. Comput. Hum. Behav. 2008, 24, 875–887. [Google Scholar] [CrossRef]
  31. Hasselhorn, M.; Gold, A.; Kunde, W.; Schneider, S. Pädagogische Psychologie: Erfolgreiches Lernen und Lehren; Kohlhammer: Stuttgart, Germany, 2017. [Google Scholar]
  32. Bosch, E.; Seifried, E.; Spinath, B. What successful students do: Evidence-based learning activities matter for students’ performance in higher education beyond prior knowledge, motivation, and prior achievement. Learn. Individ. Differ. 2021, 91, 102056. [Google Scholar] [CrossRef]
  33. Mienert, M.; Pitcher, S. Pädagogische Psychologie: Theorie und Praxis des Lebenslangen Lernens; VS Verlag für Sozialwissenschaften: Wiesbaden, Germany, 2011. [Google Scholar]
  34. Putwain, D.W. An examination of the self-referent executive processing model of test anxiety: Control, emotional regulation, self-handicapping, and examination performance. Eur. J. Psychol. Educ. 2019, 34, 341–358. [Google Scholar] [CrossRef] [Green Version]
  35. Ryan, R.M.; Deci, E. Self-Determination Theory: Basic Psychological Needs in Motivation, Development, and Wellness; The Guilford: New York, NY, USA, 2017. [Google Scholar]
  36. Ryan, R.M.; Deci, E.L. Intrinsic and extrinsic motivation from a self-determination theory perspective: Definitions, theory, practices, and future directions. Contemp. Educ. Psychol. 2020, 61, 101860. [Google Scholar] [CrossRef]
  37. Kotera, Y.; Taylor, E.; Fido, D.; Williams, D.; Tsuda-McCaie, F. Motivation of UK graduate students in education: Self-compassion moderates pathway from extrinsic motivation to intrinsic motivation. Curr. Psychol. 2021, 42, 10163–10176. [Google Scholar] [CrossRef]
  38. Froiland, J.M.; Worrell, F.C. Intrinsic motivation, learning goals, engagement, and achievement in a diverse high school. Psychol. Sch. 2016, 53, 321–336. [Google Scholar] [CrossRef]
  39. Eccles, J.S.; Wigfield, A. In the mind of the actor: The structure of adolescents’ achievement task values and expectancy-related beliefs. Pers. Soc. Psychol. Bull. 1995, 21, 215–225. [Google Scholar] [CrossRef] [Green Version]
  40. Eccles, J.S.; Wigfield, A. From expectancy-value theory to situated expectancy-value theory: A developmental, social cognitive, and sociocultural perspective on motivation. Contemp. Educ. Psychol. 2020, 61, 101859. [Google Scholar] [CrossRef]
  41. Zumbach, J.; Zeitlhofer, I.; Mann, B.; Hoermann, S.; Reisenhofer, B. The Appraisal Principle in Multimedia Learning: Impact of Appraisal Processes, Modality, and Codality. Multimodal. Technol. 2022, 6, 58. [Google Scholar] [CrossRef]
  42. Guay, F.; Ratelle, C.F.; Roy, A.; Litalien, D. Academic self-concept, autonomous academic motivation, and academic achievement: Mediating and additive effects. Learn. Individ. Differ. 2010, 20, 644–653. [Google Scholar] [CrossRef] [Green Version]
  43. Sweller, J. Cognitive load during problem solving: Effects on learning. Cogn. Sci. 1988, 12, 257–285. [Google Scholar] [CrossRef]
  44. Sweller, J. Element interactivity and intrinsic, extraneous, and germane cognitive load. Educ. Psychol. Rev. 2010, 22, 123–138. [Google Scholar] [CrossRef]
  45. Sweller, J. Cognitive load theory and educational technology. Educ. Technol. Res. Dev. 2020, 68, 1–16. [Google Scholar] [CrossRef]
  46. Anmarkrud, Ø.; Andresen, A.; Bråten, I. Cognitive load and working memory in multimedia learning: Conceptual and measurement issues. Educ. Psychol. 2019, 54, 61–83. [Google Scholar] [CrossRef]
  47. Moreno, R.; Mayer, R. Interactive multimodal learning environments: Special issue on interactive learning environments: Contemporary issues and trends. Educ. Psychol. Rev. 2007, 19, 309–326. [Google Scholar] [CrossRef]
  48. Grafe, S. Förderung von Problemlösefähigkeit beim Lernen mit Computersimulationen: Grundlagen und Schulische Anwendungen; Klinkhardt: Bad Heilbrunn, Germany, 2008. [Google Scholar]
  49. Arnold, J.; Kremer, K.; Mayer, J. Scaffolding in Inquiry Learning: An Empirical Study on the Impact of Learning Support. Zeitschrift für die Didaktik der Naturwissenschaften 2017, 23, 21–37. [Google Scholar] [CrossRef] [Green Version]
  50. Taub, M.; Azevedo, R.; Bouchet, F.; Khosravifar, B. Can the use of cognitive and metacognitive self-regulated learning strategies be predicted by learners’ levels of prior knowledge in hypermedia-learning environments? Comput. Hum. Behav. 2014, 39, 356–367. [Google Scholar] [CrossRef]
  51. Moser, S.; Zumbach, J.; Deibl, I. The effect of metacognitive training and prompting on learning success in simulation-based physics learning. Sci. Educ. 2017, 101, 944–967. [Google Scholar] [CrossRef]
  52. Kapa, E. A metacognitive support during the process of problem solving in a computerized environment. Educ. Stud. Math. 2001, 47, 317–336. [Google Scholar] [CrossRef]
53. Yang, T.-C.; Chen, M.C.; Chen, S.Y. The influences of self-regulated learning support and prior knowledge on improving learning performance. Comput. Educ. 2018, 126, 37–52. [Google Scholar] [CrossRef]
  54. Schumacher, C.; Ifenthaler, D. Investigating prompts for supporting students’ self-regulation–A remaining challenge for learning analytics approaches? Internet High. Educ. 2021, 49, 100791. [Google Scholar] [CrossRef]
  55. Akamatsu, D.; Nakaya, M.; Koizumi, R. Effects of metacognitive strategies on the self-regulated learning process: The mediating effects of self-efficacy. Behav. Sci. 2019, 9, 128. [Google Scholar] [CrossRef] [Green Version]
  56. Saraff, S.; Tripathi, M.; Biswal, R.; Saxena, A.S. Impact of metacognitive strategies on self-regulated learning and intrinsic motivation. Psychol. Res. 2020, 15, 35–46. [Google Scholar] [CrossRef]
57. Winne, P.H. Cognition and metacognition within self-regulated learning. In Handbook of Self-Regulation of Learning and Performance; Schunk, D.H., Greene, J.A., Eds.; Routledge/Taylor & Francis Group: New York, NY, USA, 2018; pp. 36–48. [Google Scholar]
  58. Dettling, R.Q. Lernstrategien und Mediennutzung im Studium: Explorative Langzeitstudie mit Lernjournalen. Dissertation, University of Zürich, Zürich, Switzerland, 2015. [Google Scholar]
59. Mannion, J. Metacognition, self-regulation and self-regulated learning: What's the difference? Impact 2020, 8, 66–69. [Google Scholar]
60. Leopold, C.; Leutner, D. Der Einsatz von Lernstrategien in einer konkreten Lernsituation bei Schülern unterschiedlicher Jahrgangsstufen. Zeitschrift für Pädagogik 2002, 45, 240–258. [Google Scholar] [CrossRef]
61. Wild, K.P.; Schiefele, U. Lernstrategien im Studium: Ergebnisse zur Faktorenstruktur und Reliabilität eines neuen Fragebogens. Zeitschrift für Differentielle und Diagnostische Psychologie 1994, 15, 185–200. [Google Scholar]
62. Garcia, R.; Falkner, K.; Vivian, R. Systematic literature review: Self-Regulated Learning strategies using e-learning tools for Computer Science. Comput. Educ. 2018, 123, 150–163. [Google Scholar] [CrossRef]
  63. Azevedo, R. Theoretical, conceptual, methodological, and instructional issues in research on metacognition and self-regulated learning: A discussion. Metacogn. Learn. 2009, 4, 87–95. [Google Scholar] [CrossRef]
  64. Bannert, M.; Hildebrand, M.; Mengelkamp, C. Effects of a metacognitive support device in learning environments. Comput. Hum. Behav. 2009, 25, 829–835. [Google Scholar] [CrossRef]
  65. Bannert, M.; Mengelkamp, C. Scaffolding hypermedia learning through metacognitive prompts. Instr. Sci. 2013, 40, 193–211. [Google Scholar] [CrossRef]
  66. Zumbach, J.; Rammerstorfer, L.; Deibl, I. Cognitive and metacognitive support in learning with a serious game about demographic change. Comput. Hum. Behav. 2020, 103, 120–129. [Google Scholar] [CrossRef]
  67. Schuster, C.; Stebner, F.; Wirth, J.; Leutner, D. Förderung des Transfers metakognitiver Lernstrategien durch direktes und indirektes Training. Unterrichtswiss. 2018, 46, 409–435. [Google Scholar] [CrossRef]
  68. Carretti, B.; Caldarola, N.; Tencati, C.; Cornoldi, C. Improving reading comprehension in reading and listening settings: The effect of two training programmes focusing on metacognition and working memory. Br. J. Educ. Psychol. 2014, 84, 194–210. [Google Scholar] [CrossRef]
  69. Friedrich, H.F.; Mandl, H. Lern-und Denkstrategien-Ein Problemaufriss; Hogrefe: Göttingen, Germany, 1992. [Google Scholar]
70. Winne, P.H.; Azevedo, R. Metacognition. In The Cambridge Handbook of the Learning Sciences; Sawyer, R., Ed.; Cambridge University Press: Cambridge, MA, USA, 2014; pp. 63–87. [Google Scholar]
  71. Zeitlhofer, I.; Zumbach, J.; Aigner, V. Effects of Pedagogical Agents on Learners’ Knowledge Acquisition and Motivation in Digital Learning Environments. Knowledge 2023, 3, 4. [Google Scholar] [CrossRef]
  72. Lin, X.; Hmelo, C.; Kinzer, C.K.; Secules, T.J. Designing technology to support reflection. Educ. Technol. Res. Dev. 1999, 47, 43–62. [Google Scholar] [CrossRef]
  73. Bannert, M. Metakognition beim Lernen mit Hypermedien; Waxmann: Münster, Germany, 2007. [Google Scholar]
  74. Saks, K.; Leijen, Ä. Cognitive and metacognitive strategies as predictors of language learning outcomes. Psihologija 2018, 51, 489–505. [Google Scholar] [CrossRef]
  75. Zhang, W.-X.; Hsu, Y.-S.; Wang, C.-Y.; Ho, Y.-T. Exploring the impacts of cognitive and metacognitive prompting on students’ scientific inquiry practices within an e-learning environment. Int. J. Sci. Educ. 2015, 37, 529–553. [Google Scholar] [CrossRef]
  76. Daumiller, M.; Dresel, M. Supporting self-regulated learning with digital media using motivational regulation and metacognitive prompts. J. Exp. Educ. 2019, 87, 161–177. [Google Scholar] [CrossRef] [Green Version]
  77. Dori, Y.J.; Avargil, S.; Kohen, Z.; Saar, L. Context-based learning and metacognitive prompts for enhancing scientific text comprehension. Int. J. Sci. Educ. 2018, 40, 1198–1220. [Google Scholar] [CrossRef] [Green Version]
  78. Castronovo, F.; van Meter, P.N.; Messner, J.I. Leveraging metacognitive prompts in construction educational games for higher educational gains. Int. J. Constr. Manag. 2022, 22, 19–30. [Google Scholar] [CrossRef]
  79. Engelmann, K.; Bannert, M.; Melzner, N. Do self-created metacognitive prompts promote short-and long-term effects in computer-based learning environments? Res. Pract. Technol. Enhanc. Learn. 2021, 16, 3. [Google Scholar] [CrossRef]
80. Krieglstein, F.; Schneider, S.; Gröninger, J.; Beege, M.; Nebel, S.; Wesenberg, L.; Suren, M.; Rey, G.D. Exploring the effects of content-related segmentations and metacognitive prompts on learning with whiteboard animations. Comput. Educ. 2023, 194, 104702. [Google Scholar] [CrossRef]
  81. Pieger, E.; Bannert, M. Differential effects of students’ self-directed metacognitive prompts. Comput. Hum. Behav. 2018, 86, 165–173. [Google Scholar] [CrossRef]
  82. Richey, J.E.; Nokes-Malach, T.J. Comparing four instructional techniques for promoting robust knowledge. Educ. Psychol. Rev. 2015, 27, 181–218. [Google Scholar] [CrossRef]
  83. Teng, M.F. The effectiveness of incorporating metacognitive prompts in collaborative writing on academic English writing skills. Appl. Cogn. Psychol. 2021, 35, 659–673. [Google Scholar] [CrossRef]
84. Dickhäuser, O.; Butler, R.; Tönjes, B. Das zeigt doch nur, dass ich's nicht kann. Z. Pädagog. Psychol. 2007, 39, 120–126. [Google Scholar] [CrossRef] [Green Version]
85. Pintrich, P.R.; Smith, D.A.; Garcia, T.; McKeachie, W.J. A Manual for the Use of the Motivated Strategies for Learning Questionnaire (MSLQ); The Regents of the University of Michigan: Ann Arbor, MI, USA, 1991. [Google Scholar]
  86. Klepsch, M.; Schmitz, F.; Seufert, T. Development and validation of two instruments measuring intrinsic, extraneous, and germane cognitive load. Front. Psychol. 2017, 8, 1997. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  87. Chow, C.W.; Chapman, E. Construct Validity of the Two-Factor Revised Learning Process Questionnaire in a Singapore High School. J. Educ. Psychol. 2018, 8, 159. [Google Scholar] [CrossRef]
88. Uzunbacak, S.; Klusmeyer, J. Elaborierte Untersuchungsplanung mittels E-Portfolio und Prompts. In Digitalisierung und Hochschulentwicklung: Proceedings zur 26. Tagung der Gesellschaft für Medien in der Wissenschaft e.V.; Getto, B., Hintze, P., Kerres, M., Eds.; Waxmann: Münster, Germany, 2018; pp. 179–189. [Google Scholar]
89. Persico, D.; Steffens, K. Self-regulated learning in technology enhanced learning environments. In Technology Enhanced Learning; Duval, E., Ed.; Springer: New York, NY, USA, 2017; pp. 115–126. [Google Scholar]
Figure 1. Hypotheses of the study.
Table 1. Descriptive data, inferential statistics, and correlations of dependent and control variables. The p and ηp² columns report effects on Post-Knowledge and Self-Confidence in Post-Knowledge; M = mean, SD = standard deviation.

| Measure | No Prompts (n = 37) M | SD | Cognitive Prompts (n = 32) M | SD | Metacognitive Prompts (n = 31) M | SD | p | ηp² |
| Prior knowledge | 2.24 | 2.63 | 4.03 | 3.10 | 2.55 | 2.47 | 0.07 | 0.06 |
| Post-Knowledge | 6.38 | 2.80 | 9.31 | 2.71 | 6.65 | 2.82 | – | – |
| Self-confidence in Prior knowledge | 1.92 | 1.21 | 2.47 | 1.27 | 1.71 | 0.82 | 0.01 | 0.16 |
| Self-confidence in Post-Knowledge | 3.03 | 0.93 | 3.66 | 0.94 | 3.00 | 0.93 | – | – |
| Deep processing | 4.06 | 0.59 | 3.77 | 0.72 | 3.73 | 0.79 | 0.44 | 0.02 |
| Metacognitive Planning | 4.15 | 1.11 | 4.34 | 0.93 | 4.10 | 0.76 | 0.92 | 0.002 |
| Metacognitive Monitoring | 4.07 | 1.16 | 4.16 | 1.00 | 4.31 | 0.97 | 0.07 | 0.06 |
| Cognitive Elaboration | 4.36 | 0.93 | 4.39 | 0.72 | 4.12 | 0.87 | 0.57 | 0.01 |
| Germane Cognitive Load | 3.92 | 0.90 | 4.13 | 0.91 | 3.77 | 0.71 | 0.95 | 0.001 |
| Extraneous Cognitive Load | 2.49 | 1.01 | 2.19 | 0.98 | 2.55 | 1.06 | 0.005 | 0.112 |
| Group | – | – | – | – | – | – | 0.015 | 0.07 |
| Intrinsic Motivation | 3.33 | 0.76 | 3.05 | 0.67 | 3.29 | 0.68 | – | – |
| Extrinsic Motivation | 3.30 | 1.02 | 2.89 | 1.04 | 3.51 | 0.74 | – | – |
| Academic Self-Concept | 4.13 | 0.54 | 4.24 | 0.34 | 4.00 | 0.45 | – | – |
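As a point of reference for the effect sizes in Table 1, partial eta squared (ηp²) is conventionally defined as the proportion of variance attributable to an effect once variance explained by other effects is excluded; the table does not spell out its computation, so the standard ANOVA-based formula is assumed here:

\eta_p^2 = \frac{SS_{\mathrm{effect}}}{SS_{\mathrm{effect}} + SS_{\mathrm{error}}}

Under this reading, the Group effect (p = 0.015, ηp² = 0.07) accounts for roughly 7% of the variance in the dependent measures that is not already explained by the other predictors.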
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
