Review

When Video Improves Learning in Higher Education

by Sven Trenholm 1,* and Fernando Marmolejo-Ramos 2
1 Education Futures, University of South Australia, Adelaide, SA 5072, Australia
2 Centre for Change and Complexity in Learning (C3L), University of South Australia, Adelaide, SA 5072, Australia
* Author to whom correspondence should be addressed.
Educ. Sci. 2024, 14(3), 311; https://doi.org/10.3390/educsci14030311
Submission received: 28 December 2023 / Revised: 27 February 2024 / Accepted: 5 March 2024 / Published: 15 March 2024
(This article belongs to the Special Issue Current Challenges in Digital Higher Education)

Abstract
The use of video in education has become ubiquitous as technological developments have markedly improved the ability and facility to create, deliver, and view videos. The concomitant pedagogical transformation has created a sense of urgency regarding how video may be used to advance learning. Initial reviews have suggested only limited potential for the use of video in higher education. More recently, a systematic review of studies on the effect of video use on learning in higher education, published in the journal Review of Educational Research, found, overall, effects to be positive. In the present paper, we critique this study. We reveal significant gaps in the study methodology and write-up and use a cognitive processing lens to critically assess and re-analyse the study data. We found the results of this study to be applicable only to learning requiring lower-level cognitive processing and conclude, consistent with prior research, that claims of a universal benefit are not yet warranted.

1. Introduction

In recent years, the use of video in education has rapidly expanded. As an increasingly important pedagogical tool, the pace of its penetration into educational processes has outstripped the ability of researchers to evaluate its effectiveness [1]. In one recent and significant effort to evaluate the effect of video usage in education, Noetel et al. [2] conducted a meta-analysis titled 'Video improves learning in higher education: A systematic review', published in the journal Review of Educational Research and directed at the use of video as either a replacement for or a supplement to classroom learning.
Given the methodological approach of the study, it was reasonable to expect that greater clarity would be achieved regarding the effects of video use, at least in higher education. Yet a close inspection of the study reveals significant gaps that call into question the overall study findings. As accurate research findings are urgently needed to inform policy decisions, this paper sets out to critically assess the aforementioned study and conduct a re-analysis of the associated study data. In our critique and re-analysis we use a cognitive processing lens to build on Noetel et al.'s work and provide some needed clarity to better inform future educational research and policy development.

Initial Critical Assessment and Review of the Literature

In early media reports, Noetel et al. [3] characterized their findings by stating the use of video was 'consistently good for learning' and later, in the published study, stated the effect was 'unlikely to be detrimental and usually improve student learning' ([2]: p. 204). Though such a disparity in characterizations may raise concern, of further interest was how their overall study conclusions appeared to contradict prior research suggesting that the video medium has only limited potential for advancing student learning in higher education.
For example, prior research regarding the use of video, conducted on a stronger methodological basis and claiming to be the first controlled experiment of its kind, yet not covered by Noetel et al., has shown detrimental learning effects for some minority and other student demographic groups when relying on videos for instruction [4]. These findings mirror a recent meta-analysis for a related medium: low-SES school students were found to be disadvantaged by the use of screens as compared to paper-based books [5]. As an important issue regarding generalizability, Noetel et al. did not account for any participant demographics in their meta-analysis.
Moreover, related prior reviews on the use of video in education also suggest limited potential for advancing learning. For example, an early review by Hansch et al. [6] presented critical reflections on the state of the field. Based upon a review of the literature, personal observations, and 12 semi-structured interviews, focused largely on higher education, it concluded there was 'little conclusive research to show that video is indeed an effective method for learning', recommending consideration of a variety of pedagogical resources rather than a simple reliance on video (p. 10). Their conclusions were later confirmed in a systematic review undertaken by Poquet et al. [7]. In this review, 178 papers published between 2007 and 2017, all experimental and case studies conducted in the context of higher education and professional learning, were selected using strict inclusion criteria. Their detailed descriptive analysis summarized the effects of a variety of interventions on a variety of learning outcomes, with results highlighting some of the complexities involved in undertaking this research. In particular, among other variables, they suggest that the effect of video-based teaching is dependent on the nature of the knowledge to be learned, noting that the effectiveness of video-based learning may vary depending, for example, on whether learning objectives involve simple recall vs. comprehension.
Indeed, an emerging body of research has found the efficacy of different instructional media varies depending on the nature of the associated learning task [8,9,10]. For example, in early research, ChanLin [8] investigated (n = 135 undergraduate students) the use of three visual treatments (no graphics, still graphics, and animated graphics) using co-variates of learner prior knowledge (high vs. low) and the nature of knowledge being learned (procedural vs. descriptive facts), with results supporting early claims that visual treatment effects vary according to the nature of knowledge being learned. Particularly of note, the use of visuals did not always guarantee successful learning. In related research, Garrett [9] used a novel data-mining approach to analyse PowerPoint files (n = 30,263) and understand differences in slide presentations relative to academic discipline. Though focused on the teaching approach rather than the learning effect, the nature of the discipline was found to significantly predict how text and graphics were used. Finally, Hong, Pi, and Yang [10], in a randomized controlled experiment, examined the learning effectiveness of video lectures (n = 60 undergraduate students) using co-variates of knowledge type (declarative vs. procedural) and instructor presence (with vs. without). The results suggested that ‘the learning effectiveness of video lectures varies depending on the type of knowledge being taught and the presence or absence of an instructor’ (p. 74). Taken as a whole, prior research suggests the nature of learning in each study context is an important moderating variable needed to properly understand the effects of video on learning.
Despite such efforts, the cognitive processes and mechanisms underlying the use of video remain poorly understood. Some insight, however, may be gained from research directed at related media. For example, almost a century after its invention, television viewing continues to be associated with a weakened cognitive investment [11,12,13,14]. Indeed, some argue television viewing is conditioning people to use 'poorer executive functioning alongside automatic processes that may be erroneous and even difficult to undo' ([15], p. 3019). Such claims and findings appear consistent with cognitive research on the use of images, which are processed much faster and more automatically than printed text, whose processing is slower and more controlled [16,17]. All this may suggest that the video medium is priming students to rely on quicker intuitive 'feelings of rightness' ([18], p. 236) rather than engaging in slower, deliberative reflection [15]. This dynamic may explain why the use of video is effective for teaching in some knowledge areas but not others.
Reflecting on the state of current research in relation to Noetel et al.'s review, we recognize two important shortcomings. First, Noetel et al. considered learning tasks as a relatively simplistic 'skill' vs. 'knowledge' dichotomy (later characterized in their review as 'teaching skills' vs. 'transmitting knowledge'; p. 222). Second, complicating the ability to interpret their findings, they provided very limited information about the educational contexts represented in their meta-analysis, which they refer to as 'learning domains', offering no definition of what is meant by a learning domain and, perhaps most surprisingly, no descriptive analysis or clear summary of the domains included in the study. In sum, their review employed an analytical framework that was not based in theory but one reflecting a relatively simplistic view of the nature of knowledge and learning, and it provided little of the contextual information about the represented learning contexts that might have helped interpret its findings.
Given this assessment, we executed a close investigation of the Noetel et al. study data (available at bit.ly/betteronyoutube), asking two research questions:
RQ1. What is the nature of the learning contexts covered in the study as suggested by (i) a basic descriptive analysis of the Noetel et al. data and (ii) a descriptive analysis by way of using a relevant established theoretical framework?
RQ2. What does a re-analysis of the data tell us about how the use of video affects learning in higher education when the aforementioned theoretical framework is employed?
We first present the methodology, results, and some discussion for each research question. Following this, we present a summary discussion where we conclude, consistent with prior research, that the use of video has limited potential for advancing learning in higher education.

2. Methodology and Results

Alongside the associated methodology, we present the results of our re-analysis in the following subsections.

2.1. Research Question 1

As previously discussed, the original study write-up provides limited information about the learning contexts represented by the included studies. This led us to seek a clearer understanding of the contexts covered by the review.
We first made use of the original source data and learning domain categorizations (as categorized by Noetel et al.) to present a simple descriptive analysis of the contexts, the results of which may be seen below in Table 1.
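A tally of this kind is straightforward to reproduce from the shared data. Below is a minimal R sketch, assuming the relevant spreadsheet has been exported to a CSV file; the file name is hypothetical, and only the 'learning_domain' column name comes from the original supplementary file (see the note below Table 1).

```r
# Minimal sketch: reproduce the Table 1 tally from the shared data.
# "included_studies.csv" is a hypothetical export of Supplementary File 3;
# the 'learning_domain' column name is taken from the original spreadsheet.
library(dplyr)

studies <- read.csv("included_studies.csv")

studies |>
  count(learning_domain, name = "tally") |>
  mutate(percent = round(100 * tally / sum(tally), 1)) |>
  arrange(learning_domain)
```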
From this basic analysis, it is clear, consistent with expectations from the literature (e.g., [19]), that more than 80% of included studies were in health science contexts (e.g., medicine, nursing, dentistry). As a broad context, learning in the health sciences has been found to focus mostly on lower-level cognitive processes, such as learning facts and procedures (e.g., Medicine: [20,21,22]; Nursing: [23]; Dentistry: [24,25]), as typically revealed using the lens of Bloom’s [26] taxonomy of cognitive learning objectives (Bloom’s objectives have been categorized as remember, understand, apply, analyze, evaluate, and create, in order of cognitive complexity from those requiring lower to higher levels of cognitive processing [27,28]). For example, approximately half (49%) of included studies were classified in the learning domain of ‘medicine’, an area of study long known for its ‘persistent focus’ on learning ‘factual minutiae’ ([20], p. 1343). In other words, a simple descriptive analysis quickly made clear that the vast majority of included studies focused on learning contexts targeting lower-level cognitive processing. Indeed, virtually all (102 of 106 or 96.2%) of the learning domains relate to what we would categorize as professional degree programs. This skewed representation raised some concern regarding generalizability across higher education, which prompted us to take a closer look at the nature of learning represented in the meta-analysis.
To undertake this investigation, several potential theoretical frameworks were considered, including the seminal works of Bloom [26], Biggs [29] and Biglan [30]. The latter, a taxonomy for classifying academic disciplines in higher education, was identified as a clear choice given the nature of the available data. Indeed, strengthening this selection, Biglan’s [30] framework is perhaps the most well-known system for classifying academic disciplines in higher education [31]. Moreover, the taxonomy was originally developed to provide a ‘framework exploring the role of cognitive processes in academic fields’ ([30], p. 202) and has repeatedly demonstrated its validity in subsequent research [31,32,33]. Importantly, as it relates to our research questions, and notwithstanding further complexities [34], prior research using this framework has found generalities and differences regarding the nature of learning within and between disciplinary contexts [19,35,36,37].
We first make use of this framework to categorize each of the 106 studies included in this review and demonstrate how the included studies represent a relatively limited learning focus. Our results, displayed in Table 2 below, indicate that almost all (94 of 106 or 88.7%) learning domains were confined to teaching and learning contexts in the applied sciences where, consistent with our previous findings, learning has been associated with lower-level cognitive processes [38]; see also, for example, [39]. Moreover, a closer look at the 12 remaining studies, all in pure disciplines, similarly suggests a focus on lower-level learning. This includes, for example, learning facts about biology (for an introductory microbiology course) or the correct procedures for using statistical software (for a psychology course). We conclude, based on the use of Biglan’s framework, that lower-level learning was targeted by the vast majority, if not all, of the learning contexts represented in this review.
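For transparency, the Table 2 cross-tabulation can also be expressed in a few lines of R. This is a minimal sketch, assuming each included study has been coded on Biglan's three dimensions under the moderator names we adopt later in our re-analysis; the 'studies' data frame and its codings are assumptions.

```r
# Minimal sketch: cross-tabulate the coded studies on Biglan's three
# dimensions. Column names follow the moderator labels used in our
# re-analysis; the 'studies' data frame itself is an assumption.
biglan_tab <- xtabs(~ applied_vs_non.applied + hard_vs_soft + life_vs_nonlife,
                    data = studies)
ftable(addmargins(biglan_tab))  # flat table with row/column totals
```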

2.2. Research Question 2

We next make use of Biglan’s framework by undertaking a meta-regression re-analysis of the data sets behind Figures 2 and 3 in the review, which investigate, respectively, the effect of using video as a replacement for and as a supplement to live instruction (see [2], pp. 214 and 218, respectively; the original study R code and files related to these two data sets are found in the repository linked to the original paper (bit.ly/betteronyoutube)). The related data sets are termed ‘swap’ (i.e., video as a replacement) and ‘sup’ (video as a supplement). In our re-analysis, however, we include the three levels from Biglan’s classification shown in Table 2 as additional moderator variables (i.e., hard vs. soft, pure vs. applied, and life vs. nonlife).

2.2.1. Statistical Methodology

Our re-analysis was conducted via meta-regression (MR). MR is a regression model applied to data obtained from a meta-analytic study, in which the dependent variable is typically numeric and corresponds to effect sizes. The regression model is usually the ordinary least squares linear model, but alternatives exist for when parametric assumptions such as normality and homoscedasticity are not met (most importantly, when the distribution of the residuals is not normal). In such cases, a linear (mixed) model would give biased results; thus, non-parametric or semi-parametric approaches are recommended.
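For concreteness, a conventional parametric meta-regression of this kind can be sketched with the metafor R package, which fits a weighted mixed-effects model to effect sizes with study-level moderators. The sketch below uses metafor's built-in BCG vaccine example data, not the review data.

```r
# Illustrative parametric meta-regression using metafor's built-in
# BCG vaccine data set; a textbook example, not the review data.
library(metafor)

dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)                   # log risk ratios + variances
fit <- rma(yi, vi, mods = ~ ablat, data = dat)  # absolute latitude as moderator
summary(fit)
```

In our case, however, the shape of the residuals argued against such parametric fits, as described next.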
In this re-analysis, a generalized additive model for location, scale, and shape (GAMLSS; [40]) approach was first used. The GAMLSS approach allows examining the effects of covariates on the location, scale, skewness, and kurtosis parameters of the dependent variable. GAMLSS is a form of supervised machine learning that allows for flexible regression and smoothing models to fit the data [41].
For the sake of simplicity, we focused on the effects of the covariates on the location parameter of the dependent variable, assuming this is best described by the four-parameter skew exponential power type 2 (SEP2) distribution [42]. Although we found the SEP2 distribution fit the data well, the model fit was not convincing due to the shape of the residuals, suggesting the adoption of a non-parametric approach. We thus chose a robust linear mixed model (RLMM; [43]) as a second analytical approach, and RLMMs were consequently used for all our analyses. As the current R implementation of the RLMM does not allow ANOVA-type outputs, pairwise differences were examined via multiple comparisons [44] and boxplots (for details in relation to the modelling, see the Supplementary Files at https://cutt.ly/rUbeOPa, accessed on 26 March 2024).
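To make this pipeline concrete, the sketch below shows the two modelling stages under stated assumptions: the data object 'd' and the exact covariate subset are illustrative, and the use of nparcomp's mctp function for the multiple comparisons of [44] is our assumption; only the SEP2 family and the robust mixed-model estimator come from the text above.

```r
# Sketch of the modelling stages described above; 'd' and the covariate
# subset shown are assumptions, not the original analysis objects.
library(gamlss)      # distributional (location/scale/shape) regression
library(robustlmm)   # robust linear mixed-effects models
library(nparcomp)    # nonparametric multiple comparisons (cf. [44])

# Stage 1: GAMLSS fit assuming a skew exponential power type 2 response.
m_gamlss <- gamlss(smd ~ hard_vs_soft + Setting + Outcome,
                   family = SEP2, data = d)
plot(m_gamlss)       # residual diagnostics motivated the robust alternative

# Stage 2: robust linear mixed model with a random intercept per study.
m_rlmm <- rlmer(smd ~ hard_vs_soft + Setting + Outcome + (1 | studynumber),
                data = d)
summary(m_rlmm)

# Pairwise differences for a given moderator via nonparametric multiple
# comparisons, as rlmer provides no ANOVA-type output.
summary(mctp(smd ~ Setting, data = d, type = "Tukey"))
```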
The model we investigated had the following structure:
DV ~ v1 + v2 + … + (1 | rv)
That is, the model was additive (i.e., no interactions were included), the dependent variable (DV) was numeric, and there was a random (intercept) variable (rv). In order to investigate parsimonious models (i.e., models with few covariates), we included only those covariates that seemed essential according to the results of Noetel et al. (see their Table 1); that is, no variable-selection method was pursued.
Thus, as applied to the original data sets, the final model investigated was
smd ~ hard_vs_soft + applied_vs_non.applied + life_vs_nonlife + Setting + Comparison + Outcome + Which_is_more_interactive + Topic_or_course + (1 | studynumber)
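In R model syntax, as used by the robustlmm package, this corresponds to the following call. This is a sketch: the data object 'd', holding the swap or sup data set with the Biglan moderators appended, is an assumption, while the variable names are those given in the formula above.

```r
# The final model written as an rlmer formula; 'd' is an assumed data frame
# holding the swap or sup data set with the Biglan moderators appended.
m_final <- rlmer(
  smd ~ hard_vs_soft + applied_vs_non.applied + life_vs_nonlife +
    Setting + Comparison + Outcome + Which_is_more_interactive +
    Topic_or_course + (1 | studynumber),
  data = d
)
summary(m_final)
```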

2.2.2. Results

We summarize our major results in this section (for more detailed results, see https://cutt.ly/rUbeOPa, accessed on 26 March 2024). Overall, as may be expected given the learning contexts uncovered in RQ1, the effect sizes remained positive. However, despite the relative data homogeneity, our results demonstrate much greater complexity associated with learning via video. In particular, as we conclude in this section, we found significant differences emerging between major disciplinary groups.
To begin, when video was used as a replacement, several significant differences emerged. First, differences in effect sizes were found between ‘educational settings’, particularly between ‘tutorial’ and ‘homework’, where the use of video was found to be more useful with homework than with tutorials (Mdn(homework) = 0.59, 95% CI [0.52, 0.65]; Mdn(tutorial) = 0.40, 95% CI [0.35, 0.46]; permutation t = −3.02, p = 0.018). Second, in contrast to the original study findings, where the use of video was found more effective when ‘skill acquisition’ (vs. knowledge) was assessed, no difference in effect was found between the two types of outcome assessments (Mdn(knowledge test) = 0.16, 95% CI [0.06, 0.26]; Mdn(skills assessment) = 0.27, 95% CI [0.13, 0.42]; permutation t = 1.26, p = 0.238).
Next, when investigating the use of video as a supplement to existing content, other new findings emerged. First, significant differences were found in effect sizes between educational settings: between mixed (‘mixed’ is a term used in the original data set, though not explained in the main manuscript) and homework (Mdn(mixed) = 0.30, 95% CI [0.22, 0.39]; Mdn(homework) = 0.56, 95% CI [0.46, 0.65]; permutation t = −2.80, p = 0.046) and between mixed and tutorial (Mdn(mixed) as above; Mdn(tutorial) = 0.68, 95% CI [0.60, 0.75]; permutation t = 4.85, p < 0.001). Second, there was an effect of comparison such that there was a difference between ‘human’ (or teacher) and ‘static media’, with static media found to be more effective as a supplement than human input (Mdn(human) = 0.30, 95% CI [0.08, 0.52]; Mdn(static media) = 1.07, 95% CI [0.80, 1.34]; permutation t = 5.76, p < 0.001). Third, the difference between the types of outcome was borderline at the 0.05 level, with video supplements found to be more helpful for skills assessments than knowledge tests (Mdn(knowledge test) = 0.54, 95% CI [0.28, 0.80]; Mdn(skills assessment) = 1.05, 95% CI [0.86, 1.23]; permutation t = 2.16, p = 0.048).
Finally, despite the relatively homogeneous nature of the original study data, important differences in effect sizes emerged between major disciplinary subgroups. In particular, when swapping video for any other learning opportunity, the results indicate that soft learning domains had significantly larger effect sizes than hard learning domains (Mdn(soft domain) = 0.37, 95% CI [0.19, 0.55]; Mdn(hard domain) = 0.15, 95% CI [0.07, 0.23]; permutation t = 2.69, p = 0.008). However, somewhat in contrast, when videos were provided in addition to existing content, hard learning domains tended to have larger effect sizes than soft learning domains (permutation t = −1.76, p = 0.08). We now turn to discussing these results in light of Noetel et al.’s study findings.

3. Discussion

The original study concluded that the effect of video on learning in higher education was generally positive. Noetel et al. reached this conclusion based on an analysis that employed no theoretical framework for categorizing their data, while providing little contextual information concerning the source of those data. As a methodological issue, this approach was surprising given that the use of theory and the importance of contextualizing findings are considered basic research practices. Given these issues, we set out to examine the study data more closely and conduct a re-analysis using a relevant theoretical framework.
The results of our re-analysis were at variance with Noetel et al.’s findings. First, in our descriptive analysis, we found almost all included studies were in contexts where the associated learning may be characterized as involving lower-level cognitive processes, such as learning facts and procedures. Second, in our meta-analysis using an established theoretical framework, though effect sizes remained positive, we found greater complexity around how video was used and its effects relative to the learning contexts. Taken together, from a cognitive processing perspective, we did not consider the rediscovery of positive effect sizes surprising given the relative homogeneity of the original study data, but rather an affirmation of the suggestion that, overall, the use of video in higher education benefits learning requiring lower-level cognitive processing.
Indeed, we suggest significant negative effect sizes would emerge if learning requiring higher-level cognitive processing were adequately represented in the original review. Such learning characterizes, for example, disciplines typically associated with abstract reasoning, such as pure mathematics [45], where the associated cognitive demands are known to be high [46,47]. For example, related meta-analytic research comparing distance education to live classroom instruction has found mathematics instruction ‘best suited to the classroom’ ([48], p. 400). Indeed, regarding the specific use of video, recent consecutive systematic reviews have found that, overall, student use of recorded lecture videos (RLVs), which are experiencing rapid growth [49], in undergraduate mathematics is negatively correlated with academic performance [50,51], with some early research supporting causality [15]. (In both systematic reviews, included studies permitted individual students to use RLVs as a supplement to and/or a replacement for attending live lectures. Note also that pure mathematics learning contexts were not represented in Noetel et al.’s review; for comparison, the review included only two studies in a pure discipline (both biology) in which the comparison involved student performance when using recorded-only vs. live-only lectures [52,53], and both reported negative effects.) In particular, RLVs appear to enable students to engage in surface learning, such as rote memorization, of course content, which leads to poorer academic performance [54,55]. As early research suggests, mathematics students approach the use of RLVs in similar ways as they approach viewing television [15], weakening their cognitive investment [11,12,13,14]. In sum, though such approaches may be sufficient for tasks involving lower-level cognitive processing, such as learning facts or acquiring procedural knowledge, they may be detrimental when tasks require higher-level processes, such as acquiring richly connected conceptual knowledge [56].

4. Future Research

The effects of a screen-based video medium on the learning process remain poorly understood, and further exploration of the factors influencing video-based learning is needed. As the results of our re-analysis point out, this includes distinguishing between whether videos are used as a replacement for or a supplement to live instruction. In this exploration, a variety of theoretical lenses may be employed. In our view, self-regulation theories show particular promise for future research, because much of current video-based teaching, such as RLVs, is delivered asynchronously, with students mostly responsible for monitoring, judging, and controlling their learning (see, for example, [57]; to be clear, we do not suggest the synchronous delivery of video-based teaching is free from the issues we discuss in this section. For simplicity, we focus on asynchronous delivery, not least because of its prevalence in video-based teaching in higher education).
Moreover, when a video is delivered asynchronously, the experience is one-way: the teacher presents material but does not interact with students in real time. This, for example, denies teachers the ability to ‘read their audience’ and adjust pacing or how new concepts are scaffolded. Furthermore, if only the teacher’s head is shown in the video, students may be denied additional resources, such as hand gestures, considered a support to the learning process [58,59].
Notably, as a proxy for learning involving higher-level cognitive processing, those researching learning in mathematics have highlighted the nature of interactivity as vital to the learning process (e.g., [60,61,62]). Indeed, deeper learning in mathematics has been theorized as a form of interactivity involving iterative cycles of discussion, feedback, and reflection [63,64,65]. When this interactivity is almost entirely regulated by the student, one plausible reason for diminished learning outcomes in knowledge areas requiring higher-level cognitive processing emerges: student objectives (e.g., time efficiency) and their regulation of resources may be at odds with the teacher’s target outcomes (e.g., depth of understanding). In consideration of all these factors, the use of self-regulation theory may yield important new insights.
We further hypothesize, as framed by self-regulated learning, that the video medium may cue learners to a weakened cognitive investment, inhibiting learners from undertaking higher-level cognitive processing and thus from achieving deeper learning. Consistent with work on self-regulated learning, such learners may be under the illusion of achieving the goal of understanding even though their thinking is actually poor or even incorrect. The resulting dynamic is thought to lead to cycles of ‘poor self-regulation and lower levels of achievement’ ([57], p. 427). Testing such a hypothesis presents an important avenue for future research.
Finally, some may envisage addressing current issues by leveraging AI technologies to direct the presentation of video-based teaching. As this potential remains unclear (e.g., [66]), more research is needed to understand how AI may be used to assist teaching via video.

5. Conclusions

In sum, understanding the effects of any pedagogical innovation involves unravelling a complex web of influences related to the learning process in varied contexts. In relation to this review, we highlight crucial yet missing complexities. We conducted simple descriptive analyses as well as a re-analysis, providing evidence that, consistent with prior reviews, current findings do not support broad generalizations across higher education. Moreover, while we have no doubt that the use of video has some beneficial effects in higher education, as demonstrated by Noetel et al.’s review and our re-analysis, we remain concerned about the potentially adverse effects a reliance on this innovation may have on students from, for example, differing demographic backgrounds studying in varied disciplinary contexts. While more research is needed to reveal when video may be an optimal or suboptimal instructional medium, it is clear this research must employ robust, rigorous, and well-grounded methodological approaches that will provide clear and, ultimately, accurate findings.

Author Contributions

Conceptualization, S.T.; methodology, S.T. and F.M.-R.; software, S.T. and F.M.-R.; validation, S.T. and F.M.-R.; formal analysis, F.M.-R.; investigation, S.T. and F.M.-R.; resources, S.T. and F.M.-R.; data curation, S.T. and F.M.-R.; writing—original draft preparation, S.T.; writing—review and editing, S.T. (major) and F.M.-R. (minor); visualization, S.T.; supervision, S.T.; project administration, S.T.; funding acquisition, N/A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data and material for the originating study may be found at https://bit.ly/betteronyoutube, accessed on 26 February 2024. Supplementary material for this review may be found at https://cutt.ly/rUbeOPa, accessed on 26 February 2024.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Woolfitt, Z. The Effective Use of Video in Higher Education. Lectoraat Teach. Learn. Technol. Inholland Univ. Appl. Sci. 2015, 1, 1–49. Available online: https://www.academia.edu/download/43014151/The_effective_use_of_video_in_higher_education_-_Woolfitt_October_2015.pdf (accessed on 31 March 2023).
  2. Noetel, M.; Griffith, S.; Delaney, O.; Sanders, T.; Parker, P.; del Pozo Cruz, B.; Lonsdale, C. Video improves learning in higher education: A systematic review. Rev. Educ. Res. 2021, 91, 204–236. [Google Scholar] [CrossRef]
  3. Noetel, M.; del Pozo Cruz, B.; Lonsdale, C.; Parker, P.; Sanders, T. Videos won’t Kill the uni Lecture, but They will Improve Student Learning and Their Marks. Conversation 2020. Available online: https://theconversation.com/videos-wont-kill-the-uni-lecture-but-they-will-improve-student-learning-and-their-marks-142282 (accessed on 20 December 2021).
  4. Figlio, D.; Rush, M.; Yin, L. Is it live or is it internet? Experimental estimates of the effects of online instruction on student learning. J. Labor Econ. 2013, 31, 763–784. [Google Scholar] [CrossRef]
  5. Furenes, M.I.; Kucirkova, N.; Bus, A.G. A comparison of children’s reading on paper versus screen: A meta-analysis. Rev. Educ. Res. 2021, 91, 483–517. [Google Scholar] [CrossRef]
  6. Hansch, A.; Hillers, L.; McConachie, K.; Newman, C.; Schildhauer, T.; Schmidt, P. Video and Online Learning: Critical Reflections and Findings from the Field. HIIG Discussion Paper Series No. 2015-02. 2015. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2577882 (accessed on 26 February 2024).
  7. Poquet, O.; Lim, L.; Mirriahi, N.; Dawson, S. Video and learning: A systematic review (2007–2017). In Proceedings of the 8th International Conference on Learning Analytics and Knowledge, Sydney, NSW, Australia, 7–9 March 2018; pp. 151–160. [Google Scholar] [CrossRef]
  8. ChanLin, L.J. Animation to teach students of different knowledge levels. J. Instr. Psychol. 1998, 25, 166–175. [Google Scholar]
  9. Garrett, N. How do academic disciplines use PowerPoint? Innov. High. Educ. 2016, 41, 365–380. [Google Scholar] [CrossRef]
  10. Hong, J.; Pi, Z.; Yang, J. Learning declarative and procedural knowledge via video lectures: Cognitive load and learning effectiveness. Innov. Educ. Teach. Int. 2018, 55, 74–81. [Google Scholar] [CrossRef]
  11. Collins, W.A.; Wiens, M. Cognitive processes in television viewing: Description and strategic implications. In Cognitive Strategy Research; Pressley, M., Levin, J.R., Eds.; Springer: Berlin/Heidelberg, Germany, 1983; pp. 179–201. [Google Scholar] [CrossRef]
  12. Kubey, R.W.; Csikszentmihalyi, M. Television and the Quality of Life: How Viewing Shapes Everyday Experience; Erlbaum: Hillsdale, NJ, USA, 1990. [Google Scholar]
  13. Klemm, W. Television Effects on Education, Revisited. Psychol. Today-Mem. Medic. 2012. Available online: https://www.psychologytoday.com/us/blog/memory-medic/201207/television-effects-education-revisited (accessed on 29 December 2021).
  14. Schwab, F.; Hennighausen, C.; Adler, D.C.; Carolus, A. Television is still “easy” and print is still “tough”? more than 30 years of research on the amount of invested mental effort. Front. Psychol. 2018, 9, 1098. [Google Scholar] [CrossRef] [PubMed]
  15. Trenholm, S. Media effects accompanying the use of recorded lecture videos in undergraduate mathematics instruction. Int. J. Math. Educ. Sci. Technol. 2021, 1–29. [Google Scholar] [CrossRef]
  16. Jabr, F. The reading brain in the digital age: The science of paper versus screens. Sci. Am. 2013, 309, 48–53. [Google Scholar] [CrossRef] [PubMed]
  17. Powell, T.E.; Boomgaarden, H.G.; De Swert, K.; de Vreese, C.H. Framing fast and slow: A dual processing account of multimodal framing effects. Media Psychol. 2019, 22, 572–600. [Google Scholar] [CrossRef]
  18. Evans, J.S.B.; Stanovich, K.E. Dual-process theories of higher cognition: Advancing the debate. Perspect. Psychol. Sci. 2013, 8, 223–241. [Google Scholar] [CrossRef] [PubMed]
  19. Czerniewicz, L.; Brown, C. Disciplinary differences in the use of educational technology. In Proceedings of the Second International E-learning Conference, New York, NY, USA, 28–29 June 2007. [Google Scholar]
  20. Cooke, M.; Irby, D.M.; Sullivan, W.; Ludmerer, K.M. American medical education 100 years after the Flexner report. N. Engl. J. Med. 2006, 355, 1339–1344. [Google Scholar] [CrossRef]
  21. Légaré, F.; Freitas, A.; Thompson-Leduc, P.; Borduas, F.; Luconi, F.; Boucher, A.; Witteman, H.O.; Jacques, A. The majority of accredited continuing professional development activities do not target clinical behavior change. Acad. Med. 2015, 90, 197–202. [Google Scholar] [CrossRef] [PubMed]
  22. Callaghan-Koru, J.A.; Aqil, A.R. Theory-Informed Course Design: Applications of Bloom’s Taxonomy in Undergraduate Public Health Courses. Pedagog. Health Promot. Scholarsh. Teach. Learn. 2020, 8, 75–83. [Google Scholar] [CrossRef]
  23. Laschinger, H.K.; Boss, M.W. Learning styles of nursing students and career choices. J. Adv. Nurs. 1984, 9, 375–380. [Google Scholar] [CrossRef]
  24. Albino, J.E.; Young, S.K.; Neumann, L.M.; Kramer, G.A.; Andrieu, S.C.; Henson, L.; Horn, B.; Hendricson, W.D. Assessing dental students’ competence: Best practice recommendations in the performance assessment literature and investigation of current practices in predoctoral dental education. J. Dent. Educ. 2008, 72, 1405–1435. [Google Scholar] [CrossRef]
  25. Gonzalez-Cabezas, C.; Anderson, O.S.; Wright, M.C.; Fontana, M. Association between dental student-developed exam questions and learning at higher cognitive levels. J. Dent. Educ. 2015, 79, 1295–1304. [Google Scholar] [CrossRef]
  26. Bloom, B.S. Taxonomy of Educational Objectives: Handbook 1: Cognitive Domain; Longman: London, UK, 1956. [Google Scholar]
  27. Krathwohl, D.R. A revision of Bloom’s taxonomy: An overview. Theory Into Pract. 2002, 41, 212–218. [Google Scholar] [CrossRef]
  28. Adams, A.E.M.; Randall, S.; Traustadóttir, T. A tale of two sections: An experiment to compare the effectiveness of a hybrid versus a traditional lecture format in introductory microbiology. CBE Life Sci. Educ. 2015, 14, ar6. [Google Scholar] [CrossRef]
  29. Biggs, J. Individual differences in study processes and the quality of learning outcomes. High. Educ. 1979, 8, 381–394. [Google Scholar] [CrossRef]
  30. Biglan, A. The characteristics of subject matter in different academic areas. J. Appl. Psychol. 1973, 57, 195. [Google Scholar] [CrossRef]
  31. Simpson, A. The surprising persistence of Biglan’s classification scheme. Stud. High. Educ. 2017, 42, 1520–1531. [Google Scholar] [CrossRef]
  32. Smart, J.C.; Elton, C.F. Validation of the Biglan model. Res. High. Educ. 1982, 17, 213–229. [Google Scholar] [CrossRef]
  33. Stoecker, J.L. The Biglan classification revisited. Res. High. Educ. 1993, 34, 451–464. [Google Scholar] [CrossRef]
  34. Entwistle, N. Learning outcomes and ways of thinking across contrasting disciplines and settings in higher education. Curric. J. 2005, 16, 67–82. [Google Scholar] [CrossRef]
  35. Donald, J.G. Knowledge and the university curriculum. High. Educ. 1986, 15, 267–282. [Google Scholar] [CrossRef]
  36. Neumann, R.; Parry, S.; Becher, T. Teaching and learning in their disciplinary contexts: A conceptual analysis. Stud. High. Educ. 2002, 27, 405–417. [Google Scholar] [CrossRef]
  37. Smith, S.N.; Miller, R.J. Learning approaches: Examination type, discipline of study, and gender. Educ. Psychol. 2005, 25, 43–53. [Google Scholar] [CrossRef]
  38. Paulsen, M.B.; Wells, C.T. Domain differences in the epistemological beliefs of college students. Res. High. Educ. 1998, 39, 365–384. [Google Scholar] [CrossRef]
  39. Swart, A.J. Evaluation of final examination papers in engineering: A case study using Bloom’s Taxonomy. IEEE Trans. Educ. 2009, 53, 257–264. [Google Scholar] [CrossRef]
  40. Stasinopoulos, M.D.; Rigby, R.A.; Bastiani, F.D. GAMLSS: A distributional regression approach. Stat. Model. 2018, 18, 248–273. [Google Scholar] [CrossRef]
  41. Kneib, T. Beyond mean regression. Stat. Model. 2013, 13, 275–303. [Google Scholar] [CrossRef]
  42. DiCiccio, T.J.; Monti, A.C. Inferential aspects of the skew exponential power distribution. J. Am. Stat. Assoc. 2004, 99, 439–450. [Google Scholar] [CrossRef]
  43. Koller, M. Robustlmm: An R package for robust estimation of linear mixed-effects models. J. Stat. Softw. 2016, 75, 1–24. [Google Scholar] [CrossRef]
  44. Noguchi, K.; Abel, R.S.; Marmolejo-Ramos, F.; Konietschke, F. Nonparametric multiple comparisons. Behav. Res. Methods 2020, 52, 489–502. [Google Scholar] [CrossRef] [PubMed]
  45. Ferrari, P.L. Abstraction in mathematics. Philos. Trans. R. Soc. London. Ser. B Biol. Sci. 2003, 358, 1225–1230. [Google Scholar] [CrossRef]
  46. Henningsen, M.; Stein, M.K. Mathematical tasks and student cognition: Classroom-based factors that support and inhibit high-level mathematical thinking and reasoning. J. Res. Math. Educ. 1997, 28, 524–549. [Google Scholar] [CrossRef]
  47. McCabe, D.P.; Roediger, H.L., III; McDaniel, M.A.; Balota, D.A.; Hambrick, D.Z. The relationship between working memory capacity and executive functioning: Evidence for a common executive attention construct. Neuropsychology 2010, 24, 222–243. [Google Scholar] [CrossRef]
  48. Bernard, R.M.; Abrami, P.C.; Lou, Y.; Borokhovski, E.; Wade, A.; Wozney, L.; Wallet, P.A.; Fiset, M.; Huang, B. How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Rev. Educ. Res. 2004, 74, 379–439. [Google Scholar] [CrossRef]
  49. Research and Markets. Lecture Capture Systems Market—Growth, Trends, COVID-19 Impact, and Forecasts (2021–2026). Mordor Intelligence. 2021. Available online: https://www.businesswire.com/news/home/20210412005476/en/Global-Lecture-Capture-Systems-Market-2021-to-2026---Growth-Trends-COVID-19-Impact-and-Forecasts---ResearchAndMarkets.com (accessed on 26 February 2024).
  50. Trenholm, S.; Alcock, L.; Robinson, C.L. Mathematics lecturing in the digital age. Int. J. Math. Educ. Sci. Technol. 2012, 43, 703–716. [Google Scholar] [CrossRef]
  51. Lindsay, E.; Evans, T. The use of lecture capture in university mathematics education: A systematic review of the research literature. Math. Educ. Res. J. 2021, 34, 911–931. [Google Scholar] [CrossRef]
  52. Adams, N.E. Bloom’s taxonomy of cognitive learning objectives. J. Med. Libr. Assoc. JMLA 2015, 103, 152. [Google Scholar] [CrossRef] [PubMed]
  53. Thai, T.; De Wever, B.; Valcke, M. Impact of Different Blends of Learning on Students Performance in Higher Education. In Proceedings of the 14th European Conference on E-Learning (ECEL), Hatfield, UK, 29–30 October 2015. [Google Scholar]
  54. Trenholm, S.; Hajek, B.; Robinson, C.L.; Chinnappan, M.; Albrecht, A.; Ashman, H. Investigating undergraduate mathematics learners’ cognitive engagement with recorded lecture videos. Int. J. Math. Educ. Sci. Technol. 2019, 50, 3–24. [Google Scholar] [CrossRef]
  55. Le, A.; Joordens, S.; Chrysostomou, S.; Grinnell, R. Online lecture accessibility and its influence on performance in skills-based courses. Comput. Educ. 2010, 55, 313–319. [Google Scholar] [CrossRef]
  56. Baroody, A.J.; Feil, Y.; Johnson, A.R. Research commentary: An alternative reconceptualization of procedural and conceptual knowledge. J. Res. Math. Educ. 2007, 38, 115–131. [Google Scholar] [CrossRef]
  57. Bjork, R.A.; Dunlosky, J.; Kornell, N. Self-regulated learning: Beliefs, techniques, and illusions. Annu. Rev. Psychol. 2013, 64, 417–444. [Google Scholar] [CrossRef]
  58. Edwards, L.D. Gestures and conceptual integration in mathematical talk. Educ. Stud. Math. 2009, 70, 127–141. [Google Scholar] [CrossRef]
  59. Hegeman, J.S. Using Instructor-Generated Video Lectures in Online Mathematics Courses Improves Student Learning. Online Learn. 2015, 19, 70–87. [Google Scholar] [CrossRef]
  60. Björklund Boistrup, L. Assessment discourses in mathematics classrooms: A multimodal social semiotic study. Ph.D. Thesis, Department of Mathematics and Science Education, Stockholm University, Stockholm, Sweden, 2010. [Google Scholar]
  61. Roth, W.M. Gestures: Their role in teaching and learning. Rev. Educ. Res. 2001, 71, 365–392. [Google Scholar] [CrossRef]
  62. Tall, D. Cognitive conflict and the learning of mathematics. In Proceedings of the First Conference of the International Group for the Psychology of Mathematics Education, Utrecht, The Netherlands, 1977. [Google Scholar]
  63. Skemp, R.R. Goals of Learning and Qualities of Understanding. Math. Teach. 1979, 88, 44–49. [Google Scholar]
  64. Rittle-Johnson, B.; Siegler, R.S.; Alibali, M.W. Developing conceptual understanding and procedural skill in mathematics: An iterative process. J. Educ. Psychol. 2001, 93, 346. [Google Scholar] [CrossRef]
  65. Rittle-Johnson, B.; Schneider, M. Developing conceptual and procedural knowledge of mathematics. In Oxford Handbook of Numerical Cognition; Oxford University Press: Oxford, UK, 2015; pp. 1118–1134. [Google Scholar]
  66. Seo, K.; Fels, S.; Yoon, D.; Roll, I.; Dodson, S.; Fong, M. Artificial intelligence for video-based learning at scale. In Proceedings of the Seventh ACM Conference on Learning@Scale, New York, NY, USA, 12–14 August 2020; pp. 215–217. [Google Scholar]
Table 1. Learning domains as classified by Noetel et al. a

| Learning Domain (as categorized by Noetel et al.) | Tally | Percent (1 d.p.) |
| biology | 3 | 2.8 |
| computer science | 2 | 1.9 |
| dentistry | 8 | 7.5 |
| engineering | 1 | 0.9 |
| English as a foreign language | 5 | 4.7 |
| medicine | 52 | 49.1 |
| nursing | 13 | 12.3 |
| nursing, paramedicine | 1 | 0.9 |
| nutrition | 1 | 0.9 |
| pharmacy | 1 | 0.9 |
| physical education | 1 | 0.9 |
| physical therapy | 4 | 3.8 |
| physics | 1 | 0.9 |
| physiotherapy | 1 | 0.9 |
| psychology | 4 | 3.8 |
| psychology, education | 1 | 0.9 |
| sign language | 1 | 0.9 |
| sport science | 2 | 1.9 |
| teaching | 4 | 3.8 |
| Total | 106 | 100 |
a See bit.ly/betteronyoutube > Supplementary File 3: Characteristics of Included Studies, Consensus Extraction and Risk of Bias Spreadsheets, supplementary file > column titled ‘learning_domain’.
Table 2. Learning domains as classified using Biglan’s taxonomy a

|         | Hard, Life | Hard, Nonlife | Soft, Life | Soft, Nonlife | Totals |
| Pure    | 3 | 2 | 2 | 5 | 12 |
| Applied | 63 | 11 | 20 | 0 | 94 |
| Totals  | 66 | 13 | 22 | 5 | 106 b |

(Hard subtotal: 66 + 13 = 79; Soft subtotal: 22 + 5 = 27.)
a Stoecker’s [33] revision was used to classify previously unclassified domains of dentistry and nursing. b Number disparity due to Study Number 17, representing two study contexts, being counted twice (see bit.ly/betteronyoutube).