Editorial

From Sensor Data to Educational Insights

by José A. Ruipérez-Valiente 1,*, Roberto Martínez-Maldonado 2, Daniele Di Mitri 3 and Jan Schneider 3

1 Department of Information and Communication Engineering, University of Murcia, 30100 Murcia, Spain
2 Faculty of Information Technology, Monash University, Clayton, VIC 3168, Australia
3 DIPF|Leibniz Institute for Research and Information in Education, 60323 Frankfurt, Germany
* Author to whom correspondence should be addressed.
Sensors 2022, 22(21), 8556; https://doi.org/10.3390/s22218556
Submission received: 26 October 2022 / Accepted: 3 November 2022 / Published: 7 November 2022
(This article belongs to the Special Issue From Sensor Data to Educational Insights)
Technology is gradually becoming an integral part of learning at all levels of education. This includes the now pervasive presence of virtual learning environments (VLEs) and interactive devices that learners use or wear, or that are embedded in the physical classroom. These technology-rich educational ecosystems have greatly facilitated the capture of data about learners. Thus, several research areas, such as learning analytics (LA), educational data mining (EDM), and artificial intelligence in education (AIED), have grown exponentially during the last decade, with multiple venues supporting this research [1]. However, the inferences about learning that can be made by solely analyzing trace data from VLEs are limited, since logged data do not commonly provide a complete view of the learning experience [2]. Therefore, research communities are moving beyond the data obtained from VLEs and other online tools by incorporating data from external sources such as sensors, pervasive devices, and computer vision systems. Within the context of education, this subfield is often referred to as multimodal learning analytics (MMLA) [3]; nevertheless, the use of these data sources is also common in broader research areas, such as affective computing (e.g., [4]) and human–computer interaction (HCI) (e.g., [5]). The promise is to extend and improve the quality of the analyses that can be performed with these new data sources [6]. Moreover, many new sensor-based tools, such as sensor-based games [7] or realistic laboratories [8,9], are being built to support the educational process. The challenge is embedding sensors and the resulting data representations in authentic educational settings in pedagogically meaningful and ethical ways [10].
This Special Issue (SI) invited publications on approaches to converting data captured using sensors (e.g., cameras, smartphones, microphones, or temperature sensors), wearables (e.g., smart wristbands, watches, or glasses), or other Internet of Things (IoT) devices (e.g., interactive whiteboards, eBooks, or tablets) into meaningful educational insights. Moreover, it invited papers on tools, architectures, or frameworks that manage the orchestration of these sensors and IoT devices to improve education. The submitted articles had to explain how the inclusion of sensor devices can augment the analyses performed to improve teaching, learning, or the educational context in which the sensing occurs (e.g., in classrooms, VLEs, or other educational spaces). This SI focused on empirical case studies that fulfill the aforementioned criteria, as well as experimental architectures, methodologies, frameworks, and survey papers.

1. The Affordances and Caveats of Sensor Data in Education

Using sensor data in education offers researchers novel affordances to generate a richer picture of the learning experience by going beyond what can be captured from mouse clicks and keystrokes. Sensor data can enable the automated analysis and support of learning activities that are not necessarily mediated by a computer [11], such as activities unfolding in a makerspace [12] or in physical simulation-based training rooms [13]. Similarly, video, audio, and other sensing devices have been used in physical learning spaces to model aspects of the classroom that, in the past, could only be studied via direct observations and ethnographic studies, such as teacher–student communication [14], spatial dynamics [15], and students’ collaborative dialogue [16]. Data obtained from both wearables and video-based body part recognition algorithms are also enabling the study of learning activities that involve the development of psychomotor skills, such as the effective performance of cardiopulmonary resuscitation by clinical students [17] or the delivery of oral presentations that effectively combine gestures and body posture [18]. Sensor data are also being used to generate a deeper understanding of under-the-skin cognitive and emotional aspects of the student that may affect learning but are hard to perceive without sensing devices [19].
However, this increased interest in using sensor data in education comes with some important caveats. The risks of unintended over-surveillance and the capture of activities that are not directly related to learning need to be addressed [20]. For example, using video in learning spaces can carry numerous ethical implications, since not all the activities that occur in a classroom may be related to the subject matter being learned, and students can be easily identified [21]. The use of physiological sensors can also generate information that many learners consider private and personal [22]. Another issue is the enlarged amount and heterogeneity of the data being captured, which increases the complexity of the learning analytics solution. Whilst it may be possible to find interesting trends in the data and build more accurate learning models, the intricacy of the models and the sensing setup may make the solution harder to deploy in authentic learning situations [3]. It may also become challenging to translate low-level sensor signals into meaningful educational constructs that teachers and students can understand [6,13]. This, in turn, makes fully informed consent from students, educators, and other educational stakeholders more difficult, as it may be harder for them to grasp the implications of capturing and analyzing each data modality [11,23]. Together, the added infrastructure and expertise required, along with the challenges of informed consent, may threaten the long-term sustainability and scalability of sensor-based educational solutions, a risk already identified in a recent MMLA literature review [10]. In sum, much research is needed to develop sensor-based MMLA systems with integrity that balance the benefits of augmenting learning situations against the ethical and practical challenges this entails. This SI contributes in several ways to these open issues and research gaps.

2. Overview of the Special Issue

The SI gathered 12 articles on diverse topics that use sensors within educational environments. Four main uses of sensors in education can be identified in these articles: first, articles evaluating sensor-based tools designed exclusively for educational purposes; second, articles studying the use of sensor data to improve the learning process directly; third, case studies that collected sensor data to provide new insights into the learning process; and finally, articles with diverse other objectives. We now review the articles of the SI in these thematic groups.
First, a group of three papers implemented sensor-based tools for educational purposes. Guerrero-Osuna et al. [8] presented the design and implementation of a novel IoT device called the MEIoT weather station, which can serve as a hybrid educational IoT environment. They presented a case study on using it to teach least-squares regression with linear, quadratic, and cubic approximations within the Educational Mechatronics Conceptual Framework (EMCF); a sketch of this kind of fit appears below. Another example is the article by Khan et al. [7], who proposed a realistic, open-ended 3D training setup based on VR and a Kinect sensor, built with the Unity game engine, in which children learn and practice road safety exercises. The sensors make the game experience much more immersive, and the experimental results were positive, with children showing safer road-crossing behaviors. Finally, the third article in this group implemented the Two-Dimensional Cartesian Coordinate System Educational Toolkit (2D-CACSET), which uses multiple sensors to teach two-dimensional representations as a first step toward constructing spatial thinking [9].
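To make the regression exercise concrete, the following is a minimal sketch (not taken from [8]) of fitting linear, quadratic, and cubic least-squares approximations to weather-station readings; the sampling times, temperature model, and variable names are synthetic stand-ins for the station’s actual measurements.

```python
# Minimal sketch (not from [8]): least-squares fits of degree 1-3 to
# weather-station temperature readings. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(seed=42)
hours = np.arange(0, 24, 0.5)                      # time of day (h)
temp = 15 + 8 * np.sin((hours - 6) * np.pi / 12)   # idealized daily cycle (degC)
temp += rng.normal(0, 0.4, size=hours.size)        # simulated sensor noise

for degree in (1, 2, 3):                           # linear, quadratic, cubic
    coeffs = np.polyfit(hours, temp, degree)       # least-squares coefficients
    fitted = np.polyval(coeffs, hours)
    sse = np.sum((temp - fitted) ** 2)             # sum of squared errors
    print(f"degree {degree}: coeffs={np.round(coeffs, 3)}, SSE={sse:.2f}")
```

As expected for a roughly sinusoidal daily cycle, the squared error drops markedly from the linear to the cubic fit, which is the comparison such an exercise asks students to reason about.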
Then, another group of three articles implemented applications that use sensor data to potentially improve the learning process. For example, Mat Sanusi et al. [24] designed and implemented the Table Tennis Tutor (T3), a multi-sensor system consisting of a smartphone with built-in sensors for collecting motion data and a Microsoft Kinect for tracking body position, which can provide live coaching and feedback on a trainee’s table tennis forehand strokes. The work of [25] explored the factors of the physical learning environment (PLE) that can affect distance learning and built a software infrastructure that can measure, collect, and process multimodal data from and about the PLE using mobile sensing; they then evaluated, with 10 participants, the extent to which the software provides relevant information about the learning context. The last article of this group, by Praharaj et al. [26], prototyped a tool that performs automatic collaboration analytics using both non-verbal and verbal audio indicators, which could be used in collaborative learning activities to better understand each member’s actions and discourse; a sketch of one such indicator follows.
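As an illustration of a non-verbal audio indicator (and not the pipeline of [26]), the sketch below computes per-member speaking time by windowed RMS-energy voice activity detection, assuming one microphone channel per group member; the channels, sampling rate, and threshold are synthetic assumptions.

```python
# Illustrative sketch: per-speaker talk time from RMS-energy thresholding,
# one mono audio channel per group member. Not the authors' system.
import numpy as np

def speaking_time(channel, sr, win_s=0.5, threshold=0.02):
    """Seconds of detected speech, by RMS thresholding of fixed windows."""
    win = int(sr * win_s)
    n_windows = len(channel) // win
    frames = channel[: n_windows * win].reshape(n_windows, win)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))    # energy per window
    return np.sum(rms > threshold) * win_s         # windows above threshold

# Synthetic stand-ins for three members' microphone channels (16 kHz, 60 s).
sr = 16_000
rng = np.random.default_rng(0)
channels = [rng.normal(0, amp, sr * 60) for amp in (0.05, 0.01, 0.03)]

for member, ch in enumerate(channels, start=1):
    print(f"member {member}: {speaking_time(ch, sr):.1f} s of speech")
```

Simple indicators like this (talk time, turn-taking balance) are the kind of low-level signal that collaboration analytics tools aggregate before mapping them to constructs such as participation equity.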
In addition, a group of three articles focused on using sensor data to better understand the learning process within specific case studies and contexts. Two of these articles focused on students’ eye gaze. The article by Lee et al. [27] used Tobii Pro Glasses 2 to capture eye gaze and developed eye movement analysis with hidden Markov models (EMHMM) to differentiate between states of focused attention and mind-wandering. They found that participants with a centralized gaze pattern detected targets better and rated themselves as more focused than those with a distributed pattern, highlighting differences in eye movement patterns between attention states; the sketch after this paragraph illustrates the underlying modeling idea. A second study, by Brückner et al. [28], also focused on eye gaze, but used Epistemic Network Analysis (ENA) in the context of graph tasks. They found differences between the gaze patterns of students who solved the graph tasks correctly and incorrectly; for example, incorrect solvers shifted their gaze from the graph to the x-axis and from the question to the graph comparatively more often than correct solvers. The last article of this group, by Vujovic et al. [29], collected audiovisual data during a collaborative activity in a co-located physical learning space to explore how table shape affects collaboration across different group sizes and genders, finding that table shape does influence student behavior under these conditions.
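The core idea behind hidden Markov modeling of gaze can be sketched in a few lines. The following uses the hmmlearn library rather than the EMHMM toolbox of [27], and the fixation data are synthetic: hidden states are learned from fixation coordinates, so that state means approximate regions of interest and the transition matrix summarizes how gaze moves between them.

```python
# Minimal sketch of HMM-based gaze modeling (hmmlearn, not the EMHMM toolbox).
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

rng = np.random.default_rng(1)
# Synthetic fixations around a "screen centre" then a "peripheral" region.
centre = rng.normal([0.5, 0.5], 0.05, size=(150, 2))
periphery = rng.normal([0.2, 0.8], 0.08, size=(150, 2))
fixations = np.vstack([centre, periphery])          # (n_fixations, x/y)

model = hmm.GaussianHMM(n_components=2, covariance_type="full", n_iter=100)
model.fit(fixations)            # single sequence; pass lengths= for several
states = model.predict(fixations)

print("fixations per state:", np.bincount(states))
print("state means (approx. regions of interest):\n", np.round(model.means_, 2))
print("transition matrix:\n", np.round(model.transmat_, 2))
```

In EMHMM-style analyses, such per-participant models are then clustered to contrast gaze strategies, e.g., the centralized versus distributed patterns reported in [27].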
Finally, the last group of three papers pursued diverse objectives. Solé-Beteta et al. [30] proposed a methodology and associated model to measure student engagement in VLEs using more than 30 digital interactions and events during a synchronous lesson. Many of these interactions are captured via sensors, including students’ faces, gestural poses, and even their voice audio. They also validated this methodology by building a software prototype tested in two different synchronous learning activities. The article by [31] presents EFAR-MMLA, an evaluation framework to assess and report the generalizability of machine learning models that use sensor data for LA, and tests it with two datasets containing audio and log data; the authors concluded that the framework does indeed help address the problem of generalizability. Finally, Horvers et al. [32] conducted a systematic literature review on how electrodermal activity (EDA), collected by sensors, is currently used in learning settings. They screened more than 1200 records to retain 27 studies, finding considerable variation in the usage of EDA and inconsistent associations between physiological arousal and learning outcomes. Given this variability, they concluded that explicit guidelines and standards for EDA processing in educational contexts are needed; the sketch below shows one typical processing step.
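One processing step that recurs across the studies reviewed in [32] is separating the slow tonic level from the phasic signal and counting skin conductance responses (SCRs). The sketch below is a rough illustration on a synthetic signal; the sampling rate, filter cutoff, and peak parameters are assumptions, and the review’s point is precisely that such choices vary widely across studies.

```python
# Rough EDA-processing sketch: tonic/phasic separation by low-pass
# filtering, then SCR counting via peak detection. Synthetic signal.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

sr = 32  # Hz, a typical wristband EDA sampling rate (assumption)
rng = np.random.default_rng(2)
t = np.arange(0, 120, 1 / sr)
eda = 2 + 0.3 * np.sin(t / 40)                       # slow tonic drift (uS)
for onset in (15, 48, 90):                           # three synthetic SCRs
    eda += 0.4 * np.exp(-((t - onset) ** 2) / 8) * (t > onset - 4)
eda += rng.normal(0, 0.01, t.size)                   # measurement noise

b, a = butter(2, 0.05 / (sr / 2), btype="low")       # 0.05 Hz low-pass
tonic = filtfilt(b, a, eda)                          # slow tonic component
phasic = eda - tonic                                 # fast phasic residual

peaks, _ = find_peaks(phasic, prominence=0.05, distance=sr)
print(f"detected {peaks.size} SCRs at t = {np.round(t[peaks], 1)} s")
```

Changing the cutoff or prominence alters the SCR count on the same signal, which is exactly the kind of analytic variability that motivates the call in [32] for shared processing standards.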
All articles in this SI are open access and available at the following link: https://www.mdpi.com/journal/sensors/special_issues/sdei (last accessed: 1 November 2022).

3. Conclusions and Future of Sensor-Based Technologies in Education

The SI gathered diverse papers demonstrating the affordances of using sensor data and sensor-based tools in educational settings. However, we also need to consider the potential caveats and drawbacks of this research area in order to mitigate the risks while optimizing the benefits. Although numerous studies are emerging in the literature, practitioners using these technologies in day-to-day teaching remain hard to find. It is likewise hard to find findings robust enough to identify underlying dependencies across the explored variables, or replication studies that establish which results are universal and which are context-dependent [10]. We need more resilient science in this context to help overcome these current limitations.
The future of sensors and multimodal innovations in education is promising, and there are certain directions where they can have a real positive impact: for example, improving the training of professionals who need to develop the complex cognitive abilities required for a role (such as stress regulation in high-pressure jobs [33]) or who must practice specific motor skills (such as nurses or doctors in healthcare education [5]). Moreover, these technologies are also promising for supporting remote teaching and distance learning with more realistic and immersive activities that closely mimic face-to-face embodied learning experiences while maintaining the flexibility of these learning modalities [34]. However, future work should also tackle some major challenges, including model transfer across contexts, ethical and equity concerns, scalability, and good alignment with instructional design, among many others.

Author Contributions

Conceptualization, J.A.R.-V., R.M.-M., D.D.M. and J.S.; writing—original draft preparation, J.A.R.-V., R.M.-M., D.D.M. and J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We appreciate all the authors who contributed articles to this Special Issue. Moreover, we would like to express our gratitude to the editorial office of Sensors for their cooperation in preparing this SI.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gomez, M.J.; Ruipérez-Valiente, J.A.; García Clemente, F.J. Analyzing Trends and Patterns Across the Educational Technology Communities Using Fontana Framework. IEEE Access 2022, 10, 35336–35351. [Google Scholar] [CrossRef]
  2. Kitto, K.; Buckingham Shum, S.; Gibson, A. Embracing imperfection in learning analytics. In Proceedings of the 8th International Conference on Learning Analytics and Knowledge, Sydney, NSW, Australia, 7–9 March 2018; pp. 451–460. [Google Scholar]
  3. Ochoa, X. Multimodal Learning Analytics-Rationale, Process, Examples, and Direction. In The Handbook of Learning Analytics, 2nd ed.; Lang, C., Siemens, G., Wise, A.F., Gasevic, D., Merceron, A., Eds.; SoLAR: Vancouver, BC, Canada, 2022; Section 6; pp. 54–65. [Google Scholar]
  4. Järvelä, S.; Gašević, D.; Seppänen, T.; Pechenizkiy, M.; Kirschner, P.A. Bridging learning sciences, machine learning and affective computing for understanding cognition and affect in collaborative learning. Br. J. Educ. Technol. 2020, 51, 2391–2406. [Google Scholar] [CrossRef]
  5. Echeverria, V.; Martinez-Maldonado, R.; Power, T.; Hayes, C.; Shum, S.B. Where Is the Nurse? Towards Automatically Visualising Meaningful Team Movement in Healthcare Education. In Proceedings of the Artificial Intelligence in Education, London, UK, 27–30 June 2018; Penstein Rosé, C., Martínez-Maldonado, R., Hoppe, H.U., Luckin, R., Mavrikis, M., Porayska-Pomsta, K., McLaren, B., du Boulay, B., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 74–78. [Google Scholar]
  6. Di Mitri, D.; Schneider, J.; Specht, M.; Drachsler, H. From signals to knowledge: A conceptual model for multimodal learning analytics. J. Comput. Assist. Learn. 2018, 34, 338–349. [Google Scholar] [CrossRef] [Green Version]
  7. Khan, N.; Muhammad, K.; Hussain, T.; Nasir, M.; Munsif, M.; Imran, A.S.; Sajjad, M. An Adaptive Game-Based Learning Strategy for Children Road Safety Education and Practice in Virtual Space. Sensors 2021, 21, 3661. [Google Scholar] [CrossRef] [PubMed]
  8. Guerrero-Osuna, H.A.; Luque-Vega, L.F.; Carlos-Mancilla, M.A.; Ornelas-Vargas, G.; Castañeda-Miranda, V.H.; Carrasco-Navarro, R. Implementation of a MEIoT Weather Station with Exogenous Disturbance Input. Sensors 2021, 21, 1653. [Google Scholar] [CrossRef]
  9. Castañeda-Miranda, V.H.; Luque-Vega, L.F.; Lopez-Neri, E.; Nava-Pintor, J.A.; Guerrero-Osuna, H.A.; Ornelas-Vargas, G. Two-Dimensional Cartesian Coordinate System Educational Toolkit: 2D-CACSET. Sensors 2021, 21, 6304. [Google Scholar] [CrossRef]
  10. Yan, L.; Zhao, L.; Gasevic, D.; Martinez-Maldonado, R. Scalability, Sustainability, and Ethicality of Multimodal Learning Analytics. In Proceedings of the LAK22: 12th International Learning Analytics and Knowledge Conference, Online, 21–25 March 2022; Association for Computing Machinery: New York, NY, USA, 2022. LAK22. pp. 13–23. [Google Scholar] [CrossRef]
  11. Worsley, M.; Martinez-Maldonado, R.; D’Angelo, C. A New Era in Multimodal Learning Analytics: Twelve Core Commitments to Ground and Grow MMLA. J. Learn. Anal. 2021, 8, 10–27. [Google Scholar] [CrossRef]
  12. Chng, E.; Seyam, M.R.; Yao, W.; Schneider, B. Toward capturing divergent collaboration in makerspaces using motion sensors. Inf. Learn. Sci. 2022, 123, 276–297. [Google Scholar] [CrossRef]
  13. Martinez-Maldonado, R.; Echeverria, V.; Fernandez Nieto, G.; Buckingham Shum, S. From data to insights: A layered storytelling approach for multimodal learning analytics. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–15. [Google Scholar]
  14. Dale, M.E.; Godley, A.J.; Capello, S.A.; Donnelly, P.J.; D’Mello, S.K.; Kelly, S.P. Toward the automated analysis of teacher talk in secondary ELA classrooms. Teach. Teach. Educ. 2022, 110, 103584. [Google Scholar] [CrossRef]
  15. Yan, L.; Martinez-Maldonado, R.; Zhao, L.; Deppeler, J.; Corrigan, D.; Gasevic, D. How Do Teachers Use Open Learning Spaces? Mapping from Teachers’ Socio-Spatial Data to Spatial Pedagogy. In Proceedings of the LAK22: 12th International Learning Analytics and Knowledge Conference, Online, 21–25 March 2022; pp. 87–97. [Google Scholar]
  16. Southwell, R.; Pugh, S.; Perkoff, E.M.; Clevenger, C.; Bush, J.B.; Lieber, R.; Ward, W.; Foltz, P.; D’Mello, S. Challenges and Feasibility of Automatic Speech Recognition for Modeling Student Collaborative Discourse in Classrooms. In Proceedings of the 15th International Conference on Educational Data Mining, Durham, UK, 24–27 July 2022; Volume 27, pp. 302–315. [Google Scholar]
  17. Di Mitri, D.; Schneider, J.; Drachsler, H. Keep Me in the Loop: Real-Time Feedback with Multimodal Data. Int. J. Artif. Intell. Educ. 2021, 32, 1–26. [Google Scholar] [CrossRef]
  18. Schneider, J.; Börner, D.; Van Rosmalen, P.; Specht, M. Presentation Trainer: What experts and computers can tell about your nonverbal communication. J. Comput. Assist. Learn. 2017, 33, 164–177. [Google Scholar] [CrossRef] [Green Version]
  19. Mangaroska, K.; Sharma, K.; Gašević, D.; Giannakos, M. Exploring students’ cognitive and affective states during problem solving through multimodal data: Lessons learned from a programming activity. J. Comput. Assist. Learn. 2022, 38, 40–59. [Google Scholar] [CrossRef]
  20. Alwahaby, H.; Cukurova, M.; Papamitsiou, Z.; Giannakos, M. The Evidence of Impact and Ethical Considerations of Multimodal Learning Analytics: A Systematic Literature Review. In The Multimodal Learning Analytics Handbook; Giannakos, M., Spikol, D., Di Mitri, D., Sharma, K., Ochoa, X., Hammad, R., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 289–325. [Google Scholar] [CrossRef]
  21. Donnelly, P.J.; Blanchard, N.; Samei, B.; Olney, A.M.; Sun, X.; Ward, B.; Kelly, S.; Nystrand, M.; D’Mello, S.K. Multi-Sensor Modeling of Teacher Instructional Segments in Live Classrooms. In Proceedings of the 18th ACM International Conference on Multimodal Interaction, Tokyo, Japan, 12–16 November 2016; Association for Computing Machinery: New York, NY, USA, 2016. ICMI ’16. pp. 177–184. [Google Scholar] [CrossRef]
  22. Mangaroska, K.; Martinez-Maldonado, R.; Vesin, B.; Gašević, D. Challenges and opportunities of multimodal data in human learning: The computer science students’ perspective. J. Comput. Assist. Learn. 2021, 37, 1030–1047. [Google Scholar] [CrossRef]
  23. Beardsley, M.; Martinez Moreno, J.; Vujovic, M.; Santos, P.; Hernández-Leo, D. Enhancing consent forms to support participant decision making in multimodal learning data research. Br. J. Educ. Technol. 2020, 51, 1631–1652. [Google Scholar] [CrossRef]
  24. Mat Sanusi, K.A.; Di Mitri, D.; Limbu, B.; Klemke, R. Table Tennis Tutor: Forehand Strokes Classification Based on Multimodal Data and Neural Networks. Sensors 2021, 21, 3121. [Google Scholar] [CrossRef]
  25. Ciordas-Hertel, G.P.; Rödling, S.; Schneider, J.; Di Mitri, D.; Weidlich, J.; Drachsler, H. Mobile Sensing with Smart Wearables of the Physical Context of Distance Learning Students to Consider Its Effects on Learning. Sensors 2021, 21, 6649. [Google Scholar] [CrossRef] [PubMed]
  26. Praharaj, S.; Scheffel, M.; Schmitz, M.; Specht, M.; Drachsler, H. Towards Automatic Collaboration Analytics for Group Speech Data Using Learning Analytics. Sensors 2021, 21, 3156. [Google Scholar] [CrossRef]
  27. Lee, H.H.; Chen, Z.L.; Yeh, S.L.; Hsiao, J.H.; Wu, A.Y.A. When Eyes Wander Around: Mind-Wandering as Revealed by Eye Movement Analysis with Hidden Markov Models. Sensors 2021, 21, 7569. [Google Scholar] [CrossRef]
  28. Brückner, S.; Schneider, J.; Zlatkin-Troitschanskaia, O.; Drachsler, H. Epistemic Network Analyses of Economics Students’ Graph Understanding: An Eye-Tracking Study. Sensors 2020, 20, 6908. [Google Scholar] [CrossRef]
  29. Vujovic, M.; Amarasinghe, I.; Hernández-Leo, D. Studying Collaboration Dynamics in Physical Learning Spaces: Considering the Temporal Perspective through Epistemic Network Analysis. Sensors 2021, 21, 2898. [Google Scholar] [CrossRef]
  30. Solé-Beteta, X.; Navarro, J.; Gajšek, B.; Guadagni, A.; Zaballos, A. A Data-Driven Approach to Quantify and Measure Students’ Engagement in Synchronous Virtual Learning Environments. Sensors 2022, 22, 3294. [Google Scholar] [CrossRef]
  31. Chejara, P.; Prieto, L.P.; Ruiz-Calleja, A.; Rodríguez-Triana, M.J.; Shankar, S.K.; Kasepalu, R. EFAR-MMLA: An Evaluation Framework to Assess and Report Generalizability of Machine Learning Models in MMLA. Sensors 2021, 21, 2863. [Google Scholar] [CrossRef] [PubMed]
  32. Horvers, A.; Tombeng, N.; Bosse, T.; Lazonder, A.W.; Molenaar, I. Detecting Emotions through Electrodermal Activity in Learning Contexts: A Systematic Review. Sensors 2021, 21, 7869. [Google Scholar] [CrossRef] [PubMed]
  33. Albaladejo-González, M.; Ruipérez-Valiente, J.A.; Gómez Mármol, F. Evaluating different configurations of machine learning models and their transfer learning capabilities for stress detection using heart rate. J. Ambient. Intell. Humaniz. Comput. 2022. [Google Scholar] [CrossRef]
  34. Di Mitri, D. Restoring Context in Online Teaching with Artificial Intelligence and Multimodal Learning Experiences. In Proceedings of the SITE Interactive Conference, Association for the Advancement of Computing in Education (AACE), Online, 28 February 2021; pp. 494–501. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
