Multimodal Technol. Interact., Volume 5, Issue 12 (December 2021) – 13 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
19 pages, 1419 KiB  
Article
How Can Autonomous Vehicles Convey Emotions to Pedestrians? A Review of Emotionally Expressive Non-Humanoid Robots
by Yiyuan Wang, Luke Hespanhol and Martin Tomitsch
Multimodal Technol. Interact. 2021, 5(12), 84; https://doi.org/10.3390/mti5120084 - 20 Dec 2021
Cited by 16 | Viewed by 5282
Abstract
In recent years, researchers and manufacturers have started to investigate ways to enable autonomous vehicles (AVs) to interact with nearby pedestrians, compensating for the absence of human drivers. The majority of these efforts focus on external human–machine interfaces (eHMIs), using different modalities, such as light patterns or on-road projections, to communicate the AV’s intent and awareness. In this paper, we investigate the potential role of affective interfaces in conveying emotions via eHMIs. To date, little is known about the role that affective interfaces can play in supporting AV–pedestrian interaction. However, emotions have been employed in many smaller social robots, from domestic companions to outdoor aerial robots in the form of drones. To develop a foundation for affective AV–pedestrian interfaces, we reviewed the emotional expressions of non-humanoid robots in 25 articles published between 2011 and 2021. Based on the findings of the review, we present a set of considerations for designing affective AV–pedestrian interfaces and highlight avenues for investigating these opportunities in future studies.
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

19 pages, 6858 KiB  
Article
Music to My Ears: Developing Kanji Stroke Knowledge through an Educational Music Game
by Oleksandra G. Keehl and Edward F. Melcer
Multimodal Technol. Interact. 2021, 5(12), 83; https://doi.org/10.3390/mti5120083 - 17 Dec 2021
Cited by 2 | Viewed by 3311
Abstract
Millions of people worldwide are taking up foreign languages with logographic writing systems, such as Japanese or Chinese. Learning the thousands of characters necessary for literacy in those languages is a unique challenge for learners coming from alphabetic backgrounds, and sustaining motivation in the face of such a momentous task is a struggle for many students. Many games exist for this purpose, but few offer production memory practice such as writing, and the vast majority are thinly veiled flashcards. To address this gap, we created Radical Tunes, a musical kanji-writing game that combines production practice with musical mnemonics by assigning a melody to each element of a character. We chose to utilize music because it is a powerful tool that can enhance learning and memory. In this article, we explore whether incorporating melodies into a kanji learning game can positively affect the memorization of the stroke order/direction and overall shape of several Japanese characters, similar to the mnemonic effect of adding music to text. Specifically, we conducted two experimental studies, finding that (1) music improved immersion, an important factor related to learning; and (2) there was a positive correlation between melody presence and character production, particularly for more complex characters.
(This article belongs to the Special Issue Innovations in Game-Based Learning)

25 pages, 6030 KiB  
Article
An Interactive Information System That Supports an Augmented Reality Game in the Context of Game-Based Learning
by Maria Cristina Costa, Paulo Santos, João Manuel Patrício and António Manso
Multimodal Technol. Interact. 2021, 5(12), 82; https://doi.org/10.3390/mti5120082 - 15 Dec 2021
Cited by 12 | Viewed by 4085
Abstract
Mobile augmented reality applications are gaining prominence in education, but there is a need to design appropriate and enjoyable games for use in educational contexts such as classrooms. This paper presents an interactive information system designed to support the implementation of an augmented reality application in the context of game-based learning. PlanetarySystemGO includes a location-based mobile augmented reality game designed to promote learning about the celestial bodies and planetary systems of the Universe, and a web application that interacts with the mobile application. Besides face-to-face classes, this resource can also be used in online classes, which is very useful in situations of social isolation such as those caused by the COVID-19 pandemic. Furthermore, the inclusion of the web application, with a back office, in the information system makes it possible to include curricular content according to students’ grade level. Teachers are expected to use the information system to add the content they find appropriate to the grade level they teach, so it is crucial to provide professional development that enables them to use this resource. In this regard, a pilot study was conducted with teachers who participated in a STEM professional development programme to assess whether the system is appropriate for their use. Teachers found this resource relevant for motivating students to learn, and acknowledged that the web application facilitated the introduction of appropriate curricular content and was useful for assessing student performance during the game. However, teachers need support to implement these types of technologies, which are not familiar to them; this support can be provided through collaboration between researchers and teachers in their schools. It is concluded that, besides engaging students in learning about celestial bodies, the information system can be used by teachers to introduce appropriate curricular content and can be implemented in class.
(This article belongs to the Special Issue Innovations in Game-Based Learning)

34 pages, 2227 KiB  
Review
Technologies for Multimodal Interaction in Extended Reality—A Scoping Review
by Ismo Rakkolainen, Ahmed Farooq, Jari Kangas, Jaakko Hakulinen, Jussi Rantala, Markku Turunen and Roope Raisamo
Multimodal Technol. Interact. 2021, 5(12), 81; https://doi.org/10.3390/mti5120081 - 10 Dec 2021
Cited by 33 | Viewed by 11992
Abstract
When designing extended reality (XR) applications, it is important to consider multimodal interaction techniques, which employ several human senses simultaneously. Multimodal interaction can transform how people communicate remotely, practice for tasks, entertain themselves, process information visualizations, and make decisions based on the provided information. This scoping review summarizes recent advances in multimodal interaction technologies for head-mounted display (HMD)-based XR systems. Our purpose was to provide a succinct, yet clear, insightful, and structured overview of emerging, underused multimodal technologies beyond standard video and audio for XR interaction, and to identify research gaps. The review aims to help XR practitioners apply multimodal interaction techniques and interaction researchers direct future efforts towards relevant issues in multimodal XR. We conclude with our perspective on promising research avenues for multimodal interaction technologies.
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

16 pages, 1624 KiB  
Article
The Effect of Multiplayer Video Games on Incidental and Intentional L2 Vocabulary Learning: The Case of Among Us
by José Ramón Calvo-Ferrer and Jose Belda-Medina
Multimodal Technol. Interact. 2021, 5(12), 80; https://doi.org/10.3390/mti5120080 - 10 Dec 2021
Cited by 14 | Viewed by 8424
Abstract
Vocabulary learning has traditionally been considered central to second language learning. It may take place either intentionally, by means of deliberate attempts to commit factual information to memory, or incidentally, as a consequence of other cognitive processes involving comprehension. Video games, which have been extensively employed in educational contexts to understand lexical development in foreign languages, foster both exposure to and the production of authentic and meaning-focused vocabulary. An empirical study was conducted to explore the effect of playing an online multiplayer social deduction game (i.e., a game in which players attempt to uncover each other’s hidden role) on incidental and intentional second language (L2) vocabulary learning. Secondary school pre-intermediate English as a Foreign Language (EFL) students (n = 54) took a vocabulary pre-test that identified eight unknown words likely to appear in the video game Among Us. Then, students were randomly assigned to different groups of players and to different learning conditions: within each group, half of the players were given a list of phrases containing these target words, which they were encouraged to use meaningfully in the game through written interaction. In doing so, these students learnt some target words intentionally and provided contextualized incidental exposure to the other players. Students took a vocabulary test after two sessions of practice with the game to explore intentional and incidental L2 vocabulary learning gains. The pre- and post-tests suggested, among other results, that players who used new L2 words in the game Among Us retained more vocabulary than players who only encountered them, that intentionally used vocabulary triggered incidental vocabulary learning in other players, and that repetition had a positive effect on L2 vocabulary learning.
(This article belongs to the Special Issue Innovations in Game-Based Learning)

17 pages, 1705 KiB  
Article
The Influence of Collaborative and Multi-Modal Mixed Reality: Cultural Learning in Virtual Heritage
by Mafkereseb Kassahun Bekele, Erik Champion, David A. McMeekin and Hafizur Rahaman
Multimodal Technol. Interact. 2021, 5(12), 79; https://doi.org/10.3390/mti5120079 - 5 Dec 2021
Cited by 14 | Viewed by 4597
Abstract
Studies in the virtual heritage (VH) domain identify collaboration (social interaction), engagement, and a contextual relationship as key elements of interaction design that influence users’ experience and cultural learning in VH applications. The purpose of this study is to validate whether collaboration (social interaction), an engaging experience, and a contextual relationship enhance cultural learning in a collaborative and multi-modal mixed reality (MR) heritage environment. To this end, we designed and implemented a cloud-based, collaborative, and multi-modal MR application aimed at enhancing user experience and cultural learning in museums. A conceptual model was proposed based on collaboration, engagement, and contextual relationship in the context of the MR experience. The MR application was then evaluated at the Western Australian Shipwrecks Museum by experts, archaeologists, and curators from the gallery and the Western Australian Museum. Questionnaires, semi-structured interviews, and observation were used to collect data. The results suggest that integrating collaborative and multi-modal interaction methods with MR technology facilitates enhanced cultural learning in VH.
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

11 pages, 438 KiB  
Article
Dimension-Based Interactions with Virtual Assistants: A Co-Design Project with Design Fictions
by Hebitz C. H. Lau and Jeffrey C. F. Ho
Multimodal Technol. Interact. 2021, 5(12), 78; https://doi.org/10.3390/mti5120078 - 3 Dec 2021
Cited by 1 | Viewed by 2995
Abstract
This study presents a co-design project that invited participants with little or no background in artificial intelligence (AI) and machine learning (ML) to design their ideal virtual assistants (VAs) for everyday use. VAs are designed and function differently when integrated into people’s daily lives (e.g., voice-controlled VAs are designed to blend in based on their natural qualities). To further understand users’ ideas of their ideal VA designs, participants were invited to generate designs of personal VAs. However, end users may have unrealistic expectations of future technologies. Therefore, design fiction was adopted as a method of guiding the participants’ image of the future and carefully managing their realistic, as well as unrealistic, expectations of future technologies. The results suggest the need for a human–AI relationship based on controls along various dimensions (e.g., degree of vocalness and level of autonomy) instead of specific features. The design insights are discussed in detail. Additionally, the co-design process offers insights into how users can participate in AI/ML designs.
(This article belongs to the Special Issue AI for (and by) the People)

21 pages, 559 KiB  
Article
When Preschoolers Interact with an Educational Robot, Does Robot Feedback Influence Engagement?
by Mirjam de Haas, Paul Vogt and Emiel Krahmer
Multimodal Technol. Interact. 2021, 5(12), 77; https://doi.org/10.3390/mti5120077 - 1 Dec 2021
Cited by 4 | Viewed by 3672
Abstract
In this paper, we examine to what degree children aged 3–4 years engage with a task and with a social robot during a second-language tutoring lesson. We specifically investigated whether children’s task engagement and robot engagement were influenced by three different types of feedback from the robot: adult-like feedback, peer-like feedback, and no feedback. Additionally, we investigated the relation between children’s eye-gaze fixations and their task engagement and robot engagement. Fifty-eight Dutch children participated in an English counting task with a social robot and physical blocks. We found that, overall, children in the three conditions showed similar task engagement and robot engagement; however, within each condition, they showed large individual differences. Additionally, regression analyses revealed a relation between children’s eye-gaze direction and engagement. Our findings show that although eye gaze plays a significant role in measuring engagement and can be used to model children’s task engagement and robot engagement, it does not capture the full concept: engagement comprises more than just eye gaze.
(This article belongs to the Special Issue Intricacies of Child–Robot Interaction)

9 pages, 1011 KiB  
Article
Pitch It Right: Using Prosodic Entrainment to Improve Robot-Assisted Foreign Language Learning in School-Aged Children
by Bo Molenaar, Breixo Soliño Fernández, Alessandra Polimeno, Emilia Barakova and Aoju Chen
Multimodal Technol. Interact. 2021, 5(12), 76; https://doi.org/10.3390/mti5120076 - 30 Nov 2021
Cited by 4 | Viewed by 3440
Abstract
Robot-assisted language learning (RALL), in which social robots help both children and adults acquire a language, is a promising application and an increasingly widely studied area of child–robot interaction. By introducing prosodic entrainment, i.e., converging the robot’s pitch with that of the learner, the present study aimed to provide new insights into RALL as a facilitative method for interactive tutoring. It was hypothesized that pitch-level entrainment by a Nao robot during a word-learning task in a foreign language would result in increased learning in school-aged children. Contrary to the hypothesis, the results indicate that entrainment had no significant effect on participants’ learning. Research on the implementation of entrainment in the context of RALL is new, and this study highlights constraints in currently available voice-generation technologies as well as methodological limitations that should be taken into account in future research.
(This article belongs to the Special Issue Intricacies of Child–Robot Interaction)

38 pages, 2225 KiB  
Article
Identification of Usability Issues of Interactive Technologies in Cultural Heritage through Heuristic Evaluations and Usability Surveys
by Duyen Lam, Thuong Hoang and Atul Sajjanhar
Multimodal Technol. Interact. 2021, 5(12), 75; https://doi.org/10.3390/mti5120075 - 29 Nov 2021
Cited by 5 | Viewed by 4922
Abstract
Usability is a principal aspect of the system development process, serving to improve and augment system facilities and meet users’ needs in all domains, and cultural heritage is no exception. Usability problems with the interactive technologies used in cultural heritage museums should be understood thoroughly from the viewpoints of both experts and users. This paper reports on a two-phase empirical study to identify usability problems in the audio guides and websites of cultural heritage museums in Vietnam, as a developing country, and Australia, as a developed country. In phase one, five user experience experts identified usability problems using a set of ten usability heuristics and proposed suggestions to mitigate these issues, identifying a total of 176 problems across the audio guides and websites. In phase two, we conducted field usability surveys to collect real users’ opinions, detect usability issues, and examine the negatively rated usability aspects. The most prominent issues for the audio guides and websites are highlighted. The identified usability issues, together with the users’ and experts’ suggestions for these technologies, should be given immediate attention to help organizations and interactive service providers improve the adoption of these technologies. The paper’s findings are reliable inputs for our future study on a preeminent UX framework for interactive technology in the cultural heritage (CH) domain.
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

25 pages, 2182 KiB  
Article
Robot-Enhanced Language Learning for Children in Norwegian Day-Care Centers
by Till Halbach, Trenton Schulz, Wolfgang Leister and Ivar Solheim
Multimodal Technol. Interact. 2021, 5(12), 74; https://doi.org/10.3390/mti5120074 - 24 Nov 2021
Cited by 8 | Viewed by 3919
Abstract
In a case study, we transformed the existing learning program Language Shower, which is used in some Norwegian day-care centers in the Grorud district of Oslo municipality, into a digital solution: an app for smartphones or tablets, with the option of further enhancing the presentation with a NAO robot. The solution was tested in several iterations and in multiple day-care centers over several weeks. Measurements of the children’s progress across learning sessions indicated a positive impact of the program with the robot compared to the program without it. In situ observations and interviews with day-care center staff confirmed the solution’s many advantages but also revealed some important areas for improvement. In particular, the speech recognition needs to be more flexible and robust, and special measures have to be in place to handle children speaking simultaneously.
(This article belongs to the Special Issue Intricacies of Child–Robot Interaction)

19 pages, 1177 KiB  
Article
A Survey of Domain Knowledge Elicitation in Applied Machine Learning
by Daniel Kerrigan, Jessica Hullman and Enrico Bertini
Multimodal Technol. Interact. 2021, 5(12), 73; https://doi.org/10.3390/mti5120073 - 24 Nov 2021
Cited by 14 | Viewed by 6553
Abstract
Eliciting knowledge from domain experts can play an important role throughout the machine learning process, from correctly specifying the task to evaluating model results. However, knowledge elicitation is also fraught with challenges. In this work, we consider why and how machine learning researchers elicit knowledge from experts in the model development process. We develop a taxonomy that characterizes elicitation approaches according to the elicitation goal, elicitation target, elicitation process, and use of elicited knowledge. Using this taxonomy, we analyze the elicitation trends observed in 28 papers and identify opportunities for adding rigor to these elicitation approaches. We suggest future directions for research in elicitation for machine learning by highlighting avenues for further exploration and drawing on what we can learn from elicitation research in other fields.
(This article belongs to the Special Issue AI for (and by) the People)

20 pages, 294 KiB  
Article
Case Studies in Game-Based Complex Learning
by Josh Aaron Miller and Seth Cooper
Multimodal Technol. Interact. 2021, 5(12), 72; https://doi.org/10.3390/mti5120072 - 23 Nov 2021
Cited by 3 | Viewed by 4610
Abstract
Despite the prevalence of game-based learning (GBL), most applications of GBL focus on teaching routine skills that are easily teachable, drill-able, and testable. Much less work has examined complex cognitive skills such as computational thinking, and even fewer projects have demonstrated commercial or critical success with complex learning in game contexts. Yet, recent successes in the games industry provide examples of successful game-based complex learning. This article presents a series of case studies of those successes. We interviewed game designers Zach Gage and Jack Schlesinger, creators of Good Sudoku, and Zach Barth, creator of the Zachtronics games, using reflexive thematic analysis to thematize the findings. We additionally conducted a close play of Duolingo following Bizzocchi and Tanenbaum’s adaptation of close reading. Several insights emerge from these case studies, including the practice of game design as instructional design, the use of constructionist environments, the tensions between formal education and informal learning, and the importance of entrepreneurialism. Specific recommendations for GBL designers are provided.
(This article belongs to the Special Issue Innovations in Game-Based Learning)