Feature Papers of MTI in 2021

A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).

Deadline for manuscript submissions: closed (31 December 2021) | Viewed by 127104

Special Issue Editor

Special Issue Information

Dear Colleagues,

This Special Issue is open to high-quality papers invited by Editors-in-Chief, Editorial Board Members, or the Editorial Office. Both original research articles and comprehensive review papers are welcome. Contributions to this Special Issue will be published free of charge in open access format after peer review.

Prof. Dr. Cristina Portalés Ricart
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (24 papers)


Research

Jump to: Review

23 pages, 4394 KiB  
Article
Emotion Classification from Speech and Text in Videos Using a Multimodal Approach
by Maria Chiara Caschera, Patrizia Grifoni and Fernando Ferri
Multimodal Technol. Interact. 2022, 6(4), 28; https://doi.org/10.3390/mti6040028 - 12 Apr 2022
Cited by 15 | Viewed by 5746
Abstract
Emotion classification is a research area with an intensive body of literature spanning natural language processing, multimedia data, semantic knowledge discovery, social network mining, and text and multimedia data mining. This paper addresses the issue of emotion classification and proposes a method for classifying the emotions expressed in multimodal data extracted from videos. The proposed method models multimodal data as a sequence of features extracted from facial expressions, speech, gestures, and text, using a linguistic approach. Each sequence of multimodal data is associated with an emotion by a method that models each emotion using a hidden Markov model. The trained model is evaluated on samples of multimodal sentences associated with seven basic emotions. The experimental results demonstrate a good classification rate for emotions. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)
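
For readers who want to experiment with the classification scheme this abstract describes, the sketch below trains one hidden Markov model per emotion and labels a multimodal feature sequence by maximum log-likelihood. It is a minimal illustration in Python using the hmmlearn library; the seven emotion labels, the number of hidden states, and the feature layout are placeholder assumptions, not details of the authors' implementation.

```python
# Minimal sketch: one Gaussian HMM per emotion, classification by maximum
# log-likelihood over a multimodal feature sequence. The emotion labels,
# state count, and feature layout are placeholder assumptions.
import numpy as np
from hmmlearn import hmm

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

def train_models(train_data, n_states=4):
    """train_data maps emotion -> list of (T_i, D) feature sequences, where each
    row concatenates facial, speech, gesture, and text features for one step."""
    models = {}
    for emotion in EMOTIONS:
        seqs = train_data[emotion]
        X = np.vstack(seqs)                # hmmlearn expects stacked sequences
        lengths = [len(s) for s in seqs]   # plus the length of each sequence
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=0)
        m.fit(X, lengths)
        models[emotion] = m
    return models

def classify(models, seq):
    """Return the emotion whose model assigns the sequence the highest score."""
    return max(EMOTIONS, key=lambda e: models[e].score(seq))
```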

24 pages, 2636 KiB  
Article
Improving User Experience and Communication of Digitally Enhanced Advanced Services (DEAS) Offers in Manufacturing Sector
by Mohammed Soheeb Khan, Vassilis Charissis, Phil Godsiff, Zena Wood, Jannat F. Falah, Salsabeel F. M. Alfalah and David K. Harrison
Multimodal Technol. Interact. 2022, 6(3), 21; https://doi.org/10.3390/mti6030021 - 16 Mar 2022
Cited by 7 | Viewed by 3679
Abstract
Digitally enhanced advanced services (DEAS), currently offered by various industries, can be a challenging concept for potential clients to comprehend. This can result in limited interest in adopting DEAS, or in clients failing to understand its true value, with significant financial implications for the providers. Serious games and gamification provide innovative ways to present and simplify complex information, engaging users with intricate content in an enjoyable manner. Despite the use of serious games and gamification in other areas, only a few documented examples convey servitization offers. This research explores the design and development of a serious game for the Howden Group, a real-world industry partner, aiming to simplify and convey existing service agreement packages. The system was developed in consultation with a focus group comprising five members of the industrial partner. The final system was evaluated by 30 participants from engineering and servitization disciplines who volunteered to test the proposed system online and discuss their user experience (UX) and future application requirements. The analysis of users’ feedback presented encouraging results, with 90% confirming that they understood the DEAS concept and offers. To conclude, the paper presents a tentative plan for future work that will address the issues highlighted by users’ feedback and enhance the positive aspects of similar applications. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

29 pages, 26599 KiB  
Article
Detecting Groups and Estimating F-Formations for Social Human–Robot Interactions
by Sai Krishna Pathi, Andrey Kiselev and Amy Loutfi
Multimodal Technol. Interact. 2022, 6(3), 18; https://doi.org/10.3390/mti6030018 - 23 Feb 2022
Cited by 8 | Viewed by 4025
Abstract
The ability of a robot to detect and join groups of people is of increasing importance in social contexts, and for the collaboration between teams of humans and robots. In this paper, we propose a framework, autonomous group interactions for robots (AGIR), that endows a robot with the ability to detect such groups while following the principles of F-formations. Using on-board sensors, this method accounts for a wide spectrum of different robot systems, ranging from autonomous service robots to telepresence robots. The presented framework detects individuals, estimates their position and orientation, detects groups, determines their F-formations, and is able to suggest a position for the robot to enter the social group. For evaluation, two simulation scenes were developed based on standard real-world datasets. The first scene contains 20 virtual agents (VAs) interacting in 7 different groups of varying sizes and 3 different formations. The second scene contains 36 VAs, positioned in 13 different groups of varying sizes and 6 different formations. A model of a Pepper robot is used in both simulated scenes, at randomly generated positions. The robot’s ability to estimate orientation, detect groups, and estimate F-formations at various locations is used to validate the approach. The obtained results show high accuracy within each of the simulated scenarios and demonstrate that the framework is able to work from an egocentric view with a robot in real time. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)
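
The group-joining step this abstract outlines can be approximated geometrically: project each member's position along their facing direction to estimate the O-space centre of an F-formation, then place the robot at the widest angular gap on the circle. The sketch below illustrates this idea under simplified assumptions (a fixed stride distance, a circular O-space); it is not the AGIR framework's actual estimator.

```python
# Simplified O-space geometry: project each member a fixed stride along their
# facing direction, average the projections to estimate the O-space centre,
# and join the group at the widest angular gap on the circle. Illustrative
# assumptions only; not the AGIR framework's actual estimator.
import numpy as np

STRIDE = 0.8  # assumed distance (m) from a person to the shared O-space centre

def o_space_centre(positions, headings):
    """positions: (N, 2) metres; headings: (N,) yaw angles in radians."""
    offsets = np.stack([np.cos(headings), np.sin(headings)], axis=1)
    return (positions + STRIDE * offsets).mean(axis=0)

def joining_position(positions, headings):
    """Suggest a robot position that closes the largest gap in the formation."""
    centre = o_space_centre(positions, headings)
    rel = positions - centre
    angles = np.sort(np.arctan2(rel[:, 1], rel[:, 0]))
    gaps = np.diff(np.append(angles, angles[0] + 2 * np.pi))  # circular gaps
    i = int(np.argmax(gaps))
    a = angles[i] + gaps[i] / 2                # bisect the widest opening
    radius = np.linalg.norm(rel, axis=1).mean()
    return centre + radius * np.array([np.cos(a), np.sin(a)])
```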

19 pages, 1219 KiB  
Article
Key Ergonomics Requirements and Possible Mechanical Solutions for Augmented Reality Head-Mounted Displays in Surgery
by Renzo D’Amato, Fabrizio Cutolo, Giovanni Badiali, Marina Carbone, Hao Lu, Harm Hogenbirk and Vincenzo Ferrari
Multimodal Technol. Interact. 2022, 6(2), 15; https://doi.org/10.3390/mti6020015 - 10 Feb 2022
Cited by 5 | Viewed by 3989
Abstract
In the context of a European project, we identified over 150 requirements for the development of an augmented reality (AR) head-mounted display (HMD) specifically tailored to support highly challenging manual surgical procedures. The requirements were established by surgeons from different specialties and by industrial players working in the surgical field who had strong commitments to the exploitation of this technology. Some of these requirements were specific to the project, while others can be seen as key requirements for the implementation of an efficient and reliable AR headset to be used to support manual activities in the peripersonal space. The aim of this work is to describe these ergonomic requirements that impact the mechanical design of the HMDs, the possible innovative solutions to these requirements, and how these solutions have been used to implement the AR headset in surgical navigation. We also report the results of a preliminary qualitative evaluation of the AR headset by three surgeons. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

31 pages, 5112 KiB  
Article
Remote Dyslexia Screening for Bilingual Children
by Maren Eikerling, Matteo Secco, Gloria Marchesi, Maria Teresa Guasti, Francesco Vona, Franca Garzotto and Maria Luisa Lorusso
Multimodal Technol. Interact. 2022, 6(1), 7; https://doi.org/10.3390/mti6010007 - 13 Jan 2022
Cited by 7 | Viewed by 5399
Abstract
Ideally, language and reading skills in bilingual children are assessed in both languages spoken in order to avoid misdiagnoses of communication or learning disorders. Due to limited capacity of clinical and educational staff, computerized screenings that allow for automatic evaluation of the children’s performance on reading tasks (accuracy and speed) might pose a useful alternative in clinical and school settings. In this study, a novel web-based screening platform for language and reading assessment is presented. This tool has been preliminarily validated with monolingual Italian, Mandarin–Italian and English–Italian speaking primary school children living and schooled in Italy. Their performances in the screening tasks in Italian and—if bilingual—in their native language were compared to the results of standardized/conventional reading assessment tests as well as parental and teacher questionnaires. Correlations revealed the tasks that best contributed to the identification of risk for the presence of reading disorders and showed the general feasibility and usefulness of the computerized screening. In a further step, both screening administrators (Examiners) and child participants (Examinees) were invited to participate in usability studies, which revealed general satisfaction and provided suggestions for further improvement of the screening platform. Based on these findings, the potential of the novel web-based screening platform is discussed. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

15 pages, 1822 KiB  
Article
Trade-Off between Task Accuracy, Task Completion Time and Naturalness for Direct Object Manipulation in Virtual Reality
by Jari Kangas, Sriram Kishore Kumar, Helena Mehtonen, Jorma Järnstedt and Roope Raisamo
Multimodal Technol. Interact. 2022, 6(1), 6; https://doi.org/10.3390/mti6010006 - 10 Jan 2022
Cited by 16 | Viewed by 4063
Abstract
Virtual reality devices are used for several application domains, such as medicine, entertainment, marketing and training. A handheld controller is the common interaction method for direct object manipulation in virtual reality environments. Using hands would be a straightforward way to directly manipulate objects in the virtual environment if hand-tracking technology were reliable enough. In recent comparison studies, hand-based systems compared unfavorably against the handheld controllers in task completion times and accuracy. In our controlled study, we compare these two interaction techniques with a new hybrid interaction technique which combines the controller tracking with hand gestures for a rigid object manipulation task. The results demonstrate that the hybrid interaction technique is the most preferred because it is intuitive, easy to use, fast, reliable and it provides haptic feedback resembling the real-world object grab. This suggests that there is a trade-off between naturalness, task accuracy and task completion time when using these direct manipulation interaction techniques, and participants prefer to use interaction techniques that provide a balance between these three factors. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

22 pages, 1959 KiB  
Article
Comparing Map Learning between Touchscreen-Based Visual and Haptic Displays: A Behavioral Evaluation with Blind and Sighted Users
by Hari Prasath Palani, Paul D. S. Fink and Nicholas A. Giudice
Multimodal Technol. Interact. 2022, 6(1), 1; https://doi.org/10.3390/mti6010001 - 22 Dec 2021
Cited by 14 | Viewed by 3925
Abstract
The ubiquity of multimodal smart devices affords new opportunities for eyes-free applications for conveying graphical information to both sighted and visually impaired users. Using previously established haptic design guidelines for generic rendering of graphical content on touchscreen interfaces, the current study evaluates the learning and mental representation of digital maps, representing a key real-world translational eyes-free application. Two experiments involving 12 blind participants and 16 sighted participants compared cognitive map development and test performance on a range of spatio-behavioral tasks across three information-matched learning-mode conditions: (1) our prototype vibro-audio map (VAM), (2) traditional hardcopy-tactile maps, and (3) visual maps. Results demonstrated that when perceptual parameters of the stimuli were matched between modalities during haptic and visual map learning, test performance was highly similar (functionally equivalent) between the learning modes and participant groups. These results suggest equivalent cognitive map formation between both blind and sighted users and between maps learned from different sensory inputs, providing compelling evidence supporting the development of amodal spatial representations in the brain. The practical implications of these results include empirical evidence supporting a growing interest in the efficacy of multisensory interfaces as a primary interaction style for people both with and without vision. Findings challenge the long-held assumption that blind people exhibit deficits on global spatial tasks compared to their sighted peers, with results also providing empirical support for the methodological use of sighted participants in studies pertaining to technologies primarily aimed at supporting blind users. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

19 pages, 1419 KiB  
Article
How Can Autonomous Vehicles Convey Emotions to Pedestrians? A Review of Emotionally Expressive Non-Humanoid Robots
by Yiyuan Wang, Luke Hespanhol and Martin Tomitsch
Multimodal Technol. Interact. 2021, 5(12), 84; https://doi.org/10.3390/mti5120084 - 20 Dec 2021
Cited by 16 | Viewed by 5300
Abstract
In recent years, researchers and manufacturers have started to investigate ways to enable autonomous vehicles (AVs) to interact with nearby pedestrians to compensate for the absence of human drivers. The majority of these efforts focus on external human–machine interfaces (eHMIs), using different modalities, such as light patterns or on-road projections, to communicate the AV’s intent and awareness. In this paper, we investigate the potential role of affective interfaces to convey emotions via eHMIs. To date, little is known about the role that affective interfaces can play in supporting AV–pedestrian interaction. However, emotions have been employed in many smaller social robots, from domestic companions to outdoor aerial robots in the form of drones. To develop a foundation for affective AV–pedestrian interfaces, we reviewed the emotional expressions of non-humanoid robots in 25 articles published between 2011 and 2021. Based on findings from the review, we present a set of considerations for designing affective AV–pedestrian interfaces and highlight avenues for investigating these opportunities in future studies. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

17 pages, 1705 KiB  
Article
The Influence of Collaborative and Multi-Modal Mixed Reality: Cultural Learning in Virtual Heritage
by Mafkereseb Kassahun Bekele, Erik Champion, David A. McMeekin and Hafizur Rahaman
Multimodal Technol. Interact. 2021, 5(12), 79; https://doi.org/10.3390/mti5120079 - 5 Dec 2021
Cited by 15 | Viewed by 4624
Abstract
Studies in the virtual heritage (VH) domain identify collaboration (social interaction), engagement, and a contextual relationship as key elements of interaction design that influence users’ experience and cultural learning in VH applications. The purpose of this study is to validate whether collaboration (social interaction), an engaging experience, and a contextual relationship enhance cultural learning in a collaborative and multi-modal mixed reality (MR) heritage environment. To this end, we designed and implemented a cloud-based collaborative and multi-modal MR application aiming at enhancing user experience and cultural learning in museums. A conceptual model was proposed based on collaboration, engagement, and relationship in the context of the MR experience. The MR application was then evaluated at the Western Australian Shipwrecks Museum by experts, archaeologists, and curators from the gallery and the Western Australian Museum. Questionnaires, semi-structured interviews, and observation were used to collect data. The results suggest that integrating collaborative and multi-modal interaction methods with MR technology facilitates enhanced cultural learning in VH. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

38 pages, 2225 KiB  
Article
Identification of Usability Issues of Interactive Technologies in Cultural Heritage through Heuristic Evaluations and Usability Surveys
by Duyen Lam, Thuong Hoang and Atul Sajjanhar
Multimodal Technol. Interact. 2021, 5(12), 75; https://doi.org/10.3390/mti5120075 - 29 Nov 2021
Cited by 5 | Viewed by 4937
Abstract
Usability is a principal aspect of the system development process, used to improve and augment system facilities and meet users’ needs in all domains, and cultural heritage is no exception. Usability problems in the interactive technologies used by cultural heritage museums should be recognized thoroughly, from the viewpoints of both experts and users. This paper reports on a two-phase empirical study to identify usability problems in the audio guides and websites of cultural heritage museums in Vietnam, as a developing country, and Australia, as a developed country. In phase one, five user experience experts identified usability problems using a set of ten usability heuristics and proposed ways to mitigate them; in total, 176 problems were identified for the audio guides and websites. In phase two, we conducted field usability surveys to collect real users’ opinions, detect usability issues, and examine the negatively ranked usability aspects. The outstanding issues for audio guides and websites are highlighted. The identified usability issues, together with users’ and experts’ suggestions, deserve immediate attention to help organizations and interactive service providers improve the adoption of these technologies. The paper’s findings are reliable inputs for our future study on a UX framework for interactive technology in the cultural heritage (CH) domain. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

16 pages, 4021 KiB  
Article
Analysis of a Human-Machine Interface for Cooperative Truck Overtaking Maneuvers on Freeways: Increase Success Rate and Assess Driving Behavior during System Failures
by Jana Fank, Christian Knies and Frank Diermeyer
Multimodal Technol. Interact. 2021, 5(11), 69; https://doi.org/10.3390/mti5110069 - 2 Nov 2021
Cited by 5 | Viewed by 3480
Abstract
Cooperation between road users based on V2X communication has the potential to make road traffic safer and more efficient. The exchange of information enables the cooperative orchestration of critical traffic situations, such as truck overtaking maneuvers on freeways. With the benefit of such a system, questions arise concerning system failure or the abrupt and unexpected behavior of road users. A human-machine interface (HMI) organizes and negotiates the cooperation between drivers and maintains smooth interaction, trust, and system acceptance, even in the case of a possible system failure. A study was conducted with 30 truck drivers on a dynamic truck driving simulator to analyze the negotiation of cooperation requests and the reaction of truck drivers to potential system failures. The results show that an automated cooperation request does not translate into a significantly higher cooperation success rate. System failures in cooperative truck passing maneuvers are not considered critical by truck drivers in this simulated environment. The next step in the development process is to investigate how the success rate of truck overtaking maneuvers on freeways can be further increased as well as the implementation of the system in a real vehicle to investigate the reaction behavior of truck drivers in case of system failures in a real environment. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

16 pages, 767 KiB  
Article
Smartphone and the Self: Experimental Investigation of Self-Incorporation of and Attachment to Smartphones
by Marlene Gertz, Simone Schütz-Bosbach and Sarah Diefenbach
Multimodal Technol. Interact. 2021, 5(11), 67; https://doi.org/10.3390/mti5110067 - 27 Oct 2021
Cited by 4 | Viewed by 4651
Abstract
Smartphones are a constant companion in everyday life. Interacting with a smartphone calls for multimodal input and often leads to multisensory output. Combining research in human-computer interaction (HCI) and psychology, the present research explored the idea that a smartphone is more than a smart object: it is an object to which people feel emotionally attached and which is even perceived as a part or an extension of a person’s self. To this end, we used an established rubber hand illusion paradigm to experimentally induce body ownership experiences in young adults (n = 76) in a 4-level mixed-design study. Our results revealed that, in contrast to a neutral control object, participants indeed felt attached to a smartphone, perceived it as a part of themselves, and felt the need to interact with the device. This was especially pronounced when the smartphone’s hedonic characteristics were rated highly and when its use for social communication was highlighted during the experiment. Psychological mechanisms of the incorporation of technologies are discussed and connected to the positive and negative effects of smartphone usage on human behavior, as well as to implications for technology design and marketing. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

16 pages, 3346 KiB  
Article
User Behavior Adaptive AR Guidance for Wayfinding and Tasks Completion
by Camille Truong-Allié, Alexis Paljic, Alexis Roux and Martin Herbeth
Multimodal Technol. Interact. 2021, 5(11), 65; https://doi.org/10.3390/mti5110065 - 20 Oct 2021
Cited by 3 | Viewed by 3561
Abstract
Augmented reality (AR) is widely used to guide users when performing complex tasks, for example, in education or industry. Sometimes, these tasks are a succession of subtasks, possibly distant from each other. This can happen, for instance, in inspection operations, where AR devices can give instructions about subtasks to perform in several rooms. In this case, AR guidance is both needed to indicate where to head to perform the subtasks and to instruct the user about how to perform these subtasks. In this paper, we propose an approach based on user activity detection. An AR device displays the guidance for wayfinding when current user activity suggests it is needed. We designed the first prototype on a head-mounted display using a neural network for user activity detection and compared it with two other guidance temporality strategies, in terms of efficiency and user preferences. Our results show that the most efficient guidance temporality depends on user familiarity with the AR display. While our proposed guidance has not proven to be more efficient than the other two, our experiment hints toward several improvements of our prototype, which is a first step in the direction of efficient guidance for both wayfinding and complex task completion. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

26 pages, 14702 KiB  
Article
There is Always a Way: Organizing VR User Tests with Remote and Hybrid Setups during a Pandemic—Learnings from Five Case Studies
by Sanni Siltanen, Hanna Heinonen, Alisa Burova, Paulina Becerril Palma, Phong Truong, Viveka Opas and Markku Turunen
Multimodal Technol. Interact. 2021, 5(10), 62; https://doi.org/10.3390/mti5100062 - 11 Oct 2021
Cited by 9 | Viewed by 5608
Abstract
(1) COVID-19 pandemic restrictions caused a dramatic shift in research activities, forcing the adoption of remote practices and methods. Despite the known benefits of remote testing, there is limited knowledge on how to prepare and conduct such studies in the industrial context where the target users are experts and company employees. (2) In this article, we detail how we organized VR user tests in five industrial cases during the pandemic, focusing on practicalities and procedures. We cover both on-site testing, including disinfecting and other safety protocols, as well as remote and hybrid setups where both remote and on-site participants were involved. Subject matter experts from eight countries were involved in a total of 22 tests. (3) We share insights for VR user test arrangements relevant to the pandemic, remote and hybrid setups, and an industrial context, among others. (4) Our work confirms that with careful planning it is possible to organize user tests remotely. There are also some limitations in remote user testing, such as reduced visibility and interaction with participants. Most importantly, we list practical recommendations for organizing hybrid user tests with safety and disinfecting procedures for on-site VR use. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

27 pages, 528 KiB  
Article
Heart Rate Sharing at the Workplace
by Valtteri Wikström, Mari Falcon, Silja Martikainen, Jana Pejoska, Eva Durall, Merja Bauters and Katri Saarikivi
Multimodal Technol. Interact. 2021, 5(10), 60; https://doi.org/10.3390/mti5100060 - 8 Oct 2021
Cited by 3 | Viewed by 4752
Abstract
Augmenting online interpersonal communication with biosignals, often in the form of heart rate sharing, has shown promise in increasing affiliation, feelings of closeness, and intimacy. Increasing empathetic awareness in the professional domain and in the customer interface could benefit both customer and employee satisfaction, but heart rate sharing in this context needs to consider issues around the physiological monitoring of employees, the appropriate level of intimacy, and the productivity outlook. In this study, we explore heart rate sharing at the workplace and study its effects on task performance. Altogether, 124 participants completed a collaborative visual guidance task using a chat box with heart rate visualization. Participants’ feedback about heart rate sharing reveals themes such as a stronger sense of human contact and increased self-reflection, but also raises concerns around lack of necessity, intimacy, privacy, and negative interpretations. Live heart rate was always measured, but to investigate the effect of heart rate sharing on task performance, half of the customers were told that they were seeing a recording, and half were told that they were seeing the advisor’s live heartbeat. We found a negative link between awareness and task performance. We also found that higher ratings of the usefulness of the heart rate visualization were associated with increased feelings of closeness. These results reveal that intimacy and privacy issues are particularly important for heart rate sharing in professional contexts, that preference modulates the effects of heart rate sharing on social closeness, and that heart rate sharing may have a negative effect on performance. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

25 pages, 791 KiB  
Article
Human-Robot Interaction in Groups: Methodological and Research Practices
by Raquel Oliveira, Patrícia Arriaga and Ana Paiva
Multimodal Technol. Interact. 2021, 5(10), 59; https://doi.org/10.3390/mti5100059 - 30 Sep 2021
Cited by 16 | Viewed by 5091
Abstract
Understanding the behavioral dynamics that underlie human-robot interactions in groups remains one of the core challenges in social robotics research. However, despite a growing interest in this topic, there is still a lack of established and validated measures that allow researchers to analyze human-robot interactions in group scenarios, and very few have been developed and tested specifically for research conducted in the wild. This is a problem because it hinders the development of general models of human-robot interaction and makes the comprehension of the inner workings of the relational dynamics between humans and robots in group contexts significantly more difficult. In this paper, we aim to provide a reflection on the current state of research on human-robot interaction in small groups, as well as to outline directions for future research, with an emphasis on methodological and transversal issues. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

14 pages, 3125 KiB  
Article
Accuracy and Repeatability Tests on HoloLens 2 and HTC Vive
by Inês Soares, Ricardo B. Sousa, Marcelo Petry and António Paulo Moreira
Multimodal Technol. Interact. 2021, 5(8), 47; https://doi.org/10.3390/mti5080047 - 23 Aug 2021
Cited by 41 | Viewed by 8342
Abstract
Augmented and virtual reality have experienced rapid growth in recent years, but there is still no deep knowledge regarding their capabilities and the fields in which they could be explored. In that sense, this paper presents a study on the accuracy and repeatability of Microsoft’s HoloLens 2 (an augmented reality device) and the HTC Vive (a virtual reality device), using an OptiTrack system as ground truth. For the HoloLens 2, the method used was hand tracking, whereas for the HTC Vive the tracked object was the system’s hand controller. A series of tests in different scenarios and situations was performed to explore what could influence the measurements. The HTC Vive obtained results in the millimeter range, while the HoloLens 2 measurements were considerably less accurate (errors of around 2 cm). Although the difference may seem considerable, the fact that the HoloLens 2 was tracking the user’s hand rather than the system’s controller had a substantial impact. The results are considered a significant step for the ongoing project of developing a human–robot interface by demonstration for an industrial robot using extended reality, which, based on our data, shows great potential to succeed. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)
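
The two quantities reported in this study can be summarized with simple formulas: accuracy as the mean Euclidean error against the OptiTrack ground truth, and repeatability as the dispersion of repeated measurements about their centroid. The sketch below is a minimal illustration assuming point sets already aligned to a common coordinate frame; it is not the authors' exact evaluation protocol.

```python
# Accuracy as mean Euclidean error against ground truth; repeatability as RMS
# deviation of repeated measurements about their centroid. Assumes (N, 3)
# point arrays already expressed in a common, aligned coordinate frame.
import numpy as np

def accuracy(measured, ground_truth):
    """Mean Euclidean distance between paired measured and reference points."""
    return np.linalg.norm(measured - ground_truth, axis=1).mean()

def repeatability(repeats):
    """RMS distance of repeated measurements of one target from their mean."""
    centred = repeats - repeats.mean(axis=0)
    return np.sqrt((np.linalg.norm(centred, axis=1) ** 2).mean())
```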

19 pages, 2800 KiB  
Article
Multimodal Warnings in Remote Operation: The Case Study on Remote Driving
by Pekka Kallioniemi, Alisa Burova, John Mäkelä, Tuuli Keskinen, Kimmo Ronkainen, Ville Mäkelä, Jaakko Hakulinen and Markku Turunen
Multimodal Technol. Interact. 2021, 5(8), 44; https://doi.org/10.3390/mti5080044 - 12 Aug 2021
Cited by 4 | Viewed by 3674
Abstract
Developments in sensor technology, artificial intelligence, and network technologies such as 5G have made remote operation a valuable method of controlling various types of machinery, with the added benefit of providing access to hazardous environments. The major limitation of remote operation is the lack of proper sensory feedback from the machine, which negatively affects situational awareness and, consequently, may put remote operations at risk. This article explores how to improve situational awareness via multimodal feedback (visual, auditory, and haptic) and studies how such feedback can be used to communicate warnings to remote operators. To reach our goals, we conducted a controlled, within-subjects experiment with eight conditions and twenty-four participants on a simulated remote driving system. Additionally, we gathered further insights with a UX questionnaire and semi-structured interviews. The gathered data showed that the use of multimodal feedback positively affected situational awareness when driving remotely. Our findings indicate that the combination of added haptic and visual feedback was considered the best feedback combination for communicating the slipperiness of the road. We also found that the feeling of presence is an important and requested aspect of remote driving tasks, especially for those with more experience operating real heavy machinery. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

13 pages, 305 KiB  
Article
BookTubers as Multimodal Reading Influencers: An Analysis of Subscriber Interactions
by Rosabel Roig-Vila, Héctor Romero-Guerra and José Rovira-Collado
Multimodal Technol. Interact. 2021, 5(7), 39; https://doi.org/10.3390/mti5070039 - 16 Jul 2021
Cited by 6 | Viewed by 5516
Abstract
The objective of the study was to learn about the relationships between BookTubers and their subscribers by focusing on the comments left by viewers of audio-visual literary reviews. We also examined whether viewer-BookTuber relationships resulted in the promotion of reading. A mixed qualitative-quantitative methodology was followed, including a descriptive analysis of contents and a case study. The main tools used were MAXQDA to process the qualitative data and Excel to obtain the quantitative data. The sample was a non-random selection of four BookTuber channels, taking into account both their impact and gender balance (two female and two male BookTubers). The categorization was conducted based on Cultural Studies and Reception Aesthetics. A total of eight videos (four reviews and four Book Hauls) were selected, and 100 comments on each were analyzed, giving rise to four categories. The results indicated that, in terms of content decoding, close relationships were established among community members, both consumers and producers. In addition, message acceptance took place, and a certain relationship was found between the BookTubers’ work and the promotion of reading. BookTubers were therefore identified as multimodal influencers. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

13 pages, 1734 KiB  
Article
Comparison of Controller-Based Locomotion Techniques for Visual Observation in Virtual Reality
by Jussi Rantala, Jari Kangas, Olli Koskinen, Tomi Nukarinen and Roope Raisamo
Multimodal Technol. Interact. 2021, 5(7), 31; https://doi.org/10.3390/mti5070031 - 23 Jun 2021
Cited by 14 | Viewed by 6793
Abstract
Many virtual reality (VR) applications use teleport for locomotion. The non-continuous locomotion of teleport is suited for VR controllers and can minimize simulator sickness, but it can also reduce spatial awareness compared to continuous locomotion. Our aim was to create continuous, controller-based locomotion techniques that would support spatial awareness. We compared the new techniques, slider and grab, with teleport in a task where participants counted small visual targets in a VR environment. Task performance was assessed by asking participants to report how many visual targets they found. The results showed that slider and grab were significantly faster to use than teleport, and they did not cause significantly more simulator sickness than teleport. Moreover, the continuous techniques provided better spatial awareness than teleport. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)
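
As a rough illustration of the continuous "grab" technique described above, the sketch below moves the virtual-reality rig opposite to the controller's displacement while the grab button is held, as if the user were pulling the world past themselves. The class and update-loop interface are hypothetical placeholders, not the study's implementation.

```python
# Hypothetical "grab" locomotion loop: while the grab button is held, the rig
# translates opposite to the controller's displacement since the grab began.
# controller_pos is assumed to be expressed in the rig's local frame.
import numpy as np

class GrabLocomotion:
    def __init__(self):
        self.hand_anchor = None  # controller position when the grab began
        self.rig_anchor = None   # rig position when the grab began

    def update(self, grab_pressed, controller_pos, rig_pos):
        """Return the rig position for this frame."""
        if not grab_pressed:
            self.hand_anchor = self.rig_anchor = None  # grab released
            return np.asarray(rig_pos)
        if self.hand_anchor is None:                   # grab just started
            self.hand_anchor = np.asarray(controller_pos)
            self.rig_anchor = np.asarray(rig_pos)
            return np.asarray(rig_pos)
        delta = np.asarray(controller_pos) - self.hand_anchor
        return self.rig_anchor - delta                 # move opposite the hand
```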

16 pages, 7441 KiB  
Article
Haptic Actuation Plate for Multi-Layered In-Vehicle Control Panel
by Patrick Coe, Grigori Evreinov, Hasse Sinivaara, Arto Hippula and Roope Raisamo
Multimodal Technol. Interact. 2021, 5(5), 25; https://doi.org/10.3390/mti5050025 - 5 May 2021
Cited by 1 | Viewed by 4574
Abstract
High-fidelity localized feedback has the potential of providing new and unique levels of interaction with a given device. Achieving this in a cost-effective reproducible manner has been a challenge in modern technology. Past experiments have shown that by using the principles of constructive wave interference introduced by time offsets it is possible to achieve a position of increased vibration displacement at any given location. As new interface form factors increasingly incorporate curved surfaces, we now show that these same techniques can successfully be applied and mechanically coupled with a universal actuation plate. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)
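
The time-offset principle this abstract refers to can be stated compactly: delay each actuator so that all wavefronts arrive at the target point at the same instant, producing constructive interference (increased vibration displacement) there. The sketch below computes such delays from actuator-to-target distances; the wave speed is a placeholder assumption, not a measured property of the plate.

```python
# Per-actuator trigger delays for constructive interference at a target point:
# the farthest actuator fires first, so all wavefronts coincide at the target.
# WAVE_SPEED is a placeholder, not a measured property of the plate.
import numpy as np

WAVE_SPEED = 300.0  # assumed bending-wave propagation speed in the plate, m/s

def actuation_delays(actuator_xy, target_xy):
    """actuator_xy: (N, 2) positions in metres; returns delays in seconds."""
    d = np.linalg.norm(np.asarray(actuator_xy) - np.asarray(target_xy), axis=1)
    return (d.max() - d) / WAVE_SPEED  # zero delay for the farthest actuator
```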

Review

Jump to: Research

34 pages, 2227 KiB  
Review
Technologies for Multimodal Interaction in Extended Reality—A Scoping Review
by Ismo Rakkolainen, Ahmed Farooq, Jari Kangas, Jaakko Hakulinen, Jussi Rantala, Markku Turunen and Roope Raisamo
Multimodal Technol. Interact. 2021, 5(12), 81; https://doi.org/10.3390/mti5120081 - 10 Dec 2021
Cited by 33 | Viewed by 12133
Abstract
When designing extended reality (XR) applications, it is important to consider multimodal interaction techniques, which employ several human senses simultaneously. Multimodal interaction can transform how people communicate remotely, practice for tasks, entertain themselves, process information visualizations, and make decisions based on the provided information. This scoping review summarized recent advances in multimodal interaction technologies for head-mounted display-based (HMD) XR systems. Our purpose was to provide a succinct, yet clear, insightful, and structured overview of emerging, underused multimodal technologies beyond standard video and audio for XR interaction, and to find research gaps. The review aimed to help XR practitioners to apply multimodal interaction techniques and interaction researchers to direct future efforts towards relevant issues on multimodal XR. We conclude with our perspective on promising research avenues for multimodal interaction technologies. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

19 pages, 10476 KiB  
Review
An Overview of Olfactory Displays in Education and Training
by Miguel Angel Garcia-Ruiz, Bill Kapralos and Genaro Rebolledo-Mendez
Multimodal Technol. Interact. 2021, 5(10), 64; https://doi.org/10.3390/mti5100064 - 13 Oct 2021
Cited by 10 | Viewed by 6537
Abstract
This paper describes an overview of olfactory displays (human–computer interfaces that generate and diffuse an odor to a user to stimulate their sense of smell) that have been proposed and researched for supporting education and training. Past research has shown that olfaction (the sense of smell) can support memorization of information, stimulate information recall, and help immerse learners and trainees into educational virtual environments, as well as complement and/or supplement other human sensory channels for learning. This paper begins with an introduction to olfaction and olfactory displays, and a review of techniques for storing, generating and diffusing odors at the computer interface. The paper proceeds with a discussion on educational theories that support olfactory displays for education and training, and a literature review on olfactory displays that support learning and training. Finally, the paper summarizes the advantages and challenges regarding the development and application of olfactory displays for education and training. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)

20 pages, 347 KiB  
Review
In Search of Embodied Conversational and Explainable Agents for Health Behaviour Change and Adherence
by Amal Abdulrahman and Deborah Richards
Multimodal Technol. Interact. 2021, 5(9), 56; https://doi.org/10.3390/mti5090056 - 18 Sep 2021
Cited by 5 | Viewed by 4337
Abstract
Conversational agents offer promise as an alternative to costly and scarce access to human health providers. Particularly in the context of adherence to treatment advice and health behavior change, they can take on an ongoing coaching role to motivate and keep the health consumer on track. Because face-to-face communication and the establishment of a therapist-patient working alliance are recognized as the single biggest predictor of adherence, our review focuses on embodied conversational agents (ECAs) and their use in health and well-being interventions. The article also introduces ECAs that provide explanations of their recommendations, known as explainable agents (XAs), as a way to build trust and enhance the working alliance towards improved behavior change. Of particular promise is work in which XAs engage in conversation to learn about their user, personalize recommendations based on that knowledge, and tailor their explanations to the user’s beliefs and goals, thereby increasing relevance and motivation and addressing possible barriers to performing the healthy behavior. Full article
(This article belongs to the Special Issue Feature Papers of MTI in 2021)