Multimodal Technol. Interact., Volume 3, Issue 1 (March 2019) – 22 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
29 pages, 10276 KiB  
Article
Cartographic Visualization for Indoor Semantic Wayfinding
by Nikolaos Bakogiannis, Charalampos Gkonos and Lorenz Hurni
Multimodal Technol. Interact. 2019, 3(1), 22; https://doi.org/10.3390/mti3010022 - 26 Mar 2019
Cited by 1 | Viewed by 3606
Abstract
In recent years, pedestrian navigation assistance has been used by an increasing number of people to support wayfinding tasks. Especially in unfamiliar and complex indoor environments such as universities and hospitals, the importance of effective navigation assistance is apparent. This paper investigates the feasibility of the indoor landmark navigation model (ILNM), a method for generating landmark-based routing instructions, by combining it with indoor route maps and conducting a wayfinding experiment with human participants. Within this context, three cartographic visualization scenarios were designed and evaluated. Two of these scenarios implemented the ILNM algorithm, each representing the semantic navigation instructions in a different way: in the first scenario, the selected landmarks were visualized as pictograms, while in the second, an axonometric design philosophy was followed for depicting landmarks. The third scenario used the benchmark approach for indoor navigation, metric-based routing instructions. The experiment showed that implementing the ILNM was feasible and, more importantly, beneficial for participants’ navigation performance compared to the metric-based benchmark. Valuable results were also obtained concerning the most suitable cartographic approach for visualizing the selected landmarks when implementing the ILNM. Finally, our findings confirm that landmarks, presented not only within the routing instructions but also as cartographic representations on the route map itself, can significantly help users position themselves correctly within an unfamiliar environment and improve their navigation performance. Full article
(This article belongs to the Special Issue Interactive 3D Cartography)
19 pages, 4451 KiB  
Article
Improving Driver Emotions with Affective Strategies
by Michael Braun, Jonas Schubert, Bastian Pfleging and Florian Alt
Multimodal Technol. Interact. 2019, 3(1), 21; https://doi.org/10.3390/mti3010021 - 25 Mar 2019
Cited by 73 | Viewed by 9422
Abstract
Drivers in negative emotional states, such as anger or sadness, are prone to driving badly, decreasing overall road safety for all road users. Recent advances in affective computing, however, allow such states to be detected and give us tools to tackle the associated problems within automotive user interfaces. We see potential in a system that reacts to possibly dangerous driver states and influences the driver to drive more safely. We compare different interaction approaches for an affective automotive interface, namely Ambient Light, Visual Notification, a Voice Assistant, and an Empathic Assistant. Results of a simulator study with 60 participants (30 each with induced sadness/anger) indicate that an emotional voice assistant able to empathize with the user is the most promising approach, as it improves negative states best and is rated most positively. Qualitative data also show that users prefer an empathic assistant but resent potential paternalism. This leads us to suggest that digital assistants are a valuable platform for improving driver emotions in automotive environments and thereby enabling safer driving. Full article
11 pages, 1040 KiB  
Article
The Voice Makes the Car: Enhancing Autonomous Vehicle Perceptions and Adoption Intention through Voice Agent Gender and Style
by Sanguk Lee, Rabindra Ratan and Taiwoo Park
Multimodal Technol. Interact. 2019, 3(1), 20; https://doi.org/10.3390/mti3010020 - 21 Mar 2019
Cited by 37 | Viewed by 5664
Abstract
The present research explores how autonomous vehicle voice agent (AVVA) design influences autonomous vehicle passenger (AVP) intentions to adopt autonomous vehicles. An online experiment (N = 158) examined the role of gender stereotypes in response to an AVVA with respect to the technology acceptance model. The findings indicate that characteristics of the AVVA that are more consistent with the stereotypical expectation of the social role (informative male AVVA and social female AVVA) foster greater perceived ease of use (PEU) and perceived usefulness (PU) than inconsistent conditions (social male AVVA and informative female AVVA). The study offers theoretical implications regarding the technology acceptance model in the context of autonomous technologies as well as practical implications for the design of autonomous vehicle voice agents. Full article
23 pages, 1465 KiB  
Article
Guidance in Cinematic Virtual Reality-Taxonomy, Research Status and Challenges
by Sylvia Rothe, Daniel Buschek and Heinrich Hußmann
Multimodal Technol. Interact. 2019, 3(1), 19; https://doi.org/10.3390/mti3010019 - 19 Mar 2019
Cited by 81 | Viewed by 9024
Abstract
In Cinematic Virtual Reality (CVR), the viewer of an omnidirectional movie can freely choose the viewing direction while watching. Traditional filmmaking techniques for guiding the viewer’s attention therefore cannot be adapted directly to CVR. Practices such as panning or changing the frame are no longer defined by the filmmaker; rather, it is the viewer who decides where to look. In some stories, certain details must be shown to the viewer and should not be missed. At the same time, the viewer’s freedom to look around the scene should not be taken away. Techniques are therefore needed that guide the spectator’s attention to visual information in the scene. Attention guiding also has the potential to improve the general viewing experience, since viewers will be less afraid of missing something when watching an omnidirectional movie to which attention-guiding techniques have been applied. In recent years, there has been considerable research on attention guiding in images, movies, virtual reality, augmented reality, and CVR. We classify these methods and offer a taxonomy of attention-guiding methods. Discussing their different characteristics, we elaborate on the advantages and disadvantages, give recommendations for use cases, and apply the taxonomy to several examples of guiding methods. Full article
6 pages, 308 KiB  
Brief Report
ECG Monitoring during End of Life Care: Implications on Alarm Fatigue
by Sukardi Suba, Cass Piper Sandoval, Xiao Hu and Michele M. Pelter
Multimodal Technol. Interact. 2019, 3(1), 18; https://doi.org/10.3390/mti3010018 - 13 Mar 2019
Cited by 4 | Viewed by 6141
Abstract
Excessive numbers of clinical alarms in the intensive care unit (ICU) contribute to alarm fatigue. Efforts to eliminate unnecessary alarms, including during end of life (EOL) care, are pivotal. This study describes electrocardiographic (ECG) arrhythmia alarm usage following the decision for comfort care. We reviewed electronic health records (EHR) of patients who died and had comfort care orders in place during our study, and examined the occurrence of ECG arrhythmia alarms among these patients. We found 151 arrhythmia alarms generated in 11 patients after comfort care was initiated: 72% were audible, 21% were manually muted, and 7% had an unknown audio label. By level of alarm, 33% were crisis, 58% warning, 1% message, and 8% labeled unknown. Our report shows that ECG monitoring was commonly maintained during EOL care. Since the goal of care during this phase is both patient and family comfort, it is important for clinicians to weigh the benefits versus the harms of continuous ECG monitoring. Full article
(This article belongs to the Special Issue Multimodal Medical Alarms)
24 pages, 3207 KiB  
Article
Using Sensory Wearable Devices to Navigate the City: Effectiveness and User Experience in Older Pedestrians
by Angélique Montuwy, Béatrice Cahour and Aurélie Dommes
Multimodal Technol. Interact. 2019, 3(1), 17; https://doi.org/10.3390/mti3010017 - 12 Mar 2019
Cited by 15 | Viewed by 5532
Abstract
Preserving older pedestrians’ navigation skills in urban environments is a challenge for maintaining their quality of life. However, the maps older pedestrians usually use may be ill-suited to their specific needs, and existing digital aids do not consider older people’s perceptual and cognitive declines or their user experience. This study presents a rich description of the navigation experience of older pedestrians using either a visual (augmented reality glasses), auditory (bone conduction headphones), or visual and haptic (smartwatch) wearable device adapted to age-related declines. These wearable devices are compared to the navigation aid older people usually rely on when navigating the city (their own digital or paper map). The study, with 18 participants, measured navigation performance and captured detailed descriptions of the users’ experience through interviews. We highlight three main phenomena that affect the quality of the user experience with the four aids: (1) shifts in attention over time, (2) understanding of the situation over time, and (3) the emergence of affective and aesthetic feelings over time. These findings add a new understanding of the specificities of older people’s navigation experience and are discussed in terms of design recommendations for navigation devices. Full article
(This article belongs to the Special Issue Interactive Assistive Technology)
19 pages, 4893 KiB  
Article
Let’s Play a Game! Kin-LDD: A Tool for Assisting in the Diagnosis of Children with Learning Difficulties
by Eleni Chatzidaki, Michalis Xenos and Charikleia Machaira
Multimodal Technol. Interact. 2019, 3(1), 16; https://doi.org/10.3390/mti3010016 - 11 Mar 2019
Cited by 1 | Viewed by 4694
Abstract
This paper presents an alternative approach to the diagnosis of learning difficulties in children. A game-based evaluation study, using Kinaesthetic Learning Difficulties Diagnosis (Kin-LDD), was performed during the actual diagnostic procedure for the identification of learning difficulties. Kin-LDD is a serious game that provides a gesture-based interface and incorporates spatial and time orientation activities. These activities assess children’s cognitive attributes while they use their motor skills to interact with the game. The aim of this work was to introduce a fun element to the diagnostic process, provide a useful tool for special educators, and investigate potential correlations between in-game metrics and the diagnosis outcome. An experiment was conducted in which 30 children played the game during their official assessment for the diagnosis of learning difficulties at the Center for Differential Diagnosis, Diagnosis and Support. Performance metrics were collected automatically while the children played the game. These metrics, along with child-appropriate questionnaires and post-session interviews, were later analyzed, and the findings are presented in the paper. According to the results: (a) children evaluated the game as a fun experience, (b) special educators found it helpful to the diagnostic procedure, and (c) there were statistically significant correlations between in-game metrics and the category of learning difficulty with which the child was characterized. Full article
(This article belongs to the Special Issue Digital Health Applications of Ubiquitous HCI Research)
29 pages, 36483 KiB  
Article
To Beep or Not to Beep? Evaluating Modalities for Multimodal ICU Alarms
by Vanessa Cobus and Wilko Heuten
Multimodal Technol. Interact. 2019, 3(1), 15; https://doi.org/10.3390/mti3010015 - 9 Mar 2019
Cited by 19 | Viewed by 6630
Abstract
Technology plays a prominent role in intensive care units (ICUs), with a variety of sensors monitoring both patients and devices. A serious problem exists, however, that can reduce the sensors’ effectiveness. When important values exceed or fall below a certain threshold or sensors lose their signal, up to 350 alarms per patient per day are issued. These frequent alarms are audible in several locations in the ICU, resulting in a massive cognitive load for ICU nurses, as they must evaluate and acknowledge each alarm. “Alarm fatigue” sets in: a desensitization to alarms and a delayed response to them that can have severe consequences for patients and nurses. To counteract the acoustic load in ICUs, we designed and evaluated personal multimodal alarms for a wearable alarm system (WAS). The result was a lower response time and higher ratings on suitability and feasibility, as well as a lower annoyance level, compared to acoustic alarms. We find that multimodal alarms are a promising new approach to alert ICU nurses, reduce cognitive load, and avoid alarm fatigue. Full article
(This article belongs to the Special Issue Multimodal Medical Alarms)
12 pages, 3760 KiB  
Article
CheckMates, Helping Nurses Plan Ahead in the Neonatal Intensive Care Unit
by Jesper van Bentum, Deedee Kommers, Saskia Bakker, Miguel Cabral Guerra, Carola van Pul and Peter Andriessen
Multimodal Technol. Interact. 2019, 3(1), 14; https://doi.org/10.3390/mti3010014 - 9 Mar 2019
Cited by 1 | Viewed by 3430
Abstract
Workflow in a neonatal intensive care unit (NICU) is relatively unpredictable, which makes it difficult to plan activities. Simple tasks, such as checking device statuses, may be forgotten, resulting in disturbing alarms. In this paper, we present CheckMates, ambient lighting displays that visualize device statuses to give nurses a better overview. We performed expert reviews to obtain insights into the potential of CheckMates. Additionally, we performed a simulation study to gather user experiences regarding how CheckMates function and their capacity to improve planning in an NICU environment. The results showed a variety of potential benefits for improving nurses’ overview of device statuses and their opportunities for workflow planning. Furthermore, CheckMates did not appear to be distracting. Full article
(This article belongs to the Special Issue Multimodal Medical Alarms)
18 pages, 2713 KiB  
Article
Exploring How Interactive Technology Enhances Gesture-Based Expression and Engagement: A Design Study
by Shichao Zhao
Multimodal Technol. Interact. 2019, 3(1), 13; https://doi.org/10.3390/mti3010013 - 27 Feb 2019
Cited by 7 | Viewed by 4729
Abstract
The interpretation and understanding of physical gestures play a significant role in various forms of art. Interactive technology and digital devices offer a plethora of opportunities for personal gesture-based experiences and assist in the creation of collaborative artwork. In this study, three prototypes for use with different digital devices (digital camera, PC camera, and Kinect) were designed. Subsequently, a series of workshops and in-depth interviews were conducted with participants from different cultural and occupational backgrounds, designed to explore how to design personalised gesture-based expressions and how to engage participants’ creativity in their gesture-based experiences. The findings indicated that, in terms of gesture-based interaction, the participants preferred to engage with visual traces displayed at specific timings in multi-experience spaces. Their gesture-based interactions could effectively support non-verbal emotional expression. In addition, the participants were strongly inclined to weave their personal stories and emotions into their own gesture-based artworks, and, drawing on their different cultural and occupational backgrounds, their artistic creations could form spontaneously. Full article
(This article belongs to the Special Issue Embodied and Spatial Interaction)
20 pages, 3267 KiB  
Article
Hungry Cat—A Serious Game for Conveying Spatial Information to the Visually Impaired
by Carmen Chai, Bee Theng Lau and Zheng Pan
Multimodal Technol. Interact. 2019, 3(1), 12; https://doi.org/10.3390/mti3010012 - 27 Feb 2019
Cited by 12 | Viewed by 4160
Abstract
Navigation relies on obtaining spatial information from the environment and forming a spatial map of it. The visually impaired depend mainly on orientation and mobility training by a certified specialist to acquire spatial navigation skills, but such training is manpower intensive and costly. This research designed and developed a serious game, Hungry Cat, which can convey the spatial information of virtual rooms to children with visual impairment through game playing. An evaluation with 30 visually impaired participants was conducted by allowing them to explore each virtual room in Hungry Cat. After exploration, the food finding test, a game mode available in Hungry Cat, was conducted, followed by a physical wire net test to evaluate their ability to form spatial mental maps of the virtual rooms. The positive results of the evaluation demonstrate the ability of Hungry Cat to convey spatial information about virtual rooms and to aid the development of spatial mental maps of these rooms through game playing. Full article
(This article belongs to the Special Issue Interactive Assistive Technology)
8 pages, 1139 KiB  
Article
Reducing Redundant Alarms in the Pediatric ICU
by Maya Dewan, Lindsay Cipriani, Jacqueline Boyer, Julie Stark, Brandy Seger and Ken Tegtmeyer
Multimodal Technol. Interact. 2019, 3(1), 11; https://doi.org/10.3390/mti3010011 - 23 Feb 2019
Cited by 5 | Viewed by 4183
Abstract
Physiologic monitors generate alarms to alert clinicians to signs of instability. However, these monitors also create alarm fatigue that places patients at risk. Redundant alarms have contributed to alarm fatigue without improving patient safety. In this study, our specific aim was to decrease the median percentage of redundant alarms by 50% within 6 months using the Model for Improvement. Our primary outcome was to lower the percentage of redundant alarms. We used the overall alarm rate per patient per day and code blue events as balancing metrics. We completed three Plan-Do-Study-Act cycles and generated run charts using standard industry criteria to determine the special cause. Ultimately, we decreased redundant alarms from a baseline of 6.4% of all alarms to 1.8%, surpassing our aim of a 50% reduction. Our overall alarm rate, one of our balancing metrics, decreased from 137 alarms/patient day to 118 alarms/patient day during the intervention period. No code blue events were determined to be related to incorrect setting of alarms. Decreasing redundant alarms is safe and feasible. Following a reduction in redundant alarms, more intensive alarm reduction methods are needed to continue to reduce alarm fatigue while keeping patients safe. Full article
(This article belongs to the Special Issue Multimodal Medical Alarms)
15 pages, 7936 KiB  
Article
Virtual Reality in Cartography: Immersive 3D Visualization of the Arctic Clyde Inlet (Canada) Using Digital Elevation Models and Bathymetric Data
by Mona Lütjens, Thomas P. Kersten, Boris Dorschel and Felix Tschirschwitz
Multimodal Technol. Interact. 2019, 3(1), 9; https://doi.org/10.3390/mti3010009 - 20 Feb 2019
Cited by 56 | Viewed by 8825
Abstract
Due to rapid technological development, virtual reality (VR) is becoming an accessible and important tool for many applications in science, industry, and the economy. Being immersed in a 3D environment offers numerous advantages, especially for the presentation of geographical data that is usually depicted in 2D maps or pseudo-3D models on a monitor screen. This study investigated the advantages, limitations, and possible applications of immersive and intuitive 3D terrain visualizations in VR. Additionally, in view of ever-increasing data volumes, this study developed a workflow to present large-scale terrain datasets in VR on current mid-range computers. The developed immersive VR application depicts the Arctic fjord Clyde Inlet across its 160 km × 80 km extent at 5 m spatial resolution. Techniques such as level-of-detail algorithms, tiling, and level streaming were applied to run the dataset, which exceeds one gigabyte, at an acceptable frame rate. The immersive VR application offered the possibility to explore the terrain with or without the water surface using various modes of locomotion. Terrain textures could also be altered and measurements conducted to obtain the information needed for further terrain analysis. The potential of VR was assessed in a user survey of persons from six different professions. Full article
(This article belongs to the Special Issue Interactive 3D Cartography)
17 pages, 279 KiB  
Article
Education and Attachment: Guidelines to Prevent School Failure
by Rosa Maria de Castro and Dora Isabel Fialho Pereira
Multimodal Technol. Interact. 2019, 3(1), 10; https://doi.org/10.3390/mti3010010 - 20 Feb 2019
Cited by 12 | Viewed by 7800
Abstract
Portuguese schools have high student failure and early school leaving rates (Pordata, 2017), giving rise to a number of initiatives aimed at their reduction. The “Alternative Curricular Course” (ACC) promotes the learning of basic skills, specifically in Portuguese language and Mathematics, to support logical reasoning and artistic, vocational, and professional development. Its main goal is the fulfilment of compulsory schooling and the reduction of academic failure. Research based on attachment theory (Bowlby, 1969) suggests that different internal working models of attachment are associated with different characteristics of social, academic, emotional, and behavioural competencies that may affect the quality of the relationships young people establish in school, especially with teachers, and also influence their academic performance. This study evaluates the relationship between students’ internal working models, their perceptions of the quality of their relationships with teachers, and their academic performance using three measures: (i) the “Inventory of Attachment in Childhood and Adolescence” (IACA), (ii) the “Inventory of Parent and Peer Attachment” (IPPA), adapted to concern attachment to teachers, and (iii) a socio-demographic questionnaire, applied to a sample of 305 students from the 8th grade of regular education (RE) and the ACC. The results reveal that students in the ACC exhibit a less secure internal working model than students in RE, and that the perception of the quality of the student-teacher relationship, regarding the dimension of acceptance and understanding by the teachers, is associated with better academic performance. These results align with those of other recent studies in support of the conclusion that the process of attachment has a significant influence in educational contexts, consistent with attachment and related theories. Full article
17 pages, 5565 KiB  
Article
Participatory Research Principles in Human-Centered Design: Engaging Teens in the Co-Design of a Social Robot
by Elin A. Björling and Emma Rose
Multimodal Technol. Interact. 2019, 3(1), 8; https://doi.org/10.3390/mti3010008 - 10 Feb 2019
Cited by 69 | Viewed by 9336
Abstract
Social robots are emerging as an important intervention for a variety of vulnerable populations. However, engaging participants in the design of social robots in a way that is ethical, meaningful, and rigorous can be challenging. Many current methods in human–robot interaction rely on laboratory practices, often experimental and frequently involving deception, which could erode trust in vulnerable populations. In this paper, we therefore share our human-centered design methodology informed by a participatory approach, drawing on three years of data from a project that aimed to design and develop a social robot to improve the mental health of teens. We present three method cases from the project that describe creative and age-appropriate methods for gathering contextually valid data from a teen population. Specific techniques include design research, scenario and script writing, prototyping, and teens as operators and collaborative actors. In each case, we describe the method and its implementation and discuss its potential strengths and limitations. We conclude by situating these methods within a set of recommended participatory research principles that may be appropriate for designing new technologies with vulnerable populations. Full article
(This article belongs to the Special Issue New Directions in User-Centered Interaction Design)
19 pages, 3079 KiB  
Article
Affective Communication between ECAs and Users in Collaborative Virtual Environments: The REVERIE European Parliament Use Case
by Ioannis Doumanis and Daphne Economou
Multimodal Technol. Interact. 2019, 3(1), 7; https://doi.org/10.3390/mti3010007 - 10 Feb 2019
Cited by 1 | Viewed by 3320
Abstract
This paper discusses the enactment and evaluation of Embodied Conversational Agents (ECAs) capable of affective communication in Collaborative Virtual Environments (CVEs) for learning. The CVE discussed is a reconstruction of the European Parliament in Brussels developed using the REVERIE (Real and Virtual Engagement In Realistic Immersive Environment) framework. REVERIE is designed to support the creation of CVEs populated by ECAs capable of natural human-like behaviour, physical interaction, and engagement. The ECA provides a tour of the virtual parliament and participates in the learning activity as an intervention mechanism to engage students. The ECA is capable of immediacy behaviour (verbal and non-verbal) and interactions that support a dialogic learning scenario. The design of the ECA is grounded in a theoretical framework that addresses the characteristics the ECA requires to successfully support collaborative learning. In this paper, we discuss the heuristic evaluation of the REVERIE ECA, which revealed a wealth of usability problems and led to a list of design recommendations to improve its usability, including its immediacy behaviours and interactions. An ECA capable of effectively creating rapport should result in more positive experiences for participants and better learning outcomes for students in dialogic learning scenarios. Future work aims to evaluate this hypothesis in real-world scenarios with teachers and students participating in a shared virtual educational experience. Full article
(This article belongs to the Special Issue Virtual, Augmented and Mixed Reality in Improving Education)
24 pages, 3066 KiB  
Article
A Survey of Assistive Technologies for Assessment and Rehabilitation of Motor Impairments in Multiple Sclerosis
by Akilesh Rajavenkatanarayanan, Varun Kanal, Konstantinos Tsiakas, Diane Calderon, Michalis Papakostas, Maher Abujelala, Marnim Galib, James C. Ford, Glenn Wylie and Fillia Makedon
Multimodal Technol. Interact. 2019, 3(1), 6; https://doi.org/10.3390/mti3010006 - 5 Feb 2019
Cited by 16 | Viewed by 7399
Abstract
Multiple sclerosis (MS) is a disease that affects the central nervous system, which consists of the brain and spinal cord. Although this condition cannot be cured, proper treatment of persons with MS (PwMS) can help control and manage the relapses of several symptoms. In this survey article, we focus on the different technologies used for the assessment and rehabilitation of motor impairments in PwMS. We discuss sensor-based and robot-based solutions for monitoring, assessment, and rehabilitation. Among MS symptoms, fatigue is one of the most disabling features, since PwMS may need to put significantly more intense effort toward achieving simple everyday tasks. While fatigue is a common symptom across several chronic neurological diseases, it remains poorly understood for various reasons, including subjectivity and variability among individuals. To this end, we also investigate recent methods for fatigue detection and monitoring. This survey provides both clinicians and researchers with valuable information on assessment and rehabilitation technologies for PwMS, as well as insights regarding fatigue and its effect on performance in daily activities for PwMS. Full article
(This article belongs to the Special Issue Interactive Assistive Technology)
11 pages, 2218 KiB  
Article
A Low-Cost Prototype for Driver Fatigue Detection
by Tiago Meireles and Fábio Dantas
Multimodal Technol. Interact. 2019, 3(1), 5; https://doi.org/10.3390/mti3010005 - 2 Feb 2019
Cited by 8 | Viewed by 6472
Abstract
Driver fatigue and inattention account for up to 20% of all traffic accidents; therefore, any system that can warn the driver whenever fatigue occurs is useful. Several systems have been devised to detect driver fatigue symptoms, such as measuring physiological parameters, which can be uncomfortable, or using a video or infrared camera pointed at the driver’s face, which, in some cases, may raise privacy concerns for the driver. These systems are usually expensive, so a brief discussion of low-cost fatigue detection systems is presented, followed by a proposal for a non-intrusive, low-cost prototype that aims to detect driver fatigue symptoms. The prototype consists of several sensors that monitor driver physical parameters and vehicle behaviour, with a total system price close to 30 euros. The prototype is discussed and compared with similar systems, pointing out its strengths and weaknesses. Full article
10 pages, 443 KiB  
Perspective
Improving Human–Computer Interface Design through Application of Basic Research on Audiovisual Integration and Amplitude Envelope
by Sharmila Sreetharan and Michael Schutz
Multimodal Technol. Interact. 2019, 3(1), 4; https://doi.org/10.3390/mti3010004 - 22 Jan 2019
Cited by 11 | Viewed by 4214
Abstract
Quality care for patients requires effective communication amongst medical teams. Increasingly, communication is required not only between team members themselves, but between members and the medical devices monitoring and managing patient well-being. Most human–computer interfaces use either auditory or visual displays, and despite significant experimentation, they still elicit well-documented concerns. Curiously, few interfaces explore the benefits of multimodal communication, despite extensive documentation of the brain’s sensitivity to multimodal signals. New approaches built on insights from basic audiovisual integration research hold the potential to improve future human–computer interfaces. In particular, recent discoveries regarding the acoustic property of amplitude envelope illustrate that it can enhance audiovisual integration while also lowering annoyance. Here, we share key insights from recent research with the potential to inform applications related to human–computer interface design. Ultimately, this could lead to a cost-effective way to improve communication in medical contexts, with significant implications for both human health and the burgeoning medical device industry. Full article
(This article belongs to the Special Issue Multimodal Medical Alarms)
4 pages, 146 KiB  
Editorial
Acknowledgement to Reviewers of MTI in 2018
by MTI Editorial Office
Multimodal Technol. Interact. 2019, 3(1), 3; https://doi.org/10.3390/mti3010003 - 9 Jan 2019
Viewed by 2038
Abstract
Rigorous peer review is the cornerstone of high-quality academic publishing [...] Full article
16 pages, 325 KiB  
Article
Living and Working in a Multisensory World: From Basic Neuroscience to the Hospital
by Kendall Burdick, Madison Courtney, Mark T. Wallace, Sarah H. Baum Miller and Joseph J. Schlesinger
Multimodal Technol. Interact. 2019, 3(1), 2; https://doi.org/10.3390/mti3010002 - 8 Jan 2019
Cited by 7 | Viewed by 4643
Abstract
The intensive care unit (ICU) of a hospital is an environment subjected to ceaseless noise. Patient alarms contribute to the saturated auditory environment and often overwhelm healthcare providers with constant and false alarms. This may lead to alarm fatigue and prevent optimum patient care. In response, a multisensory alarm system developed with consideration for human neuroscience and basic music theory is proposed as a potential solution. The integration of auditory, visual, and other sensory output within an alarm system can be used to convey more meaningful clinical information about patient vital signs in the ICU and operating room to ultimately improve patient outcomes. Full article
(This article belongs to the Special Issue Multimodal Medical Alarms)
23 pages, 3742 KiB  
Article
Embodied Engagement with Narrative: A Design Framework for Presenting Cultural Heritage Artifacts
by Jean Ho Chu and Ali Mazalek
Multimodal Technol. Interact. 2019, 3(1), 1; https://doi.org/10.3390/mti3010001 - 2 Jan 2019
Cited by 20 | Viewed by 7101
Abstract
An increasing number of museum exhibits incorporate multi-modal technologies and interactions; yet these media divert visitors’ attention away from the cultural heritage artifacts on display. This paper proposes an overarching conceptual structure for designing tangible and embodied narrative interaction with cultural heritage artifacts within a museum exhibit so that visitors can interact with them to comprehend their cultural context. The Tangible and Embodied Narrative Framework (TENF) consists of three spectra (diegetic vs. non-diegetic, internal vs. external, and ontological vs. exploratory) and, considering how different interactions map along these three spectra, can guide designers in the way they integrate digital media, narrative, and embodiment. In this paper, we examine interactive narrative scholarship, existing frameworks for tangible and embodied interactions, and tangible and embodied narrative projects. We then describe the design of the TENF and its application to the pilot project, Mapping Place, and to the case study project, Multi-Sensory Prayer Nuts. The findings indicate that embodied engagement with artifacts through a narrative role can help visitors (1) contextualize the meaning of artifacts and (2) make personalized connections to the artifacts. Based on this work, we suggest design recommendations for tailoring the use of the TENF in the cultural heritage domain: simulate cultural practices, associate visitors with cultural perspectives, and provide simultaneous digital feedback. We conclude by describing future directions for the research, which include generating other possible projects using the TENF; collaborating with other designers and museum professionals; and exploring applications of the TENF in museum spaces. Full article
(This article belongs to the Special Issue Embodied and Spatial Interaction)