User Interfaces to Pave the Way for Interaction with Tomorrow’s Vehicles

Special Issue Editors


Prof. Andreas Riener
Guest Editor
Human-Computer Interaction Group, Technische Hochschule Ingolstadt, 85049 Ingolstadt, Germany
Interests: user experience design; automotive user interfaces; human-computer interaction; intelligent user interfaces; AR/VR applications

Prof. Myounghoon Jeon
Guest Editor
Department of Industrial and Systems Engineering; Department of Computer Science (by courtesy), Virginia Tech, Blacksburg, VA 24061, USA
Interests: auditory displays; affective computing; automotive user interfaces; assistive robotics; aesthetic computing

Special Issue Information

Dear Colleagues,

Automotive user interfaces and automated vehicle technologies pose numerous challenges in supporting the diverse facets of user needs. These users range from inexperienced, thrill-seeking young novice drivers to elderly drivers, whose preferences are largely the opposite and who face age-related limitations. In the future, the driving task will increasingly be shared between the driver and the vehicle (SAE Level 3), or the driver will be pushed into a passive passenger role (SAE Level 4). In the long term, full automation (SAE Level 5) will require entirely new UI/UX concepts, as drivers will no longer need to control the vehicle, or will not be permitted to. Thus, we need to put effort into designing radically new automotive user interfaces that support drivers and passengers across automation levels and activities. Acceptance of these new technologies will, however, depend heavily on trust in the automated driving system (ADS) as well as on the successful communication of system state and black-box behavior (vehicle intentions). In this regard, the decision-making algorithm (including moral decisions) will play an important role. This Special Issue addresses these challenges and invites submissions related to, but not limited to, the following topics:

  • Transfer of control between driver and vehicle (back and forth, switching between levels)
  • Trust in technology and acceptance of assistance systems
  • Ways to enable effective office work in cars (reading, typing)
  • Driver state estimation (e.g., emotions, mind wandering, situation awareness, readiness to take over)
  • Exterior communication, e.g., concepts for vehicle-pedestrian interaction
  • Novel forms of communication (gestures, speech, brain-computer interfaces)
  • Artificial intelligence technologies in the car
  • In-vehicle AR/VR applications

Prof. Andreas Riener
Prof. Myounghoon Jeon
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Transfer of control between driver and vehicle (back and forth, switching between levels)
  • Trust in technology and acceptance of assistance systems
  • Ways to enable effective office work in cars (reading, typing)
  • Exterior communication, e.g., concepts for vehicle-pedestrian interaction
  • Novel forms of communication (gestures, speech, brain-computer interfaces)
  • Artificial intelligence technology in the car
  • In-vehicle AR/VR applications

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Editorial


5 pages, 171 KiB  
Editorial
Guest Editors’ Introduction: Multimodal Technologies and Interaction in the Era of Automated Driving
by Andreas Riener and Myounghoon Jeon
Multimodal Technol. Interact. 2019, 3(2), 41; https://doi.org/10.3390/mti3020041 - 12 Jun 2019
Viewed by 3775
Abstract
Recent advancements in automated vehicle technologies pose numerous opportunities and challenges to support the diverse facets of user needs [...] Full article

Research


20 pages, 2585 KiB  
Article
Towards a Taxonomy for In-Vehicle Interactions Using Wearable Smart Textiles: Insights from a User-Elicitation Study
by Vijayakumar Nanjappan, Rongkai Shi, Hai-Ning Liang, Kim King-Tong Lau, Yong Yue and Katie Atkinson
Multimodal Technol. Interact. 2019, 3(2), 33; https://doi.org/10.3390/mti3020033 - 9 May 2019
Cited by 13 | Viewed by 4540
Abstract
Textiles are a vital and indispensable part of the clothing we use daily. They are very flexible, often lightweight, and have a wide variety of applications. Today, with the rapid developments in small and flexible sensing materials, textiles can be enhanced and used as input devices for interactive systems. Clothing-based wearable interfaces are suitable for in-vehicle controls. They can combine various modalities to enable users to perform simple, natural, and efficient interactions while minimizing any negative effect on their driving. Clothing-based wearable in-vehicle interfaces, however, remain underexplored. As such, there is a lack of understanding of how to use textile-based input for in-vehicle controls. As a first step towards filling this gap, we conducted a user-elicitation study to involve users in the process of designing in-vehicle interactions via a fabric-based wearable device. We were able to distill a taxonomy of wrist and touch gestures for in-vehicle interactions using a fabric-based wrist interface in a simulated driving setup. Our results help drive forward the investigation of the design space of clothing-based wearable interfaces for in-vehicle secondary interactions. Full article

15 pages, 1097 KiB  
Article
Tell Them How They Did: Feedback on Operator Performance Helps Calibrate Perceived Ease of Use in Automated Driving
by Yannick Forster, Sebastian Hergeth, Frederik Naujoks, Josef Krems and Andreas Keinath
Multimodal Technol. Interact. 2019, 3(2), 29; https://doi.org/10.3390/mti3020029 - 29 Apr 2019
Cited by 6 | Viewed by 4126
Abstract
The development of automated driving will profit from an agreed-upon methodology to evaluate human–machine interfaces. The present study examines the role of feedback on interaction performance provided directly to participants when interacting with driving automation (i.e., perceived ease of use). In addition, the development of ratings itself over time and use case specificity were examined. In a driving simulator study, N = 55 participants completed several transitions between Society of Automotive Engineers (SAE) level 0, level 2, and level 3 automated driving. One half of the participants received feedback on their interaction performance immediately after each use case, while the other half did not. As expected, the results revealed that participants judged the interactions to become easier over time. However, a use case specificity was present, as transitions to L0 did not show effects over time. The role of feedback also depended on the respective use case. We observed more conservative evaluations when feedback was provided than when it was not. The present study supports the application of perceived ease of use as a diagnostic measure in interaction with automated driving. Evaluations of interfaces can benefit from supporting feedback to obtain more conservative results. Full article

19 pages, 4451 KiB  
Article
Improving Driver Emotions with Affective Strategies
by Michael Braun, Jonas Schubert, Bastian Pfleging and Florian Alt
Multimodal Technol. Interact. 2019, 3(1), 21; https://doi.org/10.3390/mti3010021 - 25 Mar 2019
Cited by 73 | Viewed by 9422
Abstract
Drivers in negative emotional states, such as anger or sadness, are prone to performing poorly at driving, decreasing overall road safety for all road users. Recent advances in affective computing, however, allow for the detection of such states and give us tools to tackle the associated problems within automotive user interfaces. We see potential in building a system that reacts to potentially dangerous driver states and influences the driver in order to drive more safely. We compare different interaction approaches for an affective automotive interface, namely Ambient Light, Visual Notification, a Voice Assistant, and an Empathic Assistant. Results of a simulator study with 60 participants (30 each with induced sadness/anger) indicate that an emotional voice assistant with the ability to empathize with the user is the most promising approach, as it improves negative states best and is rated most positively. Qualitative data also show that users prefer an empathic assistant but also resent potential paternalism. This leads us to suggest that digital assistants are a valuable platform for improving driver emotions in automotive environments and thereby enabling safer driving. Full article

11 pages, 1040 KiB  
Article
The Voice Makes the Car: Enhancing Autonomous Vehicle Perceptions and Adoption Intention through Voice Agent Gender and Style
by Sanguk Lee, Rabindra Ratan and Taiwoo Park
Multimodal Technol. Interact. 2019, 3(1), 20; https://doi.org/10.3390/mti3010020 - 21 Mar 2019
Cited by 37 | Viewed by 5664
Abstract
The present research explores how autonomous vehicle voice agent (AVVA) design influences autonomous vehicle passenger (AVP) intentions to adopt autonomous vehicles. An online experiment (N = 158) examined the role of gender stereotypes in response to an AVVA with respect to the technology acceptance model. The findings indicate that characteristics of the AVVA that are more consistent with the stereotypical expectation of the social role (informative male AVVA and social female AVVA) foster greater perceived ease of use (PEU) and perceived usefulness (PU) than inconsistent conditions (social male AVVA and informative female AVVA). The study offers theoretical implications regarding the technology acceptance model in the context of autonomous technologies as well as practical implications for the design of autonomous vehicle voice agents. Full article
