Towards a Taxonomy for In-Vehicle Interactions Using Wearable Smart Textiles: Insights from a User-Elicitation Study
Abstract
1. Introduction
2. Related Work
2.1. Participatory In-Vehicle Elicitation Studies
2.2. Wrist-Worn Wearable Interfaces
2.3. Summary
3. Methodology
Fabric-based Wrist Interface
4. Co-Design Study
4.1. Participants
4.2. Simulated Elicitation Setup
4.3. Commands
4.4. Procedure
4.4.1. Introduction and Driving Practice
4.4.2. Pre-Elicitation
4.4.3. Elicitation
4.4.4. Semi-Structured Interview
5. Results
5.1. Classification of In-Vehicle Wrist Gestures
5.2. Consensus between the Drivers
5.3. Effects of Driving Experience on Agreement Rate
5.4. Wrist Movements and Touch Gestures
5.5. In-Vehicle Consensus Gesture Set
5.6. Taxonomy of Wrist and Touch Gestures
- Complexity (Figure 5a) identifies a proposed gesture as either (a) simple or (b) complex. Simple gestures are performed with a single action, either a wrist movement or a touch. For example, moving the wrist downwards toward the palm (downward flexion) and/or tapping a soft foam button on the steering wheel are identified as simple gestures. Complex gestures combine two distinct gestures, e.g., tapping any one of the buttons followed by moving the wrist downwards toward the palm. We adopted this dimension from Reference [43].
- Locale (Figure 5b) indicates where inside the vehicle the wrist and touch gestures were performed: (a) on the steering wheel, (b) off the steering wheel, and (c) on the gear lever. We adopted and modified this measure from Reference [44]. For example, mid-air gestures were performed immediately off the steering wheel and also on top of the gear lever; similarly, touch gestures were performed both on the steering wheel and on the gear lever.
- Structure (Figure 6) distinguishes the relative importance of the wrist and touch gestures in the elicited in-vehicle gestures, with five categories: (a) wrist, (b) touch (bottom button), (c) touch (side button), (d) touch (bottom button) and wrist, and (e) touch (side button) and wrist. For example, in the touch (bottom button) category, the tap or hold gesture was performed using the bottom button. The touch (bottom button) and wrist category includes any wrist gesture performed after either tapping or holding the bottom button. We modified this category from the taxonomy of Vatavu and Pentiuc [45].
- Action (Figure 7) classifies the gestures by their physical action rather than their semantic meaning, with six categories: (a) scroll, (b) swipe, (c) circle, (d) tap, (e) hold, and (f) compound. We adopted and modified this classification from Chan et al. [36], who used these dimensions to describe user-designed single-hand microgestures without a specific application domain. For example, downward flexion and upward extension were grouped as scrolls, while leftward flexion and rightward extension were grouped as swipes.
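For readers building gesture-logging or analysis tooling, the four dimensions above can be encoded as a small data model. The following Python sketch is illustrative only; the type and member names are ours, not taken from the study's software:

```python
from dataclasses import dataclass
from enum import Enum

class Complexity(Enum):
    SIMPLE = "simple"      # one action: a wrist movement or a touch
    COMPLEX = "complex"    # two distinct gestures combined

class Locale(Enum):
    ON_STEERING_WHEEL = "on steering wheel"
    OFF_STEERING_WHEEL = "off steering wheel"
    ON_GEAR_LEVER = "on gear lever"

class Structure(Enum):
    WRIST = "wrist"
    TOUCH_BOTTOM = "touch (bottom button)"
    TOUCH_SIDE = "touch (side button)"
    TOUCH_BOTTOM_AND_WRIST = "touch (bottom button) and wrist"
    TOUCH_SIDE_AND_WRIST = "touch (side button) and wrist"

class Action(Enum):
    SCROLL = "scroll"
    SWIPE = "swipe"
    CIRCLE = "circle"
    TAP = "tap"
    HOLD = "hold"
    COMPOUND = "compound"

@dataclass
class ElicitedGesture:
    """One elicited proposal, coded on the four taxonomy dimensions."""
    complexity: Complexity
    locale: Locale
    structure: Structure
    action: Action

# Example coding: downward flexion performed on the steering wheel.
g = ElicitedGesture(Complexity.SIMPLE, Locale.ON_STEERING_WHEEL,
                    Structure.WRIST, Action.SCROLL)
print(g.action.value)  # → scroll
```

Coding each proposal against all four dimensions in this way makes the frequency counts behind Figures 5–7 straightforward to tabulate.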
5.7. Participants’ Feedback
6. Discussion
6.1. Design Recommendations for Fabric-Based Wrist Interfaces
6.1.1. Simple Gestures Were Preferred over Complex
6.1.2. Reflect Legacy Inspired “Simple Taps” for State Toggles
6.1.3. Users Prefer “Simple Wrist Gestures” for Directional Pairs
6.1.4. Consider Similar Gestures for Similar Commands
6.1.5. Design That Leverages the Synergy between Gestures and In-Vehicle Locations
6.1.6. Favor Stretchable Fabric with Fingerless Thumb-Hole Design
6.1.7. Consider Side Button for Gesture Delimiter
6.2. Limitations
7. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Pfleging, B.; Rang, M.; Broy, N. Investigating user needs for non-driving-related activities during automated driving. In Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia, MUM ’16, Rovaniemi, Finland, 13–15 December 2016; pp. 91–99.
- May, K.R.; Gable, T.M.; Walker, B.N. A multimodal air gesture interface for in vehicle menu navigation. In Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Seattle, WA, USA, 17–19 September 2014; pp. 1–6.
- Tsimhoni, O.; Green, P. Visual demand of driving and the execution of display-intensive in-vehicle tasks. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2001, 45, 1586–1590.
- Normark, C.J.; Tretten, P.; Gärling, A. Do redundant head-up and head-down display configurations cause distractions? In Proceedings of the Fifth International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, Big Sky, MT, USA, 22–25 June 2009; pp. 398–404.
- González, I.E.; Wobbrock, J.O.; Chau, D.H.; Faulring, A.; Myers, B.A. Eyes on the road, hands on the wheel: thumb-based interaction techniques for input on steering wheels. In Proceedings of the Graphics Interface 2007, Montreal, QC, Canada, 28–30 May 2007; pp. 95–102.
- Bach, K.M.; Jæger, M.G.; Skov, M.B.; Thomassen, N.G. You can touch, but you can’t look: Interacting with In-Vehicle Systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Florence, Italy, 5–10 April 2008; p. 1139.
- Döring, T.; Kern, D.; Marshall, P.; Pfeiffer, M.; Schöning, J.; Gruhn, V.; Schmidt, A. Gestural interaction on the steering wheel – Reducing the visual demand. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, 7–12 May 2011; p. 483.
- Koyama, S.; Sugiura, Y.; Ogata, M.; Withana, A.; Uema, Y.; Honda, M.; Yoshizu, S.; Sannomiya, C.; Nawa, K.; Inami, M. Multi-touch steering wheel for in-car tertiary applications using infrared sensors. In Proceedings of the 5th Augmented Human International Conference, Kobe, Japan, 7–9 March 2014; pp. 1–4.
- Pfeiffer, M.; Kern, D.; Schöning, J.; Döring, T.; Krüger, A.; Schmidt, A. A multi-touch enabled steering wheel – Exploring the design space. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, 7–12 May 2011; pp. 3355–3360.
- Werner, S. The steering wheel as a touch interface: using thumb-based gesture interfaces as control inputs while driving. In Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI ’14, Seattle, WA, USA, 17–19 September 2014; pp. 9–12.
- Hessan, J.F.; Zancanaro, M.; Kavakli, M.; Billinghurst, M. Towards Optimization of Mid-air Gestures for In-vehicle Interactions. In Proceedings of the 29th Australian Conference on Computer-Human Interaction, Brisbane, Australia, 28 November–1 December 2017; pp. 126–134.
- Riener, A.; Wintersberger, P. Natural, intuitive finger based input as substitution for traditional vehicle control. In Proceedings of the 3rd International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Salzburg, Austria, 30 November–2 December 2011; p. 159.
- Pfleging, B.; Schneegass, S.; Schmidt, A. Multimodal interaction in the car – Combining speech and gestures on the steering wheel. In Proceedings of the 4th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Portsmouth, NH, USA, 17–19 October 2012; p. 155.
- Stoppa, M.; Chiolerio, A. Wearable electronics and smart textiles: A critical review. Sensors 2014, 14, 11957–11992.
- Parzer, P.; Sharma, A.; Vogl, A.; Steimle, J.; Olwal, A.; Haller, M. SmartSleeve: Real-time sensing of surface and deformation gestures on flexible, interactive textiles, using a hybrid gesture detection pipeline. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, Quebec City, QC, Canada, 22–25 October 2017; pp. 565–577.
- Schneegass, S.; Voit, A. GestureSleeve: Using touch sensitive fabrics for gestural input on the forearm for controlling smartwatches. In Proceedings of the 2016 ACM International Symposium on Wearable Computers, Heidelberg, Germany, 12–16 September 2016; pp. 108–115.
- Yoon, S.H.; Huo, K.; Nguyen, V.P.; Ramani, K. TIMMi: Finger-worn textile input device with multimodal sensing in mobile interaction. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction, Stanford, CA, USA, 16–19 January 2015; pp. 269–272.
- Strohmeier, P.; Knibbe, J.; Boring, S.; Hornbæk, K. zPatch: Hybrid Resistive/Capacitive eTextile Input. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction, Stanford, CA, USA, 16–19 January 2015; pp. 188–198.
- Yoon, S.H.; Huo, K.; Ramani, K. Plex: Finger-worn textile sensor for mobile interaction during activities. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Seattle, WA, USA, 13–17 September 2014; pp. 191–194.
- Endres, C.; Schwartz, T.; Müller, C. Geremin’: 2D microgestures for drivers based on electric field sensing. In Proceedings of the 16th International Conference on Intelligent User Interfaces, Palo Alto, CA, USA, 13–16 February 2011; pp. 327–330.
- Riener, A. Gestural interaction in vehicular applications. Computer 2012, 45, 42–47.
- Angelini, L.; Carrino, F.; Carrino, S.; Caon, M.; Khaled, O.A.; Baumgartner, J.; Sonderegger, A.; Lalanne, D.; Mugellini, E. Gesturing on the steering wheel: A user-elicited taxonomy. In Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Seattle, WA, USA, 17–19 September 2014; pp. 1–8.
- Huber, J.; Sheik-Nainar, M.; Matic, N. Force-enabled touch input on the steering wheel: An elicitation study. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct, AutomotiveUI ’17, Oldenburg, Germany, 24–27 September 2017; pp. 168–172.
- May, K.R.; Gable, T.M.; Walker, B.N. Designing an in-vehicle air gesture set using elicitation methods. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI ’17, Oldenburg, Germany, 24–27 September 2017; pp. 74–83.
- Riener, A.; Ferscha, A.; Bachmair, F.; Hagmüller, P.; Lemme, A.; Muttenthaler, D.; Pühringer, D.; Rogner, H.; Tappe, A.; Weger, F. Standardization of the in-car gesture interaction space. In Proceedings of the 5th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI ’13, Eindhoven, The Netherlands, 27–30 October 2013; pp. 14–21.
- Horswill, M.S.; McKenna, F.P. The effect of interference on dynamic risk-taking judgments. Br. J. Psychol. 1999, 90, 189–199.
- Wigdor, D.; Balakrishnan, R. TiltText: Using tilt for text input to mobile phones. In Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology, Vancouver, BC, Canada, 2–5 November 2003; pp. 81–90.
- Gong, J.; Yang, X.-D.; Irani, P. WristWhirl: One-handed continuous smartwatch input using wrist gestures. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, Tokyo, Japan, 16–19 October 2016; pp. 861–872.
- Cheung, V.; Eady, A.K.; Girouard, A. Exploring Eyes-free Interaction with Wrist-Worn Deformable Materials. In Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction, Yokohama, Japan, 20–23 March 2017; pp. 521–528.
- Lopes, P.; Ion, A.; Mueller, W.; Hoffmann, D.; Jonell, P.; Baudisch, P. Proprioceptive interaction. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI ’15, Seoul, Korea, 18–23 April 2015; pp. 939–948.
- Crossan, A.; Williamson, J.; Brewster, S.; Murray-Smith, R. Wrist rotation for interaction in mobile contexts. In Proceedings of the 10th International Conference on Human Computer Interaction with Mobile Devices and Services, Amsterdam, The Netherlands, 2–5 September 2008; p. 435.
- Strohmeier, P.; Vertegaal, R.; Girouard, A. With a flick of the wrist: Stretch sensors as lightweight input for mobile devices. In Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction, Kingston, ON, Canada, 19–22 February 2012; p. 307.
- Iwasaki, S.; Sakaguchi, S.; Abe, M.; Matsushita, M. Cloth switch: Configurable touch switch wearable device made with cloth. In Proceedings of the SIGGRAPH Asia 2015 Posters, Kobe, Japan, 2–6 November 2015; p. 22.
- Green, P. Visual and Task Demands of Driver Information Systems; UMTRI Technical Report 98-16; The University of Michigan Transportation Research Institute: Ann Arbor, MI, USA, June 1999; p. 120.
- Pakanen, M.; Lappalainen, T.; Roinesalo, P.; Häkkilä, J. Exploring smart handbag concepts through co-design. In Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia, MUM ’16, Rovaniemi, Finland, 12–15 December 2016; pp. 37–48.
- Chan, E.; Seyed, T.; Stuerzlinger, W.; Yang, X.-D.; Maurer, F. User elicitation on single-hand microgestures. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, 7–12 May 2016; pp. 3403–3414.
- Gheran, B.-F.; Vanderdonckt, J.; Vatavu, R.-D. Gestures for smart rings: Empirical results, insights, and design implications. In Proceedings of the 2018 Designing Interactive Systems Conference, Hong Kong, China, 9–13 June 2018; pp. 623–635.
- Morris, M.R.; Danielescu, A.; Drucker, S.; Fisher, D.; Lee, B.; Schraefel, M.C.; Wobbrock, J.O. Reducing legacy bias in gesture elicitation studies. Interactions 2014, 21, 40–45.
- Rahman, M.; Gustafson, S.; Irani, P.; Subramanian, S. Tilt techniques: Investigating the dexterity of wrist-based input. In Proceedings of the 27th International Conference on Human Factors in Computing Systems, CHI ’09, Boston, MA, USA, 4–9 April 2009; p. 1943.
- Green, P. Crashes induced by driver information systems and what can be done to reduce them. SAE Tech. Paper 2000, 1, C008.
- Vatavu, R.-D.; Wobbrock, J.O. Formalizing agreement analysis for elicitation studies: New measures, significance test, and toolkit. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI ’15, Seoul, Korea, 18–23 April 2015; pp. 1325–1334.
- Wobbrock, J.O.; Aung, H.H.; Rothrock, B.; Myers, B.A. Maximizing the guessability of symbolic input. In Proceedings of the CHI 2005 Conference on Human Factors in Computing Systems, Portland, OR, USA, 2–7 April 2005; pp. 1869–1872.
- Ruiz, J.; Vogel, D. Soft-constraints to reduce legacy and performance bias to elicit whole-body gestures with low arm fatigue. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI ’15, Seoul, Korea, 18–23 April 2015; pp. 3347–3350.
- Piumsomboon, T.; Clark, A.; Billinghurst, M.; Cockburn, A. User-defined gestures for augmented reality. In Proceedings of the IFIP Conference on Human-Computer Interaction, Cape Town, South Africa, 2–6 September 2013; pp. 282–299.
- Vatavu, R.-D.; Pentiuc, S.G. Multi-level representation of gesture as command for human computer interaction. Comput. Inf. 2012, 27, 837–851.
- Liang, H.-N.; Williams, C.; Semegen, M.; Stuerzlinger, W.; Irani, P. An investigation of suitable interactions for 3D manipulation of distant objects through a mobile device. Int. J. Innov. Comput. Inf. Control 2013, 9, 4737–4752.
- Seyed, T.; Burns, C.; Costa Sousa, M.; Maurer, F.; Tang, A. Eliciting usable gestures for multi-display environments. In Proceedings of the 2012 ACM International Conference on Interactive Tabletops and Surfaces, Cambridge/Boston, MA, USA, 11–14 November 2012; pp. 41–50.
- Gheran, B.-F.; Vatavu, R.-D.; Vanderdonckt, J. Ring x2: Designing gestures for smart rings using temporal calculus. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility, Baltimore, MD, USA, 29 October–1 November 2017; pp. 117–122.
- Morris, M.R. Web on the wall: Insights from a multimodal interaction elicitation study. In Proceedings of the 2012 ACM International Conference on Interactive Tabletops and Surfaces, Cambridge/Boston, MA, USA, 11–14 November 2012; pp. 95–104.
- Anthony, L.; Vatavu, R.-D.; Wobbrock, J.O. Understanding the consistency of users’ pen and finger stroke gesture articulation. In Proceedings of the Graphics Interface 2013, Regina, SK, Canada, 29–31 May 2013; pp. 87–94.
- Pickering, C.A.; Burnham, K.J.; Richardson, M.J. A research study of hand gesture recognition technologies and applications for human vehicle interaction. In Proceedings of the 2007 3rd Institution of Engineering and Technology Conference on Automotive Electronics, Warwick, UK, 28–29 June 2007; pp. 1–15.
- Gable, T.M.; May, K.R.; Walker, B.N. Applying popular usability heuristics to gesture interaction in the vehicle. In Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Seattle, WA, USA, 17–19 September 2014; pp. 1–7.
| Phone | Map Navigation | Music Player |
|---|---|---|
| T1. Unlock the phone | T5. Move left | T11. Play/Resume |
| T2. Answer the call | T6. Move right | T12. Pause |
| T3. Hang up the call | T7. Move up | T13. Volume up |
| T4. Ignore the call | T8. Move down | T14. Volume down |
| | T9. Zoom-in | T15. Next song |
| | T10. Zoom-out | T16. Previous song |
| Commands | AR | Driving Experience: Less than 2 Years | Driving Experience: More than 2 Years | p |
|---|---|---|---|---|
| T1. Unlock the phone | 0.078 | 0.089 | 0.036 | 0.625 |
| T2. Answer the call | 0.118 | 0.022 | 0.250 | 0.078 ¹ |
| T3. Hang up the call | 0.150 | 0.156 | 0.143 | 0.909 |
| T4. Ignore the call | 0.039 | 0.022 | 0.071 | 0.646 |
| T5. Move left | 0.039 | 0.044 | 0.000 | 0.688 |
| T6. Move right | 0.052 | 0.044 | 0.000 | 0.688 |
| T7. Move up | 0.111 | 0.067 | 0.143 | 0.493 |
| T8. Move down | 0.105 | 0.067 | 0.107 | 0.713 |
| T9. Zoom-in | 0.046 | 0.022 | 0.036 | 0.898 |
| T10. Zoom-out | 0.033 | 0.044 | 0.036 | 0.942 |
| T11. Play/Resume | 0.078 | 0.089 | 0.036 | 0.625 |
| T12. Pause | 0.183 | 0.200 | 0.143 | 0.603 |
| T13. Volume up | 0.085 | 0.089 | 0.071 | 0.878 |
| T14. Volume down | 0.150 | 0.089 | 0.214 | 0.276 |
| T15. Next song | 0.039 | 0.044 | 0.071 | 0.804 |
| T16. Previous song | 0.039 | 0.044 | 0.036 | 0.942 |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Nanjappan, V.; Shi, R.; Liang, H.-N.; Lau, K.K.-T.; Yue, Y.; Atkinson, K. Towards a Taxonomy for In-Vehicle Interactions Using Wearable Smart Textiles: Insights from a User-Elicitation Study. Multimodal Technol. Interact. 2019, 3, 33. https://doi.org/10.3390/mti3020033