Article

External Human–Machine Interfaces: The Effect of Display Location on Crossing Intentions and Eye Movements

Department of Cognitive Robotics, Delft University of Technology, Mekelweg 2, 2628 CD Delft, The Netherlands
*
Author to whom correspondence should be addressed.
Information 2020, 11(1), 13; https://doi.org/10.3390/info11010013
Submission received: 19 November 2019 / Revised: 14 December 2019 / Accepted: 17 December 2019 / Published: 24 December 2019

Abstract

In the future, automated cars may feature external human–machine interfaces (eHMIs) to communicate relevant information to other road users. However, it is currently unknown where on the car the eHMI should be placed. In this study, 61 participants each viewed 36 animations of cars with eHMIs on either the roof, windscreen, grill, above the wheels, or a projection on the road. The eHMI showed ‘Waiting’ combined with a walking symbol 1.2 s before the car started to slow down, or ‘Driving’ while the car continued driving. Participants had to press and hold the spacebar when they felt it safe to cross. Results showed that, averaged over the period when the car approached and slowed down, the roof, windscreen, and grill eHMIs yielded the best performance (i.e., the highest spacebar press time). The projection and wheels eHMIs scored relatively poorly, yet still better than no eHMI. The wheels eHMI received a relatively high percentage of spacebar presses when the car appeared from a corner, a situation in which the roof, windscreen, and grill eHMIs were out of view. Eye-tracking analyses showed that the projection yielded dispersed eye movements, as participants scanned back and forth between the projection and the car. It is concluded that eHMIs should be presented on multiple sides of the car. A projection on the road is visually effortful for pedestrians, as it causes them to divide their attention between the projection and the car itself.

1. Introduction

In recent years, a substantial number of studies have emerged on external human–machine interfaces (eHMIs) for automated cars. In automated driving, non-verbal communication between the driver and other road users is often impossible, because the driver is not physically present in the driver's seat, or because the driver is engaged in a non-driving task. One reason for employing eHMIs is to compensate for this lack of eye contact and other types of non-verbal communication. A second reason is to transmit information about the future state of the automated vehicle to other traffic participants. For example, if the path-planning software of the automated driving system knows that the vehicle will slow down for an upcoming intersection, the eHMI could accordingly communicate that the vehicle is about to slow down [1]. Thus, eHMIs could communicate information that is not apparent from implicit forms of communication, such as the car’s acceleration and deceleration.
So far, a number of different eHMIs have been designed. Bazilinskyy et al. [2] provided an overview of 22 eHMI concepts from industry, whereas Rasouli and Tsotsos [3] and Schieben et al. [4] surveyed eHMIs studied in academic contexts. The eHMIs proposed so far come in a variety of modalities, for example, text and light strips (e.g., [5]), and in many colours (green, red, cyan; [6,7]). Research has found that text-based eHMIs are regarded as easily understood without learning [1,8], but that text has disadvantages related to legibility from a distance and cross-national interpretability [2]. A scientific consensus regarding the most effective modality for eHMIs has not yet been reached.
A less-studied question is where on the car the eHMI should be positioned to attain maximum compliance and decision-making efficiency. A variety of locations for eHMIs have been proposed [10–33], including on the roof, on the windscreen, on the grill, above the wheels, and as a projection on the road surface.
The positioning of the eHMI is important because pedestrians (and other road users) visually sample the road environment in an intermittent manner [34]. The presented information may be critical to road safety and should therefore be understood early.
From the existing body of literature, an eHMI on the front (grill) or roof of the car appears to be the most frequently used option. These locations are justifiable because they easily allow for mounting a communication device. An eHMI that projects a message onto the road, or one that is integrated into the windscreen, is more challenging to manufacture. However, these types of eHMIs hold promise because they can be made larger than regular screen-based eHMIs, enhancing their visibility from a distance. This notion is supported by a self-report study by Ackermann et al. [9], who showed that eHMIs that projected their messages onto the windscreen or the ground were regarded as more recognisable than display-based eHMIs. Ackermann et al. [9] pointed out that the relatively large size of the projections was probably an underlying reason for this effect.
Even though research (e.g., [35]) shows that pedestrians and drivers do not make direct eye contact very often, an eye-tracking study by Dey et al. [36] showed that pedestrians tend to look at the windscreen when an approaching car is close by, “likely to seek the intention or information about the situational awareness of the driver” (p. 375). Accordingly, the windscreen may be an attractive location for presenting a message. Similarly, Bazilinskyy et al. [37] found that pedestrians often look at the wheels of parked cars, which provides motivation for a wheel-based eHMI.
At present, it is unclear which eHMI location yields the highest perceived clarity and behavioural compliance among pedestrians. This lack of knowledge impedes the standardisation of eHMI designs. In the present study, we let participants view animated video clips in which automated vehicles drove with an eHMI at one of the five abovementioned locations. Participants were asked to press and hold the spacebar whenever they felt it was safe to cross. Consequently, we examined which type of eHMI resulted in the highest percentage of time with the spacebar pressed while the automated vehicle slowed down for the participant. This continuous behavioural measurement method was introduced by De Clercq et al. [1]. Additionally, we used eye-tracking to infer which type of eHMI yields the most concentrated gaze patterns.
A survey of eHMI concepts proposed by the automotive industry indicated that about 50% of the concepts contained a text message of some kind [2]. Research has also shown that the commanding text ‘Walk’ can be understood without particular training or prior exposure [1,2]. However, the development of commanding-text eHMIs is technologically challenging, because such a design requires that the automated vehicle knows for which road user the command is meant. Another disadvantage of commanding texts concerns liability: if an automated vehicle displays ‘Walk’, and a pedestrian walks onto the road and collides with a third road user, the manufacturer of the automated vehicle may be held liable.
It has further been shown that a light-based eHMI can be perceived as ambiguous without learning [1,8]. For example, it may be unclear whether a green or red light signal applies to the pedestrian (egocentric perspective) or the automated vehicle (allocentric perspective; [2]).
Our eHMIs consisted of non-commanding text (‘Waiting’ or ‘Driving’) combined with an icon. The text on the eHMI was white to avoid the above-mentioned red/green dilemma. We opted for a relatively salient (i.e., a large display/projection) and redundant (i.e., text combined with an icon) eHMI to ensure that participants would have no difficulty understanding the eHMI message. We do not aim to suggest that a text-based eHMI would be the optimal solution in real traffic. However, because the present study is concerned with examining the effect of eHMI location, we selected an eHMI design that was shown to be effective in previous research in virtual environments.

2. Methods

2.1. Participants

The participants were 51 males and 10 females, aged between 19 and 27 years (M = 23.0, SD = 1.8). All were BSc or MSc students at the Faculty of Mechanical, Maritime and Materials Engineering of the Delft University of Technology, the Netherlands. About half of the participants were recruited through opportunity sampling within the faculty building, whereas the other half participated for course credit. All participants provided written informed consent. The research was approved by the TU Delft Human Research Ethics Committee.

2.2. Apparatus

Eye movements were recorded at 2000 Hz using the EyeLink 1000 Plus eye-tracker v5.15 (SR Research; Ottawa, ON, Canada). Participants were asked to keep their head in the head support during the entire experiment. The stimuli were shown on a 24-inch BenQ monitor (Taipei, Taiwan) with a resolution of 1920 × 1080 pixels (531 × 298 mm). The refresh rate of the monitor was set at 60 Hz. The distance between the monitor and the head support was 95 cm. Accordingly, the monitor subtended horizontal and vertical viewing angles of 31 deg and 18 deg, respectively. The experimental setup is shown in Figure 1.
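As a consistency check, the reported viewing angles follow directly from the screen dimensions and the viewing distance. The sketch below (our own illustration, not part of the original materials; the function name is ours) reproduces the 31 deg and 18 deg figures:

    import math

    # Setup geometry as reported above.
    SCREEN_W_MM, SCREEN_H_MM = 531, 298  # visible area of the 24-inch monitor
    DISTANCE_MM = 950                    # head support to monitor

    def visual_angle_deg(size_mm: float, distance_mm: float) -> float:
        # Full visual angle subtended by an object of the given size.
        return math.degrees(2 * math.atan((size_mm / 2) / distance_mm))

    print(round(visual_angle_deg(SCREEN_W_MM, DISTANCE_MM)))  # 31 (horizontal)
    print(round(visual_angle_deg(SCREEN_H_MM, DISTANCE_MM)))  # 18 (vertical)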

2.3. Independent Variable

The independent variable was the eHMI type. Six eHMI conditions were used: Roof, Windscreen, Grill, Projection, Wheels, and No eHMI. Figure 2 shows a car that combines all five eHMIs. In the experiment, only one eHMI condition was used at a time. The eHMI could show either ‘Waiting’ or ‘Driving’ (Figure 3). The ‘Driving’ message turned on when the approaching car would not stop for the pedestrian. The ‘Waiting’ message turned on when the approaching car would stop for the pedestrian.
This study was designed to examine participants’ responses when the car was stopping and the eHMI showed ‘Waiting’. The responses to the non-stopping vehicles were not analysed herein. The non-stopping vehicles were included to ensure that participants would not come to expect that all cars would stop for them. Note that stopping vehicles had a dominant effect on participants’ spacebar-pressing behaviour, whereas no meaningful differences between the eHMI conditions occurred for non-stopping vehicles. For example, when the stopping vehicle drove off, it became unsafe to cross, and participants released the spacebar. A non-stopping vehicle that was approaching at that time could not affect spacebar-pressing behaviour because participants had already released the spacebar. We used white text together with a symbol on a black background to achieve the highest possible contrast, because colours (e.g., red and green) already have a meaning, yet this meaning becomes ambiguous when the colour is presented on an approaching vehicle [2].

2.4. Design of the Animated Video Clips

The experiment consisted of 36 non-interactive animated video clips: 6 virtual environments × 6 eHMI conditions. All cars drove at a speed of about 35 km/h unless slowing down for the pedestrian. The videos were 25 s long and played at 60 frames/s. Three road layouts were used (a straight road, a T-junction, and an intersection), each with two different preprogrammed traffic behaviours, yielding six videos per eHMI condition. The lane width was 3.66 m (a standard lane width, e.g., [38]). The camera perspective was that of a pedestrian waiting to cross the road at a crossing with a traffic island. The field of view of the animation was 80 deg, which ensured that a large part of the environment could be seen (e.g., cars making a right turn, cars driving straight on, and cars making a left turn). In each video, cars were driving in both lanes. The cars did not contain a driver or passenger. This was done to resemble future driverless vehicles, which may transport goods rather than people.
Within a video, all cars featured the same eHMI type. The eHMI could show one of two messages: if the approaching car passed without slowing down, the eHMI changed from blank to ‘Driving’ (Figure 3, right); if the approaching car did stop for the participant, the eHMI changed from blank to ‘Waiting’ (Figure 3, left). The change of state from blank to ‘Waiting’ occurred when the longitudinal distance between the centre of the car and the pedestrian was 23 m. After 1.2 s, when the longitudinal distance had reduced to 11 m, the car started to decelerate to a full stop. The car came to a full stop 2.0 s after the eHMI had switched on, at a longitudinal distance of 7 m between the centre of the car and the pedestrian (Figure 2). About 2 s after the car had come to a full stop, the eHMI switched to blank again. About 1.2 s later, the car drove off and passed the participant. These timing and distance parameters yielded a scenario in which cars drove by and stopped in rapid succession. The traffic was not created according to actual traffic data or models of human behaviour.
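These timing and distance parameters are mutually consistent under a simple kinematic model, assuming a constant approach speed of 35 km/h followed by constant deceleration during braking. The following back-of-the-envelope check (ours, not the authors’) reproduces the reported distances:

    # Kinematic check of the stopping scenario (constant speed, then constant braking).
    v = 35 / 3.6                        # approach speed: ~9.7 m/s

    d_onset = 23.0                      # longitudinal distance at eHMI onset (m)
    t_cruise = 1.2                      # constant-speed interval after onset (s)
    d_brake = d_onset - v * t_cruise
    print(round(d_brake, 1))            # 11.3 m -> matches the reported 11 m

    t_brake = 2.0 - t_cruise            # braking lasts 0.8 s (full stop at t = 2.0 s)
    d_stop = d_brake - v * t_brake / 2  # distance covered under constant deceleration
    print(round(d_stop, 1))             # 7.4 m -> matches the reported 7 m

    print(round(v / t_brake, 1))        # implied deceleration: 12.2 m/s^2 (abrupt; see Section 4.4)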
As stated above, there were six videos per eHMI condition, with each video showing a different traffic environment. The traffic environments were the same for each eHMI, except for a temporal offset (up to 10 s) of the starting and corresponding ending moments of the video clips. This offset was included so that participants could not recognise or memorise the behaviour of the cars in the videos. In each of the six traffic-environment videos for a particular eHMI condition, one or two of the approaching cars stopped and subsequently drove away. In total, across the six traffic-environment videos per eHMI condition, ten approaching cars stopped for the participant. Details about the video clips and data exclusions are available in the Supplementary Material (Figures S1–S6).

2.5. Procedure and Task

Participants first read and signed an informed consent form. Next, the eye-tracker was calibrated. Then, participants performed two 10 s training scenarios. These concerned an empty straight road, showing a single car without eHMI; this car approached, stopped and drove off. The participants’ task was to press and hold the spacebar whenever they felt it was safe to cross the road. Subsequently, the participants viewed the 36 animated video clips in random order. After each scenario, the participants were asked to rate their perceived clarity with the statement: ‘It was clear when I could cross’ on a scale from 0 (completely disagree) to 10 (completely agree).

2.6. Dependent Variables

We calculated the following dependent variables (a computational sketch is provided after the list):
  • Self-reported clarity on a scale from 0 (completely disagree) to 10 (completely agree).
  • Percentage of time that the participant had the spacebar pressed, from the moment the eHMI switched to ‘Waiting’ until 3 s later. A higher percentage indicated better performance (i.e., indicating that it is safe to cross when it is indeed safe to cross).
  • Percentage of time that the participant had the spacebar released, from the moment the eHMI switched off (before the car drove away) until 3 s later. Again, a higher percentage indicated better performance (i.e., indicating that it is not safe to cross when it is indeed unsafe to cross).
  • Gaze spread in pixels. For each time sample, we calculated the distance between the participant’s gaze coordinates (x, y) and the mean gaze coordinates of all participants. The gaze spread is this distance averaged from the moment the eHMI switched to ‘Waiting’ until 3 s later.
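To make the spacebar and gaze measures concrete, the sketch below shows how they could be computed. The array names, shapes, and the 60 Hz analysis sampling rate are our illustrative assumptions, not the authors’ analysis code:

    import numpy as np

    FS = 60                       # assumed analysis sampling rate (Hz)
    WINDOW = slice(0, 3 * FS)     # 3 s window starting at the eHMI state change

    def press_score(pressed: np.ndarray) -> float:
        # Percentage of time the spacebar was held during the 3 s window.
        # pressed: boolean array (time samples,) for one vehicle approach.
        # The release score is the same computation applied to ~pressed,
        # windowed at the moment the eHMI switched off.
        return 100 * pressed[WINDOW].mean()

    def gaze_spread(gaze_xy: np.ndarray) -> float:
        # Mean distance (pixels) of each participant's gaze from the
        # across-participant mean gaze position, averaged over the 3 s window.
        # gaze_xy: array (participants, time samples, 2) of x/y coordinates.
        mean_xy = gaze_xy.mean(axis=0, keepdims=True)     # mean over participants
        dist = np.linalg.norm(gaze_xy - mean_xy, axis=2)  # (participants, time)
        return dist[:, WINDOW].mean()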

2.7. Statistical Analyses

The effects of eHMI type on the dependent variables were assessed using a repeated-measures analysis of variance (ANOVA), after averaging the performance scores of the individual vehicle approaches per participant. Significant differences between conditions were assessed with MATLAB’s multcompare function, using the Tukey–Kramer critical value.
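In Python, the omnibus test could be reproduced with statsmodels’ AnovaRM; unlike MATLAB’s multcompare, statsmodels offers no one-line Tukey–Kramer follow-up for repeated measures, so pairwise comparisons would need, for example, paired t-tests with a multiple-comparison correction. A hedged sketch with synthetic stand-in data:

    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    rng = np.random.default_rng(0)
    conditions = ["Roof", "Windscreen", "Grill", "Projection", "Wheels", "No eHMI"]
    n = 61  # participants

    # One averaged score per participant per condition, in long format.
    df = pd.DataFrame({
        "participant": np.repeat(np.arange(n), len(conditions)),
        "ehmi": np.tile(conditions, n),
        "score": rng.normal(60, 10, n * len(conditions)),  # placeholder values
    })

    # Repeated-measures ANOVA: with 61 participants and 6 within-subject
    # levels, the degrees of freedom are F(5, 300), as reported below.
    print(AnovaRM(df, depvar="score", subject="participant", within=["ehmi"]).fit())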

3. Results

3.1. Self-Reported Clarity

Figure 4 shows the results for self-reported clarity per eHMI condition. There was a significant difference between the six eHMI conditions, F(5,300) = 114.4, p < 0.001, ηp2 = 0.66. Pairwise comparisons showed that Roof, Windscreen, and Grill were not significantly different from each other. The mean clarity scores between the other combinations differed significantly.

3.2. Performance for Approaching Cars

Figure 5 shows the performance scores, averaged for the nine approaches where the car drove straight on or made a left turn before stopping for the pedestrian. The six eHMI conditions were significantly different from each other, F(5,300) = 130.1, p < 0.001, ηp2 = 0.68. Again, Roof, Windscreen, and Grill were not significantly different from each other, whereas all other combinations differed significantly.
Figure 6 illustrates participants’ spacebar-pressing behaviour as a function of elapsed time since eHMI onset at t = 0 s. Initially (between 0 and 0.5 s), the percentage of participants pressing the spacebar dropped with time, which can be explained by the fact that the approaching car kept getting closer; hence, it became less safe to cross. The Roof, Windscreen, and Grill eHMIs caused participants to start pressing the spacebar about 0.5 s after the eHMI turned on. The Projection and especially the Wheels eHMI triggered a later spacebar-press response, presumably because these eHMIs were poorly visible from a distance; see Figure 7 for an illustration. Figure 6 also shows that in the No eHMI condition, participants only started to press the spacebar once they could detect that the car decelerated (the car decelerated between 1.2 and 2.0 s).
Figure 8 shows the performance score for one selected approach condition: a case where the approaching car made a right turn. Again, the difference in performance scores was significant, F(5,300) = 10.6, p < 0.001, ηp2 = 0.15. All five eHMIs differed significantly from the No eHMI condition, and Wheels differed significantly from Roof and Grill. In other words, in straight and left approach cases, Wheels yielded the lowest performance (Figure 5 and Figure 6), whereas in the right-turn case, Wheels yielded the highest performance (Figure 8).
The high performance for Wheels, and to a lesser extent for Projection, can be explained by the visibility of the sign in the right-turn case (Figure 9). The Roof, Windscreen, and Grill, however, only became visible after the car had made the turn.
The analyses above showed similar patterns for self-reported clarity and objective performance. To quantify the degree of similarity, we averaged the performance scores and clarity scores across all participants per eHMI condition (a sketch of this aggregation is given below). The results, shown in Figure 10, reveal a strong association (r = 0.99). In other words, in the aggregate, clarity and performance appear to be affected by the same mechanism, which we believe is the visibility/readability of the display.
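The aggregation underlying Figure 10 amounts to correlating six condition means per measure. The values below are hypothetical placeholders for illustration, not the published data:

    import numpy as np

    # One mean per eHMI condition (placeholder values, not the published numbers).
    clarity = np.array([8.1, 8.0, 7.9, 5.5, 4.6, 2.9])            # 0-10 scale
    performance = np.array([74.0, 73.0, 72.0, 55.0, 47.0, 24.0])  # % pressed

    r = np.corrcoef(clarity, performance)[0, 1]
    print(round(r, 2))  # values near 1 indicate a strong linear association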

3.3. Eye-Movements for Approaching Cars

A visual inspection of the participants’ eye movements indicated that these were often goal-directed, focusing on future interactions. For example, in Figure 11, the majority of participants looked at the approaching car even before the eHMI had turned on; participants did not necessarily look towards the nearest or most salient car. Furthermore, we found that participants’ attention distribution was sometimes dispersed (e.g., when multiple cars were visible) and at other times concentrated (e.g., when a relevant car approached the participant; see Figure 9). Herein, we introduce a new measure to describe the degree of gaze dispersion. We defined dispersion as the mean distance from the participants’ overall mean gaze coordinate for that particular animated video clip. A dispersion score of, for example, 200 pixels means that participants’ gaze was, on average, 200 pixels away from the mean gaze position of all participants.
The results of the gaze dispersion analysis (Figure 12) show that approaching cars attracted attention, as evidenced by a low dispersion (<150 pixels) for the No eHMI condition while the car was approaching (0 to 2 s). The Wheels eHMI attracted attention especially just before the car came to a stop (from 1 to 2 s). The Projection, on the other hand, resulted in dispersed attention, as illustrated in Figure 13. The Windscreen yielded a low gaze dispersion when the car was standing still. The eye-movement dispersion differed significantly between the six eHMI conditions, F(5,300) = 31.4, p < 0.001, ηp2 = 0.34. The Projection yielded a significantly higher dispersion than all five other conditions. The Wheels yielded a significantly lower dispersion than all conditions except the Windscreen. The Windscreen yielded a significantly lower dispersion than the Roof and the Projection.

3.4. Performance for Cars Driving off

So far, we have examined the performance of the eHMIs only for approaching cars. Another relevant aspect of eHMI evaluation is how participants respond after the eHMI switches off, before the car drives away. Figure 14 shows that all eHMIs resulted in improved performance compared to No eHMI; that is, participants were more likely to release the spacebar before the car drove off. Initially (at t = 0 s), participants in the five eHMI conditions had the spacebar pressed, because the eHMI displayed ‘Waiting’ until that point. It took about 0.2 s for the first participants to release the spacebar after this eHMI message disappeared. Participants in the No eHMI condition started to release the spacebar only after the car drove off (at 1.4 s); see Figure 14.
An analysis of the performance scores (Figure 15) showed a significant difference between the six eHMI conditions, F(5,300) = 37.4, p < 0.001, ηp2 = 0.38. The No eHMI condition differed significantly from the five other eHMI conditions; there were no significant differences between Roof, Windscreen, Grill, Projection, and Wheels. In other words, participants responded similarly to the eHMI turning off, regardless of the type of eHMI.

4. Discussion

In this study, five eHMI locations, together with a baseline No eHMI condition, were compared in a within-subjects design using a total of 61 participants. The participants viewed animated video clips and were asked to press and hold the spacebar when they thought it was safe to cross, while their eye-movements were recorded using an eye-tracker.

4.1. Performance

The results showed that the Roof, Windscreen, and Grill-based eHMIs yielded the best performance, defined in terms of the pressing time of the spacebar when it was safe to cross. However, this finding did not hold in all scenarios; the eHMI right above the wheel was found to be the best-performing eHMI when the car approached from a corner. In this specific scenario, the eHMIs on the front (Roof, Windscreen, and Grill) were not visible, and therefore failed to communicate their messages to the pedestrian. Together, our findings suggest that eHMIs should be omnidirectional if they are to be applied in traffic scenarios where cars can approach from multiple directions. Vlakveld et al. [26] showed animations of cars with an omnidirectional eHMI on the roof, whereas drive.ai [27] used multiple displays on the car’s exterior. Another solution to ensure visibility from all sides is to use a light emitting diode (LED) strip as in Cefkin et al. [39], or LED patterns on the lateral surfaces of the car [40].
The Projection yielded poor spacebar-pressing performance when the car was approaching. This finding can be explained by the poor visibility of the projection at a far distance due to the shallow viewing angle. We do not mean to suggest that our results generalise to all possible projections. In a virtual reality study, Löcken et al. [31] tested different animations of eHMIs, including a projection which they dubbed F015 (after the name of the concept car presented by Mercedes–Benz USA [33]). Their results showed that the F015 yielded high ratings (5.7 on a scale from 1 to 7) on the User Experience Questionnaire. The concept of Löcken et al. [31] differed from ours, as their projection was highly salient, consisting of a bright green zebra-crossing message for the pedestrian. Our findings point to limitations in the use of projections that move with the car, as such a projection may not be clear from a distance. We expect that these limitations will be more severe in real traffic: although technologically feasible (e.g., [41]), powerful lasers may be required to ensure that a projection is visible on the road in daylight. An eHMI on a windscreen may also be technologically challenging to achieve, and may have variable contrast depending on whether the eHMI is mounted on a transparent windscreen or on a blinded one (as may be the case for level 5 autonomous vehicles).
For the events where the car drove away and the eHMI switched from ‘Waiting’ to a blank display, all five eHMI locations yielded equivalent performance. This can be explained by the fact that the removal of the message was a salient event, which participants could detect independently of eHMI location or even message content.
Our findings indicate that eHMIs make it possible to inform pedestrians that it is safe or unsafe to cross before the car visibly slows down or drives away. In other words, all eHMI locations evoked a more accurate response compared to the No eHMI condition.

4.2. Eye-Tracking

The eye-tracking results showed that the Windscreen eHMI yielded a concentrated gaze pattern, which can be explained by the fact that this eHMI is embedded in the centre of the car. This finding is in line with Dey et al. [36], who showed that pedestrians are inclined to look at the windscreen when an oncoming car gets close. The Wheels eHMI also yielded a concentrated gaze pattern, but only for a brief period of about 1 s before the car came to a full stop. This finding may be explained by the fact that the Wheels eHMI was poorly visible from a distance; once the car came close, participants were inclined to fixate on the eHMI to read its message.
We found that the Projection eHMI yielded a dispersed eye-movement pattern, a finding that can be attributed to participants dividing their gaze between the projection and the car itself. These results are consistent with Powelleit et al. [42], who tested a projection in front of the car showing the predicted vehicle trajectory and found that drivers considered such a display distracting. Similarly, we see a risk that a projection on the road may result in distraction: road users may fixate on the projection at the expense of attention towards the car itself, and may therefore miss relevant implicit cues.
Similar results have been found for augmented visual feedback in air traffic control: Eisma et al. [43] found that augmented visual feedback helps to achieve better task performance, but also has the potential to distract.

4.3. Self-Reports

An interesting result was that, in the aggregate, self-reported clarity was strongly associated with objective performance, with a correlation of 0.99. This strong correlation may be due to a single underlying factor, such as the legibility of the display. In other words, the Projection and Wheels eHMIs were hard to read from a distance, as a result of which participants pressed the spacebar late and gave a low clarity rating. The strong correlation between subjective and objective performance is promising for those who examine eHMIs using self-reports (e.g., [8]).

4.4. Limitations and Recommendations

The present study was conducted under rather constrained conditions. We used a computer monitor that offered a physical field of view of 31 deg and a virtual field of view of 80 deg. The 36 videos followed each other in quick succession, and the cars in the videos did not behave according to a realistic traffic-flow model. Furthermore, participants were given the straightforward task of pressing the spacebar whenever they felt it was safe to cross.
It would be worthwhile to employ more ecologically valid methods, such as a virtual reality headset combined with a motion suit [44] or a field test using a Wizard of Oz approach [39]. It remains to be investigated how participants would respond to eHMIs in real traffic, in which situations arise more naturally and in which pedestrians may be in a hurry or lack the concentration to focus on a particular eHMI. We especially recommend testing eHMIs in traffic environments that involve competing visual demands. It is possible that pedestrians in complex traffic rely on peripheral vision without sustained visual attention towards the eHMI [39,45]. Wide fields of view could be achieved using a head-mounted display or surround projections. An advantage of our setup, in which head movement was constrained, is that we were able to measure eye movements with high accuracy.
Our computer monitor had a standard resolution of 1920 × 1080 pixels. The text-based eHMIs may have been hard to read when the virtual car was at a large distance, especially for participants who suffer from near-sightedness. As discussed above, the Projection eHMI was relatively difficult to perceive just after it had appeared. However, despite the limited display resolution, participants rated the Roof, Windscreen, and Grill eHMIs as clear, with scores of about 8 on a scale from 0 to 10 (Figure 4). Furthermore, our experiment proved to be highly sensitive for detecting differences between eHMI conditions. To illustrate, 1.5 s after the eHMI turned on, over 70% of the participants pressed the spacebar for the Roof, Windscreen, and Grill eHMIs, compared to only 4% without an eHMI. The limitation of display quality also applies to other simulation environments, such as CAVE simulations and head-mounted displays (e.g., [1]). In real traffic, legibility will be affected by other visual factors, such as direct sunlight, rain, or smog.
Our simulation did not feature sound. In reality, pedestrians may rely on auditory information to establish the state and relative position of oncoming vehicles. Participants in the simulation did not move through the virtual environment, and the oncoming car decelerated abruptly while not interacting with the participant. These factors should be addressed in future research.
For the present experiment, we selected an eHMI consisting of a non-commanding text message combined with an icon. We do not suggest that this type of eHMI is optimal in real-life applications. Clamann et al. [14] mounted a 32-inch screen on the front of a vehicle, depicting messages that were legible from about 75 m distance. Such large screens, or even multiple screens (see [27]), may not be desirable from an aesthetic and aerodynamic point of view and will require careful system integration. Because display clarity is an essential factor for performance, we recommend that future research examine highly salient eHMIs, such as a blinking LED strip.
A final limitation is that the present experiment was conducted using young engineering students, who can be expected to have a relatively high spatial ability [46] and perceptual speed [47]. It remains to be investigated whether older people would be able to intuitively understand eHMIs, such as the ones tested in the present study.

5. Conclusions

In conclusion, eHMIs on the Grill, Windscreen, and Roof were subjectively regarded as the clearest and evoked the highest rate of compliance for approaching cars. A projection-based eHMI has limitations in the form of poor legibility and participants’ visual attention distribution. Based on our results, we recommend that eHMIs should be visible from multiple directions.

Supplementary Materials

The following are available online at https://www.mdpi.com/2078-2489/11/1/13/s1, Figure S1. Percentage of participants who pressed the spacebar during the videos of Traffic environment 1; Figure S2. Percentage of participants who pressed the spacebar during the videos of Traffic environment 2; Figure S3. Percentage of participants who pressed the spacebar during the videos of Traffic environment 3; Figure S4. Percentage of participants who pressed the spacebar during the videos of Traffic environment 4; Figure S5. Percentage of participants who pressed the spacebar during the videos of Traffic environment 5; Figure S6. Percentage of participants who pressed the spacebar during the videos of Traffic environment 6. Raw data, videos, and scripts are accessible here: https://www.dropbox.com/sh/egpd8kgk9bs9yee/AABi8sbwAvfbiyVxPhKVkuota?dl=0.

Author Contributions

Conceptualization, all authors; Methodology, all authors; Software, all authors; Validation, all authors; Formal analysis, all authors; Investigation, all authors; Resources, J.C.F.d.W.; Data curation, Y.B.E. & J.C.F.d.W.; Writing—original draft preparation, Y.B.E. & J.C.F.d.W.; Writing—review and editing, Y.B.E. & J.C.F.d.W.; Visualization, Y.B.E. & J.C.F.d.W.; Supervision, Y.B.E. & J.C.F.d.W.; Project administration, J.C.F.d.W.; Funding acquisition, J.C.F.d.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the research program VIDI with grant number TTW 016.Vidi.178.047 (2018–2022; “How should automated vehicles communicate with other road users?”), which is financed by the Netherlands Organisation for Scientific Research (NWO).

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. De Clercq, G.K.; Dietrich, A.; Núñez Velasco, P.; De Winter, J.C.F.; Happee, R. External human-machine interfaces on automated vehicles: Effects on pedestrian crossing decisions. Hum. Factors 2019. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Bazilinskyy, P.; Dodou, D.; De Winter, J.C.F. Survey on eHMI concepts: The effect of text, color, and perspective. Transp. Res. F Traffic Psychol. Behav. 2019, 67, 175–194. [Google Scholar] [CrossRef]
  3. Rasouli, A.; Tsotsos, J.K. Autonomous vehicles that interact with pedestrians: A survey of theory and practice. IEEE Trans. Intell. Transp. Syst. 2019, in press. [Google Scholar] [CrossRef] [Green Version]
  4. Schieben, A.; Wilbrink, M.; Kettwich, C.; Madigan, R.; Louw, T.; Merat, N. Designing the interaction of automated vehicles with other traffic participants: Design considerations based on human needs and expectations. Cognit. Technol. Work 2019, 21, 69–85. [Google Scholar] [CrossRef] [Green Version]
  5. Benderius, O.; Berger, C.; Lundgren, V.M. The best rated human-machine interface design for autonomous vehicles in the 2016 Grand Cooperative Driving Challenge. IEEE Trans. Intell. Transp. Syst. 2018, 19, 1302–1307. [Google Scholar] [CrossRef]
  6. Zhang, J.; Vinkhuyzen, E.; Cefkin, M. Evaluation of an autonomous vehicle external communication system concept: A survey study. In Advances in Human Aspects of Transportation. AHFE 2017. Advances in Intelligent Systems and Computing; Stanton, N., Ed.; Springer: Cham, Switzerland, 2017; Volume 597, pp. 650–661. [Google Scholar] [CrossRef]
  7. Werner, A. New colours for autonomous driving: An evaluation of chromaticities for the external lighting equipment of autonomous vehicles. Colour Turn 2018, 1. [Google Scholar] [CrossRef]
  8. Fridman, L.; Mehler, B.; Xia, L.; Yang, Y.; Facusse, L.Y.; Reimer, B. To walk or not to walk: Crowdsourced assessment of external vehicle-to-pedestrian displays. arXiv 2017, arXiv:1707.02698. [Google Scholar]
  9. Ackermann, C.; Beggiato, M.; Schubert, S.; Krems, J.F. An experimental study to investigate design and assessment criteria: What is important for communication between pedestrians and automated vehicles? Appl. Ergon. 2019, 75, 272–282. [Google Scholar] [CrossRef] [PubMed]
  10. Nissan. IDS Concept. Available online: https://www.nissan.co.uk/experience-nissan/concept-cars/ids-concept.html (accessed on 2 December 2019).
  11. Sweeney, M.; Pilarski, T.; Ross, W.P.; Liu, C. Light Output System for a Self-Driving Vehicle. U.S. Patent No. US9902311B2, 25 December 2018. [Google Scholar]
  12. Weber, F.; Chadowitz, R.; Schmidt, K.; Messerschmidt, J.; Fuest, T. Crossing the street across the globe: A study on the effects of eHMI on pedestrians in the US, Germany and China. In HCI in Mobility, Transport, and Automotive Systems. HCII 2019. Lecture Notes in Computer Science; Krömker, H., Ed.; Springer: Cham, Switzerland, 2019; Volume 11596, pp. 515–530. [Google Scholar] [CrossRef]
  13. Chang, C.M.; Toda, K.; Igarashi, T.; Miyata, M.; Kobayashi, Y. A video-based study comparing communication modalities between an autonomous car and a pedestrian. In Proceedings of the Adjunct Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Toronto, ON, Canada, 23–25 September 2018; pp. 104–109. [Google Scholar] [CrossRef]
  14. Clamann, M.; Aubert, M.; Cummings, M.L. Evaluation of vehicle-to-pedestrian communication displays for autonomous vehicles. In Proceedings of the Transportation Research Board 96th Annual Meeting, Washington, DC, USA, 8–12 January 2017. [Google Scholar]
  15. Daimler. Autonomous Concept Car Smart Vision EQ Fortwo: Welcome to the Future of Car Sharing. Available online: https://media.daimler.com/marsMediaSite/en/instance/ko.xhtml?oid=29042725 (accessed on 2 December 2019).
  16. Joisten, P.; Alexandi, E.; Drews, R.; Klassen, L.; Petersohn, P.; Pick, A.; Abendroth, B. Displaying vehicle driving mode—Effects on pedestrian behavior and perceived safety. In International Conference on Human Systems Engineering and Design: Future Trends and Applications; Ahram, T., Karwowski, W., Pickl, S., Taiar, R., Eds.; Springer: Cham, Switzerland, 2019; pp. 250–256. [Google Scholar] [CrossRef]
  17. Otherson, I.; Conti-Kufner, A.S.; Dietrich, A.; Maruhn, P.; Bengler, K. Designing for automated vehicle and pedestrian communication: Perspectives on eHMIs from older and younger persons. In Proceedings of the Human Factors and Ergonomics Society Europe Chapter 2018 Annual Conference; De Waard, D., Brookhuis, K., Coelho, D., Fairclough, S., Manzey, D., Naumann, A., Onnasch, L., Röttger, S., Toffetti, A., Wiczorek, R., Eds.; 2018; pp. 135–148. [Google Scholar]
  18. Semcon. Who Sees You When the Car Drives Itself? Available online: https://semcon.com/smilingcar (accessed on 2 December 2019).
  19. Song, Y.E.; Lehsing, C.; Fuest, T.; Bengler, K. External HMIs and their effect on the interaction between pedestrians and automated vehicles. In International Conference on Intelligent Human Systems Integration; Karwowski, W., Ahram, T., Eds.; Springer: Cham, Switzerland, 2018; pp. 13–18. [Google Scholar] [CrossRef]
  20. Nuñez Velasco, J.P.; Farah, H.; Van Arem, B.; Hagenzieker, M.P. Studying pedestrians’ crossing behavior when interacting with automated vehicles using virtual reality. Transp. Res. F Traffic Psychol. Behav. 2019, 66, 1–14. [Google Scholar] [CrossRef] [Green Version]
  21. Stadler, S.; Cornet, H.; Theoto, T.N.; Frenkler, F. A tool, not a toy: Using virtual reality to evaluate the communication between autonomous vehicles and pedestrians. In Augmented Reality and Virtual Reality; Tom Dieck, M.C., Jung, T., Eds.; Springer: Cham, Switzerland, 2019; pp. 203–216. [Google Scholar] [CrossRef]
  22. Toyota. Concept-i. Available online: https://newsroom.toyota.eu/2018-toyota-concept-i (accessed on 2 December 2019).
  23. Deb, S.; Strawderman, L.J.; Carruth, D.W. Should I cross? Evaluating interface options for autonomous vehicle and pedestrian interaction. In Proceedings of the Road, Safety, and Simulation Conference, Iowa City, IA, USA, 14–17 October 2019. [Google Scholar]
  24. Hensch, A.C.; Neumann, I.; Beggiato, M.; Halama, J.; Krems, J.F. How should automated vehicles communicate?–Effects of a light-based communication approach in a Wizard-of-Oz study. In International Conference on Applied Human Factors and Ergonomics; Stanton, N., Ed.; Springer: Cham, Switzerland, 2019; pp. 79–91. [Google Scholar] [CrossRef]
  25. Mahadevan, K.; Somanath, S.; Sharlin, E. Communicating awareness and intent in autonomous vehicle-pedestrian interaction. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018. [Google Scholar] [CrossRef]
  26. Vlakveld, W.; Van der Kint, S.; Hagenzieker, M.P. Cyclists’ intentions to yield for automated cars at intersections when they have right of way: Results of an experiment using high-quality video animations. Submitted.
  27. Drive.ai. The Self-Driving Car Is Here. Available online: https://web.archive.org/web/20181025194248/https://www.drive.ai/# (accessed on 2 December 2019).
  28. Colley, A.; Häkkilä, J.; Pfleging, B.; Alt, F. A design space for external displays on cars. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct, Oldenburg, Germany, 24–27 September 2017; pp. 146–151. [Google Scholar] [CrossRef] [Green Version]
  29. Colley, A.; Häkkilä, J.; Forsman, M.T.; Pfleging, B.; Alt, F. Car exterior surface displays: Exploration in a real-world context. In Proceedings of the 7th ACM International Symposium on Pervasive Displays, Munich, Germany, 6–8 June 2018. [Google Scholar] [CrossRef]
  30. Dietrich, A.; Willrodt, J.-H.; Wagner, K.; Bengler, K. Projection-based external human-machine interfaces–Enabling interaction between automated vehicles and pedestrians. In Proceedings of the Driving Simulation Conference Europe, Antibes, France, 5–7 September 2018; pp. 43–50. [Google Scholar]
  31. Löcken, A.; Wintersberger, P.; Frison, A.K.; Riener, A. Investigating user requirements for communication between automated vehicles and vulnerable road users. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV’19), Paris, France, 9–12 June 2019; pp. 879–884. [Google Scholar] [CrossRef]
  32. Mitsubishi Electric. Mitsubishi Electric Introduces Road-Illuminating Directional Indicators. Available online: http://www.mitsubishielectric.com/news/2015/1023.html (accessed on 2 December 2019).
  33. Mercedes-Benz USA. Mercedes-Benz F 015 Luxury in Motion. Available online: https://www.youtube.com/watch?v=MaGb3570K1U (accessed on 2 December 2019).
  34. Senders, J.W.; Kristofferson, A.B.; Levison, W.H.; Dietrich, C.W.; Ward, J.L. The attentional demand of automobile driving. Highw. Res. Rec. 1967, 195, 15–33. [Google Scholar]
  35. AlAdawy, D.; Glazer, M.; Terwilliger, J.; Schmidt, H.; Domeyer, J.; Mehler, B.; Fridman, L. Eye contact between pedestrians and drivers. In Proceedings of the Tenth International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, Santa Fe, NM, USA, 24–27 June 2019; pp. 301–307. [Google Scholar]
  36. Dey, D.; Walker, F.; Martens, M.; Terken, J. Gaze patterns in pedestrian interaction with vehicles: Towards effective design of external human-machine interfaces for automated vehicles. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Utrecht, The Netherlands, 22–25 September 2019; pp. 369–378. [Google Scholar] [CrossRef]
  37. Bazilinskyy, P.; Wesdorp, D.; De Vlam, V.; Hopmans, B.; Visscher, J.; Dodou, D.; De Winter, J.C.F. Visual scanning behaviour on a parking lot. In preparation.
  38. Liu, C.; Wang, Z. Effect of narrowing traffic lanes on pavement damage. Int. J. Pavement Eng. 2003, 4, 177–180. [Google Scholar] [CrossRef]
  39. Cefkin, M.; Zhang, J.; Stayton, E.; Vinkhuyzen, E. Multi-methods research to examine external HMI for highly automated vehicles. In International Conference on Human-Computer Interaction; Springer: Cham, Switzerland, 2019; pp. 46–64. [Google Scholar] [CrossRef]
  40. Troel-Madec, L.; Alaimo, J.; Boissieux, L.; Chatagnon, S.; Borkoswki, S.; Spalanzani, A.; Vaufreydaz, D. eHMI positioning for autonomous vehicle/pedestrians interaction. In Proceedings of the IHM 2019—31e Conférence Francophone sur l’Interaction Homme-Machine, Grenoble, France, 10–13 December 2019; pp. 1–8. [Google Scholar]
  41. Ineos159challenge. The Role of the Car. Available online: https://www.ineos159challenge.com/news/the-role-of-the-car/ (accessed on 2 December 2019).
  42. Powelleit, M.; Winkler, S.; Vollrath, M. Cooperation through communication–Using headlight technologies to improve traffic climate. In Proceedings of the Human Factors and Ergonomics Society Europe Chapter 2018 Annual Conference; De Waard, D., Brookhuis, K., Coelho, D., Fairclough, S., Manzey, D., Naumann, A., Onnasch, L., Röttger, S., Toffetti, A., Wiczorek, R., Eds.; 2018; pp. 149–160. [Google Scholar]
  43. Eisma, Y.B.; Borst, C.B.; Van Paassen, M.M.; De Winter, J.C.F. Augmented visual feedback: Cure or distraction? Submitted.
  44. Kooijman, L.; Happee, R.; De Winter, J.C.F. How do eHMIs affect pedestrians’ crossing behavior? A study using a head-mounted display combined with a motion suit. Information 2019, 10, 386. [Google Scholar] [CrossRef] [Green Version]
  45. Moore, D.; Currano, R.; Strack, G.E.; Sirkin, D. The case for implicit external human-machine interfaces for autonomous vehicles. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Utrecht, The Netherlands, 22–25 September 2019; pp. 295–307. [Google Scholar] [CrossRef]
  46. Wai, J.; Lubinski, D.; Benbow, C.P. Spatial ability for STEM domains: Aligning over 50 years of cumulative psychological knowledge solidifies its importance. J. Educ. Psychol. 2009, 101, 817–835. [Google Scholar] [CrossRef]
  47. Salthouse, T.A. Aging and measures of processing speed. Biol. Psychol. 2000, 54, 35–54. [Google Scholar] [CrossRef]
Figure 1. Experimental setup. In the actual experiment, the windows were blinded with aluminium foil.
Figure 2. Car combining all five external human–machine interfaces (eHMIs). In the experiment, the car showed only one eHMI at a time. Here, the car has stopped for the pedestrian. The distance between the centre of the car and the camera (pedestrian) is 7 m longitudinal (i.e., parallel to the direction of the road) and 4.5 m lateral (i.e., perpendicular to the road). The white markings on the road were intended to create a pedestrian crossing on the road, without designated priority to the pedestrian.
Figure 3. (a) Image presented on the eHMI when the approaching car stopped for the pedestrian, (b) Image presented on the eHMI when the approaching car did not stop for the pedestrian.
Figure 4. Mean self-reported clarity rating per participant. An average is taken of the scores of six scenarios per participant.
Figure 5. Mean performance score per participant for car approaches. The performance score is defined as the percentage of time that the spacebar was pressed, from the moment the eHMI turned on until 3 s later. The average is taken for the nine approaches where the car drove straight on or made a left turn before stopping for the pedestrian.
Figure 6. Percentage of participants who pressed the spacebar during car approaches. The average was taken for the nine approaches where the car drove straight on or made a left turn. t = 0 s: the eHMI turns on. t = 2 s: the car has come to a stop.
Figure 7. Screenshot of the animation in a straight approach case with the Projection eHMI. The yellow markers represent the gaze positions of all of the participants. The projection in front of the car is difficult to discern from a distance.
Figure 8. Mean performance score per participant for car approaches where the car made a right turn before stopping for the pedestrian. The performance score is defined as the percentage of time that the spacebar was pressed, from the moment the eHMI turned on until 3 s later.
Figure 9. Screenshot of the animation in the right-turn approach case with the Wheels eHMI. The yellow markers represent the gaze positions of the participants.
Figure 10. Overall mean self-reported clarity versus overall mean performance score during car approaches. The performance score is defined as the percentage of time that the spacebar was pressed, from the moment the eHMI turns on until 3 s later.
Figure 11. Screenshot of the animation in an intersection scenario. The yellow markers represent the gaze position of the participants.
Figure 12. Eye movement dispersion score during car approaches. The average was taken of the nine approaches where the car drove straight on or made a left turn. t = 0 s: the eHMI turned on. t = 2 s: the car has come to a stop.
Figure 13. Screenshot of the animation in a straight approach scenario with the Projection eHMI. The yellow markers represent the gaze positions of the participants. The Projection results in dispersed eye gaze, with some participants looking at the eHMI on the asphalt and other participants looking at the car.
Figure 14. Percentage of participants who pressed the spacebar while the car was driving off. The average is taken of nine times driving off. t = 0 s: the eHMI turned off. t = 1.4 s: the car started to accelerate.
Figure 15. Mean performance score per participant for cases where the car drove off. The performance score is defined as the percentage of time that the spacebar was released, from the moment the eHMI turned off until 3 s later. For each participant, the average is taken of nine times driving off.

Share and Cite

MDPI and ACS Style

Eisma, Y.B.; van Bergen, S.; ter Brake, S.M.; Hensen, M.T.T.; Tempelaar, W.J.; de Winter, J.C.F. External Human–Machine Interfaces: The Effect of Display Location on Crossing Intentions and Eye Movements. Information 2020, 11, 13. https://doi.org/10.3390/info11010013

