Article

Synthetic Displays and Their Potential for Driver Assistance Systems

by Elisabeth Maria Wögerbauer *, Christoph Bernhard and Heiko Hecht
Department of Psychology, Johannes Gutenberg-University Mainz, 55122 Mainz, Germany
* Author to whom correspondence should be addressed.
Information 2024, 15(4), 177; https://doi.org/10.3390/info15040177
Submission received: 22 February 2024 / Revised: 14 March 2024 / Accepted: 18 March 2024 / Published: 23 March 2024
(This article belongs to the Special Issue Feature Papers in Information in 2023)

Abstract

Advanced visual display technologies typically supplement the out-of-window view with separate displays (e.g., an analog speedometer or artificial horizon) or with overlays (e.g., a projected speedometer or map). Studies on head-up displays suggest that altering the out-of-window view itself is superior to supplemental displays, as sensor-based information not normally visible to the driver can be included. Such novel synthetic displays have been researched for cockpit implementation but less so for driving. We discuss such view-altering synthetic displays in general, and camera–monitor systems (CMS) designed to replace rear-view mirrors as a special instance of a novel synthetic display in the automotive domain. In a standard CMS, a camera feed is presented on a monitor, but it could also be integrated into the windshield of the car. More importantly, the camera feed can undergo alterations, augmentations, or condensations before being displayed. The implications of these technologies are discussed, along with findings from an experiment examining the impact of information reduction on a time-to-contact (TTC) estimation task. In this experiment, observers judged the TTC of approaching cars based on the synthetic display of a futuristic CMS. Promisingly, TTC estimations were unaffected by information reduction. The study also emphasizes the significance of the visual reference frame.

1. Introduction

The optimization of dedicated displays, such as speedometers, infotainment systems, or navigation displays, has been a long-standing goal of user interface design in many domains, and the literature on how to improve text readability (e.g., [1]), reduce clutter (e.g., [2]), or optimize menu structures [3] abounds. In contrast, the enhancement of the scene itself, as conveyed by a look out of the window of a car or an airplane, has received less research attention. For decades, such enhancement was largely confined to the design of futuristic airplane cockpits, but lately it has spread to the automotive domain. In many modern cars, when put in reverse, a monitor in the dashboard displays a video feed from a rear-view camera, which may be enhanced by contrast amplification at nighttime, by lines indicating the projected path of the vehicle, or by a stop line indicating the closest approach distance to a curb or obstacle. Another established feature is the bird's-eye view, where sensor information about obstacles is schematically represented to visualize the potential contact point with the vehicle and the remaining distance to the obstacle. Technological advances are propelling such synthetic displays toward everyday use. In the first half of the paper, we examine various possible forms of information presentation within synthetic displays and explore the psychological difficulties, consequences, and potential benefits associated with them. We sketch the state of the art of synthetic displays in everyday applications, in particular in the automotive sector. In the second half of the paper, we investigate a particular instance of synthetic displays in the context of camera–monitor systems (CMS), and we present findings of a laboratory experiment in which the degree of environmental context information and the reference frame of the presentation (own vehicle visible or invisible) were systematically manipulated. The research aims to provide insights into the role of synthetic displays in time-critical perceptual tasks, such as time-to-contact estimation. Finally, we deduce some recommendations from our findings for the design of artificial displays in general.

1.1. Evolution of Information Displays in Vehicles

The time-honored use of physical dials, gauges, speedometers, and the like to supplement an operator's view out of the window is undergoing radical changes in many domains. Dedicated displays and gauges are steadily being displaced by multi-purpose displays, which range from touch-screens that provide different menu systems as a function of the operator's purpose, to enhanced vision systems, to all-out synthetic vision systems. Rather than presenting important information on separate displays, enhanced vision systems overlay it onto a transparent projection surface, such as the windshield of a car or a head-worn augmented reality system. Enhanced see-through vision systems allow for direct visual contact with the outside world while integrating additional information into this view. The situation just mentioned, backing up a car based on the video feed of a monitor in the dashboard, is a limiting case, as the camera view is merely enhanced with a few lines indicating trajectory and distance to obstacles. In contrast, a synthetic vision system (SVS) replaces the analog view of the outside world with an entirely computer-generated scene in a manner compatible with natural viewing, but based on numerous sensors. An SVS can receive information from an optical camera and/or from a wealth of sensors, which can include but are not limited to thermal infra-red imaging, radar, Lidar, or ultrasound.
In the following sections, the evolution of driving-related information displays is described in more detail.

1.1.1. The Beginning—Natural Displays

Visual or visualized information about the world can be accessed in natural ways, be it with the naked eye, in a traditional mirror, or through a window. In all of these ways, the image and the information therein can be altered or degraded. The windshield may be dirty, or its confined size may act as a frame and thereby introduce small deviations compared to unencumbered viewing. Mirrors are a special case of natural viewing because in some sense (not undisputedly so) they reverse left and right (see [4]). We have become so accustomed to mirrors that we take them for granted in many everyday situations and may not even notice when they distort the image, for instance, when they are not planar but shaped concavely for makeup purposes, or convexly or aspherically to minimize the blind spot when driving. Changing the curvature of the mirror itself affects the perception of information in the world, as shown in many studies (e.g., [5,6]). Moreover, the amount of information available in a complex driving scene can simply overwhelm the driver, as natural displays provide no means of reducing information in the driving scene.
Of course, natural displays can be enhanced, which in itself constitutes a first step in the evolution of information displays. For example, objects seen through the windshield can be emphasized by adding bright color to them, which has been demonstrated to increase the passenger's understanding of autonomous driving decisions [7]. Additionally, an alert light can be placed in the world (cockpit) or in the frame of the mirror, or additional information can be projected onto the window using augmented reality, be it by a pair of glasses or a head-up display. The latter already alters the natural view to some degree by added glare and tint. However, such enhancement cannot solve the problem of the potential perceptual overload of the driver.

1.1.2. The Evolution—Video-Based Displays

As research and new technologies increasingly affect automotive display design, an evolution of information displays has begun. As an alternative to natural viewing, video-based systems are becoming more prominent in the automotive industry as well as in other domains, such as military vehicles. Here, several studies have investigated the advantages and disadvantages of camera-based or indirect vision systems in armored and weaponized vehicles. The advantages of indirect vision systems are manifold: a larger field of view increases awareness of the surrounding environment and of potential threats, and weak spots in the armor caused by eye slits are avoided. Sensor-based systems are therefore important parts of modern equipment in this kind of vehicle. Empirical investigations of the advantages and disadvantages of these systems have been undertaken (e.g., [8,9]); both studies showed that task performance in different vehicle control tasks improved with indirect vision systems, especially when using a higher viewpoint.
Regarding automobiles, the above-mentioned rear-view backup cameras have become mandatory in the US and are recommended in Europe; they are typically enhanced by distance information, path suggestions, and warning symbols. Manufacturers have begun to offer the option of replacing the rear-view mirrors mounted to the sides of the vehicle with camera–monitor systems (CMS). The image taken by a camera mounted somewhere on the vehicle is displayed on a monitor placed somewhere inside the cockpit. CMS can also be enhanced, either by transformations of the camera image, e.g., the option to select a wider field of view (focal length) to be displayed on the monitor, or by the addition of information, such as a flashing frame indicating a car in the blind spot. We believe that CMS have great potential and may become the first SVS to establish itself in our daily routines.
These video-based images can be enhanced or augmented in more or less sophisticated ways. Once a CMS is in place, it becomes possible, if not easy, to process the image before it is displayed on the monitor. Contrast can be enhanced, or night-vision features can be implemented that reduce brightness and glare from headlights and make the approaching vehicle more visible on the monitor. One can add a flashing symbol to the display to indicate an object in the blind spot or show a map in the portion of the display that is likely to cover a piece of sky. And with appropriate object-detection software, objects in the scene can be rendered invisible to remove information, provided they are irrelevant to the traffic scenario and the processing required to do so can be performed within a few milliseconds to avoid time-lags.
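As a minimal sketch of what such frame post-processing could look like, the following Python snippet combines a contrast/glare adjustment with inpainting-based object removal using OpenCV. It is illustrative only: the gain and offset values are arbitrary, and the mask of irrelevant objects is assumed to come from some upstream detection stage that is not shown; no production CMS is described here.

```python
import cv2
import numpy as np

def postprocess_cms_frame(frame: np.ndarray, irrelevant_mask: np.ndarray) -> np.ndarray:
    """Post-process one rear-view camera frame before display.

    frame: BGR image from the CMS camera.
    irrelevant_mask: uint8 mask, 255 where an upstream detector has
    flagged an object as irrelevant to the traffic scenario.
    """
    # Night-vision-like step: boost contrast and damp overall brightness
    # to reduce glare from headlights (gain/offset chosen for illustration).
    enhanced = cv2.convertScaleAbs(frame, alpha=1.4, beta=-20.0)

    # Information reduction: fill the masked regions from their
    # surroundings so that irrelevant objects disappear from the display.
    return cv2.inpaint(enhanced, irrelevant_mask, inpaintRadius=3,
                       flags=cv2.INPAINT_TELEA)
```

Whether such a pipeline stays within the few-millisecond latency budget mentioned above would, of course, depend on the hardware and on the object-detection stage.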
However, the first CMS offered by manufacturers seem to follow very traditional evolutionary footsteps. Just as the first passenger car designs held on to running boards, which were useful for carriages but quite superfluous for cars, the first CMS were very much influenced by the classical rear-view mirrors. For instance, in the Audi e-tron, the engineers put the camera where the mirror used to be and placed the monitor just underneath this position on the inside of the doors of the driver and passenger, even maintaining a mirror-like shape (see Figure 1). Interestingly, the resulting monitor view does not correspond to what would be seen in the traditional analog mirror; for that, the camera would have to be mounted at the virtual eye-point somewhere on top of the front fender of the car. To maximize fuel savings, the camera should protrude as little as possible from the car. Thus far, we know that low camera positions are problematic [10], but we know precious little about the ideal camera and monitor positions.
Another problem with video-based displays is that they can and do suffer from distortions similar to those common in physical mirrors. We rarely notice these gradual distortions as a function of the mirror's radius or the focal length of the camera. Perhaps owing to our evolutionary history, we readily take a mirror or video image to convey a glimpse of the real world and experience no oddness when the image is scaled or distorted to our convenience. (Note that, strictly speaking, a camera lens does not necessarily distort the image but merely causes the impression of distortion when we view the image from an incorrect station point in front of the picture.) We trust mirrors to the extent that we often dispense with the additional head-turn to obtain an unmediated view of the traffic into which we want to merge, and we happily back up the car merely on the basis of video images displayed on a monitor. However, video-based displays and information enhancement face many challenges. Consequently, video-based displays may be a new stage, but not the final step, in the evolution of information displays.

1.1.3. A Revolution—Synthetic Vision Displays

Information displays in automobiles are already undergoing several changes, but we anticipate that a revolution in how we display information from the outside driving world is yet to come. We believe that this revolution involves synthetic vision systems (SVS). Synthetic displays go far beyond distortion or video processing. SVS are reconstructive in the sense that the visual world is (re)constructed on the basis of many sensors, which may include but are not limited to digital information from light-sensitive materials. SVS construct the visual world from scratch. Maps and GPS data can provide the framework into which data from Lidar, ultrasound, infrared cameras, etc., are integrated. It is conceivable that an SVS produces a visual world that looks the same regardless of weather conditions or time of day. Whether this is indeed desirable depends on many factors. It is, however, easy to conceive that such a display can vastly improve visibility at nighttime and/or during foul weather conditions. Given how well we have adapted to convex mirrors and wide-angle video images as the basis for operating vehicles at high speed, it seems very likely that users will have little trouble accepting synthetic images once they start to replace natural and video-based views of the world at a larger scale.
Whereas enhanced vision systems have been finding their way into the automotive industry for many years, fully synthetic displays remain futuristic. In aviation, by contrast, synthetic displays have been a topic of design concepts for a few decades (for a comprehensive review, see [11]), and some of them are currently entering the prototype stage. NASA's experimental Quiet Supersonic Transport plane is reported to receive an SVS, as its long nose prevents all natural forward vision of the pilot.
Figure 2 depicts a complex synthetic display, which combines a computer-generated view of the outside world with a number of overlaid virtual gauges and dials. Note that this display could look the same no matter what the outside weather or daylight conditions may be. At nighttime, the predominantly camera-based information would be replaced with sensor-based information such that the pilot can optimally rely on instinctive visual processing, which greatly reduces the workload associated with flying in poor visibility. Ultimately, an advanced synthetic display could render obsolete the cumbersome instrument flight rules pilots have to obey in foul weather when automated instrument-based landing is not available. Such a display would enable the pilot to apply visual flight rules and, for instance, perceive other airplanes or the landing strip in the synthetic display with the same level of detail and precision as provided by a look out of the window in bright daylight conditions.
Compared to the video-based systems described above, synthetic images can be altered to a much larger extent. Synthetic displays offer wonderful opportunities to enhance a given image by adding information, by augmenting the image, or by replacing it with a purely concocted view that allows the necessary actions to be performed at their best. We can alter the size of some objects in the synthetic world or insert others, such as warning signs or a virtual lead vehicle. For instance, in previous work, we generated a CMS that increased the visual size of accelerating cars approaching on the passing lane (i.e., the size increased in addition to the naturally occurring retinal change in the car's size as it caught up with the observer's car), while the size of decelerating cars was reduced accordingly [12]. By doing so, we could compensate for the systematic errors that observers make when judging accelerating or decelerating cars. The altered synthetic image allowed the combined behavioral system of operator and CMS to judge accelerating vehicles with smaller error margins. Importantly, synthetic displays also allow for completely new modifications that involve information reduction rather than addition. Unlike in natural and video-based images, redundant or misleading information can be removed from synthetic displays. For instance, a pedestrian occluded by a tree on the sidewalk near an intersection could be made visible by removing the tree from the synthetic display. This avenue of information removal in synthetic displays has not been systematically researched. We do not yet know whether information removal can indeed generate more advantages than disadvantages. The experiment reported in the second half of this article is a first proof of concept.

1.2. A Taxonomy of Displays

In the previous sections, we have described the evolution of information displays in the automotive industry. Different types of displays can be distinguished according to the way information is presented to and processed by the driver, and all this information can be enhanced or altered in different ways, as described above. These distinctions lead us to a taxonomy of displays in the automotive domain. Such a taxonomy can be used to arrive at guidelines for useful modifications of the image available to the driver of an automobile.
The taxonomy is presented in Figure 3. As shown there, a first distinction is made between natural and digital presentation of information. In both these categories, information can be captured differently. In natural displays, which comprise the beginnings and current standards of information displays, information is captured by the naked eye directly, or by means of mirrors and through windscreens. In digital displays, information capture might be video-based or sensor-based. Video-based information capture describes all systems in which the outside world is captured by a camera and immediately displayed on a screen. Sensor-based information capture describes SVS where information is completely reconstructed based on different information sources.
The next level of the taxonomy describes the levels of enhancement that can be applied. Mediated natural views and all digital displays can be altered by adding information (enhancement). In contrast, reducing information is much trickier and can only be achieved if video-based digital views are post-processed, or if an entirely synthetic system (SVS) is used.
Finally, the last level of the taxonomy presents examples from the development of CMS, which is of special interest in the experiment described in the second part of this article.
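Purely as an illustration, the taxonomy could be rendered as a small data structure. The labels and examples below paraphrase Figure 3 and the preceding text; the nesting itself is our own sketch, not an established standard.

```python
# Illustrative rendering of the display taxonomy (after Figure 3):
# presentation -> information capture -> possible modification -> CMS-related example.
display_taxonomy = {
    "natural presentation": {
        "information capture": ["naked eye", "mirror", "windscreen"],
        "possible modification": ["addition only (for mediated views)"],
        "CMS-related example": "classical side/rear-view mirror",
    },
    "digital presentation": {
        "video-based capture": {
            "possible modification": ["addition", "reduction (requires post-processing)"],
            "CMS-related example": "camera-monitor system (CMS)",
        },
        "sensor-based capture (SVS)": {
            "possible modification": ["addition", "reduction", "full reconstruction"],
            "CMS-related example": "synthetic CMS (as in our experiment)",
        },
    },
}
```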

1.3. Challenges of Information Enhancement in Synthetic Displays

As desirable and convenient as powerful synthetic displays may be, a number of challenges have prevented them from being implemented beyond the experimental stage in commercial aviation, let alone in private automobiles. For fear of missing out on relevant information, designers of SVS often pile additional information from various sensors on top of a camera-based scene. For one, this results in clutter and feature congestion (see, e.g., [2]). It also increases workload, as additional resources go into distinguishing relevant from irrelevant information. Thus, it is important to minimize clutter-increasing enhancements and strive for clutter-neutral or clutter-reducing enhancements. It remains to be evaluated whether the redundancy gain achieved by adding information is desirable for faster action planning or whether it is counterproductive.

1.3.1. Sensor Integration

It can be rather tricky to fuse information from different sensors into one image such that the content can be easily grasped by the operator. For instance, thermal images have a very different feel from video-based images. If they are overlaid, it is important to ensure that the overlay is conformal: both image types need common reference points to avoid double images and spatial uncertainty. If one opts for a non-conformal overlay, such as an artificial horizon, it has to be clearly separable from the objects in the visual world. Livatino and colleagues [13] provide an overview of various possibilities for visual aids based on multi-sensor information, as applied in synthetic displays for teleoperation. In this context, the video-recorded and 3D-reconstructed scene is supplemented with additional information, such as distance to other objects, trajectories, and details about traversable surfaces. It may also be expanded by displaying an exocentric perspective. Suggestions for the promising use of synthetic displays can also be found in other domains. Whether assisting in surgical procedures through image-guided surgery [14], predicting flight paths and offering guidance in aviation [15], providing virtual annotations for enhanced visibility in challenging conditions [16], facilitating the monitoring of highly automated machines [17], or visualizing damage in buildings after earthquakes [18], synthetic displays can be used within a wide range of applications. Their success will largely depend on whether the information from different sensors can be integrated into one meaningful display.

1.3.2. Information Load

Currently, a wealth of information is presented to drivers in various modern in-vehicle displays. This is driven by factors such as the general needs and preferences of drivers, manufacturer services, or regulatory requirements, to name just a few. At least for now, information enhancement does not decrease this manifold information but simply offers a new, potentially better, way to present it. However, the load of information spread across several gauges and monitors poses challenges to human information processing, as the driver's attention should be directed towards the road at all times. The high visual complexity of in-vehicle information systems can increase the time needed to identify relevant information, especially for older drivers [19]. Visual search times increase as a function of visual complexity, in particular if serial search is involved (see feature integration theory [20]). Lee and colleagues [19] present empirical examples of this effect in the automotive context, which often amounts to clutter (for a taxonomy of clutter-relevant information in displays, see [21]). A synthetic display that increases the visual information load cannot solve the clutter problem, whereas one that reduces visual information could. This illustrates the great potential of information reduction in synthetic displays.

1.3.3. Driver Distraction

Closely connected with the challenge of information load is the ability of drivers to switch their attention between the road and in-vehicle displays. Most of the information that constitutes enhancement is presented visually. As driving is primarily a visual task, enhancements using the visual channel can strain the visual resources of drivers, especially if the enhancement does not replace other visual information. According to Wickens' Multiple Resource Theory [22], the human operator has limited resources that are more or less specific to a given perceptual channel (e.g., visual). Thus, visual enhancement competes for visual resources and increases the potential for distraction. A comprehensive review by Ziakopoulos and colleagues [23] supports this notion: they calculated that in-vehicle information systems could account for around 1.66% of all on-road crashes. This underscores the potentially distracting influence of visual in-vehicle information during driving.
Another source of driver distraction that traditionally occurs during driving is visual clutter in the environment. Unlike in the design of synthetic displays, we have little to no control over this type of clutter. According to a taxonomy proposed by Edquist [24], visual clutter in the environment can be divided into three different types: situational clutter (temporary, moving objects on and alongside the road that must be attended for safe driving, such as other road users like vehicles, cyclists, and pedestrians), designed clutter (permanent objects used for communication with drivers, such as road markings, traffic signs, and signals), and built clutter (buildings and other infrastructure, advertising billboards, etc.). The presence of clutter in the environment can have a negative impact on driving performance. Through synthetic displays, it is possible to remove (or not present) those components from the visual environment that are not needed for the driving task. In this way, environmental clutter can be actively reduced.

1.4. Information Reduction in Synthetic Displays

The decluttering of a scene, which is all but impossible in natural viewing, could be a major advantage of synthetic displays. Synthetic displays allow for the complete removal of inessential data, or for their de-emphasis by increasing the transparency of irrelevant information. We focus on the removal of inessential information. In the case of a standard driving task, it is by no means obvious which parts of the complex scene captured by the various sensors (camera, Lidar, thermal imaging, etc.) are necessary, which are dispensable, and which constitute clutter whose removal would be beneficial. In order to determine dispensable or distracting information, we need to know the visual information base that the user's visual system exploits. What the relevant information is in a given case is often far from obvious.
We have chosen to look at the case of TTC estimation, which is about determining the time at which a moving object will reach a certain location. In the domain of TTC estimation, a crucial aspect of driving, we can draw on a large body of research that has attempted to identify the relevant visual information [25]. In a nutshell, the relevant information is not what a physicist or a mechanical system might work with, that is, a computation based on the instantaneous velocity and distance of the approaching object. Users' behavior points to very different processes. Observers are able to exploit the relative optical expansion rate of the approaching car, but they do not always make use of this strategy. Instead, they often behave as if using faster, abbreviated heuristics, for instance, by picking a variable that is often correlated with TTC and easier to perceive or compute (e.g., [26,27]). Based on the vast literature on TTC estimation, we can be fairly confident that rather minimal visual information, for instance, a display consisting of mostly empty space and a few abstract objects, such as a circle representing a spherical object, is sufficient to make TTC estimates of acceptable quality. This suggests that information reduction should in principle preserve observers' ability to make temporal range estimates.
We can thus begin to answer the question of whether information reduction can be useful by focusing on a narrowly defined use case involving TTC estimates. We have chosen a synthetic display, namely a synthetic CMS, to manipulate the richness of environmental information available for judging the time remaining until a car on the passing lane will be right next to our own car, or collide with us, should we decide to change into the passing lane without sufficient leeway. Let us further assume that our own car is stationary (e.g., waiting on an onramp) or moves at constant speed, and that the other car approaches at a faster constant speed. For this scenario, we can rely on theories of time-to-contact (TTC) estimation. According to the classical view, the relative expansion rate of the approaching object is taken to be the basis of such estimates. As long as the approaching car is clearly visible, observers should produce valid TTC estimates. That is, the removal of environmental scene information, such as trees on the roadside, pavement markings, etc., should not compromise the judgements, as very simple artificial objects typically suffice to gain insights into human TTC-estimation behavior (e.g., [28]). However, the relative velocities among objects on the display screen, as well as the background objects, can constitute relevant information for TTC estimation [29]. Thus, the removal of an important reference object might nonetheless introduce TTC errors.
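For the idealized case just described, the classical account can be stated compactly. Under the standard small-angle assumption, an object of physical width $s$ at distance $d(t)$, closing at constant speed $v$, subtends a visual angle $\theta(t) \approx s/d(t)$, and the optical variable tau recovers the remaining TTC without any knowledge of size, distance, or speed:

$$\tau(t) \;=\; \frac{\theta(t)}{\dot{\theta}(t)} \;=\; \frac{s/d(t)}{s\,v/d(t)^{2}} \;=\; \frac{d(t)}{v} \;=\; \mathrm{TTC}(t).$$

This is why, in principle, even an isolated target on a blank background carries all the information the task requires.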

2. Experiment: Judging TTC Based on a Simulated SVS

Can a synthetic display constitute a case of paradoxical enhancement, where reducing environmental information enhances the display and improves performance? We have chosen a classical task in which a driver observes an approaching vehicle for a while until it is occluded from view. The observer then has to extrapolate and determine the moment of contact, had the car continued to move in the same way as seen before the occlusion (prediction motion paradigm). The entire scene was designed with a virtual reality engine such that we could arbitrarily add or remove any detail. We thereby manipulated the degree of environmental clutter while maintaining sufficient information to meaningfully perform the TTC estimation task. Observers viewed the approaching vehicles with differing degrees of environmental information reduction, that is, clutter removal, in a setup simulating a synthetic rear-view mirror display.

2.1. Methods

2.1.1. Participants

A total of 29 psychology students (25 female, 4 male, 0 non-binary) with a mean age of 24.55 years (SD = 5.25 years) participated in the experiment. Twenty-six of them reported having a valid driver's license, with a mean annual mileage of 3762 km (SD = 4928 km). Participants were recruited via a mailing list and received partial course credit for their participation. Prior to the experiment, all participants were informed about the voluntary nature of their participation, and their written consent to take part in the experiment was obtained. All participants had normal or corrected-to-normal near vision. Data from two participants were excluded from the analysis because of excessively long and variable responses, indicating a failure to follow instructions.

2.1.2. Apparatus

In the lab experiment, participants imagined themselves to be in the driver's seat of a vehicle equipped with a camera–monitor system. The experimental setup consisted of two monitors: a large screen (LG, 1920 × 1080 resolution, 52.5 × 30.0 cm) that reproduced the driver's forward view through the windshield, and a side monitor (Feelworld 4K, 1920 × 1080 resolution, 15.0 × 9.5 cm) that simulated the monitor of a CMS for the task presentation (see Figure 4). To maintain a consistent distance to the screens, participants rested their chin on a chinrest positioned 53 cm from the forward-facing screen and 63 cm from the simulated CMS monitor. The CMS monitor screen provided a field of view of 13.6 degrees horizontally and 8.6 degrees vertically. The synthetic scenes were created using the Python-based virtual reality software Vizard 7.
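The reported field of view follows directly from the monitor dimensions and the viewing distance. As a quick plausibility check (plain Python, values taken from the apparatus description above):

```python
import math

def visual_angle_deg(extent_cm: float, distance_cm: float) -> float:
    """Visual angle subtended by a screen extent at a given viewing distance."""
    return 2 * math.degrees(math.atan(extent_cm / (2 * distance_cm)))

# CMS monitor (15.0 x 9.5 cm) viewed from the chinrest at 63 cm:
print(f"{visual_angle_deg(15.0, 63):.1f} deg horizontal")  # 13.6 deg
print(f"{visual_angle_deg(9.5, 63):.1f} deg vertical")     # 8.6 deg
```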
The forward view (see Figure 4, large monitor) depicted a two-lane road, each lane 7.5 m wide, with an asphalt-textured shoulder. Guardrails on the left and delineators on the right framed the road. The scene included green shrubberies, a light blue sky with gentle clouds, and a depiction of a portion of the vehicle’s hood. This forward view remained constant throughout the experiment. On the CMS monitor (see Figure 4, small monitor on the left), the rear view was presented in three information reduction variants, which differed with regard to the extent of displayed environmental information. We refer to this manipulation as the clutter condition.

2.1.3. Procedure

The procedure (see Figure 5) began with an assessment of the participants' visual acuity and a preliminary questionnaire. This questionnaire covered demographic data, technology affinity using the Affinity for Technology Interaction Scale [30], and the pre-questionnaire of the Technology Usage Inventory [31]. Subsequently, the experimental trials were conducted, followed by the post-questionnaire of the Technology Usage Inventory, along with open questions exploring the strategies used during the experiment and preferences for specific representation variants. The entire procedure lasted approximately 75 min.
Participants were instructed to imagine themselves seated in a stationary vehicle on the shoulder, ready to merge onto the highway. They were to observe the other vehicles approaching on the near lane. A given vehicle was visible for 1.5 s while approaching at a constant speed before disappearing from the monitor. Participants had to estimate when the approaching vehicle would have reached their position (prediction motion paradigm) and indicated this moment with a keypress. After an estimate was provided, a new vehicle was placed in the scene; upon another keypress, the trial began and the vehicle started moving at a constant speed. No audio was presented during the experiment.
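To give a feel for the task geometry, the following sketch (illustrative only; speeds and TTC values are those of the design described in the next section) computes how far away the approaching car is at the moment it disappears, assuming a stationary observer and constant speed:

```python
KMH_TO_MS = 1 / 3.6  # conversion from km/h to m/s

def occlusion_gap_m(speed_kmh: float, actual_ttc_s: float) -> float:
    """Distance between the approaching car and the observer's position
    at the moment of occlusion (stationary observer, constant speed)."""
    return speed_kmh * KMH_TO_MS * actual_ttc_s

print(f"{occlusion_gap_m(50, 0.5):.1f} m")   # 6.9 m: car vanishes close by
print(f"{occlusion_gap_m(100, 2.5):.1f} m")  # 69.4 m: car vanishes far away
```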

2.1.4. Design

The experiment used a five-factorial within-subjects design. (1) The side of the participants' own vehicle could be either visible or invisible in the rear-view monitor, as would be the case if a classical rear-view mirror had been rotated to a greater or lesser degree (a visualization is depicted in Figure 6); participants were not explicitly informed about this manipulation. (2) The size and type of the approaching vehicle varied at three levels (truck, van, or SUV), as depicted in Figure 7. (3) The speed of the approaching vehicle was 50, 75, or 100 km/h, and (4) the actual TTC could assume five values (0.5, 1.0, 1.5, 2.0, 2.5 s). Most importantly, (5) the visual clutter had three levels (see Table 1): the scene could be displayed with full cues, including detailed road features and objects in the environment; with reduced clutter, in which road markings were maintained; or with the approaching car presented in isolation on a gray background.
The visibility of the participants' own vehicle and the clutter condition were blocked. In the first half of the experiment, a portion of the participants' own vehicle was always visible in the CMS monitor, whereas in the second half, it was always invisible. The orders of the clutter levels were counterbalanced among participants. All other factors were randomized to produce different orders within any given block for each participant. This combination led to a total of six blocks (three degrees of clutter × two visibility options of the participants' own car). Each block contained the fully crossed combinations of approaching vehicle type, speed, and TTC (45 in all), each presented twice, resulting in 90 trials per block; across the six blocks, this yielded 270 unique conditions and 540 trials in total. These were preceded by two training trials to familiarize the participants with the task and procedure. The recorded dependent variable was the time until the participants pressed the key.
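A compact sketch of the factorial structure (factor levels as listed above; the enumeration merely reproduces the trial counts):

```python
from itertools import product

vehicle_types = ["truck", "van", "SUV"]
speeds_kmh = [50, 75, 100]
actual_ttcs_s = [0.5, 1.0, 1.5, 2.0, 2.5]
clutter_levels = ["full information", "reduced clutter", "isolated target"]
reference_visible = [True, False]

# Within each block: vehicle type x speed x TTC fully crossed, shown twice.
unique_per_block = len(list(product(vehicle_types, speeds_kmh, actual_ttcs_s)))
blocks = len(clutter_levels) * len(reference_visible)

print(unique_per_block)               # 45 unique combinations per block
print(2 * unique_per_block)           # 90 trials per block
print(blocks)                         # 6 blocks
print(2 * unique_per_block * blocks)  # 540 trials in total
```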

2.2. Results

In the experiment, we investigated how information reduction or clutter removal affects the estimation of time-to-contact. For each trial, the signed estimation error (estimated TTC − actual TTC) was computed and subsequently aggregated for each combination of participant and experimental condition. To test the hypotheses, we conducted a repeated-measures ANOVA (rmANOVA) on the mean signed TTC estimation errors, the full results of which can be found in Appendix A. All statistical analyses are interpreted at a significance level of α = .05. Where applicable, we conducted pairwise paired-sample t-tests with Bonferroni correction for follow-up comparisons.
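As a minimal sketch of this preprocessing step (pandas; the file and column names are hypothetical, and the rmANOVA itself was then run on the resulting cell means):

```python
import pandas as pd

# Hypothetical file and column names; one row per trial.
trials = pd.read_csv("ttc_trials.csv")

# Signed estimation error per trial (positive = overestimation).
trials["signed_error_s"] = trials["estimated_ttc_s"] - trials["actual_ttc_s"]

# One mean per participant and design cell, the unit of analysis for the rmANOVA.
cell_means = (
    trials.groupby(["participant", "actual_ttc_s", "speed_kmh", "vehicle_type",
                    "reference_visible", "clutter_condition"])["signed_error_s"]
          .mean()
          .reset_index()
)
```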
Across all conditions, the estimated contact time increased with higher actual TTC and higher velocity, an effect particularly pronounced in their interaction (actual TTC × velocity) (see Table A1, Appendix A). This can be attributed to well-established phenomena such as the size–arrival effect or the distance bias, as varying the actual TTC and velocity results in a different visual angle and distance of the presented vehicle at occlusion. The factors vehicle type, reference visibility, and clutter condition are of particular importance for our research question; thus, their main and interaction effects from the rmANOVA are presented in Table 2.
Comparisons of TTC estimates across vehicle types indicate that, on average, the van elicited the longest TTC estimates (M = 1.94 s, SD = 0.75 s), followed by the SUV (M = 1.91 s, SD = 0.74 s) and the truck with the shortest estimates (M = 1.88 s, SD = 0.74 s). This main effect of vehicle type was confirmed in the rmANOVA (see Table 2). Post hoc tests reveal significant differences only between the van and SUV (t(26) = 4.06, pbonf = .001, dz = .78) and between the van and truck (t(26) = 4.89, pbonf < .001, dz = .94), but not between the truck and SUV (t(26) = 2.39, pbonf = .073, dz = .46). The effect becomes more pronounced with increasing actual TTC, as evidenced by the significant interaction of actual TTC × vehicle type (see Table A1, Appendix A, and the visualization in Figure 8). Remarkably, it is the van that shows the longest average TTC estimates compared to the SUV and truck. This is contrary to previous findings, as it cannot be attributed to a simple size–arrival effect. Instead, heuristics based on general assumptions about the vehicle types and their typical size may have informed TTC decisions. A possible explanation emerges when considering the shape of the presented vehicles: both the SUV and the truck have a relatively straight front hood, whereas the van features a sloped front. Given the minor differences in TTC estimation, this disparity in hood configuration could lead to a perceived difference in distance, which in turn would influence the TTC estimate.
Across the board, participants tended to overestimate TTC. This should not be interpreted as risky behavior but is likely due to the general setup in a safe laboratory environment. The significant main effect of reference visibility indicates that TTC estimates were on average larger when the reference was not visible (M = 1.98 s, SD = 0.82 s) than when it was visible (M = 1.84 s, SD = 0.70 s; F(1, 26) = 5.37, p = .029, η²p = .17). This pattern is found in all three clutter conditions (see Figure 9). Descriptively, there are slight differences among the three clutter conditions, with TTC estimates, on average, being largest for the schematic representation (M = 1.95 s, SD = 0.75 s), slightly shorter for the full information condition (M = 1.93 s, SD = 0.71 s), and shortest for the isolated target condition (M = 1.86 s, SD = 0.79 s). However, these three variants do not differ significantly (see Table 2; F(2, 52) = 2.78, p = .071, η²p = .10). Thus, the removal of information in the two reduced variants (reduced clutter and isolated target) had no substantial impact on TTC estimation.
When asked about their preferred display variant, 17 participants expressed a preference for the condition without clutter removal (full information), stating that it was closest to their previous driving experience. Six participants favored the reduced information condition, noting that they found the scene less distracting while it still provided orientation information. Three participants liked the normal and the reduced information conditions equally, citing the significance of having the background and road markings visible. Only one person preferred the isolated target condition, because it reminded her of the representation in the Tesla vehicle she uses and because it allowed her to focus better on the moving vehicle.
Among the 27 participants included in the analysis, 11 reported not having noticed that their own vehicle was not visible in the CMS monitor during the second half of the experiment. Participants were asked to rate, on an 11-point scale, how in retrospect they perceived the presence of their own vehicle in the monitor for the TTC estimation task (ranging from 1, "impeded the task", to 11, "facilitated the task"). Those who reported noticing the absence of their own vehicle gave an average rating of 8.50 (SD = 3.08), while those who did not notice gave an average rating of 7.18 (SD = 2.04). This difference was not statistically significant (t(25) = 1.34, p = .193, d = .49).

3. Discussion

We have provided a test case to reflect upon synthetic vision systems that have the potential to enter our everyday world. For a long time, design concepts for such systems have sat in the drawers of designers of futuristic displays meant to be used in high-tech airplanes or military vehicles. However, as the automotive industry moves toward ever more sophisticated driver assistance systems, and as the technological components underlying synthetic displays become more affordable, they are beginning to enter everyday devices. We picked CMS to explore whether a feature or side-effect of synthetic vision systems could be a show-stopper for the broader implementation of such systems in cars. Because of limitations in computing power on the one hand, and the limits of human information-processing bandwidth on the other, the artificial world displayed to the user may have to be abbreviated or cleansed of unnecessary and potentially interfering detail. In other words, we explored whether information reduction is tolerated by observers as they perform a complex time-critical task. If observers do not rely on the removed information to make their decisions, the removal need not be detrimental and may even be advantageous. We referred to the non-functional information as clutter, well aware that what counts as clutter and what counts as critical information depends to a large extent on the task at hand. The problem is that, in many cases, it may be very difficult to determine which information is clutter. Therefore, we chose the well-researched paradigm of TTC estimation as a test case, for which it is known that very little information is indispensable. To estimate when a car approaching in the mirror will pass the observer (the driver), little detail is required, and clutter removal should in principle not entail relevant information loss. And this is indeed what we found: clutter removal had no detrimental effects.
Ideally, the elimination of clutter could improve or speed up decisions by reducing workload. Since we neither took workload measurements nor manipulated workload by making the task difficult or adding a secondary task, we can only assume that the decluttered condition requires fewer resources than the full-cues condition. The current data encourage experiments that explore this potential advantage, but they do not speak to the extent to which it will actually materialize. We were not expecting to find a benefit of clutter removal for TTC estimation; what we did demonstrate is that clutter removal had no negative effects on TTC estimation. This robustness to information removal should be useful in situations of high task load when driving, and it might even be required in more demanding contexts, such as teleoperation. Reducing the amount of information also reduces the bandwidth required to transmit the camera image, thereby improving latency for the operator.
Our CMS study can be considered a proof of concept that synthetic vision can be applied to time-critical traffic displays when driving. The information reduction required to process and display information with sufficient speed put observers at no disadvantage in terms of objective performance. However, performance was not always in agreement with the observers' subjective preference: the majority of our participants preferred the full view, and a substantial minority preferred the reduced-clutter view. Interestingly, the Tesla-affine participant even preferred to view the targets in isolation. One could take this as an indication of high adaptability to such novel displays.
Will these results generalize to other synthetic displays? This question is very hard to answer because it requires an understanding of the decision base that the display is designed to provide. For the TTC task we picked for our investigation, the decision base can be narrowed down to the relative expansion rate of the image of the approaching car on the observer's retina, or to the local image velocity of the car; the former would be relevant if observers use the optical variable tau [25]. In both cases, the decluttered display contains all the information required to perform the task. In more complex cases, where drivers have to judge the temporal window for passing, e.g., on a multi-lane highway where several vehicles have to be taken into consideration, the removal of most scenery may be advantageous, but the removal of a vehicle may be fatal. Hence, a more general synthetic vision system must be able to draw on criteria specifying which information can safely be omitted. At the same time, other driving contexts may benefit from the addition of information, which may again be sketchy but does add to the complexity of the system. For instance, when driving on a country road or when preparing a turn at an intersection, some detail in the far distance ahead may be critical. Feeding the synthetic system with information that provides improved usability compared to the view out of the window is certainly a daunting task and will require full access to the information of the auto-pilot. An entire human-factors research program can be envisioned to determine just which portions of the auto-pilot data should be visualized in a given synthetic display in the vehicle.
Why did observers overestimate TTC throughout? Inspecting Figure 9, we see that the overestimation is fairly constant and does not vary with condition; that is, a constant positive error of about 300 ms was made in all cases. This bias is not modulated by the experimental factors and thus represents a mere baseline shift. Such shifts are often found in prediction motion tasks, in particular if the object moves rather fast. For instance, we previously found a TTC overestimation bias for vehicles traveling at and above 50 km/h [12]. Another reason for the consistent overestimation could be that observers had a different sense of the length and proportions of their own car in the experiment. True TTC was defined as the point in time when the front bumpers of the two cars were exactly aligned. Thus, if observers took the invisible front of their car to be farther ahead of them (or the depicted rear end of the car to be farther back) than was appropriate for the passenger car used as a model, overestimation would result. Note that this constant error is not relevant to the hypotheses regarding the clutter manipulation. The finding that the visual reference of the participants' own vehicle reduced the TTC estimates, in contrast, might be interpreted as the reference information reducing a potential overestimation of one's own vehicle size. The observed impact of vehicle type on TTC estimation has implications for the development of a synthetic CMS. On the one hand, the potential use of inaccurate heuristics could be mitigated by replacing the actual vehicle in the synthetic display with an alternative model or symbol. On the other hand, our experimental results suggest that the selection and representation of vehicles (and possibly other road users and obstacles) needs particular attention to ensure appropriate TTC estimation.

3.1. Limitations and Technical Challenges

It is important to acknowledge some characteristics that might limit the generality of the results observed in this study for driver assistance. Claims about the effects of information reduction can only be made for controlled experimental settings; thus, we used a narrowly circumscribed driving context. For this context, the experiment at hand represents a first proof of concept that information reduction need not be prohibitive. The associated null finding regarding the removal of clutter is encouraging, but we cannot estimate whether and to what extent it will generalize to dynamic and more complex driving scenarios, where the driver's demands and workload are higher; here, clutter removal could have benefits. Future research should therefore investigate its effects in more dynamic and realistic driving situations. Furthermore, the study was conducted with university students, which is not representative of the population of vehicle owners in Germany, who range from 17 to over 80 years of age. However, as the perception of TTC is a rather basic task, we assume that the relative differences observed here still apply to older drivers. This should, of course, be validated by future research using a broader sample.
Finally, we acknowledge that there are other important questions central to the development of synthetic displays, such as the required image resolution and the maximal processing time for the modification of synthetic displays. Image delays exceeding a few milliseconds may be prohibitive and thus put a limit on the extent to which different information sources can be incorporated, altered, or removed. All of this will have to be taken into account in order to provide a reliable SVS that ensures optimal support for drivers. The current experiment constitutes a promising start in exploring the avenue of clutter removal in SVS.

3.2. Future Developments and Practical Implications

We have provided a synthetic visual world which, in the full-cue condition, was rich in detail and thus similar to the view through the rear-view mirror of a real car. In the reduced-cue conditions, we removed clutter in a uniform fashion. One could build on this and introduce adaptive clutter removal, which adapts to the purposes of the driver and changes continuously based on the traffic assessment of the auto-pilot. For instance, if the driver's destination is known to the artificial vision system, the intelligent augmentation system could enhance the CMS display in a dynamic fashion that incorporates the planning horizon of the navigational system, anticipated interfering objects, the sensed acceleration of cyclists, and so forth.
The enhancement of CMS displays in cars clearly seems worthwhile, in particular at night or during low-visibility conditions caused by fog or rain. Information from infra-red cameras could be used to render objects in the CMS such that they appear as they would in clear weather. Lidar-based object recognition could be used to detect objects, such as pedestrians or animals that may enter the road, valuable milliseconds or seconds before they would be detected in normal viewing. This could be done by coloring the critical objects, increasing their contrast, making them blink, etc., and, at the same time, by removing, down-sizing, or down-contrasting objects entirely irrelevant to the driver. In principle, such image enhancement could also be applied to the entire view out the window by replacing the windshield with a large monitor. It is conceivable that drivers could use such a purely synthetic display to make night-driving as pleasant and safe as driving during daytime. Precursors of synthetic displays are already finding their way into assistance systems for semi-automated driving scenarios. For instance, on the monitor used to interact with the car when driving at SAE Level 3, vector-based lanes can be displayed to facilitate the restoration of situation awareness when switching from an attention-demanding task back to driving operations upon a take-over request. Such a display would project colored traces on top of a video-based view of the traffic scene as captured by a front-facing camera. The traces indicate where other vehicles are likely to move within the next seconds, and whether these vehicles are accelerating or decelerating. Such external visualizations are particularly important as it is rather impossible to maintain situation awareness during a demanding secondary task, even when regularly observing traffic [32]. From there, it is still quite a long way to a truly synthetic display replacing the out-the-window view, but it is conceivable how future synthetic displays may look.
These and many other features currently envisioned or in beta testing by car manufacturers contribute to improved passenger acceptance because they reveal, to some extent, how the automated system works by visualizing information that is considered by the automation and by depicting the future states anticipated by the auto-pilot. Such displays not only enhance the overall user experience but also likely increase the user's trust in these systems, as they provide some insight into the considerations of the auto-pilot.
In our opinion, the next crucial steps to improve synthetic vision systems in cars are twofold. First, the information from non-visual sensors, such as ultrasound, Lidar, etc., needs to be fused with the visual image to provide one visual world. Second, this visual world needs to be displayed on one large display, as opposed to the current multiple separate worlds displayed through the window (normal viewing), on top of the window (head-up display), in a mirror, and on separate monitors (CMS) or on the main display. Ultimately, one large display in lieu of the front windshield might be the solution.
In conclusion, our study has shown that time-critical actions relevant for driving can be performed well, even when the display upon which they are based merely provides severely reduced information. Using a TTC-estimation task, which is highly relevant for driving, we have demonstrated that a synthetic display with reduced clutter serves the purpose of judging the remaining time that is available for a potential lane-change. In other words, we provide a proof of concept that paradoxical enhancement can work, and that displays can be enhanced by dropping irrelevant information. This finding bodes well for the further development of synthetic vision systems. It also emphasizes that the context-specific factors of these systems require thorough further investigation.

Author Contributions

Conceptualization, C.B. and H.H.; software, E.M.W.; formal analysis, E.M.W.; investigation, E.M.W.; data curation, E.M.W.; writing—original draft preparation, E.M.W. and H.H.; writing—review and editing, E.M.W., H.H. and C.B.; visualization, E.M.W.; supervision, H.H.; project administration, E.M.W.; funding acquisition, H.H. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), grant number 464850937.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon request.

Acknowledgments

We thank Agnes Münch for assistance in programming the traffic scenario.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Results of the rmANOVA on the mean signed TTC estimation errors.
| Effect | dfNum | dfDen | ε̃ | F | p | η²p |
|---|---|---|---|---|---|---|
| Actual TTC | 4 | 104 | 0.28 | 2.34 | .135 | .08 |
| Velocity | 2 | 52 | 0.55 | 94.10 | <.001 | .78 |
| Vehicle type | 2 | 52 | 0.89 | 15.76 | <.001 | .38 |
| Reference visibility | 1 | 26 |  | 5.37 | .029 | .17 |
| Clutter condition | 2 | 52 | 1.00 | 2.78 | .071 | .10 |
| Actual TTC × velocity | 8 | 208 | 0.74 | 29.86 | <.001 | .54 |
| Actual TTC × vehicle type | 8 | 208 | 0.77 | 3.64 | .002 | .12 |
| Velocity × vehicle type | 4 | 104 | 0.99 | 0.56 | .690 | .02 |
| Actual TTC × reference visibility | 4 | 104 | 0.54 | 0.22 | .817 | .01 |
| Velocity × reference visibility | 2 | 52 | 0.91 | 0.49 | .596 | .02 |
| Vehicle type × reference visibility | 2 | 52 | 1.00 | 0.77 | .470 | .03 |
| Actual TTC × clutter condition | 8 | 208 | 0.68 | 1.19 | .314 | .04 |
| Velocity × clutter condition | 4 | 104 | 0.83 | 1.11 | .354 | .04 |
| Vehicle type × clutter condition | 4 | 104 | 1.00 | 2.43 | .053 | .09 |
| Reference visibility × clutter condition | 2 | 52 | 0.90 | 0.51 | .587 | .02 |
| Actual TTC × velocity × vehicle type | 16 | 416 | 0.62 | 1.14 | .334 | .04 |
| Actual TTC × velocity × reference visibility | 8 | 208 | 0.71 | 0.82 | .552 | .03 |
| Actual TTC × vehicle type × reference visibility | 8 | 208 | 0.87 | 0.84 | .556 | .03 |
| Velocity × vehicle type × reference visibility | 4 | 104 | 1.00 | 0.28 | .889 | .01 |
| Actual TTC × velocity × clutter condition | 16 | 416 | 0.63 | 1.52 | .131 | .06 |
| Actual TTC × vehicle type × clutter condition | 16 | 416 | 0.94 | 0.95 | .513 | .04 |
| Velocity × vehicle type × clutter condition | 8 | 208 | 1.00 | 1.29 | .249 | .05 |
| Actual TTC × reference visibility × clutter condition | 8 | 208 | 0.62 | 0.79 | .556 | .03 |
| Velocity × reference visibility × clutter condition | 4 | 104 | 0.96 | 0.91 | .457 | .03 |
| Vehicle type × reference visibility × clutter condition | 4 | 104 | 0.86 | 1.30 | .279 | .05 |
| Actual TTC × velocity × vehicle type × reference visibility | 16 | 416 | 0.70 | 1.18 | .302 | .04 |
| Actual TTC × velocity × vehicle type × clutter condition | 32 | 832 |  | 1.47 | .045 | .05 |
| Actual TTC × velocity × reference visibility × clutter condition | 16 | 416 | 0.81 | 0.97 | .479 | .04 |
| Actual TTC × vehicle type × reference visibility × clutter condition | 16 | 416 | 0.91 | 1.22 | .254 | .05 |
| Velocity × vehicle type × reference visibility × clutter condition | 8 | 208 | 0.97 | 1.42 | .193 | .05 |
| Actual TTC × velocity × vehicle type × reference visibility × clutter condition | 32 | 832 | 0.28 | 0.97 | .521 | .04 |

Displayed are uncorrected numerator degrees of freedom (dfNum), denominator degrees of freedom (dfDen), the Huynh–Feldt multiplier for sphericity correction (ε̃), F-values, p-values, and partial η² (η²p).
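For readers who wish to retrace the correction described in the note, it amounts to a one-line computation: both degrees of freedom are multiplied by ε̃ before the p-value is read off the F distribution. A minimal sketch using SciPy follows; the example values are taken from the Actual TTC × velocity row of Table A1.

```python
from scipy import stats

def hf_corrected_p(F: float, df_num: int, df_den: int, eps: float) -> float:
    """p-value after scaling both degrees of freedom by the Huynh-Feldt epsilon."""
    return stats.f.sf(F, df_num * eps, df_den * eps)

# Actual TTC x velocity: F = 29.86, dfs = 8/208, eps = 0.74 -> p < .001
print(hf_corrected_p(29.86, 8, 208, 0.74))
```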

References

1. Debernardis, S.; Fiorentino, M.; Gattullo, M.; Monno, G.; Uva, A.E. Text Readability in Head-Worn Displays: Color and Style Optimization in Video versus Optical See-Through Devices. IEEE Trans. Vis. Comput. Graph. 2014, 20, 125–139.
2. Moacdieh, N.; Sarter, N. Display Clutter: A Review of Definitions and Measurement Techniques. Hum. Factors J. Hum. Factors Ergon. Soc. 2015, 57, 61–100.
3. Kim, K.; Jacko, J.; Salvendy, G. Menu Design for Computers and Cell Phones: Review and Reappraisal. Int. J. Hum.–Comput. Interact. 2011, 27, 383–404.
4. Gregory, R.L. Mirrors in Mind; W.H. Freeman/Spektrum: London, UK, 1997.
5. Fisher, J.A.; Galer, I.A.R. The effects of decreasing the radius of curvature of convex external rear view mirrors upon drivers’ judgements of vehicles approaching in the rearward visual field. Ergonomics 1984, 27, 1209–1224.
6. Higashiyama, A.; Shimono, K. Mirror vision: Perceived size and perceived distance of virtual images. Percept. Psychophys. 2004, 66, 679–691.
7. Flohr, L.A.; Valiyaveettil, J.S.; Krüger, A.; Wallach, D.P. Prototyping Autonomous Vehicle Windshields with AR and Real-Time Object Detection Visualization: An On-Road Wizard-of-Oz Study. In Proceedings of the 2023 ACM Designing Interactive Systems Conference, New York, NY, USA, 10–14 July 2023; ACM: Pittsburgh, PA, USA, 2023; pp. 2123–2137.
8. Wu, J.; Wu, Z.; Bao, J. Study on the impact of indirect driving system on mental workload and task performance of driver. In Proceedings of the 2013 IEEE International Conference on Vehicular Electronics and Safety, Dongguan, China, 28–30 July 2013; pp. 53–56.
9. Van Erp, J.B.; Padmos, P. Image parameters for driving with indirect viewing systems. Ergonomics 2003, 46, 1471–1499.
10. Bernhard, C.; Reinhard, R.; Kleer, M.; Hecht, H. A Case for Raising the Camera: A Driving Simulator Test of Camera-Monitor Systems. Hum. Factors J. Hum. Factors Ergon. Soc. 2023, 65, 321–336.
11. Prinzel, L.J.; Kramer, L.J. Synthetic vision systems. In International Encyclopedia of Ergonomics and Human Factors; Taylor & Francis: Abingdon, UK, 2006; pp. 1264–1271.
12. Wögerbauer, E.M.; Hecht, H.; Wessels, M. Camera–Monitor Systems as An Opportunity to Compensate for Perceptual Errors in Time-to-Contact Estimations. Vision 2023, 7, 65.
13. Livatino, S.; Guastella, D.C.; Muscato, G.; Rinaldi, V.; Cantelli, L.; Melita, C.D.; Caniglia, A.; Mazza, R.; Padula, G. Intuitive Robot Teleoperation through Multi-Sensor Informed Mixed Reality Visual Aids. IEEE Access 2021, 9, 25795–25808.
14. Traub, J.; Sielhorst, T.; Heining, S.-M.; Navab, N. Advanced Display and Visualization Concepts for Image Guided Surgery. J. Disp. Technol. 2008, 4, 483–490.
15. Schnell, T.; Kwon, Y.; Merchant, S.; Etherington, T. Improved Flight Technical Performance in Flight Decks Equipped with Synthetic Vision Information System Displays. Int. J. Aviat. Psychol. 2004, 14, 79–102.
16. Hong, Z.; Zhang, Q.; Su, X.; Zhang, H. Effect of virtual annotation on performance of construction equipment teleoperation under adverse visual conditions. Autom. Constr. 2020, 118, 103296.
17. Lorenz, S. Design of a teleoperation user interface for shared control of highly automated agricultural machines. Proc. Des. Soc. 2023, 3, 1277–1286.
18. Hoskere, V.; Narazaki, Y.; Spencer, B.F. Physics-Based Graphics Models in 3D Synthetic Environments as Autonomous Vision-Based Inspection Testbeds. Sensors 2022, 22, 532.
19. Lee, S.C.; Kim, Y.W.; Ji, Y.G. Effects of visual complexity of in-vehicle information display: Age-related differences in visual search task in the driving context. Appl. Ergon. 2019, 81, 102888.
20. Treisman, A.M.; Gelade, G. A feature-integration theory of attention. Cogn. Psychol. 1980, 12, 97–136.
21. Ellis, G.; Dix, A. A Taxonomy of Clutter Reduction for Information Visualisation. IEEE Trans. Vis. Comput. Graph. 2007, 13, 1216–1223.
22. Wickens, C.D. Multiple resources and performance prediction. Theor. Issues Ergon. Sci. 2002, 3, 159–177.
23. Ziakopoulos, A.; Theofilatos, A.; Papadimitriou, E.; Yannis, G. A meta-analysis of the impacts of operating in-vehicle information systems on road safety. IATSS Res. 2019, 43, 185–194.
24. Edquist, J. The Effects of Visual Clutter on Driving Performance. Ph.D. Thesis, Monash University, Melbourne, Australia, 2008.
25. Lee, D.N. A Theory of Visual Control of Braking Based on Information about Time-to-Collision. Perception 1976, 5, 437–459.
26. DeLucia, P.R. Chapter 11 Multiple sources of information influence time-to-contact judgments: Do heuristics accommodate limits in sensory and cognitive processes? In Advances in Psychology; Elsevier: Amsterdam, The Netherlands, 2004; pp. 243–285.
27. Keshavarz, B.; Campos, J.L.; DeLucia, P.R.; Oberfeld, D. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults. Atten. Percept. Psychophys. 2017, 79, 929–944.
28. Kim, N.-G.; Grocki, M.J. Multiple sources of information and time-to-contact judgments. Vis. Res. 2006, 46, 1946–1958.
29. Gray, R.; Regan, D. Simulated self-motion alters perceived time to collision. Curr. Biol. 2000, 10, 587–590.
30. Franke, T.; Attig, C.; Wessel, D. A Personal Resource for Technology Interaction: Development and Validation of the Affinity for Technology Interaction (ATI) Scale. Int. J. Hum.–Comput. Interact. 2019, 35, 456–467.
31. Kothgassner, O.D.; Felnhofer, A.; Hauk, N.; Kastenhofer, E.; Gomm, J.; Kryspin-Exner, I. TUI (Technology Usage Inventory) Manual; ICARUS: Vienna, Austria, 2012.
32. Röckel, C.; Hecht, H. Regular looks out the window do not maintain situation awareness in highly automated driving. Transp. Res. Part F Traffic Psychol. Behav. 2023, 98, 368–381.
Figure 1. View of a CMS currently marketed as an option by a car manufacturer (Audi e-tron). Note. From Virtuelle Außenspiegel, by AUDI AG (2018). https://www.audi-mediacenter.com/de/fotos/detail/virtuelle-aussenspiegel-70735 (accessed on 21 November 2023).
Figure 2. A synthetic display for a pilot. The visual landscape representation is based on a terrain database and is combined with sensor data as used in a primary flight display. Note. From An image of Honeywell’s synthetic view for pilots, by Honeywell (2012). CC-BY-SA 3.0. https://commons.wikimedia.org/wiki/File:Synthetic_Vision.JPG (accessed on 21 November 2023).
Figure 3. A taxonomy of visual information displays as they apply to the problem space of rearward perception when driving. At the first level, we distinguish natural and digital displays. Both capture visual information in very different ways, as specified at the second level. The next level describes the potential to alter the displayed information. Note that some natural views and all digital views can be enhanced. In contrast, only synthetic displays allow for the removal of potentially distracting information.
Figure 4. Experimental setup, simulating the driver’s view out the window and of the rear-view monitor (SVS).
Figure 5. Procedure of the laboratory experiment.
Figure 6. The scene with the side of the participants’ own vehicle visible in the monitor (left panel) and without visibility of their own vehicle (right panel). With standard mirrors, both views are possible depending on how much the mirror is tilted.
Figure 7. The three vehicles (truck, van, and SUV) presented in the experiment.
Figure 8. Mean estimated TTC as a function of the actual TTC, separately for each vehicle type (as indicated by the symbols differing in color and shape). The gray dashed diagonal represents perfect TTC estimation. Error bars indicate ± 1 SE of the mean.
Figure 9. Mean estimated TTC as a function of the actual TTC. Each panel represents one clutter condition. Blue symbols indicate trials without visibility of the subject’s vehicle; red symbols indicate trials in which it was visible. The gray dashed line represents perfect TTC estimation. Error bars indicate ± 1 SE of the mean.
Table 1. Overview of the representation variants displayed in the CMS monitor. The target vehicle is depicted at a distance of 28 m (measured between the front of both vehicles). The shown vehicles and vehicle parts remained visually unchanged in all three variants.

| Clutter Condition | Description | Example Picture |
|---|---|---|
| Full cues | A realistic representation with all details of the surroundings (identical to the forward view) is shown. | (image) |
| Reduced clutter | Irrelevant objects in the environment (such as guardrails, shrubberies, and clouds) are removed, whereas other elements of the environment (such as the road, markings, and the sky) are depicted with uniform colors without texture. | (image) |
| Isolated target | Only a uniform gray background is shown, as well as the information of the presented target vehicle itself (such as its optical expansion). | (image) |
Table 2. Excerpt of the results of the rmANOVA on the mean signed TTC estimation errors for the factors vehicle type, reference visibility, and clutter condition. The results of the rmANOVA with all factors can be found in Appendix A.
| Effect | dfNum | dfDen | ε̃ | F | p | η²p |
|---|---|---|---|---|---|---|
| Vehicle type | 2 | 52 | 0.89 | 15.76 | <0.001 | 0.38 |
| Reference visibility | 1 | 26 |  | 5.37 | 0.029 | 0.17 |
| Clutter condition | 2 | 52 | 1.00 | 2.78 | 0.071 | 0.10 |
| Vehicle type × reference visibility | 2 | 52 | 1.00 | 0.77 | 0.470 | 0.03 |
| Vehicle type × clutter condition | 4 | 104 | 1.00 | 2.43 | 0.053 | 0.09 |
| Reference visibility × clutter condition | 2 | 52 | 0.90 | 0.51 | 0.587 | 0.02 |
| Vehicle type × reference visibility × clutter condition | 4 | 104 | 0.86 | 1.30 | 0.279 | 0.05 |

Displayed are uncorrected numerator degrees of freedom (dfNum), denominator degrees of freedom (dfDen), the Huynh–Feldt multiplier for sphericity correction (ε̃), F-values, p-values, and partial η² (η²p).