Article

Driving Environment Perception Based on the Fusion of Vehicular Wireless Communications and Automotive Remote Sensors

1 Department of Electronics and Computer Engineering, Hanyang University, Seoul 04763, Korea
2 Department of Automotive Electronics and Control Engineering, Hanyang University, Seoul 04763, Korea
* Author to whom correspondence should be addressed.
Sensors 2021, 21(5), 1860; https://doi.org/10.3390/s21051860
Submission received: 18 January 2021 / Revised: 28 February 2021 / Accepted: 1 March 2021 / Published: 7 March 2021
(This article belongs to the Special Issue Cooperative Perception for Intelligent Vehicles)

Abstract

Driving environment perception for automated vehicles is typically achieved by the use of automotive remote sensors such as radars and cameras. A vehicular wireless communication system can be viewed as a new type of remote sensor that plays a central role in connected and automated vehicles (CAVs), which are capable of sharing information with each other and also with the surrounding infrastructure. In this paper, we present the design and implementation of driving environment perception based on the fusion of vehicular wireless communications and automotive remote sensors. A track-to-track fusion of high-level sensor data and vehicular wireless communication data was performed to accurately and reliably locate the remote target in the vehicle surroundings and predict the future trajectory. The proposed approach was implemented and evaluated in vehicle tests conducted at a proving ground. The experimental results demonstrate that using vehicular wireless communications in conjunction with the on-board sensors enables improved perception of the surrounding vehicle located at varying longitudinal and lateral distances. The results also indicate that vehicle future trajectory and potential crash involvement can be reliably predicted with the proposed system in different cut-in driving scenarios.

1. Introduction

Automated driving has generated increasing interest in recent decades due to its potential to address some of the most challenging issues faced by people and communities across the world. Road traffic crashes account for 1.35 million deaths a year worldwide, and they are the leading cause of death among children and young adults aged 5 to 29 years [1]. In addition, many people in different parts of the world still lack access to mobility, particularly in rural areas, and the number of older people who are no longer physically capable of driving has been increasing globally at the fastest rate recorded [2]. Another important issue in large metropolitan areas throughout the world is excessive traffic congestion, which is directly related to the growing number of vehicles on the road [2,3]. These challenges can be addressed by successful implementation of automated driving on public roads. With automated vehicles it is possible to achieve greater road safety, universal access to mobility, and higher transportation efficiency [4].
SAE International defines six levels of driving automation, ranging from no driving automation (level 0) to full driving automation (level 5) [5]. In level 0–2 systems, the driver is expected to respond to any evident vehicle system failure. A level 0 system does not provide any driving automation functions, while a level 1 system supports the driving task by performing either the longitudinal or the lateral motion control. A level 2 system performs both the longitudinal and the lateral motion control when engaged, and the driver is expected to supervise the automation system and take over the driving task whenever necessary to maintain safe driving. In level 3–5 systems, the driver determines whether to engage the automated driving system, and when engaged, the system performs the entire driving task. A level 3 system offers conditional driving automation, which permits the engagement of the automated driving functions only within its operational design domain (ODD). While the system is engaged, the driver is expected to be responsive to a request to intervene and to any system failures, and to be ready to perform the driving task fallback in a timely manner. Similarly, a level 4 system also permits automated driving engagement only within its ODD, but the driver is not expected to perform the driving task fallback and becomes a passenger of the vehicle while the system is engaged. Finally, a level 5 system permits automated driving engagement under all driver-manageable on-road conditions, without limitations on the ODD.
Despite the considerable interest and effort devoted to automated driving in the past decades, most experts agree that level 5 systems are still decades away from becoming a reality on public roads [6]. At the moment, most automakers provide level 1 and 2 automation functions in the production vehicles. One of the most widely known level 2 systems is the Tesla Autopilot [7]. It has been stated that Tesla vehicles had logged a total of 3 billion miles with the Autopilot engaged as of February 2020 [8]. Tech companies such as Waymo and Uber have been working on the development of level 4 systems for ride-hailing services that involve a fleet of automated vehicles operating only within its ODD (e.g., specific geographical locations and appropriate weather conditions) [9]. The driving automation functions of automobiles require accurate and reliable perception of the surrounding environment, which is typically achieved by the use of remote sensors, such as radars, cameras, and lidars [10,11,12,13,14]. However, perception is a very challenging task due to the highly dynamic and complex nature of driving and traffic environment as well as varying lighting and weather conditions that affect the performance of the on-board sensors [15]. Perception errors in automated vehicles have resulted in a number of fatal crash incidents, and here we discuss some of the notable examples. In May 2016, the camera system of a Tesla Model S failed to distinguish the white side of a tractor trailer against the bright sky, and the vehicle hit the side of the trailer and passed underneath it, resulting in the first case of a traffic fatality involving automated vehicle technology [16]. In March 2018, the lane-keeping system of a Tesla Model X steered into a gore area and crashed into an impact attenuator when the perception system failed to recognize faded lane markings [17]. A crash incident that is very similar to the 2016 Tesla Model S crash occurred in March 2019, where a Tesla Model 3 crashed into the side of a tractor trailer and then drove beneath the trailer [18]. An Uber test vehicle was involved in a crash that resulted in a pedestrian fatality in March 2018, where the vehicle hit a pedestrian who was pushing a bicycle across the road at night [19]. The crash was predictable and avoidable, but a series of design flaws in the perception system contributed to the fatal outcome.
There has been increasing interest in connected and automated vehicles (CAVs) in recent years due to their potential to improve road safety, convenience, and energy efficiency [20,21,22]. The full benefits of automated driving can only be achieved when vehicles are capable of communicating and exchanging information with each other and also with the surrounding infrastructure. Vehicular wireless communications, often referred to as vehicle-to-everything (V2X) communications, incorporates various types of communication options depending on the participating entities. Some of the well-known types include vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-pedestrian (V2P), and vehicle-to-network (V2N) communications. With V2X communications, it is possible to overcome the functional and environmental limitations of the on-board sensors. The advantages of V2X communications in terms of road safety and how it can complement and extend perception methods based on automotive remote sensors are described in more detail in [23].
The majority of recent studies on CAVs focus on cooperative adaptive cruise control, cooperative intersection control, and cooperative perception [22]. Although these studies present promising applications, not many studies have examined the benefits of the fusion of on-board remote sensors and cooperative approaches in the context of safety applications. An object association method based on V2V communications and on-board sensors was presented in [24], where the relative sender position and orientation were determined by using point matching algorithms. In [25], the plausibility checking of V2V communication data was implemented based on a multiple object tracking system with a camera sensor. Based on their evaluation, the authors reported that the proposed approach can overcome spoofing attacks if ghost vehicles are located within the camera field-of-view (FOV). A fusion approach based on radar and V2V communication data was suggested in [26], where an improved perception range and more accurate position and velocity estimates were obtained in a car-following scenario. In [27], an object matching algorithm based on V2V communication messages and radar measurements was presented and tested in highway driving scenarios, and the authors suggested that the track-to-track association algorithm can reliably handle the ambiguity issue in object matching. A simulation-based study on cooperative vehicle positioning was presented in [28], where average position errors were calculated for a varying percentage of the vehicles equipped with ranging sensors and V2V communication devices. A high-level fusion approach based on multiple on-board sensors and V2V/V2P communications was presented in [23] for the purpose of providing a timely warning prior to a possible collision. The proposed approach was evaluated in virtual driving environments, and the results demonstrated that reliable environment perception and collision prediction can be achieved by introducing V2X communications, even in scenarios where it is difficult to avoid a collision with existing safety systems based only on on-board sensors.
In this paper, we present the design and implementation of cooperative environment perception based on the fusion of V2X communications and automotive remote sensors. In continuation of our previous work [23], the Kalman-filter-based approach is employed for the high-level fusion of radar, camera, and V2X communication data, and the proposed cooperative approach is tested and evaluated in test-track environments. The experiments are carried out with two test vehicles, where each vehicle is equipped with a 5.9 GHz dedicated short-range communications (DSRC) transceiver along with a global navigation satellite system (GNSS) receiver for the exchange of vehicle state information. The host vehicle is additionally equipped with automotive radar and camera systems for remote sensing of the surrounding objects. The performance of the proposed approach for driving environment perception is evaluated at varying relative distances between the two test vehicles. The positioning accuracies at different longitudinal and lateral target vehicle positions are computed for each remote sensing system as well as for the proposed fusion method. The performance of the trajectory prediction and risk assessment is evaluated in different cut-in driving scenarios where the remote vehicle performs a lane-change maneuver in front of the host vehicle.
The rest of the paper is organized as follows. In Section 2, the overall architecture of the proposed driving environment perception system is described and background information about automotive remote sensors and V2X communications is provided. In Section 3, the proposed method for object state estimation and risk assessment is presented. The experimental results are reported and discussed in Section 4, and conclusions and directions for future research are presented in Section 5.

2. System Overview

2.1. Overall Design of the Proposed System

The overall design of the proposed cooperative environment perception approach based on on-board sensors and vehicular wireless communications is illustrated in Figure 1. The system is designed to serve as a flexible platform that enables reliable vehicle driving environment recognition and to provide appropriate road safety applications. Automotive remote sensors (e.g., radars and cameras) are often connected to the controller area network (CAN) bus such that the data from these sensors along with other data generated by in-vehicle electronic control units (ECUs) are collected in the form of CAN messages. The DSRC messages received with an on-board unit (OBU) are collected via Ethernet interfaces. For the enhancement of GNSS positioning performance, RTCM corrections, which contain differential corrections for the GNSS as defined by the Radio Technical Commission for Maritime Services (RTCM) Special Committee 104, can be obtained either through Networked Transport of RTCM via Internet Protocol (NTRIP) when an Internet connection is available, or through DSRC RTCM messages as defined in SAE J2735 [29] when it is possible to utilize roadside units (RSUs) capable of RTCM applications.
For cooperative relative positioning, and particularly for the fusion of the on-board sensors and V2X communications, it is necessary to transform the GNSS positioning information (i.e., latitude and longitude) of the host vehicle and the remote targets from the World Geodetic System 1984 (WGS 84) reference frame to the host vehicle reference frame. For this, WGS-84 coordinates are converted to earth-centered, earth-fixed (ECEF) coordinates and then to east-north-up (ENU) coordinates. The GNSS course heading of the host vehicle is utilized to finally rotate the remote target positions and find their relative positions in the host vehicle reference frame. The relative positions and other dynamic information on remote targets acquired from V2X communications can be used in conjunction with the measurements from on-board sensors for more accurate and reliable target state estimation, classification, and trajectory prediction. Some of the safety applications that can be offered with this cooperative approach are shown in the safety applications block in Figure 1.
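To make the transformation pipeline concrete, the following is a minimal sketch of the WGS-84 to ECEF to ENU conversion and the heading-based rotation into the host vehicle frame. It assumes standard WGS-84 ellipsoid constants and a course heading measured clockwise from north; the function names and frame conventions are illustrative assumptions rather than the exact implementation used in the prototype system.

```python
import numpy as np

# WGS-84 ellipsoid constants
A = 6378137.0              # semi-major axis [m]
E2 = 6.69437999014e-3      # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert WGS-84 geodetic coordinates (deg, deg, m) to ECEF coordinates."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)   # prime vertical radius of curvature
    x = (n + h) * np.cos(lat) * np.cos(lon)
    y = (n + h) * np.cos(lat) * np.sin(lon)
    z = (n * (1.0 - E2) + h) * np.sin(lat)
    return np.array([x, y, z])

def ecef_to_enu(p_ecef, ref_lat_deg, ref_lon_deg, ref_h):
    """Express an ECEF point in the ENU frame centered at the reference position."""
    lat, lon = np.radians(ref_lat_deg), np.radians(ref_lon_deg)
    d = p_ecef - geodetic_to_ecef(ref_lat_deg, ref_lon_deg, ref_h)
    rot = np.array([
        [-np.sin(lon),                np.cos(lon),               0.0],
        [-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)],
        [ np.cos(lat) * np.cos(lon),  np.cos(lat) * np.sin(lon), np.sin(lat)],
    ])
    return rot @ d

def enu_to_vehicle_frame(enu, heading_deg):
    """Rotate east/north offsets into the host vehicle frame (x forward, y left)
    using the GNSS course heading in degrees, measured clockwise from north."""
    psi = np.radians(heading_deg)
    east, north = enu[0], enu[1]
    x_forward = north * np.cos(psi) + east * np.sin(psi)
    y_left = north * np.sin(psi) - east * np.cos(psi)
    return np.array([x_forward, y_left])
```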

2.2. Automotive Remote Sensors

The types of sensors used for driving environment perception and object tracking in recent years include radars, cameras, and lidars. These sensors perform detection of objects in the vehicle surroundings and provide information on the object state (e.g., relative position and relative speed) and object classification. Some of the most common advanced driver assistance system (ADAS) applications enabled by the environment perception technology include forward collision warning (FCW), automatic emergency braking (AEB), adaptive cruise control (ACC), lane departure warning (LDW), and lane keeping assist system (LKAS). For this study, we equipped our test vehicle with radar and camera sensors that have already been incorporated into production vehicles.
Automotive radar, which is an active ranging sensor designed for detecting and tracking remote targets in the surrounding environment, is one of the most widely used ranging sensors for ADAS functions today. Automotive radars are capable of providing the relative position and speed information about the objects located within the sensor FOV, and they operate even in adverse weather conditions such as rain, fog, and snow. Two frequency bands, 24 GHz and 77 GHz, have been predominantly exploited for automotive radars [30,31,32]. The 24-GHz band is mainly used for short-range radar systems with a detection range up to 30 m [30,33], while the 77-GHz band is mainly used for long-range radar systems with a detection range up to 250 m [31]. A frequency modulated continuous wave (FMCW) radar is the most common type of automotive radar used for remote sensing of the surroundings [31]. FMCW radars transmit a frequency-modulated signal in a continuous manner. The frequency of the signal changes linearly with time, and this enables accurate time measurement based on the frequency difference between the transmitted and received signals, which can be converted into range information. The specifications of the Delphi ESR 2.5, the multimode radar system utilized for the driving experiments in this study, are shown in Table 1.
Computer vision techniques are increasingly utilized for automotive safety applications. Camera systems with computer vision methods perform detection and localization of the objects that have been captured in the camera images. It is also possible to extract valuable information on the driving environment, such as lane marking positions and road curvature, from the images obtained with the cameras on board the vehicle. A comprehensive review on computer vision techniques for vehicle detection, tracking, and behavior characterization is presented in [34]. The most notable camera system that has been widely incorporated into production vehicles is that of Mobileye [35]. The longitudinal distance to the surrounding vehicle can be determined based on the image position of the target and the detected target width or height [36,37,38]. Based on [36], the accuracy of the range estimated with the camera system decreases quadratically as the distance to the object increases, while the error percentage increases linearly, such that a 5% error in range is expected at 45 m and a 10% error in range is expected at 90 m. The characteristics of the Mobileye 630 camera system that was installed on our test vehicle are summarized in Table 2.
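As an illustration of the range estimation principle referenced above, the sketch below applies the pinhole-camera relation between the detected target width and the longitudinal distance, and shows why the range error grows quadratically with distance. The focal length and assumed vehicle width are hypothetical values chosen for illustration; this is not the Mobileye algorithm or its parameters.

```python
# Illustrative pinhole-camera parameters (assumptions, not Mobileye values)
FOCAL_LENGTH_PX = 1000.0        # assumed focal length in pixels
ASSUMED_TARGET_WIDTH_M = 1.8    # assumed physical width of the target vehicle

def range_from_detected_width(width_px):
    """Estimate the longitudinal range Z from the detected target width in pixels."""
    return FOCAL_LENGTH_PX * ASSUMED_TARGET_WIDTH_M / width_px

def range_error(z, pixel_error=1.0):
    """Approximate range error for a given pixel-level detection error:
    dZ ~ Z^2 * dw / (f * W), i.e., the error grows quadratically with distance,
    consistent with the behavior noted in [36]."""
    return z ** 2 * pixel_error / (FOCAL_LENGTH_PX * ASSUMED_TARGET_WIDTH_M)
```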
Lidar stands for light detection and ranging, and as the name suggests, it measures the relative distance and angle to a target within its FOV by transmitting laser light and measuring reflected light. Near-infrared light with a wavelength of 905 nm is typically used for automotive lidars, and they are capable of higher localization performance compared with other automotive ranging sensors. Lidar sensors attracted much attention of the automated driving research community since they were used by many groups who participated in the DARPA Grand Challenges [39,40,41]. Despite the advantage of lidar sensors, most automakers are yet to incorporate them into production vehicles largely due to their drawbacks such as the high manufacturing cost and bulky form factor. Moreover, high-performance lidars generate millions of data points per second [42]. Processing lidar data points is often computationally expensive and the perception and tracking performance based on lidar measurements can vary depending on the algorithm used [43,44,45]. The use of lidar sensors is outside the scope of this paper, and perception approaches based on multiple remote sensors including lidars will be investigated in future work.

2.3. V2X Communications

There exists a range of wireless communication technologies that can be employed for CAV systems and intelligent transportation system (ITS) applications (e.g., 5.9 GHz DSRC, cellular communications, Wi-Fi, Bluetooth, satellite radio, and visible light communications) [20,46]. The V2X communications based on 5.9 GHz DSRC is a mature technology that is the most widely tested and commercially available [46,47,48]. The DSRC-based V2X communication technology is based on the IEEE 802.11p and the IEEE 1609 series of standards, which are collectively known as the wireless access in vehicular environments (WAVE) standards [46]. The 5.9 GHz frequency band is divided into seven 10-MHz channels, which include one control channel (CCH) and six service channels (SCHs) [49]. As an alternative to DSRC-based V2X communications, cellular V2X (C-V2X) communications based on the 3GPP standards has attracted significant attention in recent years. The C-V2X communications utilizes the cellular network infrastructure (e.g., LTE and 5G networks) to enable V2X applications. For the upper layers, the C-V2X technology is expected to leverage existing standards (e.g., IEEE, SAE, ISO, and ETSI) and utilize common ITS message types, such as the basic safety message (BSM) of IEEE WAVE and the cooperative awareness message (CAM) and the decentralized environmental notification message (DENM) of ETSI ITS-G5. Some of the advantages of the C-V2X technology over 5.9 GHz DSRC include a much larger coverage area, higher throughput, lower latency, and more robust scalability [47,48]. Despite these advantages, the standardization work on C-V2X communications is still in progress and the availability of commercial C-V2X hardware is limited at this stage. In this work, we utilized 5.9 GHz DSRC for V2X communications, and the test vehicles were equipped with Cohda MK5 OBUs for the exchange of information among vehicles. The characteristics of DSRC-based V2X communications are shown in Table 3.
The vehicle state information is broadcast and shared among the vehicles equipped with DSRC devices by exchanging the BSM, which is defined in the SAE J2735 message set dictionary [29]. The BSM contains data obtained from the vehicle CAN bus and the GNSS receiver, which include safety critical state information such as the vehicle position, heading, speed, and yaw rate. The BSM is typically transmitted with a period of 100 ms on the dedicated safety channel (i.e., Channel 172 in the U.S.) [29,46]. The BSM Part I data contain the BSM core data frame that shall be included when broadcasting a BSM, while the BSM Part II data contain optional additional information (e.g., event flags, path history, path prediction, and exterior lights). The contents of the BSM core data frame are described in Table 4.
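For illustration, a simplified container for the per-vehicle state information that the perception system consumes from each received BSM might look as follows. The field names and units are assumptions chosen for readability; the actual SAE J2735 encoding uses ASN.1 data elements with different element names and scaled integer representations.

```python
from dataclasses import dataclass

@dataclass
class RemoteVehicleState:
    """Simplified view of the BSM core data consumed by the perception system."""
    temp_id: int           # temporary vehicle identifier
    time_ms: int           # message/position time stamp
    latitude_deg: float    # WGS-84 latitude
    longitude_deg: float   # WGS-84 longitude
    elevation_m: float
    speed_mps: float
    heading_deg: float     # course heading, clockwise from north
    yaw_rate_dps: float
    width_m: float
    length_m: float
```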

3. State Estimation and Prediction

A Kalman-filter-based fusion approach that was previously described in [23] is employed in this study for state estimation and trajectory prediction of the remote targets in the vehicle surroundings. At each time step, the measurements from the on-board remote sensors as well as the BSMs received from the remote targets are collected and processed with Kalman filter algorithms, which reduce the measurement noise and output the state and error covariance for each track. The types of information used in this study to estimate the state and future trajectory of the remote target include the following: position, speed, heading, yaw rate, and size information from V2X communications; range and azimuth information from the radar system; and longitudinal and lateral distance information from the camera system. The state estimates from different tracks are then associated and fused together, where the weight for each track is determined based on the error covariance. The future trajectory of the remote target detected in the perception stage is estimated with the constant turn rate and velocity (CTRV) motion model. For performance evaluation in the context of safety applications, the future trajectory of the remote target is compared with the future trajectory of the host vehicle, and an appropriate warning is generated when a possible collision is detected.

3.1. Kalman Filtering

Kalman filtering [50,51] is a recursive algorithm that estimates the state of a system as well as the estimation uncertainty based on the prior state and the noisy measurements. The operation of the Kalman filter is described in Figure 2. In the prediction step, the state $\hat{x}_{k|k-1}$ and the error covariance $\hat{P}_{k|k-1}$ are projected with the state transition matrix $A$ from the previous state $\hat{x}_{k-1|k-1}$ and the corresponding error covariance $\hat{P}_{k-1|k-1}$. The random variable $w_k$ is the process noise, which is assumed to be normally distributed with the process noise covariance $Q_k$, such that $w_k \sim N(0, Q_k)$. The process noise covariance $Q_k$ is assumed here to be constant, but it may be changed during filter operation to adjust to different dynamics. In the update step, the error covariance $\hat{P}_{k|k-1}$ is used along with the measurement matrix $H$ and the measurement error covariance $R_k$ to compute the Kalman gain $K_k$. The measurement matrix $H$ maps the state vector $x_k$ to the measurement vector $z_k$ such that

$$z_k = H x_k + v_k.$$

The random variable $v_k$ is the measurement noise, which is assumed to be normally distributed with the measurement error covariance $R_k$, such that $v_k \sim N(0, R_k)$. Finally, the state $\hat{x}_{k|k}$ and the error covariance $\hat{P}_{k|k}$ are updated based on the Kalman gain $K_k$ and the measurement vector $z_k$ obtained at time step $k$.
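A minimal sketch of the prediction and update steps described above is given below. It assumes the linear model matrices A, H, Q, and R have already been chosen for the track of interest; it is an illustrative implementation of the standard equations, not the exact filter configuration used in this work.

```python
import numpy as np

def kf_predict(x, p, a, q):
    """Prediction step: project the state and error covariance one step ahead."""
    x_pred = a @ x
    p_pred = a @ p @ a.T + q
    return x_pred, p_pred

def kf_update(x_pred, p_pred, z, h, r):
    """Update step: correct the prediction with the measurement z."""
    s = h @ p_pred @ h.T + r                 # innovation covariance
    k = p_pred @ h.T @ np.linalg.inv(s)      # Kalman gain
    x = x_pred + k @ (z - h @ x_pred)
    p = (np.eye(len(x)) - k @ h) @ p_pred
    return x, p
```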
The relative positions and the motion equations of the remote targets in the vehicle surroundings are typically given in Cartesian coordinates, and it is necessary to perform a polar-to-Cartesian transformation when the ranging measurements obtained from remote sensors (e.g., radar sensors) are in polar coordinates. An extended Kalman filter (EKF) is often utilized to handle such nonlinear systems; however, the linear approximation of a nonlinear system may result in highly unstable performance, and the derivation of the Jacobian matrices is often nontrivial in many applications [52]. More advanced nonlinear filtering approaches such as sequential Monte Carlo methods, also known as particle filters, have been introduced [53], but the computational complexity becomes enormous for high-dimensional problems and the use of particle filters should be determined based on the degree of the system nonlinearity [54]. In this study, the unbiased converted measurement Kalman filter algorithm presented in [55,56] is employed to perform the coordinate transformation without bias and to obtain the correct covariance. This converted measurement approach yields nearly optimal estimates and provides higher estimation accuracy than the EKF [57].
The operation of the unbiased converted measurement Kalman filter is described in Figure 3. The filtering process includes additional steps to compute the covariance of the unbiased conversion compared with the linear Kalman filtering described above. The range measurement $r_m$ and the azimuth measurement $\theta_m$ are defined as

$$r_m = r + v_r$$

$$\theta_m = \theta + v_\theta$$

where $r$ and $\theta$ are the true range and azimuth of the remote target, respectively, and $v_r$ and $v_\theta$ are the associated measurement noise with error standard deviations of $\sigma_r$ and $\sigma_\theta$, respectively. The unbiased converted measurements $x_m$ and $y_m$ are computed by taking into account the bias compensation factor such that

$$x_m = \lambda_\theta^{-1} r_m \cos\theta_m$$

$$y_m = \lambda_\theta^{-1} r_m \sin\theta_m.$$

The computation of the unbiased converted measurements as well as the associated covariance requires the compensation factors $\lambda_\theta$ and $\lambda_\theta'$, which are determined from

$$\lambda_\theta = E[\cos v_\theta] = e^{-\sigma_\theta^2/2}$$

$$\lambda_\theta' = E[\cos 2 v_\theta] = e^{-2\sigma_\theta^2}.$$

The rest of the steps for obtaining the covariance of the unbiased conversion are as shown in Figure 3, and the state $\hat{x}_{k|k}$ and the error covariance $\hat{P}_{k|k}$ are updated according to the Kalman gain $K_k$ and the measurement $z_k$.
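The bias-compensated part of this conversion can be sketched as follows. The computation of the associated covariance follows the additional steps in Figure 3 and the expressions given in [55,56], which are not reproduced here; the function name is an illustrative assumption.

```python
import numpy as np

def unbiased_conversion(r_m, theta_m, sigma_theta):
    """Bias-compensated conversion of a polar (range, azimuth) measurement
    to Cartesian coordinates using the compensation factor lambda_theta."""
    lam = np.exp(-sigma_theta ** 2 / 2.0)    # lambda_theta = E[cos v_theta]
    x_m = r_m * np.cos(theta_m) / lam
    y_m = r_m * np.sin(theta_m) / lam
    return x_m, y_m
```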

3.2. Data Fusion

A track-to-track fusion approach is employed in this study for combining high-level data from multiple sources. The data processing for each sensor system is performed individually at the sensor level in a high-level fusion system. Each sensor system outputs one or more tracks based on the sensor measurements, and the state estimates from multiple sensor tracks are associated and combined with a track-to-track fusion algorithm. A high-level fusion of multiple sensors has been successfully implemented in many studies dealing with automotive applications [58,59,60,61,62]. Important advantages of the high-level fusion approach lie in spatial and temporal alignment, modularity, and communication overhead [63,64]. A simple block diagram for the high-level fusion system architecture [63,65] is shown in Figure 4.
One of the most widely used algorithms for track-to-track fusion is the convex combination algorithm [65,66,67,68], and it has been used extensively for its simple implementation. Two state estimates $x_i$ and $x_j$ with the corresponding covariances $P_i$ and $P_j$, respectively, can be fused to obtain the state estimate $\check{x}$ by

$$\check{x} = P_j (P_i + P_j)^{-1} x_i + P_i (P_i + P_j)^{-1} x_j = \check{P} \left( P_i^{-1} x_i + P_j^{-1} x_j \right)$$

where $\check{P}$ is the covariance associated with the fused estimate, which is given by

$$\check{P} = P_i - P_i (P_i + P_j)^{-1} P_i = P_i (P_i + P_j)^{-1} P_j = \left( P_i^{-1} + P_j^{-1} \right)^{-1}.$$
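A minimal sketch of the convex combination above is given below. It assumes the two track estimates refer to the same target and that their errors are treated as uncorrelated, as in the simple form of the algorithm; the function name is illustrative.

```python
import numpy as np

def fuse_tracks(x_i, p_i, x_j, p_j):
    """Convex combination of two track estimates; each estimate is weighted
    by the inverse of its own covariance."""
    p_i_inv = np.linalg.inv(p_i)
    p_j_inv = np.linalg.inv(p_j)
    p_fused = np.linalg.inv(p_i_inv + p_j_inv)
    x_fused = p_fused @ (p_i_inv @ x_i + p_j_inv @ x_j)
    return x_fused, p_fused
```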

3.3. Trajectory Prediction and Risk Assessment

The future trajectory of the remote target in the vehicle surroundings is estimated with a CTRV model. The state space at time step $k$ is defined as

$$x_k = \left[ X_k \;\; Y_k \;\; v_k \;\; \psi_k \;\; \omega_k \right]^T$$

where $X_k$ is the relative distance in the longitudinal direction, $Y_k$ is the relative distance in the lateral direction, $v_k$ is the target speed, $\psi_k$ is the relative course heading, and $\omega_k$ is the yaw rate. The state transition equation for the prediction of the state at time step $k+1$ is given by

$$x_{k+1} = x_k + \begin{bmatrix} \dfrac{v_k}{\omega_k}\left( \sin(\psi_k + \omega_k \Delta t) - \sin\psi_k \right) \\ \dfrac{v_k}{\omega_k}\left( -\cos(\psi_k + \omega_k \Delta t) + \cos\psi_k \right) \\ 0 \\ \omega_k \Delta t \\ 0 \end{bmatrix}.$$
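A minimal sketch of one CTRV prediction step is shown below. The near-zero yaw-rate case falls back to a straight-line (constant velocity) update to avoid division by zero, which is a common implementation choice rather than something specified in the text.

```python
import numpy as np

def ctrv_predict(x, dt):
    """One CTRV prediction step for the state x = [X, Y, v, psi, omega]."""
    X, Y, v, psi, omega = x
    if abs(omega) > 1e-6:
        X += (v / omega) * (np.sin(psi + omega * dt) - np.sin(psi))
        Y += (v / omega) * (-np.cos(psi + omega * dt) + np.cos(psi))
    else:
        # Near-zero yaw rate: fall back to a straight-line update
        X += v * np.cos(psi) * dt
        Y += v * np.sin(psi) * dt
    psi += omega * dt
    return np.array([X, Y, v, psi, omega])
```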
For the purpose of risk assessment, the circle model described in [23] is employed and a possible collision event is predicted based on the future trajectories of the remote vehicle and the host vehicle as shown in Figure 5. The host vehicle radius $R_{HV}$ and the remote vehicle radius $R_{RV}$ are given by

$$R_{HV} = \frac{\sqrt{W_{HV}^2 + L_{HV}^2}}{2}$$

$$R_{RV} = \frac{\sqrt{W_{RV}^2 + L_{RV}^2}}{2}$$

where $W_{HV}$ and $L_{HV}$ denote the width and the length of the host vehicle, respectively, and $W_{RV}$ and $L_{RV}$ denote the width and the length of the remote vehicle, respectively. A possible collision is determined if the following inequality is true:

$$\sqrt{(X_{HV} - X_{RV})^2 + (Y_{HV} - Y_{RV})^2} \le R_{HV} + R_{RV}.$$
The detection of a possible collision leads to the generation of a collision warning to the host vehicle. Based on the time-to-collision (TTC) estimate, four different collision warning messages are provided. Following the collision warning stages discussed in [69], the warning messages provided by the proposed system include “no threat” when no possible collision is detected, “threat detected” for $\mathrm{TTC} > 2.6$ s, “inform driver” for $1.6~\mathrm{s} < \mathrm{TTC} \le 2.6$ s, and “warn driver” for $\mathrm{TTC} \le 1.6$ s. Each warning message is described in Table 5. The conditions for the collision warning messages defined here are similar to those of Daimler PRE-SAFE [70] and Mobileye FCW [71]. The PRE-SAFE and Mobileye systems warn the driver approximately 2.6 s and 2.7 s before the expected collision, respectively. In the case of the PRE-SAFE system, an additional warning is provided at approximately 1.6 s before the expected collision.
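A minimal sketch of the circle-model collision check and the TTC-based warning stages is given below; the function names and the handling of the no-collision case are illustrative assumptions, and the predicted positions are assumed to be expressed in a common frame.

```python
import numpy as np

def bounding_radius(width, length):
    """Radius of the circle circumscribing a vehicle of the given width and length."""
    return np.hypot(width, length) / 2.0

def possible_collision(p_hv, p_rv, r_hv, r_rv):
    """True if the two bounding circles overlap at a predicted time step."""
    return np.hypot(p_hv[0] - p_rv[0], p_hv[1] - p_rv[1]) <= r_hv + r_rv

def warning_level(ttc):
    """Map a time-to-collision estimate (s) to the warning stages of Table 5."""
    if ttc is None:
        return "no threat"
    if ttc > 2.6:
        return "threat detected"
    if ttc > 1.6:
        return "inform driver"
    return "warn driver"
```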

4. Experimental Evaluation

The proposed system for cooperative driving environment perception was evaluated experimentally in test-track environments. Two test vehicles were used for the experiments; both were equipped with V2X communication devices that enabled the exchange of BSMs between them. In order to evaluate the benefits of introducing V2X communications to currently available environment perception systems, the host vehicle was additionally equipped with radar and camera systems that have already been adopted in production vehicles.

4.1. Vehicle Configuration

Two Kia Soul cars, one in white and the other in yellow as shown in Figure 6a, were used for the experiments. The experiment setup consisted of a combination of hardware and software components that enabled the host vehicle to gather information on the vehicle surroundings by means of on-board remote sensors and V2X communications. As shown in Figure 6b, the host vehicle (Kia Soul in white) was equipped with a Delphi ESR 2.5 radar and a Mobileye 630 camera for detection of surrounding objects, and each test vehicle was equipped with a Cohda MK5 OBU, a DSRC antenna, a u-blox F9P GNSS receiver, and a GNSS antenna, all of which together enabled the vehicles to exchange BSM via V2X communications and share vehicle position information with a lane-level accuracy. The u-blox F9P device is a low-cost GNSS module with built-in support for standard RTCM corrections.
For real-time data acquisition and processing, the host vehicle was equipped with a dSPACE SCALEXIO AutoBox, which is an in-vehicle prototyping system with a real-time processor and interfaces for CAN and Ethernet communications. The signals transmitted from the radar and camera systems as well as the in-vehicle ECUs were acquired via CAN interfaces, and the signals transmitted from the DSRC OBU were acquired via Ethernet interfaces. RTCM corrections were acquired with an NTRIP client running on a smartphone and streamed via a Bluetooth connection to the u-blox C099-F9P application board. In order to obtain ground truth positions for the test vehicles, both vehicles were equipped with an Ascen AKT980R, which is a highly accurate GNSS receiver that provides GNSS carrier phase measurements with a horizontal positioning accuracy of 0.008 m and a heading accuracy of 0.09 deg with a 2-m baseline. The overview of the hardware interface for the proposed system is presented in Figure 7. The driving experiments were conducted at the proving ground of the Korea Automotive Technology Institute, which is shown in Figure 8.

4.2. Surrounding Vehicle Localization

4.2.1. Experimental Environment

In order to obtain relative position estimates of the remote vehicle from the radar and camera and also from V2X communications, driving experiments were conducted under scenarios where the remote vehicle was driven ahead of the host vehicle. For the performance evaluation of the proposed data fusion approach for different relative target locations, the relative distance between the two test vehicles was gradually increased. As shown in Figure 9, the first set of experiments was carried out under a scenario where both test vehicles drove in the same lane, and the second set of experiments was conducted under a scenario where the remote vehicle drove in the adjacent lane. The two driving scenarios, which are referred to as the “normal driving scenarios” here, are summarized in Table 6. For these two normal driving scenarios, dynamic driving maneuvers such as lane change maneuvers were not performed. For both driving scenarios, the host vehicle was driven at a speed of about 20 km/h while the remote vehicle was driven at about 25 km/h. The red and blue dashed lines in Figure 9 indicate the FOV of the camera and radar sensors, respectively. The narrower set of dashed lines in blue indicates the long-range radar coverage, whereas the wider set indicates the mid-range radar coverage. The relative positions of the target vehicle obtained from the radar, camera, and V2X communications are plotted in different colors. The red dot indicates the camera measurement while the blue dot indicates the radar measurement. The dot in magenta indicates the relative position obtained based on V2X communication data, and the black dot denotes the ground truth position. All of these dots represent the estimated distance to the rear center of the remote vehicle from the origin, which is the center of the front bumper of the host vehicle. The magenta bounding box is created and rotated based on the width, length, and heading information obtained from V2X communications at a given time step. Finally, the black bounding box represents the ground truth position of the remote vehicle.

4.2.2. Performance Evaluation

Figure 10a,b show the ground truth distance to the remote vehicle obtained for the driving scenario where both vehicles are in the same lane. The longitudinal and lateral relative distance obtained from separate test runs are concatenated and presented together. The time step between two consecutive frames from the same test run corresponds to 0.1 s. The separation between the two test vehicles gradually increases while both vehicles stay in the same lane. The absolute error for the relative position in the longitudinal and lateral directions is presented in Figure 10c,d. The results of the proposed data fusion method are presented along with those of individual remote sensing systems. Despite the fluctuations observed in the position estimates of the radar, camera, and V2X communications, the proposed method enables reliable estimation of the relative position of the remote target at varying distances.
As presented in Table 7 and Table 8, the performance of the proposed data fusion method is evaluated by computing the root mean squared error (RMSE) and the standard deviation of the error, and compared with those calculated for other remote sensing systems. In order to determine the positioning accuracy at varying relative distances, the results are grouped in separate 10-m bins (in the longitudinal direction). For the driving scenario with both vehicles in the same lane, the longitudinal and lateral localization accuracies of the proposed method in terms of the RMSE are found to be 0.22 m and 0.13 m, respectively, when taking into account all the results that correspond to the range of longitudinal distances between 0 and 70 m.
The results obtained when the target vehicle is at a relative longitudinal distance of 70 m or longer are not used for this performance evaluation, considering that the lateral position accuracy of the GNSS system used for ground truth degrades at such long distances due to the limited heading accuracy such that it may not be appropriate to be utilized as a reference system. In addition, the Mobileye camera used in this work appears to suffer from significant degradation in longitudinal accuracy for remote targets located at such distances.
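For clarity, the binned evaluation described above can be sketched as follows. The bin width, range limit, and function name are assumptions matching the description in the text rather than the authors' actual evaluation script; the inputs are arrays of ground-truth longitudinal distances and the corresponding absolute errors.

```python
import numpy as np

def binned_error_stats(longitudinal_truth, errors, bin_width=10.0, max_range=70.0):
    """RMSE and standard deviation of the absolute error, grouped into bins of
    ground-truth longitudinal distance (e.g., 0-10 m, 10-20 m, ...)."""
    longitudinal_truth = np.asarray(longitudinal_truth)
    errors = np.asarray(errors)
    stats = {}
    for lo in np.arange(0.0, max_range, bin_width):
        mask = (longitudinal_truth >= lo) & (longitudinal_truth < lo + bin_width)
        if np.any(mask):
            e = errors[mask]
            stats[(lo, lo + bin_width)] = (float(np.sqrt(np.mean(e ** 2))),
                                           float(np.std(e)))
    return stats
```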
The ground truth distance to the remote vehicle during the driving scenario where the remote vehicle is in the adjacent lane is shown in Figure 11a,b. As previously explained, the relative positions of the remote vehicle acquired from separate test runs are concatenated and presented together, and the time step between two consecutive frames from the same test run corresponds to 0.1 s. The separation between the two test vehicles gradually increases while the remote vehicle stays in the adjacent lane. The absolute error for the longitudinal and lateral distances to the remote vehicle is presented in Figure 11c,d. Similar to the aforementioned case with both vehicles in the same lane, the proposed fusion method reliably estimates the remote target position at varying distances even at times when the accuracy of individual remote sensors becomes severely degraded.
The performance of the individual remote sensing systems and the proposed cooperative environment perception approach is evaluated by computing the RMSE and the standard deviation of the error at different relative distances as shown in Table 9 and Table 10. Similar to the previous scenario, the results are grouped in separate 10-m bins (in the longitudinal direction) to determine the positioning accuracy at varying relative distances for the driving scenario with the remote vehicle driving in the adjacent lane. Taking into account all of the position estimates that fall between 0 and 70 m in the longitudinal direction, the longitudinal and lateral localization accuracies of the proposed method in terms of the RMSE are found to be 0.33 m and 0.09 m, respectively.
Figure 12 shows the relative positioning accuracy in the longitudinal and lateral directions for the combined sets of the measurements from the two normal driving scenarios. In the longitudinal direction, the total RMSE of the relative position estimated with the proposed fusion approach is 0.27 m, which corresponds to an improvement of 86% and 52% compared to the total RMSE of the relative position estimated with the camera and radar sensors, respectively. In the lateral direction, the total RMSE of the relative position estimated with the proposed fusion approach is 0.12 m, which corresponds to an improvement of 27% and 56% compared to the total RMSE of the relative position estimated with the camera and radar sensors, respectively.

4.3. Cut-In Driving Scenario

4.3.1. Experimental Environment

The performance of the proposed cooperative environment perception approach was also evaluated in cut-in driving scenarios where the remote vehicle performs a lane-change maneuver in front of the host vehicle. Three different cut-in scenarios were considered in this work, as described in Table 11. The host vehicle traveled at 40–45 km/h at the start of the cut-in maneuver in all three cut-in driving scenarios, while the remote vehicle, originally driving in the adjacent lane, cut in at a distance of 15–20 m in front of the host vehicle. The speed of the remote vehicle was set differently for each cut-in scenario in order to vary the level of collision threat, such that scenario 1 presents the lowest level of threat while the highest level of threat is expected in scenario 3.

4.3.2. Performance Evaluation

The ground truth relative position of the remote vehicle and the absolute error for the relative position during cut-in driving scenario 1 are shown in Figure 13. The longitudinal and lateral relative distance measurements obtained from separate test runs are concatenated and shown together. The start and the duration of the cut-in events can be conveniently observed in Figure 13b, where the lateral distance changes from the center of the adjacent lane towards the center of the lane in which the host vehicle is positioned. The TTC results and the corresponding levels of the collision warning obtained from the proposed method for trajectory prediction and risk assessment are presented in Figure 14. Possible collision events are successfully predicted for all four cut-in maneuvers performed by the remote vehicle. The TTC results for collision detection are above 4 s except in the third cut-in case. The threat levels other than the third one are minor, resulting in level-1 warnings only for a short duration of time. A level-2 warning is given for the third cut-in event, which is due to the more abrupt change in the longitudinal distance just prior to and during the lane-change maneuver.
Figure 15 shows the ground truth position and the absolute position error in the longitudinal and lateral directions during cut-in driving scenario 2. The data collected from separate test runs are concatenated and presented together. A total of five cut-in events take place in this scenario, and the start and the duration of these events can be observed in Figure 15b. The TTC results and the corresponding collision warning levels are presented in Figure 16. For all five cut-in maneuvers performed by the remote vehicle in scenario 2, possible collision events are correctly predicted and corresponding collision warnings are provided.
Figure 17 presents the ground truth relative position of the remote vehicle and the absolute error for the relative position during cut-in driving scenario 3. The data obtained from separate test runs are concatenated and presented together. A total of eight cut-in events can be recognized in Figure 17b. An unusual remote vehicle maneuver in the lateral direction can be noticed before the initiation of the fourth cut-in maneuver. This was caused by an incident in which the driver of the remote vehicle steered away to avoid another vehicle that was being used by a different group at the proving ground. Despite the higher fluctuations of the position estimates from the individual remote sensing systems, as shown in Figure 17c,d, the proposed cooperative approach provides accurate and reliable positioning results throughout the execution of the cut-in maneuvers. The TTC estimates and the corresponding warning levels are presented in Figure 18. The results show that the proposed system for trajectory prediction and risk assessment enables reliable and timely warnings to cope with the sudden lane-change maneuvers performed in cut-in driving scenario 3.

5. Conclusions

In this paper, we presented the experimental design and performance evaluation of the driving environment perception system based on the fusion of multiple on-board sensors and V2X communications. The two test vehicles used for the driving experiments were each equipped with DSRC equipment and a low-cost GNSS receiver for the exchange of BSM data. The host vehicle was additionally equipped with radar and camera sensors that have already been adopted in production vehicles. The performance of the proposed fusion approach in terms of relative positioning accuracy was evaluated at varying longitudinal and lateral distances between the two test vehicles. In the longitudinal direction, the total RMSE of the relative position estimated with the proposed fusion approach was 0.27 m, whereas those estimated with the camera and radar sensors were 1.91 m and 0.57 m, respectively, which correspond to an improvement of 86% and 52%. In the lateral direction, the total RMSE of the relative position estimated with the proposed method was 0.12 m, while those estimated with the camera and radar were 0.16 m and 0.26 m, respectively, corresponding to an improvement of 27% and 56%. The performance of the trajectory prediction and risk assessment was evaluated in different cut-in driving scenarios where the remote vehicle performs a lane-change maneuver in front of the host vehicle. The test results showed that the proposed method provides accurate target localization and facilitates reliable target trajectory prediction and detection of potential collision during the events when a remote vehicle driving in an adjacent lane cuts in front of the host vehicle.
Although the proposed cooperative approach for driving environment perception proved successful in the driving scenarios considered in this study, a number of challenges remain to be addressed. The scope of this paper has been limited to driving scenarios where a single remote target is present in the surroundings of the host vehicle. As part of future work, the benefits of the cooperative environment perception approach will be further investigated in scenarios that involve multiple surrounding objects for more diverse use cases. In addition, various factors that could adversely affect the accuracy and reliability of on-board sensors (e.g., high-curvature roads and adverse weather conditions) and V2X communications (e.g., large separations between vehicles and urban environments) will be examined so that the data quality levels that can be expected for each sensor track and for V2X communications can be more accurately determined in continuously changing driving conditions. It is also important to note that the trajectory prediction performed in the scope of this study was based on a physics-based motion model, which performs well for short-term prediction but degrades when the prediction horizon is extended. Furthermore, this trajectory prediction approach cannot anticipate state changes caused by varying road curvature, traffic signals, or future driving maneuvers. Therefore, future work should investigate interaction-aware motion models and also incorporate map data and the signal phase and timing message, which can be acquired via V2X communications, for accurate and reliable longer-term trajectory prediction that takes into account inter-vehicle interaction, road configuration, and traffic signal dependencies.

Author Contributions

Conceptualization, M.B. and S.L.; methodology, M.B.; software, M.B., J.M., and W.K.; validation, M.B., J.M., and J.Y.; formal analysis, M.B.; investigation, M.B., J.M., and D.C.; data curation, M.B., J.M., W.K., and J.Y.; writing—original draft preparation, M.B.; writing—review and editing, M.B.; visualization, M.B.; supervision, S.L.; project administration, M.B. and D.C.; funding acquisition, S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Technology Innovation Program (10062375, Development of Core Technologies Based on V2X and In-Vehicle Sensors for Path Prediction of the Surrounding Objects (Vehicle, Pedestrian, Motorcycle)) funded by the Ministry of Trade, Industry and Energy (MOTIE, Korea).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

ADAS    Advanced driver assistance system
BSM     Basic safety message
CAM     Cooperative awareness message
CAN     Controller area network
CAV     Connected and automated vehicle
CTRV    Constant turn rate and velocity
C-V2X   Cellular vehicle-to-everything
DENM    Decentralized environmental notification message
DSRC    Dedicated short-range communications
ECU     Electronic control unit
FMCW    Frequency modulated continuous wave
FOV     Field of view
GNSS    Global navigation satellite system
ITS     Intelligent transportation system
NTRIP   Networked Transport of RTCM via Internet Protocol
OBU     On-board unit
ODD     Operational design domain
RSU     Roadside unit
RTCM    Radio Technical Commission for Maritime Services
TTC     Time-to-collision
V2I     Vehicle-to-infrastructure
V2N     Vehicle-to-network
V2P     Vehicle-to-pedestrian
V2V     Vehicle-to-vehicle
V2X     Vehicle-to-everything
WAVE    Wireless access in vehicular environments
WGS     World Geodetic System

References

  1. World Health Organization. Global Status Report on Road Safety 2018; WHO: Geneva, Switzerland, 2018; ISBN 978-92-4-156568-4. [Google Scholar]
  2. Sustainable Mobility for All. Global Mobility Report 2017: Tracking Sector Performance; SuM4All: Washington, DC, USA, 2017; Available online: https://sum4all.org/publications/global-mobility-report-2017 (accessed on 20 November 2020).
  3. Rao, A.M.; Rao, K.R. Measuring Urban Traffic Congestion—A Review. Int. J. Traffic Transp. Eng. 2012, 2, 286–305. [Google Scholar]
  4. Litman, T. Autonomous Vehicle Implementation Predictions: Implications for Transport Planning; Victoria Transport Policy Institute (VTPI): Victoria, BC, Canada, 2020. [Google Scholar]
  5. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles; SAE J3016; SAE International: Warrendale, PA, USA, 2018.
  6. Mervis, J. Are We Going Too Fast on Driverless Cars? Available online: https://www.sciencemag.org/news/2017/12/are-we-going-too-fast-driverless-cars (accessed on 28 December 2020).
  7. Brown, B.; Laurier, E. The Trouble with Autopilots: Assisted and Autonomous Driving on the Social Road. In Proceedings of the Conference on Human Factors in Computing Systems, Denver, Colorado, USA, 6–11 May 2017; pp. 416–429. [Google Scholar]
  8. Karpathy, A. AI for Full-Self Driving. In Proceedings of the 5th Annual Scaled Machine Learning Conference, Mountain View, CA, USA, February 2020; Available online: https://info.matroid.com/scaledml-media-archive-2020 (accessed on 28 December 2020).
  9. Hawkins, A.J. Volvo Will Use Waymo’s Self-Driving Technology to Power a Fleet of Electric Robotaxis. The Verge, June 2020. Available online: https://www.theverge.com/2020/6/25/21303324/volvo-waymo-l4-deal-electric-self-driving-robot-taxi (accessed on 29 December 2020).
  10. Urmson, C.; Anhalt, J.; Bagnell, D.; Baker, C.; Bittner, R.; Clark, M.N.; Dolan, J.; Duggins, D.; Galatali, T.; Geyer, C.; et al. Autonomous driving in urban environments: Boss and the Urban Challenge. J. Field Robot. 2008, 25, 425–466. [Google Scholar] [CrossRef] [Green Version]
  11. Wille, J.M.; Saust, F.; Maurer, M. Stadtpilot: Driving Autonomously on Braunschweig’s Inner Ring Road. In Proceedings of the IEEE Intelligent Vehicles Symposium, San Diego, CA, USA, 21–24 June 2010; pp. 506–511. [Google Scholar]
  12. Guizzo, E. How Google’s Self-Driving Car Works, IEEE Spectrum Online. 2011. Available online: http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/how-google-self-driving-car-works (accessed on 27 November 2016).
  13. Ziegler, J.; Bender, P.; Schreiber, M.; Lategahn, H.; Strauss, T.; Stiller, C.; Dang, T.; Franke, U.; Appenrodt, N.; Keller, C.G.; et al. Making Bertha Drive—An Autonomous Journey on a Historic Route. IEEE Intell. Transp. Syst. Mag. 2014, 6, 8–20. [Google Scholar] [CrossRef]
  14. Broggi, A.; Cerri, P.; Debattisti, S.; Laghi, M.C.; Medici, P.; Molinari, D.; Panciroli, M.; Prioletti, A. PROUD—Public Road Urban Driverless-Car Test. IEEE Trans. Intell. Transp. Syst. 2015, 16, 3508–3519. [Google Scholar] [CrossRef]
15. Marti, E.; de Miguel, M.A.; Garcia, F.; Perez, J. A Review of Sensor Technologies for Perception in Automated Driving. IEEE Intell. Transp. Syst. Mag. 2019, 11, 94–108.
16. National Transportation Safety Board. Collision between a Car Operating with Automated Vehicle Control Systems and a Tractor-Semitrailer Truck Near Williston, Florida, 7 May 2016; Highway Accident Report NTSB/HAR-17/02; NTSB: Washington, DC, USA, 2017. Available online: https://data.ntsb.gov/Docket?NTSBNumber=HWY16FH018 (accessed on 29 December 2020).
17. National Transportation Safety Board. Collision between a Sport Utility Vehicle Operating with Partial Driving Automation and a Crash Attenuator, Mountain View, California, 23 March 2018; Highway Accident Report NTSB/HAR-20/01; NTSB: Washington, DC, USA, 2020. Available online: https://data.ntsb.gov/Docket?NTSBNumber=HWY18FH011 (accessed on 29 December 2020).
18. National Transportation Safety Board. Collision between Car Operating with Partial Driving Automation and Truck-Tractor Semitrailer, Delray Beach, Florida, 1 March 2019; Highway Accident Brief NTSB/HAB-20/01; NTSB: Washington, DC, USA, 2020. Available online: https://data.ntsb.gov/Docket?NTSBNumber=HWY19FH008 (accessed on 29 December 2020).
19. National Transportation Safety Board. Collision between Vehicle Controlled by Developmental Automated Driving System and Pedestrian, Tempe, Arizona, 18 March 2018; Highway Accident Report NTSB/HAR-19/03; NTSB: Washington, DC, USA, 2019. Available online: https://data.ntsb.gov/Docket?NTSBNumber=HWY18MH010 (accessed on 29 December 2020).
20. Shladover, S.E. Connected and Automated Vehicle Systems: Introduction and Overview. J. Intell. Transp. Syst. 2018, 22, 190–200.
21. Guanetti, J.; Kim, Y.; Borrelli, F. Control of Connected and Automated Vehicles: State of the Art and Future Challenges. Annu. Rev. Control 2018, 45, 18–40.
22. Eskandarian, A.; Wu, C.; Sun, C. Research Advances and Challenges of Autonomous and Connected Ground Vehicles. IEEE Trans. Intell. Transp. Syst. 2019, 1–29.
23. Baek, M.; Jeong, D.; Choi, D.; Lee, S. Vehicle Trajectory Prediction and Collision Warning via Fusion of Multisensors and Wireless Vehicular Communications. Sensors 2020, 20, 288.
24. Rauch, A.; Maier, S.; Klanner, F.; Dietmayer, K. Inter-Vehicle Object Association for Cooperative Perception Systems. In Proceedings of the IEEE Conference on Intelligent Transportation Systems, The Hague, The Netherlands, 6–9 October 2013; pp. 893–898.
25. Obst, M.; Hobert, L.; Reisdorf, P. Multi-Sensor Data Fusion for Checking Plausibility of V2V Communications by Vision-Based Multiple-Object Tracking. In Proceedings of the IEEE Vehicular Networking Conference, Paderborn, Germany, 3–5 December 2014; pp. 143–150.
26. de Ponte Müller, F.; Diaz, E.M.; Rashdan, I. Cooperative Positioning and Radar Sensor Fusion for Relative Localization of Vehicles. In Proceedings of the IEEE Intelligent Vehicles Symposium, Gothenburg, Sweden, 19–22 June 2016; pp. 1060–1065.
27. Chen, Q.; Roth, T.; Yuan, T.; Breu, J.; Kuhnt, F.; Zöllner, M.; Bogdanovic, M.; Weiss, C.; Hillenbrand, J.; Gern, A. DSRC and Radar Object Matching for Cooperative Driver Assistance Systems. In Proceedings of the IEEE Intelligent Vehicles Symposium, Seoul, Korea, 28 June–1 July 2015; pp. 1348–1354.
28. Fujii, S.; Fujita, A.; Umedu, T.; Kaneda, S.; Yamaguchi, H.; Higashino, T.; Takai, M. Cooperative Vehicle Positioning via V2V Communications and Onboard Sensors. In Proceedings of the IEEE Vehicular Technology Conference, San Francisco, CA, USA, 5–8 September 2011; pp. 1–5.
29. Dedicated Short Range Communications (DSRC) Message Set Dictionary; SAE J2735; SAE International: Warrendale, PA, USA, 2016.
30. Strohm, K.M.; Bloecher, H.-L.; Schneider, R.; Wenger, J. Development of Future Short Range Radar Technology. In Proceedings of the European Radar Conference, Paris, France, 3–4 October 2005; pp. 165–168.
31. Hasch, J.; Topak, E.; Schnabel, R.; Zwick, T.; Weigel, R.; Waldschmidt, C. Millimeter-Wave Technology for Automotive Radar Sensors in the 77 GHz Frequency Band. IEEE Trans. Microw. Theory Tech. 2012, 60, 845–860.
32. Ramasubramanian, K.; Ramaiah, K. Moving from Legacy 24 GHz to State-of-the-Art 77-GHz Radar. ATZelektronik Worldw. 2018, 13, 46–49.
33. Klotz, M.; Rohling, H. 24 GHz Radar Sensors for Automotive Applications. In Proceedings of the International Conference on Microwaves, Radar and Wireless Communications, Wroclaw, Poland, 22–24 May 2000; pp. 359–362.
34. Sivaraman, S.; Trivedi, M.M. Looking at Vehicles on the Road: A Survey of Vision-Based Vehicle Detection, Tracking, and Behavior Analysis. IEEE Trans. Intell. Transp. Syst. 2013, 14, 1773–1795.
35. Zhang, X.; Gao, H.; Xie, G.; Gao, B.; Li, D. Technology and Application of Intelligent Driving Based on Visual Perception. CAAI Trans. Intell. Technol. 2017, 2, 126–132.
36. Stein, G.P.; Mano, O.; Shashua, A. Vision-Based ACC with a Single Camera: Bounds on Range and Range Rate Accuracy. In Proceedings of the IEEE Intelligent Vehicles Symposium, Columbus, OH, USA, 9–11 June 2003; pp. 120–125.
37. Dagan, E.; Mano, O.; Stein, G.P.; Shashua, A. Forward Collision Warning with a Single Camera. In Proceedings of the IEEE Intelligent Vehicles Symposium, Parma, Italy, 14–17 June 2004; pp. 37–42.
38. Han, J.; Heo, O.; Park, M.; Kee, S.; Sunwoo, M. Vehicle Distance Estimation Using a Mono-Camera for FCW/AEB Systems. Int. J. Automot. Technol. 2016, 17, 483–491.
39. Thrun, S.; Montemerlo, M.; Dahlkamp, H.; Stavens, D.; Aron, A.; Diebel, J.; Fong, P.; Gale, J.; Halpenny, M.; Hoffmann, G.; et al. Stanley: The Robot That Won the DARPA Grand Challenge. J. Field Robot. 2006, 23, 661–692.
40. Buehler, M.; Iagnemma, K.; Singh, S. (Eds.) The 2005 DARPA Grand Challenge: The Great Robot Race; Springer: Berlin, Germany, 2007; ISBN 978-3-540-73428-4.
41. Buehler, M.; Iagnemma, K.; Singh, S. (Eds.) The DARPA Urban Challenge: Autonomous Vehicles in City Traffic; Springer: Berlin, Germany, 2009; ISBN 978-3-642-03990-4.
42. Hecht, J. Lidar for Self-Driving Cars. Opt. Photonics News 2018, 29, 26–33.
43. Labbé, M.; Michaud, F. RTAB-Map as an Open-Source Lidar and Visual Simultaneous Localization and Mapping Library for Large-Scale and Long-Term Online Operation. J. Field Robot. 2019, 36, 416–446.
44. de Paula Veronese, L.; Ismail, A.; Narayan, V.; Schulze, M. An Accurate and Computational Efficient System for Detecting and Classifying Ego and Sides Lanes Using LiDAR. In Proceedings of the IEEE Intelligent Vehicles Symposium, Changshu, China, 26–30 June 2018; pp. 1476–1483.
45. Dimitrievski, M.; Veelaert, P.; Philips, W. Behavioral Pedestrian Tracking Using a Camera and LiDAR Sensors on a Moving Vehicle. Sensors 2019, 19, 391.
46. Kenney, J.B. Dedicated Short-Range Communications (DSRC) Standards in the United States. Proc. IEEE 2011, 99, 1162–1182.
47. MacHardy, Z.; Khan, A.; Obana, K.; Iwashina, S. V2X Access Technologies: Regulation, Research, and Remaining Challenges. IEEE Commun. Surv. Tutor. 2018, 20, 1858–1877.
48. Zhao, L.; Li, X.; Gu, B.; Zhou, Z.; Mumtaz, S.; Frascolla, V.; Gacanin, H.; Ashraf, M.I.; Rodriguez, J.; Yang, M.; et al. Vehicular Communications: Standardization and Open Issues. IEEE Commun. Std. Mag. 2018, 2, 74–80.
49. IEEE Standard for Wireless Access in Vehicular Environments (WAVE)—Multi-Channel Operation; IEEE Std. 1609.4; IEEE: New York, NY, USA, 2016.
50. Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. Trans. ASME J. Basic Eng. 1960, 82, 35–45.
51. Welch, G.; Bishop, G. An Introduction to the Kalman Filter. In Proceedings of the SIGGRAPH, Los Angeles, CA, USA, 12–17 August 2001; Course 8.
52. Julier, S.J.; Uhlmann, J.K. A New Extension of the Kalman Filter to Nonlinear Systems. In Proceedings of the AeroSense: The 11th International Symposium on Aerospace/Defense Sensing, Simulation, and Controls, Orlando, FL, USA, 21–25 April 1997; pp. 182–193.
53. Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking. IEEE Trans. Signal Process. 2002, 50, 174–188.
54. Daum, F. Nonlinear Filters: Beyond the Kalman Filter. IEEE Aerosp. Electron. Syst. Mag. 2005, 20, 57–69.
55. Bar-Shalom, Y.; Li, X.R.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation: Theory Algorithms and Software; John Wiley and Sons: New York, NY, USA, 2001; ISBN 0-471-41655-X.
56. Mo, L.; Song, X.; Zhou, Y.; Sun, Z.; Bar-Shalom, Y. Unbiased Converted Measurements for Tracking. IEEE Trans. Aerosp. Electron. Syst. 1998, 34, 1023–1027.
57. Lerro, D.; Bar-Shalom, Y. Tracking with Debiased Consistent Converted Measurements vs. EKF. IEEE Trans. Aerosp. Electron. Syst. 1993, 29, 1015–1022.
58. Escamilla-Ambrosio, P.J.; Lieven, N. A Multiple-Sensor Multiple-Target Tracking Approach for the Autotaxi System. In Proceedings of the IEEE Intelligent Vehicles Symposium, Parma, Italy, 14–17 June 2004; pp. 601–606.
59. Floudas, N.; Polychronopoulos, A.; Tsogas, M.; Amditis, A. Multi-Sensor Coordination and Fusion for Automotive Safety Applications. In Proceedings of the International Conference on Information Fusion, Florence, Italy, 10–13 July 2006; pp. 1–8.
60. Matzka, S.; Altendorfer, R. A Comparison of Track-to-Track Fusion Algorithms for Automotive Sensor Fusion. In Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Seoul, Korea, 20–22 August 2008; pp. 189–194.
61. Cheng, H. Autonomous Intelligent Vehicles: Theory, Algorithms, and Implementation; Springer: London, UK, 2011; ISBN 978-1-4471-2279-1.
62. Aeberhard, M.; Schlichthärle, S.; Kaempchen, N.; Bertram, T. Track-to-Track Fusion with Asynchronous Sensors Using Information Matrix Fusion for Surround Environment Perception. IEEE Trans. Intell. Transp. Syst. 2012, 13, 1717–1726.
63. Aeberhard, M.; Kaempchen, N. High-Level Sensor Data Fusion Architecture for Vehicle Surround Environment Perception. In Proceedings of the International Workshop on Intelligent Transportation, Hamburg, Germany, 22–23 March 2011.
64. Steinbaeck, J.; Steger, C.; Holweg, G.; Druml, N. Next Generation Radar Sensors in Automotive Sensor Fusion Systems. In Proceedings of the Sensor Data Fusion: Trends, Solutions, Applications, Bonn, Germany, 10–12 October 2017; pp. 1–6.
65. Chong, C.Y.; Mori, S.; Barker, W.H.; Chang, K.C. Architectures and Algorithms for Track Association and Fusion. IEEE Aerosp. Electron. Syst. Mag. 2000, 15, 5–13.
66. Bar-Shalom, Y. On the Track-to-Track Correlation Problem. IEEE Trans. Autom. Control 1981, 26, 571–572.
67. Bar-Shalom, Y.; Campo, L. The Effect of the Common Process Noise on the Two-Sensor Fused-Track Covariance. IEEE Trans. Aerosp. Electron. Syst. 1986, 22, 803–805.
68. Chong, C.Y.; Mori, S. Convex Combination and Covariance Intersection Algorithms in Distributed Fusion. In Proceedings of the International Conference on Information Fusion, Montreal, QC, Canada, 7–10 August 2001.
69. Ahmed-Zaid, F.; Bai, F.; Bai, S.; Basnayake, C.; Bellur, B.; Brovold, S.; Brown, G.; Caminiti, L.; Cunningham, D.; Elzein, H.; et al. Vehicle Safety Communications—Applications (VSC-A) Final Report; Rep. DOT HS 811 492A; National Highway Traffic Safety Administration: Washington, DC, USA, 2011.
70. Bloecher, H.L.; Dickmann, J.; Andres, M. Automotive Active Safety and Comfort Functions Using Radar. In Proceedings of the IEEE International Conference on Ultra-Wideband, Vancouver, BC, Canada, 9–11 September 2009; pp. 490–494.
71. Mobileye. Forward Collision Warning (FCW). Available online: https://www.mobileye.com/us/fleets/technology/forward-collision-warning/ (accessed on 10 November 2019).
Figure 1. Functional blocks of the proposed cooperative environment perception system.
Figure 2. Operation of the Kalman filter algorithm.
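For readers who want a concrete companion to Figure 2, the sketch below implements the standard Kalman filter predict/update recursion [50,51]. The constant-velocity model, matrices, and noise levels are illustrative assumptions for this sketch, not the exact configuration used in the paper.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Time update: propagate the state estimate and its covariance."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Measurement update: correct the prediction with measurement z."""
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(P.shape[0]) - K @ H) @ P
    return x, P

# Illustrative 2D constant-velocity model (assumed, not from the paper).
dt = 0.05                                  # e.g., a 50 ms sensor update period
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # position-only measurement
Q = 0.1 * np.eye(4)
R = 0.25 * np.eye(2)

x, P = np.zeros(4), np.eye(4)
x, P = kf_predict(x, P, F, Q)
x, P = kf_update(x, P, np.array([1.2, 0.4]), H, R)
```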
Figure 3. Operation of the unbiased converted measurement Kalman filter algorithm.
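Figure 3 differs from Figure 2 mainly in how a polar radar measurement (range, bearing) enters the update step: the measurement is first converted to Cartesian coordinates with a bias-compensation factor before a linear update is applied. The snippet below shows only the conversion idea under assumed noise values; the matching converted-measurement covariance should be taken from Mo et al. [56] (see also [57]) and is omitted here.

```python
import numpy as np

def unbiased_polar_to_cartesian(r, theta, sigma_theta):
    """Convert a (range, bearing) measurement to an unbiased Cartesian position.

    For zero-mean Gaussian bearing noise, E[cos(noise)] = exp(-sigma_theta**2 / 2),
    so the raw conversion r*cos(theta), r*sin(theta) is biased toward the origin.
    Dividing by that factor cancels the bias; the full covariance terms of the
    converted measurement follow [56] and are not reproduced in this sketch.
    """
    lam = np.exp(-sigma_theta ** 2 / 2.0)   # bias factor due to bearing noise
    x = r * np.cos(theta) / lam
    y = r * np.sin(theta) / lam
    return np.array([x, y])

# Example with assumed (not the paper's) sensor noise:
z_cart = unbiased_polar_to_cartesian(r=42.0,
                                     theta=np.deg2rad(3.5),
                                     sigma_theta=np.deg2rad(0.5))
```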
Figure 4. High-level fusion system architecture.
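In a high-level (track-to-track) architecture such as the one in Figure 4, each sensor maintains its own track and the fusion stage combines track estimates rather than raw detections. As an illustration only, the sketch below uses covariance intersection [68], a widely used combination rule that remains consistent when the cross-correlation between tracks (for example, common process noise [67]) is unknown; it is not claimed to be the exact rule used in the paper.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega):
    """Fuse two track estimates (x1, P1) and (x2, P2) with weight omega in [0, 1]."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    Pf = np.linalg.inv(omega * P1i + (1.0 - omega) * P2i)
    xf = Pf @ (omega * P1i @ x1 + (1.0 - omega) * P2i @ x2)
    return xf, Pf

def fuse_with_best_omega(x1, P1, x2, P2, grid=np.linspace(0.01, 0.99, 99)):
    """Pick the weight that minimizes the trace of the fused covariance (a common choice)."""
    best = min(grid, key=lambda w: np.trace(covariance_intersection(x1, P1, x2, P2, w)[1]))
    return covariance_intersection(x1, P1, x2, P2, best)
```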
Figure 5. Illustration of collision prediction based on the circle model [23].
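The circle model in Figure 5 reduces the collision check to a distance test between circles covering the two vehicle footprints along their predicted trajectories: a potential collision is flagged at the first prediction step at which a host circle and a remote circle overlap, and that step defines the TTC. The sketch below is a generic, single-circle-per-vehicle version of such a test; the actual circle layout, trajectory prediction, and thresholds follow [23].

```python
import numpy as np

def circles_overlap(c1, r1, c2, r2):
    """True if two circles (center, radius) intersect."""
    return np.linalg.norm(np.asarray(c1) - np.asarray(c2)) <= (r1 + r2)

def predict_ttc(host_traj, remote_traj, r_host, r_remote, dt):
    """Return the time of the first predicted overlap, or None if no collision is predicted.

    host_traj and remote_traj are lists of predicted (x, y) centers at steps k*dt.
    Each vehicle is approximated by a single circle here for brevity, whereas the
    circle model in [23] may cover a footprint with several circles.
    """
    for k, (host_c, remote_c) in enumerate(zip(host_traj, remote_traj)):
        if circles_overlap(host_c, r_host, remote_c, r_remote):
            return k * dt
    return None
```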
Figure 6. Test vehicles used for the driving experiments. (a) The host vehicle (white) and the remote vehicle (yellow); (b) experimental equipment installed on the test vehicle.
Figure 7. Overview of the hardware interface.
Figure 8. Proving ground at the Korea Automotive Technology Institute.
Figure 9. Normal driving scenarios considered for the localization performance evaluation of the proposed method. (a) Both vehicles driving in the same lane; (b) remote vehicle driving in the adjacent lane.
Figure 10. Relative distance to the remote vehicle during the normal driving scenario 1. (a) Ground truth longitudinal distance; (b) ground truth lateral distance; (c) absolute error for longitudinal distance estimates; (d) absolute error for lateral distance estimates.
Figure 11. Relative distance to the remote vehicle during the normal driving scenario 2. (a) Ground truth longitudinal distance; (b) ground truth lateral distance; (c) absolute error for longitudinal distance estimates; (d) absolute error for lateral distance estimates.
Figure 12. Relative positioning accuracy in the longitudinal and lateral directions for the combined sets of the measurements obtained from the two normal driving scenarios.
Figure 13. Relative distance to the remote vehicle during the cut-in driving scenario 1. (a) Ground truth longitudinal distance; (b) ground truth lateral distance; (c) absolute error for longitudinal distance estimates; (d) absolute error for lateral distance estimates.
Figure 14. Potential crash events predicted during the cut-in driving scenario 1. (a) Time-to-collision (TTC); (b) collision warning level.
Figure 15. Relative distance to the remote vehicle during the cut-in driving scenario 2. (a) Ground truth longitudinal distance; (b) ground truth lateral distance; (c) absolute error for longitudinal distance estimates; (d) absolute error for lateral distance estimates.
Figure 16. Potential crash events predicted during the cut-in driving scenario 2. (a) Time-to-collision (TTC); (b) collision warning level.
Figure 17. Relative distance to the remote vehicle during the cut-in driving scenario 3. (a) Ground truth longitudinal distance; (b) ground truth lateral distance; (c) absolute error for longitudinal distance estimates; (d) absolute error for lateral distance estimates.
Figure 18. Potential crash events predicted during the cut-in driving scenario 3. (a) Time-to-collision (TTC); (b) collision warning level.
Table 1. Automotive radar system specifications.
Type: Delphi ESR 2.5
                      Long-Range    Mid-Range
Frequency band        76.5 GHz      76.5 GHz
Range                 175 m         60 m
Range accuracy        0.5 m         0.25 m
Angular accuracy      0.5 deg       1.0 deg
Horizontal FOV        20 deg        90 deg
Data update           50 ms         50 ms
Table 2. Automotive camera system specifications.
Type: Mobileye 630
Frame size                        640 × 480 pixels
Dynamic range                     55 dB linear; 100 dB in HDR
Range accuracy (longitudinal)     <10% (in general)
Width accuracy                    <10%
Horizontal field-of-view (FOV)    38 deg
Data update                       66–100 ms
Table 3. Dedicated short-range communications (DSRC)-based vehicle-to-everything (V2X) communications characteristics.
Type                     IEEE WAVE
Frequency                5.850–5.925 GHz
Channels                 1 CCH, 6 SCH
Bandwidth                10 MHz
Data rate                3–27 Mbps
Maximum range            1000 m
Modulation               OFDM
Media access control     CSMA/CA
Table 4. Data description for the basic safety message (BSM) core data frame.
Message count: Sequence number for the same type of messages originated from the same sender.
Temporary ID: Device identifier that is modified periodically for on-board units (OBUs). This value may be fixed for roadside units (RSUs).
DSRC second: Milliseconds within a minute that typically represents the moment when the position was determined.
Position: Geographic latitude, longitude, and height.
Position accuracy: Semi-major axis (length and orientation) and semi-minor axis (length) of an ellipsoid representing the position accuracy.
Transmission state: Vehicle transmission state (i.e., neutral, park, forward, and reverse).
Speed: Vehicle speed.
Heading: Vehicle heading. Past values may be used if the sender is stopped.
Steering wheel angle: Angle of the vehicle steering wheel.
Acceleration: Vehicle acceleration in longitudinal, lateral, and vertical axes.
Yaw rate: Vehicle yaw rate.
Brake system status: Status of the brake and other control systems (i.e., traction control, ABS, stability control, brake boost, and auxiliary brake).
Vehicle size: Vehicle width and length.
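The BSM core fields listed in Table 4 map naturally onto a simple container. The sketch below is an illustrative Python structure whose field names, units, and types are assumptions chosen for readability; it is not the SAE J2735 [29] encoding, which is ASN.1-based.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class BSMCoreData:
    """Illustrative container for the BSM core fields listed in Table 4."""
    msg_count: int                               # sequence number from the same sender
    temporary_id: bytes                          # periodically changed device identifier
    dsrc_second_ms: int                          # milliseconds within the current minute
    latitude_deg: float
    longitude_deg: float
    height_m: float                              # geographic height
    pos_accuracy: Tuple[float, float, float]     # semi-major length, orientation, semi-minor length
    transmission_state: str                      # e.g., "neutral", "park", "forward", "reverse"
    speed_mps: float
    heading_deg: float
    steering_wheel_angle_deg: float
    accel_lon_lat_vert: Tuple[float, float, float]
    yaw_rate_dps: float
    brake_system_status: int                     # bit flags for ABS, traction/stability control, etc.
    vehicle_width_m: float
    vehicle_length_m: float
```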
Table 5. Vehicle collision warning conditions [23].
Condition               Stage                       Warning Type          Color
No collision detected   No threat (Level 0)         Visual                Gray
TTC > 2.6 s             Threat detected (Level 1)   Visual                Green
1.6 s < TTC ≤ 2.6 s     Inform driver (Level 2)     Visual and audible    Yellow
TTC ≤ 1.6 s             Warn driver (Level 3)       Visual and audible    Red
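The thresholds in Table 5 translate directly into a small decision function. In the sketch below, the 2.6 s and 1.6 s boundaries come from the table, while the function name, return convention, and use of None for "no collision predicted" are assumptions made for illustration.

```python
from typing import Optional

def warning_level(ttc: Optional[float]) -> int:
    """Map a predicted time-to-collision (seconds) to the warning level of Table 5.

    ttc is None when no collision is predicted along the estimated trajectories.
    """
    if ttc is None:
        return 0          # no threat: visual, gray
    if ttc > 2.6:
        return 1          # threat detected: visual, green
    if ttc > 1.6:
        return 2          # inform driver: visual and audible, yellow
    return 3              # warn driver: visual and audible, red
```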
Table 6. Normal driving scenario descriptions.
Scenario Number   Host Vehicle Speed (km/h)   Remote Vehicle Speed (km/h)   Remote Vehicle Driving Lane
1                 20                          25                            Same as HV
2                 20                          25                            Adjacent to HV
Table 7. Relative positioning accuracy in the longitudinal direction for the measurements obtained during the normal driving scenario 1.
Data Range (m)   Camera RMSE/SD (m)   Radar RMSE/SD (m)   V2X RMSE/SD (m)   Fusion RMSE/SD (m)
0–10             0.36 / 0.24          0.39 / 0.18         0.25 / 0.22       0.13 / 0.13
10–20            0.19 / 0.15          0.51 / 0.21         0.31 / 0.28       0.12 / 0.11
20–30            0.33 / 0.23          0.54 / 0.15         0.38 / 0.37       0.24 / 0.09
30–40            0.63 / 0.45          0.56 / 0.20         0.33 / 0.33       0.29 / 0.16
40–50            1.23 / 0.79          0.74 / 0.42         0.40 / 0.38       0.24 / 0.23
50–60            1.82 / 0.98          1.09 / 0.57         0.33 / 0.33       0.23 / 0.23
60–70            2.73 / 1.69          0.83 / 0.32         0.26 / 0.25       0.28 / 0.16
Total            1.16 / 0.96          0.67 / 0.35         0.34 / 0.32       0.22 / 0.18
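Tables 7–10 report the root-mean-square error (RMSE) and standard deviation (SD) of the position error within each ground-truth range bin. The helper below shows one straightforward way to compute such binned statistics; the bin edges match the tables, while the function name and input conventions are assumptions for illustration.

```python
import numpy as np

def binned_rmse_sd(gt_range_m, error_m, edges=(0, 10, 20, 30, 40, 50, 60, 70)):
    """Compute RMSE and SD of the position error within each ground-truth range bin."""
    gt_range_m = np.asarray(gt_range_m, dtype=float)
    error_m = np.asarray(error_m, dtype=float)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        e = error_m[(gt_range_m >= lo) & (gt_range_m < hi)]
        if e.size == 0:
            rows.append((f"{lo}-{hi}", None, None))   # no samples in this bin (N/A)
            continue
        rows.append((f"{lo}-{hi}", float(np.sqrt(np.mean(e ** 2))), float(np.std(e))))
    rows.append(("Total", float(np.sqrt(np.mean(error_m ** 2))), float(np.std(error_m))))
    return rows
```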
Table 8. Relative positioning accuracy in the lateral direction for the measurements obtained during the normal driving scenario 1.
Data Range (m)   Camera RMSE/SD (m)   Radar RMSE/SD (m)   V2X RMSE/SD (m)   Fusion RMSE/SD (m)
0–10             0.05 / 0.05          0.21 / 0.21         0.12 / 0.09       0.05 / 0.04
10–20            0.05 / 0.04          0.21 / 0.16         0.24 / 0.10       0.06 / 0.04
20–30            0.06 / 0.06          0.26 / 0.20         0.32 / 0.13       0.09 / 0.06
30–40            0.07 / 0.07          0.24 / 0.19         0.45 / 0.16       0.14 / 0.08
40–50            0.11 / 0.10          0.28 / 0.28         0.62 / 0.20       0.19 / 0.11
50–60            0.12 / 0.12          0.34 / 0.29         0.70 / 0.31       0.20 / 0.10
60–70            0.15 / 0.15          0.28 / 0.28         0.84 / 0.41       0.19 / 0.12
Total            0.09 / 0.09          0.26 / 0.23         0.48 / 0.28       0.13 / 0.09
Table 9. Relative positioning accuracy in the longitudinal direction for the measurements obtained during the normal driving scenario 2.
Data Range (m)   Camera RMSE/SD (m)   Radar RMSE/SD (m)   V2X RMSE/SD (m)   Fusion RMSE/SD (m)
0–10             N/A / N/A            0.30 / 0.28         0.44 / 0.40       0.31 / 0.31
10–20            0.49 / 0.48          0.40 / 0.31         0.36 / 0.36       0.34 / 0.32
20–30            0.91 / 0.80          0.44 / 0.39         0.41 / 0.40       0.40 / 0.40
30–40            1.52 / 1.37          0.43 / 0.40         0.47 / 0.46       0.42 / 0.42
40–50            2.80 / 1.82          0.37 / 0.33         0.34 / 0.33       0.31 / 0.30
50–60            3.61 / 2.52          0.39 / 0.26         0.27 / 0.26       0.24 / 0.22
60–70            4.29 / 3.50          0.31 / 0.26         0.38 / 0.33       0.28 / 0.27
Total            2.62 / 2.20          0.39 / 0.32         0.37 / 0.37       0.33 / 0.32
Table 10. Relative positioning accuracy in the lateral direction for the measurements obtained during the normal driving scenario 2.
Data Range (m)   Camera RMSE/SD (m)   Radar RMSE/SD (m)   V2X RMSE/SD (m)   Fusion RMSE/SD (m)
0–10             N/A / N/A            0.32 / 0.22         0.15 / 0.09       0.11 / 0.09
10–20            0.15 / 0.07          0.29 / 0.18         0.18 / 0.06       0.05 / 0.05
20–30            0.16 / 0.08          0.28 / 0.25         0.31 / 0.15       0.09 / 0.08
30–40            0.17 / 0.11          0.25 / 0.24         0.42 / 0.19       0.07 / 0.06
40–50            0.28 / 0.14          0.25 / 0.25         0.57 / 0.20       0.07 / 0.07
50–60            0.27 / 0.17          0.26 / 0.26         0.68 / 0.30       0.12 / 0.12
60–70            0.26 / 0.17          0.25 / 0.24         0.79 / 0.32       0.09 / 0.09
Total            0.22 / 0.14          0.27 / 0.26         0.51 / 0.30       0.09 / 0.09
Table 11. Cut-in driving scenario descriptions.
Scenario Number   Host Vehicle Speed (km/h)   Remote Vehicle Speed (km/h)   Cut-In Distance (m)   Number of Cut-In Maneuvers
1                 40–45                       35–40                         15–20                 4
2                 40–45                       25–30                         15–20                 5
3                 40–45                       15–20                         15–20                 8