Article

RobustE2E: Exploring the Robustness of End-to-End Autonomous Driving

1 Information Science Academy, China Electronics Technology Group Corporation, Beijing 100846, China
2 State Key Lab of Software Development Environment, Beihang University, Beijing 100191, China
3 School of Computer Science and Engineering, Beihang University, Beijing 100191, China
4 Aviation Industry Development Research Center of China, Beijing 100029, China
5 China Electronics Standardization Institute, Beijing 100007, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Electronics 2024, 13(16), 3299; https://doi.org/10.3390/electronics13163299
Submission received: 22 July 2024 / Revised: 9 August 2024 / Accepted: 10 August 2024 / Published: 20 August 2024
(This article belongs to the Special Issue Trustworthy Deep Learning in Practice)

Abstract

Autonomous driving technology has advanced significantly with deep learning, but noise and attacks threaten its real-world deployment. While research has revealed vulnerabilities in individual intelligent tasks, a comprehensive evaluation of these impacts across complete end-to-end systems is still underexplored. To address this void, we thoroughly analyze the robustness of four end-to-end autonomous driving systems against various types of noise and build the RobustE2E Benchmark, including five traditional adversarial attacks and a newly proposed Module-Wise Attack specifically targeting end-to-end autonomous driving in white-box settings, as well as four major categories of natural corruptions (a total of 17 types, with five severity levels) in black-box settings. Additionally, we extend the robustness evaluation from the open-loop model level to closed-loop case studies at the autonomous driving system level. Our comprehensive evaluation and analysis provide valuable insights into the robustness of end-to-end autonomous driving, which may offer potential guidance for targeted improvements to models. For example, (1) even the most advanced end-to-end models suffer large planning failures under minor perturbations, with perception tasks showing the most substantial decline; (2) among adversarial attacks, our Module-Wise Attack poses the greatest threat to end-to-end autonomous driving models, while PGD-$\ell_2$ is the weakest, and among the four categories of natural corruptions, noise and weather are the most harmful, followed by blur, with digital distortion being the least severe; (3) the integrated, multitask approach results in significantly higher robustness and reliability compared with the simpler design, highlighting the critical role of collaborative multitasking in autonomous driving; and (4) autonomous driving systems amplify the model’s lack of robustness. Our research contributes to developing more resilient autonomous driving models and their deployment in the real world.

1. Introduction

With recent significant advancements in deep learning, autonomous driving technology is increasingly important in today’s society. Many mature intelligent tasks, such as computer vision [1,2,3] and intelligent decision making [4,5], etc., can serve autonomous driving well. The classical implementation approach for autonomous driving involves developing and deploying a series of subtasks separately [6,7]. While this approach is relatively simple, it may suffer from cumulative errors [8] and coordination challenges [9]. End-to-end autonomous driving models, as an advanced technological solution, map raw sensor data directly to driving decisions, greatly simplifying the deployment and design process.
Despite the strong performance of end-to-end models, they face significant security challenges. The complex and dynamic driving environment introduces numerous uncertainties, and deep learning models are inherently vulnerable to malicious attacks [10,11,12,13,14,15]. Extensive research has focused on the robustness of individual tasks in autonomous driving, especially perception tasks [16,17,18,19,20,21]; these studies highlight vulnerabilities, such as environmental conditions and adversarial inputs, that can mislead perception systems and cause decision-making errors. In contrast, there is limited research on the adversarial security of prediction modules, which are also crucial for system reliability [22,23]. Recent studies have shown that even simple regression-based decision models are not robust against minor noise [24]. Moreover, despite their outstanding performance and revolutionary innovation, only limited robustness research has been conducted on end-to-end autonomous driving models [25,26], and it mainly focuses on regression-based models and lacks more comprehensive natural robustness evaluation and closed-loop system assessment.
To address this gap, we build the RobustE2E Benchmark to provide an in-depth analysis of the robustness of end-to-end autonomous driving against various types of noise. We assess five traditional adversarial attacks, introduce a novel Module-Wise Attack that injects adversarial noise at the interfaces among tasks in white-box settings, and evaluate four major categories of natural corruptions (totaling 17 types across five severity levels) in black-box settings on two representative end-to-end autonomous driving models. Our evaluation extends from the open-loop model level to a closed-loop simulation environment and a real-world car, where severe planning errors are observed. Our findings indicate that even the most advanced end-to-end autonomous driving models can suffer planning collapses under small perturbations, with the performance of perception tasks experiencing the most significant decline. Among adversarial attacks, the Module-Wise Attack is the most detrimental, while PGD-$\ell_2$ [27] is the least effective. Among natural corruptions, noise and weather are the most harmful, followed by blur, with digital distortion causing comparatively less impact. Based on the comparison of the two models, we find that integrated multitask models exhibit markedly greater robustness and reliability than simpler designs. Additionally, injecting noise into a single module affects the performance of all subsequent tasks, revealing the limited recovery ability of planning, and we demonstrate the crucial role of the perception layer as a primary target for noise. Finally, we observe that autonomous driving systems exacerbate the models’ inherent vulnerabilities owing to the cumulative effect of errors across various components. The contributions of this paper are summarized as follows:
  • Module-Wise Attack targeting End-to-End Autonomous Driving. We propose a novel white-box Module-Wise Attack that designs and injects adversarial noise at the interfaces among tasks, providing new insights into how perturbations impact the interaction among modules and their collective robustness.
  • Development of the RobustE2E Benchmark. To the best of our knowledge, RobustE2E is the first benchmark to rigorously assess the robustness of end-to-end autonomous driving against various types of noise, incorporating five traditional adversarial attacks, a novel Module-Wise Attack, and four major categories of natural corruptions, with closed-loop evaluation at the system level included.
  • Valuable Insights from Extensive Experimental Evaluation. Our comprehensive experiments deliver significant insights into the robustness and vulnerabilities of end-to-end autonomous driving, advancing the understanding of how different types of noise affect performance and interaction at the model level and system level.

2. Related Work

2.1. End-to-End Autonomous Driving

The end-to-end structure allows autonomous driving models to directly map data captured by raw sensors to driving decisions, greatly simplifying the deployment process of autonomous driving. The existing end-to-end autonomous driving models can be divided into two categories. The most direct way is to output driving decisions from the original input through a single neural network, without any supervision of perception and prediction [28,29,30]. These methods lack interpretability and are difficult to adapt to complex driving environments. In contrast, a more stable design is to jointly train all submodules with planning as the ultimate goal in order to fully utilize the role of each module, which is what we mainly focus on.
The early end-to-end autonomous driving models combine relatively few tasks. Zeng et al. [31] adopts a bounding box-based intermediate representation to construct the motion planner. Considering that Non-Maximum Suppression (NMS) in this method can lead to loss of perceptual information, P3 [9] innovatively designs an end-to-end network that utilizes maps, advanced control instructions, and LIDAR points to generate interpretable intermediate semantic occupancy representations, which facilitates safer trajectory planning. Following P3, MP3 [32] integrates online mapping in perception, thus exhibiting superior performance without HD maps, and ST-P3 [33] achieves end-to-end perception, prediction, and planning only through pure visual input for the first time. LAV [34] conducts end-to-end learning from all surrounding vehicles in addition to ego-vehicle, which can provide a richer driving experience. UniAD [35] is the first end-to-end network that integrates full-stack autonomous driving tasks. By thoroughly considering the contributions of each task to autonomous driving and mutual promotion among modules, UniAD [35] significantly surpasses previous state-of-the-art performance on each task.

2.2. Adversarial Attacks

Adversarial attacks [36,37,38] refer to carefully crafted perturbations designed for neural network input data, which are typically small but can cause models to produce completely erroneous outputs. Szegedy et al. [10] first introduced the concept of adversarial attacks and utilized an L-BFGS approximation to determine the magnitude of perturbations for attacking classification models. Following that, a series of adversarial attack methods have been proposed, such as gradient-based methods [27,39,40,41,42,43,44] and optimization-based methods [45,46,47,48,49]. Although early adversarial attacks primarily target image classification models, they effectively demonstrate the vulnerability of neural networks. Their attack principles have guided the implementation of various attack methods in different tasks, posing a serious threat to the practical application of models in the real world.
In autonomous driving, numerous researchers have proposed adversarial attack methods targeting individual tasks. Currently, most attacks focus on the perception layer, indirectly affecting decision-making. Xie et al. [50] comprehensively investigate the robustness of purely image-based 3D object detection methods. They analyze the effects of digital attacks and patch-based physical attacks on 3D detectors under various attack conditions. Additionally, some methods [17,51,52,53] utilize differentiable rendering techniques to generate physical adversarial obstacles capable of deceiving both cameras and LIDAR simultaneously. In tracking tasks, Wiyatno et al. [54] iteratively optimize physical textures based on the Expectation Over Transformation (EOT) algorithm to fool tracking models when imaged under diverse conditions. Researchers have also introduced adversarial attacks into trajectory prediction [22,23], where slight perturbations to the historical trajectories of surrounding vehicles can lead to erroneous predictions by the ego vehicle, thus affecting route planning. As end-to-end frameworks gradually become mainstream in autonomous driving, there is a growing number of attack methods [24] targeting end-to-end regression-based planning models.

2.3. Robustness Benchmark in Autonomous Driving

Numerous studies have addressed adversarial robustness in specific tasks of autonomous driving, confirming the effectiveness of common attacks and corruptions on deep learning models [52,55,56,57,58]. Additionally, several works explore the safety issues of autonomous driving from a system-level perspective. Guo et al. [59] explore the assessment of driveability in autonomous driving systems, focusing on factors, metrics, and datasets that impact the ability of autonomous vehicles to operate safely in various driving conditions. Kondermann et al. [60] present a new stereo and optical flow dataset specifically designed for urban autonomous driving scenarios, addressing challenges like low light and adverse weather conditions. Despite their innovative designs, none of these works thoroughly evaluate the impact of the proposed threats on model metrics or autonomous driving systems. From the perspective of scenarios, SafeBench [61] generates safety-critical testing cases to evaluate the robustness of autonomous driving algorithms to adversarial manipulation and natural distribution shifts.
Existing benchmarks either focus solely on models within a single task or emphasize the various environmental factors within the context of autonomous driving. As end-to-end models increasingly become a vital solution for autonomous driving, researchers have begun conducting adversarial safety research on end-to-end autonomous driving models [62]. However, the target models in these studies remain at a very basic level, typically relying on regression-based models, and the real-world robustness of models in autonomous driving systems has not been fully explored.

3. Module-Wise Attack

To effectively evaluate the robustness of end-to-end autonomous driving models, we have considered the feature of cooperative task facilitation emphasized in multiple previous works [32,33,35], and designed a novel white-box adversarial attack method, termed the Module-Wise Attack. We inject and optimize noise across the entire pipeline of the end-to-end models with a plan-oriented attack goal, detailed as follows.

3.1. Design of Module-Wise Noise

Starting from the initial input images, the method designs a set of adversarial noise
$\mathcal{N} = \{\delta_i\}_{i=1}^{n},$  (1)
where $i$ denotes the stage index of noise injection, corresponding to each phase of model inference, and $n$ represents the total number of subtasks included in the model. The noise is strategically injected into the images and latent feature representations $z_j$; thus, the selection of features for noise injection is critical. We identify Candidate Features $C$ at interfaces where task-specific information exchanges occur, i.e., all the feature information flowing from upstream modules to downstream modules at the module interaction interfaces is included in $C$. We then determine the set of features requiring perturbation, denoted as $S$, by tracing each candidate feature through the computation graph of the model. If a feature remains relevant in subsequent stages, it is included in $S$. Formally, this can be expressed as
$S = \{ z_j \mid z_j \in C \wedge z_j \in G \},$  (2)
where $G$ is the set of features present in the model’s computation graph, and we name $S$ the Strategic Perturbation Features. Clearly, $S$ is a subset of $C$, representing the features within $C$ that persist in the computation graph across the various modules. This allows noise at each stage to be updated targeting all relevant tasks.
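As an illustration of this selection step, the minimal PyTorch sketch below keeps only candidate features that remain attached to the model's computation graph; the name-keyed dictionary layout of the candidate set is an assumption for presentation, not part of any released code.

```python
import torch

def select_strategic_features(candidates):
    """Keep only candidate features that remain attached to the model's
    computation graph, i.e., the set S of Equation (2).

    `candidates` is assumed to be a dict mapping interface names to the
    tensors exchanged between modules (the candidate set C)."""
    strategic = {}
    for name, z in candidates.items():
        # A tensor that still participates in later computation carries a
        # grad_fn (non-leaf) or requires_grad (leaf); detached copies do not.
        if isinstance(z, torch.Tensor) and (z.grad_fn is not None or z.requires_grad):
            strategic[name] = z
    return strategic
```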
Once the Strategic Perturbation Features are determined, we design Adversarial Noise Templates for initializing noise in each batch of data.
$\delta_j^{T} = \mathcal{T}(z_j) \cdot \mathcal{U}(-\epsilon_j, \epsilon_j),$  (3)
where $\mathcal{T}$ denotes the tensor template function that generates an empty tensor with the same dimensions and shape as a specific feature, and $\mathcal{U}$ represents the uniform distribution used to fill the templates. These templates serve as the foundation for subsequent noise initialization and updates.
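A minimal PyTorch sketch of Formula (3) follows, assuming each feature is a tensor and the template is filled uniformly in $[-\epsilon, \epsilon]$; here torch.empty_like plays the role of the template function $\mathcal{T}$.

```python
import torch

def make_noise_template(z, eps):
    """Adversarial Noise Template for one feature z (Formula (3)):
    a tensor of the same shape as z, filled uniformly in [-eps, eps]."""
    # torch.empty_like acts as the tensor template function T(.)
    delta = torch.empty_like(z).uniform_(-eps, eps)
    delta.requires_grad_(True)  # the template will be optimized in later steps
    return delta
```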

3.2. Attack Strategy

We posit that autonomous driving, as a complex intelligent task, requires evaluating robustness across the entire collaborative chain, considering each submodule’s vulnerabilities. We explain our attack strategy in noise initialization, noise propagation and storage, and iterative updates.
We follow the model’s inference process to initialize noise at each stage sequentially. For all features $z_j$ in $S$ belonging to stage $i$, the corresponding adversarial noise template is $\delta_j^{T}$. We project the noise template into the $\ell_p$-ball ($B_p$) constrained by the feature perturbation as the initial noise and aggregate all the feature noise of stage $i$ to obtain $\delta_i$:
$\delta_i = \bigoplus_{z_j \in S_i} \Pi_{B_p(\epsilon_j)}\, \delta_j^{T},$  (4)
where $\bigoplus$ denotes the concatenation operation across the relevant dimensions after expansion, and $\Pi$ represents the projection operation.
After initialization, we obtain the noise storage set $\mathcal{N}$. These perturbations are strategically injected into the latent feature representations at each processing stage. The injection process is mathematically formulated as follows:
$Z_i = F_i(Z_{i-1}, \theta_i, \delta_i), \quad i = 1, \ldots, n,$  (5)
where $F_i$ denotes the $i$-th submodule with parameters $\theta_i$, and $Z_i$ represents the direct input features of $F_i$, which may contain multiple $z_j$. The state of the model is represented collectively as $\mathcal{Z} = \{Z_i\}_{i=1}^{n}$, capturing the cumulative impact of injected noise across multiple stages. After injection, the noise remains stored until the entire autonomous driving task is completed.
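The injection of Formula (5) can be realized, for example, with PyTorch forward pre-hooks; the sketch below assumes each submodule $F_i$ is an nn.Module, and the name-keyed containers are hypothetical layouts rather than the actual code of the evaluated models.

```python
import torch

def inject_module_noise(submodules, noise_set):
    """Add the stored perturbation delta_i to the direct inputs of each
    submodule F_i during the forward pass (Formula (5)).

    `submodules` maps stage names to nn.Module objects; `noise_set` maps the
    same names to noise tensors (both hypothetical container layouts)."""
    handles = []
    for name, module in submodules.items():
        delta = noise_set[name]

        def pre_hook(mod, inputs, delta=delta):
            # Perturb only the tensor inputs whose shape matches the noise.
            return tuple(
                x + delta if isinstance(x, torch.Tensor) and x.shape == delta.shape else x
                for x in inputs
            )

        handles.append(module.register_forward_pre_hook(pre_hook))
    return handles  # call h.remove() on each handle to restore the clean model
```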
The objective function of our attack is then formulated over all subtasks:
$\mathcal{L}(\mathcal{N}; \mathcal{Z}) = \sum_{i=1}^{n} \mathcal{L}_i(Z_i; \theta_i),$  (6)
where $\mathcal{L}_i$ is the same as the training loss function of task $i$. To optimize the adversarial noise $\mathcal{N}$, a gradient-based approach is employed, aiming to maximize $\mathcal{L}(\mathcal{N}; \mathcal{Z})$ while ensuring imperceptibility and evading detection mechanisms:
$\delta_i \leftarrow \delta_i + \frac{\epsilon_i}{k} \cdot \mathrm{sign}\big(\nabla_{\delta_i} \mathcal{L}(\mathcal{N}; \mathcal{Z})\big),$  (7)
where $\epsilon$ represents the hyperparameter for the perturbation constraint and $k$ represents the number of iterations. Here, after updating, the noise is also projected back into the $\ell_p$-ball of the corresponding feature noise. In each iteration, the noise propagates until the planning phase ends and is updated accordingly. The final noise is then injected into the model to complete the Module-Wise Attack. The method is summarized in Algorithm 1.
Algorithm 1: Module-Wise Attack
Input: The end-to-end autonomous driving model $F(x; \theta)$, minibatch images and labels.
Output: Adversarial noise for the input data.
1:  Generate Adversarial Noise Templates according to Formula (3)
2:  for t in k steps do
3:      // Forward propagation
4:      if t == 1 then
5:          Initialize the noise set $\mathcal{N}$ for each module sequentially according to Formula (4).
6:      end if
7:      Inject noise into each module sequentially according to Formula (5).
8:      Compute the losses for each module sequentially.
9:      Calculate the objective function according to Equation (6).
10:     // Back propagation
11:     Synchronize the update of all noise $\delta_i$ according to Formula (7).
12: end for
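For readers who prefer code, the condensed PyTorch-style sketch below mirrors Algorithm 1 under an assumed model interface; model.init_noise_templates and model.forward_with_noise are hypothetical wrappers, not functions of UniAD or ST-P3.

```python
import torch

def module_wise_attack(model, images, labels, eps=0.2, k=5):
    """Sketch of Algorithm 1: initialize per-stage noise from the templates,
    run the full pipeline with noise injected, sum the per-task losses
    (Equation (6)), and update every delta_i with a signed gradient step
    followed by projection (Formula (7))."""
    noise = None
    for t in range(k):
        if noise is None:
            # Stage-wise initialization from the noise templates (Formula (4)).
            noise = model.init_noise_templates(images, eps)        # hypothetical
        losses = model.forward_with_noise(images, labels, noise)   # hypothetical dict of per-task losses
        total = sum(losses.values())                                # Equation (6)
        grads = torch.autograd.grad(total, list(noise.values()))
        with torch.no_grad():
            for delta, g in zip(noise.values(), grads):
                delta += (eps / k) * g.sign()                       # Formula (7)
                delta.clamp_(-eps, eps)  # projection back into the l_inf ball
    return noise
```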

4. RobustE2E Benchmark

We propose RobustE2E, a comprehensive robustness evaluation benchmark for end-to-end autonomous driving under consistent settings. RobustE2E provides researchers with a valuable tool to gain deeper insights into the impact of various perturbations on end-to-end autonomous driving robustness, aiding in the development of more robust methods for deploying reliable and secure autonomous driving models in real-world scenarios. The RobustE2E Benchmark encompasses both adversarial robustness and natural robustness, covering 2 end-to-end autonomous driving models, 6 progressive adversarial attack methods, and 17 types of natural corruptions, with closed-loop case studies included. The overall framework of the benchmark is illustrated in Figure 1.

4.1. Robustness Evaluation Approaches

End-to-end autonomous driving models are susceptible to various types of noise interference in real environments. In accordance with the guidelines proposed by Tang et al. [12] and Xiao et al. [44], we classify these perturbations into adversarial attacks and natural corruptions and leverage them to evaluate the robustness of end-to-end autonomous driving models thoroughly. In addition, we extend the evaluation to closed-loop case studies.

4.1.1. Adversarial Attacks

To measure the robustness of the model in the worst-case scenario, we use 6 white-box adversarial attacks as an evaluation method. Specifically, we employ FGSM [39], MI-FGSM [40], PGD-$\ell_\infty$ [27], PGD-$\ell_1$ [27], PGD-$\ell_2$ [27], and Module-Wise Attack to generate adversarial noise. These attack methods vary in strength and targets, providing a more comprehensive evaluation standard.
We adapt and implement all six attack methods above on both models. The first four methods apply minor perturbations to the visual inputs, while the Module-Wise Attack injects targeted noise at different stages of the model’s inference process. Since there is no empirical guideline for the perturbation magnitude of intermediate features, we choose to apply the same perturbation constraints as used for the images. For all attacks, we conducted multiple experiments under various perturbation constraints. We report representative results, where we uniformly set the maximum perturbation constraint for each stage across various attack methods to 0.2. For iterative methods, the number of iterations is set to 5.
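For reference, a generic PGD-$\ell_\infty$ baseline under these settings (maximum perturbation 0.2, five iterations) might look like the following sketch; loss_fn is an assumed wrapper that returns the end-to-end model's scalar loss for a perturbed camera tensor, and the step size is one common heuristic rather than the benchmark's exact choice.

```python
import torch

def pgd_linf(loss_fn, images, eps=0.2, steps=5):
    """Generic PGD-l_inf baseline: perturb the camera images inside an
    eps-ball and take `steps` signed-gradient ascent steps on the loss."""
    delta = torch.empty_like(images).uniform_(-eps, eps).requires_grad_(True)
    alpha = eps / steps  # step size heuristic (assumption)
    for _ in range(steps):
        loss = loss_fn(images + delta)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()
            delta.clamp_(-eps, eps)  # project back into the l_inf ball
    return (images + delta).detach()
```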

4.1.2. Natural Corruptions

To simulate the effects of natural corruptions, we utilize 17 different black-box natural corruptions inspired by [63,64]. Specifically, natural corruptions can be classified into four categories: ❶ noise (e.g., Gaussian noise, shot noise, and impulse noise), ❷ blur (e.g., glass blur, defocus blur, motion blur, and zoom blur), ❸ weather (e.g., fog, frost, snow, and rain), and ❹ digital distortions (e.g., spatter, contrast, brightness, saturate, jpeg compression, and pixelate). Each type of natural corruption has severity levels ranging from 1 to 5, resulting in a total of 80 different settings.
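The corruption functions follow the definitions inspired by [63,64]; as an illustration, a sketch of two noise-type corruptions with severity levels is given below, where the severity-to-parameter mappings are illustrative assumptions rather than the benchmark's exact constants.

```python
import numpy as np

# Severity-to-parameter mappings below are illustrative placeholders.
GAUSS_SIGMA = [0.04, 0.06, 0.08, 0.09, 0.10]
SHOT_SCALE = [500, 250, 100, 75, 50]

def gaussian_noise(image, severity=1):
    """`image` is a float array in [0, 1]; severity ranges from 1 to 5."""
    sigma = GAUSS_SIGMA[severity - 1]
    return np.clip(image + np.random.normal(0, sigma, image.shape), 0, 1)

def shot_noise(image, severity=1):
    """Poisson (shot) noise: fewer photons per pixel at higher severity."""
    scale = SHOT_SCALE[severity - 1]
    return np.clip(np.random.poisson(image * scale) / scale, 0, 1)
```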

4.1.3. Closed-Loop Case Study

Closed-loop Evaluation in Simulation Environment. In open-loop tests, gauging the systemic impact of adversarial attacks on autonomous driving is impossible. Therefore, we extend adversarial attacks to a closed-loop simulation environment. Here, sensors capture RGB images, which the end-to-end model processes for perception, prediction, and planning. The model’s planning results are converted into control signals (throttle, brake, steering angle), completing a “single-step” driving operation and affecting the simulator’s environment, thus creating a closed-loop system. We implement real-time noise injection using an offline approach: universally applicable adversarial noise is generated with the PGD method [27] and trained on an open-source autonomous driving dataset. This noise, injected into the model’s sensor images, demonstrates strong attack effectiveness due to extensive scenario exposure during training.
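A single step of this closed loop can be summarized by the sketch below; the simulator and model interfaces (get_camera_frames, plan, trajectory_to_control, apply_control) are hypothetical placeholders rather than the CARLA API.

```python
import numpy as np

def closed_loop_step(simulator, model, universal_noise):
    """One closed-loop step: read camera frames, inject the offline universal
    noise, run perception/prediction/planning, convert the planned trajectory
    into control signals, and apply them back to the simulator."""
    frames = simulator.get_camera_frames()                # dict: view -> HxWx3 image
    attacked = {view: np.clip(img + universal_noise[view], 0, 255)
                for view, img in frames.items()}
    trajectory = model.plan(attacked)                     # end-to-end inference
    throttle, brake, steer = trajectory_to_control(trajectory)
    simulator.apply_control(throttle, brake, steer)       # closes the loop
```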
Closed-loop Evaluation in the Real World. We conducted a closed-loop test using JetBot [65], an autonomous vehicle based on the NVIDIA Jetson Nano, to demonstrate the potential harm of adversarial noise to real autonomous driving systems. We simulated a scenario where a hacker infiltrates JetBot’s software, enabling attacks on the end-to-end model at the software layer. Due to JetBot’s hardware limitations, a simple end-to-end regression model is used, which outputs expected coordinates that are converted into vehicle control signals. We conducted targeted attacks to simulate malicious hijacking, constructing the attack’s objective function from the output coordinates for rightward deviation and acceleration attacks.

4.2. Evaluation Objects

4.2.1. Dataset

Our robustness evaluation experiments are conducted on the validation split of the large-scale dataset nuScenes [66]. The nuScenes dataset comprises approximately 15 h of real-world driving data collected in Boston and Singapore. It includes meticulously curated challenging and diverse driving scenarios, covering common driving segments, weather conditions, vehicle types, road markings, etc. The full dataset provides data from a complete suite of sensors for autonomous driving (6 cameras, 1 LIDAR, 5 RADAR, GPS, IMU) and the train–val split also includes highly accurate annotation information.

4.2.2. Models

We select the full-stack end-to-end autonomous driving model, UniAD [35]. UniAD [35] encompasses the complete subtasks of perception, prediction, and decision, achieving state-of-the-art performance on the nuScenes dataset. Additionally, we include the representative ST-P3 [33], which is the first end-to-end autonomous driving model based solely on visual inputs. ST-P3 [33] uniquely integrates perception, prediction, and decision into single tasks, significantly simplifying the model’s complexity while maintaining performance.

4.2.3. End-to-End Autonomous Driving Systems

In the simulation environment, we evaluate autonomous driving intelligent agents in the CARLA simulator. The agent acts as the ego vehicle, which must complete predetermined driving routes. On the one hand, it can be equipped with sensors to interact with the environment; on the other hand, it can connect to end-to-end models for inference.
In the real world, we evaluate the JetBot car, which embeds an end-to-end model pretrained in a manually constructed environment. The car can drive in compliance with traffic regulations within this range from any starting position.

4.3. Evaluation Metrics

4.3.1. Open-Loop Experiments

In our experiments, we use metrics similar to those in the study [35] for consistency and direct comparison. We adopt Average MultiObject Tracking Accuracy (AMOTA) to evaluate tracking performance. Intersection over Union (IOU) measures the alignment between predicted and ground truth bounding boxes for lanes, roads, etc. Minimum Average Displacement Error (minADE) quantifies motion forecasting precision. Additionally, the L2 errors between ground truth and predicted trajectories and the collision rate are applied to assess the planning’s safety and reliability. These metrics ensure a comprehensive evaluation of the proposed benchmark, aligning with established standards in autonomous driving research.
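As a concrete illustration of the planning metric, the L2 error can be computed as in the sketch below, assuming predicted and ground-truth trajectories are given as (T, 2) arrays of waypoints in meters.

```python
import numpy as np

def planning_l2_error(pred_traj, gt_traj):
    """Average L2 distance between predicted and ground-truth future waypoints.

    Both inputs are assumed to be arrays of shape (T, 2) holding (x, y)
    positions in meters over the planning horizon."""
    return float(np.mean(np.linalg.norm(pred_traj - gt_traj, axis=-1)))
```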

4.3.2. Closed-Loop Experiments

For simulation experiments, the metrics include RouteCompletionTest (percentage of route completed), OutsideRouteLanesTest (time spent outside designated lanes), CollisionTest (number of collisions), RunningRedLightTest (instances of running red lights), and RunningStopTest (failures to stop at stop signs). InRouteTest assesses whether the vehicle stayed within route boundaries, AgentBlockedTest checks if the vehicle blocks other agents, and Timeout measures task completion within a time limit. The Driving Score aggregates these metrics for a comprehensive assessment of overall driving performance.
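One common way to aggregate such metrics, used by the CARLA Leaderboard, multiplies route completion by per-infraction penalty factors; the sketch below illustrates this idea, with the example penalty coefficients treated as assumptions rather than the exact values used in our evaluation.

```python
def driving_score(route_completion, infraction_penalties):
    """Aggregate driving performance as route completion scaled by the
    product of multiplicative per-infraction penalty factors.

    `route_completion` is a percentage in [0, 100]; `infraction_penalties`
    is a list of factors in (0, 1], one per recorded infraction."""
    penalty = 1.0
    for factor in infraction_penalties:
        penalty *= factor
    return route_completion * penalty

# Example (illustrative penalties): 95% route completion with one collision
# (0.60) and one red-light violation (0.70) gives 95 * 0.60 * 0.70 = 39.9.
```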
As for the real-world evaluation, no established performance metrics exist yet; we only report the driving errors observed during the vehicle’s operation.

5. Experiments

This section details the benchmark evaluation results and analysis. Our RobustE2E aims to address the following research questions through extensive experimentation: (1) Are high-performing end-to-end autonomous driving models equally robust to common types of noise? (2) Which type of noise poses the greatest threat to end-to-end autonomous driving models? (3) How does the complexity of subtasks in end-to-end autonomous driving models affect the final planning’s robustness? (4) In which subtasks does noise injection have the most detrimental impact on autonomous driving planning?

5.1. Main Results

Even the most advanced models can collapse under small perturbations, with the performance of perception tasks experiencing the most significant decline, which ultimately results in large planning errors. Table 1 and Table 2 show the main evaluation results of UniAD [35] and ST-P3 [33] under adversarial attacks and natural corruptions, respectively, where the bolded cell of each column represents the worst result for that metric. For UniAD [35], the tracking task’s key metric, AMOTA, dropped significantly from a high average of 0.576 to 0.148 (a decrease of 74.15%). Although subsequent modules also experience performance degradation, the extent is far less than that of the track module. The map module is the most robust, with the most important Lanes-IOU metric decreasing by an average of only 5.32%. Motion and occupancy predictions show larger errors; even outstanding mapping performance cannot compensate for the fragility of the track module. Unsurprisingly, due to perception and prediction errors, the final planning results are greatly affected. The average L2 error between the predicted and actual expected trajectories reached 1.72 m after attacks, compared with the original 1.08 m (an error increase of 58.42%). On real roads, this distance can cause serious safety incidents. ST-P3 [33] has fewer subtasks but shows a similar trend overall. The IOU of the perception layer dropped significantly by 19.69%, ultimately causing the planning L2 error to increase from 1.58 m to 2.88 m on average (an error increase of 81.91%).
Figure 2a,b depicts the model’s original performance and the predictions after injecting task-wise noise throughout the entire pipeline on various tasks in the same scenario. The ego vehicle’s planned route exhibits severe errors, such as making sharp turns leading to encroachment onto the lawn (red arrows). It is evident that the model makes errors across various tasks, such as missing surrounding vehicles and incorrectly predicting actions for stationary vehicles behind. There are significant discrepancies in the model’s mapping, which we believe to be the primary cause of planning errors in this scenario, as the vehicle predicts an area to the left front that is not drivable, resulting in a significant deviation to the right.

5.2. Comparison across Different Attack Methods

Adversarial attacks: Our Module-Wise Attack achieves the strongest attack effects on both models. Overall, the threat ranking of the six adversarial attack methods to end-to-end autonomous driving models can be divided into three levels: Module-Wise Attack; MI-FGSM [40] and PGD-$\ell_\infty$ [27]; FGSM [39], PGD-$\ell_1$ [27], and PGD-$\ell_2$ [27]. Figure 3a,b shows the increase in planning error and collision rate caused by different adversarial attack methods. Our Module-Wise Attack exploits the vulnerabilities in the interactions between end-to-end autonomous driving modules, resulting in the greatest overall disruption. The strength of MI-FGSM [40] is similar to that of PGD-$\ell_\infty$ [27]. MI-FGSM [40] even surpasses Module-Wise Attack on two metrics for UniAD [35], but it is not as effective as PGD-$\ell_\infty$ [27] on ST-P3. Additionally, under the same perturbation budget, the attack strength constrained by the $\ell_1$ and $\ell_2$ norms is significantly lower than that of the $\ell_\infty$ norm, which permits a much larger overall perturbation to the image pixels; their effectiveness is almost similar to that of FGSM [39].
Natural corruptions: Overall, the threat level of the four types of corruptions to end-to-end autonomous driving models is much lower than that of adversarial attacks. Their patterns are not entirely consistent across different models, unlike adversarial attacks. In summary, as illustrated in Figure 3c,d, the noise and weather corruption categories exhibit greater harm. In the weather category, the plan on UniAD [35] suffers the most severe damage, and the noise category on ST-P3 [33] results in a planning error increase of 2.48 m, more than half of the other three categories combined (1.09 m). Additionally, digital distortion shows the least harm in both models. This particularly highlights the safety risks posed by weather and noise to autonomous driving systems.

5.3. Analysis of Impact of Subtask Design on Model Planning

The integrated, multitask approach results in significantly higher robustness and reliability compared with the simpler design, highlighting the critical role of collaborative multitasking in autonomous driving. The two models we selected are highly representative. UniAD [35] is a full-stack model integrating five autonomous driving tasks, while ST-P3 includes only one task each for perception, prediction, and planning, achieving decent planning performance with a simpler model structure. Comparing the evaluation results of these two models, UniAD [35] demonstrates higher robustness in planning under common perturbations. Its planning trajectory error increases by an average of 58.72%, compared with 82.79% for ST-P3, and its collision rate increases by an average of 0.74%, compared with 1.66%.
The superior robustness of UniAD [35] is due to its integrated architecture, which allows multiple tasks to provide complementary information and compensate for each other’s errors. This redundancy and cross-validation among tasks enhance stability and reliability. UniAD’s design [35] promotes error correction mechanisms, where inconsistencies detected by one task can be rectified by another. In contrast, ST-P3’s simpler structure [33] lacks such intertask interactions, making it more susceptible to errors and perturbations. This plan-oriented integrated approach underscores the importance of holistic designs in enhancing the resilience and safety of autonomous driving models.

5.4. Comparison of Noise Injection toward Different Modules

The experiments in this section are primarily conducted to assess the impact of injecting noise at different stages on the model’s performance. We choose UniAD [35] because of its full-stack architecture. In all experiments, noise is injected at only one stage. For each stage of the noise, we design two objective functions, corresponding to Task-oriented and Plan-oriented approaches. In the Task-oriented approach, we choose the loss function of the multiobject tracking for updating the noise $N_I$ for the input images. Subsequently, for each stage of noise, we choose the loss of its neighboring task as the attack target, namely, $\mathcal{L}_{motion}$ for $N_A$ injected into the Track-Motion branch and $N_M$ injected into the Map-Motion branch, $\mathcal{L}_{occ}$ for $N_T$ injected into the Motion-Occ branch, and $\mathcal{L}_{plan}$ for $N_E$ injected into the Motion-Plan branch. In the Plan-oriented approach, planning is targeted as the attack objective for each level of noise. The attack setup is consistent with Section 4.2.3. Figure 4 illustrates the experimental results.
Overall, injecting noise at a single stage affects the performance of almost all subsequent tasks, indicating that the planning module has limited capabilities to recover from failures in upstream modules. In addition, the first four subplots all exhibit the same trend: under the Task-oriented attack, the performance degradation of each module is higher compared with the Plan-oriented attack, particularly evident in the motion module (the average increase in minADE is 2.32 m for the Task-oriented attack, while it is only 0.19 m for Plan-oriented attacks). However, in the subplots for the final planning module, the trend is completely reversed, especially noticeable in the comparison between the lines in the $N_I$ and $N_A$ columns. This suggests that deteriorating the performance of specific tasks does not necessarily lead to worse planning performance. Therefore, we choose the sum of losses across all tasks as the attack target for the Module-Wise Attack to achieve a relatively balanced attack across tasks.
The perception layer is vital for understanding the environment, making it the most critical target for noise injection and the one with the greatest impact on planning performance. For the track and map modules in the perception layer, only $N_I$ affects their performance (an average decrease of 0.569 in AMOTA and 12.32 in Lanes-IOU (%)), which clearly aligns with intuition. Under our experimental setup, for the prediction layer’s motion and occupancy modules, noise injection at the image level is more severe for the model than noise injection on the Track-Motion branch. However, in the final planning module, this comparison is entirely reversed. Furthermore, for these three modules, the threat posed by noise injection on the Track-Motion branch is greater than that of noise injection on the Map-Motion branch. Additionally, as for the planning module, noise injection at the perception layer yields a more effective attack than noise injection at the prediction layer. Although attacking the Motion-Occ branch leads to a decrease in the performance of the occupancy module, this impact is not fully reflected in the plan’s L2 error; in fact, its effect on the collision rate is slightly more pronounced. Based on these observations, we believe that in autonomous driving, the perception layer plays the most critical role in understanding the environment, especially in detection and tracking tasks. Therefore, attacks targeting input images and perceptual results significantly impact the planning performance of the ego vehicle. Indeed, other tasks also provide various essential auxiliary information for the overall performance of autonomous driving. Therefore, injecting noise across the entire pipeline enables the most effective attack, as demonstrated in Section 5.2.
Figure 5 illustrates the changes in the predictions of the autonomous driving model after injecting noise at different stages. In this scenario, injecting noise into the images directly affects the results of detection and mapping (Figure 5b), making subsequent prediction and planning tasks equally challenging. Injecting noise into the perceptual layer results in a noticeable deviation in trajectory prediction as well as its confidence compared with the original (Figure 5c,d). Injecting noise into the Motion-Plan branch leads to the most significant deviation in planned trajectories (Figure 5f), consistent with the above conclusions.

6. Case Studies

To comprehensively assess the model’s robustness at the system level, we conduct case studies in both simulated environments and real-world autonomous driving systems.

6.1. Closed-Loop Experiment in the Simulation Environment

We select the representative ST-P3 [33] and integrate it with the classic driving simulator CARLA to complete closed-loop autonomous driving tasks. Universal adversarial noise is generated on the nuScenes dataset [66], with the maximum perturbation under the $\ell_\infty$ constraint set to 0.2 and five iterations per frame. Initially, the noise targets six camera views; however, since the CARLA simulation only uses four cameras (front, rear, left, and right), we inject the corresponding noise into these views.
In CARLA’s Town01, we test both long and short routes under Clear Noon, Hard Rain Night, and Wet Cloudy Sunset conditions. The model makes real-time decisions based on sensor data, guiding the vehicle to the endpoint while handling behaviors like overtaking and lane changing. The model’s performance is evaluated using the Driving Score, based on metrics such as route completion and collision assessment.
We report the average results of 6 experiments, showing that the Driving Score decreases from 88.175 to 29.545 after attacks, as shown in Table 3. Under normal conditions, the model can nearly perfectly navigate the ego vehicle to the destination, with only minor lane deviations. However, under noise attacks, most metrics deteriorate significantly. The vehicle could not reach the designated endpoint, exhibiting severe lane deviations (27.33%) and collisions. While a single collision in a simulation may not lead to severe consequences, it poses a serious threat to life safety on real roads.
Visual examples illustrate the impact. In Figure 6, real-time noise injection on the Town01 short route causes the ego vehicle to crash at a specific location, with comparison frames from normal driving. During the attack, the vehicle sways and eventually veers left, colliding head-on with an oncoming vehicle. In Figure 7, the vehicle under attack veers toward the opposite lane’s roadside and gets stuck, exceeding the timeout period.

6.2. Closed-Loop Experiment in the Real World

Following Section 4.1.3, targeted attacks are simulated by manipulating these coordinates for rightward deviation and acceleration, implemented using FGSM [39]. For efficiency, the noise is obtained through a single-step optimization, with the perturbation magnitude set to only 0.01 under the $\ell_\infty$ norm (relative to 255). The results indicate that the robustness of the real car system is even worse, as minimal image perturbations can achieve the attacker’s goals.
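A single-step targeted FGSM of this kind can be sketched as follows, assuming the on-board model regresses a 2D target point whose larger x coordinate steers the car further to the right (an assumption about the JetBot model's output convention) and that input images are normalized to [0, 1].

```python
import torch

def targeted_fgsm_right(model, image, eps=0.01):
    """Single-step targeted FGSM that pushes the regressed target point to
    the right; eps = 0.01 corresponds to 0.01 of the [0, 1] pixel range."""
    image = image.clone().detach().requires_grad_(True)
    out = model(image).flatten()   # assumed to contain [x, y] target coordinates
    loss = -out[0]                 # minimizing -x drives the predicted x rightward
    loss.backward()
    with torch.no_grad():
        adv = (image - eps * image.grad.sign()).clamp(0.0, 1.0)
    return adv.detach()
```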
We conduct real-world robustness evaluation experiments in an approximately 6 m² manually constructed test area, which includes a large outer ring road and multiple intersections and T-junctions in the middle, with traffic lights set up along the roads. We attack the pretrained model on the car, randomly selecting the car’s starting point. Under normal conditions, the car drives according to traffic regulations until manually stopped (it has no navigation functionality). We carry out “Left Deviation” and “Right Deviation” targeted attacks on the car, resulting in significant directional deviations on a normal road. We record videos during driving: Figure 8 shows the car veering toward the wooded area on the right at a right-turn intersection due to our attack, and Figure 9 shows the car leaning to the left during a right turn. Real-world experiments demonstrate that even milder perturbations than in the digital world can effectively defeat the autonomous driving system.

6.3. Discussions

Based on the observations from the case studies and the robustness evaluation results presented earlier, we conclude that autonomous driving systems amplify the model’s lack of robustness. In our case studies, we only use black-box attacks and theoretically weaker perturbations, which pose less threat compared with the robustness assessment methods in RobustE2E. Attacks that cause minimal metric drops at the model level (such as FGSM [39]) can lead to severe erroneous driving decisions in autonomous driving systems (e.g., in real-world case studies). We believe this is due to the cumulative effect of errors across various components of the autonomous driving system, including hardware and control, etc., which amplify minor deviations at the model level.

7. Conclusions

This paper first designs a novel adversarial attack targeting end-to-end autonomous driving, i.e., the Module-Wise Attack, based on the belief that the interaction interfaces between various intelligent tasks are potential vulnerabilities. Then, we construct the RobustE2E Benchmark to address the crucial gap in evaluating the robustness of end-to-end autonomous driving against various types of noise. RobustE2E assesses five traditional adversarial attacks and our proposed Module-Wise Attack, evaluates four major categories of natural corruptions with multiple severity levels across two representative models, and finally extends the evaluation from open-loop models to closed-loop simulation and real-world scenarios to analyze the system-level impact. We address the research questions posed earlier and uncover significant insights into model performance under perturbations. Our findings reveal that even advanced end-to-end models suffer large planning failures under minor perturbations, with perception tasks showing the most substantial decline. The proposed Module-Wise Attack proves to be the most severe adversarial threat, while PGD-$\ell_2$ is the least. Among natural corruptions, noise and weather have the greatest detrimental effect, and they are exactly the two most common in real-world scenarios. The analysis of multitask versus simpler designs highlights that integrated multitask models offer greater robustness and reliability. Additionally, injecting noise into a single module impacts all subsequent tasks, and the results underscore the critical role of the perception layer and the limited recovery ability of the planning module. Our research demonstrates that autonomous driving systems can amplify inherent model vulnerabilities due to cumulative errors. We strongly hope that this study can raise awareness regarding the robustness issues of end-to-end autonomous driving and provide valuable guidance for enhancing the critical and sensitive modules of end-to-end autonomous driving to promote its deployment in the real world.

Author Contributions

Conceptualization, W.J. and L.W.; methodology, W.J. and L.W.; validation, L.W., T.Z. and W.B.; data curation, Y.C., J.D. and W.B.; writing—original draft preparation, L.W. and T.Z.; writing—review and editing, W.J., Y.C. and J.D.; visualization, Z.Z. and Q.F. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key R&D Program of China (No. 2021ZD0110603) and Outstanding Research Project of Shen Yuan Honors College, BUAA (Grant. 230123206).

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Tseng, Y.H.; Jan, S.S. Combination of computer vision detection and segmentation for autonomous driving. In Proceedings of the 2018 IEEE/ION Position, Location and Navigation Symposium (PLANS), Monterey, CA, USA, 23–26 April 2018; pp. 1047–1052. [Google Scholar]
  2. Song, H. The application of computer vision in responding to the emergencies of autonomous driving. In Proceedings of the 2020 International Conference on Computer Vision, Image and Deep Learning (CVIDL), Nanchang, China, 15–17 May 2020; pp. 1–5. [Google Scholar]
  3. Kanchana, B.; Peiris, R.; Perera, D.; Jayasinghe, D.; Kasthurirathna, D. Computer vision for autonomous driving. In Proceedings of the 2021 3rd International Conference on Advancements in Computing (ICAC), Shanghai, China, 23–25 April 2021; pp. 175–180. [Google Scholar]
  4. Hubmann, C.; Becker, M.; Althoff, D.; Lenz, D.; Stiller, C. Decision making for autonomous driving considering interaction and uncertain prediction of surrounding vehicles. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 1–14 June 2017; pp. 1671–1678. [Google Scholar]
  5. Hoel, C.J.; Driggs-Campbell, K.; Wolff, K.; Laine, L.; Kochenderfer, M.J. Combining planning and deep reinforcement learning in tactical decision making for autonomous driving. IEEE Trans. Intell. Veh. 2019, 5, 294–305. [Google Scholar] [CrossRef]
  6. Nvidia. NVIDIA DRIVE End-to-End Solutions for Autonomous Vehicles. 2022. Available online: https://developer.nvidia.com/drive (accessed on 21 July 2024).
  7. Mobileye. Mobileye under the Hood. 2022. Available online: https://www.mobileye.com/ces-2022/ (accessed on 21 July 2024).
  8. Cui, H.; Radosavljevic, V.; Chou, F.C.; Lin, T.H.; Nguyen, T.; Huang, T.K.; Schneider, J.; Djuric, N. Multimodal trajectory predictions for autonomous driving using deep convolutional networks. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 2090–2096. [Google Scholar]
  9. Sadat, A.; Casas, S.; Ren, M.; Wu, X.; Dhawan, P.; Urtasun, R. Perceive, predict, and plan: Safe motion planning through interpretable semantic representations. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 414–430. [Google Scholar]
  10. Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. arXiv 2013, arXiv:1312.6199. [Google Scholar]
  11. Zhang, C.; Liu, A.; Liu, X.; Xu, Y.; Yu, H.; Ma, Y.; Li, T. Interpreting and Improving Adversarial Robustness of Deep Neural Networks with Neuron Sensitivity. IEEE Trans. Image Process. 2021, 30, 1291–1304. [Google Scholar] [CrossRef] [PubMed]
  12. Tang, S.; Gong, R.; Wang, Y.; Liu, A.; Wang, J.; Chen, X.; Yu, F.; Liu, X.; Song, D.; Yuille, A.; et al. Robustart: Benchmarking robustness on architecture design and training techniques. arXiv 2021, arXiv:2109.05211. [Google Scholar]
  13. Liu, A.; Liu, X.; Yu, H.; Zhang, C.; Liu, Q.; Tao, D. Training robust deep neural networks via adversarial noise propagation. IEEE Trans. Image Process. 2021, 30, 5769–5781. [Google Scholar] [CrossRef] [PubMed]
  14. Liu, A.; Tang, S.; Liang, S.; Gong, R.; Wu, B.; Liu, X.; Tao, D. Exploring the Relationship between Architecture and Adversarially Robust Generalization. arXiv 2022, arXiv:2209.14105. [Google Scholar]
  15. Guo, J.; Bao, W.; Wang, J.; Ma, Y.; Gao, X.; Xiao, G.; Liu, A.; Dong, J.; Liu, X.; Wu, W. A Comprehensive Evaluation Framework for Deep Model Robustness. Pattern Recognit. 2023, 137, 109308. [Google Scholar] [CrossRef]
  16. Abdelfattah, M.; Yuan, K.; Wang, Z.J.; Ward, R. Towards universal physical attacks on cascaded camera-lidar 3d object detection models. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 19–22 September 2021; pp. 3592–3596. [Google Scholar]
  17. Cao, Y.; Wang, N.; Xiao, C.; Yang, D.; Fang, J.; Yang, R.; Chen, Q.A.; Liu, M.; Li, B. Invisible for both camera and lidar: Security of multi-sensor fusion based perception in autonomous driving under physical-world attacks. In Proceedings of the 2021 IEEE Symposium on Security and Privacy (SP), Online, 23–26 May 2021; pp. 176–194. [Google Scholar]
  18. Boloor, A.; Garimella, K.; He, X.; Gill, C.; Vorobeychik, Y.; Zhang, X. Attacking vision-based perception in end-to-end autonomous driving models. J. Syst. Archit. 2020, 110, 101766. [Google Scholar] [CrossRef]
  19. Duan, R.; Mao, X.; Qin, A.K.; Chen, Y.; Ye, S.; He, Y.; Yang, Y. Adversarial laser beam: Effective physical-world attack to dnns in a blink. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 16062–16071. [Google Scholar]
  20. Song, D.; Eykholt, K.; Evtimov, I.; Fernandes, E.; Li, B.; Rahmati, A.; Tramer, F.; Prakash, A.; Kohno, T. Physical adversarial examples for object detectors. In Proceedings of the 12th USENIX Workshop on Offensive Technologies (WOOT 18), Baltimore, MD, USA, 13–14 August 2018. [Google Scholar]
  21. Huang, L.; Gao, C.; Zhou, Y.; Xie, C.; Yuille, A.L.; Zou, C.; Liu, N. Universal physical camouflage attacks on object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Online, 14–19 June 2020; pp. 720–729. [Google Scholar]
  22. Zhang, Q.; Hu, S.; Sun, J.; Chen, Q.A.; Mao, Z.M. On adversarial robustness of trajectory prediction for autonomous vehicles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 15159–15168. [Google Scholar]
  23. Cao, Y.; Xiao, C.; Anandkumar, A.; Xu, D.; Pavone, M. Advdo: Realistic adversarial attacks for trajectory prediction. In Proceedings of the European Conference on Computer Vision, New Orleans, LA, USA, 19–24 June 2022; pp. 36–52. [Google Scholar]
  24. Wu, H.; Yunas, S.; Rowlands, S.; Ruan, W.; Wahlström, J. Adversarial driving: Attacking end-to-end autonomous driving. In Proceedings of the 2023 IEEE Intelligent Vehicles Symposium (IV), Anchorage, AK, USA, 4–7 June 2023; pp. 1–7. [Google Scholar]
  25. Chen, L.; Wu, P.; Chitta, K.; Jaeger, B.; Geiger, A.; Li, H. End-to-end autonomous driving: Challenges and frontiers. IEEE Trans. Pattern Anal. Mach. Intell. 2024. [Google Scholar] [CrossRef] [PubMed]
  26. Shibly, K.H.; Hossain, M.D.; Inoue, H.; Taenaka, Y.; Kadobayashi, Y. Towards autonomous driving model resistant to adversarial attack. Appl. Artif. Intell. 2023, 37, 2193461. [Google Scholar] [CrossRef]
  27. Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards deep learning models resistant to adversarial attacks. arXiv 2017, arXiv:1706.06083. [Google Scholar]
  28. Chen, D.; Koltun, V.; Krähenbühl, P. Learning to drive from a world on rails. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Nashville, TN, USA, 19–25 June 2021; pp. 15590–15599. [Google Scholar]
  29. Prakash, A.; Chitta, K.; Geiger, A. Multi-modal fusion transformer for end-to-end autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 7077–7087. [Google Scholar]
  30. Wu, P.; Jia, X.; Chen, L.; Yan, J.; Li, H.; Qiao, Y. Trajectory-guided control prediction for end-to-end autonomous driving: A simple yet strong baseline. Adv. Neural Inf. Process. Syst. 2022, 35, 6119–6132. [Google Scholar]
  31. Zeng, W.; Luo, W.; Suo, S.; Sadat, A.; Yang, B.; Casas, S.; Urtasun, R. End-to-end interpretable neural motion planner. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 8660–8669. [Google Scholar]
32. Casas, S.; Sadat, A.; Urtasun, R. Mp3: A unified model to map, perceive, predict and plan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 14403–14412.
33. Hu, S.; Chen, L.; Wu, P.; Li, H.; Yan, J.; Tao, D. St-p3: End-to-end vision-based autonomous driving via spatial-temporal feature learning. In Proceedings of the European Conference on Computer Vision, Tel-Aviv, Israel, 23–27 October 2022; pp. 533–549.
34. Chen, D.; Krähenbühl, P. Learning from all vehicles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 17222–17231.
35. Hu, Y.; Yang, J.; Chen, L.; Li, K.; Sima, C.; Zhu, X.; Chai, S.; Du, S.; Lin, T.; Wang, W.; et al. Planning-oriented autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 17853–17862.
36. Liu, S.; Wang, J.; Liu, A.; Li, Y.; Gao, Y.; Liu, X.; Tao, D. Harnessing Perceptual Adversarial Patches for Crowd Counting. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, Copenhagen, Denmark, 7–11 November 2022; pp. 2055–2069.
37. Liu, A.; Huang, T.; Liu, X.; Xu, Y.; Ma, Y.; Chen, X.; Maybank, S.J.; Tao, D. Spatiotemporal attacks for embodied agents. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 122–138.
38. Wang, J.; Liu, A.; Yin, Z.; Liu, S.; Tang, S.; Liu, X. Dual attention suppression attack: Generate adversarial camouflage in physical world. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 8565–8574.
39. Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and harnessing adversarial examples. arXiv 2014, arXiv:1412.6572.
40. Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; Li, J. Boosting adversarial attacks with momentum. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 9185–9193.
41. Wang, H.; Dong, K.; Zhu, Z.; Qin, H.; Liu, A.; Fang, X.; Wang, J.; Liu, X. Transferable Multimodal Attack on Vision-Language Pre-training Models. In Proceedings of the 2024 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 20–22 May 2024; p. 102.
42. Liu, A.; Guo, J.; Wang, J.; Liang, S.; Tao, R.; Zhou, W.; Liu, C.; Liu, X.; Tao, D. X-adv: Physical adversarial object attacks against x-ray prohibited item detection. arXiv 2023, arXiv:2302.09491.
43. Xiao, Y.; Zhang, T.; Liu, S.; Qin, H. Benchmarking the robustness of quantized models. arXiv 2023, arXiv:2304.03968.
44. Xiao, Y.; Liu, A.; Zhang, T.; Qin, H.; Guo, J.; Liu, X. RobustMQ: Benchmarking robustness of quantized models. Vis. Intell. 2023, 1, 30.
45. Carlini, N.; Wagner, D. Towards evaluating the robustness of neural networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–26 May 2017; pp. 39–57.
46. Liu, A.; Tang, S.; Chen, X.; Huang, L.; Qin, H.; Liu, X.; Tao, D. Towards Defending Multiple lp-Norm Bounded Adversarial Perturbations via Gated Batch Normalization. Int. J. Comput. Vis. 2023, 132, 1881–1898.
47. Li, S.; Zhang, S.; Chen, G.; Wang, D.; Feng, P.; Wang, J.; Liu, A.; Yi, X.; Liu, X. Towards Benchmarking and Assessing Visual Naturalness of Physical World Adversarial Attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 12324–12333.
48. Liu, A.; Liu, X.; Fan, J.; Ma, Y.; Zhang, A.; Xie, H.; Tao, D. Perceptual-sensitive gan for generating adversarial patches. In Proceedings of the AAAI Conference on Artificial Intelligence, Waikiki, HI, USA, 27 January–1 February 2019; Volume 33, pp. 1028–1035.
49. Liu, A.; Wang, J.; Liu, X.; Cao, B.; Zhang, C.; Yu, H. Bias-based universal adversarial patch attack for automatic check-out. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 395–410.
50. Xie, S.; Li, Z.; Wang, Z.; Xie, C. On the Adversarial Robustness of Camera-based 3D Object Detection. arXiv 2023, arXiv:2301.10766.
51. Abdelfattah, M.; Yuan, K.; Wang, Z.J.; Ward, R. Adversarial attacks on camera-lidar models for 3d car detection. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Online, 27 September–1 October 2021; pp. 2189–2194.
52. Zhang, T.; Xiao, Y.; Zhang, X.; Li, H.; Wang, L. Benchmarking the Physical-world Adversarial Robustness of Vehicle Detection. arXiv 2023, arXiv:2304.05098.
53. Jiang, W.; Zhang, T.; Liu, S.; Ji, W.; Zhang, Z.; Xiao, G. Exploring the Physical-World Adversarial Robustness of Vehicle Detection. Electronics 2023, 12, 3921.
54. Wiyatno, R.R.; Xu, A. Physical adversarial textures that fool visual object tracking. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 4822–4831.
55. Michaelis, C.; Mitzkus, B.; Geirhos, R.; Rusak, E.; Bringmann, O.; Ecker, A.S.; Bethge, M.; Brendel, W. Benchmarking robustness in object detection: Autonomous driving when winter is coming. arXiv 2019, arXiv:1907.07484.
56. Dong, Y.; Kang, C.; Zhang, J.; Zhu, Z.; Wang, Y.; Yang, X.; Su, H.; Wei, X.; Zhu, J. Benchmarking robustness of 3d object detection to common corruptions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 1022–1032.
57. Zhang, T.; Wang, L.; Li, H.; Xiao, Y.; Liang, S.; Liu, A.; Liu, X.; Tao, D. LanEvil: Benchmarking the Robustness of Lane Detection to Environmental Illusions. arXiv 2024, arXiv:2406.00934.
58. Nesti, F.; Rossolini, G.; Nair, S.; Biondi, A.; Buttazzo, G. Evaluating the robustness of semantic segmentation for autonomous driving against real-world adversarial patch attacks. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Online, 3–7 January 2022; pp. 2280–2289.
59. Guo, J.; Kurup, U.; Shah, M. Is it safe to drive? An overview of factors, metrics, and datasets for driveability assessment in autonomous driving. IEEE Trans. Intell. Transp. Syst. 2019, 21, 3135–3151.
60. Kondermann, D.; Nair, R.; Honauer, K.; Krispin, K.; Andrulis, J.; Brock, A.; Gussefeld, B.; Rahimimoghaddam, M.; Hofmann, S.; Brenner, C.; et al. The hci benchmark suite: Stereo and flow ground truth with uncertainties for urban autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Las Vegas, NV, USA, 25 June–2 July 2016; pp. 19–28.
61. Xu, C.; Ding, W.; Lyu, W.; Liu, Z.; Wang, S.; He, Y.; Hu, H.; Zhao, D.; Li, B. Safebench: A benchmarking platform for safety evaluation of autonomous vehicles. Adv. Neural Inf. Process. Syst. 2022, 35, 25667–25682.
62. Deng, Y.; Zheng, X.; Zhang, T.; Chen, C.; Lou, G.; Kim, M. An analysis of adversarial attacks and defenses on autonomous driving models. In Proceedings of the 2020 IEEE International Conference on Pervasive Computing and Communications (PerCom), Austin, TX, USA, 23–27 March 2020; pp. 1–10.
63. Jung, A.B.; Wada, K.; Crall, J.; Tanaka, S.; Graving, J.; Reinders, C.; Yadav, S.; Banerjee, J.; Vecsei, G.; Kraft, A.; et al. imgaug. 2020. Available online: https://github.com/aleju/imgaug (accessed on 1 February 2020).
64. Hendrycks, D.; Dietterich, T. Benchmarking neural network robustness to common corruptions and perturbations. arXiv 2019, arXiv:1903.12261.
65. Nvidia. JetBot. Available online: https://github.com/NVIDIA-AI-IOT/jetbot (accessed on 3 February 2021).
66. Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. nuscenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Online, 14–19 June 2020; pp. 11621–11631.
Figure 1. Overview of the RobustE2E Benchmark. It encompasses both adversarial and natural robustness, covering 2 end-to-end autonomous driving models, 6 progressive adversarial attack methods, and 17 types of natural corruptions, with closed-loop case studies included.
Figure 2. Visual comparison of the same scenario before and after the task-wise attack. Detection, prediction, and planning all exhibit noticeable errors, especially in the labeled circles and arrows. The images in the bottom row are perturbed, but the perturbation is visually imperceptible.
Figure 3. Comparison of planning performance under adversarial attacks and natural corruptions on the two models. We report the increase in L2 error and collision rate.
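For readers who want to relate the curves in Figure 3 to concrete computations, the following sketch shows how an average planning L2 error and a collision rate of this kind are typically obtained from planned and ground-truth trajectories. The function names, array shapes, and the BEV grid resolution are illustrative assumptions, not the benchmark's exact implementation.

```python
# A minimal sketch (not the paper's evaluation code) of open-loop planning metrics:
# mean L2 error between planned and ground-truth waypoints, and a collision rate
# obtained by checking planned waypoints against a bird's-eye-view occupancy grid.
import numpy as np

def l2_error(plan_xy: np.ndarray, gt_xy: np.ndarray) -> float:
    """Mean Euclidean distance (m) between planned and ground-truth waypoints.

    plan_xy, gt_xy: arrays of shape (T, 2) holding future (x, y) positions.
    """
    return float(np.linalg.norm(plan_xy - gt_xy, axis=-1).mean())

def collision_rate(plans: np.ndarray, occupancy: np.ndarray,
                   resolution: float = 0.5) -> float:
    """Fraction of planned trajectories that enter an occupied BEV cell.

    plans:      (N, T, 2) planned (x, y) waypoints in ego-centric coordinates.
    occupancy:  (N, T, H, W) boolean occupancy of other agents per timestep.
    resolution: metres per BEV cell (assumed value, for illustration only).
    """
    n_collide = 0
    _, _, H, W = occupancy.shape
    for plan, occ in zip(plans, occupancy):
        # Convert metric waypoints to grid indices centred on the ego vehicle.
        cells = np.round(plan / resolution).astype(int) + np.array([H // 2, W // 2])
        hit = False
        for t, (i, j) in enumerate(cells):
            if 0 <= i < H and 0 <= j < W and occ[t, i, j]:
                hit = True
                break
        n_collide += int(hit)
    return n_collide / len(plans)
```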
Figure 4. Performance changes of the 5 tasks under individual noise injection. For each task, we plot its key metric (y-axis) against the original model and each stage of noise injection (x-axis). Noise injected at a given stage interferes with all subsequent modules.
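The individual noise injection in Figures 4 and 5 can be pictured as perturbing the tensor handed from one module to the next, so that only downstream tasks are affected. The sketch below is a conceptual illustration under that assumption; the function and argument names are hypothetical and do not correspond to the UniAD code base.

```python
# Conceptual sketch of injecting bounded noise at one module boundary so that
# only the modules downstream of the injection point are disturbed.
from typing import Optional
import torch

def inject_noise_at_stage(features: torch.Tensor, epsilon: float = 0.05,
                          adversarial_grad: Optional[torch.Tensor] = None) -> torch.Tensor:
    """Perturb the output of one stage before passing it to the next.

    features:         intermediate tensor handed from one module to the next
                      (e.g., BEV features feeding the prediction head).
    epsilon:          l_inf budget relative to the feature scale (assumed value).
    adversarial_grad: if given, take the sign of this gradient (FGSM-style);
                      otherwise fall back to random uniform noise.
    """
    if adversarial_grad is not None:
        delta = epsilon * adversarial_grad.sign()
    else:
        delta = torch.empty_like(features).uniform_(-epsilon, epsilon)
    return features + delta
```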
Figure 5. Visualization example of individual noise injection. Noise at each stage poses a significant threat to specific tasks; circles and arrows highlight representative errors.
Figure 6. Comparison of driving performance before and after attacks in the CARLA simulator, route0 (Clear Noon). The ego vehicle under attack collides on its left side while meeting an oncoming vehicle. In each subfigure, the first row shows the images captured by the three RGB cameras mounted in front of the vehicle; the middle of the last row shows the image captured by the camera mounted behind the vehicle; the left side shows the cost map predicted by the model for the current driving area, and the right side shows the segmentation of the drivable area and vehicles.
Figure 7. Comparison of driving performance before and after attacks in the CARLA simulator, route1 (Hard Rain Night). The ego vehicle under attack veers toward the roadside of the opposite lane.
Figure 8. Robustness evaluation on Jetbot smart car. Subfigure (a) shows the frames of the car driving normally, and Subfigure (b) shows the frames of the car after the “Right Deviation” attack, where the car collides with the wooded area on the right side.
Figure 9. Robustness evaluation on Jetbot smart car. Subfigure (a) shows the frames of the car driving normally, and Subfigure (b) shows the frames where the car leans to the left during a right turn.
Table 1. The robustness evaluation results of UniAD [35]. The up arrow (↑) indicates that larger values are better; the down arrow (↓) indicates the opposite. All five tasks experience significant performance degradation under adversarial attacks and natural corruptions.

| Category | Attack Method | AMOTA ↑ | Lanes-IoU ↑ | minADE ↓ | IoU-n ↑ | L2 Error (m) ↓ | Col. Rate ↓ |
|---|---|---|---|---|---|---|---|
| – | Original | 0.576 | 23.93% | 0.4874 | 65.10% | 1.0827 | 0.00% |
| Adversarial Attacks | FGSM [39] | 0.116 | 20.16% | 1.0282 | 47.80% | 2.0496 | 0.00% |
| Adversarial Attacks | MI-FGSM [40] | 0.061 | 18.73% | 1.3362 | 45.20% | 2.4130 | 1.98% |
| Adversarial Attacks | PGD-l1 [27] | 0.332 | 22.60% | 0.9266 | 51.90% | 1.2573 | 0.39% |
| Adversarial Attacks | PGD-l2 [27] | 0.276 | 22.75% | 0.9576 | 51.90% | 1.2229 | 0.00% |
| Adversarial Attacks | PGD-l∞ [27] | 0.068 | 18.82% | 1.3496 | 45.70% | 2.3304 | 1.14% |
| Adversarial Attacks | Module-Wise Attack | 0.048 | 18.94% | 1.9264 | 44.30% | 2.6814 | 1.52% |
| Natural Corruptions | Noise | 0.168 | 17.03% | 0.6541 | 51.83% | 1.3043 | 0.31% |
| Natural Corruptions | Blur | 0.093 | 16.01% | 0.7478 | 43.40% | 1.3004 | 0.77% |
| Natural Corruptions | Weather | 0.130 | 15.07% | 0.6551 | 46.58% | 1.3242 | 0.79% |
| Natural Corruptions | Digital Distortions | 0.197 | 16.01% | 0.7478 | 58.22% | 1.2691 | 0.24% |
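As context for the PGD rows in Tables 1 and 2, the sketch below shows a standard PGD-l∞ attack on the camera input of a generic differentiable driving model. Here `model` and `loss_fn` are placeholders, and the budget and step size are common defaults rather than the settings used in RobustE2E.

```python
# A minimal PGD-l_inf sketch: iteratively perturb camera images within an
# l_inf ball of radius eps so as to maximize a differentiable (planning) loss.
import torch

def pgd_linf(model, images, targets, loss_fn,
             eps=8 / 255, alpha=2 / 255, steps=10):
    adv = images.clone().detach()
    adv = adv + torch.empty_like(adv).uniform_(-eps, eps)  # random start
    adv = adv.clamp(0, 1)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), targets)                 # e.g., planning loss
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()            # ascend the loss
        adv = images + (adv - images).clamp(-eps, eps)      # project to the l_inf ball
        adv = adv.clamp(0, 1).detach()                      # keep a valid image
    return adv
```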
Table 2. The robustness evaluation results of ST-P3 [33]. All metrics experience significant performance degradation under adversarial attacks and natural corruptions.

| Category | Attack Method | avgIoU ↑ | L2 Error (m) ↓ | Col. Rate ↓ |
|---|---|---|---|---|
| – | Original | 38.11% | 1.5845 | 0.09% |
| Adversarial Attacks | FGSM [39] | 38.11% | 1.5824 | 0.51% |
| Adversarial Attacks | MI-FGSM [40] | 9.75% | 3.0530 | 0.56% |
| Adversarial Attacks | PGD-l1 [27] | 21.77% | 1.7044 | 0.43% |
| Adversarial Attacks | PGD-l2 [27] | 34.36% | 1.5556 | 0.26% |
| Adversarial Attacks | PGD-l∞ [27] | 8.32% | 3.3656 | 1.15% |
| Adversarial Attacks | Module-Wise Attack | 1.48% | 5.4622 | 6.67% |
| Natural Corruptions | Noise | 8.37% | 4.0666 | 4.12% |
| Natural Corruptions | Blur | 15.53% | 3.2776 | 1.67% |
| Natural Corruptions | Weather | 21.90% | 2.2594 | 0.31% |
| Natural Corruptions | Digital Distortions | 24.58% | 2.4964 | 0.98% |
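The four corruption families reported above (noise, blur, weather, and digital distortion) can be simulated with the imgaug library [63]. The sketch below is illustrative only: the specific augmenters and the severity-to-strength mapping are assumptions in the spirit of the common-corruption protocol [64], not the benchmark's exact configuration.

```python
# Illustrative only: one placeholder augmenter per corruption family, built with imgaug.
import imgaug.augmenters as iaa

def make_corruption(kind: str, severity: int = 3):
    # Map severity 1..5 to a strength factor in [0.2, 1.0] (assumed mapping).
    s = severity / 5.0
    if kind == "noise":
        return iaa.AdditiveGaussianNoise(scale=s * 0.2 * 255)
    if kind == "blur":
        return iaa.GaussianBlur(sigma=s * 3.0)
    if kind == "weather":
        return iaa.Fog()  # imgaug also offers Rain, Snowflakes, Clouds
    if kind == "digital":
        return iaa.JpegCompression(compression=int(50 + 45 * s))
    raise ValueError(f"unknown corruption family: {kind}")

# usage (illustrative): corrupted = make_corruption("noise", severity=5)(images=frames)
```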
Table 3. The average results of the model completing driving tasks in the CARLA simulator, covering two routes under three weather conditions.

| Metric | Result without Attacks | Result under Attacks |
|---|---|---|
| RouteCompletionTest | 100% | 64.91% |
| OutsideRouteLanesTest | 11.83% | 27.33% |
| CollisionTest | 0 times | 1 time |
| RunningRedLightTest | 0 times | 0 times |
| RunningStopTest | 0 times | 0 times |
| InRouteTest | Success | Success |
| AgentBlockedTest | Success | Success |
| Timeout | Success | Failure |
| Driving Score | 88.175 | 29.545 |
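For reference, the driving score in Table 3 is of the kind used by the CARLA leaderboard, which scales route completion by penalties for infractions and off-route driving. The sketch below uses commonly cited leaderboard penalty coefficients as assumptions; it is not necessarily the exact scoring configuration used here.

```python
# CARLA-leaderboard-style driving score: route completion scaled by a product of
# per-infraction penalty coefficients. Coefficients are commonly cited defaults,
# included only for illustration.
PENALTIES = {
    "collision_pedestrian": 0.50,
    "collision_vehicle": 0.60,
    "collision_static": 0.65,
    "red_light": 0.70,
    "stop_sign": 0.80,
}

def driving_score(route_completion, outside_lanes, infractions):
    """route_completion and outside_lanes are fractions in [0, 1];
    infractions maps an infraction type to its number of occurrences."""
    penalty = 1.0
    for kind, count in infractions.items():
        penalty *= PENALTIES[kind] ** count
    return 100.0 * route_completion * (1.0 - outside_lanes) * penalty

# Example: a clean run with 100% completion and 11.83% off-route driving gives
# driving_score(1.0, 0.1183, {}) ≈ 88.2, on the order of Table 3's first column.
```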
