RobustE2E: Exploring the Robustness of End-to-End Autonomous Driving
Abstract
1. Introduction
- Module-Wise Attack targeting End-to-End Autonomous Driving. We propose a novel white-box Module-Wise Attack that designs and injects adversarial noise at the interfaces among tasks, providing new insights into how perturbations impact the interaction among modules and their collective robustness.
- Development of the RobustE2E Benchmark. To the best of our knowledge, RobustE2E is the first to rigorously assess the robustness of end-to-end autonomous driving against various types of noise, incorporating five traditional adversarial attacks, a novel Module-Wise Attack, and four major categories of natural corruptions, together with system-level closed-loop evaluation (a minimal sketch of how such corruptions can be generated follows this list).
- Valuable Insights from Extensive Experimental Evaluation. Our comprehensive experiments deliver significant insights into the robustness and vulnerabilities of end-to-end autonomous driving, advancing the understanding of how different types of noise affect performance and interaction at the model level and system level.
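The four natural corruption categories referenced above and in the result tables (noise, blur, weather, and digital distortions) can be approximated with off-the-shelf augmentation tools such as the imgaug library cited in the references. The snippet below is a minimal, illustrative sketch: the specific augmenters and severity values are assumptions chosen for demonstration, not the exact settings used by the RobustE2E benchmark.

```python
# Illustrative sketch only: approximate the four corruption categories with imgaug.
# Augmenter choices and severity values are assumptions, not the benchmark's settings.
import numpy as np
import imgaug.augmenters as iaa

corruptions = {
    "Noise": iaa.AdditiveGaussianNoise(scale=0.08 * 255),        # sensor-style noise
    "Blur": iaa.GaussianBlur(sigma=3.0),                         # defocus-like blur
    "Weather": iaa.Fog(),                                        # fog as one weather effect
    "Digital Distortions": iaa.JpegCompression(compression=80),  # heavy compression artifacts
}

# Apply each corruption category to a dummy batch of camera frames (N, H, W, 3, uint8).
frames = np.random.randint(0, 256, size=(2, 224, 224, 3), dtype=np.uint8)
corrupted = {name: aug(images=frames) for name, aug in corruptions.items()}
for name, imgs in corrupted.items():
    print(name, imgs.shape, imgs.dtype)
```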
2. Related Work
2.1. End-to-End Autonomous Driving
2.2. Adversarial Attacks
2.3. Robustness Benchmark in Autonomous Driving
3. Module-Wise Attack
3.1. Design of Module-Wise Noise
3.2. Attack Strategy
Algorithm 1: Module-Wise Attack
Input: the end-to-end autonomous driving model, minibatch images, and labels.
Output: adversarial noise for the input data.
1. Generate adversarial noise templates according to Formula (3).
2. for t in k steps do
3.     // Forward propagation
4.     if t = 1 then
5.         Initialize the noise set N for each module sequentially according to Formula (4).
6.     end
7.     Inject noise into each module sequentially according to Formula (5).
8.     Compute the losses for each module sequentially.
9.     Calculate the objective function according to Equation (6).
10.    // Back propagation
11.    Synchronize the update of all noise according to Formula (7).
12. end
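As a concrete illustration of Algorithm 1, the sketch below injects learnable noise at each module's output interface via forward hooks, sums the per-module losses into a joint objective, and updates all noise tensors synchronously with a sign-gradient step. The model, module list, per-module loss callables, and hyper-parameters (epsilon, alpha, k) are placeholders assumed for illustration; this is not the authors' implementation, is not tied to a specific driving stack, and it simplifies the noise templates of Formula (3) to zero-initialized tensors.

```python
# Minimal PyTorch sketch of a module-wise attack in the spirit of Algorithm 1.
# All names and hyper-parameters are placeholders, not the authors' code.
import torch


def module_wise_attack(model, modules, images, labels, module_losses,
                       epsilon=8 / 255, alpha=2 / 255, k=10):
    """Inject learnable noise at each module's output interface and update all
    noise tensors jointly with a sign-gradient step (the attacker maximizes loss)."""
    noises = []      # noise set N, initialized lazily on the first forward pass
    handles = []

    def make_hook(idx):
        def hook(module, inputs, output):
            # Assumes each hooked module returns a single tensor.
            if len(noises) <= idx:                        # step 1: initialize noise
                noises.append(torch.zeros_like(output, requires_grad=True))
            return output + noises[idx]                   # inject noise at the interface
        return hook

    for i, m in enumerate(modules):                       # modules in forward order
        handles.append(m.register_forward_hook(make_hook(i)))

    try:
        for _ in range(k):
            outputs = model(images)                       # forward propagation
            # Joint objective: sum of per-module losses (assumes the model returns
            # one output per module, aligned with module_losses).
            loss = sum(fn(out, labels) for fn, out in zip(module_losses, outputs))
            model.zero_grad()
            loss.backward()                               # back propagation
            with torch.no_grad():                         # synchronized noise update
                for n in noises:
                    n += alpha * n.grad.sign()
                    n.clamp_(-epsilon, epsilon)           # keep noise within budget
                    n.grad = None
    finally:
        for h in handles:
            h.remove()
    return noises
```

In practice, the per-module losses would correspond to the perception, prediction, and planning heads of the end-to-end model, and the noise initialization and update would follow Formulas (3)-(7) rather than the simplified zero-init, sign-gradient step used here.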
4. RobustE2E Benchmark
4.1. Robustness Evaluation Approaches
4.1.1. Adversarial Attacks
4.1.2. Natural Corruptions
4.1.3. Closed-Loop Case Study
4.2. Evaluation Objects
4.2.1. Dataset
4.2.2. Models
4.2.3. End-to-End Autonomous Driving Systems
4.3. Evaluation Metrics
4.3.1. Open-Loop Experiments
4.3.2. Closed-Loop Experiments
5. Experiments
5.1. Main Results
5.2. Comparison across Different Attack Methods
5.3. Analysis of Impact of Subtask Design on Model Planning
5.4. Comparison of Noise Injection toward Different Modules
6. Case Studies
6.1. Closed-Loop Experiment in the Simulation Environment
6.2. Closed-Loop Experiment in the Real World
6.3. Discussions
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Tseng, Y.H.; Jan, S.S. Combination of computer vision detection and segmentation for autonomous driving. In Proceedings of the 2018 IEEE/ION Position, Location and Navigation Symposium (PLANS), Monterey, CA, USA, 23–26 April 2018; pp. 1047–1052. [Google Scholar]
- Song, H. The application of computer vision in responding to the emergencies of autonomous driving. In Proceedings of the 2020 International Conference on Computer Vision, Image and Deep Learning (CVIDL), Nanchang, China, 15–17 May 2020; pp. 1–5. [Google Scholar]
- Kanchana, B.; Peiris, R.; Perera, D.; Jayasinghe, D.; Kasthurirathna, D. Computer vision for autonomous driving. In Proceedings of the 2021 3rd International Conference on Advancements in Computing (ICAC), Shanghai, China, 23–25 April 2021; pp. 175–180. [Google Scholar]
- Hubmann, C.; Becker, M.; Althoff, D.; Lenz, D.; Stiller, C. Decision making for autonomous driving considering interaction and uncertain prediction of surrounding vehicles. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 1671–1678. [Google Scholar]
- Hoel, C.J.; Driggs-Campbell, K.; Wolff, K.; Laine, L.; Kochenderfer, M.J. Combining planning and deep reinforcement learning in tactical decision making for autonomous driving. IEEE Trans. Intell. Veh. 2019, 5, 294–305. [Google Scholar] [CrossRef]
- Nvidia. NVIDIA DRIVE End-to-End Solutions for Autonomous Vehicles. 2022. Available online: https://developer.nvidia.com/drive (accessed on 21 July 2024).
- Mobileye. Mobileye under the Hood. 2022. Available online: https://www.mobileye.com/ces-2022/ (accessed on 21 July 2024).
- Cui, H.; Radosavljevic, V.; Chou, F.C.; Lin, T.H.; Nguyen, T.; Huang, T.K.; Schneider, J.; Djuric, N. Multimodal trajectory predictions for autonomous driving using deep convolutional networks. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 2090–2096. [Google Scholar]
- Sadat, A.; Casas, S.; Ren, M.; Wu, X.; Dhawan, P.; Urtasun, R. Perceive, predict, and plan: Safe motion planning through interpretable semantic representations. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 414–430. [Google Scholar]
- Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. arXiv 2013, arXiv:1312.6199. [Google Scholar]
- Zhang, C.; Liu, A.; Liu, X.; Xu, Y.; Yu, H.; Ma, Y.; Li, T. Interpreting and Improving Adversarial Robustness of Deep Neural Networks with Neuron Sensitivity. IEEE Trans. Image Process. 2021, 30, 1291–1304. [Google Scholar] [CrossRef] [PubMed]
- Tang, S.; Gong, R.; Wang, Y.; Liu, A.; Wang, J.; Chen, X.; Yu, F.; Liu, X.; Song, D.; Yuille, A.; et al. Robustart: Benchmarking robustness on architecture design and training techniques. arXiv 2021, arXiv:2109.05211. [Google Scholar]
- Liu, A.; Liu, X.; Yu, H.; Zhang, C.; Liu, Q.; Tao, D. Training robust deep neural networks via adversarial noise propagation. IEEE Trans. Image Process. 2021, 30, 5769–5781. [Google Scholar] [CrossRef] [PubMed]
- Liu, A.; Tang, S.; Liang, S.; Gong, R.; Wu, B.; Liu, X.; Tao, D. Exploring the Relationship between Architecture and Adversarially Robust Generalization. arXiv 2022, arXiv:2209.14105. [Google Scholar]
- Guo, J.; Bao, W.; Wang, J.; Ma, Y.; Gao, X.; Xiao, G.; Liu, A.; Dong, J.; Liu, X.; Wu, W. A Comprehensive Evaluation Framework for Deep Model Robustness. Pattern Recognit. 2023, 137, 109308. [Google Scholar] [CrossRef]
- Abdelfattah, M.; Yuan, K.; Wang, Z.J.; Ward, R. Towards universal physical attacks on cascaded camera-lidar 3d object detection models. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 19–22 September 2021; pp. 3592–3596. [Google Scholar]
- Cao, Y.; Wang, N.; Xiao, C.; Yang, D.; Fang, J.; Yang, R.; Chen, Q.A.; Liu, M.; Li, B. Invisible for both camera and lidar: Security of multi-sensor fusion based perception in autonomous driving under physical-world attacks. In Proceedings of the 2021 IEEE Symposium on Security and Privacy (SP), Online, 23–26 May 2021; pp. 176–194. [Google Scholar]
- Boloor, A.; Garimella, K.; He, X.; Gill, C.; Vorobeychik, Y.; Zhang, X. Attacking vision-based perception in end-to-end autonomous driving models. J. Syst. Archit. 2020, 110, 101766. [Google Scholar] [CrossRef]
- Duan, R.; Mao, X.; Qin, A.K.; Chen, Y.; Ye, S.; He, Y.; Yang, Y. Adversarial laser beam: Effective physical-world attack to dnns in a blink. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 16062–16071. [Google Scholar]
- Song, D.; Eykholt, K.; Evtimov, I.; Fernandes, E.; Li, B.; Rahmati, A.; Tramer, F.; Prakash, A.; Kohno, T. Physical adversarial examples for object detectors. In Proceedings of the 12th USENIX Workshop on Offensive Technologies (WOOT 18), Baltimore, MD, USA, 13–14 August 2018. [Google Scholar]
- Huang, L.; Gao, C.; Zhou, Y.; Xie, C.; Yuille, A.L.; Zou, C.; Liu, N. Universal physical camouflage attacks on object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Online, 14–19 June 2020; pp. 720–729. [Google Scholar]
- Zhang, Q.; Hu, S.; Sun, J.; Chen, Q.A.; Mao, Z.M. On adversarial robustness of trajectory prediction for autonomous vehicles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 15159–15168. [Google Scholar]
- Cao, Y.; Xiao, C.; Anandkumar, A.; Xu, D.; Pavone, M. Advdo: Realistic adversarial attacks for trajectory prediction. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 36–52. [Google Scholar]
- Wu, H.; Yunas, S.; Rowlands, S.; Ruan, W.; Wahlström, J. Adversarial driving: Attacking end-to-end autonomous driving. In Proceedings of the 2023 IEEE Intelligent Vehicles Symposium (IV), Anchorage, AK, USA, 4–7 June 2023; pp. 1–7. [Google Scholar]
- Chen, L.; Wu, P.; Chitta, K.; Jaeger, B.; Geiger, A.; Li, H. End-to-end autonomous driving: Challenges and frontiers. IEEE Trans. Pattern Anal. Mach. Intell. 2024. [Google Scholar] [CrossRef] [PubMed]
- Shibly, K.H.; Hossain, M.D.; Inoue, H.; Taenaka, Y.; Kadobayashi, Y. Towards autonomous driving model resistant to adversarial attack. Appl. Artif. Intell. 2023, 37, 2193461. [Google Scholar] [CrossRef]
- Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards deep learning models resistant to adversarial attacks. arXiv 2017, arXiv:1706.06083. [Google Scholar]
- Chen, D.; Koltun, V.; Krähenbühl, P. Learning to drive from a world on rails. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Online, 11–17 October 2021; pp. 15590–15599. [Google Scholar]
- Prakash, A.; Chitta, K.; Geiger, A. Multi-modal fusion transformer for end-to-end autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 7077–7087. [Google Scholar]
- Wu, P.; Jia, X.; Chen, L.; Yan, J.; Li, H.; Qiao, Y. Trajectory-guided control prediction for end-to-end autonomous driving: A simple yet strong baseline. Adv. Neural Inf. Process. Syst. 2022, 35, 6119–6132. [Google Scholar]
- Zeng, W.; Luo, W.; Suo, S.; Sadat, A.; Yang, B.; Casas, S.; Urtasun, R. End-to-end interpretable neural motion planner. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 8660–8669. [Google Scholar]
- Casas, S.; Sadat, A.; Urtasun, R. Mp3: A unified model to map, perceive, predict and plan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 14403–14412. [Google Scholar]
- Hu, S.; Chen, L.; Wu, P.; Li, H.; Yan, J.; Tao, D. St-p3: End-to-end vision-based autonomous driving via spatial-temporal feature learning. In Proceedings of the European Conference on Computer Vision, Tel-Aviv, Israel, 23–27 October 2022; pp. 533–549. [Google Scholar]
- Chen, D.; Krähenbühl, P. Learning from all vehicles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 17222–17231. [Google Scholar]
- Hu, Y.; Yang, J.; Chen, L.; Li, K.; Sima, C.; Zhu, X.; Chai, S.; Du, S.; Lin, T.; Wang, W.; et al. Planning-oriented autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 17853–17862. [Google Scholar]
- Liu, S.; Wang, J.; Liu, A.; Li, Y.; Gao, Y.; Liu, X.; Tao, D. Harnessing Perceptual Adversarial Patches for Crowd Counting. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, Copenhagen, Denmark, 7–11 November 2022; pp. 2055–2069. [Google Scholar]
- Liu, A.; Huang, T.; Liu, X.; Xu, Y.; Ma, Y.; Chen, X.; Maybank, S.J.; Tao, D. Spatiotemporal attacks for embodied agents. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 122–138. [Google Scholar]
- Wang, J.; Liu, A.; Yin, Z.; Liu, S.; Tang, S.; Liu, X. Dual attention suppression attack: Generate adversarial camouflage in physical world. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 8565–8574. [Google Scholar]
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and harnessing adversarial examples. arXiv 2014, arXiv:1412.6572. [Google Scholar]
- Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; Li, J. Boosting adversarial attacks with momentum. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 9185–9193. [Google Scholar]
- Wang, H.; Dong, K.; Zhu, Z.; Qin, H.; Liu, A.; Fang, X.; Wang, J.; Liu, X. Transferable Multimodal Attack on Vision-Language Pre-training Models. In Proceedings of the 2024 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 20–22 May 2024; p. 102. [Google Scholar]
- Liu, A.; Guo, J.; Wang, J.; Liang, S.; Tao, R.; Zhou, W.; Liu, C.; Liu, X.; Tao, D. X-adv: Physical adversarial object attacks against x-ray prohibited item detection. arXiv 2023, arXiv:2302.09491. [Google Scholar]
- Xiao, Y.; Zhang, T.; Liu, S.; Qin, H. Benchmarking the robustness of quantized models. arXiv 2023, arXiv:2304.03968. [Google Scholar]
- Xiao, Y.; Liu, A.; Zhang, T.; Qin, H.; Guo, J.; Liu, X. RobustMQ: Benchmarking robustness of quantized models. Vis. Intell. 2023, 1, 30. [Google Scholar] [CrossRef]
- Carlini, N.; Wagner, D. Towards evaluating the robustness of neural networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–26 May 2017; pp. 39–57. [Google Scholar]
- Liu, A.; Tang, S.; Chen, X.; Huang, L.; Qin, H.; Liu, X.; Tao, D. Towards Defending Multiple lp-Norm Bounded Adversarial Perturbations via Gated Batch Normalization. Int. J. Comput. Vis. 2023, 132, 1881–1898. [Google Scholar] [CrossRef]
- Li, S.; Zhang, S.; Chen, G.; Wang, D.; Feng, P.; Wang, J.; Liu, A.; Yi, X.; Liu, X. Towards Benchmarking and Assessing Visual Naturalness of Physical World Adversarial Attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 12324–12333. [Google Scholar]
- Liu, A.; Liu, X.; Fan, J.; Ma, Y.; Zhang, A.; Xie, H.; Tao, D. Perceptual-sensitive gan for generating adversarial patches. In Proceedings of the AAAI Conference on Artificial Intelligence, Waikiki, HI, USA, 27 January–1 February 2019; Volume 33, pp. 1028–1035. [Google Scholar]
- Liu, A.; Wang, J.; Liu, X.; Cao, B.; Zhang, C.; Yu, H. Bias-based universal adversarial patch attack for automatic check-out. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 395–410. [Google Scholar]
- Xie, S.; Li, Z.; Wang, Z.; Xie, C. On the Adversarial Robustness of Camera-based 3D Object Detection. arXiv 2023, arXiv:2301.10766. [Google Scholar]
- Abdelfattah, M.; Yuan, K.; Wang, Z.J.; Ward, R. Adversarial attacks on camera-lidar models for 3d car detection. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Online, 27 September–1 October 2021; pp. 2189–2194. [Google Scholar]
- Zhang, T.; Xiao, Y.; Zhang, X.; Li, H.; Wang, L. Benchmarking the Physical-world Adversarial Robustness of Vehicle Detection. arXiv 2023, arXiv:2304.05098. [Google Scholar]
- Jiang, W.; Zhang, T.; Liu, S.; Ji, W.; Zhang, Z.; Xiao, G. Exploring the Physical-World Adversarial Robustness of Vehicle Detection. Electronics 2023, 12, 3921. [Google Scholar] [CrossRef]
- Wiyatno, R.R.; Xu, A. Physical adversarial textures that fool visual object tracking. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 4822–4831. [Google Scholar]
- Michaelis, C.; Mitzkus, B.; Geirhos, R.; Rusak, E.; Bringmann, O.; Ecker, A.S.; Bethge, M.; Brendel, W. Benchmarking robustness in object detection: Autonomous driving when winter is coming. arXiv 2019, arXiv:1907.07484. [Google Scholar]
- Dong, Y.; Kang, C.; Zhang, J.; Zhu, Z.; Wang, Y.; Yang, X.; Su, H.; Wei, X.; Zhu, J. Benchmarking robustness of 3d object detection to common corruptions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 1022–1032. [Google Scholar]
- Zhang, T.; Wang, L.; Li, H.; Xiao, Y.; Liang, S.; Liu, A.; Liu, X.; Tao, D. LanEvil: Benchmarking the Robustness of Lane Detection to Environmental Illusions. arXiv 2024, arXiv:2406.00934. [Google Scholar]
- Nesti, F.; Rossolini, G.; Nair, S.; Biondi, A.; Buttazzo, G. Evaluating the robustness of semantic segmentation for autonomous driving against real-world adversarial patch attacks. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Online, 3–7 January 2022; pp. 2280–2289. [Google Scholar]
- Guo, J.; Kurup, U.; Shah, M. Is it safe to drive? An overview of factors, metrics, and datasets for driveability assessment in autonomous driving. IEEE Trans. Intell. Transp. Syst. 2019, 21, 3135–3151. [Google Scholar] [CrossRef]
- Kondermann, D.; Nair, R.; Honauer, K.; Krispin, K.; Andrulis, J.; Brock, A.; Gussefeld, B.; Rahimimoghaddam, M.; Hofmann, S.; Brenner, C.; et al. The hci benchmark suite: Stereo and flow ground truth with uncertainties for urban autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Las Vegas, NV, USA, 25 June–2 July 2016; pp. 19–28. [Google Scholar]
- Xu, C.; Ding, W.; Lyu, W.; Liu, Z.; Wang, S.; He, Y.; Hu, H.; Zhao, D.; Li, B. Safebench: A benchmarking platform for safety evaluation of autonomous vehicles. Adv. Neural Inf. Process. Syst. 2022, 35, 25667–25682. [Google Scholar]
- Deng, Y.; Zheng, X.; Zhang, T.; Chen, C.; Lou, G.; Kim, M. An analysis of adversarial attacks and defenses on autonomous driving models. In Proceedings of the 2020 IEEE International Conference on Pervasive Computing and Communications (PerCom), Austin, TX, USA, 23–27 March 2020; pp. 1–10. [Google Scholar]
- Jung, A.B.; Wada, K.; Crall, J.; Tanaka, S.; Graving, J.; Reinders, C.; Yadav, S.; Banerjee, J.; Vecsei, G.; Kraft, A.; et al. imgaug. 2020. Available online: https://github.com/aleju/imgaug (accessed on 1 February 2020).
- Hendrycks, D.; Dietterich, T. Benchmarking neural network robustness to common corruptions and perturbations. arXiv 2019, arXiv:1903.12261. [Google Scholar]
- Nvidia. JetBot. Available online: https://github.com/NVIDIA-AI-IOT/jetbot (accessed on 3 February 2021).
- Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. nuScenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Online, 14–19 June 2020; pp. 11621–11631. [Google Scholar]
Category | Attack Method | AMOTA ↑ | Lanes-IoU ↑ | minADE ↓ | IoU-n ↑ | L2 Error (m) ↓ | Col. Rate ↓
---|---|---|---|---|---|---|---
 | Original | 0.576 | 23.93% | 0.4874 | 65.10% | 1.0827 | 0.00%
Adversarial Attacks | FGSM [39] | 0.116 | 20.16% | 1.0282 | 47.80% | 2.0496 | 0.00%
 | MI-FGSM [40] | 0.061 | 18.73% | 1.3362 | 45.20% | 2.4130 | 1.98%
 | PGD- [27] | 0.332 | 22.60% | 0.9266 | 51.90% | 1.2573 | 0.39%
 | PGD- [27] | 0.276 | 22.75% | 0.9576 | 51.90% | 1.2229 | 0.00%
 | PGD- [27] | 0.068 | 18.82% | 1.3496 | 45.70% | 2.3304 | 1.14%
 | Module-Wise Attack | 0.048 | 18.94% | 1.9264 | 44.30% | 2.6814 | 1.52%
Natural Corruptions | Noise | 0.168 | 17.03% | 0.6541 | 51.83% | 1.3043 | 0.31%
 | Blur | 0.093 | 16.01% | 0.7478 | 43.40% | 1.3004 | 0.77%
 | Weather | 0.130 | 15.07% | 0.6551 | 46.58% | 1.3242 | 0.79%
 | Digital Distortions | 0.197 | 16.01% | 0.7478 | 58.22% | 1.2691 | 0.24%
Category | Attack Method | avgIoU ↑ | L2 Error (m) ↓ | Col. Rate ↓
---|---|---|---|---
 | Original | 38.11% | 1.5845 | 0.09%
Adversarial Attacks | FGSM [39] | 38.11% | 1.5824 | 0.51%
 | MI-FGSM [40] | 9.75% | 3.0530 | 0.56%
 | PGD- [27] | 21.77% | 1.7044 | 0.43%
 | PGD- [27] | 34.36% | 1.5556 | 0.26%
 | PGD- [27] | 8.32% | 3.3656 | 1.15%
 | Module-Wise Attack | 1.48% | 5.4622 | 6.67%
Natural Corruptions | Noise | 8.37% | 4.0666 | 4.12%
 | Blur | 15.53% | 3.2776 | 1.67%
 | Weather | 21.90% | 2.2594 | 0.31%
 | Digital Distortions | 24.58% | 2.4964 | 0.98%
Metrics | Results without Attacks | Results under Attacks |
---|---|---|
RouteCompletionTest | 100% | 64.91% |
OutsideRouteLanesTest | 11.83% | 27.33% |
CollisionTest | 0 times | 1 time |
RunningRedLightTest | 0 times | 0 times |
RunningStopTest | 0 times | 0 times |
InRouteTest | Success | Success |
AgentBlockedTest | Success | Success |
Timeout | Success | Failure |
Driving Score | 88.175 | 29.545 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).