A Hybrid Fault Diagnosis Method for Autonomous Driving Sensing Systems Based on Information Complexity
Abstract
1. Introduction
- (1) Heterogeneous sensor delays during real-world deployment: On actual vehicles, heterogeneous sensor units exhibit diverse delay characteristics. Existing hardware-redundancy research remains inadequate in this respect, being largely confined to simulation studies or offline testing, so a substantial number of misjudgments can be expected during real-world deployment;
- (2) Predominance of post hoc analysis in analytical redundancy methods: Mainstream analytical redundancy methods analyze sensor data only after it has been collected, which cannot guarantee the swift responsiveness of the fault diagnosis system in real-time scenarios.
- (1) Real-time fault detection for autonomous driving sensing systems: We design a real-time fault detection method tailored to autonomous driving sensing systems using a hybrid approach, ensuring high detection accuracy;
- (2) Diagnosis of fault types via information complexity metrics: We achieve fault type diagnosis through information complexity metrics, eliminating the need for precise modeling and complex data processing and thereby improving response speed.
2. System Framework and Methodology
2.1. Fault Diagnosis Criteria for Autonomous Driving Subsystems
- Fault classification: Categorizing potential faults to facilitate diagnosis and maintenance;
- Fault detection: Accurately detecting the presence of faults within the system;
- Fault diagnosis: Identifying the faulty components and fault types to ensure accurate localization;
- Fault response: Responding promptly to detected and diagnosed faults so that they can be remediated in time;
- Fault tolerance: Assessing the system's ability to tolerate different faults so that it degrades correctly.
2.2. Fault Classification and the nuScenes-F Dataset
2.2.1. Proposed Fault Classification
2.2.2. nuScenes-F
- Density Decrease. Randomly deleting {8%, 16%, 24%, 32%, 40%} of the points in one LiDAR frame;
- Cutout. Randomly removing {3, 5, 7, 10, 13} groups of points, where each group contains N/50 points and lies within a ball in Euclidean space, with N the total number of points in one LiDAR frame;
- LiDAR Crosstalk. Drawing inspiration from reference [31], a subset of points with proportions {0.6%, 1.2%, 1.8%, 2.4%, 3%} was selected and perturbed with 3 m Gaussian noise;
- FOV Lost. Five sets of FOV losses were chosen, with the retained angular ranges (in degrees) specified as {(−105, 105), (−90, 90), (−75, 75), (−60, 60), (−45, 45)};
- Gaussian Corruption. For LiDAR, Gaussian noise was added to all points with severity levels {0.04 m, 0.08 m, 0.12 m, 0.16 m, 0.20 m}. For the camera, the imgaug library [35] was employed, using its predefined severity levels {1, 2, 3, 4, 5} to simulate varying intensities of Gaussian noise;
- Uniform Corruption. For LiDAR, uniform noise was added to all points with severities of {0.04 m, 0.08 m, 0.12 m, 0.16 m, 0.20 m}. For the camera, uniform noise in the range ±{0.12, 0.18, 0.27, 0.39, 0.57} was added to the image;
- Impulse Corruption. For LiDAR, {N/25, N/20, N/15, N/10, N/5} points were selected for impulse noise, representing the severities, where N denotes the total number of points in one LiDAR frame. For the camera, the imgaug library was employed with predefined severities {1, 2, 3, 4, 5} (see the corruption-injection sketch after this list);
- Spatial Misalignment. Gaussian noise was added to the calibration matrices between LiDAR and the camera. Specifically, the noise levels applied to the rotation matrix were {0.04, 0.08, 0.12, 0.16, 0.20}, and those applied to the translation vector were {0.004, 0.008, 0.012, 0.016, 0.020};
- Temporal Misalignment. This was simulated by repeating identical data across multiple frames. For both LiDAR and the camera, the number of stuck frames is {2, 4, 6, 8, 10} (see the misalignment sketch after this list).
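For concreteness, below is a minimal NumPy sketch of three of the LiDAR corruption injections. It assumes one frame is an (N, 3) float array; the function names and the impulse magnitude are illustrative choices, not the paper's implementation.

```python
import numpy as np

def density_decrease(points: np.ndarray, ratio: float) -> np.ndarray:
    """Randomly delete `ratio` of the points (paper levels: 0.08-0.40)."""
    n = points.shape[0]
    keep = np.random.choice(n, size=int(n * (1.0 - ratio)), replace=False)
    return points[keep]

def gaussian_corruption(points: np.ndarray, sigma_m: float) -> np.ndarray:
    """Add zero-mean Gaussian noise to every point (paper levels: 0.04-0.20 m)."""
    return points + np.random.normal(0.0, sigma_m, size=points.shape)

def impulse_corruption(points: np.ndarray, count: int,
                       magnitude_m: float = 0.2) -> np.ndarray:
    """Perturb `count` random points (paper levels: N/25 ... N/5) with
    fixed-size impulses; the 0.2 m magnitude is an assumed placeholder."""
    out = points.copy()
    idx = np.random.choice(out.shape[0], size=count, replace=False)
    out[idx] += np.random.choice([-magnitude_m, magnitude_m], size=(count, 3))
    return out

# Example: severity-3 density decrease followed by severity-2 Gaussian noise
frame = np.random.rand(30000, 3).astype(np.float32)
corrupted = gaussian_corruption(density_decrease(frame, 0.24), 0.08)
```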
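A corresponding sketch of the two misalignment injections, assuming a 3×3 rotation matrix, a 3-vector translation, and a list of frames. Element-wise perturbation of the rotation matrix follows the description above, even though the perturbed matrix is then only approximately orthonormal.

```python
import numpy as np

def spatial_misalignment(R: np.ndarray, t: np.ndarray,
                         sigma_rot: float, sigma_trans: float):
    """Perturb LiDAR-camera extrinsics with Gaussian noise
    (paper levels: rotation 0.04-0.20, translation 0.004-0.020)."""
    R_noisy = R + np.random.normal(0.0, sigma_rot, size=R.shape)
    t_noisy = t + np.random.normal(0.0, sigma_trans, size=t.shape)
    return R_noisy, t_noisy

def temporal_misalignment(frames: list, stuck: int) -> list:
    """Simulate a stuck sensor: `stuck` consecutive frames carry identical
    data (paper levels: 2, 4, 6, 8, 10). The random start index is an
    assumed detail."""
    out = list(frames)
    start = np.random.randint(0, max(1, len(out) - stuck + 1))
    for i in range(start + 1, min(start + stuck, len(out))):
        out[i] = out[start]
    return out
```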
2.3. Fault Detector
2.3.1. The “Ground Truth”—MMW Radar Clustering
2.3.2. LiDAR Detection—CNN_SEG
- Using the objectness layer, detect the grid cells that represent obstacles (objects);
- Using the center offset layer, cluster the detected obstacle cells into candidate clusters;
- Using the positiveness and object height layers, filter out background cells and overly high points in each cluster;
- Using the class probability layer, classify each cluster to obtain the final targets.

The schematic diagram of the CNN_SEG process is illustrated in Figure 6; a simplified code sketch of this post-processing follows.
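The sketch below walks through the four steps, assuming the network outputs are BEV maps: objectness, positiveness, and object height of shape (H, W); center offset of shape (2, H, W); and class probability of shape (C, H, W). The thresholds and the center-bucketing shortcut are illustrative simplifications of CNN_SEG's actual clustering, not the paper's code.

```python
import numpy as np

def cnn_seg_postprocess(objectness, center_offset, positiveness, obj_height,
                        class_prob, obj_thresh=0.5, pos_thresh=0.5,
                        max_height=3.0):
    H, W = objectness.shape
    # Step 1: obstacle cells from the objectness layer
    ys, xs = np.nonzero(objectness > obj_thresh)
    # Step 2: shift each cell by its predicted center offset, then group cells
    # that land on the same (rounded) instance center
    centers = np.stack([ys, xs], axis=1) + center_offset[:, ys, xs].T
    keys = np.clip(np.round(centers).astype(int), 0, [H - 1, W - 1])
    clusters = {}
    for (cy, cx), y, x in zip(keys, ys, xs):
        clusters.setdefault((cy, cx), []).append((y, x))
    targets = []
    for (cy, cx), cells in clusters.items():
        # Step 3: drop background-like or overly tall clusters
        if positiveness[cy, cx] < pos_thresh or obj_height[cy, cx] > max_height:
            continue
        # Step 4: label the cluster with its most probable class
        targets.append({"cells": cells,
                        "label": int(np.argmax(class_prob[:, cy, cx]))})
    return targets
```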
2.3.3. Camera Detection—YOLOv7
2.3.4. Fusion of MMW Radar, Camera, and LiDAR
2.3.5. Fault Detection Metric
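As Section 3.1 reports, detection is judged by positioning error relative to the MMW radar "ground truth" of Section 2.3.1. A minimal sketch consistent with that criterion follows; the nearest-neighbor matching, the 0.5 m threshold, and the treatment of empty detection sets are assumptions, not the paper's specification.

```python
import numpy as np

def detect_fault(fused_xy: np.ndarray, radar_xy: np.ndarray,
                 err_thresh_m: float = 0.5) -> bool:
    """Flag a fault when the mean nearest-neighbor positioning error between
    fused detections (M, 2) and radar cluster centroids (K, 2) exceeds a
    threshold. All specifics here are illustrative assumptions."""
    if len(fused_xy) == 0 or len(radar_xy) == 0:
        return True  # assumption: losing either detection set counts as a fault
    d = np.linalg.norm(fused_xy[:, None, :] - radar_xy[None, :, :], axis=-1)
    return float(d.min(axis=1).mean()) > err_thresh_m
```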
2.4. Fault Diagnoser
2.4.1. Metric for Image
2.4.2. Metric for Point Cloud
2.4.3. Time-Series Metric
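The exact information-complexity formulations of Sections 2.4.1–2.4.3 are not reproduced here; the paper cites cumulative residual entropy [45]. Purely as an illustration of the idea, the sketch below computes a histogram-based Shannon entropy that can be applied to image intensities or point-cloud coordinates, yielding the per-frame time series consumed in Section 2.4.4. It is a generic stand-in, not the paper's metric.

```python
import numpy as np

def shannon_entropy(samples: np.ndarray, bins: int = 64) -> float:
    """Histogram-based Shannon entropy (bits) of a flattened sample set."""
    hist, _ = np.histogram(samples.ravel(), bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def complexity_series(frames) -> np.ndarray:
    """Per-frame complexity time series, e.g., over image intensities
    or point-cloud ranges."""
    return np.array([shannon_entropy(np.asarray(f)) for f in frames])
```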
2.4.4. CNN-Based Time-Series Classification
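Section 3.1 indicates the diagnoser is a CNN with SoftMax outputs over fault types, fed a time series of samples. A minimal PyTorch sketch of such a classifier is given below; the channel counts, depth, and the 10-class output are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class FaultTypeCNN(nn.Module):
    """Classify a time series of per-frame metrics into fault types."""
    def __init__(self, in_channels: int, n_fault_types: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the time axis
        )
        self.classifier = nn.Linear(64, n_fault_types)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -> (batch, n_fault_types) logits
        return self.classifier(self.features(x).squeeze(-1))

# Fault type = argmax of the SoftMax probabilities examined in Section 3.1
model = FaultTypeCNN(in_channels=4, n_fault_types=10)
probs = torch.softmax(model(torch.randn(1, 4, 32)), dim=-1)
```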
3. Experiments and Results Analysis
3.1. Validation on Dataset
- (1) Based on the established criterion of positioning error relative to the "ground truth", fault detection accuracy averages 96.93%, and no false positives occurred in scenarios without injected faults. Even in the presence of algorithmic delays during real-world vehicle deployment, the hardware-redundant fault detection method therefore remains accurate and applicable to autonomous driving sensing systems;
- (2) The fault diagnoser exhibited no misjudgments during testing. To examine differences across fault types, we inspected the SoftMax output probabilities of the CNN described in Section 2.4.4. The accuracy for individual LiDAR and camera faults was comparable (89.67% and 89.42%, respectively). When misalignment faults occurred simultaneously in LiDAR and the camera, diagnostic accuracy improved by approximately 3.6%, reaching 92.73%;
- (3) Regarding response times, the fault detector averaged 0.13 s in no-fault scenarios. Isolated LiDAR faults were detected faster (0.76 s on average) than isolated camera faults (0.87 s on average), while misalignment faults occurring in both LiDAR and the camera took the longest, averaging 1.28 s. The fault diagnoser showed the same trend: isolated LiDAR faults (1.10 s on average) were diagnosed faster than isolated camera faults (1.48 s on average), and simultaneous misalignment faults took the longest, averaging 1.57 s. Overall, fault diagnosis exhibited longer response times than fault detection because the CNN requires a time series of samples rather than the single-frame data used for fault detection.
3.2. Validation on Real-World Vehicle
4. Discussion and Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Hou, W.; Li, W.; Li, P. Fault diagnosis of the autonomous driving perception system based on information fusion. Sensors 2023, 23, 5110.
2. Zhang, Y.; He, L.; Cheng, G. MLPC-CNN: A Multi-Sensor Vibration Signal Fault Diagnosis Method under Less Computing Resources. Measurement 2022, 188, 110407.
3. Zhao, G.; Li, J.; Ma, L. Design and implementation of vehicle trajectory perception with multi-sensor information fusion. Electron. Des. Eng. 2022, 1, 1–7.
4. Gao, Z.; Cecati, C.; Ding, S.X. A survey of fault diagnosis and fault-tolerant techniques—Part I: Fault diagnosis with model-based and signal-based approaches. IEEE Trans. Ind. Electron. 2015, 62, 3757–3767.
5. Gao, Z.; Cecati, C.; Ding, S.X. A survey of fault diagnosis and fault-tolerant techniques—Part II: Fault diagnosis with knowledge-based and hybrid/active approaches. IEEE Trans. Ind. Electron. 2015, 62, 3768–3774.
6. Willsky, A.S. A survey of design methods for failure detection in dynamic systems. Automatica 1976, 12, 601–611.
7. Van Wyk, F.; Wang, Y.; Khojandi, A.; Masoud, N. Real-time sensor anomaly detection and identification in automated vehicles. IEEE Trans. Intell. Transp. Syst. 2019, 21, 1264–1276.
8. Muenchhof, M.; Isermann, R. Comparison of change detection methods for a residual of a hydraulic servo-axis. IFAC Proc. Vol. 2005, 38, 317–322.
9. Chan, C.W.; Hua, S.; Hong-Yue, Z. Application of fully decoupled parity equation in fault detection and identification of DC motors. IEEE Trans. Ind. Electron. 2006, 53, 1277–1284.
10. Escobet, T.; Travé-Massuyès, L. Parameter Estimation Methods for Fault Detection and Isolation. Bridge Workshop Notes, 2001; pp. 1–11. Available online: https://homepages.laas.fr/louise/Papiers/Escobet%20&%20Trave-Massuyes%202001.pdf (accessed on 11 November 2023).
11. Hilbert, M.; Kuch, C.; Nienhaus, K. Observer based condition monitoring of the generator temperature integrated in the wind turbine controller. In Proceedings of the EWEA 2013 Scientific Proceedings, Vienna, Austria, 4–7 February 2013; pp. 189–193.
12. Heredia, G.; Ollero, A. Sensor fault detection in small autonomous helicopters using observer/Kalman filter identification. In Proceedings of the 2009 IEEE International Conference on Mechatronics, Malaga, Spain, 14–17 April 2009; pp. 1–6.
13. Wang, J.; Zhang, Y.; Cen, G. Fault diagnosis method of hydraulic condition monitoring system based on information entropy. Comput. Eng. Des. 2021, 8, 2257–2264.
14. Chandola, V.; Banerjee, A.; Kumar, V. Anomaly detection: A survey. ACM Comput. Surv. 2009, 41, 1–58.
15. Sharifi, R.; Langari, R. Nonlinear Sensor Fault Diagnosis Using Mixture of Probabilistic PCA Models. Mech. Syst. Signal Process. 2017, 85, 638–650.
16. Yin, S.; Ding, S.X.; Xie, X.; Luo, H. A review on basic data-driven approaches for industrial process monitoring. IEEE Trans. Ind. Electron. 2014, 61, 6418–6428.
17. Meskin, N.; Khorasani, K. Fault Detection and Isolation: Multi-Vehicle Unmanned Systems; Springer Science & Business Media: Berlin, Germany, 2011.
18. Li, M.; Wang, Y.-X.; Ramanan, D. Towards streaming perception. In Proceedings of Computer Vision—ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Part II; pp. 473–488.
19. Dong, Y.; Kang, C.; Zhang, J.; Zhu, Z.; Wang, Y.; Yang, X.; Su, H.; Wei, X.; Zhu, J. Benchmarking Robustness of 3D Object Detection to Common Corruptions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 1022–1032.
20. Ferracuti, F.; Giantomassi, A.; Ippoliti, G.; Longhi, S. Multi-Scale PCA based fault diagnosis for rotating electrical machines. In Proceedings of the 8th ACD 2010 European Workshop on Advanced Control and Diagnosis, Ferrara, Italy, 18–19 November 2010; pp. 296–301.
21. Yu, K.; Tao, T.; Xie, H.; Lin, Z.; Liang, T.; Wang, B.; Chen, P.; Hao, D.; Wang, Y.; Liang, X. Benchmarking the robustness of LiDAR-camera fusion for 3D object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 3187–3197.
22. Realpe, M.; Vintimilla, B.X.; Vlacic, L. A Fault Tolerant Perception System for Autonomous Vehicles. In Proceedings of the 2016 35th Chinese Control Conference (CCC), Chengdu, China, 27–29 July 2016; pp. 6531–6536.
23. Schlager, B.; Goelles, T.; Behmer, M.; Muckenhuber, S.; Payer, J.; Watzenig, D. Automotive lidar and vibration: Resonance, inertial measurement unit, and effects on the point cloud. IEEE Open J. Intell. Transp. Syst. 2022, 3, 426–434.
24. Hendrycks, D.; Dietterich, T. Benchmarking neural network robustness to common corruptions and perturbations. arXiv 2019, arXiv:1903.12261.
25. Li, Z.; Wang, W.; Li, H.; Xie, E.; Sima, C.; Lu, T.; Qiao, Y.; Dai, J. BEVFormer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 1–18.
26. Berger, M.; Tagliasacchi, A.; Seversky, L.M.; Alliez, P.; Levine, J.A.; Sharf, A.; Silva, C.T. State of the art in surface reconstruction from point clouds. In Proceedings of Eurographics 2014 State-of-the-Art Reports, Strasbourg, France, 7–11 April 2014.
27. Carballo, A.; Lambert, J.; Monrroy, A.; Wong, D.; Narksri, P.; Kitsukawa, Y.; Takeuchi, E.; Kato, S.; Takeda, K. LIBRE: The multiple 3D LiDAR dataset. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 1094–1101.
28. Li, X.; Chen, Y.; Zhu, Y.; Wang, S.; Zhang, R.; Xue, H. ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 20371–20381.
29. Ren, J.; Pan, L.; Liu, Z. Benchmarking and analyzing point cloud classification under corruptions. In Proceedings of the International Conference on Machine Learning, Baltimore, MD, USA, 17–23 July 2022; pp. 18559–18575.
30. Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361.
31. Briñón-Arranz, L.; Rakotovao, T.; Creuzet, T.; Karaoguz, C.; El-Hamzaoui, O. A methodology for analyzing the impact of crosstalk on LIDAR measurements. In Proceedings of the 2021 IEEE Sensors, Sydney, Australia, 31 October–3 November 2021; pp. 1–4.
32. Li, S.; Wang, Z.; Juefei-Xu, F.; Guo, Q.; Li, X.; Ma, L. Common corruption robustness of point cloud detectors: Benchmark and enhancement. IEEE Trans. Multimed. 2023, 1–12.
33. Mao, J.; Niu, M.; Jiang, C.; Chen, J.; Liang, X.; Li, Y.; Ye, C.; Zhang, W.; Li, Z.; Yu, J. One Million Scenes for Autonomous Driving: ONCE Dataset. In Proceedings of the Thirty-Fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1), Online, 7–10 December 2021.
34. Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. nuScenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 11621–11631.
35. Jung, A. imgaug Documentation. 2019. Available online: https://media.readthedocs.org/pdf/imgaug/stable/imgaug.pdf (accessed on 15 November 2023).
36. Fan, Z.; Wang, S.; Pu, X.; Wei, H.; Liu, Y.; Sui, X.; Chen, Q. Fusion-Former: Fusion Features across Transformer and Convolution for Building Change Detection. Electronics 2023, 12, 4823.
37. Guo, J.; Han, K.; Wu, H.; Tang, Y.; Chen, X.; Wang, Y.; Xu, C. CMT: Convolutional neural networks meet vision transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 12175–12185.
38. Liu, Z.; Tang, H.; Amini, A.; Yang, X.; Mao, H.; Rus, D.L.; Han, S. BEVFusion: Multi-task multi-sensor fusion with unified bird's-eye view representation. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 2774–2781.
39. Schubert, E.; Sander, J.; Ester, M.; Kriegel, H.P.; Xu, X. DBSCAN revisited, revisited: Why and how you should (still) use DBSCAN. ACM Trans. Database Syst. 2017, 42, 1–21.
40. Lu, W.; Zhou, Y.; Wan, G.; Hou, S.; Song, S. L3-Net: Towards learning based LiDAR localization for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 6389–6398.
41. Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 7464–7475.
42. Huang, H.; Huang, X.; Ding, W.; Yang, M.; Yu, X.; Pang, J. Vehicle vibro-acoustical comfort optimization using a multi-objective interval analysis method. Expert Syst. Appl. 2023, 213, 119001.
43. Tan, F.; Liu, W.; Huang, L.; Zhai, C. Object Re-Identification Algorithm Based on Weighted Euclidean Distance Metric. J. South China Univ. Technol. Nat. Sci. Ed. 2015, 9, 88–94.
44. Wang, X.; Zhu, Z.; Zhang, Y.; Huang, G.; Ye, Y.; Xu, W.; Chen, Z.; Wang, X. Are We Ready for Vision-Centric Driving Streaming Perception? The ASAP Benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 9600–9610.
45. Rao, M.; Chen, Y.; Vemuri, B.C.; Wang, F. Cumulative residual entropy: A new measure of information. IEEE Trans. Inf. Theory 2004, 50, 1220–1228.
46. Huang, H.; Lim, T.C.; Wu, J.; Ding, W.; Pang, J. Multitarget prediction and optimization of pure electric vehicle tire/road airborne noise sound quality based on a knowledge- and data-driven method. Mech. Syst. Signal Process. 2023, 197, 110361.
47. Huang, H.; Huang, X.; Ding, W.; Zhang, S.; Pang, J. Optimization of electric vehicle sound package based on LSTM with an adaptive learning rate forest and multiple-level multiple-object method. Mech. Syst. Signal Process. 2023, 187, 109932.
Performance of three fusion models [36,37,38] on nuScenes-F under each injected fault type; (L) = LiDAR, (C) = camera.

| Fault | | FusionFormer [36] | CMT [37] | BEVFusion [38] |
|---|---|---|---|---|
| None | | 70.01 | 70.29 | 68.45 |
| Interference | Density | 63.26 | 63.38 | 61.68 |
| | Cutout | 61.80 | 61.34 | 60.22 |
| | Crosstalk | 62.21 | 62.32 | 61.26 |
| | FOV Lost | 26.13 | 23.73 | 24.72 |
| | Gaussian (L) | 58.52 | 53.09 | 55.18 |
| | Uniform (L) | 62.76 | 62.36 | 60.79 |
| | Impulse (L) | 62.97 | 63.03 | 61.46 |
| | Gaussian (C) | 54.57 | 62.17 | 58.64 |
| | Uniform (C) | 57.20 | 62.88 | 59.88 |
| | Impulse (C) | 54.76 | 62.03 | 58.51 |
| Alignment | Spatial | 63.31 | 63.81 | 62.23 |
| | Temporal | 51.06 | 42.06 | 44.60 |
Validation results on the nuScenes-F dataset (Section 3.1).

| Results | | Values |
|---|---|---|
| Average fault detection accuracy | | 96.93% |
| Average fault diagnosis accuracy | Corruptions (L) | 89.67% |
| | Corruptions (C) | 89.42% |
| | Misalignment | 92.73% |
| Average fault detection response time | No Fault | 0.13 s |
| | Corruptions (L) | 0.76 s |
| | Corruptions (C) | 0.87 s |
| | Misalignment | 1.28 s |
| Average fault diagnosis response time | Corruptions (L) | 1.10 s |
| | Corruptions (C) | 1.48 s |
| | Misalignment | 1.57 s |
Validation results on the real-world vehicle (Section 3.2).

| Results | | Values |
|---|---|---|
| Average fault detection accuracy | | 95.33% |
| Average fault diagnosis accuracy | Corruptions (L) | 82.23% |
| | Corruptions (C) | 81.46% |
| | Misalignment | 87.31% |
| Average fault detection response time | No Fault | 0.13 s |
| | Corruptions (L) | 0.82 s |
| | Corruptions (C) | 0.92 s |
| | Misalignment | 1.36 s |
| Average fault diagnosis response time | Corruptions (L) | 1.18 s |
| | Corruptions (C) | 1.56 s |
| | Misalignment | 1.66 s |