Review

A Survey on Sensor Failures in Autonomous Vehicles: Challenges and Solutions

Polytechnic University of Coimbra, Rua da Misericórdia, Lagar dos Cortiços, S. Martinho do Bispo, 3045-093 Coimbra, Portugal
* Author to whom correspondence should be addressed.
Sensors 2024, 24(16), 5108; https://doi.org/10.3390/s24165108
Submission received: 14 June 2024 / Revised: 4 August 2024 / Accepted: 5 August 2024 / Published: 7 August 2024
(This article belongs to the Special Issue Intelligent Sensors and Control for Vehicle Automation)

Abstract

Autonomous vehicles (AVs) rely heavily on sensors to perceive their surrounding environment, make decisions, and act on them. However, these sensors have weaknesses and are prone to failure, resulting in decision errors by vehicle controllers that pose significant challenges to safe operation. To mitigate sensor failures, it is necessary to understand how they occur and how they affect the vehicle's behavior so that fault-tolerant and fault-masking strategies can be applied. This survey covers 108 publications and presents an overview of the sensors used in AVs today, categorizes the sensor failures that can occur, such as radar interference, detection ambiguities, or camera image failures, and provides an overview of mitigation strategies such as sensor fusion, redundancy, and sensor calibration. It also provides insights into research areas critical to improving safety in the autonomous vehicle industry, so that new or more in-depth research may emerge.

1. Introduction

Autonomous vehicles (AVs), or self-driving cars, are vehicles that perceive their surroundings and can move from point A to point B without human intervention. Through a combination of sensors, control theories, machine-learning algorithms, and computing, AVs can make real-time decisions. They are expected not only to reduce the number and severity of car accidents caused by human errors but also to reduce the carbon footprint through efficient driving [1]. However, despite these expected advantages, autonomous driving poses challenges such as a lack of governing legislation, a perceived increase in unemployment in the transportation sector, and cybersecurity threats [2,3].
Notwithstanding the current skepticism and uncertainty about fully autonomous vehicles [4,5], the number of vehicles using some degree of autonomous driving is increasing every year [6]. Many new vehicles now include driver assistance features that support drivers in their daily driving tasks, using sensors to perceive the vehicle's surroundings and assist with functions such as blind spot detection, lane change assistance, rear cross-traffic alert, forward cross-traffic alert, and adaptive cruise control, among many other examples. These features represent incremental steps toward full autonomy. Nevertheless, current sensor technology is not reliable in all environments in which vehicles may operate [7,8]. Factors such as adverse weather conditions, complex urban settings, and sensor failures can significantly impair the performance of these systems. This unreliability poses challenges to the advancement of autonomous vehicles.
This survey aims to contribute to the development of solutions and strategies that increase the safety of autonomous vehicles by:
  • Identifying and categorizing sensor failures.
  • Identifying sensor limitations and the most frequent failures.
  • Researching mitigation strategies for the categorized problems, such as sensor calibration, sensor fusion, radar ambiguity detection, and strategies for dealing with camera failures such as blur, condensation, broken lenses, heat, water, etc. Given that none of those mitigation strategies solves the issue of a complete failure of a sensor, we also explored how redundancy is applied in AVs.
  • Discussing ongoing and potential future advances in perception systems, such as the continuous improvement of sensors, improving the robustness of machine-learning models, and the importance of redundancy in AVs.
This survey is based on a review of 108 relevant publications. It highlights the importance of sensor fusion, redundancy, and continuous improvement in sensor calibration techniques to improve the safety and efficiency of AVs. It also shows that the topic of sensors in autonomous vehicles is becoming increasingly relevant, as the number of published papers per year has been increasing in recent years. Figure 1 shows the number of papers analyzed per year.
The remainder of this paper is organized as follows. Section 2 presents background concepts, introduces the architecture of an autonomous vehicle, and describes which sensors are used. Section 3 provides an overview of each sensor type, detailing its use and technical specifications. Section 4 describes mitigation strategies. Section 5 presents some ideas for future research in AVs. Finally, Section 6 concludes the paper by summarizing the findings.

2. Background Concepts

According to the J3016 standard proposed by the Society of Automotive Engineers (SAE) [9], autonomous vehicles can be classified into 6 levels of increasing degrees of autonomy. Level 0 refers to the weakest automation, where the driver has full control of the vehicle; Level 5 is the highest, corresponding to full automation, where the vehicle controls all aspects of driving and requires no human intervention. There are other classification schemes (e.g., the German Federal Highway Research Institute (BASt) and NHTSA [10]), but the J3016 is the most used in the industry and academia. The levels are shown in Figure 2.
These automation levels are important because they serve as general guidelines for how technologically advanced a car needs to be. Perception is a critical layer of the ADS (autonomous driving system), which uses sensors to perceive the environment and control the vehicle. Four main phases can be identified between sensing and controlling the vehicle: perception, planning and decision-making, motion and vehicle control, and system supervision [11]. This is illustrated in Figure 3. Perception of the surrounding environment is required at all levels of autonomy. Level 1 is the level that requires the least perception capability, as it only requires the perception needed to assist the driver in tasks such as parking and blind spot detection.
The main goal of the perception phase as described in [11] is to receive data from sensors and other sources (vehicle sensors configuration, map databases, etc.) and generate a representation of the vehicle state and a world model.
Sensors can be categorized into internal state sensors (or proprioceptive sensors), and external state sensors (or exteroceptive sensors) [12]. Proprioceptive sensors are those used to measure the (internal) state of the vehicle. This category includes sensors such as global navigation satellite systems (GNSS), inertial measurement units (IMUs), inertial navigation systems (INS), and encoders, which are devices used to provide feedback on the position, speed, and rotation of moving parts within the vehicle such as the steering wheel, motor, brake, accelerator pedals, etc. These sensors are used to obtain information about the vehicle’s position, motion, and odometry [13].
The autonomous vehicle (AV) can be positioned using relative or absolute methods. Relative positioning of an AV involves determining the vehicle’s coordinates based on (relative to) its surrounding landmarks, while absolute positioning involves determining the vehicle’s coordinates using a global reference frame (world) [14].
Exteroceptive sensors monitor the vehicle’s surroundings to obtain data on the terrain, the environment, and external objects. These sensors include cameras, LiDARs (light detection and ranging), radars (radio detection and ranging), ultrasonic sensors, and, in recent years, synthetic aperture radar (SAR). Although SAR is currently not implemented by manufacturers, we explored this sensor due to positive results in recent studies and its potential use in future vehicles.
In the typical AV use scenario, these sensors are used together to provide information about detection, lane occupancy, and more. Figure 4 shows an example of the type and location of sensors on a typical autonomous vehicle.
Sensors vary in technology and purpose, and each type of sensor has weaknesses inherent to the technology it relies on. To cover these weaknesses, mitigation strategies such as sensor calibration and sensor fusion are implemented [14]. Sensor fusion is a crucial part of autonomous driving systems [13,15], in which input from multiple sensors is combined to reduce errors and overcome the limitations of individual sensors. Sensor fusion helps create a consistent and accurate representation of the environment in various harsh situations [16].

3. Sensors Used in the Perception Layer

This section provides an overview of each sensor type, detailing its use and technical specifications. Next, a categorization of sensor failures and limitations/weaknesses is presented, such as radar interference and harsh environments, offering a comprehensive overview of the various issues that can arise and their potential impact on AVs. This categorization is relevant to test teams, providing data that enhances the understanding of sensor issues in AVs to improve AV safety.
For the sake of clarity, we use the following definition from Avizienis, Laprie, Randell, and Landwehr [17]: a service failure (or simply a failure) is an event that occurs when the delivered service deviates from correct service, either because it does not comply with the functional specification, or because this specification did not adequately describe the system function. For example, an ultrasonic sensor fails when it delivers an incorrect service (wrong perception) due to interference between multiple sensors (which is called a fault). However, if this sensor does not perceive an object 100 m away, that is a limitation, not a failure, since the functional specification constrains the maximum perception distance to 2 m. Because ultrasonic sensors are particularly susceptible to service failures caused by interference between multiple sensors, we call this susceptibility a weakness.

3.1. Ultrasonic Sensors

Ultrasonic sensors use sound waves in the frequency range of 20 to 40 kHz [18], which falls outside the human hearing range, to measure distance and detect the presence of objects. The distance is calculated by emitting a sound wave and measuring its time-of-flight (ToF) until an echo signal is received. These sensors are directional, with a very narrow beam detection range [18]. They are mostly used as parking sensors [19] and perform well in adverse weather conditions and dusty situations [7].
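To make the time-of-flight principle concrete, the following minimal Python sketch (our own illustration, not taken from the cited works; the speed of sound is assumed to be about 343 m/s in air at 20 °C) converts an echo round-trip time into a distance.

def ultrasonic_distance(echo_time_s, speed_of_sound_m_s=343.0):
    """Estimate obstacle distance from an ultrasonic echo round-trip time.

    The pulse travels to the obstacle and back, so the one-way distance
    is half of the total path covered during echo_time_s.
    """
    return speed_of_sound_m_s * echo_time_s / 2.0

# Example: an echo received 11.7 ms after emission corresponds to ~2 m,
# roughly the upper end of the typical parking-sensor range.
print(round(ultrasonic_distance(11.7e-3), 2))  # ~2.01 m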
Given this narrow beam detection range, several ultrasonic sensors are needed to cover the full field around the vehicle. However, multiple sensors operating in proximity interfere with one another, so a unique signature or identification code is required to reject echoes from other ultrasonic sensors in the vicinity. Ultrasonic sensors also have a limited range, detecting obstacles only up to about 2 m [20].

3.2. RADAR: Radio Detection and Ranging

Radar sensors emit millimeter-wave (mmWave) electromagnetic signals and receive the reflections that bounce back from objects, exploiting the Doppler shift effect. Radars can therefore measure not only the exact distance but also the relative speed [15]. They operate at frequencies of 24/77/79 GHz, but most newly developed sensors operate in the 76–81 GHz band, as the use of the 24 GHz band was prohibited by regulators due to its lower bandwidth, accuracy, and resolution. Radars typically have a perception range of 5 m up to 200 m [21], perform well in all weather conditions (rain, fog, and dark environments), and can accurately detect close-range targets on all sides of the vehicle [7,21]. They are used in blind spot detection (BSD), lane change assistance (LCA), rear cross-traffic alert (RCTA), forward cross-traffic alert (FCTA), adaptive cruise control, and radar–video fusion.
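As a minimal illustration of the Doppler principle (our own sketch; the 77 GHz carrier is an assumed typical value), the relative radial speed of a target follows directly from the measured Doppler frequency shift.

C = 3.0e8  # speed of light, m/s

def radial_velocity(doppler_shift_hz, carrier_hz=77e9):
    """Relative radial speed from the Doppler shift of a reflected radar wave.

    The factor of 2 accounts for the two-way propagation of the wave
    between the radar and the target.
    """
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# A ~5.13 kHz shift at 77 GHz corresponds to roughly 10 m/s (36 km/h).
print(round(radial_velocity(5130.0), 2))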
One common problem with radars is that millimeter waves can produce false positives due to waves bounced off the environment. Moreover, as the number of vehicles equipped with FMCW (frequency-modulated continuous-wave) radars increases, interference on shared frequencies is expected to become a problem [22].

3.3. LiDAR: Light Detection and Ranging

LiDAR sensors work by emitting pulses of light (laser) and measuring the time it takes for the light to bounce off objects and return to the sensor. By scanning the reflected laser beams emitted in various directions, LiDAR sensors can produce highly accurate spatial data, creating point clouds and distance maps of the environment [23]. LiDAR sensors are used to identify objects and pedestrians and to avoid collisions. Due to their availability, 905 nm pulse LiDAR devices were used in early AV systems. However, these 905 nm LiDAR systems have several important limitations, including high cost, inefficient mechanical scanning (i.e., the movement required to steer the laser and detector across the field of view), interference from other light sources, and eye-safety concerns that impose power restrictions limiting their detection range to approximately 100 m. This led to a shift to the retina-safe 1550 nm band, which allows higher pulse power and increased ranges of up to 300 m [23].
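The sketch below (our own illustration, with hypothetical beam angles) shows how a single LiDAR return becomes one point of the point cloud: the two-way time of flight gives the range, and the known beam direction places the point in 3D.

import math

C = 3.0e8  # speed of light, m/s

def lidar_point(time_of_flight_s, azimuth_rad, elevation_rad):
    """Convert one LiDAR return into a 3D point in the sensor frame.

    The range follows from the two-way time of flight; the direction of
    the emitted pulse (azimuth/elevation) places the point in space.
    """
    r = C * time_of_flight_s / 2.0
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# A return after ~667 ns at azimuth 10° and elevation 1° lies about 100 m away.
print(lidar_point(667e-9, math.radians(10), math.radians(1)))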
The performance of LiDAR in optimal environmental conditions is much better than that of radar, but as soon as there is fog, snow, or rain, its performance suffers [20,24]. In [25], the authors analyze the effects of mirror-like objects in LiDAR, explaining that laser scans can be completely reflected by mirrors, resulting in no range or intensity data, leading to the creation of a faulty map. They also explain the different behaviors of light reflection on multiple surfaces (Figure 5). This shows that LiDAR performance is strongly influenced by the environment.

3.4. Camera

Cameras can capture high-resolution details of objects up to 250 m away. They are categorized into types that use visible (VIS) or infrared (IR) light, range-gated technology, polarization technology, and event detection [26]. Cameras are based on one of two sensor technologies: charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS). CCD image sensors are produced using an expensive manufacturing process that delivers sensors with high quantum efficiency and low noise. CMOS sensor technology was developed to minimize the cost of CCD fabrication at the expense of performance [11,14,21].
VIS cameras have a wide range of applications in AVs, including blind spot detection (BSD), lane change assistance (LCA), side view control, and accident recording. Deep learning algorithms can also be used with these cameras to detect and understand traffic signs and other objects [27,28,29].
IR cameras are passive sensors that use light in infrared wavelengths (780 nm to 1 mm), making them less susceptible to light interference. Common applications in AVs are scenarios with illumination peaks and the detection of warm objects such as pedestrians [30,31], animals [32], or other vehicles [33].
Range-gated cameras are imaging systems that capture images based on the distance to the target by using a time-controlled gating mechanism. This gating mechanism synchronizes with a pulsed light source (such as a laser) to selectively allow light reflected from objects within a specific distance range to reach the camera sensor. By controlling the timing of the gate, the camera can effectively “slice” the scene at different distances, filtering out unwanted reflections and background noise to improve visibility in adverse conditions [34].
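As an illustration of the gating principle (our own sketch, not taken from [34]), the gate delay and gate duration needed to image a given distance slice follow directly from the round-trip travel time of light.

C = 3.0e8  # speed of light, m/s

def gate_timing(slice_start_m, slice_end_m):
    """Gate delay and gate duration for imaging a given distance slice.

    Light reflected from slice_start_m arrives after a round trip of 2*d/c,
    so the gate opens at that delay and stays open long enough to collect
    returns from the far edge of the slice.
    """
    delay_s = 2.0 * slice_start_m / C
    width_s = 2.0 * (slice_end_m - slice_start_m) / C
    return delay_s, width_s

# Imaging the 30-60 m slice: the gate opens ~200 ns after the pulse and stays open ~200 ns.
print(gate_timing(30.0, 60.0))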
Polarization cameras are imaging devices that detect and measure the polarization state of light. Unlike conventional cameras that capture intensity and color, polarization cameras provide additional information about the angle and degree of light polarization. This extra layer of information can be used to infer various properties of objects and scenes that are not visible in standard images, and one of its main advantages is that it can detect transparent objects [35].
Event cameras are a type of vision sensor that capture changes in a scene asynchronously, as opposed to traditional cameras that capture full frames at fixed intervals. Event cameras detect and record individual pixel-level changes in brightness, called “events”, in real-time and with extremely low latency, and the main advantages are no motion blur and output over 1000 fps [36].
Cameras are strongly influenced by changes in lighting conditions, and by meteorological conditions such as rain, snow, and fog. Thus, cameras are typically paired with radar and/or LiDAR technologies to improve their resilience. In [37], the authors investigate the effects of degraded images on trained AI/ML agents, as seen in Figure 6. These camera failures were injected using the CARLA simulator [38], resulting in AV collisions, as shown in Figure 7, despite the application of some mitigation techniques (more in Section 4.6).

3.5. GNSS: Global Navigation Satellite Systems

The operating principle of GNSS is based on the ability of the receiver to track at least four satellites and compute its distance to each of them. Since the locations of the satellites are known, the receiver can determine its global position using trilateration.
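The simplified Python sketch below illustrates trilateration by least squares (our own example with hypothetical satellite positions; the receiver clock bias, which real GNSS solutions estimate as a fourth unknown, is ignored for brevity).

import numpy as np
from scipy.optimize import least_squares

def trilaterate(sat_positions, ranges):
    """Estimate a receiver position from satellite positions and measured ranges.

    Simplified sketch: with the clock bias ignored, the position is the point
    whose distances to the satellites best match the measured ranges.
    """
    def residuals(pos):
        return np.linalg.norm(sat_positions - pos, axis=1) - ranges
    return least_squares(residuals, x0=np.zeros(3)).x

# Hypothetical satellite positions (km) and noiseless ranges to a receiver at (1, 2, 0).
sats = np.array([[20000, 0, 0], [0, 20000, 0], [0, 0, 20000], [12000, 12000, 12000]], float)
true_pos = np.array([1.0, 2.0, 0.0])
measured_ranges = np.linalg.norm(sats - true_pos, axis=1)
print(np.round(trilaterate(sats, measured_ranges), 2))  # ~[1. 2. 0.]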
GNSS signals are prone to many errors that reduce the accuracy of the system, including the following:
  • Timing errors, due to variations between the satellite’s atomic clock and the receiver’s crystal clock.
  • Signal delays, caused by propagation through the ionosphere and troposphere.
  • Multipath effect.
  • Satellite orbit uncertainties.
Current vehicle positioning systems improve their accuracy by combining GNSS signals with data from other vehicle sensors (e.g., inertial measurement units (IMUs), LiDARs, radars, and cameras) to produce trustworthy position information [39,40,41]. This mitigation strategy is called sensor fusion and is discussed in Section 4.2. GNSS is susceptible to jamming, for example when the receiver encounters interference from other radio transmission sources. GNSS receivers also suffer from spoofing, in which fake GNSS signals are intentionally transmitted to feed false position information and divert the target from its intended trajectory.

3.6. Inertial Measurements Units

Autonomous vehicles can detect slippage or lateral movement using inertial measurement units (IMUs), which collect data from accelerometers, gyroscopes, and magnetometers. Using these data, it is possible to estimate the motion and orientation of the vehicle. The IMU, in conjunction with the other sensors, can correct errors and improve the sampling rate of the measurement system [21,41]. This approach is called inertial guidance.
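As a minimal illustration of inertial guidance (our own sketch, not a production algorithm), a complementary filter blends the gyroscope's short-term accuracy with the accelerometer's drift-free tilt reference to estimate pitch.

def complementary_filter(pitch_prev_rad, gyro_rate_rad_s, accel_pitch_rad, dt_s, alpha=0.98):
    """One step of a complementary filter for pitch estimation.

    Integrating the gyroscope is accurate over short intervals but drifts,
    while the accelerometer gives an absolute but noisy tilt reference;
    blending the two suppresses both drift and noise.
    """
    gyro_estimate = pitch_prev_rad + gyro_rate_rad_s * dt_s
    return alpha * gyro_estimate + (1.0 - alpha) * accel_pitch_rad

# 100 Hz updates: the estimate follows the gyro but is slowly pulled toward the accelerometer.
pitch = 0.0
for _ in range(100):
    pitch = complementary_filter(pitch, gyro_rate_rad_s=0.01, accel_pitch_rad=0.02, dt_s=0.01)
print(round(pitch, 4))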

3.7. Summary of Problems and Weaknesses of Proprioceptive and Exteroceptive Sensors

Table 1 compares the advantages of each sensor with its limitations and weaknesses.
Table 2 presents, for each sensor type, the impact of the sensor failures on autonomous vehicles’ performance and safety.

4. Mitigation Strategies

This section analyzes mitigation strategies, including sensor calibration, sensor fusion techniques, radar interference mitigation, radar ambiguity detection, redundancy, camera image failure mitigation, and LiDAR mirror-like object detection.

4.1. Sensor Calibration

Subsequent processing steps such as sensor fusion, obstacle detection algorithms, localization, mapping, planning, and control require accurate sensor data. Thus, having calibrated sensors is of great importance. The following sections discuss three different forms of sensor calibration [14].

4.1.1. Intrinsic Calibration

This type of calibration addresses sensor-specific parameters and occurs before extrinsic calibration and the execution of obstacle detection algorithms. Intrinsic calibration calculates the inherent parameters of a sensor, such as the focal lengths of a vision camera, to compensate for systematic or deterministic errors.
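As a minimal illustration of what intrinsic parameters represent (our own sketch; the focal lengths and principal point are hypothetical values and lens distortion is omitted), the pinhole model below maps a 3D point in the camera frame to pixel coordinates.

import numpy as np

def project_point(point_cam, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0):
    """Project a 3D point (camera frame, metres) to pixel coordinates.

    fx, fy, cx, and cy are the intrinsic parameters recovered by intrinsic
    calibration; distortion coefficients are left out for brevity.
    """
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    uvw = K @ np.asarray(point_cam)
    return uvw[:2] / uvw[2]

# A point 20 m ahead and 1 m to the side maps to a pixel offset from the image centre.
print(project_point([1.0, 0.0, 20.0]))  # ~[690. 360.]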

4.1.2. Extrinsic Calibration

Extrinsic calibration is a Euclidean transformation that converts points from one 3D coordinate system to another. For example, it converts points from the 3D world or LiDAR coordinate system to the 3D camera coordinate system. This calibration calculates the position and orientation of the sensor relative to the three orthogonal axes of 3D space.
In [48], the authors propose an algorithm that calibrates the camera position using data provided by the IMU. In [49], extrinsic calibration between a camera and a LiDAR is achieved by using 3D calibration targets, and calibration of the sensors is then performed based on the extracted corresponding points of the object.
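A minimal sketch of how extrinsic parameters are applied is shown below (our own example; the rotation and translation values are hypothetical placeholders, not an actual calibration result): the rigid transform moves LiDAR points into the camera coordinate frame, after which they can be projected with the camera intrinsics.

import numpy as np

def lidar_to_camera(points_lidar, R, t):
    """Transform Nx3 LiDAR points into the camera coordinate frame.

    R (3x3 rotation) and t (3-vector translation) are the extrinsic
    parameters estimated by extrinsic calibration.
    """
    return (R @ np.asarray(points_lidar).T).T + t

# Hypothetical extrinsics: identity rotation and a small translation offset.
R = np.eye(3)
t = np.array([0.0, 0.3, 1.2])
print(lidar_to_camera([[10.0, 0.0, 0.5]], R, t))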

4.1.3. Temporal Calibration

Temporal calibration establishes the synchronization rate of multiple sensor data streams. This is an important aspect because different sensors operate at different timings and with diverse latencies: For example, while a camera collects images at a given number of frames per second, a LiDAR or a radar may scan at a different rate [14]. This issue is very important because fusing data from non-synchronized sensors can induce system errors.
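A simple form of temporal alignment, sketched below under the assumption of already-synchronized clocks (our own illustration), pairs each camera frame with the nearest LiDAR scan and drops pairs whose time offset is too large to fuse safely.

import bisect

def match_timestamps(camera_ts, lidar_ts, max_offset_s=0.05):
    """Pair each camera timestamp with the closest LiDAR timestamp.

    lidar_ts must be sorted; pairs farther apart than max_offset_s are
    discarded, since fusing badly misaligned data introduces errors.
    """
    pairs = []
    for t_cam in camera_ts:
        i = bisect.bisect_left(lidar_ts, t_cam)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(lidar_ts)]
        j_best = min(candidates, key=lambda j: abs(lidar_ts[j] - t_cam))
        if abs(lidar_ts[j_best] - t_cam) <= max_offset_s:
            pairs.append((t_cam, lidar_ts[j_best]))
    return pairs

# 30 fps camera vs. 10 Hz LiDAR: each frame is paired with the closest scan within 50 ms.
print(match_timestamps([0.00, 0.033, 0.066, 0.100], [0.0, 0.1, 0.2]))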

4.2. Sensor Fusion

Sensor fusion combines data originating from multiple types of sensors, taking advantage of their strengths while compensating for their weaknesses. For example, combining data from cameras and radar can provide both high-resolution images and the relative speeds of obstacles in the area. Table 3 indicates which sensor performs best for each factor [14].
There is a considerable amount of research on multi-sensor fusion [11,14,15,50,51,52,53,54,55,56,57,58,59,60]. The most common sensor combinations for obstacle detection are camera–LiDAR (CL), camera–radar (CR), and camera–LiDAR–radar (CLR). The CR combination is the most used in multi-sensor fusion systems for environmental perception, followed by CLR and CL [54,61]. There is no single best combination of sensors, as the definition of "best" depends on functionality and price. The CR combination produces high-resolution images and provides distance and velocity data about surrounding obstacles [62,63,64,65]. Similarly, the CLR combination improves resolution over longer distances and provides accurate environmental information using LiDAR point clouds and depth map data [14].
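As a toy illustration of the statistical idea behind fusing complementary sensors (our own sketch using inverse-variance weighting, not the method of any particular cited work), a precise radar range and a noisier camera-derived distance can be combined into an estimate that is better than either alone.

def fuse_measurements(measurements, variances):
    """Fuse independent estimates of the same quantity by inverse-variance weighting.

    Each sensor's estimate is weighted by how much it is trusted (1/variance),
    so a precise radar range dominates a noisier camera-derived distance,
    and the fused variance is lower than either individual variance.
    """
    weights = [1.0 / v for v in variances]
    fused = sum(w * m for w, m in zip(weights, measurements)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused, fused_variance

# Hypothetical example: camera estimates 52 m (variance 4.0), radar 50 m (variance 0.25).
print(fuse_measurements([52.0, 50.0], [4.0, 0.25]))  # ~ (50.12, 0.24)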

4.2.1. Sensor Fusion Methodologies

Before fusing data, it is important to know which sensors to fuse and where to fuse—for instance, fuse only data corresponding to the front view of the vehicle or fuse everything around the car (bird’s eye)—and at which level the fusion should occur [63].
Multi-sensor data fusion (MSDF) frameworks have four levels: high-level fusion (HLF), also known as object-level fusion; low-level fusion (LLF), also known as data-level fusion; mid-level fusion (MLF), also known as feature-level fusion; and hybrid-level fusion [66].
HLF (object-level fusion) approaches are often used due to their relatively low complexity when compared to LLF and MLF approaches. However, HLF provides insufficient information because classifications with lower confidence values are rejected, such as when there are multiple overlapping obstacles.
In contrast, the LLF (data-level fusion) technique integrates (or fuses) data from each sensor at the most basic level of abstraction (raw data). This preserves all information and can increase the accuracy of obstacle detection. However, this method requires precise extrinsic calibration, as it depends heavily on accurate sensor measurements.
MLF (feature-level fusion) is an abstraction level between LLF and HLF; it combines multi-target features from the associated sensor data (raw measurements), such as color information from images or position features from radar and LiDAR, before performing recognition and classification on the merged multi-sensor features. However, MLF appears to be insufficient to achieve SAE automation levels 4 or 5 due to its limited sense of the environment and loss of contextual information [66].
Hybrid-level fusion can take the best of each level and merge the data. Although this gives good results, it greatly increases processing time [63].

4.2.2. Sensor Fusion Techniques and Algorithms

Although sensor fusion methodologies and algorithms have been extensively researched, a recent study [67] suggests that fusion remains a difficult process due to the interdisciplinary variety of the proposed algorithms. Another study [55] classified these techniques into classical sensor fusion algorithms and deep learning sensor fusion algorithms. Classical sensor fusion algorithms, such as knowledge-based, statistical, and probabilistic methods, rely on theories of uncertainty that model data imperfections, including inaccuracy and uncertainty, to fuse sensor data. Deep learning sensor fusion algorithms, in contrast, build multi-layer neural networks that process raw data and extract features to perform challenging and intelligent tasks, such as object detection in an urban environment. In the AV field, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are the most used algorithms in perception systems, and other algorithms such as deep belief networks (DBNs) and autoencoders (AEs) [57] are also used.
Convolutional neural networks (CNNs) are specialized artificial neural networks designed to process and analyze visual data. They are particularly effective for tasks involving image recognition, classification, and computer vision.
Recurrent neural networks (RNNs) are designed to handle sequential data and temporal dependencies. They are commonly used for tasks involving time series data, natural language processing, and speech recognition.
Deep belief networks (DBNs) are a type of generative graphical model composed of multiple layers of stochastic, latent variables. They can learn to probabilistically reconstruct their inputs by stacking restricted Boltzmann machines (RBMs).
Autoencoders (AEs) are neural networks designed to learn efficient encoding of input data by learning to encode the input into a compressed representation and then decode it back to the original input. They are used for data denoising and image retrieval.
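As a concrete, deliberately tiny illustration of the kind of CNN used on camera data, the PyTorch sketch below classifies 64 x 64 RGB crops into ten hypothetical classes (e.g., traffic-sign categories); the layer sizes are illustrative assumptions, not taken from the cited works.

import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local edge/texture filters
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 10),                 # scores for 10 hypothetical classes
)

dummy_batch = torch.randn(4, 3, 64, 64)  # 4 synthetic camera crops
print(model(dummy_batch).shape)          # torch.Size([4, 10])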
In [14], the authors present some additional approaches using different algorithms, and Table 4 summarizes these techniques.

4.3. Radar Interference Mitigation Strategies

As the number of automotive radar sensors in use increases, so does the likelihood of radar interference. According to the European MOSARIM (More Safety for All by Radar Interference Mitigation) project [71], to avoid interference, signals must differ in at least one dimension, such as time, frequency, location, or waveform. Radar interference mitigation techniques can be classified into four main categories [72]:
  • Detection and suppression at the receiver.
    Interference is detected in the measurement data and eliminated by removing the corrupted samples and reconstructing their values [73] (a minimal sketch of this approach is shown after the list below).
  • Detection and avoidance.
    The radar actively modifies its signal to avoid interference in subsequent cycles when interference is detected in the measurement signal. This strategy is inspired by the interference avoidance mechanism of bats [74]: the interference is avoided rather than suppressed locally.
  • Interference-aware cognitive radar.
    The radar senses the entire operational spectrum and adaptively avoids interference using waveform modification [75].
  • Centralized coordination.
    Self-driving cars are centrally coordinated to avoid radar interference [76]. Vehicles send their locations and routes to a control center, which models the radar operating schedule of vehicles in the same environment as a graph coloring problem and creates playbooks for each self-driving car to ensure that its radars work without interference.
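The sketch below illustrates the first category, detection and suppression at the receiver, in its crudest form (our own example on synthetic data): samples whose magnitude stands far above the median are treated as interference, removed, and re-created by interpolation from their clean neighbors.

import numpy as np

def suppress_interference(samples, threshold_factor=4.0):
    """Crude detect-and-suppress sketch for a radar time-domain signal.

    Samples whose magnitude exceeds threshold_factor times the median
    magnitude are treated as interference and replaced by values linearly
    interpolated from the surrounding clean samples.
    """
    samples = np.asarray(samples, dtype=float)
    corrupted = np.abs(samples) > threshold_factor * np.median(np.abs(samples))
    clean_idx = np.flatnonzero(~corrupted)
    repaired = samples.copy()
    repaired[corrupted] = np.interp(np.flatnonzero(corrupted), clean_idx, samples[clean_idx])
    return repaired

# A weak beat signal with two strong interference spikes injected at indices 20 and 41.
t = np.arange(64)
signal = np.sin(2 * np.pi * 0.05 * t)
signal[[20, 41]] += 15.0
print(round(np.max(np.abs(suppress_interference(signal))), 2))  # back near 1.0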

4.4. Radar Ambiguity Detection Mitigation Strategies

A key requirement in radar imaging is the reduction of angular ambiguities, i.e., signals caused by finite spatial sampling that can be misinterpreted as real targets [77].
A viable solution to the radar ambiguity problem in the automotive industry is to use a multichannel system [78] in which ambiguities are canceled if their angular distance from the real target is greater than the angular resolution of the multichannel array, as illustrated by the blue beam in Figure 8 [77,78].
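As a rough back-of-the-envelope illustration (our own approximation using the common wavelength-over-aperture rule of thumb, with assumed element counts), the sketch below shows how enlarging the array aperture sharpens the angular resolution that separates real targets from ambiguities.

import math

def angular_resolution_deg(num_elements, carrier_hz=77e9, spacing_factor=0.5):
    """Approximate angular resolution of a uniform linear radar array.

    Rule of thumb: resolution ~ wavelength / aperture, with the aperture equal
    to num_elements times the element spacing (given as a fraction of the
    wavelength). Ambiguities separated from the true target by more than this
    angle can be resolved.
    """
    wavelength_m = 3.0e8 / carrier_hz
    aperture_m = num_elements * spacing_factor * wavelength_m
    return math.degrees(wavelength_m / aperture_m)

# 8 physical channels vs. a much larger (e.g., MIMO virtual) array at 77 GHz.
print(round(angular_resolution_deg(8), 1), round(angular_resolution_deg(192), 2))  # ~14.3°, ~0.6°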

4.5. Redundancy

Fully autonomous vehicles operate in real time, and no matter the quality of the sensors or the robustness of an ADAS system, once a sensor stops working it can endanger the system's decision-making process and lead to accidents. This is why car manufacturers implement redundancy to improve dependability. Redundancy consists of duplicating, triplicating, etc., one or more components of a system that perform the same function (a minimal voting sketch is shown after the list below). For example, BMW's autonomous driving system includes three AD (automatic driving) channels [79]. To avoid systematic faults and common cause failures, the heterogeneous channels do not reuse hardware, software, algorithms, or sensors. In [80], the authors performed simulations with three AD channels operating simultaneously in the same test vehicle (Baidu Apollo 5.0, Autoware Auto AVP, and Comma.AI OpenPilot). They found that while the Apollo channel was usually the best, there were times when it failed while the other channels did not. In other words, the capabilities of some (less advanced) AD channels can complement those of other (more advanced) channels, meaning that the combined capabilities of the multichannel system can potentially be greater than those of the most advanced channel alone. The authors then presented an architectural design pattern for cross-channel analysis, which is beyond the scope of this survey and will not be explored. Redundancy can be present in both hardware and software.
  • Hardware redundancy.
    This can include multiple components such as ECU, power supply, communication bus, input/output modules (sensors and actuators), communication module, etc. [81].
  • Software redundancy.
    This involves having multiple autonomous driving systems [81].
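The sketch below illustrates the redundancy idea in its simplest form (our own example, not the architecture of [79] or [80]): a median vote across redundant AD channels masks a single faulty or outlying channel, and the loss of all channels is surfaced explicitly so a safe stop can be triggered.

import statistics

def vote_steering(channel_outputs):
    """Median-vote a steering command across redundant AD channels.

    Channels that report a fault return None and are excluded; the median of
    the healthy channels masks a single faulty or outlying channel. If no
    healthy channel remains, the system must fall back to a safe stop.
    """
    healthy = [v for v in channel_outputs if v is not None]
    if not healthy:
        raise RuntimeError("all channels failed: trigger minimal-risk manoeuvre")
    return statistics.median(healthy)

# One channel drifted badly (0.9 rad) or failed (None): the output stays sensible.
print(vote_steering([0.12, 0.9, 0.11]))   # 0.12
print(vote_steering([0.12, None, 0.11]))  # 0.115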

4.6. Camera Image Failures Mitigation Strategies

The authors of [37] researched several mitigation strategies for camera failures such as those presented in Table 5.
In addition to these mitigation techniques, redundancy—which is also covered in this survey—is strongly encouraged: if a camera fails in a fully autonomous vehicle, then a redundant camera could help tolerate this failure. This also applies to other sensors such as LiDAR, radar, etc.

4.7. LiDAR Mirror-like Object Detection Using Unsupervised Learning

In addition to sensor fusion, unsupervised machine-learning algorithms can help address the mirror-like object problem. Clustering is suitable for AVs because it requires no training phase and can be applied in real time. DBSCAN (density-based spatial clustering of applications with noise) can categorize LiDAR data into reflective and non-reflective qualities based on their different characteristics [99].
The LiDAR scan results are then used as input for the DBSCAN clustering algorithm. Each outlier feature is classified according to range and intensity measurements. Diffuse surface qualities remain within normal ranges and intensities. Mirrors produce negative space qualities that result in no range and very low or no intensity measurements. Self-detecting qualities are characterized by extremely high-intensity measurements. Mirror reflection qualities often include a region with no range or intensity. These qualities can be used to remove clusters affected by mirror reflections [25].
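A minimal sketch of this clustering step is shown below (our own example on synthetic range/intensity features; the eps and min_samples values are illustrative assumptions): DBSCAN separates the band of diffuse returns from the zero-intensity, mirror-affected returns, which can then be flagged and removed.

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Synthetic (range_m, intensity) features: diffuse surfaces fall in a normal intensity
# band, while mirror-affected returns come back with missing/near-zero intensity.
diffuse = np.column_stack([rng.uniform(2, 40, 150), rng.uniform(40, 90, 150)])
mirror_like = np.column_stack([rng.uniform(2, 40, 20), np.zeros(20)])
features = np.vstack([diffuse, mirror_like])

labels = DBSCAN(eps=10.0, min_samples=5).fit_predict(features)

# Clusters whose mean intensity is near zero correspond to mirror reflections and
# can be removed before the map is built.
for lab in sorted(set(labels)):
    members = features[labels == lab]
    print(f"cluster {lab}: {len(members)} points, mean intensity {members[:, 1].mean():.1f}")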

5. Future Research Directions

Based on the literature review, we identified three main topics for future research, which are detailed in the following sections. These topics were selected based on our survey of current trends, and emerging areas of interest within the field. Our goal is to contribute to the community by pinpointing opportunities for advancing research.

5.1. Synthetic Aperture Radar (SAR) in AVs

SAR is a sophisticated radar technology that produces high-resolution images of the environment. Although SAR sensors are typically used in aerospace applications, their unique characteristics have motivated research into their use in autonomous vehicles (AVs). The movement of the radar antenna over a target area creates a large synthetic aperture (hence the name) and provides high-resolution images [100]. Although radars are the sensors best suited to bad weather, they lack resolution compared to optical sensors; SAR can achieve finer spatial resolution by processing data from multiple antenna positions [101,102]. Potential applications of this sensor include detecting pedestrians and other vehicles with far greater accuracy than existing radars [77].
The authors of [100] provide both a theoretical and experimental perspective on the role of SAR imaging in the automotive environment. Technological advances in array design (many antennas), analog-to-digital conversion, and increasing on-board computing resources allow the implementation of advanced signal processing algorithms in a power-efficient manner [103].
The authors of [104] propose a two-stage MIMO-SAR processing approach that reduces the computational load while maintaining image resolution. In [101], the effects of multipathing (detection of false targets due to multiple signal reflections) were analyzed, and it was concluded that the antenna layout has an impact on such reflections.
In [102], the authors use radar in conjunction with SAR to improve the radar's resolution by proposing a CSLAM (coherent simultaneous localization and mapping) model for unmanned vehicles, and they conclude that it provides better resolution and can be a good alternative to expensive LiDAR sensors.
According to the research, this type of sensor is new to AVs; exploiting its high resolution requires the development and testing of new algorithms and data processing methods in real-world environments.

5.2. Sensor Fusion Algorithms

When sensor weaknesses cannot be physically eliminated, algorithms (used to mitigate problems such as poor camera images, ambiguity, and radar interference) and sensor fusion play a central role. We also found that sensor fusion still needs work and that existing models need more training.
Sensor fusion uses supervised algorithms to combine distinct types of sensors to compensate for sensor deficiencies. These algorithms require further development; the use of reinforcement learning paradigms in conjunction with supervised learning algorithms could aid in a sensor fusion scenario. In addition, reinforcement learning algorithms can be used to estimate the risk of failure of the sensor fusion solution early and allow for human intervention [8]. Machine-learning models still require more training and data in challenging driving and weather situations. Current ML models are overly focused on what the vehicle itself perceives, whereas human driving decisions also depend on assumptions about what other drivers will do (which is debatable, because such assumptions can also cause accidents). Some models are already investigating motion prediction, which is still in its early phases but promises excellent outcomes [105].

5.3. Advanced Driver Assistance Systems (ADAS) Redundancy

ADAS redundancy is also being explored and is a promising area of research. It consists of using multiple control systems that can take over if the primary control system fails, ensuring that the vehicle can continue to operate safely even when the primary system is compromised; these systems may or may not share sensors. For example, the primary system could use cameras and radar, while a redundant system uses only LiDAR. It is critical to explore this topic because, as the level of autonomy increases, safety becomes a greater concern. Since redundancy increases costs, optimized systems are important for manufacturers [81].

6. Conclusions

This survey achieved several key objectives relevant to the field of autonomous vehicles (AVs), with a particular focus on the challenges and mitigation strategies associated with sensor failures. It provided an overview of the sensors currently used in AVs, categorized their problems, and explored the strategies implemented to mitigate these issues.
Several weaknesses were identified for the different sensors used in AVs, such as ultrasonic sensors, radar, LiDAR, cameras, GNSS, and IMUs. This categorization is critical to understanding the vulnerabilities in the perception systems of AVs and forms the basis for developing more robust AV technologies. Techniques to address sensor failures, such as sensor calibration, sensor fusion, redundancy, and specific failure mitigation strategies for cameras, LiDAR, and radar, were analyzed. These strategies are essential for improving the reliability and safety of AVs.
Despite the benefits, the current mitigation strategies have limitations. For example, sensor fusion algorithms require extensive training and data, especially under challenging driving conditions. In addition, the high cost and complexity of advanced sensors like LiDAR, and the computational requirements of emerging technologies such as synthetic aperture radar (SAR) present significant challenges that can be explored.
In conclusion, this study provides valuable insights into the sensor technologies used in autonomous vehicles, identifies their vulnerabilities, and evaluates current and potential mitigation strategies, highlighting the importance of continuously improving sensor technologies and sensor fusion algorithms to achieve higher levels of autonomy and safety in AVs. By identifying and categorizing sensor failures, we have provided a clear understanding of the limitations and frequent issues faced by AV sensors. Furthermore, our exploration of mitigation strategies, including sensor calibration, sensor fusion, and redundancy, highlights the critical steps needed to improve AV safety and reliability. The discussion of ongoing and future advances in perception systems underlines the need for continuous sensor improvement, robust machine-learning models, and redundancy.
We suggest that future research should focus on improving sensor fusion algorithms, particularly through the application of reinforcement learning and scenario-based training. Additionally, exploring the use of SAR in AVs and improving ADAS redundancy will be critical to advancing AV safety and performance.

Author Contributions

Conceptualization, J.C. and J.D.; Methodology, F.M., J.B., J.C. and J.D.; Software, F.M.; Validation, F.M., J.C. and J.D.; Formal analysis, F.M., J.B., J.C. and J.D.; Investigation, F.M.; Resources, F.M.; Data curation, F.M.; Writing—original draft preparation, F.M.; Writing—review and editing, J.B., J.C. and J.D.; Supervision, J.B., J.C. and J.D.; Project administration, J.C. and J.D.; Funding acquisition, J.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Silva, Ó.; Cordera, R.; González-González, E.; Nogués, S. Environmental impacts of autonomous vehicles: A review of the scientific literature. Sci. Total Environ. 2022, 830, 154615. [Google Scholar] [CrossRef] [PubMed]
  2. Xie, G.; Li, Y.; Han, Y.; Xie, Y.; Zeng, G.; Li, R. Recent Advances and Future Trends for Automotive Functional Safety Design Methodologies. IEEE Trans. Ind. Inf. 2020, 16, 5629–5642. [Google Scholar] [CrossRef]
  3. Rojas-Rueda, D.; Nieuwenhuijsen, M.J.; Khreis, H.; Frumkin, H. Autonomous Vehicles and Public Health. Annu. Rev. Public. Health 2020, 41, 329–345. [Google Scholar] [CrossRef] [PubMed]
  4. Vinkhuyzen, E.; Cefkin, M. Developing Socially Acceptable Autonomous Vehicles. Ethnogr. Prax. Ind. Conf. Proc. 2016, 2016, 522–534. [Google Scholar] [CrossRef]
  5. Morita, T.; Managi, S. Autonomous vehicles: Willingness to pay and the social dilemma. Transp. Res. Part. C Emerg. Technol. 2020, 119, 102748. [Google Scholar] [CrossRef]
  6. Wang, J.; Zhang, L.; Huang, Y.; Zhao, J.; Bella, F. Safety of Autonomous Vehicles. J. Adv. Transp. 2020, 2020, 8867757. [Google Scholar] [CrossRef]
  7. Ahangar, M.N.; Ahmed, Q.Z.; Khan, F.A.; Hafeez, M. A Survey of Autonomous Vehicles: Enabling Communication Technologies and Challenges. Sensors 2021, 21, 706. [Google Scholar] [CrossRef]
  8. Pandharipande, A.; Cheng, C.-H.; Dauwels, J.; Gurbuz, S.Z.; Ibanez-Guzman, J.; Li, G.; Piazzoni, A.; Wang, P.; Santra, A. Sensing and Machine Learning for Automotive Perception: A Review. IEEE Sens. J. 2023, 23, 11097–11115. [Google Scholar] [CrossRef]
  9. Ramos, M.A.; Correa Jullian, C.; McCullough, J.; Ma, J.; Mosleh, A. Automated Driving Systems Operating as Mobility as a Service: Operational Risks and SAE J3016 Standard. In Proceedings of the 2023 Annual Reliability and Maintainability Symposium (RAMS), Orlando, FL, USA, 23–26 January 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–6. [Google Scholar]
  10. Scurt, F.B.; Vesselenyi, T.; Tarca, R.C.; Beles, H.; Dragomir, G. Autonomous vehicles: Classification, technology and evolution. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1169, 012032. [Google Scholar] [CrossRef]
  11. Velasco-Hernandez, G.; Yeong, D.J.; Barry, J.; Walsh, J. Autonomous Driving Architectures, Perception and Data Fusion: A Review. In Proceedings of the 2020 IEEE 16th International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, Romania, 3–5 September 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 315–321. [Google Scholar]
  12. Ignatious, H.A.; Sayed, H.-E.; Khan, M. An overview of sensors in Autonomous Vehicles. Procedia Comput. Sci. 2022, 198, 736–741. [Google Scholar] [CrossRef]
  13. Ortiz, F.M.; Sammarco, M.; Costa, L.H.M.K.; Detyniecki, M. Applications and Services Using Vehicular Exteroceptive Sensors: A Survey. IEEE Trans. Intell. Veh. 2023, 8, 949–969. [Google Scholar] [CrossRef]
  14. Yeong, D.J.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review. Sensors 2021, 21, 2140. [Google Scholar] [CrossRef] [PubMed]
  15. Wang, Z.; Wu, Y.; Niu, Q. Multi-Sensor Fusion in Automated Driving: A Survey. IEEE Access 2020, 8, 2847–2868. [Google Scholar] [CrossRef]
  16. AlZu’bi, S.; Jararweh, Y. Data Fusion in Autonomous Vehicles Research, Literature Tracing from Imaginary Idea to Smart Surrounding Community. In Proceedings of the 2020 Fifth International Conference on Fog and Mobile Edge Computing (FMEC), Paris, France, 20–23 April 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 306–311. [Google Scholar]
  17. Avizienis, A.; Laprie, J.-C.; Randell, B.; Landwehr, C. Basic concepts and taxonomy of dependable and secure computing. IEEE Trans. Dependable Secur. Comput. 2004, 1, 11–33. [Google Scholar] [CrossRef]
  18. Budisusila, E.N.; Khosyi’in, M.; Prasetyowati, S.A.D.; Suprapto, B.Y.; Nawawi, Z. Ultrasonic Multi-Sensor Detection Patterns on Autonomous Vehicles Using Data Stream Method. In Proceedings of the 2021 8th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI), Semarang, Indonesia, 20–21 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 144–150. [Google Scholar]
  19. Paidi, V.; Fleyeh, H.; Håkansson, J.; Nyberg, R.G. Smart parking sensors, technologies and applications for open parking lots: A review. IET Intell. Transp. Syst. 2018, 12, 735–741. [Google Scholar] [CrossRef]
  20. Vargas, J.; Alsweiss, S.; Toker, O.; Razdan, R.; Santos, J. An Overview of Autonomous Vehicles Sensors and Their Vulnerability to Weather Conditions. Sensors 2021, 21, 5397. [Google Scholar] [CrossRef] [PubMed]
  21. Rosique, F.; Navarro, P.J.; Fernández, C.; Padilla, A. A Systematic Review of Perception System and Simulators for Autonomous Vehicles Research. Sensors 2019, 19, 648. [Google Scholar] [CrossRef]
  22. Komissarov, R.; Kozlov, V.; Filonov, D.; Ginzburg, P. Partially coherent radar unties range resolution from bandwidth limitations. Nat. Commun. 2019, 10, 1423. [Google Scholar] [CrossRef] [PubMed]
  23. Li, Y.; Ibanez-Guzman, J. Lidar for Autonomous Driving: The Principles, Challenges, and Trends for Automotive Lidar and Perception Systems. IEEE Signal Process Mag. 2020, 37, 50–61. [Google Scholar] [CrossRef]
  24. Dreissig, M.; Scheuble, D.; Piewak, F.; Boedecker, J. Survey on LiDAR Perception in Adverse Weather Conditions. In Proceedings of the 2023 IEEE Intelligent Vehicles Symposium (IV), Anchorage, AK, USA, 4–7 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–8. [Google Scholar]
  25. Damodaran, D.; Mozaffari, S.; Alirezaee, S.; Ahamed, M.J. Experimental Analysis of the Behavior of Mirror-like Objects in LiDAR-Based Robot Navigation. Appl. Sci. 2023, 13, 2908. [Google Scholar] [CrossRef]
  26. Li, Y.; Moreau, J.; Ibanez-Guzman, J. Emergent Visual Sensors for Autonomous Vehicles. IEEE Trans. Intell. Transp. Syst. 2023, 24, 4716–4737. [Google Scholar] [CrossRef]
  27. Roszyk, K.; Nowicki, M.R.; Skrzypczyński, P. Adopting the YOLOv4 Architecture for Low-Latency Multispectral Pedestrian Detection in Autonomous Driving. Sensors 2022, 22, 1082. [Google Scholar] [CrossRef]
  28. Sun, C.; Chen, Y.; Qiu, X.; Li, R.; You, L. MRD-YOLO: A Multispectral Object Detection Algorithm for Complex Road Scenes. Sensors 2024, 24, 3222. [Google Scholar] [CrossRef]
  29. Xie, Y.; Zhang, L.; Yu, X.; Xie, W. YOLO-MS: Multispectral Object Detection via Feature Interaction and Self-Attention Guided Fusion. IEEE Trans. Cogn. Dev. Syst. 2023, 15, 2132–2143. [Google Scholar] [CrossRef]
  30. Altay, F.; Velipasalar, S. The Use of Thermal Cameras for Pedestrian Detection. IEEE Sens. J. 2022, 22, 11489–11498. [Google Scholar] [CrossRef]
  31. Chen, Y.; Shin, H. Pedestrian Detection at Night in Infrared Images Using an Attention-Guided Encoder-Decoder Convolutional Neural Network. Appl. Sci. 2020, 10, 809. [Google Scholar] [CrossRef]
  32. Tan, M.; Chao, W.; Cheng, J.-K.; Zhou, M.; Ma, Y.; Jiang, X.; Ge, J.; Yu, L.; Feng, L. Animal Detection and Classification from Camera Trap Images Using Different Mainstream Object Detection Architectures. Animals 2022, 12, 1976. [Google Scholar] [CrossRef]
  33. Iwasaki, Y.; Misumi, M.; Nakamiya, T. Robust Vehicle Detection under Various Environmental Conditions Using an Infrared Thermal Camera and Its Application to Road Traffic Flow Monitoring. Sensors 2013, 13, 7756–7773. [Google Scholar] [CrossRef]
  34. Bijelic, M.; Gruber, T.; Mannan, F.; Kraus, F.; Ritter, W.; Dietmayer, K.; Heide, F. Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 11679–11689. [Google Scholar]
  35. Yamaguchi, E.; Higuchi, H.; Yamashita, A.; Asama, H. Glass Detection Using Polarization Camera and LRF for SLAM in Environment with Glass. In Proceedings of the 2020 21st International Conference on Research and Education in Mechatronics (REM), Cracow, Poland, 9–11 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar]
  36. Shariff, W.; Dilmaghani, M.S.; Kielty, P.; Moustafa, M.; Lemley, J.; Corcoran, P. Event Cameras in Automotive Sensing: A Review. IEEE Access 2024, 12, 51275–51306. [Google Scholar] [CrossRef]
  37. Ceccarelli, A.; Secci, F. RGB Cameras Failures and Their Effects in Autonomous Driving Applications. IEEE Trans. Dependable Secur. Comput. 2023, 20, 2731–2745. [Google Scholar] [CrossRef]
  38. Dosovitskiy, A.; Ros, G.; Codevilla, F.; Lopez, A.; Koltun, V. CARLA: An Open Urban Driving Simulator. In Proceedings of the 1st Annual Conference on Robot Learning, CoRL 2017, Mountain View, CA, USA, 13–15 November 2017. [Google Scholar]
  39. Raveena, C.S.; Sravya, R.S.; Kumar, R.V.; Chavan, A. Sensor Fusion Module Using IMU and GPS Sensors For Autonomous Car. In Proceedings of the 2020 IEEE International Conference for Innovation in Technology (INOCON), Bangluru, India, 6–8 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar]
  40. Yusefi, A.; Durdu, A.; Bozkaya, F.; Tığlıoğlu, Ş.; Yılmaz, A.; Sungur, C. A Generalizable D-VIO and Its Fusion With GNSS/IMU for Improved Autonomous Vehicle Localization. IEEE Trans. Intell. Veh. 2024, 9, 2893–2907. [Google Scholar] [CrossRef]
  41. Xia, X.; Hashemi, E.; Xiong, L.; Khajepour, A.; Xu, N. Autonomous Vehicles Sideslip Angle Estimation: Single Antenna GNSS/IMU Fusion With Observability Analysis. IEEE Internet Things J. 2021, 8, 14845–14859. [Google Scholar] [CrossRef]
  42. Zong, W.; Zhang, C.; Wang, Z.; Zhu, J.; Chen, Q. Architecture Design and Implementation of an Autonomous Vehicle. IEEE Access 2018, 6, 21956–21970. [Google Scholar] [CrossRef]
  43. Goberville, N.; El-Yabroudi, M.; Omwanas, M.; Rojas, J.; Meyer, R.; Asher, Z.; Abdel-Qader, I. Analysis of LiDAR and Camera Data in Real-World Weather Conditions for Autonomous Vehicle Operations. SAE Int. J. Adv. Curr. Pr. Mobil. 2020, 2, 2428–2434. [Google Scholar] [CrossRef]
  44. Raiyn, J. Performance Metrics for Positioning Terminals Based on a GNSS in Autonomous Vehicle Networks. Wirel. Pers. Commun. 2020, 114, 1519–1532. [Google Scholar] [CrossRef]
  45. Jing, H.; Gao, Y.; Shahbeigi, S.; Dianati, M. Integrity Monitoring of GNSS/INS Based Positioning Systems for Autonomous Vehicles: State-of-the-Art and Open Challenges. IEEE Trans. Intell. Transp. Syst. 2022, 23, 14166–14187. [Google Scholar] [CrossRef]
  46. Kamal, M.; Barua, A.; Vitale, C.; Laoudias, C.; Ellinas, G. GPS Location Spoofing Attack Detection for Enhancing the Security of Autonomous Vehicles. In Proceedings of the 2021 IEEE 94th Vehicular Technology Conference (VTC2021-Fall), Norman, OK, USA, 27–30 December 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–7. [Google Scholar]
  47. Liu, Z.; Wang, L.; Wen, F.; Zhang, H. IMU/Vehicle Calibration and Integrated Localization for Autonomous Driving. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 4013–4019. [Google Scholar]
  48. Wang, Z.; Zhang, Z.; Hu, X.; Zhu, W.; Deng, H. Extrinsic Calibration of Visual and Inertial Sensors for the Autonomous Vehicle. IEEE Sens. J. 2023, 23, 15934–15941. [Google Scholar] [CrossRef]
  49. Choi, J.D.; Kim, M.Y. A Sensor Fusion System with Thermal Infrared Camera and LiDAR for Autonomous Vehicles: Its Calibration and Application. In Proceedings of the 2021 Twelfth International Conference on Ubiquitous and Future Networks (ICUFN), Jeju Island, Republic of Korea, 17–20 August 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 361–365. [Google Scholar]
  50. Shahian Jahromi, B.; Tulabandhula, T.; Cetin, S. Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles. Sensors 2019, 19, 4357. [Google Scholar] [CrossRef]
  51. Lundquist, C. Sensor Fusion for Automotive Applications. Ph.D. Thesis, Department of Electrical Engineering Linköping University, Linköping, Sweden, 2011. [Google Scholar]
  52. Nobis, F.; Geisslinger, M.; Weber, M.; Betz, J.; Lienkamp, M. A Deep Learning-based Radar and Camera Sensor Fusion Architecture for Object Detection. In Proceedings of the 2019 Sensor Data Fusion: Trends, Solutions, Applications (SDF), Bonn, Germany, 15–17 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–7. [Google Scholar]
  53. Gu, S.; Zhang, Y.; Yang, J.; Alvarez, J.M.; Kong, H. Two-View Fusion based Convolutional Neural Network for Urban Road Detection. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–9 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 6144–6149. [Google Scholar]
  54. Pollach, M.; Schiegg, F.; Knoll, A. Low Latency And Low-Level Sensor Fusion For Automotive Use-Cases. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 6780–6786. [Google Scholar]
  55. Fayyad, J.; Jaradat, M.A.; Gruyer, D.; Najjaran, H. Deep Learning Sensor Fusion for Autonomous Vehicle Perception and Localization: A Review. Sensors 2020, 20, 4220. [Google Scholar] [CrossRef]
  56. Wang, X.; Li, K.; Chehri, A. Multi-Sensor Fusion Technology for 3D Object Detection in Autonomous Driving: A Review. IEEE Trans. Intell. Transp. Syst. 2023, 25, 1148–1165. [Google Scholar] [CrossRef]
  57. Shi, J.; Tang, Y.; Gao, J.; Piao, C.; Wang, Z. Multitarget-Tracking Method Based on the Fusion of Millimeter-Wave Radar and LiDAR Sensor Information for Autonomous Vehicles. Sensors 2023, 23, 6920. [Google Scholar] [CrossRef] [PubMed]
  58. Gao, L.; Xia, X.; Zheng, Z.; Ma, J. GNSS/IMU/LiDAR fusion for vehicle localization in urban driving environments within a consensus framework. Mech. Syst. Signal Process 2023, 205, 110862. [Google Scholar] [CrossRef]
  59. Xiang, C.; Feng, C.; Xie, X.; Shi, B.; Lu, H.; Lv, Y.; Yang, M.; Niu, Z. Multi-Sensor Fusion and Cooperative Perception for Autonomous Driving: A Review. IEEE Intell. Transp. Syst. Mag. 2023, 15, 36–58. [Google Scholar] [CrossRef]
  60. Hasanujjaman, M.; Chowdhury, M.Z.; Jang, Y.M. Sensor Fusion in Autonomous Vehicle with Traffic Surveillance Camera System: Detection, Localization, and AI Networking. Sensors 2023, 23, 3335. [Google Scholar] [CrossRef] [PubMed]
  61. Zhao, X.; Sun, P.; Xu, Z.; Min, H.; Yu, H. Fusion of 3D LIDAR and Camera Data for Object Detection in Autonomous Vehicle Applications. IEEE Sens. J. 2020, 20, 4901–4913. [Google Scholar] [CrossRef]
  62. Ogunrinde, I.; Bernadin, S. Deep Camera–Radar Fusion with an Attention Framework for Autonomous Vehicle Vision in Foggy Weather Conditions. Sensors 2023, 23, 6255. [Google Scholar] [CrossRef] [PubMed]
  63. Yao, S.; Guan, R.; Huang, X.; Li, Z.; Sha, X.; Yue, Y.; Lim, E.G.; Seo, H.; Man, K.L.; Zhu, X.; et al. Radar-Camera Fusion for Object Detection and Semantic Segmentation in Autonomous Driving: A Comprehensive Review. IEEE Trans. Intell. Veh. 2024, 9, 2094–2128. [Google Scholar] [CrossRef]
  64. Wang, S.; Mei, L.; Yin, Z.; Li, H.; Liu, R.; Jiang, W.; Lu, C.X. End-to-End Target Liveness Detection via mmWave Radar and Vision Fusion for Autonomous Vehicles. ACM Trans. Sens. Netw. 2024, 20, 1–26. [Google Scholar] [CrossRef]
  65. Kurniawan, I.T.; Trilaksono, B.R. ClusterFusion: Leveraging Radar Spatial Features for Radar-Camera 3D Object Detection in Autonomous Vehicles. IEEE Access 2023, 11, 121511–121528. [Google Scholar] [CrossRef]
  66. Banerjee, K.; Notz, D.; Windelen, J.; Gavarraju, S.; He, M. Online Camera LiDAR Fusion and Object Detection on Hybrid Data for Autonomous Driving. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1632–1638. [Google Scholar]
  67. Brena, R.F.; Aguileta, A.A.; Trejo, L.A.; Molino-Minero-Re, E.; Mayora, O. Choosing the Best Sensor Fusion Method: A Machine-Learning Approach. Sensors 2020, 20, 2350. [Google Scholar] [CrossRef]
  68. Kim, J.; Kim, J.; Cho, J. An advanced object classification strategy using YOLO through camera and LiDAR sensor fusion. In Proceedings of the 2019 13th International Conference on Signal Processing and Communication Systems (ICSPCS), Gold Coast, QLD, Australia, 16–18 December 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–5. [Google Scholar]
  69. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2016; pp. 21–37. [Google Scholar]
  70. Roth, M.; Jargot, D.; Gavrila, D.M. Deep End-to-end 3D Person Detection from Camera and Lidar. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 521–527. [Google Scholar]
  71. Mazher, K.U.; Heath, R.W.; Gulati, K.; Li, J. Automotive Radar Interference Characterization and Reduction by Partial Coordination. In Proceedings of the 2020 IEEE Radar Conference (RadarConf20), Florence, Italy, 21–25 September 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar]
  72. Hakobyan, G.; Yang, B. High-Performance Automotive Radar: A Review of Signal Processing Algorithms and Modulation Schemes. IEEE Signal Process Mag. 2019, 36, 32–44. [Google Scholar] [CrossRef]
  73. Bechter, J.; Rameez, M.; Waldschmidt, C. Analytical and Experimental Investigations on Mitigation of Interference in a DBF MIMO Radar. IEEE Trans. Microw. Theory Tech. 2017, 65, 1727–1734. [Google Scholar] [CrossRef]
  74. Bechter, J.; Sippel, C.; Waldschmidt, C. Bats-inspired frequency hopping for mitigation of interference between automotive radars. In Proceedings of the 2016 IEEE MTT-S International Conference on Microwaves for Intelligent Mobility (ICMIM), San Diego, CA, USA, 19–20 May 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–4. [Google Scholar]
  75. Greco, M.S.; Gini, F.; Stinco, P.; Bell, K. Cognitive Radars: On the Road to Reality: Progress Thus Far and Possibilities for the Future. IEEE Signal Process Mag. 2018, 35, 112–125. [Google Scholar] [CrossRef]
  76. Khoury, J.; Ramanathan, R.; McCloskey, D.; Smith, R.; Campbell, T. RadarMAC: Mitigating Radar Interference in Self-Driving Cars. In Proceedings of the 2016 13th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON), London, UK, 27–30 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–9. [Google Scholar]
  77. Tebaldini, S.; Manzoni, M.; Tagliaferri, D.; Rizzi, M.; Monti-Guarnieri, A.V.; Prati, C.M.; Spagnolini, U.; Nicoli, M.; Russo, I.; Mazzucco, C. Sensing the Urban Environment by Automotive SAR Imaging: Potentials and Challenges. Remote Sens. 2022, 14, 3602. [Google Scholar] [CrossRef]
  78. Schurwanz, M.; Oettinghaus, S.; Mietzner, J.; Hoeher, P.A. Reducing On-Board Interference and Angular Ambiguity Using Distributed MIMO Radars in Medium-Sized Autonomous Air Vehicle. IEEE Aerosp. Electron. Syst. Mag. 2024, 39, 4–14. [Google Scholar] [CrossRef]
  79. Fürst, S. Scalable, Safe and Multi-OEM Capable Architecture for Autonomous Driving. In Proceedings of the 9th Vector Congress, Stuttgart, Germany, 21 November 2018. [Google Scholar]
  80. Hanselaar, C.A.J.; Silvas, E.; Terechko, A.; Heemels, W.P.M.H. Detection and Mitigation of Functional Insufficiencies in Autonomous Vehicles: The Safety Shell. In Proceedings of the 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), Macau, China, 8–12 October 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 2021–2027. [Google Scholar]
  81. Boubakri, A.; Mettali Gamar, S. A New Architecture of Autonomous Vehicles: Redundant Architecture to Improve Operational Safety. Int. J. Robot. Control Syst. 2021, 1, 355–368. [Google Scholar] [CrossRef]
  82. Cai, J.-F.; Ji, H.; Liu, C.; Shen, Z. Blind motion deblurring from a single image using sparse approximation. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 104–111. [Google Scholar]
  83. Fang, L.; Liu, H.; Wu, F.; Sun, X.; Li, H. Separable Kernel for Image Deblurring. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 2885–2892. [Google Scholar]
  84. Eigen, D.; Krishnan, D.; Fergus, R. Restoring an Image Taken through a Window Covered with Dirt or Rain. In Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 633–640. [Google Scholar]
  85. Gu, J.; Ramamoorthi, R.; Belhumeur, P.; Nayar, S. Removing image artifacts due to dirty camera lenses and thin occluders. In Proceedings of the ACM SIGGRAPH Asia 2009 Papers, Yokohama, Japan, 16–19 December 2009; ACM: New York, NY, USA, 2009; pp. 1–10. [Google Scholar]
  86. Kondou, M. Condensation Prevention Camera Device. U.S. Patent 9,525,809, 20 December 2016. [Google Scholar]
  87. Bhagavathy, S.; Llach, J.; Zhai, J. Multi-Scale Probabilistic Dithering for Suppressing Banding Artifacts in Digital Images. In Proceedings of the 2007 IEEE International Conference on Image Processing, San Antonio, TX, USA, 16–19 September 2007; IEEE: Piscataway, NJ, USA, 2007; pp. IV-397–IV-400. [Google Scholar]
  88. Cho, C.-Y.; Chen, T.-M.; Wang, W.-S.; Liu, C.-N. Real-Time Photo Sensor Dead Pixel Detection for Embedded Devices. In Proceedings of the 2011 International Conference on Digital Image Computing: Techniques and Applications, Noosa, QLD, Australia, 6–8 December 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 164–169. [Google Scholar]
  89. Zamfir, A.; Drimbarean, A.; Zamfir, M.; Buzuloiu, V.; Steinberg, E.; Ursu, D. An optical model of the appearance of blemishes in digital photographs. In Digital Photography III; Martin, R.A., DiCarlo, J.M., Sampat, N., Eds.; SPIE: Bellingham, WA, USA, 2007; p. 65020I. [Google Scholar]
  90. Chung, S.-W. Removing chromatic aberration by digital image processing. Opt. Eng. 2010, 49, 067002. [Google Scholar] [CrossRef]
  91. Kang, S.B. Automatic Removal of Chromatic Aberration from a Single Image. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 1–8. [Google Scholar]
  92. Grosssmann, B.; Eldar, Y.C. An efficient method for demosaicing. In Proceedings of the 2004 23rd IEEE Convention of Electrical and Electronics Engineers in Israel, Tel-Aviv, Israel, 6–7 September 2004; IEEE: Piscataway, NJ, USA, 2004; pp. 436–439. [Google Scholar]
  93. Shi, L.; Ovsiannikov, I.; Min, D.-K.; Noh, Y.; Kim, W.; Jung, S.; Lee, J.; Shin, D.; Jung, H.; Waligorski, G.; et al. Demosaicing for RGBZ Sensor. In Computational Imaging XI; Bouman, C.A., Pollak, I., Wolfe, P.J., Eds.; SPIE: Bellingham, WA, USA, 2013; p. 865705. [Google Scholar]
  94. Prescott, B.; McLean, G.F. Line-Based Correction of Radial Lens Distortion. Graph. Models Image Process. 1997, 59, 39–47. [Google Scholar] [CrossRef]
  95. Yoneyama, S. Lens distortion correction for digital image correlation by measuring rigid body displacement. Opt. Eng. 2006, 45, 023602. [Google Scholar] [CrossRef]
  96. Sawhney, H.S.; Kumar, R. True multi-image alignment and its application to mosaicing and lens distortion correction. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 235–243. [Google Scholar] [CrossRef]
  97. Baek, Y.-M.; Cho, D.-C.; Lee, J.-A.; Kim, W.-Y. Noise Reduction for Image Signal Processor in Digital Cameras. In Proceedings of the 2008 International Conference on Convergence and Hybrid Information Technology, Daejeon, Republic of Korea, 28–30 August 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 474–481. [Google Scholar]
  98. Guo, L.; Manglani, S.; Liu, Y.; Jia, Y. Automatic Sensor Correction of Autonomous Vehicles by Human-Vehicle Teaching-and-Learning. IEEE Trans. Veh. Technol. 2018, 67, 8085–8099. [Google Scholar] [CrossRef]
  99. Deng, D. DBSCAN Clustering Algorithm Based on Density. In Proceedings of the 2020 7th International Forum on Electrical Engineering and Automation (IFEEA), Hefei, China, 25–27 September 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 949–953. [Google Scholar]
  100. Gao, X.; Roy, S.; Xing, G. MIMO-SAR: A Hierarchical High-Resolution Imaging Algorithm for mmWave FMCW Radar in Autonomous Driving. IEEE Trans. Veh. Technol. 2021, 70, 7322–7334. [Google Scholar] [CrossRef]
  101. Manzoni, M.; Tebaldini, S.; Monti-Guarnieri, A.V.; Prati, C.M. Multipath in Automotive MIMO SAR Imaging. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–12. [Google Scholar] [CrossRef]
  102. Qian, K.; Zhang, X. SAR on the Wheels: High-Resolution Synthetic Aperture Radar Sensing on Moving Vehicles. In Proceedings of the 2023 57th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 29 October–1 November 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1204–1209. [Google Scholar]
  103. Park, S.; Kim, Y.; Matson, E.T.; Smith, A.H. Accessible synthetic aperture radar system for autonomous vehicle sensing. In Proceedings of the 2016 IEEE Sensors Applications Symposium (SAS), Catania, Italy, 20–22 April 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–6. [Google Scholar]
  104. Harrer, F.; Pfeiffer, F.; Loffler, A.; Gisder, T.; Biebl, E. Synthetic aperture radar algorithm for a global amplitude map. In Proceedings of the 2017 14th Workshop on Positioning, Navigation and Communications (WPNC), Bremen, Germany, 25–26 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–6. [Google Scholar]
  105. Bharilya, V.; Kumar, N. Machine learning for autonomous vehicle’s trajectory prediction: A comprehensive survey, challenges, and future research directions. Veh. Commun. 2024, 46, 100733. [Google Scholar] [CrossRef]
Figure 1. Number of papers reviewed per year.
Figure 2. SAE automation levels.
Figure 3. Functional architecture for an autonomous driving system.
Figure 4. Sensors in an autonomous vehicle.
Figure 5. Light behavior on different surfaces.
Figure 6. Simulated camera failures (from [37]).
Figure 7. Number of collisions due to camera failures (from [37]).
Figure 8. Radar ambiguities.
Table 1. Sensors’ advantages, limitations, and weaknesses (based on [21]).

Ultrasonic sensors
 Advantages:
  • Affordable.
 Limitations:
  • Maximum range of 2 m.
  • Very narrow beam detection range [18,42].
 Weaknesses:
  • Interference between sensors.

Radar
 Advantages:
  • Long range (up to 200 m).
  • Performs well in various weather conditions.
 Limitations:
  • Lower resolution than cameras and LiDAR.
 Weaknesses:
  • Many false positives.
  • Radar-to-radar interference [22].

LiDAR
 Advantages:
  • High resolution.
  • Range up to 200 m.
  • Accurate distance measurement.
 Limitations:
  • Expensive.
 Weaknesses:
  • Performance degrades severely in adverse weather [20,24,43].
  • Reflective objects pose challenges [25].

Cameras
 Advantages:
  • High resolution.
  • Range up to 250 m.
  • Capable of object recognition.
 Limitations:
  • Requires substantial computational resources (complex image processing).
 Weaknesses:
  • Affected by harsh environmental conditions and several image failures [37,43].

GNSS
 Advantages:
  • Provides global positioning.
 Limitations:
  • Limited accuracy in certain conditions, such as dense urban areas [44].
  • Dependent on satellite visibility [45].
 Weaknesses:
  • Latency [44].
  • Vulnerable to signal jamming and spoofing [46].

IMU
 Advantages:
  • Measures acceleration and rotation.
  • Provides orientation information.
 Limitations:
  • Drifts over time without an external reference [47].
 Weaknesses:
  • Requires frequent calibration to maintain accuracy.
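The IMU entries above can be made concrete with a small numerical sketch. The following Python fragment is illustrative only: the bias, noise level, and update rates are assumed values, not figures from the surveyed work. It integrates a biased gyroscope to show how the dead-reckoned heading error grows steadily, and then bounds that error with a simple complementary filter that blends in a periodic absolute heading fix, such as a GNSS course-over-ground estimate.

```python
# Illustrative sketch (assumed bias/noise values): IMU heading drift and its
# correction with a periodic external reference via a complementary filter.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                       # 100 Hz IMU sampling period (s)
t = np.arange(0, 60, dt)        # one minute of driving
true_yaw_rate = 0.0             # vehicle driving straight (rad/s)

gyro_bias = 0.002               # assumed constant bias (rad/s)
gyro_noise = 0.001              # assumed white-noise std (rad/s)
gyro = true_yaw_rate + gyro_bias + gyro_noise * rng.standard_normal(t.size)

# Dead reckoning: integrating the gyro alone lets the bias accumulate.
yaw_imu_only = np.cumsum(gyro) * dt

# Complementary filter: blend the integrated gyro with a 1 Hz absolute
# heading fix (e.g., GNSS course), which keeps the error bounded.
alpha = 0.9
yaw_fused = 0.0
fused = []
for k, rate in enumerate(gyro):
    yaw_fused += rate * dt
    if k % 100 == 0:                                  # absolute fix once per second
        gnss_heading = 0.01 * rng.standard_normal()   # noisy but unbiased reference
        yaw_fused = alpha * yaw_fused + (1 - alpha) * gnss_heading
    fused.append(yaw_fused)

print(f"heading error after 60 s, IMU only: {abs(yaw_imu_only[-1]):.3f} rad")
print(f"heading error after 60 s, fused   : {abs(fused[-1]):.3f} rad")
```

Running the sketch shows the IMU-only heading error growing roughly linearly with time, while the fused estimate settles at a small bounded value, which is the intuition behind pairing IMUs with GNSS or other absolute references.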
Table 2. Sensor failures and their impact.

Sensor Type | Failure | Impact
Ultrasonic sensors | Wrong perception due to interference between multiple sensors. | Extreme range errors caused by overlapping ultrasonic signals; unique signal identification is needed to reject false echoes.
Radar | False positives due to bounced waves. | Incorrect object detection or classification caused by signals reflected from the environment.
Radar | Wrong perception due to frequency interference from multiple radars. | Interference on shared frequencies may cause inaccuracies in object detection and tracking.
LiDAR | Detection performance degradation in adverse weather conditions. | Reduced effectiveness in fog, rain, or snow, leading to incomplete or inaccurate spatial data.
LiDAR | Missing or wrong perception due to reflection from mirrors or highly reflective surfaces. | Faulty maps or missing data because the laser beams are completely reflected away from the sensor.
Camera | Poor object detection due to variability in lighting conditions. | Performance can be significantly impaired in varying light conditions.
Camera | Image degradation due to rain, snow, or fog. | Blurred or obscured images, reducing the accuracy of perception tasks.
Camera | Misinterpretation in ADAS due to degraded images. | Degraded images can lead to AV collisions if the AI/ML systems fail to interpret the information correctly.
GNSS | Timing errors due to clock differences. | Reduced accuracy of location information, leading to incorrect positioning.
GNSS | Susceptibility to jamming and spoofing. | Loss of navigation accuracy or misdirection when GNSS signals are blocked or falsified.
GNSS | Multipath effects and satellite orbit uncertainties. | Errors in location determination due to signal reflections and orbital inaccuracies.
IMU | Error accumulation and drift. | Errors in acceleration and rotation data accumulate, degrading vehicle movement and orientation estimates over time.
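One lightweight way to catch several of the GNSS failures listed in Table 2 (multipath glitches, jumps after jamming, spoofing) is a plausibility gate that compares the displacement implied by consecutive GNSS fixes with the distance reported by wheel odometry. The sketch below is a minimal illustration with made-up coordinates and thresholds; it is not an algorithm taken from the surveyed papers.

```python
# Hypothetical plausibility check: reject GNSS fixes whose implied jump is
# inconsistent with the wheel-odometry displacement over the same interval.
from dataclasses import dataclass
import math

@dataclass
class Fix:
    t: float   # timestamp (s)
    x: float   # local east coordinate (m)
    y: float   # local north coordinate (m)

def plausible(prev: Fix, new: Fix, odo_dist: float, margin: float = 5.0) -> bool:
    """Accept a new fix only if its jump matches the travelled distance."""
    gnss_jump = math.hypot(new.x - prev.x, new.y - prev.y)
    return abs(gnss_jump - odo_dist) <= margin

prev = Fix(t=0.0, x=0.0, y=0.0)
candidates = [
    (Fix(t=1.0, x=13.9, y=0.2), 14.0),   # ~50 km/h, consistent with odometry
    (Fix(t=2.0, x=90.0, y=40.0), 14.1),  # 75 m jump in 1 s: multipath/spoofing suspect
]
for fix, odo in candidates:
    ok = plausible(prev, fix, odo)
    print(f"fix at t={fix.t:.1f} s: {'accepted' if ok else 'rejected'}")
    if ok:
        prev = fix
```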
Table 3. Comparing sensors’ strengths in self-driving cars (based on [14]).

Factor | Best Sensor
Range | Radar
Resolution | Camera
Distance accuracy | LiDAR
Velocity | Radar
Color perception (e.g., traffic lights) | Camera
Object detection | LiDAR
Object classification | Camera
Lane detection | Camera
Obstacle edge detection | Camera and LiDAR
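Because no single sensor wins on every factor in Table 3, estimates are usually blended rather than selected outright. A minimal example is inverse-variance weighting of a radar range and a LiDAR range measured for the same target; the standard deviations below are assumed purely for illustration.

```python
# Minimal sketch (assumed noise levels): inverse-variance fusion of two range
# measurements, so the sensor with better distance accuracy dominates while
# the other still contributes.
def fuse_ranges(r_radar: float, sigma_radar: float,
                r_lidar: float, sigma_lidar: float) -> tuple[float, float]:
    w_radar = 1.0 / sigma_radar**2
    w_lidar = 1.0 / sigma_lidar**2
    fused = (w_radar * r_radar + w_lidar * r_lidar) / (w_radar + w_lidar)
    fused_sigma = (w_radar + w_lidar) ** -0.5
    return fused, fused_sigma

# Assumed example: radar range std 0.5 m, LiDAR range std 0.05 m.
r, s = fuse_ranges(r_radar=42.6, sigma_radar=0.5, r_lidar=42.1, sigma_lidar=0.05)
print(f"fused range = {r:.2f} m (std {s:.3f} m)")
```

The fused estimate lies close to the LiDAR value, reflecting its better distance accuracy, and its uncertainty is never worse than the best individual sensor, which is the basic argument for sensor fusion as a mitigation strategy.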
Table 4. Sensor fusion algorithms and characteristics (based on [14]).

YOLO
 You Only Look Once (YOLO) is a single-stage detector that uses a single convolutional neural network (CNN) to predict bounding boxes and compute class probabilities and confidence scores for an image [68].
 Advantages:
  • Real-time detection (single-pass detection).
  • Recognizes pedestrians and objects.
 Weaknesses:
  • Lower accuracy than SSD.
  • Struggles with dense groups of obstacles, such as flocks of birds, due to its limited ability to propose more than two bounding boxes per region.
  • Poor detection of small objects.

SSD
 The Single-Shot MultiBox Detector (SSD) is a single-stage CNN detector that predicts a set of default bounding boxes with different sizes and aspect ratios to detect obstacles of various dimensions [69].
 Advantages:
  • Real-time and accurate obstacle detection.
  • Single-pass detection.
  • Detects small objects better than YOLO, although they remain challenging.
 Weaknesses:
  • Poor feature extraction in shallow layers.
  • Loses features in deep layers.

VoxelNet
 VoxelNet is a generic 3D obstacle detection network that combines feature extraction and bounding box prediction into a single-stage, fully trainable deep network. It detects obstacles using a voxelized representation of point cloud data [70].
 Advantages:
  • No need for manual feature extraction.
  • Voxelization improves LiDAR data management by reducing sparsity.
 Weaknesses:
  • Training requires a large amount of data and memory.

PointNet
 PointNet is a permutation-invariant deep neural network that learns global features from unordered point clouds using a two-stage detection approach [70].
 Advantages:
  • Handles point clouds in any order (permutation independence).
 Weaknesses:
  • Difficult to generalize to unseen point configurations.
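A common way to combine the 2D detectors above with LiDAR is late fusion: run the detector on the image, project the LiDAR points into the image plane, and attach the median depth of the points that fall inside each detection box, in the spirit of camera+LiDAR pipelines such as [68]. The sketch below uses an assumed pinhole intrinsic matrix, identity extrinsics, and a synthetic point cloud; it is not the method of any specific surveyed paper.

```python
# Hedged late-fusion sketch: project LiDAR points into the image and attach a
# median depth to a 2D detection box. Calibration and the box are assumptions.
import numpy as np

K = np.array([[800.0, 0.0, 640.0],      # assumed camera intrinsics
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
T_cam_lidar = np.eye(4)                  # assumed LiDAR-to-camera extrinsics

def project(points_lidar: np.ndarray):
    """Project Nx3 LiDAR points to pixel coordinates; return (uv, depth)."""
    pts_h = np.c_[points_lidar, np.ones(len(points_lidar))]
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]          # keep points in front of camera
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, pts_cam[:, 2]

def box_depth(box, uv, depth):
    """Median LiDAR depth of the points inside a (u1, v1, u2, v2) detection box."""
    u1, v1, u2, v2 = box
    mask = (uv[:, 0] >= u1) & (uv[:, 0] <= u2) & (uv[:, 1] >= v1) & (uv[:, 1] <= v2)
    return float(np.median(depth[mask])) if mask.any() else None

# Toy point cloud: a cluster ~20 m ahead (axes: x right, y down, z forward).
cloud = np.column_stack([np.random.uniform(-1, 1, 200),
                         np.random.uniform(-0.5, 0.5, 200),
                         np.random.uniform(19.5, 20.5, 200)])
uv, depth = project(cloud)
print("estimated distance to detected object:", box_depth((560, 320, 720, 400), uv, depth))
```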
Table 5. Camera image failure mitigation strategies (based on [37]).

Lens failures:
 • Brightness: Brightness deviations, if detected, can be compensated to some degree in post-processing. Completely black or white images are easy to detect, but recovering the original image is difficult.
 • Blur: Several approaches exist for eliminating or correcting image blur [82,83]. For example, in [82], the authors formulate blind deblurring as a joint optimization problem that exploits the sparsity of both the blur kernel and the clear image.
 • Broken lens: Image processing can detect a broken lens, but reconstructing a clean image is difficult.
 • Dirt: Image-processing software can remove localized rain and soil artifacts from a single image [84]; physics-based approaches [85] can remove dust and dirt from digital photographs and videos.
 • Flare: Lens artifacts such as flare and ghosting can be minimized or removed from a single input image in post-processing.
 • Rain: The mitigations described for dirt also apply to rain [84].
 • Condensation: Several studies, including [86], address avoiding or eliminating condensation inside cameras.

Image sensor failures:
 • Banding: Numerous strategies reduce the visual effects of banding, such as dithering patterns [87].
 • Dead pixels: Dead pixels can be detected with solutions that run on embedded devices [88].
 • Spots: Many image-processing approaches eliminate spots; for example, [89] identifies spots and computes their physical appearance (size, shape, position, and transparency) in an image based on camera parameters.
 • Chromatic aberration: Image processing can reduce chromatic aberration [90,91].
 • Mosaic: Although efficient demosaicing methods exist [92,93], it is difficult to recover the image when demosaicing fails.
 • Distortion: Lens distortion can be detected and measured [94] and corrected using image-processing methods [95,96].
 • Noise and sharpness: Several methods, including commercial tools, reduce or eliminate noise and sharpening artifacts; some work directly at the sensor level, e.g., [97,98].
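To make the dead-pixel entry more concrete, the sketch below implements a simple stuck-pixel detector in the spirit of, but not identical to, the embedded approach in [88]: a pixel is flagged when it shows almost no temporal variation across frames yet deviates strongly from the median of its spatial neighbourhood. The thresholds and synthetic test frames are assumptions made for illustration.

```python
# Minimal stuck/dead-pixel detection sketch (assumed thresholds, synthetic data).
import numpy as np
from scipy.ndimage import median_filter

def dead_pixel_map(frames: np.ndarray, diff_thresh: float = 60.0) -> np.ndarray:
    """frames: (N, H, W) grayscale stack. Returns a boolean map of suspect pixels."""
    frames = frames.astype(np.float32)
    temporal_var = frames.var(axis=0)                # dead/stuck pixels barely vary
    mean_img = frames.mean(axis=0)
    local_median = median_filter(mean_img, size=3)   # spatial neighbourhood estimate
    spatial_dev = np.abs(mean_img - local_median)
    return (temporal_var < 1.0) & (spatial_dev > diff_thresh)

# Synthetic test: noisy frames with one pixel stuck at 255.
rng = np.random.default_rng(1)
stack = rng.normal(100, 5, size=(10, 120, 160))
stack[:, 50, 80] = 255.0
print("suspect pixels:", np.argwhere(dead_pixel_map(stack)))
```

In an AV pipeline, such a map would typically be computed offline or during self-test and then used to mask or interpolate the affected pixels before perception runs.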
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
