Review

The Perception System of Intelligent Ground Vehicles in All Weather Conditions: A Systematic Literature Review

Hydrogen Research Institute, Université du Québec à Trois-Rivières, Trois-Rivières, QC G9A 5H7, Canada
* Authors to whom correspondence should be addressed.
Sensors 2020, 20(22), 6532; https://doi.org/10.3390/s20226532
Submission received: 28 September 2020 / Revised: 10 November 2020 / Accepted: 12 November 2020 / Published: 15 November 2020
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)

Abstract

Perception is a vital part of driving. Every year, the loss of visibility due to snow, fog, and rain causes serious accidents worldwide. Therefore, it is important to understand the impact of weather conditions on perception performance when driving on highways and in urban traffic. The goal of this paper is to provide a survey of sensing technologies used to detect the surrounding environment and obstacles during driving maneuvers in different weather conditions. Firstly, some important historical milestones are presented. Secondly, the state-of-the-art automated driving applications (adaptive cruise control, pedestrian collision avoidance, etc.) are introduced with a focus on all-weather activity. Thirdly, the most commonly used sensor technologies (radar, lidar, ultrasonic, camera, and far-infrared) employed by automated driving applications are studied. Furthermore, the difference between the current and expected states of performance is determined by the use of spider charts. As a result, a fusion perspective is proposed that can fill gaps and increase the robustness of the perception system.

1. Introduction

According to the National Highway Traffic Safety Administration (NHTSA), human error is the key explanation for serious road accidents, along with environmental factors such as weather conditions, which also contribute to accidents. On average, more than 6.4 million automobile accidents are registered in the United States (US) annually, of which 1.561 million are weather related [1,2]. Most of these accidents occur on wet pavements (due to rain and snow), while just 3% of them occur in the presence of fog. Similarly, the relationship between road accidents and adverse weather conditions across Europe has been outlined in the COST Action TU0702 report [3]. To mitigate road accidents, the automation of many driving functions has been successfully implemented in most new-generation commercial vehicles. To categorize these systems, the Society of Automotive Engineers (SAE) has defined six levels of automation, ranging from 0 (no automated driving maneuver) to 5 (fully autonomous navigation). While the deployment of fully autonomous vehicles (level 5) is still a few years away, current commercial vehicles are equipped with Advanced Driver Assistance Systems (ADASs), which are usually classified between SAE autonomy level 2 and level 3 [4]. In brief, ADASs use an environment perception module consisting of several sensors whose objective is to provide the relevant data necessary to interpret the scenes surrounding the vehicle. In normal climatic conditions, the reliability and benefits of ADASs have gained popularity and confidence [5]. However, in adverse weather conditions, the experience of the driver is required to compensate for the failure of the ADASs to appropriately perceive the surrounding environment, which often results in severe accidents. Despite the significant effect of weather conditions on intelligent navigation perception systems, most of the review papers related to ADASs mainly focus on the efficiency of algorithms without considering climatic conditions (such as snow, sleet, rain, and fog). In fact, the capability of autonomously and robustly perceiving the surroundings in all weather conditions has not been fairly taken into consideration. Moreover, most datasets focus on urban traffic in perfect light, and clear weather conditions (such as daylight and sunny weather) are often preferred for testing purposes. Just 12 of the current 36 publicly accessible databases, such as AMUSE, CCSAD, CMU, ESATS, Elektra, Heidelberg, JAAD, Oxford, Stixel, HCI, and TROM, are designed to contribute to autonomous driving under adverse conditions (night, fog, and rain) [6]. Only a few papers have addressed the sensors and their performance in severe weather conditions. For instance, in [7], a global review of the state of the art of automated vehicles is presented, including an overview of the system architecture with a briefing on key functions, such as perception, localization, planning, and control. In addition, emerging algorithms for addressing spatial information, semantic information, and target motion tracking are presented. However, the study addresses the performance of sensors under normal weather conditions but does not cover the impact of intemperate or changing weather on sensor performance, which is the gap to be discussed and filled here.
Similarly, in [8], the review article considered one of the main functions of an autonomous vehicle, i.e., the perception system, and addressed the role of sensors such as cameras, radars, and lidars in perceiving the environment, along with their performance based on popular algorithms used to obtain spatial and semantic information about targets and to track their motion. In addition, the authors provided insight into current university research centers, technology firms, and their participation in the development of autonomous driving. However, this review article only provides very general information on the impact of varying weather on perception sensors, and only lidar performance issues were highlighted in selected weather conditions. In [9], a systematic review of perception systems for sensing the environment and a position estimating system with various sensors and a sensor fusion algorithm is presented. In addition, insight into model-based simulators and the current state of regulations around the world is presented. However, the challenges posed by varying weather to perception sensors are not highlighted. It is necessary to be aware of the impact of the weather on perception when driving in all weather conditions on highways and in urban traffic. Improved detection in severe weather conditions will help to solve sensing problems without any further effort in algorithm processing. Therefore, there is a need for a unifying study that combines state-of-the-art information on the effect of weather on ADASs, as well as on the related perception system.
Our contribution offers information which can fill the gap left by articles [7,8,9] by surveying sensing technologies and the impact of various weather conditions on a diverse set of perception sensors (such as radar, lidar, ultrasonic, camera, and far-infrared).
  • Additionally, a 3D visualization of state-of-the-art sensing technologies is shown using a spider chart, clarifying emerging trends and gaps in development. Based on the spider chart information, sensor reliability challenges and gaps can be tackled in the efficient implementation of automated driving in all weather conditions.
  • A sensor fusion perspective is proposed, involving several strategies and tasks that will help to facilitate active sensor toggling (switching). The active sensor toggling strategy helps in the selection of sensors depending on the environment awareness context. Moreover, a potential combination of sensors is proposed for selected driving safety applications for all-weather driving.
The rest of the study is organized as follows: Section 2 explains the development of intelligent vehicles and their safety applications, focusing on the various uses of perception sensors in production. Section 3 discusses some of the well-known safety applications of ADASs and semi-autonomous vehicles with a focus on their vulnerability to extreme weather conditions. Section 4 provides a detailed overview of the sensors used in driver assistance applications, highlighting key points of individual sensors, such as their construction, working principle, strategies for perceiving the environment, and limitations in various weather environments; the section ends with a demonstration of individual sensor performance using spider charts. Section 5 presents a sensor fusion perspective for all-weather navigation of vehicles. In this section, we highlight strategies and tasks for fusing sensors for robustness and end with a proposal that includes potential combinations of sensors for various ADAS applications to enhance perception and eliminate functional variations. Section 6 outlines conclusions and discusses today’s limitations in achieving fully autonomous driving.

2. Evolution of Intelligent Vehicle Technology

Before analyzing the sensor technologies and their capability for navigation in difficult weather conditions, the evolution of vehicle technology over time is explained. In this regard, three main phases can be observed in the literature: (i) phase I, covering the period between 1980 and 2003; (ii) phase II, covering the interval between 2003 and 2008; and (iii) phase III, which started in 2008.

2.1. Phase I (1980 to 2003)

During this phase, the dynamic stability of vehicles was one of the focal points. Inertial sensors incorporated into inertial measurement units (IMUs), combined with an odometer, were often used to improve the stability of the vehicle, particularly when the road had several curves, and this soon led to driver assistance features like anti-lock braking systems (ABSs), followed by traction control (TC) and electronic stability (ECS) [10]. Mercedes demonstrated the efficacy and importance for human life of the combined ABS and ECS systems, and the “Moose Test” attracted public and official attention [11]. Nevertheless, safety concerns were limited to drivers and passengers; growing concern about mobility and the safety of human life in the surrounding area paved the way for the development of external sensors. In 1986, the European project PROMETHEUS [12], involving university research centers as well as transport and automotive companies, carried out basic studies on autonomous features ranging from collision prevention to cooperative driving and the environmental sustainability of vehicles. Within this framework, several different approaches to an intelligent transport system were designed, implemented, and demonstrated. In 1995, this vision work laid the foundation for a research team led by Ernst Dickmanns, who used a Mercedes-Benz S-Class on a journey of 1590 km from Munich (Germany) to Copenhagen (Denmark) and back, using computer vision and integrated-memory microprocessors optimized for parallel processing to react in real time. The result of the experiment marked the way for computer vision technology: the vehicle, reaching speeds of more than 175 km/h and with minimal human intervention, was driven autonomously 95% of the time. In the same year, in July 1995, Carnegie Mellon University’s NavLab5 traveled across the country on a “No Hands Across America” tour, in which the vehicle was instrumented with a vision camera, GPS receiver, gyroscope, and steering and wheel encoders. Moreover, neural networks were used to control the steering wheel, while the throttle and brakes were human controlled [13]. Later, in 1996, the University of Parma launched its ARGO project, which completed more than 2000 km of autonomous driving on public roads, using a two-camera system for road following, platooning, and obstacle prevention [14]. Meanwhile, other technologies around the world made their way into the market for various semi-autonomous vehicle applications. For example, to develop car parking assistance systems, ultrasonic sensors were used to detect barriers in the surroundings. Initially, these systems had merely a warning function to help prevent collisions when moving in and out of parking spaces. Toyota introduced ultrasonic back sonar as a parking aid in the Toyota Corona in 1982 and continued its success until 1988 [15]. Later, in 1998, the Mercedes-Benz adaptive cruise control radar was introduced; this feature was initially only usable at speeds greater than 30 km/h [16]. Slowly, autonomous and semi-autonomous highway concepts emerged, and major projects were announced to explore dynamic stability and obstacle detection sensors such as vision, radar, ultrasonic, differential GPS, and gyroscopes for road navigation. The navigation tasks included lane keeping, departure warning, and automatic curve warning [17,18]. Most of these projects were carried out in normal operating environments.
This phase concluded with the National Automated Highway System Consortium [19] demonstration of automated driving functions and discussion of seven specific topics related to automated vehicles: (i) driver assistance for safety, (ii) vehicle-to-vehicle communication, (iii) vehicle-to-environment communication, (iv) artificial intelligence and soft computing tools, (v) embedded high-performance hardware for sensor data processing, (vi) standards and best practices for efficient communication, and (vii) traffic analysis systems.

2.2. Phase II (2003 to 2008)

Several interesting projects were published in the second phase, such as the first Defense Advanced Research Projects Agency (DARPA) Grand Challenge, the second DARPA Grand Challenge, and the DARPA Urban Challenge [20,21]. These three projects and their corresponding competitions were designed to accelerate the development of intelligent navigation and control by highlighting issues such as off-road navigation, high-speed detection, and collision avoidance with surroundings (such as pedestrians, cycles, traffic lights, and signs). Besides, complex urban driving scenarios such as dense traffic and intersections were also addressed. The Grand Challenges showed the potential of lidar sensors to perceive the environment and to create 3D projections to manage the challenging urban navigation environment. The Velodyne HDL64 [22], a 64-layer lidar, played a vital role for both the winning and the runner-up teams. During the competitions, vehicles had to navigate the real environment independently for a long time (several hours). The winner of the second Grand Challenge (Stanley, Stanford Racing Team, Stanford University) equipped the Stanley vehicle with five lidar units, a front camera, a GPS sensor, an IMU, wheel odometry, and two automotive radars. The winner of the Urban Challenge (2007) (Boss, Carnegie Mellon University team) featured a perception system made up of two video cameras, five radars, and 13 lidars (including a roof-mounted unit of the novel Velodyne HDL64). The success of the Grand Challenges highlighted some important trends: for example, the number and variety of sensors increased significantly, leading to an increase in data acquisition density, which prompted several researchers to study different types of fusion algorithms. Further data acquisition density studies paved the way for the development of advanced driving maneuvers such as lane keeping and collision prevention with warning systems to help the driver avoid potential hazards. We also note that, although different challenges were addressed in the context of these urban navigation competitions, all of them were faced in clear weather conditions and no specific report was provided on tests under varying climatic conditions.

2.3. Phase III (from 2008)

The third phase is a combination of driver assistance technology advancement and commercial development. The DARPA Challenges strengthened partnerships between car manufacturers and the education sector and mobilized several efforts to advance autonomous vehicles (AVs) in the automotive industry. This involved a collaboration between General Motors and Carnegie Mellon University (the Autonomous Driving Joint Research Lab) and a partnership between Volkswagen and Stanford University. Google’s Driverless Car initiative brought autonomous-car research out of the university lab and into commercial development. In 2013, a Mercedes-Benz S-Class vehicle [23] was prepared by the Karlsruhe Institute of Technology/FZI (Forschungszentrum Informatik) and Daimler R&D, which ran 100 km from Mannheim to Pforzheim (Germany) completely autonomously in a project designed to enhance safety. The vehicle, equipped with a stereo vision system together with several new-generation long-range and short-range radar sensors, followed the historic Bertha Benz memorial road. Phase III has focused on issues like traffic automation, cooperative driving, and intelligent road infrastructure. Among the major European Union initiatives, Highly Automated Vehicles for Intelligent Transport (HAVEit, 2008–2011) [24,25,26] tackled numerous driver assistance applications, such as adaptive cruise control, safe lane changing, and side monitoring. The sensor sets used in this project included a radar network, laser scanners, and ultrasonic sensors with advanced machine learning techniques, as well as vehicle-to-vehicle (V2V) communication systems. The results of this project produced safety architecture software for the management of smart actuators and temporary autopilot tasks for urban traffic with data redundancy and led to successful green driving systems. Other platooning initiatives were Safe Road Trains for the Environment (SARTRE, 2009–2012) [27], the VisLab Intercontinental Autonomous Challenge (VIAC, 2010–2014) [28], the Grand Cooperative Driving Challenge, 2011 [29], and the European Truck Platooning Challenge, 2016 [30,31], which were major projects aimed at creating and testing successful cooperative and intersection driving strategies. Global innovation, testing, and deployment of AV technology called for the adoption of standard guidelines and regulations to ensure stable integration, which led to the introduction of SAE J3016 [4], which defines six levels of autonomy from 0 to 5 in all-weather situations, where the navigation tasks are managed by the driver at level 0 and by the computer at level 5. In response, the Google driverless car project began in 2009 to create the most advanced driverless car (SAE autonomy level 5), featuring a rooftop-mounted rotating 64-beam lidar that creates 3D images of objects, helping the car see distance and build images of objects within an impressive 200 m range. The camera mounted on the windshield helps the car see objects right in front of it and record information about road signs and traffic lights. Four radars mounted on the front and rear bumpers of the car make it possible for the car to be aware of the vehicles in front of and behind it and to keep passengers and other motorists safe by avoiding bumps and crashes.
To minimize the degree of uncertainty, GPS data are compared with previously collected sensor map data received through the antenna fixed at the rear of the car, which provides information on the exact location of the car and updates the internal map. An ultrasonic sensor mounted on one of the rear wheels helps keep track of movements and warns the car about obstacles in the rear. Usually, the ultrasonic sensors are used for parking assistance. Google researchers developed an infrastructure that was successfully tested over 2 million km on real roads. This technology now belongs to the company Waymo [32]. Nissan’s Infiniti Q50 debuted in 2013 and became one of the company’s most capable autonomous cars and the first to use a virtual steering column. The model has various features, such as lane changing, collision prevention, and cruise control, and is equipped with cameras, radar, and other next-generation technology; the driver does not need to handle the accelerator, brake, or steering wheel [33]. Tesla entered the course of automated driving in 2014 [34], with all its vehicles equipped with a monocular camera and an automotive radar that enabled autopilot level 2–3 functionality. In 2018, Mobileye, focusing on a vision-only approach to automated driving, presented an automated Ford demo with only 12 small, fully automated mono-cameras [35]. Besides these projects, there are many pilot projects in almost all G7 countries to improve the introduction rate of the ultimate driverless vehicle. Moreover, the ADAS has achieved high technology readiness, and many car manufacturers are now deploying this technology in their mass-market vehicles. Although several key elements for automatic maneuvers have been successfully tested, the features are not fully covered under all weather conditions. In the next section, we discuss the various applications of ADASs which are currently available in the market and their limitations in various weather conditions.

3. Automated Navigation Features in Difficult Weather Conditions

3.1. Forward Assistance

3.1.1. Adaptive Cruise Control (ACC)

The ACC system helps the driver to longitudinally control the vehicle dynamics [36]. The main motivation for ACC development is to relieve the driver of driving stress, distracting tasks, and human error arising from the constant monitoring of speed and from maintaining proper progress in irregular traffic. The ACC feature combines cruise control with collision avoidance control. The vehicle speed is modulated based on the distance from the vehicle in front (the leading vehicle) [36,37,38], and the feature is mainly intended for highway environments. Mitsubishi and Toyota introduced the cruise control function in Japan in 1996, based on lidar technologies [39]. Later, ACC was expanded in Europe by Mercedes-Benz in 1999, where it used radar technology combined with an automatic braking system. Detecting other vehicles from a moving vehicle is a challenging task for smart vehicles in heavy traffic and winter conditions (for example, snowfall and icy roads). In particular, the feature must deal with the vehicle skidding and sliding to avoid any collision when the road is wet, snowy, or icy. Indeed, snowfall can accumulate at the sensor locations and reduce the sensor’s capability to correctly perceive its surroundings. Additionally, rainwater causes oil and grease to rise to the top of the water on the road, creating a slippery surface. The popular sensors for ACC applications include the visible-spectrum camera, the ultrasonic sensor, lidar, radar, and passive far-infrared cameras. Cameras can be used to perceive the surroundings and the target vehicles ahead. However, in rain and fog conditions, sensing vehicles around and ahead becomes difficult, and the lidar can send false obstacle detection alerts. Ultrasonic sensors work well at close range due to low noise in the reflection of sound waves from the targeted object. Radars are robust in all climatic conditions; however, due to their narrow detection field of view, other vehicles in the lane of the host vehicle may not be adequately recognized, which can lead to a sudden collision. Passive far-infrared cameras can come in handy in adverse weather conditions. Indeed, these sensors, paired with robust algorithms and image learning, can detect obstacles accurately and robustly in most difficult climatic conditions due to their ability to see through fog, rain, and even snow [40,41].
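To make the speed-modulation idea above concrete, the following minimal Python sketch implements a constant time-gap spacing policy, one common way an ACC controller can trade off cruise speed against the distance to the leading vehicle. The time gap, gains, and limits are illustrative assumptions, not values from the reviewed systems.

```python
# Minimal ACC spacing-policy sketch (illustrative only, not taken from the reviewed systems).
# A constant time-gap policy: the desired gap grows with ego speed; a proportional
# controller adjusts acceleration toward the lead vehicle's gap and speed.

def acc_command(ego_speed, lead_distance, lead_speed,
                time_gap=1.8, standstill_gap=5.0, k_gap=0.2, k_speed=0.4,
                set_speed=30.0, max_accel=2.0, max_decel=-4.0):
    """Return a longitudinal acceleration command in m/s^2 (all gains are hypothetical)."""
    desired_gap = standstill_gap + time_gap * ego_speed          # m
    gap_error = lead_distance - desired_gap                      # positive -> too far behind
    speed_error = lead_speed - ego_speed                         # positive -> lead is faster
    accel_follow = k_gap * gap_error + k_speed * speed_error     # car-following term
    accel_cruise = k_speed * (set_speed - ego_speed)             # plain cruise term
    # Use the more conservative (smaller) of the two commands.
    accel = min(accel_follow, accel_cruise)
    return max(max_decel, min(max_accel, accel))

# Example: ego at 25 m/s, lead vehicle 40 m ahead at 22 m/s -> gentle braking command.
print(acc_command(25.0, 40.0, 22.0))
```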

3.1.2. Forward Collision Avoidance (FCA)

Various studies (such as the European Collision Support Initiative) in connection with forward collision accident technology have demonstrated that many drivers do not brake, or do not use the full braking system capacity, when facing road emergencies. This feature mainly relies on continuous monitoring by different types of sensors, such as cameras, ultrasonic, radar, and lidar. FCA is intended to help the driver respond quickly and safely to any obstacle on the road. In challenging road conditions, traction is even harder to control. According to ISO 15623:2013 [42,43], the minimum distance for vehicle detection must be over 45 m. For a relative velocity of 20 m/s, azimuth (lateral) sensing with a single sensor should cover between 9° and 18° to recognize a vehicle as wide as 1.80 m. Furthermore, it should have a sufficiently large vertical visual range (elevation) to detect a target with a height of 1.1 m. In normal weather conditions, most sensors (radar, lidar, and cameras) meet these standards, but in more difficult environments (rain and snow with low visibility), sensors struggle to detect relevant objects. As discussed for the ACC application, lidar, camera, and ultrasonic sensor capabilities are limited in bad weather conditions, while radar remains more robust.
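As a simple illustration of how an FCA function can decide when to warn or brake, the sketch below computes the time to collision (TTC) from the measured range and closing speed; the warning and braking thresholds are hypothetical values chosen for the example, not figures from ISO 15623.

```python
# Illustrative time-to-collision (TTC) check for forward collision avoidance.
# The warning/braking thresholds are hypothetical values used only for this sketch.

def ttc_warning(range_m, closing_speed_mps, warn_ttc=2.6, brake_ttc=1.6):
    """Classify the situation from range and closing speed (positive = closing)."""
    if closing_speed_mps <= 0:
        return "no_threat"                       # gap is opening or constant
    ttc = range_m / closing_speed_mps            # seconds until contact at current speeds
    if ttc < brake_ttc:
        return "autonomous_brake"
    if ttc < warn_ttc:
        return "driver_warning"
    return "monitor"

# At the ISO-mentioned minimum detection distance (45 m) and a 20 m/s closing speed,
# TTC = 2.25 s, which already falls inside the warning band of this sketch.
print(ttc_warning(45.0, 20.0))
```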

3.1.3. Road and Traffic Sign Recognition

Typically, road and traffic signs are either on the roadside or above the road. They give drivers important information to guide, warn, or regulate their behavior to make driving safer and easier. Road and traffic signs have special colors and symmetrical shapes, such as triangles, circles, octagons, diamonds, and rectangles. A sign’s shape, color, and related ideogram are designed to draw the driver’s attention. Bad weather can decrease the visibility of traffic signs, and an obstacle (such as a pedestrian or another vehicle) can partially obscure road signs, which may cause drivers to miss important signs. Therefore, traffic sign recognition technologies must be good at classifying signs even in non-ideal conditions. Radar and ultrasonic sensors cannot recognize and classify signboards. Lidar is good at mapping the surroundings, but it has low color contrast and its elevation angle is not good enough to recognize and classify signboards [44]. For this application, machine learning with Complementary Metal Oxide Semiconductor (CMOS) cameras performs well as a low-cost solution. Traffic sign recognition algorithms usually have two steps of detection and classification [45,46]. At night, cameras go blind and depend only on the car headlights. However, this does not affect performance much, since the special paints used on signboards make them bright at night [47]. In heavy rain and snowfall, the camera can have trouble detecting signboards due to visibility issues. A passive far-infrared (FIR) camera works by evaluating an object’s thermal signature and emissivity, and it cannot see colors in detail. However, using FIR, which can efficiently detect and distinguish signs from the background, in combination with a CMOS camera can provide a hybrid perception solution that is robust in difficult weather conditions.
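The two-step detection-and-classification pipeline described above can be sketched as follows with a CMOS camera image and OpenCV; the HSV thresholds, minimum blob area, and the classifier stub are illustrative placeholders rather than a validated traffic sign recognizer.

```python
# Sketch of a detect-then-classify traffic sign pipeline (assumed parameter values).
import cv2

def detect_sign_candidates(bgr_image):
    """Return bounding boxes of red, roughly symmetric regions (sign candidates)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so two hue bands are combined (values are illustrative).
    mask = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        if cv2.contourArea(c) < 200:             # discard small blobs (noise, rain streaks)
            continue
        poly = cv2.approxPolyDP(c, 0.03 * cv2.arcLength(c, True), True)
        # Triangles, octagons, and near-circular contours are all plausible sign shapes.
        if 3 <= len(poly) <= 12:
            boxes.append(cv2.boundingRect(c))
    return boxes

def classify_sign(crop):
    """Placeholder for the second stage, e.g., a CNN trained on sign classes."""
    raise NotImplementedError("plug in a trained classifier here")
```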

3.1.4. Traffic Jam Assist (TJA)

Traffic jams are the main reason for the development of the traffic jam assist (TJA) feature. TJA utilizes functions of ACC and lane assist (see Section 3.2) to enable convenient and safe stop-and-go driving. The TJA system responds to other vehicles by adjusting the safe distance and autonomously handles steering in the lateral direction if any free space is detected [48]. If the vehicle with TJA encounters a situation with too much nearby activity, such as frequent changes in adjacent lanes, many obstacles, or unpredictable speeds of other vehicles, the driver receives a take-over prompt [49]. TJA combines longitudinal and lateral monitoring to locate surrounding vehicles. Information about both lane structures, such as markings, and other vehicles in the immediate environment is needed to design the TJA function. This implies that sensors should be sufficiently capable of locating the immediate surroundings at a short distance from the host vehicle. Environmental effects such as snow, rain, and fog have limited effects on sensor performance, since the detection must be performed in traffic within a small perimeter [48]. Various experiments have shown that although sensor performance degrades at long ranges in different weather conditions, it remains good at shorter distances. Passive far-infrared cameras are computationally challenging for short ranges. Lidar can effectively locate the surroundings. Ultrasonic sensors are very effective at short range with low noise. They are also very cheap compared to other sensors. Their detection accuracy may vary, since their waves are influenced by temperature changes due to the heat emitted by surrounding vehicles. Mid-range radars are inexpensive and effective in detecting the surroundings. For the lateral direction, cameras are a cost-effective solution for detecting lane markings and boundaries.

3.2. Lateral Assistance

3.2.1. Lane Departure Warning (LDW) and Lane Keeping Assistance (LKA)

According to various accident statistics [50], unintentional lane departures on highways lead to dangerous accidents. To overcome this problem, lateral assistance (LA) has been developed. Lane departure warning (LDW) and lane keeping assistance (LKA) are two popular features. LDW warns drivers about deviation from the lane, and LKA assists them with keeping the vehicle on track by controlling and steering the vehicle. For LKA systems, it is important to know the vehicle position with respect to the lane of travel. Lane detection and lane tracking are therefore critical tasks. Lane markings can sometimes be difficult to recognize on various road types due to snow, other vehicles, and changes to the road surface itself [51,52]. In all weather conditions, the lane sensing system should be capable of determining every type of marking on the road so that the position and trajectory of the vehicle with respect to the lane can be reliably estimated. Since the detection of lanes does not require kinetic and shape information, cameras and lidar are mostly used [53,54]. However, difficult weather conditions and low visibility issues are still the major concerns that limit the performance of cameras.
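A minimal camera-based lane-marking detector of the kind used by LDW/LKA systems can be sketched with standard edge detection and a Hough transform, as below. The region-of-interest geometry and Hough parameters are illustrative assumptions and would need tuning; as noted above, such a pipeline degrades quickly when snow or low visibility obscures the markings.

```python
# Minimal lane-marking detection sketch for a forward-facing camera (assumed parameters).
import cv2
import numpy as np

def detect_lane_lines(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blur, 50, 150)

    # Keep only a trapezoidal region of interest in front of the vehicle.
    h, w = edges.shape
    roi = np.zeros_like(edges)
    polygon = np.array([[(0, h), (w, h), (int(0.6 * w), int(0.6 * h)),
                         (int(0.4 * w), int(0.6 * h))]], dtype=np.int32)
    cv2.fillPoly(roi, polygon, 255)
    edges = cv2.bitwise_and(edges, roi)

    # Probabilistic Hough transform returns candidate line segments (lane markings).
    lines = cv2.HoughLinesP(edges, rho=2, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=100)
    return [] if lines is None else lines.reshape(-1, 4)
```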

3.2.2. Lane Change Assistance (LCA)/Blind Spot Monitoring (BSM)

Assisting drivers in changing lanes, where possible mistakes can be minimized, is the purpose of the lane change assistance (LCA) system. In urban, rural, and highway roads, the monitoring of the side lane traffic is very important for lane changing. Based on ISO standard 17387 [55], LCA tries to accomplish blind spot detection (BSD) that focuses on the vehicle approaching from behind in the side lane. For LCA, it is important to provide the driver with information about the immediate vehicles near the host vehicle. For blind spot monitoring, sensors with a medium range between 70 m and 100 m are used. Rear cameras can deliver information on the position of the vehicle approaching from behind at the time of the departure. However, the operating range is influenced by climate conditions. Passive far-infrared cameras and mid-range radar sensors can overcome weather conditions by robustly detecting surrounding vehicles in all weather conditions. Nevertheless, the cone-shaped spectrum of radar detection remains a difficult problem at bends [56]. In addition to radars, ultrasonic sensors can be used as a backup.

4. Sensors

4.1. Overview

All the above features were designed to improve vehicle safety and rely heavily on sensor data. Data accuracy from the sensors depends on environmental stimuli used to perceive the surrounding scene. Car sensors are classified as proprioceptive or exteroceptive. Proprioceptive sensors help measure vehicle ego motion and dynamics. Some of the proprioceptive sensors are wheel speed sensors, torque sensors, steering angle sensors, and IMUs. On the other hand, exteroceptive sensors work by detecting obstacles and recognizing the navigation scene; they help gather information from the surroundings and improve knowledge of the vehicle’s location. Exteroceptive sensors are further categorized into passive and active sensors. Active sensors include ultrasonic, radar, and lidar, which function by emitting energy (electromagnetic radiation or, in the case of ultrasonic sensors, sound waves) and measuring the return time to determine parameters such as distance and position. Instead of transmitting external signals, passive sensors (e.g., infrared cameras) receive the electromagnetic waves or radiation present in the environment. Active sensors are preferred to passive ones, since they have less trouble discriminating between useful data and irrelevant signals. Figure 1 shows a wide range of the electromagnetic spectrum described by ISO 20473, including the wavelengths of the following sensors: CMOS camera, near-infrared sensors, lidar, thermal camera, radar, and ultrasonic sensor.
Since different environmental stimuli can include any mixture of these wavelengths, the robustness of perception can be improved by combining different sensors with nonoverlapping wavelength intervals. However, wavelength is not the single dominant factor; target geometry (shape), spatial resolution, and accuracy must also be considered to overcome the complexity of the surrounding scene, and advanced filters may be required to eliminate disturbances and interference during perception. In the following sections, the exteroceptive sensors responsible for environmental perception are addressed briefly; drawing on findings from reviewed experimental tests, the advantages and disadvantages of the individual sensors, along with their performance capabilities in different environments, are presented and demonstrated on a spider chart for visualization. All the criteria behind the spider chart aim to demonstrate the state-of-the-art performance of individual sensors, compare their capabilities with those of other sensors, and give an idea of individual limitations. The information used to construct the spider charts was obtained from manufacturers and from the evaluation of experts’ key findings, and the related references are cited in each section describing the individual sensors. The criteria chosen for sensor information are based on the ability of the sensor to obtain target spatial information (such as location, velocity, range, and shape) and to detect and recognize objects (pedestrians, cars, trees, and streetlights). These criteria help to provide information such as sensor resolution and contrast. In the same way, the individual sensor capacity to work in precipitate and aerosol environments has been highlighted with various experimental studies. The following criteria have been used in the visualization of the spider charts, and Table 1 helps in reading and interpreting the spider chart.
Range: Information is gathered from various sensor manufacturers and used to describe the performance of the sensors. For example, the ability of a sensor to detect any object at 100 m and beyond is taken to represent a high-performing sensor and is represented with index 4 (i = 4) on the spider charts. However, if a sensor cannot detect objects at 100 m, the sensor is considered low performing, indicated with index 1 (i = 1). For ADAS safety driving applications, range requirements may vary.
Resolution: Information on resolution is gathered from noted experimental studies, whose references are cited under the individual sensor performance reviews. A sensor is considered high performing if each of its measurement axes can achieve a spatial resolution of less than 10 cm (i = 4) and low performing otherwise (i = 1). Mapping resolution helps to estimate the sensor’s ability to provide information about the position, velocity, size, and shape of the target.
Contrast: Information on contrast is gathered from noted experimental studies, whose references are cited under individual sensor performance reviews. If the sensor can accurately identify an object when the ambient contrast is small, then sensor performance is assumed to be high (i = 4). Furthermore, if the sensor can reliably detect an object only when the ambient contrast is very high, then the sensor is assumed to have low performance (i = 1). This differentiation helps in understanding the ability of sensors to classify and track the target of interest.
Weather: If a sensor can reliably perform successful detection under harsh weather conditions, it is assumed to be a high-performance sensor (i = 4). Similarly, if a sensor provides good detection only in clear weather conditions, then the sensor is assumed to have low performance (i = 1). The information about the weather has been gathered from noted experimental analyses, and the related references are cited under the individual sensor performance reviews.
Cost: Details on the individual costs of the sensors were collected from various online sources. For instance, the unit price of automotive ultrasonic sensors ranges from USD 16 to USD 40 [58,59], and that of automotive radar ranges between USD 50 and USD 220, depending on whether the application is short, medium, or long range [60]. The automotive lidar price ranges between USD 500 and USD 75,000, based on the design and configuration of the lidar, and, in 2020, Velodyne announced it would be introducing a USD 100 lidar sensor applicable for use in cars [61,62]. The automotive mono-camera price ranges between USD 100 and USD 1000 [63,64], and automotive thermal cameras cost around USD 700 to USD 3000 [65]. In order to visualize the sensors’ cost on a spider chart, we compared the actual price with the ideal price, as shown in Table 2. The ideal price is proposed by the authors and is based on the actual price versus an acceptable price, considering the sensor’s design and complexity.
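For readers who wish to reproduce the visualization, the following sketch builds a five-axis spider chart from index scores on the criteria defined above (range, resolution, contrast, weather, and cost). The example scores are placeholders, not the exact indices used in the figures of this paper.

```python
# Sketch of a spider (radar) chart built from index scores (1-4) on the five criteria.
# The score values below are illustrative placeholders, not the paper's exact indices.
import numpy as np
import matplotlib.pyplot as plt

criteria = ["Range", "Resolution", "Contrast", "Weather", "Cost"]
ideal = [4, 4, 4, 4, 4]          # desired performance (i = 4 on every axis)
sensor = [4, 2, 1, 4, 3]         # hypothetical current scores for one sensor

angles = np.linspace(0, 2 * np.pi, len(criteria), endpoint=False).tolist()
angles += angles[:1]             # close the polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for scores, style, label in [(ideal, "b-", "ideal"), (sensor, "r--", "current")]:
    values = scores + scores[:1]
    ax.plot(angles, values, style, label=label)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(criteria)
ax.set_ylim(0, 4)
ax.legend()
plt.show()
```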
The following sub-sections present reviews of the various sensors, highlighting key points such as market penetration, the working concepts and techniques of each sensor in environmental perception, and the advantages and disadvantages of the sensors under different weather conditions.

4.2. Radar

Market penetration: Radio detection and ranging (radar) provides smart vehicles with effective safety procedures. This sensing technology started with Hertz and Hülsmeyer’s electromagnetic wave reflection studies [66] and has since steadily progressed from basic blind spot detectors and cruise control systems to semi-autonomous obstacle detection and braking functions [16]. Bosch, through its partnership with Infineon, is the dominant manufacturer of radar in the market. The other big players are Continental, Autoliv, Delphi, Elesys, Hella, Fujitsu Ten, Mitsubishi Electric, ZF-TRW, SmartMicro, Denso, Valeo, Hitachi Automotive, Clarion, etc. Technical information on radar by various manufacturers is presented in [67]. In Figure 2, an image of radars from Bosch is presented.
Working principle: The vehicle radar setup includes a transmitter, an antenna, a receiver, and a processing unit. The radar transmits electromagnetic waves produced by the transmitter in a known direction. If an obstacle or surface intercepts the waves, they are reflected back to the receiver system. The processing unit uses the captured signal to determine target range, angle, and velocity. Based on application, automotive radar sensors are categorized as short-range radar (SRR) (up to 30 m), medium-range radar (MRR), and long-range radar (LRR) (up to 250 m) [69,70,71,72,73,74,75,76,77]. Automotive radar systems typically operate in the 24 GHz and 76 GHz portions of the electromagnetic spectrum. In [78], an experimental distinction was made between a 76 GHz radar operating with 4 GHz bandwidth and a 24 GHz radar operating with 200 MHz bandwidth, with results concluding that low-bandwidth radars could not distinguish between two distinct obstacles and would send the driver or autonomous vehicle incorrect information. Many of today’s automotive radars are built on a frequency-modulated continuous wave (FMCW) waveform, because it enables easy modulation, high average power, high bandwidth, and excellent range resolution [79]. In the same reference [79], a brief review of the key developments in radar and signal processing techniques applied to the estimation of significant target parameters, such as range, velocity, and direction, is presented with mathematical illustrations. Thanks to extensive use in the automotive industry for various applications and to advances in signal processing through machine learning, pattern recognition techniques, and robust algorithm development [80], radar data now provide more information on object dimensions [81], object orientation, motion prediction [82], and classification [83,84,85].
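The bandwidth comparison reported in [78] can be made concrete with the standard FMCW range-resolution relation, ΔR = c/(2B): a quick calculation (sketched below) shows why the low-bandwidth radar cannot separate two closely spaced obstacles.

```python
# Range resolution of an FMCW radar is set by its sweep bandwidth: delta_R = c / (2 * B).
# This reproduces the contrast drawn in [78] between 4 GHz and 200 MHz bandwidths.
c = 3e8                          # speed of light, m/s

def range_resolution(bandwidth_hz):
    return c / (2 * bandwidth_hz)

print(range_resolution(4e9))     # 76 GHz radar, 4 GHz sweep   -> ~0.0375 m
print(range_resolution(200e6))   # 24 GHz radar, 200 MHz sweep -> ~0.75 m
# Two obstacles closer together than ~0.75 m merge into a single detection for the
# low-bandwidth radar, which is why it can report misleading information.
```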
Degradation of radar performance: Nonetheless, radar performance in various weather conditions is not as good as in clear weather. The influences of adverse climates, such as precipitate and aerosol environments, on radar output have been analyzed, and it was concluded that precipitate environments, such as rain, had the most impact, causing a reduced radar range; a similar effect was found in the presence of wet snowfall. Experimental analyses in [86,87,88] support this argument, focusing on radar performance in the rain and providing details on the attenuation and backscatter effects responsible for the deterioration of radar performance in rain. The attenuation effects that lead to a decline of the radar range are higher at higher frequencies (about a 45–50 percent decrease in range for heavy and very heavy rainfall), while backscatter effects increase the noise in the receiver. It is noted that this noise is high at low frequencies and causes false-positive errors at the receiver. Similarly, in [89], the impact of wet and dry snowfall on radar at 77 and 300 GHz was studied. The findings concluded that attenuation due to snow of about 12–18.5 dB/km contributed to a decreased range, and the study also noted that the attenuation varied greatly with snow water content. In [90], a mathematical model is presented to test wet snowfall output; the analysis shows that snowfall has a similar effect on radar output to that of rainfall. Although radar performance in precipitate surroundings has been observed to degrade, the studies in [91,92] compare radar, lidar, and camera performance in simulated and real-world adverse climatic environments and conclude that radar outperforms lidar and cameras under the influence of rain. In [93,94], the effect of aerosols on the transmission of radar signals has been investigated in controlled environments and mining applications, and the results show that radar is not impacted by the presence of airborne particles, such as dust and smoke, because its wavelengths are much larger than the characteristic dimensions of dust. Although radar may be a great option for all weather conditions, signal interference is still a matter of concern. More detailed information on radar interference is presented in [95], which discusses the interference impact on radar, its characteristics, and mitigation strategies. Based on the review analysis, the advantages and disadvantages of radar are summarized and listed in Table 3.
The spider chart for radar is presented in Figure 3, where the ideal radar feature is shown with the blue bold curve, whereas the actual radar sensor output is shown with the red dashed line. The difference between both curves represents the difference between the current and desired radar sensor output.

4.3. Lidar

Market penetration: Lidar is the abbreviation of light detection and ranging. Lidar was developed in 1960 for the study of environmental measurements (atmospheric and oceanographic parameters) and later was developed for topographical 3D mapping applications in the mid-1990s [96]. In 2005, lidar was applied to vehicles to locate and avoid obstacles in the DARPA challenge [97]. Lidar commercial manufacturing companies include Velodyne, Quanergy, Leddartech, Ibeo, etc. The technological description of individual sensors is provided in [98]. In Figure 4, an image of an automotive lidar is shown.
Working principle: Just as radar uses the time of flight of radio waves to collect target information, lidar uses laser diodes to transmit light pulses toward the target and photodiodes to capture the returns. The optical receiver lens in the lidar system acts as a telescope, collecting the returning photons onto the photodiode for processing. The collected reflections form 3D point clouds corresponding to the scanned environment, and the strength of the reflected laser energy provides information about the range, speed, and direction of the target. Lidar manufacturers use two wavelengths: 905 nm and 1550 nm. The former is a common option for automotive manufacturers due to its reliability, eye protection, and cost-effectiveness with silicon detectors. In [100,101], an authoritative analysis of lidar, its waveform, and market penetration strategies is discussed. Lidars provide a good physical description of the target and, because of that, have been used for target detection, tracking, and motion prediction. Filtering of the ground and clustering of the target [102,103,104,105] are two methods widely used with lidar for object detection, which provide the spatial information of the target. To classify and recognize objects (like pedestrians, trees, or vehicles), lidars make use of techniques such as machine-learning-based object recognition [106,107,108,109], and additional methods such as global and local feature extraction help provide the structure of the target. Lidar uses Bayesian filtering frameworks and data association methods for target tracking and motion prediction to provide information such as velocity, trajectory, and object position [110,111,112]. In contrast to radar-based multi-object tracking, in which all detections are typically represented as points, lidar-based multi-object tracking provides detection patterns of targets, and this property of lidar scanning leads users to opt for lidar.
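As an illustration of the ground-filtering and clustering steps mentioned above, the sketch below removes near-ground returns with a fixed height cut and groups the remaining points with DBSCAN. The thresholds and the flat-ground assumption are simplifications for the example; production pipelines typically fit the ground plane (e.g., with RANSAC) and use more elaborate clustering.

```python
# Minimal lidar ground-filtering and clustering sketch (assumed thresholds).
import numpy as np
from sklearn.cluster import DBSCAN

def detect_objects(points, ground_z=-1.6, eps=0.7, min_points=10):
    """points: (N, 3) array of x, y, z lidar returns in the sensor frame (meters)."""
    # 1. Ground filtering: drop returns close to the assumed road plane height.
    non_ground = points[points[:, 2] > ground_z + 0.2]

    # 2. Clustering: group the remaining returns into object candidates.
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(non_ground)

    clusters = []
    for label in set(labels) - {-1}:                          # -1 marks noise points
        cluster = non_ground[labels == label]
        centroid = cluster.mean(axis=0)                       # coarse object position
        extent = cluster.max(axis=0) - cluster.min(axis=0)    # coarse size/shape
        clusters.append((centroid, extent, cluster))
    return clusters
```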
Degradation of lidar performance: Lidar performance in extreme weather conditions is not as strong as expected. Adverse weather conditions increase the transmission loss and decrease the reflectivity of the target. Within the precipitation category (fog, snow, and rain), fog has been found to have the greatest impact on the ability of lidar, due to its high extinction and backscattering properties, which are greater than those of snowfall and rain [113]. The challenge in fog conditions is that much of the transmitted signal is lost, resulting in reduced power. Reduced power alters the signal-to-noise ratio of the lidar sensor and influences its detection threshold, which leads to degraded perception performance. In [114], lidar performance in fog is studied in depth; the emitted light is scattered by fog particles, which not only reduces the detection range dramatically but also leads to false detections. In the same study, the fog condition showed performance degradation similar to that of an airborne-particle environment. In [115,116,117], qualitative and quantitative experimental studies of the fog effect on lidar in controlled environments outlined the transmission-loss phenomena leading to low received laser power and low target visibility. Lidar scanner capacity testing has been carried out in the northern part of Finland at Sodankylä Airport [118], where fog creates a particular problem. Results indicate that fog reduces the sensor range by 25%. In [119], the quantitative performance of lidar with varying rain intensity is presented with the help of a mathematical model, and the results show that as rainfall intensity increases, the lidar point cloud density is affected, increasing false-positive errors. The same effect is presented in [120], where the authors’ analysis showed that the varying intensity, size, and shape of raindrops drastically influence the attenuation rates of lidar. The effect of snow on lidar performance, such as reflectivity and propagation through a snowy environment, is evaluated in [121], using four lidars from different manufacturers. The results of this experiment highlight that the received power levels generated by snowflakes or water droplets were high (causing false-positive errors) and tended to overload the optical receiver chain. Lidar performance is also altered by airborne particles, such as dust, whose characteristic dimensions are larger than the lidar wavelength. These particles prevent the sensor from imaging its surroundings, resulting in reduced visibility and incomplete target information. In another work [122], the experiment outlines that dust particles in the air are very often detected by laser sensors and hide obstacles behind the cloud of dust. To address these limitations, the use of lidars with a 1550 nm wavelength and strong propagation ability is suggested. However, this solution is of limited use, because of constraints such as high cost and high energy usage. The loss of efficiency due to adverse environmental conditions was examined in [123], and a comparison was made between lidars with wavelengths of 905 nm and 1550 nm. The findings of [123] have also shown that lasers with a wavelength of 1550 nm have a much higher water absorption compared to 905 nm lidar.
Furthermore, due to the advantage of the higher wavelength (1550 nm), higher power can be used for the transmission of lidar signals, which could result in an increased range of detection in adverse climatic conditions, while maintaining eye safety regulations. The advantages and disadvantages of lidar are outlined in Table 3. The ideal lidar function is shown in Figure 5, with the blue bold curve, while the current lidar sensor performance is shown with the red dashed line. The difference between both curves reflects the gap between current and desired lidar sensor performance.
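The transmission loss in fog discussed above can be approximated with a simple two-way Beer-Lambert model, in which the received fraction of the emitted power falls off as exp(-2σR) for an extinction coefficient σ and range R. The coefficients in the sketch below are hypothetical round numbers chosen only to illustrate the trend, not measured values from the cited studies.

```python
# Two-way Beer-Lambert attenuation sketch (hypothetical extinction coefficients).
import math

def received_fraction(range_m, extinction_per_m):
    """Fraction of emitted laser power surviving the out-and-back path (geometry ignored)."""
    return math.exp(-2.0 * extinction_per_m * range_m)

for sigma, label in [(0.005, "light haze"), (0.03, "moderate fog"), (0.1, "dense fog")]:
    print(label, received_fraction(50.0, sigma))
# At 50 m, the dense-fog case in this toy model leaves well under 0.01% of the signal,
# consistent with the reported loss of detection range and the rise in false detections.
```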

4.4. Ultrasonic Sensor

Market penetration: It is a very difficult task for drivers to track vehicles on the road, because they cannot always be aware of the presence of all obstacles around the vehicle. The ultrasonic sensor is popularly used to measure proximity to obstacles at very short range and is widely used in areas where distance- and occupancy-related detections are needed. For vehicle applications, the popular uses of ultrasonic sensors include (1) low-speed car parking and (2) high-speed blind spot detection. For example, in the consumer market, Tesla Motors dominates the use of ultrasonic sensors, employing them for functions such as advanced parking assistance with the “Auto-park” and “Summon” features, which allow the vehicle to drive itself with the driver outside, and for blind spot monitoring at high speed. The “Autopilot” and “Autosteer” features monitor the surroundings of the vehicle and stabilize the vehicle heading accordingly [124]. As far as parking sensors are concerned, more than half of the new vehicles in Europe and Asia have rear parking sensors, so it is not surprising that the global market for car parking sensors is projected to grow steadily over the next few years, with a compound annual growth rate of almost 24 percent by 2020 [125]. Currently, Bosch is the principal manufacturer of ultrasonic sensors, and the technical specifications of the ultrasonic sensor are presented in [126]. In addition, an image of automotive ultrasonic sensors is shown in Figure 6.
Working principle: The ultrasonic sensor consists of a piezoelectric transducer driven by an alternating electrical voltage, which causes it to oscillate and emit short bursts of sound waves. The sound wave travels to the target, which reflects it back to the sensor. The reflected echo provides information on the distance, velocity, and angle of the obstacle: the distance to the target can be calculated by the time-of-flight technique, the speed can be estimated via the Doppler shift, and the target direction can be determined from the strength of the reflected sound wave [127,128,129]. The speed of sound is easily influenced by factors such as temperature, humidity, and wind, and the sound pressure decays exponentially as the sound spreads over distance; these effects significantly affect measurement accuracy and complicate the use of ultrasonic sensors, which is why it is important to account for temperature and other transmission factors [130]. In addition to the speed of sound, ultrasonic sensor accuracy also depends heavily on the reflective characteristics of the target surface, such as its curvature, texture, and material [131]. The humidity of the air is important in determining the maximum range of the sensors. Frequencies greater than 50 kHz result in weaker echoes due to the attenuation of airborne sound, while the proportion of interference sounds at the receiver is higher for frequencies lower than 40 kHz. Due to this limitation, ultrasonic sensors on vehicles typically operate within a frequency band of 40 to 50 kHz, which has been shown to be the best trade-off between acoustic performance (sensitivity and range) and resistance to ambient noise [131,132,133,134]. A brief review of the state-of-the-art ultrasonic sensor, covering wave propagation, atmospheric attenuation, sound wave reflection, and target tracking, along with the market penetration of ultrasonic sensors, is presented in [135]. Most of the studies on ultrasonic sensor performance are presented in [136,137,138,139,140,141,142,143,144,145,146,147], which focus on the ability of sound waves to detect, reflect from, and track the target. Besides, sound wave propagation in the presence of changing winds and temperatures, as well as several new designs to enhance resolution, have been introduced. The results of all these experiments show that good resolution is achieved at shorter ranges and that the target wave reflection is accurate and reliable. The authors of [136,137,138,139,140,141,142,143,144,145,146,147] suggest the use of ultrasonic sensors for near-field perception based on their experimental studies.
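The time-of-flight calculation and its temperature dependence described above can be illustrated with the standard dry-air approximation for the speed of sound, c ≈ 331.3 + 0.606·T m/s (T in °C); the echo time and temperatures below are example values.

```python
# Time-of-flight distance estimate with a temperature-corrected speed of sound,
# illustrating why the text stresses checking temperature and transmission factors.

def ultrasonic_distance(echo_time_s, temperature_c=20.0):
    speed_of_sound = 331.3 + 0.606 * temperature_c   # dry-air approximation, m/s
    return speed_of_sound * echo_time_s / 2.0        # the pulse travels out and back

# A 12 ms echo reads as ~2.06 m at 20 C but ~1.92 m at -20 C, roughly a 7% bias
# if the sensor were to assume room temperature in winter conditions.
print(ultrasonic_distance(0.012, 20.0), ultrasonic_distance(0.012, -20.0))
```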
Performance degradation of ultrasonic sensors: As the ultrasonic wave spreads through a homogeneous gas such as air, absorption and dispersion combine to give the overall attenuation level. Precipitation (fog, snow, and rain) and the presence of airborne particles have an insignificant effect on sound waves, although precipitation clearly affects humidity and may also affect wind and temperature gradients. Under normal circumstances, atmospheric absorption may be neglected, except where long distances or very high frequencies are involved [148,149,150]. Although a low-cost, high-performance ultrasonic sensor appears to be an appropriate choice for all-weather perception, it lacks security and can be easily spoofed. For example, in [151], ultrasonic sensor vulnerabilities and their impact on performance were exposed. The security of the ultrasonic sensor has also been analyzed in [152], and that study discusses the reliability of ultrasonic sensors for future use. Based on the gathered information, the current performance (red dashed line) and desired performance (bold blue line) of ultrasonic sensors are presented in Figure 7, and Table 3 highlights the ultrasonic sensor’s advantages and disadvantages.

4.5. Vision-Based Systems

Market penetration: Human driving is primarily based on an analysis of the characteristics of the surrounding vehicles, including obstacles and road signs. A camera provides a way to obtain some of this information for automated operation. Almost all SAE levels of autonomy greater than 1 use cameras. Cameras are the only kind of imaging equipment capable of seeing colors. In Figure 8, a mono-camera from Mobileye is shown. Mobileye is a popular manufacturer currently leading mono-vision with smart technology, and the technical specifications of its intelligent vision-based camera can be found in [153].
Working principle: The camera is a digital lens imaging system that works by collecting and translating the image of an object into electrons on a pixel image sensor. The camera’s capacitors then convert the electrons into voltages, which are in turn converted into a digital electronic signal [155]. Two imaging sensors, the charge-coupled device (CCD) and the complementary metal oxide semiconductor device (CMOS-D), are typically used in real-time applications. Brief comparisons between the CCD and CMOS-D can be found in [155,156]. CCD cameras deliver excellent low-noise performance but are expensive; as an alternative, the CMOS-D has been developed to reduce production costs and power consumption. Because of this advantage, the CMOS-D is widely preferred for automotive applications in the related industry. Cameras, in conjunction with computer vision and deep learning techniques, offer environmental information, such as detection of the target, its physical description (like the position of moving targets, size, and shape), and its semantic description (like recognizing and classifying trees, vehicles, traffic lights, and pedestrians). Camera information is easy to understand, which makes it more popular than other sensors. Various camera configurations exist, and among them monocular and stereo vision camera solutions are a common choice for researchers. In monocular systems, only one camera is used to detect, track, and measure longitudinal distances based on scene geometry. There is a downside to distance measurement with a monocular camera, since the distance is inferred from the vertical pixel position in the image coordinates, which typically results in errors due to the lack of direct depth measurements for the captured images. Compared to monocular cameras, stereo systems, which use two cameras, have the additional ability to measure the distance to objects. Using stereo-view-based methods, two cameras can estimate the 3D coordinates of an object; a sketch of both geometric models is given after the next paragraph. A brief analysis of techniques for real-time obstacle detection and classification linked with various algorithms using the stereo camera is presented in [157]. While stereo vision cameras are effective in target detection and classification, they are more expensive than mono-cameras, and they also have problems with calibration and computational complexity. One of the reasons why vision-based approaches are favored in urban traffic is the identification of traffic lights. Traffic light detection methods for cameras are based on image processing, machine learning, and map-based techniques. Within an image processing procedure, single or multiple thresholding, filtering, and extraction operations are performed on an image to obtain a particular result. A slight miscalculation can influence image efficiency, which can be addressed by machine learning methods and processing algorithms. Nonetheless, to achieve optimal output using machine learning methods, it is important to collect massive training datasets and train the model for a significant amount of time. Map-based methods are used to overcome this limitation [158]. In [159], an image processing method used by a vision system for traffic light detection and recognition is presented, which involves image transformations such as RGB to Hue Saturation Value (HSV) conversion and filtering.
Furthermore, the article [160] proposes a fast convolutional neural network (CNN) system built on the YOLOv2 network. This algorithm can detect the location of a traffic sign and classify it according to its shape. Deep learning approaches are also presented in [161], which outperform image-processing methods for the robust detection and recognition of traffic lights.
Vision-based system performance degradation: While advanced methods have improved recognition techniques, even small variations in weather still influence camera measurements, and the camera is very sensitive to adverse climatic conditions. In an aerosol environment, a camera suffers from decreased visibility and contrast and becomes unreliable for object recognition; according to [162], a camera is therefore not recommended for environment detection and vehicle control tasks under foggy conditions. The same reference [162] also provides a full description of the interaction of rain and fog with a camera. Camera sensors have an advantage in object detection and classification and are important for automated safety systems. However, in [163], an indoor rain simulator was used to systematically investigate the effects of rain on camera data, and the results show that camera performance was mainly affected by decreased gradient magnitudes, resulting in shifts in the location and size of the bounding boxes during detection, which lowered classification scores and increased uncertainty. Similarly, based on indoor experiments [164], the authors investigated the effects of rain and showed that raindrops increase the average image intensity and decrease its contrast. The study presented in [165] develops an approach to quantifying camera vulnerabilities based on empirical measurements; the results show that the principal loss of camera output occurs under poor lighting and precipitation, which was calculated to increase performance errors by 50 percent. The authors in [166] noted that rainfall is visible only in the near field and takes on the characteristics of fog when far away; they also point to the need for post-processing methods to mitigate the impact of rain and fog. For instance, a de-watering approach to enhance vision performance in rainfall is presented in [167]. Studies showing the influence of snow and its interaction with the camera are hard to find, due to the lack of snow-based simulators. Yet snow affects the mechanical operation of a camera positioned outside the vehicle: for example, when there is moisture around the camera below the freezing point, thin layers of ice cover the lens and prevent the camera from seeing any movement other than crystalline snow patterns. A similar effect was described in [168], showing the difficulty of using camera data for lane detection due to external factors such as frost or moisture droplets on the glass in front of the camera. If the camera is placed inside the windshield, falling snowflakes of varied shapes are difficult to trace and remove during image processing, leading to problems in recognizing the target. Camera interaction with airborne particles was examined in [93] under controlled environmental conditions; the experiment tested the camera's perception performance in a controlled environment, and the results highlight that the presence of particles (smoke and dust) degrades the camera's image quality and contrast, leading to poor object classification. The advantages and limitations of the vision-based system are highlighted in Table 3. In Figure 9, the performance of the vision-based system is plotted for visualization, where the dashed red line represents the current state of the art.

4.6. Far-Infrared Camera

Market penetration: Despite major improvements in vehicle lighting over the years, night driving and severe weather conditions remain difficult. According to the NHTSA statistics (discussed in the Introduction), night driving accidents account for one-third of all road accidents and for half of the fatal accidents, owing to poor visibility [2]. Thermal imaging sensors provide additional advantages over the visible cameras currently used for night driving. The far-infrared (FIR) camera is passive in design and consumes comparatively little energy. Currently, FLIR Systems is a noted manufacturer in the thermal camera field and has published an online dataset for detection and tracking benchmarks. In Figure 10, a thermal camera from FLIR Systems is shown, and its technical specifications are given in [169].
Working principle: All objects at temperatures above absolute zero emit infrared radiation, and this emission increases with temperature. A far-infrared camera uses long-wavelength infrared light to detect variations in the natural heat (thermal radiation) emitted by objects, and this information is subsequently translated into an image. The infrared spectrum spans 0.8 μm to 1000 μm [171] and can be divided into the near-infrared (NIR, 0.8–2.5 μm), mid-infrared (MIR, 2.5–25 μm), and far-infrared (25–1000 μm, also known as thermal infrared) bands. A FIR camera detector is a focal plane array (FPA) of micrometer-sized pixels, with resolutions ranging from 160 × 120 to 1024 × 1024 pixels, made of materials sensitive to different infrared wavelengths. The FPA detector technology in infrared cameras is divided into uncooled thermal microbolometers and quantum detectors. The uncooled microbolometer is a common type of thermal detector made of metal or semiconductor materials and is used most frequently in automotive applications. A brief overview of the state of the art of FIR cameras, covering construction, operation, attenuation, and limitations, can be found in [171] for interested readers. In certain cases, because of the fundamental differences between visible and infrared imaging, the techniques used to detect pedestrians in the visible spectrum cannot be transferred to infrared images, and other approaches must be used. In [172,173], fusion between a thermal camera and a regular visible camera for the detection task is presented, along with a comparison of detection techniques. In [174,175], thorough research on pedestrian detection using an FIR camera is presented, outlining various techniques and algorithms, such as isolating regions of interest (ROIs) to extract targets from the image, followed by classification and tracking of the extracted targets. Similarly, in [176,177,178,179,180,181,182,183,184,185,186,187,188,189], thermal camera studies for identifying and tracking pedestrians, vehicles, and animals are presented.
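As a minimal illustration of the ROI-extraction step mentioned above (not the specific algorithms of [174,175]), the following Python/OpenCV sketch thresholds an 8-bit FIR image, in which warm objects appear bright, and returns candidate bounding boxes; the file name, threshold rule, and minimum blob area are assumed values, and a real pipeline would pass these ROIs to a trained classifier and tracker.

```python
# Illustrative hot-spot ROI extraction from an 8-bit FIR image (warm = bright).
# Threshold and minimum area are assumed values; classification and tracking of
# the extracted ROIs (as in [174,175]) are outside the scope of this sketch.
import cv2

thermal = cv2.imread("fir_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Keep only pixels that are markedly warmer than the scene average.
thresh_val = min(255, int(thermal.mean() + 2 * thermal.std()))
_, hot_mask = cv2.threshold(thermal, thresh_val, 255, cv2.THRESH_BINARY)

# Group hot pixels into candidate regions of interest.
contours, _ = cv2.findContours(hot_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rois = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]
for (x, y, w, h) in rois:
    print(f"candidate warm object ROI: x={x}, y={y}, w={w}, h={h}")
```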
Performance reliability: In comparison to the visible spectrum, the FIR spectrum lacks well-established public databases and reliable benchmarking protocols for pedestrian and vehicle detection, classification, and tracking, making it difficult to test algorithms on a variety of typical automotive road scenes under various weather and lighting conditions. In general, FIR cameras operating in different environments are subject to two atmospheric effects, absorption and dispersion, which play a critical role in preventing object radiation from reaching the sensor. In automotive applications, FIR cameras are used to scan short distances of approximately 200–250 m, and this short-range sensing is much less influenced by atmospheric effects than, for example, the aviation domain. In [190], a brief comparison of detection, classification, and recognition tasks in normal conditions and in a closed fog chamber with various spectral bands, such as visible (RGB), near-infrared (NIR), short-wave infrared (SWIR), and long-wave infrared (FIR), is presented, and the results show that the FIR band has superior performance compared to the other spectral bands in aerosol environments. Commercial manufacturers of thermal cameras claim that FIR thermal cameras are not affected by precipitation and airborne particles (fog, rain, snow, and dust), due to the fact that air acts as a high-pass filter above 7.5 μm [41], and that their ability to penetrate different atmospheric conditions allows vehicles and obstacles to be detected robustly. The limitations and advantages of FIR are summarized in Table 3. Based on the information gathered, the performance of FIR systems is plotted on a spider chart in Figure 11 to visualize the performance gap.
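To give a feel for why range matters in these atmospheric effects, the short sketch below evaluates a simple Beer-Lambert attenuation model, T = exp(−γR), at an automotive-scale range and at a much longer range; the extinction coefficients are purely illustrative assumptions, not measured values for any particular FIR camera or atmosphere.

```python
# Simple Beer-Lambert transmittance illustration, T = exp(-gamma * R).
# The extinction coefficients below are assumed illustrative values only.
from math import exp

ranges_m = [200, 2000]                                      # automotive vs. longer range
gammas_per_m = {"clear air": 1e-4, "moderate fog": 5e-3}    # assumed coefficients

for label, gamma in gammas_per_m.items():
    for r in ranges_m:
        print(f"{label:12s} R={r:5d} m  transmittance={exp(-gamma * r):.3f}")
```

Because the attenuation grows exponentially with range, a medium that only moderately affects a 200–250 m automotive scan can almost completely extinguish the signal over kilometre-scale paths, which is consistent with the observation above.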

4.7. Emerging Technology

Due to the trade-off between quality and cost, we have limited our investigation to radar, lidar, cameras, and ultrasonic sensors. More robust technologies are needed that can overcome the limitations of existing ones and perform multiple tasks with minimal problems in all weather conditions. In this sense, the European DENSE project [115] intends to develop a new sensor subsystem; the cited article, however, only discusses lidar alternatives to the current state of the art, such as the use of a 1550 nm wavelength instead of 905 nm, and evaluates their performance in different climates. Very promising results with 1550 nm have been presented, but further studies that can improve sensor robustness are still needed. In addition to lidar, the DENSE project has studied sub-camera systems, such as gated SWIR cameras, proposed as a real-time replacement for scanning lidar systems; these handle back-scatter, provide dense depth at long ranges, and perform reliably in adverse climatic conditions. Similarly, in the field of localization, visual odometry [191] is a technology that is gaining popularity as a complement to GPS in autonomous vehicles. For radar, investigations have been carried out on ground-penetrating signals, which exhibit robustness in any road and weather condition [192].

5. Perspective on Sensor Fusion

Sensors are the key to perceiving the outside world in an automated driving system, and their cooperative performance directly determines the safety of automated driving vehicles. Some sensors may be redundant under certain environmental conditions, and some may be complementary, cooperating to ensure consistent and accurate obstacle detection. Sensor fusion is the method of using multi-sensor information to compute and reconstruct the environment and to generate dynamic system responses, resulting in a consistent and accurate representation of the vehicle's surroundings and position for safer navigation. The study presented in [193] discusses the traditional limitations of sensor fusion and focuses on different strategies by demonstrating the effectiveness of combining various sensors within one model; the advantages that come along with sensor fusion are also highlighted therein. A sensor fusion architecture includes three different levels: (1) the sensor level, where two or more detectors are merged into one hardware unit; (2) the task level, where features are extracted from each sensor and fused; and (3) the decision level, where the result is computed by combining individual decisions. These three techniques are successful on their own. The output of a sensor fusion process is mostly a high-level, abstract representation with rich knowledge of the environment for successful semantic analysis. In varying weather conditions, sensor fusion is vital: for instance, during heavy snowfall, detecting obstacles with lidar and cameras is highly uncertain, and radar, FIR thermal cameras, and ultrasonic sensors can be combined with them to enhance detection and multi-target tracking. However, to obtain the most robustness from data fusion, two vital strategies for enhancing vehicle navigation safety in all weather conditions must be considered, which are as follows:
  • At the first stage, the fusion system must identify the features that distinguish the current navigation environment from normal navigation. Therefore, there should be a context-aware mechanism that adapts the level of confidence of each piece of sensor information. When snow or rain is falling, the context-aware mechanism can simply use the camera and the weather data to confirm such an occurrence. Furthermore, a training process can be used to classify different weather-related road contexts.
  • At the second stage, fusion processing can be carried out to provide the most recent sensing information and the corresponding level of confidence; a minimal sketch of such a context-aware weighting scheme is given after this list. Although the fusion concept can enhance the capability of automated navigation in all weather conditions, the overall processing load can be considerable. Therefore, the fusion hardware and software architectures should be analyzed in depth before further implementation steps.
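The following Python sketch illustrates the two-stage idea in a minimal, decision-level form: a context-aware table (with assumed weights) adapts the confidence assigned to each sensor according to the detected weather context, and the individual detection scores are then fused with those weights. The sensor names, weights, and scores are illustrative assumptions, not values taken from any of the cited studies.

```python
# Minimal sketch of context-aware, decision-level fusion: sensor confidences
# are re-weighted according to the detected weather context before the
# individual detection decisions are combined. All weights are assumed values.
from typing import Dict

# Stage 1: context-aware confidence of each sensor per weather context (assumed).
CONTEXT_WEIGHTS: Dict[str, Dict[str, float]] = {
    "clear":      {"camera": 0.9, "lidar": 0.9, "radar": 0.8, "fir": 0.7, "ultrasonic": 0.6},
    "heavy_snow": {"camera": 0.3, "lidar": 0.3, "radar": 0.9, "fir": 0.8, "ultrasonic": 0.7},
    "fog":        {"camera": 0.3, "lidar": 0.4, "radar": 0.9, "fir": 0.8, "ultrasonic": 0.7},
}

def fuse_detection(weather: str, sensor_scores: Dict[str, float]) -> float:
    """Stage 2: weighted average of per-sensor detection scores (0..1)."""
    weights = CONTEXT_WEIGHTS[weather]
    num = sum(weights[s] * score for s, score in sensor_scores.items())
    den = sum(weights[s] for s in sensor_scores)
    return num / den if den else 0.0

# Example: the same raw scores lead to a different fused confidence per context.
scores = {"camera": 0.2, "lidar": 0.3, "radar": 0.9, "fir": 0.85}
for ctx in ("clear", "heavy_snow"):
    print(ctx, round(fuse_detection(ctx, scores), 3))
```

In this toy example, the same raw detection scores yield a higher fused confidence in the heavy-snow context, because radar and the FIR camera are trusted more when the camera and lidar are known to be degraded.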
Table 4 provides guidance on appropriate combinations of sensors for the ADAS driving safety applications discussed in Section 3, which can greatly enhance robustness in all weather conditions. Table 4 is organized with the ADAS applications on the left and the sensors across the top. In the following paragraphs, we review sensor fusion strategies for each driving safety application and propose sensor combinations, built on existing fusion techniques, to resolve current drawbacks.
  • Adaptive cruise control (ACC)
Selecting appropriate sensors for adaptive cruise control (ACC) applications requires sensors that can detect and track obstacles at long range in all weather conditions. The look-ahead distance of the host vehicle is the key factor in stabilizing the vehicle in its lane when implementing ACC. A few sensor fusion methods for collecting long-range information in real time are already available. For instance, in [194], the fusion of lidar and radar is used to obtain precise spatial data such as target velocity and distance to the host vehicle; this work produces a real-time algorithm that lets an autonomous car follow other cars travelling at different speeds while maintaining a safe distance. In [195], a stereo camera and a far-infrared camera are analyzed, and the results show that far-infrared images can detect vehicles at long range but lack target classification when compared to stereo vision; fusing the data from the stereo and far-infrared cameras improved performance, reducing false positives in long-range vehicle detection and enhancing the overall system. In [196], an optically passive far-infrared camera and an optically active lidar were used in a multi-sensor detection system for railways; a test system covering up to 400 m under normal conditions was successfully installed, ensuring long-distance safety at speeds of more than 120 km/h. Based on the above analysis, we suggest a long-range radar sensor and a far-infrared thermal camera for ACC systems, as both sensors have long-range capability and are minimally influenced by environmental changes. Lidar is also suggested, as it meets the range criteria of the ACC application, and users could combine lidar with other sensors depending on its sensitivity to varying climates and on cost constraints. The remaining sensors, such as cameras and ultrasonic sensors, lack long-range detection capability and are also influenced by weather, which makes them less suitable for ACC applications.
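As a minimal sketch of how radar and lidar range information can be fused for ACC-style gap keeping (not the algorithm of [194]), the following Python code runs a one-dimensional constant-velocity Kalman filter over the relative distance and relative speed of a lead vehicle, updating it with assumed radar (range and range rate) and lidar (range) measurements; all noise parameters and measurement values are illustrative assumptions.

```python
# Minimal 1-D constant-velocity Kalman filter tracking the gap and relative
# speed to a lead vehicle, fusing radar (range + range rate) and lidar (range)
# measurements. Illustrative sketch only; all numeric values are assumed.
import numpy as np

dt = 0.05                                   # update period [s] (assumed 20 Hz)
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition for [gap, gap_rate]
Q = np.diag([0.05, 0.5])                    # process noise (assumed)

x = np.array([30.0, 0.0])                   # initial gap [m], relative speed [m/s]
P = np.diag([5.0, 2.0])

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update."""
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(2) - K @ H) @ P

# Simulated measurement stream (sensor, measurement vector); assumed values.
measurements = [
    ("radar", np.array([29.6, -1.1])),      # range [m], range rate [m/s]
    ("lidar", np.array([29.4])),            # range [m]
    ("radar", np.array([29.3, -1.0])),
]

H_radar, R_radar = np.eye(2), np.diag([0.5**2, 0.2**2])
H_lidar, R_lidar = np.array([[1.0, 0.0]]), np.array([[0.1**2]])

for sensor, z in measurements:
    x, P = F @ x, F @ P @ F.T + Q           # predict
    if sensor == "radar":
        x, P = kf_update(x, P, z, H_radar, R_radar)
    else:
        x, P = kf_update(x, P, z, H_lidar, R_lidar)
    print(f"{sensor}: gap={x[0]:.2f} m, relative speed={x[1]:.2f} m/s")
```

The fused gap and relative-speed estimate can then feed the ACC longitudinal controller, with the measurement-noise matrices adapted per sensor and per weather context as discussed above.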
  • Forward collision avoidance (FCA), traffic jam assist (TJA), and blind spot monitoring (BSM) systems
For FCA, TJA, and BSM driving safety applications, long-range target detection and classification and motion tracking of targets are important requirements. The combination of lidar, vision, and radar offers reasonable coverage, better classification, and long- to short-range motion tracking of targets, which enables accurate estimation of target distance and speed. Some of the current integration methods for these three common sensors are as follows. In [197], a multi-modal fusion approach between radar and cameras is proposed; the method is designed as a two-stage object detection network that uses radar detections and camera image features to estimate distance and to classify objects, and the experiments show that the algorithm can estimate the distance of all detected objects with a mean absolute error of 2.65 over all captured images. In [198], real-time experimental work based on cameras, lidar, and radar is presented to achieve a high degree of object identification, classification, and tracking in four weather conditions (cloudy and wet, bright day, night, and rain and snow). The article demonstrates the efficiency of the three sensors in four different combinations (camera + radar, camera + lidar, lidar + radar, and radar + camera + lidar) based on a probabilistic algorithm. The results show that, for the full sensor set (radar + camera + lidar), objects are tracked with accuracies of 98.5% and 99.8% in cloudy and sunny weather, and the detected objects are classified with 87.5% accuracy; at night and in rainy and snowy conditions, objects are reliably tracked with 98.9% and 99.5% accuracy, but the classification score drops to 74.4%. In our proposition, if an FIR thermal camera were used with the probabilistic algorithm proposed by the authors of [198], with active toggling (switching) of the camera for varying climates, the classification score in that experiment could have been enhanced.
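To make the radar-camera pairing concrete (without reproducing the two-stage network of [197]), the following Python sketch projects assumed radar detections into the image plane of a pinhole camera with assumed intrinsics and associates each of them with the nearest camera bounding box, so that the camera supplies the class label while the radar supplies the range; every numeric value here is an illustrative assumption.

```python
# Illustrative association of radar detections with camera bounding boxes:
# radar supplies range, the camera supplies the classified box. The pinhole
# intrinsics, detections, and boxes below are assumed values; this is not the
# two-stage network of [197].
import numpy as np

K = np.array([[800.0, 0.0, 640.0],          # assumed camera intrinsics (pinhole)
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])

def project(point_cam):
    """Project a 3-D point in camera coordinates to pixel coordinates."""
    u, v, w = K @ point_cam
    return np.array([u / w, v / w])

# Assumed radar detections already transformed into camera coordinates
# (x right, y down, z forward, in metres) with their measured ranges.
radar_dets = [{"xyz": np.array([1.5, 0.2, 32.0]), "range_m": 32.0},
              {"xyz": np.array([-4.0, 0.3, 18.0]), "range_m": 18.1}]

# Assumed camera detections: (class label, box centre in pixels).
camera_boxes = [("car", np.array([676.0, 365.0])),
                ("pedestrian", np.array([462.0, 373.0]))]

for det in radar_dets:
    pix = project(det["xyz"])
    # Associate with the nearest camera box centre (simple nearest-neighbour).
    label, centre = min(camera_boxes, key=lambda b: np.linalg.norm(b[1] - pix))
    print(f"{label}: range {det['range_m']:.1f} m, projected pixel {pix.round(1)}")
```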
  • Road and traffic sign recognition (TSR)
For TSR safety applications, the classification and resolution of targets are important factors, and vision-based solutions have always been superior in classification and recognition. In [199], a robust technique is presented to detect traffic lights in both day and night conditions and to estimate the distance between the approaching vehicle and the traffic light using a Bayesian filter; the results show that traffic lights could be detected with 99.4% accuracy in the 10–115 m range. In [200], real-time lidar laser reflectivity and mono-camera color features were combined to detect and classify traffic signals; the fusion achieved an average detection accuracy of 95.87% and a classification accuracy of 95.07%. However, in adverse weather, an FIR thermal camera can be used as an alternative, since it can also provide classification and recognition information, as seen in [195].
  • Lane departure warning and lane keeping warning (LDW and LKW) safety systems
In LDW and LKW safety systems, a camera is mainly used to distinguish road and lane markings and, thanks to developments in algorithms, lidar and radar are able to search the environment for road-related cues and update the tracking information. For instance, in [201], radar, along with a mono-camera, was used to detect road barriers and calculate the lateral distance from the vehicle to the barrier, giving an estimate of the vehicle's position on the road. Full accuracy could not be achieved; however, when the results were compared with lidar tracking performance, it was concluded that radar performed well even in the absence of lidar. Further studies on improving radar detection and tracking of surrounding cues can help achieve robustness when driving in various weather conditions.
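As a simplified illustration of the lateral-distance idea used in [201] (not its actual implementation), the sketch below converts assumed radar returns from a road barrier, given as range and azimuth pairs, into lateral offsets and summarizes them with a median; all values are illustrative assumptions.

```python
# Illustrative estimate of the lateral distance to a road barrier from radar
# detections given as (range, azimuth) pairs; values are assumed and this is
# only a simplified version of the idea used in [201].
import math
from statistics import median

# Assumed radar returns from the barrier: range [m], azimuth [deg, + = left].
barrier_returns = [(12.0, 14.0), (18.5, 9.2), (26.0, 6.5), (34.0, 5.0)]

# Lateral offset of each return with respect to the vehicle's longitudinal axis.
lateral_offsets = [r * math.sin(math.radians(az)) for r, az in barrier_returns]

# A robust summary (median) gives the vehicle-to-barrier lateral distance,
# which can then be tracked over time to monitor the position in the lane.
print(f"estimated lateral distance to barrier: {median(lateral_offsets):.2f} m")
```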
  • Parking assistance (PA) systems
In PA safety applications, obstacle detection and tracking at short distances are vital. In [202], data fusion between lidar and cameras is presented, which uses a SLAM algorithm to find parking spaces; the experimental results show an average recall and precision of 98% and 97%, respectively, although the experiment was limited to stationary obstacles and rectangular parking slots. Similarly, in [203], an approach was presented that fuses cameras, ultrasonic sensors, and odometry to find a vacancy in a parking lot: the ultrasonic sensors detect obstacles, and the odometry and camera images are then used to track obstacles and locate empty spaces. The proposed method achieved 96.3% recall and 93.4% precision for different parking types, with a classification score of 97.5%. However, this approach is susceptible to weather change, and the authors believe that the influence of varying weather may cause the results to differ. As an alternative, adding short-range radar could boost system robustness for all-weather parking.
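The sketch below illustrates, under assumed readings and thresholds, the classic way a side-facing ultrasonic sensor and odometry can be combined to find a parking gap: the slot is "free" wherever the lateral range exceeds a threshold, and its length is the distance driven while that condition holds. This is only an illustrative sketch, not the fusion method of [203].

```python
# Illustrative parking-gap search combining a side-facing ultrasonic sensor
# with odometry: a gap is "free" where the lateral range exceeds a threshold,
# and its length is the distance driven while that condition holds. All
# readings and thresholds are assumed values (not the method of [203]).

# (odometer position along the street [m], lateral ultrasonic range [m])
readings = [(0.0, 0.8), (0.5, 0.9), (1.0, 2.6), (1.5, 2.7), (2.0, 2.8),
            (2.5, 2.7), (3.0, 2.6), (3.5, 2.7), (4.0, 2.8), (4.5, 2.7),
            (5.0, 2.8), (5.5, 2.6), (6.0, 0.9), (6.5, 0.8)]

FREE_RANGE_M = 2.0      # lateral range above which the slot is considered empty
MIN_SLOT_LEN_M = 5.0    # assumed minimum length for a parallel parking slot

gap_start = None
for pos, rng in readings:
    if rng > FREE_RANGE_M and gap_start is None:
        gap_start = pos                     # gap begins
    elif rng <= FREE_RANGE_M and gap_start is not None:
        gap_len = pos - gap_start           # gap ends: compute its length
        if gap_len >= MIN_SLOT_LEN_M:
            print(f"parking slot found from {gap_start:.1f} m to {pos:.1f} m "
                  f"({gap_len:.1f} m long)")
        gap_start = None
```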

6. Conclusions

This article has provided a comprehensive review of the sensor technologies for both semi-autonomous and autonomous vehicles, considering the challenging issues related to their robustness in all weather conditions. A general view of the popular features of advanced driver assistance systems (ADASs) has been given, and the role of the various sensors in these applications has been discussed, along with the limitations of these features in adverse climates. The study has described the advantages and disadvantages of the individual sensors, as well as their performance. In addition, the sensitivity of different sensor characteristics, such as range, resolution, and speed detection, in normal and difficult climatic conditions has been discussed, and based on these characteristics, the differences in performance between current and ideal sensors for all weather conditions were mapped onto spider charts. Our study has shown that the performance and robustness of the vehicle perception system can be enhanced by combining different sensors (such as cameras, radar, lidar, ultrasonic sensors, and passive far-infrared cameras). For example, radar and passive infrared cameras provide a very good and effective range for detecting obstacles in all weather conditions. Lidar, however, is not a widely favored sensor for obstacle detection and tracking, due to disadvantages such as cost and poor performance in rain, fog, and snow. On the other hand, cameras continue to play a key role in most automated navigation applications, and the introduction of radar and passive infrared cameras improves the ability of the vehicle vision system to detect and monitor objects in the navigation scene. Nevertheless, the reliability of lateral ADAS features such as lane departure warning and lane keeping assistance, which rely on visible road markings, cannot easily be improved under winter conditions, and more studies are needed to address this weakness. Nonetheless, when lateral driving assistance or parking assistance is carried out during heavy rainfall and snowfall, ultrasonic sensors combined with short-range radar can enable the vision system to succeed. By identifying gaps in the research on semi-autonomous intelligent vehicles and shortcomings in current sensor technologies, this paper has thoroughly investigated the capabilities of current sensors for all-weather driving. However, sensors and algorithms still need to be made effective and reliable for every situation, and some points remain to be addressed: the classification of bicycles, light poles, and pedestrians, which sometimes causes sensors to misperceive their surroundings; the response time for slow-moving pedestrians and small animals or objects; and potholes and pitfalls, which pose serious problems in adverse climates.

Author Contributions

Conceptualization, A.A., A.S.M.; Data curation, A.S.M.; Formal analysis, A.S.M., A.A.; Funding acquisition, S.K., K.A.; Investigation, A.S.M.; Methodology, A.S.M., A.A., S.K.; Project administration, S.K., K.A.; Resources, S.K., A.A.; Supervision, S.K., A.A., F.K.A.; Writing—original draft, A.S.M., A.A., S.K., K.A.; Writing—review and editing, A.S.M., F.K.A., N.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Sciences and Engineering Research Council of Canada, grant number CRSNG-RGPIN-2019-07206 and the Canada Research Chair Program, grant number CRC-950-232172.

Conflicts of Interest

The authors declare no conflict of interest and the funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. U.S. Department of Transportation. Weather Impact on Safety. Available online: https://ops.fhwa.dot.gov/weather/q1_roadimpact.htm (accessed on 1 October 2019).
  2. NHTSA. Traffic Safety Facts. Available online: https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812806 (accessed on 1 November 2019).
  3. El Faouzi, N.E.; Heilmann, B.; Aron, M.; Do, M.T.; Hautiere, N.; Monteil, J. Real Time Monitoring Surveillance and Control of Road Networks under Adverse Weather Condition; HAL: Houston, TX, USA, 2010. [Google Scholar]
  4. SAE J3016: Levels of Driving Automation. S. International. 2019. Available online: https://www.sae.org/news/2019/01/sae-updates-j3016-automated-driving-graphic (accessed on 1 October 2019).
  5. Ando, Y.M.R.; Nishihori, Y.; Yang, J. Effects of Advanced Driver Assistance System for Elderly’s Safe Transportation. In Proceedings of the SMART ACCESSIBILITY 2018: The Third International Conference on Universal Accessibility in the Internet of Things and Smart Environments, Rome, Italy, 25–29 March 2018. [Google Scholar]
  6. Kang, Y.; Yin, H.; Berger, C. Test Your Self-Driving Algorithm: An Overview of Publicly Available Driving Datasets and Virtual Testing Environments. IEEE Trans. Intell. Veh. 2018. [Google Scholar] [CrossRef]
  7. Yurtsever, E.; Lambert, J.; Carballo, A.; Takeda, K. A Survey of Autonomous Driving: Common Practices and Emerging Technologies. IEEE Access 2020, 8, 58443–58469. [Google Scholar] [CrossRef]
  8. Marti, E.; de Miguel, M.A.; Garcia, F.; Perez, J. A Review of Sensor Technologies for Perception in Automated Driving. IEEE Intell. Transp. Syst. Mag. 2019, 11, 94–108. [Google Scholar] [CrossRef] [Green Version]
  9. Rosique, F.; Navarro, P.J.; Fernandez, C.; Padilla, A. A Systematic Review of Perception System and Simulators for Autonomous Vehicles Research. Sensors 2019, 19, 648. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Bengler, K.; Dietmayer, K.; Farber, B.; Maurer, M.; Stiller, C.; Winner, H. Three Decades of Driver Assistance Systems: Review and Future Perspectives. IEEE Intell. Transp. Syst. Mag. 2014, 6, 6–22. [Google Scholar] [CrossRef]
  11. Diermeier, D. Mercedes and the Moose Test (B). Kellogg Sch. Manag. Cases 2017. [Google Scholar] [CrossRef]
  12. Billington, J. The Prometheus Project: The Story Behind One of AV’s Greatest Developments. Available online: https://www.autonomousvehicleinternational.com/features/the-prometheus-project.html (accessed on 1 October 2019).
  13. Carnegie Mellon University. No Hands Across America Journal. Available online: https://www.cs.cmu.edu/~tjochem/nhaa/Journal.html (accessed on 1 October 2019).
  14. Bertozzi, M.; Broggi, A.; Conte, G.; Fascioli, R. The Experience of the ARGO Autonomous Vehicle. In Proceedings of the Enhanced and Synthetic Vision, Orlando, FL, USA, 13–17 April 1998. [Google Scholar] [CrossRef]
  15. Automobile, W. Parking Sensor. Available online: https://en.wikipedia.org/wiki/Parking_sensor (accessed on 1 October 2019).
  16. Meinel, H.H. Evolving automotive radar—From the very beginnings into the future. In Proceedings of the 8th European Conference on Antennas and Propagation (EuCAP 2014), The Hague, The Netherlands, 6–11 April 2014; pp. 3107–3114. [Google Scholar] [CrossRef]
  17. Bertozzi, M.; Broggi, A.; Fascioli, A. Vision-based Intelligent Vehicles: State of the Art and Perspectives. Robot. Auton. Syst. 2000, 32, 1–16. [Google Scholar] [CrossRef] [Green Version]
  18. Broggi, A.; Bertozzi, M.; Fascioli, A. Architectural Issues on Vision-Based Automatic Vehicle Guidance: The Experience of the ARGO Project. Real-Time Imaging 2000, 6, 313–324. [Google Scholar] [CrossRef] [Green Version]
  19. Thorpe, C.; Jochem, T.; Pomerleau, D. The 1997 automated highway free agent demonstration. In Proceedings of the Conference on Intelligent Transportation Systems, Boston, MA, USA, 12 October 1997; pp. 496–501. [Google Scholar] [CrossRef]
  20. Urmson, C.; Duggins, D.; Jochem, T.; Pomerleau, D.; Thorpe, C. From Automated Highways to Urban Challenges. In Proceedings of the 2008 IEEE International Conference on Vehicular Electronics and Safety, Columbus, OH, USA, 22–24 September 2008; pp. 6–10. [Google Scholar] [CrossRef]
  21. Bebel, J.C.; Howard, N.; Patel, T. An autonomous system used in the DARPA Grand Challenge. In Proceedings of the 7th International IEEE Conference on Intelligent Transportation Systems (IEEE Cat. No.04TH8749), Washington, WA, USA, 3–6 October 2004; pp. 487–490. [Google Scholar] [CrossRef]
  22. Velodyne. HDL—64E Lidar. Available online: https://velodynelidar.com/products/hdl-64e/ (accessed on 1 October 2020).
  23. Dickmann, J.; Appenrodt, N.; Klappstein, J.; Bloecher, H.L.; Muntzinger, M.; Sailer, A.; Hahn, M.; Brenk, C. Making Bertha See Even More: Radar Contribution. IEEE Access 2015, 3, 1233–1247. [Google Scholar] [CrossRef]
  24. Hoeger, R.; Amditis, A.; Kunert, M.; Hoess, A.; Flemisch, F.; Krueger, H.P.; Bartels, A.; Beutner, A.; Pagle, K. Highly Automated Vehicles For Intelligent Transport: Haveit Approach. 2008. Available online: https://www.researchgate.net/publication/225000799_HIGHLY_AUTOMATED_VEHICLES_FOR_INTELLIGENT_TRANSPORT_HAVEit_APPROACH (accessed on 1 October 2019).
  25. Vanholme, B.; Gruyer, D.; Lusetti, B.; Glaser, S.; Mammar, S. Highly Automated Driving on Highways Based on Legal Safety. IEEE Trans. Intell. Transp. Syst. 2013, 14, 333–347. [Google Scholar] [CrossRef]
  26. Thomaidis, G.; Kotsiourou, C.; Grubb, G.; Lytrivis, P.; Karaseitanidis, G.; Amditis, A. Multi-sensor tracking and lane estimation in highly automated vehicles. IET Intell. Transp. Syst. 2013, 7, 160–169. [Google Scholar] [CrossRef]
  27. Dávila, A.; Nombela, M. Sartre–Safe Road Trains for the Environment Reducing Fuel Consumption through lower Aerodynamic Drag Coefficient. SAE Tech. Pap. 2011. [Google Scholar] [CrossRef]
  28. Bertozzi, M.; Bombini, L.; Broggi, A.; Buzzoni, M.; Cardarelli, E.; Cattani, S.; Cerri, P.; Debattisti, S.; Fedriga, R.; Felisa, M.; et al. The VisLab Intercontinental Autonomous Challenge: 13,000 km, 3 months, no driver. In Proceedings of the 17th World Congress on ITS, Busan, Korea, 1–5 September 2010. [Google Scholar]
  29. Englund, C.; Chen, L.; Ploeg, J.; Semsar-Kazerooni, E.; Voronov, A.; Bengtsson, H.H.; Didoff, J. The Grand Cooperative Driving Challenge 2016: Boosting the Introduction of Cooperative Automated Vehicles. 2016. Available online: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7553038&isnumber=7553013 (accessed on 1 October 2019).
  30. R. Tom Alkim. European Truck Platooning Challenge. Available online: http://wiki.fot-net.eu/index.php/European_Truck_Platooning_Challenge (accessed on 1 October 2019).
  31. Tsugawa, S.; Jeschke, S.; Shladover, S.E. A Review of Truck Platooning Projects for Energy Savings. IEEE Trans. Intell. Veh. 2016, 1, 68–77. [Google Scholar] [CrossRef]
  32. Poczte, S.L.; Jankovic, L.M. The Google Car: Driving Toward A Better Future? J. Bus. Case Stud. First Quart. 2014, 10, 1–8. [Google Scholar]
  33. Nissan. Nissan Q50 2014. Available online: http://www.nissantechnicianinfo.mobi/htmlversions/2013_Q50_Special/Safety.html (accessed on 1 October 2019).
  34. Tesla. Available online: https://en.wikipedia.org/wiki/Tesla_Autopilot (accessed on 1 October 2019).
  35. Ford. Ford and Mobileye to Offer Better Camera-Based Collision Avoidance Tech. Available online: https://www.kbford.com/blog/2020/july/28/ford-and-mobileye-to-offer-better-camera-based-collision-avoidance-tech.htm (accessed on 1 October 2019).
  36. Xiao, L.; Gao, F. A comprehensive review of the development of adaptive cruise control systems. Veh. Syst. Dyn. 2010, 48, 1167–1192. [Google Scholar] [CrossRef]
  37. Ioannou, P. Guest Editorial Adaptive Cruise Control Systems Special ISSUE. 2003. Available online: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1255572 (accessed on 1 October 2019).
  38. Rajamani, R.; Zhu, C. Semi-autonomous adaptive cruise control systems. IEEE Trans. Veh. Technol. 2002, 51, 1186–1192. [Google Scholar] [CrossRef]
  39. Winner, H.; Schopper, M. Adaptive Cruise Control. In Handbook of Driver Assistance Systems: Basic Information, Components and Systems for Active Safety and Comfort; Winner, H., Hakuli, S., Lotz, F., Singer, C., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 1093–1148. [Google Scholar]
  40. Robinson, S.R. The Infrared & Electro-Optical Systems Handbook. Volume 8, No. 25. 1993. Available online: https://apps.dtic.mil/dtic/tr/fulltext/u2/a364018.pdf (accessed on 1 October 2019).
  41. FLIR. Seeing through Fog and Rain with a Thermal Imaging Camera. 2019. Available online: https://www.flirmedia.com/MMC/CVS/Tech_Notes/TN_0001_EN.pdf (accessed on 1 October 2019).
  42. Forward Vehicle Collision Warning Systems—Performance Requirements and Test Procedures, Intelligent Transport System ISO/TC 204, ISO. 2013. Available online: https://www.iso.org/obp/ui/#iso:std:iso:15623:ed-2:v1:en (accessed on 1 October 2019).
  43. Winner, H. Handbook on Driver Assistance System (Fundemental of collision protection systems); Springer International Publishing Switzerland: Cham, Switzerland, 2016. [Google Scholar]
  44. Sign Detection with LIDAR. Available online: https://unmanned.tamu.edu/projects/sign-detection-with-lidar/ (accessed on 1 October 2019).
  45. Johansson, B. Road Sign Recognition from a Moving Vehicle. Master’s Thesis, Uppsala University, Uppsala, Sweden, 2003. [Google Scholar]
  46. Yang, Y.; Luo, H.; Xu, H.; Wu, F. Towards Real-Time Traffic Sign Detection and Classification. IEEE Trans. Intell. Transp. Syst. 2016, 17, 2022–2031. [Google Scholar] [CrossRef]
  47. Chen, E.H.; Röthig, P.; Zeisler, J.; Burschka, D. Investigating Low Level Features in CNN for Traffic Sign Detection and Recognition. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 325–332. [Google Scholar] [CrossRef]
  48. Lüke, S.; Fochler, O.; Schaller, T.; Regensburger, U. Traffic-Jam Assistance and Automation. In Handbook of Driver Assistance Systems: Basic Information, Components and Systems for Active Safety and Comfort; Winner, H., Hakuli, S., Lotz, F., Singer, C., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  49. Traffic Jam Assistance-Support of the Driver in the Transverse and Longitudinal Guidance. 2008. Available online: https://mediatum.ub.tum.de/doc/1145106/1145106.pdf (accessed on 1 October 2019).
  50. Cicchino, J.B. Effects of lane departure warning on police-reported crash rates. J. Saf. Res. 2018, 66, 61–70. [Google Scholar] [CrossRef]
  51. Yenikaya, S.; Yenikaya, G.; Düven, E. Keeping the Vehicle on the Road-A Survey on On-Road Lane Detection Systems. ACM Comput. Surv. 2013. [Google Scholar] [CrossRef]
  52. Hata, A.; Wolf, D. Road marking detection using LIDAR reflective intensity data and its application to vehicle localization. In Proceedings of the 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), Qingdao, China, 20 November 2014; pp. 584–589. [Google Scholar]
  53. Feniche, M.; Mazri, T. Lane Detection and Tracking For Intelligent Vehicles: A Survey. In Proceedings of the 2019 International Conference of Computer Science and Renewable Energies (ICCSRE), Agadir, Morocco, 22–24 July 2019. [Google Scholar] [CrossRef]
  54. Li, Q.; Chen, L.; Li, M.; Shaw, S.; Nüchter, A. A Sensor-Fusion Drivable-Region and Lane-Detection System for Autonomous Vehicle Navigation in Challenging Road Scenarios. IEEE Trans. Veh. Technol. 2014, 63, 540–555. [Google Scholar] [CrossRef]
  55. Intelligent Transport System—Lane Change Decision Aid System, I. 17387:2008. 2008. Available online: https://www.iso.org/obp/ui/#iso:std:iso:17387:ed-1:v1:en (accessed on 1 October 2019).
  56. Zakuan, F.R.A.; Hamid, U.Z.A.; Limbu, D.K.; Zamzuri, H.; Zakaria, M.A. Performance Assessment of an Integrated Radar Architecture for Multi-Types Frontal Object Detection for Autonomous Vehicle. In Proceedings of the 2018 IEEE International Conference on Automatic Control and Intelligent Systems (I2CACIS), Shah Alam, Malaysia, 20–20 October 2018; pp. 13–18. [Google Scholar] [CrossRef]
  57. ISO 20473:2007(e), Optics and Photonics—Spectral Bands, i. O. F. Standardization. 2007. Available online: https://www.pointsdevue.com/03-iso-204732007e-optics-and-photonics-spectral-bands-international-organization-standardization# (accessed on 1 October 2019).
  58. Alibaba.com. Ultrasonic Parking Sensor. Available online: https://www.alibaba.com/showroom/ultrasonic-parking-sensor.html (accessed on 1 October 2020).
  59. NHTS. Preliminary Cost-Benefit Analysis of Ultrasonic and Camera Backup Systems. 2006. Available online: https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/nhtsa-2006-25579-0002.pdf (accessed on 1 October 2019).
  60. Digitimes Research: 79GHz to Replace 24GHz for Automotive Millimeter-Wave Radar Sensors. Available online: https://www.digitimes.com/news/a20170906PD208.html#:~:text=In%202017%2C%20prices%20for%2024GHz,and%20US%24100%2D200%20respectively (accessed on 1 October 2020).
  61. Lambert, E.G.S. LiDAR Systems: Costs, Integration, and Major Manufacturers. Available online: https://www.mes-insights.com/lidar-systems-costs-integration-and-major-manufacturers-a-908358/ (accessed on 1 October 2020).
  62. Khader, S.C.M. An Introduction to Automotive Light Detection and Ranging (LIDAR) and Solutions to Serve Future Autonomous Driving Systems. Available online: https://www.ti.com/lit/wp/slyy150a/slyy150a.pdf?ts=1602909172290&ref_url=https%253A%252F%252Fwww.google.com%252F#:~:text=As%20LIDAR%20has%20gained%20in,than%20US%24200%20by%202022 (accessed on 1 October 2020).
  63. CSI. Mobileye Car System Installation. Available online: https://www.carsystemsinstallation.ca/product/mobileye-630/ (accessed on 1 October 2020).
  64. Tech, E. Mobileye Monocamera. Available online: https://www.extremetech.com/extreme/145610-mobileye-outfits-old-cars-with-new-electronic-vision (accessed on 1 October 2020).
  65. GroupGets. FLIR ADK—Thermal Vision Automotive Development Kit. Available online: https://store.groupgets.com/products/flir-adk-thermal-vision-automotive-development-kit#:~:text=FLIR%20ADK%20%2D%20Thermal%20Vision%20Automotive%20Development%20Kit%20%E2%80%93%20GroupGets (accessed on 1 October 2019).
  66. Meinel, H.H.; Bösch, W. Radar Sensors in Cars. In Automated Driving: Safer and More Efficient Future Driving; Watzenig, D., Horn, M., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 245–261. [Google Scholar]
  67. Stuff, A. Radar Technical Specifications. Available online: https://autonomoustuff.com/wp-content/uploads/2020/06/radar-comp-chart.pdf (accessed on 1 October 2020).
  68. BOSCH. Mid Range Radar. Available online: https://www.bosch-mobility-solutions.com/en/products-and-services/passenger-cars-and-light-commercial-vehicles/driver-assistance-systems/lane-change-assist/mid-range-radar-sensor-mrrrear/ (accessed on 1 October 2020).
  69. Schneider, R.; Wenger, J. High resolution radar for automobile applications. Adv. Radio Sci. 2003, 1, 105–111. [Google Scholar] [CrossRef]
  70. Schneider, M. Automotive radar–Status and trends. In Proceedings of the German Microwave Conference, Ulm, Germany, 5–7 April 2005. [Google Scholar]
  71. Fölster, F.; Rohling, H. Signal processing structure for automotive radar. Frequenz 2006, 60, 20–24. [Google Scholar] [CrossRef]
  72. Li, J.; Stoica, P. MIMO radar with colocated antennas. IEEE Signal Process. Mag. 2007, 24, 106–114. [Google Scholar] [CrossRef]
  73. Rasshofer, R.H. Functional requirements of future automotive radar systems. In Proceedings of the 2007 European Radar Conference, Munich, Germany, 10–12 October 2007; pp. 259–262. [Google Scholar] [CrossRef]
  74. Schoor, M.; Yang, B. High-Resolution Angle Estimation for an Automotive FMCW Radar Sensor. In Proceedings of the International Radar Symposium (IRS), Cologne, Germany 5–7 September 2007. [Google Scholar]
  75. Brunnbauer, M.; Meyer, T.; Ofner, G.; Mueller, K.; Hagen, R. Embedded wafer level ball grid array (eWLB). In Proceedings of the 2008 33rd IEEE/CPMT International Electronics Manufacturing Technology Conference (IEMT), Penang, Malaysia, 4–6 November 2008. [Google Scholar]
  76. Knapp, H.; Treml, M.; Schinko, A.; Kolmhofer, E.; Matzinger, S.; Strasser, G.; Lachner, R.; Maurer, L.; Minichshofer, J. Three-channel 77 GHz automotive radar transmitter in plastic package. In Proceedings of the 2012 IEEE Radio Frequency Integrated Circuits Symposium, Montreal, QC, Canada, 17–19 June 2012; pp. 119–122. [Google Scholar] [CrossRef]
  77. Wagner, C.; Böck, J.; Wojnowski, M.; Jäger, H.; Platz, J.; Treml, M.; Dober, F.; Lachner, R.; Minichshofer, J.; Maurer, L. A 77GHz automotive radar receiver in a wafer level package. In Proceedings of the 2012 IEEE Radio Frequency Integrated Circuits Symposium, Montreal, QC, Canada, 17–19 June 2012; pp. 511–514. [Google Scholar] [CrossRef]
  78. Keysight Technologies. How Millimeter Wave Automotive Radar Enhances ADAS and Autonomous Driving. 2018. Available online: www.keysight.com (accessed on 1 October).
  79. Patole, S.M.; Torlak, M.; Wang, D.; Ali, M. Automotive radars: A review of signal processing techniques. IEEE Signal Process. Mag. 2017, 34, 22–35. [Google Scholar] [CrossRef]
  80. Dickmann, J.; Klappstein, J.; Hahn, M.; Appenrodt, N.; Bloecher, H.L.; Werber, K.; Sailer, A. Automotive radar the key technology for autonomous driving: From detection and ranging to environmental understanding. In Proceedings of the 2016 IEEE Radar Conference (RadarConf), Philadelphia, PA, USA, 2–6 May 2016. [Google Scholar] [CrossRef]
  81. Meinl, F.; Stolz, M.; Kunert, M.; Blume, H. An experimental high performance radar system for highly automated driving. In Proceedings of the 2017 IEEE MTT-S International Conference on Microwaves for Intelligent Mobility (ICMIM), Nagoya, Japan, 19–21 March 2017; pp. 71–74. [Google Scholar] [CrossRef]
  82. Brisken, S.; Ruf, F.; Höhne, F. Recent evolution of automotive imaging radar and its information content. IET Radar Sonar Navig. 2018, 12, 1078–1081. [Google Scholar] [CrossRef]
  83. Eltrass, A.; Khalil, M. Automotive radar system for multiple-vehicle detection and tracking in urban environments. IET Intell. Transp. Syst. 2018, 12, 783–792. [Google Scholar] [CrossRef]
  84. Huang, X.; Ding, J.; Liang, D.; Wen, L. Multi-Person Recognition Using Separated Micro-Doppler Signatures. IEEE Sens. J. 2020, 20, 6605–6611. [Google Scholar] [CrossRef]
  85. Lee, S.; Yoon, Y.; Lee, J.; Kim, S. Human–vehicle classification using feature-based SVM in 77-GHz automotive FMCW radar. IET Radar Sonar Navig. 2017, 11, 1589–1596. [Google Scholar] [CrossRef]
  86. Zang, S.; Ding, M.; Smith, D.; Tyler, P.; Rakotoarivelo, T.; Kaafar, M.A. The Impact of Adverse Weather Conditions on Autonomous Vehicles: How Rain, Snow, Fog, and Hail Affect the Performance of a Self-Driving Car. IEEE Veh. Technol. Mag. 2019, 14, 103–111. [Google Scholar] [CrossRef]
  87. Yang, R.K.; Li, L.; Ma, H.H. Effects of Backscattering Enhancement Considering Multiple Scattering in Rain on Mmw Radar Performance; NISCAIR-CSIR: New Delhi, India, 2013. [Google Scholar]
  88. Huang, J.; Jiang, S.; Lu, X. Rain Backscattering Properties And Effects On The Radar Performance At Mm Wave Band. Int. J. Infrared Millim. Waves 2001, 22, 917–922. [Google Scholar] [CrossRef]
  89. Norouzian, F.; Marchetti, E.; Hoare, E.; Gashinova, M.; Constantinou, C.; Gardner, P.; Cherniakov, M. Experimental study on low-THz automotive radar signal attenuation during snowfall. IET Radar Sonar Navig. 2019, 13, 1421–1427. [Google Scholar] [CrossRef]
  90. Pozhidaev, V.N. Estimation of attenuation and backscattering of millimeter radio waves in meteorological formations. J. Commun. Technol. Electron. 2010, 55, 1223–1230. [Google Scholar] [CrossRef]
  91. Hasirlioglu, S.; Riener, A. Challenges in Object Detection Under Rainy Weather Conditions. In Proceedings of the Second EAI International Conference, INTSYS 2018, Guimarães, Portugal, 21–23 November 2018. [Google Scholar]
  92. Wang, J.-G.; Chen, S.J.; Zhou, L.-B.; Wan, K.; Yau, W.-Y. Vehicle Detection and Width Estimation in Rain by Fusing Radar and Vision. In Proceedings of the 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), Singapore, 18–21 November 2018; pp. 1063–1068. [Google Scholar]
  93. Peynot, T.; Underwood, J.; Scheding, S. Towards reliable perception for unmanned ground vehicles in challenging conditions. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 1170–1176. [Google Scholar] [CrossRef] [Green Version]
  94. Brooker, G.; Hennessey, R.; Lobsey, C.; Bishop, M.; Widzyk-Capehart, E. Seeing through Dust and Water Vapor: Millimeter Wave Radar Sensors for Mining Applications. J. Field Robot. 2007, 24, 527–557. [Google Scholar] [CrossRef]
  95. Alland, S.; Stark, W.; Ali, M.; Hegde, M. Interference in Automotive Radar Systems: Characteristics, Mitigation Techniques, and Current and Future Research. IEEE Signal Process. Mag. 2019, 36, 45–59. [Google Scholar] [CrossRef]
  96. Wikipedia. Lidar. 2019. Available online: https://en.wikipedia.org/wiki/Lidar (accessed on 1 October 2019).
  97. Hecht, J. Lidar for Self Driving. 2018, pp. 1–8. Available online: https://www.osapublishing.org/DirectPDFAccess/4577511A-CFD1-17871681F42CA46FF8BC_380434/opn-29-1-26.pdf?da=1&id=380434&seq=0&mobile=no (accessed on 1 October 2019). (In English).
  98. Stuff, A. Lidar Technical Comparison. 2019. Available online: https://autonomoustuff.com/lidar-chart (accessed on 1 October 2019).
  99. Velodyne. HDL 64. Available online: https://velodynelidar.com/products/hdl-64e/ (accessed on 1 October 2020).
  100. Wallace, A.M.; Halimi, A.; Buller, G.S. Full Waveform LiDAR for Adverse Weather Conditions. IEEE Trans. Veh. Technol. 2020, 69, 7064–7077. [Google Scholar] [CrossRef]
  101. Li, Y.; Ibanez-Guzman, J. Lidar for Autonomous Driving: The Principles, Challenges, and Trends for Automotive Lidar and Perception Systems. IEEE Signal Process. Mag. 2020, 37, 50–61. [Google Scholar] [CrossRef]
  102. Zermas, D.; Izzat, I.; Papanikolopoulos, N. Fast segmentation of 3D point clouds: A paradigm on LiDAR data for autonomous vehicle applications. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 5067–5073. [Google Scholar] [CrossRef]
  103. Bogoslavskyi, I.; Stachniss, C. Fast range image-based segmentation of sparse 3D laser scans for online operation. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 163–169. [Google Scholar] [CrossRef]
  104. Thrun, A.P.S. Model based vehicle detection and tracking for autonomous urban driving. Auton Robot 2009, 26, 123–129. [Google Scholar] [CrossRef]
  105. Himmelsbach, M.; Mueller, A.; Lüttel, T.; Wünsche, H.J. LIDAR-based 3D Object Perception. In Proceedings of the 1st Int. Workshop on Cognition for Technical Systems, Muenchen, Germany, 6–10 May 2008. [Google Scholar]
  106. Capellier, E.; Davoine, F.; Cherfaoui, V.; Li, Y. Evidential deep learning for arbitrary LIDAR object classification in the context of autonomous driving. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 1304–1311. [Google Scholar] [CrossRef] [Green Version]
  107. Zeng, W.D.; Posner, I.; Newman, P. What could move? Finding cars, pedestrians and bicyclists in 3D laser data. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 4038–4044. [Google Scholar] [CrossRef]
  108. Douillard, J.U.B.; Vlaskine, V.; Quadros, A.; Singh, S. A Pipeline for the Segmentation and Classification of 3D Point Clouds; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar] [CrossRef] [Green Version]
  109. Premebida, C.; Ludwig, O.; Nunes, U. Exploiting LIDAR-based features on pedestrian detection in urban scenarios. In Proceedings of the 2009 12th International IEEE Conference on Intelligent Transportation Systems, St. Louis, MO, USA, 4–7 October 2009. [Google Scholar] [CrossRef]
  110. Kraemer, S.; Stiller, C.; Bouzouraa, M.E. LiDAR-Based Object Tracking and Shape Estimation Using Polylines and Free-Space Information. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4515–4522. [Google Scholar] [CrossRef]
  111. Zhang, X.; Xu, W.; Dong, C.; Dolan, J.M. Efficient L-shape fitting for vehicle detection using laser scanners. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 54–59. [Google Scholar] [CrossRef]
  112. Rachman, A.S.A. 3D-LIDAR Multi Object Tracking for Autonomous Driving. 2017. Available online: https://www.semanticscholar.org/paper/3D-LIDAR-Multi-Object-Tracking-for-Autonomous-and-Rachman/bafc8fcdee9b22708491ea1293524ece9e314851 (accessed on 1 October 2019).
  113. Hespel, L.; Riviere, N.; Huet, T.; Tanguy, B.; Ceolato, R. Performance evaluation of laser scanners through the atmosphere with adverse condition. Proc SPIE 2011. [Google Scholar] [CrossRef]
  114. Heinzler, R.; Schindler, P.; Seekircher, J.; Ritter, W.; Stork, W. Weather Influence and Classification with Automotive Lidar Sensors. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 1527–1534. [Google Scholar] [CrossRef] [Green Version]
  115. Kutila, M.; Pyykönen, P.; Holzhüter, H.; Colomb, M.; Duthon, P. Automotive LiDAR performance verification in fog and rain. In Proceedings of the IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, Maui, HI, USA, 4–7 November 2018; pp. 1695–1701. [Google Scholar] [CrossRef]
  116. Bijelic, M.; Gruber, T.; Ritter, W. A Benchmark for Lidar Sensors in Fog: Is Detection Breaking Down? In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 760–767. [Google Scholar] [CrossRef] [Green Version]
  117. Li, Y.; Duthon, P.; Colomb, M.; Ibanez-Guzman, J. What Happens for a ToF LiDAR in Fog? IEEE Trans. Intell. Transp. Syst. 2020. [Google Scholar] [CrossRef]
  118. Kutila, M.; Pyykonen, P.; Ritter, W.; Sawade, O.; Schäufele, B. Automotive LIDAR sensor development scenarios for harsh weather conditions. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Rio de Janeiro, Brazil, 1–4 November 2016; pp. 265–270. [Google Scholar]
  119. Goodin, C.; Carruth, D.; Doude, M.; Hudson, C. Predicting the influence of rain on LIDAR in ADAS. Electronics 2019, 8, 89. [Google Scholar] [CrossRef] [Green Version]
  120. Byeon, M.; Yoon, S.W. Analysis of Automotive Lidar Sensor Model Considering Scattering Effects in Regional Rain Environments. IEEE Access 2020, 8, 102669–102679. [Google Scholar] [CrossRef]
  121. Michaud, S.; Lalonde, J.-F.; Giguère, P. Towards Characterizing the Behavior of LiDARs in Snowy Conditions. In Proceedings of the 7th Workshop on Planning, Perception and Navigation for Intelligent Vehicles, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–3 October 2015. [Google Scholar]
  122. Ryde, J.; Hillier, N. Performance of laser and radar ranging devices in adverse environmental conditions. J. Field Robot. 2009, 26, 712–727. [Google Scholar] [CrossRef]
  123. Wojtanowski, J.; Zygmunt, M.; Kaszczuk, M.; Mierczyk, Z.; Muzal, M. Comparison of 905 nm and 1550 nm semiconductor laser rangefinders’ performance deterioration due to adverse environmental conditions. Opto-Electron. Rev. 2014, 22, 183–190. [Google Scholar] [CrossRef]
  124. Tesla “Model ‘S’ Owners Manual”. 2020. Available online: https://www.tesla.com/sites/default/files/model_s_owners_manual_north_america_en_us.pdf (accessed on 1 October 2020).
  125. Technavio. Global Automotive Parking Sensors Market 2016–2020. 2016. Available online: https://www.technavio.com/report/global-automotive-electronicsglobal-automotive-parking-sensors-market-2016-2020 (accessed on 1 October 2020).
  126. Bosch. Ultrasonic Sensor Technical Performance. Available online: https://www.bosch-mobility-solutions.com/en/products-and-services/passenger-cars-and-light-commercial-vehicles/driver-assistance-systems/construction-zone-assist/ultrasonic-sensor/ (accessed on 1 October 2020).
  127. Murata Manufacturing Co., Ltd. Ultrasonic Sensors. Available online: http://www.symmetron.ru/suppliers/murata/files/pdf/murata/ultrasonic-sensors.pdf (accessed on 1 October 2019).
  128. Pepperl_Fuchs. Technology Guide for Ultrasonic Sensors. 2010. Available online: https://files.pepperl-fuchs.com/webcat/navi/productInfo/doct/tdoct3631a_eng.pdf?v=20181114123018 (accessed on 1 October 2019).
  129. GmbH, R.B. Bosch Automotive Handbook; Robert Bosch GmbH: Gerlingen, Germany, 2004. [Google Scholar]
  130. Massa, D.P. Choosing ultrasonic sensor for proximity or distance measurement. Sensors 1999, 16, 3. [Google Scholar]
  131. Nordevall, J.; Method Development of Automotive Ultrasound Simulations. Applied Mechanics, Chalmers University of Technology. 2015. Available online: http://publications.lib.chalmers.se/records/fulltext/219224/219224.pdf (accessed on 1 October 2019).
  132. Hatano, H.; Yamazato, T.; Katayama, M. Automotive Ultrasonic Array Emitter for Short-range Targets Detection. In Proceedings of the 2007 4th International Symposium on Wireless Communication Systems, Trondheim, Norway, 17–19 October 2007; pp. 355–359. [Google Scholar]
  133. Agarwal, V.; Murali, N.V.; Chandramouli, C. A Cost-Effective Ultrasonic Sensor-Based Driver-Assistance System for Congested Traffic Conditions. IEEE Trans. Intell. Transp. Syst. 2009, 10, 486–498. [Google Scholar] [CrossRef]
  134. Adarsh, S.; Kaleemuddin, S.M.; Bose, D.; Ramachandran, K.I. Performance comparison of Infrared and Ultrasonic sensors for obstacles of different materials in vehicle/ robot navigation applications. IOP Conf. Ser. Mater. Sci. Eng. 2016, 149, 012141. [Google Scholar] [CrossRef] [Green Version]
  135. Kapoor, R.; Ramasamy, S.; Gardi, A.; van Schyndel, R.; Sabatini, R. Acoustic sensors for air and surface navigation applications. Sensors 2018, 18, 499. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  136. Li, S.E.; Li, G.; Yu, J.; Liu, C.; Cheng, B.; Wang, J.; Li, K. Kalman filter-based tracking of moving objects using linear ultrasonic sensor array for road vehicles. Mech. Syst. Signal Process. 2018, 98, 173–189. [Google Scholar] [CrossRef]
  137. Jiménez, F.; Naranjo, E.J.; Gómez, O.; Anaya, J.J. Vehicle Tracking for an Evasive Manoeuvres Assistant Using Low-Cost Ultrasonic Sensors. Sensors 2014, 14, 22689–22705. [Google Scholar] [CrossRef] [Green Version]
  138. Yu, J.; Li, S.E.; Liu, C.; Cheng, B. Dynamical tracking of surrounding objects for road vehicles using linearly-arrayed ultrasonic sensors. In Proceedings of the 2016 IEEE Intelligent Vehicles Symposium (IV), Gothenburg, Sweden, 19–22 June 2016; pp. 72–77. [Google Scholar] [CrossRef]
  139. Liptai, P.; Badida, M.; Lukáčová, K. Influence of Atmospheric Conditions on Sound Propagation—Mathematical Modeling. Óbuda Univ. e-Bull. 2015, 5, 127–134. [Google Scholar]
  140. Mahapatra, R.P.; Kumar, K.V.; Khurana, G.; Mahajan, R. Ultra Sonic Sensor Based Blind Spot Accident Prevention System. In Proceedings of the 2008 International Conference on Advanced Computer Theory and Engineering, Phuket, Thailand, 20–22 December 2008; pp. 992–995. [Google Scholar] [CrossRef]
  141. Alonso, L.; Oria, J.P.; Arce, J.; Fernandez, M. Urban traffic avoiding car collisions fuzzy system based on ultrasound. In Proceedings of the 2008 World Automation Congress, Hawaii, HI, USA, 28 September–2 October 2008. [Google Scholar]
  142. Kai-Tai, S.; Chih-Hao, C.; Chiu, H.C. Design and experimental study of an ultrasonic sensor system for lateral collision avoidance at low speeds. In Proceedings of the IEEE Intelligent Vehicles Symposium, Parma, Italy, 14–17 June 2004; pp. 647–652. [Google Scholar] [CrossRef]
  143. Rhee, J.H.; Seo, J. Low-Cost Curb Detection and Localization System Using Multiple Ultrasonic Sensors. Sensors 2019, 19, 1389. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  144. Hosur, P.; Shettar, R.B.; Potdar, M. Environmental awareness around vehicle using ultrasonic sensors. In Proceedings of the 2016 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Jaipur, India, 21–24 September 2016; pp. 1154–1159. [Google Scholar]
  145. Shin, S.; Choi, S.B. Target Speed Sensing Technique using Dilation Correlation of Ultrasonic Signal for Vehicle. In Proceedings of the 2019 IEEE Sensors Applications Symposium (SAS), Sophia Antipolis, France, 11–13 March 2019. [Google Scholar] [CrossRef]
  146. Kredba, J.; Holada, M. Precision ultrasonic range sensor using one piezoelectric transducer with impedance matching and digital signal processing. In Proceedings of the IEEE International Workshop of Electronics, Control, Measurement, Signals and their Application to Mechatronics (ECMSM), Donostia-San Sebastian, Spain, 24–26 May 2017. [Google Scholar] [CrossRef]
  147. Nauth, P.M.; Pech, A.H.; Michalik, R. Research on a new Smart Pedestrian Detection Sensor for Vehicles. In Proceedings of the IEEE Sensors Applications Symposium (SAS), Sophia Antipolis, France, 11–13 March 2019. [Google Scholar] [CrossRef]
  148. Tsai, W.-Y.; Chen, H.-C.; Liao, T.-L. An ultrasonic air temperature measurement system with self-correction function for humidity. Meas. Sci. Technol. 2005, 16, 548–555. [Google Scholar] [CrossRef]
  149. Muhoz, A.C. Chapter 15: Position and Motion Sensors. In Sensor Technology Handbook; Wilson, J.S., Ed.; Newnes: Burlington, MA, USA, 2005; pp. 321–409. [Google Scholar]
  150. Simon Fraser University. Sound Propagation. Available online: https://www.sfu.ca/sonic-studio-webdav/handbook/Sound_Propagation.html (accessed on 1 October 2019).
  151. Lim, B.S.; Keoh, S.L.; Thing, V.L. Autonomous Vehicle Ultrasonic Sensor Vulnerability and Impact Assessment. 2018. Available online: https://ieeexplore.ieee.org/document/8355132 (accessed on 1 October 2020).
  152. Xu, W.; Yan, C.; Jia, W.; Ji, X.; Liu, J. Analyzing and Enhancing the Security of Ultrasonic Sensors for Autonomous Vehicles. IEEE Internet Things J. 2018, 5, 5015–5029. [Google Scholar] [CrossRef]
  153. Mobileye. Mobileye C2-270 Technical Datasheet. Available online: https://itk-mdl.asutk.ru/upload/iblock/c82/Mobileye%20C2-270%20Technical%20Spec%20v1.2.pdf (accessed on 1 October 2020).
  154. Autonomoustuff. Mobileye Moncamera. Available online: https://autonomoustuff.com/product/mobileye-camera-dev-kit/ (accessed on 1 October 2020).
  155. Mehta, S.; Patel, A.; Mehta, J. CCD or CMOS Image Sensor For Photography. In Proceedings of the International Conference on Communications and Signal Processing (ICCSP), Melmaruvathur, India, 2–4 April 2015. [Google Scholar] [CrossRef]
  156. Teledyne DALSA Inc. CCD vs. CMOS. Available online: https://www.teledynedalsa.com/imaging/knowledge-center/appnotes/ccd-vs-cmos/ (accessed on 20 February 2020).
  157. Bernini, N.; Bertozzi, M.; Castangia, L.; Patander, M.; Sabbatelli, M. Real-time obstacle detection using stereo vision for autonomous ground vehicles: A survey. In Proceedings of the 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), Qingdao, China, 8–11 October 2014; pp. 873–878. [Google Scholar] [CrossRef]
  158. John, V.; Yoneda, K.; Liu, Z.; Mita, S. Saliency Map Generation by the Convolutional Neural Network for Real-Time Traffic Light Detection Using Template Matching. IEEE Trans. Comput. Imaging 2015, 1, 159–173. [Google Scholar] [CrossRef]
  159. Mu, G.; Xinyu, Z.; Deyi, L.; Tianlei, Z.; Lifeng, A. Traffic light detection and recognition for autonomous vehicles. J. China Univ. Posts Telecommun. 2015, 22, 50–56. [Google Scholar] [CrossRef]
  160. Zhang, J.; Huang, M.; Jin, X.; Li, X. A Real-Time Chinese Traffic Sign Detection Algorithm Based on Modified YOLOv2. Algorithms 2017, 10, 127. [Google Scholar] [CrossRef] [Green Version]
  161. Kulkarni, R.; Dhavalikar, S.; Bangar, S. Traffic Light Detection and Recognition for Self Driving Cars Using Deep Learning. In Proceedings of the 2018 4th International Conference on Computing, Communication Control and Automation, ICCUBEA, Pune, India, 16–18 August 2018. [Google Scholar] [CrossRef]
  162. Hasirlioglu, S.; Riener, A. Introduction to rain and fog attenuation on automotive surround sensors. In Proceedings of the IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017. [Google Scholar]
  163. Hasirlioglu, S.; Riener, A. A Model-Based Approach to Simulate Rain Effects on Automotive Surround Sensor Data. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 2609–2615. [Google Scholar] [CrossRef]
  164. Hasirlioglu, S.; Kamann, A.; Doric, I.; Brandmeier, T. Test methodology for rain influence on automotive surround sensors. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 2242–2247. [Google Scholar] [CrossRef]
  165. Xique, I.J.; Buller, W.; Fard, Z.B.; Dennis, E.; Hart, B. Evaluating Complementary Strengths and Weaknesses of ADAS Sensors. In Proceedings of the IEEE Vehicular Technology Conference, Chicago, IL, USA, 27–30 August 2018. [Google Scholar] [CrossRef]
  166. Garg, K.; Nayar, S.K. When does a camera see rain? In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05), Beijing, China, 17–21 October 2005; Volume 2, pp. 1067–1074. [Google Scholar] [CrossRef]
  167. Gangula, L.B.; Srikanth, G.; Naveen, C.; Satpute, V.R. Vision Improvement in Automated Cars by Image Deraining. In Proceedings of the 2018 IEEE International Students’ Conference on Electrical, Electronics and Computer Science (SCEECS), Bhopal, India, 24–25 February 2018. [Google Scholar] [CrossRef]
  168. Lee, U.; Jung, J.; Shin, S.; Jeong, Y.; Park, K.; Shim, D.H.; Kweon, I.S. EureCar turbo: A self-driving car that can handle adverse weather conditions. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 2301–2306. [Google Scholar] [CrossRef]
  169. Flir. Automotive Thermal Camera Specification. Available online: https://www.flir.ca/products/adk/ (accessed on 1 October 2020).
  170. FLIR. FLIR ADK. Available online: https://www.flir.ca/products/adk/ (accessed on 1 October 2020).
  171. FLIR Systems. The Ultimate Infrared Handbook For R&D Professionals; FLIR AB: Boston, MA, USA, 2010; Available online: https://www.flirmedia.com/MMC/THG/Brochures/T559243/T559243_EN.pdf (accessed on 1 October 2019).
  172. John, V.; Tsuchizawa, S.; Liu, Z.; Mita, S. Fusion of thermal and visible cameras for the application of pedestrian detection. Signal Image Video Process. 2016, 11, 517–524. [Google Scholar] [CrossRef]
  173. Chien, S.C.; Chang, F.C.; Tsai, C.C.; Chen, Y.Y. Intelligent all-day vehicle detection based on decision-level fusion using color and thermal sensors. In Proceedings of the International Conference on Advanced Robotics and Intelligent Systems, ARIS, Taipei, Taiwan, 6–8 September 2017. [Google Scholar] [CrossRef]
  174. Hurney, P.; Morgan, F.; Glavin, M.; Jones, E.; Waldron, P. Review of pedestrian detection techniques in automotive far-infrared video. IET Intell. Transp. Syst. 2015, 9, 824–832. [Google Scholar] [CrossRef]
  175. Berg, A. Detection and Tracking in Thermal Infrared Imagery. Ph.D. Thesis, Department of Electrical Engineering, Linköping University, Linköping, Sweden, 2016. [Google Scholar]
  176. Kim, T.; Kim, S. Pedestrian detection at night time in FIR domain: Comprehensive study about temperature and brightness and new benchmark. Pattern Recognit. 2018, 79, 44–54. [Google Scholar] [CrossRef]
  177. Wang, H.; Cai, Y.; Chen, X.; Chen, L. Night-Time Vehicle Sensing in Far Infrared Image with Deep Learning. J. Sens. 2016, 2016, 3403451. [Google Scholar] [CrossRef] [Green Version]
  178. Qi, B.; John, V.; Liu, Z.; Mita, S. Pedestrian detection from thermal images: A sparse representation based approach. Infrared Phys. Technol. 2016, 76, 157–167. [Google Scholar] [CrossRef]
  179. Li, X.; Guo, R.; Chen, C. Robust Pedestrian Tracking and Recognition from FLIR Video: A Unified Approach via Sparse Coding. Sensors 2014, 14, 11245–11259. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  180. Forslund, D.; Bjärkefur, J. Night vision animal detection. In Proceedings of the 2014 IEEE Intelligent Vehicles Symposium Proceedings, Dearborn, MI, USA, 8–11 June 2014; pp. 737–742. [Google Scholar] [CrossRef]
  181. Fernández-Caballero, A.; López, M.T.; Serrano-Cuerda, J. Thermal-infrared pedestrian ROI extraction through thermal and motion information fusion. Sensors 2014, 14, 6666–6676. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  182. Christiansen, P.; Steen, K.; Jørgensen, R.; Karstoft, H. Automated Detection and Recognition of Wildlife Using Thermal Cameras. Sensors 2014, 14, 13778–13793. [Google Scholar] [CrossRef]
  183. Jeong, M.; Ko, B.C.; Nam, J.-Y. Early Detection of Sudden Pedestrian Crossing for Safe Driving During Summer Nights. IEEE Trans. Circuits Syst. Video Technol. 2017, 27, 1368–1380. [Google Scholar] [CrossRef]
  184. Baek, J.; Hong, S.; Kim, J.; Kim, E. Efficient Pedestrian Detection at Nighttime Using a Thermal Camera. Sensors 2017, 17, 1850. [Google Scholar] [CrossRef] [Green Version]
  185. Jeon, E.S.; Kim, J.H.; Hong, H.G.; Batchuluun, G.; Park, K.R. Human Detection Based on the Generation of a Background Image and Fuzzy System by Using a Thermal Camera. Sensors 2016, 16, 453. [Google Scholar] [CrossRef] [Green Version]
  186. Choi, Y.; Kim, N.; Hwang, S.; Kweon, I.S. Thermal Image Enhancement using Convolutional Neural Network. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 223–230. [Google Scholar] [CrossRef]
  187. Hwang, S.; Park, J.; Kim, N.; Choi, Y.; Kweon, I.S. Multispectral pedestrian detection: Benchmark dataset and baseline. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1037–1045. [Google Scholar] [CrossRef]
  188. Iwasaki, Y. A Method of Robust Moving Vehicle Detection For Bad Weather Using An Infrared Thermography Camera. In Proceedings of the International Conference on Wavelet Analysis and Pattern Recognition, Hong Kong, China, 30–31 August 2008. [Google Scholar]
  189. Iwasaki, Y.; Misumi, M.; Nakamiya, T. Robust Vehicle Detection under Various Environmental Conditions Using an Infrared Thermal Camera and Its Application to Road Traffic Flow Monitoring. Sensors 2013, 13, 7756. [Google Scholar] [CrossRef]
  190. Pinchon, N.; Cassignol, O.; Nicolas, A.; Bernardin, F.; Leduc, P.; Tarel, J.P.; Brémond, R.; Bercier, E.; Brunet, J. All-Weather Vision for Automotive Safety: Which Spectral Band? In Advanced Microsystems for Automotive Applications 2018; Dubbert, J., Müller, B., Meyer, G., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2018; pp. 3–15. [Google Scholar]
  191. Sabry, M.; Al-Kaff, A.; Hussein, A.; Abdennadher, S. Ground Vehicle Monocular Visual Odometry. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 3587–3592. [Google Scholar] [CrossRef]
  192. Howard, B. MIT Spinoff WaveSense’s Ground-Penetrating Radar Looks Down for Perfect Self-Driving. Available online: https://www.extremetech.com/extreme/306205-mit-wavesense-ground-penetrating-radar-self-driving (accessed on 1 October 2020).
  193. Wang, Z.; Wu, Y.; Niu, Q. Multi-Sensor Fusion in Automated Driving: A Survey. IEEE Access 2020. [Google Scholar] [CrossRef]
  194. Göhring, D.; Wang, M.; Schnürmacher, M.; Ganjineh, T. Radar/Lidar sensor fusion for car-following on highways. In Proceedings of the 5th International Conference on Automation, Robotics and Applications, Wellington, New Zealand, 6–8 December 2011; pp. 407–412. [Google Scholar] [CrossRef] [Green Version]
  195. Savasturk, D.; Froehlich, B.; Schneider, N.; Enzweiler, M.; Franke, U. A Comparison Study on Vehicle Detection in Far Infrared and Regular Images. In Proceedings of the 2015 IEEE 18th International Conference on Intelligent Transportation Systems, Las Palmas, Spain, 15–18 September 2015. [Google Scholar]
  196. Mockel, S.; Scherer, F.; Schuster, P.F. Multi-sensor obstacle detection on railway tracks. In Proceedings of the IEEE IV2003 Intelligent Vehicles Symposium (Cat. No.03TH8683), Columbus, OH, USA, 9–11 June 2003; pp. 42–46. [Google Scholar] [CrossRef]
  197. Nabati, R.; Qi, H. Radar-Camera Sensor Fusion for Joint Object Detection and Distance Estimation. arXiv 2020, arXiv:2009.08428v1. [Google Scholar]
  198. Radecki, P.; Campbell, M.; Matzen, K. All Weather Perception: Joint Data Association, Tracking and Classification. arXiv 2016, arXiv:1605.02196v1. [Google Scholar]
  199. Diaz-Cabrera, M.; Cerri, P.; Medici, P. Robust real-time traffic light detection and distance estimation using a single camera. Expert Syst. Appl. 2015, 42, 3911–3923. [Google Scholar] [CrossRef]
  200. Zhou, L.; Deng, Z. LIDAR and vision-based real-time traffic sign detection and recognition algorithm for intelligent vehicle. In Proceedings of the 2014 IEEE 17th International Conference on Intelligent Transportation Systems (ITSC), Qingdao, China, 8–11 October 2014. [Google Scholar]
  201. Kim, T.; Song, B. Detection and Tracking of Road Barrier Based on Radar and Vision Sensor Fusion. J. Sens. 2016. [Google Scholar] [CrossRef] [Green Version]
  202. Im, G.; Kim, M.; Park, J. Parking Line Based SLAM Approach Using AVM/LiDAR Sensor Fusion for Rapid and Accurate Loop Closing and Parking Space Detection. Sensors 2019, 19, 4811. [Google Scholar] [CrossRef] [Green Version]
  203. Choi, J.; Chang, E.; Yoon, D.; Ryu, S.; Jung, H.; Suhr, J. Sensor Fusion-Based Parking Assist System; SAE Technical Paper; SAE International: Warrendale, PA, USA, 2014. [Google Scholar]
Figure 1. Electromagnetic spectrum according to ISO 20473 [57].
Figure 2. Mid-range radar from Bosch [68].
Figure 3. Performance gap for the radar sensor based on the scores gathered.
Figure 4. Velodyne HDL-64 lidar [99].
Figure 5. Lidar sensor spider chart, showing the performance gap based on the scores gathered.
Figure 6. Ultrasonic surround sensor from Bosch [126].
Figure 7. Ultrasonic sensor spider chart, showing the performance gap.
Figure 8. Intelligent mono-camera by Mobileye [154].
Figure 9. Camera sensor spider chart. The difference between the red dashed line and the solid blue line is the performance gap.
Figure 10. Thermal Vision FLIR ADK by FLIR Systems [170].
Figure 11. Far-infrared sensor spider chart. The difference between the red dashed line and the solid blue line is the performance gap.
Table 1. Indexes to interpret spider charts.
Criteria: Range, Resolution, Contrast, Weather. Indexes (i): 0 = None; 1 = Very low performance; 2 = Low; 3 = High; 4 = Very high performance.
Criterion: Cost. Indexes (i): 0 = None; 1 = Very high cost; 2 = High cost; 3 = Low cost; 4 = Very low cost.
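The spider charts in Figures 3, 5, 7, 9, and 11 plot each criterion of Table 1 on its own axis; the current scores and the expected scores form two polygons, and the region between them is the performance gap. As a minimal sketch of how such a chart can be drawn, the following Python/matplotlib snippet uses hypothetical placeholder scores, not the values assigned in this review.

import numpy as np
import matplotlib.pyplot as plt

# Criteria from Table 1 and hypothetical 0-4 scores (placeholders only).
criteria = ["Range", "Resolution", "Contrast", "Weather", "Cost"]
current = [3, 1, 1, 4, 3]    # assumed current-performance indexes
expected = [3, 2, 2, 4, 4]   # assumed expected-performance indexes

# Close the polygons by repeating the first point.
angles = np.linspace(0, 2 * np.pi, len(criteria), endpoint=False).tolist()
angles += angles[:1]
current_c = current + current[:1]
expected_c = expected + expected[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, current_c, "b-", label="Current performance")
ax.plot(angles, expected_c, "r--", label="Expected performance")
ax.fill(angles, current_c, "b", alpha=0.1)   # shade the current polygon
ax.set_xticks(angles[:-1])
ax.set_xticklabels(criteria)
ax.set_ylim(0, 4)            # index scale of Table 1
ax.legend(loc="upper right")
plt.show()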
Table 2. Ideal price selection of sensors from the actual market price.
Sensor | Actual Price | Ideal Price
Ultrasonic sensors | USD 16 to USD 40 | USD 16 to USD 40
Automotive radar | USD 50 to USD 220 | USD 50 to USD 220
Automotive lidar | USD 500 to USD 75,000 | USD 100 to USD 10,000
Automotive mono-camera | USD 100 to USD 1000 | USD 100 to USD 700
Automotive thermal camera | USD 700 to USD 3000 | USD 100 to USD 1500
Table 3. List of advantages and disadvantages of perception sensors.
Radar
Advantages:
  • The sensor can see long distances ahead of the car in poor visibility conditions, which helps to avoid collisions.
  • The sensor is small, lightweight, and affordable.
  • The sensor requires less power than a lidar sensor since it has no moving parts.
  • The sensor is more robust to failure than lidar.
  • Radar is less expensive than lidar.
Disadvantages:
  • The obtained images have low accuracy and resolution, and the information on detected objects is limited (neither precise shape nor color information is available).
  • Increasing the transmit power can compensate for radar attenuation under heavy precipitation, but this is not an economically viable solution.
  • The mutual interference of radar sensors is a growing issue.
  • The azimuth and elevation resolution of automotive radars is poor, which makes detailed scene mapping and object classification difficult and error prone.
  • The sensor cannot provide a 360° measurement of the surroundings.
Lidar
Advantages:
  • The sensor can see long distances ahead of the car in good visibility conditions (neither rain nor fog).
  • The sensor can provide full 360° 3D point clouds.
  • The images have good accuracy and resolution.
  • There is no significant interference between multiple lidar sensors.
Disadvantages:
  • Lidar is more expensive than radar and cameras.
  • The returned point cloud is sparse (not dense), so small objects (such as wires and bars) can remain undetected.
  • Due to its oscillating components, mechanical maintenance requirements are high.
  • On wet surfaces, lidar shows poorer contrast discrimination than on dry surfaces.
  • The sensor requires more power than a radar sensor.
  • The sensor is influenced by varying climatic conditions.
Ultrasonic
Advantages:
  • Ultrasonic sensors are useful for detecting transparent and non-metallic objects.
  • They are not influenced by varying climatic conditions.
  • They are low in cost and small in dimensions.
  • At short ranges, higher resolution can be expected.
  • Unlike cameras, ultrasonic sensors overcome pedestrian occlusion problems.
Disadvantages:
  • They are suitable for short-range detection only.
  • They are sensitive to air temperature and windy environments (a numerical illustration follows this table).
  • Interference and reverberation become problematic when ultrasonic sensors on two cars operate near each other or when sensors are placed close together.
  • Environmental noise may interfere with the measurements.
Camera
Advantages:
  • Cameras maintain high resolution and color scales across the complete field of view. They offer a colorful perspective of the environment that helps to analyze the surroundings.
  • Stereo cameras can provide the 3D geometry of objects.
  • Cameras can robustly monitor and maintain information from the surroundings over time.
  • They are small in dimensions.
  • Compared to lidar, they are cost-effective and easy to deploy on a vehicle.
Disadvantages:
  • Camera data require a powerful computation system to extract useful information.
  • The sensor is sensitive to heavy rain, fog, and snowfall, which reduces the capability of the computer system to reliably interpret the surrounding scene.
  • The accuracy of distance-to-obstacle estimation is limited.
Far-Infrared
Advantages:
  • Far-infrared (FIR) camera images depend on the target temperature and radiated heat; therefore, light conditions and object surface features do not influence them.
  • Compared to lidar, FIR sensors are cheaper and smaller.
  • They improve situational awareness at night.
  • The FIR sensing range can cover up to 200 m or more and detect possible hazards ahead.
  • They see better through dust, fog, and snow than visible-light cameras.
Disadvantages:
  • FIR camera data require demanding computational resources and robust algorithms to extract useful information.
  • The sensor is expensive compared to Charge-Coupled Device (CCD) or Complementary Metal Oxide Semiconductor (CMOS) cameras.
  • The resolution of an FIR camera is low in comparison with a visible camera, and it provides grayscale images; because of this, fast-changing movements of objects are challenging to detect and classify in real time.
  • Since FIR systems rely on temperature differences, it is often difficult to distinguish specific targets of interest in cold-climate scenarios.
  • Partial occlusion of the target causes classifiers to miss it (for example, a pedestrian standing behind a car or a group of overlapping pedestrians); solutions to this problem have been studied. In addition, FIR cameras cannot provide information about the distance to obstacles.
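The temperature sensitivity listed for ultrasonic sensors in Table 3 stems from the dependence of the speed of sound on air temperature, an effect treated in references 139 and 148. The short Python sketch below uses the common dry-air approximation c = 331.3 + 0.606·T m/s (T in °C) to show how the same echo time maps to different distances in winter and summer; the time-of-flight value is illustrative, and this simple linear model ignores humidity and wind.

# Approximate speed of sound in dry air [m/s]; temp_c in degrees Celsius.
def speed_of_sound(temp_c: float) -> float:
    return 331.3 + 0.606 * temp_c

# Distance to target [m] from a round-trip (echo) time of flight [s].
def echo_distance(tof_s: float, temp_c: float) -> float:
    return speed_of_sound(temp_c) * tof_s / 2.0

tof = 0.006  # an illustrative 6 ms round trip, roughly parking-assist range
print(f"-20 C: {echo_distance(tof, -20.0):.3f} m")  # about 0.958 m
print(f"+35 C: {echo_distance(tof, 35.0):.3f} m")   # about 1.058 m

The roughly 10% shift in apparent range over this temperature swing illustrates why temperature compensation is commonly applied in ultrasonic ranging.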
Table 4. A possible combination of sensors for all-weather navigation.
Columns: Application | Radar (Short Range, Medium Range, Long Range) | Ultrasonic | Lidar (Short Range, Medium Range, Long Range) | Camera (Monocular Camera, Stereo Camera) | Far-Infrared
Applications: Adaptive Cruise Control; Forward Collision Avoidance; Road/Traffic Sign Recognition; Traffic Jam Assist; Lane Departure and Lane Keeping Assistance; Blind Spot Monitoring; Parking Assistance
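To illustrate how the qualitative weather behaviour summarized in Table 3 can drive combinations of the kind suggested in Table 4, the following Python sketch encodes per-sensor weather-robustness flags and filters a candidate sensor set for a few applications. The candidate sets, the sensor names, and the selection rule are hypothetical assumptions for illustration; they do not reproduce the combinations chosen in the original table.

from typing import Dict, List

# Weather-robustness flags following the qualitative summary in Table 3.
WEATHER_ROBUST: Dict[str, bool] = {
    "radar": True,          # can see through poor visibility
    "ultrasonic": True,     # short range, but largely unaffected by precipitation
    "far-infrared": True,   # better vision through dust, fog, and snow
    "lidar": False,         # influenced by varying climatic conditions
    "camera": False,        # sensitive to heavy rain, fog, and snowfall
}

# Hypothetical candidate sensors per application (placeholders, not Table 4).
CANDIDATES: Dict[str, List[str]] = {
    "Adaptive Cruise Control": ["radar", "lidar", "camera"],
    "Forward Collision Avoidance": ["radar", "camera", "far-infrared", "lidar"],
    "Parking Assistance": ["ultrasonic", "camera"],
}

def all_weather_subset(application: str) -> List[str]:
    """Keep only the weather-robust candidates for the given application."""
    return [s for s in CANDIDATES.get(application, []) if WEATHER_ROBUST[s]]

for app in CANDIDATES:
    print(f"{app}: {', '.join(all_weather_subset(app)) or 'none robust'}")

In practice, the complementary pairing would also account for range class, resolution, contrast, and cost, as captured by the spider-chart criteria of Table 1.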