Editorial

Perception Sensors for Road Applications

Felipe Jiménez
Instituto Universitario de Investigación del Automóvil (INSIA), Universidad Politécnica de Madrid, 28031 Madrid, Spain
Sensors 2019, 19(23), 5294; https://doi.org/10.3390/s19235294
Submission received: 25 November 2019 / Accepted: 27 November 2019 / Published: 1 December 2019
(This article belongs to the Special Issue Perception Sensors for Road Applications)

1. Introduction

New assistance systems and autonomous driving applications for road vehicles place ever-greater demands on perception systems, which must increase the robustness of decisions and avoid both false positives and false negatives.
Many technologies can be used for this purpose, both in vehicles and in the infrastructure. In the first case, technologies such as LiDAR and computer vision are the basis for growth in vehicle automation levels, although their actual deployment also reveals problems that arise in real scenarios and must be solved to continue improving the safety and efficiency of road traffic.
Given the limitations of each of these technologies, it is common to resort to sensor fusion, both among sensors of the same type and among sensors of different types.
Additionally, data for decision-making do not come only from on-board sensors: wireless communication with the outside world gives vehicles an extended electronic horizon. Likewise, positioning on precise, detailed digital maps provides additional information that can be very useful for interpreting the environment.
Sensors also monitor the driver in order to assess his or her ability to perform driving tasks safely.
In all of these areas, it is crucial to study the limitations of each solution and sensor, and to establish tools that alleviate those limitations, whether through improvements in hardware or in software. To this end, the specifications required of the sensors must be established, and specific methods must be developed to validate those specifications, both for the sensors and for the complete systems.
In conclusion, this Special Issue brings together innovative developments related to sensors in vehicles and the use of their information in assistance systems and autonomous vehicles.

2. Papers in the Special Issue

As assistance and automation increase in road vehicles, the requirements of perception systems rise significantly, and new solutions emerge in research and on the market. Reference [1] presents a systematic review of perception systems and simulators for autonomous vehicles. The work is divided into three parts. In the first, perception systems are categorized as environment perception systems or positioning estimation systems. In the second, the main elements to be taken into account when simulating the perception system of an autonomous vehicle (AV) are presented. Finally, the current state of the regulations being applied in different countries around the world to the deployment of autonomous vehicles is reviewed.
The number of small, sophisticated wireless sensors sharing the electromagnetic spectrum is expected to grow rapidly over the next decade, and interference between these sensors is anticipated to become a major challenge. In Reference [2], the interference mechanisms in one such sensor, the automotive radar, are studied, and the results are directly applicable to a range of other sensing situations.
One of the most common applications of perception systems in vehicles is obstacle detection. In this field, several technologies have been used for years and new algorithms have tried to obtain more robust and efficient results under complex scenarios.
A robust Multiple Object Detection and Tracking (MODT) algorithm for a non-stationary base is presented in Reference [3], using multiple 3D LiDARs for perception. The merged LiDAR data are processed with an efficient MODT framework that considers the limitations of the vehicle-embedded computing environment. Ground classification is performed with a grid-based method that accounts for non-planar ground. Furthermore, unlike prior works, a 3D grid-based clustering technique is developed to detect objects under elevated structures. The centroid measurements obtained from object detection are tracked with an Interacting Multiple Model–Unscented Kalman Filter–Joint Probabilistic Data Association Filter (IMM-UKF-JPDAF).
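The grid-based clustering idea can be illustrated with a minimal 2D sketch: points are binned into grid cells, and connected occupied cells form one object candidate. The cell size, 4-connectivity, and function name here are illustrative assumptions, not the 3D method or parameters of Reference [3].

```python
import numpy as np
from collections import deque

def grid_cluster(points, cell=0.5):
    """Cluster 2D points by occupancy-grid connectivity.

    points: (N, 2) array of x, y coordinates in metres.
    cell:   grid resolution in metres.
    Returns one label per point; points falling in 4-connected
    occupied cells share a label.
    """
    ij = np.floor(points / cell).astype(int)
    occupied = {tuple(c) for c in ij}
    labels = {}
    next_label = 0
    for start in occupied:
        if start in labels:
            continue
        # Flood fill over 4-connected occupied neighbours.
        queue = deque([start])
        labels[start] = next_label
        while queue:
            i, j = queue.popleft()
            for n in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if n in occupied and n not in labels:
                    labels[n] = next_label
                    queue.append(n)
        next_label += 1
    return np.array([labels[tuple(c)] for c in ij])
```

A grid pass like this is cheap (linear in the number of points), which is why grid-based clustering suits embedded automotive computing.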
Reference [4] presents an efficient moving object detection algorithm that can cope with moving-camera environments. In addition, a hardware design and implementation results for real-time processing of the proposed algorithm are presented. The proposed moving object detector was designed in a hardware description language (HDL), and its real-time performance was evaluated on an FPGA-based test system.
A computationally low-cost and robust system for detecting and tracking moving objects (DATMO), which uses only 2D laser rangefinder information as input, is presented in Reference [5]. Owing to its low requirements in both sensing and computation, the DATMO algorithm is meant to be used in current automated guided vehicles to improve the reliability of cargo transportation at port terminals, advancing towards the next generation of fully autonomous transportation vehicles.
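As a rough illustration of how little input such a system needs, a single 2D scan can be split into object candidates simply by detecting range discontinuities between consecutive beams. The fixed gap threshold below is an assumption for illustration; real DATMO pipelines, including Reference [5], typically use more elaborate segmentation rules.

```python
import numpy as np

def segment_scan(ranges, angles, jump=0.5):
    """Split a 2D laser scan into segments at range discontinuities.

    ranges, angles: 1D arrays of polar measurements (metres, radians).
    jump: Euclidean gap (metres) between consecutive points that
          starts a new segment (fixed threshold for illustration).
    Returns a list of index arrays, one per segment.
    """
    # Convert to Cartesian coordinates to measure point-to-point gaps.
    xy = np.column_stack((ranges * np.cos(angles),
                          ranges * np.sin(angles)))
    gaps = np.linalg.norm(np.diff(xy, axis=0), axis=1) > jump
    boundaries = np.flatnonzero(gaps) + 1
    return np.split(np.arange(len(ranges)), boundaries)
```

Each segment can then be fed to a classifier or tracker, keeping the whole pipeline within the computational budget of an on-board controller.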
Continuous-waveform radars are widely used in intelligent transportation systems. Among the available waveforms, the chirp sequence waveform can extract the range and velocity of multiple targets. Reference [6] proposes a new waveform that meets practical application requirements, namely high precision and low system complexity. Theoretical analysis and simulation results verify that the new radar waveform can measure range and radial velocity simultaneously and unambiguously, with high accuracy and resolution, even in multi-target situations.
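The classical chirp-sequence processing chain that such waveforms build on can be sketched as a 2D FFT over the matrix of sampled beat signals: the fast-time axis yields range and the slow-time axis yields velocity. The carrier, bandwidth, and timing parameters below are illustrative assumptions, not those of the waveform in Reference [6].

```python
import numpy as np

# Chirp-sequence processing: a 2D FFT over the beat-signal matrix
# separates range (fast-time axis) from velocity (slow-time axis).
c = 3e8            # speed of light, m/s
f0 = 77e9          # carrier frequency, Hz (assumed)
B = 150e6          # chirp bandwidth, Hz (assumed)
T = 50e-6          # chirp duration, s (assumed)
fs = 2e6           # ADC sampling rate, Hz (assumed)
n_chirps, n_samples = 128, 100

# Simulate the beat signal of a single target at range R and velocity v.
R, v = 30.0, 10.0
t = np.arange(n_samples) / fs
f_beat = 2 * B * R / (c * T) + 2 * v * f0 / c   # range beat + Doppler shift
beat = np.array([np.exp(1j * 2 * np.pi * (f_beat * t + 2 * f0 * v * k * T / c))
                 for k in range(n_chirps)])

# Range-Doppler map: FFT over samples (range) and over chirps (velocity).
rd_map = np.fft.fftshift(np.fft.fft2(beat), axes=0)
dop_bin, rng_bin = np.unravel_index(np.argmax(np.abs(rd_map)), rd_map.shape)
est_R = rng_bin * fs * c * T / (2 * B * n_samples)
est_v = (dop_bin - n_chirps // 2) * c / (2 * f0 * n_chirps * T)
```

With these parameters the peak of the range-Doppler map recovers the simulated 30 m range and 10 m/s radial velocity to within one bin, which is the mechanism new waveform designs refine for multi-target unambiguity.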
Another classical use of perception systems is the characterization of the scenario and the road. An Extended Line Map (ELM)-based precise vehicle localization method is proposed in Reference [7] and implemented using a 3D Light Detection and Ranging (LiDAR) sensor. A binary occupancy grid map, in which cells corresponding to road markings or vertical structures have a value of one and the rest have a value of zero, is created from the reflectivity and distance data of the 3D LiDAR.
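Such a binary grid can be sketched in a few lines: LiDAR returns are kept if they are highly reflective (road markings) or tall (vertical structures), then rasterized into cells. The thresholds, cell size, and extent below are illustrative assumptions, not values from Reference [7].

```python
import numpy as np

def binary_grid(points, reflectivity, cell=0.2, extent=50.0,
                refl_thresh=0.7, height_thresh=1.0):
    """Binary occupancy grid: 1 for road markings (high reflectivity)
    or vertical structures (tall points), 0 elsewhere.

    points:       (N, 3) array of x, y, z in metres.
    reflectivity: (N,) array of normalized intensities in [0, 1].
    The grid covers [-extent, extent] in x and y at `cell` resolution.
    """
    n = int(2 * extent / cell)
    grid = np.zeros((n, n), dtype=np.uint8)
    # Keep returns that look like markings or vertical structures.
    keep = (reflectivity > refl_thresh) | (points[:, 2] > height_thresh)
    ij = ((points[keep, :2] + extent) / cell).astype(int)
    inside = (ij >= 0).all(axis=1) & (ij < n).all(axis=1)
    grid[ij[inside, 1], ij[inside, 0]] = 1   # row = y index, col = x index
    return grid
```

Matching such a sparse binary map against a stored one is what makes localization both precise and computationally cheap.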
Furthermore, vision-based lane-detection methods provide low-cost, dense information about roads. A robust and efficient method that expands the application of these methods to cover low-speed environments is presented in Reference [8].
Moreover, perception sensors are also used to detect and characterize the driver and other passengers. Perhaps the least intrusive physiology-based approach to monitoring driver drowsiness is to use cameras to detect facial expressions remotely. A multi-timescale drowsiness characterization system composed of four binary drowsiness classifiers, operating at four distinct timescales and trained jointly, is developed in Reference [9].
Finally, the information retrieved by perception sensors can be used by decision-making systems. The first step of decision-making in an autonomous vehicle or an assistance system is understanding the environment. Reference [10] presents three ways of modelling traffic in a roundabout, quite a critical scenario, based on: (i) the roundabout geometry; (ii) the mean path taken by vehicles inside the roundabout; and (iii) a set of reference trajectories traversed by vehicles inside the roundabout.
Reference [11] presents a machine learning-based technique to build a predictive model and to generate rules of action to allow autonomous vehicles to perform roundabout maneuvers. The approach consists of building a predictive model of vehicle speeds and steering angles based on collected data that are related to driver–vehicle interactions and other aggregated data intrinsic to the traffic environment.
Reference [12] presents a path-planning algorithm based on potential fields. The potential models are adjusted so that their behavior suits the environment and the dynamics of the vehicle, allowing the system to face almost any unexpected scenario. The response of the system considers the road characteristics (e.g., maximum speed, lane line curvature) and the presence of obstacles and other users.
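The classic potential-field formulation behind such planners combines an attractive potential toward the goal with repulsive potentials around obstacles, then follows the negative gradient. The gains, influence radius, and step size below are illustrative defaults, not the tuned vehicle-dynamics-aware models of Reference [12].

```python
import numpy as np

def potential_gradient(pos, goal, obstacles, k_att=1.0, k_rep=10.0, d0=2.0):
    """Gradient of a classic potential field at position `pos`.

    Attractive: U_att = 0.5 * k_att * ||pos - goal||^2
    Repulsive:  U_rep = 0.5 * k_rep * (1/d - 1/d0)^2  for d < d0
    """
    grad = k_att * (pos - goal)                 # pulls toward the goal
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0 < d < d0:                          # inside the influence radius
            grad += k_rep * (1.0 / d0 - 1.0 / d) / d**3 * diff
    return grad

def plan(start, goal, obstacles, step=0.05, iters=2000):
    """Follow the negative gradient until close to the goal."""
    pos = np.asarray(start, dtype=float)
    path = [pos.copy()]
    for _ in range(iters):
        pos = pos - step * potential_gradient(pos, goal, obstacles)
        path.append(pos.copy())
        if np.linalg.norm(pos - goal) < 0.1:
            break
    return np.array(path)
```

Pure gradient descent like this can stall in local minima; adapting the potential shapes to the road geometry and vehicle dynamics, as in Reference [12], is precisely what mitigates that limitation.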

Funding

This work received no external funding.

Acknowledgments

Thanks are due to all the authors for their valuable collaboration and contributions to this Special Issue. All papers submitted to the call underwent a rigorous refereeing process as full manuscripts, with accepted papers receiving final revision and approval for publication after a second or third round of reviewing. Gratitude is also owed to the international team of reviewers for their diligence in assessing the papers and for the thoughtful, constructive criticism to which they dedicated great effort and time.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Rosique, F.; Navarro, P.J.; Fernández, C.; Padilla, A. A Systematic Review of Perception System and Simulators for Autonomous Vehicles Research. Sensors 2019, 19, 648.
  2. Skaria, S.; Al-Hourani, A.; Evans, R.J.; Sithamparanathan, K.; Parampalli, U. Interference Mitigation in Automotive Radars Using Pseudo-Random Cyclic Orthogonal Sequences. Sensors 2019, 19, 4459.
  3. Sualeh, M.; Kim, G.-W. Dynamic Multi-LiDAR Based Multiple Object Detection and Tracking. Sensors 2019, 19, 1474.
  4. Cho, J.; Jung, Y.; Kim, D.-S.; Lee, S.; Jung, Y. Moving Object Detection Based on Optical Flow Estimation and a Gaussian Mixture Model for Advanced Driver Assistance Systems. Sensors 2019, 19, 3217.
  5. Vaquero, V.; Repiso, E.; Sanfeliu, A. Robust and Real-Time Detection and Tracking of Moving Objects with Minimum 2D LiDAR Information to Advance Autonomous Cargo Handling in Ports. Sensors 2019, 19, 107.
  6. Wang, W.; Du, J.; Gao, J. Multi-Target Detection Method Based on Variable Carrier Frequency Chirp Sequence. Sensors 2018, 18, 3386.
  7. Im, J.-H.; Im, S.-H.; Jee, G.-I. Extended Line Map-Based Precise Vehicle Localization Using 3D LIDAR. Sensors 2018, 18, 3179.
  8. Li, Q.; Zhou, J.; Li, B.; Guo, Y.; Xiao, J. Robust Lane-Detection Method for Low-Speed Environments. Sensors 2018, 18, 4274.
  9. Massoz, Q.; Verly, J.G.; Van Droogenbroeck, M. Multi-Timescale Drowsiness Characterization Based on a Video of a Driver’s Face. Sensors 2018, 18, 2801.
  10. Muhammad, N.; Åstrand, B. Predicting Agent Behaviour and State for Applications in a Roundabout-Scenario Autonomous Driving. Sensors 2019, 19, 4279.
  11. García Cuenca, L.; Sanchez-Soriano, J.; Puertas, E.; Fernandez Andrés, J.; Aliane, N. Machine Learning Techniques for Undertaking Roundabouts in Autonomous Driving. Sensors 2019, 19, 2386.
  12. Martínez, C.; Jiménez, F. Implementation of a Potential Field-Based Decision-Making Algorithm on Autonomous Vehicles for Driving in Complex Environments. Sensors 2019, 19, 3318.

Share and Cite

MDPI and ACS Style

Jiménez, F. Perception Sensors for Road Applications. Sensors 2019, 19, 5294. https://doi.org/10.3390/s19235294
