Sensors and Sensor's Fusion in Autonomous Vehicles

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (30 June 2021) | Viewed by 107884

Special Issue Editors


Prof. Dr. Andrzej Stateczny
Guest Editor
Department of Geodesy, Faculty of Civil and Environmental Engineering, Gdansk University of Technology, Narutowicza St. 11/12, Gdansk, Poland
Interests: radar navigation; comparative (terrain reference) navigation; multi-sensor data fusion; automotive navigation; radar and sonar target detection and tracking; sonar imaging and understanding; MBES bathymetry; autonomous navigation; artificial intelligence for navigation; deep learning; geoinformatics; underwater navigation

Dr. Marta Wlodarczyk-Sielicka
Guest Editor
Department of Navigation, Maritime University of Szczecin, Waly Chrobrego 1-2, 70-500 Szczecin, Poland
Interests: spatial big data; spatial analysis; artificial neural networks; deep learning; data fusion; processing of bathymetric data; sea bottom modeling; data reduction

Dr. Pawel Burdziakowski
Guest Editor
Department of Geodesy, Faculty of Civil and Environmental Engineering, Gdansk University of Technology, Narutowicza St. 11/12, Gdansk, Poland
Interests: unmanned aerial vehicle technology; autonomous navigation; neural networks; non-GNSS navigation; photogrammetry; real-time photogrammetry; computer vision

Special Issue Information

Dear Colleagues,

This Special Issue seeks the submission of review and original research articles related to sensors and sensor fusion in autonomous vehicles. Autonomous vehicle navigation has been at the centre of several major developments, both in civilian and defence applications. New technologies such as multi-sensor data fusion, big data processing, and deep learning are improving the quality of applications and of the sensors and systems they rely on. New ideas such as 3D radar, 3D sonar, and LiDAR are driven by the revolutionary development of autonomous vehicles.

The Special Issue is open to contributions dealing with many aspects of autonomous vehicle sensors and their fusion, such as autonomous navigation, multi-sensor fusion, big data processing for autonomous vehicle navigation, sensor-related science and research, algorithm and technical development, analysis tools, synergy with sensors in navigation, and artificial intelligence methods for autonomous vehicle navigation.

Prof. Dr. Andrzej Stateczny
Dr. Marta Wlodarczyk-Sielicka
Dr. Pawel Burdziakowski
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Sensor’s fusion;
  • Sensors for autonomous navigation;
  • Comparative (terrain reference) navigation;
  • 3D radar and 3D sonar;
  • Gravity and geomagnetic sensors;
  • LiDAR;
  • Artificial intelligence in autonomous vehicles;
  • Big data processing;
  • Close-range photogrammetry and computer vision methods;
  • Deep learning algorithms;
  • Fusion of spatial data;
  • Processing of sensor data.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (18 papers)


Editorial


8 pages, 207 KiB  
Editorial
Sensors and Sensor’s Fusion in Autonomous Vehicles
by Andrzej Stateczny, Marta Wlodarczyk-Sielicka and Pawel Burdziakowski
Sensors 2021, 21(19), 6586; https://doi.org/10.3390/s21196586 - 1 Oct 2021
Cited by 7 | Viewed by 4771
Abstract
Autonomous vehicle navigation has been at the center of several major developments, both in civilian and defense applications [...] Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)

Research


16 pages, 3517 KiB  
Article
The Algorithm of Determining an Anti-Collision Manoeuvre Trajectory Based on the Interpolation of Ship’s State Vector
by Piotr Borkowski, Zbigniew Pietrzykowski and Janusz Magaj
Sensors 2021, 21(16), 5332; https://doi.org/10.3390/s21165332 - 6 Aug 2021
Cited by 16 | Viewed by 3357
Abstract
The determination of a ship’s safe trajectory in collision situations at sea is one of the basic functions in the autonomous navigation of ships. While planning a collision-avoiding manoeuvre in open waters, the navigator has to take into account the ship’s manoeuvrability and hydrometeorological conditions. To this end, the ship’s state vector is predicted—position coordinates, speed, heading, and other movement parameters—at fixed time intervals for different steering scenarios. One possible way to solve this problem is a method using the interpolation of the ship’s state vector based on data from measurements conducted during the ship’s sea trials. This article presents the interpolating function within any convex quadrilateral with the nodes being its vertices. The proposed function interpolates the parameters of the ship’s state vector for a specified point of a plane, where the values in the interpolation nodes are data obtained from measurements performed during a series of turning circle tests, conducted for different starting conditions and various rudder settings. The proposed method of interpolation was used in the process of determining the anti-collision manoeuvre trajectory. The mechanism is based on the principles of a modified Dijkstra algorithm, in which the graph takes the form of a regular network of points. The transition between the graph vertices depends on the safe passing level of other objects and the degree of departure from the planned route. The determined shortest path between the starting vertex and the target vertex is the optimal solution for the discrete space of solutions. The algorithm for determining the trajectory of the anti-collision manoeuvre was implemented in autonomous sea-going vessel technology. This article presents the results of laboratory tests and tests conducted under quasi-real conditions using physical ship models. The experiments confirmed the effective operation of the developed algorithm for determining the anti-collision manoeuvre trajectory within the technological framework of autonomous ship navigation. Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
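The core of the planning mechanism described in this abstract—a modified Dijkstra search over a regular network of points, with transitions penalized by unsafe passing distances and by departure from the planned route—can be illustrated with a short sketch. This is a minimal illustration under assumed cost weights, not the authors' implementation; the `obstacles`, `route`, and penalty parameters are hypothetical placeholders.

```python
import heapq
import math

def plan_trajectory(grid_w, grid_h, start, goal, obstacles, route,
                    safe_dist=2.0, w_safety=10.0, w_route=1.0):
    """Dijkstra over a regular grid of waypoints; the transition cost
    combines travelled distance, a penalty for passing obstacles closer
    than safe_dist, and a penalty for departing from the planned route.
    Weights and grid are illustrative only."""
    def penalty(node):
        safety = sum(max(0.0, safe_dist - math.dist(node, o)) for o in obstacles)
        departure = min(math.dist(node, r) for r in route)
        return w_safety * safety + w_route * departure

    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                v = (u[0] + dx, u[1] + dy)
                if v == u or not (0 <= v[0] < grid_w and 0 <= v[1] < grid_h):
                    continue
                nd = d + math.dist(u, v) + penalty(v)
                if nd < dist.get(v, math.inf):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    if goal not in prev and goal != start:
        return None  # no path found on this grid
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```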

18 pages, 7935 KiB  
Article
Pole-Like Object Extraction and Pole-Aided GNSS/IMU/LiDAR-SLAM System in Urban Area
by Tianyi Liu, Le Chang, Xiaoji Niu and Jingnan Liu
Sensors 2020, 20(24), 7145; https://doi.org/10.3390/s20247145 - 13 Dec 2020
Cited by 10 | Viewed by 3938
Abstract
Vision-based sensors such as LiDAR (Light Detection and Ranging) are adopted in SLAM (Simultaneous Localization and Mapping) systems. In a 16-beam LiDAR-aided SLAM system, due to the difficulty of object detection with sparse laser data, neither grid-based nor feature-point-based solutions can avoid the interference of moving objects. In an urban environment, pole-like objects are common and invariant and have distinguishing characteristics, which makes them suitable for providing more robust and reliable positioning as auxiliary information in the process of vehicle positioning and navigation. In this work, we propose a SLAM system using a GNSS (Global Navigation Satellite System), an IMU (Inertial Measurement Unit), and a LiDAR sensor, with the positions of pole-like objects as the features for SLAM. The scheme combines a traditional preprocessing method and a small-scale artificial neural network to extract the pole-like objects in the environment. First, a threshold-based method is used to extract pole-like object candidates from the point cloud, and then the neural network is applied for training and inference to obtain the pole-like objects. The results show that the accuracy and recall rate are sufficient to provide stable observations for the subsequent SLAM process. After extracting the poles from the LiDAR point cloud, their coordinates are added to the feature map, and the nonlinear optimization of the front end is carried out by utilizing the distance constraints corresponding to the pole coordinates; then, the heading angle and horizontal plane translation are estimated. The ground feature points are used to enhance the elevation, pitch, and roll angle accuracy. The performance of the proposed navigation system is evaluated through field experiments by checking the position drift and attitude errors during multiple simulated two-minute GNSS outages without additional IMU motion constraints such as the NHC (Non-Holonomic Constraint). The experimental results show that the performance of the proposed scheme is superior to that of conventional feature-point grid-based SLAM with the same back end, especially at congested crossroads where the vehicle is surrounded by slow-moving traffic and pole-like objects are abundant in the environment. The mean plane position error during the two-minute GNSS outages was reduced by 38.5%, and the root mean square error was reduced by 35.3%. Therefore, the proposed pole-like feature-based GNSS/IMU/LiDAR SLAM system can fuse condensed information from these sensors effectively to mitigate positioning and orientation errors, even in a short-term GNSS-denied environment. Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
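The threshold-based candidate stage described in this abstract (extracting pole-like candidates from the point cloud before the neural network classifies them) might look roughly like the sketch below: cluster the cloud in the horizontal plane and keep clusters that are tall and thin. The clustering choice (DBSCAN) and all geometric thresholds are assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_pole_candidates(points, eps=0.3, min_pts=15,
                            min_height=1.5, max_radius=0.3):
    """Threshold-based pole-candidate extraction from an N x 3 point
    cloud (illustrative thresholds). Clusters points in the XY plane
    and returns centroids of clusters that are tall and thin; these
    candidates would then be passed to the neural network classifier."""
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(points[:, :2])
    candidates = []
    for lbl in set(labels) - {-1}:          # -1 marks DBSCAN noise
        cluster = points[labels == lbl]
        height = cluster[:, 2].max() - cluster[:, 2].min()
        centroid_xy = cluster[:, :2].mean(axis=0)
        radius = np.linalg.norm(cluster[:, :2] - centroid_xy, axis=1).max()
        if height >= min_height and radius <= max_radius:
            candidates.append(centroid_xy)
    return np.array(candidates)
```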

20 pages, 4462 KiB  
Article
Processing of Bathymetric Data: The Fusion of New Reduction Methods for Spatial Big Data
by Marta Wlodarczyk-Sielicka and Wioleta Blaszczak-Bak
Sensors 2020, 20(21), 6207; https://doi.org/10.3390/s20216207 - 30 Oct 2020
Cited by 9 | Viewed by 3052
Abstract
Floating autonomous vehicles are very often equipped with modern systems that collect information about the situation under the water surface, e.g., the depth or the type of bottom and obstructions on the seafloor. One such system is the multibeam echosounder (MBES), which collects very large sets of bathymetric data. The development and analysis of such large sets are laborious and expensive. Reduction of the spatial data obtained from bathymetric and other systems collecting spatial data is currently widely used. In commercial programs used in the development of data from hydrographic systems, methods of interpolation to a specific mesh size are very frequently used. The authors of this article previously proposed the original true bathymetric data reduction (TBDRed) and Optimum Dataset (OptD) reduction methods, which maintain the actual position and depth of each measured point, without interpolation. The effectiveness of the proposed methods has already been presented in previous articles. This article proposes the fusion of the original reduction methods, which is a new and innovative approach to the problem of bathymetric data reduction. The article contains a description of the methods used and the methodology of developing bathymetric data. The proposed fusion of reduction methods allows the generation of numerical models that can be a safe, reliable source of information and a basis for design. Numerical models can also be used in comparative navigation, during the creation of electronic navigation maps and other hydrographic products. Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
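For contrast with the interpolation-to-mesh approach criticized above, here is a generic sketch of a reduction that keeps only true measured soundings—the defining property the abstract attributes to TBDRed and OptD. It is explicitly not either of those algorithms: it simply keeps the shallowest real sounding per grid cell, a common navigation-safety-oriented heuristic.

```python
import numpy as np

def reduce_soundings(xyz, cell=5.0):
    """Generic grid reduction that keeps one *true* sounding per cell
    (the shallowest, i.e., most navigationally critical) instead of
    interpolating new depths. Illustrative only; not TBDRed/OptD.
    xyz: N x 3 array of (x, y, depth), depth positive down."""
    cells = {}
    keys = np.floor(xyz[:, :2] / cell).astype(int)
    for key, point in zip(map(tuple, keys), xyz):
        # Keep the shallowest measured depth seen in each cell.
        if key not in cells or point[2] < cells[key][2]:
            cells[key] = point
    return np.array(list(cells.values()))
```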

28 pages, 8295 KiB  
Article
Clothoid: An Integrated Hierarchical Framework for Autonomous Driving in a Dynamic Urban Environment
by Saba Arshad, Muhammad Sualeh, Dohyeong Kim, Dinh Van Nam and Gon-Woo Kim
Sensors 2020, 20(18), 5053; https://doi.org/10.3390/s20185053 - 5 Sep 2020
Cited by 11 | Viewed by 4336
Abstract
In recent years, the research and development of autonomous driving technology have gained much interest. Many autonomous driving frameworks have been developed in the past. However, building a safely operating, fully functional autonomous driving framework is still a challenge. Several accidents have occurred with autonomous vehicles, including the Tesla and Volvo XC90, resulting in serious personal injuries and death. One of the major reasons is the increase in urbanization and mobility demands. Autonomous vehicles are expected to increase road safety by reducing the road accidents that occur due to human error. The accurate sensing of the environment and safe driving under various scenarios must be ensured to achieve the highest level of autonomy. This research presents Clothoid, a unified framework for fully autonomous vehicles that integrates the modules of HD mapping, localization, environmental perception, path planning, and control while considering safety, comfort, and scalability in real traffic environments. The proposed framework enables obstacle avoidance, pedestrian safety, object detection, road blockage avoidance, path planning for single-lane and multi-lane routes, and safe driving of the vehicle throughout the journey. The performance of each module has been validated in K-City under multiple scenarios, where Clothoid was driven safely from the starting point to the goal point. The vehicle was one of the top five to successfully finish the Hyundai Autonomous Vehicle Challenge (AVC). Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
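The hierarchical structure named in this abstract (HD mapping, localization, perception, planning, control) can be summarized as a per-cycle pipeline. The interfaces below are trivial placeholders to show the data flow only; they are assumptions, not Clothoid's actual module APIs.

```python
class Localizer:
    def update(self, sensors, hd_map):        # placeholder localization
        return sensors["gnss_pose"]

class Perception:
    def detect(self, sensors):                # placeholder obstacle list
        return sensors.get("obstacles", [])

class Planner:
    def plan(self, pose, obstacles, hd_map):  # placeholder straight path
        return [pose, hd_map["goal"]]

class Controller:
    def follow(self, path, pose):             # placeholder command
        return {"steer": 0.0, "throttle": 0.2}

class Pipeline:
    """Hierarchical per-cycle flow in the spirit of Clothoid:
    localization -> perception -> planning -> control."""
    def __init__(self, hd_map, localizer, perception, planner, controller):
        self.hd_map = hd_map
        self.localizer, self.perception = localizer, perception
        self.planner, self.controller = planner, controller

    def step(self, sensor_data):
        pose = self.localizer.update(sensor_data, self.hd_map)
        obstacles = self.perception.detect(sensor_data)
        path = self.planner.plan(pose, obstacles, self.hd_map)
        return self.controller.follow(path, pose)

pipe = Pipeline({"goal": (100.0, 0.0)},
                Localizer(), Perception(), Planner(), Controller())
print(pipe.step({"gnss_pose": (0.0, 0.0), "obstacles": []}))
```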

18 pages, 13735 KiB  
Article
GNSS/IMU/ODO/LiDAR-SLAM Integrated Navigation System Using IMU/ODO Pre-Integration
by Le Chang, Xiaoji Niu and Tianyi Liu
Sensors 2020, 20(17), 4702; https://doi.org/10.3390/s20174702 - 20 Aug 2020
Cited by 64 | Viewed by 11009
Abstract
In this paper, we propose a multi-sensor integrated navigation system composed of a GNSS (global navigation satellite system), an IMU (inertial measurement unit), an odometer (ODO), and LiDAR (light detection and ranging)-SLAM (simultaneous localization and mapping). The dead reckoning results were obtained using the IMU/ODO in the front end. Graph optimization was used to fuse the GNSS position, the IMU/ODO pre-integration results, and the relative position and relative attitude from LiDAR-SLAM to obtain the final navigation results in the back end. The odometer information is introduced in the pre-integration algorithm to mitigate the large drift rate of the IMU. The sliding window method was also adopted to limit the growing number of parameters in the graph optimization. Land vehicle tests were conducted in both open-sky areas and tunnel cases. The tests showed that the proposed navigation system can effectively improve the accuracy and robustness of navigation. During the navigation drift evaluation of simulated two-minute GNSS outages, compared to the conventional GNSS/INS (inertial navigation system)/ODO integration, the root mean square (RMS) of the maximum position drift errors during outages in the proposed navigation system was reduced by 62.8%, 72.3%, and 52.1% along the north, east, and height directions, respectively. Moreover, the yaw error was reduced by 62.1%. Furthermore, compared to the GNSS/IMU/LiDAR-SLAM integrated navigation system, the assistance of the odometer and the non-holonomic constraint reduced the vertical error by 72.3%. The test in the real tunnel case shows that in weak environmental feature areas where LiDAR-SLAM can barely work, the assistance of the odometer in the pre-integration is critical and can effectively reduce the positioning drift along the forward direction and maintain the SLAM process in the short term. Therefore, the proposed GNSS/IMU/ODO/LiDAR-SLAM integrated navigation system can effectively fuse information from multiple sources to maintain the SLAM process and significantly mitigate navigation errors, especially in harsh areas where the GNSS signal is severely degraded and environmental features are insufficient for LiDAR-SLAM. Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
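The role of the odometer in the pre-integration can be sketched in planar form: between two graph keyframes, wheel speed and gyro yaw rate are accumulated into a relative-pose increment that the back end then uses as a factor. This simplified sketch omits the biases, covariance propagation, and Jacobians that real IMU/ODO pre-integration maintains.

```python
import math

def preintegrate_odo_gyro(samples, dt):
    """Accumulate a planar relative-pose increment (dx, dy, dyaw) from
    odometer speed and gyro yaw-rate samples taken between two graph
    keyframes. samples: iterable of (speed_mps, yaw_rate_radps)."""
    dx = dy = yaw = 0.0
    for speed, yaw_rate in samples:
        yaw += yaw_rate * dt                 # integrate heading change
        dx += speed * math.cos(yaw) * dt     # advance along current heading
        dy += speed * math.sin(yaw) * dt
    return dx, dy, yaw

# Example: 100 samples at 100 Hz, constant speed with a gentle turn.
print(preintegrate_odo_gyro([(5.0, 0.05)] * 100, dt=0.01))
```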

18 pages, 6015 KiB  
Article
Extended Kalman Filter (EKF) Design for Vehicle Position Tracking Using Reliability Function of Radar and Lidar
by Taeklim Kim and Tae-Hyoung Park
Sensors 2020, 20(15), 4126; https://doi.org/10.3390/s20154126 - 24 Jul 2020
Cited by 80 | Viewed by 13039
Abstract
Detection and distance measurement using sensors are not always accurate. Sensor fusion makes up for this shortcoming by reducing inaccuracies. This study, therefore, proposes an extended Kalman filter (EKF) that reflects the distance characteristics of lidar and radar sensors. The characteristics of the lidar and radar over distance were analyzed, and a reliability function was designed to extend the Kalman filter to reflect the distance characteristics. The accuracy of position estimation was improved by identifying the sensor errors according to distance. Experiments were conducted using real vehicles, and a comparative experiment was conducted against sensor fusion using fuzzy logic, adaptive measurement noise, and a Kalman filter. The experimental results showed that the proposed method produced accurate distance estimations. Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
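The central idea—inflating each sensor's measurement covariance with a distance-dependent reliability function—can be shown in a single Kalman update step. The linear degradation rates below are invented for illustration; the paper derives its reliability functions from measured lidar/radar error characteristics.

```python
import numpy as np

def reliability_weighted_update(x, P, z, H, base_R, distance, sensor):
    """One Kalman measurement update in which the measurement covariance
    R is inflated by a per-sensor, distance-dependent reliability term.
    The inflation rates below are assumptions, not the paper's values."""
    infl = 1.0 + (0.10 if sensor == "lidar" else 0.02) * distance
    R = base_R * infl
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```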

17 pages, 1317 KiB  
Article
Beam Search Algorithm for Anti-Collision Trajectory Planning for Many-to-Many Encounter Situations with Autonomous Surface Vehicles
by Jolanta Koszelew, Joanna Karbowska-Chilinska, Krzysztof Ostrowski, Piotr Kuczyński, Eric Kulbiej and Piotr Wołejsza
Sensors 2020, 20(15), 4115; https://doi.org/10.3390/s20154115 - 24 Jul 2020
Cited by 12 | Viewed by 2823
Abstract
A single anti-collision trajectory generation problem for the “own” vessel only is significantly different from the challenge of generating a whole set of safe trajectories for multi-surface-vehicle encounter situations in the open sea. Effective solutions for such problems are needed these days, as we are entering the era of autonomous ships. The article specifies the problem of anti-collision trajectory planning in many-to-many encounter situations involving moving autonomous surface vehicles, excluding the Collision Regulations (COLREGs) and vehicle dynamics. The proposed original multi-surface-vehicle beam search algorithm (MBSA), based on the beam search strategy, solves this problem. The general idea of the MBSA involves the application of a solution for one-to-many encounter situations (using the beam search algorithm, BSA), which was tested on real automated radar plotting aid (ARPA) and automatic identification system (AIS) data. The test results for the MBSA were obtained from simulated data and are discussed in the final part of the article. Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
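A schematic of the beam search strategy underlying the BSA/MBSA: at each step, every partial trajectory is expanded with candidate maneuvers and only the `beam_width` lowest-cost ones survive. The `expand`, `cost`, and `is_goal` callbacks stand in for the maneuver model and safety checks (passing distances, etc.) and are assumptions here, not the authors' code.

```python
import heapq

def beam_search_trajectory(start_state, expand, cost, is_goal,
                           beam_width=10, max_steps=50):
    """Generic beam search: keep only the beam_width lowest-cost partial
    trajectories at each step and return the first goal-reaching one."""
    beam = [(0.0, [start_state])]
    for _ in range(max_steps):
        candidates = []
        for c, traj in beam:
            if is_goal(traj[-1]):
                return traj
            for nxt in expand(traj[-1]):       # e.g. course alterations
                candidates.append((c + cost(traj[-1], nxt), traj + [nxt]))
        # Prune to the most promising partial trajectories.
        beam = heapq.nsmallest(beam_width, candidates, key=lambda t: t[0])
        if not beam:
            break
    return min(beam, key=lambda t: t[0])[1] if beam else None
```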

21 pages, 6654 KiB  
Article
Using UAV Photogrammetry to Analyse Changes in the Coastal Zone Based on the Sopot Tombolo (Salient) Measurement Project
by Pawel Burdziakowski, Cezary Specht, Pawel S. Dabrowski, Mariusz Specht, Oktawia Lewicka and Artur Makar
Sensors 2020, 20(14), 4000; https://doi.org/10.3390/s20144000 - 18 Jul 2020
Cited by 34 | Viewed by 3920
Abstract
The main factors influencing the shape of the beach, shoreline, and seabed include undulation, wind, and coastal currents. These phenomena cause continuous and multidimensional changes in the shape of the seabed and the Earth’s surface, and when they occur in an area of intense human activity, they should be constantly monitored. In 2018 and 2019, several measurement campaigns took place in the littoral zone in Sopot, related to the intensive uplift of the seabed and beach caused by the tombolo phenomenon. In this research, a unique combination of bathymetric data obtained from an unmanned surface vessel, photogrammetric data obtained from unmanned aerial vehicles, and ground laser scanning was used, along with geodetic data from precision measurements with receivers of global navigation satellite systems. This paper comprehensively presents the photogrammetric measurements made from unmanned aerial vehicles during these campaigns. It describes in detail the problems of reconstruction within water areas, analyses the accuracy of various photogrammetric measurement techniques, proposes a statistical method of data filtration, and presents the changes that occurred within the study area. The work ends with an interpretation of the causes of changes in the land part of the littoral zone and a summary of the obtained results. Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)

18 pages, 10978 KiB  
Article
Simultaneous Estimation of Vehicle Roll and Sideslip Angles through a Deep Learning Approach
by Lisardo Prieto González, Susana Sanz Sánchez, Javier Garcia-Guzman, María Jesús L. Boada and Beatriz L. Boada
Sensors 2020, 20(13), 3679; https://doi.org/10.3390/s20133679 - 30 Jun 2020
Cited by 30 | Viewed by 5205
Abstract
Presently, autonomous vehicles are on the rise and are expected to be on the roads in the coming years. In this sense, it becomes necessary to have adequate knowledge about their states to design controllers capable of providing adequate performance in all driving scenarios. Sideslip and roll angles are critical parameters in vehicular lateral stability. The latter has a high impact on vehicles with an elevated center of gravity, such as trucks, buses, and industrial vehicles, among others, as they are prone to rollover. Due to the high cost of the current sensors used to measure these angles directly, much of the research is focused on estimating them. One of the drawbacks is that vehicles are strongly non-linear systems that require specific methods able to tackle this feature. The evolution of Artificial Intelligence models, such as the complex Artificial Neural Network architectures that compose the Deep Learning paradigm, has been shown to provide excellent performance for complex and non-linear control problems. In this paper, the authors propose an inexpensive but powerful model based on Deep Learning to estimate the roll and sideslip angles simultaneously in mass-production vehicles. The model uses input signals that can be obtained directly from onboard vehicle sensors, such as the longitudinal and lateral accelerations, the steering angle, and the roll and yaw rates. The model was trained using hundreds of thousands of data samples provided by Trucksim® and validated using data captured from real driving maneuvers using a calibrated ground-truth device, the VBOX3i dual-antenna GPS from Racelogic®. Both the Trucksim® software and the VBOX measuring equipment are recognized and widely used in the automotive sector, providing robust data for the research shown in this article. Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
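Structurally, the estimator described above maps a handful of onboard signals to two angles with a feed-forward network. The following numpy sketch shows only that input/output structure, with assumed layer sizes and untrained weights; the actual model architecture and its Trucksim®-based training are not reproduced here.

```python
import numpy as np

def estimate_roll_sideslip(signals, weights):
    """Forward pass of a small fully connected network mapping onboard
    signals [ax, ay, steering_angle, roll_rate, yaw_rate] to estimated
    [roll_angle, sideslip_angle]. weights: list of (W, b) per layer."""
    h = np.asarray(signals, dtype=float)
    for W, b in weights[:-1]:
        h = np.tanh(W @ h + b)             # hidden layers, tanh activation
    W_out, b_out = weights[-1]
    return W_out @ h + b_out               # linear output: [roll, sideslip]

# Example with assumed layer sizes 5 -> 16 -> 16 -> 2 and random
# (untrained) weights, just to show the shapes involved:
rng = np.random.default_rng(0)
weights = [(0.3 * rng.normal(size=(16, 5)), np.zeros(16)),
           (0.3 * rng.normal(size=(16, 16)), np.zeros(16)),
           (0.3 * rng.normal(size=(2, 16)), np.zeros(2))]
print(estimate_roll_sideslip([0.1, 2.5, 0.05, 0.01, 0.2], weights))
```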

17 pages, 2639 KiB  
Article
Correcting Decalibration of Stereo Cameras in Self-Driving Vehicles
by Jon Muhovič and Janez Perš
Sensors 2020, 20(11), 3241; https://doi.org/10.3390/s20113241 - 7 Jun 2020
Cited by 11 | Viewed by 4106
Abstract
Camera systems in autonomous vehicles are subject to various sources of anticipated and unanticipated mechanical stress (vibration, rough handling, collisions) in real-world conditions. Even moderate changes in camera geometry due to mechanical stress decalibrate multi-camera systems and corrupt downstream applications like depth perception. We propose an on-the-fly stereo recalibration method applicable in real-world autonomous vehicles. The method comprises two parts. First, in the optimization step, the external camera parameters are optimized with the goal of maximising the number of recovered depth pixels. In the second step, an external sensor is used to adjust the scaling of the optimized camera model. The method is lightweight and fast enough to run in parallel with stereo estimation, thus allowing on-the-fly recalibration. Our extensive experimental analysis shows that our method achieves stereo reconstruction better than or on par with manual calibration. If our method is used on a sequence of images, the quality of calibration can be improved even further. Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
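A plausible reading of the optimization step—choose the relative rotation that maximizes the number of recovered depth pixels—can be sketched with standard OpenCV calls: rectify with a candidate rotation, compute a disparity map, count valid pixels, and search over the three rotation parameters. This is a sketch of the idea under assumed inputs (`K1`, `D1`, `K2`, `D2`, `T`, and a grayscale image pair), not the paper's pipeline.

```python
import cv2
import numpy as np
from scipy.optimize import minimize

def recalibration_objective(rvec, K1, D1, K2, D2, T, img_l, img_r):
    """Negative count of valid disparity pixels for a candidate relative
    rotation (Rodrigues vector rvec). Minimizing this picks the
    extrinsic rotation that maximizes recovered depth."""
    size = (img_l.shape[1], img_l.shape[0])
    R, _ = cv2.Rodrigues(rvec)
    R1, R2, P1, P2, *_ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    map_l = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    map_r = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    rect_l = cv2.remap(img_l, map_l[0], map_l[1], cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, map_r[0], map_r[1], cv2.INTER_LINEAR)
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = sgbm.compute(rect_l, rect_r)
    return -int(np.count_nonzero(disparity > 0))

# Derivative-free search over the three rotation parameters
# (K1, D1, K2, D2, T and the image pair are assumed to be available):
# result = minimize(recalibration_objective, np.zeros(3),
#                   args=(K1, D1, K2, D2, T, img_l, img_r),
#                   method="Nelder-Mead")
```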

22 pages, 6434 KiB  
Article
Shoreline Detection and Land Segmentation for Autonomous Surface Vehicle Navigation with the Use of an Optical System
by Stanisław Hożyń and Jacek Zalewski
Sensors 2020, 20(10), 2799; https://doi.org/10.3390/s20102799 - 14 May 2020
Cited by 17 | Viewed by 3931
Abstract
Autonomous surface vehicles (ASVs) are a critical part of recent progressive marine technologies. Their development demands the capability of optical systems to understand and interpret the surrounding landscape. This capability plays an important role in navigation in coastal areas at a safe distance from land, which demands sophisticated image segmentation algorithms. For this purpose, some solutions based on traditional image processing and neural networks have been introduced. However, traditional image processing methods require a set of parameters before execution, while neural networks demand a large database of labelled images. Our new solution, which avoids these drawbacks, is based on adaptive filtering and progressive segmentation. The adaptive filtering is deployed to suppress weak edges in the image, which is convenient for shoreline detection. Progressive segmentation is devoted to distinguishing the sky and land areas, using a probabilistic clustering model to improve performance. To verify the effectiveness of the proposed method, a set of images acquired from the vehicle’s operative camera was utilised. The results demonstrate that the proposed method performs with high accuracy regardless of distance from land or weather conditions. Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
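The two ingredients named in the abstract—suppressing weak edges adaptively, then separating sky from land—can be caricatured in a few lines: an adaptive quantile threshold on the vertical gradient, followed by a per-column pick of the strongest remaining edge as the shoreline. The quantile value and the per-column decision rule are assumptions, far simpler than the paper's progressive segmentation.

```python
import cv2
import numpy as np

def detect_shoreline(image_gray, suppress_quantile=0.7):
    """Per-column shoreline estimate: adaptively suppress weak edges,
    then take the strongest remaining horizontal edge in each column.
    Illustrative stand-in; the quantile threshold is an assumption."""
    # Vertical gradient responds to the (roughly horizontal) shoreline.
    grad = np.abs(cv2.Sobel(image_gray, cv2.CV_32F, 0, 1, ksize=3))
    # Adaptive suppression: zero out edges below a per-image quantile.
    grad[grad < np.quantile(grad, suppress_quantile)] = 0.0
    return grad.argmax(axis=0)    # one shoreline row index per column
```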

17 pages, 2352 KiB  
Article
Research on Target Detection Based on Distributed Track Fusion for Intelligent Vehicles
by Bin Chen, Xiaofei Pei and Zhenfu Chen
Sensors 2020, 20(1), 56; https://doi.org/10.3390/s20010056 - 20 Dec 2019
Cited by 21 | Viewed by 3871
Abstract
Accurate target detection is the basis of normal driving for intelligent vehicles. However, the sensors currently used for target detection have various defects at the perception level, which can be compensated for by sensor fusion technology. In this paper, the application of sensor fusion technology to intelligent vehicle target detection is studied with a millimeter-wave (MMW) radar and a camera. The target-level fusion hierarchy is adopted, and the fusion algorithm is divided into two tracking processing modules and one fusion center module based on a distributed structure. The measurement information output by the two sensors enters the tracking processing modules and, after processing by a multi-target tracking algorithm, the local tracks are generated and transmitted to the fusion center module. In the fusion center module, a two-level association structure is designed based on regional collision association and weighted track association. The association between the two sensors’ local tracks is completed, and a non-reset federated filter is used to estimate the state of the fusion tracks. The experimental results indicate that the proposed algorithm can complete track association between the MMW radar and the camera, and the fusion track state estimation method shows excellent performance. Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
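One half of the two-level association described above (associating local tracks from the two sensors) is commonly done with a Mahalanobis gate; the sketch below shows that generic step. The gate value and nearest-neighbour rule are illustrative stand-ins for the paper's regional-collision plus weighted-track association.

```python
import numpy as np

def associate_tracks(radar_tracks, camera_tracks, gate=9.21):
    """Nearest-neighbour track-to-track association with a Mahalanobis
    gate (9.21 ~ chi-square 99% quantile for 2 degrees of freedom).
    Each track is a (state_2vec, cov_2x2) pair; returns index pairs."""
    pairs = []
    for i, (x_r, P_r) in enumerate(radar_tracks):
        best_j, best_d = None, gate
        for j, (x_c, P_c) in enumerate(camera_tracks):
            diff = x_r - x_c
            d2 = diff @ np.linalg.inv(P_r + P_c) @ diff  # squared Mahalanobis
            if d2 < best_d:
                best_j, best_d = j, d2
        if best_j is not None:
            pairs.append((i, best_j))
    return pairs
```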

14 pages, 3283 KiB  
Article
Multi-Level Features Extraction for Discontinuous Target Tracking in Remote Sensing Image Monitoring
by Bin Zhou, Xuemei Duan, Dongjun Ye, Wei Wei, Marcin Woźniak, Dawid Połap and Robertas Damaševičius
Sensors 2019, 19(22), 4855; https://doi.org/10.3390/s19224855 - 7 Nov 2019
Cited by 34 | Viewed by 3421
Abstract
Many computer vision techniques have been developed in the past years. Feature extraction and matching are the basis of many high-level applications. In this paper, we propose multi-level feature extraction for discontinuous target tracking in remote sensing image monitoring. The features of the reference image are pre-extracted at different levels. The first-level features are used to roughly check the candidate targets, and the other levels are used for refined matching. With a Gaussian weight function introduced, the support of the matching features is accumulated to make a final decision. An adaptive neighborhood and principal component analysis are used to improve the description of the features. Experimental results verify the efficiency and accuracy of the proposed method. Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)

34 pages, 7863 KiB  
Article
Research on Longitudinal Active Collision Avoidance of Autonomous Emergency Braking Pedestrian System (AEB-P)
by Wei Yang, Xiang Zhang, Qian Lei and Xin Cheng
Sensors 2019, 19(21), 4671; https://doi.org/10.3390/s19214671 - 28 Oct 2019
Cited by 67 | Viewed by 9992
Abstract
The AEB-P (Autonomous Emergency Braking Pedestrian) system has the functional requirements of avoiding pedestrian collisions and ensuring pedestrians’ safety. By studying relevant theoretical systems, such as TTC (time to collision) and braking safety distance, an AEB-P warning model was established, and the traffic safety level and working area of the AEB-P warning system were defined. The upper-layer fuzzy neural network controller of the AEB-P system was designed, and the BP (backpropagation) neural network was trained on pedestrian longitudinal anti-collision braking operation data collected from experienced drivers. The fuzzy neural network model was also optimized by introducing a genetic algorithm. The lower-layer controller of the AEB-P system was designed based on PID (proportional-integral-derivative) control theory, which converts the expected speed reduction into vehicle brake pipeline pressure. The relevant pedestrian test scenarios were set up based on the C-NCAP (China New Car Assessment Program) test standards. A CarSim and Simulink co-simulation model of the AEB-P system was established, and a multi-condition simulation analysis was performed. The results showed that the proposed control strategy is credible and reliable and can flexibly allocate early warning and braking time according to changes in actual working conditions, reducing the occurrence of pedestrian collision accidents. Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
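The decision logic that a TTC-based warning model ultimately feeds can be reduced to a few lines: compute the time to collision from range and closing speed, and compare it against warning and braking thresholds. The threshold values here are illustrative, not the calibrated outputs of the paper's fuzzy neural network.

```python
def aeb_p_decision(distance_m, closing_speed_mps,
                   ttc_warn=2.6, ttc_brake=1.6):
    """TTC-based AEB-P decision sketch: warn first, brake when the time
    to collision falls below the braking threshold. Thresholds are
    illustrative assumptions, not the paper's calibrated values."""
    if closing_speed_mps <= 0:
        return "no_action"            # opening range: no collision course
    ttc = distance_m / closing_speed_mps
    if ttc < ttc_brake:
        return "brake"
    if ttc < ttc_warn:
        return "warn"
    return "no_action"

print(aeb_p_decision(20.0, 10.0))     # TTC = 2.0 s -> "warn"
```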

21 pages, 12596 KiB  
Article
Low-Cost Sensors State Estimation Algorithm for a Small Hand-Launched Solar-Powered UAV
by An Guo, Zhou Zhou, Xiaoping Zhu and Fan Bai
Sensors 2019, 19(21), 4627; https://doi.org/10.3390/s19214627 - 24 Oct 2019
Cited by 12 | Viewed by 3654
Abstract
In order to reduce the cost of the flight controller and improve the control accuracy of a solar-powered unmanned aerial vehicle (UAV), three state estimation algorithms based on the extended Kalman filter (EKF) with different structures are proposed: three-stage series, full-state direct, and full-state indirect state estimation algorithms. A small hand-launched solar-powered UAV without ailerons is used as the object with which to compare the algorithms’ structure, estimation accuracy, platform requirements, and application. The three-stage estimation algorithm has a position accuracy of 6 m and is suitable for small, low-cost UAVs with low control precision. The precision of the full-state direct algorithm is 3.4 m, which suits low-cost platforms requiring high trajectory-tracking accuracy. The precision of the full-state indirect method is similar to that of the direct method, but it is more stable for state switching and overall parameter estimation and can be applied to larger platforms. A full-scale electric hand-launched UAV loaded with the three-stage series algorithm was used for the field test. The results verified the feasibility of the estimation algorithm, which obtained a position estimation accuracy of 23 m. Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)

Review


34 pages, 2829 KiB  
Review
The Perception System of Intelligent Ground Vehicles in All Weather Conditions: A Systematic Literature Review
by Abdul Sajeed Mohammed, Ali Amamou, Follivi Kloutse Ayevide, Sousso Kelouwani, Kodjo Agbossou and Nadjet Zioui
Sensors 2020, 20(22), 6532; https://doi.org/10.3390/s20226532 - 15 Nov 2020
Cited by 87 | Viewed by 14958
Abstract
Perception is a vital part of driving. Every year, the loss of visibility due to snow, fog, and rain causes serious accidents worldwide. Therefore, it is important to be aware of the impact of weather conditions on perception performance while driving on highways and in urban traffic in all weather conditions. The goal of this paper is to provide a survey of sensing technologies used to detect the surrounding environment and obstacles during driving maneuvers in different weather conditions. Firstly, some important historical milestones are presented. Secondly, the state-of-the-art automated driving applications (adaptive cruise control, pedestrian collision avoidance, etc.) are introduced with a focus on all-weather activity. Thirdly, the most involved sensor technologies (radar, lidar, ultrasonic, camera, and far-infrared) employed by automated driving applications are studied. Furthermore, the difference between the current and expected states of performance is determined by the use of spider charts. As a result, a fusion perspective is proposed that can fill gaps and increase the robustness of the perception system. Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)

30 pages, 2136 KiB  
Review
A Review of Environmental Context Detection for Navigation Based on Multiple Sensors
by Florent Feriol, Damien Vivet and Yoko Watanabe
Sensors 2020, 20(16), 4532; https://doi.org/10.3390/s20164532 - 13 Aug 2020
Cited by 25 | Viewed by 5010
Abstract
Current navigation systems use multi-sensor data to improve localization accuracy, but often without certainty about the quality of those measurements in certain situations. Context detection will enable us to build an adaptive navigation system that improves the precision and robustness of its localization solution by anticipating possible degradations in sensor signal quality (GNSS in urban canyons, for instance, or camera-based navigation in a non-textured environment). That is why context detection is considered the future of navigation systems. Thus, it is important first to define this concept of context for navigation and to find a way to extract it from the available information. This paper overviews existing GNSS and on-board vision-based solutions for environmental context detection. The review shows that most of the state-of-the-art research works focus on only one type of data, and it confirms that the main perspective of this problem is to combine different indicators from multiple sensors. Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
