Topic Editors

Dr. Yan Huang
State Key Laboratory of Millimeter Waves, School of Information Science and Engineering, Southeast University, Nanjing 210096, China
Dr. Yi Ren
National Laboratory of Radar Signal Processing, School of Electronic Engineering, Xidian University, Xi'an 710071, China
Dr. Penghui Huang
School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
Dr. Jun Wan
School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
Dr. Zhanye Chen
School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
Dr. Shiyang Tang
National Laboratory of Radar Signal Processing, School of Electronic Engineering, Xidian University, Xi'an 710071, China

Information Sensing Technology for Intelligent/Driverless Vehicle, 2nd Volume

Abstract submission deadline
31 March 2025
Manuscript submission deadline
31 May 2025
Viewed by
26679

Topic Information

Dear Colleagues,

This Topic is a continuation of the previous successful Topic “Information Sensing Technology for Intelligent/Driverless Vehicle”.

As the basis for vehicle positioning and path planning, the environmental perception system is an essential part of intelligent/driverless vehicles. It acquires information about the environment around the vehicle, including roads, obstacles, and traffic signs, as well as the vital signs of the driver. In the past few years, environmental perception technology based on various vehicle-mounted sensors (camera, laser, millimeter-wave radar, and GPS/IMU) has made rapid progress. With further research into automatic driving and assisted driving, the information sensing technology of driverless cars has become a research hotspot, and the performance of vehicle-mounted sensors must therefore be improved to cope with the complex driving environments of daily life. In reality, however, many developmental issues remain, such as immature technology, a lack of advanced instruments, and unrealistic experimental environments. All these problems pose great challenges to traditional vehicle-mounted sensor systems and information perception technology, motivating the need for new environmental perception systems, signal processing methods, and even new types of sensors.

This Topic is devoted to highlighting the most advanced studies in the technology, methodology, and applications of sensors mounted on intelligent/driverless vehicles. Papers dealing with fundamental theoretical analyses, as well as those demonstrating applications to real-world and/or emerging problems, are welcome. We welcome original research papers, as well as review articles, in all areas related to sensors mounted on intelligent/driverless vehicles, including, but not limited to, the following suggested topics:

  • Vehicle-mounted millimeter-wave radar technology;
  • Vehicle-mounted LiDAR technology;
  • Vehicle visual sensors;
  • High-precision positioning technology based on GPS/IMU;
  • Multi-sensor data fusion (MSDF);
  • New sensor systems mounted on intelligent/driverless vehicles.

Dr. Yan Huang
Dr. Yi Ren
Dr. Penghui Huang
Dr. Jun Wan
Dr. Zhanye Chen
Dr. Shiyang Tang
Topic Editors

Keywords

  • information sensing technology
  • intelligent/driverless vehicle
  • millimeter-wave radar
  • LiDAR
  • vehicle visual sensor

Participating Journals

Journal Name    Impact Factor   CiteScore   Launched Year   First Decision (median)   APC
Remote Sensing  4.2             8.3         2009            24.7 Days                 CHF 2700
Sensors         3.4             7.3         2001            16.8 Days                 CHF 2600
Smart Cities    7.0             11.2        2018            25.8 Days                 CHF 2000
Vehicles        2.4             4.1         2019            24.7 Days                 CHF 1600
Geomatics       -               -           2021            21.8 Days                 CHF 1000

Preprints.org is a multidisciplinary platform providing a preprint service, dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea from being stolen with this time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (16 papers)

23 pages, 9336 KiB  
Article
MFO-Fusion: A Multi-Frame Residual-Based Factor Graph Optimization for GNSS/INS/LiDAR Fusion in Challenging GNSS Environments
by Zixuan Zou, Guoshuai Wang, Zhenshuo Li, Rui Zhai and Yonghua Li
Remote Sens. 2024, 16(17), 3114; https://doi.org/10.3390/rs16173114 - 23 Aug 2024
Cited by 1 | Viewed by 1004
Abstract
In various practical applications, such as autonomous vehicle and unmanned aerial vehicle navigation, Global Navigation Satellite Systems (GNSSs) are commonly used for positioning. However, traditional GNSS positioning methods are often affected by disturbances due to external observational conditions. For instance, in areas with dense buildings, tree cover, or tunnels, GNSS signals may be obstructed, resulting in positioning failures or decreased accuracy. Therefore, improving the accuracy and stability of GNSS positioning in these complex environments is a critical concern. In this paper, we propose a novel multi-sensor fusion framework based on multi-frame residual optimization for GNSS/INS/LiDAR to address the challenges posed by complex satellite environments. Our system employs a novel residual detection and optimization method for continuous-time GNSS within keyframes. Specifically, we use rough pose measurements from LiDAR to extract keyframes for the global system. Within these keyframes, the multi-frame residuals of GNSS and IMU are estimated using the Median Absolute Deviation (MAD) and subsequently employed for the degradation detection and sliding window optimization of the GNSS. Building on this, we employ a two-stage factor graph optimization strategy, significantly improving positioning accuracy, especially in environments with limited GNSS signals. To validate the effectiveness of our approach, we assess the system’s performance on the publicly available UrbanLoco dataset and conduct experiments in real-world environments. The results demonstrate that our system can achieve continuous decimeter-level positioning accuracy in these complex environments, outperforming other related frameworks. Full article
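The abstract does not give the paper's exact windowing or threshold, but the core of MAD-based degradation detection can be sketched in a few lines of numpy. In this illustration (the residual values, window length, and 3.0 cutoff are all invented for demonstration), a multipath-corrupted GNSS residual is flagged within a window of frames:

```python
import numpy as np

def mad_outlier_mask(residuals, threshold=3.0):
    """Flag degraded measurements using the Median Absolute Deviation (MAD).

    The MAD is scaled by 1.4826 so that, for Gaussian residuals, it
    estimates the standard deviation.
    """
    residuals = np.asarray(residuals, dtype=float)
    median = np.median(residuals)
    mad = 1.4826 * np.median(np.abs(residuals - median))
    if mad == 0.0:
        return np.zeros(residuals.shape, dtype=bool)
    return np.abs(residuals - median) / mad > threshold

# Illustrative GNSS position residuals (metres) over a window of frames;
# the 7.9 m spike simulates a multipath-corrupted fix in an urban canyon.
window = np.array([0.12, -0.08, 0.05, 0.11, -0.04, 7.9, 0.07, -0.10])
mask = mad_outlier_mask(window)
print(mask)  # only the 7.9 m residual is flagged
```

The appeal of the MAD over a mean/standard-deviation gate is robustness: the statistic itself is barely affected by the very outliers it is meant to detect.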

17 pages, 1421 KiB  
Technical Note
Angle Estimation Using Learning-Based Doppler Deconvolution in Beamspace with Forward-Looking Radar
by Wenjie Li, Xinhao Xu, Yihao Xu, Yuchen Luan, Haibo Tang, Longyong Chen, Fubo Zhang, Jie Liu and Junming Yu
Remote Sens. 2024, 16(15), 2840; https://doi.org/10.3390/rs16152840 - 2 Aug 2024
Viewed by 753
Abstract
The measurement of the target azimuth angle using forward-looking radar (FLR) is widely applied in unmanned systems, such as obstacle avoidance and tracking applications. This paper proposes a semi-supervised support vector regression (SVR) method to solve the problem of small sample learning of the target angle with FLR. This method utilizes function approximation to solve the problem of estimating the target angle. First, SVR is used to construct the function mapping relationship between the echo and the target angle in beamspace. Next, by adding manifold constraints to the loss function, supervised learning is extended to semi-supervised learning, aiming to improve the small sample adaptation ability. This framework supports updating the angle estimation function with continuously increasing unlabeled samples during the FLR scanning process. The numerical simulation results show that the new technique outperforms model-based methods and fully supervised methods, especially under constrained conditions such as low signal-to-noise ratio and a small number of training samples. Full article

20 pages, 4902 KiB  
Article
Range-Velocity Measurement Accuracy Improvement Based on Joint Spatiotemporal Characteristics of Multi-Input Multi-Output Radar
by Penghui Chen, Jinhao Song, Yujing Bai, Jun Wang, Yang Du and Liuyang Tian
Remote Sens. 2024, 16(14), 2648; https://doi.org/10.3390/rs16142648 - 19 Jul 2024
Viewed by 658
Abstract
For time division multiplexing multiple input multiple output (TDM MIMO) millimeter wave radar, the measurement of target range, velocity and other parameters depends on the phase of the received Intermediate Frequency (IF) signal. The coupling between range and velocity phases occurs when measuring moving targets, leading to inevitable errors in calculating range and velocity from the phase, which in turn affects measurement accuracy. Traditional two-dimensional fast Fourier transform (2D FFT) estimation errors are particularly pronounced at high velocity, significantly impacting measurement accuracy. Additionally, due to limitations imposed by the Nyquist sampling theorem, there is a restricted range for velocity measurements that can result in aliasing. In this study, we propose a method to address the coupling of range and velocity based on the original signal as well as a method for velocity compensation to resolve aliasing issues. Our research findings demonstrate that this approach effectively reduces errors in measuring ranges and velocities of high-velocity moving targets while efficiently de-aliasing velocities. Full article
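As background to the 2D FFT processing the abstract critiques, the following numpy sketch estimates range and velocity for a single synthetic target. All radar parameters are invented (77 GHz automotive-style values, not taken from the paper), and the simplified IF model deliberately omits the intra-chirp range-velocity coupling and TDM phase effects that the paper's method corrects:

```python
import numpy as np

# Illustrative 77 GHz FMCW parameters (not from the paper)
c = 3e8
fc = 77e9             # carrier frequency (Hz)
B = 150e6             # chirp bandwidth (Hz)
Tc = 50e-6            # chirp repetition interval (s)
Ns, Nc = 256, 128     # fast-time samples per chirp, chirps per frame
fs = Ns / Tc          # fast-time sampling rate
slope = B / Tc        # chirp slope (Hz/s)

R, v = 30.0, 10.0     # true target range (m) and radial velocity (m/s)

t = np.arange(Ns) / fs                    # fast time within one chirp
m = np.arange(Nc)[:, None]                # chirp (slow-time) index
f_beat = 2 * R * slope / c                # range-induced beat frequency
f_dopp = 2 * v * fc / c                   # Doppler frequency
# Simplified IF model: range phase along fast time, Doppler phase per chirp
sig = np.exp(1j * 2 * np.pi * (f_beat * t + f_dopp * Tc * m))

# Range FFT along fast time (axis 1), Doppler FFT along slow time (axis 0)
rd_map = np.fft.fftshift(np.fft.fft2(sig), axes=0)
dopp_bin, range_bin = np.unravel_index(np.abs(rd_map).argmax(), rd_map.shape)

est_R = range_bin * c / (2 * B)                       # range bin width c/(2B)
est_v = (dopp_bin - Nc // 2) * c / (2 * fc * Nc * Tc)  # velocity bin width
print(f"estimated range {est_R:.1f} m, velocity {est_v:.2f} m/s")
```

The unambiguous velocity here is bounded by ±c/(4·fc·Tc), which is the Nyquist limit the abstract refers to; faster targets wrap around in the Doppler axis and need de-aliasing.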

27 pages, 3382 KiB  
Article
DOT-SLAM: A Stereo Visual Simultaneous Localization and Mapping (SLAM) System with Dynamic Object Tracking Based on Graph Optimization
by Yuan Zhu, Hao An, Huaide Wang, Ruidong Xu, Zhipeng Sun and Ke Lu
Sensors 2024, 24(14), 4676; https://doi.org/10.3390/s24144676 - 18 Jul 2024
Cited by 1 | Viewed by 1035
Abstract
Most visual simultaneous localization and mapping (SLAM) systems are based on the assumption of a static environment in autonomous vehicles. However, when dynamic objects, particularly vehicles, occupy a large portion of the image, the localization accuracy of the system decreases significantly. To mitigate this challenge, this paper unveils DOT-SLAM, a novel stereo visual SLAM system that integrates dynamic object tracking through graph optimization. By integrating dynamic object pose estimation into the SLAM system, the system can effectively utilize both foreground and background points for ego vehicle localization and obtain a static feature points map. To rectify the inaccuracies in depth estimation from stereo disparity directly on the foreground points of dynamic objects due to their self-similarity characteristics, a coarse-to-fine depth estimation method based on camera–road plane geometry is presented. This method uses rough depth to guide fine stereo matching, thereby obtaining the three-dimensional (3D) spatial positions of feature points on dynamic objects. Subsequently, by establishing constraints on the dynamic object’s pose using the road plane and non-holonomic constraints (NHCs) of the vehicle, reducing the initial pose uncertainty of dynamic objects leads to more accurate dynamic object initialization. Finally, by considering foreground points, background points, the local road plane, the ego vehicle pose, and dynamic object poses as optimization nodes, through the establishment and joint optimization of a nonlinear model based on graph optimization, accurate six degrees of freedom (DoF) pose estimations are obtained for both the ego vehicle and dynamic objects. Experimental validation on the KITTI-360 dataset demonstrates that DOT-SLAM effectively utilizes features from the background and dynamic objects in the environment, resulting in more accurate vehicle trajectory estimation and a static environment map. Results obtained from a real-world dataset test reinforce the effectiveness. Full article

18 pages, 4229 KiB  
Article
Reconfigurable Intelligent Surface Assisted Target Three-Dimensional Localization with 2-D Radar
by Ziwei Liu, Shanshan Zhao, Biao Xie and Jirui An
Remote Sens. 2024, 16(11), 1936; https://doi.org/10.3390/rs16111936 - 28 May 2024
Viewed by 871
Abstract
Battlefield surveillance radar is usually 2-D radar, which cannot realize target three-dimensional localization, leading to poor resolution for the air target in the elevation dimension. Previous researchers have used the Traditional Height Finder Radar (HFR) or multiple 2-D radar networking to estimate the target three-dimensional location. However, all of them face the problems of high cost, poor real-time performance and high requirement of space–time registration. In this paper, Reconfigurable Intelligent Surfaces (RISs) with low cost are introduced into the 2-D radar to realize the target three-dimensional localization. Taking advantage of the wide beam of 2-D radar in the elevation dimension, several Unmanned Aerial Vehicles (UAVs) carrying RISs are set in the receiving beam to form multiple auxiliary measurement channels. In addition, the traditional 2-D radar measurements combined with the auxiliary channel measurements are used to realize the target three-dimensional localization by solving a nonlinear least square problem with a convex optimization method. For the proposed RIS-assisted target three-dimensional localization problem, the Cramer–Rao Lower Bound (CRLB) is derived to measure the target localization accuracy. Simulation results verify the effectiveness of the proposed 3-D localization method, and the influences of the number, the positions and the site errors of the RISs on the localization accuracy are covered. Full article
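The final localization step the abstract describes (solving a nonlinear least squares problem over combined channel measurements) can be illustrated, in heavily simplified form, by a Gauss-Newton solver that recovers a 3-D position from range measurements taken at known sites. The anchor geometry and all numbers below are invented, and the paper's actual RIS measurement model and convex solver are not reproduced; this is only the generic NLS machinery:

```python
import numpy as np

def gauss_newton_locate(anchors, ranges, x0, iters=50):
    """Estimate a 3-D position from range measurements by Gauss-Newton.

    anchors : (N, 3) known measurement positions
    ranges  : (N,) measured distances to the target
    x0      : (3,) initial guess
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)    # predicted ranges
        r = d - ranges                             # residuals
        J = (x - anchors) / d[:, None]             # Jacobian of d_i w.r.t. x
        x -= np.linalg.lstsq(J, r, rcond=None)[0]  # Gauss-Newton update
    return x

# Four illustrative measurement sites, e.g. a ground radar plus elevated
# auxiliary channels (all positions in metres, made up for this sketch)
anchors = np.array([[0.0, 0.0, 0.0],
                    [500.0, 0.0, 50.0],
                    [0.0, 500.0, 80.0],
                    [400.0, 400.0, 120.0]])
target = np.array([200.0, 150.0, 800.0])
ranges = np.linalg.norm(anchors - target, axis=1)  # noise-free ranges

est = gauss_newton_locate(anchors, ranges, x0=[0.0, 0.0, 500.0])
print(np.round(est, 2))
```

With noise-free ranges and reasonable geometry the iteration converges to the true position; with noisy measurements the spread of the estimate is what the paper's CRLB analysis bounds.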

21 pages, 20528 KiB  
Article
Multi-Task Visual Perception for Object Detection and Semantic Segmentation in Intelligent Driving
by Jiao Zhan, Jingnan Liu, Yejun Wu and Chi Guo
Remote Sens. 2024, 16(10), 1774; https://doi.org/10.3390/rs16101774 - 16 May 2024
Cited by 1 | Viewed by 1256
Abstract
With the rapid development of intelligent driving vehicles, multi-task visual perception based on deep learning emerges as a key technological pathway toward safe vehicle navigation in real traffic scenarios. However, due to the high-precision and high-efficiency requirements of intelligent driving vehicles in practical driving environments, multi-task visual perception remains a challenging task. Existing methods typically adopt effective multi-task learning networks to concurrently handle multiple tasks. Despite the fact that they obtain remarkable achievements, better performance can be achieved through tackling existing problems like underutilized high-resolution features and underexploited non-local contextual dependencies. In this work, we propose YOLOPv3, an efficient anchor-based multi-task visual perception network capable of handling traffic object detection, drivable area segmentation, and lane detection simultaneously. Compared to prior works, we make essential improvements. On the one hand, we propose architecture enhancements that can utilize multi-scale high-resolution features and non-local contextual dependencies for improving network performance. On the other hand, we propose optimization improvements aiming at enhancing network training, enabling our YOLOPv3 to achieve optimal performance via straightforward end-to-end training. The experimental results on the BDD100K dataset demonstrate that YOLOPv3 sets a new state of the art (SOTA): 96.9% recall and 84.3% mAP50 in traffic object detection, 93.2% mIoU in drivable area segmentation, and 88.3% accuracy and 28.0% IoU in lane detection. In addition, YOLOPv3 maintains competitive inference speed against the lightweight YOLOP. Thus, YOLOPv3 stands as a robust solution for handling multi-task visual perception problems. The code and trained models have been released on GitHub. Full article

35 pages, 62938 KiB  
Article
A Modified Frequency Nonlinear Chirp Scaling Algorithm for High-Speed High-Squint Synthetic Aperture Radar with Curved Trajectory
by Kun Deng, Yan Huang, Zhanye Chen, Dongning Fu, Weidong Li, Xinran Tian and Wei Hong
Remote Sens. 2024, 16(9), 1588; https://doi.org/10.3390/rs16091588 - 29 Apr 2024
Viewed by 1115
Abstract
The imaging of high-speed high-squint synthetic aperture radar (HSHS-SAR), which is mounted on maneuvering platforms with curved trajectory, is a challenging task due to the existence of 3-D acceleration and the azimuth spatial variability of range migration and Doppler parameters. Although existing imaging algorithms based on linear range walk correction (LRWC) and nonlinear chirp scaling (NCS) can reduce the range–azimuth coupling of the frequency spectrum (FS) and the spatial variability of the Doppler parameter to some extent, they become invalid as the squint angle, speed, and resolution increase. Additionally, most of them ignore the effect of acceleration phase calibration (APC) on NCS, which should not be neglected as resolution increases. For these issues, a modified frequency nonlinear chirp scaling (MFNCS) algorithm is proposed in this paper. The proposed MFNCS algorithm mainly includes the following aspects. First, a more accurate approximation of range model (MAARM) is established to improve the accuracy of the instantaneous slant range history. Second, a preprocessing of the proposed algorithm based on the first range compression, LRWC, and a spatial-invariant APC (SIVAPC) is implemented to eliminate most of the effects of high-squint angle and 3-D acceleration on the FS. Third, a spatial-variant APC (SVAPC) is performed to remove azimuth spatial variability introduced by 3-D acceleration, and the range focusing is accomplished by the bulk range cell migration correction (BRCMC) and extended secondary range compression (ESRC). Fourth, the azimuth-dependent characteristics evaluation based on LRWC, SIVAPC, and SVAPC is completed to derive the MFNCS algorithm with fifth-order chirp scaling function for azimuth compression. Consequently, the final image is focused on the range time and azimuth frequency domain. The experimental simulation results verify the effectiveness of the proposed algorithm. 
With a curved trajectory, HSHS-SAR imaging is carried out at a 50° geometric squint angle and 500 m × 500 m imaging width. The integrated sidelobe ratio and peak sidelobe ratio of the point targets at the scenario edges approach the theoretical values, and the range-azimuth resolution is 1.5 m × 3.0 m. Full article

18 pages, 8995 KiB  
Article
Evaluating the Feasibility of Intelligent Blind Road Junction V2I Deployments
by Joseph Clancy, Dara Molloy, Sean Hassett, James Leahy, Enda Ward, Patrick Denny, Edward Jones, Martin Glavin and Brian Deegan
Smart Cities 2024, 7(3), 973-990; https://doi.org/10.3390/smartcities7030041 - 24 Apr 2024
Viewed by 1322
Abstract
Cellular Vehicle-to-Everything (C-V2X) communications is a technology that enables intelligent vehicles to exchange information and thus coordinate with other vehicles, road users, and infrastructure. However, despite advancements in cellular technology for V2X applications, significant challenges remain regarding the ability of the system to meet stringent Quality-of-Service (QoS) requirements when deployed at scale. Thus, smaller-scale V2X use case deployments may embody a necessary stepping stone to address these challenges. This work assesses network architectures for an Intelligent Perception System (IPS) blind road junction or blind corner scenarios. Measurements were collected using a private 5G NR network with Sub-6GHz and mmWave connectivity, evaluating the feasibility and trade-offs of IPS network configurations. The results demonstrate the feasibility of the IPS as a V2X application, with implementation considerations based on deployment and maintenance costs. If computation resources are co-located with the sensors, sufficient performance is achieved. However, if the computational burden is instead placed upon the intelligent vehicle, it is questionable as to whether an IPS is achievable or not. Much depends on image quality, latency, and system performance requirements. Full article

17 pages, 9837 KiB  
Article
Robust Calibration Technique for Precise Transformation of Low-Resolution 2D LiDAR Points to Camera Image Pixels in Intelligent Autonomous Driving Systems
by Ravichandran Rajesh and Pudureddiyur Venkataraman Manivannan
Vehicles 2024, 6(2), 711-727; https://doi.org/10.3390/vehicles6020033 - 19 Apr 2024
Viewed by 1472
Abstract
In the context of autonomous driving, the fusion of LiDAR and camera sensors is essential for robust obstacle detection and distance estimation. However, accurately estimating the transformation matrix between cost-effective low-resolution LiDAR and cameras presents challenges due to the generation of uncertain points by low-resolution LiDAR. In the present work, a new calibration technique is developed to accurately transform low-resolution 2D LiDAR points into camera pixels by utilizing both static and dynamic calibration patterns. Initially, the key corresponding points are identified at the intersection of 2D LiDAR points and calibration patterns. Subsequently, interpolation is applied to generate additional corresponding points for estimating the homography matrix. The homography matrix is then optimized using the Levenberg–Marquardt algorithm to minimize the rotation error, followed by a Procrustes analysis to minimize the translation error. The accuracy of the developed calibration technique is validated through various experiments (varying distances and orientations). The experimental findings demonstrate that the developed calibration technique significantly reduces the mean reprojection error by 0.45 pixels, rotation error by 65.08%, and distance error by 71.93% compared to the standard homography technique. Thus, the developed calibration technique promises the accurate transformation of low-resolution LiDAR points into camera pixels, thereby contributing to improved obstacle perception in intelligent autonomous driving systems. Full article
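The homography estimation at the heart of the abstract can be sketched with the standard Direct Linear Transform (DLT); the paper's Levenberg–Marquardt and Procrustes refinement steps are omitted here, and the LiDAR-plane points, pixel values, and ground-truth matrix below are all invented so the fit can be checked against a known answer:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: fit H such that dst ~ H @ src (homogeneous).

    src, dst : (N, 2) corresponding points, N >= 4 and not collinear.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)        # null-space vector = flattened H
    return H / H[2, 2]

def apply_homography(H, pts):
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Hypothetical LiDAR-plane points (metres) and their pixel observations,
# generated from a known ground-truth H so the recovery can be verified.
H_true = np.array([[120.0, 5.0, 320.0],
                   [3.0, 118.0, 240.0],
                   [0.001, 0.002, 1.0]])
lidar_pts = np.array([[0.5, 1.0], [2.0, 1.2], [1.1, 3.0],
                      [3.2, 2.5], [0.8, 2.2]])
pixels = apply_homography(H_true, lidar_pts)

H_est = estimate_homography(lidar_pts, pixels)
err = np.abs(apply_homography(H_est, lidar_pts) - pixels).max()
print(f"max reprojection error: {err:.2e} pixels")
```

With real, noisy correspondences the DLT result is only an initial estimate, which is why a nonlinear refinement stage such as the one the paper describes is typically added on top.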

30 pages, 14867 KiB  
Article
Architecture and Potential of Connected and Autonomous Vehicles
by Michele Pipicelli, Alfredo Gimelli, Bernardo Sessa, Francesco De Nola, Gianluca Toscano and Gabriele Di Blasio
Vehicles 2024, 6(1), 275-304; https://doi.org/10.3390/vehicles6010012 - 29 Jan 2024
Cited by 3 | Viewed by 2721
Abstract
The transport sector is under an intensive renovation process. Innovative concepts such as shared and intermodal mobility, mobility as a service, and connected and autonomous vehicles (CAVs) will contribute to the transition toward carbon neutrality and are foreseen as crucial parts of future mobility systems, as demonstrated by worldwide efforts in research and industry communities. The main driver of CAVs development is road safety, but other benefits, such as comfort and energy saving, are not to be neglected. CAVs analysis and development usually focus on Information and Communication Technology (ICT) research themes and less on the entire vehicle system. Many studies on specific aspects of CAVs are available in the literature, including advanced powertrain control strategies and their effects on vehicle efficiency. However, most studies neglect the additional power consumption due to the autonomous driving system. This work aims to assess uncertain CAVs’ efficiency improvements and offers an overview of their architecture. In particular, a combination of the literature survey and proper statistical methods are proposed to provide a comprehensive overview of CAVs. The CAV layout, data processing, and management to be used in energy management strategies are discussed. The data gathered are used to define statistical distribution relative to the efficiency improvement, number of sensors, computing units and their power requirements. Those distributions have been employed within a Monte Carlo method simulation to evaluate the effect on vehicle energy consumption and energy saving, using optimal driving behaviour, and considering the power consumption from additional CAV hardware. The results show that the assumption that CAV technologies will reduce energy consumption compared to the reference vehicle, should not be taken for granted. 
In 75% of scenarios, simulated light-duty CAVs worsen energy efficiency, while the results are more promising for heavy-duty vehicles. Full article

25 pages, 2259 KiB  
Article
RC-SLAM: Road Constrained Stereo Visual SLAM System Based on Graph Optimization
by Yuan Zhu, Hao An, Huaide Wang, Ruidong Xu, Mingzhi Wu and Ke Lu
Sensors 2024, 24(2), 536; https://doi.org/10.3390/s24020536 - 15 Jan 2024
Cited by 6 | Viewed by 1433
Abstract
Intelligent vehicles are constrained by the road, resulting in a disparity between the assumed six degrees of freedom (DoF) motion within the Visual Simultaneous Localization and Mapping (SLAM) system and the approximate planar motion of vehicles in local areas, inevitably causing additional pose estimation errors. To address this problem, a stereo Visual SLAM system with road constraints based on graph optimization is proposed, called RC-SLAM. Addressing the challenge of representing roads parametrically, a novel method is proposed to approximate local roads as discrete planes and extract parameters of local road planes (LRPs) using homography. Unlike conventional methods, constraints between the vehicle and LRPs are established, effectively mitigating errors arising from the assumed six DoF motion in the system. Furthermore, to avoid the impact of depth uncertainty in road features, epipolar constraints are employed to estimate rotation by minimizing the distance between road feature points and epipolar lines, so that robust rotation estimation is achieved despite depth uncertainties. Notably, a distinctive nonlinear optimization model based on graph optimization is presented, jointly optimizing the poses of vehicle trajectories, LRPs, and map points. The experiments on two datasets demonstrate that the proposed system achieved more accurate estimations of vehicle trajectories by introducing constraints between the vehicle and LRPs. The experiments on a real-world dataset further validate the effectiveness of the proposed system. Full article

19 pages, 2402 KiB  
Article
Controllable Unsupervised Snow Synthesis by Latent Style Space Manipulation
by Hanting Yang, Alexander Carballo, Yuxiao Zhang and Kazuya Takeda
Sensors 2023, 23(20), 8398; https://doi.org/10.3390/s23208398 - 12 Oct 2023
Viewed by 1149
Abstract
In the field of intelligent vehicle technology, there is a high dependence on images captured under challenging conditions to develop robust perception algorithms. However, acquiring these images can be both time-consuming and dangerous. To address this issue, unpaired image-to-image translation models offer a solution by synthesizing samples of the desired domain, thus eliminating the reliance on ground truth supervision. However, the current methods predominantly focus on single projections rather than multiple solutions, not to mention controlling the direction of generation, which creates a scope for enhancement. In this study, we propose a generative adversarial network (GAN)–based model, which incorporates both a style encoder and a content encoder, specifically designed to extract relevant information from an image. Further, we employ a decoder to reconstruct an image using these encoded features, while ensuring that the generated output remains within a permissible range by applying a self-regression module to constrain the style latent space. By modifying the hyperparameters, we can generate controllable outputs with specific style codes. We evaluate the performance of our model by generating snow scenes on the Cityscapes and the EuroCity Persons datasets. The results reveal the effectiveness of our proposed methodology, thereby reinforcing the benefits of our approach in the ongoing evolution of intelligent vehicle technology. Full article

18 pages, 7241 KiB  
Article
Road-Network-Map-Assisted Vehicle Positioning Based on Pose Graph Optimization
by Shuchen Xu, Yongrong Sun, Kedong Zhao, Xiyu Fu and Shuaishuai Wang
Sensors 2023, 23(17), 7581; https://doi.org/10.3390/s23177581 - 31 Aug 2023
Viewed by 1270
Abstract
Satellite signals are easily lost in urban areas, which makes it difficult to locate vehicles with high precision. Visual odometry has been increasingly applied in navigation systems to solve this problem. However, visual odometry relies on dead-reckoning, in which a slight positioning error accumulates over time and can grow into a catastrophic one. Thus, this paper proposes a road-network-map-assisted vehicle positioning method based on pose graph optimization. The method takes the dead-reckoning result of visual odometry as its input and introduces constraints from a point-line road network map to suppress the accumulated error and improve positioning accuracy. We design an optimization and prediction model in which the original visual odometry trajectory is optimized with constraints from map correction points to obtain a corrected trajectory. The vehicle position at the next moment is then predicted from the latest visual odometry output and the corrected trajectory. Experiments on the KITTI and campus datasets demonstrate the superiority of the proposed method, which provides stable and accurate vehicle position estimates in real time and achieves higher positioning accuracy than comparable map-assisted methods.
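The core idea of correcting dead-reckoned odometry with map constraints can be sketched as a small least-squares problem: each pose is tied to its neighbor by a relative odometry measurement, and a few poses are additionally tied to absolute positions from map correction points. The gradient-descent solver, weights, and 2D setup below are assumptions for illustration; the paper's actual pose graph formulation differs.

```python
import numpy as np

def optimize_trajectory(odom, map_points, w_map=10.0, iters=500, lr=0.02):
    """odom: (N-1, 2) relative displacements from visual odometry.
    map_points: {pose index: (x, y) absolute position from the road map}.
    Minimizes odometry residuals plus map-correction residuals by
    gradient descent, starting from the dead-reckoned trajectory."""
    x = np.vstack([np.zeros(2), np.cumsum(odom, axis=0)])  # dead-reckoned init
    for _ in range(iters):
        g = np.zeros_like(x)
        # odometry constraints: (x[i+1] - x[i]) should match odom[i]
        r = (x[1:] - x[:-1]) - odom
        g[1:] += 2.0 * r
        g[:-1] -= 2.0 * r
        # map correction points: x[k] should match the map position
        for k, p in map_points.items():
            g[k] += 2.0 * w_map * (x[k] - np.asarray(p))
        x -= lr * g
    return x
```

With a laterally drifting odometry stream and map anchors at both ends, the optimized endpoint lands far closer to the map than the raw dead-reckoned one, which is the accumulated-error suppression the abstract describes.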

22 pages, 17114 KiB  
Article
Radar Timing Range–Doppler Spectral Target Detection Based on Attention ConvLSTM in Traffic Scenes
by Fengde Jia, Jihong Tan, Xiaochen Lu and Junhui Qian
Remote Sens. 2023, 15(17), 4150; https://doi.org/10.3390/rs15174150 - 24 Aug 2023
Cited by 11 | Viewed by 2398
Abstract
With the development of autonomous driving and the emergence of various intelligent traffic scenarios, deep-learning-based object detection is increasingly applied to real traffic scenes. Commonly used detection devices include LiDAR and cameras, but their sensitivity to light greatly reduces their performance at night and in bad weather. Since traffic-scene target detection must ultimately be mass-produced, the advantages of millimeter-wave radar have come to the fore: it is low-cost, immune to these harsh environmental conditions, and therefore a valuable aid to safe road driving. In this work, we propose a deep-learning-based object detection method that operates on the radar range–Doppler spectrum in traffic scenarios. The algorithm uses YOLOv8 as its basic architecture and makes full use of the time-series characteristics of range–Doppler spectrum data by introducing a ConvLSTM network to process the temporal sequence. To improve the model's ability to detect small objects, an efficient and lightweight Efficient Channel Attention (ECA) module is introduced. In extensive experiments, our model outperforms other state-of-the-art methods on two publicly available radar datasets, CARRADA and RADDet. Whereas other mainstream methods achieve only 30–60% mAP at an IoU of 0.3, our model achieves 74.51% and 75.62% on the RADDet and CARRADA datasets, respectively, with better robustness and generalization ability.
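The ECA mechanism mentioned above is, in essence, a channel-wise gate: global average pooling produces one descriptor per channel, a small 1D convolution models cross-channel interaction, and a sigmoid rescales each channel. The numpy sketch below follows that recipe with untrained placeholder convolution weights; it is an illustration of the mechanism, not the paper's trained module.

```python
import numpy as np

def eca(feature_map, k=3):
    """Efficient Channel Attention on a (C, H, W) feature map:
    global average pooling, a k-tap 1D convolution across channels,
    and a sigmoid gate that rescales each channel."""
    y = feature_map.mean(axis=(1, 2))        # (C,) channel descriptors
    w = np.ones(k) / k                       # placeholder (untrained) conv weights
    y = np.convolve(y, w, mode="same")       # local cross-channel interaction
    gate = 1.0 / (1.0 + np.exp(-y))          # sigmoid, one weight per channel
    return feature_map * gate[:, None, None]
```

Because the gate is computed from a single k-tap convolution over the pooled descriptors, the module adds almost no parameters, which is why it suits lightweight small-object detection heads.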

25 pages, 13207 KiB  
Article
Layered SOTIF Analysis and 3σ-Criterion-Based Adaptive EKF for Lidar-Based Multi-Sensor Fusion Localization System on Foggy Days
by Lipeng Cao, Yansong He, Yugong Luo and Jian Chen
Remote Sens. 2023, 15(12), 3047; https://doi.org/10.3390/rs15123047 - 10 Jun 2023
Cited by 3 | Viewed by 1999
Abstract
The detection range and accuracy of light detection and ranging (LiDAR) systems are sensitive to variations in fog concentration, leading to safety of the intended functionality (SOTIF)-related problems in the LiDAR-based multi-sensor fusion localization system (LMSFLS). However, because the weather cannot be controlled, it is almost impossible to quantitatively analyze the effects of fog on the LMSFLS in a realistic environment. Therefore, in this study, we conduct a layered quantitative SOTIF analysis of the LMSFLS on foggy days using fog simulation. From the analysis results, we identify the component-level, system-level, and vehicle-level functional insufficiencies of the LMSFLS, the corresponding quantitative triggering conditions, and the potential SOTIF-related risks. To address these risks, we propose a functional modification strategy that combines visibility recognition with a 3σ-criterion-based, variance-mismatch-degree-grading adaptive extended Kalman filter. The visibility of a scenario is recognized to judge whether the measurement information of the LiDAR odometry is disturbed by fog, and the proposed filter fuses the abnormal LiDAR odometry measurements with IMU and GNSS data. Simulation results demonstrate that the proposed strategy inhibits divergence of the LMSFLS, improves the SOTIF of self-driving cars on foggy days, and accurately recognizes scenario visibility.
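The 3σ-criterion idea can be sketched in a single scalar Kalman update: if the innovation exceeds three times its predicted standard deviation, the measurement noise covariance is inflated in proportion to the mismatch before the gain is computed. The quadratic grading rule below is an assumed simplification for illustration, not the paper's exact variance-mismatch-degree grading.

```python
import numpy as np

def adaptive_update(x, P, z, H, R_nominal):
    """One Kalman measurement update with a 3-sigma innovation check:
    measurements whose innovation exceeds 3*sqrt(S) are down-weighted
    by inflating R, instead of being trusted at face value."""
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R_nominal            # predicted innovation covariance
    d = abs(y.item()) / np.sqrt(S.item())  # normalized mismatch degree
    R = R_nominal if d <= 3.0 else R_nominal * (d / 3.0) ** 2
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain with graded R
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

A consistent measurement passes through unchanged, while a fog-corrupted outlier pulls the state far less than a standard update would, which is what keeps the fused estimate from diverging.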

25 pages, 30818 KiB  
Article
Improving Pedestrian Safety Using Ultra-Wideband Sensors: A Study of Time-to-Collision Estimation
by Salah Fakhoury and Karim Ismail
Sensors 2023, 23(8), 4171; https://doi.org/10.3390/s23084171 - 21 Apr 2023
Cited by 6 | Viewed by 3518
Abstract
Pedestrian safety has traditionally been evaluated from the mean number of pedestrian-involved collisions. Traffic conflicts have been used as a data source to supplement collision data because of their higher frequency and lower damage. Currently, the main source of traffic conflict observation is video cameras, which can efficiently gather rich data but are limited by weather and lighting conditions. Wireless sensors can augment video sensors for gathering traffic conflict data because of their robustness to adverse weather and poor illumination. This study presents a prototype safety assessment system that uses ultra-wideband wireless sensors to detect traffic conflicts. A customized variant of time-to-collision is used to detect conflicts at different severity thresholds. Field trials were conducted using vehicle-mounted beacons and a phone to simulate sensors on vehicles and smart devices carried by pedestrians. Proximity measures are calculated in real time to alert smartphones and help prevent collisions, even in adverse weather. Validation was conducted to assess the accuracy of the time-to-collision measurements at various distances from the phone. Several limitations are identified and discussed, along with recommendations for improvement and lessons learned for future research and development.
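The basic proximity measure behind such a system can be illustrated with a constant-closing-speed time-to-collision estimate from two successive range measurements. This is a generic textbook sketch, not the customized variant used in the study.

```python
def time_to_collision(ranges, dt):
    """Estimate time-to-collision (s) from successive UWB range
    measurements (m) sampled every dt seconds between a vehicle beacon
    and a pedestrian's device. Returns None if the range is not closing."""
    if len(ranges) < 2:
        return None
    closing_speed = (ranges[-2] - ranges[-1]) / dt  # m/s, > 0 if approaching
    if closing_speed <= 0:
        return None  # receding or parallel: no predicted collision
    return ranges[-1] / closing_speed
```

Comparing the estimate against severity thresholds (e.g., alerting when it drops below a few seconds) is then a matter of policy on top of this measure.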
