
Advances in Remote Sensing of Solving Challenges in Autonomous Driving and Safety Analysis

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: 31 January 2025 | Viewed by 5760

Special Issue Editors


Guest Editor: Dr. Mohammad Aldibaja
Sensing and Perception, SMART Mechatronics Research Group, Saxion University of Applied Sciences, Enschede, The Netherlands
Interests: autonomous vehicles; LIDAR/radar-based localization systems; mapping systems; SLAM technologies; eye-based human‒machine interface systems; driver monitoring systems

Guest Editor: Dr. Sufyan Ali Memon
Department of Defense Systems Engineering, Sejong University, Seoul 05006, Republic of Korea
Interests: artificial intelligence; autonomous vehicles; wireless networks

Guest Editor: Dr. Andrea Masiero
Secretary of ISPRS WG I/7-Mobile Mapping Technology, Interdepartmental Research Center of Geomatics (CIRGEO), University of Padua, Padua, Italy
Interests: geomatics; mobile mapping; laser scanning; photogrammetry; remote sensing; navigation; data processing; machine learning; unmanned aerial vehicles; cultural heritage

Special Issue Information

Dear Colleagues,

As safety is the top priority and the key issue in commercializing autonomous vehicles, the main challenge in this research field has become increasing safety, and more generally system performance, under critical and unusual operating conditions. Examples include precise localization on snowy or rainy roads, generating accurate, large-scale maps with SLAM technologies, long-range detection of construction zones for smooth path planning, maneuvering through unprotected turns, robustly deciding whether a stationary vehicle is an obstacle or merely stopped in a traffic jam, and recognizing traffic signals in sun glare. Without robust solutions to these issues, autonomous driving will remain at the demonstration stage, and the deployment of autonomous vehicles will be limited to narrow operating conditions. Moreover, these problems may lead to fatal traffic accidents and considerably undermine public acceptance of autonomous vehicles on the streets.

Analyzing the causes of these problems and illustrating them clearly is the cornerstone of investigating their effects on autopilot performance and proposing the corresponding optimal solutions. Remote sensing and image processing play the main role in designing such solutions from sensory and observational data, for example by modeling how the pattern distribution of LIDAR 3D point clouds changes in snowfall and by improving localization accuracy through matching observed environmental features against a map. This Special Issue therefore aims to add value to autonomous vehicle research by demonstrating and analyzing critical and unusual problems in mapping, localization, perception, and path planning that are rarely discussed in the literature and are still regarded as futuristic concerns.

Ultimately, we hope the published papers will contribute significantly to the safety of autonomous driving and provide prominent, robust solutions.

Dr. Mohammad Aldibaja
Dr. Sufyan Ali Memon
Dr. Andrea Masiero
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • autonomous vehicles
  • 3D point cloud analysis
  • path planning with unprotected turns
  • robust perception of construction areas
  • SLAM-based mapping in challenging environments
  • road pavement assessment for driving safety analysis
  • object status classification in urban traffic conditions
  • LIDAR/radar-based localization systems in adverse weather conditions
  • map quality analysis and enhancement
  • lane graph generation
  • safe integration of AI technologies into autonomous driving

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

22 pages, 3216 KiB  
Article
Enhancing Planning for Autonomous Driving via an Iterative Optimization Framework Incorporating Safety-Critical Trajectory Generation
by Zhen Liu, Hang Gao, Yeting Lin and Xun Gong
Remote Sens. 2024, 16(19), 3721; https://doi.org/10.3390/rs16193721 - 6 Oct 2024
Viewed by 1693
Abstract
Ensuring the safety of autonomous vehicles (AVs) in complex and high-risk traffic scenarios remains a critical unresolved challenge. Current AV planning methods exhibit limitations in generating robust driving trajectories that effectively avoid collisions, highlighting the urgent need for improved planning strategies to address these issues. This paper introduces a novel iterative optimization framework that incorporates safety-critical trajectory generation to enhance AV planning. The use of the HighD dataset, which is collected using the wide-area remote sensing capabilities of unmanned aerial vehicles (UAVs), is fundamental to the framework. Remote sensing enables large-scale real-time observation of traffic conditions, providing precise data on vehicle dynamics, road structures, and surrounding environments. To generate safety-critical trajectories, the decoder within the conditional variational auto-encoder (CVAE) is innovatively designed through a data-mechanism integration method, ensuring that these trajectories strictly adhere to vehicle kinematic constraints. Furthermore, two parallel CVAEs (Dual-CVAE) are trained collaboratively by a shared objective function to implicitly model the multi-vehicle interactions. Inspired by the concept of “learning to collide”, adversarial optimization is integrated into the Dual-CVAE (Adv. Dual-CVAE), facilitating efficient generation from normal to safety-critical trajectories. Building upon this, these generated trajectories are then incorporated into an iterative optimization framework, significantly enhancing the AV’s planning ability to avoid collisions. This framework decomposes the optimization process into stages, initially addressing normal trajectories and progressively tackling more safety-critical and collision trajectories. Finally, comparative case studies of enhancing AV planning are conducted and the simulation results demonstrate that the proposed method can efficiently enhance AV planning by generating safety-critical trajectories. Full article
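
As a hedged illustration of the kinematic-constraint idea described in this abstract (not the authors' implementation), the sketch below shows one common way to make every decoded trajectory kinematically feasible: the decoder predicts bounded control inputs that are rolled out through a kinematic bicycle model, so generated trajectories respect the vehicle's motion model by construction. The network sizes, control bounds, and state layout are illustrative assumptions.

```python
# Minimal sketch (PyTorch) of a kinematics-constrained trajectory decoder.
# Illustrative only: layer sizes, bounds, and state layout are assumptions.
import torch
import torch.nn as nn

class KinematicDecoder(nn.Module):
    def __init__(self, latent_dim=32, cond_dim=64, horizon=30, dt=0.1, wheelbase=2.8):
        super().__init__()
        self.horizon, self.dt, self.L = horizon, dt, wheelbase
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 128), nn.ReLU(),
            nn.Linear(128, horizon * 2),   # per-step [acceleration, steering]
        )

    def forward(self, z, cond, state0):
        # z: latent sample, cond: scene/interaction encoding, state0: (B, 4) = [x, y, heading, speed]
        controls = self.net(torch.cat([z, cond], dim=-1)).view(-1, self.horizon, 2)
        accel = 3.0 * torch.tanh(controls[..., 0])   # bound to +/- 3 m/s^2
        steer = 0.5 * torch.tanh(controls[..., 1])   # bound to +/- 0.5 rad
        x, y, yaw, v = state0.unbind(-1)
        traj = []
        for t in range(self.horizon):                # kinematic bicycle-model rollout
            x = x + v * torch.cos(yaw) * self.dt
            y = y + v * torch.sin(yaw) * self.dt
            yaw = yaw + v / self.L * torch.tan(steer[:, t]) * self.dt
            v = (v + accel[:, t] * self.dt).clamp(min=0.0)
            traj.append(torch.stack([x, y], dim=-1))
        return torch.stack(traj, dim=1)              # (B, horizon, 2) feasible trajectory
```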

21 pages, 31109 KiB  
Article
InstLane Dataset and Geometry-Aware Network for Instance Segmentation of Lane Line Detection
by Qimin Cheng, Jiajun Ling, Yunfei Yang, Kaiji Liu, Huanying Li and Xiao Huang
Remote Sens. 2024, 16(15), 2751; https://doi.org/10.3390/rs16152751 - 28 Jul 2024
Viewed by 689
Abstract
Despite impressive progress, obtaining appropriate data for instance-level lane segmentation remains a significant challenge. This limitation hinders the refinement of granular lane-related applications such as lane line crossing surveillance, pavement maintenance, and management. To address this gap, we introduce a benchmark for lane instance segmentation called InstLane. To the best of our knowledge, InstLane constitutes the first publicly accessible instance-level segmentation standard for lane line detection. The complexity of InstLane emanates from the fact that the original data are procured using cameras mounted laterally, as opposed to traditional front-mounted sensors. InstLane encapsulates a range of challenging scenarios, enhancing the generalization and robustness of the lane line instance segmentation algorithms. In addition, we propose GeoLaneNet, a real-time, geometry-aware lane instance segmentation network. Within GeoLaneNet, we design a finer localization of lane proto-instances based on geometric features to counteract the prevalent omission or multiple detections in dense lane scenarios resulting from non-maximum suppression (NMS). Furthermore, we present a scheme that employs a larger receptive field to achieve profound perceptual lane structural learning, thereby improving detection accuracy. We introduce an architecture based on partial feature transformation to expedite the detection process. Comprehensive experiments on InstLane demonstrate that GeoLaneNet can achieve up to twice the speed of current state-of-the-art methods, reaching 139 FPS on an RTX 3090 and a mask AP of 73.55%, with a permissible trade-off in AP, while maintaining comparable accuracy. These results underscore the effectiveness, robustness, and efficiency of GeoLaneNet in autonomous driving. Full article
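
The following sketch is only a hedged illustration of the geometry-aware grouping idea that the abstract contrasts with plain NMS; it is not GeoLaneNet's actual procedure. Because lane instances are thin and densely packed, box-overlap suppression tends to drop or duplicate neighbors; grouping proposals by the distance between their fitted centerlines avoids that. The polynomial fit, normalized coordinates, and thresholds are assumptions.

```python
# Hypothetical geometry-aware grouping of lane proposals (illustrative only).
# Proposal pixel coordinates are assumed to be normalized to [0, 1].
import numpy as np

def fit_centerline(points, degree=2):
    """Fit y = poly(x) to a proposal's foreground pixel coordinates (N, 2)."""
    return np.polyfit(points[:, 0], points[:, 1], degree)

def centerline_distance(c1, c2, x_range=np.linspace(0.0, 1.0, 20)):
    """Mean vertical distance between two fitted centerlines."""
    return float(np.mean(np.abs(np.polyval(c1, x_range) - np.polyval(c2, x_range))))

def geometry_aware_group(proposals, scores, merge_thresh=0.02):
    """Greedy grouping: keep the highest-scoring proposal of each geometric group."""
    order = np.argsort(scores)[::-1]
    kept, centerlines = [], []
    for i in order:
        c = fit_centerline(proposals[i])
        if all(centerline_distance(c, ck) > merge_thresh for ck in centerlines):
            kept.append(i)        # geometrically distinct lane -> keep
            centerlines.append(c)
    return kept                   # indices of retained lane instances
```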

19 pages, 4057 KiB  
Article
Global Navigation Satellite System/Inertial Measurement Unit/Camera/HD Map Integrated Localization for Autonomous Vehicles in Challenging Urban Tunnel Scenarios
by Lu Tao, Pan Zhang, Kefu Gao and Jingnan Liu
Remote Sens. 2024, 16(12), 2230; https://doi.org/10.3390/rs16122230 - 19 Jun 2024
Viewed by 1214
Abstract
Lane-level localization is critical for autonomous vehicles (AVs). However, complex urban scenarios, particularly tunnels, pose significant challenges to AVs’ localization systems. In this paper, we propose a fusion localization method that integrates multiple mass-production sensors, including Global Navigation Satellite Systems (GNSSs), Inertial Measurement Units (IMUs), cameras, and high-definition (HD) maps. Firstly, we use a novel electronic horizon module to assess GNSS integrity and concurrently load the HD map data surrounding the AVs. These map data are then transformed into a visual space to match the corresponding lane lines captured by the on-board camera using an improved BiSeNet. Consequently, the matched HD map data are used to correct our localization algorithm, which is driven by an extended Kalman filter that integrates multiple sources of information, encompassing a GNSS, IMU, speedometer, camera, and HD maps. Our system is designed with redundancy to handle challenging city tunnel scenarios. To evaluate the proposed system, real-world experiments were conducted on a 36-kilometer city route that includes nine consecutive tunnels, totaling nearly 13 km and accounting for 35% of the entire route. The experimental results reveal that 99% of lateral localization errors are less than 0.29 m, and 90% of longitudinal localization errors are less than 3.25 m, ensuring reliable lane-level localization for AVs in challenging urban tunnel scenarios. Full article
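
As a hedged illustration of the kind of measurement update such a fusion scheme relies on (not the authors' implementation), the sketch below shows a generic extended Kalman filter update in which the lateral offset from an HD-map lane centerline, obtained by matching camera-detected lane lines to the map, corrects the position estimate. The state layout, noise values, and map geometry are assumptions.

```python
# Minimal EKF update with a lane-relative lateral-offset measurement.
# Illustrative only: state = [x, y, heading, speed]; all values are assumed.
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Generic EKF update: state x, covariance P, measurement z,
    predicted measurement h = h(x), Jacobian H, measurement noise R."""
    y = z - h                          # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Assumed example: the camera/HD-map matching yields the vehicle's lateral
# offset from the nearest lane centerline point c with heading 'lane_heading'.
x = np.array([100.0, 50.0, 0.3, 12.0])          # prior state estimate
P = np.diag([4.0, 4.0, 0.05, 1.0])               # prior covariance
c = np.array([99.0, 49.8])                       # nearest centerline point (HD map)
lane_heading = 0.28                              # centerline direction (rad)
n = np.array([-np.sin(lane_heading), np.cos(lane_heading)])  # unit normal to the lane
h = np.array([n @ (x[:2] - c)])                  # predicted lateral offset
H = np.array([[n[0], n[1], 0.0, 0.0]])           # Jacobian w.r.t. [x, y, yaw, v]
z = np.array([0.4])                              # offset measured from lane matching (m)
R = np.array([[0.05]])                           # measurement noise (assumed)
x, P = ekf_update(x, P, z, h, H, R)              # corrected state and covariance
```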

23 pages, 8941 KiB  
Article
DS-Trans: A 3D Object Detection Method Based on a Deformable Spatiotemporal Transformer for Autonomous Vehicles
by Yuan Zhu, Ruidong Xu, Chongben Tao, Hao An, Huaide Wang, Zhipeng Sun and Ke Lu
Remote Sens. 2024, 16(9), 1621; https://doi.org/10.3390/rs16091621 - 30 Apr 2024
Viewed by 1305
Abstract
Facing the significant challenge of 3D object detection in complex weather conditions and road environments, existing algorithms based on single-frame point cloud data struggle to achieve desirable results. These methods typically focus on spatial relationships within a single frame, overlooking the semantic correlations and spatiotemporal continuity between consecutive frames. This leads to discontinuities and abrupt changes in the detection outcomes. To address this issue, this paper proposes a multi-frame 3D object detection algorithm based on a deformable spatiotemporal Transformer. Specifically, a deformable cross-scale Transformer module is devised, incorporating a multi-scale offset mechanism that non-uniformly samples features at different scales, enhancing the spatial information aggregation capability of the output features. Simultaneously, to address the issue of feature misalignment during multi-frame feature fusion, a deformable cross-frame Transformer module is proposed. This module incorporates independently learnable offset parameters for different frame features, enabling the model to adaptively correlate dynamic features across multiple frames and improve the temporal information utilization of the model. A proposal-aware sampling algorithm is introduced to significantly increase the foreground point recall, further optimizing the efficiency of feature extraction. The obtained multi-scale and multi-frame voxel features are subjected to an adaptive fusion weight extraction module, referred to as the proposed mixed voxel set extraction module. This module allows the model to adaptively obtain mixed features containing both spatial and temporal information. The effectiveness of the proposed algorithm is validated on the KITTI, nuScenes, and self-collected urban datasets. The proposed algorithm achieves an average precision improvement of 2.1% over the latest multi-frame-based algorithms. Full article
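
As a hedged illustration of the deformable cross-frame idea described in the abstract (not the authors' architecture), the sketch below implements a single-head deformable attention step: each location in the current frame's BEV feature map predicts a few 2D sampling offsets and weights and aggregates features sampled from the previous frame at those positions. Channel sizes, the number of sampling points, the offset bound, and the residual fusion are assumptions.

```python
# Hypothetical deformable cross-frame attention over BEV feature maps (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableCrossFrameAttention(nn.Module):
    def __init__(self, channels=64, num_points=4):
        super().__init__()
        self.num_points = num_points
        self.offset_head = nn.Conv2d(channels, num_points * 2, kernel_size=1)
        self.weight_head = nn.Conv2d(channels, num_points, kernel_size=1)
        self.value_proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.out_proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, cur_feat, prev_feat):
        # cur_feat, prev_feat: (B, C, H, W) BEV features from consecutive frames
        B, C, H, W = cur_feat.shape
        offsets = self.offset_head(cur_feat)                 # (B, 2K, H, W) learnable offsets
        weights = self.weight_head(cur_feat).softmax(dim=1)  # (B, K, H, W) attention weights
        value = self.value_proj(prev_feat)

        # Base sampling grid in normalized [-1, 1] coordinates expected by grid_sample.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, H, device=cur_feat.device),
            torch.linspace(-1, 1, W, device=cur_feat.device),
            indexing="ij",
        )
        base = torch.stack([xs, ys], dim=-1)                 # (H, W, 2)

        out = torch.zeros_like(cur_feat)
        for k in range(self.num_points):
            off = offsets[:, 2 * k:2 * k + 2].permute(0, 2, 3, 1)       # (B, H, W, 2)
            grid = base.unsqueeze(0) + 0.1 * torch.tanh(off)            # bounded offsets
            sampled = F.grid_sample(value, grid, align_corners=True)    # (B, C, H, W)
            out = out + weights[:, k:k + 1] * sampled                   # weighted aggregation
        return cur_feat + self.out_proj(out)                            # residual temporal fusion
```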
