
Artificial Intelligence and Smart Sensors for Autonomous Driving

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Electronic Sensors".

Deadline for manuscript submissions: closed (20 August 2024) | Viewed by 12127

Special Issue Editors


Prof. Dr. Abderrafiaa Koukam
Guest Editor
Laboratoire Connaissances et Intelligence Artificielle Distribuées, Université de Technologie de Belfort-Montbéliard, 90010 Belfort, France
Interests: software and knowledge engineering; multi-agent systems; optimization; platooning

Prof. Dr. Abdeljalil Abbas-Turki
Guest Editor
Laboratoire Connaissances et Intelligence Artificielle Distribuées, Université de Technologie de Belfort-Montbéliard, 90010 Belfort, France
Interests: connected autonomous vehicles; cooperative driving; artificial intelligence; control theory; urban mobility

Special Issue Information

Dear Colleagues,

Autonomous driving is a major breakthrough technology of the 21st century. It is expected to make our roads safer, secure the transportation of people and goods, and significantly improve the efficiency of infrastructure sharing. A wide range of advances are therefore being made toward achieving autonomous driving, and artificial-intelligence-based sensor systems are key to this success. These systems harness the sensory abilities of vehicles, allowing them to interact intelligently with both vehicle occupants and other road users. They are involved in several autonomous driving tasks, such as global and relative positioning, obstacle detection, vehicle occupant monitoring, and decision making.

Artificial-intelligence-based sensor systems are not limited to single-agent approaches. They also cover multi-agent approaches, in which sensing capabilities are extended by sharing onboard and offboard footage, helping road users cooperate and resolve conflicts with one another. Different artificial intelligence techniques are applied to perception, learning, reasoning, problem solving, and communication (language) with others in the field of autonomous driving.

This Special Issue aims to capture both the current state of artificial-intelligence-based sensor systems in the field of autonomous driving and the future research addressing the challenges and potential of self-driving. There will be a special focus on open datasets that can support the transferability and benchmarking of different approaches.

The topics of interest include, but are not limited to:

  • Smart/intelligent sensors;
  • Smart, connected, wearable devices;
  • Computer vision and pattern recognition;
  • Cooperative, intelligent transport systems and services;
  • Adaptive human–robot interactions;
  • Multi-agent interactions;
  • Machine learning.

Prof. Dr. Abderrafiaa Koukam
Prof. Dr. Abdeljalil Abbas-Turki
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

41 pages, 57635 KiB  
Article
An Object-Centric Hierarchical Pose Estimation Method Using Semantic High-Definition Maps for General Autonomous Driving
by Jeong-Won Pyo, Jun-Hyeon Choi and Tae-Yong Kuc
Sensors 2024, 24(16), 5191; https://doi.org/10.3390/s24165191 - 11 Aug 2024
Viewed by 927
Abstract
To achieve Level 4 and above autonomous driving, a robust and stable autonomous driving system is essential to adapt to various environmental changes. This paper aims to perform vehicle pose estimation, a crucial element of autonomous driving systems, more universally and robustly. The prevalent method for vehicle pose estimation in autonomous driving systems relies on Real-Time Kinematic (RTK) sensor data, ensuring accurate location acquisition. However, due to the characteristics of RTK sensors, precise positioning is challenging or impossible in indoor spaces or areas with signal interference, leading to inaccurate pose estimation and hindering autonomous driving in such scenarios. This paper proposes a method to overcome these challenges by leveraging objects registered in a high-precision map. The proposed approach involves creating a semantic high-definition (HD) map with added objects, forming object-centric features, recognizing locations using these features, and accurately estimating the vehicle's pose from the recognized location. The proposed method enhances the precision of vehicle pose estimation in environments where acquiring RTK sensor data is challenging, enabling more robust and stable autonomous driving. The paper demonstrates the method's effectiveness through simulation and real-world experiments, showcasing its capability for more precise pose estimation.
(This article belongs to the Special Issue Artificial Intelligence and Smart Sensors for Autonomous Driving)
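The paper's hierarchical pipeline is not reproduced here, but its core geometric step, estimating a vehicle's planar pose by aligning objects detected in the vehicle frame with the same objects registered in a semantic HD map, can be sketched as a least-squares rigid registration (Kabsch algorithm). All landmark coordinates below are invented for illustration.

```python
# Illustrative sketch only (not the authors' implementation): recover a 2D
# vehicle pose by aligning observed object positions with their HD-map
# counterparts via least-squares rigid registration.
import numpy as np

def estimate_pose_2d(map_pts, obs_pts):
    """Find R, t minimizing ||R @ obs + t - map|| over matched object pairs."""
    mu_m, mu_o = map_pts.mean(axis=0), obs_pts.mean(axis=0)
    H = (obs_pts - mu_o).T @ (map_pts - mu_m)      # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_m - R @ mu_o
    return R, t

# Hypothetical map landmarks (e.g., signs, poles) in world coordinates:
map_pts = np.array([[10.0, 5.0], [12.0, 8.0], [15.0, 4.0], [9.0, 9.0]])
theta, t_true = 0.3, np.array([2.0, -1.0])         # ground-truth vehicle pose
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
obs_pts = (map_pts - t_true) @ R_true              # landmarks seen from the vehicle
R, t = estimate_pose_2d(map_pts, obs_pts)
heading = np.arctan2(R[1, 0], R[0, 0])             # recovers theta = 0.3
```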

23 pages, 7227 KiB  
Article
Dense Out-of-Distribution Detection by Robust Learning on Synthetic Negative Data
by Matej Grcić, Petra Bevandić, Zoran Kalafatić and Siniša Šegvić
Sensors 2024, 24(4), 1248; https://doi.org/10.3390/s24041248 - 15 Feb 2024
Cited by 24 | Viewed by 1223
Abstract
Standard machine learning is unable to accommodate inputs which do not belong to the training distribution. The resulting models often give rise to confident incorrect predictions which may lead to devastating consequences. This problem is especially demanding in the context of dense prediction since input images may be only partially anomalous. Previous work has addressed dense out-of-distribution detection by discriminative training with respect to off-the-shelf negative datasets. However, real negative data may lead to over-optimistic evaluation due to possible overlap with test anomalies. To this end, we extend this approach by generating synthetic negative patches along the border of the inlier manifold. We leverage a jointly trained normalizing flow due to a coverage-oriented learning objective and the capability to generate samples at different resolutions. We detect anomalies according to a principled information-theoretic criterion which can be consistently applied through training and inference. The resulting models set the new state of the art on benchmarks for out-of-distribution detection in road-driving scenes and remote sensing imagery despite minimal computational overhead.
(This article belongs to the Special Issue Artificial Intelligence and Smart Sensors for Autonomous Driving)
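The paper's specific information-theoretic criterion and flow-based negative generation are not reproduced here; as a generic stand-in, the sketch below shows the shape of a dense out-of-distribution detector: a per-pixel anomaly score from the entropy of the segmentation posterior, thresholded into an anomaly mask. The class count and threshold are hypothetical.

```python
# Generic dense-OOD stand-in (not the paper's exact criterion): score each
# pixel by the entropy of the softmax posterior; uncertain pixels are flagged.
import numpy as np

def dense_ood_scores(logits):
    """logits: (K, H, W) per-pixel class scores -> (H, W) anomaly map."""
    z = logits - logits.max(axis=0, keepdims=True)   # stabilize the softmax
    p = np.exp(z)
    p /= p.sum(axis=0, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=0)      # per-pixel entropy in nats

rng = np.random.default_rng(0)
logits = rng.normal(size=(19, 64, 64))               # hypothetical 19-class head
anomaly = dense_ood_scores(logits)
mask = anomaly > 0.9 * np.log(19)                    # hypothetical threshold
```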

21 pages, 1640 KiB  
Article
QuantLaneNet: A 640-FPS and 34-GOPS/W FPGA-Based CNN Accelerator for Lane Detection
by Duc Khai Lam, Cam Vinh Du and Hoai Luan Pham
Sensors 2023, 23(15), 6661; https://doi.org/10.3390/s23156661 - 25 Jul 2023
Cited by 2 | Viewed by 2018
Abstract
Lane detection is one of the most fundamental problems in the rapidly developing field of autonomous vehicles. With the dramatic growth of deep learning in recent years, many models have achieved a high accuracy for this task. However, most existing deep-learning methods for lane detection face two main problems. First, most early studies usually follow a segmentation approach, which requires much post-processing to extract the necessary geometric information about the lane lines. Second, many models fail to reach real-time speed due to the high complexity of model architecture. To offer a solution to these problems, this paper proposes a lightweight convolutional neural network that requires only two small arrays for minimum post-processing, instead of segmentation maps, for the task of lane detection. The proposed network utilizes a simple lane representation format for its output. The proposed model achieves 93.53% accuracy on the TuSimple dataset. A hardware accelerator is proposed and implemented on the Virtex-7 VC707 FPGA platform to optimize processing time and power consumption. Several techniques, including data quantization to reduce the data width to 8 bits, various loop-unrolling strategies for different convolution layers, and pipelined computation across layers, are employed in the proposed hardware accelerator architecture. This implementation processes at 640 FPS while consuming only 10.309 W, equating to a computation throughput of 345.6 GOPS and an energy efficiency of 33.52 GOPS/W.
(This article belongs to the Special Issue Artificial Intelligence and Smart Sensors for Autonomous Driving)
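As a rough illustration of the 8-bit quantization the abstract mentions (the authors' exact fixed-point scheme is not specified here), the sketch below applies symmetric per-tensor int8 quantization, the common way to realize narrow datapaths on an FPGA. Note also that the reported figures are self-consistent: 345.6 GOPS at 10.309 W works out to about 33.52 GOPS/W.

```python
# Sketch of symmetric per-tensor int8 quantization, a common way to reach the
# 8-bit data width mentioned in the abstract (not the authors' exact scheme).
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 with a single scale; return (q, scale)."""
    peak = np.abs(w).max()
    scale = peak / 127.0 if peak > 0 else 1.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(0.0, 0.05, size=(3, 3, 16, 32)).astype(np.float32)
q, s = quantize_int8(w)
max_err = np.abs(dequantize(q, s) - w).max()   # bounded by roughly scale / 2

# Sanity check of the reported efficiency: 345.6 GOPS / 10.309 W = 33.52 GOPS/W.
assert round(345.6 / 10.309, 2) == 33.52
```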

17 pages, 655 KiB  
Article
Agent-Based Approach for (Peri-)Urban Inter-Modality Policies: Application to Real Data from the Lille Metropolis
by Azise Oumar Diallo, Guillaume Lozenguez, Arnaud Doniec and René Mandiau
Sensors 2023, 23(5), 2540; https://doi.org/10.3390/s23052540 - 24 Feb 2023
Cited by 1 | Viewed by 1610
Abstract
Transportation authorities have increasingly adopted incentive measures (fare-free public transport, construction of park-and-ride facilities, etc.) to reduce the use of private cars by combining them with public transit. However, such measures remain difficult to assess with traditional transport models. This article proposes a different approach: an agent-oriented model. To reproduce realistic applications in an urban context (a metropolis), we investigate the preferences and choices of different agents based on utilities and focus on a modal choice performed through a multinomial logit model. Moreover, we propose some methodological elements to identify individuals' profiles using public data (census and travel surveys). We also show that this model, applied in a real case study (Lille, France), is able to reproduce travel behaviors when combining private cars and public transport. In particular, we focus on the role played by park-and-ride facilities in this context. Thus, the simulation framework makes it possible to better understand individuals' intermodal travel behavior and to assess policies for its development.
(This article belongs to the Special Issue Artificial Intelligence and Smart Sensors for Autonomous Driving)
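For readers unfamiliar with the modal-choice machinery named in the abstract: a multinomial logit model assigns each mode m a probability exp(U_m) / Σ_k exp(U_k) given agent-specific utilities. The minimal sketch below samples a mode accordingly; the utility values are invented, whereas a real model would derive them from travel time, fare, and so on.

```python
# Minimal multinomial logit mode choice, the mechanism named in the abstract.
# Utilities below are invented for illustration.
import math
import random

def logit_choice(utilities, rng):
    """Sample a mode with probability exp(U_m) / sum_k exp(U_k)."""
    m = max(utilities.values())                       # stabilize the exponentials
    weights = [(mode, math.exp(u - m)) for mode, u in utilities.items()]
    total = sum(w for _, w in weights)
    r, acc = rng.random() * total, 0.0
    for mode, w in weights:
        acc += w
        if r <= acc:
            return mode
    return weights[-1][0]

rng = random.Random(42)
utilities = {"car": -2.1, "public_transit": -2.4, "park_and_ride": -2.3}
shares = {mode: 0 for mode in utilities}
for _ in range(10_000):
    shares[logit_choice(utilities, rng)] += 1         # ~39% / 29% / 32% split
```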

16 pages, 2701 KiB  
Article
Muti-Frame Point Cloud Feature Fusion Based on Attention Mechanisms for 3D Object Detection
by Zhenyu Zhai, Qiantong Wang, Zongxu Pan, Zhentong Gao and Wenlong Hu
Sensors 2022, 22(19), 7473; https://doi.org/10.3390/s22197473 - 2 Oct 2022
Cited by 7 | Viewed by 3113
Abstract
Object detection from continuous frames of point clouds is a new research direction. Currently, most studies fuse multi-frame point clouds using concatenation-based methods, which align different frames using information from GPS, IMU, etc. However, this kind of fusion can only align static objects, not moving ones. In this paper, we propose a non-local-based multi-scale feature fusion method that can handle both moving and static objects without GPS- and IMU-based registration. Considering that non-local methods are resource-consuming, we propose a novel simplified non-local block based on the sparsity of the point cloud. By filtering out empty units, memory consumption decreases by 99.93%. In addition, triple attention is adopted to enhance the key information on the object and suppress background noise, further benefiting non-local-based feature fusion methods. Finally, we verify the method based on PointPillars and CenterPoint. Experimental results show that the proposed method improves mAP by 3.9% and 4.1% compared with the concatenation-based fusion modules of PointPillars-2 and CenterPoint-2, respectively. In addition, the proposed network outperforms the powerful 3D-VID by 1.2% in mAP.
(This article belongs to the Special Issue Artificial Intelligence and Smart Sensors for Autonomous Driving)
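The memory saving the abstract reports comes from restricting non-local attention to occupied cells. As a rough, shape-level sketch (not the authors' block design), attention over a bird's-eye-view feature map can be computed only at non-empty cells, so the pairwise affinity matrix is N×N with N much smaller than H×W:

```python
# Shape-level sketch of a simplified non-local block that exploits sparsity:
# attention is computed only over occupied BEV cells, not the full dense grid.
# The gather/scatter scheme here is illustrative, not the authors' design.
import numpy as np

def sparse_non_local(features, occupancy):
    """features: (C, H, W) BEV map; occupancy: (H, W) bool mask of non-empty cells."""
    C, H, W = features.shape
    idx = np.flatnonzero(occupancy)                  # keep occupied cells only
    x = features.reshape(C, -1)[:, idx]              # (C, N) with N << H*W
    attn = x.T @ x / np.sqrt(C)                      # (N, N) pairwise affinities
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)          # row-wise softmax
    out = features.reshape(C, -1).copy()
    out[:, idx] = x + x @ attn.T                     # residual non-local update
    return out.reshape(C, H, W)

rng = np.random.default_rng(2)
feats = rng.normal(size=(64, 32, 32)).astype(np.float32)
occ = rng.uniform(size=(32, 32)) > 0.95              # ~5% occupied, sparse BEV grid
fused = sparse_non_local(feats, occ)                 # attn is ~51x51, not 1024x1024
```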

17 pages, 3302 KiB  
Article
Heavy Rain Face Image Restoration: Integrating Physical Degradation Model and Facial Component-Guided Adversarial Learning
by Chang-Hwan Son and Da-Hee Jeong
Sensors 2022, 22(14), 5359; https://doi.org/10.3390/s22145359 - 18 Jul 2022
Cited by 3 | Viewed by 1931
Abstract
With the recent increase in intelligent CCTVs for visual surveillance, a new image degradation model that integrates resolution conversion and synthetic rain models is required. For example, in heavy rain, face images captured by CCTV from a distance suffer significant deterioration in both visibility and resolution. Unlike traditional image degradation models (IDMs), such as those for rain removal and super resolution, this study addresses a new IDM, referred to as a scale-aware heavy rain model, and proposes a method for restoring high-resolution face images (HR-FIs) from low-resolution heavy rain face images (LRHR-FIs). To this end, a two-stage network is presented. The first stage removes heavy rain from the LRHR-FIs to generate low-resolution face images (LR-FIs) with improved visibility. To realize this, an interpretable IDM-based network is constructed to predict physical parameters, such as rain streaks, transmission maps, and atmospheric light. In addition, the image reconstruction loss is evaluated to enhance the estimates of the physical parameters. For the second stage, which aims to reconstruct the HR-FIs from the LR-FIs output by the first stage, facial component-guided adversarial learning (FCGAL) is applied to boost facial structure expressions. To focus on informative facial features and reinforce the authenticity of facial components, such as the eyes and nose, a face-parsing-guided generator and facial local discriminators are designed for FCGAL. The experimental results verify that the proposed approach, based on a physics-based network design and FCGAL, can remove heavy rain and increase resolution and visibility simultaneously. Moreover, the proposed heavy rain face image restoration outperforms state-of-the-art models for heavy rain removal, image-to-image translation, and super resolution.
(This article belongs to the Special Issue Artificial Intelligence and Smart Sensors for Autonomous Driving)
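Given the physical parameters the abstract names (rain streaks S, a transmission map T, and atmospheric light A, plus a resolution drop), a heavy rain degradation is commonly written as O = T ⊙ (J + S) + (1 − T)A followed by downsampling. The sketch below is a forward-degradation illustration of that standard form with invented parameters; it is not the authors' restoration network.

```python
# Forward sketch of a scale-aware heavy rain degradation (the commonly used
# form O = T * (J + S) + (1 - T) * A, then downsampling); parameters invented.
import numpy as np

def heavy_rain_degrade(clean, streaks, transmission, atmospheric_light, scale=4):
    """clean J, streaks S, transmission T: (H, W) in [0, 1]; A: scalar."""
    rainy = transmission * (clean + streaks) + (1.0 - transmission) * atmospheric_light
    return np.clip(rainy, 0.0, 1.0)[::scale, ::scale]   # low-resolution output

rng = np.random.default_rng(3)
clean = rng.uniform(size=(128, 128))                    # stand-in face image
streaks = (rng.uniform(size=(128, 128)) > 0.97) * 0.6   # sparse bright streaks
transmission = rng.uniform(0.6, 0.9, size=(128, 128))   # haze-like attenuation
lr_rainy = heavy_rain_degrade(clean, streaks, transmission, atmospheric_light=0.8)
```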
