
Sensor-Fusion-Based Deep Interpretable Networks

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: 25 March 2025 | Viewed by 3916

Special Issue Editors


Dr. Guangfeng Lin
Guest Editor
Department of Information Science, Xi’an University of Technology, Xi’an 710048, China
Interests: visual information processing; pattern recognition

Prof. Dr. Guoliang Fan
Guest Editor
School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK 74078, USA
Interests: image processing; machine learning; pattern recognition; computer vision; biomedical imaging and multimedia applications

Dr. Zhigang Ling
Guest Editor
College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
Interests: computer vision; machine learning; pattern recognition

Dr. Jiangyun Li
Guest Editor
School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
Interests: image and video semantic segmentation; deep learning; industrial process control; industrial intelligence; natural language processing; knowledge graph

Special Issue Information

Dear Colleagues,

Sensor fusion combines data and information from multiple sensors to obtain more comprehensive, accurate, and reliable perception results. Sensor fusion based on deep interpretable networks goes a step further: it exploits the powerful modeling and abstraction capabilities of deep learning to process multi-source sensor data while emphasizing the interpretability of the model, making the model's outputs more convincing and credible.

In such networks, data fusion can fully exploit the redundancy and complementarity between different sensors to improve overall perception accuracy and robustness. With the sensors accurately calibrated and synchronized, deep learning models can learn deeper features and more complex patterns from the fused data. In addition, model interpretability aids in understanding the network's internal working principles and decision making, which is crucial for safety and reliability in key application scenarios such as autonomous driving and intelligent manufacturing.
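
To make the redundancy argument concrete, the following minimal sketch (all values are illustrative assumptions, not from any particular system) fuses two noisy readings of the same quantity by inverse-variance weighting; the fused estimate has lower variance than either sensor alone, which is the statistical core of why fusing redundant sensors improves accuracy:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 5.0                    # unknown quantity both sensors observe
var_a, var_b = 0.5, 0.2        # assumed (calibrated) noise variances

z_a = truth + rng.normal(0.0, np.sqrt(var_a), 10_000)
z_b = truth + rng.normal(0.0, np.sqrt(var_b), 10_000)

# Inverse-variance weighting: the optimal linear fusion of two
# unbiased, independent measurements of the same quantity.
w_a, w_b = 1.0 / var_a, 1.0 / var_b
fused = (w_a * z_a + w_b * z_b) / (w_a + w_b)

# Empirical variances: ~0.5 and ~0.2 alone, ~0.14 fused.
print(z_a.var(), z_b.var(), fused.var())
```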

This Special Issue aims to bring together original research and review articles on recent advances, technologies, solutions, applications, and new challenges in the field of sensor-fusion-based deep interpretable networks.

Potential topics include but are not limited to the following:

  • Sensor Fusion for Comprehensive Perception;
  • Multi-View Adaptive Fusion Network for Object Detection, Recognition and Understanding;
  • Deep Learning and Sensor Fusion for Enhanced Decision Making;
  • Open-Source Sensor Fusion for Different Application Scenarios;
  • Kalman and Complementary Filters in Sensor Fusion (see the sketch following this list);
  • Autonomous Driving with Scene Understanding;
  • Sensor Fusion for Motion Tracking Capabilities in Smartphones and Tablets.
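
As a concrete illustration of the complementary-filter topic above, here is a minimal sketch (the signal names, sample rate, and alpha value are illustrative assumptions) that fuses a drifting gyroscope rate with a noisy but drift-free accelerometer tilt angle:

```python
import numpy as np

def complementary_filter(gyro_rate, accel_angle, dt=0.01, alpha=0.98):
    """Fuse a gyroscope rate (accurate short-term, drifts long-term) with
    an accelerometer tilt angle (noisy short-term, drift-free long-term)."""
    angle = accel_angle[0]
    estimates = []
    for rate, acc in zip(gyro_rate, accel_angle):
        # High-pass the integrated gyro, low-pass the accelerometer.
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc
        estimates.append(angle)
    return np.asarray(estimates)

# Toy usage: a constant 10-degree tilt seen through both sensors.
t = np.arange(0.0, 5.0, 0.01)
gyro = np.full_like(t, 0.05)                        # small gyro bias (drift source)
accel = 10.0 + np.random.normal(0.0, 1.0, t.size)   # noisy tilt readings
print(complementary_filter(gyro, accel)[-1])        # stays near 10 degrees
```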

Dr. Guangfeng Lin
Prof. Dr. Guoliang Fan
Dr. Zhigang Ling
Dr. Jiangyun Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensor fusion
  • reliability networks
  • causal reasoning
  • interpretable networks
  • graph neural networks
  • attention fusion mechanism

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

14 pages, 3792 KiB  
Article
Wind Turbine Blade Fault Detection Method Based on TROA-SVM
by Zhuo Lei, Haijun Lin, Xudong Tang, Yong Xiong and He Wen
Sensors 2025, 25(3), 720; https://doi.org/10.3390/s25030720 - 24 Jan 2025
Viewed by 357
Abstract
Wind turbines are predominantly situated in remote, high-altitude regions, where they face a myriad of harsh environmental conditions. Factors such as high humidity, strong gusts, lightning strikes, and heavy snowfall significantly increase the vulnerability of turbine blades to fatigue damage. This susceptibility poses serious risks to the normal operation and longevity of the turbines, necessitating effective monitoring and maintenance strategies. In response to these challenges, this paper proposes a novel fault detection method specifically designed for analyzing wind turbine blade noise signals. This method integrates the Tyrannosaurus Optimization Algorithm (TROA) with a support vector machine (SVM), aiming to enhance the accuracy and reliability of fault detection. The process begins with the careful preprocessing of raw noise signals collected from wind turbines during actual operational conditions. The method extracts vital features from three key perspectives: the time domain, frequency domain, and cepstral domain. By constructing a comprehensive feature matrix that encapsulates multi-dimensional characteristics, the approach ensures that all relevant information is captured. Rigorous analysis and feature selection are subsequently conducted to eliminate redundant data, thereby focusing on retaining the most significant features for classification. A TROA-SVM classification model is then developed to effectively identify the faults of the turbine blades. The performance of this method is validated through extensive experiments, which indicate that the recognition accuracy rate is 98.7%. This accuracy is higher than that of the traditional methods, such as SVM, K-Nearest Neighbors (KNN), and random forest, demonstrating the proposed method’s superiority and effectiveness.
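
To illustrate the pipeline this abstract describes — multi-domain feature extraction followed by an optimized SVM — here is a minimal sketch. The TROA optimizer itself is not reproduced; a plain scikit-learn grid search stands in for its hyperparameter tuning, and the feature choices, sample rate, and parameter grid are illustrative assumptions:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def blade_noise_features(x, fs=16_000):
    """Time-, frequency-, and cepstral-domain features of one noise frame."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    cepstrum = np.abs(np.fft.irfft(np.log(spec + 1e-12)))
    return np.array([
        np.sqrt(np.mean(x**2)),                    # RMS energy (time domain)
        np.mean(((x - x.mean()) / x.std())**4),    # kurtosis (time domain)
        np.sum(freqs * spec) / np.sum(spec),       # spectral centroid (frequency domain)
        cepstrum[1:len(cepstrum) // 2].max(),      # dominant cepstral peak
    ])

# Stand-in for the TROA search: tune the SVM's C and gamma by grid search.
model = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    {"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.01, 0.1]},
    cv=5,
)
# model.fit(np.stack([blade_noise_features(f) for f in frames]), labels)
```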

15 pages, 7129 KiB  
Article
Enhancing LiDAR Mapping with YOLO-Based Potential Dynamic Object Removal in Autonomous Driving
by Seonghark Jeong, Heeseok Shin, Myeong-Jun Kim, Dongwan Kang, Seangwock Lee and Sangki Oh
Sensors 2024, 24(23), 7578; https://doi.org/10.3390/s24237578 - 27 Nov 2024
Viewed by 756
Abstract
In this study, we propose an enhanced LiDAR-based mapping and localization system that utilizes a camera-based YOLO (You Only Look Once) algorithm to detect and remove dynamic objects, such as vehicles, from the mapping process. GPS, while commonly used for localization, often fails in urban environments due to signal blockages. To address this limitation, our system integrates YOLOv4 with LiDAR, enabling the removal of dynamic objects to improve map accuracy and localization in high-traffic areas. Existing methods using LiDAR segmentation for map matching often suffer from missed detections and false positives, degrading performance. Our approach leverages YOLOv4’s robust object detection capabilities to eliminate potentially dynamic objects while retaining static environmental features, such as buildings, to enhance map accuracy and reliability. The proposed system was validated using a mid-size SUV equipped with LiDAR and camera sensors. The experimental results demonstrate significant improvements in map-matching and localization performance, particularly in urban environments. The system achieved RMSE (Root Mean Square Error) reductions compared to conventional methods, with RMSE values decreasing from 0.9870 to 0.9724 in open areas and from 1.3874 to 1.1217 in urban areas. These findings highlight the ability of the Vision + LiDAR + NDT method to enhance localization performance in both simple and complex environments. By addressing the challenges of dynamic obstacles, the proposed system effectively improves the accuracy and robustness of autonomous navigation in high-traffic settings without relying on GPS.
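
A minimal sketch of the core geometric step this abstract describes: projecting LiDAR points into the camera image and discarding those that fall inside YOLO bounding boxes. The calibration matrices, the depth threshold, and the absence of per-box depth checks are simplifying assumptions, not the paper's implementation:

```python
import numpy as np

def remove_dynamic_points(points, boxes, K, T_cam_lidar):
    """Drop LiDAR points whose image projection falls inside any detected
    2D box (x1, y1, x2, y2). Assumes calibrated intrinsics K (3x3) and a
    4x4 LiDAR-to-camera extrinsic T_cam_lidar; a production system would
    also depth-check so background points behind a vehicle survive."""
    pts_h = np.c_[points, np.ones(len(points))]      # N x 4 homogeneous
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]           # camera-frame coordinates
    in_front = cam[:, 2] > 0.1                       # only points ahead of camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                      # perspective division

    dynamic = np.zeros(len(points), dtype=bool)
    for x1, y1, x2, y2 in boxes:
        dynamic |= (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & \
                   (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    return points[~(dynamic & in_front)]
```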

20 pages, 9098 KiB  
Article
Local–Global Feature Adaptive Fusion Network for Building Crack Detection
by Yibin He, Zhengrong Yuan, Xinhong Xia, Bo Yang, Huiting Wu, Wei Fu and Wenxuan Yao
Sensors 2024, 24(21), 7076; https://doi.org/10.3390/s24217076 - 3 Nov 2024
Cited by 1 | Viewed by 1049
Abstract
Cracks represent one of the most common types of damage in building structures and it is crucial to detect cracks in a timely manner to maintain the safety of the buildings. In general, tiny cracks require focusing on local detail information while complex long cracks and cracks similar to the background require more global features for detection. Therefore, it is necessary for crack detection to effectively integrate local and global information. Focusing on this, a local–global feature adaptive fusion network (LGFAF-Net) is proposed. Specifically, we introduce the VMamba encoder as the global feature extraction branch to capture global long-range dependencies. To enhance the ability of the network to acquire detailed information, the residual network is added as another local feature extraction branch, forming a dual-encoding network to enhance the performance of crack detection. In addition, a multi-feature adaptive fusion (MFAF) module is proposed to integrate local and global features from different branches and facilitate representative feature learning. Furthermore, we propose a building exterior wall crack dataset (BEWC) captured by unmanned aerial vehicles (UAVs) to evaluate the performance of the proposed method used to identify wall cracks. Other widely used public crack datasets are also utilized to verify the generalization of the method. Extensive experiments performed on three crack datasets demonstrate the effectiveness and superiority of the proposed method.
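
The paper's MFAF module is not specified in this abstract, so the following PyTorch sketch shows one plausible reading of adaptive local–global fusion rather than the authors' design: a learned per-pixel gate blends a local (CNN) feature map with a global (e.g., VMamba) feature map of the same shape:

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Adaptively blend local and global feature maps of the same shape;
    a learned per-pixel gate decides the mix. A plausible sketch of
    adaptive fusion, not the paper's MFAF module."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, local_feat, global_feat):
        g = self.gate(torch.cat([local_feat, global_feat], dim=1))
        return g * local_feat + (1 - g) * global_feat

fuse = GatedFusion(64)
out = fuse(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```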

14 pages, 35441 KiB  
Article
Smart Ship Draft Reading by Dual-Flow Deep Learning Architecture and Multispectral Information
by Bo Zhang, Jiangyun Li, Haicheng Tang and Xi Liu
Sensors 2024, 24(17), 5580; https://doi.org/10.3390/s24175580 - 28 Aug 2024
Cited by 1 | Viewed by 1023
Abstract
In maritime transportation, a ship’s draft survey serves as a primary method for weighing bulk cargo. The accuracy of the ship’s draft reading determines the fairness of bulk cargo transactions. Human visual-based draft reading methods face issues such as safety concerns, high labor costs, and subjective interpretation. Therefore, some image processing methods are utilized to achieve automatic draft reading. However, due to the limitations in the spectral characteristics of RGB images, existing image processing methods are susceptible to water surface environmental interference, such as reflections. To solve this issue, we obtained and annotated 524 multispectral images of a ship’s draft as the research dataset, marking the first application of integrating NIR information and RGB images for automatic draft reading tasks. Additionally, a dual-branch backbone named BIF is proposed to extract and combine spectral information from RGB and NIR images. The backbone network can be combined with the existing segmentation head and detection head to perform waterline segmentation and draft detection. By replacing the original ResNet-50 backbone of YOLOv8, we reached a mAP of 99.2% in the draft detection task. Similarly, combining UPerNet with our dual-branch backbone, the mIoU of the waterline segmentation task was improved from 98.9% to 99.3%. The inaccuracy of the draft reading is less than ±0.01 m, confirming the efficacy of our method for automatic draft reading tasks.
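
As a hedged sketch of the dual-branch idea this abstract describes (not the paper's BIF backbone, whose details are not given here), the following PyTorch stem extracts RGB and NIR features with separate convolutions and mixes them with a 1x1 convolution; the layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DualBranchStem(nn.Module):
    """Minimal two-branch stem: separate convolutions extract RGB and NIR
    features, which are concatenated and mixed by a 1x1 convolution.
    A sketch of the dual-branch idea, not the paper's BIF backbone."""
    def __init__(self, out_channels=64):
        super().__init__()
        self.rgb = nn.Conv2d(3, out_channels, 3, stride=2, padding=1)
        self.nir = nn.Conv2d(1, out_channels, 3, stride=2, padding=1)
        self.mix = nn.Conv2d(2 * out_channels, out_channels, kernel_size=1)

    def forward(self, rgb, nir):
        # Per-branch features are fused channel-wise before downstream heads.
        return self.mix(torch.cat([self.rgb(rgb).relu(),
                                   self.nir(nir).relu()], dim=1))

stem = DualBranchStem()
feat = stem(torch.randn(1, 3, 256, 256), torch.randn(1, 1, 256, 256))
```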
