Multi-Source Image Fusion, Restoration, and Understanding and Its Application in Sensing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Environmental Sensing".

Deadline for manuscript submissions: 25 July 2025 | Viewed by 2322

Special Issue Editors


Guest Editor
School of Information Science and Engineering, Hunan Institute of Science and Technology, Yueyang 414006, China
Interests: remote sensing; image processing; deep learning

Special Issue Information

Dear Colleagues,

Multi-source image fusion, restoration, and understanding are pivotal tasks in signal processing and computer vision, driven by the increasing demand for comprehensive image analysis across a wide range of applications. The field has grown out of rapid advances in imaging technologies, which have led to a proliferation of images from diverse sources such as satellites, drones, and ground sensors. Each source captures distinct aspects of a scene, and combining them yields a more holistic and detailed representation. Processing and analyzing images from multiple sources is therefore crucial for applications including environmental monitoring, medical imaging, surveillance, and remote sensing.

This Special Issue aims to present recent advances in multi-source image fusion, restoration, and understanding techniques, as well as their applications in sensing. Authors are welcome to submit research papers and literature reviews related to multi-source image fusion, restoration, and understanding.

Dr. Xianfeng Ou
Dr. Honggang Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensing
  • image fusion
  • image restoration
  • image understanding

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

18 pages, 2326 KiB  
Article
Isotropic Brain MRI Reconstruction from Orthogonal Scans Using 3D Convolutional Neural Network
by Jinsha Tian, Canjun Xiao and Hongjin Zhu
Sensors 2024, 24(20), 6639; https://doi.org/10.3390/s24206639 - 15 Oct 2024
Viewed by 782
Abstract
As an alternative to true isotropic 3D imaging, image super-resolution (SR) has been applied to reconstruct an isotropic 3D volume from multiple anisotropic scans. However, traditional SR methods struggle with inadequate performance, prolonged processing times, and the necessity for manual feature extraction. Motivated by the exceptional representational ability and automatic feature extraction of convolutional neural networks (CNNs), in this work, we present an end-to-end isotropic MRI reconstruction strategy based on deep learning. The proposed method is based on 3D convolutional neural networks (3D CNNs), which can effectively capture the 3D structural features of MRI volumes and accurately predict potential structure. In addition, the proposed method takes multiple orthogonal scans as input and thus enables the model to use more complementary information from different dimensions for precise inference. Experimental results show that the proposed algorithm achieves promising performance in terms of both quantitative and qualitative assessments. In addition, it can process a 3D volume with a size of 256 × 256 × 256 in less than 1 min with the support of an NVIDIA GeForce GTX 1080Ti GPU, which suggests that it is not only a quantitatively superior method but also a practical one.
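
The core idea described above can be pictured as a 3D CNN that takes several orthogonal anisotropic scans, resampled to a common grid, and predicts a single isotropic volume. The sketch below is a minimal illustration of that idea, not the authors' published architecture; the module name, layer counts, and the residual-over-the-averaged-scans design are assumptions made for the example.

# Minimal sketch (illustrative, not the published model): a 3D CNN that fuses
# three orthogonal anisotropic scans, stacked as input channels, into one
# isotropic volume.
import torch
import torch.nn as nn

class OrthogonalFusion3DCNN(nn.Module):
    def __init__(self, in_scans=3, features=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_scans, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, 1, kernel_size=3, padding=1),
        )

    def forward(self, axial, coronal, sagittal):
        # Each input: (batch, 1, D, H, W), already resampled to the common target grid.
        x = torch.cat([axial, coronal, sagittal], dim=1)
        # Predict a residual correction on top of the average of the three scans.
        return x.mean(dim=1, keepdim=True) + self.net(x)

# Toy usage: reconstruct a 64^3 patch from three orthogonal inputs.
patch = lambda: torch.randn(1, 1, 64, 64, 64)
model = OrthogonalFusion3DCNN()
isotropic = model(patch(), patch(), patch())   # shape (1, 1, 64, 64, 64)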

23 pages, 44139 KiB  
Article
Degradation Type-Aware Image Restoration for Effective Object Detection in Adverse Weather
by Xiaochen Huang, Xiaofeng Wang, Qizhi Teng, Xiaohai He and Honggang Chen
Sensors 2024, 24(19), 6330; https://doi.org/10.3390/s24196330 - 30 Sep 2024
Viewed by 799
Abstract
Despite significant advancements in CNN-based object detection technology, adverse weather conditions can disrupt imaging sensors’ ability to capture clear images, thereby adversely impacting detection accuracy. Mainstream algorithms for adverse weather object detection enhance detection performance through image restoration methods. Nevertheless, the majority of these approaches are designed for a specific degradation scenario, making it difficult to adapt to diverse weather conditions. To cope with this issue, we put forward a degradation type-aware restoration-assisted object detection network, dubbed DTRDNet. It contains an object detection network with a shared feature encoder (SFE) and object detection decoder, a degradation discrimination image restoration decoder (DDIR), and a degradation category predictor (DCP). In the training phase, we jointly optimize the whole framework on a mixed weather dataset, including degraded images and clean images. Specifically, the degradation type information is incorporated into our DDIR to avoid interaction between clean images and the restoration module. Furthermore, the DCP gives the SFE awareness of the degradation category, enhancing the detector’s adaptability to diverse weather conditions and enabling it to provide the requisite environmental information. Both the DCP and the DDIR can be removed as required at the inference stage to preserve the real-time performance of the detection algorithm. Extensive experiments on clear, hazy, rainy, and snowy images demonstrate that our DTRDNet outperforms advanced object detection algorithms, achieving an average mAP of 79.38% across the four weather test sets.
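
As a rough illustration of the training/inference split described above, the sketch below wires a shared feature encoder to a detection head plus two optional auxiliary heads standing in for the DDIR and DCP, which are dropped at inference. It is not the published DTRDNet code; the toy layer choices, class counts, and head designs are assumptions.

# Minimal sketch (illustrative only): shared encoder, detection head, and two
# auxiliary heads (restoration, degradation category) used only during training.
import torch
import torch.nn as nn

class SharedFeatureEncoder(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.body(x)

class DTRDNetSketch(nn.Module):
    def __init__(self, num_classes=4, num_degradations=4, ch=32):
        super().__init__()
        self.encoder = SharedFeatureEncoder(ch)
        # Detection head: per-location class scores + box offsets (toy stand-in).
        self.det_head = nn.Conv2d(ch, num_classes + 4, 1)
        # Restoration decoder stand-in: upsamples features back to a clean image.
        self.restore_head = nn.Sequential(
            nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False),
            nn.Conv2d(ch, 3, 3, padding=1),
        )
        # Degradation category predictor stand-in (e.g., clean/haze/rain/snow).
        self.degrade_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, num_degradations)
        )

    def forward(self, x, with_aux=True):
        feats = self.encoder(x)
        det = self.det_head(feats)
        if not with_aux:          # inference: keep only the detection branch
            return det
        return det, self.restore_head(feats), self.degrade_head(feats)

img = torch.randn(2, 3, 128, 128)
model = DTRDNetSketch()
det, restored, deg_logits = model(img)        # joint training outputs
det_only = model(img, with_aux=False)         # deployment: detection only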

21 pages, 5999 KiB  
Article
A Transformer-Based Image-Guided Depth-Completion Model with Dual-Attention Fusion Module
by Shuling Wang, Fengze Jiang and Xiaojin Gong
Sensors 2024, 24(19), 6270; https://doi.org/10.3390/s24196270 - 27 Sep 2024
Viewed by 432
Abstract
Depth information is crucial for perceiving three-dimensional scenes. However, depth maps captured directly by depth sensors are often incomplete and noisy. Our objective in the depth-completion task is to generate dense and accurate depth maps from sparse depth inputs by fusing guidance information from corresponding color images obtained from camera sensors. To address these challenges, we introduce transformer models, which have shown great promise in the field of vision, into the task of image-guided depth completion. By leveraging the self-attention mechanism, we propose a novel network architecture that effectively meets the requirements of high accuracy and resolution in depth data. To be more specific, we design a dual-branch model with a transformer-based encoder that serializes image features into tokens step by step and extracts multi-scale pyramid features suitable for pixel-wise dense prediction tasks. Additionally, we incorporate a dual-attention fusion module to enhance the fusion between the two branches. This module combines convolution-based spatial- and channel-attention mechanisms, which are adept at capturing local information, with cross-attention mechanisms that excel at capturing long-distance relationships. Our model achieves state-of-the-art performance on both the NYUv2 depth and SUN-RGBD depth datasets. Additionally, our ablation studies confirm the effectiveness of the designed modules.
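
To make the fusion mechanism concrete, the sketch below combines squeeze-and-excitation-style channel attention and convolutional spatial attention with cross-attention between depth and image tokens, as one plausible reading of a dual-attention fusion module. It is an interpretation of the abstract, not the authors' implementation; the module name, dimensions, and residual fusion are assumptions.

# Minimal sketch (an interpretation, not the published module): depth features
# are reweighted by channel/spatial attention, then query image features via
# cross-attention over flattened tokens.
import torch
import torch.nn as nn

class DualAttentionFusion(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        # Channel attention (squeeze-and-excitation style).
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(dim, dim // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(dim // 4, dim, 1), nn.Sigmoid(),
        )
        # Spatial attention from pooled channel statistics.
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())
        # Cross-attention: depth tokens query image tokens.
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, depth_feat, image_feat):
        # depth_feat, image_feat: (B, C, H, W) feature maps from the two branches.
        d = depth_feat * self.channel(depth_feat)
        stats = torch.cat([d.mean(1, keepdim=True), d.amax(1, keepdim=True)], dim=1)
        d = d * self.spatial(stats)
        b, c, h, w = d.shape
        q = d.flatten(2).transpose(1, 2)              # (B, H*W, C) depth tokens
        kv = image_feat.flatten(2).transpose(1, 2)    # (B, H*W, C) image tokens
        attended, _ = self.cross(self.norm(q), kv, kv)
        fused = q + attended                          # residual fusion of the branches
        return fused.transpose(1, 2).reshape(b, c, h, w)

fused = DualAttentionFusion()(torch.randn(1, 64, 30, 40), torch.randn(1, 64, 30, 40))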
