Advances in Deep Fusion of Multi-Source Remote Sensing Images

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 15 February 2025 | Viewed by 7718

Special Issue Editors

School of Information and Communication Technology, Griffith University, Nathan, QLD 4111, Australia
Interests: pattern recognition; computer vision and spectral imaging with their applications to remote sensing and environmental informatics

School of Optoelectronic Information Science and Technology, Yantai University, Yantai 264003, China
Interests: deep fusion of multi-source data; band selection; denoising; hyperspectral unmixing; data augmentation; image classification; object detection

Special Issue Information

Dear Colleagues,

With developments in sensor technology, remote sensing data have become increasingly diversified, e.g., hyperspectral images, panchromatic images, high-resolution color images, multispectral images, airborne LiDAR data, and synthetic aperture radar (SAR) data. These multi-source data have broadened the application of remote sensing in many fields, such as environmental monitoring, smart agriculture, and intelligent transportation. However, owing to differences in sensor working principles, complex weather and lighting conditions, and human interference, multi-source data show great differences in apparent structure, completeness, and accuracy. Multi-source data fusion is therefore highly valuable for compensating for the information-acquisition limitations of any single sensor, and it helps enhance our comprehensive and effective perception of complex scenes. However, existing deep fusion techniques often neglect factors such as severe missing data, low signal-to-noise ratios, and strong pseudo-features, resulting in fusion models with insufficient robustness and fusion results of poor quality and low credibility. To address these issues, this Special Issue focuses on advances in multi-source remote sensing data fusion that improve the quality and credibility of multi-source data. Topics range from overviews and deep remote sensing image fusion methods to performance evaluation and applications of fused images. Articles may address, but are not limited to, the following topics:

  • Review of multi-source image fusion;
  • Advanced processing methods of remote sensing images, e.g., denoising, deblurring, super-resolution, etc.;
  • Advanced remote sensing image augmentation methods concerning insufficient or imbalanced training data;
  • Advanced deep fusion methods for multi-source remote sensing images, e.g., hyperspectral images, panchromatic images, high-resolution color images, multispectral images, airborne LiDAR data, and synthetic aperture radar (SAR) data;
  • Supervised, weakly supervised, or unsupervised representation learning methods for remote sensing images;
  • Application of remote sensing image fusion, e.g., classification, change detection, anomaly detection, object detection, disaster monitoring, scene recognition, etc.;
  • Light-weight fusion networks for remote sensing image fusion.

Dr. Jun Zhou
Prof. Dr. Qian Du
Prof. Dr. Danfeng Hong
Dr. Chenhong Sui
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • multi-source image fusion
  • image augmentation
  • representation learning
  • image classification
  • change detection
  • anomaly detection
  • object detection
  • light-weight network
  • denoising
  • deblurring
  • super-resolution

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)

Research

25 pages, 15402 KiB  
Article
Interplay Between Atmospheric Correction and Fusion Techniques Enhances the Quality of Remote Sensing Image Fusion
by Yang Li, Feinan Chen, Tangyu Sui, Rufang Ti, Weihua Cheng, Jin Hong and Zhenwei Qiu
Remote Sens. 2024, 16(21), 3916; https://doi.org/10.3390/rs16213916 - 22 Oct 2024
Viewed by 909
Abstract
Remote sensing image fusion technology integrates observational data from multiple satellite platforms to leverage the complementary advantages of the different types of remote sensing images. High-quality fused remote sensing images provide detailed information on surface radiation, climate, and environmental conditions, thereby supporting governmental policies on environmental changes. Improving the quality and quantitative accuracy of fused images is a crucial trend in remote sensing image fusion research. This study investigates the impact of atmospheric correction and five widely applied fusion techniques on remote sensing image fusion. By constructing four fusion frameworks, it evaluates how the choice of fusion method, the implementation of atmospheric correction, the synchronization of atmospheric parameters, and the timing of atmospheric correction influence the outcomes of remote sensing image fusion. Aerial flights using remote sensors were conducted to acquire atmospheric parameter distribution images that are strictly synchronous with the remote sensing images. Comprehensive and systematic evaluations of the fused remote sensing images were performed. Experiments show that for the remote sensing images used, selecting the appropriate fusion method can improve the spatial detail evaluation metrics of the fused images by up to 2.739 times, with the smallest deviation from true reflectance reaching 35.02%. Incorporating synchronous atmospheric parameter distribution images can enhance the spatial detail evaluation metrics by up to 2.03 times, with the smallest deviation from true reflectance reaching 5.4%. This indicates that choosing an appropriate fusion method and performing imaging-based synchronous atmospheric correction before fusion can maximize the enhancement of spatial details and spectral quantification in fused images. Full article
(This article belongs to the Special Issue Advances in Deep Fusion of Multi-Source Remote Sensing Images)
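
The abstract does not name the five fusion techniques that were evaluated. Purely as an illustrative sketch, and not as one of the paper's methods, the NumPy snippet below implements the classical Brovey transform, a widely used intensity-substitution fusion step; in the spirit of the study, it could be applied either to raw radiance or to atmospherically corrected reflectance inputs.

```python
import numpy as np

def brovey_fusion(ms: np.ndarray, pan: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Brovey-transform pansharpening (illustrative only).

    ms  : multispectral cube, shape (bands, H, W), resampled to the pan grid
    pan : panchromatic image, shape (H, W)
    Returns a fused cube with the pan spatial detail injected into each band.
    """
    intensity = ms.sum(axis=0) + eps   # per-pixel intensity of the MS bands
    gain = pan / intensity             # ratio of pan detail to MS intensity
    return ms * gain[None, :, :]       # rescale every band by the same gain

# Hypothetical inputs: a 4-band MS image and a co-registered pan image
ms = np.random.rand(4, 256, 256).astype(np.float32)
pan = np.random.rand(256, 256).astype(np.float32)
print(brovey_fusion(ms, pan).shape)  # (4, 256, 256)
```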

15 pages, 2452 KiB  
Article
Multistage Interaction Network for Remote Sensing Change Detection
by Meng Zhou, Weixian Qian and Kan Ren
Remote Sens. 2024, 16(6), 1077; https://doi.org/10.3390/rs16061077 - 19 Mar 2024
Cited by 2 | Viewed by 1335
Abstract
Change detection in remote sensing imagery is vital for Earth monitoring but faces challenges such as background complexity and pseudo-changes. Effective interaction between bitemporal images is crucial for accurate change information extraction. This paper presents a multistage interaction network designed for effective change detection, incorporating interaction at the image, feature, and decision levels. At the image level, change information is directly extracted from intensity changes, mitigating potential change information loss during feature extraction. Instead of separately extracting features from bitemporal images, the feature-level interaction jointly extracts features from bitemporal images. By enhancing relevance to spatial variant information and shared semantic channels, the network excels in overcoming background complexity and pseudo-changes. The decision-level interaction combines image-level and feature-level interactions, producing multiscale feature differences for precise change prediction. Extensive experiments demonstrate the superior performance of our method compared to existing approaches, establishing it as a robust solution for remote sensing image change detection. Full article
(This article belongs to the Special Issue Advances in Deep Fusion of Multi-Source Remote Sensing Images)
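
As a rough, hypothetical sketch of the image-level interaction idea described above (not the paper's actual multistage architecture), the PyTorch module below passes the raw bitemporal intensity difference through one branch and the concatenated image pair through a joint feature-extraction branch before predicting change logits; all module names and channel sizes are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class ImageLevelInteraction(nn.Module):
    """Toy bitemporal change-detection head with image-level interaction."""

    def __init__(self, in_ch: int = 3, feat_ch: int = 32):
        super().__init__()
        # Encodes the raw intensity change, so change cues are not lost
        # during deep feature extraction.
        self.diff_branch = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True))
        # Joint feature extraction from the concatenated image pair.
        self.joint_branch = nn.Sequential(
            nn.Conv2d(2 * in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.head = nn.Conv2d(2 * feat_ch, 1, 1)  # binary change logits

    def forward(self, t1: torch.Tensor, t2: torch.Tensor) -> torch.Tensor:
        diff = self.diff_branch(torch.abs(t1 - t2))
        joint = self.joint_branch(torch.cat([t1, t2], dim=1))
        return self.head(torch.cat([diff, joint], dim=1))

model = ImageLevelInteraction()
t1, t2 = torch.rand(2, 3, 128, 128), torch.rand(2, 3, 128, 128)
print(model(t1, t2).shape)  # torch.Size([2, 1, 128, 128])
```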

22 pages, 2005 KiB  
Article
HATF: Multi-Modal Feature Learning for Infrared and Visible Image Fusion via Hybrid Attention Transformer
by Xiangzeng Liu, Ziyao Wang, Haojie Gao, Xiang Li, Lei Wang and Qiguang Miao
Remote Sens. 2024, 16(5), 803; https://doi.org/10.3390/rs16050803 - 25 Feb 2024
Cited by 2 | Viewed by 1893
Abstract
Current CNN-based methods for infrared and visible image fusion are limited by the low discrimination of extracted structural features, the adoption of uniform loss functions, and the lack of inter-modal feature interaction, which make it difficult to obtain optimal fusion results. To alleviate the above problems, a framework for multi-modal feature learning fusion using a cross-attention Transformer is proposed. To extract rich structural features at different scales, residual U-Nets with mixed receptive fields are adopted to capture salient object information at various granularities. Then, a hybrid attention fusion strategy is employed to integrate the complementary information from the input images. Finally, adaptive loss functions are designed to achieve optimal fusion results for different modal features. The fusion framework proposed in this study is thoroughly evaluated using the TNO, FLIR, and LLVIP datasets, encompassing diverse scenes and varying illumination conditions. In the comparative experiments, HATF achieved competitive results on three datasets, with the EN, SD, MI, and SSIM metrics reaching the best performance on the TNO dataset, surpassing the second-best method by 2.3%, 18.8%, 4.2%, and 2.2%, respectively. These results validate the effectiveness of the proposed method in terms of both robustness and image fusion quality compared to several popular methods. Full article
(This article belongs to the Special Issue Advances in Deep Fusion of Multi-Source Remote Sensing Images)
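
To make the cross-attention idea concrete, the minimal PyTorch sketch below lets visible-band feature tokens attend to infrared feature tokens with a residual connection. It is an assumption-laden illustration only and omits the residual U-Net encoders, the hybrid attention strategy, and the adaptive losses that define HATF.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Toy cross-attention block: visible features query infrared features."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, vis: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        # vis, ir: (B, C, H, W) feature maps from the two modalities
        b, c, h, w = vis.shape
        q = vis.flatten(2).transpose(1, 2)   # (B, H*W, C) queries from visible
        kv = ir.flatten(2).transpose(1, 2)   # keys/values from infrared
        fused, _ = self.attn(q, kv, kv)      # visible tokens gather IR context
        fused = self.norm(fused + q)         # residual connection
        return fused.transpose(1, 2).reshape(b, c, h, w)

block = CrossModalAttention()
vis, ir = torch.rand(1, 64, 32, 32), torch.rand(1, 64, 32, 32)
print(block(vis, ir).shape)  # torch.Size([1, 64, 32, 32])
```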

19 pages, 83006 KiB  
Article
SCDA: A Style and Content Domain Adaptive Semantic Segmentation Method for Remote Sensing Images
by Hongfeng Xiao, Wei Yao, Haobin Chen, Li Cheng, Bo Li and Longfei Ren
Remote Sens. 2023, 15(19), 4668; https://doi.org/10.3390/rs15194668 - 23 Sep 2023
Cited by 2 | Viewed by 1496
Abstract
Due to differences in imaging methods and acquisition areas, remote sensing datasets can exhibit significant variations in both image style and content. In addition, the ground objects can differ greatly in scale even within the same remote sensing image. These differences should be considered in remote sensing image segmentation tasks. Inspired by the recently developed domain generalization model WildNet, we propose a domain adaptation framework named “Style and Content Domain Adaptation” (SCDA) for semantic segmentation tasks involving multiple remote sensing datasets with different data distributions. SCDA uses residual style feature transfer (RSFT) in the shallow layers of the baseline network to enable source-domain images to obtain style features from the target domain while reducing the loss of source-domain content information. Considering the scale differences of ground objects in remote sensing images, SCDA uses the projections of the source-domain images, the style-transferred source-domain images, and the target-domain images to construct a multiscale content adaptation learning (MCAL) loss, which enables the model to capture multiscale target-domain content information. Experiments show that the proposed method has clear domain adaptability in remote sensing image segmentation. For the cross-domain segmentation task from VaihingenIRRG to PotsdamIRRG, the mIoU is 48.64% and the F1 is 63.11%, improvements of 1.21% and 0.45%, respectively, over state-of-the-art methods. For the cross-domain segmentation task from VaihingenIRRG to PotsdamRGB, the mIoU is 44.38%, an improvement of 0.77% over state-of-the-art methods. In summary, SCDA improves the semantic segmentation of remote sensing images through domain adaptation of both style and content, making full use of several innovative modules and strategies to enhance model performance and stability. Full article
(This article belongs to the Special Issue Advances in Deep Fusion of Multi-Source Remote Sensing Images)
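
The snippet below is a generic AdaIN-style statistic swap, included only to illustrate the style-transfer idea that shallow-layer modules such as RSFT build on: the per-channel mean and standard deviation of target-domain features replace those of the source-domain features while the normalized source content is kept. It is not the RSFT module itself, and all shapes are hypothetical.

```python
import torch

def style_swap(src_feat: torch.Tensor, tgt_feat: torch.Tensor,
               eps: float = 1e-5) -> torch.Tensor:
    """Give source-domain features the target domain's per-channel style statistics."""
    src_mean = src_feat.mean(dim=(2, 3), keepdim=True)
    src_std = src_feat.std(dim=(2, 3), keepdim=True) + eps
    tgt_mean = tgt_feat.mean(dim=(2, 3), keepdim=True)
    tgt_std = tgt_feat.std(dim=(2, 3), keepdim=True) + eps
    normalized = (src_feat - src_mean) / src_std  # strip source style, keep content
    return normalized * tgt_std + tgt_mean        # re-dress with target style

src = torch.rand(2, 64, 32, 32)   # hypothetical shallow-layer source-domain features
tgt = torch.rand(2, 64, 32, 32)   # hypothetical shallow-layer target-domain features
print(style_swap(src, tgt).shape)  # torch.Size([2, 64, 32, 32])
```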

Other

16 pages, 7952 KiB  
Technical Note
Unsupervised Domain Adaptation with Contrastive Learning-Based Discriminative Feature Augmentation for RS Image Classification
by Ren Xu, Alim Samat, Enzhao Zhu, Erzhu Li and Wei Li
Remote Sens. 2024, 16(11), 1974; https://doi.org/10.3390/rs16111974 - 30 May 2024
Cited by 2 | Viewed by 922
Abstract
High- and very high-resolution (HR, VHR) remote sensing (RS) images can provide comprehensive and intricate spatial information for land cover classification, which is particularly crucial when analyzing complex built-up environments. However, the application of HR and VHR images to large-scale and detailed land cover mapping is always constrained by the intricacy of land cover classification models, the exorbitant cost of collecting training samples, and geographical changes or acquisition conditions. To overcome these limitations, we propose an unsupervised domain adaptation (UDA) method with contrastive learning-based discriminative feature augmentation (CLDFA) for RS image classification. In detail, our method first utilizes contrastive learning (CL) through a memory bank to memorize sample features and improve model performance; the approach employs an end-to-end Siamese network and incorporates dynamic pseudo-label assignment and class-balancing strategies for adaptive domain joint learning. By transferring classification models trained on a source domain (SD) to an unlabeled target domain (TD), our proposed UDA method enables large-scale land cover mapping. We conducted experiments using a massive five-billion-pixel dataset as the SD, tested on HR and VHR RS images of five typical Chinese cities as the TD, and applied the method to a completely unlabeled WorldView-3 (WV3) image of Urumqi city. The experimental results demonstrate that our method excels in large-scale HR and VHR RS image classification tasks, highlighting the advantages of semantic segmentation based on end-to-end deep convolutional neural networks (DCNNs). Full article
(This article belongs to the Special Issue Advances in Deep Fusion of Multi-Source Remote Sensing Images)
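
As a minimal sketch of contrastive learning against a memory bank (not the exact CLDFA formulation), the PyTorch function below computes an InfoNCE-style loss in which each anchor feature is pulled toward its positive and pushed away from features stored in the bank; the feature dimension, bank size, and temperature are assumptions.

```python
import torch
import torch.nn.functional as F

def memory_bank_contrastive_loss(anchors: torch.Tensor, positives: torch.Tensor,
                                 bank: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """InfoNCE-style contrastive loss with a memory bank of negatives.

    anchors   : (B, D) anchor features
    positives : (B, D) matching features (e.g., another view or pseudo-labelled match)
    bank      : (K, D) stored features used as negatives
    """
    anchors, positives, bank = (F.normalize(x, dim=1) for x in (anchors, positives, bank))
    l_pos = (anchors * positives).sum(dim=1, keepdim=True)   # (B, 1) positive logits
    l_neg = anchors @ bank.t()                               # (B, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(anchors.size(0), dtype=torch.long)  # positive sits at index 0
    return F.cross_entropy(logits, labels)

loss = memory_bank_contrastive_loss(torch.rand(8, 128), torch.rand(8, 128),
                                    torch.rand(1024, 128))
print(loss.item())
```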
