
Image Enhancement and Fusion Techniques in Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 September 2024) | Viewed by 5709

Special Issue Editors


Guest Editor
School of Information and Communications Engineering, Xi’an Jiaotong University, Xi’an 710049, China
Interests: image enhancement; remote sensing; object detection; deep learning

Guest Editor
School of Information and Communications Engineering, Xi’an Jiaotong University, Xi’an 710049, China
Interests: remote sensing; deep learning; image classification; data fusion; image enhancement

Guest Editor
Data Science in Earth Observation, Technical University of Munich, 81737 Munich, Germany
Interests: SAR image processing; few-shot learning; deep learning; forest monitoring; biomass estimations

Special Issue Information

Dear Colleagues,

One common challenge in remote sensing image-processing tasks is obtaining powerful, salient, and comprehensive descriptions of images. However, remote sensing image acquisition is often affected by factors such as noise, blur, and low contrast, limiting the availability of high-quality remote sensing images. Image enhancement and image fusion are two effective solutions to this problem: the former improves the quality and interpretability of images, while the latter produces more comprehensive descriptions of a scene by integrating multi-source remote sensing data.

This Special Issue is accepting papers that discuss image enhancement and image fusion approaches and their applications in remote sensing image-processing tasks. Reviews and research articles on their methodologies or applications, including their advantages and limitations, are welcome. All contributions to this collection will undergo peer review. We welcome contributions that include, but are not limited to, the following:

  • Deep learning-based image enhancement;
  • Image denoising;
  • Generative models for image generation;
  • Image super-resolution and restoration;
  • Image quality assessments;
  • Few-shot learning methods;
  • Multi-modality image fusion (optical, RGB, panchromatic, multispectral, hyperspectral, LiDAR, SAR, infrared, etc.);
  • Deep learning-based image fusion methods;
  • Feature-level or decision-level image fusion;
  • Computer vision methods in remote sensing image fusion;
  • Downstream tasks, including but not limited to remote sensing image classification, image segmentation, object detection, change detection, target recognition, and unmixing.

Prof. Dr. Fan Li
Dr. Haixia Bi
Dr. Qian Song
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • image processing
  • image enhancement
  • deep learning
  • image fusion
  • image generation
  • image super-resolution
  • image quality assessments
  • multi-modality

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

27 pages, 10427 KiB  
Article
UMMFF: Unsupervised Multimodal Multilevel Feature Fusion Network for Hyperspectral Image Super-Resolution
by Zhongmin Jiang, Mengyao Chen and Wenju Wang
Remote Sens. 2024, 16(17), 3282; https://doi.org/10.3390/rs16173282 - 4 Sep 2024
Viewed by 946
Abstract
Due to the inadequate use of complementary information from different modalities and biased estimation of degradation parameters, unsupervised hyperspectral super-resolution algorithms suffer from low precision and limited applicability. To address this issue, this paper proposes an approach for hyperspectral image super-resolution, namely, the Unsupervised Multimodal Multilevel Feature Fusion network (UMMFF). The proposed approach employs a gated cross-retention module to learn shared patterns among different modalities. This module effectively eliminates intermodal differences while preserving spatial–spectral correlations, thereby facilitating information interaction. A multilevel spatial–channel attention and parallel fusion decoder is constructed to extract features at three levels (low, medium, and high), enriching the information of the multimodal images. Additionally, an independent prior-based implicit neural representation blind estimation network is designed to accurately estimate the degradation parameters. On the “Washington DC”, Salinas, and Botswana datasets, UMMFF outperformed existing state-of-the-art methods on primary metrics such as PSNR and ERGAS: PSNR improved by 18.03%, 8.55%, and 5.70%, respectively, while ERGAS decreased by 50.00%, 75.39%, and 53.27%. The experimental results indicate that UMMFF adapts well across datasets and yields high-precision reconstructions.
(This article belongs to the Special Issue Image Enhancement and Fusion Techniques in Remote Sensing)
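The PSNR and ERGAS figures quoted in the abstract follow standard definitions. A minimal sketch of both metrics (plain NumPy, illustrative arrays only, not the paper's code):

```python
import numpy as np

def psnr(ref, est, max_val=1.0):
    # Peak signal-to-noise ratio in dB; higher is better.
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ergas(ref, est, ratio=4):
    # Relative dimensionless global error in synthesis; lower is better.
    # ref/est: (bands, H, W); ratio = LR-to-HR upsampling factor.
    rmse_per_band = np.sqrt(np.mean((ref - est) ** 2, axis=(1, 2)))
    mean_per_band = np.mean(ref, axis=(1, 2))
    return 100.0 / ratio * np.sqrt(np.mean((rmse_per_band / mean_per_band) ** 2))
```

Conventions vary slightly across the literature (e.g. whether the ratio is expressed as pixel sizes or a scale factor), so the exact constants here are one common choice.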

21 pages, 2031 KiB  
Article
ConvMambaSR: Leveraging State-Space Models and CNNs in a Dual-Branch Architecture for Remote Sensing Imagery Super-Resolution
by Qiwei Zhu, Guojing Zhang, Xuechao Zou, Xiaoying Wang, Jianqiang Huang and Xilai Li
Remote Sens. 2024, 16(17), 3254; https://doi.org/10.3390/rs16173254 - 2 Sep 2024
Cited by 1 | Viewed by 1486
Abstract
Deep learning-based super-resolution (SR) techniques play a crucial role in enhancing the spatial resolution of images. However, remote sensing images present substantial challenges due to their diverse features, complex structures, and significant size variations in ground objects. Moreover, recovering lost details from low-resolution remote sensing images with complex and unknown degradations, such as downsampling, noise, and compression, remains a critical issue. To address these challenges, we propose ConvMambaSR, a novel super-resolution framework that integrates state-space models (SSMs) and Convolutional Neural Networks (CNNs). This framework is specifically designed to handle heterogeneous and complex ground features, as well as unknown degradations in remote sensing imagery. ConvMambaSR leverages SSMs to model global dependencies, activating more pixels in the super-resolution task. Concurrently, it employs CNNs to extract local detail features, enhancing the model’s ability to capture image textures and edges. Furthermore, we have developed a global–detail reconstruction module (GDRM) to integrate diverse levels of global and local information efficiently. We rigorously validated the proposed method on two distinct datasets, RSSCN7 and RSSRD-KQ, and benchmarked its performance against state-of-the-art SR models. Experiments show that our method achieves state-of-the-art PSNR values of 26.06 and 24.29 on these datasets, respectively, and is visually superior, effectively addressing a variety of scenarios and significantly outperforming existing methods.
(This article belongs to the Special Issue Image Enhancement and Fusion Techniques in Remote Sensing)
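The dual-branch idea described above, one branch for local detail and one for global context, merged by a reconstruction module, can be illustrated with a deliberately simple toy: a box filter stands in for a CNN layer, a broadcast global mean stands in for the long-range dependencies an SSM would model, and a weighted sum stands in for the GDRM. This is a sketch of the general pattern, not the paper's implementation.

```python
import numpy as np

def local_branch(x, k=3):
    # Local detail features: a k x k box filter as a stand-in for a CNN layer.
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

def global_branch(x):
    # Global context: broadcast the image-wide mean to every pixel.
    return np.full_like(x, x.mean())

def fuse(x, alpha=0.5):
    # Weighted merge of the two branches (toy stand-in for the GDRM).
    return alpha * local_branch(x) + (1 - alpha) * global_branch(x)
```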

23 pages, 2671 KiB  
Article
Multi-View Feature Fusion and Rich Information Refinement Network for Semantic Segmentation of Remote Sensing Images
by Jiang Liu, Shuli Cheng and Anyu Du
Remote Sens. 2024, 16(17), 3184; https://doi.org/10.3390/rs16173184 - 28 Aug 2024
Viewed by 707
Abstract
Semantic segmentation is currently a hot topic in remote sensing image processing, with extensive applications in land planning and surveying. Many current studies combine Convolutional Neural Networks (CNNs), which extract local information, with Transformers, which capture global information, to obtain richer information. However, the fused feature information is not sufficiently enriched and often lacks detailed refinement. To address this issue, we propose a novel method called the Multi-View Feature Fusion and Rich Information Refinement Network (MFRNet). Our model is equipped with the Multi-View Feature Fusion Block (MAFF) to merge various types of information, including local, non-local, channel, and positional information. Within MAFF, we introduce two innovative methods. The Sliding Heterogeneous Multi-Head Attention (SHMA) extracts local, non-local, and positional information using a sliding window, while the Multi-Scale Hierarchical Compressed Channel Attention (MSCA) leverages bar-shaped pooling kernels and stepwise compression to obtain reliable channel information. Additionally, we introduce the Efficient Feature Refinement Module (EFRM), which enhances segmentation accuracy through interaction between the outputs of the Long-Range Information Perception Branch and the Local Semantic Information Perception Branch. We evaluate our model on the ISPRS Vaihingen and Potsdam datasets, conducting extensive comparison experiments with state-of-the-art models, and verify that MFRNet outperforms them.
(This article belongs to the Special Issue Image Enhancement and Fusion Techniques in Remote Sensing)

22 pages, 1215 KiB  
Article
Super-Resolution Learning Strategy Based on Expert Knowledge Supervision
by Zhihan Ren, Lijun He and Peipei Zhu
Remote Sens. 2024, 16(16), 2888; https://doi.org/10.3390/rs16162888 - 7 Aug 2024
Viewed by 1388
Abstract
Existing Super-Resolution (SR) methods are typically trained using bicubic degradation simulations, resulting in unsatisfactory results when applied to remote sensing images that contain a wide variety of object shapes and sizes. This insufficient learning approach reduces the models' focus on critical object regions within the images. As a result, their practical performance is significantly hindered, especially in real-world applications where accuracy in object reconstruction is crucial. In this work, we propose a general learning strategy for SR models based on expert knowledge supervision, named EKS-SR, which incorporates coarse-grained semantic information derived from high-level visual tasks into the SR reconstruction process. It utilizes prior information from three perspectives: regional constraints, feature constraints, and attributive constraints, to guide the model to focus more on the object regions within the images. By integrating these expert knowledge-driven constraints, EKS-SR enhances the model's ability to accurately reconstruct object regions and capture the key information needed for practical applications. Importantly, this improvement does not increase inference time and requires only a few labels rather than full annotation of large-scale datasets, making EKS-SR both efficient and effective. Experimental results demonstrate that the proposed method achieves improvements in both reconstruction quality and machine vision analysis performance.
(This article belongs to the Special Issue Image Enhancement and Fusion Techniques in Remote Sensing)
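The "regional constraint" idea, steering reconstruction toward object regions, can be sketched as a loss that up-weights pixels inside a coarse object mask. This is a generic illustration under assumed inputs (a 0/1 mask, e.g. from a detector), not the paper's actual loss formulation:

```python
import numpy as np

def region_weighted_l1(sr, hr, mask, w_obj=2.0):
    # L1 reconstruction loss that up-weights pixels inside object regions.
    # sr, hr: reconstructed and reference images (same shape).
    # mask:   coarse 0/1 object map; w_obj > 1 emphasises object pixels.
    weights = 1.0 + (w_obj - 1.0) * mask
    return np.sum(weights * np.abs(sr - hr)) / np.sum(weights)
```

Because the weights enter only the training objective, inference cost is unchanged, which matches the efficiency claim in the abstract.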
