
Advanced Learning Techniques for Remote Sensing Image Quality Improvement

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 August 2023) | Viewed by 26,869

Special Issue Editors


Guest Editor
School of Astronautics, Beihang University, Beijing 102206, China
Interests: computer vision and related applications in remote sensing; self-driving; video games

Guest Editor
Department of Biological Systems Engineering, University of Wisconsin-Madison, 230 Agricultural Engineering Building, 460 Henry Mall, Madison, WI 53706, USA
Interests: hyperspectral remote sensing; machine learning; unmanned aerial vehicle (UAV)-based imaging platform developments; precision agriculture; high-throughput plant phenotyping

Guest Editor
School of Astronautics, Beihang University, Beijing 102206, China
Interests: image processing; pattern recognition; computer vision; machine learning; remote sensing; aerospace exploration

Special Issue Information

Dear Colleagues,

Remote sensing image quality improvement (e.g., image super-resolution, image fusion, and image deblurring) is an important foundation and prerequisite for many downstream remote sensing applications. Although substantial progress has been made in these directions in recent years, open problems and challenges remain, such as how recent learning techniques (e.g., deep learning and generative neural networks) can reshape and benefit remote sensing image processing, how to effectively evaluate the quality of remote sensing images, and how to handle the rapidly growing data volume and its diverse modalities.

In this Special Issue, we invite submissions presenting the most recent advancements in this field, in terms of methodological contributions as well as innovative applications. Potential topics include, but are not limited to:

  • Remote sensing image super-resolution;
  • Image registration and pan-sharpening;
  • Hyperspectral image denoising;
  • Remote sensing image deblurring and dehazing;
  • Deep-learning-based remote sensing image processing;
  • Multimodal data fusion between hyperspectral imagery and other data sources;
  • Cloud detection and removal in remote sensing images;
  • Remote sensing image quality assessment;
  • Advanced deep learning models, e.g., generative adversarial networks, diffusion probabilistic models, and physics-informed neural networks;
  • Advanced learning techniques, e.g., self-supervised learning;
  • Real-time processing of remote sensing images;
  • Applications of remote sensing image quality improvement in agriculture, marine science, meteorology, and other fields.

Dr. Zhengxia Zou
Dr. Zhou Zhang
Dr. Haopeng Zhang
Dr. Feng Gao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • image processing
  • machine learning
  • deep learning
  • generative adversarial networks
  • image super-resolution
  • pan-sharpening
  • image deblurring
  • image dehazing
  • cloud detection and removal
  • image quality assessment

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)


Research

20 pages, 58024 KiB  
Article
A Prior-Knowledge-Based Generative Adversarial Network for Unsupervised Satellite Cloud Image Restoration
by Liling Zhao, Xiaoao Duanmu and Quansen Sun
Remote Sens. 2023, 15(19), 4820; https://doi.org/10.3390/rs15194820 - 4 Oct 2023
Viewed by 1328
Abstract
High-quality satellite cloud images are of great significance for weather diagnosis and prediction. However, many of these images are degraded by relative motion, atmospheric turbulence, instrument noise, and other factors, and this degradation cannot be completely corrected during the satellite imaging process. It is therefore necessary to further improve satellite cloud image quality for real applications. In this study, we propose an unsupervised image restoration model with a two-stage network, in which the first stage, the Prior-Knowledge-based Generative Adversarial Network (PKKernelGAN), learns the blur kernel, and the second stage, the Zero-Shot Deep Residual Network (ZSResNet), improves the image quality. In PKKernelGAN, we propose a satellite cloud imaging loss function, a novel objective function that brings the optimization of a generative model into the prior-knowledge domain. In ZSResNet, we build a dataset that pairs the original satellite cloud images, treated as high-quality (HQ) images, with low-quality (LQ) images generated by the blur kernel learned by PKKernelGAN. These innovations lead to a more efficient local structure in satellite cloud image restoration. The original dataset in our experiments comes from the Himawari-8 ("Sunflower 8") satellite provided by the Japan Meteorological Agency. This dataset is divided into training and testing sets to train and test PKKernelGAN; ZSResNet is then trained on the "LQ–HQ" image pairs generated by PKKernelGAN. Extensive experiments demonstrate that our model outperforms other supervised and unsupervised deep learning models for image restoration across different datasets.
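To make the two-stage idea concrete, a minimal sketch with stand-in components (not the authors' code): once a blur kernel has been estimated, LQ inputs can be synthesized from the HQ originals and a small residual network fitted on those pairs. The box kernel, network size, and training loop below are all our assumptions.

    # Minimal sketch: the 5x5 box kernel stands in for the kernel PKKernelGAN
    # would learn, and TinyResNet stands in for ZSResNet.
    import torch
    import torch.nn.functional as F

    def synthesize_lq(hq, kernel):
        """Blur an HQ batch (N, 1, H, W) with an estimated kernel to get LQ inputs."""
        return F.conv2d(hq, kernel, padding=kernel.shape[-1] // 2)

    class TinyResNet(torch.nn.Module):
        def __init__(self, ch=32):
            super().__init__()
            self.body = torch.nn.Sequential(
                torch.nn.Conv2d(1, ch, 3, padding=1), torch.nn.ReLU(),
                torch.nn.Conv2d(ch, 1, 3, padding=1))
        def forward(self, x):
            return x + self.body(x)  # residual learning: predict the correction

    hq = torch.rand(8, 1, 64, 64)              # stand-in cloud images
    kernel = torch.ones(1, 1, 5, 5) / 25.0     # stand-in for the learned kernel
    lq = synthesize_lq(hq, kernel)
    net = TinyResNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(100):                       # zero-shot fitting on LQ-HQ pairs
        opt.zero_grad()
        F.l1_loss(net(lq), hq).backward()
        opt.step()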

23 pages, 387197 KiB  
Article
A Flexible Spatiotemporal Thick Cloud Removal Method with Low Requirements for Reference Images
by Yu Zhang, Luyan Ji, Xunpeng Xu, Peng Zhang, Kang Jiang and Hairong Tang
Remote Sens. 2023, 15(17), 4306; https://doi.org/10.3390/rs15174306 - 31 Aug 2023
Cited by 2 | Viewed by 1325
Abstract
Thick clouds and shadows have a significant impact on the availability of optical remote sensing data. Although various methods have been proposed to address this issue, they still have some limitations. First, most approaches rely on a single clear reference image as complementary information, which becomes challenging when the target image has large missing areas. Second, existing methods that can utilize multiple reference images require the complementary data to have high temporal correlation, which is not suitable for situations where the difference between the reference images and the target image is large. To overcome these limitations, a flexible spatiotemporal deep learning framework based on generative adversarial networks is proposed for thick cloud removal, which allows three arbitrary temporal images to be used as references. The framework incorporates a three-step encoder that leverages the uncontaminated information in the target image to assimilate the reference images, enhancing the model's ability to handle reference images with diverse temporal differences. A series of simulated and real experiments on Landsat 8 and Sentinel-2 data demonstrates the effectiveness of the proposed method, which is especially applicable to both small- and large-scale regions with reference images that differ significantly from the target image.
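One ingredient common to such frameworks can be sketched as follows (a schematic under our own assumptions about shapes and mask convention, not the paper's architecture): the cloud-masked target and three references are stacked as network input, and the known cloud-free pixels of the target are composited back into the output so that only occluded pixels are synthesized.

    # Schematic sketch; mask convention (1 = valid, cloud-free pixel) is ours.
    import torch

    def composite(pred, target, mask):
        """Keep valid target pixels; take the network prediction under clouds."""
        return mask * target + (1.0 - mask) * pred

    target = torch.rand(1, 3, 128, 128)                 # cloudy target image
    refs = torch.rand(1, 9, 128, 128)                   # three 3-band references
    mask = (torch.rand(1, 1, 128, 128) > 0.3).float()   # toy cloud mask
    net_in = torch.cat([target * mask, mask, refs], dim=1)  # (1, 13, 128, 128)
    pred = torch.rand_like(target)                      # placeholder generator output
    out = composite(pred, target, mask)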

18 pages, 4945 KiB  
Article
Effect of Bit Depth on Cloud Segmentation of Remote-Sensing Images
by Lingcen Liao, Wei Liu and Shibin Liu
Remote Sens. 2023, 15(10), 2548; https://doi.org/10.3390/rs15102548 - 12 May 2023
Cited by 3 | Viewed by 1919
Abstract
Cloud coverage in remote-sensing images attenuates or even obscures ground-object information and, at the same time, changes the texture and spectral information of the image, so accurately detecting clouds in remote-sensing images is of great significance to the field of remote sensing. Cloud detection uses semantic segmentation to classify remote-sensing images at the pixel level. However, previous studies have focused on improving algorithm performance, and little attention has been paid to the impact of the bit depth of remote-sensing images on cloud detection. In this paper, the deep semantic segmentation algorithm UNet is taken as an example, and the widely used cloud-labeling dataset "L8 Biome" is used as verification data to explore the relationship between bit depth and segmentation accuracy over different surface landscapes when the algorithm is used for cloud detection. The results show that when the images are normalized, cloud detection with 16-bit remote-sensing images is slightly better than with 8-bit images; when the images are not normalized, the gap widens. However, training on 16-bit remote-sensing images takes longer. This means that data selection and classification need not always follow the highest available bit depth for cloud detection but should instead balance efficiency and accuracy.
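For concreteness, the normalization step this comparison hinges on can be written as below (a minimal sketch assuming unsigned-integer imagery; the bit-shift quantization is a toy stand-in for actual 8-bit products).

    # Normalize 8-bit and 16-bit imagery to a comparable [0, 1] input range.
    import numpy as np

    def normalize(img: np.ndarray) -> np.ndarray:
        """Map an unsigned-integer image to [0, 1] using its dtype's full range."""
        full_scale = np.iinfo(img.dtype).max   # 255 for uint8, 65535 for uint16
        return img.astype(np.float32) / full_scale

    img16 = (np.random.rand(256, 256) * 65535).astype(np.uint16)
    img8 = (img16 >> 8).astype(np.uint8)           # toy quantization to 8 bits
    x16, x8 = normalize(img16), normalize(img8)    # comparable network inputs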

19 pages, 3882 KiB  
Article
A Closed-Loop Network for Single Infrared Remote Sensing Image Super-Resolution in Real World
by Haopeng Zhang, Cong Zhang, Fengying Xie and Zhiguo Jiang
Remote Sens. 2023, 15(4), 882; https://doi.org/10.3390/rs15040882 - 5 Feb 2023
Viewed by 1967
Abstract
Single image super-resolution (SISR) aims to reconstruct a high-resolution (HR) image from a corresponding low-resolution (LR) input. It is an effective way to address the low resolution that infrared remote sensing images usually suffer from due to hardware limitations. Most previous learning-based SISR methods use synthetic HR-LR image pairs (obtained with bicubic kernels) to learn the mapping from LR images to HR images. However, the underlying degradation in the real world often differs from this synthetic procedure, i.e., real LR images are produced by a more complex degradation kernel, which leads to an adaptation problem and poor SR performance. To handle this problem, we propose a novel closed-loop framework that not only makes full use of the learning ability of the channel attention module but also introduces as much information from real images as possible through a closed-loop structure. Our network includes two independent generative networks, for down-sampling and super-resolution, respectively, which are connected to each other to extract more information from real images. We comprehensively analyze the training data, resolution level, and imaging spectrum to validate the performance of our network for infrared remote sensing image super-resolution. Experiments on real infrared remote sensing images show that our method achieves superior performance under supervised, weakly supervised, and unsupervised training strategies. In particular, our peak signal-to-noise ratio (PSNR) is 0.9 dB higher than that of the second-best unsupervised super-resolution model on the PROBA-V dataset.
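A hedged sketch of the closed-loop coupling (the toy generators and the names G_up and G_down are ours): real LR images are pushed through super-resolution and then through the learned down-sampler, and the round trip is penalized, so real-image statistics enter training without paired ground truth.

    # Toy closed-loop consistency between a learned down-sampler and a
    # super-resolver; both generators here are deliberately tiny stand-ins.
    import torch
    import torch.nn.functional as F

    def cycle_loss(G_down, G_up, real_lr):
        sr = G_up(real_lr)           # hallucinated HR image
        lr_again = G_down(sr)        # back through the learned degradation
        return F.l1_loss(lr_again, real_lr)

    G_up = torch.nn.Sequential(
        torch.nn.Upsample(scale_factor=2, mode="bicubic"),
        torch.nn.Conv2d(1, 1, 3, padding=1))
    G_down = torch.nn.Sequential(
        torch.nn.Conv2d(1, 1, 3, padding=1),
        torch.nn.AvgPool2d(2))
    loss = cycle_loss(G_down, G_up, torch.rand(2, 1, 32, 32))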

18 pages, 1697 KiB  
Article
CARNet: Context-Aware Residual Learning for JPEG-LS Compressed Remote Sensing Image Restoration
by Maomei Liu, Lei Tang, Lijia Fan, Sheng Zhong, Hangzai Luo and Jinye Peng
Remote Sens. 2022, 14(24), 6318; https://doi.org/10.3390/rs14246318 - 13 Dec 2022
Cited by 6 | Viewed by 1897
Abstract
JPEG-LS (a lossless (LS) compression standard developed by the Joint Photographic Experts Group) compressed image restoration is a significant problem in remote sensing applications. It faces two challenges: first, bridging small pixel-value gaps over wide numerical ranges; and second, removing banding artifacts when little context information is available. As far as we know, no prior research deals with these issues, so we develop this initial line of work on JPEG-LS compressed remote sensing image restoration. We propose a novel CNN model called CARNet. Its core idea is a context-aware residual learning mechanism. Specifically, it realizes residual learning for accurate restoration by adopting a scale-invariant baseline, enables large receptive fields for banding-artifact removal through a context-aware scheme, and eases the information flow among stages by utilizing a prior-guided feature-fusion mechanism. In addition, we design novel R-IQA models that provide a better assessment of restoration performance for our study by utilizing gradient priors of JPEG-LS banding artifacts. Furthermore, we prepare a new dataset of JPEG-LS compressed remote sensing images to supplement existing benchmark data. Experiments show that our method sets the state of the art for JPEG-LS compressed remote sensing image restoration.
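To illustrate what a gradient prior on banding can look like, here is a toy construction of ours (not the paper's IQA model): JPEG-LS banding appears as spurious steps along one image axis, so the ratio of across-band to along-band gradient energy is a crude quality cue.

    # Toy banding cue: horizontal bands inflate vertical gradients.
    import numpy as np

    def banding_score(img: np.ndarray) -> float:
        gy = np.abs(np.diff(img.astype(np.float64), axis=0))  # across rows
        gx = np.abs(np.diff(img.astype(np.float64), axis=1))  # along rows
        return float(gy.mean() / (gx.mean() + 1e-8))  # >> baseline suggests banding

    clean = np.random.rand(64, 64)
    bands = np.repeat(np.random.rand(8, 1), 8, axis=0)   # constant 8-row bands
    banded = clean + bands
    print(banding_score(clean), banding_score(banded))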

22 pages, 20708 KiB  
Article
Underwater Image Restoration via Contrastive Learning and a Real-World Dataset
by Junlin Han, Mehrdad Shoeiby, Tim Malthus, Elizabeth Botha, Janet Anstee, Saeed Anwar, Ran Wei, Mohammad Ali Armin, Hongdong Li and Lars Petersson
Remote Sens. 2022, 14(17), 4297; https://doi.org/10.3390/rs14174297 - 31 Aug 2022
Cited by 45 | Viewed by 5471
Abstract
Underwater image restoration is of significant importance in unveiling the underwater world, and numerous techniques and algorithms have been developed in recent decades. However, due to fundamental difficulties associated with imaging/sensing, lighting, and refractive geometric distortions in capturing clear underwater images, no comprehensive evaluation of underwater image restoration has been conducted. To address this gap, we constructed a large-scale real underwater image dataset, dubbed the Heron Island Coral Reef Dataset ('HICRD'), for benchmarking existing methods and supporting the development of new deep-learning-based methods. We employed an accurate water parameter (the diffuse attenuation coefficient) to generate the reference images. There are 2000 reference restored images and 6003 original underwater images in the unpaired training set. Furthermore, we present a novel method for underwater image restoration based on an unsupervised image-to-image translation framework. Our proposed method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images. Extensive experiments with comparisons to recent approaches further demonstrate the superiority of our proposed method. Our code and dataset are both publicly available.
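Such mutual-information objectives are typically realized with an InfoNCE-style contrastive loss; the sketch below is a generic form under our assumptions (feature dimension, temperature), not the paper's exact patchwise formulation.

    # Generic InfoNCE: matched rows are positives, all other rows negatives.
    import torch
    import torch.nn.functional as F

    def info_nce(query, positives, temperature=0.07):
        """query, positives: (N, D) feature batches matched row-by-row."""
        q = F.normalize(query, dim=1)
        k = F.normalize(positives, dim=1)
        logits = q @ k.t() / temperature      # (N, N) similarity matrix
        labels = torch.arange(q.size(0))      # the diagonal holds the positives
        return F.cross_entropy(logits, labels)

    loss = info_nce(torch.randn(16, 128), torch.randn(16, 128))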

18 pages, 4847 KiB  
Article
Road Extraction Convolutional Neural Network with Embedded Attention Mechanism for Remote Sensing Imagery
by Shiwei Shao, Lixia Xiao, Liupeng Lin, Chang Ren and Jing Tian
Remote Sens. 2022, 14(9), 2061; https://doi.org/10.3390/rs14092061 - 25 Apr 2022
Cited by 15 | Viewed by 2505
Abstract
Roads are closely related to people's lives, and road network extraction has become one of the most important remote sensing tasks. This study proposes a road extraction network with an embedded attention mechanism to solve the problem of automatically extracting road networks from large numbers of remote sensing images. A channel attention mechanism and a spatial attention mechanism are introduced into the U-Net framework to enhance the use of spectral and spatial information. Moreover, residual densely connected blocks are introduced to enhance feature reuse and information flow, and a residual dilated convolution module is introduced to extract road network information at different scales. The experimental results show that the proposed method outperforms the compared algorithms in overall accuracy, with fewer false detections and extracted roads closer to the ground truth. Ablation experiments show that the proposed modules effectively improve road extraction accuracy.
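For readers unfamiliar with channel attention, a generic squeeze-and-excitation-style block of the kind embedded in such networks looks roughly as follows (our stand-in; the paper's exact module design may differ).

    # Generic channel attention: squeeze to per-channel statistics, then
    # reweight the feature channels with learned gates.
    import torch

    class ChannelAttention(torch.nn.Module):
        def __init__(self, ch, reduction=8):
            super().__init__()
            self.fc = torch.nn.Sequential(
                torch.nn.Linear(ch, ch // reduction), torch.nn.ReLU(),
                torch.nn.Linear(ch // reduction, ch), torch.nn.Sigmoid())
        def forward(self, x):                    # x: (N, C, H, W)
            w = self.fc(x.mean(dim=(2, 3)))      # squeeze -> channel weights
            return x * w[:, :, None, None]       # excite: reweight channels

    y = ChannelAttention(32)(torch.rand(2, 32, 64, 64))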

18 pages, 12414 KiB  
Article
Characterization and Removal of RFI Artifacts in Radar Data via Model-Constrained Deep Learning Approach
by Mingliang Tao, Jieshuang Li, Jia Su and Ling Wang
Remote Sens. 2022, 14(7), 1578; https://doi.org/10.3390/rs14071578 - 24 Mar 2022
Cited by 8 | Viewed by 2503
Abstract
Microwave remote sensing instruments such as synthetic aperture radar (SAR) play an important role in scientific research applications, yet they suffer severe measurement distortion in the presence of radio frequency interference (RFI). Existing methods either adopt model-based optimization or follow a data-driven black-box learning scheme, and both have specific limitations in terms of efficiency, accuracy, and interpretability. In this paper, we propose a hybrid model-constrained deep learning approach for RFI extraction and mitigation that fuses classical model-based and advanced data-driven methods. Considering the temporal-spatial correlation of the target response, as well as the random sparsity of the time-varying interference, a joint low-rank and sparse optimization framework is established. Instead of applying an iterative optimization process with uncertain convergence, the proposed scheme approximates the iterative process with a stacked recurrent neural network. By adopting this hybrid model-constrained deep learning strategy, the original unsupervised decomposition problem is converted into a supervised learning problem. Experimental results show the validity of the proposed method under diverse RFI scenarios; it avoids the manual tuning of model hyperparameters and improves efficiency.
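The classical iteration that such unrolling starts from can be sketched as alternating proximal updates on a low-rank part (target response) and a sparse part (RFI). In the toy loop below the thresholds are hand-set stand-ins for the per-stage parameters an unrolled network would learn.

    # Robust-PCA-style low-rank + sparse split on a toy measurement matrix.
    import numpy as np

    def svt(M, tau):   # singular value thresholding -> low-rank part
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def soft(M, lam):  # soft thresholding -> sparse part
        return np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)

    X = np.random.randn(64, 64)     # raw radar measurement matrix
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(20):             # alternate the two proximal updates
        L = svt(X - S, tau=1.0)     # correlated target response
        S = soft(X - L, lam=0.5)    # sparse, time-varying RFI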

19 pages, 4053 KiB  
Article
Mining Cross-Domain Structure Affinity for Refined Building Segmentation in Weakly Supervised Constraints
by Jun Zhang, Yue Liu, Pengfei Wu, Zhenwei Shi and Bin Pan
Remote Sens. 2022, 14(5), 1227; https://doi.org/10.3390/rs14051227 - 2 Mar 2022
Cited by 6 | Viewed by 2773
Abstract
Building segmentation for remote sensing images usually requires pixel-level labels, which are difficult to collect when the images are of low resolution and quality. Recently, weakly supervised semantic segmentation methods that rely only on image-level labels have achieved promising performance. However, buildings in remote sensing images tend to present regular structures, and the lack of supervision information may result in ambiguous boundaries. In this paper, we propose a new weakly supervised network for refined building segmentation by mining the cross-domain structure affinity (CDSA) from multi-source remote sensing images. CDSA integrates the ideas of weak supervision and domain adaptation, requiring a pixel-level labeled source domain and an image-level labeled target domain, and aims to learn a powerful segmentation network on the target domain with the guidance of source domain data. CDSA consists of two branches: the structure affinity module (SAM) and the spatial structure adaptation (SSA) module. In brief, SAM learns the structure affinity of buildings from the source domain, and SSA infuses this structure affinity into the target domain via a domain adaptation approach. Moreover, we design an end-to-end network structure to optimize SAM and SSA simultaneously, so that SAM receives pseudo-supervised information from SSA and in turn provides a more accurate affinity matrix to SSA. In the experiments, our model achieves IoU scores of 57.87% and 79.57% on the WHU and Vaihingen data sets, respectively. We compare CDSA with several state-of-the-art weakly supervised and domain adaptation methods, and the results indicate that our method has advantages on the two public data sets.
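For reference, the IoU score reported above follows the standard intersection-over-union definition; a small helper (standard formula, not the authors' code):

    # Standard IoU between two binary building masks.
    import numpy as np

    def iou(pred: np.ndarray, gt: np.ndarray) -> float:
        """pred, gt: boolean masks of equal shape."""
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        return float(inter) / float(union) if union else 1.0

    print(iou(np.random.rand(128, 128) > 0.5, np.random.rand(128, 128) > 0.5))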

23 pages, 11086 KiB  
Article
Learning a Fully Connected U-Net for Spectrum Reconstruction of Fourier Transform Imaging Spectrometers
by Tieqiao Chen, Xiuqin Su, Haiwei Li, Siyuan Li, Jia Liu, Geng Zhang, Xiangpeng Feng, Shuang Wang, Xuebin Liu, Yihao Wang and Chunbo Zou
Remote Sens. 2022, 14(4), 900; https://doi.org/10.3390/rs14040900 - 14 Feb 2022
Cited by 3 | Viewed by 2738
Abstract
Fourier transform imaging spectrometers (FTISs) are widely used in global hyperspectral remote sensing due to their high stability, high throughput, and high spectral resolution. Spectrum reconstruction (SpecR) is a classic problem for FTISs that determines the acquired data quality and application potential. However, state-of-the-art SpecR algorithms are restricted by the length of the maximum optical path difference (MOPD) of FTISs and by apodization processing, resulting in decreased spectral resolution and thus limiting the applications of FTISs. In this study, a deep learning SpecR method is proposed that directly learns an end-to-end mapping between interference and spectrum information with limited MOPD and without apodization processing. The mapping is represented as a fully connected U-Net (FCUN) that takes the interference fringes as input and outputs highly precise spectral curves. We trained the proposed FCUN model using real spectra and simulated pulse spectra, together with the corresponding simulated interference curves, and achieved good results. Additionally, the performance of the proposed FCUN on real interference and spectral datasets was explored. The FCUN obtained spectral values similar to those of the state-of-the-art fast Fourier transform (FFT)-based method with only 150 and 200 points in the interferograms, and it can enhance the resolution of the reconstructed spectra when the MOPD is insufficient. Moreover, the FCUN performed well in visual quality on noisy interferograms and gained nearly 70% to 80% relative improvement over the FFT in the coefficient of mean relative error (MRE). All the results based on simulated and real satellite datasets showed that the reconstructed spectra of the FCUN were more consistent with the ideal spectrum than those of the traditional method, with higher PSNR and lower values of spectral angle (SA) and relative spectral quadratic error (RQE).
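The conventional baseline the FCUN is compared against recovers the spectrum from the interferogram with an FFT; a toy 1-D simulation (our parameters, with 200 interferogram samples echoing the limited-MOPD setting):

    # FFT-based SpecR baseline on a simulated interferogram.
    import numpy as np

    n = 200                                       # interferogram samples (limited MOPD)
    freqs = np.arange(n // 2 + 1)
    spectrum = np.exp(-((freqs - 30.0) ** 2) / 20.0)   # toy spectral line shape
    interferogram = np.fft.irfft(spectrum, n)          # simulate measured fringes
    recovered = np.abs(np.fft.rfft(interferogram))     # FFT-based reconstruction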
