Image Super-Resolution in Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (15 January 2020) | Viewed by 54601

Special Issue Editors


Guest Editor
1. Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109, USA
2. Department of Mathematics, University of California, Los Angeles, CA 90095, USA
Interests: data science; remote sensing; image processing; inverse problems; optimization; computational methods
Guest Editor
Department of Mathematics, University of Kentucky, Lexington, KY 40506, USA
Interests: mathematical image processing; compressive sensing; inverse problems; optimization; high-dimensional signal processing

Special Issue Information

Dear Colleagues,

Remote-sensing images play an important role in many areas, including geology, oceanography, and weather forecasting. However, due to the limitations of imaging sensors, acquired images usually have limited spatial, spectral, and temporal resolution. In addition, remote-sensing images often suffer from various types of degradation, such as noise, spatial distortion, and temporal blur. Reconstructing a high-resolution image from a single image, or from a sequence of degraded, low-resolution images of the same scene acquired from different views or under different conditions, is a challenging problem, and diverse novel and effective super-resolution approaches are being pursued across remote-sensing applications. This Special Issue of Remote Sensing, focused on image super-resolution in remote sensing, aims to collect some of the most recent and promising super-resolution reconstruction techniques for remote-sensing images. It will consist of papers that showcase the latest research advances in the field. Authors are encouraged to submit high-quality, original research papers on remote-sensing image super-resolution. Topics of interest include, but are not limited to, the following:

  • Spatial super-resolution
  • Temporal resolution enhancement
  • Spatio-temporal super-resolution
  • Spectral super-resolution
  • Single-frame and multi-frame resolution enhancement
  • Super-resolution from geometrically deformed remote-sensing images
  • Pansharpening of remote-sensing images
  • Fusion of multi-instrument data for resolution enhancement

Dr. Igor Yanovsky
Dr. Jing Qin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • super-resolution
  • resolution enhancement
  • spatial resolution
  • temporal resolution
  • deconvolution
  • deblurring
  • remote sensing
  • satellite imagery

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)


Research

18 pages, 6199 KiB  
Article
Fast Split Bregman Based Deconvolution Algorithm for Airborne Radar Imaging
by Yin Zhang, Qiping Zhang, Yongchao Zhang, Jifang Pei, Yulin Huang and Jianyu Yang
Remote Sens. 2020, 12(11), 1747; https://doi.org/10.3390/rs12111747 - 29 May 2020
Cited by 15 | Viewed by 2943
Abstract
Deconvolution methods can be used to improve the azimuth resolution in airborne radar imaging. Due to the sparsity of targets in airborne radar imaging, an L1 regularization problem usually needs to be solved. Recently, the Split Bregman algorithm (SBA) has been widely used to solve L1 regularization problems. However, due to the high computational complexity of matrix inversion, the efficiency of the traditional SBA is low, which seriously restricts its real-time performance in airborne radar imaging. To overcome this disadvantage, a fast Split Bregman algorithm (FSBA) is proposed in this paper to achieve real-time imaging with an airborne radar. First, under the regularization framework, the problem of azimuth resolution improvement is converted into an L1 regularization problem, which is then solved with the proposed FSBA. By exploiting the low displacement rank of the Toeplitz matrix, the proposed FSBA realizes fast matrix inversion through a Gohberg–Semencul (GS) representation. Through simulated and real data processing experiments, we show that the proposed FSBA significantly improves the resolution compared with the Wiener filtering (WF), truncated singular value decomposition (TSVD), Tikhonov regularization (REGU), Richardson–Lucy (RL), and iterative adaptive approach (IAA) algorithms. The computational advantage of FSBA grows with the echo dimension: its computational efficiency is 51 and 77 times that of the traditional SBA for echoes with dimensions of 218 × 400 and 400 × 400, respectively, optimizing both image quality and computing time. In addition, for a given hardware platform, the proposed FSBA can process echoes of greater dimensions than the traditional SBA, with little performance degradation in comparison.
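The L1-regularized deconvolution loop described in the abstract can be sketched as follows. This is a minimal illustration of the Split Bregman idea only: it uses an FFT-based circulant inverse in place of the paper's Gohberg–Semencul Toeplitz inversion, and the function name and hyperparameters (`mu`, `lam`) are illustrative choices, not the authors' implementation.

```python
import numpy as np

def split_bregman_deconv(b, psf, mu=100.0, lam=1.0, n_iter=100):
    """Sketch of Split Bregman for min_x ||x||_1 + (mu/2) ||h * x - b||^2.

    Uses circulant (FFT) convolution, so the quadratic subproblem has a
    closed-form solution; the paper instead inverts the Toeplitz system
    fast via a Gohberg-Semencul representation.
    """
    n = b.size
    H = np.fft.fft(psf, n)                  # transfer function of the blur kernel
    Hb = np.conj(H) * np.fft.fft(b)
    denom = mu * np.abs(H) ** 2 + lam       # spectrum of (mu H^T H + lam I)
    x = np.zeros(n)
    d = np.zeros(n)                         # splitting variable, d ~ x
    bk = np.zeros(n)                        # Bregman variable
    shrink = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(n_iter):
        rhs = mu * Hb + np.fft.fft(lam * (d - bk))
        x = np.real(np.fft.ifft(rhs / denom))   # closed-form quadratic subproblem
        d = shrink(x + bk, 1.0 / lam)           # soft-thresholding (shrinkage)
        bk = bk + x - d                         # Bregman update
    return x
```

Because the data-fidelity weight `mu` is large, the recovered signal reproduces the observed echo closely while the L1 term keeps it sparse.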
(This article belongs to the Special Issue Image Super-Resolution in Remote Sensing)

20 pages, 6241 KiB  
Article
A Detail-Preserving Cross-Scale Learning Strategy for CNN-Based Pansharpening
by Sergio Vitale and Giuseppe Scarpa
Remote Sens. 2020, 12(3), 348; https://doi.org/10.3390/rs12030348 - 21 Jan 2020
Cited by 45 | Viewed by 3755
Abstract
The fusion of a single panchromatic (PAN) band with a lower-resolution multispectral (MS) image to raise the MS resolution to that of the PAN is known as pansharpening. In recent years, a paradigm shift from model-based to data-driven approaches, in particular those making use of Convolutional Neural Networks (CNNs), has been observed. Motivated by this research trend, in this work we introduce a cross-scale learning strategy for CNN pansharpening models. Early CNN approaches resort to a resolution-downgrading process to produce suitable training samples. As a consequence, the actual performance at the target resolution of models trained at a reduced scale is an open issue. To cope with this shortcoming, we propose a more complex loss computation that simultaneously involves reduced- and full-resolution training samples. Our experiments show a clear image enhancement in the full-resolution framework, with a negligible loss in the reduced-resolution space.
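The resolution-downgrading step used to produce reduced-scale training samples can be sketched as follows. This is a hedged illustration of the general idea (often called Wald's protocol), with a simple block average standing in for the sensor's modulation transfer function; `downgrade` and `ratio` are assumed names, not the paper's code.

```python
import numpy as np

def downgrade(ms, ratio=2):
    """Reduced-resolution version of an (H, W, bands) image: low-pass by
    block averaging, then decimate by `ratio`. A training pair is then
    (downgraded inputs, original image as target). Box-filter low-pass is
    an illustrative stand-in for the sensor MTF."""
    h, w, bands = ms.shape
    h -= h % ratio                     # crop so the grid divides evenly
    w -= w % ratio
    ms = ms[:h, :w]
    # average non-overlapping ratio x ratio blocks
    return ms.reshape(h // ratio, ratio, w // ratio, ratio, bands).mean(axis=(1, 3))
```

Training on such pairs and then applying the network at the original scale is exactly the scale mismatch the cross-scale loss above is designed to mitigate.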

20 pages, 17709 KiB  
Article
Sentinel-2 Sharpening via Parallel Residual Network
by Jiemin Wu, Zhi He and Jie Hu
Remote Sens. 2020, 12(2), 279; https://doi.org/10.3390/rs12020279 - 15 Jan 2020
Cited by 17 | Viewed by 4386
Abstract
Sentinel-2 data is of great utility for a wide range of remote sensing applications due to its free access and fine spatial-temporal coverage. However, restricted by the hardware, only four bands of Sentinel-2 images are provided at 10 m resolution, while the others are recorded at reduced resolution (i.e., 20 m or 60 m). In this paper, we propose a parallel residual network for Sentinel-2 sharpening, termed SPRNet, to obtain the complete data at 10 m resolution. The proposed network learns the mapping between the low-resolution (LR) bands and ideal high-resolution (HR) bands in three steps: parallel spatial residual learning, spatial feature fusing, and spectral feature mapping. First, rather than using a single-branch network, a parallel residual learning structure is proposed to extract spatial features from the different-resolution bands separately. Second, spatial feature fusing fully fuses the extracted features from each branch and produces a residual image carrying the spatial information. Third, to preserve spectral fidelity, spectral feature mapping directly propagates the spectral characteristics of the LR bands to the target HR bands. Without using extra training data, the proposed network is trained on lower-scale data synthesized from the observed Sentinel-2 data and applied to the original data. The complete set of bands at 10 m spatial resolution is then obtained by feeding the original 10 m, 20 m, and 60 m bands to the trained SPRNet. Extensive experiments conducted on two datasets indicate that the proposed SPRNet achieves good spatial fidelity and spectral preservation. Compared with competing approaches, SPRNet increases the SRE by at least 1.538 dB on the 20 m bands and 3.188 dB on the 60 m bands, while reducing the SAM by at least 0.282 on the 20 m bands and 0.162 on the 60 m bands.
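The two evaluation metrics quoted in the abstract can be computed as follows. This is a sketch using common definitions of the signal-to-reconstruction error (SRE, in dB) and the spectral angle mapper (SAM, in degrees); the exact definitions used by the authors may differ in detail.

```python
import numpy as np

def sre_db(ref, est):
    """Signal-to-reconstruction error in dB: ratio of reference signal
    power to reconstruction error power (one common definition)."""
    return 10.0 * np.log10(np.mean(ref ** 2) / np.mean((ref - est) ** 2))

def sam_deg(ref, est, eps=1e-12):
    """Mean spectral angle mapper in degrees over an (H, W, bands) image:
    the angle between reference and estimated spectra at each pixel."""
    dot = np.sum(ref * est, axis=-1)
    norms = np.linalg.norm(ref, axis=-1) * np.linalg.norm(est, axis=-1)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    return float(np.degrees(np.mean(np.arccos(cos))))
```

Note that SAM is invariant to a per-pixel scaling of the spectrum, so it isolates spectral distortion from radiometric error, which SRE captures.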

21 pages, 11553 KiB  
Article
Transferred Multi-Perception Attention Networks for Remote Sensing Image Super-Resolution
by Xiaoyu Dong, Zhihong Xi, Xu Sun and Lianru Gao
Remote Sens. 2019, 11(23), 2857; https://doi.org/10.3390/rs11232857 - 1 Dec 2019
Cited by 32 | Viewed by 4636
Abstract
Image super-resolution (SR) reconstruction plays a key role in coping with the increasing demand for remote sensing imaging applications with high spatial resolution requirements. Though many SR methods have been proposed over the last few years, further research is needed to improve SR processes with regard to the complex spatial distribution of remote sensing images and the diverse spatial scales of ground objects. In this paper, a novel multi-perception attention network (MPSR) is developed with performance exceeding that of many existing state-of-the-art models. By incorporating the proposed enhanced residual block (ERB) and residual channel attention group (RCAG), MPSR can super-resolve low-resolution remote sensing images via multi-perception learning and multi-level adaptive weighted information fusion. Moreover, a pre-training and transfer learning strategy is introduced, which improves SR performance and stabilizes the training procedure. Experimental comparisons are conducted against 13 state-of-the-art methods on a remote sensing dataset and benchmark natural image sets. The proposed model excels in terms of both objective criteria and subjective visual quality.

18 pages, 18695 KiB  
Article
Fast Super-Resolution of 20 m Sentinel-2 Bands Using Convolutional Neural Networks
by Massimiliano Gargiulo, Antonio Mazza, Raffaele Gaetano, Giuseppe Ruello and Giuseppe Scarpa
Remote Sens. 2019, 11(22), 2635; https://doi.org/10.3390/rs11222635 - 11 Nov 2019
Cited by 46 | Viewed by 6193
Abstract
Images provided by the ESA Sentinel-2 mission are rapidly becoming the main source of information for the entire remote sensing community, thanks to their unprecedented combination of spatial, spectral, and temporal resolution, as well as their associated open access policy. Due to a sensor design trade-off, images are acquired (and delivered) at different spatial resolutions (10, 20 and 60 m) according to specific sets of wavelengths, with only the four visible and near-infrared bands provided at the highest resolution (10 m). Although this is not a limiting factor in general, many applications are emerging in which the resolution enhancement of the 20 m bands may be beneficial, motivating the development of specific super-resolution methods. In this work, we propose to leverage Convolutional Neural Networks (CNNs) to provide a fast, upscalable method for the single-sensor fusion of Sentinel-2 (S2) data, whose aim is to provide a 10 m super-resolution of the original 20 m bands. Experimental results demonstrate that the proposed solution achieves better performance than most state-of-the-art methods, including other deep-learning-based ones, with a considerable saving in computational burden.

24 pages, 5734 KiB  
Article
Super-Resolution of Remote Sensing Images via a Dense Residual Generative Adversarial Network
by Wen Ma, Zongxu Pan, Feng Yuan and Bin Lei
Remote Sens. 2019, 11(21), 2578; https://doi.org/10.3390/rs11212578 - 3 Nov 2019
Cited by 43 | Viewed by 6233
Abstract
Single image super-resolution (SISR) has been widely studied in recent years as a crucial technique for remote sensing applications. In this paper, a dense residual generative adversarial network (DRGAN)-based SISR method is proposed to enhance the resolution of remote sensing images. The novelty of our method relative to previous super-resolution (SR) approaches based on generative adversarial networks (GANs) lies mainly in the following factors. First, we improve performance through the network architecture: we design a dense residual network as the generative network of the GAN, which makes full use of the hierarchical features of low-resolution (LR) images, and we introduce a contiguous memory mechanism into the network to take full advantage of the dense residual blocks. Second, we modify the loss function and alter the model of the discriminative network according to the Wasserstein GAN with gradient penalty (WGAN-GP) for stable training. Extensive experiments were performed on the NWPU-RESISC45 dataset, and the results demonstrate that the proposed method outperforms state-of-the-art methods in terms of both objective evaluation and subjective visual quality.

24 pages, 6474 KiB  
Article
Spatial Resolution Matching of Microwave Radiometer Data with Convolutional Neural Network
by Yade Li, Weidong Hu, Shi Chen, Wenlong Zhang, Rui Guo, Jingwen He and Leo Ligthart
Remote Sens. 2019, 11(20), 2432; https://doi.org/10.3390/rs11202432 - 19 Oct 2019
Cited by 17 | Viewed by 4222
Abstract
Passive multi-frequency microwave remote sensing is often plagued by low and non-uniform spatial resolution. In order to adaptively enhance and match the spatial resolution, an accommodative spatial resolution matching (ASRM) framework, composed of a flexible degradation model, a deep residual convolutional neural network (CNN), and adaptive feature modification (AdaFM) layers, is proposed in this paper. More specifically, a flexible degradation model, based on the imaging process of the microwave radiometer, is first proposed to generate suitable datasets for various levels of matching tasks. Second, a deep residual CNN is introduced to jointly learn the complicated degradation factors of the data, so that the resolution can be matched up to fixed levels with state-of-the-art quality. Finally, AdaFM layers are added to the network in order to handle arbitrary and continuous resolution matching problems between a start and an end level. Both simulated data and microwave radiation imager (MWRI) data from the Fengyun-3C (FY-3C) satellite have been used to demonstrate the validity and effectiveness of the method.
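A degradation model of the kind described, blurring a brightness-temperature map with an antenna-pattern-like kernel and adding noise to synthesize training pairs, can be sketched as follows. The Gaussian kernel and noise level here are illustrative assumptions, not the paper's calibrated radiometer model.

```python
import numpy as np

def degrade(tb, fwhm_pix, noise_std=0.5, seed=0):
    """Simulate a radiometer-style degradation of a brightness-temperature
    map `tb` (2-D array): Gaussian blur of a given full width at half
    maximum (a stand-in for the antenna pattern), plus additive noise."""
    h, w = tb.shape
    sigma = fwhm_pix / 2.355                       # FWHM -> Gaussian sigma
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # Gaussian transfer function; unit DC gain preserves the scene mean
    H = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fy ** 2 + fx ** 2))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(tb) * H))
    rng = np.random.default_rng(seed)
    return blurred + rng.normal(0.0, noise_std, tb.shape)
```

Pairing outputs at different `fwhm_pix` levels is one way to build the "start level to end level" matching datasets the framework trains on.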

18 pages, 5671 KiB  
Article
Bidirectional Convolutional LSTM Neural Network for Remote Sensing Image Super-Resolution
by Yunpeng Chang and Bin Luo
Remote Sens. 2019, 11(20), 2333; https://doi.org/10.3390/rs11202333 - 9 Oct 2019
Cited by 48 | Viewed by 5922
Abstract
Single-image super-resolution (SR) is an effective approach to enhance spatial resolution for numerous applications, such as object detection and classification, when the resolution of sensors is limited. Although the deep convolutional neural networks (CNNs) proposed for this purpose in recent years have outperformed relatively shallow models, their enormous number of parameters brings a risk of overfitting. In addition, due to the different scales of objects in images, the hierarchical features of a deep CNN contain additional information for SR tasks, yet most CNN models do not fully utilize these features. In this paper, we propose a deep yet concise network to address these problems. Our network consists of two main structures: (1) a recursive inference block based on dense connections that reuses local low-level features, with recursive learning applied to control the number of model parameters while increasing the receptive field; (2) a bidirectional convolutional LSTM (BiConvLSTM) layer, introduced to learn the correlations of features from each recursion and adaptively select the complementary information for the reconstruction layer. Experiments on multispectral satellite images, panchromatic satellite images, and natural high-resolution remote-sensing images showed that our proposed model outperforms state-of-the-art methods while using fewer parameters, and ablation studies demonstrate the effectiveness of the BiConvLSTM layer for the image SR task.

21 pages, 1551 KiB  
Article
Single Space Object Image Denoising and Super-Resolution Reconstructing Using Deep Convolutional Networks
by Xubin Feng, Xiuqin Su, Junge Shen and Humin Jin
Remote Sens. 2019, 11(16), 1910; https://doi.org/10.3390/rs11161910 - 15 Aug 2019
Cited by 16 | Viewed by 3243
Abstract
Space object recognition is the basis of space attack and defense confrontation, and high-quality space object images are very important for it. Because of the large number of cosmic rays in the space environment and the inadequacy of the optical lenses and detectors on satellites for high-resolution imaging, most of the images obtained are blurred and contain substantial cosmic-ray noise. Denoising methods and super-resolution methods are therefore two effective ways to reconstruct high-quality space object images. However, most super-resolution methods can only reconstruct the lost details of low-spatial-resolution images and cannot remove noise, while most denoising methods, especially cosmic-ray denoising methods, cannot reconstruct high-resolution details. In this paper, a deep convolutional neural network (CNN)-based method for single space object image denoising and super-resolution reconstruction is presented. The noise is removed and the lost details of the low-spatial-resolution image are reconstructed by one very deep CNN-based network that combines global residual learning and local residual learning. Experimental results on a dataset of satellite images demonstrate the feasibility of our proposed method in enhancing the spatial resolution and removing the noise of space object images.
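Cosmic-ray noise of the kind described can be simulated, for exercising a denoiser, as sparse saturated hits. This is a simplified illustration only (point hits; real events also produce short streaks), and the function name and parameters are assumptions, not the paper's data pipeline.

```python
import numpy as np

def add_cosmic_rays(img, hit_frac=0.001, amp=1.0, seed=0):
    """Corrupt an image with simulated cosmic-ray hits: each pixel is
    independently struck with probability `hit_frac` and set to the
    saturation value `amp`. Returns the noisy image and the hit mask."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    mask = rng.random(img.shape) < hit_frac
    out[mask] = amp                      # saturate the struck pixels
    return out, mask
```

Because the corruption is impulsive rather than Gaussian, it is precisely the regime where generic super-resolution networks struggle, motivating the joint denoising-and-SR network above.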

16 pages, 1898 KiB  
Article
Deep Residual Squeeze and Excitation Network for Remote Sensing Image Super-Resolution
by Jun Gu, Xian Sun, Yue Zhang, Kun Fu and Lei Wang
Remote Sens. 2019, 11(15), 1817; https://doi.org/10.3390/rs11151817 - 3 Aug 2019
Cited by 75 | Viewed by 10128
Abstract
Recently, deep convolutional neural networks (DCNNs) have obtained promising results in single image super-resolution (SISR) of remote sensing images. Due to the high complexity of remote sensing image distributions, most existing methods are not good enough for remote sensing image super-resolution, and enhancing the representation ability of the network is one of the critical factors for improving performance. To address this problem, we propose a new SISR algorithm called the Deep Residual Squeeze and Excitation Network (DRSEN). Specifically, we propose a residual squeeze and excitation block (RSEB) as the building block of DRSEN. The RSEB fuses the input with the internal features of the current block and models the interdependencies between channels to enhance representation power. At the same time, we improve the up-sampling module and the global residual pathway in the network to reduce the number of parameters. Experiments on two public remote sensing datasets (UC Merced and NWPU-RESISC45) show that our DRSEN achieves better accuracy and visual quality than most state-of-the-art methods, benefiting progress in the field of remote sensing image super-resolution.
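The squeeze-and-excitation step at the heart of an RSEB-style block can be sketched as a forward pass in NumPy. This illustrates the generic channel-attention mechanism the abstract refers to, not the paper's trained block; the weight names and shapes are illustrative.

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """Squeeze-and-excitation forward pass for an (H, W, C) feature map.

    Squeeze: global average pooling to a per-channel descriptor.
    Excitation: bottleneck FC (C -> C/r) + ReLU, FC (C/r -> C) + sigmoid,
    producing a gate in (0, 1) per channel that rescales the input.
    """
    z = x.mean(axis=(0, 1))                       # squeeze -> shape (C,)
    s = np.maximum(z @ w1 + b1, 0.0)              # excitation: FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(s @ w2 + b2)))      # FC + sigmoid -> channel gates
    return x * s                                  # reweight channels
```

Stacking such a gate after the convolutional features of a residual block is what lets the network emphasize informative channels and suppress redundant ones.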
