Lightweight Deep Neural Networks for Remote Sensing Image Understanding

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 December 2020) | Viewed by 30403

Special Issue Editors


Prof. Dr. Tao Lei
Guest Editor
School of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi’an 710021, China
Interests: image processing; computer vision; deep learning

Dr. Hongying Meng
Guest Editor
Department of Electronic and Electrical Engineering, Brunel University London, London UB8 3PH, UK
Interests: image processing; artificial intelligence; signal processing; affective computing

Dr. Shuying Li
Guest Editor
Xi’an University of Posts and Telecommunications, Xi’an, China
Interests: remote sensing image processing; high-resolution remote sensing

Dr. Jungong Han
Guest Editor
WMG Data Science, University of Warwick, Coventry CV4 7AL, UK
Interests: computer vision; video analysis; machine learning

Special Issue Information

Dear Colleagues,

With the rapid development of intelligent information technology, remote sensing images play an important role in many research areas, including geology, oceanography, and weather forecasting. However, compared with general digital images, remote sensing images require complex pre-processing, such as optical geometric correction and radiometric correction. Consequently, it is usually difficult to construct a dataset containing a large number of remote sensing images. Deep learning has emerged as a powerful machine learning approach, offering large model capacity and the ability to learn highly discriminative features for the task at hand. In particular, deep convolutional neural networks (CNNs) have been widely used in the field of remote sensing. However, deep models often rely on large numbers of annotated images, which are difficult to obtain in remote sensing. How to train a lightweight deep learning model from small training samples is therefore a key challenge in remote sensing. This Special Issue aims to publish high-quality research papers, as well as review articles, addressing emerging trends in remote sensing image understanding using lightweight deep neural network models. Original contributions, not currently under review at a journal or conference, are solicited in relevant areas including, but not limited to, the following:

  • Object detection in remote sensing images using lightweight deep neural networks
  • Remote sensing image classification using lightweight deep neural networks
  • Change detection using lightweight deep neural networks
  • Super-resolution reconstruction of remote sensing images using lightweight deep neural networks
  • Remote sensing image restoration using lightweight deep neural networks
  • Remote sensing applications using deep neural networks
  • Deep neural networks for hyperspectral data
  • Reviews/surveys of remote sensing image processing
  • New remote sensing image datasets

Prof. Dr. Tao Lei
Dr. Hongying Meng
Dr. Shuying Li
Dr. Lefei Zhang
Dr. Jungong Han
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing image classification
  • remote sensing image restoration
  • remote sensing application
  • deep neural network
  • hyperspectral image processing
  • neural network compression

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)

Research

23 pages, 569 KiB  
Article
PulseNetOne: Fast Unsupervised Pruning of Convolutional Neural Networks for Remote Sensing
by David Browne, Michael Giering and Steven Prestwich
Remote Sens. 2020, 12(7), 1092; https://doi.org/10.3390/rs12071092 - 29 Mar 2020
Cited by 11 | Viewed by 3654
Abstract
Scene classification is an important aspect of image/video understanding and segmentation. However, remote-sensing scene classification is a challenging image recognition task, partly due to the limited training data, which causes deep-learning Convolutional Neural Networks (CNNs) to overfit. Another difficulty is that images often have very different scales and orientations (viewing angles). Yet another is that the resulting networks may be very large, again making them prone to overfitting and unsuitable for deployment on memory- and energy-limited devices. We propose an efficient deep-learning approach to tackle these problems. We use transfer learning to compensate for the lack of data, and data augmentation to tackle varying scale and orientation. To reduce network size, we use a novel unsupervised learning approach based on k-means clustering, applied to all parts of the network: most network reduction methods use computationally expensive supervised learning methods and apply only to the convolutional or the fully connected layers, but not both. In experiments, we set new standards in classification accuracy on four remote-sensing and two scene-recognition image datasets. Full article
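The pruning step the abstract describes is clustering-based; the exact algorithm is given in the full paper. Purely as a hypothetical sketch of the general idea (not the authors' implementation), the snippet below clusters a convolutional layer's filters with k-means and keeps the filter nearest each centroid; the function name, the scikit-learn dependency, and the layer sizes are all assumptions.

```python
# Hypothetical sketch of k-means-based filter pruning (not PulseNetOne itself).
import numpy as np
import torch.nn as nn
from sklearn.cluster import KMeans

def prune_conv_filters(conv: nn.Conv2d, n_keep: int) -> nn.Conv2d:
    """Return a smaller Conv2d keeping one representative filter per cluster."""
    w = conv.weight.detach().cpu().numpy()        # shape: (out_ch, in_ch, kH, kW)
    flat = w.reshape(w.shape[0], -1)              # one row per filter
    km = KMeans(n_clusters=n_keep, n_init=10).fit(flat)
    # For each cluster, keep the original filter closest to its centroid.
    keep = [int(np.argmin(np.linalg.norm(flat - c, axis=1)))
            for c in km.cluster_centers_]
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

# Example: shrink a 64-filter layer to 32 representative filters.
layer = nn.Conv2d(3, 64, kernel_size=3, padding=1)
print(prune_conv_filters(layer, n_keep=32).weight.shape)  # torch.Size([32, 3, 3, 3])
```

In a full pipeline, the next layer's input channels would be reduced to match and the network fine-tuned afterwards.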

13 pages, 5926 KiB  
Article
Sample Generation with Self-Attention Generative Adversarial Adaptation Network (SaGAAN) for Hyperspectral Image Classification
by Wenzhi Zhao, Xi Chen, Jiage Chen and Yang Qu
Remote Sens. 2020, 12(5), 843; https://doi.org/10.3390/rs12050843 - 5 Mar 2020
Cited by 21 | Viewed by 4731
Abstract
Hyperspectral image analysis plays an important role in agriculture, the mineral industry, and military applications. However, classifying high-dimensional hyperspectral data with few labeled samples is quite challenging. Generative adversarial networks (GANs) have been widely used for sample generation, but the generated samples often suffer from unwanted noise and uncontrolled divergence, making it difficult to acquire high-quality samples. To generate high-quality hyperspectral samples, a self-attention generative adversarial adaptation network (SaGAAN) is proposed in this work. It aims to increase the number and quality of training samples to avoid the impact of over-fitting. Compared to traditional GANs, the proposed method makes two contributions: (1) it includes a domain adaptation term that constrains generated samples to be more faithful to the original ones; and (2) it uses the self-attention mechanism to capture long-range dependencies across the spectral bands and further improve the quality of generated samples. To demonstrate the effectiveness of the proposed SaGAAN, we tested it on two well-known hyperspectral datasets: Pavia University and Indian Pines. The experimental results illustrate that the proposed method can greatly improve the classification accuracy, even with a small number of initial labeled samples. Full article
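SaGAAN's attention design is specified in the full paper; for orientation only, the sketch below is a minimal self-attention layer in the style of SAGAN (Zhang et al., 2019), the construction that self-attention GANs commonly build on. It is a hypothetical stand-in, not SaGAAN's code; the 1×1-convolution projections and the learned residual weight gamma are standard SAGAN choices.

```python
# Minimal SAGAN-style self-attention layer (a stand-in, not SaGAAN's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, hw, c//8)
        k = self.key(x).flatten(2)                    # (b, c//8, hw)
        attn = F.softmax(q @ k, dim=-1)               # (b, hw, hw) position affinities
        v = self.value(x).flatten(2)                  # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual connection

feats = torch.randn(2, 64, 16, 16)                    # e.g., generator feature maps
print(SelfAttention2d(64)(feats).shape)               # torch.Size([2, 64, 16, 16])
```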

21 pages, 4780 KiB  
Article
PolSAR Image Classification with Lightweight 3D Convolutional Networks
by Hongwei Dong, Lamei Zhang and Bin Zou
Remote Sens. 2020, 12(3), 396; https://doi.org/10.3390/rs12030396 - 26 Jan 2020
Cited by 32 | Viewed by 4976
Abstract
Convolutional neural networks (CNNs) have become the state of the art in optical image processing. Recently, CNNs have been applied to polarimetric synthetic aperture radar (PolSAR) image classification and obtained promising results. Unlike optical images, PolSAR data carry unique phase information that expresses the structure of objects. This special data representation means that 3D convolution, which explicitly models the relationships between polarimetric channels, performs better in PolSAR image classification. However, deep 3D-CNNs incur a huge number of model parameters and expensive computational costs, which not only slows interpretation during testing but also greatly increases the risk of over-fitting. To alleviate this problem, a lightweight 3D-CNN framework that compresses 3D-CNNs from two aspects is proposed in this paper. Lightweight convolution operations, i.e., pseudo-3D and 3D-depthwise separable convolutions, are used as low-latency replacements for vanilla 3D convolution. Further, fully connected layers are replaced by global average pooling to reduce the number of model parameters and save memory. On the classification task considered, the proposed methods reduce up to 69.83% of the model parameters in the convolution layers of the 3D-CNN, as well as almost all the model parameters in the fully connected layers, which ensures fast PolSAR interpretation. Experiments on three PolSAR benchmark datasets, i.e., AIRSAR Flevoland, ESAR Oberpfaffenhofen, and EMISAR Foulum, show that the proposed lightweight architectures can not only maintain but slightly improve accuracy under various criteria. Full article
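Both replacements named in the abstract are standard factorizations of a full 3D convolution. A sketch under assumed hyperparameters follows; the channel counts, kernel sizes, and 15-class head are illustrative, not the paper's configuration.

```python
# Sketches of the two lightweight substitutes for vanilla 3D convolution;
# all layer sizes here are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

def pseudo_3d(cin: int, cout: int) -> nn.Sequential:
    """Factorize a 3x3x3 conv into a 1x3x3 spatial conv plus a 3x1x1 depth conv."""
    return nn.Sequential(
        nn.Conv3d(cin, cout, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
        nn.Conv3d(cout, cout, kernel_size=(3, 1, 1), padding=(1, 0, 0)),
    )

def depthwise_separable_3d(cin: int, cout: int) -> nn.Sequential:
    """Factorize into a per-channel (depthwise) 3D conv plus a 1x1x1 pointwise conv."""
    return nn.Sequential(
        nn.Conv3d(cin, cin, kernel_size=3, padding=1, groups=cin),  # depthwise
        nn.Conv3d(cin, cout, kernel_size=1),                        # pointwise
    )

# Global average pooling in place of fully connected layers (assumed 15 classes):
gap_head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(64, 15))

x = torch.randn(2, 16, 9, 32, 32)  # (batch, channels, polarimetric depth, H, W)
print(pseudo_3d(16, 64)(x).shape)               # torch.Size([2, 64, 9, 32, 32])
print(depthwise_separable_3d(16, 64)(x).shape)  # torch.Size([2, 64, 9, 32, 32])
```

The parameter saving is easy to see in this toy setting: a vanilla 3x3x3 convolution from 16 to 64 channels has 16*64*27 = 27,648 weights, while the depthwise separable version has 16*27 + 16*64 = 1,456.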

15 pages, 2386 KiB  
Article
Tree Cover Estimation in Global Drylands from Space Using Deep Learning
by Emilio Guirado, Domingo Alcaraz-Segura, Javier Cabello, Sergio Puertas-Ruíz, Francisco Herrera and Siham Tabik
Remote Sens. 2020, 12(3), 343; https://doi.org/10.3390/rs12030343 - 21 Jan 2020
Cited by 19 | Viewed by 4955
Abstract
Accurate tree cover mapping is of paramount importance in many fields, from biodiversity conservation to carbon stock estimation, ecohydrology, erosion control, and Earth system modelling. Despite this importance, there is still uncertainty about global forest cover, particularly in drylands. Recently, the Food and Agriculture Organization of the United Nations (FAO) conducted a costly global assessment of dryland forest cover through the visual interpretation of orthoimages using the Collect Earth software, involving hundreds of operators from around the world. Our study proposes a new automatic method for estimating tree cover using artificial intelligence and free orthoimages. Our results show that our tree cover classification model, based on convolutional neural networks (CNNs), is 23% more accurate than the manual visual interpretation used by FAO, reaching up to 79% overall accuracy. The smallest differences between the two methods occurred in the driest regions, but disagreement increased with the percentage of tree cover. CNNs could improve tree cover maps and reduce their cost from the local to the global scale, with broad implications for research and management. Full article
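The abstract does not detail the network; purely as a hypothetical illustration of the general recipe (patch-level binary classification with a pretrained backbone), the sketch below fine-tunes a new two-class head. Everything here, from the ResNet-18 backbone to the patch size, is an assumption rather than the authors' setup.

```python
# Hypothetical tree / non-tree patch classifier via transfer learning
# (an illustrative recipe, not the paper's actual model or training setup).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)   # tree vs. non-tree head

# Freeze the backbone and train only the new head (one common recipe).
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

patches = torch.randn(8, 3, 224, 224)           # orthoimage patches (assumed size)
labels = torch.randint(0, 2, (8,))              # dummy labels for the sketch
optimizer.zero_grad()
loss = criterion(model(patches), labels)
loss.backward()
optimizer.step()
```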

20 pages, 8047 KiB  
Article
Urban Land Cover Classification of High-Resolution Aerial Imagery Using a Relation-Enhanced Multiscale Convolutional Network
by Chun Liu, Doudou Zeng, Hangbin Wu, Yin Wang, Shoujun Jia and Liang Xin
Remote Sens. 2020, 12(2), 311; https://doi.org/10.3390/rs12020311 - 17 Jan 2020
Cited by 34 | Viewed by 5387
Abstract
Urban land cover classification for high-resolution images is a fundamental yet challenging task in remote sensing image analysis. Recently, deep learning techniques have achieved outstanding performance in high-resolution image classification, especially methods based on deep convolutional neural networks (DCNNs). However, traditional CNNs, whose convolution operations have local receptive fields, are not sufficient to model global contextual relations between objects. In addition, multiscale objects and the relatively small sample sizes in remote sensing have also limited classification accuracy. In this paper, a relation-enhanced multiscale convolutional network (REMSNet) is proposed to overcome these weaknesses. A dense connectivity pattern and parallel multi-kernel convolutions are combined to build a lightweight model with varied receptive field sizes. Then, a spatial relation-enhanced block and a channel relation-enhanced block are introduced into the network. They adaptively learn global contextual relations between any two positions or feature maps to enhance feature representations. Moreover, we design a parallel multi-kernel deconvolution module and a spatial path to further aggregate information at different scales. The proposed network is evaluated on two urban land cover classification datasets: the ISPRS 2D semantic labelling contest of Vaihingen and an area of Shanghai of about 143 km². The results demonstrate that the proposed method can effectively capture long-range dependencies and improve the accuracy of land cover classification. Our model obtains an overall accuracy (OA) of 90.46% and a mean intersection-over-union (mIoU) of 0.8073 on Vaihingen, and an OA of 88.55% and an mIoU of 0.7394 on Shanghai. Full article
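The two relation-enhanced blocks are defined in the full paper; as one hedged illustration of the idea, the sketch below implements a channel relation block that re-weights feature maps by their pairwise affinities (a non-local-style construction). The details are assumptions, not REMSNet's exact blocks.

```python
# Hypothetical channel relation block: attention over feature maps rather
# than spatial positions (an illustration, not REMSNet's exact block).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelRelationBlock(nn.Module):
    """Re-weight each channel by its affinity with every other channel."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        flat = x.flatten(2)                        # (b, c, hw)
        affinity = flat @ flat.transpose(1, 2)     # (b, c, c) channel Gram matrix
        attn = F.softmax(affinity, dim=-1)         # channel-to-channel relations
        out = (attn @ flat).view(b, c, h, w)
        return self.gamma * out + x                # residual connection

x = torch.randn(2, 32, 64, 64)
print(ChannelRelationBlock()(x).shape)             # torch.Size([2, 32, 64, 64])
```

A spatial relation block would be the transposed counterpart: affinities computed between the hw positions instead of the c channels.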

20 pages, 5816 KiB  
Article
Land Cover Change Detection from High-Resolution Remote Sensing Imagery Using Multitemporal Deep Feature Collaborative Learning and a Semi-supervised Chan–Vese Model
by Xiaokang Zhang, Wenzhong Shi, Zhiyong Lv and Feifei Peng
Remote Sens. 2019, 11(23), 2787; https://doi.org/10.3390/rs11232787 - 26 Nov 2019
Cited by 16 | Viewed by 4259
Abstract
This paper presents a novel approach for automatically detecting land cover changes from multitemporal high-resolution remote sensing images in the deep feature space. This is accomplished by using multitemporal deep feature collaborative learning and a semi-supervised Chan–Vese (SCV) model. The multitemporal deep feature collaborative learning model is developed to obtain the multitemporal deep feature representations in the same high-level feature space and to improve the separability between changed and unchanged patterns. The object-level deep difference feature map is then extracted through a feature similarity measure. Based on the deep difference feature map, the SCV model is proposed to detect changes; labeled patterns automatically derived from uncertainty analysis are integrated into the energy functional to efficiently drive the contour towards the accurate boundaries of changed objects. The experimental results obtained on four datasets acquired by different high-resolution sensors corroborate the effectiveness of the proposed approach. Full article
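The semi-supervised terms are the paper's contribution and are given in the full text; for background, the classical Chan–Vese model that the SCV model extends evolves a contour C over an image u_0 (here, the deep difference feature map) by minimizing

```latex
E(c_1, c_2, C) = \mu \,\operatorname{Length}(C)
  + \lambda_1 \int_{\mathrm{inside}(C)} \lvert u_0(x) - c_1 \rvert^2 \, dx
  + \lambda_2 \int_{\mathrm{outside}(C)} \lvert u_0(x) - c_2 \rvert^2 \, dx
```

where c_1 and c_2 are the mean values of u_0 inside and outside the contour; in change detection, the two regions correspond to changed and unchanged areas.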