

Artificial Intelligence and Machine Learning for multi-source Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (1 July 2023) | Viewed by 42532

Special Issue Editors



Guest Editor
Professor, Department of Electronics and Communication Engineering, Nitte Meenakshi Institute of Technology, Bangalore 560064, Karnataka, India
Interests: image processing; pattern recognition; data science; IoT

Guest Editor
Department of Telecommunication Engineering, University of Study “Giustino Fortunato”, 82100 Benevento, Italy
Interests: statistical signal processing applied to radar target recognition, global navigation satellite system reflectometry, and hyperspectral unmixing; elaboration of satellite data for Earth observation with applications in imaging and sounding with passive (multispectral and hyperspectral) and active (SAR, GNSS-R) sensors

Special Issue Information

Dear Colleagues,

Recently, artificial intelligence and machine learning have gained attention and achieved great success in both the research community and industry, especially in the field of multi-source remote sensing. Owing to recent progress in machine learning, particularly in deep learning, many techniques for image analysis and understanding have been applied to solve real problems, and deep learning has been widely used for other computer vision tasks, such as video classification and image super-resolution. Learning an effective feature representation from a large amount of data through artificial intelligence techniques helps extract the underlying structural features of the data, even when only a small amount of data is available. It also yields better representations than hand-crafted features, as the learned features adapt to the task at hand.

Currently, massive streams of Earth Observation data are being systematically collected by cutting-edge optical and radar sensors on board satellites and aerial and terrestrial platforms. These data include both images and video sequences at different spatial, spectral, and temporal resolutions and can be used to constantly monitor the Earth's surface. In order to fully exploit these datasets and deliver crucial information for numerous engineering, environmental, safety, and security applications, novel artificial intelligence and machine learning methods are required that enable us to efficiently dissect and interpret the data and draw conclusions that the broader public can turn into action.

The scope of this Special Issue is interdisciplinary, and it seeks collaborative contributions from academic and industrial experts in the areas of geoscience and remote sensing, signal processing, computer vision, machine learning, and data science.

Prof. Dr. Silvia Liberata Ullo
Prof. Dr. Parameshachari Bidare Divakarachari
Prof. Dr. Pia Addabbo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • transfer learning and statistical learning methods for image classification
  • multispectral imaging and deep neural networks for precision farming
  • computer vision using deep convolutional networks for spatio-temporal remote sensing applications
  • automatic building segmentation using multi-constraint convolutional networks
  • urban land use mapping and analysis in the big data era
  • deep learning techniques for remote sensing image classification
  • detecting pipeline pathways in satellite images with deep learning
  • marine vision-based situational awareness using deep learning
  • computer vision for automatic ship detection in remote sensing images
  • hyperspectral image classification using similarity-measurement-based recurrent neural networks
  • unsupervised deep feature extraction for remote sensing image classification

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (15 papers)


Research


24 pages, 1272 KiB  
Article
Classification of High-Altitude Flying Objects Based on Radiation Characteristics with Attention-Convolutional Neural Network and Gated Recurrent Unit Network
by Deen Dai, Lihua Cao, Yangfan Liu, Yao Wang and Zhaolong Wu
Remote Sens. 2023, 15(20), 4985; https://doi.org/10.3390/rs15204985 - 16 Oct 2023
Viewed by 1438
Abstract
In the task of classifying high-altitude flying objects, the limitations of the target flight altitude mean that the objects captured by infrared detection suffer from insufficient contour information, low contrast, and few pixels, making accurate classification challenging. To improve classification performance and achieve effective classification of the targets, this study proposes a high-altitude flying object classification algorithm based on radiation characteristic data. The target images are obtained through an infrared camera, and the radiation characteristics of the targets are measured using radiation characteristic measurement techniques. Classification is performed using an attention-based convolutional neural network (CNN) and gated recurrent unit (GRU), referred to as ACGRU. In ACGRU, CNN-GRU and GRU-CNN networks are used to extract vectorized radiation characteristic data. The raw data are processed using a Highway Network, and SoftMax is used for high-altitude flying object classification. The classification accuracy of ACGRU reaches 94.8%, and the F1 score reaches 93.9%. To verify the generalization performance of the model, comparative experiments and significance analysis were conducted with other algorithms on radiation characteristic datasets and 17 multidimensional time series datasets from UEA. The results show that the proposed ACGRU algorithm performs excellently in the task of high-altitude flying object classification based on radiation characteristics.
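
The pipeline described (convolutional feature extraction, recurrent temporal modeling, and attentive pooling ahead of a softmax classifier) can be illustrated with a minimal PyTorch sketch; the single-branch layout, shapes, and hyperparameters below are assumptions for illustration, not the authors' implementation:

```python
# Minimal sketch of an attention-based CNN + GRU classifier for
# radiation-characteristic time series. Shapes and sizes are placeholders.
import torch
import torch.nn as nn

class AttnCNNGRU(nn.Module):
    def __init__(self, n_features: int, num_classes: int, hidden: int = 64):
        super().__init__()
        # 1-D convolution over time extracts local temporal patterns
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # GRU models longer-range temporal dependencies
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        # additive attention pools the GRU outputs into one vector
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features); Conv1d expects (batch, channels, time)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.gru(h)                      # (batch, time, hidden)
        w = torch.softmax(self.attn(out), dim=1)  # attention weights over time
        ctx = (w * out).sum(dim=1)                # weighted temporal pooling
        return self.head(ctx)                     # class logits (pre-softmax)

logits = AttnCNNGRU(n_features=8, num_classes=5)(torch.randn(4, 100, 8))
print(logits.shape)  # torch.Size([4, 5])
```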

18 pages, 5054 KiB  
Article
FCAE-AD: Full Convolutional Autoencoder Based on Attention Gate for Hyperspectral Anomaly Detection
by Xianghai Wang, Yihan Wang, Zhenhua Mu and Ming Wang
Remote Sens. 2023, 15(17), 4263; https://doi.org/10.3390/rs15174263 - 30 Aug 2023
Cited by 3 | Viewed by 1529
Abstract
Recently, methods based on autoencoder background reconstruction have been applied to hyperspectral image (HSI) anomaly detection (HSI-AD). However, the encoding mechanism of the autoencoder (AE) can treat the anomaly and the background indistinguishably during reconstruction, so a small number of anomalous pixels may still be included in the reconstructed background. In addition, the problem of redundant information in HSIs also appears in the reconstruction errors. To this end, a fully convolutional AE hyperspectral anomaly detection (AD) network with an attention gate (AG) connection is proposed. First, the low-dimensional feature map produced by the encoder and the fine feature map produced by the corresponding decoding stage are simultaneously input into the AG module. The network's context information is used to suppress the irrelevant regions in the input image and obtain the salient feature map. Then, the features from the AG and the deep features from upsampling are efficiently combined in the decoder stage through skip connections to gradually estimate the reconstructed background image. Finally, post-processing optimization based on guided filtering (GF) is carried out on the reconstruction error to eliminate wrongly flagged anomalous pixels in the reconstruction error image and amplify the contrast between the anomaly and the background.
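
The attention-gate mechanism described above can be sketched in a few lines, in the spirit of attention gates in U-Net-style architectures (channel sizes are assumptions, and the full encoder/decoder and the guided-filtering post-processing are omitted):

```python
# Minimal sketch of an attention gate (AG): a decoder gating signal g
# re-weights a fine encoder feature map x to suppress irrelevant regions.
# Channel sizes are placeholders; this is not the authors' code.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, ch_x: int, ch_g: int, ch_mid: int):
        super().__init__()
        self.wx = nn.Conv2d(ch_x, ch_mid, kernel_size=1)   # project x
        self.wg = nn.Conv2d(ch_g, ch_mid, kernel_size=1)   # project g
        self.psi = nn.Conv2d(ch_mid, 1, kernel_size=1)     # scalar gate map

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        a = torch.sigmoid(self.psi(torch.relu(self.wx(x) + self.wg(g))))
        return x * a  # salient regions kept, irrelevant ones suppressed

x = torch.randn(1, 32, 64, 64)  # fine encoder (skip) features
g = torch.randn(1, 64, 64, 64)  # upsampled decoder features
print(AttentionGate(32, 64, 16)(x, g).shape)  # torch.Size([1, 32, 64, 64])
```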

23 pages, 2658 KiB  
Article
GCMTN: Low-Overlap Point Cloud Registration Network Combining Dense Graph Convolution and Multilevel Interactive Transformer
by Xuchu Wang and Yue Yuan
Remote Sens. 2023, 15(15), 3908; https://doi.org/10.3390/rs15153908 - 7 Aug 2023
Cited by 2 | Viewed by 1558
Abstract
A single receptive field limits the expression of multilevel receptive field features in point cloud registration, leading to the pseudo-matching of objects with similar geometric structures in low-overlap scenes, which causes a significant degradation in registration performance. To handle this problem, a point [...] Read more.
A single receptive field limits the expression of multilevel receptive field features in point cloud registration, leading to the pseudo-matching of objects with similar geometric structures in low-overlap scenes, which causes a significant degradation in registration performance. To handle this problem, a point cloud registration network that incorporates dense graph convolution and a mutilevel interaction Transformer (GCMTN) in pursuit of better registration performance in low-overlap scenes is proposed in this paper. In GCMTN, a dense graph feature aggregation module is designed for expanding the receptive field of points and fusing graph features at multiple scales. To make pointwise features more discriminative, a multilevel interaction Transformer module combining Multihead Offset Attention and Multihead Cross Attention is proposed to refine the internal features of the point cloud and perform feature interaction. To filter out the undesirable effects of outliers, an overlap prediction module containing overlap factor and matching factor is also proposed for determining the match ability of points and predicting the overlap region. The final rigid transformation parameters are generated based on the distribution of the overlap region. The proposed GCMTN was extensively verified on publicly available ModelNet and ModelLoNet, 3DMatch and 3DLoMatch, and odometryKITTI datasets and compared with recent methods. The experimental results demonstrate that GCMTN significantly improves the capability of feature extraction and achieves competitive registration performance in low-overlap scenes. Meanwhile, GCMTN has value and potential for application in practical remote sensing tasks. Full article
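
The last step named above, turning weighted correspondences in the predicted overlap region into rigid transformation parameters, is classically solved with a weighted SVD (Kabsch) fit; here is a small sketch under that assumption (the learned graph-convolution and Transformer modules are omitted):

```python
# Weighted Kabsch/SVD solve: recover the rigid transform mapping matched
# source points P onto target points Q. A standard building block shown
# for illustration; not the GCMTN code itself.
import numpy as np

def kabsch(P, Q, w=None):
    # P, Q: (n, 3) matched points; w: optional per-match weights
    w = np.ones(len(P)) if w is None else np.asarray(w, float)
    w = w / w.sum()
    p0 = (w[:, None] * P).sum(axis=0)            # weighted centroids
    q0 = (w[:, None] * Q).sum(axis=0)
    H = (P - p0).T @ (w[:, None] * (Q - q0))     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q0 - R @ p0
    return R, t                                  # so that Q ~= P @ R.T + t

# quick self-check against a known rotation and translation
rng = np.random.default_rng(0)
P = rng.random((10, 3))
c, s = np.cos(0.3), np.sin(0.3)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, 2.0, 3.0])
R, t = kabsch(P, Q)
print(np.allclose(R, Rz), np.allclose(t, [1.0, 2.0, 3.0]))  # True True
```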

21 pages, 3795 KiB  
Article
DILRS: Domain-Incremental Learning for Semantic Segmentation in Multi-Source Remote Sensing Data
by Xue Rui, Ziqiang Li, Yang Cao, Ziyang Li and Weiguo Song
Remote Sens. 2023, 15(10), 2541; https://doi.org/10.3390/rs15102541 - 12 May 2023
Cited by 4 | Viewed by 2977
Abstract
With the exponential growth in the speed and volume of remote sensing data, deep learning models are expected to adapt and continually learn over time. Unfortunately, the domain shift between multi-source remote sensing data from various sensors and regions poses a significant challenge. Segmentation models struggle to adapt to incremental domains due to catastrophic forgetting, which can be addressed via incremental learning methods. However, current incremental learning methods mainly focus on class-incremental learning, wherein the classes belong to the same remote sensing domain, and neglect incremental domains in remote sensing. To solve this problem, we propose a domain-incremental learning method for semantic segmentation in multi-source remote sensing data. Specifically, our model incrementally learns a new domain while preserving its performance on previous domains, without accessing data from previous domains. To achieve this, the model has a parameter learning structure that reparametrizes domain-agnostic and domain-specific parameters, and we use different optimization strategies for each to adapt to domain shift during incremental learning. Additionally, we adopt a multi-level knowledge distillation loss to mitigate the impact of label space shift among domains. The experiments demonstrate that our method achieves excellent performance in domain-incremental settings, outperforming existing methods with only a few additional parameters.
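
One common way to realize such a parameter split, shared (domain-agnostic) convolutions with per-domain (domain-specific) normalization, together with a soft-target distillation term, can be sketched as follows; this illustrates the general recipe, not the paper's exact reparametrization:

```python
# Sketch: shared conv weights with one BatchNorm per domain, plus a
# temperature-softened KL distillation loss to curb forgetting.
# Layer sizes and the BN-based split are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainBNConv(nn.Module):
    def __init__(self, cin: int, cout: int, n_domains: int):
        super().__init__()
        self.conv = nn.Conv2d(cin, cout, 3, padding=1)  # domain-agnostic
        self.bns = nn.ModuleList(nn.BatchNorm2d(cout) for _ in range(n_domains))

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        return F.relu(self.bns[domain](self.conv(x)))   # domain-specific BN

def kd_loss(student_logits, teacher_logits, T: float = 2.0):
    # KL between softened per-pixel class distributions (class dim = 1)
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T

layer = DomainBNConv(3, 16, n_domains=2)
x = torch.randn(2, 3, 32, 32)
print(layer(x, domain=1).shape)  # torch.Size([2, 16, 32, 32])
```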

18 pages, 2996 KiB  
Article
Underground Water Level Prediction in Remote Sensing Images Using Improved Hydro Index Value with Ensemble Classifier
by Andrzej Stateczny, Sujatha Canavoy Narahari, Padmavathi Vurubindi, Nirmala S. Guptha and Kalyanapu Srinivas
Remote Sens. 2023, 15(8), 2015; https://doi.org/10.3390/rs15082015 - 11 Apr 2023
Cited by 12 | Viewed by 4373
Abstract
The economic sustainability of aquifers across the world relies on accurate and rapid estimates of groundwater storage changes, but this is difficult due to the absence of in-situ groundwater surveys in most areas. By closing the water balance, hydrologic remote sensing measures offer a possible method for quantifying changes in groundwater storage. However, it is uncertain to what extent remote sensing data can provide an accurate assessment of these changes. Therefore, a new framework is implemented in this work for predicting the underground water level using remote sensing images. The water level is classified into five levels based on water quantity: Critical, Overexploited, Safe, Saline, and Semi-critical. At first, Wiener filtering was employed to preprocess the acquired remote sensing images. Secondly, the vegetation indices (Normalized Difference Vegetation Index (NDVI), Normalized Difference Snow Index (NDSI), Infrared Index (IRI), and Radar Vegetation Index (RVI)) and statistical features (entropy, Root Mean Square (RMS), skewness, and kurtosis) were extracted from the preprocessed images. Then, the extracted features were combined into a novel hydro index, which was fed to an Ensemble Classifier (EC) comprising Neural Network (NN), Support Vector Machine (SVM), and improved Deep Convolutional Neural Network (DCNN) models for underground water level prediction. The obtained results prove the efficacy of the proposed framework under different performance measures. The results show that the False Positive Rate (FPR) of the proposed EC model is 0.0083, which is better than that of existing methods. On the other hand, the proposed EC model has a high accuracy of 0.90, which is superior to the existing traditional models: Long Short-Term Memory (LSTM) network, Naïve Bayes (NB), Random Forest (RF), Recurrent Neural Network (RNN), and Bidirectional Gated Recurrent Unit (Bi-GRU).
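
The feature stage lends itself to a short sketch: compute vegetation indices from the band data, derive the named statistics, and concatenate everything into one hydro-index vector for the ensemble (band names, the two indices shown, and array shapes are assumptions; the NN/SVM/DCNN ensemble is not reproduced):

```python
# Sketch of hydro-index feature construction: vegetation indices plus
# entropy, RMS, skewness, and kurtosis, concatenated per image.
import numpy as np
from scipy.stats import kurtosis, skew

def entropy(v: np.ndarray, bins: int = 32) -> float:
    p, _ = np.histogram(v, bins=bins)
    p = p[p > 0] / p.sum()
    return float(-np.sum(p * np.log2(p)))

def hydro_index(nir, red, green, swir):
    eps = 1e-6                                     # avoid divide-by-zero
    ndvi = (nir - red) / (nir + red + eps)         # vegetation index
    ndsi = (green - swir) / (green + swir + eps)   # snow index
    feats = []
    for band in (ndvi, ndsi):
        v = band.ravel()
        feats += [entropy(v), float(np.sqrt(np.mean(v ** 2))),
                  float(skew(v)), float(kurtosis(v))]
    return np.asarray(feats)                       # one vector per image

bands = [np.random.rand(64, 64) for _ in range(4)]
print(hydro_index(*bands).shape)  # (8,)
```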

17 pages, 2618 KiB  
Article
Deep Learning-Based Improved WCM Technique for Soil Moisture Retrieval with Satellite Images
by G. S. Nijaguna, D. R. Manjunath, Mohamed Abouhawwash, S. S. Askar, D. Khalandar Basha and Jewel Sengupta
Remote Sens. 2023, 15(8), 2005; https://doi.org/10.3390/rs15082005 - 10 Apr 2023
Cited by 43 | Viewed by 2686
Abstract
The water cycle around the globe is significantly impacted by the moisture in the soil. However, finding a quick and practical model to cope with the enormous amount of data is a difficult issue for remote sensing practitioners. Traditional methods of measuring soil moisture are inefficient at large scales and can be replaced by remote sensing techniques. While determining soil moisture, the low revisit frequency of satellites and the lack of images pose a severe challenge to current remote sensing techniques. Therefore, this paper suggests a novel technique for soil moisture retrieval. In the initial phase, image acquisition is performed. Then, vegetation indices (NDVI, GLAI, Green NDVI (GNDVI), and WDRVI) are derived. Further, an improved Water Cloud Model (WCM) is deployed as a vegetation impact rectification scheme. Finally, soil moisture retrieval is performed by a hybrid model combining Deep Max Out Network (DMN) and Bidirectional Gated Recurrent Unit (Bi-GRU) schemes, whose outputs are passed to an enhanced score-level fusion that offers the final results. According to the results, the RMSE of the hybrid classifier (Bi-GRU and DMN) was lower (0.9565) than that of the compared hybrid classifier variants: without the vegetation index, without the water cloud model, and with the traditional water cloud model. The ME values of the HC (Bi-GRU and DMN) were likewise lower (0.728697) than those of the same variants. Among the variants themselves, the HC method without the vegetation index had a lower error (0.8219) than the HC method with the standard water cloud model and the HC method without the water cloud model.
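
The score-level fusion at the end of the pipeline is easy to illustrate: normalize each model's output stream, then mix the two with a weight (a plain weighted fusion under an assumed fixed weight; the paper's enhanced fusion and the DMN/Bi-GRU models themselves are not shown):

```python
# Sketch of score-level fusion of two model output streams.
# The fixed mixing weight alpha is an assumption for illustration.
import numpy as np

def fuse_scores(score_a: np.ndarray, score_b: np.ndarray,
                alpha: float = 0.5) -> np.ndarray:
    # min-max normalize each stream so neither dominates the mix
    def norm(s: np.ndarray) -> np.ndarray:
        return (s - s.min()) / (s.max() - s.min() + 1e-9)
    return alpha * norm(score_a) + (1.0 - alpha) * norm(score_b)

a = np.random.rand(10) * 3.0   # e.g., DMN soil-moisture scores
b = np.random.rand(10)         # e.g., Bi-GRU soil-moisture scores
print(fuse_scores(a, b).round(3))
```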

23 pages, 10170 KiB  
Article
MSFANet: Multiscale Fusion Attention Network for Road Segmentation of Multispectral Remote Sensing Data
by Zhonggui Tong, Yuxia Li, Jinglin Zhang, Lei He and Yushu Gong
Remote Sens. 2023, 15(8), 1978; https://doi.org/10.3390/rs15081978 - 8 Apr 2023
Cited by 9 | Viewed by 2659
Abstract
With the development of deep learning and remote sensing technologies in recent years, many semantic segmentation methods based on convolutional neural networks (CNNs) have been applied to road extraction. However, previous deep learning-based road extraction methods primarily used RGB imagery as the input and did not take advantage of the spectral information contained in hyperspectral imagery. These methods can produce discontinuous outputs caused by objects with spectral signatures similar to roads. In addition, the images obtained from different Earth remote sensing sensors may have different spatial resolutions, increasing the difficulty of joint analysis. This work proposes the Multiscale Fusion Attention Network (MSFANet) to overcome these problems. Compared to traditional road extraction frameworks, the proposed MSFANet fuses information from different spectra at multiple scales. In MSFANet, multispectral remote sensing data are used as an additional input to the network, alongside RGB remote sensing data, to obtain richer spectral information. The Cross-source Feature Fusion Module (CFFM) is used to calibrate and fuse spectral features at different scales, reducing the impact of noise and redundant features from different inputs. The Multiscale Semantic Aggregation Decoder (MSAD) fuses multiscale features and global context information layer by layer during upsampling, reducing information loss during multiscale feature fusion. The proposed MSFANet network was applied to the SpaceNet dataset and self-annotated images from Chongzhou, a representative city in China. Our MSFANet outperforms the baseline HRNet by a large margin of +6.38 IoU and +5.11 F1-score on the SpaceNet dataset, and +3.61 IoU and +2.32 F1-score on the self-annotated (Chongzhou) dataset. Moreover, the effectiveness of MSFANet was also proven by comparative experiments with other studies.
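
The cross-source fusion idea can be sketched compactly: concatenate the RGB-branch and multispectral-branch feature maps, then recalibrate the channels so that noisy or redundant ones are down-weighted (a squeeze-and-excitation-style stand-in for illustration; the actual CFFM is more elaborate):

```python
# Sketch of cross-source feature fusion with channel recalibration.
# Channel counts are placeholders; this is not the MSFANet code.
import torch
import torch.nn as nn

class SimpleCrossFusion(nn.Module):
    def __init__(self, ch_rgb: int, ch_ms: int, ch_out: int):
        super().__init__()
        self.mix = nn.Conv2d(ch_rgb + ch_ms, ch_out, kernel_size=1)
        self.se = nn.Sequential(              # channel attention weights
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch_out, ch_out // 4, 1), nn.ReLU(),
            nn.Conv2d(ch_out // 4, ch_out, 1), nn.Sigmoid(),
        )

    def forward(self, f_rgb: torch.Tensor, f_ms: torch.Tensor) -> torch.Tensor:
        f = self.mix(torch.cat([f_rgb, f_ms], dim=1))
        return f * self.se(f)                 # suppress redundant channels

fuse = SimpleCrossFusion(ch_rgb=64, ch_ms=32, ch_out=64)
out = fuse(torch.randn(1, 64, 128, 128), torch.randn(1, 32, 128, 128))
print(out.shape)  # torch.Size([1, 64, 128, 128])
```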

18 pages, 78897 KiB  
Article
Rust-Style Patch: A Physical and Naturalistic Camouflage Attacks on Object Detector for Remote Sensing Images
by Binyue Deng, Denghui Zhang, Fashan Dong, Junjian Zhang, Muhammad Shafiq and Zhaoquan Gu
Remote Sens. 2023, 15(4), 885; https://doi.org/10.3390/rs15040885 - 5 Feb 2023
Cited by 12 | Viewed by 3418
Abstract
Deep neural networks (DNNs) can improve the image analysis and interpretation capabilities of remote sensing technology by extracting valuable information from images, and they have extensive applications in areas such as military affairs, agriculture, environment, transportation, and urban division. DNNs for object detection can identify and analyze objects in remote sensing images through rich image features, which improves the efficiency of image processing and enables the recognition of large-scale remote sensing images. However, many studies have shown that deep neural networks are vulnerable to adversarial attacks. After small perturbations are added, the generated adversarial examples cause the deep neural network to output undesired results, threatening the normal recognition and detection capabilities of remote sensing systems. According to the application scenario, attacks can be divided into the digital domain and the physical domain. A digital-domain attack directly modifies the original image and is mainly used to simulate attack effects, while a physical-domain attack adds the perturbation to actual objects and captures them with a device, which is closer to the real situation. Attacks in the physical domain are more threatening; however, existing attack methods generally generate patches with a bright style and a large attack range, which are easily noticed by human observers. Our goal is to generate a natural-looking patch with a small perturbation area that can help remote sensing images used in the military evade detection by object detectors while remaining imperceptible to human eyes. To address these issues, we propose a rust-style adversarial patch generation framework based on style transfer. The framework uses a heat-map-based interpretability method to obtain the key areas for target recognition and generates irregularly shaped, natural-looking patches to reduce the disturbed area and alleviate suspicion from humans. To give the generated adversarial examples a higher attack success rate in the physical domain, we further improve the robustness of the adversarial patch through data augmentation methods such as rotation, scaling, and brightness changes, ultimately making it impossible for the object detector to detect the camouflage patch. We attacked the YOLOV3 detection network on multiple datasets. The experimental results show that our model achieves a success rate of 95.7% in the digital domain. We also conducted physical attacks in indoor and outdoor environments and achieved attack success rates of 70.6% and 65.3%, respectively. The structural similarity index metric shows that the adversarial patches generated are more natural than those of existing methods.
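
The robustness recipe described (randomly rotating, scaling, and brightness-jittering the patch during optimization so that it survives physical capture) can be sketched as follows; the right-angle rotations and parameter ranges are simplifying assumptions:

```python
# Sketch of expectation-over-transformation-style patch augmentation:
# each attack step sees a randomly rotated, rescaled, brightness-jittered
# copy of the patch. Ranges are illustrative, not the paper's settings.
import torch
import torch.nn.functional as F

def augment_patch(patch: torch.Tensor) -> torch.Tensor:
    # patch: (C, H, W) with values in [0, 1]
    k = int(torch.randint(0, 4, (1,)))             # random 90-degree turn
    p = torch.rot90(patch, k, dims=(1, 2))
    scale = float(0.8 + 0.4 * torch.rand(1))       # scale in [0.8, 1.2]
    p = F.interpolate(p.unsqueeze(0), scale_factor=scale,
                      mode="bilinear", align_corners=False).squeeze(0)
    p = (p * (0.7 + 0.6 * torch.rand(1))).clamp(0, 1)  # brightness jitter
    return p

patch = torch.rand(3, 50, 50)
print(augment_patch(patch).shape)  # e.g. torch.Size([3, 55, 55])
```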

20 pages, 6096 KiB  
Article
Gaussian Mutation–Spider Monkey Optimization (GM-SMO) Model for Remote Sensing Scene Classification
by Abdul Lateef Haroon Phulara Shaik, Monica Komala Manoharan, Alok Kumar Pani, Raji Reddy Avala and Chien-Ming Chen
Remote Sens. 2022, 14(24), 6279; https://doi.org/10.3390/rs14246279 - 11 Dec 2022
Cited by 45 | Viewed by 2802
Abstract
Scene classification aims to classify various objects and land use classes, such as farms, highways, rivers, and airplanes, in remote sensing images. In recent times, Convolutional Neural Network (CNN) based models have been widely applied in scene classification due to their efficiency in feature representation. However, CNN-based models suffer from overfitting, due to the generation of many features in the convolutional layers, and from imbalanced data. This study proposes the Gaussian Mutation–Spider Monkey Optimization (GM-SMO) model for feature selection to address the overfitting and imbalanced data problems in scene classification. The Gaussian mutation changes the position of a solution after exploration to increase exploitation in feature selection. The GM-SMO model maintains a better tradeoff between exploration and exploitation to select relevant features for superior classification, and it selects unique features to overcome the overfitting and imbalanced data problems. In this manuscript, a Generative Adversarial Network (GAN) is used to generate augmented images, and the AlexNet and Visual Geometry Group (VGG) 19 models are applied to extract features from the augmented images. The GM-SMO model then selects unique features, which are given to a Long Short-Term Memory (LSTM) network for classification. In the results phase, the GM-SMO model achieves an accuracy of 99.46% on the UCM dataset, where the existing transformer-CNN achieved only 98.76%.
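
The Gaussian-mutation step at the core of GM-SMO can be shown in isolation: perturb a candidate's continuous feature scores with Gaussian noise after exploration, then threshold the scores into a binary selection mask (a sketch only; the spider monkey social phases and the fitness evaluation are omitted, and sigma is an assumption):

```python
# Sketch of Gaussian mutation for binary feature selection.
import numpy as np

rng = np.random.default_rng(0)

def gaussian_mutate(position: np.ndarray, sigma: float = 0.15) -> np.ndarray:
    # position: continuous per-feature scores in [0, 1]
    return np.clip(position + rng.normal(0.0, sigma, position.shape), 0.0, 1.0)

def to_mask(position: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    return position > thresh        # boolean mask of selected features

pos = rng.random(20)                # scores for 20 candidate features
mask = to_mask(gaussian_mutate(pos))
print(int(mask.sum()), "features selected")
```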

19 pages, 7044 KiB  
Article
Comparing Gaofen-5, Ground, and Huanjing-1A Spectra for the Monitoring of Soil Salinity with the BP Neural Network Improved by Particle Swarm Optimization
by Xiaofang Jiang and Xian Xue
Remote Sens. 2022, 14(22), 5719; https://doi.org/10.3390/rs14225719 - 12 Nov 2022
Cited by 4 | Viewed by 2234
Abstract
Most of the world's saline soils are found in arid or semiarid areas, where salinization is becoming serious. Ground laboratory hyperspectral data (analytical spectral devices, ASD) as well as spaceborne hyperspectral data, including Gaofen-5 (GF-5) and Huanjing-1A (HJ-1A), enable convenient salinity monitoring. However, the differences among ASD, GF-5, and HJ-1A spectra in salinity monitoring remain unclear. We therefore used ASD, GF-5, and HJ-1A spectra as data sources in Gaotai County of the Hexi Corridor, an area affected by salinization. For a more comprehensive comparison of the three spectral datasets, four band screening methods, namely Pearson correlation coefficient (PCC), principal component analysis (PCA), successive projections algorithm (SPA), and random forest (RF), were used to reduce the dimension of the hyperspectral data. Particle swarm optimization (PSO) was used to improve the random initialization of the weights and thresholds of the back propagation neural network (BPNN) model. The results showed that the root mean square error (RMSE) and coefficient of determination (R2) of models based on ASD and HJ-1A spectra were broadly similar. ASD spectra (RMSE = 4 mS·cm−1, R2 = 0.82) and HJ-1A spectra (RMSE = 2.98 mS·cm−1, R2 = 0.93) performed better than GF-5 spectra (RMSE = 6.45 mS·cm−1, R2 = 0.67) in some cases. The good modelling results of the HJ-1A and GF-5 data confirm that spaceborne hyperspectral imagery has great potential in salinity mapping. We then used HJ-1A and GF-5 hyperspectral imagery to map soil salinity. The results showed that extremely and highly saline soil mainly occurred in grassland and the southern part of the arable land in Gaotai County, while other lands mainly featured non-saline and slightly saline soil. This can serve as a reference for salinity monitoring research.
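
The PSO-improved BPNN idea (treating the network's flattened weights as particle positions and the prediction error as fitness) can be sketched compactly; the data, the tiny 4-8-1 network, and the PSO constants below are placeholders, not the paper's configuration:

```python
# Sketch: PSO searches for good initial weights of a small MLP by
# minimizing RMSE on placeholder data; training would then proceed
# from the best particle found.
import numpy as np

rng = np.random.default_rng(1)
X, y = rng.random((50, 4)), rng.random(50)           # placeholder data

def forward(w: np.ndarray, X: np.ndarray) -> np.ndarray:
    # unpack a flat vector into a 4-8-1 network (49 parameters)
    W1, b1 = w[:32].reshape(4, 8), w[32:40]
    W2, b2 = w[40:48], w[48]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(w: np.ndarray) -> float:
    return float(np.sqrt(np.mean((forward(w, X) - y) ** 2)))  # RMSE

n, dim = 20, 49
pos = rng.normal(0.0, 1.0, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
print("best initialization RMSE:", pbest_f.min())
```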

21 pages, 4843 KiB  
Article
Feature Weighted Attention—Bidirectional Long Short Term Memory Model for Change Detection in Remote Sensing Images
by Raj Kumar Patra, Sujata N. Patil, Przemysław Falkowski-Gilski, Zbigniew Łubniewski and Rachana Poongodan
Remote Sens. 2022, 14(21), 5402; https://doi.org/10.3390/rs14215402 - 28 Oct 2022
Cited by 52 | Viewed by 2231
Abstract
In remote sensing images, change detection (CD) is required in many applications, such as resource management, urban expansion research, land management, and disaster assessment. Various deep learning-based methods have been applied to satellite image analysis for change detection, yet many of them have limitations, including overfitting. This research proposes the Feature Weighted Attention (FWA) in Bidirectional Long Short-Term Memory (BiLSTM) method to reduce overfitting and increase classification performance in change detection applications. Additionally, data usage and accuracy in remote sensing activities, particularly CD, can be significantly improved by a large number of training models based on BiLSTM. Normalization techniques are applied to the input images to enhance quality and reduce differences in pixel values. The AlexNet and VGG16 models were used to extract useful features from the normalized images. The extracted features were then fed into the FWA-BiLSTM model to give more weight to the unique features and increase the efficiency of classification. The attention layer selects the unique features that help to distinguish the changes in the remote sensing images. The experimental results clearly show that the proposed FWA-BiLSTM model achieved better performance in terms of precision (93.43%), recall (93.16%), and overall accuracy (99.26%) when compared with the existing Difference-enhancement Dense-attention Convolutional Neural Network (DDCNN) model.
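
The core of the model, attention that re-weights each timestep's bidirectional state before classification, can be sketched briefly (the feature dimension, sequence length, and the way the CNN features are arranged into a sequence are assumptions):

```python
# Sketch of feature-weighted attention over a BiLSTM: attention scores
# weight the bidirectional states, and the weighted sum is classified.
import torch
import torch.nn as nn

class FWABiLSTMSketch(nn.Module):
    def __init__(self, n_feat: int, hidden: int, n_classes: int):
        super().__init__()
        self.lstm = nn.LSTM(n_feat, hidden, batch_first=True,
                            bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # score each timestep
        self.cls = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)                  # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(out), dim=1)
        return self.cls((w * out).sum(dim=1))  # weighted features -> logits

model = FWABiLSTMSketch(n_feat=512, hidden=64, n_classes=2)  # change/no-change
print(model(torch.randn(4, 16, 512)).shape)  # torch.Size([4, 2])
```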

26 pages, 10393 KiB  
Article
Spatiotemporal Assessment of Satellite Image Time Series for Land Cover Classification Using Deep Learning Techniques: A Case Study of Reunion Island, France
by Naik Nitesh Navnath, Kandasamy Chandrasekaran, Andrzej Stateczny, Venkatesan Meenakshi Sundaram and Prabhavathy Panneer
Remote Sens. 2022, 14(20), 5232; https://doi.org/10.3390/rs14205232 - 19 Oct 2022
Cited by 8 | Viewed by 3396
Abstract
Current Earth observation systems generate massive amounts of satellite image time series (SITS) to keep track of geographical areas over time and to monitor and identify environmental and climate change. Efficiently analyzing such data remains an unresolved issue in remote sensing. In classifying land cover, utilizing SITS rather than a single image may benefit differentiation across classes because of their varied temporal patterns. The aim was to forecast the land cover class of a group of pixels as a multi-class single-label classification problem, given their time series gathered from satellite images. In this article, we exploit SITS to assess the capability of several spatial and temporal deep learning models within the proposed architecture. The models implemented are the bidirectional gated recurrent unit (GRU), temporal convolutional neural network (TCNN), GRU + TCNN, attention on TCNN, and attention on GRU + TCNN. The proposed architecture integrates univariate, multivariate, and pixel coordinates for Reunion Island land cover classification (LCC). The evaluation of the proposed architecture with deep neural networks on the test dataset determined that blending univariate and multivariate inputs with a recurrent neural network and pixel coordinates achieved increased accuracy, with higher F1 scores for each class label. The results suggest that the models also performed exceptionally well when executed in a partitioned manner for the LCC task compared to the temporal models. This study demonstrates that deep learning approaches paired with spatiotemporal SITS data address the difficult task of cost-effectively classifying land cover, contributing to a sustainable environment.
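
Of the temporal models compared, the TCNN branch is the simplest to sketch: dilated 1-D convolutions over each pixel's band-wise time series (the band count, sequence length, and class count below are placeholders):

```python
# Sketch of a temporal CNN for per-pixel satellite image time series:
# dilated Conv1d layers over time, average-pooled into class logits.
import torch
import torch.nn as nn

tcnn = nn.Sequential(
    nn.Conv1d(10, 32, kernel_size=3, padding=1, dilation=1), nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 9),
)
x = torch.randn(8, 10, 24)  # (batch, spectral bands, timesteps): assumed
print(tcnn(x).shape)        # torch.Size([8, 9]): nine assumed classes
```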

16 pages, 16693 KiB  
Article
Collaborative Consistent Knowledge Distillation Framework for Remote Sensing Image Scene Classification Network
by Shiyi Xing, Jinsheng Xing, Jianguo Ju, Qingshan Hou and Xiurui Ding
Remote Sens. 2022, 14(20), 5186; https://doi.org/10.3390/rs14205186 - 17 Oct 2022
Cited by 11 | Viewed by 2021
Abstract
For remote sensing image scene classification tasks, the classification accuracy of small-scale deep neural networks tends to be low and fails to reach the accuracy required in real-world application scenarios. Although large deep neural networks can improve the classification accuracy of remote sensing image scenes to some extent, they also have many more parameters and cannot be deployed on existing embedded devices. The main reason is that large deep networks contain a large number of redundant parameters, which directly hinders application on embedded devices and also reduces classification speed. Considering the contradiction between hardware constraints and classification accuracy requirements, we propose a collaborative consistent knowledge distillation method, called CKD, for improving the classification accuracy of remote sensing image scenes on embedded devices. In essence, our method addresses two aspects: (1) we design a multi-branch fused redundant feature mapping module, which significantly alleviates the parameter redundancy problem, and (2) to improve the classification accuracy of the deep model on embedded devices, we propose a knowledge distillation method based on mutually supervised learning. Experiments were conducted on two remote sensing image classification datasets, SIRI-WHU and NWPU-RESISC45. The results showed that our approach significantly reduced the number of redundant parameters in the deep network, with the parameter count decreasing from 1.73 M to 0.90 M. In addition, compared to student sub-networks obtained with existing knowledge distillation methods, the student sub-networks obtained by CKD significantly improved remote sensing scene classification performance on the two datasets, with average accuracies of 0.943 and 0.916, respectively.
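
The mutually supervised learning component can be sketched as a pair of losses in the style of deep mutual learning: each peer network fits the labels while matching the other's softened predictions (the CKD specifics, including the multi-branch redundant-feature module, are not reproduced):

```python
# Sketch of a mutual distillation loss pair between two peer networks.
import torch
import torch.nn.functional as F

def mutual_kd_losses(logits_a, logits_b, targets, T: float = 2.0):
    ce_a = F.cross_entropy(logits_a, targets)   # supervised terms
    ce_b = F.cross_entropy(logits_b, targets)
    # each network also matches the other's softened prediction
    kl_ab = F.kl_div(F.log_softmax(logits_a / T, dim=1),
                     F.softmax(logits_b.detach() / T, dim=1),
                     reduction="batchmean") * T * T
    kl_ba = F.kl_div(F.log_softmax(logits_b / T, dim=1),
                     F.softmax(logits_a.detach() / T, dim=1),
                     reduction="batchmean") * T * T
    return ce_a + kl_ab, ce_b + kl_ba           # one loss per peer

la, lb = torch.randn(4, 45), torch.randn(4, 45)  # 45 scene classes (assumed)
loss_a, loss_b = mutual_kd_losses(la, lb, torch.randint(0, 45, (4,)))
print(float(loss_a), float(loss_b))
```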

20 pages, 4889 KiB  
Article
Merging Multisatellite and Gauge Precipitation Based on Geographically Weighted Regression and Long Short-Term Memory Network
by Jianming Shen, Po Liu, Jun Xia, Yanjun Zhao and Yi Dong
Remote Sens. 2022, 14(16), 3939; https://doi.org/10.3390/rs14163939 - 13 Aug 2022
Cited by 10 | Viewed by 2631
Abstract
To generate high-quality spatial precipitation estimates, merging rain gauges with a single satellite precipitation product (SPP) is a common approach. However, a single SPP cannot capture the spatial pattern of precipitation well, and its resolution is also too low. This study proposes an integrated framework for merging multisatellite and gauge precipitation. The framework integrates geographically weighted regression (GWR), which improves the spatial resolution of the precipitation estimates, with a long short-term memory (LSTM) network, which improves estimation accuracy by exploiting the spatiotemporal correlation pattern between multisatellite precipitation products and rain gauges. Specifically, the integrated framework was applied to the Han River Basin of China to generate daily precipitation estimates from rain gauge data and four SPPs (TRMM_3B42, CMORPH, PERSIANN-CDR, and GPM-IMERG) during 2007–2018. The results show that the GWR-LSTM framework significantly improves the spatial resolution and accuracy of the precipitation estimates (resolution of 0.05°, correlation coefficient of 0.86, and Kling–Gupta efficiency of 0.6) over the original SPPs (resolution of 0.25° or 0.1°, correlation coefficient of 0.36–0.54, and Kling–Gupta efficiency of 0.30–0.52). Compared with other methods, the correlation coefficient for the whole basin is improved by approximately 4%, and in the lower reaches of the Han River by 15%. In addition, this study demonstrates that merging all of the satellite products with gauge observations is much better than merging only a subset of them.
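
The GWR half of the framework is straightforward to illustrate: at each target cell, fit a local regression with samples down-weighted by distance, then predict from the cell's own satellite covariates (a sketch with a Gaussian kernel and synthetic data; the bandwidth is an assumption, and the LSTM correction stage is not shown):

```python
# Sketch of geographically weighted regression (GWR) prediction.
import numpy as np

def gwr_predict(xy_t, x_t, xy_s, X_s, y_s, bandwidth: float = 0.5) -> float:
    # xy_t: (2,) target coords; x_t: (p,) target covariates (e.g., SPP values)
    # xy_s: (n, 2) sample coords; X_s: (n, p) covariates; y_s: (n,) gauges
    d = np.linalg.norm(xy_s - xy_t, axis=1)
    w = np.exp(-((d / bandwidth) ** 2))          # Gaussian distance decay
    Xd = np.column_stack([np.ones(len(X_s)), X_s])
    XtW = Xd.T * w                               # weight each sample
    beta = np.linalg.pinv(XtW @ Xd) @ XtW @ y_s  # local weighted least squares
    return float(np.concatenate(([1.0], np.atleast_1d(x_t))) @ beta)

rng = np.random.default_rng(0)
xy_s, X_s, y_s = rng.random((30, 2)), rng.random((30, 3)), rng.random(30)
print(gwr_predict(np.array([0.5, 0.5]), X_s[0], xy_s, X_s, y_s))
```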

Other


24 pages, 6106 KiB  
Technical Note
CFRWD-GAN for SAR-to-Optical Image Translation
by Juan Wei, Huanxin Zou, Li Sun, Xu Cao, Shitian He, Shuo Liu and Yuqing Zhang
Remote Sens. 2023, 15(10), 2547; https://doi.org/10.3390/rs15102547 - 12 May 2023
Cited by 7 | Viewed by 2128
Abstract
Synthetic aperture radar (SAR) images have been extensively used in earthquake monitoring, resource surveying, agricultural forecasting, etc. However, interpreting SAR images is challenging because of the severe speckle noise and geometric deformation inherent in radar imaging. SAR-to-optical image translation provides new support for the interpretation of SAR images. Most existing translation networks, which are based on generative adversarial networks (GANs), are vulnerable to partial information loss during the feature reasoning stage, leaving the outlines of translated images blurred and semantic information missing. To solve these problems, the cross-fusion reasoning and wavelet decomposition GAN (CFRWD-GAN) is proposed to preserve structural details and enhance high-frequency band information. Specifically, the cross-fusion reasoning (CFR) structure preserves high-resolution detail features and low-resolution semantic features throughout the feature reasoning process. Moreover, the discrete wavelet decomposition (WD) method is adopted to handle the speckle noise in SAR images and achieve the translation of high-frequency components. Finally, the WD branch is integrated with the CFR branch through an adaptive parameter learning method to translate SAR images into optical ones. Extensive experiments on two publicly available datasets, QXS-SAROPT and SEN1-2, demonstrate the better translation performance of the proposed CFRWD-GAN compared to five state-of-the-art models.
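
The wavelet-decomposition branch rests on a standard 2-D discrete wavelet transform, which splits a SAR tile into a low-frequency approximation and high-frequency detail sub-bands where speckle concentrates (a PyWavelets sketch; the 'haar' wavelet and the 0.5 attenuation are illustrative choices, not the paper's settings):

```python
# Sketch: 2-D DWT of a SAR tile into approximation/detail sub-bands,
# then reconstruction with attenuated detail coefficients.
import numpy as np
import pywt

sar = np.random.rand(256, 256).astype(np.float32)  # placeholder SAR tile
cA, (cH, cV, cD) = pywt.dwt2(sar, "haar")          # approx + H/V/D details
print(cA.shape, cH.shape)                          # (128, 128) (128, 128)
smoothed = pywt.idwt2((cA, (0.5 * cH, 0.5 * cV, 0.5 * cD)), "haar")
print(smoothed.shape)                              # (256, 256)
```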
