Active Learning Methods for Remote Sensing Data Processing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 12624

Special Issue Editors


Guest Editor
School of Electronics and Information, Center for Earth Observation Technology, Northwestern Polytechnical University, Xi’an 710129, China
Interests: hyperspectral remote sensing data processing; machine vision and image processing; neural network and machine learning

Guest Editor
School of Computer, Centre for Quantum Computation and Intelligent Systems, Wuhan University, Wuhan, China
Interests: big data mining management and analysis; multimedia technology and big data analysis; multimedia signal processing; machine learning and intelligent interaction; computer vision; computer applications; pattern recognition; artificial intelligence; data mining and analysis; audio and video processing; intelligent computing

Guest Editor
National Laboratory of Radar Signal Processing, Xidian University, Xi’an 710126, China
Interests: radar target detection and recognition; SAR image processing; radar signal processing; machine learning

Guest Editor
College of Engineering & Computer Science, Australian National University, Canberra, Australia
Interests: saliency detection; uncertainty estimation; generative model; image processing; machine learning

Special Issue Information

Dear Colleagues,

Supervised (or semi-supervised) learning algorithms and learnable systems, such as support vector machines (SVMs), multilayer feedforward neural networks (MLFNN), ensemble-based learning, and deep learning methods, have been developed for various remote sensing data processing tasks. However, the performance of learning algorithms (especially deep learning ones) strongly depends on the availability of training data, which, in remote sensing, are typically obtained through very cost- and labor-intensive field work or time-consuming visual image interpretation. To enhance the generalization capabilities of the above-mentioned learning systems, active learning (AL) has evolved as a key concept: it guides the annotation of the training dataset by querying the most informative samples, so that learnable systems can be trained with a small number of samples.
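
To make the querying step concrete, the following minimal sketch (assuming a scikit-learn-style classifier with predict_proba, an unlabeled pool as a NumPy array, and a labeling callback; the function names are illustrative and not taken from any paper in this Issue) selects the least-confident unlabeled samples and adds them to the training set.

```python
import numpy as np

def least_confidence_query(model, unlabeled_pool, batch_size=10):
    """Pick the unlabeled samples the model is least confident about.

    `model` is assumed to expose a scikit-learn-style predict_proba();
    `unlabeled_pool` is an (n_samples, n_features) array.
    """
    proba = model.predict_proba(unlabeled_pool)   # (n_samples, n_classes)
    confidence = proba.max(axis=1)                # top-class probability
    # Lowest confidence = most informative under this simple criterion.
    return np.argsort(confidence)[:batch_size]

def active_learning_loop(model, X_labeled, y_labeled, X_pool, oracle, rounds=5):
    """Minimal pool-based AL loop: train, query, annotate, repeat."""
    for _ in range(rounds):
        model.fit(X_labeled, y_labeled)
        idx = least_confidence_query(model, X_pool)
        new_y = oracle(X_pool[idx])               # e.g. manual labeling
        X_labeled = np.vstack([X_labeled, X_pool[idx]])
        y_labeled = np.concatenate([y_labeled, new_y])
        X_pool = np.delete(X_pool, idx, axis=0)
    return model, X_labeled, y_labeled
```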

We would like to invite you to contribute to this Special Issue titled “Active Learning Methods for Remote Sensing Data Processing”, which will gather insights and contributions in the field of active learning for remote sensing data processing (RSDP). Original research articles, reviews, and novel remote sensing datasets are welcome. Papers may focus on topics that include, but are not limited to, the following:

  • Dataset optimization: Uncertainty-based, influence-based, intrinsic-distribution-based, or structure-based methods, as well as combinations thereof, together with remote sensing datasets and benchmarks for RSDP.
  • Learnable systems: SVM, MLFNN, RBF, ensemble systems, deep learning networks, skip-connection networks, etc.
  • Learning methods: Supervised, unsupervised, semi-supervised, few-shot, reinforcement, transfer, or deep learning methods.
  • Active learning for remote sensing applications: New AL methods or techniques for RS applications such as visual imaging, microwave imaging radar, infrared imaging, THz imaging, and multi- or hyperspectral image processing.

Original research articles, reviews, novel remote sensing datasets, and new RSDP applications with AL all fit the scope of this Special Issue.

Prof. Dr. Mingyi He
Prof. Dr. Bo Du
Prof. Dr. Lan Du
Dr. Jing Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • active learning
  • remote sensing data processing
  • data optimization
  • learnable systems
  • machine learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research


19 pages, 16206 KiB  
Article
EFP-Net: A Novel Building Change Detection Method Based on Efficient Feature Fusion and Foreground Perception
by Renjie He, Wenyao Li, Shaohui Mei, Yuchao Dai and Mingyi He
Remote Sens. 2023, 15(22), 5268; https://doi.org/10.3390/rs15225268 - 7 Nov 2023
Viewed by 1418
Abstract
Over the past decade, deep learning techniques have significantly advanced the field of building change detection in remote sensing imagery. However, existing deep learning-based approaches often encounter limitations in complex remote sensing scenarios, resulting in false detections and detail loss. This paper introduces EFP-Net, a novel building change detection approach that resolves the mentioned issues by utilizing effective feature fusion and foreground perception. EFP-Net comprises three main modules: the feature extraction module (FEM), the spatial–temporal correlation module (STCM), and the residual guidance module (RGM), which jointly enhance the fusion of bi-temporal features and hierarchical features. Specifically, the STCM utilizes the temporal change duality prior and multi-scale perception to augment the 3D convolution modeling capability for bi-temporal feature variations. Additionally, the RGM employs the higher-layer prediction map to guide shallow-layer features, reducing the introduction of noise during the hierarchical feature fusion process. Furthermore, a dynamic focal loss with foreground awareness is developed to mitigate the class imbalance problem. Extensive experiments on the widely adopted WHU-BCD, LEVIR-CD, and CDD datasets demonstrate that the proposed EFP-Net significantly improves accuracy in building change detection.
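
For context on the class-imbalance issue mentioned above, here is a minimal PyTorch sketch of the standard binary focal loss; the paper's dynamic, foreground-aware variant is not specified in the abstract, so this shows only the baseline form such a loss builds on.

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Standard binary focal loss: down-weights easy examples so the rare
    changed-pixel (foreground) class contributes more to the gradient.

    logits, targets: float tensors of the same shape; targets in {0, 1}.
    """
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)          # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()
```
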
(This article belongs to the Special Issue Active Learning Methods for Remote Sensing Data Processing)

26 pages, 10301 KiB  
Article
Semi-Supervised Person Detection in Aerial Images with Instance Segmentation and Maximum Mean Discrepancy Distance
by Xiangqing Zhang, Yan Feng, Shun Zhang, Nan Wang, Shaohui Mei and Mingyi He
Remote Sens. 2023, 15(11), 2928; https://doi.org/10.3390/rs15112928 - 4 Jun 2023
Cited by 6 | Viewed by 2176
Abstract
Detecting sparse, small, lost persons that occupy only a few pixels in high-resolution aerial images is an important and difficult mission for search and rescue (SaR) systems, in which accurate monitoring and intelligent co-rescuing play a vital role. However, many problems have not been effectively solved in existing remote-vision-based SaR systems, such as the shortage of person samples in SaR scenarios and the low tolerance of small objects for bounding boxes. To address these issues, an instance-segmentation-based copy-paste mechanism (ISCP) combined with semi-supervised object detection (SSOD) and a maximum mean discrepancy (MMD) distance is proposed, which can provide highly robust, multi-task, and efficient aerial-based person detection for the prototype SaR system. Specifically, numerous pseudo-labels are obtained by accurately segmenting the instances of synthetic ISCP samples to obtain their boundaries. The SSOD trainer then uses soft weights to balance the prediction entropy of the loss function between the ground truth and unreliable labels. Moreover, a novel MMD-based evaluation metric for anchor-based detectors is proposed to elegantly compute the IoU of the bounding boxes. Extensive experiments and ablation studies on the Heridal and optimized public datasets demonstrate that our approach is effective and achieves state-of-the-art person detection performance in aerial images.
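
For readers unfamiliar with the MMD term used above, the snippet below is a minimal PyTorch sketch of the kernel maximum mean discrepancy between two sample sets; how the paper adapts it into a bounding-box metric for anchor-based detectors is not reproduced here.

```python
import torch

def gaussian_mmd(x, y, sigma=1.0):
    """Biased estimate of the squared maximum mean discrepancy between two
    sample sets with an RBF kernel.

    x: (n, d) tensor, y: (m, d) tensor. Small values mean the two empirical
    distributions look similar to the kernel.
    """
    def rbf(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return rbf(x, x).mean() + rbf(y, y).mean() - 2 * rbf(x, y).mean()
```
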
(This article belongs to the Special Issue Active Learning Methods for Remote Sensing Data Processing)

19 pages, 5670 KiB  
Article
Unsupervised Image Dedusting via a Cycle-Consistent Generative Adversarial Network
by Guxue Gao, Huicheng Lai and Zhenhong Jia
Remote Sens. 2023, 15(5), 1311; https://doi.org/10.3390/rs15051311 - 27 Feb 2023
Cited by 5 | Viewed by 2127
Abstract
In sand–dust weather, image quality is seriously degraded, which limits advanced remote sensing applications that rely on such imagery. To improve image quality and enhance dedusting performance, we propose an end-to-end cyclic generative adversarial network (D-CycleGAN) for image dedusting, which does not require pairs of sand–dust images and corresponding ground truth images for training; in other words, the network is trained in an unpaired way. Specifically, we designed a jointly optimized guided module (JOGM), comprising the sandy guided synthesis module (SGSM) and the clean guided synthesis module (CGSM), which jointly guide the generator through the corresponding discriminators to reduce color distortion and artifacts; JOGM can significantly improve image quality. We also propose a hidden-layer adversarial branch that applies adversarial supervision from inside the network, which better supervises the hidden layers and further improves the quality of the generated images. In addition, we improve the original CycleGAN loss function and propose a dual-scale semantic perception loss in feature space and a color identity-preserving loss in pixel space to constrain the network. Extensive experiments demonstrate that the proposed model effectively removes sand dust, yields better clarity and image quality, and outperforms state-of-the-art techniques. Moreover, the proposed method helps target detection algorithms improve their detection accuracy, and it generalizes well to the enhancement of underwater and hazy images.
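
The unpaired training mentioned above rests on the standard CycleGAN cycle-consistency term; a minimal PyTorch sketch follows. The generator modules are assumed callables, and the paper's additional JOGM, hidden-layer adversarial, perception, and identity losses are omitted.

```python
import torch.nn as nn

def cycle_consistency_loss(real_dusty, real_clean, G_dust2clean, G_clean2dust, lam=10.0):
    """Textbook CycleGAN cycle term: translate each image to the other domain
    and back, then penalize the reconstruction error, so no paired
    dusty/clean images are needed.
    """
    l1 = nn.L1Loss()
    rec_dusty = G_clean2dust(G_dust2clean(real_dusty))   # dusty -> clean -> dusty
    rec_clean = G_dust2clean(G_clean2dust(real_clean))   # clean -> dusty -> clean
    return lam * (l1(rec_dusty, real_dusty) + l1(rec_clean, real_clean))
```
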
(This article belongs to the Special Issue Active Learning Methods for Remote Sensing Data Processing)

21 pages, 8362 KiB  
Article
AI-TFNet: Active Inference Transfer Convolutional Fusion Network for Hyperspectral Image Classification
by Jianing Wang, Linhao Li, Yichen Liu, Jinyu Hu, Xiao Xiao and Bo Liu
Remote Sens. 2023, 15(5), 1292; https://doi.org/10.3390/rs15051292 - 26 Feb 2023
Cited by 5 | Viewed by 1975
Abstract
The realization of efficient classification with limited labeled samples is a critical task in hyperspectral image classification (HSIC). Convolutional neural networks (CNNs) have achieved remarkable advances by considering spectral–spatial features simultaneously, but conventional patch-wise CNNs usually lead to redundant computations. Therefore, in this paper, we established a novel active inference transfer convolutional fusion network (AI-TFNet) for HSI classification. First, in order to reveal and merge the local low-level and global high-level spectral–spatial contextual features at different stages of extraction, an end-to-end fully hybrid multi-stage transfer fusion network (TFNet) was designed to improve classification performance and efficiency. Meanwhile, an active inference (AI) pseudo-label propagation algorithm for spatially homogeneous samples was constructed using the homogeneous pre-segmentation of the proposed TFNet. In addition, a confidence-augmented pseudo-label loss (CapLoss) was proposed in order to define the confidence of a pseudo-label with an adaptive threshold in homogeneous regions when acquiring pseudo-label samples; this adaptively infers a pseudo-label by actively augmenting the homogeneous training samples based on their spatial homogeneity and spectral continuity. Experiments on three real HSI datasets show that the proposed method achieves competitive performance and efficiency compared to several related state-of-the-art methods.
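
As a rough illustration of confidence-thresholded pseudo-labeling over homogeneous regions, the sketch below applies a simple per-region adaptive threshold; the exact rule used in the paper's CapLoss is not given in the abstract, so this is only an illustrative stand-in with hypothetical inputs.

```python
import numpy as np

def select_pseudo_labels(probs, region_ids, base_threshold=0.9):
    """Keep pseudo-labels whose confidence exceeds a per-region threshold.

    probs: (n_pixels, n_classes) softmax outputs; region_ids: (n_pixels,)
    homogeneous-region index from a pre-segmentation. The threshold is
    relaxed toward each region's mean confidence (a simple adaptive rule).
    """
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    keep = np.zeros(len(conf), dtype=bool)
    for r in np.unique(region_ids):
        mask = region_ids == r
        thr = min(base_threshold, conf[mask].mean())   # adaptive per region
        keep[mask] = conf[mask] >= thr
    return labels[keep], np.flatnonzero(keep)
```
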
(This article belongs to the Special Issue Active Learning Methods for Remote Sensing Data Processing)

27 pages, 25623 KiB  
Article
A Transfer-Based Framework for Underwater Target Detection from Hyperspectral Imagery
by Zheyong Li, Jinghua Li, Pei Zhang, Lihui Zheng, Yilong Shen, Qi Li, Xin Li and Tong Li
Remote Sens. 2023, 15(4), 1023; https://doi.org/10.3390/rs15041023 - 13 Feb 2023
Cited by 5 | Viewed by 2648
Abstract
The detection of underwater targets through hyperspectral imagery is a relatively novel topic, as the assumption of target–background independence is no longer valid, making it difficult to directly detect underwater targets using land target information. Meanwhile, deep-learning-based methods have faced challenges regarding the availability of training datasets, especially in underwater conditions. To solve these problems, a transfer-based framework is proposed in this paper, which exploits synthetic data to train deep-learning models and transfers them to real-world applications. However, the transfer becomes challenging due to the disparity in distribution between real and synthetic data. To address this dilemma, the proposed framework, named the transfer-based underwater target detection framework (TUTDF), first divides the domains using the depth information, then trains models for the different domains and develops an adaptive module to determine which model to use. Meanwhile, a spatial–spectral process is applied prior to detection, which is devoted to eliminating the adverse influence of background noise. Since there is no publicly available hyperspectral underwater target dataset and most existing methods only run on simulated data, we conducted expensive experiments to obtain datasets with accurate depths and used them for validation. Extensive experiments verify the effectiveness and efficiency of TUTDF in comparison with traditional methods.
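
The framework's idea of dividing the problem by depth and routing each scene to a depth-specific model can be pictured with the small sketch below; the detectors, depth ranges, and the learned adaptive module itself are assumptions made purely for illustration.

```python
def pick_detector(depth_m, detectors_by_depth):
    """Route a scene to the detector trained for its depth range.

    detectors_by_depth: list of ((d_min, d_max), model) pairs covering the
    depth domains used during training on synthetic data.
    """
    for (d_min, d_max), model in detectors_by_depth:
        if d_min <= depth_m < d_max:
            return model
    raise ValueError(f"No detector trained for depth {depth_m} m")
```
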
(This article belongs to the Special Issue Active Learning Methods for Remote Sensing Data Processing)

Other


19 pages, 4689 KiB  
Technical Note
Improved Cycle-Consistency Generative Adversarial Network-Based Clutter Suppression Methods for Ground-Penetrating Radar Pipeline Data
by Yun Lin, Jiachun Wang, Deyun Ma, Yanping Wang and Shengbo Ye
Remote Sens. 2024, 16(6), 1043; https://doi.org/10.3390/rs16061043 - 15 Mar 2024
Viewed by 1196
Abstract
Ground-penetrating radar (GPR) is a widely used technology for pipeline detection due to its fast detection speed and high resolution. However, the presence of complex underground media often results in strong ground clutter interference in the collected B-scan echoes, significantly impacting detection performance. To address this issue, this paper proposes an improved clutter suppression network based on a cycle-consistency generative adversarial network (CycleGAN). By employing the concept of style transfer, the network aims to convert cluttered images into clutter-free images. This paper introduces multiple residual blocks into the generator and discriminator to improve the feature expression ability of the deep learning model. Additionally, the discriminator incorporates the squeeze-and-excitation (SE) module, a channel attention mechanism, to further enhance the model's ability to extract features from clutter-free images. To evaluate the effectiveness of the proposed network in clutter suppression, both simulation and measurement data are utilized to compare its performance against traditional clutter suppression methods and deep learning-based methods, respectively. On the measured data, the improvement factor (Im) of the proposed method reaches 40.68 dB, a significant improvement over the previous network.
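
For context on the reported 40.68 dB figure, one common definition of the clutter-suppression improvement factor is the output signal-to-clutter ratio divided by the input signal-to-clutter ratio, expressed in dB; a minimal sketch follows (the paper's exact definition may differ, and the input regions are hypothetical).

```python
import numpy as np

def improvement_factor_db(signal_in, clutter_in, signal_out, clutter_out):
    """Improvement factor: output signal-to-clutter ratio over input
    signal-to-clutter ratio, in dB.

    Each argument is an array of samples taken from the corresponding
    target or clutter region of the B-scan, before and after suppression.
    """
    scr_in = np.mean(np.abs(signal_in) ** 2) / np.mean(np.abs(clutter_in) ** 2)
    scr_out = np.mean(np.abs(signal_out) ** 2) / np.mean(np.abs(clutter_out) ** 2)
    return 10.0 * np.log10(scr_out / scr_in)
```
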
(This article belongs to the Special Issue Active Learning Methods for Remote Sensing Data Processing)
