Remote Sensing Image Scene Classification Meets Artificial Intelligence

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 May 2023) | Viewed by 7977

Special Issue Editors

Guest Editor
School of Geography, Geomatics and Planning, Jiangsu Normal University, Xuzhou 221116, China
Interests: object-oriented image processing; scene classification; deep learning

Guest Editor
Groupe Signal et Image, Université de Bordeaux, IMS, UMR 5218, Bordeaux, France
Interests: signal and image processing; pattern recognition; texture modeling; hyperspectral image classification; SAR image processing; high-resolution remote sensing; image analysis

Special Issue Information

Dear Colleagues,

The objective of remote sensing image scene classification is to assign a semantic category to a remote sensing image according to its content. It has a wide range of applications, including remote sensing data retrieval, agriculture, forestry, transportation, and environmental monitoring. Meanwhile, artificial intelligence (AI) has become a mainstream tool that has been successfully deployed across industries, driven by the rise of massive data and advances in algorithms and processing capacity. The development of innovative methods based on the integration of multisource, multiresolution, and multitemporal images offers promising prospects for remote sensing scene classification. This Special Issue focuses on advances in remote sensing scene classification using cross-domain, multisource, and multimodal data together with new methods such as self-supervised learning, transfer learning, meta-learning, and vision transformers. Topics of interest include, but are not limited to:

  • Multisource/task remote sensing scene classification;
  • Multi/cross-domain scene classification;
  • Domain-adaptive scene classification;
  • Zero-/one-/few-shot learning;
  • Weakly/semi-supervised learning;
  • Noisy label learning;
  • Self-supervised learning;
  • Pretraining from computer vision to remote sensing;
  • Benchmarking datasets and codes;
  • Remote sensing applications. 
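
As a point of reference for the task itself, below is a minimal, illustrative transfer-learning sketch for scene classification: a backbone pretrained on natural images is fine-tuned to assign one semantic category per remote sensing image patch. The torchvision backbone, directory layout, and class count are assumptions chosen for illustration and are not tied to any particular submission or dataset.

```python
# Minimal sketch of a remote sensing scene classification pipeline:
# a pretrained CNN from computer vision is fine-tuned to assign one
# semantic category per image patch. Paths and class count are
# illustrative placeholders, not from any specific dataset.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 30  # hypothetical number of scene categories

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory layout: one sub-folder per scene category.
train_set = datasets.ImageFolder("scenes/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Transfer learning: reuse ImageNet weights, replace the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```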

Dr. Junshi Xia
Dr. Erzhu Li
Dr. Lionel Bombrun
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • image scene
  • artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (3 papers)


Research

21 pages, 6799 KiB  
Article
FEPVNet: A Network with Adaptive Strategies for Cross-Scale Mapping of Photovoltaic Panels from Multi-Source Images
by Buyu Su, Xiaoping Du, Haowei Mu, Chen Xu, Xuecao Li, Fang Chen and Xiaonan Luo
Remote Sens. 2023, 15(9), 2469; https://doi.org/10.3390/rs15092469 - 8 May 2023
Cited by 3 | Viewed by 1970
Abstract
The world is transitioning to renewable energy, with photovoltaic (PV) solar power being one of the most promising energy sources. Large-scale PV mapping provides the most up-to-date and accurate PV geospatial information, which is crucial for planning and constructing PV power plants, optimizing energy structure, and assessing the ecological impact of PVs. However, previous methods of PV extraction relied on simple models and single data sources, which could not accurately obtain PV geospatial information. Therefore, we propose the Filter-Embedded Network (FEPVNet), which embeds high-pass and low-pass filters and Polarized Self-Attention (PSA) into a High-Resolution Network (HRNet) to improve its noise resistance and adaptive feature extraction capabilities, ultimately enhancing the accuracy of PV extraction. We also introduce three data migration strategies by combining Sentinel-2, Google-14, and Google-16 images in varying proportions and transferring the FEPVNet trained on Sentinel-2 images to Gaofen-2 images, which improves the generalization performance of models trained on a single data source for extracting PVs in images of different scales. Our model improvement experiments demonstrate that the Intersection over Union (IoU) of FEPVNet in segmenting China PVs in Sentinel-2 images reaches 88.68%, a 2.37% increase compared to the HRNet. Furthermore, we use FEPVNet and the optimal migration strategy to extract photovoltaics across scales, achieving a precision of 94.37%. In summary, this study proposes the FEPVNet model with adaptive strategies for extracting PVs from multiple image sources, with significant potential for application in large-scale PV mapping.
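
To make the filter-embedding idea concrete, the sketch below shows one possible way to place fixed high-pass and low-pass filters in front of a segmentation backbone as depthwise convolutions. The kernel choices and wiring are assumptions for illustration only and do not reproduce the authors' FEPVNet implementation.

```python
# Illustrative sketch only: one way to embed fixed high-/low-pass filters
# in front of a segmentation backbone, in the spirit of the filter-embedding
# idea. Kernel choices and wiring are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FixedFilterBlock(nn.Module):
    """Applies a low-pass (Gaussian-like) and a high-pass (Laplacian-like)
    filter per channel and concatenates them with the raw input."""

    def __init__(self, channels: int):
        super().__init__()
        low = torch.tensor([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 16.0
        high = torch.tensor([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])
        # One depthwise kernel per channel, kept fixed (not trained).
        self.register_buffer("low", low.expand(channels, 1, 3, 3).clone())
        self.register_buffer("high", high.expand(channels, 1, 3, 3).clone())
        self.channels = channels

    def forward(self, x):
        smooth = F.conv2d(x, self.low, padding=1, groups=self.channels)
        edges = F.conv2d(x, self.high, padding=1, groups=self.channels)
        return torch.cat([x, smooth, edges], dim=1)

# Example: filtered input (3 -> 9 channels) feeding any backbone stem.
block = FixedFilterBlock(channels=3)
out = block(torch.randn(1, 3, 256, 256))
print(out.shape)  # torch.Size([1, 9, 256, 256])
```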

23 pages, 6097 KiB  
Article
Continual Contrastive Learning for Cross-Dataset Scene Classification
by Rui Peng, Wenzhi Zhao, Kaiyuan Li, Fengcheng Ji and Caixia Rong
Remote Sens. 2022, 14(20), 5105; https://doi.org/10.3390/rs14205105 - 12 Oct 2022
Cited by 5 | Viewed by 2841
Abstract
With the development of remote sensing technology, the continuing accumulation of remote sensing data has brought great challenges to the remote sensing field. Although multiple deep-learning-based classification methods have made great progress in scene classification tasks, they are still unable to address the problem of model learning continuously. Facing the constantly updated remote sensing data stream, there is an inevitable problem of forgetting historical information in the model training, which leads to catastrophic forgetting. Therefore, we propose a continual contrastive learning method based on knowledge distillation and contrastive learning in this paper, which is named the Continual Contrastive Learning Network (CCLNet). To overcome the problem of knowledge forgetting, we first designed a knowledge distillation module based on a spatial feature which contains sufficient historical knowledge. The spatial and category-level knowledge distillation enables the model to effectively preserve the already learned knowledge in the current scene classification model. Then, we introduced contrastive learning by leveraging the comparison of augmented samples and minimizing the distance in the feature space to further enhance the extracted feature during the continual learning process. To evaluate the performance of our designed model on streaming remote sensing scene data, we performed three steps of continuous learning experiments on three datasets, the AID, RSI, and NWPU datasets, and simulated the streaming of remote sensing scene data with the aggregate of the three datasets. We also compared our method with other benchmark continual learning models. The experimental results demonstrate that our method achieved superior performance in the continuous scene classification task.
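
For readers unfamiliar with the two ingredients combined in CCLNet, the sketch below illustrates a generic feature-level distillation loss (against outputs of a frozen previous-task model) alongside a simple NT-Xent-style contrastive loss over two augmented views. The loss weighting, temperature, and tensor shapes are assumptions; the authors' spatial and category-level distillation design is not reproduced here.

```python
# Illustrative sketch only: generic feature-level knowledge distillation
# plus a contrastive term on augmented views. Weights, temperature, and
# shapes are assumptions, not the CCLNet implementation.
import torch
import torch.nn.functional as F

def distillation_loss(feat_new, feat_old):
    """Feature-level KD: keep current features close to the frozen old model's."""
    return F.mse_loss(feat_new, feat_old.detach())

def contrastive_loss(z1, z2, temperature=0.5):
    """NT-Xent-style loss over two augmented views of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # pairwise similarities
    targets = torch.arange(z1.size(0))        # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Dummy tensors standing in for model outputs (batch of 8).
feat_new, feat_old = torch.randn(8, 64, 7, 7), torch.randn(8, 64, 7, 7)
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
total = contrastive_loss(z1, z2) + 1.0 * distillation_loss(feat_new, feat_old)
print(total.item())
```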

25 pages, 6494 KiB  
Article
Remote Sensing Image Scene Classification via Self-Supervised Learning and Knowledge Distillation
by Yibo Zhao, Jianjun Liu, Jinlong Yang and Zebin Wu
Remote Sens. 2022, 14(19), 4813; https://doi.org/10.3390/rs14194813 - 27 Sep 2022
Cited by 9 | Viewed by 2280
Abstract
The main challenges of remote sensing image scene classification are extracting discriminative features and making full use of the training data. The current mainstream deep learning methods usually only use the hard labels of the samples, ignoring the potential soft labels and natural labels. Self-supervised learning can take full advantage of natural labels. However, it is difficult to train a self-supervised network due to the limitations of the dataset and computing resources. We propose a self-supervised knowledge distillation network (SSKDNet) to solve the aforementioned challenges. Specifically, the feature maps of the backbone are used as supervision signals, and the branch learns to restore the low-level feature maps after background masking and shuffling. The “dark knowledge” of the branch is transferred to the backbone through knowledge distillation (KD). The backbone and branch are optimized together in the KD process without independent pre-training. Moreover, we propose a feature fusion module to fuse feature maps dynamically. In general, SSKDNet can make full use of soft labels and has excellent discriminative feature extraction capabilities. Experimental results conducted on three datasets demonstrate the effectiveness of the proposed approach.
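
As background on the distillation mechanism referenced here, the sketch below shows a generic temperature-scaled soft-label distillation loss, the standard way of transferring "dark knowledge" between networks. The temperature and class count are assumptions; the masking/shuffling pretext task and feature fusion module of SSKDNet are not reproduced.

```python
# Illustrative sketch only: temperature-scaled soft-label distillation,
# the generic mechanism for transferring "dark knowledge" between a
# teacher (branch) and a student (backbone). Values are assumptions.
import torch
import torch.nn.functional as F

def soft_label_kd(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened class distributions."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    # T^2 rescaling keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

# Dummy logits for a batch of 8 images over 30 hypothetical scene classes.
student = torch.randn(8, 30)
teacher = torch.randn(8, 30)
print(soft_label_kd(student, teacher).item())
```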
