
Advanced Deep Learning Strategies for the Analysis of Remote Sensing Images - 2nd Edition

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 3092

Special Issue Editors


Guest Editor
Department of Agricultural Sciences, University of Naples Federico II, Naples, Italy
Interests: multi/hyperspectral remote sensing; image processing and analysis; machine learning; pattern recognition; computer vision

Guest Editor
Applied Computer Science Department, College of Applied Computer Science, King Saud University, Riyadh 11543, Saudi Arabia
Interests: remote sensing; image processing and analysis

Special Issue Information

Dear Colleagues,

Over the last two decades, remote sensing (RS) has become an essential technology for monitoring urban, atmospheric, and ecological changes. The increased availability of satellites and airborne sensors with different spatial and spectral resolutions has made this technology a key component in decision making. In addition to these traditional platforms, a new era has recently been opened by the adoption of unmanned aerial vehicles (UAVs) for diverse applications such as policing, precision farming, and urban planning.

This growth in observation capability, however, introduces considerable challenges in information extraction: processing the massive data collected by these diverse platforms is impractical and ineffective with traditional image analysis methodologies. This calls for the adoption of powerful techniques that can extract reliable and useful information. In this context, deep learning (DL) strategies have shown promise in addressing the challenging needs of the RS community. DL was introduced decades ago, when the first steps toward building artificial neural networks were taken. However, due to limited processing resources, it did not reach cutting-edge success in data representation and classification tasks until the recent appearance of high-performance computing facilities. This, in turn, enabled the design of sophisticated deep neural architectures that have demonstrated groundbreaking performance on many problems.

This Special Issue welcomes papers that explore novel and challenging topics for the analysis of remote sensing images acquired with diverse platforms. We welcome topics including but not limited to the following:

  • Semantic segmentation;
  • Domain adaptation from single and multiple sources;
  • Continual learning;
  • Exploration of the relationship between natural language and remote sensing images (bidirectional text to image retrieval, image captioning, visual question answering);
  • Crowd estimation in UAV imagery;
  • Image generation and conversion using generative adversarial networks.

The first edition of this Special Issue can be found at:

https://www.mdpi.com/journal/remotesensing/special_issues/advanced_deep_learning

Dr. Yakoub Bazi
Dr. Edoardo Pasolli
Dr. Mohamed Al-Rahal
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • multispectral/hyperspectral/UAV imagery
  • natural language and remote sensing
  • classification, restoration, super-resolution, retrieval, change detection
  • convolutional neural networks (CNNs)
  • generative adversarial networks (GANs)

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

16 pages, 2575 KiB  
Article
CapERA: Captioning Events in Aerial Videos
by Laila Bashmal, Yakoub Bazi, Mohamad Mahmoud Al Rahhal, Mansour Zuair and Farid Melgani
Remote Sens. 2023, 15(8), 2139; https://doi.org/10.3390/rs15082139 - 18 Apr 2023
Cited by 2 | Viewed by 1853
Abstract
In this paper, we introduce the CapERA dataset, which upgrades the Event Recognition in Aerial Videos (ERA) dataset to aerial video captioning. The newly proposed dataset aims to advance visual–language-understanding tasks for UAV videos by providing each video with diverse textual descriptions. To build the dataset, 2864 aerial videos are manually annotated with a caption that includes information such as the main event, object, place, action, numbers, and time. More captions are automatically generated from the manual annotation to take into account as much as possible the variation in describing the same video. Furthermore, we propose a captioning model for the CapERA dataset to provide benchmark results for UAV video captioning. The proposed model is based on the encoder–decoder paradigm with two configurations to encode the video. The first configuration encodes the video frames independently by an image encoder; a temporal attention module is then added on top to consider the temporal dynamics between features derived from the video frames. In the second configuration, we directly encode the input video using a video encoder that employs factorized space–time attention to capture the dependencies within and between the frames. For generating captions, a language decoder autoregressively produces the captions from the visual tokens. The experimental results under different evaluation criteria show the challenges of generating captions from aerial videos. We expect that the introduction of CapERA will open interesting new research avenues for integrating natural language processing (NLP) with UAV video understanding.
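The first encoder configuration described in the abstract (per-frame features from an image encoder, pooled by a temporal attention module into a video-level representation) can be sketched as follows. This is a minimal NumPy illustration under assumed feature shapes, not the authors' implementation; the scoring vector `w` stands in for the module's learned attention parameters, and the frame features are random placeholders for real image-encoder outputs.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention_pool(frame_feats, w):
    """Pool per-frame features into one video-level feature.

    frame_feats: (T, D) array, one D-dim feature per frame.
    w: (D,) scoring vector (learned in a real model; random here).
    Returns the attention-weighted feature (D,) and the weights (T,).
    """
    scores = frame_feats @ w            # (T,) relevance score per frame
    alpha = softmax(scores)             # attention weights, sum to 1
    video_feat = alpha @ frame_feats    # (D,) weighted sum over frames
    return video_feat, alpha

# Toy example: 8 frames, 16-dim features (placeholder values).
rng = np.random.default_rng(0)
T, D = 8, 16
feats = rng.standard_normal((T, D))
w = rng.standard_normal(D)
video_feat, alpha = temporal_attention_pool(feats, w)
print(video_feat.shape)  # (16,)
```

In the full model, the resulting video-level tokens would be fed to a language decoder that generates the caption autoregressively; the second configuration replaces this pooling with factorized space–time attention inside a video encoder.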
