Scalable and Credible Artificial Intelligence for Remote Sensing Imagery Understanding

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (30 April 2023) | Viewed by 18642

Special Issue Editors


Guest Editor
Department of Big Data Management and Applications, Chang’an University, Xi'an, China
Interests: UAV tracking; scene understanding of remote sensing

Guest Editor
Department of Geological Sciences, Stanford University, Palo Alto, CA, USA
Interests: geophysics; machine learning

Guest Editor
Department of Geological Sciences, University of Florida, Gainesville, FL, USA
Interests: subglacial conditions; ice-penetrating radar; geostatistics

Guest Editor
Department of Big Data Management and Applications, Chang’an University, Xi'an, China
Interests: geophysics; artificial intelligence; road damage detection from UAV imagery

Guest Editor
Head, Department of Electro-Optics and Photonics Engineering, School of Electrical and Computer Engineering, Ben-Gurion University, Beer Sheva, Israel
Interests: compressive hyperspectral imaging; deep learning for inverse problems

Special Issue Information

Dear Colleagues,

Remote sensing imagery understanding has become prevalent in intelligent transportation, smart cities, geophysics, glaciology, and urban planning, among other fields. The development of artificial intelligence has heightened the need for fine-grained data understanding methods. However, existing methods suffer from limited feature extraction capability and slow speed. Moreover, there is a large gap between domain knowledge and remote sensing algorithms. With the aim of facilitating real-world applications, lightweight, scalable, and credible artificial intelligence models have become a promising way to deal with large volumes of remote sensing data with complicated morphology. For example, convolutional neural networks and vision transformers exhibit a powerful capability to process large-scale remote sensing images, and generative adversarial networks can create groups of high-resolution geological realizations. There is significant potential in employing advanced AI models for data understanding in remote sensing applications. We warmly welcome high-quality original submissions, in the form of cutting-edge articles, along this research direction.

The topics of interest include, but are not limited to, the following:

  • Advanced AI models for remote sensing data understanding, such as scalable convolutional neural networks, parallel neural networks, robust generative adversarial networks, transformers, interpretable and credible deep networks, adversarial attacks, AutoML, etc.;
  • Novel applications of AI models for remote sensing, such as transportation, smart cities, agriculture, UAV, urban planning, geophysics, geology, glaciology, etc.;
  • Emerging computer vision, signal processing, and optimization algorithms for remote sensing;
  • Target detection, tracking, and prediction in UAV videos;
  • Adaptive feature or spectral fusion and selection models for multispectral and hyperspectral remote sensing image understanding;
  • Weakly supervised or self-supervised learning for remote sensing;
  • Semantic segmentation of remote sensing images;
  • Detection of objects of interest in remote sensing imagery;
  • Gap filling and image synthesis based on airborne and satellite images.

Dr. Jianwu Fang
Dr. Zhen Yin
Dr. Emma J. MacKie
Dr. Zuo Chen
Prof. Dr. Adrian Stern
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing data understanding
  • artificial intelligence
  • machine learning
  • neural networks
  • geophysics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

25 pages, 28197 KiB  
Article
Color-Coated Steel Sheet Roof Building Extraction from External Environment of High-Speed Rail Based on High-Resolution Remote Sensing Images
by Yingjie Li, Weiqi Jin, Su Qiu, Dongsheng Zuo and Jun Liu
Remote Sens. 2023, 15(16), 3933; https://doi.org/10.3390/rs15163933 - 8 Aug 2023
Cited by 7 | Viewed by 1361
Abstract
The identification of color-coated steel sheet (CCSS) roof buildings in the external environment is of great significance for the operational security of high-speed rail systems. While high-resolution remote sensing images offer an efficient approach to identifying CCSS roof buildings, accurate extraction is challenging due to the complex background in remote sensing images and the extensive scale range of CCSS roof buildings. This research introduces the deformation-aware feature enhancement and alignment network (DFEANet) to address these challenges. DFEANet adaptively adjusts the receptive field to effectively separate the foreground and background, facilitated by the deformation-aware feature enhancement module (DFEM). Additionally, a feature alignment and gated fusion module (FAGM) is proposed to refine boundaries and preserve structural details; it ameliorates the misalignment between adjacent features and suppresses redundant information during the fusion process. Experimental results on remote sensing images along the Beijing–Zhangjiakou high-speed railway demonstrate the effectiveness of DFEANet, and ablation studies further underscore the gain in extraction accuracy due to the proposed modules. Overall, DFEANet was verified as capable of assisting in the external environment security of high-speed rails.
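The gated fusion idea described in this abstract can be illustrated with a minimal sketch. The code below is a hypothetical stand-in, not the authors' FAGM implementation: a sigmoid gate weights the per-position contribution of two aligned feature maps, which is the general mechanism by which gated fusion suppresses redundant information during merging.

```python
import numpy as np

def gated_fusion(shallow, deep_upsampled):
    """Fuse two spatially aligned feature maps with a per-position gate.

    Illustrative only: the gate here comes from the element-wise
    difference of the inputs, a stand-in for a learned 1x1 convolution.
    """
    gate = 1.0 / (1.0 + np.exp(-(shallow - deep_upsampled)))  # sigmoid
    # Convex combination: the gate decides which map dominates per pixel
    return gate * shallow + (1.0 - gate) * deep_upsampled

# Two toy 4x4 single-channel feature maps
a = np.ones((4, 4))
b = np.zeros((4, 4))
fused = gated_fusion(a, b)
```

In a real network the gate would be produced by learned convolutions over the concatenated features; the convex-combination structure is what distinguishes gated fusion from plain addition.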

24 pages, 5813 KiB  
Article
TPH-YOLOv5++: Boosting Object Detection on Drone-Captured Scenarios with Cross-Layer Asymmetric Transformer
by Qi Zhao, Binghao Liu, Shuchang Lyu, Chunlei Wang and Hong Zhang
Remote Sens. 2023, 15(6), 1687; https://doi.org/10.3390/rs15061687 - 21 Mar 2023
Cited by 44 | Viewed by 9217
Abstract
Object detection in drone-captured images has become a popular task in recent years. As drones navigate at varying altitudes, the object scale varies considerably, which burdens the optimization of models. Moreover, high-speed, low-altitude flight causes motion blur on densely packed objects, which poses great challenges. To solve these two issues, we build on YOLOv5, adding an additional prediction head to detect tiny-scale objects and replacing the CNN-based prediction heads with transformer prediction heads (TPH), constructing the TPH-YOLOv5 model. TPH-YOLOv5++ is then proposed to significantly reduce the computational cost and improve the detection speed of TPH-YOLOv5. In TPH-YOLOv5++, a cross-layer asymmetric transformer (CA-Trans) is designed to replace the additional prediction head while maintaining that head's knowledge. Using a sparse local attention (SLA) module, the asymmetric information between the additional head and the other heads can be captured efficiently, enriching the features of the other heads. In the VisDrone Challenge 2021, TPH-YOLOv5 won 4th place and achieved results well matched with the 1st-place model (AP 39.43%). Building on TPH-YOLOv5 and the CA-Trans module, TPH-YOLOv5++ further increases efficiency while achieving comparable or better results.
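The sparse local attention mentioned in the abstract can be sketched in a few lines. This is a hypothetical, simplified stand-in for the SLA module (the paper's actual design differs): each query position attends only to keys within a fixed window around it, which is what makes the attention sparse and cheap relative to full attention.

```python
import numpy as np

def sparse_local_attention(q, k, v, window=2):
    """Scaled dot-product attention restricted to a local window.

    q, k, v: (n, d) arrays. Each query i attends only to key positions
    in [i - window, i + window], instead of all n positions.
    """
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)
        w = np.exp(scores - scores.max())   # numerically stable softmax
        w /= w.sum()
        out[i] = w @ v[lo:hi]
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 8))      # 6 positions, 8-dim features
out = sparse_local_attention(x, x, x)
```

Full attention would cost O(n²) score computations per layer; the windowed variant costs O(n·window), which matters at the high feature-map resolutions used for tiny-object heads.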

27 pages, 13743 KiB  
Article
Cloud Contaminated Multispectral Remote Sensing Image Enhancement Algorithm Based on MobileNet
by Xuemei Li, Huping Ye and Shi Qiu
Remote Sens. 2022, 14(19), 4815; https://doi.org/10.3390/rs14194815 - 27 Sep 2022
Cited by 7 | Viewed by 2053
Abstract
Multispectral remote sensing images have shown unique advantages in many fields, both military and civilian. To address the difficulty of processing cloud-contaminated remote sensing images, this paper proposes a multispectral remote sensing image enhancement algorithm. A model is constructed from the aspects of cloud detection and image enhancement. In the cloud detection stage, clouds are divided into thick clouds and thin clouds according to cloud transmissivity in multispectral images, and a multi-layer cloud detection model is established. From the perspective of traditional image processing, a bimodal pre-detection algorithm is constructed to extract thick clouds. From the perspective of deep learning, the MobileNet architecture is improved to extract thin clouds. Faced with insufficient training samples, a self-supervised network is constructed for training, meeting the requirements of high-precision, high-efficiency cloud detection under small-sample conditions. In the image enhancement stage, the area where the ground objects are located is determined first. Then, from the perspective of compressed sensing, the signal is analyzed in the time and frequency domains: the inter-frame information of the hyperspectral images is analyzed to construct a sparse representation model based on the principle of compressed sensing. Finally, image enhancement is achieved. An experimental comparison shows that the proposed algorithm reaches an average Area Overlap Measure (AOM) of 0.83 and an Average Gradient (AG) of 12.7, outperforming the other seven algorithms by an average AG of 2.
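The compressed-sensing principle invoked in this abstract (recovering a signal from a sparse representation) can be illustrated with a generic greedy solver. The sketch below is not the paper's enhancement model; it shows orthogonal matching pursuit (OMP), a standard algorithm for recovering a sparse vector from compressed measurements, with a synthetic dictionary and signal.

```python
import numpy as np

def omp(Phi, y, n_nonzero):
    """Orthogonal matching pursuit: recover a sparse x from y = Phi @ x.

    Greedily picks the dictionary atom most correlated with the current
    residual, then re-fits all picked atoms by least squares.
    """
    residual = y.copy()
    support = []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(n_nonzero):
        idx = int(np.argmax(np.abs(Phi.T @ residual)))  # best new atom
        support.append(idx)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
Phi = rng.standard_normal((20, 50))       # 20 measurements, 50 atoms
Phi /= np.linalg.norm(Phi, axis=0)        # unit-norm columns
x = np.zeros(50)
x[[3, 17]] = [1.5, -2.0]                  # 2-sparse ground truth
x_rec = omp(Phi, Phi @ x, n_nonzero=2)
```

With a well-conditioned random dictionary and sufficiently sparse signals, OMP typically recovers the support exactly; in the paper this role is played by a sparse representation model learned from inter-frame hyperspectral information.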

18 pages, 4572 KiB  
Article
Learned Design of a Compressive Hyperspectral Imager for Remote Sensing by a Physics-Constrained Autoencoder
by Yaron Heiser and Adrian Stern
Remote Sens. 2022, 14(15), 3766; https://doi.org/10.3390/rs14153766 - 5 Aug 2022
Cited by 4 | Viewed by 2416
Abstract
Designing and optimizing systems by end-to-end deep learning is a recently emerging field. We present a novel physics-constrained autoencoder (PyCAE) for the design and optimization of a physically realizable sensing model. As a case study, we use this approach to design a compressive hyperspectral imaging system for remote sensing that captures hundreds of spectral bands with as few as four compressed measurements. We demonstrate our deep learning approach by designing spectral compression with a spectral light modulator (SpLM) encoder and a reconstruction neural network decoder. The SpLM consists of a set of modified Fabry–Pérot resonator (mFPR) etalons that are designed to have a staircase-shaped geometry, each stair occupying a few pixel columns of a push-broom-like spectral imager. The mFPR's stairs can sample the earth terrain in along-track scanning from an airborne or spaceborne moving platform. The SpLM is jointly designed with the autoencoder in a data-driven manner, using spectra from remote sensing databases to train the system. The SpLM's parameters are optimized by integrating its physically realizable sensing model into the encoder part of the PyCAE, while the decoder part of the PyCAE implements the spectral reconstruction.
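The core structure of a physics-constrained encoder (many spectral bands compressed into a handful of measurements by a physically realizable sensing operator) can be sketched linearly. Everything below is an illustrative assumption, not the PyCAE or mFPR model: physical realizability is mimicked by clamping sensing weights to [0, 1] as filter transmissions, and a pseudo-inverse stands in for the reconstruction network.

```python
import numpy as np

n_bands, n_meas = 200, 4                  # hundreds of bands -> 4 measurements
rng = np.random.default_rng(2)

# Hypothetical sensing matrix: each row is one filter's per-band
# transmission, clamped to [0, 1] to keep it physically plausible.
T = np.clip(rng.standard_normal((n_meas, n_bands)) * 0.2 + 0.5, 0.0, 1.0)

# A toy smooth spectrum (Gaussian bump over the band index)
spectrum = np.exp(-((np.arange(n_bands) - 80) / 25.0) ** 2)

y = T @ spectrum                          # the 4 compressed measurements
recon = np.linalg.pinv(T) @ y             # linear stand-in for the decoder net
```

In the actual end-to-end approach, both the transmissions and the nonlinear decoder are trained jointly on spectral databases, with the constraint enforced through the encoder's parameterization rather than a post-hoc clamp.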

22 pages, 12336 KiB  
Article
Developing a More Reliable Aerial Photography-Based Method for Acquiring Freeway Traffic Data
by Chi Zhang, Zhongze Tang, Min Zhang, Bo Wang and Lei Hou
Remote Sens. 2022, 14(9), 2202; https://doi.org/10.3390/rs14092202 - 5 May 2022
Cited by 20 | Viewed by 2472
Abstract
Due to the widespread use of unmanned aerial vehicles (UAVs) in remote sensing, there are fully developed techniques for extracting vehicle speed and trajectory data from aerial video, using either traditional methods based on optical features or deep learning methods. However, few papers discuss how to solve the issue of video shaking, and existing vehicle data are rarely linked to lane lines. To address these deficiencies, in this study we formulated a more reliable method for real traffic data acquisition that outperforms traditional methods in data accuracy and integrity. First, the method implements the scale-invariant feature transform (SIFT) algorithm to detect, describe, and match local features acquired from high-altitude fixed-point aerial photographs. Second, it applies "you only look once" version 5 (YOLOv5) and deep simple online and real-time tracking (DeepSORT) to detect and track moving vehicles. Next, it leverages a purpose-built Python program to acquire data on vehicle speed and distance (to the marked reference line). The results show that this method achieved over 95% accuracy in speed detection and less than 20 cm tolerance in vehicle trajectory mapping. The method also addresses common problems involving the lack of quality aerial photographic data and accuracy in lane line recognition. Finally, this approach can be used to establish a Frenet coordinate system, which can further decipher driving behaviors and road traffic safety.
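The final step of the pipeline described here (turning a tracked vehicle's pixel trajectory into speed) reduces to simple geometry once the ground sampling distance and frame rate are known. The sketch below is a hypothetical illustration with made-up values, not the authors' program; the function name and parameters are assumptions.

```python
import numpy as np

def speeds_from_track(track_px, m_per_px, fps):
    """Per-frame vehicle speed in km/h from a sequence of (x, y) pixel centres.

    track_px: list of per-frame bounding-box centres in pixels.
    m_per_px: ground sampling distance (metres per pixel).
    fps: video frame rate.
    """
    track = np.asarray(track_px, dtype=float)
    step_px = np.linalg.norm(np.diff(track, axis=0), axis=1)  # px per frame
    return step_px * m_per_px * fps * 3.6                     # m/s -> km/h

# A vehicle moving 10 px/frame at 0.1 m/px and 25 fps: 25 m/s = 90 km/h
track = [(0, 0), (10, 0), (20, 0)]
v = speeds_from_track(track, m_per_px=0.1, fps=25)
```

In practice the pixel track must first be stabilized (here, via SIFT matching against a reference frame) before this conversion, otherwise camera shake is read as vehicle motion.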
