
Image Processing from Aerial and Satellite Imagery

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 15 March 2025 | Viewed by 10586

Special Issue Editors


Guest Editor
School of Applied Computational Sciences, Meharry Medical College, Nashville, TN 37208, USA
Interests: geospatial big data to support health-care-related application scenarios; unmanned aerial systems for environmental monitoring and emergency situation response; close-range photogrammetry, computer vision, and 3D printing for health care and epidemiology; human–computer/human–robot symbiosis for decision support systems

Guest Editor
Interdisciplinary Research Center for Aviation and Space Exploration, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia
Interests: machine learning and artificial intelligence models and approaches for remote sensing applications and geospatial data processing; innovative remote sensing and photogrammetry technologies for assessing the environmental impact of construction; solving problems of town planning and spatial territorial management; research into and application of remote sensing, UAS, close-range photogrammetry, and terrestrial laser scanning

Guest Editor
School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 60012, Tamil Nadu, India
Interests: remote sensing for aquatic and land applications; earth observations; image processing; ocean optics; radiative transfer theory; modelling water quality features using machine learning; object detection using remote sensing and AI-based approaches; land capability, coastal vulnerability, and environmental sensitivity mapping for decision support

Special Issue Information

Dear Colleagues,

Aerial and satellite imagery is an invaluable resource in various fields, including environmental monitoring, urban planning, agriculture, disaster management, and more. This Special Issue of Remote Sensing, entitled “Image Processing from Aerial and Satellite Imagery”, aims to bring together cutting-edge interdisciplinary research in image processing techniques, geospatial science, and technology tailored to these data sources. With the proliferation of remote sensing platforms, there is a growing need for advanced image analysis methods to extract meaningful information from the vast volumes of aerial and satellite imagery available today.

The primary objective of this Special Issue is to provide a platform for researchers, scientists, and experts to share their latest findings and innovations in the field of image processing for aerial and satellite imagery. This research aligns seamlessly with the journal's scope, which focuses on remote sensing technologies and their applications. By fostering collaboration and knowledge exchange, this Special Issue seeks to advance state-of-the-art image processing techniques, geospatial information science, and technologies for real-world applications, including the challenges associated with deploying the geospatial big data obtained from satellite-, aerial/UAV-, and terrestrial-based Earth observation techniques.

We invite submissions of original research articles, reviews, and innovative methodologies addressing, but not limited to, the following themes:

  • Image enhancement and restoration: Novel approaches for improving the quality of aerial and satellite images, including noise reduction, deblurring, correction, and super-resolution.
  • Feature extraction and classification: Algorithms and methods for automated detection and classification of objects and phenomena in imagery, such as land use/land cover classification, object recognition, and change detection.
  • Machine learning and deep learning: Applications of machine learning and deep learning techniques for image analysis in remote sensing, including convolutional neural networks, recurrent neural networks, and generative adversarial networks.
  • Data fusion: Integration of multiple data sources, such as multispectral, hyperspectral, and LiDAR data, to enhance the information extracted from imagery.
  • Time series analysis: Temporal analysis of aerial and satellite imagery to monitor dynamic processes and long-term trends.
  • Applications: Real-world applications of aerial and satellite imagery processing in fields like agriculture, forestry, urban planning, disaster monitoring, and environmental conservation.

This Special Issue provides a unique opportunity for researchers to contribute to the advancement of image processing methods for aerial and satellite imagery, ultimately supporting informed decision making and sustainable development in a variety of domains. We encourage authors to submit their high-quality research in order to help shape the future of this critical research area.

Prof. Dr. Eugene Levin
Prof. Dr. Roman Shults
Dr. Surya Prakash Tiwari
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • aerial imagery
  • satellite imagery
  • image processing
  • data fusion
  • machine learning
  • feature extraction
  • change detection
  • environmental monitoring
  • photogrammetry/space photogrammetry
  • land use/land cover classification
  • geospatial analysis

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

Jump to: Other

28 pages, 12630 KiB  
Article
Satellite Image Restoration via an Adaptive QWNNM Model
by Xudong Xu, Zhihua Zhang and M. James C. Crabbe
Remote Sens. 2024, 16(22), 4152; https://doi.org/10.3390/rs16224152 - 7 Nov 2024
Viewed by 483
Abstract
Due to channel noise and random atmospheric turbulence, retrieved satellite images are always distorted and degraded, and so require further restoration before use in various applications. The latest quaternion-based weighted nuclear norm minimization (QWNNM) model, which utilizes the idea of low-rank matrix approximation and the quaternion representation of multi-channel satellite images, can achieve image restoration and enhancement. However, the QWNNM model ignores the impact of noise on similarity measurement, lacks the utilization of residual image information, and fixes the number of iterations. In order to address these drawbacks, we propose three adaptive strategies in a new adaptive QWNNM model: adaptive noise-resilient block matching, adaptive feedback of the residual image, and an adaptive iteration stopping criterion. Both simulation experiments with known noise/blurring and real-environment experiments with unknown noise/blurring demonstrated that the adaptive QWNNM model outperforms the original QWNNM model and other state-of-the-art satellite image restoration models based on very different technical approaches. Full article
(This article belongs to the Special Issue Image Processing from Aerial and Satellite Imagery)
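The low-rank shrinkage at the core of WNNM-style restoration can be sketched in a few lines. The following is a simplified, single-channel (non-quaternion) illustration: the weighting scheme follows the common WNNM formulation from the literature, and the function name and constant `c` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def wnnm_shrink(Y, noise_sigma, c=2.8, eps=1e-8):
    # Weighted singular-value shrinkage: large singular values
    # (likely signal) receive small weights and are barely touched,
    # while small ones (likely noise) receive large weights and are
    # driven to zero.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    n = Y.shape[1]
    # Rough estimate of the clean matrix's singular values.
    s_clean = np.sqrt(np.maximum(s**2 - n * noise_sigma**2, 0.0))
    weights = c * np.sqrt(n) * noise_sigma**2 / (s_clean + eps)
    s_shrunk = np.maximum(s - weights, 0.0)
    return (U * s_shrunk) @ Vt
```

Applied to a noisy low-rank matrix, the shrinkage zeroes out the noise-dominated singular directions while leaving the strong signal directions nearly intact; with zero assumed noise it reduces to the identity.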

15 pages, 6018 KiB  
Article
Consistency Self-Training Semi-Supervised Method for Road Extraction from Remote Sensing Images
by Xingjian Gu, Supeng Yu, Fen Huang, Shougang Ren and Chengcheng Fan
Remote Sens. 2024, 16(21), 3945; https://doi.org/10.3390/rs16213945 - 23 Oct 2024
Viewed by 730
Abstract
Road extraction techniques based on remote sensing images have advanced significantly. Currently, fully supervised road segmentation neural networks based on remote sensing images require a significant number of densely labeled road samples, limiting their applicability in large-scale scenarios. Consequently, semi-supervised methods that utilize fewer labeled data have gained increasing attention. However, the imbalance between a small quantity of labeled data and a large volume of unlabeled data leads to local detail errors and overall cognitive mistakes in semi-supervised road extraction. To address this challenge, this paper proposes a novel consistency self-training semi-supervised method (CSSnet), which effectively learns from a limited number of labeled data samples and a large amount of unlabeled data. This method integrates self-training semi-supervised segmentation with semi-supervised classification. The semi-supervised segmentation component relies on an enhanced generative adversarial network for semantic segmentation, which significantly reduces local detail errors. The semi-supervised classification component relies on an upgraded mean-teacher network to handle overall cognitive errors. Our method exhibits excellent performance with a modest amount of labeled data. This study was validated on three separate road datasets comprising high-resolution remote sensing satellite images and UAV photographs. Experimental findings showed that our method consistently outperformed state-of-the-art semi-supervised methods and several classic fully supervised methods. Full article
(This article belongs to the Special Issue Image Processing from Aerial and Satellite Imagery)
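Two of the ingredients the abstract combines — an exponential-moving-average (EMA) mean-teacher update and confidence-thresholded pseudo-labels for self-training — can be sketched as follows. The function names and the 0.9 confidence threshold are illustrative assumptions, not the CSSnet implementation.

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    # Mean-teacher step: teacher weights track an exponential moving
    # average of the student's weights, giving the student stabler
    # consistency targets than its own noisy predictions.
    return {k: alpha * teacher[k] + (1.0 - alpha) * student[k]
            for k in teacher}

def pseudo_labels(probs, threshold=0.9):
    # Self-training step: keep only predictions the model is confident
    # about; uncertain pixels are masked out with -1 and excluded
    # from the loss.
    labels = probs.argmax(axis=-1)
    labels[probs.max(axis=-1) < threshold] = -1
    return labels
```

In a training loop, the student is updated by gradient descent on labeled data plus a consistency loss against the teacher's predictions, while `ema_update` is applied after every step.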

18 pages, 3781 KiB  
Article
Self-Attention Multiresolution Analysis-Based Informal Settlement Identification Using Remote Sensing Data
by Rizwan Ahmed Ansari and Timothy J. Mulrooney
Remote Sens. 2024, 16(17), 3334; https://doi.org/10.3390/rs16173334 - 8 Sep 2024
Viewed by 866
Abstract
The global dilemma of informal settlements persists alongside the rapid process of urbanization. Various methods for analyzing remotely sensed images to identify informal settlements using semantic segmentation have been extensively researched, resulting in the development of numerous supervised and unsupervised algorithms. Texture-based analysis is a topic extensively studied in the literature. However, it is important to note that approaches that do not utilize a multiresolution strategy are unable to take advantage of the fact that texture exists at different spatial scales. The capacity to perform online mapping and precise segmentation on a vast scale while considering the diverse characteristics present in remotely sensed images carries significant consequences. This research presents a novel approach for identifying informal settlements using multiresolution analysis and self-attention techniques. The technique shows potential for being resilient to the inherent variability in remotely sensed images due to its capacity to extract characteristics at many scales and prioritize areas that contain significant information. Segmented images underwent an accuracy assessment, in which a comparative analysis was conducted based on metrics such as mean intersection over union, precision, recall, F-score, and overall accuracy. The proposed method's robustness is demonstrated by comparing it to various state-of-the-art techniques. This comparison is conducted using remotely sensed images that have different spatial resolutions and informal settlement characteristics. The proposed method achieves a higher accuracy of approximately 95%, even when dealing with significantly different image characteristics. Full article
(This article belongs to the Special Issue Image Processing from Aerial and Satellite Imagery)
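A minimal sketch of the two building blocks named in the abstract — multiresolution analysis and self-attention — under stated simplifications: average pooling stands in for a proper Gaussian pyramid, and queries, keys, and values all equal the input features. Names are illustrative, not the authors' code.

```python
import numpy as np

def self_attention(X):
    # Scaled dot-product self-attention over feature vectors
    # X (n_tokens, dim); Q = K = V = X for brevity.
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ X, attn

def pyramid(img, levels=3):
    # Crude multiresolution analysis: repeated 2x2 average pooling
    # stands in for Gaussian smoothing plus decimation, exposing
    # texture at successively coarser spatial scales.
    pyr = [img]
    for _ in range(levels - 1):
        h, w = pyr[-1].shape
        pyr.append(pyr[-1][:h - h % 2, :w - w % 2]
                   .reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyr
```

Features extracted per pyramid level would then be re-weighted by the attention map, letting the model prioritize informative regions at each scale.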

21 pages, 16543 KiB  
Article
Bidirectional Feature Fusion and Enhanced Alignment Based Multimodal Semantic Segmentation for Remote Sensing Images
by Qianqian Liu and Xili Wang
Remote Sens. 2024, 16(13), 2289; https://doi.org/10.3390/rs16132289 - 22 Jun 2024
Cited by 1 | Viewed by 1487
Abstract
Image–text multimodal deep semantic segmentation leverages the fusion and alignment of image and text information and provides more prior knowledge for segmentation tasks. It is worth exploring image–text multimodal semantic segmentation for remote sensing images. In this paper, we propose a bidirectional feature fusion and enhanced alignment-based multimodal semantic segmentation model (BEMSeg) for remote sensing images. Specifically, BEMSeg first extracts image and text features with image and text encoders, respectively; the features are then fused and aligned to obtain a complementary multimodal feature representation. Secondly, a bidirectional feature fusion module is proposed, which employs self-attention and cross-attention to adaptively fuse image and text features of different modalities, thus reducing the differences between multimodal features. For multimodal feature alignment, the similarity between the image pixel features and text features is computed to obtain a pixel–text score map. Thirdly, we propose category-based pixel-level contrastive learning on the score map to reduce the differences among pixels of the same category and increase the differences among pixels of different categories, thereby enhancing the alignment effect. Additionally, a positive and negative sample selection strategy based on different images is explored during contrastive learning. Averaging pixel values across different training images for each category to form positive and negative samples incorporates global pixel information while limiting sample quantity and reducing computational cost. Finally, the fused image features and the aligned pixel–text score map are concatenated and fed into the decoder to predict the segmentation results. Experimental results on the ISPRS Potsdam, Vaihingen, and LoveDA datasets demonstrate that BEMSeg is superior to comparison methods on the Potsdam and Vaihingen datasets, with improvements in mIoU ranging from 0.57% to 5.59% and 0.48% to 6.15%, respectively; compared with Transformer-based methods, BEMSeg also performs competitively on the LoveDA dataset, with improvements in mIoU ranging from 0.37% to 7.14%. Full article
(This article belongs to the Special Issue Image Processing from Aerial and Satellite Imagery)
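In its simplest form, the pixel–text score map described above reduces to cosine similarity between normalized pixel embeddings and class text embeddings, and the category-averaged positive/negative samples reduce to per-class feature prototypes. The shapes and names below are illustrative assumptions, not the BEMSeg implementation.

```python
import numpy as np

def pixel_text_score_map(pixel_feats, text_feats):
    # Cosine similarity between every pixel embedding (H, W, D) and
    # every class text embedding (C, D), giving a score map (H, W, C).
    p = pixel_feats / np.linalg.norm(pixel_feats, axis=-1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=-1, keepdims=True)
    return p @ t.T

def category_prototypes(feats, labels, n_classes):
    # Average pixel features per category across an image batch.
    # These prototypes act as global positive/negative samples for
    # contrastive learning, keeping the sample set small and cheap.
    flat = feats.reshape(-1, feats.shape[-1])
    lab = labels.reshape(-1)
    return np.stack([flat[lab == c].mean(axis=0)
                     for c in range(n_classes)])
```

The per-pixel class prediction is then simply the argmax over the last axis of the score map, which is what alignment sharpens.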

20 pages, 12264 KiB  
Article
Land Use Recognition by Applying Fuzzy Logic and Object-Based Classification to Very High Resolution Satellite Images
by Dario Perregrini and Vittorio Casella
Remote Sens. 2024, 16(13), 2273; https://doi.org/10.3390/rs16132273 - 21 Jun 2024
Viewed by 624
Abstract
The past decade has seen remarkable advancements in Earth observation satellite technologies, leading to an unprecedented level of detail in satellite imagery, with ground resolutions nearing an impressive 30 cm. This progress has significantly broadened the scope of satellite imagery utilization across various domains that were traditionally reliant on aerial data. Our ultimate goal is to leverage this high-resolution satellite imagery to classify land use types and derive soil permeability maps by attributing permeability values to the different types of classified soil. Specifically, we aim to develop an object-based classification algorithm using fuzzy logic techniques to describe the different classes relevant to soil permeability by analyzing different test areas, and once a complete method has been developed, apply it to the entire image of Pavia. In this study area, a logical scheme was developed to classify the field classes, cultivated and uncultivated, and distinguish them from large industrial buildings, which, due to their radiometric similarity, can be classified incorrectly, especially with uncultivated fields. Validation of the classification results against ground truth data, produced by an operator manually classifying part of the image, yielded an impressive overall accuracy of 95.32%. Full article
(This article belongs to the Special Issue Image Processing from Aerial and Satellite Imagery)

26 pages, 10617 KiB  
Article
Lightweight Super-Resolution Generative Adversarial Network for SAR Images
by Nana Jiang, Wenbo Zhao, Hui Wang, Huiqi Luo, Zezhou Chen and Jubo Zhu
Remote Sens. 2024, 16(10), 1788; https://doi.org/10.3390/rs16101788 - 18 May 2024
Cited by 1 | Viewed by 1154
Abstract
Due to a unique imaging mechanism, Synthetic Aperture Radar (SAR) images typically exhibit degradation phenomena. To enhance image quality and support real-time on-board processing capabilities, we propose a lightweight deep generative network framework, namely, the Lightweight Super-Resolution Generative Adversarial Network (LSRGAN). This method introduces Depthwise Separable Convolution (DSConv) in residual blocks to compress the original Generative Adversarial Network (GAN) and uses the SeLU activation function to construct a lightweight residual module (LRM) suitable for SAR image characteristics. Furthermore, we combine the LRM with an optimized Coordinated Attention (CA) module, enhancing the lightweight network’s capability to learn feature representations. Experimental results on spaceborne SAR images demonstrate that compared to other deep generative networks focused on SAR image super-resolution reconstruction, LSRGAN achieves compression ratios of 74.68% in model storage requirements and 55.93% in computational resource demands. In this work, we significantly reduce the model complexity, improve the quality of spaceborne SAR images, and validate the effectiveness of the SAR image super-resolution algorithm as well as the feasibility of real-time on-board processing technology. Full article
(This article belongs to the Special Issue Image Processing from Aerial and Satellite Imagery)
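The per-layer compression that Depthwise Separable Convolution (DSConv) brings can be checked with simple parameter arithmetic (biases ignored for clarity); this per-layer saving is the mechanism behind the model-level storage reduction reported above.

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: one k x k x c_in kernel per output channel.
    return k * k * c_in * c_out

def dsconv_params(k, c_in, c_out):
    # Depthwise separable = depthwise (one k x k filter per input
    # channel) + pointwise 1x1 projection to c_out channels.
    return k * k * c_in + c_in * c_out

# For a typical 3x3 layer with 64 channels in and out, the ratio is
# exactly 1/c_out + 1/k^2, i.e. roughly an 8x parameter reduction.
k, c_in, c_out = 3, 64, 64
ratio = dsconv_params(k, c_in, c_out) / conv_params(k, c_in, c_out)
```

The same arithmetic applies to multiply-accumulate counts, which is why DSConv also cuts the computational cost relevant for real-time on-board processing.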

28 pages, 11352 KiB  
Article
Pansharpening Low-Altitude Multispectral Images of Potato Plants Using a Generative Adversarial Network
by Sourav Modak, Jonathan Heil and Anthony Stein
Remote Sens. 2024, 16(5), 874; https://doi.org/10.3390/rs16050874 - 1 Mar 2024
Cited by 2 | Viewed by 2739
Abstract
Image preprocessing and fusion are commonly used for enhancing remote-sensing images, but the resulting images often lack useful spatial features. As the majority of research on image fusion has concentrated on the satellite domain, the image-fusion task for Unmanned Aerial Vehicle (UAV) images has received minimal attention. This study investigated an image-improvement strategy by integrating image preprocessing and fusion tasks for UAV images. The goal is to improve spatial details and avoid color distortion in fused images. Techniques such as image denoising, sharpening, and Contrast Limited Adaptive Histogram Equalization (CLAHE) were used in the preprocessing step. The unsharp mask algorithm was used for image sharpening. Wiener and total variation denoising methods were used for image denoising. The image-fusion process was conducted in two steps: (1) fusing the spectral bands into one multispectral image and (2) pansharpening the panchromatic and multispectral images using the PanColorGAN model. The effectiveness of the proposed approach was evaluated using quantitative and qualitative assessment techniques, including no-reference image quality assessment (NR-IQA) metrics. In this experiment, the unsharp mask algorithm noticeably improved the spatial details of the pansharpened images. No preprocessing algorithm dramatically improved the color quality of the enhanced images. The proposed fusion approach improved the images without importing unnecessary blurring and color distortion issues. Full article
(This article belongs to the Special Issue Image Processing from Aerial and Satellite Imagery)
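Unsharp masking, the sharpening step used in the preprocessing stage, amplifies the high-frequency residual between an image and a blurred copy of itself. A minimal sketch, using a box blur in place of the usual Gaussian and an illustrative `amount`:

```python
import numpy as np

def box_blur(img, radius=1):
    # Simple box blur (edge-padded) standing in for a Gaussian kernel.
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode="edge")
    h, w = img.shape
    return sum(pad[i:i + h, j:j + w]
               for i in range(k) for j in range(k)) / (k * k)

def unsharp_mask(img, amount=1.5, radius=1):
    # Sharpened = original + amount * (original - blurred): the
    # high-frequency residual is scaled and added back, which
    # exaggerates edges while leaving flat regions untouched.
    return img + amount * (img - box_blur(img, radius))
```

On a constant region the residual is zero, so nothing changes; across an edge the output overshoots on both sides, which is the visual "sharpening" effect.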

Other

Jump to: Research

15 pages, 2538 KiB  
Technical Note
Multi-Scale Image- and Feature-Level Alignment for Cross-Resolution Person Re-Identification
by Guoqing Zhang, Zhun Wang, Jiangmei Zhang, Zhiyuan Luo and Zhihao Zhao
Remote Sens. 2024, 16(2), 278; https://doi.org/10.3390/rs16020278 - 10 Jan 2024
Viewed by 1229
Abstract
Cross-Resolution Person Re-Identification (re-ID) aims to match images with disparate resolutions arising from variations in camera hardware and shooting distances. Most conventional works utilize Super-Resolution (SR) models to recover Low Resolution (LR) images to High Resolution (HR) images. However, because the SR models cannot completely compensate for the missing information in the LR images, there is still a large gap between the HR images recovered from the LR images and the real HR images. To tackle this challenge, we propose a novel Multi-Scale Image- and Feature-Level Alignment (MSIFLA) framework to align the images on multiple resolution scales at both the image and feature level. Specifically, (i) we design a Cascaded Multi-Scale Resolution Reconstruction (CMSR2) module, which is composed of three cascaded Image Reconstruction (IR) networks and can continuously reconstruct multiple variables of different resolution scales from low to high for each image, regardless of image resolution. The reconstructed images with specific resolution scales are of similar distribution; therefore, the images are aligned on multiple resolution scales at the image level. (ii) We propose a Multi-Resolution Representation Learning (MR2L) module which consists of three person re-ID networks that encourage the IR models to separately preserve ID-discriminative information during training. Each re-ID network focuses on mining discriminative information from a specific scale without disturbance from various resolutions. By matching the extracted features on three resolution scales, the images with different resolutions are also aligned at the feature level. We conduct extensive experiments on multiple public cross-resolution person re-ID datasets to demonstrate the superiority of the proposed method. In addition, the generalization of MSIFLA in handling cross-resolution retrieval tasks is verified on the UAV vehicle dataset. Full article
(This article belongs to the Special Issue Image Processing from Aerial and Satellite Imagery)
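Matching features extracted at several resolution scales can be sketched as an average of per-scale cosine similarities. The per-scale encoders are assumed to exist already, and all names here are illustrative rather than the MSIFLA code.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def multi_scale_score(query_feats, gallery_feats):
    # query_feats / gallery_feats: one feature vector per resolution
    # scale (e.g. low, mid, high). Averaging the per-scale
    # similarities aligns images of different native resolutions at
    # the feature level, since each scale is compared like-for-like.
    return sum(cosine_sim(q, g)
               for q, g in zip(query_feats, gallery_feats)) / len(query_feats)
```

Ranking a gallery by this score is the retrieval step; a matching pair scores near 1 at every scale, while unrelated images average out near 0.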
