

Intelligent Processing and Application of UAV Remote Sensing Image Data

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: 15 January 2025 | Viewed by 7426

Special Issue Editor


Guest Editor
The State Key Laboratory of Information Engineering in Surveying Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
Interests: remote sensing intelligent interpretation; GIS theory and application; unmanned autonomous aerial vehicles; multi-sensor integration; spatio-temporal big data analysis

Special Issue Information

Dear Colleagues,

Unmanned aerial vehicles (UAVs) carrying different remote sensing payloads have been widely used in agricultural monitoring, disaster emergency response, urban management, military applications, and other fields. UAVs represent a valid alternative or a complementary solution to satellite platforms, especially for extremely high-resolution acquisitions over small or inaccessible areas, and they are not limited by revisit cycles. However, the processing, fusion, and comprehensive application of massive volumes of UAV remote sensing data are emerging as some of the most important issues in the community.

This Special Issue aims to collect new developments, methodologies, best practices, and applications of UAVs in the intelligent processing and application of remote sensing image data. Topics of interest include:

  • Fine 3D reconstruction of buildings/structures
  • Autonomous 3D reconstruction of indoor/underground environments (shopping malls, train stations, underground parking garages, catacombs, karst caves, etc.)
  • UAV online target detection and tracking
  • Intelligent interpretation of UAV video/image (image classification, feature extraction, target detection, change detection, biophysical parameter estimation, etc.)
  • Other on-board sensor data processing (multispectral, hyperspectral, thermal, lidar, SAR, gas or radioactivity sensors, etc.)
  • Data fusion: integration of UAV imagery with satellite, aerial or terrestrial data, integration of heterogeneous data captured by UAVs
  • Online and real-time processing; collaborative UAVs and UAV fleets applied to remote sensing
  • Applications (urban monitoring, precision farming, forestry, disaster prevention, assessment and monitoring, search and rescue, security, archaeology, industrial plant inspection, etc.)
  • Any use of UAVs related to remote sensing

You may choose our Joint Special Issue in Drones.

Prof. Dr. Haigang Sui
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent processing and application
  • UAV 3D reconstruction in indoor/underground scenes
  • target detection and tracking
  • intelligent interpretation of UAV video/image
  • online and real-time processing
  • collaborative UAVs and UAV swarms

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

28 pages, 45529 KiB  
Article
High-Quality Damaged Building Instance Segmentation Based on Improved Mask Transfiner Using Post-Earthquake UAS Imagery: A Case Study of the Luding Ms 6.8 Earthquake in China
by Kangsan Yu, Shumin Wang, Yitong Wang and Ziying Gu
Remote Sens. 2024, 16(22), 4222; https://doi.org/10.3390/rs16224222 - 13 Nov 2024
Viewed by 574
Abstract
Unmanned aerial systems (UASs) are increasingly playing a crucial role in earthquake emergency response and disaster assessment due to their ease of operation, mobility, and low cost. However, post-earthquake scenes are complex, with many forms of damaged buildings. UAS imagery has a high spatial resolution, but the resolution is inconsistent between different flight missions. These factors make it challenging for existing methods to accurately identify individual damaged buildings in UAS images from different scenes, resulting in coarse segmentation masks that are insufficient for practical application needs. To address these issues, this paper proposed DB-Transfiner, a building damage instance segmentation method for post-earthquake UAS imagery based on the Mask Transfiner network. This method primarily employed deformable convolution in the backbone network to enhance adaptability to collapsed buildings of arbitrary shapes. Additionally, it used an enhanced bidirectional feature pyramid network (BiFPN) to integrate multi-scale features, improving the representation of targets of various sizes. Furthermore, a lightweight Transformer encoder was used to process edge pixels, enhancing the efficiency of global feature extraction and the refinement of target edges. We conducted experiments on post-disaster UAS images collected from the 2022 Luding earthquake, with a surface wave magnitude (Ms) of 6.8, in the Sichuan Province of China. The results demonstrated that DB-Transfiner achieved average precisions (AP) of 56.42% for detection (APbox) and 54.85% for segmentation (APseg), outperforming all other comparative methods. Our model improved on the original model by 5.00% in APbox and 4.07% in APseg. Importantly, the APseg of our model was significantly higher than that of the state-of-the-art instance segmentation model Mask R-CNN, with an increase of 9.07%. In addition, we conducted applicability testing, in which the model achieved an average correctness rate of 84.28% for identifying images from different scenes of the same earthquake. We also applied the model to the Yangbi earthquake scene and found that it maintained good performance, demonstrating a degree of generalization capability. This method has high accuracy in identifying and assessing damaged buildings after earthquakes and can provide critical data support for disaster loss assessment.

21 pages, 7546 KiB  
Article
Unifying Building Instance Extraction and Recognition in UAV Images
by Xiaofei Hu, Yang Zhou, Chaozhen Lan, Wenjian Gan, Qunshan Shi and Hanqiang Zhou
Remote Sens. 2024, 16(18), 3449; https://doi.org/10.3390/rs16183449 - 17 Sep 2024
Viewed by 1059
Abstract
Building instance extraction and recognition (BEAR), which extracts and further recognizes building instances in unmanned aerial vehicle (UAV) images, is of paramount importance in urban understanding applications. To address this challenge, we propose a unified network, BEAR-Former. Given the difficulty of building instance recognition caused by the small area and multiple instances of buildings in UAV images, we developed a novel multi-view learning method, Cross-Mixer. This method constructs a cross-regional branch and an intra-regional branch to extract, respectively, the global context dependencies and the local spatial structural details of buildings. In the cross-regional branch, we employed cross-attention and polar-coordinate relative position encoding to learn more discriminative features. To solve the BEAR problem end to end, we designed a channel group and fusion module (CGFM) as a shared encoder. The CGFM includes a channel group encoder layer to independently extract features and a channel fusion module to mine the complementary information for multiple tasks. Additionally, an RoI enhancement strategy was designed to improve model performance. Finally, we introduced a new metric, Recall@(K, iou), to evaluate the performance of the BEAR task. Experimental results demonstrate the effectiveness of our method.

22 pages, 6774 KiB  
Article
Path Planning of UAV Formations Based on Semantic Maps
by Tianye Sun, Wei Sun, Changhao Sun and Ruofei He
Remote Sens. 2024, 16(16), 3096; https://doi.org/10.3390/rs16163096 - 22 Aug 2024
Viewed by 771
Abstract
This paper primarily studies the path planning problem for UAV formations guided by semantic map information. Our aim is to integrate prior information from semantic maps to provide initial information on task points for UAV formations, thereby planning formation paths that meet practical requirements. Firstly, a semantic segmentation network based on multi-scale feature extraction and fusion is employed to obtain UAV aerial semantic maps containing environmental information. Secondly, based on the semantic maps, a three-point optimization model for the optimal UAV trajectory is established, and a general formula for calculating the heading angle is proposed to approximately decouple the triangular equation of the optimal trajectory. For large-scale formations and task points, an improved fuzzy clustering algorithm is proposed to group task points that meet distance constraints into clusters, thereby reducing the computational scale of individual samples without changing the sample size and improving the allocation efficiency of the UAV formation path planning model. Experimental data show that the UAV cluster path planning method using angle-optimized fuzzy clustering achieves an 8.6% improvement in total flight range over other algorithms and a 17.4% reduction in the number of large-angle turns.
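As background for the clustering step described in this abstract, the textbook fuzzy c-means algorithm that such task-point grouping builds on can be sketched as follows. This is only the standard baseline, not the paper's improved variant (which adds distance constraints and angle optimization); all function names and parameters here are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(points, n_clusters=3, m=2.0, n_iter=150, seed=0):
    """Textbook fuzzy c-means: soft-assign points to clusters.
    m > 1 controls fuzziness; each point's memberships sum to 1."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # random initial memberships
    for _ in range(n_iter):
        um = u ** m
        # cluster centers: membership-weighted means of the points
        centers = um.T @ points / um.sum(axis=0)[:, None]
        # distances from every point to every cluster center
        d = np.maximum(np.linalg.norm(
            points[:, None, :] - centers[None, :, :], axis=2), 1e-12)
        # u_ik proportional to d_ik^(-2/(m-1)), normalized per point
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return u.argmax(axis=1), centers

# three well-separated groups of hypothetical task points
rng = np.random.default_rng(1)
task_pts = np.vstack([
    rng.normal((0.0, 0.0), 0.5, (20, 2)),
    rng.normal((10.0, 0.0), 0.5, (20, 2)),
    rng.normal((0.0, 10.0), 0.5, (20, 2)),
])
labels, centers = fuzzy_c_means(task_pts, n_clusters=3)
```

Grouping the task points first, as the paper does, lets each UAV plan over a much smaller set of candidate points, which is where the claimed efficiency gain comes from.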

27 pages, 9304 KiB  
Article
KNN Local Linear Regression for Demarcating River Cross-Sections with Point Cloud Data from UAV Photogrammetry URiver-X
by Taesam Lee, Seonghyeon Hwang and Vijay P. Singh
Remote Sens. 2024, 16(10), 1820; https://doi.org/10.3390/rs16101820 - 20 May 2024
Viewed by 929
Abstract
Aerial surveying with unmanned aerial vehicles (UAVs) has been widely employed in river management and flood monitoring. One of the major processes in UAV aerial surveying for river applications is to demarcate the cross-section of a river. From the images acquired in an aerial survey, a point cloud dataset can be extracted with the structure-from-motion technique. To accurately demarcate the cross-section from the point cloud, an appropriate delineation technique is required to reproduce the characteristics of natural and man-made channels, including abrupt changes, bumps, and lined shapes. Therefore, a nonparametric estimation technique, the K-nearest neighbor local linear regression (KLR) model, was tested in the current study to demarcate the cross-section of a river from a point cloud dataset obtained by aerial surveying. The proposed technique was tested with synthetically simulated trapezoidal, U-shaped, and V-shaped channels. In addition, the proposed KLR model was compared with the traditional polynomial regression model and another nonparametric technique, locally weighted scatterplot smoothing (LOWESS). An experimental study was performed at the river experiment center in Andong, South Korea. Furthermore, the KLR model was applied to two real case studies, the Migok-cheon stream in Hapcheon-gun and the Pori-cheon stream in Yecheon-gun, and compared to the other models. With these extensive applications to feasible river channels, the results indicated that the proposed KLR model can be a suitable alternative for demarcating the cross-section of a river with point cloud data from UAV aerial surveying, reproducing the critical characteristics of natural and man-made channels, including abrupt changes and small bumps as well as different shapes. Finally, a limitation of the UAV-driven demarcation approach, arising from the inability of RGB sensors to penetrate water, was also discussed.
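To illustrate the idea behind the KLR estimator described in this abstract, here is a minimal sketch of K-nearest-neighbor local linear regression on a synthetic V-shaped channel. It assumes an unweighted local least-squares fit along a single cross-section axis; the paper's exact estimator may use distance weighting, and the function names and parameters below are ours, not the authors'.

```python
import numpy as np

def klr_predict(x_train, y_train, x_query, k=20):
    """K-nearest-neighbor local linear regression (KLR): fit a linear
    model to the k nearest samples of each query point and evaluate it."""
    y_pred = np.empty(len(x_query))
    for i, xq in enumerate(x_query):
        # indices of the k nearest training points along the section axis
        idx = np.argsort(np.abs(x_train - xq))[:k]
        # local least-squares fit: y ~ b0 + b1 * x on the neighborhood
        A = np.column_stack([np.ones(k), x_train[idx]])
        b, *_ = np.linalg.lstsq(A, y_train[idx], rcond=None)
        y_pred[i] = b[0] + b[1] * xq
    return y_pred

# synthetic noisy V-shaped channel, loosely mirroring the simulated tests
rng = np.random.default_rng(0)
x = rng.uniform(-10, 10, 500)            # positions across the channel
z = np.abs(x) + rng.normal(0, 0.2, 500)  # bed elevation with survey noise
xq = np.linspace(-9, 9, 50)
zq = klr_predict(x, z, xq, k=30)         # demarcated cross-section profile
```

Because each prediction uses only a local linear fit, sharp features such as channel banks are preserved rather than smoothed away, which is the property the abstract highlights over global polynomial regression.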

18 pages, 9209 KiB  
Article
UAV Complex-Scene Single-Target Tracking Based on Improved Re-Detection Staple Algorithm
by Yiqing Huang, He Huang, Mingbo Niu, Md Sipon Miah, Huifeng Wang and Tao Gao
Remote Sens. 2024, 16(10), 1768; https://doi.org/10.3390/rs16101768 - 16 May 2024
Viewed by 1182
Abstract
With the advancement of remote sensing technology, the demand for accurate monitoring and tracking of various targets using unmanned aerial vehicles (UAVs) is increasing. However, challenges such as object deformation, motion blur, and object occlusion during the tracking process can significantly degrade tracking performance and ultimately lead to tracking drift. To address this issue, this paper introduces a high-precision target-tracking method with anomalous-tracking-status detection and recovery. An adaptive feature fusion strategy is proposed to improve the adaptability of the traditional sum of template and pixel-wise learners (Staple) algorithm to changes in target appearance and environmental conditions. Additionally, the Moth Flame Optimization (MFO) algorithm, known for its strong global search capability, is introduced as a re-detection algorithm in case of tracking failure. Furthermore, a trajectory-guided Gaussian initialization technique and an iteration speed update strategy based on sexual pheromone density are proposed to enhance the tracking performance of the introduced re-detection algorithm. Comparative experiments conducted on the UAV123 and UAVDT datasets demonstrate the excellent stability and robustness of the proposed algorithm.

20 pages, 16892 KiB  
Article
Long-Range 3D Reconstruction Based on Flexible Configuration Stereo Vision Using Multiple Aerial Robots
by Borwonpob Sumetheeprasit, Ricardo Rosales Martinez, Hannibal Paul and Kazuhiro Shimonomura
Remote Sens. 2024, 16(2), 234; https://doi.org/10.3390/rs16020234 - 7 Jan 2024
Cited by 1 | Viewed by 1731
Abstract
Aerial robots, or unmanned aerial vehicles (UAVs), are widely used in 3D reconstruction tasks employing a wide range of sensors. In this work, we explore the use of wide-baseline and non-parallel stereo vision for fast and movement-efficient long-range 3D reconstruction with multiple aerial robots. Each viewpoint of the stereo vision system is carried by a separate aerial robot, facilitating the adjustment of various parameters, including baseline length, configuration axis, and inward yaw tilt angle. Additionally, multiple aerial robots with different sets of parameters can be used simultaneously, including the use of multiple baselines, which allows 3D monitoring at various depth ranges at the same time, and the combined use of horizontal and vertical stereo, which improves the quality and completeness of depth estimation. Depth estimation at a distance of up to 400 m with less than 10% error using only 10 m of active flight distance is demonstrated in simulation. Additionally, depth estimation at distances of up to 100 m, with a flight distance of up to 10 m along the vertical and horizontal axes, is demonstrated in an outdoor mapping experiment using the developed prototype UAVs.
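The range/baseline trade-off reported in this abstract follows from the standard stereo depth equation Z = fB/d, whose uncertainty grows quadratically with range, which is why spreading the two viewpoints across separate drones helps at long distances. A small sketch with illustrative camera numbers (our assumptions, not the paper's calibration):

```python
# Depth from a rectified stereo pair: Z = f * B / d, with f the focal
# length in pixels, B the baseline between the two drones in meters,
# and d the disparity in pixels. For a disparity uncertainty dd, the
# depth error grows quadratically with range: dZ ~ Z**2 * dd / (f * B).
def depth_from_disparity(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

def depth_error(f_px, baseline_m, z_m, disparity_err_px=0.5):
    return z_m ** 2 * disparity_err_px / (f_px * baseline_m)

f_px = 1400.0           # assumed focal length in pixels
z = 400.0               # target range in meters
for B in (1.0, 10.0):   # narrow vs. wide inter-drone baseline
    err = depth_error(f_px, B, z)
    print(f"baseline {B:5.1f} m -> depth error {err:6.1f} m "
          f"({100 * err / z:.1f}%)")
```

Under these assumed numbers, a 1 m baseline gives roughly 14% depth error at 400 m, while a 10 m baseline brings it near 1.4%, consistent in spirit with the sub-10% error at 400 m that the abstract attributes to the wide, adjustable baseline.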
