Advances in Imaging and Sensing for Drones

A special issue of Drones (ISSN 2504-446X).

Deadline for manuscript submissions: closed (8 October 2023) | Viewed by 7600

Special Issue Editors


Guest Editor
School of Computer Science and Technology, Ocean University of China, Qingdao 266100, China
Interests: computer vision; remote sensing image object detection; remote sensing image segmentation

Guest Editor
College of Intelligence and Computing, Tianjin University, Tianjin 300350, China
Interests: computer vision; collaborative learning; evolutionary learning

Guest Editor
The Guangxi Key Lab of Multi-Source Information Mining & Security, Guangxi Normal University, Guilin 541004, China
Interests: collaborative computing; edge computing; object detection; object tracking; person re-identification; remote sensing

Special Issue Information

Dear Colleagues,

Drones have become a popular tool for a variety of applications, including agriculture and forestry, resource surveying, and marine environment monitoring, owing to their flexibility, low cost, easy maintenance, and high-resolution, fast imaging capabilities. Drones can fill the gap left by satellite remote sensing by accurately capturing areas without being limited by revisit cycles. However, the large field of view and high-resolution imaging characteristics of drones create additional challenges for drone image analysis. As a result, the analysis, fusion, and co-application of drone imagery have become some of the most pressing issues in the field.

The goal of this Special Issue is to collect papers (original research articles and review papers) to give insights about advances in imaging and sensing for drones.

This Special Issue focuses on the broad development prospects of processing image and sensor data acquired by drones. It covers theories and methods ranging from image analysis and online processing to real-world practical applications, achieved through image/signal processing or machine/deep learning algorithms. The latest technological developments will be shared through this Special Issue. Researchers and investigators are invited to contribute original research or review articles.

This Special Issue welcomes manuscripts that address the following themes:

  • Analysis of drone videos/images (image classification, object detection, object tracking, image segmentation, feature extraction, change detection, etc.).
  • Data stitching and fusion: stitching of large-scale data from drone images and fusion with satellite, aerial or ground data.
  • Real-time processing of drone data: research on embedded platforms carried by drones.
  • Development of intelligent ground-control software for drone flight: research into intelligent control.
  • Applications (marine monitoring, resource surveying, search and rescue, agriculture, forestry, urban monitoring, disaster prevention and assessment, etc.).

Any content related to drone imagery is welcome for submission.

We look forward to receiving your original research articles and reviews.

Dr. Shengke Wang
Dr. Pengfei Zhu
Prof. Dr. Bineng Zhong
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

    • object detection and tracking
    • image segmentation and feature extraction
    • image stitching and fusion
    • online and real-time processing
    • environment monitoring and inspection
    • computer vision
    • low-cost remote sensing
    • unmanned aerial vehicles (UAVs)

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

19 pages, 16057 KiB  
Article
DroneNet: Rescue Drone-View Object Detection
by Xiandong Wang, Fengqin Yao, Ankun Li, Zhiwei Xu, Laihui Ding, Xiaogang Yang, Guoqiang Zhong and Shengke Wang
Drones 2023, 7(7), 441; https://doi.org/10.3390/drones7070441 - 3 Jul 2023
Cited by 4 | Viewed by 3376
Abstract
Recently, research on drone-view object detection (DOD) has predominantly centered on efficiently identifying objects by cropping high-resolution images. However, it has overlooked the distinctive challenges posed by scale imbalance and the higher prevalence of small objects in drone images. In this paper, to address the challenges associated with DOD, we introduce a specialized detector called DroneNet. First, we propose a feature information enhancement module (FIEM) that effectively preserves object information and can be seamlessly integrated as a plug-and-play module into the backbone network. Then, we propose a split-concat feature pyramid network (SCFPN) that not only fuses feature information from different scales but also enables more comprehensive exploration of feature layers containing many small objects. Finally, we develop a coarse-to-refine label assignment (CRLA) strategy for small objects, which assigns labels from coarse- to fine-grained levels and ensures adequate training of small objects. In addition, to further promote the development of DOD, we introduce a new dataset named OUC-UAV-DET. Extensive experiments on VisDrone2021, UAVDT, and OUC-UAV-DET demonstrate that our proposed detector, DroneNet, exhibits significant improvements in handling challenging targets, outperforming state-of-the-art detectors.
(This article belongs to the Special Issue Advances in Imaging and Sensing for Drones)

20 pages, 13001 KiB  
Article
Towards Robust Visual Tracking for Unmanned Aerial Vehicle with Spatial Attention Aberration Repressed Correlation Filters
by Zhao Zhang, Yongxiang He, Hongwu Guo, Jiaxing He, Lin Yan and Xuanying Li
Drones 2023, 7(6), 401; https://doi.org/10.3390/drones7060401 - 16 Jun 2023
Cited by 1 | Viewed by 1696
Abstract
In recent years, correlation filtering has been widely used in UAV target tracking for its high efficiency and good robustness, even on a common CPU. However, existing correlation filter-based tracking methods still struggle with challenges such as fast-moving targets, camera shake, and partial occlusion in UAV scenarios. Furthermore, the lack of a reasonable attention mechanism for distortion and background information prevents the limited computational resources from being focused on the part of the object most severely affected by interference. In this paper, we propose the spatial attention aberration repressed correlation filter (SAARCF), which models aberrations, makes full use of their spatial information, assigns them different levels of attention, and can better cope with these challenges. In addition, we propose a mechanism for intermittent learning of the global context to balance the efficient use of limited computational resources against coping with various complex scenarios. We tested the method on challenging UAV benchmarks such as UAVDT and VisDrone2018, and the experiments show that SAARCF outperforms state-of-the-art trackers.
(This article belongs to the Special Issue Advances in Imaging and Sensing for Drones)

22 pages, 1163 KiB  
Article
Learning to Propose and Refine for Accurate and Robust Tracking via an Alignment Convolution
by Zhiyi Mo and Zhi Li
Drones 2023, 7(6), 343; https://doi.org/10.3390/drones7060343 - 25 May 2023
Viewed by 1379
Abstract
Precise and robust feature extraction plays a key role in high-performance tracking when analysing videos from drones, surveillance, automatic driving, etc. However, most existing Siamese network-based trackers focus on constructing complicated network models and refinement strategies while using comparatively simple, heuristic conventional or deformable convolutions to extract features from sampling positions that may be far from the target region. Consequently, the coarsely extracted features may introduce background noise and degrade tracking performance. To address this issue, we present a propose-and-refine tracker (PRTracker) that combines anchor-free proposals at the coarse level with alignment convolution-driven refinement at the fine level. Specifically, at the coarse level, we design an anchor-free model to effectively generate proposals that provide more reliable regions of interest for further verification. At the fine level, an alignment convolution-based refinement strategy re-localizes the convolutional sampling positions of the proposals, improving the accuracy of the extracted features and thus the classification and regression of the proposals. Finally, a simple yet robust target mask is designed to make full use of the initial state of a target to further improve tracking performance. The proposed PRTracker achieves competitive performance on six tracking benchmarks (i.e., UAV123, VOT2018, VOT2019, OTB100, NfS and LaSOT) at 75 FPS.
(This article belongs to the Special Issue Advances in Imaging and Sensing for Drones)
