Advances in Deep Learning for Drones and Its Applications

A special issue of Drones (ISSN 2504-446X).

Deadline for manuscript submissions: closed (31 March 2023) | Viewed by 64411

Special Issue Editors


Dr. Marija Popović
Guest Editor
Cluster of Excellence "PhenoRob", Rheinische Friedrich-Wilhelms-Universität Bonn, Niebuhrstraße 1a, 53113 Bonn, Germany
Interests: active sensing; environmental mapping; informative path planning; robotic decision-making; agricultural robotics

Dr. Inkyu Sa
Guest Editor
Tencent XR Vision Lab, Canberra, ACT 2601, Australia
Interests: UAV; robot vision; state estimation; deep learning in agriculture (horticulture); reinforcement learning

Special Issue Information

Dear Colleagues,

Drones, especially vertical takeoff and landing (VTOL) platforms, are extremely popular and useful for many tasks. The variety of commercially available VTOL platforms today indicates that they have left the research lab and are being utilized for real-world aerial work, such as vertical structure inspection, construction site surveys, and precision agriculture. These platforms offer high-level autonomous functionalities, minimizing user intervention, and can carry the payloads required for an application.

In addition, we have witnessed rapid growth in machine learning, especially deep learning. State-of-the-art deep learning techniques can already outperform human capabilities in many sophisticated tasks, such as autonomous driving, playing games such as Go or Dota 2 (reinforcement learning), and even medical image analysis (object detection and instance segmentation).

Building on the two cutting-edge technologies mentioned above, there is growing interest in utilizing deep learning techniques for aerial robots in order to improve their capabilities and level of autonomy. This step change will play a pivotal role in both drone technologies and the field of aerial robotics.

Within this context, we invite papers focusing on current advances in deep learning for field aerial robots for submission to this Special Issue.

Papers are solicited on all areas directly related to these topics, including but not limited to the following:

  • Large-scale aerial datasets and standardized benchmarks for the training, testing, and evaluation of deep-learning solutions
  • Deep neural networks (DNNs) for field aerial robot perception (e.g., object detection or semantic classification for navigation)
  • Recurrent networks for state estimation and dynamic identification of aerial vehicles
  • Deep reinforcement learning for aerial robots (discrete or continuous control) in dynamic environments
  • Learning-based aerial manipulation in cluttered environments
  • Decision making or task planning using machine learning for field aerial robots
  • Data analytics and real-time decision making with aerial robots-in-the-loop
  • Aerial robots in agriculture using deep learning
  • Aerial robots in inspection using deep learning
  • Imitation learning for aerial robots (e.g., teach and repeat)
  • Multi-aerial-agent coordination using deep learning

Dr. Marija Popović
Dr. Inkyu Sa
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • robotics
  • aerial robots
  • UAVs
  • drones
  • remote sensing
  • deep learning
  • deep neural networks
  • computer vision
  • robotic perception

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (11 papers)


Research

18 pages, 5422 KiB  
Article
MS-YOLOv7: YOLOv7 Based on Multi-Scale for Object Detection on UAV Aerial Photography
by LiangLiang Zhao and MinLing Zhu
Drones 2023, 7(3), 188; https://doi.org/10.3390/drones7030188 - 9 Mar 2023
Cited by 45 | Viewed by 9491
Abstract
A multi-scale UAV aerial image object detection model, MS-YOLOv7, based on YOLOv7, is proposed to address the large number of objects and the high proportion of small objects that commonly appear in Unmanned Aerial Vehicle (UAV) aerial images. The new network is developed with multiple detection heads and a CBAM convolutional attention module to extract features at different scales. To address high-density object detection, a YOLOv7 network architecture combined with Swin Transformer units is proposed, and a new pyramidal pooling module, SPPFS, is incorporated into the network. Finally, we incorporate Soft-NMS and the Mish activation function to improve the network's ability to identify overlapping and occluded objects. Experiments on the open-source dataset VisDrone2019 reveal that the new model brings a significant performance boost compared to other state-of-the-art (SOTA) models. Compared with the baseline YOLOv7 object detection algorithm, the mAP0.5 of MS-YOLOv7 increased by 6.0% and the mAP0.95 by 4.9%. Ablation experiments show that the designed modules improve detection accuracy and visually demonstrate the detection effect in different scenarios. These experiments demonstrate the applicability of MS-YOLOv7 to object detection in UAV aerial photography.
(This article belongs to the Special Issue Advances in Deep Learning for Drones and Its Applications)
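The Soft-NMS step mentioned in the abstract replaces the hard suppression of overlapping detections with score decay, so occluded objects keep a reduced score instead of being discarded. A minimal pure-Python sketch of the standard Gaussian Soft-NMS (not the authors' implementation; the box format and parameter names are illustrative):

```python
import math

def iou(a, b):
    # a, b: (x1, y1, x2, y2) axis-aligned boxes
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def soft_nms(detections, sigma=0.5, score_thresh=0.001):
    """detections: list of (box, score); returns survivors with decayed scores."""
    dets = [(tuple(b), float(s)) for b, s in detections]
    kept = []
    while dets:
        # pick the current highest-scoring detection
        i = max(range(len(dets)), key=lambda k: dets[k][1])
        best = dets.pop(i)
        kept.append(best)
        # Gaussian decay instead of hard suppression: boxes overlapping
        # the winner keep a reduced score rather than being removed
        dets = [(b, s * math.exp(-iou(best[0], b) ** 2 / sigma)) for b, s in dets]
        dets = [d for d in dets if d[1] > score_thresh]
    return kept
```

With a low enough `score_thresh`, a heavily overlapping box survives with a decayed score, which is exactly what helps with occluded objects.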

22 pages, 4953 KiB  
Article
FRCNN-Based Reinforcement Learning for Real-Time Vehicle Detection, Tracking and Geolocation from UAS
by Chandra Has Singh, Vishal Mishra, Kamal Jain and Anoop Kumar Shukla
Drones 2022, 6(12), 406; https://doi.org/10.3390/drones6120406 - 9 Dec 2022
Cited by 21 | Viewed by 3278
Abstract
In the last few years, uncrewed aerial systems (UASs) have been broadly employed for many applications, including urban traffic monitoring. However, the detection, tracking, and geolocation of moving vehicles using UAVs face problems such as low-accuracy sensors, complex scenes, small object sizes, and motion-induced noise. To address these problems, this study presents an intelligent, self-optimised, real-time framework for automated vehicle detection, tracking, and geolocation in UAV-acquired images which combines detection, location, and tracking features to improve the final decision. Noise is initially reduced by applying the proposed adaptive filtering, which makes the detection algorithm more versatile. Thereafter, in the detection step, top-hat and bottom-hat transformations are used, assisted by the Overlapped Segmentation-Based Morphological Operation (OSBMO). Following the detection phase, background regions are removed by analysing the motion feature points of the obtained object regions using a method that combines Kanade–Lucas–Tomasi (KLT) trackers with Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The extracted object features are clustered into separate objects on the basis of their motion characteristics. Finally, vehicle labels are assigned to their corresponding cluster trajectories by an efficient reinforcement connecting algorithm, whose policy-making possibilities are evaluated. A Fast Region-based Convolutional Neural Network (Fast R-CNN) is designed and trained on a small collection of samples, then used to remove false targets. The proposed framework was tested on videos acquired in various scenarios. The methodology demonstrates its capability through the automatic supervision of target vehicles in real-world trials, indicating its potential in intelligent transport systems and other surveillance applications.

15 pages, 618 KiB  
Article
Real-Time Monitoring of Parameters and Diagnostics of the Technical Condition of Small Unmanned Aerial Vehicle’s (UAV) Units Based on Deep BiGRU-CNN Models
by Kamil Masalimov, Tagir Muslimov and Rustem Munasypov
Drones 2022, 6(11), 368; https://doi.org/10.3390/drones6110368 - 21 Nov 2022
Cited by 11 | Viewed by 2718
Abstract
The paper describes an original technique for the real-time monitoring of parameters and technical diagnostics of small unmanned aerial vehicle (UAV) units using neural network models with the proposed CompactNeuroUAV architecture. As input data, the operation parameter values for a certain period preceding the current moment and the actual control actions on the UAV actuators are used. A reference parameter-set model is trained on historical data. CompactNeuroUAV is a combined neural network consisting of convolutional layers that compact the data and recurrent layers with gated recurrent units that encode the time dependence of parameters. Processing yields the expected parameter value and estimates the deviation of the actual value of a parameter, or a set of parameters, from the reference model. Faults that cause the deviation threshold to be crossed are then classified: a smart classifier detects the failed UAV unit and the cause and type of the fault or pre-failure condition. The paper also provides the results of experimental validation of the proposed approach on the ALFA dataset for fixed-wing UAVs. Models have been built to detect conditions such as engine thrust loss; full left or right rudder fault; elevator fault in a horizontal position; loss of control over the left, right, or both ailerons in a horizontal position; and loss of control over the rudder and ailerons stuck in a horizontal position. The accuracy of the developed models on a test dataset is also reported.
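The diagnostic pipeline described above reduces, at its core, to a residual test: the reference model predicts the expected parameter value, and a deviation beyond a threshold flags a candidate fault for the downstream classifier. A minimal sketch of that check (function and parameter names are illustrative, not from the paper):

```python
def flag_deviation(expected, actual, threshold):
    # Residual-based check: the reference model supplies the expected
    # parameter value; a residual above the threshold marks a candidate
    # fault to be handed to the fault/pre-failure classifier.
    residual = abs(expected - actual)
    return residual > threshold, residual
```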

22 pages, 17550 KiB  
Article
Enhancing Drones for Law Enforcement and Capacity Monitoring at Open Large Events
by Pablo Royo, Àlex Asenjo, Juan Trujillo, Ender Çetin and Cristina Barrado
Drones 2022, 6(11), 359; https://doi.org/10.3390/drones6110359 - 17 Nov 2022
Cited by 5 | Viewed by 3690
Abstract
Police tasks related to law enforcement and citizen protection have gained a very useful asset in drones. Crowded demonstrations, large sporting events, and summer festivals are typical situations in which aerial surveillance is necessary. The eyes in the sky are moving from manned helicopters to drones due to cost, environmental impact, and discretion, and local, regional, and national police forces now possess specific units equipped with drones. In this paper, we describe an artificial intelligence solution developed for the Castelldefels local police (Barcelona, Spain) to enhance the capabilities of drones used for the surveillance of large events. In particular, we propose a novel methodology for the efficient integration of deep learning algorithms in drone avionics. This integration improves the drone's capabilities for tasks related to capacity control, tasks that have been very relevant during the pandemic and beyond. Controlling the number of people in an open area is crucial when the expected crowd might exceed the capacity of the area and put people in danger. The new methodology provides efficient and accurate execution of deep learning algorithms, which usually demand substantial computational resources. Results show that state-of-the-art artificial intelligence models are too slow when run on a drone's standard equipment, and they lose accuracy when images are taken at altitudes above 30 m. With our new methodology, these two drawbacks can be overcome, and results with good accuracy (96% correct segmentation and between 20% and 35% mean average proportional error) are obtained in less than 20 s.

21 pages, 5075 KiB  
Article
Lightweight Detection Network for Arbitrary-Oriented Vehicles in UAV Imagery via Global Attentive Relation and Multi-Path Fusion
by Jiangfan Feng and Chengjie Yi
Drones 2022, 6(5), 108; https://doi.org/10.3390/drones6050108 - 27 Apr 2022
Cited by 20 | Viewed by 4419
Abstract
Recent advances in unmanned aerial vehicles (UAVs) have increased the altitude capability of road-traffic monitoring. However, state-of-the-art vehicle detection methods still lack accuracy and lightweight structures on UAV platforms due to background uncertainty and the scales, densities, shapes, and orientations of objects that result from the UAV imagery's shooting angle. We propose a lightweight solution to detect arbitrarily oriented vehicles under uncertain backgrounds, varied resolutions, and changing illumination conditions. We first present a cross-stage partial bottleneck transformer (CSP BoT) module that exploits the global spatial relationships captured by multi-head self-attention, validating its value for modelling implicit dependencies. We then propose an angle classification prediction branch in the YOLO head network to detect arbitrarily oriented vehicles in UAV images and employ a circular smooth label (CSL) to reduce the classification loss. We further improve the multi-scale feature maps by combining the prediction head network with an adaptive spatial feature fusion block (ASFF-Head), which adapts to the spatial variation of prediction uncertainties. Our method features a compact, lightweight design that automatically recognizes key geometric factors in UAV images. It demonstrates superior performance under environmental changes while being easy to train and highly generalizable. This learning ability makes the proposed method applicable to geometric structure and uncertainty estimation. Extensive experiments on the UAV vehicle dataset UAV-ROD and the remote sensing dataset UCAS-AOD demonstrate the superiority and cost-effectiveness of the proposed method, making it practical for urban traffic monitoring and public security.

16 pages, 2947 KiB  
Article
An Intelligent Quadrotor Fault Diagnosis Method Based on Novel Deep Residual Shrinkage Network
by Pu Yang, Huilin Geng, Chenwan Wen and Peng Liu
Drones 2021, 5(4), 133; https://doi.org/10.3390/drones5040133 - 8 Nov 2021
Cited by 16 | Viewed by 3282
Abstract
In this paper, a fault diagnosis algorithm named improved one-dimensional deep residual shrinkage network with a wide convolutional layer (1D-WIDRSN) is proposed for quadrotor propellers with minor damage; it can effectively identify the fault classes of a quadrotor under interference information, without additional denoising procedures. In short, the algorithm can locate and diagnose early minor faults of the quadrotor from flight data, so that the quadrotor can be repaired before serious faults occur, prolonging its service life. First, the sliding window method is used to expand the number of samples. Then, a novel progressive semi-soft threshold is proposed to replace the soft threshold in the deep residual shrinkage network (DRSN), so noise in the signal features can be eliminated more effectively. Finally, building on the deep residual shrinkage network, a wide convolutional layer and the DropBlock method are introduced to further enhance the anti-noise and anti-overfitting abilities of the model, allowing it to effectively extract fault features and classify faults. Experimental results show that 1D-WIDRSN, applied to the diagnosis of minor quadrotor propeller faults, can accurately identify the fault category under interference information, with a diagnosis accuracy of over 98%.
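The baseline operation that the paper's progressive semi-soft threshold replaces is the standard soft threshold used in deep residual shrinkage networks: values inside the noise band [-τ, τ] are zeroed, and larger magnitudes are shrunk toward zero by τ. A minimal sketch of that baseline (the semi-soft variant itself is the paper's contribution and is not reproduced here):

```python
def soft_threshold(x, tau):
    # Standard soft thresholding: zero out small (noise-dominated)
    # activations, shrink the remaining ones toward zero by tau.
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0
```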

11 pages, 15165 KiB  
Article
MAGI: Multistream Aerial Segmentation of Ground Images with Small-Scale Drones
by Danilo Avola and Daniele Pannone
Drones 2021, 5(4), 111; https://doi.org/10.3390/drones5040111 - 4 Oct 2021
Cited by 9 | Viewed by 2598
Abstract
In recent years, small-scale drones have been used in heterogeneous tasks, such as border control, precision agriculture, and search and rescue. This is mainly due to their small size, which allows for easy deployment, their low cost, and their increasing computing capability. The latter aspect allows researchers and industry to develop complex machine- and deep-learning algorithms for several challenging tasks, such as object classification, object detection, and segmentation. Focusing on segmentation, this paper proposes a novel deep-learning model for semantic segmentation. The model follows a fully convolutional multistream approach to perform segmentation at different image scales. Several streams perform convolutions with kernels of different sizes, making the segmentation robust to flight altitude changes. Extensive experiments were performed on the UAV Mosaicking and Change Detection (UMCD) dataset, highlighting the effectiveness of the proposed method.

24 pages, 7747 KiB  
Article
Multiscale Object Detection from Drone Imagery Using Ensemble Transfer Learning
by Rahee Walambe, Aboli Marathe and Ketan Kotecha
Drones 2021, 5(3), 66; https://doi.org/10.3390/drones5030066 - 23 Jul 2021
Cited by 37 | Viewed by 12375
Abstract
Object detection in uncrewed aerial vehicle (UAV) images has been a longstanding challenge in the field of computer vision. Specifically, object detection in drone images is complex because objects appear at various scales, from humans to buildings, water bodies, and hills. In this paper, we present an implementation of ensemble transfer learning to enhance the performance of base models for multiscale object detection in drone imagery. Combined with a test-time augmentation pipeline, the algorithm combines different models and applies voting strategies to detect objects of various scales in UAV images. The data augmentation also addresses the deficiency of drone image datasets. We experimented with two open datasets: the VisDrone dataset and the AU-AIR dataset. Our approach is more practical and efficient due to the use of transfer learning and a two-level voting-strategy ensemble instead of training custom models on entire datasets. The experiments show a significant improvement in mAP for both the VisDrone and AU-AIR datasets with the ensemble transfer learning method. Furthermore, the use of voting strategies increases the reliability of the ensemble, as the end-user can select and trace the effects of the mechanism on bounding box predictions.

16 pages, 2874 KiB  
Article
Drone Trajectory Segmentation for Real-Time and Adaptive Time-Of-Flight Prediction
by Claudia Conte, Giorgio de Alteriis, Rosario Schiano Lo Moriello, Domenico Accardo and Giancarlo Rufino
Drones 2021, 5(3), 62; https://doi.org/10.3390/drones5030062 - 16 Jul 2021
Cited by 15 | Viewed by 5595
Abstract
This paper presents a method for predicting the flight time a drone needs to complete a planned path, adopting a machine-learning-based approach. A generic path is cut into properly designed corner-shaped standard sub-paths, and the flight time needed to travel along a standard sub-path is predicted by a suitably trained neural network. The flight time over the complete path is then computed by summing the partial results for the standard sub-paths. Real drone flight tests were performed to build an adequate database for training the adopted neural network as a classifier, employing the Bayesian regularization backpropagation algorithm as the training function. The network takes the relative angle between two sides of a corner and the wind condition as inputs, and the flight time over the corner as the output. Generic paths were then designed and flown to test the method. The total flight time resulting from the drone telemetry was compared with the flight time predicted by the developed method. Finally, the proposed method is demonstrated to be effective in predicting possible collisions among drones flying intersecting paths, a possible application supporting the development of unmanned traffic management procedures.
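The decomposition described in the abstract — predict a flight time per corner-shaped sub-path from the corner angle and the wind condition, then sum the partial predictions over the whole path — can be sketched as follows (the per-corner model stands in for the trained network; the names are illustrative):

```python
def predict_path_time(sub_paths, corner_model):
    # sub_paths: list of (corner_angle_deg, wind) pairs, one per
    # standard corner-shaped sub-path of the planned route.
    # corner_model: stand-in for the trained per-corner time predictor.
    # The total flight time is the sum of the per-corner predictions.
    return sum(corner_model(angle, wind) for angle, wind in sub_paths)
```

Here `corner_model` plays the role of the trained classifier's time estimate for one corner; the path total is simply the sum of its outputs.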

14 pages, 1271 KiB  
Article
Acoustic-Based UAV Detection Using Late Fusion of Deep Neural Networks
by Pietro Casabianca and Yu Zhang
Drones 2021, 5(3), 54; https://doi.org/10.3390/drones5030054 - 26 Jun 2021
Cited by 30 | Viewed by 7222
Abstract
Multirotor UAVs have become ubiquitous in commercial and public use. As they become more affordable and more widely available, the associated security risks increase, especially in relation to airspace breaches and the danger of drone-to-aircraft collisions. Thus, robust systems must be put in place to detect and deal with hostile drones. This paper investigates the use of deep learning methods to detect UAVs from acoustic signals. Deep neural network models are trained with mel-spectrograms as inputs. Convolutional Neural Networks (CNNs) are shown to be the best-performing network, compared with Recurrent Neural Networks (RNNs) and Convolutional Recurrent Neural Networks (CRNNs). Furthermore, late-fusion methods were evaluated using an ensemble of deep neural networks, where a weighted soft-voting mechanism achieved the highest average accuracy of 94.7%, outperforming the individual models. In future work, the developed late-fusion technique could be combined with radar and visual methods to further improve UAV detection performance.
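The weighted soft-voting fusion reported above combines the per-class probabilities of the individual networks (CNN, RNN, CRNN) into one decision. A minimal sketch of that mechanism, with the weights and probabilities purely illustrative:

```python
def weighted_soft_vote(model_probs, weights):
    # model_probs: one probability vector per model (same class order);
    # weights: per-model reliability weights.
    n_classes = len(model_probs[0])
    total = sum(weights)
    # fuse by weighted-averaging each class probability across models
    fused = [sum(w * p[c] for p, w in zip(model_probs, weights)) / total
             for c in range(n_classes)]
    # predicted class is the argmax of the fused distribution
    return max(range(n_classes), key=fused.__getitem__), fused
```

With classes (drone, no-drone) and a doubled weight on the third model, two models disagreeing with one confident model can still be outvoted, which is what late fusion buys over any single network.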

Other

32 pages, 1825 KiB  
Systematic Review
Scalable and Cooperative Deep Reinforcement Learning Approaches for Multi-UAV Systems: A Systematic Review
by Francesco Frattolillo, Damiano Brunori and Luca Iocchi
Drones 2023, 7(4), 236; https://doi.org/10.3390/drones7040236 - 28 Mar 2023
Cited by 20 | Viewed by 5474
Abstract
In recent years, the use of multiple unmanned aerial vehicles (UAVs) in various applications has progressively increased thanks to advancements in multi-agent system technology, which enables the accomplishment of complex tasks that require cooperative and coordinated abilities. In this article, multi-UAV applications are grouped into five classes based on their primary task: coverage, adversarial search and games, computational offloading, communication, and target-driven navigation. Using a systematic review approach, we select the most significant works that apply deep reinforcement learning (DRL) techniques to cooperative and scalable multi-UAV systems and discuss their features with extensive and constructive critical reasoning. Finally, we present the most promising research directions by highlighting the limitations of currently held assumptions and the constraints of collaborative DRL-based multi-UAV systems. The suggested areas of research can improve the transfer of knowledge from simulation to real-world environments and increase the responsiveness and safety of UAV systems.
