Article

Edge Computing-Driven Real-Time Drone Detection Using YOLOv9 and NVIDIA Jetson Nano

Department of Electrical and Computer Science Engineering, Institute of Infrastructure Technology Research and Management (IITRAM), Ahmedabad 380026, India
* Author to whom correspondence should be addressed.
Drones 2024, 8(11), 680; https://doi.org/10.3390/drones8110680
Submission received: 5 October 2024 / Revised: 6 November 2024 / Accepted: 12 November 2024 / Published: 19 November 2024

Abstract

Drones, with their ability to take off and land vertically and to hover stably, are becoming increasingly popular in both civilian and military domains. However, this also introduces risks of misuse, which may include security threats to airports and institutes of national importance, threats to VIP security, drug trafficking, privacy breaches, etc. To address these issues, automated drone detection systems are essential for preventing unauthorized drone activities. Real-time detection requires high-performance devices such as GPUs. For our experiments, we utilized the NVIDIA Jetson Nano to support YOLOv9-based drone detection. The performance evaluation of YOLOv9 to detect drones is based on metrics such as mean average precision (mAP), frames per second (FPS), precision, recall, and F1-score. Experimental data revealed significant improvements over previous models, with a mAP of 95.7%, a precision of 0.946, a recall of 0.864, and an F1-score of 0.903, marking a 4.6% enhancement over YOLOv8. This paper utilizes YOLOv9, optimized with pre-trained weights and transfer learning, achieving significant accuracy in real-time drone detection. Integrated with the NVIDIA Jetson Nano, the system effectively identifies drones at altitudes ranging from 15 feet to 110 feet while adapting to various environmental conditions. The model’s precision and adaptability make it particularly suitable for deployment in security-sensitive areas, where quick and accurate detection is crucial. This research establishes a solid foundation for future counter-drone applications and shows great promise for enhancing situational awareness in critical, high-risk environments.

1. Introduction

Unmanned aerial vehicles (UAVs), commonly known as drones, are aircraft that can be operated remotely or fly autonomously. These devices come in various forms and sizes, from small consumer models to large, military-grade drones with cutting-edge sensors and weaponry [1]. Over the past 15 years, the drone industry has experienced exponential growth, becoming increasingly accessible to the public and available at more affordable prices [2]. Depending on their payload capacity, drones are employed in a range of applications. These include, but are not limited to, medical aid [3,4,5], disaster response [6,7,8], logistics and transportation [9,10], agriculture [11,12], remote sensing [13,14], space exploration [15], and inspection activities [16,17,18]. The multipurpose uses [19] of drones are illustrated in Figure 1. Their versatility, flexibility, and ability to access remote or dangerous areas make them valuable tools in numerous industries. In the US, there were 791,597 drones registered as of 1 October 2024, with 387,746 of them being recreational [20].
The proliferation of drones by terrorist organizations and individuals involved in illegal drug trafficking has increased significantly. Additionally, the growing number of hobbyist drone operators introduces the risk of interference with critical operations such as firefighting and emergency response efforts. Consequently, there has been a surge in the misuse of drones. To counter unauthorized and undesirable drone interventions, automated drone detection is deemed necessary.
For the effective development of a drone detection system, it is crucial to consider the various threats posed by unmanned aerial vehicles. Figure 2 summarizes the primary categories of drone threats, which include unauthorized surveillance, smuggling, harassment, interfering with aircraft, violating no-fly zones, espionage, and collisions.
During a national outdoor event in August 2018, President Nicolas Maduro of Venezuela faced an assassination attempt when two explosive-laden drones targeted him, although they ultimately failed [21]. This incident stands out as the first recorded use of a drone in an attack on a country’s head of state. The closure of Gao International Airport ensued after a recreational drone collided with a runway truck, causing considerable damage and subsequent flight disruptions, emphasizing the urgent need for enhanced drone safety protocols at airports [22]. When suspicious drones were found close to the runway at New Delhi’s Indira Gandhi International Airport in August 2017, there was a serious security violation. This incident forced a temporary suspension of flight operations, leading to widespread disruptions. The event underscored the potential risks and vulnerabilities airports face from unauthorized drone activity [23]. In January 2018, the Russian military base at Khmeimim in Syria was targeted by a swarm of 13 homemade aerial drones [24]. The UAV attacks on 14 September 2019 at Saudi Arabia’s Khurais oilfield and Abqaiq processing plant resulted in a 5.7-million-barrel decrease in oil production, escalating crude oil prices by 15% globally [25,26]. A smuggling operation involving drones that transported mobile phones worth over HKD 620 million from Hong Kong was dismantled by authorities [27]. Moreover, incidents of drone violations have been reported during sports events, with drones illegally flying over football stadiums to cause disruptions [28]. The massive use of drones for offensive, defensive, and intelligence purposes was evident during the Ukrainian–Russian war in 2022 [29,30]. In 2024, there has been a notable increase in drone-related incidents, emphasizing the urgent need for effective counter-drone technologies and sensible regulations for small UAV usage. These incidents, which include smuggling, illegal intrusions, collisions, and technical malfunctions, have impacted various sectors. Most unauthorized drone activities occurred in sensitive locations like airports, border areas, prisons, and residential communities. Since January 2024, approximately 218 drone incidents have been reported [31]. Such incidents highlight the importance of UAV detection technologies in preventing unauthorized drone activity. These systems are engineered to detect drones; determine their position; and gather critical information about their type, direction, and speed to enable timely neutralization.
The tasks of detecting and classifying UAVs present several significant challenges, which researchers [32,33] are actively working to address. UAVs vary in size and speed, which complicates detection and classification due to their distinct flying characteristics and shapes. High speeds, unpredictable flight patterns, and resemblance to birds or airplanes make accurate UAV identification challenging, necessitating fast and precise detection systems. UAVs operate at various altitudes and ranges, with low-altitude, short-range drones posing unique detection difficulties and potential risks for traditional countermeasure systems. Weather, urban obstructions, and adverse lighting can impair sensor effectiveness, leading to false positives or negatives in UAV identification.
The challenges in drone detection are addressed by utilizing a transfer learning approach with a Jetson Nano and a camera setup at the base station. Using YOLOv9 and deep learning methods, this setup enables the detection of drones at various altitudes and ranges. The camera captures data, which are processed in real time by the Jetson Nano at the base station, improving the detection and classification accuracy even in complex environments. This setup facilitates quick and effective responses to potential threats by analyzing factors such as speed, flight patterns, and environmental conditions.
The key contributions of the paper are as follows:
  • We employ the YOLOv9 algorithm, optimized through transfer learning, to train a comprehensive drone detection dataset.
  • The proposed method integrates the NVIDIA Jetson Nano platform for real-time drone detection, enabling efficient processing with reduced response time and energy consumption.
  • Environmental adaptability: The algorithm is tested across different environmental conditions, including daytime, sunny, and evening settings, demonstrating robustness in varying lighting and weather conditions.
  • Altitude-based drone detection: The detection system is validated at different altitudes, including 15 feet, 60 feet, and 110 feet, ensuring high accuracy across varying drone heights.
The rest of the paper is organized as follows: Section 2 outlines our approach to drone detection using various deep learning techniques. A comprehensive overview of the proposed system is provided in Section 3, detailing its architecture, hardware components, software framework, and the dataset employed, as well as the experimental setup that addresses the training and validation of the dataset, including transfer learning on a low-power edge computing device. Section 4 presents the results of the study and provides a discussion of our findings. Section 5 concludes with a summary of our results and discusses future research directions.

2. Overview of Drone Detection Technologies

Drone detection entails the identification and tracking of drones as they navigate through the sky. During operation, drones emit a variety of signals, such as acoustic noise and radio frequency (RF) emissions. Advanced detection technologies, including radar, LIDAR, and vision sensors, can pick up these signals. Depending on the detection technology used, drone detection can be classified into four key categories [34].
  • Acoustic Detection [35,36,37,38]: This method captures the unique sound patterns produced by drone propellers and motors [39]. It is relatively inexpensive and does not need a direct line of sight with the drone, so it functions well in dimly lit areas and difficult environments such as fog or dust. In isolated locations with little background noise, this technique performs admirably. However, its accuracy is significantly affected by background noise and weather conditions such as rain and wind [40]. These factors can interfere with the microphone arrays, making it harder to isolate the drone’s acoustic signature and potentially leading to false positives or a reduced detection range. The detection range extends to about 200 m, though it may vary with environmental factors.
  • RF-Based Detection [41,42,43,44]: Drones typically communicate with their controllers using RF signals in the 2.4 GHz to 5 GHz range [45]. The 2.4 GHz band is commonly used for longer-range communication, while the 5 GHz band can provide higher data rates and real-time control [46]. A drone’s controller can listen in on signals transmitted by the drone using RF sensors. However, this method can struggle with distinguishing between drones and other RF sources like Wi-Fi or Bluetooth devices. Additionally, it may require precise calibration and can be affected by signal interference in urban environments [47].
  • Radar-Based Detection [48,49,50,51]: Radars are commonly utilized for aircraft detection in military and civil fields, including aviation, and are thus recognized as reliable tools for detecting drones. A radar transmits radio waves, typically in the microwave range, and analyzes the reflected waves to determine the presence, distance, and speed of objects such as aircraft or drones [52]. The major advantage of radar-based detection is its high accuracy in identifying and localizing objects compared to other methods. However, traditional radars face limitations, as they are designed to detect larger aircraft and often struggle with smaller objects like drones [53]. They may also have difficulty distinguishing between hovering drones and static reflective objects, as well as between small drones and birds. Furthermore, the drawbacks of radar-based drone detection include high costs and the necessity for specialized skills for installation and maintenance [54].
  • Vision-Based Detection [55,56,57,58]: A visual detection system captures drone images or videos using daylight, infrared, or thermal cameras and applies computer-vision-based algorithms for detection [59]. These systems rely on computer vision and deep learning to identify drones by analyzing appearance features such as color, shape, contour, and motion across successive frames [60]. This approach leverages both object recognition and motion tracking for effective drone identification in diverse environments. Visual drone detection provides precise identification and tracking through visual cues like shape and markings, which are difficult for acoustic and RF methods to detect. Unlike radar-based systems, it works efficiently despite signal interference and remains reliable in different environmental conditions, including adverse weather. Additionally, vision systems can integrate advanced artificial intelligence [61] for automated decision-making, enhancing overall detection accuracy and efficiency in dynamic airspace environments. Visual-based drone detection is categorized into two primary approaches: traditional techniques and deep learning methods [62]. Traditional techniques rely on filtering [63], threshold segmentation [64], and morphological operations [65] for handcrafted feature extraction. Despite their advantages, these methods are constrained by speed, accuracy, and their ability to adjust to environmental changes [66]. In contrast, contemporary research into visual-based drone detection and identification has increasingly leveraged deep learning models and feature learning to improve accuracy [67].
Deep learning, a cutting-edge method within machine learning, has gained prominence due to its exceptional performance and precision in results [68,69]. It is particularly adept at extracting features directly from raw data. The term “deep” refers to the multiple layers that exist between the input and output layers, where features are extracted in a hierarchical and nonlinear manner [70]. Object detection through deep learning methods fundamentally utilizes CNNs to extract features from images. These techniques are generally classified as one-stage and two-stage detectors [71].

2.1. Two-Stage Detectors

Two-stage detectors [72] initially generate region proposals (RoIs) and subsequently classify and refine the bounding boxes, resulting in higher accuracy but slower performance. Figure 3 illustrates the fundamental framework of two-stage detectors. Some of the most recognized region proposal-based methods are R-CNN [73], Fast R-CNN [74], Faster R-CNN [75], and Mask R-CNN [76].
R-CNN, developed by Girshick in 2014 [64], advanced the field of object detection by combining region proposal methods with deep learning techniques. It detects objects by creating region proposals through selective search, extracting features with a CNN, and then classifying and refining bounding boxes. Despite its effectiveness, R-CNN faces issues like slow processing, a complex training process, high memory demands, reliance on external proposals, and limited adaptability to different object scales.
Fast R-CNN [74] improves on R-CNN’s inefficiencies by processing the entire image in one forward pass and employing ROI pooling to generate fixed-size feature maps from the original maps, which significantly boosts computation speed. It replaces the SVM with a softmax layer for classification and integrates various architecture components, leading to enhanced speed, reduced memory usage, and an end-to-end training process that utilizes multi-task loss for labeled regions of interest.
Building on Fast R-CNN, Faster R-CNN [75] introduces a Region Proposal Network (RPN) that generates region proposals from convolutional feature maps, which eliminates the requirement for a distinct proposal stage. This integration enhances both speed and accuracy in object detection. The architecture also employs ROI pooling, which allows for efficient feature extraction from proposals of varying sizes [77].
Mask R-CNN [76] is an extension of Faster R-CNN that adds a branch for predicting segmentation masks on each detected object in addition to bounding box and class predictions. This enables instance segmentation, allowing the model to identify and delineate individual objects within an image. It enhances object detection by providing detailed pixel-level information for each object instance.

2.2. One-Stage Detectors

One-stage detectors [78] differ from conventional two-stage models by integrating object localization and classification into a single process, which allows for faster detection speeds that are optimal for real-time applications. The architecture of these detectors is depicted in Figure 4. Leading examples of this efficient strategy include the Single-Shot Multibox Detector (SSD) [79] and YOLO variants [80].
The SSD [81] architecture predicts class labels and bounding box offsets for a fixed number of default boxes at various scales across multiple feature layers. By incorporating anchor boxes with a variety of aspect ratios and sizes, SSD is able to detect objects of different shapes and dimensions in a single pass through the network.
YOLO [82] is a pioneering real-time object detection model that was introduced in 2015. It utilizes a fixed grid methodology, allowing the model to process an entire image in one go through a convolutional neural network, thereby simultaneously predicting bounding boxes and class probabilities. By dividing the image into regions, YOLO achieves high speeds and efficiencies, making it well suited for real-time applications while maintaining competitive accuracy across various computer vision tasks. Illustrated in Figure 5, the YOLO family has seen several iterations, with each version designed to improve upon previous models and tackle their limitations [83,84,85,86,87].
Table 1 provides a detailed comparison of different YOLO versions, showing improvements in their architecture, framework, mean average precision (mAP), and speed (FPS). From YOLOv1’s grid-based detection in 2015 to YOLOv9’s new Programmable Gradient Information (PGI) framework in 2024, each version has enhanced YOLO efficiency and accuracy.
The progression of YOLO models, particularly from YOLOv3 to YOLOv9, showcases improvements in speed, accuracy, and adaptability for complex object detection scenarios like drones. YOLOv9 addresses some of the limitations in previous versions, such as poor performance in adverse conditions and inefficient gradient propagation. This study will compare YOLOv9’s performance on Jetson Nano with earlier versions to determine its practical application in real-world drone detection.

3. Experimental Setup for Drone Detection Using YOLOv9

YOLOv9 is an advanced real-time object detection system that integrates cutting-edge deep learning methods and architectural innovations to deliver exceptional performance in detecting objects. YOLOv9 addresses critical challenges in object detection by incorporating reversible functions for data integrity, Programmable Gradient Information (PGI) [98] for precise gradient updates, and Generalized Efficient Layer Aggregation Network (GELAN) [99] to streamline feature extraction and speed. These advancements resolve information bottlenecks and enhance the model’s flexibility, allowing for a more efficient and accurate detection process compared to previous YOLO versions. Figure 6 shows the overall architecture of YOLOv9, showcasing the streamlined and efficient approach used for drone detection.
YOLOv9 employs a highly efficient architecture to enhance object detection accuracy and speed. The backbone leverages CSPNet [100] for optimizing gradient flow and ELAN [99] for improving processing speed while maintaining a lightweight design. This combination enables the effective extraction of multi-scale features from input images. The backbone also incorporates RepNCSP-ELAN 4 blocks, which integrate RepNBottleneck and CSP modules for detailed feature representation, allowing the model to capture both global and local patterns.
The neck of YOLOv9 improves feature fusion and aggregation, vital for detecting objects of varying sizes and scales. PANet modules are employed here to strengthen feature representation. YOLOv9 also introduces Programmable Gradient Information (PGI), which enhances gradient backpropagation, facilitating faster training convergence and better model performance.
In the head of YOLOv9, the network predicts the final bounding boxes, class probabilities, and objectness scores. This section uses reversible functions to preserve data integrity and minimize information loss, which in turn boosts prediction accuracy. The architecture includes Adown blocks for efficient downsampling, maintaining critical spatial information while reducing feature map size.
YOLOv9’s customized loss function ensures effective optimization during training, and non-maximum suppression (NMS) is applied to refine detection results, significantly improving both efficiency and accuracy in object detection tasks.
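As a concrete illustration of this post-processing step, the short sketch below applies torchvision's built-in NMS routine to a few placeholder candidate boxes; the tensors are illustrative values, not actual model outputs.

```python
import torch
from torchvision.ops import nms

# Placeholder candidate detections in (x1, y1, x2, y2) format with confidence scores.
boxes = torch.tensor([[100., 120., 220., 260.],
                      [105., 118., 225., 255.],   # heavy overlap with the first box
                      [400., 300., 480., 380.]])
scores = torch.tensor([0.92, 0.88, 0.75])

# Keep the highest-scoring box among those whose pairwise IoU exceeds the threshold.
keep = nms(boxes, scores, iou_threshold=0.45)
print(keep)           # indices of retained boxes, e.g. tensor([0, 2])
print(boxes[keep])    # the filtered detections
```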
In YOLOv9, the overall loss function is formed by combining three smaller loss functions (the bounding box regression loss, the classification loss, and the objectness loss), each addressing a distinct facet of the object detection task. It is computed as follows [101]:
$$L_{YOLOv9} = \lambda_{box} \cdot L_{box} + \lambda_{cls} \cdot L_{cls} + \lambda_{obj} \cdot L_{obj}$$
The weights $\lambda_{box}$, $\lambda_{cls}$, and $\lambda_{obj}$ control how much each component of the model’s error contributes during training. These weights can be fine-tuned using techniques from previous YOLO models, such as YOLOv5 and YOLOv8.
The bounding box regression loss checks how well the model predicts the location and size of objects. It uses the Mean Squared Error (MSE) between the predicted and actual box coordinates to improve how well the model finds objects. Lbox is calculated as follows:
$$L_{box} = \frac{1}{N} \sum_{i=0}^{N} \left[ (x_i - x_i^{true})^2 + (y_i - y_i^{true})^2 + (w_i - w_i^{true})^2 + (h_i - h_i^{true})^2 \right]$$
where $L_{box}$ represents the bounding box regression loss and $N$ denotes the total number of bounding boxes. The predicted center coordinates, width, and height of the bounding box are given as $x_i$, $y_i$, $w_i$, and $h_i$, respectively, while $x_i^{true}$, $y_i^{true}$, $w_i^{true}$, and $h_i^{true}$ correspond to the actual center coordinates, width, and height of the bounding box.
Classification loss measures how correctly the model identifies the type of objects. It uses Cross Entropy Loss to compare the predicted class probabilities with the actual labels to boost classification accuracy. Lcls is calculated as follows:
$$L_{cls} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{c=1}^{C} y_i^{(c)} \log \hat{p}_i^{(c)}$$
where $L_{cls}$ denotes the classification loss, $N$ represents the number of predicted bounding boxes, and $C$ is the number of classes. The term $y_i^{(c)}$ is the ground-truth indicator for class $c$ (1 if the object belongs to class $c$ and 0 otherwise), while $\hat{p}_i^{(c)}$ represents the predicted probability that the object belongs to class $c$.
Objectness loss checks whether the model correctly identifies if an anchor box contains an object. It uses binary cross-entropy between the predicted and actual objectness scores to improve object detection. $L_{obj}$ is calculated as follows:
$$L_{obj} = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i^{obj} \log \hat{p}_i^{obj} + (1 - y_i^{obj}) \log(1 - \hat{p}_i^{obj}) \right]$$
where $L_{obj}$ represents the objectness loss, $y_i^{obj}$ is the true label indicating whether the bounding box contains an object (1 if it does and 0 if it does not), and $\hat{p}_i^{obj}$ is the predicted objectness score or confidence level.
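To make the composition of the overall loss concrete, the following minimal PyTorch-style sketch combines the three components with the weights $\lambda_{box}$, $\lambda_{cls}$, and $\lambda_{obj}$; the weight values and per-component implementations are simplified placeholders rather than the exact YOLOv9 code.

```python
import torch
import torch.nn.functional as F

def yolo_style_loss(pred_boxes, true_boxes, pred_cls, true_cls, pred_obj, true_obj,
                    w_box=7.5, w_cls=0.5, w_obj=1.0):
    """Weighted sum of box, classification, and objectness losses (simplified sketch).

    The default weights are placeholders and would normally be tuned per model."""
    # Bounding box regression: MSE between predicted and ground-truth (x, y, w, h).
    l_box = F.mse_loss(pred_boxes, true_boxes)
    # Classification: cross-entropy between predicted class logits and true class indices.
    l_cls = F.cross_entropy(pred_cls, true_cls)
    # Objectness: binary cross-entropy on the confidence that a box contains an object.
    l_obj = F.binary_cross_entropy_with_logits(pred_obj, true_obj)
    return w_box * l_box + w_cls * l_cls + w_obj * l_obj

# Toy example with N = 4 predicted boxes and C = 2 classes (drone / other object).
N, C = 4, 2
loss = yolo_style_loss(torch.rand(N, 4), torch.rand(N, 4),
                       torch.randn(N, C), torch.randint(0, C, (N,)),
                       torch.randn(N), torch.rand(N))
print(loss.item())
```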
The drone detection process in YOLOv9 is both streamlined and efficient:
  • Input Handling: The image is passed through the backbone, where GELAN extracts multi-scale features.
  • Feature Aggregation: These features are processed in the neck using PGI to enhance gradient flow and feature fusion, improving detection across various object sizes.
  • Prediction Stage: Finally, the head uses reversible functions to maintain data accuracy while predicting bounding boxes, class probabilities, and objectness scores.

3.1. Training and Validation of Dataset

The proposed approach for drone detection involves four key steps, as illustrated in Figure 7. It begins with dataset preparation, which serves as input to the detection framework. The next step is training the model to accurately detect drones. In the third phase, the trained model is evaluated on various drone datasets to test its detection performance. Lastly, the model’s performance is assessed and then deployed for real-time drone detection operations.
For our drone detection project, we compiled a diverse dataset from several public sources, including Kaggle, MS COCO, and Google. This dataset features images captured from different altitudes, angles, backgrounds, and perspectives to ensure comprehensive variability. Additionally, we supplemented this dataset with images collected from our own drone flights. In total, a dataset comprising 9995 images depicting various types of drones was assembled: 2467 images from Kaggle, 2691 images from Roboflow, 848 images collected during our drone flights, 3471 images from the MS COCO dataset, and 518 randomly sourced images from Google. For training and evaluating the YOLOv9 model, a 70:30 train–test split was applied, assigning 6996 images for training and 2999 images for testing.
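The 70:30 partition described above can be reproduced with a simple shuffled split of the image list, as in the following sketch; the directory layout and file extension are illustrative assumptions.

```python
import random
from pathlib import Path

# Hypothetical directory containing the 9995 annotated drone images.
images = sorted(Path("dataset/images").glob("*.jpg"))

random.seed(42)                     # fixed seed so the split is reproducible
random.shuffle(images)

split = int(0.7 * len(images))      # 70% for training, 30% for testing
train_files, test_files = images[:split], images[split:]

print(f"train: {len(train_files)}  test: {len(test_files)}")
```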

3.2. Training of Custom Dataset with YOLOv9

To train YOLOv9 for drone detection, the experimental environment uses high-performance resources, ensuring repeatable results. The setup involves an NVIDIA GeForce RTX 4060 GPU and an Intel i9-12900 CPU (8 cores) in a laptop-based configuration. This system, supported by 32 GB of DDR4-3200 RAM, runs on a 64-bit Ubuntu platform. The software environment includes CUDA 12.1, PyTorch 2.2.1, and Python 3.11.8, which are essential for the effective execution and real-time performance of the model.
Training the YOLOv9 model requires configuring key hyperparameters such as the learning rate, number of epochs, batch size, input dimensions, and weight initialization strategy. The model is trained using the stochastic gradient descent (SGD) algorithm, which utilizes backpropagation to adjust its parameters. The learning rate, a crucial factor, controls the speed of learning and is generally set to a small positive value between 0 and 1. Proper tuning of these parameters is essential to achieve optimal accuracy and performance of the model.
A higher learning rate accelerates the training process, thereby reducing the time needed to train the model. However, this can lead to increased average loss and decreased accuracy. In contrast, a lower learning rate results in a slower training process and may cause the model to become stuck at a high training error. Thus, it is crucial to find the optimal learning rate to balance training speed with model accuracy [102]. After the optimal model was determined during the training phase, it was converted into a TensorRT model for deployment on the Jetson Nano. The model’s performance was then evaluated using the NVIDIA DeepStream SDK. Detailed parameter settings for the training process, including data augmentation techniques, can be found in Table 2, while Algorithm 1 outlines the pseudo-flow graph of the training model. For deployment on the Jetson Nano, we selected the compact variant of YOLOv9.
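The TensorRT conversion mentioned above typically proceeds through an intermediate ONNX export, from which an engine is built on the target device; the sketch below illustrates only the export step, with placeholder file names, checkpoint layout, and input size.

```python
import torch

# Load the best checkpoint from training (path and checkpoint layout are illustrative).
ckpt = torch.load("runs/train/best.pt", map_location="cpu")
model = ckpt["model"] if isinstance(ckpt, dict) and "model" in ckpt else ckpt
model.eval()

# Export to ONNX; a TensorRT engine can then be built from this file on the Jetson Nano
# (for example with NVIDIA's TensorRT tooling) before running it under DeepStream.
dummy = torch.zeros(1, 3, 640, 640)            # one 640x640 RGB image
torch.onnx.export(model, dummy, "yolov9_drone.onnx",
                  input_names=["images"], output_names=["predictions"],
                  opset_version=12)
```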
Algorithm 1: Training of YOLOv9
Drones 08 00680 i001
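Since Algorithm 1 is presented as a figure, the following minimal sketch illustrates the general shape of an SGD-based training loop with the hyperparameters discussed above (learning rate, momentum, epochs, batch size); the model, dataset, and loss function are placeholders rather than the exact training pipeline.

```python
import torch
from torch.utils.data import DataLoader

def train(model, dataset, criterion, epochs=100, batch_size=16, lr=0.01, momentum=0.937):
    """Generic SGD training loop; `criterion` stands in for the combined box/cls/obj loss.

    Hyperparameter values here are placeholders, not the exact settings of Table 2."""
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()

    for epoch in range(epochs):
        running_loss = 0.0
        for images, targets in loader:
            images = images.to(device)
            # (Moving `targets` to the device depends on their structure and is omitted here.)
            optimizer.zero_grad()
            predictions = model(images)
            loss = criterion(predictions, targets)   # combined YOLO-style loss
            loss.backward()                          # backpropagation
            optimizer.step()
            running_loss += loss.item()
        print(f"epoch {epoch + 1}/{epochs}  mean loss {running_loss / len(loader):.4f}")
    return model
```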

3.3. Transfer Learning

Transfer learning [103] is an approach in which a model that has been previously trained on a large dataset is adapted for a particular task by leveraging a smaller dataset. In this scenario, a collection of drone-related images is initially processed on a CPU to create a robust model capable of detecting drones. The insights gained from this training phase are then applied to an NVIDIA Jetson Nano, a compact AI edge device, where the model undergoes fine-tuning to optimize it for real-time detection. By utilizing the pre-trained weights, the Jetson Nano is able to perform real-time object detection efficiently, minimizing resource consumption while delivering fast and accurate results in practical applications. Figure 8 outlines transfer learning techniques.
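One common way to realize this fine-tuning step is to load the pre-trained weights, freeze the backbone, and update only the detection head; the sketch below illustrates the idea with placeholder paths and layer-name conventions, since the actual YOLOv9 parameter names differ.

```python
import torch

# Load the checkpoint produced during the initial training phase (path is illustrative).
ckpt = torch.load("weights/yolov9_drone_pretrained.pt", map_location="cpu")
model = ckpt["model"] if isinstance(ckpt, dict) and "model" in ckpt else ckpt

# Freeze early feature-extraction layers so only later layers are fine-tuned.
for name, param in model.named_parameters():
    if "head" not in name:          # "head" is a placeholder for the detection-head prefix
        param.requires_grad = False

# Only the unfrozen parameters are passed to the optimizer for fine-tuning.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.001, momentum=0.9)
```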

3.4. System Architecture and Test Environment on Low-Power Edge Computing Device

In our system architecture, the Jetson Nano has been selected as the edge computing device for implementing the YOLOv9 algorithm for real-time drone detection. This device provides substantial computational capacity, offering 472 GFLOPS, suitable for executing contemporary deep learning workloads, while being competitively priced. It is powered by a 64-bit quad-core ARM A57 processor operating at 1.43 GHz, includes 128 CUDA cores, and contains 4 GB of LPDDR4 memory, while consuming 5 to 10 watts [104]. The Jetson Nano is specifically designed for edge computing applications, distinguished by its small size, cost-effectiveness, and minimal energy consumption. Algorithm 2 describes the pseudo-flow graph of real-time drone detection on the Jetson Nano.
Algorithm 2: Transfer learning on Jetson Nano for real-time drone detection
Drones 08 00680 i002
Figure 9a,b illustrate the system architecture for real-time drone detection using the NVIDIA Jetson Nano edge computing platform. The architecture comprises two main components: a high-speed camera and the Jetson Nano. Initially, a high-speed camera is positioned in the detection area to monitor the surroundings continuously. It captures real-time image data, which are then transmitted to the Jetson Nano for processing.
The Jetson Nano is equipped with the YOLOv9 algorithm, which has been pre-trained to analyze incoming image data in real time. This algorithm performs the detection and localization of drone targets within the captured images. Detected drone targets are identified, and their locations are pinpointed accurately. The flowchart of real-time drone detection using YOLOv9 and NVIDIA Jetson Nano is shown in Figure 10.
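Structurally, the detection loop on the Jetson Nano amounts to grabbing camera frames and passing them through the deployed model; the OpenCV-based sketch below shows that structure, with a placeholder detect() function standing in for the TensorRT/DeepStream inference call.

```python
import time
import cv2

def detect(frame):
    """Placeholder for the deployed YOLOv9 inference call (e.g., a TensorRT engine)."""
    return []   # list of (x1, y1, x2, y2, confidence) tuples

cap = cv2.VideoCapture(0)            # high-speed camera attached to the Jetson Nano
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    start = time.perf_counter()
    detections = detect(frame)
    fps = 1.0 / (time.perf_counter() - start)

    # Draw detections and the instantaneous frame rate on the frame.
    for (x1, y1, x2, y2, conf) in detections:
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
        cv2.putText(frame, f"drone {conf:.2f}", (int(x1), int(y1) - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
    cv2.putText(frame, f"{fps:.1f} FPS", (10, 25),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
    cv2.imshow("drone detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```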

4. Results and Performance Evaluation

4.1. Assessment Indicators

To evaluate the effectiveness of the proposed approach, various critical performance metrics, such as precision, recall, F1-score, and mean average precision (mAP), are utilized. These metrics collectively offer a thorough assessment of the model’s object detection performance and accuracy.
Intersection over Union (IoU) evaluates the overlap between the predicted and ground truth bounding boxes, indicating how well the predicted box matches the actual object’s location. With IoU values ranging from 0 to 1, a value of 0 means no overlap, while a value of 1 signifies perfect alignment. Higher IoU values represent more precise localization, where the predicted bounding box closely matches the ground truth, as demonstrated by the following equation.
$$IoU = \frac{\text{Area of Overlap}}{\text{Area of Union}}$$
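For axis-aligned boxes given as (x1, y1, x2, y2), the IoU computation reduces to a few lines, as in the following sketch.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

# Example: a predicted box partially overlapping a ground-truth box.
print(iou((50, 50, 150, 150), (100, 100, 200, 200)))   # ~0.143
```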
The confusion matrix is a square matrix of size n × n, where n is the number of classes, used to evaluate the performance of a model. In the context of a drone detection system utilizing YOLOv9 with two classes (drone and other objects), the columns represent the true classes of detected objects, while the rows represent the classes predicted by the YOLOv9 model.
Metrics such as precision, recall, F1-score, and accuracy are derived from the confusion matrix entries, namely the true positive (TP), false positive (FP), true negative (TN), and false negative (FN) counts, as shown in Figure 11; a small computational sketch is given after the definitions below.
For drones, precision is calculated as the proportion of predicted drones that are correctly identified as drones.
$$\mathrm{Precision(Drone)} = \frac{TP(\mathrm{Drone})}{TP(\mathrm{Drone}) + FP(\mathrm{Drone})}$$
Recall (for drones) is the proportion of actual drones that are correctly predicted as drones.
$$\mathrm{Recall(Drone)} = \frac{TP(\mathrm{Drone})}{TP(\mathrm{Drone}) + FN(\mathrm{Drone})}$$
The F1-score (for drones) is the harmonic mean of precision and recall for drones, providing a balanced measure of detection performance.
$$F1\text{-}\mathrm{Score(Drone)} = \frac{2 \times \mathrm{Precision(Drone)} \times \mathrm{Recall(Drone)}}{\mathrm{Precision(Drone)} + \mathrm{Recall(Drone)}}$$
Accuracy measures the overall correctness of the model’s predictions across all classes.
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}$$
where
  • True positives (TP) are cases in which drones are correctly classified as drones.
  • False positives (FP) are cases in which non-drone entities are erroneously classified as drones.
  • True negatives (TN) are cases in which non-drone entities are accurately classified as other objects.
  • False negatives (FN) are cases in which drones are inaccurately classified as other objects.
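Given these four counts, the metrics above follow directly; the sketch below computes them, with illustrative counts chosen only to roughly mirror the reported precision and recall (they are not the experimental confusion-matrix values).

```python
def detection_metrics(tp, fp, tn, fn):
    """Precision, recall, F1-score, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, recall, f1, accuracy

# Illustrative counts for the drone class.
print(detection_metrics(tp=864, fp=49, tn=900, fn=136))
# -> precision ~0.946, recall 0.864, F1 ~0.903
```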
Average precision (AP) is a crucial performance metric that eliminates the need to select a single confidence threshold value. It is defined by the area under the precision–recall (PR) curve. AP is high when both precision and recall are consistently high across various confidence threshold values and low when either precision or recall is low. The AP value ranges from 0 to 1.
$$AP = \int_{0}^{1} P(R)\, dR$$
Mean average precision (mAP) measures how well a drone detection model accurately identifies and locates drones by comparing their predicted positions with the actual positions (ground truth). Higher mAP scores indicate that the model performs better at detecting and recognizing drones.
$$mAP = \frac{1}{C} \sum_{i=1}^{C} AP_i$$
where $AP_i$ represents the average precision value for the $i$th class, and $C$ is the total number of classes under evaluation.
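Numerically, AP can be approximated as the area under a sampled precision–recall curve and mAP as its mean over classes; the following sketch uses trapezoidal integration on placeholder curves.

```python
import numpy as np

def average_precision(recall, precision):
    """Area under a sampled precision-recall curve via trapezoidal integration."""
    order = np.argsort(recall)                      # ensure recall is increasing
    return float(np.trapz(np.asarray(precision)[order], np.asarray(recall)[order]))

def mean_average_precision(per_class_curves):
    """mAP as the mean of per-class AP values."""
    return sum(average_precision(r, p) for r, p in per_class_curves) / len(per_class_curves)

# Placeholder PR curves for two classes (drone, other object).
curves = [
    (np.array([0.0, 0.5, 0.86, 1.0]), np.array([1.0, 0.97, 0.95, 0.60])),
    (np.array([0.0, 0.4, 0.80, 1.0]), np.array([1.0, 0.95, 0.90, 0.55])),
]
print(mean_average_precision(curves))
```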

4.2. Experimental Results and Discussion

The YOLOv9 model was trained over 100 epochs using a dataset comprising 9995 images, covering the training, validation, and testing splits. The training duration for the full dataset was 28 h, 8 min, and 28 s. Precision, recall, F1-score, mAP, and FPS were evaluated to assess the performance of the YOLOv9 model. The training and validation losses across epochs are shown in Figure 12, where the downward trend of the loss curves reflects the minimization of the losses.
Figure 13 illustrates the accuracy throughout the training process. Initially, both training and testing accuracy increased rapidly. This is followed by a steady improvement between epochs 7 and 33. After 100 epochs of training, the model achieves 95.14% accuracy on the testing data and 94.46% precision on the training data.
In terms of detection speed, 30 FPS or higher [37] is recommended to ensure accurate and timely detection; the proposed model achieves 24.12 FPS, with 253.20 M parameters and a model size of 51.6 MB.
Figure 14a,b present the YOLOv9 drone detection model’s output of predicted bounding box offsets. In this figure, x and y are the coordinates of the bounding box centers, and width and height define their size. The color variations in Figure 14 denote the data density, with darker colors showing where data are more concentrated. Most bounding boxes are positioned near the center of the image, with a higher prevalence of medium- and small-sized objects.
YOLOv9 marks a significant advancement in drone detection technology, as illustrated by the performance metrics presented in Table 3. With a precision of 0.946, YOLOv9 outperforms both YOLOv5 (0.918) and YOLOv8 (0.91), demonstrating enhanced accuracy in identifying true positives—an essential factor for effective drone detection. In terms of recall, YOLOv9 achieves 0.864, which is a substantial improvement over YOLOv4 (0.680) and YOLOv3 (0.70). This performance underscores its capability to detect actual drone instances more reliably. While it does fall slightly behind CNN (0.94) and Mask R-CNN (0.894), it still offers a robust detection rate compared to earlier YOLO models. Additionally, YOLOv9’s mean average precision (mAP50) of 0.9570 is the highest among all models assessed, highlighting its superior overall detection capabilities and significant progress in precision across various conditions. The F1 score of 0.9030 indicates a well-balanced performance between precision and recall, making YOLOv9 a dependable choice for real-time applications.
Finally, Figure 15 showcases the overall performance metrics achieved during the training of YOLOv9, while Figure 16, Figure 17 and Figure 18 demonstrate the results of real-time drone detection at different altitudes using Jetson Nano, a low-power edge computing device.
Our study on real-time drone detection indicates that altitude significantly impacts detection accuracy. At higher altitudes, smaller drones may appear less defined, complicating detection efforts. Additionally, fast-moving drones at these heights can suffer from motion blur, which might hinder the model’s ability to accurately predict bounding boxes and class probabilities. Environmental factors, such as atmospheric haze, can also become more pronounced at elevated positions, affecting detection performance.
However, our trained model successfully detects drones under a range of challenging conditions, as illustrated in the figures. Figure 19 highlights the model’s effectiveness in nighttime detection at an altitude of 110 feet, while Figure 20 showcases its ability to identify drones in dark environments with artificial lighting, demonstrating robustness in low-light scenarios. Figure 21 emphasizes the model’s performance in high-altitude situations with poor visibility, further confirming its adaptability. Figure 22 presents successful drone detection in low-visibility cloudy environments, and Figure 23 illustrates its capabilities in complex scenes with multiple objects and dynamic backgrounds. Finally, Figure 24 captures real-time drone detection amid intricate backgrounds, underscoring the model’s reliability in varied lighting and cluttered conditions. Overall, the model’s consistent performance across these diverse scenarios affirms its effectiveness for real-time drone detection applications.
Our YOLOv9-based system, optimized through transfer learning, is capable of detecting drones in diverse lighting conditions, including nighttime scenarios, without relying on thermal cameras or additional sensors. Through training on a comprehensive dataset that encompasses various lighting situations, including low-light and artificial illumination, the model has developed strong feature extraction capabilities that enable it to identify drones in real-world applications and challenging conditions. Additionally, the model incorporates contrast adjustments to improve its ability to differentiate drones from complex, dynamic, and low-visibility backgrounds, making it especially suitable for urban environments where background complexity varies considerably. While the model performs well in many challenging conditions, including cloudy and low-light settings, certain limitations remain when drones are camouflaged within dense urban landscapes or complex backgrounds.
In a subsequent experiment, the system was tested with multiple drones flying simultaneously. The results, illustrated in Figure 25 and Figure 26, demonstrate the successful detection of two and five drones, respectively, across various environmental conditions. This capability highlights the system’s versatility in scenarios involving multiple drones and validates its performance across different scenarios.
Figure 16, Figure 17, Figure 18, Figure 19, Figure 20, Figure 21, Figure 22, Figure 23, Figure 24, Figure 25 and Figure 26 illustrate the variability in IDs assigned to detected drones throughout the detection process. This optimized YOLOv9 algorithm assigns each drone a unique identifier to facilitate tracking as it moves within the camera’s field of view. While these IDs are useful for tracking purposes, they do not have significant implications for the study. The dynamic nature of these IDs can be attributed to several factors. When multiple drones are detected simultaneously, each is given a distinct ID. However, if a drone temporarily exits the frame and then reappears, the system may interpret it as a new detection and assign a different ID. Variations in altitude and position can also contribute to ID reassignment, as these changes may affect the drone’s appearance from the camera’s perspective. Additionally, environmental factors such as lighting conditions, shadows, and background changes can impact the algorithm’s ability to maintain consistent detection. In some cases, adjustments in the drone’s orientation or size may cause the algorithm to interpret it as a different object, leading to further variability in ID assignments.

5. Conclusions

Drone detection is a very challenging task, as drones are small, resemble birds in shape, and move dynamically. Radar signatures of drones are also very limited, and radar-based detection requires expensive hardware setups. Therefore, developing an accurate real-time detection model is essential to achieving a balance between speed and accuracy. In this paper, YOLOv9 is used as the base model to minimize false detections within a minimum time span. For training, we created a new dataset by collecting images from available public resources, as well as our own drone flights, to enhance the model’s ability to recognize drones and other objects, even at long distances. Additionally, powerful hardware is employed to ensure that the inference speed meets the requirements for real-time detection. After proper training on the various datasets, the YOLOv9 model achieved precision, recall, F1-score, and mAP values of 94.6%, 86.4%, 90.3%, and 95.7%, respectively. The Jetson Nano, a low-power edge computing device, is used for backend computing support for real-time drone detection. Our optimized YOLOv9 model demonstrated enhanced performance over previous versions, with improved recall, F1-score, and mAP, reflecting a 4.6% increase in mAP. Furthermore, the detection speed was tested, reaching a maximum frame rate of 24.12 FPS on the NVIDIA Jetson Nano.
The detector’s performance was evaluated using videos captured at three different altitudes (15 feet, 50 feet, and 110 feet) under varying lighting conditions to assess its effectiveness across heights. As altitude increases, the confidence score decreases; we therefore tested our drones at around 110 feet, within the green zone (the permitted flying limit for drones in India), and found that the confidence score remained above 60 percent, with drones detected reliably. This system is designed for the early and rapid detection of drones in critical areas such as airports, military installations, and large public events. It can be deployed on both stationary and mobile platforms to activate quick countermeasures against unauthorized drones. However, adverse weather conditions, including heavy rain, fog, and low light, can affect the system’s performance. Future developments may involve incorporating thermal cameras to enhance detection capabilities in such conditions.
Future work will involve creating a robust and diverse dataset that captures a variety of environmental scenarios, such as distinct weather conditions, lighting variations, and seasonal changes, broadening the model’s generalization capabilities. The dataset will include different types of drones and varied flight patterns, ranging from hovering to high-speed flights, to support more intricate detection scenarios. Furthermore, we may utilize YOLOv10 and YOLOv11 to maximize accuracy, speed, and resilience on edge devices like the NVIDIA Jetson Nano, enhancing the model’s readiness for real-world applications.

Author Contributions

Problem identification and conceptualization, R.H. and A.R.; data collection and algorithm optimization, R.H.; critical analysis of output, R.H.; writing—original draft, R.H.; writing—review and editing, R.H. and A.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Acknowledgments

The authors extend heartfelt thanks to the Drone Centre of Excellence at the Institute of Infrastructure Technology Research and Management (IITRAM) and the Directorate of Technical Education, Government of Gujarat, for their essential support and resources throughout our research. Their advanced facilities and expertise were crucial in facilitating our experiments and achieving the outcomes detailed in this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
Adown: Asymmetric Downsampling
AP: Average Precision
CNN: Convolutional Neural Network
COCO: Common Objects in Context
Conv.: Convolution
CPU: Central Processing Unit
CSPNet: Cross-Stage Partial Network
FPS: Frames Per Second
GELAN: Generalized Efficient Layer Aggregation Network
GFLOPS: Giga Floating-Point Operations per Second
GPS: Global Positioning System
IoU: Intersection over Union
mAP: Mean Average Precision
MSE: Mean Squared Error
PGI: Programmable Gradient Information
R-CNN: Region-Based Convolutional Neural Network
RADAR: Radio Detection and Ranging
ResNet: Residual Network
RepConvN: Repetitive Convolutional N Block
RepNCSP: Repeated Normalized Cross-Stage Partial
RF: Radio Frequency
RoI: Region of Interest
RPN: Region Proposal Network
SGD: Stochastic Gradient Descent
SSD: Single-Shot Multibox Detector
UAV: Unmanned Aerial Vehicle
YOLO: You Only Look Once
YOLOv2: You Only Look Once Version 2
YOLOv3: You Only Look Once Version 3
YOLOv4: You Only Look Once Version 4
YOLOv5: You Only Look Once Version 5
YOLOv6: You Only Look Once Version 6
YOLOv7: You Only Look Once Version 7
YOLOv8: You Only Look Once Version 8
YOLOv9: You Only Look Once Version 9

References

  1. Yoo, L.S.; Lee, J.H.; Lee, Y.K.; Jung, S.K.; Choi, Y. Application of a Drone Magnetometer System to Military Mine Detection in the Demilitarized Zone. Sensors 2021, 21, 3175. [Google Scholar] [CrossRef] [PubMed]
  2. Behera, D.K.; Raj, A.B. Drone Detection and Classification Using Deep Learning. In Proceedings of the 2020 4th International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 13–15 May 2020; pp. 1012–1016. [Google Scholar] [CrossRef]
  3. Nyaaba, A.A.; Ayamga, M. Intricacies of Medical Drones in Healthcare Delivery: Implications for Africa. Technol. Soc. 2021, 66, 101624. [Google Scholar] [CrossRef]
  4. Flemons, K.; Baylis, B.; Khan, A.Z.; Kirkpatrick, A.W.; Whitehead, K.; Moeini, S.; Schreiber, A.; Lapointe, S.; Ashoori, S.; Arif, M.; et al. The Use of Drones for the Delivery of Diagnostic Test Kits and Medical Supplies to Remote First Nations Communities during Covid-19. Am. J. Infect. Control 2022, 50, 849–856. [Google Scholar] [CrossRef] [PubMed]
  5. Hiebert, B.; Nouvet, E.; Jeyabalan, V.; Donelle, L. The Application of Drones in Healthcare and Health-Related Services in North America: A Scoping Review. Drones 2020, 4, 30. [Google Scholar] [CrossRef]
  6. Papyan, N.; Kulhandjian, M.; Kulhandjian, H.; Aslanyan, L. AI-Based Drone Assisted Human Rescue in Disaster Environments: Challenges and Opportunities. Pattern Recognit. Image Anal. 2024, 34, 169–186. [Google Scholar] [CrossRef]
  7. Mohd Daud, S.M.S.; Mohd Yusof, M.Y.P.; Heo, C.C.; Khoo, L.S.; Chainchel Singh, M.K.; Mahmood, M.S.; Nawawi, H. Applications of Drone in Disaster Management: A Scoping Review. Sci. Justice 2022, 62, 30–42. [Google Scholar] [CrossRef]
  8. Zwegliński, T. The Use of Drones in Disaster Aerial Needs Reconnaissance and Damage Assessment-Three-Dimensional Modeling and Orthophoto Map Study. Sustainability 2020, 12, 6080. [Google Scholar] [CrossRef]
  9. Kellermann, R.; Biehle, T.; Fischer, L. Drones for Parcel and Passenger Transportation: A Literature Review. Transp. Res. Interdiscip. Perspect. 2020, 4, 100088. [Google Scholar] [CrossRef]
  10. Jahani, H.; Khosravi, Y.; Kargar, B.; Ong, K.L.; Arisian, S. Exploring the Role of Drones and UAVs in Logistics and Supply Chain Management: A Novel Text-Based Literature Review. Int. J. Prod. Res. 2024, 1–25. [Google Scholar] [CrossRef]
  11. Rejeb, A.; Abdollahi, A.; Rejeb, K.; Treiblmaier, H. Drones in Agriculture: A Review and Bibliometric Analysis. Comput. Electron. Agric. 2022, 198, 107017. [Google Scholar] [CrossRef]
  12. Chin, R.; Catal, C.; Kassahun, A. Plant Disease Detection Using Drones in Precision Agriculture. Precis. Agric. 2023, 24, 1663–1682. [Google Scholar] [CrossRef]
  13. Zhang, Z.; Zhu, L. A Review on Unmanned Aerial Vehicle Remote Sensing: Platforms, Sensors, Data Processing Methods, and Applications. Drones 2023, 7, 398. [Google Scholar] [CrossRef]
  14. Ejaz, N.; Choudhury, S. Computer Vision in Drone Imagery for Infrastructure Management. Autom. Constr. 2024, 163, 105418. [Google Scholar] [CrossRef]
  15. Sharma, M.; Gupta, A.; Gupta, S.K.; Alsamhi, S.H.; Shvetsov, A.V. Survey on Unmanned Aerial Vehicle for Mars Exploration: Deployment Use Case. Drones 2022, 6, 4. [Google Scholar] [CrossRef]
  16. Shakhatreh, H.; Sawalmeh, A.; Al-fuqaha, A.; Dou, Z.; Almaita, E.; Khalil, I.; Shamsiah, N.; Khreishah, A.; Guizani, M. Unmanned Aerial Vehicles: A Survey on Civil Applications and Key Research Challenges. arXiv 2018. [Google Scholar] [CrossRef]
  17. Li, Z.; Zhang, Y.; Wu, H.; Suzuki, S.; Namiki, A.; Wang, W. Design and Application of a UAV Autonomous Inspection System for High-Voltage Power Transmission Lines. Remote Sens. 2023, 15, 865. [Google Scholar] [CrossRef]
  18. Seo, J.; Duque, L.; Wacker, J. Drone-Enabled Bridge Inspection Methodology and Application. Autom. Constr. 2018, 94, 112–126. [Google Scholar] [CrossRef]
  19. Yaacoub, J.P.; Noura, H.; Salman, O.; Chehab, A. Security Analysis of Drones Systems: Attacks, Limitations, and Recommendations. Internet Things 2020, 11, 100218. [Google Scholar] [CrossRef]
  20. Federal Aviation Administration (FAA). Available online: https://www.faa.gov/node/54496 (accessed on 5 November 2024).
  21. Watson, K. Venezuela President Maduro Survives “Drone Assassination Attempt”—BBC News. Available online: https://www.bbc.co.uk/news/world-latin-america-45073385 (accessed on 22 May 2024).
  22. Drone Crash Shuts Down Mali’s Gao Airport. Available online: https://aviation-safety.net/wikibase/313041 (accessed on 23 May 2024).
  23. Flight Operations Suspended at Delhi Airport After ‘Drone-like Object’ Spotted on Runway. Available online: https://indianexpress.com/article/india/delhi-airport-live-igflight-operation-at-delhi-airport-halted-as-pilot-spots-drone-4805435/ (accessed on 5 November 2024).
  24. Hambling, D. Swarm of Drones Attacks Airbase. Available online: https://dialnet.unirioja.es/servlet/articulo?codigo=6421736 (accessed on 5 November 2024).
  25. Saudi Arabia Oil Facilities Ablaze After Drone Strikes. BBC News. Available online: https://www.bbc.com/news/world-middle-east-49699429 (accessed on 5 November 2024).
  26. Frank Gardner Saudi Oil Facility Attacks: Race on to Restore Supplies. BBC News. Available online: https://www.bbc.com/news/world-middle-east-49775849 (accessed on 5 November 2024).
  27. Chaari, M.Z.; Al-Maadeed, S. The Game of Drones/Weapons Makers’ War on Drones. In Unmanned Aerial Systems; Koubaa, A., Azar, A.T., Eds.; Advances in Nonlinear Dynamics and Chaos (ANDC); Academic Press: Cambridge, MA, USA, 2021; pp. 465–493. ISBN 978-0-12-820276-0. [Google Scholar]
  28. Man Fined After Flying Drones over Premier League Stadiums. Available online: https://www.bbc.com/news/uk-england-nottinghamshire-34256680 (accessed on 20 September 2024).
  29. Kunertova, D. Drones Have Boots: Learning from Russia’s War in Ukraine. Contemp. Secur. Policy 2023, 44, 576–591. [Google Scholar] [CrossRef]
  30. Kunertova, D. The War in Ukraine Shows the Game-Changing Effect of Drones Depends on the Game. Bull. At. Sci. 2023, 79, 95–102. [Google Scholar] [CrossRef]
  31. Rahman, M.H.; Sejan, M.A.S.; Aziz, M.A.; Tabassum, R.; Baik, J.I.; Song, H.K. A Comprehensive Survey of Unmanned Aerial Vehicles Detection and Classification Using Machine Learning Approach: Challenges, Solutions, and Future Directions. Remote Sens. 2024, 16, 879. [Google Scholar] [CrossRef]
  32. Drone Incident Review. Available online: https://d-fendsolutions.com/drone-incident (accessed on 24 September 2024).
  33. Seidaliyeva, U.; Ilipbayeva, L.; Taissariyeva, K.; Smailov, N.; Matson, E.T. Advances and Challenges in Drone Detection and Classification Techniques: A State-of-the-Art Review. Sensors 2023, 24, 125. [Google Scholar] [CrossRef] [PubMed]
  34. Khan, M.A.; Menouar, H.; Eldeeb, A.; Abu-Dayya, A.; Salim, F.D. On the Detection of Unauthorized Drones—Techniques and Future Perspectives: A Review. IEEE Sens. J. 2022, 22, 11439–11455. [Google Scholar] [CrossRef]
  35. Dumitrescu, C.; Minea, M.; Costea, I.M.; Chiva, I.C.; Semenescu, A. Development of an Acoustic System for Uav Detection. Sensors 2020, 20, 4870. [Google Scholar] [CrossRef] [PubMed]
  36. Tejera-Berengue, D.; Zhu-Zhou, F.; Utrilla-Manso, M.; Gil-Pita, R.; Rosa-Zurera, M. Analysis of Distance and Environmental Impact on UAV Acoustic Detection. Electronics 2024, 13, 643. [Google Scholar] [CrossRef]
  37. Akbal, E.; Akbal, A.; Dogan, S.; Tuncer, T. An Automated Accurate Sound-Based Amateur Drone Detection Method Based on Skinny Pattern. Digit. Signal Process. 2023, 136, 104012. [Google Scholar] [CrossRef]
  38. Sedunov, A.; Haddad, D.; Salloum, H.; Sutin, A.; Sedunov, N.; Yakubovskiy, A. Stevens Drone Detection Acoustic System and Experiments in Acoustics UAV Tracking. In Proceedings of the 2019 IEEE International Symposium on Technologies for Homeland Security (HST), Woburn, MA, USA, 5–6 November 2019; pp. 1–7. [Google Scholar] [CrossRef]
  39. Al-Emadi, S.; Al-Ali, A.; Al-Ali, A. Audio-Based Drone Detection and Identification Using Deep Learning Techniques with Dataset Enhancement through Generative Adversarial Networks. Sensors 2021, 21, 4953. [Google Scholar] [CrossRef]
  40. Saria, M.H.D.; Al-sa, M.F. DroneRF Dataset: A Dataset of Drones for RF-Based Detection, Classification and Identification. Data Brief 2019, 26, 104313. [Google Scholar] [CrossRef]
  41. Kılıç, R.; Kumbasar, N.; Oral, E.A.; Ozbek, I.Y. Drone Classification Using RF Signal Based Spectral Features. Eng. Sci. Technol. Int. J. 2022, 28, 101028. [Google Scholar] [CrossRef]
  42. He, Z.; Huang, J.; Qian, G. UAV Detection and Identification Based on Radio Frequency Using Transfer Learning. In Proceedings of the 2022 IEEE 8th International Conference on Computer and Communications (ICCC), Chengdu, China, 9–12 December 2022; pp. 1812–1817. [Google Scholar]
  43. Aouladhadj, D.; Kpre, E.; Deniau, V.; Kharchouf, A.; Gransart, C.; Gaquière, C. Drone Detection and Tracking Using RF Identification Signals. Sensors 2023, 23, 7650. [Google Scholar] [CrossRef]
  44. Al-Sa’d, M.F.; Al-Ali, A.; Mohamed, A.; Khattab, T.; Erbad, A. RF-Based Drone Detection and Identification Using Deep Learning Approaches: An Initiative towards a Large Open Source Drone Database. Future Gener. Comput. Syst. 2019, 100, 86–97. [Google Scholar] [CrossRef]
  45. Sharma, A.; Vanjani, P.; Paliwal, N.; Basnayaka, C.M.W.; Jayakody, D.N.K.; Wang, H.C.; Muthuchidambaranathan, P. Communication and Networking Technologies for UAVs: A Survey. J. Netw. Comput. Appl. 2020, 168, 102739. [Google Scholar] [CrossRef]
  46. Frid, A.; Ben-Shimol, Y.; Manor, E.; Greenberg, S. Drones Detection Using a Fusion of RF and Acoustic Features and Deep Neural Networks. Sensors 2024, 24, 2427. [Google Scholar] [CrossRef]
  47. Flak, P. Drone Detection Sensor with Continuous 2.4 GHz ISM Band Coverage Based on Cost-Effective SDR Platform. IEEE Access 2021, 9, 114574–114586. [Google Scholar] [CrossRef]
  48. Deshmukh, S.; Vinoy, K.J. Design and Development of RADAR for Detection of Drones and UAVs. In Proceedings of the 2022 IEEE Microwaves, Antennas, and Propagation Conference (MAPCON), Bangalore, India, 12–16 December 2022; pp. 1714–1719. [Google Scholar]
  49. de Quevedo, Á.D.; Urzaiz, F.I.; Menoyo, J.G.; López, A.A. Drone Detection With X-Band Ubiquitous Radar. In Proceedings of the 2018 19th International Radar Symposium (IRS), Bonn, Germany, 20–22 June 2018; pp. 1–10. [Google Scholar]
  50. Nuss, B.; Sit, L.; Fennel, M.; Mayer, J.; Mahler, T.; Zwick, T. MIMO OFDM Radar System for Drone Detection. In Proceedings of the 2017 18th International Radar Symposium (IRS), Prague, Czech Republic, 28–30 June 2017; pp. 1–9. [Google Scholar] [CrossRef]
  51. Kim, B.K.; Park, J.; Park, S.J.; Kim, T.W.; Jung, D.H.; Kim, D.H.; Kim, T.; Park, S.O. Drone Detection with Chirp-Pulse Radar Based on Target Fluctuation Models. ETRI J. 2018, 40, 188–196. [Google Scholar] [CrossRef]
  52. Rai, P.K.; Idsoe, H.; Yakkati, R.R.; Kumar, A.; Ali Khan, M.Z.; Yalavarthy, P.K.; Cenkeramaddi, L.R. Localization and Activity Classification of Unmanned Aerial Vehicle Using MmWave FMCW Radars. IEEE Sens. J. 2021, 21, 16043–16053. [Google Scholar] [CrossRef]
  53. Doviak, R.J.; Zrnic, D.S.; Sirmans, D.S. Doppler Weather Radar. Proc. IEEE 1979, 67, 1522–1553. [Google Scholar] [CrossRef]
  54. El-Latif, E.I.A. Detection and Identification Drones Using Long Short-Term Memory and Bayesian Optimization. Multimed. Tools Appl. 2024, 1–17. [Google Scholar] [CrossRef]
  55. Wong, W.K.; Tan, P.N.; Loo, C.K.; Lim, W.S. An Effective Surveillance System Using Thermal Camera. In Proceedings of the 2009 International Conference on Signal Acquisition and Processing (ICSAP 2009), Kuala Lumpur, Malaysia, 3–5 April 2009; pp. 13–17. [Google Scholar] [CrossRef]
  56. Tang, P.; Wang, C.; Wang, X.; Liu, W.; Zeng, W.; Wang, J. Object Detection in Videos by High Quality Object Linking. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 1272–1278. [Google Scholar] [CrossRef]
  57. Rozantsev, A.; Lepetit, V.; Fua, P. Detecting Flying Objects Using a Single Moving Camera. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 879–892. [Google Scholar] [CrossRef]
  58. Aker, C.; Kalkan, S. Using Deep Networks for Drone Detection. In Proceedings of the IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Lecce, Italy, 29 August–1 September 2017; pp. 1–6. [Google Scholar] [CrossRef]
  59. Zhao, Z.Q.; Zheng, P.; Xu, S.T.; Wu, X. Object Detection with Deep Learning: A Review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232. [Google Scholar] [CrossRef]
  60. Wang, L.; Ai, J.; Zhang, L.; Xing, Z. Design of Airport Obstacle-Free Zone Monitoring Uav System Based on Computer Vision. Sensors 2020, 20, 2475. [Google Scholar] [CrossRef] [PubMed]
  61. Lee, D.R.; Gyu La, W.; Kim, H. Drone Detection and Identification System Using Artificial Intelligence. In Proceedings of the 9th International Conference on Information and Communication Technology Convergence: ICT Convergence Powered by Smart Intelligence (ICTC), Jeju Island, Republic of Korea, 17–19 October 2018; pp. 1131–1133. [Google Scholar] [CrossRef]
  62. Lakkshmanan, A.; Seranmadevi, R.; Sree, P.H.; Tyagi, A.K. The Evolution of Object Detection Methods. In Enhancing Medical Imaging with Emerging Technologies; IGI Global Scientific Publishing: New York, NY, USA, 2024; Volume 133, pp. 166–179. [Google Scholar] [CrossRef]
  63. Wei, J.; Pan, S.; Gao, W.; Zhao, T. A Dynamic Object Filtering Approach Based on Object Detection and Geometric Constraint between Frames. IET Image Process. 2022, 16, 1636–1647. [Google Scholar] [CrossRef]
  64. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar] [CrossRef]
  65. Hirata, N.S.T.; Papakostas, G.A. On Machine-Learning Morphological Image Operators. Mathematics 2021, 9, 1854. [Google Scholar] [CrossRef]
  66. Zou, Z.; Chen, K.; Shi, Z.; Guo, Y.; Ye, J. Object Detection in 20 Years: A Survey. Proc. IEEE 2023, 111, 257–276. [Google Scholar] [CrossRef]
  67. Liu, Y.; Sun, P.; Wergeles, N.; Shang, Y. A Survey and Performance Evaluation of Deep Learning Methods for Small Object Detection. Expert Syst. Appl. 2021, 172, 114602. [Google Scholar] [CrossRef]
  68. Sarker, I.H. Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions. SN Comput. Sci. 2021, 2, 420. [Google Scholar] [CrossRef]
  69. Wu, X.; Sahoo, D.; Hoi, S.C.H. Recent Advances in Deep Learning for Object Detection. Neurocomputing 2020, 396, 39–64. [Google Scholar] [CrossRef]
  70. Ahmed, S.F.; Alam, M.S.B.; Hassan, M.; Rozbu, M.R.; Ishtiak, T.; Rafa, N.; Mofijur, M.; Shawkat, A.A.B.M.; Gandomi, A.H. Deep Learning Modelling Techniques: Current Progress, Applications, Advantages, and Challenges. Artif. Intell. Rev. 2023, 56. [Google Scholar]
  71. Karbouj, B.; Topalian-Rivas, G.A.; Krüger, J. Comparative Performance Evaluation of One-Stage and Two-Stage Object Detectors for Screw Head Detection and Classification in Disassembly Processes. Procedia CIRP 2024, 122, 527–532. [Google Scholar] [CrossRef]
  72. Kaur, J.; Singh, W. Tools, Techniques, Datasets and Application Areas for Object Detection in an Image: A Review. Multimed. Tools Appl. 2022, 81, 38297–38351. [Google Scholar] [CrossRef]
  73. Xie, X.; Cheng, G.; Wang, J.; Yao, X.; Han, J. Oriented R-CNN for Object Detection. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, BC, Canada, 11–17 October 2021; pp. 3500–3509. [Google Scholar]
  74. Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar] [CrossRef]
  75. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
  76. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
  77. Cheng, B.; Wei, Y.; Shi, H.; Feris, R.; Xiong, J.; Huang, T. Revisiting RCNN: On Awakening the Classification Power of Faster RCNN. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 473–490. [Google Scholar] [CrossRef]
  78. Carranza-García, M.; Torres-Mateo, J.; Lara-Benítez, P.; García-Gutiérrez, J. On the Performance of One-Stage and Two-Stage Object Detectors in Autonomous Vehicles Using Camera Data. Remote Sens. 2021, 13, 89. [Google Scholar] [CrossRef]
  79. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the Computer Vision—ECCV 2016, Amsterdam, The Netherlands, 11–14 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 21–37. [Google Scholar]
  80. Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525. [Google Scholar]
  81. Li, Y.; Dong, H.; Li, H.; Zhang, X.; Zhang, B.; Xiao, Z. Multi-Block SSD Based on Small Object Detection for UAV Railway Scene Surveillance. Chin. J. Aeronaut. 2020, 33, 1747–1755. [Google Scholar] [CrossRef]
  82. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef]
  83. Hakani, R.; Prajapati, S.; Rawat, A. Optimizing UAV Detection Performance with YOLOv5 Series Algorithms. Int. J. Microsyst. IoT 2024, 2, 991–995. [Google Scholar] [CrossRef]
  84. Kim, J.-H.; Kim, N.; Won, C.S. High-Speed Drone Detection Based On Yolo-V8. In Proceedings of the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; pp. 1–2. [Google Scholar] [CrossRef]
  85. Feng, Y.; Wang, T.; Jiang, Q.; Zhang, C.; Sun, S.; Qian, W. An Efficient and Accurate UAV Detection Method Based on YOLOv5s. Appl. Sci. 2024, 14, 6398. [Google Scholar] [CrossRef]
  86. Agarwal, K.; Dhurandher, S.K.; Borah, S.; Woungang, I.; Sharma, D.K.; Arora, K. Performance Analysis of YOLOv7 and YOLOv8 Models for Drone Detection. In Proceedings of the 2023 International Conference on Network, Multimedia and Information Technology (NMITCON), Bengaluru, India, 1–2 September 2023; pp. 1–10. [Google Scholar]
  87. Shandilya, S.K.; Srivastav, A.; Yemets, K.; Datta, A.; Nagar, A.K. YOLO-Based Segmented Dataset for Drone vs. Bird Detection for Deep and Machine Learning Algorithms. Data Brief 2023, 50, 109355. [Google Scholar] [CrossRef]
  88. Chang, Y.L.; Anagaw, A.; Chang, L.; Wang, Y.C.; Hsiao, C.Y.; Lee, W.H. Ship Detection Based on YOLOv2 for SAR Imagery. Remote Sens. 2019, 11, 786. [Google Scholar] [CrossRef]
  89. Liao, Z.; Carneiro, G. On the Importance of Normalisation Layers in Deep Learning with Piecewise Linear Activation Units. In Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Placid, NY, USA, 7–10 March 2016; pp. 1–8. [Google Scholar]
  90. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar] [CrossRef]
  91. Bochkovskiy, A.; Wang, C.; Liao, H.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar] [CrossRef]
  92. Jocher, G. Ultralytics. YOLOv5. 2020. Available online: https://github.com/ultralytics/yolov5 (accessed on 30 September 2024).
  93. Jocher, G.; Stoken, A.; Borovec, J.; NanoCode012; ChristopherSTAN; Changyu, L.; Laughing; tkianai; yxNONG; Hogan, A.; et al. ultralytics/yolov5: v4.0, nn.SiLU() Activations, Weights & Biases Logging, PyTorch Hub Integration; Zenodo: Genève, Switzerland, 2021.
  94. Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W.; et al. YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications. arXiv 2022, arXiv:2209.02976. [Google Scholar]
  95. Wang, C.; Bochkovskiy, A.; Liao, H.M. YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. arXiv 2022, arXiv:2207.02696. [Google Scholar] [CrossRef]
  96. RangeKing. Brief Summary of YOLOv8 Model Structure. Available online: https://github.com/ultralytics/ultralytics/issues/189 (accessed on 5 November 2024).
  97. Wang, C.Y.; Yeh, I.H.; Liao, H.Y.M. YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information. arXiv 2024, arXiv:2402.13616. [Google Scholar] [CrossRef]
  98. Chien, C.T.; Ju, R.Y.; Chou, K.Y.; Chiang, J.S. YOLOv9 for Fracture Detection in Pediatric Wrist Trauma X-Ray Images. Electron. Lett. 2024, 60, 9–11. [Google Scholar] [CrossRef]
  99. Wang, C.Y.; Liao, H.Y.M.; Yeh, I.H. Designing Network Design Strategies Through Gradient Path Analysis. J. Inf. Sci. Eng. 2023, 39, 975–995. [Google Scholar] [CrossRef]
  100. Wang, C.; Liao, H.M.; Wu, Y.; Chen, P.; Hsieh, J.; Yeh, I. CSPNet: A New Backbone That Can Enhance Learning Capability of CNN. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 1571–1580. [Google Scholar]
  101. Li, J.; Feng, Y.; Shao, Y.; Liu, F. IDP-YOLOV9: Improvement of Object Detection Model in Severe Weather Scenarios from Drone Perspective. Appl. Sci. 2024, 14, 5277. [Google Scholar] [CrossRef]
  102. Torrey, L.; Shavlik, J. Transfer Learning. In Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques; IGI Global: New York, NY, USA, 2010; pp. 242–264. [Google Scholar]
  103. Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A Comprehensive Survey on Transfer Learning. Proc. IEEE 2021, 109, 43–76. [Google Scholar] [CrossRef]
  104. Nvidia Corporation. Jetson NANO Module. Available online: https://developer.nvidia.com/embedded/jetson-nano (accessed on 30 September 2024).
  105. Mahdavi, F.; Rajabi, R. Drone Detection Using Convolutional Neural Networks. In Proceedings of the 2020 6th Iranian Conference on Signal Processing and Intelligent Systems (ICSPIS), Mashhad, Iran, 23–24 December 2020. [Google Scholar] [CrossRef]
  106. Wu, Q.; Feng, D.; Cao, C.; Zeng, X.; Feng, Z.; Wu, J.; Huang, Z. Improved Mask R-Cnn for Aircraft Detection in Remote Sensing Images. Sensors 2021, 21, 2618. [Google Scholar] [CrossRef]
  107. Al-Qubaydhi, N.; Alenezi, A.; Alanazi, T.; Senyor, A.; Alanezi, N.; Alotaibi, B.; Alotaibi, M.; Razaque, A.; Abdelhamid, A.A.; Alotaibi, A. Detection of Unauthorized Unmanned Aerial Vehicles Using YOLOv5 and Transfer Learning. Electronics 2022, 11, 2669. [Google Scholar] [CrossRef]
  108. Singha, S.; Aydin, B. Automated Drone Detection Using YOLOv4. Drones 2021, 5, 95. [Google Scholar] [CrossRef]
  109. Aydin, B.; Singha, S. Drone Detection Using YOLOv5. Eng 2023, 4, 416–433. [Google Scholar] [CrossRef]
  110. Yilmaz, B.; Kutbay, U. YOLOv8 Based Drone Detection: Performance Analysis and Optimization. Preprints 2024. [Google Scholar] [CrossRef]
Figure 1. Drone utilization: a spectrum of beneficial and malicious purposes.
Figure 2. Spectrum of drone threat scenarios.
Figure 3. Basic framework of a two-stage detector.
Figure 4. Basic framework of a one-stage detector.
Figure 5. Timeline of YOLO model advancements.
Figure 6. State-of-the-art YOLOv9 architecture.
Figure 7. The proposed drone detection diagram using YOLOv9.
Figure 8. Overview of transfer learning approaches.
Figure 9. (a) Edge device-centric architecture for real-time drone detection. (b) Experimental setup for real-time drone detection.
Figure 10. Flowchart of the overall conducted experiment.
Figure 11. Confusion matrix of the proposed method.
Figure 12. Training and validation evolution over 100 epochs.
Figure 13. YOLOv9 model accuracy over 100 epochs.
Figure 14. Distribution of the real bounding box: (a) center point distribution and (b) length and width distribution.
Figure 15. All metrics from training the YOLOv9 model.
Figure 16. Real-time drone detection with YOLOv9 at an altitude of 15 feet.
Figure 17. Detection of drones in real time with YOLOv9 from an altitude of 50 feet.
Figure 18. Real-time detection of drones using YOLOv9 at an altitude of 110 feet.
Figure 19. Drone detection at nighttime and at an altitude of 110 feet.
Figure 20. Detection of drones in dark environments with artificial lighting.
Figure 21. Detection of drones in high-altitude scenarios with poor visibility.
Figure 22. Drone detection in low-visibility cloudy environments.
Figure 23. Drone detection in multiple-object scenes with complex and dynamic backgrounds.
Figure 24. Real-time drone detection in complex backgrounds.
Figure 25. Concurrent detection of two drones with YOLOv9.
Figure 26. Advanced multiple drone detection with YOLOv9.
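As a complement to the deployment pipeline depicted in Figure 9 and the detection results shown in Figures 16–26, the following minimal sketch illustrates how a real-time capture-and-detect loop of this kind can be assembled on an edge device. It assumes the Ultralytics Python API and OpenCV; the weight file name yolov9_drone.pt and the camera index are placeholders, not the exact artifacts used in this work.

```python
# Minimal real-time detection loop (illustrative sketch, not the authors' exact code).
# Assumes the Ultralytics package and OpenCV are installed on the Jetson Nano;
# "yolov9_drone.pt" is a placeholder for custom-trained drone-detection weights.
import cv2
from ultralytics import YOLO

model = YOLO("yolov9_drone.pt")      # fine-tuned YOLOv9 weights (placeholder name)
cap = cv2.VideoCapture(0)            # USB/CSI camera attached to the edge device

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Run inference on the current frame and draw boxes with confidence scores.
    results = model.predict(frame, imgsz=640, conf=0.25, verbose=False)
    annotated = results[0].plot()
    cv2.imshow("Drone detection", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

On a resource-constrained board such as the Jetson Nano, lowering the input resolution or exporting the network to an optimized runtime (e.g., TensorRT) is a common way to trade a small amount of accuracy for a higher frame rate.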
Table 1. Comparison of YOLO versions based on architecture, framework, mAP, and FPS.
YOLO Variant | Publication Date | Anchor | Framework | Backbone | mAP | FPS | Detection Model
YOLOv1 | 2016 | No | Darknet | Darknet-19 | 63.4% | 45 | Single-stage detector
YOLOv2 [88] | 2017 | Yes | Darknet [89] | Darknet-19 | 76.8% | 45 | Single-stage detector
YOLOv3 [90] | 2018 | Yes | Darknet | Darknet-53 | 57.9% | 30 | Single-stage detector
YOLOv4 [91] | 2020 | Yes | Darknet | CSPDarknet53 | 65.7% | 62 | Single-stage detector
YOLOv5 [92] | 2020 | Yes | PyTorch [93] | CSPDarknet53 | 50.7% | 140 | Single-stage detector
YOLOv6 [94] | 2022 | Yes | PyTorch | EfficientRep | 52.3% | 123 | Single-stage detector
YOLOv7 [95] | 2022 | Yes | PyTorch | E-ELAN | 56.8% | 161 | Single-stage detector
YOLOv8 [96] | 2023 | Yes | PyTorch | CSPDarknet53 | 60% | 140 | Single-stage detector
YOLOv9 [97] | 2024 | Yes | PyTorch | Advanced CSPNet | 71.2% | 140 | Single-stage detector
Table 2. Parameters for training the model.
Training Parameter | Value
Classes | 1
Batch Size | 8
Epochs | 100
Initial Learning Rate | 1 × 10⁻⁴
Final Learning Rate | 1 × 10⁻³
Optimizer | SGD
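To make the role of these parameters concrete, the sketch below shows how the values in Table 2 could be passed to a transfer-learning run. It assumes the Ultralytics training API; the dataset file drone.yaml and the starting checkpoint yolov9c.pt are placeholder names rather than the authors' exact configuration.

```python
# Illustrative transfer-learning run wired to the values in Table 2.
# "drone.yaml" (a single-class dataset description) and "yolov9c.pt" (a pre-trained
# checkpoint) are placeholder names; the authors' actual scripts may differ.
from ultralytics import YOLO

model = YOLO("yolov9c.pt")           # start from pre-trained YOLOv9 weights

model.train(
    data="drone.yaml",               # 1 class: drone
    epochs=100,                      # Table 2: 100 epochs
    batch=8,                         # Table 2: batch size 8
    optimizer="SGD",                 # Table 2: SGD optimizer
    lr0=1.0e-4,                      # Table 2: initial learning rate
    # Table 2 also lists a final learning rate of 1e-3; how that maps onto the
    # scheduler arguments (e.g., lrf) depends on the training framework used.
)
```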
Table 3. Performance comparison of YOLOv9 with existing drone detection methods.
Model | Precision | Recall | mAP50 | F1-Score
CNN [105] | 0.96 | 0.94 | 0.95 | 0.9498
Mask R-CNN [106] | 0.936 | 0.894 | 0.925 | 0.9145
YOLOv3 [107] | 0.92 | 0.70 | 0.785 | 0.795
YOLOv4 [108] | 0.950 | 0.680 | 0.7436 | 0.790
YOLOv5 [109] | 0.918 | 0.875 | 0.9040 | 0.896
YOLOv8 [110] | 0.91 | 0.94 | 0.86 | 0.85
YOLOv9 | 0.946 | 0.864 | 0.9570 | 0.9030
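The F1-score reported for YOLOv9 in Table 3 is the harmonic mean of its precision and recall, F1 = 2PR/(P + R); the short check below reproduces that value from the table entries.

```python
# Sanity check: the F1-score is the harmonic mean of precision and recall.
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.946, 0.864), 4))    # YOLOv9 row -> 0.9031, consistent with 0.9030
```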
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.