Article

Vision-Based Support for the Detection and Recognition of Drones with Small Radar Cross Sections

1 Department of Aeronautical Engineering, Faculty of Engineering, Sudan University of Science and Technology (SUST), Khartoum 11116, Sudan
2 Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
3 Department of Information Technology, College of Computing and Informatics, Saudi Electronic University, P.O. Box 93499, Riyadh 11673, Saudi Arabia
4 Department of Computer Engineering, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
* Author to whom correspondence should be addressed.
Electronics 2023, 12(10), 2235; https://doi.org/10.3390/electronics12102235
Submission received: 15 April 2023 / Revised: 9 May 2023 / Accepted: 10 May 2023 / Published: 15 May 2023

Abstract

Drones are increasingly vital in numerous fields, such as commerce, delivery services, and military operations. It is therefore essential to develop advanced systems for detecting and recognizing drones to ensure the safety and security of airspace. This paper aimed to develop a robust solution for detecting and recognizing drones and birds in airspace by combining a radar system with a visual imaging system, and it demonstrates the potential of combining the two systems for drone detection and recognition. The results show that this approach is highly effective, with an overall precision of 88.82%, an accuracy of 71.43%, and an F1 score of 76.27%. The outcome of this study has significant practical implications for developing more advanced and effective drone and bird detection systems. The proposed algorithm is benchmarked against other related works and shows acceptable performance compared with its counterparts.

1. Introduction

Unmanned Aerial Vehicles (UAVs), often known as drones, have garnered considerable interest in various civil and commercial uses, yet they unquestionably pose risks to the safety of airspace that can threaten lives and property [1]. These dangers range widely in terms of the operators’ motivations and level of expertise, from carelessness to severe, deliberate disturbance, and such occurrences are becoming more frequent. For instance, in the first few months of 2019, numerous airports in the USA, UK, Ireland, and UAE suffered significant disruption to operations after drone observations [1,2].
According to traditional risk theory, large risks arise when an event is likely and its repercussions are severe (risk equals probability times impact). By regulating drone operations, flight authorities worldwide are making great efforts to lower the probability component of this risk equation. Regulations may deter negligent or incompetent drone operations, but they cannot stop unlawful or terrorist attacks; to be effective, they must be supported by technology that allows for drone detection, categorization, tracking, interdiction, and evidence gathering [3].
Visual and radar-based detection approaches can identify aerial objects with small radar cross sections (RCS) operating at low altitudes under various environmental conditions. Because optical detection methods are limited to favorable weather, radar-based detection must also be used to increase the model’s resilience [4,5,6]. Convolutional neural networks (CNNs) perform so well in current object detection that older methods have almost vanished from the scene; the CNN’s capacity to extract features is its finest feature [7]. In [8], the authors thoroughly investigated drone RCS and highlighted the influence of the blade material: according to their findings, the RCS of metallic blades is significantly greater than that of plastic ones.
The malicious use of UAVs has increased widely and, left uncontrolled, can considerably threaten lives and property. Furthermore, the relatively small RCS of UAVs makes distinguishing between drones and birds challenging, resulting in poorly trained radar systems [9]. Previous research highlighted several methods that cannot deal with many complex environments, and those that can tend to yield relatively inaccurate outcomes. An intelligent, optimized radar system is therefore highly required, and an integrated method is a suitable solution to overcome these difficulties [10].
In [11], the authors addressed radar detection of moving targets, conventionally handled with constant false alarm rate (CFAR) detectors, using deep learning (DL)-based algorithms for UAV detection. A CNN was used to classify patches of the input Range-Doppler Radar (RDR) maps and to regress the Euclidean distance between the patch center and the target. A non-maximum suppression (NMS) technique was then proposed to reduce and control false alarms. Experimental results on the training and test data show that the DL-CNN-NMS-based mechanism detects the target more precisely and attains a much better false alarm rate than CFAR.
A novel machine learning Doppler signature (ML-DS)-based detection mechanism was proposed in [12] for the localization and classification of small drones. Extensive tests and experiments were conducted to measure the performance and accuracy of the proposed method. The results show 97% accuracy using the R-squared measure, and the model also shows acceptable complexity.
In [13], human–vehicle object classification was proposed using a combination of a support vector classifier (SVC) and the DL model you-only-look-once (YOLO), applied to high-resolution automotive radar systems. To improve classification performance, the target boundaries predicted by the DL-SVC-YOLO model are projected to the DL stage, and the overall classification accuracy is enhanced by combining the YOLO and SVC results with the predefined target boundaries. The results illustrate that the proposed technique outperforms algorithms based on SVC or YOLO alone.
In [14], a 77 GHz millimeter-wave (mmW) radar classifier was proposed based on machine learning artificial neural networks (ML-ANNs), which are sometimes called deep learning (DL). The proposed ML-ANN-mmW radar utilizes statistical knowledge of the targets’ radar cross-section (RCS) data. The proposed method achieved more than 90% classification accuracy, and experiments conducted on a beam-steering-based radar achieved 98.7% accuracy. The ML-ANN-RCS radar was compared with other related works and showed acceptable performance.
In [15], an ML-FMCW radar was proposed based on machine learning applied to frequency-modulated continuous-wave radar operating at 60–64 GHz for multiclass objects. The key aim of the study was feature extraction from the point-cloud data and object classification using bagged ensemble models to reduce dataset variance and bias. To validate the robustness and validity of the proposed technique, noisy datasets were used for UAVs moving at various distances, angles, and velocities. Experimental results showed that the proposed ML-FMCW method attains better object classification accuracy than other related works based on signal characteristics, i.e., signal strength and velocity. A summary of the related works is given in Table 1.
This paper aims to develop an efficient and optimized detection system to distinguish drones from birds; the objectives of this work are:
  • To increase drone detection range using 3D k-band radar and visual imaging;
  • To build a deep learning technique for detecting and recognizing drones using convolutional neural networks.
We propose the construction of deep learning-based software that tracks, detects, and classifies objects from raw data in real time using a convolutional neural network technique. Due to their comparatively high accuracy and speed, deep convolutional neural networks have proven to be a trustworthy method for image object detection and classification [22,23]. In addition, a CNN method enables UAV systems to transform object data from the immediate surroundings into abstract data that machines can understand without human intervention, so that machines can make real-time decisions based on the facts at hand. The ability of a UAV to fly autonomously and intelligently can be significantly enhanced by integrating a CNN into the onboard guidance systems [24,25].
The remainder of the paper is organized as follows: Section 2 presents the visual-support detection method. Section 3 discusses the 3D K-band radar system. Then, in Section 4, the visual detection system is introduced, and the performance evaluation and results are discussed in Section 5. Finally, concluding remarks are presented in Section 6.

2. The Visual-Support Method of Detection

The system supports two different detection methods, as shown in Figure 1. The first method uses the 3D K-band radar: the signal received from the radar antenna [26] is fed to a CNN that analyzes the simulated radar signals reflected off the drones using short-time Fourier transform (STFT) spectrograms [27]. The drones differ in numerous respects, including blade length and rotation rate, which affect the STFT spectrograms.
The second method, visual imaging with an optical sensor, captures an image of the target, and another CNN extracts the pixel-level features for classification [28]. The second system is added to enhance the recognition of the first: whenever the radar fails to classify the target, the second (vision) system is used. The signal received by the first system is recorded to optimize the radar database [29]. A sketch of this hand-off logic is given below.
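As a minimal illustration of this combination, the following Python sketch shows how such a radar-first, vision-fallback decision could be wired together; the classifiers are passed in as generic callables, and the 0.8 confidence threshold and the stand-in classifier outputs are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of the visual-support decision logic described above.
# The two classifiers are callables returning (label, confidence); the 0.8
# confidence threshold is an assumed value, not taken from the paper.
from typing import Callable, Tuple

Classifier = Callable[[object], Tuple[str, float]]

def classify_target(radar_spectrogram, camera_frame,
                    radar_cnn: Classifier, vision_cnn: Classifier,
                    threshold: float = 0.8) -> str:
    """Use the radar CNN first; fall back to the vision CNN when the
    radar decision is missing or not confident enough."""
    label, confidence = radar_cnn(radar_spectrogram)
    if label is not None and confidence >= threshold:
        return label                      # radar classification accepted
    vision_label, _ = vision_cnn(camera_frame)
    return vision_label                   # visual imaging used as support

# Toy usage with stand-in classifiers:
radar_stub = lambda spec: ("unknown", 0.4)     # radar unsure about this target
vision_stub = lambda img: ("hexacopter", 0.9)
print(classify_target("spectrogram", "frame", radar_stub, vision_stub))  # hexacopter
```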

3. K-Band Radar System

The detection process depends highly on two main functional factors: the RCS and the Signal-to-Noise Ratio (SNR). The RCS is the effective surface area of a target as seen by the radar. When classifying drones with radars, the RCS is essential because it directly determines how strong the returned radar signal is [30]. This study utilizes a 3D radar, which provides range, azimuth, and elevation, compared with a 2D radar, which provides only range and azimuth; applications include weather monitoring, air defense, and surveillance. The micro-Doppler effects produced by drone propeller blades are used in this work (Figure 2).
Consequently, the RCS of the drones’ blades is much more crucial for this inquiry than that of their bodies [31]. As the SNR decreases, classification performance declines because the target becomes less distinct when separating drones from birds. Equation (1) [2] demonstrates how the SNR and RCS are firmly related; as a result, the SNR is typically low for drones [32].
\mathrm{SNR} = \frac{P_t G^2 \lambda^2 \sigma}{(4\pi)^3 R^4 K T B F l} \quad (1)
where P_t is the transmitted power, G is the antenna gain, λ is the wavelength, σ is the target radar cross section, and R is the distance between the target and the antenna. KTB is the thermal noise power, where K is Boltzmann’s constant = 1.38 × 10−23 J/K [33], T is the room temperature, and B is the noise bandwidth; F is the added noise (noise figure) of the actual receiver, and l accounts for extra losses such as scanning, beam shape, integration, etc.
Therefore, as the signal SNR substantially impacts the effectiveness of the trained model, it is vital to understand and account for it. If the SNR of the training data is too high, the model does not generalize well to lower-SNR, more realistic circumstances [34].
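As a quick numerical illustration of Equation (1), the short Python sketch below evaluates the SNR for one assumed parameter set loosely inspired by Table 2; the range, noise bandwidth, receiver noise figure, and extra losses are illustrative assumptions, so the printed value only indicates how the terms trade off and is not the simulation result reported later.

```python
import math

# Illustrative evaluation of Equation (1); parameter values are assumptions
# loosely based on Table 2, not the exact settings of the MATLAB simulation.
Pt    = 10.0               # transmitted power [W]
G     = 10 ** (30 / 10)    # antenna gain, 30 dB -> linear
lam   = 3e8 / 24e9         # wavelength at 24 GHz [m]
sigma = 1e-4               # target RCS, 1 cm^2 expressed in m^2
R     = 500.0              # target range [m] (assumed)
K     = 1.38e-23           # Boltzmann's constant [J/K]
T     = 290.0              # noise temperature [K] (assumed)
B     = 1 / 70e-6          # noise bandwidth ~ 1 / pulse width [Hz]
F     = 10 ** (5 / 10)     # receiver noise figure, assumed 5 dB
loss  = 10 ** (3 / 10)     # extra losses, assumed 3 dB

snr = (Pt * G**2 * lam**2 * sigma) / ((4 * math.pi)**3 * R**4 * K * T * B * F * loss)
print(f"single-pulse SNR at {R:.0f} m: {10 * math.log10(snr):.1f} dB")
```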

The Micro-Doppler Signature

To collect micro-Doppler data of aerial targets, including birds and drones, a K-band radar was developed [35]. Based on our continuous flying methods, the initial investigation categorizes drones as one classification group and birds as another. The accuracy of categorization depends directly on the quality of the extracted data. Along with the primary Doppler of the target, the target’s micro-Doppler signature is also received.
Since the micro-Doppler phenomenon provides information about the target’s moving parts, using it for precise feature extraction is essential for correct detection and categorization of the target. After noise reduction, the Doppler and micro-Doppler components are extracted from the mixer’s output. Feature extraction from the radar data accommodates the time dependency because micro-Doppler is time dependent. The radar wavelength λ and the relative (radial) blade tip speed v_{r,rot} determine the micro-Doppler shift f_{D,rot} [36]; for a single propeller with its rotation axis parallel to the LoS, this is given by Equation (2) and illustrated in Figure 2.
f_{D,\mathrm{rot}} = \frac{2}{\lambda} v_{r,\mathrm{rot}} \quad (2)
The radial blade tip velocity v_{r,\mathrm{rot}} is the component of the tangential blade tip velocity v_{\mathrm{rot}} along the radar’s line of sight (LoS). The highest v_{r,\mathrm{rot}} is attained when the blade is perpendicular to the line of sight [37]. An approaching blade causes a positive micro-Doppler frequency shift, and a receding blade causes a negative micro-Doppler frequency shift. The blade tip velocity can be expressed as:
v_{\mathrm{rot}} = 2 \pi L \Omega \quad (3)
where L is the blade length and Ω is the blade rotation rate. The frequency-related information is extracted from the raw data using Fourier transforms; however, the Fourier transform yields only frequency information, so we use the STFT of Equation (5) to obtain temporal information as well [38]. The moving parts of the aerial targets produce the micro-Doppler frequency components that the STFT extracts. The micro-Doppler frequency can be expressed as follows:
f_{\text{micro-Doppler}} = \frac{2 f}{c} \left( \omega \times x \right)_{\mathrm{radial}} \quad (4)
where f is the carrier frequency, c is the velocity of light in free space, and the target is assumed to have angular velocity ω and translational displacement x [39]. The STFT mathematical model used to extract the micro-Doppler signal component is as follows:
\mathrm{STFT}\{x(t)\}(\tau, \omega) \equiv X(\tau, \omega) = \int_{-\infty}^{\infty} x(t)\, w(t - \tau)\, e^{-i \omega t}\, dt \quad (5)
By taking the power spectral density of the target’s micro-Doppler characteristics, spectrogram images are produced from the STFT function [40]. Thus, spectrogram-based images are obtained using the following expression:
\mathrm{spectrogram}\{x(t)\}(\tau, \omega) \equiv \left| X(\tau, \omega) \right|^2 \quad (6)
Thus, both the target’s temporal and frequency information are contained in the resulting spectrogram. The classification algorithm is the final step in the process, allowing the system to identify the detected target accurately. Figure 3 shows three samples of the drones’ images and the primary Doppler signature effect produced by the drone propeller blades.
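To make Equations (2)–(6) concrete, the sketch below generates a toy blade-flash signal and its spectrogram with scipy.signal.stft; the carrier frequency, blade length, rotation rate, and the sinusoidal micro-Doppler model are illustrative assumptions rather than the actual radar measurements used in this work.

```python
import numpy as np
from scipy.signal import stft

# Assumed, illustrative parameters (not the paper's measured data)
fc    = 24e9                 # carrier frequency [Hz] (K-band)
lam   = 3e8 / fc             # wavelength [m]
L     = 0.15                 # blade length [m]
Omega = 80.0                 # blade rotation rate [rev/s]

v_rot  = 2 * np.pi * L * Omega     # Equation (3): blade tip speed [m/s]
fD_max = 2 * v_rot / lam           # Equation (2): peak micro-Doppler shift [Hz]
print(f"tip speed = {v_rot:.1f} m/s, peak micro-Doppler = {fD_max / 1e3:.1f} kHz")

# Toy baseband return: the radial tip velocity varies sinusoidally as the blade
# rotates, so the instantaneous Doppler sweeps between -fD_max and +fD_max.
fs    = 200e3                              # sample rate [Hz], comfortably > 2*fD_max
t     = np.arange(0, 0.1, 1 / fs)
phase = (fD_max / (2 * np.pi * Omega)) * np.sin(2 * np.pi * Omega * t)
x     = np.exp(1j * 2 * np.pi * phase)

# Equations (5) and (6): STFT, then squared magnitude gives the spectrogram
f, tau, X = stft(x, fs=fs, nperseg=256, return_onesided=False)
spectrogram = np.abs(X) ** 2
print(spectrogram.shape)                   # (frequency bins, time frames)
```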

4. Visual Detection System

There are three phases to the intelligent detection process. In the first phase, the visual system collects raw data, which the onboard intelligence system then processes in real time; autonomous, human-free decision-making based on the processed data is the final outcome [41], and the entire process is completed in milliseconds, resulting in immediate task execution. The second phase is preprocessing the captured picture to match the network specification, such as resizing, changing the color scale, and reshaping. Finally, the third phase, in which the CNN system detects and classifies the surrounding objects in real time, is the most crucial step in the procedure [42].
The system consists of one camera (SIGMA-2000M-1012) with a resolution of 1920 × 1080 pixels (full HD) [43]. A CNN (ResNet-50) is applied to the captured photo to extract features and compare them with the existing dataset to determine the target type.
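A minimal sketch of the preprocessing phase for the camera frames is shown below, assuming the standard 224 × 224 ResNet-50 input size and ImageNet normalization statistics; the exact transform chain is an assumption, since the paper does not spell out its preprocessing.

```python
import torch
from PIL import Image
from torchvision import transforms

# Assumed ResNet-50 preprocessing: resize the full-HD frame, crop to the
# 224x224 input the network expects, and normalize with ImageNet statistics.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

frame = Image.new("RGB", (1920, 1080))      # stand-in for a 1920x1080 camera frame
batch = preprocess(frame).unsqueeze(0)      # add a batch dimension
print(batch.shape)                          # torch.Size([1, 3, 224, 224])
```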

4.1. The Convolutional Neural Network for Image Recognition

A CNN contains many filters, which give it clear advantages over a regular, fully connected neural network for image recognition. Figure 4 displays a typical CNN for image classification.
Technically, a picture enters the computer as a set of pixel values with dimensions h, w, and d, where h and w stand for the pixel counts along the height and width directions, respectively, and d stands for the number of color channels, three for a typical RGB color image. The convolutional layer collects features from the raw image while shrinking it [44]. A convolutional layer consists of several similar filters, each of which is a tiny matrix with dimensions f_h, f_w, and d. The depth of the filter matrix must equal that of the input image, while f_h and f_w are typically much smaller than h and w. Each filter in a convolutional layer convolves over the input image to produce a relatively small output matrix known as a feature map. After all filters complete their convolutions, the input image is transformed into a relatively small matrix with a greater depth. The user chooses the number of filters in the convolutional layer before feeding the image, and the output depth equals that number [45].
Following convolution, a non-linear function performs a non-linear operation on the generated feature map. Because of its quicker computation and its lack of need for unsupervised pre-training, the rectified linear unit (ReLU) is typically used in CNNs for non-linear mapping [37]. Equation (7) represents the ReLU function.
f(x) = \max(0, x) \quad (7)
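The toy PyTorch sketch below illustrates these shapes: a single convolutional layer with 16 filters maps a 3-channel image to 16 feature maps, and the ReLU of Equation (7) is then applied element-wise. The filter count and kernel size are arbitrary choices for illustration, not values from the network used in the paper.

```python
import torch
import torch.nn as nn

# One convolutional layer: d = 3 input channels, 16 filters of size 3x3,
# so the output depth equals the number of filters.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1, padding=1)
relu = nn.ReLU()                      # Equation (7): f(x) = max(0, x)

image = torch.randn(1, 3, 224, 224)   # a dummy RGB image (batch of 1)
feature_maps = relu(conv(image))      # non-linear mapping after convolution

print(feature_maps.shape)             # torch.Size([1, 16, 224, 224])
print(torch.all(feature_maps >= 0))   # ReLU clips all negative values to zero
```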

4.2. The Convolutional Neural Network of Visual Imaging

ResNet-50 is one of the convolutional neural network variants in the ResNet family and has 50 layers: 48 convolutional layers along with one max-pooling and one average-pooling layer. ResNet-50 is based on the deep residual learning framework, ResNet [46].
Residual connections resolve the vanishing gradient problem even in exceedingly deep neural networks. Even though it has 50 layers, ResNet-50 has around 23 million trainable parameters, which is substantially fewer than in other architectures, as shown in Figure 5. The explanation for why it performs as it does is debatable, but explaining residual blocks and how they function makes things more transparent. Consider a neural network block whose input is x and whose true mapping we want to learn is H(x). Let us write the difference (or residual) as [47]:
R(x) = \mathrm{output} - \mathrm{input} = H(x) - x \quad (8)
Rearranging Equation (8) gives:
H(x) = R(x) + x \quad (9)
The residual block attempts to learn the true output H(x). Because there is an identity connection from x, the layers actually learn the residual R(x): the layers in a residual network learn the residual R(x), whereas the layers in a traditional network learn the output H(x) directly. It has also been found that learning the residual between the input and output is more straightforward than learning the underlying mapping directly [48]. Since the identity connections are simple skips that do not complicate the architecture, the residual model permits the reuse of activations from earlier layers in this way.
  • A convolution with 64 different kernels, each with a kernel size of 7 × 7 and a stride of 2, gives us one layer;
  • The next layer is max pooling with a stride of 2;
  • The following convolution block consists of three layers: a 1 × 1, 64 kernel; a 3 × 3, 64 kernel; and finally a 1 × 1, 256 kernel. These three layers are repeated three times, giving us nine layers in this stage;
  • Next comes a 1 × 1, 128 kernel, followed by a 3 × 3, 128 kernel and finally a 1 × 1, 512 kernel; this block is repeated four times for a total of 12 layers;
  • Following that, we have a 1 × 1, 256 kernel followed by two more kernels of size 3 × 3, 256 and 1 × 1, 1024; this block is repeated six times, giving us a total of 18 layers;
  • Then a 1 × 1, 512 kernel is followed by two other kernels of 3 × 3, 512 and 1 × 1, 2048; this block is repeated three times, giving us a total of nine layers;
  • Finally, we perform an average pool, finish with a fully connected layer of 1000 nodes, and add a softmax function, which provides one more layer [25]; a minimal code sketch of this network, adapted to our three classes, is given after this list.
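As referenced in the last bullet, the sketch below is one plausible way to instantiate this 50-layer network with torchvision and adapt it to the three classes used in this work (Phantom, hexacopter, bird); replacing the 1000-node head with a 3-node fully connected layer is our assumption about the adaptation, not a detail stated in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Build the 50-layer residual network described above; pass
# weights=models.ResNet50_Weights.IMAGENET1K_V1 instead of None to start
# from ImageNet-pretrained weights (an assumed option, not stated in the paper).
model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, 3)   # 1000-node head -> 3 classes

num_params = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {num_params / 1e6:.1f} M")   # roughly 23.5 M

# One forward pass on a dummy 224x224 RGB image:
logits = model(torch.randn(1, 3, 224, 224))
probs  = torch.softmax(logits, dim=1)           # softmax over the 3 classes
print(probs.shape)                              # torch.Size([1, 3])
```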
The performance of the CNN model is evaluated using four main aspects that control the performance of the system; they are described in detail below:
A. 
Confusion Matrix (CM)
A CM is represented in table format (see Table 3) to measure the accuracy and performance of a machine learning classification model and algorithm. From the visualized table that the CM creates, one can tune and improve the ML model performance. This paper used the scikit-learn library call confusion_matrix(y_test, y_pred) to build the table for a multi-class classification problem. The CM is built from four quantities (characteristics): true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN). These characteristics describe the performance and accuracy of the classification algorithm [49].
Evaluation of the system performance depends on the counts of test records correctly and incorrectly predicted by the model; from these counts, the following classification rates are calculated.
\mathrm{TPR} = \frac{TP}{TP + FN}
\mathrm{FPR} = \frac{FP}{FP + TN}
where TPR is the true positive rate and FPR is the false positive rate.
B. 
The critical classification metrics are accuracy, recall, precision, and F1 score
Accuracy: in this study, the percentage of input samples whose class (multirotor, hexacopter, or bird) the deep learning model detects correctly.
Precision: among the inputs whose class is predicted to be positive, what percentage are actually members of that class? The value of this metric is between zero and one, and precision is calculated separately for each of the classes. In this study, precision is defined for each of the multirotor, hexacopter, and bird classes. For instance, the precision of the multirotor class is the percentage of all inputs predicted as multirotor that really are multirotor; the same criterion is defined for the other classes.
Recall (Sensitivity): like precision, recall is calculated separately for each class. For example, the recall of the multirotor class is the percentage of all multirotor entries that are correctly detected and recognized as multirotor.
F1 Score: the harmonic mean of recall and precision, calculated separately for each class. This measure performs well on unbalanced data because it considers both false negative and false positive values [49].
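These quantities can be computed directly with the scikit-learn calls mentioned above; the sketch below uses a tiny made-up set of labels for the three classes, so the printed numbers are purely illustrative and do not reproduce the results reported later.

```python
from sklearn.metrics import confusion_matrix, classification_report

classes = ["phantom", "hexacopter", "bird"]

# Tiny made-up ground truth and predictions, for illustration only.
y_test = ["phantom", "phantom", "hexacopter", "bird", "bird", "hexacopter", "bird"]
y_pred = ["phantom", "bird",     "hexacopter", "bird", "bird", "hexacopter", "phantom"]

# Rows are the true classes, columns the predicted classes (cf. Table 3).
cm = confusion_matrix(y_test, y_pred, labels=classes)
print(cm)

# Per-class precision, recall, and F1 score, plus overall accuracy.
print(classification_report(y_test, y_pred, labels=classes, digits=2))
```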

5. Performance Evaluation and Results

Applying a DL detection and classification-based radar system usually needs a bulky dataset covering different circumstances (e.g., time of day and climate); however, datasets captured by radar sensors are even scarcer. Many researchers create their own datasets and conduct research on them because this research field depends on dataset generation. The radar system used here for detecting drones and birds provided data in decibel form [50]. These data were processed using the pseudocolor function in MATLAB, which allowed us to visualize the readings as images. The generated images served as the basis for the radar dataset used in our experiments, providing a unique representation of the radar signals that enabled us to distinguish between drones and birds.
The dataset used for training the proposed algorithm comprised 88 images in total: 24 images for the radar system and 64 images for the visual system, split as 60% for training, 30% for validation, and 10% for testing. The visual dataset contained several sorts of drones and birds, such as hexacopters, Da-Jiang Innovations (DJI) Phantoms [4], and birds; all the images were manually gathered from the Kaggle dataset site. These images featured a wide range of drone and bird kinds and various image scales, resolutions, and compositions. For instance, pictures of drones from a great distance and up close were chosen, and most pictures featured just one drone. The image quality ranged from high resolution (900 dpi) to extremely low resolution (72 dpi).
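One way to realize the 60%/30%/10% split of the 64 visual images is with two calls to scikit-learn's train_test_split, as sketched below; the file names and the per-class counts are placeholders, not the actual Kaggle image list used in the experiments.

```python
from sklearn.model_selection import train_test_split

# `images` and `labels` stand in for the 64 manually gathered visual samples.
images = [f"img_{i:02d}.jpg" for i in range(64)]          # placeholder file names
labels = (["phantom"] * 22) + (["hexacopter"] * 21) + (["bird"] * 21)

# First split off 60% for training, then divide the remaining 40% into
# 30% validation and 10% test (i.e., 75%/25% of the remainder).
x_train, x_rest, y_train, y_rest = train_test_split(
    images, labels, train_size=0.6, stratify=labels, random_state=0)
x_val, x_test, y_val, y_test = train_test_split(
    x_rest, y_rest, train_size=0.75, stratify=y_rest, random_state=0)

print(len(x_train), len(x_val), len(x_test))   # roughly 38 / 19 / 7 images
```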
The radar specification in Table 2 was chosen carefully to detect the small RCS of drones and birds of all different kinds. The radar simulation in MATLAB targets an RCS of 1 cm² by setting the radar pulse width to 70 µs and the radar height to 5 m; the probability of false alarm (PFA) is reduced to 1 in a million, as shown in Figure 6.
The target detectability range was examined from 10 to 2000 m while accounting for the radar PFA and the environmental situation; the SNR at a 2 km range reached 11.2949 dB, which exceeds that in [24]. The detectability of the target is then calculated, as shown in Figure 7.
The results of using the radar system for drone and bird detection show promise in its ability to accurately identify the presence of both, as illustrated in Figure 8. The recall for birds was 100%, indicating that the system effectively detected all instances of birds in the dataset. The precision for birds was 63.46%, indicating that the system limits, but does not eliminate, false positive bird detections [25].
The precision of drone detection was 100%, highlighting the system’s ability to identify drone presence without any false positive detections. While the recall for drones was lower at 42.86%, as shown in Figure 8, further analysis and improvement of the system can increase recall while maintaining high precision.
The overall precision of 88.82% and the F1 score of 76.27%, as shown in Figure 9, demonstrate the overall effectiveness of the radar system in detecting both drones and birds [26], complementing the visual system evaluated below. Furthermore, using a 3D K-band radar, which operates in a frequency range between 18 GHz and 27 GHz, further enhances the system’s detectability [27]. In addition, the 3D K-band radar provides improved range resolution, target detection, and clutter suppression compared with other frequency bands, making it a suitable choice for drone and bird detection. Overall, the radar system plays a critical role in ensuring the effectiveness and efficiency of the overall drone and bird detection system.

5.1. Radar Images under Various Environments

Noise imposes a restriction on radar range; the SNR of the radar images affects the probability of detecting the wrong target. Four measurements were captured under various noise backgrounds for our system, i.e., dust storms, clouds, noise from surrounding objects, and external sources. In addition, we noticed that a far reflector produced a signal too weak to exceed the noise floor, leading to wrong target detections. These effects are captured in Figure 10, which shows four radar images with noisy backgrounds.

5.2. Visual Imaging System Results

As described in Section 4, the visual system consists of one camera (SIGMA-2000M-1012) with a resolution of 1920 × 1080 pixels (full HD) [28], and a ResNet-50 CNN is applied to the captured photo to extract features and compare them with the existing dataset to determine the target type. The results of the drone detection and recognition system using visual imaging show a high level of accuracy, with an overall accuracy of 89.12%. The precision results indicate that the system can accurately identify the drones, with a precision of 93.3% for the DJI Phantom drone, 94.29% for the hexacopter, and 81.08% for birds, as shown in Figure 11. The overall precision was 89.22%.
The recall results indicate that the system can identify a large proportion of the actual positive cases, with a recall of 83.35% for the DJI Phantom drone, 97.06% for the hexacopter, and 88.24% for birds. The overall recall was 89.22%. The F1 score, which balances precision and recall, was 89.24%, indicating a good balance between detecting positive cases and avoiding false positive detections [29]. Moreover, the radar classification network showed significantly lower accuracy, recall, precision, and F1 score of 71.43%, 71.43%, 81.82%, and 76.27%, respectively.
The results suggest that the system effectively detects and recognizes drones with exceptionally high precision and recall for the hexacopter. However, the relatively lower precision for birds may be due to differences in their physical characteristics, such as size and shape, which may impact the system’s ability to identify them accurately.
In conclusion, the drone detection and recognition system using visual imaging achieved high accuracy, precision, recall, and F1 score levels for all three target classes, demonstrating its potential for practical applications. Further research can explore the system’s performance under different environmental conditions and the potential for integrating the results from the visual imaging system with those from the radar system to achieve even higher levels of accuracy and robustness, as shown in Figure 11.
In Figure 11, precision is calculated separately for each class and indicates, among the inputs predicted as a given class, what percentage truly belong to it; this study defines precision for each of the Phantom, hexacopter, and bird classes. For instance, the precision of the Phantom class is the percentage of all inputs predicted as Phantom that really are Phantoms. Recall (sensitivity) is likewise calculated separately for each class; for example, the recall of the Phantom class is the percentage of all Phantom entries that are correctly detected and recognized as Phantoms. As another example, a precision of 63% for birds means that 63% of the detections labeled as birds were indeed birds, so the system only partially limits false positive detections, whereas a precision of 100% for birds would mean the bird class was identified without any false positive detections.
The confusion matrix is shown in Table 3. In the case of DJI Phantom drones, 82.35% of the instances were correctly classified as DJI Phantom drones, while the remaining 17.65% were incorrectly classified as birds. This shows that the classifier has good precision in detecting DJI Phantom drones, but there may be room for improvement in reducing the number of false negatives. Reading Table 3 in binary terms, with birds as the positive class: a true positive (TP) means the model predicted a bird and the prediction was correct; a true negative (TN) means the model predicted that the object was not a bird and it was indeed not a bird (either a Phantom or a hexacopter); a false positive (FP) means the model predicted a bird but the object was not a bird; and a false negative (FN) means the model predicted not a bird but the object was a bird. The figures 82.35%, 0, 17.65%, 0, 97.06%, 2.94%, 5.88%, 5.88%, and 88.24% follow from the statistics of the dataset and can easily be obtained using the Python lines classifier.fit(X_train, y_train) and y_pred = classifier.predict(X_test).
In the case of hexacopters, the classifier has an excellent recall rate of 97.06%, with only 2.94% of the instances being incorrectly classified as birds. This suggests that the classifier has a high ability to identify hexacopters correctly [30]. However, the recall rate for birds is only 88.24%, which indicates that there may be some misclassifications between birds and the drone types. The confusion matrix also shows that 5.88% of the bird instances were incorrectly classified as DJI Phantom drones and another 5.88% as hexacopters.
Overall, the results of our classifier suggest that it has an excellent ability to detect DJI Phantom drones and hexacopters. For birds, however, the images used for training and testing were of poorer quality, e.g., too far away or too blurry, because they were manually gathered from the internet; this explains the false recognitions as hexacopters and DJI Phantoms.
Figure 12 compares the results of the two systems as classifiers. The results highlight the superiority of the intelligent vision system over the radar system in recognizing drones versus birds. However, the radar system is crucial to the whole drone and bird detection system, as it has better detectability than the visual system. These results suggest that the radar system can be a valuable tool for detecting and recognizing drones and birds, particularly when combined with other systems such as visual imaging. It is important to note that the data generated by the radar system were limited compared with the visual imaging system; nevertheless, these results showcase the potential of radar technology in the drone and bird detection field.
For comparison between our study and others, Table 4 shows accuracy values for some closely related works, namely DL-CNN-NMS [11], ML-DS [12], DL-SVC-YOLO [13], ML-ANN-RCS [14], ML-FMCW [15], and our VSD-CNN-RCS. The accuracy values in the table depend on the datasets and their variance and bias. Our system accuracy is 71.43% because the DJI dataset still needs to be enriched; moreover, the hexacopters and birds are similar and difficult to classify compared with the objects used in the related works. However, the accuracy we achieved is considered excellent given the features, case study, and environment of the experiments.

6. Conclusions and Future Work

This paper established the potential of using a combination of radar and visual imaging systems to detect and distinguish between drones and birds and laid the foundation for future research in this field. The paper also highlights the importance of continued research in this field, as developing efficient and effective drone and bird detection systems is crucial for ensuring the safety and security of airspace.
The study successfully demonstrated the potential of using a combination of radar and visual imaging systems to detect and recognize drones and birds. The proposed algorithm was benchmarked against other related works and shows acceptable performance compared with its counterparts.
The results showed that both systems had their strengths, with the radar system demonstrating high precision and the visual imaging system showing high recall. Combined, these systems provide a comprehensive approach to detecting and recognizing these objects in airspace. The high overall precision and accuracy of 88.82% and 71.43%, respectively, and the high F1 score of 76.27% indicate the effectiveness of this combined approach. Furthermore, the study’s results provide valuable insight into the potential of using a combination of radar and visual imaging systems. Further research in this area can lead to even more advanced and effective detection systems.
One of the limitations of this study is the lack of a comprehensive dataset. Data collection mainly depends on manual labor, which is expensive and time-consuming. A large dataset is important to reduce bias and variance in the data and, consequently, to reduce model complexity and ML model overfitting.
For future works, one can explore the system’s performance for different environmental conditions and the possible integration of the visual imaging system results with those from the radar system to achieve even higher levels of robustness and accuracy.

Author Contributions

Conceptualization, S.E.A., T.E. and M.A.A.; methodology, O.Y.E., Y.A.A. and S.E.A.; software, M.A.A., O.Y.E. and Y.A.A.; validation, M.A., R.A. and R.A.S.; formal analysis, S.E.A., T.E. and R.A.S.; investigation, T.E.; resources, M.A., R.A., R.A.S. and S.E.A.; data curation, M.A.A., M.A., S.E.A. and Y.A.A.; writing—original draft preparation, O.Y.E. and Y.A.A.; writing—review and editing, M.A., R.A. and R.A.S.; visualization, R.A.; supervision, S.E.A.; project administration, S.E.A. and T.E.; funding acquisition, M.A. and R.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R97), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would also like to acknowledge the Deanship of Scientific Research, Taif University for funding this work.

Data Availability Statement

Not applicable.

Acknowledgments

Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R97), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would also like to acknowledge the Deanship of Scientific Research, Taif University for funding this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Khalifa, O.O.; Wajdi, M.H.; Saeed, R.A.; Hashim, A.H.A.; Ahmed, M.Z.; Ali, E.S. Vehicle Detection for Vision-Based Intelligent Transportation Systems Using Convolutional Neural Network Algorithm. J. Adv. Transp. 2022, 2022, 9189600. [Google Scholar] [CrossRef]
  2. Knott, E.F.; Schaeffer, J.F.; Tulley, M.T. Radar Cross Section; SciTech Publishing: Raleigh, NC, USA, 2004. [Google Scholar]
  3. Saeed, M.M.; Saeed, R.A.; Azim, M.A.; Ali, E.S.; Mokhtar, R.A.; Khalifa, O. Green Machine Learning Approach for QoS Improvement in Cellular Communications. In Proceedings of the 2022 IEEE 2nd International Maghreb Meeting of the Conference on Sciences and Techniques of Automatic Control and Computer Engineering (MI-STA), Sabratha, Libya, 23–25 May 2022; pp. 523–528. [Google Scholar]
  4. Schreiber, E.; Heinzel, A.; Peichl, M.; Engel, M.; Wiesbeck, W. Advanced Buried Object Detection by Multichannel, UAV/Drone Carried Synthetic Aperture Radar. In Proceedings of the 2019 13th European Conference on Antennas and Propagation (EuCAP), Krakow, Poland, 31 March–5 April 2019; pp. 1–5. [Google Scholar]
  5. Aswathy, R.H.; Suresh, P.; Sikkandar, M.Y.; Abdel-Khalek, S.; Alhumyani, H.; Saeed, R.A.; Mansour, R.F. Optimized Tuned Deep Learning Model for Chronic Kidney Disease Classification. Comput. Mater. Contin. 2022, 70, 2097–2111. [Google Scholar] [CrossRef]
  6. Farlik, J.; Kratky, M.; Casar, J.; Stary, V. Radar cross-section and detection of small unmanned aerial vehicles. In Proceedings of the 2016 17th International Conference on Mechatronics—Mechatronika (ME), Prague, Czech Republic, 7–9 December 2016; pp. 1–7. [Google Scholar]
  7. Hassan, M.B.; Saeed, R.A.; Khalifa, O.; Ali, E.S.; Mokhtar, R.A.; Hashim, A.A. Green Machine Learning for Green Cloud Energy Efficiency. In Proceedings of the 2022 IEEE 2nd International Maghreb Meeting of the Conference on Sciences and Techniques of Automatic Control and Computer Engineering (MI-STA), Sabratha, Libya, 23–25 May 2022; pp. 288–294. [Google Scholar]
  8. Jahangir, M.; Baker, C.J. CLASS U-space drone test flight results for non-cooperative surveillance using an L-band 3-D staring radar. In Proceedings of the 2019 20th International Radar Symposium (IRS), Ulm, Germany, 26–28 June 2019. [Google Scholar]
  9. Aswini, N.; Uma, S.V. Custom Based Obstacle Detection Using Yolo v3 for Low Flying Drones. In Proceedings of the 2021 International Conference on Circuits, Controls and Communications (CCUBE), Bangalore, India, 23–24 December 2021; pp. 1–6. [Google Scholar] [CrossRef]
  10. Saeed, R.A.; Omri, M.; Abdel-Khalek, S.; Ali, E.S.; Alotaibi, M.F. Optimal path planning for drones based on swarm intelligence algorithm. Neural Comput. Appl. 2022, 34, 10133–10155. [Google Scholar] [CrossRef]
  11. Wang, C.; Tian, J.; Cao, J.; Wang, X. Deep Learning-Based UAV Detection in Pulse-Doppler Radar. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5105612. [Google Scholar] [CrossRef]
  12. Sun, Y.; Abeywickrama, S.; Jayasinghe, L.; Yuen, C.; Chen, J.; Zhang, M. Micro-Doppler Signature-Based Detection, Classification, and Localization of Small UAV With Long Short-Term Memory Neural Network. IEEE Trans. Geosci. Remote Sens. 2021, 59, 6285–6300. [Google Scholar] [CrossRef]
  13. Kim, W.; Cho, H.; Kim, J.; Kim, B.; Lee, S. Target Classification Using Combined YOLO-SVM in High-Resolution Automotive FMCW Radar. In Proceedings of the 2020 IEEE Radar Conference (RadarConf20), Florence, Italy, 21–25 September 2020; pp. 1–5. [Google Scholar] [CrossRef]
  14. Cai, X.; Sarabandi, K. A Machine Learning Based 77 GHz Radar Target Classification for Autonomous Vehicles. In Proceedings of the 2019 IEEE International Symposium on Antennas and Propagation and USNC-URSI Radio Science Meeting, Atlanta, GA, USA, 7–12 July 2019; pp. 371–372. [Google Scholar] [CrossRef]
  15. Zhang, W.; Li, S.; Zhu, C.; Wang, C. Classification of Combination Target’s Components Based on Deep Learning. In Proceedings of the 2019 International Applied Computational Electromagnetics Society Symposium—China (ACES), Nanjing, China, 8–11 August 2019; pp. 1–2. [Google Scholar] [CrossRef]
  16. Javan, F.D.; Samadzadegan, F.; Gholamshahi, M.; Mahini, F.A. A Modified YOLOv4 Deep Learning Network for Vision-Based UAV Recognition. Drones 2022, 6, 160. [Google Scholar] [CrossRef]
  17. Roldan, I.; Del-Blanco, C.R.; de Quevedo, D.; Urzaiz, F.I.; Menoyo, J.G.; López, A.A.; Berjón, D.; Jaureguizar, F.; García, N. DopplerNet: A convolutional neural network for recognising targets in real scenarios using a persistent range–Doppler radar. IET Radar Sonar Navig. 2020, 14, 593–600. [Google Scholar] [CrossRef]
  18. Caris, M.; Johannes, W.; Sieger, S.; Port, V.; Stanko, S. Detection of small UAS with W-band radar. In Proceedings of the 2017 18th International Radar Symposium (IRS), Prague, Czech Republic, 28–30 June 2017; pp. 1–6. [Google Scholar] [CrossRef]
  19. Nguyen, P.; Truong, H.; Ravindranathan, M.; Nguyen, A.; Han, R.; Vu, T. Matthan: Drone Presence Detection by Identifying Physical Signatures in the Drone’s RF Communication. In Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services, New York, NY, USA, 19–23 June 2017. [Google Scholar]
  20. Seo, Y.; Jang, B.; Im, S. Drone Detection Using Convolutional Neural Networks with Acoustic STFT Features. In Proceedings of the 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Auckland, New Zealand, 27–30 November 2018; pp. 1–6. [Google Scholar] [CrossRef]
  21. Zhang, P.; Yang, L.; Chen, G.; Li, G. Classification of drones based on micro-Doppler signatures with dual-band radar sensors. In Proceedings of the 2017 Progress in Electromagnetics Research Symposium—Fall (PIERS—FALL), Singapore, 19–22 November 2017; pp. 638–643. [Google Scholar] [CrossRef]
  22. Saeed, R.A.; Khatun, S.; Ali, B.M.; Abdullah, K. A joint PHY/MAC cross-layer design for UWB under power control. Comput. Electr. Eng. 2010, 36, 455–468. [Google Scholar] [CrossRef]
  23. Samadzadegan, F.; Javan, F.D.; Mahini, F.A.; Gholamshahi, M. Detection and Recognition of Drones Based on a Deep Convolutional Neural Network Using Visible Imagery. Aerospace 2022, 9, 31. [Google Scholar] [CrossRef]
  24. Bjorklund, S.; Wadstromer, N. Target Detection and Classification of Small Drones by Deep Learning on Radar Micro-Doppler. In Proceedings of the 2019 International Radar Conference (RADAR), Toulon, France, 23–27 September 2019; pp. 1–6. [Google Scholar] [CrossRef]
  25. Pansare, A.; Sabu, N.; Kushwaha, H.; Srivastava, V.; Thakur, N.; Jamgaonkar, K.; Faiz, Z. Drone Detection using YOLO and SSD A Comparative Study. In Proceedings of the 2022 International Conference on Signal and Information Processing (IConSIP), Pune, India, 26–27 August 2022; pp. 1–6. [Google Scholar] [CrossRef]
  26. Husodo, A.Y.; Jati, G.; Alfiany, N.; Jatmiko, W. Intruder Drone Localization Based on 2D Image and Area Expansion Principle for Supporting Military Defence System. In Proceedings of the 2019 IEEE International Conference on Communication, Networks, and Satellite (Comnetsat), Makassar, Indonesia, 1–3 August 2019; pp. 35–40. [Google Scholar] [CrossRef]
  27. Alsolami, F.; Alqurashi, F.A.; Hasan, M.K.; Saeed, R.A.; Abdel-Khalek, S.; Ishak, A.B. Development of Self-Synchronized Drones’ Network Using Cluster-Based Swarm Intelligence Approach. IEEE Access 2021, 9, 48010–48022. [Google Scholar] [CrossRef]
  28. Rohman, B.P.A.; Andra, M.B.; Putra, H.F.; Fandiantoro, D.H.; Nishimoto, M. Multisensory Surveillance Drone for Survivor Detection and Geolocalization in Complex Post-Disaster Environment. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 9368–9371. [Google Scholar] [CrossRef]
  29. Addai, P.; Mohd, T.K. Power and Telecommunication Lines Detection and Avoidance for Drones. In Proceedings of the 2022 IEEE World A.I. IoT Congress (AIIoT), Seattle, WA, USA, 6–9 June 2022; pp. 118–123. [Google Scholar] [CrossRef]
  30. Alqurashi, F.A.; Alsolami, F.; Abdel-Khalek, S.; Ali, E.S.; Saeed, R.A. Machine learning techniques in internet of UAVs for smart cities applications. J. Intell. Fuzzy Syst. 2021, 42, 3203–3226. [Google Scholar] [CrossRef]
  31. Jahangir, M.; Baker, C.J. L-band staring radar performance against micro-drones. In Proceedings of the 2018 19th International Radar Symposium (IRS), Bonn, Germany, 20–22 June 2018; pp. 1–10. [Google Scholar] [CrossRef]
  32. Bin, M.S.; Khalifa, O.O.; Saeed, R.A. Real-time personalized stress detection from physiological signals. In Proceedings of the International Conference on Computing, Control, Networking, Electronics and Embedded Systems Engineering (IC-CNEEE), Khartoum, Sudan, 7–9 September 2015; pp. 352–356. [Google Scholar] [CrossRef]
  33. Gong, J.; Yan, J.; Li, D.; Chen, R.; Tian, F.; Yan, Z. Theoretical and Experimental Analysis of Radar Micro-Doppler Signature Modulated by Rotating Blades of Drones. IEEE Antennas Wirel. Propag. Lett. 2020, 19, 1659–1663. [Google Scholar] [CrossRef]
  34. Alsharif, S.; Saeed, R.A.; Albagory, Y. An Efficient HAPS Cross-Layer Design to Mitigate COVID-19 Consequences. Intell. Autom. Soft Comput. 2022, 31, 43–59. [Google Scholar] [CrossRef]
  35. Khalifa, O.O.; Roubleh, A.; Esgiar, A.; Abdelhaq, M.; Alsaqour, R.; Abdalla, A.; Ali, E.S.; Saeed, R. An IoT-Platform-Based Deep Learning System for Human Behavior Recognition in Smart City Monitoring Using the Berkeley MHAD Datasets. Systems 2022, 10, 177. [Google Scholar] [CrossRef]
  36. Koundinya, P.N.; Ikeda, Y.; Sanjukumar, N.T.; Rajalakshmi, P.; Fukao, T. Comparative Analysis of Depth Detection Algorithms using Stereo Vision. In Proceedings of the 2020 IEEE 6th World Forum on Internet of Things (WF-IoT), New Orleans, LA, USA, 2–16 June 2020; pp. 1–5. [Google Scholar] [CrossRef]
  37. Ahmed, C.A.; Batool, F.; Haider, W.; Asad, M.; Hamdani, S.H.R. Acoustic Based Drone Detection via Machine Learning. In Proceedings of the 2022 International Conference on I.T. and Industrial Technologies (ICIT), Chiniot, Pakistan, 3–4 October 2022; pp. 1–6. [Google Scholar] [CrossRef]
  38. Hameed, S.A.; Aboaba, A.A.; Khalifa, O.O.; Abdalla, A.H.; Daoud, J.I.; Saeed, R.A.; Mahmoud, O. Framework for enhancement of image-guided surgery: Finding area of tumor volume. Aust. J. Basic Appl. Sci. 2012, 6, 9–16. [Google Scholar]
  39. Jahangir, M.; Ahmad, B.I.; Baker, C.J. Robust Drone Classification Using Two-Stage Decision Trees and Results from SESAR SAFIR Trials. In Proceedings of the 2020 IEEE International Radar Conference (RADAR), Washington, DC, USA, 28–30 April 2020; pp. 636–641. [Google Scholar] [CrossRef]
  40. Saeed, R.A.; Khatun, S.; Ali, B.M.; Khazani, M. Performance Enhancement of UWB. Power Control using Ranging and Narrowband Interference Mitigation Technique. Int. Arab. J. Inf. Technol. (IAJIT) 2009, 6, 13–22. [Google Scholar]
  41. Phung, K.-P.; Lu, T.-H.; Nguyen, T.-T.; Le, N.-L.; Nguyen, H.-H.; Hoang, V.-P. Multi-model Deep Learning Drone Detection and Tracking in Complex Background Conditions. In Proceedings of the 2021 International Conference on Advanced Technologies for Communications (A.T.C.), Ho Chi Minh City, Vietnam, 14–16 October 2021; pp. 189–194. [Google Scholar] [CrossRef]
  42. Rong, Y.; Herschfelt, A.; Holtom, J.; Bliss, D.W. Cardiac and Respiratory Sensing from a Hovering UAV Radar Platform. In Proceedings of the 2021 IEEE Statistical Signal Processing Workshop (SSP), Rio de Janeiro, Brazil, 11–14 July 2021; pp. 541–545. [Google Scholar] [CrossRef]
  43. Mandal, B.; Okeukwu, A.; Theis, Y. Masked face recognition using resnet-50. arXiv 2021, arXiv:2104.08997. [Google Scholar]
  44. Sankupellay, M.; Konovalov, D. Birdcall recognition using deep convolutional neural network, ResNet-50. Proc. Acoust. 2018, 7, 1–8. [Google Scholar]
  45. Ahmed KE, B.; Mokhtar, R.A.; Saeed, R.A. A New Method for Fast Image Histogram Calculation. In Proceedings of the International Conference on Computing, Control, Networking, Electronics and Embedded Systems Engineering (ICCNEEE), Khartoum, Sudan, 7–9 September 2015; pp. 187–192. [Google Scholar]
  46. Unlu, E.; Zenou, E.; Riviere, N.; Dupouy, P.-E. Deep learning-based strategies for the detection and tracking of drones using several cameras. IPSJ Trans. Comput. Vis. Appl. 2019, 11, 7. [Google Scholar] [CrossRef]
  47. Kreyenschmidt, C. Exemplary integration of machine learning for information extraction in existing buildings. In Proceedings of the 31 Forum Bauinformatik, Berlin, Germany, 11–13 September 2019; Universitätsverlag der TU Berlin: Berlin, Germany, 2019; p. 17. [Google Scholar]
  48. Bernardini, A.; Mangiatordi, F.; Pallotti, E.; Capodiferro, L. Drone detection by acoustic signature identification. Electron. Imaging 2017, 29, 60–64. [Google Scholar] [CrossRef]
  49. Zhuang, Z.; Guo, R.; Zhang, Y.; Tian, B. UAV Localization Using Staring Radar Under Multipath Interference. In Proceedings of the IGARSS 2022—2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 2291–2294. [Google Scholar] [CrossRef]
  50. Di Seglio, M.; Filippini, F.; Bongioanni, C.; Colone, F. Human and Drone Surveillance via RpF-based WiFi Passive Radar: Experimental Validation. In Proceedings of the 2022 23rd International Radar Symposium (IRS), Gdansk, Poland, 12–14 September 2022; pp. 402–407. [Google Scholar] [CrossRef]
Figure 1. The flow chart of the method.
Figure 2. Schematic of micro-Doppler measurement in a simple model.
Figure 3. Three samples of the drones’ images with the micro-Doppler signature effect.
Figure 4. The convolutional neural network for image classification.
Figure 5. ResNet-50 architecture.
Figure 6. The target detectability range.
Figure 7. Radar detectability performance.
Figure 8. Radar classification categories performance.
Figure 9. Overall performance of the radar system.
Figure 10. Radar images (red circles) under various noise background environments.
Figure 11. Precision and recall of categories in the visual imaging system.
Figure 12. Comparison between the visual and radar results.
Table 1. A summary of the related works.

Related Works | Problem Statement | Model | Methodology | Findings
[16] | Misuse and unauthorized intrusion | YOLOv4 DL-CNN with vision aid | A video dataset is introduced to YOLOv4 | More precise and detailed semantic features were extracted by changing the number of CNN layers
[17] | Classification of radar-detected targets | Range-Doppler radar using CNN | DopplerNet: a range-Doppler database with a CNN classifier for RDR maps | High-accuracy results (99.48%)
[18] | Detection of small and slow UAVs in challenging scenarios, e.g., smoky, foggy, or loud environments | W-band radar with micro-Doppler analysis | W-band radar in realistic scenarios, including 3D localization, combined with classification using micro-Doppler analysis | Small UAS detected over a range coverage of several hundred meters
[19] | Detection of the physical characteristics of a drone during communication | Matthan theory | Matthan was prototyped and evaluated using SDR radios in three different real-world environments | High accuracy, precision, and recall, all above 90% at 50 m
[20] | Drone detection under various miniaturization and modification | CNN with acoustic signals | The 2D feature employed is the normalized short-time Fourier transform (STFT) magnitude; the experiment was conducted in an open environment with DJI Phantom 3 and 4 hovering drones | 98.97% detection rate and 1.28% false alarm rate
[21] | Enhance the robustness of micro-Doppler-based classification of drones | A dual-band radar classification scheme | PCA is utilized for feature extraction, then SVM is used for classification | Accuracies of 100%, 97%, and 92% for helicopter, quadcopter, and hexacopter, respectively
Table 2. The specification of the radar.

Parameter | Value | Parameter | Value
Operating Frequency | 24 GHz (K-band) | Peak Power | 10 watts
Bandwidth | 200 MHz | Signal Polarization | Horizontal
Antenna Gain | 30 dB | PFA | 1 × 10−6
Noise Temperature | 800 K | Pulse Width | 7 × 10−5 s
PRF | 1 kHz | Cutoff Range | 5 m
Table 3. Confusion matrix.

True class \ Predicted class | DJI Phantom | Hexacopter | Birds
DJI Phantom | 82.35% | 0 | 17.65%
Hexacopter | 0 | 97.06% | 2.94%
Birds | 5.88% | 5.88% | 88.24%
Table 4. A benchmark of the work with the related works.

Methodology | DL-CNN-NMS | ML-DS | DL-SVC-YOLO | ML-ANN-RCS | ML-FMCW | VSD-CNN-RCS
Classification accuracy (%) | 68.60 | 97.00 | 65.60 | 98.70 | 62.90 | 71.43