Article

Robust Truncated Statistics Constant False Alarm Rate Detection of UAVs Based on Neural Networks

Wei Dong and Weidong Zhang
1 School of Electronic Information Engineering, Beihang University, Beijing 100191, China
2 School of Cyber Science and Technology, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
Drones 2024, 8(10), 597; https://doi.org/10.3390/drones8100597
Submission received: 8 September 2024 / Revised: 1 October 2024 / Accepted: 16 October 2024 / Published: 18 October 2024

Abstract

With the rapid popularity of unmanned aerial vehicles (UAVs), airspace safety is facing tougher challenges, especially regarding the identification of non-cooperative target UAVs. As a vital approach to non-cooperative target identification, radar signal processing has attracted continuous and extensive attention and research. The constant false alarm rate (CFAR) detector is widely used in most current radar systems. However, its detection performance deteriorates sharply in complex and dynamic environments. In this paper, a novel truncated statistics- and neural network-based CFAR (TSNN-CFAR) algorithm is developed. Specifically, we adopt a right truncated Rayleigh distribution model combined with the pattern recognition capability of a neural network. In simulations of four different backgrounds, the proposed algorithm requires no guard cells and outperforms the traditional mean level (ML) and ordered statistics (OS) CFAR algorithms. Especially in high-density target and clutter edge environments, since it utilizes 19 statistics computed from the two reference windows as input features, the TSNN-CFAR algorithm achieves the best adaptive decision ability, accurate background clutter modeling, stable false alarm regulation and superior detection performance.

1. Introduction

With the rapid popularity and development of unmanned aerial vehicles (UAVs) comes an increasing number of airspace security threats [1]. In particular, effective surveillance means for non-cooperative UAVs are lacking. To address airspace security, more and more technical means are being proposed to detect and identify non-cooperative targets. Non-cooperative identification (NCI) refers to the reliable identification of suspicious targets without the precondition of communications [2,3]. The practical application of radar over the past half century has proved that radar is a powerful tool in NCI systems [4].
UAV target detection is a basic function of radar before it realizes range measurement, velocity measurement and target tracking [5]. The constant false alarm rate (CFAR) algorithm is the most common and practical target detection method in radar systems [6]. The core idea of CFAR is to determine the detection threshold adaptively according to the given false alarm probability (PFA) through accurately modeling the statistical distribution of background clutter at the detection point [7].
In the implementation process, a sliding window is usually used to accurately estimate the model parameters [8]. At present, there are many popular classical CFAR methods, such as cell averaging (CA), greatest of (GO), smallest of (SO) and ordered statistics (OS) [9,10]. Each of these algorithms performs well only in specific scenarios, and performance may degrade in the complex and changeable detection environments encountered in practice [11].
Hence, many researchers have tried to improve the CFAR algorithm and have proposed many adaptive CFAR algorithms [12]. The adaptive variability index CFAR (VI-CFAR) is a composite of the CA-CFAR, SO-CFAR and GO-CFAR algorithms, and can thus dynamically select a specific background estimation algorithm to perform adaptive threshold target detection. Although VI-CFAR works stably in common test environments, its detection performance degrades in high-density target environments and it imposes more stringent signal-to-noise ratio (SNR) requirements.
In fact, many existing studies work to improve NCI accuracy under different background environments. The authors of [13,14] aimed at realizing target detection by auxiliary radar in heterogeneous clutter environments, based on the maximum likelihood estimation equation of the covariance matrix and an asymptotic expression of the false alarm probability obtained by solving fixed-point equations. However, the calculation of the estimate and its approximation accuracy is complex and, thus, the accuracy cannot be effectively guaranteed. The authors of [15] used two-parameter logarithmic compression of the clutter and cumulative amplitude-average comprehensive constant false alarm processing to cope with a variety of typical clutter environments. The authors of [15,16] pointed out that most CFAR processing focuses on specific clutter backgrounds, but the diversity and variability of clutter make such processing insufficient to meet detection requirements. Hence, the adaptive CFAR detector came into being. In [17], the variability index CFAR (VI-CFAR) was proposed to select the appropriate detection threshold according to the test results. Meanwhile, researchers have also paid attention to applying modern machine learning technologies to realize adaptive CFAR processing, such as intelligent detection using a support vector machine in [18].
In this paper, by introducing a novel truncated statistics method combined with the characteristics of the signal model of frequency modulation continuous wave radar, a TS-CFAR detection method based on the right truncated Rayleigh distribution model is first proposed, which requires no guard cells. Then, combined with the pattern recognition capability of a neural network, 19 characteristic statistics are introduced to develop the TSNN-CFAR algorithm and improve radar target detection. The simulation results show that the proposed TSNN-CFAR UAV detector provides a low detection loss under a uniform background, and also achieves stable detection performance and stable false alarm regulation in high-density target and clutter edge environments. Applying it to UAV detection can effectively protect airspace safety.
The rest of this paper is organized as follows. Section 2 formulates the principle and performance characteristics of the relevant traditional CFAR and adaptive CFAR algorithms. In Section 3, the principle and specific process of the proposed TSNN-CFAR algorithm are introduced, and the relevant performance is analyzed theoretically. A simulation experiment is presented in Section 4 with a discussion of the results, including visualization analysis and comparison with other adaptive algorithms. Finally, the conclusions are drawn in Section 5.

2. Preliminaries

In this section, we first introduce the models of classical CFAR UAV detectors, and then further expound the principle and performance of the common adaptive CFAR methods.

2.1. Typical CFAR Detector Model

In UAV detection, the principle of a constant false alarm rate target detection algorithm is to obtain an estimate of the noise and clutter power level by processing the sampled values of the reference cells around the detection cell, and then to calculate the detection threshold of the detection cell from this estimate [19].
ML-CFAR was the first CFAR algorithm proposed, and remains the most important CFAR algorithm in practical applications. The mean of the cells in the reference windows is used as the noise power level estimate, and the detection threshold is obtained by multiplying it by the threshold factor [20,21]. After comparison, the decision of whether a target is present is finally obtained. Figure 1 shows the schematic diagram of ML-CFAR. Assume that there are N reference cells (discretized units of space). $\tilde{\mu}_L$ and $\tilde{\mu}_R$ denote the sample means of the left and right windows, respectively. Guard cells adjacent to the detection cell prevent target energy from spreading into neighboring cells and contaminating the noise power estimate. $\alpha$ denotes the threshold factor and Z denotes the noise power estimate obtained by mean estimation, so the detection threshold is given by $T = \alpha Z$. D is the sampled value of the detection cell. According to the Neyman–Pearson criterion, D is decided according to
$H_0: D < T, \qquad H_1: D \geq T$. (1)
Hypothesis $H_1$ represents that a target is present in the cell and hypothesis $H_0$ represents that no target is present.
CA-CFAR averages the sampled values of all reference units as the basis for noise power level estimation [22,23]. In homogeneous environments, the false alarm probability for CA-CFAR is calculated by
$\bar{P}_{fa} = \left(1 + \dfrac{\alpha_{CA}}{N}\right)^{-N}$ (2)
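As a brief illustration (not from the original paper), Equation (2) can be inverted in closed form, giving the threshold factor $\alpha_{CA} = N(\bar{P}_{fa}^{-1/N} - 1)$; the detector then only has to slide the reference windows along the range profile. The following minimal Python sketch assumes square-law (exponentially distributed) power samples, for which Equation (2) holds, and uses no guard cells; the function name and interface are ours:

```python
import numpy as np

def ca_cfar(x, n_ref=30, pfa=1e-4):
    """Sliding-window CA-CFAR over a 1-D profile of square-law (power) samples."""
    x = np.asarray(x, dtype=float)
    # Inverting Eq. (2): Pfa = (1 + alpha/N)^(-N)  =>  alpha = N*(Pfa^(-1/N) - 1)
    alpha = n_ref * (pfa ** (-1.0 / n_ref) - 1.0)
    half = n_ref // 2
    threshold = np.full(len(x), np.inf)
    for i in range(half, len(x) - half):
        ref = np.concatenate((x[i - half:i], x[i + 1:i + 1 + half]))  # no guard cells
        z = ref.mean()                  # noise power estimate Z
        threshold[i] = alpha * z        # adaptive threshold T = alpha * Z
    return x > threshold, threshold
```

For N = 30 and $\bar{P}_{fa} = 10^{-4}$ this gives $\alpha_{CA} \approx 10.8$.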
GO-CFAR is proposed mainly to solve the problem that CA-CFAR is more likely to cause false alarms at the clutter edge and, thus, the larger average value in the reference windows is selected as the basis for power estimation [24,25]. In homogeneous environments, the false alarm probability for GO-CFAR is calculated by
$\dfrac{\bar{P}_{fa}}{2} = \left(1 + \dfrac{\alpha_{GO}}{N/2}\right)^{-N/2} - \left(2 + \dfrac{\alpha_{GO}}{N/2}\right)^{-N/2} \displaystyle\sum_{k=0}^{N/2-1} \binom{N/2-1+k}{k} \left(2 + \dfrac{\alpha_{GO}}{N/2}\right)^{-k}$ (3)
According to Equation (3), given fixed $\bar{P}_{fa}$ and N, the threshold factor $\alpha_{GO}$ can be computed and the corresponding detection threshold can then be obtained.
SO-CFAR mainly addresses the weak-target shadowing problem of CA-CFAR. Contrary to GO-CFAR, SO-CFAR selects the smaller average value of the two reference windows as the basis for power estimation [25,26]. The false alarm probability for SO-CFAR is calculated by
$\dfrac{\bar{P}_{fa}}{2} = \left(2 + \dfrac{\alpha_{SO}}{N/2}\right)^{-N/2} \displaystyle\sum_{k=0}^{N/2-1} \binom{N/2-1+k}{k} \left(2 + \dfrac{\alpha_{SO}}{N/2}\right)^{-k}$ (4)
Similarly, the corresponding detection threshold can be obtained easily.
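Unlike Equation (2), Equations (3) and (4) have no simple closed-form inverse, so in practice the threshold factors $\alpha_{GO}$ and $\alpha_{SO}$ are found numerically for the given $\bar{P}_{fa}$ and N. A possible sketch (helper names are ours; homogeneous square-law clutter is assumed, as above):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import comb

def pfa_so(alpha, n):
    """Eq. (4) doubled: total SO-CFAR false alarm probability, homogeneous case."""
    n2 = n // 2
    s = sum(comb(n2 - 1 + k, k) * (2 + alpha / n2) ** (-k) for k in range(n2))
    return 2.0 * (2 + alpha / n2) ** (-n2) * s

def pfa_go(alpha, n):
    """Eq. (3) doubled: total GO-CFAR false alarm probability, homogeneous case."""
    n2 = n // 2
    return 2.0 * (1 + alpha / n2) ** (-n2) - pfa_so(alpha, n)

# Threshold factors meeting Pfa = 1e-4 with N = 30 reference cells
alpha_go = brentq(lambda a: pfa_go(a, 30) - 1e-4, 1e-3, 1e4)
alpha_so = brentq(lambda a: pfa_so(a, 30) - 1e-4, 1e-3, 1e4)
```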
OS-CFAR uses the sorted sample values in the reference windows to estimate the power level. It first sorts all sample values in the reference windows into ascending order, and the k-th ordered statistic is then taken as the power estimate used to calculate the detection threshold [27,28]. In homogeneous environments, the false alarm probability for OS-CFAR is calculated by
$\bar{P}_{fa} = k \binom{N}{k} \dfrac{\Gamma(N-k+1+\alpha_{OS})\,\Gamma(k)}{\Gamma(N+\alpha_{OS}+1)}$ (5)
Since the false alarm probability is not affected by noise power, OS-CFAR can realize the requirement of constant false alarm.
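Equation (5) is likewise solved numerically for $\alpha_{OS}$. A small sketch using log-gamma functions for numerical stability is given below; the rank k = 24 (about 3N/4 for N = 30) is only an illustrative choice, since the paper does not state the rank it uses:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gammaln

def pfa_os(alpha, n=30, k=24):
    """Eq. (5): OS-CFAR false alarm probability using the k-th ordered statistic."""
    log_pfa = (np.log(k)
               + gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)   # log C(n, k)
               + gammaln(n - k + 1 + alpha) + gammaln(k)
               - gammaln(n + alpha + 1))
    return np.exp(log_pfa)

# Threshold factor for Pfa = 1e-4, N = 30, k = 24 (illustrative rank choice)
alpha_os = brentq(lambda a: pfa_os(a) - 1e-4, 1e-3, 1e4)
```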
According to the existing theoretical analysis and experimental applications, the computational complexity of the above methods is not high and the hardware implementations are simple. However, in the process of target detection, the protrusion phenomenon is common for CA-CFAR and GO-CFAR, and shadowing easily occurs in high-density target detection. Meanwhile, with the decrease in the number of guard cells, shadowing is more serious. In addition, CA-CFAR and GO-CFAR are prone to missing the detection of weak clutter and false alarm at the edge of strong clutter. SO-CFAR increases the risk of false alarm in a strong clutter area [29,30].
In summary, the classical CFAR methods described above are derived for and applicable to a uniform Gaussian environment. When there are multiple targets in the environment around the detection cell, or when the detection cell is located at the clutter edge, relying solely on uniform Gaussian statistics to set the detection threshold results in false alarms and missed detections. Therefore, adaptive CFAR algorithms have been developed to obtain an ideal detection performance in more complex environments.

2.2. VI-CFAR Detector Model

The remarkable feature of the adaptive CFAR algorithms is the adaptive selection of CFAR processing methods and parameters according to partial characteristics of the sampling values of reference cells, so as to ensure a better detection performance in a specific non-homogeneous environment.
VI-CFAR is a typical adaptive CFAR algorithm. The core idea is to dynamically select the appropriate CFAR method through the second-order statistic VI of the reference cells and the moving range ratio MR of the mean values of the left and right windows, so as to ensure robustness in various environments [31]. The statistic VI is used to determine whether the sampled values in the reference windows come from a homogeneous environment. It is the ratio of the second-order central moment to the second-order origin moment plus a constant, similar to a shape parameter estimate. The statistic MR is used to test whether the mean values of the left and right reference windows are the same.
After obtaining the statistic VI for each side window, it is compared with the decision threshold $K_{VI}$. The homogeneous or non-homogeneous environment is judged by
$VI < K_{VI}$: homogeneous environment; $\quad VI \geq K_{VI}$: non-homogeneous environment. (6)
The consistency of the left and right reference windows is obtained by comparing the statistic MR with the decision threshold $K_{MR}$, i.e.,
$K_{MR}^{-1} < MR < K_{MR}$: same means; $\quad MR \leq K_{MR}^{-1}$ or $MR \geq K_{MR}$: different means. (7)
After the above two decisions, VI-CFAR selects the corresponding CFAR method to calculate the detection threshold according to the decision results. Table 1 shows the threshold selection scheme of VI-CFAR.
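A compact sketch of this selection logic is given below. The precise VI statistic is not spelled out in this paper, so the common form $VI = 1 + \hat{\sigma}^2/\hat{\mu}^2$ from [17] is assumed, the matching threshold factor ($\alpha_N$ or $\alpha_{N/2}$ in Table 1) still has to be applied to the returned estimate, and the function names are ours. Row 5 is implemented with the smaller window mean, matching the definition of SO-CFAR in Section 2.1:

```python
import numpy as np

K_VI, K_MR = 4.76, 1.806   # decision thresholds used later in the simulations

def vi_stat(w):
    """Variability index of one half window (common form from [17]: 1 + var/mean^2)."""
    m = np.mean(w)
    return 1.0 + np.var(w) / m ** 2

def vi_cfar_select(left, right):
    """Return the noise-power estimate and CFAR scheme following Table 1."""
    homog_l, homog_r = vi_stat(left) < K_VI, vi_stat(right) < K_VI
    mr = np.mean(left) / np.mean(right)
    same_mean = 1.0 / K_MR < mr < K_MR

    if homog_l and homog_r:
        if same_mean:
            return np.mean(np.r_[left, right]), "CA"           # row 1
        return max(np.mean(left), np.mean(right)), "GO"        # row 2
    if homog_l:
        return np.mean(left), "CA, left window only"           # row 3
    if homog_r:
        return np.mean(right), "CA, right window only"         # row 4
    return min(np.mean(left), np.mean(right)), "SO"            # row 5
```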

2.3. NN-CFAR Detector Model

Neural networks have provided substantial benefits for pattern recognition, data analysis and other fields [32]. In pattern recognition applications in particular, the input features undergo a nonlinear transformation that is mapped to a class label at the output. The core idea of NN-CFAR is to treat the neural network as a classifier that distinguishes background environment types and to select an appropriate CFAR algorithm according to the identified type, thus ensuring better target detection ability.
The input of NN-CFAR based on statistical characteristics consists of 8 statistical values and 30 reference cell sampling values. The eight statistical values are the standard deviation, mean absolute error (MAE), skewness (SKEW), kurtosis (KURT), range, information entropy, lower quartile and median.
Sort the sample sequence in the reference window in ascending order and denote the ordered sequence by $\tilde{X}$. The information entropy is then expressed as
$\mathrm{Entropy} = -E\left[\log_2 p(\tilde{X})\right] = -\sum_i p(\tilde{x}_i)\log_2 p(\tilde{x}_i), \qquad p(\tilde{x}_i) = i/N$, (8)
where x ˜ i is the sampled data after ordering and p ( x ˜ i ) is the cumulative probability.
The NN-CFAR classifier is formed by training on a large amount of data. In the application process, the 8 characteristic statistics are computed from the reference cells around the cell to be detected and input into the classifier together with the 30 sampled values. Finally, the CFAR algorithm is selected according to Table 2.
The above two adaptive algorithms, i.e., VI-CFAR and NN-CFAR, can deal with more complex background environments than the traditional CFAR algorithms. However, the detection performance of VI-CFAR is poor when there are interference targets in both the left and right windows, i.e., SO-CFAR is insufficient in high-density target environments. Although NN-CFAR has good robustness, it fluctuates obviously in the clutter edge region and its false alarm probability increases. This is because the guard cells can cause the background type to be discriminated with a lag or in advance. The performance of NN-CFAR in the edge region improves as the number of guard cells decreases.

3. Proposed TSNN-CFAR

In the previous sections, the working principle and performance of common CFAR algorithms have been systematically introduced. Many researchers have proposed corresponding solutions to the robustness of CFAR detection performance in complex environments. Among them is the TS-CFAR detector. Based on the adaptive optimization of TS-CFAR, we propose TSNN-CFAR based on a neural network.

3.1. TS-CFAR Detector Model

The main purpose of the TS-CFAR algorithm is to estimate the parameters of the probability distribution model by using the reference cells near the detection cells, so as to obtain the adaptive threshold value [33].
Assuming that the sample conforms to a Rayleigh distribution and the measured value is denoted by X, its probability density function (PDF) is denoted by
$p_X(x) = \begin{cases} \dfrac{x}{\mu^2}\, e^{-x^2/2\mu^2}, & x > 0 \\ 0, & x \leq 0 \end{cases}$ (9)
where μ denotes the mean.
TS-CFAR first sets a truncation depth h, and removes the cells in the reference windows that are larger than the truncation depth. In this case, the remaining I noise cells obey the right truncation probability distribution model, which is given by
$\tilde{p}(x) = \begin{cases} \dfrac{p(x)}{P(h)}, & 0 < x \leq h \\ 0, & x > h \end{cases}$ (10)
where $P(h)$ is the value of the cumulative distribution function at the truncation depth h. Hence, the mean is the only parameter that needs to be estimated. The maximum likelihood estimator of the mean can be obtained from the likelihood function, i.e.,
$L = \prod_{i=1}^{I} \tilde{p}(\tilde{x}_i) = \dfrac{\exp\left(-\sum_{i=1}^{I} \tilde{x}_i^2 / 2\mu^2\right)}{\mu^{2I}\left[1 - \exp\left(-h^2/2\mu^2\right)\right]^{I}} \prod_{i=1}^{I} \tilde{x}_i$ (11)
where $\{\tilde{x}_i\}_{i=1}^{I}$ represents the amplitude values of the remaining I cells after removing the outlier cells from the reference windows. The maximum likelihood estimation equation is obtained by taking the logarithm of the likelihood function and differentiating, i.e.,
$\dfrac{\partial \ln L}{\partial \hat{\mu}} = -\dfrac{2I}{\hat{\mu}} + \dfrac{1}{\hat{\mu}^3}\sum_{i=1}^{I} \tilde{x}_i^2 + \dfrac{I\xi}{\hat{\mu}}\cdot\dfrac{g(\xi)}{G(\xi)} = 0$ (12)
with $\xi = h/\hat{\mu}$, $g(\xi) = \xi \exp(-\xi^2/2)$ and $G(\xi) = 1 - \exp(-\xi^2/2)$. Assume
$J(\xi) = \dfrac{1}{\xi^2}\left(2 - \dfrac{\xi\, g(\xi)}{G(\xi)}\right)$ (13)
Combining Equation (12) with Equation (13), we have
$J(\xi) = \dfrac{1}{I h^2} \sum_{i=1}^{I} \tilde{x}_i^2$ (14)
Therefore, to obtain the maximum likelihood estimate $\hat{\mu}$, it is necessary to first use the amplitude values of the remaining cells to obtain $J(\xi)$, and then solve Equation (13) for $\xi$. Finally, the estimate is obtained from $\xi = h/\hat{\mu}$.
Equation (13) shows that $\xi$ can only be solved through time-consuming numerical methods. However, practical applications require high real-time radar performance, so it is necessary to formulate a look-up table between $J(\xi)$ and $\xi$ in advance. Part of such a look-up table is shown in Table 3.
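In code, the look-up amounts to a one-dimensional inversion of the tabulated pairs. The sketch below uses only the first column of Table 3 for brevity and interpolates linearly; a real implementation would load the complete table from [34]:

```python
import numpy as np

# A few (xi, J(xi)) pairs taken from the first column of Table 3
xi_grid = np.array([0.050, 0.060, 0.070, 0.080, 0.090, 0.100, 0.110, 0.120])
j_grid = np.array([0.000004, 0.000024, 0.000084, 0.000223,
                   0.000481, 0.000902, 0.001528, 0.002400])

def xi_from_j(j_value):
    """Invert the look-up table: J(xi) is monotone over the tabulated range,
    so linear interpolation recovers xi from a measured J value."""
    return np.interp(j_value, j_grid, xi_grid)

# Usage: J computed from Eq. (14) -> xi -> mu_hat = h / xi
```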
Note that $\hat{\mu}$ is the sample mean estimate plus a truncation correction. The given false alarm probability is related to the cumulative distribution function (CDF), which can be expressed as
$P_{fa} = 1 - P(H) = e^{-H^2/2\hat{\mu}^2}$ (15)
Hence, the detection threshold is obtained from the given false alarm probability $P_{fa}$ and the estimate $\hat{\mu}$.
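For example, with the design value $P_{fa} = 10^{-4}$ used later in the simulations, inverting Equation (15) gives $H = \hat{\mu}\sqrt{-2\ln P_{fa}} = \hat{\mu}\sqrt{2\ln 10^{4}} \approx 4.29\,\hat{\mu}$, i.e., once $\hat{\mu}$ has been estimated, the threshold is simply a fixed multiple of it.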
The TS-CFAR algorithm (Algorithm 1) is based on TS and has many advantages. It is designed to accommodate multiple jamming targets in the reference windows. The TS-CFAR algorithm can also control the false alarm probability very well.
Algorithm 1: TS-CFAR [33]
In the TS-CFAR algorithm, reference windows with no guard cells can be used for TS-CFAR processing because a small amount of target amplitude appearing in the reference region has little effect on noise power estimation [33].
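For readers who prefer to avoid the look-up table, the truncated-Rayleigh maximum likelihood estimate can also be obtained by directly maximizing the logarithm of Equation (11) numerically. The following sketch does exactly that and then applies Equation (15); it is our own illustration (slower than the table-based Algorithm 1, and the window handling and function names are assumptions), not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def truncated_rayleigh_mle(samples, h):
    """ML estimate of the Rayleigh parameter from right-truncated samples.

    Maximizes the log of the likelihood in Eq. (11) numerically instead of using
    the J(xi) look-up table; both routes solve the same estimation problem.
    """
    x = np.asarray(samples, dtype=float)
    x = x[x <= h]                          # truncation: drop cells above depth h
    if len(x) == 0:
        raise ValueError("all reference samples exceed the truncation depth h")
    n, s2 = len(x), np.sum(x ** 2)

    def neg_loglik(mu):
        trunc = 1.0 - np.exp(-h ** 2 / (2.0 * mu ** 2))
        return s2 / (2.0 * mu ** 2) + 2.0 * n * np.log(mu) + n * np.log(trunc)

    res = minimize_scalar(neg_loglik, bounds=(1e-6 * h, 100.0 * h), method="bounded")
    return res.x

def ts_cfar_detect(profile, h, pfa=1e-4, n_ref=30):
    """Sketch of TS-CFAR over an amplitude range profile with no guard cells."""
    x = np.asarray(profile, dtype=float)
    half = n_ref // 2
    hits = np.zeros(len(x), dtype=bool)
    for i in range(half, len(x) - half):
        ref = np.concatenate((x[i - half:i], x[i + 1:i + 1 + half]))
        mu_hat = truncated_rayleigh_mle(ref, h)
        threshold = mu_hat * np.sqrt(-2.0 * np.log(pfa))    # Eq. (15)
        hits[i] = x[i] >= threshold
    return hits
```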

3.2. TSNN-CFAR UAV Detector Design

The detection performance of TS-CFAR is related to the truncation depth. If the truncation depth is too small, there will be a large bias in the parameter estimation; if it is too large, too many sample data will be involved in the parameter estimation [35]. If the same truncation depth is used on both sides of a clutter edge, clutter false alarms readily appear. Therefore, we focus on adaptive judgment at the clutter edge, and propose a truncated statistics CFAR algorithm based on a neural network.
Figure 2 shows the four detection environments that CFAR needs to deal with. Each square in the figure represents a range cell, and the position marked D is the current detection cell. (a) shows the target located in the detection cell with uniform noise in the surrounding range cells. (b) represents a high-density target environment, which differs from (a) in that, while the target is located in the detection cell, neighboring range cells also contain valid targets. (c) and (d) are low-power regions located at the clutter edge. Correspondingly, (e) and (f) are high-power regions located at the clutter edge.
To cope with all the environments shown in Figure 2, the adaptive CFAR is improved. In homogeneous environments, CA-CFAR is the best choice. NN-CFAR shows lagging and leading discrimination because of the guard cells, whereas TS-CFAR makes it possible to remove them. Therefore, combining TS-CFAR with NN-CFAR can not only remove the guard cells but also optimize the characteristic statistics, so that the classification accuracy of the resulting TSNN-CFAR is higher.
The schematic diagram of the proposed TSNN-CFAR is shown in Figure 3. Eight statistics of left and right cells, as well as the second-order statistics V I and M R , are obtained based on all the sampling values in the left and right windows. A total of 19 statistical characteristic values are used as the basis of background classification. The output of the neural network is regarded as the basis for selection. Next, according to the identification results combined with the classification shown in Table 4, the CFAR scheme is selected. Meanwhile, the detection estimate and threshold factor are obtained by using the selected CFAR algorithm. Finally, the detection threshold is obtained, and the comparator determines whether there is a target in the detection cell.
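The sketch below illustrates one way to assemble such a feature vector. Since the paper lists the ingredients (eight per-window statistics, VI and MR) but not their exact packing, the composition 8 + 8 + 2 + 1 = 19 used here, with the VI computed separately for each window, is an assumption:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def window_stats(w):
    """The eight per-window statistics listed in Section 2.3."""
    n = len(w)
    p = np.arange(1, n + 1) / n
    return [np.std(w),
            np.mean(np.abs(w - np.mean(w))),   # MAE, read as mean absolute deviation
            skew(w),
            kurtosis(w),
            np.ptp(w),                          # range
            -np.sum(p * np.log2(p)),            # entropy feature, Eq. (8) taken literally
            np.percentile(w, 25),               # lower quartile
            np.median(w)]

def tsnn_feature_vector(left, right):
    """One plausible 19-dimensional input: 8 statistics per window, the VI of
    each window and the mean ratio MR (8 + 8 + 2 + 1 = 19); composition assumed."""
    vi = lambda w: 1.0 + np.var(w) / np.mean(w) ** 2
    mr = np.mean(left) / np.mean(right)
    return np.array(window_stats(left) + window_stats(right)
                    + [vi(left), vi(right), mr])
```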
As shown in Figure 4, the neural network used in this paper is a multilayer perceptron consisting of input, hidden and output layers. Each neuron in the hidden and output layers has an activation function; in this paper, the Sigmoid function is used to capture the complex nonlinear relationship between input and output. The network has 19 inputs (VI, MR, etc.) and two hidden layers, and the output layer has four neurons corresponding to the background environment classes. For training, the neural network uses 800,000 sets of data generated by simulation under the four background conditions, 70% of which is used for training, 15% for validation and the rest for testing. Radar echoes obey Rayleigh distributions. The training algorithm is the scaled conjugate gradient method, the cross-entropy error is used for performance evaluation, the maximum number of iterations is set to 500 and the maximum mean square error (MSE) is configured as $10^{-6}$.
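A rough equivalent of this classifier can be sketched with scikit-learn as below. The paper trains in MATLAB with the scaled conjugate gradient method and a 70/15/15 split, whereas this sketch uses the library's default optimizer and a single stratified hold-out split, and the hidden layer sizes are assumptions, so it reproduces the structure rather than the exact training procedure:

```python
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def train_background_classifier(X, y):
    """X: (n_samples, 19) feature vectors; y: background labels 0..3 (Table 4)."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.15, stratify=y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(32, 16),  # two hidden layers; sizes assumed
                        activation="logistic",         # Sigmoid, as in the paper
                        max_iter=500,                  # iteration cap used in the paper
                        random_state=0)
    clf.fit(X_train, y_train)                          # cross-entropy loss internally
    print("held-out accuracy:", clf.score(X_test, y_test))
    return clf
```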
By optimizing the input of the NN classifier, the TSNN classifier achieves some improvement over the NN classifier. The number of inputs is reduced from 38 to 19, and the error is reduced to 2.69 × 10⁻⁴. The number of training rounds is also greatly reduced and the classification accuracy is improved. Specific data are shown in Table 5.
According to the table, the classifier optimization improves the preliminary adaptive decision performance. However, the analysis and evaluation of the performance will depend on the detection probability and false alarm probability in each case. After generating the dataset by simulation, the stratified K-fold cross-validation method is used to obtain the test dataset. The respective test datasets are used as inputs to NN-CFAR and TSNN-CFAR, and the confusion matrices shown in Figure 5 are obtained from the final classification results. For NN-CFAR, the four true category counts in the test set are 26,638/26,453/13,654/13,255, and the prediction accuracies are 99.49%/97.90%/96.00%/97.20%. For TSNN-CFAR, the four true category counts are 26,544/26,766/13,399/13,291 with 99.98%/99.98%/99.90%/99.98% correct predictions. It is observed that TSNN-CFAR reduces the number of input parameters while improving the classification accuracy.
In the next section, we further compare the detection performance and false alarm regulation of the proposed TSNN-CFAR algorithm with those of other existing algorithms in different background environments.

4. Performance Evaluation

To test and verify the superiority of the proposed TSNN-CFAR, simulations of the radar detection system are carried out in the MATLAB R2019b environment based on the square law detector, and the results are evaluated by Monte Carlo simulations. The default parameters are as follows. The length of the range profile is set to 200 distance units and the number of reference cells is set to N = 30 with no guard cells. The design false alarm probability is set as $P_{fa} = 10^{-4}$. The decision thresholds are set as $K_{MR} = 1.806$ and $K_{VI} = 4.76$.
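A minimal sketch of one way to generate such test data is shown below. The exact signal model of the authors is not given, so the Rayleigh-amplitude noise, the exponential (Swerling I) target power and the power-additive combination of target and noise are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_profile(n_cells=200, targets=None, noise_power=1.0):
    """Rayleigh-amplitude range profile with Swerling I targets (assumed model).

    `targets` maps distance unit -> SNR in dB. A Swerling I echo has exponentially
    distributed power, i.e. a Rayleigh amplitude; combining target and noise powers
    additively is an assumption about the signal model.
    """
    targets = targets if targets is not None else {95: 20.0}
    profile = rng.rayleigh(scale=np.sqrt(noise_power / 2.0), size=n_cells)
    for cell, snr_db in targets.items():
        target_power = noise_power * 10.0 ** (snr_db / 10.0)
        echo = rng.rayleigh(scale=np.sqrt(target_power / 2.0))
        profile[cell] = np.hypot(profile[cell], echo)   # non-coherent power addition
    return profile

# Section 4.1 setup: single Swerling I target at distance unit 95 with SNR = 20 dB
profile = simulate_profile()
```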

4.1. Comparison of Single-Target Detection Performance in Homogeneous Environments

In the simulated homogeneous environment, consider a Swerling I fluctuating target with SNR = 20 dB located in distance unit 95.
The detection and category judgment results are shown in Figure 6 and Figure 7. It can be seen that, with the guard cells removed, the protrusion phenomenon of the traditional CA-CFAR and GO-CFAR algorithms is more serious, and the target shadowing effect is more obvious. Figure 7 shows the result of the adaptive selection in the homogeneous background. Although NN-CFAR uses only eight characteristic inputs, its judgment ability is similar to that of TSNN-CFAR. However, the performance of VI-CFAR is the most unstable, and inappropriate judgments occur more frequently.
In Figure 8 and Figure 9, the proposed method and the benchmarks are tested in a homogeneous background with SNR levels ranging from 0 to 30 dB. It is clear that CA-CFAR is the best solution in homogeneous environments. Meanwhile, owing to the correct category judgment, the detection probabilities of NN-CFAR and TSNN-CFAR are equal to that of CA-CFAR. Due to a certain deviation in its decisions, the detection loss of VI-CFAR is larger than that of TSNN-CFAR, but its detection performance is better than that of OS-CFAR.

4.2. Comparison of Detection Performance in High-Density Target Environments

In the simulated scenario, four real targets with different SNRs are located in distance units 90, 95, 100 and 110, respectively.
The multi-target detection and category judgment results are shown in Figure 10 and Figure 11. Figure 10 shows that missed detections occur for CA-CFAR, GO-CFAR and SO-CFAR in the high-density target environment. TSNN-CFAR and TS-CFAR show good robustness and their performance is close to optimal. In Figure 11, since NN-CFAR adopts only eight characteristic inputs, its decision fluctuates wildly at the locations of dense targets. Meanwhile, the category judgment of TSNN-CFAR is the best.
With SNR levels ranging from 0 to 30 dB, an interference target with the same SNR as the real target is added in a unilateral reference window, and the detection probabilities of the proposed method and the benchmarks are shown in Figure 12 and Figure 13. It can be seen that the detection probability of TSNN-CFAR is basically consistent with that of TS-CFAR, and the detection probability loss is minimal. Since the interference target enters the noise power estimation of CA-CFAR and GO-CFAR, their detection thresholds are raised, leading to a decrease in detection probability, while SO-CFAR and OS-CFAR maintain good detection performance by eliminating the influence of the interference signal. Since VI-CFAR may select CA-CFAR, which has a smaller detection loss than SO-CFAR, the performance of VI-CFAR is obviously higher than that of SO-CFAR.
Further, Figure 14 and Figure 15 show the detection probability in the scenario where two interference targets are added, one on each side of the reference windows. TSNN-CFAR and TS-CFAR have the best detection performance, while the other traditional CFAR methods show obvious performance degradation due to the presence of the interference signals. NN-CFAR is basically consistent with OS-CFAR because its category judgment selects the OS-CFAR scheme.

4.3. Comparison of False Alarm Control in Clutter Edges

In this section, the focus is on the ability of TSNN-CFAR to control false alarms at the clutter edge. Assume that the radar echo contains a clutter edge: the average power of the weak clutter region is 20 dB, and the strong clutter region extends from distance unit 100 to unit 200 with an average power of 40 dB. Other conditions are consistent with the previous section.
Figure 16 and Figure 17 show the detection result in the clutter edge situation. It can be seen that NN-CFAR and TSNN-CFAR have the best clutter edge control ability due to the removal of guard cells, although NN-CFAR shows slight fluctuation. TS-CFAR is prone to false alarms in the strong clutter area because its robustness is overly strong.
The category judgment results of the adaptive algorithms and the false alarm probabilities of the different CFAR methods are shown in Figure 18 and Figure 19, respectively. Figure 18 shows that selection errors occur before VI-CFAR and NN-CFAR enter the clutter region, and the selection oscillation of NN-CFAR is obvious after entering the clutter region. The TSNN-CFAR algorithm has the best and most stable selection ability. Figure 19 shows that, at the clutter edge, TSNN-CFAR uses the SO-CFAR scheme in the weak clutter area to reduce missed detections, and its false alarm probability is close to that of SO-CFAR; in the strong clutter region it then selects the GO-CFAR scheme, which reduces the false alarm peak effect. TS-CFAR is affected by the truncation depth and shows an obvious false alarm peak in the strong clutter region.

4.4. Comprehensive Evaluation of Robustness in Complex Environments

In order to clearly compare the robustness of the various algorithms in complex environments, the detection deviation $\Delta P_d$ and false alarm deviation $\Delta P_{fa}$ are introduced. For the detection deviation, the deviation value is set to 100 for the best detector, and the detection deviation value $\Delta P_d^i$ for the i-CFAR method is calculated by
$\Delta P_d^i = \dfrac{100\,\mu_{i\text{-}CFAR}}{\max(\mu_{CFAR})}$ (16)
where $\mu_{i\text{-}CFAR}$ represents the mean deviation between the detection result of the i-CFAR method and the theoretically optimal detection result. $\mu_{i\text{-}CFAR}$ is calculated by
$\mu_{i\text{-}CFAR} = \dfrac{1}{N}\sum \left( T_{i\text{-}CFAR} - T_{OPT} \right)$ (17)
Similarly, the false alarm deviation value $\Delta P_{fa}^i$ for the i-CFAR method is calculated by
$\Delta P_{fa}^i = 100 - \dfrac{100\,\mu_{i\text{-}CFAR}}{\max(\mu_{CFAR})} + M$ (18)
where M is a constant, aiming to uniformly improve the mean value of the partial difference as well as conveniently draw and observe. It is noted that the detection deviation value refers to the detection performance under different SNRs, while the false alarm deviation focuses on the false alarm performance of different distance units at the clutter edge.
According to the above experimental results, the robustness of the different CFAR algorithms is evaluated and scored comprehensively using the detection deviation and false alarm deviation values, as shown in Figure 20. PD0, PD1, PD2 and PFA refer to the homogeneous environment, the high-density target environment with one interference target, the high-density target environment with two interference targets and the clutter edge situation, respectively. The deviation scores equal the detection deviation values from SNR = 5 dB to SNR = 10 dB and the false alarm deviation values from distance unit 85 to distance unit 115.
According to Figure 20, for PD0, the performance of TSNN-CFAR and CA-CFAR is identical and optimal. For PD1 and PD2, TSNN-CFAR and TS-CFAR have the highest scores and the best performance. At the clutter edge, TSNN-CFAR has the highest false alarm deviation score. This shows that TSNN-CFAR offers the best overall combination of low missed detection probability and strong false alarm regulation.

5. Conclusions

In order to optimize the processing capability of TS-CFAR at the clutter edge and reduce the missed detection probability for weak targets, a TSNN-CFAR UAV detector was proposed in this paper. TSNN-CFAR is based on TS-CFAR, in which a small amount of target amplitude in the reference cells has almost no effect on the noise-clutter power estimation. Therefore, a reference window with no guard cells can be used for TSNN-CFAR processing.
The comparison of different CFAR algorithms clearly showed that, with no guard cells in the reference window, TSNN-CFAR outperforms the traditional mean level CFAR algorithms and the OS-CFAR algorithm. The TSNN-CFAR algorithm had the best selection ability and the most stable false alarm regulation in multi-target and clutter edge environments, since the 19 statistics obtained from the numerical calculation of both side windows were used as the characteristic input and the guard cells were removed. Therefore, it is expected to be suitable for systems that need to detect multiple targets and cope with clutter.
The detection performance of TSNN-CFAR for a high-density target environment is closely related to the truncation depth. However, the truncation depth is usually difficult to determine due to the lack of a priori knowledge of the background environment. Once the truncation depth is not set properly, the background noise parameter estimation will be highly biased. Meanwhile, the existing experiments are carried out on simulation datasets, which lack real signal data. We will conduct further research on the above issues in the future, and carry out UAV detection experiments based on real data on the basis of building an experimental platform.

Author Contributions

Conceptualization, W.D. and W.Z.; methodology, W.D.; software, W.D.; validation, W.D. and W.Z.; formal analysis, W.D.; investigation, W.D. and W.Z.; resources, W.D.; data curation, W.D.; writing—original draft preparation, W.D. and W.Z.; writing—review and editing, W.Z.; visualization, W.D. and W.Z.; supervision, W.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
UAVs: Unmanned Aerial Vehicles
TSNN: Truncated Statistics Neural Network
CFAR: Constant False Alarm Rate
NCI: Non-Cooperative Identification
MR: Moving Range
TS: Truncated Statistics
NN: Neural Network
ML: Mean Level
OS: Ordered Statistics
PFA: False Alarm Probability
CA: Cell Averaging
GO: Greatest Of
SO: Smallest Of
VI: Variability Index
SNR: Signal-to-Noise Ratio
MAE: Mean Absolute Error
SKEW: Skewness
KURT: Kurtosis
CDF: Cumulative Distribution Function
MSE: Mean Square Error

References

  1. Song, X.; Zhao, S.; Wang, X.; Li, X.; Tian, Q. Performance Analysis of UAV RF/FSO Co-Operative Communication Network with Co-Channel Interference. Drones 2024, 8, 70. [Google Scholar] [CrossRef]
  2. Jia, W. Image Segmentation Solutions for Improved Non-Cooperative Target Recognition. J. Eng. Res. Rep. 2024, 26, 236–242. [Google Scholar] [CrossRef]
  3. Yang, J.; Zhang, Z.; Mao, W.; Yang, Y. Identification and micro-motion parameter estimation of non-cooperative UAV targets. Phys. Commun. 2021, 46, 101314. [Google Scholar] [CrossRef]
  4. Rosenbach, K.; Schiller, J. Non co-operative air target identification using radar imagery: Identification rate as a function of signal bandwidth. In Proceedings of the Record of the IEEE 2000 International Radar Conference [Cat. No. 00CH37037], Alexandria, VA, USA, 7–12 May 2000; IEEE: Piscataway, NJ, USA, 2000; pp. 305–309. [Google Scholar]
  5. Wang, M.; Li, X.; Gao, L.; Sun, Z.; Cui, G.; Yeo, T.S. Signal accumulation method for high-speed maneuvering target detection using airborne coherent MIMO radar. IEEE Trans. Signal Process. 2023, 71, 2336–2351. [Google Scholar] [CrossRef]
  6. Rihan, M.Y.; Nossair, Z.B.; Mubarak, R.I. An improved CFAR algorithm for multiple environmental conditions. Signal Image Video Process. 2024, 18, 3383–3393. [Google Scholar] [CrossRef]
  7. Yang, B.; Zhang, H. A CFAR algorithm based on Monte Carlo method for millimeter-wave radar road traffic target detection. Remote Sens. 2022, 14, 1779. [Google Scholar] [CrossRef]
  8. Zeng, Z.; Cui, L.; Qian, M.; Zhang, Z.; Wei, K. A survey on sliding window sketch for network measurement. Comput. Netw. 2023, 226, 109696. [Google Scholar] [CrossRef]
  9. Kim, J.H.; Bell, M.R. A computationally efficient CFAR algorithm based on a goodness-of-fit test for piecewise homogeneous environments. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 1519–1535. [Google Scholar] [CrossRef]
  10. Cui, Z.; Hou, Z.; Yang, H.; Liu, N.; Cao, Z. A CFAR target-detection method based on superpixel statistical modeling. IEEE Geosci. Remote. Sens. Lett. 2020, 18, 1605–1609. [Google Scholar] [CrossRef]
  11. Cao, Z.; Li, J.; Song, C.; Xu, Z.; Wang, X. Compressed sensing-based multitarget CFAR detection algorithm for FMCW radar. IEEE Trans. Geosci. Remote Sens. 2021, 59, 9160–9172. [Google Scholar] [CrossRef]
  12. Zhou, J.; Xie, J. An Improved Quantile Estimator with Its Application in CFAR Detection. IEEE Geosci. Remote Sens. Lett. 2023. [Google Scholar] [CrossRef]
  13. Abbadi, A.; Abbane, A.; Bencheikh, M.L.; Soltani, F. A new adaptive CFAR processor in multiple target situations. In Proceedings of the 2017 Seminar on Detection Systems Architectures and Technologies (DAT), Algiers, Algeria, 20–22 February 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–5. [Google Scholar]
  14. Wang, Y.; Xia, W.; He, Z. CFAR knowledge-aided radar detection with heterogeneous samples. IEEE Signal Process. Lett. 2017, 24, 693–697. [Google Scholar] [CrossRef]
  15. Liu, Y.; Zhang, S.; Suo, J.; Zhang, J.; Yao, T. Research on a new comprehensive CFAR (Comp-CFAR) processing method. IEEE Access 2019, 7, 19401–19413. [Google Scholar] [CrossRef]
  16. Sana, S.; Ahsan, F.; Khan, S. Design and implementation of multimode CFAR processor. In Proceedings of the 2016 19th International Multi-Topic Conference (INMIC), Islamabad, Pakistan, 5–6 December 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–6. [Google Scholar]
  17. Smith, M.E.; Varshney, P.K. Intelligent CFAR processor based on data variability. IEEE Trans. Aerosp. Electron. Syst. 2000, 36, 837–847. [Google Scholar] [CrossRef]
  18. Wang, L.; Wang, D.; Hao, C. Intelligent CFAR detector based on support vector machine. IEEE Access 2017, 5, 26965–26972. [Google Scholar] [CrossRef]
  19. Jiménez, L.P.J.; García, F.D.A.; Alvarado, M.C.L.; Fraidenraich, G.; de Lima, E.R. A general CA-CFAR performance analysis for weibull-distributed clutter environments. IEEE Geosci. Remote Sens. Lett. 2022, 19, 4025305. [Google Scholar] [CrossRef]
  20. Madjidi, H.; Laroussi, T.; Farah, F. On maximum likelihood quantile matching cfar detection in weibull clutter and multiple rayleigh target situations: A comparison. Arab. J. Sci. Eng. 2023, 5, 6649–6657. [Google Scholar] [CrossRef]
  21. Jeong, T.; Park, S.; Kim, J.W.; Yu, J.W. Robust CFAR detector with ordered statistic of sub-reference cells in multiple target situations. IEEE Access 2022, 10, 42750–42761. [Google Scholar] [CrossRef]
  22. Medeiros, D.S.; García, F.D.A.; Machado, R.; Santos Filho, J.C.S.; Saotome, O. CA-CFAR Performance in K-Distributed Sea Clutter with Fully Correlated Texture. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1500505. [Google Scholar] [CrossRef]
  23. Kuang, C.; Wang, C.; Wen, B.; Hou, Y.; Lai, Y. An improved CA-CFAR method for ship target detection in strong clutter using UHF radar. IEEE Signal Process. Lett. 2020, 27, 1445–1449. [Google Scholar] [CrossRef]
  24. Sahed, M.; Kenane, E.; Khalfa, A.; Djahli, F. Exact Closed-Form Pfa Expressions for CA- and GO-CFAR Detectors in Gamma-Distributed Radar Clutter. IEEE Trans. Aerosp. Electron. Syst. 2022, 59, 4674–4679. [Google Scholar] [CrossRef]
  25. Baadeche, M.; Soltani, F.; Gini, F. Performance comparison of mean-level CFAR detectors in homogeneous and non-homogeneous Weibull clutter for MIMO radars. Signal Image Video Process. 2019, 13, 1677–1684. [Google Scholar] [CrossRef]
  26. Oudira, H.; Gouri, A.; Mezache, A. Optimization of distributed CFAR detection using grey wolf algorithm. Procedia Comput. Sci. 2019, 158, 74–83. [Google Scholar] [CrossRef]
  27. Ruida, C.; Yicheng, J.; Zhenwei, M.; Gang, Y.; Bing, W. A New CFAR Detection Algorithm Based on Sorting Selection for Vehicle Millimeter Wave Radar; Report 0148-7191; SAE Technical Paper: Warrendale, PA, USA, 2020. [Google Scholar]
  28. Villar, S.A.; Menna, B.V.; Torcida, S.; Acosta, G.G. Efficient approach for OS-CFAR 2D technique using distributive histograms and breakdown point optimal concept applied to acoustic images. IET Radar Sonar Navig. 2019, 13, 2071–2082. [Google Scholar] [CrossRef]
  29. Sim, Y.; Heo, J.; Jung, Y.; Lee, S.; Jung, Y. FPGA Implementation of Efficient CFAR Algorithm for Radar Systems. Sensors 2023, 23, 954. [Google Scholar] [CrossRef]
  30. Akhtar, J. Training of neural network target detectors mentored by SO-CFAR. In Proceedings of the 2020 28th European signal processing conference (EUSIPCO), Amsterdam, The Netherlands, 18–21 January 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1522–1526. [Google Scholar]
  31. Sahal, M.; Said, Z.A.; Putra, R.Y.; Kadir, R.E.A.; Firmansyah, A.A. Comparison of CFAR methods on multiple targets in sea clutter using SPX-radar-simulator. In Proceedings of the 2020 International Seminar on Intelligent Technology and Its Applications (ISITIA), Surabaya, Indonesia, 22–23 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 260–265. [Google Scholar]
  32. Amiri, Z.; Heidari, A.; Navimipour, N.J.; Unal, M.; Mousavi, A. Adventures in data analysis: A systematic review of Deep Learning techniques for pattern recognition in cyber-physical-social systems. Multimed. Tools Appl. 2024, 83, 22909–22973. [Google Scholar] [CrossRef]
  33. Tao, D.; Anfinsen, S.N.; Brekke, C. Robust CFAR detector based on truncated statistics in multiple-target situations. IEEE Trans. Geosci. Remote Sens. 2015, 54, 117–134. [Google Scholar] [CrossRef]
  34. Cohen, A.C. Truncated and Censored Samples: Theory and Applications; CRC Press: Boca Raton, FL, USA, 1991. [Google Scholar]
  35. Zhou, J.; Xie, J.; Liao, X.; Sun, C. Robust Sliding Window CFAR Detection Based on Quantile Truncated Statistics. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5117823. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of ML-CFAR.
Figure 2. A case with no guard cells: (a) Homogeneous environment. (b) High-density target environment. (c,d) Low-power area of clutter edge. (e,f) High-power area of clutter edge.
Figure 3. Schematic diagram of TSNN-CFAR.
Figure 4. NN structure of TSNN-CFAR.
Figure 5. Confusion matrices of (a) NN-CFAR and (b) TSNN-CFAR on datasets.
Figure 6. Single-target detection result in the homogeneous environment.
Figure 7. Category judgment result in the homogeneous environment.
Figure 8. Detection probability result in the homogeneous environment.
Figure 9. Corresponding local enlarged result for Figure 8.
Figure 10. Multi-target detection result in the high-density target environment.
Figure 11. Category judgment result in the high-density target environment.
Figure 12. Detection probability result with an interference target in the high-density target environment.
Figure 13. Corresponding local enlarged result for Figure 12.
Figure 14. Detection probability result with 2 interference targets located in both sides of reference windows in the high-density target environment.
Figure 15. Corresponding local enlarged result for Figure 14.
Figure 16. Detection result in the clutter edge situation.
Figure 17. Corresponding local enlarged result for Figure 16.
Figure 18. Category judgment result in the clutter edge situation.
Figure 19. False alarm probability in the clutter edge situation.
Figure 20. Score of the robustness for different CFAR algorithms.
Table 1. Threshold selection schemes of VI-CFAR.

Category | Homogeneous Decision in Left Window | Homogeneous Decision in Right Window | Mean Consistency | Adaptive Threshold for VI-CFAR | Corresponding CFAR Scheme
1 | Yes | Yes | Yes | $\alpha_N \cdot \mathrm{mean}(\tilde{\mu}_L, \tilde{\mu}_R)$ | CA
2 | Yes | Yes | No | $\alpha_{N/2} \cdot \max(\tilde{\mu}_L, \tilde{\mu}_R)$ | GO
3 | Yes | No | - | $\alpha_{N/2} \cdot \tilde{\mu}_L$ | CA
4 | No | Yes | - | $\alpha_{N/2} \cdot \tilde{\mu}_R$ | CA
5 | No | No | - | $\alpha_{N/2} \cdot \min(\tilde{\mu}_L, \tilde{\mu}_R)$ | SO
Table 2. The selection schemes of NN-CFAR.

Category | Background Classification | CFAR Scheme
1 | Homogeneous environment | CA
2 | High-density target environment | OS
3 | Low-power area of clutter edge | SO
4 | High-power area of clutter edge | GO
Table 3. Partial look-up table.

ξ | 0.000 | 0.001 | 0.002 | 0.003 | 0.004 | 0.005 | 0.006 | 0.007 | 0.008 | 0.009
0.050 | 0.000004 | 0.000005 | 0.000006 | 0.000007 | 0.000009 | 0.000011 | 0.000013 | 0.000015 | 0.000017 | 0.000020
0.060 | 0.000024 | 0.000027 | 0.000031 | 0.000036 | 0.000041 | 0.000047 | 0.000053 | 0.000060 | 0.000067 | 0.000075
0.070 | 0.000084 | 0.000094 | 0.000104 | 0.000116 | 0.000128 | 0.000141 | 0.000155 | 0.000171 | 0.000187 | 0.000204
0.080 | 0.000223 | 0.000242 | 0.000263 | 0.000285 | 0.000309 | 0.000334 | 0.000360 | 0.000388 | 0.000417 | 0.000448
0.090 | 0.000481 | 0.000515 | 0.000550 | 0.000588 | 0.000627 | 0.000668 | 0.000711 | 0.000756 | 0.000802 | 0.000851
0.100 | 0.000902 | 0.000954 | 0.001009 | 0.001066 | 0.001125 | 0.001187 | 0.001250 | 0.001316 | 0.001384 | 0.001455
0.110 | 0.001528 | 0.001604 | 0.001682 | 0.001762 | 0.001845 | 0.001931 | 0.002019 | 0.002110 | 0.002204 | 0.002300
0.120 | 0.002400 | 0.002502 | 0.002607 | 0.002715 | 0.002826 | 0.002939 | 0.003056 | 0.003176 | 0.003299 | 0.003425
A more complete table is given in [34].
Table 4. Classification of TSNN-CFAR.

Category | Background Classification | CFAR Scheme
1 | Homogeneous environment | CA
2 | High-density target environment | TS
3 | Low-power area of clutter edge | SO
4 | High-power area of clutter edge | GO
Table 5. Comparison of classifier performance.

Index | NN | TSNN
Input layer | 38 | 19
Error | 1.99 × 10⁻² | 2.69 × 10⁻⁴
Convergence time | 481 | 145
Optimal verification performance | 1.9 × 10⁻² | 1.15 × 10⁻⁴
Accuracy of category 1 | 99.5% | 100%
Accuracy of category 2 | 97.9% | 100%
Accuracy of category 3 | 96.0% | 99.9%
Accuracy of category 4 | 97.2% | 100%