Article

A Novel Radar HRRP Recognition Method with Accelerated T-Distributed Stochastic Neighbor Embedding and Density-Based Clustering

State Key Laboratory of Complex Electromagnetic Environment Effects on Electronics and Information System, National University of Defense Technology, Changsha 410073, China
*
Author to whom correspondence should be addressed.
Sensors 2019, 19(23), 5112; https://doi.org/10.3390/s19235112
Submission received: 21 October 2019 / Revised: 9 November 2019 / Accepted: 20 November 2019 / Published: 22 November 2019
(This article belongs to the Section Remote Sensors)

Abstract

High-resolution range profiles (HRRPs) have attracted intensive attention from the radar community because they are easy to acquire and analyze. However, most conventional algorithms require prior information about the targets and cannot process large numbers of samples in real time. In this paper, a novel HRRP recognition method is proposed to classify unlabeled samples automatically when the number of categories is unknown. Firstly, after preprocessing the HRRPs, we adopt principal component analysis (PCA) to reduce the dimensionality of the data. Afterwards, t-distributed stochastic neighbor embedding (t-SNE) with the Barnes–Hut approximation is applied for the visualization of the high-dimensional data; it further reduces the dimensionality while significantly improving computation speed. Finally, we show that the recognition performance with density-based clustering is superior to conventional algorithms under large azimuth angle ranges and low signal-to-noise ratio (SNR).

1. Introduction

As a device for transmitting and receiving electromagnetic waves, radar plays an indispensable role in both civilian and military fields [1]. Radar automatic target recognition refers to automatically identifying targets of interest from the acquired radar information [2,3]. In the field of radar target recognition, the main research objects can be divided into two categories: natural and man-made objects. Natural targets mainly include lakes, mountains, trees, crops, etc., while man-made targets include buildings, billboards, airplanes, vehicles, unmanned aerial vehicles (UAVs), and ships. Much progress has been made in the research on natural targets [4,5,6,7]. However, man-made targets are relatively small compared with natural targets and have more complicated structures, which makes their recognition correspondingly more difficult.
A high-resolution range profile (HRRP) is the distribution of the target scattering centers in the radar echoes along the line of sight (LOS). The principle of HRRP formation is uncomplicated, and HRRPs are easy to acquire and analyze, hence they have been widely studied. Many signal processing approaches have been developed in recent years. In reference [8], a detection and motion compensation algorithm was proposed to resolve the distortions of HRRPs that occur when there is relative motion between the target and the radar. Aubry et al. put forward an HRRP estimation algorithm that optimizes the probing waveforms and reduces the estimation error according to the acquired information [9]. To reduce the effects of noise, an effective HRRP recognition approach based on orthogonal matching pursuit was developed and validated with real-measured data [10]. Meanwhile, Guo et al. utilized a one-dimensional residual-inception network to recognize HRRPs [11]. To improve the ability to recognize HRRPs with bistatic radar, the RELAX method and a novel feature extraction procedure have been used [12]. Nevertheless, the difficulties of recognizing HRRPs under low signal-to-noise ratio (SNR) conditions have not been solved effectively.
In the area of HRRP recognition, various classification algorithms have been applied. An extended support vector data description (SVDD) method with negative examples was proposed to classify four aircraft accurately with a small number of training samples [13]. Pan et al. developed a method that combines a discriminant deep belief network (DDBN) and statistical distribution analysis to recognize three kinds of aircraft [14]. With a sparse representation classification criterion, a dictionary learning method was utilized to improve the classification accuracy of aircraft at different SNRs [15]. Feng et al. built stacked autoencoder deep networks for feature extraction and recognition, which perform better than shallow models [16]. Moreover, a multi-layer perceptron was adopted to learn the relationship between labels and samples, as demonstrated on real-measured aircraft data [17]. Recent research on target classification mainly focuses on supervised or semi-supervised learning, which requires target labels that cannot be easily obtained in real military scenarios. Therefore, it is necessary to classify targets automatically without prior information. Unsupervised learning has been employed for feature extraction from HRRP samples in recent years [18,19,20,21,22]. In [18], a stacked autoencoder was utilized to learn features at different levels of the data. A novel dictionary learning method was adopted to extract robust features from HRRPs [19]. Ma et al. utilized stacked denoising and contractive autoencoders to study the hidden properties of corner reflector HRRPs [20]. Local factor analysis [21] and a denoising autoencoder [22] have been proposed by other scholars to study features of HRRPs. These unsupervised learning algorithms mainly focus on feature extraction, not on classification. As a consequence, the classification of HRRPs with unsupervised learning algorithms needs to be studied.
With the development of modern radar measurement technology, the data dimensionality of HRRPs has increased. Therefore, it is necessary to reduce the data dimensionality and improve processing capability. Principal component analysis (PCA) is a procedure that transforms a number of correlated variables into a smaller number of uncorrelated variables called principal components [23,24]. Robust principal component analysis has been widely used in radar applications in recent years [25,26,27,28,29]. Robust PCA has been adopted to image rotating parts and estimate rotation parameters with measured data [28]. Nguyen et al. used robust PCA to extract sources of interference from radar signals [29]. Visualization of high-dimensional data effectively enhances the understanding of the distribution of target points and is conducive to human analysis and judgment; it can also further reduce the target dimensionality and facilitate auxiliary judgment. T-distributed stochastic neighbor embedding (t-SNE) provides an approach to automatically gain such understanding from large datasets [30]. Most classification algorithms, including the recently popular deep learning methods, require the labels of some samples in advance. However, in practical situations, especially on the battlefield, it is impossible to obtain prior information about the targets. The application of clustering algorithms is consequently indispensable [31,32]. In addition, a novel density-based clustering method can effectively improve the recognition performance without training [33].
We study HRRPs on the basis of real-measured and electromagnetic calculation data. Firstly, we discuss the background of the issue and the problems to be addressed. The signal model, including the principle of HRRP formation, is introduced in Section 2. Section 3 presents the proposed method, which is based on PCA, the accelerated t-SNE with Barnes–Hut approximation, and the density-based clustering algorithm. In Section 4, the experimental results are presented. Section 5 concludes the paper and provides prospects for future study.
The proposed algorithm offers three main advantages:
  • Effective and fast dimensionality reduction. PCA greatly reduces the dimensionality of the data while preserving the HRRP information. Meanwhile, with the accelerated t-SNE, we can achieve further dimensionality reduction much faster than with conventional t-SNE.
  • Visualization of high-dimensional data. After PCA, the dimensionality of the data is still high, and it is difficult to express the distribution of data points in a 2D or 3D coordinate system. The t-SNE algorithm provides a valid approach to present the data for visualization, which is conducive to intuitive judgment.
  • High clustering accuracy without training. At present, many recognition algorithms need to be trained for classification. However, in some cases, especially in the military field, samples of specific targets cannot be obtained. In this paper, the proposed HRRP clustering method obtains high-accuracy classification results without training.

2. The Signal Model of HRRPs

The transmitted linear-frequency-modulated (LFM) signal is defined as follows [34]:
$$ s_t(t,\tau) = \mathrm{rect}\left(\frac{\tau}{T_p}\right) \exp\left(j 2\pi f_c t + j\pi\gamma\tau^{2}\right) $$
where $\tau$ is the fast time, $t$ is the full time, $\mathrm{rect}(\cdot)$ is the rectangular window operation with width $T_p$, $f_c$ is the carrier frequency, and $\gamma$ is the chirp rate.
When the wavelength of the radar is much smaller than the measured target, the scattering model can be simplified to a set of scattering centers in the high-frequency regime. We suppose that the target consists of $M$ scattering points and define the distance from the $i$-th scattering point to the radar as $R_i(t)$, where $i \in [1, M]$, $i \in \mathbb{N}$.
The returned echoes under the scattering point model are represented as
$$ s_r(t,\tau) = \sum_{i=1}^{M} A_i\, \mathrm{rect}\left(\frac{\tau - 2R_i(t)/c}{T_p}\right) \exp\left(j 2\pi\left(f_c t - \frac{2 f_c R_i(t)}{c}\right)\right) \times \exp\left(j\pi\gamma\left(\tau - \frac{2R_i(t)}{c}\right)^{2}\right) $$
where $A_i$ denotes the amplitude, which is related to the radar cross section, $c$ is the speed of light, and the slow time is $t_n = t - \tau$.
After the operations of dechirping and phase compensation, the returned echoes can be written as
$$ s_{r1}(t,\tau) = \sum_{i=1}^{M} A_i\, \mathrm{rect}\left(\frac{\tau - 2R_i(t_n)/c}{T_p}\right) \exp\left(-j\frac{4\pi}{c} R_{\Delta i}(t_n)\, f_c\right) \times \exp\left(-j\frac{4\pi}{c} R_{\Delta i}(t_n)\,\gamma\left(\tau - \frac{2R_{\mathrm{ref}}(t_n)}{c}\right)\right) $$
where $R_{\mathrm{ref}}(t_n)$ is the reference distance in the dechirping process at slow time $t_n$, and $R_{\Delta i}(t_n) = R_i(t_n) - R_{\mathrm{ref}}(t_n)$. We perform the Fourier transform of Equation (4) along the fast-time dimension, and the HRRP can be expressed as
$$ s_{r2}(r, t_n) = \left| \sum_{i=1}^{M} A_i \exp\left(-j 4\pi R_{\Delta i}(t_n)/\lambda_c\right) \mathrm{sinc}\left(\frac{2B}{c}\left(r - R_{\Delta i}(t_n)\right)\right) \right| $$
where $B$ is the bandwidth and $\lambda_c = c / f_c$.
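As an illustration of how Equation (5) produces a range profile, the following minimal sketch (not the authors' code; the scatterer positions, amplitudes, and radar parameters are assumptions chosen for illustration) synthesizes an HRRP as a coherent sum of sinc responses:

```python
# Minimal sketch of the scattering-centre HRRP model of Equation (5).
# All numerical values below are illustrative assumptions.
import numpy as np

c = 3e8                      # speed of light (m/s)
B = 4e9                      # bandwidth of an 8-12 GHz sweep (Hz)
fc = 10e9                    # carrier/centre frequency (Hz)
lam_c = c / fc               # wavelength lambda_c = c / f_c

# hypothetical scattering centres: relative range R_delta_i (m) and amplitude A_i
R_delta = np.array([3.8, 4.6, 5.1])
A = np.array([1.0, 0.7, 0.4])

r = np.linspace(0, 8, 1024)  # range axis of the profile (m)

# s_r2(r) = | sum_i A_i * exp(-j*4*pi*R_i/lambda_c) * sinc(2B/c * (r - R_i)) |
profile = np.abs(sum(
    A_i * np.exp(-1j * 4 * np.pi * R_i / lam_c) * np.sinc(2 * B / c * (r - R_i))
    for A_i, R_i in zip(A, R_delta)
))
print("strongest range cell:", r[profile.argmax()], "m")
```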

3. Proposed Method

3.1. Principal Component Analysis Based on Singular Value Decomposition

Principal component analysis (PCA) is a widely used algorithm for dimensionality reduction and the extraction of independent components in mathematical statistics [24]. It can be implemented via eigenvalue decomposition, singular value decomposition (SVD), generalized SVD, etc. In order to effectively remove the redundancy of HRRPs and increase the calculation speed, it is necessary to extract the subspace features by principal component analysis. In this paper, aiming to minimize the loss of features, we utilize the SVD method to achieve data dimensionality reduction.
We define the matrix $D$ that represents the data set of HRRPs, where the rows correspond to observations at different azimuth angles of the targets and the columns correspond to the absolute values of the amplitudes along the range dimension of the line of sight. With SVD, the data matrix can be decomposed into three parts, as follows:
$$ D = P \Sigma Q^{T} $$
where $P$ contains the eigenvectors of the matrix $DD^{T}$ and $Q$ contains the eigenvectors of the matrix $D^{T}D$. The columns of $P$ and $Q$ are the left and right singular vectors of the data matrix, respectively. The diagonal elements of the matrix $\Sigma^{2}$ are the eigenvalues of both $DD^{T}$ and $D^{T}D$.
Then, the data matrix is reconstructed by selecting the principal components, which is denoted as:
$$ \tilde{D} = \tilde{P} \tilde{\Sigma} \tilde{Q}^{T} $$
where the size of the reconstructed matrix $\tilde{D}$ is $M \times N_r$ (with $N_r < N$), and $N_r$, which preserves the main features of the data, is the number of eigenvalues selected from the diagonal matrix in descending order. The columns of $\tilde{P}$ and $\tilde{Q}$ correspond to the selected eigenvalues of $\tilde{D}$.
The definition of the eigenvalue ratio $r_e$ is given by
$$ r_e = \frac{\sum_{l=1}^{N_r} \lambda_l}{\sum_{l=1}^{N} \lambda_l} $$
where $\lambda_l$ is the corresponding eigenvalue. More details can be found in reference [24].
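The following sketch shows one way the SVD-based reduction and the eigenvalue-ratio criterion of Equations (6)–(8) can be realized; the variable names, the mean-centering step, and the 99.5% threshold are our own assumptions rather than the authors' implementation.

```python
# A minimal PCA-by-SVD sketch: D is an M-by-N matrix of HRRP magnitudes,
# rows = observations at different azimuth angles (assumed layout).
import numpy as np

def pca_svd(D, ratio_threshold=0.995):
    Dc = D - D.mean(axis=0)                   # centre each range cell (assumption)
    P, s, Qt = np.linalg.svd(Dc, full_matrices=False)
    eigvals = s ** 2                          # eigenvalues of Dc^T Dc (and Dc Dc^T)
    r_e = np.cumsum(eigvals) / eigvals.sum()  # cumulative eigenvalue ratio, Eq. (8)
    N_r = int(np.searchsorted(r_e, ratio_threshold)) + 1
    D_reduced = Dc @ Qt[:N_r].T               # project onto the first N_r components
    return D_reduced, N_r, r_e

# usage on random stand-in data (3000 profiles, 1024 range cells)
D = np.abs(np.random.randn(3000, 1024))
D_reduced, N_r, r_e = pca_svd(D)
print(D_reduced.shape, N_r)
```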

3.2. Accelerated t-SNE with Barnes–Hut Approximation

T-distributed stochastic neighbor embedding (t-SNE) is a valid method to project the $N_r$-dimensional HRRPs into a $c$-dimensional embedding ($c$ is much smaller than $N_r$), and it is convenient to analyze the low-dimensional data for visualization [30].
The dataset of HRRPs is $X = \{x_1, x_2, x_3, \ldots, x_i, \ldots, x_M\}$, $x_i \in \mathbb{R}^{N_r}$, and after the t-SNE transformation the HRRPs can be represented as $Y = \{y_1, y_2, y_3, \ldots, y_i, \ldots, y_M\}$, $y_i \in \mathbb{R}^{c}$.
The conditional probabilities of similarity between input HRRPs are expressed as
$$ p_{j|i} = \begin{cases} \dfrac{\exp\left(-\left\| x_j - x_i \right\|^{2} / 2\sigma_i^{2}\right)}{\sum_{k \neq i} \exp\left(-\left\| x_k - x_i \right\|^{2} / 2\sigma_i^{2}\right)}, & i \neq j \\ 0, & i = j \end{cases} $$
Joint probabilities $p_{ij}$, which measure the pairwise similarity between the HRRP samples $x_i$ and $x_j$, are denoted as
$$ p_{ij} = \frac{p_{j|i} + p_{i|j}}{2M} $$
We utilize the normalized Student's t-distribution with a single degree of freedom to indicate the relationship between the outputs $y_i$ and $y_j$ in the projection embedding:
$$ q_{ij} = \begin{cases} \dfrac{\left(\left\| y_j - y_i \right\|_2^{2} + 1\right)^{-1}}{\sum_{k \neq i} \left(\left\| y_k - y_i \right\|_2^{2} + 1\right)^{-1}}, & i \neq j \\ 0, & i = j \end{cases} $$
The Kullback–Leibler divergence between the input and output probability distributions is
$$ C = \sum_{i} \sum_{j} p_{ij} \log \frac{p_{ij}}{q_{ij}} $$
In order to minimize the value of Equation (12), the gradient is given by
$$ \frac{\partial C}{\partial y_i} = 4 \sum_{j} \left(q_{ij} - p_{ij}\right)\left(y_j - y_i\right)\left(\left\| y_i - y_j \right\|_2^{2} + 1\right)^{-1} $$
However, the available computational resources quickly become a bottleneck, because the cost of the t-SNE algorithm scales quadratically with the total number of samples $M$. Due to the limitations of time and computing capability, the maximum number of samples is usually below 10,000 in ordinary applications, and the real-time requirement cannot be satisfied.
The gradient can be divided into attractive and repulsive forces [35]. The Barnes–Hut approximation is an effective algorithm to reduce the computational burden of the repulsive forces. It is based on a quadtree, which consists of nodes that each represent a cell with different size parameters. We minimize Equation (12) with the Barnes–Hut approximation to obtain the optimal projected dataset $Y^{*} = \{y_i^{*}\}_{i=1}^{M}$, $y_i^{*} \in \mathbb{R}^{c}$:
$$ Y^{*} = \arg\min_{Y} \sum_{i} \sum_{j} p_{ij} \log \frac{p_{ij}}{q_{ij}} $$
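A minimal sketch of this step using scikit-learn's TSNE, whose method='barnes_hut' option implements the tree-based approximation of the repulsive forces [35], is given below. Note that scikit-learn's Barnes–Hut variant supports at most three output dimensions, so a 2-D embedding is used here for visualization; this, the random stand-in data, and the parameter values are assumptions of the sketch, not the authors' configuration.

```python
# Barnes-Hut t-SNE after a PCA stage, on stand-in HRRP magnitudes.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X = np.abs(np.random.randn(3000, 1024))          # stand-in HRRP magnitudes
X_pca = PCA(n_components=300).fit_transform(X)   # PCA stage from Section 3.1

Y = TSNE(n_components=2, method="barnes_hut",    # tree-based approximation
         perplexity=30, angle=0.5,               # angle = Barnes-Hut trade-off theta
         init="pca", random_state=0).fit_transform(X_pca)
print(Y.shape)   # (3000, 2): low-dimensional embedding for visualization
```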

3.3. A Novel Density-Based Clustering

A clustering algorithm is able to classify objects automatically with no prior information in unsupervised learning. A novel density-based clustering approach exploits the property that cluster centers are surrounded by neighbors with much lower local density and are at a relatively large distance from any point with higher density. It has been proved to be robust and effective on several different datasets [33]. In this algorithm, the local density $\rho_i$ and the distance from the nearest higher-density point $\delta_i$ are the key parameters that characterize the objects, where the subscript $i$ indexes the data points.
The local density can be computed with a cut-off or Gaussian kernel. The cut-off kernel is defined as
$$ \rho_i = \sum_{j \neq i} \chi\left(d_{ij} - d_c\right) $$
where
$$ \chi(x) = \begin{cases} 1, & x < 0 \\ 0, & x \geq 0 \end{cases} $$
$d_c$ denotes the cutoff distance and $d_{ij}$ represents the distance between points $i$ and $j$.
Suppose there are $N_1$ points in the dataset $I_S$; the distance $\delta$ is defined as follows
$$ \delta_{q_i} = \begin{cases} \max_{j \geq 2} \left\{ \delta_{q_j} \right\}, & i = 1 \\ \min_{j < i} \left\{ d_{q_i q_j} \right\}, & i \geq 2 \end{cases} $$
where $\rho_{q_1} \geq \rho_{q_2} \geq \cdots \geq \rho_{q_i} \geq \cdots \geq \rho_{q_{N_1}}$.
A simplified version of the novel density-based algorithm, which contains five steps, is listed in Algorithm 1.
Algorithm 1. Simplified version of the novel density-based clustering algorithm.
1: Input: Distances $d_{ij}$, $i < j$, $i, j \in I_S$.
2: Initialization: Cutoff distance $d_c = d_{\left[\frac{1}{2} N_1 (N_1 - 1) t + \frac{1}{2}\right]}$, where $[\cdot]$ represents the rounding function and $0.1 \leq t \leq 0.2$; attributes of points $n_i = 0$, $i \in I_S$.
3: Results: Number of categories of the HRRPs and the type of each sample.
4: Begin
5: Step 1. Computation of the local densities $\{\rho_i\}_{i=1}^{N_1}$ and the descending-density ordering $\{q_i\}_{i=1}^{N_1}$.
6: Step 2. Calculation of the distances $\{\delta_{q_i}\}_{i=1}^{N_1}$ and the attributes $\{n_i\}_{i=1}^{N_1}$.
7: Step 3. Identification of the cluster centers and classification of the remaining points.
8: Step 4. Selection of the average local density in the border region of each cluster.
9: Step 5. Labeling of each point as cluster core or cluster halo.
10: End
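The following NumPy sketch illustrates the density-peaks idea behind Algorithm 1 (Rodriguez and Laio [33]): compute the local density $\rho_i$ with the cut-off kernel, the distance $\delta_i$ to the nearest higher-density point, and pick the points with the largest $\rho_i \delta_i$ as cluster centers. The center-selection rule and the border/halo handling of Steps 4–5 are simplified assumptions here, not the authors' exact procedure.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def density_peaks(Y, t=0.15, n_clusters=3):
    """Cluster the embedded points Y (n x c) with a simplified density-peaks rule."""
    pd = pdist(Y)                                    # condensed pairwise distances
    d = squareform(pd)                               # full distance matrix d_ij
    dc = np.sort(pd)[int(round(len(pd) * t))]        # cutoff distance from fraction t
    rho = (d < dc).sum(axis=1) - 1                   # cut-off kernel density (exclude self)

    order = np.argsort(-rho)                         # q_1, q_2, ... by descending density
    n = len(Y)
    delta = np.zeros(n)
    nearest_higher = np.full(n, -1)
    for k in range(1, n):
        i = order[k]
        higher = order[:k]                           # points with higher (or equal) density
        j = higher[np.argmin(d[i, higher])]
        delta[i], nearest_higher[i] = d[i, j], j
    delta[order[0]] = delta.max()                    # highest-density point gets largest delta

    centers = np.argsort(-(rho * delta))[:n_clusters]  # assumed rule: largest rho * delta
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_clusters)
    for i in order:                                  # remaining points inherit the label of
        if labels[i] == -1:                          # their nearest higher-density neighbour
            labels[i] = labels[nearest_higher[i]]
    return labels

# toy usage on three well-separated Gaussian blobs
rng = np.random.default_rng(0)
Y = np.vstack([rng.normal(loc, 0.3, size=(100, 2)) for loc in (0.0, 3.0, 6.0)])
print(np.bincount(density_peaks(Y)))
```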

3.4. Overall Structure of Proposed Method

The overall structure of our proposed method is shown in Figure 1. Firstly, we take the absolute value of the HRRP amplitude for further dimensionality reduction and recognition. Secondly, we utilize principal component analysis to reduce the dimensionality of the HRRP data, which removes data redundancy and reduces processing time. Thirdly, we adopt the t-SNE algorithm with the Barnes–Hut approximation, which is much faster than conventional t-SNE; the resulting visualization of high-dimensional data provides an intuitive understanding and also compresses the HRRPs effectively. Finally, since most classification algorithms in radar automatic target recognition need target labels in advance, while such attribute information cannot be acquired under some conditions, a capable clustering algorithm is required; we therefore take advantage of a novel density-based clustering algorithm to accomplish accurate classification of man-made objects without training.
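Putting the three stages together, a minimal end-to-end sketch of the pipeline in Figure 1 might look as follows. It reuses the density_peaks routine sketched after Algorithm 1; the 300-dimensional PCA output and the 2-D embedding are assumptions (taken from Section 4 and the scikit-learn constraint noted in Section 3.2), not a statement of the authors' exact configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

def recognize_hrrp(hrrp, n_clusters=3):
    """Unsupervised HRRP recognition sketch: |amplitude| -> PCA -> BH t-SNE -> density peaks."""
    X = np.abs(hrrp)                                   # amplitude preprocessing
    X = PCA(n_components=300).fit_transform(X)         # Section 3.1: PCA by SVD
    Y = TSNE(n_components=2, method="barnes_hut",      # Section 3.2: accelerated t-SNE
             random_state=0).fit_transform(X)          # (2-D embedding assumed here)
    return density_peaks(Y, n_clusters=n_clusters)     # Section 3.3: sketch after Algorithm 1
```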

4. Experiment Results

We first focus on three types of UAV targets, namely UAV1, UAV2, and UAV3. UAV1 and UAV2 have the same structure and size but different materials. The data of UAV1 and UAV3 were calculated by electromagnetic software, while UAV2 was measured in an anechoic chamber. The pitch angle of the measurement is 0° and the azimuth angle range in this paper is 60°. The actual pictures or electromagnetic calculation models of these three types of UAV targets are shown in Figure 2. The frequency ranges from 8 to 12 GHz with an interval of 20 MHz, and all data were acquired with full polarization. All experiments were performed on a PC with a 3.20 GHz i7-8700 CPU and 16 GB RAM.
HRRPs of UAV1 with full polarization at azimuth 0° are presented in Figure 3, where HH, HV, and VV denote the three polarization channels. The average amplitudes of the HH and VV channels are much larger than that of the HV channel, while the amplitude distributions of the HH and VV HRRPs are roughly similar. Strong scattering points are concentrated in the range from 3.5 to 6 m. Between 4.4 m and 5.2 m, the energy of the one-dimensional range profile is clearly concentrated, and more than three peaks appear in the HH and VV channels. When the azimuth angle is 0°, the positions and numbers of strong scatterers are approximately identical for the two co-polarized channels; however, comparing the HH and VV channels in Figure 3a,c, the relative magnitudes of the same range cells differ. With the high-resolution range profiles of the three channels, we are able to roughly estimate the size of the target.
Figure 4 displays the HRRPs of UAV2 with full polarization at an azimuth angle of 0°. In contrast with UAV1 at the same azimuth angle, the data of UAV2, which were measured in an anechoic chamber, show smaller amplitude differences among the three polarization channels. Most of the energy of the three HRRPs is concentrated in the range from 2.5 to 5 m, and each channel has a maximum amplitude indicating a strong scattering center between 3.5 and 4 m. Except for this strong scattering center, the amplitude distribution in the other range areas is relatively uniform. The HRRPs of UAV2 with HH and VV polarization are weaker than those of UAV1 at the same azimuth angle of 0°; nevertheless, the real-measured data of UAV2 in the HV channel retain higher amplitudes.
As shown in Figure 5, the HRRPs of UAV1 in the three polarization channels at azimuth 60° differ from those in Figure 3. It is apparent that the amplitudes of the range profiles are smaller than those at an azimuth angle of 0° in the HH and VV channels. The maximum amplitudes of the co-polarized HRRPs are both less than 0.2, and there is no range cell with a particularly high amplitude. There is also a significant difference between the amplitudes of the HV-channel HRRPs at azimuth angles of 0° and 60°: the average amplitude at azimuth 0° is higher than that at 60°, which differs from the trend in the HH and VV channels. Therefore, it is necessary to analyze the mean amplitude at different azimuth angles.
As shown in Figure 6, the fluctuation of the mean amplitudes of UAV1 and UAV2 is approximately consistent from azimuth 0° to 60°, because the two UAVs are identical in structure and size. The mean amplitudes of UAV3 are mostly lower than those of the other two UAVs over the azimuth angle range from 0° to 40°. When the azimuth angle is between 30° and 60°, the mean amplitude of UAV3 becomes more volatile. Over the same azimuth range, the amplitude variations of UAV1 and UAV2 are approximately the same, and the mean amplitude of UAV1 is greater than that of UAV2 in most cases due to the differences in shape and material properties. It is difficult to identify different UAVs from the average amplitude alone; therefore, the amplitude information of the HRRPs should be studied more deeply.
We choose the HRRPs of the HH polarization channel for the following study. Since the dimension of each sample is 1024, it is necessary to reduce the dimensionality and remove data redundancy through principal component analysis. However, the selection of the reduced dimension is a significant issue: the goal is to reduce the dimensionality as much as possible while preserving the amplitude information of the HRRPs. The information loss of the HRRP data can be evaluated with the eigenvalue ratio. As can be seen from Figure 7, as the dimension increases, the eigenvalue ratio gradually increases and finally approaches 100%. When the dimension is chosen to be 300, the eigenvalue ratio exceeds 99.5%, and there is almost no loss of HRRP information. Therefore, in this paper we select 300 as the dimension in the PCA processing and pass the processed HRRP data to the subsequent accelerated t-SNE for further dimensionality reduction and visualization.
T-SNE is an effective high-dimensional data visualization algorithm that can be used to intuitively obtain the distribution of the UAV data points. It is also a dimensionality reduction algorithm, which reduces the computational burden of the subsequent classification. The traditional t-SNE algorithm is computationally inefficient; by adopting the Barnes–Hut algorithm, it is possible to quickly obtain embeddings in 2D, 3D, and other low-dimensional spaces. The visualization of the data points of the three types of UAV targets in a two-dimensional coordinate system is given in Figure 8, where a small number of data points of UAV1 are close to those of UAV3, which may cause difficulties in clustering. In this paper, we set the output dimension of each HRRP sample to 5 after the t-SNE algorithm with Barnes–Hut processing.
Figure 9 shows the comparison of the computation time of traditional t-SNE and accelerated t-SNE with the Barnes–Hut algorithm for different sample sizes. In this experiment, noisy UAV HRRPs are generated by Monte Carlo simulation, so that a large number of HRRP samples are available at the same SNR. As can be seen from Figure 9, the Barnes–Hut algorithm effectively reduces the computation time. As the number of HRRP samples increases, the computation time of conventional t-SNE grows rapidly, while the computation time of t-SNE with the Barnes–Hut algorithm rises slowly and almost linearly. The larger the amount of data to be processed, the more prominent the time advantage of the Barnes–Hut algorithm becomes, which enhances the capability of real-time data processing.
In order to show the difference in run-time between the two algorithms clearly, we select the last set of data in Figure 9. As can be seen in Table 1, the total computation time of the conventional t-SNE algorithm is close to 2000 s, while it takes less than 151 s with the Barnes–Hut algorithm, improving the processing efficiency by an order of magnitude. To further demonstrate the real-time processing of HRRP data, we also give the average execution time per sample: the conventional algorithm takes about 0.16 s, while the accelerated algorithm needs less than 0.013 s. This significantly improves the real-time visualization and dimensionality reduction capability for UAV HRRPs.
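For readers who want to reproduce this kind of comparison qualitatively, the sketch below times scikit-learn's exact and Barnes–Hut t-SNE on stand-in data; the data size and parameters are illustrative assumptions and will not reproduce the exact figures in Table 1.

```python
# Rough timing comparison of exact vs. Barnes-Hut t-SNE on random data.
import time
import numpy as np
from sklearn.manifold import TSNE

X = np.random.randn(1000, 50)   # stand-in reduced-dimension samples
for method in ("exact", "barnes_hut"):
    t0 = time.time()
    TSNE(n_components=2, method=method, random_state=0).fit_transform(X)
    print(method, f"{time.time() - t0:.1f} s")
```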
As can be seen from Figure 10, when the SNR is 40 dB, the proposed algorithm provides 100% classification accuracy for six different azimuth angle ranges from 10° to 60° with an interval of 10°. After the processing of t-SNE with the Barnes–Hut algorithm, we utilize the density-based clustering algorithm to separate the three types of UAV HRRP samples in the clustering space. For the different azimuth angle ranges, the classification results are robust under high SNR. Although the classification accuracy of the HRRPs is already high under high SNR, it is still necessary to study the clustering under low SNR to further prove the superiority of the proposed algorithm.
As shown in Figure 11, when the SNR is 5 dB, the classification results of the different algorithms are worse than those at 40 dB, and the effects of noise and azimuth range on the algorithms become clearer. With the increase of the azimuth angle range, the classification accuracy of all three algorithms decreases. The clustering accuracy of our proposed algorithm is superior to that of the two conventional methods, k-means and DBSCAN, over the different azimuth angle ranges. Since the clustering algorithms cannot obtain any label information about the targets, confusion easily arises under a large azimuth angle range and low SNR. When the azimuth angle range is 60° and the SNR is 5 dB, the classification accuracy of both conventional algorithms is below 64%, whereas the proposed algorithm achieves an accuracy of nearly 73% and still maintains a high accuracy under low SNR and a large azimuth angle range, which fully demonstrates the superiority and reliability of the proposed algorithm.
In order to further verify the reliability of the proposed algorithm, we used real-measured HRRPs from three different airplanes: the An-26, the Cessna Citation, and the Yark-42. The models of the three planes are shown in Figure 12, where Figure 12a–c corresponds to the An-26, the Cessna Citation, and the Yark-42, respectively. The An-26 is a medium-sized propeller plane, the Cessna Citation is a small jet plane, and the Yark-42 is a large jet plane. The parameters of the planes in this experiment are shown in Table 2. All the HRRPs in this experiment were measured by a C-band radar system operating from 5.32 to 5.72 GHz. The wavelength of the radar is 0.05 m and the pulse repetition frequency is 400 Hz. The radar transmits a chirp signal and the range resolution is 0.375 m.
Figure 13 shows HRRP samples of the three planes in this experiment. It can be seen from Figure 13 that there are differences among the HRRPs of the three planes, since HRRPs contain size, structure, material, and other information that differs between planes. Therefore, studying the differences among HRRPs is conducive to radar automatic target recognition. In this paper, we choose 1000 HRRP samples of each plane; that is, 3000 samples are utilized to test the performance of the three algorithms. The number of range cells of all HRRP samples is 256; that is, each HRRP is a 256-dimensional vector in this experiment.
As shown in Figure 14, the points with three different colors represent the three planes after classification. With the proposed algorithm, all three planes are clustered automatically with high classification accuracy. It can be seen from Table 3 that, after adopting the accelerated t-SNE, the classification accuracies of k-means and DBSCAN are 92.17% and 92.00%, respectively, while the clustering accuracy of the proposed algorithm reaches 94.23%, higher than that of the two conventional algorithms. The validity and robustness of the proposed algorithm have thus been demonstrated on different datasets. The algorithm not only improves the calculation speed and reduces the amount of data, but also effectively enhances the classification accuracy, providing more possibilities for automatic real-time target recognition.
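As a side note on evaluation, cluster labels are arbitrary, so an accuracy such as those in Table 3 is typically computed after matching clusters to classes. The sketch below is our assumption of such a scoring procedure (with illustrative label arrays), using the Hungarian algorithm for the matching:

```python
# Clustering accuracy by optimal cluster-to-class assignment (Hungarian matching).
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    classes, clusters = np.unique(y_true), np.unique(y_pred)
    cost = np.zeros((len(clusters), len(classes)))
    for i, c in enumerate(clusters):
        for j, k in enumerate(classes):
            cost[i, j] = -np.sum((y_pred == c) & (y_true == k))  # negative overlap counts
    rows, cols = linear_sum_assignment(cost)                     # best cluster-class matching
    return -cost[rows, cols].sum() / len(y_true)

# illustrative labels: 3 classes of 1000 samples each, with some clustering mistakes
y_true = np.repeat([0, 1, 2], 1000)
y_pred = y_true.copy()
y_pred[:150] = (y_pred[:150] + 1) % 3
print(f"{clustering_accuracy(y_true, y_pred):.2%}")
```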

5. Conclusions

This study set out to recognize targets automatically based on high-dimensional data reduction and visualization of HRRPs. On the basis of the analysis and preprocessing of HRRPs, we utilize principal component analysis to reduce the dimensionality. Furthermore, we adopt t-SNE with the Barnes–Hut approximation to carry out the visualization of HRRPs and further dimensionality reduction at a higher processing speed. Finally, we cluster unlabeled HRRPs more accurately than conventional algorithms.
The study of HRRPs remains a frontier issue. Methods that effectively extract, detect, and identify objects in complex electromagnetic environments such as ground clutter and sea clutter are still urgently needed. In future work, an in-depth study will be carried out on the recognition of targets under harsh clutter conditions, so as to realize reliable identification of targets in different environments with HRRPs.

Author Contributions

H.W. and D.D. wrote the paper together; D.D. designed and optimized the algorithm; H.W. performed the algorithm with MATLAB; X.W. reviewed and revised the paper.

Funding

This research was funded by the National Science Fund for Distinguished Young Scholars (No. 61625108) and Excellent Youth Foundation of Hu’nan Scientific Committee (No. 2017JJ1006).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, F.; Pang, C.; Li, Y. Algorithms for designing unimodular sequences with high Doppler tolerance for simultaneous fully polarimetric radar. Sensors 2018, 18, 905.
  2. El-Darymli, K. Automatic target recognition in synthetic aperture radar imagery: A state-of-the-art review. IEEE Access 2016, 4, 6014–6058.
  3. Bhattacharyya, K.; Deka, R.; Baruah, S. Automatic RADAR target recognition system at THz frequency band: A review. ADBU J. Eng. Technol. 2017, 6, 1–15.
  4. Tao, C.; Chen, S.; Li, Y. PolSAR land cover classification based on roll-invariant and selected hidden polarimetric features in the rotation domain. Remote Sens. 2017, 9, 660.
  5. Chen, S.W.; Wang, X.S.; Sato, M. PolInSAR complex coherence estimation based on covariance matrix similarity test. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4699–4710.
  6. Maggiori, E.; Tarabalka, Y.; Charpiat, G. Convolutional neural networks for large-scale remote-sensing image classification. IEEE Trans. Geosci. Remote Sens. 2016, 55, 645–657.
  7. Romero, A.; Gatta, C.; Camps-Valls, G. Unsupervised deep feature extraction for remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 2015, 54, 1349–1362.
  8. Yang, X.L.; Wen, G.J.; Ma, C.H. CFAR detection of moving range-spread target in white Gaussian noise using waveform contrast. IEEE Geosci. Remote Sens. Lett. 2016, 13, 282–286.
  9. Aubry, A.; Carotenuto, V.; De Maio, A. High resolution range profile estimation via a cognitive stepped frequency technique. IEEE Trans. Aerosp. Electron. Syst. 2018, 55, 444–458.
  10. Du, L.; He, H.; Zhao, L. Noise robust radar HRRP target recognition based on scatterer matching algorithm. IEEE Sens. J. 2015, 16, 1743–1753.
  11. Guo, C.; He, Y.; Wang, H. Radar HRRP target recognition based on deep one-dimensional residual-inception network. IEEE Access 2019, 7, 9191–9204.
  12. Lee, S.J.; Jeong, S.J.; Yang, E. Target identification using bistatic high-resolution range profiles. IET Radar Sonar Navig. 2016, 11, 498–504.
  13. Guo, Y.; Xiao, H.; Kan, Y. Learning using privileged information for HRRP-based radar target recognition. IET Signal Process. 2017, 12, 188–197.
  14. Pan, M.; Jiang, J.; Kong, Q. Radar HRRP target recognition based on t-SNE segmentation and discriminant deep belief network. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1609–1613.
  15. Zhou, D. Radar target HRRP recognition based on reconstructive and discriminative dictionary learning. Signal Process. 2016, 126, 52–64.
  16. Feng, B.; Chen, B.; Liu, H. Radar HRRP target recognition with deep networks. Pattern Recognit. 2017, 61, 379–393.
  17. Du, C.; Chen, B.; Xu, B. Factorized discriminative conditional variational auto-encoder for radar HRRP target recognition. Signal Process. 2019, 158, 176–189.
  18. Zhao, F.; Liu, Y.; Huo, K. Radar HRRP target recognition based on stacked autoencoder and extreme learning machine. Sensors 2018, 18, 173.
  19. Li, L.; Liu, Z. Noise-robust HRRP target recognition method via sparse-low-rank representation. Electron. Lett. 2017, 53, 1602–1604.
  20. Ma, Y.; Zhu, L.; Li, Y. HRRP-based target recognition with deep contractive neural network. J. Electromagn. Waves Appl. 2019, 33, 911–928.
  21. Shi, L.; Wang, P.; Liu, H. Radar HRRP statistical recognition with local factor analysis by automatic Bayesian Ying-Yang harmony learning. IEEE Trans. Signal Process. 2010, 59, 610–617.
  22. Yan, H.; Zhang, Z.; Xiong, G. Radar HRRP recognition based on sparse denoising autoencoder and multi-layer perceptron deep model. In Proceedings of the Fourth International Conference on Ubiquitous Positioning, Indoor Navigation and Location Based Services (UPINLBS), Shanghai, China, 3–4 November 2016; pp. 283–288.
  23. Zhao, Z.; Shkolnisky, Y.; Singer, A. Fast steerable principal component analysis. IEEE Trans. Comput. Imaging 2016, 2, 1–12.
  24. Abdi, H.; Williams, L.J. Principal component analysis. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 433–459.
  25. Kalika, D.; Knox, M.T.; Collins, L.M. Leveraging robust principal component analysis to detect buried explosive threats in handheld ground-penetrating radar data. In Proceedings of the SPIE, Baltimore, MD, USA, 21 May 2015; Volume 9454.
  26. Borcea, L.; Callaghan, T.; Papanicolaou, G. Synthetic aperture radar imaging and motion estimation via robust principal component analysis. SIAM J. Imaging Sci. 2013, 6, 1445–1476.
  27. Ai, X.; Luo, Y.; Zhao, G. Transient interference excision in over-the-horizon radar by robust principal component analysis with a structured matrix. IEEE Geosci. Remote Sens. Lett. 2015, 13, 48–52.
  28. Zhou, W.; Yeh, C.; Jin, R. ISAR imaging of targets with rotating parts based on robust principal component analysis. IET Radar Sonar Navig. 2016, 11, 563–569.
  29. Nguyen, L.H.; Tran, T.D. RFI-radar signal separation via simultaneous low-rank and sparse recovery. In Proceedings of the 2016 IEEE Radar Conference, Philadelphia, PA, USA, 2–6 May 2016; pp. 1–5.
  30. Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605.
  31. Kanungo, T.; Mount, D.M.; Netanyahu, N.S. An efficient k-means clustering algorithm: Analysis and implementation. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 7, 881–892.
  32. Han, J.; Kamber, M.; Tung, A.K.H. Spatial clustering methods in data mining. Geogr. Data Min. Knowl. Discov. 2001, 1, 188–217.
  33. Rodriguez, A.; Laio, A. Clustering by fast search and find of density peaks. Science 2014, 344, 1492–1496.
  34. Du, L.; Liu, H.; Bao, Z. Radar automatic target recognition using complex high-resolution range profiles. IET Radar Sonar Navig. 2007, 1, 18–26.
  35. Van der Maaten, L. Accelerating t-SNE using tree-based algorithms. J. Mach. Learn. Res. 2014, 15, 3221–3245.
Figure 1. The structure of the proposed method.
Figure 2. Real-measured or electromagnetic calculation models of three unmanned aerial vehicles (UAVs); (a) UAV1; (b) UAV2; (c) UAV3.
Figure 3. High-resolution range profiles (HRRPs) of UAV1 with full polarization at azimuth 0°; (a) UAV1, HH channel; (b) UAV1, HV channel; (c) UAV1, VV channel.
Figure 4. HRRPs of UAV2 with full polarization at azimuth 0°; (a) UAV2, HH channel; (b) UAV2, HV channel; (c) UAV2, VV channel.
Figure 5. HRRPs of UAV1 with full polarization at azimuth 60°; (a) UAV1, HH channel; (b) UAV1, HV channel; (c) UAV1, VV channel.
Figure 6. Mean amplitudes of the HRRPs of the three UAVs at different azimuth angles with HH polarization.
Figure 7. Eigenvalue ratio of three UAVs with principal component analysis (PCA).
Figure 8. Visualization of three UAVs by t-distributed stochastic neighbor embedding (t-SNE) technique (with Barnes–Hut algorithm) in 2D coordinate system.
Figure 9. Computation time of conventional t-SNE and accelerated t-SNE with the Barnes–Hut algorithm for different data sizes.
Figure 10. Clustering results of three UAVs with the signal-to-noise ratio (SNR) of 40 dB for different azimuth angle ranges. (a) Azimuth angle range of 10°; (b) azimuth angle range of 20°; (c) azimuth angle range of 30°; (d) azimuth angle range of 40°; (e) azimuth angle range of 50°; (f) azimuth angle range of 60°.
Figure 11. Classification results of three UAVs with different azimuth angle ranges between the proposed algorithm and conventional algorithms with the SNR of 5 dB.
Figure 12. Models of the three planes; (a) An-26; (b) Cessna Citation; (c) Yark-42.
Figure 13. HRRP samples of three planes; (a) An-26; (b) Cessna Citation; (c) Yark-42.
Figure 14. Clustering results of three flying planes with the proposed algorithm.
Table 1. The run-time of the two algorithms.

Algorithm | Total Time (s) | Average Time (s)
Conventional t-SNE | 1954.156 | 0.1617
Accelerated t-SNE with Barnes–Hut | 150.771 | 0.0125
Table 2. Parameters of the planes in the experiment.

Aircraft | Length/m | Width/m | Height/m
An-26 | 23.80 | 29.20 | 9.83
Cessna Citation | 14.40 | 15.90 | 4.57
Yark-42 | 36.38 | 34.88 | 9.83
Table 3. Classification accuracy of different algorithms with HRRP samples.

Algorithm | Accelerated t-SNE + k-Means | Accelerated t-SNE + DBSCAN | Proposed Algorithm
Accuracy | 92.17% | 92.00% | 94.23%
