Article

Adaptive Support-Driven Sparse Recovery STAP Method with Subspace Penalty

National Laboratory of Radar Signal Processing, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(18), 4463; https://doi.org/10.3390/rs14184463
Submission received: 24 July 2022 / Revised: 22 August 2022 / Accepted: 5 September 2022 / Published: 7 September 2022

Abstract

Detecting a moving target is an attractive topic in many fields, such as remote sensing. Space-time adaptive processing (STAP) plays a key role in detecting moving targets in strong clutter backgrounds for airborne early warning radar systems. However, STAP suffers serious clutter suppression performance loss when the number of training samples is insufficient due to the inhomogeneous clutter environment. In this article, an efficient sparse recovery STAP algorithm is proposed. First, inspired by the relationship between multiple sparse Bayesian learning (M-SBL) and subspace-based hybrid greedy algorithms, a new optimization objective function based on a subspace penalty is established. Second, the closed-form solution of each minimization step is obtained through the alternating minimization algorithm, which can guarantee the convergence of the algorithm. Finally, a restart strategy is used to adaptively update the support, which reduces the computational complexity. Simulation results show that the proposed algorithm has excellent performance in clutter suppression, convergence speed and running time with insufficient training samples.

1. Introduction

An important task of an airborne early warning radar is to detect slow-moving targets. However, since the airborne early warning radar operates in a downward-looking mode, it often faces ground clutter with strong energy, in which the target is easily submerged. How to effectively suppress the ground clutter is therefore a key problem in airborne radar systems. Space-time adaptive processing (STAP) plays a key role in airborne radar systems because of its superior clutter suppression performance [1]. The traditional STAP algorithm uses the training samples adjacent to the cell under test (CUT) to estimate the clutter plus noise covariance matrix (CNCM) through the maximum likelihood method. The well-known Reed–Mallett–Brennan (RMB) criterion [2] shows that the following conditions must be met if the performance loss is not to exceed 3 dB: (1) the training samples must be independent and identically distributed (i.i.d.); (2) the number of training samples should be no less than twice the system's degrees of freedom (DOFs); and (3) the training samples must contain no target. However, it is difficult to obtain enough i.i.d. training samples in practice owing to the terrain, the antenna configuration and other factors.
To reduce the demand for i.i.d. training samples, many effective algorithms have been proposed over the past few decades. Reduced-dimension (RD) [3,4] and reduced-rank (RR) [5,6,7] algorithms lower the requirement to twice the reduced dimension and twice the clutter rank, respectively. However, because of the huge number of antenna elements and pulses, even this reduced number of i.i.d. samples remains difficult to obtain. In [8], an algorithm named direct data domain (D3), which uses only the CUT, is proposed to solve the problem of insufficient training samples, but it suffers from aperture loss. In recent years, knowledge-aided (KA) algorithms [9,10] have been proposed to improve the clutter suppression performance in inhomogeneous environments by exploiting prior information about the system and the terrain. However, the performance of KA algorithms is restricted by the accuracy of the prior knowledge; inaccurate priors lead to performance degradation.
In recent years, with the rise of compressed sensing (CS), many scholars have focused on sparse recovery (SR) techniques to solve the clutter suppression problem when samples are insufficient. Many SR-based STAP (SR-STAP) algorithms have been proposed and have achieved performance improvements. SR-STAP exploits the inherent sparsity of clutter in the angle-Doppler plane to recover the clutter spectrum accurately. In [11,12,13], SR-STAP algorithms based on $\ell_1$-norm minimization are proposed. However, these single-measurement-vector (SMV) algorithms only utilize the information provided by a single sample and do not consider the joint sparse recovery of multiple measurement vectors (MMV). In [14,15,16,17], several MMV SR-STAP algorithms are proposed, which achieve better recovery performance. Moreover, a hybrid greedy algorithm called subspace-augmented MUSIC (SA-MUSIC) is proposed in [18,19]; it combines greedy steps with a generalized MUSIC criterion to obtain a complete subspace and achieves better recovery performance.
In addition to the above $\ell_1$-norm-based and subspace-based greedy algorithms, there is also a class of high-performance sparse recovery algorithms called sparse Bayesian learning (SBL). SBL needs no regularization parameters and is robust to noise, so it has attracted the attention of many scholars. SBL was proposed by Tipping in 2001 [20] and extended to the MMV case in [21]. Duan introduced SBL into STAP (M-SBL) in [22] and obtained superior clutter suppression performance. However, M-SBL has a large computational complexity and a slow convergence speed, which limit its practical application. In [23], a fast-converging SBL (M-FCSBL) algorithm is proposed; however, its global convergence is not guaranteed and it is sensitive to errors.
Although SBL differs greatly from traditional sparse recovery algorithms in form, the two types of algorithms are intrinsically related. Wipf et al. pointed out in [24] that SBL can be transformed into the standard sparse recovery form by adding a non-separable sparsity-inducing prior, i.e., the log-determinant penalty. They also analyzed the benefits of the non-separable penalty and found that SBL can recover the true data from noise-free observations. In [25], the authors reveal the connection between SBL and subspace-based hybrid greedy algorithms.
In this paper, we use the connection between SBL and subspace-based hybrid greedy algorithms to formulate a subspace penalty-based optimization problem. To ensure convergence, we solve this optimization problem with an alternating minimization algorithm and obtain a closed-form solution for each parameter update. To reduce the computational complexity, we adaptively update the support set during the iterations, similar to [26], thereby reducing the dimension of the dictionary and improving computational efficiency. The major contributions of this paper are as follows:
  • We formulate a new optimization problem based on a subspace penalty. This optimization problem considers the connection between SBL and subspace-based algorithms;
  • To solve the above optimization problem, we use an alternating minimization algorithm. For each minimization step, a closed-form solution is obtained, which guarantees the convergence of the algorithm;
  • The restart strategy is used to adaptively estimate the support set in each iteration, thereby reducing the computational complexity;
  • The proposed algorithm (ASDSPL) is compared with other common SR-STAP algorithms in terms of output signal-to-interference-plus-noise ratio (SINR) loss, improvement factor (IF), Capon spectrum, computational complexity and convergence speed using simulated data.
The rest of this paper is organized as follows. In Section 2, the SR-STAP model is established. In addition, we also review some related knowledge of SBL. The proposed algorithm is presented in Section 3. In Section 4, the computational complexity analysis is evaluated. In Section 5, the performance of the proposed method is evaluated with simulation data. The conclusion is drawn in Section 6.
Notation: Throughout this paper, scalars are denoted in italic typeface, and lowercase bold and uppercase bold denote vectors and matrices, respectively. The $l$-th column of the matrix $\mathbf{A}$ is written $\mathbf{a}_l$, and $a_{l,k}$ denotes the $k$-th element of $\mathbf{a}_l$. The matrix transpose and conjugate transpose are denoted $(\cdot)^T$ and $(\cdot)^H$, respectively. The symbols $\|\cdot\|_F$ and $\|\cdot\|_0$ are reserved for the Frobenius norm and the $\ell_0$ pseudo-norm. $\|\cdot\|_{2,0}$ denotes the mixed norm, defined as the number of non-zero elements of the column vector formed by the $\ell_2$ norms of the rows. The determinant of a matrix is denoted $|\cdot|$. The expectation of a random variable is denoted $E[\cdot]$. The symbol $\otimes$ denotes the Kronecker product. $\mathcal{CN}(x\,|\,c,d)$ means that the variable $x$ obeys a complex Gaussian distribution with mean $c$ and variance $d$. Finally, $\mathrm{tr}(\mathbf{A})$ denotes the trace of the matrix $\mathbf{A}$, i.e., the sum of its diagonal elements.

2. SR-STAP Model and SBL Review

2.1. SR-STAP Model

In this paper, an airborne radar system with a uniform linear array (ULA) is considered. The antenna array consists of $N$ elements with an element spacing $d$ equal to half the wavelength $\lambda$. The platform flies at a constant velocity $v_p$. $M$ pulses are transmitted at a constant pulse repetition frequency (PRF) $f_r$ within a coherent processing interval (CPI).
According to the Ward clutter model [1], the ground clutter is divided into many range rings, and each range ring is evenly divided into $N_c$ clutter patches, as shown in Figure 1. The clutter on each range ring can be viewed as a superposition of $N_c$ clutter patches. The spatial frequency and normalized Doppler frequency of the $i$-th clutter patch can be expressed as:

$$f_{s,i} = \frac{d}{\lambda}\cos(\theta_i)\sin(\varphi_i)$$

$$f_{d,i} = \frac{2v_p}{\lambda f_r}\cos(\theta_i)\sin(\varphi_i)$$

where $\theta_i$ and $\varphi_i$ are the elevation angle and the azimuth angle of the $i$-th clutter patch, respectively.
Then the $MN \times 1$ space-time snapshot of the $l$-th range ring can be expressed as:

$$\mathbf{x}_l = \mathbf{x}_c + \mathbf{x}_n = \sum_{a=1}^{N_a}\sum_{i=1}^{N_c} \alpha_{a,i}\,\mathbf{s}(f_{s,a,i}, f_{d,a,i}) + \mathbf{x}_n$$

where $N_a$ is the number of ambiguous ranges, $\alpha_{a,i}$ denotes the random complex amplitude of the $i$-th clutter patch in the $a$-th ambiguous range, and $\mathbf{x}_n$ is assumed to be zero-mean Gaussian white noise. The space-time steering vector of the $i$-th clutter patch $\mathbf{s}(f_{s,i}, f_{d,i})$ can be written as:

$$\mathbf{s}(f_{s,i}, f_{d,i}) = \mathbf{b}(f_{d,i}) \otimes \mathbf{a}(f_{s,i})$$

$$\mathbf{b}(f_{d,i}) = \left[1, \exp(j2\pi f_{d,i}), \ldots, \exp\big(j(M-1)2\pi f_{d,i}\big)\right]^T$$

$$\mathbf{a}(f_{s,i}) = \left[1, \exp(j2\pi f_{s,i}), \ldots, \exp\big(j(N-1)2\pi f_{s,i}\big)\right]^T$$

where $\mathbf{b}(f_{d,i})$ and $\mathbf{a}(f_{s,i})$ are the temporal and spatial steering vectors, respectively.
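As an illustration, the steering vectors in Equations (4)-(6) can be assembled with a few lines of NumPy (a sketch of our own; the function names are not from the paper):

```python
import numpy as np

def temporal_steering(fd, M):
    # b(f_d) = [1, e^{j2*pi*f_d}, ..., e^{j(M-1)2*pi*f_d}]^T, Equation (5)
    return np.exp(1j * 2 * np.pi * fd * np.arange(M))

def spatial_steering(fs, N):
    # a(f_s) = [1, e^{j2*pi*f_s}, ..., e^{j(N-1)2*pi*f_s}]^T, Equation (6)
    return np.exp(1j * 2 * np.pi * fs * np.arange(N))

def space_time_steering(fs, fd, N, M):
    # s(f_s, f_d) = b(f_d) kron a(f_s), Equation (4)
    return np.kron(temporal_steering(fd, M), spatial_steering(fs, N))
```

Stacking these vectors over a grid of $(f_s, f_d)$ pairs yields the overcomplete dictionary used later in the SR model.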
The ideal CNCM $\mathbf{R}_{c+n}$ can be expressed as:

$$\mathbf{R}_{c+n} = E[\mathbf{x}_l\mathbf{x}_l^H] = \mathbf{R}_c + \sigma^2\mathbf{I}_{MN}$$

where $\mathbf{R}_c$ is the clutter covariance matrix and $\sigma^2$ denotes the noise power. The optimal STAP weight vector can be obtained from the linearly constrained minimum variance (LCMV) criterion:

$$\mathbf{w}_{\mathrm{opt}} = \frac{\mathbf{R}_{c+n}^{-1}\mathbf{s}_t}{\mathbf{s}_t^H\mathbf{R}_{c+n}^{-1}\mathbf{s}_t}$$

where $\mathbf{s}_t$ is the space-time steering vector pointing in the direction of the main beam.
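Equation (8) is a single linear solve; a minimal sketch (our own helper name):

```python
import numpy as np

def lcmv_weight(R, s_t):
    # w_opt = R^{-1} s_t / (s_t^H R^{-1} s_t), Equation (8)
    Rinv_s = np.linalg.solve(R, s_t)     # solve R x = s_t instead of inverting R
    return Rinv_s / (s_t.conj() @ Rinv_s)
```

Solving the linear system rather than forming $\mathbf{R}^{-1}$ explicitly is the numerically preferable route; the denominator enforces the distortionless response $\mathbf{w}^H\mathbf{s}_t = 1$.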
In traditional SR-STAP algorithms, the angle-Doppler plane is uniformly discretized into $K = N_s N_d$ grid points, where $N_s = \rho_s N$ ($\rho_s > 1$) is the number of angle bins and $N_d = \rho_d M$ ($\rho_d > 1$) is the number of Doppler bins. Then the SR signal model with MMV can be written as:

$$\mathbf{X} = \mathbf{D}\mathbf{A} + \mathbf{X}_n$$

where $\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_L]$; $\mathbf{D}$ is the space-time overcomplete dictionary consisting of the space-time steering vectors of all grid points; $\mathbf{A} = [\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_L]$ denotes the sparse coefficient matrix; $L$ denotes the number of snapshots; and $\mathbf{X}_n$ denotes zero-mean Gaussian white noise.

Then the SR-STAP problem can be formulated in the form of basis pursuit denoising (BPDN):

$$\tilde{\mathbf{A}} = \arg\min_{\mathbf{A}}\ \|\mathbf{A}\|_{2,0} \quad \text{s.t.} \quad \|\mathbf{X}-\mathbf{D}\mathbf{A}\|_F^2 \le \varepsilon$$

where the $\ell_{2,0}$ norm measures the sparsity, the Frobenius norm measures the data fit, and $\varepsilon$ denotes the fitting-error tolerance.

2.2. SBL Review

In the SBL framework, the noise is assumed to follow a complex Gaussian distribution with zero mean and variance $\sigma$. Then the likelihood function of the observed data $\mathbf{X}$ can be expressed as:

$$p(\mathbf{X}\,|\,\mathbf{A},\sigma) = (\pi\sigma)^{-MNL}\exp\left(-\sigma^{-1}\sum_{l=1}^{L}\|\mathbf{x}_l - \mathbf{D}\mathbf{a}_l\|^2\right)$$

In the MMV case, the measurement vectors are assumed to satisfy the i.i.d. assumption and to share the same sparsity profile. A zero-mean Gaussian prior is assigned to each column of $\mathbf{A}$:

$$\mathbf{a}_l \sim \mathcal{CN}(\mathbf{0}, \boldsymbol{\Gamma}), \quad l = 1, 2, \ldots, L$$

where $\boldsymbol{\Gamma} = \mathrm{diag}(\boldsymbol{\gamma})$ and $\boldsymbol{\gamma} = [\gamma_1, \gamma_2, \ldots, \gamma_K]^T$ is the hyper-parameter vector.
According to the likelihood function (11) and the prior distribution (12), the posterior distribution can be expressed as:

$$p(\mathbf{A}\,|\,\mathbf{X},\boldsymbol{\Gamma},\sigma) = \frac{p(\mathbf{X},\mathbf{A}\,|\,\boldsymbol{\Gamma},\sigma)}{p(\mathbf{X}\,|\,\boldsymbol{\Gamma},\sigma)} = \pi^{-KL}|\boldsymbol{\Sigma}|^{-L}\exp\left(-\sum_{l=1}^{L}(\mathbf{a}_l - \boldsymbol{\mu}_l)^H\boldsymbol{\Sigma}^{-1}(\mathbf{a}_l - \boldsymbol{\mu}_l)\right)$$

with the mean and covariance given by:

$$\boldsymbol{\mu} = \boldsymbol{\Gamma}\mathbf{D}^H\boldsymbol{\Sigma}_x^{-1}\mathbf{X}$$

$$\boldsymbol{\Sigma} = \boldsymbol{\Gamma} - \boldsymbol{\Gamma}\mathbf{D}^H\boldsymbol{\Sigma}_x^{-1}\mathbf{D}\boldsymbol{\Gamma}$$

where $\boldsymbol{\Sigma}_x = \sigma\mathbf{I} + \mathbf{D}\boldsymbol{\Gamma}\mathbf{D}^H$.
The type-II maximum likelihood approach obtains the hyper-parameter update by integrating out the unknown sparse coefficients and maximizing the marginal likelihood over the $\boldsymbol{\gamma}$ space [24]. The cost function is:

$$G(\boldsymbol{\gamma}) = \mathrm{tr}(\boldsymbol{\Sigma}_x^{-1}\mathbf{X}\mathbf{X}^H) + L\log|\boldsymbol{\Sigma}_x|$$

where $\log(\cdot)$ denotes the natural logarithm.
Then SBL can also be expressed as a standard sparse recovery problem [24]:

$$\min_{\mathbf{A}}\ \|\mathbf{X}-\mathbf{D}\mathbf{A}\|_F^2 + \sigma g(\mathbf{A})$$

where $g(\mathbf{A})$ is the penalty term:

$$g(\mathbf{A}) = \min_{\boldsymbol{\gamma}\ge 0}\ G(\mathbf{A},\boldsymbol{\gamma})$$

with

$$G(\mathbf{A},\boldsymbol{\gamma}) = \mathrm{tr}(\mathbf{A}^H\boldsymbol{\Gamma}^{-1}\mathbf{A}) + L\log|\sigma\mathbf{I} + \mathbf{D}\boldsymbol{\Gamma}\mathbf{D}^H|$$

The penalty term in Equation (17) is a non-separable sparsity-promoting penalty, and the optimization problem in Equation (17) can be solved by the alternating minimization method.
Step 1: Minimization with respect to $\mathbf{A}$. In the $t$-th iteration, given the estimate $\boldsymbol{\gamma}^{(t)}$, the update for $\mathbf{A}^{(t+1)}$ is:

$$\mathbf{A}^{(t+1)} = \boldsymbol{\Gamma}^{(t)}\mathbf{D}^H\big(\sigma\mathbf{I} + \mathbf{D}\boldsymbol{\Gamma}^{(t)}\mathbf{D}^H\big)^{-1}\mathbf{X}, \quad \boldsymbol{\Gamma}^{(t)} = \mathrm{diag}(\boldsymbol{\gamma}^{(t)})$$

Step 2: Minimization with respect to $\boldsymbol{\gamma}$. Given the estimate $\mathbf{A}^{(t+1)}$, the update for $\boldsymbol{\gamma}^{(t+1)}$ is obtained by solving:

$$\boldsymbol{\gamma}^{(t+1)} = \arg\min_{\boldsymbol{\gamma}\ge 0}\ \mathrm{tr}\big(\mathbf{A}^{(t+1)H}\boldsymbol{\Gamma}^{-1}\mathbf{A}^{(t+1)}\big) + L\log|\sigma\mathbf{I} + \mathbf{D}\boldsymbol{\Gamma}\mathbf{D}^H|$$

which yields:

$$\gamma_i^{(t+1)} = \left(\frac{\frac{1}{L}\sum_{j=1}^{L}|a_{i,j}|^2}{\mathbf{d}_i^H\big(\sigma\mathbf{I} + \mathbf{D}\boldsymbol{\Gamma}^{(t)}\mathbf{D}^H\big)^{-1}\mathbf{d}_i}\right)^{1/2}$$
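One sweep of these two minimization steps, Equations (20)-(22), might be sketched as follows (our own function and variable names; a small floor guards the division):

```python
import numpy as np

def msbl_step(X, D, gamma, sigma):
    """One M-SBL fixed-point iteration: Equation (20) for A, Equation (22) for gamma.
    X: MN x L snapshots, D: MN x K dictionary, gamma: length-K hyper-parameters."""
    MN = X.shape[0]
    # Sigma_x = sigma*I + D Gamma D^H  (D * gamma scales column k by gamma_k)
    Sigma_x = sigma * np.eye(MN) + (D * gamma) @ D.conj().T
    # A = Gamma D^H Sigma_x^{-1} X, Equation (20)
    A = (gamma[:, None] * D.conj().T) @ np.linalg.solve(Sigma_x, X)
    # denominators d_i^H Sigma_x^{-1} d_i for all atoms at once
    denom = np.real(np.sum(D.conj() * np.linalg.solve(Sigma_x, D), axis=0))
    num = np.mean(np.abs(A) ** 2, axis=1)      # (1/L) sum_j |a_{i,j}|^2
    gamma_new = np.sqrt(num / np.maximum(denom, 1e-12))   # Equation (22)
    return A, gamma_new
```

Iterating `msbl_step` until `gamma` stabilizes reproduces the M-SBL fixed point; each sweep costs one $MN \times MN$ solve, which is the bottleneck discussed in Section 4.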
For the non-separable penalty in Equation (19), a new interpretation can be obtained through some simple transformations. Because of the non-negative constraint on $\boldsymbol{\gamma}$ in Equation (18), the solution of Equation (18) must satisfy the Karush–Kuhn–Tucker (KKT) condition [27]:

$$\gamma_i\frac{\partial G(\mathbf{A},\boldsymbol{\gamma})}{\partial\gamma_i} = \gamma_i\left(-\frac{\sum_{l=1}^{L}|a_{i,l}|^2}{\gamma_i^2} + L\,\mathbf{d}_i^H\big(\sigma\mathbf{I} + \mathbf{D}\boldsymbol{\Gamma}^{(t)}\mathbf{D}^H\big)^{-1}\mathbf{d}_i\right) = 0$$

which gives:

$$\frac{\|\mathbf{A}_{i,:}\|^2}{\gamma_i} = \begin{cases} L\,\gamma_i\,\mathbf{d}_i^H\big(\sigma\mathbf{I} + \mathbf{D}\boldsymbol{\Gamma}^{(t)}\mathbf{D}^H\big)^{-1}\mathbf{d}_i, & \gamma_i \ne 0 \\ 0, & \gamma_i = 0 \end{cases}$$

where $\mathbf{A}_{i,:}$ denotes the $i$-th row of $\mathbf{A}$.

Therefore, the first term in Equation (19) can be rewritten as:

$$\mathrm{tr}(\mathbf{A}^H\boldsymbol{\Gamma}^{-1}\mathbf{A}) = \sum_{i\in S} L\,\gamma_i\,\mathbf{d}_i^H\big(\sigma\mathbf{I} + \mathbf{D}\boldsymbol{\Gamma}\mathbf{D}^H\big)^{-1}\mathbf{d}_i = L\,\mathrm{tr}\big((\sigma\mathbf{I} + \mathbf{D}\boldsymbol{\Gamma}\mathbf{D}^H)^{-1}\mathbf{D}_S\boldsymbol{\Gamma}_S\mathbf{D}_S^H\big) = L\,\mathrm{tr}\big((\sigma\mathbf{I} + \mathbf{D}\boldsymbol{\Gamma}\mathbf{D}^H)^{-1}\mathbf{D}\boldsymbol{\Gamma}\mathbf{D}^H\big)$$

where $S$ denotes the support set on which $\gamma_i \ne 0$.

Let $q_j$ ($j = 1, \ldots, MN$) denote the singular values of $\mathbf{D}\boldsymbol{\Gamma}\mathbf{D}^H$; then Equation (25) can be rewritten as:

$$\mathrm{tr}(\mathbf{A}^H\boldsymbol{\Gamma}^{-1}\mathbf{A}) = L\sum_{j=1}^{MN}\frac{q_j}{\sigma + q_j}$$

In the same way, the second term in Equation (19) can be expressed as:

$$L\log|\sigma\mathbf{I} + \mathbf{D}\boldsymbol{\Gamma}\mathbf{D}^H| = L\sum_{j=1}^{MN}\log(\sigma + q_j)$$
In many rank-minimization problems, the matrix singular values are used as a sparsity metric to approximate the rank [28,29,30]. Therefore, the functions in Equations (26) and (27) can be treated as non-convex rank surrogates. Substituting Equations (26) and (27) into Equation (18) yields:

$$g(\mathbf{A}) = L f_1(\mathbf{D}\boldsymbol{\Gamma}^{1/2}) + L f_2(\mathbf{D}\boldsymbol{\Gamma}^{1/2})$$

where $f_1(\cdot)$ and $f_2(\cdot)$ denote two different non-convex rank surrogates.

Consider the case where $\mathbf{D}_S\boldsymbol{\Gamma}_S^{1/2}$ has full column rank. Then:

$$\mathrm{rank}(\mathbf{D}_S\boldsymbol{\Gamma}_S\mathbf{D}_S^H) = |S| = \|\boldsymbol{\gamma}\|_0$$

Therefore, the role of the above rank surrogates in SBL can be understood as a non-separable approximation of $\|\mathbf{A}\|_{2,0}$.

3. Proposed Method

3.1. Subspace Penalty

Although the above non-separable penalty brings many advantages, as Equation (28) shows, the two different non-convex rank surrogates make the analysis difficult and do not exploit any subspace information. In sparse recovery, the Schatten-$p$ quasi-norm is a special norm defined on the singular values of a matrix and can be used to solve various low-rank problems. Compared with the nuclear norm, the Schatten-$p$ quasi-norm has an additional parameter $p$, which acts as a power on the singular values; it can therefore be viewed as a weighted nuclear norm. In addition, the Schatten-$p$ quasi-norm has a more flexible form than the nuclear norm and can reduce the weight of large singular values in the objective function, making it more suitable for the rank-minimization problem [31,32].
Inspired by the similarity between the $\ell_0$ norm and the rank function, a simpler and more intuitive Schatten-$p$ quasi-norm rank surrogate is used to replace the penalty in SBL in this paper. This new penalty, named the subspace penalty (SPL), can be expressed as:

$$g_{SPL}(\mathbf{A}) = 2L\,\mathrm{tr}\,|\mathbf{U}^H\mathbf{D}\boldsymbol{\Gamma}^{1/2}|_p, \quad (0 < p \le 1)$$

where $\mathbf{U}$ denotes the incomplete subspace of the clutter. The Schatten-$p$ quasi-norm rank surrogate is defined as:

$$\mathrm{tr}\,|\mathbf{Q}|_p = \mathrm{tr}\big((\mathbf{Q}\mathbf{Q}^H)^{p/2}\big) = \sum_{j=1}^{MN} q_j^p(\mathbf{Q})$$

Similar to the derivation of Equation (18) in Section 2, Equation (29) can be rewritten as:

$$g_{SPL}(\mathbf{A}) = \min_{\boldsymbol{\gamma}\ge 0}\ G_{SPL}(\mathbf{A},\boldsymbol{\gamma})$$

where

$$G_{SPL}(\mathbf{A},\boldsymbol{\gamma}) = \mathrm{tr}(\mathbf{A}^H\boldsymbol{\Gamma}^{-1}\mathbf{A}) + L\,\mathrm{tr}\big((\mathbf{U}^H\mathbf{D}\boldsymbol{\Gamma}\mathbf{D}^H\mathbf{U})^{p/2}\big)$$
Then a new optimization problem is established based on the new penalty in Equation (32):

$$\min_{\mathbf{A}}\ \|\mathbf{X}-\mathbf{D}\mathbf{A}\|_F^2 + \sigma g_{SPL}(\mathbf{A})$$

It is worth noting that $\mathrm{tr}\big((\mathbf{U}^H\mathbf{D}\boldsymbol{\Gamma}\mathbf{D}^H\mathbf{U})^{p/2}\big)$ is a non-convex function of the singular values, so the SPL penalty in Equation (32) is replaced with its convex conjugate [33]:

$$G_{SPL}(\mathbf{A},\boldsymbol{\gamma}) = \min_{\boldsymbol{\Psi}\in\mathcal{T}}\ F_{SPL}(\mathbf{A},\boldsymbol{\gamma},\boldsymbol{\Psi})$$

where $F_{SPL}(\mathbf{A},\boldsymbol{\gamma},\boldsymbol{\Psi})$ is:

$$F_{SPL}(\mathbf{A},\boldsymbol{\gamma},\boldsymbol{\Psi}) = \mathrm{tr}(\mathbf{A}^H\boldsymbol{\Gamma}^{-1}\mathbf{A}) + \frac{L}{p}\left(\mathrm{tr}\big((\mathbf{U}^H\mathbf{D}\boldsymbol{\Gamma}\mathbf{D}^H\mathbf{U})\boldsymbol{\Psi}\big) - \frac{2}{s}\,\mathrm{tr}(\boldsymbol{\Psi}^{s/2})\right)$$

where $\mathcal{T}$ denotes the set of symmetric positive semi-definite matrices and $p$, $s$ satisfy:

$$\frac{2}{p} + \frac{2}{s} = 1$$
Although Equation (35) is not jointly convex, its great advantage over Equation (32) is that it is convex in each variable when the others are held fixed. Therefore, an alternating minimization algorithm can be used to obtain a closed-form solution at each minimization step. Similar to the derivation in Section 2.2, the equivalence of Equations (29) and (34) can be proved.
Now the final form of the optimization problem based on the SPL penalty can be written as:

$$\mathcal{L}(\mathbf{A},\boldsymbol{\gamma},\boldsymbol{\Psi}) = \|\mathbf{X}-\mathbf{D}\mathbf{A}\|_F^2 + \sigma\,\mathrm{tr}(\mathbf{A}^H\boldsymbol{\Gamma}^{-1}\mathbf{A}) + \frac{L}{p}\left(\mathrm{tr}\big((\mathbf{U}^H\mathbf{D}\boldsymbol{\Gamma}\mathbf{D}^H\mathbf{U})\boldsymbol{\Psi}\big) - \frac{2}{s}\,\mathrm{tr}(\boldsymbol{\Psi}^{s/2})\right)$$

Since Equation (37) is convex in each of $\mathbf{A}$, $\boldsymbol{\gamma}$ and $\boldsymbol{\Psi}$ separately, an alternating minimization algorithm is used to solve it. The solution of Equation (37) must also satisfy the KKT conditions:

$$\frac{\partial\mathcal{L}(\mathbf{A},\boldsymbol{\gamma},\boldsymbol{\Psi})}{\partial\mathbf{A}} = -2\mathbf{D}^H(\mathbf{X}-\mathbf{D}\mathbf{A}) + 2\sigma\boldsymbol{\Gamma}^{-1}\mathbf{A} = \mathbf{0}$$

$$\frac{\partial\mathcal{L}(\mathbf{A},\boldsymbol{\gamma},\boldsymbol{\Psi})}{\partial\boldsymbol{\Psi}} = \frac{L}{p}\big(\mathbf{U}^H\mathbf{D}\boldsymbol{\Gamma}\mathbf{D}^H\mathbf{U}\big) - \frac{L}{p}\boldsymbol{\Psi}^{s/2-1} = \mathbf{0}$$

$$\frac{\partial\mathcal{L}(\mathbf{A},\boldsymbol{\gamma},\boldsymbol{\Psi})}{\partial\gamma_i} - \mu_i = -\frac{\sum_{l=1}^{L}|a_{i,l}|^2}{\gamma_i^2} + L\,\mathbf{d}_i^H\mathbf{U}\boldsymbol{\Psi}\mathbf{U}^H\mathbf{d}_i - \mu_i = 0$$

$$\mu_i\gamma_i = 0, \quad \mu_i \ge 0, \quad \gamma_i \ge 0, \quad \forall i$$
According to Equations (38)-(41), the following fixed-point iterations are obtained:

Step 1: Minimization with respect to $\mathbf{A}$. In the $t$-th iteration, given the estimate $\boldsymbol{\gamma}^{(t)}$, the update for $\mathbf{A}^{(t+1)}$ is:

$$\mathbf{A}^{(t+1)} = \boldsymbol{\Gamma}^{(t)}\mathbf{D}^H\big(\sigma\mathbf{I} + \mathbf{D}\boldsymbol{\Gamma}^{(t)}\mathbf{D}^H\big)^{-1}\mathbf{X}, \quad \boldsymbol{\Gamma}^{(t)} = \mathrm{diag}(\boldsymbol{\gamma}^{(t)})$$

Step 2: Update of $\boldsymbol{\Psi}$. The closed-form solution for $\boldsymbol{\Psi}^{(t+1)}$ is:

$$\boldsymbol{\Psi}^{(t+1)} = \big(\mathbf{U}^H\mathbf{D}\boldsymbol{\Gamma}^{(t)}\mathbf{D}^H\mathbf{U}\big)^{\frac{1}{s/2-1}}$$
Step 3: Update of $\boldsymbol{\gamma}$. Given $\boldsymbol{\Psi}^{(t)}$ and $\mathbf{A}^{(t)}$, Equations (40) and (41) give:

$$\gamma_i\frac{\partial\mathcal{L}(\mathbf{A},\boldsymbol{\gamma},\boldsymbol{\Psi})}{\partial\gamma_i} = \gamma_i\left(-\frac{\|\mathbf{A}_{i,:}^{(t)}\|^2}{\gamma_i^2} + L\,\mathbf{d}_i^H\mathbf{U}\boldsymbol{\Psi}^{(t)}\mathbf{U}^H\mathbf{d}_i\right) = 0$$

so the update formula for $\boldsymbol{\gamma}$ can be written as:

$$\gamma_i = \begin{cases} \infty, & \mathbf{d}_i^H\mathbf{U}\boldsymbol{\Psi}^{(t)}\mathbf{U}^H\mathbf{d}_i = 0 \ \text{and}\ \|\mathbf{A}_{i,:}^{(t)}\|^2 \ne 0 \\[4pt] \left(\dfrac{\frac{1}{L}\sum_{j=1}^{L}|a_{i,j}|^2}{\mathbf{d}_i^H\mathbf{U}\boldsymbol{\Psi}^{(t)}\mathbf{U}^H\mathbf{d}_i}\right)^{1/2}, & \mathbf{d}_i^H\mathbf{U}\boldsymbol{\Psi}^{(t)}\mathbf{U}^H\mathbf{d}_i \ne 0 \\[4pt] \mathrm{const}, & \mathbf{d}_i^H\mathbf{U}\boldsymbol{\Psi}^{(t)}\mathbf{U}^H\mathbf{d}_i = 0 \ \text{and}\ \|\mathbf{A}_{i,:}^{(t)}\|^2 = 0 \end{cases}$$
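Steps 1-3 can be sketched in NumPy as follows. This is our own sketch, assuming $p = 1$ (hence $s = -2$ by Equation (36)); the fractional matrix power is taken through an eigendecomposition, and the "const" branch of Equation (45) is implemented as keeping the previous value:

```python
import numpy as np

def herm_power(mat, e, eps=1e-12):
    # fractional power of a Hermitian PSD matrix via eigendecomposition
    w, V = np.linalg.eigh(mat)
    w = np.maximum(w.real, eps)
    return (V * w ** e) @ V.conj().T

def asdspl_step(X, D, U, gamma, sigma, p=1.0):
    """One alternating-minimization sweep, Equations (42)-(45) (sketch, 0 < p <= 1)."""
    MN = X.shape[0]
    s = 2.0 / (1.0 - 2.0 / p)                        # from 2/p + 2/s = 1, Equation (36)
    Sigma_x = sigma * np.eye(MN) + (D * gamma) @ D.conj().T
    A = (gamma[:, None] * D.conj().T) @ np.linalg.solve(Sigma_x, X)   # Equation (42)
    Q = (U.conj().T @ D * gamma) @ D.conj().T @ U    # U^H D Gamma D^H U (current gamma)
    Psi = herm_power(Q, 1.0 / (s / 2.0 - 1.0))       # Equation (43)
    G = U.conj().T @ D                               # r x K
    denom = np.real(np.sum(G.conj() * (Psi @ G), axis=0))  # d_i^H U Psi U^H d_i
    num = np.mean(np.abs(A) ** 2, axis=1)
    gamma_new = np.where(denom > 1e-12,
                         np.sqrt(num / np.maximum(denom, 1e-12)),
                         gamma)                      # Equation (45); const -> keep old
    return A, Psi, gamma_new
```

Note that $\boldsymbol{\Psi}$ has size $r \times r$ rather than $MN \times MN$, which is one of the two complexity savings analyzed in Section 4.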
After obtaining the sparse coefficient matrix $\mathbf{A}$, the estimated CNCM $\mathbf{R}$ is:

$$\mathbf{R} = \frac{1}{L}\sum_{l=1}^{L}\sum_{k=1}^{K}|a_{l,k}|^2\,\mathbf{d}_k\mathbf{d}_k^H + \alpha\mathbf{I}_{MN}$$

where $\alpha$ is the loading factor.
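Equation (46) amounts to a weighted sum of atom outer products plus diagonal loading; a sketch (our own function name):

```python
import numpy as np

def estimate_cncm(A, D, alpha):
    # R = (1/L) sum_l sum_k |a_{l,k}|^2 d_k d_k^H + alpha*I, Equation (46)
    power = np.mean(np.abs(A) ** 2, axis=1)          # average power of each atom
    # (D * power) scales column k of D by power_k, so the product sums the outer products
    return (D * power) @ D.conj().T + alpha * np.eye(D.shape[0])
```

The diagonal loading $\alpha\mathbf{I}_{MN}$ keeps $\mathbf{R}$ invertible for the weight computation in Equation (8).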
As Equation (45) shows, even if $\|\mathbf{A}_{i,:}^{(t)}\|_2$ is mistakenly set to zero during the iterations, the corresponding hyper-parameter can still become non-zero when $\mathbf{d}_i^H\mathbf{U}\boldsymbol{\Psi}^{(t)}\mathbf{U}^H\mathbf{d}_i = 0$, which gives the proposed algorithm the ability to correct such errors. This differs from M-SBL, where once $\|\mathbf{A}_{i,:}^{(t)}\|_2$ is mistakenly set to zero, the error persists in all subsequent iterations. Therefore, the proposed algorithm has stronger applicability and maintains good performance even in the SMV case.

3.2. Adaptive Support Set Update and Calculation of U

Although the SPL-penalty-based algorithm proposed in the previous section has good recovery performance, it must compute matrix inverses in the iterative process, which leads to a heavy computational burden. In particular, the DOF of modern airborne early warning radar systems is extremely high, which severely limits the practical application of such algorithms.

Owing to the inherent sparsity of clutter, the dictionary $\mathbf{D}$ contains a large number of redundant atoms that do not contribute to the model. In addition, when the radar is operating, the energy of the area illuminated by the main beam is much higher than that of the areas illuminated by the sidelobes. Inspired by this, we adaptively set a threshold in each iteration to update the support set $S$. In the next iteration, atoms outside the support set are ignored, so the dimension of the dictionary $\mathbf{D}$ is gradually reduced and the computational complexity can be lowered effectively.

It is worth noting that the choice of threshold affects the performance of the proposed algorithm: it must balance computational efficiency against recovery accuracy. In this paper, the threshold is set as:
$$\eta^{(t+1)} = \kappa\,\max_i\big(\|\mathbf{A}_{i,:}^{(t)}\|_2^2\big)$$

where $\kappa$ is a manually set parameter used to balance computational efficiency and recovery accuracy.

Then the support set $S$ is updated as:

$$S^{(t+1)} = \{\, i \mid \gamma_i > \eta^{(t)} \,\}, \quad i = 1, \ldots, K$$
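A sketch of the support update in Equations (47) and (48), where we read the maximum as being taken over the row energies of $\mathbf{A}^{(t)}$ (an assumption on our part):

```python
import numpy as np

def update_support(gamma, A, kappa=0.01):
    # Equations (47)-(48): keep atoms whose hyper-parameter exceeds the threshold.
    # The maximum is taken over row energies of A (our reading of Equation (47)).
    eta = kappa * np.max(np.sum(np.abs(A) ** 2, axis=1))
    return np.flatnonzero(gamma > eta)
```

The surviving indices select the columns of $\mathbf{D}$ kept for the next iteration, so the effective dictionary shrinks as the iterations proceed.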
Moreover, an estimate of the incomplete clutter subspace $\mathbf{U}$ is needed. Usually, the clutter subspace is obtained from the eigen-decomposition of the covariance matrix; however, such estimates are inaccurate when the number of samples is insufficient. In this paper, only an incomplete partial clutter subspace $\mathbf{U}$ needs to be estimated.

First, the sample covariance matrix $\tilde{\mathbf{R}}$ is obtained by maximum likelihood estimation:

$$\tilde{\mathbf{R}} = \frac{\mathbf{X}\mathbf{X}^H}{L}$$

Then, the biased matrix $\hat{\mathbf{R}}$ is computed by [18]:

$$\hat{\mathbf{R}} = \tilde{\mathbf{R}} - \lambda_{MN}(\tilde{\mathbf{R}})\mathbf{I}$$

where $\lambda_{MN}(\tilde{\mathbf{R}})$ denotes the $MN$-th (smallest) eigenvalue of $\tilde{\mathbf{R}}$. The matrices $\tilde{\mathbf{R}}$ and $\hat{\mathbf{R}}$ have the same eigenvectors.

Next, the incomplete partial clutter subspace $\mathbf{U}$ can be expressed as:

$$\mathbf{U} = [\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_r]$$

where $\mathbf{v}_j$, $j = 1, 2, \ldots, r$, denotes the $j$-th eigenvector of $\hat{\mathbf{R}}$. The value of $r$ can be determined by the following criterion:

$$\frac{\lambda_r(\tilde{\mathbf{R}}) - \lambda_{r+1}(\tilde{\mathbf{R}})}{\lambda_1(\tilde{\mathbf{R}})} \ge \tau \ge \frac{\lambda_k(\tilde{\mathbf{R}}) - \lambda_{k+1}(\tilde{\mathbf{R}})}{\lambda_1(\tilde{\mathbf{R}})}, \quad \forall k > r$$

where $\tau$ is a non-negative threshold.
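The subspace estimate of Equations (49)-(52) can be sketched as follows (our own function name; since the bias in Equation (50) leaves the eigenvectors unchanged, only the eigen-gaps of $\tilde{\mathbf{R}}$ are needed):

```python
import numpy as np

def incomplete_clutter_subspace(X, tau=1e-6):
    """Estimate U from Equations (49)-(52) (sketch).
    X: MN x L snapshots; tau: normalized eigen-gap threshold."""
    L = X.shape[1]
    R_tilde = X @ X.conj().T / L                     # Equation (49)
    w, V = np.linalg.eigh(R_tilde)
    w, V = w[::-1].real, V[:, ::-1]                  # sort eigenvalues descending
    # r is the largest index whose normalized eigen-gap still exceeds tau,
    # Equation (52); subtracting the smallest eigenvalue (Equation (50))
    # does not change these gaps.
    gaps = (w[:-1] - w[1:]) / max(w[0], 1e-300)
    idx = np.flatnonzero(gaps >= tau)
    r = int(idx[-1]) + 1 if idx.size else 1
    return V[:, :r]
```

The returned $\mathbf{U}$ has orthonormal columns spanning the dominant (incomplete) clutter subspace.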
The pseudocode of the proposed algorithm is shown in Algorithm 1.
Algorithm 1. Pseudocode for ASDSPL-STAP algorithm.
Input: training samples X , dictionary matrix D .
Initialize: $t = 0$, $\kappa = 0.01$, $\varepsilon = \tau = 10^{-6}$, $p = 0$, $\mathrm{iter}_{\max} = 150$.
Step1: Calculate U using (49)-(52);
Step2: While t iter max
                t t + 1 ;
                Update A ( t ) using (42);
                Update Ψ ( t ) using (43);
                Update γ ( t ) using (45);
                Update support set S ( t ) using (47) and (48);
                Update dictionary matrix D ( t ) = D S ( t 1 ) ;
                If  | | γ ( t ) γ ( t 1 ) | | 2 / | | γ ( t 1 ) | | 2 < ε
                   break
              end
end
Step3: Estimate the CNCM R by (46);
Step4: Calculate the STAP weight w by (8);
Output:  Y = w H X .

4. Computational Complexity Analysis

In this section, we discuss the computational complexity of the proposed algorithm in detail and compare it with some common SR-STAP algorithms: M-CVX, M-OMP, M-FOCUSS, M-IAA, M-SBL and M-FCSBL. The number of multiplications is used to evaluate the computational complexity; for simplicity, low-order terms are omitted.
The computational cost of the proposed algorithm and of M-SBL is dominated by the inversion of the matrices $\mathbf{U}^H\mathbf{D}\boldsymbol{\Gamma}\mathbf{D}^H\mathbf{U}$ and $\mathbf{D}\boldsymbol{\Gamma}\mathbf{D}^H$. Compared with M-SBL, the proposed algorithm reduces the complexity in two ways: (1) with the SPL penalty, the matrix $\mathbf{U}^H\mathbf{D}\boldsymbol{\Gamma}\mathbf{D}^H\mathbf{U}$ has dimension $r \times r$ with $r < MN$; (2) by retaining only the atoms in the support set at each iteration, the dictionary dimension $K_{ASD}$ gradually decreases as the iterations proceed. The relationship between the cardinality of the support set and the iteration index $t$ is shown in Figure 2: the cardinality drops sharply in the first few iterations and then tends to converge.
From the above analysis, the ASDSPL algorithm has a lower computational complexity than M-SBL. The computational complexities of the different algorithms are listed in Table 1, where the sparsity $r_s$ in M-OMP is assumed equal to the clutter rank.

5. Numerical Simulation

In this section, we evaluate the performance of the proposed ASDSPL algorithm using simulated data and compare it with existing STAP algorithms. The IF and SINR loss are used as evaluation metrics for the clutter suppression performance:

$$\mathrm{IF} = \frac{|\mathbf{w}^H\mathbf{s}_t|^2\,\mathrm{tr}(\mathbf{R})}{(\mathbf{w}^H\mathbf{R}\mathbf{w})(\mathbf{s}_t^H\mathbf{s}_t)}$$

$$L_{SINR} = \frac{\sigma^2}{MN}\cdot\frac{|\mathbf{w}^H\mathbf{s}_t|^2}{\mathbf{w}^H\mathbf{R}\mathbf{w}}$$

where $\mathbf{R}$ is the known CNCM of the CUT and $\mathbf{w}$ denotes the STAP weight vector.
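Both metrics follow directly from Equations (53) and (54); a sketch with our own function names:

```python
import numpy as np

def improvement_factor(w, s_t, R):
    # IF = |w^H s_t|^2 tr(R) / (w^H R w * s_t^H s_t), Equation (53)
    num = np.abs(w.conj() @ s_t) ** 2 * np.real(np.trace(R))
    den = np.real(w.conj() @ R @ w) * np.real(s_t.conj() @ s_t)
    return num / den

def sinr_loss(w, s_t, R, sigma2):
    # L_SINR = (sigma^2 / MN) * |w^H s_t|^2 / (w^H R w), Equation (54)
    MN = s_t.size
    return sigma2 * np.abs(w.conj() @ s_t) ** 2 / (MN * np.real(w.conj() @ R @ w))
```

As a sanity check, with a noise-only covariance $\mathbf{R} = \sigma^2\mathbf{I}$ and a matched-filter weight $\mathbf{w} \propto \mathbf{s}_t$, the SINR loss equals 1 (no loss) and the IF equals $MN$.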
A side-looking phased array radar is considered in our experiments. The main simulation parameters are listed in Table 2. The CUT is the 600-th range gate. We set $\rho_s = 5$, $\rho_d = 5$ and $L = 6$. Moreover, all presented results are averaged over 100 independent Monte Carlo runs.

5.1. Performance Analysis in the Ideal Case

In this subsection, an ideal error-free simulation scenario is considered. First, we give an intuitive view of the reconstructed Capon spectrum of ASDSPL and compare it with the Capon spectra obtained from the ideal CNCM, the M-OMP algorithm, the M-FOCUSS algorithm and other MMV SR-STAP algorithms. From Figure 3, we can observe that the Capon spectra in Figure 3d–g, obtained from the M-IAA, M-SBL, M-FCSBL and ASDSPL algorithms, are similar to each other and close to the ideal Capon spectrum. The value of the color bar corresponds to the energy value in the figure. A more realistic clutter Capon spectrum is obtained by the M-IAA, M-SBL, M-FCSBL and ASDSPL algorithms. M-OMP first selects the support set and then uses least squares to calculate the power, which introduces additional errors.
To better illustrate the effectiveness of the proposed algorithm, the eigen-spectra of the CNCMs obtained by different STAP methods are shown in Figure 4. Under the simulation conditions of this subsection, the clutter rank equals 15 according to Brennan's rule [1]. From Figure 4, we can see that the eigen-spectrum of the ideal CNCM drops significantly at eigenvalue index 15. Moreover, the eigen-spectra obtained by the M-IAA, M-SBL, M-FCSBL and ASDSPL algorithms are close to the optimal one. Therefore, these algorithms achieve better clutter suppression performance than the M-OMP and M-FOCUSS algorithms.
In addition, the clutter suppression performance of the different SR-STAP algorithms is evaluated by the IF against the normalized Doppler frequency, as shown in Figure 5. The IF curves obtained by the M-IAA, M-SBL, M-FCSBL and ASDSPL algorithms are close to the optimal IF curve in both the depth and the shape of the clutter notch.
In this paper, the relationship between the cost function $C = \ln|\hat{\mathbf{R}}| + \mathrm{tr}(\hat{\mathbf{R}}^{-1}\mathbf{R})$ from [23] and the number of iterations is used to evaluate the convergence of the different algorithms, where $\hat{\mathbf{R}}$ is the estimated CNCM and $\mathbf{R}$ is the exact CNCM. In addition, we assume that the noise power can be obtained during receiver silence. As shown in Figure 6, we only consider the five algorithms with better performance: M-IAA, M-SBL, M-FCSBL, M-FOCUSS and ASDSPL. From Figure 6, we can observe that the M-IAA, M-FCSBL and ASDSPL algorithms converge quickly, needing only about 20 iterations to reach the steady state, while the M-SBL and M-FOCUSS algorithms converge more slowly, needing about 150 iterations.
In the next experiment, the average running time is used to assess the computational efficiency of the different algorithms. The results are presented in Table 3. Note that all simulations are implemented in MATLAB 2021b on a computer equipped with two Intel Xeon E5-2620 CPUs @ 2.10 GHz. Table 3 shows that the average running time of the ASDSPL algorithm is significantly less than that of the other algorithms. This computational efficiency comes from its low per-iteration complexity and fast convergence.
Finally, the average SINR loss, defined as the mean of $L_{SINR}$ over $f_d \in (-0.5, -0.1) \cup (0.1, 0.5)$, is plotted against the number of snapshots in Figure 7. It can be observed that the M-IAA, M-SBL, M-FCSBL and ASDSPL algorithms achieve the best performance with only three snapshots. The experimental results clearly demonstrate the excellent performance of the proposed algorithm when the number of i.i.d. training samples is small.

5.2. Performance Analysis in the Non-Ideal Case

In this subsection, we consider the non-ideal case with gain-phase (GP) errors. A GP error is caused by inconsistent amplitude and phase characteristics in the RF amplifier parts of the array channels, and it manifests itself as amplitude and phase inconsistencies among the individual channels. It can be described by introducing an error matrix $\mathbf{T}$ into the steering vector model. Then, the signal model with GP errors can be expressed as:

$$\mathbf{X}_{GP} = \mathbf{T}\mathbf{D}\mathbf{A} + \mathbf{X}_n$$

where the error matrix $\mathbf{T}$ is:

$$\mathbf{T} = \mathbf{I}_M \otimes \mathrm{diag}\big([\rho_1\exp(j\xi_1), \ldots, \rho_N\exp(j\xi_N)]^T\big)$$

where $\rho_i$ and $\xi_i$, $i = 1, \ldots, N$, represent the amplitude error and phase error of the $i$-th element, respectively.
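The error matrix of Equation (56) is a Kronecker product of $\mathbf{I}_M$ with a diagonal gain-phase matrix; a sketch (the error-drawing helper and its default standard deviations mirror those quoted below, but the function names are our own):

```python
import numpy as np

def gp_error_matrix(M, rho, xi):
    # T = I_M kron diag([rho_1 e^{j xi_1}, ..., rho_N e^{j xi_N}]), Equation (56)
    return np.kron(np.eye(M), np.diag(rho * np.exp(1j * xi)))

def draw_gp_errors(N, gain_std=0.03, phase_std_deg=2.0, rng=None):
    # random per-element gain and phase errors (Gaussian, our modeling assumption)
    rng = np.random.default_rng() if rng is None else rng
    rho = 1.0 + gain_std * rng.standard_normal(N)
    xi = np.deg2rad(phase_std_deg) * rng.standard_normal(N)
    return rho, xi
```

Because the same spatial error block repeats for every pulse, $\mathbf{T}$ is an $MN \times MN$ diagonal matrix with $M$ identical $N \times N$ blocks.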
The standard deviations of the gain and phase errors are 0.03 and 2°, respectively. Figure 8 shows the Capon spectra recovered by the different SR-STAP algorithms in the presence of GP errors. We can see that the GP errors broaden the Capon spectrum. Meanwhile, the M-SBL and ASDSPL algorithms achieve a better recovery of the Capon spectrum, in terms of both location and power. However, there is some performance loss compared with the error-free case, caused by the mismatch between the data and the dictionary.
Next, we illustrate the influence of the GP errors through the eigen-spectra of the CNCMs obtained by the different STAP methods. As shown in Figure 9, the eigen-spectra obtained by the M-SBL and ASDSPL algorithms are close to the optimal eigen-spectrum. The IF curves are shown in Figure 10; the performance of the M-IAA and M-FCSBL algorithms degrades because they use the standard Capon beamformer in each iteration, and the standard Capon beamformer performs poorly in the presence of GP errors due to steering-vector mismatch. Moreover, compared with the ideal IF curve, the M-SBL and ASDSPL algorithms exhibit a slight performance loss: the main-lobe notch of their IF curves widens slightly, and the side-lobe level decreases by 3 dB.
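To see how such eigen-spectra separate clutter from noise, here is a toy NumPy sketch; the covariance, rank, and power levels are illustrative assumptions, not the paper's estimated CNCM:

```python
import numpy as np

rng = np.random.default_rng(1)
NM, clutter_rank = 64, 12  # system DOF and an assumed clutter rank (toy values)

# Toy CNCM: a few strong clutter eigen-directions plus a unit noise floor.
V = np.linalg.qr(rng.standard_normal((NM, clutter_rank)))[0]
powers = 10.0 ** rng.uniform(3, 5, clutter_rank)   # 30-50 dB clutter components
R = V @ np.diag(powers) @ V.T + np.eye(NM)

# Eigen-spectrum sorted in descending order (as plotted in Figures 4 and 9).
eigs = np.sort(np.linalg.eigvalsh(R))[::-1]
spectrum_db = 10 * np.log10(eigs)

# Estimate the clutter rank as the number of eigenvalues well above the noise floor.
est_rank = int(np.sum(eigs > 10.0))
```

A well-estimated CNCM shows the same sharp knee between the large clutter eigenvalues and the flat noise floor as the optimal eigen-spectrum; GP errors tend to smear this transition.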

6. Conclusions

In this work, an adaptive support-driven sparse recovery STAP method with a subspace penalty was proposed. First, we developed a new optimization objective function based on a subspace penalty. The objective function combines the information of SBL and the subspace, and achieves a strong recovery performance. Second, the alternating minimization algorithm was used to obtain the closed-form solution of each step of the optimization. Since the objective function is convex in each individual variable, the convergence of the proposed algorithm is guaranteed. Finally, an adaptive support-set updating method was used to improve the computational efficiency. The simulation results show that the proposed algorithm offers an excellent clutter suppression performance and high computational efficiency.

Author Contributions

Conceptualization, D.W. and T.W.; investigation, W.C.; methodology, D.W. and W.C.; project administration, T.W.; software, D.W.; supervision, C.L.; visualization, D.W.; writing—original draft, D.W.; writing—review & editing, C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Key R&D Program of China, grant number 2021YFA1000400.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ward, J. Space-Time Adaptive Processing for Airborne Radar; Technical Report 1015; MIT Lincoln Laboratory: Lexington, MA, USA, 1994.
  2. Brennan, L.E.; Mallett, J.D.; Reed, I.S. Theory of adaptive radar. IEEE Trans. Aerosp. Electron. Syst. 1973, 9, 237–251.
  3. Klemm, R. Adaptive airborne MTI: An auxiliary channel approach. In IEE Proceedings F (Communications, Radar and Signal Processing); IET Digital Library: London, UK, 1987; Volume 134, pp. 269–276.
  4. DiPietro, R.C. Extended factored space-time processing for airborne radar systems. In Conference Record of the Twenty-Sixth Asilomar Conference on Signals, Systems & Computers; IEEE Computer Society: Pacific Grove, CA, USA, 1992; pp. 425–430.
  5. Goldstein, J.S.; Reed, I.S. Reduced-rank adaptive filtering. IEEE Trans. Signal Process. 1997, 45, 492–496.
  6. Goldstein, J.S.; Reed, I.S. Subspace selection for partially adaptive sensor array processing. IEEE Trans. Aerosp. Electron. Syst. 1997, 33, 539–544.
  7. Goldstein, J.S.; Reed, I.S.; Scharf, L.L. A multistage representation of the Wiener filter based on orthogonal projections. IEEE Trans. Inf. Theory 1998, 44, 2943–2959.
  8. Cristallini, D.; Burger, W. A robust direct data domain approach for STAP. IEEE Trans. Signal Process. 2012, 60, 1283–1294.
  9. Melvin, W.L.; Showman, G.A. An approach to knowledge-aided covariance estimation. IEEE Trans. Aerosp. Electron. Syst. 2006, 42, 1021–1042.
  10. Wicks, M.C.; Rangaswamy, M.; Adve, R.; Hale, T.B. Space–time adaptive processing: A knowledge-based perspective for airborne radar. IEEE Signal Process. Mag. 2006, 23, 51–65.
  11. Yang, Z.C.; Li, X.; Wang, H.Q.; Jiang, W.D. On clutter sparsity analysis in space-time adaptive processing airborne radar. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1214–1218.
  12. Sun, K.; Zhang, H.; Li, G.; Meng, H.; Wang, X. A novel STAP algorithm using sparse recovery technique. In Proceedings of the 2009 IEEE International Geoscience and Remote Sensing Symposium, Cape Town, South Africa, 12–17 July 2009.
  13. Sun, K.; Meng, H.; Wang, Y.; Wang, X. Direct data domain STAP using sparse representation of clutter spectrum. Signal Process. 2011, 91, 2222–2236.
  14. Tropp, J.A. Algorithms for simultaneous sparse approximation, Part II: Convex relaxation. Signal Process. 2006, 86, 589–602.
  15. Cotter, S.F.; Rao, B.D.; Engan, K.; Kreutz-Delgado, K. Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Trans. Signal Process. 2005, 53, 2477–2488.
  16. Ma, Z.; Liu, Y.; Meng, H.; Wang, X. Jointly sparse recovery of multiple snapshots in STAP. In Proceedings of the 2013 IEEE Radar Conference (RadarCon13), Ottawa, ON, Canada, 29 April–3 May 2013; pp. 1–4.
  17. Yang, Z.; Li, X.; Wang, H.; Jiang, W. Adaptive clutter suppression based on iterative adaptive approach for airborne radar. Signal Process. 2013, 93, 3567–3577.
  18. Lee, K.; Bresler, Y.; Junge, M. Subspace methods for joint sparse recovery. IEEE Trans. Inf. Theory 2012, 58, 3613–3641.
  19. Wang, Z.; Wang, Y.; Duan, K.; Xie, W. Subspace-augmented clutter suppression technique for STAP radar. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1–5.
  20. Tipping, M.E. Sparse Bayesian learning and the relevance vector machine. J. Mach. Learn. Res. 2001, 1, 211–244.
  21. Wipf, D.P.; Rao, B.D. An empirical Bayesian strategy for solving the simultaneous sparse approximation problem. IEEE Trans. Signal Process. 2007, 55, 3704–3716.
  22. Duan, K.Q.; Wang, Z.T.; Xie, W.C.; Chen, H.; Wang, Y.L. Sparsity-based STAP algorithm with multiple measurement vectors via sparse Bayesian learning strategy for airborne radar. IET Signal Process. 2017, 11, 544–553.
  23. Wang, Z.T.; Xie, W.C.; Duan, K.Q.; Wang, Y.L. Clutter suppression algorithm based on fast converging sparse Bayesian learning for airborne radar. Signal Process. 2017, 130, 159–168.
  24. Wipf, D.P.; Rao, B.D.; Nagarajan, S. Latent variable Bayesian models for promoting sparsity. IEEE Trans. Inf. Theory 2011, 57, 6236–6255.
  25. Ye, J.C.; Kim, J.M.; Bresler, Y. Improving M-SBL for joint sparse recovery using a subspace penalty. IEEE Trans. Signal Process. 2015, 63, 6595–6605.
  26. Li, J.; Zhou, W.; Cheng, C. Adaptive support driven Bayesian reweighted algorithm for sparse signal recovery. Signal Image Video Process. 2020, 15, 1295–1302.
  27. Chong, E.K.P.; Zak, S.H. An Introduction to Optimization; Wiley-Interscience: New York, NY, USA, 1996.
  28. Mohan, K.; Fazel, M. Iterative reweighted least squares for matrix rank minimization. In Proceedings of the 48th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 29 September–1 October 2010; pp. 653–661.
  29. Fazel, M.; Hindi, H.; Boyd, S.P. Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices. In Proceedings of the 2003 American Control Conference, Denver, CO, USA, 4–6 June 2003.
  30. Wu, X.; Zhu, W.P.; Yan, J. A high-resolution DOA estimation method with a family of nonconvex penalties. IEEE Trans. Veh. Technol. 2018, 67, 4925–4938.
  31. Nie, F.; Huang, H.; Ding, C. Low-rank matrix recovery via efficient Schatten p-norm minimization. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence; AAAI Press: Menlo Park, CA, USA, 2012.
  32. Xie, Y.; Gu, S.; Liu, Y.; Zuo, W.; Zhang, W.; Zhang, L. Weighted Schatten p-norm minimization for image denoising and background subtraction. IEEE Trans. Image Process. 2016, 25, 4842–4857.
  33. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004.
Figure 1. Geometric configuration of an airborne radar.
Figure 2. Cardinality of support set against iteration sequence number t.
Figure 3. Clutter Capon spectrum in the absence of nonideal factors. (a) Ideal CNCM; (b) M-OMP; (c) M-FOCUSS; (d) M-IAA; (e) M-SBL; (f) M-FCSBL; (g) ASDSPL.
Figure 4. Eigen-spectra of the optimum CNCM and the estimated CNCM in the ideal case.
Figure 5. IF curves in the ideal case.
Figure 6. Value of cost function against the number of iterations.
Figure 7. Average SINR loss against the number of snapshots in the ideal case.
Figure 8. Clutter Capon spectrum in the presence of GP error. (a) Ideal CNCM; (b) M-OMP; (c) M-FOCUSS; (d) M-IAA; (e) M-SBL; (f) M-FCSBL; (g) ASDSPL.
Figure 9. Eigen-spectra of the optimum CNCM and the estimated CNCM in the presence of GP error.
Figure 10. IF curves in the GP error case.
Table 1. Computational complexity comparison.

Algorithm | Computational Load for a Single Iteration
M-OMP | $O(NMKL + r_s^3 + NMr_s^2 + 2NMr_s)$
M-FOCUSS | $O(NMKL + (NM)^3 + 2(NM)^2 N_s N_d)$
M-IAA | $O(NMKL + (NM)^3 + 2(NM)^2 N_s N_d)$
M-SBL | $O((NM)^3 + 2(NM)^2 N_s N_d + 3K^2 NM + NM N_s N_d L)$
M-FCSBL | $O((NM)^3 + 5K(NM)^2 + 3K + (2KL + 4K + L)NM + L)$
ASDSPL | $O(r^3 + 2(NM)^2 K_{ASD} + 3K_{ASD}^2 NM + NM K_{ASD} L)$
Table 2. List of simulation parameters.

Symbol | Parameter | Value
$\lambda$ | Wavelength | 0.3 m
$d$ | Element spacing | 0.15 m
$\vartheta$ | Platform velocity | 150 m/s
$H$ | Height of platform | 9000 m
$M$ | Number of pulses | 8
$N$ | Number of array elements | 8
$f_r$ | PRF | 2000 Hz
$f_s$ | Sampling frequency | 2.5 MHz
$\theta$ | Azimuth angle | 90°
CNR | Clutter-to-noise ratio | 50 dB
Table 3. Average running time comparison.

Algorithm | Running Time
M-OMP | 0.025 s
M-FOCUSS | 11.294 s
M-IAA | 4.368 s
M-SBL | 16.511 s
M-FCSBL | 4.213 s
ASDSPL | 0.119 s