Article

An Improved Iterative Reweighted STAP Algorithm for Airborne Radar

National Laboratory of Radar Signal Processing, Xidian University, Xi’an 710071, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(1), 130; https://doi.org/10.3390/rs15010130
Submission received: 23 November 2022 / Revised: 23 December 2022 / Accepted: 24 December 2022 / Published: 26 December 2022

Abstract

In recent years, the sparse recovery-based space-time adaptive processing (SR-STAP) technique has exhibited excellent performance with insufficient samples. Among SR-STAP methods, sparse Bayesian learning algorithms have received considerable attention for their remarkable and reliable performance, but their implementation in large-scale radar systems is hindered by an overwhelming computational load and slow convergence. This paper addresses these drawbacks by proposing an improved iterative reweighted sparse Bayesian learning algorithm based on expansion-compression variance components (ExCoV-IIR-MSBL). First, a modified Bayesian probabilistic model for SR-STAP is introduced. Exploiting the intrinsic sparsity prior of the clutter, we divide the space-time coefficients into two parts: a significant part with nontrivial coefficients and an irrelevant part with small or zero coefficients. Only the coefficients in the significant part are assigned independent hyperparameters, while the remaining coefficients share a common hyperparameter. The generalized maximum likelihood (GML) criterion is then adopted to classify the coefficients, ensuring both accuracy and efficiency. Hence, the parameter space in Bayesian inference is significantly reduced, and the computational efficiency is considerably improved. Both theoretical analysis and numerical experiments validate that the proposed algorithm achieves superior performance with considerably improved computational efficiency in sample-shortage scenarios.

Graphical Abstract

1. Introduction

Moving target detection is a fundamental function of radar systems for military surveillance and reconnaissance. Space-time adaptive processing (STAP) has become an effective and mature clutter suppression technique for airborne early warning phased array radar systems [1,2,3]. The optimal STAP filter is constructed using the ideal clutter plus noise covariance matrix (CNCM). In practice, the ideal CNCM is unknown a priori and is typically estimated using independent and identically distributed (i.i.d.) samples around the cell under test (CUT). According to the well-known Reed–Mallett–Brennan criterion [4], the number of i.i.d. samples must be at least twice the system degrees of freedom (DOF) to ensure an average performance loss within 3 dB. Unfortunately, sufficient i.i.d. samples are often challenging to obtain due to varied terrain, artificial structures and array configurations.
Over the past few decades, many STAP methods have been developed to improve clutter suppression performance in heterogeneous environments, including classical data-dependent reduced-rank (RR) methods [5,6,7,8] and data-independent reduced-dimension (RD) methods [9,10,11,12]. Although the required number of training samples is reduced to twice the reduced dimension, the requirement is still difficult to meet under severe non-stationary clutter environments for modern large-scale radar systems.
Over the past fifteen years, many sparse recovery-based STAP (SR-STAP) algorithms have been proposed by exploiting the underlying connection between the compressed sensing technique and the intrinsic sparsity prior of the clutter spectrum [13,14,15,16,17,18,19,20,21]. With the help of sparse recovery theory, these SR-STAP algorithms have shown excellent clutter suppression performance with limited samples. Depending on the sparse recovery algorithm employed, SR-STAP can be broadly classified into several categories, such as convex relaxation [13,14,15], greedy algorithms [16], Bayesian inference [17,18,19], and other methods [20,21].
Convex relaxation substitutes the $\ell_0$ norm with the $\ell_1$ norm as the sparse penalty; it has been shown that the resulting $\ell_1$-norm optimization problem stably yields sparse solutions under certain conditions [22]. This approach has been extensively applied in the least absolute shrinkage and selection operator (LASSO) and basis pursuit (BP) based STAP methods [23]. Most convex relaxation methods require careful tuning of the regularization parameter, and an inappropriate choice will jeopardize clutter suppression as well as slow-moving target detection [14,15]. However, choosing an appropriate regularization parameter is quite challenging in practice.
In recent years, sparse Bayesian learning (SBL) has attracted considerable interest due to its preferable properties, such as automatic self-regulation [18] and flexibility in exploiting the potential signal structure [24]. SBL was first proposed by Tipping in 2001 [25] and introduced to the field of STAP by Duan in 2017, termed M-SBL-STAP [17]. Numerous empirical results indicate that SBL-based SR-STAP provides satisfactory performance and is quite robust to noise and highly coherent dictionaries [19]. However, M-SBL-STAP faces an overwhelming computational burden and large memory requirements, hindering its implementation in large-scale radar systems. Many efficient SBL-based STAP algorithms have been developed to address this issue. In [18], Wang proposed a fast-converging SBL algorithm by incorporating an approximation term, but the global convergence of the algorithm is not guaranteed. An iterative reweighted M-SBL (M-SBL-IR$_{2,1}$) STAP algorithm was proposed by Liu [19], which exhibits better reconstruction accuracy and a favourable convergence speed.
Our experience with numerous simulation experiments demonstrates that, owing to the sparse nature of the clutter, the majority of the space-time coefficients to be recovered are zero or close to zero, and only a few have nontrivial values. Therefore, in this paper, we are inspired to propose an improved Bayesian probabilistic model for SR-STAP that exploits this sparsity feature of the clutter. Instead of assigning a separate hyperparameter to each space-time coefficient, as in the conventional Bayesian model, only the significant coefficients are assigned independent hyperparameters, while the remaining irrelevant coefficients share a common hyperparameter. As a result, the parameter space to be updated is dramatically reduced, and the computational efficiency is naturally improved. In [26], a heuristic expansion-compression variance-component based method (ExCoV) was proposed, which guides the classification of these hyperparameters in our algorithm.
Specifically, the main contributions of this paper are summarized as follows:
  • By exploiting the inherent sparsity nature of the clutter, the space-time coefficients are divided into two disjoint groups, i.e., the significant and irrelevant groups, thus reducing redundancy, scaling down the parameter space and yielding an improved Bayesian probabilistic model for SR-STAP with reduced computational complexity and reduced memory requirements.
  • The space-time coefficients are partitioned into the significant and irrelevant groups according to the generalized maximum likelihood (GML) criterion, which preserves both accuracy and efficiency, unlike the conventional SBL cost function.
  • We extend and modify the real-valued ExCoV method to complex-valued STAP applications to approximately maximize the GML objective function. With the ExCoV scheme, it is unnecessary to specify a convergence threshold or a maximum number of iterations.
  • Extensive experiments and detailed comparative analyses are presented, covering clutter suppression performance, target detection performance and computational cost.
Notations used in this paper are as follows: vectors, matrices and scalars are denoted by bold lowercase, bold uppercase and italic letters, respectively. $(\cdot)^*$, $(\cdot)^T$ and $(\cdot)^H$ stand for conjugate, transpose and conjugate transpose. $\otimes$ and $\odot$ are the Kronecker and Hadamard (elementwise) products. $\mathrm{trace}(\cdot)$ is the trace operator. $\mathbb{C}$ represents the set of complex values. $\|\cdot\|_F$ and $\|\cdot\|_{2,0}$ are respectively defined as the Frobenius norm and the $\ell_{2,0}$ mixed norm, which is the number of non-zero elements of the vector formed by the $\ell_2$ norm of each row. $E[\cdot]$ denotes the expectation operator.
The remainder of the paper is organized as follows. In Section 2, the airborne radar signal model is established. The formulation of the SR-STAP model is presented in Section 3. The proposed algorithm is introduced in Section 4. The computational complexity analysis is presented in Section 5. Experiments and analyses are carried out in Section 6. Finally, Section 7 concludes the paper.

2. Signal Model

In this research, we consider an airborne pulsed-Doppler early warning radar system with a side-looking uniform linear array (ULA), as depicted in Figure 1, which has $N$ omnidirectional elements with half-wavelength interelement spacing $d = \lambda/2$ and transmits $M$ coherent pulses over a coherent processing interval (CPI) at a fixed pulse repetition frequency (PRF) $f_r$. The aircraft platform cruises with a constant velocity $v_p$.
Based on the well-known Ward clutter model [27], the ground can be divided into different range rings, and the radar returns are composed of numerous clutter patches that are evenly distributed in azimuth angle $\theta$ and mutually uncorrelated. Considering the impact of range ambiguities, a general space-time snapshot $\mathbf{x} \in \mathbb{C}^{NM \times 1}$ from the CUT can be formulated as
$$\mathbf{x} = \sum_{r=1}^{N_r} \sum_{i=1}^{N_c} \alpha_{r,i}\, \mathbf{s}(f_{d_{r,i}}, f_{s_{r,i}}) + \mathbf{n} = \sum_{r=1}^{N_r} \sum_{i=1}^{N_c} \alpha_{r,i} \left( \mathbf{s}_t(f_{d_{r,i}}) \otimes \mathbf{s}_s(f_{s_{r,i}}) \right) + \mathbf{n} \qquad (1)$$
where $\alpha_{r,i}$ and $\mathbf{s}(f_{d_{r,i}}, f_{s_{r,i}})$ denote the complex amplitude and space-time steering vector of the corresponding clutter patch; $N_c$ is the number of independent clutter patches in an iso-range ring; $N_r$ represents the number of range ambiguities; the vector $\mathbf{n}$ is the thermal noise, modelled as a zero-mean complex Gaussian random process; $\mathbf{s}(f_{d_{r,i}}, f_{s_{r,i}})$, $\mathbf{s}_t(f_{d_{r,i}}) = [1, e^{j2\pi f_{d_{r,i}}}, \ldots, e^{j2\pi (M-1) f_{d_{r,i}}}]^T$ and $\mathbf{s}_s(f_{s_{r,i}}) = [1, e^{j2\pi f_{s_{r,i}}}, \ldots, e^{j2\pi (N-1) f_{s_{r,i}}}]^T$ stand for the space-time, temporal and spatial steering vectors, respectively. According to the geometric configuration of the radar platform shown in Figure 1, the normalized spatial frequency $f_{s_{r,i}}$ and normalized Doppler frequency $f_{d_{r,i}}$ are defined as follows:
$$f_{s_{r,i}} = \frac{d}{\lambda} \cos\varphi_r \cos\theta_i \qquad (2)$$
$$f_{d_{r,i}} = \frac{2 v_p}{\lambda f_r} \cos\varphi_r \cos\theta_i \qquad (3)$$
where $\varphi_r$ and $\theta_i$ are the elevation and azimuth angles of the $(r,i)$th clutter patch, respectively.
Since the clutter patches are mutually independent, the ideal clutter plus noise covariance matrix (CNCM) of the CUT can be expressed as
$$\mathbf{R} = \sum_{r=1}^{N_r} \sum_{i=1}^{N_c} E\left\{ |\alpha_{r,i}|^2 \right\} \mathbf{s}(f_{d_{r,i}}, f_{s_{r,i}})\, \mathbf{s}^H(f_{d_{r,i}}, f_{s_{r,i}}) + \sigma^2 \mathbf{I} \qquad (4)$$
where $\sigma^2$ denotes the noise power.
Under the linearly constrained minimum variance (LCMV) criterion, the optimum STAP weight vector, which maximizes the output signal-to-interference-plus-noise ratio (SINR), is given by
$$\mathbf{w} = \frac{\mathbf{R}^{-1} \mathbf{s}_{\mathrm{target}}}{\mathbf{s}_{\mathrm{target}}^H \mathbf{R}^{-1} \mathbf{s}_{\mathrm{target}}} \qquad (5)$$
where $\mathbf{s}_{\mathrm{target}}$ is the space-time steering vector of the desired target.
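To make the model concrete, the following Python sketch builds the steering vectors (2)–(3), an ideal single-ring CNCM of the form (4) and the optimum weight (5). The parameter values mirror Table 2, but the patch count, the way the CNR is spread over patches and the zero-elevation assumption are illustrative choices of ours, not the paper's exact simulation setup.

```python
import numpy as np

N, M = 8, 8                                  # elements, pulses (as in Table 2)
lam, fr, vp = 0.3, 2000.0, 150.0             # wavelength (m), PRF (Hz), velocity (m/s)
d = lam / 2                                  # half-wavelength element spacing

def spatial_steering(fs, N):
    """Spatial steering vector s_s(f_s) of length N."""
    return np.exp(1j * 2 * np.pi * fs * np.arange(N))

def temporal_steering(fd, M):
    """Temporal steering vector s_t(f_d) of length M."""
    return np.exp(1j * 2 * np.pi * fd * np.arange(M))

def space_time_steering(fd, fs, N=N, M=M):
    """Space-time steering vector s(f_d, f_s) = s_t kron s_s, as in Eq. (1)."""
    return np.kron(temporal_steering(fd, M), spatial_steering(fs, N))

# Ideal CNCM for one unambiguous range ring (N_r = 1, elevation ~ 0), Eq. (4)
Nc, sigma2, cnr_db = 181, 1.0, 50.0
theta = np.linspace(0.0, np.pi, Nc)                   # clutter patch azimuths
fs = (d / lam) * np.cos(theta)                        # Eq. (2)
fd = (2 * vp / (lam * fr)) * np.cos(theta)            # Eq. (3)
patch_power = sigma2 * 10 ** (cnr_db / 10) / Nc       # CNR spread over the patches
R = sigma2 * np.eye(N * M, dtype=complex)
for fdi, fsi in zip(fd, fs):
    s = space_time_steering(fdi, fsi)
    R += patch_power * np.outer(s, s.conj())

# Optimum (LCMV) weight for a presumed target, Eq. (5)
s_t = space_time_steering(0.2, 0.0)
Rinv_s = np.linalg.solve(R, s_t)
w = Rinv_s / (s_t.conj() @ Rinv_s)
```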

3. Sparse Recovery Based STAP Model Specification

For the grid-based SR-STAP algorithms [23], the whole continuous spatial-temporal plane is uniformly discretized into $K = N_s M_d$ grid points, where $N_s = \rho_s N$ ($\rho_s > 1$) is the number of normalized spatial frequency bins in the spatial domain and $M_d = \rho_d M$ ($\rho_d > 1$) is the number of normalized Doppler frequency bins in the temporal domain; $\rho_s$ and $\rho_d$ are the resolution scales. Then $\Phi = [\mathbf{s}(f_{d,1}, f_{s,1}), \ldots, \mathbf{s}(f_{d,M_d}, f_{s,N_s})]$ is defined as the $NM \times K$ overcomplete dictionary matrix consisting of the $K$ grid points. The SR-STAP signal model for the multiple measurement vectors (MMV) case $\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_L] \in \mathbb{C}^{NM \times L}$ can then be reformulated as
$$\mathbf{X} = \Phi \mathbf{A} + \mathbf{N} \qquad (6)$$
where $L$ is the number of i.i.d. space-time snapshots; $\mathbf{A} = [\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_L] \in \mathbb{C}^{K \times L}$ denotes the space-time coefficient matrix, whose non-zero rows indicate the potential presence of clutter components and whose columns share the same sparsity support; $\mathbf{N} = [\mathbf{n}_1, \mathbf{n}_2, \ldots, \mathbf{n}_L] \in \mathbb{C}^{NM \times L}$ is the zero-mean Gaussian white noise matrix.
Since $K \gg NM$, the problem of estimating $\mathbf{A}$ is fundamentally underdetermined. The canonical cost function of the SR-STAP problem is formulated as
$$\min_{\mathbf{A}}\ \|\mathbf{X} - \Phi \mathbf{A}\|_F^2 + \eta \|\mathbf{A}\|_{2,0} \qquad (7)$$
where $\eta$ is a nonnegative regularization parameter controlling the trade-off between the sparse penalty and the data fidelity; the choice of $\eta$ is thus critical to the recovery performance. Unfortunately, finding the $\ell_{2,0}$-norm optimal representation requires a combinatorial search (the problem is NP-hard [28]) and is therefore intractable in general.
Many tractable alternative approaches have recently been developed to find sparse solutions efficiently [13,14,15,16,17,18,19,20,21]. Motivated by the predominance of SBL algorithms in SR-STAP, we resort to SBL to accurately recover $\mathbf{A}$ from $\mathbf{X}$ in this research; a toy construction of the dictionary $\Phi$ is sketched below.
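As a small illustration of the discretization above, the following sketch builds the overcomplete dictionary $\Phi$ on a uniform grid with the paper's resolution scales $\rho_s = \rho_d = 4$. It reuses `space_time_steering` from the previous sketch; the symmetric grid limits are our assumption.

```python
rho_s, rho_d = 4, 4
Ns, Md = rho_s * N, rho_d * M                     # K = Ns * Md grid points
fs_grid = np.linspace(-0.5, 0.5, Ns, endpoint=False)
fd_grid = np.linspace(-0.5, 0.5, Md, endpoint=False)
# Each dictionary column is the space-time steering vector of one grid point
Phi = np.stack([space_time_steering(fdk, fsk)
                for fdk in fd_grid for fsk in fs_grid], axis=1)
print(Phi.shape)                                  # (N*M, K) with K >> N*M
```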

4. Proposed Algorithm

Following the conventional sparse Bayesian learning framework, all the unknowns are treated as stochastic variables with assigned independent probability distributions. First, the noise matrix is modelled as white complex Gaussian noise with unknown power $\sigma^2$; thus, the likelihood function of the observed measurements $\mathbf{X}$ for the MMV case can be expressed as
$$p(\mathbf{X} \mid \mathbf{A}, \sigma^2) = \prod_{l=1}^{L} \mathcal{CN}\left( \mathbf{X}_{\cdot l} \mid \Phi \mathbf{A}_{\cdot l}, \sigma^2 \mathbf{I} \right) = \frac{1}{(\pi \sigma^2)^{NML}} \exp\left[ -\sigma^{-2} \|\mathbf{X} - \Phi \mathbf{A}\|_F^2 \right] \qquad (8)$$
Since the measurements are mutually independent and identically distributed, each column $\mathbf{A}_{\cdot l}$ of the space-time coefficient matrix is assigned the same zero-mean complex Gaussian prior, parameterized by $\boldsymbol{\gamma}$, which governs the prior variance of each unknown space-time coefficient:
$$p(\mathbf{A} \mid \boldsymbol{\gamma}) = \prod_{l=1}^{L} \mathcal{CN}(\mathbf{A}_{\cdot l} \mid \mathbf{0}, \Gamma) = \pi^{-KL} |\Gamma|^{-L} \exp\left( -\sum_{l=1}^{L} \mathbf{A}_{\cdot l}^H \Gamma^{-1} \mathbf{A}_{\cdot l} \right) \qquad (9)$$
where $\mathbf{0} \in \mathbb{C}^{K \times 1}$ is a zero vector, $\boldsymbol{\gamma} = [\gamma_1, \gamma_2, \ldots, \gamma_K]$ collects the unknown variance hyperparameters corresponding to the rows of $\mathbf{A}$, and the covariance matrix $\Gamma$ is diagonal with $\boldsymbol{\gamma}$ on its diagonal, i.e., $\Gamma = \mathrm{diag}(\boldsymbol{\gamma})$.
Further, following Bayes' theorem, the posterior probability density $p(\mathbf{A} \mid \mathbf{X}, \boldsymbol{\gamma}, \sigma^2)$ can be calculated as
$$p(\mathbf{A} \mid \mathbf{X}, \boldsymbol{\gamma}, \sigma^2) = \frac{p(\mathbf{X} \mid \mathbf{A}, \sigma^2)\, p(\mathbf{A} \mid \boldsymbol{\gamma})}{p(\mathbf{X} \mid \boldsymbol{\gamma}, \sigma^2)} = \frac{p(\mathbf{X} \mid \mathbf{A}, \sigma^2)\, p(\mathbf{A} \mid \boldsymbol{\gamma})}{\int p(\mathbf{X} \mid \mathbf{A}, \sigma^2)\, p(\mathbf{A} \mid \boldsymbol{\gamma})\, d\mathbf{A}} \qquad (10)$$
The posterior probability $p(\mathbf{A} \mid \mathbf{X}, \boldsymbol{\gamma}, \sigma^2)$ obeys a multivariate complex Gaussian distribution $\mathcal{CN}(\boldsymbol{\mu}, \Sigma)$ with mean and covariance respectively given by
$$\boldsymbol{\mu} = \Gamma \Phi^H \left( \sigma^2 \mathbf{I} + \Phi \Gamma \Phi^H \right)^{-1} \mathbf{X} \qquad (11)$$
$$\Sigma = \Gamma - \Gamma \Phi^H \left( \sigma^2 \mathbf{I} + \Phi \Gamma \Phi^H \right)^{-1} \Phi \Gamma \qquad (12)$$
Thus, with a fixed $\Gamma$, the estimated sparse recovery solution $\tilde{\mathbf{A}}$ of M-SBL is
$$\tilde{\mathbf{A}} = \Gamma \Phi^H \left( \sigma^2 \mathbf{I} + \Phi \Gamma \Phi^H \right)^{-1} \mathbf{X} \qquad (13)$$
Accordingly, for $\gamma_i \to 0$, the corresponding row $\tilde{\mathbf{A}}_{i\cdot}$ of the space-time coefficient matrix will be zero as well. In other words, if $\boldsymbol{\gamma}$ is sparse, the corresponding space-time coefficient estimate $\tilde{\mathbf{A}}$ will also be sparse.
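For reference, a direct (unoptimized) evaluation of the posterior moments (11)–(13) might look as follows. It assumes `Phi`, `X` and positive hyperparameters are given, and makes no claim about the authors' implementation.

```python
def msbl_posterior(Phi, X, gamma, sigma2):
    """Posterior mean (11)/(13) and covariance (12) of the M-SBL model."""
    NM = Phi.shape[0]
    Gamma = np.diag(gamma)
    P = sigma2 * np.eye(NM) + Phi @ Gamma @ Phi.conj().T      # Eq. (15)
    GPh = Gamma @ Phi.conj().T
    mu = GPh @ np.linalg.solve(P, X)                          # Eq. (11)
    Sigma = Gamma - GPh @ np.linalg.solve(P, Phi @ Gamma)     # Eq. (12)
    return mu, Sigma
```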
The hyperparameter vector can be estimated by performing the type-II maximum likelihood procedure, i.e., evidence maximization, in $\boldsymbol{\gamma}$ space [29]. Mathematically, this amounts to minimizing the negative log marginal likelihood $\mathcal{L}(\boldsymbol{\gamma})$ with respect to $\boldsymbol{\gamma}$:
$$\tilde{\boldsymbol{\gamma}} = \arg\min_{\boldsymbol{\gamma}} \mathcal{L}(\boldsymbol{\gamma}) = \arg\min_{\boldsymbol{\gamma}} \left[ -\ln p(\mathbf{X} \mid \boldsymbol{\gamma}, \sigma^2) \right] = \arg\min_{\boldsymbol{\gamma}} \left[ -\ln \int p(\mathbf{X} \mid \mathbf{A}, \sigma^2)\, p(\mathbf{A} \mid \boldsymbol{\gamma})\, d\mathbf{A} \right] = \arg\min_{\boldsymbol{\gamma}} \left[ L \ln |\mathbf{P}| + \sum_{l=1}^{L} \mathbf{x}_l^H \mathbf{P}^{-1} \mathbf{x}_l \right] \qquad (14)$$
where $\mathbf{P}$, obtained after the unknown coefficients $\mathbf{A}$ have been integrated out, stands for the covariance of the measurements $\mathbf{X}$ given the hyperparameters $\boldsymbol{\gamma}$ and $\sigma^2$:
$$\mathbf{P} \triangleq \sigma^2 \mathbf{I} + \Phi \Gamma \Phi^H \qquad (15)$$
This minimization can be performed by an iterative reweighted SBL-based algorithm [19,30], which will be modified and described in the remainder of this paper.
As we can observe from (11) and (12), the computational bottleneck mainly lies in large-scale matrix multiplication and matrix inversion operations, and the storage requirement is also heavy. Moreover, during each iteration, a hyperparameter space $(\boldsymbol{\gamma}, \sigma^2)$ of dimension $K+1$ needs to be updated. These factors make SBL-based STAP algorithms considerably slower than other types of sparse recovery algorithms, even though they achieve excellent clutter suppression performance with limited training samples.
In fact, owing to the intrinsic sparsity of the clutter spectrum on the spatial-temporal plane, only a few rows of $\mathbf{A}$ have nontrivial magnitudes, while the remaining rows are strictly zero (or close to zero). This implies that most of the hyperparameters in $\boldsymbol{\gamma}$ converge to zero and are therefore redundant. We can thus naturally partition the space-time coefficients into two parts: a significant part and a complementary irrelevant part. Instead of assigning independent hyperparameters to all coefficients, as in the conventional SBL framework, we allocate independent hyperparameters only to the significant coefficients, while the remaining irrelevant coefficients share a single common hyperparameter. Consequently, the dimension of the hyperparameter space is reduced to be proportional to the sparsity level of the clutter, which is much smaller than $K$ [23]. As a result, the computational complexity is greatly reduced and the memory consumption shrinks, particularly for modern large-scale radar systems.
Next, define $\Theta = \{1, 2, \ldots, K\}$ as the complete index set. The index set of significant coefficients is denoted by $\Theta_\alpha$ with size $K_\alpha$, and the complementary index set of irrelevant coefficients is denoted by $\Theta_\beta = \Theta \setminus \Theta_\alpha$ with cardinality $K_\beta = K - K_\alpha$. Accordingly, we also divide the dictionary matrix $\Phi$ and the space-time coefficient matrix $\mathbf{A}$ into submatrices $\Phi_\alpha \in \mathbb{C}^{NM \times K_\alpha}$, $\Phi_\beta \in \mathbb{C}^{NM \times K_\beta}$, $\mathbf{A}_\alpha \in \mathbb{C}^{K_\alpha \times L}$ and $\mathbf{A}_\beta \in \mathbb{C}^{K_\beta \times L}$, respectively. In more detail:
$\Phi_\alpha$ is the submatrix of $\Phi$ corresponding to the significant coefficient index set $\Theta_\alpha$; e.g., if $\Theta_\alpha = \{2, 8, 9\}$, then $\Phi_\alpha = [\mathbf{s}_2, \mathbf{s}_8, \mathbf{s}_9]$, where $\mathbf{s}_i$ is the $i$th column of $\Phi$.
$\mathbf{A}_\alpha$ is the submatrix of $\mathbf{A}$ corresponding to the index set $\Theta_\alpha$; e.g., if $\Theta_\alpha = \{2, 8, 9\}$, then $\mathbf{A}_\alpha = [\mathbf{A}_{2\cdot}; \mathbf{A}_{8\cdot}; \mathbf{A}_{9\cdot}]$, where $\mathbf{A}_{i\cdot}$ is the $i$th row of $\mathbf{A}$.
Then (9) can be reformulated as
$$p(\mathbf{A} \mid \boldsymbol{\gamma}) = p(\mathbf{A}_\alpha \mid \boldsymbol{\gamma}_{\Theta_\alpha}; \Theta_\alpha)\, p(\mathbf{A}_\beta \mid \gamma_{\Theta_\beta}; \Theta_\beta) = \prod_{l=1}^{L} \left( \mathcal{CN}(\mathbf{A}_{\alpha \cdot l} \mid \mathbf{0}, \Gamma_\alpha)\, \mathcal{CN}(\mathbf{A}_{\beta \cdot l} \mid \mathbf{0}, \gamma_{\Theta_\beta} \mathbf{I}) \right) = \pi^{-KL} |\Gamma_\alpha|^{-L} (\gamma_{\Theta_\beta})^{-L K_\beta} \exp\left( -\sum_{l=1}^{L} \left( \mathbf{A}_{\alpha l}^H \Gamma_\alpha^{-1} \mathbf{A}_{\alpha l} + \gamma_{\Theta_\beta}^{-1} \mathbf{A}_{\beta l}^H \mathbf{A}_{\beta l} \right) \right) \qquad (16)$$
where $\Gamma_\alpha$ and $\gamma_{\Theta_\beta} \mathbf{I}$ denote the covariance matrices of the significant part $\mathbf{A}_\alpha$ and the irrelevant part $\mathbf{A}_\beta$, respectively.
If all the coefficients belong to the index set $\Theta_\alpha$ and $K_\alpha = K$, the above probabilistic model degenerates to the original full SBL model in (9). Similarly, according to the index sets $\Theta_\alpha$ and $\Theta_\beta$, (11) and (12) can be respectively partitioned into
$$\boldsymbol{\mu}_\alpha = \Gamma_\alpha \Phi_\alpha^H \mathbf{P}^{-1} \mathbf{X} \qquad (17)$$
$$\boldsymbol{\mu}_\beta = \gamma_{\Theta_\beta} \Phi_\beta^H \mathbf{P}^{-1} \mathbf{X} \qquad (18)$$
$$\Sigma_\alpha = \Gamma_\alpha - \Gamma_\alpha \Phi_\alpha^H \mathbf{P}^{-1} \Phi_\alpha \Gamma_\alpha \qquad (19)$$
$$\Sigma_\beta = \gamma_{\Theta_\beta} \mathbf{I}_{K_\beta} - \gamma_{\Theta_\beta}^2 \Phi_\beta^H \mathbf{P}^{-1} \Phi_\beta \qquad (20)$$
where the covariance matrix $\mathbf{P}$ in (15) can now be rewritten as
$$\mathbf{P} = \sigma^2 \mathbf{I} + \Phi_\alpha \Gamma_\alpha \Phi_\alpha^H + \gamma_{\Theta_\beta} \Phi_\beta \Phi_\beta^H \qquad (21)$$
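Under the partitioned model, the quantities (17), (18) and (21) can be evaluated as in the sketch below; the index sets are plain integer lists and the function name is ours.

```python
def partitioned_posterior(Phi, X, idx_a, idx_b, gamma_a, gamma_b, sigma2):
    """Posterior means (17)-(18) and measurement covariance P (21)."""
    NM = Phi.shape[0]
    Phi_a, Phi_b = Phi[:, idx_a], Phi[:, idx_b]
    P = (sigma2 * np.eye(NM)
         + (Phi_a * gamma_a) @ Phi_a.conj().T           # Phi_a Gamma_a Phi_a^H
         + gamma_b * (Phi_b @ Phi_b.conj().T))          # Eq. (21)
    PinvX = np.linalg.solve(P, X)
    mu_a = gamma_a[:, None] * (Phi_a.conj().T @ PinvX)  # Eq. (17)
    mu_b = gamma_b * (Phi_b.conj().T @ PinvX)           # Eq. (18)
    return mu_a, mu_b, P
```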
For notational simplicity, we define the set of all unknowns:
$$\boldsymbol{\psi} = \left( \boldsymbol{\gamma}_{\Theta_\alpha}, \gamma_{\Theta_\beta}, \sigma^2, \Theta_\alpha \right) \qquad (22)$$
Now, let us consider estimating the most efficient and accurate coefficient index sets $\Theta_\alpha$ and $\Theta_\beta$, i.e., the optimal representation. According to [26], the generalized maximum likelihood (GML) criterion can be used to assign the hyperparameters, and its equivalence with the canonical optimization problem in (7) has been demonstrated. The GML criterion maximizes
$$\mathrm{GML}(\boldsymbol{\psi}) = \ln p(\mathbf{X} \mid \boldsymbol{\psi}) - \ln \left| \mathcal{I}(\boldsymbol{\psi}) \right| \qquad (23)$$
where the first term is the marginal likelihood function from (14) with $\Theta_\alpha$ fixed, enforcing that the estimate fits the measurements; in the second term, $\mathcal{I}(\boldsymbol{\psi})$ is the Fisher information matrix (FIM) for the hyperparameters. Based on the well-known FIM for the Gaussian measurement model [31], we can deduce the block-partitioned FIM for the hyperparameter vector (the detailed derivation is given in Appendix A):
$$\mathcal{I}(\boldsymbol{\psi}) = \begin{bmatrix} \mathcal{I}_{\gamma_{\Theta_\alpha}, \gamma_{\Theta_\alpha}}(\boldsymbol{\psi}) & \mathcal{I}_{\gamma_{\Theta_\alpha}, \gamma_{\Theta_\beta}}(\boldsymbol{\psi}) \\ \mathcal{I}_{\gamma_{\Theta_\alpha}, \gamma_{\Theta_\beta}}^H(\boldsymbol{\psi}) & \mathcal{I}_{\gamma_{\Theta_\beta}, \gamma_{\Theta_\beta}}(\boldsymbol{\psi}) \end{bmatrix} \qquad (24)$$
with each block computed as
$$\mathcal{I}_{\gamma_{\Theta_\alpha}, \gamma_{\Theta_\alpha}}(\boldsymbol{\psi}) = 2L \left( \Phi_\alpha^H \mathbf{P}^{-1} \Phi_\alpha \right) \odot \left( \Phi_\alpha^H \mathbf{P}^{-1} \Phi_\alpha \right)^* \qquad (25)$$
$$\left[ \mathcal{I}_{\gamma_{\Theta_\alpha}, \gamma_{\Theta_\beta}}(\boldsymbol{\psi}) \right]_i = 2L\, \Phi_\alpha^H(:, i)\, \mathbf{P}^{-1} \Phi_\beta \Phi_\beta^H \mathbf{P}^{-1} \Phi_\alpha(:, i) \qquad (26)$$
$$\mathcal{I}_{\gamma_{\Theta_\beta}, \gamma_{\Theta_\beta}}(\boldsymbol{\psi}) = 2L\, \mathrm{trace}\left( \mathbf{P}^{-1} \Phi_\beta \Phi_\beta^H \mathbf{P}^{-1} \Phi_\beta \Phi_\beta^H \right) \qquad (27)$$
Since the hyperparameter space is nested, i.e., any $\Theta_\alpha$ is a subset of $\Theta$, the more parameters in the set $\Theta_\alpha$, the larger the value of $\ln p(\mathbf{X} \mid \boldsymbol{\psi})$, which reduces the model mismatch. Therefore, there is always a tendency to choose the complete set $\Theta_\alpha = \Theta$ [31]. However, the second term increases as the set $\Theta_\alpha$ grows, penalising its growth. The GML criterion is therefore able to maintain a balance between underfitting and overfitting of the model, ensuring both accuracy and efficiency. A sketch of evaluating this objective is given below.
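The following sketch evaluates the GML objective (23) from the marginal likelihood (14) and the FIM blocks (25)–(27). It is a literal, unoptimized transcription under our array conventions (reusing `partitioned_posterior` above), not the authors' code; constants independent of $\boldsymbol{\psi}$ are dropped.

```python
def gml_objective(Phi, X, idx_a, idx_b, gamma_a, gamma_b, sigma2):
    """GML(psi) = ln p(X|psi) - ln|I(psi)|, Eqs. (23)-(27), up to constants."""
    L = X.shape[1]
    _, _, P = partitioned_posterior(Phi, X, idx_a, idx_b, gamma_a, gamma_b, sigma2)
    Pinv = np.linalg.inv(P)
    Phi_a, Phi_b = Phi[:, idx_a], Phi[:, idx_b]
    # marginal log-likelihood term from Eq. (14)
    loglike = -L * np.linalg.slogdet(P)[1] - np.real(np.sum(X.conj() * (Pinv @ X)))
    # FIM blocks, Eqs. (25)-(27)
    Ma = Phi_a.conj().T @ Pinv @ Phi_a
    I_aa = 2 * L * np.abs(Ma) ** 2                          # Eq. (25)
    B = Phi_a.conj().T @ Pinv @ Phi_b
    I_ab = 2 * L * np.sum(np.abs(B) ** 2, axis=1, keepdims=True)   # Eq. (26)
    Mb = Phi_b.conj().T @ Pinv @ Phi_b
    I_bb = 2 * L * np.real(np.trace(Mb @ Mb))               # Eq. (27), cyclic trace
    FIM = np.block([[I_aa, I_ab], [I_ab.T, np.array([[I_bb]])]])
    return loglike - np.linalg.slogdet(FIM)[1]
```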
Nevertheless, directly applying the GML criterion (23) to obtain the optimal parameter index sets still requires an exhaustive search, which is impractical in real applications. We therefore employ an ExCoV-based method to approximately maximize the objective function [32]. The fundamental idea is to interleave expansion/compression steps with parameter update steps. In each expansion or compression procedure, the current estimate of $\Theta_\alpha$ is modified by one element per step to obtain a larger $\mathrm{GML}(\boldsymbol{\psi})$; in the subsequent parameter update step, the hyperparameters $(\boldsymbol{\gamma}_{\Theta_\alpha}, \gamma_{\Theta_\beta}, \sigma^2)$ are updated for the fixed index set $\Theta_\alpha$. We now introduce the three main iteration steps of ExCoV: the expansion step, the compression step and the parameter update step.
A. Expansion step. In this step, we determine the index $q \in \Theta_\beta^{(p)}$ corresponding to the row of $\boldsymbol{\mu}_\beta^{(p)}$ with the largest $\ell_2$ norm
$$q = \arg\max_{q \in \Theta_\beta^{(p)}} \left\| \left( \boldsymbol{\mu}_\beta^{(p)} \right)_{q\cdot} \right\|_2 \qquad (28)$$
where the superscript $(\cdot)^{(p)}$ denotes the iteration number. The index $q$ is then moved from $\Theta_\beta^{(p)}$ to $\Theta_\alpha^{(p)}$, i.e., $\Theta_\alpha^{(p+1)} = \Theta_\alpha^{(p)} \cup \{q\}$ and $\Theta_\beta^{(p+1)} = \Theta_\beta^{(p)} \setminus \{q\}$, yielding $K_\alpha^{(p+1)} = K_\alpha^{(p)} + 1$ and $K_\beta^{(p+1)} = K_\beta^{(p)} - 1$. The new expanded vector can be expressed as $\boldsymbol{\gamma}_{\Theta_\alpha^{(p+1)}}^{(p)} = [\boldsymbol{\gamma}_{\Theta_\alpha^{(p)}}^{(p)}; \gamma_{\Theta_\beta}^{(p)}]$.
B. Compression step. Here, we determine the index $t \in \Theta_\alpha^{(p)}$ corresponding to the smallest element of $\boldsymbol{\gamma}_{\Theta_\alpha^{(p)}}^{(p)}$
$$t = \arg\min_{t \in \Theta_\alpha^{(p)}} \left( \gamma_{\Theta_\alpha^{(p)}, t} \right)^{(p)} \qquad (29)$$
The index $t$ is then moved from $\Theta_\alpha^{(p)}$ to $\Theta_\beta^{(p)}$, i.e., $\Theta_\beta^{(p+1)} = \Theta_\beta^{(p)} \cup \{t\}$ and $\Theta_\alpha^{(p+1)} = \Theta_\alpha^{(p)} \setminus \{t\}$, yielding $K_\alpha^{(p+1)} = K_\alpha^{(p)} - 1$ and $K_\beta^{(p+1)} = K_\beta^{(p)} + 1$. The new compressed vector can be constructed as $\boldsymbol{\gamma}_{\Theta_\alpha^{(p+1)}}^{(p)} = [\gamma_{\Theta_\alpha^{(p)},1}^{(p)}, \ldots, \gamma_{\Theta_\alpha^{(p)},t-1}^{(p)}, \gamma_{\Theta_\alpha^{(p)},t+1}^{(p)}, \ldots, \gamma_{\Theta_\alpha^{(p)},K_\alpha^{(p)}}^{(p)}]$.
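A minimal sketch of the expansion (28) and compression (29) moves follows; the index bookkeeping uses Python lists, with the rows of `mu_b` aligned with `idx_b` and the entries of `gamma_a` aligned with `idx_a` (our conventions, not the authors').

```python
def expand(idx_a, idx_b, gamma_a, gamma_b, mu_b):
    """Move the row of mu_b with the largest l2-norm into the significant set, Eq. (28)."""
    q = int(np.argmax(np.linalg.norm(mu_b, axis=1)))
    idx_a = idx_a + [idx_b[q]]
    idx_b = idx_b[:q] + idx_b[q + 1:]
    gamma_a = np.append(gamma_a, gamma_b)      # new coefficient inherits gamma_beta
    return idx_a, idx_b, gamma_a

def compress(idx_a, idx_b, gamma_a):
    """Move the coefficient with the smallest hyperparameter out, Eq. (29)."""
    t = int(np.argmin(gamma_a))
    idx_b = idx_b + [idx_a[t]]
    idx_a = idx_a[:t] + idx_a[t + 1:]
    gamma_a = np.delete(gamma_a, t)
    return idx_a, idx_b, gamma_a
```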
C. Parameter update step. In [26], the original ExCoV algorithm employs the expectation-maximization (EM) method for Bayesian inference to estimate the hyperparameters, which usually requires many iterations to converge [18]. To overcome this drawback and further improve the running speed, a modified iterative reweighted M-SBL strategy is used to update the hyperparameters.
In this step, we first assume that the index sets $\Theta_\alpha$ and $\Theta_\beta$ are known and fixed. We then derive the parameter update step based on the iterative reweighted $\ell_{2,1}$ M-SBL algorithm from [19], modified according to the Bayesian probabilistic model proposed above (details are given in Appendix B), which remarkably accelerates the convergence speed. The hyperparameters $(\boldsymbol{\gamma}_{\Theta_\alpha^{(p+1)}}^{(p+1)}, \gamma_{\Theta_\beta}^{(p+1)}, (\sigma^2)^{(p+1)})$ are updated by
$$\gamma_{\Theta_\alpha, i}^{(p+1)} = \left( \frac{\frac{1}{L} \sum_{l=1}^{L} \left| \mu_{\alpha, il}^{(p+1)} \right|^2}{\mathbf{s}_i^H \left( \mathbf{P}^{(p+1)} \right)^{-1} \mathbf{s}_i} \right)^{\frac{1}{2}} \qquad (30)$$
$$\gamma_{\Theta_\beta}^{(p+1)} = \left( \frac{\frac{1}{L} \sum_{j=1}^{K_\beta} \sum_{l=1}^{L} \left| \mu_{\beta, jl}^{(p+1)} \right|^2}{K_\beta\, \mathrm{trace}\left( \left( \mathbf{P}^{(p+1)} \right)^{-1} \Phi_\beta \Phi_\beta^H \right)} \right)^{\frac{1}{2}} \qquad (31)$$
$$(\sigma^2)^{(p+1)} = \frac{\|\mathbf{X} - \Phi \boldsymbol{\mu}\|_F^2 + L (\sigma^2)^{(p)} \left( NM - (\sigma^2)^{(p)}\, \mathrm{trace}\left( \left( \mathbf{P}^{(p+1)} \right)^{-1} \right) \right)}{NML} \qquad (32)$$
where $\mathbf{P}^{(p+1)} = (\sigma^2)^{(p)} \mathbf{I} + \Phi_\alpha \Gamma_\alpha^{(p)} \Phi_\alpha^H + \gamma_{\Theta_\beta}^{(p)} \Phi_\beta \Phi_\beta^H$.
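A sketch of the update (30)–(32), written to mirror the equations as printed (including the $K_\beta$ factor in (31)); the array shapes follow the `partitioned_posterior` sketch above, and `sigma2_old` is the previous noise power estimate.

```python
def update_hyperparameters(Phi, X, idx_a, idx_b, mu_a, mu_b, P, sigma2_old):
    """Iterative reweighted updates: gamma_alpha (30), gamma_beta (31), sigma^2 (32)."""
    L = X.shape[1]
    NM, K = Phi.shape
    Pinv = np.linalg.inv(P)
    Phi_a, Phi_b = Phi[:, idx_a], Phi[:, idx_b]
    # Eq. (30): one variance per significant coefficient
    num_a = np.mean(np.abs(mu_a) ** 2, axis=1)
    den_a = np.real(np.sum(Phi_a.conj() * (Pinv @ Phi_a), axis=0))  # s_i^H P^-1 s_i
    gamma_a = np.sqrt(num_a / den_a)
    # Eq. (31): one shared variance for the irrelevant part
    num_b = np.sum(np.abs(mu_b) ** 2) / L
    den_b = len(idx_b) * np.real(np.trace(Pinv @ (Phi_b @ Phi_b.conj().T)))
    gamma_b = np.sqrt(num_b / den_b)
    # Eq. (32): EM-style noise power update
    mu = np.zeros((K, L), dtype=complex)
    mu[idx_a, :], mu[idx_b, :] = mu_a, mu_b
    resid = np.linalg.norm(X - Phi @ mu, 'fro') ** 2
    sigma2 = (resid + L * sigma2_old
              * (NM - sigma2_old * np.real(np.trace(Pinv)))) / (NM * L)
    return gamma_a, gamma_b, sigma2
```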
We term the proposed algorithm the improved iterative reweighted sparse Bayesian learning algorithm based on expansion-compression variance components (ExCoV-IIR-MSBL). Figure 2 illustrates the processing flowchart of the proposed algorithm, and its procedures are summarized as follows.
Step 1 Initialization: Calculate the minimum-norm solution of the space-time coefficients
$$\boldsymbol{\mu}^{(0)} = \Phi^H \left( \Phi \Phi^H \right)^{-1} \mathbf{X} \qquad (33)$$
Then we can utilize the prior knowledge provided by the inertial navigation system and the radar system to initialize $K_\alpha^{(0)}$:
$$K_\alpha^{(0)} = \left\lceil N + \beta (M - 1) \right\rceil \qquad (34)$$
where $\beta = 2 v_p / (f_r d)$ represents the slope of the clutter ridge [33] and $\lceil \cdot \rceil$ denotes the rounding-up operation. Alternatively, one can simply set $K_\alpha^{(0)} = 1$, since the ExCoV method is insensitive to the initial value of $K_\alpha$. The index set $\Theta_\alpha^{(0)}$ is then constructed from the rows of $\boldsymbol{\mu}^{(0)}$ with the first $K_\alpha^{(0)}$ largest $\ell_2$ norms. Subsequently, $K_\beta^{(0)} = K - K_\alpha^{(0)}$ and $\Theta_\beta^{(0)} = \Theta \setminus \Theta_\alpha^{(0)}$.
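With the Table 2 parameters, the Brennan-rule initialization (34) is a one-liner: here $\beta = 2 \cdot 150/(2000 \cdot 0.15) = 1$, so $K_\alpha^{(0)} = 8 + 7 = 15$ (reusing `vp`, `fr`, `d`, `N`, `M` from the earlier sketch).

```python
beta = 2 * vp / (fr * d)                        # slope of the clutter ridge
K_alpha0 = int(np.ceil(N + beta * (M - 1)))     # Eq. (34); equals 15 here
```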
Step 2 Cycle Initialization: Set the initial values for the hyperparameters $(\boldsymbol{\gamma}_{\Theta_\alpha}, \gamma_{\Theta_\beta}, \sigma^2)$:
$$(\sigma^2)^{(0)} = \frac{1}{NML} \|\mathbf{X} - \Phi \mathbf{A}\|_F^2 \qquad (35)$$
$$\gamma_{\Theta_\alpha, i}^{(0)} = \frac{1}{L} \sum_{l=1}^{L} \frac{\left| \mathbf{s}_i^H \mathbf{x}_l \right|^2}{\left| \mathbf{s}_i^H \mathbf{s}_i \right|^2}, \quad \left( \mathbf{s}_i \in \Phi_\alpha \ \mathrm{and}\ i = 1, \ldots, K_\alpha^{(0)} \right) \qquad (36)$$
$$\gamma_{\Theta_\beta}^{(0)} = \min_i \left( \gamma_{\Theta_\alpha, i} \right)^{(0)} \qquad (37)$$
Generally, these parameters can be initialized to arbitrary positive values, but a rough estimate is more beneficial. The initial values of the optimal record are then set to $\hat{\boldsymbol{\psi}} = (\boldsymbol{\gamma}_{\Theta_\alpha}^{(0)}, \gamma_{\Theta_\beta}^{(0)}, (\sigma^2)^{(0)}, \Theta_\alpha^{(0)})$ and $\hat{\boldsymbol{\mu}} = \boldsymbol{\mu}^{(0)}$.
Step 3 Expansion: Apply the expansion step described above, yielding the updated $\Theta_\alpha^{(p+1)}$, $\Theta_\beta^{(p+1)}$ and $\boldsymbol{\gamma}_{\Theta_\alpha^{(p+1)}}^{(p)}$.
Step 4 Parameter Update: With the modified iterative reweighted M-SBL update step described above, the updated parameter set $\boldsymbol{\psi}^{(p+1)} = (\boldsymbol{\gamma}_{\Theta_\alpha^{(p+1)}}^{(p+1)}, \gamma_{\Theta_\beta}^{(p+1)}, (\sigma^2)^{(p+1)}, \Theta_\alpha^{(p+1)})$ and the space-time coefficient estimate $\boldsymbol{\mu}^{(p+1)}$ are obtained.
Step 5 Optimal Estimates Update: During the iterations, we record the optimal parameter set $\hat{\boldsymbol{\psi}}$ that maximizes the GML, together with the corresponding space-time coefficient estimates $\hat{\boldsymbol{\mu}}_\alpha$ and $\hat{\boldsymbol{\mu}}_\beta$, within the latest iteration cycle. After the unknown parameters $\boldsymbol{\psi}^{(p+1)} = (\boldsymbol{\gamma}_{\Theta_\alpha^{(p+1)}}^{(p+1)}, \gamma_{\Theta_\beta}^{(p+1)}, (\sigma^2)^{(p+1)}, \Theta_\alpha^{(p+1)})$ are updated, the new $\mathrm{GML}(\boldsymbol{\psi}^{(p+1)})$ is calculated using (23). Next, we verify whether the following condition holds:
$$\mathrm{GML}(\boldsymbol{\psi}^{(p+1)}) > \mathrm{GML}(\hat{\boldsymbol{\psi}}) \qquad (38)$$
If the inequality holds, we set $\hat{\boldsymbol{\psi}} = \boldsymbol{\psi}^{(p+1)}$, $\hat{\boldsymbol{\mu}}_\alpha = \boldsymbol{\mu}_\alpha^{(p+1)}$ and $\hat{\boldsymbol{\mu}}_\beta = \boldsymbol{\mu}_\beta^{(p+1)}$; otherwise, $\hat{\boldsymbol{\psi}}$, $\hat{\boldsymbol{\mu}}_\alpha$ and $\hat{\boldsymbol{\mu}}_\beta$ remain unchanged.
Step 6 Termination Condition Check: In this step, we verify whether the expansion still needs to continue [26]:
$$\mathrm{GML}(\boldsymbol{\psi}^{(p+1)}) < \min\left\{ \mathrm{GML}(\boldsymbol{\psi}^{(p+1-T)}),\ \frac{1}{T} \sum_{t=0}^{T-1} \mathrm{GML}(\boldsymbol{\psi}^{(p-t)}) \right\} \qquad (39)$$
where $T$ is the length of a sliding average window. This condition determines whether the expansion operation is still needed and prevents premature termination. If the inequality is satisfied, the expansion operation is terminated and Step 7 begins; otherwise, the expansion continues and the algorithm returns to Step 3.
Step 7 Compression: Apply the compression step described above, yielding the updated $\Theta_\alpha^{(p+1)}$, $\Theta_\beta^{(p+1)}$ and $\boldsymbol{\gamma}_{\Theta_\alpha^{(p+1)}}^{(p)}$.
Step 8 Parameter Update: With the modified iterative reweighted M-SBL update step described above, the updated parameter set $\boldsymbol{\psi}^{(p+1)} = (\boldsymbol{\gamma}_{\Theta_\alpha^{(p+1)}}^{(p+1)}, \gamma_{\Theta_\beta}^{(p+1)}, (\sigma^2)^{(p+1)}, \Theta_\alpha^{(p+1)})$ and the space-time coefficient estimate $\boldsymbol{\mu}^{(p+1)}$ are obtained.
Step 9 Optimal Estimates Update: Verify whether condition (38) holds. If it does, set $\hat{\boldsymbol{\psi}} = \boldsymbol{\psi}^{(p+1)}$, $\hat{\boldsymbol{\mu}}_\alpha = \boldsymbol{\mu}_\alpha^{(p+1)}$ and $\hat{\boldsymbol{\mu}}_\beta = \boldsymbol{\mu}_\beta^{(p+1)}$; otherwise, keep them unchanged.
Step 10 Termination Condition Check: In this step, we verify whether the compression still needs to continue by checking the same condition (39) as in Step 6. If the inequality is satisfied, the compression operation is terminated and Step 11 begins; otherwise, the compression continues and the algorithm returns to Step 7.
Step 11 Globally Optimal Estimates Update: We also keep a record of the globally optimal parameter set $\hat{\boldsymbol{\psi}}^\star$ and the corresponding coefficients $\hat{\boldsymbol{\mu}}^\star$ over the entire sequence of iteration cycles, verifying the condition
$$\mathrm{GML}(\hat{\boldsymbol{\psi}}) > \mathrm{GML}(\hat{\boldsymbol{\psi}}^\star) \qquad (40)$$
If the inequality holds, the globally optimal estimates $\hat{\boldsymbol{\psi}}^\star$ and $\hat{\boldsymbol{\mu}}^\star$ are updated with $\hat{\boldsymbol{\psi}}$ and $\hat{\boldsymbol{\mu}}$; otherwise, they remain unchanged.
Step 12 Algorithm Iteration Termination Check: If the globally optimal index set $\hat{\Theta}_\alpha$ is consistent between two consecutive cycles, the proposed algorithm terminates, and the final globally optimal space-time coefficients $\hat{\boldsymbol{\mu}}^\star$ and noise power $(\sigma^2)^\star$ are output.
Otherwise, the globally optimal index set $\hat{\Theta}_\alpha$ is updated and the algorithm continues: we reset the iteration index $p = 0$, set $K_\alpha^{(0)} = \hat{K}_\alpha$, and calculate the initial values for the space-time coefficients
$$\boldsymbol{\mu}^{(0)} = \hat{\boldsymbol{\mu}}^\star + \Phi^H \left( \Phi \Phi^H \right)^{-1} \left( \mathbf{X} - \Phi \hat{\boldsymbol{\mu}}^\star \right) \qquad (41)$$
Step 13 STAP Weight Calculation: Once the estimated sparse recovery solution $\tilde{\mathbf{A}} = \hat{\boldsymbol{\mu}}^\star$ and $\tilde{\sigma}^2 = (\sigma^2)^\star$ are obtained, the CNCM can be reconstructed as
$$\tilde{\mathbf{R}} = \frac{1}{L} \sum_{l=1}^{L} \sum_{i=1}^{K} \left| \tilde{A}_{il} \right|^2 \mathbf{s}_i \mathbf{s}_i^H + \tilde{\sigma}^2 \mathbf{I}_{NM} \qquad (42)$$
Then the STAP weight $\mathbf{w}$ can be obtained using (5).
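In code, Step 13 might look as follows (a sketch under our conventions): average the recovered row powers over the snapshots and rebuild the CNCM (42).

```python
def reconstruct_cncm(Phi, A_hat, sigma2_hat):
    """CNCM reconstruction, Eq. (42): sum_i mean_l |A_il|^2 s_i s_i^H + sigma^2 I."""
    NM = Phi.shape[0]
    power = np.mean(np.abs(A_hat) ** 2, axis=1)   # per-grid-point clutter power
    return (Phi * power) @ Phi.conj().T + sigma2_hat * np.eye(NM)
```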
Step 14 Output: Produce the filtered output of the cell under test, $y = \mathbf{w}^H \mathbf{x}_{\mathrm{CUT}}$.
From the above steps, it is worth noting that the proposed algorithm is fully automatic and requires neither a convergence threshold nor a maximum iteration number.

5. Computational Complexity Analysis

In this section, we analyse the computational complexity of the proposed ExCoV-IIR-MSBL algorithm and compare it with several state-of-the-art SR-STAP algorithms, including M-CVX [34], M-OMP [35], M-FOCUSS [36], M-IAA [21], M-SBL [17], M-FCSBL [18] and M-SBL-IR$_{2,1}$ [19]. Computational complexity is measured by the number of complex multiplications in a single iteration. The results are listed in Table 1, where $r_s$ stands for the clutter sparsity level.
In the conventional sparse Bayesian learning probabilistic model, the computational load is concentrated in the parameter update step; see (11) and (12). The computational complexity is on the order of $O(3K^2 NM + 2(NM)^2 K + (NM)^3 + NMKL)$, dominated by large-scale matrix multiplications and matrix inversions, and the memory requirement is $O(K^2)$. These drawbacks make M-SBL-STAP challenging to implement in modern large-scale airborne radar systems, despite its excellent clutter suppression performance with only a few snapshots.
The primary computational burden of the proposed algorithm lies in the parameter update and GML criterion calculation steps. Since the parameter space dimension is significantly reduced ($K_\alpha \ll K$), the scale of the matrix multiplications is also significantly decreased. In addition, unlike other SBL algorithms, the proposed algorithm no longer needs to update the variance matrix (12), which reduces the computational complexity further. As a result, the computational complexity is on the order of $O(K_\alpha^2 NM + 2(NM)^2 K + 2(NM)^3 + (NM)^2 L + NMKL)$, and the required memory consumption is $O(NMK)$. The computational load and memory consumption of the proposed algorithm are thus remarkably reduced, making it more suitable for modern large-scale airborne radar systems.
For a more intuitive comparison of the above-mentioned algorithms, Figure 3 illustrates the relationship between the number of complex multiplications and the number of pulses. We assume $\rho_s = \rho_d = 4$, $N = 8$, $L = 8$, and $K_\alpha$ set to the clutter rank. As seen from Figure 3, M-SBL-IR$_{2,1}$ has nearly the same computational load as M-SBL, while the proposed algorithm has the lowest computational complexity among the compared SBL-based algorithms, close to that of M-IAA and M-FOCUSS.

6. Numerical Experiment

In this section, numerous experiments are performed with simulated data to evaluate the performance of the proposed algorithm. The simulated data are generated by the well-known Ward clutter model introduced in the previous section. The main simulation parameters of the side-looking ULA are listed in Table 2. The dictionary resolution scales are set to $\rho_s = \rho_d = 4$. Furthermore, all simulation results are averaged over 100 independent Monte Carlo trials.
First, we examine the average running time of the proposed algorithm and compare it with other state-of-the-art SR-STAP algorithms, as listed in Table 3. M-CVX, M-IAA, M-OMP, M-FOCUSS, M-SBL, M-FCSBL and M-SBL-IR$_{2,1}$ are employed as benchmarks. Note that all simulations are run on the same workstation with an Intel Xeon 4114 CPU @ 2.2 GHz and 128 GB RAM.
As seen from Table 3, since the parameter space is dramatically shrunk, the average running time of the proposed ExCoV-IIR-MSBL is remarkably shorter than that of the other compared SBL-based STAP algorithms. It even outperforms M-FOCUSS and M-IAA, since the proposed algorithm reaches the steady state in fewer iterations with the help of the modified iterative reweighted parameter update step. The efficiency of the proposed algorithm is thus demonstrated.
In the following experiments, we assess clutter suppression performance via the signal-to-interference-plus-noise ratio (SINR) loss [27], defined as the ratio of the output SINR to the optimum output SNR in a noise-only environment, i.e.,
$$\mathrm{SINR}_{\mathrm{loss}} = \frac{\sigma^2}{NM} \frac{\left| \mathbf{w}^H \mathbf{s}(f_d, f_s) \right|^2}{\mathbf{w}^H \mathbf{R} \mathbf{w}} \qquad (43)$$
where $\mathbf{R}$ is the clairvoyant CNCM of the CUT and $\mathbf{w}$ is the STAP weight. A one-line implementation is sketched below.
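For completeness, the metric (43) can be computed as follows; `w`, `R` and the steering vector come from the earlier sketches, and the default noise power is our assumption.

```python
def sinr_loss(w, R, s, sigma2=1.0):
    """SINR loss, Eq. (43): (sigma^2 / NM) * |w^H s|^2 / (w^H R w)."""
    NM = len(s)
    return (sigma2 / NM) * np.abs(w.conj() @ s) ** 2 / np.real(w.conj() @ R @ w)
```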
Next, the clutter suppression performance in the ideal case is considered. The SINR loss of the proposed algorithm is evaluated and compared with other state-of-the-art SR-STAP algorithms, including M-CVX, M-IAA, M-OMP, M-FOCUSS, M-SBL, M-FCSBL and M-SBL-IR$_{2,1}$. The number of i.i.d. training samples is set to L = 9. Figure 4 depicts the SINR loss against the normalized Doppler frequency. As clearly illustrated in Figure 4, the proposed algorithm achieves the same near-optimal performance as M-IAA and the other three SBL-based algorithms, revealing that the novel Bayesian probabilistic model proposed in this paper for STAP can accurately reconstruct the CNCM with a much smaller parameter space; the performance of M-FOCUSS is slightly inferior. Meanwhile, the proposed ExCoV-IIR-MSBL consumes the least running time among the SBL-based algorithms.
To more explicitly demonstrate the clutter suppression performance, Figure 5 plots the spatial-temporal adapted responses, given the STAP weight formed by the abovementioned algorithms. The presumed main lobe is located at a normalized spatial frequency of 0 with a normalized Doppler frequency of 0.2. From Figure 5, it is evident that the proposed method is able to maintain high gain at the presumed target location while precisely forming notches at the clutter ridge, suppressing both main lobe and sidelobe of the clutter component. Poorly performing algorithms have distorted two-dimensional responses and cannot accurately form notches at the clutter ridge.
Note that, because of the massive computational complexity of M-CVX, it is not discussed in the subsequent experiments. Figure 6 illustrates the average SINR loss versus the number of training samples. The average SINR loss is calculated as the mean SINR loss over $f_d \in (-0.5, -0.05) \cup (0.05, 0.5)$ in this paper. As demonstrated in Figure 6, all algorithms suffer a certain performance loss with a single snapshot, and their performance improves as the number of training samples grows. ExCoV-IIR-MSBL exhibits promising performance similar to M-IAA and the other SBL algorithms under small-sample conditions; the proposed algorithm achieves near-optimum performance with merely three training samples.
Subsequently, we verify the target detection performance via probability of detection (Pd) versus target SNR curves, acquired by employing the cell-averaging constant false alarm rate (CA-CFAR) detector [37]. One hundred targets within the boresight are randomly added over the entire range-Doppler plane, and the false alarm probability (Pfa) is fixed at $10^{-3}$. Figure 7 shows that the proposed algorithm and the other SBL-based algorithms achieve noticeable improvements over M-OMP and M-FOCUSS. This result indicates that the proposed algorithm has superior target detection performance.
We also consider the SINR loss performance under the spatial mismatch case in the presence of angle-dependent gain-phase errors. Specifically, the gain errors among the antennas are generated by a Gaussian distribution with a standard deviation of 0.03, and the phase errors are uniformly distributed within [0°, 2°]. Figure 8 shows the SINR loss versus the normalized Doppler frequency. The curves show that all the SR-STAP algorithms suffer severe performance degradation due to the model mismatch between the dictionary and the actual clutter components. M-IAA suffers the most serious performance loss, despite approaching near-optimal performance, like the SBL-based algorithms, in the ideal case.
Figure 9 shows the SINR loss versus the number of samples for the above algorithms in the non-ideal case. As depicted in Figure 9, the proposed algorithm reaches a steady state with only a few training samples, as in the ideal case.
From the above experiments, it has been demonstrated that the proposed algorithm ensures a satisfactory clutter suppression and target detection performance while keeping a remarkable computational efficiency, thereby making it more suitable for implementation in modern large-scale radar systems.

7. Conclusions

In this work, to enhance the computational efficiency of the M-SBL-based SR-STAP algorithm, we derive a novel algorithm called improved iterative reweighted sparse Bayesian learning based on expansion-compression variance components (ExCoV-IIR-MSBL). Inspired by the intrinsic sparsity prior of the clutter, an improved Bayesian probabilistic model with a reduced parameter space is developed for SR-STAP. Furthermore, the GML criterion is utilized to partition the parameter space both efficiently and accurately. We then extend the ExCoV method to the SR-STAP application to obtain the probabilistic model that maximizes the GML objective function, and enhance its convergence speed with a modified M-SBL-IR$_{2,1}$ update under the proposed Bayesian model. With numerical experiments, we show that the proposed algorithm outperforms other existing state-of-the-art algorithms with lower computational complexity.

Author Contributions

Conceptualization, W.C. and T.W.; investigation, D.W.; methodology, W.C. and D.W.; project administration, T.W.; software, W.C.; supervision, C.L.; visualization, W.C.; writing—original draft, W.C.; writing—review & editing, C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Key R&D Program of China, grant number 2021YFA1000400.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Derivation of the FIM in (24)

In this appendix, we derive the Fisher information matrix (FIM) for the unknown hyperparameters $\boldsymbol{\gamma} = [\boldsymbol{\gamma}_{\Theta_\alpha}; \gamma_{\Theta_\beta}]$. The FIM is given by
$$\mathcal{I}(\boldsymbol{\psi}) = E_{p(\mathbf{X} \mid \boldsymbol{\psi})} \left\{ \left[ \nabla_{\boldsymbol{\gamma}} \ln p(\mathbf{X} \mid \boldsymbol{\psi}) \right] \left[ \nabla_{\boldsymbol{\gamma}} \ln p(\mathbf{X} \mid \boldsymbol{\psi}) \right]^T \right\} \qquad (\mathrm{A1})$$
where $p(\mathbf{X} \mid \boldsymbol{\psi})$ is the joint probability density function of $\mathbf{X}$ conditioned on $\boldsymbol{\psi}$, and $E_{p(\mathbf{X} \mid \boldsymbol{\psi})}\{\cdot\}$ denotes the conditional expectation of $\mathbf{X}$ given $\boldsymbol{\psi}$. The derivative of $\ln p(\mathbf{X} \mid \boldsymbol{\psi})$ with respect to the elements of $\boldsymbol{\gamma}$ is
$$\frac{\partial \ln p(\mathbf{X} \mid \boldsymbol{\psi})}{\partial \gamma_k} = -L \frac{\partial \ln |\mathbf{P}(\boldsymbol{\gamma})|}{\partial \gamma_k} - \frac{\partial}{\partial \gamma_k} \left( \sum_{l=1}^{L} \mathbf{x}_l^H \mathbf{P}^{-1}(\boldsymbol{\gamma}) \mathbf{x}_l \right) = -L\, \mathrm{trace}\left( \mathbf{P}^{-1}(\boldsymbol{\gamma}) \frac{\partial \mathbf{P}(\boldsymbol{\gamma})}{\partial \gamma_k} \right) - \sum_{l=1}^{L} \mathbf{x}_l^H \frac{\partial \mathbf{P}^{-1}(\boldsymbol{\gamma})}{\partial \gamma_k} \mathbf{x}_l \qquad (\mathrm{A2})$$
Substituting (A2) into (A1), the $(k,j)$th element of the FIM can be expressed as
$$\left[ \mathcal{I}(\boldsymbol{\psi}) \right]_{k,j} = E_{p(\mathbf{X} \mid \boldsymbol{\psi})} \left\{ \left[ \frac{\partial \ln p(\mathbf{X} \mid \boldsymbol{\psi})}{\partial \gamma_k} \right] \left[ \frac{\partial \ln p(\mathbf{X} \mid \boldsymbol{\psi})}{\partial \gamma_j} \right] \right\} = 2L\, \mathrm{trace}\left( \mathbf{P}^{-1}(\boldsymbol{\gamma}) \frac{\partial \mathbf{P}(\boldsymbol{\gamma})}{\partial \gamma_k}\, \mathbf{P}^{-1}(\boldsymbol{\gamma}) \frac{\partial \mathbf{P}(\boldsymbol{\gamma})}{\partial \gamma_j} \right) \qquad (\mathrm{A3})$$
Substituting (21) into (A3), Equations (25)–(27) are then derived.

Appendix B. Parameter Update Step Derivation

The derivations of (30)–(32) are given in this appendix. Since the first term $\ln|\mathbf{P}|$ in (14) is concave and non-decreasing with respect to $\boldsymbol{\gamma} \succeq \mathbf{0}$, an upper-bound auxiliary function of $\ln|\mathbf{P}|$ can be formed via the concave conjugate function [30]:
$$\ln |\mathbf{P}| = \min_{\mathbf{z} \succeq \mathbf{0}}\ \mathbf{z}^H \boldsymbol{\gamma} - h(\mathbf{z}) \qquad (\mathrm{A4})$$
where $h(\mathbf{z})$ denotes the concave conjugate of $\ln|\mathbf{P}|$, and $\mathbf{z}$ can be partitioned as $\mathbf{z} = [\mathbf{z}_\alpha; z_\beta]$ of dimension $(K_\alpha + 1) \times 1$.
Based on [19], the second term $\sum_{l=1}^{L} \mathbf{x}_l^H \mathbf{P}^{-1} \mathbf{x}_l$ can be equivalently represented in the following variational form:
$$\sum_{l=1}^{L} \mathbf{x}_l^H \mathbf{P}^{-1} \mathbf{x}_l = \min_{\mathbf{A}}\ \frac{1}{\sigma^2} \|\mathbf{X} - \Phi \mathbf{A}\|_F^2 + \sum_{k \in \Theta_\alpha} \sum_{l=1}^{L} \frac{|A_{kl}|^2}{\gamma_{\Theta_\alpha, k}} + \sum_{j \in \Theta_\beta} \sum_{l=1}^{L} \frac{|A_{jl}|^2}{\gamma_{\Theta_\beta}} \qquad (\mathrm{A5})$$
If we drop the minimizations from (A4) and (A5), an upper bound on $\mathcal{L}(\boldsymbol{\gamma})$ in (14) is obtained:
$$\mathcal{L}(\boldsymbol{\gamma}) = \mathcal{L}(\boldsymbol{\gamma}_{\Theta_\alpha}, \gamma_{\Theta_\beta}) = L \ln |\mathbf{P}| + \sum_{l=1}^{L} \mathbf{x}_l^H \mathbf{P}^{-1} \mathbf{x}_l \le L \left( \mathbf{z}_\alpha^H \boldsymbol{\gamma}_{\Theta_\alpha} + z_\beta \gamma_{\Theta_\beta} - h(\mathbf{z}) \right) + \sum_{k \in \Theta_\alpha} \sum_{l=1}^{L} \frac{|A_{kl}|^2}{\gamma_{\Theta_\alpha, k}} + \sum_{j \in \Theta_\beta} \sum_{l=1}^{L} \frac{|A_{jl}|^2}{\gamma_{\Theta_\beta}} + \frac{1}{\sigma^2} \|\mathbf{X} - \Phi \mathbf{A}\|_F^2 \qquad (\mathrm{A6})$$
Following the basic principles of convex analysis, the optimal $\mathbf{z}$ is given by the gradient of $\ln|\mathbf{P}|$ with respect to $\boldsymbol{\gamma}$, i.e., $\mathbf{z} = \nabla_{\boldsymbol{\gamma}} \ln |\mathbf{P}|$; hence
$$z_i = \begin{cases} \mathbf{s}_i^H \mathbf{P}^{-1} \mathbf{s}_i, & z_i \in \mathbf{z}_\alpha \\ \mathrm{trace}\left( \mathbf{P}^{-1} \Phi_\beta \Phi_\beta^H \right), & z_i = z_\beta \end{cases} \qquad (\mathrm{A7})$$
Finally, for a fixed $\mathbf{z}$, Equations (30) and (31) are obtained by setting the derivative of the upper bound in (A6) with respect to $\boldsymbol{\gamma}$ to zero.
The noise power can be updated by utilizing the expectation-maximization (EM) Bayesian inference scheme:
$$E_{p(\mathbf{A} \mid \mathbf{X}; \boldsymbol{\psi})} \left[ \ln p(\mathbf{X} \mid \mathbf{A}, \sigma^2) \right] \propto E_{p(\mathbf{A} \mid \mathbf{X}; \boldsymbol{\psi})} \left[ -NML \ln \sigma^2 - \sigma^{-2} \|\mathbf{X} - \Phi \mathbf{A}\|_F^2 \right] = -NML \ln \sigma^2 - \sigma^{-2} \left( \|\mathbf{X} - \Phi \boldsymbol{\mu}\|_F^2 + L\, \mathrm{trace}\left( \Phi^H \Phi \Sigma \right) \right) = -NML \ln \sigma^2 - \sigma^{-2} \left( \|\mathbf{X} - \Phi \boldsymbol{\mu}\|_F^2 + L \sigma^2\, \mathrm{trace}\left( \mathbf{I} - \sigma^2 \mathbf{P}^{-1} \right) \right) \qquad (\mathrm{A8})$$
Equation (32) is then given by setting the derivative of (A8) with respect to $\sigma^2$ to zero.

References

  1. Brennan, L.E.; Mallett, J.D.; Reed, I.S. Theory of Adaptive Radar. IEEE Trans. Aerosp. Electron. Syst. 1973, 9, 237–251. [Google Scholar] [CrossRef]
  2. Klemm, R. Principles of Space-Time Adaptive Processing; The Institution of Electrical Engineers: London, UK, 2002. [Google Scholar]
  3. Guerci, J.R. Space-Time Adaptive Processing for Radar; Artech House: Norwood, MA, USA, 2003. [Google Scholar]
  4. Reed, I.S.; Mallett, J.D.; Brennan, L.E. Rapid Convergence Rate in Adaptive Arrays. IEEE Trans. Aerosp. Electron. Syst. 1974, 10, 853–863. [Google Scholar] [CrossRef]
  5. Goldstein, J.S.; Reed, I.S. Reduced-rank adaptive filtering. IEEE Trans. Signal Process. 1997, 45, 492–496. [Google Scholar] [CrossRef]
  6. Goldstein, J.S.; Reed, I.S. Theory of partially adaptive radar. IEEE Trans. Aerosp. Electron. Syst. 1997, 33, 1309–1325. [Google Scholar] [CrossRef]
  7. Haimovich, A.M. The eigencanceler: Adaptive radar by eigenanalysis methods. IEEE Trans. Aerosp. Electron. Syst. 1996, 32, 532–542. [Google Scholar] [CrossRef]
  8. Wang, Y.L.; Liu, W.J.; Xie, W.C.; Zhao, Y.J. Reduced-rank space-time adaptive detection for airborne radar. Sci. China Inf. Sci. 2014, 57, 1–11. [Google Scholar] [CrossRef]
  9. DiPietro, R. Extended factored space-time processing for airborne radar systems. Signals Syst. Comput. 1992, 1, 425–430. [Google Scholar]
  10. Wang, H.; Cai, L. On adaptive spatial-temporal processing for airborne surveillance radar systems. IEEE Trans. Aerosp. Electron. Syst. 1994, 30, 660–670. [Google Scholar] [CrossRef]
  11. Tong, Y.L.; Wang, T.; Wu, J.X. Improving EFA-STAP performance using persymmetric covariance matrix estimation. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 924–936. [Google Scholar] [CrossRef]
  12. Shi, J.X.; Xie, L.; Cheng, Z.Y.; He, Z.S.; Zhang, W. Angle-Doppler channel selection method for reduced-dimension STAP based on sequential convex programming. IEEE Commun. Lett. 2021, 25, 3080–3084. [Google Scholar] [CrossRef]
  13. Maria, S.; Fuchs, J. Application of the Global Matched Filter to Stap Data an Efficient Algorithmic Approach. In Proceedings of the 2006 IEEE International Conference on Acoustics Speech and Signal Processing, Toulouse, France, 14–19 May 2006. [Google Scholar]
  14. Sun, K.; Zhang, H.; Li, G.; Meng, H.; Wang, X. A novel STAP algorithm using sparse recovery technique. In Proceedings of the IGARSS 2009, Cape Town, South Africa, 12–17 July 2009. [Google Scholar]
  15. Sun, K.; Meng, H.; Wang, Y.; Wang, X. Direct data domain STAP using sparse representation of clutter spectrum. Signal Process. 2011, 91, 2222–2236. [Google Scholar] [CrossRef]
  16. Han, S.D.; Fan, C.Y.; Huang, X.T. A Novel STAP Based on Spectrum-Aided Reduced-Dimension Clutter Sparse Recovery. IEEE Geosci. Remote Sens. Lett. 2017, 14, 213–217. [Google Scholar] [CrossRef]
  17. Duan, K.Q.; Wang, Z.T.; Xie, W.C.; Chen, H.; Wang, Y.L. Sparsity-based STAP algorithm with multiple measurement vectors via sparse Bayesian learning strategy for airborne radar. IET Signal Process. 2017, 11, 544–553. [Google Scholar] [CrossRef]
  18. Wang, Z.T.; Xie, W.C.; Duan, K.Q.; Wang, Y.L. Clutter suppression algorithm based on fast converging sparse Bayesian learning for airborne radar. Signal Process. 2017, 130, 159–168. [Google Scholar] [CrossRef]
  19. Liu, C.; Wang, T.; Zhang, S.; Ren, B. Clutter suppression based on iterative reweighted methods with multiple measurement vectors for airborne radar. IET Radar Sonar Navig. 2022, 16, 1446–1459. [Google Scholar] [CrossRef]
  20. Yang, Z.C.; Li, X.; Wang, H.; Nie, L. Sparsity-based space-time adaptive processing using complex-valued Homotopy technique for airborne radar. IET Signal Process. 2014, 8, 552–564. [Google Scholar] [CrossRef]
  21. Yang, Z.C.; Li, X.; Wang, H.; Jiang, W. Adaptive clutter suppression based on iterative adaptive approach for airborne radar. Signal Process. 2013, 93, 3567–3577. [Google Scholar] [CrossRef]
  22. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  23. Yang, Z.C.; Li, X.; Wang, H.Q.; Jiang, W.D. On Clutter Sparsity Analysis in Space-Time Adaptive Processing Airborne Radar. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1214–1218. [Google Scholar] [CrossRef]
  24. Zhang, Z.L.; Rao, B.D. Extension of SBL algorithms for the recovery of block sparse signals with intra-block correlation. IEEE Trans. Signal Process. 2013, 61, 2009–2015. [Google Scholar] [CrossRef] [Green Version]
  25. Tipping, M.E. Sparse Bayesian learning and the relevance vector machine. J. Mach. Learn. Res. 2001, 1, 211–244. [Google Scholar]
  26. Qiu, K.; Dogandzic, A. Variance-component based sparse signal reconstruction and model selection. IEEE Trans. Signal Process. 2010, 58, 2935–2952. [Google Scholar] [CrossRef]
  27. Ward, J. Space-Time Adaptive Processing for Airborne Radar; Lincoln Laboratory: Lexington, MA, USA, 1994. [Google Scholar]
  28. Natarajan, B. Sparse approximate solutions to linear systems. SIAM J. Comput. 1995, 24, 227–234. [Google Scholar] [CrossRef] [Green Version]
  29. Tipping, M.E.; Faul, A.C. Fast marginal likelihood maximization for sparse Bayesian models. In Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, Key West, FL, USA, 3–6 January 2003; Volume 1, pp. 276–283. [Google Scholar]
  30. Wipf, D.; Nagarajan, S. Iterative reweighted l1 and l2 methods for finding sparse solutions. IEEE J. Sel. Top. Signal Process. 2010, 4, 317–329. [Google Scholar] [CrossRef]
  31. Kay, S.M. Fundamentals of Statistical Signal Processing: Estimation Theory; Prentice-Hall: Englewood Cliffs, NJ, USA, 1993. [Google Scholar]
  32. Dogandzic, A.; Qiu, K. ExCoV: Expansion-compression variance-component based sparse-signal reconstruction from noisy measurements. In Proceedings of the 43rd Annual Conference on Information Sciences and Systems, Baltimore, MD, USA, 18–20 March 2009; pp. 186–191. [Google Scholar]
  33. Goodman, N.A.; Stiles, J.M. On clutter rank observed by arbitrary arrays. IEEE Trans. Signal Process. 2007, 55, 178–186. [Google Scholar] [CrossRef]
  34. Tropp, J.A.; Gilbert, A.C.; Strauss, M.J. Algorithms for simultaneous sparse approximation, part I: Greedy pursuit. Signal Process. 2006, 86, 572–588. [Google Scholar] [CrossRef]
  35. Tropp, J.A. Algorithms for simultaneous sparse approximation, part II: Convex relaxation. Signal Process. 2006, 86, 589–602. [Google Scholar] [CrossRef]
  36. Cotter, S.F.; Rao, B.D.; Engan, K.; Kreutz-Delgado, K. Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Trans. Signal Process. 2005, 53, 2477–2488. [Google Scholar] [CrossRef]
  37. Liu, W.J.; Liu, J.; Hao, C.P.; Gao, Y.C.; Wang, Y.L. Multichannel adaptive signal detection: Basic theory and literature review. Sci. China Inf. Sci. 2022, 65, 121301. [Google Scholar] [CrossRef]
Figure 1. Geometric configuration of an airborne surveillance radar system.
Figure 2. Processing flowchart of the proposed algorithm.
Figure 3. Computational complexity of the compared SR-STAP algorithms.
Figure 4. SINR loss comparison of different algorithms in the ideal case.
Figure 5. Spatial-temporal responses. (a) M-CVX; (b) M-OMP; (c) M-FOCUSS; (d) M-IAA; (e) M-SBL; (f) M-FCSBL; (g) M-SBL-IR$_{2,1}$; (h) proposed algorithm.
Figure 6. The average SINR loss versus the number of training samples in the ideal case.
Figure 7. Probability of detection versus signal-to-noise ratio.
Figure 8. SINR loss comparison of different algorithms with gain-phase error.
Figure 9. The average SINR loss versus the number of training samples with gain-phase error.
Table 1. Computational complexity comparison.

Algorithm | Computational Complexity for a Single Iteration
M-CVX | $O((KL)^3)$
M-OMP | $O(r_s^3 + r_s^2 NM + 2 r_s NM + NMKL)$
M-FOCUSS | $O(2(NM)^2 K + (NM)^3 + NMKL)$
M-IAA | $O(2(NM)^2 K + (NM)^3 + NMKL)$
M-SBL | $O(3K^2 NM + 2(NM)^2 K + (NM)^3 + NMKL)$
M-FCSBL | $O(5(NM)^2 K + (NM)^3 + 3K + L(2KL + 4K + L)NM)$
M-SBL-IR$_{2,1}$ | $O(3K^2 NM + 3(NM)^2 K + (NM)^3 + NMKL)$
ExCoV-IIR-MSBL | $O(K_\alpha^2 NM + 2(NM)^2 K + 2(NM)^3 + (NM)^2 L + NMKL)$
Table 2. Simulation parameters.

Parameter | Value
Array element number $N$ | 8
Pulse number $M$ | 8
PRF $f_r$ | 2000 Hz
Wavelength $\lambda$ | 0.3 m
Bandwidth $B$ | 2.5 MHz
Platform velocity $v_p$ | 150 m/s
CNR | 50 dB
Clutter patch number $N_c$ | 1801
Table 3. Running time comparison.

Algorithm | Average Running Time
M-CVX | 94.06 s
M-SBL | 11.75 s
M-FOCUSS | 10.48 s
M-SBL-IR$_{2,1}$ | 6.16 s
M-FCSBL | 2.20 s
M-IAA | 2.15 s
ExCoV-IIR-MSBL | 0.57 s
M-OMP | 0.01 s
