Article

A Fast Space-Time Adaptive Processing Algorithm Based on Sparse Bayesian Learning for Airborne Radar

National Lab of Radar Signal Processing, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(7), 2664; https://doi.org/10.3390/s22072664
Submission received: 21 February 2022 / Revised: 28 March 2022 / Accepted: 29 March 2022 / Published: 30 March 2022
(This article belongs to the Section Radar Sensors)

Abstract:
Space-time adaptive processing (STAP) plays an essential role in clutter suppression and moving target detection in airborne radar systems. The main difficulty is that the independent and identically distributed (i.i.d) training samples may not be sufficient to guarantee performance in heterogeneous clutter environments. Currently, most sparse recovery/representation (SR) techniques that reduce the training-sample requirement still suffer from high computational complexity. To remedy this problem, a fast group sparse Bayesian learning approach is proposed. Instead of employing all the dictionary atoms, the proposed algorithm identifies the support space of the data and then employs only the support space in the sparse Bayesian learning (SBL) algorithm. Moreover, to extend the modified hierarchical model, which applies only to real-valued signals, the real and imaginary components of the complex-valued signals are treated as two independent real-valued variables. The efficiency of the proposed algorithm is demonstrated with both simulated and measured data.

1. Introduction

Space-time adaptive processing (STAP) has the capability to detect slow-moving targets that might otherwise be swallowed up by strong sidelobe clutter. The performance of the STAP filter depends on the accuracy of the clutter plus noise covariance matrix (CNCM) of the cell under test (CUT) [1]. According to the well-known Reed-Mallett-Brennan (RMB) rule [2], to achieve steady performance, the number of independent and identically distributed (i.i.d) secondary range cells used to estimate the CNCM should be no less than twice the system degrees of freedom (DOF). However, it is hard to obtain enough i.i.d samples in practice because of array geometry, nonhomogeneous clutter environments and so on [1]. Improving the performance of STAP with limited samples therefore remains an active research topic.
Reduced-dimension (RD) [3] and reduced-rank (RR) [4] algorithms are proposed to improve the performance of STAP with limited secondary samples. Even though they are easy to implement, their performance gets worse when the number of secondary samples is smaller than twice the rank of clutter [5].
Apart from RD and RR algorithms, some other algorithms [6,7,8,9,10] have been proposed and succeed in suppressing clutter in theory. However, they face some disadvantages in practice. Specifically, (1) direct data domain (DDD) algorithms [6], which obtain samples from the CUT itself, suffer from aperture loss; and (2) knowledge-aided (KA) algorithms [7,8,9,10] require accurate prior knowledge in advance. Since accurate prior knowledge is expensive to obtain and changes with time, KA algorithms are not widely used in practical applications.
Over the past twenty years, sparse recovery/representation (SR) algorithms have received continuous attention in STAP [11,12,13,14,15] because they have enormous potential to estimate the clutter spectrum with limited samples. Among them, the sparse Bayesian learning (SBL)-type algorithms are robust and have drawn particular attention in the last five years.
The SBL algorithm was proposed by Tipping in [16]. Wipf enhanced it for the single measurement vector (SMV) case in [17] and then for the multiple measurement vectors (MMV) case in [18]. Owing to the excellent performance of SBL algorithms, the fast marginal likelihood maximization (FMLM) algorithm [19], the Bayesian compressive sensing (BCS) algorithm [20], the multitask BCS (M-BCS) algorithm [21] and other Bayesian algorithms [22,23,24] were proposed in the following years. SBL was first introduced into STAP with MMV, defined as the M-SBL-STAP algorithm, by Duan in [25] to estimate the CNCM, and Wang improved its convergence speed in [26]. SBL has also been used to solve some common problems in STAP in the last five years, such as off-grid effects [27] and discrete interference [28]. However, the aforementioned Bayesian algorithms have one or more of the following disadvantages: (a) high computational cost, (b) inaccurate estimation of the noise variance and (c) inapplicability to complex-valued signals.
In this paper, to improve the computational efficiency of the M-SBL-STAP algorithm, we extend the FMLM algorithm to the conventional M-SBL-STAP algorithm. The real and imaginary components of the complex-valued signals are treated as two independent real-valued variables in order to extend the modified hierarchical model. Results with both simulated data and the measured Mountain-Top data demonstrate that the proposed algorithm achieves high computational efficiency and good performance.
The main contributions of this paper can be listed as follows:
  • We extend the FMLM algorithm into M-SBL-STAP for the purpose of identifying the support space of the data, i.e., the atoms whose corresponding hyper-parameters are non-zero. After support space identification, the dimensions of the effective problems are drastically reduced due to sparsity, which can reduce computational complexities and alleviate memory requirements.
  • According to [18], the accurate value of the noise variance cannot be obtained. Instead of estimating the noise variance, we extend the modified hierarchical model, introduced in Section 4, to the SBL framework.
  • Although the hierarchical models apply to real-valued signals, they cannot be extended directly to complex-valued signals according to [29,30], whereas the data to be processed in STAP are all complex-valued. To solve this problem, we transform sparse complex-valued signals into group sparse real-valued signals.
Notation: In this article, scalar quantities are denoted with italic typeface. Boldface small letters are reserved for vectors, and boldface capital letters for matrices. The $i$-th entry of a vector $\mathbf{x}$ is denoted by $x_i$, while $\mathbf{A}_i$ and $A_{ij}$ denote the $i$-th row and $(i,j)$-th element of a matrix $\mathbf{A}$, respectively. The superscripts $T$ and $H$ denote the matrix transpose and conjugate transpose, respectively. The symbols $\|\cdot\|_1$, $\|\cdot\|_2$ and $\|\cdot\|_F$ are reserved for the $\ell_1$, $\ell_2$ and Frobenius norms, respectively, and $\|\cdot\|_0$ for the $\ell_0$ pseudo-norm. $\|\cdot\|_{2,0}$ stands for a mixed norm defined as the number of non-zero elements of the vector formed by the $\ell_2$ norm of each row. The symbol $|\cdot|$ is reserved for the determinant. The notations $\mathbf{I}$, $\mathbf{0}$ and $\mathbf{1}$ represent the identity matrix, the all-zero matrix and the all-one matrix, respectively. The expectation of a random variable is denoted as $E[\cdot]$.

2. Background and Problem Formulation

2.1. STAP Signal Model for Airborne Radar

Consider an airborne pulsed-Doppler radar system with a side-looking uniform linear array (ULA) consisting of $N$ elements. A coherent burst of $M$ pulses is transmitted at a constant pulse repetition frequency (PRF) in a coherent processing interval (CPI). The complex sample at the CUT from the $n$-th element and the $m$-th pulse is denoted as $x_{mn}$, and the data for the CUT can be written as an $MN \times 1$ vector $\mathbf{x}$, termed a space-time snapshot:
$$\mathbf{x} = \left[x_{11}, x_{12}, \ldots, x_{1N}, x_{21}, \ldots, x_{MN}\right]^T \qquad (1)$$
Radar systems need to ascertain whether a target is present in the CUT; thus, target detection is formulated as a binary hypothesis problem, where hypothesis $H_0$ represents target absence and hypothesis $H_1$ represents target presence:
$$H_0: \ \mathbf{x} = \mathbf{x}_c + \mathbf{n}, \qquad H_1: \ \mathbf{x} = \alpha_t \mathbf{S}_t + \mathbf{x}_c + \mathbf{n} \qquad (2)$$
where $\alpha_t$ is the target amplitude and $\mathbf{S}_t \in \mathbb{C}^{MN\times 1}$ is the space-time steering vector of the target. The vector $\mathbf{n} \in \mathbb{C}^{MN\times 1}$ is the thermal noise, which is uncorrelated both spatially and temporally. A general model for the space-time clutter $\mathbf{x}_c \in \mathbb{C}^{MN\times 1}$ is
$$\mathbf{x}_c = \sum_{k=1}^{N_c}\alpha_k\,\boldsymbol{\upsilon}_{st}(f_{d,k}, f_{s,k}) \qquad (3)$$
$$\boldsymbol{\upsilon}_{st}(f_{d,k}, f_{s,k}) = \boldsymbol{\upsilon}_t(f_{d,k})\otimes\boldsymbol{\upsilon}_s(f_{s,k}) \qquad (4)$$
$$\boldsymbol{\upsilon}_t(f_{d,k}) = \left[1, \exp(j\pi f_{d,k}), \ldots, \exp(j\pi(M-1)f_{d,k})\right]^T \qquad (5)$$
$$\boldsymbol{\upsilon}_s(f_{s,k}) = \left[1, \exp(j\pi f_{s,k}), \ldots, \exp(j\pi(N-1)f_{s,k})\right]^T \qquad (6)$$
where $\alpha_k$ denotes the random amplitude of the echo from the corresponding clutter patch; $N_c$ denotes the number of clutter patches in a clutter ring; $\boldsymbol{\upsilon}_{st}$, $\boldsymbol{\upsilon}_t$ and $\boldsymbol{\upsilon}_s$ denote the space-time, temporal and spatial steering vectors, respectively; and $f_{d,k}$ and $f_{s,k}$ denote the corresponding normalized Doppler frequency and spatial frequency, respectively.
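The steering vectors in (4)–(6) are straightforward to build; the following is an illustrative NumPy sketch (the function names are my own, not from the paper):

```python
import numpy as np

def temporal_steering(fd, M):
    # v_t(f_d) = [1, exp(j*pi*f_d), ..., exp(j*pi*(M-1)*f_d)]^T, Eq. (5)
    return np.exp(1j * np.pi * fd * np.arange(M))

def spatial_steering(fs, N):
    # v_s(f_s) = [1, exp(j*pi*f_s), ..., exp(j*pi*(N-1)*f_s)]^T, Eq. (6)
    return np.exp(1j * np.pi * fs * np.arange(N))

def space_time_steering(fd, fs, M, N):
    # Kronecker product of temporal and spatial steering vectors, Eq. (4)
    return np.kron(temporal_steering(fd, M), spatial_steering(fs, N))
```

Each vector has unit-modulus entries, and the space-time vector has length $MN$, matching the snapshot in (1).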
According to the linearly constrained minimum variance (LCMV) criterion, the optimal STAP weight vector is
$$\mathbf{w}_{opt} = \frac{\mathbf{R}_{c+n}^{-1}\mathbf{S}_t}{\mathbf{S}_t^H\mathbf{R}_{c+n}^{-1}\mathbf{S}_t} \qquad (7)$$
where the CNCM $\mathbf{R}_{c+n}$ is expressed as
$$\mathbf{R}_{c+n} = \mathbf{R}_c + \mathbf{R}_n = E\left[\mathbf{x}_c\mathbf{x}_c^H\right] + \sigma^2\mathbf{I} \qquad (8)$$
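As a minimal numerical sketch of (7) (illustrative NumPy; the function name is my own): the weight is the whitened steering vector, normalized so that the response to $\mathbf{S}_t$ is distortionless.

```python
import numpy as np

def lcmv_weight(R, s):
    # w_opt = R^{-1} s / (s^H R^{-1} s), Eq. (7)
    Ri_s = np.linalg.solve(R, s)     # avoids forming R^{-1} explicitly
    return Ri_s / (s.conj() @ Ri_s)
```

By construction $\mathbf{w}^H\mathbf{S}_t = 1$, the distortionless constraint of the LCMV beamformer.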

2.2. SR-STAP Model and Principle

Discretize the space-time plane uniformly into $K = N_S N_d$ grid points, where $N_S = \varphi_S N \ (\varphi_S > 1)$ denotes the number of normalized spatial frequency bins and $N_d = \varphi_d M \ (\varphi_d > 1)$ denotes the number of normalized Doppler frequency bins. Each grid point corresponds to a space-time steering vector $\boldsymbol{\upsilon}_k, \ k = 1, 2, \ldots, K$. The dictionary $\mathbf{D} \in \mathbb{C}^{MN\times K}$ used in STAP is the collection of the space-time steering vectors of all grid points:
$$\mathbf{D} = \left[\boldsymbol{\upsilon}_1, \boldsymbol{\upsilon}_2, \ldots, \boldsymbol{\upsilon}_K\right] \qquad (9)$$
The signal model in STAP can then be cast in the following form:
$$\mathbf{X} = \mathbf{D}\mathbf{A} + \mathbf{N} \qquad (10)$$
where $\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_L]$; $\mathbf{A} \in \mathbb{C}^{K\times L}$ denotes the sparse coefficient matrix, whose non-zero elements indicate the presence of clutter on the space-time profile; $L$ denotes the number of range gates in the data; and $\mathbf{N} \in \mathbb{C}^{MN\times L}$ denotes zero-mean Gaussian noise.
In sparse signal recovery with MMV, the goal is to represent the noise-contaminated measurement $\mathbf{X}$ as a linear combination of as few dictionary atoms as possible. Therefore, the objective function is expressed as
$$\hat{\mathbf{A}} = \arg\min_{\mathbf{A}}\|\mathbf{A}\|_{2,0}, \quad \text{s.t.} \ \|\mathbf{X}-\mathbf{D}\mathbf{A}\|_F^2 \le \varepsilon \qquad (11)$$
where $\varepsilon$ is the noise error allowance.
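The dictionary in (9) is simply the stack of steering vectors over the discretized grid; a small illustrative sketch (the function name and the symmetric grid layout are my own assumptions):

```python
import numpy as np

def build_dictionary(M, N, phi_s=2, phi_d=2):
    # Ns = phi_s*N spatial bins, Nd = phi_d*M Doppler bins, K = Ns*Nd atoms
    Ns, Nd = phi_s * N, phi_d * M
    fs_grid = np.linspace(-1, 1, Ns, endpoint=False)
    fd_grid = np.linspace(-1, 1, Nd, endpoint=False)
    atoms = [np.kron(np.exp(1j * np.pi * fd * np.arange(M)),   # temporal, Eq. (5)
                     np.exp(1j * np.pi * fs * np.arange(N)))   # spatial, Eq. (6)
             for fd in fd_grid for fs in fs_grid]
    return np.column_stack(atoms)    # D is MN x K, Eq. (9)
```

Each column is a unit-modulus space-time steering vector; for $\varphi_S, \varphi_d > 1$ the dictionary is overcomplete ($K > MN$), which is what makes $\mathbf{A}$ sparse.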

3. M-SBL-STAP Algorithm

3.1. Sparse Bayesian Learning Formulation

In the SBL framework, $\{\mathbf{x}_l\}_{l=1}^{L}$ are assumed to be i.i.d training snapshots, and the noise follows a white complex Gaussian distribution with unknown power $\sigma^2$. The likelihood function of the measurement vectors can be denoted as
$$p(\mathbf{X}|\mathbf{A},\sigma^2) = (\pi\sigma^2)^{-MNL}\exp\left(-\sigma^{-2}\sum_{l=1}^{L}\|\mathbf{x}_l-\mathbf{D}\mathbf{a}_l\|^2\right) \qquad (12)$$
Since the training snapshots are i.i.d, each column of $\mathbf{A}$ is independent and shares the same covariance matrix. Assign each column of $\mathbf{A}$ a zero-mean Gaussian prior:
$$\mathbf{a}_l \sim \mathcal{CN}(\mathbf{0},\boldsymbol{\Gamma}), \quad l = 1, 2, \ldots, L \qquad (13)$$
where $\mathbf{0}$ is a $K\times 1$ zero vector and $\boldsymbol{\Gamma} = \mathrm{diag}(\boldsymbol{\gamma})$, with $\boldsymbol{\gamma} = [\gamma_1, \gamma_2, \ldots, \gamma_K]$ the unknown variance parameters corresponding to the sparse coefficient matrix. The prior of $\mathbf{A}$ can be expressed as
$$p(\mathbf{A}|\boldsymbol{\Gamma}) = \pi^{-KL}|\boldsymbol{\Gamma}|^{-L}\exp\left(-\sum_{l=1}^{L}\mathbf{a}_l^H\boldsymbol{\Gamma}^{-1}\mathbf{a}_l\right) \qquad (14)$$
Combining the likelihood with the prior, the posterior density of $\mathbf{A}$,
$$p(\mathbf{A}|\mathbf{X},\boldsymbol{\Gamma},\sigma^2) = \frac{p(\mathbf{X}|\mathbf{A},\sigma^2)\,p(\mathbf{A}|\boldsymbol{\Gamma})}{\int p(\mathbf{X}|\mathbf{A},\sigma^2)\,p(\mathbf{A}|\boldsymbol{\Gamma})\,d\mathbf{A}} \qquad (15)$$
is modulated by the hyper-parameters $\boldsymbol{\gamma}$ and $\sigma^2$. To find hyper-parameters accurate enough to estimate the CNCM, the most common method is the expectation-maximization (EM) algorithm, which alternates between an E-step and an M-step. Let $t$ denote the index of the current iteration. At the E-step, $\mathbf{A}$ is treated as hidden data, and the posterior density is described by the hyper-parameters $\boldsymbol{\gamma}$ and $\sigma^2$:
$$p\left(\mathbf{A}^{(t+1)}\middle|\mathbf{X},\boldsymbol{\Gamma}^{(t)},\sigma^{2(t)}\right) = \frac{p\left(\mathbf{X},\mathbf{A}^{(t+1)}\middle|\boldsymbol{\Gamma}^{(t)},\sigma^{2(t)}\right)}{p\left(\mathbf{X}\middle|\boldsymbol{\Gamma}^{(t)},\sigma^{2(t)}\right)} = \pi^{-KL}\left|\boldsymbol{\Sigma}^{(t+1)}\right|^{-L}\exp\left(-\sum_{l=1}^{L}\left(\mathbf{a}_l^{(t+1)}-\boldsymbol{\mu}_l^{(t+1)}\right)^H\left(\boldsymbol{\Sigma}^{(t+1)}\right)^{-1}\left(\mathbf{a}_l^{(t+1)}-\boldsymbol{\mu}_l^{(t+1)}\right)\right) \qquad (16)$$
with covariance and mean given by
$$\boldsymbol{\Sigma}^{(t+1)} = \left(\sigma^{-2(t)}\mathbf{D}^H\mathbf{D} + \boldsymbol{\Lambda}^{(t)}\right)^{-1} \qquad (17)$$
$$\boldsymbol{\mu}^{(t+1)} = \sigma^{-2(t)}\boldsymbol{\Sigma}^{(t+1)}\mathbf{D}^H\mathbf{X} \qquad (18)$$
where
$$\boldsymbol{\Lambda}^{(t)} = \left(\boldsymbol{\Gamma}^{(t)}\right)^{-1} = \mathrm{diag}\left(1/\gamma_1^{(t)}, 1/\gamma_2^{(t)}, \ldots, 1/\gamma_K^{(t)}\right) \qquad (19)$$
$$\boldsymbol{\mu}^{(t+1)} = \left[\boldsymbol{\mu}_1^{(t+1)}, \boldsymbol{\mu}_2^{(t+1)}, \ldots, \boldsymbol{\mu}_L^{(t+1)}\right] \qquad (20)$$
At the M-step, we update the hyper-parameters by maximizing the expectation of $\ln p(\mathbf{X},\mathbf{A}^{(t+1)}|\boldsymbol{\Gamma}^{(t)},\sigma^{2(t)})$:
$$\left\{\boldsymbol{\gamma}^{(t+1)},\sigma^{2(t+1)}\right\} = \arg\max_{\boldsymbol{\Gamma}^{(t)},\,\sigma^{2(t)}} E\left[\ln p\left(\mathbf{X},\mathbf{A}^{(t+1)}\middle|\boldsymbol{\Gamma}^{(t)},\sigma^{2(t)}\right)\right] \qquad (21)$$
where
$$\gamma_i^{(t+1)} = \frac{1}{L}\sum_{l=1}^{L}\left|\mu_{l,i}^{(t+1)}\right|^2 + \Sigma_{ii}^{(t+1)} \qquad (22)$$
$$\sigma^{2(t+1)} = \frac{\frac{1}{L}\sum_{l=1}^{L}\left\|\mathbf{x}_l-\mathbf{D}\boldsymbol{\mu}_l^{(t+1)}\right\|_2^2}{NM - K + \sum_{k=1}^{K}\Sigma_{kk}^{(t+1)}/\gamma_k^{(t)}} \qquad (23)$$
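One EM pass of the updates (17), (18), (22) and (23) can be sketched as follows (illustrative NumPy; the function and variable names are my own):

```python
import numpy as np

def msbl_em_iteration(X, D, gamma, sigma2):
    """One E-step/M-step pass of M-SBL (Eqs. (17), (18), (22), (23))."""
    MN, K = D.shape
    L = X.shape[1]
    # E-step: posterior covariance and mean
    Sigma = np.linalg.inv(D.conj().T @ D / sigma2 + np.diag(1.0 / gamma))
    Mu = Sigma @ D.conj().T @ X / sigma2
    # M-step: hyper-parameter updates
    diag_S = np.real(np.diag(Sigma))
    gamma_new = (np.abs(Mu) ** 2).mean(axis=1) + diag_S
    resid = np.linalg.norm(X - D @ Mu, 'fro') ** 2 / L
    sigma2_new = resid / (MN - K + np.sum(diag_S / gamma))
    return gamma_new, sigma2_new, Mu, Sigma
```

Note that the E-step contains exactly the $K\times K$ inversion that Section 3.2 identifies as the computational bottleneck.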
The M-SBL-STAP algorithm is shown in Algorithm 1.
Algorithm 1: Pseudocode for the M-SBL-STAP algorithm.
Step 1: Input: the clutter data X, the dictionary D
Step 2: Initialization: initialize the values of $\boldsymbol{\gamma}$ and $\sigma^2$.
Step 3: E-step: update the posterior moments Σ t + 1 and μ t + 1 using (17) and (18).
Step 4: M-step: update γ t + 1 and σ 2 t + 1 using (22) and (23).
Step 5: Repeat step 3 and step 4 until a stopping criterion is satisfied.
Step 6: Estimate the CNCM R by the formula
      $$\mathbf{R}^* = \frac{1}{L}\sum_{l=1}^{L}\sum_{k=1}^{K}\left|\mu_{l,k}^*\right|^2\boldsymbol{\upsilon}_k\boldsymbol{\upsilon}_k^H + \beta\mathbf{I}$$
 where $\beta$ is a load factor and the superscript $*$ denotes the value at the stopping criterion.
Step 7: Compute the space-time adaptive weight w using (7).
Step 8: The output of the M-SBL-STAP algorithm is Z = w H X .

3.2. Problem Statement of the M-SBL-STAP Algorithm

At the E-step of each iteration, the inversion of a $K\times K$ matrix, with computational complexity on the order of $O(K^3)$, must be calculated when updating the covariance $\boldsymbol{\Sigma}$ in (17). $K$ is the number of atoms in the dictionary, and it is usually large. If the inversion of this $K\times K$ matrix can be avoided, the computational complexity is greatly reduced.
At the M-step of each iteration, the noise variance $\sigma^2$ needs to be estimated. However, it has been demonstrated in [18] that $\sigma^2$ estimated by (23) can be extremely inaccurate when the dictionary is structured and $K \gg NM$. As shown in [21], $\sigma^2$ is a nuisance parameter in the iterative procedure, and an inappropriate value may contaminate the convergence of the algorithm.
We extend a modified hierarchical model to the SBL framework, which aims at integrating $\sigma^2$ out instead of estimating it. However, the modified model, which applies to real-valued signals, cannot be directly extended to complex-valued signals.
In the next sections, we introduce the proposed algorithm to solve the above problems. The proposed algorithm is defined as M-FMLM-STAP algorithm.

4. The Proposed M-FMLM-STAP Algorithm

4.1. Modified Hierarchical Model

In [16], Tipping defines the following hierarchical hyperpriors over $\boldsymbol{\Gamma}$, as well as over the noise variance $\sigma^2$:
$$p(\boldsymbol{\Gamma}|c,d) = \prod_{k=1}^{K}\mathrm{Gamma}\left(\gamma_k^{-1}\,\middle|\,c,d\right) \qquad (24)$$
$$p(\sigma^{-2}|a,b) = \mathrm{Gamma}\left(\sigma^{-2}\,\middle|\,a,b\right) \qquad (25)$$
where $\mathrm{Gamma}(\alpha|a,b) = \Gamma(a)^{-1}b^a\alpha^{a-1}e^{-b\alpha}$ and the gamma function is $\Gamma(a) = \int_0^\infty t^{a-1}e^{-t}\,dt$. It has been demonstrated in [16] that an appropriate choice of the shape parameter $a$ and the scale parameter $b$ encourages sparsity of the coefficient matrix.
In the STAP framework, the CNCM can be expressed as
$$\mathbf{R}_{c+n} = \mathbf{D}\boldsymbol{\Gamma}\mathbf{D}^H + \sigma^2\mathbf{I} \qquad (26)$$
We can write the STAP weight vector in the form
$$\mathbf{w}_{opt} = \mu\mathbf{R}_{c+n}^{-1}\mathbf{S}_t = \lambda\left(\mathbf{D}\frac{\boldsymbol{\Gamma}}{\sigma^2}\mathbf{D}^H + \mathbf{I}\right)^{-1}\mathbf{S}_t = \lambda\tilde{\mathbf{R}}^{-1}\mathbf{S}_t \qquad (27)$$
where $\lambda$ and $\mu$ are constants.
The above analysis shows that $\tilde{\mathbf{R}}$ is equivalent to $\mathbf{R}_{c+n}$ in terms of clutter-suppression performance. Thus, $\sigma^2$ can be absorbed into the prior of $\boldsymbol{\Gamma}$: each component of $\mathbf{A}$ is assigned a zero-mean Gaussian distribution, and the modified hierarchical model follows (28) and (29):
$$p(\mathbf{A}|\boldsymbol{\Gamma},\sigma^2) = \prod_{l=1}^{L}\prod_{k=1}^{K}\mathcal{CN}\left(a_{l,k}\,\middle|\,0,\gamma'_k\right) = \prod_{l=1}^{L}\prod_{k=1}^{K}\mathcal{CN}\left(a_{l,k}\,\middle|\,0,\sigma^2\gamma_k\right) \qquad (28)$$
$$p(\sigma^{-2}|a,b) = \mathrm{Gamma}\left(\sigma^{-2}\,\middle|\,a,b\right) \qquad (29)$$
where $\boldsymbol{\Gamma}' = \mathrm{diag}(\boldsymbol{\gamma}') = \mathrm{diag}(\gamma'_1, \gamma'_2, \ldots, \gamma'_K) = \sigma^2\boldsymbol{\Gamma}$.

4.2. Application of the Modified Hierarchical Model to Complex-Valued Signals

However, although both the original and the modified models apply to real-valued signals, they cannot be directly extended to complex-valued signals. To solve this problem, the real and imaginary components of the complex-valued signals are treated as two independent real-valued variables:
$$\mathbf{y}_l = \begin{bmatrix}\mathrm{Re}(\mathbf{x}_l)\\ \mathrm{Im}(\mathbf{x}_l)\end{bmatrix} \qquad (30)$$
The new signal model is expressed as
$$\mathbf{Y} = \boldsymbol{\Psi}\mathbf{B} + \mathbf{N} \qquad (31)$$
where $\mathbf{Y} = [\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_L] \in \mathbb{R}^{2MN\times L}$ is the new data matrix, $\mathbf{B} = [\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_L] \in \mathbb{R}^{2K\times L}$ is the new sparse coefficient matrix and $\mathbf{N} \in \mathbb{R}^{2MN\times L}$ is the new noise matrix. The new dictionary $\boldsymbol{\Psi} \in \mathbb{R}^{2MN\times 2K}$ is expressed as
$$\boldsymbol{\Psi} = \begin{bmatrix}\mathrm{Re}(\boldsymbol{\upsilon}_1) & -\mathrm{Im}(\boldsymbol{\upsilon}_1) & \cdots & \mathrm{Re}(\boldsymbol{\upsilon}_K) & -\mathrm{Im}(\boldsymbol{\upsilon}_K)\\ \mathrm{Im}(\boldsymbol{\upsilon}_1) & \mathrm{Re}(\boldsymbol{\upsilon}_1) & \cdots & \mathrm{Im}(\boldsymbol{\upsilon}_K) & \mathrm{Re}(\boldsymbol{\upsilon}_K)\end{bmatrix} \qquad (32)$$
while the new covariance, mean and hyper-parameters in the modified model are expressed as
$$\boldsymbol{\Sigma} = \left(\sigma^{-2}\boldsymbol{\Psi}^T\boldsymbol{\Psi} + \boldsymbol{\Gamma}'^{-1}\right)^{-1} \qquad (33)$$
$$\boldsymbol{\mu}_l = \sigma^{-2}\boldsymbol{\Sigma}\boldsymbol{\Psi}^T\mathbf{y}_l \qquad (34)$$
$$\boldsymbol{\Gamma}' = \mathrm{diag}\left(\gamma'_1, \gamma'_2, \ldots, \gamma'_K, \gamma'_{K+1}, \gamma'_{K+2}, \ldots, \gamma'_{2K}\right) \qquad (35)$$
where $\gamma'_{2i-1} = \gamma'_{2i}, \ i = 1, 2, \ldots, K$. The signal $\mathbf{y}_l$ is group sparse because the entries in each group, say $\boldsymbol{\gamma}'_i = [\gamma'_{2i-1}, \gamma'_{2i}], \ i = 1, 2, \ldots, K$, are either zero or nonzero simultaneously.
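The complex-to-real mapping in (30)–(32) can be checked numerically. In this illustrative sketch (my own naming), a complex model $\mathbf{x} = \mathbf{D}\mathbf{a}$ maps exactly to the real model $\mathbf{y} = \boldsymbol{\Psi}\mathbf{b}$ when the real and imaginary parts of $\mathbf{a}$ are interleaved:

```python
import numpy as np

def to_real_model(X, D):
    # y_l = [Re(x_l); Im(x_l)], Eq. (30)
    Y = np.vstack([X.real, X.imag])
    MN, K = D.shape
    # Interleaved real dictionary of Eq. (32): columns (2k-1, 2k) hold
    # the block [Re(v_k), -Im(v_k); Im(v_k), Re(v_k)]
    Psi = np.empty((2 * MN, 2 * K))
    Psi[:MN, 0::2], Psi[:MN, 1::2] = D.real, -D.imag
    Psi[MN:, 0::2], Psi[MN:, 1::2] = D.imag, D.real
    return Y, Psi
```

The two columns of each block share one hyper-parameter, which is the group structure exploited in Section 4.3.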
In order to avoid calculating $\sigma^2$ in (33) and (34), we extend the modified hierarchical model to the SBL framework. The new real-valued likelihood function can be expressed as
$$p(\mathbf{Y}|\mathbf{B},\sigma^2) = (2\pi\sigma^2)^{-MNL}\exp\left(-\frac{1}{2\sigma^2}\sum_{l=1}^{L}\|\mathbf{y}_l-\boldsymbol{\Psi}\mathbf{b}_l\|^2\right) \qquad (36)$$
and the new prior of $\mathbf{B}$ can be expressed as
$$p(\mathbf{B}|\boldsymbol{\Gamma}',\sigma^2) = (2\pi)^{-KL}|\boldsymbol{\Gamma}'|^{-\frac{L}{2}}\exp\left(-\frac{1}{2}\sum_{l=1}^{L}\mathbf{b}_l^T\boldsymbol{\Gamma}'^{-1}\mathbf{b}_l\right) = (2\pi)^{-KL}\left|\sigma^2\boldsymbol{\Gamma}\right|^{-\frac{L}{2}}\exp\left(-\frac{1}{2}\sum_{l=1}^{L}\mathbf{b}_l^T\left(\sigma^2\boldsymbol{\Gamma}\right)^{-1}\mathbf{b}_l\right) \qquad (37)$$
Combining the new likelihood and prior, the new posterior density of $\mathbf{B}$ can be expressed as
$$p(\mathbf{B}|\mathbf{Y},\boldsymbol{\Gamma}',\sigma^2) = \frac{p(\mathbf{Y}|\mathbf{B},\sigma^2)\,p(\mathbf{B}|\boldsymbol{\Gamma}',\sigma^2)}{\int p(\mathbf{Y}|\mathbf{B},\sigma^2)\,p(\mathbf{B}|\boldsymbol{\Gamma}',\sigma^2)\,d\mathbf{B}} \qquad (38)$$
Integrating $\sigma^2$ out, the posterior density function of $\mathbf{B}$ is
$$p(\mathbf{B}|\mathbf{Y},\boldsymbol{\Gamma}') = \int p(\mathbf{B}|\mathbf{Y},\boldsymbol{\Gamma}',\sigma^2)\,p(\sigma^{-2}|a,b)\,d\sigma^{-2} = \frac{\Gamma(a+LK)\left(1+\frac{1}{2b}\sum_{l=1}^{L}(\mathbf{b}_l-\boldsymbol{\mu}_l)^T\boldsymbol{\Sigma}^{-1}(\mathbf{b}_l-\boldsymbol{\mu}_l)\right)^{-(a+LK)}}{\Gamma(a)\,(2\pi b)^{LK}\,|\boldsymbol{\Sigma}|^{\frac{L}{2}}} \qquad (39)$$
where
$$\boldsymbol{\Sigma} = \left(\boldsymbol{\Psi}^T\boldsymbol{\Psi} + \boldsymbol{\Gamma}'^{-1}\right)^{-1} \qquad (40)$$
$$\boldsymbol{\mu}_l = \boldsymbol{\Sigma}\boldsymbol{\Psi}^T\mathbf{y}_l \qquad (41)$$
From (39), we can draw the conclusion that the modified formulation induces a heavy-tailed Student-t distribution on the residual noise, which improves the robustness of the algorithm.
The logarithm of the marginal likelihood function is
$$\mathcal{L}(\boldsymbol{\Gamma}') = \log p(\mathbf{Y}|\boldsymbol{\Gamma}') = \log\iint p(\mathbf{Y}|\mathbf{B},\sigma^2)\,p(\mathbf{B}|\boldsymbol{\Gamma}',\sigma^2)\,p(\sigma^{-2}|a,b)\,d\mathbf{B}\,d\sigma^{-2} = -\frac{1}{2}\sum_{l=1}^{L}\left[(2MN+2a)\log\left(\mathbf{y}_l^T\mathbf{C}^{-1}\mathbf{y}_l + 2b\right) + \log|\mathbf{C}|\right] + \mathrm{const} \qquad (42)$$
with
$$\mathbf{C} = \mathbf{I}_{2MN} + \boldsymbol{\Psi}\boldsymbol{\Gamma}'\boldsymbol{\Psi}^T \qquad (43)$$
Since the signals $\mathbf{y}_l, \ l = 1, \ldots, L$ are group-sparse, we define
$$\alpha_i = \gamma'_{2i-1} = \gamma'_{2i}, \quad i = 1, 2, \ldots, K \qquad (44)$$
and then
$$\boldsymbol{\Gamma}' = \mathrm{diag}(\boldsymbol{\alpha})\otimes\mathbf{I}_2 \qquad (45)$$
where $\boldsymbol{\alpha} = [\alpha_1, \alpha_2, \ldots, \alpha_K]$.

4.3. Maximization of $\mathcal{L}(\boldsymbol{\Gamma}')$ to Estimate $\boldsymbol{\alpha}$

A point estimate of the hyper-parameters $\boldsymbol{\alpha}$ may be found via a type-II maximum likelihood procedure; mathematically, we maximize the marginal likelihood function or its logarithm (42) with respect to $\boldsymbol{\alpha}$. In the following, the FMLM algorithm is applied to maximize (42) to estimate $\boldsymbol{\alpha}$. Unlike the EM algorithm, the FMLM algorithm reduces the computational complexity by identifying the support space of the data, i.e., the atoms in the dictionary whose corresponding values in $\boldsymbol{\alpha}$ are non-zero.
For notational convenience, we drop the prime symbol; $\boldsymbol{\gamma}$ and $\boldsymbol{\Gamma}$ in the following represent $\boldsymbol{\gamma}'$ and $\boldsymbol{\Gamma}'$, respectively.
Define $\boldsymbol{\Omega}$ as the set of the non-zero values in $\boldsymbol{\Gamma}$ and $\boldsymbol{\psi}$ as the support space of the data:
$$\boldsymbol{\Omega} = \mathrm{diag}\left(\alpha_{\omega_1}, \alpha_{\omega_2}, \ldots, \alpha_{\omega_J}\right)\otimes\mathbf{I}_2 \qquad (46)$$
$$\boldsymbol{\psi} = \left[\boldsymbol{\Phi}_{\omega_1}, \boldsymbol{\Phi}_{\omega_2}, \ldots, \boldsymbol{\Phi}_{\omega_J}\right] \qquad (47)$$
where
$$\left\{\omega_1, \omega_2, \ldots, \omega_J\right\} = \left\{i \,\middle|\, 1 \le i \le K, \ \alpha_i \ne 0\right\} \qquad (48)$$
$$\boldsymbol{\Phi}_i = \begin{bmatrix}\mathrm{Re}(\boldsymbol{\upsilon}_i) & -\mathrm{Im}(\boldsymbol{\upsilon}_i)\\ \mathrm{Im}(\boldsymbol{\upsilon}_i) & \mathrm{Re}(\boldsymbol{\upsilon}_i)\end{bmatrix}, \quad i = 1, 2, \ldots, K \qquad (49)$$
and $J$ is the number of non-zero elements in $\boldsymbol{\alpha}$.
At the beginning of the FMLM algorithm, we initialize $\boldsymbol{\alpha}^{(0)} = \mathbf{0}$, namely $\boldsymbol{\Omega}^{(0)} = \emptyset$ and $\boldsymbol{\psi}^{(0)} = \emptyset$. Then, the support space of the data is identified in each iteration until $\boldsymbol{\alpha}$ converges to a steady point.
The matrix $\mathbf{C}$ can be decomposed into two parts:
$$\mathbf{C} = \mathbf{I}_{2MN} + \sum_{k\ne i,\,1\le k\le K}\alpha_k\boldsymbol{\Phi}_k\boldsymbol{\Phi}_k^T + \alpha_i\boldsymbol{\Phi}_i\boldsymbol{\Phi}_i^T = \mathbf{C}_{-i} + \alpha_i\boldsymbol{\Phi}_i\boldsymbol{\Phi}_i^T \qquad (50)$$
The first term $\mathbf{C}_{-i} = \mathbf{I}_{2MN} + \sum_{k\ne i,\,1\le k\le K}\alpha_k\boldsymbol{\Phi}_k\boldsymbol{\Phi}_k^T$ contains all terms that are independent of $\alpha_i$, and the second term collects the terms related to $\alpha_i$.
Using the Woodbury matrix identity, the matrix inversion and matrix determinant lemmas give
$$\mathbf{C}^{-1} = \mathbf{C}_{-i}^{-1} - \mathbf{C}_{-i}^{-1}\boldsymbol{\Phi}_i\left(\alpha_i^{-1}\mathbf{I}_2 + \boldsymbol{\Phi}_i^T\mathbf{C}_{-i}^{-1}\boldsymbol{\Phi}_i\right)^{-1}\boldsymbol{\Phi}_i^T\mathbf{C}_{-i}^{-1} \qquad (51)$$
$$|\mathbf{C}| = |\mathbf{C}_{-i}|\,\left|\alpha_i\mathbf{I}_2\right|\,\left|\alpha_i^{-1}\mathbf{I}_2 + \boldsymbol{\Phi}_i^T\mathbf{C}_{-i}^{-1}\boldsymbol{\Phi}_i\right| \qquad (52)$$
Then, (42) can be expressed as
$$\mathcal{L}(\boldsymbol{\Gamma}) = -\frac{1}{2}\sum_{l=1}^{L}\left[(2MN+2a)\log\left(\mathbf{y}_l^T\mathbf{C}_{-i}^{-1}\mathbf{y}_l + 2b\right) + \log|\mathbf{C}_{-i}|\right] - \frac{1}{2}\sum_{l=1}^{L}\left[(2MN+2a)\log\left(1 - \frac{\mathbf{y}_l^T\mathbf{C}_{-i}^{-1}\boldsymbol{\Phi}_i\left(\alpha_i^{-1}\mathbf{I}_2+\boldsymbol{\Phi}_i^T\mathbf{C}_{-i}^{-1}\boldsymbol{\Phi}_i\right)^{-1}\boldsymbol{\Phi}_i^T\mathbf{C}_{-i}^{-1}\mathbf{y}_l}{\mathbf{y}_l^T\mathbf{C}_{-i}^{-1}\mathbf{y}_l + 2b}\right) + \log\left|\alpha_i\mathbf{I}_2\right| + \log\left|\alpha_i^{-1}\mathbf{I}_2+\boldsymbol{\Phi}_i^T\mathbf{C}_{-i}^{-1}\boldsymbol{\Phi}_i\right|\right] + \mathrm{const} = \mathcal{L}_{-i} + \mathcal{L}(\alpha_i) \qquad (53)$$
where $\mathcal{L}_{-i}$ contains all terms that are independent of $\alpha_i$ and $\mathcal{L}(\alpha_i)$ collects the terms related to $\alpha_i$.
Define
$$\hat{\mathbf{S}}_i \triangleq \boldsymbol{\Phi}_i^T\mathbf{C}_{-i}^{-1}\boldsymbol{\Phi}_i, \quad \hat{\mathbf{q}}_{l,i} \triangleq \boldsymbol{\Phi}_i^T\mathbf{C}_{-i}^{-1}\mathbf{y}_l, \quad \hat{g}_{l,i} \triangleq \mathbf{y}_l^T\mathbf{C}_{-i}^{-1}\mathbf{y}_l + 2b \qquad (54)$$
Then, $\mathcal{L}(\alpha_i)$ can be expressed as
$$\mathcal{L}(\alpha_i) = -\frac{1}{2}\sum_{l=1}^{L}\left[(2MN+2a)\log\left(1 - \frac{\hat{\mathbf{q}}_{l,i}^T\left(\alpha_i^{-1}\mathbf{I}_2+\hat{\mathbf{S}}_i\right)^{-1}\hat{\mathbf{q}}_{l,i}}{\hat{g}_{l,i}}\right) + 2\log\alpha_i + \log\left|\alpha_i^{-1}\mathbf{I}_2+\hat{\mathbf{S}}_i\right|\right] \qquad (55)$$
The eigenvalue decomposition (EVD) of $\hat{\mathbf{S}}_i$ is
$$\hat{\mathbf{S}}_i = \mathbf{V}_i\boldsymbol{\Lambda}_i\mathbf{V}_i^T = \sum_{j=1}^{2}\hat{s}_{i,j}\boldsymbol{\upsilon}_{i,j}\boldsymbol{\upsilon}_{i,j}^T \qquad (56)$$
where $\hat{s}_{i,j}$ and $\boldsymbol{\upsilon}_{i,j}$ denote the $j$-th eigenvalue and eigenvector of $\hat{\mathbf{S}}_i$, respectively, and $\mathbf{V}_i = [\boldsymbol{\upsilon}_{i,1}, \boldsymbol{\upsilon}_{i,2}]$ is the eigen-matrix of $\hat{\mathbf{S}}_i$.
Define $\mathbf{c}_{l,i} = \mathbf{V}_i^T\hat{\mathbf{q}}_{l,i} = [c_{l,i,1}, c_{l,i,2}]^T$; formula (55) can then be expressed as
$$\mathcal{L}(\alpha_i) = -\frac{1}{2}\sum_{l=1}^{L}\left[(2MN+2a)\log\left(1 - \frac{\alpha_i c_{l,i,1}^2}{\left(1+\alpha_i\hat{s}_{i,1}\right)\hat{g}_{l,i}} - \frac{\alpha_i c_{l,i,2}^2}{\left(1+\alpha_i\hat{s}_{i,2}\right)\hat{g}_{l,i}}\right) + 2\log\alpha_i + \sum_{j=1}^{2}\log\frac{1+\alpha_i\hat{s}_{i,j}}{\alpha_i}\right] \qquad (57)$$
The next step involves maximizing (57) to estimate each hyper-parameter $\alpha_i$, and then choosing the single candidate among $\{\alpha_i\}$ that maximizes (53).
Differentiating $\mathcal{L}(\alpha_i)$ with respect to $\alpha_i$ and setting the result to zero gives
$$\frac{\partial\mathcal{L}(\alpha_i)}{\partial\alpha_i} = -\frac{1}{2}\sum_{l=1}^{L}\left[-\frac{(2MN+2a)\left(\frac{c_{l,i,1}^2}{\left(1+\alpha_i\hat{s}_{i,1}\right)^2}+\frac{c_{l,i,2}^2}{\left(1+\alpha_i\hat{s}_{i,2}\right)^2}\right)}{\hat{g}_{l,i}-\frac{\alpha_i c_{l,i,1}^2}{1+\alpha_i\hat{s}_{i,1}}-\frac{\alpha_i c_{l,i,2}^2}{1+\alpha_i\hat{s}_{i,2}}} + \frac{\hat{s}_{i,1}}{1+\alpha_i\hat{s}_{i,1}} + \frac{\hat{s}_{i,2}}{1+\alpha_i\hat{s}_{i,2}}\right] = 0 \qquad (58)$$
which is equivalent to solving the following polynomial equation:
$$f(\alpha_i) = d_3\alpha_i^3 + d_2\alpha_i^2 + d_1\alpha_i + d_0 = 0 \qquad (59)$$
where
$$\begin{aligned}
d_0 &= \sum_{l=1}^{L}\left[\hat{g}_{l,i}\left(\hat{s}_{i,1}+\hat{s}_{i,2}\right) - (2MN+2a)\left(c_{l,i,1}^2+c_{l,i,2}^2\right)\right]\\
d_1 &= \sum_{l=1}^{L}\left[4\hat{g}_{l,i}\hat{s}_{i,1}\hat{s}_{i,2} + \hat{g}_{l,i}\left(\hat{s}_{i,1}^2+\hat{s}_{i,2}^2\right) - \left(c_{l,i,1}^2+c_{l,i,2}^2\right)\left(\hat{s}_{i,1}+\hat{s}_{i,2}\right) - 2(2MN+2a)\left(c_{l,i,1}^2\hat{s}_{i,2}+c_{l,i,2}^2\hat{s}_{i,1}\right)\right]\\
d_2 &= \sum_{l=1}^{L}\left[3\hat{g}_{l,i}\hat{s}_{i,1}^2\hat{s}_{i,2} + 3\hat{g}_{l,i}\hat{s}_{i,1}\hat{s}_{i,2}^2 - 3c_{l,i,1}^2\hat{s}_{i,1}\hat{s}_{i,2} - 3c_{l,i,2}^2\hat{s}_{i,1}\hat{s}_{i,2} - (2MN+2a+1)c_{l,i,1}^2\hat{s}_{i,2}^2 - (2MN+2a+1)c_{l,i,2}^2\hat{s}_{i,1}^2\right]\\
d_3 &= \sum_{l=1}^{L}\left[2\hat{g}_{l,i}\hat{s}_{i,1}^2\hat{s}_{i,2}^2 - 2c_{l,i,1}^2\hat{s}_{i,1}\hat{s}_{i,2}^2 - 2c_{l,i,2}^2\hat{s}_{i,1}^2\hat{s}_{i,2}\right]
\end{aligned} \qquad (60)$$
The cubic has at most three roots, which can be found by standard root-finding algorithms. Considering that $\alpha_i \ge 0$, and that the corresponding $\boldsymbol{\Phi}_i$ is not in the support space when $\alpha_i = 0$, the set of positive roots is
$$\Theta = \left\{\alpha_i \,\middle|\, \alpha_i \in \mathbb{R}^+, \ f(\alpha_i) = 0\right\} \qquad (61)$$
The optimal $\alpha_i$ is given by
$$\alpha_i^* = \begin{cases} 0, & \text{if} \ \max_{\alpha_i\in\Theta}\mathcal{L}(\alpha_i) < 0\\ \arg\max_{\alpha_i\in\Theta}\mathcal{L}(\alpha_i), & \text{otherwise}\end{cases} \qquad (62)$$
If $\alpha_i = 0$, the pair $\{\alpha_i, \boldsymbol{\Phi}_i\}$ makes no contribution to $\mathcal{L}(\boldsymbol{\Gamma})$, that is to say, $\mathcal{L}(\alpha_i) = 0$. If $\max_{\alpha_i\in\Theta}\mathcal{L}(\alpha_i) < 0$, we set $\alpha_i^* = 0$ because $\mathcal{L}(0) = 0 > \max_{\alpha_i\in\Theta}\mathcal{L}(\alpha_i)$.
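The root-selection rule in (59)–(62) reduces to solving a cubic and keeping the best positive root, if any; a minimal sketch (illustrative NumPy, generic in the coefficients and in the evaluation of $\mathcal{L}(\alpha_i)$, my own naming):

```python
import numpy as np

def optimal_alpha(d3, d2, d1, d0, L_alpha):
    """Solve f(a) = d3*a^3 + d2*a^2 + d1*a + d0 = 0 (Eq. (59)) and
    pick the positive root maximizing L_alpha, or return 0 (Eq. (62))."""
    roots = np.roots([d3, d2, d1, d0])
    candidates = [r.real for r in roots
                  if abs(r.imag) < 1e-10 and r.real > 0]
    if not candidates:
        return 0.0
    values = [L_alpha(a) for a in candidates]
    # L(0) = 0, so keep the best root only if it actually improves on 0
    return candidates[int(np.argmax(values))] if max(values) > 0 else 0.0
```

Because the polynomial is only cubic, this per-atom step costs a constant number of operations, which is why the search over all $K$ atoms stays cheap.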
In the $(t+1)$-th iteration, we choose the single $\alpha_j, \ 1 \le j \le K$, that maximizes (53), while fixing the others $\{\alpha_i \,|\, 1 \le i \le K, \ i \ne j\}$. The index $j$ is expressed as
$$j = \arg\max_{1\le i\le K}\mathcal{L}(\boldsymbol{\Gamma}) = \arg\max_{1\le i\le K}\left(\mathcal{L}_{-i} + \mathcal{L}(\alpha_i^*)\right) \qquad (63)$$
To avoid calculating $\mathcal{L}_{-i}$ for every $i$, we define
$$\Delta\mathcal{L}_i \triangleq \mathcal{L}_{-i} + \mathcal{L}(\alpha_i^*) - \mathcal{L}\left(\boldsymbol{\Gamma}^{(t)}\right) = \mathcal{L}_{-i} + \mathcal{L}(\alpha_i^*) - \mathcal{L}_{-i} - \mathcal{L}\left(\alpha_i^{(t)}\right) = \mathcal{L}(\alpha_i^*) - \mathcal{L}\left(\alpha_i^{(t)}\right) \qquad (64)$$
Since $\mathcal{L}(\boldsymbol{\Gamma}^{(t)})$ is a fixed constant in the $(t+1)$-th iteration, the index $j$ can also be expressed as
$$j = \arg\max_{1\le i\le K}\Delta\mathcal{L}_i = \arg\max_{1\le i\le K}\left(\mathcal{L}(\alpha_i^*) - \mathcal{L}\left(\alpha_i^{(t)}\right)\right) \qquad (65)$$
The next step is to change the value of $\alpha_j$ while fixing the others:
$$\alpha_j^{(t+1)} = \alpha_j^* \qquad (66)$$
$$\alpha_i^{(t+1)} = \alpha_i^{(t)}, \quad 1 \le i \le K, \ i \ne j \qquad (67)$$
If $\alpha_j^{(t+1)} > 0$ and $\alpha_j^{(t)} = 0$ (i.e., $\boldsymbol{\Phi}_j$ is not yet in $\boldsymbol{\psi}^{(t)}$), add $\boldsymbol{\Phi}_j$ to $\boldsymbol{\psi}^{(t)}$: $\boldsymbol{\psi}^{(t+1)} = [\boldsymbol{\psi}^{(t)}, \boldsymbol{\Phi}_j]$ and $\boldsymbol{\Omega}^{(t+1)} = \mathrm{diag}(\boldsymbol{\Omega}^{(t)}, \alpha_j^{(t+1)}\mathbf{I}_2)$. If $\alpha_j^{(t+1)} > 0$ and $\alpha_j^{(t)} > 0$ (i.e., $\boldsymbol{\Phi}_j$ is already in $\boldsymbol{\psi}^{(t)}$, so $\boldsymbol{\psi}^{(t+1)} = \boldsymbol{\psi}^{(t)}$), replace $\alpha_j^{(t)}$ with $\alpha_j^{(t+1)}$ in $\boldsymbol{\Omega}^{(t)}$. If $\alpha_j^{(t+1)} = 0$ and $\alpha_j^{(t)} > 0$, delete $\boldsymbol{\Phi}_j$ from $\boldsymbol{\psi}^{(t)}$ and delete $\alpha_j^{(t)}$ from $\boldsymbol{\Omega}^{(t)}$. If $\alpha_j^{(t+1)} = 0$ and $\alpha_j^{(t)} = 0$ (i.e., $\boldsymbol{\Gamma}^{(t+1)} = \boldsymbol{\Gamma}^{(t)}$), stop the iteration because the procedure has converged to a steady state. Finally, $\boldsymbol{\psi}^*$ is the exact support space of the data and $\boldsymbol{\Omega}^*$ is the set of non-zero values in the exact $\boldsymbol{\Gamma}$, where the superscript $*$ denotes the value at the stopping criterion.

4.4. Fast Computation of $\hat{\mathbf{S}}_i$, $\hat{\mathbf{q}}_{l,i}$ and $\hat{g}_{l,i}$

In each iteration, updating $\hat{\mathbf{S}}_i$, $\hat{\mathbf{q}}_{l,i}$ and $\hat{g}_{l,i}$ requires the matrix inverse of every $\mathbf{C}_{-i}, \ \forall i$. To reduce the computational complexity, a fast means of updating them is needed.
Define
$$\mathbf{S}_i \triangleq \boldsymbol{\Phi}_i^T\mathbf{C}^{-1}\boldsymbol{\Phi}_i, \quad \mathbf{q}_{l,i} \triangleq \boldsymbol{\Phi}_i^T\mathbf{C}^{-1}\mathbf{y}_l, \quad g_l \triangleq \mathbf{y}_l^T\mathbf{C}^{-1}\mathbf{y}_l + 2b \qquad (68)$$
If $\hat{\mathbf{S}}_i$, $\hat{\mathbf{q}}_{l,i}$ and $\hat{g}_{l,i}$ can be computed from $\mathbf{S}_i$, $\mathbf{q}_{l,i}$ and $g_l$, the computational complexity is greatly reduced, because only the single matrix inverse of $\mathbf{C}$ is required.
Substituting (51) into (68), we arrive at the following formula:
$$\mathbf{S}_i = \boldsymbol{\Phi}_i^T\mathbf{C}^{-1}\boldsymbol{\Phi}_i = \hat{\mathbf{S}}_i - \hat{\mathbf{S}}_i\left(\alpha_i^{-1}\mathbf{I}_2+\hat{\mathbf{S}}_i\right)^{-1}\hat{\mathbf{S}}_i = \mathbf{V}_i\begin{bmatrix}\frac{\hat{s}_{i,1}}{1+\alpha_i\hat{s}_{i,1}} & \\ & \frac{\hat{s}_{i,2}}{1+\alpha_i\hat{s}_{i,2}}\end{bmatrix}\mathbf{V}_i^T \qquad (69)$$
From (56) and (69), we conclude that the eigen-matrix of $\mathbf{S}_i$ is the same as that of $\hat{\mathbf{S}}_i$. The EVD of $\mathbf{S}_i$ can thus also be expressed as
$$\mathbf{S}_i = \mathbf{V}_i\begin{bmatrix}s_{i,1} & \\ & s_{i,2}\end{bmatrix}\mathbf{V}_i^T \qquad (70)$$
where $s_{i,j}$ denotes the $j$-th eigenvalue of $\mathbf{S}_i$:
$$s_{i,j} = \frac{\hat{s}_{i,j}}{1+\alpha_i\hat{s}_{i,j}}, \quad j = 1, 2 \qquad (71)$$
Thus, $\hat{\mathbf{S}}_i$ can be obtained via the EVD of $\mathbf{S}_i$:
$$\hat{\mathbf{S}}_i = \mathbf{V}_i\begin{bmatrix}\frac{s_{i,1}}{1-\alpha_i s_{i,1}} & \\ & \frac{s_{i,2}}{1-\alpha_i s_{i,2}}\end{bmatrix}\mathbf{V}_i^T \qquad (72)$$
$\hat{\mathbf{q}}_{l,i}$ and $\hat{g}_{l,i}$ can also be computed from $\mathbf{q}_{l,i}$ and $g_l$, respectively, via the EVD of $\mathbf{S}_i$:
$$\mathbf{q}_{l,i} = \boldsymbol{\Phi}_i^T\mathbf{C}^{-1}\mathbf{y}_l = \hat{\mathbf{q}}_{l,i} - \hat{\mathbf{S}}_i\left(\alpha_i^{-1}\mathbf{I}_2+\hat{\mathbf{S}}_i\right)^{-1}\hat{\mathbf{q}}_{l,i} = \left(\mathbf{I}_2 - \hat{\mathbf{S}}_i\left(\alpha_i^{-1}\mathbf{I}_2+\hat{\mathbf{S}}_i\right)^{-1}\right)\hat{\mathbf{q}}_{l,i} \qquad (73)$$
Thus,
$$\hat{\mathbf{q}}_{l,i} = \left(\mathbf{I}_2 - \hat{\mathbf{S}}_i\left(\alpha_i^{-1}\mathbf{I}_2+\hat{\mathbf{S}}_i\right)^{-1}\right)^{-1}\mathbf{q}_{l,i} = \mathbf{V}_i\begin{bmatrix}\frac{1}{1-\alpha_i s_{i,1}} & \\ & \frac{1}{1-\alpha_i s_{i,2}}\end{bmatrix}\mathbf{V}_i^T\mathbf{q}_{l,i} \qquad (74)$$
Similarly,
$$g_l = \mathbf{y}_l^T\mathbf{C}^{-1}\mathbf{y}_l + 2b = \hat{g}_{l,i} - \hat{\mathbf{q}}_{l,i}^T\left(\alpha_i^{-1}\mathbf{I}_2+\hat{\mathbf{S}}_i\right)^{-1}\hat{\mathbf{q}}_{l,i} = \hat{g}_{l,i} - \hat{\mathbf{q}}_{l,i}^T\mathbf{V}_i\begin{bmatrix}\alpha_i\left(1-\alpha_i s_{i,1}\right) & \\ & \alpha_i\left(1-\alpha_i s_{i,2}\right)\end{bmatrix}\mathbf{V}_i^T\hat{\mathbf{q}}_{l,i} \qquad (75)$$
Using (74), we can obtain
$$\hat{g}_{l,i} = g_l + \mathbf{q}_{l,i}^T\mathbf{V}_i\begin{bmatrix}\frac{\alpha_i}{1-\alpha_i s_{i,1}} & \\ & \frac{\alpha_i}{1-\alpha_i s_{i,2}}\end{bmatrix}\mathbf{V}_i^T\mathbf{q}_{l,i} \qquad (76)$$
Therefore, $\hat{\mathbf{S}}_i$, $\hat{\mathbf{q}}_{l,i}$ and $\hat{g}_{l,i}$ can be obtained from $\mathbf{S}_i$, $\mathbf{q}_{l,i}$ and $g_l$ using (72), (74) and (76).
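These rank-two update identities are easy to verify numerically; the following sketch (arbitrary test matrices and my own naming, not from the paper) recovers $\hat{\mathbf{S}}_i$ from $\mathbf{S}_i$ via (72) and checks it against the direct definition (54):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, alpha = 8, 0.7                    # 2MN = 8 and alpha_i = 0.7: test values
A = rng.standard_normal((dim, 3))
C_mi = np.eye(dim) + A @ A.T           # C_{-i}: symmetric positive definite
Phi = rng.standard_normal((dim, 2))    # a real 2MN x 2 block in place of Phi_i
C = C_mi + alpha * Phi @ Phi.T         # Eq. (50)

S_hat = Phi.T @ np.linalg.solve(C_mi, Phi)   # direct S_hat_i, Eq. (54)
S = Phi.T @ np.linalg.solve(C, Phi)          # S_i, Eq. (68)

# S_i and S_hat_i share the eigenbasis V_i, so Eq. (72) recovers S_hat_i
s, V = np.linalg.eigh(S)
S_hat_rec = V @ np.diag(s / (1 - alpha * s)) @ V.T
assert np.allclose(S_hat_rec, S_hat)
```

Since every matrix involved is only $2\times 2$ once projected through $\boldsymbol{\Phi}_i$, the cost of these conversions is negligible, which is the point of Section 4.4.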
Compared with (68), there is another approach with even lower computational complexity for calculating $\mathbf{S}_i$, $\mathbf{q}_{l,i}$ and $g_l$. Formula (43) is equivalent to
$$\mathbf{C} = \mathbf{I}_{2MN} + \boldsymbol{\psi}\boldsymbol{\Omega}\boldsymbol{\psi}^T \qquad (77)$$
With the matrix inversion lemma and (77), it is faster and more convenient to update $\mathbf{S}_i$, $\mathbf{q}_{l,i}$ and $g_l$ with the following formulae than with (68):
$$\mathbf{S}_i = \boldsymbol{\Phi}_i^T\boldsymbol{\Phi}_i - \boldsymbol{\Phi}_i^T\boldsymbol{\psi}\boldsymbol{\Sigma}\boldsymbol{\psi}^T\boldsymbol{\Phi}_i, \quad \mathbf{q}_{l,i} = \boldsymbol{\Phi}_i^T\mathbf{y}_l - \boldsymbol{\Phi}_i^T\boldsymbol{\psi}\boldsymbol{\Sigma}\boldsymbol{\psi}^T\mathbf{y}_l, \quad g_l = \mathbf{y}_l^T\mathbf{y}_l - \mathbf{y}_l^T\boldsymbol{\psi}\boldsymbol{\Sigma}\boldsymbol{\psi}^T\mathbf{y}_l + 2b \qquad (78)$$
where $\boldsymbol{\Sigma}$ herein represents the covariance of the non-zero part of $\boldsymbol{\Gamma}$:
$$\boldsymbol{\Sigma} = \left(\boldsymbol{\psi}^T\boldsymbol{\psi} + \boldsymbol{\Omega}^{-1}\right)^{-1} \qquad (79)$$
Additionally, the mean of the non-zero part of $\boldsymbol{\Gamma}$ herein is expressed as
$$\boldsymbol{\mu}_l = \boldsymbol{\Sigma}\boldsymbol{\psi}^T\mathbf{y}_l, \quad l = 1, \ldots, L \qquad (80)$$
The computational complexity is measured in terms of the number of multiplications. Assume the dimension of $\boldsymbol{\psi}$ is $2MN\times r$; $r$ is not fixed in each iteration but satisfies $r \ll MN < K$ because the measurements are sparse. Calculating $\mathbf{S}_i$ with (68) costs on the order of $O\left((2MN)^3 + 8(MN)^2 + 8MN\right)$ multiplications, whereas with (78) it costs on the order of $O\left(r^3 + 16MN + 8MNr + 2r^2\right)$. The latter operation is clearly faster and more convenient.
The proposed M-FMLM-STAP algorithm is shown in Algorithm 2. The detailed update formulae are given in Appendix A.
Algorithm 2: Pseudocode for M-FMLM-STAP algorithm.
Step 1: Input: the original data X, the original dictionary D and a = b = 0 .
Step 2: Construct $\mathbf{Y} = [\mathrm{Re}(\mathbf{X}); \mathrm{Im}(\mathbf{X})]$ and $\boldsymbol{\Psi}$ using (30) and (32).
Step 3: Initialize: $\mathbf{C} = \mathbf{C}_{-i} = \mathbf{I}$, $\boldsymbol{\alpha} = \mathbf{0}$ and $\boldsymbol{\psi} = \emptyset$.
Step 4: while not converged do
    Choose the single candidate $j \in \{1, 2, \ldots, K\}$ using (65) and find the optimal $\alpha_j^*$ using (62).
    If α j t + 1 > 0 and α j t = 0 , then
       $\boldsymbol{\psi}^{(t+1)} = [\boldsymbol{\psi}^{(t)}, \boldsymbol{\Phi}_j]$, and $\boldsymbol{\Omega}^{(t+1)} = \mathrm{diag}(\boldsymbol{\Omega}^{(t)}, \alpha_j^{(t+1)}\mathbf{I}_2)$.
    Otherwise, if α j t + 1 > 0 and α j t > 0 , then
       ψ t + 1 = ψ t , and replace α j t with α j t + 1 .
    Otherwise, if $\alpha_j^{(t+1)} = 0$ and $\alpha_j^{(t)} > 0$, then
      delete Φ j from ψ t , and delete α j t from Ω t .
    end
    Update Σ , μ , S i , q l , i and g l referring to Appendix A.
   end while
Step 5: Estimate the CNCM R by
     $$\mathbf{R}^* = \frac{1}{L}\sum_{l=1}^{L}\sum_{i=1}^{J}\left(\left(\mu_{l,2i-1}^*\right)^2 + \left(\mu_{l,2i}^*\right)^2\right)\boldsymbol{\upsilon}_{\omega_i}\boldsymbol{\upsilon}_{\omega_i}^H + \beta\mathbf{I}$$
  where the vector $\boldsymbol{\upsilon}_{\omega_i}$ is the $\omega_i$-th column of $\mathbf{D}$, $J$ is the number of non-zeros in $\boldsymbol{\alpha}$ and $\beta$ is a load factor. The superscript $*$ denotes the value at the stopping criterion.
Step 6: Compute the space-time adaptive weight w using (7).
Step 7: The output of M-FMLM-STAP is Z = w H X .
The main symbols aforementioned are listed in Table 1.

5. Complexity Analysis and Convergence Analysis

5.1. Complexity Analysis

Here, we compare the computational complexities of the proposed M-FMLM-STAP algorithm and the M-SBL-STAP algorithm for a single iteration, measured in terms of the number of multiplications.
First, consider the M-SBL-STAP algorithm. Its main computational cost lies in formulae (17) and (18). Noting that $\boldsymbol{\Lambda}$ in (17) is a diagonal matrix, the complexity of (17) is on the order of $O(K^3 + K^2MN)$, and that of (18) is on the order of $O(K^2MN + KMNL)$. Thus, the complexity of the M-SBL-STAP algorithm is on the order of $O(K^3 + 2K^2MN + KMNL)$.
Second, consider the M-FMLM-STAP algorithm. Since the dimension of $\mathbf{S}_i$ is $2\times 2$, the cost of its EVD is negligible; that is, the cost of computing $\hat{\mathbf{S}}_i$, $\hat{\mathbf{q}}_{l,i}$ and $\hat{g}_{l,i}$ from $\mathbf{S}_i$, $\mathbf{q}_{l,i}$ and $g_l$ can be ignored. Meanwhile, the cost of solving the polynomial (59) for all $i, \ 1\le i\le K$, and of finding the index in (65) is directly proportional to $K$, which is also negligible. Thus, the main cost of the M-FMLM-STAP algorithm lies in updating $\mathbf{S}_i$, $\mathbf{q}_{l,i}$, $g_l$, $\boldsymbol{\Sigma}$ and $\boldsymbol{\mu}$. Assume the dimension of $\boldsymbol{\psi}$ is $2MN\times r$; $r$ varies across iterations but satisfies $r \ll MN < K$ because the measurements are sparse. The complexity of the term $\boldsymbol{\Phi}_i^T - \boldsymbol{\Phi}_i^T\boldsymbol{\psi}\boldsymbol{\Sigma}\boldsymbol{\psi}^T$ is on the order of $O(8MNr + 2r^2)$. Thus, over all $i$ and $l$, the complexity of (78) is on the order of $O\left((8MNr + 2r^2)K + 8MNK + 4MNLK + (4MNr + r^2 + 4MN)L\right)$. The complexities of (79) and (80) are on the order of $O(2MNr^2 + r^3)$ and $O(2MNr^2 + 2MNrL)$, respectively. Ignoring low-order terms, the complexity of the M-FMLM-STAP algorithm for a single iteration is on the order of $O(8MNrK + 4MNLK + 6MNrL)$.
Figure 1 illustrates the complexity requirements of the two algorithms for a single iteration, as a function of the number of pulses $M$. Suppose that $\varphi_S = \varphi_d = 4$, $N = 10$ and $L = 6$. Although the value of $r$ is unknown and varies per iteration, we assume it is twice the clutter rank in the current iteration. We conclude that the computational complexity of the M-SBL-STAP algorithm is far higher than that of the M-FMLM-STAP algorithm for a single iteration.

5.2. Convergence Analysis

It is obvious that the logarithm of the marginal likelihood function $\mathcal{L}(\boldsymbol{\Gamma})$ has an upper bound [31]. From (62), in the $t$-th iteration, $\mathcal{L}(\alpha_i^*)$ is the maximal value of $\mathcal{L}(\alpha_i)$, that is to say, $\mathcal{L}(\alpha_i^*) \ge \mathcal{L}(\alpha_i^{(t)})$. Therefore, the condition $\mathcal{L}(\boldsymbol{\Gamma}^{(t+1)}) \ge \mathcal{L}(\boldsymbol{\Gamma}^{(t)}), \ \forall t$, is guaranteed in each iteration by (65). Thus, the convergence of the proposed algorithm is guaranteed.

6. Performance Assessment

In this section, we first verify the performance of the loaded sample matrix inversion (LSMI) algorithm, the multiple orthogonal matching pursuit (M-OMP-STAP) algorithm [32], the M-SBL-STAP algorithm and the proposed M-FMLM-STAP algorithm with simulated side-looking ULA data in both the ideal case and the case with array errors. The first two algorithms are common STAP approaches. We then assess the performance of the M-SBL-STAP algorithm and the proposed M-FMLM-STAP algorithm on the Mountain-Top data. We use the improvement factor (IF) and the signal-to-interference-plus-noise ratio (SINR) loss as two performance measures, which are expressed as
$$\mathrm{IF}=\frac{\left|w^{H}s\right|^{2}\,\mathrm{tr}\!\left(R\right)}{\left(w^{H}Rw\right)\left(s^{H}s\right)}$$
$$L_{\mathrm{SINR}}=\frac{\sigma^{2}\left|w^{H}s\right|^{2}}{MN\,w^{H}Rw}$$
where $w$ is the STAP weight vector, $s$ is the space-time steering vector, and $R$ is the exact CNCM of the CUT.
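For reference, both measures can be computed directly from a weight vector $w$, a steering vector $s$ and a covariance matrix $R$; the helper names below are illustrative:

```python
import numpy as np

def improvement_factor(w, s, R):
    # IF = |w^H s|^2 tr(R) / ((w^H R w)(s^H s))
    num = np.abs(np.vdot(w, s)) ** 2 * np.trace(R).real
    den = np.vdot(w, R @ w).real * np.vdot(s, s).real
    return num / den

def sinr_loss(w, s, R, sigma2, M, N):
    # L_SINR = sigma^2 |w^H s|^2 / (M N w^H R w)
    return sigma2 * np.abs(np.vdot(w, s)) ** 2 / (M * N * np.vdot(w, R @ w).real)

# Sanity check: noise-only R = sigma^2 I and a matched filter w = s with a
# unit-modulus MN-dimensional steering vector give IF = MN and L_SINR = 1 (0 dB).
M, N, sigma2 = 4, 4, 2.0
s = np.ones(M * N, dtype=complex)
w = s
R = sigma2 * np.eye(M * N)
print(improvement_factor(w, s, R), sinr_loss(w, s, R, sigma2, M, N))
```

`np.vdot` conjugates its first argument, so `np.vdot(w, s)` implements $w^{H}s$.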

6.1. Simulated Data

The parameters of a side-looking phased-array radar are listed in Table 2. The 600th range gate is chosen as the CUT. According to the radar system parameters, the slope of the clutter ridge is 1. We set the resolution scales $\varphi_S=5$ and $\varphi_d=5$. A total of six training samples are utilized for the two SR-based algorithms, and the parameters are set to $a=b=0$.
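The unit clutter-ridge slope follows directly from the Table 2 parameters via $\beta = 2\vartheta/(d\,f_{\mathrm{PRF}})$:

```python
# Clutter-ridge slope from the Table 2 parameters.
v_p, d, f_prf = 150.0, 0.15, 2000.0   # platform velocity, element spacing, PRF
beta = 2 * v_p / (d * f_prf)
print(beta)  # 1.0
```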
Figure 2 shows five clutter power spectrums for the side-looking ULA in the ideal case. Figure 2a–e show the clutter power spectrums estimated by the exact CNCM, the LSMI algorithm, the M-OMP-STAP algorithm, the M-SBL-STAP algorithm and the M-FMLM-STAP algorithm, respectively. We note that the clutter spectrums obtained by the M-SBL-STAP and M-FMLM-STAP algorithms are closer to the exact spectrum, in both location and power, than those of the other algorithms in the ideal case. As shown in Figure 3, the IF curve achieved by the M-SBL-STAP algorithm is a little better than that achieved by the M-FMLM-STAP algorithm, because the latter is sensitive to noise fluctuation. However, the latter requires far less computation, and the loss in performance is offset by the decrease in computational complexity. On a computer equipped with two Intel(R) Xeon(R) Gold 6140 CPUs @ 2.30 GHz, the former takes more than 40 s and the latter less than 4 s on average. The larger the value of $K$, the larger the gap in computational complexity between the M-SBL-STAP and M-FMLM-STAP algorithms.
In Figure 4, we apply the four approaches in the non-ideal case with Gaussian amplitude errors (standard deviation 0.03) and random phase errors (standard deviation 2°). The errors are identical in all directions. Figure 4a–e show the clutter power spectrums estimated by the exact CNCM, the LSMI algorithm, the M-OMP-STAP algorithm, the M-SBL-STAP algorithm and the M-FMLM-STAP algorithm, respectively. We note that the clutter spectrums obtained by the M-SBL-STAP and M-FMLM-STAP algorithms are much closer to the exact spectrum, in terms of both location and power, in the non-ideal case. As shown in Figure 5, the notch of the IF curve achieved by the M-SBL-STAP algorithm is close to that achieved by the M-FMLM-STAP algorithm, which means the performance of the two algorithms is about the same in the non-ideal case. However, the running time of the latter is much less than that of the former.
Figure 6 presents the average SINR loss, defined as the mean of the SINR loss over the entire normalized Doppler frequency range, versus the number of training samples for the M-SBL-STAP and M-FMLM-STAP algorithms. The average SINR loss achieved by the M-SBL-STAP algorithm is a little better than that of the M-FMLM-STAP algorithm, but by less than 0.5 dB, which can be ignored in practice. Moreover, the gap in computational complexity between the two algorithms grows as the DOF of the radar system increases. All presented results are averaged over 100 independent Monte Carlo runs.

6.2. Measured Data

We apply the M-SBL-STAP and M-FMLM-STAP algorithms to the publicly available Mountain-Top data set, i.e., t38pre01v1 CPI6 [33]. In this data file, the numbers of array elements and coherent pulses are 14 and 16, respectively. There are 403 range cells, and the clutter is located around 245° relative to true North. The target is located in the 147th range cell at an azimuth angle of 275° relative to true North. The normalized target Doppler frequency is 0.25. The clutter Capon spectrum estimated using all 403 range cells is shown in Figure 7.
Figure 8 depicts the STAP output power of the M-SBL-STAP and M-FMLM-STAP algorithms over range cells 130 to 165, where 10 of the 20 range cells adjacent to the CUT are selected as training samples. The target is located in the 147th range cell and is detected by both algorithms. The detection performance of the proposed M-FMLM-STAP algorithm is clearly close to that of the M-SBL-STAP algorithm, while its operation time is much shorter. Therefore, the proposed algorithm is applicable in practice.

7. Conclusions

In this paper, we have extended the real-valued multitask compressive sensing technique to suppress complex-valued heterogeneous clutter for airborne radar. Unlike the conventional M-SBL-STAP algorithm, the proposed algorithm avoids inverting a $K\times K$ matrix at each iteration, which enables real-time operation. We integrate the noise variance $\sigma^2$ out instead of estimating it, thereby overcoming the problem that an accurate value of $\sigma^2$ is unavailable. Because the modified hierarchical model is not suitable for complex-valued signals, the complex-valued multitask clutter data are translated into group-sparse real-valued signals. Finally, simulation results demonstrate the high computational efficiency and strong performance of the proposed algorithm.
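The complex-to-real translation summarized above can be illustrated with the standard real embedding of a complex linear model; the exact ordering and grouping used in the paper may differ, so this is only a sketch:

```python
import numpy as np

# One standard way to turn a complex model y = Phi @ x into an equivalent
# real-valued one by stacking real and imaginary parts (illustrative sketch).
def real_embed(Phi, y):
    Phi_r = np.block([[Phi.real, -Phi.imag],
                      [Phi.imag,  Phi.real]])
    y_r = np.concatenate([y.real, y.imag])
    return Phi_r, y_r

rng = np.random.default_rng(0)
Phi = rng.standard_normal((6, 4)) + 1j * rng.standard_normal((6, 4))
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
Phi_r, y_r = real_embed(Phi, Phi @ x)
x_r = np.concatenate([x.real, x.imag])      # stacked real/imaginary coefficients
assert np.allclose(Phi_r @ x_r, y_r)        # real system reproduces the complex product
```

Because the real and imaginary parts of one complex coefficient share a common sparsity pattern, they are naturally treated as a size-2 group sharing one hyper-parameter.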

Author Contributions

Conceptualization, C.L. and T.W.; methodology, C.L. and T.W.; software, C.L.; validation, C.L., S.Z. and B.R.; writing—original draft preparation, C.L.; writing—review and editing, S.Z.; visualization, C.L.; supervision, T.W.; project administration, T.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Key R&D Program of China, grant number 2021YFA1000400.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

For notational convenience, we omit the superscript $t$ of the $t$-th iteration and use $\tilde{\cdot}$ to denote quantities at the $(t+1)$-th iteration.

Appendix A.1. Adding a New Basis Group Function ($\tilde{\alpha}_i>0$ && $\alpha_i=0$)

When $\alpha_i=0$, $C=C_{-i}$.
$$\Delta L_i=\frac{1}{2}\sum_{l=1}^{L}\left[(2NM+2a)\log\!\left(1-\frac{q_{l,i}^{T}\big(\tilde{\alpha}_i^{-1}I_2+S_i\big)^{-1}q_{l,i}}{g_l}\right)+\log\big|\tilde{\alpha}_i I_2\big|+\log\big|\tilde{\alpha}_i^{-1}I_2+S_i\big|\right]$$
$$\tilde{\Sigma}=\begin{bmatrix}\Sigma+\Sigma\psi^{T}\Phi_i Z\Phi_i^{T}\psi\Sigma & -\Sigma\psi^{T}\Phi_i Z\\ -\big(\Sigma\psi^{T}\Phi_i Z\big)^{T} & Z\end{bmatrix}$$
$$\tilde{\mu}_l=\begin{bmatrix}\mu_l-\Sigma\psi^{T}\Phi_i Z q_{l,i}\\ Z q_{l,i}\end{bmatrix}$$
$$\tilde{S}_i=S_i-S_i Z S_i=S_i-\Phi_i^{T}E_i Z E_i^{T}\Phi_i$$
$$\tilde{q}_{l,i}=q_{l,i}-S_i Z q_{l,i}=q_{l,i}-\Phi_i^{T}E_i Z q_{l,i}$$
$$\tilde{g}_l=g_l-q_{l,i}^{T}Z q_{l,i}=g_l-y_l^{T}E_i Z E_i^{T}y_l$$
where $Z=\big(\tilde{\alpha}_i^{-1}I_2+S_i\big)^{-1}$ and $E_i=\Phi_i-\psi\Sigma\psi^{T}\Phi_i$.

Appendix A.2. Re-Estimating a Basis Group Function ($\tilde{\alpha}_i>0$ && $\alpha_i>0$)

$i$ represents the index of the $i$-th group in the whole dictionary matrix, and $g$ represents the position index of this group in the used basis matrix $\psi$. $\tilde{\alpha}_i$ is the re-estimated hyper-parameter, $\Delta\alpha_i=\tilde{\alpha}_i-\alpha_i$, $\Sigma_g=\Sigma_{:,\,2g-1:2g}$, and $\Sigma_{gg}=\Sigma_{2g-1:2g,\,2g-1:2g}$.
$$\Delta L_i=\frac{1}{2}\sum_{l=1}^{L}\left[(2NM+2a)\log\!\left(1-\frac{\hat{q}_{l,i}^{T}\big(\tilde{\alpha}_i^{-1}I_2+\hat{S}_i\big)^{-1}\hat{q}_{l,i}}{\hat{g}_{l,i}}\right)+\log\big|\tilde{\alpha}_i I_2\big|+\log\big|\tilde{\alpha}_i^{-1}I_2+\hat{S}_i\big|-(2NM+2a)\log\!\left(1-\frac{\hat{q}_{l,i}^{T}\big(\alpha_i^{-1}I_2+\hat{S}_i\big)^{-1}\hat{q}_{l,i}}{\hat{g}_{l,i}}\right)-\log\big|\alpha_i I_2\big|-\log\big|\alpha_i^{-1}I_2+\hat{S}_i\big|\right]$$
$$\tilde{\Sigma}=\Sigma-\Sigma_g G\Sigma_g^{T}$$
$$\tilde{\mu}=\mu-\Sigma_g G\mu_g$$
$$\tilde{S}_i=S_i+\Phi_i^{T}\psi\Sigma_g G\Sigma_g^{T}\psi^{T}\Phi_i$$
$$\tilde{q}_{l,i}=q_{l,i}+\Phi_i^{T}\psi\Sigma_g G\Sigma_g^{T}\psi^{T}y_l$$
$$\tilde{g}_l=g_l+y_l^{T}\psi\Sigma_g G\Sigma_g^{T}\psi^{T}y_l$$
where $G=\left(\dfrac{\alpha_i\tilde{\alpha}_i}{\Delta\alpha_i}I_2+\Sigma_{gg}\right)^{-1}$ and $\mu_g=\Sigma_g^{T}\psi^{T}y$ denotes the $g$-th $2\times 1$ sub-vector of $\mu$.

Appendix A.3. Deleting a Basis Group Function ($\tilde{\alpha}_i=0$ && $\alpha_i>0$)

$i$ represents the index of the $i$-th group in the whole dictionary matrix, and $g$ represents the position index of this group, which is removed from the used basis matrix $\psi$.
$$\Delta L_i=-\frac{1}{2}\sum_{l=1}^{L}\left[(2NM+2a)\log\!\left(1-\frac{\hat{q}_{l,i}^{T}\big(\alpha_i^{-1}I_2+\hat{S}_i\big)^{-1}\hat{q}_{l,i}}{\hat{g}_{l,i}}\right)+\log\big|\alpha_i I_2\big|+\log\big|\alpha_i^{-1}I_2+\hat{S}_i\big|\right]$$
$$\tilde{\Sigma}=\Sigma-\Sigma_g\Sigma_{gg}^{-1}\Sigma_g^{T}$$
$$\tilde{\mu}=\mu-\Sigma_g\Sigma_{gg}^{-1}\mu_g$$
$$\tilde{S}_i=S_i+\Phi_i^{T}\psi\Sigma_g\Sigma_{gg}^{-1}\Sigma_g^{T}\psi^{T}\Phi_i$$
$$\tilde{q}_{l,i}=q_{l,i}+\Phi_i^{T}\psi\Sigma_g\Sigma_{gg}^{-1}\Sigma_g^{T}\psi^{T}y_l$$
$$\tilde{g}_l=g_l+y_l^{T}\psi\Sigma_g\Sigma_{gg}^{-1}\Sigma_g^{T}\psi^{T}y_l$$
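The covariance update used when deleting a group is an instance of the block-matrix (Schur complement) inversion identity: removing a group's rows and columns from the precision matrix leaves a posterior covariance equal to $\Sigma-\Sigma_g\Sigma_{gg}^{-1}\Sigma_g^{T}$ restricted to the remaining indices. A minimal numerical check with assumed toy dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, g = 8, [4, 5]                       # delete one 2x2 group (indices assumed)
A = rng.standard_normal((n, n))
H = A @ A.T + n * np.eye(n)            # SPD "precision" matrix
Sigma = np.linalg.inv(H)

Sigma_g  = Sigma[:, g]                 # Sigma_g  (n x 2 column block)
Sigma_gg = Sigma[np.ix_(g, g)]         # Sigma_gg (2 x 2 diagonal block)
Sigma_new = Sigma - Sigma_g @ np.linalg.inv(Sigma_gg) @ Sigma_g.T

# Direct route: invert H with the group's rows/columns removed.
keep = [i for i in range(n) if i not in g]
direct = np.linalg.inv(H[np.ix_(keep, keep)])
assert np.allclose(Sigma_new[np.ix_(keep, keep)], direct)
```

This is why the deletion step needs only a $2\times 2$ inverse rather than a full re-inversion.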

References

1. Ward, J. Space-Time Adaptive Processing for Airborne Radar; Technical Report; MIT Lincoln Laboratory: Lexington, MA, USA, 1998. [Google Scholar]
  2. Reed, I.S.; Mallet, J.D.; Brennan, L.E. Rapid convergence rate in adaptive arrays. IEEE Trans. Aerosp. Electron. Syst. 1974, AES-10, 853–863. [Google Scholar] [CrossRef]
3. Dipietro, R.C. Extended factored space-time processing for airborne radar systems. In Proceedings of the 26th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 26–28 October 1992; pp. 425–430. [Google Scholar]
  4. Haimovich, A. The eigencanceler: Adaptive radar by eigenanalysis methods. IEEE Trans. Aerosp. Electron. Syst. 1996, 32, 532–542. [Google Scholar] [CrossRef]
  5. Melvin, W.L. A STAP overview. IEEE Aerosp. Electron. Syst. Mag. 2004, 19, 19–35. [Google Scholar] [CrossRef]
  6. Sarkar, T.K.; Park, S.; Koh, J. A deterministic least squares approach to space time adaptive processing. Digit. Signal Process. 1996, 6, 185–194. [Google Scholar] [CrossRef] [Green Version]
  7. Capraro, C.T.; Capraro, G.T.; Bradaric, I.; Weiner, D.D.; Wicks, M.C.; Baldygo, W.J. Implementing digital terrain data in knowledge-aided space-time adaptive processing. IEEE Trans. Aerosp. Electron. Syst. 2006, 42, 1080–1097. [Google Scholar] [CrossRef]
  8. Capraro, C.T.; Capraro, G.T.; Weiner, D.D.; Wicks, M.C.; Baldygo, W.J. Improved STAP performance using knowledge-aided secondary data selection. In Proceedings of the 2004 IEEE Radar Conference, Philadelphia, PA, USA, 29 April 2004. [Google Scholar]
  9. Melvin, W.; Wicks, M.; Antonik, P.; Salama, Y.; Li, P.; Schuman, H. Knowledge-based space-time adaptive processing for airborne early warning radar. IEEE Aerosp. Electron. Syst. Mag. 1998, 13, 37–42. [Google Scholar] [CrossRef]
  10. Bergin, J.S.; Teixeira, C.M.; Techau, P.M. Improved clutter mitigation performance using knowledge-aided space-time adaptive processing. IEEE Trans. Aerosp. Electron. Syst. 2006, 42, 997–1009. [Google Scholar] [CrossRef]
  11. Donoho, D.L.; Elad, M.; Temlyakov, V.N. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inf. Theory 2006, 52, 6–18. [Google Scholar] [CrossRef]
  12. Sun, K.; Zhang, H.; Li, G.; Meng, H.D.; Wang, X.Q. A novel STAP algorithm using sparse recovery technique. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Cape Town, South Africa, 12–17 July 2009; Volume 1, pp. 3761–3764. [Google Scholar]
  13. Yang, Z.C.; Li, X.; Wang, H.Q.; Jiang, W.D. On clutter sparsity analysis in space-time adaptive processing airborne radar. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1214–1218. [Google Scholar] [CrossRef]
  14. Yang, Z.C.; de Lamare, R.C.; Li, X. L1-regularized STAP algorithm with a generalized sidelobe canceler architecture for airborne radar. IEEE Trans. Signal Process. 2012, 60, 674–686. [Google Scholar] [CrossRef]
  15. Sen, S. Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar. IEEE J. Sel. Top. Signal Process. 2015, 9, 1510–1523. [Google Scholar] [CrossRef]
16. Tipping, M.E. Sparse Bayesian learning and the relevance vector machine. J. Mach. Learn. Res. 2001, 1, 211–244. [Google Scholar]
  17. Wipf, D.P.; Rao, B.D. Sparse Bayesian learning for basis selection. IEEE Trans. Signal Process. 2004, 52, 2153–2164. [Google Scholar] [CrossRef]
  18. Wipf, D.P.; Rao, B.D. An empirical Bayesian strategy for solving the simultaneous sparse approximation problem. IEEE Trans. Signal Process. 2007, 55, 3704–3716. [Google Scholar] [CrossRef]
  19. Tipping, M.E.; Faul, A.C. Fast marginal likelihood maximization for sparse Bayesian models. In Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, Key West, FL, USA, 3–6 January 2003; pp. 3–6. [Google Scholar]
  20. Ji, S.H.; Xue, Y.; Carin, L. Bayesian compressive sensing. IEEE Trans. Signal Process. 2008, 56, 2346–2356. [Google Scholar] [CrossRef]
  21. Ji, S.H.; Dunson, D.; Carin, L. Multi-task compressive sensing. IEEE Trans. Signal Process. 2009, 57, 92–106. [Google Scholar] [CrossRef]
  22. Wu, Q.S.; Zhang, Y.M.; Amin, M.G.; Himed, B. Complex multitask Bayesian compressive sensing. In Proceedings of the 2014 IEEE International Conference on Acoustic, Speech and Signal Processing, Florence, Italy, 4–9 May 2014. [Google Scholar]
  23. Serra, J.G.; Testa, M.; Katsaggelos, A.K. Bayesian K-SVD using fast variational inference. IEEE Trans. Image Process. 2017, 26, 3344–3359. [Google Scholar] [CrossRef]
  24. Ma, Z.Q.; Dai, W.; Liu, Y.M.; Wang, X.Q. Group sparse Bayesian learning via exact and fast marginal likelihood maximization. IEEE Trans. Signal Process. 2017, 65, 2741–2753. [Google Scholar] [CrossRef]
  25. Duan, K.Q.; Wang, Z.T.; Xie, W.C.; Chen, H.; Wang, Y.L. Sparsity-based STAP algorithm with multiple measurement vectors via sparse Bayesian learning strategy for airborne radar. IET Signal Process. 2017, 11, 544–553. [Google Scholar] [CrossRef]
26. Wang, Z.T.; Xie, W.C.; Duan, K.Q. Clutter suppression algorithm based on fast converging sparse Bayesian learning for airborne radar. Signal Process. 2017, 130, 159–168. [Google Scholar] [CrossRef]
  27. Yuan, H.D.; Xu, H.; Duan, K.Q.; Xie, W.C.; Liu, W.J.; Wang, Y.L. Sparse Bayesian learning-based space-time adaptive processing with off-grid self-calibration for airborne radar. IEEE Access 2018, 6, 47296–47307. [Google Scholar] [CrossRef]
  28. Yang, X.P.; Sun, Y.Z.; Yang, J.; Long, T.; Sarkar, T.K. Discrete Interference suppression method based on robust sparse Bayesian learning for STAP. IEEE Access 2019, 10, 26740–26751. [Google Scholar] [CrossRef]
29. Bai, Z.L.; Shi, L.M.; Sun, J.W.; Christensen, M.G. Complex sparse signal recovery with adaptive Laplace priors. arXiv 2020, arXiv:2006.16720v1. [Google Scholar]
  30. Babacan, S.D.; Molina, R.; Katsaggelos, A.K. Bayesian compressive sensing using Laplace priors. IEEE Trans. Image Process. 2010, 19, 53–63. [Google Scholar] [CrossRef]
  31. Wipf, D.; Nagarajan, S. A new view of automatic relevance determination. Adv. Neural Inf. Process. Syst. 2008, 20, 1–9. [Google Scholar]
  32. Tropp, J.A. Algorithms for simultaneous sparse approximation, part I: Greedy pursuit. Signal Process. 2006, 86, 572–588. [Google Scholar] [CrossRef]
  33. Titi, G.W.; Marshall, D.F. The ARPA/NAVY mountaintop program: Adaptive signal processing for airborne early warning radar. In Proceedings of the 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing, Atlanta, GA, USA, 9 May 1996. [Google Scholar]
Figure 1. Computational complexity versus the number of pulses for a single iteration.
Figure 2. Angle-Doppler clutter power spectrums in the ideal case: (a) exact spectrum calculated by the ideal CNCM; (b) sparse spectrum estimated by LSMI; (c) sparse spectrum estimated by M-OMP-STAP; (d) sparse spectrum estimated by M-SBL-STAP; and (e) sparse spectrum estimated by M-FMLM-STAP.
Figure 3. IF curves in the ideal case.
Figure 4. Angle-Doppler clutter power spectrums in the non-ideal case with array errors: (a) exact spectrum calculated by the exact CNCM; (b) sparse spectrum estimated by LSMI; (c) sparse spectrum estimated by M-OMP-STAP; (d) sparse spectrum estimated by M-SBL-STAP; and (e) sparse spectrum estimated by M-FMLM-STAP.
Figure 5. IF curves in the non-ideal case.
Figure 6. Average SINR loss to the number of training samples.
Figure 7. Estimated clutter Capon spectrum with all samples.
Figure 8. STAP output power to the range cell of two algorithms.
Table 1. The main symbols.

| Symbols | Parameters | Symbols | Parameters |
|---|---|---|---|
| X | The original data | Y | The new data |
| D | The original dictionary | Ψ | The new dictionary |
| A | The original coefficient matrix | B | The new coefficient matrix |
| Γ | The original hyper-parameter | Γ | The new hyper-parameter |
| Σ | The covariance of the coefficients | μ | The mean of the coefficients |
| Ω | The set of the non-zero values in Γ | ψ | The support space of the data |
| Ŝ_i, q̂_{l,i}, ĝ_{l,i} | See (54) | S_i, q_{l,i}, g_l | See (68) |
| Φ_i | See (49) | | |
Table 2. Radar system parameters.

| Symbols | Parameters | Value |
|---|---|---|
| λ | Wavelength | 0.3 m |
| d | Distance between elements | 0.15 m |
| ϑ | Platform velocity | 150 m/s |
| H | Platform height | 9000 m |
| M | Number of pulses | 8 |
| N | Number of channels | 8 |
| f_PRF | Pulse repetition frequency | 2000 Hz |
| f_S | Range sampling frequency | 2.5 MHz |
| θ | Perspective angle | 90° |
| CNR | Clutter-to-noise ratio | 30 dB |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Liu, C.; Wang, T.; Zhang, S.; Ren, B. A Fast Space-Time Adaptive Processing Algorithm Based on Sparse Bayesian Learning for Airborne Radar. Sensors 2022, 22, 2664. https://doi.org/10.3390/s22072664

