Communication

ADMM-Net for Beamforming Based on Linear Rectification with the Atomic Norm Minimization

College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(1), 96; https://doi.org/10.3390/rs16010096
Submission received: 6 December 2023 / Revised: 22 December 2023 / Accepted: 23 December 2023 / Published: 25 December 2023
(This article belongs to the Special Issue State-of-the-Art and Future Developments: Short-Range Radar)

Abstract
Target misalignment can cause beam pointing deviations and degradation of sidelobe performance. In order to eliminate the effect of target misalignment, we formulate the jamming sub-space recovery problem as a linearly modified atomic norm-based optimization. Then, we develop a deep-unfolding network based on the alternating direction method of multipliers (ADMM), which effectively improves the applicability and efficiency of the algorithm. By using the back-propagation process of deep-unfolding networks, the proposed method can optimize the hyper-parameters in the original atomic norm formulation. This feature enables the adaptive beamformer to adjust its weights according to the observed data. Specifically, the proposed method can determine the optimal hyper-parameters under different interference-plus-noise covariance conditions. Simulation results demonstrate that the proposed network reduces the computational cost and achieves near-optimal performance with low complexity.

1. Introduction

Adaptive beamforming technology is an extensively employed methodology in various domains, including radar, sonar, wireless communication, and medical imaging. This technology leverages the spatial information inherent in sensor arrays to mitigate interference, clutter, and extraneous signals, consequently enhancing the performance of target detection and tracking [1]. With the ongoing advancement in the exploration of beamforming under non-ideal conditions, its practical application in engineering is progressively evolving [2,3,4,5,6].
Typically, adaptive beamforming technology is designed based on certain criteria, such as Minimum Variance Distortionless Response (MVDR), Minimum Mean Square Error (MMSE), and Maximum Signal-to-Noise Ratio (SNR). Nevertheless, conventional adaptive beamforming techniques encounter the following challenges:
(1)
Target signals and interference signals often occur simultaneously, making them difficult to distinguish. The presence of target signals can induce deviation of the formed beam from its intended direction, diminishing the effectiveness of interference suppression in the sidelobes. In severe instances, this phenomenon may lead to the cancellation of target signals.
(2)
Target misalignment caused by biases in target prior information or array structural errors can also cause beam pointing deviations, degradation of sidelobe performance, and self-cancellation of target signals [7,8,9].
To tackle these challenges, researchers have proposed numerous robust adaptive beamforming methods [10]. The MVDR beamformer incorporates diagonal loading into the sample matrix inversion method, which modifies the relative sizes of the eigenvalues of the sample matrix and thereby enhances the robustness of the MVDR beamforming method [11,12,13,14]. These methods, along with another class known as worst-case optimization beamformers, share a degree of equivalence. Worst-case optimization beamformers enhance the robustness of the beamformer against the two aforementioned non-ideal factors by setting a protection range around the intended beam direction. The main problem with this approach is that it is difficult to determine the optimal protection range or optimal diagonal loading parameters. Moreover, since the method employs a relaxation approach during the solution process, its optimal performance cannot be guaranteed [15,16,17,18]. Linearly constrained minimum variance (LCMV) beamformers introduce additional linear constraints during the optimization process to augment the robustness of beamforming. When the guiding vector under the linear constraints exhibits a strong correlation with the actual target guiding vector, this class of methods demonstrates robust performance. However, their drawback is higher sidelobe distortion [19,20,21,22,23,24]. Another class of robust adaptive beamforming methods is based on subspace techniques. These methods project the specified guiding vector onto the signal and interference subspaces of the sample covariance matrix, which provides good robustness against guiding vector misalignment [25,26]. By analyzing the projection of the target guiding vector, the subspace-based methods can solve the problem of target signal cancellation caused by the contamination of training samples. Nevertheless, these methods necessitate prior knowledge of the target number, a requirement that poses challenges under conditions characterized by a low signal-to-noise ratio and a high-dimensional signal–interference subspace. In addition to the aforementioned traditional methods, recent research has started to use subspace reconstruction methods for adaptive beamforming [27,28,29,30]. Sparse recovery methods are employed to recover the interference subspace matrix and calculate adaptive weights based on the reconstructed interference subspace [27], which utilizes the low-rank characteristics of the interference or clutter subspace in the entire sample matrix. However, the above-mentioned methods require the guiding vectors of interference and clutter to fall onto the dictionary matrix of sparse recovery, and therefore cannot handle off-grid situations. Moreover, this class of methods also utilizes a protection range to mitigate the impact of target signals, thereby necessitating prior knowledge of the target protection range. In order to enhance the robustness of adaptive beamforming in the aforementioned two scenarios, a robust adaptive beamforming method based on linearly modified atomic norm optimization was proposed in [31], which simultaneously estimates the target guiding vector and reconstructs the interference subspace. Herein, the atomic norm-based optimization method does not necessitate prior knowledge of the target and interference guiding vectors.
Furthermore, this method separates the interference subspace from the data, which can approach the optimal output signal-to-noise ratio performance [32,33,34]. However, this method also requires the determination of hyper-parameters during the solution process, and the choice of hyper-parameters has a significant impact on the performance of the method [35,36,37,38,39].
To address this problem, this paper proposes a robust adaptive beamforming method based on deep-unfolding networks, which unfolds the subspace recovery algorithm based on atomic norm optimization into a network and uses the back-propagation process of deep-unfolding networks to optimize the hyper-parameters of the traditional atomic norm optimization iteration. Furthermore, the proposed method determines the optimal hyper-parameters under varying interference-plus-noise covariance conditions.
The remainder of this paper is organized as follows: Section 2 presents the signal model, Section 3 introduces the unfolded algorithm based on linearly corrected atomic norm minimization with ADMM, Section 4 analyzes the experimental results, and Section 5 provides a comprehensive summary of the entire paper.

2. Signal Model

Considering a linear array with M omnidirectional antennas, the received signal can be expressed as
Y = S + J + N    (1)
where $Y = [y(1), y(2), \ldots, y(T)]$ denotes the received signal with $T$ snapshots, and $S$, $J$ and $N$ denote the target signal, interference and Gaussian white noise, respectively. The signals are assumed to be independent of each other at different time periods; the array manifold can be expressed as $a(\theta) = (1/M)\left[1, e^{j2\pi (d/\lambda)\sin\theta}, \ldots, e^{j2\pi (d/\lambda)(M-1)\sin\theta}\right]^T$, where $d$ denotes the element interval, $\lambda$ denotes the wavelength of the radar, and $(\cdot)^T$ denotes the transpose operator.
Herein, the target signal and interference signal can be respectively expressed as
S = c_0 a(\theta_0) b_0^T    (2)
J = \sum_{i=1}^{K} c_i a(\theta_i) b_i^T    (3)
where $\theta_0$ denotes the DOA of the target signal, which can be obtained using existing algorithms [40,41], and $\theta_i$ denotes the DOA of the $i$th interference signal. $b_0 = [b_0(1), b_0(2), \ldots, b_0(T)]^T$ and $b_i = [b_i(1), b_i(2), \ldots, b_i(T)]^T$, respectively, denote the normalized complex amplitudes of the target and interference, and $c_i$ denotes the real positive value of the signal power. Herein, the interference signal power is much greater than the target signal power. Moreover, the target signal has zero mean and variance $\sigma^2$, and is independent of the interference signal and noise.
Adaptive digital beamforming (ADBF) is designed to eliminate the interference signal by applying adaptively calculated weights for the received signal; the output can be expressed as
\bar{Y}^T = w^H (S + J + N)    (4)
where $(\cdot)^H$ denotes the conjugate transpose operator.
In order to suppress interference, w should be orthogonal to the interference subspace while keeping the main lobe unchanged along the target direction. The classical Wiener filter determines the weights by solving the following linear-constrained quadratic optimization, which can be expressed as
\min_{w} \; w^H R w \quad \text{s.t.} \quad w^H a(\theta) = 1    (5)
where $\theta$ represents the expected pointing direction of the adaptive beam, $R = E(YY^H)$ represents the data covariance matrix, and $E(\cdot)$ represents the statistical expectation. Since the estimated covariance $\hat{R}$ also includes the target signal, the adaptive beamformer can also cancel the target signal.
For data contaminated by the target signal, $w$ should instead be calculated as
\min_{w} \; w^H R_J w \quad \text{s.t.} \quad w^H a(\theta) = 1    (6)
where $R_J$ is a Hermitian Toeplitz matrix and can be considered as the interference signal subspace, i.e., $R_J = \sum_{i=1}^{K} c_i^2 a(\theta_i) a(\theta_i)^H$. However, $R_J$ is very difficult to obtain.
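For concreteness, the following NumPy sketch simulates the signal model (1)–(3) for a half-wavelength uniform linear array and computes the distortionless weights that solve (6) in closed form, $w = R_J^{-1} a(\theta)/(a(\theta)^H R_J^{-1} a(\theta))$. The array size, angles, and powers are illustrative choices rather than the paper's exact settings, and the oracle $R_J$ used here is exactly the quantity that Section 3 sets out to recover from data.

```python
import numpy as np

rng = np.random.default_rng(0)

def steer(theta_deg, M, d_over_lambda=0.5):
    """Array manifold a(theta) of an M-element ULA, as defined after (1)."""
    m = np.arange(M)
    return (1.0 / M) * np.exp(1j * 2 * np.pi * d_over_lambda * m * np.sin(np.deg2rad(theta_deg)))

def cn(*shape):
    """Unit-power circular complex Gaussian samples."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

M, T = 10, 20                                   # elements and snapshots (illustrative)
a0 = steer(15.0, M)                             # target steering vector
aj = [steer(10.5, M), steer(30.0, M)]           # interference steering vectors

S = 1.0 * np.outer(a0, cn(T))                   # target component, Equation (2)
J = sum(10.0 * np.outer(a, cn(T)) for a in aj)  # interference component, Equation (3)
N = cn(M, T)                                    # white Gaussian noise
Y = S + J + N                                   # received data, Equation (1)

# Oracle interference(+noise) covariance R_J; in practice it is unknown and must be
# recovered from Y, which is the task addressed in Section 3.
R_J = sum(100.0 * np.outer(a, a.conj()) for a in aj) + np.eye(M)
w = np.linalg.solve(R_J, a0)
w = w / (a0.conj() @ w)                         # closed-form solution of (6): w^H a(theta_0) = 1
y_out = w.conj() @ Y                            # beamformer output, Equation (4)
print(y_out.shape)
```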

3. Proposed Algorithm

3.1. ADMM Model Based on the Linear Correction Atomic Norm

Traditional methods recover the interference signal subspace by eigenvalue decomposition of $R$. However, this approach is only effective when the number of interference sources is known and the interference-to-noise ratio (INR) is high. Moreover, the performance of the algorithm relies heavily on the estimation of $R$, which indicates that a large number of snapshots is required for an accurate recovery of the matrix. In addition, the conventional methods still need to assume that the target signal subspace is approximately orthogonal to the interference signal subspace. Therefore, we need new methods to estimate $R_J$.
In this paper, by combining our approach with state-of-the-art methods, we build a model based on linear rectification of the atomic norm for the beamforming problem in the case of target-corrupted training data and use the ADMM algorithm to solve it effectively.
The core of the problem is to find a solution with the least number of atoms to describe $J$, while $S + J$ is bounded within a Frobenius-norm ball around $Y$. Therefore, the problem can be modeled as
\min_{X, \theta_0, s} \left\| X - a(\theta_0) s^T \right\|_{\mathcal{A},0} \quad \text{s.t.} \quad \frac{1}{2}\left\| Y - X \right\|_F^2 \le \eta    (7)
where $X$ is the estimate of $S + J$, $s$ is the estimate of the target signal, $\|\cdot\|_F$ denotes the Frobenius norm, $\eta$ denotes an artificial parameter related to the noise power, and $\|\cdot\|_{\mathcal{A},0}$ denotes the non-convex atomic $\ell_0$ norm, which can be defined as
\| X \|_{\mathcal{A},0} = \inf_{r} \left\{ r : X = \sum_{i=1}^{r} c_i A(\theta_i, b_i), \; c_i \ge 0 \right\}    (8)
where $r$ denotes the number of atoms forming the interference signal.
Directly minimizing the atomic $\ell_0$ norm in (8) is, however, known to be NP-hard. Thus, we utilize the atomic norm as its convex relaxation:
\min_{X, \theta_0, s} \left\| X - a(\theta_0) s^T \right\|_{\mathcal{A}} \quad \text{s.t.} \quad \frac{1}{2}\left\| Y - X \right\|_F^2 \le \eta    (9)
where $\|\cdot\|_{\mathcal{A}}$ denotes the atomic norm, which is defined as
\| X \|_{\mathcal{A}} = \inf \left\{ \sum_i c_i : X = \sum_i c_i A(\theta_i, b_i), \; c_i \ge 0 \right\}    (10)
By using the Schur complement lemma and the definition of the atomic norm, the above optimization admits the following equivalent SDP formulation
\min_{X, s} \; \inf_{u \in \mathbb{C}^M, \, \Omega \in \mathbb{C}^{T \times T}} \; \frac{1}{2}\mathrm{Tr}(\tau(u)) + \frac{1}{2}\mathrm{Tr}(\Omega) \quad \text{s.t.} \quad \begin{bmatrix} \tau(u) & X - a(\hat{\theta}_0) s^T \\ (X - a(\hat{\theta}_0) s^T)^H & \Omega \end{bmatrix} \succeq 0, \quad \frac{1}{2}\left\| Y - X \right\|_F^2 \le \eta    (11)
where $\tau(u)$ is the Hermitian Toeplitz matrix formed with the vector $u$ as its first column, $\Omega$ is a variable matrix, and $\mathrm{Tr}(\cdot)$ denotes the trace of a matrix; this formulation can be utilized to solve the atomic norm-based optimization. Note that the semidefinite constraint in (11) means that $X - a(\hat{\theta}_0) s^T$ has the same column space as $\tau(u)$. More specifically, $\tau(u)$ is the estimate of the interference subspace $\sum_{i=1}^{K} c_i^2 a(\theta_i) a(\theta_i)^H$.
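As a small aside, the two ingredients of the constraint in (11), the Hermitian Toeplitz matrix $\tau(u)$ and the 2 × 2 block matrix, can be sketched in a few lines of NumPy/SciPy; shapes and values below are illustrative and the helper names are our own.

```python
import numpy as np
from scipy.linalg import toeplitz

def tau(u):
    """Hermitian Toeplitz matrix with the vector u as its first column (u[0] real)."""
    return toeplitz(u, u.conj())

def sdp_block(Tu, X, a0, s, Omega):
    """Block matrix of the semidefinite constraint in (11)/(12)."""
    B = X - np.outer(a0, s)                      # X - a(theta_0) s^T
    return np.block([[Tu, B], [B.conj().T, Omega]])

# Quick check that tau(u) is indeed Hermitian.
M = 4
rng = np.random.default_rng(1)
u = np.r_[2.0, 0.1 * (rng.standard_normal(M - 1) + 1j * rng.standard_normal(M - 1))]
Tu = tau(u)
print(np.allclose(Tu, Tu.conj().T))              # True
```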
In order to apply the ADMM method, (11) is rewritten as
\min_{X, s, u, \Omega} \; \frac{\varepsilon}{2}\left[ \mathrm{Tr}(\tau(u)) + \mathrm{Tr}(\Omega) \right] + \frac{1}{2}\left\| Y - X \right\|_F^2 \quad \text{s.t.} \quad Z = \begin{bmatrix} \tau(u) & X - a(\hat{\theta}_0) s^T \\ (X - a(\hat{\theta}_0) s^T)^H & \Omega \end{bmatrix}, \quad Z \succeq 0    (12)
where ε is the regularization parameter related to η . Firstly, the augmented Lagrangian function of (12) is expressed as
\varphi(X, s, u, \Omega, \Lambda, Z) = \frac{1}{2}\left\| X - Y \right\|_F^2 + \frac{\varepsilon}{2}\left[ \mathrm{Tr}(\tau(u)) + \mathrm{Tr}(\Omega) \right] + \left\langle \Lambda, \; Z - \begin{bmatrix} \tau(u) & X - a(\theta_0) s^T \\ (X - a(\theta_0) s^T)^H & \Omega \end{bmatrix} \right\rangle + \frac{\rho}{2}\left\| Z - \begin{bmatrix} \tau(u) & X - a(\theta_0) s^T \\ (X - a(\theta_0) s^T)^H & \Omega \end{bmatrix} \right\|_F^2    (13)
where $\Lambda$ is the Lagrange multiplier matrix and $\rho$ is the penalty parameter; $Z$, $\Omega$ and $\Lambda$ are Hermitian matrices. The update steps of ADMM are as follows
(X^{t+1}, s^{t+1}, u^{t+1}, \Omega^{t+1}) = \arg\min_{X, s, u, \Omega} \varphi(X, s, u, \Omega, \Lambda^t, Z^t)
Z^{t+1} = \arg\min_{Z \succeq 0} \varphi(X^{t+1}, s^{t+1}, u^{t+1}, \Omega^{t+1}, \Lambda^t, Z)
\Lambda^{t+1} = \Lambda^t + \rho \left( Z^{t+1} - \begin{bmatrix} \tau(u^{t+1}) & X^{t+1} - a(\theta_0)(s^{t+1})^T \\ (X^{t+1} - a(\theta_0)(s^{t+1})^T)^H & \Omega^{t+1} \end{bmatrix} \right)    (14)
where the superscript t denotes the tth iteration.
Moreover, the block partition of the matrices $\Lambda$ and $Z$ in (15) can be expressed as
\Lambda = \begin{bmatrix} \Lambda_{M \times M} & \Lambda_{M \times T} \\ \Lambda_{T \times M} & \Lambda_{T \times T} \end{bmatrix}, \quad Z = \begin{bmatrix} Z_{M \times M} & Z_{M \times T} \\ Z_{T \times M} & Z_{T \times T} \end{bmatrix}    (15)
Then, the closed-form update rules can be written as follows:
\Omega^{t+1} = Z_{T \times T}^{t} + \frac{1}{\rho}\left( \Lambda_{T \times T}^{t} - \frac{\varepsilon}{2} I \right)
u^{t+1} = \frac{1}{\rho}\Upsilon\left[ g(\Lambda_{M \times M}^{t}) + \rho\, g(Z_{M \times M}^{t}) - \frac{\varepsilon}{2} e_1 \right]
\bar{X}^{t+1} = \left( 2\rho F_1^H F_1 + F_2^H F_2 \right)^{-1}\left( F_2^H Y + 2 F_1^H \Lambda_{M \times T}^{t} + 2\rho F_1^H Z_{M \times T}^{t} \right)    (16)
where $\Upsilon$ is the diagonal matrix with diagonal elements $\Upsilon_{i,i} = 1/(M - i + 1)$, $i = 1, 2, \ldots, M$, and $g(\cdot)$ denotes the linear mapping from a matrix to a vector whose $i$th element is the sum of the matrix elements whose row index $p$ and column index $q$ satisfy $p - q = i - 1$. $\bar{X} = [X^T \; s]^T$ denotes the matrix obtained by appending $s^T$ to $X$ as its $(M+1)$th row.
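To make the mapping $g(\cdot)$ and the normalization $\Upsilon$ concrete, here is a small NumPy sketch; the diagonal-index convention ($p - q = i - 1$) is our reading of the definition above, chosen so that $\Upsilon g(\tau(u)) = u$, and the helper names are ours.

```python
import numpy as np
from scipy.linalg import toeplitz

def g(A):
    """Adjoint of the Toeplitz operator tau: the i-th entry (1-based) sums the entries
    of A whose row index p and column index q satisfy p - q = i - 1."""
    M = A.shape[0]
    return np.array([np.trace(A, offset=-k) for k in range(M)])

def upsilon(M):
    """Diagonal matrix with entries 1/(M - i + 1), i = 1..M, i.e., one over the number
    of summed entries on each sub-diagonal."""
    return np.diag(1.0 / (M - np.arange(M)))

# Consistency check: for a Hermitian Toeplitz matrix tau(u), Upsilon @ g recovers u.
M = 5
u = np.r_[1.5, 0.2 * (np.arange(1, M) + 1j)]
Tu = toeplitz(u, u.conj())
print(np.allclose(upsilon(M) @ g(Tu), u))        # True
```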
By applying the eigendecomposition

\begin{bmatrix} \tau(u^{t+1}) & X^{t+1} - a(\theta_0)(s^{t+1})^T \\ (X^{t+1} - a(\theta_0)(s^{t+1})^T)^H & \Omega^{t+1} \end{bmatrix} - \frac{1}{\rho}\Lambda^{t} = \sum_{i} \sigma_i^t U_i^t (U_i^t)^H,

$Z$ can be rewritten as

Z^{t+1} = \sum_{i \in D} \sigma_i^t U_i^t (U_i^t)^H    (17)

where $D = \{ i \mid \sigma_i^t \ge 0 \}$.
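The update in (17) is simply a projection onto the positive semidefinite cone: eigendecompose the Hermitian argument and keep the components with nonnegative eigenvalues. A minimal NumPy sketch follows; the block-matrix argument of (17) is abbreviated as B, and the sign of the dual term is the scaled-dual form reconstructed from (13) and (14).

```python
import numpy as np

def project_psd(B):
    """Projection onto the PSD cone as in (17): keep the eigen-components of the
    Hermitian matrix B with nonnegative eigenvalues."""
    sigma, U = np.linalg.eigh(B)                 # eigenvalues ascending, columns = eigenvectors
    keep = sigma >= 0                            # index set D = {i : sigma_i >= 0}
    return (U[:, keep] * sigma[keep]) @ U[:, keep].conj().T

# One ADMM Z-step: Z_next = project_psd(B_next - Lambda / rho), cf. (14) and (17).
```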
Based on the closed-form update rules listed in (14), (16), and (17), the solution of the problem is obtained by running the above iterations until a predetermined error tolerance or upper iteration limit is reached.
Since the traditional ADMM algorithm requires hundreds of iterations to obtain satisfactory results, its computational efficiency is low. In addition, the parameters $\varepsilon$ and $\rho$ in the algorithm need to be set manually, which has a significant impact on the final results. In view of the above problems, this paper combines the ADMM algorithm with a deep-unfolding network to obtain the ADMM network, which effectively improves the applicability and efficiency of the algorithm.

3.2. Design of the C-ADMM-Net

The parameters $\Omega$, $u$, $\bar{X}$, $Z$ and $\Lambda$ involved in the ADMM algorithm require iteration to ensure their correct update. The initial values of the matrices can be randomly generated Hermitian matrices. Moreover, the guiding vector representing the direction of the target can usually be accurately acquired in advance.
It can be noticed that the final output is affected by the artificial parameters $\varepsilon$ and $\rho$, which directly determine the performance and operational complexity of the ADMM algorithm but carry great uncertainty. Therefore, this paper optimizes the parameter setting through a deep-unfolding network to improve the performance of the algorithm.

3.2.1. The Update Layer of Data

The update structure of the parameter $\Omega$ is shown in Figure 1. $\Lambda$ and $Z$ are both Hermitian matrices, and $\Omega$ is updated according to (16) with the parameters $\varepsilon$ and $\rho$ and the matrices $\Lambda$ and $Z$.
In the ADMM algorithm, the parameters $\varepsilon$ and $\rho$ must be set manually for the $i$th iteration. The parameters to be learned in the network are defined as $\varepsilon^{(i)}$ and $\rho^{(i)}$, and the corresponding forward propagation can be expressed as
\Omega^{(i+1)} = Z_{T \times T}^{(i)} + \frac{1}{\rho^{(i)}}\left( \Lambda_{T \times T}^{(i)} - \frac{\varepsilon^{(i)}}{2} I \right)    (18)
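As an illustration of how such a data-update layer can be realized with learnable hyper-parameters, the following PyTorch sketch implements (18) with $\varepsilon^{(i)}$ and $\rho^{(i)}$ as trainable scalars. It is a minimal rendering of the layer in Figure 1, not the authors' released code; the update layers for $u$, $\bar{X}$, $Z$ and $\Lambda$ follow the same pattern with their respective forward rules (19)–(23).

```python
import torch
import torch.nn as nn

class OmegaUpdateLayer(nn.Module):
    """Data-update layer for Omega, implementing Equation (18) with learnable
    hyper-parameters eps^(i) and rho^(i)."""
    def __init__(self, T, eps_init=1.0, rho_init=1.0):
        super().__init__()
        self.eps = nn.Parameter(torch.tensor(eps_init))       # eps^(i)
        self.rho = nn.Parameter(torch.tensor(rho_init))       # rho^(i)
        self.register_buffer("eye", torch.eye(T, dtype=torch.complex64))

    def forward(self, Z_TT, Lambda_TT):
        # Omega^(i+1) = Z_TT^(i) + (1/rho^(i)) * (Lambda_TT^(i) - (eps^(i)/2) * I)
        return Z_TT + (Lambda_TT - 0.5 * self.eps * self.eye) / self.rho
```

Cascading P such layers, each with its own pair $(\rho^{(i)}, \varepsilon^{(i)})$, yields the learnable parameter set defined later in (24).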
Similarly to the update layer of the parameter $\Omega$, the update layer of the parameter $u$ in the network is determined by $\varepsilon^{(i)}$ and $\rho^{(i)}$, but has a more complex structure. According to the update rule in (16), the structure of this update layer can be designed as shown in Figure 2.
Accordingly, the corresponding forward propagation is given by
u^{(i+1)} = \frac{1}{\rho^{(i)}}\Upsilon\left[ g(\Lambda_{M \times M}^{(i)}) + \rho^{(i)} g(Z_{M \times M}^{(i)}) - \frac{\varepsilon^{(i)}}{2} e_1 \right]    (19)
It can be seen that the parameters $\Omega$ and $u$ are determined only by $Z$ and $\Lambda$, so $Z$ and $\Lambda$ directly affect the final calculation result.
In addition to the influence of $Z$ and $\Lambda$, the update of the parameter $\bar{X}$ also requires the radar's received data. In this case, the only parameter to be optimized is $\rho^{(i)}$, and the update layer structure is shown in Figure 3.
Accordingly, the forward propagation can be expressed as
\bar{X}^{(i+1)} = \left( 2\rho^{(i)} F_1^H F_1 + F_2^H F_2 \right)^{-1}\left( F_2^H Y + 2 F_1^H \Lambda_{M \times T}^{(i)} + 2\rho^{(i)} F_1^H Z_{M \times T}^{(i)} \right)    (20)

3.2.2. The Update Layer of Matrix Reconstruction

The update of the parameter $Z$ exploits the outputs of the data-update layers and the parameter $\Lambda$. The calculation process involves matrix reconstruction and data screening, and the forward propagation can be divided into
\begin{bmatrix} \tau(u^{(i+1)}) & X^{(i+1)} - a(\theta_0)(s^{(i+1)})^T \\ (X^{(i+1)} - a(\theta_0)(s^{(i+1)})^T)^H & \Omega^{(i+1)} \end{bmatrix} - \frac{1}{\rho^{(i)}}\Lambda^{(i)} = \sum_{k} \sigma_k^{(i)} U_k^{(i)} (U_k^{(i)})^H    (21)
Z^{(i+1)} = \sum_{k \in D} \sigma_k^{(i)} U_k^{(i)} (U_k^{(i)})^H, \quad D = \left\{ k \mid \sigma_k^{(i)} \ge 0 \right\}    (22)
where $s^{(i+1)}$ can be obtained from $\bar{X}^{(i+1)}$. The essence of (22) is to retain the eigen-components whose eigenvalues are greater than or equal to zero. The specific update structure is shown in Figure 4.
The last parameter to be updated, $\Lambda$, also requires the matrix reconstruction and uses all four of the parameters appearing in the above update process. Here, the parameter to be optimized is $\rho^{(i)}$, and the update layer of the parameter $\Lambda$ is shown in Figure 5.
Accordingly, the corresponding forward propagation can be expressed as
\Lambda^{(i+1)} = \Lambda^{(i)} + \rho^{(i)}\left( Z^{(i+1)} - \begin{bmatrix} \tau(u^{(i+1)}) & X^{(i+1)} - a(\theta_0)(s^{(i+1)})^T \\ (X^{(i+1)} - a(\theta_0)(s^{(i+1)})^T)^H & \Omega^{(i+1)} \end{bmatrix} \right)    (23)

3.2.3. Analysis of C-ADMM-Net Structure

The proposed C-ADMM-net consists of a cascade network of an input layer, an output layer, and a P-level substructure, where the output layer is composed of a single update layer of data.
The C-ADMM-net structure is shown in Figure 6. The input layer includes the initial random Hermitian matrices $\Lambda^{(0)}$ and $Z^{(0)}$, as well as the received signal $Y$. The output layer only needs to calculate the parameter $u_{\mathrm{out}}$. The above C-ADMM-net contains $P$ update layers of data and $P$ update layers of matrix reconstruction.
For this network structure, the parameters to be learned can be expressed as the following sets:
\alpha = \left\{ \rho^{(i)}, \varepsilon^{(i)} \;\middle|\; i = 0, 1, 2, \ldots, P-1 \right\}    (24)
It can be noticed that only $2P$ scalar parameters (two per layer) need to be learned for the $P$-level C-ADMM-net. In traditional algorithms, these parameters can only be adjusted manually. With the C-ADMM-net, they are instead learned offline from training array signals toward a near-optimal setting and then applied when processing the received array signal, which effectively reduces the operational complexity and improves the adaptability, as sketched below.
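A minimal PyTorch sketch of how the learnable set $\alpha$ in (24) can be organized, one pair $(\rho^{(i)}, \varepsilon^{(i)})$ per unfolded stage; the stage's forward pass (the chained update layers (18)–(23)) is omitted here, and all names are illustrative.

```python
import torch
import torch.nn as nn

P = 30  # number of unfolded stages, as used in the experiments

class UnfoldedStage(nn.Module):
    """One C-ADMM-net stage holding its own learnable rho^(i) and eps^(i).
    Its forward pass would chain the update layers (18)-(23); omitted here."""
    def __init__(self, rho_init=1.0, eps_init=1.0):
        super().__init__()
        self.rho = nn.Parameter(torch.tensor(rho_init))
        self.eps = nn.Parameter(torch.tensor(eps_init))

stages = nn.ModuleList([UnfoldedStage() for _ in range(P)])
alpha = list(stages.parameters())
print(len(alpha))   # 2P = 60 learnable scalars: the set alpha in (24)
```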

3.2.4. Back Propagation Algorithm in Complex Number Domain

The input of the C-ADMM-net is the array signal and the output is the adaptive beamforming weights. The training set of the network can be expressed as $\Theta = \{ Y_i, S_i^{\mathrm{label}} \}_{i=1}^{N_c}$, where $Y_i$ denotes the $i$th group of array signals, $S_i^{\mathrm{label}}$ denotes the corresponding SINR label, and $N_c$ denotes the total number of datasets. In this paper, the SINR corresponding to the theoretical optimal weight is taken as the label, and the loss function is defined as
\mathrm{Loss} = \frac{1}{N_c}\sum_{i=1}^{N_c} \left\| S_i - S_i^{\mathrm{label}} \right\|_2    (25)
where $\|\cdot\|_2$ denotes the $\ell_2$ norm and $S_i$ denotes the SINR obtained from training. The loss function intuitively reflects the difference between the trained network and the ideal case. By applying (25) and the complex-domain BP algorithm, the gradient of any real-valued function $f(O)$ of a complex matrix $O$ can be calculated as
\mathrm{Grad}\, f(O) = 2\frac{\partial f(O)}{\partial O^{*}} = \frac{\partial f(O)}{\partial \mathrm{Re}\{O\}} + j\frac{\partial f(O)}{\partial \mathrm{Im}\{O\}}    (26)
where $\mathrm{Re}\{O\}$ and $\mathrm{Im}\{O\}$ respectively denote the real and imaginary parts of the matrix. The scalar form of the chain rule for the complex-domain gradient can further be written as
\frac{\partial f(\eta)}{\partial \eta} = \left\langle \frac{\partial f}{\partial \mathrm{Re}\{O\}}, \frac{\partial \mathrm{Re}\{O\}}{\partial \eta} \right\rangle + \left\langle \frac{\partial f}{\partial \mathrm{Im}\{O\}}, \frac{\partial \mathrm{Im}\{O\}}{\partial \eta} \right\rangle    (27)
where η denotes the real number scalar, f ( η ) denotes the real-valued function of η .
By applying the chain rule shown in (27) to the ADMM-Net, the gradient of the loss function with respect to any parameter in the parameter set can be calculated. After obtaining the gradients, the parameters can be updated by gradient descent during training.
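As a quick numerical check of the identity in (26), the following NumPy snippet evaluates the real- and imaginary-part derivatives of the simple real-valued function $f(O) = \|O - Y\|_F^2$ by central differences and compares them with the analytic gradient $2(O - Y)$; the test function and matrix sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
O = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

f = lambda O: np.linalg.norm(O - Y, 'fro') ** 2      # real-valued function of a complex matrix

# Equation (26): Grad f(O) = df/dRe{O} + j * df/dIm{O}; for this f it equals 2(O - Y).
h = 1e-6
E = np.zeros((3, 3)); E[0, 0] = 1.0                  # perturb only the (0, 0) entry
d_re = (f(O + h * E) - f(O - h * E)) / (2 * h)       # df / dRe{O}_{00}
d_im = (f(O + 1j * h * E) - f(O - 1j * h * E)) / (2 * h)  # df / dIm{O}_{00}
print(np.isclose(d_re + 1j * d_im, 2 * (O - Y)[0, 0]))    # True
```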

4. Computer Simulation Experiments

4.1. Introduction of the Dataset

The experimental section primarily employs simulated data to verify the algorithm. In the simulations, the radar is assumed to be a uniform linear array of 10 elements with half-wavelength spacing. Two strong interferences are incident on the radar from two different directions far away from the main lobe, and the main lobe points in the desired direction.
The simulation dataset has a total of 600 sets of array signal data with the corresponding optimal-SINR labels, of which 300 groups were randomly taken as training samples, leaving 300 groups as testing data. Each set of data contains 10 array echo channels; each array echo group is a 10 × T matrix, where T denotes the number of snapshots. The number of layers of the unfolded network is set to 30.
The specific training parameters are set as follows: the root mean square (RMS) error function is chosen as the loss function, and the optimizer is Adam with the typical parameter values betas = (0.9, 0.999) and eps = 10^(-8). Additionally, the learning rate is configured as 0.04. Owing to the limited number of parameters to be learned in the network, the small dataset adopted in the experiment can effectively learn the parameters without producing an overfitting phenomenon.
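For orientation, a schematic PyTorch training loop matching the stated settings (Adam with betas = (0.9, 0.999), eps = 1e-8, learning rate 0.04, and an RMS error between the network's output SINR and the optimal-SINR label); the network, data loader, and epoch count are placeholders standing in for the C-ADMM-net and the simulated dataset described above.

```python
import torch
import torch.nn as nn

def train(net: nn.Module, loader, epochs: int = 300):
    """Schematic training loop using the optimizer settings reported in Section 4.1."""
    opt = torch.optim.Adam(net.parameters(), lr=0.04, betas=(0.9, 0.999), eps=1e-8)
    for epoch in range(epochs):
        for Y, sinr_label in loader:                 # array data and optimal-SINR labels
            sinr_pred = net(Y)                       # forward pass through the unfolded layers
            loss = torch.sqrt(torch.mean((sinr_pred - sinr_label) ** 2))  # RMS error, cf. (25)
            opt.zero_grad()
            loss.backward()                          # complex-domain backpropagation (Section 3.2.4)
            opt.step()
```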

4.2. Experimental Results and Analysis

4.2.1. Contrast of Beamforming Optimization

In Scenario 1, the target is located at θ_0 = 15°, two strong interferences are located at θ_1 = 10.5° and θ_2 = 30°, and the number of snapshots for the DBF collection is 20. The SNR is 0 dB, and the interference-to-noise ratio is 20 dB. The resulting beam patterns are shown in Figure 7.
The simulation results indicate that all the algorithms successfully suppress the interference; however, the RVO-LCMV algorithm exhibits a significant offset in the main lobe, and the traditional reconstruction algorithm demonstrates notably high sidelobes. The ADMM algorithm and the proposed C-ADMMN algorithm preserve favorable main-lobe and sidelobe characteristics while effectively suppressing the interference. In comparison, the C-ADMMN algorithm places its null closer to the interference direction and therefore demonstrates superior performance.
In Scenario 2, consider cases where the SNR is already high enough for detection. As the target signal becomes stronger, the probability of target self-elimination becomes higher. When the SNR is set to 10 dB, with all other parameters held constant as in scenario 1, the results of the beamforming direction diagram are illustrated in Figure 8.
It can be seen that in this case all methods avoid target self-elimination. The main lobe of RVO-LCMV shows a significant deviation, and its sidelobes are also higher. Only the ADMM algorithm and the proposed C-ADMMN algorithm produce a notch at 10.5°, and the null direction of the C-ADMMN algorithm is still closer to the interference direction. Subsequently, the investigation delves into scenarios involving variations in the target and interference.
In Scenario 3, the target direction is changed to θ_0 = 10° and the interference directions are changed to θ_1 = 35° and θ_2 = 16.2°. The beam patterns of the different algorithms are shown in Figure 9. In this case, the proposed C-ADMMN algorithm still has lower sidelobes and a deeper null than the traditional algorithms.
The variation of the test loss with the number of training rounds in the different scenarios is shown in Figure 10. In the initial training rounds, the performance is suboptimal because the small number of network layers is combined with manually set parameters, as in ADMM. However, as the training progresses, the parameters are effectively adjusted, and the error value shows an overall downward trend. After more than 200 training rounds, the error value is basically stable.

4.2.2. Comparison of the Algorithm Performance

The performance of different algorithms is compared in this section. Considering that the input data are contaminated by the target signal, the other parameter settings are the same as in Scenario 1. Two hundred Monte Carlo experiments were used to calculate the average output SINR, where the optimal SINR is calculated on the premise that all information is fully known and the test data are not affected by the target. The performance of the different algorithms versus SNR and the number of snapshots is shown in Figure 11 and Figure 12. It can be noticed that the ADMM algorithm, on which the proposed network is based, achieves performance second only to the C-ADMMN algorithm. The trained C-ADMMN yields better beamforming performance and consistently attains the best performance over the whole range of SNR and snapshot numbers. In addition, the network trained in Scenario 1 was used under the different SNR and snapshot conditions in the experiment. For different target and interference situations, the output SINR is essentially the same as in Figure 11 and is not drawn repeatedly here. This shows that the trained network has good universality.
For an intuitive presentation, the RMSE and operation times of different algorithms in different scenarios are given in Table 1. Table 1 provides a more intuitive perspective, indicating that the proposed C-ADMMN algorithm substantially reduces operation time and exhibits minimal deviation from the theoretical optimal value.

5. Conclusions

This paper proposes a robust adaptive beamforming method based on deep-unfolding networks, which unfolds the subspace recovery algorithm based on atomic norm optimization into a network. The backpropagation process within the deep-unfolding networks is employed to optimize the hyper-parameters of the traditional atomic norm optimization iteration. Moreover, the proposed method determines the optimal hyper-parameters under different interference-plus-noise covariance conditions, enhancing the performance of the traditional interference subspace recovery method based on the ADMM algorithm.

Author Contributions

Conceptualization, Z.G. and X.Z.; methodology, M.R.; writing—review and editing, X.S.; supervision, Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Natural Science Foundation of China (62022091, 61921001).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare there are no conflicts of interest.

References

  1. Capon, J. High-resolution frequency-wavenumber spectrum analysis. Proc. IEEE 1969, 57, 1408–1418. [Google Scholar] [CrossRef]
  2. Yue, Y.; Zhou, C.; Xing, F.; Choo, K.-K.R.; Shi, Z. Adaptive Beamforming for Cascaded Sparse Diversely Polarized Planar Array. IEEE Trans. Veh. Technol. 2023, 72, 15648–15664. [Google Scholar] [CrossRef]
  3. Yang, J.; Yang, Y.; Liao, B. Robust adaptive Bayesian beamforming against stationary and nonstationary interferences. Signal Process. 2023, 212, 109122. [Google Scholar] [CrossRef]
  4. Luo, T.; Chen, P.; Cao, Z.; Zheng, L.; Wang, Z. URGLQ: An Efficient Covariance Matrix Reconstruction Method for Robust Adaptive Beamforming. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 5634–5645. [Google Scholar] [CrossRef]
  5. Huang, Y.; Fu, H.; Vorobyov, S.A.; Luo, Z.-Q. Robust Adaptive Beamforming via Worst-Case SINR Maximization With Nonconvex Uncertainty Sets. IEEE Trans. Signal Process. 2023, 71, 218–232. [Google Scholar] [CrossRef]
  6. Wu, X.; Luo, J.; Li, G.; Zhang, S.; Sheng, W. Fast Wideband Beamforming Using Convolutional Neural Network. Remote Sens. 2023, 15, 712. [Google Scholar] [CrossRef]
  7. Huang, L.; Zhang, B.; Ye, Z. Robust adaptive beamforming using a new projection approach. In Proceedings of the 2015 IEEE International Conference on Digital Signal Processing (DSP), Singapore, 21–24 July 2015; pp. 1181–1185. [Google Scholar] [CrossRef]
  8. Feldman, D.D.; Griffiths, L.J. A projection approach for robust adaptive beamforming. IEEE Trans. Signal Process. 1994, 42, 867–876. [Google Scholar] [CrossRef]
  9. Ruan, H.; de Lamare, R.C. Robust Adaptive Beamforming Based on Low-Rank and Cross-Correlation Techniques. IEEE Trans. Signal Process. 2016, 64, 3919–3932. [Google Scholar] [CrossRef]
  10. Bao, Y.; Zhang, H.; Liu, X.; Jiang, Y.; Tao, Y. Design of Robust Sparse Wideband Beamformers with Circular-Model Mismatches Based on Reweighted ℓ2,1 Optimization. Remote Sens. 2023, 15, 4791. [Google Scholar] [CrossRef]
  11. Li, J.; Stoica, P.; Wang, Z. On robust Capon beamforming and diagonal loading. IEEE Trans. Signal Process. 2003, 51, 1702–1715. [Google Scholar] [CrossRef]
  12. Elnashar, A.; Elnoubi, S.M.; El-Mikati, H.A. Further Study on Robust Adaptive Beamforming With Optimum Diagonal Loading. IEEE Trans. Antennas Propag. 2006, 54, 3647–3658. [Google Scholar] [CrossRef]
  13. Yang, J.; Ma, X.; Hou, C.; Liu, Y. Automatic Generalized Loading for Robust Adaptive Beamforming. IEEE Signal Process. Lett. 2009, 16, 219–222. [Google Scholar] [CrossRef]
  14. Du, L.; Li, J.; Stoica, P. Fully Automatic Computation of Diagonal Loading Levels for Robust Adaptive Beamforming. IEEE Trans. Aerosp. Electron. Syst. 2010, 46, 449–458. [Google Scholar] [CrossRef]
  15. Vorobyov, S.; Gershman, A.; Luo, Z.-Q. Robust adaptive beamforming using worst-case performance optimization: A solution to the signal mismatch problem. IEEE Trans. Signal Process. 2003, 51, 313–324. [Google Scholar] [CrossRef]
  16. Lorenz, R.G.; Boyd, S.P. Robust minimum variance beamforming. IEEE Trans. Signal Process. 2005, 53, 1684–1696. [Google Scholar] [CrossRef]
  17. Liao, B.; Guo, C.T.; Huang, L.; Li, Q.; Liao, G.S.; So, H.C. Robust adaptive beamforming with random steering vector mismatch. Signal Process. 2016, 129, 190–194. [Google Scholar] [CrossRef]
  18. Hassanien, A.; Vorobyov, S.A.; Wong, K.M. Robust Adaptive Beamforming Using Sequential Quadratic Programming: An Iterative Solution to the Mismatch Problem. IEEE Signal Process. Lett. 2008, 15, 733–736. [Google Scholar] [CrossRef]
  19. Xu, J.; Liao, G.; Zhu, S.; Huang, L. Response Vector Constrained Robust LCMV Beamforming Based on Semidefinite Programming. IEEE Trans. Signal Process. 2015, 63, 5720–5732. [Google Scholar] [CrossRef]
  20. Xu, J.W.; Liao, G.S.; Zhu, S.Q. Robust LCMV beamforming based on phase response constraint. Electron. Lett. 2012, 48, 1284. [Google Scholar] [CrossRef]
  21. Somasundaram, S.D. Linearly Constrained Robust Capon Beamforming. IEEE Trans. Signal Process. 2012, 60, 5845–5856. [Google Scholar] [CrossRef]
  22. Chen, C.-Y.; Vaidyanathan, P.P. Quadratically Constrained Beamforming Robust Against Direction-of-Arrival Mismatch. IEEE Trans. Signal Process. 2007, 55, 4139–4150. [Google Scholar] [CrossRef]
  23. Yu, Z.L.; Ser, W.; Er, M.H.; Gu, Z.; Li, Y. Robust Adaptive Beamformers Based on Worst-Case Optimization and Constraints on Magnitude Response. IEEE Trans. Signal Process. 2009, 57, 2615–2628. [Google Scholar] [CrossRef]
  24. Yu, Z.L.; Er, M.H.; Ser, W. A Novel Adaptive Beamformer Based on Semidefinite Programming (SDP) With Magnitude Response Constraints. IEEE Trans. Antennas Propag. 2008, 56, 1297–1307. [Google Scholar] [CrossRef]
  25. Huang, F.; Sheng, W.; Ma, X. Modified projection approach for robust adaptive array beamforming. Signal Process. 2012, 92, 1758–1763. [Google Scholar] [CrossRef]
  26. Jia, W.; Jin, W.; Zhou, S.; Yao, M. Robust adaptive beamforming based on a new steering vector estimation algorithm. Signal Process. 2013, 93, 2539–2542. [Google Scholar] [CrossRef]
  27. Zhang, W.; Liu, T.; Yang, G.; Jiang, C.; Hu, Y.; Lan, T.; Zhao, Z. A Novel Method for Improving Quality of Oblique Incidence Sounding Ionograms Based on Eigenspace-Based Beamforming Technology and Capon High-Resolution Range Profile. Remote Sens. 2022, 14, 4305. [Google Scholar] [CrossRef]
  28. Yang, H.; Wang, P.; Ye, Z. Robust Adaptive Beamforming via Covariance Matrix Reconstruction and Interference Power Estimation. IEEE Commun. Lett. 2021, 25, 3394–3397. [Google Scholar] [CrossRef]
  29. Yang, Z.; de Lamare, R.C.; Li, X. L1 Regularized STAP Algorithms with a Generalized Sidelobe Canceler Architecture for Airborne Radar. IEEE Trans. Signal Process. 2012, 60, 674–686. [Google Scholar] [CrossRef]
  30. Wu, Q.; Zhang, Y.D.; Amin, M.G.; Himed, B. Space-time adaptive processing and motion parameter estimation in multi-static passive radar exploiting Bayesian compressive sensing. IEEE Trans. Geosci. Remote Sens. 2016, 54, 944–957. [Google Scholar] [CrossRef]
  31. Yang, X.; Sun, Y.; Zeng, T.; Long, T.; Sarkar, T.K. Fast STAP Method Based on PAST with Sparse Constraint for Airborne Phased Array Radar. IEEE Trans. Signal Process. 2016, 64, 4550–4561. [Google Scholar] [CrossRef]
  32. Zhang, X.; Jiang, W.; Huo, K.; Liu, Y.; Li, X. Robust Adaptive Beamforming Based on Linearly Modified Atomic-Norm Minimization with Target Contaminated Data. IEEE Trans. Signal Process. 2020, 68, 5138–5151. [Google Scholar] [CrossRef]
  33. Bhaskar, B.N.; Tang, G.; Recht, B. Atomic Norm Denoising with Applications to Line Spectral Estimation. IEEE Trans. Signal Process. 2013, 61, 5987–5999. [Google Scholar] [CrossRef]
  34. Li, Y.; Chi, Y. Off-the-Grid Line Spectrum Denoising and Estimation with Multiple Measurement Vectors. IEEE Trans. Signal Process. 2016, 64, 1257–1269. [Google Scholar] [CrossRef]
  35. Tang, G.; Bhaskar, B.N.; Shah, P.; Recht, B. Compressed sensing off the grid. IEEE Trans. Inf. Theory 2013, 59, 7465–7490. [Google Scholar] [CrossRef]
  36. Pei, B.; Han, H.; Sheng, Y.; Qiu, B. Research on smart antenna beamforming by generalized regression neural network. In Proceedings of the 2013 IEEE International Conference on Signal Processing, Communication and Computing (ICSPCC 2013), Kunming, China, 5–8 August 2013; pp. 1–4. [Google Scholar] [CrossRef]
  37. Zaharis, Z.D.; Yioultsis, T.V.; Skeberis, C.; Xenos, T.D.; Lazaridis, P.I.; Mastorakis, G.; Mavromoustakis, C.X. Implementation of antenna array beamforming by using a novel neural network structure. In Proceedings of the 2016 International Conference on Telecommunications and Multimedia (TEMU), Heraklion, Greece, 25–27 July 2016; pp. 1–5. [Google Scholar] [CrossRef]
  38. Zaharis, Z.D.; Gravas, I.P.; Lazaridis, P.I.; Yioultsis, T.V.; Antonopoulos, C.S.; Xenos, T.D. An Effective Modification of Conventional Beamforming Methods Suitable for Realistic Linear Antenna Arrays. IEEE Trans. Antennas Propag. 2020, 68, 5269–5279. [Google Scholar] [CrossRef]
  39. Kim, Y.-S.; Schvartzman, D.; Yu, T.-Y.; Palmer, R.D. Fast Adaptive Beamforming for Weather Observations with Convolutional Neural Networks. Remote Sens. 2023, 15, 4129. [Google Scholar] [CrossRef]
  40. Shi, J.; Hu, G.; Zhang, X.; Sun, F.; Zhou, H. Sparsity-based 2-D DOA estimation for co-prime array: From sum-difference coarray viewpoint. IEEE Trans. Signal Process. 2017, 65, 5591–5604. [Google Scholar] [CrossRef]
  41. Ren, M.; Hu, G.; Shi, J.; Zhou, H. Joint Angle and Gain-Phase Error Estimation for Nested Bistatic MIMO Radar via Tensor Decomposition. Signal Process. 2023, 202, 108740. [Google Scholar] [CrossRef]
Figure 1. The update layer of parameter Ω.
Figure 2. The update layer of parameter u.
Figure 3. The update layer of parameter X̄.
Figure 4. The update layer of parameter Z.
Figure 5. The update layer of parameter Λ.
Figure 6. The C-ADMM-net structure.
Figure 7. Beam patterns of the different algorithms in Scenario 1.
Figure 8. Beam patterns of the different algorithms in Scenario 2.
Figure 9. Beam patterns of the different algorithms in Scenario 3.
Figure 10. Learning curves in different scenarios.
Figure 11. Output SINR versus SNR and number of snapshots in Scenario 1.
Figure 12. Output SINR versus SNR and number of snapshots in Scenario 3.
Table 1. Performance comparison of different algorithms in different scenarios.

Scenario   Algorithm                  RMSE      Time
Scene 1    worst-case optimization    3.7511    48.845960
           LSMI-MVDR                  3.4097    51.199925
           RVO-LCMV                   10.1474   48.082712
           ADMM (max 300)             0.8192    35.036440
           C-ADMMN (30 layers)        0.4594    3.770667
Scene 2    worst-case optimization    9.8828    49.784180
           LSMI-MVDR                  10.3290   53.399999
           RVO-LCMV                   18.7958   47.921498
           ADMM (max 300)             0.8969    33.197317
           C-ADMMN (30 layers)        0.4173    3.607241
Scene 3    worst-case optimization    3.8176    49.784180
           LSMI-MVDR                  3.6098    50.870276
           RVO-LCMV                   10.1997   48.201499
           ADMM (max 300)             0.9078    33.372376
           C-ADMMN (30 layers)        0.5211    3.496087