Article

Bias Analysis and Correction for Ill-Posed Inversion Problem with Sparsity Regularization Based on L1 Norm for Azimuth Super-Resolution of Radar Forward-Looking Imaging

1 School of Remote Sensing & Geomatics, Nanjing University of Information Science & Technology, Nanjing 210044, China
2 College of Surveying and Geoinformatics, Tongji University, Shanghai 200092, China
3 College of Geography and Oceanography, Minjiang University, Fuzhou 350108, China
4 Key Laboratory of Marine Environmental Survey Technology and Application, Ministry of Natural Resources, Guangzhou 510300, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(22), 5792; https://doi.org/10.3390/rs14225792
Submission received: 20 October 2022 / Revised: 9 November 2022 / Accepted: 11 November 2022 / Published: 16 November 2022
(This article belongs to the Special Issue Deep Learning in Optical Satellite Images)

Abstract: Sparsity regularization based on the L1 norm can significantly stabilize the solution of ill-posed sparse inversion problems, e.g., azimuth super-resolution of radar forward-looking imaging, effectively suppressing noise and reducing the blurring effect of the convolution kernel. In practice, the total variation (TV) and TV-sparsity (TVS) regularizations based on the L1 norm are widely adopted for solving such ill-posed problems. Generally, however, the existence of bias is ignored, which is theoretically incomplete. This paper places emphasis on analyzing the partially biased property of the L1 norm. On this basis, we derive the partially bias-corrected solutions of TVS and TV, which improves the rigor of the theory. Lastly, two groups of experimental results show that the proposed methods with partial bias correction preserve higher quality than those without bias correction: they distinguish adjacent targets, suppress the noise, and preserve the shape and size of targets in visual terms. The overall improvements in the Peak Signal-to-Noise Ratio, Structural Similarity, and Sum-Squared-Errors assessment indexes are 2.15%, 1.88%, and 4.14%, respectively. As such, we confirm the theoretical rigor and practical feasibility of the partially bias-corrected solution with sparsity regularization based on the L1 norm.

1. Introduction

Ill-posed models widely exist in the fields of geodesy and remote sensing, such as GNSS (Global Navigation Satellite System) fast ambiguity resolution [1,2,3], gravity field determination from the satellite missions of the Gravity Recovery And Climate Experiment (GRACE) [4,5,6,7,8,9,10], geometric correction of satellite imagery based on the rational function model (RFM) [11,12,13], azimuth super-resolution imaging of Real Aperture Radar (RAR) [13,14,15,16,17,18,19,20,21,22,23,24,25], signal restoration for vision sensing [26,27,28], full-waveform Lidar data deconvolution [29,30,31], and optical remote sensing deblurring [32,33,34]. As is known, the solution of a well-posed model should satisfy three standards: existence, uniqueness, and stability [35]. For an ill-posed observational equation, the coefficient matrix is of full rank, but its condition number is significantly large. In consequence, even very small observation errors can corrupt the estimated parameters considerably [36,37]. In other words, the solution of an ill-posed model violates the stability condition of a well-posed solution: it is existent and unique but unstable. Therefore, regularization methods have been proposed to stabilize ill-posed solutions.
Tikhonov regularization was proposed to solve ill-posed problems based on the L2-norm criterion that the sum of squares of the residuals and parameters is minimized [38,39]. Tikhonov regularization utilizes the regularization parameter to reduce the influence of the small singular values of the coefficient matrix and further decrease the condition number, yielding a better regularized solution; however, it also introduces biases into the estimated solution. Considering the impact of small singular values, Xu [40] and Hansen [41] directly truncated the small singular values and implemented pseudo-inversion to obtain a stable solution, which is the well-known Truncated Singular Value Decomposition (TSVD) regularization. Although TSVD can obtain fine and stable solutions for ill-posed models, it is still biased due to the information loss of the coefficient matrix. To this end, Xu et al. [42], Xu [43], Shen et al. [37], Chen et al. [44], and Ji et al. [36] corrected the biases of the biased parameter estimates with fully, partially, and adaptively bias-corrected strategies. These Tikhonov regularization methods based on the L2 norm suppress noise by smoothing the processed results. Thereby, when latent sparsity of the unknown parameters exists, Tikhonov regularization based on the L2 norm is unsuitable. For example, in ship monitoring or airport surveillance, the targets of interest are sparse compared with the entire imaging area [16].
Therefore, in order to reconstruct these sparse characteristics, sparsity regularization terms based on the L1 norm were applied to estimate sparse parameter solutions. Azadbakht et al. [31] proposed a sparsity-constrained regularization approach for the deconvolution of the returned Lidar waveform and successfully restored the target cross-section. Zhang et al. [23] used a method solved by the split Bregman algorithm to recover strong point targets. A sparse denoising-based super-resolution method (SDBSM) was proposed in [18] to avoid losing the shape characteristics of targets treated as strong point targets. Furthermore, Tuo et al. [45] developed an iterative reweighted least-squares method for the sparsity-constrained regularization method to accelerate the computation. Zhang et al. [46] designed a total variation (TV) [47] super-resolution imaging method based on the TSVD strategy. Shortly after, a fast sparse-TSVD version was put forward by Tuo et al. [20] to improve its efficiency. Zhang et al. [22] introduced the TV term to restore smoother contour information of the target and the L1 norm to preserve sparsity, improving the resolution of radar forward-looking imaging; this is called the TV-sparse (TVS) method. Similarly, Quan et al. [19] developed an improved quasi-Newton iteration method based on a Graphics Processing Unit platform to raise the computational efficiency of sparse reconstruction. Due to the information loss of the coefficient matrix, the restored results of TV are slightly inferior to those of TVS. A Gohberg-Semencul representation-based fast TV method was proposed by Zhang et al. [17] to improve the computational efficiency. Recently, Huo et al. [48] conceptualized a balanced Tikhonov and TV regularization approach to retain both the sparsity of the candidate solution and the smoothness of continuous contours in the regularized solution.
Although these L1-norm-based methods with different algorithms can obtain fine results that preserve both the sparsity of the unknown parameters and the smooth shape of continuous contours, the bias of the regularized solution based on the L1 norm has been neither analyzed nor mitigated. Obviously, this is theoretically insufficient and incomplete for the analytical solution based on the L1 norm.
To this end, this paper addresses three goals: (1) A bias analysis of the sparsity regularization method based on the L1 norm is implemented to emphasize its theoretical importance and make up for this incompleteness. (2) A comparison with the bias of the regularization method based on the L2 norm is carried out to explicate the advantage of the L1 norm's partial bias; the consistency with the conclusion of the partially bias-corrected regularization methods proposed by Shen et al. [37] is analyzed, and the theoretical reason for this advantage is mined. (3) A piecewise partially bias-corrected regularization solution based on the L1 norm is deduced, and the corresponding iterative algorithms based on the two typical TV and TVS models are designed to achieve a more accurate solution.
The remainder of this paper is organized as follows. Section 2 contains a brief introduction to the azimuth echo convolution model of radar forward-looking imaging. In Section 3, an analysis and a comparison of the regularized solutions based on the L1 norm and the L2 norm are carried out, with their differences expressed and the nature of partial bias revealed. In Section 4, taking the two typical TV and TVS methods for alleviating the ill-posed problem as examples, we derive their partial bias correction solutions in piecewise form and design the corresponding iterative flowcharts. In Section 5, two experimental examples are designed to demonstrate the performance of the proposed methods, namely a 1-D point target simulation and a 2-D area simulation for azimuth super-resolution of radar forward-looking imaging. A discussion on the superiority of the proposed methods with bias correction is presented in Section 6. Finally, conclusions and remarks are summarized in Section 7.

2. Azimuth Echo Convolution Model of Radar Forward-Looking Imaging

The azimuth echo signal convolution model of radar forward-looking imaging is illustrated in Figure 1, according to which the imaging platform travels along the Y-direction with a velocity of v and at the height of H. R0 is the initial slant range between the radar and the target P, with the initial azimuth angle being θ0. φ0 is the initial pitching angle. As the platform travels with a time interval t, the slant range changes into R, the pitching angle turns to φ, and the azimuth angle becomes θ.
According to the trigonometric relation, the slant range history R(t) satisfies
$$R(t) = \sqrt{R_0^2 + (vt)^2 - 2R_0vt\cos\theta_0\cos\varphi_0} \approx R_0 - vt\cos\theta_0\cos\varphi_0 \tag{1}$$
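As a quick numerical sanity check, the first-order approximation of the slant range history above can be compared with the exact expression. The platform and geometry values below are arbitrary illustrations, not the paper's simulation parameters.

```python
import numpy as np

# Compare the exact slant range history of Equation (1) with its first-order
# approximation. All numeric values are illustrative assumptions.
R0 = 10_000.0                 # initial slant range (m)
v = 100.0                     # platform velocity (m/s)
theta0 = np.deg2rad(2.0)      # initial azimuth angle
phi0 = np.deg2rad(30.0)       # initial pitching angle
t = np.linspace(0.0, 1.0, 5)  # slow time samples (s)

exact = np.sqrt(R0**2 + (v*t)**2 - 2*R0*v*t*np.cos(theta0)*np.cos(phi0))
approx = R0 - v*t*np.cos(theta0)*np.cos(phi0)
err = np.abs(exact - approx)  # stays sub-metre over this short interval
```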
Considering both the range resolution and the working distance, the radar transmits a linear frequency-modulated signal, expressed as
$$r(\tau) = \mathrm{rect}\!\left(\frac{\tau}{T_p}\right)\exp(j2\pi f_c\tau)\exp(j\pi\mu\tau^2) \tag{2}$$
where τ is the fast time in the range direction, the carrier frequency of the transmitted signal is denoted as fc, Tp is the signal duration, μ stands for the chirp rate, and rect(·) is a rectangular window. After platform traveling and antenna scanning, the received signal of the target P evolves as follows,
$$r(\tau,t) = \sigma_p\,h(t-t_0)\,\mathrm{rect}\!\left(\frac{\tau-\tau_d}{T_p}\right)\exp\!\big(j2\pi f_c(\tau-\tau_d)\big)\exp\!\big(j\pi\mu(\tau-\tau_d)^2\big) \tag{3}$$
where h(·) represents the modulation effect of the antenna pattern, σp is the target scattering coefficient, $\tau_d = 2R(t)/c$ is the time delay, and t0 is the moment when the beam center scans P. By matched filtering, the widely utilized technology for high range resolution, the received signal becomes
$$r(\tau,t) = \sigma_p\,h(t-t_0)\,\mathrm{sinc}\!\big(B_r(\tau-\tau_d)\big)\exp(-j2\pi f_c\tau_d) \tag{4}$$
Radar scanning imaging is accompanied by platform movement, which results in the echoes aliasing in different range units; therefore, this influence has to be removed [16,46,48]. The received echo can then be modeled in the following convolution form,
$$r = a \otimes u + \varepsilon \tag{5}$$
where r represents the discrete azimuth echo signal vector, a is the convolution kernel standing for the antenna pattern signal, "$\otimes$" is the convolution symbol, and ε is the random error.
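The convolution model above can be sketched numerically. The Gaussian-shaped antenna pattern, the sparse scene, and the SNR value below are illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np

# Sketch of the azimuth echo convolution model r = a (x) u + eps (Equation (5)).
# The Gaussian kernel is a placeholder for the antenna pattern signal.
rng = np.random.default_rng(0)

n = 256
theta = np.linspace(-5.0, 5.0, n)        # azimuth angle grid (degrees)

# Sparse scene: a few point targets (positions/amplitudes are arbitrary)
u = np.zeros(n)
u[[60, 100, 108, 150]] = [1.0, 0.8, 0.9, 0.6]

# Hypothetical antenna pattern (convolution kernel a), normalized to unit sum
a = np.exp(-0.5 * (theta / 0.4) ** 2)
a /= a.sum()

# Received echo: circular convolution via FFT plus white Gaussian noise
r_clean = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(u)))
snr_db = 15.0
noise_power = np.mean(r_clean ** 2) / 10 ** (snr_db / 10)
r = r_clean + rng.normal(0.0, np.sqrt(noise_power), n)
```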

3. Analysis and Comparison between L2 Norm and L1 Norm

In order to show the bias of the regularization solution based on the L1 norm intuitively, we derive the formulae of the bias and analyze and compare it with that based on the L2 norm.
For linear discrete ill-posed models, the expression can be written as,
$$r = Au + \varepsilon \tag{6}$$
where r is an m-vector of observations, A is an m × n (m > n) deterministic coefficient matrix, which is assumed to be of full column rank but with a large condition number, u is the n-vector of unknown parameters, and ε is the random error vector with zero mean and variance-covariance matrix $\sigma_0^2 W^{-1}$. Here $\sigma_0^2$ is the variance of unit weight, and W is the weight matrix, which is set as the identity matrix in this paper. According to the Tikhonov regularization criterion, the objective function is expressed as,
$$\hat{u} = \arg\min_{u}\ \|Au-r\|_2^2 + \alpha\|u\|_2^2 \tag{7}$$
where α is the positive regularization parameter and $\|\cdot\|_2$ is the L2 norm. For the solution of the Tikhonov regularization method based on the L2 norm, namely $\hat{u}_T = (A^TA + \alpha I)^{-1}A^Tr$, applying the mathematical expectation and the matrix inversion formula yields $E(\hat{u}_T) = \bar{u} - \alpha(A^TA + \alpha I)^{-1}\bar{u}$, where E(·) is the operator of mathematical expectation and $\bar{u}$ is the true value of the unknown parameters. Thus, the bias can be obtained according to [37,42],
$$\mathrm{bias}(\hat{u}_T) = -\alpha(A^TA+\alpha I)^{-1}\bar{u} \tag{8}$$
Since the bias vector can be formally computed with Equation (8), we can formally remove the biases from the regularization solution and obtain the bias-corrected regularized estimate as follows:
$$u_c = \hat{u}_T - \mathrm{bias}(\hat{u}_T) = \hat{u}_T + \alpha(A^TA+\alpha I)^{-1}\bar{u} \tag{9}$$
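The bias-corrected Tikhonov estimate above can be illustrated on a small synthetic ill-posed system. Since the true value ū is unknown in practice, the sketch below plugs the estimate itself into the correction term; the matrix construction and parameter values are illustrative assumptions.

```python
import numpy as np

# Numerical sketch of the Tikhonov solution and its bias correction
# (Equations (7)-(9)) on a small synthetic ill-posed system.
rng = np.random.default_rng(1)

m, n = 50, 30
A = rng.normal(size=(m, n))
A[:, 1] = A[:, 0] + 1e-4 * rng.normal(size=m)   # near-collinear columns -> ill-posed
u_bar = rng.normal(size=n)
r = A @ u_bar + 1e-3 * rng.normal(size=m)

alpha = 1e-2
N = A.T @ A + alpha * np.eye(n)
u_tik = np.linalg.solve(N, A.T @ r)             # Tikhonov estimate

# bias(u_tik) = -alpha (A^T A + alpha I)^{-1} u_bar; u_bar is unknown in
# practice, so the estimate is substituted for it (an assumption here)
bias_hat = -alpha * np.linalg.solve(N, u_tik)
u_corrected = u_tik - bias_hat
```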
However, the L2 norm, representing smoothing characteristics, cannot solve the ill-posed problem with sparsity regularization, in particular the deconvolution problem. According to Tuo et al. [20], the convolution model (5) can be recast as Equation (6), where A is the corresponding circulant Toeplitz matrix built from a. In general, the L0 norm is selected to represent the sparsity of the parameters, but it leads to a non-convex problem that is hard to solve. Therefore, the L1 norm is utilized to replace the L0 norm, balancing sparsity and smoothness. For the ill-posed problem with sparsity regularization, the objective function (7) can be recast as
$$\hat{u} = \arg\min_{u}\ \|Au-r\|_2^2 + \beta\|u\|_1 \tag{10}$$
where β is a positive regularization parameter. The split Bregman algorithm (SBA) can be used to estimate the parameter u [49,50]. An intermediate variable w = u is introduced, and Equation (10) can be rewritten as
$$\hat{u} = \arg\min_{u}\ \|Au-r\|_2^2 + \beta\|w\|_1, \quad \mathrm{s.t.}\ w = u \tag{11}$$
According to the SBA, the above-mentioned formula can be further evolved with the following iterative form,
$$\begin{cases} (\hat{u}^{k+1}, w^{k+1}) = \arg\min_{u,w}\ \|Au-r\|_2^2 + \beta\|w\|_1 + \gamma\|w-u-b^k\|_2^2 \\ b^{k+1} = b^k + (\hat{u}^{k+1} - w^{k+1}) \end{cases}$$
where γ is the penalty parameter with a large value, and b is the Bregman auxiliary variable associated with w. The superscript k stands for the kth iteration. The solutions of the iterative strategy can be updated with the following formulae,
$$\begin{cases} \hat{u}^{k+1} = (A^TA+\gamma I)^{-1}\big(A^Tr + \gamma(w^k-b^k)\big) \\ w^{k+1} = \mathrm{sign}(\hat{u}^{k+1}+b^k)\max\big(|\hat{u}^{k+1}+b^k| - \beta/2\gamma,\ 0\big) \\ b^{k+1} = b^k + \hat{u}^{k+1} - w^{k+1} \end{cases} \tag{12}$$
where sign(·) stands for the signum function. The first formula of Equation (12) is similar to the Tikhonov regularization solution but with the extra term $w^k - b^k$, which originates from the SBA. Thereby, we derive the equivalent formula by applying the matrix inversion formula to $(A^TA+\gamma I)^{-1}$ as follows,
$$\begin{aligned} \hat{u} &= \left[(A^TA)^{-1} - \gamma(A^TA+\gamma I)^{-1}(A^TA)^{-1}\right]\big(A^Tr + \gamma(w^k - b^k)\big) \\ &= (A^TA)^{-1}A^Tr - \gamma(A^TA+\gamma I)^{-1}(A^TA)^{-1}A^Tr + \gamma(A^TA+\gamma I)^{-1}(w^k - b^k) \\ &= \bar{u} - \gamma(A^TA+\gamma I)^{-1}\big(\bar{u} - (w^k - b^k)\big) \end{aligned} \tag{13}$$
For convenience of expression, the superscript k is omitted. According to the definition of expectation, the bias of $\hat{u}$ can be expressed as,
$$\mathrm{bias}(\hat{u}) = -\gamma(A^TA+\gamma I)^{-1}\big(\bar{u} - (w-b)\big) \tag{14}$$
When γ is equal to α, subtracting Equation (8) from Equation (14) yields the difference Δb between the L1-norm and L2-norm biases,
$$\Delta b = \gamma(A^TA+\gamma I)^{-1}(w-b) \tag{15}$$
Substituting the intermediate-variable solution w from Equation (12) into Δb, we achieve the following piecewise formulation,
$$\Delta b = \begin{cases} \gamma(A^TA+\gamma I)^{-1}\hat{u}, & |\hat{u}+b| > \beta/2\gamma \\ -\gamma(A^TA+\gamma I)^{-1}b, & |\hat{u}+b| \le \beta/2\gamma \end{cases} \tag{16}$$
Referring to the definition of SBA, the intermediate variable b is a small value. Therefore, the above-mentioned Δb can be approximately recast as,
$$\Delta b \approx \begin{cases} \gamma(A^TA+\gamma I)^{-1}\hat{u}, & |\hat{u}+b| > \beta/2\gamma \\ 0, & |\hat{u}+b| \le \beta/2\gamma \end{cases} \tag{17}$$
This equation infers that the biases of the L1 norm and the L2 norm are nearly equal when $|\hat{u}+b| \le \beta/2\gamma$. In other words, the L2-norm solution needs bias correction in the region $|\hat{u}+b| > \beta/2\gamma$, while the L1-norm solution does not. Furthermore, according to the definition of iterative shrinkage-threshold algorithms for w, the term $\bar{u} - (w-b)$ in bias($\hat{u}$) can be rewritten as
$$\bar{u} - (w-b) = \begin{cases} \bar{u} - \hat{u} + \beta/2\gamma, & |\bar{u}+b| > \beta/2\gamma \\ \bar{u} + b, & |\bar{u}+b| \le \beta/2\gamma \end{cases} \tag{18}$$
As the iterations proceed, the estimate $\hat{u}$ gradually approaches the true value $\bar{u}$; i.e., Equation (18) can be updated in the following limit form,
$$\lim_{\hat{u}\to\bar{u}}\big(\bar{u}-(w-b)\big) = \begin{cases} \beta/2\gamma, & |\bar{u}+b| > \beta/2\gamma \\ \bar{u}+b, & |\bar{u}+b| \le \beta/2\gamma \end{cases} \tag{19}$$
The positive penalty parameter γ is a large value and grows by a factor greater than 1 as the iteration proceeds, while β is a small positive value; thus, the value of β/γ is close to zero. Thereby, Equation (19) can be approximately recast as,
$$\lim_{\hat{u}\to\bar{u}}\big(\bar{u}-(w-b)\big) \approx \begin{cases} 0, & |\hat{u}+b| > \beta/2\gamma \\ \hat{u}+b, & |\hat{u}+b| \le \beta/2\gamma \end{cases} \tag{20}$$
From this, $\hat{u}$ of Equation (13) based on the L1 norm is obviously a partially bias-corrected regularization solution; namely, the solution $\hat{u}$ based on the L1 norm is already unbiased in the region $|\hat{u}+b| > \beta/2\gamma$. Shen et al. [37] reported that the partially bias-corrected regularization solution is superior to the fully bias-corrected regularization solution (9), depending on an inequality involving the singular values of the coefficient matrix A and the regularization parameter α. This suggests that extra work is required to correct the bias of the Tikhonov regularization solution based on the L2 norm, while the partial bias correction of the regularization solution based on the L1 norm originates from its own characteristics, as contained in Equation (20). In other words, from the frequency point of view, the low-frequency part of the singular values of the coefficient matrix is not corrected, while the rest is corrected [36].
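The split Bregman iteration of Equation (12) can be sketched as follows. The problem size, regularization parameters, and toy sparse scene are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal split Bregman iteration for min ||Au - r||^2 + beta ||u||_1
# (Equation (12)). Parameter values are illustrative only.
def soft_threshold(x, t):
    """Soft-thresholding: sign(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sba_l1(A, r, beta=0.1, gamma=1.0, n_iter=100):
    n = A.shape[1]
    u = np.zeros(n)
    w = np.zeros(n)
    b = np.zeros(n)
    M = A.T @ A + gamma * np.eye(n)   # fixed system matrix for the u-update
    At_r = A.T @ r
    for _ in range(n_iter):
        u = np.linalg.solve(M, At_r + gamma * (w - b))        # u-update
        w = soft_threshold(u + b, beta / (2.0 * gamma))       # w-update
        b = b + u - w                                         # Bregman update
    return u

# Toy test: recover a sparse vector from a noisy overdetermined system
rng = np.random.default_rng(2)
A = rng.normal(size=(80, 40))
u_true = np.zeros(40)
u_true[[5, 17, 30]] = [2.0, -1.5, 1.0]
r = A @ u_true + 0.01 * rng.normal(size=80)
u_hat = sba_l1(A, r)
```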

4. Bias Correction for TV-Sparse and TV Model

In ill-posed sparsity inversion problems with latent sparse parameters, such as deconvolution, the TVS and TV methods are usually used to relax the problem and obtain satisfactory results, but their biases are typically ignored. Therefore, we give the following derivations to correct these biases.

4.1. Deduction of TVS Model with Bias Correction

The TVS method proposed by Zhang et al. [22] augments the TV regularization term by utilizing the sparsity of both the signal $\|u\|_1$ and its gradient $\|\nabla u\|_1$; its objective function can be evolved from Equation (10) as follows,
$$\hat{u} = \arg\min_{u}\ \|Au-r\|_2^2 + \alpha\|\nabla u\|_1 + \beta\|u\|_1 \tag{21}$$
According to SBA, the objective function (21) can be recast by introducing the intermediate variables v = ∇u and w = u,
$$\hat{u} = \arg\min_{u}\ \|Au-r\|_2^2 + \alpha\|v\|_1 + \beta\|w\|_1 + \gamma_1\|v-\nabla u-b_1\|_2^2 + \gamma_2\|w-u-b_2\|_2^2 \tag{22}$$
where b1 and b2 are the Bregman auxiliary variables corresponding to ∇u and u, γ1 and γ2 are the penalty parameters with large values, and α and β are the positive regularization parameters. The corresponding solutions with an iterative strategy can be derived as,
$$\begin{cases} u^{k+1} = (A^TA+\gamma_1\nabla^T\nabla+\gamma_2 I)^{-1}\big(A^Tr + \gamma_1\nabla^T(v^k-b_1^k) + \gamma_2(w^k-b_2^k)\big) \\ v^{k+1} = \mathrm{sign}(\nabla u^{k+1}+b_1^k)\max\big(|\nabla u^{k+1}+b_1^k| - \alpha/2\gamma_1,\ 0\big) \\ w^{k+1} = \mathrm{sign}(u^{k+1}+b_2^k)\max\big(|u^{k+1}+b_2^k| - \beta/2\gamma_2,\ 0\big) \\ b_1^{k+1} = b_1^k + \nabla u^{k+1} - v^{k+1} \\ b_2^{k+1} = b_2^k + u^{k+1} - w^{k+1} \end{cases} \tag{23}$$
Referring to the analyses in Section 3 and Equations (13)–(15), the formula of the estimated solution u for TVS can be rewritten as follows,
$$u^{k+1} = \bar{u} - \gamma_1(A^TA+\gamma_1\nabla^T\nabla+\gamma_2 I)^{-1}\nabla^T\big(\nabla\bar{u}-(v^k-b_1^k)\big) - \gamma_2(A^TA+\gamma_1\nabla^T\nabla+\gamma_2 I)^{-1}\big(\bar{u}-(w^k-b_2^k)\big) \tag{24}$$
The bias consists of two parts: $\mathrm{bias}_1 = -\gamma_1(A^TA+\gamma_1\nabla^T\nabla+\gamma_2 I)^{-1}\nabla^T\big(\nabla\bar{u}-(v^k-b_1^k)\big)$ and $\mathrm{bias}_2 = -\gamma_2(A^TA+\gamma_1\nabla^T\nabla+\gamma_2 I)^{-1}\big(\bar{u}-(w^k-b_2^k)\big)$. For convenience of representation, $\mathbb{R}$ stands for the real number space. We define the sets $\Phi_{2,1}(x) = \{x\in\mathbb{R}\,|\,|x| > \beta/2\gamma_2\}$ and $\Phi_{2,2}(x) = \{x\in\mathbb{R}\,|\,|x| \le \beta/2\gamma_2\}$, with $\Phi_{2,1}\cap\Phi_{2,2} = \varnothing$. Similarly to Equation (20), the first term $\mathrm{bias}_1$ can be reconstructed as,
$$\mathrm{bias}_1 \approx \begin{cases} 0, & \Phi_{1,1}(\nabla\hat{u}^{k+1}+b_1^{k+1}) \\ -\gamma_1(A^TA+\gamma_1\nabla^T\nabla)^{-1}\nabla^T(\nabla\hat{u}^{k+1}+b_1^{k+1}), & \Phi_{1,2}(\nabla\hat{u}^{k+1}+b_1^{k+1}) \end{cases} \tag{25}$$
where $\Phi_{1,1}(x) = \{x\in\mathbb{R}\,|\,|x| > \alpha/2\gamma_1\}$, $\Phi_{1,2}(x) = \{x\in\mathbb{R}\,|\,|x| \le \alpha/2\gamma_1\}$, and $\Phi_{1,1}\cap\Phi_{1,2} = \varnothing$. Due to the uncertainty of the relationship between the sets $\Phi_{1,1}$ and $\Phi_{2,1}$, a necessary discussion is carried out as follows. By comparing Equations (20) and (25), the inclusion relation of $\Phi_{1,1}$ and $\Phi_{2,1}$ depends on the values of $\alpha/\gamma_1$ and $\beta/\gamma_2$, namely $\alpha/\gamma_1 < \beta/\gamma_2$, $\alpha/\gamma_1 = \beta/\gamma_2$, or $\alpha/\gamma_1 > \beta/\gamma_2$. Concretely,
(1)
when $\alpha/\gamma_1 < \beta/\gamma_2$, i.e., $\Phi_{2,1} \subset \Phi_{1,1}$ and $\Phi_{1,2} \subset \Phi_{2,2}$, the partially bias-corrected solution of Equation (25) can be obtained by,
$$u_{TVSBC}^{k+1} = \begin{cases} \hat{u}^{k+1}, & \Phi_{2,1}(\hat{u}^{k+1}+b_2^{k+1}) \\ \hat{u}^{k+1} + \gamma_2(A^TA+\gamma_2 I)^{-1}(\hat{u}^{k+1}+b_2^{k+1}), & \Phi_{2,2}(\hat{u}^{k+1}+b_2^{k+1}) \cap \Phi_{1,1}(\nabla\hat{u}^{k+1}+b_1^{k+1}) \\ \hat{u}^{k+1} + \gamma_1(A^TA+\gamma_1\nabla^T\nabla)^{-1}\nabla^T(\nabla\hat{u}^{k+1}+b_1^{k+1}) + \gamma_2(A^TA+\gamma_2 I)^{-1}(\hat{u}^{k+1}+b_2^{k+1}), & \Phi_{1,2}(\nabla\hat{u}^{k+1}+b_1^{k+1}) \end{cases} \tag{26}$$
(2)
when $\alpha/\gamma_1 = \beta/\gamma_2$, i.e., $\Phi_{2,1} = \Phi_{1,1}$ and $\Phi_{1,2} = \Phi_{2,2}$, the partially bias-corrected solution of Equation (25) can be simplified as,
$$u_{TVSBC}^{k+1} = \begin{cases} \hat{u}^{k+1}, & \Phi_{2,1}(\hat{u}^{k+1}+b_2^{k+1})\ \text{or}\ \Phi_{1,1}(\nabla\hat{u}^{k+1}+b_1^{k+1}) \\ \hat{u}^{k+1} + \gamma_1(A^TA+\gamma_1\nabla^T\nabla)^{-1}\nabla^T(\nabla\hat{u}^{k+1}+b_1^{k+1}) + \gamma_2(A^TA+\gamma_2 I)^{-1}(\hat{u}^{k+1}+b_2^{k+1}), & \Phi_{1,2}(\nabla\hat{u}^{k+1}+b_1^{k+1})\ \text{or}\ \Phi_{2,2}(\hat{u}^{k+1}+b_2^{k+1}) \end{cases} \tag{27}$$
(3)
when $\alpha/\gamma_1 > \beta/\gamma_2$, i.e., $\Phi_{1,1} \subset \Phi_{2,1}$ and $\Phi_{2,2} \subset \Phi_{1,2}$, the partially bias-corrected solution of Equation (25) can be refined as,
$$u_{TVSBC}^{k+1} = \begin{cases} \hat{u}^{k+1}, & \Phi_{1,1}(\nabla\hat{u}^{k+1}+b_1^{k+1}) \\ \hat{u}^{k+1} + \gamma_1(A^TA+\gamma_1\nabla^T\nabla)^{-1}\nabla^T(\nabla\hat{u}^{k+1}+b_1^{k+1}), & \Phi_{2,1}(\hat{u}^{k+1}+b_2^{k+1}) \cap \Phi_{1,2}(\nabla\hat{u}^{k+1}+b_1^{k+1}) \\ \hat{u}^{k+1} + \gamma_1(A^TA+\gamma_1\nabla^T\nabla)^{-1}\nabla^T(\nabla\hat{u}^{k+1}+b_1^{k+1}) + \gamma_2(A^TA+\gamma_2 I)^{-1}(\hat{u}^{k+1}+b_2^{k+1}), & \Phi_{2,2}(\hat{u}^{k+1}+b_2^{k+1}) \end{cases} \tag{28}$$
Obviously, the solution of the TVS model based on the L1 norm is only partially biased, which is a key advantage over solutions based on the L2 norm. The implementation flowchart of the proposed TVS with bias correction (TVSBC) can be found in Figure 2.

4.2. Extension of TV Model with Bias Correction

Considering the TV model proposed by Zhang et al. [17], we find that the TV model is simpler than the TV-sparse model in that it omits the sparsity term $\beta\|u\|_1$; its expression is displayed as follows,
$$\hat{u} = \arg\min_{u}\ \|Au-r\|_2^2 + \alpha\|\nabla u\|_1 \tag{29}$$
By employing the SBA to calculate the solution, the intermediate variables v and b are introduced, and the corresponding objective function can be approximately transformed as,
$$\hat{u} = \arg\min_{u}\ \|Au-r\|_2^2 + \alpha\|v\|_1 + \gamma\|\nabla u-(v-b)\|_2^2 \tag{30}$$
And the target parameter and variable vectors can be estimated iteratively with the following formulae,
$$\begin{cases} \hat{u}^{k+1} = (A^TA+\gamma\nabla^T\nabla)^{-1}\big(A^Tr + \gamma\nabla^T(v^k-b^k)\big) \\ v^{k+1} = \mathrm{sign}(\nabla\hat{u}^{k+1}+b^k)\max\big(|\nabla\hat{u}^{k+1}+b^k| - \alpha/2\gamma,\ 0\big) \\ b^{k+1} = b^k + \nabla\hat{u}^{k+1} - v^{k+1} \end{cases} \tag{31}$$
Following the analysis in Section 3, the bias of Equation (31) can be derived easily,
$$\mathrm{bias}(\hat{u}) \approx \begin{cases} 0, & |\nabla\hat{u}^{k}+b^k| > \alpha/2\gamma \\ -\gamma(A^TA+\gamma\nabla^T\nabla)^{-1}\nabla^T(\nabla\hat{u}^{k}+b^k), & |\nabla\hat{u}^{k}+b^k| \le \alpha/2\gamma \end{cases} \tag{32}$$
The bias-corrected solution $u_{TVBC}$ can then be calculated according to Equation (32),
$$u_{TVBC}^{k+1} \approx \begin{cases} \hat{u}^{k+1}, & |\nabla\hat{u}^{k+1}+b^{k+1}| > \alpha/2\gamma \\ \hat{u}^{k+1} + \gamma(A^TA+\gamma\nabla^T\nabla)^{-1}\nabla^T(\nabla\hat{u}^{k+1}+b^{k+1}), & |\nabla\hat{u}^{k+1}+b^{k+1}| \le \alpha/2\gamma \end{cases} \tag{33}$$
Obviously, by comparison of Equations (26)–(28) with Equation (33), in terms of both the formulae and the judgment criterion, TVBC is simpler than TVSBC, and its iterative process is shorter; the flowchart is displayed in Figure 3.
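The TV split Bregman iteration and the piecewise partial bias correction of this subsection can be sketched as follows, assuming a first-difference matrix as the gradient operator and illustrative parameter values; the componentwise mask used for the piecewise criterion is one possible reading of the case distinction, not the paper's exact implementation.

```python
import numpy as np

# Sketch of the TV split Bregman iteration (Equation (31)) followed by the
# piecewise partial bias correction (Equation (33)). All parameter values
# and the toy scene are illustrative assumptions.
def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_bc(A, r, alpha=0.05, gamma=1.0, n_iter=100):
    n = A.shape[1]
    D = np.eye(n) - np.eye(n, k=1)        # forward difference, D @ u ~ grad u
    M = A.T @ A + gamma * (D.T @ D)
    At_r = A.T @ r
    u = np.zeros(n)
    v = np.zeros(n)
    b = np.zeros(n)
    for _ in range(n_iter):
        u = np.linalg.solve(M, At_r + gamma * D.T @ (v - b))
        v = soft_threshold(D @ u + b, alpha / (2.0 * gamma))
        b = b + D @ u - v
    # Piecewise correction: components with |D u + b| <= alpha/(2 gamma) get
    # the bias term added back; the rest are left untouched (partially unbiased)
    mask = np.abs(D @ u + b) <= alpha / (2.0 * gamma)
    corr = gamma * np.linalg.solve(M, D.T @ (mask * (D @ u + b)))
    return u, u + corr

rng = np.random.default_rng(3)
A = rng.normal(size=(60, 40))
u_true = np.zeros(40)
u_true[10:20] = 1.0                        # piecewise-constant target
r = A @ u_true + 0.01 * rng.normal(size=60)
u_tv, u_tvbc = tv_bc(A, r)
```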

5. Experiments and Results

In addition to the theoretical derivation, the performance of the proposed TVSBC and TVBC methods is verified by two numerical experiments designed in this section. Two aspects of performance are considered: the ability to distinguish adjacent targets and the ability to maintain the shape of the target. Several traditional deconvolution methods, including Blind Deconvolution [51], the Richardson–Lucy method [52], the Regularized Filter (Tikhonov regularization) [53], the Wiener Filter [53], Truncated Singular Value Decomposition (TSVD) [20], the Sparse Denoising-Based Super-Resolution Method (SDBSM) [18], TV [17], and TVS [22], are selected as competitors to the two proposed methods.

5.1. Evaluated Indexes

As we know, in deconvolution problems, the true parameters cannot be known in practical applications, and only their estimates can be obtained by these methods. However, since the simulations can be repeated and the true parameters are known there, the sum of squared errors (SSE) can be numerically calculated as follows,
$$M = \sum_{i=1}^{N}(\hat{x}_i-\bar{x})^T(\hat{x}_i-\bar{x}) \tag{34}$$
where M denotes the numerical SSE of the estimated parameters, N is the number of repeated experiments, and $\hat{x}_i$ and $\bar{x}$ stand for the deconvolution estimate of the ith experiment and the true parameters, respectively. The smaller the value of M, the closer the parameter estimates are to the true values.
Additionally, the peak signal-to-noise ratio (PSNR) is used to assess the noise suppression ability, which is defined as
$$\mathrm{PSNR} = 20\log_{10}(A_s/A_n) \tag{35}$$
where As is the maximum amplitude of the target and An is the maximum amplitude of the noise. Then, the Structural Similarity (SSIM) [54] is selected to evaluate the capability to differentiate adjacent targets and the similarity between the restored and original signals, which can be obtained by the following formula,
$$\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y+C_1)(2\sigma_{xy}+C_2)}{(\mu_x^2+\mu_y^2+C_1)(\sigma_x^2+\sigma_y^2+C_2)} \tag{36}$$
where x and y are the reference signal and the test signal; $\mu_x$, $\mu_y$, $\sigma_x$, and $\sigma_y$ are the mean values and standard deviations of x and y, respectively; $\sigma_{xy}$ is the covariance of x and y; and C1 and C2 are small constants that stabilize the division. The greater the PSNR and SSIM, the better the restoration results.
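The three assessment indexes can be implemented directly from their definitions above. The PSNR follows the amplitude-ratio form of Equation (35), and the constants C1 and C2 below are conventional small values assumed here, not taken from the paper.

```python
import numpy as np

# Assessment indexes of Equations (34)-(36), written from their definitions.
def sse(x_hat, x_bar):
    """Sum of squared errors between an estimate and the true parameters."""
    d = x_hat - x_bar
    return float(d @ d)

def psnr_amplitude(signal_peak, noise_peak):
    """PSNR as 20 log10 of the maximum-amplitude ratio (Equation (35))."""
    return 20.0 * np.log10(signal_peak / noise_peak)

def ssim(x, y, c1=1e-4, c2=9e-4):
    """Global SSIM between a reference signal x and a test signal y."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

By construction, identical signals give an SSE of 0 and an SSIM of 1, and a tenfold amplitude ratio gives a PSNR of 20 dB.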

5.2. Experiment1: 1-D Point Target Simulation

In order to assess the performance of the proposed TVBC and TVSBC, referring to Tuo et al. [20], we conduct a 1-D point simulation of azimuth super-resolution for radar forward-looking imaging. The 1-D simulation scene is illustrated in Figure 4a, and its parameters are listed in Table 1. There are four point targets whose centers are at −1.95°, −0.35°, −0.15°, and 0.25°, and whose widths are 0.2°, 0.2°, 0.1°, and 0.1°, respectively. The simulated convolution kernel is illustrated in Figure 5, and its parameters can be found in Table 1. To better match the actual working environment, Gaussian white noise with SNRs ranging from 1 dB to 25 dB at an interval of 2 dB is added. Three typical simulated beam echoes affected by noise with SNRs of 5 dB, 13 dB, and 21 dB are demonstrated in Figure 4b–d. The obvious features are contaminated by the simulated convolution kernel and random white noise. For example, two peaks in Figure 4b and three peaks in Figure 4c,d are seriously disturbed by high-frequency noise, which makes the signal unrecognizable.
We set up 500 independent simulation experiments with different SNRs. The degraded signals are restored by these deconvolution methods, and the results are evaluated by the mean PSNR, SSIM, and SSE listed in Table 2. From it, we can see that Blind Deconvolution, the Wiener Filter, and SDBSM attain fine mean PSNR but poor mean SSIM, and the Regularized Filter based on the L2 norm, proposed for smooth signals, has the lowest mean PSNR and SSIM due to inaccurate noise-level estimation and the smoothing of significant signal features. As the simulated SNR decreases, the noise level increases, and its effect on these deconvolution methods is magnified accordingly, which infers that they are not robust against noise. In addition, the Richardson–Lucy method and the TSVD method keep a balance between mean PSNR and SSIM with smaller mean SSE, maintaining a middling level compared with TV, TVS, TVBC, and TVSBC. Overall, TV, TVS, TVBC, and TVSBC provide excellent evaluation results, evidenced by high mean PSNR and SSIM and small mean SSE. By comparing TV/TVS without and with bias correction, we find that the proposed bias correction methods can effectively improve the performance of the original TV and TVS. This indicates that the proposed methods are not only rigorous in theory but also feasible in practice.
A group of restoration results from the 500 simulations when SNR = 21 dB is illustrated in Figure 6. Due to the low noise level, isolated targets can be distinguished by all deconvolution methods, but Blind Deconvolution, the Regularized Filter, the Wiener Filter, TSVD, and SDBSM cannot distinguish the middle two adjacent targets. From Figure 6d, the Richardson–Lucy method can recover the middle two adjacent targets, but an extra fake peak accompanies the left isolated target. In addition, as illustrated in Figure 6g–j, TV, TVS, TVBC, and TVSBC have an excellent capability of distinguishing these isolated and adjacent targets, and few significant fake targets are produced.
To better compare the TV and TVS methods with and without bias correction, we display three groups of results in Figure 7 with high, middle, and low SNRs. With the increase in the simulated SNR, the TV and TVS methods restore higher-quality images, and the main targets can be distinguished better. Furthermore, the results after bias correction by TVBC and TVSBC achieve higher restoration quality than those of the TV and TVS methods. The intensity of the significant targets recovered by TVBC and TVSBC is higher than that of TV and TVS, as demonstrated in Figure 7 with red rectangles. Similarly, the restored contour information is illustrated in Figure 7 with gray rectangles, where the intensities of TVBC and TVSBC are lower than those of TV and TVS. This proves that bias correction can effectively improve the visual restoration quality of the 1-D target scene.
Furthermore, for better quantitative comparison, the line charts of PSNR, SSIM, and SSE over simulated SNRs ranging from 1 to 25 dB at 2 dB intervals are shown in Figure 8. As expected, the results of the partial bias correction of TVBC and TVSBC are improved. Among them, the PSNRs and SSIMs of TVBC and TVSBC are greater than those of TV and TVS, while the SSEs of TVBC and TVSBC are less than those of TV and TVS. The mean improvement ratios of PSNR, SSIM, and SSE of TVBC and TVSBC relative to TV and TVS are 3.00% and 1.00%, 1.94% and 0.70%, and 6.00% and 2.08%, respectively.
In addition, the statistical results of PSNR, SSIM, and SSE over 500 simulations with an SNR of 15 dB are presented in Figure 9 and Table 3; the results of TVBC and TVSBC are finer than those of TV and TVS. Specifically, the mean improvement ratios of TVBC and TVSBC with regard to TV and TVS are 1.61% and 2.82%, 1.72% and 1.08%, and 3.57% and 6.23%, respectively. In summary, the application of bias correction can effectively improve the quality of the restored signals, whether based on TVS or TV.

5.3. Experiment2: 2-D Area Data Processing

In this section, we simulate 2-D data to discuss the performance of the proposed bias-corrected methods TVBC and TVSBC. The convolution kernel is the same as in the 1-D target simulation. The scanning region is ±5°, and the range breadth is 500 m. Nine random-size targets are designed and distributed in the area scene, as illustrated in Figure 10a. Additionally, noise under low-, middle-, and high-SNR conditions is simulated (5 dB, 15 dB, and 25 dB, respectively). Comparing Figure 10a with Figure 10b–d, we can see that the adjacent targets in the first and second lines are distinguishable, but those in the third line are blurred.
The results generated by the different methods are shown in Figure 11, Figure 12 and Figure 13. As shown in Figure 11 with SNR = 5 dB, the Blind Deconvolution method brings little improvement: the target shapes remain blurred and the noise persists. The Regularized Filter, Wiener Filter, TSVD, and SDBSM can remove noise sufficiently, but the obvious targets are fuzzy. In particular, TSVD and SDBSM produce more fake targets, and some recognizable targets are even replaced by these fake targets, which shows that these methods are vulnerable to noise. The Richardson–Lucy method strikes a balance between denoising and deconvolution: it restores fairly clear targets, which helps in positioning them, but the target shapes are lost in comparison with the original echo. As shown in Figure 11g–j, the restored results of TV, TVBC, TVS, and TVSBC not only distinguish all adjacent targets and suppress the noise but also maintain clear shapes and similar sizes. In particular, for the low-intensity target, TVBC recovers a more distinct rectangular shape than TV. TVS and TVSBC yield visually similar results without significant differences, but the target size recovered by TVSBC is closer to that of the simulated echo. As listed in Table 4, all evaluation results of TVBC and TVSBC are superior to those of TV and TVS, respectively; the improvement ratios of TVBC over TV and of TVSBC over TVS are 5.2% and 0.3% in PSNR, 6.6% and 1.5% in SSIM, and 10.0% and 0.7% in SSE, respectively.
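For reference, the Richardson–Lucy baseline compared above can be sketched in a few lines for the 1-D case. This is a minimal illustration of the iteration only; the experiments use the accelerated blind variant of [51,52], and the helper names here are ours.

```python
import numpy as np

def richardson_lucy_1d(y, psf, n_iter=200, eps=1e-12):
    """Basic 1-D Richardson-Lucy deconvolution for nonnegative data y
    blurred by a known point spread function psf."""
    psf = psf / psf.sum()            # normalize the blur kernel
    psf_flip = psf[::-1]             # adjoint of convolution = correlation
    x = np.full_like(y, y.mean())    # flat nonnegative initial guess
    for _ in range(n_iter):
        blurred = np.convolve(x, psf, mode="same")
        ratio = y / np.maximum(blurred, eps)       # data-fit correction
        x = x * np.convolve(ratio, psf_flip, mode="same")
    return x
```

The multiplicative update keeps the estimate nonnegative, which explains why the method positions targets well; since no shape prior is imposed, the restored target shapes can still deviate from the true scene, as observed in Figure 11.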
Similarly, the increased SNRs in Figure 12 and Figure 13 have less effect on the restored results, but the results of Blind Deconvolution, Regularized Filter, Wiener Filter, TSVD, and SDBSM remain relatively poor compared with those of TV, TVBC, TVS, and TVSBC. The results of the Richardson–Lucy method improve as the noise level decreases, but it cannot restore the target shapes as well as TV, TVBC, TVS, and TVSBC. The line charts of PSNR, SSIM, and SSE for these methods are shown in Figure 14, where the Richardson–Lucy method ranks second only to TV, TVBC, TVS, and TVSBC, consistent with what is displayed in Figure 11, Figure 12 and Figure 13. For the middle SNR, the improvement ratios of TVBC over TV and of TVSBC over TVS are 2.2% and 1.1% in PSNR, 3.2% and 0.8% in SSIM, and 4.9% and 2.5% in SSE, respectively. For the high SNR, the corresponding ratios are 0.6% and 1.7% in PSNR, 0.6% and 0.7% in SSIM, and 1.3% and 4.1% in SSE, respectively. Under middle and high SNRs, TV, TVBC, TVS, and TVSBC obtain fine restored results superior to the other methods. In particular, the bias-corrected methods outperform their counterparts without bias correction, which demonstrates that this work is meaningful and feasible.

6. Discussion

An ill-posed inversion problem is usually solved by regularization methods whose regularization terms constrain the parameter space to obtain stable solutions. For the typical Tikhonov regularization based on the L2 norm, the biases of the solutions have been analyzed and corrected in depth; for example, fully, partially, and adaptively bias-corrected methods have been proposed [36,37,42]. In contrast, sparsity regularization terms based on the L1 norm are widely used to address ill-posed sparsity inversion problems, but the biases of their solutions are seldom analyzed. To this end, this paper focuses on analyzing the biases of regularization solutions based on the L1 norm. By comparing and deriving the corresponding formulae, the differences between regularization solutions based on the L1 norm and the L2 norm are obtained. The partially biased property of the regularization solution based on the L1 norm is revealed herein; as analyzed in Section 2, its bias is smaller than that of the L2 norm when neither is bias-corrected. This implies that the results of the L1 norm will be better than those of the L2 norm, which is consistent with the restored results of TV and TVS displayed in Section 4. Overall, TVBC and TVSBC improve PSNR, SSIM, and SSE by 2.15%, 1.88%, and 4.14%, respectively, relative to TV and TVS.
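The contrast between the two norms can be illustrated in the simplest orthonormal setting y = x + n, where the L2 (Tikhonov) solution shrinks every coefficient multiplicatively, while the L1 solution is the soft-threshold operator, which zeroes small, noise-dominated coefficients and biases only the surviving ones by a constant. This toy sketch is ours; it is not the general derivation of Section 2.

```python
import numpy as np

def tikhonov_solution(z, alpha):
    """L2 solution of min ||z - x||^2 + alpha*||x||^2: every coefficient
    is shrunk by 1/(1 + alpha), so the whole solution is biased."""
    return z / (1.0 + alpha)

def l1_solution(z, lam):
    """L1 solution of min 0.5*||z - x||^2 + lam*||x||_1 (soft threshold):
    coefficients below lam are zeroed; only survivors carry a bias of lam."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
```

For example, with z = [5, 0.05, −3] and lam = 0.5, the L1 solution keeps the two strong coefficients (each shrunk by 0.5) and removes the weak one entirely, whereas the Tikhonov solution attenuates all three — the "partially biased" behavior discussed above.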
Additionally, given the partially biased property, we derive the piecewise form of the partially bias-corrected solution in Section 3. In other words, the proposed partial bias correction retains the original information in the low-frequency part and reduces the negative effect in the high-frequency part of the coefficient matrix. The corresponding flowcharts of the proposed TVSBC and TVBC are designed for better implementation. The idea of bias analysis and correction for the L1 norm solution is thus rigorously extended in theory. In practice, for other prior regularization terms based on the L1 norm, the biases of the solutions can be analyzed and corrected by following our approach.
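In the same toy orthonormal setting, the idea of a piecewise (partial) correction can be sketched by adding the constant soft-threshold shrinkage back on the detected support only, leaving the zeroed, noise-dominated coefficients untouched. Note that this is an illustrative stand-in under that simplifying assumption, not the exact piecewise formula derived in Section 3.

```python
import numpy as np

def partial_bias_correction(x_l1, lam):
    """Undo the constant soft-threshold shrinkage of lam on the support only;
    coefficients already driven to zero are kept at zero."""
    return np.where(x_l1 != 0.0, x_l1 + lam * np.sign(x_l1), 0.0)
```

Correcting only on the support preserves the denoising effect of the threshold while restoring the amplitude of the significant targets, mirroring the behavior of TVBC and TVSBC observed in the experiments.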
As indicated in [36,37,42,43], the quality of the regularization solution depends on the selection of the sparsity regularization parameters, which are determined empirically in this paper. Therefore, we will investigate the relation between the parameter selection criterion and bias correction in future work.

7. Conclusions and Outlook

In this paper, the bias of the solution to the ill-posed sparsity inversion problem based on the L1 norm is analyzed, and the partially biased property of the L1 norm is revealed. Without bias correction, the solution of the L1 norm is superior to that of Tikhonov regularization because of its partially biased nature. On this basis, we derive the partially bias-corrected solutions of the TVS and TV methods, which rely on the sparsity-promoting L1 norm. In short, the partial bias correction method avoids modifying the unbiased part of the solution, which improves the rigor of the theory. In addition, the bias-corrected methods TVBC and TVSBC help improve the quality of the restored signals by retaining more signal in the low-frequency part and suppressing noise in the high-frequency part more effectively. The robustness of TV and TVS is also enhanced.
Two experimental examples, a 1-D point-target simulation and a 2-D area simulation for azimuth super-resolution of radar forward-looking imaging, have been presented to demonstrate the performance of the proposed methods. The bias-corrected methods preserve higher quality than the corresponding methods without bias correction: their results not only distinguish the adjacent targets, suppress the noise, and visually preserve the shape and size of the targets, but also achieve excellent statistical results in the PSNR, SSIM, and SSE assessment indexes. This suggests that the partial bias correction idea is worth promoting to improve both the theoretical rigor of sparse regularization solutions and the quality of the restored signals.

Author Contributions

Conceptualization, methodology, original draft preparation, J.H.; Validation and supervision, S.Z. (Songlin Zhang); review and editing, S.Z. (Shouzhu Zheng); revision, M.W.; revision and supervision, H.D.; revision, Q.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Startup Foundation for Introducing Talent of Nanjing University of Information Science & Technology (2022R118; 2020R053) and the Startup Foundation for Introducing Talent of Minjiang University (MJY22018), and Open Fund of Key Laboratory of Marine Environmental Survey Technology and Application, Ministry of Natural Resources (MESTA-2020-B011).

Data Availability Statement

Not applicable.

Acknowledgments

We thank anonymous reviewers for their comments towards improving this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, B.; Feng, Y.; Shen, Y.; Wang, C. Geometry-specified troposphere decorrelation for subcentimeter real-time kinematic solutions over long baselines. J. Geophys. Res. 2010, 115, L06604.
  2. Li, B.; Shen, Y.; Feng, Y. Fast GNSS ambiguity resolution as an ill-posed problem. J. Geod. 2010, 84, 683–698.
  3. Shen, Y.; Li, B. Regularized solution to Fast GPS Ambiguity Resolution. J. Surv. Eng. 2007, 133, 168–172.
  4. Zhong, B.; Tan, J.; Li, Q.; Li, X.; Liu, T. Simulation analysis of regional surface mass anomalies inversion based on different types of constraints. Geod. Geodyn. 2021, 12, 298–307.
  5. Chen, T.; Kusche, J.; Shen, Y.; Chen, Q. A Combined Use of TSVD and Tikhonov Regularization for Mass Flux Solution in Tibetan Plateau. Remote Sens. 2020, 12, 2045.
  6. Chen, Q.; Shen, Y.; Chen, W.; Francis, O.; Zhang, X.; Chen, Q.; Li, W.; Chen, T. An Optimized Short-Arc Approach: Methodology and Application to Develop Refined Time Series of Tongji-Grace2018 GRACE Monthly Solutions. J. Geophys. Res. Solid Earth 2019, 124, 6010–6038.
  7. Yang, F.; Kusche, J.; Forootan, E.; Rietbroek, R. Passive-ocean radial basis function approach to improve temporal gravity recovery from GRACE observations. J. Geophys. Res. Solid Earth 2017, 122, 6875–6892.
  8. Save, H.; Bettadpur, S.; Tapley, B.D. High-resolution CSR GRACE RL05 mascons. J. Geophys. Res. Solid Earth 2016, 121, 7547–7569.
  9. Rowlands, D.D.; Luthcke, S.B.; McCarthy, J.J.; Klosko, S.M.; Chinn, D.S.; Lemoine, F.G.; Boy, J.-P.; Sabaka, T.J. Global mass flux solutions from GRACE: A comparison of parameter estimation strategies—Mass concentrations versus Stokes coefficients. J. Geophys. Res. 2010, 115, 1275.
  10. Reigber, C.; Schmidt, R.; Flechtner, F.; König, R.; Meyer, U.; Neumayer, K.-H.; Schwintzer, P.; Zhu, S.Y. An Earth gravity field model complete to degree and order 150 from GRACE: EIGEN-GRACE02S. J. Geodyn. 2005, 39, 1–10.
  11. Gholinejad, S.; Naeini, A.A.; Amiri-Simkooei, A. Optimization of RFM Problem Using Linearly Programed ℓ1-Regularization. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–9.
  12. Gholinejad, S.; Amiri-Simkooei, A.; Moghaddam, S.H.A.; Naeini, A.A. An automated PCA-based approach towards optimization of the rational function model. ISPRS J. Photogramm. Remote Sens. 2020, 165, 133–139.
  13. Zhang, Y.; Lu, Y.; Wang, L.; Huang, X. A New Approach on Optimization of the Rational Function Model of High-Resolution Satellite Imagery. IEEE Trans. Geosci. Remote Sens. 2012, 50, 2758–2764.
  14. Chen, H.; Li, Y.; Gao, W.; Zhang, W.; Sun, H.; Guo, L.; Yu, J. Bayesian Forward-Looking Superresolution Imaging Using Doppler Deconvolution in Expanded Beam Space for High-Speed Platform. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13.
  15. Tan, K.; Lu, X.; Yang, J.; Su, W.; Gu, H. A Novel Bayesian Super-Resolution Method for Radar Forward-Looking Imaging Based on Markov Random Field Model. Remote Sens. 2021, 13, 4115.
  16. Li, W.; Li, M.; Zuo, L.; Sun, H.; Chen, H.; Li, Y. Forward-Looking Super-Resolution Imaging for Sea-Surface Target with Multi-Prior Bayesian Method. Remote Sens. 2022, 14, 26.
  17. Zhang, Q.; Zhang, Y.; Zhang, Y.; Huang, Y.; Yang, J. Airborne Radar Super-Resolution Imaging Based on Fast Total Variation Method. Remote Sens. 2021, 13, 549.
  18. Zhang, Q.; Zhang, Y.; Zhang, Y.; Huang, Y.; Yang, J. A Sparse Denoising-Based Super-Resolution Method for Scanning Radar Imaging. Remote Sens. 2021, 13, 2768.
  19. Quan, Y.; Zhang, R.; Li, Y.; Xu, R.; Zhu, S.; Xing, M. Microwave Correlation Forward-Looking Super-Resolution Imaging Based on Compressed Sensing. IEEE Trans. Geosci. Remote Sens. 2021, 59, 8326–8337.
  20. Tuo, X.; Zhang, Y.; Huang, Y.; Yang, J. Fast Sparse-TSVD Super-Resolution Method of Real Aperture Radar Forward-Looking Imaging. IEEE Trans. Geosci. Remote Sens. 2021, 59, 6609–6620.
  21. Mao, D.; Zhang, Y.; Zhang, Y.; Huo, W.; Pei, J.; Huang, Y. Target Fast Reconstruction of Real Aperture Radar Using Data Extrapolation-Based Parallel Iterative Adaptive Approach. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 2258–2269.
  22. Zhang, Q.; Zhang, Y.; Huang, Y.; Zhang, Y.; Pei, J.; Yi, Q.; Li, W.; Yang, J. TV-Sparse Super-Resolution Method for Radar Forward-Looking Imaging. IEEE Trans. Geosci. Remote Sens. 2020, 58, 6534–6549.
  23. Zhang, Q.; Zhang, Y.; Huang, Y.; Zhang, Y. Azimuth Super-resolution of Forward-Looking Radar Imaging Which Relies on Linearized Bregman. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2032–2043.
  24. Zhang, Q.; Zhang, Y.; Huang, Y.; Zhang, Y.; Li, W.; Yang, J. Total Variation Super-Resolution Method for Radar Forward-Looking Imaging. In Proceedings of the 2019 6th Asia-Pacific Conference on Synthetic Aperture Radar (APSAR), Xiamen, China, 26–29 November 2019; pp. 1–4.
  25. Pu, W.; Bao, Y. RPCA-AENet: Clutter Suppression and Simultaneous Stationary Scene and Moving Targets Imaging in the Presence of Motion Errors. IEEE Trans. Neural Netw. Learn. Syst. 2022.
  26. Su, D.; Feng, R. EISRP: Efficient infrared signal restoration processing for object tracking in human-robot interaction. Infrared Phys. Technol. 2020, 111, 103544.
  27. Liu, T.; Li, Y.F.; Liu, H.; Zhang, Z.; Liu, S. RISIR: Rapid Infrared Spectral Imaging Restoration Model for Industrial Material Detection in Intelligent Video Systems. IEEE Trans. Ind. Inf. 2019, 1–10.
  28. Liu, T.; Liu, H.; Li, Y.-F.; Chen, Z.; Zhang, Z.; Liu, S. Flexible FTIR Spectral Imaging Enhancement for Industrial Robot Infrared Vision Sensing. IEEE Trans. Ind. Inf. 2020, 16, 544–554.
  29. Zhang, Z.; Xie, H.; Tong, X.; Zhang, H.; Tang, H.; Li, B.; Di, W.; Hao, X.; Liu, S.; Xu, X.; et al. A Combined Deconvolution and Gaussian Decomposition Approach for Overlapped Peak Position Extraction from Large-Footprint Satellite Laser Altimeter Waveforms. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 2286–2303.
  30. Zhou, T.; Popescu, S.C.; Krause, K.; Sheridan, R.D.; Putman, E. Gold–A novel deconvolution algorithm with optimization for waveform LiDAR processing. ISPRS J. Photogramm. Remote Sens. 2017, 129, 131–150.
  31. Azadbakht, M.; Fraser, C.; Khoshelham, K. A Sparsity-Based Regularization Approach for Deconvolution of Full-Waveform Airborne Lidar Data. Remote Sens. 2016, 8, 648.
  32. Zhao, X.-L.; Wang, W.; Zeng, T.-Y.; Huang, T.-Z.; Ng, M.K. Total Variation Structured Total Least Squares Method for Image Restoration. SIAM J. Sci. Comput. 2013, 35, B1304–B1320.
  33. Ji, H.; Wang, K. Robust image deblurring with an inaccurate blur kernel. IEEE Trans. Image Process. 2012, 21, 1624–1634.
  34. Nan, Y.; Ji, H. Deep Learning for Handling Kernel/model Uncertainty in Image Deconvolution. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2388–2397.
  35. Hadamard, J. Lectures on Cauchy's Problem in Linear Partial Differential Equations; Yale University Press: New Haven, CT, USA; New York, NY, USA, 1923.
  36. Ji, K.; Shen, Y.; Chen, Q.; Li, B.; Wang, W. An adaptive regularization solution to inverse ill-posed models. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15.
  37. Shen, Y.; Xu, P.; Li, B. Bias-corrected regularization solution to inverse ill-posed models. J. Geod. 2012, 86, 597–608.
  38. Tikhonov, A.N. Solution of incorrectly formulated problems and the regularization method. Dokl. Akad. Nauk SSSR 1963, 151, 501–504.
  39. Tikhonov, A.N. Regularization of ill-posed problems. Dokl. Akad. Nauk SSSR 1963, 1, 49–52.
  40. Xu, P. Truncated SVD methods for discrete linear ill-posed problems. Geophys. J. Int. 1998, 135, 505–514.
  41. Hansen, P.C. The truncated SVD as a method for regularization. BIT 1987, 27, 543–553.
  42. Xu, P.; Shen, Y.; Fukuda, Y.; Liu, Y. Variance Component Estimation in Linear Inverse Ill-posed Models. J. Geod. 2006, 80, 69–81.
  43. Xu, P. Iterative generalized cross-validation for fusing heteroscedastic data of inverse ill-posed problems. Geophys. J. Int. 2009, 179, 182–200.
  44. Chen, Q.; Shen, Y.; Kusche, J.; Chen, W.; Chen, T.; Zhang, X. High-Resolution GRACE Monthly Spherical Harmonic Solutions. J. Geophys. Res. Solid Earth 2021, 126, e2019JB018892.
  45. Tuo, X.; Zhang, Y.; Huang, Y.; Yang, J. A Fast Sparse Azimuth Super-Resolution Imaging Method of Real Aperture Radar Based on Iterative Reweighted Least Squares With Linear Sketching. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 2928–2941.
  46. Zhang, Y.; Tuo, X.; Huang, Y.; Yang, J. A TV Forward-Looking Super-Resolution Imaging Method Based on TSVD Strategy for Scanning Radar. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4517–4528.
  47. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithm. Phys. D Nonlinear Phenom. 1992, 60, 259–268.
  48. Huo, W.; Tuo, X.; Zhang, Y.; Zhang, Y.; Huang, Y. Balanced Tikhonov and Total Variation Deconvolution Approach for Radar Forward-Looking Super-Resolution Imaging. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
  49. Yang, Y.; Li, C.; Kao, C.-Y.; Osher, S. Split Bregman Method for Minimization of Region-Scalable Fitting Energy for Image Segmentation. In International Symposium on Visual Computing; Springer: Berlin/Heidelberg, Germany; pp. 117–128.
  50. Setzer, S.; Steidl, G.; Teuber, T. Deblurring Poissonian images by split Bregman techniques. J. Vis. Commun. Image Represent. 2010, 21, 193–199.
  51. Biggs, D.S.; Andrews, M. Acceleration of iterative image restoration algorithms. Appl. Opt. 1997, 36, 1766–1775.
  52. Fish, D.A.; Brinicombe, A.M.; Pike, E.R.; Walker, J.G. Blind deconvolution by means of the Richardson–Lucy algorithm. J. Opt. Soc. Am. A-Opt. Image Sci. Vis. 1995, 12, 58–65.
  53. Gonzalez, R.C.; Woods, R.E. Digital Image Processing; Addison-Wesley Publishing Company: Boston, MA, USA, 1992.
  54. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
Figure 1. Geometric diagram for forward-looking imaging model.
Figure 2. The flowchart of the proposed TVSBC.
Figure 3. The flowchart of the proposed TVBC.
Figure 4. 1-D experiment scene of point target. (a) Simulated echo. (b) SNR = 5 dB. (c) SNR = 13 dB. (d) SNR = 21 dB.
Figure 5. Simulated convolution kernel with sinc² function.
Figure 6. Restoration results of experimental scenes for visualization extracted from 500 simulations when SNR = 21 dB. (a) Blind deconvolution; (b) Regularized method; (c) Wiener Filter; (d) Richardson–Lucy method; (e) TSVD method; (f) SDBSM; (g) TV; (h) TVBC; (i) TVS; (j) TVSBC.
Figure 7. Restoration results for experimental scenes with noise of different simulated SNRs. (a) SNR = 5 dB. (b) SNR = 13 dB. (c) SNR = 21 dB.
Figure 8. Evaluation results under different SNRs. (a–c) PSNR, SSIM, and SSE for TV and TVBC, respectively. (d–f) PSNR, SSIM, and SSE for TVS and TVSBC, respectively.
Figure 9. PSNR, SSIM, and SSE results when simulation SNR = 15 dB. (a) PSNR for TV and TVBC. (b) PSNR for TVS and TVSBC. (c) SSIM for TV and TVBC. (d) SSIM for TVS and TVSBC. (e) SSE for TV and TVBC. (f) SSE for TVS and TVSBC.
Figure 10. Simulated surface targets with convolution and noise degradation with different SNRs of simulated noise. (a) Simulated echo. (b) SNR = 5 dB. (c) SNR = 15 dB. (d) SNR = 25 dB.
Figure 11. Restored results of simulated surface targets with convolution and noise degradation when the SNR of noise is 5 dB. (a) Blind deconvolution. (b) Regularized Filtering. (c) Richardson–Lucy. (d) Wiener Filtering. (e) TSVD. (f) SDBSM. (g) TV. (h) TVBC. (i) TVS. (j) TVSBC.
Figure 12. Restored results of simulated surface targets with convolution and noise degradation when the SNR of noise is 15 dB. (a) Blind deconvolution. (b) Regularized Filtering. (c) Richardson–Lucy. (d) Wiener Filtering. (e) TSVD. (f) SDBSM. (g) TV. (h) TVBC. (i) TVS. (j) TVSBC.
Figure 13. Restored results of simulated surface targets with convolution and noise degradation when the SNR of noise is 25 dB. (a) Blind deconvolution. (b) Regularized Filtering. (c) Richardson–Lucy. (d) Wiener Filtering. (e) TSVD. (f) SDBSM. (g) TV. (h) TVBC. (i) TVS. (j) TVSBC.
Figure 14. Evaluation results of the deconvolution methods under different SNRs. (a–c) PSNR, SSIM, and SSE for the deconvolution methods, respectively. (d–f) Zoomed-in views of (a–c) for the TV-based methods without and with bias correction, respectively.
Table 1. System parameters of point target simulation.

| Parameter | Value | Unit |
| --- | --- | --- |
| Carrier frequency | 10 | GHz |
| Bandwidth | 75 | MHz |
| Pulse interval | 2 × 10⁻⁶ | s |
| Beamwidth | 2 | ° |
| Antenna scanning velocity | 30 | °/s |
| Scanning area | −5~5 | ° |
| Pulse repetition frequency | 1500 | Hz |
Table 2. Mean PSNR (dB), SSIM and SSE results with 500 simulations.

| Method | Mean PSNR (dB) | Mean SSIM | Mean SSE |
| --- | --- | --- | --- |
| Blind Deconvolution | 17.211 | 0.277 | 0.541 |
| Regularized Filter | 10.304 | 0.156 | 1.328 |
| Wiener Filter | 17.310 | 0.499 | 0.531 |
| Richardson–Lucy | 15.195 | 0.698 | 0.684 |
| TSVD | 15.336 | 0.587 | 0.666 |
| SDBSM | 18.644 | 0.270 | 31.292 |
| TV | 18.510 | 0.801 | 0.469 |
| TVBC | 18.972 | 0.816 | 0.444 |
| TVS | 19.214 | 0.803 | 0.4309 |
| TVSBC | 19.244 | 0.809 | 0.4305 |
Table 3. Mean PSNR (dB), SSIM and SSE results with 500 simulations of Figure 9.

| Method | PSNR (dB) | SSIM | SSE |
| --- | --- | --- | --- |
| TV | 19.322 | 0.808 | 0.422 |
| TVBC | 19.633 | 0.822 | 0.407 |
| TVS | 19.800 | 0.814 | 0.399 |
| TVSBC | 20.358 | 0.823 | 0.374 |
Table 4. PSNR, SSIM, and SSE evaluation results of Figure 11, Figure 12 and Figure 13.

| Method | PSNR (dB), 5 dB | SSIM, 5 dB | SSE, 5 dB | PSNR (dB), 15 dB | SSIM, 15 dB | SSE, 15 dB | PSNR (dB), 25 dB | SSIM, 25 dB | SSE, 25 dB |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TV | 17.462 | 0.595 | 29.978 | 19.999 | 0.773 | 22.387 | 0.773 | 0.882 | 20.152 |
| TVBC | 18.377 | 0.634 | 26.981 | 20.431 | 0.798 | 21.299 | 0.798 | 0.888 | 19.882 |
| TVS | 18.922 | 0.612 | 25.341 | 20.771 | 0.807 | 20.481 | 0.807 | 0.888 | 19.796 |
| TVSBC | 18.986 | 0.621 | 25.153 | 20.992 | 0.813 | 19.967 | 0.813 | 0.894 | 18.979 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Han, J.; Zhang, S.; Zheng, S.; Wang, M.; Ding, H.; Yan, Q. Bias Analysis and Correction for Ill-Posed Inversion Problem with Sparsity Regularization Based on L1 Norm for Azimuth Super-Resolution of Radar Forward-Looking Imaging. Remote Sens. 2022, 14, 5792. https://doi.org/10.3390/rs14225792
