Article

Multi-Baseline Bistatic SAR Three-Dimensional Imaging Method Based on Phase Error Calibration Combining PGA and EB-ISOA

1 School of Electronics and Communication Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
2 School of Systems Science and Engineering, Sun Yat-sen University, Guangzhou 510275, China
3 Institute of Remote Sensing Satellite, China Academy of Space Technology, Beijing 100094, China
4 Department of Weapons Engineering, Army Academy of Artillery and Air Defense, Hefei 230031, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(3), 363; https://doi.org/10.3390/rs17030363
Submission received: 14 December 2024 / Revised: 12 January 2025 / Accepted: 17 January 2025 / Published: 22 January 2025
(This article belongs to the Section Engineering Remote Sensing)

Abstract

Tomographic synthetic aperture radar (TomoSAR) is an advanced three-dimensional (3D) synthetic aperture radar (SAR) imaging technology that obtains multiple SAR images through multi-track observations, thereby reconstructing the 3D spatial structure of targets. However, due to system limitations, the multi-baseline (MB) monostatic SAR (MonoSAR) suffers from temporal decorrelation when observing scenes such as forests, which degrades the accuracy of the 3D reconstruction. In addition, during TomoSAR observations, platform jitter and inaccurate position measurement contaminate the MB SAR data with multiplicative noise in the form of phase errors, thereby reducing the imaging quality. To address the above issues, this paper proposes a MB bistatic SAR (BiSAR) 3D imaging method based on phase error calibration that combines the phase gradient autofocus (PGA) and energy balance intensity-squared optimization autofocus (EB-ISOA). Firstly, the signal model of the MB one-stationary (OS) BiSAR is established, the 3D imaging principle is presented, and the phase error caused by platform jitter and inaccurate position measurement is analyzed. Then, combining the PGA and EB-ISOA methods, a 3D imaging method based on phase error calibration is proposed. This method improves the accuracy of phase error calibration, avoids the vertical displacement, and is robust to noise, yielding high-precision 3D BiSAR imaging results. Experimental results verify the effectiveness and practicality of the proposed MB BiSAR 3D imaging method.

1. Introduction

Tomographic synthetic aperture radar (TomoSAR) is an advanced synthetic aperture radar (SAR) technology that can effectively extract the three-dimensional (3D) spatial information of scene targets from multiple two-dimensional (2D) SAR images. It has evolved under the influence of computer tomography techniques and has seen rapid development in recent years, achieving good 3D imaging performance [1,2,3,4,5,6]. The elevation information of the target is retained within the phase of 2D SAR images. By acquiring coherent images from different angles, TomoSAR forms a synthetic aperture in the direction perpendicular to the radar line of sight, enabling the resolution of targets in the tomographic direction [7,8,9,10,11,12]. TomoSAR has significant applications in various fields, such as urban building extraction, forest parameter retrieval, and glacier structure analysis.
In 1994, Knaell and Cardillo [13] first applied the tomography theory to the 3D radar imaging, providing a basis for analyzing the radar imaging resolution and performance. Subsequently, multiple configurations of the multi-baseline (MB) monostatic SAR (MonoSAR) were explored, and the experimental data was collected. In 2000, the German Aerospace Center (DLR) performed multiple near-parallel flights using a single aircraft and applied Fourier transform algorithms to focus in the tomographic direction, achieving a vertical resolution of 2.9 m [2]. In 2002, She [14] applied the TomoSAR theory to spaceborne systems for the first time, proposing a signal processing workflow and conducting experiments using the ERS-1 satellite data. In 2006, the Air Force Research Laboratory (AFRL) carried out the airborne MB circular SAR (CSAR) field flight tests [15]. In 2012, DLR used its airborne F-SAR system to conduct MB CSAR field tests in the Vordemwald region of Switzerland [16], acquiring a set of fully polarized MB CSAR data on the circular trajectory. In 2012, based on the compressed sensing method, Zhu [17,18] proposed a novel 3D imaging method and conducted relevant experiments using the TerraSAR-X data to verify the performance of the method.
Due to the characteristic of the MB bistatic SAR (BiSAR) with one transmission and multiple receptions, the time baselines between different SAR images can be shortened, and the data coherence can be better maintained. This has led researchers to explore MB BiSAR imaging configurations that effectively address the issue of the temporal decorrelation. In 2009, Duque [19] extended the two-receiver antenna configuration of the bistatic interferometric SAR (InSAR) to multiple receiving antennas, forming a 3D imaging configuration for the MB BiSAR. In 2019, Hu [20] conducted theoretical analyses of spaceborne MB BiSAR 3D imaging principles and carried out the first experimental verification using the data from Beidou inclined geosynchronous orbit navigation satellites. In the same year, Ge [21] applied a similar bistatic dataset to address the differential TomoSAR problem in urban areas by dividing the original problem into two subproblems, i.e., processing them separately, and merging the results to achieve the building observation.
Currently, research on MB BiSAR imaging is limited, mostly remaining at the stages of theoretical analysis and experimental validation, and many new configurations are being proposed to solve specific issues. In this paper, the MB one-stationary (OS) BiSAR imaging configuration is studied, aimed at reducing temporal decorrelation effects. This configuration uses a static radar and multiple airborne radars flying in formation to collect the echo data, thus minimizing the time baselines between data acquisitions. For MB SAR observations, due to the platform jitter and inaccurate position measurement, multiple 2D images are easily contaminated by phase errors. This contamination can lead to degraded focusing in the tomographic direction, including reduced and broadened main lobes as well as elevated side lobes. In the MB OS BiSAR imaging configuration studied in this paper, such issues are similarly unavoidable. To address these problems, two classical categories of phase error estimation methods have been proposed.
One method is the phase gradient autofocus (PGA) approach [22], which employs the principle of inverse filtering to estimate phase errors, typically providing accurate approximations of phase errors caused by various factors. In 2019, the PGA method was applied to holographic SAR (HoloSAR) to correct inter-channel phase errors [23], based on the assumption that phase errors change slowly in space. In 2021, in order to solve the vertical displacement problem of the PGA method, Lu [24] estimated the height of significant elements based on the network construction (NC) method and proposed the NC-PGA method, which effectively alleviated the vertical displacement effect. In 2024, Huang [25] further addressed the vertical displacement issue by proposing a beamforming PGA (BF-PGA) method and a cumulative intensity projection registration method, which further corrected the phase error and achieved better vehicle reconstruction results. The main limitation of this type of method is that it requires the assumption of spatially invariant phase errors, meaning it cannot estimate tomographic phase errors on a per-pixel basis, but can only estimate an overall approximate error for a block in the range-azimuth region.
Another category of autofocus methods is based on the sharpness metric criteria. This type of method takes the phase error to be estimated as a variable, uses the sharpness as the optimization objective function, and estimates the phase that needs to be compensated through search. The entropy and contrast of an image are often used as criteria for sharpness measurement [26,27]. In 2012, based on the minimization of the elevation profile entropy, Pardini [28] proposed a new calibration method and demonstrated its phase error correction capability using simulated and real data. However, there is a phenomenon of vertical displacement in the experimental results. To solve the problem of vertical displacement, Aghababaee [29] proposed a method based on the phase derivative constraint optimization, which can correct phase errors on a per-pixel basis without requiring assumptions about the scene. However, it involves solving a multidimensional optimization problem for each pixel. In [30], based on the intensity-squared optimization autofocus (ISOA), an autofocus method was proposed for 2D back-projected images, which allows a closed-form solution in single-variable cases through the coordinate descent method, avoiding the variable search process and significantly reducing processing time.
Based on the studied MB OS SAR imaging configuration, to solve the issues of the PGA method’s inability to estimate phase errors pixel-by-pixel and its susceptibility to the vertical displacement, this paper proposes a MB BiSAR 3D imaging method based on the phase error calibration combining the PGA and energy balance ISOA (EB-ISOA) methods. The main contributions are as follows.
  • This paper explores a novel MB OS BiSAR 3D imaging configuration, introduces the concepts of incident angle and projection angle, presents the specific formulation of the de-ramping processing and validates its 3D imaging capability through both theoretical analysis and experimental evaluation.
  • The criterion of maximizing the square of the intensity is introduced to estimate phase errors pixel-by-pixel. Moreover, this paper uses the preliminary results of the PGA method as the initial values for the EB-ISOA method, achieving better phase correction results.
  • This paper improves the optimization model by proposing energy balance processing to maintain the focus on weak targets. At the same time, this paper uses the persistent scatterer (PS) height obtained by the EB-ISOA method to compensate for the linear phase term introduced by the PGA method, which effectively alleviates the vertical displacement.
The remainder of this paper is organized as follows. In Section 2, the signal model of the MB OS BiSAR is established and the 3D imaging principle is presented, and then the characteristics of phase errors introduced by the platform jitter and inaccurate position measurements are analyzed. In Section 3, a 3D imaging method based on the phase error calibration combining the PGA and EB-ISOA is proposed. The effectiveness and practicability of the proposed MB BiSAR 3D imaging method are validated in Section 4. In Section 5, the projection relationship of the MB BiSAR imaging is discussed, and the calculation process of the projection angle and the incident angle is presented. In Section 6, the conclusions of the paper are presented.

2. Theoretical Foundation

In this section, the signal model of the MB OS BiSAR is established and the 3D imaging principle is presented. Then, the characteristics of phase errors introduced by the platform jitter and inaccurate position measurements are analyzed.

2.1. Signal Model and 3D Imaging Principle

The observation model of the MB OS BiSAR imaging is illustrated in Figure 1. In this setup, the receiver is mounted on an aircraft, while the transmitter remains stationary at a fixed location. The stationary transmitter and multiple airborne receivers form the MB BiSAR 3D imaging configuration. In Figure 1, $L_n\ (n = 1, 2, \ldots, N)$ represents the trajectory of the aircraft for the n-th flight, and $N$ denotes the number of flight tracks. The trajectory $L_{N/2}$ is chosen as the reference trajectory. Based on the observation model depicted in Figure 1, the 2D imaging is performed separately for the echo data collected from each of the $N$ trajectories, resulting in $N$ 2D BiSAR images of the same target area.
Suppose there is an arbitrary point target P, with the coordinates $(x_P, y_P, z_P)$. The ranges from the receiver and the transmitter on the n-th trajectory to the target P at the slow time $\eta$ are denoted as $r_{R,n}(\eta)$ and $r_T(\eta)$, respectively. Assuming the signal emitted by the transmitter is a linear frequency modulation signal, the signal received on the n-th trajectory is:
$$ s_n(\tau, \eta) = \gamma_P \cdot w_r\!\left( \tau - \frac{r_{R,n}(\eta) + r_T(\eta)}{c} \right) \cdot w_a(\eta - \eta_c) \cdot \exp\!\left[ -j 2\pi f_c \frac{r_{R,n}(\eta) + r_T(\eta)}{c} + j \pi K_c \left( \tau - \frac{r_{R,n}(\eta) + r_T(\eta)}{c} \right)^2 \right] \tag{1} $$

where $\tau$ is the fast time, $\eta$ is the slow time, $\gamma_P$ is the complex scattering coefficient of the target P, $c$ is the speed of the electromagnetic wave propagation, $r_{R,n}(\eta)$ is the slant range from the receiving station to the target P on the n-th trajectory at the slow time $\eta$, $r_T(\eta)$ is the slant range from the transmitting station to the target P at the time $\eta$, $w_r$ is the range envelope, $w_a$ is the azimuth envelope, $\eta_c$ is the synthetic aperture center time of the receiving station, $f_c$ is the center frequency, and $K_c$ is the chirp rate.
The back-projection (BP) method is used for the 2D BiSAR imaging, which better preserves the phase information compared to frequency-domain methods. The 2D complex SAR image on the imaging plane at the zero-height can be expressed as [31]:
$$ g_n(x, y, 0) = \gamma_P \cdot h_{\mathrm{Bi}}(x - x_{P'}, y - y_{P'}) \cdot \exp\!\left[ j 2\pi \left( r_{\mathrm{Bi}}(\eta_c; x, y, 0) - r_{\mathrm{Bi}}(\eta_c; x_P, y_P, z_P) \right) / \lambda \right] \tag{2} $$

where $g_n(x, y, 0)$ represents the complex value of the n-th SAR image at the grid point $(x, y, 0)$, $(x_{P'}, y_{P'}, 0)$ represents the coordinates of the target P projected onto the imaging plane according to the elliptical geometry, $\lambda = c / f_c$ is the signal wavelength, $h_{\mathrm{Bi}}(x, y)$ is the 2D response function of the point target in the BiSAR imaging, $r_{\mathrm{Bi}}(\eta_c; x, y, 0)$ is the round-trip slant range of the point $(x, y, 0)$ at the slow time $\eta_c$, and $r_{\mathrm{Bi}}(\eta_c; x_P, y_P, z_P)$ is the round-trip slant range of $(x_P, y_P, z_P)$ at the slow time $\eta_c$.
For the sake of simplicity, it is assumed that $h_{\mathrm{Bi}}(x, y)$ is the 2D Dirac delta function [12]. After obtaining the imaging result using the BP algorithm, the phase caused by $r_{\mathrm{Bi}}(\eta_c; x, y, 0)$ can be compensated. In addition, because the transmitting station is fixed, its range to the target remains unchanged in the $N$ observations, causing the same phase in the $N$ SAR images, which can be incorporated into the complex scattering coefficient. Therefore, by omitting the coordinate index of the 2D image, $g_n(x, y, 0)$ can be simplified as Equation (3) after the approximation and phase compensation.
$$ g_n = \gamma_P \cdot \exp\!\left( -j \frac{2\pi r_{R,n}}{\lambda} \right) \tag{3} $$
In Figure 2, the geometric relationship of the target projection in the MB OS BiSAR imaging is depicted. The positions of the receiving stations at the synthetic aperture center time are denoted as $R_n\ (n = 1, 2, \ldots, N)$. The position of the transmitting station is represented by $T$. In the BiSAR imaging, the target P is projected to the point $P'$ on the imaging plane, so the round-trip slant ranges of P and $P'$ are equal.
After obtaining N 2D images, the 2D image from the reference trajectory is selected as the master image, while the images obtained from other trajectories are considered as slave images for the image registration. Then, the registered images undergo de-ramping processing. For the n-th image, the de-ramping processing can be expressed as:
$$ g_n = \gamma_P \cdot \exp\!\left( -j \frac{2\pi r_{R,nP}}{\lambda} \right) \cdot \exp\!\left( j \frac{2\pi r_{R,nP'}}{\lambda} \right) \tag{4} $$

where $r_{R,nP}$ represents the slant range from $R_n$ to P, and $r_{R,nP'}$ represents the slant range from $R_n$ to $P'$. From the geometric relationship and far-field condition in Figure 2, $r_{R,nP}$ and $r_{R,nP'}$ can be expressed as:

$$ r_{R,nP} \approx r + \frac{s^2 \sin^2(\alpha_1 + \alpha_2)}{2r} - \frac{s \cdot \sin(\alpha_1 + \alpha_2) \cdot b_n}{r} + \frac{b_n^2}{2r} \tag{5} $$

$$ r_{R,nP'} \approx r + \frac{b_n^2}{2r} \tag{6} $$

where $r$ represents the range from the grid point to the reference trajectory, $\alpha_1$ and $\alpha_2$ are the angles shown in Figure 2 and are analyzed in Section 5, $s$ represents the range between P and $P'$, and $b_n$ represents the vertical baseline of the n-th trajectory.
Therefore, Equation (4) can be expressed as:
$$ g_n = \gamma_P \cdot \exp\!\left( -j \frac{\pi s^2 \sin^2(\alpha_1 + \alpha_2)}{\lambda r} \right) \cdot \exp\!\left( j \frac{2\pi s \cdot \sin(\alpha_1 + \alpha_2) \cdot b_n}{\lambda r} \right) \tag{7} $$
Because this paper is only concerned with the amplitude of the 3D image and not its phase, the first phase term in Equation (7) can be ignored [12]. Additionally, if multiple point targets are projected onto the same grid, Equation (7) can be rewritten as:
$$ g_n = \int \gamma(s) \cdot \exp\!\left( j \frac{2\pi \cdot \sin(\alpha_1 + \alpha_2) \cdot b_n}{\lambda r} \cdot s \right) \mathrm{d}s \tag{8} $$

where $\gamma(s)$ is the complex scattering coefficient of the target at the elevation $s$. It can be observed from Equation (8) that the 2D image and the scattering coefficient form a Fourier transform relationship. The spatial frequency is $\xi_n = \frac{\sin(\alpha_1 + \alpha_2) \, b_n}{\lambda r}$. Therefore, performing a Fourier transform along the $s$ direction for each pixel position in the $N$ images yields the 3D imaging result. The specific form is shown as:
$$ \hat{\gamma}(s) = \int g_n \cdot \exp\!\left( -j \frac{2\pi \cdot \sin(\alpha_1 + \alpha_2) \cdot s}{\lambda r} \cdot b_n \right) \mathrm{d}b_n \tag{9} $$
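To make the Fourier relationship of Equations (8) and (9) concrete, the following sketch simulates a single scatterer and focuses it along the elevation axis with a discrete Fourier transform. All numerical parameters (wavelength, range, angles, baselines) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Illustrative sketch of Eqs. (8)-(9): the per-track pixel values g_n and the
# elevation profile gamma(s) form a Fourier pair with spatial frequency
# xi_n = sin(alpha_1 + alpha_2) * b_n / (lambda * r).
wavelength = 0.03                      # lambda (m), assumed
r = 15000.0                            # range to the reference trajectory (m), assumed
angle_sum = np.deg2rad(56.34)          # alpha_1 + alpha_2, assumed
b = np.linspace(-240.0, 240.0, 25)     # vertical baselines b_n (m), assumed
xi = np.sin(angle_sum) * b / (wavelength * r)

# Forward model (Eq. 8): one unit scatterer at elevation s0 = 4 m.
s0 = 4.0
g = np.exp(1j * 2 * np.pi * xi * s0)

# Inverse (Eq. 9): evaluate the DFT on a dense elevation grid.
s_grid = np.linspace(-12.0, 12.0, 961)
gamma_hat = np.abs(np.exp(-1j * 2 * np.pi * np.outer(s_grid, xi)) @ g) / len(b)
s_est = s_grid[np.argmax(gamma_hat)]   # peak recovers the true elevation
```

With this assumed geometry the baseline spacing yields an unambiguous elevation span of roughly 27 m, so the single peak inside the grid falls at the true elevation.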
After obtaining the result of Equation (9), the imaging results can be obtained in the x y s coordinate system established with the ground as the reference, which needs to be converted to the visually intuitive x y z coordinate system.
$$ \begin{cases} x' = x \\ y' = y + s \cdot \cos(\alpha_2) \\ z' = s \cdot \sin(\alpha_2) \end{cases} \tag{10} $$
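The conversion of Equation (10) is a simple per-voxel mapping; a minimal sketch with an assumed projection angle $\alpha_2$:

```python
import numpy as np

# Sketch of Eq. (10): convert a point from the ground-referenced (x, y, s)
# system to the intuitive (x, y, z) system. alpha_2 is an assumed value.
alpha_2 = np.deg2rad(30.0)
x, y, s = 10.0, -10.0, 4.0

x_new = x
y_new = y + s * np.cos(alpha_2)
z_new = s * np.sin(alpha_2)
```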

2.2. Phase Error Analysis

Due to the insufficient accuracy of the measuring equipment, inaccurate position measurement of the radar often occurs when the platform moves, leading to phase errors in the generated 2D image. Figure 3 shows the measurement geometry of the n-th trajectory. $R_n$ represents the actual position of the radar on the n-th trajectory, and $R_n'$ represents the estimated position of the radar. There is a difference $(\Delta y, \Delta z)$ between $R_n$ and $R_n'$. $\theta_n$ is the downward angle of the radar for the n-th trajectory. $r_n$ is the true range, and $r_n'$ is the range with deviation. The range error and phase error can be represented as:

$$ \Delta r_n = r_n' - r_n = \Delta y \sin\theta_n - \Delta z \cos\theta_n \tag{11} $$

$$ \varphi_n = \frac{2\pi \Delta r_n}{\lambda} \tag{12} $$
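A quick numerical sketch of Equations (11) and (12), with assumed numbers, shows why these errors matter: a centimetre-level position measurement error maps into a slant-range error and, at short wavelengths, a phase error of several radians.

```python
import numpy as np

# Sketch of Eqs. (11)-(12) with assumed numbers (not values from the paper).
wavelength = 0.03                  # lambda (m), assumed
theta_n = np.deg2rad(56.34)        # downward angle of the n-th trajectory, assumed
dy, dz = 0.02, -0.01               # position deviations (m), assumed

dr_n = dy * np.sin(theta_n) - dz * np.cos(theta_n)   # range error, Eq. (11)
phi_n = 2 * np.pi * dr_n / wavelength                # phase error (rad), Eq. (12)
```

Even 1-2 cm of position deviation already produces a multi-radian phase error at this assumed wavelength, which is why explicit phase error calibration is required.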
The phase errors on each image vary with the change of θ n . In addition, due to the different radar position deviations on each trajectory, the phase error of multiple trajectories will vary with spatial position changes. The phase error between different trajectories is independent of each other and needs to be estimated separately. The existence of phase error terms may lead to problems such as the blurring, distortion, and reduced resolution in the tomographic profile. Substituting Equation (12) into Equation (8) yields the result of the phase error contamination as:
$$ g_n = \exp(j \varphi_n) \cdot \int \gamma(s) \cdot \exp\!\left( j \frac{2\pi \cdot \sin(\alpha_1 + \alpha_2) \cdot b_n}{\lambda r} \cdot s \right) \mathrm{d}s \tag{13} $$

3. Proposed MB BiSAR 3D Imaging Method

In this section, a MB BiSAR 3D imaging method based on the phase error calibration combining the PGA and EB-ISOA is proposed. The overall process of the proposed 3D imaging method is shown in Figure 4, which mainly includes five distinct steps:
  • Registering images and de-ramping;
  • Selecting PS based on the amplitude dispersion index (ADI) criteria;
  • Using the PGA method to estimate the phase error and obtain coarse calibration results;
  • Using the EB-ISOA method to process coarse calibration results and obtain precise calibration results;
  • Performing the tomographic imaging and coordinate transformation to obtain final 3D imaging results.
Since the first step has already been presented in Section 2, the remaining part of this section will cover the last four steps.

3.1. PS Selection Based on ADI Criteria

The ADI criterion is used to screen PS among candidate pixels [32], and it can be expressed as:
$$ \mu = \sqrt{ \frac{N \sum_{n=1}^{N} |g_n|^2}{\left( \sum_{n=1}^{N} |g_n| \right)^2} - 1 } \tag{14} $$

Pixels with $\mu < Thr_\mu$ are selected as the PS, and $Thr_\mu$ is usually set to 0.25 [33]. The ADI considers the amplitude stability of a pixel across the different baselines rather than its absolute amplitude level; therefore, weakly scattering pixels that satisfy the ADI criterion can also be selected as PS. After calculating the ADI and filtering the pixels, the data of the q-th PS selected from the imaging area on the n-th image can be represented as:
$$ g_{q,n} = \gamma_q \exp(j \varphi_n + j 2\pi \xi_n s_q) + w_{q,n} \tag{15} $$

where $\gamma_q$ is the complex scattering coefficient of the q-th PS, $s_q$ is the elevation of the q-th PS, and $w_{q,n}$ is the additive noise [34].
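The ADI screening of Equation (14) can be sketched on synthetic data as follows; the pixel statistics and threshold behaviour below are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Sketch of PS selection via the ADI of Eq. (14): mu is the amplitude
# standard deviation over the amplitude mean across the N tracks, so
# amplitude-stable pixels give small mu regardless of their absolute level.
rng = np.random.default_rng(0)
N, n_pix = 25, 1000
stable = rng.random(n_pix) < 0.1                     # ~10% persistent scatterers
amp = stable[None, :] * 1.0 + rng.rayleigh(0.1, (N, n_pix))
g = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, (N, n_pix)))

a = np.abs(g)
mu = np.sqrt(N * (a**2).sum(axis=0) / a.sum(axis=0)**2 - 1)
ps_mask = mu < 0.25                                  # Thr_mu = 0.25
```

In this synthetic run the stable pixels cluster at small $\mu$ and pass the 0.25 threshold, while the fluctuating background is rejected.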

3.2. Coarse Phase Error Estimation Based on PGA

The PGA method is based on the assumption that phase error can be considered invariant within a certain range [35,36]. The PGA method mainly includes the following steps:
Firstly, the linear phase term removal. In this paper, the discrete Fourier transform is used to estimate $s_q$. After compensating for this linear phase term, Equation (15) can be expressed as:

$$ g_{q,n} = \gamma_q \exp(j \varphi_n) + w_{q,n}' \tag{16} $$

where $w_{q,n}' = w_{q,n} \cdot \exp(-j 2\pi \xi_n s_q)$.
Secondly, the phase gradient estimation. Based on the principle of maximum likelihood estimation, the phase gradient $\Delta\hat{\varphi}_n$ can be estimated as:

$$ \Delta\hat{\varphi}_n = \arg\left\{ \sum_{q=1}^{Q} \left[ (g_{q,n-1})^{*} \cdot g_{q,n} \right] \right\} \tag{17} $$

where $\arg(\cdot)$ denotes the phase extraction operation, $(\cdot)^{*}$ denotes the conjugate operation, and $Q$ is the number of the PS.
Finally, the phase error estimation. By summing up $\Delta\hat{\varphi}_n$, $\hat{\varphi}_n$ can be obtained as:

$$ \hat{\varphi}_n = \begin{cases} 0, & n = 1 \\ \sum_{i=2}^{n} \Delta\hat{\varphi}_i, & n = 2, 3, \ldots, N \end{cases} \tag{18} $$
Considering the influence of noise, the estimation result of phase error may not be close to the true phase error. Therefore, in the PGA method, the strategy of cyclic iteration is used to gradually improve the accuracy of phase error estimation. The convergence condition of the iterative process can be expressed as [22]:
$$ \sum_{n=1}^{N} \left( \hat{\varphi}_n^{\,i} - \hat{\varphi}_n^{\,i-1} \right)^2 < Thr_{iter} \tag{19} $$

where $\hat{\varphi}_n^{\,i}$ is the result of the phase error estimation at the i-th iteration, and $Thr_{iter}$ is usually set to 0.001 [13]. In this stage of the proposed method, only a coarse estimation of the phase error is needed, so the convergence condition can be relaxed. In this paper, it is set to 0.01.
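The PGA core of Equations (17) and (18) can be sketched on synthetic PS data as follows; the track count, PS count, error statistics, and noise level are all assumptions made for illustration.

```python
import numpy as np

# Sketch of Eqs. (17)-(18): the phase gradient between adjacent tracks is the
# argument of sum_q conj(g_{q,n-1}) * g_{q,n}; the phase error estimate is
# its cumulative sum, with phi_1 = 0 by convention.
rng = np.random.default_rng(1)
N, Q = 25, 200
phi_true = np.cumsum(rng.normal(0.0, 0.3, N))
phi_true -= phi_true[0]                     # convention: phi_1 = 0
gamma = rng.rayleigh(1.0, Q) * np.exp(1j * rng.uniform(0, 2 * np.pi, Q))
noise = 0.05 * (rng.normal(size=(N, Q)) + 1j * rng.normal(size=(N, Q)))
g = gamma[None, :] * np.exp(1j * phi_true)[:, None] + noise

dphi = np.angle(np.sum(np.conj(g[:-1]) * g[1:], axis=1))   # Eq. (17)
phi_hat = np.concatenate(([0.0], np.cumsum(dphi)))         # Eq. (18)
residual = np.angle(np.exp(1j * (phi_hat - phi_true)))     # wrapped error
```

The sum over the Q persistent scatterers cancels their unknown scattering phases, leaving only the common track-to-track error plus a small noise-induced residual.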

3.3. Precise Phase Error Estimation Based on EB-ISOA

The ISOA method is based on the intensity-squared criterion and does not require the assumption of phase error invariance or specific statistical models of the noise. It can correct phase errors pixel-by-pixel. The phase error estimation process of the ISOA method can be represented by:
$$ \hat{\boldsymbol{\varphi}} = \arg\max_{\boldsymbol{\varphi}} \left\{ C\left[ f(s_1, \boldsymbol{\varphi}), f(s_2, \boldsymbol{\varphi}), \ldots, f(s_N, \boldsymbol{\varphi}) \right] \right\} \tag{20} $$

where $C[\cdot]$ is the objective function, $\boldsymbol{\varphi} = [\varphi_1, \ldots, \varphi_N]$ is the calibration phase vector to be calculated, $s_d$ is the elevation, and $f(s_d, \boldsymbol{\varphi}) = |\hat{\gamma}(s_d, \boldsymbol{\varphi})|^2\ (1 \le d \le N)$ represents the intensity at $s_d$ in the tomographic focusing result.
The intensity-square is chosen as the measure of the sharpness, and maximizing the intensity-square can maximize the variance of the intensity relative to its average value, resulting in the optimal sharpness of the focused signal [27]. The objective function is defined as:
$$ C[f(s, \boldsymbol{\varphi})] = \sum_{d=1}^{N} f^2(s_d, \boldsymbol{\varphi}) \tag{21} $$
According to Equation (9), it can be concluded that:
$$ \hat{\gamma}(s_d, \hat{\boldsymbol{\varphi}}) = \sum_{n=1}^{N} g_n \cdot e^{-j 2\pi \xi_n s_d} \cdot e^{j \hat{\varphi}_n} = \sum_{n=1}^{N} \nu_{d,n} \cdot e^{j \hat{\varphi}_n} \tag{22} $$

where $\nu_{d,n} = g_n \cdot e^{-j 2\pi \xi_n s_d}$.
The optimization problem of Equation (20) does not have a closed-form solution and is usually solved using numerical iterative methods. The coordinate descent method can be used to transform the above problem into the following form:
$$ \hat{\varphi}_n^{\,i+1} = \arg\max_{\varphi} \left\{ C\left[ f(s, \hat{\varphi}_1^{\,i+1}, \ldots, \hat{\varphi}_{n-1}^{\,i+1}, \varphi, \hat{\varphi}_{n+1}^{\,i}, \ldots, \hat{\varphi}_N^{\,i}) \right] \right\} \tag{23} $$

where $\varphi$ is the phase error to be estimated in the (i+1)-th iteration. Equation (22) can be converted into:

$$ \hat{\gamma}(s_d, \hat{\boldsymbol{\varphi}}) = \sum_{l=1}^{n-1} \nu_{d,l} \, e^{j \hat{\varphi}_l^{\,i+1}} + \sum_{l=n+1}^{N} \nu_{d,l} \, e^{j \hat{\varphi}_l^{\,i}} + \nu_{d,n} \, e^{j \varphi} = p_d + e^{j \varphi} q_d \tag{24} $$

where $p_d = \sum_{l=1}^{n-1} \nu_{d,l} \, e^{j \hat{\varphi}_l^{\,i+1}} + \sum_{l=n+1}^{N} \nu_{d,l} \, e^{j \hat{\varphi}_l^{\,i}}$, and $q_d = \nu_{d,n}$. Equation (24) conforms to Equation (9) in [30], and subsequent implementation details can be found in [30].
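Putting Equations (21)-(23) together, the coordinate-descent loop can be sketched as follows. The data are synthetic assumptions, and for brevity a dense grid search stands in for the closed-form single-variable solution of [30].

```python
import numpy as np

# Sketch of the coordinate-descent update of Eq. (23): update one phase at a
# time while the others are fixed, maximizing the intensity-squared objective
# of Eq. (21). Each accepted update cannot decrease the objective.
rng = np.random.default_rng(2)
N, D = 25, 64
nu = rng.normal(size=(D, N)) + 1j * rng.normal(size=(D, N))   # nu_{d,n}, assumed data

def objective(phi):
    gamma_hat = nu @ np.exp(1j * phi)          # Eq. (22)
    return np.sum(np.abs(gamma_hat) ** 4)      # Eq. (21): sum_d f^2

phi_hat = np.zeros(N)
initial = objective(phi_hat)
grid = np.linspace(-np.pi, np.pi, 181, endpoint=False)
for _ in range(5):                             # outer sweeps
    for n in range(N):                         # update phi_n, others fixed
        best_val, best_phi = objective(phi_hat), phi_hat[n]
        for cand in grid:
            phi_hat[n] = cand
            val = objective(phi_hat)
            if val > best_val:
                best_val, best_phi = val, cand
        phi_hat[n] = best_phi
final = objective(phi_hat)                     # monotonically non-decreasing
```

In practice the $p_d + e^{j\varphi} q_d$ decomposition of Equation (24) makes each single-variable update far cheaper than re-evaluating the full sum as this sketch does.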
However, the intensity-squared criterion tends to focus on strong energy sources affected by phase noise interference, because the magnitude of the intensity-squared is mainly determined by the strong scattering points in the scene. Therefore, the estimation of the phase error tends to prioritize focusing on strong scattering points to obtain larger intensity-squared values, so that the focusing result after phase correction is mainly determined by the strong energy components. To address this issue, we introduce the image energy balance to improve the ISOA method, which is named EB-ISOA. We propose a balancing operator $\Omega(\cdot)$ to balance the energy distribution of $\nu_{d,n}$. To avoid the loss of phase information, the balancing operator can be expressed as:
$$ \Omega(\nu_{d,n}) = \Psi(V_{d,n}) \, e^{j \phi_{d,n}} \tag{25} $$

where $V_{d,n} = |\nu_{d,n}|$, $\phi_{d,n} = \mathrm{angle}(\nu_{d,n})$, and the function of $\Psi(\cdot)$ is to reduce the dynamic range of $V_{d,n}$, that is, to reduce the difference in the contribution of the projection energy from different regions to $C[f(s, \boldsymbol{\varphi})]$. Common nonlinear operators can be used to reduce the dynamic range of the data energy. In our data processing, the logarithmic function $\log(\cdot)$ is chosen as the balancing operator.
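A sketch of the balancing operator of Equation (25) on assumed data: compress the dynamic range of $|\nu_{d,n}|$ while leaving the phase untouched. Note that `log1p` is used here instead of a plain logarithm so the balanced amplitude stays non-negative for small inputs; this is an implementation assumption, not the paper's choice.

```python
import numpy as np

# Sketch of Eq. (25): Omega(nu) = Psi(|nu|) * exp(j * angle(nu)), with Psi a
# dynamic-range-compressing operator, so weak scatterers still contribute to
# the intensity-squared objective.
rng = np.random.default_rng(3)
shape = (64, 25)
nu = rng.lognormal(0.0, 2.0, shape) * np.exp(1j * rng.uniform(0, 2 * np.pi, shape))

V = np.abs(nu)                                    # V_{d,n}
nu_bal = np.log1p(V) * np.exp(1j * np.angle(nu))  # Psi(V) * exp(j*phi_{d,n})

ratio_before = V.max() / V.min()
ratio_after = np.abs(nu_bal).max() / np.abs(nu_bal).min()
```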
The ISOA method does not introduce linear phase error terms during the optimization, and therefore does not produce the vertical displacement. However, the PGA-based methods may not be accurate in estimating the elevation of the PS, resulting in the vertical displacement when eliminating the linear phase term, and this vertical displacement may gradually increase during the iteration process. So, before using the EB-ISOA method to process the coarse calibration results obtained by the PGA method, the data of the PS can be calibrated using the EB-ISOA method to obtain the elevation of the PS. Next, the PS elevation obtained by the EB-ISOA and PGA methods is used to perform the elevation calibration and obtain the vertical displacement amount of the PGA calibration result. The specific elevation calibration process is as follows:
$$ \Delta s = \frac{1}{Q} \sum_{q=1}^{Q} \left( s_q^{A_1} - s_q^{A_2} \right) \tag{26} $$

where $s_q^{A_1}$ is the elevation of the q-th PS obtained by the EB-ISOA method, and $s_q^{A_2}$ is the elevation of the q-th PS obtained by the PGA method.
After obtaining the vertical displacement amount, the corresponding linear phase term is compensated to the PGA calibration result, which can complete the elevation calibration of the PGA result. After completing the elevation calibration, the EB-ISOA method can be used to process the coarse calibration data of the PGA method pixel-by-pixel.

3.4. Tomographic Imaging and Coordinate Transformation

After the phase error calibration, it is necessary to perform focusing processing on the tomographic signal. From Equation (9), it can be seen that the discrete Fourier transform can achieve focusing of tomographic signals. However, considering that noise often exists in actual signals, which can affect the focusing performance of the discrete Fourier transform, this paper chooses the adaptive beamforming (ABF) method with anti-noise capability to achieve focusing of tomographic signals. The ABF method filters signals with different spatial frequencies through a spatial filter $\boldsymbol{h}(s)$ of order $N$, so that the reflected signal of the target at the elevation $s$ is not distorted and the amplitude of signals at other frequencies is reduced. By using the ABF method, the estimation of the intensity distribution can be denoted as:
$$ \hat{P}_{ABF}(s) = \boldsymbol{h}^H(s) \, \hat{\boldsymbol{R}} \, \boldsymbol{h}(s) \tag{27} $$

where $\boldsymbol{h}(s)$ can be expressed as:

$$ \boldsymbol{h}(s) = \frac{\hat{\boldsymbol{R}}^{-1} \boldsymbol{a}(s)}{\boldsymbol{a}^H(s) \hat{\boldsymbol{R}}^{-1} \boldsymbol{a}(s)} \tag{28} $$

where $\boldsymbol{a}(s) = \left[ \exp(j 2\pi \xi_1 s), \exp(j 2\pi \xi_2 s), \ldots, \exp(j 2\pi \xi_N s) \right]^T$, $(\cdot)^H$ denotes the Hermitian operator, and $\hat{\boldsymbol{R}}$ is the covariance matrix, which can be expressed as:

$$ \hat{\boldsymbol{R}} = \frac{1}{M} \sum_{m=1}^{M} \boldsymbol{g}_m \boldsymbol{g}_m^H \tag{29} $$
The samples $\boldsymbol{g}_m\ (m = 1, \ldots, M)$ can be chosen from the pixels around a selected pixel. After the tomographic imaging, Equation (10) can be used to obtain the visually intuitive results.
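The ABF estimator of Equations (27)-(29) can be sketched on synthetic data as follows. Substituting Equation (28) into Equation (27) gives the equivalent form $\hat{P}_{ABF}(s) = 1 / (\boldsymbol{a}^H(s) \hat{\boldsymbol{R}}^{-1} \boldsymbol{a}(s))$, which the code evaluates directly; the spatial frequencies, sample counts, noise level, and the diagonal loading added for numerical invertibility are all assumptions.

```python
import numpy as np

# Sketch of Eqs. (27)-(29): estimate R from M neighbouring-pixel samples,
# then scan the elevation axis with P(s) = 1 / (a(s)^H R^-1 a(s)).
rng = np.random.default_rng(4)
N, M = 25, 60
xi = np.linspace(-0.4, 0.4, N)                 # spatial frequencies xi_n, assumed
s0 = 4.0                                       # true elevation (m), assumed
a0 = np.exp(1j * 2 * np.pi * xi * s0)

# M pixel samples: random amplitude times the steering vector, plus noise.
amps = rng.rayleigh(1.0, (M, 1)).astype(complex)
G = (amps * a0[None, :] + 0.1 * (rng.normal(size=(M, N))
     + 1j * rng.normal(size=(M, N)))).T        # shape (N, M)
R = G @ G.conj().T / M + 1e-3 * np.eye(N)      # Eq. (29) + diagonal loading
R_inv = np.linalg.inv(R)

s_axis = np.linspace(-10.0, 10.0, 401)
A = np.exp(1j * 2 * np.pi * np.outer(xi, s_axis))   # steering vectors a(s)
P = 1.0 / np.real(np.einsum('nd,nm,md->d', A.conj(), R_inv, A))
s_est = s_axis[np.argmax(P)]
```

Unlike the plain DFT, the adaptive weight $\boldsymbol{h}(s)$ suppresses contributions at other spatial frequencies, which is the anti-noise property exploited in the text.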

4. Experimental Results and Analysis

In this section, the 3D imaging performance of the proposed MB OS BiSAR 3D imaging method is verified through simulation experiments, which demonstrates the effectiveness and practicality of the proposed method. Based on the imaging geometry configuration shown in Figure 2, the system parameters and baseline configuration in Table 1 are used to generate the simulation data for three target scenarios. Notably, 15,000 m is the ground range between the scene center and the reference trajectory. Two experiments are conducted using the parameters in Table 1. The first experiment is a point targets imaging experiment in an environment without tomographic phase error, aimed at verifying the 3D imaging capability of the MB OS BiSAR and the effectiveness of the proposed 3D imaging method (eliminating the phase error calibration step). The second experiment is the tomographic phase error calibration experiment, aimed at verifying the effectiveness and practicality of the proposed phase error calibration method.

4.1. Three-Dimensional Imaging Capability Experiment

As shown in Figure 5, five point targets are set with the following coordinates: A (10, 10, 4) m, B (10, 10, 4) m, C (10, 10, 4) m, D (10, 10, 4) m, and E (0, 0, 0) m. In this experiment, the trajectories are uniformly distributed along the height direction, and the total baseline length is 480 m. The size of the imaging scene is 30 m × 50 m (X-axis × Y-axis). The observation angle of the reference trajectory towards the scene center is 56.34°. The theoretical elevation resolution is 1.50 m, and the unambiguous elevation is 36.04 m. The scattering coefficients of the point targets are all 1 m², with the highest target at 4 m and the lowest target at −4 m. The OS BiSAR data for 25 passes are obtained through the simulation.
The BP algorithm is used to perform the 2D imaging processing on the echo data received by the receiving radar on all trajectories, and 25 BiSAR images are obtained. The imaging results obtained by performing the de-ramping processing and Fourier transform focusing on each image are shown in Figure 6. The discrete points of the sidelobes in the 3D imaging results of point targets can affect the 3D observation of the imaging results. To better display the results without altering them, only discrete points with amplitudes greater than 0.05 times the maximum amplitude are shown in Figure 6. From the imaging results of the point targets, it can be seen that their 3D shape approximates the discretization of the 3D sinc function, which makes the point targets visually distinguishable, verifying the 3D imaging capability of the proposed MB OS BiSAR method. In order to have a clearer view of the coordinate details of the 3D imaging results, Figure 7 shows the 3D imaging results from three perspectives after coordinate transformation. From the results of the three perspectives, it can be seen that the imaging results of the point target exhibit the shape of a 2D sinc function on each projection plane, which is in line with theoretical expectations.
Because the BP algorithm is used for the 2D imaging and its focusing ability has been validated by numerous studies, the following analysis will only focus on the focusing effect in the tomographic direction. The tomographic profiles of the imaging results of the five point targets before coordinate transformation are shown in Figure 8. From Figure 8, it can be seen that the tomographic profiles of the five targets all follow the form of a one-dimensional sinc function, indicating that the targets have been well-focused in the tomographic direction. Since Figure 8 shows the tomographic profiles of the imaging results without coordinate transformation, the peak positions in each subgraph are not the actual heights of the point targets.
To further quantify the 3D imaging performance of the MB OS BiSAR, Table 2 presents the focus positions of the five point targets and three performance parameters. The impulse response width (IRW) values in Table 2 show that the point targets achieve a certain resolution in all three directions, while the peak sidelobe ratio (PSLR) and integrated sidelobe ratio (ISLR) values indicate significant attenuation of the sidelobes relative to the main lobe in all three directions. In the range and azimuth directions, the measured IRW values of the five point targets are close to the expected values, in line with theoretical expectations. The measured PSLR and ISLR are slightly lower than expected, which is acceptable: the smaller the PSLR and ISLR, the more concentrated the point-target energy is in the main lobe and the less energy leaks into the sidelobes. In the elevation direction, the measured IRW values of the five points are slightly lower than expected, while the PSLR and ISLR are slightly higher. This is mainly because the sidelobe section in the tomographic direction is not perpendicular to the tomographic axis, so the tomographic data sampled along that axis do not follow the ideal sinc distribution exactly, causing slight deviations in the computed indicators. Even so, the energy focusing in the tomographic direction is close to that of the sinc function. In addition, the 3D coordinates of the point targets in the imaging results are close to, but not equal to, their true coordinates.
This is due to the approximation in the de-ramping process, registration errors between images, and an insufficient up-sampling factor during position calculation. These experimental results verify that the proposed MB BiSAR configuration achieves resolution and energy focusing in three directions: range, azimuth, and elevation.
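The three point-target metrics reported in Table 2 can be measured from a 1D cut through the impulse response. The sketch below (sample-level IRW, null-to-null main-lobe definition) is one common way to compute them; the exact measurement procedure used in the paper is not specified:

```python
import numpy as np

def irw_pslr_islr(profile):
    """Measure IRW (in samples), PSLR and ISLR (in dB) from a 1D
    magnitude cut through a point-target response."""
    p = np.abs(profile) ** 2
    peak = int(np.argmax(p))
    # IRW: -3 dB width of the main lobe (coarse, one-sample resolution).
    half = p[peak] / 2
    left = peak
    while left > 0 and p[left] > half:
        left -= 1
    right = peak
    while right < len(p) - 1 and p[right] > half:
        right += 1
    irw = right - left
    # Main-lobe extent: between the first nulls around the peak.
    lnull = peak
    while lnull > 0 and p[lnull - 1] < p[lnull]:
        lnull -= 1
    rnull = peak
    while rnull < len(p) - 1 and p[rnull + 1] < p[rnull]:
        rnull += 1
    main = p[lnull:rnull + 1]
    side = np.concatenate([p[:lnull], p[rnull + 1:]])
    pslr = 10 * np.log10(side.max() / p[peak])   # peak sidelobe ratio
    islr = 10 * np.log10(side.sum() / main.sum())  # integrated ratio
    return irw, pslr, islr

# Ideal sinc response: the PSLR should be near -13.26 dB.
x = np.linspace(-8, 8, 1601)          # 0.01-sample grid, peak at x = 0
irw, pslr, islr = irw_pslr_islr(np.sinc(x))
```

For the ideal sinc the measured IRW corresponds to the familiar 0.886-sample half-power width scaled by the grid spacing.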

4.2. Phase Error Calibration Experiment

4.2.1. Experiment for Simple Targets and Scenarios

A target Tar_z composed of multiple point targets is set, as shown in Figure 9a. The imaging scene is 20 m × 40 m (X-axis × Y-axis). The scattering coefficients of the point targets composing Tar_z are all 1 m², and the heights of the point targets range from −5 m to 5 m. The distance intervals between adjacent points in the X, Y, and Z directions are 2 m, 2 m, and 2.5 m, respectively. The 3D imaging results of Tar_z without phase error, obtained with the above settings, are shown in Figure 9b.
A set of Gaussian-distributed phase errors along the tomographic direction is generated within [−π, π] rad, with zero mean and a standard deviation of 0.32π [29]. Based on these values, phase errors that vary linearly along the range and azimuth directions are added to each registered SAR image [37]. Considering the spatial relationship between the scene and the radar and the scale of the imaging scene, the phase error varies slowly across each image. In addition, the signal-to-noise ratio (SNR) is set to 5 dB.
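The contamination model described above can be sketched as follows; the image-stack layout, the gradient magnitudes of the linear range/azimuth phase terms, and the noise-model details are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def contaminate(stack, sigma=0.32 * np.pi, snr_db=5.0):
    """Add a slowly varying phase error and white noise to each image.

    stack: complex array (N, X, Y). Each image n gets a phase screen
    phi_n + gx_n * x + gy_n * y (Gaussian offset clipped to [-pi, pi],
    small linear range/azimuth gradients -- this layout is an assumption).
    """
    n, nx, ny = stack.shape
    phi = np.clip(rng.normal(0.0, sigma, n), -np.pi, np.pi)
    gx = rng.uniform(-1e-3, 1e-3, n)   # rad/pixel, illustrative values
    gy = rng.uniform(-1e-3, 1e-3, n)
    x = np.arange(nx)[None, :, None]
    y = np.arange(ny)[None, None, :]
    screen = phi[:, None, None] + gx[:, None, None] * x + gy[:, None, None] * y
    noisy = stack * np.exp(1j * screen)
    # Additive white Gaussian noise at the requested SNR.
    sig_pow = np.mean(np.abs(stack) ** 2)
    noise_pow = sig_pow / 10 ** (snr_db / 10)
    noise = np.sqrt(noise_pow / 2) * (rng.standard_normal(stack.shape)
                                      + 1j * rng.standard_normal(stack.shape))
    return noisy + noise

stack = np.ones((25, 32, 32), dtype=complex)
out = contaminate(stack)
```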
The 2D imaging result generated from the acquisition signal corresponding to the reference trajectory is shown in Figure 10a, while the PS extracted by the ADI criterion are shown in Figure 10b. In total, 468 PS are extracted. Comparing the 2D imaging result with the PS extraction result, the PS positions largely correspond to the isolated scattering points in the 2D image, while the energy-aliased dense points near the middle of the Y-axis in Figure 10a are not extracted. This indicates that, after ADI screening, the extracted PS are scattering points with isolated properties.
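PS selection by the ADI criterion amounts to thresholding the amplitude dispersion index, i.e., the ratio of the standard deviation to the mean of the pixel amplitude across the image stack. A minimal sketch, with an illustrative threshold (the value used in the paper is not stated here):

```python
import numpy as np

def select_ps(stack, threshold=0.25):
    """Select persistent-scatterer (PS) pixels by the amplitude
    dispersion index ADI = std(|s|) / mean(|s|) over the N images.

    stack: complex array (N, X, Y). The 0.25 threshold is illustrative.
    Returns a boolean (X, Y) mask of PS candidates.
    """
    amp = np.abs(stack)
    adi = amp.std(axis=0) / (amp.mean(axis=0) + 1e-12)
    return adi < threshold

rng = np.random.default_rng(1)
n = 25
# Stable pixel: strong deterministic return plus weak noise -> low ADI.
stable = 10.0 + 0.1 * rng.standard_normal(n)
# Distributed pixel: Rayleigh-like amplitude -> high ADI (about 0.52).
clutter = np.abs(rng.standard_normal(n) + 1j * rng.standard_normal(n))
stack = np.stack([stable, clutter], axis=1).reshape(n, 2, 1).astype(complex)
mask = select_ps(stack)
```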
The 3D imaging results after phase error contamination and after phase error calibration using the different methods are shown in Figure 11. From Figure 11a, it can be seen that, due to the influence of the phase error, the imaging results are obviously blurred, making it difficult to obtain the shape and structural information of the target. After phase error calibration, the focusing of the target is improved and the profile is clear. From Figure 11b–d, it can be seen that the PGA, ISOA, and BF-PGA methods each produce imaging results with some improvement, but all exhibit certain problems in the reconstructed vertical plane. The results of the PGA method show an overall displacement, and its sidelobe suppression is insufficient. The BF-PGA method yields a more distinct profile, but also exhibits vertical displacement. Both the PGA and BF-PGA methods rely on the assumption of spatially invariant phase error and can only compensate a single phase value for all pixels in the same image, so the compensation effect varies between pixels; for some pixels, the tomographic focusing may even deteriorate after compensation. In addition, both methods require estimating the PS height, and whether the DFT or BF method is used, the estimated height may deviate, leading to vertical displacement in the 3D image. The results of the ISOA method show no overall offset, but the sidelobe suppression levels of different pixels are inconsistent and the profiles are not smooth enough. This is mainly because the original ISOA method focuses excessively on strong scattering targets, to some extent appropriating the energy of weak scattering targets and leading to over-focusing.
From Figure 11e, the proposed phase error calibration method achieves higher calibration accuracy and better sidelobe suppression. At the same time, there is no vertical displacement in the results, and a clear target profile can be seen. Comparing Figure 11c,e, it can be observed that, after energy-balance processing, targets with weaker energy achieve better focusing without obvious over-focusing.
Figure 12 shows the tomographic profile of a selected PS in the results obtained by the above methods. The results of the PGA and BF-PGA methods both exhibit vertical displacement, while the elevations of the ISOA method and the proposed method remain stable without obvious displacement. In addition, the proposed method has a good sidelobe suppression effect, superior to the comparison methods. For images contaminated by phase errors, even after phase calibration the phase can only approach the error-free phase; it is difficult to recover the clean phase completely. Hence the sidelobes in the 'without phase errors' case are weaker than those of the proposed method, indicating less energy leakage and better focusing in the absence of errors. In the tomographic profile of the BF-PGA method, the sidelobe level does not attenuate stably. This is mainly because the BF-PGA method, under the assumption of spatially invariant phase error, can only compensate one phase value per image, resulting in inconsistent correction effects for different PS points. In the results of the proposed method, the sidelobes attenuate gradually, and the attenuation trend is consistent with that of the error-free case, indicating that the signal energy is mainly concentrated in the main lobe with less energy leakage.
The entropy and contrast values of the 3D BiSAR images obtained by the different phase calibration methods are given in Table 3 to comprehensively evaluate the imaging quality. The image entropy can be obtained by [38]:
$$\mathrm{IE} = -\sum_{a=1}^{N_a}\sum_{b=1}^{N_b}\sum_{c=1}^{N_c} p(a,b,c)\,\ln\left[p(a,b,c)\right]$$
where $N_a$, $N_b$, and $N_c$ are the dimensions of the 3D BiSAR image, and $p(a,b,c)$ is the intensity probability density of the selected pixel, which can be expressed as:
$$p(a,b,c) = \frac{I^2(a,b,c)}{\sum_{a=1}^{N_a}\sum_{b=1}^{N_b}\sum_{c=1}^{N_c} I^2(a,b,c)}$$
where $I(a,b,c)$ is the intensity of the pixel. The image contrast is given by [25]:
$$\mathrm{IC} = \frac{\sqrt{M\left\{\left[I^2(a,b,c) - M\left(I^2(a,b,c)\right)\right]^2\right\}}}{M\left(I^2(a,b,c)\right)}$$
where $M(\cdot)$ denotes the expectation operator.
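The IE and IC definitions above translate directly into code. A minimal sketch (voxel magnitudes are assumed as input; the square root in IC follows the standard contrast definition):

```python
import numpy as np

def image_entropy(img):
    """IE = -sum p ln p, with p the normalized intensity of each voxel."""
    inten = np.abs(img) ** 2
    p = inten / inten.sum()
    p = p[p > 0]            # 0 * ln(0) is taken as 0
    return float(-(p * np.log(p)).sum())

def image_contrast(img):
    """IC = sqrt(M[(I^2 - M(I^2))^2]) / M(I^2), I the voxel magnitude."""
    i2 = np.abs(img) ** 2
    return float(np.sqrt(np.mean((i2 - i2.mean()) ** 2)) / i2.mean())

# A focused cube (one bright voxel) has low entropy and high contrast
# compared with the same energy spread uniformly over the cube.
focused = np.zeros((8, 8, 8))
focused[4, 4, 4] = 1.0
uniform = np.full((8, 8, 8), 1.0 / np.sqrt(512))
```

For the focused cube IE is 0 and IC is large; for the uniform cube IE reaches its maximum ln(512) and IC is 0, matching the interpretation used in Table 3.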
From Table 3, it can be seen that the proposed method achieves the highest contrast and the lowest entropy on the 3D BiSAR images. Overall, compared with the other methods, the proposed method demonstrates a higher degree of focusing, smaller sidelobes, more detailed inversion results, and superior calibration performance.
A Monte Carlo experiment with the SNR as the variable is set up. To examine the effectiveness of the different algorithms at different SNRs, Gaussian phase noise and uniformly distributed phase noise are used to contaminate the data. The Gaussian phase noise has zero mean and a standard deviation of 0.32π, and the uniform phase noise follows a uniform distribution over [−π, π]. Each discrete value of the variable is repeated 100 times. Figures 13 and 14 show the curves and standard deviations of entropy and contrast before and after phase error calibration using the four methods in the Gaussian and uniform phase noise environments, respectively.
From Figures 13 and 14, it can be seen that as the SNR increases, the entropy and contrast of the processing results of these methods decrease and increase, respectively. Compared with the other three methods, the proposed method has the smallest entropy and the highest contrast in both noise environments. This indicates that the proposed method further corrects the data pixel by pixel based on the initial values obtained by PGA, achieving finer phase error correction and higher-quality 3D focused images. In addition, the PGA-based methods are prone to failure at low SNR, while the ISOA method and the proposed method retain a certain correction effect. This is mainly because, at low SNR, the assumption that the phase error is spatially invariant is difficult to satisfy, rendering the PGA-based methods ineffective. This further demonstrates that the proposed method is applicable to a wider range of scenarios. In terms of the standard deviations in Figures 13 and 14, the overall standard deviations of IE and IC for the proposed method are slightly higher than those of the PGA and BF-PGA methods, and the variation ranges of these methods are small and relatively stable.

4.2.2. Experiment for Complex Targets and Natural Scenes

To further demonstrate the effectiveness of the proposed algorithm, an experiment with complex targets and natural scenes is conducted. The imaging scene is 40 m × 40 m (X-axis × Y-axis). Figure 15 shows a 3D perspective view of the complex target model and its projection views. In addition, to better approximate actual phase error generation, unlike the procedure in Section 4.2.1, this experiment directly adds perturbation errors to the set aircraft trajectory positions, producing range errors and thereby simulating phase error generation in natural environments. The system parameters of the experiment are consistent with those in Table 1.
Due to the perturbation errors, the generated 2D SAR images may be defocused. In this paper, the autofocus BP method is used for 2D autofocus processing. The 2D SAR image under trajectory perturbation is shown in Figure 16a; it exhibits obvious defocusing and blurring, making it difficult to discern the outline and structure of the target. The 2D SAR image after autofocus processing is shown in Figure 16b, in which the target structure and contour are clear and the defocusing is well suppressed.
Image registration is performed on the 2D autofocused image stack using the reference-trajectory image as the reference, followed by de-ramping processing to obtain the preliminary image stack data. The proposed phase calibration method and the three comparison methods are then applied to the image stack, followed by unified ABF focusing; the results are shown in Figures 17 and 18.
Figure 17a shows the result of direct focusing without phase error correction: under the influence of the phase error, the tomographic signal is not successfully compressed and focusing essentially fails. Figure 17b,d demonstrate that PGA and BF-PGA can effectively correct the phase, but the correction effect varies significantly between pixels: pixels in the middle of the X-axis are better corrected, with more concentrated energy, while pixels on both sides of the X-axis have more dispersed energy. In addition, the results of PGA and BF-PGA show a slight vertical displacement. The result in Figure 17c is poor, as the energy of weak points is appropriated by strong points, making the weak points difficult to focus and the overall outline unclear. In Figure 17e, the focused image corrected by the proposed method has clear contours, good focusing of both strong and weak scattering points, and no obvious vertical displacement. The three areas circled in red in the subgraphs of Figure 17 are weak scattering areas in the scene. Comparing the four methods in regions A, B, and C: region A in Figure 17b loses a few weak targets, while regions A and B in Figure 17c lose most weak targets. Regions A and B in Figure 17d,e both achieve good focusing of the weak targets, but the focusing in Figure 17e is deeper. For region C, all four methods focus the target, but the proposed method differentiates the targets better. Figure 18 shows the 3D imaging results from three perspectives.
Comparing with the three views of the target in Figure 15, all four methods basically recover the contour of the target, but the contour produced by the ISOA method is relatively blurry, losing detailed structures such as weak targets. In addition, the results of the PGA and BF-PGA methods both exhibit obvious vertical displacement. These results reflect the good focusing ability of the proposed method on weak targets, enabling better inversion of target structure in mixed strong- and weak-target environments. The IE and IC of the images after phase error correction are given in Table 4, which also shows that the proposed method yields the lowest image entropy and the highest contrast, i.e., the best overall calibration performance.

5. Discussion

5.1. Analysis of BiSAR Projection

In MonoSAR imaging, the projection from the target to the imaging plane follows a circular projection relationship, whereas in OS BiSAR imaging it follows an elliptical one. Under far-field conditions, because the observation scene is much smaller than the range between the radar and the target, both circular and elliptical projections can be approximated as linear projections. However, the angle between the approximating line (i.e., the tomographic direction) and the radar line of sight differs between the two. In MonoSAR imaging, the line of sight and the tomographic direction can be considered perpendicular; in BiSAR imaging they generally are not, and projection angles must be introduced for the geometric analysis.
The projection relationships under the MonoSAR and BiSAR configurations are shown in Figure 19. As mentioned above, both can be approximated as linear projections, where the direction of the line is the tangent direction at the intersection of the imaging plane and the circle or ellipse. Figure 19a shows that, in the MonoSAR configuration, the tangent of the circle at P is perpendicular to R_OS P, i.e., perpendicular to the radar's line of sight toward P.
Assuming that P is at the center of the scene, R_OS P can be regarded as the slant range direction, and the tangent direction as the tomographic direction. Under the MonoSAR configuration, the tomographic direction is therefore perpendicular to the slant range direction. Figure 19b shows that, in the BiSAR configuration, the tangent direction of the ellipse at P may not be perpendicular to the radar line of sight RP. Therefore, elliptical geometric relationships must be used to calculate the angle between the tangent direction s and the line-of-sight direction RP; this angle is denoted θ.

5.2. Calculation of Projection Angle and Incident Angle

According to the geometric relationship in Figure 19b, it can be inferred that θ = π − α1 − α2, where α1 is the incident angle from the radar to the scene center and α2 can be regarded as the projection angle from the target to the scene center P. Assume that, at the center moment of the synthetic aperture, the coordinates of R are $(x_R, y_R, z_R)$ and those of P are $(x_P, y_P, z_P)$. Then $x_R = x_P$, and α1 is given by:
$$\alpha_1 = \arctan\left(\frac{z_R - z_P}{y_R - y_P}\right)$$
Compared with α1, the calculation of the projection angle α2 is more complex. As shown in Figure 19b, based on the original O-XYZ coordinate system, a Cartesian coordinate system O-UV (with the U axis parallel to the Y axis and the V axis parallel to the Z axis) is established with the midpoint of RT as the origin. Assuming the coordinates of T are $(x_T, y_T, z_T)$, and considering that R, T, and P all lie in the O-UV plane, $x_T = x_P = x_R$. Furthermore, the coordinates $(x_O, y_O, z_O)$ of the midpoint of RT in the O-XYZ coordinate system can be calculated as:
$$x_O = x_R,\quad y_O = \frac{y_T + y_R}{2},\quad z_O = \frac{z_T + z_R}{2}$$
In the O-UV coordinate system, the coordinates of P are $(u_P, v_P)$, with $u_P = y_P - y_O$ and $v_P = z_P - z_O$. According to the properties of the ellipse, the length of its major axis $2a$ can be calculated as:
$$2a = r_{RP} + r_{TP}$$
where $r_{RP}$ represents the range from P to R, and $r_{TP}$ the range from P to T.
If $C_1$ is the endpoint of the major axis closer to R, then $|C_1R|$ can be expressed as:
$$|C_1R| = \frac{2a - |RT|}{2}$$
where $|RT| = \sqrt{(x_T - x_R)^2 + (y_T - y_R)^2 + (z_T - z_R)^2}$.
We then obtain $|C_1O| = |C_1R| + |RT|/2$, and the slope of RT in the O-UV coordinates is $k_{RT} = (z_R - z_T)/(y_R - y_T)$. According to $k_{RT}$, the coordinates of $C_1$ in the O-UV coordinate system can be calculated as:
$$(u_{C_1}, v_{C_1}) = \begin{cases} (0,\ |C_1O|), & k_{RT} = \infty \\ \left(|C_1O|\cos(\arctan k_{RT}),\ |C_1O|\sin(\arctan k_{RT})\right), & \text{otherwise} \end{cases}$$
The projection ellipse in the O-UV coordinate system takes the form of a general ellipse, $(mu)^2 + (nv)^2 = 1$. Given that $C_1(u_{C_1}, v_{C_1})$ and $P(u_P, v_P)$ are two points on the ellipse, the values of $m$ and $n$ can be obtained by substituting them into this equation.
Under far-field observation conditions, the slope at P can be considered positive in the O-UV coordinate system. Implicit differentiation of the general elliptic equation gives $\mathrm{d}v/\mathrm{d}u = -m^2 u/(n^2 v)$. The slope $k_P$ and α2 can therefore be expressed as:
$$k_P = -\frac{m^2 u_P}{n^2 v_P}$$
$$\alpha_2 = \arctan k_P$$
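The recipe of this subsection can be checked numerically. The sketch below follows the stated steps (midpoint O, major axis 2a, endpoint C1, ellipse fit through C1 and P, implicit differentiation), taking the angles as magnitudes to sidestep quadrant conventions; the test geometry is an invented configuration with R and T at equal height, where the axis-aligned ellipse form is exact:

```python
import numpy as np

def projection_angle(R, T, P):
    """theta = pi - alpha1 - alpha2 via the elliptical geometry of
    Section 5.2 (angles as magnitudes; a sketch, not the paper's code).

    R, T, P: (x, y, z) tuples with x_R = x_T = x_P (all in one plane).
    Returns (theta, semi-major axis, semi-minor axis).
    """
    xR, yR, zR = R; xT, yT, zT = T; xP, yP, zP = P
    alpha1 = abs(np.arctan2(zR - zP, yR - yP))     # incident angle
    yO, zO = (yT + yR) / 2, (zT + zR) / 2          # midpoint O of RT
    uP, vP = yP - yO, zP - zO                      # P in the O-UV frame
    rRP = np.hypot(yP - yR, zP - zR)
    rTP = np.hypot(yP - yT, zP - zT)
    a = (rRP + rTP) / 2                            # 2a = r_RP + r_TP
    dRT = np.hypot(yT - yR, zT - zR)
    c1o = (2 * a - dRT) / 2 + dRT / 2              # |C1O| (equals a)
    if yR == yT:                                   # k_RT infinite
        uC1, vC1 = 0.0, c1o
    else:
        phi = np.arctan((zR - zT) / (yR - yT))
        uC1, vC1 = c1o * np.cos(phi), c1o * np.sin(phi)
    # Fit (m u)^2 + (n v)^2 = 1 through C1 and P (linear in m^2, n^2).
    A = np.array([[uC1 ** 2, vC1 ** 2], [uP ** 2, vP ** 2]])
    m2, n2 = np.linalg.solve(A, np.ones(2))
    kP = -(m2 * uP) / (n2 * vP)                    # implicit derivative
    alpha2 = abs(np.arctan(kP))
    return np.pi - alpha1 - alpha2, 1 / np.sqrt(m2), 1 / np.sqrt(n2)

# Foci at y = +/-3000, height 3200; P on the ground at (0, -3000, 0)
# lies on the ellipse with a = 5000, b = 4000 (r_RP + r_TP = 10000).
theta, semi_major, semi_minor = projection_angle(
    (0, 3000, 3200), (0, -3000, 3200), (0, -3000, 0))
```

In this geometry the fit recovers the true semi-axes exactly, and θ = π − arctan(8/15) − arctan(0.6) ≈ 2.111 rad.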

5.3. Computational Complexity

The main difference between the PGA and BF-PGA methods lies in how the PS height is calculated. The PGA method uses the discrete Fourier transform, usually implemented with the fast Fourier transform. Its overall computational complexity can be denoted as:
$$O(QN\log N)$$
where $Q$ is the number of selected PS and $N$ is the number of flight trajectories.
For the BF-PGA method, the beamforming algorithm is used to calculate the PS height, and the specific calculation formula can be expressed as:
$$\hat{P}_{\mathrm{BF}}(s) = \frac{\mathbf{a}^{H}(s)\,\hat{\mathbf{R}}\,\mathbf{a}(s)}{\mathbf{a}^{H}(s)\,\mathbf{a}(s)}$$
Considering the same number of PS, the beamforming method performs a spatial spectral search using Equation (41), with a corresponding computational complexity of:
$$O(Q(N+1)NS)$$
where $S$ denotes the number of samples along elevation, and $S$ usually takes a value greater than $N$.
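The beamforming height search of Equation (41) can be sketched per PS pixel as follows; the wavenumber model, the rank-1 sample covariance estimate, and the height grid are illustrative assumptions:

```python
import numpy as np

def bf_height_spectrum(g, kz, heights):
    """Beamforming spatial spectrum for one PS pixel.

    g: complex vector of N pixel values across the baselines.
    kz: vector of N vertical wavenumbers (rad/m), one per baseline.
    heights: S candidate heights (m).
    Returns the S-point power spectrum; its argmax gives the PS height.
    """
    R_hat = np.outer(g, g.conj())      # rank-1 sample covariance estimate
    spec = np.empty(len(heights))
    for i, s in enumerate(heights):
        a = np.exp(1j * kz * s)        # steering vector a(s)
        spec[i] = np.real(a.conj() @ R_hat @ a) / np.real(a.conj() @ a)
    return spec

# A scatterer at 12 m should peak at the 12 m height sample.
N = 25
kz = np.linspace(0, 0.5, N)            # illustrative wavenumbers
g = np.exp(1j * kz * 12.0)
heights = np.linspace(-20, 20, 81)     # S = 81 samples, 0.5 m spacing
spec = bf_height_spectrum(g, kz, heights)
est = heights[int(np.argmax(spec))]
```

The S-point loop over the N-element quadratic form is what produces the O(Q(N+1)NS) cost quoted above when repeated for Q PS pixels.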
For the ISOA method, the main calculation is the optimization step that estimates the phase error of each pixel, with computational complexity:
$$O(XYN^3)$$
where $X$ and $Y$ are the pixel dimensions of the 2D image.
For the proposed method, the main computational steps are the PGA height calculation and the per-pixel phase error calculation of EB-ISOA. The time consumed by PGA is almost negligible, so the computational complexity of the proposed method is consistent with Equation (43).
The average computation time of the Monte Carlo experiments at SNR = 5 dB is used to investigate the time consumption of the algorithms. In Equation (42), Q is about 450, N is 25, and S is 25; in Equation (43), X and Y are 81 and 161, respectively. The average processing times are 0.029 s for the PGA method, 1.567 s for the BF-PGA method, 108.722 s for the ISOA method, and 104.458 s for the proposed method. The ratio of these times conforms to the computational complexities given above. The proposed method, which corrects the phase error pixel by pixel, takes longer than the PGA-based methods but achieves a better phase correction effect.

6. Conclusions

To address the temporal decorrelation issue in MB MonoSAR imaging, this paper investigates a MB OS BiSAR imaging configuration that uses a one-transmit-multiple-receive mode to reduce the temporal baseline. First, the signal model of the proposed MB OS BiSAR is mathematically derived, yielding the signal form for de-ramping, and the 3D imaging process for this configuration is presented. Furthermore, to address the phase error contamination caused by platform jitter and inaccurate position measurements, a 3D imaging method based on phase error calibration combining PGA and EB-ISOA is proposed. The method first uses the ADI criterion to select PS, then applies the PGA method for coarse phase error calibration, followed by EB-ISOA for precise calibration of the coarse result. Finally, high-precision imaging results are obtained through tomographic focusing and coordinate transformation.
To verify the practicality of the studied MB BiSAR configuration and the effectiveness of the proposed method, two simulation experiments are conducted. The first uses point-target imaging results and three metrics to demonstrate the 3D imaging capability of the configuration. The second compares the proposed method with three phase error calibration methods, providing a qualitative evaluation of the imaging results along with a quantitative assessment based on the entropy and contrast metrics. The results show that, compared with the other methods, the proposed method not only solves the vertical displacement issue but also enables pixel-by-pixel calibration. It overcomes the limitation of the PGA method and the weak-target focusing failure of ISOA, yielding clearer 3D BiSAR imaging results.
In future work, we will reduce the computational complexity of the proposed algorithm and explore dividing the scene into blocks for phase error estimation. Additionally, we will use polynomial fitting to obtain approximate estimates to reduce processing time, enabling the method to be applied to more scenarios, such as large-scale urban and mountainous areas.

Author Contributions

Conceptualization, J.H., H.X. and N.Z.; formal analysis, H.X., Z.L. and N.Z.; methodology, J.H. and H.X.; investigation, J.H. and H.X.; software, J.H. and H.X.; resources, J.H., H.X. and H.L.; validation, J.H. and H.X.; data curation, J.H., H.X. and Z.W.; writing—original draft preparation, J.H. and H.X.; writing—review and editing, J.H., H.X., H.L., Z.W., B.X. and P.Q.; visualization, J.H. and H.X.; supervision, J.H., H.X. and N.Z.; project administration, J.H., H.X. and Z.L.; funding acquisition, H.X., Z.L. and N.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the Shenzhen Science and Technology Program (Grant No. JCYJ20240813151238049, and Grant No. 202206193000001, 20220815171723002), by the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2023A1515011588), by the Fundamental Research Funds for the Central Universities, Sun Yat-sen University (Grant No. 24qnpy202, and Grant No. 24qnpy164), by the China Postdoctoral Science Foundation (Grant No. 2024M753740), by the National Natural Science Foundation of China (Grant No. 62203465, and Grant No. 62201614) and the Science and Technology Planning Project of Key Laboratory of Advanced IntelliSense Technology, Guangdong Science and Technology Department (Grant No. 2023B1212060024). Hongtu Xie is the corresponding author.

Data Availability Statement

The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank the editors and reviewers for their very competent comments and helpful suggestions to improve this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Budillon, A.; Evangelista, A.; Schirinzi, G. Three-dimensional SAR focusing from multipass signals using compressive sampling. IEEE Trans. Geosci. Remote Sens. 2011, 49, 488–499. [Google Scholar] [CrossRef]
  2. Reigber, A.; Moreira, A. First demonstration of airborne SAR tomography using multibaseline L-band data. IEEE Trans. Geosci. Remote Sens. 2000, 38, 2142–2152. [Google Scholar] [CrossRef]
  3. Schmitt, M.; Zhu, X.X. Demonstration of single-pass millimeterwave SAR tomography for forest volumes. IEEE Geosci. Remote Sens. Lett. 2016, 13, 202–206. [Google Scholar] [CrossRef]
  4. Rambour, C.; Budillon, A.; Johnsy, A.C.; Denis, L.; Tupin, F.; Schirinzi, G. From interferometric to tomographic SAR: A review of synthetic aperture radar tomography-processing techniques for scatterer unmixing in urban areas. IEEE Geosci. Remote Sens. Mag. 2020, 8, 6–29. [Google Scholar] [CrossRef]
  5. Wu, Z.; Xie, H.; Gao, T.; Zhang, Y.; Liu, H. Moving target shadow detection method based on improved ViBe in VideoSAR images. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2024, 17, 14575–14587. [Google Scholar] [CrossRef]
  6. Liu, H.; Xie, H.; Gao, T.; Yi, S.; Zhang, Y. High-precision imaging and dense vehicle target detection method for airborne single-pass circular synthetic aperture radar. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2024, 17, 15330–15343. [Google Scholar] [CrossRef]
  7. Zhu, J.; Xie, Z.; Jiang, N.; Song, Y.; Han, S.; Liu, W.; Huang, X. Delay-doppler map shaping through oversampled complementary sets for high speed target detection. Remote Sens. 2024, 16, 2898. [Google Scholar] [CrossRef]
  8. Zhu, J.; Song, Y.; Jiang, N.; Xie, Z.; Fan, C.; Huang, X. Enhanced doppler resolution and sidelobe suppression performance for Golay complementary waveforms. Remote Sens. 2023, 15, 2452. [Google Scholar] [CrossRef]
  9. Xie, Z.; Wu, L.; Zhu, J.; Lops, M.; Huang, X.; Shankar, B. RIS-aided radar for target detection: Clutter region analysis and joint active-passive design. IEEE Trans. Signal Process. 2024, 72, 1706–1723. [Google Scholar] [CrossRef]
  10. Xie, Z.; Xu, Z.; Fan, C.; Han, S.; Zhu, J.; Huang, X. Modulus constrained minimax radar code design against target interpulse fluctuation. IEEE Trans. Veh. Technol. 2023, 72, 13671–13676. [Google Scholar] [CrossRef]
  11. Zhu, J.; Yin, T.; Guo, W.; Zhang, B.; Zhou, Z. An Underwater Target Azimuth Trajectory Enhancement Approach in BTR. Appl. Acoust. 2025, 230, 110373. [Google Scholar] [CrossRef]
  12. Fornaro, G.; Serafino, F.; Soldovieri, F. Three-dimensional focusing with multipass SAR data. IEEE Trans. Geosci. Remote Sens. 2003, 41, 507–517. [Google Scholar] [CrossRef]
  13. Knaell, K. Three-dimensional SAR from curvilinear apertures. In Proceedings of the 1996 IEEE National Radar Conference, Ann Arbor, MI, USA, 13–16 May 1996; pp. 220–225. [Google Scholar]
  14. She, Z.; Gray, D.A.; Bogner, R.E.; Homer, J.; Longstaff, I.D. Three-dimensional space-borne synthetic aperture radar (SAR) imaging with multiple pass processing. Int. J. Remote Sens. 2002, 23, 4357–4382. [Google Scholar] [CrossRef]
  15. Casteel, C.H., Jr.; Gorham, L.A.; Minardi, M.J.; Scarborough, S.M.; Naidu, K.D.; Majumder, U.K. A challenge problem for 2D/3D imaging of targets from a volumetric data set in an urban environment. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XIV, Orlando, FL, USA, 10–11 April 2007; Volume 6568, pp. 97–103. [Google Scholar]
  16. Ponce, O.; Prats, P.; Scheiber, R.; Reigber, A.; Moreira, A. Polarimetric 3-D reconstruction from multicircular SAR at P-band. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 3130–3133. [Google Scholar]
  17. Zhu, X.X.; Bamler, R. Tomographic SAR inversion by L1-norm regularization—The compressive sensing approach. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3839–3846. [Google Scholar] [CrossRef]
  18. Zhu, X.X.; Bamler, R. Super-resolution power and robustness of compressive sensing for spectral estimation with application to spaceborne tomographic SAR. IEEE Trans. Geosci. Remote Sens. 2012, 50, 247–258. [Google Scholar] [CrossRef]
  19. Duque, S.; López-Dekker, P.; Mallorquí, J.J.; Nashashibi, A.Y.; Patel, A.M. Experimental results with bistatic SAR tomography. In Proceedings of the 2009 IEEE International Geoscience and Remote Sensing Symposium, Cape Town, South Africa, 12–17 July 2009; pp. II–37–II–40. [Google Scholar]
  20. Hu, C.; Zhang, B.; Dong, X.; Li, Y. Geosynchronous SAR tomography: Theory and first experimental verification using Beidou IGSO satellite. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6591–6607. [Google Scholar] [CrossRef]
  21. Ge, N.; Zhu, X.X. Bistatic-like differential SAR tomography. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5883–5893. [Google Scholar] [CrossRef]
  22. Wahl, D.E.; Eichel, P.; Ghiglia, D.; Jakowatz, C. Phase gradient autofocus-a robust tool for high resolution SAR phase correction. IEEE Trans. Aerosp. Electron. Syst. 1994, 30, 827–835. [Google Scholar] [CrossRef]
  23. Feng, D.; An, D.; Huang, X.; Li, Y. A phase calibration method based on phase gradient autofocus for airborne holographic SAR imaging. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1864–1868. [Google Scholar] [CrossRef]
  24. Lu, H.; Zhang, H.; Fan, H.; Liu, D.; Wang, J.; Wan, X.; Zhao, L.; Deng, Y.; Zhao, F.; Wang, R. Forest height retrieval using P-band airborne multi-baseline SAR data: A novel phase compensation method. ISPRS J. Photogramm. Remote Sens. 2021, 175, 99–118. [Google Scholar] [CrossRef]
  25. Huang, F.; Feng, D.; Hua, Y.; Ge, S.; He, J.; Huang, X. Phase Calibration in Holographic Synthetic Aperture Radar: An Innovative Method for Vertical Shift Correction. Remote Sens. 2024, 16, 2728. [Google Scholar] [CrossRef]
  26. Fienup, J.R.; Miller, J.J. Aberration correction by maximizing generalized sharpness metrics. J. Opt. Soc. Am. A 2003, 20, 609–620. [Google Scholar] [CrossRef]
  27. Morrison, R.L.; Do, M.N.; Munson, D.C. SAR image autofocus by sharpness optimization: A theoretical study. IEEE Trans. Image Process. 2007, 16, 2309–2321. [Google Scholar] [CrossRef] [PubMed]
  28. Pardini, M.; Papathanassiou, K.; Bianco, V.; Iodice, A. Phase calibration of multibaseline SAR data based on a minimum entropy criterion. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 5198–5201. [Google Scholar]
  29. Aghababaee, H.; Fornaro, G.; Schirinzi, G. Phase calibration based on phase derivative constrained optimization in multibaseline SAR tomography. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6779–6791. [Google Scholar] [CrossRef]
  30. Ash, J.N. An autofocus method for backprojection imagery in synthetic aperture radar. IEEE Trans. Geosci. Remote Sens. Lett. 2012, 9, 104–108. [Google Scholar] [CrossRef]
  31. Xie, H.; An, D.; Huang, X.; Zhou, Z. Fast factorized back projection algorithm based on elliptical polar coordinate for one-stationary bistatic low frequency UWB SAR imaging. Acta Elec. Sin. 2014, 42, 462–468. [Google Scholar]
  32. Ferretti, A.; Prati, C.; Rocca, F. Permanent scatterers in SAR interferometry. IEEE Trans. Geosci. Remote Sens. 2001, 39, 8–20. [Google Scholar] [CrossRef]
  33. Wang, D.; Zhang, F.; Chen, L.; Li, Z.; Yang, L. The calibration method of multi-channel spatially varying amplitude-phase inconsistency errors in airborne array TomoSAR. Remote Sens. 2023, 15, 3032. [Google Scholar] [CrossRef]
  34. De Macedo, K.A.C.; Scheiber, R.; Moreira, A. An autofocus approach for residual motion errors with application to airborne repeat-pass SAR interferometry. IEEE Trans. Geosci. Remote Sens. 2008, 46, 3151–3162. [Google Scholar] [CrossRef]
  35. Tebaldini, S.; Guarnieri, A.M. On the role of phase stability in SAR multibaseline applications. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2953–2966. [Google Scholar] [CrossRef]
  36. Iribe, K.; Papathanassiou, K.; Hajnsek, I.; Sato, M.; Yokota, Y. Coherent scatterer in forest environment: Detection, properties and its applications. In Proceedings of the 2010 IEEE International Geoscience and Remote Sensing Symposium, Honolulu, HI, USA, 25–30 July 2010; pp. 3247–3250. [Google Scholar]
  37. Lu, H.; Sun, J.; Wang, J.; Wang, C. A novel phase compensation method for urban 3D reconstruction using SAR tomography. Remote Sens. 2022, 14, 4071. [Google Scholar] [CrossRef]
  38. Yang, W.; Zhu, D. Multi-circular SAR three-dimensional image formation via group sparsity in adjacent sub-apertures. Remote Sens. 2022, 14, 3945. [Google Scholar] [CrossRef]
Figure 1. Observation model of the MB OS BiSAR imaging.
Figure 2. Geometric relationship of the target projection in the MB OS BiSAR imaging.
Figure 3. Geometric diagram of measurement.
Figure 4. Flowchart of the proposed 3D BiSAR imaging method based on the phase error calibration.
Figure 5. Schematic diagram of the point targets. The blue circle in the figure represents the point target.
Figure 6. Three-dimensional imaging results. The blue circle represents the point cloud. (a) Imaging results before the coordinate transformation. (b) Imaging results after the coordinate transformation.
Figure 7. Display of the 3D imaging results from three perspectives after coordinate transformation. The blue circles represent the point cloud. (a) XY section. (b) XZ section. (c) YZ section.
Figure 8. Tomographic profiles of imaging results of point targets. The blue line is the profile formed by the sampling points in the tomographic direction. (a) Target A; (b) Target B; (c) Target C; (d) Target D; (e) Target E.
Figure 9. Target setting and imaging results. (a) Schematic diagram of the target Tar_z. The blue circle represents the point target. (b) Imaging results of the target Tar_z without phase errors. Different colors represent different amplitudes of pixels.
Figure 10. Two-dimensional imaging result and the selected PS. (a) Two-dimensional imaging result. Different colors represent different amplitudes of pixels. (b) The selected PS.
Figure 11. Imaging results of the target Tar_z obtained by different methods. Different colors represent different amplitudes of pixels. (a) Imaging results with the phase error; (b) PGA method; (c) ISOA method; (d) BF-PGA method; (e) Proposed method.
Figure 12. Tomographic profile of the selected PS.
Figure 13. Monte Carlo experiments conducted using the four methods under Gaussian phase noise: variation curves and standard deviations of the image quality indicators versus SNR after calibration. (a) IE; (b) IC; (c) standard deviation of IE; (d) standard deviation of IC.
Figure 14. Monte Carlo experiments conducted using the four methods under uniformly distributed phase noise: variation curves and standard deviations of the image quality indicators versus SNR after calibration. (a) IE; (b) IC; (c) standard deviation of IE; (d) standard deviation of IC.
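The Monte Carlo experiments of Figures 13 and 14 compare the four calibration methods under Gaussian and uniformly distributed phase noise. Injecting such multiplicative, per-track phase errors into a multi-baseline data stack is straightforward; the sketch below is illustrative only (array shapes, noise levels, and variable names are assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in MB SAR stack: one row per baseline (track), one column per pixel.
n_baselines, n_pixels = 25, 1024
data = (rng.standard_normal((n_baselines, n_pixels))
        + 1j * rng.standard_normal((n_baselines, n_pixels)))

# One phase error per track, applied to every pixel of that track.
sigma = 0.5                                          # rad, Gaussian case (Figure 13)
phi_gauss = sigma * rng.standard_normal(n_baselines)
phi_unif = rng.uniform(-np.pi, np.pi, n_baselines)   # uniform case (Figure 14)

corrupted = data * np.exp(1j * phi_gauss)[:, None]

# The error is unit-modulus, so pixel magnitudes are untouched; only the
# cross-track phase history (and hence the tomographic focusing) is degraded.
```

Repeating the corruption and calibration over many random draws of `phi_gauss` or `phi_unif`, then averaging the IE/IC indicators, reproduces the structure of such a Monte Carlo experiment.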
Figure 15. Target structure diagram. The blue circle represents the scattering point. (a) Three-dimensional perspective display. (b) XY perspective diagram. (c) XZ perspective diagram. (d) YZ perspective diagram.
Figure 16. SAR images before and after 2D autofocus processing. Different colors represent different amplitudes of pixels. (a) Before 2D autofocus processing. (b) After 2D autofocus processing.
Figure 17. Imaging results of the complex target obtained by different methods. Different colors represent different amplitudes of pixels. The red circles and letters mark and label the highlighted areas. (a) Imaging results with the phase error; (b) PGA method; (c) ISOA method; (d) BF-PGA method; (e) Proposed method.
Figure 18. Observation of 3D imaging results from three perspectives. Different colors represent different amplitudes of pixels. (ac) Processed by PGA method. (df) Processed by ISOA method. (gi) Processed by BF-PGA method. (jl) Processed by proposed method.
Figure 19. Projection of the MonoSAR and BiSAR configurations. (a) MonoSAR; (b) BiSAR.
Table 1. System parameters.

Parameter                                         Value
Center frequency                                  9.375 GHz
Bandwidth                                         100 MHz
Sampling rate                                     120 MHz
Pulse repetition frequency                        300 Hz
Transmitting station height                       500 m
Receiving station height (reference trajectory)   10 km
Receiving station speed                           100 m/s
Number of receiving station trajectories          25
Trajectory interval                               20 m
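A few first-order quantities implied by Table 1 can be checked with simple arithmetic. The sketch below uses monostatic rules of thumb; the one-stationary bistatic geometry of the paper modifies the effective resolutions, so treat these as order-of-magnitude checks, not the paper's derivation:

```python
# Back-of-the-envelope checks on the Table 1 parameters (monostatic rules of thumb).
C = 299_792_458.0            # speed of light, m/s

f0 = 9.375e9                 # center frequency, Hz
bandwidth = 100e6            # Hz
prf = 300.0                  # pulse repetition frequency, Hz
v_rx = 100.0                 # receiving station speed, m/s
n_tracks = 25
track_spacing = 20.0         # m

wavelength = C / f0                               # ~0.032 m (X-band)
range_res = C / (2.0 * bandwidth)                 # ~1.5 m monostatic slant-range resolution
along_track_step = v_rx / prf                     # ~0.33 m pulse-to-pulse spacing
total_baseline = (n_tracks - 1) * track_spacing   # 480 m cross-track (elevation) aperture

print(wavelength, range_res, along_track_step, total_baseline)
```

The 480 m total baseline is what gives the system its elevation (tomographic) aperture; the exact elevation resolution additionally depends on the bistatic range geometry.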
Table 2. Imaging quality and focusing positions of the point target imaging results.

Direction   Indicator   Expected   A        B        C        D        E
Range       IRW (m)     1.39       1.40     1.41     1.45     1.45     1.42
Range       PSLR (dB)   −13.2      −13.32   −13.20   −13.29   −13.48   −13.09
Range       ISLR (dB)   −10        −10.54   −10.61   −10.86   −10.90   −10.78
Azimuth     IRW (m)     0.65       0.66     0.66     0.66     0.66     0.66
Azimuth     PSLR (dB)   −13.2      −13.27   −13.26   −13.27   −13.26   −13.27
Azimuth     ISLR (dB)   −10        −10.13   −10.12   −10.13   −10.13   −10.12
Elevation   IRW (m)     1.50       1.23     1.22     1.23     1.23     1.22
Elevation   PSLR (dB)   −13.2      −12.85   −12.89   −12.93   −12.90   −12.93
Elevation   ISLR (dB)   −10        −9.79    −9.82    −9.71    −9.77    −9.76

True position (m):     A (−10, −10, 4); B (−10, 10, −4); C (10, −10, −4); D (10, 10, 4); E (0, 0, 0)
Focusing position (m): A (−10.00, −9.86, 4.33); B (−10.00, 10.06, −3.90); C (10.06, −10.00, −3.90); D (10.06, 10.20, 4.33); E (0.06, 0.14, 0.26)
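The indicators reported in Table 2 (IRW, PSLR, ISLR) are standard point-target metrics. The sketch below evaluates them for an ideal, unweighted sinc response, whose theoretical PSLR of about −13.26 dB matches the expected values in the table; the variable names and grid are mine, not the authors':

```python
import numpy as np

# Point-target quality metrics on an ideal sinc response.
# np.sinc(x) = sin(pi*x)/(pi*x); its mainlobe ends at the first nulls x = ±1.
x = np.linspace(-10.0, 10.0, 200_001)
power = np.sinc(x) ** 2                    # power response, peak normalized to 1

mainlobe = np.abs(x) <= 1.0

# IRW: -3 dB width of the mainlobe (in units of the nominal resolution cell).
half_power = (power >= 0.5) & mainlobe
irw = x[half_power].max() - x[half_power].min()

# PSLR: highest sidelobe peak relative to the mainlobe peak.
pslr_db = 10.0 * np.log10(power[~mainlobe].max() / power.max())

# ISLR: sidelobe energy over mainlobe energy (uniform grid, so dx cancels).
islr_db = 10.0 * np.log10(power[~mainlobe].sum() / power[mainlobe].sum())

print(f"IRW={irw:.3f}, PSLR={pslr_db:.2f} dB, ISLR={islr_db:.2f} dB")
```

An unweighted sinc gives an IRW of about 0.886 resolution cells and an ISLR near −10 dB, consistent with the "expected" column; windowing trades a wider IRW for lower sidelobes.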
Table 3. Comparison of the image quality evaluation indicators for imaging results of the target Tar_z.

Indicator   Without Calibration   PGA Method   ISOA Method   BF-PGA Method   Proposed Method
IE          10.62                 9.27         9.15          9.29            8.63
IC          4.09                  8.30         8.74          7.71            9.21
Table 4. Comparison of the image quality evaluation indicators.

Indicator   Without Calibration   PGA Method   ISOA Method   BF-PGA Method   Proposed Method
IE          11.04                 9.72         9.73          9.67            9.26
IC          5.45                  10.94        10.32         11.03           11.76
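Tables 3 and 4 score the autofocus results with image entropy (IE, lower is better) and image contrast (IC, higher is better). These metrics are not defined in this excerpt; the sketch below uses the definitions common in the SAR autofocus literature, namely the entropy of the normalized pixel-power distribution and the standard-deviation-to-mean ratio of pixel power, which may differ in detail from the authors' implementation:

```python
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    """Entropy of the normalized pixel-power distribution (lower = better focused)."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]                       # convention: 0 * log(0) contributes 0
    return float(-(p * np.log(p)).sum())

def image_contrast(img: np.ndarray) -> float:
    """Std/mean ratio of pixel power (higher = better focused)."""
    a = np.abs(img) ** 2
    return float(a.std() / a.mean())

# A perfectly focused image (all energy in one pixel) vs. a fully smeared one.
focused = np.zeros((16, 16), dtype=complex)
focused[8, 8] = 1.0
defocused = np.full((16, 16), 1.0 / 16.0, dtype=complex)

print(image_entropy(focused), image_entropy(defocused))    # 0.0 vs ln(256) ≈ 5.55
print(image_contrast(focused), image_contrast(defocused))  # large vs 0.0
```

Under these definitions, better focusing concentrates energy, which simultaneously lowers IE and raises IC, matching the trend of the Proposed Method columns in both tables.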
He, J.; Xie, H.; Liu, H.; Wu, Z.; Xu, B.; Zhu, N.; Lu, Z.; Qin, P. Multi-Baseline Bistatic SAR Three-Dimensional Imaging Method Based on Phase Error Calibration Combining PGA and EB-ISOA. Remote Sens. 2025, 17, 363. https://doi.org/10.3390/rs17030363
