Article

Parameter Estimation and Error Calibration for Multi-Channel Beam-Steering SAR Systems

1 School of Electronic and Information Engineering, Beihang University, Beijing 100191, China
2 School of Mathematics and Statistics, University of Sheffield, Sheffield S3 7RH, UK
3 Collaborative Innovation Center of Geospatial Technology, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(12), 1415; https://doi.org/10.3390/rs11121415
Submission received: 27 April 2019 / Revised: 29 May 2019 / Accepted: 11 June 2019 / Published: 14 June 2019

Abstract:
Multi-channel beam-steering synthetic aperture radar (multi-channel BS-SAR) can achieve high resolution and wide-swath observations by combining beam-steering technology and azimuth multi-channel technology. Various imaging algorithms have been proposed for multi-channel BS-SAR but the associated parameter estimation and error calibration have received little attention. This paper focuses on errors in the main parameters in multi-channel BS-SAR (the derotation rate and constant Doppler centroid) and phase inconsistency errors. These errors can significantly reduce image quality by causing coarser resolution, radiometric degradation, and the appearance of ghost targets. Accurate derotation rate estimation is important to remove the spectrum aliasing caused by beam steering, and spectrum reconstruction for multi-channel sampling requires an accurate estimate of the constant Doppler centroid and phase inconsistency errors. The time shift and scaling effect of the derotation error on the azimuth spectrum are analyzed in this paper. A method to estimate the derotation rate is presented, based on time shifting, and integrated with estimation of the constant Doppler centroid. Since the Doppler histories of azimuth targets are space-variant in multi-channel BS-SAR, the conventional estimation methods for phase inconsistency errors do not work, and we present a novel method based on minimum entropy to estimate and correct these errors. Simulations validate the proposed error estimation methods.

1. Introduction

By using azimuth antenna beam-steering technology, spaceborne synthetic aperture radar (SAR) systems can operate in beam-steering (BS) modes, such as sliding spotlight mode [1,2] and Terrain Observation by Progressive Scans (TOPS) mode [3,4], which can achieve higher resolution or wider swaths than traditional strip-map mode. However, spaceborne SAR suffers from a conflict between azimuth resolution and range swath. By increasing the number of azimuth-receiving channels, multi-channel technology [5,6,7] can overcome this dichotomy and significantly increase the swath without resolution loss. As a result, multi-channel technology has been adopted in advanced spaceborne SAR systems [8,9,10,11], and processing algorithms for multi-channel strip-map imaging have been proposed, including the spectrum reconstruction algorithm [12] and the digital beamforming algorithm [13].
Increasing requirements for high-resolution wide-swath data are currently driving the development of multi-channel BS-SAR [14]. By combining beam-steering technology and multi-channel technology, multi-channel sliding spotlight SAR can provide very high resolution over a wide swath, and multi-channel TOPS can image at high resolution over an ultra-wide swath. However, multi-channel BS-SAR faces two main challenges: spectrum aliasing caused by the azimuth-varying Doppler centroid, and non-uniform sampling [15]. Due to the azimuth beam steering, the Doppler centroid (DC) varies linearly across targets at different azimuths [3], and can be divided into two parts: the azimuth-varying Doppler centroid and the constant Doppler centroid (CDC). The azimuth-varying Doppler centroid, whose slope is denoted the derotation rate (DRR), is related to the illumination geometry and the beam-steering rate, while the CDC is determined by the antenna pointing direction at the central time of observation. An azimuth-varying Doppler centroid results in the azimuth bandwidth spanning several pulse repetition frequencies (PRFs) and in Doppler spectrum aliasing. In addition, Doppler spectrum aliasing is much more severe in multi-channel systems because the PRF is several times smaller than in single-channel BS-SAR and the data are usually non-uniformly sampled in time.
To resolve these challenges, several imaging algorithms have been proposed [14,15,16,17], including use of sub-aperture blocking and filter reconstruction [14], a full-aperture algorithm which combines derotation and a space-time adaptive processor [16], and an approach combining derotation and filter reconstruction [15,17]. In most algorithms, however, the derotation requires accurate values of the DRR and CDC to remove the spectrum aliasing. If the DRR or CDC have errors, the azimuth-varying Doppler centroid cannot be fully eliminated, resulting in inaccurate correction of the azimuth antenna pattern. In addition, for a multi-channel SAR system, phase inconsistency errors between channels are inherent. These can strongly influence the performance of multi-channel signal reconstruction and even produce ghost targets in images. Amplitude inconsistency errors, timing delay errors, and channel position errors should also be taken into account for multi-channel systems. Therefore, it is important to develop methods to estimate parameters and correct errors that are properly suited to multi-channel BS-SAR.
In [18], the sensitivity of the DRR to the beam-steering rate was analyzed, but its effect on imaging and its estimation were not considered. The effect of CDC error on azimuth ambiguity was analyzed in [19]. The spatial cross-correlation coefficient (SCCC) method to estimate the CDC error was presented in [20] for multi-channel strip-map SAR and accounts for the phase inconsistency errors, but it is sensitive to non-uniformity in the spatial sampling. For phase inconsistency errors, many estimation methods [21,22,23,24,25,26,27,28,29] were proposed for multi-channel strip-map SAR, mainly based on multiple signal classification (MUSIC) theory [21]; this theory is not suitable for multi-channel BS-SAR systems, however, because the noise subspace cannot be extracted from the covariance matrix of the Doppler spectrum. Adaptively Weighted Least Squares (AWLS) [27] and Doppler Spectrum Optimization (DSO) [28], which are based on the spectrum power distribution, can be exploited in multi-channel BS-SAR, but are sensitive to the CDC error. Amplitude inconsistency and timing delay errors in multi-channel BS-SAR can be easily estimated using azimuth cross-correlation [29]. The along-track channel position errors are small enough to be ignored, while the position errors in range can be integrated into the phase inconsistency errors [24].
In this paper, we therefore develop algorithms for estimating and correcting the DRR error, CDC error, and phase inconsistency errors for multi-channel BS-SAR. Section 2 reviews multi-channel BS-SAR systems and discusses their Doppler spectrum characteristics. Pre-processing algorithms are also reviewed in this section. Section 3 first derives an expression for the spectrum after azimuth pre-processing with DRR error, and this is used to define quantities that we term the ‘Symmetrical Shift’ and ‘Scaling Effect’. A DRR error estimation method is proposed based on the Symmetrical Shift, and this is then extended to simultaneously give an estimation method for the CDC. Next, motivated by the principle of spectrum optimization, a novel algorithm to estimate phase inconsistency errors is presented based on the minimum entropy criterion, which is robust against CDC error. An adaptive weighting strategy is also presented to suppress noise and increase the precision of the estimation methods. The performance of the proposed methods is assessed using simulated SAR data in Section 4, and conclusions are drawn in Section 5.

2. Multi-Channel Beam-Steering SAR System

Azimuth multi-channel technology increases the number of azimuth-receiving channels and thus significantly improves the spatial sampling rate; this is equivalent to increasing the PRF by using digital beamforming technology. In addition, to achieve higher resolution or wider swath, azimuth multi-channel technology and beam-steering technology can be combined to provide new imaging modes, namely the multi-channel sliding spotlight mode and the multi-channel TOPS mode.
As shown in Figure 1, the beam in multi-channel sliding spotlight SAR is steered from forward-looking to backward-looking squint during an illumination period, as in single-channel sliding spotlight SAR. A target can be observed for longer than by a SAR sensor working in strip-map mode, which leads to finer resolution. The N (N = 3 in Figure 1) receivers acquire the reflected pulse almost simultaneously, so the SAR sensor can store N times more pulse data than a single-channel spotlight SAR. This means that by reducing the PRF by the factor N, the range swath can be widened without degrading the azimuth resolution. Similarly, multi-channel TOPS employs beam steering from backward-looking to forward-looking squint during illumination and can achieve a wider swath than single-channel TOPS with the help of the N azimuth channels.

2.1. Signal Model

A multi-channel BS-SAR system transmits a linear frequency modulated signal and each sub-channel receives the backscattered echoes. Since parameter errors in azimuth are the focus of this paper, for simplicity only the azimuth signal is considered.
In multi-channel BS-SAR, antenna steering is combined with multi-channel technology. In Figure 2, $T_w$ is the overall illumination time, $X_s$ represents the fully illuminated swath in azimuth, and $X_f(r)$ represents the azimuth antenna footprint length. In addition, there are two parameters specific to multi-channel BS-SAR. One is $\omega_\theta$, the beam-steering rate, which is usually constant. Since the effective velocity [30] of the satellite is also approximately invariant, the beam is assumed to focus on a virtual rotation point (VRP). The beam rotates clockwise around the VRP in multi-channel sliding spotlight SAR and counterclockwise in multi-channel TOPSAR. Another new parameter, $r_s$, is the distance from the SAR sensor to the VRP at the central illumination time, while $r$ is the distance from the SAR sensor to the target at zero-Doppler time. $\omega_\theta$ and $r_s$ are positive in multi-channel sliding spotlight mode and negative in multi-channel TOPS mode.
We define a factor $\alpha(r)$, which scales the azimuth resolution in multi-channel BS-SAR due to the steering beam, as
$$\alpha(r)=\frac{r_s-r}{r_s}.\tag{1}$$
It can be seen that $\alpha(r)<1$ in multi-channel sliding spotlight SAR and $\alpha(r)>1$ in multi-channel TOPSAR. The azimuth resolution in multi-channel sliding spotlight SAR is therefore finer than in multi-channel strip-map SAR, while it is coarser in multi-channel TOPSAR. Reference [15] provides another definition of $\alpha(r)$, which is essentially the same.
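As a quick numerical illustration of Equation (1), the sign convention for $r_s$ determines whether $\alpha(r)$ shrinks or stretches the resolution. The ranges below are hypothetical values chosen for illustration, not taken from the paper:

```python
# Illustrative sketch of Eq. (1); the ranges below are hypothetical.
def alpha(r, r_s):
    """Azimuth resolution scaling factor alpha(r) = (r_s - r) / r_s."""
    return (r_s - r) / r_s

r = 700e3  # sensor-to-target range at zero-Doppler time [m]

# Sliding spotlight: VRP beyond the target, r_s > r > 0  ->  0 < alpha < 1 (finer resolution)
print(alpha(r, r_s=1000e3))   # 0.3

# TOPS: r_s < 0 by the paper's sign convention  ->  alpha > 1 (coarser resolution)
print(alpha(r, r_s=-350e3))   # 3.0
```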
Assuming that there are $N$ channels and that the azimuth antenna pattern is rectangular with 3 dB beamwidth, the azimuth signal from a point target located at $(r,vt_0)$ acquired by the $i$-th receiver can be expressed as
$$s_a^i(t;r,t_0)=\mathrm{rect}\!\left[\frac{\alpha(r)\,v\!\left(t+\frac{d_i}{2v}\right)-vt_0}{X_f(r)}\right]\cdot\mathrm{rect}\!\left[\frac{t+\frac{d_i}{2v}}{T_w}\right]\cdot\mathrm{rect}\!\left[\frac{vt_0}{X_s}\right]\exp\!\left(-j\frac{4\pi}{\lambda}R_i(t;r,t_0)\right).\tag{2}$$
In Equation (2), $t$ is the slow time in azimuth, $\mathrm{rect}[\cdot]$ represents the rectangular function, $d_i$ represents the distance between the $i$-th channel and the reference channel, which is usually at the center of the antenna, $v$ is the effective velocity of the SAR sensor, and $\lambda$ is the wavelength. $R_i(t;r,t_0)$ is the instantaneous distance between the $i$-th equivalent phase center of the sensor and an arbitrary target, given by [17]
$$R_i(t;r,t_0)=\frac{1}{2}R(t-t_0;r)+\frac{1}{2}R\!\left(t+\frac{d_i}{v}-t_0;r\right)\approx R\!\left(t+\frac{d_i}{2v}-t_0;r\right)+\frac{\lambda f_r d_i^2}{16v^2},\tag{3}$$
where $R(t;r)=\sqrt{r^2+(vt)^2-2rvt\cos\theta}$, in which $\theta$ is the aspect angle depending on $r$, and $f_r$ is the Doppler rate. It can be seen that the signal of the $i$-th channel is a replica of the reference single-channel BS-SAR impulse response with a time delay and a constant phase.
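The phase-center approximation in Equation (3) can be checked numerically. The sketch below uses hypothetical orbit numbers (velocity, wavelength, ranges; none from the paper's simulations) and compares the exact two-way average range against the monostatic term plus a constant residual of magnitude $|\lambda f_r|d_i^2/(16v^2)$:

```python
import math

# Hypothetical parameters for illustration only (not the paper's simulation values).
v, lam, r = 7200.0, 0.031, 700e3   # effective velocity [m/s], wavelength [m], range [m]
theta = math.radians(90.0)          # broadside aspect angle
d_i, t, t0 = 6.0, 0.4, 0.0          # channel offset [m], slow time [s], target time [s]

def R(t, r):
    # R(t; r) = sqrt(r^2 + (v t)^2 - 2 r v t cos(theta))
    return math.sqrt(r**2 + (v * t)**2 - 2 * r * v * t * math.cos(theta))

# Left-hand side of Eq. (3): average of transmit and receive ranges
exact = 0.5 * R(t - t0, r) + 0.5 * R(t + d_i / v - t0, r)

# Right-hand side: monostatic phase center plus the constant residual term
f_r = -2 * v**2 / (lam * r)                       # broadside Doppler rate [Hz/s]
residual = abs(lam * f_r) * d_i**2 / (16 * v**2)  # ~ d_i^2 / (8 r), a few micrometers
approx = R(t + d_i / (2 * v) - t0, r) + residual

print(abs(exact - approx) < 1e-6)   # True: the two sides agree very closely
```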

2.2. Spectrum Characteristics

The azimuth spectrum of multi-channel BS-SAR has some particular characteristics due to the steering beam and multi-channel sampling. The raw-data support in the azimuth time-frequency domain (TFD) is outlined in Figure 3 for both multi-channel sliding spotlight SAR and multi-channel TOPSAR.
In Figure 3, three points (P1, P2 and P3) located on the left, center, and right of the azimuth are highlighted. The colored thin lines in this figure represent Doppler histories with negative slope, $f_r$. The Doppler history of P2 is shifted upwards because of the constant Doppler centroid. The Doppler centroid of P1 is large in multi-channel sliding spotlight SAR because it experiences forward-looking squint illumination by the beam. By contrast, P3 is illuminated by backward-looking squint and has a smaller Doppler centroid. Consequently, the beam steering results in a parallelogram TFD support which slants towards the lower right in multi-channel sliding spotlight SAR and towards the upper right in multi-channel TOPSAR. The steering bandwidth, $B_s$, which occurs in multi-channel BS-SAR is then given by
$$B_s=k_\omega\cdot T_w,\tag{4}$$
where $k_\omega$ is the slope of the azimuth-varying Doppler centroid (also known as the derotation rate), given by [3]
$$k_\omega=\frac{\partial}{\partial t}\left[\frac{2v}{\lambda}\cos\theta_{dc}(t)\right]=-\frac{2v}{\lambda}\sin\theta\cdot\omega_\theta\approx-\frac{2v^2\sin^2\theta}{\lambda r_s},\tag{5}$$
where $\theta_{dc}(t)$ is the instantaneous aspect angle.
Due to the beam steering, the illumination time of any point target is expanded or shrunk by the factor $1/\alpha(r)$, which results in its instantaneous bandwidth, $B_a$, being weighted by the same factor. Figure 3 shows that the instantaneous bandwidth is larger than the PRF but less than $N\cdot\mathrm{PRF}$. The total bandwidth, including the instantaneous bandwidth $B_a$ and the steering bandwidth $B_s$, is then given by
$$B_t=B_a+B_s=\frac{B_{3\,\mathrm{dB}}}{\alpha(r)}+k_\omega\cdot T_w,\tag{6}$$
where $B_{3\,\mathrm{dB}}$ is the bandwidth corresponding to the 3 dB antenna beamwidth.
It can be seen that the total bandwidth, $B_t$, is very large in the case of beam steering, while the value $N\cdot\mathrm{PRF}$ is usually only a little larger than $B_{3\,\mathrm{dB}}$. This means the bandwidth spans several PRF intervals, resulting in spectrum aliasing. Moreover, the operating PRF in multi-channel SAR systems is several ($\sim N$) times lower than in single-channel SAR, so the spectrum aliasing is severe and needs to be removed.
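To make the aliasing concrete, the back-of-envelope sketch below plugs hypothetical numbers (illustrative only, not the paper's Table 1) into Equations (4) and (6) and counts how many PRF intervals the total bandwidth spans:

```python
import math

# Hypothetical multi-channel TOPS numbers, for illustration only.
B_3dB   = 2500.0      # bandwidth of the 3 dB antenna beamwidth [Hz]
alpha_r = 3.0         # resolution scaling factor (TOPS: alpha > 1)
k_omega = 5200.0      # derotation rate magnitude [Hz/s]
T_w     = 4.0         # overall illumination time [s]
N, f_prf = 4, 1300.0  # number of channels and per-channel PRF [Hz]

B_a = B_3dB / alpha_r   # instantaneous bandwidth (first term of Eq. (6))
B_s = k_omega * T_w     # steering bandwidth, Eq. (4)
B_t = B_a + B_s         # total bandwidth, Eq. (6)

print(round(B_t, 1))                 # 21633.3 Hz
print(math.ceil(B_t / f_prf))        # spans 17 single-channel PRF intervals
print(round(B_t / (N * f_prf), 2))   # ~4.16x even the total sampling rate N*PRF
```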

2.3. Pre-Processing Approach

The combination of filtering reconstruction and derotation is commonly adopted to tackle spectrum aliasing in multi-channel BS-SAR because of its high efficiency and effectiveness. Derotation is essentially a convolution operation, which can be implemented by complex function multiplication and FFT, but the aliasing caused by under-sampling will remain. In the pre-processing algorithm, the FFT can be replaced by filter reconstruction, which can overcome the remaining aliasing and simultaneously produce a uniformly sampled signal.
The first phase function of the pre-processing in a multi-channel BS-SAR system is given by
$$h_1^i(t)=\exp\!\left(-j\pi k_\omega\!\left(t+\frac{d_i}{2v}\right)^{2}-j\pi f_d\!\left(t+\frac{d_i}{2v}\right)+j\pi\frac{f_r d_i^2}{4v^2}\right),\tag{7}$$
where $f_d$ is the CDC. After multiplying by the first phase function $h_1^i(t)$, the steering bandwidth $B_s$ is eliminated and the CDC is compensated, as shown in Figure 4a.
Expanding the range $R_i(t;r,t_0)$ in a second-order Taylor series and ignoring the constant term, the signal of the $i$-th channel after multiplying by the first phase function $h_1^i(t)$ is
$$s_a^i(t;r,t_0)=u_a\!\left(t+\frac{d_i}{2v};r,t_0\right),\tag{8}$$
where $u_a(t;r,t_0)$ is the reference single-channel BS-SAR signal after the first step of derotation, given by
$$u_a(t;r,t_0)=\mathrm{rect}\!\left[\frac{\alpha(r)vt-vt_0}{X_f(r)}\right]\cdot\mathrm{rect}\!\left[\frac{t}{T_w}\right]\cdot\mathrm{rect}\!\left[\frac{vt_0}{X_s}\right]\exp\!\left(j\pi\left(f_r-k_\omega\right)\left(t-t_0\right)^2\right).\tag{9}$$
As shown in Figure 4a, the bandwidth of $s_a^i(t;r,t_0)$ is still larger than the PRF due to the under-sampling in each channel. Transforming (8) into the Doppler domain by an FFT gives:
$$S_a^i(f;r)=\frac{1}{N}\sum_{k=1}^{N}U_a\!\left(f+(k-1)f_{prf};r\right)e^{\,j2\pi\left(f+(k-1)f_{prf}\right)\frac{d_i}{2v}},\quad -\frac{N f_{prf}}{2}+f_d<f<\frac{N f_{prf}}{2}+f_d,\tag{10}$$
where $U_a(f+(k-1)f_{prf};r)$ is the $k$-th sub-band of the spectrum of $u_a(t;r,t_0)$. The transfer matrix of multi-channel BS-SAR after the derotation can be described by [7,12]
$$\mathbf{H}(f)=\frac{1}{N}\begin{bmatrix} e^{\,j2\pi\left(f+0\cdot f_{prf}\right)\frac{d_0}{2v}} & \cdots & e^{\,j2\pi\left(f+(N-1)f_{prf}\right)\frac{d_0}{2v}}\\ \vdots & \ddots & \vdots\\ e^{\,j2\pi\left(f+0\cdot f_{prf}\right)\frac{d_{N-1}}{2v}} & \cdots & e^{\,j2\pi\left(f+(N-1)f_{prf}\right)\frac{d_{N-1}}{2v}} \end{bmatrix}.\tag{11}$$
Equation (10) can then be rewritten in matrix form as
$$\mathbf{S}_a(f,r)=\mathbf{H}(f)\,\mathbf{U}_a(f,r),\tag{12}$$
where
$$\mathbf{S}_a(f,r)=\left[S_a^0(f,r),S_a^1(f,r),\ldots,S_a^{N-1}(f,r)\right]^{T},\tag{13}$$
and
$$\mathbf{U}_a(f,r)=\left[U_a\!\left(f+0\cdot f_{prf};r\right),U_a\!\left(f+1\cdot f_{prf};r\right),\ldots,U_a\!\left(f+(N-1)f_{prf};r\right)\right]^{T}.\tag{14}$$
The reconstruction filter is then given by
$$\mathbf{P}(f)=\mathbf{H}^{-1}(f)=\left[\mathbf{P}_1(f),\mathbf{P}_2(f),\ldots,\mathbf{P}_N(f)\right]^{T},\tag{15}$$
where $-1$ indicates the matrix inverse and $T$ denotes matrix transposition. The unambiguous Doppler spectrum $\mathbf{U}_a(f,r)$ can then be obtained by summing the weighted $S_a^i(f;r)$ over all Doppler gates as
$$\mathbf{U}_a(f,r)=\mathbf{P}(f)\,\mathbf{S}_a(f,r).\tag{16}$$
Note that $\mathbf{P}(f)$ may fail to reconstruct the spectrum, because the transfer matrix $\mathbf{H}(f)$ is singular when the receivers' samplings coincide spatially. References [31,32,33] provide modified reconstruction filters for this case.
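A minimal numerical sketch of the reconstruction filter of Equations (11) and (15), with hypothetical channel offsets and PRF (none taken from the paper): build $\mathbf{H}(f)$ for one Doppler gate, invert it, and confirm that applying $\mathbf{P}(f)$ to the aliased channel spectra recovers the sub-bands:

```python
import numpy as np

# Illustrative values only; d holds hypothetical channel offsets d_i [m].
N, v, f_prf = 4, 7200.0, 1300.0
d = np.array([-4.5, -1.5, 1.5, 4.5])

def transfer_matrix(f):
    # H[i, k] = (1/N) exp(j 2*pi (f + k*f_prf) d_i / (2v)),  k = 0..N-1 (Eq. (11))
    k = np.arange(N)
    return np.exp(1j * 2 * np.pi * np.outer(d / (2 * v), f + k * f_prf)) / N

f = 200.0                    # one Doppler gate inside the base band
H = transfer_matrix(f)
P = np.linalg.inv(H)         # reconstruction filter P(f) = H^{-1}(f), Eq. (15)

# Forward model S = H U (Eq. (12)) with hypothetical sub-band values, then invert
U_true = np.array([1 + 2j, -0.5j, 0.3, 2 - 1j])
S = H @ U_true
print(np.allclose(P @ S, U_true))   # True
```

When the receivers' spatial samples coincide, $\mathbf{H}(f)$ becomes singular exactly as the text describes; checking `np.linalg.cond(H)` before inverting is a cheap guard in practice.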
Using filter reconstruction in the standard derotation operation is superior to using the FFT. The signals of the $N$ channels after the first phase multiplication are transformed into the Doppler domain and combined into a whole unambiguous spectrum by the reconstruction filtering, which plays a role similar to that of the FFT in the implementation of the convolution. On the other hand, the signal produced by the reconstruction filter can also be regarded as being in the time domain [14], because the FFT in the pre-processing is one of the steps of the derotation implementation, which is essentially a convolution in the time domain. The last phase multiplication in the pre-processing can then be performed as in single-channel BS-SAR, with a phase function given by
$$h_2(t)=\exp\!\left(-j\pi k_\omega t^2\right).\tag{17}$$
The signal after the pre-processing can then be expressed as
$$s_b(t;r,t_0)=\mathrm{rect}\!\left[\frac{t}{\frac{X_f(r)}{v}\frac{r_s}{r}}\right]\mathrm{rect}\!\left[\frac{t-\frac{r_s}{r}t_0}{T_w\!\left(\frac{r_s}{r}-1\right)}\right]\mathrm{rect}\!\left[\frac{vt_0}{X_s}\right]\exp\!\left(j\pi\frac{2v^2\cos^2\theta}{\lambda\left(r_s-r\right)}\left(t-t_0\right)^2\right).\tag{18}$$
Please note that there are three envelope terms in (18), and $vt_0$ in the third term represents the azimuth position of an arbitrary target. Whatever the value of $t_0$, the first term is contained in the second term, as demonstrated in [34]. Therefore, the time range of the signal $s_b(t;r,t_0)$ is determined by the first term, and (18) can be simplified as
$$s_b(t;r,t_0)=\mathrm{rect}\!\left[\frac{t}{\frac{X_f(r)}{v}\frac{r_s}{r}}\right]\exp\!\left(j\pi\frac{2v^2\cos^2\theta}{\lambda\left(r_s-r\right)}\left(t-t_0\right)^2\right).\tag{19}$$
This means the time ranges of the pre-processed signals from targets at different azimuths are independent of their azimuth positions and overlap in time. For simplicity, we use the symbol $\{t\}$ to represent the time range of the pre-processed signal.
Because only one FFT is employed in the pre-processing, the overlapped envelope is bell-shaped, and it is applicable to perform azimuth antenna pattern correction and signal weighting at this stage. After pre-processing, the equivalent PRF is given by [34]
$$f_{prf2}=\frac{N_a\cdot k_\omega}{N\cdot f_{prf}},\tag{20}$$
where $N_a$ is the total processing number in azimuth [34] after zero-padding. It should be selected to ensure $f_{prf2}>B_t$.
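Choosing $N_a$ in Equation (20) can be sketched as follows; all numbers are illustrative assumptions, and $B_t$ is simply taken from the kind of estimate made in Section 2.2:

```python
import math

# Illustrative values only.
N, f_prf, k_omega = 4, 1300.0, 5200.0
B_t = 21633.3                      # assumed total azimuth bandwidth [Hz]

# Smallest N_a satisfying f_prf2 = N_a * k_omega / (N * f_prf) > B_t ...
N_a_min = math.ceil(B_t * N * f_prf / k_omega)
# ... rounded up to a power of two for the azimuth FFT
N_a = 1 << (N_a_min - 1).bit_length()

f_prf2 = N_a * k_omega / (N * f_prf)
print(N_a, f_prf2 > B_t)           # 32768 True
```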

3. Estimation Methods for DRR, CDC Error, and Phase Inconsistency Errors

A precise DRR and CDC, and well-calibrated phase inconsistency errors, are important for focusing multi-channel BS-SAR data. Compared with single-channel SAR, multi-channel SAR systems can achieve higher resolution and/or wider swaths. As a result, the effects of DRR error on imaging are more serious and the required precision of the CDC is higher. At the same time, the spectrum aliasing brought by non-uniform sampling must be resolved or circumvented before the estimation of the DRR and CDC errors; this problem does not exist in single-channel SAR systems. Furthermore, the signal-to-noise ratio loss resulting from non-uniform sampling and the phase inconsistency errors between channels increase the difficulty of estimating the DRR and CDC errors.
There have been numerous studies of the effects of the CDC error [30] and phase inconsistency errors [21,22,23,24,25,26,27,28,29] on imaging, but little analysis of the derotation rate error. In this section, the impact of DRR error on the pre-processing is first presented, and estimation methods for the DRR error, CDC error, and phase inconsistency errors are then introduced. A novel weighting strategy is then presented to improve the effectiveness and robustness of these estimation methods. Once these errors are estimated, they can be corrected in the imaging procedure.

3.1. Analysis of DRR Error

The derotation rate $k_\omega$ is important in the pre-processing because the accuracy of the correction of the beam-steering bandwidth $B_s$ depends on it. The measurement of the azimuth-steered antenna pattern also requires a highly precise $k_\omega$ [18]. However, $k_\omega$ is not always known precisely, due to the approximation in (5). $k_\omega$ is defined as the partial derivative of the azimuth-varying Doppler centroid with respect to time, which is approximately the Doppler rate of a point located at the VRP. Furthermore, the effective velocity in (5) may introduce an error into $k_\omega$, which will affect the pre-processing. Because of the curvature of the orbit and of the Earth, a linear geometric model [30] is commonly used in spaceborne SAR instead of a curved geometric model, for convenience. In the linear geometric model, the satellite velocity and the beam velocity are replaced by the effective velocity in the imaging processing. In practice, however, neither velocity is exact for calculating the derotation rate: the suitable velocity value for the DRR lies between the satellite velocity and the effective velocity. Though it is not accurate, the effective velocity is still used to calculate the DRR in practice, which inevitably introduces an error. Since the satellite velocity is usually about 10% larger than the effective velocity for a typical spaceborne SAR at an altitude of 600 km, the maximum calculation error for the DRR will be about 5%. If there is a DRR error, the time ranges of different azimuth targets after pre-processing will not overlap, which will lead to erroneous azimuth antenna pattern correction and signal weighting. More seriously, there will be aliasing in the signal after pre-processing when this error is large enough.
Define $k_\omega$ with DRR error as:
$$k_{\omega e}=k_\omega+\Delta k_\omega=k_\omega(1+\delta).\tag{21}$$
Correspondingly, the slant range from the sensor to the VRP becomes
$$r_{se}=\frac{r_s}{1+\delta}.\tag{22}$$
Substituting $k_{\omega e}$ into (7), the signal after pre-processing with an inaccurate DRR is given by
$$s_{be}(t;r,t_0)=\mathrm{rect}\!\left[\frac{(1+\delta)t-\frac{\delta}{\alpha(r)}t_0}{\frac{X_f(r)}{v}\frac{r_s}{r}\left(1-\frac{\delta}{\alpha(r)}\frac{r}{r_s}\right)}\right]\mathrm{rect}\!\left[\frac{t-\frac{r_{se}}{r}t_0}{T_w\!\left(\frac{r_{se}}{r}-1\right)}\right]\mathrm{rect}\!\left[\frac{vt_0}{X_s}\right]\exp\!\left(j\pi\frac{k_{\omega e}f_r}{k_{\omega e}-f_r}\left(t-t_0\right)^2\right).\tag{23}$$
Unlike in (18), the first term in (23) depends on $t_0$. Substituting $-\frac{X_s}{2v}<t_0<\frac{X_s}{2v}$ into the first and second terms in (23), the time ranges $\{t\}$ constrained by the first and second terms are given by
$$\frac{\delta}{1+\delta}\frac{t_0}{\alpha(r)}-\frac{1}{2}\frac{X_f(r)}{(1+\delta)v}\frac{r_s}{r}\left(1-\frac{\delta}{\alpha(r)}\frac{r}{r_s}\right)<t<\frac{\delta}{1+\delta}\frac{t_0}{\alpha(r)}+\frac{1}{2}\frac{X_f(r)}{(1+\delta)v}\frac{r_s}{r}\left(1-\frac{\delta}{\alpha(r)}\frac{r}{r_s}\right)\tag{24}$$
and
$$\frac{r_{se}}{r}t_0-\frac{1}{2}T_w\!\left(\frac{r_{se}}{r}-1\right)<t<\frac{r_{se}}{r}t_0+\frac{1}{2}T_w\!\left(\frac{r_{se}}{r}-1\right).\tag{25}$$
The time range $\{t\}$ given by (24) lies inside $\{t\}$ given by (25), so the second term in (23) can be ignored (see Appendix A). The third term cannot be removed because the first term contains $t_0$. So (23) can be rewritten as
$$s_{be}(t;r,t_0)=\mathrm{rect}\!\left[\frac{t-\frac{\delta}{1+\delta}\frac{t_0}{\alpha(r)}}{\frac{X_f(r)}{v}\frac{r_s}{r}\cdot\frac{1}{1+\delta}\left(1-\frac{\delta}{\alpha(r)}\frac{r}{r_s}\right)}\right]\mathrm{rect}\!\left[\frac{vt_0}{X_s}\right]\exp\!\left(j\pi\frac{k_{\omega e}f_r}{k_{\omega e}-f_r}\left(t-t_0\right)^2\right).\tag{26}$$

3.1.1. Symmetrical Shift

Compared to Equation (19), it can be seen from (26) and Figure 4b that the time range $\{t\}$ of a point target located at $(r,vt_0)$ is shifted by
$$t_{ss}=\frac{\delta}{1+\delta}\frac{t_0}{\alpha(r)}.\tag{27}$$
This means that the overlapped pre-processed signals from different azimuth targets separate according to their azimuth positions when a DRR error exists. The shift of the signal from an arbitrary point target depends linearly on its azimuth position. The signals from two targets symmetrically located about the scene center separate symmetrically, so we refer to this effect as a ‘symmetrical shift’.

3.1.2. Scaling Effect

The duration of $\{t\}$ of an arbitrary point target is still independent of its azimuth position, but is increased or decreased by a factor given by:
$$\beta(\delta)=\frac{1}{1+\delta}\left(1-\frac{\delta}{\alpha(r)}\frac{r}{r_s}\right)=\frac{1}{1+\delta}\left(1-\delta\frac{1-\alpha(r)}{\alpha(r)}\right)=1-\frac{\delta}{(1+\delta)\alpha(r)}.\tag{28}$$
Since the equivalent PRF (see (20)) is a function of $k_\omega$, the number of samples of $\{t\}$ for an arbitrary point target is also increased or decreased:
$$\frac{N_{te}}{N_t}=\frac{f_{prf2e}\,\beta(\delta)}{f_{prf2}}=(1+\delta)\beta(\delta)=1+\delta-\frac{\delta}{\alpha(r)},\tag{29}$$
where $N_{te}$ and $N_t$ are the numbers of samples of a target with and without DRR error, respectively, and $f_{prf2e}$ is the equivalent PRF after pre-processing with DRR error. When $\delta>0$, $N_t$ is reduced for multi-channel sliding spotlight SAR and increased for multi-channel TOPSAR, while the opposite occurs when $\delta<0$. Hence we refer to this as a ‘scaling effect’. Although $N_t$ is reduced in multi-channel sliding spotlight SAR with $\delta>0$ and in multi-channel TOPSAR with $\delta<0$, the number of samples for the whole illuminated scene will increase, because the ‘symmetrical shift’ separates the energy along azimuth (see Appendix B).
Taking the multi-channel TOPS mode as an example, a simple simulation was performed using the parameters listed in Table 1. Figure 5 shows the results after pre-processing. Three point targets are located at different range and azimuth positions in the scene in Figure 5a. With a DRR error ($\delta=0.03$), a signal shift occurs after pre-processing in Figure 5b, whereas the time ranges $\{t\}$ of the three point targets are aligned after pre-processing without DRR error in Figure 5d. The number of samples clearly increases with DRR error in Figure 5c.
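Equations (28) and (29) can be sanity-checked directly; the $\alpha(r)$ values below are hypothetical, chosen to cover both modes:

```python
# Illustrative check of the scaling effect, Eqs. (28)-(29).
def beta(delta, alpha_r):
    # duration scaling factor, Eq. (28), last form
    return 1.0 - delta / ((1.0 + delta) * alpha_r)

def sample_ratio(delta, alpha_r):
    # N_te / N_t = (1 + delta) * beta(delta) = 1 + delta - delta / alpha_r, Eq. (29)
    return 1.0 + delta - delta / alpha_r

delta = 0.03
# TOPS (alpha > 1): more samples per target; sliding spotlight (alpha < 1): fewer
print(round(sample_ratio(delta, 3.0), 3))   # 1.02
print(round(sample_ratio(delta, 0.3), 3))   # 0.93
# the two forms in Eq. (29) agree
print(abs((1 + delta) * beta(delta, 3.0) - sample_ratio(delta, 3.0)) < 1e-12)  # True
```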

3.2. Estimation of the DRR Error

Based on the analysis of the effects of DRR error, a novel estimation method is proposed in this section, which contributes to the imaging processing of multi-channel BS-SAR. The central positions of the pre-processed signals from different azimuth point targets overlap when there is no DRR error, but shift when a DRR error occurs (see (27)). Assume there are two point targets located at $(r,vt_1)$ and $(r,vt_2)$; the time interval between them is
$$\Delta t=\frac{t_2-t_1}{\alpha(r)}.\tag{30}$$
Then the time shift between the targets after pre-processing is
$$\Delta t_{s1}=\frac{\delta}{1+\delta}\frac{t_2-t_1}{\alpha(r)}.\tag{31}$$
Therefore, the DRR error can be obtained as
$$\delta=\frac{\Delta t_{s1}/\Delta t}{1-\Delta t_{s1}/\Delta t},\qquad \Delta k_\omega=\frac{\delta}{1+\delta}k_{\omega e}.\tag{32}$$
Since the raw data are a collection of echoes reflected by all scatterers on the ground, we cannot separate the echoes of two point targets from the raw data. If we split the raw data into two equal parts and extend each to the size of the whole data set by zero-padding, we obtain two images after the pre-processing. Although the envelopes of different targets in each image do not perfectly overlap, due to their different locations, the whole envelope will also shift by some pixels from the center of the image after pre-processing. So formula (32), derived for point target echoes, also applies to the raw data.
The raw data are split into two equal parts in azimuth. To preserve the timing positions of the sub-data, zero-padding is performed as shown in Figure 6. The zero-padded sub-data are then pre-processed. The time shift between the products of the pre-processing can be obtained using image registration. Finally, the DRR error can be determined by (32). Since the time shift is estimated from the image amplitude, the last phase multiplication in the pre-processing can be ignored.
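Numerically, the split/zero-pad/register procedure reduces to Equation (32). The toy sketch below (all quantities hypothetical) inverts the forward model of Equation (31):

```python
# Toy check of Eq. (32): recover delta from the registered shift between halves.
def drr_error(dt_s1, dt, k_omega_e):
    ratio = dt_s1 / dt
    delta = ratio / (1.0 - ratio)                  # relative DRR error, Eq. (32)
    dk_omega = delta / (1.0 + delta) * k_omega_e   # absolute DRR error, Eq. (32)
    return delta, dk_omega

# Forward model (Eq. (31)) with a hypothetical true error delta = 0.02
delta_true, alpha_r, t1, t2 = 0.02, 3.0, -1.0, 1.0
dt = (t2 - t1) / alpha_r                        # Eq. (30)
dt_s1 = delta_true / (1 + delta_true) * dt      # Eq. (31)

delta_est, dk = drr_error(dt_s1, dt, k_omega_e=5200.0 * (1 + delta_true))
print(round(delta_est, 9), round(dk, 6))   # 0.02 104.0
```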

3.3. Estimation of the CDC Error

Since the Doppler spectrum of multi-channel SAR data is ambiguous, the CDC estimation methods for single-channel SAR data cannot be used directly. Reference [20] proposes the SCCC method for multi-channel SAR systems, but this fails when the spatial sampling of the multiple channels is severely non-uniform, which restricts its application. In fact, the CDC can be determined from the pre-processed signal. As noted in Section 2.3, the signal envelope after pre-processing is bell-shaped, similar to the azimuth antenna pattern. So the methods in [30], such as power balance and peak detection, can be applied to the pre-processed signal to estimate the CDC. In this section, these methods are extended to the estimation of the CDC error.
If there is a CDC error, $\Delta f_d$, the signal after the first phase multiplication with $h_1^i(t)$ will include $\Delta f_d$. Due to the transformation of the pre-processing, the Doppler shift introduced by a CDC error appears as a time delay in the signal after pre-processing:
$$s_{ce}(t;r,t_0)=\mathrm{rect}\!\left[\frac{t+\frac{\Delta f_d}{k_{\omega e}}-\frac{\delta}{1+\delta}\frac{t_0}{\alpha(r)}}{\frac{X_f(r)}{v}\frac{r_s}{r}\cdot\frac{1}{1+\delta}\left(1-\frac{\delta}{\alpha(r)}\frac{r}{r_s}\right)}\right]\mathrm{rect}\!\left[\frac{vt_0}{X_s}\right]\exp\!\left(j\pi\frac{k_{\omega e}f_r}{k_{\omega e}-f_r}\left(t-t_0\right)^2\right).\tag{33}$$
From (33), it can be seen that the time range $\{t\}$ of a point target is shifted by both the DRR error and the CDC error. So the CDC error can be integrated into the estimation of the DRR error. The time delay of an arbitrary point target after pre-processing with DRR error and CDC error is given by
$$t_{s2}=\frac{\delta}{1+\delta}\frac{t_0}{\alpha(r)}-\frac{\Delta f_d}{k_{\omega e}}.\tag{34}$$
For two point targets located at $(r,vt_1)$ and $(r,vt_2)$, the time delays after pre-processing are
$$t_{s2,1}=\frac{\delta}{1+\delta}\frac{t_1}{\alpha(r)}-\frac{\Delta f_d}{k_{\omega e}},\qquad t_{s2,2}=\frac{\delta}{1+\delta}\frac{t_2}{\alpha(r)}-\frac{\Delta f_d}{k_{\omega e}}.\tag{35}$$
It is then easy to obtain the DRR error and CDC error by solving Equation (35): the DRR error can be obtained from the difference of the two delays using (32), and $\Delta f_d$ then follows from (35). If we split the raw data equally as in Section 3.2, $t_1=-t_2$. Therefore, the CDC error can also be obtained as
$$\Delta f_d=-\frac{1}{2}k_{\omega e}\left(t_{s2,1}+t_{s2,2}\right).\tag{36}$$
The flowchart of the estimation of the DRR error and CDC error is shown in Figure 6. To obtain sufficient precision, the estimation procedure can be iterated until the termination condition is met:
$$\left|\Delta k_\omega^{(q)}\right|<\Delta k_{\omega,th},\qquad \left|\Delta f_d^{(q)}\right|<\Delta f_{d,th},\tag{37}$$
where $\Delta k_\omega^{(q)}$ and $\Delta f_d^{(q)}$ are the estimated DRR and CDC errors after the $q$-th iteration, respectively, and $\Delta k_{\omega,th}$ and $\Delta f_{d,th}$ denote their precision thresholds.
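The joint solution of Equation (35) can be sketched with hypothetical numbers: subtracting the two delays cancels $\Delta f_d$ and recovers $\delta$, while their sum gives Equation (36) when $t_1=-t_2$:

```python
# Toy joint recovery of DRR and CDC errors from the sub-aperture delays, Eqs. (35)-(36).
def solve_errors(ts1, ts2, t1, t2, alpha_r, k_omega_e):
    ratio = (ts2 - ts1) / ((t2 - t1) / alpha_r)  # difference isolates the DRR error
    delta = ratio / (1.0 - ratio)                # as in Eq. (32)
    df_d = -0.5 * k_omega_e * (ts1 + ts2)        # sum isolates the CDC error, Eq. (36)
    return delta, df_d

# Hypothetical true errors and geometry; t1 = -t2 models the two equal halves
delta, df_d, alpha_r, k_we = 0.02, 40.0, 3.0, 5304.0
t1, t2 = -1.0, 1.0
ts1, ts2 = (delta / (1 + delta) * t / alpha_r - df_d / k_we for t in (t1, t2))  # Eq. (35)

est = solve_errors(ts1, ts2, t1, t2, alpha_r, k_we)
print(round(est[0], 9), round(est[1], 6))   # 0.02 40.0
```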

3.4. Estimation for Phase Inconsistency Errors

Phase inconsistency error estimation is important for fine spectrum reconstruction and imaging in multi-channel SAR systems. The existing estimation methods for multi-channel strip-map SAR are mainly based on the MUSIC method [21]. These methods all rely on the frequency consistency of the signals from different azimuth positions. In multi-channel strip-map SAR, the signals from different targets overlap in the Doppler domain, as in Figure 7a. It can be seen that there are two ambiguous spectra in the grey parts, which can be used to construct the covariance matrix of the Doppler spectrum. The signal subspace and the noise subspace can then be extracted and used to calibrate the phase inconsistency errors. However, in multi-channel BS-SAR (Figure 7b), the Doppler spectra of targets at different azimuth positions do not overlap, because of the azimuth-varying Doppler centroid. Therefore, it is impossible to determine the ambiguity number in a given Doppler gate, and there is no appropriate spectrum for the MUSIC method to employ.
References [27,28] propose estimation methods for multi-channel strip-map SAR based on the spectrum energy distribution, namely AWLS and DSO. The principle of these methods is that the spectrum energy within the effective bandwidth leaks out due to the phase inconsistency errors, so merit indicators on the spectrum can be used to calibrate these errors. However, only part of the spectrum is considered in both methods, which restricts their robustness to CDC error, and their feasibility for multi-channel BS-SAR was not investigated. In this section, the principle of spectrum optimization is extended to multi-channel BS-SAR based on the expression for the signal after pre-processing. The smaller the phase inconsistency errors, the smoother and more concentrated the spectrum. Image entropy is then adopted to assess how well the spectrum is concentrated after pre-processing. The advantage of using image entropy is that it simultaneously evaluates the spectrum inside and outside the effective bandwidth, which achieves better performance than AWLS and DSO. Another benefit of using the whole spectrum is that the CDC error can be ignored, whereas the AWLS and DSO methods require knowledge of the CDC.
Using m and l to represent the azimuth and range pixel indices, respectively, Equation (12) can be rewritten in discrete form with phase inconsistency errors:
\tilde{S}_a(m,l) = \Gamma\, H(m)\, U_a(m,l),
where \tilde{S}_a(m,l) is the discrete form of S_a(f,r) with phase inconsistency errors, given as
\tilde{S}_a(m,l) = \left[\tilde{S}_{a,0}(m,l),\ \tilde{S}_{a,1}(m,l),\ \ldots,\ \tilde{S}_{a,N-1}(m,l)\right]^{T}
and Γ is the phase inconsistency error matrix:
\Gamma = \mathrm{diag}(\mathbf{g}) = \mathrm{diag}\left(e^{j\xi_0},\ e^{j\xi_1},\ \ldots,\ e^{j\xi_{N-1}}\right).
The unambiguous spectrum U_a(m,l) can then be estimated by
U_a(m,l) = P(m)\,\Gamma^{-1}\,\tilde{S}_a(m,l),
where P(m) is the discrete form of P(f) in (15). Then U_a(m + n·f_prf; l) can be re-expressed as
U_a(m+n\cdot f_{prf};l) = \mathbf{g}^{H}\,\mathrm{diag}\left(P_n(m)\right)\tilde{S}_a(m,l).
The image entropy involving phase inconsistency error is then given as
\Phi = -\sum_{l}\sum_{m}\sum_{n=1}^{N}\frac{U_a(m+n\cdot f_{prf};l)\,U_a^{H}(m+n\cdot f_{prf};l)}{E_z}\,\ln\frac{U_a(m+n\cdot f_{prf};l)\,U_a^{H}(m+n\cdot f_{prf};l)}{E_z}
= -\sum_{l}\sum_{m}\sum_{n=1}^{N}\frac{\mathbf{g}^{H}\Omega\,\mathbf{g}}{E_z}\,\ln\frac{\mathbf{g}^{H}\Omega\,\mathbf{g}}{E_z},
where the superscript H denotes the conjugate transpose and
\Omega = \mathrm{diag}\left(P_n(m)\right)\tilde{S}_a(m,l)\,\tilde{S}_a^{H}(m,l)\,\mathrm{diag}\left(P_n(m)\right)^{*},
E_z = \sum_{l,m,n}\mathbf{g}^{H}\Omega\,\mathbf{g}.
The optimization can be expressed as
\hat{\mathbf{g}} = \arg\min_{\mathbf{g}}\left\{-\sum_{l}\sum_{m}\sum_{n=1}^{N}\frac{\mathbf{g}^{H}\Omega\,\mathbf{g}}{E_z}\,\ln\frac{\mathbf{g}^{H}\Omega\,\mathbf{g}}{E_z}\right\}.
To solve the optimization problem, a cyclic coordinate iteration is used, in which the objective function is minimized over one channel error at a time while all the other channels are held fixed. In each iteration, standard optimization algorithms such as gradient descent or Newton-type methods can be used. References [29,35] give a fast-converging algorithm that can also be adopted here: a closed-form solution for each iteration is obtained by constructing a surrogate function, which yields a high convergence rate. The termination threshold for the iterations is set as
\Delta\Phi = \left|\Phi^{(q+1)} - \Phi^{(q)}\right| < \Delta\Phi_{th},
where \Phi^{(q)} is the image entropy after the q-th iteration and \Delta\Phi_{th} denotes the precision threshold. The smaller \Delta\Phi_{th}, the higher the precision and the greater the computational load.
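As an illustration of the cyclic coordinate iteration and the entropy termination test, the following is a minimal Python sketch (not the authors' implementation): it assumes the matrices Ω have been precomputed and stacked into one array, pins channel 0 as the phase reference, and replaces the closed-form surrogate update of [29,35] with a simple per-channel grid search.

```python
import numpy as np

def entropy(g, omegas):
    """Image entropy of the reconstructed spectrum for a channel-phase vector g.

    omegas: complex array of shape (K, N, N), one Omega matrix per
    (l, m, n) triple; g: complex array of shape (N,).
    """
    p = np.einsum('i,kij,j->k', g.conj(), omegas, g).real  # g^H Omega g per bin
    p = np.maximum(p, 1e-30)      # guard against log(0)
    p = p / p.sum()               # normalise by E_z
    return -np.sum(p * np.log(p))

def estimate_phases(omegas, n_iter=20, grid=64, tol=1e-6):
    """Cyclic coordinate descent: minimise the entropy over one channel
    phase at a time while holding the other channels fixed."""
    N = omegas.shape[1]
    xi = np.zeros(N)              # channel 0 is the phase reference
    phi_old = entropy(np.exp(1j * xi), omegas)
    for _ in range(n_iter):
        for ch in range(1, N):
            cand = np.linspace(-np.pi, np.pi, grid, endpoint=False)
            vals = []
            for c in cand:
                trial = xi.copy()
                trial[ch] = c
                vals.append(entropy(np.exp(1j * trial), omegas))
            xi[ch] = cand[int(np.argmin(vals))]
        phi_new = entropy(np.exp(1j * xi), omegas)
        if abs(phi_new - phi_old) < tol:   # termination threshold on the entropy change
            break
        phi_old = phi_new
    return xi
```

In practice the grid search would be replaced by the surrogate-function update, which gives a closed-form step per channel and converges faster.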
Since the estimation method is based on minimum entropy, it is referred to as the minimum entropy (ME) method. The flowchart of the estimation method is given in Figure 8. Because the errors usually interact with each other in practice, the estimations in Figure 6 and Figure 8 are performed alternately to achieve high estimation precision. Note that the proposed ME method differs from conventional minimum-entropy-based autofocusing (MEBA): the proposed method operates in the range-Doppler domain before imaging and targets the phase inconsistency errors between channels, whereas MEBA targets the azimuth phase error in the image domain.

3.5. Weighting for Estimation

In the previous discussion, it was seen that the shape of the pre-processed spectrum is important in estimating the DRR error, CDC error, and phase inconsistency errors. In practice, however, thermal noise may affect the pre-processed spectrum. Reference [12] demonstrates that the output noise power after reconstruction can be amplified in the case of highly non-uniform sampling. The uniformity factor F_u is defined as
F_u = \frac{f_{prf}}{f_{prf,uni}} = \frac{N\cdot d}{v/f_{prf}},
where d is the effective distance between adjacent channels. In [7], the signal-to-noise ratio (SNR) scaling factor, which quantifies the amplification of the output noise power due to filter reconstruction, is defined as
\Phi_{bf} = \frac{SNR_{in}/SNR_{out}}{\left.\left(SNR_{in}/SNR_{out}\right)\right|_{PRF_{uni}}} = \frac{1}{N}\left(\frac{1}{\sigma_1^{2}} + \frac{1}{\sigma_2^{2}} + \cdots + \frac{1}{\sigma_N^{2}}\right), (49)
where SNR_in and SNR_out are the SNR before and after filter reconstruction, respectively, and PRF_uni is the PRF corresponding to uniform spatial sampling. Here σ_1, σ_2, ⋯, σ_N are the singular values of the matrix H (see (11)). For the uniform case (F_u = 1), the SNR scaling factor is 1. When the system works with a non-uniform PRF, the correlation between adjacent row vectors of H increases, which results in small singular values, so the SNR scaling factor increases. More information on the SNR scaling factor can be found in [7].
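The relation between the uniformity factor and the SNR scaling factor can be sketched numerically; the snippet below is illustrative only (the function names are ours), and it assumes H is normalized so that uniform sampling yields unit singular values.

```python
import numpy as np

def uniformity_factor(f_prf, n_channels, d_eff, v):
    """F_u = f_prf / f_prf_uni, with f_prf_uni = v / (N * d_eff)."""
    return f_prf * n_channels * d_eff / v

def snr_scaling(H):
    """SNR scaling factor from the singular values sigma_i of H.

    With the unit-singular-value normalisation, the uniform case returns
    exactly 1; nearly coincident sampling makes some sigma_i small and
    inflates the factor.
    """
    s = np.linalg.svd(H, compute_uv=False)
    return float(np.mean(1.0 / s**2))
```

For a unitary H (the uniform-sampling case) the factor is 1; making two rows of H nearly identical, as happens for nearly coincident sampling, drives one singular value toward zero and the factor toward infinity.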
An example of an amplified noise spectrum is shown in Figure 9a. It can be seen that the noise spectrum components at the edge of the bandwidth dominate the SNR degradation. When the SNR is low, the noise power may exceed the signal power at the edge of the bandwidth, which degrades the performance of the proposed estimation methods. According to [31], the normalized noise spectrum can be obtained from the reconstruction matrix as
S_{noise}(f+k\cdot f_{prf}) = \left\|P_k(f)\right\|_2^{2}. (50)
Then a natural weighting function can be formulated to suppress the noise spectrum components as
F_1(f) = \frac{1}{S_{noise}(f)}. (51)
Evidently, the noise spectrum can be flattened to a uniform floor by using the weighting filter in Equation (51). Since the estimation of the DRR and CDC errors is based on the maximum of the spectrum, the weighted spectrum becomes more concentrated (see Figure 9b) and the effect of noise is significantly reduced. For the phase inconsistency errors, however, the noise spectrum at the edge of the bandwidth still reduces the estimation performance and may even cause the iteration to converge to a wrong value. To tackle this problem, we modify Equation (51) as
F_2(f) = \frac{win(f)}{S_{noise}(f)}, (52)
where win(f) is the Hanning window, Blackman window, or another commonly used window function. A criterion for choosing the window is that the edge sections of the spectrum must not be weighted to 0, because all the spectrum components should be included when calculating the signal entropy. In this paper, the Hamming window is chosen. The spectrum becomes smoother after weighting by F_2(f), as shown in Figure 9c, and the noise is suppressed further, especially in the edge segments of the spectrum. With the weighting function in (52), the estimation method for the phase inconsistency errors achieves better performance.
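The two weighting functions can be sketched as follows; this is our illustrative code, with a hypothetical array convention in which P holds the reconstruction-filter coefficient for sub-band k, Doppler bin f, and channel i.

```python
import numpy as np

def noise_spectrum(P):
    """Normalised noise spectrum S_noise = ||P_k(f)||_2^2.

    P: complex array of shape (K, F, N); returns shape (K, F).
    """
    return np.sum(np.abs(P)**2, axis=-1)

def weight_f1(P):
    """F1 = 1 / S_noise: flattens the amplified noise floor."""
    return 1.0 / noise_spectrum(P)

def weight_f2(P):
    """F2 = win / S_noise: additionally tapers the band edges.

    A Hamming window is used here because it never reaches zero, so every
    spectral component still contributes to the entropy calculation.
    """
    s = noise_spectrum(P)
    k, f = s.shape
    win = np.hamming(k * f).reshape(k, f)  # window over the stitched full band
    return win / s
```

Multiplying the spectrum by F1 makes the noise floor uniform; F2 additionally suppresses the edge segments, where the reconstruction amplifies the noise the most.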

4. Experiments and Discussion

In this section, simulated multi-channel BS-SAR data are used to assess the performance of the proposed method to estimate the DRR error, CDC errors, and the phase inconsistency errors. For convenience, the phase inconsistency errors are called PI errors for short in Table 2 and Table 3.

4.1. Point Targets Simulation Experiments

The parameters in Table 1 are used in the simulation experiment. The DRR error, CDC error, and phase inconsistency errors are shown in Table 2. The effects of these errors are first simulated, with the SNR set to 20 dB. The Doppler spectra of multi-channel TOPSAR after pre-processing with the different kinds of errors are shown in Figure 10; the Doppler spectra of multi-channel sliding spotlight SAR are similar and omitted for brevity. The estimation results for both modes are shown in Table 2, and the imaging results for both modes are presented in Table 3.
From Figure 10, it can be seen that the azimuth spectrum with DRR error is slightly widened, since the energies from targets located at different azimuth positions are slightly separated due to the 'symmetric shift' effect. As a result, some energy leaks out of the azimuth bandwidth and the amplitude near zero frequency is lower than that without error. The azimuth spectrum with CDC error is shifted because the constant Doppler error leads to a residual shift in the Doppler domain. The effect of the phase inconsistency errors is more serious: the amplitudes of the different sub-bands U_a(m + n·f_prf; l) are increased or decreased and the spectrum is discontinuous at the boundaries of adjacent sub-bands. The spectrum power outside the 3 dB bandwidth also clearly increases.
These impacts are more obvious after antenna pattern correction (Figure 10b). The Doppler spectrum is deformed by the DRR error and tilted by the CDC error. Please note that the shape of the overall spectrum with DRR error is not the same for arbitrary targets, because the spectra of point targets located at different positions with DRR error do not lie in the same spectral interval. Since the spectrum is reconstructed segment by segment (see U_a(f + k·f_prf; r) in (14)), the phase inconsistency errors disturb the weights of S_a^k(f,r) in each segment. Therefore, the spectrum with phase inconsistency errors becomes discontinuous and unsteady. The effects of these errors on the Doppler spectrum in multi-channel sliding spotlight SAR are similar and are omitted here.
The imaging results, azimuth profiles, and quality indicators of the point targets located at the scene edge are shown in Figure 11 and Table 3. It can be seen that the azimuth resolution with DRR error degrades seriously because part of the spectrum falls outside the effective bandwidth. The amplitude of the target also decreases due to imperfect azimuth antenna pattern correction and spectrum loss. From Figure 11f, the first null points of the azimuth profile are at about −25 dB, which cannot meet the quality required in practical applications (−40 dB). The CDC error seriously affects the image (Figure 11g) because the range cell migration is not corrected completely. In addition, both the peak sidelobe ratio (PSLR) and the integrated sidelobe ratio (ISLR) deteriorate with phase inconsistency errors, and the amplitude decreases as well. Overall, the effects of these errors on multi-channel sliding spotlight SAR are smaller than on multi-channel TOPSAR: the azimuth swath in multi-channel sliding spotlight SAR is smaller, which makes the effect of the DRR error smaller, and the CDC error also has much less impact. However, the intensity loss is much more serious in multi-channel sliding spotlight SAR than in multi-channel TOPSAR.

4.2. Performance Analysis of Estimation Methods

To test the performance of the proposed estimation methods, Monte Carlo experiments with 100 trials are carried out. In each trial, the DRR error is set to 3%, the CDC error is set to 100 Hz, and the phase inconsistency errors are drawn at random from ±30°. As illustrated in Section 3.5, the SNR after pre-processing in a multi-channel BS-SAR system depends on the uniformity of the spatial sampling. Therefore, we analyze the performance of the estimation methods for the DRR error, CDC error, and phase inconsistency errors against SNR and F_u. Since the results for multi-channel sliding spotlight SAR are similar to those for multi-channel TOPSAR, the performance is analyzed for multi-channel TOPSAR for convenience. For the phase inconsistency errors, the root-mean-square error (RMSE) is defined as:
\sigma = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(\hat{\zeta}_i - \zeta_i\right)^{2}}, (53)
where \hat{\zeta}_i is the estimate of \zeta_i.
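This metric can be transcribed directly; the helper below is ours, and it wraps the phase differences so that, e.g., estimates near ±π are compared correctly.

```python
import numpy as np

def phase_rmse(zeta_hat, zeta):
    """RMSE between estimated and true channel phase errors in radians,
    using the 1/(N - 1) normalisation of the definition above."""
    d = np.angle(np.exp(1j * (np.asarray(zeta_hat) - np.asarray(zeta))))
    return float(np.sqrt(np.sum(d**2) / (d.size - 1)))
```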
The performance of the DRR error estimation method against SNR and F_u is shown in Figure 12. The estimation errors for the DRR decrease as the SNR increases for both values of F_u used, but the estimation error for F_u = 0.9 is much larger than that for F_u = 1.05. This can be explained by the azimuth ambiguity signal rate: when F_u = 0.9, the operating PRF of the system is lower than in the uniform case, so more ambiguous power is aliased into the processing bandwidth. The fluctuation at F_u = 1.1 when SNR = 0 dB in Figure 12b can be explained in the same way, since the estimation precision of the DRR error is affected not only by the SNR but also by the azimuth ambiguity signal rate. When F_u increases from 1 to 1.2, the SNR scaling factor (see (49)) increases, which degrades SNR_out and results in worse estimation precision. On the other hand, a larger F_u means a larger PRF and a lower ambiguity signal rate according to sampling theory, and the reduced ambiguity alleviates the degradation of the estimation precision. In the case considered here, the scaling factor dominates the upward tendency of the estimation error from F_u = 1 to F_u = 1.1, after which the ambiguity signal rate becomes the dominant factor. It can also be seen that this competition between the scaling factor and the ambiguity signal rate depends on the SNR: when the SNR is large enough (e.g., 20 dB), the estimation error has no obvious correlation with the uniformity factor.
The performance of the CDC error estimation methods against SNR and F_u is shown in Figure 13. In this experiment, we compare the SCCC method with the symmetric shift (SS) method proposed in this paper. The SS method achieves slightly better precision than SCCC for all SNR values and is robust against the uniformity factor F_u, while the SCCC method is sensitive to F_u. This is because the SCCC method depends on the correlation between adjacent pulses: as F_u increases, the estimation error of the SCCC method tends to increase (the drop at F_u = 1.1 is an outlier and should be ignored). The results of the proposed SS method at SNR = 0 dB show no obvious correlation with F_u, which indicates that the SS method is insensitive to F_u: it is based on the shift between two images after pre-processing, which partly neutralizes the error caused by increasing F_u. Therefore, the SS method is somewhat more accurate in most cases.
The performance of the phase estimation method (ME method) against SNR and F_u, and its comparison with the AWLS method described in [27], is shown in Figure 14. It can be seen that the proposed ME method outperforms AWLS against both SNR and F_u. Generally, the precision of both estimates improves as F_u increases, because less azimuth ambiguous energy from outside the main bandwidth is folded into the effective bandwidth; the azimuth ambiguity signal rate dominates when the PRF is less than PRF_uni. The sharp rise in the estimation error of AWLS as F_u changes from 1.15 to 1.2 is an artefact caused by the covariance matrix becoming singular when coincident sampling [31,32,33] occurs, i.e., when the sampling of the first channel for one pulse coincides with the sampling of the last channel for the previous pulse; the proposed method is not affected by coincident sampling.

4.3. Distributed Targets Simulation Experiments

In this subsection, distributed-target simulation experiments are conducted. The image used for the simulation was acquired by Sentinel-1A on 24 March 2019. The scene is located in the coastal waters of the Great Barrier Reef, Australia, with 20 m resolution. The parameters are the same as for multi-channel TOPSAR in Table 1 except that the resolution is now 20 m. The scene is 10 km in azimuth and 50 km in range. The original scene and the imaging results with errors and after error compensation are shown in Figure 15. After adding the errors in Table 2, the image with errors (Figure 15b) is smeared and ghost targets appear. The ghost targets mainly result from the phase inconsistency errors, as they are one of the characteristic signatures of such errors. Besides the overlay of ghost targets, the CDC error is another factor that leads to the smearing. The effect of the DRR error is not easy to see directly in the image because it mainly causes azimuth resolution degradation and intensity loss. After error correction, the image (Figure 15c) is well focused, the same as the image simulated without errors (Figure 15a).

5. Conclusions

Multi-channel BS-SAR is a promising approach for high resolution and wide-swath observation, but is subject to errors that can lead to degradation of image quality. To overcome spectrum aliasing caused by beam steering and non-uniform sampling of the azimuth channels, pre-processing, combined with derotation and spectrum reconstruction, is used by most multi-channel BS-SAR imaging algorithms; this requires accurate system parameters and error calibration. However, the derotation rate error is not always accurately calculated, and the constant Doppler centroid and phase inconsistency errors cannot be calibrated by conventional methods. The interaction between different errors also makes the error estimation more difficult.
The derotation rate error leads to a time shift in the pre-processed signal and signal scaling. The shift varies with the azimuth positions of targets, which breaks the overlap of the pre-processed signal, so azimuth antenna pattern correction and signal weighting cannot be performed correctly. Furthermore, the spectra of targets located at the edge of the illuminated scene may shift out of the processing bandwidth and cause resolution degradation and radiometric loss. The CDC error can also lead to a constant spectral shift, which allows the DRR and CDC errors to be determined by detecting the time shifts between two halves of the data after performing pre-processing. The method for estimating the derotation rate and CDC errors is validated by simulation, and the results show it to be robust against SNR and uniformity factor.
Phase inconsistency errors between azimuth channels are inherent in multi-channel SAR systems, but most estimation methods for multi-channel strip-map SAR cannot be used in multi-channel BS-SAR because the spectra of different azimuth targets are separated in the Doppler domain, which prevents the use of traditional MUSIC methods. Methods based on the spectrum energy distribution are applicable to multi-channel BS-SAR but are sensitive to the CDC error because they depend on the spectrum only inside or outside the processing bandwidth. To overcome this, we use a minimum entropy estimation method based on the full spectrum, which is robust against the CDC error. With the help of a novel weighting strategy, the ME method performs well even at low SNR and for poor uniformity factors, as the estimation and correction simulations demonstrate.

Author Contributions

H.G. and W.Y. conceived and developed the methods and performed the experiments; J.C., S.Q. and C.L. supervised the research; and H.G. wrote the paper.

Funding

This research received no external funding.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (NSFC) under Grant No. 61861136008 and Grant No. 61701012, the Fundamental Research Funds for the Central Universities under Grant No. YWF-19-BJ-J-304, and the China Scholarship Council (CSC) under Grant No. 201806020013.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

This appendix shows that the time range {t} given by (24) lies inside the time range {t} given by (25). For clarity, we use t^(1) and t^(2) to denote the time ranges {t} in (24) and (25), respectively. The result follows if we can prove the following two items:
  • The lower boundary value t_l^(1) of t^(1) is larger than the lower boundary value t_l^(2) of t^(2);
  • The upper boundary value t_r^(1) of t^(1) is less than the upper boundary value t_r^(2) of t^(2).
The ranges of t^(1) and t^(2) at t_0 = −X_s/(2v) and t_0 = X_s/(2v) are derived as
t^{(1)}\Big|_{t_0=-\frac{X_s}{2v}} = \left[-\frac{\delta}{(1+\delta)\alpha_r}\frac{X_s}{2v} - \frac{1}{2}\frac{X_{\Delta\theta r}}{(1+\delta)v}\left(\frac{r_s}{r}-\frac{\delta}{\alpha_r}\right),\ -\frac{\delta}{(1+\delta)\alpha_r}\frac{X_s}{2v} + \frac{1}{2}\frac{X_{\Delta\theta r}}{(1+\delta)v}\left(\frac{r_s}{r}-\frac{\delta}{\alpha_r}\right)\right], (A1)
t^{(1)}\Big|_{t_0=\frac{X_s}{2v}} = \left[\frac{\delta}{(1+\delta)\alpha_r}\frac{X_s}{2v} - \frac{1}{2}\frac{X_{\Delta\theta r}}{(1+\delta)v}\left(\frac{r_s}{r}-\frac{\delta}{\alpha_r}\right),\ \frac{\delta}{(1+\delta)\alpha_r}\frac{X_s}{2v} + \frac{1}{2}\frac{X_{\Delta\theta r}}{(1+\delta)v}\left(\frac{r_s}{r}-\frac{\delta}{\alpha_r}\right)\right], (A2)
t^{(2)}\Big|_{t_0=-\frac{X_s}{2v}} = \left[-\frac{r_{se}}{r}\frac{X_s}{2v} - \frac{1}{2}T_w\left(\frac{r_{se}}{r}-1\right),\ -\frac{r_{se}}{r}\frac{X_s}{2v} + \frac{1}{2}T_w\left(\frac{r_{se}}{r}-1\right)\right], (A3)
t^{(2)}\Big|_{t_0=\frac{X_s}{2v}} = \left[\frac{r_{se}}{r}\frac{X_s}{2v} - \frac{1}{2}T_w\left(\frac{r_{se}}{r}-1\right),\ \frac{r_{se}}{r}\frac{X_s}{2v} + \frac{1}{2}T_w\left(\frac{r_{se}}{r}-1\right)\right]. (A4)
Then t^(1) and t^(2) at an arbitrary t_0 can also be expressed as
t^{(1)} = t^{(1)}\Big|_{t_0=-\frac{X_s}{2v}} + \frac{\delta}{(1+\delta)\alpha_r}\left(\frac{X_s}{2v}+t_0\right), (A5)
t^{(1)} = t^{(1)}\Big|_{t_0=\frac{X_s}{2v}} - \frac{\delta}{(1+\delta)\alpha_r}\left(\frac{X_s}{2v}-t_0\right), (A6)
t^{(2)} = t^{(2)}\Big|_{t_0=-\frac{X_s}{2v}} + \frac{r_{se}}{r}\left(\frac{X_s}{2v}+t_0\right), (A7)
t^{(2)} = t^{(2)}\Big|_{t_0=\frac{X_s}{2v}} - \frac{r_{se}}{r}\left(\frac{X_s}{2v}-t_0\right). (A8)
First consider the multi-channel sliding spotlight mode, in which r_s > r > 0. Substituting (A6) and (A8) into t_l^(1) − t_l^(2) gives
t_l^{(1)} - t_l^{(2)} = \left[t_l^{(1)}\Big|_{t_0=\frac{X_s}{2v}} - \frac{\delta}{(1+\delta)\alpha_r}\left(\frac{X_s}{2v}-t_0\right)\right] - \left[t_l^{(2)}\Big|_{t_0=\frac{X_s}{2v}} - \frac{r_{se}}{r}\left(\frac{X_s}{2v}-t_0\right)\right]
= \Delta t_l\Big|_{t_0=\frac{X_s}{2v}} + \left(\frac{r_{se}}{r} - \frac{\delta}{(1+\delta)\alpha_r}\right)\left(\frac{X_s}{2v}-t_0\right), (A9)
where
\Delta t_l\Big|_{t_0=\frac{X_s}{2v}} = t_l^{(1)}\Big|_{t_0=\frac{X_s}{2v}} - t_l^{(2)}\Big|_{t_0=\frac{X_s}{2v}}
= \frac{\delta}{(1+\delta)\alpha_r}\frac{X_s}{2v} - \frac{1}{2}\frac{X_{\Delta\theta r}}{(1+\delta)v}\left(\frac{r_s}{r}-\frac{\delta}{\alpha_r}\right) - \frac{r_{se}}{r}\frac{X_s}{2v} + \frac{1}{2}T_w\left(\frac{r_{se}}{r}-1\right)
= \frac{\delta}{(1+\delta)\alpha_r}\frac{X_s+X_{\Delta\theta r}}{2v} - \frac{r_{se}}{r}\frac{X_s+X_{\Delta\theta r}}{2v} + \frac{1}{2}T_w\left(\frac{r_{se}}{r}-1\right)
= \frac{\delta}{1+\delta}\frac{T_w}{2} - \frac{r_{se}}{r}\frac{T_w\alpha_r}{2} + \frac{1}{2}T_w\left(\frac{r_{se}}{r}-1\right)
= \frac{\delta}{1+\delta}\frac{T_w}{2} + \left(1-\alpha_r\right)\frac{r_{se}}{r}\frac{T_w}{2} - \frac{1}{2}T_w
= \frac{\delta}{1+\delta}\frac{T_w}{2} + \frac{1}{1+\delta}\frac{T_w}{2} - \frac{1}{2}T_w = 0. (A10)
Notice that we have used (22) and the following equation
X_{\Delta\theta r} + X_s = v\,T_w\,\alpha_r. (A11)
Therefore,
t_l^{(1)} - t_l^{(2)} = \left(\frac{r_{se}}{r} - \frac{\delta}{(1+\delta)\alpha_r}\right)\left(\frac{X_s}{2v}-t_0\right) = \frac{1}{\alpha_r}\left(\alpha_r\frac{r_{se}}{r} - \frac{\delta}{1+\delta}\right)\left(\frac{X_s}{2v}-t_0\right)
= \frac{1}{\alpha_r}\left(\frac{r_s-r}{r_s}\,\frac{r_{se}}{r} - 1 + \frac{r_{se}}{r_s}\right)\left(\frac{X_s}{2v}-t_0\right) = \frac{1}{\alpha_r}\left(\frac{r_{se}}{r}-1\right)\left(\frac{X_s}{2v}-t_0\right) > 0. (A12)
Next, substituting (A5) and (A7) into Δt_r(t_0) gives
t_r^{(1)} - t_r^{(2)} = \left[t_r^{(1)}\Big|_{t_0=-\frac{X_s}{2v}} + \frac{\delta}{(1+\delta)\alpha_r}\left(\frac{X_s}{2v}+t_0\right)\right] - \left[t_r^{(2)}\Big|_{t_0=-\frac{X_s}{2v}} + \frac{r_{se}}{r}\left(\frac{X_s}{2v}+t_0\right)\right]
= \Delta t_r\Big|_{t_0=-\frac{X_s}{2v}} + \left(\frac{\delta}{(1+\delta)\alpha_r} - \frac{r_{se}}{r}\right)\left(\frac{X_s}{2v}+t_0\right), (A13)
where
\Delta t_r\Big|_{t_0=-\frac{X_s}{2v}} = t_r^{(1)}\Big|_{t_0=-\frac{X_s}{2v}} - t_r^{(2)}\Big|_{t_0=-\frac{X_s}{2v}}
= -\frac{\delta}{(1+\delta)\alpha_r}\frac{X_s}{2v} + \frac{1}{2}\frac{X_{\Delta\theta r}}{(1+\delta)v}\left(\frac{r_s}{r}-\frac{\delta}{\alpha_r}\right) + \frac{r_{se}}{r}\frac{X_s}{2v} - \frac{1}{2}T_w\left(\frac{r_{se}}{r}-1\right) = 0, (A14)
similar to (A10). Therefore,
t_r^{(1)} - t_r^{(2)} = \left(\frac{\delta}{(1+\delta)\alpha_r} - \frac{r_{se}}{r}\right)\left(\frac{X_s}{2v}+t_0\right) = \frac{1}{\alpha_r}\left(1-\frac{r_{se}}{r}\right)\left(\frac{X_s}{2v}+t_0\right) < 0. (A15)
Now consider the multi-channel TOPS mode, where r_s < 0. Substituting (A5) and (A7) into Δt_l(t_0) gives
t_l^{(1)} - t_l^{(2)} = \left[t_l^{(1)}\Big|_{t_0=-\frac{X_s}{2v}} + \frac{\delta}{(1+\delta)\alpha_r}\left(\frac{X_s}{2v}+t_0\right)\right] - \left[t_l^{(2)}\Big|_{t_0=-\frac{X_s}{2v}} + \frac{r_{se}}{r}\left(\frac{X_s}{2v}+t_0\right)\right]
= \Delta t_l\Big|_{t_0=-\frac{X_s}{2v}} + \left(\frac{\delta}{(1+\delta)\alpha_r} - \frac{r_{se}}{r}\right)\left(\frac{X_s}{2v}+t_0\right), (A16)
where
\Delta t_l\Big|_{t_0=-\frac{X_s}{2v}} = t_l^{(1)}\Big|_{t_0=-\frac{X_s}{2v}} - t_l^{(2)}\Big|_{t_0=-\frac{X_s}{2v}}
= -\frac{\delta}{(1+\delta)\alpha_r}\frac{X_s}{2v} + \frac{1}{2}\frac{X_{\Delta\theta r}}{(1+\delta)v}\left(\frac{r_s}{r}-\frac{\delta}{\alpha_r}\right) + \frac{r_{se}}{r}\frac{X_s}{2v} - \frac{1}{2}T_w\left(\frac{r_{se}}{r}-1\right)
= -\frac{\delta}{(1+\delta)\alpha_r}\frac{X_s+X_{\Delta\theta r}}{2v} + \frac{r_{se}}{r}\frac{X_s+X_{\Delta\theta r}}{2v} - \frac{1}{2}T_w\left(\frac{r_{se}}{r}-1\right)
= -\frac{\delta}{1+\delta}\frac{T_w}{2} + \frac{r_{se}}{r}\frac{T_w\alpha_r}{2} - \frac{1}{2}T_w\left(\frac{r_{se}}{r}-1\right)
= -\frac{\delta}{1+\delta}\frac{T_w}{2} - \left(1-\alpha_r\right)\frac{r_{se}}{r}\frac{T_w}{2} + \frac{1}{2}T_w
= -\frac{\delta}{1+\delta}\frac{T_w}{2} - \frac{1}{1+\delta}\frac{T_w}{2} + \frac{1}{2}T_w = 0. (A17)
Notice that we have used (22) and (A11) here. Therefore,
t_l^{(1)} - t_l^{(2)} = \left(\frac{\delta}{(1+\delta)\alpha_r} - \frac{r_{se}}{r}\right)\left(\frac{X_s}{2v}+t_0\right) = \frac{1}{\alpha_r}\left(1-\frac{r_{se}}{r}\right)\left(\frac{X_s}{2v}+t_0\right) > 0. (A18)
Next, substituting (A6) and (A8) into Δt_r(t_0) gives
t_r^{(1)} - t_r^{(2)} = \left[t_r^{(1)}\Big|_{t_0=\frac{X_s}{2v}} - \frac{\delta}{(1+\delta)\alpha_r}\left(\frac{X_s}{2v}-t_0\right)\right] - \left[t_r^{(2)}\Big|_{t_0=\frac{X_s}{2v}} - \frac{r_{se}}{r}\left(\frac{X_s}{2v}-t_0\right)\right]
= \Delta t_r\Big|_{t_0=\frac{X_s}{2v}} + \left(\frac{r_{se}}{r} - \frac{\delta}{(1+\delta)\alpha_r}\right)\left(\frac{X_s}{2v}-t_0\right), (A19)
where
\Delta t_r\Big|_{t_0=\frac{X_s}{2v}} = t_r^{(1)}\Big|_{t_0=\frac{X_s}{2v}} - t_r^{(2)}\Big|_{t_0=\frac{X_s}{2v}}
= \frac{\delta}{(1+\delta)\alpha_r}\frac{X_s}{2v} - \frac{1}{2}\frac{X_{\Delta\theta r}}{(1+\delta)v}\left(\frac{r_s}{r}-\frac{\delta}{\alpha_r}\right) - \frac{r_{se}}{r}\frac{X_s}{2v} + \frac{1}{2}T_w\left(\frac{r_{se}}{r}-1\right) = 0. (A20)
Therefore,
t_r^{(1)} - t_r^{(2)} = \left(\frac{r_{se}}{r} - \frac{\delta}{(1+\delta)\alpha_r}\right)\left(\frac{X_s}{2v}-t_0\right) = \frac{1}{\alpha_r}\left(\frac{r_{se}}{r}-1\right)\left(\frac{X_s}{2v}-t_0\right) < 0. (A21)
Finally, combining (A12) for multi-channel sliding spotlight SAR and (A18) for multi-channel TOPSAR gives
t_l^{(1)} - t_l^{(2)} = \begin{cases}\dfrac{1}{\alpha_r}\left(\dfrac{r_{se}}{r}-1\right)\left(\dfrac{X_s}{2v}-t_0\right), & \text{Sliding Spotlight}\\ \dfrac{1}{\alpha_r}\left(1-\dfrac{r_{se}}{r}\right)\left(\dfrac{X_s}{2v}+t_0\right), & \text{TOPS}\end{cases} (A22)
Combining (A15) for multi-channel sliding spotlight SAR and (A21) for multi-channel TOPSAR gives
t_r^{(1)} - t_r^{(2)} = \begin{cases}\dfrac{1}{\alpha_r}\left(1-\dfrac{r_{se}}{r}\right)\left(\dfrac{X_s}{2v}+t_0\right), & \text{Sliding Spotlight}\\ \dfrac{1}{\alpha_r}\left(\dfrac{r_{se}}{r}-1\right)\left(\dfrac{X_s}{2v}-t_0\right), & \text{TOPS}\end{cases} (A23)
Since α_r is positive, r_se > r > 0 in multi-channel sliding spotlight SAR, r_se < 0 < r in multi-channel TOPSAR, and −X_s/(2v) ≤ t_0 ≤ X_s/(2v), it follows that t_l^(1) − t_l^(2) > 0 and t_r^(1) − t_r^(2) < 0.

Appendix B

This appendix proves that the number of samples of the illuminated scene with DRR error is larger than that without DRR error.
The duration of { t } for the illuminated scene is given by
\Delta t_{\max} = \frac{X_{fr}}{(1+\delta)v}\left(\frac{r_s}{r}-\frac{\delta}{Y_r}\right) + \frac{\delta}{(1+\delta)\alpha_r}\frac{X_s}{v}. (A24)
The ratio by which the number of samples of the illuminated scene increases is
\frac{N_{we}}{N_w} = 1 + \delta\,\mathrm{sign}(\delta) + \frac{1}{\alpha_r}\,\frac{X_s}{X_{fr}}\,\frac{r}{r_s}\,\mathrm{sign}(\delta), (A25)
where N_we and N_w are the numbers of samples of the illuminated scene with and without the k_ω error, respectively. Because the number of time samples increases in multi-channel TOPSAR when δ > 0 and in multi-channel sliding spotlight SAR when δ < 0, the number of samples of the illuminated scene must increase in these two cases. For the other two cases (multi-channel TOPSAR when δ < 0 and multi-channel sliding spotlight SAR when δ > 0), note that X_s/X_fr is always larger than 1, so
\frac{N_{we}}{N_w} = 1 + \delta\,\mathrm{sign}(\delta) + \frac{1}{\alpha_r}\,\frac{X_s}{X_{fr}}\,\frac{r}{r_s}\,\mathrm{sign}(\delta) > 1 + \delta\,\mathrm{sign}(\delta) + \frac{1}{\alpha_r}\,\frac{r}{r_s}\,\mathrm{sign}(\delta) = 1 + \delta\,\mathrm{sign}(\delta) + \frac{1-\alpha_r}{\alpha_r}\,\mathrm{sign}(\delta). (A26)
In these two remaining cases, sign(r_s) = sign(δ) (sign(r_s) = sign(δ) = 1 for multi-channel sliding spotlight SAR when δ > 0, and sign(r_s) = sign(δ) = −1 for multi-channel TOPSAR when δ < 0). Since r/r_s = 1 − α_r, both δ·sign(δ) and ((1 − α_r)/α_r)·sign(δ) in (A26) are positive. Therefore, N_we/N_w > 1, which means that the number of samples of the illuminated scene increases when there is an error in k_ω.

References

  1. Mittermayer, J.; Moreira, A.; Loffeld, O. Spotlight SAR data processing using the frequency scaling algorithm. IEEE Trans. Geosci. Remote Sens. 1999, 37, 2198–2214. [Google Scholar] [CrossRef]
  2. Prats, P.; Scheiber, R.; Mittermayer, J.; Meta, A.; Moreira, A. Processing of Sliding Spotlight and TOPS SAR Data Using Baseband Azimuth Scaling. IEEE Trans. Geosci. Remote Sens. 2010, 48, 770–780. [Google Scholar] [CrossRef]
  3. De Zan, F.; Monti Guarnieri, A. TOPSAR: Terrain Observation by Progressive Scans. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2352–2360. [Google Scholar] [CrossRef]
  4. Meta, A.; Mittermayer, J.; Prats, P.; Scheiber, R.; Steinbrecher, U. TOPS Imaging With TerraSAR-X: Mode Design and Performance Analysis. IEEE Trans. Geosci. Remote Sens. 2010, 48, 759–769. [Google Scholar] [CrossRef]
  5. Currie, A.; Brown, M.A. Wide-swath SAR. IEE PROC-F 1992, 139, 122–135. [Google Scholar] [CrossRef]
  6. Gebert, N.; Krieger, G.; Moreira, A. High resolution wide swath SAR imaging with digital beamforming—Performance analysis, optimization and system design. In Proceedings of the EUSAR 2006: 6th European Conference on Synthetic Aperture Radar, Dresden, Germany, 16–18 May 2006; pp. 341–344. [Google Scholar]
  7. Gebert, N.; Krieger, G.; Moreira, A. Digital Beamforming on Receive: Techniques and Optimization Strategies for High-Resolution Wide-Swath SAR Imaging. IEEE Trans. Aerosp. Electron. Syst. 2009, 45, 564–592. [Google Scholar] [CrossRef] [Green Version]
  8. Brule, L.; Delisle, D.; Baeggli, H.; Graham, J. RADARSAT-2 Program update. In Proceedings of the 2005 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Seoul, Korea, 29 July 2005; Volume 1, p. 3. [Google Scholar] [CrossRef]
  9. Kankaku, Y.; Suzuki, S.; Osawa, Y. ALOS-2 mission and development status. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Melbourne, VIC, Australia, 21–26 July 2013; pp. 2396–2399. [Google Scholar] [CrossRef]
  10. Janoth, J.; Gantert, S.; Schrage, T.; Kaptein, A. Terrasar next generation—Mission capabilities. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Melbourne, VIC, Australia, 21–26 July 2013; pp. 2297–2300. [Google Scholar] [CrossRef]
  11. Huber, S.; Villano, M.; Younis, M.; Krieger, G.; Moreira, A.; Grafmueller, B.; Wolters, R. Tandem-L: Design Concepts for a Next-Generation Spaceborne SAR System. In Proceedings of the EUSAR 2016: 11th European Conference on Synthetic Aperture Radar, Hamburg, Germany, 6–9 June 2016; pp. 1–5. [Google Scholar]
  12. Krieger, G.; Gebert, N.; Moreira, A. Unambiguous SAR signal reconstruction from nonuniform displaced phase center sampling. IEEE Geosci. Remote Sens. Lett. 2004, 1, 260–264. [Google Scholar] [CrossRef]
  13. Li, Z.; Wang, H.; Su, T.; Bao, Z. Generation of wide-swath and high-resolution SAR images from multichannel small spaceborne SAR systems. IEEE Geosci. Remote Sens. Lett. 2005, 2, 82–86. [Google Scholar] [CrossRef]
  14. Gebert, N.; Krieger, G.; Moreira, A. Multichannel Azimuth Processing in ScanSAR and TOPS Mode Operation. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2994–3008. [Google Scholar] [CrossRef] [Green Version]
  15. Xu, W.; Huang, P.; Wang, R.; Deng, Y. Processing of Multichannel Sliding Spotlight and TOPS Synthetic Aperture Radar Data. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4417–4429. [Google Scholar] [CrossRef]
  16. Sun, G.C.; Xing, M.; Xia, X.G.; Wu, Y.; Huang, P.; Wu, Y.; Bao, Z. Multichannel Full-Aperture Azimuth Processing for Beam Steering SAR. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4761–4778. [Google Scholar] [CrossRef]
  17. Kuang, H.; Chen, J.; Yang, W.; Liu, W. An improved imaging algorithm for spaceborne MAPs sliding spotlight SAR with high-resolution wide-swath capability. Chin. J. Aeronaut. 2015, 28, 1178–1188. [Google Scholar] [CrossRef] [Green Version]
  18. Wollstadt, S.; Prats-Iraola, P.; Geudtner, D. TOPS Imaging Mode: Data-based Estimation of Antenna Pointing and Azimuth Steered Antenna Patterns. In Proceedings of the EUSAR 2014: 10th European Conference on Synthetic Aperture Radar, Berlin, Germany, 3–5 June 2014; pp. 1–4. [Google Scholar]
  19. Ge, S.; Liu, A.; Mu, D. Effects of Doppler Centroid Error on Azimuth Ambiguity in Along-track Multi-channel SAR Systems. In Proceedings of the EUSAR 2016: 11th European Conference on Synthetic Aperture Radar, Hamburg, Germany, 6–9 June 2016; pp. 1–4. [Google Scholar]
  20. Liu, Y.; Li, Z.; Wang, Z.; Bao, Z. On the Baseband Doppler Centroid Estimation for Multichannel HRWS SAR Imaging. IEEE Geosci. Remote Sens. Lett. 2014, 11, 2050–2054. [Google Scholar] [CrossRef]
  21. Li, Z.; Bao, Z.; Wang, H.; Liao, G. Performance improvement for constellation SAR using signal processing techniques. IEEE Trans. Aerosp. Electron. Syst. 2006, 42, 436–452. [Google Scholar] [CrossRef]
  22. Liu, A.; Liao, G.; Ma, L.; Xu, Q. An Array Error Estimation Method for Constellation SAR Systems. IEEE Geosci. Remote Sens. Lett. 2010, 7, 731–735. [Google Scholar] [CrossRef]
  23. Yang, T.; Li, Z.; Liu, Y.; Bao, Z. Channel Error Estimation Methods for Multichannel SAR Systems in Azimuth. IEEE Geosci. Remote Sens. Lett. 2013, 10, 548–552. [Google Scholar] [CrossRef]
  24. Liu, A.; Liao, G.; Xu, Q.; Ma, L. An Improved Array-Error Estimation Method for Constellation SAR Systems. IEEE Geosci. Remote Sens. Lett. 2012, 9, 90–94. [Google Scholar] [CrossRef]
Figure 1. The imaging modes combining multi-channel technology and beam-steering technology. The three colors (green, red, and yellow) represent three different channels.
Figure 2. Multi-channel sliding spotlight SAR and multi-channel TOPSAR: (a) multi-channel sliding spotlight SAR; (b) multi-channel TOPSAR. The three colors (green, red, and yellow) represent three different channels.
Figure 3. Time-frequency domain (TFD) diagram of multi-channel BS-SAR: (a) multi-channel sliding spotlight; (b) multi-channel TOPSAR. Three points are represented in red, yellow, and green. The time and bandwidth ranges are shown at the bottom and left. A_t and A_f represent the signal amplitudes in the time and Doppler domains, respectively. The gradient color bars (blue, grey, and gold) in the background represent the total PRF that multi-channel BS-SAR can achieve with N channels (here N = 3). The purple area is the whole TFD of the multi-channel BS-SAR system, and the shaded parts on both sides are the TFDs of edge points without full illumination.
Figure 4. TFD plots of multi-channel BS-SAR after multiplying by the first phase function: (a) without DRR error; (b) with DRR error. Three points are represented in red, yellow, and green. The time and bandwidth ranges are shown at the bottom and left. A_t and A_f represent the signal amplitudes in the time and Doppler domains, respectively. The gradient color bars (blue, grey, and gold) in the background represent the total PRF with N channels (here N = 3). The purple area is the whole TFD, and the shaded parts on both sides are the TFDs of edge points without full illumination.
Figure 5. Demonstration of the influence of DRR error: (a) a scene with three target points located at different range and azimuth positions; (b) the symmetrical shift of the azimuth signal after pre-processing with DRR error; (c) the scaling effect of the azimuth signal after pre-processing with DRR error; (d) the result of pre-processing without DRR error.
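The shift-and-scaling behavior in Figure 5 can be reproduced numerically. The NumPy sketch below uses toy values (fs, ka, and t0 are illustrative assumptions, not the simulation parameters): a point target's azimuth chirp is derotated once with the exact rate and once with a 2% rate error. The exact rate collapses the chirp to a tone at -ka*t0, so the spectral peak encodes the target's azimuth position, while the rate error leaves a residual chirp that broadens (scales) the spectrum.

```python
import numpy as np

fs = 8192.0                        # azimuth sampling rate (toy value), Hz
t = np.arange(-0.5, 0.5, 1 / fs)   # slow time, s
ka = 2000.0                        # assumed azimuth chirp (derotation) rate, Hz/s
t0 = 0.1                           # target's azimuth time offset, s

# Azimuth chirp of a point target located at azimuth time t0
s = np.exp(1j * np.pi * ka * (t - t0) ** 2)

def derotate(kr):
    """Derotate with rate kr; return spectrum peak frequency and half-max width in bins."""
    g = s * np.exp(-1j * np.pi * kr * t ** 2)
    spec = np.abs(np.fft.fftshift(np.fft.fft(g)))
    freqs = np.fft.fftshift(np.fft.fftfreq(t.size, 1 / fs))
    width = int(np.sum(spec > 0.5 * spec.max()))
    return freqs[np.argmax(spec)], width

f_exact, w_exact = derotate(ka)        # no DRR error: tone at -ka * t0 = -200 Hz
f_err, w_err = derotate(1.02 * ka)     # 2% DRR error: residual chirp, wider spectrum
```

A residual shift of this peak is the time-shift effect the proposed DRR estimator exploits, and the rate error spreads the energy over roughly |delta_ka|*T hertz.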
Figure 6. Flowchart of the method to estimate DRR and CDC errors.
Figure 7. The Doppler frequency aliasing in multi-channel strip-map SAR (a) and multi-channel BS-SAR (b). The three colors represent three targets in azimuth. f and f_a denote the baseband frequency and the physical frequency, respectively.
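A toy one-dimensional example illustrates the aliasing sketched in Figure 7 (the tone frequency, PRF, and uniform two-channel interleaving are assumptions for illustration): a Doppler tone above PRF/2 folds to a wrong baseband frequency when a single channel samples at the PRF, but it is recovered once interleaved channels raise the effective sampling rate to N*PRF (here N = 2, with ideal channel spacing and no phase errors).

```python
import numpy as np

prf = 1000.0     # per-channel pulse repetition frequency (toy value), Hz
f_true = 700.0   # Doppler frequency above prf / 2, Hz
N = 512

def peak_freq(fs, n):
    """Peak of the spectrum of the tone sampled at rate fs with n samples."""
    t = np.arange(n) / fs
    spec = np.abs(np.fft.fft(np.exp(2j * np.pi * f_true * t)))
    return np.fft.fftfreq(n, 1 / fs)[np.argmax(spec)]

f_single = peak_freq(prf, N)          # one channel: aliases to about -300 Hz
f_multi = peak_freq(2 * prf, 2 * N)   # two interleaved channels: recovered near 700 Hz
```

Because 700 Hz sampled at 1000 Hz is indistinguishable from -300 Hz, the single-channel spectrum places the target at the wrong baseband frequency, which is exactly the ambiguity that multi-channel reconstruction removes.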
Figure 8. Flowchart of the method to estimate phase inconsistency errors.
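The minimum-entropy principle behind the flowchart in Figure 8 can be reduced to a toy sketch. This is not the paper's full algorithm, which must cope with the space-variant Doppler histories of multi-channel BS-SAR; it only shows why the spectral entropy is minimized when the inter-channel phase bias is compensated. Two channels uniformly interleave a tone, a phase bias on channel 2 leaks energy into a ghost line, and a grid search for the compensation phase that minimizes entropy recovers the bias (all signal values are assumptions).

```python
import numpy as np

N = 1024
phi_err = np.deg2rad(30.0)              # assumed phase bias of channel 2, rad
n = np.arange(N)
s = np.exp(2j * np.pi * 0.125 * n)      # ideal uniformly sampled azimuth signal (a tone)
c1 = s[0::2]                            # channel 1: even samples
c2 = s[1::2] * np.exp(1j * phi_err)     # channel 2: odd samples with the phase bias

def spectral_entropy(phi):
    """Entropy of the reconstructed spectrum after compensating channel 2 by phi."""
    x = np.empty(N, dtype=complex)
    x[0::2] = c1
    x[1::2] = c2 * np.exp(-1j * phi)    # compensate, then re-interleave
    p = np.abs(np.fft.fft(x)) ** 2
    p /= p.sum()
    return -np.sum(p * np.log(p + 1e-15))

grid = np.deg2rad(np.arange(-90.0, 90.5, 1.0))
phi_hat = grid[np.argmin([spectral_entropy(phi) for phi in grid])]
```

An uncompensated bias puts a fraction sin^2((phi - phi_err)/2) of the energy into a ghost line at a PRF/2 offset, so the entropy has a unique minimum at phi = phi_err; in practice the grid search is replaced by iterative optimization.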
Figure 9. Weighting of the spectrum for estimation: (a) the spectra of the signal, the noise, and the noisy signal after pre-processing without weighting; (b) the same spectra weighted by F_1(f); (c) the same spectra weighted by F_2(f).
Figure 10. Azimuth signals of multi-channel TOPSAR after pre-processing: (a) shows the amplitude of signals with DRR error (dotted green line), CDC error (dashed blue line), phase inconsistency errors (dash-dot yellow line) and without error (red line); (b) shows the amplitude of the signals after antenna correction.
Figure 11. Imaging results of multi-channel TOPSAR without errors (a), with DRR error (b), with CDC error (c), and with phase inconsistency errors (d); (e–h) are the corresponding azimuth profiles. Imaging results of multi-channel sliding spotlight SAR without errors (i), with DRR error (j), with CDC error (k), and with phase inconsistency errors (l); (m–p) are the corresponding azimuth profiles.
Figure 12. The performance of the DRR error estimation method versus SNR (a) and F_u (b).
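Performance curves such as those in Figure 12 are typically generated by Monte Carlo trials over noise realizations. The sketch below illustrates that experimental pattern with an assumed toy estimator (FFT-peak frequency estimation of a tone standing in for the derotated azimuth signal), not the paper's estimator: the RMSE of the estimate falls as the SNR rises.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, N = 1024.0, 1024
t = np.arange(N) / fs
f0 = 200.0                              # tone frequency standing in for -ka * t0, Hz

def rmse(snr_db, trials=100):
    """RMSE of the FFT-peak frequency estimate at a given per-sample SNR."""
    sigma = 10.0 ** (-snr_db / 20.0)    # noise std for a unit-amplitude signal
    freqs = np.fft.fftfreq(N, 1 / fs)
    errs = []
    for _ in range(trials):
        noise = sigma * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
        x = np.exp(2j * np.pi * f0 * t) + noise
        f_hat = freqs[np.argmax(np.abs(np.fft.fft(x)))]
        errs.append(f_hat - f0)
    return float(np.sqrt(np.mean(np.square(errs))))

rmse_low, rmse_high = rmse(-25.0), rmse(10.0)   # estimation degrades at low SNR
```

Repeating this over a grid of SNR values and plotting RMSE against SNR reproduces the general shape of such performance curves: at high SNR the error is limited by the frequency-bin quantization, while at low SNR outlier peaks dominate.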
Figure 13. The performance of the CDC error estimation method versus SNR (a) and F_u (b).
Figure 14. The performance of the phase inconsistency error estimation method versus SNR (a) and F_u (b).
Figure 15. Simulated results from distributed targets: (a) is the image without errors, (b) is the image with errors and (c) is after error correction.
Table 1. Simulation parameters.
| Image Parameters | Multi-Channel TOPS | Multi-Channel Sliding Spotlight |
| --- | --- | --- |
| Wavelength | 0.03 m | 0.03 m |
| Elevation angle | 30.0° | 30.0° |
| Orbit height | 630 km | 630 km |
| Eccentricity | 0.0011 | 0.0011 |
| Orbit inclination angle | 98° | 98° |
| Pulse width | 10 μs | 10 μs |
| PRF | 1700 Hz | 1700 Hz |
| Channel number | 6 | 6 |
| Signal bandwidth | 80 MHz | 500 MHz |
| Sampling rate | 96 MHz | 600 MHz |
| Swath (A × R) | 100 km × 100 km | 20 km × 20 km |
| Antenna length | 9 m | 9 m |
| Azimuth resolution | 5 m | 0.5 m |
| Aspect angle | 90° | 90° |
| Effective velocity | 7275.8 m/s | 7275.8 m/s |
| r_s | 202,942.8 m | 1,323,920.6 m |
Table 2. Experimental results for the CDC error, DRR error, and phase inconsistency (PI) errors.

Multi-channel TOPS:

| Errors | True Value | Estimated Value | Error |
| --- | --- | --- | --- |
| DRR error | 530.8 Hz/s | 531.15 Hz/s | 0.35 Hz/s |
| CDC error | 100 Hz | 100.24 Hz | 0.24 Hz |
| PI error (Channel 1) | 0 | 0 | 0 |
| PI error (Channel 2) | 30 | 29.89 | 0.11 |
| PI error (Channel 3) | −24 | −23.85 | 0.15 |
| PI error (Channel 4) | 24 | 23.92 | 0.08 |
| PI error (Channel 5) | −10 | −9.98 | 0.02 |
| PI error (Channel 6) | 5 | 4.89 | 0.11 |

Multi-channel sliding spotlight:

| Errors | True Value | Estimated Value | Error |
| --- | --- | --- | --- |
| DRR error | 70.6 Hz/s | 70.1 Hz/s | 0.5 Hz/s |
| CDC error | 100 Hz | 100.32 Hz | 0.32 Hz |
| PI error (Channel 1) | 0 | 0 | 0 |
| PI error (Channel 2) | 30 | 29.87 | 0.13 |
| PI error (Channel 3) | −24 | −24.05 | 0.05 |
| PI error (Channel 4) | 24 | 23.89 | 0.11 |
| PI error (Channel 5) | −10 | −9.97 | 0.03 |
| PI error (Channel 6) | 5 | 5.12 | 0.12 |
Table 3. Image quality indicators of the point targets located at the scene edge.
Multi-channel TOPS:

| Errors | Azimuth Resolution (m) | Azimuth PSLR (dB) | Azimuth ISLR (dB) | Range Resolution (m) | Range PSLR (dB) | Range ISLR (dB) | Normalized Amplitude |
| --- | --- | --- | --- | --- | --- | --- | --- |
| No error | 3.35 | −13.27 | −10.06 | 3.33 | −13.27 | −10.04 | 1.00 |
| DRR error | 3.46 | −13.24 | −10.67 | 3.33 | −13.26 | −10.04 | 0.98 |
| CDC error | 4.40 | −12.68 | −15.07 | 4.24 | −24.61 | −23.06 | 0.41 |
| PI errors | 3.35 | −11.26 | −8.49 | 3.33 | −13.28 | −10.04 | 0.96 |

Multi-channel sliding spotlight:

| Errors | Azimuth Resolution (m) | Azimuth PSLR (dB) | Azimuth ISLR (dB) | Range Resolution (m) | Range PSLR (dB) | Range ISLR (dB) | Normalized Amplitude |
| --- | --- | --- | --- | --- | --- | --- | --- |
| No error | 0.31 | −13.27 | −10.46 | 0.34 | −13.20 | −10.55 | 1.00 |
| DRR error | 0.32 | −14.07 | −11.51 | 0.34 | −13.20 | −10.58 | 0.97 |
| CDC error | 0.31 | −13.26 | −10.42 | 0.34 | −13.23 | −10.56 | 0.99 |
| PI errors | 0.31 | −11.97 | −9.00 | 0.34 | −13.28 | −10.56 | 0.33 |

Gao, H.; Chen, J.; Quegan, S.; Yang, W.; Li, C. Parameter Estimation and Error Calibration for Multi-Channel Beam-Steering SAR Systems. Remote Sens. 2019, 11, 1415. https://doi.org/10.3390/rs11121415