Article

Geometric Self-Calibration of YaoGan-13 Images Using Multiple Overlapping Images

1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
2 School of Information Engineering, Xiangtan University, Xiangtan 411000, China
3 School of Geomatics, Liaoning Technical University, Fuxin 123000, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(10), 2367; https://doi.org/10.3390/s19102367
Submission received: 4 April 2019 / Revised: 30 April 2019 / Accepted: 17 May 2019 / Published: 23 May 2019
(This article belongs to the Section Remote Sensors)

Abstract

Geometric calibration is an important means of improving the absolute positioning accuracy of space-borne synthetic aperture radar (SAR) imagery. The conventional calibration method is based on a calibration field, which is simple and convenient but requires a great deal of manpower and material resources to obtain ground control points. Although newer cross-calibration methods do not require ground control points, their calibration accuracy still depends on a periodically updated reference image. Accordingly, this study proposes a geometric self-calibration method based on the positioning consistency constraint of conjugate image points to provide rapid and accurate calibration of the YaoGan-13 satellite. The proposed method can accurately calibrate geometric parameters without requiring ground control points or high-precision reference images. To verify the absolute positioning accuracy obtained using the proposed self-calibration method, YaoGan-13 Stripmap images of multiple regions were collected and evaluated. The results indicate that high-accuracy absolute positioning can be achieved, with a plane accuracy of 3.83 m or better for Stripmap data without considering elevation error. Compared to the conventional calibration method using high-accuracy control data, the difference between the two methods is only about 2.53 m, less than the 3-m resolution of the images, verifying the effectiveness of the proposed self-calibration method.

1. Introduction

The Chinese YaoGan-13 (YG-13) satellite mission, launched in November 2015, is equipped with a high-resolution synthetic aperture radar (SAR) X-band sensor. Synthetic aperture radar image products can be acquired using ScanSAR, Stripmap, and Sliding Spotlight modes, with the last of these providing SAR images at a very high resolution of about 0.5 m. The launch of YG-13 provided China with the ability to acquire high-resolution SAR images globally [1,2]. However, despite the strong capability of YG-13 in acquiring high-resolution SAR images, most of the images obtained by the satellite have exhibited poor absolute positioning accuracy due to systematic timing offsets in the SAR system, including the time shift between the radar time and Global Positioning System (GPS) time (i.e., the azimuth or along-track offset) and the internal electronic delay of the SAR instrument itself (i.e., the range delay time) [3]. As a result, the application of YG-13 images in activities such as resource monitoring has been significantly restricted.
Using high-accuracy control data, a conventional geometric calibration method can eliminate systematic errors, such as those experienced by YG-13 (including the internal electronic delay of the instrument and systematic azimuth shifts), improving the geometric positioning accuracy of the images. The conventional geometric calibration method has been thoroughly studied by many researchers and fully validated using the ERS-1/2, ENVISAT-ASAR, ALOS-PALSAR, TerraSAR-X/TanDEM-X, Sentinel-1A/1B, YaoGan-13, GaoFen-3, and other high-resolution satellites [4,5,6,7,8,9,10,11,12,13,14]. However, the conventional calibration method requires satellites to acquire images of calibration fields prior to conducting geometric calibration, reducing its timeliness in practical applications. Additionally, the conventional calibration method typically uses a high-precision corner reflector to generate control data, which can be expensive.
In the optical remote sensing field, many scholars have researched methods for geometric self-calibration that do not rely on control data, achieving notable success [15,16]. In the field of SAR geometric calibration, researchers have also begun to study geometric calibration without field calibration control data. Deng et al. performed cross calibration without using corner reflectors or high-precision digital elevation models to improve the absolute positioning accuracy of images collected by the GaoFen-3 (GF-3) satellite [2]. However, this method does not completely eliminate dependence on control data, as it still requires high-precision reference images. To the authors’ knowledge, no studies of completely independent SAR geometric self-calibration have been reported to date.
In this paper, a novel self-calibration method is proposed to determine the systematic timing offsets in the SAR system, independent of ground control points (GCPs). This method uses at least three images containing overlapping areas and takes advantage of the spatial intersection residual between conjugate points in these images to detect the timing offsets. The proposed method is therefore free from the constraint of field control data present in traditional calibration methods. To demonstrate the accuracy of the proposed method, a series of experiments using Stripmap images collected by YG-13 is presented. The results show that the proposed method effectively eliminates the systematic errors due to the internal electronic delay of the instrument and the systematic azimuth shifts. After calibration, the plane absolute positioning accuracy of the YaoGan-13 Stripmap images was 3.83 m or better, only slightly larger than the 3-m resolution of the images, verifying the effectiveness of the proposed method.

2. Methodology

2.1. Fundamental Theory of the Proposed Method

Figure 1 illustrates the proposed method of SAR image geometric self-calibration. In Figure 1a, S1 and S2 correspond to the two SAR antenna phase centers when the ground surface at Point A is imaged twice. The slant range between S1 and A is R1 and the slant range between S2 and A is R2, so the intersection of R1 and R2 is Point A. If a slant range measurement error ΔR exists due to a geolocation parameter error, the slant range R1 becomes R1 + ΔR, the slant range R2 becomes R2 + ΔR, and the new spatial intersection is at Point B. In this way, the spatial intersection point is affected by the error of the geometric positioning parameters. However, another case could produce the same effect: because the true ground position of Point A is unknown, the changes in slant ranges R1 and R2 could instead be caused by a position error of the ground point. Therefore, when only two images are used, the cause of the changes in slant ranges R1 and R2 cannot be determined.
Accordingly, in Figure 1b, we add a third image, and the slant range between S3 and A is given by R3. However, due to the aforementioned errors, slant range R3 becomes R3 + ΔR. New spatial intersection points then exist between S3 and S1 at Point C, and between S3 and S2 at Point D. If the change in slant range is caused by a ground point error, then spatial intersection Points B, C, and D should all be the same. If, however, the change in slant range is caused by a geolocation parameter error, then the spatial intersection Points B, C, and D are likely to be different. This difference is called the spatial intersection residual. A minimum spatial intersection residual can thus be used as a constraint condition for solving the self-calibration equation.
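To make this constraint concrete, the following minimal Python sketch quantifies the spread of a set of pairwise intersection points such as B, C, and D in Figure 1b; the function name and the max-pairwise-distance metric are our own illustration, not notation from the paper.

```python
import numpy as np

def intersection_residual(points):
    """Spread of the pairwise spatial intersection points (e.g., B, C, D).

    If the slant-range changes stem only from a ground-point error, the
    points coincide and the residual is ~0; a geolocation-parameter error
    leaves a non-zero spread, which the self-calibration minimizes.
    """
    p = np.asarray(points, dtype=float)          # shape (n, 3)
    diffs = p[:, None, :] - p[None, :, :]        # all pairwise difference vectors
    return np.linalg.norm(diffs, axis=-1).max()  # maximum pairwise distance

# Hypothetical example: three nearly coincident intersection points (metres).
B, C, D = [100.0, 200.0, 50.0], [101.2, 199.1, 50.4], [99.5, 200.8, 49.7]
print(intersection_residual([B, C, D]))          # ~2.5 m spread
```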

2.2. Proposed Geometric Self-Calibration Method

The physical imaging process of SAR can be represented by the slant range equation and Doppler equation [17,18].
The slant range equation is given by:
R = \sqrt{(X_s - X)^2 + (Y_s - Y)^2 + (Z_s - Z)^2}, (1)
where R is the slant range between the sensor and the target, and (X_s, Y_s, Z_s) and (X, Y, Z) are the sensor and target position vectors, respectively.
The Doppler equation is given by:
f_D = -\frac{2}{\lambda R}\left[(X_s - X)X_v + (Y_s - Y)Y_v + (Z_s - Z)Z_v\right], (2)
where f_D is the Doppler center frequency of the SAR image, λ is the SAR wavelength, and (X_v, Y_v, Z_v) is the phase center velocity vector of the SAR antenna.
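For reference, Equations (1) and (2) translate directly into code. The following is a minimal Python sketch (the function and variable names are ours), assuming Cartesian sensor and target coordinates in metres, velocities in m/s, and the wavelength in metres.

```python
import numpy as np

def slant_range(sensor_pos, target_pos):
    """Slant range R between the sensor and the target, Equation (1)."""
    return float(np.linalg.norm(np.asarray(sensor_pos) - np.asarray(target_pos)))

def doppler_frequency(sensor_pos, sensor_vel, target_pos, wavelength):
    """Doppler center frequency f_D of the SAR image, Equation (2)."""
    los = np.asarray(sensor_pos) - np.asarray(target_pos)  # (Xs - X, Ys - Y, Zs - Z)
    R = np.linalg.norm(los)
    return float(-2.0 / (wavelength * R) * np.dot(los, np.asarray(sensor_vel)))
```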
Based on the slant range and Doppler equations, the geometric self-calibration model of a SAR image can be written as follows:
\begin{cases}
R = \sqrt{(X_s - X)^2 + (Y_s - Y)^2 + (Z_s - Z)^2} + R_s + R_{atmo} \\
f_D = -\dfrac{2}{\lambda (R - R_s - R_{atmo})}\left[(X_s - X)X_v + (Y_s - Y)Y_v + (Z_s - Z)Z_v\right]
\end{cases} (3)
where R_s is the slant range correction and R_{atmo} is the atmospheric path delay, calculated using an atmospheric delay correction model [19,20,21].
The error equations for Equation (3) are given by:
\begin{cases}
V_R = \dfrac{\partial R}{\partial X}\Delta X + \dfrac{\partial R}{\partial Y}\Delta Y + \dfrac{\partial R}{\partial Z}\Delta Z + \dfrac{\partial R}{\partial R_s}\Delta R_s + (R) - R \\
V_{f_D} = \dfrac{\partial f_D}{\partial X}\Delta X + \dfrac{\partial f_D}{\partial Y}\Delta Y + \dfrac{\partial f_D}{\partial Z}\Delta Z + \dfrac{\partial f_D}{\partial R_s}\Delta R_s + (f_D) - f_D
\end{cases} (4)
where (R) and (f_D) are the approximate values of the slant range and Doppler center frequency, respectively.
Equation (4) can be expressed as the following matrix equation:
V = AK - L, (5)
where
  • K = [\Delta X, \Delta Y, \Delta Z, \Delta R_s]^T and V = [V_R, V_{f_D}]^T,
  • L = [l_R, l_{f_D}]^T = [R - (R), f_D - (f_D)]^T, and
  • A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \end{bmatrix}.
The values of the partial derivatives in Equation (4) are:
  • a_{11} = \dfrac{X - X_s}{\sqrt{(X - X_s)^2 + (Y - Y_s)^2 + (Z - Z_s)^2}},
  • a_{12} = \dfrac{Y - Y_s}{\sqrt{(X - X_s)^2 + (Y - Y_s)^2 + (Z - Z_s)^2}},
  • a_{13} = \dfrac{Z - Z_s}{\sqrt{(X - X_s)^2 + (Y - Y_s)^2 + (Z - Z_s)^2}},
  • a_{14} = 1,
  • a_{21} = \dfrac{2 X_v}{\lambda (R - R_s - R_{atmo})},
  • a_{22} = \dfrac{2 Y_v}{\lambda (R - R_s - R_{atmo})},
  • a_{23} = \dfrac{2 Z_v}{\lambda (R - R_s - R_{atmo})}, and
  • a_{24} = -\dfrac{2\left[(X_s - X)X_v + (Y_s - Y)Y_v + (Z_s - Z)Z_v\right]}{\lambda (R - R_s - R_{atmo})^2},
where the initial value of R_s is 0 and the initial values of (X, Y, Z) are calculated using a least-squares spatial point intersection.
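To illustrate how these coefficients enter the adjustment, the sketch below assembles the two rows of A and the two entries of L contributed by one image observing one conjugate point. The names are ours: R_obs and f_D_obs stand for the observed slant range and Doppler centroid read from the image annotation, and the target coordinates and R_s are the current approximate values.

```python
import numpy as np

def design_row_pair(sensor_pos, sensor_vel, target_pos,
                    R_obs, f_D_obs, R_s, R_atmo, wavelength):
    """Two rows of A and two entries of L (Equation (4)) for one observation."""
    s = np.asarray(sensor_pos, dtype=float)
    v = np.asarray(sensor_vel, dtype=float)
    t = np.asarray(target_pos, dtype=float)
    los = s - t                               # (Xs - X, Ys - Y, Zs - Z)
    r_geom = np.linalg.norm(los)              # geometric range from current (X, Y, Z)
    r_corr = R_obs - R_s - R_atmo             # corrected observed slant range

    # Approximate values (R) and (f_D) from the current unknowns, Equation (3).
    R_approx = r_geom + R_s + R_atmo
    fD_approx = -2.0 / (wavelength * r_corr) * np.dot(los, v)

    # Range row a11..a14 and Doppler row a21..a24.
    a1 = np.array([(t[0] - s[0]) / r_geom,
                   (t[1] - s[1]) / r_geom,
                   (t[2] - s[2]) / r_geom,
                   1.0])
    a2 = np.array([2.0 * v[0] / (wavelength * r_corr),
                   2.0 * v[1] / (wavelength * r_corr),
                   2.0 * v[2] / (wavelength * r_corr),
                   -2.0 * np.dot(los, v) / (wavelength * r_corr ** 2)])

    A = np.vstack([a1, a2])
    L = np.array([R_obs - R_approx, f_D_obs - fD_approx])
    return A, L
```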
According to the least squares principle of indirect adjustment, the normal form of Equation (5) can be expressed as follows:
A^T A K = A^T L. (6)
The expression of the solution of Equation (6) can then be obtained by:
K = (A^T A)^{-1} A^T L. (7)
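Once the error-equation rows from all observations are stacked into A and L, Equations (6) and (7) amount to a standard normal-equation solve; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def solve_normal_equations(A, L):
    """Solve A^T A K = A^T L for K = [dX, dY, dZ, dRs]^T (Equations (6) and (7))."""
    return np.linalg.solve(A.T @ A, A.T @ L)
```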
The corrections [ΔX, ΔY, ΔZ] to the ground coordinates of the conjugate points and the slant range error ΔR_s can then be obtained using an iterative procedure to calculate [ΔX, ΔY, ΔZ, ΔR_s]^T, expressed by the following algorithm steps (a simplified sketch of the iterative procedure follows the list):
  1. Select the conjugate points. Measure the image plane coordinates of the conjugate points on each image;
  2. Obtain the geometric positioning parameters. Read the imaging time t, the Doppler center frequency f_D, and the slant range R of the target point from the auxiliary files, and compute the position vector (X_s, Y_s, Z_s) and velocity vector (X_v, Y_v, Z_v) of the satellite;
  3. Determine the initial values of the unknown parameters. The measurement errors of the slant range and the systematic azimuth shifts are typically small, so the initial values of the slant range correction and the systematic azimuth shift can be set to 0. The initial values of (X, Y, Z) are then calculated using a least-squares spatial point intersection [22];
  4. Calculate the approximate values of the slant range and the Doppler center frequency for each conjugate point. The approximate values of the unknown parameters are substituted into the slant range equation (Equation (1)) and the Doppler equation (Equation (2)) to calculate the approximate values of the slant range (R) and the Doppler center frequency (f_D), respectively, for each conjugate point;
  5. Calculate the coefficients and constant terms of the error equation (Equation (4)), point by point, to establish the error equations;
  6. Calculate the coefficient matrix A^T A and the constant term A^T L of the normal equation (Equation (6)) to establish the normal equation;
  7. Calculate the slant range and ground coordinate corrections of the conjugate points and add them to the corresponding approximate values to obtain new approximate ground coordinates of the conjugate points and a new slant range correction;
  8. Check convergence. Compare the ground coordinate corrections of the conjugate points and the correction of the slant range error with the prescribed error limits; the correction of the slant range error is usually evaluated against a limit of 0.1 m. When it is below this limit, the iteration ends and the procedure continues with Step 9; otherwise, Steps 4–8 are repeated with the newest approximations until the error limits are met;
  9. Calculate the systematic azimuth shift Δt_a. Compute the image plane coordinates of the conjugate points by using the inverse location algorithm with the new approximate ground coordinates of the conjugate points, then update the azimuth imaging time [23]. Recalculate the position vectors (X_s, Y_s, Z_s) and velocity vectors (X_v, Y_v, Z_v) of the satellite, set the slant range correction back to 0, and use the ground coordinates from the previous iteration as initial values. Repeat Steps 4–9 until the correction of the systematic azimuth shift is less than the limit;
  10. The accurate values of [ΔX, ΔY, ΔZ, ΔR_s]^T and Δt_a are obtained.
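The sketch below ties Steps 3–8 together for the slant-range part of the procedure; the azimuth-shift loop of Steps 9–10 additionally needs the mission-specific inverse location algorithm of [23] and is only indicated by a comment. It reuses design_row_pair from the earlier sketch, and it assumes that each conjugate point keeps its own ground-coordinate unknowns while the slant-range correction ΔR_s is common to all observations; the data layout and names are illustrative only.

```python
import numpy as np

def self_calibrate_range(observations, init_targets, wavelength,
                         tol=0.1, max_iter=20):
    """Iterative estimation of the slant-range correction (Steps 3-8).

    observations : list over conjugate points; each entry is a list of
                   per-image tuples (sensor_pos, sensor_vel, R_obs,
                   f_D_obs, R_atmo).
    init_targets : initial ground coordinates of the conjugate points,
                   e.g. from the least-squares spatial intersection (Step 3).
    """
    n_pts = len(observations)
    n_unk = 3 * n_pts + 1                      # per-point (X, Y, Z) + common dRs
    R_s = 0.0                                  # Step 3: initial slant-range correction
    targets = [np.asarray(t, dtype=float).copy() for t in init_targets]

    for _ in range(max_iter):
        A_rows, L_rows = [], []
        for i, (tgt, obs) in enumerate(zip(targets, observations)):
            for sensor_pos, sensor_vel, R_obs, f_D_obs, R_atmo in obs:
                # Steps 4-5: approximate values and error-equation coefficients.
                A_blk, L_blk = design_row_pair(sensor_pos, sensor_vel, tgt,
                                               R_obs, f_D_obs, R_s, R_atmo,
                                               wavelength)
                rows = np.zeros((2, n_unk))
                rows[:, 3 * i:3 * i + 3] = A_blk[:, :3]   # dX, dY, dZ of point i
                rows[:, -1] = A_blk[:, 3]                 # common dRs column
                A_rows.append(rows)
                L_rows.append(L_blk)

        # Steps 6-7: solve the normal equations and update the approximations.
        A = np.vstack(A_rows)
        L = np.hstack(L_rows)
        K = np.linalg.solve(A.T @ A, A.T @ L)
        for i in range(n_pts):
            targets[i] += K[3 * i:3 * i + 3]
        R_s += K[-1]

        # Step 8: convergence test on the slant-range correction (0.1-m limit).
        if abs(K[-1]) < tol:
            break

    # Steps 9-10 (not sketched): re-run the inverse location with the updated
    # ground coordinates to estimate the systematic azimuth shift dt_a.
    return R_s, targets
```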

3. Experiment and Analysis

3.1. Experimental Study Areas and Data Sources

In order to verify the accuracy of the self-calibration method proposed in this paper, Stripmap images from the YaoGan-13 satellite acquired between 18 December 2015 (2015-12-18) and 30 March 2016 (2016-03-30) were used as experimental data. The resolution of the Stripmap images was 3 m and the swath width was 10 km. The internal electronic delay of the instrument (slant range error) is related to the bandwidth and pulse width of the radar signal. Therefore, the experimental data were divided into two groups according to the differences in the bandwidth and pulse width: Calibration Group A, with a bandwidth of 200 MHz and a pulse width of 24.4 μs, summarized in Table 1, and Calibration Group B, with a bandwidth of 150 MHz and a pulse width of 24.4 μs, summarized in Table 2. As can be seen in Figure 2, the data in Calibration Group A and Calibration Group B cover overlapping areas, and the conjugate points were selected in these overlapping areas.

3.2. Results of Proposed Self-Calibration Method

According to the theory presented in Section 2.1, the proposed geometric self-calibration method requires at least three images; these necessary redundant observations can be obtained by adding calibration images to the data set. As shown in Table 3 (for Calibration Group A) and Table 4 (for Calibration Group B), the number of images was increased sequentially according to the image acquisition time, resulting in a total of eight image combinations in Calibration Group A and eight image combinations in Calibration Group B. Three pairs of well-distributed conjugate points were then manually acquired from the calibration images. Using these conjugate points, self-calibration was conducted using the proposed method. The slant range correction and systematic azimuth shifts obtained by self-calibration using each image combination in Calibration Groups A and B are also shown in Table 3 and Table 4, respectively, in which it can be seen that the geometric calibration parameters obtained using different calibration combinations are similar. The difference between the maximum and minimum value of the slant range correction was 1.83 m, while the difference between the maximum and minimum value of the systematic azimuth shifts was about 0.153 ms with an accompanying 1.16-m azimuth geolocation error, given a spacecraft velocity of 7600 m/s. These ranges demonstrate that the calibration results are stable.
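The 1.16-m figure follows directly from scaling the azimuth-shift spread by the stated spacecraft velocity:
\Delta s \approx v \,\Delta t_a = 7600\ \text{m/s} \times 0.153 \times 10^{-3}\ \text{s} \approx 1.16\ \text{m}.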

3.3. Validation of Self-Calibration Accuracy

In order to verify the accuracy of the proposed self-calibration method, Validation Groups A and B, consisting of Stripmap images collected over the Songshan, Taiyuan, Anping, Xianning, and Tianjin areas with bandwidth and pulse width parameters matching Calibration Groups A and B, respectively, were evaluated against control data, as listed in Table 5. The terrain of the Taiyuan, Tianjin, and Anping test sites is almost flat, while that of the Songshan and Xianning test sites is hilly. Several independent check points (ICPs) were manually extracted from the control data, and the distributions of these ICPs in the validation images are shown in Figure 3. The sources of the three types of control data used are shown in Figure 4. The difference between the predicted location of an ICP in the object space and its measured location is the absolute positioning error, expressed separately in the north, east, and plane dimensions. Here, the “Plane” error is numerically equal to the square root of the sum of the squares of the “East” and “North” errors.
The root-mean-square error (RMSE) of the absolute positioning accuracy (north, east, and plane) was then calculated for the images in Validation Group A (Table 6) and Validation Group B (Table 7) before and after calibration. In the tables, Plan 1 represents the absolute positioning accuracy without calibration, while Plans 2–9 represent the absolute positioning accuracy for the different self-calibration combinations defined in each group. The results of the statistical analysis show that without calibration, the absolute positioning accuracy of both Validation Group A and Validation Group B is very poor at 29.47 m and 22.76 m, respectively. After calibration, the absolute positioning accuracy clearly shows significant improvement: High-accuracy absolute positioning is achieved with a plane accuracy of 3.83 m or better for Validation Group A and 3.41 m or better for Validation Group B.
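As a consistency check of the plane values, the uncalibrated Plan 1 result for Validation Group A in Table 6 satisfies the relation given above:
\sqrt{6.93^2 + 28.64^2} = \sqrt{48.02 + 820.25} \approx 29.47\ \text{m}.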
To further illustrate the effectiveness of the proposed self-calibration method, corner reflectors were used as control points to conduct a conventional field calibration for Calibration Groups A and B. The absolute positioning accuracies of Validation Groups A and B using the reflectors were then compared with those determined by the self-calibration method, as shown in Table 8. The maximum positioning errors from among Plans 2–8 (shaded) in Table 6 and Table 7 were selected to represent the self-calibration results. Notably, the difference between the positioning accuracy provided by the proposed self-calibration method and the conventional field calibration is not large: For Validation Groups A and B, there was a difference of 2.53 m and 1.74 m, respectively, in plane positioning accuracy, both smaller than the 3-m resolution of the images.
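The quoted differences correspond to the gaps between the plane accuracies of the two methods in Table 8:
3.83\ \text{m} - 1.30\ \text{m} = 2.53\ \text{m} \ \text{(Group A)}, \qquad 3.41\ \text{m} - 1.67\ \text{m} = 1.74\ \text{m} \ \text{(Group B)}.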

4. Conclusions

It is generally known that the most critical aspect of improving the geolocation accuracy of satellite imagery is geometric calibration. Conventional field calibration and cross calibration methods cannot satisfy the demands for fast and accurate calibration. In this study, a novel self-calibration method based on the positioning consistency constraint of conjugate points was proposed to calibrate satellite geometric parameters without requiring ground control points or high-precision reference images. The proposed method uses at least three overlapping images and takes advantage of the spatial intersection residual between corresponding points in the images to calculate systematic errors (such as the internal electronic delay of the instrument and systematic azimuth shifts). YaoGan-13 Stripmap-mode images were collected as experimental data and analyzed using the proposed method. The results show that the plane absolute positioning accuracy after self-calibration is better than 3.83 m. The difference in accuracy compared to the conventional method was only about 2.53 m, which is less than the 3-m resolution of the images, verifying the effectiveness of the proposed self-calibration method.

Author Contributions

M.D. wrote the paper and conducted the experiments. G.Z. and C.C. guided the experiments and the structure of the paper. R.Z. checked the paper and provided suggestions.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 61771150, Grant No. 91538106, Grant No. 41501503, Grant No. 41601490, Grant No. 41501383), University-level scientific research projects of Xiangtan University (Grant No. 19QDZ11), Key research and development program of Ministry of science and technology (2016YFB0500801), China Postdoctoral Science Foundation (Grant No. 2015M582276), Hubei Provincial Natural Science Foundation of China (Grant No. 2015CFB330), Special Fund for High Resolution Images Surveying and Mapping Application System (Grant No. AH1601-10), Quality improvement of domestic satellite data and comprehensive demonstration of geological and mineral resources (Grant No. DD20160067).

Acknowledgments

The authors would like to thank the editors and the anonymous reviewers for their constructive suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Deng, M.; Zhang, G.; Zhao, R.; Zhang, Q.; Li, D.; Li, J. Assessment of the geolocation accuracy of YG-13A high-resolution SAR data. Remote Sens. Lett. 2018, 9, 101–110. [Google Scholar] [CrossRef]
  2. Deng, M.; Zhang, G.; Zhao, R.; Li, S.; Li, J. Improvement of Gaofen-3 absolute positioning accuracy based on cross-calibration. Sensors 2017, 17, 2903. [Google Scholar] [CrossRef] [PubMed]
  3. Schwerdt, M.; Brautigam, B.; Bachmann, M.; Doring, B.; Schrank, D.; Gonzalez, J.H. Final TerraSAR-X calibration results based on novel efficient methods. IEEE Trans. Geosci. Remote Sens. 2010, 48, 677–689. [Google Scholar] [CrossRef]
  4. Mohr, J.J.; Madsen, S.N. Geometric calibration of ERS satellite SAR images. IEEE Trans. Geosci. Remote Sens. 2001, 39, 842–850. [Google Scholar] [CrossRef]
  5. Small, D.; Schubert, A. “Guide to ASAR Geocoding.” University of Zurich. Available online: http://www.geo.uzh.ch/microsite/rsl-documents/research/publications/other-sci-communications/2008_RSL-ASAR-GC-AD-v101-0335607552/2008_RSL-ASAR-GC-AD-v101.pdf (accessed on 30 April 2008).
  6. Shimada, M.; Isoguchi, O.; Tadono, T.; Isono, K. PALSAR radiometric and geometric calibration. IEEE Trans. Geosci. Remote Sens. 2009, 47, 3915–3932. [Google Scholar] [CrossRef]
  7. Eineder, M.; Minet, C.; Steigenberger, P.; Cong, X.; Fritz, T. Imaging geodesy – Toward centimeter-level ranging accuracy with TerraSAR-X. IEEE Trans. Geosci. Remote Sens. 2011, 49, 661–671. [Google Scholar] [CrossRef]
  8. Schubert, A.; Small, D.; Miranda, N.; Geudtner, D.; Meier, E. Sentinel-1A product geolocation accuracy: Commissioning phase results. Remote Sens. 2015, 7, 9431–9449. [Google Scholar] [CrossRef]
  9. Schwerdt, M.; Schmidt, K.; Ramon, N.T.; Alfonzo, G.C.; Döring, B.J.; Zink, M.; Prats-Iraola, P. Independent verification of the Sentinel-1A system calibration. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 1097–1100. [Google Scholar] [CrossRef]
  10. Zhao, R.; Zhang, G.; Deng, M.; Yang, F.; Chen, Z.; Zheng, Y. Multimode hybrid geometric calibration of spaceborne SAR considering atmospheric propagation delay. Remote Sens. 2017, 9, 464. [Google Scholar] [CrossRef]
  11. Zhao, R.; Zhang, G.; Deng, M.; Xu, K.; Guo, F. Geometric calibration and accuracy verification of the GF-3 satellite. Sensors 2017, 17, 1977. [Google Scholar] [CrossRef] [PubMed]
  12. Luscombe, A.P. RADARSAT-2 SAR image quality and calibration operations. Can. J. Remote Sens. 2004, 30, 345–354. [Google Scholar] [CrossRef]
  13. Luscombe, A. Image Quality and Calibration of RADARSAT-2. In Proceedings of the 2009 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Cape Town, South Africa, 12–17 July 2009; pp. II-757–II-760. [Google Scholar]
  14. Covello, F.; Battazza, F.; Coletta, A.; Lopinto, E.; Fiorentino, C.; Pietranera, L.; Valentini, G.; Zoffoli, S. COSMO-SkyMed an existing opportunity for observing the Earth. J. Geodyn. 2010, 49, 171–180. [Google Scholar] [CrossRef] [Green Version]
  15. Kubik, P.; Lebègue, L.; Fourest, S.; Delvit, J.M.; de Lussy, F.; Greslou, D.; Blanchet, G. First In-flight Results of Pleiades 1A Innovative Methods for Optical Calibration. In Proceedings of the International Conference of Space Optics—ICSO 2012, Ajaccio, France, 9–12 October 2012; p. 1056407. [Google Scholar]
  16. Zhang, G.; Xu, K.; Zhang, Q.; Li, D. Correction of pushbroom satellite imagery interior distortions independent of ground control points. Remote Sens. 2018, 10, 98. [Google Scholar] [CrossRef]
  17. Liu, X.; Liu, J.; Hong, W. The analysis of the precision in spaceborne SAR image location. J. Remote Sens. 2006, 10, 76–81. [Google Scholar]
  18. Liu, X.; Ma, H.; Sun, W. Study on the geolocation algorithm of space-borne SAR image. In Advances in Machine Vision, Image Processing, and Pattern Analysis. IWICPAS 2006. Lecture Notes in Computer Science 4153; Zheng, N., Jiang, X., Lan, X., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 270–280. [Google Scholar]
  19. Jehle, M.; Perler, D.; Small, D.; Schubert, A.; Meier, E. Estimation of atmospheric path delays in TerraSAR-X data using models vs. measurements. Sensors 2008, 8, 8479–8491. [Google Scholar] [CrossRef] [PubMed]
  20. Schubert, A.; Jehle, M.; Small, D.; Meier, E. Influence of Atmospheric Path Delay on the Absolute Geolocation Accuracy of TerraSAR-X High-Resolution Products. IEEE Trans. Geosci. Remote Sens. 2010, 48, 751–758. [Google Scholar] [CrossRef]
  21. Breit, H.; Fritz, T.; Balss, U.; Lachaise, M.; Niedermeier, A.; Vonavka, M. TerraSAR-X SAR processing and products. IEEE Trans. Geosci. Remote Sens. 2010, 48, 727–740. [Google Scholar] [CrossRef]
  22. Raggam, H.; Gutjahr, K.; Perko, R.; Schardt, M. Assessment of the stereo-radargrammetric mapping potential of TerraSAR-X multibeam spotlight data. IEEE Trans. Geosci. Remote Sens. 2010, 48, 971–977. [Google Scholar] [CrossRef]
  23. Jiang, Y.H.; Zhang, G. Research on the methods of inner calibration of spaceborne SAR. In Proceedings of the 2011 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Vancouver, BC, Canada, 24–29 July 2011; pp. 914–916. [Google Scholar]
Figure 1. Schematic diagram of synthetic aperture radar (SAR) image geometric self-calibration.
Figure 2. Spatial distribution of experimental data in (a) Calibration Group A (200 MHz and 24.4 μs), and (b) Calibration Group B (150 MHz and 24.4 μs).
Figure 3. Distribution of independent check points (ICPs) in the validation images.
Figure 4. Sources of the three types of control data used for verification.
Table 1. Calibration Group A image data (bandwidth 200 MHz, pulse width 24.4 μs).
Image ID | Imaging Time | Central Angle | Orbit | Look-Side
A1 | 2015-12-28 | 36.1° | Asc | R
B1 | 2016-01-03 | 43.1° | Desc | R
C1 | 2016-01-16 | 26.8° | Asc | R
D1 | 2016-01-18 | 31.4° | Desc | R
E1 | 2016-01-19 | 37.5° | Asc | L
F1 | 2016-03-03b | 24.8° | Asc | L
G1 | 2016-03-11 | 37.1° | Desc | R
H1 | 2016-03-26a | 23.6° | Desc | R
I1 | 2016-03-27a | 15.2° | Desc | L
J1 | 2016-03-27b | 31.2° | Asc | L
Desc = Descending; Asc = Ascending; L = Left; R = Right.
Table 2. Calibration Group B image data (bandwidth 150 MHz, pulse width 24.4 μs).
Image ID | Imaging Time | Central Angle | Orbit | Look-Side
A2 | 2015-12-29 | 46.1° | Desc | R
B2 | 2016-01-04 | 46.9° | Asc | L
C2 | 2016-01-07a | 54.6° | Desc | R
D2 | 2016-01-17a | 50.9° | Desc | R
E2 | 2016-01-17b | 48.9° | Asc | R
F2 | 2016-03-10 | 45.5° | Asc | R
G2 | 2016-03-15 | 48.0° | Asc | R
H2 | 2016-03-26b | 50.4° | Asc | L
I2 | 2016-03-29b | 38.0° | Asc | R
J2 | 2016-03-30 | 53.8° | Asc | R
Desc = Descending; Asc = Ascending; L = Left; R = Right.
Table 3. Geometric calibration results for Calibration Group A (200 MHz and 24.4 μs).
Combination | Slant Range Correction (m) | Systematic Azimuth Shifts (ms)
A1-B1-C1 | 15.96 | −0.126
A1-B1-C1-D1 | 15.72 | −0.131
A1-B1-C1-D1-E1 | 15.88 | −0.140
A1-B1-C1-D1-E1-F1 | 16.35 | −0.126
A1-B1-C1-D1-E1-F1-G1 | 16.19 | −0.118
A1-B1-C1-D1-E1-F1-G1-H1 | 17.55 | −0.120
A1-B1-C1-D1-E1-F1-G1-H1-I1 | 16.61 | −0.128
A1-B1-C1-D1-E1-F1-G1-H1-I1-J1 | 16.57 | −0.134
Table 4. Geometric calibration results for Calibration Group B (150 MHz and 24.4 μs).
Combination | Slant Range Correction (m) | Systematic Azimuth Shifts (ms)
A2-B2-C2 | 17.34 | 0.009
A2-B2-C2-D2 | 17.16 | −0.000
A2-B2-C2-D2-E2 | 16.90 | −0.083
A2-B2-C2-D2-E2-F2 | 15.81 | −0.131
A2-B2-C2-D2-E2-F2-G2 | 15.73 | −0.133
A2-B2-C2-D2-E2-F2-G2-H2 | 16.12 | −0.130
A2-B2-C2-D2-E2-F2-G2-H2-I2 | 17.13 | −0.144
A2-B2-C2-D2-E2-F2-G2-H2-I2-J2 | 16.97 | −0.137
Table 5. Image data used in Validation Groups A and B.
Validation Group | Bandwidth and Pulse Width | Test Site | Imaging Time | Control Data | Number of ICPs
A | 200 MHz and 24.4 μs | Songshan | 2016-03-29b | Six corner reflectors | 4
A | 200 MHz and 24.4 μs | Taiyuan | 2016-05-28 | 1:5000 DOM/DEM | 4
A | 200 MHz and 24.4 μs | Tianjin | 2016-05-29 | 1:2000 DOM/DEM | 4
B | 150 MHz and 24.4 μs | Songshan | 2016-04-02 | Six corner reflectors | 3
B | 150 MHz and 24.4 μs | Taiyuan | 2016-06-01 | 1:5000 DOM/DEM | 3
B | 150 MHz and 24.4 μs | Anping | 2016-06-09 | GPS control points | 6
B | 150 MHz and 24.4 μs | Tianjin | 2016-06-10 | 1:2000 DOM/DEM | 3
B | 150 MHz and 24.4 μs | Xianning | 2016-06-12 | GPS control points | 4
Table 6. Comparison of absolute positioning accuracy of Validation Group A before and after compensating for geometric calibration parameters.
Calibration Plan | Combination | North (m) | East (m) | Plane (m)
Plan 1 * | None | 6.93 | 28.64 | 29.47
Plan 2 | A1-B1-C1 | 1.65 | 3.06 | 3.47
Plan 3 | A1-B1-C1-D1 | 1.74 | 3.41 | 3.83
Plan 4 | A1-B1-C1-D1-E1 | 1.70 | 3.16 | 3.59
Plan 5 | A1-B1-C1-D1-E1-F1 | 1.52 | 2.46 | 2.89
Plan 6 | A1-B1-C1-D1-E1-F1-G1 | 1.56 | 2.70 | 3.12
Plan 7 | A1-B1-C1-D1-E1-F1-G1-H1 | 1.12 | 0.87 | 1.42
Plan 8 | A1-B1-C1-D1-E1-F1-G1-H1-I1 | 1.43 | 2.06 | 2.51
Plan 9 | A1-B1-C1-D1-E1-F1-G1-H1-I1-J1 | 1.45 | 2.11 | 2.56
* Plan 1 denotes no calibration performed.
Table 7. Comparison of absolute positioning accuracy of Validation Group B before and after compensating for geometric calibration parameters.
Calibration Plan | Combination | North (m) | East (m) | Plane (m)
Plan 1 * | None | 5.41 | 22.11 | 22.76
Plan 2 | A2-B2-C2 | 1.44 | 1.54 | 2.11
Plan 3 | A2-B2-C2-D2 | 1.40 | 1.71 | 2.21
Plan 4 | A2-B2-C2-D2-E2 | 0.96 | 1.97 | 2.19
Plan 5 | A2-B2-C2-D2-E2-F2 | 0.94 | 3.18 | 3.31
Plan 6 | A2-B2-C2-D2-E2-F2-G2 | 0.95 | 3.27 | 3.41
Plan 7 | A2-B2-C2-D2-E2-F2-G2-H2 | 0.89 | 2.82 | 2.96
Plan 8 | A2-B2-C2-D2-E2-F2-G2-H2-I2 | 0.70 | 1.75 | 1.89
Plan 9 | A2-B2-C2-D2-E2-F2-G2-H2-I2-J2 | 0.74 | 1.91 | 2.05
* Plan 1 denotes no calibration performed.
Table 8. Comparison between conventional field calibration and proposed self-calibration method.
Validation Group | Calibration Method | North (m) | East (m) | Plane (m)
A | Conventional field calibration | 1.04 | 0.78 | 1.30
A | Self-calibration | 1.74 | 3.41 | 3.83
A | Difference between the two methods | – | – | 2.53
B | Conventional field calibration | 0.70 | 1.52 | 1.67
B | Self-calibration | 0.95 | 3.27 | 3.41
B | Difference between the two methods | – | – | 1.74

Share and Cite

MDPI and ACS Style

Zhang, G.; Deng, M.; Cai, C.; Zhao, R. Geometric Self-Calibration of YaoGan-13 Images Using Multiple Overlapping Images. Sensors 2019, 19, 2367. https://doi.org/10.3390/s19102367
