Technical Note

Improvement and Assessment of Gaofen-3 Spotlight Mode 3-D Localization Accuracy

Nuo Chen, Mingjun Deng, Di Wang, Zhengpeng Zhang and Yin Yang

1 School of Automation and Electronic Information, Xiangtan University, Xiangtan 411105, China
2 National Centre for Applied Mathematics in Hunan, Xiangtan 411105, China
3 School of Mathematics and Computational Science, Xiangtan University, Xiangtan 411105, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(10), 2512; https://doi.org/10.3390/rs15102512
Submission received: 17 March 2023 / Revised: 30 April 2023 / Accepted: 8 May 2023 / Published: 10 May 2023
(This article belongs to the Special Issue Spaceborne SAR Calibration Technology)

Abstract

Spotlight images acquired by the Gaofen-3 satellite have a resolution of 1 m and therefore great potential for 3-D localization. However, there have been no public reports on the 3-D localization accuracy of Gaofen-3 spotlight synthetic aperture radar (SAR) images. Here, three study areas were selected, and Gaofen-3 spotlight stereo SAR images of these areas were acquired. Without ground control points (GCPs), these images were first used for 3-D localization based on the Rational Polynomial Coefficient (RPC) model; the plane accuracy was generally better than 10 m, whereas the elevation error was large, exceeding 37 m in some cases. Subsequently, the RPC model was optimized using geometric calibration technology, and the 3-D localization accuracy was assessed again. The elevation accuracy improved significantly, to generally better than 5 m, and the plane accuracy also improved, to generally better than 6 m. These results show that Gaofen-3 spotlight stereo images are of good quality and that high plane accuracy can be obtained even without GCPs. Geometric calibration improves the 3-D localization accuracy, with a particularly remarkable effect on elevation accuracy. Moreover, the improvement in plane accuracy depends on the properties of the stereo-image pairs: it is obvious for asymmetric stereo-image pairs but modest for symmetric stereo-image pairs.

1. Introduction

On 10 August 2016, China successfully launched the Gaofen-3 satellite, the country’s first C-band multi-polarization high-resolution SAR satellite, which can be widely used in environmental and marine science, among other fields [1,2,3]. Gaofen-3 has 12 imaging modes, among which the spotlight mode has the highest resolution, reaching 1 m [4]. Two or more spaceborne synthetic aperture radar (SAR) images can be used for 3-D localization [5], which has irreplaceable advantages in modern topographic mapping. Owing to its high resolution, the Gaofen-3 spotlight mode can extract richer ground-object information and has great application potential in stereo photogrammetry.
In the 21st century, many spaceborne SAR systems have been launched worldwide, and many scholars have conducted extensive research on the 3-D localization of spaceborne SAR images. Toutin et al. carried out substantial work on the 3-D localization of Radarsat-2: they preliminarily verified that the elevation accuracy of Radarsat-2 ultrafine data in 3-D localization could reach 2 m [6], applied Radarsat-2 data to the 3-D localization of glaciers [7], and optimized the 3-D localization model of Radarsat-2 data to obtain a Digital Elevation Model (DEM) with a maximum linear error of around 10 m [8]. Papasodoro et al. used Radarsat-2 images to obtain a DEM with an average vertical deviation of around 4 m in Nunavut, Canada [9]. Based on the Rational Polynomial Coefficient (RPC) model, Capaldo et al. extracted 3-D information from COSMO-SkyMed spotlight stereo images and obtained a DEM with a median error of around 4 m [10]. Using COSMO-SkyMed X-band SAR data to generate a DEM of the Dehradun region of India, Agrawal et al. obtained a Root Mean Square Error (RMSE) of 7.3 m [11]. Eldhuset et al. used corner reflectors to perform 3-D localization of high-resolution TerraSAR-X spotlight images and obtained an elevation accuracy better than 1 m [12]. Zhang et al. studied the application of the RPC adjustment model to the 3-D intersection of stereo-image pairs [13] and then optimized the stereoscopic adjustment model of TerraSAR-X images based on the RPC model, obtaining a plane accuracy of 0.166 m and an elevation accuracy of 0.241 m with four corner ground control points (GCPs) [14]. Balss et al. showed that the absolute 3-D localization accuracy of TerraSAR-X can reach the centimeter level under ideal conditions [15,16,17]. At present, research on the 3-D localization of Radarsat-2, COSMO-SkyMed, and TerraSAR-X images is extensive in the open literature.
However, no specific public reports exist on the evaluation of the 3-D localization accuracy of Gaofen-3 spotlight images. To fill this gap, this study used spotlight mode images acquired by Gaofen-3 as test data. Gaofen-3 spotlight images of three research sites (Henan, Shaanxi, and Shanghai) were obtained, and both the original RPC model and the RPC model optimized by geometric calibration were used to evaluate the 3-D localization accuracy of these images. This constitutes a detailed evaluation of the 3-D localization accuracy of Gaofen-3 spotlight images.

2. Materials and Methods

2.1. Materials

2.1.1. GCPs

The GCPs in the study areas were collected using global positioning system real-time kinematic (GPS-RTK) measurement technology, and their distribution was relatively uniform. The plane and elevation accuracies of the GCPs were both better than 0.1 m. The GCPs were used as independent checkpoints to evaluate the 3-D localization accuracy before and after the RPC models of the Gaofen-3 images were optimized. As shown in Table 1, all image data had an appropriate number of GCPs.
GCPs were selected at obvious road intersections or field boundaries to facilitate subsequent manual comparison. The geodetic coordinates of all GCPs were measured, and Google Earth optical images and field measurement photos corresponding to the GCP positions were obtained. The collection time of these Google Earth optical images was close to that of the corresponding Gaofen-3 images. For the example GCP shown in Figure 1, a comparison of the field survey photo, the Google Earth image, and the Gaofen-3 SAR image shows that the range-azimuth coordinates of GCPs collected on the SAR image are relatively accurate, and the location extraction accuracy of GCPs on the SAR image can be kept to approximately one pixel.

2.1.2. Study Area and Gaofen-3 Image Data

Several Gaofen-3 spotlight mode SAR images were collected in three research areas (Henan, Shaanxi, and Shanghai). The image datasets in the three areas were acquired at different look angles from either ascending or descending orbits.
In the first study area, Henan, the terrain consists mainly of mountains, hills, and plains, and the average altitude differs significantly among these terrain types. The image data were acquired between October 2016 and December 2019. The characteristics of the dataset in this area are as follows: the side-looking direction of the SAR system is right-looking, and the incidence angles range from ~21.1–40.6° (Table 2). The selected SAR images and control-point distributions are shown in Figure 2.
The second study area, Shaanxi, had an elevation of ~330–2645 m. The terrain is high in the north and south and low in the middle. As shown in Table 3, the acquisition time of the image data was concentrated in 2018, and the side-looking direction of the SAR system was right looking. The incidence angle of the images ranged from ~19.1–32.0°. The selected SAR images and control-point distributions are shown in Figure 3.
The elevation of the third study area, Shanghai, is high in the southeast and low in the northwest, with a ground elevation of ~3.5–4.5 m and an average elevation of 3.87 m. Data were collected between March and April 2018. The characteristics of the dataset are shown in Table 4: only one image is left-looking, the other images are right-looking, and the incidence angles range from ~23.6–42.7°. The selected SAR images and control-point distributions are shown in Figure 4.

2.2. Methods

Since all targets with different altitudes are projected onto a single SAR image, a single SAR image can only provide the two-dimensional image coordinates of a target point and cannot provide its altitude [18,19]. However, two SAR images with a certain intersection angle can form a stereo-image pair from which the three-dimensional coordinates of the target point can be located [20]. Three-dimensional localization uses the coordinates $(x_1, y_1)$ and $(x_2, y_2)$ of the same-name (corresponding) image points on the stereo-image pair, together with the range and Doppler information recorded during radar imaging. Two sets of conditional equations are established according to the range equation and the Doppler equation, and solving these two sets of equations yields the three-dimensional coordinates of the target point [21].
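For reference, a standard textbook form of these range and Doppler conditions [18,21] is given below; the vector notation is introduced here only for illustration and is not part of the original model description. With $\mathbf{S}_i$ and $\mathbf{V}_i$ the sensor position and velocity at the azimuth time of image $i$ ($i = 1, 2$), $\mathbf{T}$ the unknown target position, and $\lambda$ the radar wavelength, the four equations constrain the three unknown target coordinates:

$$
R_i = \left\| \mathbf{S}_i - \mathbf{T} \right\|, \qquad
f_{D,i} = \frac{2\,\mathbf{V}_i \cdot \left( \mathbf{T} - \mathbf{S}_i \right)}{\lambda \left\| \mathbf{S}_i - \mathbf{T} \right\|}, \qquad i = 1, 2
$$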

2.2.1. Stereo-Image Pair Acquisition Method

SAR stereo-image pairs exhibit a certain degree of image overlap. According to the relationship between the two observation directions, stereo observations can be divided into ipsilateral and heterolateral stereo observations [22]. Ipsilateral stereo observation means that the radar images the target area twice from the same side (Figure 5a), whereas heterolateral stereo observation means that the radar images the target area from both sides (Figure 5b).

2.2.2. 3-D Localization Method for Spaceborne SAR Imagery

Generally, the geometric relationship of a satellite remote sensing image is used as its mathematical model. Systematic errors are eliminated in the adjustment process according to the positioning consistency principle of the same-name points, and the corresponding object coordinates are calculated. According to the geometric geolocation model used, the main geolocation methods for spaceborne SAR imagery can be divided into methods based on the Range-Doppler model (RDM) [23] and methods based on the RPC model [24]. Wu verified through experiments that the 3-D localization method based on the RPC model can achieve the same accuracy as the method based on the rigorous imaging geometry model [25]. Under the premise of the same processing accuracy, the RPC model is more versatile and more convenient to process. Therefore, this study used the RPC model to perform the 3-D localization of the Gaofen-3 images.
The RPC model associates the image point coordinates $(r, c)$ with the corresponding ground point coordinates $(P, L, H)$ through ratios of polynomials. The association between the two sets of coordinates can be written as:

$$
r = \frac{Num_L(P, L, H)}{Den_L(P, L, H)}, \qquad c = \frac{Num_S(P, L, H)}{Den_S(P, L, H)} \tag{1}
$$

where $P$, $L$, and $H$ are the latitude, longitude, and elevation, respectively; $r$ and $c$ are the row and column indices of the pixel in the image; and $Num_L(P, L, H)$, $Den_L(P, L, H)$, $Num_S(P, L, H)$, and $Den_S(P, L, H)$ are third-order polynomials in $(P, L, H)$.
Expanding Equation (1) to the first-order term according to Taylor’s formula yields Equation (2):

$$
\begin{aligned}
r &= \hat{r} + r_P \, \Delta P + r_L \, \Delta L + r_H \, \Delta H \\
c &= \hat{c} + c_P \, \Delta P + c_L \, \Delta L + c_H \, \Delta H
\end{aligned} \tag{2}
$$

where $\hat{r}$ and $\hat{c}$ are the row and column coordinates computed from the initial ground coordinates; $r_P$, $r_L$, $r_H$, $c_P$, $c_L$, and $c_H$ are the first-order partial derivatives of $r$ and $c$ with respect to $P$, $L$, and $H$; and $\Delta P$, $\Delta L$, and $\Delta H$ are the corrections to the latitude, longitude, and elevation, respectively.
Therefore, the error equations of the space forward intersection are given by Equation (3), and the object-space coordinates of the image points can be solved by combining the observations of multiple image points with Equation (3):

$$
\begin{aligned}
v_r &= \begin{bmatrix} r_P & r_L & r_H \end{bmatrix}
\begin{bmatrix} \Delta P \\ \Delta L \\ \Delta H \end{bmatrix} - (r - \hat{r}) \\
v_c &= \begin{bmatrix} c_P & c_L & c_H \end{bmatrix}
\begin{bmatrix} \Delta P \\ \Delta L \\ \Delta H \end{bmatrix} - (c - \hat{c})
\end{aligned} \tag{3}
$$

where $v_r$ and $v_c$ are the corrections (residuals) of the pixel coordinates.
From the above equations, for an unknown ground point the forward intersection involves three unknowns, while the two same-name image points of that ground point in the stereo-image pair provide four equations. Therefore, the least-squares method can be used to obtain the object-space coordinates of the ground point; a minimal code sketch of this iteration is given after the list. The specific steps were as follows:
  • Set the initial value $(P_0, L_0, H_0)$ for the unknown ground point. In general, the average elevation of the region covered by the stereo-image pair can be used as the initial elevation.
  • Using the current ground coordinates and the observed image coordinates of the same-name points, compute the coefficients of the error equations according to Equation (3).
  • Use the least-squares method to obtain the corrections to the coordinates of the unknown ground point.
  • If the corrections $(\Delta P, \Delta L, \Delta H)$ exceed the allowed tolerance, add the corrections computed in the third step to $(P_0, L_0, H_0)$ to obtain new initial coordinates for the unknown ground point, and repeat from the second step.
  • If the corrections $(\Delta P, \Delta L, \Delta H)$ are within the tolerance, the iteration stops, and $(P_0 + \Delta P, L_0 + \Delta L, H_0 + \Delta H)$ is taken as the final estimate of the ground point coordinates.
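To make the iteration above concrete, the following is a minimal Python sketch (not the authors’ implementation) of the RPC-based least-squares forward intersection. Here `proj1` and `proj2` are assumed callables that evaluate the RPC model of Equation (1) for each image and return the image (row, column); the finite-difference steps, the tolerance, and the toy projections in the usage example are illustrative assumptions only.

```python
import numpy as np

def forward_intersection(proj1, rc1, proj2, rc2, init, tol=1e-8, max_iter=20):
    """Iterative least-squares space forward intersection from one stereo pair.

    proj1, proj2 : callables mapping (lat, lon, height) -> (row, col),
                   e.g. evaluations of the RPC model in Equation (1).
    rc1, rc2     : observed (row, col) of the same-name point in images 1 and 2.
    init         : initial (lat, lon, height), e.g. scene centre and mean elevation.
    """
    x = np.asarray(init, dtype=float)      # current (P, L, H) estimate
    steps = np.array([1e-6, 1e-6, 1.0])    # finite-difference steps (deg, deg, m)
    obs = np.hstack([rc1, rc2])            # four observed image coordinates
    for _ in range(max_iter):
        pred = np.hstack([proj1(*x), proj2(*x)])
        # Numerical partials of (r1, c1, r2, c2) w.r.t. (P, L, H): 4 x 3 design matrix
        A = np.zeros((4, 3))
        for j in range(3):
            xp = x.copy()
            xp[j] += steps[j]
            A[:, j] = (np.hstack([proj1(*xp), proj2(*xp)]) - pred) / steps[j]
        misclosure = obs - pred            # the (r - r_hat, c - c_hat) terms of Equation (3)
        dx, *_ = np.linalg.lstsq(A, misclosure, rcond=None)
        x = x + dx
        if np.all(np.abs(dx) < tol):       # corrections within tolerance: stop iterating
            break
    return x

# Toy usage with analytic "projections" standing in for two RPC models:
if __name__ == "__main__":
    p1 = lambda P, L, H: (1000 * (P - 34.0) + 0.05 * H, 1000 * (L - 113.0))
    p2 = lambda P, L, H: (1000 * (P - 34.0) - 0.04 * H, 1000 * (L - 113.0) + 0.01 * H)
    truth = (34.02, 113.03, 150.0)
    print(forward_intersection(p1, p1(*truth), p2, p2(*truth), init=(34.0, 113.0, 0.0)))
```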

2.2.3. Optimizing the RPC Model

Owing to the errors in the observation parameters of the range-Doppler model, the RPC model obtained by fitting has errors. The analysis shows that the system time-delay error in the range direction and atmospheric delay error can be eliminated using the geometric calibration method. The specific geometric calibration method is described in detail in a previous study [26].
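As a schematic illustration of this range correction (a general form only, under the assumption that the calibration estimates a constant system time delay and an atmospheric path delay, not the exact expressions of [26]), the calibrated slant range $R'$ can be written as

$$
R' = R - c \cdot \Delta t_{sys} - \Delta R_{atm}
$$

where $R$ is the original slant range, $c$ is the speed of light, $\Delta t_{sys}$ is the equivalent one-way system time delay estimated at the calibration sites, and $\Delta R_{atm}$ is the atmospheric propagation delay; the RPC model is then refitted from the corrected range-Doppler geometry.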

2.2.4. Evaluation of 3-D Stereo Accuracy

Using the image coordinates of the same-name points on the stereo-image pair (the same-name points are the GCPs lying in the overlap of the stereo images), the 3-D coordinates of these points can be calculated through forward intersection with the RPC model. By comparing the 3-D coordinates calculated from the SAR images with the field measurements, the RMSE can be obtained to evaluate the 3-D localization accuracy [27].
For the reference 3-D coordinates, the Universal Transverse Mercator (UTM) coordinate system was used for the plane, and the ground height in the World Geodetic System 1984 (WGS84) was used for the elevation. The 3-D localization accuracy is generally evaluated in two dimensions: plane and elevation. The plane accuracy is reflected by the RMSE of the north, east, and plane (north and east combined) components, and the elevation accuracy is reflected by the RMSE of the height. The specific evaluation procedure is as follows (a code sketch of the RMSE computation is given after the list):
  • From the SAR spotlight stereo-image pair, the image coordinates $(x_1, y_1)$ and $(x_2, y_2)$ of each GCP were obtained.
  • The 3-D coordinates $(X_1, Y_1, Z_1)$ of the same-name point were obtained by forward intersection using the RPC model and compared with the coordinates of the target point; the field-measured coordinates $(X, Y, Z)$ were transformed into plane coordinates and elevation in the UTM coordinate system.
  • In the UTM coordinate system, the RMSE of the north direction, east direction, plane, and height was calculated over all checkpoints of the stereo-image pair.
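As an illustration of the last step, the short Python sketch below computes the east, north, plane, and height RMSE from arrays of intersected and field-measured checkpoint coordinates; both arrays are assumed to already be expressed as UTM easting/northing plus height in metres, and the function name and array layout are hypothetical. The plane RMSE is taken as the root of the sum of the squared east and north RMSEs, which is consistent with the plane values reported in Table 5, Table 6 and Table 7.

```python
import numpy as np

def localization_rmse(computed_enh, measured_enh):
    """RMSE of the east, north, plane, and height components.

    computed_enh, measured_enh : (n, 3) arrays of checkpoint coordinates
    (east, north, height) in metres, e.g. UTM easting/northing plus height.
    """
    d = np.asarray(computed_enh, float) - np.asarray(measured_enh, float)
    rmse_e, rmse_n, rmse_h = np.sqrt(np.mean(d ** 2, axis=0))
    rmse_plane = np.hypot(rmse_e, rmse_n)  # combined east/north (plane) component
    return {"east": rmse_e, "north": rmse_n, "plane": rmse_plane, "height": rmse_h}
```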

3. Results and Discussion

The 3-D localization accuracy was evaluated for the three study areas, and the results are summarized in Table 5, Table 6 and Table 7 and Figure 6. Table 5, Table 6 and Table 7 give the details of the stereo-image pairs in the three study areas and the 3-D localization accuracy before and after RPC model optimization. The line chart in Figure 6 shows the variation of the 3-D localization accuracy with the stereo intersection angle, plotted in ascending order of intersection angle.
From the test data and figures, the following conclusions can be drawn for 3-D localization with Gaofen-3 spotlight stereo images.
For the RPC model before optimization:
  • Overall, the elevation accuracy was poor, at ±33.546 m (the median elevation RMSE of all stereo-image pairs), and the elevation error exceeded 37 m for several stereo pairs. The plane accuracy was better, at ±5.307 m (the median plane RMSE of all stereo-image pairs), and was generally better than 10 m.
  • The plane accuracy of the 3-D localization of the ipsilateral stereo-image pair was worse than that of the heterolateral stereo-image pairs, and the ipsilateral pair HN_V had the worst plane accuracy, at 27.448 m. Among the heterolateral stereo-image pairs, SX_B had the worst plane accuracy, at 21.443 m.
  • As shown in Figure 6, the 3-D localization accuracy of all the stereo-image pairs fluctuated significantly. In general, the larger the stereo intersection angle, the worse the elevation accuracy. The plane accuracy was less affected by the stereo intersection angle.
For the geometrically calibrated RPC model:
  • After geometric calibration, the 3-D localization accuracy was greatly improved. The elevation accuracy was ±1.512 m (the median elevation RMSE of all optimized stereo-image pairs), which was mostly better than 5 m, and the plane accuracy was ±4.783 m (the median plane RMSE of all optimized stereo-image pairs), which was generally better than 6 m. The elevation accuracy was improved by approximately 30 m, and the improvement effect was good. For stereo pairs whose plane accuracy was worse than 5 m before optimization, the plane accuracy was also generally improved, in some cases by approximately 10 m or more.
  • The ipsilateral stereo-image pair HN_V showed a significantly larger improvement in plane accuracy than the other stereo-image pairs, with a difference of 19.186 m between the plane accuracy before and after optimization. Its improvement in elevation accuracy was the smallest, with a difference of 6.151 m before and after optimization.
  • As shown in Figure 6, the fluctuation range of the 3-D localization accuracy became smaller and more stable within a certain range. The stereo accuracy after geometric calibration was less affected by the intersection angle.
Theoretically, heterolateral stereo-image pairs have noticeable disparities and large base-to-height ratios, so the accuracy of heterolateral 3-D localization is higher than that of ipsilateral 3-D localization, which was also confirmed by our experiment. The initial RPC model is affected by the sensor, the platform ephemeris, the terrain height, and the processor; before model optimization, the elevation accuracy of 3-D localization was therefore relatively low. The error of the stereo model lay mainly in the range direction, and the geometric calibration mainly corrected this error; therefore, the optimization effect on the elevation accuracy was more noticeable. Most of the selected heterolateral stereo-image pairs had similar incidence angles, which approximates symmetric 3-D localization, whereas the ipsilateral stereo-image pair is asymmetric. Consequently, geometric calibration had only a small effect on the plane accuracy of the approximately symmetric heterolateral 3-D localization, but a large effect on the plane accuracy of the asymmetric ipsilateral 3-D localization.

4. Conclusions

This study evaluated the 3-D localization accuracy of Gaofen-3 spotlight images. Without control points, 3-D localization of the Gaofen-3 spotlight images based on the original RPC model yielded poor elevation accuracy and unstable plane accuracy. The RPC model was then optimized using geometric calibration technology to improve the 3-D localization accuracy. Geometric calibration corrects the range error of the slant-range image, greatly improves the elevation accuracy, and stabilizes the plane accuracy. Without ground control points (GCPs), based on the original RPC model, the plane accuracy was generally better than 10 m, while the elevation error was large, exceeding 37 m in some cases. After the RPC model was optimized using geometric calibration, the elevation accuracy improved significantly, to generally better than 5 m, and the plane accuracy also improved, to generally better than 6 m. In addition, the effect of geometric calibration on the plane accuracy depends on the properties of the stereo-image pairs: the improvement in the plane accuracy of the asymmetric (ipsilateral) stereo-image pair was more noticeable than that of the approximately symmetric (heterolateral) stereo-image pairs. However, owing to the limited test data for asymmetric stereo-image pairs, this conclusion requires further study.

Author Contributions

N.C. and M.D. wrote the paper and conducted the experiments. D.W. processed the image data. Z.Z. and Y.Y. guided the experiments and structure of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key Research and Development Program of China (Grant No. 2020YFA0713503) and the Excellent Youth Project of the Hunan Provincial Education Department (Grant No. 22B0168).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, Q.; Yan, S.; Lü, M.; Zhang, L.; Liu, G. Accurate extraction and analysis of mountain glacier surface motion with GF-3 imagery. J. Glaciol. Geocryol. 2021, 43, 1594–1605. [Google Scholar]
  2. Zhang, L.; Wang, Z.; Ye, K.; Chang, D. Coastline Extraction Method Based on Multi Feature Classification of Full Polarization SAR Image. Remote Sens. Inf. 2022, 37, 58–65. [Google Scholar]
  3. Lin, X.; Luo, D.; Lian, R.; Wei, W. Application of GF-3 Spotlight Mode Data in Urban Flood Disaster Emergency Response. Geospat. Inf. 2021, 20, 1–4. [Google Scholar]
  4. Zhang, Q. System Design and Key Technologies of the GAOFEN-3 Satellite. Acta Geod. Cartogr. Sin. 2017, 46, 269–277. [Google Scholar]
  5. Gao, L. Basic Method and Practice of SAR Photogrammetric Processing; PLA Information Engineering University: Zhengzhou, China, 2004. [Google Scholar]
  6. Toutin, T.; Chenier, R. 3-D Radargrammetric Modeling of RADARSAT-2 Ultrafine Mode: Preliminary Results of the Geometric Calibration. IEEE Geosci. Remote Sens. Lett. 2009, 6, 282–286. [Google Scholar] [CrossRef]
  7. Toutin, T.; Blondel, E.; Clavet, D.; Schmitt, C. Stereo radargrammetry with Radarsat-2 in the Canadian Arctic. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2601–2609. [Google Scholar] [CrossRef]
  8. Toutin, T. Impact of Radarsat-2 SAR Ultrafine-Mode Parameters on Stereo-Radargrammetric DEMs. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3816–3823. [Google Scholar] [CrossRef]
  9. Papasodoro, C.; Royer, A.; Langlois, A.; Berthier, E. Potential of RADARSAT-2 stereo radargrammetry for the generation of glacier DEMs. J. Glaciol. 2016, 62, 486–496. [Google Scholar] [CrossRef]
  10. Capaldo, P.; Crespi, M.; Fratarcangeli, F.; Nascetti, A.; Pieralice, F. High-Resolution SAR Radargrammetry: A First Application With COSMO-SkyMed SpotLight Imagery. IEEE Geosci. Remote Sens. Lett. 2011, 8, 1100–1104. [Google Scholar] [CrossRef]
  11. Agrawal, R.; Das, A.; Rajawat, A. Accuracy Assessment of Digital Elevation Model Generated by SAR Stereoscopic Technique Using COSMO-Skymed Data. J. Indian Soc. Remote Sens. 2018, 46, 1739–1747. [Google Scholar] [CrossRef]
  12. Eldhuset, K.; Weydahl, D.J. Geolocation and stereo height estimation using TerraSAR-X spotlight image data. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3574–3581. [Google Scholar] [CrossRef]
  13. Zhang, G.; Li, Z.; Pan, H.; Qiang, Q.; Zhai, L. Orientation of Spaceborne SAR Stereo Pairs Employing the RPC Adjustment Model. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2782–2792. [Google Scholar] [CrossRef]
  14. Zhang, G.; Li, Z. RPC-based adjustment model for TerraSAR-X stereo orientation. Sci. Surv. Mapp. 2011, 36, 146–148. [Google Scholar]
  15. Balss, U.; Eineder, M.; Fritz, T.; Breit, H.; Minet, C. Techniques for High Accuracy Relative and Absolute Localization of TerraSAR-X/TanDEM-X Data. In Proceedings of the 2011 IEEE International Geoscience and Remote Sensing Symposium, Vancouver, BC, Canada, 24–29 July 2011. [Google Scholar]
  16. Balss, U.; Gisinger, C.; Cong, X.; Brcic, R.; Hackel, S.; Eineder, M. Precise Measurements on the Absolute Localization Accuracy of TerraSAR-X on the Base of Far-Distributed Test Sites. In Proceedings of the 10th European Conference on Synthetic Aperture Radar (EUSAR), Berlin, Germany, 5 June 2014. [Google Scholar]
  17. Balss, U.; Gisinger, C.; Eineder, M. Measurements on the Absolute 2-D and 3-D Localization Accuracy of TerraSAR-X. Remote Sens. 2018, 10, 656. [Google Scholar] [CrossRef]
  18. Cumming, I.G.; Wong, F.H. Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation; Artech House: Boston, MA, USA, 2005. [Google Scholar]
  19. Liu, Y.; Xing, M.; Sun, G.; Lv, X.; Bao, Z.; Hong, W.; Wu, Y. Echo model analyses and imaging algorithm for high-resolution SAR on high-speed platform. IEEE Trans. Geosci. Remote Sens. 2012, 50, 933–950. [Google Scholar] [CrossRef]
  20. Li, D.; Liu, J.Y. A stereo positioning method of GaoFen-3 SAR images under different time and space conditions. J. Univ. Chin. Acad. Sci. 2021, 38, 519–523. [Google Scholar]
  21. Benyi, C.; Yong, F. The principles of positioning with space-borne SAR images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Toronto, ON, Canada, 24–28 June 2002; Volume 4, pp. 1950–1952. [Google Scholar] [CrossRef]
  22. Chen, P.H.; Dowman, I.J. Space intersection from ERS-1 synthetic aperture radar images. Photogramm. Rec. 1996, 15, 561–573. [Google Scholar] [CrossRef]
  23. Sansosti, E.; Berardino, P.; Manunta, M.; Serafino, F.; Fornaro, G. Geometrical SAR image registration. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2861–2870. [Google Scholar] [CrossRef]
  24. Zhang, G.; Fei, W.; Li, Z.; Zhu, X.; Tang, X. Analysis and Test of the Substitutability of the RPC Model for the Rigorous Sensor Model of Spaceborne SAR Imagery. Acta Geod. Cartogr. Sin. 2010, 39, 264. [Google Scholar]
  25. Wu, Y. Accurate Geometric Positioning of Spaceborne SAR Remote Sensing Imagery; Wuhan University: Wuhan, China, 2010. [Google Scholar]
  26. Deng, M.; Zhang, G.; Cai, C.; Xu, K.; Zhao, R.; Guo, F.; Suo, J. Improvement and Assessment of the Absolute Positioning Accuracy of Chinese High-Resolution SAR Satellites. Remote Sens. 2019, 11, 1465. [Google Scholar] [CrossRef]
  27. Raggam, H.; Gutjahr, K.; Perko, R.; Schardt, M. Assessment of the Stereo-Radargrammetric Mapping Potential of TerraSAR-X Multibeam Spotlight Data. IEEE Trans. Geosci. Remote Sens. 2010, 48, 971–977. [Google Scholar] [CrossRef]
Figure 1. Example diagram of a GCP. (a) Field measurement photo; (b) Google Earth optical image; (c) Gaofen-3 SAR image. (The red cross mark in each panel indicates the GCP.)
Figure 2. Image and control point distributions in Zhengzhou, Henan Province.
Figure 3. Image and control point distribution in Weinan, Shaanxi Province.
Figure 4. Image and control point distribution in Pudong, Shanghai.
Figure 5. Method of acquiring radar stereo images. (a) Ipsilateral stereo observation; (b) Heterolateral stereo observation.
Figure 6. Variation of 3-D localization accuracy with stereoscopic intersection angle.
Table 1. Number of GCPs within the test image range.

Test Area: Henan
Image ID       | HN_01 | HN_02 | HN_03 | HN_04 | HN_05 | HN_06 | HN_07 | HN_08 | HN_09 | HN_10
Number of GCPs | 8     | 6     | 9     | 9     | 8     | 9     | 7     | 5     | 6     | 8

Test Area: Shaanxi / Shanghai
Image ID       | SX_01 | SX_02 | SX_03 | SX_04 | SX_05 | SH_01 | SH_02 | SH_03 | SH_04 | SH_05
Number of GCPs | 11    | 12    | 7     | 10    | 5     | 10    | 8     | 14    | 17    | 16
Table 2. Image data information of Henan.

Image ID | Orbit      | Look Direction | Incidence Angle (°) | Acquisition Time
HN_01    | Descending | Right          | 33.8                | 14 October 2016
HN_02    | Descending | Right          | 25.7                | 7 August 2018
HN_03    | Descending | Right          | 40.4                | 29 August 2019
HN_04    | Descending | Right          | 37.4                | 6 December 2019
HN_05    | Descending | Right          | 43.2                | 11 December 2019
HN_06    | Descending | Right          | 37.4                | 7 November 2019
HN_07    | Ascending  | Right          | 37.6                | 9 November 2019
HN_08    | Ascending  | Right          | 30.2                | 14 November 2019
HN_09    | Ascending  | Right          | 21.1                | 19 November 2019
HN_10    | Descending | Right          | 40.6                | 24 November 2019
Table 3. Image data information of Shaanxi.

Image ID | Orbit      | Look Direction | Incidence Angle (°) | Acquisition Time
SX_01    | Descending | Right          | 32.0                | 14 August 2018
SX_02    | Descending | Right          | 32.0                | 11 December 2018
SX_03    | Ascending  | Right          | 28.3                | 28 July 2018
SX_04    | Ascending  | Right          | 39.5                | 4 August 2018
SX_05    | Ascending  | Right          | 19.1                | 29 September 2018
Table 4. Image data information of Shanghai.

Image ID | Orbit      | Look Direction | Incidence Angle (°) | Acquisition Time
SH_01    | Descending | Right          | 42.7                | 9 March 2018
SH_02    | Descending | Right          | 23.6                | 9 April 2018
SH_03    | Descending | Left           | 35.0                | 27 March 2018
SH_04    | Ascending  | Right          | 28.9                | 23 March 2018
SH_05    | Ascending  | Right          | 33.0                | 4 April 2018
Table 5. Estimation of 3-D localization accuracy in Zhengzhou, Henan Province.

Stereopairs Acquisition Method | Stereopair ID | Image ID | Time Width & Bandwidth (μs & MHz) | GCPs | Intersection Angle (°) | Before/After Model Optimization | East RMSE (m) | North RMSE (m) | Plane RMSE (m) | Height RMSE (m)
Heterolateral | HN_A | HN_01 | 240 & 45.000 | 6 | 83.7 | Before | 7.022 | 1.978 | 7.295 | 37.026
              |      | HN_07 | 240 & 54.990 |   |      | After  | 5.208 | 1.440 | 5.403 | 0.580
              | HN_B | HN_01 | 240 & 45.000 | 5 | 75.4 | Before | 3.887 | 1.396 | 4.130 | 33.374
              |      | HN_08 | 240 & 45.000 |   |      | After  | 4.383 | 1.075 | 4.513 | 0.803
              | HN_C | HN_01 | 240 & 45.000 | 6 | 65.7 | Before | 2.338 | 1.215 | 2.635 | 30.538
              |      | HN_09 | 240 & 45.000 |   |      | After  | 4.945 | 0.820 | 5.012 | 1.288
              | HN_D | HN_02 | 240 & 45.000 | 5 | 74.5 | Before | 9.168 | 0.812 | 9.203 | 33.546
              |      | HN_07 | 240 & 54.990 |   |      | After  | 5.221 | 0.826 | 5.286 | 0.115
              | HN_E | HN_02 | 240 & 45.000 | 4 | 66.5 | Before | 5.983 | 0.529 | 6.007 | 30.792
              |      | HN_08 | 240 & 45.000 |   |      | After  | 4.409 | 0.512 | 4.439 | 1.234
              | HN_F | HN_02 | 240 & 45.000 | 5 | 57.1 | Before | 4.569 | 0.136 | 4.571 | 28.604
              |      | HN_09 | 240 & 45.000 |   |      | After  | 4.985 | 0.220 | 4.990 | 1.682
              | HN_G | HN_03 | 240 & 54.990 | 6 | 88.7 | Before | 1.482 | 1.800 | 2.332 | 42.635
              |      | HN_07 | 240 & 54.990 |   |      | After  | 2.047 | 1.306 | 2.428 | 3.575
              | HN_H | HN_03 | 240 & 54.990 | 5 | 82.9 | Before | 1.933 | 1.337 | 2.351 | 37.687
              |      | HN_08 | 240 & 45.000 |   |      | After  | 0.907 | 1.133 | 1.452 | 1.634
              | HN_I | HN_03 | 240 & 54.990 | 6 | 73.7 | Before | 3.973 | 0.953 | 4.086 | 33.516
              |      | HN_09 | 240 & 45.000 |   |      | After  | 0.930 | 0.680 | 1.152 | 0.545
              | HN_J | HN_04 | 240 & 45.000 | 7 | 87.7 | Before | 4.824 | 1.039 | 4.935 | 38.371
              |      | HN_07 | 240 & 54.990 |   |      | After  | 4.686 | 1.036 | 4.799 | 0.558
              | HN_K | HN_04 | 240 & 45.000 | 4 | 79.4 | Before | 1.561 | 0.539 | 1.652 | 34.352
              |      | HN_08 | 240 & 45.000 |   |      | After  | 3.816 | 0.745 | 3.888 | 0.868
              | HN_L | HN_04 | 240 & 45.000 | 6 | 69.5 | Before | 0.448 | 0.154 | 0.474 | 30.911
              |      | HN_09 | 240 & 45.000 |   |      | After  | 4.140 | 0.338 | 4.154 | 1.512
              | HN_M | HN_05 | 240 & 54.990 | 5 | 85.3 | Before | 0.222 | 0.757 | 0.789 | 43.717
              |      | HN_07 | 240 & 54.990 |   |      | After  | 2.131 | 0.863 | 2.300 | 3.253
              | HN_N | HN_05 | 240 & 54.990 | 3 | 86.2 | Before | 3.592 | 0.300 | 3.605 | 38.178
              |      | HN_08 | 240 & 45.000 |   |      | After  | 1.083 | 0.294 | 1.123 | 1.074
              | HN_O | HN_05 | 240 & 54.990 | 5 | 76.2 | Before | 5.927 | 0.613 | 5.958 | 33.604
              |      | HN_09 | 240 & 45.000 |   |      | After  | 1.037 | 0.490 | 1.147 | 0.204
              | HN_P | HN_06 | 240 & 45.000 | 6 | 87.7 | Before | 5.907 | 1.954 | 6.222 | 37.892
              |      | HN_07 | 240 & 54.990 |   |      | After  | 5.376 | 1.712 | 5.642 | 0.343
              | HN_Q | HN_06 | 240 & 45.000 | 4 | 79.4 | Before | 2.980 | 1.672 | 3.417 | 33.961
              |      | HN_08 | 240 & 45.000 |   |      | After  | 4.745 | 1.628 | 5.016 | 1.111
              | HN_R | HN_06 | 240 & 45.000 | 5 | 69.5 | Before | 1.308 | 1.221 | 1.789 | 30.773
              |      | HN_09 | 240 & 45.000 |   |      | After  | 5.174 | 1.088 | 5.288 | 1.575
              | HN_S | HN_07 | 240 & 54.990 | 6 | 88.6 | Before | 0.444 | 0.422 | 0.612 | 43.642
              |      | HN_10 | 240 & 54.990 |   |      | After  | 0.488 | 0.479 | 0.684 | 4.540
              | HN_T | HN_08 | 240 & 45.000 | 5 | 82.4 | Before | 4.165 | 0.339 | 4.179 | 38.494
              |      | HN_10 | 240 & 54.990 |   |      | After  | 0.878 | 0.569 | 1.046 | 2.395
              | HN_U | HN_09 | 240 & 45.000 | 5 | 73.6 | Before | 6.649 | 0.631 | 6.679 | 34.067
              |      | HN_10 | 240 & 54.990 |   |      | After  | 1.232 | 0.555 | 1.351 | 1.048
Ipsilateral   | HN_V | HN_02 | 240 & 45.000 | 5 | 21.2 | Before | 26.648 | 6.579 | 27.448 | 13.320
              |      | HN_05 | 240 & 54.990 |   |      | After  | 7.562 | 3.329 | 8.262 | 7.169
Table 6. Estimation of 3-D localization accuracy in Weinan, Shaanxi Province.

Stereopairs Acquisition Method | Stereopair ID | Image ID | Time Width & Bandwidth (μs & MHz) | GCPs | Intersection Angle (°) | Before/After Model Optimization | East RMSE (m) | North RMSE (m) | Plane RMSE (m) | Height RMSE (m)
Heterolateral | SX_A | SX_01 | 240 & 45.000 | 6 | 71.4 | Before | 17.342 | 12.085 | 21.137 | 33.004
              |      | SX_03 | 240 & 35.009 |   |      | After  | 4.560 | 1.691 | 4.863 | 0.384
              | SX_B | SX_01 | 240 & 45.000 | 5 | 83.7 | Before | 18.292 | 11.190 | 21.443 | 36.506
              |      | SX_04 | 240 & 45.000 |   |      | After  | 6.239 | 1.677 | 6.460 | 1.562
              | SX_C | SX_01 | 240 & 45.000 | 6 | 61.6 | Before | 12.221 | 13.629 | 18.305 | 26.083
              |      | SX_05 | 240 & 35.009 |   |      | After  | 2.130 | 2.647 | 3.397 | 5.923
              | SX_D | SX_02 | 240 & 45.000 | 5 | 71.4 | Before | 5.302 | 2.493 | 5.859 | 33.457
              |      | SX_03 | 240 & 35.009 |   |      | After  | 5.838 | 2.715 | 6.439 | 0.567
              | SX_E | SX_02 | 240 & 45.000 | 4 | 83.6 | Before | 9.794 | 1.827 | 9.963 | 37.870
              |      | SX_04 | 240 & 45.000 |   |      | After  | 6.975 | 1.925 | 7.236 | 1.600
              | SX_F | SX_02 | 240 & 45.000 | 5 | 61.6 | Before | 3.367 | 4.102 | 5.307 | 25.322
              |      | SX_05 | 240 & 35.009 |   |      | After  | 0.652 | 3.880 | 3.934 | 5.540
Table 7. Estimation of 3-D localization accuracy in Pudong, Shanghai.

Stereopairs Acquisition Method | Stereopair ID | Image ID | Time Width & Bandwidth (μs & MHz) | GCPs | Intersection Angle (°) | Before/After Model Optimization | East RMSE (m) | North RMSE (m) | Plane RMSE (m) | Height RMSE (m)
Heterolateral | SH_A | SH_02 | 240 & 35.009 | 6 | 66.6 | Before | 7.808 | 2.138 | 8.096 | 31.309
              |      | SH_03 | 240 & 45.000 |   |      | After  | 4.130 | 2.414 | 4.783 | 1.656
              | SH_B | SH_02 | 240 & 35.009 | 8 | 63.0 | Before | 11.178 | 1.118 | 11.234 | 29.434
              |      | SH_04 | 240 & 35.009 |   |      | After  | 9.511 | 1.213 | 9.588 | 2.242
              | SH_C | SH_02 | 240 & 35.009 | 8 | 67.4 | Before | 11.295 | 1.209 | 11.359 | 27.889
              |      | SH_05 | 240 & 45.000 |   |      | After  | 8.057 | 1.461 | 8.188 | 4.893