1. Introduction
Benefiting from the rapid development of space transportation and sensor manufacturing technologies, the acquisition capacity of remote sensing data has been greatly enhanced [1]. A complete air–space–ground remote sensing data acquisition system has now taken shape, characterized by efficiency, diversity (multi-platform, multisensor, multiscale), and high resolution (spectral, spatial, temporal), providing massive unstructured remote sensing data [2]. Different types of sensors have different imaging principles and properties; therefore, data from different sensors usually reflect different aspects of the ground objects and excel in different application areas. For example, optical imagery is widely applied in surveying and mapping, disaster management [3], and change detection [4]; SAR imagery is used for surface subsidence monitoring [5], DEM/DSM generation [6], and target recognition [7]; and infrared imagery is applied in soil moisture inversion [8] and forest fire monitoring [9]. Although the processing and application technology for a single type of sensor data has reached a high level, a single sensor type only encodes the electromagnetic wave information of a specific band. Considering the complementarity between different types of sensor data, researchers have started to fuse multi-type sensor data to solve traditional remote sensing problems, such as land cover analysis [10,11,12,13], change detection [14,15,16], image classification [17], and image fusion [18,19]. As a critical problem in the field of photogrammetry and remote sensing, topographic mapping is mainly completed using optical satellite images [20]. However, the classical orientation methods of satellite images are inefficient and have significant limitations in steep terrain. The comprehensive utilization of multi-type sensor data is a promising way to achieve rapid and high-precision geometric orientation of satellite images, promoting fast and accurate global surveying and mapping [21].
The rigorous sensor model (RSM) and the rational function model (RFM) are two commonly used models for building the geometric relationship between image coordinates and object coordinates [22]. The rational polynomial coefficients (RPCs) utilized in the RFM are derived from the RSM, and therefore the geolocation accuracy of the RFM is at best equal to that of the RSM [23]. However, the RFM has the advantage of universality compared with the RSM, making it easy to apply to different optical satellites. Due to inaccurate measurements of satellite location and orientation in space, high-precision geolocation is hard to achieve directly. Since the accuracy of the satellite location, which is usually measured by a global navigation satellite system (GNSS), is significantly higher than that of the measured attitude angles, the measurement errors of the attitude angles are the main reason for the low geolocation accuracy of optical satellite data [24]. In general, on-orbit geometric calibration [25] and external control correction [26] are two typical methods to improve the geolocation accuracy. In particular, the external control information can be provided by traditional field-measured ground control points (GCPs) [27,28,29], topographic maps, or orthoimage maps [30,31]. Measuring GCPs in the field requires a large amount of manual work and is not applicable in dangerous areas, while the method of using reference images [32], where the GCPs are obtained by manual selection or automatic image matching, is simpler and more efficient. However, reference images are unavailable in most cases, restricting their wide application.
As active remote sensing satellites, SAR satellites differ significantly from optical satellites in imaging principle and data acquisition concept. In particular, SAR satellites measure the backscattered signals of the reflecting ground objects in a side-view mode (at an angle between 20° and 60° with respect to the nadir direction) along the flight path. In post-processing, pulse compression technology is used to improve the range resolution, and synthetic-aperture technology is used to simulate an equivalent large-aperture antenna to improve the azimuth resolution, so that SAR satellite images consistently achieve a high spatial resolution [33]. For example, the spatial resolution of TerraSAR-X is 1.5 × 1.5 m in spotlight mode and 3 × 3 m in stripmap mode [34]; the spatial resolution of GF-3 is 1 × 1 m in spotlight mode and 3 × 3 m in stripmap mode [35]; and the spatial resolution of TH-2 is 3 × 3 m in stripmap mode [36]. Benefiting from the approximately spherical wave emitted by the radar antenna and the measurement scheme of SAR satellites, the geolocation accuracy is not sensitive to the satellite attitude or weather conditions. In addition, SAR satellite images exhibit high geopositioning accuracy thanks to precise orbit determination technology and atmospheric correction models. Bresnahan [37] evaluated the absolute geopositioning accuracy of TerraSAR-X using 13 field test sites and found that the RMSE is around 1 m in spotlight mode and 2 m in stripmap mode. Eineder et al. [38] verified that the geometric positioning accuracy of TerraSAR-X images can reach within one pixel in both the range and azimuth directions and even reach centimeter level for specific ground targets. Lou et al. [36] investigated the absolute geopositioning accuracy of a Chinese SAR satellite, TH-2, in several calibration fields distributed across China and Australia and found that the RMSE reaches 10 m.
Considering this high and stable absolute geopositioning accuracy, SAR satellite data, which can be obtained in most areas worldwide, can be a reliable source for providing high-precision control information for the accurate orientation of optical satellite images. Similar to the methods using orthoimage maps, tie points are identified between the SAR orthophoto and the optical satellite image, and these tie points are taken as GCPs to improve the orientation accuracy of optical satellite images [39,40]. Reinartz et al. [41] used adaptive mutual information to match SAR and optical satellite images and employed the matches to optimize the original RPCs. The results showed that the geometric positioning accuracy of the optical satellite is largely improved and reaches within 10 m. However, this method does not consider complex terrain and cannot work with multiple SAR reference images. Merkle et al. [42] used a large number of pre-registered TerraSAR-X and PRISM image datasets to train a Siamese neural network and used the network model to extract corresponding points between the SAR and optical satellite images. The matching points were employed to optimize the sensor model, enhancing the geolocation accuracy. However, this method cannot handle optical satellite and SAR images with large offsets and cannot work on large optical satellite images due to the limited GPU memory.
The significant differences in acquisition principles, viewing perspectives, and wavelength responses between SAR and optical satellite images make it challenging to find reliable correspondences between them [42]. More specifically, SAR sensors acquire data in a sideways-looking manner, which may cause typical geometric distortions such as foreshortening, occlusion, and shadowing, especially in areas with large height changes. In contrast, optical satellite images are acquired in the nadir view or at small fixed slant angles to provide high geometric accuracy. Consequently, the same object may have a completely different appearance in SAR and optical satellite images. Moreover, these two types of sensors receive and apply electromagnetic waves in different bands: the radar signal wavelength is on the order of centimeters, while that of optical signals is on the order of nanometers. The different wavelengths encode different aspects of object properties, leading to significant intensity differences between SAR and optical satellite images of the same object. Additionally, the signal polarization, roughness, and reflectance properties of the object surface may also greatly influence the wavelength responses. Finally, the speckle effect in SAR images further increases the difficulty of matching them with optical satellite images.
Figure 1 shows the difference between an optical and a high-resolution SAR image for a selected scene containing man-made structures and vegetation.
In this paper, we propose an improved multimodal image matching descriptor, the angle-weighted oriented gradient (AWOG), for SAR and optical satellite images, obtaining a robust and accurate matching performance. To describe the image features, we calculate, for each pixel, a multi-dimensional feature vector over multiple uniformly distributed orientations based on the horizontal and vertical gradient maps $g_x$ and $g_y$, as in the recent multimodal image matching method CFOG [43]. Note that the range of [0°, 180°) is divided into several evenly distributed orientations; we calculate the gradient direction and magnitude from $g_x$ and $g_y$, find the two orientations nearest to the image gradient direction, and distribute the gradient magnitude to these two orientations according to their weights. In this way, the image gradient value is only allocated to the two most related orientations rather than to all orientations, increasing the distinguishability of the feature vectors. During this process, we open a local window on the whole image and assign the two weighted amplitudes of each pixel in the window to the corresponding orientation elements of the feature vector. After the gradients of all pixels in the window have been assigned, the feature vector of the target pixel is obtained and normalized to reduce the influence of the nonlinear intensity difference. In addition, we introduce a quick lookup table named the feature orientation index (FOI), which converts the inefficient per-pixel calculation into matrix operations and helps to assign the weighted values to their corresponding elements of the feature vector, significantly improving the computational efficiency. Additionally, in terms of the similarity metric, we utilize phase correlation (PC) [44] rather than metrics in the spatial domain to speed up the matching process, but we omit the normalization operation of PC since the descriptor is already normalized during feature vector generation.
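To make the descriptor construction concrete, the following is a minimal NumPy sketch of the angle-weighted orientation binning described above; the bin count, window size, and function names are illustrative assumptions rather than the authors' reference implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def awog_features(img, n_orient=8, window=5):
    """Sketch of an angle-weighted oriented gradient feature map.

    Each pixel's gradient magnitude is split between the two orientation
    bins nearest to its gradient direction (in [0, 180)), the weighted
    votes are aggregated over a local window, and the per-pixel feature
    vector is normalized against nonlinear intensity differences.
    """
    img = img.astype(np.float64)
    gx = np.gradient(img, axis=1)                 # horizontal gradient map
    gy = np.gradient(img, axis=0)                 # vertical gradient map
    mag = np.hypot(gx, gy)
    ang = np.mod(np.degrees(np.arctan2(gy, gx)), 180.0)

    step = 180.0 / n_orient                       # orientation spacing
    pos = ang / step
    low = np.floor(pos).astype(int)               # feature orientation index
    w_high = pos - low                            # weight of the upper bin
    low_idx = np.mod(low, n_orient)
    high_idx = np.mod(low + 1, n_orient)

    rows, cols = np.indices(img.shape)
    feat = np.zeros(img.shape + (n_orient,))
    # Each gradient votes into only its two nearest orientations.
    np.add.at(feat, (rows, cols, low_idx), mag * (1.0 - w_high))
    np.add.at(feat, (rows, cols, high_idx), mag * w_high)

    # Aggregate the votes of neighboring pixels, channel by channel.
    for k in range(n_orient):
        feat[..., k] = uniform_filter(feat[..., k], size=window)

    # Per-pixel normalization of the feature vector.
    norm = np.linalg.norm(feat, axis=-1, keepdims=True)
    return feat / np.maximum(norm, 1e-10)
```

Matching then reduces to comparing these feature maps with phase correlation in the frequency domain, with the normalization step of PC omitted as noted above.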
In addition, we propose a precise geometric orientation framework for raw optical satellite images based on SAR orthophotos. Specifically, we first collect all available SAR images of the study area and perform overlap detection between the SAR images and the target optical satellite image. Only the SAR images with large overlaps are retained as the reference image set. Then, we create image pyramids for the optical satellite image and all the reference SAR images and detect feature points using the SAR-Harris feature detector [45]. To ensure both the uniformity and the high discrimination of the obtained feature points, we introduce an image block strategy. After that, we transfer the point features to their corresponding ground points and reproject the ground points onto the optical satellite image. The reprojected locations on the satellite image are taken as the initial correspondences of the feature points of the SAR images, which are then refined using AWOG. We further conduct error elimination to remove unreliable matches. The retained matches are deemed correct and taken as virtual control points (VCPs). Finally, we take the VCPs and the optical satellite image as input and perform RFM-based block adjustment to achieve an accurate orientation of the optical satellite image.
In summary, the main contributions of our work are as follows:
We propose a robust feature descriptor AWOG to match SAR and optical satellite images and introduce a PC-based matching strategy, obtaining stable and reliable matching point pairs with higher efficiency.
We put forward a framework for an accurate geometric orientation of optical images using the VCPs provided by SAR images.
Various experiments on SAR and optical satellite image datasets verify the superiority of AWOG over the other state-of-the-art multimodal image matching descriptors. Compared with CFOG, the correct matching ratio is improved by about 17%, and the RMSE of location errors is reduced by about 0.1 pixels. Additionally, the efficiency of the proposed method is comparable to that of CFOG and about 80% higher than that of other state-of-the-art methods such as HOPC, DLSS, and MI.
We further prove the effectiveness of the proposed method for the geometric orientation of optical satellite images using multiple SAR and optical satellite image pairs. Significantly, the geopositioning accuracy of the optical satellite images is improved from more than 200 m to around 8 m.
This paper is structured as follows: Section 2 introduces related work on SAR and optical image matching. Section 3 presents the proposed matching method for SAR and optical satellite images. Section 4 introduces a general orientation framework for optical satellite images using SAR images. Section 5 presents the experiments and analysis concerning the proposed method. Section 6 discusses several vital details. Finally, we conclude our work and discuss future research directions in Section 7.
2. Related Work
Current SAR and optical satellite image matching studies can be roughly divided into feature-based and area-based methods [46].
Feature-based methods detect salient features in images and match them based on similarity. Features can be seen as a compact representation of the whole image, making them robust to geometric and radiometric changes, and can be classified into corner features [47,48], line features [49,50], and surface features [51]. For matching SAR and optical satellite images, Fan et al. [52] proposed the SIFT-M algorithm, which extracts SIFT [53] features from both images, constructs distinct feature descriptors from multiple support regions, and introduces a spatially consistent matching strategy to match these feature points. Compared to SIFT, SIFT-M obtains more matches with higher efficiency and accuracy. Based on SIFT-M, Xiang et al. [54] further proposed the OS-SIFT algorithm. They detected feature points in a Harris scale space built on the consistent gradients of both SAR and optical satellite images and computed histograms of multiple image patches to enhance the distinguishability of the feature descriptors, increasing the robustness and performance of SIFT-M. Salehpour et al. [55] proposed a hierarchical method based on the BRISK operator [56]. They used an adaptive, elliptical bilateral filter to remove the speckle noise in SAR images and introduced a hierarchical approach to exploit the local and global geometric relationships of BRISK features. This method has high accuracy, but the obtained matches are few and not well distributed over the full image. To improve the robustness against the significant nonlinear radiation distortions between SAR and optical images, Li et al. [57] proposed the radiation-variation insensitive feature transform (RIFT). This approach first detects feature points on the phase congruency map [58], considering both their repeatability and number, and then constructs a maximum index map (MIM) based on a log-Gabor convolution sequence. Moreover, rotation invariance is achieved by analyzing the value changes of the MIM. The results showed that RIFT performs well and is stable in multimodal image matching. Based on RIFT, Cui et al. [59] further proposed SRIFT. They first established a multiscale space by using a nonlinear diffusion technique. Then, a local orientation and scale phase congruency (LOSPC) algorithm was used to obtain sufficiently stable key points. Finally, a rotation-invariant coordinate (RIC) system was employed to achieve rotation invariance. Compared to RIFT, SRIFT obtains more matches and strengthens the robustness against scale change. Wang et al. [60] proposed a phase congruency-based method, ROS-PC, to register optical and SAR images, combining a uniform Harris feature detection method and a novel local feature descriptor based on the histogram of the phase congruency orientation. The results showed that ROS-PC is robust to nonlinear radiometric differences between optical and SAR images and can tolerate some rotation and scale changes. To increase the discriminability of the classical local self-similarity (LSS) descriptor [61], Sedaghat et al. [62] proposed a distinctive order-based self-similarity descriptor (DOBSS) based on the separable sequence. To fulfill multimodal image matching, they first extracted UR-SIFT [63] feature points from both images, then constructed DOBSS descriptors, and finally utilized cross-matching and consistency checks in the projective transformation model to find the correct correspondences. This method not only obtains more matches but also shows better robustness and accuracy than SIFT and LSS. Beyond methods based on point features, Sui et al. [64] proposed a multimodal image matching method combining iterative line feature extraction and Voronoi-integrated spectral point matching, increasing the matching reliability. However, this method requires a large number of lines in the scene. Hnatushenko et al. [65] proposed a variational approach based on contour features for the co-registration of SAR and optical images in agricultural areas. They performed a special pre-processing of the SAR and optical images and achieved co-registration by transforming it into a constrained optimization problem. This method is quite efficient and fully automatic. Xu et al. [66] proposed a surface feature-based method that employs iterative level-set-based area feature segmentation to take advantage of surface features. However, this method can only be applied to images containing significant regional boundaries, such as lakes and farmland. Even though features have the advantage of flexibility and robustness for multimodal images, their repeatability is relatively low due to the large geometric deformations and nonlinear intensity changes [67,68]. For example, the state-of-the-art method RIFT only reaches an inlier ratio of about 15%, limiting the performance of feature-based methods in matching SAR and optical satellite images.
Area-based methods realize image matching by opening a matching window on the original images and utilizing a similarity metric to find the matches [69]. Normalized cross-correlation (NCC) [70] and mutual information (MI) [71] are two popular metrics for matching optical satellite image pairs. However, NCC cannot handle the matching of SAR and optical images in most cases due to the nonlinear intensity deformation. By contrast, MI describes the statistical information of the image and is robust to nonlinear changes. However, MI does not consider the influence of neighborhood pixels, so it easily falls into local extrema. Moreover, MI is sensitive to the image window size and requires a large amount of computation. Suri and Reinartz [72] analyzed the performance of MI on the registration of TerraSAR-X and Ikonos imagery over urban areas and found that the MI-based method can successfully eliminate large global shifts. Apart from MI, which works directly on multimodal image matching, another approach is to transform the multimodal images into a uniform representation. The phase correlation (PC) algorithm developed by Kuglin and Hines [44] can be used to transform both SAR and optical satellite images into the frequency domain. This approach shows some robustness against nonlinear intensity distortion and has high efficiency. To explore the inherent structure information of SAR and optical satellite images, Xiang et al. proposed the OS-PC algorithm [73], which combines robust feature representations and three-dimensional PC and exhibits high robustness to radiometric and geometric differences.
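To illustrate the frequency-domain idea behind PC, the sketch below estimates the translation between two equally sized images from the peak of the inverse FFT of the normalized cross-power spectrum; this is the textbook Kuglin–Hines formulation, not the implementation of any specific method discussed above.

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the (row, col) translation of img_b relative to img_a."""
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    cross = fa * np.conj(fb)
    cross /= np.maximum(np.abs(cross), 1e-12)     # keep phase information only
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Convert peak indices to signed shifts (FFT wrap-around).
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```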
Some researchers have noticed that even though large nonlinear intensity differences exist between SAR and optical satellite images, the morphological region features remain [74,75]. Therefore, they first extract the conjugated image structure and then match the structural images, increasing the matching performance. Li et al. [76] used the histogram of oriented gradients (HOG) descriptor [77] to express the image structure and used NCC as the similarity metric, obtaining good matching results. Based on HOG, Ye et al. [78] proposed the histogram of oriented phase congruency (HOPC) feature descriptor by replacing the gradient feature with a phase congruency feature, obtaining good performance in the matching of SAR and optical satellite images. Zhang et al. [79] applied HOPC to a combined block adjustment framework for optical satellite stereo imagery assisted by spaceborne SAR images and laser altimetry data. Ye et al. [80] further proposed a similarity metric, DLSC, using the NCC of a novel dense local self-similarity (DLSS) descriptor based on shape properties. Since DLSS captures the self-similarities within images, it is robust to large nonlinear intensity distortions. However, HOG, HOPC, and DLSS only construct descriptors for a few salient features. Thus, they are sparse representations and omit some available and important structure information. To solve this issue, Ye et al. [43] further proposed the channel features of oriented gradients (CFOG). CFOG computes the adjacent structure features pixel by pixel, enhancing the ability to describe structural details. In terms of the similarity metric, they applied the convolution theorem and the Fourier transform to transfer the sum of squared differences (SSD) measure from the spatial domain to the frequency domain, achieving a fast calculation. Ye et al. [81] used CFOG to fulfill the co-registration of Sentinel-1 SAR and Sentinel-2 optical images and evaluated the performance of various geometric transformation models, including polynomials, projective models, and RFMs. However, CFOG only computes the gradient maps in the horizontal and vertical directions and uses them to interpolate the gradients in all other orientations, which brings ambiguity and decreases the accuracy.
With the development of deep learning, some researchers have explored deep learning-based techniques for the matching of SAR and optical satellite images. Merkle et al. [82] investigated the potential of conditional generative adversarial networks (cGANs) for this task. They first trained cGANs to generate SAR-like image patches from optical satellite images and then achieved the matching of SAR and optical images using NCC, SIFT, and BRISK within the same image modality. Following the same idea, Zhang et al. [83] employed a deep learning-based image transfer method to eliminate the difference between SAR and optical images and applied a traditional method to match the multimodal remote sensing images. Ma et al. [84] proposed a two-step registration method for homologous and multimodal remote sensing images. They first computed the approximate relationship with a deep convolutional neural network, VGG-16 [85], and then adopted a robust local feature-based matching strategy to refine the initial results. Similarly, Li et al. [86] applied a two-step process to estimate the rigid body rotation between SAR and optical images. They first utilized a deep learning network called RotNET to coarsely predict the rotation matrix between the two images. After that, a better matching result was obtained using a novel local feature descriptor developed from the Gaussian pyramid. Zhang et al. [87] set up a Siamese fully convolutional network and trained it with a loss function that maximizes the feature distance between positive and negative samples. Notably, they built a universal pipeline for multimodal remote sensing image registration. Hughes et al. [88] developed a threefold approach for the matching of SAR and optical images: they first found the most suitable matching regions using a goodness network, then built a correspondence heatmap with a correspondence network, and finally eliminated outliers with an outlier reduction network. Even though these methods outperformed the competing traditional geometric methods in their experiments, current deep learning-based methods still suffer from the following problems. Firstly, there is no multimodal remote sensing dataset large enough to train a deep learning network for general use. Secondly, deep learning-based methods place heavy demands on computational resources, especially the GPU, and their efficiency is relatively low on most computer setups. Finally, current learning-based methods cannot be applied to full-scale satellite remote sensing images. These shortcomings restrict the practical application of deep learning techniques to the matching or registration of multimodal remote sensing images.
4. A General Orientation Framework of Optical Satellite Images Using SAR Images
In this section, we put forward a general framework for the automatic orientation of optical satellite images based on SAR orthophotos. Firstly, we perform overlap detection between the optical satellite image and the SAR images to find the qualified images as the reference SAR image set. Then, image pyramids are constructed for all the SAR and optical satellite images. The SAR-Harris feature point detector, combined with an image dividing strategy [94], is utilized on the pyramid images to retain enough high-quality features across the entire image. After that, the SAR image and DEM data are used for rough geometric correction to coarsely eliminate the rotation and scale differences, and AWOG is applied to the rectified images to obtain accurate corresponding points, which are taken as virtual control points (VCPs). Finally, RFM-based block adjustment [95] is carried out for the optical satellite image by using the VCPs, and the accurate orientation parameters are output. The specific steps are as follows:
(1) The available SAR images of the target study area are collected, and the reference SAR image set is built through overlap detection. We check the ground coverings of the SAR images and the optical satellite image, and only the SAR images whose ground areas overlap that of the optical image are retained as reference images. Specifically, we read the metadata of each SAR reference image to obtain the coordinates of the four image corners and other affiliated information and project the four corners onto the ground. In this way, the longitude and latitude ranges of the ground covering of the SAR reference image are obtained. For the optical satellite image, the coordinates of the ground covering can be read directly from its metadata. If the two ground coverings intersect, we deem the two images overlapped; otherwise, there is no overlap between them.
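A minimal sketch of this footprint test, assuming the corner longitudes and latitudes have already been read from the metadata (all names are hypothetical):

```python
def footprint_bounds(corners):
    """Axis-aligned lon/lat bounds of a footprint given its four
    ground-projected corners as [(lon, lat), ...]."""
    lons, lats = zip(*corners)
    return min(lons), min(lats), max(lons), max(lats)

def footprints_overlap(sar_corners, opt_corners):
    """True if the two ground coverings intersect (bounding-box test)."""
    a, b = footprint_bounds(sar_corners), footprint_bounds(opt_corners)
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

# Keep only the SAR scenes whose ground covering overlaps the optical scene:
# reference_set = [s for s in sar_scenes
#                  if footprints_overlap(s.corners, optical.corners)]
```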
(2) Image pyramids are constructed for the optical satellite and reference SAR images, and high-quality feature points are detected on the SAR images. If the resolution difference between the reference image and the optical satellite image is no more than a factor of two, the number of pyramid layers and the zoom ratio between adjacent levels are set to 4 and 2, respectively. If the resolution difference between the images is larger, a certain number of pyramid layers are added to the higher-resolution image according to the multiple of the resolution difference, and a lookup table is established to match the images between the levels with a smaller resolution difference. Then, the SAR-Harris corner detector [45] is employed to detect feature points on each layer of the pyramid of the reference SAR image. Additionally, a block strategy [94] is applied to ensure the uniformity of the point distribution. All subsequent steps start from the top pyramid layer.
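The block strategy can be sketched as follows: each pyramid layer is divided into a regular grid, and only the strongest SAR-Harris responses per cell are kept, which enforces an even spatial distribution. The grid size and per-cell count below are illustrative assumptions.

```python
import numpy as np

def select_by_block(points, responses, img_shape, grid=(8, 8), per_cell=10):
    """Keep the top-response feature points in each grid cell.

    points:    (N, 2) array of (row, col) feature coordinates
    responses: (N,) SAR-Harris corner responses
    """
    h, w = img_shape
    cell_r = np.clip((points[:, 0] * grid[0] / h).astype(int), 0, grid[0] - 1)
    cell_c = np.clip((points[:, 1] * grid[1] / w).astype(int), 0, grid[1] - 1)
    cell_id = cell_r * grid[1] + cell_c

    keep = []
    for cid in np.unique(cell_id):
        idx = np.where(cell_id == cid)[0]
        # Strongest responses first, at most `per_cell` per grid cell.
        keep.extend(idx[np.argsort(responses[idx])[::-1][:per_cell]])
    return points[np.array(keep)]
```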
(3) There are always large geometric deformations between the optical satellite and SAR images. Therefore, we first perform image reshaping (as shown in Figure 7) to coarsely eliminate the rotation and scale differences and then use AWOG to match the images. First, the template window is determined on the reference SAR image with an obtained feature point as the center, and the ground area corresponding to the template window is acquired based on an approximate DEM of the study area, which can be provided by SRTM. To obtain the rough matching area, we reproject the ground area onto the optical satellite image with the RFM. Furthermore, an affine transformation model between the search area of the optical satellite image and the template window of the reference image is established. The search area of the optical satellite image is resampled by bilinear interpolation according to the established transformation model to eliminate the rotation and scale differences. Finally, AWOG is applied to match the roughly geo-rectified image with the reference SAR image, and the RANSAC algorithm is utilized to eliminate unreliable matches.
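In Python-like pseudocode, this step amounts to the loop below for every SAR feature point; `project_to_ground`, `rfm_project`, and `awog_match` are hypothetical helpers standing in for the DEM/RFM projections and the AWOG/PC matcher described above, so this is a structural sketch rather than a working implementation.

```python
import numpy as np
import cv2

def match_one_point(opt_img, pt_sar_xy, dem, rpcs, tmpl=61, search=121):
    """Rough rectification and template matching for one SAR feature point."""
    half = tmpl // 2
    x, y = pt_sar_xy                                  # (col, row) in the SAR image
    corners_sar = np.float32([[x - half, y - half], [x + half, y - half],
                              [x + half, y + half], [x - half, y + half]])

    ground = project_to_ground(corners_sar, dem)      # SAR image -> ground (DEM)
    corners_opt = rfm_project(ground, rpcs)           # ground -> optical image (RFM)

    # Center the template corners inside a larger search window and estimate
    # the affine model between the optical search area and the SAR template.
    corners_dst = corners_sar - np.float32([x, y]) + search // 2
    affine, _ = cv2.estimateAffine2D(np.float32(corners_opt), corners_dst)
    rectified = cv2.warpAffine(opt_img, affine, (search, search),
                               flags=cv2.INTER_LINEAR)  # bilinear resampling

    # AWOG + phase correlation refine the correspondence; RANSAC over all
    # points removes unreliable matches afterwards.
    return awog_match(rectified, corners_dst, tmpl)
```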
(4) The RFM-based affine transformation compensation model [95] is applied to improve the orientation accuracy of the original RPCs, and this process is completed by block adjustment with multiple reference SAR images. The affine transformation model has been widely applied to compensate the deviation of the RPCs of high-resolution optical satellite images and can be expressed as follows:

$$\Delta x = a_0 + a_1 x + a_2 y, \qquad \Delta y = b_0 + b_1 x + b_2 y \tag{12}$$

where $x$ and $y$ are the column and row of a tie point in the optical satellite image, $\Delta x$ and $\Delta y$ are the compensation values in the column and row directions, and $a_i$ and $b_i$ ($i = 0, 1, 2$) are the unknown affine model parameters. In particular, $a_0$ and $b_0$ compensate the row- and column-direction errors on the image caused by the attitude and position inaccuracy in the satellite moving and scanning directions of the CCD linear sensor, respectively; $a_1$ and $b_1$ compensate the errors caused by the drifts of the inertial navigation system and GPS, respectively; and $a_2$ and $b_2$ compensate the errors caused by an inaccurate interior orientation.
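As a minimal sketch, applying the compensation of Equation (12) to coordinates projected with the original RPCs is straightforward:

```python
def compensate_rpc(x_proj, y_proj, a, b):
    """Apply the affine bias compensation of Eq. (12).

    x_proj, y_proj: image coordinates from the original RPCs (arrays or scalars)
    a, b:           parameter triples (a0, a1, a2) and (b0, b1, b2)
    """
    dx = a[0] + a[1] * x_proj + a[2] * y_proj
    dy = b[0] + b[1] * x_proj + b[2] * y_proj
    return x_proj + dx, y_proj + dy
```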
In order to obtain, on the optical satellite image, the initial positions corresponding to the image points of the SAR reference image, we first need the geographic coordinates $(\lambda_s, \phi_s)$ of an image point in the SAR reference image, which can be calculated as follows:

$$E_s = E_0 + x_s \cdot \Delta E, \qquad N_s = N_0 - y_s \cdot \Delta N, \qquad (\lambda_s, \phi_s) = T(E_s, N_s) \tag{13}$$

where $\lambda_s$ and $\phi_s$ are the longitude and latitude of point $p_s$ with image coordinates $x_s$ and $y_s$ (the index $s$ denotes the SAR scene); $E_s$ and $N_s$ are the projection coordinates of point $p_s$; $T$ is a transformation function to transform point coordinates from the projection system to the reference system; $E_0$ and $N_0$ are the projection coordinates of the upper left corner of the SAR reference image; and $\Delta E$ and $\Delta N$ are the resolutions of the SAR reference image in the column and row directions, respectively.
After obtaining the ground points corresponding to the feature points of the reference image, the back-projection locations of these ground points on the optical satellite image can be calculated using the RPCs and the DEM of the target area, which is provided by the SRTM in this study:

$$x_n = \frac{P_1(\lambda_n, \phi_n, h_n)}{P_2(\lambda_n, \phi_n, h_n)}, \qquad y_n = \frac{P_3(\lambda_n, \phi_n, h_n)}{P_4(\lambda_n, \phi_n, h_n)} \tag{14}$$

where $x_n$ and $y_n$ are the normalized image coordinates of the tie point $p$ computed by the forward rational functions of the optical satellite image, $P_i$ ($i = 1, \ldots, 4$) are cubic polynomials, and $(\lambda_n, \phi_n, h_n)$ are the normalized longitude, latitude, and altitude of the tie point $p$.
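Chaining Equations (13) and (14) gives the initial correspondence of a SAR feature point on the optical image. The sketch below uses `pyproj` for the transformation $T$ and abbreviates the RPC evaluation, whose coefficient layout varies by vendor; the `rpc` attribute names are assumptions, not a standard API.

```python
from pyproj import Transformer

def sar_pixel_to_lonlat(xs, ys, E0, N0, dE, dN, crs_proj):
    """Eq. (13): SAR image coordinates -> projected -> geographic."""
    Es = E0 + xs * dE
    Ns = N0 - ys * dN
    t = Transformer.from_crs(crs_proj, "EPSG:4326", always_xy=True)
    return t.transform(Es, Ns)                     # (lon, lat)

def rfm_forward(lon, lat, h, rpc):
    """Eq. (14): ground point -> image coordinates via the forward RFM."""
    L = (lon - rpc.lon_off) / rpc.lon_scale        # normalized longitude
    P = (lat - rpc.lat_off) / rpc.lat_scale        # normalized latitude
    H = (h - rpc.h_off) / rpc.h_scale              # normalized height
    xn = rpc.P1(L, P, H) / rpc.P2(L, P, H)         # cubic polynomial ratios
    yn = rpc.P3(L, P, H) / rpc.P4(L, P, H)
    # Un-normalize to pixel coordinates.
    return xn * rpc.samp_scale + rpc.samp_off, yn * rpc.line_scale + rpc.line_off
```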
For each tie point $p$ on the optical image, by combining Equations (12) and (14), the block adjustment can be described as

$$v_x = \left(\hat{x} + a_0 + a_1 \hat{x} + a_2 \hat{y}\right) - x, \qquad v_y = \left(\hat{y} + b_0 + b_1 \hat{x} + b_2 \hat{y}\right) - y \tag{15}$$

where $x$ and $y$ denote the column and row of the tie point $p$ in the optical image, acquired by performing image matching, and $\hat{x}$ and $\hat{y}$ are the un-normalized image coordinates of the tie point after back-projection using the RPCs.
Finally, the optimal estimates of the affine transformation parameters $a_i$ and $b_i$ are obtained by iterative least squares adjustment.
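Since Equation (15) is linear in the six unknowns, each VCP contributes two observation equations, and a single solution step reduces to ordinary least squares; in practice the iteration re-solves after down-weighting or removing large residuals (not shown). A compact sketch of one solve:

```python
import numpy as np

def solve_affine_parameters(x_hat, y_hat, x_obs, y_obs):
    """One least-squares solve of Eq. (15) for (a0, a1, a2) and (b0, b1, b2).

    x_hat, y_hat: coordinates back-projected with the original RPCs
    x_obs, y_obs: matched VCP coordinates on the optical image
    """
    A = np.column_stack([np.ones_like(x_hat), x_hat, y_hat])
    # Setting v_x = v_y = 0 gives A @ a = x_obs - x_hat and A @ b = y_obs - y_hat.
    a, *_ = np.linalg.lstsq(A, x_obs - x_hat, rcond=None)
    b, *_ = np.linalg.lstsq(A, y_obs - y_hat, rcond=None)
    return a, b
```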
(5) The processing status is checked frequently. If the bottom layer of the pyramid has completed the calculation, the final affine correction model together with the original RPCs is output as the accurate orientation result. Otherwise, we transmit the calculation results of the current pyramid layer to the next layer until the bottom layer is reached. The flowchart of this orientation framework is shown in Figure 8.
7. Conclusions
This paper proposed an accurate geometric orientation method for optical satellite images using SAR images as reference data. Firstly, we developed AWOG, a dense descriptor applicable to multimodal images, by proposing an angle-weighted oriented gradient calculation method and introducing a feature orientation index table. To speed up the matching of the calculated feature descriptors, 3D phase correlation was employed as the similarity measure. Experiments on multiple multimodal image pairs proved that the proposed method is superior to the competing state-of-the-art multimodal image matching methods in terms of accuracy and efficiency and is well suited to matching optical satellite and SAR images. In particular, the correct matching ratio increases by about 17%, and the matching accuracy improves by more than 0.1 pixels compared with the latest multimodal image matching method, CFOG. In terms of efficiency, the running time of the proposed method is only about 20% of that of HOPC, DLSS, and MI.
In addition, we proposed a general framework for the precise orientation of optical satellite images based on multiple SAR reference images. Significantly, the matches obtained between the optical satellite images and the SAR images were taken as virtual control points to optimize the initial RPC parameters, improving the geometric positioning accuracy. Taking 12 TerraSAR-X images as the reference data, 4 GF-1 images, 4 GF-2 images, and 3 ZY-3 images were oriented with the proposed framework. The experimental results reveal the effectiveness and robustness of our framework. The VCPs obtained from the SAR reference images were sufficient and evenly distributed: for a GF-1 image with a large image size, more than 9000 VCPs were obtained; for a GF-2 image with a relatively small size, 897 well-distributed VCPs were still obtained. As a result, the positioning accuracy of all the optical satellite images used improved from more than 200 m to within 10 m. In addition, the proposed method has very high computational efficiency. The shortest processing time for an image is about 33 s, and the longest is still less than 100 s, which makes the method well suited to practical applications.
Since AWOG is not invariant to rotation and scale change, we eliminated the rotation and scale differences between the SAR and optical satellite images before matching with the help of DEM and RPC data. In detail, a search area corresponding to the matching window on the SAR reference image was determined on the optical satellite image to provide the initial location for template matching. However, when the DEM or the RPC parameters of the target area cannot be obtained, our method is not applicable. In addition, our method relies on image structural features. If the structural features of an image are not rich enough, for example, in an image containing a large forest area, the number of effective GCPs will decrease, and their distribution will worsen.
In the future, we would like to make our method invariant to scale and rotation changes. To this end, we plan to first fulfill coarse image matching and obtain an initial image transformation matrix using image features, and then refine this result with the method proposed in this study. For images with weak structural features, an image feature based on image textures could be introduced to assist the matching. Additionally, we will keep improving the matching performance of SAR and optical satellite images and conduct a joint bundle adjustment of SAR and optical satellite images to further enhance the applicability of using SAR images to automatically aid the orientation of optical satellite images. Moreover, deep learning is a fast-growing technique, and it is promising to apply it to real multimodal image processing applications in the near future.