Article

A Knowledge-Based Search Strategy for Optimally Structuring the Terrain Dependent Rational Function Models

by
Mojtaba Jannati
*,
Mohammad Javad Valadan Zoej
and
Mehdi Mokhtarzade
Faculty of Geodesy and Geomatics, K. N. Toosi University of Technology, Tehran 19667-15433, Iran
*
Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(4), 345; https://doi.org/10.3390/rs9040345
Submission received: 19 January 2017 / Revised: 19 March 2017 / Accepted: 30 March 2017 / Published: 11 April 2017

Abstract

Identifying the optimal structure of terrain dependent Rational Function Models (RFMs) not only decreases the number of Ground Control Points (GCPs) required, but also increases the accuracy of the model by reducing the multi-collinearity of the Rational Polynomial Coefficients (RPCs) and avoiding the ill-posed problem. Global optimization algorithms such as the Genetic Algorithm (GA) evaluate different combinations of parameters effectively and therefore have a high ability to detect the optimal structure of RFMs. However, one drawback of these algorithms is their high computational cost. This article proposes a knowledge-based search strategy to overcome this deficiency. The backbone of the proposed method is technical knowledge about the geometric conditions of the image at the time of acquisition, as well as the effect of external factors such as terrain relief on the image. The method was evaluated on four different datasets, including a SPOT-1A, a SPOT-1B, an IKONOS-Geo, and a GeoEye-Geo image, using various numbers of GCPs. Experimental results demonstrate the efficiency of the proposed method in achieving sub-pixel accuracy using few GCPs (only 4–6 points) in all datasets. The results also indicate that the proposed method improves the computation speed by a factor of 140 compared with the GA.

Graphical Abstract

1. Introduction

Owing to their periodic imaging of the Earth, satellite sensors have become a powerful and unique tool for multi-temporal analysis. Accurate image geo-referencing is an important pre-processing step for such analysis [1], in order to avoid errors due to misregistration [2], because even small geometric errors can have a significant effect on the retrieval of subsequent information [3]. Most modern satellite imagery is supplied with sufficient and reliable Rational Polynomial Coefficients (RPCs) [4], and sub-pixel geo-referencing accuracies can be achieved by bias-compensating these coefficients using a few Ground Control Points (GCPs) [5,6]. However, old archival imagery forms a major part of the data sources for multi-temporal analysis. RPCs are typically not available for such imagery, and where they exist they are often unreliable. Therefore, the Rational Function Model (RFM) can only be applied to old archival imagery in a terrain dependent manner. However, if an accurately geo-referenced master image is available for the region of interest, old archival imagery can be co-registered with the master image [7].
Despite the popularity, multiple achievements, and widespread use of terrain independent RFMs for high resolution satellite images, terrain dependent RFMs have not been well received [8,9,10]. This can be attributed to at least two main reasons: (1) Third-order 3D RFMs encompass 80 parameters [11], none of which are totally independent [12,13]. The correlation among these parameters causes the problem to be ill-posed [13,14]. This, in turn, results in a steady fall in accuracy and reliability, and, in some cases, solving the equation system might even be impossible [15]. (2) Even if the zero-order term of the denominator polynomials is fixed at 1.00, at least 39 Ground Control Points (GCPs) are still needed to solve for the remaining 78 parameters. Obviously, the degree of freedom would be zero in this case; consequently, the results are not very reliable [16]. Some sources [17] put the number of GCPs required for an accurate and reliable solution of terrain dependent RFMs at twice the minimum, i.e., 78 points. In many cases, not only this many control points, but even the minimum number of required GCPs, are unavailable. Thus, some researchers have restricted the use of terrain dependent RFMs to applications where accuracy is not a major concern [17,18,19], and others are entirely against their utilization [5,9].
There are two main strategies for dealing with ill-posed problems [20]: (1) variational regularization; and (2) variable selection. Regularization of the normal equation plays an important role in reducing the multi-collinearity of the rational polynomial coefficients (RPCs) and improving the model’s accuracy. Nevertheless, in this strategy, a set of unnecessary parameters is estimated alongside the essential ones. In effect, this strategy attempts to minimize the negative effect of the inessential parameters on the estimation of the effectual ones. However, the unwanted parameters remain in the model, and some GCPs are required to estimate them as well, which indicates an inefficient use of the available GCPs. Conversely, in the second strategy, the unnecessary parameters are set aside; consequently, apart from solving the problem of the ill-posed normal equation, the number of unknown parameters is reduced [12,21,22]. Thus, fewer GCPs are required to solve the model, and, due to the increase in the degree of freedom, the final results are more reliable [12,22,23]. Moreover, owing to the absence of unnecessary parameters and the reduced correlations between the remaining parameters, the final model is more interpretable [20].
The main issue in this strategy is to determine the optimal structure of the model by distinguishing the effectual parameters from the unnecessary ones. The approach adopted in commercial software such as PCI-Geomatica [24] for solving the terrain dependent RFM is to use an identical structure for all polynomials. That is, apart from the zero-order term of the denominator polynomials, the structure of the four polynomials in the RFM is considered alike, and the maximum number of parameters permitted by the available GCPs is employed in all polynomials. Notwithstanding its minimal computational cost, it is evident that this strategy is quite unlikely to achieve the most accurate results for a given number of GCPs. Considering that the image space distortions depend not only on the interior and exterior orientation parameters of the imaging system, but also on other factors such as terrain relief [16], the optimal structure of the RFM is doubtless different for each scene.
Thus far, a wide variety of studies have been conducted to identify the optimal structure of RFMs, which generally fall into two main groups: linear algebra-oriented (LAO) and artificial intelligence-oriented (AIO). Among the LAO studies, the detection of the optimal structure of the RFM based on correlation analysis [21], the eigenvalues of the structure matrix [5], the scatter matrix [25], multi-collinearity analysis [26], and nested regression [20] can be mentioned; among the AIO studies, the use of the GA [22], a modified GA [12], and the PSO algorithm [23] can be pointed out.
One common feature of the LAO methods is the sequential choice of parameters: the quality of a parameter is first evaluated and its presence in the final structure decided, and only then is the next parameter considered. In this way, not all possible combinations of the parameters are taken into account; consequently, the final structure largely depends on the order in which the parameters are evaluated. The recognized optimal structure is therefore probably not global, because variable selection is a non-deterministic polynomial (NP) problem with a very large search space [27]. Alternatively, the AIO methods effectively investigate different combinations of the parameters by taking advantage of meta-heuristic search algorithms; therefore, they essentially have a higher potential to reach the globally optimal structure [12,22]. Compared with the LAO methods, another advantage of the AIO methods is the possibility of determining the optimal structure of the RFM for any desired number of GCPs. However, the computational cost of these algorithms is excessively high, which is a serious drawback from a practical point of view [20].
Relying on technical knowledge of the geometric conditions prevailing during the formation of satellite imagery and of the effect of external factors (e.g., terrain relief) on the image, this paper proposes a knowledge-based search strategy to designate the optimal structure of terrain dependent RFMs for the geo-referencing of satellite imagery. Compared with previous studies, the major advantage of the proposed method lies in establishing a balance among the four elements of accuracy, reliability, flexibility, and computational cost.

2. The Geometry of Linear Pushbroom Imaging

In linear pushbroom imaging, a row of detectors arranged in the focal plane of the sensor, across the flight direction, records a narrow strip of the object space at every instant (Figure 1). As the sensor moves forward, consecutive strips of the object space are recorded one after another, providing complete coverage of the scene [16].
In this way, the imaging geometry is parallel along the trajectory of linear pushbroom sensors [28]. Considering the rather short acquisition time of the imagery (for example, about 1 s for an IKONOS scene), the velocity and orientation of the sensor can be assumed constant during the capture of the scene [29,30,31,32]. Hence, the along-track image component is easily modeled by the 3D affine model [32,33,34]. Implicit in the model are two individual projections, one scaled-orthogonal and the other skew-parallel (Figure 2a). If the angle between the velocity vector and the optical axis of the sensor changes during the acquisition time, a series of time-dependent terms must be added to the 3D affine model [28,35]. Eventually, if the positions of all ground points are transferred to a common height plane in the object space, the eight-term model can be effectively reduced to six parameters [36,37], which is the case for images of terrain with little relief. However, if the terrain relief is considerable, it is essential to augment the 3D affine model with one or more quadratic terms [28].
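As a concrete illustration, the eight-parameter 3D affine model mentioned above (r = a0 + a1·X + a2·Y + a3·Z and likewise for c) can be estimated from a handful of GCPs by ordinary least squares. The sketch below is ours, not code from the paper; the function name and the synthetic coordinates are illustrative assumptions:

```python
import numpy as np

def fit_affine_3d(gcps_obj, gcps_img):
    """Fit r = a0 + a1*X + a2*Y + a3*Z and c = b0 + b1*X + b2*Y + b3*Z.

    gcps_obj: (n, 3) object-space coordinates (X, Y, Z)
    gcps_img: (n, 2) image-space coordinates (r, c)
    Returns the row coefficients a and column coefficients b.
    """
    # Design matrix with columns [1, X, Y, Z]
    A = np.column_stack([np.ones(len(gcps_obj)), gcps_obj])
    a, *_ = np.linalg.lstsq(A, gcps_img[:, 0], rcond=None)  # row component
    b, *_ = np.linalg.lstsq(A, gcps_img[:, 1], rcond=None)  # column component
    return a, b
```

With exact affine data, four well-distributed GCPs already determine the eight parameters; additional points add redundancy for a least-squares adjustment.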
In contrast, the imaging geometry is perspective across the sensor’s trajectory [28], as can be seen in Figure 2b. Assuming a constant orientation during the capture of the scene, the across-track component of the image has been successfully modeled via a 3D projective model by several researchers [38,39].

3. Rational Function Model

In the RFM, each image space component (r, c) is expressed as the ratio of two 3D polynomials in the object space components (X, Y, Z) [40]. To reduce computational errors and to improve the numerical stability of the equations, image and object coordinates are both normalized to the range of −1.00 to +1.00 [11]. The direct RFM is given in Equation (1) [40]:
r = P1(X, Y, Z) / P2(X, Y, Z),  c = P3(X, Y, Z) / P4(X, Y, Z),
where r and c are row and column components of the image space, respectively; X, Y, and Z are the points’ coordinates in the object space; and Pi is a polynomial in the ground coordinates. The maximum power of each ground coordinate is usually limited to 3, as well as the total power of all ground coordinates [14].
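The normalization to [−1.00, +1.00] mentioned above is commonly implemented with an offset/scale pair per coordinate; a minimal sketch, with a helper name of our own choosing:

```python
import numpy as np

def normalize(values):
    """Map an array of coordinates onto [-1, +1] via an offset/scale pair.

    Returns the normalized values together with the offset and scale,
    which are needed later to de-normalize results.
    """
    offset = (values.max() + values.min()) / 2.0
    scale = (values.max() - values.min()) / 2.0
    return (values - offset) / scale, offset, scale
```

Vendor RPC files store exactly such offset/scale pairs for each of r, c, X, Y, and Z, so that the polynomial coefficients stay numerically well behaved.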
In such a case, each polynomial is of 20-terms cubic form (Equation (2)):
P = Σ_{i=0}^{m1} Σ_{j=0}^{m2} Σ_{k=0}^{m3} a_ijk X^i Y^j Z^k
  = a0 + a1 Z + a2 Y + a3 X
  + a4 ZY + a5 ZX + a6 YX + a7 Z^2 + a8 Y^2 + a9 X^2
  + a10 ZYX + a11 Z^2 Y + a12 Z^2 X + a13 Y^2 Z + a14 Y^2 X
  + a15 ZX^2 + a16 YX^2 + a17 Z^3 + a18 Y^3 + a19 X^3,
where aijk stands for the polynomial coefficients, called rational polynomial coefficients (RPCs). The order of the terms is arbitrary and may differ between publications. The above order is adopted from [11].
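Evaluating Equation (2) amounts to pairing each coefficient with its monomial; a sketch following the term order of [11], with function names of our own choosing:

```python
def rfm_terms(X, Y, Z):
    """The 20 cubic monomials, in the order of Equation (2)."""
    return [1, Z, Y, X,
            Z * Y, Z * X, Y * X, Z**2, Y**2, X**2,
            Z * Y * X, Z**2 * Y, Z**2 * X, Y**2 * Z, Y**2 * X,
            Z * X**2, Y * X**2, Z**3, Y**3, X**3]

def poly(coeffs, X, Y, Z):
    """P(X, Y, Z) = sum of a_i * term_i over the 20 terms."""
    return sum(a * t for a, t in zip(coeffs, rfm_terms(X, Y, Z)))
```

An RFM evaluation is then the ratio of two such polynomials, e.g. `poly(num_coeffs, X, Y, Z) / poly(den_coeffs, X, Y, Z)` on normalized coordinates.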
The distortions caused by the projection geometry can be represented by the ratios of the first-order terms (refer to Section 2), while corrections such as Earth curvature, atmospheric refraction, and lens distortion can be well approximated by the second-order terms [14,41]. Significant high-frequency dynamic changes in sensor acceleration and orientation, coupled with large rotation angles, are the main reasons for including third-order and even higher-order terms. Fortunately, the relatively smooth trajectory of spaceborne sensors, coupled with their narrow-angle viewing optics, means that such influences are unlikely to be encountered [5].
There are two different computational scenarios for RFMs. If the physical sensor model is available, an approximation of the model could be provided by RFMs; which is a terrain-independent solution and can be used as the replacement sensor model. On the other hand, in the terrain-dependent approach, the unknown parameters of RFMs are estimated using a set of GCPs [14,17,42].
In the terrain-dependent scenario, it is common to set the zero-order parameter of the denominator polynomials to the fixed value of 1.00 in order to avoid singularity of the normal equation matrix [5]. The model’s parameters therefore decrease to 78, and at least 39 GCPs are required to solve for them. When the number of available GCPs is below this limit, some parameters must be set aside. In the conventional term selection method, the model is estimated with the maximum possible number of parameters. Parameters are included in a sequential manner, while a similar structure (with the exception of the zero-order term of the denominator polynomials) is assumed for all polynomials, as implemented in the PCI-Geomatica software [24]. Therefore, in order to include n parameters in the structure of each polynomial, 2 × n − 1 GCPs are required to solve the RFM.
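One way to see where the 2 × n − 1 count comes from is to write out the linearized system: multiplying Equation (1) through by the denominator (whose zero-order term is fixed at 1) turns each GCP into one linear equation per image component, a · num − r · (b · den) = r. The sketch below is ours, using NumPy and arbitrary reduced term lists rather than the full 20-term polynomials:

```python
import numpy as np

def solve_row_rfm(num_terms, den_terms, r):
    """Solve r = (num_terms @ a) / (1 + den_terms @ b) for a and b.

    Multiplying by the denominator gives the linear system
    num_terms @ a - r * (den_terms @ b) = r in the unknowns [a, b].
    num_terms: (n, p) monomial values for the numerator terms
    den_terms: (n, q) monomial values for the denominator terms (no constant)
    r: (n,) observed row coordinates
    """
    A = np.hstack([num_terms, -r[:, None] * den_terms])  # (n, p + q) design matrix
    x, *_ = np.linalg.lstsq(A, r, rcond=None)
    p = num_terms.shape[1]
    return x[:p], x[p:]  # numerator coeffs a, denominator coeffs b
```

With p numerator and q = p − 1 denominator unknowns per component, each component needs 2p − 1 equations, i.e., 2 × n − 1 GCPs in the paper's notation.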

4. Proposed Method

According to Section 2 and Section 3, the first-order terms of the RFM play the most important role in modeling the image space distortions. In the vendor-provided RPCs, the coefficients of the zero- and first-order terms in both the numerator and the denominator polynomials are larger by many orders of magnitude than those of the second- and third-order terms [14], which reflects the same fact. When the available GCPs are limited, the relative priority of same-order terms depends on the conditions of the scene being modeled. Owing to the different geometry of the along-track and across-track components, the number of parameters required in the two denominator polynomials differs. Moreover, as the field of view of the sensor increases, one or more second-order terms are required to model the Earth curvature, the off-nadir viewing angle of the sensor, and the lens distortion effects. The utilization of third-order terms in the model’s structure depends on the availability of sufficient GCPs. Accordingly, the following search strategy is proposed for a given number of available GCPs. In the first step, a simplified version of the RFM is considered, as in Equation (3):
r = P1 / P2 = (a0 + a1 X + a2 Y + a3 Z + a4 X^2 + a5 XY + a6 XZ + a7 Y^2 + a8 YZ + a9 Z^2) / (1 + b1 X + b2 Y + b3 Z),
c = P3 / P4 = (c0 + c1 X + c2 Y + c3 Z + c4 X^2 + c5 XY + c6 XZ + c7 Y^2 + c8 YZ + c9 Z^2) / (1 + d1 X + d2 Y + d3 Z),
where (X,Y,Z) are the object space coordinates; (r,c) are the image space coordinates; ai, bi, ci, and di are the unknown coefficients of the model; P1 and P3 are the numerator polynomials of the row and column components; and P2 and P4 are the denominator polynomials of the row and column components, respectively.
Since the zero-order term of the numerator polynomials makes an effective contribution, this term is used for both the row and the column components in the final structure of the RFM. All possible combinations of the remaining 12 parameters in the structure of Equation (3) are evaluated separately for each of the row and column components, and the optimal structure of the RFM is detected. The search space of the first step is illustrated in Figure 3.
In the first step, the maximum number of the search cases is equal to 4095, which can be calculated using Equation (4).
N1 = Σ_{i=1}^{12} 12! / (i! × (12 − i)!) = 4095,
where N1 is the maximum number of the search cases in the first step of the proposed method.
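The first-step search space can be enumerated directly; counting all non-empty subsets of the 12 optional parameters (the nine non-constant numerator terms plus the three denominator terms of one image component) reproduces N1 = 2^12 − 1 = 4095. In the actual method each subset would be scored with the benefit function of Section 4; here we only count them:

```python
from itertools import combinations

# Indices of the 12 optional terms of Equation (3) for one image
# component (the zero-order numerator term is always kept).
params = range(12)

# Every non-empty subset is one candidate structure to evaluate.
subsets = [s for k in range(1, 13) for s in combinations(params, k)]
N1 = len(subsets)  # 2**12 - 1 = 4095, matching Equation (4)
```

The same enumeration over the 10 third-order numerator terms gives the second-step search space.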
Thereafter, the necessary first- and second-order terms are determined. In the second step, the feasibility of enriching the detected structure is evaluated by adding third-order terms to the numerator polynomials. The functional form of the RFM being searched is therefore as in Equation (5):
r = ({P1opt} + a10 ZYX + a11 Z^2 Y + a12 Z^2 X + a13 Y^2 Z + a14 Y^2 X + a15 ZX^2 + a16 YX^2 + a17 Z^3 + a18 Y^3 + a19 X^3) / (1 + {P2opt}),
c = ({P3opt} + c10 ZYX + c11 Z^2 Y + c12 Z^2 X + c13 Y^2 Z + c14 Y^2 X + c15 ZX^2 + c16 YX^2 + c17 Z^3 + c18 Y^3 + c19 X^3) / (1 + {P4opt}),
where {Piopt} is the optimal structure of the i-th (for i = 1, 2, 3, and 4) polynomial of Equation (3), which is detected in the first step.
Afterwards, all possible combinations of these 10 third-order terms added to the numerator polynomials are searched in the presence of the optimal structures from the first step. The search space of the second step is illustrated in Figure 4.
In the second step, the maximum number of search cases is equal to 1023, which can be calculated from Equation (6).
N2 = Σ_{i=1}^{10} 10! / (i! × (10 − i)!) = 1023,
where N2 is the maximum number of the search cases in the second step of the proposed method.
Obviously, the feasibility of executing the second-step search depends on the number of available GCPs and on the number of terms in the optimal structures from the first step. Based on repeated experiments, it is suggested that the second step be executed only if the first step was completed with at least five degrees of freedom (df). The flowchart of the proposed method is thus as shown in Figure 5.
In this paper, the Goodness-of-Fit (GoF) of the model created for each candidate structure was used to detect the optimal structure of the RFM [20]. The GoF of a statistical model describes how well the model fits a set of observations [43]. The coefficient of determination of a linear regression between the observed and estimated values is used as the statistical measure of GoF (Equation (7)):
R^2 = Σ_{i=1}^{n} (ŷ_i − ȳ)^2 / Σ_{i=1}^{n} (y_i − ȳ)^2,
where n is the number of observations, y_i is the i-th observed value, ȳ is the mean of all observed values, and ŷ_i is the regression result for the i-th observation. The value of R^2 lies in the range of 0 to 1. The greater the value of R^2, the better the regression line approximates the observed data, and the more suitable the model [44].
In addition to R2, the degree of freedom of the created model was considered as a measure of the internal reliability of the model. Thus, the final benefit function for assessing different structures in the proposed search scheme can be written as Equation (8), which has to be maximized.
B(Sj) = R_j^2 × df_j,
where R_j^2 is the determination coefficient of the j-th structure (Sj), df_j is the degree of freedom of Sj, and B(Sj) is the benefit factor of Sj.
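Equations (7) and (8) can be sketched in a few lines; the function names are ours, not the paper's:

```python
import numpy as np

def r_squared(observed, estimated):
    """Coefficient of determination of Equation (7)."""
    mean = observed.mean()
    return np.sum((estimated - mean) ** 2) / np.sum((observed - mean) ** 2)

def benefit(observed, estimated, df):
    """Benefit factor B(S) = R^2 * df of Equation (8)."""
    return r_squared(observed, estimated) * df
```

In the search, each candidate structure is fitted to the GCPs and the structure maximizing `benefit` is kept, so that a good fit (high R^2) is traded off against reliability (high df).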

5. Data Used

Four different datasets over Iran were used in this article: SPOT-1A and SPOT-1B images over the city of Isfahan, an IKONOS-Geo image, and a GeoEye-Geo image covering the cities of Hamedan and Uromieh, respectively. SPOT-3 Level 1A data are raw data that have been corrected radiometrically, while Level 1B data have additionally been corrected for certain known geometric distortions (Earth rotation and curvature, mirror look angles, and orbit characteristics). Thus, a SPOT-3 Level 1B image is partially rectified and approximates an orthographic view of the Earth, but with the displacements due to relief remaining in the image. Furthermore, at this level, after the geometric corrections have been applied, the image is resampled at regular intervals of 10 m [16]. To produce a Geo product, IKONOS-2 and GeoEye-1 images undergo a correction process that removes image distortions introduced by the collection geometry and then resamples the imagery to a uniform ground sample distance and a specified map projection. Geometric information and the number of available GCPs for each dataset are shown in Table 1, along with the elevation relief of each dataset.
GCPs for the Isfahan dataset were measured in 1996 using differential GPS techniques based on three Ashtech 12 dual-frequency GPS sets. The accuracy of these points is estimated to be sub-meter. Their corresponding image coordinates were measured with an approximate precision of 0.5 pixels. For the Hamedan and Uromieh datasets, GCPs were extracted from 1:2000 scale digital topographic maps produced by the National Cartographic Center (NCC) of Iran, with 0.6 m planimetric and 0.5 m altimetric accuracies, respectively. The points are distinct features such as buildings, pool corners, walls, and road junctions. Their corresponding image coordinates were measured with an approximate precision of one pixel. These accuracies are lower than those potentially achievable from GeoEye-1 imagery; however, no better ground information was available for this dataset.

6. Results

In order to evaluate the accuracy of the proposed method as affected by the number of GCPs used, several experiments were performed on each dataset. In each test, the root mean square error (rmse) of independent check points (CPs), the mean, standard deviation (std.), and maximum absolute value (max.) of the residuals across the image’s rows (δr) and columns (δc), the condition number of the normal equation matrix (cond), the number of coefficients in the P1, P2, P3, and P4 polynomials, and the degree of freedom (df) of the equation system are provided in Table 2, Table 3, Table 4 and Table 5, for both the first and the second steps.
As can be seen from the results in Table 2, Table 3, Table 4 and Table 5, the method is able to achieve sub-pixel accuracies even when only a few GCPs (i.e., 4–6 points) are available. In addition, the results of the proposed method are robust with respect to the number and combination of ground control points: changing the number of control points caused no significant changes in the results. A comparison can be made with the results obtained with the conventional RFM and with the optimally structured RFM using the GA [22], which are given in Table 6, Table 7, Table 8 and Table 9. For a fair comparison, the number and combination of GCPs were the same as those of Table 2, Table 3, Table 4 and Table 5. The GA, however, did not converge to an acceptable result for fewer than eight GCPs in any dataset; these results were therefore excluded from Table 6, Table 7, Table 8 and Table 9. For each experiment, the GA was run 10 times and the best results were reported. The number of parents, the population of each generation, and the maximum number of iterations were set to 20, 200, and 500, respectively. A detailed description of the GA optimization of RFMs is available in [22]. Comparing the results in Table 2, Table 3, Table 4 and Table 5 with those in Table 6, Table 7, Table 8 and Table 9 shows that the accuracies obtained from the proposed method are considerably better than those obtained by conventional RFMs. Although the accuracies from the proposed method are only slightly better than those obtained by the GA-optimized RFMs, the proposed algorithm benefits from remarkably higher degrees of freedom, leading to more reliable results. In addition, the computational cost of the proposed method is significantly lower than that of the GA, which confirms its effective performance.
In another experiment, low-order polynomials (i.e., first- and second-order polynomials in X, Y, and Z) were utilized for georeferencing the satellite imagery (Table 10, Table 11, Table 12 and Table 13). Normalized image and ground coordinates were used to estimate the models’ coefficients. Again, the number and combination of GCPs were the same as those of Table 2, Table 3, Table 4 and Table 5.
According to Table 10, Table 11, Table 12 and Table 13, first-order polynomials could not supply sub-pixel accuracies for the SPOT-1A and SPOT-1B images, and the maximum absolute residual errors across the columns of the GeoEye imagery are greater than one pixel, despite its sub-pixel accuracies. Second-order polynomials also could not provide sub-pixel accuracies for the SPOT-1A imagery.
As the final experiment, the accuracy of the proposed method was examined with respect to the RPCs provided with the GeoEye imagery. First, the RPCs were bias-compensated using four GCPs. Then, a regular grid of 100 image points, called the original image points (OIPs) in this paper, was established. These points were projected to the ground space for arbitrary heights in the range of the ground elevations by means of the compensated RPCs. Finally, these ground points were back-projected to the image space by means of the RFM optimally structured using the proposed method, yielding the computed image points (CIPs). The mean, standard deviation (std.), and maximum absolute value (max.) of the discrepancies between the OIPs and CIPs across the image’s rows (δr) and columns (δc) are provided in Table 14, along with their root mean square error (rmse).

7. Discussion

According to Table 2, Table 3, Table 4 and Table 5, the proposed method is able to achieve sub-pixel accuracies even when the number of available GCPs is limited: sub-pixel accuracies were obtained using only six GCPs in all datasets. Scatter plots of the residual errors show that no systematic errors have occurred (Figure 6). Moreover, the results from the proposed method are robust with respect to the number of available GCPs, with no considerable change in the results when the number of GCPs changes. The obtained condition numbers illustrate that the normal equation matrices of the proposed method are quite well-conditioned.
For the last three datasets (i.e., SPOT-1B, IKONOS-Geo, and GeoEye-Geo), which are geometrically corrected, the proposed method could achieve almost one-pixel accuracies using only four GCPs. However, for the first dataset (i.e., SPOT-1A), which is geometrically raw, it could not achieve an accuracy better than seven pixels even using five GCPs, which indicates that more parameters are required in the structure of the RFM to model the image space distortions.
In comparison with the results obtained from the GA-optimized RFMs (Table 6, Table 7, Table 8 and Table 9), the accuracies of the proposed method are competitive and even slightly better. It is worth mentioning that the GA can theoretically provide the most accurate result if no computation limit is imposed during the optimization process. In practice, however, stopping criteria (e.g., a critical value for the fitness function or a certain number of iterations) are always applied to terminate the optimization, which limits the achievable accuracies in comparison with the theoretical ones. In order to better compare the two methods, their accuracies are graphically displayed in Figure 7.
Considering the capability of the GA to search the solution space globally [45], it can be said that the results from the proposed method are to a great extent close to the globally optimal solution. Undoubtedly, final confirmation of this result requires more probing in future studies. In addition, the proposed method can omit a larger number of parameters than the GA-optimized RFMs. Therefore, the optimal RFM has a higher number of degrees of freedom, resulting in a higher accuracy. On the other hand, according to Table 6, Table 7, Table 8 and Table 9, the average number of iterations of the GA was 350. Bearing in mind that the population of each generation was 200 individuals, in each run the cost function was evaluated on average 70,000 times; that is, 14 times more than in the proposed method. It should be noted that the reported results of the GA-optimized RFMs in Table 6, Table 7, Table 8 and Table 9 are the best of 10 executions of the algorithm; multiplying these two factors, the computational cost of the GA rises to 140 times that of the proposed method.
In comparison with the results obtained from the conventional RFMs shown in Table 6, Table 7, Table 8 and Table 9, the proposed method always achieved better accuracies using fewer parameters. The proposed algorithm therefore benefits from higher degrees of freedom, leading to more reliable results. The accuracies obtained from these two methods are graphically compared in Figure 8.
According to Figure 8, upon increasing the number of GCPs, the accuracy of the conventional RFMs first shows an upward trend and then declines. Increasing the number of GCPs, and consequently the number of estimable parameters, alleviates the under-parameterization problem; however, increasing the number of parameters in the model raises the risk of over-parameterization. Consequently, due to the involvement of unnecessary parameters in the model as well as the inherent correlation of some parameters, the overall accuracy of the model decreases. This trend is never seen in the results of the proposed method, which is able to recognize the optimally structured RFMs using any arbitrary number of GCPs.
According to Table 12 and Table 13, first-order polynomials have provided sub-pixel accuracies for the IKONOS and GeoEye-1 imagery. This could be due to the fact that these two sensors have a narrow instantaneous field of view (IFOV) and swath width, which makes their geometry more similar to a parallel projection. Therefore, a 3D affine model (which is equivalent to a first-order polynomial in X, Y, and Z) is well suited for georeferencing such imagery, as confirmed in previous studies [26]. However, this is not the case for sensors with a rather wide IFOV and swath width, such as SPOT-3; accordingly, the first-order polynomials did not provide sub-pixel accuracies for the SPOT-1A and SPOT-1B images.
As a final point, considering the sampling rate and the number of detectors of the images used in this research (Table 1), the time intervals required to acquire a square scene with the SPOT-3, IKONOS, and GeoEye-1 sensors are approximately 9, 2, and 3.5 s, respectively. Since the proposed method provided accurate and reliable results for the SPOT-3 imagery, it can be well utilized for other sensors with a shorter acquisition time interval.

8. Conclusions

Identifying the optimal structure of terrain dependent RFMs not only decreases the number of required GCPs, but also increases the model’s accuracy by reducing the multi-collinearity of the RPCs and avoiding the ill-posed problem. Global optimization algorithms such as the GA evaluate different combinations of the parameters effectively and thus have a high ability to detect the optimal structure of RFMs. However, one drawback of these algorithms is their high computational cost. To address this shortcoming, a knowledge-based search strategy is proposed in this article to identify the optimal structure of terrain dependent RFMs for the geo-referencing of satellite imagery. The backbone of the proposed method relies on technical knowledge of the conditions governing the acquisition of satellite images, as well as the effect of external factors (e.g., terrain relief) on the image space distortions. The capability of the proposed method in identifying the optimal structure of RFMs was evaluated, and its results were assessed in comparison with both GA-optimized and conventional RFMs.
According to the results, the proposed method achieved competitive and even better accuracies with a lower computational cost. It is worth mentioning that the speed of two methods was only compared based on the number of times which the cost function has been evaluated, while the search mechanism of the proposed method is much simpler than the GA. Therefore, the proposed method is in reality faster than the aforementioned ratio. Moreover, the proposed method could omit a larger number of parameters. Therefore, the proposed algorithm benefits from higher degrees of freedom leading to more reliable results. Future studies will be focused on the orthophoto and DEM generation using an optimally structured RFM based on the proposed method, as well as the epipolar resampling of linear pushbroom imagery.

Author Contributions

All the authors listed contributed equally to the work presented in this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Brook, A.; Ben-Dor, E. Automatic registration of airborne and spaceborne images by topology map matching with SURF processor algorithm. Remote Sens. 2011, 3, 65–82. [Google Scholar] [CrossRef]
  2. Aksakal, S.K. Geometric accuracy investigations of SEVIRI High Resolution Visible (HRV) Level 1.5 imagery. Remote Sens. 2013, 5, 2475–2491. [Google Scholar] [CrossRef]
  3. Aksakal, S.K.; Neuhaus, C.; Baltsavias, E.; Schindler, K. Geometric quality analysis of AVHRR orthoimages. Remote Sens. 2015, 7, 3293–3319. [Google Scholar] [CrossRef]
  4. Wang, M.; Yang, B.; Hu, F.; Zang, X. On-orbit geometric calibration model and its applications for high-resolution optical satellite imagery. Remote Sens. 2014, 6, 4391–4408. [Google Scholar] [CrossRef]
  5. Fraser, C.S.; Dial, G.; Grodecki, J. Sensor orientation via RPCs. ISPRS J. Photogramm. Remote Sens. 2006, 60, 182–194. [Google Scholar] [CrossRef]
  6. Pehani, P.; Cotar, K.; Marsetič, A.; Zaletelj, J.; Oštir, K. Automatic geometric processing for very high resolution optical satellite data based on vector roads and orthophotos. Remote Sens. 2016, 8, 343. [Google Scholar] [CrossRef]
  7. Ayoub, F. Monitoring Morphologic Surface Changes from Aerial and Satellite Imagery, on Earth and Mars. Ph.D. Thesis, Universite Toulouse III Paul Sabatier, Toulouse, France, 2014. [Google Scholar]
  8. Jannati, M.; Valadan Zoej, M.J.; Mokhtarzade, M. Epipolar resampling of cross-track pushbroom satellite imagery using the rigorous sensor model. Sensors 2017, 17, 129. [Google Scholar] [CrossRef] [PubMed]
  9. Jacobsen, K. Satellite image orientation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 703–709. [Google Scholar]
  10. Zhang, H.; Pu, R.; Liu, X. A new image processing procedure integrating PCI-RPC and ArcGIS-Spline tools to improve the orthorectification accuracy of high-resolution satellite imagery. Remote Sens. 2016, 8, 827. [Google Scholar] [CrossRef]
  11. NIMA (National Imaging and Mapping Agency). The Compendium of Controlled Extensions (CE) for the National Imagery 7kansmission Format (NITF), 2000, Version 2.1. Available online: http://geotiff.maptools.org/STDI-0002_v2.1.pdf (accessed on 19 January 2017).
  12. Jannati, M.; Valadan Zoej, M.J. Introducing genetic modification concept to optimize rational function models (RFMs) for georeferencing of satellite imagery. GISci. Remote Sens. 2015, 52, 510–525. [Google Scholar] [CrossRef]
  13. Long, T.; Jiao, W.; He, G. RPC estimation via L1-Norm-Regularized Least Squares (L1LS). IEEE Trans. Geosci. Remote Sens. 2015, 53, 4554–4567. [Google Scholar] [CrossRef]
  14. Tao, V.; Hu, Y. A Comprehensive study for rational function model for photogrammetric processing. J. Photogramm. Eng. Remote Sens. 2001, 67, 1347–1357. [Google Scholar]
  15. Boccardo, P.; Borgogno Mondino, E.; Giulio Tonolo, F.; Lingua, F. Orthorectification of high resolution satellite images. In Proceedings of the XX ISPRS Congress, Istanbul, Turkey, 12–23 July 2004; pp. 30–35. [Google Scholar]
  16. Valadan Zoej, M.J. Photogrammetric Evaluation of Space Linear Array Imagery for Medium Scale Topographic Mapping. Ph.D. Thesis, University of Glasgow, Glasgow, UK, 1997. [Google Scholar]
  17. Hu, Y.; Croitoru, A.; Tao, V. Understanding the rational function model: Methods and applications. Int. Arch. Photogramm. Remote Sens. 2004, 20, 663–668. [Google Scholar]
  18. Toutin, T.; Cheng, P. Demystification of IKONOS. Earth Obs. Mag. 2000, 9, 17–21. [Google Scholar]
  19. Toutin, T. Comparison of 3D physical and empirical models for generating DSMs from stereo HR images. Photogramm. Eng. Remote Sens. 2006, 72, 597–604. [Google Scholar] [CrossRef]
  20. Long, T.; Jiao, W.; He, G. Nested regression based optimal selection (NRBOS) of rational polynomial coefficients. Photogramm. Eng. Remote Sens. 2014, 80, 261–269. [Google Scholar]
  21. Sohn, H.; Park, C.; Yu, H. Application of rational function model to satellite images with the correlation analysis. KSCE J. Civ. Eng. 2003, 7, 585–593. [Google Scholar] [CrossRef]
  22. Valadan Zoej, M.J.; Mokhtarzade, M.; Mansourian, A.; Ebadi, H.; Sadeghian, S. Rational function optimization using genetic algorithms. Int. J. Appl. Earth Obs. Geoinform. 2007, 9, 403–413. [Google Scholar] [CrossRef]
  23. Yavari, S.; Valadan Zoej, M.J.; Mohammadzadeh, A.; Mokhtarzade, M. Particle swarm optimization of RFM for georeferencing of satellite images. IEEE Geosci. Remote Sens. Lett. 2012, 10, 135–139. [Google Scholar] [CrossRef]
  24. PCI Geomatica. PCI Geomatica OrthoEngine User Guide. Available online: https://www.ucalgary.ca/appinst/doc/geomatica_v91/manuals/orthoeng.pdf (accessed on 19 January 2017).
  25. Zhang, Y.; Lu, Y.; Wang, L.; Huang, X. A new approach on optimization of the rational function model of high-resolution satellite imagery. IEEE Trans. Geosci. Remote Sens. 2012, 50, 2758–2764. [Google Scholar] [CrossRef]
  26. Yuan, X.; Cao, J. An optimized method for selecting RPCs based on multicolinearity analysis. Geomat. Inf. Sci. Wuhan Univ. 2011, 36, 665–669. [Google Scholar]
  27. Amaldi, E.; Kann, V. On the approximability of minimizing non-zero variables or unsatisfied relations in linear systems. Theor. Comput. Sci. 1998, 209, 237–260. [Google Scholar] [CrossRef]
  28. Fraser, C.S.; Yamakawa, T. Insights into the affine model for satellite sensor orientation. ISPRS J. Photogramm. Remote Sens. 2004, 58, 275–288. [Google Scholar] [CrossRef]
  29. Gupta, R.; Hartley, R.I. Linear pushbroom cameras. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 963–975. [Google Scholar] [CrossRef]
  30. Wang, Y. Automated triangulation of linear scanner imagery. In Proceedings of the Joint Workshop of ISPRS WG I/1, I/3 and IV/4 on Sensors and Mapping from Space, Hanover, Germany, 27–30 September 1999. [Google Scholar]
  31. Habib, A.F.; Morgan, M.; Jeong, S.; Kim, K.O. Analysis of epipolar geometry in linear array scanner scenes. Photogramm. Rec. 2005, 20, 27–47. [Google Scholar] [CrossRef]
  32. Ono, T.; Honmachi, Y.; Ku, S. Epipolar resampling of high resolution satellite imagery. In Proceedings of the Joint Workshop of ISPRS WG I/1, I/3 and IV/4 Sensors and Mapping from Space, Hannover, German, 27–30 September 1999. [Google Scholar]
  33. Okamoto, A.; Akamatsu, S.; Hasegawa, H. Orientation theory for satellite CCD line-scanner imagery of hilly terrains. Int. Arch. Photogramm. Remote Sens. 1992, 29, 217–222. [Google Scholar]
  34. Morgan, M. Epipolar Resampling of Linear Array Scanner Scenes. Ph.D. Thesis, University of Calgary, Calgary, AB, Canada, 2004. [Google Scholar]
  35. Hanley, H.; Yamakawa, T.; Fraser, C. Sensor Orientation for High Resolution Satellite Imagery; ISPRS (International Archives of Photogrammetry Remote Sensing and Spatial Information Sciences): Denver, CO, USA, 2002. [Google Scholar]
  36. Baltsavias, E.; Pateraki, M.; Zhang, L. Radiometric and geometric evaluation of Ikonos Geo images and their use for 3D building modeling. In Proceedings of the Joint ISPRS Workshop High Resolution Mapping from Space 2001, Hanover, Germany, 19–21 September 2001. [Google Scholar]
  37. Fraser, C.S.; Baltsavias, E.; Gruen, A. Processing of Ikonos imagery for sub-metre 3D positioning and building extraction. ISPRS J. Photogramm. Remote Sens. 2002, 56, 177–194. [Google Scholar] [CrossRef]
  38. Kim, T. A study on the epipolarity of linear pushbroom images. Photogramm. Eng. Remote Sens. 2000, 66, 961–966. [Google Scholar]
  39. Lee, H.Y.; Park, W. A new epipolarity model based on the simplified pushbroom sensor model. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 631–636. [Google Scholar]
  40. OGC (Open GIS Consortium). The Open GIS Abstract Specification—Topic 7: The Earth Imagery Case. Available online: http://www.opengis.org/docs/99-107.pdf (accessed on 19 January 2017).
  41. Tao, C.V.; Hu, Y. Investigation on the Rational Function Model. In Proceedings of the 2000 ASPRS Annual Convention, Washington, DC, USA, 24–26 May 2000. [Google Scholar]
  42. Madani, M. Real-Time Sensor-Independent Positioning by Rational Functions. In Proceedings of the ISPRS Workshop on “Direct versus Indirect Methods of Sensor Orientation”, Barcelona, Spain, 25–26 November 1999; pp. 64–75. [Google Scholar]
  43. Cameron, A.; Windmeijer, F. An r-squared measure of goodness of fit for some common nonlinear regression models. J. Econom. 1997, 77, 329–342. [Google Scholar] [CrossRef]
  44. Pang, H. Econometrics; Southwestern University of Finance and Economics Press: Chengdu, China, 2011. [Google Scholar]
  45. De Jong, K.A. Genetic algorithms: A 10 year perspective. In Proceedings of the International Conference on Genetic Algorithms, Pittsburgh, PA, USA, 24–26 July 1985; pp. 169–177. [Google Scholar]
Figure 1. Linear pushbroom imaging.
Figure 1. Linear pushbroom imaging.
Remotesensing 09 00345 g001
Figure 2. Comparison of imaging geometry of the along- and the across-track components of pushbroom imagery: (a) geometry of the along-track component; and (b) geometry of the across-track component.
Figure 2. Comparison of imaging geometry of the along- and the across-track components of pushbroom imagery: (a) geometry of the along-track component; and (b) geometry of the across-track component.
Remotesensing 09 00345 g002
Figure 3. The search space of the first step search.
Figure 3. The search space of the first step search.
Remotesensing 09 00345 g003
Figure 4. Search space of the second step search.
Figure 4. Search space of the second step search.
Remotesensing 09 00345 g004
Figure 5. Flowchart of the proposed method.
Figure 5. Flowchart of the proposed method.
Remotesensing 09 00345 g005
Figure 6. Scatter plot of residual errors of the proposed method (using only six GCPs): (a) Isfahan dataset, SPOT-1A; (b) Isfahan dataset, SPOT-1B; (c) Hamedan dataset, IKONOS-Geo; and (d) Uromieh dataset, GeoEye-Geo.
Figure 6. Scatter plot of residual errors of the proposed method (using only six GCPs): (a) Isfahan dataset, SPOT-1A; (b) Isfahan dataset, SPOT-1B; (c) Hamedan dataset, IKONOS-Geo; and (d) Uromieh dataset, GeoEye-Geo.
Remotesensing 09 00345 g006
Figure 7. Graphical comparison of accuracies obtained from the proposed method and GA-optimized RFMs: (a) Isfahan dataset, SPOT-1A; (b) Isfahan dataset, SPOT-1B; (c) Hamedan dataset, IKONOS-Geo; and (d) Uromieh dataset, GeoEye-Geo.
Figure 7. Graphical comparison of accuracies obtained from the proposed method and GA-optimized RFMs: (a) Isfahan dataset, SPOT-1A; (b) Isfahan dataset, SPOT-1B; (c) Hamedan dataset, IKONOS-Geo; and (d) Uromieh dataset, GeoEye-Geo.
Remotesensing 09 00345 g007
Figure 8. Graphical comparison of accuracies obtained from the proposed method and conventional RFMs: (a) Isfahan dataset, SPOT-1A; (b) Isfahan dataset, SPOT-1B; (c) Hamedan dataset, IKONOS-Geo; and (d) Uromieh dataset, GeoEye-Geo.
Figure 8. Graphical comparison of accuracies obtained from the proposed method and conventional RFMs: (a) Isfahan dataset, SPOT-1A; (b) Isfahan dataset, SPOT-1B; (c) Hamedan dataset, IKONOS-Geo; and (d) Uromieh dataset, GeoEye-Geo.
Remotesensing 09 00345 g008
Table 1. Specifications of the dataset used.
Table 1. Specifications of the dataset used.
DatasetIsfahanHamedanUromieh
PlatformSPOT-3SPOT-3IKONOS-2GeoEye-1
SensorHRVHRVPanchromaticPanchromatic
ProductLevel 1ALevel 1BGeoGeo
AcquisitionJuly 1993July 1993October 2000August 2010
Off-nadir angle19.01°W19.01°W20.4°E0.84°W
Pixel Size13 μ13 μ12 μ8 μ
GSD10 m10 m0.82 m0.41 m
Sampling Rate665 line/s665 line/s6500 line/s10,000 line/s
No. of GCPs28283050
Terrain relief613.10 m613.10 m121.60 m185.00 m
Average terrain altitude1493.20 m1493.20 m1798.60 m1369.70 m
Table 2. Results obtained from the proposed method using different number of GCPs over Isfahan dataset, SPOT-1A.
Table 2. Results obtained from the proposed method using different number of GCPs over Isfahan dataset, SPOT-1A.
GCP, CP1st Step Search2nd Step Search
δr (Pixel)δc (Pixel)rmse (Pixel)cond.P1,P2,P3,P4dfδr (Pixel)δc (Pixel)rmse (Pixel)cond.P1,P2,P3,P4df
Meanstd.max.Meanstd.max.Meanstd.max.Meanstd.max.
4, 24−0.040.521.162.977.9323.18.330.7 × 1014,0,3,10
5, 230.010.491.124.505.5311.67.062.6 × 1013,1,5,01
6, 22−0.010.501.020.320.511.560.774.7 × 1044,1,4,21
8, 20−0.020.45−0.850.050.521.430.831.7 × 1026,2,6,02
10, 18−0.020.39−0.79−0.010.46−1.220.600.8 × 1014,0,4,210−0.040.38−0.710.010.360.610.522.3 × 1015,0,5,28
12, 16−0.020.33−0.800.010.49−1.240.626.5 × 1045,0,4,213−0.090.34−0.600.060.39−0.860.541.9 × 1027,0,8,27
14, 14−0.010.40−0.710.030.370.710.562.1 × 1046,1,9,111−0.020.35−0.640.060.390.750.529.3 × 1048,1,10,18
16, 12−0.020.32−0.640.020.270.660.517.7 × 1027,1,7,215−0.030.320.660.030.28−0.610.457.9 × 10313,1,12,25
18, 10−0.010.34−0.590.020.270.650.502.4 × 1014,1,7,222−0.030.29−0.520.020.290.650.441.7 × 1029,1,10,214
20, 8−0.000.33−0.580.020.220.590.447.0 × 1027,2,7,123−0.070.26−0.480.020.170.530.356.2 × 10311,2,9,117
Table 3. Results obtained from the proposed method using different number of GCPs over Isfahan dataset, SPOT-1B.
Table 3. Results obtained from the proposed method using different number of GCPs over Isfahan dataset, SPOT-1B.
GCP, CP1st Step Search2nd Step Search
δr (Pixel)δc (Pixel)rmse (Pixel)cond.P1,P2,P3,P4dfδr (Pixel)δc (Pixel)rmse (Pixel)cond.P1,P2,P3,P4df
Meanstd.max.Meanstd.max.Meanstd.max.Meanstd.max.
4, 240.130.621.74−0.500.67−2.361.041.7 × 1013,1,4,00
5, 230.080.541.150.010.64−1.360.852.7 × 1013,1,4,11
6, 220.060.571.50−0.040.61−1.390.825.7 × 1014,1,4,12
8, 200.030.491.09−0.000.51−1.570.751.4 × 1013,1,4,26−0.020.351.100.010.501.200.716.7 × 1014,1,6,23
10, 18−0.040.521.09−0.020.53−1.250.692.3 × 1024,1,6,180.000.541.21−0.010.43−1.260.688.9 × 1015,1,9,14
12, 16−0.090.481.19−0.020.42−1.400.695.7 × 1014,1,5,113−0.010.491.13−0.010.28−0.740.598.5 × 1015,1,7,110
14, 14−0.070.501.04−0.030.40−1.310.683.9 × 1014,0,6,117−0.020.501.04−0.020.32−0.910.627.9 × 1015,0,9,113
16, 120.030.491.18−0.020.45−0.880.680.6 × 1014,0,9,316−0.020.450.93−0.010.50−0.940.614.3 × 1029,0,13,37
18, 100.000.52−1.18−0.030.22−0.560.572.9 × 1016,0,6,321−0.010.50−1.090.010.17−0.310.593.4 × 10111,0,7,315
20, 8−0.010.530.99−0.010.23−0.530.547.6 × 1038,1,6,322−0.020.590.94−0.000.19−0.290.507.7 × 1049,1,7,320
Table 4. Results obtained from the proposed method using different number of GCPs over Hamedan dataset, IKONOS-2.
Table 4. Results obtained from the proposed method using different number of GCPs over Hamedan dataset, IKONOS-2.
GCP, CP1st Step Search2nd Step Search
δr (Pixel)δc (Pixel)rmse (Pixel)cond.P1,P2,P3,P4dfδr (Pixel)δc (Pixel)rmse (Pixel)cond.P1,P2,P3,P4df
Meanstd.max.Meanstd.max.Meanstd.max.Meanstd.max.
4, 260.270.781.680.180.601.871.090.3 × 1013,1,3,01
5, 25−0.090.49−1.010.100.611.960.848.4 × 1024,0,4,11
6, 24−0.070.380.830.090.491.380.771.5 × 1034,1,4,21
8, 22−0.080.37−0.890.060.390.740.611.1 × 1055,1,5,14
10, 20−0.050.34−0.810.060.380.790.538.3 × 1045,1,7,25−0.030.34−0.810.030.38−0.810.538.3 × 1045,1,7,25
12, 18−0.040.37−0.670.050.370.770.526.7 × 1056,2,7,27−0.040.37−0.990.020.35−0.990.512.1 × 10610,2,8,22
14, 16−0.030.26−0.55−0.050.35−0.670.456.3 × 1057,1,7,211−0.020.28−0.550.040.40−0.550.456.3 × 1057,1,7,211
16, 14−0.050.27−0.57−0.060.35−0.710.442.5 × 1068,2,6,313−0.030.26−0.55−0.050.33−0.550.421.4 × 1069,2,8,310
18, 12−0.040.29−0.61−0.030.30−0.480.422.7 × 1069,2,6,316−0.000.19−0.350.000.34−0.350.401.5 × 10611,2,9,311
20, 10−0.030.22−0.31−0.020.31−0.620.427.2 × 1057,1,5,225−0.010.17−0.42−0.010.31−0.420.384.3 × 10412,1,6,219
Table 5. Results obtained from the proposed method using different number of GCPs over Uromieh dataset, GeoEye-1.
Table 5. Results obtained from the proposed method using different number of GCPs over Uromieh dataset, GeoEye-1.
GCP, CP1st Step Search2nd Step Search
δr (Pixel)δc (Pixel)rmse (Pixel)cond.P1,P2,P3,P4dfδr (Pixel)δc (Pixel)rmse (Pixel)cond.P1,P2,P3,P4df
Meanstd.max.Meanstd.max.Meanstd.max.Meanstd.max.
4, 46−0.040.480.940.070.611.180.690.2 × 1013,0,3,11
5, 450.020.470.93−0.020.551.020.631.0 × 1014,0,3,12
6, 44−0.000.480.950.050.531.010.601.1 × 1014,0,4,13
8, 42−0.050.480.920.050.54−0.950.612.2 × 1015,0,6,05−0.030.430.920.040.50−0.950.612.2 × 1015,0,6,05
10, 40−0.040.49−0.820.040.510.880.601.6 × 1056,2,4,26−0.020.43−0.850.040.480.860.581.6 × 1056,2,4,26
12, 38−0.030.491.10−0.000.46−0.910.599.1 × 1065,2,6,1100.030.461.10−0.040.43−0.910.599.1 × 1065,2,6,110
14, 36−0.060.500.94−0.020.360.770.581.4 × 1065,2,6,114−0.010.460.92−0.020.37−0.880.571.6 × 1066,2,9,110
16, 340.010.46−0.84−0.040.390.920.581.7 × 1065,2,6,118−0.010.41−0.86−0.030.370.780.561.4 × 1066,2,7,116
18, 32−0.030.47−0.86−0.030.330.780.561.5 × 1025,1,5,025−0.020.42−0.87−0.030.34−0.750.552.1 × 10510,1,8,017
20, 30−0.010.480.92−0.030.360.760.541.6 × 1025,1,5,029−0.010.450.92−0.040.370.760.542.2 × 1046,1,10,023
Table 6. Results obtained from GA-optimized and convention RFMs using different number of GCPs over Isfahan dataset, SPOT-1A.
Table 6. Results obtained from GA-optimized and convention RFMs using different number of GCPs over Isfahan dataset, SPOT-1A.
GCP, CPGA-Optimized RFM (10 Times Run)Conventional RFM
δr (Pixel)δc (Pixel)rmse (Pixel)cond.IterationsP1,P2,P3,P4dfδr (Pixel)δc (Pixel)rmse (Pixel)cond.P1,P2,P3,P4df
Meanstd.max.Meanstd.max.Meanstd.max.Meanstd.max.
8, 200.030.57−1.38−0.070.61−1.400.912.1 × 1013796,2,4,22−0.321.16−2.651.281.846.912.563.4 × 1024,3,4,32
10, 180.020.511.180.050.59−1.220.884.5 × 1022805,4,6,23−0.171.14−2.490.190.95−2.791.511.3 × 1025,4,5,42
12, 160.030.491.08−0.050.591.120.874.1 × 1033676,3,10,230.240.57−1.120.630.631.541.102.9 × 1026,5,6,52
14, 140.020.45−1.03−0.050.521.520.832.2 × 1031449,5,8,24−0.150.82−1.740.590.831.931.337.5 × 1057,6,7,62
16, 12−0.010.501.10−0.030.510.870.795.4 × 1031959,4,7,480.160.83−2.10−0.140.872.051.232.5 × 1058,7,8,72
18, 100.020.49−0.87−0.000.481.100.733.7 × 1049611,4,9,480.240.74−1.49−0.310.95−2.281.283.6 × 1059,8,9,82
20, 80.010.521.14−0.010.47−0.720.717.2 × 10411512,9,10,730.081.76−3.850.440.771.431.985.1 × 10510,9,10,92
Table 7. Results obtained from GA-optimized and convention RFMs using different number of GCPs over Isfahan dataset, SPOT-1B.
Table 7. Results obtained from GA-optimized and convention RFMs using different number of GCPs over Isfahan dataset, SPOT-1B.
GCP, CPGA-Optimized RFM (10 Times Run)Conventional RFM
δr (Pixel)δc (Pixel)rmse (Pixel)cond.IterationsP1,P2,P3,P4dfδr (Pixel)δc (Pixel)rmse (Pixel)cond.P1,P2,P3,P4df
Meanstd.max.Meanstd.max.Meanstd.max.Meanstd.max.
8, 200.150.651.200.330.71−2.031.018.1 × 1015005,2,5,22−0.751.74−4.87−0.581.44−4.762.462.5 × 1014,3,4,32
10, 18−0.090.571.480.040.70−1.840.923.2 × 1023745,2,5,26−0.111.07−1.89−0.080.62−1.261.241.6 × 1025,4,5,42
12, 16−0.080.571.400.050.69−1.600.928.1 × 1013806,3,6,450.120.831.72−0.700.83−2.441.383.3 × 1026,5,6,52
14, 14−0.050.55−1.160.060.63−1.230.874.3 × 1024026,4,5,211−0.160.901.84−0.341.31−3.161.642.2 × 1047,6,7,62
16, 12−0.090.601.790.050.68−1.470.945.1 × 1022489,5,7,47−0.120.87−1.84−0.620.91−2.051.435.8 × 1048,7,8,72
18, 100.070.411.140.060.490.910.776.7 × 1024189,6,7,770.190.841.600.020.71−1.221.124.6 × 1059,8,9,82
20, 8−0.030.470.770.040.39−0.550.555.9 × 10331111,4,9,88−0.121.271.720.923.901.684.227.1 × 10510,9,10,92
Table 8. Results obtained from GA-optimized and convention RFMs using different number of GCPs over Hamedan dataset, IKONOS-2.
Table 8. Results obtained from GA-optimized and convention RFMs using different number of GCPs over Hamedan dataset, IKONOS-2.
GCP, CPGA-optimized RFM (10 Times Run)Conventional RFM
δr (Pixel)δc (Pixel)rmse (Pixel)cond.IterationsP1,P2,P3,P4dfδr (Pixel)δc (Pixel)rmse (Pixel)cond.P1,P2,P3,P4df
Meanstd.max.Meanstd.max.Meanstd.max.Meanstd.max.
8, 220.080.39−0.910.070.48−1.190.741.7 × 1055006,2,5,210.070.69−1.39−0.480.64−1.641.077.6 × 1014,3,4,32
10, 20−0.080.370.930.080.41−1.220.622.4 × 1053497,1,3,27−0.090.52−1.07−0.410.64−1.610.934.8 × 1035,4,5,42
12, 18−0.070.38−0.890.060.39−1.000.611.3 × 1062714,4,4,390.170.44−1.01−0.290.541.300.789.2 × 1056,5,6,52
14, 16−0.080.39−0.51−0.050.38−1.050.597.1 × 1043266,4,8,37−0.331.03−1.970.010.711.861.302.5 × 1057,6,7,62
16, 14−0.050.330.390.060.39−0.900.544.2 × 1064509,6,7,730.020.511.090.270.982.391.136.3 × 1058,7,8,72
18, 12−0.000.380.640.020.36−0.870.524.4 × 1053004,8,9,780.400.892.20−0.050.761.331.243.1 × 1069,8,9,82
20, 10−0.030.31−0.48−0.020.32−0.990.493.8 × 1052647,8,9,1150.652.978.87−0.452.39−4.093.955.8 × 10610,9,10,92
Table 9. Results obtained from GA-optimized and convention RFMs using different number of GCPs over Uromieh dataset, GeoEye-1.
Table 9. Results obtained from GA-optimized and convention RFMs using different number of GCPs over Uromieh dataset, GeoEye-1.
GCP, CPGA-Optimized RFM (10 Times Run)Conventional RFM
δr (Pixel)δc (Pixel)rmse (Pixel)cond.IterationsP1,P2,P3,P4dfδr (Pixel)δc (Pixel)rmse (Pixel)cond.P1,P2,P3,P4df
Meanstd.max.Meanstd.max.Meanstd.max.Meanstd.max.
8, 42−0.030.48−1.000.030.551.230.697.2 × 1034725,2,5,220.000.87−1.820.000.782.021.181.0 × 1024,3,4,32
10, 40−0.020.46−1.170.050.631.250.692.8 × 1063874,3,6,34−0.081.07−3.470.020.95−3.341.447.4 × 1025,4,5,42
12, 38−0.050.49−1.180.020.571.650.693.6 × 1074234,5,4,47−0.370.76−2.78−0.141.61−7.141.832.6 × 1056,5,6,52
14, 360.020.440.870.040.53−1.420.669.2 × 1065004,4,7,49−0.520.75−1.62−0.401.55−6.861.741.7 × 1067,6,7,62
16, 34−0.020.47−1.020.000.551.280.665.5 × 1065006,4,7,312−0.031.172.80−0.271.62−6.233.642.4 × 1068,7,8,72
18, 320.010.48−1.05−0.040.561.080.655.1 × 1065008,4,6,7110.865.71−13.17−0.421.16−4.089.095.0 × 1069,8,9,82
20, 30−0.030.46−0.940.020.50−0.990.614.7 × 1075009,5,10,511−1.468.0614.82−0.631.22−15.3711.159.6 × 10610,9,10,92
Table 10. Results obtained from low order polynomials using different number of GCPs over Isfahan dataset, SPOT-1A.
Table 10. Results obtained from low order polynomials using different number of GCPs over Isfahan dataset, SPOT-1A.
GCP, CPFirst-Order PolynomialSecond-Order Polynomial
δr (Pixel)δc (Pixel)rmse (Pixel)cond.dfδr (Pixel)δc (Pixel)rmse (Pixel)cond.df
Meanstd.max.Meanstd.max.Meanstd.max.Meanstd.max.
6, 22−0.030.070.67−0.052.644.413.641.6 × 1014
8, 20−0.040.250.570.032.892.733.501.3 × 1018
10, 18−0.020.370.58−0.043.304.863.311.4 × 10112
12, 16−0.030.330.57−0.023.464.953.482.3 × 10116−0.020.310.540.031.522.631.612.2 × 1014
14, 14−0.010.350.52−0.013.444.623.449.7 × 10120−0.010.280.480.021.552.611.589.7 × 1018
16, 12−0.000.410.640.033.244.683.271.4 × 10224−0.010.270.640.041.292.551.551.4 × 10212
18, 10−0.020.480.800.013.054.743.082.5 × 102280.020.350.650.011.292.481.512.5 × 10216
20, 8−0.000.450.780.012.944.762.983.2 × 102320.020.330.63−0.021.272.461.523.2 × 10220
Table 11. Results obtained from low order polynomials using different number of GCPs over Isfahan dataset, SPOT-1B.
Table 11. Results obtained from low order polynomials using different number of GCPs over Isfahan dataset, SPOT-1B.
GCP, CPFirst-Order PolynomialSecond-Order Polynomial
δr (Pixel)δc (Pixel)rmse (Pixel)cond.dfδr (Pixel)δc (Pixel)rmse (Pixel)cond.df
Meanstd.max.Meanstd.max.Meanstd.max.Meanstd.max.
6, 22−0.040.350.55−0.021.232.131.191.5 × 1014
8, 200.010.380.51−0.011.142.091.071.3 × 1018
10, 18−0.020.430.50−0.021.152.091.081.5 × 10112
12, 16−0.000.420.51−0.021.111.931.052.2 × 10116−0.010.310.63−0.011.061.111.012.2 × 1014
14, 14−0.040.491.000.011.091.941.069.7 × 10120−0.000.280.730.021.021.080.969.7 × 1018
16, 120.020.541.060.001.161.971.061.4 × 10224−0.020.330.670.011.041.020.961.4 × 10212
18, 100.010.581.06−0.021.041.951.022.5 × 10228−0.010.450.820.010.951.000.912.5 × 10216
20, 8−0.010.561.040.011.091.941.053.2 × 102320.000.450.790.010.960.980.903.2 × 10220
Table 12. Results obtained from low order polynomials using different number of GCPs over Hamedan dataset, IKONOS-2.
Table 12. Results obtained from low order polynomials using different number of GCPs over Hamedan dataset, IKONOS-2.
GCP, CPFirst-Order PolynomialSecond-Order Polynomial
δr (Pixel)δc (Pixel)rmse (Pixel)cond.dfδr (Pixel)δc (Pixel)rmse (Pixel)cond.df
Meanstd.max.Meanstd.max.Meanstd.max.Meanstd.max.
6, 240.030.250.270.010.650.970.650.7 × 1014
8, 220.030.220.25−0.020.581.080.631.2 × 1018
10, 200.010.240.42−0.030.521.000.591.7 × 10112
12, 180.020.320.43−0.010.611.000.603.1 × 101160.010.280.39−0.010.540.730.533.1 × 1014
14, 160.000.350.45−0.020.570.980.588.5 × 10120−0.000.250.58−0.010.480.630.518.5 × 1018
16, 140.010.360.53−0.000.560.930.581.4 × 102240.020.290.43−0.000.500.550.491.4 × 10212
18, 120.010.360.53−0.010.590.960.592.3 × 102280.010.300.430.010.470.550.482.3 × 10216
20, 100.020.350.52−0.000.560.870.576.2 × 102320.020.290.430.010.490.680.506.2 × 10220
Table 13. Results obtained from low order polynomials using different number of GCPs over Uromieh dataset, GeoEye-1.
Table 13. Results obtained from low order polynomials using different number of GCPs over Uromieh dataset, GeoEye-1.
GCP, CPFirst-Order PolynomialSecond-Order Polynomial
δr (Pixel)δc (Pixel)rmse (Pixel)cond.dfδr (Pixel)δc (Pixel)rmse (Pixel)cond.df
Meanstd.max.Meanstd.max.Meanstd.max.Meanstd.max.
6, 440.030.310.390.060.911.590.920.6 × 1014
8, 42−0.010.340.35−0.020.841.630.881.1 × 1018
10, 400.000.330.420.010.761.660.834.6 × 10112
12, 380.020.370.50−0.030.751.760.843.1 × 10216−0.010.340.420.000.250.870.613.1 × 1024
14, 36−0.020.410.57−0.020.761.720.865.9 × 10220−0.020.320.56−0.010.240.860.595.9 × 1028
16, 340.010.420.650.010.751.660.861.2 × 103240.010.230.560.000.230.790.541.2 × 10312
18, 32−0.030.400.61−0.010.711.660.817.9 × 103280.010.210.56−0.010.290.640.497.9 × 10316
20, 300.000.430.790.010.691.600.811.2 × 104320.020.310.78−0.010.280.680.501.2 × 10420
Table 14. Accuracies obtained from the proposed method using eight GCPs with respect to the bias compensated RPCs provided with Uromieh dataset, GeoEye-1.
Table 14. Accuracies obtained from the proposed method using eight GCPs with respect to the bias compensated RPCs provided with Uromieh dataset, GeoEye-1.
δr (Pixel)δc (Pixel)rmse (Pixel)
Meanstd.max.Meanstd.max.
−0.000.340.660.010.450.720.53

Share and Cite

MDPI and ACS Style

Jannati, M.; Valadan Zoej, M.J.; Mokhtarzade, M. A Knowledge-Based Search Strategy for Optimally Structuring the Terrain Dependent Rational Function Models. Remote Sens. 2017, 9, 345. https://doi.org/10.3390/rs9040345

AMA Style

Jannati M, Valadan Zoej MJ, Mokhtarzade M. A Knowledge-Based Search Strategy for Optimally Structuring the Terrain Dependent Rational Function Models. Remote Sensing. 2017; 9(4):345. https://doi.org/10.3390/rs9040345

Chicago/Turabian Style

Jannati, Mojtaba, Mohammad Javad Valadan Zoej, and Mehdi Mokhtarzade. 2017. "A Knowledge-Based Search Strategy for Optimally Structuring the Terrain Dependent Rational Function Models" Remote Sensing 9, no. 4: 345. https://doi.org/10.3390/rs9040345

APA Style

Jannati, M., Valadan Zoej, M. J., & Mokhtarzade, M. (2017). A Knowledge-Based Search Strategy for Optimally Structuring the Terrain Dependent Rational Function Models. Remote Sensing, 9(4), 345. https://doi.org/10.3390/rs9040345

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop