Article

Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera

State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
* Authors to whom correspondence should be addressed.
Sensors 2017, 17(6), 1441; https://doi.org/10.3390/s17061441
Submission received: 11 May 2017 / Revised: 15 June 2017 / Accepted: 16 June 2017 / Published: 20 June 2017
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)

Abstract
The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using double-cameras attached to a high-rigidity support along with push broom imaging is one method of enlarging the field of view while ensuring high resolution. High accuracy image mosaicking is the key factor in the geometric quality of the complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) for the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper uses the concept of the big virtual camera to obtain both a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post-processing. Experiments verified that the proposed method achieves seamless mosaicking while maintaining the geometric accuracy.

1. Introduction

Linear array push broom is a widely used imaging mode for the high resolution optical satellite (HROS). To increase the resolution, the camera is typically designed with a long focal length, which results in a narrow field angle. To increase the field of view (FOV), additional TDI-CCD detectors must be assembled together because a single existing TDI-CCD detector cannot meet the large field of view requirement; satellites such as IKONOS, Pleiades-HR, the WorldView series and ZiYuan3 adopt this design [1,2,3]. Another method of enlarging the field of view is to use double-cameras attached to a high-rigidity support concurrently with push broom imaging. Launched in 2014, the GaoFen2 (GF2) remote sensing satellite is a Chinese high resolution optical satellite equipped with a double-camera push broom imaging system that achieves a high resolution of 0.81 m; a 45 km wide strip can be obtained, which is four times the size that IKONOS can obtain. However, each individual camera in the double-camera system has an independent optical imaging system and imaging model, which creates additional work in the subsequent processing. Therefore, it is necessary to investigate the corresponding image mosaicking approach for the double-camera system.
Researchers have focused on investigations involving stitching several satellite images together [4,5]; two methods are traditionally used. The first method is to rectify all the images to the same geodetic coordinate system, and then stitch the rectified images by image registration and mosaicking to produce the complete image. This method has high geometric accuracy but high complexity in image rectification and registration [6,7,8]. The second method is to utilize the overlapping regions to match the homonymy points for stitching the image while disregarding the geometric imaging model of each image [9,10,11]. The stitched image that results from this method lacks a rigorous imaging model; therefore, it has poor geometric accuracy and cannot adequately meet surveying and mapping requirements. Researchers have proposed an inner FOV stitching method based on a virtual detector line built in the camera coordinate system for multi-TDI-CCD mosaicking and achieved an ideal result, which provided the basic idea for exploring the mosaicking approach using multiple cameras [12,13]. The difference and difficulty of double-camera system mosaicking compared with multi-TDI-CCD mosaicking is the unstable relative installation relationship caused by the changing thermal environment; that is to say, even if we calibrate the relative installation relationship using one image scene, for other scenes the relative installation relationship will probably have a small change, which will directly influence the mosaicking accuracy.
In this paper, we propose an image mosaicking approach for the double-camera system on the GF2 satellite based on the big virtual camera (BVC) in the satellite body coordinate system, which is built with no internal distortion and whose FOV covers the FOVs of both cameras. According to the original rigorous imaging model and the rational function model (RFM) updated by the relative orientation of each camera, images obtained at the same time by the double-camera system are re-projected to the same big virtual camera coordinate system to produce the entire central projection stitched image by feather processing alone. Moreover, the high accuracy RFM of the stitched image can be obtained concurrently, which provides the geometric information required for post-processing. The proposed approach has been validated for precision and robustness; it works under varying topographical conditions and achieves the goal of fully automated image data pretreatment on the ground.

2. Methodology

2.1. Rigorous Imaging Model for the Single Camera

In the double-camera push-broom imaging system on the GF2 satellite, the principal optic axes of the cameras are installed in the same plane to the greatest extent possible, with an angle of 2.01° between them, as shown in Figure 1a. Therefore, the double-cameras share the same orbit and attitude data when taking concurrent images, while each single camera has its own installation angles and internal optical parameters. The detailed information of the single camera is listed in Table 1.
The focal plane of the single camera is composed of five collinear TDI-CCD detectors within the field of view, and there are many overlapping pixels between adjacent TDI-CCD detectors for mosaicking. The overlapping regions of the double-camera imaging system can be divided into the camera overlapping region and the detector overlapping region, as shown in Figure 1b,c. The camera overlapping region is between the double cameras, and the detector overlapping region is between two adjacent single TDI-CCD detectors in the same single camera. The virtual single detector (VSD) is used for the five collinear TDI-CCD detectors to generate the entire stitched image from the single camera, while the big virtual camera (BVC) is used for the double-cameras to generate the entire stitched image from the double-camera imaging system. The goal of image mosaicking for the double-camera system is to seamlessly stitch the ten image strips obtained simultaneously by the ten TDI-CCD detectors and to produce a complete wide coverage stitched image based on the virtual single detector and the big virtual camera.
The imaging parameters of the rigorous geometric imaging model can be divided into the satellite auxiliary data and the camera parameters. The satellite auxiliary data of time, attitude and orbit help to convert the satellite body coordinates into object coordinates, which determines the ray of light from the projection center to the ground points in the satellite body coordinate system. The interior LOS parameters help to convert the image coordinates into the camera coordinates, which determines the accurate LOS of each detector in the camera coordinate system [14,15]. For the single push-broom camera, the rigorous imaging model of each TDI-CCD detector can be determined as follows:
$$
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}_{WGS84} = \begin{bmatrix} X_s \\ Y_s \\ Z_s \end{bmatrix}_{WGS84} + \lambda\, R_{J2000}^{WGS84}\, R_{body}^{J2000}\, R_{camera}^{body} \begin{bmatrix} \tan\psi_x \\ \tan\psi_y \\ 1 \end{bmatrix}
$$

$$
\begin{cases} \tan\psi_x = a_0 + a_1 s + a_2 s^2 + a_3 s^3 \\ \tan\psi_y = b_0 + b_1 s + b_2 s^2 + b_3 s^3 \end{cases}
$$
where $\psi_x$ and $\psi_y$ are the look angles [16,17] between the line of sight (LOS) and the vertical and horizontal axes of the focal plane; $s$ is the sequence number of the single CCD detector, and $a_0, a_1, a_2, a_3$ and $b_0, b_1, b_2, b_3$ are the corresponding internal parameters. $[X\ \ Y\ \ Z]_{WGS84}^T$ is the object space WGS84 coordinate of the ground point. $[X_s(t)\ \ Y_s(t)\ \ Z_s(t)]^T$ is the position vector of the projection center in the WGS84 coordinate system, which is interpolated from the ephemeris time observations. $R_{camera}^{body}$ is the installation matrix from the camera coordinate system to the satellite body coordinate system; $R_{body}^{J2000}$ is the attitude matrix from the satellite body coordinate system to the J2000 coordinate system, which is interpolated from the attitude time observations in the J2000 coordinate system; and $R_{J2000}^{WGS84}$ is the transformation matrix from the J2000 (ECI) coordinate system to the WGS84 (ECF) coordinate system, which is based on the IAU 2000 precession-nutation model from the International Earth Rotation and Reference Systems Service (IERS) conventions.
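To make the model concrete, the following Python sketch chains the cubic look-angle polynomials with the three rotation matrices and intersects the resulting WGS84 ray with the ellipsoid inflated by a height $H$, a common closed-form way to realize the forward projection. The function names and the pre-interpolated inputs (rotation matrices and projection center for the imaging line's time) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

WGS84_A, WGS84_B = 6378137.0, 6356752.3142  # WGS84 semi-major/semi-minor axes (m)

def camera_los(s, a_coef, b_coef):
    """Interior LOS of detector s from the cubic look-angle polynomials."""
    p = np.array([1.0, s, s ** 2, s ** 3])
    return np.array([a_coef @ p, b_coef @ p, 1.0])  # [tan(psi_x), tan(psi_y), 1]

def forward_project(s, a_coef, b_coef, R_cam_body, R_body_j2k, R_j2k_wgs, Xs, H=0.0):
    """Intersect the WGS84-frame LOS with the ellipsoid inflated by height H.

    R_* and Xs are assumed to be already interpolated for the imaging line's
    time from the attitude and orbit auxiliary data.
    """
    u = R_j2k_wgs @ R_body_j2k @ R_cam_body @ camera_los(s, a_coef, b_coef)
    A2, B2 = (WGS84_A + H) ** 2, (WGS84_B + H) ** 2
    # Solve the quadratic in lambda so that Xs + lambda * u lies on the ellipsoid
    qa = u[0] ** 2 / A2 + u[1] ** 2 / A2 + u[2] ** 2 / B2
    qb = 2.0 * (Xs[0] * u[0] / A2 + Xs[1] * u[1] / A2 + Xs[2] * u[2] / B2)
    qc = Xs[0] ** 2 / A2 + Xs[1] ** 2 / A2 + Xs[2] ** 2 / B2 - 1.0
    lam = (-qb - np.sqrt(qb ** 2 - 4.0 * qa * qc)) / (2.0 * qa)  # nearer intersection
    return Xs + lam * u  # ground point [X, Y, Z] in WGS84
```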
The interior LOS of the virtual single detector can be easily determined by:
$$
\begin{cases} \tan\tilde{\psi}_x = A_0 \\ \tan\tilde{\psi}_y = B_0 + B_1 s \end{cases}
$$
where $A_0$ is the mean value of the internal parameters $(a_0)_i\ (i = 1, 2, 3, 4, 5)$ of the five TDI-CCD detectors, which places the virtual single detector on the center line of the multi-TDI-CCD array along the flight direction. $B_0$ is equal to $b_0$ and $B_1$ is equal to $b_1$ of the leftmost TDI-CCD1, which eliminates internal distortion for the virtual single detector. Additionally, the original auxiliary data should undergo steady-state processing [18,19] to generate the virtual single images. Steady-state processing corrects the attitude oscillation so as to increase the RFM fitting precision of the rigorous imaging model.
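A minimal sketch of this virtual single detector design, under assumed variable names (the coefficient values in the example are made up):

```python
import numpy as np

def design_vsd(a0_per_ccd, b0_ccd1, b1_ccd1):
    """Distortion-free interior LOS of the virtual single detector: A0 is the
    mean of the five detectors' a0 (centre line along the flight direction);
    B0 and B1 are taken from the leftmost TDI-CCD1."""
    A0 = float(np.mean(a0_per_ccd))
    return lambda s: (A0, b0_ccd1 + b1_ccd1 * s)  # (tan_psi_x, tan_psi_y) at detector s

# Example with made-up coefficients: five a0 values plus b0/b1 of TDI-CCD1
vsd_los = design_vsd([1.1e-4, 1.3e-4, 1.2e-4, 1.2e-4, 1.2e-4], -0.0175, 1.4e-6)
print(vsd_los(0), vsd_los(6143))  # LOS at the first and last virtual detector
```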

2.2. Rigorous Imaging Model of the Big Virtual Camera

The single camera auxiliary data can also be applied to the BVC using steady-state processing. The BVC parameters consist of the exterior installation matrix and the interior LOS parameters. Based on the symmetrical installation relationship of the double-camera system shown in Figure 1a, we can assume that the BVC focal length equals that of the single camera and that the BVC field of view covers the entire imaging system, which is equivalent to placing a big virtual detector on the focal plane, as shown in Figure 1c. The exterior BVC installation matrix is taken as the unit matrix, which means the LOS of each detector in the camera coordinate system equals the LOS in the satellite body coordinate system. To determine the interior BVC LOS, the number of virtual detectors and the field of view must be determined first. $N_A$ and $N_B$ represent the numbers of detectors of the virtual single detectors A and B. According to the design criterion, the number of detectors in the camera overlapping region in the satellite body coordinate system is $N_{Camera}$. Therefore, the number of BVC detectors is:
$$
N_V = N_A + N_B - N_{Camera}
$$
Based on the above analysis, the LOS $[x_i(s)\ \ y_i(s)\ \ 1]^T$ in the single camera coordinate system of the image point $(s, l)_i$ on the single virtual detector can be determined as follows:
$$
x_i(s) = A_0^i, \qquad y_i(s) = B_0^i + B_1^i s
$$
where $i = A, B$, and $A_0^i$, $B_0^i$, $B_1^i$ are the interior LOS coefficients of the single virtual detector.
The serial numbers of virtual detectors A and B can be represented by $s_A = 0, \ldots, N_A - 1$ and $s_B = 0, \ldots, N_B - 1$. $R_A^{body}$ and $R_B^{body}$ are the installation matrices from the camera A and B coordinate systems to the satellite body coordinate system. As shown in Figure 1c, we can determine the endpoint LOSs for single virtual detectors A and B, $A_{start}$: $[\bar{x}_{A0}\ \ \bar{y}_{A0}\ \ \bar{z}_{A0}]^T$, $A_{end}$: $[\bar{x}_{A1}\ \ \bar{y}_{A1}\ \ \bar{z}_{A1}]^T$, $B_{start}$: $[\bar{x}_{B0}\ \ \bar{y}_{B0}\ \ \bar{z}_{B0}]^T$, $B_{end}$: $[\bar{x}_{B1}\ \ \bar{y}_{B1}\ \ \bar{z}_{B1}]^T$, as follows:
$$
\begin{bmatrix} \bar{x}_{A0} \\ \bar{y}_{A0} \\ \bar{z}_{A0} \end{bmatrix} = R_A^{body} \begin{bmatrix} x_A(0) \\ y_A(0) \\ 1 \end{bmatrix}, \qquad
\begin{bmatrix} \bar{x}_{A1} \\ \bar{y}_{A1} \\ \bar{z}_{A1} \end{bmatrix} = R_A^{body} \begin{bmatrix} x_A(N_A - 1) \\ y_A(N_A - 1) \\ 1 \end{bmatrix}
$$

$$
\begin{bmatrix} \bar{x}_{B0} \\ \bar{y}_{B0} \\ \bar{z}_{B0} \end{bmatrix} = R_B^{body} \begin{bmatrix} x_B(0) \\ y_B(0) \\ 1 \end{bmatrix}, \qquad
\begin{bmatrix} \bar{x}_{B1} \\ \bar{y}_{B1} \\ \bar{z}_{B1} \end{bmatrix} = R_B^{body} \begin{bmatrix} x_B(N_B - 1) \\ y_B(N_B - 1) \\ 1 \end{bmatrix}
$$
Subsequently, the LOSs of $A_{start}$, $A_{end}$, $B_{start}$ and $B_{end}$ can be projected onto the plane $Z_b = 1$ to obtain their components in the $X_b$ and $Y_b$ directions; therefore, the projected coordinates in the satellite body coordinate system for the four endpoints, $(x_{A0}, y_{A0})$, $(x_{A1}, y_{A1})$, $(x_{B0}, y_{B0})$ and $(x_{B1}, y_{B1})$, can be calculated as follows:
$$
x_{A0} = \frac{\bar{x}_{A0}}{\bar{z}_{A0}}, \quad y_{A0} = \frac{\bar{y}_{A0}}{\bar{z}_{A0}}, \quad x_{A1} = \frac{\bar{x}_{A1}}{\bar{z}_{A1}}, \quad y_{A1} = \frac{\bar{y}_{A1}}{\bar{z}_{A1}}
$$

$$
x_{B0} = \frac{\bar{x}_{B0}}{\bar{z}_{B0}}, \quad y_{B0} = \frac{\bar{y}_{B0}}{\bar{z}_{B0}}, \quad x_{B1} = \frac{\bar{x}_{B1}}{\bar{z}_{B1}}, \quad y_{B1} = \frac{\bar{y}_{B1}}{\bar{z}_{B1}}
$$
The projected BVC coordinates of the two endpoints $(x_{BVD}^{start}, y_{BVD}^{start})$ and $(x_{BVD}^{end}, y_{BVD}^{end})$ in the satellite body coordinate system can be calculated as follows:
$$
x_{BVD}^{start} = x_{BVD}^{end} = \frac{x_{A0} + x_{B0}}{2}, \qquad y_{BVD}^{start} = y_{A0}, \qquad y_{BVD}^{end} = y_{B1}
$$
The BVC can be designed with no internal distortion by dividing the interior LOS equally according to the number of virtual detectors; we can subsequently determine the LOS $[x_V(s)\ \ y_V(s)\ \ 1]^T$ of detector $s$ $(s = 0, \ldots, N_V - 1)$ in the satellite body coordinate system as follows:
$$
x_V(s) = x_{BVD}^{start}, \qquad y_V(s) = y_{BVD}^{start} + \frac{y_{BVD}^{end} - y_{BVD}^{start}}{N_V - 1}\, s
$$
Then, we can determine the BVC rigorous imaging model as follows:
$$
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}_{WGS84} = \begin{bmatrix} X_s \\ Y_s \\ Z_s \end{bmatrix}_{WGS84} + \lambda\, R_{J2000}^{WGS84}\, R_{body}^{J2000}\, R_{camera}^{body} \begin{bmatrix} x_V(s) \\ y_V(s) \\ 1 \end{bmatrix}
$$
The advantage of placing the virtual detector in the center line of the double-camera’s detectors along the track direction is to reduce the mosaic and virtualization errors. Based on the determined BVC rigorous imaging model, the corresponding RFM can be determined to generate the complete stitched images.
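The construction in this subsection can be condensed into a short sketch. The interfaces are assumptions for illustration: `los_a`/`los_b` return the homogeneous VSD LOS $[x_i(s)\ \ y_i(s)\ \ 1]^T$ in each camera frame, and only the endpoints that enter the final formulas are evaluated.

```python
import numpy as np

def design_bvc(N_A, N_B, N_camera, los_a, los_b, R_A_body, R_B_body):
    """Distortion-free BVC LOS in the satellite body frame, derived from the
    projected endpoint LOSs of the two virtual single detectors."""
    N_V = N_A + N_B - N_camera                     # number of BVC detectors
    a0 = R_A_body @ los_a(0)                       # A_start in the body frame
    b0 = R_B_body @ los_b(0)                       # B_start
    b1 = R_B_body @ los_b(N_B - 1)                 # B_end
    # Normalise onto the plane Z_b = 1 (divide by the z component)
    x_A0, y_A0 = a0[0] / a0[2], a0[1] / a0[2]
    x_B0 = b0[0] / b0[2]
    y_B1 = b1[1] / b1[2]
    x_start = (x_A0 + x_B0) / 2.0                  # centre line along the track
    y_start, y_end = y_A0, y_B1
    def bvc_los(s):                                # equally divided interior LOS
        return np.array([x_start, y_start + (y_end - y_start) / (N_V - 1) * s, 1.0])
    return N_V, bvc_los
```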

2.3. Image Mosaic Workflow Based on VSD and BVC

One straightforward way to produce the complete stitched image is to first generate the stitched image from each single camera and subsequently stitch the two images to generate the complete stitched image from the double-cameras. However, resampling all the original single images twice is time-consuming. The image mosaic workflow based on the virtual single detector and the big virtual camera is therefore designed with six primary steps, as shown in Figure 2a.
Step 1: Determine the rigorous imaging model (RIM) for each original single camera. $RIM_a$ and $RIM_b$ for the double-cameras can be constructed based on the collinearity condition equation. High accuracy camera parameters are necessary in the proposed mosaicking approach. The camera parameters, which include the exterior installation angles and the interior LOS parameters, can be determined using an on-orbit high accuracy geometric calibration method [20,21,22]. The projection center and attitude of each imaging line can be interpolated from the original GPS and attitude data.
Step 2: Determine the RFM for the virtual single detector (VSD). The VSD shares the same installation angles with the RIM, and the interior VSD LOS can be designed with no distortion according to the RIM field of view. Based on the new integration time obtained by averaging the original integration times, the interpolated projection center and the smoothed attitude data can be used to determine $RFM_a$ and $RFM_b$ for the left and right VSDs. $RFM_a$ and $RFM_b$ are the steady-state reimaging models of $VSD_a$ and $VSD_b$ and an excellent replacement for $RIM_a$ and $RIM_b$; they filter the observation noise in the auxiliary data through the least squares method and thus have a more accurate fitting precision [18].
Step 3: Generate the virtual single images (VSIs) of the camera overlap region. Although the consistency of the image positioning accuracy between adjacent TDI-CCD detectors can be ensured after an on-orbit high accuracy calibration, the consistency of the image positioning accuracy between the two cameras is hard to maintain due to the inevitable changes in their relative installation relationship caused by the external environmental temperature, which leads to dislocation in the camera overlap region. Therefore, to ensure seamless mosaicking based solely on the geometric information, the relative orientation of $RFM_a$ and $RFM_b$ should be determined. The two original single images obtained by the leftmost TDI-CCD detector in the right single camera and the rightmost TDI-CCD detector in the left single camera are projected to their corresponding VSDs according to $RFM_a$ and $RFM_b$, and their VSIs are generated using image resampling for the subsequent relative orientation. Forward-projection is based on the rigorous imaging model and backward-projection is based on the RFM in the steady-state reimaging processing, as shown in Figure 2b.
Step 4: Perform the relative orientation of the two virtual single images (VSIs). Many homonymy points of the two adjacent VSIs can be matched automatically using the SIFT algorithm, with image space coordinates $(x_a, y_a)_i$ in $VSI_a$ and $(x_b, y_b)_i$ in $VSI_b$. The corresponding object space coordinates $(B, L, H)_{a_i}$ and $(B, L, H)_{b_i}$ of $(x_a, y_a)_i$ and $(x_b, y_b)_i$ can be determined using forward intersection based on $RFM_a$ and $RFM_b$. Subsequently, the elevation $H$ can be interpolated from a global DEM such as SRTM according to the average object space coordinates $\left(\frac{B_a + B_b}{2}, \frac{L_a + L_b}{2}\right)_i$. The corresponding updated image space coordinates $(x_a', y_a')_i$ and $(x_b', y_b')_i$ in $VSI_a$ and $VSI_b$ of the same object space coordinate $\left(\frac{B_a + B_b}{2}, \frac{L_a + L_b}{2}, H\right)_i$ can be determined using backward intersection based on $RFM_a$ and $RFM_b$. Finally, the affine transform coefficients of the updated $RFM_{new}^a$ and $RFM_{new}^b$ can be calculated based on the point pairs $(x_a, y_a)_i$ and $(x_a', y_a')_i$ in $VSI_a$, and $(x_b, y_b)_i$ and $(x_b', y_b')_i$ in $VSI_b$. With the updated $RFM_{new}^a$ and $RFM_{new}^b$, the consistency of the $VSD_a$ and $VSD_b$ positioning accuracy can be ensured, which is the key factor for generating the complete stitched image from the double-cameras.
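A sketch of this step under assumed interfaces: `rfm.image_to_ground`, `rfm.ground_to_image` and `dem.height` stand in for the RFM forward intersection, backward intersection and SRTM interpolation; none of these names come from the paper.

```python
import numpy as np

def relative_orientation(pts_a, pts_b, rfm_a, rfm_b, dem, model="translation"):
    """Estimate the compensation coefficients of RFM_new_a / RFM_new_b from
    matched homonymy points (hypothetical rfm/dem interfaces)."""
    upd_a, upd_b = [], []
    for (xa, ya), (xb, yb) in zip(pts_a, pts_b):
        Ba, La, _ = rfm_a.image_to_ground(xa, ya)      # forward intersection
        Bb, Lb, _ = rfm_b.image_to_ground(xb, yb)
        B, L = (Ba + Bb) / 2.0, (La + Lb) / 2.0        # averaged object coordinates
        H = dem.height(B, L)                           # elevation from the global DEM
        upd_a.append(rfm_a.ground_to_image(B, L, H))   # backward intersection
        upd_b.append(rfm_b.ground_to_image(B, L, H))

    def fit(orig, upd):
        orig, upd = np.asarray(orig, float), np.asarray(upd, float)
        if model == "translation":
            return upd.mean(axis=0) - orig.mean(axis=0)   # (dx, dy) offsets only
        A = np.column_stack([np.ones(len(orig)), orig])   # [1, x, y] affine design
        return np.linalg.lstsq(A, upd, rcond=None)[0]     # 3 x 2 coefficient matrix

    return fit(pts_a, upd_a), fit(pts_b, upd_b)
```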
Step 5: Determine the RFM for the big virtual camera (BVC). Based on the rigorous imaging model of the single camera, the rigorous imaging model of the big virtual camera can be determined; combined with the steady-state processed auxiliary data, the high accuracy $RFM_{BVC}$ of the complete stitched image can be produced, which is applied to the final complete stitched image.
Step 6: Generate the complete stitched image. To generate the virtual single images in the BVC coordinate system, two rounds of forward and backward projection are performed:
(1) First, for an image point $(s, l)_{original}$ of the original single image, the detector number $s$ determines the interior orientation elements, and the imaging line $l$ determines the imaging time so that the exterior orientation elements can be interpolated by time from the attitude and orbit observations. Thus, the corresponding object space coordinate $(B, L, H)$ of the image point $(s, l)_{original}$ is calculated using forward projection onto the elevation surface $H$ with the rigorous imaging model; $H$ can be interpolated from a reference elevation surface (such as the SRTM DEM).
(2) The image point $(s, l)_{VSD}$ on the corresponding virtual single detector is calculated using backward projection from the object space coordinate $(B, L, H)$ with the VSD RFM.
(3) Then, the updated object space coordinate $(B, L, H)$ can be forward intersected from the image point $(s, l)_{VSD}$ based on $RFM_{new}^{a/b}$ updated by the relative orientation, with $H$ interpolated from the reference elevation surface.
(4) The final image point $(s, l)_{BVC}$ on the big virtual detector can be backward intersected from the object space coordinate $(B, L, H)$ based on $RFM_{BVC}$.
(5) Thus, the point $(s, l)_{BVC}$ on the big virtual detector and the point $(s, l)_{original}$ on the original single image are correlated. The digital number of $(s, l)_{BVC}$ is determined from $(s, l)_{original}$ using gray resampling.
(6) Repeat the above steps until every pixel for all the single virtual images on the big virtual detector is complete; then, ten single virtual images can be produced.
(7) Finally, the single virtual images produced will be stitched based on the same initial pixel coordinates for the big virtual detector in the big virtual camera, and feather processing without performing the local correction in the overlapping region can produce the seamless mosaic image.
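The per-pixel projection chain of sub-steps (1)–(5) can be written compactly as below. `rim.forward`, `rfm_*.forward`/`rfm_*.backward` and `dem.height` are hypothetical wrappers around the models defined earlier, not an API from the paper.

```python
def original_to_bvc(s, l, rim, rfm_vsd, rfm_new, rfm_bvc, dem):
    """Map one original image point (s, l) to its BVC image point."""
    B, L, H = rim.forward(s, l, dem)        # (1) rigorous model -> ground (B, L, H)
    s1, l1 = rfm_vsd.backward(B, L, H)      # (2) ground -> VSD image point
    B2, L2 = rfm_new.forward(s1, l1, dem)   # (3) relatively oriented RFM -> ground
    H2 = dem.height(B2, L2)
    return rfm_bvc.backward(B2, L2, H2)     # (4) ground -> BVC image point

# (5)-(7): iterating this map over all pixels correlates every original pixel
# with a BVC position; grey values are then resampled (e.g. bilinearly) and the
# ten virtual images are stitched with feather processing in the overlaps.
```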

2.4. Error Analysis of the Mosaic Imaging Process

2.4.1. Consistency of the Image Positioning Accuracy

Because the geometric imaging model is important in the proposed mosaicking approach, the consistency of the TDI-CCD detectors' positioning accuracy is the geometric foundation of the mosaic accuracy. As shown in Figure 3, G is the object space point of the image space point P in the overlapping region of the stitched image. The image space points P1 and P2 in the adjacent images are determined using backward-projection from G to achieve the conjugate relationship; this means that P1 and P2 are homonymy points. In other words, the corresponding object space points G1 and G2 of P1 and P2, based on their own geometric imaging models, should be as close as possible, and the distance between the two object space points reflects the consistency of the positioning accuracy between the adjacent single images.
The positioning accuracy is determined by the accuracy of the satellite auxiliary data and the camera parameters. Using an on-orbit high accuracy geometric calibration [16,17] for the double-cameras, the camera interior LOS parameters can be continuously verified, while small changes will exist in the installation angles due to the changing thermal environment. For the detector overlap region of adjacent collinear TDI-CCD detectors, the adjacent TDI-CCD detectors scan the same object line at approximately the same time; then, the same attitude and orbit auxiliary data can be interpolated for that time, and the consistency of their positioning accuracy can be achieved. For the camera overlap region of the adjacent cameras, the imaging times of the homonymy points differ due to the inevitable installation error, which causes the interpolated auxiliary data of the same scanning line to be slightly different [19]. Additionally, the unstable relative installation relationship will directly cause the consistency of their positioning accuracy to fall short of the seamless mosaicking requirements. Therefore, the relative orientation should be determined for the double-cameras.

2.4.2. Influence of the Elevation Error

The difference between the interpolated elevation and the actual elevation, which is caused by the limited absolute positioning accuracy and the precision of the elevation data, is another key factor in the mosaic accuracy.
As shown in Figure 4, in the overlapping region of the adjacent single image, G is the intersection point of the homonymy points P1 and P2 for the actual elevation, and ΔH is the elevation error. Δl represents the object space projection distance of the mosaic error caused by ΔH, as:
$$
\Delta l = \Delta H \times \left| \tan\alpha_1 - \tan\alpha_2 \right|
$$
where the directional angles $\alpha_1$ and $\alpha_2$ of the homonymy points P1 and P2 are the bias FOV angles of the single CCD detector in the satellite body coordinate system along the track.
For the non-overlapping area, Δv represents the virtualization error caused by the elevation error as follows:
$$
\Delta v = \Delta H \times \left| \tan\alpha - \tan\beta \right|
$$
where α and β represent the bias FOV angles of the image homonymy point Q1 on the original single image and Q2 on the corresponding single virtual image.
The larger the intersection angle of the homonymy points' LOSs, the greater the influence of the elevation error; therefore, to ensure the mosaic and virtualization accuracy, the required interpolated elevation accuracy should be achieved. The elevation error rarely affects the mosaic accuracy of the detector overlap region due to the collinear installation, but it has a significant effect on the mosaic accuracy of the camera overlap region. Placing the virtual detector on the center line of the double-camera's TDI-CCD detectors along the track direction distributes the elevation error equally across all the detectors, ensuring the mosaic and virtualization accuracy. After the on-orbit high accuracy calibration and relative orientation, the accuracy of the image positioning and of the elevation interpolated from the global DEM can be ensured. Based on the above analysis, a seamless mosaic image can be achieved using image feather processing alone when stitching the single BVC virtual images directly based on their image coordinates in the same big virtual detector coordinate system.
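A hedged numeric example of the mosaic-error formula, with all input values assumed: for a 10 m elevation error in the camera overlap region, where the two LOSs are separated by roughly the 2.01° inter-camera angle (bias FOV angles of about ±1.005°), the projected mosaic error stays below half a pixel at the 0.81 m GSD.

```python
import math

dH = 10.0                                            # assumed elevation error (m)
a1, a2 = math.radians(1.005), math.radians(-1.005)   # assumed bias FOV angles
dl = dH * abs(math.tan(a1) - math.tan(a2))           # object-space mosaic error
print(f"dl = {dl:.3f} m = {dl / 0.81:.2f} pixels")   # ~0.351 m, ~0.43 pixels
```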

3. Experiments and Discussion

3.1. Experimental Data

Four sets of double-panchromatic-images from the GF2 satellite were used in the experiments to verify the reliability of the proposed approach. Table 2 lists the primary information for the images. The four study areas are located in Songshan, Anyang and Dongying, China, and the double-images have the same size. Songshan is a mountainous area with an elevation difference of 1359 m and an average elevation of 431 m, while Anyang and Dongying are flatlands with small elevation differences. Scenes A and B, images of Songshan, were captured in the same orbit on 27 October 2015. The experimental data are the panchromatic images after radiometric correction, together with the same high accuracy calibrated camera parameters and the corresponding auxiliary data. Digital orthophoto maps (DOM) and digital elevation models (DEM) obtained from the WorldView2 satellite were used as reference data for the geometric accuracy evaluation. The DOM resolution is 0.3 m, and the DEM resolution is 2 m. To ensure an appropriate number and distribution of the auto-matched ground control points (GCPs) used for the geometric accuracy assessment, the selected satellite images have little cloud and water cover. The matching process was performed using the SIFT algorithm, and the matching accuracy was better than 0.3 pixels [23].
Visual and geometric accuracy evaluation of the complete stitched images were performed to comprehensively validate the effectiveness of our approach.

3.2. Results and Discussion

3.2.1. Visual Evaluation

Based on the high accuracy and stable interior calibrated parameters, seamless mosaicking of the detector overlapping region can be achieved directly from the geometric relationship, with feather processing performed only as local processing; therefore, a significant dislocation of the seamline will occur if there is a significant image positioning deviation between the homonymy points on adjacent single CCD detectors, which can be effectively detected by visual evaluation. Due to the unstable small changes in the relative installation relationship of the double-cameras, relative orientation is required for the seamless mosaicking of the camera overlap region. Figure 5 shows the experimental results for Scene A GF2 satellite imagery. Areas 1, 2 and 3 are camera overlapping regions, while area 4 is the detector overlapping region between detectors 2 and 3 in camera A, and area 5 is the detector overlapping region between detectors 3 and 4 in camera B. The details of areas 1–5 in Figure 5a are shown in Figure 5b–k. Areas 1, 2 and 4 are in mountainous areas, and areas 3 and 5 are in plain areas.
Areas 4 and 5 achieve seamless mosaicking when feather processing is performed for the detector overlapping regions. This is due to the high accuracy and stable on-orbit calibrated interior parameters, which are the most important factors for the consistency of the positioning accuracy in the single camera. We can also see that the stitched areas of the camera overlap regions are seamless and smooth without a local correction after relative orientation with the translation transformation. The stitching result is not affected by the terrain of the overlapping region, which benefits from the geometric characteristics discussed in the following section.

3.2.2. Geometric Accuracy Evaluation

RFM Fitting Precision

The RFM fitting precision is one key index of the geometric quality of the complete stitched image. It can be calculated using check points evenly distributed among the virtual control points when fitting the RFM based on the rigorous imaging model, as shown in Figure 6, with the results shown in Table 3.
The maximum and minimum absolute errors of the rational polynomial coefficients (RPCs) fit using the original attitude reach approximately 0.65 pixels in the sample direction and 0.93 pixels in the line direction, respectively. However, the RMSE of the RPC fit using the smoothed attitude becomes better than 1.00 × 10⁻⁴ pixels in both the sample and line directions, which can be ignored because it fully achieves the accuracy requirement. This is because the virtual detector of the big virtual camera is designed without internal distortion, and the exterior attitude data in the steady-state processing are interpolated using a polynomial model that filters the observation noise through the least squares method (LSM). The high RFM accuracy of the complete stitched image obtained using the big virtual camera provides the geometric information necessary for post-processing.
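To make the fitting procedure concrete, the sketch below fits a deliberately simplified first-order rational function to virtual control points on a (lat, lon, h) grid and reports check-point statistics. Production RPCs are third-order with 78 coefficients and the control coordinates come from the rigorous model, so this is only a structural illustration with synthetic data.

```python
import numpy as np

def fit_rfm1(lat, lon, h, img):
    """Least-squares fit of img ~ (a0 + a1*lat + a2*lon + a3*h) /
    (1 + c1*lat + c2*lon + c3*h), linearised as A @ x = img."""
    A = np.column_stack([np.ones_like(lat), lat, lon, h,
                         -img * lat, -img * lon, -img * h])
    return np.linalg.lstsq(A, img, rcond=None)[0]

def eval_rfm1(x, lat, lon, h):
    num = x[0] + x[1] * lat + x[2] * lon + x[3] * h
    den = 1.0 + x[4] * lat + x[5] * lon + x[6] * h
    return num / den

# Virtual control points: normally an image grid at several elevation layers,
# projected to the ground with the rigorous model; synthetic values here.
rng = np.random.default_rng(0)
lat, lon, h = (rng.uniform(-1, 1, 500) for _ in range(3))
samp = (0.3 + 0.9 * lat + 0.1 * lon + 0.02 * h) / (1 + 0.01 * lat)
x = fit_rfm1(lat, lon, h, samp)
res = eval_rfm1(x, lat, lon, h) - samp             # check-point residuals
print("RMSE", np.sqrt((res ** 2).mean()), "Max", res.max(), "Min", res.min())
```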

Mosaic Accuracy

The mosaic accuracy we focus on can be divided into the mosaic accuracy of the detector overlap region and that of the camera overlap region. Due to the collinear installation of the multi-TDI-CCD detectors, the same scanning line in the detector overlap region is imaged concurrently, and the auxiliary data are interpolated concurrently by time. Based on the high-accuracy camera interior LOS parameters, the mosaic accuracy of the detector overlap region can stably reach approximately zero pixels; therefore, only feather processing is needed for seamless mosaicking. The mosaic accuracy of the camera overlap region is difficult to ensure using an on-orbit calibration alone due to the inconsistent relative installation relationship of the double-cameras caused by the changing thermal environment. Therefore, the relative orientation should be performed for the double-images captured by the double-cameras. Theoretically, an affine transformation should be performed in the relative orientation considering the rigorous installation relationship. However, the small imaging overlap region of the double-cameras and the complicated imaging terrain make the auto-matching of homonymy points difficult and not completely reliable in most cases. Therefore, the affine transformation will probably yield less reliable geometric accuracy for the double images, especially at the edges of the double-images away from the overlapping region. Considering these problems, the translation transformation may be an alternative choice if it can also achieve the ideal mosaic accuracy.
Based on the original RFM of the virtual single detector, the adjacent single images can be projected to the big virtual detector according to Step 3 of the workflow. Subsequently, the mosaic accuracy can be evaluated using the consistency of the homonymy points in the overlapping regions. More than ten thousand evenly distributed dense homonymy points were matched automatically using the SIFT algorithm. $(s, l)_{left}$ is the image coordinate of the matched point in the left single virtual image, and $(s, l)_{right}$ is the image coordinate of the corresponding homonymy point in the adjacent right single virtual image. Because the homonymy points' image coordinates of the left and right single virtual images are in the same big virtual detector plane coordinate system, they should be identical under ideal conditions; their deviation directly affects and reflects the mosaic accuracy. The relationship of the mosaic errors (pixels) with the line number is shown in Figure 7. Significant mosaic errors occur in both the across- and along-track directions. The mosaic error fluctuations of the four scenes in the camera overlap region are small, which verifies the consistency of the auxiliary data measurement accuracy within the small imaging time interval of the double-cameras caused by the small installation displacement. The mosaic errors of scenes A and B in the same orbit are approximately the same, and they differ significantly from those of scenes C and D, which were imaged on different days. Considering the use of high accuracy elevation information, such as GDEM2, for generating the virtual image, the inconsistent mosaic errors are probably caused by the change of the relative installation relationship of the double-cameras. Based on the densely distributed homonymy points in the camera overlap region, translation and affine transformations for the updated RFMs of the single cameras are performed using the method in Step 4 of the workflow. The affine model based on the RFM was used as the exterior orientation model [24,25]:
$$
\begin{cases} x + a_0 + a_1 x + a_2 y = RFM_x(lat, lon, h) \\ y + b_0 + b_1 x + b_2 y = RFM_y(lat, lon, h) \end{cases}
$$
Based on the updated RFMs, the mosaic accuracy statistics were re-calculated, and the results are shown in Table 4. The mosaic errors in the across- and along-track directions after the translation and affine transformations are significantly reduced. There is no significant advantage of the affine transformation over the translation transformation; thus, we can speculate that the relative installation change is primarily caused by the changing pitch and roll angles, while the yaw angle change is very small and can be ignored. In addition, a simple calculation shows that the plane mosaic accuracy is better than one pixel, and feather processing is also suitable for the seamless mosaicking of the camera overlap region after the relative orientation is determined.

Positioning Accuracy

Based on the updated RFMs obtained in Step 4, the complete stitched image can be generated using Steps 5 and 6 of the workflow. To evaluate the performance of the proposed image mosaicking approach in maintaining the image positioning accuracy, the absolute and relative positioning accuracy of the complete stitched image and of each single camera image with its original RFM were analyzed. We extracted the corresponding DOM and DEM, and 64 GCPs for the complete stitched image and 32 GCPs for each single camera image were matched to evaluate the positioning accuracy. The absolute positioning accuracy is the image positioning accuracy without GCPs, and all GCPs were used as check points to evaluate it. The internal relative positioning accuracy can be evaluated using the positioning accuracy with a few GCPs: an affine transformation of the image was applied based on 4 GCPs in the 4 corners of the image to eliminate the exterior systematic offset of the positioning, and the internal relative accuracy statistics were calculated based on the remaining GCPs. The RFM affine model was again used as the exterior orientation model [24,25].
The geometric accuracy evaluation results of the stitched images are listed in Table 5. The absolute positioning accuracy of the complete virtual image was approximately equal to that of the two single camera images, owing to the big virtual detector being placed on the center line of the double-camera detectors along the track direction. Additionally, the relative positioning accuracy of the complete virtual images was also approximately equal to that of the two single camera images. Although an internal relative accuracy loss of approximately 0.1 pixels occurs after mosaicking, the approach still performs well in maintaining the original geometric accuracy. In addition, the internal relative accuracy achieved by the affine transformation is slightly better than that of the translation transformation, which benefits from the well matched dense homonymy points in the camera overlap region. However, in most cases, due to cloud coverage and complicated terrain, densely and evenly distributed homonymy points are difficult to obtain for calculating the coefficients of a high accuracy affine transformation, which leads to instability at the image edges away from the camera overlap region. The translation transformation is a much simpler method that can be computed from a few homonymy points and has better practicability; therefore, it is the preferred method for the relative orientation. Thus, we conclude that the proposed mosaicking approach has no considerable negative effects on the absolute and relative positioning accuracy.
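The exterior orientation used in this evaluation is the RFM affine bias-compensation model of [24,25]. A sketch under assumed interfaces (`rfm.ground_to_image` and the GCP tuple layout are illustrative):

```python
import numpy as np

def fit_bias_compensation(rfm, gcps):
    """Fit a0..a2 and b0..b2 of the affine model x + a0 + a1*x + a2*y = RFM_x(...)
    from a few GCPs; gcps = [(x, y, B, L, H), ...] (hypothetical layout)."""
    obs = np.array([[g[0], g[1]] for g in gcps], float)   # measured (x, y)
    prj = np.array([rfm.ground_to_image(g[2], g[3], g[4]) for g in gcps], float)
    d = prj - obs                                  # RFM-minus-measured offsets
    A = np.column_stack([np.ones(len(obs)), obs])  # [1, x, y] design matrix
    coef, *_ = np.linalg.lstsq(A, d, rcond=None)   # columns: (a0,a1,a2), (b0,b1,b2)
    return coef
```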

4. Conclusions and Future Work

In this paper, we have proposed an automatic image mosaicking approach for the double-camera system on the optical remote sensing satellite GF2, using the concept of the big virtual camera to obtain the stitched image and the corresponding high accuracy RFM for concurrent post-processing. The following conclusions may be drawn from our results:
1)
Based on the rigorous imaging model of the single camera, the rigorous imaging model of the big virtual camera was established, which applies exactly to the complete stitched image. Additionally, the image mosaic workflow based on the virtual single detector and the big virtual camera was presented in detail.
2)
High accuracy camera parameters on the satellite are necessary in the proposed mosaicking approach. Therefore, high accuracy on-orbit geometric calibration is the precondition to guarantee the effectiveness of our approach.
3)
Benefiting from the platform stability and the small relative installation error, the mosaic error of the camera overlap region could be controlled within one pixel after an on-orbit high accuracy geometric calibration and the relative orientation; otherwise, a local correction may be required to achieve seamless mosaicking.
4)
Cloud coverage and complex terrain in the camera overlap region may influence the homonymy point matching. Although we verified that the translation transformation in the relative orientation reduces the dependence on homonymy point quantity and quality, it still cannot be applied to all situations.
5)
The relative installation instability of the double-cameras and the high-frequency platform vibration are the key obstacles to achieving a seamless mosaic from the double-cameras; therefore, if the cameras can be more rigidly attached on future satellite platforms, a simpler workflow without relative orientation can achieve an even better mosaic result.

Acknowledgments

This work was substantially supported by the National Natural Science Foundation of China (Project Nos. 91438203, 91438111), the National Basic Research Program of China 973 Program (Project No. 2014CB744201), and the Program for New Century Excellent Talents in University. GF2 data were provided by CRESDA. These supports are gratefully acknowledged.

Author Contributions

Cheng Yufeng and Jin Shuying carried out the experiment and wrote this paper. Wang Mi participated in the experiment. Zhu Ying and Dong Zhipeng were responsible for the data acquisition. All authors reviewed the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Baltsavias, E.; Li, Z.; Eisenbeiss, H. DSM generation and interior orientation determination of IKONOS images using a testfield in Switzerland. Photogramm. Fernerkund. Geoinform. 2006, 1, 41–54.
2. Cao, J.; Yuan, X.; Gong, J. In-orbit Geometric Calibration and Validation of ZY-3 Three-line Cameras Based on CCD-Detector Look Angles. Photogramm. Rec. 2015, 30, 211–226.
3. Lussy, F.D.; Kubik, P.; Greslou, D.; Pascal, V.; Gigord, P. Pleiades-HR image system products and geometric accuracy. In Proceedings of the ISPRS Hannover Workshop, Hannover, Germany, 17–20 May 2005; pp. 1–6.
4. Zitova, B.; Flusser, J. Image registration methods: A survey. Image Vis. Comput. 2003, 21, 977–1000.
5. Ait-Aoudia, S.; Mahiou, R.; Djebli, H.; Guerrout, E.H. Satellite and aerial image mosaicking—A comparative insight. In Proceedings of the 16th International Conference on Information Visualisation, Montpellier, France, 11–13 July 2012; pp. 652–657.
6. Afek, Y.; Brand, A. Mosaicking of orthorectified aerial images. Photogramm. Eng. Remote Sens. 1998, 64, 115–124.
7. Kim, K.; Jezek, K.C.; Liu, H. Orthorectified image mosaic of Antarctica from 1963 Argon satellite photography: Image processing and glaciological applications. Int. J. Remote Sens. 2007, 28, 5357–5373.
8. Faraji, M.R.; Qi, X.; Jensen, A. Computer vision–based orthorectification and georeferencing of aerial image sets. J. Appl. Remote Sens. 2016, 10, 036027.
9. Brown, M.; Lowe, D.G. Automatic Panoramic Image Stitching using Invariant Features. Int. J. Comput. Vis. 2007, 74, 59–73.
10. Adel, E.; Elmogy, M.; Elbakry, H. Image stitching system based on ORB feature-based technique and compensation blending. Int. J. Adv. Comput. Sci. Appl. 2015, 6, 55–62.
11. Behrens, A.; Rollinger, H. Analysis of Feature Point Distributions for Fast Image Mosaicing Algorithms. Acta Polytech. J. Adv. Eng. 2010, 50, 11–18.
12. Pan, J.; Hu, F.; Wang, M.; Tang, X.; Jin, S.; Lu, G. An Inner FOV Stitching Method for Non-collinear TDI CCD Images. Acta Geod. Cartogr. Sin. 2014, 43, 1165–1173.
13. Jiang, Y.; Zhang, G.; Li, D.; Tang, X.; Huang, W.; Li, L. Correction of Distortions in YG-12 High-Resolution Panchromatic Images. Photogramm. Eng. Remote Sens. 2015, 81, 25–36.
14. Poli, D.; Toutin, T. Review of developments in geometric modelling for high resolution satellite pushbroom sensors. Photogramm. Rec. 2012, 27, 58–73.
15. Toutin, T. Review article: Geometric processing of remote sensing images: Models, algorithms and methods. Int. J. Remote Sens. 2004, 25, 1893–1924.
16. Wang, M.; Yang, B.; Hu, F.; Zang, X. On-Orbit geometric calibration model and its applications for high-resolution optical satellite imagery. Remote Sens. 2014, 6, 4391–4408.
17. Zhang, G.; Jiang, Y.; Li, D.; Huang, W.; Pan, H.; Tang, X.; Zhu, X. In-Orbit Geometric Calibration and Validation of ZY-3 Linear Array Sensors. Photogramm. Rec. 2014, 29, 68–88.
18. Wang, M.; Zhu, Y.; Jin, S.; Pan, J.; Zhu, Q. Correction of ZY-3 image distortion caused by satellite jitter via virtual steady reimaging using attitude data. ISPRS J. Photogramm. Remote Sens. 2016, 119, 108–123.
19. Jiang, Y.H.; Zhang, G.; Tang, X.; Li, D.; Huang, W.C. Detection and Correction of Relative Attitude Errors for ZY1-02C. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7674–7683.
20. De Lussy, F.; Greslou, D.; Dechoz, C.; Amberg, V.; Delvit, J.M.; Lebegue, L.; Blanchet, G.; Fourest, S. Pleiades HR in flight geometrical calibration: Location and mapping of the focal plane. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, 519–523.
21. Wang, M.; Cheng, Y.; Chang, X.; Jin, S.; Zhu, Y. On-orbit geometric calibration and geometric quality assessment for the high-resolution geostationary optical satellite GaoFen4. ISPRS J. Photogramm. Remote Sens. 2017, 125, 63–77.
22. Chen, Y.; Xie, Z.; Qiu, Z.; Zhang, Q.; Hu, Z. Calibration and Validation of ZY-3 Optical Sensors. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4616–4626.
23. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
24. Fraser, C.S.; Hanley, H.B. Bias compensation in rational functions for IKONOS satellite imagery. Photogramm. Eng. Remote Sens. 2003, 69, 53–57.
25. Hanley, H.B.; Yamakawa, T.; Fraser, C.S. Sensor orientation for high-resolution satellite imagery. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 69–75.
Figure 1. (a) The installation structure and configuration of the GF2 double-camera imaging system; (b) Schematic diagram of the virtual single detector in the single camera; (c) Schematic diagram of the big virtual detector in the big virtual camera.
Figure 2. The image mosaic workflow based on the virtual single detector and the big virtual camera: (a) Flow diagram; (b) Schematic diagram.
Figure 3. Consistency of the homonymy points positioning accuracy.
Figure 4. The influence of the elevation error.
Figure 5. Experimental results of the GF2 satellite imagery: (a) The complete stitched image; (b) Original overlapping area 1; (c) Original overlapping area 2; (d) Original overlapping area 3; (e) Original overlapping area 4; (f) Original overlapping area 5; (g) Updated overlapping area 1; (h) Updated overlapping area 2; (i) Updated overlapping area 3; (j) Updated overlapping area 4; (k) Updated overlapping area 5.
Figure 6. The schematic diagram of the virtual control points and check points.
Figure 7. Mosaicking error for the camera overlap region with line number based on the original RFM.
Table 1. Information for the single camera on the GF2 satellite.

| Information | Multispectral Sensor | Panchromatic Sensor |
|---|---|---|
| Spectral range | B1: 450~520 nm; B2: 520~590 nm; B3: 630~690 nm; B4: 770~890 nm | Pan: 450~900 nm |
| Pixel size | 40 µm | 10 µm |
| TDI-CCD number of each band | 1536 × 5 | 6144 × 5 |
| Overlapping TDI-CCD number | 95 × 4 | 380 × 4 |
| Ground sample distance | 3.24 m | 0.81 m |
| Focal length | 7785 mm | 7785 mm |
| Field angle | 2.1° | 2.1° |
| Quantization bits | 10 | 10 |
Table 2. Experimental data information.

| Study Area | Images | Imaging Date | Roll (°) | Pitch (°) | Yaw (°) | H_aver/H_diff * (m) | Image Size (Camera A/B) |
|---|---|---|---|---|---|---|---|
| Songshan | Scene A | 27 October 2015 | 12.99870 | 0.00111 | 2.99763 | 431/1359 | 29,200 × 27,620 |
| Songshan | Scene B | 27 October 2015 | 12.99870 | 0.00091 | 3.00424 | 431/1359 | 29,200 × 27,620 |
| Anyang | Scene C | 20 October 2015 | −7.00335 | −0.00039 | 3.04978 | 39/98 | 29,200 × 27,620 |
| Dongying | Scene D | 16 December 2016 | −4.00286 | −0.00014 | 3.00613 | 4/23 | 29,200 × 27,620 |

* H_aver denotes the average height and H_diff denotes the height difference.
Table 3. Statistical results comparison of the RPCs fit using a smooth attitude.

| Scene | Statistic (Pixels) | Oscillating Attitude: Sample | Oscillating Attitude: Line | Smooth Attitude: Sample | Smooth Attitude: Line |
|---|---|---|---|---|---|
| A | Mean | −2.01 × 10⁻⁴ | −3.92 × 10⁻⁵ | −1.05 × 10⁻⁶ | −1.78 × 10⁻⁷ |
| | RMSE | 0.13 | 0.43 | 3.37 × 10⁻⁵ | 3.03 × 10⁻⁵ |
| | Maximum | 0.65 | 0.79 | 8.99 × 10⁻⁵ | 6.78 × 10⁻⁵ |
| | Minimum | −0.54 | −0.88 | −9.01 × 10⁻⁵ | −6.80 × 10⁻⁵ |
| B | Mean | −1.41 × 10⁻⁴ | −4.57 × 10⁻⁵ | −1.82 × 10⁻⁶ | −2.28 × 10⁻⁷ |
| | RMSE | 0.11 | 0.40 | 2.96 × 10⁻⁵ | 3.19 × 10⁻⁵ |
| | Maximum | 0.53 | 0.81 | 7.17 × 10⁻⁵ | 8.90 × 10⁻⁵ |
| | Minimum | −0.28 | −0.84 | −7.12 × 10⁻⁵ | −8.84 × 10⁻⁵ |
| C | Mean | 4.47 × 10⁻⁴ | −3.40 × 10⁻⁶ | −3.44 × 10⁻⁶ | −1.78 × 10⁻⁷ |
| | RMSE | 0.16 | 0.32 | 3.61 × 10⁻⁵ | 2.48 × 10⁻⁵ |
| | Maximum | 0.57 | 0.83 | 9.27 × 10⁻⁵ | 7.78 × 10⁻⁵ |
| | Minimum | −0.42 | −0.93 | −9.22 × 10⁻⁵ | −7.80 × 10⁻⁵ |
| D | Mean | −2.63 × 10⁻⁶ | −4.20 × 10⁻⁶ | −2.22 × 10⁻⁶ | −1.50 × 10⁻⁷ |
| | RMSE | 0.23 | 0.32 | 3.26 × 10⁻⁵ | 5.12 × 10⁻⁶ |
| | Maximum | 0.57 | 0.74 | 8.10 × 10⁻⁵ | 1.00 × 10⁻⁵ |
| | Minimum | −0.57 | −0.88 | −8.00 × 10⁻⁵ | −9.00 × 10⁻⁶ |
Table 4. Mosaic accuracy of the camera overlap region using different transformations.

| Transformation | Error | Scene A: Sample | Scene A: Line | Scene B: Sample | Scene B: Line | Scene C: Sample | Scene C: Line | Scene D: Sample | Scene D: Line |
|---|---|---|---|---|---|---|---|---|---|
| Translation | Maximum | 0.59 | 0.58 | 0.59 | 0.65 | 0.52 | 0.60 | −0.55 | 0.56 |
| | Minimum | −0.60 | −0.66 | −0.58 | −0.69 | −0.61 | −0.69 | −0.55 | −0.55 |
| | RMSE | 0.13 | 0.14 | 0.13 | 0.14 | 0.14 | 0.13 | 0.14 | 0.13 |
| Affine | Maximum | 0.59 | 0.59 | 0.60 | 0.64 | 0.53 | 0.60 | −0.54 | 0.56 |
| | Minimum | −0.61 | −0.63 | −0.59 | −0.69 | −0.60 | −0.68 | −0.55 | −0.55 |
| | RMSE | 0.13 | 0.14 | 0.13 | 0.14 | 0.14 | 0.13 | 0.14 | 0.13 |
Table 5. Geometric accuracy evaluation of the complete stitched images.

| Scene | Sensor | Absolute Positioning Accuracy: X/Pixel | Absolute Positioning Accuracy: Y/Pixel | Relative Positioning Accuracy: X/Pixel | Relative Positioning Accuracy: Y/Pixel |
|---|---|---|---|---|---|
| A | Camera A | −14.81 | −26.84 | 0.90 | 0.89 |
| | Camera B | −14.84 | −20.67 | 0.95 | 0.92 |
| | Virtual Camera by TT | −14.80 | −28.26 | 1.07 | 1.12 |
| | Virtual Camera by AT | −13.10 | −21.41 | 1.01 | 1.05 |
| B | Camera A | −13.89 | −29.56 | 0.92 | 0.89 |
| | Camera B | −14.24 | −29.21 | 0.94 | 0.91 |
| | Virtual Camera by TT | −14.15 | −29.23 | 1.05 | 1.08 |
| | Virtual Camera by AT | −13.50 | −20.86 | 1.04 | 1.02 |
| C | Camera A | 9.62 | 12.92 | 0.89 | 0.85 |
| | Camera B | 7.88 | 11.20 | 0.92 | 0.86 |
| | Virtual Camera by TT | 9.13 | 12.33 | 1.12 | 0.98 |
| | Virtual Camera by AT | 9.93 | 11.18 | 1.00 | 0.96 |
| D | Camera A | −19.19 | −23.16 | 0.91 | 0.90 |
| | Camera B | −18.51 | −29.24 | 0.92 | 0.92 |
| | Virtual Camera by TT | −19.23 | −27.27 | 1.02 | 1.08 |
| | Virtual Camera by AT | −18.10 | −28.35 | 0.98 | 0.99 |

TT = translation transformation; AT = affine transformation.

