Article

High-Accuracy Self-Calibration for Smart, Optical Orbiting Payloads Integrated with Attitude and Position Determination

1 Department of Precision Instrument, Tsinghua University, Beijing 100084, China
2 State Key Laboratory of Precision Measurement Technology and Instruments, Beijing 100084, China
3 Collaborative Innovation Center for Micro/Nano Fabrication, Device and System, Beijing 100084, China
4 Photonics and Sensors Group, Department of Engineering, University of Cambridge, 9 JJ Thomson Avenue, Cambridge CB3 0FA, UK
* Authors to whom correspondence should be addressed.
Sensors 2016, 16(8), 1176; https://doi.org/10.3390/s16081176
Submission received: 25 May 2016 / Revised: 21 July 2016 / Accepted: 21 July 2016 / Published: 27 July 2016
(This article belongs to the Collection Modeling, Testing and Reliability Issues in MEMS Engineering)

Abstract

A high-accuracy space smart payload integrated with attitude and position (SSPIAP) is a new type of optical remote sensor that can autonomously complete image positioning. Inner orientation parameters (IOPs) are a prerequisite for image position determination by an SSPIAP, and their calibration significantly influences its precision. IOPs can be precisely measured and calibrated in a laboratory, but they may drift significantly because of vibrations during launch and on-orbit operation, so laboratory calibration methods are not suitable for on-orbit use. We propose an on-orbit self-calibration method for SSPIAPs based on an auto-collimating dichroic filter combined with a micro-electro-mechanical system (MEMS) point-source focal plane. A MEMS process is used to manufacture a light-transceiver focal plane that integrates point light sources with a complementary metal oxide semiconductor (CMOS) sensor, and a dichroic filter is used to fabricate an auto-collimation light reflection element. The dichroic filter and the MEMS point-source focal plane are integrated into an SSPIAP so that it can perform integrated self-calibration. Experiments show that our method achieves micrometer-level precision, which is good enough to complete real-time calibration without temporal or spatial limitations.

1. Introduction

High-resolution Earth observation has become essential in many fields, such as mapping, environmental monitoring, and resource exploration. High-resolution images and high-accuracy image position determination play an important role in these applications [1,2]. High-accuracy, high-resolution optical imaging payloads place strong requirements on attitude control precision, orientation, and the attitude transfer matrix [3,4]. Traditional satellite payloads struggle to meet these requirements because traditional satellites largely separate the platform from the payload, and the platform precision cannot satisfy the needs of high-precision imaging payloads. Recently, a high-accuracy space smart payload system integrated with attitude and position (SSPIAP) was developed to meet the requirements of high-resolution, high-accuracy applications [5,6,7,8,9]. The SSPIAP integrates a high-resolution remote camera with miniature attitude- and position-sensitive devices, such as a star tracker, a micro-electro-mechanical system (MEMS) gyroscope, and a global positioning system (GPS) receiver, into a new smart payload system. The SSPIAP can autonomously achieve lightweight, high-performance attitude and position determination by combining celestial, inertial, and satellite navigation. Using high-accuracy attitude and position information, the SSPIAP can perform real-time imaging strategy adjustment, optimal high-resolution remote imaging, high-accuracy image position determination, and so on. Image position determination is one of the most significant functions of an SSPIAP operating on a satellite. The inner orientation parameters (IOPs) of the SSPIAP, such as the principal distance and the principal point, strongly influence the accuracy of image position determination. Therefore, calibration of the IOPs of the SSPIAP is necessary.
On the ground, many approaches exist for calibrating the IOPs of space optical cameras [10,11,12,13,14]. In [15], Yilmazturk used color targets to calibrate color digital cameras. In [16], Ricolfe-Viala et al. used a set of optimal conditions to improve calibration accuracy. In [17], Simon et al. used crossed-phase diffractive optical elements (DOEs) to generate equally spaced dots for wide-angle geometric camera calibration; the DOEs can generate multiple two-dimensional (2D) diffraction grids for calibrating photogrammetric cameras. In [18], the IOPs were calculated from the three-dimensional (3D) coordinates of several given points and their image points, using a four-step, multi-view method that solves all the intrinsic parameters and can calibrate intrinsic parameters, distortion, and image deformation. The IOPs can be precisely calibrated in a laboratory. However, they may drift significantly because of vibration during launch and under on-orbit working conditions, and laboratory calibration approaches generally need a calibrated reference object. Thus, these methods are unsuitable for the on-orbit calibration of an SSPIAP.
Recently, several self-calibration methods that do not require a calibrated reference object have been developed to calculate IOPs [19,20]. These methods use constraints among the system parameters to calibrate cameras, which makes it possible to calibrate a camera from unknown scenes and motions [21]. In [22], Song proposed an active-vision-based self-calibration method. In [23], Caprile et al. performed self-calibration based on vanishing points or lines. In [24], Gonzalez-Aguilera used an iterative, robust least squares method to calculate internal calibration parameters combined with a geo-referenced terrestrial laser scanner (TLS) dataset. However, these self-calibration methods involve high computational complexity and nontrivial solutions of systems of equations; their calibration accuracy cannot be guaranteed, and their robustness is low. A self-calibrating bundle adjustment is ideal for camera calibration for a number of reasons summarized in [25]. In [26], Lichti et al. compared three geometric self-calibration methods for range cameras and found the self-calibrating bundle adjustment to be slightly superior. However, the self-calibrating bundle adjustment suffers from long computation times [27]. Owing to the limited on-board computing resources of the SSPIAP platform, these methods are not suitable for high-accuracy calibration in remote sensing SSPIAP applications.
For the on-orbit calibration of remote sensing cameras, traditional methods basically use ground control points (GCPs) [28]. In [29], Fourest et al. adopted stars as control points (CPs) to complete calibration. This method performs calibration during on-orbit commissioning, with numerous measurements made on different stars seen around the world, and thus draws on a wide base of star CPs. However, GCPs or star CPs are not always available or easy to access. Further, the accuracy of each GCP or star CP must be consistent with the requirements of the location system performance.
Recently, 180-degree satellite maneuvers have been used to calibrate on-orbit IOPs [30]. This method needs a standard ground calibration field. In [31], Delvit et al. proposed an auto-reverse method for the commissioning phase. Although this method is efficient and does not require external reference data, its operational implementation is highly constraining because acquiring an image pair of the same site consumes a long portion of the orbit to align the ground projection of the scan line with the ground velocity. Existing on-ground and on-orbit methods therefore share a common characteristic: they rely on external targets, which limits them in space and time.
In this paper, we propose an on-orbit, auto-collimating self-calibration method for SSPIAPs. Our method includes four steps. In the first and second steps, a dichroic filter and a MEMS point-source focal plane are integrated into an SSPIAP to enable the calibration of the IOPs. In the third step, an integrated mathematical calibration model is built based on the geometric imaging relationships established in the first two steps. In the fourth step, a centroid extraction algorithm processes images to extract the star point positions. Finally, the principal distance and the principal point are calculated from the mathematical calibration model. The SSPIAP can thus perform integrated self-calibration without temporal or spatial limitations. The rest of this paper is organized as follows: the principle of the proposed method is introduced in Section 2, while Section 3 describes the experimental results.

2. Proposed Method for On-Orbit, Integrated Self-Calibration

2.1. Principles

Position determination (positioning) without ground control points (GCPs) is one of the key technical problems in remote sensing photogrammetry. The IKONOS-2 satellite can reach a positioning accuracy of 15 m without GCPs [32]. The WorldView-2 satellite can reach a positioning accuracy of 6.5 m without GCPs [33]. The GeoEye satellite can provide a positioning accuracy of 4 m without GCPs [5]. An SSPIAP also adopts a positioning method without GCPs and requires a positioning accuracy of 5 m [6]. An SSPIAP uses forward-looking and back-looking images to complete digital mapping, which requires accurate geometric performance: to extract highly accurate topographical information from two overlapping strip images, an SSPIAP must provide highly accurate IOPs. After adjustment of the optical camera, the IOPs deviate from their ideal values because of design, manufacture, assembly, etc. [34,35,36]. Therefore, accurate calibration of the IOPs is necessary for an SSPIAP. The positioning accuracy (without GCPs) of an SSPIAP mainly depends on the accuracy of satellite station positioning, attitude measurement, image point measurement, IOP measurement, etc. For an SSPIAP to reach a positioning accuracy of 5 m, the primary error budget should be distributed as follows: (1) the attitude determination accuracy is within 10″ (arcseconds); (2) the precision orbit determination is within 0.2 m; (3) the angle calibration accuracy between the star tracker and the optical camera is within 5″; (4) the camera lens distortion calibration accuracy is within 5 μm; (5) the principal distance calibration accuracy is within 50 μm; (6) the principal point calibration accuracy is within a third of a pixel.
After the adjustment of our SSPIAP, we calibrate all of the above parameters. In this SSPIAP, the optical camera uses a design that is integrated with the installation structure of the Pico Star Tracker, so the two share a common benchmark. The Pico Star Tracker is a new type of self-developed attitude measurement device with 7″ accuracy, which can provide the high-accuracy exterior orientation elements of our SSPIAP and meet the 10″ accuracy requirement. The SSPIAP can thus attain, in real-time, a unified on-orbit benchmark for the interior orientation elements and for the angle between the camera optical axis and the star sensor. The optical system of the Pico Star Tracker adopts a transmission structure, so its optical axis is not easily changed and can be replaced by an external mechanical benchmark. In the SSPIAP, the mechanical benchmark of the Pico Star Tracker is similar to the installation structure of the optical camera. Therefore, the angle between the camera optical axis and the star sensor benchmark can also be calibrated by the proposed method, which ensures that the angle calibration accuracy satisfies the 5″ requirement when the SSPIAP is working on a satellite. Usually, the angle variation between the optical axis of the camera and the star sensor benchmark is relatively small.
In this paper, we mainly discuss the on-orbit monitoring of the principal distance and the principal point while the SSPIAP is working on a satellite. We introduce the calibration of the IOPs, including the principal distance and the principal point, and propose an integrated self-calibration method that can be used both on the ground and on orbit.
An auto-collimation dichroic filter and MEMS point sources are integrated into the SSPIAP to perform the calibration of the IOPs. The principle of the proposed auto-collimating calibration of the IOPs for the SSPIAP is shown in Figure 1. The optical camera system of the SSPIAP includes a primary mirror, a secondary mirror, an aspheric corrector, and a focal plane. The auto-collimation dichroic filter is coated on the plane of the aspheric corrector to fabricate an auto-collimation light reflection element. The point sources and the image sensor are integrated into the focal plane assembly. The auto-collimating light path includes the point sources, the camera lens, the auto-collimation dichroic filter, and the image detector. Following the optical path of the camera, the light emitted by the MEMS point sources becomes collimated after passing through the secondary mirror, the primary mirror, and the aspheric corrector of the camera lens. The collimated light is reflected by the auto-collimation dichroic filter and returns into the camera lens, and the light exiting the camera is incident on the focal-plane detector.
In the proposed method, lighting elements and a detection module are introduced into the optical system of the camera between two interleaved focal-plane assemblies, and a dichroic filter is also integrated into the optical system. The proposed method can be summarized in four steps.
In the first step, an auto-collimation dichroic filter is used to combine imaging and calibration. The dichroic filter selectively passes light in a small range of bands while reflecting light in other bands. Figure 2 shows the reflection and transmission ratios of the dichroic filter for the visible optical camera.
The dichroic filter allows visible light to pass through while reflecting longer wavelength bands; the bands of the MEMS point sources are therefore chosen according to the characteristics of the dichroic filter. When an SSPIAP is working on a satellite, light reflected from and radiated by a target passes through the dichroic filter to complete on-orbit imaging, while the light from the MEMS point sources is reflected by the dichroic filter to complete on-orbit calibration.
In the second step, we fabricate a MEMS point-source focal plane and integrate it into the optical focal plane of the SSPIAP. The MEMS point sources are installed on the focal plane, which makes the monitoring optical path auto-collimating; thus, the principal distance and the principal point can be monitored whenever needed. To ensure a sufficiently small size and low power consumption, we used MEMS processes to fabricate the point-source focal plane in this study. The point-source focal plane is mainly composed of a mask fabricated by the MEMS process, a housing, and an electrical system. The assembly of the point-source focal plane is shown in Figure 3.
The fabrication process of the mask is, in outline: (1) chromium, gold, and tantalum layers are deposited on a specified glass substrate; and (2) the metal layers are photoetched to open several small apertures. The mask includes a glass substrate, a mask layer, an anti-reflective layer, etc., and its thickness is 1.6 mm. The etched apertures in the mask pass light, while the other areas, covered by the metal layers, block it. Optical anti-radiation quartz glass is used as the substrate because it blocks background cosmic radiation in free space. The detailed process is as follows:
1. A 75 nm chromium layer is deposited on the glass substrate; this layer completely attenuates the incident light, and its thickness is chosen according to the transmissivity of the optical system.
2. A 200 nm gold membrane, the mask layer, is deposited on the chromium layer.
3. A 60 nm tantalum membrane, a radiation protection layer, is deposited on the gold layer.
4. Photoresist is spin-coated onto the plated substrate; polymethyl methacrylate (PMMA) is used as the photoresist.
5. Proximity lithography is performed to expose the photoresist through a photomask.
6. A developer is applied to remove the exposed photoresist and form the pattern on the plated substrate.
7. The plated substrate is laser-cut to enable integrated packaging with a complementary metal oxide semiconductor (CMOS) sensor.
8. A second 75 nm chromium layer, serving as a secondary reflection prevention layer, is deposited on the cut substrate.
9. LEDs are installed under the mask; light passes through the etched apertures, while the areas covered by the chromium layer block it.
10. Finally, the LEDs, the image sensor, and the mask are packaged into a point-source focal plane.
In the MEMS focal plane, the wavelength of the MEMS point sources is determined by the dichroic filter. The MEMS point sources can be controlled by the SSPIAP controller in real-time. In the SSPIAP's imaging mode, the LEDs are switched off and have no effect on imaging. The MEMS light sources are installed around the image sensors, and their accurate positions are calculated from the relationships between the image sensors and the optical system. For an optical remote sensor, several image sensors are butted together into one larger image sensor [37,38]; the MEMS light sources are generally placed in the butting area.
In the third step, we build a mathematical calibration model. According to the geometric relationships between the point-light-source positions and their images, the mathematical calibration equations are solved to calculate the IOPs of the SSPIAP.
In the fourth step, the SSPIAP controller drives the MEMS light sources, and the CMOS sensor captures their images. A centroid extraction algorithm processes the captured images to extract the star point positions. Finally, the principal distance and the principal point are calculated using the mathematical calibration model.
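The paper does not spell out the centroid algorithm itself; the following is a minimal intensity-weighted (first-moment) centroid sketch in Python, assuming a single bright spot in a dark-background region of interest. The function name and threshold handling are illustrative, not the authors' implementation.

```python
import numpy as np

def spot_centroid(roi: np.ndarray, threshold: float) -> tuple[float, float]:
    """Intensity-weighted centroid of one bright spot in a grayscale ROI.

    roi: 2D array of pixel gray values containing a single point-source image.
    threshold: gray level used to suppress background noise before weighting.
    Returns the sub-pixel spot position (x, y) in pixel coordinates.
    """
    # Suppress background so dark pixels do not bias the centroid.
    w = np.clip(roi.astype(np.float64) - threshold, 0.0, None)
    total = w.sum()
    if total == 0.0:
        raise ValueError("no signal above threshold in ROI")
    ys, xs = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
    # First moments give the sub-pixel spot position.
    return float((xs * w).sum() / total), float((ys * w).sum() / total)
```

Sub-pixel behavior of such first-moment estimators is consistent with the 0.05-pixel extraction accuracy quoted later in the paper.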

2.2. On-Orbit Mathematical Calibration Model

In [39], a coordinate transform method was used to build the relationships between ground object targets and image positions. This coordinate transform method can obtain the instantaneous imaging geometrical relationships of optical remote sensing sensors. In this study, we adopt this coordinate transform to build a mathematical model of point sources and their image positions to calculate the instantaneous principal distance and principal point. According to Figure 1, several coordinate systems from the MEMS point sources to the dichroic filter plane are defined as follows: the dichroic filter plane coordinate system $F(x_F, y_F, z_F)$, the camera coordinate system $C(x_C, y_C, z_C)$, the image plane coordinate system $I(x_I, y_I, z_I)$, the source plane coordinate system $S(x_S, y_S, z_S)$, and the detector plane coordinate system $D(x_D, y_D, z_D)$. The coordinate relationships of the equivalent optical paths between the point sources and the image positions are shown in Figure 4.
We define $M_t(u_0^t, v_0^t, f_0^t)$ as the elements of interior orientation at time $t$. In the $S$ coordinate system, the position of a point source $S_1$ is defined as $S_1(x_{s1}^t, y_{s1}^t, 0)$. In the $C$ coordinate system, the position vector of the point source can be expressed as follows:
$$ \mathbf{n}_1 = \begin{bmatrix} x_c^{s1} & y_c^{s1} & z_c^{s1} \end{bmatrix}^T = M_3 M_2 M_1 \begin{bmatrix} x_{s1}^t & y_{s1}^t & 1 \end{bmatrix}^T \tag{1} $$

$$ M_1 = \begin{bmatrix} 1/d_x & 0 & u_{sx0} \\ 0 & 1/d_y & v_{sy0} \\ 0 & 0 & 1 \end{bmatrix}, \quad M_2 = \begin{bmatrix} d_x & 0 & -d_x u_0^t \\ 0 & d_y & -d_y v_0^t \\ 0 & 0 & 1 \end{bmatrix}, \quad M_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & f_0^t \end{bmatrix} \tag{2} $$
where $d_x$ and $d_y$ are the pixel sizes of the detector along the $x$-axis and $y$-axis, respectively, and $(u_{sx0}, v_{sy0})$ is the position of the center of the MEMS point sources in the $D$ coordinate system. $(u_{sx0}, v_{sy0})$ can be obtained from the design value of the focal plane of the SSPIAP. At the adjustment stage, optical collimation and precise angle measurements are used to calibrate the SSPIAP and remove preparation and adjustment errors. In our calibration process, a large-scale collimator and a photoelectric theodolite are used to calibrate the center of the CMOS and $(u_{sx0}, v_{sy0})$. From Equation (1), the unit vector of the principal ray emitted by the point source $S_1$ in the $C$ coordinate system can be expressed as follows:
$$ \mathbf{u}_1 = \begin{bmatrix} u_{1x} & u_{1y} & u_{1z} \end{bmatrix}^T = \frac{\begin{bmatrix} x_{s1}^t + u_{sx0} d_x - d_x u_0^t & \; y_{s1}^t + v_{sy0} d_y - d_y v_0^t & \; f_0^t \end{bmatrix}^T}{\sqrt{\left(x_{s1}^t + u_{sx0} d_x - d_x u_0^t\right)^2 + \left(y_{s1}^t + v_{sy0} d_y - d_y v_0^t\right)^2 + \left(f_0^t\right)^2}} \tag{3} $$
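As a concrete reading of Equations (1)–(3), the helper below composes the millimeter offset of the source from the principal point with the principal distance and normalizes the result. This is our own sketch; the function and argument names are placeholders, not the authors' code.

```python
import numpy as np

def source_ray_unit_vector(xs1, ys1, u0, v0, f0, dx, dy, usx0, vsy0):
    """Equation (3): unit vector of the principal ray of source S1 in frame C.

    xs1, ys1: source position in the S frame (mm); u0, v0: principal point (px);
    f0: principal distance (mm); dx, dy: pixel sizes (mm); usx0, vsy0: center
    of the MEMS point sources in the D frame (px).
    """
    v = np.array([
        xs1 + usx0 * dx - dx * u0,   # x offset from the principal point (mm)
        ys1 + vsy0 * dy - dy * v0,   # y offset from the principal point (mm)
        f0,                          # principal distance (mm)
    ])
    return v / np.linalg.norm(v)
```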
Light emitted by the point source at position $S_1$ passes through the camera optical system and becomes parallel light. The parallel light emitted from the camera's optical system is reflected by the dichroic filter and returns into the camera's optical system. In the $C$ coordinate system, the normal vector of the dichroic filter is defined as $\mathbf{n} = [n_{cx}^{df} \; n_{cy}^{df} \; n_{cz}^{df}]^T$, and the unit vector of the principal ray of the reflected beam is defined as $\mathbf{u}_1'$. When the reflected beam passes back through the optical system, an image point, defined as $S_1'$, is formed and received by the detector. In the $D$ plane coordinate system, the centroid of the image point is located at $(x_{s1'}^t, y_{s1'}^t, 0)$. In the $C$ coordinate system, the position of the centroid of the image point can be expressed as follows:
$$ \mathbf{n}_2 = \begin{bmatrix} x_c^{s1'} & y_c^{s1'} & z_c^{s1'} \end{bmatrix}^T = M_3 {M_1'}^{-1} \begin{bmatrix} x_{s1'}^t & y_{s1'}^t & 1 \end{bmatrix}^T \tag{4} $$

$$ M_1' = \begin{bmatrix} 1/d_x & 0 & u_0^t \\ 0 & 1/d_y & v_0^t \\ 0 & 0 & 1 \end{bmatrix} \tag{5} $$
From Equation (4), the unit vector $\mathbf{u}_1'$ in the $C$ coordinate system can be expressed as follows:
u 1 = [ u 1 x u 1 y u 1 z ] T = [ x s 1 t d x d x u 0 t y s 1 t d y d y v 0 t f 0 t ] T ( x s 1 t d x d x u 0 t ) 2 + ( y s 1 t d y d y v 0 t ) 2 + ( f 0 t ) 2
From the law of optical reflection, the relationships among the unit vectors of the principal rays can be expressed as follows:

$$ \mathbf{u}_1 \times \mathbf{n} = -\,\mathbf{u}_1' \times \mathbf{n}, \qquad \mathbf{u}_1 \cdot \mathbf{n} = \mathbf{u}_1' \cdot \mathbf{n} \tag{7} $$
The preceding equations are expanded into a set of scalar equations as follows:

$$ \begin{cases} u_{1y} n_{cz}^{df} - u_{1z} n_{cy}^{df} = u_{1z}' n_{cy}^{df} - u_{1y}' n_{cz}^{df} \\ u_{1x} n_{cz}^{df} - u_{1z} n_{cx}^{df} = u_{1z}' n_{cx}^{df} - u_{1x}' n_{cz}^{df} \\ u_{1x} n_{cx}^{df} + u_{1y} n_{cy}^{df} + u_{1z} n_{cz}^{df} = u_{1x}' n_{cx}^{df} + u_{1y}' n_{cy}^{df} + u_{1z}' n_{cz}^{df} \end{cases} \tag{8} $$
Equations (7) and (8) are the mathematical spatial relationships between a point source and its image point. The mathematical model of the principal distance and the principal point can be established based on Equations (7) and (8). In our method, several conjugate pairs of MEMS point sources are placed symmetrically around the principal point.
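Equation (7) uses the convention of Equations (3) and (6), in which both $\mathbf{u}_1$ and $\mathbf{u}_1'$ have a positive $z$-component; under that reading, $\mathbf{u}_1'$ is the mirror reflection of $\mathbf{u}_1$ with its direction reversed. A short numerical sanity check of this interpretation (our own sketch, not code from the paper):

```python
import numpy as np

def reflect_and_reverse(u1: np.ndarray, n: np.ndarray) -> np.ndarray:
    """Reversed mirror reflection: the reflected ray re-expressed in the same
    'through the lens' direction convention as u1 (both with positive z)."""
    n = n / np.linalg.norm(n)
    return 2.0 * np.dot(u1, n) * n - u1

# Check Equation (7) with an arbitrary ray and a near-axial filter normal.
u1 = np.array([0.01, -0.02, 1.0]); u1 /= np.linalg.norm(u1)
n = np.array([1e-4, -2e-4, 1.0]); n /= np.linalg.norm(n)
u1p = reflect_and_reverse(u1, n)
assert np.allclose(np.cross(u1, n), -np.cross(u1p, n))   # u1 x n = -u1' x n
assert np.isclose(np.dot(u1, n), np.dot(u1p, n))         # u1 . n =  u1' . n
```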
In an actual optical system, the MEMS point sources need to be installed around the image sensor. The imaging relationship of the equivalent optical path of self-calibration is shown in Figure 5. Let $p_1$ and $p_2$ be two point sources installed around the image sensor; to avoid influencing the imaging, $p_1$ and $p_2$ are installed on the same side of the image sensor. Let the positions of $p_1$ and $p_2$ in the $S$ coordinate system be $(x_{s1}, y_{s1})$ and $(x_{s2}, y_{s2})$, respectively, and let their images $p_1'$ and $p_2'$ in the $D$ coordinate system be located at $(x_{p1'}, y_{p1'})$ and $(x_{p2'}, y_{p2'})$, respectively. At the initial time, let the angles between the optical axis and the principal rays emitted from $p_1$ and $p_2$ be $\alpha_1$ and $\alpha_2$, respectively. Our system is designed with equal angles between the optical axis and the principal rays emitted from $p_1$ and $p_2$: when the optical system is ideal and not maladjusted (the I position in Figure 6), $\alpha_1 = \alpha_2$. An actual optical system is not ideal because it is affected by manufacture and adjustment; in an actual optical system, or when maladjustment occurs (the I′ and I″ positions in Figure 6), $\alpha_1 \neq \alpha_2$.
From Equations (7) and (8), the imaging relationship can be expressed as follows:
$$ \begin{cases} f_0^t \tan\alpha_1^t = \sqrt{\left(x_{s1}^t + u_{sx0} d_x - d_x u_0^t\right)^2 + \left(y_{s1}^t + v_{sy0} d_y - d_y v_0^t\right)^2} \\ f_0^t \tan\alpha_1^t = \sqrt{\left(x_{p1'}^t d_x - d_x u_0^t\right)^2 + \left(y_{p1'}^t d_y - d_y v_0^t\right)^2} \end{cases} \tag{9} $$

$$ \begin{cases} f_0^t \tan\alpha_2^t = \sqrt{\left(x_{s2}^t + u_{sx0} d_x - d_x u_0^t\right)^2 + \left(y_{s2}^t + v_{sy0} d_y - d_y v_0^t\right)^2} \\ f_0^t \tan\alpha_2^t = \sqrt{\left(x_{p2'}^t d_x - d_x u_0^t\right)^2 + \left(y_{p2'}^t d_y - d_y v_0^t\right)^2} \end{cases} \tag{10} $$
For a remote sensing optical camera, the focal length is relatively long, generally ranging from several meters to tens of meters [40,41], whereas on-orbit variations of the principal point generally occur at the micrometer level. Therefore, $|u_0^t - u_0| \ll \{f_0^t, f_0, f_0^{t+1}\}$ and $|v_0^t - v_0| \ll \{f_0^t, f_0, f_0^{t+1}\}$, and the angles between the optical axis and the principal rays emitted from $p_1$ and $p_2$ can be approximated as $\alpha_1^{t+1} \approx \alpha_1^t \approx \alpha_1$ and $\alpha_2^{t+1} \approx \alpha_2^t \approx \alpha_2$. For the optical system of the SSPIAP, the angles between the principal rays of the MEMS point sources and the optical axis are approximately equal to the designed values. From Equations (9) and (10), the following equation can be obtained:
$$ \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} u_0^t \\ v_0^t \end{bmatrix} = \begin{bmatrix} s_1 \\ s_2 \end{bmatrix} \tag{11} $$

where $a_{11} = 2 d_x x_{s1}^t + 2 u_{sx0} d_x^2 - 2 d_x^2 x_{p1'}^t$, $a_{12} = 2 d_y y_{s1}^t + 2 v_{sy0} d_y^2 - 2 d_y^2 y_{p1'}^t$, $a_{21} = 2 d_x x_{s2}^t + 2 u_{sx0} d_x^2 - 2 d_x^2 x_{p2'}^t$, $a_{22} = 2 d_y y_{s2}^t + 2 v_{sy0} d_y^2 - 2 d_y^2 y_{p2'}^t$, $s_1 = (x_{s1}^t + u_{sx0} d_x)^2 + (y_{s1}^t + v_{sy0} d_y)^2 - (x_{p1'}^t d_x)^2 - (y_{p1'}^t d_y)^2$, and $s_2 = (x_{s2}^t + u_{sx0} d_x)^2 + (y_{s2}^t + v_{sy0} d_y)^2 - (x_{p2'}^t d_x)^2 - (y_{p2'}^t d_y)^2$.
From Equation (11), the position of the principal point can be expressed as follows:
$$ u_0^t = \frac{s_1 a_{22} - s_2 a_{12}}{a_{11} a_{22} - a_{21} a_{12}}, \qquad v_0^t = \frac{s_1 a_{21} - s_2 a_{11}}{a_{12} a_{21} - a_{11} a_{22}} \tag{12} $$
From Equations (10) and (12), the principal distance can be expressed as follows:
$$ f_0^t = \sqrt{\frac{\sum_{i=1}^{N} \left(x_{pi'}^t d_x - d_x u_0^t\right)^2 + \sum_{i=1}^{N} \left(y_{pi'}^t d_y - d_y v_0^t\right)^2}{\tan^2\alpha_1 + \tan^2\alpha_2}} \tag{13} $$
where $N$ is the number of MEMS point sources and $u_0^t$ and $v_0^t$ are given by Equation (12); in our system, $N = 2$. The variations of the principal distance and the principal point are then obtained as $\Delta f_0^t = f_0^{t+1} - f_0^t$, $\Delta u_0^t = u_0^{t+1} - u_0^t$, and $\Delta v_0^t = v_0^{t+1} - v_0^t$. $\alpha_1$, $\alpha_2$, and the positions of the two point sources can be accurately calibrated at the adjustment stage of the SSPIAP, and the positions of the two point-source images are calculated by the centroid extraction algorithm. Therefore, the variations of the principal distance and the principal point can be determined.
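Putting Equations (11)–(13) together, a compact solver for the $N = 2$ case might look as follows. This is a sketch under the definitions above (source positions in millimeters, image centroids in pixels); the names are ours, not the authors'.

```python
import numpy as np

def solve_principal_point_and_distance(src, img, usx0, vsy0, dx, dy,
                                       alpha1, alpha2):
    """Sketch of Equations (11)-(13) for N = 2 point sources.

    src: [(xs1, ys1), (xs2, ys2)] source positions in the S frame (mm).
    img: [(xp1, yp1), (xp2, yp2)] image centroids in the D frame (px).
    Returns (u0, v0) in pixels and the principal distance f0 in mm.
    """
    A = np.empty((2, 2)); s = np.empty(2)
    for i, ((xs, ys), (xp, yp)) in enumerate(zip(src, img)):
        A[i, 0] = 2 * dx * xs + 2 * usx0 * dx**2 - 2 * dx**2 * xp
        A[i, 1] = 2 * dy * ys + 2 * vsy0 * dy**2 - 2 * dy**2 * yp
        s[i] = ((xs + usx0 * dx)**2 + (ys + vsy0 * dy)**2
                - (xp * dx)**2 - (yp * dy)**2)
    u0, v0 = np.linalg.solve(A, s)          # Equations (11) and (12)
    r2 = sum((xp * dx - dx * u0)**2 + (yp * dy - dy * v0)**2
             for xp, yp in img)             # squared radial image distances
    f0 = np.sqrt(r2 / (np.tan(alpha1)**2 + np.tan(alpha2)**2))  # Equation (13)
    return u0, v0, f0
```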

3. Experiment and Analysis

3.1. Simulation and Analysis

To verify the effectiveness of the proposed method, we used the optical design program ZEMAX (Zemax, LLC, Bellevue, WA, USA) to simulate calibration experiments. In ZEMAX, the input optical system model parameters were as follows: the focal length was 2032 mm and the aperture diameter was 203.2 mm. A Cassegrain optical system composed of a primary mirror and a secondary mirror was adopted. Based on this optical system, a monitoring optical path for the principal distance and the principal point was designed, with a tertiary mirror used in addition to the secondary mirror. The optical rays are reflected from the mirror, pass through the optical system again, and are then concentrated on the CMOS detector. The designed optical path is shown in Figure 6. In this figure, ZSm1 is a mirror that simulates the dichroic filter, ZSm2 and ZSm3 are the primary and secondary mirrors, respectively, and ZSm4 is a focal plane that simulates the MEMS sources.
The simulation experiment includes two steps. In the first step, the IOPs are calibrated without any maladjustment in the optical system. In the second step, different maladjustments are deliberately introduced before calibrating the IOPs. In each step, the reference values of the principal distance and the principal point are first calculated under the given maladjustments. For a maladjusted optical system, the image-plane coordinates of rays from different fields of view can be traced accurately. We used the on-ground calibration method based on measuring angles to obtain the reference values of the principal distance and the principal point. This laboratory measuring-angle method is widely used for calibrating the IOPs of remote sensing cameras [42,43,44]. In this method, an uncalibrated optical camera is placed on a precision turntable to capture the parallel light of star points emitted by a collimator.
At different rotation angles of the turntable, the camera obtains multiple positions of the star points and their images from different fields of view (FOVs). The recorded rotation angles and image point positions are used to build an imaging geometry equation between each star point and its image. From the principle of camera distortion, an optical camera has its minimum distortion at the principal point location [44,45]. A least squares method is used to solve the imaging geometry equations to calculate the principal distance and the principal point. This method is simple and has high calibration accuracy, which can reach the micrometer level [45,46]. As with the ground calibration of the principal distance and the principal point, we used a least squares, multiple regression analysis to calculate the principal distance and the principal point of each maladjusted test system [46,47]. The principal distance and the principal point can be expressed as follows:
$$ f = \frac{\left(\sum_{i=1}^{N} L_i \tan^2 W_i\right)\left(\sum_{i=1}^{N} \tan^3 W_i\right) - \left(\sum_{i=1}^{N} L_i \tan W_i\right)\left(\sum_{i=1}^{N} \tan^4 W_i\right)}{\left(\sum_{i=1}^{N} \tan^3 W_i\right)^2 - \left(\sum_{i=1}^{N} \tan^2 W_i\right)\left(\sum_{i=1}^{N} \tan^4 W_i\right)} \tag{14} $$

$$ p_{x/y} = \frac{\left(\sum_{i=1}^{N} L_i \tan^2 W_i\right)\left(\sum_{i=1}^{N} \tan^2 W_i\right) - \left(\sum_{i=1}^{N} L_i \tan W_i\right)\left(\sum_{i=1}^{N} \tan^3 W_i\right)}{\left(\sum_{i=1}^{N} \tan^3 W_i\right)^2 - \left(\sum_{i=1}^{N} \tan^2 W_i\right)\left(\sum_{i=1}^{N} \tan^4 W_i\right)} \tag{15} $$
where $f$ is the principal distance, $p_{x/y}$ is the position of the principal point in the $x$ or $y$ direction, $N$ is the number of measurement points, $W_i$ is the measured angle of the $i$-th measurement point, and $L_i$ is the measured image height of the $i$-th measurement point.
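Equations (14) and (15) have the structure of a closed-form normal-equation solution for a two-parameter fit in $\tan W_i$ and $\tan^2 W_i$. A direct transcription in Python (our sketch; variable names are ours):

```python
import numpy as np

def ground_method_fit(W: np.ndarray, L: np.ndarray) -> tuple[float, float]:
    """Literal transcription of Equations (14) and (15).

    W: measured rotation angles of the turntable (radians).
    L: measured image heights of the star points (mm), one per angle.
    Returns (f, p_xy): principal distance and principal point estimate (mm).
    """
    t = np.tan(W)
    s2, s3, s4 = (t**2).sum(), (t**3).sum(), (t**4).sum()
    lt, lt2 = (L * t).sum(), (L * t**2).sum()
    den = s3**2 - s2 * s4                 # shared denominator of (14), (15)
    f = (lt2 * s3 - lt * s4) / den        # Equation (14)
    p = (lt2 * s2 - lt * s3) / den        # Equation (15)
    return float(f), float(p)
```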
In the first step, we used the two different methods to calculate the principal distance when the optical system had no maladjustment (see the optical system in Figure 6). In our simulation, we used ten reference points at different FOV positions to estimate the principal distance and the principal point with the on-ground method. The calculated results when the optical system had no maladjustment are shown in Table 1, which presents the initial values of the principal distance and the principal point from the two methods. For the ground method, the calculated error was 0.000853 μm. This error is produced by the least squares estimation and is acceptable in our system. The initial values of the two methods can be considered approximately equal.
Under on-orbit working conditions, the optical system may possess a variety of maladjustments. We therefore set different maladjustments of ZSm2 and, using the same approach, placed ten reference points in a valid FOV. In the second step, we simulated three main maladjustment situations. First, we set twelve maladjustments of ZSm2 along the z-direction, with deviations of the ZSm2 mirror from its original position ranging from 0.010 mm to 0.06 mm. For the different maladjustment values, Figure 7 shows the calculation results of the two methods. As shown in Figure 7, the principal point did not change, but the principal distance did. This behavior is caused by the translation of the ZSm2 mirror along the optical axis, which is equivalent to defocus. Defocus often occurs under on-orbit working conditions, so this case simulates the most common maladjustment. As shown in Figure 7, the variation of the principal distance can be accurately monitored; the difference between the two methods was less than 0.008 μm, which is acceptable.
Second, we set maladjustments in the y-direction and around the x-axis. In the y-direction, we set nine maladjustments of ZSm2, with deviations from the original position ranging from 0.010 mm to 0.05 mm. Around the x-axis, we set seven maladjustments of ZSm2, with deviations ranging from 0.0001 degrees to 0.0012 degrees. Figure 8 and Figure 9 show the calculation results of the two methods and indicate that the principal distance did not change while the principal point did.
These results are due to the translation of the ZSm2 mirror away from the optical axis and its inclination with respect to the optical axis. These cases are equivalent to a mismatch between the primary and secondary mirrors under on-orbit working conditions, which happens during on-orbit dynamic imaging. As shown in Figure 8 and Figure 9, the error is less than 1 μm under the different maladjustments, which is acceptable. Table 1 and Figures 7–9 show that our method can accurately calibrate the variations of the principal distance and the principal point. In Figures 7–9, the calibration error is the difference between the two methods. In Figure 9, the calibration error grows linearly because larger maladjustments produce larger deformations of the optical system and the images. Compared with the on-ground method, the error of our method is less than 1 μm.

3.2. Experiments

We set up an experimental system for integrated calibration with imaging to verify the proposed method. Figure 10 shows the experimental calibration system.
The system included an optical camera, an auto-collimating filter, a processing circuit, a collimator, an optical theodolite, and a high-accuracy turntable. The optical system was a co-axial Schmidt–Cassegrain system (Celestron, Torrance, CA, USA). The design value of the aperture diameter was 203.2 mm, the focal length was 2032 mm, and the F-number of the optical system was 10. The image sensor was a CMOS detector with a resolution of 1280 × 1024 pixels. The MEMS point sources and the image sensor were installed on the focal plane. A collimator provided an infinite target for the test system.
First, we calibrated the reference values of the principal distance and the principal point. The camera controller was set to imaging mode. The three-axis turntable was leveled, as was the collimator. By adjusting the support tooling of the camera and using the benchmark prisms of the camera's optical axis, the camera's visual axis and the collimator were aligned to share a common axis. Star points of the collimator were imaged on the target CMOS sensor. The turntable was rotated, and the rotation angle and the captured image were recorded. The processing circuit output the star coordinates in real-time. Figure 11 shows the captured images.
To avoid vibration errors at startup and pause, the turntable was rotated with a uniform period. Images and rotation angles were recorded in real-time. Figure 12 shows the centroid positions over one period.
Based on the measured centroid positions and rotation angles, a least squares, multiple regression analysis was used to obtain the optimal estimates of the IOPs. Table 2 shows the calculation results.
Second, we calibrated the principal distance and the principal point using our method. In the SSPIAP, the camera controller was set to calibration mode and switched the MEMS point sources on or off. The MEMS point sources were lit when the camera controller executed the turn-on command. Figure 13 shows the captured images when two point sources were lit. Using the spot centroid algorithm, the positions of the images were determined, and the principal distance and the principal point were calculated according to Equations (12) and (13).
Table 3 shows the calculation results. The principal distance calibrated by our method was 2032.0818 mm, and the standard deviation was 0.0343 mm. The deviation is a constant error, mainly produced by the on-ground method error, the calibration errors of the positions $(x_{s1}, y_{s1})$ and $(x_{s2}, y_{s2})$, the centroid extraction error, and the installation error of the focal plane. For the on-ground method, the calibration accuracy is better than 5 μm [44,45]. The position accuracy of the spot image is mainly determined by the centroid extraction algorithm, whose measurement accuracy can reach 0.05 pixels. The pixel size of the image sensor in the SSPIAP system is 5.3 μm, so the extraction accuracy can reach 0.265 μm. The total standard deviations of the principal distance and the principal point can be expressed as follows:
$$ \sigma_{f_0^t} = \sqrt{\sum_{i=1}^{N} \left(\frac{\partial f_0^t}{\partial x_{pi'}^t}\right)^2 \sigma_{x_p}^2 + \sum_{i=1}^{N} \left(\frac{\partial f_0^t}{\partial y_{pi'}^t}\right)^2 \sigma_{y_p}^2} \tag{16} $$

$$ \sigma_{u_0^t} = \sqrt{\sum_{i=1}^{N} \left(\frac{\partial u_0^t}{\partial x_{pi'}^t}\right)^2 \sigma_{x_p}^2 + \sum_{i=1}^{N} \left(\frac{\partial u_0^t}{\partial y_{pi'}^t}\right)^2 \sigma_{y_p}^2} \tag{17} $$

$$ \sigma_{v_0^t} = \sqrt{\sum_{i=1}^{N} \left(\frac{\partial v_0^t}{\partial x_{pi'}^t}\right)^2 \sigma_{x_p}^2 + \sum_{i=1}^{N} \left(\frac{\partial v_0^t}{\partial y_{pi'}^t}\right)^2 \sigma_{y_p}^2} \tag{18} $$
where $N$ is the number of point sources, and $\sigma_{x_p}$ and $\sigma_{y_p}$ are the measurement accuracies of the centroid extraction algorithm in the $x$ and $y$ directions; that is, $\sigma_{x_p} = \sigma_{y_p} = 0.265$ μm. Based on Equations (16)–(18), the total standard deviations of the principal distance and the principal point are less than 0.02 μm. Therefore, the centroid extraction error is relatively small and can be neglected. The calibration errors of the positions $(x_{s1}, y_{s1})$ and $(x_{s2}, y_{s2})$ and the installation errors of the focal plane were controlled to be less than 30 μm. However, these errors do not affect the monitoring of the relative variations of the principal distance and the principal point.
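Rather than deriving the partial derivatives in Equations (16)–(18) analytically, one can approximate them numerically through the solver sketched in Section 2.2. The snippet below is our illustration of that idea, with the centroid noise expressed in pixels (0.05 px, per the text); all names are ours.

```python
import numpy as np

def propagated_sigmas(solver, img, sigma=0.05, eps=1e-4, **kw):
    """Numerical version of Equations (16)-(18).

    solver: e.g. solve_principal_point_and_distance from Section 2.2, called
    as solver(img=..., src=..., usx0=..., vsy0=..., dx=..., dy=...,
    alpha1=..., alpha2=...).
    img: list of (x, y) centroid coordinates in pixels; sigma: centroid noise
    in pixels; eps: finite-difference step in pixels.
    Returns (sigma_u0, sigma_v0, sigma_f0) in the solver's output units.
    """
    base = np.array(solver(img=img, **kw))       # (u0, v0, f0) at nominal input
    grads = []
    for i in range(len(img)):                    # each point-source image...
        for j in range(2):                       # ...and each of its x, y coords
            pert = [list(p) for p in img]
            pert[i][j] += eps
            grads.append((np.array(solver(img=pert, **kw)) - base) / eps)
    G = np.array(grads)
    # Root-sum-square over all centroid coordinates, as in (16)-(18).
    return np.sqrt((G ** 2).sum(axis=0)) * sigma
```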
Third, we adjusted the motion of the secondary mirror to simulate on-orbit maladjustments of the optical system. From an analysis of the maladjustments of the different elements of the optical system, we know that maladjustments of the secondary and primary mirrors affect the focal plane of the optical system. Given that the primary mirror is installed in the primary mirror cell, it has almost no maladjustment. Figure 14 shows the principal distance variation under different maladjustments. In addition, we used the ground calibration method based on a least squares, multiple regression analysis as a reference. The calibration deviation was less than 0.015 mm; it stems from the intrinsic differences between the two methods under the maladjusted condition.
In the experimental setup, the monitoring accuracy of the variations of the principal distance and the principal point was mainly affected by turntable vibrations, environmental vibrations, temperature, and airflows.
When the SSPIAP is working in a space environment, centroid extraction, platform flutter, and temperature are the primary error sources. In the SSPIAP system, we designed a special thermal control system to minimize changes in the temperature field of the optical system. Further, we use a manganin material in a structure designed for vibration attenuation in the SSPIAP; this structure significantly reduces the influence of flutter on the optical system. The centroid extraction error mainly depends on the centroid extraction algorithm, whose extraction accuracy is better than 0.05 pixels. In our SSPIAP system, the pixel size of the image sensor is at the micrometer level; thus, the extraction error is less than 1 μm, which is acceptable against the requirement of errors below 50 μm. To test the monitoring accuracy of our method in an approximation of the on-orbit environment, our experiment was performed in a laboratory at constant temperature. Furthermore, the experimental turntable used a gas-floating vibration isolation platform to avoid vibration disturbance. We used our method to monitor the variations of the principal distance and the principal point in the static case, processing tens of thousands of images to calculate the monitoring accuracy. The variation of the principal distance over 2000 s of real-time operation is shown in Figure 15, while the corresponding variation of the principal point position is presented in Figure 16.
Based on the statistical data in Figure 15 and Figure 16, the mean square error formula was used to calculate the monitoring accuracy. The monitoring accuracy of the principal distance reaches 2.4 μm, and that of the principal point reaches 2.6 μm and 3.9 μm in the x and y directions, respectively. The SSPIAP requires 5 m image positioning accuracy, which in turn requires a calibration accuracy of the principal distance and the principal point better than 50 μm. The monitoring accuracy reaches the micrometer level and meets the SSPIAP mapping requirements.
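The exact statistic is not given; assuming the "mean square error formula" refers to the root-mean-square deviation of the monitored series about its mean, the computation is a one-liner:

```python
import numpy as np

def monitoring_accuracy(series: np.ndarray) -> float:
    """RMS deviation about the mean of a long static monitoring series
    (our assumption of the mean square error formula used in the text)."""
    return float(np.sqrt(np.mean((series - series.mean()) ** 2)))
```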

4. Conclusions

In this paper, we presented a high-accuracy on-orbit calibration method for the IOPs of an SSPIAP. We adopted an integrated approach to build an auto-collimation self-calibration system in which an auto-collimation dichroic filter and MEMS point sources are integrated into the SSPIAP. First, the point sources were installed on the focal plane; we used the MEMS process to fabricate the point sources and package them with the image sensor. Second, we integrated the auto-collimation dichroic filter into the optical system of the SSPIAP. Third, a mathematical model of the IOPs was built based on a geometric imaging model. Fourth, the centroid extraction algorithm was used to process images and extract the star point positions to calculate the IOPs. Finally, we used ZEMAX to simulate the proposed method and set up an experiment to verify its feasibility. The monitoring accuracy reaches micrometer levels. The proposed method can complete self-calibration in real-time without spatial or temporal limitations. In addition, it can be combined with other calibration methods to improve their performance.

Acknowledgments

This work was supported by the Natural Science Foundation of China (No. 61505093) and the National High Technology Research and Development Program of China (863 Program) (Grant No. 2012AA121503).

Author Contributions

F.X. and D.C. conceived and designed the experiments; J.L. performed the experiments; F.X. and Z.L. analyzed the data; D.C. contributed analysis tools; J.L. wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Figoski, J.W. Quickbird telescope: The reality of large high-quality commercial space optics. Proc. SPIE 1999, 3779, 22–30.
2. Han, C. Recent Earth imaging commercial satellites with high resolutions. Chin. J. Opt. Appl. Opt. 2010, 3, 202–208.
3. Wei, M.; Xing, F.; You, Z. An implementation method based on ERS imaging mode for sun sensor with 1 kHz update rate and 1″ precision level. Opt. Express 2013, 21, 32524–32533.
4. Sun, T.; Xing, F.; You, Z.; Wei, M.S. Motion-blurred star acquisition method of the star tracker under high dynamic conditions. Opt. Express 2013, 21, 20096–20110.
5. You, Z.; Wang, C.; Xing, F.; Sun, T. Key technologies of smart optical payload in space remote sensing. Spacecr. Recover. Remote Sens. 2013, 34, 35–43.
6. Li, J.; Xing, F.; You, Z. Space high-accuracy intelligence payload system with integrated attitude and position determination. Instrument 2015, 2, 3–16.
7. Wang, C.; You, Z.; Xing, F.; Zhang, G. Image motion velocity field for wide view remote sensing camera and detectors exposure integration control. Acta Opt. Sin. 2013, 33, 88–95.
8. Wang, C.; Xing, F.; Wang, H.; You, Z. Optical flow method for lightweight agile remote sensor design and instrumentation. Proc. SPIE 2013, 8908, 1–10.
9. Wang, C.; You, Z.; Xing, F.; Zhao, B.; Li, B.; Zhang, G.; Tao, Q. Optical flow inversion for remote sensing image dense registration and sensor's attitude motion high-accurate measurement. Math. Probl. Eng. 2014, 2014, 432613.
10. Hong, Y.; Ren, G.; Liu, E. Non-iterative method for camera calibration. Opt. Express 2015, 23, 23992–24003.
11. Ricolfe-Viala, C.; Sanchez-Salmeron, A.; Valera, A. Calibration of a trinocular system formed with wide-angle lens cameras. Opt. Express 2012, 20, 27691–27696.
12. Lin, P.D.; Sung, C.K. Comparing two new camera calibration methods with traditional pinhole calibrations. Opt. Express 2007, 15, 3012–3022.
13. Wei, Z.; Liu, X. Vanishing feature constraints calibration method for binocular vision sensor. Opt. Express 2015, 23, 18897–18914.
14. Bauer, M.; Grießbach, D.; Hermerschmidt, A.; Krüger, S.; Scheele, M.; Schischmanow, A. Geometrical camera calibration with diffractive optical elements. Opt. Express 2008, 16, 20241–20248.
15. Yilmazturk, F. Full-automatic self-calibration of color digital cameras using color targets. Opt. Express 2011, 19, 18164–18174.
16. Ricolfe-Viala, C.; Sanchez-Salmeron, A. Camera calibration under optimal conditions. Opt. Express 2011, 19, 10769–10775.
17. Simon, T.; Aymen, A.; Pierre, D. Cross-diffractive optical elements for wide angle geometric camera calibration. Opt. Lett. 2011, 36, 4770–4772.
18. Heikkila, J.; Silven, O. A four-step camera calibration procedure with implicit image correction. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, 17–19 June 1997; pp. 1106–1112.
19. Maybank, S.J.; Faugeras, O.D. A theory of self-calibration of a moving camera. Int. J. Comput. Vis. 1992, 8, 123–151.
20. Faugeras, O.D.; Luong, Q.T.; Maybank, S.J. Camera self-calibration: Theory and experiments. In Computer Vision—ECCV'92; Springer: Berlin/Heidelberg, Germany, 1992; pp. 321–334.
21. Hartley, R.I. Euclidean reconstruction from uncalibrated views. In Applications of Invariance in Computer Vision; Springer: Berlin/Heidelberg, Germany, 1994; pp. 235–256.
22. Song, D.M. A self-calibration technique for active vision systems. IEEE Trans. Robot. Autom. 1996, 12, 114–120.
23. Caprile, B.; Torre, V. Using vanishing points for camera calibration. Int. J. Comput. Vis. 1990, 4, 127–139.
24. Gonzalez-Aguilera, D.; Rodriguez-Gonzalvez, P.; Armesto, J.; Arias, P. Trimble GX200 and Riegl LMS-Z390i sensor self-calibration. Opt. Express 2011, 19, 2676–2693.
25. Fraser, C.S. Photogrammetric camera component calibration: A review of analytical techniques. In Calibration and Orientation of Cameras in Computer Vision; Gruen, A., Huang, T.S., Eds.; Springer: Berlin, Germany, 2001; pp. 95–121.
26. Lichti, D.D.; Kim, C. A comparison of three geometric self-calibration methods for range cameras. Remote Sens. 2011, 3, 1014–1028.
27. Lipski, C.; Bose, D.; Eisemann, M.; Berger, K.; Magnor, M. Sparse bundle adjustment speedup strategies. In WSCG Short Papers Post-Conference Proceedings, Proceedings of the 18th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision in Co-Operation with EUROGRAPHICS, Plzen, Czech Republic, 1–4 February 2010; Skala, V., Ed.; WSCG: Plzen, Czech Republic, 2010.
28. De Lussy, F.; Greslou, D.; Dechoz, C.; Amberg, V.; Delvit, J.M.; Lebegue, L.; Blanchet, G.; Fourest, S. Pleiades HR in flight geometrical calibration: Localisation and mapping of the focal plane. In Proceedings of the XXII ISPRS Congress, International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 25 August–1 September 2012; pp. 519–523.
29. Fourest, S.; Kubik, P.; Lebegue, L.; Dechoz, C.; Lacherade, S.; Blanchet, G. Star-based methods for Pleiades HR commissioning. In Proceedings of the XXII ISPRS Congress, International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 25 August–1 September 2012; pp. 531–536.
30. Greslou, D.; de Lussy, F.; Amberg, V.; Dechoz, C.; Lenoir, F.; Delvit, J.; Lebegue, L. Pleiades-HR 1A&1B image quality commissioning: Innovative geometric calibration methods and results. Proc. SPIE 2013, 8866, 1–12.
31. Delvit, J.M.; Greslou, D.; Amberg, V.; Dechoz, C.; de Lussy, F.; Lebegue, L.; Latry, C.; Artigues, S.; Bernard, L. Attitude assessment using Pléiades-HR capabilities. In Proceedings of the XXII ISPRS Congress, International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 25 August–1 September 2012; pp. 525–530.
32. Cook, M.K.; Peterson, B.A.; Dial, G.; Gibson, L.; Gerlach, F.W.; Hutchins, K.S.; Kudola, R.; Bowen, H.S. IKONOS technical performance assessment. Proc. SPIE 2001, 4381, 94–108.
33. Kaveh, D.; Mazlan, H. Very high resolution optical satellites for DEM generation: A review. Eur. J. Sci. Res. 2011, 49, 542–554.
34. Jacobsen, K. Geometric calibration of space remote sensing cameras for efficient processing. Int. Arch. Photogramm. Remote Sens. 1998, 32, 33–43.
35. Mi, W.; Bo, Y.; Fen, H.; Xi, Z. On-orbit geometric calibration model and its applications for high-resolution optical satellite imagery. Remote Sens. 2014, 6, 4391–4408.
36. Xu, Y.; Liu, T.; You, H.; Dong, L.; Liu, F. On-orbit calibration of interior orientation for HJ1B-CCD camera. Remote Sens. Technol. Appl. 2011, 26, 309–314.
37. Lv, H.; Han, C.; Xue, X.; Hu, C.; Yao, C. Autofocus method for scanning remote sensing camera. Appl. Opt. 2015, 54, 6351–6359.
38. Li, J.; Chen, X.; Tian, L.; Lian, F. Tracking radiometric responsivity of optical sensors without on-board calibration systems—case of the Chinese HJ-1A/1B CCD sensors. Opt. Express 2015, 23, 1829–1847.
39. Li, J.; Xing, F.; Sun, T.; You, Z. Efficient assessment method of on-board modulation transfer function of optical remote sensing sensors. Opt. Express 2015, 23, 6187–6208.
40. Gleyzes, M.A.; Perret, L.; Kubik, P. Pleiades system architecture and main performances. In Proceedings of the XXII ISPRS Congress, International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 25 August–1 September 2012; pp. 537–542.
41. Poli, D.; Remondino, F.; Angiuli, E.; Agugiaro, G. Radiometric and geometric evaluation of GeoEye-1, WorldView-2 and Pléiades stereo images for 3D information extraction. ISPRS J. Photogramm. Remote Sens. 2015, 100, 35–47.
42. Fu, R.; Zhang, Y.; Zhang, J. Study on geometric measurement methods for line-array stereo mapping camera. Spacecr. Recover. Remote Sens. 2011, 32, 62–67.
43. Hieronymus, J. Comparison of methods for geometric camera calibration. In Proceedings of the XXII ISPRS Congress, International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 25 August–1 September 2012; pp. 595–599.
44. Yuan, F.; Qi, W.J.; Fang, A.P. Laboratory geometric calibration of areal digital aerial camera. IOP Conf. Ser. Earth Environ. Sci. 2014, 17, 012196.
45. Chen, T.; Shibasaki, R.; Lin, Z. A rigorous laboratory calibration method for interior orientation of an airborne linear push-broom camera. Photogramm. Eng. Remote Sens. 2007, 73, 369–374.
46. Wu, G.; Han, B.; He, X. Calibration of geometric parameters of line array CCD camera based on exact measuring angle in lab. Opt. Precis. Eng. 2007, 15, 1628–1632.
47. Yuan, F.; Qi, W.; Fang, A.; Ding, P.; Yu, X. Laboratory geometric calibration of non-metric digital camera. Proc. SPIE 2013, 8921, 99–103.
Figure 1. Principle of auto-collimating calibration method.
Figure 2. Filtering performance of dichroic filter.
Figure 3. MEMS process to fabricate point-source focal plane.
Figure 4. Coordinate relationship of equivalent optical path.
Figure 5. Imaging relationship in actual imaging system.
Figure 6. Simulation of optical system.
Figure 7. Simulation results of calibration when a maladjustment occurs in the Z direction: (a) calculation results of variation of principal distance; (b) calibration error of the two methods; (c) calculation results of variation of principal point; and (d) calibration error of the two methods.
Figure 8. Simulation results of calibration when a maladjustment occurs in the Y direction: (a) calculation results of variation of principal distance; (b) calibration error of the two methods; (c) calculation results of variation of principal point; and (d) calibration error of the two methods.
Figure 9. Simulation results of calibration when a maladjustment occurs around the X-axis: (a) calculation results of variation of principal distance; (b) calibration error of the two methods; (c) calculation results of variation of principal point; and (d) calibration error of the two methods.
Figure 10. Experimental system.
Figure 11. Images at different rotation angles: (a) 0 degrees; (b) −0.0225 degrees; (c) −0.04 degrees; (d) +0.04 degrees.
Figure 12. Centroid positions of different points: (a) image points 1–8; (b) image points 9–16; (c) image points 17–24; (d) image points 25–30.
Figure 13. Gray distribution of captured image for our method.
Figure 14. Calibration testing under maladjusted condition.
Figure 15. Variation of principal distance.
Figure 16. Variation of principal point.
Table 1. Simulation calculation results without maladjustments.

Elements   | Ground Method   | Our Method | Maladjustment (mm)
f (mm)     | 2031.999999     | 2032.000   | 0
Δf (mm)    | 8.527259 × 10−7 | 0          | 0
U0x (mm)   | 0               | 0          | 0
U0y (mm)   | 0               | 0          | 0
ΔU0x (mm)  | 0               | 0          | 0
ΔU0y (mm)  | 0               | 0          | 0
Table 2. Calculation results of principal distance and principal point using ground methods.

Number | Elements  | Reference Value
1      | f (mm)    | 2032.1161
2      | U0x (mm)  | −0.5856
3      | U0y (mm)  | −0.9643
Table 3. Calculation results of principal distance and principal point using our method.

Number | Elements   | Calibration Value
1      | f (mm)     | 2032.0818
2      | U0x (mm)   | −0.5387
3      | U0y (mm)   | −0.9580
4      | Δf (mm)    | 0.0342
5      | ΔU0x (mm)  | 0.0469
6      | ΔU0y (mm)  | 0.0063
