1. Introduction
Relative navigation is a key functionality for emerging mission needs such as automated rendezvous and docking, active debris removal, and on-orbit servicing [1,2,3,4], and it is essential for collision avoidance. Up to now, several technology demonstration missions have been designed for cooperative targets, which carry aids such as markers or optical reflectors [5,6,7]. For uncooperative targets, relative navigation is still an open research area facing many technical challenges. One of the greatest challenges is to acquire the relative pose, i.e., the six degrees-of-freedom (6DOF) pose, between the target and the chaser, which are, in general, considered to be moving independently.
Pose estimation is the process of estimating the relative attitude and position, and it depends largely on the type of sensor. Stereo vision [6,8,9,10], monocular vision [11,12,13], and LIDAR (light detection and ranging) [4,7,14,15,16,17,18,19] are the sensor types usually used in space applications. Stereo vision approaches provide measurements of high accuracy at a high rate, but their working distance is limited by the baseline and the computational requirements increase with resolution. Monocular vision lacks depth information and often needs additional information. LIDAR sensors, whose output is point cloud data, are robust against lighting changes; each point contains a range vector in the sensor frame. In recent years, flash LIDAR [4,7,15,16,17,18] has been discussed for pose estimation. Different from scanning LIDAR, which employs a scanning device to collect one point at a time, flash LIDAR collects the entire point cloud image at once. This characteristic helps reduce distortion in the point cloud data and provides pose estimation results at a fast frame rate. However, the drawbacks are also obvious, such as low resolution (the typical resolution is less than 256 × 256) [4] and measurement noise.
Relative navigation in close range is still an open research area, especially when the target satellite has no cooperative markers. This paper focuses on 6DOF pose estimation of a satellite in close range using a flash LIDAR sensor. Assuming that the 3D model of the target is known, a novel relative pose estimation method is proposed that directly aligns the sensor point cloud data with the model point cloud data. Different from existing works, this method has no need for feature detection and feature tracking; it directly aligns the dense point cloud data to realize both initial pose acquisition and pose tracking. A simulation system is also proposed to generate simulated point cloud data based on the model point cloud data, which can be used to simulate various motion conditions and to evaluate the performance of the sensor and the pose algorithm.
The paper is organized as follows: the related works of recent years are described in detail in Section 2. Details of the proposed pose estimation method are presented in Section 3. In Section 4, the simulation system is introduced in detail. Some experimental results are shown and discussed in Section 5. Finally, the work is concluded in Section 6 with a discussion of the limitations and future works.
2. Related Work
Relative pose estimation is an important process in the relative navigation of satellites in space, and six degrees-of-freedom pose estimation of the relative motion is the key problem. The chaser is typically equipped with sensors that collect images or point cloud data of the target to estimate the pose.
Many studies have been conducted in recent years. The relative navigation sensor is often designed and tested according to the specific space task. Optical vision sensors are adopted to obtain the relative attitude and position in the close approach phase, and stereo vision is the most frequently used sensor [6,8,9,10]. The Argon system [6], developed by the Goddard Space Flight Center, is a typical example; it is designed for rendezvous and proximity operations and uses the flight cameras from the Relative Navigation Sensor experiment flown on STS-125. The vision system of the SUMO/FREND program, for the mission of autonomous satellite grapple, is described in [8]. A method for estimating the pose using stereo vision and a 3D CAD model is given in [9]. By combining an image processing method and a filtering scheme, a stereo vision based relative motion estimation method for noncooperative satellites is proposed in [10]. Besides, research based on monocular vision has also been developed by many scholars. An analysis and laboratory tests for orbital rendezvous operations are reported in [11], where the sensor is a combination of a commercial web cam and two lasers. A TV-based docking control system using monocular vision and a 3D model of the ISS (International Space Station) is presented in [12]. A novel pose estimation method for a noncooperative satellite that recognizes the solar panel triangle structure is reported in [13].
LIDAR is another type of sensor commonly adopted in space relative navigation. A comprehensive review of LIDAR technology as applied specifically to spacecraft relative navigation is given in [4]. The TriDAR system [14], developed by the Canadian Space Agency, uses triangulation and scanning LIDAR technology to provide six degrees-of-freedom pose estimation; it was selected for the Hubble Robotic Vehicle De-orbit Module mission and tested on STS-128, STS-131, and STS-135. Recently, flash LIDAR sensors [4,7,15,16,17,18,19] have been developed and tested for several space programs. Ball Corp's flash LIDAR was tested on STS-134 and is currently planned to be the primary relative navigation sensor for the Orion multipurpose crew vehicle. Also, ASC's DragonEye flash LIDAR was selected by SpaceX for the Dragon capsule and was tested on STS-127 and STS-133. In close proximity, flash LIDAR is more effective than scanning LIDAR because it collects point cloud data at a faster frame rate and avoids the point cloud distortion that occurs when the target is rotating or translating, making it one of the most promising sensors for relative navigation. A method for cooperative relative navigation of spacecraft using flash LIDAR, for which reflectors are needed, is presented in [7]. A 3D template matching technique for initial pose acquisition is designed in [15,16]. A novel pose initialization strategy based on Oriented, Unique, and Repeatable Clustered Viewpoint Feature Histograms (OUR-CVFH) is proposed in [17], where a dual-state multiplicative extended Kalman filter is combined with the pose processor to realize relative navigation. A new method that estimates the relative pose and trajectory simultaneously using flash LIDAR is presented in [18]. Besides, flash LIDAR can be used in other space missions, such as safe landing [19].
Several hardware-in-the-loop testbeds [20,21,22] have been designed for testing sensor performance and algorithms for space rendezvous operations. Within a vision based navigation sensor system test campaign, hardware-in-the-loop tests on the terrestrial, robot based facility European Proximity Operations Simulator (EPOS) 2.0 were performed to test and verify guidance, navigation and control algorithms using real sensor measurements [20]. A hardware-in-the-loop long distance movement simulation system was designed and built at the DFKI RIC for the INVERITAS project [21]; it incorporates real hardware such as mock-ups of the client and the servicer, real sensors such as stereo vision, and sensor data processing hardware, and it can simulate rendezvous and capture maneuvers. A vision based autonomous relative navigation algorithm using a single camera is presented and tested on an air-bearing table in [22]. Compared with hardware-in-the-loop systems, the advantages of software simulation are lower cost and easier implementation. A stereo based closed loop simulation system, which includes the 3D target and chaser models, the relative orbital dynamics model, and the controller model, is designed in [23]. A point cloud modeling process is described in detail in [24], and the modeling accuracy is assessed by comparing the simulated point cloud data against test data from a laboratory experiment.
Point cloud based pose estimation methods are usually designed by registering point cloud data collected from different viewpoints and distances. A model based method named 3D LASSO, which provides six degrees-of-freedom relative pose information by processing 3D scanning LIDAR data and is adopted in the TriDAR system, is proposed in [25]. Sensors different from the scanning LIDAR, such as the photonic mixer device (PMD), have also been used for the same goal. A spacecraft pose estimation algorithm that processes real-time PMD time-of-flight (ToF) camera frames to produce a six degrees-of-freedom pose estimate by 3D feature detection and feature matching is tested in [26]. A new pose estimation method for satellites that fuses a PMD ToF camera and a CCD sensor, in order to benefit from each sensor's advantages, is presented in [27] and tested on the European Proximity Operations Simulator (EPOS).
In this paper, attention is focused on a pose estimation method that uses the data of the flash LIDAR sensor. Like the scanning LIDAR, the flash LIDAR sensor provides both angle and range measurements, which can easily be converted to three-dimensional point cloud data in the sensor frame. Unlike the previous works, in this paper, assuming that the target satellite has no cooperative markers but its model is known, a novel model based pose estimation method is designed that matches the real-time 3D sensor point cloud data and the 3D model point cloud data directly. A software simulation system is devised for numerical emulation. The pose estimation method is tested with a real time-of-flight sensor and a satellite model on an air-bearing platform, and the experiment results show its effectiveness.
3. Proposed Pose Estimation Method
In general terms, relative pose estimation is the problem of finding the set of parameters that describe the rigid rotation and translation between a sensor reference frame and a target reference frame. In this paper, the frame transformation is estimated by matching point cloud data. A brief overview of the proposed method is presented in Figure 1. When following the evolution of the relative pose of a satellite, two main steps are required: pose initialization and pose tracking. Pose initialization is performed when the first sensor point cloud is acquired and no a priori information about the target relative pose is available; a novel initialization method based on the 3D model of the satellite is designed for this step. Pose tracking is the subsequent step, allowing the pose parameters to be updated, on the basis of the previously estimated ones, as new measurements are acquired. The details of the proposed method are given in the remainder of this section.
3.1. Definition of Reference Frames and Pose Parameters
For the relative navigation of an uncooperative satellite, four reference frames are of interest: the chaser body-fixed frame, the sensor frame, the target body-fixed frame, and the target model frame, as shown in Figure 2.
The origins of the chaser body-fixed frame and the target body-fixed frame lie at the mass centers of the chaser satellite and the target satellite, respectively, and the orientation of their axes is determined by the pose and orbit control system. The origin of the sensor frame lies in the flash LIDAR sensor accommodated onboard the chaser: one axis increases along the optical axis away from the sensor, a second is selected parallel to a reference body axis (in the example, parallel to an edge of the spacecraft bus), and the third completes the right-handed frame. The target model frame depends on the 3D model or the 3D point cloud data of the target; its origin is defined at the centroid of the model and the orientation of its axes is parallel to the axes of the sensor frame. All reference frames may be located and oriented in a different way when required.
The transformation matrix from the sensor frame to the chaser body-fixed frame is known by design, and the transformation matrix from the target model frame to the target body-fixed frame can be obtained offline, depending on the definition of the model frame. The pose information needed by the pose and orbit control system, represented by the transformation matrix from the target body-fixed frame to the chaser body-fixed frame, is then easily established once the transformation matrix from the target model frame to the sensor frame has been estimated by the point cloud processing method. Thus we focus on estimating the transformation matrix from the target model frame to the sensor frame.
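Denoting the chaser body-fixed, sensor, target model, and target body-fixed frames by the hypothetical labels $c$, $s$, $m$, and $t$ (the original symbols are not reproduced here), and writing $T_{A\leftarrow B}$ for the homogeneous transformation mapping coordinates from frame $B$ to frame $A$, the chain of transformations described above reads

$$T_{c \leftarrow t} = T_{c \leftarrow s}\, T_{s \leftarrow m}\, T_{m \leftarrow t},$$

with $T_{c \leftarrow s}$ known by design, $T_{m \leftarrow t}$ obtained offline from the model definition, and $T_{s \leftarrow m}$ the quantity estimated online by point cloud matching.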
It is necessary to define the 6DOF relative pose parameters. The relative position is indicated as the translation vector
, as defined in Equation (1) and the relative attitude is represented as the rotation matrix
by a 312 sequence of Euler angles. Rotation about X axis by an angle
, rotation about Y axis by an angle
, rotation about Z axis by an angle
, are defined respectively as Equations (2)–(4).
Considered a point, which coordinate is
in the modal frame and the corresponding matching point, which coordinate is
in the sensor frame, according to the transformation, the following Equation (6) is satisfied. Also, the transformation matrix
can be expressed by
and
as given in Equation (7).
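As an illustrative sketch (the angle symbols and the exact ordering of the 3-1-2 sequence are assumptions here, since Equations (2)–(7) are not reproduced), the rotation matrix and the homogeneous transformation can be assembled as follows:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def pose_matrix(theta_x, theta_y, theta_z, t):
    """Homogeneous transform from the model frame to the sensor frame.

    The attitude is built from a 3-1-2 Euler sequence (Z, then X, then Y);
    this is one common reading of a '312 sequence', and the paper's exact
    convention may differ."""
    R = rot_y(theta_y) @ rot_x(theta_x) @ rot_z(theta_z)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t)
    return T

# A matching point p_m in the model frame maps to p_s in the sensor frame:
T = pose_matrix(0.1, -0.05, 0.3, [1.0, 0.5, 10.0])
p_m = np.array([0.2, 0.1, 0.0, 1.0])   # homogeneous coordinates
p_s = T @ p_m                           # equivalent to R @ p_m[:3] + t
```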
3.2. Pose Initial Acquisition
In order to solve the problem of initial pose acquisition, a novel model-based method is developed which computes the pose by directly aligning the sensor point cloud data with the prior model point cloud data stored or built on board. In this way, the processing does not have to detect and match a number of features or track them in a sequence of images, as in other approaches (see references [9,12,14,15,16,17,25,26,27,28,29,30]). The framework of the method is illustrated in Figure 3.
The 3D model of the target is assumed to be known and can be either a CAD model or a 3D point cloud. Because the initial pose between the chaser and the target is uncertain, we propose a global point cloud registration algorithm to estimate the initial pose, which includes three steps: principal direction transformation, translation domain estimation, and global optimal searching.
3.2.1. Principal Direction Transformation
The model point cloud data and the sensor point cloud data are defined as $P_M$ and $P_S$, respectively. The principal direction transformation is carried out separately for $P_M$ and $P_S$, producing $P_M'$ and $P_S'$. We use $P_M$ to illustrate the computation procedure that generates $P_M'$.
Firstly, we compute the principal directions of $P_M$ from the eigenvectors of the covariance matrix $C$, which is defined in Equations (8) and (9), where $p_i$ is a point of $P_M$, $\bar{p}$ is the mean value, and $N$ is the number of points in $P_M$. The eigenvectors, sorted by ascending eigenvalue, represent the X, Y, and Z axes.
A local reference frame is then defined with $\bar{p}$ as the origin and the eigenvectors as axes. Unfortunately, due to the sign ambiguity of the eigenvector decomposition, a further disambiguation step is needed to yield a fully repeatable local reference frame. More specifically, the eigenvector corresponding to the minimum eigenvalue is denoted $x^+$ and its opposite direction $x^-$. Each point is assigned to one of the two directions according to Equation (10): points consistent with $x^+$ are added to the collection $S^+$; otherwise they belong to $S^-$. The disambiguated X axis is established by comparing the sizes of $S^+$ and $S^-$, as defined in Equation (11). The Z axis is disambiguated in the same way using the eigenvector associated with the maximum eigenvalue, and the Y axis is obtained as the cross product of the Z axis and the X axis. Each eigenvector is thus re-oriented, yielding the axes $(x, y, z)$.
Thus, we can compute $P_M'$ through the transformation matrix $T_{M1}$ as in Equation (12), whose rotation part is formed by the re-oriented eigenvectors and whose translation part moves $\bar{p}$ to the origin. Similarly, $P_S'$ is computed through the transformation matrix $T_{S1}$ as in Equation (13).
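A minimal sketch of this step (variable and function names are illustrative, not from the paper), using an eigen-decomposition of the covariance matrix followed by the sign disambiguation described above, could look like:

```python
import numpy as np

def principal_direction_transform(P):
    """Return (T, P_aligned): a 4x4 transform built from the re-oriented
    principal axes of point cloud P (N x 3), and the transformed cloud."""
    p_mean = P.mean(axis=0)
    C = np.cov((P - p_mean).T)               # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)      # eigenvalues in ascending order
    x_axis = eigvecs[:, 0]                    # min-eigenvalue direction
    z_axis = eigvecs[:, 2]                    # max-eigenvalue direction

    def disambiguate(axis):
        # Keep the sign that points toward the majority of the points.
        side = (P - p_mean) @ axis
        return axis if np.sum(side >= 0) >= np.sum(side < 0) else -axis

    x_axis = disambiguate(x_axis)
    z_axis = disambiguate(z_axis)
    y_axis = np.cross(z_axis, x_axis)         # right-handed completion

    R = np.vstack([x_axis, y_axis, z_axis])   # rows are the new axes
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = -R @ p_mean                    # move the mean to the origin
    P_aligned = (P - p_mean) @ R.T
    return T, P_aligned
```

The same routine would be applied to the sensor cloud to obtain its own transform and aligned copy before the translation domain estimation below.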
3.2.2. Translation Domain Estimation
We estimate the translation domain by using $P_M'$ and $P_S'$. Firstly, the axis-aligned bounding boxes are computed separately for $P_M'$ and $P_S'$. Define $c_M$ as the center of the bounding box of $P_M'$ and $c_S$ as the center of the bounding box of $P_S'$. The origins of $P_M'$ and $P_S'$ are moved to the centers of the respective axis-aligned bounding boxes, generating new point clouds $P_M''$ and $P_S''$ through the transformation matrices $T_{M2}$ and $T_{S2}$ as in Equations (14) and (15), where $I$ is the identity matrix.
Define the lengths of the bounding box of $P_M''$ along the X, Y, and Z axes as $l_{Mx}$, $l_{My}$, $l_{Mz}$, and the corresponding lengths of $P_S''$ as $l_{Sx}$, $l_{Sy}$, $l_{Sz}$. The translation domain is then computed by Equation (16), where the resulting intervals represent the translation ranges along the X, Y, and Z axes and $\lambda$ is a compensation factor. Because the axis-aligned bounding box is not the minimum bounding box, the compensation factor $\lambda$ is introduced and its value is set to an empirical value such as 0.05 in Equation (17).
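As a sketch under the assumption (Equation (16) is not reproduced here) that the per-axis translation range is half the summed bounding-box extents of the two centered clouds, inflated by the compensation factor, the step could be implemented as:

```python
import numpy as np

def center_to_aabb(P):
    """Translate point cloud P (N x 3) so that the center of its axis-aligned
    bounding box lies at the origin. Returns (T, P_centered, extents)."""
    lo, hi = P.min(axis=0), P.max(axis=0)
    c = 0.5 * (lo + hi)                      # bounding-box center
    T = np.eye(4)
    T[:3, 3] = -c
    return T, P - c, hi - lo

def translation_domain(ext_model, ext_sensor, lam=0.05):
    """Symmetric per-axis translation search range (assumed form of Eq. (16)).

    The compensation factor lam accounts for the axis-aligned box not being
    the minimum bounding box."""
    half = 0.5 * (ext_model + ext_sensor) * (1.0 + lam)
    return np.stack([-half, half], axis=1)   # shape (3, 2): [min, max] per axis

# Example usage on the principal-direction-aligned clouds:
# T_M2, P_M2, ext_M = center_to_aabb(P_M_prime)
# T_S2, P_S2, ext_S = center_to_aabb(P_S_prime)
# dom = translation_domain(ext_M, ext_S)
```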
3.2.3. Global Optimal Searching
A global optimal searching method is used to match $P_M''$ and $P_S''$. The branch-and-bound (BnB) scheme is combined with the Iterative Closest Point (ICP) algorithm to search the 3D motion space efficiently [31]. In this paper, the angle-axis representation is used, so the entire space of rotations about the X, Y, and Z axes can be compactly represented as a solid ball of radius $\pi$ in 3D space. We therefore set the rotation domain to the cube $[-\pi, \pi]^3$ that encloses the $\pi$-ball, and the translation domain to the ranges of Equation (16).
The searching process is the same as in [31] and is summarized as follows: BnB is used to search the space, and whenever a better solution is found, ICP is called to refine the objective function value; the ICP result is then used as an updated upper bound to continue the BnB search until convergence. During the BnB search, an octree data structure is used and the process is repeated.
We define the matrix $T_g$ as the global searching result, so the initial pose matrix is obtained by Equation (18), which composes $T_g$ with the principal direction and bounding-box centering transformations. The 6DOF relative pose parameters are then extracted from this initial pose matrix, which represents the transformation from the target model frame to the sensor frame.
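The exact composition in Equation (18) is not reproduced here; a plausible form, assuming the model cloud was normalized by its principal-direction and centering transforms ($T_{M1}$, $T_{M2}$), the sensor cloud by $T_{S1}$, $T_{S2}$, and $T_g$ aligns the two normalized clouds, is:

```python
import numpy as np

def compose_initial_pose(T_S1, T_S2, T_g, T_M2, T_M1):
    """Assumed composition of the initial pose (model frame -> sensor frame).

    If P_M'' = T_M2 @ T_M1 @ P_M, P_S'' = T_S2 @ T_S1 @ P_S, and
    P_S'' ~ T_g @ P_M'', then P_S ~ inv(T_S2 @ T_S1) @ T_g @ T_M2 @ T_M1 @ P_M."""
    return np.linalg.inv(T_S2 @ T_S1) @ T_g @ T_M2 @ T_M1
```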
3.3. Pose Tracking
After the initial pose is known, we can execute the pose tracking process to generate a continuous pose output from the sensor point cloud data at the sensor frame rate.
If the previous pose of the satellite is known, the problem of estimating the current pose can be simplified by restricting the search to solutions that are close to the previous pose. In this paper, the Iterative Closest Point (ICP) algorithm is used for this task to align the current sensor point cloud data with the model point cloud data.
Assuming that the previous transformation matrix is $T_{k-1}$ and the current sensor point cloud data is $P_S^k$, the process is as follows. Firstly, $P_S^k$ is transformed by the matrix $T_{k-1}$; then the converted sensor point cloud is aligned with the model point cloud data using the ICP algorithm, and the current transformation matrix $T_k$ is obtained. The 6DOF relative pose parameters are then extracted from $T_k$.
Specifically, in this work, the ICP error is the mean squared distance of the corresponding points between the two point clouds. The ICP algorithm is stopped as soon as the variation of the ICP error between two subsequent iterations becomes less than $10^{-6}$ m$^2$. Moreover, a maximum of 20 iterations is set to prevent the ICP algorithm from taking too long.
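For illustration, a minimal point-to-point ICP step for the tracking loop (using SciPy's KD-tree for correspondences and an SVD-based alignment; the stopping thresholds mirror the values quoted above, but this is a sketch rather than the paper's implementation) could look like:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A, B):
    """Least-squares rigid transform mapping points A onto points B (both N x 3)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def track_pose(P_sensor, P_model, T_prev, max_iter=20, tol=1e-6):
    """Refine the previous pose T_prev (4x4, sensor points -> model frame)
    by aligning the current sensor cloud to the model cloud with ICP."""
    tree = cKDTree(P_model)
    T = T_prev.copy()
    prev_err = np.inf
    for _ in range(max_iter):
        src = (T[:3, :3] @ P_sensor.T).T + T[:3, 3]   # sensor cloud in model frame
        dist, idx = tree.query(src)                   # nearest model point per sensor point
        err = np.mean(dist ** 2)                      # mean squared distance (ICP error)
        if abs(prev_err - err) < tol:                 # stop when the error stops improving
            break
        prev_err = err
        dT = best_fit_transform(src, P_model[idx])    # incremental correction
        T = dT @ T
    return T, err
```

The inverse of the returned matrix gives the model-to-sensor pose if that is the convention used downstream.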
6. Conclusions
A relative pose estimation method for satellites in close range is proposed, which uses the known target model and the point cloud data generated by a flash LIDAR sensor. The method estimates the relative pose directly from the dense point cloud data and can deal with large initial pose differences and rapid pose changes effectively. There is no need for cooperative markers on the target satellite or for feature detection and feature tracking. The simulation system is designed to generate simulated sensor point cloud data and ground-truth pose values simultaneously under various motion conditions. It therefore allows extensive performance simulations of the pose estimation method and testing of a specific sensor prior to field testing, saving cost and providing performance metrics for the pose estimation algorithms under evaluation. The numerical simulation results indicate that the proposed pose estimation method is accurate and efficient. In addition, a field experiment with the hardware system was conducted in order to test the performance on the ground.
The flash LIDAR sensor is a promising technology for space applications due to its unique combination of advantages (low power, high frame rate, low mass, robustness), and it provides an alternative for future close-range relative navigation tasks. Regarding future research, improvements will be considered in the following aspects: (1) a point cloud filtering method will be designed and adopted to reduce the influence of noise and artifacts in field experiments; (2) other high performance sensors will be modeled and tested using the proposed pose estimation method and the simulation system.