Article

Image-Based Adaptive Staring Attitude Control for Multiple Ground Targets Using a Miniaturized Video Satellite

College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(16), 3974; https://doi.org/10.3390/rs14163974
Submission received: 12 July 2022 / Revised: 10 August 2022 / Accepted: 13 August 2022 / Published: 16 August 2022
(This article belongs to the Special Issue CubeSats Applications and Technology)

Abstract

A miniaturized video satellite can observe ground targets by recording real-time video clips in staring control mode, which gives it a unique advantage over traditional remote sensing techniques. To further extend the applications of video satellites, a strategy for simultaneously observing a group of ground targets is needed. Moreover, an uncalibrated camera degrades the pointing accuracy and can lead to the failure of a multi-target observation task, so an adaptive attitude control method is required. Hence, to observe multiple ground targets using an onboard uncalibrated camera, this paper proposes an image-based adaptive staring attitude controller. First, a target-selection strategy is proposed to realize a more balanced staring observation of the target group. Second, an updating law is proposed to estimate the camera parameters according to the projection equations. Finally, an adaptive staring controller based on the estimated parameters is formulated, so that the center of mass of the ground targets on the image is driven towards its desired location, which is normally the image center. The stability of the proposed staring controller is proved using Barbalat's Lemma. The simulation results show that even though the camera parameters are uncertain, the adaptive control method effectively achieves the staring observation of multiple ground targets by keeping their midpoint at the image center.


1. Introduction

Conventional remote sensing satellites tend to conduct Earth observation in a push-broom or push-frame manner, while emerging video satellites are capable of capturing real-time continuous images. To lower the cost and accelerate the development schedule, these video satellites normally adopt commercial off-the-shelf (COTS) components and are designed to be miniaturized. Therefore, microsatellites (e.g., Skybox [1], LAPAN-TUBSAT [2], Tiantuo-2 [3] and so on) and CubeSats (e.g., HiREV [4] and the Hera Constellation [5]) stand out in these video-based Earth observation missions. Using a staring attitude controller, a video satellite can constantly orient the space-borne camera towards the ground targets for a period of time while the satellite flies over the region of interest (ROI). In this way, video satellites are qualified for applications such as disaster relief, ground surveillance, and so on. However, some challenges remain unsolved. First, staring control for a single target has been developed using different methods, but a dedicated controller for the scenario where multiple targets are to be observed has been missing. Second, in the presence of an uncalibrated camera, previous control methods face a decline in pointing accuracy. In this paper, we propose an image-based adaptive staring controller that takes advantage of image information to achieve multi-target observation.
Staring control methods have primarily been based on prior location information of the target, which we call position-based methods. Since the ground target is fixed on the Earth's surface, the location of the target to be observed can be acquired from its longitude, latitude, and height, and the target's velocity in the inertial frame is computed from the Earth's rotation. On the basis of the location, ref. [6] designs the desired attitude in both numerical and analytical ways and then proposes a PD-like staring controller to achieve the desired camera pointing. Ref. [7] proposes a sliding-mode controller using a similar position-based attitude formulation. Ref. [8] generates staring control commands analytically for Earth observation satellites and compares different position-based staring control methods. It is worth noting that these staring controllers, including other applications in [9,10,11,12,13,14,15,16,17], deal with only a single target and thus are not adequate to properly observe a group of targets when they are in the field of view (FOV) simultaneously.
Compared with a single-target observation task, observation of multiple targets is more complex. If the ground targets are distributed sparsely, the camera can capture only one target in one frame; if the targets are distributed densely, they can all be seen in the FOV and recorded in one frame simultaneously. For the former case, Cui [18,19] considers targets located along the sub-satellite track and adjusts the attitude of the satellite to stare at the ground targets consecutively. The observation is divided into two phases: the stable phase when observing one target and the transition phase when switching to the next target. Two different control methods are proposed: one [18] is a conventional PD-like controller with variable coefficients in the corresponding phases, and the other [19] is an optimal controller that schedules the observation strategy. Similarly, ref. [20] studies a scheduling problem for aerial targets. Although multiple ground targets are considered in these scenarios, the targets are distant from each other, so only one target appears on the image at a time, and the multi-target observation is essentially transformed into a sequence of single-target observations. For the latter case, how to handle an image that contains more than one target at a time remains unclear; therefore, we propose our solution in this paper.
In practice, the placement and the structure of the camera can differ from the ideal conditions because of vibration, thermally induced effects, and other complex factors during launch and in-orbit operation. With such an uncalibrated camera, regardless of whether multiple targets are considered, the aforementioned position-based methods cannot avoid pointing deviations because they depend solely on the location information of the target. Thus, an adaptive image-based staring controller is developed to overcome the drawbacks of those position-based methods.
To utilize the image information, the pixel coordinate of a target on the image must first be extracted. In fact, state-of-the-art image processing techniques [21,22,23,24,25,26] have been constantly progressing in target identification and tracking, and this paper focuses on image-based control rather than image processing, i.e., we do not consider the image processing time. In staring imaging applications, ref. [27] proposes an image-based control algorithm, but the image is only used to designate a vector to align with a ground vector, which is essentially still a position-based method. Ref. [28] realizes a staring controller whose attitude errors are derived from the image errors; the problem is that the camera in that method is assumed to be fully calibrated, which is difficult to guarantee in reality. Both uncalibrated and calibrated cameras have been widely studied in robotics [29,30], UAVs [31,32] and many other engineering areas [33,34,35,36], but the dynamics considered in these works differ from those of a spacecraft and are thus not applicable to staring attitude control.
To sum up, some of the aforementioned studies are fully based on position, some only observe a single target, and some neglect the uncertainties of an uncalibrated camera. This paper deals with the situation where multiple targets appear on the image and the video satellite equipped with an uncalibrated camera is supposed to stare at these targets. First, we propose a selection strategy to designate two targets as representatives, and our controller is based on the two representatives rather than the whole set of targets. Second, we propose an image-based adaptive staring controller using parameter estimation, so that, even though the camera is uncalibrated, we can still accomplish the staring observation.
The paper is organized as follows. In Section 2, the projection model of a pin-hole camera is established. In Section 3, the multi-target observation problem is formulated based on the image-based kinematics. In Section 4, the selection strategy of the targets is introduced, and then we propose the adaptive controller, including the definition and estimation of parameters. Section 5 gives the simulation samples of the selection strategy and controller respectively. Conclusions are drawn in Section 6.

2. Camera Preliminaries

The ground target is projected onto the image plane and then recorded as pixels. This projection process is dependent on the camera’s internal structure, the camera’s external placement on the satellite, and the relative position between the satellite and the target. In this section, we analyze the parameters of a pin-hole camera that affect the projection by establishing the projection model from the target’s position in the inertial frame to its corresponding pixel coordinate on the image.

2.1. The Intrinsic Camera Model

The intrinsic camera model reflects the projection of the target point onto the image plane, which is essentially a pinhole model, as shown in Figure 1. The focal length is denoted as $f$. $O$-$UV$ is the 2D pixel frame in the normalized image plane. $O_c X_c Y_c Z_c$ is the camera coordinate system, where the axes $X_c$ and $Y_c$ are parallel to $U$ and $V$ respectively and $Z_c$ is perpendicular to the image plane, with the point of intersection located at the center $(u_0, v_0)$. The position vector of the target is ${}^c R_{cT} = [x_c, y_c, z_c]^T$, where the superscript $c$ indicates that ${}^c R_{cT}$ is expressed in the camera frame, and the corresponding projection pixel coordinate is $(u, v)$. Suppose the pixel size is $d_x \times d_y$; then we have the following geometrical relation:
$$\frac{z_c}{f} = \frac{x_c}{(u - u_0)\, d_x} = \frac{y_c}{(v - v_0)\, d_y}$$
We can reorganize Equation (1) in the form of
$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \underbrace{\begin{bmatrix} f/d_x & 0 & u_0 & 0 \\ 0 & f/d_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}}_{\Pi} \begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \Pi \begin{bmatrix} {}^c R_{cT} \\ 1 \end{bmatrix}$$
where the matrix $\Pi$ contains the intrinsic camera parameters.
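To make the intrinsic model concrete, the short sketch below projects a camera-frame point through Equation (2). It is an illustrative example only: the focal length, pixel size, and principal point are taken from the theoretical column of Table 2, and NumPy is assumed.

```python
import numpy as np

# Intrinsic projection of Equation (2): pixel = (1/z_c) * Pi * [x_c, y_c, z_c, 1]^T
# Placeholder values: f = 1 m, dx = dy = 8.33e-6 m, principal point (376, 291).
f, dx, dy = 1.0, 8.33e-6, 8.33e-6
u0, v0 = 376.0, 291.0

Pi = np.array([[f / dx, 0.0,    u0, 0.0],
               [0.0,    f / dy, v0, 0.0],
               [0.0,    0.0,    1.0, 0.0]])

def project_camera_point(R_cT):
    """Project a target position expressed in the camera frame onto the image plane."""
    uvw = Pi @ np.append(R_cT, 1.0)     # homogeneous pixel coordinates scaled by z_c
    return uvw[:2] / uvw[2]             # divide by the depth z_c

# Example: a point roughly 500 km in front of the camera, slightly off the optical axis
print(project_camera_point(np.array([50.0, -30.0, 500e3])))
```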

2.2. The Extrinsic Camera Model

The position and attitude of the camera frame $O_c X_c Y_c Z_c$ with respect to the body frame are displayed in Figure 2. ${}^c R_{bc}$ represents the position of $O_c$ relative to the body origin, expressed in the camera frame. ${}^b R_{bT}$ is the target's position in the body frame. $M_{bc}$ represents the rotation matrix from the body frame to the camera frame. We define the homogeneous transform matrix from the body frame to the camera frame, $T \in \mathbb{R}^{4\times4}$:
$$T = \begin{bmatrix} M_{bc} & {}^c R_{bc} \\ 0_{1\times3} & 1 \end{bmatrix}$$
Then we have
$$\begin{bmatrix} {}^c R_{cT} \\ 1 \end{bmatrix} = T \begin{bmatrix} {}^b R_{bT} \\ 1 \end{bmatrix}$$
According to Equations (2) and (4), the target's position in the satellite body frame can be transformed into the pixel coordinate by
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{z_c}\, \Pi \cdot T \cdot \begin{bmatrix} {}^b R_{bT} \\ 1 \end{bmatrix} = \frac{1}{z_c}\, N \begin{bmatrix} {}^b R_{bT} \\ 1 \end{bmatrix}$$
where the projection matrix $N = \Pi \cdot T \in \mathbb{R}^{3\times4}$ contains all the camera-related parameters. In the remainder of the article, different combinations of the elements of $N$ appear in different mathematical expressions. For simplicity, we organize the 12 elements of $N$ into submatrices. Let $n_{ij}$ ($i = 1, 2, 3$; $j = 1, 2, 3, 4$) denote the elements of the projection matrix $N$. The first two rows of $N$ form the submatrix $P$, and the third row is the submatrix $n_3^T$. We further define the first three columns of $P$ as $P_3$ and the first three columns of $n_3^T$ as $n_{33}^T$. Then the submatrices consisting of the elements of $N$ are
$$P = \begin{bmatrix} n_{11} & n_{12} & n_{13} & n_{14} \\ n_{21} & n_{22} & n_{23} & n_{24} \end{bmatrix}, \quad P_3 = \begin{bmatrix} n_{11} & n_{12} & n_{13} \\ n_{21} & n_{22} & n_{23} \end{bmatrix}, \quad n_3^T = \begin{bmatrix} n_{31} & n_{32} & n_{33} & n_{34} \end{bmatrix}, \quad n_{33}^T = \begin{bmatrix} n_{31} & n_{32} & n_{33} \end{bmatrix}, \quad N = \begin{bmatrix} P \\ n_3^T \end{bmatrix}$$

2.3. Staring Observation for a Single Ground Target

In Figure 3, $O_e X_e Y_e Z_e$ is the Earth-centered inertial (ECI) frame. Denote the rotation matrix from the ECI frame to the body frame as $M_{ib}$. ${}^i R_{eb}$ represents the position vector from the Earth center to the satellite, expressed in the ECI frame. ${}^i R_{eT}$ is the position vector of the target in the ECI frame. Define the homogeneous transform matrix from the inertial frame to the body frame, $T_h$:
$$T_h = \begin{bmatrix} M_{ib} & -M_{ib}\,{}^i R_{eb} \\ 0_{1\times3} & 1 \end{bmatrix}$$
Then the following equation is obtained:
$$\begin{bmatrix} {}^b R_{bT} \\ 1 \end{bmatrix} = T_h \begin{bmatrix} {}^i R_{eT} \\ 1 \end{bmatrix}$$
Substituting Equation (8) into Equation (5), we obtain the projection model:
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{z_c}\, N \cdot T_h \cdot \begin{bmatrix} {}^i R_{eT} \\ 1 \end{bmatrix}$$
Equation (9) shows the mapping between the position of the target in the ECI frame and its pixel coordinate on the image. We define the pixel coordinate as $y(t) = [u(t)\ v(t)]^T$, thus
$$y = \frac{1}{z_c}\, P \cdot T_h \cdot \begin{bmatrix} {}^i R_{eT} \\ 1 \end{bmatrix}, \qquad z_c = n_3^T \cdot T_h \cdot \begin{bmatrix} {}^i R_{eT} \\ 1 \end{bmatrix}$$
where $P$ and $n_3^T$ are defined in Equation (6). For an uncalibrated camera, the camera parameters $P$ and $n_3^T$ are uncertain; therefore, estimation is needed.
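As a companion to Equations (3), (5), (7) and (10), the following sketch assembles the projection matrix $N = \Pi \cdot T$ and the homogeneous transform $T_h$, and maps an ECI target position to a pixel coordinate. This is a minimal illustration under assumed inputs (the rotation matrices, offsets, and intrinsic values are placeholders), not the paper's flight code.

```python
import numpy as np

# Intrinsic matrix Pi as in Equation (2); placeholder values as in the previous sketch.
f, dx, dy, u0, v0 = 1.0, 8.33e-6, 8.33e-6, 376.0, 291.0
Pi = np.array([[f / dx, 0.0,    u0, 0.0],
               [0.0,    f / dy, v0, 0.0],
               [0.0,    0.0,    1.0, 0.0]])

def projection_matrices(M_bc, R_bc_c, M_ib, R_eb_i):
    """Build N = Pi * T (Equations (3) and (5)) and T_h (Equation (7)) from assumed inputs."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = M_bc, R_bc_c
    N = Pi @ T
    T_h = np.eye(4)
    T_h[:3, :3], T_h[:3, 3] = M_ib, -M_ib @ R_eb_i
    return N, T_h

def pixel_of_target(N, T_h, R_eT_i):
    """Equation (10): y = (1/z_c) P T_h [R_eT; 1], with z_c = n_3^T T_h [R_eT; 1]."""
    P, n3 = N[:2, :], N[2, :]
    R_h = np.append(R_eT_i, 1.0)
    z_c = n3 @ T_h @ R_h
    return (P @ T_h @ R_h) / z_c, z_c
```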

3. Problem Formulation

This section first introduces the way to simplify the multi-target observation problem and then establishes the kinematics and dynamics of attitude motion in the form of pixel coordinates. In the end, the problem of a multi-target staring observation is formulated.

3.1. Multi-Target Observation

Assuming all the targets are contained in the FOV at the same time and hence can be detected on the image frame, we propose a strategy to observe the entirety of the targets properly. In the example in Figure 4a, the rectangle denotes the image borders, inside which there are five target points. For single-target observation, the staring objective is to control the only target to the image center. For a multi-target case, however, considering only a single target may lead to an unexpected result. Take Figure 4b as an example: point 1 is moved to the image center, but point 5 is lost from the FOV, and points 3 and 4 are far from the image center, which is not preferable either.
In light of this, we would prefer all the targets to be similarly distributed around the image center. One possible solution is to move their center of mass to the image center. Without loss of generality, we assume every point has the same weight. Hence, for $n$ targets whose pixel coordinates are denoted as $y_i$, $i = 1, \dots, n$, the center of mass is given by
$$\eta(t) = \frac{1}{n} \sum_{i=1}^{n} y_i$$
The control objective is then to move $\eta(t)$ to the image center. However, a new problem arises when most target points are densely distributed in a small area while a few other points are far from this area. In this situation, $\eta(t)$ is almost the center of the dense points, and this solution finally places the dense points around the image center, leaving the other points very far from it.
To realize an overall more balanced observation, we propose a selection strategy, which first classifies the target group into two clusters and then selects one target from each cluster, i.e., two targets in total. The center of mass $\eta(t)$ is replaced by the midpoint of these two targets as our controlled variable. In this way, the two clusters are equally important and therefore have a similar distance from the image center. The details of this strategy will be further explained later.

3.2. Attitude Kinematics and Dynamics

The satellite in our study is regarded as a rigid body. Assume $r$ is the Euler axis of the rotation from the ECI frame to the body frame and $\alpha$ is the rotation angle; then the attitude can be described by a quaternion $q = [q_0\ q_v^T]^T$, which is defined as
$$q = \begin{bmatrix} \cos(\alpha/2) \\ r \sin(\alpha/2) \end{bmatrix} = \begin{bmatrix} q_0 \\ q_v \end{bmatrix}$$
where $q_0$ is the scalar part and $q_v$ is the vector part of the quaternion, and the constraint $q_0^2 + q_v^T q_v = 1$ always holds. Let $\omega(t)$ be the angular velocity of the satellite with respect to the ECI frame, expressed in the body frame; then the attitude kinematics are given by
$$\dot q_0 = -\frac{1}{2} q_v^T \omega, \qquad \dot q_v = \frac{1}{2}\big(q_0 E_3 + \mathrm{sk}(q_v)\big)\omega$$
where $E_3$ is the $3\times3$ identity matrix. For any 3-dimensional vector $x = [x_1, x_2, x_3]^T$, the operator $\mathrm{sk}(\cdot)$ is defined as
$$\mathrm{sk}(x) = \begin{bmatrix} 0 & -x_3 & x_2 \\ x_3 & 0 & -x_1 \\ -x_2 & x_1 & 0 \end{bmatrix}$$
The attitude dynamics of a rigid satellite are given by
$$J\dot\omega(t) = -\omega(t) \times J\omega(t) + U(t)$$
where $J$ represents the inertia matrix of the satellite and $U(t)$ is the attitude control torque.
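For reference, a minimal propagation sketch of the kinematics (13) and dynamics (15) is shown below. The inertia matrix is only a rough diagonal approximation of Table 6, the torque is set to zero, and the Euler integration step is an illustrative choice.

```python
import numpy as np

def sk(x):
    """Skew-symmetric operator of Equation (14)."""
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

def attitude_step(q, w, U, J, dt):
    """One Euler-integration step of the kinematics (13) and dynamics (15)."""
    q0, qv = q[0], q[1:]
    dq0 = -0.5 * qv @ w
    dqv = 0.5 * (q0 * np.eye(3) + sk(qv)) @ w
    dw = np.linalg.solve(J, -np.cross(w, J @ w) + U)
    q_new = q + dt * np.concatenate(([dq0], dqv))
    q_new /= np.linalg.norm(q_new)          # re-normalize the quaternion
    return q_new, w + dt * dw

# Rough diagonal approximation of the inertia in Table 6, zero torque, 0.1 s step
J = np.diag([2.2, 3.3, 2.2])
q, w = np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.0, 1e-3, 0.0])
q, w = attitude_step(q, w, np.zeros(3), J, 0.1)
```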

3.3. Projection Kinematics for a Single Point

Though the attitude of a satellite can be expressed in the form of a quaternion, our image-based method uses the error of the pixel coordinates to represent the attitude error, instead of using an error quaternion as a conventional position-based method does. Therefore, we need to figure out how the pixel coordinate varies with the angular velocity. Differentiating the depth $z_c(t)$ in Equation (10), we get
$$\begin{aligned}
\dot z_c(t) &= n_3^T \begin{bmatrix} \mathrm{sk}\big(M_{ib}(t)\,{}^i R_{eT}(t) - M_{ib}(t)\,{}^i R_{eb}(t)\big) & -M_{ib}(t) \\ 0_{1\times3} & 0_{1\times3} \end{bmatrix} \begin{bmatrix} \omega(t) \\ {}^i V_{eb}(t) \end{bmatrix} + n_3^T T_h(t) \begin{bmatrix} {}^i V_{eT}(t) \\ 0 \end{bmatrix} \\
&= n_{33}^T\, \mathrm{sk}\big(M_{ib}(t)\,{}^i R_{eT}(t) - M_{ib}(t)\,{}^i R_{eb}(t)\big)\,\omega(t) + n_{33}^T M_{ib}(t)\big({}^i V_{eT}(t) - {}^i V_{eb}(t)\big)
\end{aligned}$$
where ${}^i V_{eT}$ is the velocity of the ground target and ${}^i V_{eb}$ is the velocity of the satellite. Similarly, the derivative of the image coordinate is given by
$$\begin{aligned}
\dot y(t) &= \frac{1}{z_c(t)}\, P \cdot \dot T_h(t) \cdot \begin{bmatrix} {}^i R_{eT}(t) \\ 1 \end{bmatrix} - \frac{\dot z_c(t)}{z_c(t)}\, y(t) + \frac{1}{z_c(t)}\, P \cdot T_h(t) \cdot \begin{bmatrix} {}^i V_{eT}(t) \\ 0 \end{bmatrix} \\
&= \frac{1}{z_c(t)} \Big[ \big(P_3 - y(t)\, n_{33}^T\big)\, \mathrm{sk}\big(M_{ib}(t)\,{}^i R_{eT}(t) - M_{ib}(t)\,{}^i R_{eb}(t)\big)\,\omega(t) + \big(P_3 - y(t)\, n_{33}^T\big) M_{ib}(t)\big({}^i V_{eT}(t) - {}^i V_{eb}(t)\big) \Big]
\end{aligned}$$
For simplicity, we define the matrices
$$\begin{aligned}
a(t) &= n_{33}^T\, \mathrm{sk}\big(M_{ib}(t)\,{}^i R_{eT}(t) - M_{ib}(t)\,{}^i R_{eb}(t)\big), & a_v(t) &= n_{33}^T M_{ib}(t)\big({}^i V_{eT}(t) - {}^i V_{eb}(t)\big) \\
A(t) &= \big(P_3 - y(t)\, n_{33}^T\big)\, \mathrm{sk}\big(M_{ib}(t)\,{}^i R_{eT}(t) - M_{ib}(t)\,{}^i R_{eb}(t)\big), & A_v(t) &= \big(P_3 - y(t)\, n_{33}^T\big) M_{ib}(t)\big({}^i V_{eT}(t) - {}^i V_{eb}(t)\big)
\end{aligned}$$
Then Equations (16) and (17) can be rewritten as
$$\dot z_c(t) = a(t)\,\omega(t) + a_v(t), \qquad \dot y(t) = \frac{1}{z_c(t)}\big(A(t)\,\omega(t) + A_v(t)\big)$$
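The matrices of Equation (18) translate directly into code. The sketch below assumes the true projection matrix $N$ and the inertial positions and velocities are known, which holds only for a calibrated camera; it is meant to illustrate the structure of Equation (19), not the adaptive scheme that follows.

```python
import numpy as np

def sk(x):
    """Skew-symmetric operator of Equation (14)."""
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

def image_kinematics(N, y, M_ib, R_eT_i, R_eb_i, V_eT_i, V_eb_i):
    """Matrices a, a_v, A, A_v of Equation (18) for one target."""
    P3, n33 = N[:2, :3], N[2, :3]
    rel = M_ib @ R_eT_i - M_ib @ R_eb_i        # relative position rotated into the body frame
    vel = M_ib @ (V_eT_i - V_eb_i)             # relative velocity in the body frame
    a, a_v = n33 @ sk(rel), n33 @ vel
    A = (P3 - np.outer(y, n33)) @ sk(rel)
    A_v = (P3 - np.outer(y, n33)) @ vel
    return a, a_v, A, A_v

def image_rates(z_c, omega, a, a_v, A, A_v):
    """Equation (19): depth rate and pixel-coordinate rate."""
    return a @ omega + a_v, (A @ omega + A_v) / z_c
```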

3.4. Projection Kinematics for Multiple Points

As we select two targets as the representatives of the entire group, the kinematics of the two target points should be derived. The center of mass of two targets is also their midpoint, which is
$$\eta(t) = \frac{1}{2}\big(y_1(t) + y_2(t)\big)$$
where $y_1(t)$ and $y_2(t)$ are the pixel coordinates of the two points. The subscripts 1 and 2 are used to represent the two selected points in the remainder of this paper. Then we have
$$\frac{d\big(z_{c1}(t)\, z_{c2}(t)\big)}{dt} = \big(z_{c1}(t)\, a_2(t) + z_{c2}(t)\, a_1(t)\big)\,\omega(t) + z_{c1}(t)\, a_{v2}(t) + z_{c2}(t)\, a_{v1}(t) = h(t)\,\omega(t) + h_v(t)$$
$$\dot\eta(t) = \frac{1}{z_{c1}(t)\, z_{c2}(t)} \left[ \frac{z_{c2}(t)\, A_1(t) + z_{c1}(t)\, A_2(t)}{2}\,\omega(t) + \frac{z_{c2}(t)\, A_{v1}(t) + z_{c1}(t)\, A_{v2}(t)}{2} \right] = \frac{1}{z_{c1}(t)\, z_{c2}(t)} \big(H(t)\,\omega(t) + H_v(t)\big)$$
where
$$H(t) = \frac{z_{c2}(t)\, A_1(t) + z_{c1}(t)\, A_2(t)}{2}, \quad H_v(t) = \frac{z_{c2}(t)\, A_{v1}(t) + z_{c1}(t)\, A_{v2}(t)}{2}, \quad h(t) = z_{c1}(t)\, a_2(t) + z_{c2}(t)\, a_1(t), \quad h_v(t) = z_{c1}(t)\, a_{v2}(t) + z_{c2}(t)\, a_{v1}(t)$$
Equations (21) and (22) describe the kinematics of the midpoint of the two targets. The way to select the two targets will be introduced in the next section.

3.5. Control Objective

At the initial time, a group of ground targets is detected on the image through the space-borne camera. To acquire a better view of the entirety of the targets, we expect the center of mass of the target group to be located at the image center. In pursuit of a balanced observation, the midpoint $\eta(t)$ of two targets is picked out to best represent the group's center of mass according to our proposed selection strategy. With a proper estimation of the camera-related parameters, the objective of the proposed adaptive controller is to move $\eta(t)$ to the image center, which is denoted as $\eta_d = [u_0\ v_0]^T$.

4. Controller Design

In this section, the selection strategy is first introduced to find the two most suitable targets, which are then used by the controller to achieve the control objective. As the camera is uncalibrated, the elements of the projection matrix $N$ are unknown, hence the parameters to be estimated must be defined. Because the ground point is constantly moving along with the Earth's rotation, it is difficult to determine the desired angular velocity once the target projection has reached $\eta_d$. Thus, a reference attitude trajectory is proposed in this section to avoid designing the desired angular velocity. The adaptive staring controller is then designed based on the estimated parameters and the reference attitude.

4.1. Target-Selection Strategy Using a Clustering Method

Assuming there are $n$ targets on the image, with pixel coordinates denoted as $y_i$, $i = 1, \dots, n$, we propose the following strategy to select the two points that best serve our multi-target observation.
Step 1: We adopt a clustering algorithm, i.e., k-means [37], to classify all the targets into two clusters, $C_1$ and $C_2$. As a result, every point belongs to one of the two clusters and has the least distance to the center of mass of its corresponding cluster. Denote $c_1$ and $c_2$ as the centers of mass of $C_1$ and $C_2$ respectively; then for any $y_j \in C_1$, we have
$$\| y_j - c_1 \| \le \| y_j - c_2 \|$$
For any $y_k \in C_2$, we have
$$\| y_k - c_2 \| \le \| y_k - c_1 \|$$
Step 2: We walk through all the target pairs consisting of one target from each of the two clusters. The target pair whose center of mass is closest to the center of mass of all the targets is selected. Denote the selected targets as $y_1^* \in C_1$ and $y_2^* \in C_2$; they satisfy
$$\left\| \frac{y_1^* + y_2^*}{2} - \frac{1}{n}\sum_{i=1}^{n} y_i \right\| = \min_{y_j \in C_1,\, y_k \in C_2} \left\| \frac{y_j + y_k}{2} - \frac{1}{n}\sum_{i=1}^{n} y_i \right\|$$
Step 1 guarantees the balance of observation emphasis between the two clusters, and Step 2 finds two proper targets whose midpoint best resembles the center of mass of the entire target group. Based on the two selected targets, the following adaptive controller is designed to achieve the staring observation for multiple targets.
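The selection strategy can be sketched in a few lines. The example below uses scikit-learn's KMeans for Step 1 and a brute-force pair search for Step 2; the library choice and the Case 1 coordinates from Table 1 are merely illustrative.

```python
import itertools
import numpy as np
from sklearn.cluster import KMeans

def select_two_targets(points):
    """Target-selection strategy of Section 4.1 (illustrative sketch).
    points: (n, 2) array of pixel coordinates of the detected targets."""
    points = np.asarray(points, dtype=float)
    overall_com = points.mean(axis=0)                               # center of mass of all targets
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(points)    # Step 1: two clusters
    c1, c2 = points[labels == 0], points[labels == 1]
    # Step 2: pair whose midpoint is closest to the overall center of mass
    best = min(itertools.product(c1, c2),
               key=lambda pair: np.linalg.norm((pair[0] + pair[1]) / 2 - overall_com))
    return best

# Case 1 of Table 1
pts = [(60, 400), (100, 500), (200, 350), (220, 400), (150, 300)]
print(select_two_targets(pts))
```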

4.2. Parameter Definition

Due to the uncertainties of the uncalibrated camera, we have to properly define the camera parameters so that we can linearize and then estimate these parameters online. The parameter linearization is the basis of the parameter estimation, and the adaptive controller is then formulated using the estimated parameters. According to Equations (10) and (20), we have
$$2\, z_{c1}(t)\, z_{c2}(t)\, \eta(t) = z_{c2}(t)\, z_{c1}(t)\, y_1(t) + z_{c1}(t)\, z_{c2}(t)\, y_2(t) = z_{c2}(t)\, P \cdot T_h \cdot \begin{bmatrix} {}^i R_{eT1} \\ 1 \end{bmatrix} + z_{c1}(t)\, P \cdot T_h \cdot \begin{bmatrix} {}^i R_{eT2} \\ 1 \end{bmatrix}$$
Each element of the projection matrix $N$ appearing in the above equation is coupled with another element, i.e., they appear as products of two elements. We define these coupled products as the parameters $\theta$ to be estimated. More specifically, after Equation (27) is expanded, we find that $\theta$ consists of the products of $n_{34}$ with every element of $N$, of every element of $n_{33}^T$ with every element of $n_{33}^T$, and of every element of $n_{33}^T$ with every element of $P$. Hence $\theta$ contains 42 parameters. As we can infer, the number of defined parameters would increase rapidly if the center of mass of more than two targets were used; in other words, the parameter estimation process would become more complex and computationally demanding. Therefore, our target-selection strategy also reduces the computational burden.
The estimated values of the parameters are denoted as $\hat\theta(t)$, and the hat $\hat{\cdot}$ is used to denote estimated variables in the remainder of this paper.

4.3. Reference Attitude Trajectory

A PD-like controller requires the convergence of both the image error $\Delta\eta(t) = \eta(t) - \eta_d$ and the angular velocity error $\omega_e(t) = \omega(t) - \omega_d(t)$. However, the time-varying desired angular velocity $\omega_d(t)$ is hard to design directly. To avoid using $\omega_d(t)$, we propose a reference attitude trajectory; the attitude of the satellite is then controlled to track this reference instead. Define a reference trajectory of the midpoint, $\eta_r(t)$, by
$$\dot\eta_r(t) = \dot\eta_d - \lambda\,\Delta\eta(t) = -\lambda\,\Delta\eta(t)$$
The reference angular velocity trajectory $\omega_r(t)$ is defined as
$$\omega_r(t) = \hat H^+(t)\left[-\hat z_{c1}(t)\,\hat z_{c2}(t)\,\lambda\,\Delta\eta(t) - \hat H_v(t)\right]$$
where $\hat H^+(t)$ is the pseudo-inverse of $\hat H(t)$. Define the tracking errors between the current motion and the reference trajectory as
$$\delta\dot\eta(t) = \dot\eta(t) - \dot\eta_r(t) = \Delta\dot\eta(t) + \lambda\,\Delta\eta(t), \qquad \delta\omega(t) = \omega(t) - \omega_r(t)$$
$\delta\dot\eta(t)$ and $\delta\omega(t)$ reflect the tracking error between the current attitude motion and the reference trajectory. It is worth noting that the convergence of $\delta\dot\eta(t)$ implies the convergence of $\Delta\eta(t)$, which follows directly from the definition of $\delta\dot\eta(t)$, so no proof is given here.
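A minimal sketch of Equation (29) is given below, assuming the estimated quantities $\hat H(t)$, $\hat H_v(t)$, $\hat z_{c1}(t)$ and $\hat z_{c2}(t)$ have already been formed from the current parameter estimate; the Moore–Penrose pseudo-inverse is used for $\hat H^+(t)$.

```python
import numpy as np

def reference_angular_velocity(H_hat, Hv_hat, zc1_hat, zc2_hat, eta, eta_d, lam):
    """Equation (29): reference angular velocity driving the midpoint toward eta_d.
    H_hat is 2x3, so its Moore-Penrose pseudo-inverse is used."""
    delta_eta = eta - eta_d
    return np.linalg.pinv(H_hat) @ (-zc1_hat * zc2_hat * lam * delta_eta - Hv_hat)
```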

4.4. Parameter Estimation

The estimation error is defined as $\Delta\theta(t) = \hat\theta(t) - \theta$. According to Equation (27), an estimated projection error $e(t)$, used to measure the performance of the parameter estimation, is given by
$$e(t) = 2\,\hat z_{c1}(t)\,\hat z_{c2}(t)\,\eta(t) - \hat z_{c1}(t)\,\hat P \cdot T_h \cdot \begin{bmatrix} {}^i R_{eT2} \\ 1 \end{bmatrix} - \hat z_{c2}(t)\,\hat P \cdot T_h \cdot \begin{bmatrix} {}^i R_{eT1} \\ 1 \end{bmatrix} = W(t)\,\Delta\theta(t)$$
The matrix $W(t)$ does not contain any camera parameters. Thanks to the proper definition of the parameters, $e(t)$ is linear with respect to $\Delta\theta(t)$, which enables us to estimate the parameters. Accordingly, we propose an updating law for the parameters $\hat\theta(t)$:
$$\dot{\hat\theta}(t) = -\Gamma^{-1}\left[Y^T(t)\,\delta\dot\eta(t) + W^T(t)\,K_1\,e(t)\right]$$
where the regressor matrix $Y(t)$ is given by
$$\Delta\theta^T(t)\,Y^T(t) = -\left[\big(z_{c1}(t)\,z_{c2}(t) - \hat z_{c1}(t)\,\hat z_{c2}(t)\big)\dot\eta_r(t) - \big(H_v(t) - \hat H_v(t)\big) - \big(H(t) - \hat H(t)\big)\omega(t)\right]^T K_2$$
$K_1$ and $K_2$ are diagonal positive-definite coefficient matrices, and $Y(t)$ does not contain any camera parameters. The updating law (32) contains two main parts: the first part is utilized in the stability proof, and the second is the negative gradient of $e(t)$, which updates the parameters in a direction that reduces $e(t)$.

4.5. Adaptive Staring Controller

The adaptive staring controller is given by
$$U(t) = \omega(t) \times J\omega(t) + J\dot\omega_r(t) - K_3\,\delta\omega(t) - \hat H^T(t)\,K_2\,\delta\dot\eta(t)$$
where $U(t)$ is the attitude control torque and $K_2$ and $K_3$ are diagonal positive-definite coefficient matrices. Define two non-negative functions
$$V_1(t) = \frac{1}{2}\,\delta\omega^T(t)\,J\,\delta\omega(t), \qquad V_2(t) = \frac{1}{2}\,\Delta\theta^T(t)\,\Gamma\,\Delta\theta(t)$$
Then we have the Lyapunov function $V(t) = V_1(t) + V_2(t)$. Differentiating $V_1(t)$:
$$\begin{aligned}
\dot V_1(t) &= \delta\omega^T(t)\,J\,\delta\dot\omega(t) = \delta\omega^T(t)\,J\big(\dot\omega(t) - \dot\omega_r(t)\big) \\
&= \delta\omega^T(t)\big(-\omega(t)\times J\omega(t) + U(t) - J\dot\omega_r(t)\big) \\
&= \delta\omega^T(t)\big(-K_3\,\delta\omega(t) - \hat H^T(t)\,K_2\,\delta\dot\eta(t)\big) \\
&= -\delta\omega^T(t)\,K_3\,\delta\omega(t) - \delta\omega^T(t)\,\hat H^T(t)\,K_2\,\delta\dot\eta(t)
\end{aligned}$$
Differentiating $V_2(t)$:
$$\dot V_2(t) = \Delta\theta^T(t)\,\Gamma\,\Delta\dot\theta(t) = -\Delta\theta^T(t)\left[Y^T(t)\,\delta\dot\eta(t) + W^T(t)\,K_1\,e(t)\right] = -\Delta\theta^T(t)\,Y^T(t)\,\delta\dot\eta(t) - e^T(t)\,K_1\,e(t)$$
To deal with the cross term in Equation (36), we rewrite $\omega^T(t)\hat H^T(t)$ and $\omega_r^T(t)\hat H^T(t)$:
$$\begin{aligned}
\omega^T(t)\,\hat H^T(t) &= \omega^T(t)\,H^T(t) + \omega^T(t)\big(\hat H^T(t) - H^T(t)\big) \\
&= \big[z_{c1}(t)\,z_{c2}(t)\,\dot\eta(t) - H_v(t)\big]^T + \omega^T(t)\big(\hat H^T(t) - H^T(t)\big) \\
&= \big[z_{c1}(t)\,z_{c2}(t)\,\dot\eta_r(t) - H_v(t) + z_{c1}(t)\,z_{c2}(t)\big(\dot\eta(t) - \dot\eta_r(t)\big)\big]^T + \omega^T(t)\big(\hat H^T(t) - H^T(t)\big) \\
&= \big[z_{c1}(t)\,z_{c2}(t)\,\dot\eta_r(t) - H_v(t) + z_{c1}(t)\,z_{c2}(t)\,\delta\dot\eta(t)\big]^T + \omega^T(t)\big(\hat H^T(t) - H^T(t)\big) \\
\omega_r^T(t)\,\hat H^T(t) &= \big[\hat z_{c1}(t)\,\hat z_{c2}(t)\,\dot\eta_r(t) - \hat H_v(t)\big]^T
\end{aligned}$$
Meanwhile, we have
$$\delta\omega^T(t)\,\hat H^T(t) = \omega^T(t)\,\hat H^T(t) - \omega_r^T(t)\,\hat H^T(t)$$
The term $\delta\omega^T(t)\,\hat H^T(t)\,K_2\,\delta\dot\eta(t)$ in Equation (36) can then be rewritten as
$$\begin{aligned}
\delta\omega^T(t)\,\hat H^T(t)\,K_2\,\delta\dot\eta(t) &= \big[\big(z_{c1}(t)\,z_{c2}(t) - \hat z_{c1}(t)\,\hat z_{c2}(t)\big)\dot\eta_r(t) - \big(H_v(t) - \hat H_v(t)\big) - \big(H(t) - \hat H(t)\big)\omega(t)\big]^T K_2\,\delta\dot\eta(t) \\
&\quad + z_{c1}(t)\,z_{c2}(t)\,\delta\dot\eta^T(t)\,K_2\,\delta\dot\eta(t) \\
&= z_{c1}(t)\,z_{c2}(t)\,\delta\dot\eta^T(t)\,K_2\,\delta\dot\eta(t) - \Delta\theta^T(t)\,Y^T(t)\,\delta\dot\eta(t)
\end{aligned}$$
Substituting Equations (38)–(40) into (36) and (37), the following inequality is obtained:
$$\dot V(t) = \dot V_1(t) + \dot V_2(t) = -\delta\omega^T(t)\,K_3\,\delta\omega(t) - z_{c1}(t)\,z_{c2}(t)\,\delta\dot\eta^T(t)\,K_2\,\delta\dot\eta(t) - e^T(t)\,K_1\,e(t) \le 0$$
We can conclude that $\omega(t)$, $e(t)$ and $\hat\theta(t)$ are all bounded, which implies that the control torque $U(t)$ is also bounded. According to the dynamics in Equation (15), $\dot\omega(t)$ is bounded. Considering the expressions of $\dot e(t)$ and $\dot{\hat\theta}(t)$, they are bounded too. Therefore, $\ddot V(t)$ is bounded. According to Barbalat's Lemma, the following convergence is obtained:
$$\lim_{t\to\infty}\delta\omega(t) = 0, \qquad \lim_{t\to\infty}\delta\dot\eta(t) = 0, \qquad \lim_{t\to\infty}e(t) = 0$$
which suggests that
$$\lim_{t\to\infty}\Delta\eta(t) = 0, \qquad \lim_{t\to\infty}\Delta\dot\eta(t) = 0$$
Hence the stability of the proposed adaptive staring controller is proved. Under the action of our controller, the midpoint of the two ground targets' projections on the image plane is moved to the image center.
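To summarize the controller and the updating law, the sketch below implements Equations (32) and (34) for one control cycle. It assumes the regressor-related matrices $Y(t)$ and $W(t)$, the estimate $\hat H(t)$ and the reference signals are computed elsewhere from the current estimate $\hat\theta(t)$; it is a schematic outline, not the authors' implementation.

```python
import numpy as np

def staring_control_torque(J, omega, omega_r, domega_r, H_hat, K2, K3, delta_eta_dot):
    """Adaptive staring control torque of Equation (34) (illustrative sketch)."""
    delta_omega = omega - omega_r
    return (np.cross(omega, J @ omega) + J @ domega_r
            - K3 @ delta_omega - H_hat.T @ K2 @ delta_eta_dot)

def update_parameters(theta_hat, Gamma, Y, W, K1, e, delta_eta_dot, dt):
    """One Euler step of the updating law, Equation (32)."""
    dtheta = -np.linalg.solve(Gamma, Y.T @ delta_eta_dot + W.T @ K1 @ e)
    return theta_hat + dt * dtheta
```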

5. Simulations

This section presents the simulation results of our proposed target selection method and adaptive staring controller. With an emphasis on the staring control, our simulations do not consider the image processing time and assume the pixel coordinates of the target point on the image are obtained in real-time.

5.1. Target Selection

Two cases are tested to assess the performance of the target selection method, each with a different number and distribution of target points on the image. Table 1 lists the pixel coordinates of all the target points. Five points located mainly in the upper-left area of the image are tested in Case 1, and seven points located mainly in the lower-right area are tested in Case 2.
The results of the proposed selection method are shown in Figure 5. In both cases, the targets are divided into two clusters according to their relative locations. Every point has the least distance to the center of mass of its corresponding cluster. From each cluster, one target, marked by a circle or rhombus in Figure 5, is selected to form a target pair. The midpoint of these two targets best resembles the center of mass of the entire target group.

5.2. Staring Control

Under ideal conditions, the camera parameters take their theoretical values, which are listed in Table 2. The operator $M_{321}(\cdot)$ denotes the rotation matrix of a 3-2-1 rotation sequence. However, the camera used on the satellite is uncalibrated, so the unknown real values of the camera parameters differ from the theoretical ones. The image comprises $752 \times 582$ pixels, with the center at $(376, 291)$.
At the initial time (12 July 2021 04:30:00 UTC), a group of ground targets appears on the image, and the proposed control is then conducted to accomplish the staring observation. The first step is to select two targets out of the entire set, and the second step is to control the midpoint of the two selected targets to the image center. Neglecting their heights, the geographic locations of the ground targets are shown in Table 3. The initial attitude is given in Table 4. Table 5, where the right ascension of the ascending node is abbreviated to RAAN, lists the initial orbital elements, indicating that an approximately 500 km sun-synchronous orbit is adopted.
The maximum control torque of each axis is bounded by $U_{\max}$ in Table 6. The inertia matrix $J$ of the satellite is also given in Table 6, and the proposed controller adopts the control parameters in Table 7, where $\mathrm{diag}(\cdot)$ represents a matrix with the listed entries placed on the diagonal. With the conditions listed in these tables, the simulation is run with a step size of 0.1 s in the control loop.
Before the controller gets involved, target selection is conducted, and the result is shown in Figure 6. At the initial time, five targets appear on the image, located mainly in the lower-left part. Two targets are picked out according to our selection strategy. Based on the two selected targets, the control torques are calculated and the attitude is adjusted accordingly; thus, the projections of the targets on the image also move. Figure 7 depicts the trajectories of the selected targets and their midpoint. The initial locations are marked by a circle and the final ones by a square. As our controller takes effect, the targets move along the trajectories and eventually converge to their final locations. The midpoint, as shown, is kept at the image center $(376, 291)$. The shape of the trajectory indicates that a target does not converge directly to its desired location in the initial stage; instead, it gradually adjusts its direction of movement and finally turns towards the final location. This is because there exists relative angular motion between the ground target and the optical axis of the camera under the initial conditions listed in Table 3, Table 4 and Table 5. In the initial stage, this initial relative motion is the dominant factor affecting the trajectory; as the controller gradually reorients the camera towards the targets, the trajectory eventually reaches the desired destination. Although our controller is built on the two selected targets, Figure 8 shows that the observation of all five targets is achieved at the same time, thanks to the proper selection of the two representatives. Comparing Figure 6 with Figure 8, we can see that the target points are all distributed around the image center with well-proportioned distances; therefore, we obtain an overall balanced observation of the whole group of targets in the field of view.
Figure 9 shows the estimated projection error $e(t)$ defined in Equation (31). Although the actual parameters are unknown, the proposed parameter estimation method (32) constantly updates the parameters along the negative gradient of $e(t)$, so $e(t)$ is reduced, which indicates that the estimated projection of the target on the image approaches the actual projection. Our adaptive controller is formed on the basis of these estimated parameters, so the constant updating of the parameters contributes to driving the projection towards the correct, desired location. The evolutions of the angular velocity and control torques are shown in Figure 10 and Figure 11 respectively. The angular velocity experiences a rapid acceleration in the starting phase, because the current angular motion needs to catch up with the desired motion so that the targets do not drift out of the field of view. Accordingly, control torques are produced to support this angular acceleration. Soon after the rapid adjustment, the angular velocity reaches a relatively stable phase. During this phase, the midpoint of the selected targets gradually converges to the image center, and the rotation of the satellite mainly follows the motion of the ground targets, which is essentially the Earth's rotation. Since no fast angular acceleration is needed in this phase, the control torque outputs also decrease to a very low level. Even though the targets have been moved to the desired locations during the stable phase, it is worth noting that the angular velocity and control torques are small but not zero. Since the relative motion between the satellite and the rotating Earth is always time-varying, the angular motion always needs to be adjusted to some extent; only in this way can the staring observation be maintained.

6. Conclusions

In light of the lack of a staring control method for multi-target observation scenarios, especially with an uncalibrated camera on a miniaturized video satellite, we propose an image-based adaptive staring controller. First, a selection strategy based on a clustering method is proposed, and two targets are picked out as representatives of the entire target group, which realizes a more balanced overall observation and also reduces the computational complexity. Second, the unknown camera parameters are estimated according to the proposed updating law, so that the estimated projection error decreases. Third, the image-based adaptive staring controller for multiple targets is established using the estimated parameters. The simulations demonstrate the effectiveness of the proposed method: in the test case, although the camera is uncalibrated, our controller accomplishes the staring observation of a group of targets on the image.
In this paper, the motion of ground targets is predictable due to Earth’s rotation. For targets with unpredictable motion, a dedicated staring controller remains to be developed in the future.

Author Contributions

Conceptualization, C.S.; data curation, C.S. and M.W.; formal analysis, C.S. and C.F.; funding acquisition, C.F.; investigation, C.S.; methodology, C.S., C.F. and M.W.; project administration, C.F.; resources, C.F.; software, C.S.; supervision, C.F.; visualization, C.S.; writing—original draft, C.S.; writing—review and editing, C.F. and C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant No. 11702321.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. d’Angelo, P.; Máttyus, G.; Reinartz, P. Skybox image and video product evaluation. Int. J. Image Data Fusion 2016, 7, 3–18. [Google Scholar] [CrossRef]
  2. Julzarika, A. Utilization of LAPAN Satellite (TUBSAT, A2, and A3) in supporting Indonesia’s potential as maritime center of the world. In Proceedings of the IOP Conference Series: Earth and Environmental Science; IOP Publishing: Philadelphia, PA, USA, 2017; Volume 54, p. 012097. [Google Scholar]
  3. Zhang, X.; Xiang, J.; Zhang, Y. Space object detection in video satellite images using motion information. Int. J. Aerosp. Eng. 2017, 2017, 1024529. [Google Scholar] [CrossRef]
  4. Cho, D.H.; Choi, W.S.; Kim, M.K.; Kim, J.H.; Sim, E.; Kim, H.D. High-resolution image and video CubeSat (HiREV): Development of space technology test platform using a low-cost CubeSat platform. Int. J. Aerosp. Eng. 2019, 2019, 8916416. [Google Scholar] [CrossRef]
  5. Poghosyan, A.; Golkar, A. CubeSat evolution: Analyzing CubeSat capabilities for conducting science missions. Prog. Aerosp. Sci. 2017, 88, 59–83. [Google Scholar] [CrossRef]
  6. Lian, Y.; Gao, Y.; Zeng, G. Staring imaging attitude control of small satellites. J. Guid. Control. Dyn. 2017, 40, 1278–1285. [Google Scholar] [CrossRef]
  7. Wu, S.; Sun, X.; Sun, Z.; Wu, X. Sliding-mode control for staring-mode spacecraft using a disturbance observer. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng. 2010, 224, 215–224. [Google Scholar] [CrossRef]
  8. Han, S.; Ahn, J.; Tahk, M.J. Analytical Staring Attitude Control Command Generation Method for Earth Observation Satellites. J. Guid. Control. Dyn. 2022, 45, 1–10. [Google Scholar] [CrossRef]
  9. Li, H.; Zhao, Y.; Li, B.; Li, G. Attitude Control of Staring-Imaging Satellite Using Permanent Magnet Momentum Exchange Sphere. In Proceedings of the 2019 22nd International Conference on Electrical Machines and Systems (ICEMS), Harbin, China, 11–14 August 2019; pp. 1–6. [Google Scholar]
  10. Li, C.; Geng, Y.; Guo, Y.; Han, P. Suboptimal Repointing Maneuver of a staring-mode spacecraft with one DOF for final attitude. Acta Astronaut. 2020, 175, 349–361. [Google Scholar] [CrossRef]
  11. Chen, X.; Ma, Y.; Geng, Y.; Wang, F.; Ye, D. Staring imaging attitude tracking control of agile small satellite. In Proceedings of the 2011 6th IEEE Conference on Industrial Electronics and Applications, Beijing, China, 21–23 June 2011; pp. 143–148. [Google Scholar]
  12. Li, P.; Dong, Y.; Li, H. Staring Imaging Real-Time Optimal Control Based on Neural Network. Int. J. Aerosp. Eng. 2020, 2020, 8822223. [Google Scholar] [CrossRef]
  13. Zhang, F.; Jin, L.; Rodrigo, G.A. An innovative satellite sunlight-reflection staring attitude control with angular velocity constraint. Aerosp. Sci. Technol. 2020, 103, 105905. [Google Scholar] [CrossRef]
  14. Lian, Y.; Meng, Y.; Zheng, W. Multi-reference Decentralized Cooperative Satellite Attitude Control for Ground-Target Staring. In Proceedings of the International Conference on Autonomous Unmanned Systems; Springer: Berlin/Heidelberg, Germany, 2021; pp. 2193–2202. [Google Scholar]
  15. Geng, Y.; Li, C.; Guo, Y.; Biggs, J.D. Hybrid robust and optimal control for pointing a staring-mode spacecraft. Aerosp. Sci. Technol. 2020, 105, 105959. [Google Scholar] [CrossRef]
  16. Song, C.; Fan, C.; Song, H.; Wang, M. Spacecraft Staring Attitude Control for Ground Targets Using an Uncalibrated Camera. Aerospace 2022, 9, 283. [Google Scholar] [CrossRef]
  17. Wu, Y.H.; Han, F.; Zheng, M.H.; Wang, F.; Hua, B.; Chen, Z.M.; Cheng, Y.H. Attitude tracking control for a space moving target with high dynamic performance using hybrid actuator. Aerosp. Sci. Technol. 2018, 78, 102–117. [Google Scholar] [CrossRef]
  18. Cui, K.K.; Xiang, J.H. Variable coefficients pd adaptive attitude control of video satellite for ground multi-object staring imaging. In Proceedings of the Electrical Engineering and Automation: Proceedings of the International Conference on Electrical Engineering and Automation (EEA2016); World Scientific: Singapore, 2017; pp. 738–750. [Google Scholar]
  19. Cui, K.; Xiang, J.; Zhang, Y. Mission planning optimization of video satellite for ground multi-object staring imaging. Adv. Space Res. 2018, 61, 1476–1489. [Google Scholar] [CrossRef]
  20. Yu, Y.; Hou, Q.; Zhang, J.; Zhang, W. Mission scheduling optimization of multi-optical satellites for multi-aerial targets staring surveillance. J. Frankl. Inst. 2020, 357, 8657–8677. [Google Scholar] [CrossRef]
  21. Said, Y.; Saidani, T.; Smach, F.; Atri, M.; Snoussi, H. Embedded real-time video processing system on FPGA. In Proceedings of the International Conference on Image and Signal Processing; Springer: Berlin/Heidelberg, Germany, 2012; pp. 85–92. [Google Scholar]
  22. Ghodhbani, R.; Horrigue, L.; Saidani, T.; Atri, M. Fast FPGA prototyping based real-time image and video processing with high-level synthesis. Int. J. Adv. Comput. Sci. Appl. 2020, 11, 108–116. [Google Scholar] [CrossRef]
  23. Qi, B.; Shi, H.; Zhuang, Y.; Chen, H.; Chen, L. On-board, real-time preprocessing system for optical remote-sensing imagery. Sensors 2018, 18, 1328. [Google Scholar] [CrossRef]
  24. Yu, S.; Yuanbo, Y.; He, X.; Lu, M.; Wang, P.; An, X.; Fang, X. On-Board Fast and Intelligent Perception of Ships With the “Jilin-1” Spectrum 01/02 Satellites. IEEE Access 2020, 8, 48005–48014. [Google Scholar] [CrossRef]
  25. Du, B.; Cai, S.; Wu, C. Object tracking in satellite videos based on a multiframe optical flow tracker. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 3043–3055. [Google Scholar] [CrossRef]
  26. Zhang, X.; Xiang, J. Moving object detection in video satellite image based on deep learning. In Proceedings of the LIDAR Imaging Detection and Target Recognition 2017; SPIE: Bellingham, WA, USA, 2017; Volume 10605, pp. 1149–1156. [Google Scholar]
  27. Zdešar, A.; Klančar, G.; Mušič, G.; Matko, D.; Škrjanc, I. Design of the image-based satellite attitude control algorithm. In Proceedings of the 2013 XXIV International Conference on Information, Communication and Automation Technologies (ICAT), Sarajevo, Bosnia and Herzegovina, 30 October–1 November 2013; pp. 1–8. [Google Scholar]
  28. Zhang, X.Y.; Xiang, J.H. Tracking imaging feedback attitude control of video satellite. In Proceedings of the Electrical Engineering and Automation: Proceedings of the International Conference on Electrical Engineering and Automation (EEA2016); World Scientific: Singapore, 2017; pp. 729–737. [Google Scholar]
  29. Liu, Y.H.; Wang, H.; Wang, C.; Lam, K.K. Uncalibrated visual servoing of robots using a depth-independent interaction matrix. IEEE Trans. Robot. 2006, 22, 804–817. [Google Scholar]
  30. Wang, H.; Liu, Y.H.; Zhou, D. Adaptive visual servoing using point and line features with an uncalibrated eye-in-hand camera. IEEE Trans. Robot. 2008, 24, 843–857. [Google Scholar] [CrossRef]
  31. Gonzalez-Garcia, A.; Miranda-Moya, A.; Castañeda, H. Robust visual tracking control based on adaptive sliding mode strategy: Quadrotor UAV-catamaran USV heterogeneous system. In Proceedings of the 2021 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 15–18 June 2021; pp. 666–672. [Google Scholar]
  32. Xie, H.; Lynch, A.F.; Low, K.H.; Mao, S. Adaptive output-feedback image-based visual servoing for quadrotor unmanned aerial vehicles. IEEE Trans. Control. Syst. Technol. 2019, 28, 1034–1041. [Google Scholar] [CrossRef]
  33. Fan, C.; Liu, Y.; Song, B.; Zhou, D. Dynamic visual servoing of a small scale autonomous helicopter in uncalibrated environments. Sci. China Inf. Sci. 2011, 54, 1855–1867. [Google Scholar] [CrossRef]
  34. Qiu, Z.; Hu, S.; Liang, X. Disturbance observer based adaptive model predictive control for uncalibrated visual servoing in constrained environments. ISA Trans. 2020, 106, 40–50. [Google Scholar] [CrossRef]
  35. Felicetti, L.; Emami, M.R. Image-based attitude maneuvers for space debris tracking. Aerosp. Sci. Technol. 2018, 76, 58–71. [Google Scholar] [CrossRef]
  36. Pesce, V.; Opromolla, R.; Sarno, S.; Lavagna, M.; Grassi, M. Autonomous relative navigation around uncooperative spacecraft based on a single camera. Aerosp. Sci. Technol. 2019, 84, 1070–1080. [Google Scholar] [CrossRef]
  37. MacQueen, J. Classification and analysis of multivariate observations. In Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability; Statistical Laboratory of the University of California: Berkeley, CA, USA, 1967; pp. 281–297. [Google Scholar]
Figure 1. The intrinsic camera model.
Figure 2. The extrinsic camera model.
Figure 3. Staring observation for a single ground target.
Figure 4. Staring observation for multiple ground targets. (a) Initial point distribution. (b) Inappropriate point distribution.
Figure 5. Target selection results of case 1 (a) and case 2 (b).
Figure 6. The initial target distribution and selection results.
Figure 7. The trajectories of the two selected targets and their midpoint.
Figure 8. The final target distribution.
Figure 9. The time histories of estimated projection error.
Figure 10. The time histories of angular velocity.
Figure 11. The time histories of control torques.
Table 1. Distribution of targets.
Targets | Case 1 | Case 2
Point 1 | (60, 400) | (480, 70)
Point 2 | (100, 500) | (550, 40)
Point 3 | (200, 350) | (620, 280)
Point 4 | (220, 400) | (700, 120)
Point 5 | (150, 300) | (610, 90)
Point 6 | — | (590, 300)
Point 7 | — | (650, 80)
Table 2. Camera parameters.
Camera Parameter | Theoretical Value | Real Value
$f$ | 1 m | 1.1 m
$u_0$ | 376 | 396
$v_0$ | 291 | 276
$d_x$ & $d_y$ | $8.33 \times 10^{-6}$ m | $8.43 \times 10^{-6}$ m
${}^c R_{bc}$ | $(0.2682, 0.0408, 0.0671)$ m | $(0.2582, 0.0358, 0.0771)$ m
$M_{bc}$ | $M_{321}(30°, 40°, 20°)$ | $M_{321}(29°, 39.6°, 18.9°)$
Table 3. Locations of ground targets.
 | Target 1 | Target 2 | Target 3 | Target 4 | Target 5
Longitude (°) | 128.279 | 128.262 | 128.261 | 128.271 | 128.271
Latitude (°) | 64.71 | 64.725 | 64.717 | 64.715 | 64.72
Table 4. Initial attitude.
Quaternion $q$ | Angular Velocity $\omega$ (°/s)
$[0.1795, 0.7553, 0.6211, 0.1077]^T$ | $[3.0997, 10, 2.9061]^T \times 10^{-4}$
Table 5. Initial orbital elements.
Semi-Major Axis | Eccentricity | Inclination | Argument of Perigee | RAAN | True Anomaly
6868.14 km | 0 | 97.2574° | 59.3884° | 290.017° | 54.8163°
Table 6. Satellite parameters.
$J$ | $U_{\max}$
$\begin{bmatrix} 2.2240 & 0.03264 & 0.02083 \\ 0.03264 & 3.2550 & 0.007557 \\ 0.02083 & 0.07557 & 2.1922 \end{bmatrix}$ kg·m² | 0.1 N·m
Table 7. Control parameters.
Parameter | Value
$K_1$ | $2 \times E_3 \times 10^{15}$
$K_2$ | $\begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix} \times 10^{21}$
$K_3$ | $1.1 \times E_3$
$\Gamma$ | $\mathrm{diag}\big(E_7 \times 10^{5},\ 8 \times 10^{2},\ 2 \times E_{10} \times 10^{14},\ 8 \times E_{16} \times 10^{8},\ 4 \times E_8 \times 10^{8}\big)$
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
