1. Introduction
With the rapid development of the robot industry, accurate and continuous positioning for robots has become particularly important. The error of a low-cost inertial navigation system (INS) accumulates quickly when global navigation satellite system (GNSS) signals are blocked, which leads to rapid deterioration of the positioning accuracy of the integrated system [1,2]. To solve this problem, research on multi-sensor autonomous navigation is being conducted all over the world.
However, high-precision sensors cannot be widely equipped across a swarm because of size, weight, power and cost constraints (SWaP-C). Therefore, cooperative navigation (CN) has attracted growing attention from researchers in the robotics and navigation fields, with the aim of improving the positioning accuracy of low-accuracy nodes (LANs) by sharing the navigation information of high-accuracy nodes (HANs) in the swarm. Advanced sensors such as depth cameras, lidar and ultra-wideband (UWB) radios are applied to observe the relative position, attitude or velocity between nodes in the swarm.
According to mathematical principles, constraints on the three-dimensional coordinates of the nodes can be established using the relative range and angle between nodes in the swarm, which makes accurate positioning of LANs possible based on navigation information obtained by HANs.
At present, UWB is generally installed on various unmanned platforms to provide the relative range between nodes in CN research. This small sensor typically weighs less than 0.2 kg and consumes less than 10 W. The ranging error of UWB can be less than 0.5 m when measuring a target within 100 m under ideal conditions. In the published literature, many scholars have used the relative range and angle information between nodes to conduct extensive CN simulation experiments on different unmanned platforms [3,4]. In these simulations, the observation in the filter is the state of the nodes [4], which is not suitable for an IMU-based unmanned platform that carries out state prediction in the prediction step of the Kalman filter. Paper [5] proves that although IMU-based LANs can improve their positioning accuracy through CN, the attitude estimation diverges because the motion equation of the IMU-based unmanned platform is highly nonlinear and the higher-order terms are neglected in the Taylor expansion. The attitude error cannot be well compensated and corrupts the filter, degrading the CN performance of the whole swarm. To solve this problem, a new method is needed that obtains the relative attitude and then calculates the optimal attitude estimate of the LANs through filters using the output of the IMU installed on the platform.
Much work has been done on cooperative navigation to improve positioning accuracy. However, the research focusing on attitude accuracy is far less satisfactory. On the one hand, high-precision attitude estimation is an important means of precise cooperative navigation, but the literature rarely mentions which sensor can be used to directly obtain the angle between two nodes. On the other hand, accurate attitude determination is the basis for the cooperative swarm to carry out further work. Taking target detection as an example, the position information of each node alone can neither determine the real-time coverage of the node's airborne radar, camera and other sensors, nor achieve the tracking of moving targets.
In a word, how to obtain the relative attitude with high accuracy has become an urgent problem for small-robot cooperative navigation. In reference [6], the researchers used a complex UWB array to locate and orient a ground robot. The orientation accuracy is about 3 degrees when applying Kalman filtering, and the root mean square error (RMSE) is reduced to about 1.5 degrees after using a graph optimization algorithm based on a sliding window. However, this method cannot be applied to all unmanned swarms. First, complex UWB arrays cost a great deal; second, the UWB array layout requires a larger platform; third, severe electromagnetic interference will block communication between nodes in the swarm.
In order to design a method that is easy to realize in practice, in this paper we selected a camera commonly used on UAVs to solve this problem. According to the principle of epipolar geometry, the 3D coordinates of a target in the camera frame (Frame-C) can be calculated from the target's coordinates in the pixel frame (Frame-P) and its distance from the lens. When three or more identifiable targets are in the overlap of a HAN and a LAN, the corresponding vectors, which should also number more than three, can be obtained by connecting the targets. For convenience, the vectors obtained by the camera are called visual vectors in this paper. The Euler angles between different nodes can be calculated by multi-vector attitude determination algorithms using the visual vectors' projections in Frame-C.
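As a concrete illustration of this back-projection, the sketch below recovers Frame-C coordinates from a pixel location and a measured range along the optical axis; the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) and all numeric values are hypothetical, not values from this paper.

```python
import numpy as np

def pixel_to_camera_frame(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with measured depth (distance along the
    optical axis) into Frame-C, using the pinhole camera model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Hypothetical intrinsics and detection, for illustration only.
p = pixel_to_camera_frame(u=400, v=260, depth=5.0,
                          fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

With a depth camera or a binocular camera, `depth` comes directly from the range measurement, so no epipolar search is needed at this step.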
Attitude determination is a measurement technology based on relative positioning [7]. By observing two or more baselines at the same time, the three-dimensional attitude of the carrier can be calculated, which is widely used in aerial, marine and land navigation. Generally, there are two kinds of methods to determine the three-axis attitude:
Deterministic algorithms, such as TRIAD (Tri-Axial Attitude Determination) [8].
Optimization algorithms, such as QUEST (Quaternion Estimator) [9] or SVD (Singular Value Decomposition) [10].
When using a double baseline to solve an attitude determination problem, both the deterministic algorithm and several optimization algorithms can work, but different algorithms have their own advantages and disadvantages in various application scenarios.
SVD (1968) was first proposed by Markley, but it was not widely used in practice in the 1960s–1980s because of the large amount of calculation required when computational power was severely limited. In this century, with the rapid development of computing power, Wahba's problem has been applied in more fields. Yang used the general root of the quartic equation to solve for the optimal quaternion in 2013. The algorithm is fast, but a problem remains: the characteristic polynomial may not have a real root, which leads to a complex quaternion [11]. So in 2015, Yang introduced the idea of the Riemannian manifold and developed a more robust iterative method [12]. In 2017, Wu Jin designed a new fast linear attitude estimator (FLAE) [13], in which both an analytical method and an iterative method for solving the eigenvalues are provided.
Multi-vector attitude determination algorithms have been studied to solve the relative attitude problem between vehicles. In fact, early research on attitude determination mainly comes from the problem of satellite attitude determination, where the attitude of the body in the inertial coordinate system can be obtained given the installation position of the star sensor on the body.
To solve the problem of relative attitude determination for space non-cooperative targets, Wang proposed a relative attitude determination method for target spacecraft using dual-feature structures [14]. Zhang Lei [15] deduced that the attitude measurement accuracy is affected not only by the relative positions of the navigation stars, described by the condition number, but also by the different positions of the navigation star group in the star map.
Nevertheless, the application environment faced by attitude determination in a multi-unmanned-platform CN swarm is different from traditional applications such as satellite attitude determination, mainly in the following three aspects:
The first is much larger measurement noise. In satellite attitude determination applications, the signal-to-noise ratio (SNR) is about 2000 dB, which is an unattainable precision for a camera with ranging functions, such as a depth camera or a binocular camera, installed on an unmanned platform.
The second is false alarms in practice. We adopt a YOLO-DeepSORT algorithm based on YOLOv5 (release 6) to detect targets. Although this algorithm is very friendly to low-computational-power platforms because of its lightweight design, the occasional target misjudgment is fatal to the traditional algorithm.
The third is the higher demand for rapidity and stability. As mentioned above, extracting the image features of a target is a relatively complex process for most low-cost unmanned platforms, and a high-definition image of the target is necessary during this process. If missing observations, caused by target invisibility or computation timeouts, occur over a long period of navigation and attitude determination, the divergence of the results will be irreversible.
As mentioned above, divergence of the attitude estimate always occurs when the extended Kalman filter (EKF) or other linear filtering methods are used directly. At the same time, the uncertainty of detection also makes it unsuitable to use the calculated relative angle directly as the observation of the filter. More significantly, the biggest difference between CN and single-node positioning is that multiple unmanned vehicles can coordinate, cooperate and share their own state with other nodes in a CN swarm [16], which means that every node needs to continuously send or receive information from each platform in the swarm. With moment parameters, the states of all nodes related to an observation must be changed whenever an observable is updated, which increases the computation and makes communication difficult. This hinders the application of CN when the Kalman filter is used to fuse multi-sensor data. As described in [17] and [18], it can be seen from simultaneous localization and mapping that the information filter (IF) performs better than the Kalman filter. Therefore, an augmented IF with Euler angles as the state and relative angles as the observation is presented for determining the attitude of nodes in a CN swarm.
The efficiency of cooperative navigation is mainly reflected in the fusion of the observations of all nodes in the swarm. In a decentralized swarm, each node needs to process its own sensing information and fuse it with that from other nodes to calculate position and attitude, which means that each node must process a large amount of information. The difficulty caused by time synchronization makes data fusion more complex. Different from the traditional Kalman filter, the IF uses information parameters in its iterative optimization, so it only needs to update the parameters related to the observation without updating the status of all nodes in the swarm. The observation update of information filtering is local, so the computational and communication consumption is lower, which makes it very suitable for cooperative navigation swarms that need to fuse a large amount of non-sequential information [19].
Considering the practical problems mentioned above, we propose a new method to determine the attitude of unmanned platforms using visual vectors. Based on a CN IF, this method obtains the relative-attitude Euler angles of nodes in the swarm quickly and accurately as the observation used to calculate the attitude of the LANs.
The remainder of this paper is organized as follows. Section 2 describes the details of the attitude determination algorithm for unmanned platforms in a CN swarm. Then in Section 3 we continue with the simulation of the multi-vector attitude determination algorithm and of attitude determination based on CN to verify the performance of the proposed method. In Section 4, the analysis and discussion of the experimental results are presented to evaluate the performance of the algorithm. Section 5 discusses conclusions and future work.
2. Materials and Methods
As shown in Figure 1, attitude determination of unmanned platforms in the CN swarm mainly needs to solve three problems:
Target detection and calculation of target coordinates in the navigation frame (Frame-N).
Determination of the relative angle between nodes in the swarm.
Attitude error compensation based on decentralized CN IF.
Among the three points mentioned, designing an improved algorithm for calculating the attitude-transfer matrix using visual vectors and compensating for the attitude-angle error of LANs based on the IF constitute the main emphasis and innovation points of this paper. In Section 2.1, the approach of constructing vectors from detected targets is described in detail. Then the fast and robust multi-vector attitude determination algorithm is derived in Section 2.2, and the IF is constructed in Section 2.3.
2.1. Frames Involved and Method to Build Visual Vectors
In this paper, the installed cameras are fixed on the platforms; therefore, Frame-C can be converted to the body frame (Frame-B) through a known and invariant attitude conversion matrix. Consequently, we can calculate the attitude conversion matrix between platforms as long as we calculate the relative attitude relation between the cameras.
The related formula is given as Equation (1):

$$C_{B_i}^{B_j} = C_{C_j}^{B_j}\, C_{C_i}^{C_j}\, \left(C_{C_i}^{B_i}\right)^{\mathrm{T}} \tag{1}$$

where $C_{C_i}^{B_i}$ and $C_{C_j}^{B_j}$ are the attitude conversion matrices that reflect the installation relationship of the cameras. They should be invariant in practice, so the key to calculating the relative angle is to calculate $C_{C_i}^{C_j}$.
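The composition of the body-to-body attitude from the two fixed installation matrices and the estimated camera-to-camera rotation can be sketched as below; the function and argument names (body-to-camera matrices) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def relative_body_attitude(C_b2c_i, C_b2c_j, C_ci2cj):
    """Compose the body-to-body rotation of platform-i relative to
    platform-j from two fixed camera installation matrices (body->camera)
    and the camera-to-camera rotation estimated from visual vectors."""
    # Bi -> Ci -> Cj -> Bj : (C_Bj^Cj)^T maps Cj back to Bj.
    return C_b2c_j.T @ C_ci2cj @ C_b2c_i

# Illustrative check: identity installations pass the camera rotation through.
Rz = np.array([[0., -1., 0.],
               [1.,  0., 0.],
               [0.,  0., 1.]])   # 90-degree rotation about z
out = relative_body_attitude(np.eye(3), np.eye(3), Rz)
```

Because the installation matrices are constant, they can be inverted once offline; only the camera-to-camera rotation is re-estimated at run time.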
According to the requirements of the multi-vector attitude determination algorithm, the projections of at least two different vectors in the two frames are required to obtain the attitude transformation matrix between these frames. As shown in Figure 1, the first step in constructing the vectors in Frame-C is to detect targets and obtain their coordinates. Next, the center points of the targets' bounding boxes are connected in the proper order. If more than two sets of coordinates are correctly obtained in the first step, we can get enough visual vectors to determine the relative attitude between two platforms.
Besides the targets we need to detect and recognize, the cooperative nodes in the overlap horizon can also help to construct vectors. In general, it should not be hard to find three or more targets in the overlap to construct more than three sets of visual vectors.
As shown in Figure 2, we do not discuss exceptional cases in this paper because target recognition is not the focus of our research. In our experiment, we apply a pruned YOLOv5, which runs at 20 Hz on our experimental platform, to detect targets efficiently.
Figure 3 shows how to construct visual vectors in detail. The mathematical description is given as follows: define the sets of detected targets for platform-$i$ and platform-$j$ as $T_i$ and $T_j$. Take the intersection of these two sets as $T_{ij} = T_i \cap T_j$, which represents the targets in the overlap. Only when $n(T_{ij}) \ge 3$ can the attitude determination algorithm work, where $n(\cdot)$ represents the operation of calculating the number of elements in a set.
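The overlap test and the vector construction described above can be sketched as follows; the target IDs, coordinates, and the function name are invented for illustration, with each platform supplying the detected targets' 3-D positions in its own Frame-C.

```python
import numpy as np

def visual_vectors(targets_i, targets_j):
    """targets_i / targets_j map target IDs to 3-D coordinates in each
    platform's camera frame. Returns unit vectors between every pair of
    common targets, expressed in both frames, or None when fewer than
    three common targets are seen (attitude not observable)."""
    common = sorted(set(targets_i) & set(targets_j))   # targets in overlap
    if len(common) < 3:
        return None
    vecs_i, vecs_j = [], []
    for a in range(len(common)):
        for b in range(a + 1, len(common)):
            va = targets_i[common[b]] - targets_i[common[a]]
            vb = targets_j[common[b]] - targets_j[common[a]]
            vecs_i.append(va / np.linalg.norm(va))     # normalised visual vector
            vecs_j.append(vb / np.linalg.norm(vb))
    return np.array(vecs_i), np.array(vecs_j)

# Three common targets yield three vector pairs; identical detections in
# both frames correspond to zero relative rotation.
t_i = {'a': np.array([0., 0., 0.]),
       'b': np.array([1., 0., 0.]),
       'c': np.array([0., 1., 0.])}
result = visual_vectors(t_i, dict(t_i))
```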
When more than one intra-class target is detected, the problem becomes much more complex. YOLO cannot distinguish intra-class targets, so different platforms might assign different identities to the same target or misjudge two different targets as the same one [20]. If the cooperative nodes are detected in the overlap, we might identify them by communication, but this method cannot work for other targets. Employing JPDA (Joint Probabilistic Data Association) or dead reckoning to identify unclassified targets by their different trajectories could solve this problem, but it requires greater computation. We do not investigate this complex case in this paper. In the simulation in Section 4 and the experiment in Section 5, we also place different kinds of objects in the experimental site.
2.2. Improved QUEST for Multi-Vector Attitude Determination
Because of its balance of stability, rapidity and accuracy, QUEST has become the most prevalent attitude determination algorithm. Considering that the accuracy of the vectors obtained by unmanned-platform attitude measurement is much lower, we make some improvements to enhance its robustness in the case of insufficient vector accuracy.
QUEST is also an algorithm based on Wahba's problem, which converts direction measurements into an attitude estimate. Its specific expression is as follows: suppose there are $n$ unit vectors in the reference frame, recorded as $\mathbf{r}_i$, $i = 1, \dots, n$. The values obtained by measuring these unit vectors in another frame are $\mathbf{b}_i$, $i = 1, \dots, n$. The problem is to find a rotation matrix $\mathbf{A}$ that minimizes the loss function

$$L(\mathbf{A}) = \frac{1}{2}\sum_{i=1}^{n} a_i \left\| \mathbf{b}_i - \mathbf{A}\mathbf{r}_i \right\|^2 \tag{2}$$

where $a_i$ represents the weight factor of each observation vector. The $a_i$, $i = 1, \dots, n$, are a set of positive weights satisfying $\sum_{i=1}^{n} a_i = 1$, usually chosen as $a_i = \sigma_i^{-2} \big/ \sum_{j=1}^{n} \sigma_j^{-2}$, with $\sigma_i^2$ the variance parameters of the measurement vectors.
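Under these definitions, the loss and the normalised weights can be evaluated directly, as the short sketch below shows; the vectors and variances are arbitrary illustrative data.

```python
import numpy as np

def wahba_loss(A, r, b, sigma):
    """Evaluate L(A) = 1/2 * sum_i a_i * ||b_i - A r_i||^2 with weights
    a_i = sigma_i^-2 / sum_j sigma_j^-2 (so the weights sum to one).
    r and b are arrays whose rows are the reference / measured vectors."""
    a = sigma ** -2.0
    a = a / a.sum()
    resid = b - r @ A.T              # row i holds b_i - A r_i
    return 0.5 * float(np.sum(a * np.sum(resid ** 2, axis=1)))

r = np.eye(3)                        # three orthogonal reference vectors
sigma = np.array([1.0, 2.0, 3.0])    # per-vector noise levels (illustrative)
perfect = wahba_loss(np.eye(3), r, r, sigma)       # b_i == A r_i exactly
b_noisy = r.copy(); b_noisy[0] = np.array([0., 1., 0.])
mismatch = wahba_loss(np.eye(3), r, b_noisy, sigma)
```

A perfect fit drives the loss to zero; any residual between measured and rotated reference vectors makes it strictly positive.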
To meet the requirements of the application, using the attitude quaternion effectively reduces the amount of computation in the solution process by reducing the number of unknowns. Minimizing the loss function is equivalent to maximizing the gain function, which is expressed in quaternion form as Equation (3):

$$g(\mathbf{q}) = \mathbf{q}^{\mathrm T}\mathbf{K}\mathbf{q} \tag{3}$$

where

$$\mathbf{K} = \begin{bmatrix} \mathbf{S} - \sigma\mathbf{I} & \mathbf{z} \\ \mathbf{z}^{\mathrm T} & \sigma \end{bmatrix}, \qquad \mathbf{B} = \sum_{i=1}^{n} a_i\,\mathbf{b}_i\mathbf{r}_i^{\mathrm T}, \qquad \mathbf{S} = \mathbf{B} + \mathbf{B}^{\mathrm T}, \qquad \sigma = \operatorname{tr}\mathbf{B}, \qquad \mathbf{z} = \sum_{i=1}^{n} a_i\,\mathbf{b}_i \times \mathbf{r}_i \tag{4}$$
Utilizing a Lagrange multiplier to find the extremum of the gain function $g(\mathbf{q}) = \mathbf{q}^{\mathrm T}\mathbf{K}\mathbf{q}$ under the unit-norm constraint $\mathbf{q}^{\mathrm T}\mathbf{q} = 1$, we get:

$$\mathbf{K}\mathbf{q} = \lambda\mathbf{q} \tag{5}$$

If and only if $\lambda$ is equal to the maximum eigenvalue $\lambda_{\max}$ of the matrix $\mathbf{K}$, the corresponding eigenvector of $\mathbf{K}$ is the optimal quaternion $\mathbf{q}_{\mathrm{opt}}$. Therefore, writing the quaternion through the Gibbs vector $\mathbf{y}$ as $\mathbf{q} = \begin{bmatrix}\mathbf{y}\\ 1\end{bmatrix} \big/ \sqrt{1 + \|\mathbf{y}\|^2}$, we can rearrange Equation (5) as

$$(\mathbf{S} - \sigma\mathbf{I})\mathbf{y} + \mathbf{z} = \lambda\mathbf{y} \tag{6}$$

$$\mathbf{z}^{\mathrm T}\mathbf{y} + \sigma = \lambda \tag{7}$$
From Equations (6) and (7), expressing the matrix inverse by the adjoint matrix, we get

$$\mathbf{y} = \left[(\lambda + \sigma)\mathbf{I} - \mathbf{S}\right]^{-1}\mathbf{z} = \frac{\operatorname{adj}\!\left[(\lambda + \sigma)\mathbf{I} - \mathbf{S}\right]}{\det\!\left[(\lambda + \sigma)\mathbf{I} - \mathbf{S}\right]}\,\mathbf{z} \tag{8}$$

Therefore, we describe $\mathbf{q}_{\mathrm{opt}}$ in terms of the Gibbs vector and normalize it; then we get:

$$\mathbf{q}_{\mathrm{opt}} = \frac{1}{\sqrt{1 + \|\mathbf{y}\|^2}}\begin{bmatrix}\mathbf{y}\\ 1\end{bmatrix} = \frac{1}{\sqrt{\gamma^2 + \|\mathbf{x}\|^2}}\begin{bmatrix}\mathbf{x}\\ \gamma\end{bmatrix} \tag{9}$$

where

$$\mathbf{x} = \left(\alpha\mathbf{I} + \beta\mathbf{S} + \mathbf{S}^2\right)\mathbf{z}, \qquad \gamma = (\lambda + \sigma)\alpha - \det\mathbf{S} \tag{10}$$

$$\alpha = \lambda^2 - \sigma^2 + \kappa, \qquad \beta = \lambda - \sigma, \qquad \kappa = \operatorname{tr}\left(\operatorname{adj}\mathbf{S}\right) \tag{11}$$

Inserting Equation (11) into Equation (9) shows that the $\lambda$ which yields the $\mathbf{q}_{\mathrm{opt}}$ we need satisfies:

$$\lambda = \sigma + \mathbf{z}^{\mathrm T}\left[(\lambda + \sigma)\mathbf{I} - \mathbf{S}\right]^{-1}\mathbf{z} \tag{12}$$
Equation (12) is equivalent to the characteristic equation for the eigenvalues of $\mathbf{K}$. Considering that the Gibbs vector becomes infinite when the rotation angle is $\pi$, to avoid the problems posed by this singularity we derive an expression that permits the computation of $\lambda_{\max}$ without the intermediary of the Gibbs vector. An eigenvalue $\lambda$ of any square matrix $\mathbf{M}$ satisfies the characteristic equation:

$$\det(\mathbf{M} - \lambda\mathbf{I}) = 0 \tag{13}$$

According to the Cayley–Hamilton theorem [21], $\mathbf{M}$ should satisfy the same equation, i.e., substituting $\mathbf{M}$ for $\lambda$ in its characteristic polynomial yields the zero matrix (14). Applying this to $\mathbf{K}$, we get a convenient expression for the characteristic equation:

$$f(\lambda) = \lambda^4 - (a + b)\lambda^2 - c\lambda + (ab + c\sigma - d) = 0 \tag{15}$$
where

$$a = \sigma^2 - \kappa, \qquad b = \sigma^2 + \mathbf{z}^{\mathrm T}\mathbf{z}, \qquad c = \det\mathbf{S} + \mathbf{z}^{\mathrm T}\mathbf{S}\mathbf{z}, \qquad d = \mathbf{z}^{\mathrm T}\mathbf{S}^2\mathbf{z} \tag{16}$$

Since the optimal loss is a small quantity, $\lambda_{\max}$ is very close to 1 (recall $\sum_{i} a_i = 1$), so its initial value can be set to $\lambda_0 = 1$ [22]. Then, the Newton iteration can be conducted using:

$$\lambda_{k+1} = \lambda_k - \frac{f(\lambda_k)}{f'(\lambda_k)} \tag{17}$$
Usually, $\lambda_{\max}$ can be very accurate after several iterations in the traditional QUEST algorithm. When the eigenvalue is obtained, the eigenvector can then be calculated using elementary row operations. However, as the accuracy is not linear with the number of iterations, a fixed iteration count will not always achieve good results. Different from the method proposed by Markley, a novel symbolic approach is investigated to increase the speed of calculation. For this purpose, Equation (15), a depressed quartic, can be solved in closed form through the general solution of the quartic, yielding its four roots:

$$\lambda \in \{\lambda_1, \lambda_2, \lambda_3, \lambda_4\} \tag{18}$$

Then, $\lambda_{\max}$ is chosen as the root nearest to one. In this way, the solving process of $\lambda_{\max}$ is significantly shortened. The use of a general solution makes traditional QUEST a faster attitude determination algorithm with fewer floating-point operations. Inserting the $\lambda_{\max}$ calculated by Equation (18) into Equation (9), we obtain the optimal attitude quaternion $\mathbf{q}_{\mathrm{opt}}$ as follows:

$$\mathbf{q}_{\mathrm{opt}} = \frac{1}{\sqrt{\gamma^2 + \|\mathbf{x}\|^2}}\begin{bmatrix}\mathbf{x}\\ \gamma\end{bmatrix} \tag{19}$$
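A minimal numerical sketch of the estimator described above: it builds Davenport's K matrix from the weighted vector pairs and extracts the eigenvector of the largest eigenvalue. For clarity this sketch uses a standard symmetric eigendecomposition instead of the closed-form quartic root of Equation (18); the quaternion ordering [x, y, z, w] is an assumption of the sketch, not the paper's convention.

```python
import numpy as np

def quest(r, b, a):
    """Optimal quaternion from weighted vector pairs (r_i, b_i).
    Builds B, S, sigma, z as in the derivation, assembles K, and returns
    the unit eigenvector belonging to the maximum eigenvalue of K."""
    B = sum(ai * np.outer(bi, ri) for ai, bi, ri in zip(a, b, r))
    S = B + B.T
    sigma = np.trace(B)
    z = sum(ai * np.cross(bi, ri) for ai, bi, ri in zip(a, b, r))
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    w, v = np.linalg.eigh(K)        # eigenvalues in ascending order
    q = v[:, -1]                    # eigenvector of lambda_max
    return q / np.linalg.norm(q)

# Identical vector sets in both frames imply zero rotation: q -> [0,0,0,+-1].
basis = [np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([0., 0., 1.])]
q = quest(basis, basis, [1/3, 1/3, 1/3])
```

In the production path described in the text, the eigendecomposition would be replaced by the analytic quartic root nearest to one, which avoids iterative eigen-solvers on low-power hardware.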
2.3. Attitude Angle Determination Based on IF
Cooperative navigation technology is widely used on unmanned platforms. The common method is data fusion with improved Kalman filters based on the specific navigation sensors. As shown in [15], when platforms are defined as processing nodes, the Kalman filter can fuse the navigation information of each node at the master node in a centralized Kalman filter algorithm.
Assume that the unmanned swarm consists of $N$ nodes. The motion model and observation model of an IMU-based platform for attitude compensation in cooperative navigation are:

$$\mathbf{X}_k = \boldsymbol{\Phi}_{k-1}\mathbf{X}_{k-1} + \mathbf{w}_{k-1} \tag{20}$$

where $\mathbf{X}_k$ and $\mathbf{X}_{k-1}$ represent the status of the whole CN swarm at times $k$ and $k-1$. Furthermore, $\mathbf{x}_{i,k}$ represents the status of platform-$i$ at time $k$, and we have $\mathbf{X}_k = \left[\mathbf{x}_{1,k}^{\mathrm T}, \dots, \mathbf{x}_{N,k}^{\mathrm T}\right]^{\mathrm T}$. $\boldsymbol{\Phi}_{i,k-1}$ is the state transition matrix of platform-$i$, which is determined by the dynamics of this platform and is the parameter required for state estimation. $\mathbf{w}_{i,k}$ is the system noise of platform-$i$ at time $k$; its variance is $\mathbf{Q}_{i,k}$, satisfying $\mathrm{E}\left[\mathbf{w}_{i,k}\right] = \mathbf{0}$ and $\mathrm{E}\left[\mathbf{w}_{i,k}\mathbf{w}_{j,l}^{\mathrm T}\right] = \mathbf{Q}_{i,k}\,\delta_{ij}\delta_{kl}$. The single-platform observation equation at time $k$ and the inter-platform observation equation of platform-$i$ to platform-$j$ can be expressed as:

$$\mathbf{z}_{i,k} = h_i\!\left(\mathbf{x}_{i,k}\right) + \mathbf{v}_{i,k} \tag{21}$$

$$\mathbf{z}_{ij,k} = h_{ij}\!\left(\mathbf{x}_{i,k}, \mathbf{x}_{j,k}\right) + \mathbf{v}_{ij,k} \tag{22}$$
where $\mathbf{z}_{i,k}$ and $\mathbf{z}_{ij,k}$ represent the single-platform observation of platform-$i$ on itself and the relative observation from platform-$i$ to platform-$j$. $h_i(\cdot)$ and $h_{ij}(\cdot)$ are the observation equations, which should suit the observations in the filter and the sensors used. $\mathbf{v}_{i,k}$ and $\mathbf{v}_{ij,k}$ are the observation noises and follow $\mathcal{N}\!\left(\mathbf{0}, \mathbf{R}_{i,k}\right)$ and $\mathcal{N}\!\left(\mathbf{0}, \mathbf{R}_{ij,k}\right)$, respectively.
For an unmanned swarm, the observation update of a single platform at time $k$ is

$$\mathbf{K}_k = \mathbf{P}_{k|k-1}\mathbf{H}_{i,k}^{\mathrm T}\left(\mathbf{H}_{i,k}\mathbf{P}_{k|k-1}\mathbf{H}_{i,k}^{\mathrm T} + \mathbf{R}_{i,k}\right)^{-1} \tag{23}$$

$$\hat{\mathbf{X}}_k = \hat{\mathbf{X}}_{k|k-1} + \mathbf{K}_k\left(\mathbf{z}_{i,k} - \mathbf{H}_{i,k}\hat{\mathbf{X}}_{k|k-1}\right), \qquad \mathbf{P}_k = \left(\mathbf{I} - \mathbf{K}_k\mathbf{H}_{i,k}\right)\mathbf{P}_{k|k-1} \tag{24}$$

At the initial time, the state parameters of the nodes are not correlated. After collaborative observation, the state parameters of the nodes in the swarm become correlated. It can be seen from Equations (23) and (24) that even if a single platform only updates an observation involving itself, the status of all nodes in the unmanned swarm still needs to be updated according to the traditional Kalman filtering step. At time $k$, platform-$i$ updates its observation of platform-$j$ as:

$$\mathbf{K}_k = \mathbf{P}_{k|k-1}\mathbf{H}_{ij,k}^{\mathrm T}\left(\mathbf{H}_{ij,k}\mathbf{P}_{k|k-1}\mathbf{H}_{ij,k}^{\mathrm T} + \mathbf{R}_{ij,k}\right)^{-1} \tag{25}$$

$$\hat{\mathbf{X}}_k = \hat{\mathbf{X}}_{k|k-1} + \mathbf{K}_k\left(\mathbf{z}_{ij,k} - \mathbf{H}_{ij,k}\hat{\mathbf{X}}_{k|k-1}\right), \qquad \mathbf{P}_k = \left(\mathbf{I} - \mathbf{K}_k\mathbf{H}_{ij,k}\right)\mathbf{P}_{k|k-1} \tag{26}$$
Similarly, we can see from Equations (25) and (26) that although only two platforms are involved in the observation, the observation update needs to update all the platforms' statuses because the state parameters used in the CN Kalman filter are correlated.
Traditional decentralized CN Kalman filtering requires all platforms in the swarm to maintain time synchronization and continuously send or receive information from each other, which imposes a huge real-time communication burden. In addition, observation updates can only be carried out in sequence: for observations made at the same time, the swarm can only start updating the next observation after completing the update of the previous one. As the number of nodes in the swarm increases, the number of simultaneous observations rises. This is why traditional decentralized CN Kalman filtering is difficult to achieve in practice.
- 2. Decentralized CN Information Filter
The IF uses information parameters to augment states and update observations, then recovers the status and covariance. This method preserves the historical states in the filter; thus the information matrix of the joint distribution is sparse and the computational complexity of the filter is lower.
The Gaussian filter is widely used in estimation algorithms. It can be described by moment parameters (mean $\boldsymbol{\mu}$ and variance $\boldsymbol{\Sigma}$) or information parameters (information vector $\boldsymbol{\xi}$ and information matrix $\boldsymbol{\Omega}$). The relation between them is described as [23]:

$$\boldsymbol{\Omega} = \boldsymbol{\Sigma}^{-1}, \qquad \boldsymbol{\xi} = \boldsymbol{\Sigma}^{-1}\boldsymbol{\mu} \tag{27}$$
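The moment-to-information conversion above is a two-line computation in each direction, as this sketch shows; the numbers are arbitrary.

```python
import numpy as np

def to_information(mu, Sigma):
    """Moment parameters (mean, covariance) -> information parameters."""
    Omega = np.linalg.inv(Sigma)    # information matrix = inverse covariance
    xi = Omega @ mu                 # information vector
    return xi, Omega

def to_moment(xi, Omega):
    """Information parameters -> moment parameters (the inverse mapping)."""
    Sigma = np.linalg.inv(Omega)
    return Sigma @ xi, Sigma

# Round trip on an arbitrary 2-D Gaussian.
mu = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
xi, Omega = to_information(mu, Sigma)
mu_back, Sigma_back = to_moment(xi, Omega)
```

In practice the inversions are only performed when a moment-form quantity is actually needed; the filter itself iterates purely on the information parameters.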
The Kalman filter is a Gaussian filter based on moment parameters and the IF is a Gaussian filter based on information parameters, so they are equivalent in essence. Their computational characteristics differ, as their parameters do, but their filtering steps are the same. Transforming the Kalman filter single-platform observation update at time $k$ into IF form, we get:

$$\mathbf{P}_k = \left(\bar{\mathbf{P}}_k^{-1} + \mathbf{H}_{i,k}^{\mathrm T}\mathbf{R}_{i,k}^{-1}\mathbf{H}_{i,k}\right)^{-1} = \left(\bar{\boldsymbol{\Omega}}_k + \mathbf{H}_{i,k}^{\mathrm T}\mathbf{R}_{i,k}^{-1}\mathbf{H}_{i,k}\right)^{-1} \tag{28}$$

where $\bar{\boldsymbol{\Omega}}_k$ and $\boldsymbol{\Omega}_k$ represent the prediction information matrix and the observation-update information matrix of the nodes at time $k$. $\mathbf{H}_{i,k}$ denotes the linearized observation matrix of platform-$i$, and $\mathbf{R}_{i,k}^{-1}$ is the weight of the observation value of platform-$i$. Inverting both sides of Equation (28), the observation update in IF form shows as Equation (29):

$$\boldsymbol{\Omega}_k = \bar{\boldsymbol{\Omega}}_k + \mathbf{H}_{i,k}^{\mathrm T}\mathbf{R}_{i,k}^{-1}\mathbf{H}_{i,k}, \qquad \boldsymbol{\xi}_k = \bar{\boldsymbol{\xi}}_k + \mathbf{H}_{i,k}^{\mathrm T}\mathbf{R}_{i,k}^{-1}\mathbf{z}_{i,k} \tag{29}$$
From Equation (29), it is obvious that the IF only needs to update the information entries of platform-$i$ instead of updating all platforms' statuses in the traditional way when updating a single-platform observation. Similarly, as Equation (30) shows, it is only necessary to update the information entries related to platform-$i$ and platform-$j$ when updating the observation between these two platforms:

$$\boldsymbol{\Omega}_k = \bar{\boldsymbol{\Omega}}_k + \mathbf{H}_{ij,k}^{\mathrm T}\mathbf{R}_{ij,k}^{-1}\mathbf{H}_{ij,k}, \qquad \boldsymbol{\xi}_k = \bar{\boldsymbol{\xi}}_k + \mathbf{H}_{ij,k}^{\mathrm T}\mathbf{R}_{ij,k}^{-1}\mathbf{z}_{ij,k} \tag{30}$$
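The locality of the relative-observation update in Equation (30) can be sketched as follows: only the information blocks of the two platforms involved are touched. The per-platform block size, the stacked observation matrix `H_ij`, and all numbers below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def if_relative_update(xi, Omega, i, j, H_ij, R_ij, z_ij, n_states):
    """Information-filter update for an observation between platform-i and
    platform-j. Only the rows/columns of the information matrix (and the
    entries of the information vector) belonging to these two platforms
    are modified; every other block is left untouched."""
    idx = np.r_[i * n_states:(i + 1) * n_states,
                j * n_states:(j + 1) * n_states]
    Rinv = np.linalg.inv(R_ij)
    xi = xi.copy()
    Omega = Omega.copy()
    Omega[np.ix_(idx, idx)] += H_ij.T @ Rinv @ H_ij   # local info-matrix update
    xi[idx] += H_ij.T @ Rinv @ z_ij                   # local info-vector update
    return xi, Omega

# Three platforms with 2 states each; platforms 0 and 2 share one observation.
n = 2
xi0, Om0 = np.zeros(6), np.eye(6)
H = np.hstack([np.eye(2), -np.eye(2)])   # differences the two platforms' states
xi1, Om1 = if_relative_update(xi0, Om0, 0, 2, H, np.eye(2),
                              np.array([1.0, -1.0]), n)
```

Platform 1's rows and columns are untouched by the update, which is exactly why decentralized nodes can process such observations without exchanging the whole swarm state.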
- 3. Design of state and observation in IF
This paper selects the following parameters as state variables: attitude angle error $\boldsymbol{\phi}$, horizontal velocity error $\delta\mathbf{v}$, accelerometer zero offset $\boldsymbol{\nabla}$, and gyroscope drift $\boldsymbol{\varepsilon}$. For attitude compensation, the system state vector $\mathbf{x}$ of the IMU-based autonomous integrated navigation system can be defined as

$$\mathbf{x} = \left[\boldsymbol{\phi}^{\mathrm T},\; \delta\mathbf{v}^{\mathrm T},\; \boldsymbol{\nabla}^{\mathrm T},\; \boldsymbol{\varepsilon}^{\mathrm T}\right]^{\mathrm T} \tag{31}$$

This is because the vertical channel of the inertial navigation system is divergent and generally cannot be solved without external reference information or damping, so the vertical velocity error state and the height error state should be removed. The position error is related to the velocity error as well; when there is no separate position reference information, the position error states should be removed.
The inertial navigation system error-state equation is:

$$\dot{\mathbf{x}} = \mathbf{F}\mathbf{x} + \mathbf{G}\mathbf{w} \tag{32}$$

where $\mathbf{F}$ is the system state matrix and $\mathbf{G}$ is the systematic error matrix.
The selection of the measurement vector of the integrated navigation system is directly related to the measurement accuracy of the entire system. Based on the known matching results, the attitude observation equations are established.
In the loose combination observation method, the observation is the relative angle obtained by the sensor. Since the error state is selected to establish the state equation in the filter, the form of the observation equation will be very simple without considering the lever arm effect.
Set the observation as Equation (34):

$$\mathbf{z}_k = \hat{\boldsymbol{\theta}}_{\mathrm{INS}} - \boldsymbol{\theta}_{\mathrm{meas}} = \mathbf{H}\mathbf{x}_k + \mathbf{v}_k \tag{34}$$

We have $\mathbf{H} = \left[\mathbf{I}_{3\times3} \;\; \mathbf{0}\right]$. The attitude-angle errors about the three axes are selected as the observation; in this way, the observation matrix $\mathbf{H}$ is linear. The observation is obtained directly by subtracting the angle measured by the sensor from the attitude angle calculated by the IMU inertial navigation solution. Although the accuracy of the loose combination is lower than that of the tight combination, the linear observation equation of the loose combination does not introduce high-order errors.
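The observation construction above reduces to a subtraction plus a constant linear H. The sketch below assumes one possible state ordering (3 attitude errors followed by the remaining error states, 11 states in total); the dimension and all Euler-angle values are illustrative, not taken from the paper.

```python
import numpy as np

def attitude_observation(euler_ins, euler_meas, n_state=11):
    """Loose-combination observation: z = (attitude from INS mechanisation)
    minus (measured relative attitude). H = [I_3 | 0] picks the three
    attitude-error states, so the observation equation stays linear."""
    z = np.asarray(euler_ins, dtype=float) - np.asarray(euler_meas, dtype=float)
    H = np.hstack([np.eye(3), np.zeros((3, n_state - 3))])
    return z, H

# Illustrative angles in radians: small INS drift relative to the measurement.
z, H = attitude_observation([0.10, -0.02, 1.57], [0.08, -0.01, 1.50])
```

Because H is constant and linear, this observation can be plugged into the IF update of Equation (30) without any Jacobian evaluation or higher-order linearization error.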