Article

Visual-Inertial Cross Fusion: A Fast and Accurate State Estimation Framework for Micro Flapping Wing Rotors

1 School of Aeronautic Science and Engineering, Beihang University, Beijing 100191, China
2 School of Mechanical Engineering, Purdue University, West Lafayette, IN 47906, USA
3 Institute of Unmanned System, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
Drones 2022, 6(4), 90; https://doi.org/10.3390/drones6040090
Submission received: 9 March 2022 / Revised: 27 March 2022 / Accepted: 29 March 2022 / Published: 31 March 2022
(This article belongs to the Section Drone Design and Development)

Abstract

Real-time and drift-free state estimation is essential for the flight control of Micro Aerial Vehicles (MAVs). Due to the vibration caused by the particular flapping motion and the stringent constraints of scale, weight, and power, state estimation divergence has become an open challenge for the long-term stable flight of flapping wing platforms. Unlike conventional MAVs, the direct adoption of mature state estimation strategies, such as inertial or vision-based methods, has difficulty obtaining satisfactory sensing performance on flapping wing platforms. Inertial sensors offer a high sampling frequency but suffer from flapping-induced oscillation and drift. External visual sensors, such as motion capture systems, can provide accurate feedback but come with a relatively low sampling rate and severe delay. This work proposes a novel state estimation framework that combines the merits of both to address the key sensing challenges of a special flapping wing platform—micro flapping wing rotors (FWRs). In particular, a cross-fusion scheme, which integrates two alternately updated Extended Kalman Filters based on a convex combination, is proposed to tightly fuse both onboard inertial and external visual information. Such a design leverages both the high sampling rate of the inertial feedback and the accuracy of the external vision-based feedback. To address the sensing delay of the visual feedback, a ring buffer is designed to cache historical states for online drift compensation. Experimental validations have been conducted on two sophisticated microFWRs with different actuation and control principles. Both of them show real-time and drift-free state estimation.

1. Introduction

Flapping wing Micro Aerial Vehicles (MAVs) adopt the flight principle of flying creatures and are, thus, promising for reproducing the extraordinary aerodynamic feats of flying animals. To date, with the growing understanding of flapping flight aerodynamics and control strategies, flapping wing MAVs are becoming increasingly agile and miniaturized [1,2,3,4,5,6,7,8,9,10,11].
Among them, a new type of flapping-wing aircraft—micro Flapping Wing Rotorcrafts (FWRs)—integrates the advantages of both flapping and rotary wings, demonstrating superior lift generation capability [1]. The state-of-the-art microFWRs even showcase several millimeter/milligram-scale designs [1,9,12,13,14,15,16,17,18,19] that have rarely been achieved by conventional fixed- or rotary-wing vehicles. Such microFWRs are foreseen as alternatives to commercial drones and are expected to be used in a growing breadth of applications such as search and rescue, surveying and inspection, and aerial photography [20]. On the other hand, the small size and lightweight design of microFWRs result in undesired high control sensitivity, which significantly limits flight control performance [2,3,4,6]. To address this unique challenge, fast and accurate state estimation is the key prerequisite.
In order to accommodate stringent size and weight constraints, the microelectromechanical systems (MEMS)-based inertial measurement unit (IMU) is a practical sensing solution for miniaturized vehicles, offering high sampling rates (up to the kilohertz range). An IMU accompanied by adequate state estimation algorithms works properly on conventional aircraft, especially on large-scale ones [21,22]. However, the direct adoption of such mature sensing solutions on flapping wing vehicles [2,3,10,11,23] usually results in inadequate performance due to their unsteady aerodynamic principles and time-varying system dynamics. In particular, the unsteady aerodynamic loading from the wings can induce severe vibration in inertial sensor readings [2,23,24,25], resulting in unmanageable sensing drift. As a result, the high-frequency, time-varying aerodynamic loading caused by flapping wings lowers the accuracy of IMU-based state estimation and, accordingly, degrades flight control performance [2,11,23,24,25]. A vision-based motion capture system could be an effective alternative, capturing vehicle states from visual cues [26,27]. Nevertheless, the resolution of such external visual sensors may limit tracking performance as the object becomes agile and tiny and when environmental disturbances are present (e.g., refraction or shielded markers). Most importantly, vision-based sensing runs at relatively low sampling frequencies (e.g., around 100–200 Hz for OptiTrack [26]), and real-time performance is hard to ensure due to slow image processing; namely, it generates sluggish and delayed feedback, which can degrade control performance, if not destabilize the vehicle [26,27,28]. In fact, even for flying animals, a delayed sensory system can severely affect flight control [29,30].
Due to the vibration caused by the particular flapping motion and the stringent constraints of scale, weight, and power, state estimation divergence becomes an open challenge for the long-term stable flight of flapping wing platforms. As a result, using either the IMU or the external visual sensors alone cannot provide high-frequency and high-fidelity state estimation for microFWRs’ flight control. It is therefore desirable to leverage the advantages of both of the above-mentioned sensing methods to obtain real-time and drift-free state estimation for precise control. To this end, particular sensor fusion challenges need to be addressed to attain a satisfactory updating frequency, sensing accuracy, and delay compensation.
In this work, a state estimation framework is proposed to integrate multiple sensor readings, i.e., inertial and external visual sensors, to generate real-time and accurate state feedback for flapping wing MAVs. Because the sensing principles and readings of these two sensors are completely different, two particular Extended Kalman Filters (EKFs) are designed for sensor fusion based on convex combination theory. In order to enhance computational efficiency and compensate for the sensing delay, a cross fusion framework is proposed to integrate these two EKFs’ estimates, aiming to leverage both the high sampling rate of the inertial feedback and the accuracy of the visual feedback. A ring buffer is implemented to cache the historical state updates and enable backtracking during cross fusion, which plays an important role in sensing delay compensation. The detailed workflow of the proposed state estimation framework is presented in Section 4. The proposed framework has been validated experimentally on two microFWRs with different actuation principles. The results show that the proposed method matches the accuracy of the visual feedback without its delay. Meanwhile, it retains the detailed flight state variations captured by the inertial sensor, demonstrating high sensing bandwidth. During the bench tests, the updating frequency of the proposed method was on par with that of the inertial sensors, but it is not limited to those sensors.
Most of the existing flapping-wing microvehicles use an IMU or an external camera as their sensory system [1,2,3,4,5,6,7,8,9,10,11]. Compared with their existing state estimation methods [2,11,23,24,25,26,27,28], the proposed method delivers state estimation that is both high-fidelity and high-frequency. Such performance and the robustness of the proposed method have been validated by real-world flight tests of two sophisticated microFWRs, which have even more complex aerodynamics than traditional flappers. We summarize the contributions as follows:
1. We propose a generic method integrating inertial and external visual sensors by using a convex combination of EKFs, which simultaneously guarantees the accuracy and updating frequency of FWRs’ state estimation. Such a method effectively addresses the above-mentioned sensing challenges of typical flapping-wing microvehicles;
2. We propose a cross fusion framework that fuses pose information from the external visual sensors while taking the transmission delay into consideration. This framework fundamentally benefits the control of small-sized agile aerial vehicles, which have high system sensitivity and are severely affected by delayed pose feedback;
3. We implement the proposed method on two different FWR prototypes and conduct extensive real-world evaluations. Based on the test results, in addition to the aforementioned advantages, such a framework is capable of attenuating the influence of anomalous data.
The rest of the article is organized as follows. Section 2 and Section 3 introduce the test platforms and the corresponding sensing challenges. Section 4 details the architecture and the algorithm of the proposed state estimation framework. Section 5 presents the experimental validation of the proposed state estimation framework. Section 6 summarizes this work.

2. Test Platforms and Their Sensory System

In order to validate the effectiveness of the proposed state estimation framework, two FWR platforms of different scales have been tested in this study. As shown in Figure 1, the test platforms come with different actuation principles, system parameters, control logic, and sensing configurations. The details of these platforms are introduced below, and their parametric comparison is summarized in Table 1.

2.1. Platform (a): A Linkage-Drive MicroFWR

The test platform microFWR (a) is driven by a four-bar-like linkage. Such a mechanism, as shown in Figure 2, drives three flapping wings to generate rotation torque and lift. The flapping amplitude is constrained by the linkage, while the flapping frequency can be altered for lift control. Since the constrained wing trajectory is not able to generate control torque, control surfaces have been adopted on the tail to generate control torque for flight control. The onboard electronics include an STM32F405 microcontroller, an MPU9250 IMU, motor and servo drivers, and detachable wireless telemetry. Among them, the IMU provides high-frequency inertial feedback, and the telemetry is used to receive external visual feedback. The detailed design of FWR (a) is presented in [10].

2.2. Platform (b): A Motor Direct-Drive MicroFWR

Test platform FWR (b) is directly actuated by two bi-directional rotating brushless DC motors; thus, it avoids the complicated transmission system used in platform (a). Its prototype is shown in Figure 3. In this design, each wing is driven independently by its paired DC motor, which is similar to the robotic hummingbird designed by Tu et al. [31], except that the two wings are mounted anti-symmetrically. Therefore, the aerodynamic principle of FWR (b) is more similar to that of ordinary birds than to hummingbirds. Each motor is equipped with Hall-sensor feedback for commutation control, yielding bi-directional rotation to enable reciprocating wing motion. Reduction gears and torsional springs connect the motor and wing for torque transmission. Under aerodynamic and inertial loading, the wing is designed to rotate passively. The wing kinematics can be controlled by modulating the input voltage of the motor, and the discrepancy between the two wings’ kinematics generates control torques to stabilize the vehicle.

2.3. Sensory System

The sensory system used in this study mainly consists of an onboard IMU and offboard visual sensors. The specific onboard IMU is an MPU9250, which contains a three-axis gyroscope, a three-axis accelerometer, and a three-axis magnetometer. Its updating frequency reaches as high as 400 kHz, which is sufficient for high-bandwidth system control. For external visual sensing, we used OptiTrack (https://OptiTrack.com (accessed on 7 March 2022))—a motion capture system that relies on multiple infrared cameras to track the markers mounted on the test platform. The setup is depicted in Figure 4. Note that different sensing frequencies were implemented on the two test platforms in order to verify the generality of the proposed sensor fusion method. With such sensory system setups, experimental comparative studies with three different state estimation strategies, i.e., onboard IMU only, offboard OptiTrack only, and the proposed IMU-OptiTrack fusion, have been conducted. A sample result that demonstrates the performance discrepancy is shown and discussed in Section 5.

3. State Estimation Challenges of FWRs

With the test platforms and sensory systems described in Section 2, the exact state estimation challenges of microFWRs can be identified systematically. Such challenges motivate this work accordingly. In this section, we introduce the respective limitations of the IMU and the external visual sensors on microFWRs.

3.1. Limitation of Inertial Sensors

MEMS-based IMUs can provide high-bandwidth inertial feedback, which is desirable for miniaturized aerial vehicles with stringent size, weight, and control sensitivity constraints. Although an IMU works reasonably well on conventional vehicles with proper sensor fusion algorithms, simply implementing it on flapping wing vehicles usually yields poor performance. Flapping-wing vehicles are known to experience severe body vibration due to the high-frequency reciprocal wing motion and complex time-varying aerodynamics [2,23,24,25]. In particular, such severe oscillation not only generates undesired oscillatory control error but also affects IMU readings significantly, resulting in unreliable sensor feedback, as shown in Figure 5. Based on previous studies [2,23,24,25], such severe vibration is prominent in accelerometer measurements during flight. In addition, the gyroscope also demonstrates unmanageable sensor drift due to bias and noise uncertainty. Without reliable accelerometer readings, merely using a gyroscope cannot sustain accuracy for long-term estimation.
Taking FWR (b) as an example: with the raw IMU data shown in Figure 5, several mature sensor fusion solutions have been tested, including the complementary filter and the Extended Kalman filter. Their respective best performances are shown in Figure 6. Even this best-case performance cannot be applied to flight control because of the obvious state estimation errors. In order to verify the dilemma of using an IMU on flapping wing systems, quantitative studies have been conducted. In the case of completely distrusting the accelerometer, the state estimation quickly diverges. Conversely, continuously increasing the weight of the acceleration information in sensor fusion results in greater estimation errors [24].
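To make the accelerometer-weighting dilemma concrete, the following is a minimal Python sketch of a complementary filter for roll estimation (an illustrative example, not the exact filter used in our tests); the gain alpha and the function name are our own illustrative choices.

import numpy as np

def complementary_filter_roll(gyro_x, acc_y, acc_z, dt, alpha=0.02):
    """Minimal complementary filter for the roll angle (rad).

    gyro_x: body roll rate samples (rad/s)
    acc_y, acc_z: accelerometer samples (m/s^2)
    alpha: weight on the accelerometer tilt angle (assumed value, needs tuning)
    """
    roll = 0.0
    estimate = []
    for gx, ay, az in zip(gyro_x, acc_y, acc_z):
        # Gyro integration: smooth but drifts over time
        roll_gyro = roll + gx * dt
        # Accelerometer tilt: drift-free but corrupted by flapping vibration
        roll_acc = np.arctan2(ay, az)
        # Convex blend of the two sources
        roll = (1.0 - alpha) * roll_gyro + alpha * roll_acc
        estimate.append(roll)
    return np.array(estimate)

With alpha = 0 the estimate drifts with the gyroscope bias, while a large alpha injects the flapping-induced accelerometer noise, which is exactly the trade-off observed in Figure 6.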
In fact, both platforms in this article show similar sensing issues once the wings start flapping. Relying on the IMU alone makes it hard to achieve long-term reliable state estimation.

3.2. Limitation of External Visual Sensors

In order to avoid such particular challenges with IMU readings, an external motion capture system could be a practical alternative. Such a system relies on several infrared cameras to track predefined objects in real time. Since it needs to process the information from all cameras, the processed data are updated slowly and with a certain delay, similar to other vision-based sensing approaches.
The visual feedback delay can be determined by conducting a delay calibration test. During the test, we change the coordinate of the tracked object instantly by switching the lighting sequence of infrared LED1 and LED2. The delay can be determined by synchronizing the LED switching command and the state feedback change in the time sequence, as shown in Figure 7. Three different data transmission schemes were implemented and tested: wired serial communication, 2.4 GHz wireless transmission via an nRF24L01 module, and ESP8266 WiFi transmission. The cable length of the serial communication is about 3 m; a long cable was used to avoid affecting the free flight performance of the FWR. The calibration result is shown in Figure 8. Wired serial communication has the lowest latency, at about 10 ms; the latency of the ESP8266 WiFi module is slightly higher, at about 20 ms; and the latency of the nRF24L01 transmission is the highest and has poor consistency. During the flight experiments conducted in Section 5, wired serial communication was used to provide stable and reliable pose feedback.
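For illustration, the delay extraction from such an LED-switching test can be sketched as follows; the function, the variable names, and the jump threshold are hypothetical and only show the synchronization idea.

import numpy as np

def estimate_feedback_delay(cmd_times, feedback_times, feedback_x, jump_threshold=0.05):
    """Estimate the visual feedback delay from an LED-switching test.

    cmd_times: timestamps (s) at which the LED switch command was issued
    feedback_times: timestamps (s) of the received OptiTrack frames
    feedback_x: tracked x coordinate (m); it jumps when the active LED changes
    jump_threshold: minimum coordinate jump (m) treated as a switch (assumed)
    """
    feedback_times = np.asarray(feedback_times)
    feedback_x = np.asarray(feedback_x)
    # Detect frames where the tracked coordinate jumps
    jump_idx = np.where(np.abs(np.diff(feedback_x)) > jump_threshold)[0] + 1
    jump_times = feedback_times[jump_idx]
    # Pair each command with the first jump observed after it
    delays = []
    for t_cmd in cmd_times:
        later = jump_times[jump_times >= t_cmd]
        if later.size:
            delays.append(later[0] - t_cmd)
    delays = np.array(delays)
    return delays.mean(), delays.std()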

4. State Estimation Framework

In order to address the specific sensing challenges identified in Section 3, we propose a state estimation method that properly integrates the two different sensors and provides fast and accurate state feedback for flight control. In this section, we first define the coordinate frames and vehicle states used in this study. Then, an EKF-based cross fusion framework is introduced in detail.

4.1. Spatial Frames

The spatial frames involved in our system consist of the following:
1. Vehicle body frame: the vehicle body frame is attached to the center of gravity (CoG) of the vehicle and denoted by b;
2. Onboard sensor frame: the onboard IMU sensor frame is a local frame in which the IMU generates 10-DoF inertial feedback of the vehicle, including three-axis acceleration a_b, three-axis angular rate ω_b, three-axis magnetic field m_b, and air pressure. In this study, we attach the IMU frame to the CoG of the test vehicle and treat it as the estimated body frame b;
3. Inertial frame: as shown in Figure 9, the frame in which the external visual feedback system operates is introduced as the inertial frame. Its origin is set arbitrarily by the vision system's calibration, and the z-axis is typically chosen to be orthogonal to the local ground plane.

4.2. Vehicle States

A generic rigid body assumption is introduced for the state prediction of general MAVs; physical model parameters such as mass and inertia are assumed to be constant. The dynamic model follows the Newton–Euler equations, shown in Equation (1):
$$
\dot{p} = v, \qquad
m\ddot{p} = R f_b + m g e_3, \qquad
\dot{R} = R\hat{\omega}_b, \qquad
J\dot{\omega}_b + \omega_b \times J\omega_b = \tau_b
\tag{1}
$$
where $p = [x, y, z]^T$ and $v = \dot{p}$ are the vehicle's position and velocity in the inertial frame, $R \in \mathbb{R}^{3\times3}$ is the rotation matrix, $f_b = [F_x, F_y, F_z]^T$ is the control force applied on the vehicle, $e_3$ is the unit vector $[0, 0, 1]^T$, $\omega_b = [\omega_x, \omega_y, \omega_z]^T$ is the body angular velocity, and $\hat{\omega}_b$ is the skew-symmetric matrix of the vector $\omega_b$, as shown in Equation (2). $J \in \mathbb{R}^{3\times3}$ is the inertia matrix, and $\tau_b = [\tau_x, \tau_y, \tau_z]^T$ is the control torque applied on the vehicle.
$$
\hat{\omega}_b = \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix}
\tag{2}
$$
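As a minimal numerical sketch (assuming the sign conventions of Equations (1) and (2) as written), the hat map and the rigid-body derivatives can be computed as follows; the function names and placeholder arguments are our own illustrative choices.

import numpy as np

def hat(w):
    """Skew-symmetric (hat) map of a 3-vector, Equation (2)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def rigid_body_derivatives(p, v, R, w_b, f_b, tau_b, m, J, g=9.81):
    """Continuous-time Newton-Euler derivatives, Equation (1)."""
    e3 = np.array([0.0, 0.0, 1.0])
    p_dot = v
    v_dot = (R @ f_b + m * g * e3) / m        # gravity term as written in Equation (1)
    R_dot = R @ hat(w_b)
    w_dot = np.linalg.solve(J, tau_b - np.cross(w_b, J @ w_b))
    return p_dot, v_dot, R_dot, w_dot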
The quaternion-based rotation matrix $R(q)$ is defined by Equation (3):
$$
R(q_k) = \begin{bmatrix}
q_1^2 + q_2^2 - q_3^2 - q_4^2 & 2(q_2 q_3 - q_1 q_4) & 2(q_1 q_3 + q_2 q_4) \\
2(q_2 q_3 + q_1 q_4) & q_1^2 - q_2^2 + q_3^2 - q_4^2 & 2(q_3 q_4 - q_1 q_2) \\
2(q_2 q_4 - q_1 q_3) & 2(q_1 q_2 + q_3 q_4) & q_1^2 - q_2^2 - q_3^2 + q_4^2
\end{bmatrix}
\tag{3}
$$
where $q = [q_1, q_2, q_3, q_4]$ is the quaternion vector. The transformation between the Euler angles $[\phi, \theta, \psi]$ and $q$ is given by Equation (4):
$$
\begin{bmatrix} \phi \\ \theta \\ \psi \end{bmatrix} =
\begin{bmatrix}
\arctan\dfrac{2(q_0 q_1 + q_2 q_3)}{1 - 2(q_1^2 + q_2^2)} \\[2ex]
\arcsin\bigl(2(q_0 q_2 - q_3 q_1)\bigr) \\[1ex]
\arctan\dfrac{2(q_0 q_3 + q_1 q_2)}{1 - 2(q_2^2 + q_3^2)}
\end{bmatrix}
\tag{4}
$$
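A small sketch of the quaternion-to-Euler conversion of Equation (4), using the scalar-first indexing $[q_0, q_1, q_2, q_3]$ and the two-argument arctangent, might look as follows (an illustrative helper, not part of the onboard code).

import numpy as np

def quat_to_euler(q):
    """Quaternion [q0, q1, q2, q3] (scalar first) to Euler angles, Equation (4)."""
    q0, q1, q2, q3 = q
    phi   = np.arctan2(2.0 * (q0 * q1 + q2 * q3), 1.0 - 2.0 * (q1**2 + q2**2))  # roll
    theta = np.arcsin(2.0 * (q0 * q2 - q3 * q1))                                # pitch
    psi   = np.arctan2(2.0 * (q0 * q3 + q1 * q2), 1.0 - 2.0 * (q2**2 + q3**2))  # yaw
    return np.array([phi, theta, psi])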
Moreover, the derivative of the quaternion is calculated using Equation (5):
$$
\dot{q} = \frac{1}{2}\, q \otimes [0, \omega_x, \omega_y, \omega_z]^T = \frac{1}{2}
\begin{bmatrix}
-q_2\omega_x - q_3\omega_y - q_4\omega_z \\
q_1\omega_x - q_4\omega_y + q_3\omega_z \\
q_4\omega_x + q_1\omega_y - q_2\omega_z \\
-q_3\omega_x + q_2\omega_y + q_1\omega_z
\end{bmatrix}
\tag{5}
$$
Based upon this, the vehicle state is defined as
$$
\hat{x} = [p, v, q, \omega_b]
\tag{6}
$$
The physical parameters involved (e.g., mass and inertia) can be easily obtained by simple system identification.

4.3. State Prediction

The generic system model can be written in the following discrete form, Equation (7):
$$
x_{k+1} = f(x_k, u_k, b_k) + w_k, \qquad y_{k+1} = H x_k + v_k
\tag{7}
$$
where $x = [x, y, z, \dot{x}, \dot{y}, \dot{z}, q_1, q_2, q_3, q_4, \omega_x, \omega_y, \omega_z]^T$ is the state vector, $u = [u_{thrust}, u_{roll}, u_{pitch}, u_{yaw}]^T$ is the control vector, $b$ is the constant sensing bias, $y$ is the output vector, $v$ and $w$ are zero-mean uncorrelated Gaussian noises, $k$ denotes the discrete time step, and $H$ is the observation matrix.
Analogous to the Kalman filter, a discrete state prediction is given by Equation (8):
$$
\begin{aligned}
p_{k+1} &= p_k + \dot{p}_k \Delta t \\
\dot{p}_{k+1} &= \dot{p}_k + \frac{R(q_k)\,[0, 0, F_L]^T \Delta t}{m} \\
q_{k+1} &= q_k + \dot{q}_k \Delta t \\
\omega_{k+1} &= \omega_k + \dot{\omega}_k \Delta t \\
b_{k+1} &= b_k
\end{aligned}
\tag{8}
$$
Here, $F_L$ is the body force, which is a function of the thrust input $u_{thrust}$, and $m$ is the mass of the microFWR.
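The Euler-integration prediction step of Equations (5) and (8) can be sketched as below; the lift model behind F_L and the added quaternion re-normalization are assumptions of this illustration rather than part of the onboard implementation.

import numpy as np

def quat_derivative(q, w):
    """Quaternion derivative, Equation (5): q_dot = 0.5 * q (x) [0, w]."""
    q1, q2, q3, q4 = q
    wx, wy, wz = w
    return 0.5 * np.array([-q2 * wx - q3 * wy - q4 * wz,
                            q1 * wx - q4 * wy + q3 * wz,
                            q4 * wx + q1 * wy - q2 * wz,
                           -q3 * wx + q2 * wy + q1 * wz])

def predict_state(p, v, q, w, w_dot, F_L, m, R_of_q, dt):
    """One Euler-integration prediction step, Equation (8).

    F_L: body force derived from the thrust input (model-dependent, assumed known)
    R_of_q: callable returning the rotation matrix of Equation (3)
    """
    p_next = p + v * dt
    v_next = v + R_of_q(q) @ np.array([0.0, 0.0, F_L]) * dt / m
    q_next = q + quat_derivative(q, w) * dt
    q_next /= np.linalg.norm(q_next)          # keep unit norm after the Euler step (added safeguard)
    w_next = w + w_dot * dt
    return p_next, v_next, q_next, w_next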
Angular acceleration can be derived from body dynamics using Equation (9).
$$
\dot{\omega} = \begin{bmatrix}
\dfrac{I_y - I_z}{I_x}\,\omega_y \omega_z + \dfrac{T_x}{I_x} \\[1.5ex]
\dfrac{I_z - I_x}{I_y}\,\omega_x \omega_z + \dfrac{T_y}{I_y} \\[1.5ex]
\dfrac{I_x - I_y}{I_z}\,\omega_x \omega_y + \dfrac{T_z}{I_z}
\end{bmatrix}
\tag{9}
$$
Here, $T_x$, $T_y$, and $T_z$ are the three-axis torques expressed in terms of the attitude control inputs $u_{roll}$, $u_{pitch}$, and $u_{yaw}$ [10].
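Assuming a diagonal inertia matrix as in Equation (9), the angular acceleration can be computed as in the following sketch (the mapping from control inputs to torques is left abstract).

import numpy as np

def angular_acceleration(w, T, J_diag):
    """Euler rotational dynamics with a diagonal inertia matrix, Equation (9)."""
    wx, wy, wz = w
    Tx, Ty, Tz = T            # control torques derived from u_roll, u_pitch, u_yaw
    Ix, Iy, Iz = J_diag
    return np.array([((Iy - Iz) / Ix) * wy * wz + Tx / Ix,
                     ((Iz - Ix) / Iy) * wx * wz + Ty / Iy,
                     ((Ix - Iy) / Iz) * wx * wy + Tz / Iz])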

4.4. Convex Combination Based Sensor Fusion

The estimation of the same flight state by two different sensors can be formulated as a convex combination problem [32,33]. As an important component of the proposed state estimation framework, the EKF can be treated as a recursive form of Gauss–Newton optimization applied to the classical Kalman filter [34]; the key is to update the linearization reference to address significant nonlinearity during filtering. In this study, we combine two EKFs following a cross-fusion law. The fused result preserves the strengths and compensates for the weaknesses of each sensor.
In particular, the data from the IMU and OptiTrack update at different rates; thus, two sets of EKF updates are implemented to estimate the state of the tested FWR. Since IMU data are available at a high sampling rate, the sensor fusion runs at the same frequency as the IMU update. The a priori estimate of the FWR state is given by Equation (10):
$$
\hat{x}_{k|k-1} = f\bigl(\hat{x}_{k-1|k-1}, u_k\bigr), \qquad
P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q_k
\tag{10}
$$
Here, $\hat{(\cdot)}$ denotes an estimated variable, $P_{k|k-1}$ is the a priori error covariance matrix, and $F_k$ is the Jacobian of $f(x, u)$ evaluated at $x_k$. Then, the measurement vector and the Kalman gain $K_k^n$ are updated using Equation (11):
$$
\hat{Y}_k^n = H_k^n \hat{x}_{k|k-1}, \qquad
S_k^n = H_k^n P_{k|k-1} (H_k^n)^T + R_k^n, \qquad
K_k^n = P_{k|k-1} (H_k^n)^T (S_k^n)^{-1}, \qquad n \in \{1, 2\}
\tag{11}
$$
Here, $Q_k$ and $R_k^n$ are the covariance matrices of the process noise $w$ and the measurement noise $v$ in Equation (7), $S_k^n$ is the observation error covariance matrix, and $I$ is the identity matrix. When IMU data are available and no OptiTrack data have been updated, we have $n = 1$, and the observation of the body angular rate $Y_k^1 = [g_x, g_y, g_z]^T$ is obtained from the gyroscope. When a new OptiTrack data frame is received, we have $n = 2$ and $Y_k^2 = [x, y, z, q_1, q_2, q_3, q_4]^T$. The corresponding observation matrices $H_k^1$ and $H_k^2$ and the observation covariance matrices $R_k^1$ and $R_k^2$ are given by Equation (12):
$$
H_k^1 = \begin{bmatrix} 0_{10\times10} & 0_{10\times3} \\ 0_{3\times10} & I_{3\times3} \end{bmatrix}, \qquad
H_k^2 = \begin{bmatrix}
I_{3\times3} & 0_{3\times3} & 0_{3\times4} & 0_{3\times3} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times4} & 0_{3\times3} \\
0_{4\times3} & 0_{4\times3} & I_{4\times4} & 0_{4\times3} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times4} & 0_{3\times3}
\end{bmatrix},
$$
$$
R_k^1 = \begin{bmatrix} 0_{10\times10} & 0_{10\times3} \\ 0_{3\times10} & 10^{-3}\, I_{3\times3} \end{bmatrix}, \qquad
R_k^2 = \begin{bmatrix}
10^{-6}\, I_{3\times3} & 0_{3\times3} & 0_{3\times4} & 0_{3\times3} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times4} & 0_{3\times3} \\
0_{4\times3} & 0_{4\times3} & 10^{-5}\, I_{4\times4} & 0_{4\times3} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times4} & 0_{3\times3}
\end{bmatrix},
$$
$$
Q_k = \begin{bmatrix}
10^{-1}\, I_{3\times3} & 0_{3\times3} & 0_{3\times4} & 0_{3\times3} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times4} & 0_{3\times3} \\
0_{4\times3} & 0_{4\times3} & 10^{-2}\, I_{4\times4} & 0_{4\times3} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times4} & 10^{-1}\, I_{3\times3}
\end{bmatrix}
\tag{12}
$$
Since the data fusion result may come from two different sensors, the state estimate using only the measurement information from sensor $n$ at time $k$, together with the corresponding a posteriori error covariance matrix, is given by Equation (13):
$$
\hat{x}_{k|k}^n = \hat{x}_{k|k-1} + K_k^n \bigl(Y_k^n - \hat{Y}_k^n\bigr), \qquad
P_{k|k}^n = \bigl(I - K_k^n H_k^n\bigr) P_{k|k-1}, \qquad n \in \{1, 2\}
\tag{13}
$$
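A generic sketch of the per-sensor measurement update of Equations (11) and (13) is given below; it is not the onboard implementation, and the pseudo-inverse is used only because the padded observation matrices of Equation (12) make the innovation covariance singular.

import numpy as np

def ekf_update(x_prior, P_prior, y, H, R):
    """Single-sensor EKF measurement update, Equations (11) and (13).

    x_prior, P_prior: a priori state and covariance from Equation (10)
    y: padded measurement vector (gyro rates for n = 1, OptiTrack pose for n = 2)
    H, R: observation matrix and measurement covariance for that sensor
    """
    y_hat = H @ x_prior                          # predicted measurement
    S = H @ P_prior @ H.T + R                    # innovation covariance
    K = P_prior @ H.T @ np.linalg.pinv(S)        # Kalman gain (pinv: H selects a state subset)
    x_post = x_prior + K @ (y - y_hat)
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x_post, P_post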
The estimation error is given by Equation (14).
$$
\tilde{x}_{k|k}^n = x - \hat{x}_{k|k}^n
\tag{14}
$$
We now consider the influence of the possible correlation between the local estimation errors: the local estimation errors of any two sensors are correlated, and this correlation should be taken into account when performing data fusion. The cross-covariance between the local estimation errors of the two sensors is given by Equation (15):
$$
P_{k|k}^{12} \triangleq E\bigl[\tilde{x}_{k|k}^{1} \bigl(\tilde{x}_{k|k}^{2}\bigr)^T\bigr]
= P_{k|k}^{1} \bigl(P_{k|k-1}\bigr)^{-1} P_{k|k}^{2}
\tag{15}
$$
Based on the above derivation, the proposed convex combination based algorithm takes into account the correlation between the estimation errors of each sensor, which is its key advantage. Nevertheless, this method requires the filter gains and the historical measurement matrices to be traced back to the step at which the last cross fusion was completed. Such a method requires considerable onboard computation and storage to iteratively calculate the covariance matrices between the estimation errors of the various sensors, which may lower its sensing efficiency significantly.

4.5. Cross Fusion Framework

In order to boost sensing efficiency, we propose a cross fusion framework based on a systematic consideration of the characteristics of the two sensors. Due to the high sensing frequency of the IMU, we use the OptiTrack update as the keyframe for sensor fusion. With the known OptiTrack delay step, we can trace back to the time step corresponding to the received OptiTrack data.
Based on the aforementioned state estimation methods, a cross-updating law is proposed to fuse the respective inertial and visual readings, which arrive at different frequencies and describe different states. Such a framework is illustrated in Figure 10.
In the framework, the inertial readings are available at a high updating frequency of up to one kilohertz. Therefore, they determine the maximum state estimation frequency, which matches the needs of flight control. In addition to the updating frequency, accuracy is also important. The sluggish external visual feedback plays an important role in guaranteeing estimation accuracy, as it initiates an online calibration to refresh the cached estimates in the buffer queue.
As shown in Algorithm 1, at a given sample frame, when an inertial reading is available but no visual reading has been updated, the vehicle relies on the inertial reading to perform state estimation, and the resulting state is cached in the buffer. The buffer size is mainly determined by the visual feedback frequency: it should cover enough historical frames to integrate the delayed visual feedback. When a new visual feedback frame is received, the algorithm traces back through the cached states to find the best match and instantiates a new state object. This object replaces the original state in the queue and triggers a rectification of the remaining states, which are refreshed using the cached inertial measurements to ensure fast convergence to the most recent state. Such a cross-updating law is able to eliminate sensing drift as well as sensing delay.
To further boost computational efficiency, we suggest introducing a delay compensation factor to capture the sensor delay in the algorithm, which can be obtained by a simple calibration test or by manual tuning. This factor determines a fixed backtracking step, omitting the computationally burdensome online matching process. For example, with a known delay, we can directly define the number of back steps in the cached state queue corresponding to the received visual data. After retrieving the matching cached state, it is used to update the initial frame and enable rectification until the most recent state feedback is ready. Meanwhile, since the forward calibration step has relatively low computational efficiency on some microcontrollers, an IMU pre-integration algorithm [35] is suggested for this step. As shown in Figure 10, after the backtracking step, the corrected attitude at time j is acquired; we can then calculate the relative attitude between times j and k using Equation (16):
$$
q_k^{\mathrm{correct}} = q_j^{\mathrm{correct}} \otimes q_{jk}, \qquad
q_{jk} = \prod_{t \in [j,k]} q_{jt} \otimes \begin{bmatrix} 0 \\ \tfrac{1}{2}\,\omega_t\, \delta t \end{bmatrix}
\tag{16}
$$
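A possible realization of this forward correction, accumulating the cached gyro samples between steps j and k into a relative quaternion in the spirit of Equation (16), is sketched below; quat_mul and the small-angle increment are our own illustrative choices.

import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions [w, x, y, z]."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([aw*bw - ax*bx - ay*by - az*bz,
                     aw*bx + ax*bw + ay*bz - az*by,
                     aw*by - ax*bz + ay*bw + az*bx,
                     aw*bz + ax*by - ay*bx + az*bw])

def preintegrate_rotation(q_j_correct, gyro_cache, dt):
    """Propagate a corrected attitude at step j forward to step k.

    gyro_cache: cached body rates omega_t for t in [j, k) (rad/s)
    dt: IMU sampling period (s)
    """
    q_jk = np.array([1.0, 0.0, 0.0, 0.0])        # identity relative rotation
    for w_t in gyro_cache:
        dq = np.concatenate(([1.0], 0.5 * np.asarray(w_t) * dt))   # small-angle increment
        q_jk = quat_mul(q_jk, dq)
        q_jk /= np.linalg.norm(q_jk)
    return quat_mul(q_j_correct, q_jk)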
Algorithm 1 Cross Fusion.
Notation: state x, history state x_cache, IMU reading I, history IMU I_cache
Output: x_estimate
1:  while True do
2:      step++
3:      if no OptiTrack update then
4:          x_predict = StatePredict()
5:          x_estimate = StateEstimateIMU()
6:      else
7:          j = step − Δt_delay
8:          x = x_cache(j)
9:          x_predict = StatePredict()
10:         x_estimate = StateEstimateOptiTrack()
11:         for j = step − Δt_delay + 1 to step + 1 do
12:             I = I_cache(j)
13:             x_predict = StatePredict()
14:             x_estimate = StateEstimateIMU()
15:             x_cache(j) = x_estimate
16:         end for
17:     end if
18:     I_cache(step) = I
19:     x_cache(step) = x_estimate
20: end while
21: return x_estimate
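To make the buffering and backtracking of Algorithm 1 concrete, a simplified Python sketch is given below; the predict/update callables, the buffer handling, and the delay_steps default are placeholders standing in for the prediction and update steps of Sections 4.3 and 4.4, not the onboard implementation.

class CrossFusionEstimator:
    """Simplified sketch of Algorithm 1 (illustrative, not the onboard code).

    predict, update_imu, update_vision: callables implementing the prediction
    and measurement-update steps of Sections 4.3 and 4.4 (assumed provided).
    """

    def __init__(self, predict, update_imu, update_vision, x0, delay_steps=50):
        self.predict = predict
        self.update_imu = update_imu
        self.update_vision = update_vision
        self.delay_steps = delay_steps            # visual feedback delay in IMU steps (assumed)
        self.x = x0
        self.x_cache = []                         # cached state estimates (a fixed-size ring buffer onboard)
        self.i_cache = []                         # cached IMU readings

    def step(self, imu, vision=None):
        if vision is None or not self.x_cache:
            # Fast path: IMU-only prediction and update
            self.x = self.update_imu(self.predict(self.x, imu), imu)
        else:
            # Keyframe: rewind to the step the delayed visual frame corresponds to
            j = max(len(self.x_cache) - self.delay_steps, 0)
            x = self.update_vision(self.predict(self.x_cache[j], imu), vision)
            # Re-propagate the cached IMU readings forward to the present
            for idx in range(j + 1, len(self.i_cache)):
                x = self.update_imu(self.predict(x, self.i_cache[idx]), self.i_cache[idx])
                self.x_cache[idx] = x
            self.x = x
        self.i_cache.append(imu)
        self.x_cache.append(self.x)
        return self.x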

5. Experimental Results

To validate the proposed method, we conducted real-world experiments on the two FWR platforms described in Section 2. During each flight test, three different state estimation methods—the pure IMU-based EKF, OptiTrack feedback, and the proposed method—were adopted and recorded to estimate the pose of the FWRs for comparison. Due to the limited capture area of our OptiTrack system, the sustained flight time differs between tests; for comparison, we take 10 seconds of data from each free flight experiment.

5.1. Sensor Fusion on FWR (a)

As shown in Figure 11a, in this test, the sensory system consists of an onboard MPU9250 inertial sensor together with offboard OptiTrack visual feedback. According to Figure 8, such offboard visual feedback can be treated as ground truth with a certain delay. The buffer stores 50 previous frames, covering 0.1 s of historical information. It is necessary to filter the raw sensor readings to remove noise; a low-pass filter (LPF) with a cut-off frequency of 50 Hz is implemented here.
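For reference, a first-order discrete low-pass filter of this kind can be sketched as follows (an illustrative implementation; the exact filter used onboard may differ).

import numpy as np

def low_pass_filter(samples, fs, fc=50.0):
    """First-order IIR low-pass filter.

    samples: raw sensor readings
    fs: sampling frequency in Hz (e.g., 512 Hz for FWR (a))
    fc: cut-off frequency in Hz
    """
    dt = 1.0 / fs
    rc = 1.0 / (2.0 * np.pi * fc)
    alpha = dt / (rc + dt)                     # smoothing factor from the cut-off frequency
    out = np.empty(len(samples), dtype=float)
    out[0] = samples[0]
    for i in range(1, len(samples)):
        out[i] = out[i - 1] + alpha * (samples[i] - out[i - 1])
    return out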
For this test platform, we focus on roll and pitch estimation, since the slight position drift and rotating yaw do not affect hover stability. Due to the reciprocal up–down motion of the flapping wing rotor, significant z-axis vibration is generated, which affects the accuracy of the inertial measurements and flight control. Consequently, roll angle estimation based merely on the inertial sensor gradually diverges, as shown in Figure 12c. The root mean square errors (RMSEs) of the IMU-only Extended Kalman filter and of the proposed fusion method are listed in Table 2. As shown in the zoomed area in Figure 12, the proposed method clearly eliminates the visual feedback delay. In this case, both the external vision-based sensor and the proposed framework properly track the motion of the test vehicle. The results show that the proposed method has the overall best capability to track ground truth without delay.

5.2. Sensor Fusion on FWR (b)

As shown in Figure 11b, in this test, we use FWR (b) as the test platform with the same buffer size for state estimation. Since it is much smaller and lighter than FWR (a), it is correspondingly more sensitive to vibration and sensing delay.
Compared with the former test, this case is more challenging: not only is the vehicle smaller and more agile, but the system oscillation is also more severe. The high-frequency reciprocating wing motion results in fierce vibration along the dorsal thorax direction of the test platform, causing inertial measurement drift. Therefore, the cut-off frequency of the LPF was changed to 150 Hz, while the buffer size remained the same as in the former test.
As a result, IMU-based state estimation produces an obvious sensing bias, as shown in Figure 13c. The RMSE comparison is shown in Table 3, which also includes the RMSE of complementary filter-based state estimation. Such inaccurate feedback can lead to a quick divergence of flight control, causing serious consequences. Moreover, in this case, the visual feedback delay does affect control stability, since it is already greater than a wingbeat cycle. As the delay directly corresponds to the response latency of the flight control, the inconsistent control commands inevitably cause stability issues. Both the EKF and the complementary filter demonstrate poor performance. To improve state estimation performance, the proposed framework was implemented. The result, shown in Figure 13, demonstrates reliable feedback without unforeseen drift or delay. Simultaneously, the proposed framework also eliminates the outlier data from OptiTrack induced by the violent oscillation of the platform, as shown in the dashed box in Figure 13a,b. Furthermore, the updating frequency of the proposed method is 1 kHz, which is the same as that of the inertial readings.
The estimation results show that the IMU-based method provides high-frequency feedback, but the real-time attitude of the microFWRs has an obvious bias. Comparing Table 2 and Table 3, a worse result was obtained on the smaller platform, which indicates that severe oscillation leads to unmanageable measurement noise in the accelerometer. OptiTrack feedback provides accurate pose measurements, but it is affected by a low sampling rate and transmission delay, and violent vibration also results in some undesired outliers. By combining the advantages of both and taking the transmission delay into consideration, the proposed method provides state estimation with both high frequency and high accuracy.
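For clarity, the RMSE metric reported in Tables 2 and 3 can be computed as in the following sketch; treating the delay-compensated OptiTrack pose as the reference is our reading of Section 5.1 rather than a stated definition.

import numpy as np

def rmse(estimate, reference):
    """Root mean square error between an attitude estimate and the reference (deg)."""
    e = np.asarray(estimate) - np.asarray(reference)
    return float(np.sqrt(np.mean(e ** 2)))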

6. Conclusions

In this study, a generic state estimation solution was proposed for microFWRs. The contribution mainly lies in the cross fusion framework, which integrates the high sampling frequency of the inertial sensor and the accuracy of the visual feedback, yielding real-time and drift-free estimation results. The framework has been experimentally validated on two sophisticated FWRs with different actuation principles, and both demonstrate high-frequency and high-fidelity sensing performance simultaneously. As observed from the results, pure IMU-based state estimation can provide high-frequency attitude feedback with low latency, but the severe vibration of the FWR platforms results in a high sensing bias. Pure visual feedback can provide the accurate pose of the FWRs, but its low update rate and high latency make it unsuitable for flight control applications. The proposed method provides pose estimation with both a high sampling frequency and high accuracy for microFWR platforms subject to violent vibration during flight, which is essential for the agile maneuvering control of microFWRs. Based on the bench tests, the proposed approach holds great promise for being generalized to agile MAVs of different scales and design principles. In the future, we will implement this method on different platforms to study its portability.

Author Contributions

Conceptualization, Z.T. and X.D.; methodology, Z.T., X.D. and F.F.; software, X.D.; validation, Z.W., F.L. and S.L.; formal analysis, X.D.; investigation, Z.W.; resources, D.L.; data curation, Z.W.; writing—original draft preparation, X.D.; writing—review and editing, Z.T. and F.F.; visualization, X.D.; supervision, D.L.; project administration, Z.T.; funding acquisition, Z.T. and D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 52102431 and A020314.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors gratefully acknowledge the support of the National Natural Science Foundation of China. In addition, our thanks are given to the editors and reviewers for contributing to the final form of this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Guo, S.; Li, D.; Wu, J. Theoretical and experimental study of a piezoelectric flapping wing rotor for micro aerial vehicle. Aerosp. Sci. Technol. 2012, 23, 429–438.
  2. Keennon, M.; Klingebiel, K.; Won, H. Development of the Nano Hummingbird: A Tailless Flapping Wing Micro Air Vehicle. In Proceedings of the 50th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, Nashville, TN, USA, 9–12 January 2012.
  3. Ma, K.Y.; Chirarattananon, P.; Fuller, S.B.; Wood, R.J. Controlled flight of a biologically inspired, insect-scale robot. Science 2013, 340, 603–607.
  4. Roll, J.A.; Bardroff, D.T.; Deng, X. Mechanics of a scalable high frequency flapping wing robotic platform capable of lift-off. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 4664–4671.
  5. Phan, H.V.; Kang, T.; Park, H.C. Design and stable flight of a 21 g insect-like tailless flapping wing micro air vehicle with angular rates feedback control. Bioinspir. Biomim. 2017, 12, 036006.
  6. Fuller, S. Four Wings: An Insect-Sized Aerial Robot with Steering Ability and Payload Capacity for Autonomy. IEEE Robot. Autom. Lett. 2019, 4, 570–577.
  7. Karásek, M.; Muijres, F.T.; Wagter, C.D.; Remes, B.D.W.; de Croon, G.C.H.E. A tailless aerial robotic flapper reveals that flies use torque coupling in rapid banked turns. Science 2018, 361, 1089–1094.
  8. Chen, Y.; Zhao, H.; Mao, J.; Chirarattananon, P.; Wood, R.J. Controlled flight of a microrobot powered by soft artificial muscles. Nature 2019, 575, 324–329.
  9. Wang, C.; Zhang, W.; Zou, Y.; Meng, R.; Zhao, J.; Wei, M. A Sub-100 mg Electromagnetically Driven Insect-inspired Flapping-wing Micro Robot Capable of Liftoff and Control Torques Modulation. J. Bionic Eng. 2020, 17, 1085–1095.
  10. Xin, D.; Daochun, L.; Xiang, J.; Ziyu, W. Design and experimental study of a new flapping wing rotor micro aerial vehicle. Chin. J. Aeronaut. 2020, 33, 3092–3099.
  11. Tu, Z.; Fei, F.; Deng, X. Untethered flight of an at-scale dual-motor hummingbird robot with bio-inspired decoupled wings. IEEE Robot. Autom. Lett. 2020, 5, 4194–4201.
  12. Wu, J.H.; Zhou, C.; Zhang, Y.L. A novel design in micro-air-vehicle: Flapping rotary wings. In Applied Mechanics and Materials; Trans Tech Publications Ltd.: Stafa-Zurich, Switzerland, 2012; Volume 232, pp. 189–193.
  13. Wu, J.; Wang, D.; Zhang, Y. Aerodynamic analysis of a flapping rotary wing at a low Reynolds number. AIAA J. 2015, 53, 2951–2966.
  14. Li, H.; Guo, S.; Zhang, Y.; Zhou, C.; Wu, J. Unsteady aerodynamic and optimal kinematic analysis of a micro flapping wing rotor. Aerosp. Sci. Technol. 2017, 63, 167–178.
  15. Li, H.; Guo, S. Aerodynamic efficiency of a bioinspired flapping wing rotor at low Reynolds number. R. Soc. Open Sci. 2018, 5, 171307.
  16. Guo, S.; Li, H.; Zhou, C.; Zhang, Y.; He, Y.; Wu, J. Analysis and experiment of a bio-inspired flyable micro flapping wing rotor. Aerosp. Sci. Technol. 2018, 79, 506–517.
  17. Shao, H.; Li, D.; Kan, Z.; Li, H.; Yuan, D.; Xiang, J. Influence of wing camber on aerodynamic performance of flapping wing rotor. Aerosp. Sci. Technol. 2021, 113, 106732.
  18. Chen, S.; Wang, L.; He, Y.; Tong, M.; Pan, Y.; Ji, B.; Guo, S. Aerodynamic performance of a flyable flapping wing rotor with passive pitching angle variation. IEEE Trans. Ind. Electron. 2021, in press.
  19. Chen, S.; Wang, L.; Guo, S.; Zhao, C.; Tong, M. A Bio-Inspired Flapping Wing Rotor of Variant Frequency Driven by Ultrasonic Motor. Appl. Sci. 2020, 10, 412.
  20. Beloev, I.H. A review on current and emerging application possibilities for unmanned aerial vehicles. Acta Technol. Agric. 2016, 19, 70–76.
  21. Madgwick, S.O.; Harrison, A.J.; Vaidyanathan, R. Estimation of IMU and MARG orientation using a gradient descent algorithm. In Proceedings of the 2011 IEEE International Conference on Rehabilitation Robotics, Zurich, Switzerland, 29 June–1 July 2011; pp. 1–7.
  22. Loianno, G.; Brunner, C.; McGrath, G.; Kumar, V. Estimation, control, and planning for aggressive flight with a small quadrotor with a single camera and IMU. IEEE Robot. Autom. Lett. 2016, 2, 404–411.
  23. Verboom, J.; Tijmons, S.; De Wagter, C.; Remes, B.; Babuska, R.; de Croon, G.C. Attitude and altitude estimation and control on board a flapping wing micro air vehicle. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 5846–5851.
  24. Tu, Z.; Fei, F.; Yang, Y.; Zhang, J.; Deng, X. Realtime on-board attitude estimation of high-frequency flapping wing mavs under large instantaneous oscillation. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 6806–6811.
  25. Fuller, S.B.; Helbling, E.F.; Chirarattananon, P.; Wood, R.J. Using a MEMS Gyroscope to Stabilize the Attitude of a Fly-Sized Hovering Robot. 2014. Available online: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.951.1367&rep=rep1&type=pdf (accessed on 7 March 2022).
  26. Rongfa, M.N.; Pantuphag, T.; Srigrarom, S. Analysis of kinematics of flapping wing uav using optitrack systems. Aerospace 2016, 3, 23.
  27. Merriaux, P.; Dupuis, Y.; Boutteau, R.; Vasseur, P.; Savatier, X. A study of vicon system positioning performance. Sensors 2017, 17, 1591.
  28. Zhong, S.; Chirarattananon, P. An efficient iterated EKF-based direct visual-inertial odometry for MAVs using a single plane primitive. IEEE Robot. Autom. Lett. 2020, 6, 486–493.
  29. Taylor, G.K.; Krapp, H.G. Sensory systems and flight stability: What do insects measure and why? Adv. Insect Physiol. 2007, 34, 231–316.
  30. Elzinga, M.J.; Dickson, W.B.; Dickinson, M.H. The influence of sensory delay on the yaw dynamics of a flapping insect. J. R. Soc. Interface 2012, 9, 1685–1696.
  31. Tu, Z.; Fei, F.; Zhang, J.; Deng, X. An at-scale tailless flapping-wing hummingbird robot. I. Design, optimization, and experimental validation. IEEE Trans. Robot. 2020, 36, 1511–1525.
  32. D’Alfonso, L.; Grano, A.; Muraca, P.; Pugliese, P. Sensor fusing using a convex combination of two Kalman filters—Experimental results. In Proceedings of the 2013 16th International Conference on Advanced Robotics (ICAR), Montevideo, Uruguay, 25–29 November 2013.
  33. Mourikis, A.; Roumeliotis, S. A Multi-State Constraint Kalman Filter for Vision-Aided Inertial Navigation. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Rome, Italy, 10–14 April 2007; Volume 22, pp. 3565–3572.
  34. Bell, B.M.; Cathey, F.W. The iterated Kalman filter update as a Gauss-Newton method. IEEE Trans. Autom. Control 1993, 38, 294–297.
  35. Forster, C.; Carlone, L.; Dellaert, F.; Scaramuzza, D. On-manifold preintegration for real-time visual–inertial odometry. IEEE Trans. Robot. 2016, 33, 1–21.
Figure 1. Illustration of test platforms: (a) a linkage-drive three-wing microFWR; (b) a motor direct-drive twin-wing microFWR.
Figure 2. Driving principle of FWR (a).
Figure 3. Driving principle of FWR (b).
Figure 4. The setup of motion capture system.
Figure 5. The raw IMU data of FWR (b).
Figure 6. Several mature sensor fusion solutions of FWR (b) IMU data.
Figure 7. The delay calibration setup.
Figure 8. Delay calibration result.
Figure 9. The inertial and body frames of the FWR.
Figure 10. Framework of proposed method.
Figure 11. Sensor fusion experiments. (a) FWR (a); (b) FWR (b).
Figure 12. Attitude estimation result on FWR (a): (a) attitude estimation result using the proposed method; (b) delayed OptiTrack attitude feedback; (c) attitude estimation result using the EKF.
Figure 13. Attitude estimation result on FWR (b): (a) attitude estimation result using the proposed method; (b) delayed OptiTrack attitude feedback; (c) attitude estimation result using the EKF.
Table 1. Wing parameters, mass, and inertia of the test platforms.
Test Platform | MicroFWR (a) | MicroFWR (b)
Vehicle Parameters
Wing length (R_w) | 120 mm | 85 mm
Wingbeat frequency (f) | 16 Hz | 31 Hz
Total weight (m) | 27 g | 12.5 g
x-axis moment of inertia (J_xx) | 70,399 g·mm² | 4238.13 g·mm²
y-axis moment of inertia (J_yy) | 68,782 g·mm² | 3970.16 g·mm²
z-axis moment of inertia (J_zz) | 29,056 g·mm² | 2440.95 g·mm²
Sensor Specifications
IMU sampling rate | 512 Hz | 1024 Hz
Gyroscope measurement range | ±2000 deg/s | ±2000 deg/s
Accelerometer measurement range | ±16 g | ±16 g
Vision feedback frequency | 100 Hz | 120 Hz
Table 2. RMSE of different fusion methods on FWR (a).
Method | Roll | Pitch
Proposed Method | 1.8160 | 1.7444
OptiTrack with delay | 3.4175 | 3.9250
Extended Kalman Filter | 6.0504 | 5.8439
Table 3. RMSE of different fusion methods on FWR (b).
Method | Roll | Pitch
Proposed Method | 5.3086 | 4.3171
OptiTrack with delay | 6.1177 | 5.3715
Extended Kalman Filter | 11.1964 | 9.5309
Complementary Filter | 10.8845 | 7.8824

