Article

An Evaluation of MEMS-IMU Performance on the Absolute Trajectory Error of Visual-Inertial Navigation System

1 State Key Laboratory of Transducer Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
2 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Micromachines 2022, 13(4), 602; https://doi.org/10.3390/mi13040602
Submission received: 15 February 2022 / Revised: 4 April 2022 / Accepted: 9 April 2022 / Published: 12 April 2022
(This article belongs to the Special Issue Advances in MEMS Theory and Applications)

Abstract

Nowadays, accurate and robust localization is a prerequisite for achieving high autonomy in robots and other emerging applications. Increasingly, multiple sensors are fused to meet these requirements, and a large body of related work, such as visual-inertial odometry (VIO), has been developed. Benefiting from the complementary sensing capabilities of the IMU and the camera, such research has solved many problems. However, few studies pay attention to the impact of IMUs with different performance grades on the accuracy of sensor fusion. In actual scenarios, especially in the case of massive hardware deployment, the question arises of how to choose an IMU appropriately. In this paper, we chose six representative IMUs with different performances, from consumer grade to tactical grade, for exploration. Based on the final performance of VIO with different IMUs in different scenarios, we analyzed the absolute trajectory error of the Visual-Inertial System (VINS_Fusion). The assistance of an IMU can improve the accuracy of multi-sensor fusion, but the improvement in fusion accuracy with different grades of MEMS-IMU is not very significant across the eight experimental scenarios; a consumer-grade IMU can also achieve an excellent result. In addition, an IMU with low noise is more versatile and stable in various scenarios. The results chart a route for the development of Inertial Navigation System (INS) fusion with visual odometry and, at the same time, provide a guideline for the selection of IMUs.

1. Introduction

In recent years, with the rapid growth of autonomous driving [1], augmented reality [2], virtual reality [3], and other emerging applications, accurately obtaining localization information has become a crucial premise and foundation. To fulfill the requirements of these applications, much exploration and novel work has been carried out by researchers, such as work that fuses WiFi, IMU, and floorplan information [4]. Among the many localization and navigation methods, inertial navigation is currently one of the mainstream approaches. The Inertial Measurement Unit (IMU) is the prerequisite of inertial navigation and plays a considerable role indoors, among urban high-rise buildings, in planetary exploration, and in other GPS-denied scenes. Indoor positioning technologies that rely heavily on IMUs and other modal sensors fall into two major categories: building-independent and building-dependent. Building-independent approaches draw support from image-based technologies and dead reckoning [5]. Building-dependent localization is realized by multi-modal sensors such as Wi-Fi, Bluetooth, Ultra-Wide Band, Visible Light Communication, etc. With the growing diversity of positioning technology, hybrid indoor positioning schemes based on smartphone cameras and IMUs are also emerging [6]. The IMU is significant for positioning and orientation applications: it is a device composed of a triaxial accelerometer, which senses the linear acceleration of the body coupled with gravity, and a triaxial gyroscope, which senses the angular velocity, with the three axes orthogonal to each other. Inevitably, because the IMU is corrupted by inherent factors such as bias, noise, and random walk, purely inertial localization becomes increasingly unreliable over time; this is exacerbated in low-cost IMUs.
Thanks to the rapid improvement in platform computing power, vision-based positioning methods have become increasingly mature, such as visual odometry [7], which allows a robot in an unknown environment to estimate its pose from image information while moving and exploring at the same time. In addition, continuous breakthroughs in deep learning are driving another new trend in visual odometry, such as DeepVO [8], which is based on deep learning, and RoadMap [1], which is based on a semantic map; the end-to-end manner increases adaptability and robustness to the scene. Gratifyingly, visual odometry accumulates less trajectory drift than the inertial method, and its long-term stability is better than that of inertial-based methods. The camera is an exteroceptive sensor used for passive localization and captures rich scene information. However, it is easily defeated by dynamic environments, weak texture, fast motion, drastic lighting changes, and other challenging scenes, and monocular visual odometry has no distance perception. As a proprioceptive sensor, the IMU localizes actively and is not susceptible to the environment; however, it is easily affected by noise, bias, and other inherent factors. How to fuse cameras, inertial sensors, and further information has therefore become a research hotspot. To date, researchers have solved many problems by fusing cameras and IMUs, and many visual-inertial odometry (VIO) algorithms have been developed [9,10,11,12,13,14,15,16,17,18,19,20]. The schemes for fusing cameras and IMUs can be broadly categorized into loosely-coupled [18,19,20] and tightly-coupled [10,11,12,13,14,15,16,17]. In terms of the state estimation algorithm, VIO is also broadly divided into filtering-based [12,14,15] and optimization-based [10,11,13,16,17]. Approaches based on optimization and tight coupling have more potential for accuracy [21].
As mentioned above, VIO-related algorithms have developed rapidly. Nevertheless, few of them pay attention to the impact of IMUs with different performances on the accuracy of sensor fusion. What are the requirements for IMU performance in different situations? What is the impact of IMUs with different performances on the accuracy of multi-sensor fusion, especially for the emerging visual odometry technology? Analyzing this problem is significant for the hardware configuration and deployment of multi-sensor fusion systems, the development of IMU-based fusion localization technology, and other issues. To make this clearer, we chose the tightly-coupled algorithm VINS_Fusion [10,11] as the evaluation framework because it is representative of methods based on sliding-window optimization. Our platform is a wheeled robot running on an actual road, as shown in Figure 1. The main contributions of this study are summarized below.
We propose a comparative evaluation framework to test the impact of different grades of MEMS-IMU on the accuracy of the VIO algorithm, as shown in Figure 2. Eight different experiments were designed, and a comprehensive evaluation was carried out based on the absolute trajectory error. For a wide range of researchers, our experimental results are informative for sensor configuration and algorithms, and they clearly show the specific performance of different grades of MEMS-IMU in terms of VIO fusion accuracy, so that a MEMS-IMU can be conveniently selected for specific scenarios.
The rest of the paper is structured as follows. In Section 2, the related work on evaluation and analysis experiments and IMU selection is discussed. In Section 3, the framework of the overall experiment is presented, divided into four parts: hardware platform (A), sensor setup (B), sensor parameter configuration (C), and evaluation (D). In Section 4, the results of the experiments are analyzed and discussed. Finally, the paper is concluded in Section 5. In addition, we provide the absolute trajectory errors of the multiple IMUs in Appendix A.

2. Related Work

In order to make positioning technologies more suitable for deployment and wide use, much related work has been proposed. The studies in the subsection Evaluation and Analysis Work paid more attention to the performance of many open-source algorithms on CPUs/GPUs. The subsection Multi-IMUs Work summarizes work on multiple IMUs, but the IMU types involved are single and the grade span is not wide; these works mainly use redundant sensor groups to enhance the stability of the system. In addition, in some works, MEMS-IMUs were evaluated for user convenience.

2.1. Evaluation and Analysis Work

Six monocular visual-inertial algorithms were compared on several computing-constrained, single-board computers, and the accuracy of each algorithm, the time consumed per frame, and the utilization of CPU and memory were evaluated [22]. Similarly, Jinwoo et al. [23] compared the performance of nine visual(-inertial) odometry algorithms on different hardware platforms (Jetson TX2, Xavier NX, and AGX Xavier) and evaluated the CPU utilization and corresponding pose accuracy of each algorithm on each platform. Giubilato et al. [24] evaluated the performance of visual odometry algorithms on the Jetson TX2 and reported the robustness, CPU/GPU utilization, frame rate, and so on. Alwin et al. [25] comprehensively evaluated and analyzed the performance of five sensor fusion algorithms for heading estimation using smartphone sensors, namely LKF, EKF, UKF, PF, and CF. These works are important for the selection of embedded platforms, fusion performance, and software deployment. However, they are aimed at comparing computing hardware and algorithms without considering the performance of the algorithms with different IMUs. The impact of IMUs with different performances on the accuracy of VIO remains unexplored.

2.2. Multi-IMUs Work

Three different IMU sensors were considered in [26,27], but they were mainly used to improve the robustness of the state estimation. Kevin et al. [27] utilized the information from multiple inertial measurement units in order to remain resilient when one of them suffers sensor failures, without considering the impact of different levels of IMU on the system. The authors in [28] exploited four IMUs and magnetometers at different positions to calculate angular velocity and velocity information; however, the performance of the IMU was not considered further. In [29], Chao conducted a comparative investigation and evaluation of some low-cost IMUs, but focused on the sensor packages and available software solutions and listed their specifications. The authors in [30] evaluated the yaw-angle accuracy of the IMUs in two smartphones and the inertial sensor in the X-IMU; the IMU in the smartphone is the MPU6500, the IMU module is produced by BOSCH, and the X-IMU refers to a combined IMU and AHRS whose specific model is not indicated. The two IMUs used for comparison in that work are consumer IMUs, and IMUs with different performances are not considered. In [31], the authors evaluated four orientation algorithms (Madgwick, Mahony, EKF, TKF-Q) in a simulation environment. Nevertheless, even when noise and bias instability are simulated, a real IMU is affected by many factors that cannot be represented in a simulation environment, and these IMUs were not evaluated in the actual environment. In [32], the authors evaluated the ZUPT, ZARU, and HDR algorithms, which focus on heading drift reduction in pedestrian indoor navigation with consumer-grade IMUs; they focused on the error of the algorithms and did not analyze the impact of IMUs with different performances. In [33], the authors evaluated seven commercial IMUs for persistent healthcare-related use cases. Through analysis of the operating environment, cost performance, battery life, memory size, and other specifications, selection criteria for IMUs were obtained. That work considers more the convenience of these modules in use, the performance span of the different IMUs is relatively small, and they are all consumer IMUs.
In contrast to the related work above, which did not pay attention to the grade and diversity of MEMS-IMUs or to the impact of the IMU on VIO fusion algorithms, in this paper we chose six representative IMUs with different performances, from consumer grade to tactical grade, for exploration and analysis.

3. Experiments and Methods

The specific framework of the experiment is shown in Figure 2. The hardware platform, including the mobile platform and the multi-IMU camera suite, was built first, as shown in Figure 2. Then, the sensor parameter configuration was conducted, including Allan variance analysis, temporal-spatial calibration between the IMUs and the camera, etc. Next, the algorithm was executed in the specified scene by controlling the hardware platform while recording the RTK (Real-Time Kinematic) data as the ground truth at the same time. The localization accuracy of the RTK is ±(8 + 1 × 10−6 D) mm.
In order to explore the influence of the IMU on sensor fusion accuracy, five kinds of motion states were designed: slow, normal, fast, varying velocity, and spin move. In these scenes, the IMU receives different levels of excitation, which helps to identify its capability. For example, in uniform-velocity scenes the IMU excitation is small, whereas the IMU is excited much more in the varying-velocity and spin-move scenes. At the same time, experiments in a strong-light environment with different motion states, as well as a weak-texture special scene used to analyze the auxiliary effect of the IMU, were also carried out. In addition, the performance of the fusion system with different IMUs under a long-term running condition lasting 30 min was also explored. The specific scenes are shown in Table 1. The tightly-coupled Visual-Inertial System algorithm (VINS_Fusion [10,11]) was executed as the evaluation framework. Firstly, feature detection [34] and tracking were carried out by leveraging the visual information. By preintegrating [35,36] the IMU information, motion constraints were built with regard to the timestamps of adjacent image frames. Then, the initialization procedure was invoked using the feature information and the preintegration information; in order to maintain a constant amount of computation, an optimization algorithm based on a sliding window was adopted. Finally, because the two types of trajectories did not belong to the same coordinate frame, alignment was carried out by the iterative closest point (ICP) algorithm [37], and the absolute trajectory error (ATE) of the corresponding experiment was evaluated.
The ATE can be calculated by comparing the ground-truth values with the visual-inertial odometry results. $p_i^{rtk}$ represents the true position at timestamp $i$, $p_i^{odo}$ expresses the position output of the VIO, and $E_i$ represents the ATE. As shown in Appendix A, the RMSE, median, and mean statistical errors are calculated from the ATE. Through the rigid body transformation $S$ composed of $R$ and $t$, we can align the estimated results to the true values. To solve for $S$, we aligned the estimated trajectory with the RTK trajectory using ICP [37].
$$E_i = \left( p_i^{rtk} \right)^{-1} S\left( p_i^{odo} \right)$$
$$\mathrm{RMSE}(E_{1:n}) = \left( \frac{1}{n} \sum_{i=1}^{n} \left\| E_i \right\|^2 \right)^{1/2}$$
$$\min_{R,t} J = \frac{1}{2} \sum_{i=1}^{k} \left\| p_i^{rtk} - \left( R\, p_i^{odo} + t \right) \right\|_2^2$$
There is little drift for a period of time after the system starts executing, so the initial trajectory segment is used for alignment. If the whole trajectory were used for alignment, errors accumulated over a long time would be redistributed by the transformation matrix $S$, giving rise to inconsistent errors. Here, our trajectory segment selects the first 70 data points ($k = 70$) of the whole trajectory, that is, the data of the first 15 s. In the experiment, because the vehicle ran on an approximately two-dimensional plane, we concentrated on the localization accuracy in the $x$ and $y$ directions. In Appendix A, we summarize the comparison tables of algorithm errors in the different scenarios.
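For illustration, the following minimal Python sketch (not the evaluation tooling used in the paper) aligns an estimated planar trajectory to the RTK trajectory with the closed-form SVD solution computed over the first $k = 70$ matched points and then reports the mean, median, and RMSE of the ATE; the function names and the random toy trajectories are hypothetical.

```python
import numpy as np

def align_first_k(p_rtk, p_odo, k=70):
    """Estimate a rigid transform (R, t) mapping the first k odometry points
    onto the RTK points, using the SVD-based closed-form solution."""
    A, B = p_odo[:k], p_rtk[:k]
    mu_a, mu_b = A.mean(axis=0), B.mean(axis=0)
    H = (A - mu_a).T @ (B - mu_b)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_b - R @ mu_a
    return R, t

def ate_stats(p_rtk, p_odo, k=70):
    """Absolute trajectory error after aligning with the initial segment."""
    R, t = align_first_k(p_rtk, p_odo, k)
    err = np.linalg.norm(p_rtk - (p_odo @ R.T + t), axis=1)
    return {"mean": err.mean(),
            "median": np.median(err),
            "rmse": np.sqrt((err ** 2).mean())}

# Toy planar (x, y) trajectories; real inputs would be the time-synchronized
# RTK log and the VIO output.
rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(size=(500, 2)), axis=0)
odo = truth + rng.normal(scale=0.05, size=truth.shape)
print(ate_stats(truth, odo))
```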

3.1. Fusion Algorithm of VIO

3.1.1. System States

The state vector of the system includes the $n+1$ pose states $x$ in the sliding window and $m+1$ landmarks, where $n$ represents the size of the sliding window.
$$X_{system} = \left[ x_0, x_1, \ldots, x_n, \lambda_0, \lambda_1, \ldots, \lambda_m \right]$$
The pose state x k includes position, velocity, orientation, accelerometer bias, and gyroscope bias.
$$x_k = \left[ p_{wb_k},\ v_{wb_k},\ q_{wb_k},\ b_a,\ b_g \right]$$
$(\cdot)_{wb_k}$ represents the state of the body frame $b$ with respect to the world frame $w$ at time $k$. $b_a$ and $b_g$ represent the accelerometer bias and gyroscope bias, respectively.
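As a minimal illustration of how these optimization variables could be laid out in code (a sketch only, not the VINS_Fusion implementation; all names are hypothetical):

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class PoseState:
    """One pose state x_k kept in the sliding window."""
    p_wb: np.ndarray = field(default_factory=lambda: np.zeros(3))               # position in world frame
    v_wb: np.ndarray = field(default_factory=lambda: np.zeros(3))               # velocity in world frame
    q_wb: np.ndarray = field(default_factory=lambda: np.array([1.0, 0, 0, 0]))  # orientation quaternion (w, x, y, z)
    b_a: np.ndarray = field(default_factory=lambda: np.zeros(3))                # accelerometer bias
    b_g: np.ndarray = field(default_factory=lambda: np.zeros(3))                # gyroscope bias

@dataclass
class SystemState:
    """Sliding-window state: n+1 poses plus m+1 landmark parameters."""
    poses: list              # [x_0, ..., x_n]
    landmarks: np.ndarray    # [lambda_0, ..., lambda_m]
```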

3.1.2. Visual Constraints

Using the extracted feature points, we can construct visual constraints. According to the reprojection process of the camera, we can construct the error cost function, which is called the reprojection error, and the error is expressed as the estimated value minus the measured value:
$$r_{visual} = \begin{bmatrix} \dfrac{x_{c_j}}{z_{c_j}} - u_{c_j} \\[2mm] \dfrac{y_{c_j}}{z_{c_j}} - v_{c_j} \end{bmatrix}$$
$\left[ x_{c_j}\ y_{c_j}\ z_{c_j} \right]^T$ represents the estimated position of a feature point of the $i$th frame projected into the $j$th frame camera coordinate system according to the transformation matrix $T$; the specific projection formula is shown below. The feature points of the $i$th frame are first transformed into the world coordinate system by the pose of the $i$th frame, and then the estimated reprojection of these feature points in the $j$th frame is obtained through the pose of the $j$th frame, where $\frac{1}{\lambda}$ denotes the depth information. $\left[ u_{c_j}\ v_{c_j} \right]^T$ represents the measured value of the feature point in the $j$th frame camera coordinate system.
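A small Python sketch of this residual is given below, assuming normalized image coordinates, an inverse-depth parameterization, and known camera-IMU extrinsics; the function and argument names are illustrative rather than taken from VINS_Fusion.

```python
import numpy as np

def reprojection_residual(uv_i, inv_depth, uv_j,
                          R_wbi, p_wbi, R_wbj, p_wbj, R_bc, p_bc):
    """Residual of a feature observed at normalized coords uv_i in frame i
    (with inverse depth) and re-observed at normalized coords uv_j in frame j.
    Rotations are 3x3 matrices; (R_bc, p_bc) are the camera-IMU extrinsics."""
    # back-project the feature into camera i, then chain c_i -> b_i -> world
    pt_ci = np.array([uv_i[0], uv_i[1], 1.0]) / inv_depth
    pt_bi = R_bc @ pt_ci + p_bc
    pt_w = R_wbi @ pt_bi + p_wbi
    # world -> b_j -> camera j
    pt_bj = R_wbj.T @ (pt_w - p_wbj)
    pt_cj = R_bc.T @ (pt_bj - p_bc)
    x, y, z = pt_cj
    # estimated minus measured, as in the text
    return np.array([x / z - uv_j[0], y / z - uv_j[1]])
```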

3.1.3. Pre-integration of IMU

In the VIO algorithm, the output frequencies of the camera and the IMU are different: the camera is generally about 30 Hz and the IMU about 200 Hz. To match the image and IMU measurements, we need to preintegrate the IMU information. IMU pre-integration changes the reference coordinate system to the body coordinate system of the previous frame rather than the world frame. This information is regarded as the motion constraint provided by the IMU.
Through the pre-integration [35,36], the following motion constraints can be constructed:
$$r_{IMU} = \begin{bmatrix} r_p \\ r_q \\ r_v \\ r_{b_a} \\ r_{b_g} \end{bmatrix} = \begin{bmatrix} q_{b_i w}\left( p_{wb_j} - p_{wb_i} - v_i^w \Delta t + \frac{1}{2} g^w \Delta t^2 \right) - \alpha_{b_i b_j} \\ 2\left[ q_{b_j b_i} \otimes q_{b_i w} \otimes q_{w b_j} \right]_{im} \\ q_{b_i w}\left( v_j^w - v_i^w + g^w \Delta t \right) - \beta_{b_i b_j} \\ b_j^a - b_i^a \\ b_j^g - b_i^g \end{bmatrix}$$
$\alpha_{b_i b_j}$, $\beta_{b_i b_j}$, and $q_{b_i b_j}$ represent the pre-integration measurements, and $[q]_{im}$ represents the imaginary part of a quaternion $q$. Through this step, we obtain a constraint from the IMU pre-integration information that constrains the states between two moments. For example, $p_{wb_j}$ and $p_{wb_i}$ denote the positions at moments $j$ and $i$, respectively; the position state is one of the system states.
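For intuition, the following simplified sketch accumulates the pre-integration terms $\alpha$, $\beta$, and $q$ from raw IMU samples between two image timestamps using Euler integration (VINS_Fusion itself uses midpoint integration and additionally propagates Jacobians and covariance); all names are illustrative.

```python
import numpy as np

def small_quat(dtheta):
    """Quaternion (w, x, y, z) for a small rotation vector dtheta."""
    return np.concatenate(([1.0], 0.5 * dtheta))

def quat_mult(q1, q2):
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_to_rot(q):
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                     [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                     [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])

def preintegrate(acc, gyr, dt, b_a, b_g):
    """Euler-style pre-integration of raw IMU samples between two image
    timestamps, expressed in the body frame of the first image (b_i)."""
    alpha = np.zeros(3)            # pre-integrated position term
    beta = np.zeros(3)             # pre-integrated velocity term
    q = np.array([1.0, 0, 0, 0])   # rotation from b_i to current body frame
    for a_m, w_m in zip(acc, gyr):
        R = quat_to_rot(q)
        a = a_m - b_a              # bias-corrected specific force (body frame)
        alpha += beta * dt + 0.5 * (R @ a) * dt**2
        beta += (R @ a) * dt
        q = quat_mult(q, small_quat((w_m - b_g) * dt))
    return alpha, beta, q / np.linalg.norm(q)
```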

3.1.4. Nonlinear Optimization

When the respective cost functions are constructed, we use the nonlinear optimization algorithm to jointly optimize the objective function (14). This objective function contains three residual terms, namely, the prior constraint with marginalization information, the IMU pre-integration measurement constraint, and the visual reprojection constraint.

3.2. Mobile Platform Setup

Figure 2 shows the mobile platform used in the experiment. It is equipped with five modules.

3.3. Sensor Setup

The MEMS-based IMU is becoming more precise, reliable, and rugged, indicating great future potential as MEMS technology continues to develop. In addition, it has smaller size, weight, cost, and power consumption, and is an ideal choice for UAVs, unmanned vehicles, wearable devices, and many other applications. Considering that most fusion scenarios require lightweight hardware systems, the MEMS IMU has gradually become mainstream. The six different IMUs selected in this paper are all based on MEMS.
In terms of performance and usage scenarios, IMU is divided into four categories [29,33]. The first is the navigation grade, which is mainly used in spacecraft, aircraft, ships, missiles, and other rugged demand occasions. The second is the tactical grade, which is mainly used for UAV navigation and localization, smart munitions, etc. It is the most diverse and has smaller footprints and lower cost than the navigation grade. The third is the industrial grade, mainly used in industrial equipment, industrial robots, and other fields. The last is consumer-grade. IMU of this grade is a common occurrence, which is mainly used in mobile phones, wearable devices, motion-sensing games, and so on.
In the experiment, we employed six IMUs with different performances, all rigidly mounted on a circuit board as shown in Figure 3. Two of them are classified as consumer-grade IMUs (➀ MPU6050, ➁ HI219) and four of them as tactical-grade IMUs (➂ NV-MG-201, ➃ ADIS16488, ➄ ADIS16490, ➅ MSCRG). The MPU6050 module is very popular and easy to access in the community; its nominal performance is the worst, and its price is only $1. The HI219 module, which costs $20, has been processed by the manufacturer, so many internal specifications and parameters are unknown. The NV-MG-201 is a tactical IMU and costs $500. Next are two tactical products from ADI: the accuracy of the ADIS16488 is slightly lower than that of the ADIS16490, and they cost $2500 and $3000, respectively. The last IMU module, named MSCRG, is composed of a gyroscope from Japan and an accelerometer from Switzerland; it offers high immunity to vibration and shock thanks to the unique resonating cos2θ ring structure of the gyroscope and a best-in-class capacitive bulk MEMS accelerometer, and it costs $3500. Table 2 shows the nominal specification parameters provided by the manufacturers for the six IMUs. Apart from these IMU modules, a binocular camera of type ➆ RealSense D435i was used to obtain the image data.

3.4. Sensors Parameter Configuration

3.4.1. Calibration of MEMS-IMUs

Allan variance is widely applied to evaluate the noise parameters of IMUs [38,39]. We used the open-source tool kalibr_allan [40] to analyze the Allan deviation of each IMU. The Allan curves are plotted in Figure 4, and the Allan results are summarized in Table 3. If bias stability is taken as the evaluation standard, the accelerometer performance ranking from low to high is roughly: ADIS16488, HI219, MPU6050, NV-MG-201, MSCRG, ADIS16490. For the gyroscopes, the ranking is ADIS16488, MSCRG, MPU6050, ADIS16490, NV-MG-201, and HI219. In addition, there is no strong correlation with price. Although there is no strict standard, the tactical IMUs tend to have lower bias instability. Surprisingly, the gyroscope of the consumer-grade HI219 has the lowest bias instability.
In-run bias stability, often called bias instability, is an indication of how the bias will drift during a period of time at a certain temperature and is an important characterization parameter. Bias repeatability represents the dispersion of the sensor's bias at each powerup, i.e., how similar the bias is from one powerup to the next. Because the thermal, physical, and electrical conditions will not be exactly the same at each powerup, the bias fluctuates; the larger the bias repeatability, the worse the bias consistency, which affects the accuracy of the system. The inertial navigation system can estimate the bias after each powerup. The noise term represents the measurement noise of the sensor. A Wiener process is usually used to model the bias changing continuously with time, which is called the bias random walk. The noise and bias random walk constitute the diagonal covariance matrix of the sensor noise terms.
Due to the incompleteness of the IMU measurement model, the parameters above cannot be used directly, after discretization, as the configuration parameters of VINS-Fusion; otherwise, the trajectory will drift. We appropriately enlarge and adjust the discretized parameters to obtain the configuration parameters. To control the variables in the experiment and consider only the different performances of the IMUs, we averaged the configuration parameters and obtained a common set; in this way, only the impact of the IMU itself on the fusion algorithm is considered. The common configuration parameters are acc_n (0.170), gyr_n (0.014), acc_w (0.008), and gyr_w (0.00042).
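For reference, the core of the Allan analysis performed by kalibr_allan can be sketched as follows: a simplified, non-overlapping Allan deviation of a long static log, from which the white-noise density is read near τ = 1 s and the bias instability near the curve minimum. This is an independent illustration, not the tool's actual code, and the simulated input is hypothetical.

```python
import numpy as np

def allan_deviation(rate, fs, m_list=None):
    """Non-overlapping Allan deviation of a static sensor log.
    rate : 1-D array of gyro (rad/s) or accel (m/s^2) samples
    fs   : sample rate in Hz
    Returns (tau, adev) arrays."""
    n = len(rate)
    if m_list is None:  # log-spaced cluster sizes
        m_list = np.unique(np.logspace(0, np.log10(n // 10), 100).astype(int))
    taus, adevs = [], []
    for m in m_list:
        k = n // m                                   # number of clusters
        if k < 2:
            break
        means = rate[:k * m].reshape(k, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(means) ** 2)    # Allan variance at tau = m/fs
        taus.append(m / fs)
        adevs.append(np.sqrt(avar))
    return np.array(taus), np.array(adevs)

# simulated 1-hour static gyro log at 200 Hz (white noise only)
fs = 200.0
gyro = np.random.default_rng(1).normal(scale=0.01, size=int(fs * 3600))
tau, adev = allan_deviation(gyro, fs)
print(tau[np.argmin(adev)], adev.min())
```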

3.4.2. Camera-IMU Temporal-Spatial Calibration

The off-line calibration of cameras and IMUs is a widely studied problem. In order to effectively fuse visual information with IMU information, we need to unify the camera and IMU into a common coordinate system, that is, perform spatial calibration of the camera and IMU. Spatial calibration bridges the gap between the data in different coordinate systems. In addition, since the camera and IMU are triggered separately from different clock sources, and there are problems such as transmission delay and CPU overload, it is necessary to correct the time offset between the camera and IMU. We exploited the open-source calibration tool Kalibr [41,42] to obtain the spatial transformation. As for the time offset between the IMU and the cameras, we enabled VINS_Fusion's online calibration algorithm [43].

3.4.3. Sensors Frequency

For the sake of experimental consistency, the output frequency of the six IMUs was uniformly set to 200 Hz and the frequency of the camera was 30 Hz. In addition, the resolution of the left and right images was 640 × 480 .

3.4.4. Loop Closure

In the experiment, the goal was to explore the fusion of vision and inertia independently. Considering that visual loop closure would correct the accumulated error, loop-closure detection was not enabled here.

3.5. Evaluation Scenario

3.5.1. Weak Texture in Corridor

We chose a corridor as the test scenario, walked straight for a distance, and returned the same way. There was a weak-texture area at the corner, as shown in Figure 5, which lasted for 0.5 s. From Figure 6, it can be seen that with the IMU's assistance the outbound and return tracks coincide, whereas the camera alone directly gives a wrong track.

3.5.2. Uniform Velocity Motion State

In these experiments, because high-precision RTK ground truth can be provided outdoors, we selected the environment shown in Figure 7 for evaluation. There were three uniform-velocity motion modes, slow, normal, and fast, which lasted 300, 200, and 120 s, respectively. To maintain experimental consistency, each motion state was evaluated five times. One set of trajectory and ATE diagrams is shown in Figure 8.

3.5.3. Alternating Acceleration and Deceleration Motion State

In this scenario, the wheeled robot kept accelerating and decelerating ceaselessly in order to excite the IMU. Figure 9 plots the trajectory and the ATE error. Similarly, to maintain experimental consistency, this motion state was evaluated five times. It can be clearly seen that the MPU6050 performs poorly and drifts more.

3.5.4. Spin Move Forward Motion State

In this case, the wheeled robot moved forward with frequent rotation in order to excite the IMU. The trajectory shown in Figure 10 reflects this movement. This motion was evaluated eleven times due to the complexity of the motion state. In this case, the system with the MPU6050 crashes easily because of the rapid rotation.

3.5.5. Strong Sun Light Scene

In this scene, the situation in a strong-illumination environment was evaluated. As shown in Figure 11, there was obviously strong light in the environment. A crossover study was conducted to evaluate the performance of the different motion states under strong light. The trajectory for the varying-speed case is shown in Figure 12.

3.5.6. Long Term Scene

The long-term test, which included three forms of motion (constant speed, varying speed, and spin move), was performed for 30 min. The trajectory is shown in Figure 13. As reported in Table 4, the HI219 IMU improved the localization accuracy of the fusion system, while the other IMUs deteriorated the accuracy of the system.

4. Results and Analysis

Based on the preceding experiments and according to Table A1, Table A2, Table A3, Table A4, Table A5 and Table A6 in Appendix A, we summarize the votes for accuracy improvement of each IMU in Table 5. If an IMU improved the accuracy the most in an evaluation, we voted for that IMU and counted the votes. For example, among the fifteen evaluations in the uniform-velocity scenes, the ADIS16488 performed best only once among the six IMUs, whereas the ADIS16490 performed best three times.
Some results are revealed by the evaluation: (1) In the weak-texture scenario over short time intervals, the localization is significantly improved with the aid of the IMU, whereas leveraging only the camera fails because of the absence of visual constraints. (2) The IMU excitation is relatively small in the nearly constant-speed scenario; there is no salient difference between IMUs with different performances, consumer IMUs also perform well, and better IMUs do not show more visible advantages. (3) In varying-speed scenarios, the assistance of the accelerometer is needed, and IMUs with excellent bias stability specifications, such as the ADIS16490 and MSCRG, perform better. (4) In the spin-move situation, the gyroscope is needed; here the HI219 and MSCRG give better results. (5) These performance trends are maintained in the cross experiments under the strong-sunlight environment. (6) In the 30 min long-term test, only the HI219 improves the odometry accuracy of the fusion system. Among these results, it is surprising that the HI219, a consumer-grade IMU, also performs well.

4.1. Turntable Test

In order to make the results clearer, the professional multiaxial turntable shown in Figure 14 was used to uniformly measure the angular-velocity response of the six IMUs. As can be seen from Figure 15a, the output of the HI219 remains at zero when the angular velocity is less than 0.83°/s; that is, there is a threshold for perceiving rotational motion, while the other sensors output their perceived angular velocity even though the value is inaccurate. For example, when the robot is stationary, the HI219 orientation does not drift due to the zero-bias of the gyroscope. This behavior makes the HI219 accumulate less drift and results in the lowest gyroscope bias instability shown in Figure 4b. In addition, when the angular velocity is greater than 16°/s, the error of all IMUs except the MPU6050 falls below 9%, and the error discrimination between them is very small, as shown in Figure 15b, so the resulting algorithm errors do not differ remarkably.
It should be noted that the triaxial gyroscope sensor is actually composed of three identical gyroscopes placed in orthogonal directions; hence only the rotation around the Z-axis was measured here, which corresponds to the in-plane motion around the gravity direction that occurs most often in our experiments.

4.2. Quantitative Analysis

The measurement models of the accelerometer and gyroscope are given as follows:
$$\tilde{\omega}_b = \omega_b + b_g + n_g$$
$$\tilde{a}_b = q_{bw}\left( a_w + g_w \right) + b_a + n_a$$
$\tilde{\omega}_b$ and $\tilde{a}_b$ represent the measurements of the gyroscope and accelerometer, respectively, and $\omega_b$ and $a_w$ represent their ideal values. Due to various factors, the measurements are affected by the gyroscope bias $b_g$, the acceleration bias $b_a$, and noise $n$. We assume that the noise in the gyroscope and acceleration measurements is Gaussian, that is, $n_g \sim N(0, \sigma_g^2)$ and $n_a \sim N(0, \sigma_a^2)$. The gyroscope bias and acceleration bias are modeled as random walks, whose derivatives are Gaussian, $n_{b_g} \sim N(0, \sigma_{b_g}^2)$ and $n_{b_a} \sim N(0, \sigma_{b_a}^2)$, so we can obtain:
$$\dot{b}_g = n_{b_g}$$
$$\dot{b}_a = n_{b_a}$$
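The following sketch simulates a single gyroscope axis according to this measurement model, discretizing the continuous noise densities at the sample rate; the parameter values are arbitrary examples, not those of any IMU in Table 3.

```python
import numpy as np

def simulate_gyro(w_true, fs, sigma_g, sigma_bg, seed=0):
    """Simulate gyro readings w_meas = w_true + b_g + n_g, where b_g is a
    random walk driven by n_bg.
    sigma_g  : white-noise density       (rad/s/sqrt(Hz))
    sigma_bg : bias random-walk density  (rad/s^2/sqrt(Hz))"""
    rng = np.random.default_rng(seed)
    n = len(w_true)
    dt = 1.0 / fs
    b = np.zeros(n)
    for k in range(1, n):
        # discrete bias random walk: step std = sigma_bg * sqrt(dt)
        b[k] = b[k - 1] + rng.normal(scale=sigma_bg * np.sqrt(dt))
    # discrete white noise: std = sigma_g / sqrt(dt)
    noise = rng.normal(scale=sigma_g / np.sqrt(dt), size=n)
    return w_true + b + noise

# one axis, robot held still for 60 s at 200 Hz
fs = 200.0
w_meas = simulate_gyro(np.zeros(int(60 * fs)), fs, sigma_g=1e-3, sigma_bg=1e-5)
print(w_meas.std())
```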
We can obtain the above four noise coefficients ($n_g$, $n_a$, $n_{b_g}$, $n_{b_a}$) through the Allan analysis in Section 3.4.1. These configuration parameters constitute the noise covariance matrix $Q$:
$$Q = \mathrm{diag}\left( \sigma_a^2,\ \sigma_g^2,\ \sigma_a^2,\ \sigma_g^2,\ \sigma_{b_a}^2,\ \sigma_{b_g}^2 \right)$$
We can propagate the covariance matrix of the pre-integration using the transition matrix $F$, the noise Jacobian $V$, and the noise matrix $Q$ (see Appendix B for the specific elements of $F$ and $V$):
$$P_{k+1} = F P_k F^T + V Q V^T$$
Similarly, there is a covariance matrix for visual information. These covariance matrices represent the noise of IMU information and visual information. We can optimize the system state X   through the following cost function:
$$r_{obj} = \min_{X} \left\{ \left\| r_p - J_p X \right\|^2 + \left\| r_{IMU}\left( Z_{IMU}, X \right) \right\|_{P_{IMU}}^2 + \left\| r_{visual}\left( Z_{visual}, X \right) \right\|_{P_{visual}}^2 \right\}$$
It should be noted that, through the covariance matrices $P_{IMU}$ and $P_{visual}$, the Euclidean distance is converted into the Mahalanobis distance, which fixes the problem of inconsistent and correlated dimensions in the Euclidean distance. Although the IMU and visual information are very different, they can be optimized in the same cost function (14) through the Mahalanobis distance; in this way, the residual information of the IMU and the vision is statistically comparable. The visual information is consistent throughout the evaluation experiments, so only the IMU information affects the accuracy of the visual-inertial odometry.
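A minimal sketch of this covariance weighting is shown below: each residual is "whitened" by its own covariance so that its squared norm equals the Mahalanobis distance, after which residuals of different dimensions and units can be summed in one cost, mirroring the structure of (14). This is only an illustration, not the Ceres-based solver used by VINS_Fusion, and the toy residuals are hypothetical.

```python
import numpy as np

def whiten(residual, cov):
    """Weight a residual so its squared norm is the Mahalanobis distance
    r^T * cov^-1 * r (square root of the information matrix)."""
    L = np.linalg.cholesky(np.linalg.inv(cov))
    return L.T @ residual

def total_cost(r_prior, r_imu_list, r_vis_list):
    """Sum of squared whitened residuals; each entry is a (residual,
    covariance) pair already evaluated at the current state X."""
    cost = float(r_prior @ r_prior)
    for r, P in r_imu_list:
        w = whiten(r, P)
        cost += float(w @ w)
    for r, P in r_vis_list:
        w = whiten(r, P)
        cost += float(w @ w)
    return cost

# toy check: a 2-dim visual residual (pixels) and a 15-dim IMU residual end up
# on a comparable, unit-free scale once whitened by their own covariances
r_vis = (np.array([1.5, -0.8]), np.diag([1.5**2, 1.5**2]))
r_imu = (np.full(15, 0.02), np.eye(15) * 0.02**2)
print(total_cost(np.zeros(0), [r_imu], [r_vis]))
```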
From the algorithm evaluation results in Table A1, Table A2, Table A3, Table A4, Table A5 and Table A6 in Appendix A, we summarized the localization accuracy improvement obtained by adding the IMU and obtained the improvements under the following four main motion scenes, as shown in Table 6.
Combined with the quantitative results, in the uniform-velocity situation, the improvement from each IMU is within 0.1 m. In the varying-velocity situation, the IMUs produced by ADI increase the accuracy the most and are better than the MPU6050 in this scenario. In spin-move situations, because of the special processing of the HI219 and the superiority of the MSCRG structure, the accuracy is greatly improved with their assistance. In the strong-illumination situation, there is little difference between them; except for the NV-MG-201, the improvement from the different IMUs is about 0.15 m. The most expensive IMU is only 2.5 cm higher in accuracy than the cheapest one.
In summary, the improvement in fusion accuracy with different grades of MEMS-IMU is limited in these experimental scenarios, and a consumer-grade IMU can also achieve an excellent result; the improvement of accuracy depends more on the algorithm.
As shown in formula (14), if the measurement noise of the IMU is smaller, the theoretical accuracy will be higher. However, when the difference in measurements between these IMUs is small, the difference in their residual information is even smaller; due to the weighting effect in formula (14), this results in limited differences in accuracy.

5. Conclusions

In general, there are many internal and external factors that affect an IMU. We can obtain representative parameters through the Allan variance method to quantitatively analyze the performance of an IMU; however, this method cannot represent all aspects of performance in actual applications. In this paper, many scenario experiments were conducted, and a professional turntable was employed to analyze the errors of the six IMUs. The following conclusions are reached:
The assistance of an IMU can improve the accuracy of multi-sensor fusion, and the improvement is more notable in weak-texture scenes. In the constant-speed scene, there is no obvious difference between IMUs with different performances. Under the excitation of rotation, acceleration, and deceleration, IMUs with excellent performance achieve higher accuracy and are more stable; owing to their lower bias instability and noise, their performance is more robust. The improvement in fusion accuracy is not directly proportional to the price in the case of the expensive ADIS16490 IMU; even so, it is more versatile in various scenarios. The consumer-grade HI219 has a threshold for sensing rotational motion and performs well in rotation scenes, which may provide a reference for algorithm processing. At the same time, according to the MSCRG IMU results, an IMU with resistance to vibration and impact is more suitable for situations with frequent strenuous movement.

Author Contributions

Conceptualization, X.Z.; methodology, Y.L.; software, Y.L. and S.Z.; validation, X.Z.; formal analysis, Y.L.; investigation, Y.L.; resources, X.Z.; data curation, Y.L. and P.C.; writing—original draft preparation, Y.L.; writing—review and editing, X.Z. and Z.L.; supervision, Z.L.; project administration, X.Z.; funding acquisition, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Key Research Program of Frontier Science, CAS. (Grant No. ZDBS-LY-JSC028) and the National Natural Science Foundation of China (Grant No. 61971399).

Acknowledgments

The authors would like to thank Kunfeng Wang for his efforts with the MEMS analysis, and PengCheng Zheng and Haifeng Zhang for assistance with the experimental operation and analysis. Thanks to everyone who helped us in our work.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

This appendix records the absolute trajectory error (ATE) results evaluated under each scenario. Each column represents a different IMU, and each row represents the algorithm accuracy in terms of the mean, median, and RMSE of the ATE. Each cell reports the error results with IMU assistance and without IMU assistance (e.g., 0.9124/1.1250 ↑). Bold indicates that the accuracy was improved with the aid of the IMU. The symbol ↑ marks the IMU with the greatest accuracy improvement in that evaluation; we vote for the IMU marked with this symbol and compile statistics according to the number of votes.

Appendix A.1. Uniform Scene

Table A1. The algorithm errors for slow velocity scene with six different MEMS-IMUs. (Bold indicates that the accuracy was improved compared with using only visual information.)
Slow | ATE | MPU6050 | HI219 | NV-MG-201 | ADIS16488 | ADIS16490 | MSCRG
Slow0 | Mean | 0.9124/1.1250 ↑ | 1.0151/1.1772 | 0.9344/1.1129 | 0.9871/1.0713 | 0.9177/1.0534 | 0.8240/1.0076
 | Median | 1.0192/1.2815 ↑ | 1.1606/1.3546 | 1.0385/1.2773 | 1.0562/1.2002 | 0.9499/1.1479 | 0.7879/1.0531
 | RMSE | 1.0501/1.3076 ↑ | 1.181/1.3814 | 1.0769/1.2943 | 1.1458/1.2414 | 1.0694/1.2269 | 0.9571/1.1748
Slow1 | Mean | 0.9982/0.9895 | 0.9016/1.0130 | 2.4118/0.9948 | 0.9532/0.9619 | 0.7919/0.9347 ↑ | 0.9319/0.8955
 | Median | 1.1267/1.1155 | 0.9185/1.1656 | 2.0879/1.1279 | 1.0373/1.0720 | 0.7604/1.0316 ↑ | 1.0083/0.9740
 | RMSE | 1.196/1.1951 | 1.0977/1.2237 | 3.0902/1.2027 | 1.1436/1.1628 | 0.9512/1.1286 ↑ | 1.1177/1.0811
Slow2 | Mean | 0.8791/0.9547 | 1.082/0.9788 | 0.7747/0.9659 ↑ | 0.9614/0.9266 | 0.8861/0.8967 | 0.8562/0.8535
 | Median | 0.9678/1.0318 | 1.1646/1.0415 | 0.8479/1.0376 ↑ | 0.9768/1.0005 | 0.9715/0.9713 | 0.901/0.9153
 | RMSE | 1.0553/1.1400 | 1.3208/1.1690 | 0.9664/1.1542 ↑ | 1.1714/1.1048 | 1.0685/1.0664 | 1.0334/1.0132
Slow3 | Mean | 1.0825/1.1577 ↑ | 1.1673/1.1765 | 1.1591/1.1613 | 1.1518/1.1288 | 1.1141/1.1078 | 1.0253/1.0660
 | Median | 1.0354/1.0487 ↑ | 1.1027/1.0637 | 1.0527/1.0424 | 1.0396/1.0171 | 1.0445/1.0383 | 1.0080/0.9922
 | RMSE | 1.3088/1.4197 ↑ | 1.4243/1.4455 | 1.442/1.4278 | 1.3992/1.3878 | 1.3479/1.3576 | 1.2503/1.3082
Slow4 | Mean | 1.0988/1.0994 | 1.1458/1.1457 | 1.1683/1.1205 | 1.1014/1.0757 | 1.0066/1.0275 | 0.9774/0.9730
 | Median | 1.2009/1.2267 | 1.2493/1.2761 | 1.3026/1.2539 | 1.1807/1.1815 | 1.1067/1.1086 | 1.0918/1.0319
 | RMSE | 1.2709/1.2743 | 1.331/1.3305 | 1.3542/1.3006 | 1.2814/1.2486 | 1.1631/1.1923 | 1.1341/1.1300
Table A2. The algorithm errors for normal velocity scene with six different MEMS-IMUs.
Normal | ATE | MPU6050 | HI219 | NV-MG-201 | ADIS16488 | ADIS16490 | MSCRG
Normal0 | Mean | 0.9012/0.9403 | 0.9512/0.9569 | 0.8939/0.9518 ↑ | 0.9365/0.9214 | 0.9244/0.8931 | 0.8449/0.8592
 | Median | 1.0108/1.0967 | 1.0405/1.1298 | 0.9690/1.1193 ↑ | 0.9933/1.0597 | 0.9363/0.9941 | 0.8833/0.9112
 | RMSE | 1.0628/1.1203 | 1.1431/1.1423 | 1.0515/1.1363 ↑ | 1.1215/1.0987 | 1.1086/1.0650 | 1.0133/1.0267
Normal1 | Mean | 1.008/0.9150 | 1.0002/0.9375 | 1.3758/0.9261 | 1.1672/0.8910 | 1.0215/0.8685 | 0.8925/0.8311
 | Median | 1.0307/0.9896 | 0.9848/1.0287 | 1.2971/0.9995 | 1.1925/0.9519 | 1.0452/0.9204 | 0.8556/0.8462
 | RMSE | 1.1937/1.0577 | 1.1769/1.0795 | 1.6474/1.0696 | 1.383/1.0324 | 1.2145/1.0143 | 1.0726/0.9770
Normal2 | Mean | 1.1009/1.1163 | 1.1184/1.1337 | 1.8117/1.1198 | 1.0196/1.0886 ↑ | 1.1002/1.0666 | 1.0511/1.0275
 | Median | 1.2592/1.3108 | 1.2650/1.3211 | 1.7949/1.3030 | 1.0272/1.2722 ↑ | 1.2695/1.2555 | 1.2361/1.2060
 | RMSE | 1.2867/1.3038 | 1.3064/1.3262 | 2.1023/1.3094 | 1.1777/1.2700 ↑ | 1.2763/1.2417 | 1.2189/1.1940
Normal3 | Mean | 1.094/1.0652 | 1.2055/1.0787 | 2.2835/1.0662 | 1.0922/1.0416 | 1.1508/1.0278 | 1.0364/1.0003
 | Median | 1.1548/1.0851 | 1.2595/1.0977 | 1.5083/1.0815 | 1.0228/1.0473 | 1.1174/1.0257 | 0.9945/0.9829
 | RMSE | 1.2522/1.2151 | 1.3812/1.2291 | 3.0337/1.2164 | 1.2712/1.1904 | 1.3275/1.1778 | 1.1962/1.1522
Normal4 | Mean | 1.0675/1.1392 | 1.0915/1.1223 | 0.8973/1.1497 ↑ | 1.078/1.1115 | 1.0762/1.0801 | 0.9196/1.0357
 | Median | 1.1940/1.1446 | 1.2174/1.1190 | 1.0327/1.1515 ↑ | 1.1788/1.1033 | 1.1431/1.3724 | 0.9527/1.0138
 | RMSE | 1.2755/1.3698 | 1.3080/1.5011 | 1.0475/1.3830 ↑ | 1.3186/1.3383 | 1.3034/1.3004 | 1.0934/1.2487
Table A3. The algorithm errors for fast velocity scene with six different MEMS-IMUs.
Fast | ATE | MPU6050 | HI219 | NV-MG-201 | ADIS16488 | ADIS16490 | MSCRG
Fast0 | Mean | 0.9561/1.0820 | 0.9762/1.1100 | 4.9392/1.0902 | 0.8968/1.0514 | 0.8359/1.0234 ↑ | 0.8661/0.9850
 | Median | 0.9923/1.1120 | 1.0133/1.1815 | 4.8637/1.1358 | 0.8401/1.0587 | 0.8179/1.0049 ↑ | 0.8228/0.9544
 | RMSE | 1.1026/1.2503 | 1.1461/1.2789 | 6.156/1.2576 | 1.0471/1.2152 | 0.9937/1.1875 ↑ | 1.0135/1.1462
Fast1 | Mean | 1.6826/1.1371 | 1.5799/1.1523 | 5.346/1.1433 | 1.1282/1.1127 | 1.1426/1.0887 | 1.3559/1.0530
 | Median | 1.8629/1.2633 | 1.6132/1.2948 | 5.7653/1.2849 | 1.1536/1.2200 | 1.0826/1.1825 | 1.5837/1.1135
 | RMSE | 1.974/1.3432 | 1.8504/1.3582 | 6.4123/1.3509 | 1.3272/1.3150 | 1.3627/1.2873 | 1.5819/1.2470
Fast2 | Mean | 1.1355/1.0110 | 1.2949/1.0426 | 3.7447/1.0228 | 0.9974/0.9864 | 0.9028/0.9466 ↑ | 1.0003/0.9087
 | Median | 1.2935/1.0741 | 1.3555/1.1110 | 4.0317/1.0989 | 1.0213/1.0229 | 0.8274/0.9488 ↑ | 1.0575/0.8987
 | RMSE | 1.349/1.1764 | 1.5494/1.2102 | 4.5396/1.1893 | 1.1687/1.1504 | 1.0330/1.1087 ↑ | 1.1613/1.0701
Fast3 | Mean | 0.6902/0.8320 ↑ | 0.8223/0.8551 | 1.1044/0.8403 | 1.078/0.8108 | 1.6313/0.7893 | 0.766/0.7615
 | Median | 0.7204/0.7722 ↑ | 0.7826/0.8011 | 0.8646/0.7838 | 0.9344/0.7538 | 1.513/0.7444 | 0.7928/0.7227
 | RMSE | 0.8458/0.9759 ↑ | 0.9655/0.9997 | 1.3657/0.9853 | 1.3591/0.9558 | 1.9245/0.9354 | 0.8762/0.9094
Fast4 | Mean | 0.8369/0.8511 | 0.9637/0.8636 | 0.7259/0.8507 ↑ | 1.2174/0.8262 | 0.9624/0.8154 | 0.7646/0.7957
 | Median | 0.8422/0.9188 | 0.8478/0.9566 | 0.7117/0.9280 ↑ | 1.0523/0.8803 | 0.8593/0.8364 | 0.7640/0.8134
 | RMSE | 0.9892/0.9900 | 1.0377/1.0045 | 0.8949/0.9897 ↑ | 1.4716/0.9606 | 1.1077/0.9503 | 0.9042/0.9314

Appendix A.2. Alternating Acceleration and Deceleration

Table A4. The algorithm errors for varying speed scene with six different MEMS-IMUs.
Varying | ATE | MPU6050 | HI219 | NV-MG-201 | ADIS16488 | ADIS16490 | MSCRG
Vary0 | Mean | 3.1442/0.8565 | 0.9675/0.8804 | 1.8236/0.8614 | 0.8628/0.8271 | 0.6212/0.7974 ↑ | 0.6321/0.7578
 | Median | 2.5754/0.9286 | 1.0943/0.9876 | 1.6019/0.9516 | 0.9879/0.8873 | 0.5668/0.8196 ↑ | 0.6858/0.7547
 | RMSE | 4.0895/0.9926 | 1.1185/1.0199 | 2.231/0.9974 | 1.0088/0.9577 | 0.7329/0.9255 ↑ | 0.7222/0.8818
Vary1 | Mean | 0.7834/0.8052 | 0.9755/0.8242 | 2.3572/0.8106 | 0.7476/0.7821 | 0.8797/0.7497 | 0.6229/0.7410 ↑
 | Median | 0.7256/0.7473 | 1.0616/0.8000 | 1.9911/0.7621 | 0.6495/0.7188 | 0.8082/0.6774 | 0.5938/0.6674 ↑
 | RMSE | 0.9223/0.9398 | 1.1345/0.9642 | 3.0779/0.9472 | 0.8692/0.9180 | 1.0088/0.8914 | 0.7273/0.8898 ↑
Vary2 | Mean | 0.9991/0.8205 | 1.0003/0.8398 | 2.2953/0.8252 | 0.9603/0.7957 | 0.981/0.7751 | 0.7688/0.7450
 | Median | 0.8234/0.8659 | 0.9777/0.9120 | 1.8541/0.8803 | 1.0685/0.8254 | 0.9861/0.7926 | 0.8802/0.7519
 | RMSE | 1.2706/0.9907 | 1.2199/1.0138 | 3.0457/0.9972 | 1.1727/0.9611 | 1.2231/0.9373 | 0.9312/0.9032
Vary3 | Mean | 0.9397/0.8479 | 0.9267/0.8602 | 1.8007/0.8501 | 0.5774/0.8259 ↑ | 0.7250/0.8224 | 0.7491/0.7976
 | Median | 0.9734/1.0072 | 1.056/1.0121 | 1.7407/0.9996 | 0.6014/0.9764 ↑ | 0.7392/0.9433 | 0.7941/0.8830
 | RMSE | 1.1398/1.0319 | 1.123/1.0467 | 2.2473/1.0348 | 0.6782/1.0042 ↑ | 0.8334/1.0028 | 0.8507/0.9718
Vary4 | Mean | 0.7687/0.8249 | 0.8873/0.8399 | 1.5669/0.8287 | 0.8343/0.8034 | 0.6626/0.7892 ↑ | 0.7768/0.7647
 | Median | 0.8952/0.8233 | 0.8828/0.8683 | 1.4704/0.8313 | 0.8051/0.7883 | 0.6469/0.7745 ↑ | 0.815/0.7513
 | RMSE | 0.9163/0.9852 | 1.0959/1.0021 | 1.919/0.9896 | 0.9668/0.9599 | 0.7201/0.9438 ↑ | 0.9554/0.9158

Appendix A.3. Spin Move forward

Table A5. The algorithm errors for a spin move forward scene with six different MEMS-IMUs.
Spin_move | ATE | MPU6050 | HI219 | NV-MG-201 | ADIS16488 | ADIS16490 | MSCRG
Spin_move0 | Mean | 9.3128/1.3158 | 0.8370/1.3188 | Fail/1.3027 | 0.6069/1.2982 ↑ | 0.6830/1.2838 | Fail/1.2548
 | Median | 7.5538/1.4859 | 0.9094/1.4970 | Fail/1.4748 | 0.6195/1.4543 ↑ | 0.6371/1.4289 | Fail/1.4066
 | RMSE | 11.8427/1.4396 | 0.9383/1.4431 | Fail/1.4252 | 0.6891/1.4192 ↑ | 0.7805/1.4044 | Fail/1.3735
Spin_move1 | Mean | 514.9947/1.1230 | 1.1685/1.1024 | 3537/1.0896 | 1.4894/1.1359 | 1.9135/1.0979 | 0.9042/1.1532 ↑
 | Median | 464.7805/1.0073 | 1.2253/1.0144 | 2783/1.0117 | 1.4236/1.0003 | 1.6958/0.9978 | 0.7804/1.0180 ↑
 | RMSE | 686.0213/1.3204 | 1.3792/1.2997 | 4827/1.2765 | 1.6514/1.3381 | 2.2048/1.2858 | 1.0620/1.3512 ↑
Spin_move2 | Mean | 2463/2.4171 | 1.9738/2.3948 ↑ | 3853/2.4889 | 5.6023/2.3980 | 10.8429/2.4722 | 98.577/2.3734
 | Median | 1842/2.4354 | 2.3318/2.3874 ↑ | 3004/2.4396 | 5.8112/2.3848 | 11.944/2.3987 | 144.8365/2.4576
 | RMSE | 3502/2.8896 | 2.3169/2.8421 ↑ | 5261/3.0017 | 6.1396/2.8688 | 12.4/3.0149 | 117.7245/2.8578
Spin_move3 | Mean | 762.9145/4.8219 | 5.7136/4.9950 | 1831/4.8104 | 211.801/3.9853 | 187.2453/4.6046 | 4.0960/4.7584 ↑
 | Median | 1024/4.6508 | 6.3944/4.9114 | 1795/4.6099 | 296.713/4.1932 | 254.9551/4.3790 | 4.3375/4.7746 ↑
 | RMSE | 938.5596/5.3919 | 6.1691/5.5646 | 2412/5.3985 | 247.016/4.1072 | 217.4266/5.1701 | 4.5292/5.3275 ↑
Spin_move4 | Mean | 6.8641/4.1975 | 2.8499/4.0148 | 16.3806/4.4178 | 14.6455/4.1447 | 13.4458/4.0248 | 2.2562/4.1285 ↑
 | Median | 6.097/4.4394 | 2.786/4.1166 | 15.6404/4.7137 | 13.917/4.3189 | 14.1733/4.0284 | 2.3056/4.5223 ↑
 | RMSE | 7.8865/4.5753 | 3.2994/4.3851 | 17.7753/4.8355 | 16.3221/4.5232 | 14.4809/4.4117 | 2.4836/4.5508 ↑
Spin_move5 | Mean | 1.7864/1.8822 | 1.3078/1.9618 ↑ | 33.2181/1.6030 | 2.0107/1.7846 | 1.6161/1.8418 | 2.7025/1.8386
 | Median | 1.7204/1.5188 | 1.2788/1.6052 ↑ | 26.1658/1.7265 | 1.9731/1.4896 | 1.5320/1.4320 | 2.6953/1.4755
 | RMSE | 2.1640/2.3535 | 1.5237/2.4532 ↑ | 42.2767/1.8329 | 2.6537/2.2030 | 1.9955/2.3295 | 3.5095/2.2956
Spin_move6 | Mean | 13.6722/0.9434 | 0.6434/0.9332 ↑ | 46.6108/0.9363 | 0.7195/0.9432 | 0.8378/0.9505 | 0.9842/0.9544
 | Median | 6.8829/0.9880 | 0.4331/0.9815 ↑ | 56.8551/0.9817 | 0.5068/0.9932 | 0.7124/0.9787 | 0.8864/0.9844
 | RMSE | 21.8973/1.1160 | 0.8054/1.1017 ↑ | 52.1628/1.1061 | 0.8797/1.1143 | 1.0111/1.1253 | 1.1772/1.1291
Spin_move7 | Mean | 2.8939/0.8054 | 0.9454/0.7967 | 1411/0.7933 | 0.8374/0.7884 | 0.8123/0.8002 | 1.7947/0.7939
 | Median | 2.2887/0.9796 | 1.0159/0.9839 | 1469/0.9766 | 0.8556/0.9575 | 0.8074/0.9543 ↑ | 2.0807/0.9192
 | RMSE | 3.954/0.9625 | 1.1267/0.9505 | 1825/0.9471 | 0.9918/0.9423 | 0.9729/0.9583 | 2.031/0.9520
Spin_move8 | Mean | 4.2439/0.8824 | 0.8732/0.8513 | 329.0178/0.8512 | 0.7316/0.8634 ↑ | 0.7832/0.8788 | 0.8017/0.8844
 | Median | 2.2936/1.0679 | 0.9903/1.0039 | 481.1018/1.0141 | 0.8610/1.0398 ↑ | 0.9274/1.0535 | 0.9291/1.0623
 | RMSE | 6.586/1.0114 | 1.0114/0.9790 | 389.8784/0.9772 | 0.8484/0.9885 ↑ | 0.9270/1.0074 | 0.8981/1.0134
Spin_move9 | Mean | 5.4395/0.7435 | 0.7816/0.7302 | Fail/0.7288 | 0.7569/0.7336 | 0.949/0.7529 | 1.3903/0.7628
 | Median | 4.4004/0.8579 | 0.8878/0.8508 | Fail/0.8416 | 0.8768/0.8333 | 1.0605/0.8417 | 1.2123/0.8347
 | RMSE | 7.086/0.8554 | 0.8975/0.8357 | Fail/0.8369 | 0.8612/0.8469 | 1.0783/0.8738 | 1.6942/0.8881
Spin_move10 | Mean | 1980/0.7985 | 0.9958/0.8037 | 27.7235/0.7860 | 0.9549/0.7925 | 2.384/0.8557 | 24.744/0.8828
 | Median | 2001/0.9004 | 1.0436/0.8900 | 29.3865/0.8791 | 0.9904/0.9080 | 2.3141/0.9518 | 29.8661/0.9789
 | RMSE | 2527/0.9095 | 1.1376/0.9086 | 30.5027/0.9016 | 1.0575/0.9045 | 2.8871/0.9694 | 28.2283/1.0014

Appendix A.4. Strong Illumination

Table A6. The algorithm errors for strong illumination scene with six different MEMS-IMUs.
Strong illumination | ATE | MPU6050 | HI219 | NV-MG-201 | ADIS16488 | ADIS16490 | MSCRG
Vary_velocity0 | Mean | 1.4843/1.0077 | 0.9915/1.0183 | 1.4721/0.9971 | 0.9666/0.9841 | 0.7658/0.9681 ↑ | 0.9828/0.9524
 | Median | 1.628/1.1907 | 1.1787/1.2115 | 1.2117/1.1863 | 1.1687/1.1491 | 0.7188/1.1042 ↑ | 1.0526/1.0672
 | RMSE | 1.775/1.1835 | 1.1552/1.1969 | 1.7483/1.1716 | 1.1383/1.1544 | 0.8789/1.1350 ↑ | 1.1748/1.1158
Fast0 | Mean | 1.0157/1.1616 | 1.0542/1.1764 | 1.1456/1.1610 | 0.8650/1.1382 ↑ | 0.8858/1.1114 | 0.9009/1.0884
 | Median | 1.2064/1.4117 | 1.3031/1.4534 | 1.2621/1.4259 | 1.0067/1.3778 ↑ | 1.0558/1.3304 | 1.1053/1.2560
 | RMSE | 1.2024/1.3876 | 1.2778/1.4068 | 1.4005/1.3871 | 1.0288/1.3576 ↑ | 1.0675/1.3238 | 1.069/1.2939
Normal0 | Mean | 0.8409/0.9271 | 0.8953/0.9434 | 0.9146/0.9304 | 0.7958/0.9082 ↑ | 0.8329/0.8737 | 0.7622/0.8510
 | Median | 1.0407/1.1921 | 0.8647/1.1856 | 1.1182/1.1789 | 0.9343/1.1743 ↑ | 1.0235/1.0877 | 0.8512/1.0243
 | RMSE | 1.0102/1.1267 | 1.0873/1.1498 | 1.1284/1.1330 | 0.9649/1.0991 ↑ | 0.9992/1.0564 | 0.9158/1.0273
Slow0 | Mean | 1.039/1.1498 ↑ | 1.1163/1.1605 | 1.1237/1.1531 | 1.0662/1.1316 | 1.0539/1.1189 | 0.9892/1.0922
 | Median | 1.0382/1.1812 ↑ | 1.1143/1.1965 | 1.1434/1.1837 | 1.0466/1.1507 | 1.0229/1.1134 | 0.9656/1.0583
 | RMSE | 1.2245/1.3645 ↑ | 1.3212/1.3778 | 1.3313/1.3699 | 1.2667/1.3437 | 1.2483/1.3262 | 1.1685/1.2973
Spin_move0 | Mean | 111.9958/1.1297 | 0.8543/1.1502 ↑ | Fail/1.1275 | 0.9878/1.1192 | 1.768/1.0893 | 2.0928/1.0848
 | Median | 146.6053/1.0838 | 0.9553/1.1028 ↑ | Fail/1.0713 | 1.0784/1.0584 | 1.4484/1.0151 | 2.2004/0.9645
 | RMSE | 128.2474/1.3380 | 0.9831/1.3575 ↑ | Fail/1.3321 | 1.1702/1.3290 | 2.2083/1.3028 | 2.3653/1.3043
Vary_velocity1 | Mean | 0.7735/0.8498 | 0.6178/0.8052 ↑ | 1.9662/0.8364 | 0.9341/0.8711 | 0.8033/0.9173 | 0.9016/0.9489
 | Median | 0.7705/0.8905 | 0.6200/0.8349 ↑ | 2.0385/0.8892 | 0.8833/0.9074 | 0.5831/0.9699 | 0.8170/1.0091
 | RMSE | 0.9017/1.0073 | 0.7082/0.9524 ↑ | 2.3543/0.9914 | 1.058/1.0341 | 1.1307/1.0902 | 1.0409/1.1280
Spin_move1 | Mean | 14.338/0.8273 | 0.5961/0.8104 ↑ | Fail/0.8201 | 0.6550/0.8236 | 1.1148/0.8598 | 2.0208/0.8599
 | Median | 11.3966/0.5462 | 0.5879/0.5573 ↑ | Fail/0.5484 | 0.6725/0.5388 | 1.2885/0.5925 | 1.4855/0.6288
 | RMSE | 18.6681/1.0249 | 0.6690/0.9988 ↑ | Fail/1.0167 | 0.7584/1.0240 | 1.3953/1.0637 | 2.6148/1.0571

Appendix B

The specific elements of F and V:
$$F = \begin{bmatrix} I & f_{12} & f_{13} & f_{14} & f_{15} \\ 0 & f_{22} & 0 & 0 & f_{25} \\ 0 & f_{32} & I & f_{34} & f_{35} \\ 0 & 0 & 0 & I & 0 \\ 0 & 0 & 0 & 0 & I \end{bmatrix}, \quad V = \begin{bmatrix} v_{11} & v_{12} & v_{13} & v_{14} & 0 & 0 \\ 0 & v_{22} & 0 & v_{24} & 0 & 0 \\ v_{31} & v_{32} & v_{33} & v_{34} & 0 & 0 \\ 0 & 0 & 0 & 0 & v_{45} & 0 \\ 0 & 0 & 0 & 0 & 0 & v_{56} \end{bmatrix}$$
$$f_{12} = \frac{\partial \alpha_{b_i b_{k+1}}}{\partial \delta\theta_{b_k}} = \frac{1}{4}\left( R_{b_i b_k}\left[ a^{b_k} - b_k^a \right]_\times \delta t^2 + R_{b_i b_{k+1}}\left[ a^{b_{k+1}} - b_k^a \right]_\times \left( I - [\omega]_\times \delta t \right) \delta t^2 \right)$$
$$f_{13} = \frac{\partial \alpha_{b_i b_{k+1}}}{\partial \delta\beta_{b_k}} = I\, \delta t$$
$$f_{14} = \frac{\partial \alpha_{b_i b_{k+1}}}{\partial \delta b_k^a} = \frac{1}{4}\left( q_{b_i b_k} + q_{b_i b_{k+1}} \right) \delta t^2$$
$$f_{15} = \frac{\partial \alpha_{b_i b_{k+1}}}{\partial \delta b_k^g} = \frac{1}{4}\left( R_{b_i b_{k+1}}\left[ a^{b_{k+1}} - b_k^a \right]_\times \delta t^2 \right) \delta t$$
$$f_{22} = \frac{\partial \theta_{b_i b_{k+1}}}{\partial \delta\theta_{b_k}} = I - [\omega]_\times \delta t$$
$$f_{25} = \frac{\partial \theta_{b_i b_{k+1}}}{\partial \delta b_k^g} = I\, \delta t$$
$$f_{32} = \frac{\partial \beta_{b_i b_{k+1}}}{\partial \delta\theta_{b_k}} = \frac{1}{2}\left( R_{b_i b_k}\left[ a^{b_k} - b_k^a \right]_\times \delta t + R_{b_i b_{k+1}}\left[ a^{b_{k+1}} - b_k^a \right]_\times \left( I\, \delta t - [\omega]_\times \delta t^2 \right) \right)$$
$$f_{34} = \frac{\partial \beta_{b_i b_{k+1}}}{\partial \delta b_k^a} = \frac{1}{2}\left( q_{b_i b_k} + q_{b_i b_{k+1}} \right) \delta t$$
$$f_{35} = \frac{\partial \beta_{b_i b_{k+1}}}{\partial \delta b_k^g} = \frac{1}{2}\left( R_{b_i b_{k+1}}\left[ a^{b_{k+1}} - b_k^a \right]_\times \delta t \right) \delta t$$
$$v_{11} = \frac{\partial \alpha_{b_i b_{k+1}}}{\partial n_k^a} = \frac{1}{4} q_{b_i b_k}\, \delta t^2$$
$$v_{12} = \frac{\partial \alpha_{b_i b_{k+1}}}{\partial n_k^g} = v_{14} = \frac{\partial \alpha_{b_i b_{k+1}}}{\partial n_{k+1}^g} = \frac{1}{8}\left( R_{b_i b_{k+1}}\left[ a^{b_{k+1}} - b_k^a \right]_\times \delta t^2 \right) \delta t$$
$$v_{13} = \frac{\partial \alpha_{b_i b_{k+1}}}{\partial n_{k+1}^a} = \frac{1}{4} q_{b_i b_{k+1}}\, \delta t^2$$
$$v_{22} = \frac{\partial \theta_{b_i b_{k+1}}}{\partial n_k^g} = v_{24} = \frac{\partial \theta_{b_i b_{k+1}}}{\partial n_{k+1}^g} = \frac{1}{2} I\, \delta t$$
$$v_{31} = \frac{\partial \beta_{b_i b_{k+1}}}{\partial n_k^a} = \frac{1}{2} q_{b_i b_k}\, \delta t$$
$$v_{33} = \frac{\partial \beta_{b_i b_{k+1}}}{\partial n_{k+1}^a} = \frac{1}{2} q_{b_i b_{k+1}}\, \delta t$$
$$v_{32} = \frac{\partial \beta_{b_i b_{k+1}}}{\partial n_k^g} = v_{34} = \frac{\partial \beta_{b_i b_{k+1}}}{\partial n_{k+1}^g} = \frac{1}{4}\left( R_{b_i b_{k+1}}\left[ a^{b_{k+1}} - b_k^a \right]_\times \delta t^2 \right) \delta t$$
$$v_{45} = \frac{\partial b_{k+1}^a}{\partial n_{b_k^a}} = v_{56} = \frac{\partial b_{k+1}^g}{\partial n_{b_k^g}} = I\, \delta t$$

References

  1. Qin, T.; Zheng, Y.; Chen, T.; Chen, Y.; Su, Q. RoadMap: A Light-Weight Semantic Map for Visual Localization towards Autonomous Driving. arXiv 2021, arXiv:2106.02527. [Google Scholar]
  2. Apple. Apple ARKit. Available online: https://developer.apple.com/arkit/ (accessed on 15 February 2022).
  3. Facebook. Oculus VR. Available online: https://www.oculus.com/ (accessed on 15 February 2022).
  4. Herath, S.; Irandoust, S.; Chen, B.; Qian, Y.; Kim, P.; Furukawa, Y. Fusion-DHL: WiFi, IMU, and Floorplan Fusion for Dense History of Locations in Indoor Environments. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 5677–5683. [Google Scholar]
  5. Poulose, A.; Eyobu, O.S.; Han, D.S. An Indoor Position-Estimation Algorithm Using Smartphone IMU Sensor Data. IEEE Access 2019, 7, 11165–11177. [Google Scholar] [CrossRef]
  6. Poulose, A.; Han, D.S. Hybrid Indoor Localization Using IMU Sensors and Smartphone Camera. Sensors 2019, 19, 5084. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Mur-Artal, R.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Platform setup. ➀ Wheeled Vehicle; ➁ Remote Control; ➂ Ground Truth C; ➃ Multi-IMU & Camera Suite; ➄ Laptop with Intel Core i7-9750H @2.60 GHz × 12.
Figure 2. The framework for evaluation of MEMS-IMU performance on Visual-Inertial Navigation System.
Figure 3. Multi-IMUs camera setup.
Figure 4. Multi-IMU Allan analysis: (a) Allan curves of the six accelerometers; (b) Allan curves of the six gyroscopes.
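For readers who want to reproduce curves of this kind, the sketch below computes an overlapping Allan deviation from a raw, static rate recording. It is a minimal illustration on synthetic data, not the authors' processing pipeline; the function name, sample rate, and noise level are ours.

```python
import numpy as np

def allan_deviation(samples, fs, num_taus=100):
    """Overlapping Allan deviation of a 1-D static sensor recording sampled at fs (Hz)."""
    n = len(samples)
    theta = np.cumsum(samples) / fs                      # integrate the rate signal
    m_values = np.unique(np.logspace(0, np.log10(n // 9), num_taus).astype(int))
    taus = m_values / fs                                 # averaging times in seconds
    adev = np.empty(len(m_values))
    for i, m in enumerate(m_values):
        # Overlapping second differences of the integrated signal.
        d = theta[2 * m:] - 2.0 * theta[m:n - m] + theta[:n - 2 * m]
        avar = np.sum(d ** 2) / (2.0 * taus[i] ** 2 * (n - 2 * m))
        adev[i] = np.sqrt(avar)
    return taus, adev

# Synthetic example: one hour of white-noise gyro output at 200 Hz. On a log-log
# plot, the -1/2 slope region of (taus, adev) corresponds to the angle random walk.
gyro = np.random.normal(0.0, 0.01, size=200 * 3600)
taus, adev = allan_deviation(gyro, fs=200.0)
```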
Figure 5. Weak texture in the corridor.
Figure 6. The trajectories with and without IMU assistance in the weak-texture scene.
Figure 7. The outdoor environment without strong sunlight.
Figure 8. The trajectory of the uniform-velocity motion state (slow, normal, fast): 6050, 100, nv, 16488, 16490, and mscrg denote MPU6050, HI219, NV-MG-201, ADIS16488, ADIS16490, and MSCRG, respectively. The left panel shows the ground-truth trajectory and the output trajectories of visual-inertial odometry based on the six different IMUs (the x-axis and y-axis span the 2D plane of the experimental environment). The right panel shows the ATE results of the VIO (the x-axis and y-axis are the running timestamp and the error, respectively). Other similar result figures follow the same convention.
Figure 9. The trajectory of the varying-velocity motion state.
Figure 10. The trajectory of the spin-move-forward motion state; the wave-like trajectories reflect this motion.
Figure 11. The strong sunlight scene in the experiment.
Figure 12. The trajectory of a strong sunlight scene.
Figure 13. The trajectory of the long-term scene.
Figure 14. The multi-IMU camera platform rigidly mounted on the multiaxial turntable.
Figure 15. Turntable results for the six MEMS-IMUs: (a) angular-velocity test results on the multiaxial turntable; (b) absolute error of the Z-axis angular velocity of the six MEMS-IMUs.
Table 1. Scenario sequences.
Normal Illumination: uniform_velocity (approximate; slow, normal, fast), varying_velocity (alternating acceleration and deceleration), spin_move (spin_move_forward), long_term (30 min).
Strong Illumination: slow, normal, fast, varying_velocity, spin_move.
Corridor: weak_texture.
Table 2. Nominal specifications of MEMS-IMUs in the experiment ("\" indicates that no value is specified).
Grade | IMU Type | Sensor | Range | Bandwidth | Bias Stability | Bias Repeatability | Nonlinearity | Resolution | Price ($)
Consumer | MPU6050 | Acc | ±4 g | \ | \ | \ | 0.5% | 0.06 mg/LSB | 1
Consumer | MPU6050 | Gyro | ±2000°/s | \ | \ | \ | 0.2% | 0.061°/s/LSB | 1
Consumer | HI219 | Acc | ±16 g | \ | \ | \ | \ | \ | 20
Consumer | HI219 | Gyro | ±2000°/s | \ | \ | \ | \ | \ | 20
Tactical | NV-MG-201 | Acc | ±30 g | 100 Hz | 80 μg | 100 μg | 0.03% | \ | 500
Tactical | NV-MG-201 | Gyro | ±500°/s | 80 Hz | 0.8°/h | 0.8°/h | 0.03% | \ | 500
Tactical | ADIS16488 | Acc | ±18 g | 330 Hz | 0.1 mg | ±16 mg | 0.5% | 0.8 mg/LSB | 2500
Tactical | ADIS16488 | Gyro | ±450°/s | 330 Hz | 6.25°/h | ±0.2°/s | 0.01% | 0.02°/s/LSB | 2500
Tactical | MSCRG | Acc | ±30 g | 200 Hz | 45 μg | 3.6 mg | 0.3% | 0.0572 mg/LSB | 3000
Tactical | MSCRG | Gyro | ±300°/s | 75 Hz | \ | 0.07°/s | 0.15% | 0.03125°/s/LSB | 3000
Tactical | ADIS16490 | Acc | ±8 g | 750 Hz | 3.6 μg | ±3.5 mg | 1.6% | 0.5 mg/LSB | 3500
Tactical | ADIS16490 | Gyro | ±100°/s | 480 Hz | 1.8°/h | 0.05°/s | 0.3% | 0.005°/s/LSB | 3500
Table 3. Experimental calibration results of MEMS-IMUs.
IMU Type | Accelerometer Noise | Accelerometer Bias Stability | Accelerometer Bias Random Walk | Gyroscope Noise | Gyroscope Bias Stability | Gyroscope Bias Random Walk
MPU6050 | 0.000995 | 0.00035 | 0.000053 | 0.000048 | 0.000012 | 0.000001
HI219 | 0.001420 | 0.00040 | 0.000043 | 0.000005 | 5.00 × 10⁻⁷ | 0.000001
NV-MG-201 | 0.000508 | 0.00028 | 0.000028 | 0.000014 | 7.00 × 10⁻⁷ | 0.000001
ADIS16488 | 0.002999 | 0.00060 | 0.000014 | 0.000252 | 0.000034 | 0.000001
ADIS16490 | 0.000378 | 0.000034 | 0.000005 | 0.000051 | 8.00 × 10⁻⁶ | 0.000001
MSCRG | 0.000701 | 0.00015 | 0.000040 | 0.000205 | 1.80 × 10⁻⁵ | 0.000013
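The columns of Table 3 are exactly the quantities a tightly coupled estimator such as VINS-Fusion consumes as its IMU noise model. The sketch below maps one row into the noise parameters of a VINS-Fusion-style configuration; the acc_n/gyr_n/acc_w/gyr_w key names follow the publicly available example configurations, while the file name, the gravity value, and the assumption that "Noise" maps to the *_n entries and "Bias Random Walk" to the *_w entries (in the units the estimator expects) are ours.

```python
import yaml  # PyYAML

# MPU6050 row of Table 3, under the assumed mapping Noise -> *_n, Bias Random Walk -> *_w.
imu_noise = {
    "acc_n": 0.000995,   # accelerometer noise
    "acc_w": 0.000053,   # accelerometer bias random walk
    "gyr_n": 0.000048,   # gyroscope noise
    "gyr_w": 0.000001,   # gyroscope bias random walk
    "g_norm": 9.805,     # local gravity magnitude (illustrative value)
}

# Illustrative only: in practice these entries are merged into the estimator's full
# sensor configuration file rather than written out on their own.
with open("mpu6050_imu_noise.yaml", "w") as f:
    yaml.safe_dump(imu_noise, f)
```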
Table 4. The ATE results for the long-term scene with six different MEMS-IMUs.
Long_term | MPU6050 | HI219 | NV-MG-201 | ADIS16488 | ADIS16490 | MSCRG
Mean | 5.72/3.9934 | 3.5327/4.0316↑ | Fail | 5.5085/4.0496 | 6.4623/4.0012 | 5.7496/3.9860
Median | 4.5199/2.7960 | 2.6263/2.8203↑ | Fail | 4.9339/2.8902 | 7.1918/2.8369 | 5.6274/2.8210
RMSE | 7.4087/5.5448 | 4.2584/5.6121↑ | Fail | 6.3012/5.5689 | 7.8725/5.5018 | 6.5565/5.4862
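As a reference for how the Mean, Median, and RMSE entries above are obtained, the sketch below evaluates the absolute trajectory error of an estimated trajectory against time-associated ground-truth positions after a rigid SVD-based alignment. It is an illustrative implementation of the metric, not the authors' exact evaluation script; the function name and the use of a plain (N, 3) position array are our assumptions.

```python
import numpy as np

def absolute_trajectory_error(estimated, ground_truth):
    """ATE statistics after rigidly aligning `estimated` (N, 3) to `ground_truth` (N, 3).

    Positions are assumed to be time-associated row by row.
    """
    mu_e = estimated.mean(axis=0)
    mu_g = ground_truth.mean(axis=0)
    E, G = estimated - mu_e, ground_truth - mu_g
    # SVD-based rotation that best maps the estimated frame onto the ground truth.
    U, _, Vt = np.linalg.svd(E.T @ G)
    S = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:       # guard against an improper (reflected) solution
        S[2, 2] = -1.0
    R = (U @ S @ Vt).T
    t = mu_g - R @ mu_e
    aligned = (R @ estimated.T).T + t
    err = np.linalg.norm(aligned - ground_truth, axis=1)
    return err.mean(), np.median(err), np.sqrt(np.mean(err ** 2))  # Mean, Median, RMSE
```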
Table 5. The votes for all scenarios with six different MEMS-IMUs.
Scenarios | Number of Experiments | MPU6050 | HI219 | NV-MG-201 | ADIS16488 | ADIS16490 | MSCRG
Uniform_velocity (approximate) | 15 | 3 | 0 | 4 | 1 | 3 | 0
Varying_velocity | 5 | 0 | 0 | 0 | 1 | 2 | 1
Spin_move | 11 | 0 | 3 | 0 | 2 | 1 | 3
Strong_illumination | 7 | 1 | 3 | 0 | 2 | 1 | 0
Long_term | 1 | 0 | 1 | 0 | 0 | 0 | 0
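A vote table of this kind can be tallied by giving, for every experiment, one vote to the IMU whose VIO run achieves the lowest ATE RMSE. The sketch below shows that counting rule on hypothetical per-run values; the paper's exact tie-breaking and exclusion rules (e.g., for failed runs) may differ.

```python
from collections import Counter

# Hypothetical ATE RMSE values (in metres) for two runs of one scenario.
runs = [
    {"MPU6050": 0.21, "HI219": 0.25, "NV-MG-201": 0.18,
     "ADIS16488": 0.22, "ADIS16490": 0.19, "MSCRG": 0.24},
    {"MPU6050": 0.31, "HI219": 0.28, "NV-MG-201": 0.27,
     "ADIS16488": 0.33, "ADIS16490": 0.26, "MSCRG": 0.29},
]

# One vote per run for the IMU with the smallest RMSE.
votes = Counter(min(run, key=run.get) for run in runs)
print(votes)  # Counter({'NV-MG-201': 1, 'ADIS16490': 1})
```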
Table 6. Improvement of average localization accuracy after adding an IMU in different scenarios (unit: m). Only cases in which the accuracy improved are counted; each entry is the improvement obtained with IMU assistance relative to visual-only localization. For example, the MPU6050 improves the accuracy in the uniform-velocity scenes by 0.1031 m.
Scenes | MPU6050 | HI219 | NV-MG-201 | ADIS16488 | ADIS16490 | MSCRG
Uniform_velocity | 0.1031 | 0.1019 | 0.1692 | 0.0927 | 0.0783 | 0.0955
Varying_velocity | 0.0432 | 0.0000 | 0.0000 | 0.1874 | 0.1952 | 0.1477
Spin_move | 0.1895 | 0.7093 | 0.0000 | 0.1874 | 0.1762 | 0.8175
Strong_illumination | 0.1368 | 0.1769 | 0.0386 | 0.1634 | 0.1619 | 0.1381
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
