Article

Total Least Squares In-Field Identification for MEMS-Based Inertial Measurement Units †

DIN, University of Bologna, Via Terracini 24, 40131 Bologna, Italy
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in Duchi, M., Zaccaria, F., Briot, S., Ida’, E. Total Least Squares In-Field Identification for MEMS-Based Triaxial Accelerometers. In Mechanisms and Machine Science, Proceedings of the 16th IFToMM World Congress 2023, Tokyo, Japan, 5–10 November 2023; Springer Nature: Switzerland, 2023.
Robotics 2024, 13(11), 156; https://doi.org/10.3390/robotics13110156
Submission received: 9 July 2024 / Accepted: 21 October 2024 / Published: 23 October 2024

Abstract

Inertial Measurement Units are widely used in various applications and, hardware-wise, they primarily consist of a tri-axial accelerometer and a tri-axial gyroscope. For low-end commercial applications, the low cost of the device is crucial: this makes MEMS-based sensors a popular choice in this context. However, MEMS-based transducers are prone to significant, non-uniform and environmental-condition-dependent systematic errors that require frequent re-calibration to be eliminated. To this end, identification methods that can be performed in-field by non-expert users, without the need for high-precision or costly equipment, are of particular interest. In this paper, we propose an in-field identification procedure based on the Total Least Squares method for both tri-axial accelerometers and gyroscopes. The proposed identification model is linear and requires no prior knowledge of the parameters to be identified. It enables accelerometer calibration without the need for a specific reference surface orientation relative to Earth’s gravity and allows gyroscope calibration to be performed independently of accelerometer data, without requiring the sensor’s sensitive axes to be aligned with the rotation axes during calibration. Experiments conducted on the NXP sensors FXOS8700CQ and FXAS21002 demonstrated that using the parameters identified by our method reduced cross-validation standard deviations by about two orders of magnitude compared to those obtained using manufacturer-provided parameters. This result indicates that our method enables the effective calibration of IMU sensor parameters, relying only on simple 3D-printed equipment and significantly improving IMU performance at minimal cost.

1. Introduction

Inertial Measurement Units (IMUs) are widely used in many applications where the position and attitude of a body need to be evaluated: civilian navigation systems [1], robotics [2], aerospace [3], medical applications [4], as well as commercial low-end portable devices such as computers or smartphones [5]. Six-axis IMUs are essentially composed of an accelerometer and a gyroscope: these are transducers that output analog or digital signals based on physical acceleration or angular velocity input, respectively. In both cases, a linear function can be used to describe the relationship between input and output, allowing us to determine a mathematical model for the sensor [6].
To meet the widespread demand for consumer-grade devices, IMUs based on Micro Electro-Mechanical Systems (MEMS-based) are of particular interest due to their advantages in cost, size, weight, and power consumption compared to mechanical or optical alternatives [7]. However, MEMS manufacturing technologies produce sensors having non-uniform model parameters, which the manufacturer only estimates on batches [6]. For these two reasons, if manufacturer-provided parameters are used in the sensors’ mathematical model, slight to large measurement errors can be expected. Additionally, if those measurements need to be post-processed (as for position and attitude estimation) the errors are likely to be further propagated and amplified [8].
Sensor errors are typically categorized into systematic and random errors. While random errors can only be accounted for using stochastic models [9], systematic errors are deterministic, i.e., they can be minimized using a calibration procedure: essentially, sensor model parameters are identified by comparing the mathematical model output with a known reference. Calibration techniques can be classified into two main types: laboratory and in-field ones. While the former requires high-precision and costly equipment, such as mechanical rotating tables, to provide a known reference, the latter can be conducted by the end user without the need for high-end apparatus or external sensors [6]. The low repeatability of the manufacturing process, the low cost and the environmental-condition-dependent behavior that characterize MEMS-based sensors make in-field techniques a sensible choice for their calibration.
Accelerometer in-field calibration methods leverage the fact that, when the sensor is kept static, the Euclidean norm of the acceleration measured by the sensor in its reference frame ($\mathbf{a}_m$) should be constant and equal to that of the gravitational acceleration ($g$). Most of these methods involve re-orienting the device $N$ times to collect measurements $\mathbf{a}_{m,i}$, with $i = 1, \dots, N$: the array collecting the sensor model parameters ($\boldsymbol{\theta}_a$) is then obtained by minimizing the cost function

$$C_a(\boldsymbol{\theta}_a) = \sum_{i=1}^{N} \left( \|\mathbf{a}_{m,i}(\boldsymbol{\theta}_a)\|^2 - g^2 \right)^2,$$
where $\|\cdot\|$ denotes the Euclidean norm of a vector. The problem is usually solved using Non-Linear Least Squares (NLS) methods [10,11,12]. Regarding gyroscopes, the Earth’s rotation rate cannot serve as a reliable reference due to its weak magnitude, which would result in an excessively low Signal to Noise Ratio (SNR) [5,6]. For this reason, accelerometer-dependent gyroscope calibration is often employed [11,12]. In this approach, the sensor is positioned in $N$ static orientations by performing $N-1$ rotations: when the $j$-th non-static period begins, the gravity vector measured by the accelerometer ($\mathbf{g}_{0,j}$) is fed to an algorithm that, by integrating the gyroscope measurements acquired during the sensor rotation, outputs an estimation of the gravity vector at the end of the rotation ($\mathbf{g}_{g,j}$) that depends on the gyroscope’s acquired data. The array $\boldsymbol{\theta}_g$ collecting the gyroscope model parameters is then obtained by minimizing the cost function

$$C_g(\boldsymbol{\theta}_g) = \sum_{j=1}^{N-1} \left\| \mathbf{g}_{a,j} - \mathbf{g}_{g,j}(\boldsymbol{\theta}_g) \right\|^2,$$
where $\mathbf{g}_{a,j}$ is the accelerometer-measured gravity vector at the end of the $j$-th rotation. These methods do not require placing the sensor in specific orientations, which enhances usability, but they suffer from two main drawbacks: (i) they rely on NLS optimization algorithms and consequently require a tentative initial parameter array, and (ii) the calibration outcome could be poor depending on the random orientations selected by the user. Moreover, if accelerometer-dependent gyroscope calibration is performed, any possible accelerometer calibration error is likely to be further propagated and amplified.
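For concreteness, this NLS baseline can be reproduced with a generic solver. Below is a minimal sketch (our illustration of the prior approach, not the method proposed in this paper): a nine-parameter model with a symmetric scale matrix and a bias vector, fitted with SciPy's `least_squares` on synthetic static measurements. The parameter values and orientation count are assumptions made up for the example; note that the solver needs an initial guess, which is precisely one of the drawbacks discussed later.

```python
import numpy as np
from scipy.optimize import least_squares

G = 9.81  # assumed gravity magnitude in m/s^2

def unpack(theta):
    """theta = [A11, A12, A13, A22, A23, A33, b1, b2, b3], with A symmetric."""
    A = np.array([[theta[0], theta[1], theta[2]],
                  [theta[1], theta[3], theta[4]],
                  [theta[2], theta[4], theta[5]]])
    return A, theta[6:9]

def residuals(theta, raw):
    """One residual per static orientation: ||A v + b||^2 - g^2."""
    A, b = unpack(theta)
    acc = raw @ A.T + b          # calibrated acceleration, one row per orientation
    return np.sum(acc**2, axis=1) - G**2

# Synthetic ground truth and 30 random static orientations (illustrative values).
rng = np.random.default_rng(0)
A_true = np.array([[1.02, 0.01, 0.02],
                   [0.01, 0.98, 0.01],
                   [0.02, 0.01, 1.01]])
b_true = np.array([0.05, -0.03, 0.02])
dirs = rng.normal(size=(30, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)      # random gravity directions
raw = (G * dirs - b_true) @ np.linalg.inv(A_true).T      # raw outputs: A v + b = g n

# NLS requires a tentative initial guess (a drawback of this class of methods).
theta0 = np.array([1.0, 0, 0, 1.0, 0, 1.0, 0, 0, 0])
sol = least_squares(residuals, theta0, args=(raw,),
                    xtol=1e-12, ftol=1e-12, gtol=1e-12)
A_id, b_id = unpack(sol.x)
```

With noiseless synthetic data and a well-chosen initial guess the solver recovers the true parameters; a poor initial guess can instead leave it in a local minimum, as the paper points out.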
Methods for linear and non-iterative in-field calibration, which do not require any previous knowledge of the parameters to be calibrated, can be classified as identification methods and are highly desirable. For accelerometers, such methods were proposed in [13,14], but they rely on strict assumptions, including (i) the availability of a surface orthogonal to Earth’s gravity, and (ii) the sensor’s parameters being within known bounds. Other types of identification methods [15,16,17] for these devices are based on a suitable non-linear change of variables in the sensor model, which reformulates the classical NLS methods of [10,11,12] into linear ones. On the other hand, once the coefficients are determined, the inverse transformation of the non-linear change of variables needs to be performed, numerically propagating the errors in the estimated parameters. For gyroscopes, non-iterative methods have been proposed in [10,18]. However, these methods require rotating the sensor around its sensitive axes and rely on a mean angular velocity approach to compute rotations, rather than properly integrating the gyroscope measurements. Assuming that the rotation axis coincides with one of the sensor’s axes and using the mean angular velocity for integration are fragile assumptions, especially given that the calibration is intended to be performed in-field by the end user. In fact, if the rotations are performed manually, (i) precisely rotating around one of the sensor’s sensitive axes can be extremely challenging and (ii) achieving a smooth angular velocity profile can be difficult. Therefore, assuming an unknown rotation axis and performing a small time-step integration of the gyroscope measurements are desirable approaches to improve the robustness of the calibration. Moreover, these methods rely on accelerometer measurements to estimate the reference rotation angles, which can result in error propagation. Additionally, they do not account for cross-sensitivities between the sensor axes.
In this paper, we propose an in-field identification method that extends the approach presented in [19] for accelerometers to include gyroscope calibration as well. This extended method is based on (i) a predetermined sequence of rotations used to acquire sensor outputs, and (ii) the solution of two Total Least Squares (TLS) problems. The sensor is placed in a 3D-printed multifaceted housing, which is rotated with respect to a sloped surface used as a reference. Each rotation is manually performed around a fixed axis that, in general, is assumed not to coincide with any of the sensor triad axes. Small time-step trapezoidal integration of gyroscope measurements is employed to obtain a reliable estimation of rotation angles. These estimations are then compared with the rotation angles expected given the geometry of the IMU’s housing and the chosen rotation sequence, allowing for accelerometer-independent gyroscope calibration. The TLS approach ensures that potential errors in the rotations are accounted for during calibration. In this manner, sensor model parameters, including sensitivities, cross-sensitivities and bias terms, are identified. In summary, our method’s contributions are as follows:
  • Our method does not require the calibration reference to have a known orientation with respect to Earth’s gravity, addressing a key shortcoming of existing accelerometer identification methods [13,14]. This enhances the in-field usability of our method.
  • In the gyroscope calibration model, the axis of each rotation is assumed to be unknown, i.e., non-coincident with any of the sensor’s sensitive axes. This contrasts with state-of-the-art gyroscope identification methods [10,18] and improves the robustness of the identification, especially since rotations are performed manually.
  • Accelerometer and gyroscope calibration are independent of each other, meaning that any errors in accelerometer calibration do not propagate to gyroscope calibration. This solves one of the drawbacks of [10,11,12,18].
  • Both accelerometer and gyroscope identification models are linear, eliminating the need for any initial guess regarding sensor parameters. In NLS-based methods [11,12], a poor choice of the initial guess can lead to optimization being trapped in a local minimum of the cost function, resulting in sub-optimal calibration outcomes.
  • The TLS approach is used to solve the linear identification systems of equations, allowing for the consideration of potential errors in the data acquisition process. In contrast, other linear methods like [10,18] use least squares solutions that assume the measurements acquired during the data collection process to be error-free, which is often not the case when rotations for in-field calibration are performed manually.
The rest of the paper is organized as follows: Section 2 presents the description of the identification algorithms, while experiments performed using NXP sensors FXOS8700CQ and FXAS21002 mounted on a single chip are presented and discussed in Section 3. Conclusions are drawn in Section 4.

2. Identification Algorithms Description

The proposed identification method requires placing a “single-board” IMU—namely, a device where the accelerometer and the gyroscope are rigidly mounted on the same electronic board and mechanical frame—inside a multifaceted 3D-printed housing, and re-orienting such housing into $N$ different orientations with respect to (w.r.t.) a reference surface. In practice, this is accomplished by positioning the housing on each of its facets according to a predetermined sequence of $N-1$ rotations. Let $Oxyz$ be a fixed reference frame rigidly attached to the sloped surface where the calibration takes place and $O'x'y'z'$ be a mobile reference frame rigidly attached to the IMU.
We will refer to them as the fixed frame (F-frame) and the mobile frame (M-frame), respectively, and we assume accelerometer and gyroscope orthogonal triads to be coincident with the M-frame. When the calibration procedure starts, the M-frame is assumed to be randomly tilted w.r.t. the F-frame (Figure 1), meaning that the IMU is placed in an unknown orientation w.r.t. its housing. In the following subsections, accelerometer and gyroscope sensor models are introduced and their calibration algorithms are described.

2.1. Accelerometer

The mathematical model for a tri-axial accelerometer, which correlates the acceleration perceived by the sensor expressed w.r.t. the M-frame, $\{\mathbf{a}\}_M \in \mathbb{R}^{3\times1}$, with the sensor’s raw output $\mathbf{v} = [v_1, v_2, v_3]^T$, is given by [6]:

$$\mathbf{v} = \mathbf{A}'\{\mathbf{a}\}_M + \mathbf{b}' + \boldsymbol{\zeta}'_a,$$

where $\mathbf{A}' \in \mathbb{R}^{3\times3}$ is a scale factor matrix, $\mathbf{b}' \in \mathbb{R}^{3\times1}$ is an array collecting bias terms and $\boldsymbol{\zeta}'_a \in \mathbb{R}^{3\times1}$ contains measurement noise. Although this model accurately represents the physical behavior of the sensor, its inverse form is preferable for practical uses:

$$\{\mathbf{a}\}_M = \mathbf{A}(\mathbf{v} + \boldsymbol{\zeta}_a) + \mathbf{b},$$

where $\mathbf{A} = \mathbf{A}'^{-1}$, $\mathbf{b} = -\mathbf{A}'^{-1}\mathbf{b}'$ and $\boldsymbol{\zeta}_a = -\boldsymbol{\zeta}'_a$. The identification procedure aims to determine the elements of $\mathbf{A}$ and $\mathbf{b}$, so that the raw sensor output $\mathbf{v}$ can be transformed into an acceleration measurement $\{\mathbf{a}\}_M$. $\mathbf{A}$ is assumed to be symmetric and the noise terms to be white Gaussian with zero mean [6]. We call the diagonal elements $A_{pp}$ of matrix $\mathbf{A}$ sensitivities, while its off-diagonal elements $A_{pq}$ are referred to as cross-sensitivities. Cross-sensitivities are non-zero when non-orthogonality in the sensor’s triad arises due to manufacturing tolerances, and they also account for the cross-talk effect between different channels caused by the sensor’s electronics. These non-idealities are particularly relevant for MEMS-based sensors [20] and our identification model takes them into account. We assume the gravitational acceleration to have a constant magnitude equal to $g$. We denote as $\{\cdot\}_{M,i}$ the expression of a vector in M-frame coordinates when the IMU is placed in the $i$-th orientation of the procedure, for $i = 1, \dots, N$. Accordingly, the acceleration perceived by the sensor in the first orientation of the procedure can be written as follows:
$$\{\mathbf{a}\}_{M,1} = g\,\{\mathbf{n}\}_{M,1},$$

where $\{\mathbf{n}\}_{M,1} = [n_1, n_2, n_3]^T$ is a unit vector (i.e., its Euclidean norm is one) expressing Earth’s gravity direction in the mobile frame. $\{\mathbf{n}\}_{M,1}$ is generally unknown, given that the reference is randomly oriented w.r.t. Earth’s gravity, and it will be determined as a by-product of the identification. If the IMU is re-oriented into $N$ different orientations during the calibration procedure, the acceleration perceived by the sensor in the $i$-th orientation, for $i = 1, \dots, N$, can be expressed as follows:

$$\{\mathbf{a}\}_{M,i} = \mathbf{R}_i\,\{\mathbf{a}\}_{M,1},$$

where $\mathbf{R}_i$ is a rotation matrix defined by the known sequence of rotations performed w.r.t. the F-frame. If we substitute Equation (4) into Equation (6), we can correlate the raw output of the sensor in the $i$-th configuration with the acceleration in the initial configuration as follows:

$$\{\mathbf{a}\}_{M,i} = \mathbf{A}(\mathbf{v}_i + \boldsymbol{\zeta}_{a,i}) + \mathbf{b} = \mathbf{R}_i\,\{\mathbf{a}\}_{M,1}.$$
Since measurement noise can be well-modeled as white Gaussian with zero mean, the average of k raw output samples acquired in the same orientation is considered in order to mitigate noise effects, namely:
$$\bar{\mathbf{v}}_i = \frac{1}{k}\sum_{s=1}^{k}\left(\mathbf{v}_{i,s} + \boldsymbol{\zeta}_{a,i,s}\right), \qquad k \gg 1.$$
In this way, the noiseless mathematical model is obtained as follows:
$$\{\mathbf{a}\}_{M,i} = \mathbf{A}\,\bar{\mathbf{v}}_i + \mathbf{b} = \mathbf{R}_i\,\{\mathbf{a}\}_{M,1}.$$
Re-arranging terms of Equation (9) so as to highlight known quantities and parameters to be calibrated yields the following:
$$\mathbf{M}_i\,\boldsymbol{\theta} = \mathbf{0}_{3\times1},$$

where

$$\mathbf{M}_i = \begin{bmatrix} \begin{matrix} \bar{v}_{i,1} & \bar{v}_{i,2} & \bar{v}_{i,3} & 0 & 0 & 0 \\ 0 & \bar{v}_{i,1} & 0 & \bar{v}_{i,2} & \bar{v}_{i,3} & 0 \\ 0 & 0 & \bar{v}_{i,1} & 0 & \bar{v}_{i,2} & \bar{v}_{i,3} \end{matrix} & \mathbf{I}_{3\times3} & -g\,\mathbf{R}_i \end{bmatrix}$$

and

$$\boldsymbol{\theta} = [A_{11}, A_{12}, A_{13}, A_{22}, A_{23}, A_{33}, b_1, b_2, b_3, n_1, n_2, n_3]^T.$$

We denoted as $\mathbf{I}_{3\times3}$ the third-order identity matrix, as $A_{uw}$ the element in the $u$-th row and $w$-th column of matrix $\mathbf{A}$ and as $b_1, b_2, b_3$ the elements of vector $\mathbf{b}$. According to our identification method, the gravity direction in the start configuration $\{\mathbf{n}\}_{M,1}$, expressed in the mobile frame, is also unknown and included in the parameters to be determined. By stacking Equation (10) for $i = 1, \dots, N$ measurements, we obtain a homogeneous system of $3N$ linear equations in 12 unknowns, which is over-determined if $N > 4$:

$$\begin{bmatrix} \mathbf{M}_1 \\ \vdots \\ \mathbf{M}_N \end{bmatrix}\boldsymbol{\theta} = \mathbf{M}\,\boldsymbol{\theta} = \mathbf{0}_{3N\times1}, \qquad \mathbf{M} \in \mathbb{R}^{3N\times12}.$$
Equation (13) can be solved for $\boldsymbol{\theta}$ with the linear TLS method (see [21] for the theoretical foundations, or [22,23] for practical applications). Using the TLS method allows us to account for possible errors in the elements of matrix $\mathbf{M}$ originating in the data acquisition procedure. Please note that $\boldsymbol{\theta}$ contains the actual parameters needed by the model, and not a combination of them, which addresses the second shortcoming of previous works [15,16,17]. In ideal conditions—namely, without any measurement noise or model error—$\mathbf{M}$ would be rank deficient, namely $\mathrm{rank}(\mathbf{M}) < 12$, and a solution of Equation (13) different from the trivial one ($\boldsymbol{\theta} = \mathbf{0}$) could be found. In real applications, it is necessary to address the compatible system that is closest to Equation (13) in terms of the Frobenius norm:
$$\hat{\mathbf{M}}\,\hat{\boldsymbol{\theta}} = \mathbf{0},$$

where $\hat{\boldsymbol{\theta}}$, which is the solution to Equation (13), contains the unknown parameter estimations and $\hat{\mathbf{M}}$ has rank 11 and minimizes $\|\hat{\mathbf{M}} - \mathbf{M}\|_F$, where $\|\cdot\|_F$ is the Frobenius norm. The matrix $\hat{\mathbf{M}}$ can be computed by means of the “economy size” singular value decomposition of $\mathbf{M}$, given by the following:

$$\mathbf{M} = \mathbf{U}\begin{bmatrix} \mathrm{diag}(\lambda_l) \\ \mathbf{0} \end{bmatrix}\mathbf{V}^T,$$

where $\mathbf{U} \in \mathbb{R}^{3N\times3N}$ and $\mathbf{V} \in \mathbb{R}^{12\times12}$ are orthonormal matrices and $\mathrm{diag}(\lambda_l) \in \mathbb{R}^{12\times12}$ is a diagonal matrix containing the singular values $\lambda_l$ of $\mathbf{M}$, for $l = 1, \dots, 12$, sorted in decreasing order. $\hat{\mathbf{M}}$ can be obtained as follows:

$$\hat{\mathbf{M}} = \mathbf{M} - \lambda_{12}\,\mathbf{U}_{12}\mathbf{V}_{12}^T,$$

where $\lambda_{12}$ is the smallest singular value of $\mathbf{M}$ and $\mathbf{U}_{12}$, $\mathbf{V}_{12}$ are the 12-th columns of $\mathbf{U}$ and $\mathbf{V}$, respectively. A solution to Equation (14) is given by $\hat{\boldsymbol{\theta}} = \mathbf{V}_{12}$. The solution to the proposed identification problem is obtained by normalizing $\hat{\boldsymbol{\theta}}$ so that its last three elements have unitary Euclidean norm. This generates an ambiguity regarding the sign of the solution. However, the ambiguity can be removed by making an ex-post check on the sign of the last element of $\boldsymbol{\theta}$ (for example, if in the initial orientation of the procedure $z'$ is roughly pointing in the opposite direction w.r.t. $\mathbf{g}$, then the last element of $\boldsymbol{\theta}$ will be imposed to have a negative sign), namely:

$$\boldsymbol{\theta} = \frac{\hat{\boldsymbol{\theta}}}{\|\hat{\boldsymbol{\theta}}_{10:12}\|},$$

where $\hat{\boldsymbol{\theta}}_{10:12}$ is the array collecting the last three elements of $\hat{\boldsymbol{\theta}}$.
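The SVD-based TLS solution just described can be prototyped in a few lines of NumPy. The sketch below runs on synthetic, noiseless data, with random rotations standing in for the housing sequence; the numeric parameter values are assumptions made up for the example, and the coefficient blocks are assembled from the averaged raw outputs, an identity block for the bias, and a $-g\,\mathbf{R}_i$ block for the gravity direction.

```python
import numpy as np

G = 9.81                                   # assumed gravity magnitude
rng = np.random.default_rng(1)

def sym_rows(v):
    """3x6 block mapping [A11, A12, A13, A22, A23, A33] to A @ v, A symmetric."""
    return np.array([[v[0], v[1], v[2], 0.0,  0.0,  0.0],
                     [0.0,  v[0], 0.0,  v[1], v[2], 0.0],
                     [0.0,  0.0,  v[0], 0.0,  v[1], v[2]]])

def random_rotation(rng):
    """Random rotation matrix from a normalized quaternion."""
    q = rng.normal(size=4)
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                     [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                     [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

# Ground truth: symmetric scale matrix, bias, unit gravity direction (illustrative).
A_true = np.array([[1.02, 0.01, 0.02],
                   [0.01, 0.98, 0.01],
                   [0.02, 0.01, 1.01]])
b_true = np.array([0.05, -0.03, 0.02])
n_true = np.array([0.1, -0.2, -0.9])
n_true /= np.linalg.norm(n_true)

N = 24
Ainv = np.linalg.inv(A_true)
blocks = []
for i in range(N):
    R = np.eye(3) if i == 0 else random_rotation(rng)
    v_bar = Ainv @ (G * R @ n_true - b_true)          # noiseless averaged raw output
    blocks.append(np.hstack([sym_rows(v_bar), np.eye(3), -G * R]))
M = np.vstack(blocks)                                  # stacked 3N x 12 system

# TLS: right singular vector associated with the smallest singular value.
_, _, Vt = np.linalg.svd(M)
theta = Vt[-1]
theta = theta / np.linalg.norm(theta[9:12])            # unit-norm gravity direction
if theta[11] > 0:                                      # ex-post sign check (here n3 < 0)
    theta = -theta

A_id = np.array([[theta[0], theta[1], theta[2]],
                 [theta[1], theta[3], theta[4]],
                 [theta[2], theta[4], theta[5]]])
b_id, n_id = theta[6:9], theta[9:12]
```

With noiseless synthetic data the smallest singular value is numerically zero and the true parameters are recovered to machine precision; no initial guess is needed, which is the point of the linear formulation.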
Percentage relative standard deviation estimations for the identified parameters can be used to infer the accuracy of the obtained results. The estimation is performed according to TLS techniques, assuming $n_3$ to be the element having the maximum accuracy. Accordingly, the covariance matrix of the TLS solution may be approximated as [21]:

$$\mathbf{C}_{\theta} = \hat{\sigma}_M^2\left(1 + \|\boldsymbol{\theta}_{1:11}\|^2\right)\left(\hat{\mathbf{M}}_{1:11}^T\hat{\mathbf{M}}_{1:11}\right)^{-1},$$

where

$$\hat{\sigma}_M = \frac{\lambda_{12}}{\sqrt{3N - 12}}$$

and $\hat{\mathbf{M}}_{1:11}$ is obtained by discarding the last column of $\hat{\mathbf{M}}$, whereas $\boldsymbol{\theta}_{1:11}$ is the array containing the first 11 elements of $\boldsymbol{\theta}$. We finally obtain the percentage relative standard deviation estimation as [23]:

$$\sigma_{\%,\theta_l} = 100\,\frac{\sqrt{\mathbf{C}_{\theta}(l,l)}}{|\theta_l|}, \qquad l = 1, \dots, 11;$$

where $\mathbf{C}_{\theta}(l,l)$ is the $l$-th diagonal term of $\mathbf{C}_{\theta}$ and $\theta_l$ the $l$-th element of $\boldsymbol{\theta}$.

2.2. Gyroscope

Analogous to Section 2.1, we consider the angular velocity perceived by the gyroscope $\{\boldsymbol{\omega}\}_M$, expressed in the M-frame, to be related to the sensor raw output $\mathbf{r} = [r_1, r_2, r_3]^T$ by the equation:

$$\{\boldsymbol{\omega}\}_M = \mathbf{G}\,\mathbf{r} + \mathbf{d} + \boldsymbol{\zeta}_g,$$

where the diagonal elements $G_{pp}$ of the symmetric matrix $\mathbf{G} \in \mathbb{R}^{3\times3}$ are referred to as sensitivities, the off-diagonal ones $G_{pq}$ are cross-sensitivities (which are, in general, non-zero), $\mathbf{d} = [d_1, d_2, d_3]^T$ collects bias terms and $\boldsymbol{\zeta}_g \in \mathbb{R}^{3\times1}$ contains measurement noise. Previous identification methods [10,18] consider the cross-sensitivities to be negligible, and this can lead to inaccurate calibration for MEMS-based sensors [20].
Let $\mathbf{u}_x, \mathbf{u}_y, \mathbf{u}_z$ be three unit vectors that represent the F-frame axes directions (Figure 1). When the sensor is placed in the first orientation of the calibration procedure, by using the Euler–Rodrigues formula [24], we can express $\mathbf{u}_x, \mathbf{u}_y, \mathbf{u}_z$ in M-frame coordinates as follows:

$$\begin{aligned}
\{\mathbf{u}_x\}_{M,1} &= \cos\phi \begin{bmatrix}1\\0\\0\end{bmatrix} + \sin\phi \left(\mathbf{e} \times \begin{bmatrix}1\\0\\0\end{bmatrix}\right) + (1-\cos\phi)\,\mathbf{e}\left(\mathbf{e} \cdot \begin{bmatrix}1\\0\\0\end{bmatrix}\right),\\
\{\mathbf{u}_y\}_{M,1} &= \cos\phi \begin{bmatrix}0\\1\\0\end{bmatrix} + \sin\phi \left(\mathbf{e} \times \begin{bmatrix}0\\1\\0\end{bmatrix}\right) + (1-\cos\phi)\,\mathbf{e}\left(\mathbf{e} \cdot \begin{bmatrix}0\\1\\0\end{bmatrix}\right),\\
\{\mathbf{u}_z\}_{M,1} &= \cos\phi \begin{bmatrix}0\\0\\1\end{bmatrix} + \sin\phi \left(\mathbf{e} \times \begin{bmatrix}0\\0\\1\end{bmatrix}\right) + (1-\cos\phi)\,\mathbf{e}\left(\mathbf{e} \cdot \begin{bmatrix}0\\0\\1\end{bmatrix}\right),
\end{aligned}$$

where $\mathbf{e} = [e_1, e_2, e_3]^T$ and $\phi$ are the axis and the angle that define the rotation needed to transform the M-frame into the F-frame, while $\cdot$ and $\times$ denote the Euclidean dot and cross products, respectively. In this first IMU orientation, we assume the misalignment between the M-frame and the F-frame to be small (i.e., $\phi \approx 0$), so that Equation (22) can be approximated as:

$$\{\mathbf{u}_x\}_{M,1} \approx \begin{bmatrix} 1 \\ \phi e_3 \\ -\phi e_2 \end{bmatrix}, \qquad \{\mathbf{u}_y\}_{M,1} \approx \begin{bmatrix} -\phi e_3 \\ 1 \\ \phi e_1 \end{bmatrix}, \qquad \{\mathbf{u}_z\}_{M,1} \approx \begin{bmatrix} \phi e_2 \\ -\phi e_1 \\ 1 \end{bmatrix}.$$
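The quality of this small-angle approximation can be checked numerically against the exact Euler–Rodrigues formula. A short sketch follows; the misalignment axis and the 2° angle are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rodrigues(u, e, phi):
    """Exact Euler-Rodrigues rotation of u about the unit axis e by angle phi."""
    return (np.cos(phi) * u
            + np.sin(phi) * np.cross(e, u)
            + (1.0 - np.cos(phi)) * e * np.dot(e, u))

e = np.array([0.36, 0.48, 0.8])          # unit misalignment axis (illustrative)
phi = np.deg2rad(2.0)                    # small misalignment angle

# Exact images of the F-frame axes versus the linearized expressions.
exact = [rodrigues(np.eye(3)[k], e, phi) for k in range(3)]
approx = [np.array([1.0,       phi*e[2], -phi*e[1]]),
          np.array([-phi*e[2], 1.0,       phi*e[0]]),
          np.array([phi*e[1], -phi*e[0],  1.0])]

# The linearization error is O(phi^2), negligible for degree-level misalignments.
err = max(np.linalg.norm(a - b) for a, b in zip(exact, approx))
```

For a 2° misalignment the worst-case error stays below $10^{-3}$, which supports the assumption that the simplification is acceptable for industrially manufactured sensors.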
According to our calibration method, the IMU needs to be rotated w.r.t. the F-frame fixed axes; for each $j$-th rotation, with $j = 1, \dots, N-1$, we can express in the M-frame the angular velocity $\{\boldsymbol{\omega}(t)\}_{M,j}$ perceived by the gyroscope at time instant $t$ as follows:

$$\{\boldsymbol{\omega}(t)\}_{M,j} = \dot{\delta}(t)\,\{\mathbf{u}_h\}_{M,j},$$

where $\dot{\delta}(t)$ is the angular velocity magnitude at time instant $t$ and $h = x, y, z$ depending on whether the rotation is performed around the $x$, $y$ or $z$ axis, respectively. Please notice that $\mathbf{u}_x, \mathbf{u}_y, \mathbf{u}_z$ are fixed, namely they remain the same throughout the calibration procedure, but, in general, their expressions in the gyroscope triad (the M-frame) change between the different orientations of the procedure. We can relate the expression of $\mathbf{u}_h$ in the M-frame to the one assumed in the initial configuration as follows:

$$\{\mathbf{u}_h\}_{M,j} = \mathbf{R}_j\,\{\mathbf{u}_h\}_{M,1},$$

where $\mathbf{R}_j$ is the rotation matrix that transforms a vector expressed in M,1-frame coordinates into a vector expressed in M,j-frame ones. Such a rotation matrix coincides with the one introduced in Equation (6) and is fully determined by the known sequence of rotations performed. By substituting Equations (25) and (21) into Equation (24), we obtain:

$$\{\boldsymbol{\omega}(t)\}_{M,j} = \mathbf{G}\,\mathbf{r}(t) + \mathbf{d} + \boldsymbol{\zeta}_g(t) = \dot{\delta}(t)\,\mathbf{R}_j\,\{\mathbf{u}_h\}_{M,1}.$$
Measurement noise can be removed by using proper filtering techniques: analog or digital filters can be embedded directly in the IMU electronic board or implemented in the data acquisition system if the sampling time is sufficiently small. The noiseless calibration model can then be written as follows:

$$\{\boldsymbol{\omega}(t)\}_{M,j} = \mathbf{G}\,\mathbf{r}(t) + \mathbf{d} = \dot{\delta}(t)\,\mathbf{R}_j\,\{\mathbf{u}_h\}_{M,1}.$$

By integrating the right-most term of Equation (27) over the time interval $[t_{0,j}, t_{1,j}]$ in which the $j$-th rotation takes place, we obtain:

$$\int_{t_{0,j}}^{t_{1,j}} \dot{\delta}(t)\,\mathbf{R}_j\,\{\mathbf{u}_h\}_{M,1}\,dt = \Delta_j\,\mathbf{R}_j\,\{\mathbf{u}_h\}_{M,1},$$

where $\Delta_j$ is the $j$-th expected rotation angle, whose numerical value depends on the geometry of the IMU’s 3D-printed housing and the chosen rotation sequence. Since the estimation of $\Delta_j$ does not rely on accelerometer measurements (as was the case in [10,18]), the results of the gyroscope calibration are independent of those of the accelerometer. Integrating the central term of Equation (27) over the same time interval, one obtains:

$$\int_{t_{0,j}}^{t_{1,j}} \left(\mathbf{G}\,\mathbf{r}(t) + \mathbf{d}\right) dt = \begin{bmatrix} G_{11}H_{1,j} + G_{12}H_{2,j} + G_{13}H_{3,j} + T_j d_1 \\ G_{12}H_{1,j} + G_{22}H_{2,j} + G_{23}H_{3,j} + T_j d_2 \\ G_{13}H_{1,j} + G_{23}H_{2,j} + G_{33}H_{3,j} + T_j d_3 \end{bmatrix},$$
where

$$H_{1,j} = \int_{t_{0,j}}^{t_{1,j}} r_1(t)\,dt, \quad H_{2,j} = \int_{t_{0,j}}^{t_{1,j}} r_2(t)\,dt, \quad H_{3,j} = \int_{t_{0,j}}^{t_{1,j}} r_3(t)\,dt, \quad T_j = t_{1,j} - t_{0,j},$$

$G_{uw}$ is the element in the $u$-th row and $w$-th column of matrix $\mathbf{G}$ and $d_1, d_2, d_3$ are the elements of vector $\mathbf{d}$. As introduced in Section 1, $H_{1,j}, H_{2,j}, H_{3,j}$ are obtained through the integration of the gyroscope raw measurements and not by using a mean value approach. Combining Equations (27)–(29), we obtain:
$$\begin{bmatrix} G_{11}H_{1,j} + G_{12}H_{2,j} + G_{13}H_{3,j} + T_j d_1 \\ G_{12}H_{1,j} + G_{22}H_{2,j} + G_{23}H_{3,j} + T_j d_2 \\ G_{13}H_{1,j} + G_{23}H_{2,j} + G_{33}H_{3,j} + T_j d_3 \end{bmatrix} - \Delta_j\,\mathbf{R}_j\,\{\mathbf{u}_h\}_{M,1} = \mathbf{0}_{3\times1}.$$

Please notice that Equation (30) assumes a different form depending on whether the $j$-th rotation is performed around $x$, $y$ or $z$, namely, the term $\{\mathbf{u}_h\}_{M,1}$ becomes:

$$\{\mathbf{u}_h\}_{M,1} = \begin{cases} \{\mathbf{u}_x\}_{M,1}, & \text{if } h = x \\ \{\mathbf{u}_y\}_{M,1}, & \text{if } h = y \\ \{\mathbf{u}_z\}_{M,1}, & \text{if } h = z. \end{cases}$$
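The integrals $H_{1,j}, H_{2,j}, H_{3,j}$ and the duration $T_j$ entering the equations above are obtained by small time-step trapezoidal integration of the raw gyroscope channels over the $j$-th rotation interval. A minimal sketch on a synthetic profile follows; the 200 Hz sampling rate and the smooth bell-shaped angular-velocity shape are illustrative assumptions.

```python
import numpy as np

# Synthetic raw gyroscope channels over one rotation interval (illustrative).
t = np.linspace(0.0, 1.0, 201)                     # uniform grid, T_j = 1 s
profile = np.sin(np.pi * t) ** 2                   # starts and ends at rest
r = np.vstack([0.9 * profile, 0.1 * profile, -0.2 * profile])

# Small time-step trapezoidal integration of each raw channel, rather than
# the mean-angular-velocity shortcut used by earlier methods.
dt = np.diff(t)
H = np.sum(0.5 * (r[:, 1:] + r[:, :-1]) * dt, axis=1)   # [H_1j, H_2j, H_3j]
T_j = t[-1] - t[0]
```

Since $\int_0^1 \sin^2(\pi t)\,dt = 1/2$, the three integrals evaluate to approximately 0.45, 0.05 and $-0.10$; manual rotations with irregular velocity profiles are exactly the case where this per-sample integration is more robust than a mean value approach.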
A relevant contribution of the gyroscope identification methodology introduced in this work can now be highlighted: the simplification introduced by Equation (23) allows the inclusion of the expressions for the rotation axes in the M-frame at the beginning of the procedure (namely $\{\mathbf{u}_x\}_{M,1}, \{\mathbf{u}_y\}_{M,1}, \{\mathbf{u}_z\}_{M,1}$) in the identification model without introducing any non-linearity into the equations and without assuming that $\{\mathbf{u}_x\}_{M,1} = [1,0,0]^T$, $\{\mathbf{u}_y\}_{M,1} = [0,1,0]^T$ or $\{\mathbf{u}_z\}_{M,1} = [0,0,1]^T$ (i.e., without assuming alignment of the rotation axes with the gyroscope’s sensitive axes). In this way, one of the shortcomings of previous gyroscope identification methods [10,18] is solved. One may argue that the simplification introduced in Equation (23) is valid only if $\phi$ (which is unknown and included, together with $\mathbf{e}$, in the parameters to be determined) is sufficiently small: however, this assumption is justified for industrially manufactured sensors, and it is further confirmed by the experimental results shown in Section 3. By substituting the expressions of $\{\mathbf{u}_x\}_{M,1}, \{\mathbf{u}_y\}_{M,1}, \{\mathbf{u}_z\}_{M,1}$ with those given by Equation (23) and re-organizing the terms of Equation (31) so as to highlight the known quantities and the unknown parameters to be identified, each $j$-th rotation can be associated with a linear system of equations of the form:
$$\mathbf{L}_j\,\boldsymbol{\vartheta} = \mathbf{0}_{3\times1},$$

where

$$\boldsymbol{\vartheta} = [G_{11}, G_{12}, G_{13}, G_{22}, G_{23}, G_{33}, d_1, d_2, d_3, \phi e_3, \phi e_2, \phi e_1, 1]^T$$

is the gyroscope-related counterpart of $\boldsymbol{\theta}$, introduced in Equation (12) to collect the accelerometer-related parameters to be identified: analogously, $\boldsymbol{\vartheta}$ contains the gyroscope model parameters, quantities related to $\mathbf{e}$ and $\phi$, and a known term, i.e., its last element $\vartheta_{13} = 1$. $\mathbf{L}_j$ can be expressed as the composition of two matrices, $\mathbf{L}_{j,1}$ and $-\Delta_j\,\mathbf{L}_{j,2}$, namely:

$$\mathbf{L}_j = \begin{bmatrix} \mathbf{L}_{j,1} & -\Delta_j\,\mathbf{L}_{j,2} \end{bmatrix},$$

where $\mathbf{L}_{j,1}$ is:

$$\mathbf{L}_{j,1} = \begin{bmatrix} \begin{matrix} H_{1,j} & H_{2,j} & H_{3,j} & 0 & 0 & 0 \\ 0 & H_{1,j} & 0 & H_{2,j} & H_{3,j} & 0 \\ 0 & 0 & H_{1,j} & 0 & H_{2,j} & H_{3,j} \end{matrix} & T_j\,\mathbf{I}_{3\times3} \end{bmatrix}$$

and has the same form regardless of the axis around which the $j$-th rotation is performed, while $\mathbf{L}_{j,2}$ differs depending on whether the $j$-th rotation is performed around the $x$, $y$ or $z$ axis and can be defined as follows:

$$\mathbf{L}_{j,2} = \begin{cases} \begin{bmatrix} \mathbf{R}_{j,2} & \pm\mathbf{R}_{j,3} & \mathbf{0}_{3\times1} & \mathbf{R}_{j,1} \end{bmatrix}, & \text{if } h = x \\[4pt] \begin{bmatrix} \pm\mathbf{R}_{j,1} & \mathbf{0}_{3\times1} & \pm\mathbf{R}_{j,3} & \mathbf{R}_{j,2} \end{bmatrix}, & \text{if } h = y \\[4pt] \begin{bmatrix} \mathbf{0}_{3\times1} & \mathbf{R}_j \end{bmatrix}, & \text{if } h = z, \end{cases}$$

where $\mathbf{R}_{j,p}$ is the $p$-th column of matrix $\mathbf{R}_j$, and the $\pm$ sign depends on whether the $j$-th rotation is performed counter-clockwise or clockwise, respectively. The mathematical derivation of Equation (32) can be found in Appendix A. By stacking Equation (32) for $j = 1, \dots, N-1$, one obtains a homogeneous system of $3(N-1)$ linear equations in 13 unknowns, which is over-determined if $N \geq 6$:

$$\begin{bmatrix} \mathbf{L}_1 \\ \vdots \\ \mathbf{L}_{N-1} \end{bmatrix}\boldsymbol{\vartheta} = \mathbf{L}\,\boldsymbol{\vartheta} = \mathbf{0}_{3(N-1)\times1}, \qquad \mathbf{L} \in \mathbb{R}^{3(N-1)\times13}.$$
Equation (37) can be solved for ϑ using the linear TLS method, following the same approach taken in Section 2.1 [21,22,23]. Even in this case, employing the TLS solution for Equation (37) ensures that potential errors in the elements of L , arising from data acquisition inaccuracies, are accounted for. This represents another significant improvement over the methods proposed in [10,18], especially since manual rotations are likely to introduce inaccuracies in the measurement process. We address the compatible system closest to Equation (37) in terms of the Frobenius norm:
$$\hat{\mathbf{L}}\,\hat{\boldsymbol{\vartheta}} = \mathbf{0},$$

where $\hat{\boldsymbol{\vartheta}}$ is the solution to Equation (37), containing the estimated parameters, and $\hat{\mathbf{L}}$ is such that $\mathrm{rank}(\hat{\mathbf{L}}) = 12$ and $\|\hat{\mathbf{L}} - \mathbf{L}\|_F$ is minimized. Exploiting the “economy size” singular value decomposition, $\mathbf{L}$ can be written as follows:

$$\mathbf{L} = \mathbf{U}\begin{bmatrix} \mathrm{diag}(\beta_q) \\ \mathbf{0} \end{bmatrix}\mathbf{V}^T,$$

where $\mathbf{U} \in \mathbb{R}^{3(N-1)\times3(N-1)}$ and $\mathbf{V} \in \mathbb{R}^{13\times13}$ are orthonormal matrices, while $\mathrm{diag}(\beta_q) \in \mathbb{R}^{13\times13}$ is a diagonal matrix containing the singular values $\beta_q$ of $\mathbf{L}$, for $q = 1, \dots, 13$, sorted in decreasing order. We can obtain $\hat{\mathbf{L}}$ as follows:

$$\hat{\mathbf{L}} = \mathbf{L} - \beta_{13}\,\mathbf{U}_{13}\mathbf{V}_{13}^T,$$
where $\beta_{13}$ is the smallest singular value of $\mathbf{L}$ and $\mathbf{U}_{13}$, $\mathbf{V}_{13}$ are the 13-th columns of $\mathbf{U}$ and $\mathbf{V}$, respectively. A solution to Equation (38) is $\hat{\boldsymbol{\vartheta}} = \mathbf{V}_{13}$. The solution $\boldsymbol{\vartheta}$ to the proposed identification problem can be obtained by taking into account that its last element $\vartheta_{13}$ must be equal to 1 according to Equation (33), namely:

$$\boldsymbol{\vartheta} = \frac{\hat{\boldsymbol{\vartheta}}}{\hat{\vartheta}_{13}},$$

where $\hat{\vartheta}_{13}$ is the last element of $\hat{\boldsymbol{\vartheta}}$. We can then obtain $\mathbf{e}$ and $\phi$ as follows:

$$\mathbf{e} = \frac{\boldsymbol{\vartheta}_{12:10}}{\|\boldsymbol{\vartheta}_{12:10}\|}, \qquad \phi = \|\boldsymbol{\vartheta}_{12:10}\|,$$

where $\boldsymbol{\vartheta}_{12:10} = [\vartheta_{12}, \vartheta_{11}, \vartheta_{10}]^T$ and $\vartheta_q$ is the $q$-th element of $\boldsymbol{\vartheta}$. Since $\vartheta_{13} = 1$ is known, it is the element of $\boldsymbol{\vartheta}$ having the maximum accuracy. Thus, we can approximate the covariance matrix of the TLS solution as follows:
$$\mathbf{C}_{\vartheta} = \hat{\sigma}_L^2\left(1 + \|\boldsymbol{\vartheta}_{1:12}\|^2\right)\left(\hat{\mathbf{L}}_{1:12}^T\hat{\mathbf{L}}_{1:12}\right)^{-1},$$

where

$$\hat{\sigma}_L = \frac{\beta_{13}}{\sqrt{3(N-1) - 13}},$$

$\hat{\mathbf{L}}_{1:12}$ contains the first 12 columns of $\hat{\mathbf{L}}$, while $\boldsymbol{\vartheta}_{1:12}$ contains the first 12 elements of $\boldsymbol{\vartheta}$. The percentage relative standard deviation is then obtained as follows:

$$\sigma_{\%,\vartheta_q} = 100\,\frac{\sqrt{\mathbf{C}_{\vartheta}(q,q)}}{|\vartheta_q|}, \qquad q = 1, \dots, 12;$$

where $\mathbf{C}_{\vartheta}(q,q)$ is the $q$-th diagonal term of $\mathbf{C}_{\vartheta}$.
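The SVD-based extraction of $\boldsymbol{\vartheta}$, $\mathbf{e}$ and $\phi$ can be exercised on a synthetic stacked matrix with a known null vector. In the sketch below the matrix is generated directly (rather than assembled from an actual rotation sequence) purely to test the algebra; all numeric values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ground-truth parameter vector, laid out as in Equation (33):
# [G11, G12, G13, G22, G23, G33, d1, d2, d3, phi*e3, phi*e2, phi*e1, 1].
phi_true = 0.02
e_true = np.array([0.6, 0.64, 0.48])               # unit misalignment axis
vartheta_true = np.concatenate([
    [1.01, 0.01, -0.02, 0.99, 0.015, 1.02],        # sensitivities / cross-sensitivities
    [0.03, -0.01, 0.02],                           # bias terms
    phi_true * e_true[::-1],                       # [phi e3, phi e2, phi e1]
    [1.0],                                         # known last element
])

# Synthetic stacked matrix with vartheta_true in its null space (N = 24, so
# 3(N-1) = 69 rows): project random rows onto the orthogonal complement.
B = rng.normal(size=(69, 13))
L = B - np.outer(B @ vartheta_true, vartheta_true) / (vartheta_true @ vartheta_true)

# TLS solution: right singular vector of the smallest singular value,
# rescaled so that the known last element equals 1.
_, _, Vt = np.linalg.svd(L)
vartheta = Vt[-1] / Vt[-1][12]

# Recover the misalignment axis and angle from elements 12, 11, 10 (1-based).
v = vartheta[[11, 10, 9]]                          # [phi e1, phi e2, phi e3]
phi_id = np.linalg.norm(v)
e_id = v / phi_id
```

Because the last element of the parameter vector is known, no sign ambiguity arises here, unlike in the accelerometer case.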

3. Experimental Procedure and Results

Simple 3D-printed IMU housings and references were produced to test the proposed method. This equipment includes a regular rectangular prism (prismatic) housing (Figure 2a), an 18-faced housing (Figure 2b) obtained by chamfering the edges of the former at 45°, a reference with three orthogonal surfaces and a base parallel to one of those surfaces (Figure 2c), and a reference with three orthogonal surfaces and a sloped base (Figure 2d). For each housing, a re-orientation scheme was defined so as to place the housing at least once in every possible orientation in which three of its faces are in contact with the reference surfaces. Accordingly, $N = 24$ and $N = 74$ orientations were achieved in our tests for the prismatic and 18-faced housings, respectively.
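As a sanity check on the count for the prismatic housing: the number of ways a rectangular box can rest against three orthogonal surfaces equals the order of the rotation group generated by 90° rotations about the fixed axes. A quick enumeration (our sketch, unrelated to the actual re-orientation scheme used in the experiments):

```python
import numpy as np

# Exact integer matrices for 90-degree rotations about the fixed axes.
Rx = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]])
Ry = np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0]])
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])

# Breadth-first closure of the group generated by the three rotations:
# every proper orientation of a box-shaped housing in the corner reference.
seen = {tuple(np.eye(3, dtype=int).flatten())}
frontier = [np.eye(3, dtype=int)]
while frontier:
    nxt = []
    for R in frontier:
        for Q in (Rx, Ry, Rz):
            C = Q @ R
            key = tuple(C.flatten())
            if key not in seen:
                seen.add(key)
                nxt.append(C)
    frontier = nxt

print(len(seen))   # 24: one rotation matrix per orientation of the prismatic housing
```

The enumeration yields 24 distinct rotation matrices, matching the $N = 24$ orientations reported for the prismatic housing.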
Figure 3 illustrates a schematic of the first six orientations of the scheme used for the prismatic housing. In each image, rotation arrows indicate the movement required to transition from the current orientation to the next, while x, y, z represent the axes of the F-frame, which in practice can be identified with the three edges formed by the intersection of the three surfaces of the 3D-printed references (Figure 2c,d). The faces of the housing are numbered to make the rotation sequence clearer. Since the IMU is rigidly attached to its housing, as shown in Figure 4, if the housing is re-oriented into N different orientations, rotation matrices $\mathbf{R}_i$ for $i = 1, \ldots, N$ can be computed as a combination of rotations around the axes of the F-frame. To achieve this, we begin by considering that the rotation matrix $\mathbf{R}_i$, which transforms a vector expressed in the mobile frame into a vector expressed in the fixed frame when the IMU is in the $i$-th orientation, can be iteratively obtained from $\mathbf{R}_{i-1}$. This process depends on the axis around which the rotation is performed to transition from the $(i-1)$-th to the $i$-th IMU orientation:
$$\mathbf{R}_1 = \mathbf{I}_{3 \times 3}, \qquad \mathbf{R}_i = \begin{cases} \mathbf{R}_x(\Delta_j)\, \mathbf{R}_{i-1}, & \text{if the rotation is performed around } x \\ \mathbf{R}_y(\Delta_j)\, \mathbf{R}_{i-1}, & \text{if the rotation is performed around } y \\ \mathbf{R}_z(\Delta_j)\, \mathbf{R}_{i-1}, & \text{if the rotation is performed around } z, \end{cases}$$
where $\mathbf{R}_x(\Delta_j)$, $\mathbf{R}_y(\Delta_j)$, $\mathbf{R}_z(\Delta_j)$ are elementary rotation matrices of angle $\Delta_j$ around the x, y and z axes, respectively, while $\Delta_j$, with $j = i-1$, is the $j$-th rotation angle, required to transition from the $(i-1)$-th orientation to the $i$-th. Then, since Equations (7) and (26) require the acceleration and the angular velocity to be expressed in the moving frame, we obtain the needed rotation matrices by transposition, $\mathbf{R}_i \leftarrow \mathbf{R}_i^T$. The computation of the $\mathbf{R}_i$ matrices for the re-orientation sequences used in this paper is detailed in Appendix B.
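The iterative update and final transposition can be sketched in a few lines of Python. The helper names (`Rx`, `Ry`, `Rz`, `chain_rotations`) and the move-list encoding are ours, introduced purely for illustration:

```python
import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def chain_rotations(moves):
    """moves: list of (axis, Delta_j) pairs with axis in {'x','y','z'},
    describing the transition from orientation i-1 to orientation i.
    Returns the transposed matrices R_i^T (moving-frame convention)."""
    elementary = {'x': Rx, 'y': Ry, 'z': Rz}
    R = np.eye(3)                        # R_1 = I
    out = [R.copy()]
    for axis, delta in moves:
        R = elementary[axis](delta) @ R  # R_i = R_axis(Delta_j) R_{i-1}
        out.append(R.copy())
    return [Ri.T for Ri in out]
```

For example, `chain_rotations([('x', np.pi/2), ('z', np.pi/2)])` returns the three matrices associated with the first three orientations of such a two-move sequence.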
The prismatic housing enables a simpler re-orientation procedure. However, if a reference that is non-sloped w.r.t. g is used (e.g., the one in Figure 2c), this re-orientation scheme is hardly optimal for accelerometer calibration. In fact, if the reference rests on a surface reasonably orthogonal to g, one of the sensor axes is almost parallel to g and the others are almost orthogonal to it: the measurements obtained on each channel would be either near the maximum possible value or near zero, and the near-zero channels would be characterized by a very small SNR. To overcome this problem, one can: (i) use a sloped reference, so that the measurements on all axes have a greater SNR, or (ii) use a housing allowing more possible orientations, which also improves gyroscope calibration by performing more rotations. Thus, experiments on all four possible combinations of reference surface and housing were conducted. Figure 5 illustrates the experimental equipment in the initial orientation of the procedure for the Sloped 18-faced setup. A flowchart summarizing the steps needed to perform IMU calibration using our method can be found in Appendix C.
The Adafruit Precision NXP 9-DOF Breakout Board, a single-board IMU featuring the NXP sensors FXOS8700CQ and FXAS21002, was employed for the tests. The main technical specifications of the IMU are listed in Table 1, while additional details are available in [25,26,27]. The FXOS8700CQ integrates a three-axis magnetometer (not of interest for this work) and a three-axis digital-output accelerometer, while the FXAS21002 is a three-axis digital-output gyroscope. The IMU's outputs were sampled at 300 Hz to acquire raw data from the accelerometer and the gyroscope. For the gyroscope, the signal was digitally low-pass filtered with a cutoff frequency of 128 Hz before the acquisition, using the filter integrated in the sensor [26]. During the re-orientation procedure, the IMU alternates between static and non-static states: for accelerometer calibration, only raw data collected during static states were retained, while for gyroscope calibration only raw data from non-static states were kept. Non-static states were identified using a non-static detector based on the low-pass-filtered gyroscope signal: to obtain this signal, the acquired one was low-pass filtered again, with the cutoff frequency in this case set to 0.5 Hz. A time instant t was then determined to be non-static if the magnitude of the filtered gyroscope signal at that instant exceeded a threshold, which was empirically set to 1000 Least Significant Bits (LSB). In the context of digital-output sensors, an LSB represents the smallest measurable change in the physical quantity: accordingly, the physical value associated with an LSB depends on the sensor's Full-Scale range and on the resolution $\rho = 2^{\nu}$ of its Analog-to-Digital Converter (ADC), $\nu$ being the number of bits available for the ADC. For the sensor used in our tests, 1 LSB corresponds to an angular velocity magnitude of 0.01526 °/s.
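The non-static detector can be sketched as follows. This is only an illustrative stand-in: we use a simple first-order IIR low-pass in place of the paper's (unspecified) filter implementation, while the constants mirror those in the text (300 Hz sampling, 0.5 Hz cutoff, 1000 LSB threshold). The function names are ours.

```python
import numpy as np

FS = 300.0          # gyroscope sampling rate [Hz]
FC = 0.5            # detector low-pass cutoff [Hz]
THRESHOLD = 1000.0  # non-static threshold [LSB] (~15.26 deg/s at 0.01526 deg/s per LSB)

def lowpass(x, fs=FS, fc=FC):
    """First-order IIR low-pass filter (a simple stand-in for the paper's filter)."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * fc / fs)
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
    return y

def nonstatic_mask(gyro_raw):
    """gyro_raw: (T, 3) raw gyroscope samples in LSB.
    A time instant is flagged non-static when the magnitude of the
    low-pass-filtered signal exceeds THRESHOLD."""
    filtered = np.column_stack([lowpass(gyro_raw[:, k]) for k in range(3)])
    return np.linalg.norm(filtered, axis=1) > THRESHOLD
```

Static samples (where the mask is False) would feed the accelerometer identification, and non-static ones the gyroscope identification.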
For the accelerometer, the mean values of the identified parameters, obtained from twelve different acquired datasets (three for each experimental setup), are reported in Table 2, along with their respective mean relative standard deviations. Elements of matrix A are expressed in [g/LSB]. This measurement unit is physically meaningful for a sensitivity, as it indicates how much the measured physical quantity changes in relation to the smallest possible variation in the sensor's raw output (which, for digital sensors, is measured in LSB). Elements of b and {n}_{M,1} are in [g], i.e., they are normalized with respect to the modulus of the gravitational acceleration. The relative standard deviations are expressed as percentages of the parameter values and are obtained as described in Section 2.1. For the gyroscope, the mean values of the identified parameters, obtained from another twelve datasets acquired simultaneously with the accelerometer ones, are reported in Table 3, along with their respective mean relative standard deviations. Elements of matrix G are expressed in [(°/s)/LSB], those of d are in [°/s], ϕ is in [rad], and the elements of e are dimensionless. The relative standard deviations are expressed as percentages of the parameter values and are calculated as described in Section 2.2.
Table 2 and Table 3 indicate that cross-sensitivities are negligible compared to sensitivities for both sensors, meaning that the sensors' axes are well aligned. Consequently, very high relative standard deviations are obtained for these parameters (a parameter is considered well identified if its percentage standard deviation is below 5%); namely, they are non-essential parameters for the specific sensors tested. Bias terms also exhibit high percentage standard deviations, even though at first sight they appear to be non-negligible w.r.t. sensitivities. However, their impact on the mathematical model output is minimal due to their small values. In fact, according to Equation (21), the elements of matrix G are multiplied by the raw gyroscope output r, which is in the range $[-2^{\nu-1}, +2^{\nu-1}]$: $\nu$ typically lies between $\nu = 12$ and $\nu = 16$, meaning that d becomes negligible w.r.t. G r if its components are the ones in Table 3. Similar considerations apply to the accelerometer bias terms. Accordingly, bias terms can be deemed non-essential parameters for both sensors. Furthermore, in gyroscope calibration, bias drift should also be taken into account: however, this paper primarily focuses on identifying G, since only non-static measurements are employed. Variations of the procedure able to account for the bias-drift effect and properly identify the static gyroscope bias could be the focus of future works. Finally, ϕe₃, ϕe₂, ϕe₁ also exhibit high percentage standard deviations, but this is acceptable since their nominal values are very small. In fact, as we assumed in Section 2.2, ϕ ≈ 0, and this can be confirmed by calculating ϕ as in Equation (42), using the parameters in Table 3: in the worst case (i.e., when ϕ assumes the highest value) we obtain ϕ = 0.0073 rad = 0.4205°.
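The worst-case misalignment magnitude quoted above can be reproduced directly from the Non-sloped Prismatic column of Table 3, recovering ϕ as the norm of the identified vector [ϕe₃, ϕe₂, ϕe₁]ᵀ (a minimal check, using only values taken from the table):

```python
import numpy as np

# Identified (phi*e3, phi*e2, phi*e1) entries, Non-sloped Prismatic setup (Table 3)
phi_e = np.array([-5.641e-3, 4.393e-3, -1.655e-3])   # [rad]

# Equation (42): phi is the norm of the identified misalignment vector
phi = np.linalg.norm(phi_e)
print(round(phi, 4), round(np.degrees(phi), 4))      # ~0.0073 rad, ~0.4205 deg
```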
Parameter-identification best practice suggests excluding non-essential parameters from the model and performing a second identification with the simplified model, which does not include them [23]. The results of the identification of the essential parameters are presented in Table 4 for the accelerometer and in Table 5 for the gyroscope. In general, all the essential parameters are well identified for both sensors, since their percentage standard deviations fall below the 5% threshold. As expected, datasets involving the 18-faced housing provide slightly better results for gyroscope calibration, with the identified parameters showing lower percentage standard deviations, while for the accelerometer, if the same housing is employed, experiments involving the sloped reference yield slightly better results in terms of the percentage standard deviations of the estimated parameters.
In Table 6, we present the accelerometer and gyroscope essential parameters (A_pp and G_pp, respectively) derived in three different manners: (i) from the sensors' datasheets [26,27] (MAN), (ii) using our method (TLS), and (iii) using the NLS-based method outlined in [12] (NLS). Although our method allows for a straightforward determination of the sign of the sensors' parameters, all the parameters identified by our method are reported as positive in Table 6 to facilitate comparison with the manufacturer-provided values and those derived using the NLS-based method. The NLS-based method, relying on Equations (1) and (2), minimizes a cost function that depends only on the norm of the sensors' model outputs, making it insensitive to the sign of the parameters. The results show that our method yields parameters comparable to those obtained using the NLS-based approach. Additionally, since a known physical reference is available for the accelerometer (i.e., the normalized gravity magnitude ‖g‖ = 1), we used one of the calibration datasets to compute the RMS error between the norm of the gravity vector, estimated via the sensor's model, and the known reference as follows:
$$\xi_{RMS} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( \lVert \tilde{\mathbf{g}}_i \rVert - 1 \right)^2},$$
where $\tilde{\mathbf{g}}_i$ is the gravity-vector estimate obtained by employing in Equation (9) the sensor's raw outputs acquired in the $i$-th orientation of the procedure for the Non-Sloped 18-faced setup. The $\xi_{RMS}$ values computed using the three above-mentioned parameter sets are reported in Table 6, which shows a relevant improvement in $\xi_{RMS}$ when moving from manufacturer-provided parameters to those identified by our method. The NLS-based method appears to yield better results in terms of $\xi_{RMS}$; however, it is important to note that its performance heavily depends on the quality of the initial guess provided to the algorithm, which in this case was set equal to the parameters specified in the sensor's datasheet.
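Once the model output g̃ᵢ is available for each orientation, the gravity-norm RMS error is straightforward to evaluate; a minimal sketch (the function name is ours):

```python
import numpy as np

def xi_rms(g_est):
    """g_est: (N, 3) gravity-vector estimates in [g].
    RMS deviation of their norms from the unit reference."""
    norms = np.linalg.norm(g_est, axis=1)
    return np.sqrt(np.mean((norms - 1.0) ** 2))
```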
Finally, cross-validations of the identification results were performed. We cross-validated the parameters obtained with the Non-sloped 18-faced setup (Table 4 and Table 5) using the datasets acquired during experiments involving the other three setups. For each dataset, we computed M according to Equations (11) and (13), and L as in Equations (32) and (35)–(37). The corresponding model error was then calculated as $\boldsymbol{\epsilon}_a = \mathbf{M}\boldsymbol{\theta}$ for the accelerometer and as $\boldsymbol{\epsilon}_g = \mathbf{L}\boldsymbol{\vartheta}$ for the gyroscope. For datasets involving the non-sloped reference, the last element of θ was set to 1, while for those involving the sloped reference, the last three elements of θ were set to [n₁, n₂, n₃]ᵀ, since only in the latter case do all three of these parameters turn out to be essential. Finally, we computed the cross-validation standard deviations as [22]:
$$\sigma_a = \sqrt{\frac{\boldsymbol{\epsilon}_a^T \boldsymbol{\epsilon}_a}{3N}}$$
and
$$\sigma_g = \sqrt{\frac{\boldsymbol{\epsilon}_g^T \boldsymbol{\epsilon}_g}{3(N-1)}}$$
for the accelerometer and gyroscope datasets, respectively. We also evaluated the cross-validation standard deviations using the manufacturer-provided sensor parameters listed in Table 6. Cross-validation results obtained using parameters identified by our method (σ_a,TLS and σ_g,TLS) are compared with the ones obtained using manufacturer-provided parameters (σ_a,MAN and σ_g,MAN) in Table 7. These results clearly demonstrate that our method is an effective choice for improving accelerometer and gyroscope accuracy at no significant additional cost: using the parameters identified by our method, we were able to reduce the cross-validation standard deviations by nearly two orders of magnitude compared to the case where manufacturer parameters were employed.
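The cross-validation statistics above can be computed directly from the model residuals; an illustrative sketch (function names ours, row counts assumed to follow the paper's 3N and 3(N−1) conventions):

```python
import numpy as np

def sigma_accel(M, theta):
    """Cross-validation std for the accelerometer: eps_a = M @ theta,
    with M built from N static orientations (3N rows)."""
    eps = M @ theta
    N = M.shape[0] // 3
    return np.sqrt(eps @ eps / (3 * N))

def sigma_gyro(L, vartheta):
    """Cross-validation std for the gyroscope: eps_g = L @ vartheta,
    with L built from N - 1 transitions (3(N-1) rows)."""
    eps = L @ vartheta
    N = L.shape[0] // 3 + 1
    return np.sqrt(eps @ eps / (3 * (N - 1)))
```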

4. Conclusions

In this paper, we proposed an identification method for accelerometers and gyroscopes, namely a general identification method for six-axis IMUs. Our method can be performed in-field by non-expert users, requiring only simple 3D-printed equipment. It is based on a pre-determined sequence of rotations and the solution of two TLS problems. No prior knowledge of the reference's orientation w.r.t. g, or of the rotation axes' orientation w.r.t. the gyroscope's sensitive triad, is required. Our identification method does not require prior knowledge of the parameters to be determined: this makes it a valuable option for generating a reliable initial parameter set for NLS-based methods when manufacturer-provided parameters are unavailable (e.g., for prototypes or newly designed sensors). Experimental results showed that the proposed method correctly identifies the sensor model parameters and that the accuracy of the sensor models is substantially improved w.r.t. using manufacturer-provided parameters. The proposed method is not suitable for estimating gyroscope static bias terms: this could be the focus of future works.

Author Contributions

Conceptualization, E.I.; methodology, M.D.; software, M.D.; validation, M.D.; formal analysis, E.I., M.D.; investigation, M.D.; resources, E.I.; data curation, M.D.; writing—original draft preparation, M.D.; writing—review and editing, E.I.; visualization, M.D.; supervision, E.I.; project administration, E.I.; funding acquisition, E.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author, Ida’ E., upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

We begin by considering that the expressions of $\{\mathbf{u}_x\}_{M,1}$, $\{\mathbf{u}_y\}_{M,1}$ and $\{\mathbf{u}_z\}_{M,1}$ in Equation (23) can be substituted into Equation (30) to obtain:

$$\begin{bmatrix} G_{11}H_{1,j} + G_{12}H_{2,j} + G_{13}H_{3,j} + T_j d_1 \\ G_{12}H_{1,j} + G_{22}H_{2,j} + G_{23}H_{3,j} + T_j d_2 \\ G_{13}H_{1,j} + G_{23}H_{2,j} + G_{33}H_{3,j} + T_j d_3 \end{bmatrix} - \Delta_j \mathbf{R}_j \begin{bmatrix} 1 \\ \phi e_3 \\ -\phi e_2 \end{bmatrix} = \mathbf{0}_{3\times 1} \quad \text{if } h = x,$$

$$\begin{bmatrix} G_{11}H_{1,j} + G_{12}H_{2,j} + G_{13}H_{3,j} + T_j d_1 \\ G_{12}H_{1,j} + G_{22}H_{2,j} + G_{23}H_{3,j} + T_j d_2 \\ G_{13}H_{1,j} + G_{23}H_{2,j} + G_{33}H_{3,j} + T_j d_3 \end{bmatrix} - \Delta_j \mathbf{R}_j \begin{bmatrix} -\phi e_3 \\ 1 \\ \phi e_1 \end{bmatrix} = \mathbf{0}_{3\times 1} \quad \text{if } h = y,$$

$$\begin{bmatrix} G_{11}H_{1,j} + G_{12}H_{2,j} + G_{13}H_{3,j} + T_j d_1 \\ G_{12}H_{1,j} + G_{22}H_{2,j} + G_{23}H_{3,j} + T_j d_2 \\ G_{13}H_{1,j} + G_{23}H_{2,j} + G_{33}H_{3,j} + T_j d_3 \end{bmatrix} - \Delta_j \mathbf{R}_j \begin{bmatrix} \phi e_2 \\ -\phi e_1 \\ 1 \end{bmatrix} = \mathbf{0}_{3\times 1} \quad \text{if } h = z. \tag{A1}$$

For each case of Equation (A1), the second term of the left-hand side can be re-organized to obtain:

$$\Delta_j \mathbf{R}_j \begin{bmatrix} 1 \\ \phi e_3 \\ -\phi e_2 \end{bmatrix} = \Delta_j \begin{bmatrix} \mathbf{R}_{j,2} & -\mathbf{R}_{j,3} & \mathbf{0}_{3\times 1} & \mathbf{R}_{j,1} \end{bmatrix} \begin{bmatrix} \phi e_3 \\ \phi e_2 \\ \phi e_1 \\ 1 \end{bmatrix} \quad \text{if } h = x,$$

$$\Delta_j \mathbf{R}_j \begin{bmatrix} -\phi e_3 \\ 1 \\ \phi e_1 \end{bmatrix} = \Delta_j \begin{bmatrix} -\mathbf{R}_{j,1} & \mathbf{0}_{3\times 1} & \mathbf{R}_{j,3} & \mathbf{R}_{j,2} \end{bmatrix} \begin{bmatrix} \phi e_3 \\ \phi e_2 \\ \phi e_1 \\ 1 \end{bmatrix} \quad \text{if } h = y,$$

$$\Delta_j \mathbf{R}_j \begin{bmatrix} \phi e_2 \\ -\phi e_1 \\ 1 \end{bmatrix} = \Delta_j \begin{bmatrix} \mathbf{0}_{3\times 1} & \mathbf{R}_{j,1} & -\mathbf{R}_{j,2} & \mathbf{R}_{j,3} \end{bmatrix} \begin{bmatrix} \phi e_3 \\ \phi e_2 \\ \phi e_1 \\ 1 \end{bmatrix} \quad \text{if } h = z. \tag{A2}$$

The first term of the left-hand side of Equation (A1) remains unchanged whichever of $\mathbf{u}_x$, $\mathbf{u}_y$, $\mathbf{u}_z$ the rotation is performed around, and it can be re-arranged as follows:

$$\mathbf{L}_{j,1} \boldsymbol{\vartheta}_{1:9} = \begin{bmatrix} \begin{matrix} H_{1,j} & H_{2,j} & H_{3,j} & 0 & 0 & 0 \\ 0 & H_{1,j} & 0 & H_{2,j} & H_{3,j} & 0 \\ 0 & 0 & H_{1,j} & 0 & H_{2,j} & H_{3,j} \end{matrix} & T_j \mathbf{I}_{3\times 3} \end{bmatrix} \begin{bmatrix} G_{11} & G_{12} & G_{13} & G_{22} & G_{23} & G_{33} & d_1 & d_2 & d_3 \end{bmatrix}^T. \tag{A3}$$

In order to re-write Equation (A2) in a more compact way, we can introduce a new matrix, $\mathbf{L}_{j,2}$, defined as in Equation (36). In Equations (A1) and (A2), we implicitly considered, according to the right-hand rule, that the rotation around x, y or z happens counter-clockwise. If a clockwise rotation is performed, the unit vectors representing the rotation axes are oriented in the opposite direction w.r.t. those in Figure 1, and their components in the M,1-frame are opposite in sign. Alternatively, if the same unit vectors are considered, this results in $\Delta_j < 0$, leading to the same conclusion. By taking this into account and combining Equations (A1)–(A3), we obtain Equation (32).
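The column-wise reorganization in Equation (A2) can be checked numerically: for any rotation matrix and misalignment vector, both sides must agree. The sketch below verifies the h = x case (the variable names and the random test data are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random proper rotation matrix via QR decomposition (ensure det = +1)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
Rj = Q if np.linalg.det(Q) > 0 else -Q

phi_e = 1e-3 * rng.normal(size=3)    # (phi*e1, phi*e2, phi*e3), small misalignment
dj = 0.7                             # rotation angle Delta_j

# Left-hand side: Delta_j * R_j * [1, phi*e3, -phi*e2]^T
lhs = dj * Rj @ np.array([1.0, phi_e[2], -phi_e[1]])

# Right-hand side: Delta_j * [R_{j,2}, -R_{j,3}, 0, R_{j,1}] * [phi*e3, phi*e2, phi*e1, 1]^T
cols = np.column_stack([Rj[:, 1], -Rj[:, 2], np.zeros(3), Rj[:, 0]])
rhs = dj * cols @ np.array([phi_e[2], phi_e[1], phi_e[0], 1.0])

assert np.allclose(lhs, rhs)
```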

Appendix B

The computation of the $\mathbf{R}_i$ rotation matrices for the 18-faced housing re-orientation procedure is shown in Algorithm A1, while Algorithm A2 details the computation of the $\mathbf{R}_i$ matrices for the prismatic one. In the former case, two redundant orientations are included in order to make the re-orientation procedure practically simpler to perform, easing data acquisition.
Algorithm A1 $\mathbf{R}_i$ matrices computation for 18-faced housing
1: i_xy ← [5, 9, …, 29, 33, 34, 38, …, 46, 50, 51, 55, …, 63, 67]
2: last-i_xy ← 71
3: i_90 ← [46, 55, 59, 63]
4: prevxsgn ← 1
5: prevysgn ← 1
6: prev_xy ← 'x'
7: R_1 = I_{3×3}
8: N = 74
9: for i = 2, i++, i ≤ N do
10:     j = i − 1
11:     if i ∈ i_xy then
12:         if i ∈ i_90 then
13:             Δ_j = π/2
14:         else
15:             Δ_j = π/4
16:         end if
17:         if prev_xy == 'x' then
18:             if prevysgn == 1 then
19:                 R_i = R_y(Δ_j) R_{i−1}
20:                 prevysgn ← 0
21:             else
22:                 R_i = R_y(−Δ_j) R_{i−1}
23:                 prevysgn ← 1
24:             end if
25:             prev_xy ← 'y'
26:         else if prev_xy == 'y' then
27:             if prevxsgn == 1 then
28:                 R_i = R_x(Δ_j) R_{i−1}
29:                 prevxsgn ← 0
30:             else
31:                 R_i = R_x(−Δ_j) R_{i−1}
32:                 prevxsgn ← 1
33:             end if
34:             prev_xy ← 'x'
35:         end if
36:     else if i == last-i_xy then
37:         Δ_j ← π/4
38:         R_i = R_x(Δ_j) R_{i−1}
39:     else
40:         Δ_j ← π/2
41:         R_i = R_z(Δ_j) R_{i−1}
42:     end if
43: end for
44: R_i ← R_i^T, i = 1, …, N
Algorithm A2 $\mathbf{R}_i$ matrices computation for prismatic housing
1: i ← 1
2: rot_x ← [2, 3, 6]
3: rot_−x ← 5
4: rot_y ← 4
5: Δ ← π/2
6: R_1 = I_{3×3}
7: for f = 1, f++, f ≤ 6 do
8:     i ← i + 1
9:     if f ∈ rot_x then
10:         R_i = R_x(Δ) R_{i−1}
11:     else if f == rot_−x then
12:         R_i = R_x(−Δ) R_{i−1}
13:     else if f == rot_y then
14:         R_i = R_y(Δ) R_{i−1}
15:     end if
16:     for rz = 1, rz++, rz ≤ 3 do
17:         if i ≠ 2 or rz ≠ 1 then
18:             i ← i + 1
19:         end if
20:         R_i = R_z(Δ) R_{i−1}
21:     end for
22: end for
23: R_i ← R_i^T, i = 1, …, N
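As a cross-check, Algorithm A2 can be transcribed into Python. The sign conventions below are our reading of the pseudocode (the extraction dropped some minus signs), so treat them as assumptions; independently of the signs, the transcription should yield N = 24 proper rotation matrices.

```python
import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def prismatic_rotations():
    """Transcription of Algorithm A2 (prismatic housing, N = 24)."""
    D = np.pi / 2
    rot_x, rot_xneg, rot_y = [2, 3, 6], 5, 4   # face-change moves (our reading)
    R = {1: np.eye(3)}
    i = 1
    for f in range(1, 7):
        i += 1
        if f in rot_x:
            R[i] = Rx(D) @ R[i - 1]
        elif f == rot_xneg:
            R[i] = Rx(-D) @ R[i - 1]
        elif f == rot_y:
            R[i] = Ry(D) @ R[i - 1]
        for rz in range(1, 4):
            if i != 2 or rz != 1:      # first z-rotation of f = 1 reuses i = 2
                i += 1
            R[i] = Rz(D) @ R[i - 1]
    return [R[k].T for k in sorted(R)]  # moving-frame convention R_i^T
```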

Appendix C

The flowchart in Figure A1 describes the key steps required in order to complete the calibration procedure proposed in this paper.
Figure A1. Flowchart illustrating the key steps of the calibration procedure.

References

  1. Guang, X.; Gao, Y.; Leung, H.; Liu, P.; Li, G. An Autonomous Vehicle Navigation System Based on Inertial and Visual Sensors. Sensors 2018, 18, 2952. [Google Scholar] [CrossRef] [PubMed]
  2. Gabaldo, S.; Idà, E.; Carricato, M. Pose-estimation methods for underactuated cable-driven parallel robots. Mech. Mach. Theory 2024, 199, 105690. [Google Scholar] [CrossRef]
  3. Lee, S.C.; Hong, S.K. Velocity-Aided Attitude Estimation for Helicopter Aircraft Using Microelectromechanical System Inertial-Measurement Units. Sensors 2016, 16, 2102. [Google Scholar] [CrossRef] [PubMed]
  4. Caramia, C.; Torricelli, D.; Schmid, M.; Muñoz-Gonzalez, A.; Gonzalez-Vargas, J.; Grandas, F.; Pons, J.L. IMU-Based Classification of Parkinson’s Disease From Gait: A Sensitivity Analysis on Sensor Location and Feature Selection. IEEE J. Biomed. Health Inform. 2018, 22, 1765–1774. [Google Scholar] [CrossRef] [PubMed]
  5. Harindranath, A.; Arora, M. A systematic review of user-conducted calibration methods for MEMS-based IMUs. Measurement 2024, 225, 114001. [Google Scholar] [CrossRef]
  6. Poddar, S.; Kumar, V.; Kumar, A. A Comprehensive Overview of Inertial Sensor Calibration Techniques. J. Dyn. Syst. Meas. Control 2016, 139, 011006. [Google Scholar] [CrossRef]
  7. Titterton, D.; Weston, J.L. Strapdown Inertial Navigation Technology; IET: Stevenage, UK, 2004. [Google Scholar]
  8. Aggarwal, P.; El-Sheimy, N.; Noureldin, A.; Syed, Z. MEMS-Based Integrated Navigation; Artech: Morristown, NJ, USA, 2010. [Google Scholar]
  9. Han, S.; Meng, Z.; Omisore, O.; Akinyemi, T.; Yan, Y. Random Error Reduction Algorithms for MEMS Inertial Sensor Accuracy Improvement—A Review. Micromachines 2020, 11, 1021. [Google Scholar] [CrossRef] [PubMed]
  10. Qureshi, U.; Golnaraghi, F. An Algorithm for the In-Field Calibration of a MEMS IMU. IEEE Sens. J. 2017, 17, 7479–7486. [Google Scholar] [CrossRef]
  11. Fong, W.; Ong, S.; Nee, A. Methods for in-field user calibration of an inertial measurement unit without external equipment. Meas. Sci. Technol. 2008, 19, 085202. [Google Scholar] [CrossRef]
  12. Tedaldi, D.; Pretto, A.; Menegatti, E. A robust and easy to implement method for IMU calibration without external equipments. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 3042–3049. [Google Scholar] [CrossRef]
  13. Grip, N.; Sabourova, N. Simple non-iterative calibration for triaxial accelerometers. Meas. Sci. Technol. 2011, 22, 125103. [Google Scholar] [CrossRef]
  14. Forsberg, T.; Grip, N.; Sabourova, N. Non-iterative calibration for accelerometers with three non-orthogonal axes, reliable measurement setups and simple supplementary equipment. Meas. Sci. Technol. 2013, 24, 035002. [Google Scholar] [CrossRef]
  15. Zhang, H.; Wu, Y.; Wu, M.; Hu, X.; Zha, Y. A Multi-Position Calibration Algorithm for Inertial Measurement Units. In Proceedings of the AIAA Guidance, Navigation and Control Conference and Exhibit, Honolulu, HI, USA, 18–21 August 2008. [Google Scholar]
  16. Zhang, H.; Wu, Y.; Wu, W.; Wu, M.; Hu, X. Improved multi-position calibration for inertial measurement units. Meas. Sci. Technol. 2009, 21, 015107. [Google Scholar] [CrossRef]
  17. Khankalantary, S.; Ranjbaran, S.; Ebadollahi, S. Simplification of calibration of low-cost MEMS accelerometer and its temperature compensation without accurate laboratory equipment. Meas. Sci. Technol. 2021, 32, 045102. [Google Scholar] [CrossRef]
  18. Ranjbaran, S.; Roudbari, A.; Ebadollahi, S. A Simple and Fast Method for Field Calibration of Triaxial Gyroscope by Using Accelerometer. J. Electr. Comput. Eng. Innov. (JECEI) 2017, 6, 1–6. [Google Scholar] [CrossRef]
  19. Duchi, M.; Zaccaria, F.; Briot, S.; Ida’, E. Total Least Squares In-Field Identification for MEMS-Based Triaxial Accelerometers. In Advances in Mechanism and Machine Science; Okada, M., Ed.; Springer Nature: Cham, Switzerland, 2023; pp. 570–579. [Google Scholar]
  20. Ru, X.; Gu, N.; Shang, H.; Zhang, H. MEMS Inertial Sensor Calibration Technology: Current Status and Future Trends. Micromachines 2022, 13, 879. [Google Scholar] [CrossRef] [PubMed]
  21. Van Huffel, S.; Vandewalle, J. The Total Least Squares Problem; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1991. [Google Scholar] [CrossRef]
  22. Idà, E.; Briot, S.; Carricato, M. Identification of the inertial parameters of underactuated Cable-Driven Parallel Robots. Mech. Mach. Theory 2022, 167, 104504. [Google Scholar] [CrossRef]
  23. Briot, S.; Gautier, M. Global identification of joint drive gains and dynamic parameters of parallel robots. Multibody Syst. Dyn. 2015, 33, 3–26. [Google Scholar] [CrossRef]
  24. Dai, J.S. Euler–Rodrigues formula variations, quaternion conjugation and intrinsic connections. Mech. Mach. Theory 2015, 92, 144–152. [Google Scholar] [CrossRef]
  25. Adafruit. Adafruit Precision NXP 9-DOF Breakout Board Technical Details. Adafruit Industries. Available online: https://learn.adafruit.com/nxp-precision-9dof-breakout/downloads (accessed on 20 October 2024).
  26. NXP Semiconductors. XAS21002C 3-Axis Digital Angular Rate Gyroscope Datasheet. 2015. Available online: https://cdn-learn.adafruit.com/assets/assets/000/040/671/original/FXAS21002.pdf?1491475056 (accessed on 20 October 2024).
  27. NXP Semiconductors. FXOS8700CQ 6-Axis Sensor with Integrated Linear Accelerometer and Magnetometer Datasheet. 2017. Available online: https://cdn-learn.adafruit.com/assets/assets/000/043/458/original/FXOS8700CQ.pdf?1499125614 (accessed on 20 October 2024).
Figure 1. Fixed frame (F-frame) and Mobile frame (M-frame) at the beginning of the calibration procedure.
Figure 2. 3D-printed equipment used for the tests, including a rectangular prism housing (a), an 18-faced housing (b), a non-sloped reference (c) and a sloped reference (d).
Figure 3. First six orientations of the re-orientation scheme for the prismatic housing.
Figure 4. IMU mounted inside the prismatic housing (a) and inside the 18-faced housing (b).
Figure 5. Experimental equipment in the initial orientation of the procedure for the Sloped 18-faced setup.
Table 1. Main IMU technical characteristics.

| Characteristic | Value |
| --- | --- |
| Board's dimensions [mm] | 20 × 28 × 2 |
| Supply voltage [V] | from 1.95 to 3.6 |
| Supported digital interface | I²C or SPI |
| Current consumption [mA] | 2.94 |
| Output data rate [Hz] | up to 800 |
Table 2. Accelerometer identification results. For each experimental setup, the mean value of each identified parameter (θ_l) is followed, in parentheses, by its mean percentage standard deviation (σ_%,θ_l).

| Parameter | Non-Sloped Prismatic | Sloped Prismatic | Non-Sloped 18-Faced | Sloped 18-Faced |
| --- | --- | --- | --- | --- |
| A₁₁ [g/LSB] | −2.417×10⁻⁴ (0.135) | −2.428×10⁻⁴ (0.178) | −2.420×10⁻⁴ (0.096) | −2.422×10⁻⁴ (0.257) |
| A₁₂ [g/LSB] | 1.592×10⁻⁶ (45.823) | 1.329×10⁻⁶ (38.889) | −4.397×10⁻⁶ (3.872) | −5.301×10⁻⁶ (7.387) |
| A₁₃ [g/LSB] | 7.193×10⁻⁷ (32.321) | 6.985×10⁻⁷ (39.377) | −4.073×10⁻⁷ (17.592) | −1.299×10⁻⁶ (30.965) |
| A₂₂ [g/LSB] | 2.464×10⁻⁴ (0.135) | 2.472×10⁻⁴ (0.178) | 2.460×10⁻⁴ (0.098) | 2.468×10⁻⁴ (0.260) |
| A₂₃ [g/LSB] | −4.255×10⁻⁷ (63.772) | −4.585×10⁻⁷ (60.916) | −4.667×10⁻⁷ (36.024) | −6.040×10⁻⁷ (63.096) |
| A₃₃ [g/LSB] | 2.428×10⁻⁴ (0.135) | 2.434×10⁻⁴ (0.178) | 2.433×10⁻⁴ (0.096) | 2.437×10⁻⁴ (0.257) |
| b₁ [g] | 2.222×10⁻³ (35.047) | 2.786×10⁻³ (36.149) | 1.792×10⁻³ (40.823) | 1.297×10⁻³ (101.695) |
| b₂ [g] | 4.416×10⁻³ (17.751) | 1.872×10⁻³ (88.593) | 3.969×10⁻³ (14.086) | 2.473×10⁻³ (53.338) |
| b₃ [g] | −1.839×10⁻² (4.237) | −1.614×10⁻² (5.793) | −1.782×10⁻² (3.497) | −1.474×10⁻² (10.020) |
| n₁ [g] | 2.420×10⁻² (3.297) | 4.205×10⁻¹ (0.252) | 2.731×10⁻² (2.332) | 4.297×10⁻¹ (0.357) |
| n₂ [g] | −7.073×10⁻³ (12.206) | 4.887×10⁻¹ (0.224) | 9.044×10⁻³ (3.662) | 4.811×10⁻¹ (0.330) |
| n₃ [g] | −9.997×10⁻¹ | −7.644×10⁻¹ | −9.994×10⁻¹ | −7.641×10⁻¹ |
Table 3. Gyroscope identification results. For each experimental setup, the mean value of each identified parameter (ϑ_q) is followed, in parentheses, by its mean percentage standard deviation (σ_%,ϑ_q).

| Parameter | Non-Sloped Prismatic | Sloped Prismatic | Non-Sloped 18-Faced | Sloped 18-Faced |
| --- | --- | --- | --- | --- |
| G₁₁ [(°/s)/LSB] | 1.537×10⁻² (0.295) | 1.534×10⁻² (0.377) | 1.531×10⁻² (0.225) | 1.533×10⁻² (0.273) |
| G₁₂ [(°/s)/LSB] | −3.972×10⁻⁴ (8.242) | −3.560×10⁻⁴ (11.720) | 3.950×10⁻⁵ (68.006) | 4.276×10⁻⁵ (72.336) |
| G₁₃ [(°/s)/LSB] | −5.796×10⁻⁷ (2721.260) | 1.850×10⁻⁵ (156.690) | 1.067×10⁻⁴ (57.991) | 1.792×10⁻⁴ (36.388) |
| G₂₂ [(°/s)/LSB] | −1.571×10⁻² (0.294) | −1.570×10⁻² (0.377) | −1.563×10⁻² (0.223) | −1.568×10⁻² (0.271) |
| G₂₃ [(°/s)/LSB] | 2.222×10⁻⁵ (150.875) | 1.477×10⁻⁵ (635.457) | 9.063×10⁻⁶ (520.768) | 1.589×10⁻⁵ (133.8609) |
| G₃₃ [(°/s)/LSB] | −1.554×10⁻² (0.265) | −1.557×10⁻² (0.339) | −1.557×10⁻² (0.218) | −1.558×10⁻² (0.264) |
| d₁ [°/s] | −9.415×10⁻¹ (9.536) | −9.416×10⁻¹ (12.310) | −1.192 (9.177) | −8.761×10⁻¹ (14.037) |
| d₂ [°/s] | 8.196×10⁻¹ (10.851) | 8.990×10⁻¹ (12.673) | 9.468×10⁻¹ (7.452) | 1.177 (7.757) |
| d₃ [°/s] | 7.847×10⁻¹ (12.267) | 8.037×10⁻¹ (14.296) | 4.282×10⁻¹ (94.752) | 7.564×10⁻¹ (69.528) |
| ϕe₃ [rad] | −5.641×10⁻³ (66.938) | −3.837×10⁻³ (185.509) | 1.811×10⁻³ (199.231) | 4.254×10⁻³ (153.948) |
| ϕe₂ [rad] | 4.393×10⁻³ (40.604) | 3.234×10⁻³ (67.655) | −1.322×10⁻³ (132.661) | −6.115×10⁻⁴ (213.256) |
| ϕe₁ [rad] | −1.655×10⁻³ (117.084) | −1.414×10⁻³ (191.682) | 1.144×10⁻³ (128.592) | 1.866×10⁻⁵ (603.486) |
Table 4. Accelerometer identification results for essential parameters. For each experimental setup, the mean value of each identified parameter (θ_l) is followed, in parentheses, by its mean percentage standard deviation (σ_%,θ_l).

| Parameter | Non-Sloped Prismatic | Sloped Prismatic | Non-Sloped 18-Faced | Sloped 18-Faced |
| --- | --- | --- | --- | --- |
| A₁₁ [g/LSB] | −2.416×10⁻⁴ (0.721) | −2.428×10⁻⁴ (0.504) | −2.418×10⁻⁴ (0.495) | −2.422×10⁻⁴ (0.472) |
| A₂₂ [g/LSB] | 2.463×10⁻⁴ (0.721) | 2.470×10⁻⁴ (0.504) | 2.458×10⁻⁴ (0.505) | 2.468×10⁻⁴ (0.473) |
| A₃₃ [g/LSB] | 2.425×10⁻⁴ (0.721) | 2.433×10⁻⁴ (0.504) | 2.431×10⁻⁴ (0.495) | 2.437×10⁻⁴ (0.473) |
| n₁ [g] | | 4.265×10⁻¹ (0.702) | | 4.296×10⁻¹ (0.657) |
| n₂ [g] | | 4.824×10⁻¹ (0.641) | | 4.813×10⁻¹ (0.605) |
| n₃ [g] | −1.000 | −7.651×10⁻¹ | −1.000 | −7.641×10⁻¹ |
Table 5. Gyroscope identification results for essential parameters. For each experimental setup, the mean value of each identified parameter (ϑ_q) is followed, in parentheses, by its mean percentage standard deviation (σ_%,ϑ_q).

| Parameter | Non-Sloped Prismatic | Sloped Prismatic | Non-Sloped 18-Faced | Sloped 18-Faced |
| --- | --- | --- | --- | --- |
| G₁₁ [(°/s)/LSB] | 1.532×10⁻² (0.799) | 1.532×10⁻² (0.713) | 1.533×10⁻² (0.396) | 1.535×10⁻² (0.384) |
| G₂₂ [(°/s)/LSB] | −1.572×10⁻² (0.800) | −1.573×10⁻² (0.714) | −1.566×10⁻² (0.394) | −1.571×10⁻² (0.382) |
| G₃₃ [(°/s)/LSB] | −1.545×10⁻² (0.705) | −1.550×10⁻² (0.629) | −1.554×10⁻² (0.383) | −1.553×10⁻² (0.370) |
Table 6. Comparison between the sensors' model parameters obtained with different methods, and the related RMS error associated with the norm of the accelerometer's model output. (The signs of the parameters are omitted to ease comparison between methods.)

| Parameter | MAN | TLS | NLS |
|---|---|---|---|
| A_11 [g/LSB] | 2.44 × 10⁻⁴ | 2.418 × 10⁻⁴ | 2.420 × 10⁻⁴ |
| A_22 [g/LSB] | 2.44 × 10⁻⁴ | 2.458 × 10⁻⁴ | 2.468 × 10⁻⁴ |
| A_33 [g/LSB] | 2.44 × 10⁻⁴ | 2.431 × 10⁻⁴ | 2.429 × 10⁻⁴ |
| G_11 [°/s/LSB] | 1.5625 × 10⁻² | 1.533 × 10⁻² | 1.544 × 10⁻² |
| G_22 [°/s/LSB] | 1.5625 × 10⁻² | 1.566 × 10⁻² | 1.569 × 10⁻² |
| G_33 [°/s/LSB] | 1.5625 × 10⁻² | 1.554 × 10⁻² | 1.559 × 10⁻² |
| ξ_RMS [g] | 0.0132 | 0.0120 | 0.0119 |
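The RMS error ξ_RMS in Table 6 quantifies how far the norm of the calibrated, static accelerometer output deviates from 1 g. A minimal sketch of that metric, evaluated on hypothetical calibrated samples (the function name and the sample values are illustrative, not from the paper):

```python
import numpy as np

def norm_rms_error(a_cal):
    """RMS deviation of the calibrated acceleration norm from 1 g,
    over a set of static measurements (one sample per row, in g)."""
    norms = np.linalg.norm(a_cal, axis=1)
    return np.sqrt(np.mean((norms - 1.0) ** 2))

# Hypothetical calibrated static samples (in g)
samples = np.array([[0.00, 0.00, 1.01],
                    [0.00, 0.70, 0.72],
                    [0.99, 0.00, 0.00]])
print(norm_rms_error(samples))
```

Under this metric, a perfectly calibrated static sensor would yield ξ_RMS = 0, so the drop from 0.0132 (MAN) to 0.0120 (TLS) indicates improved scale-factor consistency.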
Table 7. Accelerometer and gyroscope cross-validation standard deviations for the three dataset setups.

| Setup | σ_a,MAN [g] | σ_a,TLS [g] | σ_g,MAN [°] | σ_g,TLS [°] |
|---|---|---|---|---|
| Non-sloped cubic | 0.6699 | 0.0194 | 86.9227 | 2.2519 |
| Sloped cubic | 0.6684 | 0.0105 | 86.7385 | 1.5303 |
| Non-sloped 18-faced | 0.6740 | 0.0217 | 78.6770 | 1.5243 |
Duchi, M.; Ida’, E. Total Least Squares In-Field Identification for MEMS-Based Inertial Measurement Units. Robotics 2024, 13, 156. https://doi.org/10.3390/robotics13110156