Article

Calibrating Range Measurements of Lidars Using Fixed Landmarks in Unknown Positions

1 Computer Engineering Department, University of Baghdad, Baghdad 10071, Iraq
2 Center for Applied Autonomous Sensor Systems (AASS), Örebro University, 70182 Örebro, Sweden
3 Department of Autonomous Systems, Otto-von-Guericke University, 39106 Magdeburg, Germany
4 Department of Engineering Cybernetics, Norwegian University of Science and Technology, 7491 Trondheim, Norway
* Author to whom correspondence should be addressed.
Sensors 2021, 21(1), 155; https://doi.org/10.3390/s21010155
Submission received: 27 October 2020 / Revised: 18 December 2020 / Accepted: 23 December 2020 / Published: 29 December 2020
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Sweden)

Abstract

We consider the problem of calibrating the range measurements of a Light Detection and Ranging (lidar) sensor, accounting for the sensor nonlinearity and its heteroskedastic, range-dependent measurement error. We solve the calibration problem without using additional hardware, exploiting instead assumptions on the environment surrounding the sensor during the calibration procedure. More specifically, we assume that the sensor is placed in an environment so that its measurements lie in a 2D plane that is parallel to the ground, and that its measurements come from fixed objects that extend orthogonally w.r.t. the ground, so that they may be considered as fixed points in an inertial reference frame. Moreover, we exploit the intuition that, when the distance sensor is moved within this environment, its measurements should be such that the relative distances and angles among the fixed points above remain the same. We thus cast the sensor calibration problem as making the measurements comply with the assumption that "fixed features shall have fixed relative distances and angles". The resulting calibration procedure does thus not need additional (typically expensive) equipment, nor the deployment of special hardware. As for the proposed estimation strategies, from a mathematical perspective we consider models that lead to analytically solvable equations, so as to enable deployment in embedded systems. Besides proposing the estimators, we analyze their statistical performance both in simulation and with field tests. We report the dependency of the MSE performance of the calibration procedure as a function of the sensor noise levels, and observe that in field tests the approach can lead to a tenfold improvement in the accuracy of the raw measurements.

1. Introduction

Localization is essential for applications where robots shall move precisely in their surroundings, and is typically performed by leveraging measurements acquired through distance sensors. Calibrating these distance sensors so that their readings are as accurate as possible is thus a preliminary task that is essential for achieving good navigation performance at a later stage.
To give a practical and typical (but not exhaustive) example, calibrating a distance sensor may mean considering a measurement model of the type

$$ r = f_{\mathrm{bias}}(d) + f_{\mathrm{st.dev}}(d)\, e \tag{1} $$

where $r$ is the noisy sensor reading, $d$ is the true distance, $f_{\mathrm{bias}}(\cdot)$ includes the true distance plus an unknown bias term, and $f_{\mathrm{st.dev}}(\cdot)$ is a factor modulating the measurement noise, whose stochasticity is induced by a random variable $e$ that is typically assumed standard Gaussian and independent and identically distributed (iid). Such a model is motivated by practical real-life situations, such as the one depicted in Figure 1, with the uncertainty constructed according to the work in [1]. A strategy for calibrating a model such as (1) would in this case correspond to estimating $f_{\mathrm{bias}}$ and $f_{\mathrm{st.dev}}$ so that these factors can be accounted for when postprocessing new measurements from the sensor.
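To make (1) concrete, the following minimal sketch (in Python, with hypothetical coefficient values, not the ones identified later in the paper) simulates readings from a sensor obeying a model of this type, with an affine bias and a noise standard deviation that grows with the true distance:

```python
import numpy as np

rng = np.random.default_rng(0)

def f_bias(d):
    # Hypothetical bias factor: the true distance plus a small
    # distance-dependent systematic offset.
    return 1.02 * d + 0.01

def f_stdev(d):
    # Hypothetical noise modulation: the standard deviation of the
    # measurement error grows with the distance (heteroskedasticity).
    return 0.005 + 0.01 * d

d = np.linspace(0.2, 6.0, 500)      # true distances [m]
e = rng.standard_normal(d.shape)    # iid standard Gaussian noise
r = f_bias(d) + f_stdev(d) * e      # noisy readings, as in model (1)
```

Plotting r against d would produce a fan-shaped spread qualitatively similar to the one visible in Figure 1.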
The operation of calibrating a sensor is typically performed by comparing the raw measurements of the sensor against readings from an external and sufficiently more accurate system (e.g., a motion capture system) that is considered as ground truth (e.g., as in [2]). Acquiring this ground truth (and thus this external and sufficiently more accurate system), however, may be expensive and time-consuming. It may thus be beneficial to find strategies that substitute this information acquisition step with more easily implementable and cheaper approaches.
For example, this substitution can be performed as follows: assume that in structured environments certain structures do not move (e.g., walls, doors, and corners in the built environment; trunks of trees in a forest; etc.). Assume moreover that these structures produce specific and easily recognizable signatures in the readings (e.g., trunks of trees in a forest produce roughly circular features, as in Figure 2).
Since the structures do not move, these signatures may be considered as fixed points in an inertial reference frame. If a distance sensor is moved within this environment, then the measurements from the sensor referring to these fixed points should be such that the relative distances and angles among these fixed points remain the same. The calibration process may then be cast, from an intuitive perspective, as finding models like (1) for which the measurement process complies with the assumption that "fixed features shall have fixed relative distances and angles".
The goal of this paper is thus to understand how to leverage these assumptions on the structure of the environment surrounding the distance sensors for the purpose of building statistically accurate distance sensor calibration strategies.
To do so, we will consider a simple strategy: (a) place some artificial landmarks (i.e., some poles) in random positions in space; (b) calibrate the sensor by making its measurements comply with the fixed-world assumption above.

1.1. Literature Review

The strategy described in the previous subsection relates to the existing literature as follows. First of all, distance sensors are often used for reconstructing environmental maps used by robots to move without colliding with obstacles, as, for example, in [3]. (We also note that map generation is not the only application; for example, forestry applications use lidars to measure and monitor the growth of forests, compute trunk diameters, and calculate the density of trunks or canopies [4,5].) Several strategies have been proposed to improve distance sensor performance and accuracy through statistical manipulation of their measurement processes. For example, a statistical sensor model for ultrasound sensors was presented in [6], with calibration algorithms in [7], and a good review of odometer calibration is presented in [8]. As a generic definition, statistical sensor calibration is the process of improving sensor accuracy and/or precision by transforming the measurements into something closer to ground truth, combining information about the same sensed quantity obtained from different sources of information, such as ground truth data. Sensor calibration for installation errors has been well studied in the literature and, in general, is solved using nonlinear optimization; see, for example, [9]. However, in this research we do not consider the extrinsic calibration of the sensor, but focus on the intrinsic calibration that deals with the sensor nonlinearity and the heteroskedastic, range-dependent measurement error. Unfortunately, ground truth is not always available and, in some cases, if it exists, it is very expensive. Therefore, it is usually substituted with one of the following strategies.
  • The first strategy is to pose certain assumptions about the sensor movement and about the surrounding environment, so that the calibration process is shaped as a joint parameter and state estimation problem; see, for example, lidar calibration from linear motion [10].
  • Another strategy for substituting ground truth information is to implement appropriate sensor fusion strategies, i.e., to combine redundant information from independent distance sensors. Such a strategy has been used in [10,11], where approximated Expectation Maximization (EM) procedures (in the former) and Markov chain Monte Carlo (MCMC) techniques under Bayesian frameworks (in the latter) are used for joint parameter and state estimation, combining information from lidars, odometry, and ultrasound sensors. Calibrating the intrinsic parameters of one beam of a rotating multi-beam lidar based on its other beams has attracted a large amount of research, for example, in [12,13,14,15]. We note that sensor fusion is a vast topic and there are many publications on calibrating other sensors, for example, magnetometer calibration using inertial sensors in [16], camera and IMU calibration in [17], and lidar and camera calibration in [18]. However, here we are interested only in calibration that is related to lidars.
  • The last strategy is to use assumptions on the environment, for example, odometer calibration with localization [19]. Another example is to use planar features in the environment to calibrate lidars. Originally, plane-based calibration was presented for calibrating airborne lidars in [20,21]. Then, the authors of [22] introduced a mathematical model and a static calibration for the Velodyne HDL-64E lidar using planar features and a least squares solution. The authors of [23] calibrated a 3D lidar for both geometric and temporal parameters, using Rényi Quadratic Entropy to formulate an optimization problem that maximizes the quality of the point cloud.
As said above, here we specifically investigate how to substitute ground truth information with assumptions on the environment. Our strategy will intrinsically require localizing (in a sense to be specified later) the position of the sensor within the surrounding environment. This means that our paper relates to the existing literature on localizing sensors in space using noisy measurements of distances from landmarks or beacons. To the best of our knowledge, it is possible to do so using three different approaches:
  • triangulation, where the position is determined through measuring the angles between the sensing device and the known landmarks (see, e.g., in [24]);
  • trilateration, where the position is determined through measuring the distances between the device and the landmarks (see, e.g., in [25,26]);
  • triangulateration, a strategy that combines both of the above (see, e.g., in [27,28]).
Generally, most algorithms use either triangulation or trilateration alone, as these require less information from the sensor (measuring both distances and angles, indeed, normally requires more expensive hardware). For this reason, studies on how to localize the position of a sensor using landmarks or beacons mostly use triangulation or trilateration approaches. For example, the authors of [29,30,31] all propose different techniques for self-localization using landmarks or beacons and triangulation concepts, while the authors of [25,26,32,33] all use trilateration.
In the literature mentioned above, the solutions are based on the assumption that the landmark positions are known, which is not the case in our setup. Extensive research has also been done to solve mobile device localization given several known mobile base stations [34]. In this paper, however, we relax this assumption to the more general case where the landmark positions are assumed to be completely unknown. Instead, we assume to know imprecise information about the sensor's new position with respect to the previous one. This kind of information corresponds to the control commands to the robot, which are always available for a robot moving in its environment. Furthermore, to make the calibration process independent of the robot's dynamical model, we assume to take measurements only when the robot is standing still.
Scan-matching techniques like ICP [35] and NDT [36,37] are also commonly used to localize a sensor in space. This class of methods determines the relative transformation between two lidar scans by minimizing surface-to-surface distances using all points in the scans, as opposed to a sparse set of extracted landmarks as is the case with triangulation and trilateration. Scan matching could potentially be used instead of, or in addition to, the control commands used as input to estimate the sensor pose in this work. However, the focus of the present paper is not this pose estimation, but rather the calibration of the range-dependent sensor noise.
Finally, we note that our strategy is specifically designed for cheap distance sensors: generally, the more accurate and precise a sensor is, the more useful (and, at the same time, likely expensive) it is. Our focus is on enabling software-based improvements of cheap sensors so that, by adding a bit of statistical processing, we extend their applicability. For this reason, we use triangulation lidar sensors as a practical and motivating example. Therefore, our paper also relates to the literature around the calibration of these sensors, and thus to the analyses of the effect of the color of the target on the measured distance [38]; the works in [39,40], which build two partially different statistical models (the former homoskedastic, the latter heteroskedastic) and thus two slightly different calibration procedures based on ground truth information, using Weighted Least Squares (WLS) for parameter estimation and the Akaike Information Criterion (AIC) for model selection; and the work in [41], which extends the work in [40] by including the effects of beam angles in the calibration process.

1.2. Statement of Contributions

Summarizing, we propose and validate a strategy that uses triangulateration concepts for calibrating distance sensors that return 2D measurements (i.e., both angles and distances). The algorithm relies on placing the distance sensor inaccurately in equally distant positions along straight paths and on making the measurements from the sensor comply with the assumption that the landmarks do not move, plus some other practical assumptions listed exhaustively in Section 2. The strategy is intended to be applicable at least (but not only) to the very specific situation where vacuum cleaning robots move within an apartment and are able, by moving around and detecting obstacles, to self-calibrate their distance sensors.
More specifically, our strategy works as follows. First, we assume the pre-existence of a procedure that correctly identifies and distinguishes landmarks in the 2D measurements stream. Then, building on this knowledge, we perform two steps: (1) use the measured angles to the identified landmarks and the knowledge of the sensor movement to obtain an unbiased estimate of the landmark positions in the fixed frame, and (2) calibrate the distance measurement model using these estimated landmark positions.
We moreover validate the strategy using two approaches: (a) via simulated datasets, to analyze the limitations of the proposed procedure in a Monte Carlo (MC) fashion, and (b) via field datasets recorded in a lab equipped with a high-fidelity motion capture system, to quantitatively assess the performance of the proposed procedure in real-life scenarios.

1.3. Organization of the Manuscript

Section 2 formulates the calibration problem from a mathematical perspective. Section 3 describes the proposed algorithm, while Section 4 quantitatively assesses its performance. Section 5 concludes by listing the most important discoveries and research questions opened in the process.

2. Problem Formulation

We pose the following assumptions.
(A1)
The environment from which we collect the measurements to be used for the calibration process has particular structures that produce easily recognizable features in the sensor readings. For example, the situation is as in Figure 3, where corners and poles produce clear features in the 2D plane of the measurements. Note that this means that our strategy cannot work in environments that lack such easily recognizable structures (natural places like deserts, or flat areas without trees). However, we generally consider applications where robots shall move precisely in their surroundings, and this calls for objects to be avoided; if there are no such obstacles/structures, then the need for precise calibration becomes feeble. Given this, without loss of generality we require static and detectable landmarks; in this paper we use cylinders with known radius, but they could be anything, as long as we have a detector for them.
(A2)
The sensor measurements lie in a 2D plane that is parallel to the ground. Moreover, the objects that produce the above mentioned features extend orthogonally w.r.t. the ground. This implies that the distance measurements are not affected by tilt effects. This requirement may not hold in generic situations; however, our envisioned calibration strategy is to be carried out within a building, where the conditions above hold. The problem of removing these assumptions is considered a potential future extension.
(A3)
The statistical model underlying the distance readings contains heteroskedastic noise (for which the variance of the noise increases with the actual distance that shall be measured) and a bias whose amplitude also increases with that distance. More specifically, we will focus on the situation where there exist $l = 1, \dots, L$ objects in the environment, and $k = 1, \dots, K$ places where the sensor can be placed. We then let $(x_l, y_l)$ and $(\tilde{x}_k, \tilde{y}_k)$ be, respectively, the Cartesian coordinates of the $L$ objects and of the $K$ sensor positions. Accordingly, the actual distance between the sensor position $k$ and the object placement $l$ is

$$ d_{l,k} = \sqrt{(x_l - \tilde{x}_k)^2 + (y_l - \tilde{y}_k)^2}. \tag{2} $$
We then assume that the distance readings are distributed according to the polynomial model

$$ \tilde{d}_{l,k} = \underbrace{\sum_{i=0}^{n_b} \alpha_i (d_{l,k})^i}_{\text{bias}} + \underbrace{\left( \sum_{i=0}^{n_h} \beta_i (d_{l,k})^i \right) e_{l,k}}_{\text{heteroskedastic noise}} \tag{3} $$

with $e_{l,k} \sim \mathcal{N}(0, 1)$ iid. The model parameters are thus $\alpha_i, \beta_i$, with $n$ being the corresponding model order (hereafter assumed for simplicity equal for both the bias and noise terms, i.e., $n = n_b = n_h$). Note that in the following we may also use a simplified distance model that, for the sake of numerical tractability, neglects the heteroskedastic term in (3), so that the model reduces to

$$ \tilde{d}_{l,k} = \sum_{i=0}^{n} \alpha_i (d_{l,k})^i + e_{l,k}. \tag{4} $$

We will refer to this model as the "simplified distance model".
(A4)
Finally, we also assume that the angular readings $\tilde{\theta}_{l,k}$ are noisy measurements of the actual angles $\theta_{l,k}$ from which the object $l$ is seen by the sensor at position $k$, with respect to the reference frame of the horizontal axis. More precisely, we assume

$$ \tilde{\theta}_{l,k} = \theta_{l,k} + \nu_{l,k} \tag{5} $$

where the measurement noise is $\nu_{l,k} \sim \mathcal{N}(0, \sigma_\theta^2)$ iid. Note that in practice this is a simplifying assumption that we use for analytical tractability and that, a posteriori, is motivated by the numerical results we obtained during our experiments. (For the sake of precision, it would be more formally correct to model the angle measurement noise through a von Mises distribution with circular mean and noncircular concentration parameter. However, such a distribution converges to a normal one as the concentration parameter grows larger; in our case the approximation is thus justified in practice.) Note, moreover, that this assumption implies that $\sigma_\theta^2$ is an unknown parameter of the model. We also assume that the error characteristics of (3)–(5) are time-invariant and do not depend on the absolute positions of the landmarks (while they obviously depend on the relative distances "sensor vs. obstacle"). The angle $\theta_{l,k}$ is thus the sum of the angle from which the object $l$ is seen by the sensor with respect to the robot reference frame, plus the robot heading angle, plus the rotation angle of the lidar's internal coordinate system with respect to the robot coordinate system, which is assumed to be a known constant. Note also that the measurement noise $\nu_{l,k}$ in (5) incorporates imprecisions in the knowledge of the robot heading and rotation angle.
Summarizing, the calibration procedure shall return a reasonable model order $\hat{n}$ and an estimated parameter vector $\hat{\Theta} = \bigl[ \hat{\alpha}_1, \dots, \hat{\alpha}_{\hat{n}}, \hat{\beta}_1, \dots, \hat{\beta}_{\hat{n}}, \hat{\sigma}_\theta^2 \bigr]$. The problems are thus
(P1)
design a statistically optimal or near-optimal (in the Mean Squared Error (MSE) sense) algorithm that can be computed using closed-form expressions, and that can simultaneously estimate the sensor coordinates $(x_k, y_k)$ for each sampling position $k$, the positions of the objects $(x_l, y_l)$ for each object $l$, the model order $\hat{n}$, and the model parameters vector $\hat{\Theta}$ above;
(P2)
quantitatively characterize the statistical performance of these estimators using appropriate mathematical analysis and field tests.

3. A Triangulateration Strategy for Calibrating Distance Sensors

Optimally solving the problems formulated in Section 2 above requires jointly solving a nonlinear parameter estimation and a nonlinear state estimation problem. The solution is in general not available in closed form, and a viable numerical strategy could be to use Monte Carlo techniques. However, this would require extensively long simulations and high processing power, which we assume is not available or usable. Recall indeed our initial idea: our goal is to develop strategies that can be used to endow cheap embedded systems (such as vacuum cleaning robots) with the capability to autonomously self-calibrate their distance sensors when desired, without the need to connect to external computing infrastructure. Therefore, we proceed to solve the problem using an ad hoc strategy that is easily implementable on normal embedded systems, at the cost of sacrificing optimality of the estimates in the MSE sense.
More precisely, we propose to construct an estimator that computes solutions by performing the following steps (a data-generation sketch following these steps is given after the list):
  • assume to know that there exist $L$ landmarks, and to be able to identify and label them at each time instant from the raw measurements stream;
  • place the sensor in a finite number of ideally equally spaced positions along an ideally straight line (say $s_k$, where $k = 1, \dots, K$);
  • collect noisy measurements of the angles $\tilde{\theta}_{l,k}$ and distances $\tilde{d}_{l,k}$ between the sensor and the various landmarks $l = 1, \dots, L$ at each position $s_k$ (we recall that the stochastic models for these processes are (3) and (5));
  • estimate the 2D positions of the $L$ landmarks in the inertial frame based on the sensor angle measurements $\tilde{\theta}_{l,k}$ only, using the strategy defined in Section 3.1 below; and
  • given the estimated landmark positions above and the measured distances $\tilde{d}_{l,k}$, estimate the model order and model parameters (i.e., do the actual sensor calibration step) with the strategy proposed in Section 3.2 below.
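As a running reference for the following subsections, here is a minimal data-generation sketch that follows the steps above; all numerical values (noise levels, number of landmarks and positions) are hypothetical, except the polynomial coefficients, which are the ones used later in Section 4.1:

```python
import numpy as np

rng = np.random.default_rng(1)
L, K = 5, 20                                  # landmarks, sensor positions
alpha = np.array([0.0525, 0.8838, 0.0584])    # bias polynomial (Section 4.1)
beta = 0.05 * alpha                           # noise polynomial (Section 4.1)
sigma_s, sigma_theta = 0.01, 0.01             # hypothetical noise levels [m], [rad]

landmarks = rng.uniform(-3.0, 3.0, size=(L, 2))       # unknown centers (x_l, y_l)
step = np.array([0.3, 0.0])                           # ideal 30 cm steps along x
pos_ref = np.arange(K)[:, None] * step                # reference positions, cf. (7)
pos_act = pos_ref + sigma_s * rng.standard_normal((K, 2))  # actual positions, cf. (8)

# Actual distances and angles between every landmark l and sensor position k.
dx = landmarks[:, 0][:, None] - pos_act[None, :, 0]   # shape (L, K)
dy = landmarks[:, 1][:, None] - pos_act[None, :, 1]
d_act = np.hypot(dx, dy)                              # actual distances, cf. (2)
theta_act = np.arctan2(dy, dx)                        # actual angles

# Noisy measurements according to models (3) and (5).
powers = d_act[..., None] ** np.arange(alpha.size)    # [1, d, d^2] per entry
d_meas = powers @ alpha + (powers @ beta) * rng.standard_normal((L, K))
theta_meas = theta_act + sigma_theta * rng.standard_normal((L, K))
```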

3.1. Estimating the 2D Positions of Circular Landmarks

For illustration purposes and to make the paper self-contained, we now present a simple landmark position estimation algorithm, and show that the method still works with it, even though better landmark detectors may be available. We also note that the focus of the paper is on sensor calibration; however, the same method developed for the sensor calibration part can be used to build a simple landmark position estimation algorithm. Recall then that we assume the landmarks $l = 1, \dots, L$ to be circular geometrical features in the measurement stream that are induced by distinct circular objects in the sensor environment, as in Figure 2. To contextualize this assumption, consider Figure 3 and Figure 4 and their captions, showing the sensor mounted on a robot moving in between the various circular landmarks. We used a lidar sensor from Neato (Neato Robotics, www.neatorobotics.com); it measures ranges from 0.2 m to 6 m and covers a full 360-degree planar scan with an angular resolution of 1 degree [42].
Assume then, as in step 2 above, to place the sensor in $k = 1, \dots, K$ ideally equally spaced positions along an ideally straight line, and to collect the corresponding raw measurements from the sensor. Note that it is possible to do the calibration without moving in a straight line, but this choice prevents the error from increasing greatly during rotation and also keeps the motion model (8) as simple as possible by excluding the robot heading angle. We selected equally spaced positions to minimize the error propagation from the rover controller to the calibration process, as different step sizes might have different error levels in the controller. The next step is to compute, starting from this raw data, an estimate of the center of each circular landmark $l$ using the information obtained at each sensor position $k$. Intuitively, we estimate the center of the landmark $l$ as the point that minimizes the sum of its distances to the lines obtained at each $k$ pointing towards the landmark, as shown in Figure 5 and described more formally in its caption.
Given that in this paper we assume that the sensor sees round landmarks, we implicitly assume the presence of an offset in the distance measurements that is equal to the landmark radius. For brevity, here we assume this parameter to be known. Otherwise, circular landmarks lead to raw measurements like the ones in Figure 2, from which it is not difficult to obtain practically accurate estimates of such radii.
Therefore, to summarize, we assume that the robot moves along a straight line and that the sensor takes measurements in equally spaced positions along this line. For each landmark we can compute the straight lines that aim from all the various sensor positions to the (individually estimated) landmark centers. If there were no error, we could find the center of each landmark simply by intersecting the various lines referring to the same landmark. As we are in practice always far from this ideal condition (i.e., we have accurate information neither about the sensor positions nor about the measurement angles), the proposal is to find an estimate of each landmark center, say $(\hat{x}_l, \hat{y}_l)$, with a least-squares solution that minimizes the sum of perpendicular distances from the unique solution point to all these lines.
The remainder of this section is dedicated to deriving the analytical structure of such an estimator. To help readability, the notation is general, so that $l$ means "landmark", $s_k$ means "sensor position", $x$ and $y$ relate to the Cartesian reference frame, and $\star$ and $\tilde{\cdot}$ refer, respectively, to the "ideal" and "noisy" versions of the same quantity. For example, the relation

$$ \tilde{\theta}_{l,k} = \theta_{l,k} + \nu_{l,k} \tag{6} $$

indicates that the measured angle $\tilde{\theta}_{l,k}$ of landmark $l$ w.r.t. the position $s_k$ is a noisy version of the actual angle $\theta_{l,k}$.
To find the estimated landmark position $(\hat{x}_l, \hat{y}_l)$ in closed form, consider that in its $k$-th position the sensor is located at $s_k = [\tilde{x}_k \;\; \tilde{y}_k]^T$ (importantly, a quantity that is unknown to the system). Ideally, the $(k+1)$-th position of the sensor should be along a straight line (i.e., at a fixed heading angle) and at a fixed distance, i.e.,

$$ [x^\star_{k+1} \;\; y^\star_{k+1}]^T = [x^\star_k \;\; y^\star_k]^T + [\delta x_k \;\; \delta y_k]^T \tag{7} $$

where $\delta x_k$ and $\delta y_k$ are either zeros when the robot is not moving or constants determined by the step size when the robot is moving. In practice, though, the ideal conditions are not satisfied. We thus model the actual sensor position to be

$$ [\tilde{x}_{k+1} \;\; \tilde{y}_{k+1}]^T = [x^\star_{k+1} \;\; y^\star_{k+1}]^T + [e_{x,k} \;\; e_{y,k}]^T \tag{8} $$

where $e_{x,k}$ and $e_{y,k}$ are zero mean Gaussian iid with the same variance $\sigma_s^2$. Note that this Gaussianity assumption is once again instrumental for the purpose of being able to devise computationally efficient schemes that can be implemented in embedded systems. Note, moreover, that we record the sensor measurements after the robot has reached its new position (i.e., we do not consider measurements recorded during transients). As the actual sensor positions $s_k$ are not available, the best we have are the reference (noiseless) sensor positions $[x^\star_k \;\; y^\star_k]^T$, which can be determined using (7).
Moreover, consider the actual sensor position $s_k$ and the measured angle $\tilde{\theta}_{l,k}$ of landmark $l$ w.r.t. the position $s_k$ (a line whose slope is then $\tan(\tilde{\theta}_{l,k})$, like the dotted lines in Figure 5). The equation of this line is

$$ \sin(\tilde{\theta}_{l,k})\, x_l - \cos(\tilde{\theta}_{l,k})\, y_l - \sin(\tilde{\theta}_{l,k})\, \tilde{x}_k + \cos(\tilde{\theta}_{l,k})\, \tilde{y}_k = 0. \tag{9} $$
Substituting (5) into (9) above yields

$$ \sin(\theta_{l,k} + \nu_{l,k})\, x_l - \cos(\theta_{l,k} + \nu_{l,k})\, y_l - \sin(\theta_{l,k} + \nu_{l,k})\, \tilde{x}_k + \cos(\theta_{l,k} + \nu_{l,k})\, \tilde{y}_k = 0. \tag{10} $$
Expanding the sine and cosine terms using the trigonometric identities

$$ \sin(\theta_{l,k} + \nu_{l,k}) = \sin(\theta_{l,k}) \cos(\nu_{l,k}) + \cos(\theta_{l,k}) \sin(\nu_{l,k}) \tag{11} $$

$$ \cos(\theta_{l,k} + \nu_{l,k}) = \cos(\theta_{l,k}) \cos(\nu_{l,k}) - \sin(\theta_{l,k}) \sin(\nu_{l,k}) \tag{12} $$
and simplifying (dividing through by $\cos(\nu_{l,k})$) gives the following expanded equation

$$ \bigl( \sin(\theta_{l,k}) + \cos(\theta_{l,k}) \tan(\nu_{l,k}) \bigr) x_l - \bigl( \cos(\theta_{l,k}) - \sin(\theta_{l,k}) \tan(\nu_{l,k}) \bigr) y_l - \bigl( \sin(\theta_{l,k}) + \cos(\theta_{l,k}) \tan(\nu_{l,k}) \bigr) \tilde{x}_k + \bigl( \cos(\theta_{l,k}) - \sin(\theta_{l,k}) \tan(\nu_{l,k}) \bigr) \tilde{y}_k = 0. \tag{13} $$
Moving the stochastic terms to the right hand side of the equation, we then obtain

$$ \sin(\theta_{l,k})\, x_l - \cos(\theta_{l,k})\, y_l - \sin(\theta_{l,k})\, \tilde{x}_k + \cos(\theta_{l,k})\, \tilde{y}_k = -g_k \tan(\nu_{l,k}) \tag{14} $$

where

$$ g_k := \sin(\theta_{l,k}) \bigl( y_l - \tilde{y}_k \bigr) + \cos(\theta_{l,k}) \bigl( x_l - \tilde{x}_k \bigr). \tag{15} $$
Now, substituting the geometrical identities $\sin(\theta_{l,k}) = (y_l - \tilde{y}_k) / d_{l,k}$ and $\cos(\theta_{l,k}) = (x_l - \tilde{x}_k) / d_{l,k}$ (immediately proved upon inspecting Figure 6) into (15) and simplifying leads to

$$ g_k = \frac{(y_l - \tilde{y}_k)^2 + (x_l - \tilde{x}_k)^2}{d_{l,k}} = d_{l,k}. \tag{16} $$
Recall that $d_{l,k}$ is the ground truth for the sensor-to-landmark distances, a ground truth that is not available. Consider then that our main goal is to calibrate the sensor without using such ground truth information. Therefore, the best we can do is to plug in the measured distances $\tilde{d}_{l,k}$ as estimates (or the sample mean, if more than one measurement is available at the same sensor position).
For small values of $\nu_{l,k}$ (i.e., for the case where the dotted lines in Figure 5 aim decently at their target) we can simplify (14) using the approximations

$$ \sin(\nu_{l,k}) \approx \nu_{l,k}, \qquad \cos(\nu_{l,k}) \approx 1. $$

This means obtaining

$$ \sin(\theta_{l,k})\, x_l - \cos(\theta_{l,k})\, y_l - \sin(\theta_{l,k})\, \tilde{x}_k + \cos(\theta_{l,k})\, \tilde{y}_k \approx -\tilde{d}_{l,k}\, \nu_{l,k}. \tag{17} $$
Recall then that in our assumptions the measurement noise $e_{l,k}$ in (3) is statistically independent of the measurement noise $\nu_{l,k}$ in (5). For that reason, the residual error in (17) is heteroskedastic, since its variance is $\sigma_\theta^2$ multiplied by the heteroskedastic variance of $\tilde{d}_{l,k}$ (which, we recall, is different at each sensor position). In the ideal case of no measurement noise, each line would pass through the center of the landmark $l$, and the point of intersection of these lines would be the position of the landmark center. As mentioned above, however, the presence of measurement noise makes the lines drift: the $K$ lines corresponding to the $K$ sensor positions will in general not intersect in a unique point, but in pairs. In this case, we may then solve the problem in a least-squares sense: the idea is to minimize the weighted sum of the squared distances (the solid red lines in Figure 5), i.e., to solve
$$ \begin{bmatrix} \hat{x}_l \\ \hat{y}_l \end{bmatrix} = \arg\min_{x_l, y_l \in \mathbb{R}} \left\| W_l^{\frac{1}{2}} \left( H_l \begin{bmatrix} x_l \\ y_l \end{bmatrix} - \tilde{b}_l \right) \right\|^2 \tag{18} $$
where

$$ W_l := \begin{bmatrix} \frac{1}{\operatorname{var}(\tilde{d}_{l,1})} & 0 & \cdots & 0 \\ 0 & \frac{1}{\operatorname{var}(\tilde{d}_{l,2})} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{\operatorname{var}(\tilde{d}_{l,K})} \end{bmatrix}, \quad H_l := \begin{bmatrix} \sin(\theta_{l,1}) & -\cos(\theta_{l,1}) \\ \vdots & \vdots \\ \sin(\theta_{l,K}) & -\cos(\theta_{l,K}) \end{bmatrix}, \quad \tilde{b}_l := \begin{bmatrix} \sin(\theta_{l,1})\, \tilde{x}_1 - \cos(\theta_{l,1})\, \tilde{y}_1 \\ \vdots \\ \sin(\theta_{l,K})\, \tilde{x}_K - \cos(\theta_{l,K})\, \tilde{y}_K \end{bmatrix}. \tag{19} $$
Consequently, solving this system according to Aitken's generalized least squares method [43] gives the following Best Linear Unbiased Estimator (BLUE) of the landmark centers,

$$ \begin{bmatrix} \hat{x}_l \\ \hat{y}_l \end{bmatrix} = \bigl( H_l^T W_l H_l \bigr)^{-1} H_l^T W_l\, \tilde{b}_l. \tag{20} $$
The computations above assume full knowledge of the sensor positions $s_k = [\tilde{x}_k \;\; \tilde{y}_k]^T$. As this information is not available, the best we can do is to plug in, instead, the expected sensor positions $[x^\star_k \;\; y^\star_k]^T$ in (7). Replacing $[\tilde{x}_k \;\; \tilde{y}_k]^T$ with $[x^\star_k \;\; y^\star_k]^T$ in the least squares problem (18) thus leads to the problem
$$ \begin{bmatrix} \hat{x}_l \\ \hat{y}_l \end{bmatrix} = \arg\min_{x_l, y_l \in \mathbb{R}} \left\| (W_l^*)^{\frac{1}{2}} \left( H_l \begin{bmatrix} x_l \\ y_l \end{bmatrix} - b_l \right) \right\|^2 \tag{21} $$
where

$$ W_l^* := W_l \;\; \text{(as explained below Equation (25))}, \qquad b_l := \begin{bmatrix} \sin(\theta_{l,1})\, x^\star_1 - \cos(\theta_{l,1})\, y^\star_1 \\ \vdots \\ \sin(\theta_{l,K})\, x^\star_K - \cos(\theta_{l,K})\, y^\star_K \end{bmatrix} \tag{22} $$
which in turn gives the weighted least squares estimator

$$ \begin{bmatrix} \hat{x}_l \\ \hat{y}_l \end{bmatrix} = \bigl( H_l^T W_l^* H_l \bigr)^{-1} H_l^T W_l^*\, b_l, \tag{23} $$
whose weights matrix $W_l^*$ is motivated below and defined in (25).
This estimator is solvable, in the sense that the embedded system has all the information necessary to compute and minimize the cost. In other words, solving (21) is computationally feasible, as all the required information is available, while solving (18) is not. However, solving (21) means solving an approximate model of the intersection problem, which will indeed result in a biased estimator, as it corresponds to solving the system of equations that is obtained after substituting $x^\star_k$ and $y^\star_k$ from (8) into (17), i.e.,

$$ \sin(\theta_{l,k})\, x_l - \cos(\theta_{l,k})\, y_l - \sin(\theta_{l,k})\, x^\star_k + \cos(\theta_{l,k})\, y^\star_k \approx \sin(\theta_{l,k})\, e_{x,k} - \cos(\theta_{l,k})\, e_{y,k} - \tilde{d}_{l,k}\, \nu_{l,k}. \tag{24} $$
To characterize the error of this estimator, notice that the residual error includes two different terms. The first is homoskedastic (specifically, corresponding to the first two terms on the right hand side of (24)), with a variance of

$$ \operatorname{var}\bigl( \sin(\theta_{l,k})\, e_{x,k} - \cos(\theta_{l,k})\, e_{y,k} \bigr) = \sin(\theta_{l,k})^2 \sigma_s^2 + \cos(\theta_{l,k})^2 \sigma_s^2 = \sigma_s^2. $$
The second term is heteroskedastic, with variance $\sigma_\theta^2 \operatorname{var}(\tilde{d}_{l,k})$ (specifically, corresponding to the last term in (24)). Consequently, we suggest to set the weighting matrix $W_l^*$ as

$$ W_l^* := \sigma_s^2 I + \sigma_\theta^2 W_l. \tag{25} $$
Note that in our assumptions both $\sigma_s^2$ and $\sigma_\theta^2$ are unknown constants. However, minimizing the sum of the squared residual errors in (24) is equivalent to minimizing the sum of the squared residual errors $\tilde{d}_{l,k} \nu_{l,k}$, as the transformation between these errors is affine. This means that replacing $W_l^*$ with $W_l$ will give exactly the BLUE for the parameters in (24).
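To fix ideas, a compact sketch of the estimator (23) follows; it takes $W_l^* = W_l$ as argued above, uses the measured angles in place of the actual ones (as one must in practice), and assumes that approximate variances of the distance readings are available (e.g., from the noise polynomial of a previous calibration iteration):

```python
import numpy as np

def estimate_landmark_center(theta, d_var, pos_ref):
    """Weighted least-squares landmark center, cf. Equation (23).

    theta:   (K,) measured angles to one landmark (used in place of the
             actual angles, which are unavailable in practice)
    d_var:   (K,) approximate variances of the distance readings,
             building the weights W_l of (19)
    pos_ref: (K, 2) reference (noiseless) sensor positions from (7)
    """
    H = np.column_stack([np.sin(theta), -np.cos(theta)])               # H_l in (19)
    b = np.sin(theta) * pos_ref[:, 0] - np.cos(theta) * pos_ref[:, 1]  # b_l in (22)
    W = np.diag(1.0 / d_var)                                           # W_l in (19)
    # Closed-form solution of the weighted LS problem (21).
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ b)
```

With the simulated data sketched in the beginning of this section, one would call this per landmark, e.g., estimate_landmark_center(theta_meas[l], d_var[l], pos_ref).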

3.2. Calibrating the Sensor

Once all the $L$ landmark positions are estimated as $(\hat{x}_l, \hat{y}_l)$ as in the previous section, we can easily estimate the various landmark-to-sensor distances $\hat{d}_{l,k}$ simply by computing the distance between each landmark and all the sensor positions and subtracting the radius of the landmark (which, as we said above, is either assumed to be known or assumed to be inferrable from the raw data). For notational compactness, define the $(KL \times 1)$-dimensional distance measurement vector
$$ \tilde{d} := \bigl[ \tilde{d}_{1,1} \; \cdots \; \tilde{d}_{1,K} \;\; \cdots \;\; \tilde{d}_{L,1} \; \cdots \; \tilde{d}_{L,K} \bigr]^T, $$

the noise vector

$$ e := \bigl[ e_{1,1} \; \cdots \; e_{1,K} \;\; \cdots \;\; e_{L,1} \; \cdots \; e_{L,K} \bigr]^T, $$

and rewrite (4) through a Vandermonde matrix, i.e.,

$$ \tilde{d} = \underbrace{\begin{bmatrix} 1 & \hat{d}_{1,1} & (\hat{d}_{1,1})^2 & \cdots & (\hat{d}_{1,1})^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & \hat{d}_{1,K} & (\hat{d}_{1,K})^2 & \cdots & (\hat{d}_{1,K})^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & \hat{d}_{L,1} & (\hat{d}_{L,1})^2 & \cdots & (\hat{d}_{L,1})^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & \hat{d}_{L,K} & (\hat{d}_{L,K})^2 & \cdots & (\hat{d}_{L,K})^n \end{bmatrix}}_{=: \Phi} \underbrace{\begin{bmatrix} \alpha_0 \\ \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{bmatrix}}_{=: \alpha} + e \tag{26} $$

where the Vandermonde matrix $\Phi$ is of size $KL \times (n+1)$ and the parameter vector $\alpha$ is of size $(n+1) \times 1$.
Given this notation, the calibration procedure consists of three phases (a compact code sketch covering all three phases is given after the list):
  • phase#1: model parameters estimation. After obtaining the estimates of the distances between the sensor and the landmarks, estimate the parameters $\alpha$ by casting the problem as a linear regression on (26) and the measurement vector $\tilde{d}$ for model orders $n = 0, 1, 2, \dots, n_{\max}$, where $n_{\max}$ is a user-defined parameter. This means solving, for each potential $n$, the problem

    $$ \hat{\alpha} = \arg\min_{\alpha \in \mathbb{R}^{n+1}} \bigl\| \Phi \alpha - \tilde{d} \bigr\|^2 \tag{27} $$

    which has the closed-form solution

    $$ \hat{\alpha} = (\Phi^T \Phi)^{-1} \Phi^T \tilde{d}. \tag{28} $$

    Note that, once again, the estimator $\hat{\alpha}$ is unbiased; however, due to the simplification of the noise term in (3) (i.e., ignoring the heteroskedastic part of the noise), $\hat{\alpha}$ will not be efficient.
  • phase#2: model order selection. There exist various alternatives for selecting the optimal model order $\hat{n} \in \{0, 1, 2, \dots, n_{\max}\}$: fitting opportune test sets, using crossvalidation, or using model order selection criteria such as AIC. In the setups considered in this paper we actually found that the model order selection problem has quite a clear solution, in the sense that all the various alternatives clearly indicated the very same order (see Section 4); in turn, this implies that for our specific case all the various approaches tend to give equivalent results. It may, however, be that in other cases different strategies lead to different results;
  • phase#3: filtering new measurements. Once the model order selection and the model parameters estimation problems are solved, the "object distance vs. sensor reading" measurement model (3) can be rewritten as

    $$ \tilde{d} = \sum_{i=0}^{\hat{n}} \hat{\alpha}_i (d)^i + \sum_{i=0}^{\hat{n}} \hat{\beta}_i (d)^i e \tag{29} $$

    where $\tilde{d}$ is the raw measurement and $d$ is the actual distance. To estimate $d$ from $\tilde{d}$ and the trained model one should thus invert (29). This inversion is not immediate; for example, one may solve the Least Squares (LS)-type optimization problem

    $$ \hat{d} = \arg\min_{\hat{d} \in \mathbb{R}^+} \left( \sum_{i=0}^{\hat{n}} \hat{\alpha}_i \hat{d}^i - \tilde{d} \right)^2 \tag{30} $$

    whose first-order optimality condition requires finding the roots of a polynomial of order $2\hat{n} - 1$. Thus, despite its apparent simplicity, the problem of finding polynomial roots requires numerical methods for polynomial orders greater than 3.
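A compact sketch of the three phases could look as follows; the order selection here uses a held-out validation split, one of the alternatives mentioned in phase #2, and in phase #3 the sketch solves $\sum_i \hat{\alpha}_i \hat{d}^i = \tilde{d}$ directly, which coincides with the minimizer of (30) whenever a positive real root exists:

```python
import numpy as np

def fit_alpha(d_hat, d_meas, n):
    """Phase 1: least-squares estimate (28) of alpha for model order n."""
    Phi = d_hat[:, None] ** np.arange(n + 1)       # Vandermonde matrix (26)
    alpha_hat, *_ = np.linalg.lstsq(Phi, d_meas, rcond=None)
    return alpha_hat

def select_order(d_tr, m_tr, d_val, m_val, n_max=4):
    """Phase 2: pick the order with the smallest validation error."""
    errors = []
    for n in range(n_max + 1):
        a = fit_alpha(d_tr, m_tr, n)
        pred = (d_val[:, None] ** np.arange(n + 1)) @ a
        errors.append(np.mean((pred - m_val) ** 2))
    return int(np.argmin(errors))

def filter_measurement(d_tilde, alpha_hat):
    """Phase 3: invert the calibrated bias polynomial for one raw reading."""
    coeffs = alpha_hat.copy()
    coeffs[0] -= d_tilde                           # roots of sum_i a_i d^i - d_tilde
    roots = np.roots(coeffs[::-1])                 # np.roots wants descending order
    real = roots[np.isreal(roots)].real
    real = real[real > 0]                          # keep feasible distances only
    return real[np.argmin(np.abs(real - d_tilde))] if real.size else d_tilde
```

Here d_hat and d_meas denote, respectively, the estimated distances $\hat{d}_{l,k}$ (flattened) and the raw readings $\tilde{d}$; the names and the validation-split choice are illustrative, not prescribed by the paper.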

4. Numerical Results

In this section we empirically verify the performance of the proposed calibration procedure, first with simulations using Matlab® and then with laboratory experiments using real sensors and landmarks in an environment endowed with a localization infrastructure that can return information accurate enough to be considered ground truth.
In general terms, we thus consider $\tilde{d}_j$, $j = 1, \dots, J$ raw measurements from a noncalibrated sensor. To each raw distance measurement $\tilde{d}_j$ there correspond also a true distance $d_j$ and a filtered distance $\hat{d}_j$, i.e., the corresponding filtered version of the raw data.
As for the statistical performance index, the goal is to assess if and how much the calibration algorithm actually leads to improved estimates of the distances, i.e., whether $\hat{d}_j$ is statistically closer to $d_j$ than $\tilde{d}_j$, and if so by how much. To do so we use the MSE, i.e.,

$$ \operatorname{MSE}(a, b) := \frac{1}{J} \sum_{j=1}^{J} \bigl( a_j - b_j \bigr)^2. $$
More precisely, we will compute the ratio between the MSE computed with the raw data (distances measured by the sensor) and the MSE computed with the estimated data (distances estimated by the algorithm), i.e., use

$$ \text{MSE ratio} := \frac{\operatorname{MSE}(d, \tilde{d})}{\operatorname{MSE}(d, \hat{d})}. \tag{31} $$
Thus, for the estimation to be an improvement, this ratio has to be greater than 1.
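In code, the index (31) amounts to a couple of lines (a sketch; d_true, d_raw, and d_filt denote $d$, $\tilde{d}$, and $\hat{d}$ over a test set):

```python
import numpy as np

def mse(a, b):
    # Empirical MSE between two equally long sequences.
    return np.mean((np.asarray(a) - np.asarray(b)) ** 2)

def mse_ratio(d_true, d_raw, d_filt):
    # Ratio (31): values greater than 1 mean that the calibrated
    # estimates improve upon the raw sensor readings.
    return mse(d_true, d_raw) / mse(d_true, d_filt)
```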

4.1. Analyzing the Statistical Properties of the Landmark Position Estimator through Simulation Results

Unless otherwise stated, in all our simulation plots each point is the average of 1000 simulations with five landmarks and 20 sensor positions.
We start by statistically characterizing the landmark position estimator described in Section 3.1, simulating a measurement model of the type (3) characterized by $n = 2$, $\alpha = [0.0525, 0.8838, 0.0584]$, and $\beta = 0.05\,\alpha$, values that seem representative of typical distance measurement systems mounted in modern autonomous vacuum systems.
We then investigate the MSE of the landmark position estimation procedure by analyzing how its bias and variance depend on three specific quantities:
  • the standard deviation $\sigma_\theta$ associated to the uncertainty of the sensor-to-landmark angle measurements in (6),
  • the standard deviation $\sigma_s$ associated to the uncertainty in the sensor position evolution in (8), and
  • the total number of landmarks $L$ present in the scene.
The results are summarized in Figure 7, plotting the dependencies on $\sigma_s$ for a set of given $\sigma_\theta$ and $L$, and in Figure 8, plotting the dependencies on $L$ for given $\sigma_s$ and $\sigma_\theta$.
In words, the results shown in Figure 7 and Figure 8 confirm the obvious intuition that the smaller the noises, the better the estimator. However, we also note that, from a numerical standpoint, guaranteeing $\sigma_s < 10$ cm seems important, while guaranteeing $\sigma_s < 1$ cm is not a necessity. This result is of practical importance, because the assumption that the sensor is placed on a perfectly straight line will always be violated; it seems, however, that at least for standard "domestic" cases like autonomous vacuum cleaners, violating this assumption will not disrupt the final results. We also note that Figure 8 suggests setting $L$ to around 5, i.e., an environment that is sufficiently rich while not being cluttered.

4.2. Analyzing the Statistical Properties of the Sensor Calibration Procedure through Simulation Results

We then pass to the second part of the estimation procedure, i.e., the calibration of the model parameters presented in Section 3.2. Recall that the calibration algorithm is based on the estimated landmark positions, i.e., one needs to estimate the landmark positions first and then proceed to the calibration step. We define as the main performance index the MSE ratio (31), i.e., a measure of how much worse the raw data measured by the sensor are w.r.t. the distances estimated by the data filtering algorithm. We analyze how the MSE ratio (31) depends on the standard deviation $\sigma_\theta$ of the sensor-to-landmark angle measurement error in (6) and on the standard deviation $\sigma_s$ of the sensor position evolution uncertainty in (8).
The results are summarized in Figure 9, and they show that the overall approach seems to be robust: gradually increasing the standard deviations $\sigma_\theta$ and $\sigma_s$ does not lead to abrupt decays of the overall statistical performance. Moreover, for values of $\sigma_\theta$ and $\sigma_s$ that are meaningful for autonomous vacuum cleaner situations, we note MSE ratios that may reach 100.
We also remark that the overall strategy seems robust in its model order selection step. More precisely, in all our simulations we selected a model order $n = 2$, which is the value we obtained in our previous work [40] while calibrating the same lidar sensor from field data (a value that is numerically convenient also because $n = 2$ leads to closed-form solutions for (30)). In the simulations considered in this section, the overall estimation approach estimates the model order $\hat{n}$ as the correct one, i.e., 2, when the standard deviations $\sigma_\theta$ and $\sigma_s$ are reasonably low. However, as the noises increase, we noted that the order selection process tends to become more and more conservative and to select the simpler model $\hat{n} = 1$ (see Figure 10).
Finally, we again investigate the effect of using different numbers of landmarks on the whole calibration process. The results, summarized in Figure 11, show again that increasing the number of landmarks from one to three leads to noticeable improvements in the MSE ratio. However, increasing the number of landmarks further does not lead to further improvements while, at the same time, increasing the computational complexity.

4.3. Field Experiments

We consider field experiments in a laboratory provided with a Vicon motion capture system that uses triangulation to compute the position of the objects inside the laboratory. Such a Vicon system is very accurate compared to the sensors we aim at calibrating. For this reason, we assume that the Vicon measurements are for all practical purposes noiseless and can be considered ground truth.
We then apply both the landmark position estimator of Section 3.1 and the consequent parameter calibration procedure of Section 3.2 to calibrate the triangulation lidar shown in Figure 4. We recall that this type of lidar is not really accurate, as its measurements are affected by both a systematic bias and a heteroskedastic variance (see Figure 1) that lead to increasing measurement errors as the measured distance increases.
For practical purposes we placed the lidar sensor on top of a Pioneer 3AT mobile robot, as shown in Figure 4, controlled through a computer using the Robot Operating System (ROS). We also consider five hand-made cylindrical landmarks with a radius of 12 cm, scattered within the field of view of the Vicon system in such a way that all of them are always visible and distinguishable by the sensor from all the positions $(x_k, y_k)$ from which it takes measurements. As a practical indication, because of the intrinsic limits of the considered sensor, each landmark has to be placed no farther than 5 m and no closer than 20 cm from all the sensor positions. We then programmed the robot to move on a straight line path, and oriented it so as not to hit the landmarks while moving. More precisely, we programmed the robot to move and stop 10 times, each time with an incremental step of 30 cm. In order to properly calibrate the sensor we need a "sufficient" calibration dataset; in general, for range sensors, a sufficient dataset should cover all the sensor ranges of interest. During our field experiments we moved the sensor in a straight line for about 3 m to ensure the richness of the recorded dataset.
Figure 3 shows a photo of one of the experiments. We repeated this type of experiment with three different placements, so as to obtain three datasets of Vicon measurements of distances and angles (i.e., ground truth) of the five landmarks for all the sensor positions. In other words, thanks to the Vicon system we were able to compute all the actual distances and angles between the various landmarks and the sensor in its various positions.
We then split each dataset into three parts: the first two to be used as a training set (the first third to estimate the sensor parameters in (3) and the second third to choose the model order $\hat{n}$) and the third part to be used as a test set. The field results presented for numbers of landmarks other than five are obtained from the same recorded five-landmark dataset after removing the recorded data of the extra landmarks: for example, the 3-landmark dataset is the 5-landmark dataset after removing the data associated with the last two landmarks, the 4-landmark dataset is the 5-landmark dataset after removing the data of the fifth landmark, and so on. The obtained results, summarized in Figure 12, show again that the improvements change as more landmarks are involved in the calibration process. In other words, the field results are in good agreement with the simulated ones. A few outliers still exist, however, which might be due to increased noise variance in one of the unconsidered processes, such as the landmark association problem. Moreover, in all the calibrations we performed on the different datasets we obtained a selected model order $\hat{n} = 1$, which indicates, based on our simulations, that there may be a high variance associated to the noise in measuring the angles to the landmarks.
Finally, Figure 13 shows how the proposed calibration procedure can help in real-life situations by reporting the measurement process relative to the third placement considered in Figure 12. Here, the landmarks' borders are plotted as gray circles and the series of actual positions of the sensor as a stripe of blue dots. Moreover, the red crosses plot the raw measurements $\tilde{d}_{l,k}$ obtained by the sensor when observing the various landmarks, while the green dots plot the filtered measurements $\hat{d}_{l,k}$ obtained by applying the proposed calibration and filtering algorithms. One may note how the $\hat{d}_{l,k}$'s capture the actual positions of the various landmarks in a qualitatively much more precise way.

5. Conclusions

The nonlinear and heteroskedastic model of a distance sensor can be calibrated by exploiting just the structure of a fixed environment. In other words, if the environment presents some particular features that may be used as generic landmarks, then one may use the fact that the landmarks do not move to infer the sensor's own movement. This opens the possibility of estimating the landmark positions by minimizing opportune cost functions, and in this way obtaining information useful to learn the characteristics of a distance sensor without the need for external distance measuring devices to be used as providers of ground truth information.
Through field experiments we saw that the overall proposed calibration approach may be quite robust: even if one does not get results that are as good as the ones achievable using external ground truth systems, our algorithm has been able to reduce the norm of the measurement errors between the precalibration raw data and the postcalibration ones by a factor of 10; in comparison, using ground truth calibration as in [40] led to a reduction factor of 17 (thus better, but not by orders of magnitude, and at the cost of having to buy, set up, and use a ground truth collection system).
We thus remark that this factor-of-10 improvement is achieved using just software logic and the assumption that the landmarks are fixed, with no additional hardware and no special conditions. In conclusion, the calibration procedure proposed here is expected to lessen the time needed to prepare the calibration setup, and is expected to be implementable well beyond laboratory setups.
We recall, though, that one standing assumption we exploited is that the sensor measurements lie in a 2D plane that is parallel to the ground. As this requirement may not hold in some practical situations, we consider its removal the most important future research direction spanned by the current work.

Author Contributions

The authors (A.A., D.V., M.M. and S.K.) contributed equally to conceptualization, methodology, validation, and writing—review and editing. Software, formal analysis, and investigation, A.A.; supervision, D.V.; funding acquisition, M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the EIT Raw Materials project FIREMII under contract number 18011 and the European Union’s Horizon 2020 research and innovation programme under grant agreement number 732737 (ILIAD). The APC was funded by FIREMII project.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available at the current time.

Acknowledgments

I would like to acknowledge that part of this research was done during my work at the University of Baghdad.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. International Organization for Standardization. Uncertainty of Measurement-Part 3: Guide to the Expression of Uncertainty in Measurement (GUM: 1995); ISO: Geneva, Switzerland, 2008.
  2. Alhashimi, A. Statistical Sensor Calibration Algorithms. Ph.D. Thesis, Luleå University of Technology, Luleå, Sweden, 2018.
  3. Schwarz, B. LIDAR: Mapping the world in 3D. Nat. Photonics 2010, 4, 429–430.
  4. Dassot, M.; Constant, T.; Fournier, M. The use of terrestrial LiDAR technology in forest science: Application fields, benefits and challenges. Ann. For. Sci. 2011, 68, 959–974.
  5. Akay, A.E.; Oğuz, H.; Karas, I.R.; Aruga, K. Using LiDAR technology in forestry activities. Environ. Monit. Assess. 2009, 151, 117–125.
  6. Burguera, A.; González, Y.; Oliver, G. Sonar sensor models and their application to mobile robot localization. Sensors 2009, 9, 10217–10243.
  7. Noykov, S.; Roumenin, C. Calibration and interface of a polaroid ultrasonic sensor for mobile robots. Sens. Actuators Phys. 2007, 135, 169–178.
  8. Dogruer, C.U. Online identification of odometer parameters of a mobile robot. In Proceedings of the International Joint Conference SOCO'14-CISIS'14-ICEUTE'14, Bilbao, Spain, 25–27 June 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 195–206.
  9. Karam, S.; Vosselman, G.; Peter, M.; Hosseinyalamdary, S.; Lehtola, V. Design, Calibration, and Evaluation of a Backpack Indoor Mobile Mapping System. Remote Sens. 2019, 11, 905.
  10. Alhashimi, A.; Varagnolo, D.; Gustafsson, T. Calibrating Distance Sensors for Terrestrial Applications Without Groundtruth Information. IEEE Sens. J. 2017, 17, 3698–3709.
  11. Alhashimi, A.; Del Favero, S.; Varagnolo, D.; Gustafsson, T.; Pillonetto, G. Bayesian strategies for calibrating heteroskedastic static sensors with unknown model structures. In Proceedings of the 2018 European Control Conference (ECC), Limassol, Cyprus, 12–15 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 2447–2453.
  12. Muhammad, N.; Lacroix, S. Calibration of a rotating multi-beam lidar. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 5648–5653.
  13. Levinson, J.; Thrun, S. Unsupervised calibration for multi-beam lasers. In Experimental Robotics; Springer: Berlin/Heidelberg, Germany, 2014; pp. 179–193.
  14. Sheehan, M.; Harrison, A.; Newman, P. Automatic self-calibration of a full field-of-view 3D n-laser scanner. In Experimental Robotics; Springer: Berlin/Heidelberg, Germany, 2014; pp. 165–178.
  15. Nouira, H.; Deschaud, J.E.; Goulette, F. Point cloud refinement with a target-free intrinsic calibration of a mobile multi-beam LiDAR system. In Proceedings of the ISPRS Congress 2016, International Society for Photogrammetry and Remote Sensing, Prague, Czech Republic, 12–19 July 2016.
  16. Kok, M.; Schön, T.B. Magnetometer calibration using inertial sensors. IEEE Sens. J. 2016, 16, 5679–5689.
  17. Rehder, J.; Siegwart, R. Camera/IMU calibration revisited. IEEE Sens. J. 2017, 17, 3257–3268.
  18. Zhou, L. A new minimal solution for the extrinsic calibration of a 2D LIDAR and a camera using three plane-line correspondences. IEEE Sens. J. 2013, 14, 442–454.
  19. Martinelli, A.; Tomatis, N.; Siegwart, R. Simultaneous localization and odometry self calibration for mobile robot. Auton. Robot. 2007, 22, 75–85.
  20. Filin, S. Recovery of systematic biases in laser altimetry data using natural surfaces. Photogramm. Eng. Remote Sens. 2003, 69, 1235–1242.
  21. Skaloud, J.; Lichti, D. Rigorous approach to bore-sight self-calibration in airborne laser scanning. ISPRS J. Photogramm. Remote Sens. 2006, 61, 47–59.
  22. Glennie, C.; Lichti, D.D. Static calibration and analysis of the Velodyne HDL-64E S2 for high accuracy mobile scanning. Remote Sens. 2010, 2, 1610–1624.
  23. Sheehan, M.; Harrison, A.; Newman, P. Self-calibration for a 3D laser. Int. J. Robot. Res. 2012, 31, 675–687.
  24. Hartley, R.I.; Sturm, P. Triangulation. Comput. Vis. Image Underst. 1997, 68, 146–157.
  25. Manolakis, D.E. Efficient solution and performance analysis of 3-D position estimation by trilateration. IEEE Trans. Aerosp. Electron. Syst. 1996, 32, 1239–1248.
  26. Alwan, N.A.S.; Mahmood, A.S. On Gradient Descent Localization in 3-D Wireless Sensor Networks. J. Eng. 2015, 21, 85–97.
  27. Berle, F. Mixed triangulation/trilateration technique for emitter location. In IEE Proceedings F (Communications, Radar and Signal Processing); IET: London, UK, 1986; Volume 133, pp. 638–641.
  28. Thomas, N.J.; Cruickshank, D.G.M.; Laurenson, D.I. Performance of a TDOA-AOA hybrid mobile location system. In Proceedings of the Second International Conference on 3G Mobile Communication Technologies, London, UK, 26–28 March 2001; pp. 216–220.
  29. Leonard, J.J.; Durrant-Whyte, H.F. Mobile robot localization by tracking geometric beacons. IEEE Trans. Robot. Autom. 1991, 7, 376–382.
  30. Betke, M.; Gurvits, L. Mobile robot localization using landmarks. IEEE Trans. Robot. Autom. 1997, 13, 251–263.
  31. Esteves, J.S.; Carvalho, A.; Couto, C. Generalized geometric triangulation algorithm for mobile robot absolute self-localization. In Proceedings of the 2003 IEEE International Symposium on Industrial Electronics (ISIE'03), Rio de Janeiro, Brazil, 9–11 June 2003; IEEE: Piscataway, NJ, USA, 2003; Volume 1, pp. 346–351.
  32. Thomas, F.; Ros, L. Revisiting trilateration for robot localization. IEEE Trans. Robot. 2005, 21, 93–101.
  33. Yang, Z.; Liu, Y. Quality of trilateration: Confidence-based iterative localization. IEEE Trans. Parallel Distrib. Syst. 2010, 21, 631–640.
  34. del Peral-Rosado, J.A.; Raulefs, R.; López-Salcedo, J.A.; Seco-Granados, G. Survey of cellular mobile radio localization methods: From 1G to 5G. IEEE Commun. Surv. Tutor. 2017, 20, 1124–1148.
  35. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. In Sensor Fusion IV: Control Paradigms and Data Structures; Int. Soc. Opt. Photonics 1992, 1611, 586–606.
  36. Biber, P.; Straßer, W. The normal distributions transform: A new approach to laser scan matching. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), Las Vegas, NV, USA, 27–31 October 2003; IEEE: Piscataway, NJ, USA, 2003; Volume 3, pp. 2743–2748.
  37. Magnusson, M.; Lilienthal, A.; Duckett, T. Scan registration for autonomous mining vehicles using 3D-NDT. J. Field Robot. 2007, 24, 803–827.
  38. Campos, D.; Santos, J.; Gonçalves, J.; Costa, P. Modeling and simulation of a hacked Neato XV-11 laser scanner. In Robot 2015: Second Iberian Robotics Conference; Springer: Berlin/Heidelberg, Germany, 2016; pp. 425–436.
  39. Lima, J.; Gonçalves, J.; Costa, P.J. Modeling of a low cost laser scanner sensor. In CONTROLO'2014—Proceedings of the 11th Portuguese Conference on Automatic Control; Springer: Berlin/Heidelberg, Germany, 2015; pp. 697–705.
  40. Alhashimi, A.; Varagnolo, D.; Gustafsson, T. Statistical modeling and calibration of triangulation Lidars. In Proceedings of the Informatics in Control, Automation and Robotics: 13th International Conference, ICINCO 2016, Lisbon, Portugal, 29–31 July 2016; SCITEPRESS: Setubal, Portugal, 2016; Volume 1, pp. 308–317.
  41. Alhashimi, A.; Pierobon, G.; Varagnolo, D.; Gustafsson, T. Modeling and Calibrating Triangulation Lidars for Indoor Applications. In Informatics in Control, Automation and Robotics: 13th International Conference, ICINCO 2016; Springer International Publishing: Cham, Switzerland, 2018; pp. 342–366.
  42. Konolige, K.; Augenbraun, J.; Donaldson, N.; Fiebig, C.; Shah, P. A low-cost laser distance sensor. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 3002–3008.
  43. Aitken, A.C. On least squares and linear combination of observations. Proc. R. Soc. Edinb. 1936, 55, 42–48.
Figure 1. An example of a series of raw measurements obtained using a noncalibrated distance sensor (in this case the triangulation lidar described in Section 4.3).
Figure 2. Raw measurements (right plot) from a noncalibrated triangulation lidar surrounded by trees in a forest (shown in the left picture).
Figure 3. Setup of a typical calibration experiment, comprising five landmarks (white cylinders) and the robot–sensor system of Figure 4 moving among the landmarks.
Figure 4. Photo of a mobile robot with a triangulation lidar mounted on its top.
Figure 5. Illustration of the intuition behind the suggested landmark position estimation algorithm. For each sensor position k, one may identify from the raw distance measurements the angle between the sensor and the landmark, and thus the direction of the line from the sensor towards the center of the landmark as estimated from position k (the dotted lines). Since the estimate of the landmark center obtained at each position k is uncertain, these dotted lines do not aim perfectly at the actual center of the landmark, i.e., the blue dot labeled $l_1$. An intuitively meaningful strategy for estimating the unknown position of $l_1$ is then to find the point that minimizes the sum of its distances to the dotted lines from each position k. The short solid red lines represent these point-to-line distances.
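To make this line-fitting intuition concrete, the following is a minimal numpy sketch, not the paper's actual implementation: it assumes the bearings are absolute angles in the world frame and minimizes the sum of *squared* point-to-line distances (which admits a closed-form solution via the normal equations); the function name `estimate_landmark` and its interface are hypothetical.

```python
import numpy as np

def estimate_landmark(sensor_positions, bearings):
    # Hypothetical sketch: each bearing defines a line through a sensor
    # position s_k with unit direction d_k; the landmark estimate is the
    # point p minimizing sum_k ||(I - d_k d_k^T)(p - s_k)||^2, i.e., the
    # least-squares intersection of the (generally non-concurrent) lines.
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for s, ang in zip(np.asarray(sensor_positions, float), bearings):
        d = np.array([np.cos(ang), np.sin(ang)])  # unit direction of line k
        P = np.eye(2) - np.outer(d, d)            # projector onto the normal of line k
        A += P                                    # normal equations: (sum_k P_k) p = sum_k P_k s_k
        b += P @ s
    return np.linalg.solve(A, b)

# Two sensor positions observing a landmark at (2, 1): the recovered point
# is the intersection of the two bearing lines.
p_hat = estimate_landmark([[0.0, 0.0], [0.0, 2.0]],
                          [np.arctan2(1.0, 2.0), np.arctan2(-1.0, 2.0)])
```

With noisy bearings the lines no longer meet in a single point, and the returned estimate balances the red point-to-line distances shown in the figure.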
Figure 6. A simplified diagram illustrating the geometrical relation between the measured angle $\theta_{l,k}$, the sensor position $s_k$, the distance $d_{l,k}$, and the landmark position in the noiseless case. The dashed line in the plot indicates the robot heading direction, while the dotted line represents the true sensor-to-landmark distance.
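In formulas, and as a hedged reconstruction of the figure's geometry (introducing $\phi_k$ for the robot heading at position $k$, a symbol not used in the caption, and assuming $\theta_{l,k}$ is measured relative to that heading), the noiseless relation reads

$$
l \;=\; s_k + d_{l,k}
\begin{pmatrix}
\cos\bigl(\phi_k + \theta_{l,k}\bigr) \\
\sin\bigl(\phi_k + \theta_{l,k}\bigr)
\end{pmatrix},
$$

i.e., the landmark position $l$ is reached by moving from the sensor position $s_k$ a distance $d_{l,k}$ along the direction obtained by rotating the heading by the measured angle $\theta_{l,k}$.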
Figure 7. Bias and variance of the landmark position estimator of Section 3.1 as a function of the uncertainty $\sigma_s$ in the sensor position evolution, for different sensor-to-landmark angle measurement standard deviations $\sigma_\theta$ (in degrees), for the case $L = 1$.
Figure 8. Bias and variance of the landmark position estimator of Section 3.1 as a function of the number of landmarks $L$ for the case $\sigma_\theta = 2$.
Figure 9. Dependency of the MSE ratio (31) on the standard deviation $\sigma_\theta$ of the sensor-to-landmark angle measurement error in (6), and on the standard deviation $\sigma_s$ of the sensor position evolution uncertainty in (8), for the case $L = 2$.
Figure 10. Summary of the dependency of the model order selection step on the standard deviations $\sigma_\theta$ and $\sigma_s$ for the case $L = 2$. As the noise levels increase, the order selection process tends to select simpler models; indeed, all the incorrectly classified model orders were of the kind $\hat{n} = 1$.
Figure 11. Dependency of the MSE ratio on the number of landmarks $L$ as a function of the sensor position standard deviation $\sigma_s$ for the case $\sigma_\theta = 2$. Decreasing $\sigma_s$ is, as expected, always beneficial, while increasing $L$ has a comparatively small effect.
Figure 12. Statistics of the field tests for all possible combinations of datasets recorded in three different placements. The plots show a clear increase of the MSE ratio as the number of landmarks involved in the calibration process grows.
Figure 13. Example of the effects of the proposed calibration procedure on a field experiment. The actual sensor positions are plotted as a series of practically aligned blue dots, the true cylindrical landmarks as gray circles, the raw measurements taken by the sensor as red crosses, and the filtered distances, computed using the proposed strategy, as green circles. Ideally, the measurements should lie on the borders of the landmarks; it is immediately noticeable how the calibrated measurements lie much closer to these borders than the noncalibrated ones.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.