Article

Analytical Models for Pose Estimate Variance of Planar Fiducial Markers for Mobile Robot Localisation

Roman Adámek, Martin Brablc, Patrik Vávra, Barnabás Dobossy, Martin Formánek and Filip Radil
1 Faculty of Mechanical Engineering, Brno University of Technology, Technická 2896/2, 616 69 Brno, Czech Republic
2 Independent Researcher, 744 01 Frenštát pod Radhoštěm, Czech Republic
* Author to whom correspondence should be addressed.
Sensors 2023, 23(12), 5746; https://doi.org/10.3390/s23125746
Submission received: 21 March 2023 / Revised: 29 May 2023 / Accepted: 8 June 2023 / Published: 20 June 2023
(This article belongs to the Collection Sensors and Data Processing in Robotics)

Abstract

Planar fiducial markers are commonly used to estimate the pose of a camera relative to the marker. This information can be combined with other sensor data to provide a global or local position estimate of the system in the environment using a state estimator such as the Kalman filter. To achieve accurate estimates, the observation noise covariance matrix must be properly configured to reflect the characteristics of the sensor output. However, the observation noise of the pose obtained from planar fiducial markers varies across the measurement range, and this fact needs to be taken into account during sensor fusion to provide a reliable estimate. In this work, we present experimental measurements of fiducial markers in real and simulated scenarios for 2D pose estimation. Based on these measurements, we propose analytical functions that approximate the variances of the pose estimates. We demonstrate the effectiveness of our approach in a 2D robot localisation experiment, where we present a method for estimating covariance model parameters based on user measurements and a technique for fusing pose estimates from multiple markers.

1. Introduction

The field of mobile robotics is experiencing a period of dynamic development, as can be seen from the ongoing research and the growing number of commercial applications [1,2]. One of the cornerstones of this field is the problem of robot localisation, i.e., determination of the robot’s pose (position and orientation) with respect to its surrounding environment. Based on the robot’s operating environment, we distinguish between indoor and outdoor robot localisation.
The problem of long-term outdoor localisation was significantly simplified by the ability to use the global navigation satellite system (GNSS). However, long-term indoor localisation remains a challenging task [3]. A common approach to indoor robot localisation is based on landmarks. Landmarks are distinct features of the environment that can be of natural or artificial origin. Natural landmarks, such as corners, columns and doors, are built-in features of the indoor environment, which is their main advantage and many localisation and mapping approaches rely on them [4]. However, they come at a cost of being less robust; according to Jang et al. [4] they are not unique, susceptible to viewing angle, background clutter and also environment variations, such as lighting conditions. On the other hand, artificial landmarks require modifications in the interior environment; they are designed to be easily detected and extracted from the environment [5].
Artificial landmarks can be either active or passive. Active landmarks emit information into the environment and can be represented by devices already present in the environment, such as WLAN [6] or specially designed devices (ultra-wideband radio [7] or optical beacons [8]). These devices are rather complex and expensive, and require a power source. In contrast, passive markers (also known as fiducial markers) are an inexpensive and simple way to obtain a relative positional reference from the sensor, usually the camera, to the marker. This might be useful in situations where we need a relative localisation between the robot and, for example, a charging station. If we already know the global position of the marker in the environment, we can obtain the global position of our robot.

1.1. Passive Fiducial Markers

Passive fiducial markers come in different spatial dimensions, shapes and sizes. For mobile robotics, planar quadrilateral markers (e.g., ArUco [9], AprilTag, ARTag) are among the most commonly used variants [10,11]. These markers are monochromatic; the information is encoded in n-by-n internal pixels surrounded by a frame that helps distinguish the marker from its surroundings and is used to estimate the position of the marker relative to the camera. The data encoded in the internal pixels usually contain an identifier and some additional information used to check redundancy and recover lost information [5]. Practical applications of these markers are:
  • Kinematic calibration for industrial robots;
  • Visual servoing of industrial robots;
  • Robot navigation tasks;
  • Localisation and mapping tasks;
  • Human–machine interaction.
Extracting the information encoded in a marker requires several processing steps. In the first step, the processing algorithm focusses on finding the marker’s frame (also known as an anchor) in the image, which is usually a geometric shape such as a point, bar, ellipse, triangle or square. For quadrilateral systems, the anchors consist of four lines. The next step is to extract the polygons from the inner pixels, which are then merged into a larger code pattern. The extracted pattern is then validated against a library of possible code schemes. In the last step, the six-degrees-of-freedom (DoF) pose of the marker in the camera reference frame is estimated by computing the homographic transform [12,13]. Like other solutions for mobile robot localisation, planar fiducial markers have their own shortcomings:
  • Pose ambiguity [14,15]: during frontal observation of the marker the pose cannot be determined uniquely, i.e., the valid and the flipped invalid orientation are indistinguishable [14];
  • Degradation of accuracy with distance [16];
  • Occlusion [17,18]: this arises when a real object appears between the camera and a fiducial marker, making part of the marker invisible to the camera, which causes problems with marker detection;
  • Marker size: according to Szentandrási et al. [17] (p. 1) “the markers must be large enough to provide sufficient amount of information and at the same time they must be small enough to fit into the camera’s field of view”;
  • Optical system limitations [5]: physical properties of the image acquisition system, e.g., resolution.
Pose ambiguity and the associated degradation of orientation accuracy during frontal observation arise as a result of the estimation principle used to determine the pose of the marker. These problems are widely known, but the methods currently available do not provide a real solution; they merely mitigate their effects [14]. The presence of these issues is particularly relevant in mobile robotics, as incorrect pose information from a marker can corrupt localisation and present a risk to the integrity of the robot itself [14].

1.2. Extended Kalman Filter

Much like the above-mentioned issues related to marker pose estimation, measurements from other sensors also suffer from inaccuracies and contain a non-negligible amount of uncertainty. To deal with measurement inaccuracies and incorporate the related uncertainty, state estimation algorithms are used. One of the most commonly used is the Kalman filter (KF) [19] and its extension for nonlinear systems, the Extended Kalman Filter (EKF), first described in [20]. The advantage of the EKF is that it allows measurements from different sensors (inertial measurement unit (IMU), GNSS, odometry) to be combined and mutually corrected, thus increasing the accuracy of the estimate of the monitored quantities (in the case of a mobile robot, usually the robot's pose, velocities and accelerations). For the EKF to work correctly, we need a model of the observed system describing the state transition $g(u_k, x_{k-1})$, the measurement prediction $h(x_k)$, and the related uncertainty of the process and observation noise given by the covariance matrices $Q_k$ and $R_k$, respectively. Here, $u_k$ denotes the control input, while $x_k$ and $x_{k-1}$ denote the state vector at times $k$ and $k-1$, respectively.
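To make the roles of $g$, $h$, $Q_k$ and $R_k$ concrete, the following minimal Python sketch shows a generic EKF predict/update step. The function and variable names are illustrative assumptions, not the specific filter implementation used later in this work.

```python
import numpy as np

def ekf_step(x, P, u, z, g, h, G, H, Q, R):
    """One generic EKF iteration.

    x, P : previous state estimate and its covariance
    u, z : control input and measurement
    g, h : state transition g(u, x) and measurement prediction h(x)
    G, H : functions returning the Jacobians of g and h
    Q, R : process and observation noise covariance matrices
    """
    # Prediction: propagate the state and inflate the covariance by Q
    x_pred = g(u, x)
    G_k = G(u, x)
    P_pred = G_k @ P @ G_k.T + Q

    # Correction: the Kalman gain weighs the measurement against the
    # prediction and depends directly on the observation noise R
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R
    K = P_pred @ H_k.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```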
In practical applications, the determination of the covariance matrices $Q_k$ and $R_k$ is not straightforward, and they are often set by trial and error [21]. In this work, we present a method that addresses the setting of the variance of the measurement input from planar fiducial markers, and we show how to use it in a practical scenario: the localisation of mobile robots operating in a 2D environment with known global positions of fiducial markers, as shown in Figure 1.

2. Related Works

Planar fiducial markers were first mentioned in relation to pose estimation at the end of the last millennium. In their article, Tung et al. [22] used simple fiducial marks to locate the origin of parts' local coordinate frames in an autonomous assembly task. In 1998, Rekimoto [23] presented an approach for fiducial marker processing that combined pattern recognition and pose estimation; other applications soon followed, e.g., optical tracking by Stricker et al. [24] and augmented reality [25]. The article by Kato et al. [25] also introduced the first fiducial marker system, called ARToolKit. Soon, other marker systems started to emerge (e.g., ARTag [26], AprilTag [27], STag [28] and ArUco [9,29]) designed to mitigate some of the shortcomings of ARToolKit.
Of the aforementioned shortcomings, pose ambiguity during frontal observation (i.e., small viewing angle) is one of the key issues [14]; it is caused by the mathematical apparatus used to process the markers. Perspective projection is used to estimate the shape distortion of a marker in an image. As a result, the larger the angle of view, the more accurate the pose estimation. In contrast, from near-frontal observation, the pose estimation produces high uncertainty. Tanaka et al. [14] (p. 2) describe this phenomenon as “close to an orthogonal projection, sometimes we cannot distinguish the valid orientation and the flipped invalid orientation because they seem almost the same in the image”. More details on this topic are available in [14].
Several articles have addressed the question of pose ambiguity. In [5], Popescu et al. tested the accuracy of pose estimation from ArUco markers and noted that the uncertainty is largest for frontal observation of the marker. These findings are also supported by López-Cerón et al. [30], who performed experiments examining the effects of different observation distances and angles on the AprilTag marker. Similar results were presented in [31,32] but, in these cases, the issue was observed on other types of markers, the ARToolKit marker and the Metaio marker, respectively, and the experiments were carried out with a finer angular and distance resolution. In [32], it was shown that higher standard deviation values occur not only for frontal observations but also for observations under a large angle of view. In [16], the authors showed that not only AprilTag and ArUco, but also other marker systems such as ARTag and STag, are to a greater or lesser extent susceptible to these shortcomings.
To address these issues, different approaches were proposed. Some of the articles proposed new marker systems; for example, Tanaka et al. [33] proposed markers based on Moire patterns. Other authors proposed larger markers in combination with alternative patterns such as random dots proposed by Uchiyama et al. [34] or the uniform marker field proposed by Szentandrási et al. [17]. Another group of articles recommends the use of 3D markers [5,13,35]; however, the usage of these markers might be inconvenient, e.g., larger spatial requirements or a geometric model has to be very accurate [33].
Finally, there are articles that recommend the use of fusion algorithms (e.g., the extended Kalman filter). These algorithms either combine pose estimates from fiducial markers with readings from a complementary sensor, or combine pose estimates derived from two or more markers. An interesting implementation of the latter solution is presented in [30], where the authors propose an algorithm that determines the final position of the robot based on independent estimates of angular and coordinate values from multiple markers. In articles using the EKF, data from a motion sensor are typically included in the prediction step, while the pose estimate from the markers is used in the correction step. However, the type of sensors used in the EKF prediction model and the type of markers differ from article to article; for example, Zheng et al. [36] used encoders, an IMU is used in [10,11,18], while in [37] Chavez et al. used the combination of an IMU and a Doppler velocity log (DVL). The correct setting of the observation noise covariance matrix is also essential for the proper functioning of the Kalman filter. Chavez et al. and Zheng et al. used a diagonal covariance matrix: Chavez et al. used the standard deviations of pose estimates from several markers as matrix entries, while Zheng et al. adjusted the entries proportionally to the estimated distance and viewing angle of the markers. In [10,11,18], there is little information on how the authors addressed this issue.
This article builds on the research of Patrik Vávra (a co-author of this article), presented in his master's thesis [29]. In this paper, we propose analytical functions that model the measurement noise of the pose estimates obtained from planar markers. The main contribution is that these models reflect the true nature of the measurement noise, and their outputs can be used to set the values in the observation noise covariance matrix $R_k$ of the EKF, which therefore does not need to be tuned by trial and error. The adaptive adjustment of $R_k$ means that the EKF can rely more on other sensors when the pose estimate obtained from the markers is less reliable, and vice versa.

3. Problem Definition

As briefly described in Section 1, the methods presented in this work are related to the problem of global localisation of an autonomous mobile robot in a 2D environment. The mobile robots that were the focus of this research are shown in Figure 1. These robots are four-wheel skid-steer robots equipped with a colour camera set to a resolution of 1280 × 720 px, an IMU and a wheel encoder on each wheel. The robots move in a known 2D environment with scattered ArUco markers. The position of these markers relative to the global coordinate system is known and can therefore be used to provide a global position estimate.
The schematic view of the environment is shown in Figure 2. From each marker observation, an estimate of the position and orientation of the marker in 3D space relative to the camera is obtained. However, for localisation in a 2D plane, only the position in the $xz$ plane and the rotation around the $y$ axis, denoted as $\beta$, are important. In Figure 2, the known position of the marker, expressed by the position $m_x^g$, $m_z^g$ and rotation $m_\beta^g$, is shown in the global coordinate system, as well as the measured position of the marker, denoted as $m_x^c$, $m_z^c$ and $m_\beta^c$, in the camera coordinate system. The goal of this application is to determine the position of the camera $C^g$ in the global coordinate system together with its respective covariance matrix, which can be used as an input to the EKF.

4. Experimental Measurements of Pose Estimation Noise

This section describes the conducted experimental measurements in the real world and simulation. The results of these measurements serve to validate the character of measurement noise of the camera pose estimates obtained from the markers in different setups and compare them with results from previous research. Furthermore, the results will serve as a basis for the derivation of variance models presented in Section 5.

4.1. Experiment Setup

The goal of the following experiments is to present how the shortcomings of the planar markers, discussed in Section 1, affect the pose estimate obtained from the marker in relation to the mutual position of the marker and the camera. To achieve this, the camera was positioned in a grid relative to the marker and, at each position, the marker was rotated around its vertical axis $y_m$.
During the measurements, an ArUco marker with ID 0, a resolution of 5 × 5 and an edge size of 112 mm was mounted on a stepper motor, allowing precise and repeatable rotation of the marker from −90° to +90° with a step of 0.45°. A Logitech C922 camera with a resolution set to 1280 × 720 px was used to take 100 photos of the marker at each step of its rotation. In total, 40,100 photos were taken during a single experiment. Symmetrical and diffuse lighting was used to provide even illumination of the marker and to minimise the effect of uneven light and shadows on the results.
The same experiment was repeated 15 times for different positions of the camera relative to the marker, as shown in Figure 3. In seven experiments, the camera was in line with the marker, the distance between them increasing from 400 mm to 1000 mm. In the rest of the experiments, the camera was placed off-centre from the marker in the direction of the x g axis.
All measurements were processed in Python 3.8.7 using the ArUco library included in OpenCV [38] in version 4.5.1. The camera was calibrated using a chequerboard pattern and functions available in OpenCV. Functions for marker detection and pose estimation were used with default settings; only the corner refinement method was set to the sub-pixel option to improve the pose estimation.
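As an illustration, a minimal sketch of this processing step using the legacy OpenCV 4.5.x ArUco Python API is shown below. The calibration matrix, distortion coefficients, dictionary choice and image file name are placeholder assumptions; the marker edge size and the sub-pixel corner refinement follow the setup described above.

```python
import cv2
import numpy as np

# Placeholder calibration; in practice obtained from chequerboard calibration
camera_matrix = np.array([[900.0, 0.0, 640.0],
                          [0.0, 900.0, 360.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_5X5_50)
params = cv2.aruco.DetectorParameters_create()
params.cornerRefinementMethod = cv2.aruco.CORNER_REFINE_SUBPIX  # sub-pixel refinement

image = cv2.imread("marker_photo.png")
corners, ids, _ = cv2.aruco.detectMarkers(image, dictionary, parameters=params)
if ids is not None:
    # Marker edge length of 0.112 m, as in the described experiment
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, 0.112, camera_matrix, dist_coeffs)
    # The rotation vector of the first marker can be converted to a rotation
    # matrix (and further to Euler angles) via the Rodrigues formula
    R, _ = cv2.Rodrigues(rvecs[0])
```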
In the post-processing phase, the positions of the marker corners found in each photo are obtained. The positions of the four corners $(u_i, v_i)$, with $i$ being the index of the corner, are expressed in the coordinate system of the photo with axes $u$ and $v$. Based on these positions, the area of the marker in the photo can be calculated using Equation (1), which is Gauss's area formula. The detected corners are also used to obtain the estimated marker pose in the form of a rotation vector and a translation vector. The rotation vector is subsequently converted to Euler angles, which will be used to represent the rotation of the marker in the rest of this work.
$$S = \frac{1}{2}\left| u_1 v_2 + u_2 v_3 + u_3 v_4 + u_4 v_1 - u_2 v_1 - u_3 v_2 - u_4 v_3 - u_1 v_4 \right| \qquad (1)$$
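As a small sketch, Equation (1) can be applied directly to the four detected corner coordinates; the function name and the argument layout (a list of (u, v) pairs in detection order) are assumptions for illustration.

```python
def marker_area(corners):
    """Image area of the marker from its four corners (Equation (1))."""
    u = [c[0] for c in corners]
    v = [c[1] for c in corners]
    return 0.5 * abs(u[0] * v[1] + u[1] * v[2] + u[2] * v[3] + u[3] * v[0]
                     - u[1] * v[0] - u[2] * v[1] - u[3] * v[2] - u[0] * v[3])
```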

4.2. Measurement Results

As mentioned in Section 3, for the case of robot localisation in 2D space, the required results from the marker pose estimation are the position in the $xz$ plane and the rotation around the $y$ axis. Therefore, only these results will be further presented and analysed.
The following figures present the mean values and variances from an experiment in which the marker was placed 500 mm from the camera in $z^c$ and 0 mm in $x^c$. All values are displayed in dependence on the angle $\beta^c$ set by the stepper motor, which we consider the ground truth. Figure 4 presents the mean values of the position and rotation of the marker relative to the camera. It can be seen that the marker is detectable approximately in the range of ±80°, but deviations of the estimated position can be observed at the edges of the measurement range.
Variances of the position and rotation estimates are shown in Figure 5. As in the previous figure, the end of the measurement range is characterised by a sharp increase in the variance. Another very important finding is that the variance is not constant throughout the measurement range. The variance of the rotation $m_\beta^c$ is characterised by an increase in the proximity of 0°, where the marker surface is parallel to the camera. The same effect can be observed in the variance of the position $m_x^c$, although it is not as pronounced. This increase is caused by the pose ambiguity issue discussed in Section 1 and Section 2. The variance of $m_z^c$ is shown on a logarithmic scale due to the large difference between the magnitude of the variance near the ends of the measurement range and in the rest of the data.
To visualise the change in the variances in dependence not only on the rotation of the marker but also on the distance from the marker, measurements from distances in $z^c$ ranging from 400 mm to 1000 mm, with $x^c$ equal to zero, are presented in Figure 6. To show the details of the variances, the sharp changes at the edge of the measurement range are trimmed off in this figure. The trend observed in Figure 5 is also visible in the measurements taken farther away from the marker, only with an increased magnitude. It can also be observed that the peak magnitude of the variance of $m_\beta^c$ increases at a higher rate than the other two values.
The results of the experiments in which the position of the marker in the camera coordinate system was non-zero in the direction of $x^c$ showed trends in the variance similar to the previous results. The most significant difference was the shift of the variance peak of $m_\beta^c$, as can be observed in Figure 7. This measurement was taken with the marker moved 300 mm in the direction of $x^c$ and 600 mm in $z^c$. It can be seen that the peak of the variance is at approximately 30°, which corresponds to the situation where the normal vector of the marker points directly at the camera. This can be verified by calculating the angle between the normal vector of the marker and the axis $z^c$, which is equal to $\mathrm{atan}(m_x^c / m_z^c)$, resulting in 29.5° for the described situation.

4.3. Simulation Results

A large part of the development of modern robotic systems is done in simulations. Therefore, it is important to investigate whether the behaviour observed in real experiments is also present in the simulation environment. Gazebo robotic simulator [39] with the model of our robot equipped with the camera was used for the following experiments.
The camera sensor was set to the same resolution and field of view as the camera in the previous experiments. Radial and tangential distortion of the camera was neglected because it would be compensated for in a calibration process anyway. In addition, Gaussian noise with a standard deviation of 0.01 was applied to the camera image so that it would correspond to the noise present in the real camera. Due to the absence of shadows in the simulation, the marker was rotated only from 0° to 90° with an increment of one degree during each measurement.
The results of the experiments are presented in Figure 8b. In this set of experiments, the camera was moved from 400 mm to 1000 mm in the direction of $z^c$ and positioned at 0 mm in $x^c$. To better visualise the trends in the variance, the $z$ axes of the graphs are on a logarithmic scale. We can observe the same trends in the results as in the real experiments, differing only in the magnitude of the variance. This difference is likely due to imperfections present in the real camera that are not reflected in the camera model in the simulation.

4.4. Conclusion of Measurements

The results of both the simulation and the real experiments support the previous findings presented in [5,30,31,32]. The effects of pose ambiguity and of precision degradation with increasing distance and at the ends of the measuring range are clearly visible in the data. Furthermore, the effect of the camera not being in line with the marker is described. The important finding is that the variance depends not only on the distance of the camera from the marker but also on the angle of the marker relative to the camera. Moreover, this effect is most visible in the variance $\sigma^2_{m_\beta^c}$, where the variance between 40° and 70° is an order of magnitude lower than near the peak around 0°.

5. Analytical Models of Pose Variance

Based on the presented results, it was validated that the variance of the pose estimation obtained from the marker is not constant, but depends on the orientation and position of the marker relative to the camera. Furthermore, the differences in variance in a single experiment were significant enough that they cannot be neglected for proper state estimation using EKF.
The goal of this section is to derive models that capture the character of the variance in the individual axes relevant to the presented application. The models are analytical with a minimum number of parameters, so they are easy to estimate while still representing the character of the variance well. The models do not need to fit the data precisely for the EKF to behave correctly; it is more important to correctly capture the course of the variance and its order of magnitude.

5.1. Marker Area Normalisation

The variance of the camera pose relative to the marker depends on the yaw angle of the marker and its position, as shown in the previous figures. However, if the variance model depended on the distance of the camera from the marker in the direction of $z^c$, it would be usable only for markers of a specific size. The models can be made independent of the marker size if they depend on the area of the marker in the image, since a larger marker farther from the camera looks the same as a smaller marker closer to the camera, provided that both are in focus. However, this also presents a problem, since the area of the marker depends not only on its distance from the camera but also on its yaw angle relative to the camera. This can be solved by the area normalisation process presented in the next paragraph.
The goal of the area normalisation of the marker is to convert the known image area $S_\beta$ of a marker that is rotated by a certain yaw angle $\beta$ to the equivalent area $S_n$ of a marker that is in the same position and is not rotated. This situation, in which the markers are captured by the camera, is visualised from the top view in Figure 9, where the rotated marker with edge size $2a$ is shown in orange and the equivalent non-rotated marker in green. These markers are projected onto the projection plane $PP$ of the camera, which is at a focal distance $f$ from the centre of projection $COP$. The resulting image captured by the camera is shown in Figure 10.
The area of the non-rotated marker, representing the normalised area $S_n$, can be calculated as the area of a square with an edge size of $2a$, resulting in Equation (2), where $l$ is the distance from the marker to the $COP$ in the direction of the optical axis. The area of the rotated marker $S_\beta$ can be calculated as the area of an isosceles trapezoid with parallel side lengths $c$ and $b$ and height $a_1 + a_2$. The side lengths can be derived from Figure 9 as $c = \frac{2af}{l + a\sin\beta}$ and $b = \frac{2af}{l - a\sin\beta}$. The height components are obtained in a similar manner as $a_1 = \frac{fa\cos\beta}{l - a\sin\beta}$ and $a_2 = \frac{fa\cos\beta}{l + a\sin\beta}$. The resulting formula for the area of the rotated marker is given by Equation (3).
$$S_n = \frac{4a^2 f^2}{l^2} \qquad (2)$$
$$S_\beta = \frac{4a^2 f^2 l^2 \cos\beta}{\left(l^2 - a^2\sin^2\beta\right)^2} \qquad (3)$$
Finally, from the ratio of the area of the non-rotated and rotated markers, we can obtain Formula (4) for the conversion of the measured rotated marker area to the normalised marker area.
$$\frac{S_n}{S_\beta} = \frac{\left(l^2 - a^2\sin^2\beta\right)^2}{l^4\cos\beta} \;\;\Rightarrow\;\; S_n = \frac{\left(l^2 - a^2\sin^2\beta\right)^2}{l^4\cos\beta}\, S_\beta \qquad (4)$$
In the case that the marker is not positioned on the optical axis, Equation (4) is modified to Equation (5), where $x$ represents the offset of the marker from the optical axis.
$$S_n = \frac{\left(l^2 - a^2\sin^2\beta\right)^2}{l^3\left(l\cos\beta + x\sin\beta\right)}\, S_\beta \qquad (5)$$
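A minimal sketch of the normalisation step, implementing Equation (5) (which reduces to Equation (4) for $x = 0$), might look as follows; the function and argument names are illustrative.

```python
import math

def normalise_area(S_beta, beta, l, a, x=0.0):
    """Convert the measured area of a rotated marker to the equivalent area
    of a non-rotated marker at the same position (Equation (5)).

    S_beta : measured marker area in the image
    beta   : yaw angle of the marker [rad]
    l      : distance from the marker to the COP along the optical axis
    a      : half of the marker edge length (edge size 2a)
    x      : offset of the marker from the optical axis (0 gives Equation (4))
    """
    numerator = (l**2 - a**2 * math.sin(beta)**2)**2
    denominator = l**3 * (l * math.cos(beta) + x * math.sin(beta))
    return numerator / denominator * S_beta
```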
The result of the area normalisation is shown in Figure 11, where the average measured area $\bar{S}$ and the normalised area $\bar{S}_n$ for each individual set of 100 photos are compared side by side. It can be seen that the area in Figure 11b depends only on the distance and not on the angle $\beta^c$.

5.2. Model of $m_\beta^c$ Variance

The analytical model that describes the course of the variance of $m_\beta^c$ should be designed so that it adequately captures the character of the variance while being simple enough, with a minimum number of parameters, to be easily estimated from measured data. The following significant characteristics, which the model should consider, can be deduced from Figure 5c and Figure 6c:
  • The variance increases sharply at the edge of the detectability of the marker.
  • The variance has a Gaussian shape around the point where the normal vector of the marker points towards the camera.
  • The peak of the variance increases with increasing distance from the marker and consequently with a decreasing area of the detected marker.
Based on the mentioned criteria, an analytical function in the form of $\sigma^2_{m_\beta^c} = f(m_\beta^c, S_n, m_x^c, m_z^c)$ can be proposed. The resulting function (6) consists of a power-function part describing the growth of the variance with decreasing marker area, a Gaussian part capturing the peak of the variance, a part capturing the increase in the variance at the edge of detectability, and an offset. The angle $\phi$, expressed in Equation (7), is the angle $m_\beta^c$ offset by the angle that represents the mutual position of the camera and the marker, as described in Section 4.2. The model has five parameters $p_1$ to $p_5$ that must be estimated.
$$\sigma^2_{m_\beta^c} = p_1 S_n^{-p_2}\, e^{-\left(\frac{\phi}{p_3}\right)^2} + \frac{p_4}{90 - |\phi|} + p_5 \qquad (6)$$
$$\phi = m_\beta^c - \mathrm{atan}\!\left(\frac{m_x^c}{m_z^c}\right) \qquad (7)$$
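A sketch of Equations (6) and (7) as Python functions, suitable for later curve fitting, is shown below. Angles are taken in degrees, as in the text, the exact shape of the Gaussian term follows the reading of Equation (6) given above, and the function names are illustrative.

```python
import numpy as np

def phi_angle(beta_c, m_x_c, m_z_c):
    """Offset angle phi from Equation (7), in degrees."""
    return beta_c - np.degrees(np.arctan2(m_x_c, m_z_c))

def var_beta_model(S_n, phi, p1, p2, p3, p4, p5):
    """Variance model of the yaw estimate, Equation (6).

    Power term      : growth with decreasing normalised area S_n.
    Gaussian term   : peak around phi = 0 caused by pose ambiguity.
    1/(90 - |phi|)  : sharp increase at the edge of detectability.
    p5              : constant offset.
    """
    return (p1 * S_n**(-p2) * np.exp(-(phi / p3)**2)
            + p4 / (90.0 - np.abs(phi))
            + p5)
```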
Figure 12 shows the analytical model fitted to the data from the real experiment presented in Section 4.2. The parameters were estimated using the Levenberg–Marquardt algorithm for solving the nonlinear least-squares problem, with the goodness of fit expressed by an RMSE of $0.1000^2$.

5.3. Models of $m_x^c$ and $m_z^c$ Variances

The following significant characteristics can be deduced from Figure 5a,b and Figure 6a,b:
  • Variance sharply increases at the edge of the detectability of the marker.
  • The variance increases with increasing distance from the marker and consequently with decreasing detected marker area.
  • The variance is not significantly dependent on the yaw angle $m_\beta^c$.
The following analytical model, given by Equation (8), is based on the previously mentioned characteristics of the $m_x^c$ and $m_z^c$ variances. Since both variances show similar characteristics, only a single model is used for both of them. The model consists of a part describing the growth of the variance with the decreasing area of the marker captured in the image; the denominator captures the increase in the variance near the end of the measuring range, and the last part is the offset of the model. The angle $\phi$ is given by Equation (7). The model has three parameters $p_1$ to $p_3$.
$$\sigma^2_{m_{x/z}^c} = \frac{p_1 S_n^{-p_2}}{90 - |\phi|} + p_3 \qquad (8)$$
Figure 13 shows the analytical models fitted to the data from the real experiment presented in Section 4.2. The parameters were estimated using the same method as in Section 5.2, with the goodness of fit expressed by an RMSE of $0.0076^2$ for the model of $\sigma^2_{m_x^c}$ and $1.3596^2$ for the model of $\sigma^2_{m_z^c}$.
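A possible fitting sketch using SciPy's Levenberg–Marquardt solver is shown below for the model of Equation (8); the arrays S_n, phi and var_x holding the normalised areas, offset angles and measured variances are assumed to have been prepared from the experimental data.

```python
import numpy as np
from scipy.optimize import curve_fit

def var_xz_model(X, p1, p2, p3):
    """Variance model of the x/z position estimates, Equation (8).
    X is a tuple (S_n, phi) of normalised areas and offset angles in degrees."""
    S_n, phi = X
    return p1 * S_n**(-p2) / (90.0 - np.abs(phi)) + p3

# S_n, phi and var_x are assumed to be 1-D NumPy arrays of equal length
popt, pcov = curve_fit(var_xz_model, (S_n, phi), var_x,
                       p0=[1.0, 1.0, 1e-4], method="lm")
residuals = var_xz_model((S_n, phi), *popt) - var_x
rmse = np.sqrt(np.mean(residuals**2))
```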

6. Practical Application of Variance Models in 2D Mobile Robot Localisation

The following section presents an approach describing one of the ways of using the presented variance models in the mobile robot localisation problem. As described in Section 1, our research focused on the localisation of a four-wheel skid-steered mobile robot in a known environment with planar fiducial markers placed at known positions. The mobile robot in question is equipped only with a colour camera with a resolution of 1280 × 720 px, IMU and wheel encoders. The described use case is implemented in the Gazebo simulator in order to have full control over the experiment and its parameters and to have a ground truth reference for comparison. The robot in the simulation environment is presented in Figure 14.

6.1. Parameter Estimation on User Data

In Section 5, the parameters of the observation noise models were estimated on a large dataset of measurements with a precisely set yaw angle of the marker that was considered the ground truth. This approach is not very practical in a real scenario due to its time consumption and the requirements on the preparation of the experiment. Therefore, with knowledge of the significant features of the variance models, only a subset of measured data is needed to estimate the required parameters of all the models.
Because the measured variance is symmetric around 0°, measurements can be made in only one half of the measurement range and then mirrored. For the model of $m_\beta^c$, measurements are needed at a yaw angle of 0° to capture the increase in the variance at this angle of rotation. The next significant data points are around 15° to capture the spread of the peak, further points around 40° capture the value of the minimal variance, and the final points near the edge of detectability, around 70°, capture the increase of the variance in this area. The data points mentioned above will also serve well enough for the estimation of the parameters of the variance models of $m_x^c$ and $m_z^c$. All data points at the different yaw angles should be measured at at least five positions throughout the measurement range in the direction of the $z^c$ axis. It is recommended to take at least 100 photos of the marker in each configuration.

6.2. Transformation of Measurements to Global Coordinate System

Each pose estimated from a marker can be considered a measurement with a multivariate Gaussian distribution defined as $\mathcal{N}(\mu_{m^c}, \Sigma_{m^c})$, where $\mu_{m^c}$ is a vector of mean values representing the measured position and rotation, and $\Sigma_{m^c}$ is a covariance matrix with the modelled variances on its main diagonal.
To transform a multivariate Gaussian distribution, a first-order linear transformation can be used, as described in [40], given by Equations (9) and (10).
$$c^g = f(m^c, m^g) \qquad (9)$$
$$\Sigma_y = \frac{\partial f(x)}{\partial x}\, \Sigma\, \left(\frac{\partial f(x)}{\partial x}\right)^{T} \qquad (10)$$
The function $f(m^c, m^g)$ represents the transformation of the position and orientation of the camera, and consequently of the robot itself, from the camera coordinate system to the global coordinate system. The known pose of the marker in the global coordinate system $m^g$ and in the camera coordinate system $m^c$ need to be combined using the geometric transformations in Equations (11)–(13) to obtain the pose of the camera in the global coordinate system $c^g$.
$$c_x^g = m_x^c \cos\left(m_\beta^g + m_\beta^c\right) - m_z^c \sin\left(m_\beta^g + m_\beta^c\right) + m_x^g \qquad (11)$$
$$c_z^g = m_x^c \sin\left(m_\beta^g + m_\beta^c\right) + m_z^c \cos\left(m_\beta^g + m_\beta^c\right) + m_z^g \qquad (12)$$
$$c_\beta^g = m_\beta^g + m_\beta^c \qquad (13)$$
The term $\frac{\partial f(x)}{\partial x}$ in (10) represents the Jacobian of $f(m^c, m^g)$. Considering that the normal distributions of the measurements represented by $m^c$ and $m^g$ are independent, Equation (10) can be rewritten in the following form:
$$\Sigma_{c^g} = \frac{\partial f(m^c, m^g)}{\partial m^c}\, \Sigma_{m^c} \left(\frac{\partial f(m^c, m^g)}{\partial m^c}\right)^{T} + \frac{\partial f(m^c, m^g)}{\partial m^g}\, \Sigma_{m^g} \left(\frac{\partial f(m^c, m^g)}{\partial m^g}\right)^{T} \qquad (14)$$
Moreover, the Jacobians $\frac{\partial f(m^c, m^g)}{\partial m^c}$ and $\frac{\partial f(m^c, m^g)}{\partial m^g}$ are represented by the following matrices:
$$\frac{\partial f(m^c, m^g)}{\partial m^c} = \begin{bmatrix} \cos\left(m_\beta^g + m_\beta^c\right) & -\sin\left(m_\beta^g + m_\beta^c\right) & -m_x^c \sin\left(m_\beta^g + m_\beta^c\right) - m_z^c \cos\left(m_\beta^g + m_\beta^c\right) \\ \sin\left(m_\beta^g + m_\beta^c\right) & \cos\left(m_\beta^g + m_\beta^c\right) & m_x^c \cos\left(m_\beta^g + m_\beta^c\right) - m_z^c \sin\left(m_\beta^g + m_\beta^c\right) \\ 0 & 0 & 1 \end{bmatrix} \qquad (15)$$
$$\frac{\partial f(m^c, m^g)}{\partial m^g} = \begin{bmatrix} 1 & 0 & -m_x^c \sin\left(m_\beta^g + m_\beta^c\right) - m_z^c \cos\left(m_\beta^g + m_\beta^c\right) \\ 0 & 1 & m_x^c \cos\left(m_\beta^g + m_\beta^c\right) - m_z^c \sin\left(m_\beta^g + m_\beta^c\right) \\ 0 & 0 & 1 \end{bmatrix} \qquad (16)$$
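A compact sketch combining Equations (11)–(16) into a single transformation step might look as follows; the function name and argument ordering are assumptions, and angles are in radians.

```python
import numpy as np

def camera_pose_global(m_c, Sigma_mc, m_g, Sigma_mg):
    """Transform the measured camera pose and its covariance to the global
    frame (Equations (11)-(16)).

    m_c = (m_x_c, m_z_c, m_beta_c) : measured marker pose in the camera frame
    m_g = (m_x_g, m_z_g, m_beta_g) : known marker pose in the global frame
    Sigma_mc, Sigma_mg             : 3x3 covariance matrices of both poses
    """
    m_x_c, m_z_c, m_beta_c = m_c
    m_x_g, m_z_g, m_beta_g = m_g
    t = m_beta_g + m_beta_c
    c, s = np.cos(t), np.sin(t)

    # Camera pose in the global frame, Equations (11)-(13)
    c_g = np.array([m_x_c * c - m_z_c * s + m_x_g,
                    m_x_c * s + m_z_c * c + m_z_g,
                    t])

    # Jacobians with respect to the measurement and the marker map pose,
    # Equations (15) and (16)
    J_mc = np.array([[c, -s, -m_x_c * s - m_z_c * c],
                     [s,  c,  m_x_c * c - m_z_c * s],
                     [0.0, 0.0, 1.0]])
    J_mg = np.array([[1.0, 0.0, -m_x_c * s - m_z_c * c],
                     [0.0, 1.0,  m_x_c * c - m_z_c * s],
                     [0.0, 0.0, 1.0]])

    # First-order covariance propagation, Equation (14)
    Sigma_cg = J_mc @ Sigma_mc @ J_mc.T + J_mg @ Sigma_mg @ J_mg.T
    return c_g, Sigma_cg
```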

6.3. Fusion of Pose from Multiple Markers

If multiple markers are detected in the image, their multivariate Gaussian distributions can be combined to improve the pose estimate by taking their product [41], in the form of Equations (17) and (18).
$$\mu = \left(\sum_{n=1}^{N} \Sigma_n^{-1}\right)^{-1} \sum_{n=1}^{N} \Sigma_n^{-1} \mu_n \qquad (17)$$
$$\Sigma = \left(\sum_{n=1}^{N} \Sigma_n^{-1}\right)^{-1} \qquad (18)$$
This fused estimate with its covariance can be used as one of the inputs to the EKF or another state estimator in the form of the observation noise matrix $R_k$.
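A minimal sketch of this fusion step, implementing Equations (17) and (18) as a product of independent Gaussian estimates, is given below; the function name is illustrative.

```python
import numpy as np

def fuse_estimates(means, covariances):
    """Fuse independent Gaussian pose estimates from multiple markers
    (Equations (17) and (18)).

    means       : list of mean vectors (camera pose in the global frame)
    covariances : list of corresponding covariance matrices
    """
    information = sum(np.linalg.inv(S) for S in covariances)  # sum of Sigma_n^-1
    Sigma = np.linalg.inv(information)                        # Equation (18)
    mu = Sigma @ sum(np.linalg.inv(S) @ m                     # Equation (17)
                     for S, m in zip(covariances, means))
    return mu, Sigma
```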

6.4. EKF Localisation

As a final step, an EKF is used to fuse all sensory data and provide an estimate of the absolute position of the mobile robot in the environment. The experiment was carried out using the ROS 1 framework [42] in the Gazebo simulation environment. The simplified diagram that describes the entire localisation pipeline is shown in Figure 15.
The outputs from the simulation were the odometry calculated using a differential drive model, data from the simulated IMU processed using the Madgwick filter [43], and camera images processed using the method described in this article. These sensor data were then sent as inputs to the EKF.
According to ROS conventions [44], three coordinate systems (frames) were used: map, odom and base_link. The transformation from odom to base_link is provided by the relative EKF that fuses data from the IMU and odometry. The absolute EKF is used to provide the transformation from map to odom based on data from the IMU, odometry and the global position estimate from the fiducial markers. The EKFs were implemented using a node from the robot_localization package [45].

6.5. Results

Figure 16 presents the variance models estimated from the simulation data. Data points were selected according to the recommendations in Section 6.1 from five different distances (400, 500, 600, 800 and 1100 mm) and five yaw angles of the marker (0°, 15°, 30°, 50° and 70°). The parameters were estimated using the Levenberg–Marquardt algorithm for solving the nonlinear least-squares problem.
To visualise the effect of presented variance models, the robot with the camera was placed 400, 600, 1000 and 1200 mm from the marker, and the resulting observation noise in the form of covariance matrices at each position is visualised as error ellipses in Figure 17 and compared in Figure 18. The covariance matrix of the marker positions in the global coordinate system was set to zero; therefore, only the influence of variance models is visible. It can be observed that, with increasing distance from the marker, the error ellipse increases up to the point where two markers are visible. When multiple markers are visible, as in Figure 17 for a robot position of 1400 mm, their pose estimates can be fused. This results in a more accurate pose estimate in which the resulting position of the robot is closer to the ground truth.
In the last experiment, the mobile robot was driven manually in the simulation environment and its absolute position in the global coordinate system was estimated using the localisation pipeline described in Section 6.4. The experiment was carried out four times on the same data set, differing only in the variance model used. In the first run, the variance models described in this paper were used. In the following runs, the variances of the pose estimate were set to fixed values corresponding to the minimal, mean and maximal values of the measured variances used to estimate the individual variance models shown in Figure 16. The results of this experiment are shown in Figure 19, where the inputs to the EKF (odometry and the estimated pose from the markers) are compared with the ground truth in Figure 19a, and the individual poses estimated using the different variance settings are compared in Figure 19b. The ground truth position was obtained directly from the Gazebo simulation. Figure 20 presents a better visualisation of the differences among the individual experiments as errors from the ground truth position for the $x^g$ and $z^g$ axes. The RMSE values, evaluated between the ground truth position and the positions obtained using the different variance settings, are summarised in Table 1 for the $x^g$ and $z^g$ axes.

7. Discussion

The methods presented in the previous sections proposed a straightforward way of estimating observation noise for pose estimates obtained from planar markers using analytical models. This pose estimate with its covariance matrix can consequently be used, for example, for localisation using the EKF as was presented.
The benefit of the proposed approach is that it eliminates the need for manual setting of the observation noise, which can instead be estimated using the simple method, requiring a relatively low number of measurements, described in Section 6.1. The accuracy of the estimation depends on the position of the selected data points and their number. However, with an increasing number of data points, the whole process of acquiring them becomes more time consuming.
The next key outcome is visible in Figure 19b and Table 1, where the localisation using the proposed variance models outperforms the estimates in which a fixed variance was used for the marker pose estimate. This is because the presented analytical models capture the true nature of the variances of the marker measurements. Therefore, the resulting pose is less susceptible to inaccurate pose estimates received from the markers compared to situations where the variances are set to a constant value. This is most apparent when the robot is farther away from the marker or the marker is less detectable, as shown in the zoomed section of Figure 19b, where the proposed method is less susceptible to the noisy pose estimate from the marker compared to the other methods. Setting the constant variances to either the minimal or the maximal value causes the state estimator to trust the measurement too little when it is accurate or too much when it is not.
While the approach we presented was demonstrated for 2D localisation, it can be generalised and applied to an arbitrary situation in a 3D environment. Moreover, the step of fusing the pose estimates from multiple markers can be integrated into the Kalman filter itself, rather than being a separate process. However, in this scenario, the Kalman filter must be able to handle a varying number of inputs, depending on the number of visible markers.
Future research should focus on methods for the fusion of estimates from multiple markers. Although the proposed method resulted in overall better performance compared to settings with the fixed variance of pose estimate, the compared methods provided better estimates in certain sections. This is likely caused by a violation of our assumption that individual pose estimates from multiple markers in the image are Gaussian and independent. Another consequence of this assumption is that, in cases where many markers are visible, the resulting product of Gaussian distributions would have unrealistically low variances.
An area for further research would be to enhance the proposed variance models, which are currently estimated for static camera situations. However, other factors such as motion blur resulting from the robot’s movement and camera vibrations can degrade the captured image and, as a result, the estimated pose from the fiducial marker. These influences must be considered in order to improve the accuracy of the pose estimates and the overall performance of the localisation system.

8. Conclusions

This paper presents a method for estimating the variance of the pose estimate obtained from planar fiducial markers. The estimated variance is based on analytical models that correctly capture the varying nature of the variance, which was experimentally validated. These models are simple enough to be easily estimated on a relatively small amount of data and still capture the variable nature of the pose estimate variances. The method can be applied to the problem of mobile robot localisation, as presented in Section 6.4, where our method of setting the observation noise of the EKF based on the variance models proved to be superior to experiments with the variances set to a constant value.

Author Contributions

Conceptualisation, R.A., P.V. and M.B.; methodology, R.A., M.B. and P.V.; software, P.V. and R.A.; validation, R.A., P.V. and M.B.; formal analysis, B.D. and F.R.; investigation, R.A. and B.D.; resources, R.A. and B.D.; data curation, F.R.; writing—original draft preparation, R.A., B.D., F.R., M.F. and M.B.; writing—review and editing, R.A., M.F., M.B., B.D. and F.R.; visualisation, R.A., F.R. and M.F.; supervision, R.A. and M.B.; project administration, M.B.; funding acquisition, M.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Faculty of Mechanical Engineering, Brno University of Technology under the project FSI-S-20-6407: “Research and development of methods for simulation, modelling a machine learning in mechatronics”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is unavailable due to privacy restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cebollada, S.; Payá, L.; Flores, M.; Peidró, A.; Reinoso, O. A state-of-the-art review on mobile robotics tasks using artificial intelligence and visual data. Expert Syst. Appl. 2021, 167, 114195. [Google Scholar] [CrossRef]
  2. Rubio, F.; Valero, F.; Llopis-Albert, C. A review of mobile robots: Concepts, methods, theoretical framework, and applications. Int. J. Adv. Robot. Syst. 2019, 16, 1729881419839596. [Google Scholar] [CrossRef] [Green Version]
  3. Niloy, M.A.K.; Shama, A.; Chakrabortty, R.K.; Ryan, M.J.; Badal, F.R.; Tasneem, Z.; Ahamed, M.H.; Moyeen, S.I.; Das, S.K.; Ali, M.F.; et al. Critical Design and Control Issues of Indoor Autonomous Mobile Robots: A Review. IEEE Access 2021, 9, 35338–35370. [Google Scholar] [CrossRef]
  4. Jang, G.; Kim, S.; Lee, W.; Kweon, I. Robust self-localization of mobile robots using artificial and natural landmarks. In Proceedings of the 2003 IEEE International Symposium on Computational Intelligence in Robotics and Automation. Computational Intelligence in Robotics and Automation for the New Millennium (Cat. No.03EX694), Kobe, Japan, 16–20 July 2003; Volume 1, pp. 412–417. [Google Scholar] [CrossRef]
  5. Popescu, D.C.; Cernaianu, M.O.; Ghenuche, P.; Dumitrache, I. An assessment on the accuracy of high precision 3D positioning using planar fiducial markers. In Proceedings of the 2017 21st International Conference on System Theory, Control and Computing, Sinaia, Romania, 19–21 October 2017; pp. 471–476. [Google Scholar] [CrossRef]
  6. Deasy, T.P.; Scanlon, W.G. Stepwise Algorithms for Improving the Accuracy of Both Deterministic and Probabilistic Methods in WLAN-based Indoor User Localisation. Int. J. Wirel. Inf. Netw. 2004, 11, 207–216. [Google Scholar] [CrossRef]
  7. Ledergerber, A.; Hamer, M.; D’Andrea, R. A robot self-localization system using one-way ultra-wideband communication. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, Germany, 28 September–2 October 2015; pp. 3131–3137. [Google Scholar]
  8. Krejsa, J.; Věchet, S.; Hrbácek, J.; Ripel, T.; Ondroušek, V.; Hrbácek, R.; Schreiber, P. Presentation Robot Advee. Eng. Mech. 2011, 18, 307–322. [Google Scholar]
  9. Babinec, A.; Jurišica, L.; Hubinský, P.; Duchoň, F. Visual Localization of Mobile Robot Using Artificial Markers. Procedia Eng. 2014, 96, 1–9. [Google Scholar] [CrossRef] [Green Version]
  10. Bertoni, M.; Michieletto, S.; Oboe, R.; Michieletto, G. Indoor Visual-Based Localization System for Multi-Rotor UAVs. Sensors 2022, 22, 5798. [Google Scholar] [CrossRef] [PubMed]
  11. Kayhani, N.; Heins, A.; Zhao, W.; Nahangi, M.; McCabe, B.; Schoellig, A. Improved Tag-based Indoor Localization of UAVs Using Extended Kalman Filter. In Proceedings of the ISARC. International Symposium on Automation and Robotics in Construction, Banff, AB, Canada, 21–24 May 2019; pp. 624–631. [Google Scholar] [CrossRef]
  12. Fiala, M. Designing Highly Reliable Fiducial Markers. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1317–1324. [Google Scholar] [CrossRef] [PubMed]
  13. Abbas, S.M.; Aslam, S.; Berns, K.; Muhammad, A. Analysis and Improvements in AprilTag Based State Estimation. Sensors 2019, 19, 5480. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Tanaka, H.; Sumi, Y.; Matsumoto, Y. A solution to pose ambiguity of visual markers using Moiré patterns. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 3129–3134. [Google Scholar] [CrossRef]
  15. Ch’ng, S.F.; Sogi, N.; Purkait, P.; Chin, T.J.; Fukui, K. Resolving Marker Pose Ambiguity by Robust Rotation Averaging with Clique Constraints. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation, Paris, France, 31 May–31 August 2020. [Google Scholar]
  16. Kalaitzakis, M.; Carroll, S.; Ambrosi, A.; Whitehead, C.; Vitzilaios, N. Experimental Comparison of Fiducial Markers for Pose Estimation. In Proceedings of the 2020 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 1–4 September 2020; pp. 781–789. [Google Scholar] [CrossRef]
  17. Szentandrási, I.; Zachariáš, M.; Havel, J.; Herout, A.; Dubská, M.; Kajan, R. Uniform Marker Fields: Camera localization by orientable De Bruijn tori. In Proceedings of the 2012 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Atlanta, GA, USA, 5–8 November 2012; pp. 319–320. [Google Scholar] [CrossRef]
  18. Neunert, M.; Bloesch, M.; Buchli, J. An open source, fiducial based, visual-inertial motion capture system. In Proceedings of the 2016 19th International Conference on Information Fusion (FUSION), Heidelberg, Germany, 5–8 July 2016; pp. 1523–1530. [Google Scholar]
  19. Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. Trans. ASME J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef] [Green Version]
  20. Mcgee, L.A.; Schmidt, S.F. Discovery of the Kalman Filter as a Practical Tool for Aerospace and Industry; National Aeronautics and Space Administration: Pasadena, CA, USA, 1985.
  21. Brablc, M. Hybrid Method for Modelling and State Estimation of Dynamic Systems; Brno University of Technology: Brno, Czech Republic, 2023. [Google Scholar]
  22. Tung, C.P.; Kak, A. Integrating sensing, task planning, and execution for robotic assembly. IEEE Trans. Robot. Autom. 1996, 12, 187–201. [Google Scholar] [CrossRef] [Green Version]
  23. Rekimoto, J. Matrix: A realtime object identification and registration method for augmented reality. In Proceedings of the 3rd Asia Pacific Computer Human Interaction (Cat. No.98EX110), Shonan Village Center, Japan, 17 July 1998; pp. 63–68. [Google Scholar] [CrossRef] [Green Version]
  24. Stricker, D.; Klinker, G.; Reiners, D. A fast and robust line-based optical tracker for augmented reality applications. In Proceedings of the International Workshop on Augmented Reality (IWAR’98), San Francisco, CA, USA, 1 November 1998. [Google Scholar]
  25. Kato, H.; Billinghurst, M. Marker tracking and HMD calibration for a video-based augmented reality conferencing system. In Proceedings of the Proceedings 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR’99), San Francisco, CA, USA, 20–21 October 1999; pp. 85–94. [Google Scholar] [CrossRef] [Green Version]
  26. Fiala, M. ARTag, a fiducial marker system using digital techniques. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 590–596. [Google Scholar] [CrossRef]
  27. Olson, E. AprilTag: A robust and flexible visual fiducial system. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 3400–3407. [Google Scholar] [CrossRef] [Green Version]
  28. Benligiray, B.; Topal, C.; Akinlar, C. STag: A Stable Fiducial Marker System. Image Vis. Comput. 2017, 89, 158–169. [Google Scholar] [CrossRef] [Green Version]
  29. Vávra, P. Využití Nástroje ROS pro Řízení Autonomního Mobilního Robotu. Master’s Thesis, University of Technology in Brno, Brno, Czech Republic, 2019. [Google Scholar]
  30. López-Cerón, A.; Cañas, J.M. Accuracy analysis of marker-based 3D visual localization. In Proceedings of the XXXVII Jornadas de Automática, Madrid, Spain, 7–9 September 2016. [Google Scholar]
  31. Abawi, D.; Bienwald, J.; Dorner, R. Accuracy in optical tracking with fiducial markers: An accuracy function for ARToolKit. In Proceedings of the Third IEEE and ACM International Symposium on Mixed and Augmented Reality, Arlington, VA, USA, 5 November 2004; pp. 260–261. [Google Scholar] [CrossRef]
  32. Pentenrieder, K.; Meier, P.; Klinker, G. Analysis of Tracking Accuracy for Single-Camera Square-Marker-Based Tracking. In Proceedings of the Dritter Workshop Virtuelle und Erweiterte Realitt der GIFachgruppe VR/AR, Koblenz, Germany, 25–26 September 2006. [Google Scholar]
  33. Tanaka, H.; Sumi, Y.; Matsumoto, Y. A visual marker for precise pose estimation based on lenticular lenses. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 5222–5227. [Google Scholar] [CrossRef]
  34. Uchiyama, H.; Saito, H. Random dot markers. In Proceedings of the 2011 IEEE Virtual Reality Conference, Singapore, 19–23 March 2011; pp. 35–38. [Google Scholar] [CrossRef]
  35. Mateos, L.A. AprilTags 3D: Dynamic Fiducial Markers for Robust Pose Estimation in Highly Reflective Environments and Indirect Communication in Swarm Robotics. arXiv 2020, arXiv:2001.08622. [Google Scholar] [CrossRef]
  36. Zheng, J.; Bi, S.; Cao, B.; Yang, D. Visual Localization of Inspection Robot Using Extended Kalman Filter and Aruco Markers. In Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics, Kuala Lumpur, Malaysia, 12–15 December 2018; pp. 742–747. [Google Scholar] [CrossRef]
  37. Chavez, A.G.; Mueller, C.A.; Doernbach, T.; Birk, A. Underwater navigation using visual markers in the context of intervention missions. Int. J. Adv. Robot. Syst. 2019, 16, 1729881419838967. [Google Scholar] [CrossRef]
  38. Bradski, G. The OpenCV Library. Dobb’s J. Softw. Tools 2000, 25, 120–123. [Google Scholar]
  39. Koenig, N.; Howard, A. Design and use paradigms for Gazebo, an open-source multi-robot simulator. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566), Sendai, Japan, 28 September–2 October 2004; Volume 3, pp. 2149–2154. [Google Scholar] [CrossRef] [Green Version]
  40. Blanco-Claraco, J.L. A tutorial on SE(3) transformation parameterizations and on-manifold optimization. arXiv 2021, arXiv:2103.15980. [Google Scholar]
  41. Petersen, K.B.; Pedersen, M.S. The Matrix Cookbook. Tech. Univ. Den. 2012, 7, 510. [Google Scholar]
  42. Quigley, M.; Conley, K.; Gerkey, B.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A. ROS: An open-source Robot Operating System. In Proceedings of the ICRA Workshop on Open Source Software, Kobe, Japan, 12–17 May 2009; Volume 3. [Google Scholar]
  43. Madgwick, S.O.H.; Harrison, A.J.L.; Vaidyanathan, R. Estimation of IMU and MARG orientation using a gradient descent algorithm. In Proceedings of the 2011 IEEE International Conference on Rehabilitation Robotics, Zurich, Switzerland, 29 June–1 July 2011; pp. 1–7. [Google Scholar] [CrossRef]
  44. Meeussen, W. REP 105—Coordinate Frames for Mobile Platforms (ROS.org). 2010. Available online: https://www.ros.org/reps/rep-0105.html (accessed on 7 February 2023).
  45. Moore, T.; Stouch, D. A Generalized Extended Kalman Filter Implementation for the Robot Operating System. In Proceedings of the 13th International Conference on Intelligent Autonomous Systems (IAS-13), Padova, Italy, 15–18 July 2014. [Google Scholar]
Figure 1. Depiction of mobile robots in a described environment.
Figure 2. Schematic view of the real application with individual coordinate systems and dimensions. Each coordinate system is marked with an upper index, g for the global coordinate system, m for the marker coordinate system and c for the camera coordinate system.
Figure 3. Camera positions during the experimental measurements.
Figure 4. Mean values in individual increments of marker headings.
Figure 5. Variance of the marker position and heading dependent on $\beta^c$.
Figure 6. Variances at different distances from the camera dependent on the yaw angle $\beta^c$. Measurements from different distances are colour coded.
Figure 7. Peak of variance shift due to the marker offset.
Figure 8. Variances at different distances from the camera dependent on the yaw angle $\beta^c$, obtained from simulation in Gazebo. Measurements from different distances are colour coded.
Figure 9. Top view of the rotated (orange) and equivalent (green) marker captured by the camera on the projection plane (blue).
Figure 10. View of the rotated (orange) and equivalent (green) marker captured by the camera.
Figure 11. Dependency of the marker area on the angle $\beta^c$ and the distance $\overline{m_z^c}$.
Figure 12. Model of $\sigma^2_{m_\beta^c}$ estimated on real data.
Figure 13. Models of $\sigma^2_{m_x^c}$ and $\sigma^2_{m_z^c}$ estimated on real data.
Figure 14. Simulation environment with a robot created in Gazebo.
Figure 15. Diagram of localisation pipeline with transformations.
Figure 16. Parameter estimation on user data.
Figure 17. Visualisation of covariance error ellipse based on distance and number of visible markers (the orange arrow represents pose estimated from markers with its covariance, the green arrow represents the ground truth position of the mobile robot and marker positions are represented by coordinate axes).
Figure 18. Comparison of standard deviations for different distances $m_z^c$ from the marker.
Figure 19. Results from EKF localisation. (a) Comparison of odometry and position estimated from markers (shown as marker axes) with ground truth position and depiction of visible markers. (b) Comparison of position estimated by EKF using different variance models for the estimated pose from markers.
Figure 20. Comparison of differences from the ground truth in the direction of the $x^g$ and $y^g$ axes. (a) Difference from the ground truth in the direction of $x^g$. (b) Difference from the ground truth in the direction of $y^g$.
Table 1. Comparison of RMSE for the individual variance settings in the direction of the $x^g$ and $z^g$ axes.

                    $x^g$ RMSE [m]    $z^g$ RMSE [m]
Variance Model      0.0128            0.0161
Min                 0.0154            0.0215
Mean                0.0192            0.0207
Max                 0.0223            0.0203

