1. Introduction
The development of agricultural robotics and precision farming techniques has the potential to address labour shortages and enhance production efficiency by significantly reducing the need for manual labour in demanding agrarian tasks. This marks a critical step towards modernising farming practices.
However, unstructured farm environments present unique challenges for agricultural robots. These environments are characterised by their dynamic and unpredictable nature, contrasting with the controlled conditions of greenhouses or structured plantations. Unstructured farms are often marked by irregular terrain, a diversity of crop types, and numerous natural obstacles, including rocks, trees, and water, all contributing to the complexity of navigating and executing tasks. Moreover, the lack of predefined paths and the continuous variation in environmental conditions, including soil moisture levels and plant growth stages, further amplifies the operational demands placed on robotic systems. Such complexities highlight the critical need for advanced sensing, localisation, and navigation capabilities in agricultural robots to effectively adapt to the ever-changing and varied agricultural landscape.
Traditional agricultural robots normally use RTK-GNSS, which provides high-precision positioning. However, these robots often struggle to provide real-time and precise performance in unstructured farms and complex settings [
1]. The primary reason is that dense foliage, obstacles, and hills can cause signal attenuation and reflection in most unstructured farm environments. In severe cases, obstacles can reduce the number of visible satellites, rendering the GNSS receiver entirely ineffective [
2]. The use of RTK-GNSS for agricultural robots also increases their cost, as the cost of the RTK-GNSS itself can be as high as USD 30,000.
Therefore, the cost has to be lowered to promote the use of high-accuracy localisation in agricultural robotics [3,4]. To address this issue, a multi-sensor fusion approach is often used, incorporating devices such as wheel odometers along with GNSS, or combining GNSS with an IMU [5,6,7,8,9,10]. Lin et al. combined Oriented FAST and Rotated BRIEF SLAM (ORB-SLAM), an IMU, and wheel odometry to overcome the limitations caused by the lack of GPS [11,12].
Tixiao Shan proposed the LiDAR Inertial Odometry via Smoothing and Mapping (LIO-SAM) framework, a SLAM method that couples LiDAR and IMU data through factor graph optimisation. This integration enables high-precision localisation and mapping in complex environments, and the factor graph optimisation framework enhances the algorithm's robustness and accuracy [13]. However, the algorithm requires an IMU with a high update frequency as well as a high-performance LiDAR, which can cost up to GBP 3280 [14]. Despite its excellent technical performance, these cost factors limit its application in agriculture.
This study tests and compares two different localisation methods that integrate data from low-cost sensors for real-time localisation in unstructured farm environments. Comparing these two approaches allows the performance and feasibility of the low-cost localisation scheme to be evaluated, ensuring that the proposed approach meets the needs of real-world agricultural robot applications. In addition, the LIO-SAM algorithm is used to construct a 3D environmental map in this paper.
The primary contributions of this paper are as follows:
Use a low-cost, high-accuracy differential localisation system on the robot platform.
Develop a low-cost localisation method for unstructured agricultural environments based on ESKF, integrating IMU, RTK-GNSS, and wheel odometry.
Test the LIO-SAM algorithm in an agricultural environment for the first time.
Adapt the LIO-SAM algorithm to suit the RoboSense RS16 LiDAR and a low-cost IMU for agricultural applications.
The remainder of this paper is organised as follows.
Section 2 describes the methods and technologies used in this research.
Section 3 presents the experimental design and displays the results obtained.
Section 4 analyses and discusses the experimental results.
Section 5 concludes the paper.
2. Methodologies
This study proposes an approach to achieve low-cost, real-time localisation and mapping for agricultural robots in unstructured farming environments. It introduces a fusion algorithm integrating data from wheel odometry, IMU, and RTK-GNSS. Among these sensors, wheel odometry can provide short-term accurate motion data, but it is prone to cumulative errors; the IMU offers continuous acceleration and attitude information, but it is limited by drift issues, and RTK-GNSS delivers high-precision localisation data that can be affected by environmental obstacles. The ESKF algorithm is therefore utilised to integrate the data from these sensors and enhance the accuracy of the localisation system while maintaining cost efficiency in this paper. The performance of this method is also compared with the LIO-SAM method.
Figure 1a illustrates the low-cost localisation algorithm, which fuses data from the three sensors using the ESKF. Initially, localisation data are obtained through the RTK-GNSS system and converted into odometry-format pose information. This information, along with data from the IMU and wheel odometry, is then fused using the ESKF to produce the final fused pose information.
Figure 1b shows the localisation framework diagram of the LIO-SAM algorithm. It obtains data from the IMU and 3D LiDAR to achieve positioning. The following sections detail the robot platform used in this study, the application of RTK technology, and the specific implementation of the ESKF fusion algorithm and the LIO-SAM algorithm.
2.1. Robot Platform
The robot platform used in this research is shown in Figure 2. It uses a six-wheel differential suspension system; the wheels are driven by high-power DC motors, providing sufficient power for the robot to climb and cross obstacles, and the suspension structure allows the robot to traverse obstacles on the farm smoothly. The robot does not need to operate at high speeds, so a single antenna was used to reduce the cost.
This robot is also equipped with a 16-line LiDAR, which can construct a real-time 3D map of the surrounding environment through the LIO-SAM algorithm. The controller of this robot is NVIDIA's recently released Jetson Orin Nano embedded development board, which provides sufficient computing power for the robot to handle the computational load of processing and fusing multiple sensors while keeping the cost down. Compared to the IMU and LiDAR used in the original version of LIO-SAM, this robot platform saves up to 64% of the cost [13].
Table 1 shows the price list of navigation sensors used in this study.
Figure 3 illustrates the robot’s hardware architecture, showing how it perceives its environment using LiDAR, an IMU, and RTK-GNSS. Data from these sensors are transmitted to the main controller through serial and Ethernet connections. An STM32 controller is used to control the robot’s movements.
The robot platform dimensions are 800 mm × 500 mm × 430 mm, weighing 20 kg and having a load capacity of 10 kg. Designed to work on the complex and variable terrain of farms, the robot is equipped with a 6-wheel suspension system, and the minimum ground clearance is 300 mm. This design enables the robot to overcome obstacles efficiently and ensures stable operation on slopes up to 30°. The robot’s maximum moving speed is 0.5 m/s.
The LiDAR system on this robotic platform employs the low-cost RS-LiDAR-16, a 16-channel LiDAR from RoboSense. It measures from 0.4 m to 150 m with an accuracy of ±2 cm, has a 30° field of view, and an angular resolution of 0.1°. The LiDAR operates at a rotational speed of 1200 rpm, enabling a sampling frequency of 20 Hz. Point cloud data are transmitted to the robot controller via a 100 Mbps Ethernet port. High-frequency sampling is crucial for navigation and mapping in unstructured farm environments. The LiDAR operates over a wide range of temperatures, from −30 °C to 60 °C, which allows the robot to work in the complex climatic conditions found on farms.
This robot platform utilises two u-blox ZED-F9P modules to establish a high-precision differential localisation system. One module acts as a rover fixed on the robot, and the other acts as a base station fixed on the ground. The rover receives signals from the Global Navigation Satellite System (GNSS) as well as differential corrections from the base station. By using differential positioning technology, it achieves an accuracy of up to 14 mm.
The low-cost Wheeltec N100 IMU is used as its primary inertial measurement unit. The N100 IMU provides a pitch and roll accuracy of 0.05° RMS, which is crucial for precise motion control and stable navigation. It features a high output frequency of up to 200 Hz, ensuring the robot system can respond quickly and accurately to environmental changes.
2.2. Real-Time Kinematic (RTK)
The robot kinematic model provides the robot’s relative position by using data from its internal sensors, such as the wheel odometry and IMU. This model helps understand the robot’s movements in terms of its frame of reference, estimating changes in position and orientation as the robot moves. However, the kinematic model alone cannot determine the robot’s absolute position in a real-world coordinate system because it lacks global positioning information.
In unstructured farm environments, RTK techniques can be used to obtain the robot’s coordinates in the real world, as discussed in this section. These techniques provide absolute positioning data that can be combined with the relative position estimates from the kinematic model. By integrating data from RTK-GNSS with the robot’s kinematic model through ESKF, the robot can accurately map its position within a larger, real-world framework.
GNSS (Global Navigation Satellite System) provides global location data (latitude, longitude, altitude) [15]. The most recognised system is the US GPS, offering worldwide localisation and time data. Russia's GLONASS, similar to GPS, serves internationally. The EU's newer system, Galileo, aims for precise European services. China's BeiDou, initially regional, has recently achieved global coverage.
In RTK systems, a base station is often used, and a rover or receiver is placed on a mobile robot. The base station, located at a surveyed position, can estimate the errors for each GNSS signal received. Once these error corrections are transmitted to the user’s receiver, integer ambiguity resolution (IAR) computations are carried out. This principle is more suitable when the user is relatively close to the reference station. However, if the distance between the user and the reference station is significant, atmospheric conditions at the two locations might differ. This disparity can lead to the unsuccessful execution of IAR operations. The typical guideline for maximum distance is around 25 km.
Another approach is Network RTK, which requires network access at the receiver's location. In this method, the receiver, acting as a client, receives differential signals from network stations [16,17]. This model is not limited by coverage area or distance. Generally, in most countries and regions, numerous service providers offer data in the RTCM protocol, which is used to convey correction information. RTK-GNSS combines RTK technology with standard GNSS systems, capitalising on GNSS's global localisation abilities and RTK's high precision. It offers localisation services on a global scale.
Robots can obtain positioning information from IMU, wheel odometry, and RTK-GNSS, but each sensor has advantages and disadvantages. Multi-sensor fusion algorithms are needed to integrate different sensors and enhance localisation abilities to improve the robustness of robot localisation.
2.3. Low-Cost RTK-GNSS System
This research investigated a low-cost robot localisation system for unstructured farming environments. The system requires a base station, depicted in Figure 4, which gathers and transmits satellite data to the rover via a digital radio. While obtaining data from the base, the rover also directly captures satellite information. The rover uses relative localisation principles to conduct real-time differential analyses between the base-station data and its own readings, pinpointing its precise three-dimensional position with centimetre-level accuracy. This approach costs GBP 220, involves two units (a base and a rover), and operates independently of internet connectivity and external differential localisation services.
Table 2 shows the cost comparison with other RTK-GNSS systems [18]. The absolute position coordinates are obtained through differential algorithms.
Figure 5 shows the horizontal position accuracy; the localisation accuracy is stable at around 20 mm. Three sites were tested, and in Figure 6, the green lines show the trajectories obtained by mapping the localisation data from the three test sites onto Bing's static satellite imagery.
2.4. Robot Motion Model
To estimate the robot's position from wheel odometry data, it is necessary to build a robot kinematic model. The wheel odometry data are used as inputs, and the position and heading of the robot's motion can then be obtained through the kinematic model.
To simplify the kinematic model, three assumptions were made: there is no slippage between the wheels and the ground, the ground provides sufficient friction, and the six wheels are parallel to each other. The kinematic model of a six-wheel differential drive robot is shown in Figure 7. Ideally, the speeds of the wheels on the same side of the chassis are synchronised. The chassis can perform circular motion around the instantaneous centre of rotation (ICR). The angular velocity of this circular motion is the same for all six wheels, and its centre, the ICR, always lies on the extension of the y-axis through the chassis's centre of gravity (COG). This is because the angular velocity ω directly determines the rate at which the chassis rotates around the ICR, and therefore the position of the ICR relative to the COG. The distance D_C between the ICR and the COG is constrained by the differential drive mechanism: changes in the wheel velocities alter the chassis's turning radius and thus shift the ICR's position along the y-axis, so the ICR's location, and hence D_C, depends on the rotational speed of the chassis.
The velocity of the chassis is defined at its centre of mass (COM) and is denoted v. It is composed of components v_x and v_y. The velocities of the six wheels, which come from the target wheel velocities, are denoted v_i (where i = 1, 2, 3, 4, 5, 6). The distance between the left and right wheels is denoted c. The formula for the angular velocity of the circular motion is shown in Equation (1):

ω = v / r    (1)

where ω is the angular velocity of the circular motion, v is the linear velocity, and r is the radius of the circular motion. Let the angle between v_i and the y-axis be α_i. From the perpendicular relationship between v_i and its radius r_i (the vector from the ICR to wheel i), it can be derived that v_ix = v_i sin α_i and v_iy = v_i cos α_i. Summarising the above, the constraint relationship in Equations (2) and (3) is established.
Given the condition of consistent angular velocity among the six wheels of the rotating rigid body, Equations (2) and (3) can be generalised to Equations (4) and (5). Equation (6) is then obtained from Equations (2)–(5). At the same time, the wheel velocities v_i (where i = 1, 2, 3, 4, 5, 6) and the chassis velocity v satisfy Equations (7) and (8) regarding their projection lengths on the x-axis and y-axis, respectively. For a six-wheel differential drive chassis, the speeds of the left and right wheels are set as v_L and v_R, respectively. Under the condition that the speeds of the front and rear wheels are strictly synchronised, the constraints shown in Equations (9) and (10) can be established.
Thus, the inverse kinematics formula can be obtained simply by performing an inverse transformation. The inverse kinematics model for a six-wheel differential drive system, which maps the wheel speeds to the chassis motion, is shown in Equation (12):

v = (v_R + v_L) / 2,  ω = (v_R − v_L) / c    (12)

The rotational speed of each wheel, obtained through the wheel speed encoders, therefore allows the robot's linear velocity v and angular velocity ω to be calculated. Within the two-dimensional plane, v can be decomposed into components in the x and y directions, as in Equations (14) and (15):

v_x = v cos θ,  v_y = v sin θ

Integrating the angular velocity ω gives the change in direction (yaw angle change), Equation (16):

Δθ = ω Δt

The calculated displacements Δx = v_x Δt and Δy = v_y Δt are applied to the current position of the robot, while the orientation change Δθ is applied to the current orientation, giving the new pose of the robot, Equations (17)–(19):

x_{k+1} = x_k + Δx,  y_{k+1} = y_k + Δy,  θ_{k+1} = θ_k + Δθ

Wheel odometry provides the wheel speeds on the robot's left and right sides. Substituting these into Equation (12) gives the robot's velocity; inserting the result into Equations (14)–(16) and then Equations (17)–(19) gives the position and heading of the robot's movement, which together constitute the robot's pose.
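As a concrete illustration of Equations (12) and (14)–(19), the following sketch integrates left and right wheel speeds into a planar pose (x, y, θ). It assumes the standard differential-drive relations reconstructed above; the function and variable names are illustrative.

import math

def update_pose(x, y, theta, v_left, v_right, c, dt):
    """Dead-reckon one time step of a differential-drive robot.

    x, y, theta      : current planar pose (m, m, rad)
    v_left, v_right  : left/right wheel speeds from the encoders (m/s)
    c                : track width between left and right wheels (m)
    dt               : time step (s)
    """
    v = (v_right + v_left) / 2.0          # Equation (12): linear velocity
    omega = (v_right - v_left) / c        # Equation (12): angular velocity

    vx = v * math.cos(theta)              # Equations (14)-(15): velocity components
    vy = v * math.sin(theta)
    dtheta = omega * dt                   # Equation (16): yaw change

    x += vx * dt                          # Equations (17)-(19): pose update
    y += vy * dt
    theta += dtheta
    return x, y, theta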
2.5. Method 1: RTK-GNSS IMU Odometry Fusion with ESKF
IMU, wheel odometry, and GNSS have drawbacks in unstructured farm environments. The IMU can provide rapid updates of motion status, but it may drift over prolonged use. Wheel odometry can provide continuous motion information, but differential wheels may experience significant slippage and accumulate errors. GNSS signals can be obstructed, leading to significant errors or unavailability in certain areas. To improve the accuracy of robot localisation, multi-sensor fusion algorithms are needed to integrate data from different sensors and enhance localisation abilities.
The Kalman filter is widely used to estimate system states by combining a predictive model with observational data from multiple sensors [19]. However, compared to the extended Kalman filter (EKF), the error state Kalman filter (ESKF) offers more advantages in multi-sensor robot localisation. By linearising the error states, the ESKF provides higher precision, better robustness against sensor noise, and improved computational efficiency [20]. Therefore, this study is the first to use the ESKF to fuse data from a low-cost IMU, wheel odometry, and a low-cost RTK-GNSS receiver to enhance localisation accuracy in unstructured farm environments.
2.5.1. State Definition
To fuse the data from IMU, RTK-GNSS, and wheel odometry using the ESKF, the state equations and observation equations need to be established, the state vector and error state vector need to be defined, and the state covariance matrix and observation noise covariance matrix need to be initialised.
In the ESKF, the state vector X typically includes position, velocity, attitude (quaternion), accelerometer bias, gyroscope bias, and odometer bias. The state vector is defined as follows:

X = [p, v, q, b_a, b_g, b_o]^T

where p is the position vector [p_x, p_y, p_z]^T, v is the velocity vector [v_x, v_y, v_z]^T, q is the quaternion representing the attitude [q_w, q_x, q_y, q_z]^T, b_a is the accelerometer bias, b_g is the gyroscope bias, and b_o is the odometer bias.
2.5.2. Continuous-Time State Equations
In the ESKF, state equations are used to describe the evolution of the system state over time. The continuous-time state equations are:

dp/dt = v
dv/dt = R(q)(a_m − b_a − n_a) + g
dq/dt = (1/2) q ⊗ (ω_m − b_g − n_ω)
db_a/dt = n_ba,  db_g/dt = n_bg,  db_o/dt = n_bo

Here, R(q) is the rotation matrix represented by the quaternion q, a_m is the acceleration measured by the IMU, ω_m is the angular velocity measured by the IMU, g is the acceleration of gravity, and n_a, n_ω, n_ba, n_bg, and n_bo represent process noise.
2.5.3. Discrete State Equations
Since both the sensor data and the computer processing are performed at discrete time steps, the continuous-time state equations need to be discretised.
In classical physics, the relationship between position p and velocity v can be expressed by the following differential equations:

dp/dt = v,  dv/dt = a

where a is the acceleration. When discretising, these continuous-time differential equations need to be converted into discrete-time difference equations.
Assume that the acceleration a is constant over a time step Δt. According to Newton's laws of motion, the change in velocity Δv can be expressed as:

Δv = a Δt

The change in position Δp can be obtained by integrating the velocity. Over a time step Δt, assuming an initial velocity of v_k and an end velocity of v_{k+1}, the change in position can be expressed as an integral over the velocity:

Δp = ∫ v(t) dt

Since the velocity varies linearly, it can be approximated by trapezoidal integration:

Δp = (1/2)(v_k + v_{k+1}) Δt

Expanding this further, we obtain:

Δp = v_k Δt + (1/2) a Δt²

The acceleration a is obtained from the acceleration measured by the IMU, a_m, minus the accelerometer bias b_a, and it also requires the rotation matrix R(q) to transform the acceleration from the sensor coordinate system to the world coordinate system. Therefore, the position update equation and velocity update equation are:

p_{k+1} = p_k + v_k Δt + (1/2)(R(q_k)(a_m − b_a) + g) Δt²
v_{k+1} = v_k + (R(q_k)(a_m − b_a) + g) Δt
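A minimal numerical sketch of the discrete position and velocity prediction above, assuming the reconstructed update equations; SciPy is used only to turn the attitude quaternion into a rotation matrix, and all names and the z-up gravity convention are illustrative assumptions.

import numpy as np
from scipy.spatial.transform import Rotation

GRAVITY = np.array([0.0, 0.0, -9.81])  # assumed z-up world-frame gravity

def predict_pv(p, v, q_xyzw, a_meas, b_a, dt):
    """Propagate position and velocity over one IMU sample.

    p, v    : position and velocity in the world frame (3-vectors)
    q_xyzw  : attitude quaternion in SciPy's (x, y, z, w) convention
    a_meas  : accelerometer reading in the body frame
    b_a     : current accelerometer bias estimate
    """
    R = Rotation.from_quat(q_xyzw).as_matrix()      # body -> world rotation
    a_world = R @ (a_meas - b_a) + GRAVITY          # bias-corrected, gravity-compensated
    p_next = p + v * dt + 0.5 * a_world * dt**2     # position update
    v_next = v + a_world * dt                       # velocity update
    return p_next, v_next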
In the ESKF, attitude is usually represented by quaternions, a mathematical tool that avoids singularities and simplifies rotation calculations. To use quaternions for attitude updates, the discrete-time quaternion update equation must be derived.
1. Quaternion Representation of Attitude
A quaternion q represents a rotation and is defined as:

q = q_w + q_x i + q_y j + q_z k

where q_w is the real part and q_x, q_y, q_z are the imaginary parts.
2. Quaternion Differential Equation
The change in attitude is determined by the angular velocity. Assuming the angular velocity measured by the IMU is ω_m, and considering sensor bias and noise, the actual angular velocity is ω = ω_m − b_g − n_ω.
The differential equation for the quaternion is:

dq/dt = (1/2) q ⊗ [0, ω]

where ⊗ denotes quaternion multiplication.
3. Discretising the Quaternion Differential Equation
To discretise the quaternion differential equation, consider the change over a time step Δt. Assuming the angular velocity is constant over this time step, the change in the quaternion can be represented as the product of the initial quaternion and a small rotation quaternion.
3.1 Small Rotation Quaternion
The rotation corresponding to the angular velocity ω over a small time step Δt can be approximated by the rotation vector:

Δθ = ω Δt

The corresponding small rotation quaternion is:

Δq = [cos(‖Δθ‖/2), (Δθ/‖Δθ‖) sin(‖Δθ‖/2)]

3.2 Quaternion Update
The new attitude quaternion q_{k+1} can be represented as:

q_{k+1} = q_k ⊗ Δq

where Δq is the small rotation quaternion over the time step Δt.
4. Using Exponential Mapping to Represent Small Rotations
To represent small rotations more accurately, an exponential mapping can be used:

Δq = Exp(ω Δt)

where Exp(·) is the exponential map for quaternions, which can be calculated using Rodrigues' rotation formula or the quaternion exponential formula.
5. Discrete-Time Attitude Update Equation
Combining the above derivations, the discrete-time attitude update equation is:

q_{k+1} = q_k ⊗ Exp((ω_m − b_g) Δt)

The accelerometer, gyroscope, and odometer biases are assumed to be constant:

b_{a,k+1} = b_{a,k},  b_{g,k+1} = b_{g,k},  b_{o,k+1} = b_{o,k}
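The attitude update in step 5 can be written compactly with a rotation-vector exponential map. The sketch below uses SciPy's Rotation class to form the small rotation Exp((ω_m − b_g)Δt) and compose it with the current attitude; it illustrates the reconstructed equations rather than the authors' exact implementation, and the names are illustrative.

import numpy as np
from scipy.spatial.transform import Rotation

def predict_attitude(q_xyzw, omega_meas, b_g, dt):
    """Propagate the attitude quaternion over one gyroscope sample.

    q_xyzw     : current attitude quaternion (x, y, z, w)
    omega_meas : gyroscope reading in the body frame (rad/s)
    b_g        : current gyroscope bias estimate
    """
    rotvec = (omega_meas - b_g) * dt            # small rotation vector omega * dt
    dq = Rotation.from_rotvec(rotvec)           # exponential map to a small rotation
    q_next = Rotation.from_quat(q_xyzw) * dq    # right-multiply: q_{k+1} = q_k (x) dq
    return q_next.as_quat()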
2.5.4. Error State Equations
To use the ESKF, the error state equations need to be defined, capturing the difference between the true state and the estimated state. The error state vector is defined as:

δx = [δp, δv, δθ, δb_a, δb_g, δb_o]^T

The error state equations are:

dδp/dt = δv
dδv/dt = −R(q)[a_m − b_a]_× δθ − R(q) δb_a − n_a
dδθ/dt = −[ω_m − b_g]_× δθ − δb_g − n_ω
dδb_a/dt = n_ba,  dδb_g/dt = n_bg,  dδb_o/dt = n_bo

where [·]_× is the skew-symmetric matrix of the enclosed vector.
Discretising the error state equations, we obtain:

δx_{k+1} = F_k δx_k + w_k

where w_k is the discrete process noise.
2.5.5. Prediction of the Error Covariance
The prediction step of the error covariance matrix is given by:

P_{k|k−1} = F_k P_{k−1} F_k^T + Q_k

where F_k is the Jacobian matrix of the state transition model, and Q_k is the process noise covariance matrix.
2.5.6. Measurements
The GNSS provides position information for state correction as follows:

z_GNSS = H_GNSS x + v_GNSS

where H_GNSS is the measurement matrix and v_GNSS is the measurement noise.
Wheel odometry provides velocity information for state correction as follows:

z_odom = H_odom x + v_odom

where H_odom is the measurement matrix and v_odom is the measurement noise.
2.5.7. Kalman Gain and State Update
The Kalman gain is computed as follows:

K_k = P_{k|k−1} H^T (H P_{k|k−1} H^T + R)^{−1}

The state update equations are as follows:

δx_k = K_k (z_k − H x̂_{k|k−1}),  x̂_k = x̂_{k|k−1} + δx_k

The error covariance update equation is as follows:

P_k = (I − K_k H) P_{k|k−1}
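A minimal numpy sketch of the correction step above, assuming a linear measurement model z = Hx + v; the matrix names mirror the reconstructed equations and are illustrative.

import numpy as np

def eskf_correct(x_pred, P_pred, z, H, R):
    """Standard Kalman correction used for both GNSS and odometry updates.

    x_pred, P_pred : predicted state and covariance
    z              : measurement (GNSS position or odometry velocity)
    H, R           : measurement matrix and measurement noise covariance
    """
    y = z - H @ x_pred                                  # innovation (observation residual)
    S = H @ P_pred @ H.T + R                            # residual covariance
    K = P_pred @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_upd = x_pred + K @ y                              # state update
    P_upd = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred  # covariance update
    return x_upd, P_upd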
The pseudo-code for the fusion of Odom IMU and RTK-GNSS processes via ESKF is shown in Algorithm 1.
Algorithm 1 ESKF Integrating IMU, RTK-GNSS, and Wheel Odometry
1: Initialisation:
2: Set the initial state estimate x̂_0 and initial state covariance P_0 ▹ initial position p, velocity v, orientation q, accelerometer bias b_a, gyroscope bias b_g, odometry bias b_o
3: Set the process noise covariance Q and measurement noise covariances R_GNSS, R_odom ▹ uncertainty in the model and the sensors
4: Define the state transition function f and observation models h_GNSS, h_odom ▹ functions for state prediction and measurements
5: for each time step k do
6:   Acquire data from sensors:
7:   Obtain angular velocity ω_m and acceleration a_m from the IMU ▹ IMU provides rotation rate and acceleration
8:   Obtain wheel speeds v_L, v_R from the odometer ▹ wheel encoders measure the left and right wheel speeds
9:   Obtain global localisation coordinates from RTK-GNSS ▹ GNSS gives an accurate global position
10:  Compute the control input and observations from the sensor data:
11:  Calculate the control input u_k based on ω_m, a_m, v_L, v_R ▹ compute linear and angular velocities
12:  Observation z_GNSS from RTK-GNSS and z_odom from odometry and the IMU ▹ combine pose data from the IMU and position from GNSS
13:  Prediction step:
14:  Predict the state x̂_{k|k−1} = f(x̂_{k−1}, u_k) ▹ predict the next state using the motion model
15:  Calculate the predicted covariance P_{k|k−1} = F_k P_{k−1} F_k^T + Q ▹ estimate the uncertainty in the prediction
16:  Update step:
17:  if GNSS data are available then
18:    Calculate the observation residual y_k = z_GNSS − h_GNSS(x̂_{k|k−1}) ▹ difference between observed and expected GNSS measurements
19:    Calculate the residual covariance S_k = H_GNSS P_{k|k−1} H_GNSS^T + R_GNSS ▹ uncertainty in the GNSS observation residual
20:    Calculate the Kalman gain K_k = P_{k|k−1} H_GNSS^T S_k^{−1} ▹ weighting of the prediction vs. the GNSS observation
21:    Update the state estimate x̂_k = x̂_{k|k−1} + K_k y_k ▹ correct the prediction with GNSS data
22:    Update the state covariance P_k = (I − K_k H_GNSS) P_{k|k−1} ▹ update the estimate uncertainty
23:  else
24:    Calculate the observation residual y_k = z_odom − h_odom(x̂_{k|k−1}) ▹ difference between observed and expected odometry measurements
25:    Calculate the residual covariance S_k = H_odom P_{k|k−1} H_odom^T + R_odom ▹ uncertainty in the odometry observation residual
26:    Calculate the Kalman gain K_k = P_{k|k−1} H_odom^T S_k^{−1} ▹ weighting of the prediction vs. the odometry observation
27:    Update the state estimate x̂_k = x̂_{k|k−1} + K_k y_k ▹ correct the prediction with odometry data
28:    Update the state covariance P_k = (I − K_k H_odom) P_{k|k−1} ▹ update the estimate uncertainty
29:  end if
30: end for
2.6. Method 2: LIO-SAM
This subsection describes another localisation method, using LiDAR sensors, called LIO-SAM (LiDAR Inertial Odometry via Smoothing and Mapping). This is a SLAM framework that combines LiDAR and inertial measurement unit (IMU) data and that, at its core, is an optimisation method based on a factor graph, a kind of bipartite graph used to model how sensor measurements constrain the robot's poses and environmental landmarks.
Figure 8 shows the system structure of LIO-SAM, which contains four types of factors [13].
The first type is the IMU pre-integration factor (orange), based on the IMU measurements between two consecutive keyframes. The IMU can provide high-frequency information about the robot’s acceleration and angular velocity. By pre-integrating these IMU data, a rough estimate of the robot’s motion between the two keyframes can be obtained. This estimation helps bridge the motion between keyframes, providing continuous constraints for changes in the robot’s pose.
The second type is the LiDAR odometry factor (green), obtained by frame-to-frame matching of the LiDAR point cloud data from each keyframe with the data from the previous n keyframes. Based on the geometric correspondences between point clouds, this matching can provide accurate information about the robot’s position changes. The LiDAR odometry factors assist in fine-tuning the pose of the keyframes to ensure optimal alignment between LiDAR point clouds.
The third type is the GPS factor (yellow), derived from the GPS measurements for each keyframe, offering global localisation information. GPS data can provide a global reference for the robot, helping to correct accumulated drift errors and ensure the accuracy of long-term navigation. GPS factors are particularly useful in outdoor environments, especially in open spaces with good GPS signal reception.
The fourth type is the loop closure factor (black), a critical component of SLAM systems for identifying when the robot revisits previously explored areas. Upon detecting a potential loop closure, the system selects the candidate loop closure keyframe and its temporally adjacent keyframes for frame-to-frame matching. If the matching is successful, loop closure factors are added, connecting the current keyframe with the loop closure candidate keyframe. These factors help correct cumulative errors in the map, ensuring global consistency.
These four types of factors form the factor graph of LIO-SAM. By optimising this factor graph, LIO-SAM can fuse IMU, LiDAR, and GPS data, achieving high-precision, real-time 3D mapping and localisation.
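LIO-SAM builds its factor graph with GTSAM. The fragment below is a much-simplified sketch, using GTSAM's Python bindings, of how keyframe poses can be constrained by odometry-style between factors and GPS factors; the key numbering, noise values, and poses are illustrative and do not reproduce the LIO-SAM implementation.

import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Prior on the first keyframe pose.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1] * 6))
graph.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(), prior_noise))
initial.insert(0, gtsam.Pose3())

# LiDAR-odometry-style between factor linking keyframe 0 and keyframe 1.
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05] * 6))
delta = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))  # 1 m forward
graph.add(gtsam.BetweenFactorPose3(0, 1, delta, odom_noise))
initial.insert(1, delta)

# GPS factor anchoring keyframe 1 to a global position.
gps_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.5]))
graph.add(gtsam.GPSFactor(1, gtsam.Point3(1.02, -0.01, 0.0), gps_noise))

# Optimise the graph and read back the corrected keyframe pose.
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(1))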
3. Experiments
The experiments were conducted at three test sites near the University of Sussex campus with dense vegetation and multiple terrain types, similar to unstructured farming environments. As shown in Figure 9, the University of Sussex is situated in the South Downs National Park, with a wide range of grassland terrains.
Test Site A, shown in Figure 10, is surrounded by dense trees on two sides. It also includes several slopes and obstacles, simulating the dense vegetation in unstructured farm environments.
Test Site B is shown in Figure 11. It includes dense vegetation on the left side and a steep slope area on the right. The significant incline in this area closely resembles the undulating terrain found on farms and tested the robot's performance in unstructured terrain conditions.
Test Site C, shown in Figure 12, is surrounded by tall, dense trees, similar to the complex vegetative cover found on actual farms.
A pre-planned trajectory was established for the robot to follow at each test site. To ensure the consistency, reproducibility, and clarity of the experiments, the following steps were followed:
Initialisation: Before starting each experiment, all sensors were checked and calibrated to ensure they were functioning correctly.
Path Tracking: The robot moved along the pre-planned trajectory while continuously collecting sensor data.
Data Recording: All sensor data, including wheel odometry, IMU, LiDAR, and RTK-GNSS data, were recorded with the "rosbag record" command during the robot's movement; "rosbag" is a standard ROS tool for recording and playing back ROS messages, allowing the user to record the data published on ROS topics and store them in a .bag file (an example of reading such a bag back is given after this list).
Data Analysis: The collected data were analysed using the EVO tool to calculate the absolute pose error (APE) metrics for each method. To verify the robustness of the proposed algorithm when the GNSS signal is lost, the GNSS signal was manually switched off at random intervals to simulate signal loss. As shown in Figure 13, the green trajectories in the three test areas are the RTK-GNSS trajectories, and the breaks in the trajectories are where the RTK-GNSS topics were manually turned off.
Comparison of Results: The localisation accuracies of the ESKF-based method and the LIO-SAM algorithm were compared across different test sites to evaluate their performance in various environments.
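As a small illustration of the data-recording step above, the snippet below shows how the recorded topics could later be read back with the rosbag Python API; the topic names and bag filename are assumptions made for illustration, not the actual configuration used in the experiments.

import rosbag

# Illustrative bag file and topic names.
with rosbag.Bag("test_site_a.bag") as bag:
    for topic, msg, t in bag.read_messages(
            topics=["/imu/data", "/odom", "/gnss/fix", "/rslidar_points"]):
        # Each message arrives with its topic name and receive timestamp,
        # so the sensor streams can be re-synchronised offline.
        print(t.to_sec(), topic)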
Figure 13. Trajectories with the RTK-GNSS topics turned off. The green lines in the test areas are the trajectories of the robot. The discontinuous parts of the trajectories are areas where GNSS was manually switched off.
3.1. Evaluation of Localisation Accuracy of Method 1 and Method 2
The effectiveness of the sensor fusion localisation method used in this paper, in comparison to LIO-SAM localisation, was verified in three areas labelled A, B, and C. These areas provided a range of conditions for a thorough evaluation.
Figure 14 shows the experimental environments. The trajectory accuracy was evaluated using the EVO (Evaluation of Visual Odometry) tool, a standard tool in the fields of robotics and computer vision [21]. It processes various trajectory data formats and offers extensive data analysis and visualisation capabilities. The evaluation criterion was the APE (absolute pose error), which measures the error between the estimated and the reference trajectories. For this study, the trajectory from RTK-GNSS served as the reference, ensuring a reliable baseline for the accuracy assessment of the proposed localisation method.
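For reference, APE can also be computed programmatically with evo's Python API rather than its command-line tools. The sketch below assumes the reference (RTK-GNSS) and estimated trajectories have been exported to TUM-format text files; the file names are illustrative.

from evo.core import metrics, sync
from evo.tools import file_interface

# Illustrative file names for the reference (RTK-GNSS) and estimated trajectories.
ref = file_interface.read_tum_trajectory_file("rtk_gnss_reference.txt")
est = file_interface.read_tum_trajectory_file("eskf_estimate.txt")

# Associate poses by timestamp before comparing them.
ref, est = sync.associate_trajectories(ref, est)

# Absolute pose error on the translation part, as reported in Table 4.
ape = metrics.APE(metrics.PoseRelation.translation_part)
ape.process_data((ref, est))
print(ape.get_all_statistics())   # max, mean, median, min, rmse, std, ...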
3.2. Real-Time Mapping
In this study, the LIO-SAM algorithm was deployed on the robotic platform. The imu-utils tool was employed to calibrate the IMU [22]. The calibration parameters are shown in Table 3. The RoboSense RS-16, a low-cost LiDAR, was used in this research. However, the laser point cloud produced by the RoboSense LiDAR has a different beam sequence and format from the Velodyne point cloud, which is the default in the algorithm. Therefore, the RS-to-Velodyne package was used to convert the point cloud data into the Velodyne format.
Experiments were conducted at three unstructured, farm-like sites to validate the algorithm's effectiveness. Area A is a large area with irregular terrain, generally sloping from lower to higher elevations, surrounded by trees, and occasionally traversed by pedestrians. Area B has a steeper slope, showcasing the diversity of farm terrains. Area C is flat but characterised by dense vegetation. The mapping results are shown in Figure 15. For a more intuitive presentation of these complex agricultural environments, Figure 16 maps the point clouds onto Bing static maps.
4. Results and Discussion
This study investigated a localisation method that uses the ESKF to integrate data from wheel odometry, a low-cost IMU, and a low-cost RTK-GNSS. The accuracy of the estimated trajectories was also compared with that of the LIO-SAM algorithm using the absolute pose error (APE) metric from the EVO (Evaluation of Visual Odometry and SLAM) tool. EVO is a tool for evaluating the performance of visual odometry and SLAM algorithms, providing a convenient way to compare and evaluate trajectories generated by different algorithms [21]. The results obtained from the three test areas are displayed in Figure 14, where the data from the ESKF-based method and the LIO-SAM algorithm are shown in the left and right columns, respectively. Specifically, the first and third columns present the reference trajectories compared to the trajectories estimated by the two methods, while the second and fourth columns show the corresponding trajectory errors.
As shown in Table 4, the first to third rows give the trajectory error evaluation parameters of the method presented in this paper, and the fourth to sixth rows give those of the LIO-SAM method. Here, 'Max' refers to the maximum error, 'Mean' to the average error, 'Median' to the middle value in the list of all error values, 'Min' to the minimum error, 'RMSE' to the root mean square error, and 'Std' to the standard deviation.
The errors in Table 4 were obtained by taking the GNSS trajectory as the ground truth and comparing it against the trajectories estimated by the two algorithms. The error percentage is obtained by dividing the RMSE by the total length of the trajectory and multiplying by 100%. Calculated in this way for the different experimental sites, the ESKF method's percentage errors ranged from 0.11% to 0.54%, while the LIO-SAM method's percentage errors ranged from 0.29% to 0.67%.
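The percentage figures quoted above follow directly from that definition; the tiny helper below (with illustrative numbers) makes the calculation explicit.

def ape_percentage(rmse_m: float, trajectory_length_m: float) -> float:
    """Percentage error: RMSE divided by total trajectory length, times 100."""
    return rmse_m / trajectory_length_m * 100.0

# Example with illustrative numbers: a 0.11 m RMSE over a 100 m trajectory.
print(f"{ape_percentage(0.11, 100.0):.2f}%")   # -> 0.11%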
The absolute pose errors (APEs) obtained from the three test areas are displayed in Figure 14, as described above. From Test Area A to Test Area C, the errors of the ESKF-based method are consistently smaller than those of the LIO-SAM method.
Figure 16 shows that the LIO-SAM algorithm successfully generated high-quality point cloud maps that accurately reflect the terrain undulations and environmental features of the test sites. The algorithm remained robust even when GNSS signals were lost, with no drift appearing in the point cloud maps.
The ESKF-based method achieved high localisation accuracy and is more cost-effective and robust when the GNSS signal is lost, providing a viable option for low-cost agricultural robot localisation. The LIO-SAM algorithm, for its part, generated high-quality point cloud maps that accurately reflect the terrain and environmental features, even when GNSS signals were intermittent.