1. Introduction
Navigation is one of the most challenging competencies required of an autonomous mobile robot (AMR). It can be defined as the combination of fundamental competences: perception, localization, map building, path planning, and interpretation. Robot navigation refers to the robot’s ability to determine its own position in the environment and then to plan a path towards its goal position, based on its knowledge of the environment and its sensor values, so as to reach the goal as efficiently and reliably as possible. Research on navigation algorithms is necessary to improve autonomous mobile robots in all fields [
1,
2,
3,
4,
5,
6]. The navigation algorithm is the key technology for the autonomous navigation of robots, and it is also a research hotspot in the field of AMR. The use of modeling and simulations to develop navigation algorithms provides development flexibility and the capability to conduct extensive testing of the algorithm under a variety of operational environments and robot configurations [
7]. So, algorithm simulations have been widely used in navigation algorithm research.
Previously, the algorithm simulation of robots was usually developed using C++, Java, MATLAB, and other programming languages. In the study presented in [
8], an algorithm for path planning to a target for a mobile robot in an unknown environment was implemented in Borland C++ and afterwards tested in the Visual Basic and DELPHI programming languages. The motion of the robot moving from the initial position to the desired position along an estimated trajectory was shown in the simulation. In the study presented in [
9], a path-finding simulator for the Pioneer 3DX mobile robot was designed with a GUI (Graphical User Interface) in MATLAB. Five different algorithms, including the Dijkstra algorithm and the A* algorithm, were implemented in the simulator to determine the shortest path for a mobile robot between nodes within various mazes. With this kind of simulation, usually only two-dimensional results can be displayed, and the visual presentation is limited. In order to present the results of an algorithm better, the navigation algorithm is sometimes simulated jointly by using a programming language such as MATLAB together with dynamic simulation software such as ADAMS or RecurDyn. In the study presented in [
10], the performance of an adaptive impedance algorithm for tendon-driven dexterous hands was validated by using MATLAB and ADAMS software in a joint simulation. A tendon-driven hand model was built and a control module was generated in ADAMS. Then, the control system was built in MATLAB using the control module. However, the simulation speed of this method is not very fast, and the simulation parameters can only be set in advance, so the real-time performance of the simulation is poor. In the simulation of some robots, especially industrial robots [
11,
12,
13], the position coordinates of each part of the robot are calculated in a programming language such as C++, and the corresponding 3D graphics are drawn with OpenGL to display the calculation results dynamically. This kind of three-dimensional motion simulation only shows the positions of the robot parts as calculated by the kinematics equations, not by a physics engine. It is usually used for industrial robot simulations, but it is difficult to simulate mobile robots with higher physical fidelity requirements using this method.
At present, there are also several commercial, professional robotic simulation platforms including software such as Webots, MRDS, Gazebo, MORSE, V-REP, Simbad, USARSim, STDR/Stage, and ARGoS [
14,
15,
16,
17,
18,
19]. Webots, an open-source robot simulator, can model and simulate any mobile robot, including wheeled, legged, and flying robots. It includes a complete library of sensors and actuators, and can be programmed in C, C++ and Java, or from third party software through TCP/IP [
20]. By using Webots, virtual environments can be created for robot simulations. Additionally, Webots allows 3D models that use the VRML97 standard to be imported [
21]. MRDS (Microsoft Robotics Developer Studio) [
22], a visual programming tool, is a Windows-based environment for robot control and simulation. MRDS not only supports the visual programming language, but also supports many programming languages such as Visual Basic, Visual C++, and IronPython. MRDS’s Visual Simulation Environment (VSE) provides a high-quality simulated environment by using NVIDIA PhysX to create high-fidelity 3D simulations with realistic object interactions [
23]. Gazebo [
24], a well-designed simulator, makes it possible to rapidly test algorithms, design robots, perform regression testing, and train AI systems using realistic scenarios. Gazebo offers the ability to accurately and efficiently simulate populations of robots in complex indoor and outdoor environments. It supports multiple physics engines (ODE, Bullet, DART). The graphic engine is robust and ensures the development of high-quality 3D models. It has been widely used in robot and multi-robot system simulations [
25,
26,
27]. Gazebo is mainly used on Linux systems. Although a workspace can be set up to compile Gazebo on Windows, it does not yet run very stably on Windows. V-REP (Virtual Robot Experimentation Platform) is a general-purpose robotic simulator with an integrated development environment developed by Coppelia Robotics [
28]. V-REP supports many programming languages and integrates several physics engines to compute dynamics and to simulate physics and object interactions. It is commonly used for the navigation simulation of robots [
29,
30,
31]. MORSE (Modular Open Robots Simulation Engine) [
32] is based on Blender, an open-source 3D creation suite whose game engine comes with an integrated Bullet physics engine. MORSE operates from the command line and is a purely Python application that supports almost any 3D model. Simbad, a 3D robot simulator, is a simple testing platform for studying artificial intelligence algorithms for autonomous robots and agents. However, this simulation tool does not provide a realistic simulation of the robot environment [
33]. USARSim (Unified System for Automation and Robot Simulation) [
18] is a 3D simulator based on the Unreal Tournament (UT) game engine. USARSim was developed to simulate multiple robots in search and rescue environments. It supports sound sensors, touch sensors, lasers, odometry, and cameras. Sim2Real (simulation to reality), which aims at photo-realistic simulation, is currently a hotspot in the research and application of robot simulation. In the study presented in [
34], the use of LiDAR sensor modeling and data augmentation with GANs for autonomous driving was studied. CycleGANs were employed to solve the sensor modeling problem for LiDAR in order to produce realistic LiDAR data from a simulated LiDAR (sim2real). In the study presented in [
35], Sim2Real viewpoint-invariant visual servoing by recurrent control was studied. The paper describes how the resulting model can be transferred to a real-world robot by disentangling perception from control and only adapting the visual layers. The ROS (Robot Operating System) is an open-source, meta-operating system for robots and one of the most popular types of robotics middleware. It currently only runs on Unix-based platforms. Rviz (ROS visualization) is a 3D visualizer for displaying sensor data and state information from ROS. Using Rviz, the current configuration of a virtual model of the robot can be visualized, and live representations of sensor values coming over ROS topics can also be displayed. ROS is widely used in robot control and algorithm simulations. In order to better simulate algorithms and display the simulation results, ROS is usually combined with Gazebo and Rviz [
15,
36,
37].
Unity3D is a game development platform. It is a fully integrated professional game engine and also has a variety of inbuilt user interfaces and 3D rendering capabilities alongside its own networking protocol [
38]. Unity3D makes use of programming languages and its own development environment to create attractive 3D games and software. Unity3D also provides good kinematics and dynamics simulation based on its physics engine, thanks to the integration of PhysX, a scalable multi-platform game physics solution. In addition to game development, Unity3D has been applied in a wide range of fields [
39,
40], such as virtual places, building visualization, virtual teaching and training, and machine motion simulation. Unity3D is also used in robot kinematics, dynamics, and navigation algorithm simulations. In paper [
41], a method of robot simulation using the graphics engine and physics engine of Unity3D is shown. This method obtains realistic simulations of the execution of robotic tasks, including sensing and motion primitives. Paper [
42] presents the implementation of a Unity3D-MATLAB simulator applied to the area of robotics. In the simulator, Unity3D exchanges information with MATLAB to execute different proposed control algorithms. In paper [
43], a ROS-Unity3D-based system for the monitoring of an industrial robotic process is introduced, as well as a framework to simulate and execute an industrial process monitoring task in Unity3D. In paper [
44], ROSUnitySim, a novel real-time three-dimensional simulation system using ROS and Unity3D, is presented for local planning by miniature unmanned aerial vehicles (UAVs) in cluttered environments. The paper particularly introduces the modeling of the environment and of the LiDAR sensor. In the study presented in paper [
45], the 3D shortest distance was studied using the A* algorithm in Unity3D. The applications of Unity have also been extended to machine learning. The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source Unity plugin for creating and interacting with simulation environments using the Unity platform. By taking advantage of Unity as a simulation platform, the toolkit enables the development of learning environments that are rich in sensory and physical complexity, provide compelling cognitive challenges, and support dynamic multi-agent interactions [
46]. In paper [
47], in order to train and evaluate interactive agents in realistic simulated environments, the Interactive Question Answering Dataset (IQUAD V1), which builds on AI2-THOR [
48], a photo-realistic customizable simulation environment for the integration of indoor scenes with the Unity physics engine, is presented.
The types of professional robotic simulation software mentioned above each have their own advantages and disadvantages. Researchers need to make reasonable choices based on their actual needs. Some researchers have also analyzed and compared these simulators [
15,
17,
19,
49], as shown in
Table 1, which can provide a reference for making such choices. However, many of these platforms provide either unrealistic visual information, inaccurate physics, low task complexity, or a limited capacity for interactions among artificial agents [
47]. Since Unity3D offers the advantages mentioned above, it is well suited to implementing navigation algorithm simulations for autonomous mobile robots, taking advantage of the realistic interactions between the robot and all the other elements of the environment that Unity3D provides. In this paper, through the secondary development of Unity3D, a navigation simulation platform based on Unity3D is designed. Using the simulation platform, a virtual robot prototype can be established quickly from an imported 3D robot model with virtual joints and sensors, and navigation algorithm scripts can be added to the virtual prototype to carry out navigation simulations in a virtual ground environment. In this paper, the A* algorithm was improved for navigation in unknown 3D environments. Taking a Mecanum-wheeled mobile robot as an example, its 3D model was imported into Unity3D; joint, sensor, and navigation algorithm scripts were added to the model; and the improved A* navigation algorithm was then simulated using the robot virtual prototype. A test was carried out using the physical robot prototype in a physical environment, and the simulation trajectory and test trajectory were compared to verify the feasibility and usability of the proposed method.
This paper is organized as follows: In
Section 2, the simulation platform based on Unity3D is described, and the creation of the virtual joints, sensors, and environments in Unity3D is introduced. An improved A* algorithm that can be used in an unknown 3D environment is introduced in
Section 3.
Section 4 firstly describes the Mecanum wheel mobile robot and its kinematics model and then introduces the virtual prototype of the robot and the navigation simulation process of the improved A* algorithm on the simulation platform. In
Section 5, a navigation accuracy measurement experiment system for the robot in a physical environment is created to evaluate the simulation performance of the platform built in Unity3D, and the test results of the physical robot in the physical environment are compared with the simulation results.
3. Improved A* Algorithm
Path planning is the task of finding a continuous path that will drive the robot from the start point to the target point. Based on the information about the obstacles, the working environment of a robot can be categorized as a completely known environment, a partially known environment, or a completely unknown environment. It can also be categorized as a static environment or a dynamic environment [
50,
51,
52]. There are many path planning and navigation algorithms, such as PRM, RRT, EST, RRT*, APF, MPC, ANN, GA, PSO, ACO, and D* [
53], compared to which the A* algorithm has advantages such as its simple principles, easy realization, and high efficiency. Thus, it has been widely investigated and applied. To increase the applicability of the A* algorithm, meet the requirements of navigation tasks, generate smoother paths, and reduce path length and the number of turns, many improved A* algorithms have been proposed and studied in depth. A 3D A* algorithm was studied to compute the path between two nodes in a 3D environment and was shown to be faster than an A* algorithm operating on 2D layers [
45]. An improved A* algorithm was studied to improve the safety and smoothness of the planned path and to reduce the movement time of the robot in complex terrain [
53]. Several modifications (Basic Theta*, Phi*) and improvements (RSR, JPS) of the A* algorithm have been studied to reduce the computational time and improve path optimality [
54]. A modified A* algorithm for path planning with efficient coverage was presented, which can be used to generate waypoints in order to cover narrow spaces [
55]. An improved A* algorithm considering water current, traffic separation, and berthing for vessel path planning [
56], which achieves a trade-off between path length and navigation safety, was proposed. Thus, the A* algorithm has good extensibility and adaptability and can be improved according to the actual working environment of the robot. The research interest of this paper is to propose a new method of simulating navigation algorithms in Unity3D; the focus is not on the algorithm itself. In order to verify the usability and reliability of the Unity3D-based simulation platform, it was necessary to select an appropriate robot prototype and navigation algorithm. Therefore, the A* algorithm was selected to test and study the proposed robot navigation simulation approach.
3.1. Introduction of the A* Algorithm
The A* search algorithm is a global optimization and state space heuristic algorithm. It can be seen as an improved version of the Dijkstra algorithm with the addition of an evaluation function [
57]. In the search process, each candidate position in the state space is evaluated, and the position with the lowest evaluation cost is selected. The search then continues from this position until the target point is found. This omits a large number of invalid search paths and improves efficiency.
The evaluation function of the A* algorithm is as follows:

f(n) = g(n) + h(n)

where f(n) is the estimated cost of arriving at the target node from the initial node through node n, g(n) is the actual cost of travelling from the initial node to node n in the state space, and h(n) is the estimated cost of the optimal path from node n to the target node. The closer the estimate h(n) is to the real value, the higher the efficiency of the algorithm and the greater the likelihood of finding the optimal solution. The flow chart of the A* algorithm is shown in Figure 11.
3.2. Improvement of the A* Algorithm for Navigation in an Unknown Environment
The A* navigation algorithm is a global path planning algorithm for use in known environments. When a mobile robot navigates in an unknown environment, it needs to move while detecting and planning the path in real time according to the terrain and obstacles detected.
The strategy of the improved A* algorithm is to run A* path planning on the currently known map and let the mobile robot move along the planned path. The robot continuously detects the surrounding environment during its movement and projects the detected environmental information into the map. If the detected obstacles do not block the planned path, the robot continues to move along the original path. If a detected obstacle obstructs the planned path, the current position of the robot is set as the new starting point of navigation, and the shortest path to the target point is re-planned according to the environmental information available at that time. On the whole, this path is not the shortest path from the original starting point to the target point, but it avoids roundabout detours by the mobile robot. A minimal sketch of this control loop is given below.
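As a minimal sketch of this strategy (not the authors' actual script), the control loop could look as follows in C#. PlanPath, DetectObstacles, UpdateMap, PathBlocked, NodeAtPosition, MoveAlong, and RobotAtTarget are hypothetical helpers assumed for illustration, and in Unity such a loop would normally be spread over successive Update() calls or a coroutine rather than written as a blocking while loop:

```csharp
// Illustrative method fragment of a navigation script; helper methods are assumed.
void NavigateToTarget(Node startNode, Node targetNode)
{
    List<Node> path = PlanPath(startNode, targetNode);   // initial A* plan on the known map

    while (!RobotAtTarget())
    {
        UpdateMap(DetectObstacles());                     // project LiDAR detections into the grid map

        if (PathBlocked(path))
        {
            // Re-plan from the current position rather than from the original start,
            // so the robot does not take a roundabout route back to the old path.
            Node current = NodeAtPosition(robot.transform.position);
            path = PlanPath(current, targetNode);
        }

        MoveAlong(path);                                  // follow the (possibly updated) path
    }
}
```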
3.3. Improvement of the A* Algorithm for Navigation in 3D Space
The A* algorithm is suitable for path planning in 2D space and cannot be used directly for navigation in a 3D environment. In the basic A* algorithm, two types of information are stored for each grid point: the location of the node in 2D space and whether the node can be passed. In order to use the A* navigation algorithm in 3D space, two additional variables, maxHeight and minHeight, need to be added to represent the maximum and minimum heights of each node, respectively. As shown in
Figure 12, the cuboids represent the nodes in the grid. Each node has its corresponding position information on the plane. The white cuboids represent the nodes through which the robot can pass, while the gray cuboids represent the nodes that the robot cannot pass through. The improved A* algorithm can record the height information of the nodes and realize the 3D map reconstruction, so that the robot can navigate in the 3D space.
When the A* algorithm is used to navigate in 2D space, a 2D LiDAR is used for the reconstruction of the 2D map. As shown in Figure 13, the 2D LiDAR casts laser lines, and when an obstacle is detected, the position of the obstacle is projected into the 2D map (the gray grid nodes), and the corresponding nodes are marked as inaccessible.
When terrain reconstruction is performed in 3D space, the undulating ground and obstacles can be detected by a 3D LiDAR. As shown in Figure 14, the irregular block is a patch of undulating ground contained in one grid cell. The laser cast from the 3D LiDAR can detect all positions on the undulating ground, and the height information is updated in the variables of the node. As shown in the cuboid on the right, the height of its top surface is the node's maxHeight value, the height of its bottom surface is the node's minHeight value, and the height of the cuboid is h (h = maxHeight − minHeight).
Figure 15 shows how to update the node height information.
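A minimal sketch of this height update, assuming the detected point has already been transformed into world coordinates and mapped to its grid node (the maxHeight/minHeight fields follow Section 3.3; the method name is illustrative and not taken from the original scripts):

```csharp
// Illustrative method fragment: fold one detected world-space point into the
// height information (maxHeight/minHeight) of its grid node, as in Figure 15.
// maxHeight/minHeight are assumed to be initialized to float.MinValue/float.MaxValue
// (or to the first observed height) when the node is created.
static void UpdateNodeHeight(Node node, Vector3 worldPoint)
{
    if (worldPoint.y > node.maxHeight) node.maxHeight = worldPoint.y;
    if (worldPoint.y < node.minHeight) node.minHeight = worldPoint.y;
}
```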
After 3D terrain detection, it is necessary to determine whether the robot can pass through a given node according to its height information. The first criterion is the height difference (maxHeight − minHeight) of the node; the threshold is the grid size multiplied by a coefficient. The magnitude of this coefficient is determined by the obstacle-crossing ability of the robot and is closely related to the maximum obstacle height that the robot can cross.
The second criterion is the height difference between the node and its surrounding nodes. As shown in Figure 16, the gray block in the middle represents the node to be evaluated, and the eight white blocks represent its surrounding nodes. The median height ((maxHeight + minHeight)/2) of the node to be evaluated is subtracted from the median height of each surrounding node, and the absolute value of the difference is used to determine whether the robot can pass between the nodes. The threshold is the horizontal distance between the two nodes multiplied by a coefficient. This coefficient is determined by the slope-climbing ability of the robot and is related to the maximum slope that the robot can climb. Both criteria are sketched in code below.
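The sketch below expresses both criteria compactly. It assumes a square grid with node size nodeDiameter and two robot-specific coefficients, obstacleCoeff (obstacle-crossing ability) and slopeCoeff (slope-climbing ability); these names are illustrative and not taken from the original scripts:

```csharp
// Illustrative method fragment implementing the two traversability criteria of Section 3.3.
static bool CanWalk(Node node, List<Node> neighbours, float nodeDiameter,
                    float obstacleCoeff, float slopeCoeff)
{
    // Criterion 1: height spread inside the node vs. grid size times a coefficient
    // determined by the robot's obstacle-crossing ability.
    if (node.maxHeight - node.minHeight > nodeDiameter * obstacleCoeff)
        return false;

    // Criterion 2: difference of median heights to each surrounding node vs.
    // horizontal distance times a coefficient given by the slope-climbing ability.
    float median = (node.maxHeight + node.minHeight) / 2f;
    foreach (Node nb in neighbours)
    {
        float nbMedian = (nb.maxHeight + nb.minHeight) / 2f;
        float horizontal = Vector2.Distance(
            new Vector2(node.worldPos.x, node.worldPos.z),
            new Vector2(nb.worldPos.x, nb.worldPos.z));
        if (Mathf.Abs(median - nbMedian) > horizontal * slopeCoeff)
            return false;
    }
    return true;
}
```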
3.4. Programming Implementation of the Improved A* Algorithm
The improved A* algorithm consists of four programs. Their functions are to create nodes, to create grids, to plan paths, and to detect the environment and reconstruct the map. These are introduced separately below.
3.4.1. Creation of Nodes
The function of this program is to define a class for nodes, so that an object of this class can be generated directly when each node in the grid is created. This simplifies the program; the class does not need to be attached to a Unity object and can be called directly by other programs. First, the class is named Node, and then its data members are defined, as shown in
Table 6.
The target path is obtained from the parent variable as follows: the target node is regarded as the first node of the path, the parent node of the target node as the second node, the parent of that node as the third node, and so on, until the starting node is reached. The sequence of nodes obtained in this way is the calculated target path. After the data members have been defined, the constructor of the class is defined, and four data members of the class, canWalk, worldPos, gridX, and gridY, are assigned values in the constructor. A sketch of such a class is given below.
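Table 6 is not reproduced in this text, so the sketch below is inferred from the members named in the prose (canWalk, worldPos, gridX, gridY, parent, fCost, and the maxHeight/minHeight extension of Section 3.3); gCost and hCost are assumed names derived from the A* evaluation function:

```csharp
using UnityEngine;

// Illustrative sketch of the Node class described in Section 3.4.1.
public class Node
{
    public bool canWalk;        // whether the robot can pass through this node
    public Vector3 worldPos;    // position of the node in world coordinates
    public int gridX, gridY;    // indices of the node in the grid
    public Node parent;         // back-pointer used to trace the path

    public float maxHeight;     // highest detected point inside the node (3D extension)
    public float minHeight;     // lowest detected point inside the node (3D extension)

    public float gCost;         // actual cost from the start node
    public float hCost;         // estimated cost to the target node
    public float fCost { get { return gCost + hCost; } }

    public Node(bool canWalk, Vector3 worldPos, int gridX, int gridY)
    {
        this.canWalk = canWalk;
        this.worldPos = worldPos;
        this.gridX = gridX;
        this.gridY = gridY;
        maxHeight = float.MinValue;   // so the first detected point sets both heights (assumption)
        minHeight = float.MaxValue;
    }
}
```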
3.4.2. Creation of Grids
The function of this program is to divide the map into a grid and create nodes at the intersections of the grid. This program needs to be attached to a Unity object, and objects in Unity are used to assign values to variables in the program. The data members of the Grid program are defined, as shown in
Table 7.
The area size of the grid is determined by the starting position and the target position of the robot, and the gridSize value used in this program is calculated from these two positions.
After the data member definition has been completed, the member functions of the program are defined. The member function “CreatGrid” of the program divides the grid and runs when the simulation begins in Unity. Firstly, the starting point of the grid map is defined, which is the origin of the 2D map; here, the starting point is selected at the lower left corner of the grid division area. The center point of the grid division area is then calculated; its coordinates are the average of the coordinates of the starting position and the target position of the robot, that is, transform.position = (endPoint.transform.position + robot.transform.position)/2.
In
Figure 17, Point C is the center of the grid division area, and Point S is the starting point of the grid map; the coordinates of Point S are calculated from the coordinates of Point C and the size of the grid area.
The positions of the other grid nodes in the map are obtained by adding multiples of the node diameter to the starting point of the grid map, giving the coordinates of each node in the 2D grid map. In the 2D grid map presented in Figure 17, the abscissa of point P is the abscissa of the starting point plus twice the node diameter, and the ordinate of point P is the ordinate of the starting point plus three times the node diameter; that is, x_P = x_S + 2d and y_P = y_S + 3d, where d is the node diameter.
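Under the conventions above (grid centre C halfway between robot and target, origin S at the lower-left corner, nodes spaced one node diameter apart), the grid construction could be sketched as follows; it corresponds to the “CreatGrid” member function described in the text, and field names such as gridSize, nodeDiameter, gridCountX/Y, robot, and endPoint are assumptions where they are not quoted in the text:

```csharp
using UnityEngine;

// Illustrative sketch of the grid construction in Section 3.4.2.
public class Grid : MonoBehaviour
{
    public GameObject robot, endPoint;   // robot start object and target object
    public Vector2 gridSize;             // world-space size of the grid area (assumed field)
    public float nodeDiameter;           // edge length of one grid node (assumed field)
    Node[,] grid;
    int gridCountX, gridCountY;

    void CreateGrid()
    {
        // Centre C of the grid area: midway between the robot and the target.
        transform.position = (endPoint.transform.position + robot.transform.position) / 2f;

        // Starting point S: the lower-left corner of the grid division area.
        Vector3 s = transform.position
                  - Vector3.right   * gridSize.x / 2f
                  - Vector3.forward * gridSize.y / 2f;

        gridCountX = Mathf.RoundToInt(gridSize.x / nodeDiameter);
        gridCountY = Mathf.RoundToInt(gridSize.y / nodeDiameter);
        grid = new Node[gridCountX, gridCountY];

        for (int x = 0; x < gridCountX; x++)
        {
            for (int y = 0; y < gridCountY; y++)
            {
                // Node position = S plus multiples of the node diameter
                // (e.g., point P at x = 2, y = 3 lies at S + (2d, 0, 3d)).
                Vector3 worldPos = s + Vector3.right   * (x * nodeDiameter)
                                     + Vector3.forward * (y * nodeDiameter);
                grid[x, y] = new Node(true, worldPos, x, y);
            }
        }
    }
}
```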
3.4.3. Planning Path
The function of this program is to use the A* algorithm to calculate the shortest path from the starting position of the robot to the target position according to the map. This program needs to be attached to Unity, as some objects in Unity are used to assign values to some variables in the program. In addition, the above Grid program needs to be called.
A program named “Findpath” is defined, and then the data members of the program are defined, including robot, endPoint and grid, as shown in
Table 8. After the data member definition has been completed, the member functions of the program are defined. A function is defined to assign the grid variable, that is, to obtain the Grid program; the assignment uses the Unity call grid = GetComponent<Grid>(). Next, a function is defined to calculate the path, and the calculated path is assigned to the path variable of the Grid program. The function first defines two variables of the List<Node> type, openSet and closeSet, and then the program is written according to the flow chart of the A* algorithm shown in
Figure 11.
If the number of elements in the openSet becomes zero, no path to the target point can be found. Otherwise, the following operation is performed: the node with the smallest fCost value in the openSet is moved into the closeSet. If this node is the target node, the path has been found; otherwise, the calculation continues. The neighbouring nodes of the current node are then examined: if a neighbour is neither in the closeSet nor an obstacle, the current node is set as its parent, its estimated cost values are updated, and it is added to the openSet. A condensed sketch of this loop is given below.
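A condensed sketch of the path-calculation loop just described (GetNeighbours and GetDistance are assumed helper methods that are not quoted in the text; startNode and targetNode are the nodes at the robot's and the target's positions):

```csharp
// Illustrative method fragment of the Findpath program (Section 3.4.3).
List<Node> FindPath(Node startNode, Node targetNode)
{
    var openSet  = new List<Node> { startNode };
    var closeSet = new List<Node>();

    while (openSet.Count > 0)                       // empty openSet: no path exists
    {
        // Move the node with the smallest fCost from the openSet into the closeSet.
        Node current = openSet[0];
        foreach (Node n in openSet)
            if (n.fCost < current.fCost) current = n;
        openSet.Remove(current);
        closeSet.Add(current);

        if (current == targetNode)                  // target reached: trace back via parents
        {
            var path = new List<Node>();
            for (Node n = targetNode; n != startNode; n = n.parent) path.Add(n);
            path.Reverse();
            return path;
        }

        foreach (Node neighbour in grid.GetNeighbours(current))
        {
            if (!neighbour.canWalk || closeSet.Contains(neighbour)) continue;

            float newG = current.gCost + GetDistance(current, neighbour);
            if (newG < neighbour.gCost || !openSet.Contains(neighbour))
            {
                neighbour.gCost  = newG;                        // update estimated values
                neighbour.hCost  = GetDistance(neighbour, targetNode);
                neighbour.parent = current;                     // set parent for path tracing
                if (!openSet.Contains(neighbour)) openSet.Add(neighbour);
            }
        }
    }
    return null;                                    // openSet exhausted: target unreachable
}
```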
3.4.4. Environment Exploration and Topographic Reconstruction
The function of this program is to detect the environment using the 3D LiDAR and to create a 3D grid map; the robot is then driven along the path calculated by the “Findpath” program to reach the target position. First, a program named “Navigation” is defined, and its data members and related variables are defined, as shown in
Table 9. In the calculation of the path based on the A* algorithm, the robot is regarded as a particle, and the obstacle expansion method is used to obtain the path. The A, B, C, and D rectangles in
Figure 18 represent the positions and sizes of four obstacles, and rectangle E represents the size of the robot. The dotted frames in the figure show the shapes of the obstacles after expansion by the “barriderDistance” distance.
As shown in
Figure 19, Point C is the center position of the robot, that is, the current coordinate of the robot, and Point O is the center of the LiDAR. The variable lidarPos stores the three-dimensional vector from point O to point C, which is used to correct the robot position from the LiDAR position. The three-dimensional vector of a collision point relative to point O measured by the LiDAR, combined with lidarPos, gives the three-dimensional vector of the collision point relative to point C. Finally, two Vector3 variables, robotSize1 and robotSize2, are defined; they store the size of the robot. When the LiDAR detects an obstacle within this range, it is regarded as having detected the robot itself, and the point is automatically ignored to prevent errors.
In
Figure 19, taking the overhead sketch of the robot as an example and taking the center O of the LiDAR as the origin, the shape and size of the robot are shown by the outermost solid box. Points R1 and R2 correspond to the 3D vectors of the robotSize1 and robotSize2 variables, which store the size of the robot; the stored size is slightly larger than the actual size of the robot. When the LiDAR detects points inside the robot, such as points O2 and O3, it is regarded as detecting the robot itself, and these points are automatically ignored. If the detected point lies outside the robot, such as points O1 and O4, it is regarded as an obstacle and projected into the map. The flow chart for determining whether a point (taking O1 as an example) lies within the robot in 3D space is shown in
Figure 20.
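A minimal sketch of this check, treating robotSize1 and robotSize2 as the two corner vectors R1 and R2 of the robot's bounding box expressed relative to the LiDAR centre O (the method name and exact comparison are illustrative, not taken from the original scripts):

```csharp
// Illustrative method fragment: is a detected point (in the LiDAR's frame)
// inside the robot's own bounding box spanned by robotSize1 and robotSize2?
static bool IsOnRobot(Vector3 hitLocal, Vector3 robotSize1, Vector3 robotSize2)
{
    return hitLocal.x >= Mathf.Min(robotSize1.x, robotSize2.x) &&
           hitLocal.x <= Mathf.Max(robotSize1.x, robotSize2.x) &&
           hitLocal.y >= Mathf.Min(robotSize1.y, robotSize2.y) &&
           hitLocal.y <= Mathf.Max(robotSize1.y, robotSize2.y) &&
           hitLocal.z >= Mathf.Min(robotSize1.z, robotSize2.z) &&
           hitLocal.z <= Mathf.Max(robotSize1.z, robotSize2.z);
}
```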
The 3D LiDAR measurement results stored in the LiDAR variables are then processed. The first step is to determine whether a collision point lies within the range defined by the robotSize1 and robotSize2 variables. If so, the point is skipped and the next point is processed. If not, the point is corrected by rotating the 3D LiDAR’s own coordinate system into the same orientation as the world coordinate system. As shown in
Figure 21, the coordinate system X’Y’Z’ is the 3D LiDAR’s own coordinate system, the coordinate system XYZ has the same orientation as the world coordinate system, and the two coordinate systems share the same origin. The relative rotation between the two coordinate systems is given by the Euler angles of the 3D LiDAR. The Euler angles used in Unity follow the ZXY order: to transform the world-aligned coordinate system into the LiDAR’s own coordinate system X’Y’Z’, it is rotated first by γ around the Z axis, then by α around the X axis, and finally by β around the Y axis.
After the data members have been defined, the member functions of the program are defined. A variable storing the initial position of the robot is defined, and the displacement of the robot from the initial position to the current position, i.e., the 3D vector of the current position relative to the initial position, is obtained from the simulated inertial navigation sensor on the robot body. From this 3D vector, the current position of the robot can be obtained, and the localization of the robot can be realized.
The rotation matrices for rotations of α around the X axis, β around the Y axis, and γ around the Z axis take the standard elementary forms.
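For reference, since the original equations are not reproduced in this text, the elementary rotation matrices are shown below in the usual right-handed, column-vector convention; signs may differ under other conventions (for example, Unity's left-handed coordinate system):

$$R_x(\alpha)=\begin{pmatrix}1&0&0\\ 0&\cos\alpha&-\sin\alpha\\ 0&\sin\alpha&\cos\alpha\end{pmatrix},\qquad
R_y(\beta)=\begin{pmatrix}\cos\beta&0&\sin\beta\\ 0&1&0\\ -\sin\beta&0&\cos\beta\end{pmatrix},\qquad
R_z(\gamma)=\begin{pmatrix}\cos\gamma&-\sin\gamma&0\\ \sin\gamma&\cos\gamma&0\\ 0&0&1\end{pmatrix}$$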
The Euler angles of the 3D LiDAR are obtained, and rotating the LiDAR’s own coordinate system back into the orientation of the world coordinate system requires the opposite of the rotation order mentioned above: first a rotation of β around the Y axis, then α around the X axis, and finally γ around the Z axis. Thus, the rotation matrix corresponding to the Euler angles is obtained as the product of the three elementary rotation matrices applied in this order.
The detection results of the 3D LiDAR are corrected by applying this rotation matrix to the measured vectors.
The corrected 3D LiDAR result is the three-dimensional vector of the collision point relative to the center of the 3D LiDAR. Adding the lidarPos variable mentioned above gives the three-dimensional vector of the collision point relative to the center of the robot. Adding this to the three-dimensional coordinates of the robot in the world coordinate system gives the three-dimensional coordinates of the collision point in the world coordinate system, which are then projected into the map for map reconstruction. The locations of obstacles are then judged according to the map information, and the canWalk variable of the corresponding nodes is updated. Finally, the calculated path is used to control the robot’s movement. A sketch of this processing chain is given below.
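The sketch below ties the previous fragments together for a single LiDAR hit. It is illustrative only: lidarTransform, lidarPos, robot, robotSize1/2, grid, nodeDiameter, obstacleCoeff, and slopeCoeff are assumed fields of the Navigation script, grid.NodeFromWorldPoint and grid.GetNeighbours are assumed helpers, and Unity's quaternion rotation is used here in place of the explicit Euler-angle matrix product described above (assuming the LiDAR transform carries those Euler angles):

```csharp
// Illustrative method fragment: process one LiDAR hit (Section 3.4.4).
void ProjectHit(Vector3 hitLocal)                         // hit point in the LiDAR's own frame
{
    if (IsOnRobot(hitLocal, robotSize1, robotSize2))      // ignore points on the robot itself
        return;

    // Rotate the measurement from the LiDAR's own frame into the world orientation.
    Vector3 worldDir = lidarTransform.rotation * hitLocal;

    // Shift from the LiDAR centre O to the robot centre C (adding lidarPos),
    // then to world coordinates by adding the robot's world position.
    Vector3 worldPoint = robot.transform.position + worldDir + lidarPos;

    // Project into the grid map, update the node heights and its passability.
    Node node = grid.NodeFromWorldPoint(worldPoint);
    UpdateNodeHeight(node, worldPoint);
    node.canWalk = CanWalk(node, grid.GetNeighbours(node),
                           nodeDiameter, obstacleCoeff, slopeCoeff);
}
```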
5. Comparison between the Simulation Result and Test Result in the Physical Environment
In order to evaluate the simulation performance of the robot navigation simulation platform, judge its feasibility, and assess the navigation simulation accuracy, it was necessary to build a navigation accuracy measurement experiment system for the robot in a physical environment. The experimental scheme was as follows [
66]: First, using the physical prototype of the robot shown in
Figure 23 as the test prototype, the improved A* algorithm was adopted to realize the navigation control of the physical prototype in the created navigation environment. Then, the environment information was measured, and the trajectory of the robot was captured by the Optitrack optical motion capture system from NaturalPoint [
67]. Finally, the real trajectory of the robot was compared with the trajectory obtained by navigation simulation.
The navigation test system of the robot prototype using the Optitrack optical motion capture system is shown in
Figure 39. Three Optitrack Prime 13 high-speed motion capture cameras were arranged on each side of the test area. The cameras used a Gigabit Ethernet (GigE/PoE) interface for both data and power, and all cameras were connected to a Gigabit network hub with Ethernet cables. A workstation with the optical motion capture software Motive installed was connected to the hub with a cable; Motive was used for the recording, presentation, playback, and remote data services of the position data. A Hand Rigid Bodies marker set was fixed on the robot prototype to measure the spatial coordinates of the robot, and markers were affixed to the obstacles to locate them in the test environment, as shown in
Figure 39 and
Figure 40.
In the physical environment shown in
Figure 40, scattered debris was laid on the ground and covered with paper to form an undulating terrain area, and three cubic obstacles were placed. The navigation test process of the robot prototype is shown in
Figure 40. The 3D coordinates of the robot were captured and recorded by the Optitrack optical motion capture system, and the 3D test trajectory was obtained. The simulation environment shown in
Figure 41a was created according to the physical environment shown in
Figure 40. The navigation simulation of the robot in the simulation environment was carried out. The trajectory of the robot virtual prototype in the navigation simulation was recorded in real time in the virtual environment, as shown in
Figure 41b–f. The test trajectory and simulation trajectory of the robot are shown in
Figure 42. A 2D LiDAR was employed on the physical robot prototype for navigation in the physical environment, and a simulated 3D LiDAR was used on the robot virtual prototype in the navigation simulation. Since the ground in the test environment was generally flat and the obstacles were structured cubes, the detection difference between the 2D and 3D LiDARs was neglected.
From
Figure 42, it can be seen that the test trajectory curve largely coincides with the simulation trajectory curve. The two curves coincide well in the straight-line sections, but the simulation trajectory deviates from the curvature of the measured trajectory when the robot turns. The main reason for this discrepancy in the curved sections is the deviation between the physical parameters of the simulation system, such as the friction coefficient and the elastic force, and the actual parameters; in the virtual simulation environment, the inertia of the prototype makes the robot deviate from the planned curve. The physical parameters could be revised according to the actual data of the moving mechanism and ground mechanics, but this was not studied in depth here. In general, the physical navigation test substantially validates the robot navigation simulation, the results of the navigation algorithm simulation platform are credible, and the simulation accuracy is acceptable.