1. Introduction
In recent years, the use of autonomous mobile robots (AMRs) has been increasing in factories and distribution warehouses to address labour shortages, reduce the burden on workers carrying heavy objects, and improve work efficiency. Examples include Amazon Robotics Kiva [1], which is used in Amazon distribution warehouses, and OMRON’s automated guided vehicle [2], which is used mainly in factory automation and logistics.
Unmanned guided vehicles (UGVs) fall into two main categories: the guided type and the SLAM type. The guided type, traditionally employed across diverse sectors, relies on markers such as magnetic tape laid on the floor; equipped with sensors, these vehicles detect the markers and navigate autonomously to their designated destinations. Conversely, the SLAM type uses laser-based environment measurement devices such as 2DLidar or 3DLidar to construct real-time maps of the surroundings. These maps facilitate self-position estimation and navigation along specified routes.
According to data from the Japan Industrial Vehicles Association, total deliveries and exports of automatic guided vehicle systems in Japan reached 894 in 2020, following a continuous increase from 2013 to 2019 [3]. Notably, the proportion of the “magnetic type” decreased to 81.2%, while other types, including SLAM and mark-recognition systems, increased to 4.0% compared with the previous year. This trend underscores the growing interest in unmanned guided vehicles, particularly for factory automation purposes.
In recent years, significant research and development efforts have focused on SLAM technology. This approach, utilizing laser range sensors for simultaneous self-position estimation and environment mapping, has garnered considerable attention [4,5]. These advancements hold promise for enhancing the efficiency and capabilities of unmanned guided vehicles across various industrial applications. SLAM technology has transcended its traditional industrial applications and found utility beyond factory settings [6,7]. For instance, iRobot’s Roomba 980 cleaning robot efficiently navigates rooms using SLAM technology [8].
As it stands, both magnetic and SLAM-based systems require modifications upon introduction or whenever transportation route layouts are altered. In the case of the magnetic type, the embedded magnetic tape must be removed and reinstalled when adjustments are made. Similarly, SLAM systems require the transport vehicle to be operated manually so that the work environment can be comprehensively mapped and an environment map constructed, which undermines the intended labour savings. Moreover, because the SLAM method estimates self-position by matching laser range sensor measurements of the surroundings against the environmental map, the presence of many disturbances, such as people or obstacles, can prevent matching and leave the position undetermined [9,10].
A possible solution to the challenges posed by SLAM technology is visual SLAM [11,12,13]. Unlike traditional SLAM methods that rely on laser-based environment measurement devices, visual SLAM utilizes cameras to construct real-time maps and estimate the vehicle’s position [14,15]. This approach offers several advantages, chiefly enhanced adaptability to dynamic environments. A disadvantage of visual SLAM is its limitation to 2D mapping. While visual SLAM excels at capturing rich environmental information, particularly texture and lighting, it struggles to represent three-dimensional spaces accurately. This limitation poses challenges in scenarios where precise depth perception and volumetric mapping are essential, such as navigating complex industrial environments or outdoor terrain with varied elevations. Despite advancements in visual SLAM algorithms and hardware, achieving accurate 3D mapping remains a significant hurdle [12,13]. Reliance on visual cues alone can lead to errors in depth estimation and spatial understanding, especially in environments with limited visual features or occlusions. Other explorations in the visual SLAM field have adopted machine learning approaches utilizing YOLO, convolutional neural networks, and semantic segmentation, among others [7,16,17,18]. In light of the limitations of visual SLAM, 3D SLAM emerges as a superior solution, albeit with certain drawbacks, particularly in terms of labour intensity. The current study proposes the construction of maps from 3D SLAM.
The review of the literature on UGVs encompasses a diverse array of studies and developments. L. Zhang et al. delved into visual SLAM technologies specifically tailored for autonomous driving vehicles, shedding light on advancements in this critical area [12]. H. Azpúrua et al. contributed to the literature by exploring state-of-the-art robotic exploration techniques, including both single and cooperative approaches with homogeneous and heterogeneous teams, offering insights into collaborative robotics systems [19].
Addressing the crucial localization problem, M. Guan and C. Wen proposed a novel method utilizing an extended Kalman filter in conjunction with particle filters to accurately estimate the robot’s state based on localized Ultra-Wideband (UWB) beacons [20]. X. Zhao and C. C. Cheah presented an innovative Building Information Model (BIM)-based indoor robot initialization system, employing object detection to autonomously initialize mobile robots in unknown environments [21]. These contributions offer valuable solutions to enhance UGV localization and initialization processes.
Additionally, advancements in material handling systems were highlighted by Harsh and S. S. Dhami, who developed a smart material handling system capable of autonomous operation and seamless integration with other manufacturing subsystems, thus optimizing industrial processes [22]. Meanwhile, Ihtisham Ali et al. introduced a novel challenging dataset aimed at providing a comprehensive testing landscape for mobile robotics, autonomous driving research, and forestry operations, facilitating advancements in these domains [23].
Furthermore, Z. Zhou et al. proposed an object detection solution based on deep learning applied to 3D point clouds, enhancing the automation of Small- and Medium-sized Enterprise (SME) production processes through collaborative mobile robot manipulation [24]. M. Luperto et al. contributed a method for refining 2D grid maps by incorporating predictive layouts of rooms behind closed doors, improving mapping accuracy in indoor environments [25].
Moreover, G. Fragapane et al. conducted a comprehensive literature review on the planning and control of AMRs in intralogistics, elucidating the impact of technological advancements on decision-making processes in this domain [26]. A. Antonio et al. presented the methodology and outcomes of the MoPAD2 robot, an autonomous, cost-effective mobile platform equipped with a thermal scanning platform, showcasing practical applications in digitization efforts [27]. S. Park et al. introduced a novel vision-based global localization approach utilizing hybrid maps of objects and spatial layouts, contributing to the advancement of localization technologies [28]. Localization has also been addressed in numerous other studies touching on different aspects of robotics [29,30].
Both 2D SLAM and 3D SLAM rely on the fundamental process of generating a map of the environment they traverse and subsequently executing autonomous movements based on this map. Consequently, manual operation of the transport vehicle and thorough environmental mapping are essential prerequisites. This labour-intensive requirement underscores the necessity for labour-saving measures to streamline the process.
In our study, we employ a novel approach for self-position estimation by extracting data from the wall surface using point cloud information obtained from both the layout map and 3DLidar technology. This data fusion technique allows us to derive precise positional estimates, crucial for autonomous vehicle navigation. Subsequently, we implement our proposed self-localization method with the ultimate goal of achieving autonomous movement. By integrating advanced sensor data processing techniques with state-of-the-art localization algorithms, we aim to enable unmanned vehicles to navigate their environments autonomously and efficiently.
The proposed automatic guided vehicle system is designed to alleviate the manual labour associated with environmental measurement in existing systems. The system incorporates essential functionalities for autonomous movement, including the conversion of layout map data into point cloud maps for self-location estimation, route planning, route following, and obstacle detection and avoidance. By integrating these features, we ensure smooth and efficient navigation of guided vehicles within their operating environment. Additionally, we prioritize user-friendly operation by designing an intuitive interface that simplifies the process from loading the layout map to initiating autonomous movement. This user-centric approach enhances usability and facilitates seamless integration of our system into various industrial applications.
2. Materials and Methods
The proposed automatic guided vehicle system aims to streamline the manual environmental measurement work inherent in existing systems by leveraging self-position estimation and autonomous movement based on a layout map. Following environmental measurement, the robot autonomously constructs an environmental map, mirroring the process of the existing system. Our approach focuses on leveraging the wall shape of the factory for self-position estimation. By utilizing point cloud information obtained from 3DLidar, the layout map, and odometry from wheels, we extract point cloud data specifically for wall surfaces. Subsequently, we employ Monte Carlo localization using a particle filter, utilizing the degree of matching between the layout map and the converted point cloud information for estimation [4].
The methodological framework of the proposed automatic guided vehicle system is delineated into three primary components: sensors, map, and path planning, as described in Figure 1.
The sensor segment encompasses the integration of various sensors, notably the odometry and Lidar systems. The odometry system, incorporating a magnetic encoder, facilitates precise calculations of the vehicle’s movements, transmitting data to the microcomputer via SPI communication. Concurrently, the Lidar system, comprising 3DLidar technology, captures detailed point cloud data of the environment, crucial for self-position estimation as well as obstacle detection. The self-position and obstacle information is passed on to the subsequent segments to achieve the overall goal.
In the map segment, the layout map is processed for use in self-position estimation. Utilizing point cloud information acquired from 3DLidar together with the layout map, the methodology focuses on refining the self-position estimate while capturing dynamic environmental detail. Extracting point cloud data exclusively for wall surfaces enhances the precision of the self-position estimation. Additionally, Monte Carlo localization with a particle filter improves the accuracy of the estimate based on the alignment between the layout map and the converted point cloud data.
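To make this localization step concrete, the following is a minimal sketch of one Monte Carlo localization cycle, assuming a 2D occupancy grid derived from the layout map and wall points already extracted from the 3DLidar scan. The function name, the Gaussian motion-noise model, and the hit-ratio weighting are illustrative choices for a sketch, not the exact implementation used in this work.

```python
import numpy as np

def mcl_update(particles, weights, odom_delta, wall_points, grid, resolution,
               motion_noise=(0.02, 0.02, 0.01)):
    """One Monte Carlo localization cycle: predict with odometry, weight each
    particle by how well the extracted wall points match the layout-map grid,
    then resample.

    particles   : (N, 3) array of [x, y, theta] hypotheses in map coordinates
    weights     : (N,) importance weights
    odom_delta  : (dx, dy, dtheta) motion since the last update (robot frame)
    wall_points : (M, 2) wall points extracted from the 3DLidar scan (robot frame)
    grid        : 2D boolean array, True where the layout map contains a wall
    resolution  : metres per grid cell
    """
    n = len(particles)

    # Predict: apply odometry with additive Gaussian noise per particle
    dx, dy, dth = odom_delta
    noise = np.random.randn(n, 3) * motion_noise
    cos_t, sin_t = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    particles[:, 0] += cos_t * dx - sin_t * dy + noise[:, 0]
    particles[:, 1] += sin_t * dx + cos_t * dy + noise[:, 1]
    particles[:, 2] += dth + noise[:, 2]

    # Weight: fraction of scan points that land on wall cells of the layout map
    for i, (x, y, th) in enumerate(particles):
        c, s = np.cos(th), np.sin(th)
        px = x + c * wall_points[:, 0] - s * wall_points[:, 1]
        py = y + s * wall_points[:, 0] + c * wall_points[:, 1]
        ix = (px / resolution).astype(int)
        iy = (py / resolution).astype(int)
        inside = (ix >= 0) & (ix < grid.shape[1]) & (iy >= 0) & (iy < grid.shape[0])
        hits = grid[iy[inside], ix[inside]].sum()
        weights[i] = hits / max(len(wall_points), 1) + 1e-9
    weights /= weights.sum()

    # Resample: systematic resampling keeps the particle count constant
    positions = (np.arange(n) + np.random.rand()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.minimum(idx, n - 1)
    particles[:] = particles[idx]
    weights[:] = 1.0 / n

    # Pose estimate: mean of the resampled particle set (naive for the angle)
    return particles.mean(axis=0)
```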
The final part of the methodology delineates the path planning process, encompassing target goal determination, pathfinding algorithms, and autonomous operation. Through comprehensive analysis and utilization of the constructed environment map, our system autonomously identifies target goals, employs pathfinding algorithms to determine optimal routes, and executes autonomous operations, ensuring efficient and obstacle-free navigation. The second and third segments are described in dedicated sections below.
2.1. The Hardware Unit
Figure 2 shows the setup of the automated guided vehicle system. In this case, the setup utilized a two-wheel mobile cart equipped with 3DLidar and a PC. An off-the-shelf bogie was used; it was electrified by replacing the front wheels with free-wheel casters and the rear wheels with geared DC motors, as shown in Figure 2. The configuration of the motor drive is shown in Figure 3. A magnetic encoder was attached to the motor shaft, as seen in Figure 3b. A magnetic encoder operates by attaching a magnet to the motor shaft; a sensor fixed to the frame then measures the rotation angle without contact. The acquired rotation angle is used to estimate the amount of movement of the guided vehicle.
The motor driver receives the PWM signal and rotation direction information from the microcomputer and drives the motor accordingly. Commands to the microcomputer are sent by serial communication from the PC, which makes it possible to control the trolley body from the PC. In addition, the encoder attached to the motor sends the rotation angle to the microcomputer via SPI communication, and the microcomputer forwards it to the PC via serial communication. Details of each component are given in Table 1.
2.2. Odometry
An odometry system was implemented using a magnetic encoder, which communicated with the microcomputer via SPI communication. This system calculated both the angle and number of rotations within the microcomputer and communicated this information to the computer via serial communication.
The relationship between odometry and the rotational speed of the left and right wheels is shown in Equation (1). Let $\theta_R$ be the rotation angle of the right wheel and $\theta_L$ be the rotation angle of the left wheel. Let $v_L$ and $v_R$ be the velocities of the left and right wheels in the external coordinate system, and let $r$ be the radius of the wheel.

$$v_R = r\dot{\theta}_R, \qquad v_L = r\dot{\theta}_L \quad (1)$$

If the robot’s turning angular velocity is $\omega$, the robot’s speed is $v$, and the turning radius is $\rho$, then the robot speed $v$ can be stated as (2).

$$v = \rho\,\omega \quad (2)$$

where $\omega$ is the robot’s turning angular velocity, $v$ is the robot’s speed, and $\rho$ is the turning radius. If the distance between the wheels of the robot is $d$, Equation (3) is obtained.

$$v_R = \left(\rho + \frac{d}{2}\right)\omega, \qquad v_L = \left(\rho - \frac{d}{2}\right)\omega \quad (3)$$

Based on the above equations, the robot’s velocity $v$ and turning angular velocity $\omega$ are obtained from the encoder angles attached to the motors, as shown by Equation (4).

$$v = \frac{r\left(\dot{\theta}_R + \dot{\theta}_L\right)}{2}, \qquad \omega = \frac{r\left(\dot{\theta}_R - \dot{\theta}_L\right)}{d} \quad (4)$$
The velocity and turning angular velocity obtained from the above calculations were used to update the estimated position during self-position estimation.
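A minimal sketch of how Equations (1)–(4) translate into a dead-reckoning pose update is given below, assuming the encoder angle increments are read once per control cycle; the function and variable names are illustrative, and the Euler integration is one simple choice for the update step.

```python
import math

def update_pose(x, y, theta, d_theta_r, d_theta_l, r, d, dt):
    """Dead-reckoning update for a differential-drive cart.

    d_theta_r, d_theta_l : wheel rotation increments over dt [rad], from the encoders
    r : wheel radius [m],  d : distance between the wheels [m]
    """
    v = r * (d_theta_r + d_theta_l) / (2.0 * dt)      # Equation (4): linear velocity
    omega = r * (d_theta_r - d_theta_l) / (d * dt)    # Equation (4): angular velocity

    # Integrate the pose over one cycle (simple Euler step)
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta
```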
2.3. Point Cloud from 3DLidar for Wall Extraction
To estimate self-position from the layout map, the layout map of the walls and the wall point cloud data extracted from the point cloud measured by 3DLidar were employed. The self-position was determined based on the degree of matching between these data sets. An algorithm for extracting wall surfaces was developed to ensure stable measurements indoors. This algorithm exploits the fact that the laser from the 3DLidar often reaches the wall over the top of equipment or through gaps. Processing was performed within the 3DLidar Velodyne VLP16’s measurement cycle of 10 Hz. The 3D point cloud data measured indoors are shown in Figure 4.
The wall extraction algorithm is illustrated in Figure 5a. The algorithm defines a region of interest and separates the measured points by the number of points to create the blue fan-shaped area. The remaining point clouds are down-sampled using a voxel grid filter to reduce and equalize the number of points. The processing results are shown in Figure 5b.
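The following is a minimal sketch of such a wall-extraction and down-sampling step, assuming one 3DLidar scan is available as an (N, 3) NumPy array. The sector-based selection of far returns is an illustrative interpretation of the fan-shaped partition, and the Open3D voxel grid filter stands in for the filter used in this work.

```python
import numpy as np
import open3d as o3d

def extract_wall_points(points, n_sectors=360, roi_radius=30.0, voxel_size=0.2):
    """Illustrative wall extraction: keep, for each angular sector around the
    sensor, the farthest returns inside the region of interest (these tend to be
    wall hits seen over or between equipment), then equalize point density with
    a voxel grid filter.  `points` is an (N, 3) array from one 3DLidar scan."""
    xy = points[:, :2]
    ranges = np.linalg.norm(xy, axis=1)
    in_roi = ranges < roi_radius
    points, xy, ranges = points[in_roi], xy[in_roi], ranges[in_roi]

    # Assign each point to an angular sector (the fan-shaped partition)
    angles = np.arctan2(xy[:, 1], xy[:, 0])
    sector = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors

    # Keep points whose range is close to the maximum range of their sector
    keep = np.zeros(len(points), dtype=bool)
    for s in range(n_sectors):
        idx = np.where(sector == s)[0]
        if idx.size == 0:
            continue
        r_max = ranges[idx].max()
        keep[idx[ranges[idx] > 0.9 * r_max]] = True

    # Voxel grid filter reduces and equalizes the point density
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points[keep])
    return np.asarray(pcd.voxel_down_sample(voxel_size).points)
```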
The data derived from Section 2.2 (odometry) and Section 2.3 (3DLidar) are used to refine the proposed localization system derived from the layout maps. The following section elaborates on the generation of point clouds from aerial photographs and maps.
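As a preview of that conversion, a minimal sketch is shown below, assuming the layout map is available as a binarized image with a known metric scale. The thresholding and origin convention are illustrative assumptions for the sketch, not the paper's actual procedure, which also draws on aerial photographs.

```python
import numpy as np
from PIL import Image

def layout_map_to_point_cloud(image_path, metres_per_pixel, wall_threshold=128):
    """Convert a layout map image into a 2D point cloud of wall points.
    Pixels darker than `wall_threshold` are treated as walls; each wall pixel
    becomes one point at the corresponding metric coordinate.  The map origin
    is placed at the lower-left corner of the image."""
    img = np.asarray(Image.open(image_path).convert("L"))
    rows, cols = np.where(img < wall_threshold)           # wall pixels
    x = cols * metres_per_pixel
    y = (img.shape[0] - 1 - rows) * metres_per_pixel       # flip row axis to metric y
    return np.column_stack([x, y])
```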
4. Verification and Discussion
In this section, the focus lies on the verification of the created layout map intended for self-location estimation, alongside the validation of system performance. The proposed system was validated against the existing 3D-SLAM Autoware version 1.14.0. Autoware is open-source software designed for autonomous driving systems, operating on Linux ROS [33]. Normal Distributions Transform (NDT) Matching, utilized within Autoware, is a conventional 3D-SLAM method based on point cloud scan matching. It represents point clouds using normal distributions within defined regions. This representation enables efficient and accurate registration of overlapping point clouds, facilitating precise determination of a robot or vehicle’s position within its environment. NDT matching is particularly advantageous in applications requiring high levels of localization accuracy, such as autonomous navigation systems. Further details can be found in [34].
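To make the NDT idea concrete, the following is a minimal NumPy sketch of the per-cell normal-distribution representation and the alignment score, assuming both clouds are given as (N, 3) arrays. A full NDT matcher additionally optimizes the pose that maximizes this score; that optimization, and Autoware's actual implementation details, are omitted here.

```python
import numpy as np

def build_ndt_grid(ref_points, cell_size=1.0):
    """Represent a reference point cloud as per-cell normal distributions
    (the core idea of NDT): each occupied cell stores the mean and inverse
    covariance of the points falling inside it."""
    cells = {}
    keys = np.floor(ref_points / cell_size).astype(int)
    for key in np.unique(keys, axis=0):
        pts = ref_points[(keys == key).all(axis=1)]
        if len(pts) >= 5:                      # need enough points for a stable covariance
            cov = np.cov(pts.T) + 1e-3 * np.eye(3)
            cells[tuple(key)] = (pts.mean(axis=0), np.linalg.inv(cov))
    return cells

def ndt_score(scan_points, cells, cell_size=1.0):
    """Score an already-transformed scan against the NDT grid: the higher the
    sum of Gaussian likelihoods, the better the alignment."""
    score = 0.0
    for p in scan_points:
        key = tuple(np.floor(p / cell_size).astype(int))
        if key in cells:
            mu, inv_cov = cells[key]
            d = p - mu
            score += np.exp(-0.5 * d @ inv_cov @ d)
    return score
```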
The experiment involved the utilization of data acquired by manually relocating a mobile trolley and subsequently measuring its position to verify self-position estimation. This data collection process encompassed various measurement locations, both within the confines of the factory premises and in outdoor spaces between different factory buildings. In addition, we verified the proposed self-localization method through ROS simulation utilizing the measured data.
4.1. Self-Position Estimation
The experimental results are presented below, where the point cloud map derived from the layout map is depicted as white dots, providing a spatial representation of the environment. The trajectory of the proposed self-localization system, represented by red lines, demonstrates the path estimated by our method based on the gathered data. Conversely, the trajectory generated by the conventional Autoware NDT Matching method, shown in blue lines, serves as a benchmark for comparison.
From Figure 18a,b, for indoor and outdoor settings, it becomes apparent that the trajectories of both the proposed self-localization system and the conventional Autoware NDT Matching method closely align. This alignment signifies the successful self-position estimation achieved by our method, as it concurs with the trajectory derived from the established Autoware method. Such congruence underscores the accuracy and reliability of our proposed self-localization approach.
Within the factory premises, as seen in Figure 18a, our system exhibited comparable performance, with an average horizontal position error of 1.3 m and a standard deviation of 1.5 m. Moreover, the angle error averaged 1.1 degrees, with a maximum error of 3.5 degrees, highlighting the consistent and precise self-localization capabilities of our method even in complex indoor settings, as shown in Figure 19.
Figure 18b, which shows mapping between the factory buildings, exhibits a position error averaging 1.3 m with a standard deviation of 2.2 m. This underscores the reliability of our system in outdoor environments. Additionally, the angle error averaged 1.8 degrees, showing the robustness of our method in maintaining orientation accuracy under varying conditions.
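For reference, position and angle errors of this kind can be computed as in the sketch below, assuming the proposed and Autoware trajectories are available as time-aligned arrays of [x, y, yaw] poses; the function name and the time-alignment assumption are illustrative.

```python
import numpy as np

def trajectory_errors(est_poses, ref_poses):
    """Compare two time-aligned trajectories given as (N, 3) arrays of
    [x, y, yaw].  Returns mean/std horizontal position error [m] and
    mean/max absolute heading error [deg]."""
    pos_err = np.linalg.norm(est_poses[:, :2] - ref_poses[:, :2], axis=1)
    d_yaw = est_poses[:, 2] - ref_poses[:, 2]
    ang_err = np.degrees(np.abs(np.arctan2(np.sin(d_yaw), np.cos(d_yaw))))  # wrap to [-pi, pi]
    return pos_err.mean(), pos_err.std(), ang_err.mean(), ang_err.max()
```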
The achieved accuracies are typical of what is reported in the literature. Researchers in [35] reported absolute errors from visual SLAM ranging from 0.002 to 4 m, depending on the Lidar equipment used, the correction algorithm in place, and the environment under consideration (outdoor/indoor). Another paper [36] utilized similar devices (Velodyne) and programmatic 3D reconstruction in an outdoor environment. The results showed poor localization before the engagement of the loop closure algorithm. In essence, loop closure requires that the robot circle back to the start before the final map can be generated.
A study [37] presented a SLAM system for 3DLidar-based online mapping. That study reported its output in terms of map entropy, so it is hard to compare performance directly. Notably, the authors utilized an iterative process to home in on a final map.
In general, studies of mobile (online) map generators are not easily compared since the environment is dynamic in nature. As such, one common scheme of comparison is visual inspection of the resultant point clouds [37]. Arguably, our mapping system achieved comparable or superior performance to what is currently reported.
4.2. Autonomous Movement
For the autonomous movement verification, the path used in the prior experiment was duplicated. The path began from the part transfer position in the factory, passed through the automatic door at the factory entrance, and continued to another factory entrance. The route for the autonomous movement verification is shown in the trajectory map in Figure 19. Using the same route ensured consistency in the comparative analysis with the earlier experimental results.
The route was chosen for several reasons, including its representation of real-world conditions found in industrial settings and its usefulness for assessing the autonomous movement system’s performance under various conditions. It also enabled an assessment of the system’s capacity to navigate the environmental elements and obstacles commonly encountered in production environments.
In Figure 20, the route planned for autonomous movement is the blue line and the position determined by the suggested self-localization approach is the red line. Remarkably, the trajectory showed effective independent navigation from the starting point to the target with no notable deviations. This observation demonstrates the self-localization method’s efficiency in properly directing the autonomous mobility system along a predefined course.
Furthermore, the alignment of the trajectory with the defined route, as shown in Figure 20, demonstrated the system’s capacity to move independently along the desired path. This alignment validates the self-localization method’s precision and dependability in enabling accurate and consistent autonomous mobility in complicated industrial contexts.
The graph illustrating the distance of the running position (self-position) from the specified route during autonomous movement, as seen in Figure 21, provides valuable insights into the system’s performance. Despite minor deviations, autonomous movement was achieved with an average distance of 0.12 m from the specified route, with a maximum deviation of 0.85 m. These results indicate a high degree of accuracy and precision in the system’s navigation capabilities, particularly in maintaining proximity to the intended path during operation.
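A minimal sketch of how such a deviation can be computed is given below, assuming the specified route is available as a 2D polyline of waypoints and the running positions as an array of 2D points; the function name and data layout are illustrative. The mean and maximum of the returned array correspond to the kind of figures reported above.

```python
import numpy as np

def route_deviation(positions, route):
    """Distance of each running position from a route given as a polyline.
    positions : (N, 2) estimated self-positions, route : (M, 2) waypoints."""
    a, b = route[:-1], route[1:]                      # segment endpoints
    ab = b - a
    dev = np.empty(len(positions))
    for i, p in enumerate(positions):
        # Project the point onto each segment and clamp to the segment ends
        t = np.clip(np.einsum('ij,ij->i', p - a, ab) /
                    np.einsum('ij,ij->i', ab, ab), 0.0, 1.0)
        closest = a + t[:, None] * ab                 # nearest point on each segment
        dev[i] = np.linalg.norm(p - closest, axis=1).min()
    return dev
```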
The effective execution of the suggested self-localization approach using the layout map allowed confirmation of the viability of autonomous mobility. The technology exhibited the capacity to move independently along the defined path with an impressive average variation of only 0.12 m, as seen in Figure 21a. This is a tremendous feat, especially given the complexity and dynamic nature of industrial contexts.
The accuracy of self-position estimation during autonomous movement significantly improved, with the average error decreasing from 1.3 m in the original trial to 0.68 m, as shown in Figure 21b. This improvement, roughly halving the prior error, indicates a significant refinement in the system’s capacity to precisely estimate its position in real time.
The reduction in the self-position estimation error not only improves the dependability of autonomous movement but also leads to better safety and efficiency in industrial processes. With more precise localization, the system can better respond to environmental changes and unexpected obstacles, lowering the chance of collisions and increasing navigation efficiency.
Overall, these findings represent a substantial step forward in the development of autonomous navigation systems, opening a path for their broad use in a variety of industrial applications. Moving forward, further development and optimisation of the self-localization approach has the potential to achieve even higher levels of precision and dependability, eventually improving the performance and usefulness of autonomous movement systems in real-world settings.
During the autonomous movement verification, various obstacles such as lifts and sensor-type automated doors were encountered along the defined route. To guarantee safe navigation, an obstacle detection and avoidance feature was included, halting autonomous travel until impediments were cleared. Specifically, when meeting a lift door, the algorithm correctly identified it as an obstacle and temporarily paused travel. Even as the door was opening, the system determined that the guided vehicle’s height clearance was insufficient, necessitating another brief stop. After the lift door had opened completely and the absence of obstacles had been verified, autonomous movement resumed, allowing the vehicle to enter the lift. This functional validation illustrates the system’s capacity to respond to a variety of scenario difficulties while maintaining safe and efficient autonomous mobility.
Despite possible differences between the present layout map and recent modifications to building structures, such as freshly erected prefabricated buildings, the system demonstrated robust self-position estimates. This stability was due to the system’s capacity to reliably determine self-position from the surrounding environment, even when the layout map did not perfectly coincide with the real surroundings. Thus, the verification highlights the system’s durability and flexibility, confirming its suitability for autonomous mobility in real-world circumstances.
A potential avenue for future research to enhance the path planning and map generation strategy proposed in this paper could involve focusing specifically on refining the loop closing mechanism, as adopted by other methods of online map generation from SLAM in the literature [10,37,38]. By developing advanced algorithms that leverage sensor data to detect loop closure opportunities more accurately and efficiently, researchers could improve the system’s ability to recognize previously visited locations and ensure global consistency of the map and robot trajectory. Additionally, integrating techniques such as feature-based mapping and graph-based SLAM approaches could further enhance the robustness and reliability of the loop closing mechanism. Moreover, exploring adaptive loop closure thresholds and real-time loop correction mechanisms could offer flexibility and adaptability to varying environmental conditions, ultimately leading to more precise and efficient navigation for autonomous mobile robots.
5. Conclusions
The proposed 3D SLAM autonomous mobile robot system effectively addresses labour shortages in production and distribution settings by utilizing existing layout maps, eliminating the need for manual map generation. By integrating advanced self-localization methods and a novel SLAM approach leveraging 3DLidar data and layout map information, the system achieved precise self-positioning and navigation along arbitrary paths with remarkable accuracy.
The experimental validation revealed notable reductions in self-positioning errors, with a mean deviation of 0.12 m from the intended path, showcasing the system’s precision and accuracy in autonomous navigation. Additionally, consistent autonomous movement was observed even in scenarios with slight variations in layout maps and obstacles, underscoring the system’s reliability in real-world conditions.
Featuring functionalities such as route planning, route tracking, and obstacle detection, the system demonstrated agile and adaptable navigation in dynamic environments.
Future efforts will focus on enhancing the system’s robustness through extensive validation across diverse production environments and further improving its capabilities for proactive navigation and obstacle avoidance.
In summary, this study’s findings lay the groundwork for the development of sophisticated autonomous mobile systems capable of navigating intricate environments efficiently and precisely, thereby addressing current labour challenges in production and distribution operations.