5.5. AMR Deployment, Achieved Operational Gains, and Practical Limitations
The preceding subsections analyzed the performance of the object detection models deployed on the AMR solutions developed for the three use cases under study. In practical real-world scenarios, however, the utility of an object detection model deployed on an AMR is not restricted to achieving actionable levels of predictive accuracy on benchmarks. By quantitatively measuring metrics such as processing speed and resource consumption, stakeholders can gain a comprehensive understanding of the model's effectiveness, its robustness under diverse conditions, and its ease of deployment. Such an analysis is essential for identifying potential improvements, ensuring the safety of the design, and optimizing the robot's performance in the targeted inspection tasks. To this end, this subsection describes the results of field trials conducted with robots equipped with vision sensors and object detection models in real-world settings, under the framework of the European ESMERA project funded by the European Commission (ref. 780265), stressing key practical aspects that were validated on-site during the trials.
We begin with Use Case A, where we recall that the main objective of the AMR solution is to avoid the indiscriminate use of glyphosate. Currently, glyphosate is sprayed at least twice a year from maintenance trains traveling at 50 km/h, and these maintenance duties disrupt the regular schedule of railway traffic. The novelty resides in using a non-chemical, non-polluting method (especially relevant given regulatory efforts to ban glyphosate), which could be mechanical or, as proposed in this work, based on laser irradiation. The robotic method is undoubtedly slower than the current one, but it aligns better with the search for clean methods.
Once the robot was deployed and run on the field tests designed in the aforementioned project (refer to the subplots in
Figure 10 for a visual summary of the process), several key performance indicators were registered. From the mechanical point of view, the deployed AMR achieved a speed of 5 km/h, with an average power consumption of less than 2 kW (including the laser, sensing, navigation, and processing systems). From the observed maneuvers, an average of 3–5 s was needed to eliminate a single detected weed, yielding a daily weed removal rate between 17,000 and 120,960 plants/day. This estimation takes into account the area irradiated by one laser head and the possibility of implementing an array of laser diodes or laser heads, with seven heads operating simultaneously on the rail track. These figures depend on the railway's status and the spatial distribution of weeds along the rail; nevertheless, they serve as a good estimate of the practical benefits when compared to manual removal.
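The reported throughput range can be reproduced with simple arithmetic. The sketch below is ours, not part of the deployed system: it assumes continuous 24-hour operation and the upper bound of 5 s per plant, which matches the reported bounds for one head (~17,000 plants/day) and seven heads (120,960 plants/day).

```python
def plants_per_day(seconds_per_plant: float, n_heads: int, hours: float = 24.0) -> int:
    """Estimated number of weeds removed per day by n_heads laser heads
    working in parallel, assuming continuous operation over `hours`."""
    return int(hours * 3600 / seconds_per_plant) * n_heads

# Single laser head at 5 s/plant -> 17,280 plants/day (the ~17,000 lower bound)
print(plants_per_day(5, 1))  # 17280
# Seven heads operating simultaneously -> the 120,960 upper bound
print(plants_per_day(5, 7))  # 120960
```

Using the faster 3 s/plant estimate would raise both figures proportionally, which is consistent with the range being quoted as a conservative envelope.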
Further gains were observed during the trials. The laser procedure prevents weeds from regrowing for at least 4 months after the intervention. From the mechanical side, the AMR system safely engages with the tracks and delivers feedback in less than 1 min, ensuring fast deployment. It can also traverse infrastructure items on the tracks lower than 30 cm in height; in other words, any item that a train can pass over can likewise be cleared by the robot in its normal operation. Finally, the AMR carries a variable number of batteries (i.e., in attachable wagons that increase its navigational autonomy), so that it can work during a complete working shift (8 h) without recharging or swapping the battery packs.
Apart from radically changing the weed removal method (from manual to automated), the use of YOLO algorithms proved decisive in detecting vegetation precisely. With conventional vision algorithms (SIFT/SURF/Harris/Hough for reference point extraction, and chromatic masking to discriminate among colors, all implemented with the OpenCV software library), the false positive rate was at least 20% higher, posing a high risk of irradiating glass, cardboard, or plastic with the laser. The OpenCV pipeline relied excessively on the plant's chromatic (green) component and was too permissive with respect to the morphology of the vegetation, i.e., it overgeneralized and did not effectively suppress false positives. YOLO handles ambiguous cases much better, reducing the number of false positives among detected objects by at least 20%.
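The failure mode of the chromatic baseline can be illustrated with a minimal green-dominance rule in the spirit of the OpenCV masking approach. This is an illustrative sketch: the 1.2 threshold and the sample RGB values are our assumptions, not the deployed pipeline's parameters.

```python
def is_vegetation_chromatic(pixel):
    """Naive chromatic rule akin to the masking baseline: flag a pixel as
    vegetation whenever its green channel clearly dominates red and blue."""
    r, g, b = pixel
    return g > 1.2 * r and g > 1.2 * b

# A weed pixel and a green plastic bottle pixel both pass the test,
# illustrating why a color-only rule irradiates non-vegetation objects.
weed = (60, 140, 50)
green_plastic = (40, 180, 90)
gravel = (120, 115, 110)
print([is_vegetation_chromatic(p) for p in (weed, green_plastic, gravel)])
# [True, True, False]
```

A learned detector such as YOLO additionally conditions on shape and texture, which is what suppresses the green-plastic false positive here.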
The usual procedure for the inspection of the containers first requires that the AGV navigates to the container. Once there, the AGV circles around the container (Figure 11c) in a first inspection pass, with the cameras installed in the liftable receptacle (black box) (Figure 11b,f). At a specific point, the elevator stops, opens the deployment ramp, and lets the surface inspection robots exit onto the top of the container (Figure 11c): it first releases a robot from the first floor of the receptacle, then raises the elevator again to let the second robot out. As the receptacle elevates, its side cameras acquire lateral images (Figure 11e). The robots then inspect the top of the container (Figure 11a,c,g) while the AGV continues to circle it, concurrently inspecting the sides. Finally, the AGV raises the lift pod again to pick up the robots: it opens the access ramp so that the first robot enters the receptacle, then lowers the receptacle slightly and deploys the second ramp so that the second robot can enter (Figure 11e).
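The procedure above is strictly sequential except for the simultaneous top-and-side inspection phase. As a hypothetical sketch (the phase names are ours, for illustration only), it can be encoded as an ordered sequence of mission phases:

```python
from enum import Enum, auto

class InspectionPhase(Enum):
    """Ordered phases of the container inspection mission (illustrative names)."""
    NAVIGATE_TO_CONTAINER = auto()
    CIRCLE_AND_SCAN_SIDES = auto()          # first pass with receptacle cameras
    DEPLOY_TOP_ROBOT_1 = auto()             # ramp opens at the stop point
    RAISE_AND_DEPLOY_TOP_ROBOT_2 = auto()   # side cameras capture lateral images
    CONCURRENT_TOP_AND_SIDE_INSPECTION = auto()
    RECOVER_TOP_ROBOTS = auto()             # ramps reopened, robots re-enter

for phase in InspectionPhase:
    print(phase.value, phase.name)
```

Keeping the mission as an explicit ordered state machine makes it straightforward to log which phase each stored image belongs to, supporting the traceability goals discussed below.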
In the second use case, the field trials revealed an unexpected limitation of the devised solution: the AMR was unable to inspect the sides of containers placed adjacent to each other, sides that are equally inaccessible to conventional manual visual inspection. In this case, the following steps were taken:
The container targeted for inspection was separated from the others to allow access to its sides. In the port terminal, containers are in constant motion as they are loaded and unloaded from ships; therefore, although this extra container movement slowed down the inspection and was inconvenient, it was not a critical maneuver for port operations.
If the container was empty, it was inspected from the inside for light leaks (Figure 11h), indicating the presence of a hole. This workaround only allowed identifying hole defects.
As a result of our field trials in Use Case B, the AMR could not detect defects more effectively than the port experts: the port premises house very experienced operators who directly understand the potential causes of each defect. However, the method did achieve one of the desired safety outcomes by removing the need for operators to climb to the top of the containers. Moreover, by automating the process, we enhanced the digitization of the entire workflow and its data, because the images sent and stored by the system are useful for the traceability of the inspection process and the accountability of the decisions made. In all cases, operators decide whether to remove a container from circulation and set it aside for repair; the developed AMR system provides an informational database that can be used to safely validate such decisions.
From a mechanical perspective, one of the biggest limitations identified during the trials emerged when the upper robots moved from one container to another across the top. The initial idea was to let them move on their own in areas with many containers placed close together, traversing all the containers by navigating over the small gaps between them. This did not work as expected: although the containers were close enough together to render lateral inspection infeasible (as noted above), there was too much space for the top AMR to cross from one container to the next relying solely on its tracks without falling or becoming stuck between the two. To avoid this issue, the containers would have to be placed less than three or four centimeters apart, but many of them were slightly more separated than this critical distance. The underlying trade-off between the maneuverability of container deployment in the port premises and the autonomy of the AMR to navigate across contiguous assets has captured the interest of the port authority's management and is expected to drive applied research studies in the future.
When it comes to the object detection model itself, a common problem occurred with containers damaged by large dents, i.e., those covering almost an entire side panel. Models trained to identify such defects ended up flagging virtually any container structure as a defect, annotating entire panels as damaged. The reason for this detection failure is twofold: (1) the visual appearance of a large dent varies significantly with the viewing angle, which can be challenging even for the human eye; and (2) a large dent provides less chromatic information than a small one, e.g., a scratch removing a significant amount of paint from the container's surface. We envision that, for this particular type of defect, the AMR should be equipped with additional sensors, increasing the cost of the overall robotic approach.
Despite these unexpected setbacks, the test trials of Use Case B demonstrated the improved safety of automating the entire operation rather than performing it manually. The key for port operators to embrace this solution was the incorporation of AI-empowered object detection models for the defects; otherwise, the performance gap with respect to expert visual inspection would have been too large for the AMR-based approach to be of any practical usefulness.
The cargo transport operation tackled in Use Case C involved a maneuver that none of the operators wanted to perform: driving the AGV so close to the walls of the truck (where they could hardly see anything) that very few of them managed to do it without bumping into the sides of the cargo truck. Most operators struggle with orientation; they start moving the forks inside, but often end up stuck inside the truck, requiring many maneuvers to deposit the load. Only minimal correction maneuvers are possible inside the truck, both laterally and angularly, so the entry angle must be precisely defined beforehand, taking into account that the truck is not always positioned the same way in the bay: the driver parks it using references in the bay, but there is always some lateral and angular displacement that complicates the loading maneuver. For manual loading this displacement is irrelevant; for the AGV to operate autonomously, however, it is crucial that the maneuver is planned in advance. In this case, the AI-based object detector indicates whether the AGV is correctly aligned with the trailer. Upon a positive response, the angle at which the truck has been docked can be computed and the AGV's pose adjusted to match it. The object detector helps identify the shapes within the point cloud that are characteristic of the bay entrance and the rear of the trailer, and indicates whether the AGV is correctly oriented.
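Once the detector has localized the trailer's rear edge in the point cloud, one standard way to recover the docking angle is a least-squares line fit to the edge points. The sketch below is illustrative only (the function and sample points are ours; the deployed system derives the pose from YOLO detections on images rendered from the LiDAR data):

```python
import math

def trailer_yaw_from_points(points):
    """Least-squares line fit to 2-D points sampled along the trailer's rear
    edge; returns the edge's yaw angle (rad) relative to the bay's x-axis."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points)
    sxx = sum((x - mx) ** 2 for x, _ in points)
    return math.atan2(sxy, sxx)  # slope angle of the fitted line

# Synthetic rear edge rotated 3 degrees with respect to the bay axis
edge = [(x, math.tan(math.radians(3)) * x) for x in range(-10, 11)]
print(round(math.degrees(trailer_yaw_from_points(edge)), 1))  # 3.0
```

The recovered yaw is the angular displacement that the AGV must compensate before entering the trailer, since only minimal corrections are possible once inside.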
To verify the operational performance of the robotic solution devised to address this use case, a metallic structure was constructed to simulate the load to be deployed by the AGV inside the trailer (
Figure 12a–d). Once inside the trailer, measurements were taken with the lateral ultrasound and LiDAR sensors installed in the structure (
Figure 12e,f). It should be noted that, in a fully operational deployment, the same sensors are located in similar positions on a palletizer rather than on the aluminum frame used in our experiments. In addition, the robotic forklift is a pallet truck with a higher load capacity (
Figure 1c) because it must lift several tons.
In this case, once the robot has entered the truck perfectly aligned and correctly oriented, it advances 1 cm at a time inside the container, moving slowly but steadily until it deposits the load; when exiting, it proceeds likewise in reverse. This is only possible if perfect orientation and motion are ensured upon entry. In our field trials, the detection of a correct or anomalous pose of the truck in the bay, performed by the AI-based approach on images generated from the point cloud data captured by the LiDAR sensor, proved very valuable for safely completing the loading maneuver. However, the machine also failed to autonomously correct minimal angular and lateral deviations inside the truck: despite the slow-motion dynamics imposed on the robotic solution (1 cm per cycle), the correction was not successfully completed in several spatial configurations, so the AGV ended up hitting the lateral panels of the truck, causing catastrophic structural damage due to the high inertia of its load. In such cases, the safeguard was a proximity-based collision detection using the ultrasound sensors, which triggered an emergency stop in the AGV. The pose estimation method (based on YOLOv8, which elicited the best detection performance for this use case), together with the pose correction prior to the entrance maneuver, was found to be the only effective way to perform a correct entry of the AGV into the truck. This, combined with the lateral collision avoidance system, comprised the overall automated maneuvering system that led to satisfactory results in the conducted field tests.
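The combination of the 1 cm advance cycle and the ultrasound safeguard can be summarized as a per-cycle decision rule. This is a hypothetical sketch of the control logic (the function name, thresholds, and distances are our assumptions, not the deployed controller's values):

```python
def docking_cycle(depth_cm, target_depth_cm, left_us_cm, right_us_cm,
                  step_cm=1.0, min_clearance_cm=5.0):
    """One 1 cm control cycle inside the trailer. Returns the next action:
    'emergency_stop' if an ultrasound sensor reports imminent lateral
    contact, 'deposit' once the target depth is reached, else 'advance'."""
    if min(left_us_cm, right_us_cm) < min_clearance_cm:
        return "emergency_stop"      # proximity-based collision detection
    if depth_cm + step_cm >= target_depth_cm:
        return "deposit"             # load can be set down this cycle
    return "advance"                 # keep creeping forward 1 cm

print(docking_cycle(100, 600, 20, 18))    # advance
print(docking_cycle(100, 600, 4, 18))     # emergency_stop
print(docking_cycle(599.5, 600, 20, 18))  # deposit
```

The rule captures why the pre-entry pose correction is critical: the cycle can only stop or creep forward, so any residual misalignment accumulates until the clearance check fires.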