Article

Context-Specific Navigation for ‘Gentle’ Approach Towards Objects Based on LiDAR and URF Sensors

by Claudia Álvarez-Aparicio 1,*, Beáta Korcsok 2,3, Adrián Campazas-Vega 1, Ádám Miklósi 2,3, Vicente Matellán 1 and Bence Ferdinandy 2

1 Robotics Group, Universidad de León, Campus de Vegazana s/n, 24071 León, Spain
2 HUN-REN–ELTE Comparative Ethology Research Group, Pázmány Péter sétány 1/C, H-1117 Budapest, Hungary
3 Department of Ethology, Eötvös Loránd University, Pázmány Péter sétány 1/C, H-1117 Budapest, Hungary
* Author to whom correspondence should be addressed.
Robotics 2024, 13(11), 167; https://doi.org/10.3390/robotics13110167
Submission received: 27 September 2024 / Revised: 7 November 2024 / Accepted: 14 November 2024 / Published: 19 November 2024
(This article belongs to the Section Sensors and Control in Robotics)

Abstract

Navigation skills are essential for most social and service robotics applications. The robots currently in practical use in various complex human environments are generally very limited in their autonomous navigational abilities; while they can reach the proximity of objects, they are not efficient in approaching them closely. The new solution described in this paper presents a system to solve this context-specific navigation problem. The system handles locations with differing contexts based on the use of LiDAR and URF sensors, allowing for the avoidance of people and obstacles with a wide margin, as well as for approaching target objects closely. To quantify the efficiency of our solution, we compared it with the ROS contextless standard navigation (move_base) on two different robot platforms and environments, both with real-world tests and simulations. The metrics selected were (1) the time the robot needs to reach an object, (2) the Euclidean distance, and (3) the orientation between the final position of the robot and the defined goal position. We show that our context-specific solution is superior to the standard navigation in both time and Euclidean distance.

1. Introduction

Service robotics has emerged as a prominent technology in various settings, including domestic and industrial environments, owing to robots’ ability to assist humans with diverse tasks. One of the fundamental functions performed by mobile service robots is autonomously moving from one point to another while avoiding obstacles and people in their path, a process known as navigation. Navigation requires maintaining a safe distance to prevent collisions. Typically, a considerably wide safety margin is adopted to ensure the integrity of the robot and its environment. Although this strategy is effective in many instances, it may not be so in specific contexts, such as when approaching a table to deliver items, whether in office environments, private residences, or restaurants.
One of the growing markets for these types of robots is the catering industry, where robots support human waiters in bringing dishes to or from the customers’ tables. Numerous restaurants already employ waiter assistant robots, especially restaurants where the volume of dishes brought to a table is very high. Waiter assistant robots, which are currently limited in their functional abilities compared to human waiters, have the potential to fulfill important service tasks and alleviate labour shortages [1]. Utilising these robots can lead to improvements in service quality and reduced operating costs for employers [2], and they can serve as an alternative labour solution during pandemics [3]. In addition, this type of technology is a great attraction for many customers who are looking for new and innovative experiences.
Thus far, many of these robots have performed their navigation via line tracking [4] or by moving on built-in rails, as seen in the Hajime restaurant [5]. These navigation methods rely on fixed table locations, and they constrain the furniture distribution of the restaurants. However, these issues can be addressed using navigation methods that require minimal or no structural or design modifications in restaurants, favouring on-board approaches that are easily adaptable to new layouts or table placements. In addition to adaptability, cost-effectiveness, and robustness [4], the prevalence of use should also be taken into account.
It should be noted that although these robots do have the ability to deliver dishes to the table, they have a clear limitation: they cannot get close enough to the table for the customers to pick up the dishes while remaining seated. These robots incorporate a substantial margin of error for the final goal, such as stopping in the middle of the aisle and relying on customers or human waiters to retrieve the items [4].
A clear objective for improvement is for the robot to deliver dishes all the way, approaching a customer’s table close enough for them to reach the plates. This goal is closely related to autonomous navigation, obstacle avoidance, and proxemics. The optimal navigation strategy for a waiter assistant robot may vary based on the context and the desired objective. Some facets of context-aware navigation have been explored previously, focusing mainly on the social aspects of robots navigating among humans, in a field termed human-aware robot navigation [6]. The main goal of this field is creating navigational strategies and methods that increase the acceptance of robots by avoiding distressing robot behaviours (e.g., inadequate proxemic behaviours, movement noise, etc.), developing motion patterns that are intuitive to human observers, and implementing culturally relevant social conventions (see [6,7] for reviews of this topic). Human-aware navigation methods frequently utilise the cost functions of robot actions during global path planning, e.g., as costmaps to provide the necessary semantic or state information on the various features of a robot’s environment, such as the velocity and position of humans [6,8]. Both static and dynamic features can be incorporated, as seen in the navigation method of Haarslev et al. [8], which provided improved solutions to, e.g., navigating around groups of people and optimising collision avoidance with moving humans. While human-aware navigation research is imperative for service robots, navigating in restaurant environments involves other context-dependent difficulties.
Carrying items to customers presents a challenging navigation problem. The robot must initially avoid obstacles such as customers and tables, but it must then subsequently approach the table close enough to enable guests to easily access the carried items. Thus, proxemics is also a critical concern, encompassing the determination of the optimal distance and direction when approaching and interacting with humans [9,10,11,12]. These parameters also play a crucial role in navigation, such as in obstacle avoidance [13]. Overcoming these challenges requires the implementation of advanced sensors and algorithms, such as those needed to detect and avoid obstacles, map the environment, track the robot’s position, and adapt to changes in the environment. The need for such advanced software and hardware highlights the complexity and difficulty of developing efficient and safe navigation functionality in waiter assistant robots for busy restaurant environments [14].
Navigation tasks in waiter assistant robots can be achieved through model-based approaches, which utilise pre-defined maps to plan the intended trajectory. Additionally, external markers can be employed for navigation assistance, such as Radio Frequency Identification (RFID) [15,16], Quick Response (QR) codes [17], or aesthetic visual markers [18]. If the robot can orient itself based on more general environmental features, the use of designated markers can be avoided. The robot can utilise sensory inputs such as Laser Imaging Detection and Ranging (LiDAR) [19], Red Green Blue Depth (RGBD) [20], or stereo cameras for navigation based on environmental features (refer to [21] for a comprehensive review of vision-based navigation methods).
Some studies have focused on the parameterisation of navigation algorithms by treating them as black boxes [22]. Other studies have proposed various techniques for close-range table navigation, including employing a dense arrangement of passive RFID tags positioned in front of the table [23], utilising a vision system to gather information for path planning [24], or utilising short- and long-range Infrared (IR) sensors within a Robot Operating System (ROS) framework [15]. However, most of these navigation algorithms are either still in development, their descriptions lack comprehensive details, or their use requires extensive external systems within the environment.
In recent years, there has been a growing interest in machine learning approaches for robot navigation (see [25] for an extensive review), with a particular emphasis on addressing the social and behavioural aspects of navigation. This includes predicting the emotional states of nearby humans [26] or learning semantic factors [27]. While the potential of deep learning methods in robot navigation is promising, their overall superiority over traditional approaches has not been established yet [25]. Currently, Simultaneous Localisation and Mapping (SLAM) is a widely used solution for most mobile robot platforms [28] as it allows one to explore and map an unknown environment autonomously without the need for training.
The de facto standard in robotics [29,30] is the ROS framework [31]. It is composed of a set of libraries that abstract from the hardware and handle low-level communications between components and processes. The ROS framework is a distributed system based on the interchange of messages through communication channels known as topics. This abstraction allows developers to focus on the creation of software for adequate robot behaviours. ROS includes a series of libraries with basic functionalities for robots, including navigation (the move_base action server). However, this server lacks context awareness and treats all goals equally, which restricts its reconfigurability for navigation strategies that require proximity adjustments based on the situation. Therefore, the strategy of leaving a wide margin of distance in all situations is often used.
A context-specific situation assessment method could provide a general and adaptable solution for navigation. Such a method could determine the optimal navigation strategy based on the context. For example, different approaches and distance parameters could be utilised when the robot is avoiding obstacles or approaching a goal table during item delivery.
This paper presents a novel method, based on the ROS framework, to address the context-specific navigation problem of robots approaching tables. Our approach allows the navigation system to seamlessly switch between two navigation paradigms, enabling the robot to avoid obstacles and people while maintaining an appropriate distance, yet still getting close to tables. For contextless navigation, we utilised the widely used ROS contextless navigation solution (move_base). Obstacle avoidance was accomplished using a combination of a LiDAR sensor and six Ultrasonic Range Finder (URF) sensors, which are handled differently in context-specific and contextless navigation. We evaluated both approaches using three metrics: (1) the time to reach a new location, (2) the Euclidean distance, and (3) the orientation difference between the robot’s final position and the target location.

2. Materials and Methods

This section outlines the materials and methods that were employed to conduct the experiment. Specifically, we detail a novel navigation approach, as well as the environments and robots utilised to replicate and assess the experiment. The data acquisition process and the performance evaluation metrics selected to quantify the effectiveness of the new navigation approach are also presented.

2.1. Navigation Modes

To assess the effectiveness of our proposed approach, we conducted an experiment wherein two navigation modes were employed. The first mode, referred to as contextless navigation, is the standard option available in the ROS framework. The second mode, referred to as context-specific navigation, is the novel approach we propose in this study. The details of both navigation modes are presented in the subsequent sections.

2.1.1. Contextless Navigation

In this mode, the default ROS navigation stack is employed to navigate the robot towards the target table. The move_base node [32] receives the goal location and utilises the robot’s URF and LiDAR sensors as observation sources for the obstacle layer (see Figure 1). These observation sources specify which sensors provide information to the costmap (a grid that represents the robot’s vicinity, in which each cell has an assigned value indicating the difficulty of traversing that area). The URF sensors are included as observation sources because, if only the LiDAR data were employed, the robot would be more likely to collide with tabletops, as the table legs (which are detected by the LiDAR) can be situated relatively far from a table’s edge depending on the structure of the furniture. Without contextual knowledge of the environment, it is not feasible to use the URF sensor data selectively, only when the robot is approaching a table. In fact, very few objects aside from a table possess the characteristics that allow them to be detected exclusively by a sensor positioned at the URFs’ height but not at the LiDAR’s; a person, for example, would be detected by both sensors at different heights. To use the URF sensors for an emergency-stop function, a module enabling this behaviour would have to be added to the contextless navigation (move_base does not allow this type of behaviour). Even if this module were developed, using the URFs exclusively as an emergency-stop mechanism presents a critical limitation: if the robot is stopped upon approaching an obstacle (such as a table) and subsequently resumes movement under move_base control, the absence of URF data as an observation input means that move_base would be unaware of the table’s presence, thus risking collision. Therefore, in addition to implementing an emergency-stop module, an additional algorithm would be required to compute an alternative route that effectively guides the robot away from any detected obstacles.
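For illustration, a goal can be dispatched to move_base through the standard actionlib interface, as in the minimal sketch below. This is not the paper’s code: the node name, frame, and coordinates are placeholders.

```python
#!/usr/bin/env python
# Minimal sketch: sending one goal to move_base via actionlib (contextless mode).
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("contextless_goal_sender")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0      # hypothetical table-side goal
goal.target_pose.pose.position.y = 2.0
goal.target_pose.pose.orientation.w = 1.0   # facing along the map x-axis

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("move_base finished with state %d", client.get_state())
```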
The local planner used by move_base is the teb_local_planner (timed-elastic band) [33], an optimal local trajectory planner designed for navigation and control of mobile robots. During the runtime, the global path (initial trajectory) is optimised to minimise the trajectory execution time, the separation from obstacles, and the compliance with kinodynamic constraints, such as the maximum velocity and acceleration. The original approach was introduced in [34,35], and it was then later extended in [36,37] to address the issue of the timed-elastic band becoming stuck in a locally optimal trajectory when it cannot cross obstacles. The planner can switch between a subset of trajectories with distinct topologies, which are optimised in parallel using the concept of homotopy classes. Configuration files for this approach can be found in this repository (https://github.com/ferdinandyb/context-specific-table-navigation-2023/tree/main/navigation_modes/config, accessed on 15 September 2024).

2.1.2. Context-Specific Navigation

The context-specific mode of robot navigation is facilitated by an ROS action server, which is built using the SMACH python library [38]. This library enables the creation of ROS action servers that are powered by a state machine. The flowchart depicting the implementation of the context-specific mode is shown in Figure 2. In this mode, the robot’s navigation is also accomplished using move_base; however, only the LiDAR serves as an observation source, with the URFs being handled separately. The state machine runs two concurrent states, one for move_base and one for handling the URFs. An associated location database is linked to the action server, which allows the robot to be dispatched to predefined locations as an ROS action. These locations are divided into two categories: (a) free-standing locations, such as the middle of a room; and (b) close-range locations, where the robot must approach an object within a few centimetres.
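The following is a simplified SMACH sketch of this structure: a Concurrence container running a move_base child state next to a URF-handling child state. State labels, outcome names, and the placeholder goal are illustrative assumptions, not the released action server.

```python
import rospy
import smach
import smach_ros
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

class URFHandler(smach.State):
    """Monitors the URF sensors while move_base drives towards the goal."""
    def __init__(self):
        smach.State.__init__(self, outcomes=["stopped", "preempted"])

    def execute(self, userdata):
        rate = rospy.Rate(10)
        while not rospy.is_shutdown():
            if self.preempt_requested():
                self.service_preempt()
                return "preempted"
            # ... read URF ranges, trigger emergency stop / close-range control ...
            rate.sleep()
        return "stopped"

if __name__ == "__main__":
    rospy.init_node("context_specific_navigation")
    concurrence = smach.Concurrence(
        outcomes=["reached", "failed"],
        default_outcome="failed",
        outcome_map={"reached": {"MOVE_BASE": "succeeded"}},
        child_termination_cb=lambda outcome_map: True)  # first child to finish preempts the other
    with concurrence:
        smach.Concurrence.add(
            "MOVE_BASE",
            smach_ros.SimpleActionState("move_base", MoveBaseAction,
                                        goal=MoveBaseGoal()))  # placeholder goal from the location database
        smach.Concurrence.add("URF_HANDLER", URFHandler())
    concurrence.execute()
```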
When navigating to a free-standing location in the context-specific mode, the state machine primarily executes the move_base action server of ROS. The URF sensors are not included as observation sources and are only utilised for emergency stops in case the LiDAR fails to detect an obstacle (in both robots, the LiDAR is mounted at the base, while the URFs were placed at table height; see Section 2.4).
When a close-range location is designated as the navigation goal, the URF-handling concurrent state not only handles emergency stopping, but also executes the context-specific navigation code, which takes control when the robot arrives within close proximity of the goal. The twist_mux ROS node (http://wiki.ros.org/twist_mux, accessed on 15 September 2024) is used to achieve actual control of the robot; it prioritises velocity commands from the context-specific navigation code over those coming from move_base. As with free-standing locations, the URF sensors are not included as observation sources. Instead, when the robot is heading to a close-range location, the context-specific navigation code takes over if the robot is within 0.5 m of the goal and its orientation is less than 90° off from the correct orientation (which is usually perpendicularly facing the table). This characteristic reflects the robots we used in the experiment; other robots—due to the disposition of their structural elements, e.g., the placement of trays—would finish parallel to the table, but this is easily adaptable in the software. The perpendicular arrival position was used when reaching a close-range location as we assumed it to be friendlier for a potential customer if the robot arrives already facing the table, rather than reaching the position and then adjusting its orientation.
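A minimal sketch of this takeover rule is shown below; the node and topic names are assumptions, and in the real system the published commands are routed through twist_mux at a higher priority than move_base’s output.

```python
import math
import rospy
from geometry_msgs.msg import Twist

MAX_TAKEOVER_DISTANCE = 0.5          # metres
MAX_TAKEOVER_ANGLE = math.pi / 2.0   # 90 degrees

def should_take_over(robot_x, robot_y, robot_yaw, goal_x, goal_y, goal_yaw):
    """Return True when the close-range controller may override move_base."""
    distance = math.hypot(goal_x - robot_x, goal_y - robot_y)
    yaw_error = abs(math.atan2(math.sin(goal_yaw - robot_yaw),
                               math.cos(goal_yaw - robot_yaw)))
    return distance < MAX_TAKEOVER_DISTANCE and yaw_error < MAX_TAKEOVER_ANGLE

if __name__ == "__main__":
    rospy.init_node("close_range_controller")
    # Commands published on this topic win over move_base's cmd_vel because
    # twist_mux is configured to give them a higher priority.
    cmd_pub = rospy.Publisher("close_range/cmd_vel", Twist, queue_size=1)
```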
The context-specific navigation calculates a smooth curve to the goal and advances slowly until the URF sensors report reaching an obstacle. More specifically, it calculates the arc of a circle that takes the robot from its current position and orientation into the specified goal’s position and orientation. The robot moves along this arc with its speed being smoothly adjusted to remove any sudden changes in movement. The current speed and angular velocity of the robot are updated at each time step. Choosing the coordinate system attached to the robot, the following equations are derivable from a linear speed decay and elementary coordinate geometry:
$$v_r = \frac{v_0 - v_c}{\tau}\,(t - t_c) + v_c, \tag{1}$$

$$R = x_g \sqrt{1 + \frac{1}{\tan^2 \phi_g}}, \tag{2}$$

$$\omega_r(t) = \phi_g \frac{v_r}{R}, \tag{3}$$

where $x_g$ is the abscissa of the goal; $\phi_g$ is the orientation of the goal; $v_r$ and $\omega_r$ are, respectively, the current (forward) speed and angular velocity (about its own axis) of the robot, which directly drive the robot; $t_c$ and $v_c$ are, respectively, the time and the speed of the robot when switching to the context-specific navigation; and $\tau$ is a parameter controlling the speed of decay of the robot’s speed from $v_c$ to the parameter $v_0$.
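Under the reconstruction of Equations (1)–(3) above, the update can be transcribed directly into Python; the sketch below is illustrative (the function name and argument order are ours, not from the released code, and $\phi_g \neq 0$ is assumed).

```python
import math

def close_range_velocities(t, t_c, v_c, v_0, tau, x_g, phi_g):
    """Forward speed and angular velocity for the close-range arc approach."""
    # Equation (1): linear decay of the forward speed from v_c towards v_0.
    v_r = (v_0 - v_c) / tau * (t - t_c) + v_c
    # Equation (2): radius of the arc reaching the goal abscissa x_g with
    # final orientation phi_g, in the robot-attached coordinate system.
    R = x_g * math.sqrt(1.0 + 1.0 / math.tan(phi_g) ** 2)
    # Equation (3): angular velocity needed to follow the arc at speed v_r.
    omega_r = phi_g * v_r / R
    return v_r, omega_r
```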
In addition, the context-specific navigation monitors the URF sensors during the entire navigation and stops the robot if something comes within 5 cm of the robot. After an emergency stop, it tries to rotate the robot in a direction that is free of obstacles and hands back navigation to move_base if the robot is still trying to approach the proximity of the goal.
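A sketch of this emergency-stop rule is given below; the topic names are assumptions, and a real deployment would typically subscribe to one sensor_msgs/Range stream per URF sensor.

```python
import rospy
from sensor_msgs.msg import Range
from geometry_msgs.msg import Twist

EMERGENCY_STOP_RANGE = 0.05  # 5 cm

class EmergencyStop(object):
    def __init__(self):
        self.cmd_pub = rospy.Publisher("close_range/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("urf/range", Range, self.on_range)

    def on_range(self, msg):
        # Stop immediately if any URF reading comes within 5 cm of the robot.
        if msg.range < EMERGENCY_STOP_RANGE:
            self.cmd_pub.publish(Twist())  # zero velocities = full stop

if __name__ == "__main__":
    rospy.init_node("urf_emergency_stop")
    EmergencyStop()
    rospy.spin()
```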

2.2. Testbed

To evaluate the two navigation methods, experiments were conducted in both real-world and simulated environments. The simulated environments provided an idealised scenario, free from sensor noise, enabling an analysis of the methods’ effectiveness under optimal conditions. The laboratory-based tests utilised real sensor data, where the inputs may have been affected by noise caused by external factors such as illumination. Accordingly, the experiments were carried out on two different robots, which were tested both in Gazebo simulation worlds and in real environments, as detailed in the following sections.
Each experimental scenario involved two tables placed perpendicular to one another at different positions, providing a more challenging environment for robot navigation. Notably, the structure of typical tables prevents the robot from observing the table surface when it is not at the same height as the LiDAR, which can cause significant issues when the gap between the table legs is wide enough for the robot to fit through. As this situation is frequently encountered in practice, the testbeds were equipped with tables exhibiting these characteristics.

2.2.1. Leon@Home Testbed

The Leon@Home Testbed [39] is a certified testbed [40] within the Robotics Group laboratory at the University of Leon, and it is designed to evaluate service robots in a realistic home environment. The testbed consists of a single-bedroom mock-up apartment built in an 8 m × 7 m space, and it is divided into a kitchen, living room, bedroom, and bathroom by 1 m -high walls. For this experiment, the living room furniture was removed and replaced with two tables, as shown in Figure 3a. In addition, a simulation of this part of the apartment was created to assess the experiment, as depicted in Figure 3b.

2.2.2. Budapest Testbed

The Budapest testbed, while not officially certified as a testbed, is utilised for animal behaviour tests and serves as a laboratory. The testbed is an empty room measuring 5.40 m × 6.27 m , and its flooring is tiled. As illustrated in Figure 3c, tables were arranged within the room. To facilitate simulation-based experimentation, a virtual model of this testbed was developed using Gazebo, as depicted in Figure 3d.

2.3. Locations

Maps were generated for all the rooms detailed in Section 2.2. The tables were arranged in a manner such that the base-mounted LiDAR on the robot could perceive a clear passage between the legs of the tables, leaving only the upper regions of the tables visible to the URF sensors located on the robot’s torso.
For each map, we defined nine distinct locations (https://github.com/ferdinandyb/context-specific-table-navigation-2023/tree/main/navigation_modes/locations, accessed on 15 September 2024) (refer to Figure 4). The base location (indicated by a red cross in Figure 4) was positioned far away from the tables. Additionally, eight close-range locations were established (indicated by blue and green crosses in Figure 4). The close-range and free-standing concepts are defined in Section 2.1.2. The blue crosses indicate close-range locations with a 3 cm distance between the robot and the table edges, whereas the green crosses represent close-range locations situated at a slightly farther distance of 15 cm from the table edges. The distances of 3 cm and 15 cm were measured between the edge of the table and the edge of the robot. These distances were selected to facilitate the analysis of both a realistic scenario, such as a robot positioned 15 cm from the table, and an extreme case where the robot is only 3 cm away and approaching a potential collision with the table. Therefore, two different configurations were tested for each map, one with a 3 cm distance and the other with a 15 cm distance. Thus, each configuration comprised five locations: the base location (LB), Location 1 (L1), Location 2 (L2), Location 3 (L3), and Location 4 (L4).
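For illustration, a location database entry for one configuration could look like the hypothetical Python excerpt below; coordinates, yaw values, and field names are placeholders, and the actual files are in the repository linked above.

```python
# Hypothetical excerpt of a 3 cm configuration: one free-standing base
# location and close-range locations a few centimetres from the table edges.
LOCATIONS_3CM = {
    "LB": {"x": 0.0, "y": 0.0, "yaw": 0.0,    "type": "free_standing"},
    "L1": {"x": 2.1, "y": 1.3, "yaw": 1.5708, "type": "close_range"},
    # L2-L4 are defined analogously for the remaining table edges.
}
```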
In the subsequent phase, a collection of sequences was established. A sequence was composed of a series of locations that the robot would autonomously navigate to. Twelve sequences were implemented, which are presented in Table 1. These sequences were devised to account for the majority of the feasible paths that the robot might undertake between at least two points in the given setup with two tables. A “try” pertained to the execution of the twelve sequences under a specific configuration. It should be noted that all of the locations were treated as goals that the robot needed to reach before going to the next one.

2.4. Robot Hardware

In the experiments, we used two mobile service robots (both manufactured by Robotnik (Paterna, Valencia, Spain) [41]). We also conducted the experiments with simulations of both robots to evaluate the effect of the environmental noise introduced through the sensors. The software controlling the robots’ hardware is based on the ROS framework. The ROS distribution used was ROS Kinetic, as this distribution was supported by the hardware of both robots.

2.4.1. Orbi-One Robot

Orbi-One is a Robotnik RB-1 type mobile robot and is shown in Figure 5a. It is available at the Robotics Group [42] of the University of León (Spain). It accommodates several sensors, such as an RGBD camera on its head, a Hokuyo [43] (Osaka, Japan) URG-04LX-UG01 LiDAR sensor (range: 5600 mm × 240°), and an inertial unit on its wheeled base. Inside, an Intel Core i7 CPU (Santa Clara, CA, USA) with 8 GB of RAM runs the software controlling the robot hardware. To carry out our experiment, a set of HC-SR04 URF sensors (Kuongshun Electronic, China) was placed on its torso and adjusted to the table surface heights with the motors of the torso. We used the robot’s on-board LiDAR sensor and the aforementioned URFs to collect data for our research.

2.4.2. Biscee Robot

Biscee is a Robotnik RB-1 BASE type robot with a custom-built torso, as shown in Figure 5b. It is available at the HUN-REN-ELTE Comparative Ethology Research Group [44] of the Eötvös Loránd University (Hungary). It accommodates an Orbbec [45] Astra RGBD camera and a Hokuyo UST-10LX LiDAR sensor (range: 10,000 mm × 270°) on its wheeled base, as well as two cameras on the head and six HC-SR04 URF sensors on the torso. The robot is equipped with an Intel Core i5 CPU with 8 GB of RAM. The custom-built torso was designed to act as a mobile tray. The tray also holds the URF sensors, whose height can be adjusted with straps to accommodate any table; in the test scenario, they were adjusted to the height of the tables. We used the robot’s on-board LiDAR sensor and the URFs to collect data for our research.

2.5. Data Gathering

To evaluate our proposed navigation solution, we created a dataset based on the experiment conducted in the four different environments described in Section 2.2. We employed objective measurements and statistical analysis to evaluate the solution’s performance on the two types of robots in both real-world and simulated settings. The data collected during the experiment were saved in formatted log text files and made publicly available at [46] to ensure the replicability of the experiment. The robot started moving from the base location (LB) and visited the different locations according to the sequences defined in Table 1. To ensure reliable results, each robot (Orbi-One real, Orbi-One simulation, Biscee real, and Biscee simulation) performed three tries per navigation mode (context-specific or contextless) and distance (3 or 15 cm ) to the table, completing each path a total of 12 times.
The collected data were saved in CSV format and organised based on several parameters. Firstly, they were categorised based on the environment in which they were recorded, as discussed in Section 2.2. Secondly, the data were labelled based on the robot used to collect them, i.e., Orbi-One or Biscee, as described in Section 2.4. Thirdly, the location from which and the location to which the robot was navigating, i.e., LB, L1, L2, L3, and L4, were identified. Fourthly, the predefined distance between the robot and the tables, as discussed in Section 2.3, was included in the labelling process, with values of 3 or 15 cm. Finally, the navigation mode used, i.e., context-specific or contextless, was also included in the labelling process, as detailed in Section 2.1.
The experiment was evaluated using several variables that were saved and recorded in specific units as follows:
  • Time: the time in seconds (s) spent by the robot attempting to navigate from a previous position to a goal location.
  • Distance: the Euclidean distance in centimetres (cm) to the final location once the robot has stopped moving.
  • Orientation: the orientation distance in degrees (°) to the final location after the robot has stopped moving.

2.6. Evaluation

The evaluation was conducted using the dataset obtained during the data gathering process, as detailed in Section 2.5. The precision of the context-specific navigation approach was evaluated by comparing it with the contextless navigation approach. Three performance metrics were defined to evaluate the new approach.
The first metric was the time taken by the robot to reach a new location, denoted by Time. For the traversed segments, we normalised the times segment-wise by dividing them by the mean time of the contextless navigation.
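A sketch of this segment-wise normalisation with pandas is shown below; the file and column names are assumptions about a long-format table with one row per traversed segment.

```python
import pandas as pd

df = pd.read_csv("times.csv")  # assumed columns: segment, navigation_type, time
baseline = (df[df.navigation_type == "contextless"]
            .groupby("segment")["time"].mean()
            .rename("baseline_time"))
df = df.join(baseline, on="segment")
# Divide every recorded time by the contextless mean of its segment.
df["normalised_time"] = df["time"] / df["baseline_time"]
```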
The second metric was the Euclidean distance between the final position of the robot and the goal location, which was calculated using Equation (4), where $(g_{x_1}, g_{y_1})$ represents the goal position and $(r_{x_1}, r_{y_1})$ represents the final position of the robot.
$$\mathrm{Distance} = \sqrt{(g_{x_1} - r_{x_1})^2 + (g_{y_1} - r_{y_1})^2}. \tag{4}$$
The third metric evaluated was the orientation distance between the final orientation reached by the robot and the orientation of the defined location, which is calculated using Equation (5). Here, $Q_g$ represents the quaternion of the goal that the robot should reach, and $Q_r$ represents the quaternion finally reached by the robot.
$$\mathrm{Orientation} = \left\lVert \operatorname{Log}_{Q_g}(Q_r) \right\rVert. \tag{5}$$
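Both metrics can be computed, for example, with numpy and scipy as in the sketch below; the quaternion convention (x, y, z, w) and the function names are assumptions, not part of the paper’s code.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def distance_metric(goal_xy, robot_xy):
    """Euclidean distance between goal and final robot position (Equation (4))."""
    return float(np.linalg.norm(np.asarray(goal_xy) - np.asarray(robot_xy)))

def orientation_metric(q_goal, q_robot):
    """Geodesic angle between goal and final robot orientation (Equation (5)).

    Quaternions are given as (x, y, z, w); the norm of the relative rotation's
    rotation vector is the angular distance in radians.
    """
    relative = R.from_quat(q_goal).inv() * R.from_quat(q_robot)
    return float(np.linalg.norm(relative.as_rotvec()))
```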
A visual representation of the measurement of the distance and orientation metrics is presented in Figure 6. In this figure, the red cross represents a goal location, and the yellow arrow inside of it represents the corresponding orientation for this location. The orange circle represents the final position of the robot, and the blue arrow inside the circle shows the final orientation of the robot. The black dashed line represents the Euclidean distance between the location and the position reached by the robot, while the green line represents the angle difference between the orientation of the location and the orientation of the robot when it reached the final position.
We used Linear Mixed Models [47] implemented in the statsmodels python library [48,49] to compare the metrics between the context-specific and contextless navigation, with four separate models for real vs. simulation and 15 cm target vs. 3 cm target variations. The exact path (from Location Y to Location X) and the specific robot (and thus the specific testbed) were treated as random effects, e.g., “Orientation ∼ navigation_type + group” (statsmodels refers to random effects as groups).
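One of the four models could be fitted as in the minimal sketch below; the per-scenario CSV file and column names are assumptions about how the published data are organised.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("real_15cm.csv")  # hypothetical per-scenario file
model = smf.mixedlm("Orientation ~ navigation_type",
                    data=df,
                    groups=df["group"])  # path + robot identifier as the random effect
result = model.fit()
print(result.summary())
```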
The data obtained from both the real and simulated robots were aggregated for analysis, and they were then separately presented for each scenario. Each scenario included both navigation configurations (contextless or context-specific) and two kinds of locations (3 or 15 cm ). Thus, we present four different configurations for each scenario: contextless 3 cm , contextless 15 cm , context-specific 3 cm , and context-specific 15 cm . The evaluation was conducted considering all the locations as goals across the different sequences. The proposed sequences (see Table 1) cover the majority of the possible routes between a free-standing location (LB) and a close-range location (L1–L4). During evaluation, every location (free-standing and close-range) in a sequence was treated as a goal in the same way as a robot deployed in a real service environment should navigate to any type of location.

3. Results

As described in Section 2.6, the performance of the new navigation approach was assessed based on three metrics: time, distance, and orientation. Two scenarios (real and simulated) with four configurations (contextless 3 cm, contextless 15 cm, context-specific 3 cm, and context-specific 15 cm) were analysed. The results of the linear mixed models are presented in Table 2, Table 3 and Table 4, respectively. The first two columns of each table indicate the scenario (real or simulation) and the distance between the navigation locations and the table (3 or 15 cm). The third column reports the estimated difference between the contextless and context-specific navigation, with context-specific navigation as the baseline. Positive values indicate a worse performance for contextless navigation, while negative values indicate a better performance. The fourth column presents the confidence interval (CI) of this estimate, i.e., its lower and upper bounds for each metric in the corresponding scenario.
Furthermore, violin plots were used to visually represent the performance of both the contextless and context-specific navigation approaches regarding the normalised time, distance, and orientation, as shown in Figure 7, Figure 8 and Figure 9. These plots provide a concise summary of the distribution of the data, including summary statistics, the shape of the distribution, and extreme values. The plots also use kernel density estimation (KDE) to show the empirical distribution of the sample. The data are presented for both scenarios (real and simulation) and location distances (3 or 15 cm ).
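Plots of this kind can be produced, for example, with seaborn; the column names in the sketch below are assumptions about the published results table.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("results.csv")  # assumed columns: scenario, navigation_type, normalised_time
sns.violinplot(data=df, x="scenario", y="normalised_time",
               hue="navigation_type", split=True, inner="quartile")
plt.show()
```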
All these data can be found in the public repository (https://github.com/ferdinandyb/context-specific-table-navigation-2023, accessed on 15 September 2024). The data analysis can be replicated using the Jupyter notebook available in https://github.com/ferdinandyb/context-specific-table-navigation-2023/blob/main/dataset/results.ipynb (accessed on 15 September 2024), which is easily executable through the Binder available in the repository.

3.1. Time

Table 2 and Figure 7 present the results of the time metric, showing the time required by the robot to reach the target locations. The findings clearly show that context-specific navigation was faster in every scenario (even though context-specific navigation actually slows the robot down near the targets, while contextless navigation uses a constant velocity). The difference in performance was more significant when real robots were employed in the experiments.

3.2. Distance

Table 3 and Figure 8 present the measurements of the final Euclidean distance between the robot’s position and the target location. The findings demonstrate that the context-specific navigation strategy achieved greater proximity to the target location compared to the contextless navigation approach across all scenarios. The disparity between the two navigation approaches was more pronounced in real-world settings than in simulations.

3.3. Orientation

The orientation performance of the two navigation approaches is presented in Table 4 and Figure 9. It is evident that contextless navigation achieved a marginally better orientation to the target than context-specific navigation in all scenarios. However, this difference was more significant in the simulated environment and was not as pronounced with real robots.

4. Discussion

Upon examining the Tables and Figures in Section 3, the performance in each configuration can be individually observed. The difference between contextless and context-specific navigation was compared using context-specific navigation as the baseline. Positive values indicate a worse performance for contextless navigation, while negative values indicate a better performance. Moreover, it should be noted that the contextless navigation approach employed a constant velocity, while the context-specific navigation approach slowed the robot down when it was near the target location.
Focusing on the time spent by the robot to navigate from a previous position to a goal location, in all the proposed configurations (simulation—3 cm, simulation—15 cm, real—3 cm, and real—15 cm), the contextless navigation approach took more time to reach the goal compared to the proposed context-specific navigation approach. Moreover—as illustrated by Figure 7, which shows the distribution of the normalised time it took for the robots to reach their goals (normalisation was performed segment-wise, as described in Section 2.6)—not only was the context-specific approach much faster in all investigated configurations, but it was also much more reliable in producing similar results on subsequent runs.
In the case of the distance from the final position of the robot to the final target goal, Table 3 reveals that, in three of the four configurations (simulation— 3 cm , simulation— 15 cm , and real— 3 cm ) the contextless navigation made larger errors in reaching the exact goal position compared to the proposed context-specific navigation approach. In the fourth (real— 15 cm ) case, although contextless was still shown to be less accurate, the difference was not significant ( p = 0.25 ). Moreover, Figure 8 reveals that the context-specific approach was much more consistent during the various trials than the contextless approach, especially in the real world scenarios.
The last metric, orientation, measures the difference in degrees between the final orientation of the robot and the orientation of the target goal. The results obtained by the contextless navigation were slightly more accurate than those of the context-specific approach (see Table 4 and Figure 9, which show the orientation differences of the robots to the final goals). This is due to the fact that, as the bases of the robots we were working with were circular, the context-specific approach terminated if the orientation difference was within 90°.
After analysing the obtained data from each of the metrics proposed for the evaluation of this work, we considered it relevant to present some qualitative data. These were collected via visual observations during the execution of the different tests. We created a small selection (see Figure 10 and video (https://www.youtube.com/watch?v=9tnFeAcqxRE, accessed on 28 October 2024)) that shows the results obtained from this work.
The main drawback of contextless navigation comes from the move_base module. As can be seen in the video, in certain cases, especially in the tests with a distance of 3 cm to the table, move_base presented a certain limitation when reaching the goal, causing the robot to stop and replan for a long period of time. move_base utilises specific tolerance parameters for position and orientation within its local planner to generate adaptable paths towards a designated goal. Operationally, move_base continually attempts to direct the robot precisely to the intended target location. However, environmental factors (such as obstacles, dynamic changes, or slight variations in the environment) may hinder the robot from reaching this exact position. When these factors prevent an exact match with the target, move_base evaluates whether the robot is within a predefined acceptable tolerance range. If the robot is within this threshold, the goal is considered reached. Otherwise, move_base initiates a re-planning process to compute an alternative path to the goal.
These problems can manifest as issues in the behaviour of the robot in two scenarios: (1) Oscillations: move_base generates a new path but the robot remains unable to reach the goal. The robot may exhibit oscillatory movements as it repeatedly attempts to reach the target along successive paths. (2) Replanning and goal cancellation: if no feasible path can be calculated to reach the goal, then the robot stops while move_base attempts to compute alternative paths.
Also, sometimes move_base is not able to reach the goal and cancels the operation. It is important to note that the tolerance threshold we set for move_base was 10 cm . Considering the small distances we handled in the experiments, this margin was considerably wide. Even so, there were situations in which move_base failed to complete its objective.
The tolerance threshold is a parameter that determines how close a robot must get to a specified goal position for move_base to consider the goal successfully reached instead of requiring the robot to match the exact coordinates precisely. Establishing this tolerance threshold is essential; if chosen too low, measurement and motor control errors will essentially make the robot unable to reach the goal.
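For reference, this threshold corresponds to the xy_goal_tolerance parameter of the teb_local_planner; the sketch below sets the 10 cm value through the ROS parameter server, assuming the planner’s default namespace under move_base.

```python
import rospy

rospy.init_node("set_goal_tolerance")
# 10 cm positional goal tolerance, as used in our experiments.
rospy.set_param("/move_base/TebLocalPlannerROS/xy_goal_tolerance", 0.10)  # metres
```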
Moreover, it is crucial to consider the type of items that the robot will transport, as emphasised in a previous study [50]. The contextless solution exhibits oscillation issues when approaching the target in both the 3 cm and 15 cm locations, with the oscillation being more pronounced in the former case. These oscillations not only increase the time taken to reach the goal, but they can also pose challenges for carrying liquids. In contrast, the context-specific approach does not encounter any such oscillation problems.
In summary, the context-specific navigation outperformed the contextless navigation in terms of the time and Euclidean distance metrics for all scenarios considered (simulation—3 cm, simulation—15 cm, real—3 cm, and real—15 cm), except for the Euclidean distance in the real—15 cm scenario, where the mean errors were comparable. Furthermore, the presented video (https://www.youtube.com/watch?v=9tnFeAcqxRE, accessed on 28 October 2024) provides a visual summary of the limitations of the contextless versus context-specific navigation approaches. The real-world tests exhibited more noise than the simulations, which could explain why the robots performed better in the simulations. Nonetheless, the data indicate that the context-specific approach is not only faster and more accurate, but also more robust in actual environments than the contextless approach.

5. Conclusions

This paper proposes a navigation strategy that utilises LiDAR and URF sensors to facilitate robot movement towards diverse locations, changing the navigation paradigm depending on the goal. To answer the research question posed in this work, i.e., “What is the performance difference between context-specific and contextless navigation in approaching tables?”, the findings reveal that the context-specific navigation approach outperformed the contextless approach in terms of faster goal attainment and closer proximity to the goal location in almost all of the scenarios studied. However, the contextless approach outperformed the context-specific navigation in the orientation metric in all of the scenarios. Additionally, the context-specific navigation approach mitigated oscillations near tables, unlike the contextless navigation approach. Besides a better outlook for transporting liquids, the elimination of oscillation can also be significant for the social behaviour of the robot: for service robots to be perceived as social [51], they must move in a way that is perceived as sociable, and an oscillating robot is unlikely to be perceived as an appropriate social agent, as studies have already shown the importance of movement patterns in perceiving artificial objects as animate and having agency [52]. The present work has demonstrated that context-specific navigation methods can enable service robots to closely approach tables. However, further research is still needed in this field. In our future work, we aim to create a more generic solution where it is not necessary to specify where the edges of the tables are, only the locations of the tables; the robot could then approach the tables based on its perception of the environment. Moreover, we aim to investigate other context-specific features of service robot navigation, focusing on areas relating to the social aspects of human–robot interactions.

Author Contributions

Conceptualisation, C.Á.-A. and B.F.; Data curation, A.C.-V.; Formal analysis, A.C.-V. and B.F.; Funding acquisition, V.M. and B.F.; Investigation, C.Á.-A., B.K., A.C.-V., Á.M., V.M. and B.F.; Methodology, Á.M., V.M. and B.F.; Project administration, B.F.; Resources, Á.M. and V.M.; Software, C.Á.-A. and B.F.; Supervision, B.F.; Validation, C.Á.-A., B.K. and A.C.-V.; Visualisation, C.Á.-A. and B.K.; Writing—original draft, C.Á.-A. and B.K.; Writing—review and editing, A.C.-V., Á.M., V.M. and B.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the project SELF-AIR through the grant TED2021-132356B-I00 funded by MCIN/AEI/10.13039/501100011033 and by the “European Union NextGenerationEU/PRTR”, the HUN-REN-ELTE Comparative Ethology Research Group (01 031), the National Talent Programme (NTP-NFTÖ-21-B-0319), the project EDMAR through the grant PID2021-126592OB-C21 funded by MCIN AEI/10.13039/501100011033. Finally, this research is part of the project TESCAC, financed by “European Union NextGeneration-EU, the Recovery Plan, Transformation and Resilience, through INCIBE”.

Data Availability Statement

The source code of the new approach is available online under an open-source license [46]. A docker image with all the required software to test the context-specific navigation and to double check the overall evaluation posed in this paper is also available online under an open-source license [53].

Acknowledgments

The authors would like to thank Ángel Manuel Guerrero-Higueras for his support during the research.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
HRI	Human–Robot Interaction
IR	Infrared
KDE	Kernel Density Estimation
LiDAR	Laser Imaging Detection and Ranging
QR	Quick Response
RFID	Radio Frequency Identification
RGBD	Red Green Blue Depth
ROS	Robot Operating System
SLAM	Simultaneous Localisation and Mapping
URF	Ultrasonic Range Finder

References

  1. Bowen, J.; Morosan, C. Beware hospitality industry: The robots are coming. Worldw. Hosp. Tour. Themes 2018, 10, 726–733. [Google Scholar] [CrossRef]
  2. Belanche, D.; Casaló, L.V.; Flavián, C. Frontline robots in tourism and hospitality: Service enhancement or cost reduction? Electron. Mark. 2021, 31, 477–492. [Google Scholar] [CrossRef]
  3. Kim, S.S.; Kim, J.; Badu-Baiden, F.; Giroux, M.; Choi, Y. Preference for robot service or human service in hotels? Impacts of the COVID-19 pandemic. Int. J. Hosp. Manag. 2021, 93, 102795. [Google Scholar] [CrossRef]
  4. Eksiri, A.; Kimura, T. Restaurant service robots development in Thailand and their real environment evaluation. J. Robot. Mechatron. 2015, 27, 91–102. [Google Scholar] [CrossRef]
  5. Flores-Vázquez, C.; Bahon, C.A.; Icaza, D.; Cobos-Torres, J.C. Developing a Socially-Aware Robot Assistant for Delivery Tasks. In Proceedings of the International Conference on Applied Technologies, Quito, Ecuador, 3–5 December 2019; Springer: Cham, Switzerland, 2019; pp. 531–545. [Google Scholar]
  6. Kruse, T.; Pandey, A.K.; Alami, R.; Kirsch, A. Human-aware robot navigation: A survey. Robot. Auton. Syst. 2013, 61, 1726–1743. [Google Scholar] [CrossRef]
  7. Rios-Martinez, J.; Spalanzani, A.; Laugier, C. From proxemics theory to socially-aware navigation: A survey. Int. J. Soc. Robot. 2015, 7, 137–153. [Google Scholar] [CrossRef]
  8. Haarslev, F.; Juel, W.K.; Kollakidou, A.; Krüger, N.; Bodenhagen, L. Context-aware Social Robot Navigation. In Proceedings of the ICINCO, Paris, France, 6–8 July 2021; pp. 426–433. [Google Scholar]
  9. Walters, M.L.; Oskoei, M.A.; Syrdal, D.S.; Dautenhahn, K. A long-term human-robot proxemic study. In Proceedings of the 2011 RO-MAN, Atlanta, Georgia, 31 July– August 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 137–142. [Google Scholar]
  10. Syrdal, D.S.; Koay, K.L.; Walters, M.L.; Dautenhahn, K. A personalized robot companion?—The role of individual differences on spatial preferences in HRI scenarios. In Proceedings of the RO-MAN 2007—The 16th IEEE International Symposium on Robot and Human Interactive Communication, Jeju, Republic of Korea, 26–29 August 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 1143–1148. [Google Scholar]
  11. Samarakoon, S.B.P.; Muthugala, M.V.J.; Jayasekara, A.B.P. A Review on Human–Robot Proxemics. Electronics 2022, 11, 2490. [Google Scholar] [CrossRef]
  12. Ivanov, S.; Gretzel, U.; Berezina, K.; Sigala, M.; Webster, C. Progress on robotics in hospitality and tourism: A review of the literature. J. Hosp. Tour. Technol. 2019, 10, 489–521. [Google Scholar] [CrossRef]
  13. Liu, S.B.; Roehm, H.; Heinzemann, C.; Lütkebohle, I.; Oehlerking, J.; Althoff, M. Provably safe motion of mobile robots in human environments. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1351–1357. [Google Scholar]
  14. Guillén-Ruiz, S.; Bandera, J.P.; Hidalgo-Paniagua, A.; Bandera, A. Evolution of Socially-Aware Robot Navigation. Electronics 2023, 12, 1570. [Google Scholar] [CrossRef]
  15. Cheong, A.; Foo, E.; Lau, M.; Chen, J.; Gan, H. Development of a Robotics Waiter System for the food and beverage industry. In Proceedings of the 3rd International Conference On Advances in Mechanical & Robotics Engineering, Zurich, Switzerland, 10–11 October 2015; pp. 21–25. [Google Scholar]
  16. Qing-xiao, Y.; Can, Y.; Zhuang, F.; Yan-zheng, Z. Research of the localization of restaurant service robot. Int. J. Adv. Robot. Syst. 2010, 7, 18. [Google Scholar] [CrossRef]
  17. Zhang, H.; Zhang, C.; Yang, W.; Chen, C.Y. Localization and navigation using QR code for mobile robot in indoor environment. In Proceedings of the 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO), Zhuhai, China, 6–9 December 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 2501–2506. [Google Scholar]
  18. Farkas, Z.V.; Korondi, P.; Illy, D.; Fodor, L. Aesthetic marker design for home robot localization. In Proceedings of the IECON 2012–38th Annual Conference on IEEE Industrial Electronics Society, Montreal, QC, Canada, 25–28 October 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 5510–5515. [Google Scholar]
  19. Wan, A.; Foo, E.; Lai, Z.; Chen, H.; Lau, W. Waiter bots for casual restaurants. Int. J. Robot. Eng 2019, 4, 018. [Google Scholar] [CrossRef]
  20. Yuan, W.; Li, Z.; Su, C.Y. RGB-D sensor-based visual SLAM for localization and navigation of indoor mobile robot. In Proceedings of the 2016 International Conference on Advanced Robotics and Mechatronics (ICARM), Macau, China, 18–20 August 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 82–87. [Google Scholar]
  21. Yasuda, Y.D.; Martins, L.E.G.; Cappabianco, F.A. Autonomous visual navigation for mobile robots: A systematic literature review. ACM Comput. Surv. (CSUR) 2020, 53, 1–34. [Google Scholar] [CrossRef]
  22. Binch, A.; Das, G.P.; Fentanes, J.P.; Hanheide, M. Context dependant iterative parameter optimisation for robust robot navigation. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 3937–3943. [Google Scholar]
  23. Yu, Q.; Yuan, C.; Fu, Z.; Zhao, Y. An autonomous restaurant service robot with high positioning accuracy. Ind. Robot. Int. J. 2012, 39, 271–281. [Google Scholar] [CrossRef]
  24. Acosta, L.; González, E.; Rodríguez, J.N.; Hamilton, A.F. Design and implementation of a service robot for a restaurant. Int. J. Robot. Autom. 2006, 21, 273. [Google Scholar] [CrossRef]
  25. Xiao, X.; Liu, B.; Warnell, G.; Stone, P. Motion control for mobile robot navigation using machine learning: A survey. arXiv 2020, arXiv:2011.13112. [Google Scholar] [CrossRef]
  26. Bera, A.; Randhavane, T.; Prinja, R.; Kapsaskis, K.; Wang, A.; Gray, K.; Manocha, D. The emotionally intelligent robot: Improving social navigation in crowded environments. arXiv 2019, arXiv:1903.03217. [Google Scholar]
  27. Sünderhauf, N.; Dayoub, F.; McMahon, S.; Talbot, B.; Schulz, R.; Corke, P.; Wyeth, G.; Upcroft, B.; Milford, M. Place categorization and semantic mapping on a mobile robot. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 5729–5736. [Google Scholar]
  28. Tzafestas, S.G. Mobile robot control and navigation: A global overview. J. Intell. Robot. Syst. 2018, 91, 35–58. [Google Scholar] [CrossRef]
  29. Estefo, P.; Simmonds, J.; Robbes, R.; Fabry, J. The robot operating system: Package reuse and community dynamics. J. Syst. Softw. 2019, 151, 226–242. [Google Scholar] [CrossRef]
  30. Conner, D.C.; Willis, J. Flexible navigation: Finite state machine-based integrated navigation and control for ROS enabled robots. In Proceedings of the SoutheastCon 2017, Charlotte, NC, USA, 30 March–2 April 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–8. [Google Scholar]
  31. Quigley, M.; Conley, K.; Gerkey, B.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A.Y. ROS: An open-source Robot Operating System. In Proceedings of the ICRA Workshop on Open Source Software, Kobe, Japan, 12–17 May 2009; Volume 3, p. 5. [Google Scholar]
  32. Marder-Eppstein, E. Move_BASE ROS Wiki. Available online: http://wiki.ros.org/move_base (accessed on 25 February 2022).
  33. Rösmann, C. Teb_Local_Planner Package. Available online: http://wiki.ros.org/teb_local_planner (accessed on 22 February 2022).
  34. Rösmann, C.; Feiten, W.; Wösch, T.; Hoffmann, F.; Bertram, T. Trajectory modification considering dynamic constraints of autonomous robots. In Proceedings of the ROBOTIK 2012: 7th German Conference on Robotics, Munich, Germany, 21–22 May 2012; VDE: Frankfurt am Main, Germany, 2012; pp. 1–6. [Google Scholar]
  35. Rösmann, C.; Feiten, W.; Wösch, T.; Hoffmann, F.; Bertram, T. Efficient trajectory optimization using a sparse model. In Proceedings of the 2013 European Conference on Mobile Robots, Barcelona, Spain, 25–27 September 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 138–143. [Google Scholar]
  36. Rösmann, C.; Hoffmann, F.; Bertram, T. Planning of multiple robot trajectories in distinctive topologies. In Proceedings of the 2015 European Conference on Mobile Robots (ECMR), Lincoln, UK, 2–4 September 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1–6. [Google Scholar]
  37. Rösmann, C.; Hoffmann, F.; Bertram, T. Integrated online trajectory planning and optimization in distinctive topologies. Robot. Auton. Syst. 2017, 88, 142–153. [Google Scholar] [CrossRef]
  38. Bohren, J.; Cousins, S. The smach high-level executive [ros news]. IEEE Robot. Autom. Mag. 2010, 17, 18–20. [Google Scholar] [CrossRef]
  39. Robotics Group of Universidad de León. Leon@home Testbed. Available online: https://robotica.unileon.es/index.php?title=Testbed (accessed on 4 February 2022).
  40. EU Robotics. ERL Certified Test Beds. Available online: https://old.eu-robotics.net/robotics_league/consumer/certified-test-beds/certified-test-beds.html (accessed on 18 October 2024).
  41. Robotnik. Robotnik Homepage. Available online: https://robotnik.eu/es/ (accessed on 4 February 2022).
  42. Robotics Group of Universidad de Léon. Robotics Group Homepage. Available online: https://robotica.unileon.es/ (accessed on 4 February 2022).
  43. Hokuyo. Hokuyo Homepage. Available online: https://www.hokuyo-aut.jp/ (accessed on 4 February 2022).
  44. ELKH-ELTE Comparative Ethology Research Group of the Eötvös Loránd University. ELKH-ELTE Comparative Ethology Research Group Homepage. Available online: https://etologia.elte.hu/en/home-2/ (accessed on 4 February 2022).
  45. Orbbec. Orbbec 3D Homepage. Available online: https://orbbec3d.com/ (accessed on 7 February 2022).
  46. Ferdinandy, B. Context-Aware and Cost-Effective Navigation for Approaching Restaurant Tables with Lidar and Ultrasound Range Sensors. Available online: https://zenodo.org/record/6330051 (accessed on 5 March 2022).
  47. Lindstrom, M.J.; Bates, D.M. Newton–Raphson and EM Algorithms for Linear Mixed-Effects Models for Repeated-Measures Data. J. Am. Stat. Assoc. 1988, 83, 1014–1022. [Google Scholar] [CrossRef]
  48. Seabold, S.; Perktold, J. statsmodels: Econometric and statistical modeling with python. In Proceedings of the 9th Python in Science Conference, Austin, TX, USA, 28 June–3 July 2010; p. 61. [Google Scholar]
  49. Statsmodels. Statsmodels Python Library Homepage. Available online: https://www.statsmodels.org/stable/index.html (accessed on 4 February 2022).
  50. Wan, A.Y.S.; Soong, Y.D.; Foo, E.; Wong, W.L.E.; Lau, W.S.M. Waiter robots conveying drinks. Technologies 2020, 8, 44. [Google Scholar] [CrossRef]
  51. Hoffman, G.; Ju, W. Designing Robots with Movement in Mind. J. Hum.-Robot Interact. 2014, 3, 91–122. [Google Scholar] [CrossRef]
  52. Abdai, J.; Ferdinandy, B.; Lengyel, A.; Miklósi, Á. Animacy Perception in Dogs (Canis Familiaris) and Humans (Homo Sapiens): Comparison May Be Perturbed by Inherent Differences in Looking Patterns. J. Comp. Psychol. 2021, 135, 82–88. [Google Scholar] [CrossRef]
  53. Álvarez-Aparicio, C. Context-Aware and Cost-Effective Navigation for Approaching Restaurant Tables with Lidar and Ultrasound Range Sensors Docker Image. Available online: https://hub.docker.com/r/claudiaalvarezaparicio/tablenavigation2022 (accessed on 17 February 2022).
Figure 1. Flowchart of how the contextless navigation is structured.
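For illustration, the contextless baseline can be driven from a minimal ROS 1 action client such as the sketch below. The node name and goal coordinates are hypothetical; this is not the exact code used in the experiments, only a standard way of sending a single goal to move_base.

```python
#!/usr/bin/env python
# Minimal sketch (not the authors' code): sending one goal to move_base,
# the ROS contextless navigation used as the baseline in this paper.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal


def send_goal(x, y):
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0  # identity orientation for simplicity

    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()


if __name__ == '__main__':
    rospy.init_node('contextless_nav_example')   # hypothetical node name
    send_goal(1.5, -0.7)                         # hypothetical free-standing location
```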
Figure 2. Flowchart of how the context-specific navigation is structured within a SMACH action server.
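The listing below is a minimal SMACH sketch of the two-stage structure suggested by Figure 2: a contextless approach followed by a close-range, sensor-guided approach. The state names, outcomes, and placeholder bodies are assumptions made for illustration, not the authors' implementation; in practice such a machine can also be exposed as an action server (e.g., via smach_ros).

```python
# Illustrative SMACH sketch (hypothetical state and outcome names).
import smach


class ContextlessApproach(smach.State):
    """Drive near the target table using the standard planner."""
    def __init__(self):
        smach.State.__init__(self, outcomes=['near_goal', 'failed'])

    def execute(self, userdata):
        # Placeholder: call the standard navigation and report whether the
        # proximity of the table was reached.
        return 'near_goal'


class CloseApproach(smach.State):
    """Creep towards the table edge using LiDAR/URF readings."""
    def __init__(self):
        smach.State.__init__(self, outcomes=['reached', 'failed'])

    def execute(self, userdata):
        # Placeholder: issue low velocities until the range sensors report
        # the configured stand-off distance (e.g., 3 cm or 15 cm).
        return 'reached'


sm = smach.StateMachine(outcomes=['succeeded', 'aborted'])
with sm:
    smach.StateMachine.add('CONTEXTLESS', ContextlessApproach(),
                           transitions={'near_goal': 'CLOSE_APPROACH',
                                        'failed': 'aborted'})
    smach.StateMachine.add('CLOSE_APPROACH', CloseApproach(),
                           transitions={'reached': 'succeeded',
                                        'failed': 'aborted'})

if __name__ == '__main__':
    print(sm.execute())
```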
Figure 3. (a) Leon@Home Testbed, (b) a simulated environment of León, (c) Budapest Testbed, and (d) a simulated environment of Budapest.
Figure 4. Locations: the red cross represents a free-standing location (base location), the blue crosses represent close-range locations with a 3 cm configuration, and the green crosses represent close-range locations with a 15 cm configuration.
Figure 5. (a) Orbi-One robot. (b) Biscee robot.
Figure 6. A representation of the measurement of the distance and orientation metrics. The red cross represents the goal location, and the yellow arrow inside of it represents the corresponding orientation for this location. The orange circle represents the final position of the robot, and the blue arrow inside of the circle shows the final orientation of the robot. The black dashed line represents the Euclidean distance between the location and the position reached by the robot. The green line represents the angle difference between the orientation of the location and the orientation of the robot when it reached the final position.
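The two metrics of Figure 6 can be computed as sketched below: the Euclidean distance between the goal and the final position, and the absolute yaw difference wrapped to (−180°, 180°]. The function name and the example poses are hypothetical.

```python
# Minimal sketch of the placement metrics in Figure 6 (hypothetical values;
# positions in metres, angles in degrees).
import math


def placement_metrics(goal_xy, goal_yaw_deg, final_xy, final_yaw_deg):
    # Euclidean distance between the goal location and the final position.
    distance = math.hypot(final_xy[0] - goal_xy[0], final_xy[1] - goal_xy[1])
    # Orientation error wrapped to the (-180, 180] degree range.
    diff = (final_yaw_deg - goal_yaw_deg + 180.0) % 360.0 - 180.0
    return distance, abs(diff)


dist, ang = placement_metrics((2.00, 1.00), 90.0, (2.05, 0.97), 86.5)
print(f"distance = {dist:.3f} m, orientation error = {ang:.1f} deg")
```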
Figure 7. The time summary statistics and the density of values of each measured scenario. The data obtained from both real robots are presented together, and the same is presented for the simulated robots. Inside each scenario (real and simulation), both navigation configurations (contextless or context-specific) are presented with the two kinds of locations created (3 or 15 cm). For visualisation purposes, the individual times for each path segment were normalised to the average time needed for the context-specific navigation to traverse the specific segment. The comparison of both approaches shows that contextless was always slower than context-specific navigation.
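A possible implementation of this per-segment normalisation is sketched below; the data frame, column names, and values are hypothetical and only illustrate dividing each time by the mean context-specific time of the same path segment.

```python
# Sketch of the per-segment normalisation used for visualisation in Figure 7
# (hypothetical column names and timings).
import pandas as pd

df = pd.DataFrame({
    'segment':    ['LB-L1', 'LB-L1', 'LB-L1', 'L1-L2', 'L1-L2', 'L1-L2'],
    'navigation': ['context-specific', 'context-specific', 'contextless',
                   'context-specific', 'context-specific', 'contextless'],
    'time_s':     [12.1, 11.9, 30.4, 9.8, 10.2, 18.7],
})

# The mean context-specific time per segment acts as the reference value.
reference = (df[df['navigation'] == 'context-specific']
             .groupby('segment')['time_s'].mean())
df['normalised_time'] = df['time_s'] / df['segment'].map(reference)
print(df)
```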
Figure 8. The distance summary statistics and the density of each measured scenario. The data obtained from both real robots are presented together, and the same is presented for the simulated robots. Inside each scenario (real and simulation), both navigation configurations (contextless or context-specific) are presented with the two kinds of locations created (3 or 15 cm). The comparison of both approaches shows that contextless was always farther from the target than context-specific navigation.
Figure 9. The orientation summary statistics and the density of each proposed scenario. The data obtained from both real robots are presented together, and the same is presented for the simulated robots. Inside each scenario (real and simulation), both navigation configurations (contextless or context-specific) are presented with the two kinds of locations created (3 or 15 cm). The comparison of both approaches shows that contextless was always better oriented than context-specific navigation.
Figure 10. The contextless navigation with the 3 cm configuration executing sequence number 7 (LB–L3–L2–L3–L2–LB): (a) the beginning of the execution; (b) move_base becomes stuck while replanning new paths to reach the goal; (c) move_base cancels the goal because it cannot be reached and moves on to the next goal; (d) oscillation problems before reaching the goal; (e) going to the next goal; (f) oscillation problems, during which move_base becomes stuck while replanning a new path, finally cancels the goal, and moves on to the next one; (g) oscillation problems before reaching the goal; (h) going to the final goal; and (i) final goal reached. Selection of images from the video (https://www.youtube.com/watch?v=9tnFeAcqxRE, accessed on 28 October 2024).
Table 1. Defined sequences for the experiment. A sequence is defined as a list of locations that the robot should autonomously reach.

Sequence | Locations
1  | LB–L1–LB–L2–LB–L3–LB–L4–LB
2  | LB–L4–LB–L3–LB–L2–LB–L1–LB
3  | LB–L1–L2–L3–L4–LB
4  | LB–L4–L3–L2–L1–LB
5  | LB–L4–L1–L3–L2–LB
6  | LB–L2–L3–L1–L4–LB
7  | LB–L3–L2–L3–L2–LB
8  | LB–L2–L3–L2–L3–LB
9  | LB–L1–L4–L1–L4–LB
10 | LB–L4–L1–L4–L1–LB
11 | LB–L1–L2–L1–L3–L1–L4–LB
12 | LB–L4–L3–L4–L2–L4–L1–LB
Table 2. Linear mixed model of the time to target, comparing the contextless vs. context-specific navigation. The third column reports the difference between contextless and context-specific navigation, with context-specific navigation as the baseline. Positive values indicate a worse performance for contextless navigation, while negative values indicate a better performance. The fourth column presents the confidence intervals (CI), which are the minimum and maximum values that were obtained for each metric in the corresponding scenario. The results show that contextless was always slower, even though context-specific navigation included reducing the speed near the target. Confidence intervals at α = 0.05, and *** indicates significance at p < 0.001.

Robot      | Target Dist. | Contextless vs. Context-Specific (s) | CI (s)
Simulation | 3 cm         | 20.92 ***                            | 20.16–21.67
Simulation | 15 cm        | 6.68 ***                             | 5.41–7.95
Real       | 3 cm         | 33.82 ***                            | 32.88–34.75
Real       | 15 cm        | 12.52 ***                            | 11.63–13.41
Table 3. The linear mixed model of the final distance from the target, comparing the contextless vs. context-specific navigation. The third column reports the difference between the contextless and context-specific navigation, with context-specific navigation as the baseline. Positive values indicate a worse performance for contextless navigation, while negative values indicate a better performance. The fourth column presents the confidence intervals (CI), which are the minimum and maximum values that were obtained for each metric in the corresponding scenario. The results show that contextless was always farther from the target when terminating. Confidence intervals at α = 0.05, and *** indicates significance at p < 0.001. The difference is not significant in the real scenario at the 15 cm target distance.

Robot      | Target Dist. | Contextless vs. Context-Specific (cm) | CI (cm)
Simulation | 3 cm         | 4.947 ***                             | 4.62–5.27
Simulation | 15 cm        | 2.480 ***                             | 2.18–2.78
Real       | 3 cm         | 8.121 ***                             | 7.63–8.61
Real       | 15 cm        | 0.759                                 | 0.32–1.20
Table 4. The linear mixed model of the final deviation from the target orientation, comparing the contextless vs. context-specific navigation. The third column reports the difference between the contextless and context-specific navigation, with context-specific navigation as the baseline. Positive values indicate a worse performance for contextless navigation, while negative values indicate a better performance. The fourth column presents the confidence intervals (CI), which are the minimum and maximum values that were obtained for each metric in the corresponding scenario. The results show that contextless was always better oriented. Confidence intervals at α = 0.05, and *** indicates significance at p < 0.001.

Robot      | Target Dist. | Contextless vs. Context-Specific (°) | CI (°)
Simulation | 3 cm         | −3.24 ***                            | −3.54 to −2.94
Simulation | 15 cm        | −4.56 ***                            | −4.84 to −4.27
Real       | 3 cm         | −2.24 ***                            | −2.58 to −1.92
Real       | 15 cm        | −2.73 ***                            | −3.01 to −2.44
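As an illustration of the analysis reported in Tables 2–4, a linear mixed model of this kind can be fit with statsmodels [48,49] as sketched below. The synthetic data, column names, and grouping variable are assumptions made for demonstration only; this is not the authors' analysis script.

```python
# Hedged sketch: fitting a linear mixed model of the time to target with a
# random intercept per sequence (all data here are synthetic).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 120

# Synthetic repeated-measures data: 12 sequences, alternating navigation modes.
df = pd.DataFrame({
    'sequence': np.repeat(np.arange(12), n // 12),
    'navigation': np.tile(['contextless', 'context-specific'], n // 2),
})
df['time_s'] = np.where(df['navigation'] == 'contextless', 45.0, 25.0) \
    + rng.normal(0.0, 3.0, n)

# Fixed effect: navigation mode, with context-specific as the baseline;
# random intercept: the sequence the measurement belongs to.
model = smf.mixedlm('time_s ~ C(navigation, Treatment("context-specific"))',
                    data=df, groups=df['sequence'])
result = model.fit()
print(result.summary())
print(result.conf_int(alpha=0.05))  # 95% confidence intervals
```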
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
