Article

Bridging Requirements, Planning, and Evaluation: A Review of Social Robot Navigation

by Jarosław Karwowski, Wojciech Szynkiewicz and Ewa Niewiadomska-Szynkiewicz *
Institute of Control and Computation Engineering, Warsaw University of Technology, 00-665 Warsaw, Poland
* Author to whom correspondence should be addressed.
Sensors 2024, 24(9), 2794; https://doi.org/10.3390/s24092794
Submission received: 25 March 2024 / Revised: 21 April 2024 / Accepted: 24 April 2024 / Published: 27 April 2024

Abstract:
Navigation lies at the core of social robotics, enabling robots to move and interact seamlessly in human environments. The primary focus of human-aware robot navigation is minimizing the discomfort of surrounding humans. Our review explores user studies, examining factors that cause human discomfort, to ground the requirements of social robot navigation and to form a taxonomy of elementary necessities that comprehensive algorithms should implement. This survey also discusses human-aware navigation from an algorithmic perspective, reviewing the perception and motion planning methods integral to social navigation. Additionally, the review investigates the types of studies and tools facilitating the evaluation of social robot navigation approaches, namely datasets, simulators, and benchmarks. Our survey also identifies the main challenges of human-aware navigation, highlighting essential future work perspectives. This work stands out from other review papers, as it not only investigates the variety of methods for implementing human awareness in robot control systems but also classifies the approaches according to the grounded requirements addressed in their objectives.

1. Introduction

The presence of robots in populated environments has become broadly discussed in the literature since deployments of interactive museum tour guide robots—RHINO [1] and MINERVA [2]—in the late 1990s. These field studies have provided many insights, and since then, robot navigation among humans has become a vast field of study.
The field has a historical tradition of being multidisciplinary, with researchers from robotics, artificial intelligence, engineering, biology, psychology, natural language processing, cognitive sciences, and even philosophy collaborating, resulting in a diverse range of outcomes [3,4]. Beyond that, social navigation is closely linked to various research topics, such as human trajectory prediction, agent and crowd simulation, and, naturally, traditional robot navigation [5].
One of the primary objectives of robotics is to facilitate the seamless operation of intelligent mobile robots in environments shared with humans [4].
In our work, a socially navigating robot is an autonomous machine designed to act and interact with humans in shared environments, mitigating potential discomfort by mimicking social behaviors and adhering to norms. Robot navigation requirements are derived from user studies illustrating human preferences during an interaction, while the robot’s decision-making autonomy relies on perception and planning capabilities.
The range of social robots’ applications is diverse. In the late 2000s, Satake et al. [6] established a field study in a shopping mall where a robot recommended shops to people. A long-term validation of a robot operating in a crowded cafeteria was conducted by Trautman et al. [7]. Another extended deployment was accomplished by Biswas and Veloso [8], whose CoBots reached 1000 km of autonomous navigation. On the other hand, Shiomi et al. [9] performed a short-term validation study of robot operation in a shopping mall. More recently, social robots have typically been utilized for interaction in the context of home assistance and healthcare [3] or deployed for delivery purposes, e.g., pizza, mail, and packages [5].
Despite the recent advancements, mobile robots are still not prevalent in our homes and offices. Mirsky et al. [4] state that a primary factor contributing to this limitation is that achieving full autonomy remains feasible only in controlled environments and typically relies on hard-coded rules or learning from relatively clean datasets.
Our review can be segmented into two perspectives: requirements and algorithmic. The requirements perspective involves exploring various user studies to identify the rules for social robots to adhere to. Our primary focus lies in examining factors that cause human discomfort, as confirmed in real-world experiments involving human participants. In addition to identifying these factors, we aim to extract methods for mitigating discomfort to obtain implementable guidelines for robot control systems. Subsequently, the algorithmic perspective categorizes existing research by scientific approach and maps those methods onto the specified requirements taxonomy. In summary, our survey stands out by offering an in-depth investigation of aspects often discussed less extensively, while still following the latest developments in navigation.
The remainder of this section explains the scope of the reviewed topics and describes the materials collection methodology. Section 2 reviews previous surveys regarding social robot navigation, whereas Section 3 presents the state of the art from the requirements perspective, discussing the conclusions of user studies. The following sections give an algorithmic overview of perception (Section 4), motion planning (Section 5), and evaluation (Section 6). Our proposals addressing the identified research gaps are presented in Section 7, while the paper is summarized in Section 8.

1.1. Review Scope

The scope of the social robot navigation field is vast, and a comprehensive literature review in every aspect is practically unfeasible. Although we had to limit the scope of topics for a thorough examination, we understand the importance of concepts that could not be covered in this study.
Our survey concentrates on deriving the social robot navigation requirements from literature studies and, based on that, discusses requirements-driven human-aware robot motion planning and metrics related to the social acceptance of robots. However, this review does not extensively explore the domains of, among others, explicit communication or negotiation, and the range of interactions investigated was also limited to align with the scope of the primary topics.
Effective decision making in socially aware navigation requires communication between robots and humans, particularly when the robot’s knowledge about the environment is limited. Specifically, explicit communication involves the auditory domain, as well as written instructions, which robots should interpret and respond to. Robots also need to convey their intentions and decisions to humans, utilizing verbal and visual techniques such as speech and gestures employing onboard actuators. The topic of explicit communication has been investigated to varying degrees in other review works [4,10,11]. Since it is related to higher-level problem-solving, we decided not to categorize our literature search according to this characteristic. In contrast, implicit communication is commonplace in human–robot interaction studies and is more relevant to the investigated topics; hence, it is widely discussed in our survey, as well as in [4,11,12].
Negotiation in social robot navigation acts as a form of dynamic information exchange. This may involve collaborative decision-making processes, e.g., requesting permission to pass. While the scope of the negotiations field extends way beyond human–robot interaction, this concept has been briefly discussed in other social robotics surveys [11,13].
On the other hand, the type of robot substantially affects the requirements and objectives of perception and human-aware robot motion planning. Variations among ground, aerial, or aquatic robots [11,14] significantly impact possible scenarios and, hence, also the range of human–robot interactions. The taxonomy of our considerations does not differentiate the robot types; instead, we focus primarily on ground-wheeled robots, although some principles and algorithmic techniques may also apply to aerial robots. While mobile manipulators may also fall into the category of ground-wheeled robots, the specific problems of their low-level motion control are not investigated.
The physical (contact-rich) interaction between robots and humans is a crucial topic in collaborative robotics and safety management. However, our navigation-focused review examines other types of interactions, namely, unfocused and focused [13], neither of which involve physical contact.

1.2. Materials Collection

The chosen methodology of selecting resources included in the survey does not strictly adhere to the scoping strategy typically applied in systematic reviews. Specifically, we first conducted a comprehensive literature analysis, drawing from the review papers discussed in Section 2. The literature from previous surveys was narrowed according to our primary topics and then supplemented with crucial works that did not appear in other review papers, as well as with more recent citations.
To select newer materials for inclusion in the survey, we searched the IEEE Xplore, ScienceDirect, SpringerLink, ACM Digital Library, and Google Scholar databases and included relevant preprints from arXiv. The query used for the search engines was (‘social’ OR ‘human-aware’) AND ‘navigation’ AND ‘robot’, which allowed the gathering of over 600 works from various sources. However, our methodology involved identifying resources (papers, software modules, and datasets) based on their relevance to socially-aware robot navigation and its evaluation methods. Therefore, instead of including the vast number of results from the databases, we selected the materials based on their appropriateness to the primary topics of the survey. The bibliography was also extended by following cross-references between user studies, which led us to additional valuable materials. The described selection strategy ensures a concise yet comprehensive review of advancements in the field.
Notably, our survey is also not limited to specific publication years (e.g., [11]) as certain findings, particularly social robot navigation requirements derived from user studies, retain relevance over an extended period. Despite being a subject of research for over 20 years, the field has seen a surge in publications in recent years, as presented in Figure 1.

2. Related Work

In recent years, numerous surveys regarding social robot navigation have been proposed [3,4,5,11,12,13,14,15,16,17]. However, the topic is so broad that each one investigates the problem from different perspectives, e.g., evaluation, perception, and hardware.
For example, Kruse et al. [15] discussed the advancements of human-aware navigation for wheeled robots in assistive scenarios. They systematically reviewed the literature, choosing the key features facilitating human-aware navigation as human comfort, robot motions’ naturalness, and sociability. In addition to outlining the basic objectives of social robot navigation, they also focused on spatial constraints that enhance the robot’s sociability. They proposed that integrating them into a single control system mitigates human discomfort. Moreover, they explored numerous methods of human trajectory prediction.
Alternatively, Rios-Martinez et al. [13] delved into sociological concepts regarding the challenges of human-aware navigation. They discussed fundamental concepts related to social conventions and mapped them onto robotics perspectives. In conclusion, they posited that human management of space can be treated as a dynamic system whose complexity extends well beyond proxemics, with contextual factors playing a paramount role in detecting social situations.
In another review paper, Chik et al. [14] offered insights for service robot implementation, highlighting different motion planning system structures for robots operating in populated environments. The discussed navigation frameworks are classified based on their complexity and anticipative potential required for socially acceptable navigation. The authors also provided brief descriptions of algorithms that may enhance social robot navigation and compared them with the traditional methods. Their paper provides practical guidelines on which framework to choose under different conditions.
In a separate study, Charalampous et al. [16] attempted to systematize the recent literature based on the required levels of robot perception for navigating in a socially acceptable manner. They focused on techniques that could allow robots to perceive and interpret their surroundings on a high contextual level. Particularly, they explored methods related to robot’s social awareness (semantic mapping being one of them), the accessibility of datasets, and challenges that need to be confronted when robots operate and interact with humans.
Möller et al. [3] reviewed socially-aware robot navigation, focusing on aspects of computer vision. Namely, their classification of papers is based on the taxonomy of human behavior analysis and modeling, human–robot interaction, active vision, and visual robot navigation. They discussed, among other topics, active vision and its use for obtaining more data under uncertainty, as well as high-fidelity simulators and numerous datasets, e.g., for human trajectory prediction. The authors pointed out major research gaps, such as the lack of formalized evaluation strategies and insufficient datasets, and suggested using voice interaction or gesture recognition more commonly to enrich human–robot interactions.
A more recent survey by Mirsky et al. [4] concentrates on introducing a common language that unifies the vocabulary used in prior works and highlights the open problems of social navigation. The main topic of the review is conflict avoidance; therefore, the scope of examined papers is bound to works regarding strictly unfocused [13] interactions. As the main challenges of social navigation, they specified the standardization of evaluation metrics, group understanding, and context-aware navigation.
Another survey was proposed by Gao and Huang [5], who examined the evaluation techniques, scenarios, datasets, and metrics frequently employed in prior studies on socially aware navigation. They analyzed the drawbacks of current evaluation protocols and proposed research opportunities for enhancing the field of socially-aware robot navigation. Specifically, they stated that there are no standard evaluation protocols to benchmark research progress, i.e., the field lacks unified datasets, scenarios, methods, and metrics. They also denoted the necessity of developing comprehensive instruments to gauge sociability and higher-level social skills during navigational interactions.
Zhu and Zhang [18] discussed Deep Reinforcement Learning (DRL) and related frameworks for analyzing robot navigation regarding typical application scenarios, i.e., local obstacle avoidance, indoor navigation, multirobot navigation, and social navigation. In turn, Medina Sánchez et al. [19] explored the different aspects of indoor social navigation based on their experience with perception, mapping, human trajectory prediction, and planning. Besides describing the state-of-the-art approaches, they experimented with existing methods and investigated their performance in practice. Guillén-Ruiz et al. [20] discussed recent papers regarding social robot navigation in a more specific context. They reviewed methods for socially aware navigation and classified them according to the techniques implemented in robots to handle interaction or cooperation with humans.
In another recent review, Mavrogiannis et al. [17] synthesized existing problems of social robot navigation and established the core challenges of social robot navigation as motion planning, behavior design, and evaluating the emerging behavior of a robot. Their study aims to diagnose the fundamental limitations of common practices exploited in the field and to provide constructive feedback and suggestions.
Furthermore, at the Social Navigation Symposium in 2022, Francis et al. [12] discussed various generic guidelines for conducting social navigation studies and performing valuable evaluations of the experiments. The survey depicts the broadness of the research field and the challenges of social navigation studies. The authors define social robot navigation as respecting the principles of safety, comfort, legibility, politeness, understanding other agents, and being socially competent, proactive, and responsive to context. Their guidelines concern the evaluation of social navigation through the use of metrics and the development of simulators, scenarios, datasets, and benchmarks. A framework design for this purpose is also presented.
The newest review by Singamaneni et al. [11] examines the field from four perspectives—robot types, planning and decision making, situation awareness and assessment, and evaluation and tools. The survey highlights the broadness of topics and methods involved in social robot navigation. Among their proposals are suggestions for standardizing human actions in benchmarks and establishing unified communication protocols to convey robot intentions.
In contrast to previous review articles, our survey aims to explicitly demonstrate how the key concepts explored by researchers in robotics and social sciences can be transferred into requirements for robot control systems [21] implementing robot navigation tasks. Our review draws on user studies to gather insights and to ground the requirements of social robot navigation. After identifying those core principles, perception and motion planning methods are reviewed with respect to the taxonomy of requirements (Figure 2). The classification of the social robot navigation requirements established in this study enables the identification of gaps in motion planning algorithms, the drawbacks of state-of-the-art evaluation methods, and the proposal of relevant future work perspectives for researchers in the field. As researchers often try to implement different robot control strategies in an ad hoc manner to mimic human behaviors, we believe that a proper grounding of the fundamental features will steer further developments in the correct direction.
The summary of the state-of-the-art surveys is presented in Table 1, where the varying foci on concepts from perception, through motion planning, to evaluation are visible among different review papers.

3. Requirements of Socially Aware Navigation

Social robots were introduced to make human–robot interaction more natural and intuitive [22]. Generic characteristics of social navigation are commonly recalled in review works. For example, Kruse et al. [15] classify the main features as safety, comfort, naturalness, and sociability. On the other hand, in [13], the authors indicate key factors as distinguishing obstacles from persons, considering the comfort of humans—their preferences and their needs, not being afraid of people, and the legibility of motion intentions. More recently, Mavrogiannis et al. [17] proposed a classification that relies on proxemics, intentions, formations, and social spaces, ordered according to the social signal richness. Furthermore, Francis et al. [12] stated that principles of social robot navigation include safety, comfort, legibility, politeness, social competency, agent understanding, proactivity, and contextual appropriateness.
While the aspects above schematically display the goals of social navigation, the authors of these surveys do not attempt to extract straightforward requirements to follow in social robot navigation. Instead, these terms are loosely defined; hence, they might refer to different means in different contexts or applications. As a consequence, it is difficult to determine how to effectively gauge whether a robot behaves in a socially compliant manner. Our survey aims to reduce these abstract terms describing social norms to concrete requirements. This is contrary to other review works, where, although taxonomies are presented and articles are classified into those groups, the fundamental concepts persist as vague definitions.
Thus, we perform the grounding of the requirements of social robot navigation. These requirements must be known to properly design a socially-aware robot navigation system. Various techniques have been tested with an assertive robot, revealing that using knowledge from psychology leads to increased user trust [23]. Following a study-driven approach, we examined human–robot interaction user studies to determine how humans perceive robots navigating around them and how robots should behave around humans under certain controlled conditions. Such an approach yields guidelines on how the robot should behave in the presence of humans; hence, precise system requirements can be defined for phenomena that were sufficiently investigated in the literature, while other challenges remain coarsely defined.
We separated the study-based grounding of social robot navigation requirements from algorithmic approaches to resolving them. Requirements are obtained from the results of user studies, whereas an algorithmic perspective is presented based on technical papers from the robotics field. Precise requirements grant implementation guidelines and straightforward evaluation of whether the robot behaves as expected.

3.1. Taxonomy of Requirements for Social Robot Navigation

Classical robot navigation emphasizes generating collision-free motions for a robot to move to the goal pose as fast as possible. This requires environment sensing for obstacle detection, efficient global pose estimation, and usually map building. Social robot navigation addresses not only the necessities of classical navigation but also extends its capabilities to accommodate social interaction.
The main objective of social navigation is to reduce the discomfort that a navigating robot causes to surrounding humans. Our taxonomy of social robot navigation requirements (Figure 3) involves the physical safety of humans (Req. 1), the perceived safety of humans (Req. 2), the naturalness of robot motion (Req. 3), and robots’ compliance with social norms (Req. 4). Specifically, the perceived safety of humans mostly relies on proxemics theory and the prevention of scaring a human. In turn, the naturalness of the robot’s motion does not affect the safety of humans but regards the trustworthiness of the robot. Lastly, abiding by social conventions focuses on actions and sequences that require rich contextual information to mitigate human discomfort.
Our general taxonomy is designed to classify the essential concepts of social robot navigation clearly and unambiguously into one of the investigated groups to create a generic framework. We expect that the main characteristics selected for the taxonomy will stay pertinent in the future, with the possibility of incorporating additional attributes.
In the remaining part of this section, the social robot navigation requirements are discussed, while the algorithmic concepts describing how these socially aware navigation responsibilities can be embedded into robot control systems are presented in Section 4 and Section 5.

3.2. Physical Safety of Humans (Req. 1)

The physical safety of humans is closely related to the collision avoidance capabilities of robots. Social robot navigation inherits this skill from the classical robot navigation requirements.
Francis et al. [12] denote physical safety as the first principle of social navigation, intended to protect humans, other robots, and their environments. The physical safety of humans during navigation is discussed in the newer literature [10,24] but was already addressed as a fundamental robotics challenge several decades ago [25].
Nonetheless, the physical safety of other robots or machines is also of great significance [17,26,27,28].
For example, Guzzi et al. [29] conducted a study with multiple small-scale robots relying only on local sensing and employing proactive planning integrated with the heuristic pedestrian motion model [30]. In real-world experiments, in a crossing scenario, they observed different frequencies of collisions depending on the sensors’ field of view and safety margin; hence, the collision count was used as one of the metrics for assessing the safety margin parameter. Evaluating time-to-collision (TTC) is a proactive method to anticipate incoming collisions [31,32] that was also embedded in some benchmarks [33].
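Under a constant-velocity assumption, TTC can be computed in closed form. The sketch below, with illustrative disc-shaped agent footprints and hypothetical function names, shows one way such a metric could be implemented; the cited benchmarks may use different agent models.

import numpy as np

def time_to_collision(p_robot, v_robot, p_human, v_human, radius_sum):
    """Constant-velocity time-to-collision between two disc agents.

    Returns the earliest t >= 0 at which the discs touch, or np.inf
    if the current velocities never bring them into contact.
    """
    p = np.asarray(p_human, dtype=float) - np.asarray(p_robot, dtype=float)  # relative position
    v = np.asarray(v_human, dtype=float) - np.asarray(v_robot, dtype=float)  # relative velocity
    # Solve ||p + v*t||^2 = radius_sum^2, i.e., a*t^2 + b*t + c = 0
    a = v @ v
    b = 2.0 * (p @ v)
    c = p @ p - radius_sum ** 2
    if c <= 0.0:
        return 0.0                      # discs already overlap
    if a == 0.0:
        return np.inf                   # no relative motion
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return np.inf                   # closest approach stays above radius_sum
    t = (-b - np.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else np.inf    # negative root means the agents are separating

# Example: robot heading right, pedestrian crossing its path from above
print(time_to_collision([0, 0], [1.0, 0], [4, 2], [0, -0.5], 0.6))  # ~3.46 s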

3.3. Perceived Safety of Humans (Req. 2)

The comfort of humans around robots is crucial; however, the robot’s behavior can undermine it, potentially causing annoyance or stress [12,15]. Human discomfort during robot navigation often corresponds to diminished perceived (or psychological) safety of humans. Diminished perceived safety is a factor that might lead to physical safety violations (Section 3.2) if not addressed adequately beforehand. Stress-free and comfortable human–robot interaction is a broad topic [10] influenced by numerous features (Figure 4), including adherence to spatial distancing [13,34], performing natural movements [5], and preventing scaring or surprising a human [15]. The remaining part of this section discusses them in detail.

3.3.1. Regarding the Personal Zones of Individuals (Req. 2.1)

Proxemics is the most prominent concept regarding social distancing rules [34,35,36]. Some fundamental studies connected to proxemics theory confirm that the psychological comfort of humans is affected by interpersonal distancing [35,37,38]. Butler and Agah [39] explored the influential factors of how humans perceive a service robot during unfocused interactions. One of them was the distance factor, which induced feelings of discomfort or stress in some configurations. A similar study was conducted by Althaus et al. [40], who validated a navigation system that respects the personal spaces of humans in a real-world study.
The shape of a personal zone impacts comfortable passing distances. Hall originally specified four circular zones [34], of which the personal zone, reserved for friends, is usually regarded as a no-go zone during unfocused human–robot interaction. Entering the personal zone counts as a violation of comfort and safety [9,13,41]. The classification of all proxemic zones was described in detail in prior surveys, e.g., [13].
The initially suggested circular shape of the personal space [34] might not appropriately capture the features of human perception and motion. Further empirical studies suggested extending it to an egg shape [42], ellipses [43,44], asymmetrical shapes [45] (prolonged on the nondominant side), or shapes that change dynamically [46]. In [45], it is also reported that the size of personal space does not change while circumventing a static obstacle, regardless of walking speed, and that the personal space is asymmetrical. The natural asymmetry of personal spaces is also reported in [47], where the authors found that if a robot has to approach a human closely, it should avoid moving behind the human, so that the person can see the robot.
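A common way to encode such direction-dependent shapes in a robot control system is an asymmetric 2D Gaussian cost centered on the person, with a larger variance in the facing direction. The sketch below is a minimal illustration of this modeling idea; the function name and parameter values are assumptions for demonstration, not the empirical shapes reported in the cited studies.

import math

def personal_space_cost(dx, dy, theta, sigma_front=1.0, sigma_rear=0.5, sigma_side=0.7):
    """Asymmetric 2D Gaussian around a person at the origin facing `theta`.

    (dx, dy) is the robot position relative to the person. The variance
    ahead of the person (sigma_front) is larger than behind (sigma_rear),
    yielding the egg-like, direction-dependent shape reported in user
    studies. Returns a cost in (0, 1], peaking at the person's position.
    """
    # Rotate into the person's frame: x points where the person faces
    x = math.cos(theta) * dx + math.sin(theta) * dy
    y = -math.sin(theta) * dx + math.cos(theta) * dy
    sigma_x = sigma_front if x >= 0.0 else sigma_rear
    return math.exp(-0.5 * ((x / sigma_x) ** 2 + (y / sigma_side) ** 2))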
Numerous works conducted human-involving experiments to gather empirical data and to model complex and realistic uses of space [48,49,50,51,52]. Participants of the study in [48] rated distances between 1.2 and 2.4 m as the most comfortable for interaction situations. Experiments by Huettenrauch et al. [53] confirmed that in different spatial configurations, 73–85% of participants found Hall’s personal distance range (0.46–1.22 m) comfortable. Torta et al. [54], in their study involving human–robot interaction, quantified the lengths of comfort zones as 1.82 m for a sitting person and 1.73 m for a standing person.
Pacchierotti et al. [49,50] examined discomfort as a function of, e.g., lateral distance gap in a hallway scenario. The lateral gap was also examined by Yoda and Shiota [55] in terms of the safety of passing a human by a robot in a hallway scenario. Three types of encounters were anticipated as test cases for their control algorithm, including a standing, a walking, and a running person. They approximated human passing characteristics from real experiments, defining clear formulas to follow in a robot control system. The authors found that the average distance between the passing humans depends on their relative speed and varies from 0.57 to 0.76 m.
The authors of [51] found that the discomfort rates differ between intrusions into and extrusions from personal spaces and that distances of approximately 0.85–1.0 m are the most comfortable for a focused interaction with a stranger. On the other hand, Neggers et al. [52] conducted a study similar to [50] and compared their results. They obtained similar outcomes and reported that the same function, an inverted Gaussian linking distance and comfort, fits the data of both studies with only a small shift in comfort amplitude between [50] and [52]. The authors of [52] also attempted to model an intrusion into personal space as a distance-dependent surface function.
However, there are also diverse exceptions to the mean shape of personal space. For example, Takayama et al. [56] indicated that during their study, participants with prior experience with pets or robots required less personal space near robots compared with people without such experience. Furthermore, a study presented in [57] endorses the concept that personal space is dynamic and depends on the situation. Velocity-dependent personal space shapes were also considered appropriate in [58,59,60].
Since various studies, even though conducted differently, yield similar results, they seem to approximate human impressions while interacting with robots and, as a consequence, allow modeling of the real-world phenomena of social distancing. The conclusions from the mentioned user studies give insights regarding the implementation of personal space phenomena in robot control systems.

3.3.2. Avoiding Crossing through Human Groups (Req. 2.2)

Recent research revealed that pedestrians tend to travel in groups [61,62]. Human groups create focused formations (F-formations) [63]—spatial arrangements that are intended to regulate social participation and the protection of the interaction against external circumstances [13]. F-formations might be static—consisting of people standing together engaged in a shared activity—or dynamic—consisting of people walking together—and might have different shapes [13,63].
The necessity of avoiding crossing F-formations arises from the fact that they always contain an O-space, which is the innermost space shared by group members and reserved for in-group interactions. The discomfort caused by a robot to a group might be assessed as the robot’s intrusion into the O-space of the F-formation [64,65]. Results of numerous studies confirm that humans involved in an F-formation keep more space around the group than the mere union of individual personal spaces [66,67,68]; thus, individuals stay away from social groups. Furthermore, research by Rehm et al. [69] found that participants from high-contact cultures stand closer to a group of people compared with people from low-contact cultures.
A general guideline for robots navigating through populated environments is to avoid cutting through social groups [70], but if it is not possible, e.g., in a narrow corridor, they should politely pass through the O-space [12,71].
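As a rough algorithmic counterpart of this requirement, the O-space of an F-formation is often approximated by a circle spanned by the group members. The following sketch, with hypothetical names, an assumed circular O-space, and an illustrative margin value, tests whether a robot position intrudes into a group's inner space.

import numpy as np

def intrudes_o_space(robot_xy, member_positions, margin=0.4):
    """Approximate the O-space of an F-formation as a circle around the
    group's centroid and test whether the robot intrudes into it.

    `member_positions` is an (N, 2) array of group members' positions;
    `margin` shrinks the circle so that the members themselves stand on
    its boundary rather than inside it. A circular O-space is a
    simplification; real formations vary in shape.
    """
    members = np.asarray(member_positions, dtype=float)
    center = members.mean(axis=0)
    radius = np.linalg.norm(members - center, axis=1).mean() - margin
    return np.linalg.norm(np.asarray(robot_xy, dtype=float) - center) < max(radius, 0.0)

# Example: two people conversing 2 m apart; a point at their midpoint intrudes
print(intrudes_o_space([1.0, 0.0], [[0.0, 0.0], [2.0, 0.0]]))  # True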

3.3.3. Passing Speed during Unfocused Interaction (Req. 2.3)

Rios-Martinez et al. [13] define unfocused interactions as ‘interpersonal communications resulting solely by virtue of an individual being in another’s presence’. As already highlighted in Section 3.3.1, excessive or insufficient passing speed proved significant in terms of discomfort among humans involved in an unfocused interaction with a robot in numerous experimental studies [39,49,50,60].
The most comprehensive study on that matter was recently presented by Neggers et al. [60], who assessed human discomfort with a robot passing or overtaking them at different speeds and distances. They found that higher speeds are generally less comfortable for humans when a robot moves at smaller distances. The authors claimed that inverted Gaussians with variable parameters accurately approximate the experimental results for all combinations of scenarios and speeds. An approximation of their findings with a continuous multivariable function has already been implemented (https://github.com/rayvburn/social_nav_utils (accessed on 20 March 2024)) and can be used for evaluating robot passing speed.
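To illustrate the shape of such a model, the sketch below implements an inverted-Gaussian comfort function of passing distance whose dip widens and deepens with robot speed. The functional form follows the inverted-Gaussian fits described above, but the coefficients and the linear speed dependence are illustrative assumptions, not the published parameters; see the linked repository for the actual implementation.

import math

def passing_comfort(distance, speed, a0=0.8, sigma0=0.4, k=0.3):
    """Inverted-Gaussian comfort model for a robot passing a human.

    Comfort (0..1) dips when the robot passes close by; the dip widens
    and deepens with the robot's speed. All parameter values here are
    illustrative assumptions.
    """
    sigma = sigma0 + k * speed            # faster robots disturb at larger distances
    amplitude = min(1.0, a0 + k * speed)  # ...and cause a deeper comfort dip
    return 1.0 - amplitude * math.exp(-0.5 * (distance / sigma) ** 2)

# A robot passing 0.5 m away is rated less comfortable at 1.5 m/s than at 0.5 m/s
print(passing_comfort(0.5, 1.5), passing_comfort(0.5, 0.5))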

3.3.4. Motion Legibility during Unfocused Interaction (Req. 2.4)

Studies conducted by Pacchierotti et al. [50] examined the mutually dynamic situation of two agents passing each other. They assessed human discomfort as a function of the lateral distance gap in a hallway scenario and found no significant impact of the lateral gap size when the robot signaled its passing intentions early. This notion is often referred to as motion legibility, an intent-expressive way of performing actions [72]. Legibility can be increased by explicit signaling, as well as by enriching the robot’s behavior so that it serves as a cue to the robot’s intention [73,74].
Lichtenthäler et al. [75] found a significant correlation between perceived safety and legibility in their study. Gao and Huang [5] considered a flagship example of poor motion legibility to be a scenario where a robot quickly moves toward a person, adjusting its trajectory just before an imminent collision. Despite avoiding direct physical contact, such behavior is likely to produce notable discomfort through the robot’s heading direction [76] due to the lack of early signaling.

3.3.5. Approach Direction for a Focused Interaction (Req. 2.5)

The direction from which a robot approaches a human to initiate a focused interaction is a broad field of social robot navigation studies. Rios-Martinez et al. [13] describe focused interaction as ‘occurring when individuals agree to sustain a single focus of cognitive and visual attention’. In most experimental cases, focused interaction involves approaching a human to start verbal communication or to hand over transported goods. The taxonomy in this matter separates approaching guidelines between individuals and F-formations.

Individual Humans (Req. 2.5.1)

In studies conducted by Dautenhahn et al. [77] and Koay et al. [78], participants were seated and asked to gauge their discomfort levels during the handover of objects by a robot that approached from various directions. The subjects of the studies preferred frontal approaches over diagonal approaches from the left or right. Contradictory results were found in a study by Butler and Agah [39], where standing participants preferred an indirect approach direction.
Multiple studies depict that humans prefer to be approached from the front and within their field of view [75,79,80,81,82,83,84,85]. Walters et al. [79] examined a robot’s behavior of approaching a human for a fetch-and-carry task. The authors reported that seated participants found the direct frontal approach uncomfortable. The general preference was to be approached from either side, with a slight bias toward a rightward approach by the robot. However, the study showed that a frontal approach is considered acceptable for standing humans in an open area. Another conclusion derived from the study is that humans prefer to be approached from within their field of view; hence, approaching from behind should be avoided.
Torta et al. [81] conducted a user study considering different robot approach directions with the final pose at the boundary of the personal space. Similarly, they found that the (seated) experiment subjects assessed frontal approach directions (up to ±35°) as comfortable, while they perceived the farthermost ones (±70°) as uncomfortable. Comparable outcomes ensued from the study in [80]. Unlike in the user study performed by Dautenhahn et al. [77], in [81], no significant difference was found between the robot approaching from the right side or the left side.
Furthermore, Koay et al. [82] researched robot approach distances and directions to a seated user for a handover task. The results show that the preferred approach direction is from either side at a distance of about 0.5 m from the subjects. An interesting fact is that this distance lies within an intimate space [34], but it was preferred because it prevented humans from having to reach out farther with their arms or standing up to pick up the goods from the robot’s tray.

Human Groups (Req. 2.5.2)

Approaching groups of humans requires slightly different strategies. Ball et al. [84] investigated the comfort levels of seated pairs of people engaged in a shared task when approached by a robot from eight directions. Participants rated the robot’s approach behavior for three spatial configurations of seats. Approaches from directions that were ‘front’ for all subjects involved were found to be more comfortable (on the group average) than approaches from a shared rear direction. When seated pairs were in a spatial configuration that did not exhibit a common ‘front’ or ‘rear’ direction, no statistically significant differences were found. However, another finding of the study is that the presence and location of another person influence the comfort levels of individuals within the group.
Joosse et al. [85] explored the optimal approach of an engagement-seeking robot towards groups from three distinct countries, employing Hall’s proxemics model [34]. Their findings indicate that the most suitable approach distance seems to be approximately 0.8–1.0 m from the center of the group.
Karreman et al. [83] investigated techniques for a robot to approach pairs of individuals. Their findings revealed a preference among people for frontal approaches (regardless of side), with a dislike for being approached from behind. They also noted that environmental factors appeared to influence the robot’s approach behavior.

3.3.6. Approach Speed for a Focused Interaction (Req. 2.6)

Robot speed is one of the factors impacting discomfort when approaching a human. Since the literature regarding approaching behavior is rich, there are also clear guidelines to follow in social robot navigation.
Butler and Agah [39] assessed the navigation of a mobile base around a stationary human using various trajectories and equipment resembling the human body. They discovered that speeds ranging from approximately 0.25 to 0.4 m/s were most comfortable, while speeds exceeding 1 m/s were uncomfortable. They also claimed that there might be a speed between 0.4 and 1.0 m/s that produces the least discomfort.
Sardar et al. [86] conducted a user study in which a robot approached a standing individual engaged in another activity. The experiments revealed notable distinctions in the acceptance of invading the participant’s personal space by a robot versus by a human. In their study, only two speeds were evaluated, namely 0.4 and 1.0 m/s; the robot’s faster speed was rated as more trustworthy (the opposite held for human confederates).
In a more recent study, Rossi et al. [87] evaluated speeds of 0.2, 0.6, and 1.0 m/s, which affected the robot’s stopping distance while approaching. They found that human preferences for the stopping distance depend on the activity currently executed: sitting participants favored shorter distances, while walking subjects favored longer ones.

3.3.7. Occlusion Zones Avoidance (Req. 2.7)

Occlusion zones are areas not reached by the robot’s sensory equipment. Even if the robot’s most recent observations suggested that these areas were unoccupied, such estimates may be outdated. Consequently, robots should avoid traversing near blind corners, as they may fail to detect individuals behind them, and those individuals likewise cannot see the robot. By going around the corner with a wider turn, the robot can explore the occluded space earlier, making it possible to react to humans sooner [15]. Proactivity in that matter prevents surprise or panic and generally positively impacts comfort and physical safety.
User studies generally confirm this issue, showing that humans tend to shorten their paths [88,89] to minimize energy expenditure. Taking shortcuts in public spaces increases the risk of encounters around blind corners.
Francis et al. [12] suggested that a robot entering a blind corner should communicate intentions explicitly with voice or flashing lights. However, this seems slightly unnatural, as even humans avoid shouting in corridors. Enabling audio or flashing lights might also be annoying for surrounding workers in shopping aisles.

3.4. Naturalness of the Robot Motion (Req. 3)

The naturalness of a robot’s motion can be referred to as emerging robot behaviors that are not perceived as odd. This is often related to the avoidance of erratic movements and oscillations (Figure 5). Keeping a smooth velocity profile also produces an impression of trust and legibility among observing humans [75].

3.4.1. Avoiding Erratic Motions (Req. 3.1)

Erratic motions involve sudden changes in velocity, making it difficult to anticipate the robot’s next actions. The term is often used to describe chaotic movement patterns that make the robot look confused.
Erratic motions are often related to the smoothness of a robot’s velocity profile (Req. 3.1.1). Natural motions favor movements with minimum jerk [90], with a mostly stable linear velocity and an angular velocity of zero, i.e., adjusting orientation only when necessary [5,15].
In contrast to smooth velocities, oscillating motions (Req. 3.1.2) involve alternating forward and backward movements, with which the robot effectively makes no progress. They may be present in some navigation approaches that rely solely on the Artificial Potential Field [91] or the Social Force Model [43].
Additionally, the in-place rotations (Req. 3.1.3) of a robot appear unnatural to human viewers; hence, it is preferred to avoid trajectories where turning is performed in one spot [90,92]. Also, significant backward movements (Req. 3.1.4) should be avoided, as individuals rarely move in reverse in public areas. Such actions can pose collision risks, particularly for mobile bases lacking range sensors at the back.
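These requirements lend themselves to simple trajectory-level indicators. The sketch below, with illustrative thresholds and a hypothetical function name, derives one possible metric per requirement from a sampled velocity profile.

import numpy as np

def motion_naturalness_metrics(v, w, dt):
    """Simple indicators of erratic motion from a trajectory's velocity
    profile: `v` and `w` are equally long arrays of linear [m/s] and
    angular [rad/s] velocities sampled every `dt` seconds. All
    thresholds are illustrative, not empirically grounded.
    """
    v = np.asarray(v, dtype=float)
    w = np.asarray(w, dtype=float)
    acc = np.diff(v) / dt
    jerk = np.diff(acc) / dt
    moving = v[np.abs(v) > 1e-3]  # ignore standstill samples for reversal counting
    return {
        "mean_abs_jerk": float(np.mean(np.abs(jerk))),                      # Req 3.1.1
        "direction_reversals": int(np.sum(np.diff(np.sign(moving)) != 0)),  # Req 3.1.2
        "inplace_rotation_time": float(
            np.sum((np.abs(v) < 0.05) & (np.abs(w) > 0.3)) * dt),           # Req 3.1.3
        "backward_time": float(np.sum(v < -0.05) * dt),                     # Req 3.1.4
    }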

3.4.2. Modulating Gaze Direction (Req. 3.2)

A broad area of research regarding motion naturalness corresponds to modulating the robot gaze direction. Humanoid robots are typically equipped with a ‘head’, inside which a camera is located (RGB or RGB-D), e.g., Nao, TIAGo, Pepper, Care-O-bot. Pan and tilt motions of the head joints can be used to modulate gaze direction.
Gaze direction is considered one of the social signals (cues) and a specific type of nonverbal communication between a robot and surrounding humans [4]. Among humans, it is closely related to their perception captured by the notion of Information Process Space [13]. Gaze is a general concept in which measurable aspects can be evaluated, such as fixation count and length [93], as well as gaze–movement angle [94]. Both provide valuable insights into human trajectory or behavior prediction [4].
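The gaze–movement angle, for instance, is straightforward to compute from tracked data; the minimal sketch below assumes 2D gaze and velocity vectors expressed in a common frame.

import math

def gaze_movement_angle(gaze_dir, velocity):
    """Angle (radians) between a person's gaze direction and their
    movement direction; small values mean the person looks where they
    walk. Returns None when the person stands still.
    """
    gx, gy = gaze_dir
    vx, vy = velocity
    speed = math.hypot(vx, vy)
    if speed < 1e-6:
        return None
    cos_a = (gx * vx + gy * vy) / (math.hypot(gx, gy) * speed)
    return math.acos(max(-1.0, min(1.0, cos_a)))  # clamp for numerical safety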

Unfocused Interaction

In a study by Kitazawa and Fujiyama [95], the authors investigated gaze patterns in a collision avoidance scenario with multiple pedestrians moving along a corridor. The results of the experiment show that humans pay significantly more attention to the ground surface, which the authors explain as a focus on detecting potential dynamic hazards rather than fixating on surrounding obstacles. In an experiment conducted by Hayashi et al. [96], it was noticed that participants were more willing to speak to the robot when it modulated its gaze direction. Kuno et al. [97] also concluded that robot head movement encourages interaction with museum visitors.
Fiore et al. [98] analyzed human interpretations of social cues in hallway navigation. They designed a study to examine different proxemics and gaze cues implemented by rotating the robot sensors. The results show that the robot’s gaze behavior was not significant, contrary to the robot’s proxemics behavior, which affected participant impressions of the robot (Section 3.3.1). Similarly, a study by May et al. [99] examined how well robot intentions are understood when conveyed using different cues. It turned out that the robot was understood better when a mechanical signal was used compared with the gaze direction cue. Also, Lynch et al. [100] conducted a study employing a virtual environment in which virtual agents established a mutual gaze with real participants during path-crossing encounters in a virtual hallway. The subjects of the study did not find the gaze factor important for inferring the paths of the virtual agents.
Different strategies of gaze modulation were studied by Khambhaita et al. [101]. Their research indicates that the robot’s head behavior of looking at the planned path resulted in more accurate anticipation of the robot’s motion by humans compared with when the head was fixed. The authors also found that the robot operating with the head behavior of alternately looking at the path and glancing at surrounding humans gave the highest social presence measures among the subjects. Similarly, Lu et al. [102] discussed a strategy of a robot looking at the detected human followed by looking ahead in 5-second cycles.

Focused Interaction

Research has shown that the gaze modulation of a robot in focused interactions should be treated differently than in unfocused ones. Breazeal et al. [103] explored the impressions of humans participating in an experiment with a Kismet robot capable of conveying intentionality through facial expressions and behavior. They identified the necessity of gaze direction control for regulating the conversation rate, as the robot directs its gaze to a locus of attention.
In another study, Mutlu et al. [104] implemented a robot gaze behavior based on previous studies [105,106] and their observations that people use gaze cues to establish and maintain their conversational partner’s roles as well as their own. The gaze behavior strategy produced turn-yielding signals only for conversation addressees. In their experiment, they found that using only the gaze cues, the robot manipulated who participated in and attended to a conversation.

3.5. Compliance with Social Norms (Req. 4)

Navigating humans adhere to diverse social norms influenced by cultural, interactional, environmental, and individual factors such as gender and age. Therefore, the robot’s compliance with social conventions is also a multifaceted concept (Figure 6), in contrast to low-level motion conventions, such as approach velocity. The aforementioned factors shape high-level social conventions involving navigation-based interactions like queuing, elevator decorum, yielding way to others, and adhering to right-of-way protocols. Robots considered sociable abide by social conventions. Despite the existence of customary routines, they are often challenging to model precisely due to their abstract nature, as seen in the discussion by Barchard et al. [107].
The authors of surveys [5,15] exemplify that even if a robot’s movements appear natural and unobtrusive (Req. 3), the robot can still violate typical social conventions. For instance, entering a crowded elevator without allowing occupants to exit first breaches common expectations, thereby potentially causing discomfort. Different user studies also report that human discomfort can be caused by violations of social norms even if the rules of perceived safety are properly adhered to during robot navigation [108,109].
There are no predetermined sets of high-level social conventions, making compliance a dynamic and context-dependent aspect of robotic behavior [5] that requires a diverse level of contextual awareness.
The most common and meaningful social conventions examined in the literature are illustrated below. The complementary discussion attempts to clarify how they should be addressed in robot control systems.

3.5.1. Follow the Accompanying Strategy (Req. 4.1)

Strategies for executing the task of accompanying humans are dictated by the social conventions of how humans navigate in relation to other pedestrians. Customary human behaviors determine how robots should adjust their movements based on the relative position of the accompanying human (or humans), ensuring smooth and natural interactions.

Tracking Humans from the Front (Req. 4.1.1)

Numerous studies have examined the relative pose that a robot should maintain while tracking a human from the front. For example, Jung et al. [110] performed a study to evaluate how often humans look back at a robot that tracks them from behind. They found that participants often looked back because they were curious whether the robot would bump into them or was tracking them well. The authors concluded that tracking from the front might be more comfortable and designed a robot control strategy that involves moving 1 m ahead of the tracked human, whose local movement goal is inferred by the robot online.
On the other hand, Young et al. [111] compared various relative poses for a robot led on a leash by a participant. The results reveal that having the robot move in front of the person was the most comfortable approach for joint motion. In another study, Carton et al. [112] proposed a framework for analyzing human trajectories. Their studies led to the conclusion that humans plan their navigation trajectories similarly whether they are walking past a robot or another human.

Person Following (Req. 4.1.2)

Gockley et al. [113] evaluated methods of avoiding rear-end collisions of a robot following a person. The first approach focuses on direction-following, where the robot follows the heading of a person, whereas the second method, path-following, relies on imitating the exact path that a person takes. The participants of the real-world experiments rated the direction-following robot’s behavior as substantially more human-like. However, the participants rated that the robot stayed too far away (1.2 ± 0.1 m) from them while moving.
Following an individual in populated environments is challenging as crowd behavior often manifests as flows of social groups, with individuals typically following the flow [61]. Studies show that joining a flow with a similar heading direction is more socially acceptable, resulting in fewer disturbances to surrounding pedestrians [114]. Collision avoidance techniques for following one person through a populated environment are discussed in [115,116].

Side by Side (Req. 4.1.3)

The tendency for people to walk side by side when walking together was discussed by Kahn et al. [117]. In situations with only two individuals walking, they typically adopt a side-by-side formation, while in crowded conditions or with three or more individuals, more complex formations such as ‘V’ shapes are observed [118]. Spatial preferences of humans when being followed by a robot were reviewed in [119]. In the majority of studies, the robot’s relative position to the person typically remains constant, with any adjustments being made primarily in response to environmental factors.
Saiki et al. [120] discussed how robots can serve walking people. In their experiments, people’s trajectories were recorded to develop a histogram of relative distances. The conclusion is that the average distance between people walking alongside each other is 0.75 m.
Karunarathne et al. [121] designed a spatial model for side-by-side accompaniment without explicit communication about the goal of a human. During their study, they found that the distance maintained in a robot–human pair (1.25 m) was larger than that of the human pair on average (0.815 m).

3.5.2. Avoiding Blocking the Affordance Spaces (Req. 4.2)

The concept of affordance space relates to the potential activities that the environment offers to agents [122]. Affordance spaces could be mapped as free or banned regions as a function of time [123]. They have no specific shape [13], as they depend on the specific actions.
Affordance spaces are specific to the robot’s environment and can be exemplified by the area near a painting in a gallery or menu stands in restaurants. In general, an affordance space can be crossed without causing disturbance to a human (unlike activity spaces in Section 3.5.3), but blocking an affordance space could be socially unacceptable [13]. Also, for robots with a limited field of view (FOV), it is essential to utilize a predefined map of affordance spaces.
Raubal and Moratz [124] discussed a robot architecture incorporating a functional model for affordance-based agents. The crucial concept is to consider the information about locations of affordance spaces when selecting a coarsely defined (region-based) navigation goal or a goal on a topological map. The notion of affordance spaces was also discussed in the context of learning them online [125], as well as in gaining knowledge from the analysis of human trajectories [126].

3.5.3. Avoiding Crossing the Activity Spaces (Req. 4.3)

The activity space is an affordance space linked to an ongoing action performed by an agent—a human or another robot [13]. An activity space can be exemplified by the area between an observer and a painting in a gallery. Once a visitor establishes this space, the robot is obliged not to cross it [122]. Additionally, the robot’s perception has to dynamically infer whether a certain agent has initiated an activity space, e.g., by observing an object [125]. Furthermore, the activity space should be conditionally constrained; for instance, it should be less restrictive for a shorter robot compared with a taller one that might fully occlude the painting when crossing through the activity space.

3.5.4. Passing on the Dominant Side (Req. 4.4)

Bitgood and Dukes [89] observed that people tend to proactively move to the right half of a hallway or a narrow passage, which is tied to cultural traffic rules. Multiple existing social robot navigation approaches have already implemented strategies to follow the right side of a corridor or to favor passing humans on the right [59,73,116,127]. However, as Bitgood and Dukes suggest, this might not be a strict rule to follow in crowded spaces, as some people keep to the other side because they have an upcoming left-turn destination [89]. This is supported by the study conducted by Neggers et al. [60], who also examined the effect of the passing side and found that participants reported equal comfort levels for both sides. Nevertheless, Moussaïd et al. [128] conducted a set of controlled experiments and observed pedestrians’ preference to perform evasive maneuvers to the right while passing each other.

3.5.5. Yielding the Way to a Human at Crossings (Req. 4.5)

Möller et al. [3] posed the problem of who goes first at an impasse as one of the social conventions that are ‘less well-defined’. As stated in the survey by Mirsky et al. [4], the term ‘social navigation’ usually refers to a human-centric perspective; therefore, the robot is often obliged to yield the way to a human at a crossing.
The user study performed by Lichtenthäler et al. [75] showed that in the crossing scenario, the participants favored the navigation method in which the robot stopped to let a person pass. Yielding the way to a human based on the predicted motion was also investigated in [65].

3.5.6. Standing in Line (Req. 4.6)

Standing in line while forming a queue is one of the most common collective behaviors of humans. Nakauchi and Simmons [129] modeled how people stand in line by first collecting empirical data on the matter. Further, they utilized these data to model a range of behaviors for a robot tasked to get into a queue, wait, and advance in the queue alongside other individuals awaiting service.

3.5.7. Obeying Elevator Etiquette (Req. 4.7)

‘Elevator etiquette’ refers to the customary rules of humans entering and exiting a bounded space through a doorway, specifically letting people leave an elevator before attempting to enter. These rules are generalizable to numerous closed areas like rooms and corridors.
Gallo et al. [130] proposed a machine-like approach to the design of robot behavior policies that effectively accomplish tasks in an indoor elevator-sharing scenario without being disruptive. Alternatively, Lin et al. [109] discussed the social appropriateness of lining up for an elevator in the context of deploying a mobile remote presence system. Elevator-related conventions were also tackled in a robotic competition, the "Take the Elevator Challenge" [131].

3.6. Discussion

We acknowledge that the proposed set of primitive requirements is subject to extension as social navigation studies advance and new issues or additional cases are found [12]. Not only have some of the requirements mentioned above been insufficiently studied, but many other human conventions have not been considered at all in user studies with robots; hence, there are no clear guidelines on how to tackle them properly in social robot navigation. As a consequence, a comprehensive method for assessing compliance with social norms remains an open problem, in contrast to the agreed-upon criteria for evaluating physical and perceived safety, as well as most aspects covered by naturalness.
Facial expressions are one phenomenon that user studies have not examined thoroughly enough to establish specific principles. Petrak et al. [71] noted as a side observation of their study that enhanced robot facial expressions and gestures could make the behavior easier for the experiment participants to anticipate. Kruse et al. [15] pointed out additional navigation conventions, such as giving priority to elderly people at doorways, asking for permission to pass, and excusing oneself when traversing a personal zone to reach a goal. Furthermore, Gao and Huang [5] indicated observing the right-of-way at four-way intersections as another navigation-based interaction. On the other hand, although overtaking on the nondominant side has been implemented in some navigation methods [59,132], there are no clear guidelines indicating that such behavior is common in environments other than narrow passages.
Nevertheless, implementing all of the requirements in a single robot control system remains an enormous challenge, as integrating all constraints and norms requires rich contextual awareness on the part of the robot.

4. Perception

Robot perception plays a substantial role in safe navigation and expands a robot's intelligence. Social robots must differentiate humans from other obstacles to interact in a discomfort-mitigating manner.
In robotics, various types of exteroceptors [21] are utilized to perceive the environment. Tactile sensors provide feedback about physical contact, enabling robots to detect and respond to touch [40,49,50,133,134]. They are crucial for tasks requiring contact information that other sensor types cannot capture. Sonar sensors utilize sound waves to detect the presence, distance, and velocity of objects, allowing robots to navigate and avoid obstacles in dynamic environments [39,40,135,136,137]. Laser range finders use laser beams to measure distances accurately, aiding in mapping and localization tasks [49,138,139,140,141,142,143]. RGB cameras capture images in visible light, enabling robots to recognize objects, navigate environments, and interpret visual cues [27,40,144]. Finally, RGB-D cameras, equipped with depth sensors, provide both color and depth information, enhancing object detection and enabling 3D mapping [140,145,146,147]. These sensor types play essential roles in robotics research and development, enabling robots to perceive and interact with their surroundings effectively.
The remainder of this section follows the taxonomy illustrated in Figure 7.

4.1. Environment Representation

Besides detecting obstacles and tracking humans, robot perception is usually employed to collect subsequent observations of the surroundings to create an environment model, among which the most popular are dense, sparse, and dual representations.
A dense representation constitutes a discretized map of the robot environment. Classical maps contain all types of obstacles embedded into the environment model without a semantic distinction. The most common planar map types are occupancy grids [148] and costmaps [149], while octomaps [150] represent occupancies in 3D space. The pioneering dense model is the occupancy grid [148], which represents the environment as a binary grid (graph) where each cell is either occupied or free, and all occupied cells are treated as equal obstacles. Costmaps [149] were proposed to extend classical occupancy grids: they introduce intermediate states of a cell (between free and occupied) and constitute a 2D traversability grid in which each cell is given a cost of traversal reflecting the difficulty of navigating the respective area of the environment [151]. This allows robots to plan paths that optimize not just for avoiding collisions but also for factors like proxemics. A dense representation of the environment is often used on its own in classical robot navigation approaches [138,150,152].
Sparse environment representations typically refer to representations where only certain key features or landmarks are represented explicitly, with the rest of the space left unstructured or minimally represented. A sparse representation usually provides a concise description of the objects detected in the environment, combining their semantic information with geometric attributes [28,153,154,155]. Storing environment objects in this way also allows, e.g., applying simple linear-algebra formulas to predict objects' motion.
Dual environment representations, combining dense and sparse ones, are commonly used in social robot navigation [156,157,158,159]. While obstacle-filled costmaps are calculated, robot perception modules simultaneously detect and track humans in the environment, providing sparse data about each human, e.g., a pose and velocity, or even spatial relationships [140,160]. Such information allows for dynamic modeling of the personal spaces of individuals (Req. 2.1) and the O-spaces of F-formations (Req. 2.2), which can later be embedded onto layered costmaps [161]. Layered costmaps extend the notion of traditional costmaps to facilitate separate representations of different contextual cues as spatial constraints in the robot environment. The resultant costmap with enriched information is flattened for motion planning; therefore, classical algorithms can still be used.
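To make the dual representation concrete, the following minimal sketch rasterizes a Gaussian personal-space model (Req. 2.1) for a tracked person into a social layer and flattens it onto a base costmap by taking the per-cell maximum. The grid size, resolution, cost scale, and the `personal_space_layer`/`flatten_layers` helpers are illustrative assumptions, not the layered-costmap API of [161].

```python
import numpy as np

def personal_space_layer(shape, resolution, person_xy, sigma=0.8, peak=200.0):
    """Rasterize a circular Gaussian personal-space cost around one person.

    shape      -- (rows, cols) of the grid
    resolution -- meters per cell (square cells assumed)
    person_xy  -- (x, y) position of the person in meters
    sigma      -- spread of the personal space in meters (illustrative value)
    peak       -- maximum cost added at the person's position
    """
    rows, cols = shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    dx = xs * resolution - person_xy[0]
    dy = ys * resolution - person_xy[1]
    return peak * np.exp(-(dx**2 + dy**2) / (2.0 * sigma**2))

def flatten_layers(base_costmap, social_layers, lethal=254):
    """Combine a static-obstacle costmap with social layers (per-cell maximum),
    clamping to the lethal cost so hard obstacles stay dominant."""
    combined = base_costmap.astype(float)
    for layer in social_layers:
        combined = np.maximum(combined, layer)
    return np.minimum(combined, lethal)

# Example: a 10 m x 10 m map at 0.1 m resolution with one tracked person.
base = np.zeros((100, 100))
social = personal_space_layer(base.shape, 0.1, person_xy=(5.0, 5.0))
costmap = flatten_layers(base, [social])
```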

4.2. Human Detection and Tracking

Social robot navigation encompasses the awareness of humans surrounding the robot, as they must be treated differently from typical obstacles. The awareness arises from detecting and tracking people by the robot perception system [115] as well as exhibiting behavior that mitigates the discomfort of nearby humans. Various methods for human detection and tracking have been proposed in the literature [140,162,163,164,165,166,167].
Arras et al. [162] proposed a method utilizing a supervised learning technique for creating a classifier for people detection. Specifically, AdaBoost was applied to train a classifier from simple features of groups of neighboring beams corresponding to legs in the LiDAR’s range data. Similarly, Bozorgi et al. [167] focused on LiDAR data filtering to obtain robust human tracking in cluttered and populated environments. They integrated Hall’s proxemics model [34] with the global nearest neighbor to improve the accuracy of the scan-to-track data association of leg detection. Results of their experiments show that their method outperformed the state-of-the-art detector from [163].
In contrast, Linder et al. [140] proposed a multimodal (LiDAR and RGB-D) people-tracking framework for mobile platforms in crowded environments. Their pipeline comprises different detection methods, multisensor fusion, tracking, and filtering. Triebel et al. [160] extended the multihypothesis tracker from [168] to detect F-formation arrangements. Both works were integrated and implemented in the SPENCER robot [140,160].
Redmon et al. [164] framed the object detection problem as a regression problem over spatially separated bounding boxes and associated class probabilities, proposing a generic framework for detecting objects of various classes in 2D images. Alternatively, Cao et al. [166] proposed the OpenPose system for human skeleton pose estimation from RGB images. In another work, Juel et al. [169] presented a multiobject tracking system that can be adapted to work with any detector and utilize streams from multiple cameras. They implemented a procedure for projecting RGB-D-based detections to the robot's base frame, which are later transformed to the global frame using a localization algorithm.
Theodoridou et al. [144] used TinySSD [165] for human detection in their robot with limited computational resources. TinySSD is a lightweight single-shot detection deep convolutional neural network for real-time object detection, which only finds people in the images; hence, the authors of [144] had to perform image and range-based data matching in their system.
In real-world studies, robot sensors are used to detect and track humans. The survey by Möller et al. [3] discusses, among other topics, the idea of active perception. The authors noted that active vision systems can influence the input by controlling the camera. As an extension of active perception, they describe active learning [170], which also influences the input data, but during the training process; this enables the agent to intelligently choose which data points to exploit next.
To the best of our knowledge, the most comprehensive human perception stack currently available is SPENCER [140,160], released as open-source software (https://github.com/spencer-project/spencer_people_tracking (accessed on 20 March 2024)) compatible with the Robot Operating System (ROS) [171,172].

4.3. Human Trajectory Prediction

In social navigation, classical planning methods, e.g., the Artificial Potential Field (APF) [91] or DWA [135], often exhibit limited efficacy, as pedestrians are treated merely as uncooperative obstacles. This limitation is exemplified by the freezing robot problem [173], where a mobile robot may become immobilized in a narrow corridor when confronted with a crowd of people unless it can anticipate the collective collision avoidance actions [174]. Therefore, predicting human trajectories is one of the fundamental concepts in social robot navigation, particularly in unfocused human–robot interactions, where explicit communication between agents is absent. Understanding how agents move can reduce the potential for conflicts, i.e., sudden encounters in which humans and robots might collide (Req. 1) [4,175]. Another particularly important aspect is that humans frequently undergo lengthy occlusion events; hence, predicting their motion helps prevent unexpected encounters.
In the social robot navigation literature, the prevailing method is Inverse Reinforcement Learning (IRL) [176], which is based on the Markov Decision Process (MDP) [177]. IRL extracts the latent reward (or cost) function from observed behavior, enabling robots to learn from human demonstrations, and can be classified as an offline inference and learning method [4]; it learns from entire trajectories, and its computational expense arises from running RL in an inner loop [184]. Henry et al. [178] used IRL to learn human motion patterns in simulation and later applied them to socially aware motion planning. Rhinehart et al. [179] extended IRL to the task of continuously learning human behavior models from first-person-view camera images; their Darko algorithm jointly discovers states, transitions, goals, and the reward function of the underlying MDP model. In another work, Vasquez et al. [180] conducted experiments comparing the performance of different IRL approaches, namely Max-margin IRL [181] and Maximum Entropy IRL [182], which were later applied to robot navigation in a densely populated environment. Also, Kretzschmar et al. [183] used Maximum Entropy IRL to deduce the parameters of a human motion model that imitates the learned behaviors. Other approaches include that of Goldhammer et al. [185], who used an Artificial Neural Network (ANN) with a multilayer perceptron architecture to learn typical human motion patterns, and that of Gao et al. [186], who trained a Reinforced Encoder–Decoder network to predict possible activities.
Alternatively, Long Short-Term Memory (LSTM) networks are one of the sequential methods that learn conditional models over time and recursively apply learned transition functions for inference [187]. Unlike standard feed-forward neural networks, these recurrent neural networks include feedback connections. Following the work by Alahi et al. [188], who presented a human trajectory forecasting model based on LSTM networks, they have become widely popular for this purpose. For example, Furnari and Farinella [189] utilized LSTMs to predict future human actions in a domestic setting. Chen et al. [190] also created an LSTM-based model predicting socially aware trajectories learned from a dataset, later integrating it into a robot motion planning scheme. Recurrent Neural Networks (RNNs) were also applied to sequence learning, e.g., by Vemula et al. [191], who proposed the Social Attention trajectory prediction model that captures the relative importance of each person when navigating in the crowd, irrespective of their proximity. Another work, by Farha et al. [192], relies on training a Convolutional Neural Network (CNN) and an RNN to learn future sequences; the authors showed their method to be well suited for long-term predictions of video sequences.
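As an illustration of the sequential models discussed above, the sketch below implements a generic LSTM encoder–decoder in PyTorch that encodes observed pedestrian displacements and recursively rolls out future steps. It is a minimal sketch, not the Social LSTM architecture of [188]; the hidden size, the shared encoder/decoder weights, and the `TrajectoryLSTM` name are assumptions made for brevity.

```python
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    """Encode an observed (dx, dy) track and roll out future displacements."""

    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)    # maps hidden state to the next (dx, dy)

    def forward(self, observed, horizon=12):
        # observed: (batch, T_obs, 2) relative displacements of a pedestrian
        _, state = self.encoder(observed)   # summarize the observed track
        step = observed[:, -1:, :]          # start from the last observed step
        preds = []
        for _ in range(horizon):            # recursive rollout of future steps
            out, state = self.encoder(step, state)
            step = self.head(out)
            preds.append(step)
        return torch.cat(preds, dim=1)      # (batch, horizon, 2)

model = TrajectoryLSTM()
obs = torch.randn(8, 8, 2)                  # 8 tracks, 8 observed steps each
future = model(obs)                          # (8, 12, 2) predicted displacements
```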
Another effective data-based method for learning from demonstrations is Generative Adversarial Imitation Learning (GAIL), applied by, e.g., Tai et al. [184] to learn continuous actions and desired force toward the target. Huang et al. [193] proposed a model-based interactive imitation framework combining the advantages of GAIL, interactive RL, and model-based RL.
On the other hand, Kanda et al. [194] used the Support Vector Machine (SVM) to classify 2 s recordings of human trajectories in a shopping mall into four behavior classes: fast walking, idle walking, wandering, and stopping. The classification relies on features of trajectory shapes and velocity. Coarse classification enables forecasting human trajectories [6]. Similarly, Xiao et al. [195] first pretrained the SVM to group activity classes, then predicted the trajectories based on those classes, and finally evaluated the system in a lab environment.
Alternatively, the Social Force Model (SFM) [43], with its numerous modifications [156,158,196], is also a popular method for human trajectory prediction; however, it requires knowledge about environmental cues to infer the possible goals of humans. Luber et al. [197] combined the SFM with a Kalman-filter-based tracker to produce a more realistic prediction model of human motion under the constant velocity assumption. Recently, multiple approaches integrating the SFM into neural network schemes have been proposed. For example, Yue et al. [198] integrated the SFM and a deep neural network in their Neural Social Physics model with learnable parameters. Gil and Sanfeliu [199] presented the Social Force Generative Adversarial Network (SoFGAN), which uses a GAN and the SFM to generate different plausible people trajectories, reducing collisions in a scene.
Numerous works across various application domains depend on kinematic models for their simplicity and satisfactory performance, particularly in scenarios with minimal motion uncertainty and short prediction horizons. Among others, Elnagar [200] proposed a method predicting future poses of dynamic obstacles using a Kalman filter under the assumption of using a constant acceleration model. Similarly, Lin et al. [201] proposed a forecasting strategy that employs a bimodal extended Kalman filter to capture the dual nature of pedestrian behavior—either moving or remaining stationary. Also, Kim et al. [202] used a combination of ensemble Kalman filters and a maximum-likelihood estimation algorithm for human trajectory prediction.
In applications where performance is crucial, the constant velocity model, assuming piecewise constant velocity with white-noise acceleration, can be applied. Despite its simplicity, it is commonly chosen as an ad hoc method for motion prediction in numerous approaches [139,203,204,205,206,207,208], as it has a lightweight and straightforward implementation and yields satisfactory results with high-frequency updates. Recently, Schöller et al. [209] showed that the constant velocity model might outperform state-of-the-art neural methods in some scenarios.
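For reference, a constant-velocity predictor of the kind cited above can be written in a few lines; the sketch below performs the predict step of a Kalman filter under the piecewise-constant-velocity, white-noise-acceleration model, in the spirit of the Kalman-based predictors discussed earlier. The noise intensity `q` and the 0.1 s step are illustrative assumptions.

```python
import numpy as np

def cv_kalman_predict(x, P, dt, q=0.5):
    """One predict step of a constant-velocity Kalman filter.

    x  -- state [px, py, vx, vy]; P -- 4x4 covariance
    dt -- time step in seconds; q -- white-noise acceleration intensity
    """
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    G = np.array([[0.5 * dt**2, 0],
                  [0, 0.5 * dt**2],
                  [dt, 0],
                  [0, dt]])
    Q = q * G @ G.T                   # process noise from white-noise acceleration
    return F @ x, F @ P @ F.T + Q

# Extrapolate a pedestrian 2 s ahead in 0.1 s steps.
x = np.array([0.0, 0.0, 1.2, 0.3])   # walking mostly along +x
P = np.eye(4) * 0.1
horizon = [x.copy()]
for _ in range(20):
    x, P = cv_kalman_predict(x, P, dt=0.1)
    horizon.append(x.copy())
```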
Various other methods have also been evaluated for human trajectory prediction, for example, belief distribution maps [210], which consider the obstacle situation in the robot's environment; multigoal Interacting Gaussian Processes (mgIGP) [211], which can reason about multiple goals of a human for cooperative navigation in dense crowds; or the Human Motion Behavior Model (HMBM) [212], which allows a robot to make human-like decisions in various scenarios. Another method was proposed by Ferrer and Sanfeliu [213], who presented a geometric, long-term Bayesian Human Motion Intentionality Predictor using a naive Bayes classifier that only requires training to obtain the set of salient destinations that configure a scene.
Our survey discusses the most common methods used in robotic applications, but various other methods for human trajectory prediction have evolved over the years. Rudenko et al. [187] presented a thorough review of the state-of-the-art human motion prediction methods, where they also discussed approaches that account for map information or environmental cues for predictions. An appropriate forecasting method has to be selected for a specific application based on multiple criteria, e.g., computational resources, prediction horizon, and detection uncertainty.

4.4. Contextual Awareness

A robot is perceived as intelligent if it utilizes contextual information in its decision making [16,214]. The proper socially aware activity of a robot performing a single task might differ depending on the situation defined by the contextual arrangement. This amounts to adjusting the robot's behavior according to what environment it is in (a gallery or a shopping mall), what task it performs (transporting a glass full of hot tea or packed goods), whom it interacts with (a young person or an elderly one), and what social norms are expected in the environment (which may differ between cultures).
Francis et al. [12], in their survey, identified the following forms of context: cultural context [26,34,85,215,216,217], environmental context, individual diversity, task context, and interpersonal context; however, their literature review in this area is narrow. The notion of context is usually handled in the deliberative layer of the robot's planning and embedded as spatial or spatiotemporal constraints in motion planning [17,218,219].

4.4.1. Environmental Context

The environmental context is constituted by various characteristics of the robot's surroundings. This information is particularly important for robots that act in different types of rooms, e.g., the corridors and libraries of a university. While the robot might be sociable and lively in corridors, it is not appropriate to distract students in the library, where the robot should move slowly and quietly. Therefore, researchers investigate different environmental concepts to embed them into robot navigation schemes.
Banisetty et al. [220] proposed a model-based context classifier integrated with a high-level decision-making system for socially aware navigation. Their CNN model distinguishes between different environmental contexts, such as an art gallery, a hallway, or a vending machine. Additionally, based on LiDAR observations and using an SVM, they classified social contexts, namely people forming a queue and F-formations. In a continuation of this work, Salek Shahrezaie et al. [221] introduced classification and detection information into a knowledge base, which they queried to extract applicable social rules associated with the context at hand. This approach was further extended in [142] to use environmental context, object information, and more realistic interaction rules for complex social spaces. On the other hand, Jia et al. [222] proposed a deep-learning-based method for detecting hazardous objects in the environment of an autonomous cleaning robot in order to maintain safe distances from them at the motion planning level. Recognizing human activity spaces is also a part of environmental context awareness, as presented in the work by Vega et al. [223], who exploited the detection of specific objects for this purpose.
A leading approach to enabling the robot's contextual awareness is semantic mapping [224,225,226]. For example, Zhang et al. [227] used an object semantic grid map along with a topological map for the automatic selection of roughly defined navigation goals in a multiroom scenario. Alternatively, Núñez et al. [228] proposed a navigation paradigm in which semantic knowledge of the robot's surroundings and different social rules are used in conjunction with the geometric representation of the environment, aiming to integrate semantic knowledge with geometrical information. A promising method for the interactive building of semantic maps for robot navigation is illustrated in [229].

4.4.2. Interpersonal Context

Interpersonal cues mainly relate to social relationships between tracked humans in the robot's environment. This knowledge can be embedded in control systems to enhance robot navigation skills. For example, Li et al. [230] proposed a dual-glance CNN-based model for visual recognition of social relationships; the first glance fixates on the person of interest, and the second deploys an attention mechanism to exploit contextual cues. Lu et al. [161] proposed an approach for context-sensitive navigation, mainly focusing on human-aware robot navigation, and embedded spatial constraints into environment models in the form of costmaps.
The algorithm by Luber and Arras [168] was extended in [160] for detecting and learning sociospatial relations, which are used for creating a social network graph to track groups of humans. Patompak et al. [231] developed a Reinforcement Learning method for estimating a social interaction model that assists the navigation algorithm with respect to social relations between humans in the robot's environment model. Similarly, Okal and Arras [232] employed Bayesian Inverse Reinforcement Learning to learn the cost function for traversing areas occupied by groups of humans.
Haarslev et al. [233] introduced contextual information into robot motion planning, namely F-formation spatial constraints in the costmaps used for planning. The F-formation arrangement is inferred from participants’ speed, line of sight, and potential focus points. Similarly, Schwörer et al. [234] detected people and their interactions to create spatial constraints in the environment model used for motion planning.

4.4.3. Diversity Context

Diversity-related contexts facilitate leveraging human diversity in social robot navigation. Researchers have presented multiple studies regarding gender [235,236,237], age [235,236,238], personality [136,239], and representations of diverse human groups [240]. All these traits affect how people interact with and perceive robots. Furthermore, Bera et al. [26] attempted to classify the personality of each pedestrian in the crowd to differentiate the sizes of the personal spaces of individuals. Subsequently, the emotional states of pedestrians were also inferred and embedded for socially aware navigation [27,241,242].

4.4.4. Task Context

A robot’s behavior differs based on a task to perform. If the robot is delegated to execute a task of a high priority, e.g., urgent transportation in a hospital, it will interact with humans only in an unfocused manner committing to collision avoidance and respecting personal spaces. However, if the robot’s task is to start sociably interacting with customers in a shopping mall to present products to them, it has to mildly start focused interactions with pedestrians. Therefore, the objectives of robot navigation differ between tasks, affecting the socially correct behavior scheme that should be followed.
Popular tasks delegated to social and assistive robots include transportation [79], guiding [160,243], and accompanying [157,244]. For example, accompanying objectives differ between attending individuals [244,245] and groups [157,246], and even between different strategies for accompanying individuals (Section 3.5.1). Similarly, a guiding robot, e.g., the one proposed in [243], mainly focuses on leader–follower tasks, but once it finishes the guided tour, it may drop the constraints specific to the guiding behavior (speed, etc.) and switch to socially aware collision avoidance on its way back to the reception area.
A significant challenge lies in integrating the contradictory objectives of treating humans as social obstacles during tasks requiring only unfocused interactions and regarding them as interaction partners when needed. As a result, methods introducing human awareness and social acceptance must be carefully selected to avoid interfering with contradictory modes of operation, as some constraints may need to be disabled in focused interaction mode while enabled in unfocused interaction mode [23].

5. Motion Planning

Robots using socially aware navigation planners are perceived as more socially intelligent than those using traditional navigation planners, as studied in [247]. This section discusses various navigation approaches and methods of incorporating social awareness into robot control systems.
The motion planning module is crucial for safely guiding the robot through dynamic environments. Motion planning for mobile robots is understood as a pose control scheme aimed at moving the robot from its initial pose to the target pose while considering the kinematic and dynamic (kinodynamic) constraints of the mobile base.
From the perspective of motion planning, the requirements for social awareness presented in Section 3 might entail specific enhancements compared with classical robot navigation. These can be classified into three groups. The first is modifying the intermediate trajectory to a fixed goal; this might involve adjustments originating from respecting personal spaces (Req. 2.1) and O-spaces of F-formations (Req. 2.2) or from modulating speed (Req. 2.3) to mitigate the discomfort of surrounding humans. The second is extending the selection of final poses for navigation tasks with coarsely defined goals; in particular, selecting a pose that, e.g., does not block any affordance space (Req. 4.2), minimizes the discomfort of approaching a human (Req. 2.5.1), or allows joining a queue in a socially compliant manner (Req. 4.6). The third is dynamically inferring and following virtual goals in real time, depending on the poses of cooperating humans, which enables efficient execution of accompanying tasks (Req. 4.1).
The predominant motion planning architecture for mobile robots relies on hierarchical planning with two asynchronously running modules, specifically, a global path planner and a local trajectory planner [138,248]. Global path planning involves finding a feasible path from a start configuration to a goal configuration while avoiding environmental obstacles. Algorithms generating global paths typically operate in a configuration space and consider the entire environment [249]. In contrast, local trajectory planning aims to generate trajectories for the robot to follow within a short time horizon that navigate the robot safely and efficiently through the environment while reacting to dynamic obstacles and perturbations. Algorithms producing local trajectories typically operate in the robot’s control space or velocity space and consider immediate sensor feedback and environmental information [138,152]. Usually, local trajectory planners operate at a higher frequency than global path planners to adjust the robot’s motion in real time, accounting for dynamic changes in the environment and ensuring safe and efficient navigation.
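A minimal sketch of this two-rate architecture is given below: a slow global path planner is re-run periodically, while a fast local trajectory planner tracks the current path on every control cycle. The `global_planner` and `local_planner` callables and the `robot` interface (with `pose` and `send_velocity`) are hypothetical placeholders for concrete implementations, not an API from the cited works.

```python
class HierarchicalNavigator:
    """Minimal two-rate planning loop: a slow global path planner combined
    with a fast local trajectory planner (a sketch, not a full ROS node)."""

    def __init__(self, global_planner, local_planner, global_hz=1.0):
        self.global_planner = global_planner  # f(start, goal, world_map) -> path
        self.local_planner = local_planner    # f(path, sensors) -> (v, w)
        self.global_period = 1.0 / global_hz
        self.path = []
        self._last_global = float("-inf")

    def spin_once(self, robot, goal, world_map, sensors, now):
        """Call at the local-planner rate (e.g., 20 Hz); the global path is
        replanned at a much lower rate, as in the hierarchical scheme above."""
        if not self.path or now - self._last_global >= self.global_period:
            self.path = self.global_planner(robot.pose, goal, world_map)
            self._last_global = now
        # React to dynamic obstacles at the high rate on every call.
        v, w = self.local_planner(self.path, sensors)
        robot.send_velocity(v, w)
```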
Our taxonomy of the algorithmic perspective of social robot navigation follows the hierarchical motion planning scheme, differentiating approaches for global path planning and local trajectory planning (Figure 8).
Numerous surveys regarding social robot navigation thoroughly discussed motion planning [13,14,15]. However, our review aims not only to investigate the variety of methods of implementing human awareness in robot control systems but also to classify those approaches according to the requirements they fulfill. The classification of requirements regarded in objectives of different navigation algorithms is presented in Section 5.3.

5.1. Global Path Planning

In the context of global path planning for the social navigation of surface robots, various methodologies are employed. Recently, multiple surveys regarding path planning for mobile robots have been published [250,251,252,253,254]. State-of-the-art techniques can be classified into distinct groups: graph-based methods, potential field methods, roadmap methods, and sampling-based methods. Each class of approaches offers unique advantages and challenges, contributing to the broader landscape of mobile robot path planning.
Although metaheuristic methods like genetic algorithms or particle swarm optimization are commonly discussed in classical path planning [255], to the best of our knowledge, they have not been applied to human-aware navigation.

5.1.1. Graph-Based Methods

Graph-based methods for path finding fall into the category of approximate cell decomposition approaches, in which cells of a predefined shape (usually rectangles) do not exactly cover the free space (in contrast to exact cell decomposition), but the cell connectivity is encoded in a graph [256].

Algorithms

The earliest graph (or grid) search methods in the context of computer science and algorithmic development can be traced back to the 1950s. One significant development was Dijkstra's algorithm [257], which laid the foundation for many subsequent graph search and pathfinding algorithms; it was primarily focused on finding the shortest path in a graph. Later, Hart et al. [258] presented the A* algorithm, which builds upon Dijkstra's algorithm by incorporating heuristic information to guide the search more efficiently, making it particularly useful for pathfinding in large graphs. The heuristic estimates the distance between the currently processed node and the goal node in the solution space. Globally shortest paths are obtained using both heuristic estimates and actual costs in a weighted graph. Other variants of the A* planning algorithm include D* [259], Focused D* [260], LPA* [261], D* Lite [262], E* [263], Field D* [151], and Theta* [264]. A brief description of each variant is given below.
Graph-based planners usually require replanning if the underlying environment model changes. This drawback is addressed by D* [259], an incremental search algorithm for finding the shortest paths, designed particularly for graphs that may change dynamically once the search begins, as it includes a procedure for updating paths when changes occur. Focused D* [260] adapts D* to prioritize the exploration of areas closer to the goal. Lifelong Planning A* (LPA*) [261] is an incremental heuristic search algorithm that continuously improves its estimates of the shortest path while adapting to changes in the environment, providing efficient planning in dynamic environments. D* Lite [262] is a simplified version of the D* algorithm, focusing on efficient replanning for real-time performance in static or partially unknown environments. The wavefront expansion procedure (known as NF1 in [256]) is a simple global planner that expands the search to all adjacent nodes until both the start node and the goal node are covered; it was employed in [212] for path planning in human-populated environments. Another method is the E* [263] algorithm, capable of dynamic replanning and user-configurable path cost interpolation. It calculates a navigation function as a sampling of an underlying smooth goal distance that takes into account a continuous notion of risk, which can be controlled in a fine-grained manner.
The authors of Field D* [151] addressed the problem of discrete state transitions constraining an agent's motion to a narrow set of possible headings, which often occurs in classical grid-based path planners. Instead, they proposed a linear interpolation approach during planning to produce paths with a continuous range of headings. Alternatively, the Theta* [264] method propagates information along grid edges (to achieve a short runtime) without constraining the paths to the grid edges. Instead, any-angle paths are found by performing line-of-sight checks between nodes. When a direct line of sight is feasible between two nodes without intersecting obstacles, Theta* considers the straight-line path, reducing the number of nodes expanded compared with A*. Theta* typically produces shorter, smoother, and more natural paths than grid-constrained A*, especially in environments with narrow passages or obstacles.
Notably, Dijkstra's algorithm does not account for the robot's kinodynamic constraints, which may yield paths not admissible for robots with, e.g., Ackermann kinematics. Dolgov et al. [265] addressed this issue in their Hybrid A* algorithm, which extends the traditional A* to handle continuous state spaces by discretizing them into a grid. It incorporates vehicle kinematic constraints, such as maximum velocity and steering angle, to generate feasible paths for vehicles navigating through complex environments. Recently, Macenski et al. [249] presented a search-based planning framework with multiple algorithm implementations, including the Cost-Aware Hybrid-A* planner that provides feasible paths using a Dubins or Reeds–Shepp motion model constrained by a minimum turning radius for Ackermann vehicles.
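For reference, the following is a minimal A* implementation on a 4-connected occupancy grid with a Manhattan heuristic; it illustrates the heuristic-plus-actual-cost ordering described above and is a didactic sketch, not any of the cited planners.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """Minimal A* on a 2D occupancy grid (True = blocked), 4-connected."""
    def h(n):  # Manhattan distance: admissible for 4-connected unit-cost grids
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

    tie = itertools.count()            # tiebreaker so the heap never compares nodes
    open_set = [(h(start), next(tie), start, None)]
    g_cost, parents = {start: 0}, {}
    while open_set:
        _, _, node, parent = heapq.heappop(open_set)
        if node in parents:
            continue                   # already expanded via a cheaper route
        parents[node] = parent
        if node == goal:               # walk the parent chain back to the start
            path = [node]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and not grid[nr][nc]:
                ng = g_cost[node] + 1  # actual cost so far plus one unit step
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), nxt, node))
    return None                        # goal unreachable

grid = [[False] * 5 for _ in range(5)]
grid[2][1] = grid[2][2] = grid[2][3] = True   # a wall with a gap
print(astar(grid, (0, 0), (4, 4)))
```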

Human-Aware Constraints

Classical path-finding algorithms focus on calculating the shortest collision-free path and do not explicitly regard humans in the environment; hence, they also do not consider social constraints. However, in graph-based methods, the planning procedure is separated from the definition of planning constraints, which are incorporated into the environment representation [206]. Hence, researchers started to modify environment models, e.g., costmaps, to embed human-aware constraints into the motion planning scheme while still employing classical path-finding algorithms. Most approaches that extend environment representations focus on introducing spatial or spatiotemporal soft constraints representing proxemics [266] or social conventions [59,161].
For example, Sisbot et al. [266] presented a Human Aware Motion Planner (HAMP) that exploits algorithms for reasoning on humans’ positions, fields of view, and postures. They integrated different social constraints into their highly configurable planning scheme, including Gaussian-modeled personal spaces or hidden zones behind obstacles (visibility constraints). Kirby et al. [59] proposed a Constraint-Optimizing Method for Person-Acceptable NavigatION (COMPANION) framework in which, at the global path-planning level, multiple human social conventions, such as personal spaces and tending to one side of hallways, are represented as constraints on the robot’s navigation.
Lu et al. [73] presented a costmap-based system capable of creating more efficient corridor navigation behaviors by manipulating existing navigation algorithms and introducing social cues. They extended robot environment models with socially aware spatial constraints to navigate in a more human-friendly manner. Kollmitz et al. [206] presented a planning-based approach that uses predicted human trajectories and a social cost function to plan collision-free paths that take human comfort into account. They employed search-based, time-dependent path planning that accounts for the kinematic and dynamic constraints of a robot. The authors also exploited the layered costmap architecture [161] to create multiple layers related to human proxemics according to their prediction model. Okal and Arras [232] proposed a method that uses IRL to learn features of a populated environment to model socially normative behaviors [180]; once the reward function for a navigation task is obtained, it is used to define spatial costs of social normativeness that can be injected into a costmap used by a motion planner (either global or local). Some works also embedded dynamically recalculated personal zones into costmaps to account for the dynamics of individual humans [59,244,267,268] or groups [269].

5.1.2. Potential Field Methods

Purely graph-based planners have limitations originating from their discontinuous representation of configuration space. On the other hand, potential field methods offer smoother path generation and can be directly related to sensor data, yet they suffer from the presence of local minima [263]. Path planning utilizing a potential field creates a gradient across the robot’s map that directs the robot to the goal position from multiple prior positions [256].
One of the pioneering works introducing the concept of the Artificial Potential Field (APF) for obstacle avoidance and navigation in robotics is [91]. Potential field methods treat the robot as a point in the configuration space under the influence of an APF. The goal, acting as a minimum in this space, exerts an attractive force on the robot, while obstacles act as repulsive forces. The superposition of all forces is applied to the robot. Such an APF smoothly guides the robot toward the goal while simultaneously avoiding known obstacles, just as a ball would roll downhill [270].
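The superposition described above can be sketched directly; the snippet below combines a linear attractive force toward the goal with the classical repulsive term that is active within an influence distance `d0`. The gains and distances are illustrative assumptions, and the resulting force is simply treated as a velocity command.

```python
import numpy as np

def apf_velocity(robot, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.5):
    """Superpose an attractive force toward the goal with repulsive forces
    from point obstacles closer than the influence distance d0 (meters)."""
    robot, goal = np.asarray(robot, float), np.asarray(goal, float)
    force = k_att * (goal - robot)                    # attractive component
    for obs in obstacles:
        diff = robot - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:                             # within the obstacle's influence
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    return force                                      # interpreted as a velocity command

v = apf_velocity(robot=(0.0, 0.0), goal=(5.0, 0.0),
                 obstacles=[(2.0, 0.3), (3.5, -0.4)])
```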
Later, Borenstein and Koren [271] developed the Virtual Force Field method, which relies on two basic concepts: certainty grids for obstacle representation and potential fields for navigation. Their method enables continuous motion of the robot, without stopping in front of obstacles, at a speed of 0.78 m/s. However, the approach was later abandoned due to its instability and inability to pass through narrow passages [270]. An extended potential field method was proposed by Khatib and Chatila [272], with two additions to the basic potential field: the rotation potential field and the task potential field.
More recently, Iizuka et al. [273] proposed a modified APF approach resistant to the local minimum issue in multiobstacle environments, while Weerakoon et al. [274] presented a deadlock-free APF-based path-planning algorithm. Similarly, Azzabi and Nouri [275] developed an approach that addresses the common issues of the original APF, namely local minima and the goal being unreachable with obstacles nearby. Szczepanski [276] also proposed a path-planning method for mobile robots that uses the attractive potential for goal reaching as in the original APF but replaces the repulsive potential with a general obstacle potential equal to the repulsive potential, a vortex potential, or their superposition.

5.1.3. Roadmap Methods

Roadmap strategies capture the connectivity of the robot’s unobstructed space through a network of 1D curves or lines, denoted as roadmaps. Subsequently, the roadmap serves as a network of path segments for planning robot movement. Consequently, path planning is reduced to connecting the robot’s initial and goal positions to the road network, followed by identifying a sequence of routes from the initial robot position to its destination [270]. The most common approaches falling into the roadmap-based category are visibility graphs and Voronoi diagrams.
The visibility graph method is one of the earliest path-planning methods [256]. For a polygonal configuration space, the graph consists of edges joining all pairs of vertices that can see each other (including the initial and goal positions as vertices). The unobstructed straight lines (roads) joining those vertices are the shortest distances between them, guaranteeing optimality in terms of the length of the solution path. The main caveat of the visibility graph is that the solution paths tend to move the robot as close as possible to obstacles on the way to the goal [270]. In contrast, the Voronoi diagram is an approach that maximizes the distance between the robot and the obstacles in the map [270].
Our review of the applications of classical roadmap methods shows that they are rarely used in social robot navigation, as they only consider binary environment models (obstacle or free space); hence, human awareness cannot be properly tackled. However, Voronoi diagrams might be used as reference path-planning approaches [204,277,278,279] for capturing the skeleton of the environment, along with human-aware trajectory planners, as in [132].

5.1.4. Sampling-Based Methods

The main idea of sampling-based motion planning is to avoid the explicit construction of obstacle regions but instead conduct a search that probes the configuration space with a sampling scheme [280]. The most prevalent methods falling into the category of sampling-based path planners are the Probabilistic Roadmap (PRM) [281] and the Rapidly exploring Random Trees (RRT) [282], both being probabilistically complete [280].

Algorithms

PRM [281] constructs a roadmap, a graph representation of the configuration space, by sampling random points and connecting them with collision-free paths. It focuses on building a network of feasible paths between different regions of the configuration space and is effective for multiquery scenarios or environments with complex obstacles.
RRT [282] builds a tree structure by iteratively selecting random points in the configuration space and extending the tree towards those points. It explores the configuration space rapidly and is particularly effective for high-dimensional spaces. Different variants of RRT have been developed, including RRT-Connect [283], RRT* [284], and a dual-tree version, DT-RRT [285].
Both PRM and RRT have different characteristics. PRM requires a two-phase process: first constructing the roadmap offline and then querying it online to find a path between a start and a goal configuration. In contrast, RRT performs exploration and path planning simultaneously, gradually growing towards the goal configuration during the search. PRM is well suited to scenarios where the environment is relatively static and the planner has sufficient computational resources to construct the roadmap offline, while RRT is often favored for real-time or dynamic environments, as it can adaptively explore the space and find feasible paths at run time. A notable feature of sampling-based planners is that they can regard the kinodynamic limits of the robot to generate feasible and safe motion plans in continuous state and action spaces.
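A minimal planar RRT sketch is shown below: random samples (with a small goal bias) are used to grow a tree in fixed-size steps until the goal region is reached. The `collision_free` predicate is a user-supplied placeholder, and the step size, iteration budget, and goal bias are illustrative assumptions rather than values from the cited works.

```python
import math
import random

def rrt(start, goal, collision_free, bounds, step=0.3, iters=2000, goal_tol=0.3):
    """Minimal 2D RRT: grow a tree toward random samples until the goal is near.
    collision_free(p, q) -- user-supplied predicate for the segment p -> q."""
    nodes, parents = [start], {start: None}
    for _ in range(iters):
        sample = goal if random.random() < 0.05 else (     # 5% goal bias
            random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        nearest = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(nearest, sample)
        if d < 1e-9:
            continue
        new = (nearest[0] + step * (sample[0] - nearest[0]) / d,
               nearest[1] + step * (sample[1] - nearest[1]) / d)
        if collision_free(nearest, new):
            nodes.append(new)
            parents[new] = nearest
            if math.dist(new, goal) < goal_tol:            # close enough: extract path
                path = [new]
                while parents[path[-1]] is not None:
                    path.append(parents[path[-1]])
                return path[::-1]
    return None

path = rrt((0.0, 0.0), (4.0, 4.0), collision_free=lambda p, q: True,
           bounds=((0.0, 5.0), (0.0, 5.0)))
```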

Human-Aware Constraints

Some works focus on including constraints related to social conventions in sampling-based path-planning schemes. For example, Svenstrup et al. [286] modified the original RRT for navigation in human environments, assuming access to full state information; their modifications include a potential model designed for moving humans, so the customized RRT planner plans over a potential field representation of the world. Similarly, Rios-Martinez et al. [287] proposed Risk-RRT for global path planning. Their algorithm includes knowledge of the personal spaces of pedestrians and the possible interactions between an F-formation's participants; Risk-RRT penalizes the robot's crossing through personal spaces and O-spaces of F-formations by assigning additional costs to those areas. Furthermore, Shrestha et al. [288] used RRT for global path planning in an environment with a stationary human. Vega et al. [223] integrated proxemics theory with their path planner, incorporating the PRM [289] and RRT [282] methods and defining personal spaces and activity spaces as forbidden areas for robot navigation. Alternatively, Pérez-Higueras et al. [290] developed a cost function for an RRT-based path planner employing Inverse Reinforcement Learning from demonstrations.

5.2. Local Trajectory Planning

The most common architecture for robot motion planning separates global path planning and local trajectory planning [138,248]. This separation of concerns allows for modular and flexible robotic systems, where different strategies can be applied at each level of abstraction to address specific requirements.
Local trajectory planners generate trajectories for the robot to follow within a short time horizon. Short time horizons allow operating with a higher frequency to instantly react to environmental changes and possible encounters. Trajectory planners operate in the robot’s control space or velocity space and regard not only spatial aspects of motion planning but also temporal ones. In the following part of this survey, various trajectory planning methods and approaches to incorporating human awareness into robot behavior are reviewed.

5.2.1. Sampling-Based Methods

Besides global path planning (Section 5.1.4), sampling-based methods can also be applied to local trajectory planning. An extended RRT with a notion of time included, the spatiotemporal RRT, was proposed by Sakahara et al. [204]. Their method integrates ideas of the RRT and the Voronoi diagram. Although motion prediction of dynamic objects is regarded, they do not explicitly capture social conventions. Nishitani et al. [205] extended this approach, presenting a human-centered X–Y–T space motion planning method. The authors included human personal space and directional area, as well as the robot's dynamic constraints, in the planning scheme.
Pérez-Higueras et al. pointed out in [291] the future work perspective of using RRT as a local trajectory planner due to real-time capability, but their further work leaned toward learning-based approaches.

5.2.2. Fuzzy Inference Methods

Fuzzy inference systems (FIS) form another well-established paradigm for control systems, particularly useful for modeling imprecise or non-numerical information and decisions. FIS have been applied to traditional robot navigation [292,293,294,295,296] and social robot navigation tasks [297,298,299,300]. They can also be integrated with other approaches, e.g., Q-learning [301] or Reinforcement Learning [302].
An example of a FIS method adapted for human-aware robot navigation is the work by Palm et al. [297], who derived fuzzy control rules for the robot's actions based on expected human movements relative to the robot. They investigated the movement of humans in a space shared with a robot to determine lane preference and agent classification for collision avoidance. Another method was proposed by Obo and Yasuda [298], who developed a framework for robot navigation in crowds employing multiobjective behavior coordination for collision avoidance. Rifqi et al. [299] used a FIS to dynamically change the parameters of the SFM, which was applied to control the movement of a healthcare robot; the rules they designed switch the robot's motion behavior based on its distance to human proxemics zones. Recently, Sampathkumar et al. [300] proposed a framework integrating an Artificial Potential Field and a FIS for navigation that prioritizes safety and human comfort.

5.2.3. Force-Based Methods

Force-based approaches model the motion of individuals (humans or robots) in the environment by considering the forces acting on them. These include a force attracting the agent to its goal and forces arising from interactions with other agents and environment objects such as obstacles. Typically, these are purely reactive methods that decide the next movement based on the environment arrangement at hand, i.e., the locations of obstacles and humans. The resultant force can be directly transformed into a velocity command for a robot. The predominant methodologies within this category are Elastic Bands [303] and the Social Force Model [43].
Elastic Bands [303] is a method that aims to close the gap between global path planning and reactive control, as it performs local path deformation based on internal and external forces. Internal forces contract the path, favoring the shortest path to the goal, while external forces repel the path from obstacles. The authors of the algorithm proposed a reference implementation based on bubbles that represent discrete path points and free space. Later, this method was extended by Brock et al. [304], mainly for motion generation in manipulation tasks performed in human environments. More recently, a socially aware specialization of the Elastic Bands local trajectory planner, focusing on improving motion legibility, was developed for the SPENCER project [160]. The notion of human awareness has also been implemented in the Elastic Bands approach by Vega et al. [223].
On the other hand, the Social Force Model (SFM) [43] has been one of the prevalent methods for crowd behavior simulation [305,306], human trajectory prediction (Section 4.3), and human-like motion generation in robotics. It constitutes a model inspired by fluid dynamics that describes an agent's motion using a set of attractive and repulsive forces. Its flexible formulation allows capturing additional models of social phenomena to obtain more realistic motion behaviors. Therefore, the original approach has undergone multiple extensions, and over the years, numerous successful real-world robotic applications have emerged [9,156,157,158,245,307,308].
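The force formulation can be illustrated with a minimal sketch: a goal-driven relaxation term toward the desired velocity plus a simplified circular exponential repulsion from neighboring agents, integrated with an Euler step. The parameter values follow the general structure, not the exact calibration, of the original SFM [43].

```python
import numpy as np

def social_force(agent_p, agent_v, goal, neighbors,
                 v_des=1.3, tau=0.5, A=2.0, B=0.3):
    """Net force on one agent: goal-driven term plus exponential repulsion
    from other agents (simplified circular variant of the SFM)."""
    agent_p, agent_v, goal = (np.asarray(x, float) for x in (agent_p, agent_v, goal))
    e = (goal - agent_p) / (np.linalg.norm(goal - agent_p) + 1e-9)
    force = (v_des * e - agent_v) / tau           # relax toward the desired velocity
    for other in neighbors:
        diff = agent_p - np.asarray(other, float)
        d = np.linalg.norm(diff) + 1e-9
        force += A * np.exp(-d / B) * (diff / d)  # repulsion decays with distance
    return force

# One Euler step of an agent walking toward (10, 0) past a bystander.
p, v = np.array([0.0, 0.0]), np.array([1.0, 0.0])
f = social_force(p, v, goal=(10.0, 0.0), neighbors=[(2.0, 0.2)])
dt = 0.1
v = v + dt * f
p = p + dt * v
```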
Researchers have expanded the basic SFM with explicit collision prediction [196,309], making the behavior more proactive and anticipatory. Kivrak et al. [158] also introduced collision prediction into an SFM-based model, which they integrated with a robot operating in an unknown environment with no a priori map. Similarly, Shiomi et al. [9] evaluated the SFM with collision prediction [196] in a real-world shopping mall. Collective motion conventions were also integrated into the model formulation [310], as were group formations [61,311,312]. Some works also focused on improving the realism of generated trajectories [313].
Truong and Ngo [307] proposed a proactive social motion model for safe and socially aware navigation in crowded environments. Their formulation takes into account the socio-spatiotemporal characteristics of humans, including human body pose, field of view, hand poses, and social interactions, which consist of human–object interaction and human group interaction.
Furthermore, Ferrer et al. [308] presented another model that extends the original formulation to effectively accompany a person. They implemented human behavior prediction to estimate the destination of the person the robot is walking with. Additionally, the authors exploited the parameterization of the SFM and applied a method of interactively learning the parameters of the model using multimodal human feedback.
Moreover, Repiso et al. presented studies regarding the robot accompanying single humans [245] and human groups [157]. In [245], they implemented three stages of focused interaction between the robot and a human: accompanying, approaching, and positioning. They inferred the human’s final destination (among all destinations marked in the environment beforehand) and predicted the human motion with the SFM. The SFM was also employed for the robot’s local trajectory planning, and spatial cost functions were used for trajectory scoring. In the following work, Repiso et al. [157] proposed an extended method that allows the robot to break the ideal side-by-side formation to avoid other people and obstacles, implementing the human-aware robot navigation strategy for accompanying groups of multiple humans.
Alternatively, Ferrer and Sanfeliu [156] developed an SFM-based Anticipative Kinodynamic Planning method for unfocused interactions between a robot and humans. They implemented a scalarized multiobjective cost function to choose the best trajectory among the generated ones. On the other hand, We et al. [314] proposed a pedestrian-heterogeneity-based social force model that captures the physiological and psychological attributes of pedestrians by introducing physique and mentality coefficients into the SFM. Recently, the SFM has also been involved in approaches integrating machine learning techniques with motion models [199,315].

5.2.4. Velocity Obstacles Methods

The Velocity Obstacle (VO) [316] concept is a foundation for a broad class of proactive methods for a robot's local navigation. VO methods are based on a persistent effort to keep the robot collision-free, requiring only the radius, position, and speed of each agent [317]. They generate avoidance maneuvers by selecting robot velocities outside the collision cone, which consists of velocities that would result in future close encounters with obstacles moving at known velocities. A practical application of VO was introduced by Lin et al. [318], who adapted the concept by assuming that each agent is a decision-making entity capable of selecting the appropriate velocity in response to the other agents' movements and of replanning its path. Moreover, an extension of VO, called the Reciprocal Velocity Obstacle (RVO), was developed by van den Berg et al. [319]. It exploits the fact that humans in the environment cooperate [320] and guarantees safe and oscillation-free motions under the assumption that all dynamic agents apply similar collision-avoidance reasoning [14]. Furthermore, a related method called Optimal Reciprocal Collision Avoidance (ORCA) [321] does not require communication between agents and optimizes global objectives when finding collision-free velocities.
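The collision-cone test at the heart of VO methods can be sketched as follows: a candidate robot velocity is rejected if the relative velocity with respect to a circular obstacle points into the cone subtended by the combined radius. This is a didactic sketch of plain VO, not RVO or ORCA, and the sampling grid in the usage example is an illustrative assumption.

```python
import numpy as np

def in_velocity_obstacle(v_robot, p_robot, p_obs, v_obs, combined_radius):
    """True if the candidate robot velocity lies inside the velocity obstacle
    induced by a circular obstacle moving at a known velocity."""
    rel_p = np.asarray(p_obs, float) - np.asarray(p_robot, float)
    rel_v = np.asarray(v_robot, float) - np.asarray(v_obs, float)
    dist = np.linalg.norm(rel_p)
    if dist <= combined_radius:
        return True                                   # already in contact
    half_angle = np.arcsin(combined_radius / dist)    # collision-cone half-aperture
    speed = np.linalg.norm(rel_v)
    if speed < 1e-9:
        return False                                  # no relative motion, no collision
    cos_angle = np.clip(rel_p @ rel_v / (dist * speed), -1.0, 1.0)
    return np.arccos(cos_angle) <= half_angle         # heading into the cone

# Keep only sampled velocities outside the obstacle's velocity obstacle.
candidates = [np.array([vx, vy]) for vx in np.linspace(-1, 1, 9)
                                 for vy in np.linspace(-1, 1, 9)]
safe = [v for v in candidates
        if not in_velocity_obstacle(v, (0, 0), (3, 0), (-0.5, 0), 0.8)]
```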
VO-based methods are rarely enhanced with socially aware concepts. Martinez-Baselga et al. [143] presented the Strategy-based Dynamic Object Velocity Space trajectory planner, which explicitly regards the presence of dynamic obstacles but does not take any social conventions into account. Similarly, Zhang et al. [139] proposed a local trajectory planning scheme using ORCA that includes uncertainties in the states of surrounding humans when selecting collision-free velocities.

5.2.5. Optimization-Based Methods

Another class of approaches to human-aware trajectory planning formulates the problem as an optimization task, which relies on finding control inputs that optimize (minimize or maximize) an objective function while satisfying kinodynamic and collision-free motion constraints. These hard constraints, inherited from classical robot navigation, restrict control inputs to those feasible for the specific mobile base at a given time and ensure the absence of collisions within the prediction horizon. The presence of collisions with surrounding objects is assessed using the environment model and a forward simulation of the computed controls. In contrast, soft constraints are embedded in the optimized objective function, which takes into account, e.g., intrusions into the personal spaces of humans.
Most state-of-the-art methods planning optimal socially aware local trajectories extend the classical robot navigation algorithms, namely Dynamic Window Approach (DWA) [135] and Timed Elastic Bands (TEB) [153].

DWA-Based Methods

The DWA is one of the most common algorithms for collision avoidance. Its main characteristic is that the commands controlling the translational and rotational velocities of the robot are searched directly in the space of velocities. The search space is reduced to velocity pairs fulfilling kinodynamic constraints. Typically, for each velocity pair, the effect of applying those controls to the robot is simulated over a short time horizon, e.g., 1.5–3.0 s, which produces multiple circular trajectories. The optimal trajectory is the one maximizing an objective function consisting of three weighted components that evaluate the progress toward the goal, the distance to the closest obstacle, and the forward velocity of the robot. Numerous modifications of DWA have been proposed, as the objective function is expandable [322,323]. However, the method does not explicitly capture the dynamics of obstacles, taking into account only their current positions.
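A compact sketch of the loop just described is given below: velocity pairs are sampled from the dynamic window, each pair is rolled out as a circular arc, and the arcs are scored on goal progress, clearance, and forward velocity. The weights, sampling densities, and the 0.3 m robot radius are illustrative assumptions, not values from [135].

```python
import math
import numpy as np

def dwa_plan(pose, v, w, goal, obstacles, dt=0.1, horizon=2.0,
             v_max=0.8, w_max=1.5, a_v=0.5, a_w=2.0, weights=(1.0, 0.6, 0.2)):
    """Sample velocity pairs inside the dynamic window, forward-simulate each
    as a circular arc, and score it on heading, clearance, and velocity."""
    best, best_score = (0.0, 0.0), -math.inf
    # Dynamic window: velocities reachable within one control period dt.
    for vc in np.linspace(max(0.0, v - a_v * dt), min(v_max, v + a_v * dt), 7):
        for wc in np.linspace(max(-w_max, w - a_w * dt), min(w_max, w + a_w * dt), 11):
            x, y, th = pose
            clearance = math.inf
            for _ in range(int(horizon / dt)):        # roll out the arc
                x += vc * math.cos(th) * dt
                y += vc * math.sin(th) * dt
                th += wc * dt
                for ox, oy in obstacles:
                    clearance = min(clearance, math.hypot(x - ox, y - oy))
            if clearance < 0.3:                       # assumed robot radius: reject arc
                continue
            heading = -math.hypot(goal[0] - x, goal[1] - y)  # progress toward goal
            score = (weights[0] * heading
                     + weights[1] * min(clearance, 2.0)
                     + weights[2] * vc)
            if score > best_score:
                best, best_score = (vc, wc), score
    return best

v_cmd, w_cmd = dwa_plan(pose=(0.0, 0.0, 0.0), v=0.3, w=0.0,
                        goal=(3.0, 1.0), obstacles=[(1.5, 0.2)])
```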
Another method, Trajectory Rollout [152], is similar to DWA but exhibits one essential difference: in each forward simulation step, the set of feasible velocity pairs is updated, as the kinematic constraints are recalculated according to the current velocity and the dynamic constraints.
Constraints related to social conventions are usually embedded in the environment representation used by trajectory planners [210] or introduced by extending the objective function [212,324]. For example, Weinrich et al. [210] applied the E* algorithm as a global path planner along with an extended DWA method as a local trajectory planner. They extended DWA with an additional objective rating that considers the spatiotemporal occupation probabilities of tracked humans; in particular, they assigned personal spaces to humans using Gaussian Mixtures. The method provided successful collision avoidance in a narrow-hallway passing scenario. A similar extension of DWA was proposed in [325].
Seder et al. [324] and Oli et al. [212] proposed navigation approaches that employ a modified DWA for human-aware local trajectory planning. They introduced human awareness by modifying the objective component related to clearance from obstacles, in particular by including the predicted poses of tracked humans as future obstacle positions. The difference between these methods is that the authors of [324] assumed human motion predictions driven by the constant velocity model, while in [212] the SFM was implemented. Also, the method from [324] used Focused D* as a global path planner, whereas in [212] the NF1 [256] was integrated.
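The modification shared by both approaches, replacing the static clearance term with a time-synchronized distance to predicted human positions, can be sketched as follows. The constant velocity prediction mirrors the assumption of [324]; an SFM-based predictor could be substituted as in [212]. Function and argument names are illustrative.

```python
import numpy as np

def social_clearance(traj, humans, dt):
    """Minimum time-synchronized distance between a simulated robot
    trajectory and humans propagated with a constant velocity model.

    traj:   (N, 3) robot poses produced by forward simulation,
    humans: iterable of (position, velocity) 2D pairs,
    dt:     simulation step matching the robot's forward simulation.
    Returns np.inf when no humans are tracked.
    """
    clearance = np.inf
    for pos, vel in humans:
        pos, vel = np.asarray(pos, float), np.asarray(vel, float)
        for k, pose in enumerate(traj):
            human_k = pos + vel * (k + 1) * dt  # predicted human position at step k
            clearance = min(clearance, np.linalg.norm(pose[:2] - human_k))
    return clearance
```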

TEB-Based Methods

The TEB is a traditional local trajectory planner that laid the foundation for multiple methods enhancing this approach with human-awareness constraints [159,207,326]. The basic TEB deforms local trajectories according to the locations of obstacles in the environment but, in contrast to Elastic Bands, incorporates temporal information. Instead of the forces used in Elastic Bands, TEB employs an optimization objective to follow the global path while regarding kinodynamic constraints, formulating the optimization problem as nonlinear least squares.
A human-aware specialization of TEB, named HaTEB, was proposed by Khambhaita and Alami [207]. They extended the original optimization constraints with safety (minimum safety distance), time-to-collision, and directional constraints, including the predicted human trajectories in the problem formulation. Singamaneni et al. [159,208] developed the CoHAN planner, a HaTEB extension that handles large numbers of people and focuses on improving motion legibility. CoHAN provides different tunable planning modes that can handle various indoor and crowded scenarios. Recently, Hoang et al. [326] presented the GTEB model, which extends TEB by taking into account the robot's current state, robot dynamics, dynamic social zones [267], regular obstacles, and potential approaching poses to generate socially optimal robot trajectories.
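As an illustration of how such human-aware soft constraints enter a TEB-style nonlinear least-squares problem, the sketch below shows a hinge-type residual for the minimum safety distance; the exact penalty shapes used by HaTEB and CoHAN differ, so this is only a generic example with illustrative names.

```python
import numpy as np

def human_safety_residual(robot_xy, human_xy, d_min, eps=0.05):
    """Soft-constraint residual penalizing trajectory poses closer to a
    human than the minimum safety distance d_min (hinge-style penalty,
    analogous in spirit to TEB's penalty terms; eps adds a small margin).

    Returns 0 when the constraint is satisfied, so squaring it in a
    nonlinear least-squares objective leaves compliant poses untouched.
    """
    d = np.linalg.norm(np.asarray(robot_xy) - np.asarray(human_xy))
    return max(0.0, d_min + eps - d)

# Inside the trajectory optimization, each pose contributes
#   w_safety * human_safety_residual(pose, human, d_min) ** 2
# to the total least-squares cost, alongside the kinodynamic and
# path-following terms.
```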

Other Methods

As an alternative to DWA- and TEB-based methods, Forer et al. [327] proposed the Pareto Concavity Elimination Transformation (PaCcET) local trajectory planner. It aims to capture nonlinear human navigation behavior, scoring trajectories with multiple objectives: the first relies on path distance, goal distance, heading difference, and distance to obstacles, while the second is based on the interpersonal distance between the robot and humans. Later, Banisetty et al. [220] extended PaCcET with social awareness objectives, specifically maintaining appropriate distances to F-formations (groups) and to a scenario-dependent social goal.
In contrast, the authors of [328] proposed a planner that deliberately exaggerates motions to better express the intended passing side, yielding legible robot navigation [72]. They implemented a decision-making strategy constructing the Social Momentum objective, which takes the pairwise momentum between the robot and each human into consideration. Another method was presented by Mehta et al. [329], who applied MultiPolicy Decision Making to navigate dynamic environments with different policies, namely Go-Solo, Follow-other, and Stop. The values of the utility functions, which trade off the distance traveled to the goal against the disturbance the robot causes to surrounding agents, are predicted through forward simulation.
Optimal control techniques have also been employed to maintain formation integrity [330,331]. For instance, in [330], formation control in a leader–follower arrangement was discussed; the authors developed a method that, under mild assumptions, guarantees stabilization of the formation to the desired shape and scale. Similarly, an optimal control algorithm for sustaining formations of various structures was proposed in [331]. On the other hand, Truc et al. [332] developed a 3D reactive planner for human-aware drone navigation in populated environments, based on a stochastic optimization of the discomfort caused by the drone's proximity to pedestrians and by its visibility.

5.2.6. Learning-Based Methods

In recent years, rapid growth in the machine learning field has been observed, and numerous planning approaches have evolved to capture the intricacies of human behaviors and transfer them into robot control strategies. In robot control applications, Reinforcement Learning (RL) and Deep Reinforcement Learning (DRL) have gained the broadest attention. Specialized surveys on the applications of RL methods to robot navigation [333], and particularly to social robot navigation [334], have already been published.

Inverse Reinforcement Learning 

A distinctively useful method for learning from demonstration is Inverse Reinforcement Learning (IRL) [181], as it allows modeling the factors that motivate people's actions instead of the actions themselves [180]. Example applications of IRL methods to human motion prediction were already presented in Section 4.3, but they can also be used for control purposes. For example, Kim and Pineau [335] learned a cost function involving social cues from features extracted from an RGB-D camera. Their IRL module uses a set of demonstration trajectories to learn the reference behavior when faced with different state features; the approach is implemented as a trajectory planner with an IRL-based cost function operating alongside a global path planner. Similarly, Kuderer et al. [336] also use IRL with human demonstrations, but they extract features from the human trajectories and then use entropy maximization to determine the robot's behavior during navigation in human environments. Pérez-Higueras et al. [291] also used IRL to transfer human motion behavior to a mobile robot; they evaluated different Markov Decision Process models and compared them with a baseline implementation of a global path planner and a local trajectory planner without social costs. More recently, Karnan et al. [337] collected a large-scale dataset of socially compliant navigation demonstrations and used it to perform behavior cloning [338] for global path planner and local trajectory planner agents that aim to mimic human navigation behaviors. The authors also evaluated the learned approach against a baseline ROS implementation.
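The common thread of these IRL approaches, adjusting cost (or reward) weights until the planner reproduces the demonstrated behavior, can be reduced to a feature-matching update. The sketch below assumes a reward linear in hand-crafted features, in the spirit of the entropy-maximization formulation of [336]; the names and the learning rate are illustrative.

```python
import numpy as np

def irl_weight_update(w, demo_features, planner_features, lr=0.1):
    """One gradient step of feature-matching IRL (illustrative sketch).

    demo_features:    mean feature vector of demonstrated trajectories
                      (e.g., clearance from people, heading smoothness),
    planner_features: mean feature vector of trajectories produced by
                      the planner under the current reward weights w.

    The reward is assumed linear in the features; the weights move so
    that the planner's expected features match the demonstrations.
    """
    grad = np.asarray(demo_features) - np.asarray(planner_features)
    return np.asarray(w) + lr * grad
```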

Reinforcement Learning 

In contrast to IRL, RL is used when the reward function is known or can be easily defined, and the goal is to find the best policy for maximizing cumulative rewards. Recent works present DRL as a framework for modeling complex interactions and cooperation, e.g., in social robot navigation.
In a study by Olivier et al. [320], the authors found that walking people mutually adjust their trajectories to avoid collisions. This concept was exploited by Silva and Fraichard [339], whose approach relies on sharing the motion effort between a robot and a human to avoid collisions. They used RL to learn a robot behavior solving the reciprocal collision avoidance problem in simulated trials.
Li et al. [174] presented Role Playing Learning, formulated under an RL framework, for purely local navigation of a robot accompanying a pedestrian. In their approach, the robot takes the motion of its companion into account to maintain a sense of affinity when they travel together toward a certain goal. A navigation policy is trained with Trust Region Policy Optimization, using features extracted from a LiDAR along with the goal as input, to output continuous velocity commands for navigation.
A series of works by Chen et al. [340,341] developed Collision Avoidance with Deep Reinforcement Learning (CADRL) approaches. Specifically, in Socially Aware CADRL (SA-CADRL) [341], they designed a hand-crafted reward function that incorporates the social convention of passing side and enables a robot to move at human walking speed in a real-world populated environment. Everett et al. [154] proposed a GPU/CPU Asynchronous Advantage Actor-Critic CADRL (GA3C-CADRL) strategy that employs an LSTM to use observations of an arbitrary number of surrounding agents, whereas previous methods had this size fixed. A distinctive characteristic is that their algorithm learns collision avoidance among various types of dynamic agents without assuming they follow any particular behavior rules.
Jin et al. [342] presented another DRL method, but for mapless collision-avoidance navigation where humans are detected using LiDAR scans. The reward function regards ego-safety, assessed from the robot's perspective, and social safety, which evaluates the impact of the robot's actions on nearby humans. The ego-safety zone maintains 0.4 m of separation between the robot and other objects, while the social safety component aims to prevent intrusions into approximated human personal spaces. Liang et al. [146] developed an RL-based collision-avoidance algorithm, named CrowdSteer, for navigation in crowded environments. The authors trained the algorithm using Proximal Policy Optimization (PPO) in a high-fidelity simulation and deployed the approach on two differential drive robots.
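A reward of this kind can be sketched as a composition of goal progress, an ego-safety penalty, and a graded social-safety penalty. The snippet below is inspired by, but not identical to, the formulation in [342]; the margins and penalty magnitudes are illustrative assumptions.

```python
import numpy as np

def social_nav_reward(robot_pos, humans, goal, prev_goal_dist,
                      ego_margin=0.4, personal_radius=1.2):
    """Illustrative DRL reward for social navigation.

    robot_pos:      2D robot position,
    humans:         list of 2D human positions,
    goal:           2D goal position,
    prev_goal_dist: distance to the goal at the previous step.
    Returns the reward and the current goal distance (for the next call).
    """
    pos = np.asarray(robot_pos, float)
    goal_dist = np.linalg.norm(np.asarray(goal) - pos)
    r = prev_goal_dist - goal_dist               # reward progress toward the goal
    for h in humans:
        d = np.linalg.norm(pos - np.asarray(h, float))
        if d < ego_margin:                       # ego-safety: hard separation zone
            r -= 1.0
        elif d < personal_radius:                # social safety: graded intrusion
            r -= 0.25 * (personal_radius - d) / (personal_radius - ego_margin)
    return r, goal_dist
```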
Chen et al. [343] discussed extending pairwise interactions between the robot and individual humans to a robot interacting with a crowd. The authors developed Socially Attentive Reinforcement Learning (SARL), which jointly models human–robot as well as human–human interactions in an attention-based DRL framework by learning the collective importance of neighboring humans with respect to their future states. Their work was further enhanced by Li et al. [344], who addressed the problems of learned policies being limited to the interaction distances encountered during training and of a simplified environment representation that neglects obstacles other than humans. In their SARL* method, they introduced a dynamic local goal-setting mechanism and a map-based safe action space.
Guldenring et al. [345] proposed another DRL-based system to train neural network policies for local trajectory planning that explicitly take nearby humans into consideration. The approach uses Proximal Policy Optimization (PPO) as the main learning method, while the DRL agents are trained in randomized virtual 2D environments, interacting with humans in an unfocused manner for plain collision avoidance.
Recently, Xie and Dames [147] proposed a DRL policy for robot navigation through obstacle-filled and populated areas that is intended to generalize to new environments. In particular, their DRL-VO reward function contains a novel term based on VO (Section 5.2.4) that guides the robot to actively avoid pedestrians and move toward its goal. In turn, Qin et al. [346] introduced a socially aware mapless navigation algorithm employing RL to learn strategies that conform to social customs and obey specific traffic rules.

Miscellaneous Approaches

Besides the aforementioned methods, learning-based applications include employing a Hidden Markov Model (HMM) in a higher-level system that learns to choose between RL-based collision avoidance and target pursuit [347].
On the other hand, Tai et al. [184] applied a Generative Adversarial Imitation Learning (GAIL) strategy to navigate populated dynamic environments in a socially compliant manner using only raw depth inputs from an RGB-D camera. Their approach learns continuous actions and the desired force toward the target, and it outperformed a pure behavior cloning policy regarding safety and efficiency.
In the approach by Lu et al. [348], the crowd's density is dynamically quantified and incorporated into a reward function that decides the robot's distance from pedestrians. The authors extended the DRL-based work from [343] so that the best action is inferred from a reward function regarding the uncomfortable distance between the robot and a human. Alternatively, a system proposed by Yao et al. [114] incorporates a Generative Adversarial Network to track and follow social groups.

5.3. Discussion

A summary of the discussed navigation methods according to the requirements they implement is presented in Table 2. The listed approaches in most cases employ a hierarchical motion planning structure composed of a global path planner and a local trajectory planner. However, not all works explicitly reveal the planning algorithms used; thus, we do not report these details.
Each reviewed navigation method is classified based on the objectives addressed in the approach. A consequence of this methodology is that behavior cloning and imitation learning approaches (Section 5.2.6) are excluded from the classification, as without investigating the underlying dataset it is not clear which features were captured and, hence, which requirements were targeted. On the other hand, VO-based methods (Section 5.2.4), which proactively adjust the motion direction to avoid collisions, are always denoted as respecting motion legibility (Req. 2.4) (Section 3.3.4).
By far the most frequently covered requirements group is physical safety (Req. 1), inherited by social robot navigation from traditional navigation. It regards collision avoidance; hence, even approaches that do not explicitly regard humans in the environment (but rather moving obstacles) fall into this category. The most popular objective among social robot navigation algorithms is respecting personal spaces; however, most methods model them as circular, while numerous studies have revealed their asymmetry (Section 3.3.1). In contrast, motion naturalness and, importantly, social convention aspects are discussed less frequently. The latter are rarely considered, since research robots are usually designated for specific tasks, which fosters a fragmentary approach to design and implementation.

6. Evaluation

Evaluating social robot navigation systems is essential for gathering insights into user comfort and for optimizing their performance in real-world environments. This section discusses different evaluation methods, classifies the types of studies conducted to explore or verify designed navigation algorithms, and identifies tools facilitating efficient assessment, namely datasets, simulators, and benchmarks (Figure 9).

6.1. Methods

In general, evaluation methods encompass qualitative and quantitative approaches. Qualitative methods often involve subjective assessments, such as questionnaires conducted during user studies, which gauge users’ preferences and comfort levels while interacting with the robot (e.g., [9,40,87]). These subjective evaluations provide valuable insights into the social acceptability of robot navigation.
On the other hand, quantitative methods utilize objective metrics formulated mathematically to assess various aspects of robot performance and social awareness (e.g., [131,323,329,335,350]). These metrics enable precise assessment and, thus, evidence-based comparison of different navigation algorithms. Researchers employing a combination of qualitative and quantitative evaluation methods [85,131,328] can comprehensively gauge both the performance and suitability of human-aware navigation systems in meeting the expectations of users.
In recent work, Biswas et al. [33] stated that the ideal method of evaluating social robot navigation would be a large-scale qualitative user study, which is costly and time-consuming. Due to these drawbacks, automated methods that provide a quantitative approximation of such findings are required. Quantitative assessment methods are particularly useful for learning-based approaches, where the reward of an action must be numeric. Similarly, authors of planners that employ heuristics or optimize a single criterion benefit from benchmarking their methods against various strategies. Since automated quantitative methods produce repeatable indicators of an algorithm's performance, they are particularly relevant, e.g., during the development stage of a new algorithm. Nevertheless, grounding the social robot navigation requirements and approximating social phenomena with quantitative metrics would be impossible without user studies yielding qualitative results.

6.2. Studies

Social robotics experiments often involve user studies to gather subjective human impressions of the robot's behavior. Such studies are crucial for social robot navigation, as they provide valuable insights that can be directly transferred into navigation system requirements (Section 3). Experiments conducted to collect such data can be divided into controlled and exploratory.
Controlled studies provide the possibility to conduct tests under configurable conditions. Hence, researchers can control variables and conditions to isolate specific factors, e.g., robot speed [60] or passing distances [49], and observe their effects. This, in turn, allows gathering more precise measures of robot behavior under different navigation algorithms. This type of study may include both questionnaires and laboratory studies. In contrast, exploratory studies are conducted in natural conditions with minimal or no preparation. They may take the form of, e.g., a case study [354] to gain insights, or field studies [1,2] involving observing and gathering data (qualitative and/or quantitative) on a robot deployed in the target environment. The principles of designing human–robot interaction studies were identified by Bartneck et al. in [355].
Controlled studies facilitate the systematic evaluation of the robot’s human awareness across different motion planning algorithms. However, direct comparison necessitates adherence to two crucial rules. Firstly, environmental conditions must be reproducible in subsequent trials. Secondly, a specific baseline motion planning setup (e.g., relying on classical navigation objectives), against which the examined navigation system will be compared, must remain unchanged in the following trials. In the literature, customized navigation approaches are contrasted against other algorithms [208] or a teleoperated agent [157], depending on the study design and goals.
Controlled laboratory studies aim to simplify complex interactions into prescribed scenarios of agents' movements under constant environmental conditions, so that the number of varying factors in subsequent trials is limited. Gao and Huang [5] identified standard scenarios investigated in social robot navigation works, which include passing [60,320,356], crossing [71,206], overtaking [60,312,341], approaching [267,326,352], accompanying [119,157,245], or combinations thereof.

6.3. Tools

Multiple tools facilitate the evaluation of social robot navigation approaches. They are particularly useful for performing preliminary tests before arranging real-world experiments, which may pose a significant organizational effort [6,9,77,89].

6.3.1. Datasets

Datasets can be employed to train models for human trajectory prediction and for learning robot movements in populated environments. They are indispensable for neural approaches that learn policies from data [269,322,348].
The pioneering datasets in the field are ETH [357] and UCY [358], suitable for tracking and prediction; they provide pedestrian trajectories from a top-view, fixed, outdoor camera. Later, Rudenko et al. [359] developed the THÖR indoor dataset with human trajectories and eye gaze data with accurate ground truth information; the data were collected using motion capture hardware, with 3D LiDAR recordings and a mobile robot in the scene. Another dataset, named SCAND, was proposed by Karnan et al. [337]; it contains indoor and outdoor data from multiple sensors of a mobile robot teleoperated in a socially compliant manner.
Alternatively, the SocNav1 [360] and SocNav2 [349] datasets were designed for learning and benchmarking functions estimating social conventions in robot navigation, using human feedback in simulated environments. Wang et al. [361] developed the TBD dataset, containing human-verified labels, a combination of top-down and egocentric views, and naturalistic human behavior in the presence of a mobile capturing system moving in a socially acceptable way. Another dataset was created as part of the CrowdBot project and is applicable to crowd detection and tracking, as well as to learning navigation in populated, dynamic environments [362].
Recently, new datasets have emerged, for example, SiT [363], which contains indoor and outdoor recordings collected while the robot navigated in a crowded environment, capturing dense human–robot interactive scenarios with annotated pedestrian information. Nguyen et al. [364] developed the MuSoHu dataset, gathering recordings from sensors placed on human participants walking in human-occupied spaces; thus, interactions between robots and humans were not captured. Hirose et al. [134] presented the HuRoN dataset, collected as multimodal sensory data from a robot operating with an autonomous policy while interacting with humans in real-world scenes.
Publications relying on some of these datasets were identified in [5] and partially in [17], while in [3] the authors grouped datasets by purpose: activity recognition, human pose estimation, and trajectory prediction.

6.3.2. Simulators

In recent years, simulation experiments have become more common due to the growth of RL [147,154,174,341,345] and other data-driven approaches [184]. Simulators are particularly useful for the systematic evaluation of social robot navigation algorithms, as they can provide identical initial conditions across experiment trials, which is not always possible in user studies. Simulators also facilitate agile algorithm development and provide a flexibility that datasets often lack. Furthermore, as opposed to real-world tests, they are easily reconfigurable and cost-effective in terms of time and resources when repeating trials.
Numerous simulation ecosystems have been developed for robotics [365]. The majority are directly applicable to social robotics, as they provide movable human-like characters, and several are suitable for rich human–robot interaction. The main characteristics of state-of-the-art approaches for conducting virtual social robot navigation experiments are presented in Table 3, whereas Table 4 illustrates their methods for simulating human motion behaviors.
The comparison in Table 3 includes 2D and 3D simulators, as well as frameworks, that have ROS integration (the most popular robotic framework), are actively maintained, and are open-source. Software architectures for human simulation can be divided into standalone simulators and frameworks; the latter are usually designed for controlling simulated humans and abstract from a specific simulator, so interfacing components are necessary for integration. The proposed classification regards the fidelity of the virtual robot replication, i.e., whether dynamic intricacies (friction, etc.) are included or only an ideal kinematic model is considered. Additionally, the comparison identifies the variety of tasks that can be performed by simulated humans and the methods for controlling them. The capability of setting dynamic goals for virtual humans is crucial for rich human–robot interactions, which usually require an orchestrator. For example, handover tasks can be simulated only with synchronization of human and robot activities: the human receives an object after the robot approaches them (which in high-fidelity simulation always takes a varying amount of time); hence, the reception must be triggered at different timestamps.
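A minimal event-driven orchestrator for such a handover scenario might look as follows; the simulator interface used here (`robot_pose`, `human_pose`, `trigger_reception`) is entirely hypothetical and would map onto whichever ecosystem from Table 3 is actually used.

```python
import math

class HandoverOrchestrator:
    """Sketch of a simulation orchestrator synchronizing a virtual human
    with the robot: the human's object-reception action is triggered only
    once the robot actually reaches them, whenever that happens.
    """

    def __init__(self, sim, human_id, reach_dist=0.6):
        self.sim = sim                  # hypothetical simulator handle
        self.human_id = human_id
        self.reach_dist = reach_dist    # distance at which handover is possible
        self.done = False

    def on_sim_step(self):
        """Call once per simulation step (event-driven triggering)."""
        if self.done:
            return
        rx, ry = self.sim.robot_pose()[:2]
        hx, hy = self.sim.human_pose(self.human_id)[:2]
        if math.hypot(rx - hx, ry - hy) < self.reach_dist:
            self.sim.trigger_reception(self.human_id)  # hand the object over
            self.done = True
```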
On the other hand, Table 4 presents the characteristics of virtual humans' navigation in each simulation ecosystem. The comparison points out the algorithms used for motion planning and whether the motion of each agent can be configured individually. The classification also includes information on whether the simulation ecosystem allows formation-like motion of virtual humans, which is restricted by the capabilities of the available motion planning algorithms.
Notably, more advanced simulators facilitate transferring algorithms from virtual setups to real-world hardware. All listed simulators except flatland (https://github.com/avidbots/flatland (accessed on 20 March 2024)) [345] provide kinodynamic fidelity of robots, whereas the exactness of frameworks depends on the simulators they are integrated with. Simplified, lightweight simulators with the possibility to simulate dynamic agents, such as SocialGym 2.0, are well-suited for learning applications requiring multiple repetitions, whereas high-fidelity simulators, like Gazebo (Ignition) or iGibson, target rich interaction scenarios. Nevertheless, transferring navigation methods from simulation into real-world experiments is essential to demonstrate that the developed algorithmic approaches work not only in simulated setups but are also reliable and promising for wider applications.

6.3.3. Benchmarks

Due to the growing set of available navigation algorithms, the importance of quantitative evaluation has increased. Lately, various automated quantitative assessment systems, called benchmarks, have been developed to ease the evaluation of traditional and social robot navigation. An appropriate benchmark design requires knowledge of the requirements for a robot navigation system (Section 3), concurrently from the classical and human-aware points of view [76].
Several works have recently proposed benchmarking frameworks for evaluating robot motion planning algorithms from the classical navigation perspective [376,377,378,379,380,381,382,383,384,385], i.e., without considering human-awareness constraints. These works mainly focus on performance metrics like navigation success rate, path length, or time required to reach the goal. Benchmarks for socially aware robot navigation are in the minority, but several works address this matter [33,369,386]. In some cases, simulators are coupled with internally calculated metrics for assessing navigation [369,374].
The primary features of state-of-the-art approaches for benchmarking robot navigation are presented in Table 5. The comparison includes only actively maintained and open-source benchmarks. The classification of methods focuses on the variety of metrics implemented (following the requirements taxonomy from Section 3), as well as determining suitable test environments (simulation/real world) and a set of analysis tools provided, e.g., for results presentation.
Quantitative metrics are an inherent part of benchmark systems, as they aim to implement objective criteria approximating subjective assessments. Therefore, quantitative metrics should reflect mathematical formulations of the requirements discussed in Section 3. Metrics covering most of the perceived safety principles for social robot navigation are implemented in the SRPB benchmark (https://github.com/rayvburn/srpb (accessed on 20 March 2024)), whose human-awareness indicators also account for people-tracking uncertainty, facilitating evaluation with the robot's onboard perception [76]. Besides the listed benchmark systems, several complementary indicators for assessing the perceived safety of humans in the context of social robot navigation appear in [388]. The survey by Gao and Huang [5] discusses the metrics presented in the literature in detail.
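As an example of how a perceived-safety requirement can be approximated by a quantitative metric, the sketch below computes the fraction of time the robot intrudes into an asymmetric-Gaussian personal space model oriented along the human's heading. This is a generic illustration with assumed sigma values and threshold; the indicator definitions in SRPB or [388] differ in detail.

```python
import numpy as np

def personal_space_intrusion(robot_traj, human_traj, sigma_front=1.0,
                             sigma_side=0.7, sigma_back=0.5, threshold=0.3):
    """Fraction of timestamps at which the robot intrudes into a human's
    personal space, modeled as an asymmetric 2D Gaussian.

    robot_traj: (N, 2) robot positions,
    human_traj: (N, 3) human poses (x, y, heading), time-synchronized.
    """
    intrusions = 0
    for (rx, ry), (hx, hy, th) in zip(robot_traj, human_traj):
        # Express the robot position in the human's local frame
        dx, dy = rx - hx, ry - hy
        lx = np.cos(th) * dx + np.sin(th) * dy    # along the heading
        ly = -np.sin(th) * dx + np.cos(th) * dy   # lateral offset
        sx = sigma_front if lx >= 0 else sigma_back
        value = np.exp(-0.5 * ((lx / sx) ** 2 + (ly / sigma_side) ** 2))
        intrusions += value > threshold           # count intruding timestamps
    return intrusions / len(robot_traj)
```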

7. Discussion

Although the literature on social robot navigation is vast, issues of great significance remain that are fundamental for providing comprehensive social intelligence to robots. Major challenges and future work perspectives are identified in the remainder of this section.

7.1. In-Depth User Studies Exploring Human Preferences and Norm Protocols

The years 2000–2015 were very productive in terms of user studies investigating social conventions and human preferences during interaction with robots [6,39,40,84,137]. Recently, we have observed far fewer exploratory and confirmatory studies [355], whereas, according to our extensive literature review, some areas would still benefit from a deeper investigation of how to obey complex norms and under what conditions (Section 3.5). Also, multiple studies are contradictory regarding the gaze modulation of robots (Section 3.4.2). Continued research should provide valuable insights for understanding the robot's social behavior requirements, since, with the rapid growth of machine learning techniques, the analytical modeling of social phenomena receives less attention, being displaced by more accessible data-driven approaches.

7.2. Implementing Complex Social Conventions in Robot Navigation Systems

The classification of requirements' fulfillment in various navigation approaches, presented in Table 2, illustrates that social conventions are rarely addressed across algorithms and are implemented in a rather fragmentary manner. Among the norms specified in our taxonomy, the commonly neglected ones include, e.g., standing in line or obeying elevator etiquette. We argue that the scarcity of works on social norm implementations is closely related to the necessity of including rich contextual information in robot navigation systems to behave in a socially acceptable way, which applies to the examples provided.
Multiple works discussed in Section 4.4 and Section 5 tackle contextual awareness only fragmentarily, adhering to specific rules in a given context [131,220,221,222]. Notably, the literature review shows that many state-of-the-art Deep Reinforcement Learning methods implement collision avoidance with dynamic objects rather than truly human-aware navigation, as their reward functions consider only the separation distance between agents [134,146,174,342,343,344,345], imitating circular personal spaces regardless of other social conventions and contextual cues.
A robot's intelligence is often equated with utilizing contextual information in its decision-making [16,214]. Therefore, we argue that implementing complex social conventions in robot navigation systems requires integrating motion planning with knowledge bases [389], which could be updated by perception modules extracting environmental features in real time. However, including information from knowledge bases directly in existing motion planning approaches is impractical; hence, an additional component could be added to the standardized robot motion planning architecture consisting of a global path planner and a local trajectory planner. The role of the new social activity planner component would be to analyze environmental information and, based on the implemented social protocols, periodically generate new goal poses according to the task context (Section 4.4.4). In this setup, the new component coordinates task execution in a socially acceptable manner, while the global path planner and the local trajectory planner handle motion planning with respect to the requirements related to the physical and perceived safety of humans, as well as to the robot's motion naturalness. Additionally, the social activity planner could be integrated with the robot's head controller to properly modulate the gaze direction during task execution.
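A skeleton of the proposed component might look as follows; every interface here (the knowledge-base query, the protocol table, the `context.kind` attribute) is a hypothetical placeholder meant only to show where the social activity planner would sit between the knowledge base and the global path planner.

```python
from dataclasses import dataclass

@dataclass
class GoalPose:
    x: float
    y: float
    yaw: float

class SocialActivityPlanner:
    """Sketch of the proposed social activity planner: it consumes
    contextual facts from a knowledge base and periodically emits goal
    poses for the global path planner."""

    def __init__(self, knowledge_base, protocols):
        self.kb = knowledge_base      # updated online by perception modules
        self.protocols = protocols    # social protocols: context kind -> goal rule

    def update(self, task):
        context = self.kb.query(task)             # e.g., detected queue, elevator, group
        rule = self.protocols.get(context.kind)
        if rule is None:
            return None                           # no convention applies; keep current goal
        return rule(context, task)                # e.g., GoalPose at the end of a queue

# The global path planner and local trajectory planner then realize the
# emitted GoalPose while handling the physical/perceived safety constraints.
```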
An alternative method, integrating contextual richness directly into DRL-based end-to-end algorithms, poses the challenge of capturing the numerous intricacies of social robot navigation in a single control policy, which might negatively affect the generalization capabilities of such approaches. Recently, a tendency to integrate learning-based approaches with classical algorithms has emerged, e.g., [147,155,315,322], which might mitigate this drawback.
The concepts presented in [220,390] offer valuable insights for enhancing cognitive architectures that allow inferring relations between environment objects once various facts about the environment, task, and humans are injected into the knowledge base. Works attempting to design context-aware social robot navigation integrated with a cognitive system include [228], where the CORTEX architecture [218] was used, as well as [225,391]. Recently, the authors of [131] used socially aware navigation as one of the robot skills within a cognitive architecture, utilizing elements of environmental, interpersonal, and diversity contexts.

7.3. Context-Aware Framework for Modulating Motion Planning Objectives

Social robots are commonly deployed for tasks in complex environments, which requires rich contextual awareness, as the robot's navigation objectives might vary according to the situation at hand (Section 4.4.1). The enriched contextual awareness discussed in Section 7.2 must be coordinated with the robot's motion planning scheme to obtain human-aware behaviors and compliance with social conventions.
To achieve comprehensive human-aware robot navigation, which is a multiobjective problem, it is crucial not to treat social aspects as hard constraints. For instance, if a person is lying down due to fainting, the robot should be capable of approaching closely to check their condition, even if this violates proxemics rules. Therefore, finding the relation between the navigation objectives and the contexts at hand could lead to more socially acceptable motions and enhance the perceived intelligence of the robot. This proposal aligns with one of the suggestions from [12].
Technically, the relation between contexts and navigation objectives can be reduced to a function that weighs the components of a multiobjective cost function designed to optimize human-aware navigation (see the sketch below). Such a function could be embedded into a configurable, context-aware orchestrating framework, which we indicate as a relevant future work perspective. Preliminary work in this matter has been conducted in [390], where the authors defined a mapping from task-level knowledge to motion-level knowledge to enhance motion planning; specifically, they identified variables that might be used in such an orchestrating framework and help dynamically weight the trajectory planning parameters. Nevertheless, finding the desired relation requires extensive user studies and creates perspectives for applying machine learning techniques, as manual tuning will likely be infeasible due to the complexity of the problem.
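A toy version of such a weighing function is sketched below; the context labels, weight names, and numeric values are purely illustrative assumptions about what an orchestrating framework could expose.

```python
def objective_weights(context, base):
    """Hypothetical mapping from a detected context to the weights of a
    multiobjective trajectory cost (a sketch of the orchestrating
    function discussed above)."""
    w = dict(base)  # e.g., {'goal': 1.0, 'proxemics': 1.0, 'legibility': 1.0}
    if context == 'emergency_assistance':
        w['proxemics'] = 0.1    # allow approaching a fainted person closely
        w['goal'] = 2.0
    elif context == 'crowded_corridor':
        w['proxemics'] = 1.5    # tighten personal-space compliance
        w['legibility'] = 1.5
    return w

# The local trajectory planner then evaluates
#   cost(traj) = sum(w[k] * component_k(traj) for k in w)
```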

7.4. Context-Aware Benchmarks for Evaluating Nonprimitive Social Interactions

Benchmarks should also be aware of the contextual richness of social robot navigation, as this would ease the assessment and deliver more accurate results. As in online navigation (Section 7.3), contextual awareness in benchmarks is desired, yet nontrivial to handle and infer from.
To exemplify the impact of environmental contexts, benchmark systems should penalize the robot for traversing affordance spaces only if they are actively exploited by humans, i.e., only if activity spaces were initiated. This, in turn, requires integrating multiple sources of data during evaluation. A preliminary concept addressing this topic is implemented in the SEAN 2.0 simulator [369], which detects different social situations, but this information is not considered in metrics evaluation. In contrast, the SRPB benchmark [76] regards the interpersonal context, penalizing a robot for crossing through O-spaces of F-formations (human groups), while not considering environmental cues in its metrics.

7.5. Design of Absolute Social Metrics for Social Robot Navigation Benchmarking

An essential need in quantitative benchmarking of social robot navigation is the design of absolute metrics, i.e., metrics comparable between diverse scenarios. Most existing metrics do not sufficiently capture the generalizability of evaluated algorithms across diverse contexts [33,328,369,374,386]. This highlights the necessity of creating universal metrics that go beyond the specific context of individual scenarios. Standardized metrics applicable across various scenarios and study environments can enhance the reproducibility and transferability of findings.

8. Summary

In this paper, we grounded the social robot navigation requirements based on the reviewed user studies of unfocused and focused human–robot interactions, which highlighted objectives describing how robots should behave in populated environments. The human-aware robot navigation requirements are organized into our taxonomy, consisting of requirements ensuring the physical and perceived safety of humans, as well as requirements assuring the robot's motion naturalness and its compliance with social norms. This classification forms the basis for the analysis of algorithmic topics.
Our study examines the key methods for addressing the fundamental challenges of social robot perception, namely the detection and tracking of humans in the robot's environment. The diverse environment representations utilized in different motion planning approaches were also discussed, as well as various methods for human trajectory prediction, which is crucial for real robots equipped with sensors with a limited field of view. The survey also highlights the topic of contextual awareness and how it has been tackled in state-of-the-art navigation approaches.
The major part of our review encompasses various methods employed for robot motion planning that take into account constraints arising from the presence of surrounding humans. Approaches present in the literature were classified into global path planning and local trajectory planning algorithms according to the common hierarchical structure of motion planning systems. Both global path planners and local trajectory planners were organized into groups sharing common algorithmic characteristics. Besides a thorough description of various navigation methods, these approaches are classified according to the established requirements taxonomy, based on the objectives addressed.
This survey also explores the methods for evaluating social robot navigation, as well as study types and tools relevant to the agile development of navigation techniques. The assessment tools were discussed, distinguishing between datasets, simulators, and benchmarks. An extensive comparison of actively maintained simulators for social robotics was proposed. Moreover, benchmarks suitable for quantitative evaluation of social robot navigation were classified, utilizing the proposed requirements taxonomy, according to the implemented metrics.
Our study examined the state of the art in the social robot navigation field and proposed several major topics for future work, with a context-aware framework for modulating navigation objectives being the most promising. As the field of social robot navigation grows rapidly, further integration of socially aware mobile robots into daily life is expected. This cross-sectional review contributes to a broader understanding of social robot navigation fundamentals that lie on the border of robotics and social sciences. Our survey sheds light on social aspects that have not been adequately addressed in technical and social science papers.

Author Contributions

Conceptualization, J.K. and W.S.; methodology, J.K.; investigation, J.K. and W.S.; writing—original draft preparation, J.K.; writing—review and editing, J.K., W.S. and E.N.-S.; visualization, J.K.; supervision, W.S. and E.N.-S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Burgard, W.; Cremers, A.B.; Fox, D.; Hähnel, D.; Lakemeyer, G.; Schulz, D.; Steiner, W.; Thrun, S. The interactive museum tour-guide robot. In Proceedings of the Fifteenth National/Tenth Conference on Artificial Intelligence/Innovative Applications of Artificial Intelligence, Madison, WI, USA, 26–30 July 1998; AAAI ’98/IAAI ’98. pp. 11–18. [Google Scholar]
  2. Thrun, S.; Bennewitz, M.; Burgard, W.; Cremers, A.; Dellaert, F.; Fox, D.; Hahnel, D.; Rosenberg, C.; Roy, N.; Schulte, J.; et al. MINERVA: A second-generation museum tour-guide robot. In Proceedings of the 1999 IEEE International Conference on Robotics and Automation (Cat. No.99CH36288C), Detroit, MI, USA, 10–15 May 1999; Volume 3, pp. 1999–2005. [Google Scholar] [CrossRef]
  3. Möller, R.; Furnari, A.; Battiato, S.; Härmä, A.; Farinella, G.M. A survey on human-aware robot navigation. Robot. Auton. Syst. 2021, 145, 103837. [Google Scholar] [CrossRef]
  4. Mirsky, R.; Xiao, X.; Hart, J.; Stone, P. Conflict Avoidance in Social Navigation—A Survey. J. Hum. Robot Interact. 2024, 13, 1–36. [Google Scholar] [CrossRef]
  5. Gao, Y.; Huang, C.M. Evaluation of Socially-Aware Robot Navigation. Front. Robot. AI 2022, 8, 721317. [Google Scholar] [CrossRef] [PubMed]
  6. Satake, S.; Kanda, T.; Glas, D.F.; Imai, M.; Ishiguro, H.; Hagita, N. How to approach humans? strategies for social robots to initiate interaction. In Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction, La Jolla, CA, USA, 9–13 March 2009; HRI ’09. pp. 109–116. [Google Scholar] [CrossRef]
  7. Trautman, P.; Ma, J.; Murray, R.M.; Krause, A. Robot navigation in dense human crowds: Statistical models and experimental studies of human-robot cooperation. Int. J. Robot. Res. 2015, 34, 335–356. [Google Scholar] [CrossRef]
  8. Biswas, J.; Veloso, M. The 1,000-km Challenge: Insights and Quantitative and Qualitative Results. IEEE Intell. Syst. 2016, 31, 86–96. [Google Scholar] [CrossRef]
  9. Shiomi, M.; Zanlungo, F.; Hayashi, K.; Kanda, T. Towards a Socially Acceptable Collision Avoidance for a Mobile Robot Navigating Among Pedestrians Using a Pedestrian Model. Int. J. Soc. Robot. 2014, 6, 443–455. [Google Scholar] [CrossRef]
  10. Lasota, P.A.; Fong, T.; Shah, J.A. A Survey of Methods for Safe Human-Robot Interaction. Found. Trends® Robot. 2017, 5, 261–349. [Google Scholar] [CrossRef]
  11. Singamaneni, P.T.; Bachiller-Burgos, P.; Manso, L.J.; Garrell, A.; Sanfeliu, A.; Spalanzani, A.; Alami, R. A survey on socially aware robot navigation: Taxonomy and future challenges. Int. J. Robot. Res. 2024. [Google Scholar] [CrossRef]
  12. Francis, A.; Pérez-d’Arpino, C.; Li, C.; Xia, F.; Alahi, A.; Bera, A.; Biswas, A.; Biswas, J.; Chandra, R.; Lewis Chiang, H.T.; et al. Principles and Guidelines for Evaluating Social Robot Navigation Algorithms. arXiv 2023, arXiv:2306.16740. [Google Scholar]
  13. Rios-Martinez, J.; Spalanzani, A.; Laugier, C. From Proxemics Theory to Socially-Aware Navigation: A Survey. Int. J. Soc. Robot. 2015, 7, 137–153. [Google Scholar] [CrossRef]
  14. Chik, S.F.; Yeong, C.F.; Su, E.L.M.; Lim, T.Y.; Subramaniam, Y.; Chin, P.J.H. A Review of Social-Aware Navigation Frameworks for Service Robot in Dynamic Human Environments. J. Telecommun. Electron. Comput. Eng. 2016, 8, 41–50. [Google Scholar]
  15. Kruse, T.; Pandey, A.K.; Alami, R.; Kirsch, A. Human-Aware Robot Navigation: A Survey. Robot. Auton. Syst. 2013, 61, 1726–1743. [Google Scholar] [CrossRef]
  16. Charalampous, K.; Kostavelis, I.; Gasteratos, A. Recent trends in social aware robot navigation: A survey. Robot. Auton. Syst. 2017, 93, 85–104. [Google Scholar] [CrossRef]
  17. Mavrogiannis, C.; Baldini, F.; Wang, A.; Zhao, D.; Trautman, P.; Steinfeld, A.; Oh, J. Core Challenges of Social Robot Navigation: A Survey. J. Hum. Robot Interact. 2023, 12, 1–39. [Google Scholar] [CrossRef]
  18. Zhu, K.; Zhang, T. Deep reinforcement learning based mobile robot navigation: A review. Tsinghua Sci. Technol. 2021, 26, 674–691. [Google Scholar] [CrossRef]
  19. Medina Sánchez, C.; Zella, M.; Capitán, J.; Marrón, P.J. From Perception to Navigation in Environments with Persons: An Indoor Evaluation of the State of the Art. Sensors 2022, 22, 1191. [Google Scholar] [CrossRef]
  20. Guillén-Ruiz, S.; Bandera, J.P.; Hidalgo-Paniagua, A.; Bandera, A. Evolution of Socially-Aware Robot Navigation. Electronics 2023, 12, 1570. [Google Scholar] [CrossRef]
  21. Zieliński, C.; Kornuta, T.; Winiarski, T. A Systematic Method of Designing Control Systems for Service and Field Robots. In Proceedings of the 19th IEEE International Conference on Methods and Models in Automation and Robotics, MMAR’2014. IEEE, Miedzyzdroje, Poland, 2–5 September 2014; pp. 1–14. [Google Scholar] [CrossRef]
  22. Breazeal, C. Designing Sociable Machines. In Socially Intelligent Agents: Creating Relationships with Computers and Robots; Springer: Boston, MA, USA, 2002; pp. 149–156. [Google Scholar] [CrossRef]
  23. Babel, F.; Kraus, J.M.; Baumann, M. Development and Testing of Psychological Conflict Resolution Strategies for Assertive Robots to Resolve Human-Robot Goal Conflict. Front. Robot. AI 2021, 7, 591448. [Google Scholar] [CrossRef]
  24. Boddington, P. EPSRC Principles of Robotics: Commentary on safety, robots as products, and responsibility. Connect. Sci. 2017, 29, 170–176. [Google Scholar] [CrossRef]
  25. Clarke, R. Asimov’s Laws of Robotics: Implications for Information Technology-Part I. Computer 1993, 26, 53–61. [Google Scholar] [CrossRef]
  26. Bera, A.; Randhavane, T.; Prinja, R.; Manocha, D. SocioSense: Robot navigation amongst pedestrians with social and psychological constraints. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 7018–7025. [Google Scholar] [CrossRef]
  27. Narayanan, V.; Manoghar, B.M.; Dorbala, V.S.; Manocha, D.; Bera, A. ProxEmo: Gait-based Emotion Learning and Multi-view Proxemic Fusion for Socially-Aware Robot Navigation. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020. [Google Scholar]
  28. Bena, R.M.; Zhao, C.; Nguyen, Q. Safety-Aware Perception for Autonomous Collision Avoidance in Dynamic Environments. IEEE Robot. Autom. Lett. 2023, 8, 7962–7969. [Google Scholar] [CrossRef]
  29. Guzzi, J.; Giusti, A.; Gambardella, L.M.; Theraulaz, G.; Di Caro, G.A. Human-friendly robot navigation in dynamic environments. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 423–430. [Google Scholar] [CrossRef]
  30. Moussaïd, M.; Helbing, D.; Theraulaz, G. How simple rules determine pedestrian behavior and crowd disasters. Proc. Natl. Acad. Sci. USA 2011, 108, 6884–6888. [Google Scholar] [CrossRef] [PubMed]
  31. Forootaninia, Z.; Karamouzas, I.; Narain, R. Uncertainty Models for TTC-Based Collision-Avoidance. In Proceedings of the Robotics: Science and Systems XIII, Massachusetts Institute of Technology, Cambridge, MA, USA, 12–16 July 2017; Amato, N.M., Srinivasa, S.S., Ayanian, N., Kuindersma, S., Eds.; 2017; Volume 7. [Google Scholar] [CrossRef]
  32. Karamouzas, I.; Sohre, N.; Narain, R.; Guy, S.J. Implicit Crowds: Optimization Integrator for Robust Crowd Simulation. ACM Trans. Graph. 2017, 36, 1–13. [Google Scholar] [CrossRef]
  33. Biswas, A.; Wang, A.; Silvera, G.; Steinfeld, A.; Admoni, H. SocNavBench: A Grounded Simulation Testing Framework for Evaluating Social Navigation. J. Hum. Robot Interact. 2022, 11, 1–24. [Google Scholar] [CrossRef]
  34. Hall, E.T. The Hidden Dimension/by Edward T. Hall; A Doubleday Anchor book; Anchor Books: Garden City, NY, USA, 1969. [Google Scholar]
  35. Aiello, J.R. A further look at equilibrium theory: Visual interaction as a function of interpersonal distance. Environ. Psychol. Nonverbal Behav. 1977, 1, 122–140. [Google Scholar] [CrossRef]
  36. Ashton, N.L.; Shaw, M.E. Empirical investigations of a reconceptualized personal space. Bull. Psychon. Soc. 1980, 15, 309–312. [Google Scholar] [CrossRef]
  37. Baldassare, M. Human Spatial Behavior. Annu. Rev. Sociol. 1978, 4, 29–56. [Google Scholar] [CrossRef]
  38. Greenberg, C.I.; Strube, M.J.; Myers, R.A. A multitrait-multimethod investigation of interpersonal distance. J. Nonverbal Behav. 1980, 5, 104–114. [Google Scholar] [CrossRef]
  39. Butler, J.T.; Agah, A. Psychological Effects of Behavior Patterns of a Mobile Personal Robot. Auton. Robot. 2001, 10, 185–202. [Google Scholar] [CrossRef]
  40. Althaus, P.; Ishiguro, H.; Kanda, T.; Miyashita, T.; Christensen, H. Navigation for human-robot interaction tasks. In Proceedings of the IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, 26 April–1 May 2004; Proceedings. ICRA ’04. Volume 2, pp. 1894–1900. [Google Scholar] [CrossRef]
  41. Hayduk, L. Personal Space: An Evaluative and Orienting Overview. Psychol. Bull. 1978, 85, 117–134. [Google Scholar] [CrossRef]
  42. Hayduk, L. The shape of personal space: An experimental investigation. Can. J. Behav. Sci. 1981, 13, 87–93. [Google Scholar] [CrossRef]
  43. Helbing, D.; Molnár, P. Social force model for pedestrian dynamics. Phys. Rev. E 1995, 51, 4282–4286. [Google Scholar] [CrossRef] [PubMed]
  44. Johansson, A.; Helbing, D.; Shukla, P.K. Specification of the Social Force Pedestrian Model by Evolutionary Adjustment to Video Tracking Data. Adv. Complex Syst. 2007, 10, 271–288. [Google Scholar] [CrossRef]
  45. Gérin-Lajoie, M.; Richards, C.L.; Fung, J.; McFadyen, B.J. Characteristics of personal space during obstacle circumvention in physical and virtual environments. Gait Posture 2008, 27, 239–247. [Google Scholar] [CrossRef] [PubMed]
  46. Baxter, J.C. Interpersonal Spacing in Natural Settings. Sociometry 1970, 33, 444–456. [Google Scholar] [CrossRef] [PubMed]
  47. Kessler, J.; Schroeter, C.; Gross, H.M. Approaching a Person in a Socially Acceptable Manner Using a Fast Marching Planner. In Intelligent Robotics and Applications; Jeschke, S., Liu, H., Schilberg, D., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 368–377. [Google Scholar]
  48. Thompson, D.E.; Aiello, J.R.; Epstein, Y.M. Interpersonal distance preferences. J. Nonverbal Behav. 1979, 4, 113–118. [Google Scholar] [CrossRef]
  49. Pacchierotti, E.; Christensen, H.; Jensfelt, P. Human-robot embodied interaction in hallway settings: A pilot user study. In Proceedings of the ROMAN 2005. IEEE International Workshop on Robot and Human Interactive Communication, Nashville, TN, USA, 13–15 August 2005; pp. 164–171. [Google Scholar] [CrossRef]
  50. Pacchierotti, E.; Christensen, H.I.; Jensfelt, P. Evaluation of Passing Distance for Social Robots. In Proceedings of the ROMAN 2006—The 15th IEEE International Symposium on Robot and Human Interactive Communication, Hatfield, UK, 6–8 September 2006; pp. 315–320. [Google Scholar] [CrossRef]
  51. Welsch, R.; von Castell, C.; Hecht, H. The anisotropy of personal space. PLoS ONE 2019, 14, e0217587. [Google Scholar] [CrossRef]
  52. Neggers, M.; Cuijpers, R.; Ruijten, P.; Ijsselsteijn, W. Determining Shape and Size of Personal Space of a Human when Passed by a Robot. Int. J. Soc. Robot. 2022, 14, 561–572. [Google Scholar] [CrossRef]
  53. Huettenrauch, H.; Eklundh, K.S.; Green, A.; Topp, E.A. Investigating Spatial Relationships in Human-Robot Interaction. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–13 October 2006; pp. 5052–5059. [Google Scholar] [CrossRef]
  54. Torta, E.; Cuijpers, R.H.; Juola, J.F. Design of a Parametric Model of Personal Space for Robotic Social Navigation. Int. J. Soc. Robot. 2013, 5, 357–365. [Google Scholar] [CrossRef]
  55. Yoda, M.; Shiota, Y. The mobile robot which passes a man. In Proceedings of the 6th IEEE International Workshop on Robot and Human Communication. RO-MAN’97 SENDAI, Sendai, Japan, 19 September–1 October 1997; pp. 112–117. [Google Scholar] [CrossRef]
  56. Takayama, L.; Pantofaru, C. Influences on Proxemic Behaviors in Human-Robot Interaction. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2009, St. Louis, MO, USA, 11–15 October 2009; Volume 2009, pp. 5495–5502. [Google Scholar] [CrossRef]
  57. Hayduk, L.A. Personal space: Understanding the simplex model. J. Nonverbal Behav. 1994, 18, 245–260. [Google Scholar] [CrossRef]
  58. Park, S.; Trivedi, M. Multi-person interaction and activity analysis: A synergistic track- and body-level analysis framework. Mach. Vis. Appl. 2007, 18, 151–166. [Google Scholar] [CrossRef]
  59. Kirby, R.; Simmons, R.; Forlizzi, J. COMPANION: A Constraint-Optimizing Method for Person-Acceptable Navigation. In Proceedings of the RO-MAN 2009—The 18th IEEE International Symposium on Robot and Human Interactive Communication, Toyama, Japan, 27 September–2 October 2009; pp. 607–612. [Google Scholar] [CrossRef]
  60. Neggers, M.M.E.; Cuijpers, R.H.; Ruijten, P.A.M.; IJsselsteijn, W.A. The effect of robot speed on comfortable passing distances. Front. Robot. AI 2022, 9, 915972. [Google Scholar] [CrossRef] [PubMed]
  61. Moussaïd, M.; Perozo, N.; Garnier, S.; Helbing, D.; Theraulaz, G. The Walking Behaviour of Pedestrian Social Groups and Its Impact on Crowd Dynamics. PLoS ONE 2010, 5, e10047. [Google Scholar] [CrossRef]
  62. Federici, M.L.; Gorrini, A.; Manenti, L.; Vizzari, G. Data Collection for Modeling and Simulation: Case Study at the University of Milan-Bicocca. In Cellular Automata; Sirakoulis, G.C., Bandini, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 699–708. [Google Scholar]
  63. Kendon, A. Spacing and Orientation in Co-present Interaction. In Development of Multimodal Interfaces: Active Listening and Synchrony, Proceedings of the Second COST 2102 International Training School, Dublin, Ireland, 23–27 March 2009, Revised Selected Papers; Springer: Berlin/Heidelberg, Germany, 2010; pp. 1–15. [Google Scholar] [CrossRef]
  64. Mead, R.; Atrash, A.; Matarić, M.J. Proxemic Feature Recognition for Interactive Robots: Automating Metrics from the Social Sciences. In Social Robotics; Mutlu, B., Bartneck, C., Ham, J., Evers, V., Kanda, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 52–61. [Google Scholar]
  65. Rios-Martinez, J.; Renzaglia, A.; Spalanzani, A.; Martinelli, A.; Laugier, C. Navigating between people: A stochastic optimization approach. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, St Paul, MN, USA, 14–18 May 2012; pp. 2880–2885. [Google Scholar] [CrossRef]
  66. Efran, M.G.; Cheyne, J.A. Shared space: The co-operative control of spatial areas by two interacting individuals. Can. J. Behav. Sci./Rev. Can. Des Sci. Du Comport. 1973, 5, 201–210. [Google Scholar] [CrossRef]
  67. Knowles, E.S.; Kreuser, B.; Haas, S.; Hyde, M.; Schuchart, G.E. Group size and the extension of social space boundaries. J. Personal. Soc. Psychol. 1976, 33, 647–654. [Google Scholar] [CrossRef]
  68. Krueger, J. Extended cognition and the space of social interaction. Conscious. Cogn. 2011, 20, 643–657. [Google Scholar] [CrossRef] [PubMed]
  69. Rehm, M.; André, E.; Nischt, M. Lets Come Together—Social Navigation Behaviors of Virtual and Real Humans. In Intelligent Technologies for Interactive Entertainment; Maybury, M., Stock, O., Wahlster, W., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; p. 336. [Google Scholar]
  70. Katyal, K.; Gao, Y.; Markowitz, J.; Pohland, S.; Rivera, C.; Wang, I.J.; Huang, C.M. Learning a Group-Aware Policy for Robot Navigation. arXiv 2020, arXiv:2012.12291. [Google Scholar]
  71. Petrak, B.; Sopper, G.; Weitz, K.; André, E. Do You Mind if I Pass Through? Studying the Appropriate Robot Behavior when Traversing two Conversing People in a Hallway Setting. In Proceedings of the 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), Vancouver, BC, Canada, 8–12 August 2021; pp. 369–375. [Google Scholar] [CrossRef]
  72. Dragan, A.D.; Lee, K.C.; Srinivasa, S.S. Legibility and predictability of robot motion. In Proceedings of the 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, 3–6 March 2013; pp. 301–308. [Google Scholar] [CrossRef]
  73. Lu, D.V.; Smart, W.D. Towards more efficient navigation for robots and humans. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 1707–1713. [Google Scholar] [CrossRef]
  74. Kruse, T.; Kirsch, A.; Khambhaita, H.; Alami, R. Evaluating Directional Cost Models in Navigation. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany, 3–6 March 2014; HRI ’14. pp. 350–357. [Google Scholar] [CrossRef]
  75. Lichtenthäler, C.; Lorenzy, T.; Kirsch, A. Influence of legibility on perceived safety in a virtual human-robot path crossing task. In Proceedings of the 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, 9–13 September 2012; pp. 676–681. [Google Scholar] [CrossRef]
  76. Karwowski, J.; Szynkiewicz, W. Quantitative Metrics for Benchmarking Human-Aware Robot Navigation. IEEE Access 2023, 11, 79941–79953. [Google Scholar] [CrossRef]
77. Dautenhahn, K.; Walters, M.; Woods, S.; Koay, K.; Nehaniv, C.; Sisbot, E.; Alami, R.; Siméon, T. How may I serve you? A robot companion approaching a seated person in a helping context. In Proceedings of the 2006 ACM Conference on Human-Robot Interaction (HRI 2006), Salt Lake City, UT, USA, 2–4 March 2006; Volume 2006, pp. 172–179. [Google Scholar] [CrossRef]
  78. Koay, K.; Sisbot, E.; Syrdal, D.S.; Walters, M.; Dautenhahn, K.; Alami, R. Exploratory Study of a Robot Approaching a Person in the Context of Handing Over an Object. In Proceedings of the AAAI Spring Symposium—Technical Report, Stanford, CA, USA, 26–28 March 2007; pp. 18–24. [Google Scholar]
  79. Walters, M.L.; Dautenhahn, K.; Woods, S.N.; Koay, K.L. Robotic etiquette: Results from user studies involving a fetch and carry task. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Arlington, VA, USA, 10–12 March 2007; HRI ’07. pp. 317–324. [Google Scholar] [CrossRef]
  80. Svenstrup, M.; Tranberg, S.; Andersen, H.J.; Bak, T. Pose estimation and adaptive robot behaviour for human-robot interaction. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; ICRA’09. IEEE Press: New York, NY, USA, 2009; pp. 3222–3227. [Google Scholar]
  81. Torta, E.; Cuijpers, R.H.; Juola, J.F.; van der Pol, D. Design of Robust Robotic Proxemic Behaviour. In Social Robotics; Mutlu, B., Bartneck, C., Ham, J., Evers, V., Kanda, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 21–30. [Google Scholar]
  82. Koay, K.L.; Syrdal, D.S.; Ashgari-Oskoei, M.; Walters, M.L.; Dautenhahn, K. Social Roles and Baseline Proxemic Preferences for a Domestic Service Robot. Int. J. Soc. Robot. 2014, 6, 469–488. [Google Scholar] [CrossRef]
  83. Karreman, D.; Utama, L.; Joosse, M.; Lohse, M.; van Dijk, B.; Evers, V. Robot etiquette: How to approach a pair of people? In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany, 3–6 March 2014; HRI ’14. pp. 196–197. [Google Scholar] [CrossRef]
  84. Ball, A.; Silvera-Tawil, D.; Rye, D.; Velonaki, M. Group Comfortability When a Robot Approaches. In Social Robotics; Beetz, M., Johnston, B., Williams, M.A., Eds.; Springer: Cham, Switzerland, 2014; pp. 44–53. [Google Scholar]
  85. Joosse, M.; Poppe, R.; Lohse, M.; Evers, V. Cultural Differences in how an Engagement-Seeking Robot should Approach a Group of People. In Proceedings of the 5th ACM international conference on Collaboration across boundaries: Culture, Distance & Technology (CABS 2014), Kyoto, Japan, 20–24 August 2014; Association for Computing Machinery: New York, NY, USA, 2014; pp. 121–130. [Google Scholar] [CrossRef]
  86. Sardar, A.; Joosse, M.; Weiss, A.; Evers, V. Don’t stand so close to me: Users’ attitudinal and behavioral responses to personal space invasion by robots. In Proceedings of the 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Boston, MA, USA, 5–8 March 2012; pp. 229–230. [Google Scholar] [CrossRef]
  87. Rossi, S.; Staffa, M.; Bove, L.; Capasso, R.; Ercolano, G. User’s Personality and Activity Influence on HRI Comfortable Distances. In Social Robotics; Kheddar, A., Yoshida, E., Ge, S.S., Suzuki, K., Cabibihan, J.J., Eyssel, F., He, H., Eds.; Springer: Cham, Switzerland, 2017; pp. 167–177. [Google Scholar]
  88. Sparrow, W.A.; Newell, K.M. Metabolic energy expenditure and the regulation of movement economy. Psychon. Bull. Rev. 1998, 5, 173–196. [Google Scholar] [CrossRef]
  89. Bitgood, S.; Dukes, S. Not Another Step! Economy of Movement and Pedestrian Choice Point Behavior in Shopping Malls. Environ. Behav. 2006, 38, 394–405. [Google Scholar] [CrossRef]
90. Arechavaleta, G.; Laumond, J.P.; Hicheur, H.; Berthoz, A. The nonholonomic nature of human locomotion: A modeling study. In Proceedings of the First IEEE/RAS-EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob 2006), Pisa, Italy, 20–22 February 2006; pp. 158–163. [Google Scholar] [CrossRef]
  91. Khatib, O. Real-time obstacle avoidance for manipulators and mobile robots. In Proceedings of the 1985 IEEE International Conference on Robotics and Automation, St. Louis, MO, USA, 25–28 March 1985; Volume 2, pp. 500–505. [Google Scholar] [CrossRef]
  92. Carton, D.; Turnwald, A.; Wollherr, D.; Buss, M. Proactively Approaching Pedestrians with an Autonomous Mobile Robot in Urban Environments. In Experimental Robotics, Proceedings of the 13th International Symposium on Experimental Robotics, Québec City, QC, Canada, 18–21 June 2012; Springer International Publishing: Heidelberg, Germany, 2013; pp. 199–214. [Google Scholar] [CrossRef]
  93. Nummenmaa, L.; Hyönä, J.; Hietanen, J.K. I’ll Walk This Way: Eyes Reveal the Direction of Locomotion and Make Passersby Look and Go the Other Way. Psychol. Sci. 2009, 20, 1454–1458. [Google Scholar] [CrossRef] [PubMed]
  94. Cutting, J.; Vishton, P.; Braren, P. How we avoid collisions with stationary and moving objects. Psychol. Rev. 1995, 102, 627–651. [Google Scholar] [CrossRef]
  95. Kitazawa, K.; Fujiyama, T. Pedestrian Vision and Collision Avoidance Behavior: Investigation of the Information Process Space of Pedestrians Using an Eye Tracker. In Pedestrian and Evacuation Dynamics 2008; Klingsch, W.W.F., Rogsch, C., Schadschneider, A., Schreckenberg, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 95–108. [Google Scholar]
  96. Hayashi, K.; Shiomi, M.; Kanda, T.; Hagita, N. Friendly Patrolling: A Model of Natural Encounters. In Proceedings of the Robotics: Science and Systems VII, University of Southern California, Los Angeles, CA, USA, 27–30 June 2011. [Google Scholar] [CrossRef]
  97. Kuno, Y.; Sadazuka, K.; Kawashima, M.; Yamazaki, K.; Yamazaki, A.; Kuzuoka, H. Museum guide robot based on sociological interaction analysis. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, 28 April–3 May 2007; CHI ’07. pp. 1191–1194. [Google Scholar] [CrossRef]
  98. Fiore, S.M.; Wiltshire, T.J.; Lobato, E.J.C.; Jentsch, F.G.; Huang, W.H.; Axelrod, B. Toward understanding social cues and signals in human-robot interaction: Effects of robot gaze and proxemic behavior. Front. Psychol. 2013, 4, 859. [Google Scholar] [CrossRef] [PubMed]
  99. May, A.D.; Dondrup, C.; Hanheide, M. Show me your moves! Conveying navigation intention of a mobile robot to humans. In Proceedings of the 2015 European Conference on Mobile Robots (ECMR), Lincoln, UK, 2–4 September 2015; pp. 1–6. [Google Scholar] [CrossRef]
  100. Lynch, S.D.; Pettré, J.; Bruneau, J.; Kulpa, R.; Crétual, A.; Olivier, A.H. Effect of Virtual Human Gaze Behaviour During an Orthogonal Collision Avoidance Walking Task. In Proceedings of the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Tuebingen/Reutlingen, Germany, 18–22 March 2018; pp. 136–142. [Google Scholar] [CrossRef]
101. Khambhaita, H.; Rios-Martinez, J.; Alami, R. Head-Body Motion Coordination for Human Aware Robot Navigation. In Proceedings of the 9th International Workshop on Human-Friendly Robotics (HFR 2016), Genoa, Italy, 29–30 October 2016; p. 8. [Google Scholar]
  102. Lu, D.V. Contextualized Robot Navigation. Ph.D. Thesis, Washington University in St. Louis, St. Louis, MO, USA, 2014. [Google Scholar]
  103. Breazeal, C.; Edsinger, A.; Fitzpatrick, P.; Scassellati, B. Active vision for sociable robots. IEEE Trans. Syst. Man Cybern.—Part A Syst. Hum. 2001, 31, 443–453. [Google Scholar] [CrossRef]
  104. Mutlu, B.; Shiwa, T.; Kanda, T.; Ishiguro, H.; Hagita, N. Footing in human-robot conversations: How robots might shape participant roles using gaze cues. In Proceedings of the 2009 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI), La Jolla, CA, USA, 11–13 March 2009; pp. 61–68. [Google Scholar] [CrossRef]
  105. Kendon, A. Some functions of gaze-direction in social interaction. Acta Psychol. 1967, 26, 22–63. [Google Scholar] [CrossRef] [PubMed]
  106. Duncan, S. Some signals and rules for taking speaking turns in conversations. J. Personal. Soc. Psychol. 1972, 23, 283–292. [Google Scholar] [CrossRef]
  107. Barchard, K.A.; Lapping-Carr, L.; Westfall, R.S.; Fink-Armold, A.; Banisetty, S.B.; Feil-Seifer, D. Measuring the Perceived Social Intelligence of Robots. J. Hum.-Robot Interact. 2020, 9, 1–29. [Google Scholar] [CrossRef]
  108. Mumm, J.; Mutlu, B. Human-robot proxemics: Physical and psychological distancing in human-robot interaction. In Proceedings of the 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Lausanne, Switzerland, 8–11 March 2011; pp. 331–338. [Google Scholar] [CrossRef]
  109. Lin, C.; Rhim, J.; Moon, A.J. Less Than Human: How Different Users of Telepresence Robots Expect Different Social Norms. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 1–5 October 2023; pp. 3976–3982. [Google Scholar] [CrossRef]
110. Jung, E.; Yi, B.; Yuta, S. Control algorithms for a mobile robot tracking a human in front. In Proceedings of the 25th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2012), Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 2411–2416. [Google Scholar] [CrossRef]
111. Young, J.E.; Kamiyama, Y.; Reichenbach, J.; Igarashi, T.; Sharlin, E. How to walk a robot: A dog-leash human-robot interface. In Proceedings of the 2011 IEEE RO-MAN, Atlanta, GA, USA, 31 July–3 August 2011; pp. 376–382. [Google Scholar] [CrossRef]
  112. Carton, D.; Olszowy, W.; Wollherr, D. Measuring the Effectiveness of Readability for Mobile Robot Locomotion. Int. J. Soc. Robot. 2016, 8, 721–741. [Google Scholar] [CrossRef]
  113. Gockley, R.; Forlizzi, J.; Simmons, R. Natural person-following behavior for social robots. In Proceedings of the 2007 2nd ACM/IEEE International Conference on Human-Robot Interaction (HRI), Arlington, VA, USA, 8–11 March 2007; pp. 17–24. [Google Scholar] [CrossRef]
  114. Yao, X.; Zhang, J.; Oh, J. Following Social Groups: Socially-Compliant Autonomous Navigation in Dense Crowds. In Proceedings of the IROS ’19 Cognitive Vehicles Workshop, Macau, China, 8 November 2019. [Google Scholar]
  115. Topp, E.A.; Christensen, H.I. Tracking for following and passing persons. In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005; IEEE: New York, NY, USA, 2005; pp. 2321–2327. [Google Scholar] [CrossRef]
  116. Müller, J.; Stachniss, C.; Arras, K.; Burgard, W. Socially Inspired Motion Planning for Mobile Robots in Populated Environments. In Proceedings of the International Conference on Cognitive Systems (CogSys), Karlsruhe, Germany, 2–4 April 2008; pp. 85–90. [Google Scholar]
  117. Kahn, P.H.; Freier, N.G.; Kanda, T.; Ishiguro, H.; Ruckert, J.H.; Severson, R.L.; Kane, S.K. Design patterns for sociality in human-robot interaction. In Proceedings of the 2008 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI), Amsterdam, The Netherlands, 12–15 March 2008; pp. 97–104. [Google Scholar] [CrossRef]
  118. Costa, M. Interpersonal Distances in Group Walking. J. Nonverbal Behav. 2010, 34, 15–26. [Google Scholar] [CrossRef]
  119. Honig, S.S.; Oron-Gilad, T.; Zaichyk, H.; Sarne-Fleischmann, V.; Olatunji, S.; Edan, Y. Toward Socially Aware Person-Following Robots. IEEE Trans. Cogn. Dev. Syst. 2018, 10, 936–954. [Google Scholar] [CrossRef]
  120. Saiki, L.Y.M.; Satake, S.; Huq, R.; Glas, D.F.; Kanda, T.; Hagita, N. How do people walk side-by-side?—Using a computational model of human behavior for a social robot. In Proceedings of the 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Boston, MA, USA, 5–8 March 2012; pp. 301–308. [Google Scholar]
  121. Karunarathne, D.; Morales, Y.; Kanda, T.; Ishiguro, H. Model of Side-by-Side Walking Without the Robot Knowing the Goal. Int. J. Soc. Robot. 2018, 10, 401–420. [Google Scholar] [CrossRef]
  122. Lindner, F.; Eschenbach, C. Towards a Formalization of Social Spaces for Socially Aware Robots. In Spatial Information Theory; Egenhofer, M., Giudice, N., Moratz, R., Worboys, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 283–303. [Google Scholar]
123. Calderita, L.; Vega, A.; Bustos, P.; Núñez, P. Social Robot Navigation adapted to Time-dependent Affordance Spaces: A Use Case for Caregiving Centers. In Proceedings of the 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 31 August–4 September 2020; pp. 944–949. [Google Scholar] [CrossRef]
  124. Raubal, M.; Moratz, R. A Functional Model for Affordance-Based Agents. In Towards Affordance-Based Robot Control; Rome, E., Hertzberg, J., Dorffner, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 91–105. [Google Scholar]
  125. Chung, S.Y.; Huang, H. Incremental learning of human social behaviors with feature-based spatial effects. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2012, Vilamoura, Algarve, Portugal, 7–12 October 2012; IEEE: New York, NY, USA, 2012; pp. 2417–2422. [Google Scholar] [CrossRef]
  126. Yuan, F.; Twardon, L.; Hanheide, M. Dynamic path planning adopting human navigation strategies for a domestic mobile robot. In Proceedings of the IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010—Conference Proceedings, Taipei, Taiwan, 18–22 October 2010; pp. 3275–3281. [Google Scholar] [CrossRef]
  127. Pacchierotti, E.; Christensen, H.I.; Jensfelt, P. Embodied Social Interaction for Service Robots in Hallway Environments. In Field and Service Robotics; Corke, P., Sukkariah, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 293–304. [Google Scholar]
  128. Moussaïd, M.; Helbing, D.; Garnier, S.; Johansson, A.; Combe, M.; Theraulaz, G. Experimental study of the behavioural mechanisms underlying self-organization in human crowds. Proc. R. Soc. B 2009, 276, 2755–2762. [Google Scholar] [CrossRef] [PubMed]
  129. Nakauchi, Y.; Simmons, R. A social robot that stands in line. In Proceedings of the 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2000) (Cat. No.00CH37113), Takamatsu, Japan, 31 October–5 November 2000; Volume 1, pp. 357–364. [Google Scholar] [CrossRef]
  130. Gallo, D.; Gonzalez-Jimenez, S.; Grasso, M.A.; Boulard, C.; Colombino, T. Exploring Machine-like Behaviors for Socially Acceptable Robot Navigation in Elevators. In Proceedings of the 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Sapporo, Japan, 7–10 March 2022; pp. 130–138. [Google Scholar] [CrossRef]
  131. Ginés, J.; Martín, F.; Vargas, D.; Rodríguez, F.J.; Matellán, V. Social Navigation in a Cognitive Architecture Using Dynamic Proxemic Zones. Sensors 2019, 19, 5189. [Google Scholar] [CrossRef]
  132. Pandey, A.K.; Alami, R. A framework towards a socially aware Mobile Robot motion in Human-Centered dynamic environment. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 5855–5860. [Google Scholar] [CrossRef]
  133. Dondrup, C.; Hanheide, M. Qualitative Constraints for Human-aware Robot Navigation using Velocity Costmaps. In Proceedings of the 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016; pp. 586–592. [Google Scholar] [CrossRef]
  134. Hirose, N.; Shah, D.; Sridhar, A.; Levine, S. SACSoN: Scalable Autonomous Control for Social Navigation. IEEE Robot. Autom. Lett. 2024, 9, 49–56. [Google Scholar] [CrossRef]
  135. Fox, D.; Burgard, W.; Thrun, S. The dynamic window approach to collision avoidance. IEEE Robot. Autom. Mag. 1997, 4, 23–33. [Google Scholar] [CrossRef]
136. Walters, M.; Dautenhahn, K.; te Boekhorst, R.; Koay, K.L.; Kaouri, C.; Woods, S.; Nehaniv, C.; Lee, D.; Werry, I. The influence of subjects’ personality traits on personal spatial zones in a human-robot interaction experiment. In Proceedings of the ROMAN 2005, IEEE International Workshop on Robot and Human Interactive Communication, Nashville, TN, USA, 13–15 August 2005; pp. 347–352. [Google Scholar] [CrossRef]
  137. Pacchierotti, E.; Christensen, H.I.; Jensfelt, P. Design of an Office-Guide Robot for Social Interaction Studies. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 4965–4970. [Google Scholar] [CrossRef]
  138. Marder-Eppstein, E.; Berger, E.; Foote, T.; Gerkey, B.; Konolige, K. The Office Marathon: Robust navigation in an indoor office environment. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 300–307. [Google Scholar] [CrossRef]
  139. Zhang, D.; Xie, Z.; Li, P.; Yu, J.; Chen, X. Real-time navigation in dynamic human environments using optimal reciprocal collision avoidance. In Proceedings of the 2015 IEEE International Conference on Mechatronics and Automation (ICMA), Beijing, China, 2–5 August 2015; pp. 2232–2237. [Google Scholar] [CrossRef]
  140. Linder, T.; Breuers, S.; Leibe, B.; Arras, K.O. On multi-modal people tracking from mobile platforms in very crowded and dynamic environments. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 5512–5519. [Google Scholar] [CrossRef]
  141. Singamaneni, P.T.; Favier, A.; Alami, R. Watch out! There may be a Human. Addressing Invisible Humans in Social Navigation. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 11344–11351. [Google Scholar] [CrossRef]
  142. Salek Shahrezaie, R.; Manalo, B.N.; Brantley, A.G.; Lynch, C.R.; Feil-Seifer, D. Advancing Socially-Aware Navigation for Public Spaces. In Proceedings of the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Napoli, Italy, 29 August–2 September 2022; pp. 1015–1022. [Google Scholar] [CrossRef]
  143. Martinez-Baselga, D.; Riazuelo, L.; Montano, L. Long-Range Navigation in Complex and Dynamic Environments with Full-Stack S-DOVS. Appl. Sci. 2023, 13, 8925. [Google Scholar] [CrossRef]
  144. Theodoridou, C.; Antonopoulos, D.; Kargakos, A.; Kostavelis, I.; Giakoumis, D.; Tzovaras, D. Robot Navigation in Human Populated Unknown Environments Based on Visual-Laser Sensor Fusion. In Proceedings of the 15th International Conference on PErvasive Technologies Related to Assistive Environments, Corfu, Greece, 29 June–1 July 2022; Association for Computing Machinery: New York, NY, USA, 2022. PETRA ’22. pp. 336–342. [Google Scholar] [CrossRef]
  145. Vasquez, D.; Stein, P.; Rios-Martinez, J.; Escobedo, A.; Spalanzani, A.; Laugier, C. Human Aware Navigation for Assistive Robotics. In Proceedings of the ISER—13th International Symposium on Experimental Robotics—2012, Québec, QC, Canada, 18–21 June 2012; Available online: www.springerlink.com (accessed on 20 March 2024).
146. Liang, J.; Patel, U.; Sathyamoorthy, A.J.; Manocha, D. Crowd-Steer: Realtime smooth and collision-free robot navigation in densely crowded scenarios trained using high-fidelity simulation. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, Yokohama, Japan, 7–15 January 2021; IJCAI’20. [Google Scholar]
  147. Xie, Z.; Dames, P. DRL-VO: Learning to Navigate Through Crowded Dynamic Scenes Using Velocity Obstacles. IEEE Trans. Robot. 2023, 39, 2700–2719. [Google Scholar] [CrossRef]
  148. Moravec, H.; Elfes, A. High resolution maps from wide angle sonar. In Proceedings of the 1985 IEEE International Conference on Robotics and Automation, St. Louis, MO, USA, 25–28 March 1985; Volume 2, pp. 116–121. [Google Scholar] [CrossRef]
  149. Ferguson, D.; Likhachev, M. Efficiently Using Cost Maps for Planning Complex Maneuvers; Lab Papers (GRASP): Philadelphia, PA, USA, 2008. [Google Scholar]
  150. Hornung, A.; Wurm, K.M.; Bennewitz, M.; Stachniss, C.; Burgard, W. OctoMap: An Efficient Probabilistic 3D Mapping Framework Based on Octrees. Auton. Robot. 2013, 34, 189–206. [Google Scholar] [CrossRef]
  151. Ferguson, D.; Stentz, A. Field D*: An Interpolation-Based Path Planner and Replanner. In Robotics Research; Thrun, S., Brooks, R., Durrant-Whyte, H., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 239–253. [Google Scholar]
  152. Gerkey, B.; Konolige, K. Planning and Control in Unstructured Terrain. In Proceedings of the ICRA Workshop on Path Planning on Costmaps, Pasadena, CA, USA, 19–23 May 2008. [Google Scholar]
  153. Rösmann, C.; Hoffmann, F.; Bertram, T. Integrated online trajectory planning and optimization in distinctive topologies. Robot. Auton. Syst. 2016, 88, 142–153. [Google Scholar] [CrossRef]
  154. Everett, M.; Chen, Y.F.; How, J.P. Motion Planning Among Dynamic, Decision-Making Agents with Deep Reinforcement Learning. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 3052–3059. [Google Scholar] [CrossRef]
  155. Patel, U.; Kumar, N.K.S.; Sathyamoorthy, A.J.; Manocha, D. DWA-RL: Dynamically Feasible Deep Reinforcement Learning Policy for Robot Navigation among Mobile Obstacles. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 6057–6063. [Google Scholar] [CrossRef]
  156. Ferrer, G.; Sanfeliu, A. Anticipative kinodynamic planning: Multi-objective robot navigation in urban and dynamic environments. Auton. Robot. 2019, 43, 1473–1488. [Google Scholar] [CrossRef]
  157. Repiso, E.; Garrell, A.; Sanfeliu, A. People’s Adaptive Side-by-Side Model Evolved to Accompany Groups of People by Social Robots. IEEE Robot. Autom. Lett. 2020, 5, 2387–2394. [Google Scholar] [CrossRef]
  158. Kivrak, H.; Cakmak, F.; Kose, H.; Yavuz, S. Social navigation framework for assistive robots in human inhabited unknown environments. Eng. Sci. Technol. Int. J. 2021, 24, 284–298. [Google Scholar] [CrossRef]
159. Singamaneni, P.T.; Favier, A.; Alami, R. Human-Aware Navigation Planner for Diverse Human-Robot Interaction Contexts. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 5817–5824. [Google Scholar] [CrossRef]
  160. Triebel, R.; Arras, K.; Alami, R.; Beyer, L.; Breuers, S.; Chatila, R.; Chetouani, M.; Cremers, D.; Evers, V.; Fiore, M.; et al. SPENCER: A Socially Aware Service Robot for Passenger Guidance and Help in Busy Airports. In Field and Service Robotics: Results of the 10th International Conference; Springer International Publishing: Cham, Switzerland, 2016; pp. 607–622. [Google Scholar] [CrossRef]
  161. Lu, D.V.; Hershberger, D.; Smart, W.D. Layered costmaps for context-sensitive navigation. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 709–715. [Google Scholar] [CrossRef]
  162. Arras, K.; Mozos, O.; Burgard, W. Using Boosted Features for the Detection of People in 2D Range Data. In Proceedings of the IEEE International Conference on Robotics and Automation, Rome, Italy, 10–14 April 2007; pp. 3402–3407. [Google Scholar] [CrossRef]
  163. Leigh, A.; Pineau, J.; Olmedo, N.; Zhang, H. Person tracking and following with 2D laser scanners. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 726–733. [Google Scholar] [CrossRef]
  164. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef]
  165. Wong, A.; Shafiee, M.J.; Li, F.; Chwyl, B. Tiny SSD: A Tiny Single-Shot Detection Deep Convolutional Neural Network for Real-Time Embedded Object Detection. In Proceedings of the 2018 15th Conference on Computer and Robot Vision (CRV), Toronto, ON, Canada, 8–10 May 2018; pp. 95–101. [Google Scholar]
  166. Cao, Z.; Hidalgo Martinez, G.; Simon, T.; Wei, S.; Sheikh, Y.A. OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 172–186. [Google Scholar] [CrossRef] [PubMed]
  167. Bozorgi, H.; Truong, X.T.; Ngo, T.D. Reliable, Robust, Accurate and Real-Time 2D LiDAR Human Tracking in Cluttered Environment: A Social Dynamic Filtering Approach. IEEE Robot. Autom. Lett. 2022, 7, 11689–11696. [Google Scholar] [CrossRef]
  168. Luber, M.; Arras, K.O. Multi-Hypothesis Social Grouping and Tracking for Mobile Robots. In Proceedings of the Robotics: Science and Systems (RSS’13), Berlin, Germany, 24–28 June 2013. [Google Scholar]
169. Juel, W.K.; Haarslev, F.; Krüger, N.; Bodenhagen, L. An Integrated Object Detection and Tracking Framework for Mobile Robots. In Proceedings of the 17th International Conference on Informatics in Control, Automation and Robotics (ICINCO), Paris, France, 7–9 July 2020; SciTePress: Setúbal, Portugal, 2020; pp. 513–520. [Google Scholar] [CrossRef]
  170. Settles, B. Active Learning Literature Survey; Computer Sciences Technical Report 1648; University of Wisconsin: Madison, WI, USA, 2009. [Google Scholar]
  171. Quigley, M.; Conley, K.; Gerkey, B.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A.Y. ROS: An open-source Robot Operating System. In Proceedings of the ICRA Workshop on Open Source Software, Kobe, Japan, 12–17 May 2009; Volume 3. [Google Scholar]
  172. Macenski, S.; Foote, T.; Gerkey, B.; Lalancette, C.; Woodall, W. Robot Operating System 2: Design, architecture, and uses in the wild. Sci. Robot. 2022, 7, eabm6074. [Google Scholar] [CrossRef] [PubMed]
  173. Trautman, P.; Krause, A. Unfreezing the robot: Navigation in dense, interacting crowds. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 797–803. [Google Scholar] [CrossRef]
  174. Li, M.; Jiang, R.; Ge, S.S.; Lee, T.H. Role playing learning for socially concomitant mobile robot navigation. CAAI Trans. Intell. Technol. 2018, 3, 49–58. [Google Scholar] [CrossRef]
  175. Chandra, R.; Maligi, R.; Anantula, A.; Biswas, J. SocialMapf: Optimal and Efficient Multi-Agent Path Finding With Strategic Agents for Social Navigation. IEEE Robot. Autom. Lett. 2023, 8, 3214–3221. [Google Scholar] [CrossRef]
176. Russell, S. Learning agents for uncertain environments (extended abstract). In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, New York, NY, USA, 24–26 July 1998; COLT ’98. pp. 101–103. [Google Scholar] [CrossRef]
  177. Bellman, R. A Markovian Decision Process. Indiana Univ. Math. J. 1957, 6, 679–684. [Google Scholar] [CrossRef]
  178. Henry, P.; Vollmer, C.; Ferris, B.; Fox, D. Learning to navigate through crowded environments. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 981–986. [Google Scholar] [CrossRef]
  179. Rhinehart, N.; Kitani, K.M. First-Person Activity Forecasting with Online Inverse Reinforcement Learning. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 3716–3725. [Google Scholar] [CrossRef]
  180. Vasquez, D.; Okal, B.; Arras, K.O. Inverse Reinforcement Learning algorithms and features for robot navigation in crowds: An experimental comparison. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 1341–1346. [Google Scholar] [CrossRef]
  181. Abbeel, P.; Ng, A.Y. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the Twenty-First International Conference on Machine Learning, New York, NY, USA, 26–27 June 2004; ICML ’04. p. 1. [Google Scholar] [CrossRef]
  182. Ziebart, B.D.; Maas, A.; Bagnell, J.A.; Dey, A.K. Maximum entropy inverse reinforcement learning. In Proceedings of the 23rd National Conference on Artificial Intelligence, Chicago, IL, USA, 13–17 July 2008; AAAI Press: Washington, DC, USA, 2008. AAAI’08. Volume 3, pp. 1433–1438. [Google Scholar]
  183. Kretzschmar, H.; Spies, M.; Sprunk, C.; Burgard, W. Socially compliant mobile robot navigation via inverse reinforcement learning. Int. J. Robot. Res. 2016, 35, 1289–1307. [Google Scholar] [CrossRef]
  184. Tai, L.; Zhang, J.; Liu, M.; Burgard, W. Socially Compliant Navigation Through Raw Depth Inputs with Generative Adversarial Imitation Learning. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 1111–1117. [Google Scholar] [CrossRef]
  185. Goldhammer, M.; Doll, K.; Brunsmann, U.; Gensler, A.; Sick, B. Pedestrian’s Trajectory Forecast in Public Traffic with Artificial Neural Networks. In Proceedings of the 2014 22nd International Conference on Pattern Recognition, Stockholm, Sweden, 24–28 August 2014; pp. 4110–4115. [Google Scholar] [CrossRef]
  186. Gao, J.; Yang, Z.; Nevatia, R. RED: Reinforced Encoder-Decoder Networks for Action Anticipation. In Proceedings of the British Machine Vision Conference (BMVC), London, UK, 4–7 September 2017. [Google Scholar] [CrossRef]
  187. Rudenko, A.; Palmieri, L.; Herman, M.; Kitani, K.M.; Gavrila, D.M.; Arras, K.O. Human motion trajectory prediction: A survey. Int. J. Robot. Res. 2020, 39, 895–935. [Google Scholar] [CrossRef]
  188. Alahi, A.; Goel, K.; Ramanathan, V.; Robicquet, A.; Fei-Fei, L.; Savarese, S. Social LSTM: Human Trajectory Prediction in Crowded Spaces. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 961–971. [Google Scholar] [CrossRef]
  189. Furnari, A.; Farinella, G. What Would You Expect? Anticipating Egocentric Actions With Rolling-Unrolling LSTMs and Modality Attention. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6251–6260. [Google Scholar] [CrossRef]
  190. Chen, Z.; Song, C.; Yang, Y.; Zhao, B.; Hu, Y.; Liu, S.; Zhang, J. Robot Navigation Based on Human Trajectory Prediction and Multiple Travel Modes. Appl. Sci. 2018, 8, 2205. [Google Scholar] [CrossRef]
  191. Vemula, A.; Muelling, K.; Oh, J. Social Attention: Modeling Attention in Human Crowds. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; IEEE Press: New York, NY, USA, 2018; pp. 1–7. [Google Scholar] [CrossRef]
  192. Farha, Y.; Richard, A.; Gall, J. When will you do what?—Anticipating Temporal Occurrences of Activities. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 5343–5352. [Google Scholar] [CrossRef]
193. Huang, J.; Hao, J.; Juan, R.; Gomez, R.; Nakamura, K.; Li, G. Model-based Adversarial Imitation Learning from Demonstrations and Human Reward. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 1–5 October 2023; pp. 1683–1690. [Google Scholar] [CrossRef]
194. Kanda, T.; Glas, D.F.; Shiomi, M.; Ishiguro, H.; Hagita, N. Who will be the customer? A social robot that anticipates people’s behavior from their trajectories. In Proceedings of the 10th International Conference on Ubiquitous Computing, Seoul, Republic of Korea, 21–24 September 2008; UbiComp ’08. pp. 380–389. [Google Scholar] [CrossRef]
  195. Xiao, S.; Wang, Z.; Folkesson, J. Unsupervised robot learning to predict person motion. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 691–696. [Google Scholar] [CrossRef]
  196. Zanlungo, F.; Ikeda, T.; Kanda, T. Social force model with explicit collision prediction. EPL Europhys. Lett. 2011, 93, 68005. [Google Scholar] [CrossRef]
  197. Luber, M.; Stork, J.A.; Tipaldi, G.D.; Arras, K.O. People tracking with human motion predictions from social forces. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 464–469. [Google Scholar] [CrossRef]
  198. Yue, J.; Manocha, D.; Wang, H. Human Trajectory Prediction via Neural Social Physics. In Proceedings of the Computer Vision—ECCV 2022: 17th European Conference, Tel Aviv, Israel, 23–27 October 2022; Proceedings, Part XXXIV. Springer: Berlin/Heidelberg, Germany, 2022; pp. 376–394. [Google Scholar] [CrossRef]
  199. Gil, O.; Sanfeliu, A. Human motion trajectory prediction using the Social Force Model for real-time and low computational cost applications. In Proceedings of the 6th Iberian Robotics Conference, Coimbra, Portugal, 22–24 November 2023; pp. 1–12. [Google Scholar]
  200. Elnagar, A. Prediction of moving objects in dynamic environments using Kalman filters. In Proceedings of the 2001 IEEE International Symposium on Computational Intelligence in Robotics and Automation (Cat. No.01EX515), Banff, AB, Canada, 29 July–1 August 2001; pp. 414–419. [Google Scholar] [CrossRef]
  201. Lin, C.Y.; Kau, L.J.; Chan, C.Y. Bimodal Extended Kalman Filter-Based Pedestrian Trajectory Prediction. Sensors 2022, 22, 8231. [Google Scholar] [CrossRef]
  202. Kim, S.; Guy, S.J.; Liu, W.; Wilkie, D.; Lau, R.W.; Lin, M.C.; Manocha, D. BRVO: Predicting pedestrian trajectories using velocity-space reasoning. Int. J. Robot. Res. 2015, 34, 201–217. [Google Scholar] [CrossRef]
  203. Hsu, D.; Kindel, R.; Latombe, J.C.; Rock, S. Randomized Kinodynamic Motion Planning with Moving Obstacles. Int. J. Robot. Res. 2002, 21, 233–255. [Google Scholar] [CrossRef]
  204. Sakahara, H.; Masutani, Y.; Miyazaki, F. Safe Navigation in Unknown Dynamic Environments with Voronoi Based StRRT. In Proceedings of the 2008 IEEE/SICE International Symposium on System Integration, Nagoya, Japan, 4 December 2008; pp. 60–65. [Google Scholar] [CrossRef]
  205. Nishitani, I.; Matsumura, T.; Ozawa, M.; Yorozu, A.; Takahashi, M. Human-centered X-Y-T space path planning for mobile robot in dynamic environments. Robot. Auton. Syst. 2015, 66, 18–26. [Google Scholar] [CrossRef]
  206. Kollmitz, M.; Hsiao, K.; Gaa, J.; Burgard, W. Time dependent planning on a layered social cost map for human-aware robot navigation. In Proceedings of the 2015 European Conference on Mobile Robots (ECMR), Lincoln, UK, 2–4 September 2015; pp. 1–6. [Google Scholar] [CrossRef]
  207. Khambhaita, H.; Alami, R. A Human-Robot Cooperative Navigation Planner. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6–9 March 2017; HRI ’17. pp. 161–162. [Google Scholar] [CrossRef]
  208. Singamaneni, P.T.; Alami, R. HATEB-2: Reactive Planning and Decision making in Human-Robot Co-navigation. In Proceedings of the 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 31 August–4 September 2020; pp. 179–186. [Google Scholar] [CrossRef]
  209. Schöller, C.; Aravantinos, V.; Lay, F.; Knoll, A. What the Constant Velocity Model Can Teach Us About Pedestrian Motion Prediction. IEEE Robot. Autom. Lett. 2020, 5, 1696–1703. [Google Scholar] [CrossRef]
  210. Weinrich, C.; Volkhardt, M.; Einhorn, E.; Gross, H.M. Prediction of human collision avoidance behavior by lifelong learning for socially compliant robot navigation. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 376–381. [Google Scholar] [CrossRef]
  211. Trautman, P.; Ma, J.; Murray, R.M.; Krause, A. Robot navigation in dense human crowds: The case for cooperation. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 2153–2160. [Google Scholar] [CrossRef]
  212. Oli, S.; L’Esperance, B.; Gupta, K. Human Motion Behaviour Aware Planner (HMBAP) for path planning in dynamic human environments. In Proceedings of the 2013 16th International Conference on Advanced Robotics (ICAR), Montevideo, Uruguay, 25–29 November 2013; pp. 1–7. [Google Scholar] [CrossRef]
  213. Ferrer, G.; Sanfeliu, A. Bayesian Human Motion Intentionality Prediction in urban environments. Pattern Recognit. Lett. 2014, 44, 134–140. [Google Scholar] [CrossRef]
  214. Schaefer, K.E.; Oh, J.; Aksaray, D.; Barber, D. Integrating Context into Artificial Intelligence: Research from the Robotics Collaborative Technology Alliance. AI Mag. 2019, 40, 28–40. [Google Scholar] [CrossRef]
  215. Bera, A.; Kim, S.; Randhavane, T.; Pratapa, S.; Manocha, D. GLMP- realtime pedestrian path prediction using global and local movement patterns. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 5528–5535. [Google Scholar] [CrossRef]
  216. Lim, V.; Rooksby, M.; Cross, E.S. Social Robots on a Global Stage: Establishing a Role for Culture During Human–Robot Interaction. Int. J. Soc. Robot. 2021, 13, 1307–1333. [Google Scholar] [CrossRef]
  217. Recchiuto, C.; Sgorbissa, A. Diversity-aware social robots meet people: Beyond context-aware embodied AI. arXiv 2022, arXiv:2207.05372. [Google Scholar]
  218. Bustos, P.; Manso, L.; Bandera, A.; Bandera, J.; García-Varea, I.; Martínez-Gómez, J. The CORTEX cognitive robotics architecture: Use cases. Cogn. Syst. Res. 2019, 55, 107–123. [Google Scholar] [CrossRef]
  219. Martín, F.; Rodríguez Lera, F.J.; Ginés, J.; Matellán, V. Evolution of a Cognitive Architecture for Social Robots: Integrating Behaviors and Symbolic Knowledge. Appl. Sci. 2020, 10, 6067. [Google Scholar] [CrossRef]
  220. Banisetty, S.B.; Forer, S.; Yliniemi, L.; Nicolescu, M.; Feil-Seifer, D. Socially Aware Navigation: A Non-linear Multi-objective Optimization Approach. ACM Trans. Interact. Intell. Syst. 2021, 11, 1–26. [Google Scholar] [CrossRef]
  221. Salek Shahrezaie, R.; Banisetty, S.B.; Mohammadi, M.; Feil-Seifer, D. Towards Deep Reasoning on Social Rules for Socially Aware Navigation. In Proceedings of the Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 8–11 March 2021; HRI ’21 Companion. pp. 515–518. [Google Scholar] [CrossRef]
  222. Jia, Y.; Ramalingam, B.; Mohan, R.E.; Yang, Z.; Zeng, Z.; Veerajagadheswar, P. Deep-Learning-Based Context-Aware Multi-Level Information Fusion Systems for Indoor Mobile Robots Safe Navigation. Sensors 2023, 23, 2337. [Google Scholar] [CrossRef]
  223. Vega, A.; Manso, L.J.; Macharet, D.G.; Bustos, P.; Núñez, P. Socially aware robot navigation system in human-populated and interactive environments based on an adaptive spatial density function and space affordances. Pattern Recognit. Lett. 2019, 118, 72–84. [Google Scholar] [CrossRef]
  224. Kostavelis, I.; Gasteratos, A. Semantic mapping for mobile robotics tasks: A survey. Robot. Auton. Syst. 2015, 66, 86–103. [Google Scholar] [CrossRef]
  225. Crespo, J.; Castillo, J.C.; Mozos, O.M.; Barber, R. Semantic Information for Robot Navigation: A Survey. Appl. Sci. 2020, 10, 497. [Google Scholar] [CrossRef]
  226. Alqobali, R.; Alshmrani, M.; Alnasser, R.; Rashidi, A.; Alhmiedat, T.; Alia, O.M. A Survey on Robot Semantic Navigation Systems for Indoor Environments. Appl. Sci. 2024, 14, 89. [Google Scholar] [CrossRef]
  227. Zhang, J.; Wang, W.; Qi, X.; Liao, Z. Social and Robust Navigation for Indoor Robots Based on Object Semantic Grid and Topological Map. Appl. Sci. 2020, 10, 8991. [Google Scholar] [CrossRef]
  228. Núñez, P.; Manso, L.; Bustos, P.; Drews, P., Jr.; Macharet, D. Towards a new Semantic Social Navigation Paradigm for Autonomous Robots using CORTEX. In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016)—BAILAR2016 Workshop, New York, NY, USA, 26–31 August 2016. [Google Scholar] [CrossRef]
  229. Cosgun, A.; Christensen, H.I. Context-aware robot navigation using interactively built semantic maps. Paladyn J. Behav. Robot. 2018, 9, 254–276. [Google Scholar] [CrossRef]
  230. Li, J.; Wong, Y.; Zhao, Q.; Kankanhalli, M.S. Visual Social Relationship Recognition. Int. J. Comput. Vis. 2020, 128, 1750–1764. [Google Scholar] [CrossRef]
231. Patompak, P.; Jeong, S.; Nilkhamhang, I.; Chong, N.Y. Learning social relations for culture aware interaction. In Proceedings of the 2017 14th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Jeju, Republic of Korea, 28 June–1 July 2017; pp. 26–31. [Google Scholar] [CrossRef]
  232. Okal, B.; Arras, K.O. Learning socially normative robot navigation behaviors with Bayesian inverse reinforcement learning. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 2889–2895. [Google Scholar] [CrossRef]
233. Haarslev, F.; Juel, W.K.; Kollakidou, A.; Krüger, N.; Bodenhagen, L. Context-aware Social Robot Navigation. In Proceedings of the 18th International Conference on Informatics in Control, Automation and Robotics (ICINCO), Setúbal, Portugal, 25–28 August 2021; SciTePress: Setúbal, Portugal, 2021; pp. 426–433. [Google Scholar] [CrossRef]
  234. Schwörer, T.; Schmidt, J.E.; Chrysostomou, D. Nav2CAN: Achieving Context Aware Navigation in ROS2 Using Nav2 and RGB-D sensing. In Proceedings of the 2023 IEEE International Conference on Imaging Systems and Techniques (IST), Copenhagen, Denmark, 17–19 October 2023; pp. 1–6. [Google Scholar] [CrossRef]
  235. Amaoka, T.; Laga, H.; Nakajima, M. Modeling the Personal Space of Virtual Agents for Behavior Simulation. In Proceedings of the 2009 International Conference on CyberWorlds, Bradford, UK, 7–11 September 2009; pp. 364–370. [Google Scholar] [CrossRef]
  236. Flandorfer, P. Population Ageing and Socially Assistive Robots for Elderly Persons: The Importance of Sociodemographic Factors for User Acceptance. Int. J. Popul. Res. 2012, 2012, 829835. [Google Scholar] [CrossRef]
237. Strait, M.; Briggs, P.; Scheutz, M. Gender, more so than Age, Modulates Positive Perceptions of Language-Based Human-Robot Interaction. In Proceedings of the 4th International Symposium on New Frontiers in Human-Robot Interaction, AISB, Canterbury, UK, 21–22 April 2015. [Google Scholar]
  238. Nomura, T.; Kanda, T.; Suzuki, T.; Kato, K. Age differences and images of robots. Interact. Stud. 2009, 10, 374–391. [Google Scholar] [CrossRef]
  239. Robert, L. Personality in the Human Robot Interaction Literature: A Review and Brief Critique. In Proceedings of the 24th Americas Conference on Information Systems, New Orleans, LA, USA, 16–18 August 2018. [Google Scholar]
  240. Hurtado, J.V.; Londoño, L.; Valada, A. From Learning to Relearning: A Framework for Diminishing Bias in Social Robot Navigation. Front. Robot. AI 2021, 8, 650325. [Google Scholar] [CrossRef] [PubMed]
  241. Chen, L.; Wu, M.; Zhou, M.; She, J.; Dong, F.; Hirota, K. Information-Driven Multirobot Behavior Adaptation to Emotional Intention in Human–Robot Interaction. IEEE Trans. Cogn. Dev. Syst. 2018, 10, 647–658. [Google Scholar] [CrossRef]
  242. Bera, A.; Randhavane, T.; Manocha, D. The Emotionally Intelligent Robot: Improving Socially-aware Human Prediction in Crowded Environments. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
  243. Nanavati, A.; Tan, X.Z.; Connolly, J.; Steinfeld, A. Follow The Robot: Modeling Coupled Human-Robot Dyads During Navigation. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 3836–3843. [Google Scholar] [CrossRef]
  244. Ginés Clavero, J.; Martín Rico, F.; Rodríguez-Lera, F.J.; Guerrero Hernández, J.M.; Matellán Olivera, V. Defining Adaptive Proxemic Zones for Activity-Aware Navigation. In Advances in Physical Agents II; Bergasa, L.M., Ocaña, M., Barea, R., López-Guillén, E., Revenga, P., Eds.; Springer: Cham, Switzerland, 2021; pp. 3–17. [Google Scholar]
  245. Repiso, E.; Garrell, A.; Sanfeliu, A. Adaptive Side-by-Side Social Robot Navigation to Approach and Interact with People. Int. J. Soc. Robot. 2020, 12, 909–930. [Google Scholar] [CrossRef]
  246. Repiso, E.; Zanlungo, F.; Kanda, T.; Garrell, A.; Sanfeliu, A. People’s V-Formation and Side-by-Side Model Adapted to Accompany Groups of People by Social Robots. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 2082–2088. [Google Scholar] [CrossRef]
  247. Honour, A.; Banisetty, S.B.; Feil-Seifer, D. Perceived Social Intelligence as Evaluation of Socially Navigation. In Proceedings of the Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 8–11 March 2021; HRI ’21 Companion. pp. 519–523. [Google Scholar] [CrossRef]
  248. Moore, D.C.; Huang, A.S.; Walter, M.; Olson, E.; Fletcher, L.; Leonard, J.; Teller, S. Simultaneous local and global state estimation for robotic navigation. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 3794–3799. [Google Scholar] [CrossRef]
  249. Macenski, S.; Booker, M.; Wallace, J. Open-Source, Cost-Aware Kinematically Feasible Planning for Mobile and Surface Robotics. arXiv 2024, arXiv:2401.13078. [Google Scholar]
  250. Sánchez-Ibáñez, J.R.; Pérez-del Pulgar, C.J.; García-Cerezo, A. Path Planning for Autonomous Mobile Robots: A Review. Sensors 2021, 21, 7898. [Google Scholar] [CrossRef]
  251. Liu, L.; Wang, X.; Yang, X.; Liu, H.; Li, J.; Wang, P. Path planning techniques for mobile robots: Review and prospect. Expert Syst. Appl. 2023, 227, 120254. [Google Scholar] [CrossRef]
  252. Qin, H.; Shao, S.; Wang, T.; Yu, X.; Jiang, Y.; Cao, Z. Review of Autonomous Path Planning Algorithms for Mobile Robots. Drones 2023, 7, 211. [Google Scholar] [CrossRef]
  253. Karur, K.; Sharma, N.; Dharmatti, C.; Siegel, J.E. A Survey of Path Planning Algorithms for Mobile Robots. Vehicles 2021, 3, 448–468. [Google Scholar] [CrossRef]
  254. Yang, L.; Li, P.; Qian, S.; Quan, H.; Miao, J.; Liu, M.; Hu, Y.; Memetimin, E. Path Planning Technique for Mobile Robots: A Review. Machines 2023, 11, 980. [Google Scholar] [CrossRef]
  255. Bianchi, L.; Dorigo, M.; Gambardella, L.M.; Gutjahr, W.J. A survey on metaheuristics for stochastic combinatorial optimization. Nat. Comput. 2009, 8, 239–287. [Google Scholar] [CrossRef]
  256. Latombe, J.C. Robot Motion Planning; Springer Inc.: New York, NY, USA, 1991. [Google Scholar]
  257. Dijkstra, E.W. A Note on Two Problems in Connexion with Graphs. Numer. Math. 1959, 1, 269–271. [Google Scholar] [CrossRef]
  258. Hart, P.E.; Nilsson, N.J.; Raphael, B. A Formal Basis for the Heuristic Determination of Minimum Cost Paths. IEEE Trans. Syst. Sci. Cybern. 1968, 4, 100–107. [Google Scholar] [CrossRef]
  259. Stentz, A. Optimal and Efficient Path Planning for Unknown and Dynamic Environments; Tech. Rep. CMU-RI-TR-93-20; Robotics Institute, Carnegie Mellon University: Pittsburgh, PA, USA, 1993. [Google Scholar]
  260. Stentz, A. The focussed D* algorithm for real-time replanning. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, Montreal, QC, Canada, 20–25 August 1995; IJCAI’95. Volume 2, pp. 1652–1659. [Google Scholar]
  261. Koenig, S.; Likhachev, M.; Furcy, D. Lifelong Planning A*. Artif. Intell. 2004, 155, 93–146. [Google Scholar] [CrossRef]
  262. Koenig, S.; Likhachev, M. Fast replanning for navigation in unknown terrain. IEEE Trans. Robot. 2005, 21, 354–363. [Google Scholar] [CrossRef]
  263. Philippsen, R.; Siegwart, R. An Interpolated Dynamic Navigation Function. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; pp. 3782–3789. [Google Scholar] [CrossRef]
264. Daniel, K.; Nash, A.; Koenig, S.; Felner, A. Theta*: Any-Angle Path Planning on Grids. J. Artif. Intell. Res. (JAIR) 2010, 39, 533–579. [Google Scholar] [CrossRef]
  265. Dolgov, D.; Thrun, S.; Montemerlo, M.; Diebel, J. Path Planning for Autonomous Vehicles in Unknown Semi-structured Environments. Int. J. Robot. Res. 2010, 29, 485–501. [Google Scholar] [CrossRef]
  266. Sisbot, E.A.; Marin-Urias, L.F.; Alami, R.; Simeon, T. A Human Aware Mobile Robot Motion Planner. IEEE Trans. Robot. 2007, 23, 874–883. [Google Scholar] [CrossRef]
  267. Truong, X.T.; Ngo, T.D. “To Approach Humans?”: A Unified Framework for Approaching Pose Prediction and Socially Aware Robot Navigation. IEEE Trans. Cogn. Dev. Syst. 2018, 10, 557–572. [Google Scholar] [CrossRef]
  268. Vega-Magro, A.; Calderita, L.V.; Bustos, P.; Núñez, P. Human-aware Robot Navigation based on Time-dependent Social Interaction Spaces: A use case for assistive robotics. In Proceedings of the 2020 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Ponta Delgada, Portugal, 15–17 April 2020; pp. 140–145. [Google Scholar] [CrossRef]
  269. Melo, F.; Moreno, P. Socially Reactive Navigation Models for Mobile Robots. In Proceedings of the 2022 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Santa Maria da Feira, Portugal, 29–30 April 2022; pp. 91–97. [Google Scholar] [CrossRef]
  270. Siegwart, R.; Nourbakhsh, I.R.; Scaramuzza, D. Introduction to Autonomous Mobile Robots, 2nd ed.; The MIT Press: Cambridge, MA, USA, 2011. [Google Scholar]
  271. Borenstein, J.; Koren, Y. High-speed obstacle avoidance for mobile robots. In Proceedings of the IEEE International Symposium on Intelligent Control 1988, Arlington, VA, USA, 24–26 August 1988; pp. 382–384. [Google Scholar] [CrossRef]
  272. Khatib, M.; Chatila, R. An Extended Potential Field Approach for Mobile Robot Sensor-Based Motions. In Proceedings of the Intelligent Autonomous Systems IAS-4, Karlsruhe, Germany, 27–30 March 1995; IOS Press: Amsterdam, The Netherlands, 1995; pp. 490–496. [Google Scholar]
  273. Iizuka, S.; Nakamura, T.; Suzuki, S. Robot Navigation in dynamic environment for an indoor human monitoring. In Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, UK, 25–29 August 2014; pp. 698–703. [Google Scholar] [CrossRef]
  274. Weerakoon, T.; Ishii, K.; Nassiraei, A.A.F. An Artificial Potential Field Based Mobile Robot Navigation Method To Prevent From Deadlock. J. Artif. Intell. Soft Comput. Res. 2015, 5, 189–203. [Google Scholar] [CrossRef]
  275. Azzabi, A.; Nouri, K. An advanced potential field method proposed for mobile robot path planning. Trans. Inst. Meas. Control 2019, 41, 3132–3144. [Google Scholar] [CrossRef]
  276. Szczepanski, R. Safe Artificial Potential Field—Novel Local Path Planning Algorithm Maintaining Safe Distance From Obstacles. IEEE Robot. Autom. Lett. 2023, 8, 4823–4830. [Google Scholar] [CrossRef]
  277. Garrido, S.; Moreno, L.; Abderrahim, M.; Martin, F. Path Planning for Mobile Robot Navigation using Voronoi Diagram and Fast Marching. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 2376–2381. [Google Scholar] [CrossRef]
  278. Friedman, S.; Pasula, H.; Fox, D. Voronoi random fields: Extracting the topological structure of indoor environments via place labeling. In Proceedings of the 20th International Joint Conference on Artifical Intelligence, Hyderabad, India, 6–12 January 2007; IJCAI’07. pp. 2109–2114. [Google Scholar]
  279. Lu, M.C.; Hsu, C.C.; Chen, Y.J.; Li, S.A. Hybrid Path Planning Incorporating Global and Local Search for Mobile Robot. In Advances in Autonomous Robotics; Herrmann, G., Studley, M., Pearson, M., Conn, A., Melhuish, C., Witkowski, M., Kim, J.H., Vadakkepat, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 441–443. [Google Scholar]
  280. LaValle, S.M. Planning Algorithms; Cambridge University Press: Cambridge, UK, 2006. [Google Scholar]
  281. Kavraki, L.; Svestka, P.; Latombe, J.C.; Overmars, M. Probabilistic roadmaps for path planning in high-dimensional configuration spaces. IEEE Trans. Robot. Autom. 1996, 12, 566–580. [Google Scholar] [CrossRef]
  282. LaValle, S.M.; Kuffner, J.J. Rapidly-Exploring Random Trees: Progress and Prospects. In Algorithmic and Computational Robotics: New Directions; AK Peters/CRC Press: Natick, MA, USA, 2001; pp. 293–308. [Google Scholar]
  283. Kuffner, J.; LaValle, S. RRT-connect: An efficient approach to single-query path planning. In Proceedings of the 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065), San Francisco, CA, USA, 24–28 April 2000; Volume 2, pp. 995–1001. [Google Scholar] [CrossRef]
  284. Karaman, S.; Frazzoli, E. Sampling-based algorithms for optimal motion planning. Int. J. Robot. Res. 2011, 30, 846–894. [Google Scholar] [CrossRef]
285. Moon, C.-b.; Chung, W. Kinodynamic Planner Dual-Tree RRT (DT-RRT) for Two-Wheeled Mobile Robots Using the Rapidly Exploring Random Tree. IEEE Trans. Ind. Electron. 2015, 62, 1080–1090. [Google Scholar] [CrossRef]
  286. Svenstrup, M.; Bak, T.; Andersen, H.J. Trajectory planning for robots in dynamic human environments. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 4293–4298. [Google Scholar] [CrossRef]
  287. Rios-Martinez, J.; Spalanzani, A.; Laugier, C. Understanding human interaction for probabilistic autonomous navigation using Risk-RRT approach. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 2014–2019. [Google Scholar] [CrossRef]
  288. Shrestha, M.C.; Nohisa, Y.; Schmitz, A.; Hayakawa, S.; Uno, E.; Yokoyama, Y.; Yanagawa, H.; Or, K.; Sugano, S. Using contact-based inducement for efficient navigation in a congested environment. In Proceedings of the 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Kobe, Japan, 31 August–4 September 2015; pp. 456–461. [Google Scholar] [CrossRef]
289. Olson, E.; Leonard, J.; Teller, S. Fast iterative alignment of pose graphs with poor initial estimates. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA 2006), Orlando, FL, USA, 15–19 May 2006; pp. 2262–2269. [Google Scholar] [CrossRef]
  290. Pérez-Higueras, N.; Caballero, F.; Merino, L. Teaching Robot Navigation Behaviors to Optimal RRT Planners. Int. J. Soc. Robot. 2018, 10, 235–249. [Google Scholar] [CrossRef]
  291. Pérez-Higueras, N.; Ramón-Vigo, R.; Caballero, F.; Merino, L. Robot local navigation with learned social cost functions. In Proceedings of the 2014 11th International Conference on Informatics in Control, Automation and Robotics (ICINCO), Vienna, Austria, 2–4 September 2014; Volume 02, pp. 618–625. [Google Scholar] [CrossRef]
  292. Lakhmissi, C.; Boumehraz, M. Fuzzy logic and reinforcement learning based approaches for mobile robot navigation in unknown environment. Mediterr. J. Meas. Control 2013, 9, 109–117. [Google Scholar]
  293. Pandey, A.; Sonkar, R.K.; Pandey, K.K.; Parhi, D.R. Path planning navigation of mobile robot with obstacles avoidance using fuzzy logic controller. In Proceedings of the 2014 IEEE 8th International Conference on Intelligent Systems and Control (ISCO), Coimbatore, India, 10–11 January 2014; pp. 39–41. [Google Scholar] [CrossRef]
  294. Omrane, H.; Masmoudi, M.S.; Masmoudi, M. Fuzzy Logic Based Control for Autonomous Mobile Robot Navigation. Comput. Intell. Neurosci. 2016, 2016, 9548482. [Google Scholar] [CrossRef] [PubMed]
295. Zeinalova, L.M.; Jafarov, B.O. Mobile Robot Navigation with Preference-Based Fuzzy Behaviors. In Proceedings of the 11th International Conference on Theory and Application of Soft Computing, Computing with Words and Perceptions and Artificial Intelligence—ICSCCW-2021, Antalya, Turkey, 26–27 August 2021; Aliev, R.A., Kacprzyk, J., Pedrycz, W., Jamshidi, M., Babanli, M., Sadikoglu, F.M., Eds.; Springer: Cham, Switzerland, 2022; pp. 774–782. [Google Scholar]
  296. Vásconez, J.P.; Calderón-Díaz, M.; Briceño, I.C.; Pantoja, J.M.; Cruz, P.J. A Behavior-Based Fuzzy Control System for Mobile Robot Navigation: Design and Assessment. In Advanced Research in Technologies, Information, Innovation and Sustainability; Guarda, T., Portela, F., Diaz-Nafria, J.M., Eds.; Springer: Cham, Switzerland, 2024; pp. 412–426. [Google Scholar]
297. Palm, R.; Chadalavada, R.; Lilienthal, A.J. Fuzzy Modeling and Control for Intention Recognition in Human-Robot Systems. In Proceedings of the 8th International Joint Conference on Computational Intelligence (IJCCI 2016)—FCTA, Porto, Portugal, 9–11 November 2016; SciTePress: Setúbal, Portugal, 2016; pp. 67–74. [Google Scholar] [CrossRef]
  298. Obo, T.; Yasuda, E. Intelligent Fuzzy Controller for Human-Aware Robot Navigation. In Proceedings of the 2018 12th France-Japan and 10th Europe-Asia Congress on Mechatronics, Tsu, Japan, 10–12 September 2018; pp. 392–397. [Google Scholar] [CrossRef]
  299. Rifqi, A.T.; Dewantara, B.S.B.; Pramadihanto, D.; Marta, B.S. Fuzzy Social Force Model for Healthcare Robot Navigation and Obstacle Avoidance. In Proceedings of the 2021 International Electronics Symposium (IES), Surabaya, Indonesia, 29–30 September 2021; pp. 445–450. [Google Scholar] [CrossRef]
  300. Sampathkumar, S.K.; Choi, D.; Kim, D. Fuzzy inference system-assisted human-aware navigation framework based on enhanced potential field. Complex Eng. Syst. 2024, 4, 3. [Google Scholar] [CrossRef]
  301. Glorennec, P.; Jouffe, L. Fuzzy Q-learning. In Proceedings of the 6th International Fuzzy Systems Conference, Barcelona, Spain, 5 July 1997; Volume 2, pp. 659–662. [Google Scholar] [CrossRef]
302. Duan, Y.; Xu, X.-H. Fuzzy reinforcement learning and its application in robot navigation. In Proceedings of the 2005 International Conference on Machine Learning and Cybernetics, Guangzhou, China, 18–21 August 2005; Volume 2, pp. 899–904. [Google Scholar] [CrossRef]
  303. Quinlan, S.; Khatib, O. Elastic bands: Connecting path planning and control. In Proceedings of the IEEE International Conference on Robotics and Automation, Atlanta, GA, USA, 2–6 May 1993; Volume 2, pp. 802–807. [Google Scholar] [CrossRef]
  304. Brock, O.; Khatib, O. Elastic Strips: A Framework for Motion Generation in Human Environments. Int. J. Robot. Res. 2002, 21, 1031–1052. [Google Scholar] [CrossRef]
  305. Hoogendoorn, S.; Kessels, F.; Daamen, W.; Duives, D. Continuum modelling of pedestrian flows: From microscopic principles to self-organised macroscopic phenomena. Phys. A Stat. Mech. Its Appl. 2014, 416, 684–694. [Google Scholar] [CrossRef]
  306. Liu, B.; Liu, H.; Zhang, H.; Qin, X. A social force evacuation model driven by video data. Simul. Model. Pract. Theory 2018, 84, 190–203. [Google Scholar] [CrossRef]
  307. Truong, X.T.; Ngo, T.D. Toward Socially Aware Robot Navigation in Dynamic and Crowded Environments: A Proactive Social Motion Model. IEEE Trans. Autom. Sci. Eng. 2017, 14, 1743–1760. [Google Scholar] [CrossRef]
  308. Ferrer, G.; Zulueta, A.; Cotarelo, F.; Sanfeliu, A. Robot social-aware navigation framework to accompany people walking side-by-side. Auton. Robot. 2017, 41, 775–793. [Google Scholar] [CrossRef]
  309. Karamouzas, I.; Heil, P.; van Beek, P.; Overmars, M.H. A Predictive Collision Avoidance Model for Pedestrian Simulation. In Motion in Games; Egges, A., Geraerts, R., Overmars, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 41–52. [Google Scholar]
  310. Jiang, Y.Q.; Chen, B.K.; Wang, B.H.; Wong, W.F.; Cao, B.Y. Extended social force model with a dynamic navigation field for bidirectional pedestrian flow. Front. Phys. 2017, 12, 124502. [Google Scholar] [CrossRef]
  311. Huang, L.; Gong, J.; Li, W.; Xu, T.; Shen, S.; Liang, J.; Feng, Q.; Zhang, D.; Sun, J. Social Force Model-Based Group Behavior Simulation in Virtual Geographic Environments. ISPRS Int. J. Geo-Inf. 2018, 7, 79. [Google Scholar] [CrossRef]
  312. Sochman, J.; Hogg, D.C. Who knows who—Inverting the Social Force Model for finding groups. In Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain, 6–13 November 2011; pp. 830–837. [Google Scholar] [CrossRef]
  313. Farina, F.; Fontanelli, D.; Garulli, A.; Giannitrapani, A.; Prattichizzo, D. Walking Ahead: The Headed Social Force Model. PLoS ONE 2017, 12, e0169734. [Google Scholar] [CrossRef] [PubMed]
  314. Wu, W.; Chen, M.; Li, J.; Liu, B.; Zheng, X. An Extended Social Force Model via Pedestrian Heterogeneity Affecting the Self-Driven Force. IEEE Trans. Intell. Transp. Syst. 2022, 23, 7974–7986. [Google Scholar] [CrossRef]
  315. Gil, O.; Garrell, A.; Sanfeliu, A. Social Robot Navigation Tasks: Combining Machine Learning Techniques and Social Force Model. Sensors 2021, 21, 7087. [Google Scholar] [CrossRef]
  316. Fiorini, P.; Shiller, Z. Motion Planning in Dynamic Environments Using Velocity Obstacles. Int. J. Robot. Res. 1998, 17, 760–772. [Google Scholar] [CrossRef]
  317. Daza, M.; Barrios-Aranibar, D.; Diaz-Amado, J.; Cardinale, Y.; Vilasboas, J. An Approach of Social Navigation Based on Proxemics for Crowded Environments of Humans and Robots. Micromachines 2021, 12, 193. [Google Scholar] [CrossRef]
  318. Lin, M.C.; Sud, A.; Van den Berg, J.; Gayle, R.; Curtis, S.; Yeh, H.; Guy, S.; Andersen, E.; Patil, S.; Sewall, J.; et al. Real-Time Path Planning and Navigation for Multi-agent and Crowd Simulations. In Motion in Games; Egges, A., Kamphuis, A., Overmars, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 23–32. [Google Scholar]
  319. van den Berg, J.; Lin, M.; Manocha, D. Reciprocal Velocity Obstacles for Real-Time Multi-agent Navigation. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008; pp. 1928–1935. [Google Scholar] [CrossRef]
  320. Olivier, A.H.; Marin, A.; Crétual, A.; Berthoz, A.; Pettré, J. Collision avoidance between two walkers: Role-dependent strategies. Gait Posture 2013, 38, 751–756. [Google Scholar] [CrossRef] [PubMed]
  321. van den Berg, J.; Guy, S.J.; Lin, M.; Manocha, D. Reciprocal n-Body Collision Avoidance. In Robotics Research; Pradalier, C., Siegwart, R., Hirzinger, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 3–19. [Google Scholar]
  322. Matsuzaki, S.; Aonuma, S.; Hasegawa, Y. Dynamic Window Approach with Human Imitating Collision Avoidance. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 8180–8186. [Google Scholar] [CrossRef]
  323. Kobayashi, M.; Zushi, H.; Nakamura, T.; Motoi, N. Local Path Planning: Dynamic Window Approach With Q-Learning Considering Congestion Environments for Mobile Robot. IEEE Access 2023, 11, 96733–96742. [Google Scholar] [CrossRef]
  324. Seder, M.; Petrovic, I. Dynamic window based approach to mobile robot motion control in the presence of moving obstacles. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Rome, Italy, 10–14 April 2007; pp. 1986–1991. [Google Scholar] [CrossRef]
  325. Sebastian, M.; Banisetty, S.B.; Feil-Seifer, D. Socially-aware navigation planner using models of human-human interaction. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August–1 September 2017; pp. 405–410. [Google Scholar] [CrossRef]
  326. Hoang, V.B.; Nguyen, V.H.; Ngo, T.D.; Truong, X.T. Socially Aware Robot Navigation Framework: Where and How to Approach People in Dynamic Social Environments. IEEE Trans. Autom. Sci. Eng. 2023, 20, 1322–1336. [Google Scholar] [CrossRef]
  327. Forer, S.; Banisetty, S.B.; Yliniemi, L.; Nicolescu, M.; Feil-Seifer, D. Socially-Aware Navigation Using Non-Linear Multi-Objective Optimization. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1–9. [Google Scholar] [CrossRef]
  328. Mavrogiannis, C.; Alves-Oliveira, P.; Thomason, W.; Knepper, R.A. Social Momentum: Design and Evaluation of a Framework for Socially Competent Robot Navigation. J. Hum. Robot Interact. 2022, 11, 1–37. [Google Scholar] [CrossRef]
329. Mehta, D.; Ferrer, G.; Olson, E. Autonomous navigation in dynamic social environments using Multi-Policy Decision Making. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Republic of Korea, 9–14 October 2016; pp. 1190–1197. [Google Scholar] [CrossRef]
  330. Tang, Z.; Cunha, R.; Hamel, T.; Silvestre, C. Formation control of a leader-follower structure in three dimensional space using bearing measurements. Automatica 2021, 128, 109567. [Google Scholar] [CrossRef]
  331. Nguyen, K.; Dang, V.T.; Pham, D.D.; Dao, P.N. Formation control scheme with reinforcement learning strategy for a group of multiple surface vehicles. Int. J. Robust Nonlinear Control 2024, 34, 2252–2279. [Google Scholar] [CrossRef]
  332. Truc, J.; Singamaneni, P.T.; Sidobre, D.; Ivaldi, S.; Alami, R. KHAOS: A Kinematic Human Aware Optimization-based System for Reactive Planning of Flying-Coworker. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 4764–4770. [Google Scholar] [CrossRef]
  333. Arulkumaran, K.; Deisenroth, M.P.; Brundage, M.; Bharath, A.A. Deep Reinforcement Learning: A Brief Survey. IEEE Signal Process. Mag. 2017, 34, 26–38. [Google Scholar] [CrossRef]
  334. Akalin, N.; Loutfi, A. Reinforcement Learning Approaches in Social Robotics. Sensors 2021, 21, 1292. [Google Scholar] [CrossRef]
  335. Kim, B.; Pineau, J. Socially Adaptive Path Planning in Human Environments Using Inverse Reinforcement Learning. Int. J. Soc. Robot. 2016, 8, 51–66. [Google Scholar] [CrossRef]
  336. Kuderer, M.; Kretzschmar, H.; Burgard, W. Teaching mobile robots to cooperatively navigate in populated environments. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 3138–3143. [Google Scholar] [CrossRef]
  337. Karnan, H.; Nair, A.; Xiao, X.; Warnell, G.; Pirk, S.; Toshev, A.; Hart, J.; Biswas, J.; Stone, P. Socially CompliAnt Navigation Dataset (SCAND): A Large-Scale Dataset of Demonstrations for Social Navigation. IEEE Robot. Autom. Lett. 2022, 7, 11807–11814. [Google Scholar] [CrossRef]
338. Bain, M.; Sammut, C. A Framework for Behavioural Cloning. In Machine Intelligence 15; Oxford University Press: Oxford, UK, 1999; pp. 103–129. [Google Scholar]
  339. Silva, G.; Fraichard, T. Human robot motion: A shared effort approach. In Proceedings of the 2017 European Conference on Mobile Robots (ECMR), Paris, France, 6–8 September 2017; pp. 1–6. [Google Scholar] [CrossRef]
  340. Chen, Y.F.; Liu, M.; Everett, M.; How, J.P. Decentralized non-communicating multiagent collision avoidance with deep reinforcement learning. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 285–292. [Google Scholar] [CrossRef]
  341. Chen, Y.F.; Everett, M.; Liu, M.; How, J.P. Socially Aware Motion Planning with Deep Reinforcement Learning. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; IEEE Press: New York, NY, USA, 2017; pp. 1343–1350. [Google Scholar] [CrossRef]
  342. Jin, J.; Nguyen, N.M.; Sakib, N.; Graves, D.; Yao, H.; Jagersand, M. Mapless Navigation among Dynamics with Social-safety-awareness: A reinforcement learning approach from 2D laser scans. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 6979–6985. [Google Scholar] [CrossRef]
  343. Chen, C.; Liu, Y.; Kreiss, S.; Alahi, A. Crowd-Robot Interaction: Crowd-Aware Robot Navigation With Attention-Based Deep Reinforcement Learning. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 6015–6022. [Google Scholar]
  344. Li, K.; Xu, Y.; Wang, J.; Meng, M. SARL*: Deep Reinforcement Learning based Human-Aware Navigation for Mobile Robot in Indoor Environments. In Proceedings of the 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO), Dali, China, 6–8 December 2019; pp. 688–694. [Google Scholar] [CrossRef]
345. Guldenring, R.; Görner, M.; Hendrich, N.; Jacobsen, N.J.; Zhang, J. Learning Local Planners for Human-aware Navigation in Indoor Environments. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 6053–6060. [Google Scholar] [CrossRef]
  346. Qin, J.; Qin, J.; Qiu, J.; Liu, Q.; Li, M.; Ma, Q. SRL-ORCA: A Socially Aware Multi-Agent Mapless Navigation Algorithm in Complex Dynamic Scenes. IEEE Robot. Autom. Lett. 2024, 9, 143–150. [Google Scholar] [CrossRef]
  347. Ding, W.; Li, S.; Qian, H.; Chen, Y. Hierarchical Reinforcement Learning Framework Towards Multi-Agent Navigation. In Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO), Kuala Lumpur, Malaysia, 12–15 December 2018; IEEE Press: New York, NY, USA, 2018; pp. 237–242. [Google Scholar] [CrossRef]
  348. Lu, X.; Woo, H.; Faragasso, A.; Yamashita, A.; Asama, H. Socially aware robot navigation in crowds via deep reinforcement learning with resilient reward functions. Adv. Robot. 2022, 36, 388–403. [Google Scholar] [CrossRef]
  349. Bachiller, P.; Rodriguez-Criado, D.; Jorvekar, R.R.; Bustos, P.; Faria, D.R.; Manso, L.J. A graph neural network to model disruption in human-aware robot navigation. Multimed. Tools Appl. 2022, 81, 3277–3295. [Google Scholar] [CrossRef]
  350. Mavrogiannis, C.I.; Thomason, W.B.; Knepper, R.A. Social Momentum: A Framework for Legible Navigation in Dynamic Multi-Agent Environments. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; HRI ’18. pp. 361–369. [Google Scholar] [CrossRef]
  351. Pérez-D’Arpino, C.; Liu, C.; Goebel, P.; Martín-Martín, R.; Savarese, S. Robot Navigation in Constrained Pedestrian Environments using Reinforcement Learning. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 1140–1146. [Google Scholar] [CrossRef]
  352. Truong, X.T.; Ngo, T.D. Dynamic Social Zone based Mobile Robot Navigation for Human Comfortable Safety in Social Environments. Int. J. Soc. Robot. 2016, 8, 663–684. [Google Scholar] [CrossRef]
  353. Sousa, R.M.d.; Barrios-Aranibar, D.; Diaz-Amado, J.; Patiño-Escarcina, R.E.; Trindade, R.M.P. A New Approach for Including Social Conventions into Social Robots Navigation by Using Polygonal Triangulation and Group Asymmetric Gaussian Functions. Sensors 2022, 22, 4602. [Google Scholar] [CrossRef]
  354. Corrales-Paredes, A.; Sanz, D.O.; Terrón-López, M.J.; Egido-García, V. User Experience Design for Social Robots: A Case Study in Integrating Embodiment. Sensors 2023, 23, 5274. [Google Scholar] [CrossRef]
  355. Bartneck, C.; Belpaeme, T.; Eyssel, F.; Kanda, T.; Keijsers, M.; Šabanović, S. Human-Robot Interaction: An Introduction; Cambridge University Press: Cambridge, UK, 2020. [Google Scholar] [CrossRef]
  356. Senft, E.; Satake, S.; Kanda, T. Would You Mind Me if I Pass by You? Socially-Appropriate Behaviour for an Omni-based Social Robot in Narrow Environment. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; HRI ’20. pp. 539–547. [Google Scholar] [CrossRef]
  357. Pellegrini, S.; Ess, A.; Schindler, K.; van Gool, L. You’ll never walk alone: Modeling social behavior for multi-target tracking. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 261–268. [Google Scholar] [CrossRef]
  358. Lerner, A.; Chrysanthou, Y.; Lischinski, D. Crowds by Example. Comput. Graph. Forum 2007, 26, 655–664. [Google Scholar] [CrossRef]
  359. Rudenko, A.; Kucner, T.P.; Swaminathan, C.S.; Chadalavada, R.T.; Arras, K.O.; Lilienthal, A.J. THÖR: Human-Robot Navigation Data Collection and Accurate Motion Trajectories Dataset. IEEE Robot. Autom. Lett. 2020, 5, 676–682. [Google Scholar] [CrossRef]
  360. Manso, L.J.; Nuñez, P.; Calderita, L.V.; Faria, D.R.; Bachiller, P. SocNav1: A Dataset to Benchmark and Learn Social Navigation Conventions. Data 2020, 5, 7. [Google Scholar] [CrossRef]
  361. Wang, A.; Biswas, A.; Admoni, H.; Steinfeld, A. Towards Rich, Portable, and Large-Scale Pedestrian Data Collection. arXiv 2023, arXiv:2203.01974. [Google Scholar]
  362. Paez-Granados, D.; He, Y.; Gonon, D.; Huber, L.; Billard, A. 3D point cloud and RGBD of pedestrians in robot crowd navigation: Detection and tracking. IEEE Dataport 2021. [Google Scholar] [CrossRef]
  363. Bae, J.; Kim, J.; Yun, J.; Kang, C.; Choi, J.; Kim, C.; Lee, J.; Choi, J.; Choi, J.W. SiT Dataset: Socially Interactive Pedestrian Trajectory Dataset for Social Navigation Robots. In Proceedings of the Thirty-Seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track, New Orleans, LA, USA, 10 December 2023. [Google Scholar]
  364. Nguyen, D.M.; Nazeri, M.; Payandeh, A.; Datar, A.; Xiao, X. Toward Human-Like Social Robot Navigation: A Large-Scale, Multi-Modal, Social Human Navigation Dataset. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 1–5 October 2023; pp. 7442–7447. [Google Scholar] [CrossRef]
  365. Camargo, C.; Gonçalves, J.; Conde, M.Á.; Rodríguez-Sedano, F.J.; Costa, P.; García-Peñalvo, F.J. Systematic Literature Review of Realistic Simulators Applied in Educational Robotics Context. Sensors 2021, 21, 4031. [Google Scholar] [CrossRef]
366. Michel, O. Webots™: Professional Mobile Robot Simulation. Int. J. Adv. Robot. Syst. 2004, 1, 39–42. [Google Scholar] [CrossRef]
  367. Koenig, N.P.; Howard, A. Design and use paradigms for Gazebo, an open-source multi-robot simulator. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566), Sendai, Japan, 28 September–2 October 2004; Volume 3, pp. 2149–2154. [Google Scholar]
  368. Karwowski, J.; Dudek, W.; Węgierek, M.; Winiarski, T. HuBeRo—A Framework to Simulate Human Behaviour in Robot Research. J. Autom. Mob. Robot. Intell. Syst. 2021, 15, 31–38. [Google Scholar] [CrossRef]
  369. Tsoi, N.; Xiang, A.; Yu, P.; Sohn, S.S.; Schwartz, G.; Ramesh, S.; Hussein, M.; Gupta, A.W.; Kapadia, M.; Vázquez, M. SEAN 2.0: Formalizing and Generating Social Situations for Robot Navigation. IEEE Robot. Autom. Lett. 2022, 7, 11047–11054. [Google Scholar] [CrossRef]
  370. Grzeskowiak, F.; Gonon, D.; Dugas, D.; Paez-Granados, D.; Chung, J.J.; Nieto, J.; Siegwart, R.; Billard, A.; Babel, M.; Pettré, J. Crowd against the machine: A simulation-based benchmark tool to evaluate and compare robot capabilities to navigate a human crowd. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; IEEE Press: New York, NY, USA, 2021; pp. 3879–3885. [Google Scholar] [CrossRef]
  371. Li, C.; Xia, F.; Martín-Martín, R.; Lingelbach, M.; Srivastava, S.; Shen, B.; Vainio, K.E.; Gokmen, C.; Dharan, G.; Jain, T.; et al. iGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday Household Tasks. In Proceedings of the 5th Conference on Robot Learning, London, UK, 8–11 November 2022; Faust, A., Hsu, D., Neumann, G., Eds.; PMLR: Rocks, PA, USA, 2022; Volume 164, pp. 455–465. [Google Scholar]
  372. Favier, A.; Singamaneni, P.T.; Alami, R. An Intelligent Human Avatar to Debug and Challenge Human-Aware Robot Navigation Systems. In Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction, Sapporo, Japan, 7–10 March 2022; IEEE Press: New York, NY, USA, 2022. HRI ’22. pp. 760–764. [Google Scholar]
373. Hauterville, O.; Fernández, C.; Singamaneni, P.T.; Favier, A.; Matellán, V.; Alami, R. IMHuS: Intelligent Multi-Human Simulator. In Proceedings of the IROS 2022 Workshop: Artificial Intelligence for Social Robots Interacting with Humans in the Real World, Kyoto, Japan, 27 October 2022. [Google Scholar]
  374. Sprague, Z.; Chandra, R.; Holtz, J.; Biswas, J. SOCIALGYM 2.0: Simulator for Multi-Agent Social Robot Navigation in Shared Human Spaces. arXiv 2023, arXiv:2303.05584. [Google Scholar]
  375. Pérez-Higueras, N.; Otero, R.; Caballero, F.; Merino, L. HuNavSim: A ROS 2 Human Navigation Simulator for Benchmarking Human-Aware Robot Navigation. IEEE Robot. Autom. Lett. 2023, 8, 7130–7137. [Google Scholar] [CrossRef]
  376. Heiden, E.; Palmieri, L.; Bruns, L.; Arras, K.O.; Sukhatme, G.S.; Koenig, S. Bench-MR: A Motion Planning Benchmark for Wheeled Mobile Robots. IEEE Robot. Autom. Lett. 2021, 6, 4536–4543. [Google Scholar] [CrossRef]
  377. Toma, A.; Hsueh, H.; Jaafar, H.; Murai, R.; Kelly, P.J.; Saeedi, S. PathBench: A Benchmarking Platform for Classical and Learned Path Planning Algorithms. In Proceedings of the 2021 18th Conference on Robots and Vision (CRV), Burnaby, BC, Canada, 26–28 May 2021; pp. 79–86. [Google Scholar] [CrossRef]
  378. Rocha, L.; Vivaldini, K. Plannie: A Benchmark Framework for Autonomous Robots Path Planning Algorithms Integrated to Simulated and Real Environments. In Proceedings of the 2022 International Conference on Unmanned Aircraft Systems (ICUAS), Dubrovnik, Croatia, 21–24 June 2022; pp. 402–411. [Google Scholar] [CrossRef]
379. Tani, J.; Daniele, A.F.; Bernasconi, G.; Camus, A.; Petrov, A.; Courchesne, A.; Mehta, B.; Suri, R.; Zaluska, T.; Walter, M.R.; et al. Integrated Benchmarking and Design for Reproducible and Accessible Evaluation of Robotic Agents. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 6229–6236. [Google Scholar] [CrossRef]
  380. Mishkin, D.; Dosovitskiy, A.; Koltun, V. Benchmarking Classic and Learned Navigation in Complex 3D Environments. arXiv 2019, arXiv:1901.10915. [Google Scholar]
  381. Perille, D.; Truong, A.; Xiao, X.; Stone, P. Benchmarking Metric Ground Navigation. In Proceedings of the 2020 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Abu Dhabi, United Arab Emirates, 4–6 November 2020; pp. 116–121. [Google Scholar] [CrossRef]
  382. Wen, J.; Zhang, X.; Bi, Q.; Pan, Z.; Feng, Y.; Yuan, J.; Fang, Y. MRPB 1.0: A Unified Benchmark for the Evaluation of Mobile Robot Local Planning Approaches. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 8238–8244. [Google Scholar]
  383. Kästner, L.; Bhuiyan, T.; Le, T.A.; Treis, E.; Cox, J.; Meinardus, B.; Kmiecik, J.; Carstens, R.; Pichel, D.; Fatloun, B.; et al. Arena-Bench: A Benchmarking Suite for Obstacle Avoidance Approaches in Highly Dynamic Environments. IEEE Robot. Autom. Lett. 2022, 7, 9477–9484. [Google Scholar] [CrossRef]
  384. Chamzas, C.; Quintero-Peña, C.; Kingston, Z.; Orthey, A.; Rakita, D.; Gleicher, M.; Toussaint, M.; Kavraki, L.E. MotionBenchMaker: A Tool to Generate and Benchmark Motion Planning Datasets. IEEE Robot. Autom. Lett. 2022, 7, 882–889. [Google Scholar] [CrossRef]
  385. Tafnakaji, S.; Hajieghrary, H.; Teixeira, Q.; Bekiroglu, Y. Benchmarking local motion planners for navigation of mobile manipulators. In Proceedings of the 2023 IEEE/SICE International Symposium on System Integration (SII), Atlanta, GA, USA, 17–20 January 2023; pp. 1–6. [Google Scholar] [CrossRef]
  386. Karwowski, J.; Szynkiewicz, W. SRPB: A benchmark for the quantitative evaluation of a social robot navigation. In Proceedings of the 2023 27th International Conference on Methods and Models in Automation and Robotics (MMAR), Międzyzdroje, Poland, 22–25 August 2023; pp. 411–416. [Google Scholar] [CrossRef]
  387. Xia, F.; Shen, W.B.; Li, C.; Kasimbeg, P.; Tchapmi, M.E.; Toshev, A.; Martín-Martín, R.; Savarese, S. Interactive Gibson Benchmark: A Benchmark for Interactive Navigation in Cluttered Environments. IEEE Robot. Autom. Lett. 2020, 5, 713–720. [Google Scholar] [CrossRef]
  388. Singamaneni, P.T.; Favier, A.; Alami, R. Towards Benchmarking Human-Aware Social Robot Navigation: A New Perspective and Metrics. In Proceedings of the IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Busan, Republic of Korea, 28–31 August 2023. [Google Scholar] [CrossRef]
  389. Tenorth, M.; Beetz, M. KNOWROB—Knowledge processing for autonomous personal robots. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 4261–4266. [Google Scholar] [CrossRef]
  390. Singamaneni, P.T.; Umbrico, A.; Orlandini, A.; Alami, R. Towards Enhancing Social Navigation through Contextual and Human-related Knowledge. In Proceedings of the International Conference on Social Robotics 2022 Workshop: ALTRUIST, Florence, Italy, 13–16 December 2022. [Google Scholar]
  391. Manso, L.; Calderita, L.; Bustos, P.; Garcia, J.; Martínez, M.; Fernández, F.; Romero-Garcés, A.; Bandera, A. A General-Purpose Architecture to Control Mobile Robots. In Proceedings of the WAF 2014 15th Workshop of Physical Agents, León, Spain, 12–13 June 2014. [Google Scholar]
Figure 1. Number of publications from 2014 to 2024 included in the survey by year.
Figure 2. A taxonomy of main concepts in social robot navigation. The principles for perception, motion planning and evaluation are derived from the grounded requirements. Parts of the figure have been generated with the Dall-E AI model.
Figure 3. General taxonomy of social robot navigation requirements. The pictures illustrate example concepts of each taxon. The physical safety of humans is related to collision avoidance, whereas the requirements for the perceived safety of humans involve, e.g., avoiding occlusion zones such as corridor corners. Enhancing the naturalness of the robot’s motion links with the avoidance of in-place rotations. Furthermore, compliance with social norms may be connected with certain accompanying strategies. Parts of the figure have been generated with the Dall-E AI model.
Figure 4. Taxonomy of social robot navigation requirements related to the perceived safety of humans.
Figure 5. Taxonomy of social robot navigation requirements related to the naturalness of the robot’s motion.
Figure 6. Taxonomy of social robot navigation requirements related to the robot’s compliance with social norms.
Figure 7. A taxonomy of perception for social robot navigation.
Figure 8. A taxonomy of motion planning for social robot navigation.
Figure 9. A taxonomy of evaluation for social robot navigation.
Table 1. A classification of literature reviews discussing social robot navigation. Typical taxonomy concepts were selected as grouping criteria. The classification identifies the main concepts investigated in each survey article according to the selected taxa.

| Survey | Robot Types | Perception | Motion Planning | Evaluation | Nav. System Architecture |
| --- | --- | --- | --- | --- | --- |
| Kruse et al. [15] | wheeled | human traj. prediction | global cost functions, pose selection, global and local planning algorithms | simulation, user studies | allocation of main concepts |
| Rios-M. et al. [13] | | social cues and signals | algorithms embedding social conventions | | allocation of main concepts |
| Chik et al. [14] | wheeled | | global path planning and local trajectory planning algorithms | | various motion planning architectures |
| Charalampous et al. [16] | | semantic mapping, human trajectory prediction, contextual awareness | | benchmarks, datasets | |
| Möller et al. [3] | | active perception and learning, human behavior prediction | applications of activity recognition for path planning, trajectory modeling | benchmarks, datasets, simulation | |
| Zhu and Zhang [18] | wheeled | | DRL-based navigation algorithms | | navigation frameworks structures |
| Mirsky et al. [4] | wheeled | | navigation models and algorithms for conflict avoidance | simulation, various studies | |
| Gao et al. [5] | | | models for assessment of specific social phenomena | questionnaires, various studies, scenarios, datasets, simulation, various metrics | |
| Sánchez et al. [19] | | human detection, semantic mapping, human motion prediction | predictive and reactive navigation methods | datasets | |
| Mavrogiannis et al. [17] | design challenges | human intention prediction | extensive study involving various navigation algorithms | metrics, datasets, simulation, crowd models, demonstration, various studies | |
| Guillén-Ruiz et al. [20] | | classification of human motion prediction methods | agent motion models and learning-based methods, multi-behavior navigation | | |
| Francis et al. [12] | diversity of hardware platforms | predicting and accommodating human behavior | social navigation principles analysis, planning extensions with contextual awareness | methodologies and guidelines, metrics, datasets, scenarios, simulators, benchmarks | API for metrics benchmarking |
| Singamaneni et al. [11] | ground, aerial, aquatic | human intentions and trajectory prediction, contextual awareness | generation of global and local motion (planning, force, learning), identifying social norms | metrics, datasets, benchmarks, studies, simulators | |
| Ours | ground, wheeled | human detection and tracking, trajectory prediction, contextual awareness | requirements-based global path and local trajectory planning methods with social constraints | metrics, datasets, benchmarks and simulators classification | |
Table 2. Classification of robot navigation methods implementing the requirements from the presented taxonomy.

| Requirement | Implementing Methods |
| --- | --- |
| Physical Safety | [6,9,29,40,49,54,55,59,65,73,74,80,81,92,96,101,110,111,114,115,116,120,121,123,125,126,129,131,132,134,135,137,138,139,141,143,144,145,146,147,153,154,155,156,157,158,159,160,174,180,202,204,205,206,207,208,210,211,212,220,223,227,229,232,233,234,243,244,245,246,248,266,267,268,269,274,276,285,286,287,290,298,299,300,307,308,315,317,321,323,324,326,327,328,329,330,331,332,336,339,341,342,343,344,345,346,348,349,350,351,352] |
| Perceived Safety: Personal spaces | [9,29,49,54,59,65,73,74,80,81,101,120,123,125,129,131,132,134,137,141,143,144,145,146,147,156,157,158,159,160,174,205,206,207,210,212,220,223,232,233,234,244,245,246,266,267,268,269,286,287,290,299,300,307,315,317,326,327,329,342,343,344,345,346,348,349,352,353] |
| Perceived Safety: O-spaces of F-formations | [40,65,114,145,157,160,220,223,232,233,234,246,267,268,269,287,307,317,352,353] |
| Perceived Safety: Passing speed | [49,55,96,137,141,145,159,180,208,332] |
| Perceived Safety: Motion legibility | [55,74,101,139,141,147,159,160,180,202,206,207,208,317,321,328,336,346,350] |
| Perceived Safety: Approach direction | [6,40,54,80,81,92,157,229,244,245,246,267,269,286,307,326,332,352] |
| Perceived Safety: Approach speed | [40,54,81,92,157,245,246] |
| Perceived Safety: Occlusion zones | [132,141,266] |
| Motion Naturalness: Velocity smoothness | [29,59,125,135,147,156] |
| Motion Naturalness: Oscillations | [143,146] |
| Motion Naturalness: In-place rotations | |
| Motion Naturalness: Backward movements | |
| Motion Naturalness: Gaze modulation | [73,96,101] |
| Social Conventions: Accompanying | [40,110,111,114,115,116,120,121,126,132,157,174,229,243,244,245,246,308,329,330,331] |
| Social Conventions: Affordance spaces | [123,125,223,227,267,268,307,352] |
| Social Conventions: Activity spaces | [123,125,223,267,268,307,352] |
| Social Conventions: Passing side | [49,59,73,132,137,221,336,341] |
| Social Conventions: Yielding way | |
| Social Conventions: Standing in line | [125,129,220] |
| Social Conventions: Elevator etiquette | |
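To make the personal-space entries above concrete: several of the grouped approaches, e.g., [353] with its group asymmetric Gaussian functions, shape the robot's cost function with Gaussian comfort zones centered on detected humans. The following Python sketch illustrates the general idea for a single human; the function shape is the commonly used asymmetric Gaussian, but all sigma values are illustrative assumptions, not parameters taken from any cited method.

```python
import numpy as np

def personal_space_cost(px, py, hx, hy, h_theta,
                        sigma_front=1.2, sigma_rear=0.6, sigma_side=0.8):
    """Asymmetric Gaussian cost of a point (px, py) around a human at
    (hx, hy) facing h_theta. The larger frontal sigma reflects the common
    finding that frontal intrusions cause more discomfort; the constants
    are illustrative, not prescribed by the surveyed methods."""
    dx, dy = px - hx, py - hy
    # Express the offset in the human's frame (local x axis points forward).
    lx = np.cos(h_theta) * dx + np.sin(h_theta) * dy
    ly = -np.sin(h_theta) * dx + np.cos(h_theta) * dy
    sigma_x = sigma_front if lx >= 0.0 else sigma_rear
    return np.exp(-0.5 * ((lx / sigma_x) ** 2 + (ly / sigma_side) ** 2))
```

A planner can add such values as an extra costmap layer or utility penalty, so that candidate trajectories crossing the space in front of a person score worse than those passing behind.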
Table 3. Classification of robotic simulation systems with capabilities for replicating human motion behavior. Abbreviations used in the table: MG stands for moving to a goal, PG—performing gestures, FO—following an object, ST—sitting, CO—conversing, JG—joining groups, and MO—moving to an object.

| Approach | Software Architecture | Robot Fidelity | Human Task Variety |
| --- | --- | --- | --- |
| Webots [366] | standalone | kinodynamic | MG |
| Gazebo (Ignition) [367] | standalone | kinodynamic | MG, PG |
| PedsimROS [140] | framework (Gazebo interface) | | MG |
| flatland | standalone | kinematic | MG |
| HuBeRo [368] | framework (Gazebo interface) | | MG, PG, FO, ST, CO, MO |
| SEAN 2.0 [369] | Unity | kinodynamic | MG, JG |
| Crowdbot [370] | Unity | kinodynamic | MG |
| iGibson 2.0 [371] | standalone | kinodynamic | MG |
| InHUS [372] | framework (Stage/Morse interfaces) | | MG |
| IMHuS [373] | framework (Gazebo interface) | | MG |
| SocialGym 2.0 [374] | framework (UTMRS interface) | kinodynamic | MG |
| HuNavSim [375] | framework (Gazebo interface) | | MG |
Table 4. Classification of robotic simulation systems from the perspective of methods to replicate human motion behavior.

| Approach | Human Motion Planning | Human Motion Diversity |
| --- | --- | --- |
| Webots [366] | naive trajectory following | configurable speed in a scripted trajectory |
| Gazebo (Ignition) [367] | APF-like | configurable weights of potentials |
| PedsimROS [140] | SFM | configurable motion model’s properties and group assignment |
| flatland | any ROS plugin for motion planning | possible individual parameters for each planning agent |
| HuBeRo [368] | any ROS plugin for motion planning | possible individual parameters for each planning agent |
| SEAN 2.0 [369] | Unity’s built-in path planner with SFM | configurable behaviors (randomized, handcrafted, or graph-based control of pedestrians), variable posture |
| Crowdbot [370] | DWA, RVO, SFM | configurable speed in a scripted trajectory |
| iGibson 2.0 [371] | A* with ORCA | configurable object radius of ORCA |
| InHUS [372] | any ROS plugin for motion planning | possible individual parameters for each planning agent |
| IMHuS [373] | any ROS plugin for motion planning | possible individual parameters for each planning agent |
| SocialGym 2.0 [374] | SFM | configurable motion model’s properties and group assignment |
| HuNavSim [375] | APF-like/SFM | configurable behaviors (regular, impassive, surprised, curious, scared, threatening) |
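Regarding the Human Motion Planning column above: the Social Force Model (SFM) used by PedsimROS [140], SEAN 2.0 [369], SocialGym 2.0 [374], and HuNavSim [375] drives each simulated pedestrian with the sum of a goal-attraction force and repulsive forces from nearby agents. As a rough intuition for what such simulators compute per time step, the sketch below shows a minimal explicit-Euler update for a single pedestrian; the force terms follow the standard structure, but the constants are illustrative, and obstacle and group terms are omitted.

```python
import numpy as np

def sfm_step(pos, vel, goal, neighbors, dt=0.05,
             desired_speed=1.3, tau=0.5, a=2.0, b=0.3):
    """One explicit-Euler step of a minimal social force model for one
    pedestrian. pos, vel, goal are 2D numpy arrays; neighbors is a list
    of other pedestrians' positions. Constants are illustrative only."""
    # Driving force: relax the current velocity toward the desired velocity.
    to_goal = goal - pos
    to_goal /= max(np.linalg.norm(to_goal), 1e-9)
    f_drive = (desired_speed * to_goal - vel) / tau
    # Repulsive social forces, decaying exponentially with distance.
    f_social = np.zeros(2)
    for n_pos in neighbors:
        offset = pos - n_pos
        dist = np.linalg.norm(offset)
        if dist > 1e-9:
            f_social += a * np.exp(-dist / b) * (offset / dist)
    vel = vel + (f_drive + f_social) * dt
    return pos + vel * dt, vel
```

The simulators listed above differ mainly in which extra terms they add (groups, obstacles, behavioral states) and in how the resulting parameters can be configured per agent.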
Table 5. A classification of state-of-the-art methods for quantitative evaluation of robot navigation requirements. The number of ticks (✓) reflects the number of metrics implemented in each benchmark, grouped by metric category (classical navigation performance, physical safety, perceived safety, motion naturalness, social norms). Abbreviations used: S stands for simulation environments, R—real-world environments, and S/R reflects simulation and real-world environments.

| Name | Metrics | Suitable Env. | Analysis Tools |
| --- | --- | --- | --- |
| iGibson Benchmark [387] | | S | |
| MRPB [382] | ✓✓✓✓ | S/R | |
| BenchMR [376] | ✓✓✓✓✓ | S | scenario rendering, metrics plots |
| CrowdBot Benchmark [370] | ✓✓✓✓✓✓✓✓ | S | scenario rendering, metrics plots |
| SocNavBench [33] | ✓✓✓✓✓ ✓✓✓✓✓ ✓✓✓✓✓✓ | S | scenario rendering, metrics plots |
| Arena-Bench [383] | ✓✓✓✓✓ ✓✓✓ ✓✓✓ | S | scenario rendering, metrics plots |
| SEAN 2.0 [369] | ✓✓✓✓✓ ✓✓✓ ✓✓ | S | |
| InHuS [372] | ✓✓ | S/R | scenario and metrics rendering |
| Tafnakaji et al. [385] | ✓✓✓✓✓ | S/R | scenario rendering |
| SRPB [76] | ✓✓✓✓✓ ✓✓✓✓✓✓✓✓✓✓✓✓✓✓ ✓✓✓✓✓ ✓✓✓✓✓ ✓✓✓✓✓ | S/R | scenario rendering, metrics plots, exporting results to a LaTeX table or a spreadsheet |
| HuNavSim [375] | ✓✓✓✓✓ ✓✓✓ ✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓ | S | |
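As an illustration of the kind of perceived-safety metric counted above, benchmarks such as SRPB [386] derive statistics from robot–human distances along recorded trajectories. The snippet below sketches one simple formulation, a personal-space intrusion ratio, under assumed array shapes and an assumed 1.2 m proxemic threshold; it is a minimal example, not the implementation of any listed benchmark.

```python
import numpy as np

def personal_space_intrusion(robot_xy, humans_xy, radius=1.2):
    """Fraction of recorded timestamps at which the robot was closer than
    `radius` to the nearest human. robot_xy has shape (T, 2), humans_xy
    has shape (T, N, 2); the 1.2 m threshold is an assumed proxemic
    boundary, not a value mandated by any of the benchmarks above."""
    # Pairwise robot-human distances at each timestamp: shape (T, N).
    dists = np.linalg.norm(humans_xy - robot_xy[:, None, :], axis=-1)
    return float((dists.min(axis=1) < radius).mean())
```

Metrics of this offline, trajectory-based form are what make the S/R entries possible: the same computation applies whether the poses were logged in simulation or on a real robot.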
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
