
Advances in Human-Robot Interaction: Sensing, Cognition and Control

A topical collection in Sensors (ISSN 1424-8220). This collection belongs to the section "Sensors and Robotics".

Viewed by 81084

Editors


Dr. Abolfazl Zaraki
Collection Editor
Robotics Research Group, Department of Computer Science, University of Hertfordshire, Hatfield AL10 9AB, UK
Interests: behaviour adaptation in closed-loop human–robot interaction (HRI); artificial cognition development; trust-aware HRI

Dr. Hamed Rahimi Nohooji
Collection Editor
Automation & Robotics Research Group, University of Luxembourg, Luxembourg, Luxembourg
Interests: soft robotics; reconfigurable robotics; robot control; robotic manipulation and grasping

Topical Collection Information

Dear Colleagues,

Thanks to recent advances in robotics and computational intelligence, autonomous robots that interact with humans are becoming increasingly available to the public, offering support to people in a variety of contexts, such as education, assistive applications, customer service, and home maintenance. These robots are envisioned to deliver meaningful benefits through efficient and effective interaction with humans that fulfils their expectations. These benefits, however, may not always be realised due to maladaptive forms of interaction. To establish successful human–robot interaction (HRI), an autonomous robot needs not only perceptual and cognitive capabilities but also the ability to adapt its behaviour in real time, often in partially unknown environments, making the adjustments the situation at hand requires and, in turn, achieving the high-level goal, e.g., assisting a person in solving a puzzle or a tricky maths problem.

In contrast to automation, which follows pre-programmed “rules” and is limited to specific actions, autonomous robots are envisioned to have a context-guided behaviour adaptation capability, giving them a degree of self-governance and enabling them to learn and respond actively to situations that were not pre-programmed by the developer. Although this capability would potentially promote HRI, there are serious concerns regarding its impact on human trust, as the actions of robots involved in HRI become less predictable. Thus, it is believed that a successful and trustworthy HRI must strike a trade-off between the robot’s capability to adapt its behaviour and its capability to measure and manage community- and individual-relevant factors such as trust (where the human is the trustor and the robot the trustee), with the final aim of maximising the outcomes of the HRI.

This Topical Collection covers recent advances across the human–robot interaction field, including the development of architectures and modules for the sensing, cognition, and control of robotic systems involved in HRI; user studies; the analysis, assessment, and validation of robotic systems; and work in progress in this field. Authors are encouraged to submit both original research articles and surveys. Research articles should address the originality, practical aspects, and implementation of work in the field, while surveys should provide a comprehensive, up-to-date overview.

We welcome submissions on all topics of HRI applied to industry, health, and education, including, but not limited to, the following:

  • Development of robotic frameworks for sensing, cognition, and control;
  • Context-guided behaviour adaptation in HRIs;
  • Mutual perception in closed-loop HRIs;
  • Contextual reasoning in HRIs;
  • Assistive robotics;
  • Human-in-the-loop control;
  • Learning by demonstration;
  • Human factors in HCI/HRI;
  • Human-guided reinforcement learning;
  • Interpretable machine learning with human-in-the-loop;
  • Trust and autonomy in HRIs in different contexts;
  • Trust-aware HRI;
  • Trust measurement tools in HRIs;
  • Explainable robotics;
  • HRIs in real-world settings.

Dr. Abolfazl Zaraki
Dr. Hamed Rahimi Nohooji
Collection Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • autonomous behaviour adaptation
  • contextual reasoning
  • trust-aware interaction
  • trustworthy HRI
  • HRIs in real-world settings
  • robot artificial cognition
  • explainable robotics

Published Papers (22 papers)

2024


16 pages, 2217 KiB  
Article
Transformable Gaussian Reward Function for Socially Aware Navigation Using Deep Reinforcement Learning
by Jinyeob Kim, Sumin Kang, Sungwoo Yang, Beomjoon Kim, Jargalbaatar Yura and Donghan Kim
Sensors 2024, 24(14), 4540; https://doi.org/10.3390/s24144540 - 13 Jul 2024
Cited by 1 | Viewed by 769
Abstract
Robot navigation has transitioned from avoiding static obstacles to adopting socially aware navigation strategies for coexisting with humans. Consequently, socially aware navigation in dynamic, human-centric environments has gained prominence in the field of robotics. Reinforcement learning, one of the methods for socially aware navigation, has fostered its advancement. However, defining appropriate reward functions, particularly in congested environments, poses a significant challenge. These reward functions, crucial for guiding robot actions, necessitate intricate human-crafted design due to their complex nature and inability to be set automatically. The multitude of manually designed reward functions suffers from issues such as hyperparameter redundancy, imbalance, and inadequate representation of unique object characteristics. To address these challenges, we introduce a transformable Gaussian reward function (TGRF). The TGRF possesses two main features. First, it reduces the burden of tuning by utilizing a small number of hyperparameters that function independently. Second, it enables the application of various reward functions through its transformability. Consequently, it exhibits high performance and accelerated learning rates within the deep reinforcement learning (DRL) framework. We also validated the performance of TGRF through simulations and experiments.
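
The paper's exact TGRF formulation is not reproduced here, but the idea of a Gaussian-shaped reward term governed by a small number of independent hyperparameters can be pictured in a few lines. In the sketch below, the function name, hyperparameter values, and the social-distance example are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def tgrf(distance, height=1.0, width=0.5):
    """Gaussian-shaped reward term (illustrative sketch, not the paper's code).

    `height` scales the peak value and `width` controls how quickly the
    reward decays with distance; these play the role of a small set of
    independently tunable hyperparameters. A negative `height` turns the
    same shape into a penalty, i.e. the term is "transformable" into
    different reward roles.
    """
    return height * np.exp(-(distance ** 2) / (2.0 * width ** 2))

# Example: a social-discomfort penalty that is strongest when the robot is
# closest to a pedestrian and fades smoothly with distance.
print(tgrf(0.8, height=-0.25, width=0.45))
```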

35 pages, 2478 KiB  
Article
Attention-Based Variational Autoencoder Models for Human–Human Interaction Recognition via Generation
by Bonny Banerjee and Murchana Baruah
Sensors 2024, 24(12), 3922; https://doi.org/10.3390/s24123922 - 17 Jun 2024
Cited by 2 | Viewed by 888
Abstract
The remarkable human ability to predict others’ intent during physical interactions develops at a very early age and is crucial for development. Intent prediction, defined as the simultaneous recognition and generation of human–human interactions, has many applications such as in assistive robotics, human–robot interaction, video and robotic surveillance, and autonomous driving. However, models for solving the problem are scarce. This paper proposes two attention-based agent models to predict the intent of interacting 3D skeletons by sampling them via a sequence of glimpses. The novelty of these agent models is that they are inherently multimodal, consisting of perceptual and proprioceptive pathways. The action (attention) is driven by the agent’s generation error, and not by reinforcement. At each sampling instant, the agent completes the partially observed skeletal motion and infers the interaction class. It learns where and what to sample by minimizing the generation and classification errors. Extensive evaluation of our models is carried out on benchmark datasets and in comparison to a state-of-the-art model for intent prediction, which reveals that classification and generation accuracies of one of the proposed models are comparable to those of the state of the art even though our model contains fewer trainable parameters. The insights gained from our model designs can inform the development of efficient agents, the future of artificial intelligence (AI).

62 pages, 4380 KiB  
Article
Bridging Requirements, Planning, and Evaluation: A Review of Social Robot Navigation
by Jarosław Karwowski, Wojciech Szynkiewicz and Ewa Niewiadomska-Szynkiewicz
Sensors 2024, 24(9), 2794; https://doi.org/10.3390/s24092794 - 27 Apr 2024
Cited by 1 | Viewed by 1469
Abstract
Navigation lies at the core of social robotics, enabling robots to navigate and interact seamlessly in human environments. The primary focus of human-aware robot navigation is minimizing discomfort among surrounding humans. Our review explores user studies, examining factors that cause human discomfort, to perform the grounding of social robot navigation requirements and to form a taxonomy of elementary necessities that should be implemented by comprehensive algorithms. This survey also discusses human-aware navigation from an algorithmic perspective, reviewing the perception and motion planning methods integral to social navigation. Additionally, the review investigates different types of studies and tools facilitating the evaluation of social robot navigation approaches, namely datasets, simulators, and benchmarks. Our survey also identifies the main challenges of human-aware navigation, highlighting the essential future work perspectives. This work stands out from other review papers, as it not only investigates the variety of methods for implementing human awareness in robot control systems but also classifies the approaches according to the grounded requirements regarded in their objectives.

2023


16 pages, 10279 KiB  
Article
Table-Balancing Cooperative Robot Based on Deep Reinforcement Learning
by Yewon Kim, Dae-Won Kim and Bo-Yeong Kang
Sensors 2023, 23(11), 5235; https://doi.org/10.3390/s23115235 - 31 May 2023
Cited by 3 | Viewed by 2510
Abstract
Reinforcement learning is one of the artificial intelligence methods that enable robots to judge situations and act on their own by learning to perform tasks. Previous reinforcement learning research has mainly focused on tasks performed by individual robots; however, everyday tasks, such as balancing tables, often require cooperation between two individuals to avoid injury when moving. In this research, we propose a deep reinforcement learning-based technique for robots to perform a table-balancing task in cooperation with a human. The proposed cooperative robot recognizes human behavior to balance the table: the robot’s camera captures an image of the state of the table, and the table-balancing action is performed afterward. The deep Q-network (DQN) is the deep reinforcement learning technique applied to the cooperative robot. After learning table balancing, the cooperative robot showed an average optimal policy convergence rate of 90% in 20 runs of training with optimal hyperparameters applied to the DQN-based technique. In the hardware experiment, the trained DQN-based robot achieved an operation precision of 90%, thus verifying its performance.
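
For context, the core of any DQN controller is the temporal-difference update against a slowly refreshed target network. The sketch below shows that update in PyTorch; the layer sizes, four-action space, and 64-dimensional state encoding are illustrative placeholders, not the paper's architecture or hyperparameters.

```python
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 4))
target_net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 4))
target_net.load_state_dict(q_net.state_dict())  # target starts as a copy
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99  # discount factor

def dqn_update(s, a, r, s_next, done):
    """One TD update: pull Q(s, a) toward r + gamma * max_a' Q_target(s', a')."""
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# One update on a random mini-batch of 32 transitions.
s, s_next = torch.randn(32, 64), torch.randn(32, 64)
a, r, done = torch.randint(0, 4, (32,)), torch.randn(32), torch.zeros(32)
dqn_update(s, a, r, s_next, done)
```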

18 pages, 3894 KiB  
Article
Optimized Dynamic Collision Avoidance Algorithm for USV Path Planning
by Hongyang Zhu and Yi Ding
Sensors 2023, 23(9), 4567; https://doi.org/10.3390/s23094567 - 8 May 2023
Cited by 10 | Viewed by 3185
Abstract
Ship collision avoidance is a complex process that is influenced by numerous factors. In this study, we propose a novel method called the Optimal Collision Avoidance Point (OCAP) for unmanned surface vehicles (USVs) to determine when to take appropriate actions to avoid collisions. The approach combines a model that accounts for the two degrees of freedom in USV dynamics with a velocity obstacle method for obstacle detection and avoidance. The method calculates the change in the USV’s navigation state based on the critical condition of collision avoidance. First, the coordinates of the optimal collision avoidance point in the current ship encounter state are calculated based on the relative velocities and kinematic parameters of the USV and obstacles. Then, the increments of the vessel’s linear velocity and heading angle that can reach the optimal collision avoidance point are set as a constraint for dynamic window sampling. Finally, the algorithm evaluates the probabilities of collision hazards for trajectories that satisfy the critical condition and uses the resulting collision avoidance probability value as a criterion for course assessment. The resulting collision avoidance algorithm is optimized for USV maneuverability and is capable of handling multiple moving obstacles in real time. Experimental results show that the OCAP algorithm has higher and more robust path-finding efficiency than the other two algorithms tested when the dynamic obstacle density is high.
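
The velocity-obstacle ingredient has a compact geometric core: a collision is pending when the relative velocity points inside the cone subtended by the obstacle's safety circle. The check below is that standard test with made-up example values; the OCAP logic the paper builds on top of it is not reproduced here.

```python
import numpy as np

def in_collision_cone(p_rel, v_rel, r_combined):
    """Velocity-obstacle test: True if the relative velocity v_rel points
    inside the collision cone of an obstacle at relative position p_rel
    with combined safety radius r_combined."""
    dist = np.linalg.norm(p_rel)
    if dist <= r_combined:
        return True  # already within the safety radius
    speed = np.linalg.norm(v_rel)
    if speed < 1e-9:
        return False  # no relative motion, no pending collision
    half_angle = np.arcsin(r_combined / dist)   # cone half-angle
    cos_theta = np.dot(v_rel, p_rel) / (speed * dist)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    return theta <= half_angle

# USV closing head-on with an obstacle 50 m ahead: collision predicted.
print(in_collision_cone(np.array([50.0, 0.0]), np.array([8.0, 0.0]), 10.0))
```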

14 pages, 7850 KiB  
Article
A Mixed-Reality-Based Unknown Space Navigation Method of a Flexible Manipulator
by Ronghui Chen, Xiaojun Zhu, Zhang Chen, Yu Tian, Lunfei Liang and Xueqian Wang
Sensors 2023, 23(8), 3840; https://doi.org/10.3390/s23083840 - 9 Apr 2023
Viewed by 2092
Abstract
A hyper-redundant flexible manipulator is characterized by many degrees of freedom (DoF), flexibility, and environmental adaptability. It has been used for missions in complex and unknown spaces, such as debris rescue and pipeline inspection, where the manipulator is not intelligent enough to handle complex situations on its own; therefore, human intervention is required to assist in decision-making and control. In this paper, we designed a mixed reality (MR)-based interactive navigation method for a hyper-redundant flexible manipulator in an unknown space. A novel teleoperation system framework is put forward. An MR-based interface was developed to provide a virtual model of the remote workspace and a virtual interactive interface, allowing the operator to observe the real-time situation from a third-person perspective and issue commands to the manipulator. For environmental modeling, a simultaneous localization and mapping (SLAM) algorithm based on an RGB-D camera is applied. Additionally, a path-finding and obstacle avoidance method based on an artificial potential field (APF) is introduced to ensure that the manipulator can move automatically under the operator’s commands in the remote space without collision. The results of the simulations and experiments validate that the system exhibits good real-time performance, accuracy, security, and user-friendliness.
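
As a reference point for the APF component, a basic potential-field step combines an attractive gradient toward the goal with Khatib-style repulsive gradients from obstacles inside an influence radius. The gains, radius, and step size below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def apf_step(q, goal, obstacles, k_att=1.0, k_rep=0.5, d0=0.4, step=0.05):
    """One artificial-potential-field step for a point in 3D space."""
    force = k_att * (goal - q)                    # attractive term
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        if 1e-6 < d < d0:                         # repulsion only inside d0
            force += k_rep * (1.0 / d - 1.0 / d0) * (q - obs) / d ** 3
    return q + step * force / max(np.linalg.norm(force), 1e-6)

q = np.array([0.0, 0.0, 0.0])                     # e.g. a waypoint on the arm
goal = np.array([1.0, 0.5, 0.2])
obstacles = [np.array([0.5, 0.25, 0.1])]
for _ in range(200):                              # descend toward the goal
    q = apf_step(q, goal, obstacles)
print(q)
```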

2022


11 pages, 2808 KiB  
Article
Auditory Feedback for Enhanced Sense of Agency in Shared Control
by Tomoya Morita, Yaonan Zhu, Tadayoshi Aoyama, Masaru Takeuchi, Kento Yamamoto and Yasuhisa Hasegawa
Sensors 2022, 22(24), 9779; https://doi.org/10.3390/s22249779 - 13 Dec 2022
Cited by 3 | Viewed by 2417
Abstract
There is a growing need for robots that can be remotely controlled to perform tasks of one’s own choice. However, the sense of agency (SoA; the sense of recognizing that the motion of an observed object is caused by oneself) is reduced because, under shared control, the subject of the robot motion is identified as external. To address this issue, we aimed to suppress the decline in the SoA by presenting auditory feedback that blurs the distinction between self and others. We performed a tracking task in a virtual environment under four different auditory feedback conditions, with varying levels of automation used to manipulate the virtual robot gripper. Experimental results showed that the proposed auditory feedback suppressed the decrease in the SoA at a medium level of automation. This suggests that our proposed auditory feedback blurs the distinction between self and others, leading the operator to attribute the motion of the manipulated object to themselves.

23 pages, 351 KiB  
Perspective
Methods of Generating Emotional Movements and Methods of Transmitting Behavioral Intentions: A Perspective on Human-Coexistence Robots
by Takafumi Matsumaru
Sensors 2022, 22(12), 4587; https://doi.org/10.3390/s22124587 - 17 Jun 2022
Cited by 1 | Viewed by 3151
Abstract
The purpose of this paper is to introduce and discuss the following two functions that are considered to be important in human-coexistence robots and human-symbiotic robots: the method of generating emotional movements, and the method of transmitting behavioral intentions. The generation of emotional movements is to design the bodily movements of robots so that humans can feel specific emotions. Specifically, the application of Laban movement analysis, the development from the circumplex model of affect, and the imitation of human movements are discussed. However, a general technique has not yet been established to modify any robot movement so that it contains a specific emotion. The transmission of behavioral intentions is about allowing the surrounding humans to understand the behavioral intentions of robots. Specifically, informative motions in arm manipulation and the transmission of the movement intentions of robots are discussed. In the former, the target position in the reaching motion, the physical characteristics in the handover motion, and the landing distance in the throwing motion are examined, but there are still few research cases. In the latter, no groundbreaking method has been proposed that is fundamentally different from earlier studies. Further research and development are expected in the near future.

31 pages, 1352 KiB  
Review
Recent Advances in Bipedal Walking Robots: Review of Gait, Drive, Sensors and Control Systems
by Tadeusz Mikolajczyk, Emilia Mikołajewska, Hayder F. N. Al-Shuka, Tomasz Malinowski, Adam Kłodowski, Danil Yurievich Pimenov, Tomasz Paczkowski, Fuwen Hu, Khaled Giasin, Dariusz Mikołajewski and Marek Macko
Sensors 2022, 22(12), 4440; https://doi.org/10.3390/s22124440 - 12 Jun 2022
Cited by 58 | Viewed by 12740
Abstract
Currently, there is intensive development of bipedal walking robots. The best-known solutions are based on the principles of human gait created in nature during evolution. Modern bipedal robots are also based on the locomotion manners of birds. This review presents the current state of the art of bipedal walking robots based on natural bipedal movements (human and bird) as well as on innovative synthetic solutions. Firstly, an overview of the scientific analysis of human gait is provided as a basis for the design of bipedal robots. The full human gait cycle, which consists of two main phases, is analysed, and attention is paid to the problem of balance and stability, especially in the single-support phase, when the bipedal movement is unstable. The influences of passive or active gait on energy demand are also discussed. Most studies are based on the zero-moment point approach. Furthermore, a review of the knowledge on the specific locomotor characteristics of birds, whose kinematics are derived from dinosaurs and provide them with both walking and running abilities, is presented. Secondly, many types of bipedal robot solutions are reviewed, including nature-inspired robots (human-like and birdlike robots) and innovative robots using new heuristic, synthetic ideas for locomotion. In total, 45 robotic solutions are gathered by the bibliographic search method. Atlas is mentioned as one of the most advanced human-like robots, while the birdlike robot cases are Cassie and Digit. Innovative robots are presented, such as a slider robot without knees, robots with rotating feet (3 and 4 degrees of freedom), and the hybrid robot Leo, which can walk on surfaces and fly. In particular, the paper describes in detail the robots’ propulsion systems (electric, hydraulic), the structure of the lower limb (serial, parallel, mixed mechanisms), the types and structures of control and sensor systems, and the energy efficiency of the robots. Terrain roughness recognition systems using different sensor systems based on light detection and ranging or multiple cameras are introduced. A comparison of the performance, control and sensor systems, drive systems, and achievements of known human-like and birdlike robots is provided. Thirdly, for the first time, the review comments on the future of bipedal robots in relation to the concepts of conventional (natural bipedal) and synthetic unconventional gait. We critically assess and compare prospective directions for further research that involve the development of navigation systems, artificial intelligence, collaboration with humans, and areas for the development of bipedal robot applications in everyday life, therapy, and industry.

25 pages, 6214 KiB  
Article
Real-Time Stylized Humanoid Behavior Control through Interaction and Synchronization
by Zhiyan Cao, Tianxu Bao, Zeyu Ren, Yunxin Fan, Ken Deng and Wenchuan Jia
Sensors 2022, 22(4), 1457; https://doi.org/10.3390/s22041457 - 14 Feb 2022
Viewed by 2266
Abstract
Restricted by the diversity and complexity of human behaviors, simulating a character to achieve human-level perception and motion control remains an active and challenging area. We present a style-based teleoperation framework that draws on human perception and analysis to understand the tasks being handled and the unknown environment, in order to control the character. In this framework, motion optimization and a body controller with the center-of-mass and root virtual control (CR-VC) method are designed to achieve motion synchronization and style mimicking while maintaining the balance of the character. The motion optimization synthesizes the human high-level style features with the balance strategy to create a feasible, stylized, and stable pose for the character. The CR-VC method, including model-based torque compensation, synchronizes the motion rhythm of the human and the character. Without any inverse dynamics knowledge or offline preprocessing, our framework generalizes to various scenarios and is robust to human behavior changes in real time. We demonstrate the effectiveness of this framework through teleoperation experiments with different tasks, motion styles, and operators. This study is a step toward building human–robot interaction that uses humans to help characters understand and achieve tasks.

17 pages, 2601 KiB  
Article
Designing Man’s New Best Friend: Enhancing Human-Robot Dog Interaction through Dog-Like Framing and Appearance
by Ewart J. de Visser, Yigit Topoglu, Shawn Joshi, Frank Krueger, Elizabeth Phillips, Jonathan Gratch, Chad C. Tossell and Hasan Ayaz
Sensors 2022, 22(3), 1287; https://doi.org/10.3390/s22031287 - 8 Feb 2022
Cited by 5 | Viewed by 5672
Abstract
To understand how to improve interactions with dog-like robots, we evaluated the importance of “dog-like” framing and physical appearance on interaction, hypothesizing multiple interactive benefits of each. We assessed whether framing Aibo as a puppy (i.e., in need of development) versus simply a robot would result in more positive responses and interactions. We also predicted that adding fur to Aibo would make it appear more dog-like, likable, and interactive. Twenty-nine participants engaged with Aibo in a 2 × 2 (framing × appearance) design by issuing commands to the robot. Aibo and participant behaviors were monitored per second and evaluated via an analysis of commands issued, an analysis of command blocks (i.e., chains of commands), and a T-pattern analysis of participant behavior. Participants were more likely to issue the “Come Here” command than other types of commands. When framed as a puppy, participants used Aibo’s dog name more often, praised it more, and exhibited more unique, interactive, and complex behavior with Aibo. Participants exhibited the most smiling and laughing behaviors with Aibo framed as a puppy without fur. Across conditions, after interacting with Aibo, participants felt Aibo was more trustworthy, intelligent, warm, and connected than at their initial meeting. This study shows the benefits of introducing a social robotic agent with a particular frame and an emphasis on realism (i.e., introducing the robot dog as a puppy) for more interactive engagement.

24 pages, 17607 KiB  
Article
My Caregiver the Cobot: Comparing Visualization Techniques to Effectively Communicate Cobot Perception to People with Physical Impairments
by Max Pascher, Kirill Kronhardt, Til Franzen, Uwe Gruenefeld, Stefan Schneegass and Jens Gerken
Sensors 2022, 22(3), 755; https://doi.org/10.3390/s22030755 - 19 Jan 2022
Cited by 5 | Viewed by 3513
Abstract
Nowadays, robots are found in a growing number of areas where they collaborate closely with humans. Enabled by lightweight materials and safety sensors, these cobots are gaining increasing popularity in domestic care, where they support people with physical impairments in their everyday lives. However, when cobots perform actions autonomously, it remains challenging for human collaborators to understand and predict their behavior, which is crucial for achieving trust and user acceptance. One significant aspect of predicting cobot behavior is understanding their perception and comprehending how they “see” the world. To tackle this challenge, we compared three different visualization techniques for spatial augmented reality. All of these communicate cobot perception by visually indicating which objects in the cobot’s surroundings have been identified by its sensors. We compared the well-established visualizations Wedge and Halo against our proposed visualization Line in a remote user experiment with participants with physical impairments. In a second remote experiment, we validated these findings with a broader, non-specific user base. Our findings show that Line, a lower-complexity visualization, results in significantly faster reaction times than Halo and lower task load than both Wedge and Halo. Overall, users prefer Line as the more straightforward visualization. In spatial augmented reality, with its known disadvantage of limited projection area size, established off-screen visualizations are not effective in communicating cobot perception, and Line presents an easy-to-understand alternative.

27 pages, 7850 KiB  
Article
Mix Frame Visual Servo Control Framework for Autonomous Assistive Robotic Arms
by Zubair Arif and Yili Fu
Sensors 2022, 22(2), 642; https://doi.org/10.3390/s22020642 - 14 Jan 2022
Cited by 3 | Viewed by 3863
Abstract
Assistive robotic arms (ARAs) that provide care to the elderly and people with disabilities are a significant part of human–robot interaction (HRI). Presently available ARAs provide non-intuitive interfaces, such as joysticks, for control and thus lack the autonomy to perform daily activities. This study proposes that, for inducing autonomous behavior in ARAs, the integration of visual sensors is vital, and visual servoing in the direct Cartesian control mode is the preferred method. Generally, ARAs are designed in a configuration where the end-effector’s position is defined in the fixed base frame while its orientation is expressed in the end-effector frame. We denote this configuration as ‘mixed frame robotic arms’. Consequently, conventional visual servo controllers, which operate in a single frame of reference, are incompatible with mixed frame ARAs. Therefore, we propose a mixed-frame visual servo control framework for ARAs. Moreover, we elucidate the task-space kinematics of mixed frame ARAs, which leads to the development of a novel “mixed frame Jacobian matrix”. The proposed framework was validated on a mixed frame JACO-2 7-DoF ARA using an adaptive proportional-derivative controller for achieving image-based visual servoing (IBVS), showing a significant 31% increase in the convergence rate and outperforming conventional IBVS joint controllers, especially in outstretched arm positions and near the base frame. Our results demonstrate the need for a mixed frame controller for deploying visual servo control on modern ARAs that can inherently cater to the robotic arm’s joint limits, singularities, and self-collision problems.
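
For orientation, the classical IBVS control law drives the feature error to zero through the pseudo-inverse of the interaction matrix: v = -lam * pinv(L) @ (s - s_star). The paper's mixed-frame Jacobian would take the place of the conventional interaction matrix; the matrices below are random placeholders just to exercise the law.

```python
import numpy as np

def ibvs_twist(L, s, s_star, lam=0.5):
    """Classical IBVS law: v = -lam * pinv(L) @ (s - s_star), where L is the
    interaction (image Jacobian) matrix stacked over the tracked features."""
    return -lam * np.linalg.pinv(L) @ (s - s_star)

# Four point features give 8 image coordinates; the camera twist has 6 DoF.
L = np.random.randn(8, 6)      # placeholder interaction matrix
s = np.random.randn(8)         # current feature vector
s_star = np.zeros(8)           # desired feature vector
print(ibvs_twist(L, s, s_star))
```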

2021


17 pages, 4391 KiB  
Article
Time-Optimal Velocity Tracking Control for Consensus Formation of Multiple Nonholonomic Mobile Robots
by Hamidreza Fahham, Abolfazl Zaraki, Gareth Tucker and Mark W. Spong
Sensors 2021, 21(23), 7997; https://doi.org/10.3390/s21237997 - 30 Nov 2021
Cited by 2 | Viewed by 2629
Abstract
The problem of velocity tracking is considered essential in the consensus of multi-wheeled mobile robot systems to minimise the total operating time and enhance the system’s energy efficiency. This study presents a novel switched-system approach, consisting of bang-bang control and consensus formation algorithms, to address the problem of time-optimal velocity tracking of multiple wheeled mobile robots with nonholonomic constraints. The aim is to achieve the desired velocity formation in the least time for any initial velocity conditions in a multiple mobile robot system. The main findings of this study are as follows: (i) by deriving the equation of motion along the specified path, the motor’s extremal conditions for a time-optimal trajectory are introduced; (ii) utilising a general consensus formation algorithm, the desired velocity formation is achieved; (iii) applying the Pontryagin Maximum Principle, a new switching formation matrix of weights is obtained. Using this new switching matrix of weights guarantees that at least one of the system’s motors, of either the followers or the leader, reaches its maximum or minimum value through extremals, which enables the multi-robot system to reach the velocity formation in the least time. The proposed approach is verified through theoretical analysis along with numerical simulation. The simulation results demonstrated that, using the proposed switched system, the time-optimal consensus algorithm behaved very well in networks with different numbers of robots and different topology conditions. The required time for the consensus formation is dramatically reduced, which is very promising. The findings of this work could be extended to, and be beneficial for, any multi-wheeled mobile robot system.
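
The consensus ingredient is the standard discrete-time update in which each robot nudges its velocity toward its neighbours'; the paper's contribution is to replace a fixed weight matrix with a switching one derived from the Pontryagin Maximum Principle. The sketch below shows only the generic update, with an illustrative three-robot topology.

```python
import numpy as np

W = np.array([[0.0, 0.5, 0.5],     # illustrative neighbour weights
              [0.5, 0.0, 0.5],     # (row i: how robot i weighs the others)
              [0.5, 0.5, 0.0]])
eps = 0.1                          # consensus gain / step size
v = np.array([0.2, 0.8, 0.5])      # initial robot velocities (m/s)

for _ in range(100):
    # v_i += eps * sum_j w_ij * (v_j - v_i)
    v = v + eps * (W @ v - v * W.sum(axis=1))

print(v)  # all entries converge toward a common formation velocity
```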

14 pages, 2179 KiB  
Article
Path Driven Dual Arm Mobile Co-Manipulation Architecture for Large Part Manipulation in Industrial Environments
by Aitor Ibarguren and Paul Daelman
Sensors 2021, 21(19), 6620; https://doi.org/10.3390/s21196620 - 5 Oct 2021
Cited by 4 | Viewed by 2522
Abstract
Collaborative part transportation is an interesting application, as many industrial sectors require moving large parts among different areas of their workshops, devoting a large amount of the workforce to these tasks. Even so, the implementation of such robotic solutions raises technical challenges such as force-based control and robot-to-human feedback. This paper presents a path-driven mobile co-manipulation architecture, proposing an algorithm that deals with all the steps of collaborative part transportation: from the generation of force-based twist commands, through path management for the definition of safe and collaborative areas, to the feedback provided to system users. The proposed approach allows creating collaborative lanes for the conveyance of large components. The implemented solution and the performed tests show the suitability of the proposed architecture, allowing the creation of a functional robotic system able to assist operators in transporting large parts in workshops.
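
The "force-based twist commands" step admits a very small sketch: a damping-style admittance law that maps the wrench the operator applies to the part into a platform velocity, with a dead-band against sensor noise and a saturation for safety. The gains and limits below are illustrative, not the paper's values.

```python
import numpy as np

def force_to_twist(force, damping=np.array([40.0, 40.0, 60.0]),
                   deadband=5.0, v_max=0.25):
    """Map operator forces (N) into a linear platform twist (m/s)."""
    f = np.where(np.abs(force) < deadband, 0.0, force)  # ignore small noise
    v = f / damping                                     # v = D^-1 f
    return np.clip(v, -v_max, v_max)                    # safety saturation

print(force_to_twist(np.array([30.0, -8.0, 2.0])))
```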

26 pages, 7553 KiB  
Article
Intuitive Spatial Tactile Feedback for Better Awareness about Robot Trajectory during Human–Robot Collaboration
by Stefan Grushko, Aleš Vysocký, Dominik Heczko and Zdenko Bobovský
Sensors 2021, 21(17), 5748; https://doi.org/10.3390/s21175748 - 26 Aug 2021
Cited by 26 | Viewed by 4047
Abstract
In this work, we extend our previously proposed approach to improving mutual perception during human–robot collaboration, in which the robot’s motion intentions and status are communicated to a human worker through hand-worn haptic feedback devices. The improvement introduces spatial tactile feedback, which provides the human worker with more intuitive information about the currently planned robot trajectory, given its spatial configuration. The enhanced feedback devices communicate directional information through the activation of six tactors spatially organised to represent an orthogonal coordinate frame: the vibration activates on the side of the feedback device that is closest to the future path of the robot. To test the effectiveness of the improved human–machine interface, two user studies were prepared and conducted. The first study aimed to quantitatively evaluate the ease of differentiating the activation of individual tactors of the notification devices. The second user study aimed to assess the overall usability of the enhanced notification mode for improving human awareness of the planned trajectory of the robot. The results of the first experiment allowed us to identify the tactors whose vibration intensity was most often confused by users. The results of the second experiment showed that the enhanced notification system allowed the participants to complete the task faster and, in general, improved user awareness of the robot’s movement plan, according to both objective and subjective data. Moreover, the majority of participants (82%) favoured the improved notification system over its previous non-directional version and vision-based inspection.
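
The mapping from a planned-trajectory direction to one of the six tactors can be pictured with a dominant-axis rule: vibrate the tactor on the side of the hand-worn frame nearest the robot's future path. This is a plausible minimal reading of the described behaviour, not the authors' firmware.

```python
import numpy as np

TACTORS = {(0, 1): "x+", (0, -1): "x-",   # one tactor per signed axis of the
           (1, 1): "y+", (1, -1): "y-",   # orthogonal frame worn on the hand
           (2, 1): "z+", (2, -1): "z-"}

def select_tactor(direction):
    """Pick the tactor on the side closest to the robot's future path,
    given the direction from the hand to that path."""
    axis = int(np.argmax(np.abs(direction)))   # dominant component
    sign = 1 if direction[axis] >= 0 else -1
    return TACTORS[(axis, sign)]

print(select_tactor(np.array([0.1, -0.7, 0.2])))  # -> "y-"
```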

26 pages, 378 KiB  
Review
Artificial Vision Algorithms for Socially Assistive Robot Applications: A Review of the Literature
by Victor Manuel Montaño-Serrano, Juan Manuel Jacinto-Villegas, Adriana Herlinda Vilchis-González and Otniel Portillo-Rodríguez
Sensors 2021, 21(17), 5728; https://doi.org/10.3390/s21175728 - 25 Aug 2021
Cited by 7 | Viewed by 4637
Abstract
Today, computer vision algorithms are very important for many fields and applications, such as closed-circuit television security, health status monitoring, the recognition of specific people or objects, and robotics. In this context, the present paper provides a recent review of the literature on computer vision algorithms (recognition and tracking of faces, bodies, and objects) oriented towards socially assistive robot applications. The performance, frames-per-second (FPS) processing speed, and hardware used to run the algorithms are highlighted by comparing the available solutions. Moreover, this paper provides general information for researchers interested in knowing which vision algorithms are available, enabling them to select the one that is most suitable for inclusion in their robotic system applications.

18 pages, 2628 KiB  
Article
Robot Transparency and Anthropomorphic Attribute Effects on Human–Robot Interactions
by Jianmin Wang, Yujia Liu, Tianyang Yue, Chengji Wang, Jinjing Mao, Yuxi Wang and Fang You
Sensors 2021, 21(17), 5722; https://doi.org/10.3390/s21175722 - 25 Aug 2021
Cited by 9 | Viewed by 4671
Abstract
Anthropomorphic robots need to maintain effective and emotive communication with humans as automotive agents to establish and maintain effective human–robot performances and positive human experiences. Previous research has shown that the characteristics of robot communication positively affect human–robot interaction outcomes such as usability, trust, workload, and performance. In this study, we investigated the characteristics of transparency and anthropomorphism in robotic dual-channel communication, encompassing the voice channel (low or high, increasing the amount of information provided by textual information) and the visual channel (low or high, increasing the amount of information provided by expressive information). The results showed the benefits and limitations of increasing the transparency and anthropomorphism, demonstrating the significance of the careful implementation of transparency methods. The limitations and future directions are discussed.

19 pages, 3395 KiB  
Article
Vertical Jumping for Legged Robot Based on Quadratic Programming
by Dingkui Tian, Junyao Gao, Xuanyang Shi, Yizhou Lu and Chuzhao Liu
Sensors 2021, 21(11), 3679; https://doi.org/10.3390/s21113679 - 25 May 2021
Cited by 7 | Viewed by 3120
Abstract
The highly dynamic legged jumping motion is a challenging research topic because of the lack of established control schemes that both handle the over-constrained control objectives in the stance phase, which are coupled and affect each other, and control the robot’s posture in the flight phase, in which the robot is underactuated owing to the feet leaving the ground. This paper introduces an approach for realizing the cyclic vertical jumping motion of a planar simplified legged robot that formulates the jumping problem within a quadratic programming (QP)-based framework. Unlike prior works, which add different weights to control tasks to express their relative hierarchy, our framework uses a hierarchical quadratic programming (HQP) control strategy to guarantee the strict prioritization of the center of mass (CoM) in the stance phase, while split dynamic equations are incorporated into the unified quadratic programming framework to keep the robot’s posture near a desired constant value in the flight phase. The controller is tested in two simulation environments, with and without the flight-phase controller. The results validate the flight-phase controller: the HQP controller has maximum CoM errors of 0.47 cm in the x direction and 0.82 cm in the y direction, enabling the strict prioritization of the CoM.
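
The strict prioritization that HQP provides can be illustrated, for equality tasks only, with the classical null-space construction: the posture task is resolved inside the null space of the CoM task, so it can never degrade CoM tracking. A real HQP solver additionally handles inequality constraints, which this sketch omits; the task matrices are random placeholders.

```python
import numpy as np

def hierarchical_ls(A1, b1, A2, b2):
    """Two-level strict hierarchy: solve A1 x = b1 in the least-squares
    sense, then fit A2 x = b2 only within the null space of A1."""
    x1 = np.linalg.pinv(A1) @ b1
    N1 = np.eye(A1.shape[1]) - np.linalg.pinv(A1) @ A1  # null-space projector
    x2 = np.linalg.pinv(A2 @ N1) @ (b2 - A2 @ x1)
    return x1 + N1 @ x2

A_com, b_com = np.random.randn(2, 6), np.array([0.0, 9.81])  # CoM task
A_post, b_post = np.random.randn(3, 6), np.zeros(3)          # posture task
x = hierarchical_ls(A_com, b_com, A_post, b_post)
print(A_com @ x - b_com)   # primary-task residual is unaffected by task 2
```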

17 pages, 4715 KiB  
Article
A Bio-Inspired Compliance Planning and Implementation Method for Hydraulically Actuated Quadruped Robots with Consideration of Ground Stiffness
by Xiaoxing Zhang, Haoyuan Yi, Junjun Liu, Qi Li and Xin Luo
Sensors 2021, 21(8), 2838; https://doi.org/10.3390/s21082838 - 17 Apr 2021
Cited by 3 | Viewed by 3088
Abstract
There has been rising interest in compliant legged locomotion to improve the adaptability and energy efficiency of robots. However, few approaches can be generalized to soft ground due to the lack of consideration of the ground surface. When a robot locomotes on soft ground, the elastic robot legs and the compressible ground surface are connected in series. The combined compliance of the leg and surface determines the natural dynamics of the whole system and affects the stability and efficiency of the robot. This paper proposes a bio-inspired leg compliance planning and implementation method that takes the ground surface into consideration. The ground stiffness is estimated based on an analysis of ground reaction forces in the frequency domain, and the leg compliance is actively regulated during locomotion, adapting it to achieve harmonic oscillation. The leg compliance is planned on the condition of resonant movement, which agrees with the natural dynamics and facilitates rhythmicity and efficiency. The proposed method has been implemented on a hydraulic quadruped robot. The simulation and experimental results verified the effectiveness of our method.
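
One minimal way to picture "frequency-domain analysis of ground reaction forces" is to locate the spectral peak of the force trace and, under a simple spring-mass assumption, back out a combined stiffness from the natural frequency via k = m (2*pi*f_n)^2. Both the peak-picking and the spring-mass mapping below are assumptions for illustration, not the authors' estimator.

```python
import numpy as np

def dominant_frequency(grf, fs):
    """Return the frequency (Hz) of the largest spectral peak of a
    ground-reaction-force trace sampled at fs Hz."""
    spectrum = np.abs(np.fft.rfft(grf - np.mean(grf)))
    freqs = np.fft.rfftfreq(len(grf), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

fs, m = 1000.0, 25.0                    # sample rate (Hz), supported mass (kg)
t = np.arange(0.0, 2.0, 1.0 / fs)
grf = 250.0 + 40.0 * np.sin(2 * np.pi * 6.0 * t)   # synthetic 6 Hz oscillation
f_n = dominant_frequency(grf, fs)
k_combined = m * (2 * np.pi * f_n) ** 2  # spring-mass reading of the peak
print(f_n, k_combined)
```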

15 pages, 5003 KiB  
Article
Variable Admittance Control Based on Human–Robot Collaboration Observer Using Frequency Analysis for Sensitive and Safe Interaction
by Hyomin Kim and Woosung Yang
Sensors 2021, 21(5), 1899; https://doi.org/10.3390/s21051899 - 8 Mar 2021
Cited by 8 | Viewed by 4092
Abstract
A collaborative robot should be sensitive to the user’s intention while maintaining safe interaction during tasks such as hand guiding. Observers based on the discrete Fourier transform have been studied to distinguish between the low-frequency motion elicited by the operator and the high-frequency behavior resulting from system instability and disturbances. However, the discrete Fourier transform requires an excessively long sampling time. We propose a human–robot collaboration observer based on an infinite impulse response filter to increase the intention recognition speed. Using this observer, we also propose a variable admittance controller to ensure safe collaboration. The recognition speed of the human–robot collaboration observer is 0.29 s, 3.5 times faster than frequency analysis based on the discrete Fourier transform. The performance of the variable admittance controller and the improved recognition speed are experimentally verified on a two-degrees-of-freedom manipulator. We confirm that the improved recognition speed of the proposed human–robot collaboration observer allows a timely recovery from unsafe to safe collaboration.
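
The observer idea, splitting the interaction signal into a low-frequency band attributed to the operator and a high-frequency band attributed to instability, can be sketched with an off-the-shelf IIR filter and then used to switch the admittance damping. The cut-off, threshold, and gains are illustrative assumptions, not the paper's design.

```python
import numpy as np
from scipy import signal

fs, fc = 500.0, 3.0                                   # sample and cut-off (Hz)
b_lp, a_lp = signal.butter(2, fc / (fs / 2), "low")   # 2nd-order IIR low-pass

def classify_and_adapt(force_window):
    """Split the force into operator (low-frequency) and disturbance
    (high-frequency) parts, then pick an admittance damping."""
    low = signal.lfilter(b_lp, a_lp, force_window)
    high = force_window - low
    oscillatory = np.std(high) > 0.5 * np.std(low)    # crude instability test
    damping = 60.0 if oscillatory else 15.0           # stiffen when unsafe
    return ("unsafe" if oscillatory else "safe"), damping

t = np.arange(0.0, 1.0, 1.0 / fs)
force = np.sin(2 * np.pi * 0.8 * t) + 0.8 * np.sin(2 * np.pi * 12.0 * t)
print(classify_and_adapt(force))
```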

19 pages, 2684 KiB  
Article
Simulation of Upward Jump Control for One-Legged Robot Based on QP Optimization
by Dingkui Tian, Junyao Gao, Chuzhao Liu and Xuanyang Shi
Sensors 2021, 21(5), 1893; https://doi.org/10.3390/s21051893 - 8 Mar 2021
Cited by 6 | Viewed by 3242
Abstract
An optimization framework for upward jumping motion based on quadratic programming (QP) is proposed in this paper, which can simultaneously consider constraints such as the zero moment point (ZMP), the limitation of angular accelerations, and anti-slippage. Our approach comprises two parts: trajectory generation and real-time control. In the trajectory generation for the launch phase, we discretize the continuous trajectories, assume that the accelerations between two sampling intervals are constant, and transcribe the problem into a nonlinear optimization problem. In the real-time control of the stance phase, the over-constrained control objectives, such as the tracking of the center of mass (CoM), angle, and angular momentum, and constraints, such as anti-slippage, the ZMP, and the limitation of joint acceleration, are unified within a framework based on QP optimization. The input angles of the actuated joints are thus obtained through a simple iteration. The simulation result reveals that a successful upward jump to a height of 16.4 cm was achieved, which confirms that the controller fully satisfies all constraints and achieves the control objectives.
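
The transcription step, holding accelerations constant over each sampling interval and integrating them forward, is simple to picture. The sketch below recovers velocities and positions from candidate accelerations; a nonlinear optimizer would then tune those accelerations subject to the ZMP, joint-limit, and anti-slippage constraints at each knot point. Names and values are illustrative.

```python
import numpy as np

def integrate_piecewise_constant(q0, dq0, ddq, dt):
    """Forward-integrate accelerations that are held constant over each
    sampling interval (the transcription assumption), returning the
    resulting position and velocity sequences."""
    q, dq = [q0], [dq0]
    for a in ddq:
        dq.append(dq[-1] + a * dt)
        q.append(q[-1] + dq[-2] * dt + 0.5 * a * dt ** 2)
    return np.array(q), np.array(dq)

ddq = np.full(50, 2.0)          # candidate joint accelerations (rad/s^2)
q, dq = integrate_piecewise_constant(0.0, 0.0, ddq, dt=0.01)
print(q[-1], dq[-1])            # state at the end of the launch phase
```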
