
Human-Robot Interaction

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (31 July 2020) | Viewed by 17911

Special Issue Editors


Prof. Dr. Matjaž Mihelj
Guest Editor
University of Ljubljana, Faculty of Electrical Engineering, Tržaška c. 25, 1000 Ljubljana, Slovenia
Interests: human-robot interaction; collaborative robotics; rehabilitation robotics; haptic interfaces; robot control; virtual and augmented reality

Dr. Tadej Petrič
Co-Guest Editor
Jozef Stefan Institute, Ljubljana, Slovenia
Interests: robotics; humanoids; exoskeleton; nonlinear control; robot learning

Special Issue Information

Dear Colleagues,

In recent years, we have witnessed impressive advances in areas where human–robot interaction is required to accomplish a task or a mission. Robots now safely and reliably coexist, collaborate, and cooperate with humans; share spaces and tasks with them; learn and adapt to new tasks and environmental conditions; interact with different groups of people (workers, children, adults, the elderly, patients) in manufacturing and during activities of daily living; and operate in various environments, such as homes, factories, and hospitals. All of this was made possible by research focused on the development of autonomous and semi-autonomous robots that can operate in close cooperation with humans, in environments previously occupied only by humans, that are capable of learning from humans, and that are able to generalize acquired knowledge. New robot functionalities developed in the last decade increase system complexity in terms of mechanisms, sensing capabilities, computational demands, communications, energy requirements, control, and human–robot interfaces. Intuitive and transparent interfaces between humans and robots are at the forefront of development. Human–robot interaction is not only about physical contact; it also involves cognitive, social, and emotional aspects.

The aim of this Special Issue is to showcase advanced human–robot interaction concepts based on complex sensing and control capabilities, with strong consideration of safety aspects. Ideally, concepts should be demonstrated under realistic operating conditions. Although the primary focus of this Special Issue, as summarized below, is on physical human–robot interaction, concepts that involve cognitive and social aspects of interaction are encouraged as well. Topics of interest include, but are not limited to:

  • Applications based on collaborative, wearable, assistive, medical, rehabilitation, and haptic robots;
  • Sensor fusion, intention detection, and robot control;
  • Shared control algorithms (collaborative, assistive, rehabilitation robots);
  • Novel sensing technologies and concepts for robot safety in various environments (industrial, clinical, home);
  • Human factor analyses in human–robot interaction;
  • Transparent interfaces for human–robot interaction;
  • Sensing and control concepts for physical human–robot interaction;
  • Ergonomics in human–robot physical collaboration.

Prof. Dr. Matjaž Mihelj
Dr. Tadej Petrič
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Collaborative robotics
  • Wearable robotics
  • Wearable sensors
  • Assistive robotics
  • Medical and rehabilitation robotics
  • Haptic interfaces
  • Feedback devices
  • Sensor fusion
  • Robot safety
  • Robot control
  • Shared control
  • Robot adaptation and learning
  • Intention detection
  • Human factors
  • Brain–computer interfaces
  • Physical human–robot collaboration

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

23 pages, 9129 KiB  
Article
Passive Exercise Adaptation for Ankle Rehabilitation Based on Learning Control Framework
by Fares J. Abu-Dakka, Angel Valera, Juan A. Escalera, Mohamed Abderrahim, Alvaro Page and Vicente Mata
Sensors 2020, 20(21), 6215; https://doi.org/10.3390/s20216215 - 31 Oct 2020
Cited by 17 | Viewed by 5015
Abstract
Ankle injuries are among the most common injuries in sport and daily life. However, for their recovery, it is important for patients to perform rehabilitation exercises. These exercises are usually done with a therapist’s guidance to help strengthen the patient’s ankle joint and restore its range of motion. However, in order to share the load with therapists so that they can offer assistance to more patients, and to provide an efficient and safe way for patients to perform ankle rehabilitation exercises, we propose a framework that integrates learning techniques with a 3-PRS parallel robot, acting together as an ankle rehabilitation device. In this paper, we propose to use passive rehabilitation exercises for dorsiflexion/plantar flexion and inversion/eversion ankle movements. The therapist is needed in the first stage to design the exercise with the patient by teaching the robot intuitively through learning from demonstration. We then propose a learning control scheme based on dynamic movement primitives and iterative learning control, which takes the designed exercise trajectory as a demonstration (an input) together with the recorded forces in order to reproduce the exercise with the patient for a number of repetitions defined by the therapist. During the execution, our approach monitors the sensed forces and adapts the trajectory by adding the necessary offsets to the original trajectory to reduce its range without modifying the original trajectory and subsequently reducing the measured forces. After a predefined number of repetitions, the algorithm restores the range gradually, until the patient is able to perform the originally designed exercise. We validate the proposed framework with both real experiments and simulation using a Simulink model of the rehabilitation parallel robot that has been developed in our lab.
(This article belongs to the Special Issue Human-Robot Interaction)
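The force-triggered range adaptation described in the abstract can be sketched roughly as follows. This is a simplified illustration, not the authors' DMP/ILC implementation; the force threshold, shrink rate, and restore rate are invented for the example:

```python
# Illustrative sketch: force-triggered range reduction of an exercise
# trajectory, with gradual restoration once forces are tolerated again.
import numpy as np

def adapt_exercise(reference, forces, f_max=10.0, shrink=0.1, restore=0.02):
    """Scale a zero-centered exercise trajectory toward its midpoint when
    the measured interaction force exceeds a threshold, then restore the
    full range gradually over subsequent samples."""
    scale = 1.0
    adapted = []
    for q_ref, f in zip(reference, forces):
        if abs(f) > f_max:                      # patient resists: reduce range
            scale = max(0.0, scale - shrink)
        else:                                   # tolerated: restore gradually
            scale = min(1.0, scale + restore)
        adapted.append(scale * q_ref)
    return np.array(adapted)
```

In the paper the offsets are applied within a dynamic-movement-primitive representation and refined by iterative learning control across repetitions; the sketch keeps only the monitor-shrink-restore logic.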

15 pages, 2742 KiB  
Article
An Intuitive Formulation of the Human Arm Active Endpoint Stiffness
by Yuqiang Wu, Fei Zhao, Wansoo Kim and Arash Ajoudani
Sensors 2020, 20(18), 5357; https://doi.org/10.3390/s20185357 - 18 Sep 2020
Cited by 15 | Viewed by 3953
Abstract
In this work, we propose an intuitive and real-time model of the human arm active endpoint stiffness. In our model, the symmetric and positive-definite stiffness matrix is constructed through the eigendecomposition K_c = V D V^T, where V is an orthonormal matrix whose columns are the normalized eigenvectors of K_c, and D is a diagonal matrix whose entries are the eigenvalues of K_c. In this formulation, we propose to construct V and D directly by exploiting the geometric information from a reduced human arm skeleton structure in 3D and from the assumption that human arm muscles work synergistically when co-contracted. Through the perturbation experiments across multiple subjects under different arm configurations and muscle activation states, we identified the model parameters and examined the modeling accuracy. In comparison to our previous models for predicting human active arm endpoint stiffness, the new model offers significant advantages such as fast identification and personalization due to its principled simplicity. The proposed model is suitable for applications such as teleoperation, human–robot interaction and collaboration, and human ergonomic assessments, where a personalizable and real-time human kinodynamic model is a crucial requirement.
(This article belongs to the Special Issue Human-Robot Interaction)
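The eigendecomposition construction K_c = V D V^T described in the abstract can be illustrated with a short NumPy sketch. The principal axes and stiffness magnitudes below are invented placeholders, not values or geometry from the paper:

```python
# Minimal sketch: building a symmetric positive-definite endpoint stiffness
# matrix from orthonormal principal axes (columns of V) and eigenvalues (D).
import numpy as np

def endpoint_stiffness(axes, magnitudes):
    """Construct K = V D V^T from orthonormal axes and positive eigenvalues;
    the result is symmetric positive definite by construction."""
    V = np.asarray(axes, dtype=float)
    D = np.diag(magnitudes)
    return V @ D @ V.T

# Example: principal axes aligned with the base frame, stiffest along x
# (hypothetical magnitudes in N/m).
K = endpoint_stiffness(np.eye(3), [300.0, 150.0, 100.0])
```

In the paper, V is derived from a reduced 3D arm-skeleton geometry and D from the muscle co-contraction assumption, which is what makes the identification fast and personalizable.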

13 pages, 2547 KiB  
Article
Proprioceptive Estimation of Forces Using Underactuated Fingers for Robot-Initiated pHRI
by Joaquin Ballesteros, Francisco Pastor, Jesús M. Gómez-de-Gabriel, Juan M. Gandarias, Alfonso J. García-Cerezo and Cristina Urdiales
Sensors 2020, 20(10), 2863; https://doi.org/10.3390/s20102863 - 18 May 2020
Cited by 11 | Viewed by 4403
Abstract
In physical Human–Robot Interaction (pHRI), forces exerted by humans need to be estimated to accommodate robot commands to human constraints, preferences, and needs. This paper presents a method for the estimation of the interaction forces between a human and a robot using a gripper with proprioceptive sensing. Specifically, we measure forces exerted by a human limb grabbed by an underactuated gripper in a frontal plane using only the gripper’s own sensors. This is achieved via a regression method, trained with experimental data from the values of the phalanx angles and actuator signals. The proposed method is intended for adaptive shared control in limb manipulation. Although adding force sensors provides better performance, the results obtained are accurate enough for this application. This approach requires no additional hardware: it relies uniquely on the gripper motor feedback—current, position and torque—and joint angles. Also, it is computationally cheap, so processing times are low enough to allow continuous human-adapted pHRI for shared control.
(This article belongs to the Special Issue Human-Robot Interaction)
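The proprioceptive regression idea can be sketched as follows. Synthetic data and a plain least-squares fit stand in for the paper's trained regressor and real gripper signals; the feature set and coefficients are invented:

```python
# Illustrative sketch: estimating contact force from gripper proprioception
# (phalanx angle, motor current) with a least-squares linear regression.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: [phalanx angle (rad), motor current (A)] -> force (N)
X = rng.uniform([0.0, 0.0], [1.5, 2.0], size=(200, 2))
true_w = np.array([4.0, 8.0])                   # invented ground-truth mapping
y = X @ true_w + rng.normal(0.0, 0.1, 200)      # noisy "measured" forces

# Fit with ordinary least squares (bias term omitted for brevity).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def estimate_force(angle, current):
    """Predict interaction force from proprioceptive readings only."""
    return np.array([angle, current]) @ w
```

The appeal mirrored here is that the estimator is cheap to evaluate, so force estimates stay fast enough for continuous shared control without adding force sensors.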

13 pages, 3737 KiB  
Article
Gaussian Mixture Models for Control of Quasi-Passive Spinal Exoskeletons
by Marko Jamšek, Tadej Petrič and Jan Babič
Sensors 2020, 20(9), 2705; https://doi.org/10.3390/s20092705 - 9 May 2020
Cited by 20 | Viewed by 3858
Abstract
Research and development of active and passive exoskeletons for preventing work-related injuries has steadily increased in the last decade. Recently, new types of quasi-passive designs have been emerging. These exoskeletons use passive viscoelastic elements, such as springs and dampers, to provide support to the user, while using small actuators only to change the level of support or to disengage the passive elements. Control of such devices is still largely unexplored, especially the algorithms that predict the movement of the user, to take maximum advantage of the passive viscoelastic elements. To address this issue, we developed a new control scheme consisting of Gaussian mixture models (GMM) in combination with a state machine controller to identify and classify the movement of the user as early as possible and thus provide a timely control output for the quasi-passive spinal exoskeleton. In a leave-one-out cross-validation procedure, the overall accuracy for providing support to the user was 86.72 ± 0.86% (mean ± s.d.), with a sensitivity and specificity of 97.46 ± 2.09% and 83.15 ± 0.85%, respectively. The results of this study indicate that our approach is a promising tool for the control of quasi-passive spinal exoskeletons.
(This article belongs to the Special Issue Human-Robot Interaction)
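The GMM-based movement classification can be illustrated with a minimal sketch. Each class is reduced to a one-component Gaussian model, and the paper's state machine is reduced to a single gating decision; the features, classes, and parameters are invented for illustration:

```python
# Hedged sketch: classify user movement by maximum Gaussian log-likelihood,
# then gate exoskeleton support on the predicted class.
import numpy as np

def log_gauss(x, mean, cov):
    """Log-density of a multivariate Gaussian at x."""
    d = len(mean)
    diff = x - mean
    return -0.5 * (d * np.log(2 * np.pi) + np.log(np.linalg.det(cov))
                   + diff @ np.linalg.solve(cov, diff))

def classify(x, models):
    """Return the movement class whose model assigns the highest likelihood."""
    return max(models, key=lambda c: log_gauss(x, *models[c]))

# Invented per-class models over [trunk flexion (rad), gait velocity] features.
models = {
    "bend": (np.array([0.8, 0.1]), np.eye(2) * 0.05),
    "walk": (np.array([0.1, 0.5]), np.eye(2) * 0.05),
}

# Engage spinal support only when a bending movement is detected.
engage_support = classify(np.array([0.75, 0.15]), models) == "bend"
```

A full GMM would sum several weighted components per class (e.g. via `scipy.stats` or scikit-learn's `GaussianMixture`), and early classification would run this decision on a sliding window of sensor samples.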
