1. Introduction
Recent trends in robotics pursue the incorporation of robotic systems among people. Social robots are gaining importance, mostly in healthcare applications, e.g., helping patients [1] or elderly people [2]. Collaborative robots (cobots) are expected to cooperate with people in physical tasks (e.g., moving large objects [3]). Cooperation requires adaptation on both sides. Hence, cobots must be safe [4,5,6] and include force-sensing capabilities to better adapt to people's feedback and constraints.
Force sensing is particularly important in physical Human–Robot Interaction (pHRI), where robots are expected to physically manipulate a person, e.g., during rehabilitation [7], or when using exoskeletons [8] or prostheses [9]. In these cases, dependability and safety become a major concern [10,11], especially when it is up to the robot to intentionally touch and/or manipulate people using grippers [12]. These applications include assistive robotics [13], search and rescue missions [14] and healthcare [15], among others. These robots need to accommodate humans' constraints and needs via shared control. To achieve continuous, transparent adaptation, low-level shared control must rely on blending the robot commands with the human intention. Although accuracy in force estimation is not critical at this level, since no high precision is required and constant feedback and adaptation tend to correct minor errors, it is at least necessary to assess the direction and magnitude of the human force, so the robot can comply with the person's input.
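For illustration, a minimal linear blending law (our notation, not a formula taken from the cited works) combines the two command sources as
$$u = \alpha\, u_H + (1 - \alpha)\, u_R, \qquad 0 \le \alpha \le 1,$$
where $u_H$ is the command inferred from the human intention, $u_R$ is the robot command, and $\alpha$ is an arbitration weight that may vary with the confidence in the intention estimate.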
There are several methods to estimate human/robot interaction forces. The most common are industrial force/torque sensors, current-controlled cobots and sensitive manipulators with joint torque sensors. Industrial sensors are accurate and provide Cartesian external forces, but they are usually bulky, heavy and expensive. Manipulators with joint torque sensing can be implemented using elastic joints, where forces are easily estimated from the position error and the inverse Jacobian of the arm [16], but these provide poor performance. Motor current-based force estimation depends on an accurate model of the arm dynamics, including friction [17]. Other robots include rigid joints with torque sensors integrated with the controller, used to provide impedance control; these robots are expensive, require complex control [18], and are used mostly for research purposes. Some experimental force sensors for robot arms are based on lightweight pressure sensors, such as piezoresistive sensors [19], MEMS barometric sensors [20], or optical sensors [21], which are still under development.
Alternatively, force estimation could be performed on the end effector instead of on the robot arm. The actual choice of end effector is important for safe human manipulation. Although there are many different grippers [22], safe, reliable and autonomous grippers sensitive enough to manipulate human limbs are still under research. Soft grippers are receiving growing interest in this field [23], but precise manipulation often requires an adaptive yet more controllable solution. Gripper-based proprioceptive sensing has already been used in pHRI and can provide such a solution. In [24], variations in the forearm perimeter have been used to estimate hand postures. Other studies [25] propose using only proprioceptive sensors to estimate both the position of a surgical instrument carried by an underactuated arm (continuously flexible or made of rigid serial links) and the interaction forces, based on the actual positions of the intermediate stages.
The authors have already published works in the fields of force sensing in pHRI [26] and intelligent tactile perception in robotics [27]. Specifically, in [28] they suggested using additional joint angle sensors to estimate the shape of grasped objects for limb manipulation planning. In this work, we propose extending the capabilities of the gripper in [28] to also assess forces exerted by the human forearm without further modifications, i.e., without additional sensors. Given the complexity of these problems, many methods rely on machine learning techniques rather than analytic models. For example, in [29], interaction forces between a human and a cobot were measured using an external industrial 6-axis force sensor. In [30], the interaction force is predicted using flexible joints with integrated force sensors to estimate the global friction of the arm, but the approach requires additional, fully calibrated force sensors in the gripper.
In this work, we propose a regression model to detect interaction forces in a gripper with two underactuated two-phalanx fingers, using only its own proprioceptive joint sensors, namely the servo position and one passive joint angle per finger. Specifically, the gripper is expected to manipulate the forearm of a person in a frontal plane. The gripper adapts to the human forearm by design, flexing its fingers and shifting forces to keep the grip. As the person moves, either to comply with the robot motion or to resist it, the finger angles and applied torques keep adapting, implicitly providing information on the direction and magnitude of the human forces on the grip, as illustrated in Figure 1. The ultimate goal of our proposal is to obtain information on human intention and interaction force in a continuous way, so that the robot may adapt to human needs and constraints transparently via shared control in applications involving pHRI. The main novelty of our proposal is that it relies on information that grippers can provide without any additional hardware; hence, we avoid any extra weight, cost and/or complexity in the system. Because the human forearm is non-rigid, proprioceptive sensor information (servo and passive joint positions) is related to the applied forces. Hence, we can train a model using data from tests with volunteers where forces are measured with independent sensors. Thus, inexpensive underactuated grippers with different numbers of fingers can be used to estimate human intention for efficient, low-level shared control in assistive robotics.
The paper is structured as follows. Section 2 presents the design of our gripper and its control system, as well as its kinematic and dynamic analysis. Section 3 describes the proposed force estimation method. Section 4 presents the experimental setup of the system. Section 5 describes the experimental protocol, reports the results and discusses them. Finally, conclusions and future work are provided in Section 6.
3. Force Estimation Method
To overcome the difficulty of obtaining an analytic model, we propose using regression methods. The proposed method adapts better to imperfections and to sensor and mechanical errors, and it can be extended to other similar grippers via training. A schematic of the method is shown in Figure 3, where a bar over a symbol ($\bar{\cdot}$) denotes a measurement, and $\hat{F}_x$ and $\hat{F}_y$ are the estimated external forces.
The inputs of the smart actuators' current-based controllers are the desired position ($\theta_1^{d}$) and the maximum current ($I_{max}$). The current positions ($\theta$), velocities ($\dot{\theta}$) and accelerations ($\ddot{\theta}$) of the joints are considered inputs of the estimator, along with the motor current ($I$) and the PWM of the control signal ($P$). The position is measured from the encoders of the smart actuators ($\bar{\theta}_1$) and the sensors integrated in the passive joints ($\bar{\theta}_2$). Velocities and accelerations are computed from the position with discrete derivatives according to the sample time ($T_s$). The $I$ and $P$ variables present low correlation in our actuators (Pearson coefficient equal to 0.4379) due to the dynamic behavior of the DC motors; hence, both are kept as input parameters in our model. All the signals are defined for the two-finger gripper as in Equation (7), where sub-indices $r$ and $l$ refer to the right and left actuator, respectively.
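For illustration, a minimal MATLAB sketch of how the estimator input can be assembled from the measured signals is given below. Variable names and the column ordering are ours, standing in for one plausible arrangement of Equation (7); the placeholder random data would be replaced by actual DAQ recordings.

```matlab
N  = 200;                              % number of samples (placeholder length)
th = rand(N,4);                        % joint positions [th1_r th1_l th2_r th2_l]
Im = rand(N,2);                        % motor currents [right left]
Pm = rand(N,2);                        % PWM duty cycles [right left]
Ts = 1/50;                             % DAQ sample time (50 Hz)
dth  = [zeros(1,4); diff(th)]  / Ts;   % joint velocities via first differences
ddth = [zeros(1,4); diff(dth)] / Ts;   % joint accelerations via second differences
X = [th, dth, ddth, Im, Pm];           % N-by-16 estimator input, one row per sample
```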
Thus, the goal is to find a non-linear function ($f$) that, according to Equation (8), estimates the external forces from the input parameters. We propose two regression methods to obtain $f$: (i) Support Vector Regression (SVR) [32]; and (ii) Random Forest Regression (RFR) [33].
SVR relies on fitting the error within a certain threshold rather than minimizing it (principle of maximal margin). The main advantage of SVR is that it is a non-parametric technique, i.e., it does not depend on the distributions of the underlying dependent and independent variables. Additionally, it allows the construction of a non-linear model without changing the explanatory variables, which helps in interpreting the resulting model. RFR is a type of additive model that makes predictions by combining decisions from an ensemble of base models (ensemble learning), where each base model is a decision tree. Unlike linear models, RFR can capture non-linear interactions between the features and the target. Both methods cope well with non-linearity and outliers, so they are good candidates to solve our problem.
4. Experimental Setup
By adding proprioceptive angular sensors, the passive joint angles $\theta_{2l}$ and $\theta_{2r}$ can be measured (we use $l$ and $r$ subscripts for the left and right fingers). Thus, given the position information provided by the servos ($\theta_{1l}$ and $\theta_{1r}$), the position of the remaining phalanxes can be computed. As a result, the position of the gripped object can be estimated, and the shape of the grasped object can be inferred from the positions of the two fingers.
Two potentiometers (muRata SV01, 10 kΩ, linear) are used to measure the distal joints; they have been added to the gripper and are sampled by a DAQ at a 50 Hz rate. The actuators are Dynamixel XL450-W250 servos featuring a 12-bit digital magnetic encoder (0.088° resolution) and an advanced position-based controller with torque limits that provides a sort of force control. This capability is essential to control the grasping force. Their internal position PID loops have been set to proportional-only control to obtain compliance with the user interaction forces. The servos provide real-time feedback of the position ($\bar{\theta}_1$), the electrical current ($I$), and the PWM output ($P$) of the controller.
A microcontroller board (Arduino Mega 2560) is used as the gripper interface and DAQ: it periodically samples the analog values from the potentiometers with a 10-bit ADC (1024 levels) and queries the status of the servos at a rate of 10 Hz. A serial port over USB communicates with the main computer to provide the above information and to receive simple Open/Close commands.
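A possible host-side reading loop is sketched below in MATLAB. This is an assumption-laden illustration: the port name, baud rate and comma-separated record format are ours and are not specified in the text.

```matlab
% Hypothetical host-side interface sketch; port, baud rate and record format assumed.
g = serialport("COM3", 115200);        % USB serial link to the Arduino Mega 2560
writeline(g, "CLOSE");                 % simple Open/Close command set
for n = 1:100                          % read 10 s of status records at 10 Hz
    rec = readline(g);                 % one comma-separated status line
    v = str2double(split(rec, ","));   % e.g., joint positions, currents and PWM
    % ... append v to the data log / feed the force estimator ...
end
clear g                                % closes the serial connection
```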
The actual values of the kinematic parameters shown in Figure 2 can be found in Table 1. The gripper has been designed to grasp an upper forearm with a perimeter of up to 19.3 cm (median 17.7 cm), according to the anthropometrics from [34].
The joint ranges present different mechanical limits. The position of the phalanxes depends on the balance between the external forces applied on the proximal and distal phalanxes ($f_1$, $f_2$), the actuator torque ($\tau_a$) and the extension springs (stiffness $k$) used to keep the finger stable when no external forces are applied.
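As a sketch of this balance (our notation; the exact expression depends on the kinematics of Section 2), the quasi-static equilibrium of one two-phalanx finger can be written as
$$\begin{bmatrix} \tau_a \\ -k\,(\theta_2 - \theta_2^{0}) \end{bmatrix} = J_1(\theta)^{\top} f_1 + J_2(\theta)^{\top} f_2,$$
where $J_1$ and $J_2$ are the contact Jacobians of the proximal and distal phalanxes, $\tau_a$ acts on the actuated joint, and the spring provides the restoring torque on the passive joint around its rest angle $\theta_2^{0}$.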
To use machine learning techniques to estimate forces with the gripper, we need to capture all the parameters of a problem instance simultaneously, and we need a test environment that ensures repeatability. Hence, we have designed the structure in Figure 4 to obtain a ground truth for our regression methods. The gripper is physically attached to six load cells used to estimate Cartesian forces. Thus, when the gripper is closed around a moving forearm, we can record its force and all the gripper parameters at the same rate of 20 Hz.
The force sensors in our structure (see Figure 5a) have been calibrated using a dummy forearm section and a force meter to obtain ground-truth values. Different forces at different angles have been applied (Figure 5b) and recorded together with the corresponding load-cell readings. Then, a matrix that relates the six load-cell readings to the applied force has been adjusted using a least squares method. The inverse of this matrix is then used to obtain output forces in N from the load-cell readings.
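A minimal MATLAB sketch of this calibration step follows; the variable names are ours, and we use a pseudoinverse since a reading-to-force mapping between six readings and three force components is not square. The random matrices stand in for the recorded calibration data.

```matlab
% Calibration sketch: F (3-by-N) known forces from the force meter,
% R (6-by-N) simultaneous load-cell readings for the same N trials.
N = 50; F = rand(3,N); R = rand(6,N);   % placeholder calibration recordings
M = R / F;                % least-squares fit of R ~ M*F (6-by-3 matrix)
Minv = pinv(M);           % pseudoinverse used at run time
r = R(:,1);               % a new 6-by-1 load-cell reading
f = Minv * r;             % estimated Cartesian force in N
```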
After the calibration of the force sensor, the positions of the fingers were controlled (proportional control) to keep the person's forearm as close as possible to the center of the gripper. We purposely kept a low gain in the control loop to make it more sensitive to external perturbations.
5. Experimental Protocol and Results
To test our system, five volunteers were asked to have their forearms gripped and to pull in a frontal plane (X and Y movements) to obtain a training data set. Each volunteer performed a 2-min circular motion and a 2-min cross motion. Forces and gripper parameters were captured continuously with a common time reference at 20 Hz (generating more than 20,000 samples).
Figure 6 shows the forces measured in the X and Y axes by the load cells in the structure, the corresponding passive joint and servo positions, and the current and PWM provided by the servo controller for the left side of the gripper. Signals may be positive or negative depending on the motor direction. In this example, the person first moves their forearm three times right and three times left (X-axis) and then two times up and two times down (Y-axis) in a sequence. Forces during the sequence can be clearly observed as peaks in the X axis in Figure 6. It can also be observed how $\theta_{1l}$ and $\theta_{2l}$ evolve with the forces. Similarly, the current and PWM also change depending on the forces. In general, forces in the X-axis (e.g., seconds 1 to 3.5) are more correlated with the gripper parameters than forces in the Y-axis (e.g., seconds 3.5 to 5). This was expected because the gripper is aligned with the Y-axis in our tests, so the gripper fingers tend to slip when the forearm moves vertically.
As discussed in Section 2.3, dynamic parameters are required for force estimation. We obtain angular velocities and accelerations by extracting the first and second derivatives of $\theta_1$ and $\theta_2$. To reduce noise in these new features, we use a moving average filter (size 3). Figure 6 shows these first and second derivatives, which present trends similar to the exerted forces.
It can be observed that the obtained derivatives are centered at zero, whereas $\theta_1$ and $\theta_2$ are not. This happens because these angles depend on the shape of the gripped forearm. To avoid this shape dependence, we propose using the initial forearm angles $\theta_1(0)$ and $\theta_2(0)$, obtained when the gripper initially closes, as an offset.
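These preprocessing steps can be summarized in a short MATLAB sketch; the variable names and the placeholder signal are ours, while the filter size and the offset handling follow the text.

```matlab
th2 = cumsum(randn(200,1));               % placeholder for a recorded theta_2 series
Ts  = 1/20;                               % common capture rate (20 Hz)
th2 = th2 - th2(1);                       % subtract initial grasp angle as offset
dth2  = movmean([0; diff(th2)]  / Ts, 3); % joint velocity, size-3 moving average
ddth2 = movmean([0; diff(dth2)] / Ts, 3); % joint acceleration, same smoothing
```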
5.1. Data Modelling
Our problem requires multi-variable analysis, as we need to estimate forces in both the X and Y axes. The first decision before modelling was consequently whether to model the two forces independently or jointly. We decided to model them independently; otherwise, any motion pattern in the training sets presenting incidental relationships between X and Y, e.g., users performing circular motions or favoring one side over the other, would be captured by the model.
As mentioned, we have tested RFR and SVR to create our model. Both techniques are appropriate to deal with non-linearity (e.g., Figure 6). RFR is particularly fit for problems where parameters have different importance depending on the situation. For example, when forces in the X-axis are low, the current parameter ($I$) does not provide much information, whereas $\theta_2$ still correlates with the force (Figure 6). However, SVR deals better than RFR with sparse data, which in our data set occur at the boundaries, i.e., at large forces.
MATLAB (R2019b, The MathWorks, Natick, MA, USA) has been used to create the RFR and SVR models. RFR has been created using the TreeBagger function; the best hyperparameter set for our data was NumTrees = 40 and MinLeafSize = 5. SVR was created with the fitrsvm function. A radial (Gaussian) kernel with standard parameters outperformed linear and polynomial kernels for our data. Automatic hyperparameter optimization (OptimizeHyperparameters = 'auto') has been used to select the best values for the BoxConstraint, KernelScale and Epsilon parameters.
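For reference, the corresponding MATLAB calls are sketched below with our variable names (X for the input matrix and yX for the measured force in X); the placeholder data stand in for the acquired training set.

```matlab
X = rand(500,16); yX = randn(500,1);      % placeholders for the training data

% RFR: bagged regression trees with the hyperparameters reported above.
rfr = TreeBagger(40, X, yX, 'Method','regression', ...
                 'MinLeafSize',5, 'OOBPredictorImportance','on');
FxRFR = predict(rfr, X);                  % estimated force in X

% SVR: Gaussian (radial) kernel with automatic hyperparameter optimization,
% which tunes BoxConstraint, KernelScale and Epsilon.
svr = fitrsvm(X, yX, 'KernelFunction','gaussian', ...
              'OptimizeHyperparameters','auto');
FxSVR = predict(svr, X);
```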
5.2. Performance Evaluation and Discussion
In this section, we analyze the force estimation results in the frontal plane (X and Y) for both RFR and SVR. To do so, we acquired data from five volunteers using the described system. Each volunteer performed two tests: (i) exerting forces in a cross pattern (only X or Y direction at a given time), and (ii) exerting forces in a circular pattern (forces in both X and Y at all times), as shown in Figure 7. Volunteers were trained to apply constrained forces, although no mechanism was used to keep them within a given interval. Ten tests were recorded per volunteer, in a total testing time of 2 min after training. Afterwards, a k-fold cross-validation per volunteer was used to evaluate the accuracy of the models. Finally, the Mean Absolute Error (MAE) was calculated for all tests of each regression model in X and Y.
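A per-volunteer evaluation loop along these lines can be sketched as follows; the fold count k is an assumption, as the value used was not stated here.

```matlab
X = rand(500,16); yX = randn(500,1);  % placeholder data for one volunteer
k = 5;                                % assumed fold count
c = cvpartition(size(X,1), 'KFold', k);
mae = zeros(k,1);
for i = 1:k
    mdl = TreeBagger(40, X(training(c,i),:), yX(training(c,i)), ...
                     'Method','regression', 'MinLeafSize',5);
    pred = predict(mdl, X(test(c,i),:));
    mae(i) = mean(abs(pred - yX(test(c,i))));   % MAE on the held-out fold
end
maeX = mean(mae);                     % cross-validated MAE for force X
```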
Figure 8 shows the resulting Force X and Force Y estimates versus the measured force values for all tests using both methods (SVR and RFR) in terms of MAE. RFR slightly outperforms SVR for our data set. In the middle range both models behave similarly, but in all cases RFR provides a better fit for low and high force values. This confirms that the importance of the input parameters depends on the force range. To understand this dependence, we can observe the importance of the input parameters using the OOBPermutedPredictorDeltaError property of the TreeBagger model in MATLAB (Table 2). It can be observed that the current $I$ and the servo speed $\dot{\theta}_1$ are the most relevant parameters to estimate forces in the X-axis. However, to estimate forces in the Y-axis, the second (passive) joint angle $\theta_2$ and its variation become more relevant. This is most likely due to slippage of the fingers when users pull their forearm up and/or down, meaning that variations in the grip become more informative than the forces in the servos in these cases.
Figure 7 shows the estimated versus real force values for a single test over time. It can be observed that the force module tends to be underestimated; this effect is larger in the Y-axis than in the X-axis (see Figure 7b), due to slippage in the grip. Nevertheless, the motion pattern is correctly estimated. Hence, the user's intention can be inferred from these force estimates and used in a shared-control approach, i.e., to adapt emergent motion patterns to the user's preferred direction. Users' intention in shared control in pHRI has typically been obtained in different ways. In [35], intent is defined in a binary way (e.g., motion or not, left or right, up or down, etc.), within a set of discrete intents (e.g., predefined poses), or using continuous variables (e.g., steering angles). Our approach provides Cartesian X and Y forces. These forces can be transformed into polar coordinates (module and angle) to calculate the classification accuracy and estimation error for all three typical user intent representations: binary intent (left or right, i.e., 180-degree clustering), discrete intents (left, right, up and down, i.e., 90-degree clustering) and continuous intent (module and angle).
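The polar transformation and the clustering behind the three intent representations can be sketched in MATLAB as follows (variable names and cluster labels are ours):

```matlab
Fx = randn(100,1); Fy = randn(100,1);    % placeholder estimated forces
rho = hypot(Fx, Fy);                     % continuous intent: force module
phi = atan2d(Fy, Fx);                    % continuous intent: direction (degrees)
binIntent  = (phi > -90 & phi <= 90);    % binary: right (true) vs. left (false)
discIntent = mod(round(phi/90), 4);      % discrete: 0=right, 1=up, 2=left, 3=down
```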
Table 3 shows the results. The classification accuracy remains high in the binary and discrete cases (above 90%), with values similar to those presented in [36]. On the other hand, the continuous estimates provide a granularity below 20 degrees on average in direction, so applications that need to know the user's intention to collaborate in tasks with some degree of freedom, e.g., repositioning a limb as opposed to (precise) assistive surgery, can rely on the proposed approach.
6. Conclusions
In this work, we have presented a method to estimate interaction forces with an underactuated gripper grasping a human forearm. These forces are related to human intention and are, hence, critical for pHRI. Intention is typically used in shared-control approaches to ensure that human constraints, goals, and comfort are taken into account while the forearm is being manipulated.
We use only the gripper's proprioceptive sensors to estimate the forces. Specifically, we work with a gripper with two underactuated fingers that achieves an adaptive, robust, and precise grasp of human limbs while operating in a closed control loop. Its proprioceptive sensors provide information about the servo and passive joint positions (using only two inexpensive potentiometers), plus the motor PWM and current. The analytical model of the gripper can already produce limited information about the human forces on the gripper, but it is only valid under certain conditions and does not account for limb slippage or artifacts. Instead, we propose using machine learning to estimate the human forces.
We have designed a platform to capture the required learning data set; it includes a fixed gripper and a force-measurement structure to obtain training data. While volunteers move their gripped forearms, we gather all the gripper parameter values as well as the load-cell readings. We tested SVR and RFR to predict forces from the acquired data set. RFR provides slightly better results because it adapts better to the nature of our data: depending on the force range, some input parameters provide more information than others. Specifically, we observed that the force in X is better estimated using $I$ and $\dot{\theta}_1$, whereas the force in Y is better estimated using $\theta_2$ and $\dot{\theta}_2$. After training, the method is computationally cheap, and the resulting trees can be run in parallel.
The proposed method does not require any sensors other than the gripper's proprioceptive ones. Additionally, these sensors are cheap and robust, and they do not require calibration for each different gripped object. The gripper has two fingers in the same plane, so only two-dimensional forces (in a frontal plane) are considered, because forces in other directions are rejected by the kinematic constraints of the fingers.
The results show that the proposed methodology performs satisfactorily in all our tests with different people and varying forces. Future work will focus on developing a gripper with a larger number of non-parallel fingers to consider forces in the full Cartesian space. We will also work on implementing shared control based on the estimated forces, to prove that task efficiency and human comfort improve when the proposed method is used to estimate the user's intention.