Article

Classification of Lifting Techniques for Application of a Robotic Hip Exoskeleton

1 The BioRobotics Institute, Scuola Superiore Sant’Anna, 56127 Pisa, Italy
2 Fondazione Don Carlo Gnocchi, 20148 Milan, Italy
* Author to whom correspondence should be addressed.
These authors share the senior authorship.
Sensors 2019, 19(4), 963; https://doi.org/10.3390/s19040963
Submission received: 31 January 2019 / Revised: 18 February 2019 / Accepted: 21 February 2019 / Published: 25 February 2019
(This article belongs to the Special Issue Sensors and Wearable Assistive Devices)

Abstract
The number of exoskeletons providing load-lifting assistance has significantly increased over the last decade. In this field, to take full advantage of active exoskeletons and provide appropriate assistance to users, it is essential to develop control systems that are able to reliably recognize and classify the users’ movement when performing various lifting tasks. To this end, the movement-decoding algorithm should work robustly with different users and recognize different lifting techniques. Currently, there are no studies presenting methods to classify different lifting techniques in real time for applications with lumbar exoskeletons. We designed a real-time two-step algorithm for a portable hip exoskeleton that can detect the onset of the lifting movement and classify the technique used to accomplish the lift, using only the exoskeleton-embedded sensors. To evaluate the performance of the proposed algorithm, 15 healthy male subjects participated in two experimental sessions in which they were asked to perform lifting tasks using four different techniques (namely, squat lifting, stoop lifting, left-asymmetric lifting, and right-asymmetric lifting) while wearing an active hip exoskeleton. Five classes (the four lifting techniques plus the class “no lift”) were defined for the classification model, which is based on a set of rules (first step) and a pattern recognition algorithm (second step). Leave-one-subject-out cross-validation showed a recognition accuracy of 99.34 ± 0.85%, and the onset of the lift movement was detected within the first 121 to 166 ms of movement.

1. Introduction

Heavy load lifting is a common task in warehouses and manufacturing environments. The sixth European Working Conditions Survey carried out in 2015 in 35 countries revealed that 32% of workers perform tasks like carrying or moving heavy loads, and 10% of health and personal care workers (e.g., nurses) are required to lift and carry patients [1]. In particular, repetitive lifting of heavy loads exposes workers to the risk of physical ailments such as low back pain [2] and other work-related musculoskeletal disorders [3]. To reduce the risk of injuries caused by heavy load lifting, numerous passive [4,5,6,7] and active [8,9,10,11,12,13,14,15,16,17] exoskeletons have been developed by several research groups and companies worldwide over the last decade.
Passive exoskeletons rely on spring-based mechanisms and elastic materials and are usually designed to store energy during the bending phase of human movement and release it during the lifting phase. Typically, passive systems are lightweight and, in many cases, have been proven effective in reducing the muscular activity of the lower back when performing lifting tasks [4,5,6,18,19,20,21]. As a drawback, they have limited versatility and can be perceived as uncomfortable in tasks that were not intended to be assisted, such as walking [19,22,23,24]. By contrast, active lumbar devices can produce net positive work to extend the user’s trunk, but at the price of heavier weight and encumbrance. The assistive action can be accomplished by designing adaptive force or torque profiles that vary based on the type and speed of the movement performed by the user. To fully exploit the potential advantages of active exoskeletons, control systems must be able to detect the onset of the lifting movement and generate task-appropriate assistance profiles on a dynamic, as-needed basis. To fulfill this aim, in our previous studies, we developed a real-time rule-based algorithm to detect the onset of the lifting movement, and we successfully validated the algorithm with an active hip exoskeleton [25,26].
To further advance the state of the art among lift detection algorithms, real-time classification of different lifting techniques is essential to providing the most appropriate assistive action. The lifting technique is defined by the posture adopted just before the load is lifted [27,28,29], and the selection of lifting technique by workers is typically made based on several factors, such as the initial position [30] and weight [31] of the load.
Among symmetric lifting techniques (i.e., those not involving twisting of the trunk), the most common are so-called squat and stoop lifting. In squat lifting, the knees are fully flexed while the trunk is held as straight as possible to reach the load [27]. In stoop lifting, the knees are almost fully extended and the trunk is bent forward. Squat lifting is the most clinically recommended lifting technique, as the compression force at the lumbosacral (L5/S1) disks has a relatively small peak value, but it is not well-suited to prolonged and repetitive lifting, because it concentrates loads on the knee muscles, which are not as strong as the hip and trunk muscles [32]. On the other hand, stoop lifting produces a relatively high peak compression force on the L5/S1 joint [33]. Asymmetric lifting, achieved by twisting the trunk, is usually preferred by workers when the load is not placed in front of them [34], but it is not recommended by ergonomists due to the high moment exerted at the L5/S1 junction by the erector spinae muscles [35].
In this paper, we present a two-step algorithm to detect the onset of lifting movements and classify the adopted technique among squat lifting, stoop lifting, left-asymmetric lifting, and right-asymmetric lifting. To the best of our knowledge, there are no previous studies attempting to recognize different lifting techniques for the control of lumbar exoskeletons. To evaluate the performance of the proposed algorithm, fifteen subjects were recruited and asked to perform lifting tasks while wearing an active pelvis orthosis (APO). Two hip joint encoders and an inertial measurement unit (IMU) on the backpack of the APO were used for onset detection and technique recognition. A subject-independent offline analysis was conducted to investigate the generalization capabilities of the proposed algorithm.

2. Materials and Methods

2.1. Experimental Setup

The APO (Figure 1A) is a lightweight hip joint exoskeleton designed for assisting hip flexion and extension movements; it is an improved version of the system presented in [36] and has been recently used in other studies for lift assistance [25,26,37]. The APO is composed of (i) a frame structure connected to the user’s trunk by means of a lumbar orthopedic cuff, straps, and braces, and (ii) two rotating links connected to the user’s thighs. The two links are actuated by two series elastic actuators, which can provide up to ±22 N·m of peak torque. Two absolute encoders measure the hip joint angles. The APO’s control electronics are housed in a “backpack” module integrated into the lumbar orthosis.
A 9 degree-of-freedom (DOF) IMU (LSM9DS0, STMicroelectronics Inc., Geneva, Switzerland) was placed on the backpack of the exoskeleton (Figure 1A). A sensory fusion algorithm was used to calculate Euler angles from the raw IMU signals in real time [38], with the roll angle and yaw angle describing trunk motion in the sagittal and transverse planes, respectively. All sensors were sampled at 100 Hz. A user datagram protocol (UDP) link was set up to synchronize the data collected from the APO and the IMU, which were both connected by Ethernet cables to a network switch, in turn connected to the main control laptop running the graphical user interface.

2.2. Experimental Protocol

The study was carried out at the premises of The BioRobotics Institute of Scuola Superiore Sant’Anna (Pontedera, Italy), approved by the local Ethics Committee (approval n. 1/2018), and conducted in accordance with the principles stated in the Declaration of Helsinki. Fifteen healthy male subjects were recruited and provided written informed consent to participate in the study (age: 27.4 ± 2.4 years, height: 177.6 ± 8.1 cm, weight: 69.7 ± 7.4 kg).
During the experimental session, the APO was used in transparent mode (i.e., closed-loop torque control with a reference torque of zero [25]). After donning the device, subjects were asked to stand still for three seconds to calibrate the estimated trunk Euler angles from the IMU.
The experiment consisted of two sessions and used the experimental setup shown in Figure 1B. In Session 1, subjects were asked to perform repetitive lifting according to four different techniques, namely, squat lifting, stoop lifting, left-asymmetric lifting, and right-asymmetric lifting. For each technique, subjects lifted up and set down a 5-kg box for 10 repetitions, upon visual cues from a graphical interface in front of the participant. Subjects were instructed to lower their bodies in the technique-appropriate manner to reach the load on a 20-cm-high stand, lift the load, put it on a 75-cm-high table, return to a standing posture, and wait for the next visual cue. After a new visual cue was displayed, they were then instructed to reach the load on the table, lower it, place it on the stand, and return to standing. Subjects performed the lifting and lowering movements at their self-selected speeds.
In Session 2, subjects were asked to perform the following movement sequence: starting from a seated position on a 45-cm-high stool, they were requested to stand up, walk to the load, lift it up, place it on a table, return to standing posture, reach the load again, lower it down, walk back to the stool, and sit down. All load lifting and lowering movements were triggered by visual cues, and this full sequence was repeated 4 times for each lifting technique.

2.3. Recognition Algorithm

A two-step recognition strategy was used to detect the onset of the lifting movement and recognize the adopted lifting technique (Figure 2). The first step of the algorithm was designed to detect the lifting movement using the hip joint angle measurements, according to the algorithm described in [25]. Then, the second step aimed to classify the lifting technique or recognize false positive lifting movements identified by the first step. The classification was performed using a quadratic discriminant analysis (QDA) classifier based on signals collected by hip joint encoders and the IMU on the APO backpack. A more detailed description of the two-step algorithm is provided in the following sections.
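As an illustration of the second step, a QDA classifier can be sketched with scikit-learn; the toy 3-D feature vectors and two classes below are hypothetical stand-ins for the features computed from the exoskeleton's hip encoders and trunk IMU:

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# Hypothetical feature vectors for two of the lift classes.
rng = np.random.default_rng(0)
X_squat = rng.normal(loc=0.0, scale=1.0, size=(50, 3))
X_stoop = rng.normal(loc=5.0, scale=1.0, size=(50, 3))
X = np.vstack([X_squat, X_stoop])
y = np.array(["squat"] * 50 + ["stoop"] * 50)

# QDA fits one Gaussian (mean + full covariance) per class and assigns
# new samples to the class with the highest posterior probability.
clf = QuadraticDiscriminantAnalysis()
clf.fit(X, y)
pred = clf.predict([[0.1, -0.2, 0.0], [5.2, 4.9, 5.1]])
```

In the paper, the classifier runs only once the first-step rules output Extension, and a fifth "no-lift" class absorbs the first step's false positives.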

2.3.1. First-Step Algorithm

The first-step algorithm uses a rule-based strategy to distinguish two primary indicator movements from all the movements the user can perform: (i) the Pre-extension movement, i.e., the quasi-static (trunk or hip) flexion phase that occurs prior to the lift phase, typically when the person grasps or releases the object; and (ii) the Extension movement, i.e., the lift phase, which can involve mostly trunk or mostly hip extension, depending on the lifting technique. All movements not classified as Pre-extension or Extension are classified as Other [25].
At each time iteration i, if the current classification state is Other, the transition to Pre-extension is governed by Rule 1, which requires all three of the following conditions to be satisfied:

θ_diff(i) < h_1,
θ_mean(i) > h_2,
θ̄_std(i) = std(θ_mean([i−9 : i])) < h_3.

θ_diff(i) is defined as |θ_L(i) − θ_R(i)|, where θ_L(i) and θ_R(i) denote the left and right hip joint angles; θ_mean(i) is defined as (θ_L(i) + θ_R(i))/2; θ̄_std(i) is the standard deviation of θ_mean over a moving window of 10 samples (i.e., 100 ms); h_1, h_2, and h_3 are empirically predefined thresholds.
When the current state is Pre-extension, Rule 2 is used to detect the transition to Extension, based on the following conditions (taking t_0 as the time when the transition from Other to Pre-extension occurred):

θ̄_std(i) > h_4, AND
HasPeak(i) = 1, AND
T_PE(i) ≤ T_extension.

T_PE(i) is the current duration of the Pre-extension phase, i.e., the time elapsed since t_0; HasPeak(i) equals 1 if a local maximum of θ_mean has been detected since t_0, and 0 otherwise; h_4 and T_extension are predefined thresholds.
Rule 3 returns the algorithm from Pre-extension to Other if the Pre-extension duration exceeds the threshold T_extension:

T_PE(i) > T_extension.
Rule 4 determines the transition from Extension to Other whenever both of the following conditions are met:

θ_mean(i) < h_5, AND
θ̄_std(i) < h_6 OR HasValley(i) = 1.

HasValley(i) equals 1 if a local minimum of θ_mean has been detected since t_1 (defined as the time when the transition from Pre-extension to Extension occurred), and 0 otherwise; h_5 and h_6 are predefined thresholds.
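The four rules above amount to a small finite-state machine over the hip encoder signals. A minimal Python sketch follows; the thresholds (H1–H6, T_EXTENSION) are hypothetical placeholders rather than the paper's empirically tuned values, and the peak/valley detection is simplified:

```python
import numpy as np

# Hypothetical placeholder thresholds -- NOT the paper's tuned values.
H1, H2, H3, H4, H5, H6 = 15.0, 40.0, 0.5, 1.0, 20.0, 0.5
T_EXTENSION = 300  # max Pre-extension duration, in samples (hypothetical)

def detect_states(theta_l, theta_r):
    """Label each sample 'Other', 'Pre-extension', or 'Extension' by running
    a simplified version of Rules 1-4 on the two hip joint angle signals."""
    theta_l = np.asarray(theta_l, dtype=float)
    theta_r = np.asarray(theta_r, dtype=float)
    theta_mean = (theta_l + theta_r) / 2.0
    theta_diff = np.abs(theta_l - theta_r)
    state, t0, states = "Other", None, []
    for i in range(len(theta_mean)):
        std = theta_mean[max(0, i - 9):i + 1].std()  # 10-sample moving window
        if state == "Other":
            # Rule 1: quasi-static flexed posture -> Pre-extension
            if theta_diff[i] < H1 and theta_mean[i] > H2 and std < H3:
                state, t0 = "Pre-extension", i
        elif state == "Pre-extension":
            # Crude stand-in for HasPeak: the max so far is not the last sample
            has_peak = theta_mean[t0:i + 1].argmax() < i - t0
            if std > H4 and has_peak and (i - t0) <= T_EXTENSION:
                state = "Extension"  # Rule 2: lift onset detected
            elif (i - t0) > T_EXTENSION:
                state = "Other"      # Rule 3: Pre-extension timed out
        else:  # Extension
            # Rule 4 (simplified: HasValley check omitted)
            if theta_mean[i] < H5 and std < H6:
                state = "Other"
        states.append(state)
    return states
```

Feeding the detector a synthetic bend–hold–lift angle profile produces the expected Other → Pre-extension → Extension → Other sequence.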

2.3.2. Second-Step Algorithm

According to our previous study [25], the rule-based algorithm is able to avoid false positive lift recognitions in most static activities (such as still sitting or quiet standing) and dynamic activities such as walking or turning around. Most recognition errors occur during the stand-to-sit transition—in particular, after the touch-down event, when trunk extension occurs to complete the transition, the rule-based algorithm classifies the movement as Extension. Such false positive detection can lead to an improper and uncomfortable assistive torque delivered by the robot.
To improve the algorithm's robustness and avoid false recognitions (in particular, those occurring in stand-to-sit movements), a second-step algorithm based on a QDA classifier was implemented. It acts as soon as the first step outputs Extension and aims to recognize five lifting classes, namely, no-lift (in case of false positive Extension detections) and the four lift techniques, i.e., squat, stoop, left-asymmetric, and right-asymmetric lifting. For all the non-lift movements investigated in this study, Extension was only detected during stand-to-sit movements; therefore, the no-lift classification was produced only in stand-to-sit movements.
To extract relevant information from the sensor signals, seven candidate features for the QDA classifier were computed (Table 1) to generate the initial feature pool. According to the kinematic signals collected by the on-board sensors, these features have relatively large variance among the different techniques and could provide useful information for lifting technique classification (Figure 3). For instance, visual inspection of the signals revealed that in squat and stoop lifting, the left and right hip angles have similar profiles, whereas they differ significantly for the asymmetric lifts. Similarly, squat and stoop lifting are differentiated by the profile of the roll angle (the angle between the trunk and the vertically upward direction), which is similar to the hip joint angle (180° minus the angle between the trunk and the thigh) for stoop lifting, yet reaches significantly lower peak values during squat lifting. Moreover, it is worth noting that θ_thigh can be considered a rough estimate of the thigh angle (the angle between the thigh and the vertically downward direction) in the sagittal plane, which was considered useful to discriminate sitting down from lifting.
Meanwhile, left-asymmetric and right-asymmetric lifting are characterized by a trunk yaw angle variance that is higher than in symmetric lifting, and the trunk yaw angle reaches different values when the trunk is twisted towards the left or the right. To compensate for gross variations in the yaw angle (determined by the magnetometer) due to changes in the subject's spatial orientation, we adjusted it by subtracting the mean yaw angle over the last 500 ms before the Pre-extension phase (i.e., the 50 samples before t_0), which can be regarded as an offset related to the subject's orientation.
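The yaw offset compensation described above can be sketched as follows (`adjust_yaw` is a hypothetical helper; the 100 Hz sampling rate and 500 ms window follow the paper):

```python
import numpy as np

FS = 100  # sampling rate in Hz (all sensors were sampled at 100 Hz)

def adjust_yaw(yaw, t0, window_s=0.5):
    """Hypothetical helper: subtract the mean yaw over the `window_s` seconds
    preceding the Pre-extension onset sample t0, removing the orientation
    offset so the adjusted yaw reflects trunk twist rather than heading."""
    yaw = np.asarray(yaw, dtype=float)
    n = int(window_s * FS)  # 50 samples for the paper's 500 ms window
    offset = yaw[max(0, t0 - n):t0].mean()
    return yaw - offset
```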
A greedy forward selection algorithm, similar to the one in [39], was used to determine the feature set, as follows: each single feature was used for recognition, and the one with the highest accuracy was selected as the first feature of the feature set. Then, each feature in the unselected pool was combined with the already-selected feature(s) for recognition, and the combination with the best performance defined the next feature of the feature set. The process was repeated until the optimal feature set was determined, and this set was used by the second-step classifier for lifting technique classification. To avoid overfitting, feature selection was stopped if the added feature led to an increase in overall recognition accuracy lower than 0.1%. Note that the optimal feature set was selected based on the average accuracy over all subjects.
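The selection loop can be sketched as follows, assuming scikit-learn's `cross_val_score` as the accuracy estimate (the paper instead averages accuracy over subjects):

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def greedy_forward_select(X, y, min_gain=0.001):
    """Greedy forward selection: repeatedly add the candidate feature that
    most improves cross-validated QDA accuracy, and stop when the gain
    falls below `min_gain` (0.1%, the stopping rule used in the paper)."""
    remaining = list(range(X.shape[1]))
    selected, best_acc = [], 0.0
    while remaining:
        # Score every candidate combined with the already-selected features.
        scored = [(cross_val_score(QuadraticDiscriminantAnalysis(),
                                   X[:, selected + [f]], y, cv=5).mean(), f)
                  for f in remaining]
        acc, f = max(scored)
        if acc - best_acc < min_gain:
            break
        selected.append(f)
        remaining.remove(f)
        best_acc = acc
    return selected, best_acc
```

On a toy problem with one informative feature and one noise feature, the loop picks the informative feature first and stops as soon as adding the noise feature yields no meaningful gain.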

2.4. Performance Evaluation

For the purpose of controlling exoskeletons based on the recognized movement, the speed (delay) of movement detection and the accuracy of technique recognition are crucial metrics for evaluating the performance of the proposed algorithm. In this evaluation, the data collected in Session 1 and Session 2 were combined to generate the dataset for each subject.

2.4.1. Lifting Technique Recognition

Recognition accuracy (RA) describes the overall reliability of the algorithm for lifting technique recognition. It was calculated as:
RA = N_correct / N_total,

where N_correct is the number of correctly recognized movements and N_total is the total number of test movements. A confusion matrix (CM) was then calculated to quantify the error distribution across classes:

CM = [ c_11 c_12 ⋯ c_15
       c_21 c_22 ⋯ c_25
        ⋮    ⋮   ⋱   ⋮
       c_51 c_52 ⋯ c_55 ].
Each element of the matrix is defined as:

c_ij = n_ij / n̄_i,

where n_ij is the number of movements in class i that were recognized as class j, and n̄_i is the total number of movements in class i. Off-diagonal elements therefore indicate misrecognition rates.
Apart from recognition accuracy, sensitivity and specificity were also calculated as follows:
Sensitivity(k) = TP_k / (TP_k + FN_k),
Specificity(k) = TN_k / (TN_k + FP_k),

where TP_k, FP_k, TN_k, and FN_k are the numbers of true positives, false positives, true negatives, and false negatives for class k, respectively.
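Under the definitions above, all of these metrics can be derived from a raw-count confusion matrix (rows = actual class, columns = predicted class); a small sketch:

```python
import numpy as np

def recognition_accuracy(cm):
    """Overall RA: correctly recognized movements over total movements."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

def per_class_metrics(cm):
    """Per-class sensitivity and specificity from a raw-count confusion
    matrix (rows = actual class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fn = cm.sum(axis=1) - tp  # actual class k, predicted otherwise
    fp = cm.sum(axis=0) - tp  # predicted class k, actually otherwise
    tn = cm.sum() - tp - fn - fp
    return tp / (tp + fn), tn / (tn + fp)
```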
To evaluate the generalization capabilities of the proposed algorithm, the four metrics mentioned above were calculated for subject-dependent and subject-independent recognition. For subject-dependent recognition, 5-fold cross-validation (CV) was performed for each subject. Data of each subject were randomly divided into 5 folds, with each fold having similar numbers of movements in each class. One fold was selected as the testing dataset and the remaining 4 folds as the training dataset. This procedure was repeated 5 times and each fold of the data was used as the testing dataset only once. Recognition metrics for each subject were the average values of corresponding metrics obtained from the 5 repetitions. The overall performance of subject-dependent recognition was calculated by averaging the individual performances of all the subjects.
For subject-independent recognition, leave-one-subject-out cross-validation was carried out to evaluate the algorithm performance. Data of one subject served as the testing dataset, and data of the remaining 14 subjects as the training dataset. The classification metrics were calculated 15 times, each one using one subject’s data as the test dataset. The overall performance of subject-independent recognition was the average value of performances obtained from the 15 repetitions. A paired-samples t-test analysis with a confidence interval of 95% was performed to compare the performance of subject-dependent and subject-independent recognition.
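The leave-one-subject-out scheme maps directly onto scikit-learn's `LeaveOneGroupOut` splitter when each sample is tagged with its subject id; the dataset below is a hypothetical stand-in:

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Hypothetical dataset: 3 "subjects", each contributing 20 samples of two
# well-separated classes, tagged with a subject id in `groups`.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 1.0, size=(20, 2))
               for _ in range(3) for c in (0.0, 4.0)])
y = np.array((["squat"] * 20 + ["stoop"] * 20) * 3)
groups = np.repeat([1, 2, 3], 40)

# Each fold holds out exactly one subject's data, exactly as in
# leave-one-subject-out cross-validation.
scores = cross_val_score(QuadraticDiscriminantAnalysis(), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
mean_acc = scores.mean()
```

One score per held-out subject is returned; averaging them gives the subject-independent accuracy reported in the paper.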

2.4.2. Lifting Detection Delay

We assumed that subjects started the lifting movement when a local maximum of the hip flexion angle was observed in the Pre-extension phase, and we thus defined the absolute time delay for lifting detection as:

T = t_1 − t_peak,

where t_peak is the moment of the hip angle peak in the Pre-extension phase, i.e., the reference event.
In the experiment, subjects performed the lifting movement at their preferred speed. However, in our previous study [25], we observed that the detection delay was significantly influenced by lifting speed. To remove the impact of lifting speed on the evaluation of recognition time, the normalized detection delay was calculated as:

T_n = T / (t_2 − t_peak),

where t_2 is the terminal moment of the Extension phase, detected by the algorithm at the transition from Extension to Other. T_n can thus be interpreted as a percentage of the lifting movement duration.
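The two delay definitions reduce to a one-line computation (times in seconds; t_peak, t_1, and t_2 as defined above):

```python
def normalized_delay(t_peak, t1, t2):
    """Normalized lifting detection delay: the absolute delay T = t1 - t_peak
    expressed as a fraction of the total lift duration t2 - t_peak."""
    return (t1 - t_peak) / (t2 - t_peak)
```

For example, a lift detected 0.15 s after the hip-angle peak in a lift lasting 1 s gives T_n = 15%.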
Lifting detection delay was evaluated with the result of subject-independent recognition. The set of average detection delays for each subject was calculated separately for each lifting technique. One-way repeated measures ANOVA was then performed to investigate whether lifting technique had a significant impact on lifting detection delay. Next, paired samples t-tests were run to detect significant differences in detection delays between specific pairs of lifting techniques. For all comparisons, the confidence interval percentage was set to 95%.

3. Results

3.1. Lifting Technique Recognition

For subject-dependent recognition, the optimal feature set identified was { α_thigh, δ_LR, δ_thigh, ψ_adj }. The overall recognition accuracy across 15 subjects was 99.65 ± 0.48% (mean ± STD). To better understand the error distribution, the average confusion matrix over the 15 subjects was calculated (Table 2). Recognition errors were observed only when the actual mode was squat lifting: the percentages of squat lifting misrecognized as no-lift, stoop lifting, and right-asymmetric lifting were 0.67%, 0.22%, and 0.71%, respectively. All the stand-to-sit movements were misrecognized (transitioned from Pre-extension to Extension) in the first step of the algorithm; the accuracy of no-lift recognition thus indicates the percentage of misrecognized stand-to-sit movements successfully corrected in the second step. The average sensitivities across the 15 subjects for no-lift, squat lifting, stoop lifting, left-asymmetric lifting, and right-asymmetric lifting were 100.00 ± 0.00%, 98.40 ± 2.25%, 100.00 ± 0.00%, 100.00 ± 0.00%, and 100.00 ± 0.00%, respectively. The average specificities for the same classes were 98.82 ± 0.37%, 100.00 ± 0.00%, 99.94 ± 0.23%, 100.00 ± 0.00%, and 99.81 ± 0.54%, respectively.
For subject-independent recognition, the optimal feature set was { α_trunk, δ_LR, δ_thigh, ψ_adj }. The overall recognition accuracy across 15 subjects was 99.34 ± 0.85%. Though the accuracy was slightly lower (by 0.31%) than that of subject-dependent recognition, the difference was not statistically significant (p = 0.186). Table 3 reports the confusion matrix describing the recognition performance. Compared to the confusion matrix for subject-dependent recognition, in addition to squat lifting, recognition errors were also observed when the actual mode was no-lift, stoop lifting, or right-asymmetric lifting. Misrecognition of no-lift as squat lifting had the highest error rate, at 1.33%; the other misrecognition rates were lower than 1%. The average sensitivities across the 15 subjects for no-lift, squat lifting, stoop lifting, left-asymmetric lifting, and right-asymmetric lifting were 98.67 ± 3.99%, 98.81 ± 2.20%, 99.29 ± 2.77%, 100.00 ± 0.00%, and 99.76 ± 0.92%, respectively; the differences in sensitivity with respect to subject-dependent recognition were not statistically significant for any class (p = 0.217, 0.603, 0.334, 1.000, and 0.334, respectively). The average specificities for the same classes were 99.94 ± 0.23%, 99.74 ± 0.77%, 99.81 ± 0.54%, 99.81 ± 0.74%, and 99.87 ± 0.34%, respectively; the differences in specificity were likewise not statistically significant for any class (p = 0.171, 0.217, 0.416, 0.334, and 0.668, respectively).

3.2. Lifting Detection Delay

The average absolute detection delay over the 15 subjects was 166 ± 41 ms, 134 ± 27 ms, 136 ± 29 ms, and 121 ± 20 ms for squat, stoop, left-asymmetric, and right-asymmetric lifting, respectively (Figure 4A). Since lifting speed (the average hip joint angular speed from t_peak to t_2) varied significantly across subjects (from 55.6 to 91.9 deg/s, 47.0 to 66.9 deg/s, 50.3 to 82.8 deg/s, and 54.1 to 83.8 deg/s for squat, stoop, left-asymmetric, and right-asymmetric lifting, respectively), the absolute detection delay data were normalized to investigate the impact of lifting technique on detection delay.
The average normalized detection delay over the 15 subjects was 14.2 ± 2.6%, 11.1 ± 1.8%, 11.4 ± 2.0%, and 10.5 ± 1.5% of total lifting time for squat, stoop, left-asymmetric, and right-asymmetric lifting, respectively (Figure 4B). According to the Shapiro–Wilk test, the normalized detection delay data were normally distributed (p > 0.05 for all lifting techniques). One-way repeated measures ANOVA indicated that the normalized detection delay differed significantly between lifting techniques (p < 0.001). Pair-wise comparisons between squat lifting and the other three lifting techniques were all significant (p = 0.002 for squat vs. stoop, p = 0.002 for squat vs. left-asymmetric, and p < 0.001 for squat vs. right-asymmetric), whereas the pair-wise comparisons between stoop, left-asymmetric, and right-asymmetric lifting were not statistically significant (p = 1 for stoop vs. left-asymmetric, p = 0.564 for stoop vs. right-asymmetric, and p = 0.311 for left-asymmetric vs. right-asymmetric).

4. Discussion

Understanding the user’s lifting technique is important for active exoskeletons to provide assistance efficiently when needed, as the kinematics and dynamics of human movement vary significantly across lifting techniques. However, few studies have presented lifting detection algorithms for application with wearable exoskeletons, and most of them rely on surface EMG signals collected from hip flexor-extensor muscles or forearm muscles to control the assistive action of the robot [40,41]. In a recent study by Cevzar et al., a finite state machine with Gaussian mixture models (GMM) was developed to classify five kinds of movements (i.e., forward bending, squatting, walking, stair climbing, and chair sitting) using the signals of an IMU and two encoders, for the purpose of controlling a quasi-active hip exoskeleton [42]. Whenever forward bending was classified, the passive actuator mechanism was engaged to assist the user’s movement, whereas the actuators were disengaged during the other movements. However, classification results were not reported in that study. Moreover, to the best of the authors’ knowledge, the present study is the first to report the use of lifting technique recognition to provide ad-hoc pre-defined assistive profiles.
The primary contribution of this study is to demonstrate the promise of real-time lifting technique recognition for the purpose of exoskeleton control. The proposed algorithm has two main features that may present advantages over previous approaches: first, it relies solely on sensory information collected by exoskeleton-embedded sensors, without requiring extra wearable sensors or biosignals; second, it can recognize the lifting technique performed by the user.
Based on the performance results of this study, the subject-dependent and subject-independent technique recognition accuracies (both > 99.3%) and the onset recognition delay (< 170 ms) proved appropriate for application with wearable robots. Among the investigated lifting techniques, the algorithm had a significantly higher normalized detection delay in squat lifting than in the other techniques. This finding may be attributed to the additional time taken by subjects to adjust their posture prior to lifting, since it is usually more difficult to maintain balance during squat lifting [30]. The computation time of the algorithm is also included in the absolute detection delay, but it is very small: the average computation time (including both steps) over 1172 detections was 0.29 ± 0.05 ms in MATLAB. Though the proposed algorithm was not tested for online control of the exoskeleton in the present study, according to the results achieved in our previous study [25], this amount of detection delay would not introduce noticeable discomfort to users.
While in our previous study [25] we also asked subjects to perform freestyle lifting with no specified technique, we did not consider freestyle lifting here because it has no explicit definition, and we found that subjects typically adopted a hybrid technique combining elements of squat and stoop lifting. The four lifting techniques tested in this experiment showed clear differences in signal characteristics, which aided lifting technique recognition. However, the sensor signals of each lifting technique also exhibited some between-subject variance, probably due to differences in subjects’ anthropometry, in how they performed the lifting techniques, and in how they wore the device. This subject-related variance challenged subject-independent recognition, because the features used should not only discriminate different lifting techniques but also have small variance across subjects. Therefore, a feature with large between-subject variance should be discarded, even if it is very accurate for some individual subject(s). For example, δ_LR, δ_thigh, and ψ_adj appear in the optimal feature sets of both subject-dependent and subject-independent recognition, which means they not only characterize different lifting techniques but also show high similarity among subjects. By contrast, α_thigh was selected only for subject-dependent recognition and discarded for subject-independent recognition. This is probably related to the use of the same load height for all subjects, which would be expected to cause considerable variation in the thigh angle (when bending down to grasp the load) across subjects of different heights. The trunk angle, instead, seems less influenced by subject height, which may explain why α_trunk was selected in the optimal feature set for subject-independent recognition.
Despite the very good performance of the presented algorithm, the current study has some limitations, primary among which is the lack of online validation of the algorithm. Indeed, the exoskeleton was controlled in transparent mode to collect data for offline analysis, and no assistance was delivered to subjects. Future works will focus on developing assistive strategies for different lifting techniques and evaluating the performance of the algorithm for online exoskeleton control. In addition, the robustness of the proposed algorithm will be tested in scenarios more similar to real working environments.

5. Conclusions

In this study, we developed an algorithm for lifting onset detection and technique classification using hip exoskeleton-embedded sensors. The ability to recognize different lifting techniques enables active exoskeletons to select appropriate control strategies and provide more efficient assistance to users performing lifting tasks. The proposed algorithm achieved reliable lifting detection with high recognition accuracy and a relatively small detection delay. More importantly, the algorithm generalized well across different subjects, with a subject-independent recognition accuracy of 99.34%. The experimental results suggest that the developed strategy is promising for the control of active exoskeletons for load lifting.

Author Contributions

Conceptualization, B.C., F.L., L.G., N.V. and S.C.; Data curation, B.C.; Formal analysis, B.C.; Funding acquisition, N.V. and S.C.; Investigation, B.C., F.L. and L.G.; Methodology, B.C.; Project administration, N.V. and S.C.; Software, B.C.; Supervision, N.V. and S.C.; Validation, B.C.; Visualization, B.C. and F.L.; Writing—original draft, B.C., F.L. and L.G.; Writing—review & editing, N.V. and S.C.

Funding

This work was supported in part by Regione Toscana within the CENTAURO project (Bando FAR-FAS 2014) and in part by EU within the HUMAN project (H2020-FOF-2016 Grant Agreement #723737).

Acknowledgments

The authors would like to thank Zach McKinney for proofreading the article.

Conflicts of Interest

S. Crea and N. Vitiello have commercial interests in IUVO S.r.l., a spin-off company of Scuola Superiore Sant’Anna. Currently, part of the IP protecting the APO technology described in the paper has been licensed to IUVO S.r.l. for commercial exploitation. The authors confirm that this did not affect the analysis of the results. All other authors declare no competing interests.

References

  1. Parent-Thirion, A.; Biletta, I.; Cabrita, J.; Vargas, O.; Vermeylen, G.; Wilczynska, A.; Wilkens, M. Sixth European Working Conditions Survey—Overview Report; Publications Office of the European Union: Luxembourg, 2016. [Google Scholar]
  2. Punnett, L.; Prüss-Üstün, A.; Nelson, D.I.; Fingerhut, M.A.; Leigh, J.; Tak, S.; Phillips, S. Estimating the global burden of low back pain attributable to combined occupational exposures. Am. J. Ind. Med. 2005, 48, 459–469. [Google Scholar] [CrossRef] [PubMed]
  3. da Costa, B.R.; Vieira, E.R. Risk factors for work-related musculoskeletal disorders: A systematic review of recent longitudinal studies. Am. J. Ind. Med. 2010, 53, 285–323. [Google Scholar] [CrossRef] [PubMed]
  4. Wehner, M.; Rempel, D.; Kazerooni, H. Lower extremity exoskeleton reduces back forces in lifting. In Proceedings of the ASME 2009 Dynamic Systems and Control Conference, Hollywood, CA, USA, 12–14 October 2009; pp. 49–56. [Google Scholar]
  5. Masood, J.; Ortiz, J.; Fernández, J.; Mateos, L.A.; Caldwell, D.G. Mechanical design and analysis of light weight hip joint Parallel Elastic Actuator for industrial exoskeleton. In Proceedings of the 2016 6th IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob), Singapore, 26–29 June 2016; pp. 631–636. [Google Scholar]
  6. Abdoli-e, M.; Agnew, M.J.; Stevenson, J.M. An on-body personal lift augmentation device (PLAD) reduces EMG amplitude of erector spinae during lifting tasks. Clin. Biomech. 2006, 21, 456–465. [Google Scholar] [CrossRef] [PubMed]
  7. Näf, M.B.; Koopman, A.S.; Baltrusch, S.; Rodriguez-Guerrero, C.; Vanderborght, B.; Lefeber, D. Passive back support exoskeleton improves range of motion using flexible beams. Front. Robot. AI 2018, 5, 72. [Google Scholar] [CrossRef]
  8. Aida, T.; Nozaki, H.; Kobayashi, H. Development of muscle suit and application to factory laborers. In Proceedings of the International Conference on Mechatronics and Automation, Changchun, China, 9–12 August 2009; pp. 1027–1032. [Google Scholar]
  9. Kobayashi, H.; Aida, T.; Hashimoto, T. Muscle suit development and factory application. Int. J. Autom. Technol. 2009, 3, 709–715. [Google Scholar] [CrossRef]
  10. Kobayashi, H.; Nozaki, H. Development of muscle suit for supporting manual worker. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 1769–1774. [Google Scholar]
  11. Muramatsu, Y.; Kobayashi, H.; Sato, Y.; Jiaou, H.; Hashimoto, T.; Kobayashi, H. Quantitative Performance Analysis of Exoskeleton Augmenting Devices–Muscle Suit–for Manual Worker. Int. J. Autom. Technol. 2011, 5, 559–567. [Google Scholar] [CrossRef]
  12. Kadota, K.; Akai, M.; Kawashima, K.; Kagawa, T. Development of Power-Assist Robot Arm using pneumatic rubbermuscles with a balloon sensor. In Proceedings of the 18th IEEE International Symposium on Robot and Human Interactive Communication, Toyama, Japan, 27 September–2 October 2009; pp. 546–551. [Google Scholar]
  13. Li, X. Design of wearable power assist wear for low back support using pneumatic actuators. Int. J. Autom. Technol. 2013, 7, 228–236. [Google Scholar] [CrossRef]
  14. Naruse, K.; Kawai, S.; Kukichi, T. Three-dimensional lifting-up motion analysis for wearable power assist device of lower back support. In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005; pp. 2959–2964. [Google Scholar]
  15. Tanaka, T.; Satoh, Y.; Kaneko, S.I.; Suzuki, Y.; Sakamoto, N.; Seki, S. Smart suit: Soft power suit with semi-active assist mechanism-prototype for supporting waist and knee joint. In Proceedings of the International Conference on Control, Automation and Systems, Seoul, Korea, 14–17 October 2008; pp. 2002–2005. [Google Scholar]
  16. Yu, H.; Choi, I.S.; Han, K.-L.; Choi, J.Y.; Chung, G.; Suh, J. Development of a Stand-alone Powered Exoskeleton Robot Suit in Steel Manufacturing. ISIJ Int. 2015, 55, 2609–2617. [Google Scholar] [CrossRef]
  17. Zhang, T.; Huang, H.H. Lower-Back Robotic Exoskeleton. IEEE Robot. Autom. Mag. 2018, 25, 95–106. [Google Scholar] [CrossRef]
  18. Abdoli-e, M.; Stevenson, J.M. The effect of on-body lift assistive device on the lumbar 3D dynamic moments and EMG during asymmetric freestyle lifting. Clin. Biomech. 2008, 23, 372–380. [Google Scholar] [CrossRef] [PubMed]
  19. Abdoli-Eramaki, M.; Stevenson, J.M.; Reid, S.A.; Bryant, T.J. Mathematical and empirical proof of principle for an on-body personal lift augmentation device (PLAD). J. Biomech. 2007, 40, 1694–1700. [Google Scholar] [CrossRef] [PubMed]
  20. Lotz, C.A.; Agnew, M.J.; Godwin, A.A.; Stevenson, J.M. The effect of an on-body personal lift assist device (PLAD) on fatigue during a repetitive lifting task. J. Electromyogr. Kinesiol. 2009, 19, 331–340. [Google Scholar] [CrossRef] [PubMed]
  21. Whitfield, B.H.; Costigan, P.A.; Stevenson, J.M.; Smallman, C.L. Effect of an on-body ergonomic aid on oxygen consumption during a repetitive lifting task. Int. J. Ind. Ergon. 2014, 44, 39–44. [Google Scholar] [CrossRef]
  22. de Looze, M.P.; Bosch, T.; Krause, F.; Stadler, K.S.; O’Sullivan, L.W. Exoskeletons for industrial application and their potential effects on physical work load. Ergonomics 2016, 59, 671–681. [Google Scholar] [CrossRef] [PubMed]
  23. Baltrusch, S.; van Dieën, J.; van Bennekom, C.; Houdijk, H. The effect of a passive trunk exoskeleton on functional performance in healthy individuals. Appl. Ergon. 2018, 72, 94–106. [Google Scholar] [CrossRef] [PubMed]
  24. Baltrusch, S.; van Dieën, J.; Bruijn, S.; Koopman, A.; van Bennekom, C.; Houdijk, H. The Effect of a Passive Trunk Exoskeleton on Functional Performance and Metabolic Costs. In International Symposium on Wearable Robotics; Springer: Cham, Switzerland, 2018; pp. 229–233. [Google Scholar]
  25. Chen, B.; Grazi, L.; Lanotte, F.; Vitiello, N.; Crea, S. A Real-Time Lift Detection Strategy for a Hip Exoskeleton. Front. Neurorobot. 2018, 12, 17. [Google Scholar] [CrossRef] [PubMed]
  26. Lanotte, F.; Grazi, L.; Chen, B.; Vitiello, N.; Crea, S. A low-back exoskeleton can reduce the erector spinae muscles activity during freestyle symmetrical load lifting tasks. In Proceedings of the 7th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics, Enschede, The Netherlands, 26–29 August 2018; pp. 701–706. [Google Scholar]
  27. Burgess-Limerick, R. Squat, stoop, or something in between? Int. J. Ind. Ergon. 2003, 31, 143–148. [Google Scholar] [CrossRef]
  28. van Dieën, J.H.; Hoozemans, M.J.; Toussaint, H.M. Stoop or squat: A review of biomechanical studies on lifting technique. Clin. Biomech. 1999, 14, 685–696. [Google Scholar] [CrossRef]
  29. Anderson, C.K.; Chaffin, D.B. A biomechanical evaluation of five lifting techniques. Appl. Ergon. 1986, 17, 2–8. [Google Scholar] [CrossRef]
  30. Hsiang, S.M.; Brogmus, G.E.; Courtney, T.K. Low back pain (LBP) and lifting technique—A review. Int. J. Ind. Ergon. 1997, 19, 59–74. [Google Scholar] [CrossRef]
  31. Mittal, M.; Malik, S. Biomechanical evaluation of lift postures in adult Koli female labourers. Ergonomics 1991, 34, 103–108. [Google Scholar] [CrossRef] [PubMed]
  32. Trafimow, J.; Schipplein, O.; Novak, G.; Andersson, G. The effects of quadriceps fatigue on the technique of lifting. Spine 1993, 18, 364–367. [Google Scholar] [CrossRef] [PubMed]
  33. Leskinen, T.; Stålhammar, H.; Kuorinka, I.; Troup, J. A dynamic analysis of spinal compression with different lifting techniques. Ergonomics 1983, 26, 595–604. [Google Scholar] [CrossRef] [PubMed]
  34. Garg, A.; Badger, D. Maximum acceptable weights and maximum voluntary isometric strengths for asymmetric lifting. Ergonomics 1986, 29, 879–892. [Google Scholar] [CrossRef] [PubMed]
  35. Bogduk, N. The lumbar disc and low back pain. Neurosurg. Clin. N. Am. 1991, 2, 791–806. [Google Scholar]
  36. Giovacchini, F.; Vannetti, F.; Fantozzi, M.; Cempini, M.; Cortese, M.; Parri, A.; Yan, T.; Lefeber, D.; Vitiello, N. A light-weight active orthosis for hip movement assistance. Robot. Auton. Syst. 2015, 73, 123–134. [Google Scholar] [CrossRef]
  37. Chen, B.; Grazi, L.; Lanotte, F.; Vitiello, N.; Crea, S. Lift Movement Detection with a QDA Classifier for an Active Hip Exoskeleton. In Proceedings of the International Symposium on Wearable Robotics, Pisa, Italy, 16–20 October 2018; Springer: Cham, Switzerland, 2018; pp. 224–228. [Google Scholar]
  38. Madgwick, S. An Efficient Orientation Filter for Inertial and Inertial/Magnetic Sensor Arrays; University of Bristol: Bristol, UK, 2010; Volume 25, pp. 113–118. [Google Scholar]
  39. Chen, B.; Zheng, E.; Wang, Q.; Wang, L. A new strategy for parameter optimization to improve phase-dependent locomotion mode recognition. Neurocomputing 2015, 149, 585–593. [Google Scholar] [CrossRef]
  40. Naruse, K.; Kawai, S.; Yokoi, H.; Kakazu, Y. Development of wearable exoskeleton power assist system for lower back support. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 27–31 October 2003; pp. 3630–3635. [Google Scholar]
  41. Kawai, S.; Yokoi, H.; Naruse, K.; Kakazu, Y. Study for control of a power assist device. Development of an EMG based controller considering a human model. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, 28 September–2 October 2004; pp. 2283–2288. [Google Scholar]
  42. Cevzar, M.; Petrič, T.; Jamšek, M.; Babič, J. Real-Time Control of Quasi-Active Hip Exoskeleton Based on Gaussian Mixture Model Approach. In Proceedings of the International Symposium on Wearable Robotics, Pisa, Italy, 16–20 October 2018; pp. 244–248. [Google Scholar]
Figure 1. (A) A subject wearing the APO with an IMU board on the backpack. (B) Experimental setup of this study.
Figure 2. Block diagram of the recognition algorithm. The white region and the light blue region denote the first and the second steps of the algorithm, respectively. Rules 1–4 detect different phase transitions. Rule 1 refers to Equations (1)–(3); Rule 2 refers to Equations (4)–(6); Rule 3 refers to Equation (7), and Rule 4 refers to Equations (8) and (9).
Figure 3. (A–D) Sensor signals of squat, stoop, left-asymmetric, and right-asymmetric lifting over the lifting cycle for a representative subject. The figures were plotted using the data for lifting the load and placing it on the table, collected in Session 1 of the experiment. Solid curves and shaded regions denote the mean values and standard deviations of the corresponding angles over multiple cycles. The three classification outputs are denoted by backgrounds of different colors: Other (blue), Pre-extension (yellow), and Extension (green). Left and right hip angles are measured by the exoskeleton's hip joint encoders, and roll and yaw angles are estimated using raw signals collected by the IMU on the exoskeleton's backpack.
Figure 4. Lifting detection delay of the algorithm. (A,B) show absolute and normalized detection delay for four different lifting techniques, respectively. Error bars denote STDs of the delay across 15 subjects.
Table 1. List of candidate features.
Feature | Definition and Description
α_hip | θ_mean(i), average of the left and right hip flexion angles
α_trunk | φ(i), trunk flexion angle
α_thigh | θ_thigh(i), thigh flexion angle
δ_LR | θ_L(i) − θ_R(i), difference between the left and right hip flexion angles
σ_thigh | standard deviation of θ_thigh from t_0 to i
δ_thigh | θ_thigh(i) − θ_thigh(t_0), change in θ_thigh from t_0 to i
ψ_adj | ψ(i) − ψ̄_0, adjusted yaw angle of the IMU
Note: t_0 and i denote the initial moments of the Pre-extension and Extension phases, respectively; φ and ψ denote the roll and yaw values estimated from the raw IMU signals; θ_thigh = θ_mean − φ; ψ̄_0 is the mean value of ψ in the 500-ms analysis window before t_0.
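For illustration, the candidate features of Table 1 could be computed from the exoskeleton signals roughly as follows. This is a hypothetical sketch, not the authors' implementation: the function name, array layout, and 100-Hz sampling rate are assumptions.

```python
import numpy as np


def lift_features(theta_L, theta_R, phi, psi, t0, i, fs=100):
    """Candidate features of Table 1 at sample i, given the onset t0
    of the Pre-extension phase.

    theta_L, theta_R : left/right hip flexion angle arrays (joint encoders)
    phi, psi         : trunk roll and yaw arrays (IMU orientation estimate)
    fs               : sampling rate in Hz (assumed value)
    """
    theta_mean = 0.5 * (theta_L + theta_R)        # average hip flexion
    theta_thigh = theta_mean - phi                # thigh angle = mean hip - trunk
    win = int(0.5 * fs)                           # 500-ms window before t0
    psi0_bar = psi[max(0, t0 - win):t0].mean()    # baseline yaw before the lift
    return {
        "alpha_hip":   theta_mean[i],
        "alpha_trunk": phi[i],
        "alpha_thigh": theta_thigh[i],
        "delta_LR":    theta_L[i] - theta_R[i],
        "sigma_thigh": theta_thigh[t0:i + 1].std(),
        "delta_thigh": theta_thigh[i] - theta_thigh[t0],
        "psi_adj":     psi[i] - psi0_bar,
    }
```

Note how δ_LR and ψ_adj capture the left–right and rotational asymmetries that separate the two asymmetric techniques, while δ_thigh and σ_thigh distinguish squat (large thigh excursion) from stoop lifting.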
Table 2. Confusion matrix (Mean ± STD) for subject-dependent recognition (%).
Actual Mode \ Estimated Mode | No-Lift | Squat | Stoop | Left-Asymmetric | Right-Asymmetric
No-lift | 100.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00
Squat | 0.67 ± 1.38 | 98.40 ± 2.25 | 0.22 ± 0.86 | 0.00 ± 0.00 | 0.71 ± 2.02
Stoop | 0.00 ± 0.00 | 0.00 ± 0.00 | 100.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00
Left-asymmetric | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 100.00 ± 0.00 | 0.00 ± 0.00
Right-asymmetric | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 100.00 ± 0.00
Table 3. Confusion matrix (Mean ± STD) for subject-independent recognition (%).
Actual Mode \ Estimated Mode | No Lift | Squat | Stoop | Left-Asymmetric | Right-Asymmetric
No lift | 98.67 ± 3.99 | 1.33 ± 3.99 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00
Squat | 0.24 ± 0.92 | 98.81 ± 2.20 | 0.48 ± 1.26 | 0.00 ± 0.00 | 0.48 ± 1.26
Stoop | 0.00 ± 0.00 | 0.00 ± 0.00 | 99.29 ± 2.77 | 0.71 ± 2.77 | 0.00 ± 0.00
Left-asymmetric | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 100.00 ± 0.00 | 0.00 ± 0.00
Right-asymmetric | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.24 ± 0.92 | 0.00 ± 0.00 | 99.76 ± 0.92
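As a quick consistency check, the overall subject-independent accuracy can be approximated from the mean confusion matrix of Table 3 by averaging its diagonal (the per-class recall), assuming balanced classes. The small difference from the reported 99.34% presumably reflects averaging at the subject level rather than over the pooled matrix.

```python
import numpy as np

# Mean confusion matrix (%) for subject-independent recognition (Table 3);
# rows are actual modes, columns are estimated modes.
C = np.array([
    [98.67,  1.33,  0.00,   0.00,  0.00],   # No lift
    [ 0.24, 98.81,  0.48,   0.00,  0.48],   # Squat
    [ 0.00,  0.00, 99.29,   0.71,  0.00],   # Stoop
    [ 0.00,  0.00,  0.00, 100.00,  0.00],   # Left-asymmetric
    [ 0.00,  0.00,  0.24,   0.00, 99.76],   # Right-asymmetric
])

# With balanced classes, overall accuracy is the mean of the diagonal.
per_class_recall = np.diag(C)
overall_accuracy = per_class_recall.mean()   # close to the reported 99.34%
```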
