1. Introduction
Stroke is the leading cause of motor impairment, with an annual incidence of more than 100 per 100,000 people worldwide [
1]. Following a stroke, about 80% of patients exhibit upper limb disability, frequently affecting hand function [
1]. Because the hands are essential to everyday tasks, the inability to perform many Activities of Daily Living (ADL) significantly reduces the patient’s autonomy and may lead to permanent incapacity.
According to the latest developments in hand rehabilitation, robot-assisted treatments could help stroke patients restore their motor function [
2,
3]. The exoskeletons presented to date make it possible to intensify and repeat therapy more frequently, to precisely control motor support, and to monitor patient progress objectively [
4]. Robotic therapy could therefore adapt to the patient’s unique movement patterns and progress while performing a personalized and objective follow-up of the patient’s evolution and muscular condition [
5]. Moreover, the rehabilitation method can be adapted to the patient’s sensitivity, recovery stage, particular disabilities, and occupational constraints [
6].
From a mechanical design point of view, hand exoskeletons can be classified into glove-based, pneumatic, and mechanical devices. Although pneumatic exoskeleton devices are simple to operate, achieving flexible finger movement with them is challenging [
7]. Glove-based exoskeletons, because the glove covers the hand, are often less flexible and less comfortable to wear owing to heat and sweat; the glove also prevents direct contact with the object and interferes with tactile feedback [
8]. In linkage-based exoskeletons, the finger components are connected to the mechanical linkages via either fingertip touch [
7] or full hand contact [
9].
How to operate the hand exoskeleton in accordance with the patients’ intended movements is a further concern. Surface electromyography (sEMG) is a non-invasive technique that has been extensively employed in clinical evaluations [
10]. sEMG reflects the muscular contractions controlled by the central nervous system, and capturing and analyzing sEMG signals can enable real-time control of a hand exoskeleton [
11]. Most sEMG-driven exoskeletons adopt the mirror-therapy approach, which extracts the movement intention of stroke survivors from the non-paretic muscles, since these largely retain proper function [
12]. Various techniques have been developed for sEMG interpretation, movement intention parsing, and robotic exoskeleton actuation; however, real-time control and online sEMG analysis for multi-actuator exoskeletons remain difficult to implement. In [
13], a novel 3D-printed hand orthosis controlled by electromyography (EMG) signals is presented. However, the investigated feature extraction approaches are limited to a single time-domain feature. Moreover, a new hand exoskeleton with real-time EMG-driven embedded control has been proposed, focusing on quantifying hand gesture recognition delays for bilateral rehabilitation [
14]. However, because the control is performed bilaterally, the presented work can only be extrapolated to stroke patients who have a single affected hand.
This study presents a newly developed wearable robotic hand exoskeleton that can be controlled by motion intention and offers higher ranges of motion (ROMs) for most joints. The mechanical shells come into full contact with the human finger, and a total of five actuators produce strong output forces, one for each digit. sEMG signals from the muscles of the healthy arm were used to interpret motion intention and control the hand exoskeleton. Two classifiers were implemented in several configurations, and their performances were compared. In the same context, the authors in [
15] present a novel electromyography (EMG)-driven hand exoskeleton for bilateral rehabilitation of grasping in stroke. Although the obtained results are acceptable, the total number of healthy subjects for training and post-stroke subjects for validation is very limited. On the other hand, a new design of a wearable robotic hand exoskeleton with more degrees of freedom (DOFs) and a larger range of motion (ROM) was demonstrated [
16]. However, the authors focused on mechanical features by optimizing the DOFs and the ROM.
The Robot Operating System (ROS) is widely used in the design of the control systems of exoskeleton devices [
17,
18,
19,
20,
21,
22]. In [
19], the authors present a new design of the robot’s control software based on the ROS platform to realize basic rehabilitation training of the patient’s shoulder. Moreover, a new framework has been developed for rapid prototyping based on the integration between MATLAB/Simulink and the ROS environment [
20]. This framework allows robust position and torque control of the exoskeleton and real-time monitoring. In [
21], a PC equipped with ROS is used to simulate the multi-finger dexterous hand in RVIZ (ROS visualization), and the simulation model of the multi-finger dexterous hand is controlled through ROS node communication. In the same context, a set of software components executed as ROS nodes has been proposed to solve hardware-related problems and implement control strategies [
22]. However, ROS-based real-time control architectures for human-computer systems still require further development.
For this purpose, we have created a prototype system that estimates the hand-grip state by monitoring EMG signals from the associated muscles of the healthy limb and uses it to control a new hand exoskeleton device on the paretic side. SVM and KNN classifiers were used to perform handgrip state estimation.
Figure 1 depicts the system’s general architecture, composed of the EMG acquisition system, the preprocessing stage, the classification algorithms, and the active hand exoskeleton device. In the processing stage, four time-domain features are extracted from the received signals and fed to the classification block: Willison amplitude (WAMP), Mean Absolute Value (MAV), Variance (VAR), and Waveform length (WL). The control system is based on ROS, a well-known framework that provides resources for building robotic applications. The primary goal of this study was to achieve reliable classification of two hand movements that could be used to control the motion of a hand exoskeleton device with few EMG channels and many degrees of freedom.
The structure of the current work is as follows:
Section 2 introduces the materials and methods, the results are presented in
Section 3, and Section 4 discusses them. We conclude with an overview of the work and its perspectives.
2. Materials and Methods
2.1. System Overview and Operating System Selection
The main architecture of the developed system consists of controlling a new hand exoskeleton device via EMG signals measured from the healthy non-paretic side. As shown in
Figure 1, the main architecture is composed of the EMG acquisition system, the preprocessing stage, the classification algorithms, and the active hand exoskeleton device. The acquisition system consists of surface electrodes plugged into the MyoWare sensor, an Arduino-compatible, all-in-one electromyography (EMG) sensor from Advancer Technologies. It measures muscle activity through the electric potential of the muscle, commonly referred to as surface electromyography, and can output a filtered and rectified signal representing how hard the muscle is being flexed. The board accepts a single supply voltage of +3.3 V to +5 V and provides three output modes, reverse-polarity-protected power pins, and indicator LEDs. Raw EMG signals were acquired by a real-time sbRIO board, ensuring high-speed acquisition. The signal pre-processing stage consists of filtering the acquired signals and applying noise cancellation techniques. Four time-domain features were extracted from the received signals and used as inputs for the classification block: Willison amplitude (WAMP), Mean Absolute Value (MAV), Variance (VAR), and Waveform length (WL).
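As a reference for these features, the following minimal sketch gives their commonly used time-domain definitions; the WAMP threshold and the random test window are illustrative assumptions, not the authors’ exact settings.

```python
# Common time-domain definitions of the four sEMG features; the WAMP
# threshold and the placeholder window are illustrative, not the authors'.
import numpy as np

def mav(x):
    """Mean Absolute Value: average of the rectified signal."""
    return np.mean(np.abs(x))

def var(x):
    """Variance of the EMG segment (sEMG is commonly assumed zero-mean)."""
    return np.sum(x ** 2) / (len(x) - 1)

def wl(x):
    """Waveform Length: cumulative length of the signal over the segment."""
    return np.sum(np.abs(np.diff(x)))

def wamp(x, threshold=0.05):
    """Willison Amplitude: count of consecutive-sample differences above a threshold."""
    return int(np.sum(np.abs(np.diff(x)) >= threshold))

window = 0.1 * np.random.randn(30000)  # placeholder for a 3 s window at 10 kS/s
features = [wamp(window), mav(window), var(window), wl(window)]
```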
The control system is based on ROS, a well-known framework that provides resources for building robotic applications. ROS is an open-source software development environment for robots; thanks to its modular design, it can run across multiple machines and interface with a wide range of devices and programs. The suggested control system design is shown in
Figure 2. In order to ensure data collection and classification algorithm execution, several nodes were incorporated into the architecture.
Parallel processing is the main focus of the low-level design (
Figure 3). Task scheduling for parallel processing splits an operation into multiple processes that run simultaneously on different CPUs or processor cores. As illustrated in
Figure 3, parallel processing is applied to each task that requires running multiple processes, including multichannel EMG acquisition, filtering of the acquired signals, execution of the feature extraction approaches, and data fusion for decision making. This design offers concurrency, which relaxes timing constraints, enables larger problems to be handled, and allows additional devices to be integrated. To meet real-time requirements, the system operations must be completed within the sample time T, which depends on the principal components (acquisition, computing, and decision-making).
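As an illustration of this task layout, the sketch below runs the per-channel feature extraction in parallel worker processes; the two-channel setup and the inline feature formulas mirror the pipeline above, while the data itself is a placeholder.

```python
# Minimal sketch of parallel per-channel processing, assuming two EMG
# channels and the four time-domain features described above.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def extract_features(channel_samples):
    # A real pipeline would band-pass and notch filter the window first.
    x = np.asarray(channel_samples)
    dx = np.abs(np.diff(x))
    return [np.sum(dx >= 0.05),            # WAMP (illustrative threshold)
            np.mean(np.abs(x)),            # MAV
            np.sum(x ** 2) / (len(x) - 1), # VAR
            np.sum(dx)]                    # WL

if __name__ == "__main__":
    channels = [np.random.randn(30000) for _ in range(2)]  # placeholder windows
    with ProcessPoolExecutor() as pool:                    # one worker per channel
        feature_sets = list(pool.map(extract_features, channels))
    fused = np.concatenate(feature_sets)  # 8-feature vector for decision making
```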
The ROS-based node architecture is shown in Figure 4. The first node handles the acquisition block based on the NI sbRIO real-time acquisition board, which acquires the signals received from the EMG sensor and connects to the laptop. A ROS toolkit for LabVIEW is used to transmit the data between the sbRIO and the laptop using Network Streams. For the pre-processing stage, four software nodes were designed for EMG processing in order to extract useful information for classification. The sixth node is also a software node: it runs the machine learning algorithms and returns the decision to the last node, which applies it to the hand exoskeleton device. In this hardware node, an Arduino Due controls the hand exoskeleton actuators via the dedicated interfaces and communicates with the ROS core (the master) over the serial protocol.
For the classification step, two classifiers were implemented in the LabVIEW interface for intention detection. The classification results were sent to the control system of the hand exoskeleton device via the Arduino Due board, which executes the desired motion by driving five stepper motors fixed to the related mechanical supports. Each stepper motor is driven by a dedicated driver based on the ULN2003 circuit. With this architecture, participants can grasp, hold, and release objects with high flexibility.
The growing complexity of autonomous systems demands significant advancements in the dependability of the software engineering lifecycle. Middleware is one of the most challenging research areas for such systems; runtime application adaptation, communication across heterogeneous systems at various middleware layers, and runtime safety assurance in the case of a failure are all examples of middleware issues.
Since ROS1 does not provide real-time performance, we switched to the ROS2 architecture [
23,
24]. Indeed, ROS1 suffers from several drawbacks, including the lack of a standardized approach for creating a multi-robot system [
25,
26]. Furthermore, ROS1 lacks a real-time design, which would force us to stretch our architecture to meet the strict real-time performance requirements and tight performance indicators of our control system.
In order to ensure data integrity, ROS1’s distributed strategy also requires a stable network environment, yet its communication is unsecured and unencrypted. ROS2 improves network performance for multi-robot communication over ROS1 and adds functionality for multi-robot systems. ROS2 also supports real-time control, which can enhance both the performance and the timeliness of the intended system [
27,
28]. The architecture of ROS2, which is organized into multiple levels, improves fault tolerance capabilities because communication is based on the DDS (Data Distribution Service) standard (
Figure 5). Moreover, ROS2’s intra-process communication technique is more efficient.
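To make the node structure concrete, the following minimal rclpy sketch shows one link of the pipeline in Figure 4 ported to ROS2: a node that subscribes to a fused feature vector and publishes a grasp/release decision. The topic names, message types, and stand-in threshold rule are illustrative assumptions, not the authors’ implementation.

```python
# Minimal ROS2 (rclpy) sketch of a decision node; topic names and the
# threshold rule are assumptions for illustration only.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32MultiArray, String

class DecisionNode(Node):
    def __init__(self):
        super().__init__('decision_node')
        # Fused 8-feature vector published by the pre-processing nodes
        self.sub = self.create_subscription(
            Float32MultiArray, 'emg_features', self.on_features, 10)
        # Command consumed by the hardware node driving the Arduino Due
        self.pub = self.create_publisher(String, 'hand_command', 10)

    def on_features(self, msg):
        # A trained classifier would run here; this stand-in thresholds
        # the MAV feature (index 1 in the WAMP/MAV/VAR/WL ordering).
        decision = 'grasp' if msg.data[1] > 0.1 else 'release'
        self.pub.publish(String(data=decision))

def main():
    rclpy.init()
    rclpy.spin(DecisionNode())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```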
2.2. Hand Exoskeleton Design
This section describes a new hand exoskeleton design fabricated with 3D printing technology. The 3D architecture of the hand exoskeleton was developed using the computer-aided design program SOLIDWORKS, and the parts were produced on a fused deposition modeling (FDM) 3D printer. FDM, one of the most popular manufacturing methods in 3D printing, offers the advantage of producing parts quickly and affordably.
The design aims at a poly-articulated hand exoskeleton inspired by the biology of the human hand in terms of size and actuation principle. Equipped with five geared stepper motors, one ensuring the movement of each finger, the exoskeleton integrates force and position sensors so that an integrated computer can control the movement of each finger individually.
Figure 6 shows the 3D CAD model of the bionic hand assembly. The module pieces were then manufactured using a 3D printer, and their final assembly results in the bionic hand illustrated in
Figure 7. The hand exoskeleton device was created as an active hand exoskeleton to assist stroke survivors with gripping activities. It provides five independent degrees of freedom (DOF), one for each finger, to aid in gripping various objects. As illustrated in
Figure 8, the hand exoskeleton is made up of five pulley transmission systems, one for each finger, situated on the front of the hand so as not to obstruct the ability to grasp objects, and powered by five stepper motors. Regarding material selection, the control unit box and the five finger mechanisms are 3D printed in PLA (polylactic acid), a simple-to-use printing material. Since PLA is made from renewable resources such as maize starch, tapioca roots, or sugarcane, it is biodegradable and naturally degrades when exposed to the environment. In addition, it is non-toxic, has a pleasant smell when printed, is available in a large variety of colors, and is very simple to print well owing to its thermal properties.
The assembly step requires several adjustments of the fishing wires attached to the pulley fixed on each stepper motor shaft to ensure flexion and extension of each finger.
Table 1 presents the main components of the designed exoskeleton. The control box is designed to house the electronic architecture of the control system. This box is attached to the motor support piece, which is capable of supporting six motors. The guiding-wire part is placed on the carpal zone to adjust the five wires. Regarding performance, the hand exoskeleton was designed to produce a grasping force of up to 10 N, which was deemed sufficient to safely hold medium-sized objects. Regarding response time, the actuated hand exoskeleton exhibits less than 2 s of latency in applying the received decision. The device is powered at 5 V DC, with a maximum current consumption of around 2.5 A.
In order to minimize electromagnetic interference, the hand exoskeleton’s control and drive electronics were built into a small piece of hardware integrated inside the control box of the hand exoskeleton. The electronics are fully covered by the plastic box and separated from the power system and the actuators, ensuring minimal electromagnetic interference. Regarding the control architecture, an embedded Arduino board receives the control signals from the LabVIEW system once the classification algorithm has been run and its results converted into control commands. The Arduino microcontroller drives the hand exoskeleton actuators by transforming the received commands into Pulse Width Modulation (PWM) signals. Five geared stepper motors are independently controlled to move each finger through the coupled fishing wires. Each stepper motor is driven by the popular ULN2003 driver, dedicated to stepper motor control. The ULN2003 is a high-voltage (up to 50 V), high-current (up to 500 mA) Darlington driver comprising seven NPN Darlington pairs, all featuring integral clamp diodes for switching inductive loads; it is commonly used to drive relays, lamps, and LED displays. When the user wants to grasp, each stepper motor is actuated in the forward direction, and the fishing wire attached to the motor shaft flexes the related finger. Conversely, the exoskeleton returns to its resting state when the motors are actuated in the reverse direction.
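For illustration, a host-side sketch of this command path is given below; it assumes a hypothetical single-byte serial protocol (b'G' for grasp, b'R' for release) and port name, since the authors’ actual command format is not specified.

```python
# Hypothetical host-side serial link to the Arduino Due; the one-byte
# command protocol and the port name are assumptions, not the authors' format.
import time
import serial  # pyserial

link = serial.Serial('/dev/ttyACM0', baudrate=115200, timeout=1)
time.sleep(2)  # allow the board to reset after the port opens

def send_decision(grasp: bool) -> None:
    # The firmware is assumed to step all five motors forward on b'G'
    # (flexion via the fishing wires) and in reverse on b'R' (extension).
    link.write(b'G' if grasp else b'R')

send_decision(True)   # flex the fingers to grasp
send_decision(False)  # return to the resting (extended) state
```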
2.3. Data Acquisition System for EMG Measurement
In order to assess the degree of assistance during human-exoskeleton interaction, EMG recordings can be used to correlate the state of hand grasping with the activity of the recruited muscles. Two major forearm muscles, the extensor digitorum and the flexor digitorum superficialis, were used in this research with the aim of assessing the hand grasping state. AgCl electrodes are used to acquire the signal, and, following the surface EMG standards for the non-invasive assessment of muscles (SENIAM) [
29], the skin-electrode contact was investigated.
To further improve signal capture, the participant’s skin was shaved and then cleansed with an alcohol swab. The electrode placement followed the SENIAM recommendations. For each bipolar derivation, pre-gelled electrodes were placed with an inter-electrode distance of 20 mm. For all bipolar derivations, the reference electrode was placed at the wrist.
Figure 8 displays the map of the EMG sensor locations. Thirty healthy participants (mean age 25 ± 3.5 years) with no history of stroke or hand impairment were recruited for the EMG recordings dedicated to classifier training. All individuals gave informed consent to take part in the study.
The acquisition system is built around the MyoWare EMG sensor with surface electrodes. The sensor provides two outputs for measuring muscle activity: the raw EMG and the rectified EMG. The rectified output is produced by an integrated analog conditioning block that provides the time-domain average value of the EMG signal. However, this output is suitable only for threshold detection, and no additional features can be extracted from it. The raw EMG output, on the other hand, is better suited to further digital processing for extracting useful features from the generated EMG signals [
30,
31]. In this investigation, we employed the real-time board sbRIO-9637 from National Instruments, with the raw EMG output as the acquisition system’s input.
The amplifier gain of the captured signal is set to 1000, and the acquisition system is configured to record 10,000 samples per second [
32]. Furthermore, a Butterworth-type 10–2000 Hz band-pass filter is utilized [
4]. To provide two channels of EMG signals, pre-gelled surface electrodes were attached on both sides of the relevant muscles, with a reference contacting the olecranon joint. To further enhance the quality of the received signal, a 50 Hz Butterworth-type notch filter was created in the LabVIEW block diagram to eliminate power-line noise; it was designed with LabVIEW’s digital filter design module.
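For reference, an offline equivalent of this filtering chain can be sketched with SciPy as follows; the filter orders and notch quality factor are assumptions, and a standard IIR notch stands in for the Butterworth-type notch built in LabVIEW.

```python
# Offline sketch of the pre-processing filters; orders and Q are illustrative.
import numpy as np
from scipy import signal

fs = 10_000  # samples per second, per the acquisition settings above

# 10-2000 Hz Butterworth band-pass (4th order as an assumption)
sos_bp = signal.butter(4, [10, 2000], btype='bandpass', fs=fs, output='sos')

# 50 Hz notch for power-line noise (IIR notch standing in for the
# Butterworth-type notch designed in LabVIEW; Q = 30 is illustrative)
b_notch, a_notch = signal.iirnotch(w0=50.0, Q=30.0, fs=fs)

def preprocess(raw_emg):
    x = signal.sosfiltfilt(sos_bp, raw_emg)       # zero-phase band-pass
    return signal.filtfilt(b_notch, a_notch, x)   # zero-phase notch

clean = preprocess(np.random.randn(30000))  # placeholder 3 s EMG window
```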
2.4. Machine Learning for Handgrip State Estimation
A machine learning method was used to look for patterns in the EMG data of two different hand states. In order to extract pertinent properties from the signals, a feature extraction stage was first constructed. As shown in
Table 2, once the EMG signal had been filtered, we computed the four time-domain features, namely MAV, WAMP, VAR, and WL. These feature extraction approaches are often used in hand pattern recognition algorithms [
33,
34].
An event is the signal recorded while the subject performed the grasping action. Each subject produced 20 events, each lasting three seconds, so the entire experiment (30 subjects × 20 repetitions) yielded a total of 600 events. The four features indicated above were computed for each of the two bipolar channels (four electrodes) collected synchronously during each event, giving each event a total of eight features. The convention for feature identification is shown in
Table 3.
Support Vector Machine (SVM) and k-nearest neighbors (k-NN) were the two classification methods trained on the feature space to recognize the grasping state. These supervised algorithms were optimized through a training phase using 70% of the events and a testing phase using the remaining 30%. In addition, we tested various classifier configurations to determine which one performed best. Five kernel functions were examined for the SVM classifier; although SVM is a binary classifier, one-vs.-one and one-vs.-all multiclass techniques were also used. On the other hand, as shown in
Table 4, we varied the distance metric and the number of neighbors for the k-NN classifier.
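A compact sketch of this comparison with scikit-learn is given below; the feature matrix is a random placeholder, and the specific kernels, neighbor counts, and distance metrics are assumptions standing in for the configurations of Table 4.

```python
# Sketch of the SVM / k-NN comparison; data and hyperparameters are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))      # placeholder for the 600 x 8 feature matrix
y = rng.integers(0, 2, size=600)   # placeholder grasp/rest labels

# 70/30 train/test split, as in the text
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)

candidates = {f'SVM-{k}': make_pipeline(StandardScaler(), SVC(kernel=k))
              for k in ('linear', 'rbf', 'poly', 'sigmoid')}
candidates['KNN1'] = KNeighborsClassifier(n_neighbors=1, metric='euclidean')
candidates['KNN4'] = KNeighborsClassifier(n_neighbors=4, metric='manhattan')

for name, clf in candidates.items():
    clf.fit(X_tr, y_tr)
    print(name, round(clf.score(X_te, y_te), 3))  # test-set accuracy
```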
3. Results
This section presents the experimental outcomes obtained with the machine learning algorithms for the classification of the two relevant hand gestures, following the acquisition procedure indicated in
Figure 1.
Eight time-domain features were derived from the EMG signal events and tested in ten classifier configurations: five based on SVM and five based on KNN. First, 1200 events were used to train the classifiers in order to identify the parameters that minimize the error ratio between each event’s estimated and actual label. The effectiveness of the techniques was then assessed on the 580 events of the testing set.
For classifier evaluation, a 2 × 2 confusion matrix was built, counting the events that were correctly and erroneously categorized. The sensitivity, specificity, and accuracy of the approaches were computed from the true-positive rate, the true-negative rate, and the overall rate of correct classification, respectively. Moreover, Receiver Operating Characteristic (ROC) curves were created by plotting sensitivity (S) against 1-specificity (FPR) for the best four classifiers, as shown in
Figure 9. The average classifier performances were between 0.69 and 0.93 in sensitivity and 0.01 and 0.06 in 1-specificity. For accuracy evaluation, the KNN1 and KNN4 classifiers demonstrated the best performance. Overall, as shown in
Table 5, we found that the best classifiers had an average sensitivity of 91%, specificity of 97%, and accuracy of 98%.
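These metrics follow directly from the confusion matrix, as the short sketch below illustrates; the labels and scores are placeholders standing in for a fitted classifier’s test-set outputs.

```python
# Sensitivity, specificity, and accuracy from a 2x2 confusion matrix,
# plus the ROC points; y_true / y_score are placeholder test-set outputs.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_curve

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=180)          # placeholder labels
y_score = rng.random(180)                      # placeholder classifier scores
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                   # true-positive rate
specificity = tn / (tn + fp)                   # true-negative rate
accuracy = (tp + tn) / (tp + tn + fp + fn)     # overall correct rate

# ROC: sensitivity versus 1 - specificity across decision thresholds
fpr, tpr, _ = roc_curve(y_true, y_score)
```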
In this study, we developed a novel wearable robotic hand exoskeleton with several joints, more active DOFs, larger ranges of motion for most joints, and the ability to be autonomously driven by the user’s motion intention. This hand exoskeleton can drive the five fingers independently and satisfies the requirements of hand function rehabilitation. Additionally, the exoskeleton’s mechanical design allows the hand to perform pinching and gripping actions. The anatomical and functional qualities of the human hand served as inspiration for this design. According to the results, the classification accuracy of the sEMG-based hand exoskeleton control system was high.
The first step consists of assembling the 3D-printed components; the participant is asked to wear the main control box and the finger parts. Secondly, the fishing wires are installed in the guiding-wire components and attached to the pulley plugged onto the stepper motor shaft; this step is repeated five times to set up all finger movements. The fishing wires should be well adjusted to ensure a compact mechanical structure without backlash. The final step consists of installing the surface electrodes and powering the control system. In order to avoid any unwanted movements, the control system is initiated with some delay before the training starts. At this point, participants can begin training by exercising the healthy forearm; the desired movement is applied in real time to the second forearm by actuating the exoskeleton, which imitates the healthy side. As illustrated in
Figure 10, the preliminary test of the active hand exoskeleton device shows that the user can perform the grasping movement with high flexibility.
Regarding the evaluation of the designed hand exoskeleton, 10 healthy participants (from the 30 previously mentioned) took part in this study. They rated the hand exoskeleton device, scoring each item on a scale from 1 (not at all satisfied) to 5 (extremely satisfied) and considering the device’s dimensions, weight, adjustments, safety, simplicity of use, comfort, and effectiveness (
Table 6). The majority of participants were satisfied with the exoskeleton’s performance, which was the most highly ranked of the seven items, with an average score of about 4.7. However, several participants complained that the stepper motors made the hand exoskeleton somewhat cumbersome and difficult to adjust manually. The participants were generally satisfied with the developed hand exoskeleton device. Additionally, because the sEMG-based control system was simple to understand, none of the participants requested more time to configure the system. None of the participants expressed any discomfort during the trial, and no side effects, such as pressure ulcers, were observed.
4. Discussion
The overall system contains three main parts: the first is the acquisition system, composed of the MyoWare EMG sensor and the NI (National Instruments) real-time board connected to the laptop. The second is the LabVIEW software running the classification algorithms; the desired movement is sent to the hand exoskeleton device over USB. The third is the Arduino board, which receives the desired variables and actuates the hand exoskeleton by running the stepper motors. The consumed current of each stepper motor is acquired, providing a closed-loop control system that guarantees each finger is fully flexed or fully extended. Although this control method is very conventional, the grasping force shows good performance; flexion sensors could nevertheless be a good alternative for finger angle control. As shown in
Figure 11, the user can grasp, hold, and release objects based on the desired intention. The overall system shows good latency (less than 2 s) in applying the desired movement, thanks to the real-time acquisition system and the high-speed stepper motors. As shown in
Table 7, the latency is mainly due to the mechanical constraints of the hand exoskeleton’s structure. However, if we quantify the maximum latency from the start of acquisition to the generation of a decision (including EMG acquisition and processing, classification, and decision-making), the results in
Table 7 prove the real-time performance of the designed system, which is consistent with previous studies showing that the real-time constraints were the more challenging issue of the sEMG-controlled hand exoskeleton [
35,
36].
The top-performing classifiers (KNN1 and KNN4) had less-than-ideal online classification accuracy, although this was consistent with earlier research [
37]. Different classifiers should be chosen and used separately, as the classification accuracy is typically not the same for each person [
38,
39]. The accuracy of identifying several activities for the same person may likewise vary greatly. Muscle contractions varied from subject to subject, indicating that different people may activate their muscles in different ways to carry out the same function. Depending on the person, hand gesture predictions may therefore vary in accuracy [
10]. Although the implemented classifiers show good accuracy for intention detection, participants may encounter difficulties in training after executing several cycles, mainly due to muscle fatigue in the healthy hand. This issue will be addressed in the future by accounting for muscle fatigue in the classification algorithms.
Comparing the hand exoskeleton device developed during this work to other exoskeleton systems recently developed [
40,
41], some features and benefits stand out. Indeed, the developed exoskeleton device has certain structural benefits. Above all, it was made using 3D printing technology with a straightforward control system, which makes it lighter than the majority of recently developed equipment. The exoskeleton attached to the hand weighed about 180 g in total (95 g for the 3D-printed components and 80 g for the stepper motors).
Since the majority of hand usage in regular activities is supervised by sight, visual feedback was left in place for the experiment. Moreover, any impact of visual input on the outcomes is limited because all individuals received the same visual feedback during hand performance. The purpose of this study was to demonstrate a newly created wearable robotic hand exoskeleton that can be freely directed by movement intention based on the processed EMG signals. However, further research is required before any clinical tests, because this is a preliminary study showcasing a unique exoskeleton design. The sample size was adequate to show that the approaches performed well, but a bigger database would be required for a clinical investigation. EMG signals from stroke patients in particular should be thoroughly examined, since they may have altered shapes and more varied behavior [
42]. To demonstrate the effectiveness of this new exoskeleton for patients with neuromuscular diseases, such as stroke patients, we plan to conduct clinical research soon.