1. Introduction
There is evidence that early and intensive rehabilitation therapies are associated with better functional gains in patients with acquired brain damage [1]. Rehabilitation robots have shown good results in delivering high-intensity therapies and maximizing patients’ recovery [2,3,4]. However, some motor functions cannot be recovered. In such cases, assistive robotics has shown good results in assisting patients with acquired brain damage in performing activities of daily living (ADLs) and/or in supporting elderly people in staying active, socially connected, and living independently. Principally, there are two kinds of assistive robotic devices: the first is based on mobile robot assistants, such as Care-O-bot, PR2, and Tiago, among others; the second is based on the use of an external robotic arm or a robotic exoskeleton fixed or mounted on a wheelchair.
The second approach is based on the use of: (i) an external robotic arm fixed or mounted on a wheelchair; or (ii) an exoskeleton robotic device. JACO and iARM are two of the most popular external robotic arms fixed or mounted on wheelchairs. Both robotic arms were designed to be mounted on a user’s motorized wheelchair; they have six degrees of freedom and can reach objects at a distance of 90 cm [5]. A study on the practical demands of the potential users of external robotic arms and upper limb exoskeletons for assistance with ADLs can be found in [6]. The study concluded that eating and hairdressing, as well as cleaning, handling food, dressing, and moving nearby items, were the ADLs that received relatively high scores regarding the necessity of external robotic arms. The FRIEND robotic platform is an example of a well-known external robotic arm that assists disabled people in performing ADLs. The FRIEND platform, which belongs to the group of intelligent wheelchair-mounted manipulators, is intended to support disabled people with impairments of the upper limbs in ADLs [7]. On the other hand, dressing, toilet use, transfer, wheelchair control, moving nearby items, and handling food showed high demand regarding the necessity of upper limb exoskeletons. Kiguchi et al. presented a mechanism and control method of a mobile exoskeleton robot for three-degree-of-freedom upper-limb motion assistance (shoulder vertical and horizontal flexion/extension and elbow flexion/extension motion assistance) [8]. In addition, Meng et al. presented a mobile robotic exoskeleton with six degrees of freedom (DOFs) based on a wheelchair [9].
In this paper, a mobile robotic platform for assisting moderately and severely impaired people in performing daily activities and fully participating in society is presented. The mobile robotic platform was based on an upper limb robotic exoskeleton mounted on a robotized wheelchair. The platform is modular and composed of different hardware components: an unobtrusive and wireless hybrid brain/neural–computer interaction (BNCI) system (electroencephalography (EEG) and electrooculography (EOG)) [10], a physiological signal monitoring system, an electromyography (EMG) system, a rugged, small form-factor, high-performance computer, a robotized wheelchair, RGB-D cameras, a voice control system, eye-tracking glasses, a small monitor, a robotic arm exoskeleton attached to the wheelchair, and a robotic hand exoskeleton including a mechatronic device to control the pronation/supination of the arm. Moreover, the robotic exoskeleton can be replaced with an external robotic device if needed. The platform has open-source software components as well, such as algorithms to estimate the user’s intention based on the hybrid BNCI system, to process the user’s physiological reactions, to estimate the indoor location and to navigate, to estimate gaze and to recognize objects, to compute the 3D pose of objects and of the user’s mouth, to recognize user activity, and a high-level controller to control the robotic exoskeleton or external robotic device, as well as the environment and the wheelchair control system. The modularity of the presented mobile robotic platform can be exploited by adapting the multimodal interface to the residual capabilities of the disabled person. In particular, the platform can be mainly adapted to three groups of end users with different residual capabilities:
Group 1: users with residual motor capabilities to control the arm and/or hand, but who need assistance to carry out activities of daily living in an efficient way. In this group of users, residual EMG signals could be used to control a wearable robot to assist in performing ADLs. In addition, the multimodal interface could be composed of a voice semantic recognition system (for users without speech disorders) or a wearable EOG system (for users with speech disorders) to tune some parameters of the high-level controller of the wearable robot and to interact with the user control software, a commercial wearable device for physiological signal monitoring, and RGB depth cameras to sense and understand the environment and context to automatically recognize the abilities necessary for different ADLs;
Group 2: users without functional control of the arm and/or hand and who are unable to speak (due to a speech disorder or aphasia). In this group, the multimodal interface could be composed of a hybrid BMI system to send commands to the high-level control of the wearable robot, a wearable EOG system to interact with the user control software, a commercial wearable device for physiological signal monitoring, and RGB depth cameras to sense and understand the environment and context to automatically recognize the abilities necessary for different ADLs;
Group 3: users without functional motor control of the arm and/or hand, with speech disorders, and with limited ability to control the movement of their eyes. In this case, the multimodal interface could be composed of a BMI system to send commands to the high-level control of the wearable robot and to interact with the user control software, a commercial wearable device for physiological signal monitoring, and RGB depth cameras to sense and understand the environment and context to automatically recognize the abilities necessary for different ADLs.
For users belonging to Groups 1 and 2, a set of application scenarios was identified as possible targets for the AIDE system: drinking tasks, eating tasks, pressing a sensitive dual switch, performing personal hygiene, touching another person, and so on. For users belonging to Group 3, the identified scenarios were related to communication, the control of home devices, and entertainment. A schematic sketch of how interface components could be mapped to each group is given below.
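To make this modularity concrete, the following minimal sketch (in Python) shows how the interface components listed above could be selected per user group. The component names and the data structure are illustrative assumptions for this description, not the actual AIDE configuration code.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InterfaceConfig:
    """Hypothetical selection of multimodal interface components for one user group."""
    control_inputs: List[str]      # signals used to command the wearable robot
    menu_navigation: List[str]     # signals used to browse the user control software
    monitoring: List[str] = field(default_factory=lambda: ["wearable_physiological_monitor"])
    perception: List[str] = field(default_factory=lambda: ["rgbd_cameras"])

# Group 1: residual motor capabilities -> EMG control; voice or EOG for the menus
group1 = InterfaceConfig(control_inputs=["EMG"], menu_navigation=["voice", "EOG"])
# Group 2: no functional arm/hand control and unable to speak -> hybrid BMI plus EOG
group2 = InterfaceConfig(control_inputs=["hybrid_BMI"], menu_navigation=["EOG"])
# Group 3: no motor control, speech disorder, limited eye movements -> BMI only
group3 = InterfaceConfig(control_inputs=["BMI"], menu_navigation=["BMI"])
```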
2. Modular Assistive Robotic Platform
The system is a fully autonomous prototype consisting mainly of a robotized wheelchair with autonomous navigation capabilities, a multimodal interface, and a novel arm exoskeleton attached to the wheelchair (Figure 1).
2.1. Biosignal Acquisition System
The proposed platform is capable of measuring and storing data from several physiological signals. Some of these signals are used for decision making when controlling the system, such as the EOG or EEG, but others are only used to measure the condition of the patient (respiratory rate, galvanic skin response, heart rate, etc.). The system allows adapting the use of the physiological signals based on the patient’s need. In addition, new biosignals and processing techniques can be integrated. The performance, signal processing, and adaptation of the different physiological signals of the system have been tested in several studies [
11,
12,
13,
14,
15].
2.1.1. ExG Cap
An ExG cap, developed by Brain Vision, can be used to perform three different biosignal measurements: (1) EEG acquisition, through eight electrodes, to perform BNCI tasks and allow the user to control the assistive robotic device and interact with the control interface; (2) EOG acquisition, using two electrodes placed on the outer canthi of the eyes, to detect left and right eye movements and provide the user with the opportunity to navigate through the menus of the control interface; (3) EKG acquisition, to be combined with the respiration and galvanic skin response (GSR) data in order to estimate the affective state of the user [16].
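As a rough illustration of how left/right eye movements can be detected from the two EOG electrodes, the following sketch thresholds the amplitude change of the bipolar horizontal EOG signal. The sampling rate, threshold, and sign convention are assumptions and not the parameters of the actual system.

```python
import numpy as np

def detect_horizontal_saccades(eog_uv, fs=200.0, threshold_uv=100.0):
    """Label left/right eye movements in a bipolar horizontal EOG trace (microvolts).

    fs and threshold_uv are assumed values; the sign convention depends on the montage.
    """
    # Smooth with a 50 ms moving average to suppress high-frequency noise
    win = max(1, int(0.05 * fs))
    smoothed = np.convolve(eog_uv, np.ones(win) / win, mode="same")
    # Amplitude change over a 100 ms step approximates the saccadic deflection
    step = max(1, int(0.1 * fs))
    delta = smoothed[step:] - smoothed[:-step]
    events = []
    for i, d in enumerate(delta):
        if d > threshold_uv:
            events.append((i / fs, "right"))
        elif d < -threshold_uv:
            events.append((i / fs, "left"))
    # Note: consecutive samples belonging to one saccade are not debounced in this sketch
    return events
```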
2.1.2. Electrocardiogram and Respiration Sensor
The system incorporates the Zephyr BioHarness™ (Medtronic Zephyr, Boulder, CO, USA) physiological monitoring telemetry device to measure the electrocardiogram (ECG) and the respiration rate. This device has a built-in signal-processing unit; therefore, we only applied a 0.004 Hz high-pass filter to remove the DC component of the signals. The HR was extracted from the ECG signal, and the time domain indices of the heart rate variability (HRV) were also extracted. In particular, the SDANN was used as a feature of the HRV; it is defined as the standard deviation of the average normal-to-normal (NN) intervals calculated over short periods. In this case, the SDANN was computed over a moving window of 300 s.
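For clarity, the SDANN feature can be computed as sketched below. The 300 s window length follows the text; treating it as consecutive segments of averaged NN intervals is the usual convention and is an assumption here.

```python
import numpy as np

def sdann(nn_intervals_s, segment_s=300.0):
    """Standard deviation of the mean NN (normal-to-normal) interval per segment.

    nn_intervals_s : consecutive NN intervals in seconds
    segment_s      : segment length in seconds (300 s as in the text)
    """
    nn = np.asarray(nn_intervals_s, dtype=float)
    t = np.cumsum(nn)                     # time stamp (s) at the end of each interval
    segment_means = []
    start = 0.0
    while start < t[-1]:
        mask = (t >= start) & (t < start + segment_s)
        if mask.any():
            segment_means.append(nn[mask].mean())
        start += segment_s
    return float(np.std(segment_means)) if len(segment_means) > 1 else 0.0
```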
2.1.3. Galvanic Skin Response
A GSR sensor, developed by Shimmer, measures the skin conductivity between two reusable electrodes mounted on two fingers of one hand. These data are used, together with the EKG and the respiratory rate, to estimate the affective state of the user [12]. GSR is a common measure in psychophysiological paradigms and is therefore often used in affective state detection. The GSR signal was processed using a band-pass filter with a lower cutoff of 0.05 Hz (covering the frequency range of the skin conductance response (SCR)) in order to remove artifacts.
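A minimal filtering sketch for the GSR signal is given below. The 0.05 Hz lower cutoff follows the text, whereas the upper cutoff, filter order, and sampling rate are placeholders, since their exact values are not stated here.

```python
from scipy.signal import butter, filtfilt

def bandpass_gsr(gsr, fs, low_hz=0.05, high_hz=1.0, order=2):
    """Zero-phase Butterworth band-pass over (roughly) the SCR frequency range.

    fs is the sampling rate in Hz; high_hz and order are assumed values.
    """
    nyq = fs / 2.0
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="bandpass")
    return filtfilt(b, a, gsr)
```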
2.2. Environment Perception and Control System
The system integrates a computer vision system to recognize the environment with which the system will interact [17]. In addition, it has a user interface so that the user can interact with the environment.
2.2.1. Computer Vision System
Activities of daily living require the capability to perform reaching tasks within a complex and unstructured environment. This problem must be solved in real time in order to deal with the possible disturbances that the objects may undergo during the interaction. Moreover, the objects are commonly textureless.
Several methods have been proposed to date. However, despite the great advances in the field (especially using deep learning techniques), the problem has not been solved effectively yet, especially for nontextured objects. Some authors have used commercial tracking systems such as OptiTrack or ART Track [18,19,20]. The main limitation of these devices is the need to modify the objects to be tracked by attaching optical markers in order to reconstruct their position and orientation. The main lines of investigation in the field of 3D textureless object pose estimation are methods based on geometric 3D descriptors, template matching, deep learning techniques, and random forests.
Our system incorporates a computer vision system based on the use of three devices (Figure 1). The first one is the Tobii Pro Glasses 2, an eye-tracking system that allows the user to select the desired object. The second one is the Orbbec Astra S RGB-D camera, used for the 3D pose estimation of the textureless objects with which the system can interact. This camera is attached directly to the back of the wheelchair by means of a structure that places it above the user’s head, focusing on the scene. Finally, a full HD 1080p camera able to work at 30 fps is placed in front of the user, under the screen. This camera is used to estimate the 3D pose of the user’s mouth. This information lets the system determine the position the exoskeleton must reach for tasks such as eating or drinking.
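As an illustration of the geometry used when estimating 3D positions from the RGB-D camera, a depth pixel can be back-projected to a 3D point in the camera frame with the pinhole model. The intrinsic parameters in the example are placeholders, not the calibration of the Orbbec Astra S.

```python
import numpy as np

def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Convert pixel (u, v) with metric depth into a 3D point in the camera frame."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example with placeholder intrinsics (not the real calibration values)
point = backproject(u=320, v=240, depth_m=0.85, fx=570.0, fy=570.0, cx=319.5, cy=239.5)
```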
This computer vision system was tested in real conditions with patients and was also thoroughly evaluated both qualitatively and quantitatively. The results and a more detailed explanation of the algorithms developed can be seen in [17].
2.2.2. User Interface
The system also has a screen attached to the wheelchair and located in front of the user (Figure 1). The interface menus are displayed on this screen. The interface offers the user many different options (e.g., go to another room, drink, grab an object, entertainment, etc.) and gives some information about the selected task and the exoskeleton status.
2.3. Mobile Platform
The mobile platform was based on the Summit XL Steel, from Robotnik. It has omnidirectional wheels that allow the user to move within the room. Furthermore, it has its own computer that executes a navigation system, which makes it possible to move between different rooms. Laser-based simultaneous localization and mapping (SLAM) is used to map each room, and the navigation and localization across the different rooms are performed using the adaptive Monte Carlo localization (AMCL) probabilistic localization system, as can be observed in Figure 2. In addition, this platform is equipped with two laser sensors that provide the wheelchair with an obstacle avoidance algorithm, increasing safety during navigation.
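Assuming a standard ROS navigation stack (the text does not state the exact packages used beyond SLAM and AMCL), a navigation goal could be sent to the mobile platform roughly as follows; the node, action, and frame names are the usual ROS defaults, not confirmed details of this platform.

```python
#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def go_to(x, y):
    """Send a 2D navigation goal in the map frame and wait for the result."""
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0   # identity orientation for simplicity
    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == "__main__":
    rospy.init_node("send_nav_goal")
    go_to(1.5, 0.8)   # placeholder coordinates of, e.g., a point in the kitchen
```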
2.4. Electric Power System
The system has three batteries to power the whole system. First, the mobile platform incorporates a 15/30 Ah @ 48 V LiFePO4 battery, which gives an autonomy of up to 10 h. In addition, the main computer of the system has its own 91 Wh battery. The third and last battery of the system is dedicated to supplying the arm and hand exoskeletons. This battery was built with Panasonic 18650B cells and has a capacity of 1.18 kWh, which gives an autonomy of up to 3 h in continuous operation.
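As a rough consistency check, a 1.18 kWh pack sustaining about 3 h of continuous operation corresponds to an average power draw on the order of 1.18 kWh / 3 h ≈ 0.4 kW for the arm and hand exoskeletons.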
2.5. Safety Buttons
Safety is a key issue in wearable robotics, so there are three emergency stop switches (Figure 3): (1) on the left side of the robotized wheelchair; (2) on the back side of the robotized wheelchair; and (3) connected through a wire to the left side of the wheelchair.
By default, there is only one emergency button that cuts the exoskeleton’s power supply from the battery, located on a panel on the wheelchair. However, there is a second plug that offers the possibility of wiring a second button, which allows halting the device from a distance.
To restart the exoskeleton operation after a safety stop, the emergency button must be released and the lit green button of the left panel must be pressed.
The mobile robotic platform has its own emergency button located on the back side of the robotized wheelchair.
To restart the movement of the robotized wheelchair after a safety stop, the platform must be restarted by following these steps: (1) press the green CPU button for 2 s; (2) when the green LED of the CPU button turns off, put the ON-OFF switch in the OFF position; (3) put the ON-OFF switch in the ON position, which turns the platform electronics on again; (4) press the green CPU button for 2 s; and (5) release the safety button.
2.6. Assistive Robotic Devices
The system is able to integrate two different types of robotic devices to assist people with disabilities: (i) an external robotic arm; or (ii) a robotic exoskeleton. Both of them are mounted on the robotized wheelchair.
The control architecture of the robot is independent of the type of robot used as an assistive device. This architecture was implemented in two layers. The low layer implements the low-level control of the robotic device: a joint trajectory controller that executes the trajectories received from the high-level controller. The high layer corresponds to the high-level controller, which is responsible for managing the communication of the robot with the system and also implements a motion planning system. This motion planning system relies on the learning by demonstration (LbD) method based on the dynamic movement primitives (DMPs) proposed and evaluated in [21].
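To give an idea of this motion planning building block, the sketch below integrates a single one-dimensional discrete dynamic movement primitive in its common formulation. The gains and the (here trivial) forcing term are placeholders and do not reproduce the controller of [21].

```python
import numpy as np

def dmp_rollout(y0, goal, duration, dt=0.01, alpha=25.0, beta=6.25, alpha_x=3.0,
                forcing=lambda x: 0.0):
    """Integrate tau*v' = alpha*(beta*(goal - y) - v) + f(x),  tau*y' = v."""
    tau, y, v, x = duration, y0, 0.0, 1.0
    trajectory = []
    for _ in range(int(duration / dt)):
        f = forcing(x) * x * (goal - y0)     # forcing term scaled by phase and amplitude
        v += dt * (alpha * (beta * (goal - y) - v) + f) / tau
        y += dt * v / tau
        x += dt * (-alpha_x * x) / tau       # canonical system (phase variable)
        trajectory.append(y)
    return np.array(trajectory)

# With a zero forcing term, the DMP converges smoothly from y0 to the goal
path = dmp_rollout(y0=0.0, goal=0.3, duration=2.0)
```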
2.6.1. Exoskeleton Robotic Device
An upper limb exoskeleton was designed with five active degrees of freedom corresponding to the following arm movements: shoulder abduction/adduction, shoulder flexion/extension, shoulder internal/external rotation, elbow flexion/extension, and wrist pronation/supination [11,12,21,22]. This device allows the user’s right arm to be moved to reach objects, thus facilitating the performance of ADLs (Figure 1).
In addition to the arm exoskeleton, an active hand exoskeleton was designed to assist the opening and closing of the right/left hand [23,24]. It consists of four independent modules anchored to a hand orthosis that actuate the movements of the thumb, index finger, and middle finger, and jointly move the ring and little fingers. The configuration can be adapted to the size of the user’s hand.
2.6.2. Robotic Manipulator
The system can also integrate an external robotic manipulator. Experimental tests of the complete system were carried out with the JACO robot produced by Kinova (Boisbriand, Canada) [25]. This robotic manipulator is very light (4.4 kg for the arm and 727 g for the hand) and can be installed on a motorized wheelchair (on the right or left side) to help people with upper extremity mobility limitations. It has seven degrees of freedom, with a two- or three-finger gripper with a maximum opening of 17.5 cm. The JACO robot is capable of lifting objects of 3.5 kg to 4.4 kg and can reach objects within a radius of 75 cm.
2.7. Processing and Control System
The system has two computers: the main computer of the system and the computer integrated within the mobile robotic platform (Figure 1 and Figure 4).
The computer of the mobile robotic platform executes the navigation algorithms of the mobile platform using all the information from the sensors. It communicates with the main computer to execute the actions received from the system, as well as to inform the system about the current state during the navigation.
The main computer performs the communication between all the components of the system, processes all the information gathered from the sensors and cameras, and controls the arm and hand exoskeletons. This computer has its own 91 Wh battery.
Both computers communicate through a WiFi router. In this way, we can monitor the operation of the entire system by connecting an external computer to the router.
2.8. Finite State Machine
The integration of environmental data acquired by 3D sensors and user intentions has been evaluated in several studies [11,12,13,14,15]. The AIDE system also incorporates an activity recognition algorithm to improve the performance of the control interfaces. This algorithm has been evaluated with patients [16]. The experience gained in these studies resulted in two different state machines (Figure 5 and Figure 6). Both finite state machines (FSMs) describe the general operation of the system, so they have to be adapted according to the user’s residual capabilities, in other words, depending on the user control interfaces employed. The system can be controlled by means of EEG, EMG, EOG, gaze, voice commands, etc., and/or a combination of these. In this way, the system is adapted to the user’s needs or preferences. These FSMs were evaluated in the studies cited above, where the different functions of the finite state machines are also explained.
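A highly simplified sketch of such a finite state machine is given below; the states and transition events are illustrative only and do not reproduce the FSMs of Figure 5 and Figure 6.

```python
class TaskFSM:
    """Minimal FSM skeleton: transitions are triggered by interface events
    (EEG, EOG, voice, etc.) regardless of which interface produced them."""

    TRANSITIONS = {
        ("idle", "task_selected"): "reaching",
        ("reaching", "object_reached"): "grasping",
        ("grasping", "grasp_confirmed"): "moving_to_mouth",
        ("moving_to_mouth", "release_requested"): "returning",
        ("returning", "home_reached"): "idle",
    }

    def __init__(self):
        self.state = "idle"

    def on_event(self, event):
        """Move to the next state if the (state, event) pair is a valid transition."""
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state

fsm = TaskFSM()
fsm.on_event("task_selected")    # -> "reaching"
fsm.on_event("object_reached")   # -> "grasping"
```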
2.8.1. Hygiene Task
Due to the complexity of this type of task, the hygiene task is primarily intended to allow the user to clean his/her face or brush his/her teeth.
Figure 5 shows the state machine developed to carry out this type of task.
2.8.2. Preparing and Eating a Meal
In this scenario, the complex task of preparing and eating a meal is broken up into two subtasks. First, the user has to prepare a meal (Figure 6). In this FSM, the user takes the food from the fridge and heats it in the microwave. To do this, the user moves the wheelchair, opens/closes the fridge, opens/closes the microwave, and moves the robotic arm and hand exoskeleton to grasp and release the food tray. Several elements of the AIDE system are involved in this, such as the environmental control to move the wheelchair, the robotic arm, and the hand exoskeleton, as well as the object detection and 3D pose estimation.
After this, the system will continue to the eating and drinking task. In this task, the wheelchair is always in the same position in such a way that the user has only to interact with the exoskeleton to manipulate the glass and the cutlery.
3. Experimental Session
The study presented in this paper aimed to determine the degree of usability of the complete system in its main application environment: assistance in activities of daily living. In other experiments carried out throughout the project [11,12,16,17,21], the different elements that compose the robotic system described here were validated, as well as the different user interfaces used (EEG, EOG, EMG) [13,14,15].
This experiment was performed in a home environment developed for this purpose. It consisted of a room divided into two areas, one that simulated the living room and the other the kitchen. These two areas were used by a user in order to simulate the interaction with different elements of a home.
For this purpose, we enlisted the collaboration of a subject suffering from multiple sclerosis. In addition, a group of clinicians composed of nurses, doctors, and occupational therapists provided us with an objective view of the system in its main field of application after observing this experiment (see Figure 7).
The results of this study were obtained by administering the System Usability Scale (SUS), which determines the degree of system usability as perceived by the user and the clinicians.
3.1. Interface
The whole system proposed for this experiment was controlled through an environmental control interface (ECI), developed within the AIDE project. It consists of three different abstraction levels through which the user has to navigate in order to perform a specific activity (Figure 8). The first level shows the available rooms of the proposed scenario; the second level has a grid with all the possible activities the user can perform; and the last level is related to the actions the user can carry out within the selected activity. This interface was controlled with a hybrid EEG/EOG system [26]. In addition, the control of the ECI was supported by an intelligent system, proposed in [16], in order to help the navigation through the interface and streamline the completion of the desired task.
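The three abstraction levels of the ECI can be thought of as a small menu tree navigated with two discrete commands. The sketch below is illustrative only: the menu entries are hypothetical, and the command names ("next", "select") merely stand in for the EOG and EEG events used in practice.

```python
# Hypothetical three-level menu: room -> activity -> action
MENU = {
    "kitchen": {"eat and drink": ["grasp spoon", "grasp glass"],
                "environment": ["raise worktop", "open fridge"]},
    "living room": {"environment": ["light lamp", "turn on TV"],
                    "entertainment": ["play music"]},
}

class MenuNavigator:
    """'next' cycles through the options of the current level; 'select' descends."""

    def __init__(self, menu):
        self.stack = [menu]   # node of each visited level (dicts, then a list of actions)
        self.index = 0
        self.path = []

    def _options(self):
        node = self.stack[-1]
        return list(node) if isinstance(node, dict) else node

    def on_command(self, cmd):
        options = self._options()
        if cmd == "next":
            self.index = (self.index + 1) % len(options)
        elif cmd == "select":
            choice = options[self.index]
            self.path.append(choice)
            node = self.stack[-1]
            if isinstance(node, dict):          # descend to the next level
                self.stack.append(node[choice])
                self.index = 0
            else:                               # an action at the last level was chosen
                return self.path
        return None

nav = MenuNavigator(MENU)
for cmd in ["select", "select", "next", "select"]:
    chosen = nav.on_command(cmd)
# chosen == ["kitchen", "eat and drink", "grasp glass"]
```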
3.2. Navigation
In this experiment, two different rooms were mapped, the kitchen and the living room, as can be seen in Figure 9. After mapping the different rooms, the user could freely navigate through them using the proposed interface. The navigation to each room was performed in two steps using the interface. First, three different location points were established to perform a direct displacement to them. Then, a fine approach could be performed by small displacements to reach the place where the task had to be executed.
3.3. Activities of Daily Living
Throughout the experiment, the user interacted with several elements of the home through the use of the environmental control interface (Figure 9). These elements were located in two different rooms, the kitchen and the living room. The user navigated through the environmental control menu using the EOG and EEG interfaces described above.
Environmental control allowed the user to choose the destination he wanted to reach (kitchen or living room), and the mobile platform would take him there automatically. First, as shown in Figure 10, he moved to the kitchen area and adjusted the height of the worktop. Next, he moved to the living room, where he lit a lamp and then turned on the television. The times indicated are those that the user took to complete the activity, from the time he initiated the order to select the task to be performed until the activity was completely finished.
Once the user had interacted with the different elements of the room, he was ready to perform the eating task. As previously, the user selected the object, in this case the spoon, using the eye-tracking system, and he confirmed the selected object using an EOG command. Then, the exoskeleton started to move. When the robot reached the object, the user had to think “close” in order to close the hand (EEG command). When the robot reached his mouth, the user used EOG commands to indicate whether he wanted to finish the task or continue eating. To release the spoon, the user had to think “open” in order to open the hand (EEG command). At that point, the exoskeleton returned to the idle position, and the finite state machine was left waiting for a new command.
The user was able to complete all the tasks in reasonably short times; the longest activities were navigation to the kitchen (1 min and 15 s) and the eating task (whose duration depended on the number of repetitions the user wanted to perform). In addition, the user had the ability to abort the ongoing activity at any time if he deemed it necessary, providing greater safety to the system.
3.4. Subjective Assessment of Usability
The System Usability Scale (SUS) provides a quick tool for measuring the usability aspects of technology. The SUS consists of 10 questions with five response options from strongly agree to strongly disagree. The questions are the following:
- Q1
I think that I would like to use this system frequently.
- Q2
I found the system unnecessarily complex.
- Q3
I thought the system was easy to use.
- Q4
I think that I would need the support of a technical person to be able to use this system.
- Q5
I found the various functions in this system were well integrated.
- Q6
I thought there was too much inconsistency in this system.
- Q7
I would imagine that most people would learn to use this system very quickly.
- Q8
I found the system very cumbersome to use.
- Q9
I felt very confident using the system.
- Q10
I needed to learn a lot of things before I could get going with this system.
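For reference, the standard SUS scoring procedure (odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is multiplied by 2.5 to obtain a 0–100 score) can be computed as follows; the example responses are made up and are not data from this study.

```python
def sus_score(responses):
    """Compute the 0-100 SUS score from ten responses on a 1-5 scale (Q1..Q10 in order)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses between 1 and 5")
    contributions = [(r - 1) if i % 2 == 0 else (5 - r) for i, r in enumerate(responses)]
    return 2.5 * sum(contributions)

# Example with made-up responses (not data from this study)
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))   # -> 75.0
```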
3.5. Results
As mentioned above, the system developed was validated in different experiments that allowed improving not only the robotic device, but also the control and the different user interfaces. In the study presented in this paper, the main objective was to obtain the perspective of the user himself and the opinion of a group of experts regarding the usability of the final system in assisting with ADLs.
When answering the questionnaire, factors such as the time taken by the user to carry out each activity with the robotic system (which should not be too long) had to be taken into account, as well as whether the user completed each of the tasks without problems. To this end, the experts were present as members of the audience throughout the experiment, so that they could evaluate the aforementioned issues first hand.
All the clinicians filled in the SUS questionnaire, and the results are shown in Figure 11. The median of all the questions was equal to or above 2.5. However, the two questions with the lowest median values were related to the complexity and cumbersomeness of the system. This may be because the system is a prototype still at an early development stage, and because, on first use, it takes a relatively long time to calibrate the control interfaces to the user. We are working on improving future prototypes of the system by taking these aspects into account.
4. Conclusions
In this paper, a modular robotic platform to provide assistance to moderately and severely impaired people in performing daily activities and participating in society was presented. The main innovation of our robotic platform is its modularity, which allows customizing the platform (hardware and software components) to the needs of each potential user. We presented the results of an experiment with a subject suffering from multiple sclerosis. In the experiment, the subject had to carry out different tasks in a simulated scenario while being observed by a group of clinicians composed of nurses, doctors, and occupational therapists. After that, the subject and the clinicians replied to a usability questionnaire. The results showed a high degree of usability of the system, although they also revealed several areas for improvement. These aspects were taken into account to improve the new version of the device, thus trying to reduce the users’ perception of the complexity of the system.