
Machine Learning and Multimodal Sensing for Smart Wearable Assistive Robotics

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 August 2021) | Viewed by 25333

Special Issue Editors


Guest Editor
Department of Electronic and Electrical Engineering, University of Bath, Bath BA2 7AY, UK
Interests: tactile and visual perception in robots; tactile sensing and haptics; human–robot interaction and collaboration; Bayesian inference for robot control; wearable robotics; learning methods in autonomous robots; sensorimotor control; telepresence and teleoperation

Guest Editor
Department of Electronic and Electrical Engineering, University of Bath, Claverton Down, Bath BA2 7AY, UK
Interests: human–machine interface; rehabilitation robotics; biomechatronics

Guest Editor
Department of Electronic and Electrical Engineering, University of Bath, Bath BA2 7AY, UK

Guest Editor
Biomechatronics Laboratory, Department of Mechatronics and Mechanical Systems, Escola Politécnica of the University of São Paulo, São Paulo, Brazil
Interests: biomechatronics; biorobotics; motor control; sleep

Guest Editor
School of Instrument Science and Engineering, Southeast University, Nanjing, China
Interests: biorobot/biomechatronic interfaces; haptic interaction systems; motor learning/control

Special Issue Information

Dear Colleagues,

Wearable assistive robotics technology has grown rapidly in recent years, enabling the development of devices that can assist people in performing activities of daily living (ADL), such as walking on flat surfaces and stairs, and sitting and standing. This progress, driven by advances in actuation, sensing technology, soft materials, and machine learning methods, has led to assistive robots that are portable, lightweight, and capable of making decisions.
Combining multimodal sensing technologies with machine learning models paves the way for smart assistive robots that can accurately understand the posture and activity of the human body. Furthermore, using multimodal information with novel machine learning models can allow a wearable robot to learn to adapt its performance (e.g., activity recognition and delivery of assistance) over time through interaction with the human body. This process thus has the potential to yield a next generation of wearable robots that can autonomously adapt and safely assist humans in ADL.
The aim of this Special Issue is to contribute to the state of the art and present current developments in machine learning models and multimodal sensing for decision-making, adaptability, interaction, and control of wearable assistive robotics. We encourage potential authors to submit original research, new developments, experimental work, and surveys in the field of wearable assistive robotics.

Dr. Uriel Martinez-Hernandez
Dr. Dingguo Zhang
Dr. Benjamin Metcalfe
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Activity recognition
  • Wearable robots, exoskeletons, soft robotics
  • Machine learning for multimodal sensing and perception
  • Cognitive architectures for robot learning and control
  • Active sensing and perception
  • Adaptive assistance
  • Multimodal sensing
  • Human–robot interaction
  • Robotic prosthetics and orthotics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research


20 pages, 7457 KiB  
Article
Evaluation of User-Prosthesis-Interfaces for sEMG-Based Multifunctional Prosthetic Hands
by Julio Fajardo, Guillermo Maldonado, Diego Cardona, Victor Ferman and Eric Rohmer
Sensors 2021, 21(21), 7088; https://doi.org/10.3390/s21217088 - 26 Oct 2021
Cited by 6 | Viewed by 2758
Abstract
The complexity of the user interfaces and operating modes of many assistive devices, such as intelligent prostheses, leads patients to abandon them in their activities of daily living. A methodology is proposed to evaluate how diverse aspects impact the workload evoked when unilateral transradial amputees use an upper-limb bionic prosthesis, and thus to determine how user-friendly an interface is. The evaluation process consists of adapting the same 3D-printed terminal device to the different user-prosthesis-interface schemes to facilitate running the tests and avoid any possible bias. Moreover, a study comparing the results gathered from both limb-impaired and healthy subjects was carried out to contrast the subjective opinions of both types of volunteers and to determine whether their reactions differ significantly, as done in several other studies.

13 pages, 3210 KiB  
Article
The Role of Surface Electromyography in Data Fusion with Inertial Sensors to Enhance Locomotion Recognition and Prediction
by Lin Meng, Jun Pang, Ziyao Wang, Rui Xu and Dong Ming
Sensors 2021, 21(18), 6291; https://doi.org/10.3390/s21186291 - 19 Sep 2021
Cited by 19 | Viewed by 3406
Abstract
Locomotion recognition and prediction are essential for real-time human–machine interactive control. The integration of electromyography (EMG) with mechanical sensors could improve the performance of locomotion recognition. However, the potential of EMG in motion prediction is rarely discussed. This paper first investigated the effect of surface EMG on the prediction of locomotion when integrated with inertial data. We collected EMG signals of lower limb muscle groups and linear acceleration data of lower limb segments from ten healthy participants during seven locomotion activities. Classification models were built based on four machine learning methods—support vector machine (SVM), k-nearest neighbor (KNN), artificial neural network (ANN), and linear discriminant analysis (LDA)—where a majority vote strategy and a content constraint rule were utilized to improve the online performance of the classification decision. We compared the four classifiers and further investigated the effect of data fusion on online locomotion classification. The results showed that the SVM model with a sliding window size of 80 ms achieved the best recognition performance. The fusion of EMG signals not only improves the recognition accuracy of steady-state locomotion activity from 90% (using acceleration data only) to 98% (using data fusion) but also enables the prediction of the next steady locomotion (∼370 ms in advance). The study demonstrates that the use of EMG in locomotion recognition can enhance online prediction performance.
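As a rough illustration of the kind of pipeline described in this abstract, the sketch below feeds windowed EMG and acceleration features to an SVM and smooths the online decision with a majority vote over recent windows. It is not the authors' implementation: the sampling rate, channel counts, feature set, and classifier settings are assumptions for illustration; only the 80 ms window and the majority-vote smoothing come from the abstract.

```python
# A rough sketch (not the authors' implementation) of the pipeline outlined above:
# windowed EMG + acceleration features, an SVM classifier, and an online majority
# vote over recent windows. Sampling rate, channel counts, features, and classifier
# settings are illustrative assumptions.
import numpy as np
from collections import deque, Counter
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FS = 1000                     # assumed sampling rate (Hz)
WIN = int(0.08 * FS)          # 80 ms sliding window
N_EMG, N_ACC = 4, 3           # assumed channel counts

def window_features(emg, acc):
    """emg: (WIN, N_EMG), acc: (WIN, N_ACC) -> one feature vector per window."""
    mav = np.mean(np.abs(emg), axis=0)                  # mean absolute value per EMG channel
    wl = np.sum(np.abs(np.diff(emg, axis=0)), axis=0)   # waveform length per EMG channel
    return np.concatenate([mav, wl, acc.mean(axis=0), acc.std(axis=0)])

# Synthetic data standing in for labelled locomotion windows (seven activities).
rng = np.random.default_rng(0)
X = np.array([window_features(rng.normal(size=(WIN, N_EMG)),
                              rng.normal(size=(WIN, N_ACC))) for _ in range(200)])
y = rng.integers(0, 7, size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X, y)

def online_decision(windows, vote_len=5):
    """Yield a majority-vote-smoothed activity label per incoming (emg, acc) window."""
    votes = deque(maxlen=vote_len)
    for emg, acc in windows:
        votes.append(clf.predict(window_features(emg, acc)[None, :])[0])
        yield Counter(votes).most_common(1)[0][0]

stream = ((rng.normal(size=(WIN, N_EMG)), rng.normal(size=(WIN, N_ACC))) for _ in range(10))
print(list(online_decision(stream)))
```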

8 pages, 17436 KiB  
Communication
Experiment Study for Wrist-Wearable Electro-Tactile Display
by Xiong Lu, Minxu Lin, Shouchun Wang, Xusheng Hu, Hongbin Yin and Yuxing Yan
Sensors 2021, 21(4), 1332; https://doi.org/10.3390/s21041332 - 13 Feb 2021
Cited by 8 | Viewed by 3037
Abstract
Tactile sensation is a promising information display channel for human beings that can supplement or replace degraded visual or auditory channels. In this paper, a wrist-wearable tactile rendering system based on electro-tactile stimulation is designed for information expression, where a square array of 8 × 8 spherical electrodes is used as the touch panel. To verify and improve this touch-based information display method, the optimal mode for stimulus signals was first investigated through comparison experiments, which show that sequential stimuli in consecutive-electrode-active mode perform better than those in single-electrode-active mode. Then, recognition experiments were carried out with simple Chinese and English characters and with the 26 English letters, and the proposed method was verified with average recognition rates of 95% and 82%, respectively. This wrist-wearable tactile display system could be a new and promising medium for communication and of great value for visually impaired people.
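To make the two stimulation modes compared above concrete, here is a minimal sketch of rendering a stroke on an 8 × 8 array: in single-electrode mode only the current pad is driven, while in consecutive-electrode mode the previous pad stays active as the next one switches on. The set_active driver call, dwell time, and grid coordinates are hypothetical; only the array size and the two modes come from the abstract.

```python
# Minimal sketch (hypothetical driver API, not the paper's firmware): rendering a
# stroke on an 8 x 8 electro-tactile array as a sequence of activated electrodes.
import time

def set_active(electrodes):
    """Placeholder for the hardware call that energises a set of (row, col) pads."""
    print("active:", sorted(electrodes))

def render_stroke(points, mode="consecutive", dwell_s=0.05):
    """points: ordered (row, col) pairs on the 8x8 grid describing one stroke."""
    prev = None
    for p in points:
        # Single-electrode mode drives only the current pad; consecutive mode
        # keeps the previous pad on while the next one is switched in.
        active = {p} if mode == "single" or prev is None else {prev, p}
        set_active(active)
        time.sleep(dwell_s)
        prev = p
    set_active(set())            # switch everything off at the end of the stroke

# e.g. a horizontal line across the middle row of the array
render_stroke([(3, c) for c in range(8)], mode="consecutive")
```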

23 pages, 3529 KiB  
Article
Understanding LSTM Network Behaviour of IMU-Based Locomotion Mode Recognition for Applications in Prostheses and Wearables
by Freddie Sherratt, Andrew Plummer and Pejman Iravani
Sensors 2021, 21(4), 1264; https://doi.org/10.3390/s21041264 - 10 Feb 2021
Cited by 49 | Viewed by 6594
Abstract
Human Locomotion Mode Recognition (LMR) has the potential to be used as a control mechanism for lower-limb active prostheses. Active prostheses can assist and restore a more natural gait for amputees, but as medical devices they must minimize user risks, such as falls and trips. As such, any control system must have high accuracy and robustness, with a detailed understanding of its internal operation. Long Short-Term Memory (LSTM) machine-learning networks can perform LMR with high accuracy; however, their internal behavior during classification is unknown, and they struggle to generalize when presented with novel users. The target problem addressed in this paper is understanding the LSTM classification behavior for LMR. A dataset of six locomotion activities (walking, stopped, stairs and ramps) from 22 non-amputee subjects is collected, capturing both steady-state and transitions between activities in natural environments. Non-amputees are used as a substitute for amputees to provide a larger dataset. The dataset is used to analyze the internal behavior of a reduced-complexity LSTM network. This analysis identifies that the model primarily classifies activity type based on data around early stance. Evaluation of generalization for unseen subjects reveals low sensitivity to hyper-parameters and over-fitting to individuals' gait traits. Investigating the differences between individual subjects showed that gait variations between users primarily occur in early stance, potentially explaining the poor generalization. Adjustment of hyper-parameters alone could not solve this, demonstrating the need for individual personalization of models. The main achievements of the paper are (i) a better understanding of LSTM behavior for LMR, (ii) demonstration of its low sensitivity to learning hyper-parameters when evaluating novel-user generalization, and (iii) demonstration of the need for personalization of ML models to achieve acceptable accuracy.
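For readers unfamiliar with this class of model, the snippet below is a sketch (not the paper's network) of a reduced-complexity LSTM that classifies a window of IMU data into one of six locomotion modes from its final hidden state. The hidden width, input channels, window length, and sampling rate are assumptions; only the six-class, IMU-based setup comes from the abstract.

```python
# Minimal sketch (not the paper's model): a small LSTM classifying locomotion mode
# from windows of IMU data. Sizes and rates are illustrative assumptions.
import torch
import torch.nn as nn

N_CLASSES = 6        # e.g. walking, stopped, stair/ramp ascent and descent
N_FEATURES = 6       # assumed: 3-axis accelerometer + 3-axis gyroscope

class LocomotionLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(N_FEATURES, hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_CLASSES)

    def forward(self, x):                 # x: (batch, time, N_FEATURES)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # classify from the final hidden state

model = LocomotionLSTM()
window = torch.randn(1, 200, N_FEATURES)  # e.g. 2 s of IMU data at an assumed 100 Hz
logits = model(window)
mode = logits.argmax(dim=1)
print(mode)
```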

Other


19 pages, 1398 KiB  
Perspective
Wearable Assistive Robotics: A Perspective on Current Challenges and Future Trends
by Uriel Martinez-Hernandez, Benjamin Metcalfe, Tareq Assaf, Leen Jabban, James Male and Dingguo Zhang
Sensors 2021, 21(20), 6751; https://doi.org/10.3390/s21206751 - 12 Oct 2021
Cited by 24 | Viewed by 8029
Abstract
Wearable assistive robotics is an emerging technology with the potential to assist humans with sensorimotor impairments in performing daily activities. This assistance enables individuals to be physically and socially active, perform activities independently, and recover quality of life. These benefits to society have motivated the study of several robotic approaches, producing systems that range from rigid to soft robots, with single and multimodal sensing, heuristic and machine learning methods, and manual to autonomous control, for assistance of the upper and lower limbs. This type of wearable robotic technology, being in direct contact and interaction with the body, needs to comply with a variety of requirements to make the system and its assistance efficient, safe, and usable on a daily basis by the individual. This paper presents a brief review of the progress achieved in recent years and the current challenges and trends for the design and deployment of wearable assistive robotics, including clinical and user needs, material and sensing technology, machine learning methods for perception and control, adaptability and acceptability, datasets and standards, and translation from the lab to the real world.
