
Sensors for Posture and Human Motion Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (31 December 2020) | Viewed by 28180

Special Issue Editors


Guest Editor
Faculty of Information, Media and Electrical Engineering, Institute of Media and Imaging Technology, TH Köln, Köln, Germany
Interests: motion capture; sensor technologies; digital health; machine learning; computer animation

Guest Editor
Institute for Computer Science, Bonn University, Endenicher Allee 19A, D-53115 Bonn, Germany
Interests: computer animation; physics-based modelling; computer algebra; life science informatics; hybrid modelling

Special Issue Information

Dear Colleagues,

The capture and analysis of human posture and motion has made great advances in the past decade. On the one hand, the development of new wearable sensor technology means that a broad range of motion characteristics can now be measured with wearables. Compared to full-body motion capture, wearables require little time and instruction to apply, and the hardware is far cheaper, in a consumer price range rather than that of professional equipment. On the other hand, advances in AI and machine learning (ML) provide new ways to interact with and gain insights from the captured data. This development on the software side has produced new techniques for the analysis, segmentation, classification, and recognition of human posture and motion.

These two developments allow for completely new approaches to capture and analyze human posture and motion. It is a challenge to meaningfully combine new sensor technology and the algorithmic advances in AI and ML.

This Special Issue will cover a wide range of topics around human posture and motion, including new sensor technologies to capture posture and motion; new algorithmic approaches to derive, analyze, and recognize posture and motion from sensor data; and all combinations of these techniques.

Dr. Björn Krüger
Prof. Dr. Andreas Weber
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • wearable devices
  • inertial sensors
  • hybrid systems
  • sensor data fusion
  • hybrid modeling approaches
  • temporal segmentation of motions
  • action classifications from sensor data

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

Jump to: Review

28 pages, 1401 KiB  
Article
An Efficient 3D Human Pose Retrieval and Reconstruction from 2D Image-Based Landmarks
by Hashim Yasin and Björn Krüger
Sensors 2021, 21(7), 2415; https://doi.org/10.3390/s21072415 - 1 Apr 2021
Cited by 5 | Viewed by 4366
Abstract
We propose an efficient and novel architecture for 3D articulated human pose retrieval and reconstruction from 2D landmarks extracted from a 2D synthetic image, an annotated 2D image, an in-the-wild real RGB image, or even a hand-drawn sketch. Given 2D joint positions in a single image, we devise a data-driven framework to infer the corresponding 3D human pose. To this end, we first normalize 3D human poses from a Motion Capture (MoCap) dataset by eliminating translation, orientation, and skeleton-size discrepancies, and then build a knowledge base by projecting a subset of joints of the normalized 3D poses onto 2D image planes from a variety of virtual cameras. With this approach, we not only transform the 3D pose space into a normalized 2D pose space but also resolve the 2D-3D cross-domain retrieval task efficiently. The proposed architecture searches a MoCap dataset for poses close to a given 2D query pose in a feature space built from specific joint sets. The retrieved poses are then used to construct a weak-perspective camera and a final 3D pose that minimizes the reconstruction error under this camera model. To estimate the unknown camera parameters, we introduce a nonlinear, two-fold method that exploits the retrieved similar poses and the viewing directions at which the MoCap dataset was sampled to minimize the projection error. Finally, we evaluate our approach thoroughly on a large number of heterogeneous 2D examples generated synthetically, on 2D images with ground truth, on a variety of real in-the-wild internet images, and, as a proof of concept, on 2D hand-drawn sketches of human poses. A pool of experiments provides a quantitative study on the PARSE dataset, and the proposed system yields competitive, convincing results in comparison with other state-of-the-art methods.
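The core retrieval step described in the abstract, normalizing poses to remove translation and scale, then searching a knowledge base for the nearest match, can be sketched in miniature. This is a hypothetical toy illustration, not the authors' implementation: the paper additionally removes orientation, uses many virtual-camera projections, and retrieves from a full MoCap knowledge base.

```python
import math

def normalize_pose(joints):
    """Translate the root joint to the origin and scale the pose to unit size,
    removing translation and skeleton-size differences (orientation is ignored here)."""
    rx, ry = joints[0]
    centered = [(x - rx, y - ry) for x, y in joints]
    size = max(math.hypot(x, y) for x, y in centered) or 1.0
    return [(x / size, y / size) for x, y in centered]

def retrieve_nearest(query, knowledge_base):
    """Return the index of the knowledge-base pose closest to the query
    in normalized joint-position (squared Euclidean) feature space."""
    q = normalize_pose(query)
    def dist(entry):
        p = normalize_pose(entry)
        return sum((ax - bx) ** 2 + (ay - by) ** 2
                   for (ax, ay), (bx, by) in zip(q, p))
    return min(range(len(knowledge_base)), key=lambda i: dist(knowledge_base[i]))

# Toy knowledge base: two 3-joint stick figures (root, hand, foot).
kb = [
    [(0, 0), (1, 1), (0, -2)],   # arm raised
    [(0, 0), (1, -1), (0, -2)],  # arm lowered
]
query = [(5, 5), (7, 7), (5, 1)]  # arm raised, but shifted and scaled
print(retrieve_nearest(query, kb))  # -> 0 (matches despite translation/scale)
```

Because both query and database entries pass through the same normalization, the retrieval is invariant to where the figure stands and how large the skeleton is, which is the point of the paper's 2D-3D cross-domain setup.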
(This article belongs to the Special Issue Sensors for Posture and Human Motion Recognition)

19 pages, 1939 KiB  
Article
Magnetically Counting Hand Movements: Validation of a Calibration-Free Algorithm and Application to Testing the Threshold Hypothesis of Real-World Hand Use after Stroke
by Diogo Schwerz de Lucena, Justin Rowe, Vicky Chan and David J. Reinkensmeyer
Sensors 2021, 21(4), 1502; https://doi.org/10.3390/s21041502 - 22 Feb 2021
Cited by 22 | Viewed by 3238
Abstract
There are few wearable sensors suitable for daily monitoring of wrist and finger movements in hand-related healthcare applications. Here, we describe the development and validation of a novel algorithm for magnetically counting hand movements. We implemented the algorithm on a wristband that senses magnetic field changes produced by movement of a magnetic ring worn on the finger (the “Manumeter”). The “HAND” (Hand Activity estimated by Nonlinear Detection) algorithm assigns a “HAND count” by thresholding the real-time change in magnetic field created by wrist and/or finger movement. We optimized thresholds to achieve a HAND count accuracy of ~85% without requiring subject-specific calibration. We then validated the algorithm in a dexterity-impaired population by showing that HAND counts correlate strongly with clinical assessments of upper extremity (UE) function after stroke. Finally, we used HAND counts to test a recent hypothesis in stroke rehabilitation that real-world UE hand use increases only for stroke survivors who achieve a threshold level of UE functional capability. For 29 stroke survivors, HAND counts measured at home did not increase until the participants’ Box and Blocks Test scores exceeded ~50% of normal. These results show that a threshold-based magnetometry approach can unobtrusively quantify hand movements without calibration and also verify a key concept of real-world hand use after stroke.
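The thresholding idea behind a "HAND count", registering one event whenever the change in sensed magnetic field exceeds a tuned threshold, can be sketched as follows. The signal values and threshold are hypothetical; the published algorithm is nonlinear and tunes its thresholds on training data to reach ~85% accuracy without per-subject calibration.

```python
def hand_counts(field_magnitudes, threshold=5.0):
    """Count movement events: one count each time the sample-to-sample change
    in magnetic-field magnitude exceeds the threshold (illustrative values only)."""
    counts = 0
    for prev, cur in zip(field_magnitudes, field_magnitudes[1:]):
        if abs(cur - prev) > threshold:
            counts += 1
    return counts

# Hypothetical magnetometer magnitudes: two large excursions among small drift.
signal = [100, 100.5, 101, 120, 119, 90, 90.2]
print(hand_counts(signal))  # -> 2
```

The appeal of this style of detector is exactly what the abstract emphasizes: it needs no subject-specific calibration, only a threshold chosen once for the population.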

22 pages, 6514 KiB  
Article
An Attention-Enhanced Multi-Scale and Dual Sign Language Recognition Network Based on a Graph Convolution Network
by Lu Meng and Ronghui Li
Sensors 2021, 21(4), 1120; https://doi.org/10.3390/s21041120 - 5 Feb 2021
Cited by 23 | Viewed by 3561
Abstract
Sign language is the most important means of communication for hearing-impaired people, and research on sign language recognition can help hearing people understand it. We reviewed the classic methods of sign language recognition and found that their accuracy suffers from redundant information, finger occlusion, motion blur, the diverse signing styles of different people, and so on. To overcome these shortcomings, we propose a multi-scale and dual sign language recognition network (SLR-Net) based on a graph convolutional network (GCN). The original input data are RGB videos, from which we first extract skeleton data that are then used for sign language recognition. SLR-Net is composed of three sub-modules: a multi-scale attention network (MSA), a multi-scale spatiotemporal attention network (MSSTA), and an attention-enhanced temporal convolution network (ATCN). MSA allows the GCN to learn dependencies between long-distance vertices; MSSTA directly learns spatiotemporal features; and ATCN helps the network learn long temporal dependencies. Three different attention mechanisms, multi-scale, spatiotemporal, and temporal, are proposed to further improve robustness and accuracy. In addition, a keyframe extraction algorithm is proposed that greatly improves efficiency at the cost of a little accuracy. Experimental results show that our method reaches 98.08% accuracy on the CSL-500 dataset with a 500-word vocabulary. Even on the challenging DEVISIGN-L dataset with a 2000-word vocabulary, it reaches 64.57% accuracy, outperforming other state-of-the-art sign language recognition methods.

15 pages, 1798 KiB  
Article
On the Effect of Training Convolution Neural Network for Millimeter-Wave Radar-Based Hand Gesture Recognition
by Kang Zhang, Shengchang Lan and Guiyuan Zhang
Sensors 2021, 21(1), 259; https://doi.org/10.3390/s21010259 - 2 Jan 2021
Cited by 6 | Viewed by 3262
Abstract
The purpose of this paper was to investigate the effect of training state-of-the-art convolutional neural networks (CNNs) for millimeter-wave radar-based hand gesture recognition (MR-HGR). Focusing on the small-training-dataset problem in MR-HGR, the paper first proposes transferring knowledge from CNN models in computer vision to MR-HGR by fine-tuning the models with radar data samples. To bridge the difference in data modality, a parameterized temporal space-velocity (TSV) spectrogram is proposed as an integrated representation of the time-evolving hand gesture features in the radar echo signals. TSV spectrograms representing six common gestures in human–computer interaction (HCI), recorded from nine volunteers, were used as the data samples in the experiment. The evaluated models included ResNet with 50, 101, and 152 layers; DenseNet with 121, 161, and 169 layers; and the lightweight MobileNet V2 and ShuffleNet V2, as proposed in recent publications. In the experiments, not only self-testing (ST) but also the more persuasive cross-testing (CT) was implemented to evaluate whether the fine-tuned models generalize to radar data samples from unseen users. The CT results show that the best fine-tuned models reach an average accuracy above 93%, with a comparable ST average accuracy of almost 100%. Moreover, to alleviate the problem caused by individual gesture habits, an auxiliary test was performed by augmenting the training set with four shots of the most heavily misclassified gestures, similar to the scenario in which a tablet adapts to a new user. For two volunteers, this enriching test improved the average accuracy on the enriched gestures from 55.59% and 65.58% to 90.66% and 95.95%, respectively. Compared with baseline work in MR-HGR, this investigation can benefit future industrial applications and consumer electronics design.
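The cross-testing (CT) protocol the abstract contrasts with self-testing is, in essence, leave-one-subject-out evaluation: train on all other volunteers and test on the held-out one, so accuracy reflects generalization to unseen users. A minimal sketch of that split logic, under the assumption that each sample is tagged with its volunteer's ID (the exact protocol details are not in the abstract):

```python
def cross_testing_splits(samples):
    """Leave-one-subject-out splits: for each volunteer, yield a training set
    drawn from all other volunteers and a test set from the held-out one."""
    subjects = sorted({subject for subject, _ in samples})
    for held_out in subjects:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield held_out, train, test

# Hypothetical samples: (volunteer_id, gesture_label) pairs.
data = [(1, "swipe"), (1, "tap"), (2, "swipe"), (3, "tap")]
for subject, train, test in cross_testing_splits(data):
    print(subject, len(train), len(test))
```

Self-testing, by contrast, would mix every volunteer's samples into both sets, which is why the abstract calls CT the more persuasive measure.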

13 pages, 3539 KiB  
Article
The Gaitprint: Identifying Individuals by Their Running Style
by Christian Weich and Manfred M. Vieten
Sensors 2020, 20(14), 3810; https://doi.org/10.3390/s20143810 - 8 Jul 2020
Cited by 11 | Viewed by 3240
Abstract
Recognizing the characteristics of a well-developed running style is a central issue in athletic sub-disciplines. The development of portable micro-electro-mechanical-system (MEMS) sensors over the last decades has made it possible to quantify movements accurately. This paper introduces an analysis method, based on limit-cycle attractors, to identify subjects by their specific running style. The movement data of 30 athletes were collected over 20 min in three running sessions to create an individual gaitprint. A recognition algorithm was applied to identify each individual among the other participants. The analyses resulted in a detection rate of 99% with a false identification probability of 0.28%, which demonstrates a very sensitive method for recognizing athletes based solely on their running style. Furthermore, these differences can be described as individual modifications of a general running pattern inherent in all participants. These findings open new perspectives for the assessment of running style, of motion in general, and for personal identification, for example in the growing e-sports movement.
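The gaitprint idea, averaging an athlete's repeated gait cycles into a limit-cycle-style template and then identifying new data by its nearest stored template, can be sketched as follows. This is a hypothetical simplification: the paper's attractor method and recognition algorithm are more elaborate than one-dimensional cycle averaging with mean squared error.

```python
def cycle_template(cycles):
    """Average several gait cycles (already resampled to equal length)
    into one template: a toy stand-in for the runner's 'gaitprint'."""
    n = len(cycles)
    return [sum(c[i] for c in cycles) / n for i in range(len(cycles[0]))]

def identify(query_cycles, templates):
    """Name the stored template closest (mean squared error) to the query's template."""
    q = cycle_template(query_cycles)
    def mse(name):
        t = templates[name]
        return sum((a - b) ** 2 for a, b in zip(q, t)) / len(t)
    return min(templates, key=mse)

# Hypothetical stored gaitprints (one acceleration channel, 4 samples per cycle).
templates = {
    "runner_a": [0.0, 1.0, 0.0, -1.0],
    "runner_b": [0.0, 0.5, 1.0, 0.5],
}
query = [[0.1, 0.9, 0.0, -1.1], [-0.1, 1.1, 0.0, -0.9]]  # noisy cycles of runner_a
print(identify(query, templates))  # -> runner_a
```

Averaging over many cycles is what suppresses stride-to-stride noise, so the residual pattern is the individual deviation from the common running pattern the paper describes.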

Review

Jump to: Research

25 pages, 3576 KiB  
Review
A Review of Force Myography Research and Development
by Zhen Gang Xiao and Carlo Menon
Sensors 2019, 19(20), 4557; https://doi.org/10.3390/s19204557 - 20 Oct 2019
Cited by 92 | Viewed by 9115
Abstract
Information about limb movements can be used for monitoring physical activities or for human-machine-interface applications. In recent years, a technique called Force Myography (FMG) has gained ever-increasing traction among researchers seeking to extract such information. FMG uses force sensors to register the variation of muscle stiffness patterns around a limb during different movements, and with machine learning algorithms, researchers are able to predict many different limb activities. This review presents state-of-the-art research and development on FMG technology over the past 20 years. It summarizes progress in both hardware design and signal processing techniques, and it discusses the challenges that need to be solved before FMG can be used in everyday scenarios. The paper aims to provide new insight into FMG technology and contribute to its advancement.
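The basic FMG pipeline the review surveys, force sensors around a limb, windowed features, and a learned classifier, can be illustrated with a minimal sketch. Everything here is hypothetical (sensor counts, values, and the nearest-centroid classifier stand in for whatever model a given study uses); it only shows the shape of the pipeline, not any specific system from the review.

```python
def window_features(samples):
    """Mean force per sensor over a time window: a minimal FMG feature vector."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def nearest_centroid(features, centroids):
    """Classify a feature vector by its closest class centroid (squared distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(features, centroids[label]))
    return min(centroids, key=dist)

# Hypothetical band with 3 force sensors; centroids "learned" offline.
centroids = {"fist": [0.9, 0.8, 0.7], "open_hand": [0.2, 0.1, 0.2]}
window = [[0.85, 0.8, 0.75], [0.9, 0.85, 0.7]]  # two raw sensor readings
print(nearest_centroid(window_features(window), centroids))  # -> fist
```

Real systems differ mainly in the two pieces swapped in here: richer window features and stronger classifiers, which is exactly the axis along which the review organizes the signal processing literature.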
