
Deep Learning in Visual and Wearable Sensing for Motion Analysis and Healthcare

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biosensors".

Deadline for manuscript submissions: 31 May 2025 | Viewed by 12132

Special Issue Editors


Guest Editor
Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
Interests: medical data science; machine learning; pattern recognition; activity recognition; motion capture; sensor technologies; medical informatics

Guest Editor
Faculty of Information, Media and Electrical Engineering, Institute of Media and Imaging Technology, TH Köln, Köln, Germany
Interests: motion capture; sensor technologies; digital health; machine learning; computer animation

Guest Editor
Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
Interests: biomedical engineering; artificial intelligence; pattern recognition; machine vision; machine learning; medical sensor

Special Issue Information

Dear Colleagues,

We are pleased to announce this Special Issue, which aims to bring together articles investigating the use of deep learning approaches in visual and wearable sensing, e.g., for motion analysis and healthcare applications. The issue is intended to contribute to the field of machine learning and to cover a broad spectrum of applications in the medical domain.

Applications may include (but are not limited to) diagnostics, activity recognition, motion tracking, motion analysis of body parts, and rehabilitation support. As sensor technologies are diverse, we welcome all papers exploring the use of wearable sensors or ambient sensors (such as RGB(D) image/video, millimeter-wave radar, etc.).

Dr. Sebastian Fudickar
Prof. Dr. Björn Krüger
Prof. Dr. Marcin Grzegorzek
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning techniques
  • computer vision
  • wearable sensors
  • accelerometers
  • gyroscopes
  • magnetometers
  • multimodal sensing
  • EMG and force sensors
  • human activity recognition
  • human movement/gait analysis

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

17 pages, 3569 KiB  
Article
Wearable Biosensor Smart Glasses Based on Augmented Reality and Eye Tracking
by Lina Gao, Changyuan Wang and Gongpu Wu
Sensors 2024, 24(20), 6740; https://doi.org/10.3390/s24206740 - 20 Oct 2024
Viewed by 1058
Abstract
With the rapid development of wearable biosensor technology, the combination of head-mounted displays and augmented reality (AR) technology has shown great potential for health monitoring and biomedical diagnosis applications. However, further optimizing its performance and improving data interaction accuracy remain crucial issues that must be addressed. In this study, we develop smart glasses based on augmented reality and eye tracking technology. Through real-time information interaction with the server, the smart glasses realize accurate scene perception and analysis of the user’s intention and combine with mixed-reality display technology to provide dynamic and real-time intelligent interaction services. A multi-level hardware architecture and optimized data processing process are adopted during the research process to enhance the system’s real-time accuracy. Meanwhile, combining the deep learning method with the geometric model significantly improves the system’s ability to perceive user behavior and environmental information in complex environments. The experimental results show that when the distance between the subject and the display is 1 m, the eye tracking accuracy of the smart glasses can reach 1.0° with an error of no more than ±0.1°. This study demonstrates that the effective integration of AR and eye tracking technology dramatically improves the functional performance of smart glasses in multiple scenarios. Future research will further optimize smart glasses’ algorithms and hardware performance, enhance their application potential in daily health monitoring and medical diagnosis, and provide more possibilities for the innovative development of wearable devices in medical and health management.
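
The accuracy figure quoted above (1.0°, error within ±0.1°, at a 1 m viewing distance) can be translated into an on-screen offset with basic trigonometry. The short sketch below only illustrates that conversion using the numbers from the abstract; it is not code from the paper.

```python
import math

def angular_error_to_offset(angle_deg: float, viewing_distance_m: float) -> float:
    """Convert an angular gaze error into a linear offset on a plane
    perpendicular to the line of sight at the given viewing distance."""
    return viewing_distance_m * math.tan(math.radians(angle_deg))

# Figures quoted in the abstract: 1.0 deg accuracy at a 1 m viewing distance.
offset_m = angular_error_to_offset(1.0, 1.0)
print(f"1.0 deg at 1 m corresponds to roughly {offset_m * 1000:.1f} mm on screen")
# -> roughly 17.5 mm
```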

16 pages, 30304 KiB  
Article
Generisch-Net: A Generic Deep Model for Analyzing Human Motion with Wearable Sensors in the Internet of Health Things
by Kiran Hamza, Qaiser Riaz, Hamza Ali Imran, Mehdi Hussain and Björn Krüger
Sensors 2024, 24(19), 6167; https://doi.org/10.3390/s24196167 - 24 Sep 2024
Viewed by 3354
Abstract
The Internet of Health Things (IoHT) is a broader version of the Internet of Things. The main goal is to intervene autonomously from geographically diverse regions and provide low-cost preventative or active healthcare treatments. Smart wearable IMUs for human motion analysis have proven to provide valuable insights into a person’s psychological state, activities of daily living, identification/re-identification through gait signatures, etc. The existing literature, however, focuses on specificity, i.e., problem-specific deep models. This work presents a generic BiGRU-CNN deep model that can predict the emotional state of a person, classify the activities of daily living, and re-identify a person in a closed-loop scenario. For training and validation, we have employed publicly available and closed-access datasets. The data were collected with wearable inertial measurement units mounted non-invasively on the bodies of the subjects. Our findings demonstrate that the generic model achieves an impressive accuracy of 96.97% in classifying activities of daily living. Additionally, it re-identifies individuals in closed-loop scenarios with an accuracy of 93.71% and estimates emotional states with an accuracy of 78.20%. This study represents a significant effort towards developing a versatile deep-learning model for human motion analysis using wearable IMUs, demonstrating promising results across multiple applications.
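
The abstract names a generic BiGRU-CNN architecture but gives no layer details. The following PyTorch sketch shows one plausible way to combine 1D convolutions with a bidirectional GRU for windowed IMU classification; the channel count, layer sizes, and single classification head are assumptions for illustration, not the Generisch-Net configuration.

```python
import torch
import torch.nn as nn

class BiGRUCNN(nn.Module):
    """Illustrative BiGRU + 1D-CNN classifier for windowed IMU data of shape
    (batch, time, channels). Hyperparameters are assumed, not taken from the paper."""

    def __init__(self, in_channels: int = 6, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(input_size=64, hidden_size=64,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels) -> Conv1d expects (batch, channels, time)
        x = self.conv(x.transpose(1, 2)).transpose(1, 2)
        _, h = self.gru(x)                   # h: (2, batch, hidden)
        h = torch.cat([h[0], h[1]], dim=-1)  # concatenate forward/backward states
        return self.head(h)

# Example: a batch of 8 windows, 128 samples each, 6 IMU channels (acc + gyro).
logits = BiGRUCNN()(torch.randn(8, 128, 6))
print(logits.shape)  # torch.Size([8, 10])
```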

24 pages, 1193 KiB  
Article
A Systematic Evaluation of Feature Encoding Techniques for Gait Analysis Using Multimodal Sensory Data
by Rimsha Fatima, Muhammad Hassan Khan, Muhammad Adeel Nisar, Rafał Doniec, Muhammad Shahid Farid and Marcin Grzegorzek
Sensors 2024, 24(1), 75; https://doi.org/10.3390/s24010075 - 22 Dec 2023
Cited by 7 | Viewed by 1606
Abstract
This paper addresses the problem of feature encoding for gait analysis using multimodal time series sensory data. In recent years, the dramatic increase in the use of numerous sensors, e.g., inertial measurement unit (IMU), in our daily wearable devices has gained the interest of the research community to collect kinematic and kinetic data to analyze the gait. The most crucial step for gait analysis is to find the set of appropriate features from continuous time series data to accurately represent human locomotion. This paper presents a systematic assessment of numerous feature extraction techniques. In particular, three different feature encoding techniques are presented to encode multimodal time series sensory data. In the first technique, we utilized eighteen different handcrafted features which are extracted directly from the raw sensory data. The second technique follows the Bag-of-Visual-Words model; the raw sensory data are encoded using a pre-computed codebook and a locality-constrained linear encoding (LLC)-based feature encoding technique. We evaluated two different machine learning algorithms to assess the effectiveness of the proposed features in the encoding of raw sensory data. In the third feature encoding technique, we proposed two end-to-end deep learning models to automatically extract the features from raw sensory data. A thorough experimental evaluation is conducted on four large sensory datasets and their outcomes are compared. A comparison of the recognition results with current state-of-the-art methods demonstrates the computational efficiency and high efficacy of the proposed feature encoding method. The robustness of the proposed feature encoding technique is also evaluated to recognize human daily activities. Additionally, this paper also presents a new dataset consisting of the gait patterns of 42 individuals, gathered using IMU sensors.
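
The first of the three encoding techniques relies on handcrafted features computed directly from raw sensor windows. The abstract does not enumerate the eighteen features, so the sketch below only illustrates the general pattern with a few common time-domain statistics (an assumed feature set, not the one evaluated in the paper).

```python
import numpy as np

def handcrafted_features(window: np.ndarray) -> np.ndarray:
    """Compute a small set of time-domain descriptors per sensor channel.
    `window` has shape (samples, channels); the chosen statistics are
    illustrative, not the eighteen features used in the paper."""
    feats = [
        window.mean(axis=0),
        window.std(axis=0),
        window.min(axis=0),
        window.max(axis=0),
        np.median(window, axis=0),
        np.sqrt((window ** 2).mean(axis=0)),  # root mean square per channel
    ]
    return np.concatenate(feats)

# Example: a 2 s window of 6-channel IMU data sampled at 50 Hz.
x = np.random.randn(100, 6)
print(handcrafted_features(x).shape)  # (36,)
```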

13 pages, 2668 KiB  
Article
Validity of AI-Based Gait Analysis for Simultaneous Measurement of Bilateral Lower Limb Kinematics Using a Single Video Camera
by Takumi Ino, Mina Samukawa, Tomoya Ishida, Naofumi Wada, Yuta Koshino, Satoshi Kasahara and Harukazu Tohyama
Sensors 2023, 23(24), 9799; https://doi.org/10.3390/s23249799 - 13 Dec 2023
Cited by 11 | Viewed by 3157
Abstract
Accuracy validation of gait analysis using pose estimation with artificial intelligence (AI) remains inadequate, particularly in objective assessments of absolute error and similarity of waveform patterns. This study aimed to clarify objective measures for absolute error and waveform pattern similarity in gait analysis using pose estimation AI (OpenPose). Additionally, we investigated the feasibility of simultaneously measuring both lower limbs using a single camera from one side. We compared motion analysis data from pose estimation AI using video footage that was synchronized with a three-dimensional motion analysis device. The comparisons involved the mean absolute error (MAE) and the coefficient of multiple correlation (CMC) to assess the waveform pattern similarity. The MAE ranged from 2.3 to 3.1° on the camera side and from 3.1 to 4.1° on the opposite side, with slightly higher accuracy on the camera side. Moreover, the CMC ranged from 0.936 to 0.994 on the camera side and from 0.890 to 0.988 on the opposite side, indicating a “very good to excellent” waveform similarity. Gait analysis using a single camera revealed that the precision on both sides was sufficiently robust for clinical evaluation, while measurement accuracy was slightly superior on the camera side.
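
The study reports agreement in terms of the mean absolute error (MAE) and the coefficient of multiple correlation (CMC). The sketch below implements MAE and one common within-session formulation of the CMC for a set of waveforms; it is a generic illustration, not the exact implementation used in the study.

```python
import numpy as np

def mae(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute error between two joint-angle waveforms (degrees)."""
    return float(np.mean(np.abs(a - b)))

def cmc(waveforms: np.ndarray) -> float:
    """Coefficient of multiple correlation for G waveforms of F frames each
    (shape (G, F)), following a common within-session formulation.
    Values close to 1 indicate highly similar waveform shapes."""
    G, F = waveforms.shape
    frame_mean = waveforms.mean(axis=0)   # mean across waveforms at each frame
    grand_mean = waveforms.mean()
    within = ((waveforms - frame_mean) ** 2).sum() / (F * (G - 1))
    total = ((waveforms - grand_mean) ** 2).sum() / (G * F - 1)
    return float(np.sqrt(1.0 - within / total))

# Example: compare an AI-estimated knee-angle curve with a reference curve.
t = np.linspace(0, 1, 101)
reference = 60 * np.sin(np.pi * t)                     # synthetic gait-like curve
estimate = reference + np.random.normal(0, 2, t.shape) # noisy "estimated" curve
print(mae(reference, estimate), cmc(np.stack([reference, estimate])))
```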

17 pages, 12583 KiB  
Article
Cross-Viewpoint Semantic Mapping: Integrating Human and Robot Perspectives for Improved 3D Semantic Reconstruction
by László Kopácsi, Benjámin Baffy, Gábor Baranyi, Joul Skaf, Gábor Sörös, Szilvia Szeier, András Lőrincz and Daniel Sonntag
Sensors 2023, 23(11), 5126; https://doi.org/10.3390/s23115126 - 27 May 2023
Cited by 1 | Viewed by 2097
Abstract
Allocentric semantic 3D maps are highly useful for a variety of human–machine interaction related tasks since egocentric viewpoints can be derived by the machine for the human partner. Class labels and map interpretations, however, may differ or could be missing for the participants due to the different perspectives, particularly when considering the viewpoint of a small robot, which differs significantly from that of a human. In order to overcome this issue, and to establish common ground, we extend an existing real-time 3D semantic reconstruction pipeline with semantic matching across human and robot viewpoints. We use deep recognition networks, which usually perform well from higher (i.e., human) viewpoints but are inferior from lower viewpoints, such as that of a small robot. We propose several approaches for acquiring semantic labels for images taken from unusual perspectives. We start with a partial 3D semantic reconstruction from the human perspective that we transfer and adapt to the small robot’s perspective using superpixel segmentation and the geometry of the surroundings. The quality of the reconstruction is evaluated in the Habitat simulator and a real environment using a robot car with an RGBD camera. We show that the proposed approach provides high-quality semantic segmentation from the robot’s perspective, with accuracy comparable to the original one. In addition, we exploit the gained information and improve the recognition performance of the deep network for the lower viewpoints and show that the small robot alone is capable of generating high-quality semantic maps for the human partner. The computations are close to real-time, so the approach enables interactive applications.
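
One ingredient of the described label transfer is refining sparse, reprojected labels with superpixel segmentation. The hypothetical sketch below assigns each SLIC superpixel the majority label among the projected labels that fall inside it; the segment count, the 0-as-unlabelled convention, and the helper name are assumptions, and this is not the authors' implementation.

```python
import numpy as np
from skimage.segmentation import slic

def refine_labels_with_superpixels(image: np.ndarray,
                                   sparse_labels: np.ndarray,
                                   n_segments: int = 400) -> np.ndarray:
    """Spread sparse semantic labels (0 = unlabelled) over an RGB image by
    majority vote inside each SLIC superpixel. Purely illustrative."""
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    refined = np.zeros_like(sparse_labels)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        labels = sparse_labels[mask]
        labels = labels[labels > 0]        # ignore unlabelled pixels
        if labels.size:
            refined[mask] = np.bincount(labels).argmax()
    return refined

# Example with random data: a 120x160 RGB frame and a sparsely labelled map.
img = np.random.rand(120, 160, 3)
sparse = np.zeros((120, 160), dtype=int)
sparse[::10, ::10] = np.random.randint(1, 5, size=sparse[::10, ::10].shape)
print(np.unique(refine_labels_with_superpixels(img, sparse)))
```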

Planned Papers

The list below represents only planned manuscripts. Some of these manuscripts have not been received by the Editorial Office yet. Papers submitted to MDPI journals are subject to peer review.

  • Type of Paper: Article
  • Tentative Title: 3D Semantic Label Transfer and Matching in Human-Robot Collaboration
  • Authors: László Kopácsi (1,2), Benjámin Baffy (2), Gábor Baranyi (2), Joul Skaf (2), Gábor Sörös (3), Szilvia Szeier (2), András Lőrincz (2), Daniel Sonntag (1,4)
  • Abstract: Allocentric semantic 3D maps are highly useful for a variety of human-machine interactions since ego-centric instructions can be derived by the machine for the human partner. Class labels, however, may differ or could be missing for the participants due to the different perspectives. In order to overcome this issue, we extend an existing real-time 3D semantic reconstruction pipeline with semantic matching across human and robot viewpoints. We use deep recognition networks, which usually perform well from higher (i.e., human) viewpoints but are inferior from lower viewpoints, such as those of a small robot. We propose several approaches for acquiring semantic labels for unusual perspectives. We start with a partial semantic reconstruction from the human perspective that we extend to the new, unusual perspective using superpixel segmentation and the geometry of the surroundings. The quality of the reconstruction is evaluated in the Habitat simulator and in a real environment using Intel's small OpenBot robot that we equipped with an RGBD camera. We show that the proposed approach provides high-quality semantic segmentation from the robot's perspective, with accuracy comparable to the original one. In addition, we exploit the gained information, improve the recognition performance of the deep network for the lower viewpoints, and show that the small robot alone is capable of generating high-quality semantic maps for the human partner. Furthermore, as computations are close to real-time, the approach enables interactive applications.