
Multi-Modal Sensors for Human Behavior Monitoring

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (30 September 2019) | Viewed by 54896

Special Issue Editors


Guest Editor
Dept. of Computer Science, University of Milan, Via Celoria 18, 20133 Milan, Italy
Interests: affective and physiological computing; computational vision; intelligent systems; Bayesian modelling; machine learning

Guest Editor
DISCo (Department of Informatics, Systems and Communication), University of Milano-Bicocca, Viale Sarca 336, Milan, Italy
Interests: computer vision and image analysis; intelligent systems; deep learning; machine learning; biomedical signal processing; wearable devices

Guest Editor
Dipartimento di Chimica e Biologia "A. Zambelli", Università di Salerno, Via Giovanni Paolo II, 132, 84084 Fisciano, SA, Italy
Interests: computer vision and image analysis; intelligent systems; deep learning; machine learning; color imaging

Special Issue Information

Dear Colleagues,

In everyday life, we are surrounded by a variety of sensors, wearable and otherwise, that explicitly or implicitly record information on our behavior, both visible and hidden (e.g., physiological activity).

Such sensors are of different natures: accelerometers, gyroscopes, cameras, electrodermal activity sensors, heart rate monitors, breath rate monitors, and others.

Most importantly, the multimodal nature of these data makes it possible to sense and understand the many facets of human daily-life behavior, from physical, voluntary activities to social signaling and lifestyle choices influenced by affect, personal traits, age, and social context.

The intelligent sensing community can exploit the data acquired with these sensors to develop machine-learning-based techniques that help improve predictive models of human behavior.

The purpose of this Special Issue is to gather the latest research in the field of human behavior monitoring, at both the sensing and the understanding levels, using multimodal data sources.

Applications of interest include domotics, healthcare, transport, education, safety aids, entertainment, sports, and others.

Given the need for data in this field of research, scientific works that present data collections are also welcome.

Therefore, contributions to this Special Issue may include, but are not limited to:

  • Novel sensing techniques for the non-invasive measurement of physiological signals
  • Internet-of-Things-based architectures for the multimodal monitoring of human behavior
  • Learning and inference from multimodal sensory data
  • Real-time multimodal activity recognition
  • Semantic interpretation of multimodal sensory data
  • Multimodal sensor fusion techniques
  • Multimodal databases and benchmarks for behavior monitoring and understanding

Prof. Giuseppe Boccignone
Dr. Paolo Napoletano
Prof. Raimondo Schettini
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Activity monitoring
  • Emotion prediction
  • Stress detection
  • Fatigue detection
  • Fall detection
  • Sport-related activity monitoring
  • Health monitoring
  • Pervasive healthcare
  • IoT-based monitoring systems
  • Machine learning
  • Benchmark
  • Physiological sensors
  • Wearable sensors

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)

Research

16 pages, 1195 KiB  
Article
Efficient Kernel-Based Subsequence Search for Enabling Health Monitoring Services in IoT-Based Home Setting
by Antonio Candelieri, Stanislav Fedorov and Enza Messina
Sensors 2019, 19(23), 5192; https://doi.org/10.3390/s19235192 - 27 Nov 2019
Cited by 2 | Viewed by 2356
Abstract
This paper presents an efficient approach for subsequence search in data streams. The problem consists of identifying coherent repetitions of a given reference time-series, also in the multivariate case, within a longer data stream. The most widely adopted metric to address this problem is Dynamic Time Warping (DTW), but its computational complexity is a well-known issue. In this paper, we present an approach aimed at learning a kernel approximating DTW for efficiently analyzing streaming data collected from wearable sensors, while reducing the burden of DTW computation. Contrary to a kernel, DTW allows for comparing two time-series of different lengths. To enable the use of a kernel for comparing two time-series of different lengths, a feature embedding is required in order to obtain a fixed-length vector representation. Each vector component is the DTW between the given time-series and one of a set of randomly chosen “basis” series. The approach has been validated on two benchmark datasets and on a real-life application supporting self-rehabilitation in elderly subjects. A comparison with traditional DTW implementations and other state-of-the-art algorithms is provided: results show a slight decrease in accuracy, which is counterbalanced by a significant reduction in computational costs. Full article
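As a rough illustration of the feature-embedding idea summarized in this abstract (representing a variable-length series by its DTW distances to a set of randomly chosen basis series, then applying an ordinary kernel to the fixed-length vectors), here is a minimal Python sketch. The dynamic-programming DTW, the RBF kernel choice, the helper names, and the toy data are illustrative assumptions, not code or settings from the paper.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW with Euclidean local cost.
    a, b: arrays of shape (length, n_channels); lengths may differ."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dtw_embedding(series, basis):
    """Fixed-length representation: one DTW distance per 'basis' series."""
    return np.array([dtw_distance(series, b) for b in basis])

def rbf_kernel(x, y, gamma=0.1):
    """Kernel evaluated on the fixed-length embeddings (illustrative choice)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

# Toy usage: two multivariate series of different lengths, 5 random basis series.
rng = np.random.default_rng(0)
basis = [rng.standard_normal((rng.integers(20, 40), 3)) for _ in range(5)]
s1 = rng.standard_normal((30, 3))
s2 = rng.standard_normal((45, 3))
print(rbf_kernel(dtw_embedding(s1, basis), dtw_embedding(s2, basis)))
```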

21 pages, 11805 KiB  
Article
Sport-Related Human Activity Detection and Recognition Using a Smartwatch
by Zhendong Zhuang and Yang Xue
Sensors 2019, 19(22), 5001; https://doi.org/10.3390/s19225001 - 16 Nov 2019
Cited by 57 | Viewed by 6082
Abstract
As an active research field, sport-related activity monitoring plays an important role in people’s lives and health. This is often viewed as a human activity recognition task in which a fixed-length sliding window is used to segment long-term activity signals. However, activities with complex motion states and non-periodicity can be better monitored if the monitoring algorithm is able to accurately detect the duration of meaningful motion states, an ability that the sliding window approach lacks. In this study, we focused on two types of activities for sport-related activity monitoring, which we regard as a human activity detection and recognition task. For non-periodic activities, we propose an interval-based detection and recognition method. The proposed approach can accurately determine the duration of each target motion state by generating candidate intervals. For weak periodic activities, we propose a classification-based periodic matching method that uses periodic matching to segment the motion state. Experimental results show that the proposed methods performed better than the sliding window method. Full article
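For contrast with the interval-based and periodic-matching methods the abstract proposes, the fixed-length sliding-window baseline it argues against can be sketched in a few lines of Python; the window length, overlap, and six-channel stream below are illustrative placeholders, not values from the study.

```python
import numpy as np

def sliding_windows(signal, win_len=128, overlap=0.5):
    """Segment a (T, n_channels) stream into fixed-length, overlapping windows."""
    step = max(1, int(win_len * (1.0 - overlap)))
    return np.stack([signal[s:s + win_len]
                     for s in range(0, len(signal) - win_len + 1, step)])

stream = np.random.randn(1000, 6)   # e.g. 3-axis accelerometer + 3-axis gyroscope
windows = sliding_windows(stream)
print(windows.shape)                # (n_windows, 128, 6)
```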

29 pages, 30003 KiB  
Article
A Model-Based System for Real-Time Articulated Hand Tracking Using a Simple Data Glove and a Depth Camera
by Linjun Jiang, Hailun Xia and Caili Guo
Sensors 2019, 19(21), 4680; https://doi.org/10.3390/s19214680 - 28 Oct 2019
Cited by 10 | Viewed by 5277
Abstract
Tracking detailed hand motion is a fundamental research topic in the area of human-computer interaction (HCI) and has been widely studied for decades. Existing solutions with single-model inputs either require tedious calibration, are expensive, or lack sufficient robustness and accuracy due to occlusions. In this study, we present a real-time system to reconstruct the exact hand motion by iteratively fitting a triangular mesh model to the absolute measurement of the hand from a depth camera under the robust restriction of a simple data glove. We redefine and simplify the function of the data glove to alleviate its limitations (i.e., tedious calibration, cumbersome equipment, and hampered movement) and keep our system lightweight. For accurate hand tracking, we introduce a new set of degrees of freedom (DoFs), a shape adjustment term for personalizing the triangular mesh model, and an adaptive collision term to prevent self-intersection. For efficiency, we extract a strong pose-space prior from the data glove to narrow the pose searching space. We also present a simplified approach for computing tracking correspondences without the loss of accuracy to reduce computation cost. Quantitative experiments show the comparable or increased accuracy of our system over the state-of-the-art with about 40% improvement in robustness. Moreover, our system runs independently of the Graphics Processing Unit (GPU) and reaches 40 frames per second (FPS) at about 25% Central Processing Unit (CPU) usage. Full article

25 pages, 5681 KiB  
Article
Practical and Durable Flexible Strain Sensors Based on Conductive Carbon Black and Silicone Blends for Large Scale Motion Monitoring Applications
by Yun Xia, Qi Zhang, Xue E. Wu, Tim V. Kirk and Xiao Dong Chen
Sensors 2019, 19(20), 4553; https://doi.org/10.3390/s19204553 - 19 Oct 2019
Cited by 18 | Viewed by 5812
Abstract
Presented is a flexible capacitive strain sensor, based on the low-cost materials silicone (PDMS) and carbon black (CB), that was fabricated by casting and curing of successive silicone layers—a central PDMS dielectric layer bounded by PDMS/CB blend electrodes and packaged by exterior PDMS films. It was effectively characterized for large flexion-angle motion wearable applications, with strain sensing properties assessed over large strains (50%) and variations in temperature and humidity. Additionally, suitability for monitoring large tissue deformation was established by integration with an in vitro digestive model. The capacitive gauge factor was approximately constant at 0.86 over these conditions for the linear strain range (3 to 47%). Durability was established from consistent relative capacitance changes over 10,000 strain cycles, with varying strain frequency and elongation up to 50%. Wearability and high flexion-angle human motion detection were demonstrated by integration with an elbow band, with clear detection of motion ranges up to 90°. The device’s simple structure and fabrication method, low-cost materials, and robust performance offer promise for expanding the availability of wearable sensor systems. Full article
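The reported capacitive gauge factor relates relative capacitance change to strain, GF = (ΔC/C₀)/ε, so strain can be recovered by inverting the linear relation. A minimal sketch using the ≈0.86 figure quoted in the abstract follows; the capacitance readings and the helper name are illustrative assumptions, not measured values from the study.

```python
GAUGE_FACTOR = 0.86          # (ΔC/C0) / strain, from the abstract

def strain_from_capacitance(c_measured, c_rest):
    """Invert the linear capacitive gauge relation to estimate strain."""
    return (c_measured - c_rest) / (c_rest * GAUGE_FACTOR)

# Illustrative reading: a 10% increase in capacitance over the rest value.
print(strain_from_capacitance(c_measured=1.10e-12, c_rest=1.00e-12))  # ≈ 0.116 strain
```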

11 pages, 2059 KiB  
Article
Use of Machine Learning and Wearable Sensors to Predict Energetics and Kinematics of Cutting Maneuvers
by Matteo Zago, Chiarella Sforza, Claudia Dolci, Marco Tarabini and Manuela Galli
Sensors 2019, 19(14), 3094; https://doi.org/10.3390/s19143094 - 12 Jul 2019
Cited by 22 | Viewed by 3922
Abstract
Changes of direction and cutting maneuvers, including 180-degree turns, are common locomotor actions in team sports, implying high mechanical load. While the mechanics and neurophysiology of turns have been extensively studied in laboratory conditions, modern inertial measurement units allow us to monitor athletes directly on the field. In this study, we applied four supervised machine learning techniques (linear regression, support vector regression/machine, boosted decision trees and artificial neural networks) to predict turn direction, speed (before/after turn) and the related positive/negative mechanical work. Reference values were computed using an optical motion capture system. We collected data from 13 elite female soccer players performing a shuttle run test, wearing a six-axis inertial sensor at the pelvis level. A set of 18 features (predictors) was obtained from accelerometer, gyroscope and barometer readings. Turn direction classification returned good results (accuracy > 98.4%) with all methods. Support vector regression and neural networks obtained the best performance in the estimation of positive/negative mechanical work (coefficient of determination R2 = 0.42–0.43, mean absolute error = 1.14–1.41 J) and running speed before/after the turns (R2 = 0.66–0.69, mean absolute error = 0.15–0.18 m/s). Although models can be extended to different angles, we showed that meaningful information on turn kinematics and energetics can be obtained from inertial units with a data-driven approach. Full article
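A hedged sketch of the kind of supervised pipeline outlined in the abstract (an 18-feature matrix per turn, a classifier for turn direction and a regressor for mechanical work): the scikit-learn models, preprocessing, and synthetic data below are illustrative choices, not the study's actual configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 18))            # 18 features per turn (placeholder data)
y_dir = rng.integers(0, 2, 200)               # turn direction label (left/right)
y_work = rng.standard_normal(200)             # positive/negative mechanical work (J)

Xtr, Xte, ytr_d, yte_d, ytr_w, yte_w = train_test_split(X, y_dir, y_work, random_state=0)

clf = make_pipeline(StandardScaler(), SVC()).fit(Xtr, ytr_d)   # direction classifier
reg = make_pipeline(StandardScaler(), SVR()).fit(Xtr, ytr_w)   # mechanical-work regressor
print(clf.score(Xte, yte_d), reg.score(Xte, yte_w))
```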

22 pages, 11207 KiB  
Article
A Wireless Visualized Sensing System with Prosthesis Pose Reconstruction for Total Knee Arthroplasty
by Hanjun Jiang, Shaolin Xiang, Yanshu Guo and Zhihua Wang
Sensors 2019, 19(13), 2909; https://doi.org/10.3390/s19132909 - 1 Jul 2019
Cited by 5 | Viewed by 3869
Abstract
The surgery quality of total knee arthroplasty (TKA) depends on how accurately the knee prosthesis is implanted. The knee prosthesis is composed of the femoral component, the plastic spacer and the tibia component. The instant and kinetic relative pose of the knee prosthesis is one key aspect for the surgery quality evaluation. In this work, a wireless visualized sensing system with instant and kinetic prosthesis pose reconstruction has been proposed and implemented. The system consists of a multimodal sensing device, a wireless data receiver and a data processing workstation. The sensing device has the identical shape and size as the spacer. During the surgery, the sensing device temporarily replaces the spacer and captures the images and the contact force distribution inside the knee joint prosthesis. It is connected to the external data receiver wirelessly through a 432 MHz data link, and the data is then sent to the workstation for processing. The signal processing method to analyze the instant and kinetic prosthesis pose from the image data has been investigated. Experiments on the prototype system show that the absolute reconstruction errors of the flexion-extension rotation angle (the pitch rotation of the femoral component around the horizontal long axis of the spacer), the internal–external rotation (the yaw rotation of the femoral component around the spacer vertical axis) and the mediolateral translation displacement between the centers of the femoral component and the spacer based on the image data are less than 1.73°, 1.08° and 1.55 mm, respectively. It provides a force balance measurement with an error of less than ±5 N. The experiments also show that kinetic pose reconstruction can be used to detect surgical defects that cannot be detected by force measurement or instant pose reconstruction. Full article

16 pages, 8221 KiB  
Article
3D Pose Detection of Closely Interactive Humans Using Multi-View Cameras
by Xiu Li, Zhen Fan, Yebin Liu, Yipeng Li and Qionghai Dai
Sensors 2019, 19(12), 2831; https://doi.org/10.3390/s19122831 - 25 Jun 2019
Cited by 18 | Viewed by 6359
Abstract
We propose a method to automatically detect 3D poses of closely interactive humans from sparse multi-view images at one time instance. It is a challenging problem due to strong partial occlusion and truncation between humans and the absence of a tracking process to provide a priori pose information. To solve this problem, we first obtain 2D joints in every image using OpenPose and human semantic segmentation results from Mask R-CNN. With the 3D joints triangulated from multi-view 2D joints, a two-stage assembling method is proposed to select the correct 3D pose from thousands of pose seeds combined by joint semantic meanings. We further present a novel approach to minimize the interpenetration between human shapes with close interactions. Finally, we test our method on multi-view human-human interaction (MHHI) datasets. Experimental results demonstrate that our method achieves a high visualized correct rate and outperforms the existing method in accuracy and real-time capability. Full article
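The step "the 3D joints triangulated from multi-view 2D joints" typically relies on linear (DLT) triangulation; a minimal sketch with toy camera projection matrices is given below. The implementation details and the example cameras are assumptions for illustration, not the authors' code.

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Linear (DLT) triangulation of one 3D point from >= 2 views.
    proj_mats: list of 3x4 camera projection matrices.
    points_2d: list of (u, v) detections, one per view."""
    A = []
    for P, (u, v) in zip(proj_mats, points_2d):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]
    return X[:3] / X[3]          # de-homogenize

# Toy example: two cameras observing the 3D point (0, 0, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = np.array([0.0, 0.0, 5.0, 1.0])
uv1 = (P1 @ X)[:2] / (P1 @ X)[2]
uv2 = (P2 @ X)[:2] / (P2 @ X)[2]
print(triangulate_point([P1, P2], [uv1, uv2]))   # ≈ [0, 0, 5]
```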

18 pages, 7081 KiB  
Article
Gyroscope-Based Continuous Human Hand Gesture Recognition for Multi-Modal Wearable Input Device for Human Machine Interaction
by Hobeom Han and Sang Won Yoon
Sensors 2019, 19(11), 2562; https://doi.org/10.3390/s19112562 - 5 Jun 2019
Cited by 44 | Viewed by 10569
Abstract
Human hand gestures are a widely accepted form of real-time input for devices providing a human-machine interface. However, hand gestures have limitations in terms of effectively conveying the complexity and diversity of human intentions. This study attempted to address these limitations by proposing a multi-modal input device, based on the observation that each application program requires different user intentions (and demanding functions) and the machine already acknowledges the running application. When the running application changes, the same gesture now offers a new function required in the new application, and thus, we can greatly reduce the number and complexity of required hand gestures. As a simple wearable sensor, we employ one miniature wireless three-axis gyroscope, the data of which are processed by correlation analysis with normalized covariance for continuous gesture recognition. Recognition accuracy is improved by considering both gesture patterns and signal strength and by incorporating a learning mode. In our system, six unit hand gestures successfully provide most functions offered by multiple input devices. The characteristics of our approach are automatically adjusted by acknowledging the application programs or learning user preferences. In three application programs, the approach shows good accuracy (90–96%), which is very promising in terms of designing a unified solution. Furthermore, the accuracy reaches 100% as the users become more familiar with the system. Full article
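A minimal sketch of template matching with a normalized-covariance (correlation) score, in the spirit of the continuous gesture recognition described above; the templates, threshold, and per-channel averaging are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

def normalized_covariance(x, y):
    """Mean correlation coefficient across channels for equal-length windows."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    num = (x * y).sum(axis=0)
    den = np.sqrt((x ** 2).sum(axis=0) * (y ** 2).sum(axis=0)) + 1e-9
    return float(np.mean(num / den))

def recognize(window, templates, threshold=0.8):
    """Return the best-matching gesture name, or None if nothing correlates enough."""
    scores = {name: normalized_covariance(window, t) for name, t in templates.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

# Toy usage: a noisy repetition of the stored "swipe" template is recognized.
rng = np.random.default_rng(2)
templates = {"swipe": rng.standard_normal((100, 3)), "circle": rng.standard_normal((100, 3))}
window = templates["swipe"] + 0.1 * rng.standard_normal((100, 3))
print(recognize(window, templates))   # "swipe"
```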

8 pages, 2800 KiB  
Article
A Textile Sensor for Long Durations of Human Motion Capture
by Sufeng Hu, Miaoding Dai, Tianyun Dong and Tao Liu
Sensors 2019, 19(10), 2369; https://doi.org/10.3390/s19102369 - 23 May 2019
Cited by 16 | Viewed by 4362
Abstract
Human posture and movement analysis is important in the areas of rehabilitation, sports medicine, and virtual training. However, the development of sensors with good accuracy, low cost, light weight, and suitability for long durations of human motion capture is still an ongoing issue. In this paper, a new flexible textile sensor for knee joint movement measurements was developed by using ordinary fabrics and conductive yarns. An electrogoniometer was adopted as a standard reference to calibrate the proposed sensor and validate its accuracy. The knee movements of different daily activities were performed to evaluate the performance of the sensor. The results show that the proposed sensor could be used to monitor knee joint motion in everyday life with acceptable accuracy. Full article
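Calibrating such a textile sensor against an electrogoniometer reference can be as simple as a least-squares linear fit; the sketch below uses synthetic readings and assumes a linear sensor-to-angle relation, which is an illustrative simplification rather than the paper's calibration procedure.

```python
import numpy as np

# Synthetic calibration data: sensor output (a.u.) vs. reference knee angle (deg).
sensor_out = np.array([0.10, 0.32, 0.55, 0.78, 1.01])
goniometer_deg = np.array([0.0, 30.0, 60.0, 90.0, 120.0])

# Least-squares linear calibration: angle ≈ a * sensor + b.
a, b = np.polyfit(sensor_out, goniometer_deg, deg=1)

def knee_angle(reading):
    """Map a raw textile-sensor reading to a calibrated joint angle."""
    return a * reading + b

print(round(knee_angle(0.66), 1))   # ≈ 74 degrees for this toy calibration
```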

Review

27 pages, 4048 KiB  
Review
Step by Step Towards Effective Human Activity Recognition: A Balance between Energy Consumption and Latency in Health and Wellbeing Applications
by Enida Cero Dinarević, Jasmina Baraković Husić and Sabina Baraković
Sensors 2019, 19(23), 5206; https://doi.org/10.3390/s19235206 - 27 Nov 2019
Cited by 7 | Viewed by 3891
Abstract
Human activity recognition (HAR) is a classification process that is used for recognizing human motions. This paper presents a comprehensive review of the approaches currently considered in each stage of HAR, as well as the influence of each HAR stage on energy consumption and latency. It highlights the various methods for the optimization of energy consumption and latency in each stage of HAR that have been used in the literature, analyzing them in order to provide direction for the implementation of HAR in health and wellbeing applications. This paper analyses whether and how each stage of the HAR process affects energy consumption and latency. It shows that data collection and filtering and data segmentation and classification stand out as key stages in achieving a balance between energy consumption and latency. Since latency is only critical for real-time HAR applications, the energy consumption of sensors and devices stands out as a key challenge for HAR implementation in health and wellbeing applications. Most of the approaches to overcoming challenges related to HAR implementation take place in the data collection, filtering and classification stages, while the data segmentation stage needs further exploration. Finally, this paper recommends a balance between energy consumption and latency for HAR in health and wellbeing applications, which takes into account the context and health of the target population. Full article
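The HAR stages the review analyses (data collection, filtering, segmentation, and classification) can be summarized as a toy pipeline skeleton; the low-pass filter, window length, hand-crafted features, and random-forest classifier below are placeholder choices for illustration only, not recommendations from the review.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.ensemble import RandomForestClassifier

def har_pipeline(raw, labels, fs=50, win=128):
    """Toy HAR pipeline: filter -> segment -> hand-crafted features -> classify."""
    b, a = butter(4, 20 / (fs / 2), btype="low")            # data filtering stage
    filtered = filtfilt(b, a, raw, axis=0)
    starts = range(0, len(filtered) - win + 1, win)          # segmentation stage
    X = np.array([np.concatenate([filtered[s:s + win].mean(0),
                                  filtered[s:s + win].std(0)]) for s in starts])
    y = np.array([labels[s + win // 2] for s in starts])
    return RandomForestClassifier().fit(X, y)                # classification stage

raw = np.random.randn(5000, 3)                               # placeholder accelerometer stream
labels = np.repeat(np.arange(5), 1000)                       # placeholder activity labels
model = har_pipeline(raw, labels)
```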