Article

One Small Step for a Man: Estimation of Gender, Age and Height from Recordings of One Step by a Single Inertial Sensor

1 Department of Computer Science II, Universität Bonn, Bonn 53113, Germany
2 Gokhale Method Institute, Stanford, CA 94305, USA
* Author to whom correspondence should be addressed.
Sensors 2015, 15(12), 31999-32019; https://doi.org/10.3390/s151229907
Submission received: 2 October 2015 / Revised: 9 December 2015 / Accepted: 11 December 2015 / Published: 19 December 2015
(This article belongs to the Collection Sensors for Globalized Healthy Living and Wellbeing)

Abstract

A number of previous works have shown that information about a subject is encoded in sparse kinematic information, such as that revealed by so-called point light walkers. With the work at hand, we extend these results to the classification of soft biometrics from inertial sensor recordings at a single body location from a single step. We recorded accelerations and angular velocities of 26 subjects using inertial measurement units (IMUs) attached at four locations (chest, lower back, right wrist and left ankle) while they performed standardized gait tasks. The collected data were segmented into individual walking steps. We trained random forest classifiers in order to estimate soft biometrics (gender, age and height). We applied two different validation methods, 10-fold cross-validation and subject-wise cross-validation. For all three classification tasks, we achieved high accuracy values for all four sensor locations. From these results, we conclude that the data of a single walking step (6D: accelerations and angular velocities) allow for a robust estimation of the gender, height and age of a person.

1. Introduction

Sparse representations of human motion have been investigated for several decades. It is well known that representations of human motion by point light displays and similar concepts (e.g., point light walkers [1,2]) contain detailed information on several aspects of the motions and of the people performing them.
Over the years, the possibilities to identify certain parameters characterizing given motions have been explored. On the one hand, it is possible to discover information about the displayed motions as such. In the field of action recognition, it has been shown that estimation of poses and skeletons from video and motion capture data allows for recognition and analysis of human movement (Lv et al. [3], Junejo et al. [4], Barnachon et al. [5], Oshin et al. [6]). The survey of vision-based human motion capture by Moeslund et al. [7] discusses the advances and application of motion-capture-related techniques for tracking, pose estimation and recognition of movement. Recognition of motion patterns from video data can be achieved by machine learning approaches exploiting local space-time features (e.g., for SVM-based methods, Schüldt et al. [8]). On the other hand, information on the kinematic properties of living beings or animated objects can be detected by analyzing representations of motions. This can be done using motion capture data from passive or active devices, as well as contact forces measurements (Venture et al. [9], Kirk et al. [10]).
More recently, the market for wearable devices has virtually exploded (Liew et al. [11], Son et al. [12]). The sheer number of devices [13] reflects that there are numerous methods to capture and analyze human motion in a relatively new field of application associated with ubiquitous computing. Even though information acquired by such devices may be less accurate than information acquired by modern motion capture systems (Le Masurier et al. [14], Foster et al. [15]), it has been shown that reconstruction of motion from extremely sparse sensor setups is possible in practice (Tautges et al. [16], Riaz et al. [17]). This indicates that data collected using tri-axial accelerometers are suitable for classification tasks, e.g., associated with social actions (Hung et al. [18]), general everyday activities (Parkka et al. [19], Jean-Baptiste et al. [20], Dijkstra et al. [21]) or repetitive physical exercises (Morris et al. [22]).
We investigated whether data from a single wearable sensor can reveal similar information about the moving subject as the motion capture data in the above-cited works [1,2]. We focus on the classification of gender, age and height as exemplary properties of moving subjects. Our experiments show that it is indeed possible to classify and thereby estimate such properties. Our method processes representations of single steps recorded by one inertial sensor (as opposed to longer data sequences; Neugebauer et al. [23]). In sum, our method recovers soft biometric information with high accuracy consistently over various sensor positions. Since the classification accuracy depends on the chosen features, we further investigated the role of different possible feature sets in the classification.
Modern machine learning techniques like decision trees can target pattern recognition and prediction tasks based on many different representations of motion (Brand et al. [24], Bao et al. [25], Kwapisz et al. [26]). We used random forests, a learning method based on the construction of multiple decision trees, which can be used for classification as well as regression tasks. While learning predictive models using individual decision trees may result in over-fitting to the training set (Phan et al. [27]), random forests are less prone to this problem. For an overview of random forests, refer to the works of Breiman [28] or Liaw and Wiener [29].

2. Materials and Methods

2.1. Participants’ Consent

All participants were informed in detail about the purpose of the study, the nature of the experiments, the types of data to be recorded and the data privacy policy. The subjects were aware that they were taking part in experiments where a number of biometric and kinematic properties were monitored. The main focus of the study was communicated to the subjects during their progress over the course of the training by the specialists of Gokhale Method Institute [30] (Stanford, CA, United States). Each willing participant was asked to fill in the data collection form with personal details, including full name, sex, age and height.

2.2. Population Characteristics and Sampling

The participants were selected during a gait and posture training program conducted in July 2014 by the specialists of the Gokhale Method Institute, who use special gait and posture training methods to help regain the structural integrity of the body. The training program consisted of six 90-minute training sessions. The study population consisted of a total of 26 adults with a male to female ratio of 12:14 and an average age of 48.1 years (SD = 12.7). The average height of the participants was 174 cm (SD = 10.2). The characteristics of the study population are shown in Table 1.
Table 1. Characteristics of the study population, including age, sex and height. For validation, two types of models were used: k-fold cross-validation and subject-wise cross-validation.
Variable | Characteristics
Total Population | 26
Age (years, mean ± SD) | 48.1 ± 12.7
Female Participants | 14
Male Participants | 12
Height (cm, mean ± SD) | 174 ± 10.2
A k-fold cross-validation model (with k = 10) was used to compute the classification accuracy of the classifier. In k-fold cross-validation, the original sample data are randomly partitioned into k equally-sized sub-samples or folds. Out of the k folds, k − 1 folds are used for training, and the left-out fold is used for validation. The cross-validation process is repeated k times, so that each of the k folds is used exactly once for validation. For sampling, the stratified sampling method [31] is used to divide the population into training and test datasets.
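For readers who want to reproduce this validation scheme, a minimal sketch of stratified 10-fold cross-validation with scikit-learn is shown below. It is not the authors' original code; the arrays `X` (per-step features) and `y` (class labels) are illustrative placeholders.

```python
# Minimal sketch (not the authors' code): stratified 10-fold cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))          # placeholder step features (50 per step)
y = rng.integers(0, 2, size=200)        # placeholder binary labels (e.g., gender)

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
clf = RandomForestClassifier(n_estimators=400, max_features=7, random_state=0)
scores = cross_val_score(clf, X, y, cv=skf)     # one accuracy value per fold
print("mean accuracy over 10 folds:", scores.mean())
```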
A subject-wise cross-validation model was also employed to compute the classification accuracy of each participant against others. Subject-wise cross-validation is a special variant of leave-one-out cross-validation in which instead of leaving one sample out for validation, all samples of one participant are left out for validation. For n participants (n = 26, in our case), all samples of n - 1 participants are used for training, and all samples of the left-out participant are used for testing. The cross-validation process is repeated n times in order to validate each participant exactly once against the rest. Unlike 10-fold cross-validation, the number of samples in each fold is not equal in subject-wise cross-validation. This is due to the difference in the step length of each subject. Subjects with shorter step lengths have more steps than the others.
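Subject-wise cross-validation can be expressed as leave-one-group-out validation with one group per participant. The following sketch (an assumption, not the authors' implementation) uses scikit-learn's LeaveOneGroupOut; `subject_ids` is a hypothetical array mapping every step to its participant.

```python
# Minimal sketch (not the authors' code): subject-wise (leave-one-subject-out)
# cross-validation, where all steps of one participant form the test fold.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(260, 50))                  # placeholder step features
y = rng.integers(0, 2, size=260)                # placeholder labels
subject_ids = np.repeat(np.arange(26), 10)      # 26 subjects, ~10 steps each

logo = LeaveOneGroupOut()                       # one fold per subject
clf = RandomForestClassifier(n_estimators=400, max_features=7, random_state=0)
scores = cross_val_score(clf, X, y, cv=logo, groups=subject_ids)
print("per-subject accuracies:", scores)
```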

2.3. Standardized Gait Tasks

The gait task consisted of a 10-meter straight walk from a starting point, turning around and walking back to the starting point. Participants were asked to walk in their natural manner and to repeat the gait task twice, resulting in a 4 × 10-meter walk. Three different types of experiments were performed: (1) walking on a hard surface (concrete floor) with shoes on; (2) walking on a hard surface (concrete floor) with bare feet; and (3) walking on a soft surface (exercise mattress) with bare feet. Data were recorded during three different stages of the training course: (1) at the start of the training (before the 1st session); (2) in the middle of the training (after the 3rd session); and (3) at the end of the training (after the 6th session). Hence, for each participant, 9 different recording sessions were carried out in total (see Table 2).
Table 2. Standardized gait tasks. Experiments were performed on different surfaces with and without shoes, as shown here. For each participant, 9 different recording sessions were carried out in total.
Recording Stage | Hard Surface, Shoes On | Hard Surface, Barefoot | Soft Surface, Barefoot
Before 1st Session | 4 × 10-meter straight walk | 4 × 10-meter straight walk | 4 × 10-meter straight walk
After 3rd Session | 4 × 10-meter straight walk | 4 × 10-meter straight walk | 4 × 10-meter straight walk
After 6th Session | 4 × 10-meter straight walk | 4 × 10-meter straight walk | 4 × 10-meter straight walk

2.4. Sensor Placement and Data Collection

A set of four APDM Opal wireless inertial measurement units [32] was used to record accelerations and angular velocities. An APDM Opal IMU contains a tri-axial accelerometer, a tri-axial gyroscope and a tri-axial magnetometer. The technical specifications of the sensor are given in Table 3. The sensors were tightly attached to different body parts using adjustable elastic straps. We were particularly interested in the inertial measurements of four different body parts: (1) chest; (2) lower back; (3) right wrist; and (4) left ankle. The sensor placement at each body part is shown in Figure 1.
Table 3. Technical specifications of the APDM Opal IMU.
Specification | Accelerometer | Gyroscope | Magnetometer
Axes | 3 axes | 3 axes | 3 axes
Range | ±2 g or ±6 g | ±2000 deg/s | ±6 Gauss
Noise | 0.0012 m/s²/√Hz | 0.05 deg/s/√Hz | 0.5 mGauss/√Hz
Sample Rate | 1280 Hz | 1280 Hz | 1280 Hz
Output Rate | 20 to 128 Hz | 20 to 128 Hz | 20 to 128 Hz
Bandwidth | 50 Hz | 50 Hz | 50 Hz
Resolution | 14 bits | 14 bits | 14 bits
Figure 1. Placement of four APDM Opal IMUs on different body parts. The sensors were placed on four different locations: left ankle, right wrist, lower back and chest.

2.5. Pre-Processing

The output sampling rate of an APDM Opal IMU is adjustable between 20 and 128 Hz; in our experiments, an output rate of 128 Hz was chosen. Due to the noisy nature of the acceleration measurements, the raw data were pre-processed: we smoothed the raw signal with a moving average using a window size of 9 frames to suppress noise.
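As an illustration, a 9-frame moving average of this kind could be implemented as follows (a minimal sketch, not the authors' code):

```python
# Minimal sketch: smoothing one accelerometer channel with a 9-frame box filter.
import numpy as np

def moving_average(signal, window=9):
    """Smooth a 1D signal with a simple moving average of the given window size."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

# Example: smooth a noisy x-axis acceleration trace sampled at 128 Hz.
raw_ax = np.random.default_rng(2).normal(size=1280)
smooth_ax = moving_average(raw_ax, window=9)
```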

2.6. Signal Decomposition

The input signal consists of a long sequence of steps, which is segmented into single steps in order to extract features. A simple approach to decompose a long sequence of steps into single steps is peak and valley detection [33,34,35]. In this approach, peaks are detected by finding local maxima, whereas valleys are detected by finding local minima. The detection of false peaks is minimized by using two thresholds, Δd and Δh: Δd defines the minimum distance between two peaks, and Δh defines the minimum height of a peak. We used the same approach to detect peaks and valleys in the input signal; the values of the two thresholds were chosen by experimentation. The valleys are then used to cut the input signal into individual steps. Peaks and valleys are only detected in the x-axis of the acceleration signal and are then used to decompose the y- and z-axes of the acceleration and all axes of the gyroscope. This ensures that the length of an individual step is consistent across all axes of the accelerometer and gyroscope. In Figure 2, the left image presents the pre-processed input signal from the x-axis of the IMU's accelerometer attached to the lower back; the detected valleys are highlighted with circles (◯).
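A minimal sketch of such a valley-based segmentation is given below, using SciPy's find_peaks. It is not the authors' implementation, and the threshold values `delta_d` and `delta_h` are arbitrary placeholders standing in for Δd and Δh.

```python
# Minimal sketch (not the authors' implementation): cut a long recording into
# single steps at the valleys of the x-axis acceleration.
import numpy as np
from scipy.signal import find_peaks

def segment_steps(ax, delta_d=40, delta_h=0.5):
    """Return per-step index ranges, cut at valleys of the x-axis acceleration."""
    # Valleys of the signal are peaks of its negation; require a minimum
    # distance of delta_d frames between valleys and a minimum depth delta_h.
    valleys, _ = find_peaks(-ax, distance=delta_d, height=delta_h)
    return [(start, stop) for start, stop in zip(valleys[:-1], valleys[1:])]

# The same valley indices are then applied to the remaining acceleration and
# gyroscope axes, so every axis yields steps of identical length.
```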
Figure 2. The pre-processed input signal from the x-axis of the IMU’s accelerometer attached to the lower back and an extracted single step are shown. In the left image, detected valleys are highlighted with ◯. In the right image, a decomposed signal depicting a single step is shown between the vertical dash-dot lines (- ·). Some of the extracted features from the single step are: (1) square (□): global minimum; (2) diamond (◇): global maximum; (3) solid line (–): mean; (4) horizontal dash-dot line (- ·): standard deviation; (5) dashed line (- -): root mean square; (6) between vertical dash-dot lines (- ·): length and duration.

2.7. Extraction of Features

All single steps detected from the signal decomposition are further processed to extract different features from the time and frequency domains. Table 4 presents a complete list of features extracted from different components of accelerations and angular velocities. For each single step, the feature set consists of 50 features in total. Statistical features include: step length, step duration, average, standard deviation, global minimum, global maximum, root mean square and entropy. Energy features include the energy of the step. The maximum amplitude of the frequency spectrum of the signal is calculated using fast Fourier transform (FFT). The step length and the step duration are only computed for the x-axis of the accelerations, as they remain the same in all other axes. All of the remaining features are computed for all 3D accelerations and 3D angular velocities. In Figure 2, the right-hand image presents a decomposed signal depicting a single step between the vertical dash-dot lines (- ·). Some of the extracted features are also shown, including: (1) square (□): global minimum; (2) diamond (◇): global maximum; (3) solid line (–): mean; (4) horizontal dash-dot line (- ·): standard deviation; (5) dashed line (- -): root mean square; (6) between vertical dash-dot lines (- ·): length and duration of the step.
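A compact sketch of these per-step features for one signal channel is shown below. It is not the authors' code; in particular, the normalization used for the entropy is one plausible reading of the formula in Table 4 and should be treated as an assumption.

```python
# Minimal sketch (not the authors' code): per-step features for one 1D channel.
import numpy as np

def step_features(x, sample_rate=128.0):
    """Return the statistical, energy and frequency features of one channel."""
    shifted = x - x.min()                        # non-negative values for entropy
    total = shifted.sum()
    p = shifted / total if total > 0 else np.full_like(x, 1.0 / len(x))
    entropy = -np.sum(p * np.log2(p + 1e-12))    # uncertainty measure of the step
    spectrum = np.abs(np.fft.rfft(x))            # FFT magnitude spectrum
    return {
        "length": len(x),                        # total number of frames
        "duration_s": len(x) / sample_rate,      # step duration in seconds
        "mean": x.mean(),
        "std": x.std(),
        "min": x.min(),
        "max": x.max(),
        "rms": np.sqrt(np.mean(x ** 2)),
        "entropy": entropy,
        "energy": np.sum(x ** 2),                # signal energy of the step
        "amplitude": spectrum.max(),             # max amplitude of the spectrum
    }
```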
Table 4. Description of the extracted features for each step from the accelerometer (A) and/or the gyroscope (G). For each step, 50 features from the time and frequency domains are computed.
Feature Name | Sensor | Axis | Total | Description
Step Length | A | x | 1 | Total number of frames
Step Duration (s) | A | x | 1 | Step duration in seconds
Average | A, G | x, y, z | 6 | Mean value of the step
Standard Deviation | A, G | x, y, z | 6 | σ of the step
Minimum | A, G | x, y, z | 6 | Global minimum of the step
Maximum | A, G | x, y, z | 6 | Global maximum of the step
Root Mean Square | A, G | x, y, z | 6 | RMS value of the step
Entropy | A, G | x, y, z | 6 | Uncertainty measure of the step signal $s$: $-\sum_{i=1}^{n} p_i \log_2(p_i)$, where $p_i = \frac{s_i/\max(s)}{\sum_{j=1}^{n} s_j/\max(s)}$
Signal Energy | A, G | x, y, z | 6 | Energy of the step: $\sum_{n=1}^{N} |x[n]|^2$
Amplitude | A, G | x, y, z | 6 | Maximum amplitude of the frequency spectrum of the signal of the step

2.8. Classification of Features

Training and validation data were prepared for each sensor using the features extracted in the previous step. Three types of group classification tasks were performed: (1) gender classification; (2) height classification; and (3) age classification. Furthermore, training and validation data were also prepared for height and age classification within participant subgroups. In Table 5, the characteristics of the population within the different classification tasks are presented. For age and height classification, we chose classes based on the available data, trying to define meaningful class thresholds while keeping the class populations balanced.
Table 5. Characteristics of the population within different group and subgroup classification tasks.
Task | Classes | N | Age (Mean ± SD)
Group Classification Tasks
Gender Classification | Male | 12 | 43.75 ± 14.50
Gender Classification | Female | 14 | 51.79 ± 11.15
Age Classification | Age < 40 | 9 | 34.11 ± 3.62
Age Classification | 40 ≤ Age < 50 | 6 | 46.67 ± 2.58
Age Classification | Age ≥ 50 | 11 | 60.67 ± 7.48
Height Classification | Height ≤ 170 cm | 8 | 55.62 ± 11.29
Height Classification | 170 cm < Height < 180 cm | 10 | 44.70 ± 11.31
Height Classification | Height ≥ 180 cm | 8 | 44.75 ± 13.81
Subgroup Classification Tasks
Age Classification, Male Group | Age ≤ 40 | 6 | 32.67 ± 2.94
Age Classification, Male Group | Age > 40 | 6 | 54.83 ± 9.87
Age Classification, Female Group | Age ≤ 50 | 6 | 41.83 ± 6.08
Age Classification, Female Group | Age > 50 | 8 | 59.25 ± 7.48
Height Classification, Male Group | Height ≤ 180 cm | 7 | 38.43 ± 11.27
Height Classification, Male Group | Height > 180 cm | 5 | 51.20 ± 13.85
Height Classification, Female Group | Height ≤ 170 cm | 8 | 55.62 ± 11.29
Height Classification, Female Group | Height > 170 cm | 6 | 46.67 ± 9.48
As the classifier, a random forest [29] was chosen and trained on the training dataset with the following parameter values: number of trees = 400; maximum number of features for the best split = 7. Two types of validation strategies were employed: stratified 10-fold cross-validation and subject-wise cross-validation. The 10-fold cross-validation was employed for all group and subgroup classification tasks, whereas the subject-wise cross-validation was employed for the group classification tasks only.
For each sensor in a classification task, the classifier was trained and validated for three different sets of features: (1) 3D accelerations (26 features); (2) 3D angular velocities (26 features); and (3) 6D accelerations and angular velocities (50 features). The 10-fold cross-validation was employed for all three sets of features, whereas the subject-wise cross-validation was employed for the third set of features (50 features) only. Finally, the classification rate, specificity, sensitivity and the positive predictive value (PPV) for each set of features were calculated as explained in [36]. The same approach was used for all group and subgroup classification tasks. The classification rate c or classification accuracy is given by the formula in Equation (1):
$$c = \frac{TP + TN}{TP + TN + FP + FN} \tag{1}$$
where TP and TN are the numbers of true positives and true negatives, respectively, and FP and FN are the numbers of false positives and false negatives, respectively.
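The following sketch (not the authors' code) shows how a random forest with the stated parameters can be trained and how the classification rate of Equation (1), sensitivity, specificity and PPV follow from a confusion matrix; the data arrays are placeholders.

```python
# Minimal sketch: train the random forest and derive the metrics of Equation (1).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 50))                  # placeholder step features
y = rng.integers(0, 2, size=300)                # placeholder binary labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=400, max_features=7, random_state=0)
clf.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
classification_rate = (tp + tn) / (tp + tn + fp + fn)   # Equation (1)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
```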

3. Results

In the following sections, we present the results of our investigations of the recorded gait data. Our classification results support a number of hypotheses regarding biometric and biographic characteristics of the human subjects. Specifically, the gender, the body height and the age of participants could be classified well. Each of the classification tasks was solved by training random forest classifiers, as introduced in the previous section.

3.1. Gender Classification

Our goal was to show that classification tasks regarding the gender of the trial subject can be performed sufficiently well by using the proposed sensors attached to each of the given locations.
H0: The gender can be identified by motion recordings of any of the employed sensors.
The results presented in Figure 3 show that the statement holds true for each of the four sensors individually. For each sensor, there are three different images visualizing the results of the binary classification, namely for the investigation of accelerations, of angular velocities, as well as of both combined. The confusion matrices encode the following information: each column represents the instances in one of the predicted classes, while each row represents the instances in the actual class (female/male).
Figure 3. Confusion matrices of gender classification computed with 10-fold cross-validation. Each column presents a sensor position (left to right): left ankle, lower back, chest and right wrist. Each row presents the feature set used for classification (top to bottom): 3D accelerations (26 features), 3D angular velocities (26 features) and 6D accelerations and angular velocities (50 features). Classes: C_GF = gender female; C_GM = gender male.
For accelerations only, the classification rates are higher than 84.8% for each of the sensors. Classification based on angular velocities shows a lower classification rate, but still above 79.35%. The classification based on the combined features performs better than either individual feature set, namely above 87%. More precisely, the results for the combined features are (listed by sensor in descending order of rates): chest (92.57%), lower back (91.52%), left ankle (89.96%), right wrist (87.16%). Table 6 presents the 10-fold cross-validation results of gender classification, including correct classification accuracy, sensitivity, specificity, the positive predictive value (PPV) of each class and the average PPV of all classes. PPV_C1 represents the PPV of the class C_GF, and PPV_C2 represents the PPV of the class C_GM.
Table 6. Classification results obtained by using 10-fold cross-validation for different classification categories: gender, height and age. The results show balanced correct classification rates, sensitivity, specificity, the positive predictive value (PPV) of each class and the average PPV of all classes.
Classification Task | Body Part | Sensor | Class. Rate | Sens. | Spec. | PPV_C1 | PPV_C2 | PPV_C3 | Avg. PPV
Gender Classification | Chest | A_xyz, G_xyz | 92.57 | 91.72 | 93.24 | 91.43 | 93.48 | – | 92.45
Gender Classification | Lower Back | A_xyz, G_xyz | 91.52 | 89.42 | 93.18 | 91.22 | 91.75 | – | 91.49
Gender Classification | Right Wrist | A_xyz, G_xyz | 87.16 | 85.75 | 88.32 | 85.85 | 88.24 | – | 87.05
Gender Classification | Left Ankle | A_xyz, G_xyz | 89.96 | 86.77 | 92.57 | 90.52 | 89.54 | – | 90.03
Body Height Classification | Chest | A_xyz, G_xyz | 89.05 | 88.84 | 94.45 | 89.65 | 87.43 | 90.00 | 89.03
Body Height Classification | Lower Back | A_xyz, G_xyz | 88.45 | 88.16 | 94.05 | 91.36 | 88.73 | 86.39 | 88.82
Body Height Classification | Right Wrist | A_xyz, G_xyz | 84.78 | 84.65 | 92.33 | 83.40 | 85.21 | 85.43 | 84.68
Body Height Classification | Left Ankle | A_xyz, G_xyz | 87.28 | 87.07 | 93.47 | 89.87 | 89.06 | 84.23 | 87.72
Age Classification | Chest | A_xyz, G_xyz | 88.82 | 87.40 | 94.05 | 90.10 | 93.02 | 85.81 | 89.64
Age Classification | Lower Back | A_xyz, G_xyz | 88.82 | 87.20 | 94.12 | 87.34 | 89.48 | 90.03 | 88.95
Age Classification | Right Wrist | A_xyz, G_xyz | 83.50 | 81.08 | 91.18 | 82.23 | 88.74 | 82.72 | 84.56
Age Classification | Left Ankle | A_xyz, G_xyz | 85.74 | 83.80 | 92.33 | 86.09 | 92.52 | 82.82 | 87.14

3.2. Body Height Classification

Another goal was body height classification from accelerations only, angular velocities only and a combination of both.
H1: The body height can be identified by motion recordings of any of the employed sensors.
The results of the ternary classification for each individual sensor are given in Figure 4. Here, the classification estimated the assignment to three classes (C_H1: height ≤ 170 cm; C_H2: 170 cm < height < 180 cm; C_H3: height ≥ 180 cm). A behavior similar to the gender classification was observed, where the classification based on the combined features of accelerations and angular velocities performs better than the individual ones. More precisely, the results for the combined features are (listed by sensor in descending order of rates): chest (89.05%), lower back (88.45%), left ankle (87.27%), right wrist (84.78%). Table 6 presents the 10-fold cross-validation results of body height classification, including correct classification accuracy, sensitivity, specificity, the positive predictive value (PPV) of each class and the average PPV of all classes. PPV_C1 shows the PPV of the class C_H1; PPV_C2 shows the PPV of the class C_H2; and PPV_C3 shows the PPV of the class C_H3.
Figure 4. Confusion matrices of body height classification computed with 10-fold cross-validation. Each column presents a sensor position (left to right): left ankle, lower back, chest and right wrist. 6D accelerations and angular velocities (50 features) were used for classification. C_H1: height ≤ 170 cm; C_H2: 170 cm < height < 180 cm; C_H3: height ≥ 180 cm.

3.3. Age Classification

Another goal was age group classification from accelerations only, angular velocities only and their combination.
H2: The age group of individuals can be identified by motion recordings of any of the employed sensors.
The results of the ternary classification for each individual sensor are given in Figure 5. Here, the classification estimated the assignment to three classes according to three age groups (C_A1: age < 40; C_A2: 40 ≤ age < 50; C_A3: age ≥ 50) of participants. Similar to the previous classification tasks, the classification based on the combined features of accelerations and angular velocities performs better than the individual ones. More precisely, the age classification results for the combined features are (listed by sensor in descending order of rates): lower back (88.822%), chest (88.818%), left ankle (85.74%), right wrist (83.50%). Table 6 presents the 10-fold cross-validation results of age classification, including correct classification accuracy, sensitivity, specificity, the positive predictive value (PPV) of each class and the average PPV of all classes. PPV_C1 represents the PPV of the class C_A1; PPV_C2 represents the PPV of the class C_A2; and PPV_C3 represents the PPV of the class C_A3.
Figure 5. Confusion matrices of age classification computed with 10-fold cross-validation. Each column presents a sensor position (left to right): left ankle, lower back, chest and right wrist. 6D accelerations and angular velocities (50 features) were used for classification. C_A1: age < 40; C_A2: 40 ≤ age < 50; C_A3: age ≥ 50.

3.4. Contribution of Individual Features to Classification Results

The contribution of each of the employed features was homogeneous across all three classification tasks, in the sense that there is no single outstanding feature with a major contribution to the classification results. In all experiments, we made the following observation: in sum, accelerations contributed more to the overall results than angular velocities. However, the combination of the two feature types did better than accelerations or angular velocities individually. Random forest's permutation-based variable importance measures were used to evaluate the contribution of individual features to the overall classification results. For further details, refer to the works of Breiman [28] and Louppe et al. [37].
In detail, the classification results related to sensors at different locations can depend on quite different feature sets. In the following, we will give an overview of the most important contributors for each of the locations.
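As an illustration, permutation-based importances of this kind can be obtained with scikit-learn as sketched below. This is not the authors' code, and `clf`, `X_val`, `y_val` and `feature_names` are assumed to come from a training step like the one sketched in Section 2.8.

```python
# Minimal sketch: rank features by permutation-based variable importance.
import numpy as np
from sklearn.inspection import permutation_importance

def top_features(clf, X_val, y_val, feature_names, k=5):
    """Return the k most important features as (name, mean importance) pairs."""
    result = permutation_importance(clf, X_val, y_val, n_repeats=10,
                                    random_state=0)
    order = np.argsort(result.importances_mean)[::-1][:k]
    return [(feature_names[i], result.importances_mean[i]) for i in order]
```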

3.4.1. Gender Classification

For the location at the chest, angular velocities (around the y-axis, i.e., transverse axis) contributed most, especially the standard deviation, max, energy, and RMS. These are related to the rotation of the upper body around a horizontal axis over the course of the motion. Note that this is not a contradiction to our other claims. Furthermore, the amplitude of the accelerations along the x-axis, i.e., the cranio-caudal axis, is of high importance. For the lower back, the most important features are associated with acceleration of the z-axis. This corresponds to changes in the velocity of the hip movement within the sagittal plane, i.e., front to back. In addition, angular velocities associated with the z-axis, i.e., rotation around the anteroposterior axis (swinging of hips), contribute significantly to the results. Furthermore, the amplitude of the accelerations along the x-axis, i.e., the cranio-caudal axis, is also of high importance. For the right wrist, features associated with acceleration along the y- and z-axes are top contributors. Particularly, minimum, maximum and entropy acceleration values associated with dorso-ventral, as well as lateral movement of the hand play a more important part in the classification. Furthermore, the RMS and energy of angular velocities associated with the z-axis are important. This is also linked to the swinging of the hand in the lateral direction.
For the ankles, the contribution of accelerations along each axis is generally higher compared to the contribution of other single features. Figure 6 shows bar graphs of the features’ importance computed during gender classification. The graphs present a comparison of the importance of each feature (as percentage) with respect to different sensor positions. In general, all features are significantly contributing in the classification task. An overview of contribution percentages where the most important features are highlighted is given in Table 7.
Table 7. Feature importance computed during gender classification using the 10-fold cross-validation strategy. The top five contributing features are highlighted in bold. All values are percentages.
Position | Signal | Len | Dur | Mean | SD | Min | Max | RMS | Ent | E | Amp
Chest | A_x | 1.42 | 1.39 | 1.62 | 2.72 | 2.05 | 2.14 | 2.51 | 1.87 | 1.44 | 4.38
Chest | A_y | – | – | 1.14 | 1.35 | 1.32 | 1.09 | 1.03 | 1.08 | 1.03 | 2.17
Chest | A_z | – | – | 2.80 | 1.55 | 2.31 | 2.18 | 3.48 | 3.01 | 3.85 | 1.15
Chest | G_x | – | – | 1.58 | 1.28 | 2.42 | 2.20 | 1.20 | 2.45 | 1.22 | 2.04
Chest | G_y | – | – | 1.04 | 4.67 | 1.42 | 5.02 | 3.70 | 0.88 | 4.75 | 1.83
Chest | G_z | – | – | 0.84 | 1.49 | 1.00 | 1.17 | 1.56 | 1.14 | 1.53 | 1.50
Lower Back | A_x | 1.61 | 1.62 | 1.45 | 2.41 | 1.34 | 2.08 | 1.80 | 1.94 | 1.36 | 4.14
Lower Back | A_y | – | – | 1.99 | 1.80 | 1.53 | 1.43 | 1.69 | 1.70 | 1.92 | 2.48
Lower Back | A_z | – | – | 5.11 | 2.25 | 4.30 | 4.93 | 2.19 | 2.12 | 2.15 | 2.05
Lower Back | G_x | – | – | 1.47 | 3.51 | 1.75 | 1.42 | 1.54 | 1.29 | 1.71 | 2.39
Lower Back | G_y | – | – | 1.02 | 1.60 | 1.33 | 1.39 | 1.44 | 1.16 | 1.38 | 1.75
Lower Back | G_z | – | – | 1.40 | 1.42 | 1.38 | 2.20 | 1.42 | 1.46 | 1.59 | 3.60
Right Wrist | A_x | 1.20 | 1.21 | 1.49 | 2.33 | 1.88 | 1.41 | 1.62 | 1.52 | 1.40 | 2.14
Right Wrist | A_y | – | – | 2.43 | 1.83 | 2.69 | 2.01 | 2.49 | 2.89 | 2.53 | 2.04
Right Wrist | A_z | – | – | 2.02 | 2.24 | 2.25 | 2.84 | 2.49 | 1.62 | 2.30 | 1.70
Right Wrist | G_x | – | – | 1.52 | 2.04 | 1.82 | 1.47 | 2.36 | 1.31 | 2.52 | 2.00
Right Wrist | G_y | – | – | 2.35 | 1.46 | 1.57 | 1.53 | 2.61 | 1.83 | 2.61 | 1.57
Right Wrist | G_z | – | – | 1.78 | 1.88 | 1.94 | 2.38 | 3.00 | 1.49 | 2.89 | 1.50
Left Ankle | A_x | 1.15 | 1.17 | 1.52 | 2.62 | 3.97 | 1.65 | 2.12 | 1.66 | 1.86 | 1.71
Left Ankle | A_y | – | – | 3.61 | 1.68 | 1.55 | 4.21 | 1.83 | 2.15 | 1.59 | 1.22
Left Ankle | A_z | – | – | 5.17 | 1.62 | 3.43 | 2.15 | 1.58 | 1.86 | 1.65 | 1.89
Left Ankle | G_x | – | – | 1.60 | 1.46 | 1.92 | 2.29 | 1.55 | 1.64 | 1.30 | 1.65
Left Ankle | G_y | – | – | 2.01 | 1.86 | 2.39 | 1.90 | 1.86 | 1.72 | 1.94 | 2.19
Left Ankle | G_z | – | – | 1.64 | 1.80 | 1.99 | 1.90 | 1.65 | 1.77 | 1.75 | 1.67
Figure 6. Bar graphs of the features’ importance computed during gender classification using the 10-fold cross-validation strategy. The graphs present a comparison of the importance of each feature (in %) with respect to different sensor positions. In general, all features are significantly contributing in the classification task.

3.4.2. Body Height Classification

For the location at the chest, accelerations along the z-axis contributed most, especially the mean, minimum, maximum and energy. These are associated with the motion of the upper body in the dorso-ventral direction. Furthermore, the minimum accelerations along the x-axis, i.e., the cranio-caudal axis, are of importance.
For the lower back, the most important features are associated with acceleration of the z-axis, especially the mean, maximum, RMS and energy. This corresponds to changes in the velocity of the movement of the hips within the sagittal plane, i.e., front to back. In addition, the minimum of the accelerations in the x-axis contributes significantly to the results. These are linked to the movement of the hips along the cranio-caudal axis (up and down). For the right wrist, features associated with acceleration along each of the three axes contribute significantly. Particularly, maximum, RMS and energy values associated with dorso-ventral movement of the hand play a more important part. For the ankles, also the contribution of accelerations along each axis is generally high. Additionally, angular velocities associated with the rotation of the feet from side to side (around the z-axis) are significant contributors. Figure 7 shows bar graphs of the feature contribution computed during body height classification. The graphs present a comparison of the importance of each feature (as percentage) with respect to different sensor positions. In general, all features are significantly contributing in the classification task. An overview of the contribution percentages where the most important features are highlighted is given in Table 8.
Table 8. Feature importance computed during body height classification using the 10-fold cross-validation strategy. The top five contributing features are highlighted in bold. All values are percentages.
Position | Signal | Len | Dur | Mean | SD | Min | Max | RMS | Ent | E | Amp
Chest | A_x | 1.27 | 1.24 | 1.81 | 3.19 | 4.15 | 1.87 | 2.38 | 1.57 | 1.63 | 2.38
Chest | A_y | – | – | 2.28 | 1.61 | 1.86 | 2.10 | 1.87 | 1.48 | 1.62 | 2.09
Chest | A_z | – | – | 3.69 | 3.44 | 3.62 | 3.67 | 3.34 | 2.38 | 3.60 | 1.60
Chest | G_x | – | – | 1.70 | 1.52 | 1.65 | 1.81 | 1.72 | 1.74 | 1.77 | 1.96
Chest | G_y | – | – | 1.00 | 2.05 | 1.61 | 2.15 | 2.06 | 0.94 | 1.89 | 1.22
Chest | G_z | – | – | 1.25 | 1.77 | 1.19 | 1.54 | 1.74 | 1.13 | 1.75 | 1.11
Lower Back | A_x | 1.34 | 1.33 | 1.57 | 3.12 | 2.65 | 1.82 | 1.89 | 1.78 | 1.59 | 2.43
Lower Back | A_y | – | – | 2.83 | 1.51 | 1.77 | 2.31 | 1.82 | 1.51 | 1.73 | 1.54
Lower Back | A_z | – | – | 4.35 | 2.54 | 2.73 | 3.01 | 3.40 | 2.26 | 3.88 | 1.44
Lower Back | G_x | – | – | 2.16 | 2.08 | 1.52 | 1.63 | 1.52 | 1.40 | 1.50 | 1.42
Lower Back | G_y | – | – | 1.28 | 1.55 | 1.76 | 1.49 | 1.62 | 1.96 | 1.57 | 1.53
Lower Back | G_z | – | – | 1.69 | 1.89 | 1.71 | 2.08 | 1.83 | 1.65 | 1.99 | 3.02
Right Wrist | A_x | 1.33 | 1.33 | 1.89 | 2.83 | 2.78 | 1.49 | 2.26 | 1.62 | 1.82 | 2.35
Right Wrist | A_y | – | – | 2.48 | 2.31 | 3.03 | 2.68 | 2.14 | 1.86 | 1.99 | 2.07
Right Wrist | A_z | – | – | 2.53 | 2.31 | 2.56 | 3.48 | 2.87 | 1.62 | 2.86 | 1.50
Right Wrist | G_x | – | – | 1.71 | 1.49 | 1.60 | 1.43 | 1.81 | 1.26 | 1.64 | 1.32
Right Wrist | G_y | – | – | 1.82 | 1.82 | 1.76 | 2.30 | 1.98 | 1.59 | 2.04 | 1.61
Right Wrist | G_z | – | – | 1.95 | 1.57 | 1.87 | 1.97 | 2.14 | 1.48 | 2.19 | 1.65
Left Ankle | A_x | 1.04 | 1.06 | 1.31 | 2.53 | 3.81 | 1.86 | 1.89 | 1.26 | 1.50 | 1.82
Left Ankle | A_y | – | – | 3.41 | 1.62 | 1.77 | 3.06 | 2.16 | 1.94 | 2.06 | 1.38
Left Ankle | A_z | – | – | 3.28 | 1.51 | 2.40 | 1.92 | 1.69 | 2.28 | 1.70 | 1.61
Left Ankle | G_x | – | – | 1.73 | 1.61 | 2.00 | 1.65 | 1.65 | 1.18 | 1.65 | 1.44
Left Ankle | G_y | – | – | 2.42 | 2.14 | 2.37 | 2.56 | 2.71 | 1.57 | 2.33 | 2.01
Left Ankle | G_z | – | – | 2.24 | 1.95 | 2.75 | 2.21 | 2.00 | 1.61 | 2.43 | 1.92
Figure 7. Bar graphs of the features’ importance computed during body height classification using the 10-fold cross-validation strategy. The graphs present a comparison of the importance of each feature (in %) with respect to different sensor positions. In general, all features are significantly contributing in the classification task.

3.4.3. Age Classification

For the location at the chest, the importance of the features is similarly distributed as in the height classification results: accelerations along the z-axis contributed most, especially the mean, maximum, RMS and energy. These are associated with the motion of the upper body in the dorso-ventral direction. Furthermore, the minimum acceleration along the x-axis, i.e., the cranio-caudal axis, is important. For the lower back, the most important features are associated especially with acceleration of the z-axis. This is similar to the results found in the height classification scenario and corresponds to changes in the velocity of the movement of the hips within the sagittal plane, i.e., front to back. For the right wrist, features associated with acceleration along each of the three axes contribute significantly. Additionally, the minimum angular velocity associated with rotation around the z-axis, i.e., swinging laterally, is important. For the ankles, the contribution of features associated with lateral acceleration is high. Additionally, angular velocities associated with swinging of the feet from side to side (around the z-axis), as well as rolling over from heel to toes (rotation around the y-axis) are significant contributors. Figure 8 shows bar graphs of the features’ importance computed during age classification. The graphs present a comparison of the importance of each feature (as percentage) with respect to different sensor positions. In general, all features are significantly contributing in the classification task. An overview of contribution percentages where the most important features are highlighted is given in Table 9.
Figure 8. Bar graphs of the features’ importance computed during age classification using the 10-fold cross-validation strategy. The graphs present a comparison of the importance of each feature (in %) with respect to different sensor positions. In general, all features are significantly contributing in the classification task.
Table 9. Feature importance computed during age classification using the 10-fold cross-validation strategy. The top five contributing features are highlighted in bold. All values are percentages.
Position | Signal | Len | Dur | Mean | SD | Min | Max | RMS | Ent | E | Amp
Chest | A_x | 1.50 | 1.46 | 2.26 | 2.29 | 3.25 | 2.37 | 2.89 | 2.00 | 1.75 | 2.21
Chest | A_y | – | – | 2.74 | 1.83 | 1.97 | 2.16 | 1.73 | 1.67 | 1.83 | 2.11
Chest | A_z | – | – | 3.63 | 2.31 | 2.98 | 3.17 | 3.40 | 2.19 | 3.11 | 1.54
Chest | G_x | – | – | 1.49 | 1.44 | 1.31 | 1.43 | 2.24 | 1.47 | 2.38 | 1.76
Chest | G_y | – | – | 0.99 | 2.00 | 2.09 | 2.05 | 1.95 | 0.90 | 1.90 | 1.30
Chest | G_z | – | – | 1.28 | 2.14 | 1.56 | 1.49 | 1.92 | 1.10 | 2.02 | 1.42
Lower Back | A_x | 1.22 | 1.28 | 1.51 | 1.99 | 1.49 | 1.63 | 1.91 | 1.69 | 1.42 | 2.26
Lower Back | A_y | – | – | 2.99 | 1.29 | 1.73 | 1.85 | 1.65 | 1.74 | 1.61 | 1.22
Lower Back | A_z | – | – | 4.75 | 2.22 | 4.15 | 3.28 | 2.80 | 2.38 | 2.76 | 1.56
Lower Back | G_x | – | – | 1.64 | 2.86 | 1.72 | 1.63 | 1.58 | 1.25 | 1.62 | 1.67
Lower Back | G_y | – | – | 1.35 | 2.16 | 2.06 | 2.09 | 2.17 | 1.44 | 1.98 | 1.95
Lower Back | G_z | – | – | 1.90 | 1.94 | 1.59 | 1.74 | 2.38 | 1.81 | 1.94 | 3.17
Right Wrist | A_x | 1.68 | 1.65 | 1.66 | 2.10 | 1.62 | 1.66 | 1.93 | 2.05 | 2.65 | 1.98
Right Wrist | A_y | – | – | 2.71 | 1.85 | 2.60 | 2.65 | 2.15 | 1.73 | 2.00 | 1.52
Right Wrist | A_z | – | – | 2.35 | 1.96 | 2.42 | 3.42 | 2.35 | 1.97 | 2.17 | 1.52
Right Wrist | G_x | – | – | 1.81 | 1.56 | 1.67 | 1.34 | 1.75 | 1.44 | 1.68 | 1.35
Right Wrist | G_y | – | – | 1.78 | 2.14 | 1.63 | 2.18 | 1.97 | 1.74 | 2.07 | 1.67
Right Wrist | G_z | – | – | 2.03 | 2.57 | 2.79 | 2.23 | 2.25 | 1.81 | 2.26 | 1.95
Left Ankle | A_x | 1.10 | 1.15 | 1.29 | 1.66 | 2.16 | 1.79 | 1.70 | 1.35 | 1.59 | 1.58
Left Ankle | A_y | – | – | 2.58 | 1.63 | 2.54 | 1.84 | 2.06 | 1.70 | 1.94 | 1.54
Left Ankle | A_z | – | – | 2.75 | 1.68 | 3.42 | 2.84 | 1.58 | 2.27 | 1.83 | 1.80
Left Ankle | G_x | – | – | 2.20 | 1.96 | 2.47 | 1.62 | 1.93 | 1.56 | 1.81 | 1.78
Left Ankle | G_y | – | – | 2.21 | 1.96 | 2.14 | 4.72 | 1.79 | 1.81 | 2.06 | 1.84
Left Ankle | G_z | – | – | 2.52 | 1.94 | 2.56 | 2.09 | 1.93 | 1.74 | 2.06 | 1.90

3.5. Classification Results Based on Restriction to Subgroups

Since the correlation between body height and gender is very high (on average, men are taller than women), we performed a gait-based classification task on each of the groups of female and male participants in order to present height classification results that are independent of this particular phenomenon. Moreover, we also performed age classification on the data of each subgroup (female vs. male) separately. The number of subjects in the study did not allow for ternary classification within subgroups (see Table 5 for the population characteristics). Therefore, there were two classes in the height-related experiment: C_H1 = the body height of the subject is less than or equal to t_h cm; C_H2 = the body height of the subject is greater than t_h cm (t_h = 180 for male, t_h = 170 for female subjects). In the age-related experiment, the assigned classes were: C_A1 = the subject is at most t_a years old; C_A2 = the subject is older than t_a years (t_a = 40 for male, t_a = 50 for female subjects).
Table 10 shows an overview of the results. The results are very good in all cases, with the classification rate higher than 90% in all but two cases (89.34% and 87.97% for the right wrist sensor in the two female subgroups). The results also show balanced sensitivity, specificity, the positive predictive value (PPV) of each class and the average PPV of all classes. For body height classification, PPV_C1 represents the PPV of the class C_H1, and PPV_C2 represents the PPV of the class C_H2. For age classification, PPV_C1 shows the PPV of the class C_A1, and PPV_C2 shows the PPV of the class C_A2.
Table 10. Results of body height and age classifications within participant subgroups using 10-fold cross-validation. The results show balanced correct classification rates, sensitivity, specificity, the positive predictive value (PPV) of each class and the average PPV of all classes.
Classification Task | Body Part | Sensor | Class. Rate | Sens. | Spec. | PPV_C1 | PPV_C2 | Avg. PPV
Body Height Classification, Male Group | Chest | A_xyz, G_xyz | 95.06 | 96.74 | 92.72 | 94.87 | 95.33 | 95.10
Body Height Classification, Male Group | Lower Back | A_xyz, G_xyz | 93.46 | 94.82 | 91.61 | 93.93 | 92.81 | 93.37
Body Height Classification, Male Group | Right Wrist | A_xyz, G_xyz | 93.50 | 96.77 | 89.07 | 92.31 | 95.32 | 93.81
Body Height Classification, Male Group | Left Ankle | A_xyz, G_xyz | 93.27 | 94.91 | 91.20 | 93.16 | 93.41 | 93.29
Body Height Classification, Female Group | Chest | A_xyz, G_xyz | 91.18 | 92.84 | 89.07 | 91.49 | 90.77 | 91.13
Body Height Classification, Female Group | Lower Back | A_xyz, G_xyz | 93.22 | 96.06 | 89.63 | 92.13 | 94.73 | 93.43
Body Height Classification, Female Group | Right Wrist | A_xyz, G_xyz | 89.34 | 92.97 | 84.90 | 88.30 | 90.78 | 89.54
Body Height Classification, Female Group | Left Ankle | A_xyz, G_xyz | 92.71 | 94.71 | 90.08 | 92.59 | 92.86 | 92.73
Age Classification, Male Group | Chest | A_xyz, G_xyz | 93.36 | 93.12 | 93.60 | 93.90 | 92.79 | 93.34
Age Classification, Male Group | Lower Back | A_xyz, G_xyz | 93.61 | 93.45 | 93.77 | 94.01 | 93.19 | 93.60
Age Classification, Male Group | Right Wrist | A_xyz, G_xyz | 93.55 | 94.40 | 92.65 | 93.19 | 93.95 | 93.57
Age Classification, Male Group | Left Ankle | A_xyz, G_xyz | 92.65 | 92.69 | 92.62 | 92.58 | 92.73 | 92.65
Age Classification, Female Group | Chest | A_xyz, G_xyz | 92.78 | 90.04 | 95.29 | 94.59 | 91.27 | 92.93
Age Classification, Female Group | Lower Back | A_xyz, G_xyz | 95.05 | 95.78 | 94.39 | 93.92 | 96.11 | 95.01
Age Classification, Female Group | Right Wrist | A_xyz, G_xyz | 87.97 | 88.79 | 87.20 | 86.62 | 89.29 | 87.96
Age Classification, Female Group | Left Ankle | A_xyz, G_xyz | 90.80 | 87.37 | 93.74 | 92.29 | 89.64 | 90.96

3.6. Subject-Wise Cross-Validation

In order to show that our results are not caused by over-fitting the classification to specific subjects rather than learning the properties we are looking for (gender, height and age), a subject-wise cross-validation model was also employed (as explained in Section 2.8). Table 11 presents the classification results of subject-wise cross-validation for all three group classification tasks: gender, height and age. The feature set contained all features of the 6D accelerations and angular velocities (50 in total). For each sensor position, sensitivity, specificity, the PPV of each class and the average PPV of all classes were also computed. A comparison of the classification results of the group classification tasks using 10-fold cross-validation and subject-wise cross-validation for the chest (CH), lower back (LB), right wrist (RW) and left ankle (LA) is presented in Figure 9. The 10-fold cross-validation clearly outperforms the subject-wise cross-validation in all cases.
Figure 9. A comparison of correct classification accuracy of group classification tasks (gender, height and age) using 10-fold cross-validation and subject-wise cross-validation. Sensor positions include: chest (CH), lower back (LB), right wrist (RW) and left ankle (LA). The 10-fold cross-validation model outperforms the subject-wise cross-validation model in all cases.
Table 11. Subject-wise classification results of different classification categories: gender, height and age. The results show balanced correct classification rates, sensitivity, specificity, the positive predictive value (PPV) of each class and the average PPV of all classes.
Classification Task | Body Part | Sensor | Class. Rate | Sens. | Spec. | PPV_C1 | PPV_C2 | PPV_C3 | Avg. PPV
Gender Classification | Chest | A_xyz, G_xyz | 85.48 | 85.09 | 85.88 | 86.28 | 84.66 | – | 85.47
Gender Classification | Lower Back | A_xyz, G_xyz | 87.95 | 85.71 | 89.71 | 86.74 | 88.88 | – | 87.81
Gender Classification | Right Wrist | A_xyz, G_xyz | 78.90 | 73.50 | 82.69 | 74.89 | 81.63 | – | 78.26
Gender Classification | Left Ankle | A_xyz, G_xyz | 77.14 | 82.32 | 72.67 | 72.17 | 82.68 | – | 77.43
Body Height Classification | Chest | A_xyz, G_xyz | 82.87 | 79.13 | 91.23 | 75.00 | 71.20 | 91.81 | 79.34
Body Height Classification | Lower Back | A_xyz, G_xyz | 84.38 | 84.88 | 92.02 | 83.18 | 81.98 | 87.23 | 84.13
Body Height Classification | Right Wrist | A_xyz, G_xyz | 72.61 | 71.98 | 86.31 | 80.02 | 58.66 | 79.10 | 72.59
Body Height Classification | Left Ankle | A_xyz, G_xyz | 67.78 | 67.84 | 83.92 | 84.96 | 59.57 | 61.60 | 68.71
Age Classification | Chest | A_xyz, G_xyz | 68.54 | 69.38 | 84.47 | 59.79 | 85.62 | 70.28 | 71.90
Age Classification | Lower Back | A_xyz, G_xyz | 72.00 | 72.05 | 85.61 | 63.95 | 72.28 | 85.23 | 73.82
Age Classification | Right Wrist | A_xyz, G_xyz | 61.99 | 61.72 | 80.96 | 53.01 | 68.50 | 62.84 | 61.45
Age Classification | Left Ankle | A_xyz, G_xyz | 63.95 | 63.31 | 81.91 | 60.59 | 55.31 | 72.88 | 62.93
In the case of gender classification using chest and lower back sensors, the classification rates are 7.08% and 6.37% lower than 10-fold cross-validation. For right wrist and left ankle sensors, the classification rates are 8.26% and 12.83% lower than 10-fold cross-validation. In the case of height classification using chest and lower back sensors, the classification rates are 6.18% and 6.07% lower than 10-fold cross-validation. For right wrist and left ankle sensors, the classification rates are 12.18% and 19.50% lower than 10-fold cross-validation.
For the age classification task, a sharp decline in the classification rates is observable under subject-wise cross-validation. For chest and lower back sensors, the classification rates are 20.28% and 16.82% lower than 10-fold cross-validation. For the right wrist and left ankle, the classification rates are 21.51% and 21.79% lower than 10-fold cross-validation. The main reason for this sharp decline is the unbalanced population in the classes C_A1, C_A2 and C_A3, with a subject ratio of 9:6:11.
On the level of subject-wise cross-validation, it is also possible to address the question of the invariance of the features within the different steps of a walking sequence, or to set up random forest regressions for age and height. Not surprisingly, almost all steps of one walking sequence were classified identically: 99.1% for gender classification, 98.7% for height classification and 98.4% for age classification. When performing a random forest regression instead of a classification, we obtained age estimates with an average RMS error of about 11.51 years and height estimates with an average RMS error of about 9.14 cm.
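A random forest regression of this kind could be set up as sketched below (an assumption, not the authors' code); the feature matrix, ages and subject assignments are placeholders, and the RMS error is computed over leave-one-subject-out predictions.

```python
# Minimal sketch: random forest regression of age from per-step features,
# evaluated subject-wise with an RMS error in years.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict

rng = np.random.default_rng(4)
X = rng.normal(size=(260, 50))                  # placeholder step features
age = rng.uniform(25, 70, size=260)             # placeholder ages
subject_ids = np.repeat(np.arange(26), 10)      # 26 subjects, ~10 steps each

reg = RandomForestRegressor(n_estimators=400, random_state=0)
pred = cross_val_predict(reg, X, age, cv=LeaveOneGroupOut(), groups=subject_ids)
rmse = np.sqrt(np.mean((pred - age) ** 2))      # RMS error in years
```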

4. Discussion

4.1. Summary of Findings

The general problem we tackled is the estimation of soft biometric information from one single step recorded by one inertial sensor. We did so by solving different classification tasks based on the motion data of human walking steps represented by accelerations and angular velocities. Data were recorded by one sensor placed at various locations on the human body, namely the chest, the lower back, the wrist and the ankle. The results show that these classification tasks can be solved well by using accelerometers and/or gyroscopes at any of the given locations. The classification rates were highest for sensors located at the lower back and chest in each of the experiments, but still convincingly high when the sensor is attached to the wrist or ankle.
Our analysis of the feature sets used in each of the experiments has made clear that no single feature is mainly responsible for any of the distinctions necessary for a classification. However, the feature importances in each of the classifications gave pointers as to which combination of features produces the best results. The most important finding was that angular velocities did not perform better than accelerations.

4.2. Comparison with Existing Research

It is not surprising that information about gender can be recovered by analysis of chest or lower back movement. The effects of marker placement and viewpoint selection for recording locomotion are discussed extensively in the work of Troje [2], as is the high relevance of hip movement for gender classification by human observers. However, we have presented new findings, namely that accelerations associated with wrist and ankle movement alone allow for classification of gender as well. To our knowledge, we are also the first to show that classification of height and age groups is possible from non-visual features; so far, this has been done solely by relying on image- or video-based features. Makihara et al. [38] present gait-based age estimation by Gaussian process regression on silhouette-based features of bodies (contrary to face-based age estimation, as presented by Stewart et al. [39]). Their investigation was based on standard-resolution video data, and they constructed a whole-generation database of over 1000 individuals with ages ranging from 2 to 94 years.
Our initial situation clearly differs from this in terms of sensor modalities. The use of commercial smartphones and wearables is an attractive opportunity to monitor biometric properties nowadays. Mobile phones and smart devices are a convenient platform for recording information in an everyday setup. Our experiments have shown that information recorded by a single sensor, such as a smart device, suffices for the estimation of basic soft biometric properties. The wrist was a particularly important test location, because smart devices are commonly worn there.
Estimating biometric properties based on motion data makes sense in a number of different scenarios. In some of them, the focus may be on hard biometric properties in order to facilitate online identity checks and close security gaps. A number of previous works have shown that identification and authentication problems can be solved by classification of motion data acquired by mobile devices. Derawi and Bours [40] show that recognition of specific users can be done in real-time based on data collected by mobile phones. Their method can correctly identify enrolled users based on learning templates of different walking trials.
On the other hand, attention may be directed to soft biometric properties. Monitoring health or preventing and curing injury are use cases that represent this idea. Previous works have shown that accelerometers are well suited for the detection and recognition of events and activity. In their paper on sensory motor performance, Albert et al. [41] discuss a new method to classify different types of falls in order to rapidly assess the cause and the necessary emergency response. They present very good results classifying accelerometer data acquired by commercial mobile phones, which were attached to the lower backs of test subjects. In their comparative evaluation of five machine learning classifiers, support vector machines performed best, achieving accuracy values near 98%. Classification by decision trees only performed second best in their experiments, at 94% to 98% accuracy for fall detection and at 98% to 99% accuracy for fall type classification. In their paper on gait pattern classification, Von Tscharner et al. [42] even conclude that a combination of PCA, SVM and ICA is most reliable when dealing with high intra- and inter-subject variability. However, in their survey on mobile gait classification, Schneider et al. [43] attempt to settle the disagreement about suitable classification algorithms and conclude that random forest is best suited for the classification of gait-related properties. In our setup, we decided to use random forests in order to produce comparable results. One additional benefit of this choice is that only a small number of parameters has to be chosen. Furthermore, the random forest method enables computing the significance and importance of each feature in the overall classification. This helped us to investigate and perform a comparative study of the features' importance for each sensor position in different classification tasks.

4.3. Limitations

Since our database is much smaller than the one introduced by Makihara et al. [38] and the variety of biometric features was also smaller (e.g., age covered only three decades), our experiments can only serve as proof of concept for now. Testing classifiers of non-image-based features on a larger database comprising wider ranges of biometric properties is a direction for future work.
Another limitation of our database is that it only contains data from patients with complaints of back pain. It would be worthwhile to perform further experiments to record data of participants without back pain (control group). Classification tasks could then be performed for the patient group, the control group and a combination of both.
One noteworthy limitation we had to face in our experiments is a possible uncertainty of sensor placement. Irrespective of how carefully each involved sensor is placed, the accuracy of placement depends on physical characteristics of test subjects, which may vary between individuals to some extent.

5. Conclusions and Future Work

We have classified biometric information from the data of a single inertial measurement unit collected during a single step. As a novel empirical finding, we have shown that single steps of normal walking already reveal biometric information about gender, height and age quite well, not only for measurements at the lower back or chest, but also at the wrist or ankle. Using standard 10-fold cross-validation, the classification rates ranged from 87.16% (right wrist sensor) to 92.57% (chest sensor) for gender, from 84.78% (right wrist sensor) to 89.05% (chest sensor) for height, and from 83.50% (right wrist sensor) to 88.82% (chest and lower back sensors) for age. Under the stricter subject-wise evaluation, the classification rates are lower than the 10-fold cross-validation results by 6.37% (lower back sensor) to 12.83% (left ankle sensor) for gender, by 6.07% (lower back sensor) to 19.50% (left ankle sensor) for height, and by 16.82% (lower back sensor) to 21.79% (left ankle sensor) for age. These values can be regarded as lower bounds on the achievable classification rates, since our feature selection and the machine learning techniques we used may not be optimal. In particular, a good estimate of the direction of gravity should improve the results: at sensor positions with little change in orientation (chest, lower back), the classification rates were better than at positions with larger changes (wrist, ankle). In future work, we will adopt a model-based estimate of body-part orientation, using techniques similar to those in [17], to obtain such estimates.
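The difference between the two evaluation schemes can be illustrated with the following minimal sketch (Python with scikit-learn, not the authors' implementation; file names and parameter values are placeholder assumptions). In standard 10-fold cross-validation, steps of the same subject may appear in both training and test folds, whereas subject-wise evaluation, implemented here as leave-one-subject-out, holds out all steps of a subject together:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, LeaveOneGroupOut, cross_val_score

X = np.load("step_features.npy")      # hypothetical files: per-step features,
y = np.load("gender_labels.npy")      # class labels and subject identifiers
subjects = np.load("subject_ids.npy")

clf = RandomForestClassifier(n_estimators=500, random_state=0)

# Standard 10-fold cross-validation: folds are drawn over individual steps.
acc_10fold = cross_val_score(clf, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))

# Subject-wise evaluation: every fold holds out all steps of one subject,
# so the classifier is always tested on an unseen person.
acc_subject = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())

print(f"10-fold CV accuracy:      {acc_10fold.mean():.3f}")
print(f"subject-wise CV accuracy: {acc_subject.mean():.3f}")

The gap between the two accuracies indicates how much the classifier relies on subject-specific gait characteristics rather than on properties shared across subjects.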
On the side of basic science questions about human movement control, we want to address in future work to what degree movement patterns can be "spoofed" by trained and untrained persons. We will perform tests asking participants to walk like the other gender, or to pretend to be of a different age or height.
On the technological side, our work should help smartwatches, smartphones or smart shoes gain information about their users, given that many sensor systems in consumer electronics are limited: to save battery life, long-term recordings are possible only at low sampling rates, and high-speed measurements only for short periods of time. It therefore becomes increasingly important to extract information from sparse sensor readings. Our work presents a technique by which biometric parameters can be estimated from single steps; these parameters can then support further analysis of motions recorded at lower sampling rates. Compared to previous work, where full sequences are required for classification, we see this as a substantial improvement.
However, our work also demonstrates the privacy sensitivity of the sensor data of such devices: even a single step recorded by a smartphone or smartwatch reveals personal information about gender, height and age.

Acknowledgments

We thank Guanhong Tao for his support and help in performing the experiments. We also thank all participants for taking part in the experiments. This work was partially supported by the Deutsche Forschungsgemeinschaft (DFG) under research grant KR 4309/2-1.

Author Contributions

Conceived of and designed the experiments: BK, QR, AW. Performed the experiments: QR, BK. Analyzed the data: QR, BK, AV. Contributed reagents/materials/analysis tools: QR, BK, AV. Wrote the paper: AV, QR, BK, AW.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Johansson, G. Visual perception of biological motion and a model for its analysis. Percept. Psychophys. 1973, 14, 201–211. [Google Scholar] [CrossRef]
  2. Troje, N.F. Decomposing biological motion: A framework for analysis and synthesis of human gait patterns. J. Vis. 2002, 2, 2. [Google Scholar] [CrossRef] [PubMed]
  3. Lv, F.; Nevatia, R. Single View Human Action Recognition using Key Pose Matching and Viterbi Path Searching. In Proceedings of the 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2007), Minneapolis, MN, USA, 18–23 June 2007.
  4. Junejo, I.N.; Dexter, E.; Laptev, I.; Perez, P. View-Independent Action Recognition from Temporal Self-Similarities. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 172–185. [Google Scholar] [CrossRef] [PubMed]
  5. Barnachon, M.; Bouakaz, S.; Boufama, B.; Guillou, E. Ongoing human action recognition with motion capture. Pattern Recognit. 2014, 47, 238–247. [Google Scholar] [CrossRef]
  6. Oshin, O.; Gilbert, A.; Bowden, R. Capturing relative motion and finding modes for action recognition in the wild. Comput. Vis. Image Underst. 2014, 125, 155–171. [Google Scholar] [CrossRef]
  7. Moeslund, T.B.; Hilton, A.; Krüger, V. A survey of advances in vision-based human motion capture and analysis. Comput. Vis. Image Underst. 2006, 104, 90–126. [Google Scholar] [CrossRef]
  8. Schuldt, C.; Laptev, I.; Caputo, B. Recognizing Human Actions: A Local SVM Approach. In Proceedings of the 17th International Conference on Pattern Recognition (ICPR’04), Washington, DC, USA, 23–26 August 2004; pp. 32–36.
  9. Venture, G.; Ayusawa, K.; Nakamura, Y. Motion capture based identification of the human body inertial parameters. In Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, (EMBS), Vancouver, BC, Canada, 20–25 August 2008; pp. 4575–4578.
  10. Kirk, A.G.; O’Brien, J.F.; Forsyth, D.A. Skeletal Parameter Estimation from Optical Motion Capture Data. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 20–25 June 2005; pp. 782–788.
  11. Liew, C.S.; Wah, T.Y.; Shuja, J.; Daghighi, B. Mining Personal Data Using Smartphones and Wearable Devices: A Survey. Sensors 2015, 15, 4430–4469. [Google Scholar]
  12. Son, D.; Lee, J.; Qiao, S.; Ghaffari, R.; Kim, J.; Lee, J.E.; Song, C.; Kim, S.J.; Lee, D.J.; Jun, S.W.; et al. Multifunctional wearable devices for diagnosis and therapy of movement disorders. Nat. Nanotechnol. 2014, 9, 397–404. [Google Scholar] [CrossRef] [PubMed]
  13. Tao, W.; Liu, T.; Zheng, R.; Feng, H. Gait Analysis Using Wearable Sensors. Sensors 2012, 12, 2255–2283. [Google Scholar] [CrossRef] [PubMed]
  14. Le Masurier, G.C.; Tudor-Locke, C. Comparison of pedometer and accelerometer accuracy under controlled conditions. Med. Sci. Sports Exerc. 2003, 35, 867–871. [Google Scholar] [CrossRef] [PubMed]
  15. Foster, R.C.; Lanningham-Foster, L.M.; Manohar, C.; McCrady, S.K.; Nysse, L.J.; Kaufman, K.R.; Padgett, D.J.; Levine, J.A. Precision and accuracy of an ankle-worn accelerometer-based pedometer in step counting and energy expenditure. Prev. Med. 2005, 41, 778–783. [Google Scholar] [CrossRef] [PubMed]
  16. Tautges, J.; Zinke, A.; Krüger, B.; Baumann, J.; Weber, A.; Helten, T.; Müller, M.; Seidel, H.P.; Eberhardt, B. Motion Reconstruction Using Sparse Accelerometer Data. ACM Trans. Graph. 2011, 30, 18:1–18:12. [Google Scholar] [CrossRef]
  17. Riaz, Q.; Tao, G.; Krüger, B.; Weber, A. Motion reconstruction using very few accelerometers and ground contacts. Graph. Model. 2015, 79, 23–38. [Google Scholar]
  18. Hung, H.; Englebienne, G.; Kools, J. Classifying Social Actions with a Single Accelerometer. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Zurich, Switzerland, 8–12 September 2013; pp. 207–210.
  19. Parkka, J.; Ermes, M.; Korpipaa, P.; Mantyjarvi, J.; Peltola, J.; Korhonen, I. Activity classification using realistic data from wearable sensors. IEEE Trans. Inf. Technol. Biomed. 2006, 10, 119–128. [Google Scholar] [CrossRef] [PubMed]
  20. Jean-Baptiste, E.M.D.; Nabiei, R.; Parekh, M.; Fringi, E.; Drozdowska, B.; Baber, C.; Jancovic, P.; Rotshein, P.; Russell, M.J. Intelligent Assistive System Using Real-Time Action Recognition for Stroke Survivors. In Proceedings of the 2014 IEEE International Conference on Healthcare Informatics (ICHI), Verona, Italy, 15–17 September 2014; pp. 39–44.
  21. Dijkstra, B.; Kamsma, Y.; Zijlstra, W. Detection of gait and postures using a miniaturised triaxial accelerometer-based system: Accuracy in community-dwelling older adults. Age Ageing 2010, 39, 259–262. [Google Scholar] [CrossRef] [PubMed]
  22. Morris, D.; Saponas, T.S.; Guillory, A.; Kelner, I. RecoFit: Using a Wearable Sensor to Find, Recognize, and Count Repetitive Exercises. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Toronto, ON, Canada, 26 April–1 May 2014; pp. 3225–3234.
  23. Neugebauer, J.M.; Hawkins, D.A.; Beckett, L. Estimating youth locomotion ground reaction forces using an accelerometer-based activity monitor. PLoS ONE 2012, 7, e48182. [Google Scholar] [CrossRef] [PubMed]
  24. Brand, M.; Oliver, N.; Pentland, A. Coupled Hidden Markov Models for Complex Action Recognition. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR ’97), San Juan, Puerto Rico, 17–19 June 1997; pp. 994–999.
  25. Bao, L.; Intille, S. Activity Recognition from User-Annotated Acceleration Data. In Pervasive Computing; Ferscha, A., Mattern, F., Eds.; Springer Berlin Heidelberg: Berlin, Germany, 2004; Volume 300, pp. 1–17. [Google Scholar]
  26. Kwapisz, J.R.; Weiss, G.M.; Moore, S.A. Activity Recognition Using Cell Phone Accelerometers. SIGKDD Explor. Newsl. 2011, 12, 74–82. [Google Scholar] [CrossRef]
  27. Phan, T. Improving Activity Recognition via Automatic Decision Tree Pruning. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Seattle, WA, USA, 13–17 September 2014; pp. 827–832.
  28. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  29. Liaw, A.; Wiener, M. Classification and Regression by randomForest. R News 2002, 2, 18–22. [Google Scholar]
  30. Gokhale, E. Gokhale Method | Gokhale Method Institute. Available online: http://www.gokhalemethod.com/ (accessed on 2 October 2015).
  31. Särndal, C.; Swensson, B. Model Assisted Survey Sampling; Springer: New York, NY, USA, 2003; pp. 100–101. [Google Scholar]
  32. APDM, Inc. Opal: Wireless, Wearable, Synchronized Inertial Measurement Units (IMUs). Available online: http://www.apdm.com/wearable-sensors/ (accessed on 2 October 2015).
  33. Li, F.; Zhao, C.; Ding, G.; Gong, J.; Liu, C.; Zhao, F. A reliable and accurate indoor localization method using phone inertial sensors. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing, Pittsburgh, PA, USA, 5–8 September 2012; pp. 421–430.
  34. Derawi, M.; Nickel, C.; Bours, P.; Busch, C. Unobtrusive User-Authentication on Mobile Phones Using Biometric Gait Recognition. In Proceedings of the 2010 Sixth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), Darmstadt, Germany, 15–17 October 2010; pp. 306–311.
  35. Zijlstra, W. Assessment of spatio-temporal parameters during unconstrained walking. Eur. J. Appl. Physiol. 2004, 92, 39–44. [Google Scholar] [CrossRef] [PubMed]
  36. Umbaugh, S.E. Digital Image Processing and Analysis: Human and Computer Vision Applications with CVIPtools; CRC Press: Boca Raton, FL, USA, 2010; pp. 373–376. [Google Scholar]
  37. Louppe, G.; Wehenkel, L.; Sutera, A.; Geurts, P. Understanding variable importances in forests of randomized trees. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–10 December 2013; pp. 431–439.
  38. Makihara, Y.; Okumura, M.; Iwama, H.; Yagi, Y. Gait-based Age Estimation using a Whole-generation Gait Database. In Proceedings of the International Joint Conference on Biometrics (IJCB2011), Washington, DC, USA, 11–13 October 2011; pp. 1–6.
  39. Stewart, D.; Pass, A.; Zhang, J. Gender classification via lips: static and dynamic features. IET Biom. 2013, 2, 28–34. [Google Scholar] [CrossRef]
  40. Derawi, M.; Bours, P. Gait and activity recognition using commercial phones. Comput. Secur. 2013, 39, 137–144. [Google Scholar] [CrossRef]
  41. Albert, M.V.; Kording, K.; Herrmann, M.; Jayaraman, A. Fall Classification by Machine Learning Using Mobile Phones. PLoS ONE 2012, 7, e36556. [Google Scholar] [CrossRef] [PubMed]
  42. Von Tscharner, V.; Enders, H.; Maurer, C. Subspace Identification and Classification of Healthy Human Gait. PLoS ONE 2013, 8, e65063. [Google Scholar] [CrossRef] [PubMed]
  43. Schneider, O.S.; MacLean, K.E.; Altun, K.; Karuei, I.; Wu, M.M. Real-time Gait Classification for Persuasive Smartphone Apps: Structuring the Literature and Pushing the Limits. In Proceedings of the 2013 International Conference on Intelligent User Interfaces, Los Angeles, CA, USA, 19–22 March 2013; pp. 161–172.
