Article

Measurement of Human Gait Symmetry using Body Surface Normals Extracted from Depth Maps

1 DIRO, University of Montreal, Montreal, QC H3T 1J4, Canada
2 ITF, The University of Danang—University of Science and Technology, Danang 556361, Vietnam
* Author to whom correspondence should be addressed.
Sensors 2019, 19(4), 891; https://doi.org/10.3390/s19040891
Submission received: 11 December 2018 / Revised: 16 February 2019 / Accepted: 18 February 2019 / Published: 21 February 2019
(This article belongs to the Special Issue Depth Sensors and 3D Vision)

Abstract

In this paper, we introduce an approach for measuring human gait symmetry where the input is a sequence of depth maps of a subject walking on a treadmill. Body surface normals are used to describe the 3D information of the walking subject in each frame. Two different schemes for embedding the temporal factor into a symmetry index are proposed. Experiments on the whole body, as well as on the lower limbs only, were conducted to assess the usefulness of upper-body information in this task. The potential of our method was demonstrated on a dataset of 97,200 depth maps covering nine different walking gaits. An ROC analysis for abnormal gait detection gave the best result (AUC = 0.958) compared with other related studies. The experimental results confirm the contribution of the upper body in gait analysis, as well as the reliability of approximating the average gait symmetry index without explicitly considering individual gait cycles for asymmetry detection.

1. Introduction

Gait analysis has provided substantial evidence of its potential to identify and diagnose early neurological and non-neurological musculoskeletal disorders. Gait symmetry is one of the most popular features used to perform these health-related assessments. It is a good indicator of human motion ability, used to identify pathology and assess recovery for people with asymmetric gait of various origins such as cerebral palsy, stroke, hip or knee arthritis and surgery, or leg length discrepancy [1,2,3,4]. Researchers have dealt with the problem of gait symmetry estimation using various input data types. For instance, body-mounted devices such as inertial sensors [5] or motion capture markers [6] have provided precise measurements and promising results. In this paper, we propose an alternative approach that estimates an index of gait symmetry using a vision system. Compared with the mentioned methods, our system does not require the precise positioning of sensors/markers on the patient's body. Moreover, since our input is acquired from a single camera, the method does not need run-time calibration/synchronization as multi-sensor setups do. Our configuration consists of a treadmill on which a patient walks and a depth camera placed in front of it capturing a sequence of depth maps. To avoid occlusions in the depth maps, the treadmill console, handlebars and vertical supports are lowered and placed on the floor in front of the treadmill. The objective of our method is to produce a meaningful index of human gait symmetry during a walk. This may serve as a patient screening tool providing relevant gait information during a treatment or recovery after a surgery. In addition, the study can be considered as an exploratory examination of surface normals for assessing gait asymmetry, since this factor plays the principal role in our gait description.
The remainder of this paper is organized as follows: Section 2 provides an overview of related vision-based studies on human gait symmetry measurement; Section 3 describes our geometric feature representing informative gait properties; Section 4 gives the schemes of symmetry measurement; Section 5 presents our evaluation and a comparison with related studies using a dataset of multiple data types; and Section 6 concludes the paper.

2. Related Work

Considering vision-based approaches that measure human gait symmetry, many studies working with low-cost depth cameras have been proposed. These 3D vision systems reduce the need for manual intervention and thus simplify operation. Two data types are commonly employed to represent the 3D walking posture: 3D skeletons and depth maps. Typically, a skeleton is directly estimated from the corresponding depth map [7,8] and is represented as a collection of 3D coordinates of body joints. Although this representation has been applied in many applications (e.g., abnormal gait detection [9] or action recognition [10]), the skeleton estimation is unstable between consecutive frames due to noisy depth maps. Besides, skeleton estimation algorithms usually encounter problems (i.e., provide deformed results) when working with pathological gaits [11]. Taking these factors into account, we chose a sequence of raw depth maps as the input of our processing.
Auvinet et al. [11] proposed a method estimating gait asymmetry based on the depth region of the lower limbs of subjects walking on a treadmill. They introduced a Mean Gait Cycle Model (MGCM), composed of a sequence of Mean Depth Images (MDIs) for each leg, to obtain a representative step cycle and decrease the influence of noise. This requires gait cycle detection and the registration of depth maps prior to averaging since the subject's position varies on the treadmill. A gait asymmetry index was measured as the longitudinal difference between (left and right) pairs of such MDIs. Unlike that approach, where only the leg zone was considered, the whole depth body is involved in the feature extraction stage of our method. Besides, our processing flow does not require gait cycle detection since our gait symmetry index can be directly measured on an input sequence of arbitrary length. Moreover, since the surface normals we use are independent of the body's position on the treadmill, no registration of depth maps is required.
Another recent method that computes a gait normality index was described in [12]. The authors individually processed the depth and silhouette information to provide a pair of scores that are then combined in a weighted sum to obtain the gait normality index. Depth information was represented by a histogram of depth-related features of keypoints. A hidden Markov model (HMM) was structured to represent the transition of such histograms within the input sequence of depth maps. The likelihood provided by this HMM given a new sequence was used as a partial gait normality indicator. The second partial score was estimated by considering the change of pixel-based projections over a sequence of binary silhouettes. Unlike that study, our method directly embeds the transition of depth-related properties and implicitly considers the silhouette-based symmetry in a single processing flow.

3. Depth-Based Geometric Feature

The depth camera in our approach is placed in front of a person walking on a treadmill to capture depth frames from a frontal view. The right-handed camera coordinate system is thus similar to the illustration in Figure 1, in which the positive x and y axes respectively point to the left and up while the z-axis points from the camera to the subject.

3.1. Human Body Segmentation

Since the input of the system is a sequence of depth maps and we only consider body pixels, a background removal step is necessary. This operation can be easily performed by a depth-based segmentation (e.g., [11]). In detail, a 3D bounding box above the treadmill is specified and all points within this volume are considered to belong to the human body. This method is appropriate for applications where the geometric relation between the camera and the treadmill is fixed. On the other hand, some SDKs provided together with the depth camera also offer human detection. For example, the Microsoft Kinect applies a random forest classifier on each depth map to assign a label (e.g., head, neck or shoulder) to each pixel, so that the body is the collection of pixels carrying body-part labels [7,8]. We employ the latter method since it does not require the definition of a 3D region.
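The depth-based alternative amounts to a simple box filter over the reprojected point cloud. A minimal sketch follows; the function name and box bounds are our own illustrative assumptions, not values from the paper.

```python
import numpy as np

def segment_by_box(cloud, box_min, box_max):
    """Keep only the 3D points inside a fixed axis-aligned bounding box
    specified above the treadmill; everything else is background."""
    box_min, box_max = np.asarray(box_min), np.asarray(box_max)
    inside = np.all((cloud >= box_min) & (cloud <= box_max), axis=1)
    return cloud[inside]
```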

3.2. 3D Reprojection

Since our geometric feature is computed in 3D space, a reprojection is performed to convert the depth body in each input frame into a 3D point cloud. Given a depth camera with focal lengths $f_x$ and $f_y$ and principal point $(c_x, c_y)$, a 3D point $(x, y, z)$ can be reconstructed from a 2D point $(u, v)$ with depth value $d$ as
\[ x = \frac{(u - c_x)\,d}{f_x}, \qquad y = \frac{(v - c_y)\,d}{f_y}, \qquad z = d \]
There is an obvious advantage of using 3D point clouds instead of depth frames: the depth map is a projection result where each pixel contains 2D coordinates and a depth $(u, v, d)$, while the reprojection maps each pixel into a point cloud in a uniform 3D coordinate system $(x, y, z)$, which is more appropriate for geometric operations.
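The back-projection above is the standard pinhole model; a minimal sketch (parameter names are ours, and depending on the chosen camera frame, sign flips on x and y may be needed):

```python
import numpy as np

def reproject(u, v, d, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth d into camera-space (x, y, z)
    using the pinhole model of the equation above. Depending on the
    camera frame convention (here x left, y up), signs may need flipping."""
    x = (u - cx) * d / fx
    y = (v - cy) * d / fy
    return np.array([x, y, d])
```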

3.3. Cloud of Normal Vectors

Once a 3D point cloud is obtained for the body in each depth map, a collection of normal vectors is estimated to provide a raw representation of the local surface direction at each point. Surface normals are important properties of any geometric surface and are computed directly from the point cloud.
The normal calculation of a 3D point is performed with eigenvectors and eigenvalues of a covariance matrix created from the points in its neighborhood [13]. Given an arbitrary 3D point cloud, such neighbor regions can be determined with the support of a KD-tree [14]. An illustration of this estimation is presented in Figure 2.
Let us note that there is a trade-off between processing time and the quality of normal estimation. In detail, defining a neighborhood with a large radius leads to a longer processing time, but the estimated normals are less sensitive to noise in the input point cloud. Recall that our approach applies directly to point clouds reconstructed from the depth maps without performing any enhancement (e.g., noise filtering). We defined a small neighborhood of 3-centimeter radius in our experiments (see Section 5) since normal vectors belonging to a large area will be further combined in the next step. This combination implicitly performs noise removal and provides a reliable approximation of the cloud surface direction.
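The covariance-eigenvector estimate with a radius neighborhood can be sketched as follows; SciPy's `cKDTree` stands in for the KD-tree of [14], the radius is in the same unit as the cloud, and the orientation rule (flip normals toward the camera at −z) is our assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, radius=0.03):
    """Estimate a unit normal per 3D point as the eigenvector of the
    smallest eigenvalue of the covariance matrix of its radius
    neighborhood (found with a KD-tree), as described in [13,14]."""
    tree = cKDTree(points)
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=radius)
        nb = points[idx]
        if len(nb) < 3:
            continue  # too few neighbors for a stable estimate
        cov = np.cov(nb.T)
        w, v = np.linalg.eigh(cov)   # eigenvalues in ascending order
        n = v[:, 0]                  # eigenvector of smallest eigenvalue
        if n[2] > 0:                 # orient toward the camera (-z); our choice
            n = -n
        normals[i] = n
    return normals
```

For high-density clouds, a per-point Python loop like this would be slow; batched or approximate schemes (e.g., [18,19]) are preferable in practice.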

3.4. Silhouette-Based Region Separation

A cloud of normal vectors cannot be directly used as a feature representing the human posture since different clouds might have various numbers of points. A possible solution is to segment the cloud into specific regions of interest. Concretely, we may define a fixed number of 3D regions together with their positions and combine the normal vectors within each one. This operation normalizes the cloud representation to a new array where each element corresponds to a 3D region.
In our work, we separate a cloud of normal vectors according to human body anatomy and symmetry. Concretely, the body in each depth map is simply split into four equal-size regions using a 2 × 2 grid, where the grid is easily determined as the bounding box of the body silhouette. The vertical split is necessary for our objective of symmetry measurement: it is expected to indicate the difference between the left and right body sides along the movement. The horizontal split allows the upper and lower body parts to be considered individually, to assess whether using only the lower body is sufficient for gait analysis (as done in [9,11]). Besides, this separation also reduces the risk of losing relevant information regarding specific upper and lower limb motion during walking. For example, when the left leg moves forward during a stride, the left hand tends to move backward. The combination within each region (i.e., one of the four grid cells) is simply performed as an algebraic addition of normal vectors. The representation of each input depth body is then a collection of four accumulated vectors $\{\mathbf{v}_{TL}, \mathbf{v}_{TR}, \mathbf{v}_{BL}, \mathbf{v}_{BR}\}$ in the 3D camera space. An example of this step is shown in Figure 3, in which the terms TL, TR, BL and BR respectively indicate the top-left, top-right, bottom-left and bottom-right regions. Although the 2 × 2 resolution is small, it matches the anatomical quadrants typically used by physicians to localize findings for diagnosis or treatment, and it provided good results in our experiments in Section 5.
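A sketch of the 2 × 2 accumulation; the midpoint split of the bounding box and the assignment of larger x to the left side (following the paper's frame where positive x points left) are our assumptions.

```python
import numpy as np

def accumulate_regions(points, normals):
    """Split the body cloud into a 2x2 grid at the midpoints of its
    bounding box and sum the normal vectors inside each cell.
    Returns the four accumulated vectors v_TL, v_TR, v_BL, v_BR."""
    x_mid = (points[:, 0].min() + points[:, 0].max()) / 2.0
    y_mid = (points[:, 1].min() + points[:, 1].max()) / 2.0
    top = points[:, 1] >= y_mid    # +y points up
    left = points[:, 0] >= x_mid   # +x points to the left (assumed frame)
    return {
        'TL': normals[top & left].sum(axis=0),
        'TR': normals[top & ~left].sum(axis=0),
        'BL': normals[~top & left].sum(axis=0),
        'BR': normals[~top & ~left].sum(axis=0),
    }
```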

3.5. Angle Conversion

Instead of directly employing the four 3D vectors as a feature supporting the gait symmetry measurement, we convert them into a scalar value corresponding to the angle between the left and right side vectors projected onto a specific plane. In this work, we consider the three main planes used in anatomy and then evaluate their potential.

3.5.1. Transverse Plane

The transverse plane vertically splits the body silhouette. Given the coordinate system shown in Figure 4, the plane is parallel with $Oxz$ and goes through the horizontal line splitting the 2 × 2 grid (see Figure 3). Each 3D accumulated normal vector $\mathbf{u} = (u_x, u_y, u_z)$ is first projected onto the transverse plane to obtain a new vector $\hat{\mathbf{u}} = (u_x, 0, u_z)$. We then compare the direction of the vectors between the left and right sides. Assuming left-right symmetry, the mean value of the angle $\alpha$ between the left side vector $\hat{\mathbf{u}}_L$ and the right side vector $\hat{\mathbf{u}}_R^{\mathrm{ref}}$ reflected w.r.t. the positive z direction $\mathbf{z} = (0, 0, 1)$ should be small. This angle is computed as
\[ \alpha = \cos^{-1} \frac{\hat{\mathbf{u}}_L \cdot \hat{\mathbf{u}}_R^{\mathrm{ref}}}{\|\hat{\mathbf{u}}_L\| \, \|\hat{\mathbf{u}}_R^{\mathrm{ref}}\|} \]
with $\hat{\mathbf{u}}_R^{\mathrm{ref}} = 2(\mathbf{z} \cdot \hat{\mathbf{u}}_R)\,\mathbf{z} - \hat{\mathbf{u}}_R$, where the dot notation $\cdot$ indicates the inner product.
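The projection, reflection and angle above can be sketched directly; the helper names are ours, and the arccos argument is clipped to guard against floating-point overshoot.

```python
import numpy as np

def reflect(u, axis):
    """Reflect u with respect to the unit direction `axis`: 2(axis.u)axis - u."""
    return 2.0 * np.dot(axis, u) * axis - u

def angle(a, b):
    """Angle (radians) between two vectors, clipped for numerical safety."""
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(c, -1.0, 1.0))

def transverse_angle(uL, uR):
    """Project both accumulated vectors onto the transverse plane (drop y),
    reflect the right one about +z, and return the angle alpha."""
    uL_p = np.array([uL[0], 0.0, uL[2]])
    uR_p = np.array([uR[0], 0.0, uR[2]])
    z = np.array([0.0, 0.0, 1.0])
    return angle(uL_p, reflect(uR_p, z))
```

The sagittal and coronal angles follow the same pattern: drop the x component with no reflection for the sagittal plane, and drop z with a reflection about +y for the coronal plane.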

3.5.2. Sagittal Plane

The sagittal plane is parallel with the base plane $Oyz$ and goes through the vertical line splitting the 2 × 2 grid. Similarly to the previous section, we also project each accumulated vector onto the sagittal plane and then calculate the angle between the left and right sides. In detail, the projected result of a vector $\mathbf{u} = (u_x, u_y, u_z)$ is $\hat{\mathbf{u}} = (0, u_y, u_z)$. The angle between $\hat{\mathbf{u}}_L$ and $\hat{\mathbf{u}}_R$ is determined as
\[ \beta = \cos^{-1} \frac{\hat{\mathbf{u}}_L \cdot \hat{\mathbf{u}}_R}{\|\hat{\mathbf{u}}_L\| \, \|\hat{\mathbf{u}}_R\|} \]
When extracting the angle feature using the sagittal plane, we do not need to reflect one of the two input vectors since the gait symmetry is analyzed via a comparison between left and right body parts instead of upper and lower ones (see Figure 4). During a normal (symmetric) walk, the mean values of $\hat{\mathbf{u}}_L$ and $\hat{\mathbf{u}}_R$ over a gait cycle should be close and the periodic angles $\beta$ should be stable.

3.5.3. Coronal Plane

The coronal plane is parallel with the base plane $Oxy$ and is perpendicular to both the transverse and sagittal planes. After projecting the accumulated normal vectors onto the coronal plane, we perform the angle calculation in the same fashion as for the transverse plane, with a reflection w.r.t. the positive y direction $\mathbf{y} = (0, 1, 0)$ in order to capture the left-right symmetry. Given an accumulated normal vector $\mathbf{u} = (u_x, u_y, u_z)$, its projection onto the coronal plane is thus $\hat{\mathbf{u}} = (u_x, u_y, 0)$ and the angle is computed as
\[ \gamma = \cos^{-1} \frac{\hat{\mathbf{u}}_L \cdot \hat{\mathbf{u}}_R^{\mathrm{ref}}}{\|\hat{\mathbf{u}}_L\| \, \|\hat{\mathbf{u}}_R^{\mathrm{ref}}\|} \]
where $\hat{\mathbf{u}}_R^{\mathrm{ref}} = 2(\mathbf{y} \cdot \hat{\mathbf{u}}_R)\,\mathbf{y} - \hat{\mathbf{u}}_R$.

3.5.4. Feature Representation

After performing the feature extraction, each depth body is represented by three angles $\alpha$, $\beta$ and $\gamma$ estimated over the lower and upper limbs. Recall that these three angles are processed independently in the next steps as well as in the experiments. A geometric description of the three angle calculations is given in Figure 4 to provide a visual understanding.

4. Gait Symmetry Measurement

4.1. Basic Measurement

Given the three angles $\alpha$, $\beta$ and $\gamma$ estimated from a collection of accumulated normal vectors $v = \{\mathbf{v}_{TL}, \mathbf{v}_{TR}, \mathbf{v}_{BL}, \mathbf{v}_{BR}\}$ (see Figure 3) according to the transverse, sagittal and coronal planes, our basic gait symmetry index is measured as the summation of top and bottom contributions, i.e.,
\[ I_\alpha(v) = \alpha_T + \alpha_B, \qquad I_\beta(v) = \beta_T + \beta_B, \qquad I_\gamma(v) = \gamma_T + \gamma_B \]
The addition of non-negative angles corresponding to each anatomy plane allows the accumulation of the left-right differences resulting from the upper and lower limbs. We consider two different schemes for computing the mean value of I as our gait symmetry measurement.

4.2. Frame-Based Index

The first scheme estimates a gait symmetry index for each depth body according to Equation (5). The temporal factor is then considered by calculating the average of these per-frame indices as the indicator of gait symmetry along the movement. In detail, given a sequence of $n$ collections $v$ of accumulated vectors measured from $n$ consecutive depth bodies, the gait symmetry index corresponding to each of the three angles $\alpha$, $\beta$ and $\gamma$ is computed as
\[ I_f(v_{1..n}) = \frac{1}{n} \sum_{i=1}^{n} I(v_i) \]
where $n$ should correspond to one or several gait cycles. During a normal symmetric walk, $I_f$ should be small for $\alpha$ and $\gamma$ and stable for $\beta$ over a gait cycle.

4.3. Segment-Based Index

Unlike the previous index measurement, the second scheme first performs an addition of the accumulated normal vectors at the same grid cells over a sequence of consecutive depth bodies and then estimates the symmetry index using Equation (5) only once, as
\[ I_s(v_{1..n}) = I\!\left( \frac{1}{n} \sum_{i=1}^{n} v_i \right) \]
This segment-based index may be interpreted as the within-frame measurement of a mean posture determined from a sequence of depth bodies. Therefore, for a symmetric gait, $I_s$ should be small for all three (sagittal, transverse and coronal) planes over a gait cycle.
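Both schemes can be sketched on top of the transverse-plane angle (the other two angles work the same way); the helper names are ours, and each frame is represented by a dict of the four accumulated vectors.

```python
import numpy as np

def _angle(a, b):
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(c, -1.0, 1.0))

def _alpha(uL, uR):
    # transverse-plane angle: drop y, reflect the right vector about +z
    uL = np.array([uL[0], 0.0, uL[2]])
    uR = np.array([uR[0], 0.0, uR[2]])
    z = np.array([0.0, 0.0, 1.0])
    return _angle(uL, 2.0 * np.dot(z, uR) * z - uR)

def index_one_frame(v):
    """Equation (5): top plus bottom left-right angles for one frame."""
    return _alpha(v['TL'], v['TR']) + _alpha(v['BL'], v['BR'])

def frame_based_index(seq):
    """Equation (6): mean of per-frame indices over the sequence."""
    return float(np.mean([index_one_frame(v) for v in seq]))

def segment_based_index(seq):
    """Equation (7): accumulate the region vectors over the sequence,
    then measure the index once on the resulting mean posture."""
    mean_v = {k: np.mean([v[k] for v in seq], axis=0)
              for k in ('TL', 'TR', 'BL', 'BR')}
    return index_one_frame(mean_v)
```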

5. Experiments

Since there is no benchmark dataset with ground-truth gait symmetry indices, we evaluated our method on a specific application: distinguishing normal and abnormal walking gaits. A good index measurement is expected to separate the two gait types well. In detail, the indices of a normal walking gait provided by a symmetry indicator should lie in a specific value range while those of anomalous gaits are distributed in other regions. These areas are expected to be well distinguishable. In practical applications, the gait symmetry index of a patient should converge from abnormal values to the normal range during recovery. The measure of separation ability is the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) curve since we are dealing with a binary classification problem.
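The AUC can be computed directly in its rank-based (Mann-Whitney) form, which equals the area under the ROC curve; this sketch assumes larger indices indicate more asymmetry.

```python
import numpy as np

def auc(normal_scores, abnormal_scores):
    """Probability that a randomly chosen abnormal sample scores higher
    than a randomly chosen normal one (ties count one half); this equals
    the area under the ROC curve for the 'abnormal' decision."""
    n = np.asarray(normal_scores, float)[:, None]
    a = np.asarray(abnormal_scores, float)[None, :]
    return float(np.mean((a > n) + 0.5 * (a == n)))
```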

5.1. Dataset

Our experiments were performed on a dataset of multiple data types introduced in [15]. The dataset includes depth maps, point clouds, silhouettes and 3D skeletons that were acquired by a frontal depth camera. These data captured a normal walking gait and eight abnormal ones performed by nine subjects (eight males, one female, 20–39 years old, 154–186 cm height and 51–95 kg mass) on a treadmill. The anomalous walking gaits were simulated by either padding a sole under one of the two feet or mounting a weight on an ankle. The former gaits were designed to produce frontal asymmetry, i.e., the body centroid tends to be tilted to the left or right side, while the latter produce a movement with unequal speed and displacement between the left and right legs. These gait abnormalities are appropriate for demonstrating the motion of patients with lower-limb problems and have been used in related studies [11,16]. Each gait is represented by a sequence of 1200 frames corresponding to approximately 60 gait cycles. Each of the 81 walking sequences was thus represented by 1200 consecutive depth maps in our evaluation.
This dataset was acquired by a Microsoft Kinect 2 at a frame rate of 13 fps. The camera was mounted at a height of 1.7 m. The distance between the camera and the subject was 2.3 m and the camera direction was parallel with the ground. Since the Kinect 2 measures a depth map using the Time-of-Flight technique, it provides smoother and more detailed results compared with cameras using structured light such as the Kinect version 1.

5.2. Evaluation Scheme

As mentioned in previous sections, there are different tests that can be performed independently to provide a comparison. First, there are three planes for the angle estimation: transverse, sagittal and coronal. Second, we can individually consider the potential of the frame-based and segment-based indices. Besides, we also evaluate the use of only the lower body (i.e., the angles estimated from $\mathbf{v}_{BL}$ and $\mathbf{v}_{BR}$) since some recent studies (e.g., [9,11]) focused only on this part. Therefore, we obtained 12 evaluations for the 12 independent processing flows.
In order to provide an overall assessment of our method, we calculated an AUC over the entire dataset by focusing on a specific application: abnormal gait detection. This evaluation assumes that the gait symmetry index belongs to a specific value range for a normal walk, while it lies beyond this range for an abnormal one. The leave-one-out cross-validation scheme was also employed to evaluate the distinguishing power of within-subject indices. Concretely, an AUC was calculated for each subject and the average AUC was finally used as the assessment result. This computation also allowed us to compare our method with related studies where a training set is required.

5.3. Experimental Results

The AUCs obtained from the assessment of our symmetry index estimated on the whole body are shown in Table 1. The segment-based index of Equation (7) was clearly more efficient for assessing gait symmetry than the frame-based one of Equation (6), where the temporal factor was embedded after performing the index estimation for each frame. Table 1 also shows that the sagittal-based angle provided better results than the transverse and coronal ones.
These properties are confirmed in Table 2, which presents the AUCs obtained using only the lower body. This table also shows that when the upper body was removed from the index estimation, the ability to assess gait symmetry was reduced, with decreased AUCs at the corresponding positions in the table. Therefore, the whole body should be considered when dealing with problems related to a gait symmetry index.
In order to provide a comparison with related studies, we reimplemented some methods as follows. The first selection was the HMM proposed in [9], which provides a gait normality index for each input sequence of skeletons. The second was the MGCM introduced in [11], which computes the longitudinal difference between the left and right legs of an average aligned depth map as an indicator of gait asymmetry. A common property of these two methods is the removal of the upper body in the feature extraction stage: the model in [9] considered only 3D joints belonging to the lower limbs while the one in [11] performed the estimation on a predefined leg zone. The third implemented approach was [12], where the researchers described the posture symmetry in both the depth map and the silhouette. These two features were independently extracted to give two scores, and a combination of these scores was also proposed to improve the final gait index.
The AUCs measured according to the indices resulting from these three studies, together with our best one (i.e., using the sagittal plane), are presented in Table 3. Each gait index indicating overall gait symmetry was obtained by: a non-linear combination of per-cycle indices in [9], a direct consideration of the average gait cycle in [11], the per-segment mean index in [12], and our segment-based index described in Section 4.3.
Table 3 shows that the index given by our approach was better than the related studies since our indices of normal and abnormal walking gaits were more easily separable.

5.4. Combination of Gait Indices

In an attempt to improve the gait indices estimated according to the three planes, we performed weighted combinations that are expected to provide a better measure. The combined gait index estimated on a sequence of $n$ collections $v$ of accumulated vectors is
\[ I(v_{1..n}) = w_t I_t(v_{1..n}) + w_s I_s(v_{1..n}) + w_c I_c(v_{1..n}) \]
where the subscripts $t$, $s$ and $c$ respectively indicate the transverse, sagittal and coronal planes. We can also combine only two of the three measures by simply assigning a zero weight to the remaining operand.
Instead of using a grid search to determine appropriate weights, we estimated them directly from the gait samples of the training set in the leave-one-out cross-validation scheme. The weight corresponding to each plane-related index was calculated as the variance of that index over the training samples of normal gaits; variance was used here as a measure of the information provided by each index. The abnormal gait patterns were not employed in this stage because the combination might otherwise be biased and would not work well in practical situations where various abnormal gaits may occur. We considered only the segment-based gait index since it outperformed the frame-based one (see Table 1 and Table 2). We observed that the combination of $I_t$ and $I_s$ significantly enhanced the gait index while the others (including the combination of all three measures) resulted in the same or slightly lower AUCs (see Table 4).
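The variance-based weighting can be sketched as follows; the array layout (one row per normal-gait training sequence, one column per plane) is our assumption.

```python
import numpy as np

def variance_weights(normal_train):
    """Weights for the combined index: variance of each plane's index
    over the normal-gait training samples only (abnormal gaits excluded)."""
    return np.var(np.asarray(normal_train, float), axis=0)

def combined_index(plane_indices, weights):
    """Weighted sum w_t*I_t + w_s*I_s + w_c*I_c of the transverse,
    sagittal and coronal indices; zero a weight to drop that plane."""
    return float(np.dot(weights, plane_indices))
```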

5.5. Discussion

According to the experimental results presented in the previous section, the following observations might be useful for further studies extending the proposed approach or dealing with the problem of gait index estimation.
First, the proposed method is suitable for noisy depth maps, as demonstrated by the experiments. Recall that no enhancement step (e.g., filtering) was performed to improve the depth map quality. The number of noisy (unreliable) points is much smaller than the number of informative ones; by grouping the points into four large regions and then performing the accumulation, the effect of noise can be significantly reduced. Another possible factor that could deform the body depth map is the subject's clothing. Therefore, comfortable but tight-fitting clothes should be worn during the examination.
Second, the temporal integration of depth features should be performed before applying Equation (5) to measure the symmetry. In other words, the use of segment-based index according to Equation (7) is preferred to the frame-based one. This means that surface normals of the mean posture over several gait cycles reveal important information about the gait symmetry. In summary, our index can be considered as an average measure of symmetry indices of consecutive gait cycles given a long sequence of depth maps.
Third, the upper body contributes to the gait analysis. When this body part does not participate in any stage, the symmetry description might lose useful information, and the resulting index is thus less efficient.
Finally, the sagittal gait index consistently provided the best results on our dataset and played the main role in the improvement of the combined indices. This result is supported by the literature, where kinematic gait measurements in the sagittal plane provide the most appropriate information [17]. In addition, the time series of this index may be appropriate for further investigations in gait analysis such as gait cycle segmentation or gait event demarcation. Nevertheless, measurements in the transverse and coronal planes might provide additional information important in clinical gait analysis. Again, let us note that employing these planes requires an appropriate setup of the camera coordinate system, as illustrated in Figure 1, in order to simplify subsequent calculations.
Although the input of our method is a sequence of depth maps, the geometric feature is directly extracted from the corresponding 3D point clouds. Therefore, the proposed approach remains applicable when such cloud data are given directly. However, estimating a normal vector for each 3D point might be somewhat time-consuming (especially with high-density clouds) since it depends on the determination of each point's neighborhood. For example, when assigning a radius of 5 centimeters for the neighborhood search, the processing time was 3 times longer than with 3 centimeters while the system accuracy was almost unchanged. When the depth map is available, the pixel neighborhood is useful for this task since we know the correspondence between a pixel and its reprojected 3D point. In summary, the computational cost may vary depending on the input data, but this is not a significant problem since several algorithms support the fast calculation of normal vectors (e.g., [18,19]).

6. Conclusions

An original approach for estimating a gait symmetry index from a sequence of depth maps has been presented in this paper. By employing a 3D reprojection, a geometric feature is proposed to extract useful posture characteristics according to the region-based accumulation of 3D normal vectors. Since surface normals are independent of the subject's position on the treadmill, no registration of depth maps is required in our methodology. Two schemes of embedding the temporal factor are also considered and evaluated to give a reasonable recommendation for further work. The potential of surface normals for gait asymmetry assessment has been demonstrated by experiments on a gait dataset of 97,200 depth maps, where the obtained results outperformed related studies in the task of distinguishing normal and abnormal walking gaits. This vision-based approach is an alternative method for gait symmetry measurement besides conventional motion systems that employ wearable sensors. The practical advantages of our method are the use of low-price devices and its easy setup without run-time calibration (as required by approaches using multiple input signals) or accurate sensor placement. Our system may work as a patient screening tool providing relevant gait information during a treatment or recovery after surgery. In further work, possible extensions of the body surface normal features can be investigated to improve the gait symmetry index estimation. Besides, the proposed index will be measured on a larger dataset and the results will be collected over multiple sessions/days to assess their stability as well as to model their change during the recovery of patients. Finally, investigating partial surfaces according to body parts to provide limb-level motions could be an interesting extension.

Author Contributions

Conceptualization, T.-N.N. and J.M.; Methodology, T.-N.N. and J.M.; Software, T.-N.N.; Validation, T.-N.N. and J.M.; Formal Analysis, T.-N.N.; Investigation, T.-N.N.; Resources, T.-N.N., H.-H.H. and J.M.; Data Curation, T.-N.N.; Writing—Original Draft Preparation, T.-N.N.; Writing—Review & Editing, T.-N.N. and J.M.; Visualization, T.-N.N.; Supervision, J.M.; Project Administration, J.M.; Funding Acquisition, J.M.

Funding

This research was funded by Natural Sciences and Engineering Research Council (NSERC) grant number RGPIN-2015-05671.

Acknowledgments

The authors would like to thank the NSERC (Natural Sciences and Engineering Research Council of Canada) for supporting this work (Discovery Grant RGPIN-2015-05671). We also thank the reviewers for their comments/suggestions that significantly improved the paper.

Conflicts of Interest

The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, and in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
AUC   Area Under Curve
ROC   Receiver Operating Characteristic
HMM   Hidden Markov Model
MGCM  Mean Gait Cycle Model
MDI   Mean Depth Image

References

1. Böhm, H.; Döderlein, L. Gait asymmetries in children with cerebral palsy: Do they deteriorate with running? Gait Posture 2012, 35, 322–327.
2. Patterson, K.K.; Gage, W.H.; Brooks, D.; Black, S.E.; McIlroy, W.E. Evaluation of gait symmetry after stroke: A comparison of current methods and recommendations for standardization. Gait Posture 2010, 31, 241–246.
3. James, P.; Nicol, A.; Hamblen, D. A comparison of gait symmetry and hip movements in the assessment of patients with monarticular hip arthritis. Clin. Biomech. 1994, 9, 162–166.
4. Gurney, B. Leg length discrepancy. Gait Posture 2002, 15, 195–206.
5. Trojaniello, D.; Ravaschio, A.; Hausdorff, J.M.; Cereatti, A. Comparative assessment of different methods for the estimation of gait temporal parameters using a single inertial sensor: Application to elderly, post-stroke, Parkinson’s disease and Huntington’s disease subjects. Gait Posture 2015, 42, 310–316.
6. Loper, M.; Mahmood, N.; Black, M.J. MoSh: Motion and Shape Capture from Sparse Markers. ACM Trans. Graph. 2014, 33, 220.
7. Shotton, J.; Fitzgibbon, A.; Cook, M.; Sharp, T.; Finocchio, M.; Moore, R.; Kipman, A.; Blake, A. Real-time human pose recognition in parts from single depth images. In Proceedings of CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 1297–1304.
8. Shotton, J.; Girshick, R.; Fitzgibbon, A.; Sharp, T.; Cook, M.; Finocchio, M.; Moore, R.; Kohli, P.; Criminisi, A.; Kipman, A.; et al. Efficient Human Pose Estimation from Single Depth Images. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2821–2840.
9. Nguyen, T.N.; Huynh, H.H.; Meunier, J. Skeleton-Based Abnormal Gait Detection. Sensors 2016, 16, 1792.
10. Du, Y.; Wang, W.; Wang, L. Hierarchical recurrent neural network for skeleton based action recognition. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1110–1118.
11. Auvinet, E.; Multon, F.; Meunier, J. New Lower-Limb Gait Asymmetry Indices Based on a Depth Camera. Sensors 2015, 15, 4605–4623.
12. Nguyen, T.N.; Huynh, H.H.; Meunier, J. Assessment of gait normality using a depth camera and mirrors. In Proceedings of the 2018 IEEE EMBS International Conference on Biomedical Health Informatics (BHI), Las Vegas, NV, USA, 4–7 March 2018; pp. 37–41.
13. Rusu, R.B.; Cousins, S. 3D is here: Point Cloud Library (PCL). In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011.
14. Zhou, K.; Hou, Q.; Wang, R.; Guo, B. Real-time KD-tree Construction on Graphics Hardware. ACM Trans. Graph. 2008, 27, 126.
15. Nguyen, T.N.; Huynh, H.H.; Meunier, J. 3D Reconstruction With Time-of-Flight Depth Camera and Multiple Mirrors. IEEE Access 2018, 6, 38106–38114.
16. Nguyen, T.N.; Huynh, H.H.; Meunier, J. Human gait symmetry assessment using a depth camera and mirrors. Comput. Biol. Med. 2018, 101, 174–183.
17. McGinley, J.L.; Baker, R.; Wolfe, R.; Morris, M.E. The reliability of three-dimensional kinematic gait measurements: A systematic review. Gait Posture 2009, 29, 360–369.
18. Dey, T.K.; Li, G.; Sun, J. Normal estimation for point clouds: A comparison study for a Voronoi based method. In Proceedings of the Eurographics/IEEE VGTC Symposium Point-Based Graphics, Stony Brook, NY, USA, 21–22 June 2005; pp. 39–46.
19. Zhao, H.; Yuan, D.; Zhu, H.; Yin, J. 3-D point cloud normal estimation based on fitting algebraic spheres. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 2589–2592.
Figure 1. Our scene configuration and the corresponding 3D camera coordinate system.
Figure 2. The normal cloud estimated from a depth map via 3D reprojection. As a postprocessing of the determined orientations, all normal vectors are made to point toward the camera (i.e., non-positive z-coordinate).
Figure 3. The separation applied to each cloud of normal vectors and the corresponding result of region-based accumulation. The frontal, side and top views are provided for ease of understanding of Section 3.5.
Figure 4. (Top) 3D representation of a body and the three anatomical planes together with the camera coordinate system. (Bottom) Angle estimation based on the transverse, sagittal and coronal planes. If the coordinate system does not satisfy the requirement described in Figure 1, a rigid transformation is necessary to simplify the angle estimation.
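The angle estimation illustrated in Figure 4 can be expressed compactly: projecting a normal vector onto each anatomical plane and taking an arctangent yields one angle per plane. The sketch below is illustrative only; the axis-to-plane mapping (transverse = xz-plane, sagittal = yz-plane, coronal = xy-plane) is an assumption about the camera coordinate system of Figure 1, not a reproduction of the paper's equations.

```python
import numpy as np

def plane_angles(n):
    """Angles of a normal vector n = (x, y, z) measured in the three
    anatomical planes.  Assumed axis conventions: transverse = xz-plane,
    sagittal = yz-plane, coronal = xy-plane."""
    x, y, z = n
    return {
        "transverse": np.arctan2(z, x),  # projection onto the xz-plane
        "sagittal":   np.arctan2(z, y),  # projection onto the yz-plane
        "coronal":    np.arctan2(y, x),  # projection onto the xy-plane
    }
```

With this convention, a normal pointing straight at the camera, (0, 0, -1), gives a transverse-plane angle of -π/2; a rigid transformation would first be applied if the actual coordinate system differed from Figure 1.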
Table 1. AUCs obtained from experiments considering the whole body. The first and second highest results are emphasized in bold and underlined, respectively.
Test Data       | Index Estimation | Transverse Plane | Sagittal Plane | Coronal Plane
All 9 subjects  | frame-based      | 0.816            | 0.870          | 0.716
                | segment-based    | 0.895            | 0.966          | 0.832
Leave-one-out   | frame-based      | 0.819            | 0.931          | 0.722
                | segment-based    | 0.903            | 0.958          | 0.819
Table 2. AUCs obtained from experiments considering only the lower body. The first and second highest results are emphasized in bold and underlined, respectively.
Test Data       | Index Estimation | Transverse Plane | Sagittal Plane | Coronal Plane
All 9 subjects  | frame-based      | 0.707            | 0.727          | 0.514
                | segment-based    | 0.770            | 0.949          | 0.785
Leave-one-out   | frame-based      | 0.722            | 0.833          | 0.500
                | segment-based    | 0.806            | 0.958          | 0.819
Table 3. The AUCs of related studies that employ different input data types. The notations † and ‡ indicate the consideration of only lower body and the use of data augmentation, respectively. The best results are highlighted in bold.
Method                   | Input       | All 9 Subjects | Leave-One-Out
HMM [9] †                | skeleton    | -              | 0.778
MGCM [11] †              | depth map   | 0.830          | 0.875
HMM [12] ‡               | depth map   | -              | 0.569
Correlation [12]         | silhouette  | -              | 0.903
HMM + Correlation [12] ‡ | combination | -              | 0.917
Ours (lower body) †      | depth map   | 0.949          | 0.958
Ours (full body)         | depth map   | 0.966          | 0.958
Table 4. The AUCs estimated from each single gait index and possible combinations. The notations are similar to Equation (8) and the best results corresponding to each row are highlighted in bold.
           | I_t   | I_s   | I_c   | I_t & I_s | I_s & I_c | I_t & I_c | I_t & I_s & I_c
Full body  | 0.903 | 0.958 | 0.819 | 0.986     | 0.972     | 0.903     | 0.986
Lower body | 0.806 | 0.958 | 0.819 | 0.958     | 0.958     | 0.792     | 0.958
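Each single index and each combination in Table 4 is scored by the area under the ROC curve. As a self-contained illustration of how such an AUC can be computed (the paper's exact index-combination rule, Equation (8), is not reproduced here), the Mann-Whitney rank formulation below works directly on index values from normal and abnormal gaits.

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that an abnormal (positive) sample scores higher
    than a normal (negative) one, with ties counted as 1/2."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)
```

A combined index such as I_t & I_s would then be scored by feeding the fused per-plane values (e.g., their sum, as a placeholder fusion) into the same function.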
