Article

Human Motion Prediction via Dual-Attention and Multi-Granularity Temporal Convolutional Networks

Biaozhang Huang and Xinde Li *
1 Key Laboratory of Measurement and Control of CSE, Ministry of Education, School of Automation, Southeast University, Nanjing 210002, China
2 Nanjing Center for Applied Mathematics, Nanjing 211135, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(12), 5653; https://doi.org/10.3390/s23125653
Submission received: 17 May 2023 / Revised: 12 June 2023 / Accepted: 14 June 2023 / Published: 16 June 2023

Abstract

Intelligent devices, which significantly improve the quality of life and work efficiency, are now widely integrated into people’s daily lives and work. A precise understanding and analysis of human motion is essential for achieving harmonious coexistence and efficient interaction between intelligent devices and humans. However, existing human motion prediction methods often fail to fully exploit the dynamic spatial correlations and temporal dependencies inherent in motion sequence data, which leads to unsatisfactory prediction results. To address this issue, we proposed a novel human motion prediction method that utilizes dual-attention and multi-granularity temporal convolutional networks (DA-MgTCNs). Firstly, we designed a unique dual-attention (DA) model that combines joint attention and channel attention to extract spatial features from both joint and 3D coordinate dimensions. Next, we designed a multi-granularity temporal convolutional networks (MgTCNs) model with varying receptive fields to flexibly capture complex temporal dependencies. Finally, the experimental results from two benchmark datasets, Human3.6M and CMU-Mocap, demonstrated that our proposed method significantly outperformed other methods in both short-term and long-term prediction, thereby verifying the effectiveness of our algorithm.

1. Introduction

With the rapid development of artificial intelligence technology, an increasing number of intelligent devices are being applied in industrial production and daily human life. Human motion prediction, a key technology for enhancing device intelligence, aims to capture the intrinsic temporal evolution within historical human motion sequences to generate predictions for future motion. Human motion prediction has been widely applied in fields such as autonomous driving [1], human–computer interaction [2,3], human emotion recognition [4], and human behavior analysis [5,6,7]. However, due to the high dimensionality, joint spatial collaboration, hierarchical human structure, and strong temporality characteristics of human motion, capturing temporal dynamic information and spatial dependency features for precise human motion prediction remains a challenging research hotspot.
Human motion prediction is a typical task in the computer vision field. Traditional human motion prediction algorithms, such as hidden Markov models (HMMs) [8], Gaussian process dynamic models (GPDMs) [9], and restricted Boltzmann machines [10], as shown in Figure 1, often require extensive prior knowledge and assumptions, making it challenging for them to capture the complexity and diversity of human motion and thereby limiting their practical impact.
As more and more large-scale motion capture datasets become available, an increasing number of deep learning models have been designed and have demonstrated excellent performance, such as convolutional neural networks (CNNs) [11], graph neural networks (GNNs) [12,13,14], temporal modules such as recurrent neural networks (RNNs) [15,16,17,18,19,20], temporal convolutional networks (TCNs) [21,22], and attention mechanisms [23,24]. Although these deep learning models have exhibited effectiveness in human motion prediction, there are still limitations in two aspects:
(a) Spatial relationship modeling: In most previous studies, spatial joint graphs were designed based on the human physical structure, typically utilizing graph neural networks (GNNs) [25] to capture spatial correlations. However, GNNs are limited by the local and linear aggregation of node features and may not effectively capture the global and nonlinear dynamics of human motion. The introduction of adaptive graphs aimed to overcome these limitations, but they still have drawbacks, such as overlooking the correlation between critical 3D coordinate information, which results in a loss of relevant internal data feature information.
(b) Simultaneously capturing complex short-term and long-term temporal dependencies: Most research has employed temporal learning components to capture temporal correlations. RNNs are a classic approach, but they face gradient vanishing or exploding issues when learning long time sequences. More advanced models such as LSTM and GRU mitigate the issue of vanishing gradients to a certain degree, but pose challenges in training and lack a parallel computation capability. Self-attention mechanisms [26,27] attempt to capture temporal dependencies but still struggle to effectively model long-range dependencies. TCNs [22,28] capture long-term dependencies through fixed kernel sizes, adopting an independent module framework that can only capture single dependency relationships from a temporal scale perspective. Fixed receptive fields limit their ability to adaptively learn multi-scale temporal dependencies.
To tackle these challenges in human motion prediction, a novel method based on dual attention and multi-granularity temporal convolutional networks (DA-MgTCNs) was proposed. This approach effectively captures spatial correlations and multi-scale temporal dependencies. Specifically, joint attention and channel attention were combined to design a dual-attention structure for extracting spatial features and capturing information on spatial correlations between and within human joints. TCNs were employed to model long-term temporal dependencies, and the concept of multi-granularity was introduced into the TCN to further enhance performance. The multi-granularity TCN (MgTCN) employed convolution kernels of varying scales in its convolution operations across multiple branches, enabling it to effectively capture multi-scale temporal dependencies in a flexible manner.
The MgTCN module combined multi-granularity causal convolutions, dilated convolutions, and residual connections. Each branch of the module was composed of multiple causal convolution layers with varying dilation factors. This design enabled the adaptive selection of different receptive fields based on varying motion styles and joint trajectory features for short-term and long-term human motion prediction.
The main contributions of this paper are as follows:
(1) We designed a dual-attention model for extracting inter-joint and intra-joint spatial features, more effectively mining spatial relationships between joints and different motion styles, providing richer information sources for motion prediction.
(2) We introduced a multi-granularity temporal convolutional network (MgTCN) that employed multi-channel TCNs with different receptive fields for learning, thus achieving discriminative fusion at different time granularities, flexibly capturing complex short-term and long-term temporal dependencies, and thereby further improving the model’s performance.
(3) We conducted extensive experiments on the Human3.6M and CMU-MoCap datasets, demonstrating that our method outperformed most state-of-the-art approaches in short-term and long-term prediction, verifying the effectiveness of the proposed algorithm.
The remainder of this paper is organized as follows: Section 2 reviews related work. Section 3 details the proposed methodology. In Section 4, we describe experiments conducted on two large-scale datasets, comparing the performance of the proposed method with baselines. Section 5 provides a summary and conclusion, as well as a discussion of future work.

2. Related Work

In this section, we review the literature relevant to our dual-attention multi-granularity temporal convolutional networks (DA-MgTCNs) model, focusing on existing methods for human motion prediction, temporal convolutional networks (TCNs), multi-granularity (Mg) convolutions, and attention mechanisms.

2.1. Human Motion Prediction

The development of human motion prediction has evolved through several phases. Traditional methods primarily rely on statistical approaches, such as hidden Markov models (HMMs) [8], Gaussian processes (GPs) [9], and restricted Boltzmann machines [10], to learn underlying patterns and structures from data in order to predict future human motion [15]. Although these methods have achieved some success in certain scenarios, they still face challenges in capturing complex spatial and temporal dependencies, computational efficiency, and scalability.
With the rapid development of deep learning, researchers have started to apply it to human motion prediction tasks. Recurrent neural networks (RNNs) have been widely adopted for the temporal modeling of human motion. Some representative works include Fragkiadaki et al. [15]'s RNN model, Martinez et al. [17]'s RNN-based joint angle prediction model, and Li et al. [11]'s convolutional sequence-to-sequence model. Although RNNs have achieved high accuracy in human motion prediction, their recursive computation over the time series can lead to error accumulation, causing predictions to eventually converge to a static average pose.
To address this issue, researchers have improved RNNs. Chiu et al. [29] used LSTM units to model the underlying structure of human motion hierarchically, but this method did not adequately capture the spatial structure of the human body. Martinez et al. [17] introduced a residual structure using GRUs to model the velocity of human motion sequences, focusing on short-term temporal modeling but ignoring long-term dependencies and spatial structure. Jain et al. [16] combined LSTM and fully connected (FC) layers in a structural RNN model to encode high-level spatio-temporal structures in human motion sequences. Guo et al. [30] employed FC layers and GRUs to model local structures and capture long-term temporal dependencies, but they did not account for the interactions between different limbs. These RNN-based models faced challenges in capturing long-term dependencies and error accumulation.

2.2. Temporal Convolutional Networks

The temporal convolutional network (TCN) was developed to address these issues. The fundamental TCN architecture includes causal convolution, dilated convolution, and residual blocks [31]. Compared to RNNs and LSTMs, TCNs offer the advantages of parallel computation and larger receptive fields. Recent research has shown that 1D convolution can effectively represent time-series data [31,32,33], achieving significant success in various sequence learning tasks, such as machine translation [34], speech synthesis [35], video analysis [36], and semantic segmentation [37]. The contextual window of the network can easily be enlarged by stacking multiple one-dimensional convolutional layers, and the resulting hierarchical feature representations of the input sequence enable the efficient modeling of long-term temporal patterns [35,36].

2.3. Multi-Granularity Convolution

A single-scale TCN might not be sufficient to capture the multi-scale temporal correlations in motion sequences in human motion prediction tasks. To capture complicated short-term and long-term temporal connections, researchers have developed multi-granularity convolution, which fuses information across multiple scales [38]. By adjusting the convolution kernel size, CNN-based deep learning models can readily gather feature information at various granularities, enabling more accurate decision making by combining and evaluating data from various scales. Recent achievements in the field of computer vision have fully exploited multi-granularity information fusion based on CNNs [39].

2.4. Attention Mechanisms

Additionally, understanding spatial relationships is essential for predicting human motion. To capture these relationships, attention mechanisms have been incorporated into prediction models to extract the spatial correlations of joints. Despite the widespread use of attention mechanisms in natural language processing [40,41] and image processing [42], there is still untapped potential in the area of human motion prediction. Tang et al. [43] employed an attention module for information extraction along the temporal dimension, and Cai et al. [44] used one to model the global spatial dependencies among joint trajectories. However, we argue that the intrinsic three-dimensional coordinate information of the human body is also crucial for spatial representation.

3. Approach

3.1. Problem Formulation

Our goal was to forecast future human posture sequences based on previous 3D human pose sequences. Three-dimensional joint positions were employed as the pose representation to prevent the ambiguity produced by the joint angle representation. A graphical representation of the human pose was created by analyzing the properties of human joint positions over time. Let $x_{1:T} = [x_1, x_2, \ldots, x_T]$ represent the set of joint positions for $T$ time steps, where $x_i \in \mathbb{R}^{J \times C}$, $T$ specifies the number of input time steps, $J$ the number of human pose joints, and $C = 3$ the feature dimension $(x, y, z)$. Our goal was to anticipate the pose's $N$ future steps $x_{T+1:T+N} = [x_{T+1}, x_{T+2}, \ldots, x_{T+N}]$. We began by copying the latest pose $x_T$ $N$ times to build a time series of length $T+N$, as described in the literature [25,45]. As a result, the goal became generating, from the padded input sequence $x_{1:T+N} = [x_1, x_2, \ldots, x_T; x_T, \ldots, x_T]$, the output sequence $\hat{x}_{1:T+N}$ of length $T+N$, where each $x_i$ contains the 3D coordinates of the $J$ body joints.
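As a concrete illustration of the padding strategy described above, the following PyTorch-style sketch repeats the last observed pose $N$ times to form the length-$T+N$ input sequence; the function and variable names are ours, not taken from any released code.

```python
import torch

def pad_with_last_pose(history: torch.Tensor, n_future: int) -> torch.Tensor:
    """Repeat the last observed pose n_future times, as described above.

    history: tensor of shape (T, J, C) holding the observed 3D joint positions.
    Returns a tensor of shape (T + n_future, J, C).
    """
    last = history[-1:].expand(n_future, -1, -1)  # x_T copied N times -> (N, J, C)
    return torch.cat([history, last], dim=0)      # (T + N, J, C)

# Example: T = 25 observed frames, J = 17 joints, C = 3 coordinates, N = 25 future frames
x_hist = torch.randn(25, 17, 3)
x_in = pad_with_last_pose(x_hist, n_future=25)    # shape (50, 17, 3)
```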

3.2. Overview

We employed a residual depth network consisting of DA-MgTCN modules to capture the global spatial correlation and multi-scale temporal dependence of human motion. Each DA-MgTCN module consisted of a two-branch attention module (DA) and a multi-granularity TCN module (MgTCN) connected in series to capture the temporal dependencies of historical motion sequences. The DA module was used to extract spatially significant information from the joint-level and channel-level dimensions. The MgTCN module combined multi-granularity convolution with the TCN to improve prediction quality and adapt to varied forms of human motion and multi-scale temporal dependencies. The complete model architecture was trained end-to-end, with global and local residual connections improving the deep neural network's performance. Each DA-MgTCN component is described in detail below:
$Y_{\text{DA-MgTCN}} = \mathrm{MgTCN}(\mathrm{DA}(X))$ (1)
Figure 2 shows a detailed description of the module. The specifics of the DA and MgTCN modules are provided below.

3.3. Dual Attention (DA)

The self-attention mechanism is regarded as an efficient method for modeling long-range dependencies. Tang et al. [43] and Cai et al. [44] used attention modules for information extraction along the temporal dimension and for modeling global spatial dependencies, respectively. However, we observed that the 3D coordinate information of human joints is also crucial for spatial representations.
As a result, we proposed a dual-attention module that took into account both joint-level attention and channel-level attention in order to extract joint-related and channel-related information for spatial correlation. The DA module is depicted in the lower left corner of Figure 2 and is described in detail below.
Given a human motion feature X, a linear transformation was first performed using the weight matrices $W_q$, $W_k$, and $W_v$ to obtain the query Q, the key K, and the value V. The two branches shared the same embeddings Q, K, and V, which were reshaped into $J \times CT$ (for Q, K, and V in the joint branch) and $C \times JT$ (for Q, K, and V in the channel branch). Joint attention and channel attention were then used to simultaneously mine the dependencies between joints in the spatial and channel dimensions. This was computed as follows:
$Q = W_q X, \quad K = W_k X, \quad V = W_v X$ (2)
$F_J = \mathrm{Attention}\big(Q^{(J)}, K^{(J)}, V^{(J)}\big) = \mathrm{softmax}\!\left(\frac{Q^{(J)} \big(K^{(J)}\big)^{\top}}{\sqrt{d_k}}\right) V^{(J)}$ (3)
$F_C = \mathrm{Attention}\big(Q^{(C)}, K^{(C)}, V^{(C)}\big) = \mathrm{softmax}\!\left(\frac{Q^{(C)} \big(K^{(C)}\big)^{\top}}{\sqrt{d_k}}\right) V^{(C)}$ (4)
where $Q^{(J)}$, $K^{(J)}$, $V^{(J)}$ and $Q^{(C)}$, $K^{(C)}$, $V^{(C)}$ denote the reshaped versions of the $Q$, $K$, and $V$ matrices; $W_q$, $W_k$, and $W_v$ are trainable weights; and $d_k$ is the dimension of $K$. The superscripts $(J)$ and $(C)$ denote the joint-level and channel-level branches, respectively. $F_J$ and $F_C$ are the output features of the joint-level and channel-level branches. After obtaining the joint-level and channel-level features, we summed them element-wise to obtain the spatially attended feature representation $\hat{X}$ passed to the MgTCN, as shown in Equation (5):
$\hat{X} = F_J \oplus F_C$ (5)
where $\oplus$ denotes element-wise addition. After obtaining the spatially attended feature representation $\hat{X}$ of the motion data, we could feed this representation into the subsequent layers of the network. This process helped to capture joint-level and channel-level contextual information, which is crucial for effective motion prediction modeling.
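To make the two branches concrete, the following PyTorch sketch implements one possible reading of Equations (2)-(5). The input layout $(B, J, C, T)$, the choice of applying the shared $W_q$, $W_k$, $W_v$ along the time axis, and all layer sizes are our assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    """Sketch of the dual-attention (DA) block. Input x: (B, J, C, T).
    Shared linear maps produce Q, K, V; the joint branch attends over a
    (B, J, C*T) view and the channel branch over a (B, C, J*T) view, and the
    two outputs are summed element-wise as in Eq. (5)."""

    def __init__(self, n_frames: int):
        super().__init__()
        # Shared embeddings W_q, W_k, W_v, applied along the time dimension
        # (an assumption; the paper does not specify the exact parameterization).
        self.w_q = nn.Linear(n_frames, n_frames, bias=False)
        self.w_k = nn.Linear(n_frames, n_frames, bias=False)
        self.w_v = nn.Linear(n_frames, n_frames, bias=False)

    @staticmethod
    def _attend(q, k, v):
        d_k = k.shape[-1]
        scores = torch.softmax(q @ k.transpose(-2, -1) / d_k ** 0.5, dim=-1)
        return scores @ v

    def forward(self, x):                                 # x: (B, J, C, T)
        b, j, c, t = x.shape
        q, k, v = self.w_q(x), self.w_k(x), self.w_v(x)   # each (B, J, C, T)

        # Joint-level branch: tokens are joints, features have length C*T.
        f_j = self._attend(q.reshape(b, j, c * t),
                           k.reshape(b, j, c * t),
                           v.reshape(b, j, c * t)).reshape(b, j, c, t)

        # Channel-level branch: tokens are coordinate channels, features J*T.
        qc = q.permute(0, 2, 1, 3).reshape(b, c, j * t)
        kc = k.permute(0, 2, 1, 3).reshape(b, c, j * t)
        vc = v.permute(0, 2, 1, 3).reshape(b, c, j * t)
        f_c = self._attend(qc, kc, vc).reshape(b, c, j, t).permute(0, 2, 1, 3)

        return f_j + f_c                                  # element-wise sum, Eq. (5)
```

For instance, `DualAttention(n_frames=50)` applied to a tensor of shape `(16, 17, 3, 50)` returns a tensor of the same shape, which the subsequent MgTCN module can then consume.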

3.4. Multi-Granularity TCN (MgTCN)

To learn human motion temporal features efficiently, we extended the concept of temporal multi-granularity convolutional kernels to TCN networks and proposed the MgTCN for extracting temporal features at multiple scales for different motion styles. The MgTCN module is shown in the lower right corner of Figure 2 and consisted of multi-granularity causal convolution, dilated convolution, and residual blocks. The MgTCN contained three causal convolution channels, which used kernels of granularity (size) 2, 3, and 5, respectively, for feature extraction. Each channel consisted of three residual blocks connected in series. These blocks enlarged the receptive field with dilation rates of [1, 2, 4] and used ReLU as the activation function. In addition, a dropout unit was included in each residual block for regularization.
Causal convolution: In standard 1D convolution, the output at the $t$th time step is computed from a window of $k$ elements of the previous layer centered around time step $t$, which is not appropriate for the human motion prediction task [31]. The goal of this research was to find the best function for generating human-like future poses based on previous motion capture sequences. As a result, the predicted pose at time step $t$ could be derived only from representations of previously observed frames and not from later poses. The causal convolution in the MgTCN ensured that only past data were used as the model input, preventing future information leakage. This was easily accomplished by shifting the standard convolution output by a few time steps, as shown in the equation below:
$y[t] = \sum_{i=0}^{k-1} x[t-i]\, w[i]$ (6)
where $y[t]$ is the output, $x[t-i]$ are the inputs, $w[i]$ is the convolution weight at offset $i$, and $k$ is the kernel size.
Dilated convolution: Causal convolution alone captures only a limited history. Increasing the network's depth enlarges the captured history linearly, but the corresponding growth in the number of layers and parameters makes network training more difficult. Oord et al. [35] therefore suggested using dilated convolution to extend the receptive field of causal convolutional networks and better capture historical information.
Dilated convolution is implemented by introducing a dilation factor that spaces out the positions sampled by the convolutional kernel. Compared to traditional deep convolutional networks, dilated convolution obtains a larger receptive field without significantly increasing the number of parameters, thus capturing information over a longer time range. When dealing with human motion prediction tasks, this approach can attend to both local details and motion trends over a longer time span.
Dilated causal convolution can be expressed by the following equation. For a filter $f: \{0, \ldots, k-1\} \rightarrow \mathbb{R}$ and a 1D time-series input $x \in \mathbb{R}^T$, the dilated convolution operation $F$ on element $s$ of the sequence is computed as:
$F_{k,d}(x)(s) = \sum_{i=0}^{k-1} f(i) \cdot x(s - d \cdot i)$ (7)
where $d$ is the dilation factor and $k$ is the size of the filter; the convolution kernel slides only over the current position and positions to its left (i.e., past information).
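In practice, the left-only padding that realizes Equation (7) can be written as a thin wrapper around a standard 1D convolution. The sketch below is a minimal illustration with equal input and output channel counts (an assumption, not the paper's exact layer): it pads the time axis by $(k-1)\cdot d$ on the left so that the output at time $t$ never depends on later frames.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """Dilated causal 1D convolution: the input is padded on the left only, by
    (kernel_size - 1) * dilation steps, so the output at time t never sees
    frames later than t (cf. Eq. (7))."""

    def __init__(self, channels: int, kernel_size: int, dilation: int = 1):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                        # x: (B, channels, T)
        x = F.pad(x, (self.left_pad, 0))         # pad the time axis on the left only
        return self.conv(x)                      # output: (B, channels, T)
```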
The receptive field R for a three-layer convolution is calculated as:
$R = 1 + (k-1) \times d_1 + (k-1) \times d_2 + (k-1) \times d_3$ (8)
where $d_1$, $d_2$, and $d_3$ are the dilation factors of the three convolution layers.
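For example, with a kernel size of $k = 3$ and dilation factors $(d_1, d_2, d_3) = (1, 2, 4)$, Equation (8) gives $R = 1 + 2 \times 1 + 2 \times 2 + 2 \times 4 = 15$ frames; the corresponding three-layer branches with $k = 2$ and $k = 5$ reach $R = 8$ and $R = 29$ frames, respectively, which is what allows the different branches to cover short and long histories.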
Our TCN was calculated as:
$\mathrm{TCN}_{k,d=1} = x + F_{k,d=1}(x)$
$\mathrm{TCN}_{k,d=2} = \mathrm{TCN}_{k,d=1} + F_{k,d=2}(\mathrm{TCN}_{k,d=1})$
$\mathrm{TCN}_{k} = \mathrm{TCN}_{k,d=4} = \mathrm{TCN}_{k,d=2} + F_{k,d=4}(\mathrm{TCN}_{k,d=2}), \quad k = 2, 3, 5$ (9)
Figure 3 shows an example of a three-layer dilated causal convolutional network (TCN) with dilation factors $d = 1, 2, 4$ and a filter size $k = 3$.
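The residual stacking in Equation (9) can be sketched in PyTorch as follows, reusing the CausalConv1d wrapper from the previous sketch; treating ReLU and dropout as applied inside each residual block is our reading of the description above rather than a verified implementation detail.

```python
import torch.nn as nn
import torch.nn.functional as F

class TCNBranch(nn.Module):
    """One residual TCN branch of the MgTCN, following Eq. (9): three dilated
    causal convolutions (d = 1, 2, 4) with a fixed kernel size k, each wrapped
    in a residual connection with ReLU and dropout. CausalConv1d is the sketch
    given above."""

    def __init__(self, channels: int, kernel_size: int, dropout: float = 0.1):
        super().__init__()
        self.blocks = nn.ModuleList(
            [CausalConv1d(channels, kernel_size, dilation=d) for d in (1, 2, 4)]
        )
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):                                  # x: (B, channels, T)
        out = x
        for conv in self.blocks:
            out = out + self.dropout(F.relu(conv(out)))    # residual block
        return out
```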
Multi-granularity convolution: To handle the complex, multi-action, multi-joint prediction of the human body, the MgTCN used convolutional kernel filters with different granularities to extract time-series features at different scales. This met the needs of short-term and long-term prediction, which require capturing temporal features of different lengths. The three time series were processed separately in the MgTCN, which made it possible to combine multiple temporal granularities during feature extraction and thus better represent spatio-temporal features over a wide range of scales. The remaining challenge was therefore to integrate the time-series features extracted at different granularities to obtain better results.
Using the spatial and temporal feature extraction steps described above, the MgTCN output provided temporal features at multiple granularities (short-term and long-term). We combined the outputs of the three TCN channels and used the equations below to integrate the multi-granularity information and make predictions:
$\mathrm{Fusion} = \mathrm{Cat}(w_1 \times \mathrm{TCN}_{k=2},\; w_2 \times \mathrm{TCN}_{k=3},\; w_3 \times \mathrm{TCN}_{k=5})$ (10)
$\mathrm{MgTCN} = g(\mathrm{Fusion})$ (11)
where $w_i$ is a learnable parameter that adjusts the weight of each granularity, $\mathrm{Cat}(\cdot)$ denotes concatenation, and $g(\cdot)$ is a mapping function that maps the fused features to the predicted values.
With this multi-granularity temporal convolution (MgTCN) method, we could both observe the general long-term trend of human motion and capture short-term changes and outliers. Modeling this temporal correlation strengthened the model's predictive power.
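One possible realization of Equations (10) and (11) is sketched below; the choice of a 1×1 convolution for the mapping $g(\cdot)$ and the initialization of the weights $w_i$ are our assumptions, and TCNBranch refers to the sketch above.

```python
import torch
import torch.nn as nn

class MgTCN(nn.Module):
    """Multi-granularity fusion following Eqs. (10)-(11): three TCN branches
    with kernel sizes 2, 3 and 5 are scaled by learnable weights w_1..w_3,
    concatenated along the channel axis, and mapped back to the original
    feature size by a 1x1 convolution standing in for g(.)."""

    def __init__(self, channels: int, dropout: float = 0.1):
        super().__init__()
        self.branches = nn.ModuleList(
            [TCNBranch(channels, k, dropout) for k in (2, 3, 5)]
        )
        self.weights = nn.Parameter(torch.ones(3))                    # w_1, w_2, w_3
        self.fuse = nn.Conv1d(3 * channels, channels, kernel_size=1)  # g(.)

    def forward(self, x):                                  # x: (B, channels, T)
        outs = [w * branch(x) for w, branch in zip(self.weights, self.branches)]
        return self.fuse(torch.cat(outs, dim=1))           # (B, channels, T)
```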

3.5. Global and Local Residual Connection

A residual connection skips one or more layers of the network by adding the input of a block to its output. This mitigates gradient vanishing by allowing the gradient to propagate directly from later layers to earlier ones, and it simplifies representation learning in deeper network structures.
Figure 2 illustrates the use of global residual connections between the encoder and decoder modules and local residual connections within each DA-MgTCN module to facilitate the training of the deeper structure and improve its performance. This helped the network capture complex data patterns in human motion prediction.

3.6. Loss Function

To train our DA-MgTCN model, we employed an end-to-end training strategy. The mean per joint position error (MPJPE) loss between the predicted motion sequence and the ground-truth motion sequence was used to measure the difference between the predicted outcomes and the true pose, defined as follows:
$L = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{T} \left\| \hat{Y}_{i,j} - Y_{i,j} \right\|_2^2$ (12)
where $N$ is the number of human joints, $T$ is the number of time steps in the future sequence, $\hat{Y}_{i,j} \in \mathbb{R}^C$ is the prediction for the $i$th joint at the $j$th time step, and $Y_{i,j}$ is the corresponding ground truth.
We optimized the loss function using the improved Adam method (AdamW [46]), which mitigated the overfitting problem by adding a weight decay term and could significantly improve the robustness of the model.
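For reference, a minimal PyTorch version of the MPJPE computation is given below. It uses the mean Euclidean distance per joint (the quantity reported in millimeters in Section 4); Equation (12) as printed uses the squared norm, so the exponent may need adjusting to reproduce the exact training loss.

```python
import torch

def mpjpe(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Mean per joint position error: average Euclidean distance (in the units
    of the inputs, e.g., millimeters) between predicted and ground-truth joints.
    pred, target: tensors of shape (B, T, J, 3)."""
    return torch.norm(pred - target, dim=-1).mean()

# Example with random tensors standing in for predictions and ground truth
pred = torch.randn(16, 25, 17, 3)
target = torch.randn(16, 25, 17, 3)
print(mpjpe(pred, target))
```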

4. Experiments

In this section, we evaluate the performance of the proposed method using two large-scale human motion capture benchmark datasets: Human3.6M and CMU-Mocap.

4.1. Datasets

Human3.6M [47] is the largest existing human motion analysis database, consisting of 7 actors (S1, S5, S6, S7, S8, S9, and S11) performing 15 actions: walking, eating, smoking, discussing, directions, greeting, phoning, posing, purchases, sitting, sitting down, taking photos, waiting, walking a dog, and walking together. Some actions are periodic, such as walking, while others are non-periodic, such as taking photos. Each pose includes 32 joints, represented in the form of an exponential map. By converting these into 3D coordinates and eliminating redundant joints, global rotation, and translation, the resulting skeleton retains 17 joints that provide sufficient human motion detail, including the key joints that locate the major body parts (e.g., shoulders, knees, and elbows). This strategy ensures that no crucial joints are overlooked. We downsampled the frame rate to 25 fps and used S5 and S11 for testing and validation, while the remaining five actors were used for training.
CMU-MoCap, available at http://mocap.cs.cmu.edu/, accessed on 13 June 2023, is a 3D human motion dataset released by Carnegie Mellon University that used 12 Vicon infrared MX-40 cameras to record the positions of 41 sensors attached to the human body, describing human motion. The dataset can be divided into six motion themes, including human interaction, interaction with environment, locomotion, physical activities and sports, situations and scenarios, and test motions.
These motion themes can be further subdivided into 23 sub-motion themes. The same data preprocessing method as in the literature [25] was adopted, simplifying each human skeleton and downsampling the motion to 25 frames per second. Furthermore, eight actions (basketball, basketball signals, directing traffic, jumping, running, soccer, walking, and washing the face) were selected from the dataset to evaluate the model's performance. No hyperparameters were tuned on this dataset, and we used only the training and testing sets, applying a splitting method consistent with common practice in the literature.

4.2. Implementation Details

All experiments in this paper were implemented using the PyTorch deep learning framework. The experimental environment was Ubuntu 20.04 with an NVIDIA A100 GPU. During training, the batch size was set to 16, and the AdamW optimizer was used to optimize the model. The initial learning rate was set to 0.003 and decayed by 5% every 5 epochs. The model was trained for 60 epochs, and each experiment was conducted three times; the average result was taken to ensure a more robust evaluation of the model's performance. The input motion sequence length was 25 frames (1000 ms), and the model predicted the following 25 frames (1000 ms). The choice and configuration of the relevant hyperparameters are shown in Table 1.
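The core of this optimization schedule can be expressed with standard PyTorch utilities, as in the sketch below; the network here is a placeholder (the actual DA-MgTCN implementation is not shown), the weight-decay value follows Table 1, and the warmup epochs listed in Table 1 are omitted for brevity.

```python
import torch
import torch.nn as nn

model = nn.Linear(51, 51)  # placeholder standing in for the DA-MgTCN network
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-3,
                              weight_decay=1e-2, betas=(0.9, 0.999))
# Decay the learning rate by 5% every 5 epochs, for 60 epochs in total.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.95)

for epoch in range(60):
    # ... one pass over the training set with batch size 16 goes here ...
    scheduler.step()
```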

4.3. Evaluation Metrics and Baselines

The same evaluation metrics as those used in existing algorithms [25,45] were employed for assessing model performance. The standard mean per joint position error (MPJPE) was used to measure the average Euclidean distance (in millimeters, mm) between the predicted joint 3D coordinates and the ground truth, as illustrated in Equation (12). In addition, to further illustrate the advantages of the method, we conducted a comparative analysis of our method with Res. sup. [17], convSeq2Seq [11], DMGNN [13], LTD [25], LPJP [44], Hisrep [48], MSR [49], and ST-DGCN [45].

4.4. Experimental Results and Analysis

Human3.6M: Based on the existing work, we divided the prediction results into short-term (80–400 ms) and long-term predictions (500–1000 ms). The experimental results are shown in Table 2, which demonstrates the joint position error and mean error for short-term (80 ms, 160 ms, 320 ms, 400 ms) and long-term (560 ms, 1000 ms) predictions for 15 kinds of movements. It was found that the existing methods usually showed high prediction accuracy when dealing with more periodic and regular movements, such as “walking” and “eating”. However, when dealing with more random and irregular movements, such as “directions”, “posing”, and “purchases”, the prediction accuracy decreased significantly. The algorithm proposed in this paper showed high prediction accuracy when dealing with highly complex, non-periodic, and irregular movements.
The experimental results revealed that the proposed DA-MgTCN method outperformed most baseline methods in short-term motion prediction and improved even more markedly in long-term prediction, reaching the best values on the 560 ms and 1000 ms MPJPE metrics. This success can be attributed to the ability of DA-MgTCN to fully capture spatial correlations and multi-granularity temporal features, which was a key factor in enhancing the model's prediction accuracy.
Qualitative comparison: We visualized the results of the aforementioned motion prediction to further assess the model's performance. Figure 4 illustrates the visualization results for actions including "walking", "discussion", "posing", and "sitting down". In every subplot, the first row shows the ground-truth pose sequence (in black) and the following rows show the predicted poses (in blue), with each row displaying the prediction results of one model. From the visualization results, it was observed that the predictions generated by the DA-MgTCN method showed higher similarity to the actual sequences and exhibited lower distortion and better continuity between frames. This was because the dual-branch spatial attention and multi-granularity temporal convolution used to model joint motion trajectories provided richer and smoother temporal context for joint motion. The model could sufficiently capture global spatial dependencies, allowing it to encode joint information with distant hidden dependencies. For example, in the "sitting down" visualization, the motion of the hands and feet was more coordinated and coherent. This demonstrated once again how well the proposed DA-MgTCN forecasted both highly complex irregular movements and complex periodic motions.
CMU-MoCap: To further validate the generalization of the DA-MgTCN method, we compared its performance with existing algorithms on the CMU-MoCap dataset, including Res. sup. [17], convSeq2Seq [11], DMGNN [13], LTD [25], LPJP [44], MSR [49], and ST-DGCN [45]. The experimental results are shown in Table 3, presenting the mean per joint position error and corresponding average error for short-term and long-term predictions across eight actions. From the table, it can be observed that the DA-MgTCN method's short-term and long-term prediction accuracy was significantly higher than that of the other seven prediction algorithms, including Cai et al. [44]'s method, even when handling relatively complex non-periodic actions. Compared to the state-of-the-art ST-DGCN method, the DA-MgTCN method improved the average prediction accuracy by about 1.5% in short-term prediction and 3% in long-term prediction. Thus, the comprehensive experimental results once again confirmed the effectiveness and generalization capabilities of the DA-MgTCN method.

4.5. Ablation Study

To deeply evaluate the contribution of each component in our model, we conducted a series of ablation experiments on the Human3.6M dataset. These experiments focused on the impact of the channel-attention (channel-att) and multi-grained (Mg) convolution modules on the model’s performance. The results of the experiments are shown in Table 4.
In terms of channel attention, the prediction accuracy significantly decreased when only joint attention was used without dual attention. The multi-granularity convolutional TCN module showed excellent performance in capturing long-term temporal dependence, thus improving the long-term prediction accuracy. Furthermore, when the channel-att or Mg module was removed, the error at 1000 ms increased by 1.9% and 4.0%, respectively, on the Human3.6M dataset, and by 2.9% and 4.0%, respectively, on the CMU-MoCap dataset. The best performance could be achieved by combining these two components. The multi-granularity model demonstrated better performance compared to the single-granularity model, especially for long prediction cycles. Additionally, the use of learnable weight parameters led to better prediction performance compared to fixed weights. This suggested that by designing a multi-granularity temporal structure, we could extract the temporal correlation between different time periods more effectively, thus improving the prediction performance.
Effects of the number of DA-MgTCNs: To further validate the effect of the number of DA-MgTCN modules in the model, we varied the number of DA-MgTCNs from 6 to 14 in steps of 2 and measured the prediction error and running time cost on both datasets, as shown in Table 5. The experimental results showed that as the number of DA-MgTCNs increased from 6 to 10, the predicted MPJPE decreased while the time cost continued to increase. With 12 or 14 DA-MgTCNs, the prediction error remained stable at a low level, but the time cost increased further. Therefore, 10 DA-MgTCNs were chosen to balance prediction accuracy and operational efficiency.
In summary, the experimental results in this paper revealed the importance of the dual-attention and multi-granularity convolutional design of the DA-MgTCN method for improving performance. Modeling joint motion trajectories with dual-branch spatial attention and multi-granularity temporal convolution provided richer and smoother temporal context for joint motion, which allowed global spatial dependencies to be adequately modeled and enabled the model to encode joint information with distant hidden dependencies, thus improving the overall performance of the model for both short-term and long-term motion prediction.

4.6. Limitations

In addition to the qualitative results presented in Figure 4, challenging cases encountered by the DA-MgTCN model were also investigated. Figure 5 illustrates an example of a predicted skeleton for the "walking a dog" action. It is evident that the last few frames did not perfectly align with the ground-truth pose. This misalignment resulted from the high degree of uncertainty inherent in human motion, where a series of past poses can suggest various possible future outcomes; as a result, predicting long-term dependencies between joints and frames becomes more difficult. Furthermore, the experiments were constrained by the available data scenarios and experimental conditions, which limited the validation of our algorithm in more realistic settings. In the future, we will consider motion prediction in more intricate scenarios and investigate novel methods for multi-granularity human motion prediction in multi-domain contexts, with the aim of enhancing the adaptability and performance of the model.

5. Conclusions

In this paper, we proposed a novel human motion prediction method leveraging dual attention and multi-granularity temporal convolutional networks (DA-MgTCNs) to accurately understand and analyze human motion. Our method combined a dual-attention mechanism, which addresses the challenging problem of extracting inter-joint and intra-joint spatial features, with a multi-granularity temporal convolutional network model. The multi-granularity model facilitated the design of a TCN with different convolutional kernel granularities, enabling the learning of richer multi-scale temporal information and further enhancing the performance of the model. Extensive experiments were conducted on two large-scale datasets, Human3.6M and CMU-MoCap. The experimental results demonstrated that the proposed method significantly outperformed other approaches in both short-term and long-term prediction tasks, thus validating the effectiveness of the proposed algorithm. In future work, we aim to further optimize the network structure and parameter settings and extend the application of our model to spatio-temporal prediction tasks in real-world scenarios, such as robot perception and interaction.

Author Contributions

Conceptualization, B.H.; methodology, B.H.; software, B.H.; validation, B.H.; formal analysis, B.H.; investigation, B.H.; resources, B.H.; data curation, B.H.; writing—original draft preparation, B.H.; writing—review and editing, B.H.; visualization, B.H.; supervision, X.L.; project administration, X.L.; funding acquisition, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under grants 62233003 and 62073072, the Key Projects of the Key R&D Program of Jiangsu Province under grants BE2020006 and BE2020006-1, and Shenzhen Natural Science Foundation under grants JCYJ20210324132202005 and JCYJ20220818101206014.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets generated and/or analyzed during the current study are publicly available. The Human3.6M dataset can be accessed through the reference in [47]. The CMU-MoCap dataset is publicly available and can be accessed online at http://mocap.cs.cmu.edu/ (accessed on 13 June 2023). The use of these datasets is governed by their respective usage policies.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DA   Dual attention
TCNs   Temporal convolutional networks
MgTCN   Multi-granularity TCN
GNNs   Graph neural networks

References

  1. Chen, S.; Liu, B.; Feng, C.; Vallespi-Gonzalez, C.; Wellington, C. 3d point cloud processing and learning for autonomous driving: Impacting map creation, localization, and perception. IEEE Signal Process. Mag. 2020, 38, 68–86. [Google Scholar] [CrossRef]
  2. Gui, L.Y.; Zhang, K.; Wang, Y.X.; Liang, X.; Moura, J.M.; Veloso, M. Teaching robots to predict human motion. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 562–567. [Google Scholar]
  3. Koppula, H.S.; Saxena, A. Anticipating human activities using object affordances for reactive robotic response. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 14–29. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Sheng, W.; Li, X. Multi-task learning for gait-based identity recognition and emotion recognition using attention enhanced temporal graph convolutional network. Pattern Recognit. 2021, 114, 107868. [Google Scholar] [CrossRef]
  5. Kong, Y.; Wei, Z.; Huang, S. Automatic analysis of complex athlete techniques in broadcast taekwondo video. Multimed. Tools Appl. 2018, 77, 13643–13660. [Google Scholar] [CrossRef]
  6. Dong, Y.; Li, X.; Dezert, J.; Zhou, R.; Zhu, C.; Wei, L.; Ge, S.S. Evidential reasoning with hesitant fuzzy belief structures for human activity recognition. IEEE Trans. Fuzzy Syst. 2021, 29, 3607–3619. [Google Scholar] [CrossRef]
  7. Dong, Y.; Li, X.; Dezert, J.; Zhou, R.; Zhu, C.; Cao, L.; Khyam, M.O.; Ge, S.S. Multi-Source Weighted Domain Adaptation With Evidential Reasoning for Activity Recognition. IEEE Trans. Ind. Inform. 2022, 19, 5530–5542. [Google Scholar] [CrossRef]
  8. Lehrmann, A.M.; Gehler, P.V.; Nowozin, S. Efficient nonlinear Markov models for human motion. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1314–1321. [Google Scholar]
  9. Wang, J.M.; Fleet, D.J.; Hertzmann, A. Gaussian Process Dynamical Models for Human Motion. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 30, 283–298. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Taylor, G.W.; Hinton, G.E.; Roweis, S. Modeling human motion using binary latent variables. Adv. Neural Inf. Process. Syst. 2006, 19, 1345–1352. [Google Scholar]
  11. Li, C.; Zhang, Z.; Lee, W.S.; Lee, G.H. Convolutional sequence to sequence model for human dynamics. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2275–2284. [Google Scholar]
  12. Li, M.; Chen, S.; Chen, X.; Zhang, Y.; Wang, Y.; Tian, Q. Symbiotic Graph Neural Networks for 3D Skeleton-Based Human Action Recognition and Motion Prediction. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3316–3333. [Google Scholar] [CrossRef]
  13. Li, M.; Chen, S.; Zhao, Y.; Zhang, Y.; Wang, Y.; Tian, Q. Dynamic multiscale graph neural networks for 3d skeleton based human motion prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 214–223. [Google Scholar]
  14. Zhong, C.; Hu, L.; Zhang, Z.; Ye, Y.; Xia, S. Spatio-Temporal Gating-Adjacency GCN For Human Motion Prediction. In Proceedings of the Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 6447–6456. [Google Scholar]
  15. Fragkiadaki, K.; Levine, S.; Felsen, P.; Malik, J. Recurrent network models for human dynamics. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), IEEE Computer Society, Santiago, Chile, 7–13 December 2015; pp. 4346–4354. [Google Scholar]
  16. Jain, A.; Zamir, A.R.; Savarese, S.; Saxena, A. Structural-rnn: Deep learning on spatio-temporal graphs. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 5308–5317. [Google Scholar]
  17. Martinez, J.; Black, M.J.; Romero, J. On human motion prediction using recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2891–2900. [Google Scholar]
  18. Liu, Z.; Wu, S.; Jin, S.; Liu, Q.; Lu, S.; Zimmermann, R.; Cheng, L. Towards natural and accurate future motion prediction of humans and animals. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 10004–10012. [Google Scholar]
  19. Shu, X.; Zhang, L.; Qi, G.J.; Liu, W.; Tang, J. Spatiotemporal co-attention recurrent neural networks for human-skeleton motion prediction. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3300–3315. [Google Scholar] [CrossRef]
  20. Liu, Z.; Wu, S.; Jin, S.; Ji, S.; Liu, Q.; Lu, S.; Cheng, L. Investigating pose representations and motion contexts modeling for 3D motion prediction. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 681–697. [Google Scholar] [CrossRef]
  21. Lebailly, T.; Kiciroglu, S.; Salzmann, M.; Fua, P.; Wang, W. Motion prediction using temporal inception module. In Proceedings of the Asian Conference on Computer Vision, Kyoto, Japan, 30 November–4 December 2020. [Google Scholar]
  22. Cui, Q.; Sun, H.; Kong, Y.; Zhang, X.; Li, Y. Efficient human motion prediction using temporal convolutional generative adversarial network. Inf. Sci. 2021, 545, 427–447. [Google Scholar] [CrossRef]
  23. Mao, W.; Liu, M.; Salzmann, M.; Li, H. Multi-level motion attention for human motion prediction. Int. J. Comput. Vis. 2021, 129, 2513–2535. [Google Scholar] [CrossRef]
  24. Medjaouri, O.; Desai, K. Hr-stan: High-resolution spatio-temporal attention network for 3d human motion prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 2540–2549. [Google Scholar]
  25. Mao, W.; Liu, M.; Salzmann, M.; Li, H. Learning trajectory dependencies for human motion prediction. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 4317–4326. [Google Scholar]
  26. Shi, L.; Zhang, Y.; Cheng, J.; Lu, H. Decoupled spatial-temporal attention network for skeleton-based action recognition. arXiv 2020, arXiv:2007.03263. [Google Scholar]
  27. Aksan, E.; Kaufmann, M.; Cao, P.; Hilliges, O. A spatio-temporal transformer for 3d human motion prediction. In Proceedings of the 2021 International Conference on 3D Vision (3DV), London, UK, 1–3 December 2021; pp. 565–574. [Google Scholar]
  28. Li, Y.; Wang, Z.; Yang, X.; Wang, M.; Poiana, S.I.; Chaudhry, E.; Zhang, J. Efficient convolutional hierarchical autoencoder for human motion prediction. Vis. Comput. 2019, 35, 1143–1156. [Google Scholar] [CrossRef] [Green Version]
  29. Chiu, H.K.; Adeli, E.; Wang, B.; Huang, D.A.; Niebles, J.C. Action-agnostic human pose forecasting. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa Village, HI, USA, 7–11 January 2019; pp. 1423–1432. [Google Scholar]
  30. Guo, X.; Choi, J. Human motion prediction via learning local structure representations and temporal dependencies. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 2580–2587. [Google Scholar]
  31. Bai, S.; Kolter, J.Z.; Koltun, V. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv 2018, arXiv:1803.01271. [Google Scholar]
  32. Lea, C.; Flynn, M.D.; Vidal, R.; Reiter, A.; Hager, G.D. Temporal convolutional networks for action segmentation and detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 156–165. [Google Scholar]
  33. Farha, Y.A.; Gall, J. Ms-tcn: Multi-stage temporal convolutional network for action segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3575–3584. [Google Scholar]
  34. Dauphin, Y.N.; Fan, A.; Auli, M.; Grangier, D. Language modeling with gated convolutional networks. In Proceedings of the International Conference on Machine Learning (ICML), PMLR, Sydney, Australia, 6–11 August 2017; pp. 933–941. [Google Scholar]
  35. van den Oord, A.; Dieleman, S.; Zen, H.; Simonyan, K.; Vinyals, O.; Graves, A.; Kalchbrenner, N.; Senior, A.; Kavukcuoglu, K. Wavenet: A generative model for raw audio. arXiv 2016, arXiv:1609.03499. [Google Scholar]
  36. Pavllo, D.; Feichtenhofer, C.; Grangier, D.; Auli, M. 3d human pose estimation in video with temporal convolutions and semi-supervised training. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 7753–7762. [Google Scholar]
  37. Yu, F.; Koltun, V. Multi-scale context aggregation by dilated convolutions. arXiv 2015, arXiv:1511.07122. [Google Scholar]
  38. Reis, M.S. Multiscale and multi-granularity process analytics: A review. Processes 2019, 7, 61. [Google Scholar] [CrossRef] [Green Version]
  39. Yang, B.; Yang, J.; Ni, R.; Yang, C.; Liu, X. Multi-granularity scenarios understanding network for trajectory prediction. Complex Intell. Syst. 2023, 9, 851–864. [Google Scholar] [CrossRef]
  40. Chorowski, J.K.; Bahdanau, D.; Serdyuk, D.; Cho, K.; Bengio, Y. Attention-based models for speech recognition. arXiv 2015, arXiv:1506.07503. [Google Scholar]
  41. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. arXiv 2017, arXiv:1706.03762. [Google Scholar]
  42. Xu, Y.; Yu, L.; Xu, H.; Zhang, H.; Nguyen, T. Vector sparse representation of color image using quaternion matrix analysis. IEEE Trans. Image Process. 2015, 24, 1315–1329. [Google Scholar] [CrossRef] [PubMed]
  43. Tang, Y.; Ma, L.; Liu, W.; Zheng, W. Long-term human motion prediction by modeling motion context and enhancing motion dynamic. arXiv 2018, arXiv:1805.02513. [Google Scholar]
  44. Cai, Y.; Huang, L.; Wang, Y.; Cham, T.J.; Cai, J.; Yuan, J.; Liu, J.; Yang, X.; Zhu, Y.; Shen, X.; et al. Learning progressive joint propagation for human motion prediction. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 226–242. [Google Scholar]
  45. Ma, T.; Nie, Y.; Long, C.; Zhang, Q.; Li, G. Progressively Generating Better Initial Guesses Towards Next Stages for High-Quality Human Motion Prediction. In Proceedings of the Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022. [Google Scholar]
  46. Loshchilov, I.; Hutter, F. Decoupled weight decay regularization. arXiv 2017, arXiv:1711.05101. [Google Scholar]
  47. Ionescu, C.; Papava, D.; Olaru, V.; Sminchisescu, C. Human3.6M: Large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 36, 1325–1339. [Google Scholar] [CrossRef]
  48. Mao, W.; Liu, M.; Salzmann, M. History Repeats Itself: Human Motion Prediction via Motion Attention. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 474–489. [Google Scholar]
  49. Dang, L.; Nie, Y.; Long, C.; Zhang, Q.; Li, G. MSR-GCN: Multi-Scale Residual Graph Convolution Networks for Human Motion Prediction. In Proceedings of the International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 11467–11476. [Google Scholar]
Figure 1. The process of human motion prediction. The top layer represents the historical pose data; the middle layer shows the classical methods used; and the bottom layer shows the output result, i.e., the predicted pose.
Figure 2. The whole architecture of our suggested solution for motion prediction, which employed an end-to-end framework. We encoded human poses X 1 : T + N and fed them into the DA-MgTCN, which was a series connection of DA and MgTCN modules. The DA module was used to extract spatially important information from dimensions at the joint and channel levels. The MgTCN was used to capture different scales of temporal dependencies. Finally, the decoder module recovered the time dimension length.
Figure 3. Architectural elements in a TCN (causal dilated convolutional network).
Figure 4. Qualitative comparison.
Figure 5. Example of the failure cases of our forecasting approach. The first line indicates the ground-truth 3D human motion. The second line, shown in blue, presents the predicted future motion.
Table 1. The choice and configuration of the relevant hyperparameters.
Hyperparameter/Config      Value
Optimizer                  AdamW
Base learning rate         $3 \times 10^{-3}$
Weight decay               $10^{-2}$
Optimizer momentum         $\beta_1 = 0.9$, $\beta_2 = 0.999$
Batch size                 16
Warmup epochs              5
Epochs                     60
Layers                     10
Table 2. Prediction of 3D joint positions in Human3.6M for all actions. The best results are marked in bold.
Time (ms)   80   160   320   400   560   1000 | 80   160   320   400   560   1000
Action      Walking | Eating
Res. sup. [17]29.450.876.081.581.7100.716.830.656.968.779.9100.2
convSeq2Seq [11]17.733.556.363.672.282.311.022.440.748.461.387.1
DMGNN [13]17.330.754.665.273.495.811.021.436.243.958.186.7
LTD [25]12.323.039.846.154.159.88.416.933.240.753.477.8
MSR [49]12.222.738.645.252.763.08.417.133.040.452.577.1
Hisrep [48]10.019.534.239.847.458.16.414.028.736.250.075.7
ST-DGCN [45]10.219.834.540.348.156.47.015.130.638.151.176.0
Our model10.119.233.840.246.155.47.014.330.238.548.972.6
Action      Smoking | Discussion
Res. sup. [17]23.042.670.182.794.8137.432.961.290.996.2121.3161.7
convSeq2Seq [11]11.622.841.348.960.081.717.134.564.877.698.1129.3
DMGNN [13]9.017.632.140.350.972.217.334.861.069.881.9138.3
LTD [25]7.916.231.938.950.772.612.527.458.571.791.6121.5
MSR [49]8.016.331.338.249.571.612.026.857.169.788.6117.6
Hisrep [48]7.014.929.936.447.669.510.223.452.165.486.6119.8
ST-DGCN [45]6.614.128.234.746.569.510.023.853.666.787.1118.2
Our model6.514.628.033.846.166.79.824.254.565.183.1114.8
Action      Directions | Greeting
Res. sup. [17]35.457.376.387.7110.1152.534.563.4124.6142.5156.1166.5
convSeq2Seq [11]13.529.057.669.786.6115.822.045.082.096.0116.9147.3
DMGNN [13]13.124.664.781.9110.1115.823.350.3107.3132.1152.5157.7
LTD [25]9.019.943.453.771.0101.818.738.777.793.4115.4148.8
MSR [49]8.619.743.353.871.2100.616.537.077.393.4116.3147.2
Hisrep [48]7.418.444.556.573.9106.513.730.163.878.1101.9138.8
ST-DGCN [45]7.217.640.951.569.3100.415.234.171.687.1110.2143.5
Our model6.917.040.749.068.098.514.633.368.586.4112.0135.9
Action      Phoning | Posing
Res. sup. [17]38.069.3115.0126.7141.2131.536.169.1130.5157.1194.7240.2
convSeq2Seq [11]13.526.649.959.977.1114.016.936.775.792.9122.5187.4
DMGNN [13]12.525.848.158.378.998.615.329.371.596.7163.9310.1
LTD [25]10.221.042.552.369.2103.113.729.966.684.1114.5173.0
MSR [49]10.120.741.551.368.3104.412.829.467.085.0116.3174.3
Hisrep [48]8.618.339.049.267.4105.010.224.258.575.8107.6178.2
ST-DGCN [45]8.318.338.748.465.9102.710.725.760.076.6106.1164.8
Our model8.318.139.247.964.595.610.425.460.574.8103.2162.2
Time (ms)   80   160   320   400   560   1000 | 80   160   320   400   560   1000
Action      Purchases | Sitting
Res. sup. [17]36.360.386.595.9122.7160.342.681.4134.7151.8167.4201.5
convSeq2Seq [11]20.341.876.589.9111.3151.513.527.052.063.182.4120.7
DMGNN [13]21.438.775.792.7118.6153.811.925.144.650.260.1104.9
LTD [25]15.632.865.779.3102.0143.510.621.946.357.978.3119.7
MSR [49]14.832.466.179.6101.6139.210.522.046.357.878.2120.0
Hisrep [48]13.029.260.473.995.6134.29.320.144.356.076.4115.9
ST-DGCN [45]12.528.760.173.395.3133.38.819.242.453.874.4116.1
Our model12.629.159.072.491.6128.38.418.540.452.972.0113.6
Action      Sitting Down | Taking Photo
Res. sup. [17]47.386.0145.8168.9205.3277.626.147.681.494.7117.0143.2
convSeq2Seq [11]20.740.670.482.7106.5150.312.726.052.163.684.4128.1
DMGNN [13]15.032.977.193.0122.1168.813.629.046.058.891.6120.7
LTD [25]16.131.161.575.5100.0150.29.920.945.056.677.4119.8
MSR [49]16.131.662.576.8102.8155.59.921.044.656.377.9121.9
Hisrep [48]14.930.759.172.097.0143.68.318.440.751.572.1115.9
ST-DGCN [45]13.927.957.471.596.7147.88.418.942.053.374.3118.6
Our model13.827.058.172.295.7143.78.218.140.651.270.9117.1
Action      Waiting | Walking Dog
Res. sup. [17]30.657.8106.2121.5146.2196.264.2102.1141.1164.4191.3209.0
convSeq2Seq [11]14.629.758.169.787.3117.727.753.690.7103.3122.4162.4
DMGNN [13]12.224.259.677.5106.0136.747.193.3160.1171.2194.0182.3
LTD [25]11.424.050.161.579.4108.123.446.283.596.0111.9148.9
MSR [49]10.723.148.359.276.3106.320.742.980.493.3111.9148.2
Hisrep [48]8.719.243.454.974.5108.220.140.373.386.3108.2146.9
ST-DGCN [45]8.920.143.654.372.2103.418.839.373.786.4104.7139.8
Our model9.219.943.653.067.3100.818.537.772.887.6105.8137.2
Action      Walking Together | Average
Res. sup. [17]26.850.180.292.2107.6131.134.762.0101.1115.597.6130.5
convSeq2Seq [11]15.330.453.161.272.087.416.633.361.472.790.7124.2
DMGNN [13]14.326.750.163.283.4115.917.033.665.979.7103.0137.2
LTD [25]10.521.038.545.255.065.612.726.152.363.581.6114.3
MSR [49]10.620.937.443.952.965.912.125.651.662.981.1114.2
Hisrep [48]8.918.435.141.952.764.910.422.647.158.377.3112.1
ST-DGCN [45]8.718.634.441.051.964.310.322.747.458.576.9110.3
Our model8.718.533.540.550.861.410.222.346.957.775.1106.9
Table 3. Short- and long-term prediction of 3D body poses on CMU-Mocap. All results are in millimeters. The best results are marked in bold.
Time (ms)               80      160     320     400     560     1000
Res. sup. [17]          24.0    43.0    74.5    87.2    105.5   136.3
convSeq2Seq [11]        12.5    22.2    40.7    49.7    -       84.6
DMGNN [13]              13.6    24.1    47.0    58.8    77.4    112.6
LTD [25]                9.3     17.1    33.0    40.9    55.8    86.2
LPJP [44]               9.8     17.6    35.7    45.1    -       93.2
MSR [49]                8.1     15.2    30.6    38.6    53.7    83.0
ST-DGCN [45]            7.6     14.3    29.0    36.6    50.9    80.1
Our model (DA-MgTCN)    7.5     14.0    28.1    34.8    49.0    77.4
Table 4. Influence of the channel-attention (channel-att) and multi-grained (Mg) convolution modules on the Human3.6M and CMU-MoCap datasets. On average, the two components of our model contributed to its accuracy. The best results are marked in bold.
Channel-att   MgTCN     Human3.6M MPJPE (mm)                            CMU-MoCap MPJPE (mm)
                        80     160    320    400    560    1000         80     160    320    400    560    1000
✗             ✓         10.4   22.9   48.1   59.0   75.9   108.7        7.7    14.5   29.0   36.1   51.9   81.7
✓             ✗         10.5   23.1   48.0   59.4   76.8   111.0        7.9    14.6   29.2   36.7   52.3   82.6
✓             ✓         10.2   22.4   46.9   57.7   74.8   106.7        7.5    14.0   28.1   34.8   49.0   79.4
Table 5. The MPJPE of our model with different numbers of DA-MgTCNs for short-term and long-term prediction on Human3.6M and CMU-MoCap. The best results are marked in bold.
DA-MgTCNs     Human3.6M MPJPE (mm)                            CMU-MoCap MPJPE (mm)
              80     160    320    400    560    1000         80     160    320    400    560    1000
6             11.5   25.0   51.9   65.6   81.1   118.0        8.4    15.5   31.7   39.5   54.3   88.3
8             10.6   23.5   49.0   61.4   78.3   111.1        7.9    14.8   29.8   37.1   52.4   82.9
10            10.2   22.3   46.9   57.7   74.8   106.7        7.5    14.0   28.1   34.8   49.0   77.4
12            10.3   22.7   47.9   58.6   74.6   106.5        7.8    13.8   28.3   35.3   50.9   78.8
14            10.3   22.8   46.8   58.0   76.3   108.1        7.6    14.3   28.9   35.6   49.2   78.4

