Figure 1.
The architecture of DMGCN for EEG-based emotion recognition. DMGCN consists of multi-level graph construction, a hierarchical dynamic geometric interaction neural network (HDGIL), and a multi-level feature fusion classifier (M2FC). The multi-level graph construction makes it possible to capture both the local and global connectivity of brain cortical neurons. HDGIL is a dual-stream model responsible for hierarchical graph representations, and M2FC adaptively fuses these representations to classify the graph.
Figure 2.
Construction of multi-level graphs.
Figure 3.
The module-level details of DMGCN. The weighted sum operator introduces trainable parameters that allow an adaptive trade-off between different sources of information. HBN is short for the Riemannian BatchNorm layer, which aligns the distributions of covariates across layers. The mapping operator maps X vectors of a given dimension to a single vector, and reduces to a one-to-one mapping in the special case where X is 1.
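As a minimal sketch of how the weighted sum operator described in Figure 3 could be realized, the following PyTorch module (an illustrative assumption, not the authors' released code) mixes two feature streams with a single trainable parameter kept in (0, 1) by a sigmoid.

```python
import torch
import torch.nn as nn

class AdaptiveWeightedSum(nn.Module):
    """Adaptive trade-off between two feature tensors (illustrative sketch)."""
    def __init__(self):
        super().__init__()
        # Unconstrained scalar; a sigmoid keeps the mixing weight in (0, 1).
        self.alpha = nn.Parameter(torch.zeros(1))

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
        w = torch.sigmoid(self.alpha)
        return w * x_a + (1.0 - w) * x_b

# Usage: fuse two streams of node features with the same shape.
fuse = AdaptiveWeightedSum()
out = fuse(torch.randn(4, 62, 10), torch.randn(4, 62, 10))
```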
Figure 4.
The visualization of classification accuracy for each subject in comparative experiments.
Figure 5.
The visualization of the models’ robustness in subject-independent comparative experiments. Panels (a–d) show the results of the comparative experiments on the DEAP-Arousal, DEAP, DEAP-Valence and SEED datasets, respectively.
Figure 6.
(a) For the SEED dataset, we visualize the 24 connections with the highest strength. To prevent the values from being too close together, which would make the colors indistinguishable, the connection weights are standardized. The connection strengths between nodes are symmetric. (b) For the first four subjects of the SEED dataset, the output features of the M2FC are reduced in dimensionality with the t-SNE algorithm for visualization. (c) Visualization of the adjacency matrices of the hidden graphs. (d) Visualization of the node features of the hidden graphs.
Figure 7.
For example, in DBGCN [9], as GCN layers are stacked, node features gradually become less distinguishable on the SEED dataset. The similarity between node features is measured by the inner product of their feature vectors.
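Figure 7 quantifies over-smoothing through the inner product of node feature vectors. A minimal sketch of this diagnostic is shown below; the helper name and the toy tensors are illustrative assumptions rather than DBGCN's actual implementation.

```python
import torch

def node_similarity(h: torch.Tensor) -> torch.Tensor:
    """Pairwise inner-product similarity of node features.

    h: (num_nodes, feature_dim) output of one GCN layer.
    Returns the (num_nodes, num_nodes) Gram matrix H @ H^T.
    """
    return h @ h.t()

# Example: as features over-smooth, rows become near-identical and the
# Gram matrix approaches a uniform (rank-one) pattern.
h_shallow = torch.randn(62, 10)                 # distinct node features
h_deep = torch.randn(1, 10).repeat(62, 1)       # nearly collapsed features
print(node_similarity(h_shallow).std(), node_similarity(h_deep).std())
```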
Figure 8.
The visualization of the models’ robustness in subject-independent ablation experiments. In subfigures (a–d), we visualize the results of the subject-independent ablation experiments on the DEAP-Arousal, DEAP, DEAP-Valence and SEED datasets, respectively.
Figure 9.
The visualization of classification accuracy for each subject in ablation experiments.
Table 1.
The basic operators in the Poincaré ball and the Lorentz model. Hyperbolic manifolds are low-dimensional projections of higher-dimensional hyperbolic surfaces, with the prevalent models being the Poincaré ball and the Lorentz model, which offer advantages in intuitive visualization and numerical stability, respectively. The exponential and logarithmic maps form a pair of reciprocal mappings between the manifold and its tangent space: the exponential map sends a tangent vector to a point on the manifold, and the logarithmic map sends a point on the manifold back to the tangent space. Additionally, each Riemannian manifold is equipped with a metric tensor.
Operator | Poincaré Ball | Lorentz |
---|---|---|
Manifold | | |
Tangent Space | | |
Origin | | |
Metric | | the identity matrix I, except that the first diagonal entry is −1 |
Induced Distance | | |
Exponential Mapping | | |
Logarithmic Mapping | | |
Parallel Transport | | |
Scalar Multiplication | | |
Bias Addition | | |
Matrix–Vector Product | | |
Mapping in Manifold | | |
Table 2.
Projections between hyperbolic and Euclidean space.
Projection | Poincaré Ball | Lorentz |
---|---|---|
Project from Euclidean to Manifold | | |
Project from Euclidean to Tangent | | |
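To make the projections in Tables 1 and 2 concrete, the sketch below implements the origin-based exponential and logarithmic maps of the Poincaré ball in the form commonly used in hyperbolic neural networks (curvature −c with c > 0); it reflects standard conventions rather than the exact expressions elided from the tables.

```python
import torch

def expmap0(v: torch.Tensor, c: float = 1.0, eps: float = 1e-7) -> torch.Tensor:
    """Map a tangent vector at the origin onto the Poincare ball (curvature -c)."""
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def logmap0(y: torch.Tensor, c: float = 1.0, eps: float = 1e-7) -> torch.Tensor:
    """Map a point on the Poincare ball back to the tangent space at the origin."""
    sqrt_c = c ** 0.5
    norm = y.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.atanh((sqrt_c * norm).clamp(max=1 - 1e-5)) * y / (sqrt_c * norm)

# Round trip: Euclidean (tangent) features -> manifold -> tangent space.
x = torch.randn(62, 10) * 0.1
assert torch.allclose(logmap0(expmap0(x)), x, atol=1e-4)
```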
Table 3.
Summary of hyperparameters in our experiments. As the dimension of the hidden layer increases, convergence to the feasible solution becomes more difficult, so the dimension of the hidden layer is set to a lower value, namely, 10. Under these settings, the effect of the representation capacity of the space on the performance of the model can be visually observed.
Hyperparameter | Value |
---|---|
Learning Rate 1 | |
Batch Size for SEED and DEAP | 20 |
Dropout Rate 2 | 0.1 |
Initial Curvature for Riemannian Manifold 3 | −0.5 |
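For reference, the hyperparameters of Table 3 can be grouped into a single configuration object along these lines; the field names are assumptions for illustration, and the learning-rate value, which is not recoverable from the extracted table, is left as a placeholder.

```python
from dataclasses import dataclass

@dataclass
class DMGCNConfig:
    """Hyperparameters summarized in Table 3; field names are assumed, not the authors'."""
    hidden_dim: int = 10             # kept small to ease convergence (see caption)
    learning_rate: float = 1e-3      # placeholder: the value is elided in the source table
    batch_size: int = 20             # for both SEED and DEAP
    dropout_rate: float = 0.1
    initial_curvature: float = -0.5  # initial curvature of the Riemannian manifold

config = DMGCNConfig()
```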
Table 4.
Average and variance of the macro-accuracy in the comparative experiments. The labeling schemes for DEAP-Valence, DEAP-Arousal and DEAP differ from one another. In DEAP-Valence and DEAP-Arousal, each sample is labeled according to whether its valence or arousal value, respectively, lies above or below the threshold (0.5) on the corresponding axis. In contrast, DEAP uses both valence and arousal to label samples, resulting in four classes. Valence represents the positivity/negativity of an emotion, while arousal reflects its activation level. Additionally, bold indicates the highest average accuracy across all models, red highlights DMGCN’s performance, underlining marks the best accuracy among baselines, and the tilde shows the lowest variance during LOOCV.
Methods | DEAP-Valence ACC | DEAP-Valence STD | DEAP-Arousal ACC | DEAP-Arousal STD | DEAP ACC | DEAP STD | SEED ACC | SEED STD |
---|---|---|---|---|---|---|---|---|
DBGCN | 95.03 | 4.18 | 91.74 | 5.38 | 63.78 | 3.77 | 84.95 | 7.67 |
GCB-BLS | 96.64 | 4.12 | 92.27 | 13.94 | 69.46 | 5.66 | 88.88 | 7.53 |
V-IAG | 96.72 | 3.23 | 94.17 | 4.28 | 69.53 | 4.20 | 90.97 | 5.54 |
RGNN | 97.94 | 3.63 | 94.96 | 5.88 | 69.65 | 3.85 | 91.97 | 5.12 |
HDGCN | 98.16 | 2.76 | 95.67 | 3.55 | 71.14 | 2.88 | 91.74 | 4.22 |
DMGCN | 98.73 | 1.15 | 95.97 | 3.68 | 72.74 | 0.82 | 94.89 | 3.93 |
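The labeling rules summarized in the caption of Table 4 amount to simple threshold checks. The sketch below is an illustrative reading of those rules (ratings assumed normalized to [0, 1], threshold 0.5 as stated in the caption); the function names are hypothetical.

```python
def binary_label(score: float, threshold: float = 0.5) -> int:
    """High (1) vs. low (0) along a single axis (valence or arousal)."""
    return int(score > threshold)

def four_class_label(valence: float, arousal: float, threshold: float = 0.5) -> int:
    """Combined DEAP label: the four quadrants of the valence-arousal plane."""
    return 2 * binary_label(valence, threshold) + binary_label(arousal, threshold)

# Example: high valence, low arousal -> class 2.
print(four_class_label(0.8, 0.3))
```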
Table 5.
Effect of hidden-layer dimension on model performance. The model’s average classification accuracy (%) was evaluated using leave-one-out cross-validation (LOOCV), while the standard deviation (%) of the accuracy indicates the model’s robustness to individual differences. Additionally, Params refers to the number of floating-point parameters, in thousands (K).
D | Models | SEED ACC | SEED STD | SEED Params | DEAP ACC | DEAP STD | DEAP Params | DEAP-Arousal ACC | DEAP-Arousal STD | DEAP-Arousal Params | DEAP-Valence ACC | DEAP-Valence STD | DEAP-Valence Params |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4 | DBGCN | 61.28 | 7.93 | 5.0 | 43.75 | 4.93 | 5.1 | 70.23 | 8.66 | 5.1 | 69.41 | 2.46 | 5.1 |
4 | GCB-BLS | 67.66 | 6.87 | 6.5 | 47.05 | 3.15 | 6.5 | 75.22 | 6.27 | 6.5 | 77.49 | 5.02 | 6.5 |
4 | V-IAG | 73.71 | 4.66 | 9.3 | 54.14 | 4.81 | 9.5 | 77.93 | 4.72 | 9.5 | 78.19 | 4.56 | 9.5 |
4 | RGNN | 78.62 | 3.99 | 10.3 | 55.63 | 2.71 | 10.3 | 75.34 | 3.70 | 10.3 | 80.52 | 3.80 | 10.3 |
4 | HDGCN | 75.08 | 4.46 | 14.2 | 59.44 | 3.48 | 12.7 | 79.46 | 2.22 | 12.7 | 82.23 | 3.36 | 12.7 |
4 | DMGCN | 88.18 | 3.75 | 6.4 | 67.60 | 2.36 | 5.6 | 87.31 | 4.95 | 5.6 | 89.07 | 2.11 | 5.6 |
6 | DBGCN | 72.01 | 5.95 | 5.2 | 47.97 | 6.71 | 5.4 | 75.72 | 5.36 | 5.4 | 74.71 | 3.79 | 5.4 |
6 | GCB-BLS | 71.45 | 4.41 | 7.5 | 54.66 | 4.66 | 7.5 | 80.83 | 5.97 | 7.5 | 82.96 | 1.72 | 7.5 |
6 | V-IAG | 79.99 | 3.50 | 10.2 | 56.75 | 3.29 | 10.4 | 83.95 | 3.60 | 10.4 | 83.43 | 5.60 | 10.4 |
6 | RGNN | 80.66 | 3.04 | 11.2 | 59.52 | 5.69 | 11.2 | 80.34 | 4.70 | 11.2 | 83.52 | 3.49 | 11.2 |
6 | HDGCN | 81.39 | 4.93 | 15.4 | 64.44 | 3.70 | 13.9 | 95.38 | 3.06 | 13.9 | 87.31 | 1.47 | 13.9 |
6 | DMGCN | 89.24 | 2.85 | 6.9 | 69.95 | 4.25 | 6.2 | 89.01 | 4.32 | 6.2 | 91.54 | 3.02 | 6.2 |
8 | DBGCN | 77.85 | 6.34 | 5.5 | 59.69 | 4.11 | 5.6 | 83.72 | 4.95 | 5.6 | 85.93 | 5.65 | 5.6 |
8 | GCB-BLS | 81.77 | 5.74 | 8.5 | 62.54 | 4.08 | 8.5 | 85.74 | 4.97 | 8.5 | 87.82 | 3.31 | 8.5 |
8 | V-IAG | 82.10 | 4.67 | 11.1 | 64.52 | 3.39 | 11.3 | 86.05 | 4.15 | 11.3 | 86.58 | 4.55 | 11.3 |
8 | RGNN | 88.60 | 5.25 | 12.1 | 66.05 | 2.72 | 12.1 | 85.52 | 8.68 | 12.1 | 88.70 | 3.35 | 12.1 |
8 | HDGCN | 89.44 | 5.37 | 16.5 | 68.24 | 4.05 | 15.0 | 86.24 | 4.86 | 15.0 | 92.84 | 3.43 | 15.0 |
8 | DMGCN | 91.31 | 3.26 | 7.4 | 71.95 | 4.25 | 6.7 | 91.56 | 4.15 | 6.7 | 92.54 | 3.02 | 6.7 |
10 | DBGCN | 84.95 | 7.67 | 5.7 | 63.78 | 3.77 | 5.9 | 91.74 | 5.38 | 5.9 | 95.03 | 4.18 | 5.9 |
10 | GCB-BLS | 88.88 | 7.53 | 9.5 | 69.46 | 5.66 | 9.5 | 92.27 | 13.94 | 9.5 | 96.64 | 4.12 | 9.5 |
10 | V-IAG | 90.97 | 5.54 | 12.0 | 69.53 | 4.20 | 12.2 | 94.17 | 4.28 | 12.2 | 96.72 | 3.23 | 12.2 |
10 | RGNN | 91.97 | 5.12 | 13.0 | 69.65 | 3.85 | 13.0 | 94.96 | 5.88 | 13.0 | 97.94 | 3.63 | 13.0 |
10 | HDGCN | 91.74 | 4.22 | 17.7 | 71.14 | 2.88 | 16.2 | 95.67 | 3.55 | 16.2 | 98.16 | 2.76 | 16.2 |
10 | DMGCN | 94.89 | 3.93 | 8.0 | 72.74 | 0.82 | 7.2 | 95.97 | 3.68 | 7.2 | 98.73 | 1.15 | 7.2 |
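The LOOCV protocol behind the ACC and STD columns of Table 5 can be sketched as follows; `train_and_evaluate` is a hypothetical stand-in for the actual subject-independent training routine.

```python
import statistics

def loocv(subject_ids, train_and_evaluate):
    """Leave-one-subject-out cross-validation.

    For each held-out subject, a model is trained on the remaining subjects
    and tested on the held-out one; the mean accuracy measures performance and
    the standard deviation measures robustness to individual differences.
    """
    accuracies = []
    for test_subject in subject_ids:
        train_subjects = [s for s in subject_ids if s != test_subject]
        accuracies.append(train_and_evaluate(train_subjects, test_subject))
    return statistics.mean(accuracies), statistics.stdev(accuracies)
```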
Table 6.
Effect of hidden-layer number on model performance. The layer number refers to the number of GCNs in each branch of the model. Moreover, Nikolentzos G. et al. proved the equivalence of one-layer and multilayer GSCs [26]. Therefore, only the number of LHC layers in DMGCN is varied.
L | Models | SEED ACC | SEED STD | SEED Params | DEAP ACC | DEAP STD | DEAP Params | DEAP-Arousal ACC | DEAP-Arousal STD | DEAP-Arousal Params | DEAP-Valence ACC | DEAP-Valence STD | DEAP-Valence Params |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4 | DBGCN | 73.22 | 7.06 | 5.4 | 58.72 | 1.11 | 4.5 | 82.61 | 3.34 | 4.5 | 84.14 | 2.91 | 4.5 |
4 | GCB-BLS | 76.48 | 6.24 | 8.5 | 62.77 | 2.34 | 6.7 | 89.42 | 8.15 | 6.7 | 90.83 | 3.94 | 6.7 |
4 | V-IAG | 82.83 | 6.89 | 11.0 | 66.26 | 2.62 | 9.2 | 91.89 | 5.28 | 9.2 | 89.54 | 2.68 | 9.2 |
4 | RGNN | 86.7 | 2.59 | 11.7 | 67.71 | 2.06 | 9.9 | 92.65 | 13.21 | 9.9 | 90.59 | 3.82 | 9.9 |
4 | HDGCN | 88.81 | 4.41 | 14.4 | 67.75 | 2.72 | 10.3 | 88.79 | 3.21 | 10.3 | 89.97 | 2.85 | 10.3 |
4 | DMGCN | 90.23 | 4.29 | 6.5 | 69.54 | 1.18 | 4.7 | 85.02 | 5.25 | 4.7 | 92.31 | 2.55 | 4.7 |
6 | DBGCN | 84.95 | 7.67 | 5.9 | 63.78 | 3.77 | 5.0 | 91.74 | 5.38 | 5.0 | 95.03 | 4.18 | 5.0 |
6 | GCB-BLS | 88.88 | 7.53 | 9.5 | 69.46 | 5.66 | 7.7 | 92.27 | 13.94 | 7.7 | 96.64 | 4.12 | 7.7 |
6 | V-IAG | 90.97 | 5.54 | 12.2 | 69.53 | 4.2 | 10.4 | 94.17 | 4.28 | 10.4 | 96.72 | 3.23 | 10.4 |
6 | RGNN | 91.97 | 5.12 | 13.0 | 69.65 | 3.85 | 11.2 | 94.96 | 5.88 | 11.2 | 97.94 | 3.63 | 11.2 |
6 | HDGCN | 91.74 | 4.22 | 16.2 | 71.14 | 2.88 | 12.1 | 95.67 | 3.55 | 12.1 | 98.16 | 2.76 | 12.1 |
6 | DMGCN | 94.89 | 3.93 | 7.2 | 72.74 | 0.82 | 5.4 | 95.97 | 3.68 | 5.4 | 98.73 | 1.15 | 5.4 |
8 | DBGCN | 77.44 | 5.11 | 6.3 | 63.52 | 3.26 | 5.4 | 88.36 | 3.32 | 5.4 | 88.72 | 3.21 | 5.4 |
8 | GCB-BLS | 89.38 | 3.73 | 10.4 | 65.72 | 3.75 | 8.6 | 90.75 | 5.23 | 8.6 | 90.91 | 3.27 | 8.6 |
8 | V-IAG | 90.61 | 7.89 | 13.5 | 61.98 | 3.1 | 11.7 | 91.89 | 1.83 | 11.7 | 91.3 | 6.72 | 11.7 |
8 | RGNN | 90.84 | 8.02 | 14.2 | 64.41 | 8.75 | 12.4 | 89.78 | 3.09 | 12.4 | 91.4 | 7.53 | 12.4 |
8 | HDGCN | 92.03 | 8.66 | 17.9 | 69.39 | 3.23 | 13.8 | 93.78 | 4.19 | 13.8 | 93.06 | 8.27 | 13.8 |
8 | DMGCN | 95.07 | 4.5 | 8.0 | 73.97 | 5.89 | 6.2 | 93.59 | 3.83 | 6.2 | 96.73 | 6.23 | 6.2 |
10 | DBGCN | 72.87 | 8.28 | 6.8 | 59.19 | 2.99 | 5.9 | 79.95 | 9.02 | 5.9 | 80.43 | 2.77 | 5.9 |
10 | GCB-BLS | 86.48 | 6.19 | 11.4 | 60.42 | 2.62 | 9.6 | 86.01 | 2.74 | 9.6 | 82.02 | 4.03 | 9.6 |
10 | V-IAG | 85.83 | 3.78 | 14.7 | 57.95 | 3.82 | 12.9 | 88.00 | 8.35 | 12.9 | 83.42 | 1.35 | 12.9 |
10 | RGNN | 85.22 | 2.69 | 15.5 | 60.78 | 2.23 | 13.7 | 88.36 | 4.11 | 13.7 | 81.77 | 4.13 | 13.7 |
10 | HDGCN | 88.25 | 4.25 | 19.7 | 67.71 | 2.06 | 15.6 | 89.31 | 1.87 | 15.6 | 88.48 | 1.66 | 15.6 |
10 | DMGCN | 94.95 | 3.12 | 8.7 | 72.25 | 3.47 | 6.9 | 94.05 | 3.03 | 6.9 | 95.84 | 2.85 | 6.9 |
Table 7.
Time and space complexity are key metrics for evaluating an algorithm, indicating the worst-case growth in running time and in auxiliary space. Time complexity affects the speed of model training and prediction: high complexity leads to longer times and limits efficient validation and improvement. Space complexity affects the number of parameters, and higher dimensionality requires larger datasets, which is often impractical and increases the risk of overfitting. Space (Bytes) is reported in KB and time (FLOPs) in K, i.e., thousands of floating-point operations.
Models | SEED T | SEED S | SEED T/S | DEAP T | DEAP S | DEAP T/S | DEAP-Arousal T | DEAP-Arousal S | DEAP-Arousal T/S | DEAP-Valence T | DEAP-Valence S | DEAP-Valence T/S |
---|---|---|---|---|---|---|---|---|---|---|---|---|
DBGCN | 9.7 | 4.7 | 2.1 | 8.3 | 4.0 | 2.1 | 8.2 | 4.0 | 2.1 | 8.2 | 4.0 | 2.1 |
GCB-BLS | 26.6 | 7.6 | 3.5 | 19.9 | 6.1 | 3.3 | 19.6 | 6.1 | 3.2 | 19.6 | 6.1 | 3.2 |
V-IAG | 30.7 | 9.8 | 3.1 | 17.3 | 8.3 | 2.1 | 17.2 | 8.3 | 2.1 | 17.2 | 8.3 | 2.1 |
RGNN | 29.8 | 10.4 | 2.9 | 19.1 | 8.9 | 2.1 | 19.0 | 8.9 | 2.1 | 19.0 | 8.9 | 2.1 |
HDGCN | 26.7 | 12.9 | 2.1 | 20.6 | 9.6 | 2.1 | 20.5 | 9.6 | 2.1 | 20.5 | 9.6 | 2.1 |
DMGCN | 36.3 | 5.8 | 6.3 | 34.5 | 4.3 | 8.0 | 32.1 | 4.3 | 7.4 | 34.0 | 4.3 | 7.9 |
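The space figures in Table 7 (parameter storage in KB) can be reproduced for any PyTorch model with a count like the one below, assuming float32 parameters; FLOP counting is architecture-specific and is omitted.

```python
import torch.nn as nn

def param_kilobytes(model: nn.Module) -> float:
    """Approximate parameter storage in KB, assuming 4 bytes (float32) per parameter."""
    num_params = sum(p.numel() for p in model.parameters())
    return num_params * 4 / 1024

# Example with a small stand-in network (not DMGCN itself).
toy = nn.Sequential(nn.Linear(62, 10), nn.ReLU(), nn.Linear(10, 3))
print(f"{param_kilobytes(toy):.1f} KB")
```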
Table 9.
Summary of models in ablation experiments. DMGCN-I: representations from HDGIL are used directly for classification through a 2-layer MLP (multilayer perceptron) instead of M2FC (multi-level feature fusion classifier); DMGCN-II: GSC (global subgraph checker) is replaced with a GCN (graph convolution network); DMGCN-III: LHC (local hierarchy checker) is replaced with a GCN, and the combination of a concatenation operation, Euclidean mean pooling, and a 2-layer MLP is regarded as a GIL (geometry interactive layer); DMGCN-IV: as in DMGCN-III, the combination of a concatenation operation, Euclidean mean pooling, and a 2-layer MLP is regarded as the GIL.
Method | M2FC | GSC | LHC | GIL |
---|---|---|---|---|
DMGCN | ✓ | ✓ | ✓ | ✓ |
DMGCN-I | ✗ | ✓ | ✓ | ✓ |
DMGCN-II | ✓ | ✗ | ✓ | ✓ |
DMGCN-III | ✓ | ✓ | ✗ | ✓ |
DMGCN-IV | ✓ | ✓ | ✓ | ✗ |
Table 10.
Effect of hidden-layer dimension on DMGCN’s performance. The error ratio represents the proportion of samples requiring more than 20 iterations out of all samples. The maximum number of iterations was set to 20, and the time T is reported in hours.
Dimension | SEED ACC | SEED STD | SEED T | SEED Error Ratio | DEAP ACC | DEAP STD | DEAP T | DEAP Error Ratio | DEAP-Arousal ACC | DEAP-Arousal STD | DEAP-Arousal T | DEAP-Arousal Error Ratio | DEAP-Valence ACC | DEAP-Valence STD | DEAP-Valence T | DEAP-Valence Error Ratio |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
5 | 90.12 | 13.22 | 0.36 | 1.40 | 65.94 | 1.41 | 0.37 | 2.11 | 82.40 | 6.46 | 0.32 | 2.16 | 87.22 | 7.71 | 0.33 | 1.63 |
10 | 94.89 | 3.93 | 0.55 | 10.75 | 72.74 | 0.82 | 0.53 | 13.12 | 95.97 | 3.68 | 0.51 | 14.42 | 98.73 | 1.15 | 0.53 | 11.49 |
15 | 94.28 | 10.96 | 0.67 | 37.22 | 68.50 | 6.94 | 0.64 | 37.80 | 89.79 | 10.94 | 0.61 | 38.30 | 96.36 | 2.06 | 0.65 | 19.05 |
20 | 82.29 | 12.28 | 0.86 | 54.54 | 56.15 | 14.01 | 0.87 | 82.60 | 80.87 | 7.57 | 0.85 | 61.70 | 89.95 | 9.02 | 0.82 | 56.29 |
30 | 64.88 | 6.87 | 0.97 | 96.52 | 50.69 | 5.79 | 0.93 | 97.25 | 70.84 | 17.53 | 0.96 | 87.99 | 74.96 | 8.94 | 0.93 | 85.49 |
Table 11.
Effect of numerical precision on DMGCN’s performance. The absolute and relative errors are expressed as percentages, with thresholds ranging from 1% to 0.01%. The iteration is terminated once the absolute or relative error falls below the corresponding threshold.
Threshold | SEED ACC | SEED STD | SEED T | SEED Error Ratio | DEAP ACC | DEAP STD | DEAP T | DEAP Error Ratio | DEAP-Arousal ACC | DEAP-Arousal STD | DEAP-Arousal T | DEAP-Arousal Error Ratio | DEAP-Valence ACC | DEAP-Valence STD | DEAP-Valence T | DEAP-Valence Error Ratio |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 77.50 | 8.10 | 0.45 | 0.04 | 60.84 | 7.57 | 0.48 | 0.02 | 76.63 | 6.99 | 0.45 | 0.02 | 71.64 | 6.87 | 0.46 | 0.06 |
0.1 | 80.69 | 5.79 | 0.57 | 8.81 | 65.30 | 6.72 | 0.52 | 13.11 | 84.96 | 8.94 | 0.56 | 2.85 | 87.50 | 8.10 | 0.49 | 3.50 |
0.01 | 80.30 | 6.73 | 0.68 | 26.70 | 56.63 | 6.99 | 0.58 | 28.50 | 81.64 | 3.57 | 0.55 | 18.82 | 84.75 | 4.24 | 0.56 | 25.38 |
| 1 | 79.35 | 6.98 | 0.45 | 0.03 | 60.40 | 8.13 | 0.46 | 0.09 | 77.32 | 5.94 | 0.42 | 0.07 | 76.30 | 6.91 | 0.45 | 0.06 |
0.1 | 94.89 | 3.93 | 0.55 | 10.75 | 72.74 | 0.82 | 0.53 | 13.12 | 95.97 | 3.68 | 0.51 | 14.42 | 98.73 | 1.15 | 0.53 | 11.49 |
0.01 | 70.37 | 7.68 | 0.63 | 21.63 | 66.00 | 3.92 | 0.54 | 31.05 | 86.22 | 11.52 | 0.66 | 29.53 | 82.54 | 3.97 | 0.60 | 30.56 |
| 1 | 79.70 | 8.03 | 0.55 | 0.03 | 60.81 | 5.79 | 0.49 | 0.09 | 81.04 | 6.28 | 0.49 | 0.05 | 71.04 | 6.35 | 0.46 | 0.06 |
0.1 | 94.55 | 5.25 | 0.58 | 34.31 | 72.41 | 5.73 | 0.63 | 46.31 | 89.85 | 3.08 | 0.68 | 31.91 | 85.33 | 5.43 | 0.60 | 26.92 |
0.01 | 71.93 | 4.92 | 0.76 | 64.91 | 65.58 | 4.24 | 0.71 | 79.04 | 78.57 | 4.24 | 0.71 | 68.16 | 82.56 | 3.22 | 0.71 | 74.48 |
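The stopping rule described in the captions of Tables 10 and 11 can be sketched as below, using the standard definitions of absolute and relative error between successive iterates; the variable names and the fixed-point form are assumptions for illustration, since the exact formulas are elided from the extracted caption.

```python
def converged(current: float, previous: float,
              abs_tol: float = 1e-3, rel_tol: float = 1e-3) -> bool:
    """Terminate when the absolute or relative error drops below its threshold."""
    abs_err = abs(current - previous)
    rel_err = abs_err / max(abs(previous), 1e-12)
    return abs_err < abs_tol or rel_err < rel_tol

def iterate(step, x0: float, max_iter: int = 20) -> float:
    """Run a fixed-point style iteration with the convergence check above."""
    x = x0
    for _ in range(max_iter):
        x_next = step(x)
        if converged(x_next, x):
            return x_next
        x = x_next
    return x  # samples hitting max_iter contribute to the "error ratio" in Table 10
```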