Deep Residual Network with a CBAM Mechanism for the Recognition of Symmetric and Asymmetric Human Activity Using Wearable Sensors
Abstract
1. Introduction
- This study delves into detecting symmetric and asymmetric human activities utilizing wearable sensors. To achieve this, we employed two well-established benchmark HAR datasets: WISDM-HARB and UTwente. These datasets offer diverse symmetric and asymmetric human actions, providing a robust foundation for our research.
- The proposed model, CNN-ResBiGRU-CBAM, integrates a deep residual network with an attention mechanism (CBAM). This design is tailored to effectively learn and capture the nuanced characteristics of symmetry and asymmetry in sensor data.
- Extensive evaluations demonstrate the efficacy of our method, showcasing impressive accuracy rates of 89.01% and 96.49% on the WISDM-HARB and UTwente datasets, respectively. These evaluations emphasize the model’s ability to differentiate between symmetric and asymmetric activities. Notably, our approach surpasses the performance of conventional CNNs and long short-term memory (LSTM) models in this classification task.
- Furthermore, our study conducts thorough assessments to elucidate the impact of various sensor types on the classification of symmetric and asymmetric human activities. This comprehensive analysis sheds light on the nuances of sensor selection and its implications for accurate activity recognition.
2. Related Works
2.1. Sensor-Based HAR
2.2. Deep Learning Approaches for HAR
3. Methodology
- Symmetric activities entail the coordinated use of both sides of the body in a mirrored fashion, as depicted in Figure 1a. Common symmetric activities recognized by sensor-based HAR systems include walking, running, climbing stairs, biking, and similar motions.
- In contrast, asymmetric actions involve the body’s use in a manner that lacks symmetry or balance, as illustrated in Figure 1b. Rather than exhibiting symmetrical movements across opposing limbs or body parts, these activities feature unpredictable and irregular motions that differ between the two sides of the body. Such actions demonstrate unilateral variability, which adds complexity to their analysis. Examples include typing, drinking, writing, eating, and other unstructured movements performed with a single hand or on one side of the body, as commonly observed in daily life.
3.1. Overview of the Sensor-Based HAR Framework
3.2. Data Acquisition
3.2.1. WISDM-HARB Dataset
3.2.2. UTwente Dataset
3.3. Data Pre-Processing
3.3.1. Data Denoising
3.3.2. Data Normalization
3.3.3. Data Segmentation
3.4. The Proposed CNN-ResBiGRU-CBAM Model
3.4.1. Convolution Block
3.4.2. Residual BiGRU Block
3.4.3. CBAM Block
3.5. CNN-ResBiGRU-CBAM Model Hyperparameters
3.6. Cross-Validation
4. Experiments and Results
4.1. Experimental Settings
- NumPy and Pandas are used to load, organize, and process the sensor data throughout retrieval and analysis.
- Matplotlib and Seaborn are used to create visualizations and present the results of data analysis and model evaluation.
- Scikit-learn (sklearn) provides utilities for data preparation, splitting, and computing evaluation metrics.
- TensorFlow is used to construct and train the deep learning models; a minimal sketch of how these tools fit together is shown below.
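To make the toolchain concrete, the snippet below sketches how these libraries fit together in a single evaluation run. The synthetic data, the stand-in classifier, and the window size and channel count are placeholder assumptions for illustration only; the actual CNN-ResBiGRU-CBAM architecture, hyperparameters, and cross-validation protocol are described in Sections 3.4 to 3.6.

```python
# Minimal end-to-end sketch (assumptions: synthetic data, stand-in model).
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder segmented windows: 1000 samples, 128 time steps, 6 channels
# (accelerometer + gyroscope); 18 activity classes as in WISDM-HARB.
X = np.random.randn(1000, 128, 6).astype("float32")
y = np.random.randint(0, 18, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Stand-in classifier; the CNN-ResBiGRU-CBAM architecture itself is sketched in Section 3.5.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 6)),
    tf.keras.layers.Conv1D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(18, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# The paper trains with batch size 128 for 200 epochs; a short run is used here.
model.fit(X_train, y_train, batch_size=128, epochs=5, verbose=0)

y_pred = np.argmax(model.predict(X_test), axis=1)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_test, y_pred, average="weighted", zero_division=0)
print(f"Accuracy: {accuracy_score(y_test, y_pred):.4f}  "
      f"Precision: {precision:.4f}  Recall: {recall:.4f}  F1: {f1:.4f}")
```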
4.2. Experimental Results
4.2.1. Experimental Results from the WISDM-HARB Dataset
4.2.2. Experimental Results from the UTwente Dataset
4.3. Comparison with State-of-the-Art Models
5. Discussion
5.1. Impact of Different Types of Sensors
5.2. Impact of Different Types of Activities
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Wang, Y.; Cang, S.; Yu, H. A survey on wearable sensor modality centred human activity recognition in health care. Expert Syst. Appl. 2019, 137, 167–190. [Google Scholar] [CrossRef]
- Wang, Z.; Yang, Z.; Dong, T. A Review of Wearable Technologies for Elderly Care that Can Accurately Track Indoor Position, Recognize Physical Activities and Monitor Vital Signs in Real Time. Sensors 2017, 17, 341. [Google Scholar] [CrossRef] [PubMed]
- Haghi, M.; Thurow, K.; Stoll, R. Wearable Devices in Medical Internet of Things: Scientific Research and Commercially Available Devices. Healthc. Inform. Res. 2017, 23, 4–15. [Google Scholar] [CrossRef]
- Park, J.H.; Moon, J.H.; Kim, H.J.; Kong, M.H.; Oh, Y.H. Sedentary Lifestyle: Overview of Updated Evidence of Potential Health Risks. Korean J. Fam. Med. 2020, 41, 365–373. [Google Scholar] [CrossRef]
- Oh, Y.; Choi, S.A.; Shin, Y.; Jeong, Y.; Lim, J.; Kim, S. Investigating Activity Recognition for Hemiparetic Stroke Patients Using Wearable Sensors: A Deep Learning Approach with Data Augmentation. Sensors 2024, 24, 210. [Google Scholar] [CrossRef]
- Kraft, D.; Srinivasan, K.; Bieber, G. Deep Learning Based Fall Detection Algorithms for Embedded Systems, Smartwatches, and IoT Devices Using Accelerometers. Technologies 2020, 8, 72. [Google Scholar] [CrossRef]
- Mekruksavanich, S.; Jitpattanakul, A. Deep Residual Network for Smartwatch-Based User Identification through Complex Hand Movements. Sensors 2022, 22, 3094. [Google Scholar] [CrossRef] [PubMed]
- Proffitt, R.; Ma, M.; Skubic, M. Development and Testing of a Daily Activity Recognition System for Post-Stroke Rehabilitation. Sensors 2023, 23, 7872. [Google Scholar] [CrossRef]
- Zhou, X.; Liang, W.; Wang, K.I.K.; Wang, H.; Yang, L.T.; Jin, Q. Deep-Learning-Enhanced Human Activity Recognition for Internet of Healthcare Things. IEEE Internet Things J. 2020, 7, 6429–6438. [Google Scholar] [CrossRef]
- Fridriksdottir, E.; Bonomi, A.G. Accelerometer-Based Human Activity Recognition for Patient Monitoring Using a Deep Neural Network. Sensors 2020, 20, 6424. [Google Scholar] [CrossRef]
- Lara, O.D.; Labrador, M.A. A Survey on Human Activity Recognition using Wearable Sensors. IEEE Commun. Surv. Tutor. 2013, 15, 1192–1209. [Google Scholar] [CrossRef]
- Peng, L.; Chen, L.; Ye, Z.; Zhang, Y. AROMA: A Deep Multi-Task Learning Based Simple and Complex Human Activity Recognition Method Using Wearable Sensors. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2018, 2, 74. [Google Scholar] [CrossRef]
- Shoaib, M.; Bosch, S.; Incel, O.D.; Scholten, H.; Havinga, P.J.M. Complex Human Activity Recognition Using Smartphone and Wrist-Worn Motion Sensors. Sensors 2016, 16, 426. [Google Scholar] [CrossRef] [PubMed]
- Alo, U.R.; Nweke, H.F.; Teh, Y.W.; Murtaza, G. Smartphone Motion Sensor-Based Complex Human Activity Identification Using Deep Stacked Autoencoder Algorithm for Enhanced Smart Healthcare System. Sensors 2020, 20, 6300. [Google Scholar] [CrossRef] [PubMed]
- Liu, L.; Peng, Y.; Liu, M.; Huang, Z. Sensor-based human activity recognition system with a multilayered model using time series shapelets. Knowl. Based Syst. 2015, 90, 138–152. [Google Scholar] [CrossRef]
- Chen, L.; Liu, X.; Peng, L.; Wu, M. Deep learning based multimodal complex human activity recognition using wearable devices. Appl. Intell. 2021, 51, 4029–4042. [Google Scholar] [CrossRef]
- Tahir, B.S.; Ageed, Z.S.; Hasan, S.S.; Zeebaree, S.R.M. Modified Wild Horse Optimization with Deep Learning Enabled Symmetric Human Activity Recognition Model. Comput. Mater. Contin. 2023, 75, 4009–4024. [Google Scholar] [CrossRef]
- Cengiz, A.B.; Birant, K.U.; Cengiz, M.; Birant, D.; Baysari, K. Improving the Performance and Explainability of Indoor Human Activity Recognition in the Internet of Things Environment. Symmetry 2022, 14, 2022. [Google Scholar] [CrossRef]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. arXiv 2014, arXiv:1409.4842. [Google Scholar]
- Long, J.; Sun, W.; Yang, Z.; Raymond, O.I. Asymmetric Residual Neural Network for Accurate Human Activity Recognition. Information 2019, 10, 203. [Google Scholar] [CrossRef]
- Tuncer, T.; Ertam, F.; Dogan, S.; Aydemir, E.; Pławiak, P. Ensemble Residual Networks based Gender and Activity Recognition Method with Signals. J. Supercomput. 2020, 76, 2119–2138. [Google Scholar] [CrossRef]
- Ronald, M.; Poulose, A.; Han, D.S. iSPLInception: An Inception-ResNet Deep Learning Architecture for Human Activity Recognition. IEEE Access 2021, 9, 68985–69001. [Google Scholar] [CrossRef]
- Mehmood, K.; Imran, H.A.; Latif, U. HARDenseNet: A 1D DenseNet Inspired Convolutional Neural Network for Human Activity Recognition with Inertial Sensors. In Proceedings of the 2020 IEEE 23rd International Multitopic Conference (INMIC), Bahawalpur, Pakistan, 5–7 November 2020; pp. 1–6. [Google Scholar] [CrossRef]
- Xu, C.; Chai, D.; He, J.; Zhang, X.; Duan, S. InnoHAR: A Deep Neural Network for Complex Human Activity Recognition. IEEE Access 2019, 7, 9893–9902. [Google Scholar] [CrossRef]
- Zhao, Y.; Yang, R.; Chevalier, G.; Gong, M. Deep Residual Bidir-LSTM for Human Activity Recognition Using Wearable Sensors. arXiv 2017, arXiv:1708.08989. [Google Scholar] [CrossRef]
- Malki, Z.; Atlam, E.S.; Dagnew, G.; Alzighaibi, A.; Elmarhomy, G.; Gad, I. Bidirectional Residual LSTM-based Human Activity Recognition. Comput. Inf. Sci. 2020, 13, 40. [Google Scholar] [CrossRef]
- Challa, S.; Semwal, V. A multibranch CNN-BiLSTM model for human activity recognition using wearable sensor data. Vis. Comput. 2021, 38, 4095–4109. [Google Scholar] [CrossRef]
- Gao, W.; Zhang, L.; Teng, Q.; He, J.; Wu, H. DanHAR: Dual Attention Network for multimodal human activity recognition using wearable sensors. Appl. Soft Comput. 2021, 111, 107728. [Google Scholar] [CrossRef]
- Murahari, V.S.; Plötz, T. On attention models for human activity recognition. In Proceedings of the 2018 ACM International Symposium on Wearable Computers ISWC ’18, Singapore, 8–12 October 2018; pp. 100–103. [Google Scholar] [CrossRef]
- Khan, Z.N.; Ahmad, J. Attention induced multi-head convolutional neural network for human activity recognition. Appl. Soft Comput. 2021, 110, 107671. [Google Scholar] [CrossRef]
- Weiss, G.M.; Yoneda, K.; Hayajneh, T. Smartphone and Smartwatch-Based Biometrics Using Activities of Daily Living. IEEE Access 2019, 7, 133190–133202. [Google Scholar] [CrossRef]
- Anguita, D.; Ghio, A.; Oneto, L.; Parra, X.; Reyes-Ortiz, J.L. A Public Domain Dataset for Human Activity Recognition using Smartphones. In Proceedings of the European Symposium on Artificial Neural Networks, Bruges, Belgium, 24–26 April 2013; pp. 437–442. [Google Scholar]
- Mekruksavanich, S.; Jitpattanakul, A. Deep Convolutional Neural Network with RNNs for Complex Activity Recognition Using Wrist-Worn Wearable Sensor Data. Electronics 2021, 10, 1685. [Google Scholar] [CrossRef]
- Banos, O.; Galvez, J.M.; Damas, M.; Pomares, H.; Rojas, I. Window Size Impact in Human Activity Recognition. Sensors 2014, 14, 6474–6499. [Google Scholar] [CrossRef]
- Hochreiter, S. The Vanishing Gradient Problem During Learning Recurrent Neural Nets and Problem Solutions. Int. J. Uncertain. Fuzziness Knowl. Based Syst. 1998, 6, 107–116. [Google Scholar] [CrossRef]
- Cho, K.; van Merriënboer, B.; Bahdanau, D.; Bengio, Y. On the Properties of Neural Machine Translation: Encoder–Decoder Approaches. In Proceedings of the SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, Doha, Qatar, 25 October 2014; pp. 103–111. [Google Scholar] [CrossRef]
- Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. arXiv 2014, arXiv:1412.3555. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the Computer Vision—ECCV 2018, Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Agac, S.; Durmaz Incel, O. On the Use of a Convolutional Block Attention Module in Deep Learning-Based Human Activity Recognition with Motion Sensors. Diagnostics 2023, 13, 1861. [Google Scholar] [CrossRef] [PubMed]
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; The MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
- Zhang, Z.; Sabuncu, M.R. Generalized cross entropy loss for training deep neural networks with noisy labels. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS’18, Montreal, QC, Canada, 3–8 December 2018; pp. 8792–8802. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Wong, T.T. Performance evaluation of classification algorithms by k-fold and leave-one-out cross validation. Pattern Recognit. 2015, 48, 2839–2846. [Google Scholar] [CrossRef]
- Bragança, H.; Colonna, J.G.; Oliveira, H.A.B.F.; Souto, E. How Validation Methodology Influences Human Activity Recognition Mobile Systems. Sensors 2022, 22, 2360. [Google Scholar] [CrossRef] [PubMed]
- Suglia, V.; Palazzo, L.; Bevilacqua, V.; Passantino, A.; Pagano, G.; D’Addio, G. A Novel Framework Based on Deep Learning Architecture for Continuous Human Activity Recognition with Inertial Sensors. Sensors 2024, 24, 2199. [Google Scholar] [CrossRef]
- Ismail Fawaz, H.; Lucas, B.; Forestier, G.; Pelletier, C.; Schmidt, D.F.; Weber, J.; Webb, G.I.; Idoumghar, L.; Muller, P.A.; Petitjean, F. InceptionTime: Finding AlexNet for time series classification. Data Min. Knowl. Discov. 2020, 34, 1936–1962. [Google Scholar] [CrossRef]
- Aparecido Garcia, F.; Mazzoni Ranieri, C.; Aparecida Francelin Romero, R. Temporal Approaches for Human Activity Recognition Using Inertial Sensors. In Proceedings of the 2019 Latin American Robotics Symposium (LARS), 2019 Brazilian Symposium on Robotics (SBR) and 2019 Workshop on Robotics in Education (WRE), Rio Grande, Brazil, 23–25 October 2019; pp. 121–125. [Google Scholar] [CrossRef]
| Ref. | Year | Method | Dataset | Sensor Types | Sensor Location | No. of Activities | Focus on Symmetric and Asymmetric Motions |
|---|---|---|---|---|---|---|---|
| [21] | 2019 | Dual-scaled residual network | OPPORTUNITY | A, G | Body | 11 | no |
| | | | UniMiB-SHAR | A | | 17 | |
| [22] | 2020 | Ensemble ResNet | UCI-DSA | A, G, M | Body | 19 | no |
| [23] | 2021 | iSPLInception | OPPORTUNITY | A, G | Body | 17 | no |
| | | | PAMAP2 | A, G, M | Head, Chest, Ankle | 12 | |
| [24] | 2020 | DenseNet | UCI-HAR | A, G | Waist | 6 | no |
| [25] | 2019 | InnoHAR | OPPORTUNITY | A, G | Body | 17 | no |
| | | | PAMAP2 | A, G, M | Head, Chest, Ankle | 12 | |
| | | | UCI-HAR | A, G | Waist | 6 | |
| [26] | 2017 | Residual BiLSTM | OPPORTUNITY | A, G | Body | 17 | no |
| | | | UCI-HAR | A, G | Waist | 6 | |
| [27] | 2020 | BiLSTM | mHealth | A, G, M | Ankle, Arm, Chest | 12 | no |
| [28] | 2021 | Multibranch CNN-BiLSTM | WISDM | A | | 6 | no |
| | | | UCI-HAR | A, G | Waist | 6 | |
| | | | PAMAP2 | A, G, M | Head, Chest, Ankle | 12 | |
| [29] | 2021 | Dual Attention Network | WISDM | A | | 6 | no |
| | | | UniMiB-SHAR | A | | 17 | |
| | | | PAMAP2 | A, G, M | Head, Chest, Ankle | 12 | |
| | | | OPPORTUNITY | A, G | Body | 18 | |
| [30] | 2018 | Att-DeepConvLSTM | OPPORTUNITY | A, G | Body | 17 | no |
| | | | PAMAP2 | A, G, M | Head, Chest, Ankle | 12 | |
| | | | Skoda | A | Arm | 10 | |
| Our approach | - | CNN-ResBiGRU-CBAM | WISDM-HARB | A, G | Wrist | 18 | yes |
| | | | UTwente | A, G | Wrist | 13 | |
| Type | Activity | Description |
|---|---|---|
| Symmetric | Walking | Engaging in the activity of moving on foot outside. |
| | Jogging | Engaging in the activity of running at a steady and moderate pace outside. |
| | Stairs | Repeatedly ascending and descending many flights of stairs. |
| | Sitting | Being in a sitting position. |
| | Standing | Being in an upright position on one’s feet. |
| | Clapping | Striking one’s hands together to produce a sound, using both hands. |
| Asymmetric | Typing | Performing keyboard input tasks while seated. |
| | Brushing Teeth | Engaging in oral hygiene by brushing teeth. |
| | Eating Soup | Consuming soup from a bowl. |
| | Eating Chips | Ingesting snack chips. |
| | Eating Pasta | Partaking in pasta consumption. |
| | Eating Sandwich | Consuming a sandwich meal. |
| | Drinking | Taking liquid refreshment from a cup. |
| | Kicking | Striking a soccer ball with the foot. |
| | Catching a ball | Intercepting a thrown object, such as a tennis ball. |
| | Dribbling | Manipulating a basketball with repeated bounces. |
| | Writing | Producing written content while seated. |
| | Folding | Organizing clothing items by creasing and arranging them. |
| Type | Activity | Description |
|---|---|---|
| Symmetric | Walking | Walking at a normal pace on a flat surface indoors |
| | Jogging | Jogging at a moderate pace on a flat surface indoors |
| | Standing | Standing still in an upright position |
| | Sitting | Sitting in a chair with minimal movement |
| | Biking | Riding a bicycle outdoors on a flat surface |
| | Walking Upstairs | Climbing multiple flights of stairs in an upward direction |
| | Walking Downstairs | Descending multiple flights of stairs in a downward direction |
| Asymmetric | Typing | Typing on a computer keyboard while seated in a chair |
| | Writing | Handwriting with a pen on paper while seated in a chair |
| | Drinking Coffee | Consuming a beverage from a cup while seated |
| | Talking | Engaging in a conversation while standing still |
| | Smoking | Smoking a cigarette while standing still |
| | Eating | Consuming a cup of soup using a spoon while seated |
| Stage | Block / Layer | Hyperparameter | Value |
|---|---|---|---|
| Architecture | Convolutional Block (× 4) | | |
| | 1D Convolution | Kernel Size | 3 |
| | | Stride | 1 |
| | | Filters | 256 |
| | | Activation | ReLU |
| | Batch Normalization | - | |
| | Max Pooling | Pool Size | 2 |
| | Dropout | Rate | 0.25 |
| | Residual BiGRU Block | | |
| | ResBiGRU_1 | Units | 128 |
| | ResBiGRU_2 | Units | 64 |
| | CBAM Block | | |
| | CBAM Layer | - | |
| | Classification Block | | |
| | Dense | Units | Number of activity classes |
| | | Activation | Softmax |
| Training | | Loss Function | Cross-entropy |
| | | Optimizer | Adam |
| | | Batch Size | 128 |
| | | Number of Epochs | 200 |
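Based on the hyperparameter table above, the following is a minimal Keras sketch of one plausible CNN-ResBiGRU-CBAM implementation. The CBAM reduction ratio and spatial-attention kernel size, the global pooling before the classifier, the 1 × 1 projection used to match residual shapes, and the example window length and channel count are assumptions not specified in the table; they are included only to make the sketch self-contained and runnable, and should not be read as the authors' reference implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def cbam_block(x, reduction=8, kernel_size=7):
    """CBAM adapted to 1D feature maps: channel attention followed by spatial
    attention. The reduction ratio and kernel size are assumed values."""
    channels = x.shape[-1]
    # Channel attention: shared two-layer MLP over average- and max-pooled features.
    mlp_hidden = layers.Dense(channels // reduction, activation="relu")
    mlp_out = layers.Dense(channels)
    avg_branch = mlp_out(mlp_hidden(layers.GlobalAveragePooling1D()(x)))
    max_branch = mlp_out(mlp_hidden(layers.GlobalMaxPooling1D()(x)))
    channel_att = layers.Activation("sigmoid")(layers.Add()([avg_branch, max_branch]))
    x = layers.Multiply()([x, layers.Reshape((1, channels))(channel_att)])
    # Spatial attention: 1D convolution over concatenated channel-wise statistics.
    avg_map = layers.Lambda(lambda t: tf.reduce_mean(t, axis=-1, keepdims=True))(x)
    max_map = layers.Lambda(lambda t: tf.reduce_max(t, axis=-1, keepdims=True))(x)
    spatial_att = layers.Conv1D(1, kernel_size, padding="same", activation="sigmoid")(
        layers.Concatenate()([avg_map, max_map]))
    return layers.Multiply()([x, spatial_att])


def res_bigru_block(x, units):
    """Bidirectional GRU with an additive residual (skip) connection."""
    shortcut = x
    x = layers.Bidirectional(layers.GRU(units, return_sequences=True))(x)
    if shortcut.shape[-1] != x.shape[-1]:
        # 1x1 convolution to match feature dimensions before the residual addition.
        shortcut = layers.Conv1D(x.shape[-1], 1, padding="same")(shortcut)
    return layers.Add()([x, shortcut])


def build_cnn_resbigru_cbam(window_length, n_channels, n_classes):
    inputs = layers.Input(shape=(window_length, n_channels))
    x = inputs
    # Four convolutional blocks: Conv1D (kernel 3, stride 1, 256 filters, ReLU),
    # batch normalization, max pooling (size 2), and dropout 0.25.
    for _ in range(4):
        x = layers.Conv1D(256, 3, strides=1, padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling1D(2)(x)
        x = layers.Dropout(0.25)(x)
    x = res_bigru_block(x, 128)   # ResBiGRU_1
    x = res_bigru_block(x, 64)    # ResBiGRU_2
    x = cbam_block(x)             # CBAM block
    x = layers.GlobalAveragePooling1D()(x)  # pooling before the classifier is an assumption
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    # Cross-entropy loss and the Adam optimizer, as listed in the training settings
    # (assumes one-hot encoded labels).
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model


# Example: 128-sample windows, 6 channels (accelerometer + gyroscope), 18 classes
# as in WISDM-HARB; the window length is an illustrative assumption.
model = build_cnn_resbigru_cbam(window_length=128, n_channels=6, n_classes=18)
```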
| Model | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|
| CNN | 69.10% | 68.77% | 69.11% | 68.62% |
| LSTM | 81.08% | 81.13% | 81.07% | 80.94% |
| BiLSTM | 84.14% | 84.17% | 84.13% | 84.05% |
| GRU | 81.39% | 81.41% | 81.38% | 81.25% |
| BiGRU | 85.39% | 85.43% | 85.39% | 85.36% |
| CNN-ResBiGRU-CBAM | 86.77% | 86.90% | 86.77% | 86.69% |
| Model | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|
| CNN | 59.36% | 59.08% | 59.31% | 58.90% |
| LSTM | 73.34% | 73.17% | 73.33% | 73.81% |
| BiLSTM | 73.89% | 73.66% | 73.56% | 73.84% |
| GRU | 71.21% | 72.76% | 71.20% | 71.50% |
| BiGRU | 72.34% | 72.26% | 72.31% | 72.19% |
| CNN-ResBiGRU-CBAM | 75.13% | 75.28% | 75.12% | 73.76% |
| Model | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|
| CNN | 72.27% | 72.38% | 72.24% | 71.96% |
| LSTM | 82.63% | 82.55% | 82.61% | 82.49% |
| BiLSTM | 86.00% | 86.01% | 85.99% | 85.93% |
| GRU | 84.77% | 84.85% | 84.78% | 84.70% |
| BiGRU | 86.92% | 86.88% | 86.92% | 86.83% |
| CNN-ResBiGRU-CBAM | 89.01% | 89.00% | 89.01% | 88.94% |
| Model | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|
| CNN | 85.55% | 86.28% | 85.53% | 85.13% |
| LSTM | 84.70% | 84.75% | 84.70% | 83.90% |
| BiLSTM | 90.64% | 91.02% | 90.64% | 90.51% |
| GRU | 93.55% | 93.80% | 93.55% | 93.51% |
| BiGRU | 95.68% | 95.73% | 95.68% | 95.65% |
| CNN-ResBiGRU-CBAM | 96.15% | 96.39% | 96.15% | 96.14% |
| Model | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|
| CNN | 72.08% | 72.44% | 72.08% | 71.72% |
| LSTM | 38.86% | 38.52% | 38.86% | 35.32% |
| BiLSTM | 64.12% | 63.92% | 64.13% | 62.23% |
| GRU | 81.53% | 81.93% | 81.53% | 81.20% |
| BiGRU | 75.17% | 75.36% | 75.17% | 74.34% |
| CNN-ResBiGRU-CBAM | 88.93% | 89.79% | 88.93% | 88.45% |
| Model | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|
| CNN | 93.07% | 93.22% | 93.07% | 92.95% |
| LSTM | 90.00% | 90.53% | 90.00% | 89.82% |
| BiLSTM | 93.29% | 93.42% | 93.28% | 93.25% |
| GRU | 94.66% | 94.93% | 94.66% | 94.60% |
| BiGRU | 95.72% | 95.80% | 95.72% | 95.71% |
| CNN-ResBiGRU-CBAM | 96.49% | 96.62% | 96.49% | 96.47% |
| Type | Activity | DeepConvTCN [49] F1-Score | InceptionTime [48] F1-Score | Proposed CNN-ResBiGRU-CBAM F1-Score |
|---|---|---|---|---|
| Symmetric | Walking | 0.96 | 0.93 | 0.97 |
| | Jogging | 0.98 | 0.73 | 1.00 |
| | Stairs | 0.88 | 0.98 | 0.94 |
| | Sitting | 0.78 | 0.80 | 0.80 |
| | Standing | 0.82 | 0.91 | 0.86 |
| | Clapping | 0.98 | 0.86 | 0.97 |
| Asymmetric | Typing | 0.88 | 0.80 | 0.92 |
| | Brushing Teeth | 0.98 | 0.98 | 0.96 |
| | Eating Soup | 0.86 | 0.87 | 0.85 |
| | Eating Chips | 0.75 | 0.73 | 0.70 |
| | Eating Pasta | 0.82 | 0.83 | 0.84 |
| | Drinking | 0.87 | 0.86 | 0.82 |
| | Eating Sandwich | 0.62 | 0.73 | 0.58 |
| | Kicking | 0.90 | 0.81 | 0.95 |
| | Catching a ball | 0.93 | 0.90 | 0.97 |
| | Dribbling | 0.94 | 0.90 | 0.98 |
| | Writing | 0.86 | 0.72 | 0.94 |
| | Folding | 0.87 | 0.88 | 0.97 |
| Average | | 0.87 | 0.85 | 0.89 |
| Type | Activity | DeepConvTCN [49] F1-Score | InceptionTime [48] F1-Score | Proposed CNN-ResBiGRU-CBAM F1-Score |
|---|---|---|---|---|
| Symmetric | Walking | 0.91 | 0.87 | 0.99 |
| | Jogging | 0.97 | 0.97 | 1.00 |
| | Standing | 0.87 | 0.86 | 0.95 |
| | Sitting | 0.89 | 0.98 | 0.88 |
| | Biking | 0.90 | 0.98 | 1.00 |
| | Walking Upstairs | 0.98 | 0.99 | 0.98 |
| | Walking Downstairs | 0.97 | 0.97 | 0.99 |
| Asymmetric | Typing | 0.92 | 0.95 | 0.99 |
| | Writing | 0.98 | 0.91 | 1.00 |
| | Drinking Coffee | 0.82 | 0.85 | 0.94 |
| | Talking | 0.89 | 0.82 | 0.91 |
| | Smoking | 0.87 | 0.84 | 0.94 |
| | Eating | 0.97 | 0.99 | 0.97 |
| Average | | 0.918 | 0.922 | 0.965 |