WiPg: Contactless Action Recognition Using Ambient Wi-Fi Signals
Abstract
1. Introduction
- To address the problem that differences in body type across people degrade recognition performance, the recognition method in WiPg is designed to work for diverse users. By combining a GAN with a CNN, it extracts high-level features that are independent of body size and then performs action recognition on those features.
- WiPg achieves high recognition accuracy. To this end, we first collected a large amount of real experimental data and compared performance across different numbers of samples; more samples give the learning algorithm more opportunity to capture the underlying input-to-output mapping, yielding a better-performing model. Second, during model training, the parameters were tuned repeatedly until the best recognition result was obtained.
- Because training sessions are long, yoga practitioners cannot maintain the standard form of the opening pose from the beginning to the end of the whole session. WiPg judges this by setting a threshold: if repeated feedback updates fail to output a result, the pose is judged non-standard; if a classification result can be output, the pose is judged standard.
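The threshold rule described above can be sketched as follows. This is a minimal illustration only; the function name `judge_action`, the confidence sequence, and all parameter values are hypothetical assumptions, not taken from the paper:

```python
def judge_action(confidences, threshold=0.8, max_updates=5):
    """Hypothetical sketch of WiPg's threshold rule: each feedback update
    yields a classifier confidence; if any update clears the threshold, the
    pose is judged standard and the classification result is output, and if
    the loop exhausts without a result, the pose is judged non-standard."""
    for step, conf in enumerate(confidences[:max_updates]):
        if conf >= threshold:
            return ("standard", step)    # a classification result can be output
    return ("non-standard", None)        # repeated updates produced no result

print(judge_action([0.4, 0.6, 0.9]))    # clears the threshold on the 3rd update
print(judge_action([0.1, 0.2, 0.3]))    # never clears it: non-standard
```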
2. Related Theory
2.1. Theory of CSI
2.2. Methods of Data Preprocessing
2.2.1. An Overview of the Butterworth Filter
2.2.2. An Overview of the PCA
2.3. Overview of the GAN
3. Overview of WiPg
- We collected CSI data for 14 standard yoga poses from 10 experimenters in three real experimental environments. The noise in the collected data was removed with a Butterworth filter and PCA, while the main characteristics of the yoga CSI data were retained. See Section 3.1 for details;
- A fast Fourier transform of the energy changes determines which segment of the data corresponds to a yoga action. See Section 3.2 for details;
- A CNN is integrated into the GAN to build an action-recognition model that learns a person-independent representation of the yoga CSI data and then performs action recognition. See Section 3.3 for details;
- The last step estimates whether the yoga pose is standard. The criterion is a threshold obtained through extensive training tests: if the output falls within the threshold range, the pose is standard and the recognition result is output; otherwise, feedback is updated, and if the loop never manages to output a result, the pose is judged non-standard. See Section 3.3 for details.
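The motion-detection idea in the second step above can be sketched with a per-window FFT energy detector. This is a simplified, numpy-only illustration; the window size, threshold ratio, and synthetic signal are assumptions, and the Butterworth/PCA denoising stage is omitted:

```python
import numpy as np

def detect_motion_windows(csi, win=64, ratio=3.0):
    """Split a CSI amplitude stream into fixed windows and flag the windows
    whose spectral energy (sum of squared FFT magnitudes) rises well above
    the quiet baseline -- those windows contain the action."""
    n = len(csi) // win
    energies = []
    for i in range(n):
        seg = csi[i * win:(i + 1) * win]
        spec = np.fft.rfft(seg - seg.mean())       # remove DC, then real FFT
        energies.append(np.sum(np.abs(spec) ** 2)) # spectral energy of window
    energies = np.asarray(energies)
    baseline = np.median(energies)                 # robust quiet-period level
    return np.where(energies > ratio * baseline)[0]

# Synthetic stream: low-level noise with a burst of activity in windows 3-5.
rng = np.random.default_rng(0)
sig = 0.05 * rng.standard_normal(64 * 9)
sig[64 * 3:64 * 6] += np.sin(2 * np.pi * 5 * np.arange(64 * 3) / 64)
print(detect_motion_windows(sig))  # the three active windows
```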
3.1. Data Preprocessing
3.2. Motion Detection
3.3. Action Recognition of WiPg
- Initialize the parameters of the generation network, G, and the discriminator network, D;
- Extract n real data samples with person labels from the training set; G then generates person labels for n unlabeled data samples, forming new data;
- Fix G and train D so that D distinguishes the true and false data samples as well as possible; in other words, D marks the person who generated each CSI sample and obtains a person-label distribution, R, through a fully connected layer;
- Update D k times for every single update of G. D maximizes person-label prediction performance, while G minimizes the prediction accuracy of the discriminator, D. After many update iterations, the person-related features are ideally eliminated, and person-label prediction performance drops.
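The alternating k-to-1 update schedule above can be sketched as follows. This is a toy illustration of the schedule only, not the authors' training code; the `person_info` scalar and its decay factor are stand-in assumptions for how G gradually erases person-specific structure:

```python
def adversarial_schedule(iterations=100, k=5):
    """Sketch of the WiPg alternating rule: per iteration, D is updated k
    times (maximizing person-label prediction on the current features),
    then G is updated once (minimizing D's prediction accuracy). The
    scalar `person_info` stands in for how much person-specific
    information survives in G's features; each G step shrinks it."""
    d_updates = g_updates = 0
    person_info = 1.0
    for _ in range(iterations):
        for _ in range(k):
            d_updates += 1       # D step: fit person labels on current features
        g_updates += 1           # G step: remove person-dependent structure
        person_info *= 0.9       # stand-in for the adversarial removal effect
    return d_updates, g_updates, person_info

d, g, info = adversarial_schedule()
print(d, g)                      # 500 D updates for 100 G updates (k = 5)
print(info < 1e-3)               # person-related information nearly eliminated
```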
3.3.1. Generation Network, G
3.3.2. Discriminator Network, D
4. Experiment and Analysis
4.1. Experimental Scene
4.2. Performance Evaluation
4.3. Experimental Verification
4.3.1. Influence of Different People’s Bodies on the Recognition Rate
4.3.2. Impact of Different Numbers of Packets, Subcarriers, and Experimental Scenes on the Recognition Rate
4.3.3. Impact of the Number of Epochs and Batch Size
4.3.4. Comprehensive Evaluation
- (1) Comparative experiments of different classification methods
- (2) Comparative experiments of different motion-recognition methods
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Experimenter | Gender | Height (m) | Weight (kg) | BMI |
---|---|---|---|---|
A | Female | 1.57 | 57 | 23.13 |
B | Female | 1.62 | 54 | 20.57 |
C | Female | 1.66 | 51 | 18.50 |
D | Female | 1.75 | 52 | 16.97 |
E | Female | 1.80 | 63 | 19.44 |
F | Male | 1.69 | 56 | 20.84 |
G | Male | 1.73 | 70 | 23.39 |
H | Male | 1.77 | 80 | 25.53 |
M | Male | 1.85 | 75 | 21.91 |
N | Male | 1.89 | 70 | 19.59 |
Experimental Scenario | Method | Accuracy | F1 Score | AUC |
---|---|---|---|---|
Yoga Classroom | RF | 0.899 | 0.849 | 0.8974 |
Yoga Classroom | SVM | 0.906 | 0.857 | 0.9234 |
Yoga Classroom | WiPg | 0.933 | 0.907 | 0.9421 |
Lab | RF | 0.858 | 0.845 | 0.8843 |
Lab | SVM | 0.891 | 0.853 | 0.9045 |
Lab | WiPg | 0.914 | 0.876 | 0.9125 |
Dormitory | RF | 0.834 | 0.841 | 0.8576 |
Dormitory | SVM | 0.872 | 0.851 | 0.8947 |
Dormitory | WiPg | 0.899 | 0.867 | 0.9095 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Hao, Z.; Niu, J.; Dang, X.; Qiao, Z. WiPg: Contactless Action Recognition Using Ambient Wi-Fi Signals. Sensors 2022, 22, 402. https://doi.org/10.3390/s22010402