Deep Learning Models for Stress Analysis in University Students: A Sudoku-Based Study
Abstract
1. Introduction
2. Related Work
3. Materials and Methods
3.1. Experiment Design
- Scenario 1: The participant was left alone in the room, solving Sudoku puzzles while exposed to horror or discordant audio (e.g., white noise) and horror videos (e.g., zombie movies); all three conditions are summarized in the configuration sketch after this list.
- Scenario 2: No music or videos were played during this scenario. Instead, a person was present in the room observing the participant while they solved the Sudoku puzzles.
- Scenario 3: The participant was left alone in the room, solving Sudoku puzzles while being exposed to comforting audio and videos, such as sounds of birds, waterfalls, and rainfall.
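For reference, the three conditions above can be captured in a small configuration structure for dataset annotation. The Python sketch below is illustrative only: the `Scenario` dataclass, its field names, and the stimulus labels are assumptions for bookkeeping, not the authors' implementation.

```python
# Hypothetical encoding of the three experimental conditions; labels and field
# names are illustrative, not taken from the study's code.
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    scenario_id: int
    description: str
    stimulus: str            # audio/visual condition during the Sudoku task
    observer_present: bool

SCENARIOS = [
    Scenario(1, "discordant audio and horror videos", "aversive", False),
    Scenario(2, "silent room with an observer present", "none", True),
    Scenario(3, "comforting nature audio and videos", "soothing", False),
]

for s in SCENARIOS:
    print(f"Scenario {s.scenario_id}: {s.description} (observer={s.observer_present})")
```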
3.2. Data Collection
3.3. Data Pre-Processing
3.4. Model Training
3.4.1. StressNeXt
3.4.2. LRCN
3.4.3. Self-Supervised CNN
3.4.4. Training Parameters
4. Results and Discussion
4.1. Scenario-Based Self-Reporting Stress Analysis
4.2. Classifier Evaluation
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Li, C. From Involution to Education: A Glance to Chinese Young Generation. In Proceedings of the 2021 4th International Conference on Humanities Education and Social Sciences (ICHESS 2021), Xishuangbanna, China, 29–31 October 2021; Atlantis Press: Amsterdam, The Netherlands, 2021; pp. 1884–1887.
- Ponzini, A. Educating the new Chinese middle-class youth: The role of quality education on ideas of class and status. J. Chin. Sociol. 2020, 7, 1.
- Pascoe, M.C.; Hetrick, S.E.; Parker, A.G. The impact of stress on students in secondary school and higher education. Int. J. Adolesc. Youth 2020, 25, 104–112.
- Chapell, M.S.; Blanding, Z.B.; Silverstein, M.E.; Takahashi, M.; Newman, B.; Gubi, A.; McCann, N. Test anxiety and academic performance in undergraduate and graduate students. J. Educ. Psychol. 2005, 97, 268–274.
- College Student Suicide: Failures and Potential Solutions. Available online: https://www.brainsway.com/knowledge-center/college-student-suicide-failures-and-potential-solutions/#:~:text=How%20Many%20College%20Students%20Commit,for%20death%20among%20college%20students (accessed on 19 February 2023).
- Wagh, K.P.; Vasanth, K. Performance evaluation of multi-channel electroencephalogram signal (EEG) based time frequency analysis for human emotion recognition. Biomed. Signal Process. Control 2022, 78, 103966.
- Vijayakumar, S.; Flynn, R.; Corcoran, P.; Murray, N. CNN-based Emotion Recognition from Multimodal Peripheral Physiological Signals. In Proceedings of the IMX’22: ACM International Conference on Interactive Media Experiences, Aveiro, Portugal, 22–24 June 2022.
- Miao, M.; Zheng, L.; Xu, B.; Yang, Z.; Hu, W. A multiple frequency bands parallel spatial–temporal 3D deep residual learning framework for EEG-based emotion recognition. Biomed. Signal Process. Control 2023, 79, 104141.
- Montero Quispe, K.G.; Utyiama, D.M.; Dos Santos, E.M.; Oliveira, H.A.; Souto, E.J. Applying Self-Supervised Representation Learning for Emotion Recognition Using Physiological Signals. Sensors 2022, 22, 9102.
- Tang, Y.; Wang, Y.; Zhang, X.; Wang, Z. STILN: A Novel Spatial-Temporal Information Learning Network for EEG-based Emotion Recognition. arXiv 2022, arXiv:2211.12103.
- Choi, J.; Lee, J.S.; Ryu, M.; Hwang, G.; Hwang, G.; Lee, S.J. Attention-LRCN: Long-term Recurrent Convolutional Network for Stress Detection from Photoplethysmography. In Proceedings of the 2022 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Messina, Italy, 22–24 June 2022; IEEE: New York, NY, USA, 2022; pp. 1–6.
- Miranda-Correa, J.A.; Abadi, M.K.; Sebe, N.; Patras, I. Amigos: A dataset for affect, personality and mood research on individuals and groups. IEEE Trans. Affect. Comput. 2018, 12, 479–493.
- Katsigiannis, S.; Ramzan, N. DREAMER: A database for emotion recognition through EEG and ECG signals from wireless low-cost off-the-shelf devices. IEEE J. Biomed. Health Inform. 2017, 22, 98–107.
- Koldijk, S.; Sappelli, M.; Verberne, S.; Neerincx, M.A.; Kraaij, W. The swell knowledge work dataset for stress and user modeling research. In Proceedings of the 16th International Conference on Multimodal Interaction, Istanbul, Turkey, 12–16 November 2014; pp. 291–298.
- Zheng, W.-L.; Lu, B.-L. Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Trans. Auton. Ment. Dev. 2015, 7, 162–175.
- Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.-S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. Deap: A database for emotion analysis; using physiological signals. IEEE Trans. Affect. Comput. 2011, 3, 18–31.
- Schmidt, P.; Reiss, A.; Duerichen, R.; Marberger, C.; Van Laerhoven, K. Introducing wesad, a multimodal dataset for wearable stress and affect detection. In Proceedings of the 20th ACM International Conference on Multimodal Interaction, Boulder, CO, USA, 16–20 October 2018; pp. 400–408.
- Hu, X. Wenjuanxing Official Website. Available online: https://www.wjx.cn/ (accessed on 19 December 2022).
- Polar Electro. Polar Verity Sense. Available online: https://www.polar.com/us-en/products/accessories/polar-verity-sense (accessed on 21 December 2022).
- Xinweilai. BMD101 ECG Detection Package. Taobao. Available online: https://item.taobao.com/item.htm?spm=a230r.1.14.22.a4734ab0qwBJQL&id=618036232572&ns=1&abbucket=1&mt= (accessed on 19 June 2023).
- NeuroSky. MindWave Mobile Setup Kit. Available online: https://mwm2.neurosky.com/ (accessed on 8 August 2022).
- Mekruksavanich, S.; Hnoohom, N.; Jitpattanakul, A. A Deep Residual-based Model on Multi-Branch Aggregation for Stress and Emotion Recognition through Biosignals. In Proceedings of the 2022 19th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), Prachuap Khiri Khan, Thailand, 24–27 May 2022; IEEE: New York, NY, USA, 2022; pp. 1–4.
- Fawaz, H.I.; Lucas, B.; Forestier, G.; Pelletier, C.; Schmidt, D.F.; Weber, J.; Webb, G.I.; Idoumghar, L.; Muller, P.-A.; Petitjean, F. Inceptiontime: Finding alexnet for time series classification. Data Min. Knowl. Discov. 2020, 34, 1936–1962.
- Li, R.; Liu, Z. Stress detection using deep neural networks. BMC Med. Inform. Decis. Mak. 2020, 20, 285.
- Arsalan, A.; Majid, M. Human stress classification during public speaking using physiological signals. Comput. Biol. Med. 2021, 133, 104377.
- Behinaein, B.; Bhatti, A.; Rodenburg, D.; Hungler, P.; Etemad, A. A Transformer Architecture for Stress Detection from ECG. In Proceedings of the 2021 ACM International Symposium on Wearable Computers, Virtual, 21–26 September 2021; pp. 132–134.
- Egilmez, B.; Poyraz, E.; Wenting, Z.; Memik, G.; Dinda, P.; Alshurafa, N. UStress: Understanding college student subjective stress using wrist-based passive sensing. In Proceedings of the 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), Kona, HI, USA, 13–17 March 2017; pp. 673–678.
- Seo, W.; Kim, N.; Kim, S.; Lee, C.; Park, S.-M. Deep ECG-Respiration Network (DeepER Net) for Recognizing Mental Stress. Sensors 2019, 19, 3021.
- Bobade, P.; Vani, M. Stress Detection with Machine Learning and Deep Learning using Multimodal Physiological Data. In Proceedings of the 2020 Second International Conference on Inventive Research in Computing Applications (ICIRCA), Coimbatore, India, 15–17 July 2020; pp. 51–57.
- Hwang, B.; You, J.; Vaessen, T.; Myin-Germeys, I.; Park, C.; Zhang, B.-T. Deep ECGNet: An optimal deep learning framework for monitoring mental stress using ultra short-term ECG signals. Telemed. e-Health 2018, 24, 753–772.
- Rastgoo, M.N.; Nakisa, B.; Maire, F.; Rakotonirainy, A.; Chandran, V. Automatic driver stress level classification using multimodal deep learning. Expert Syst. Appl. 2019, 138, 112793.
| Faculty | Sophomore | Junior | Senior | Ph.D. | Total |
|---|---|---|---|---|---|
| FOSE | 2 | 1 | 2 | 2 | 7 |
| FHSS | 11 | — | — | — | 11 |
| NUBS | 2 | 3 | 6 | 1 | 12 |
| Total | 15 | 4 | 8 | 3 | 30 |
| Model Name | Data | Accuracy | F1-Score |
|---|---|---|---|
| StressNeXt | PPG | 83.90% | 69.04% |
| | ECG | 85.71% | 69.61% |
| | EEG | 66.31% | 39.40% |
| | PPG + ECG | 88.22% | 77.48% |
| | PPG + EEG | 83.26% | 68.35% |
| | ECG + EEG | 90.02% | 80.45% |
| | PPG + ECG + EEG | 86.90% | 74.26% |
| LRCN | PPG | 84.27% | 70.49% |
| | ECG | 93.42% | 88.11% |
| | EEG | 80.15% | 62.36% |
| | PPG + ECG | 86.77% | 74.66% |
| | PPG + EEG | 83.89% | 69.67% |
| | ECG + EEG | 91.39% | 84.31% |
| | PPG + ECG + EEG | 84.44% | 71.35% |
| Self-Supervised CNN | PPG | 81.66% | 63.98% |
| | ECG | 90.07% | 81.11% |
| | EEG | 74.44% | 28.90% |
| | PPG + ECG | 86.05% | 71.47% |
| | PPG + EEG | 84.72% | 69.90% |
| | ECG + EEG | 90.32% | 81.04% |
| | PPG + ECG + EEG | 80.69% | 52.70% |
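For context when reading the result tables, accuracy and F1-score for a binary stressed/not-stressed classification can be computed as in the sketch below. It uses scikit-learn on randomly generated dummy labels rather than the study's data, and it reports binary F1 on the positive class as one possible convention; it is not the authors' evaluation pipeline.

```python
# Minimal metric sketch with dummy labels (not the study's data).
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                 # 0 = not stressed, 1 = stressed
flip = rng.random(1000) >= 0.9                         # corrupt ~10% of the labels
y_pred = np.where(flip, 1 - y_true, y_true)

print(f"Accuracy: {accuracy_score(y_true, y_pred):.2%}")
print(f"F1-score: {f1_score(y_true, y_pred):.2%}")     # binary F1 on the stressed class
```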
| Model Name | Scenario | Data | Accuracy | F1-Score |
|---|---|---|---|---|
| StressNeXt | Scenario 1 | PPG | 84.78% | 79.90% |
| | | ECG | 93.48% | 90.83% |
| | | EEG | 63.39% | 48.68% |
| | | PPG + ECG | 89.52% | 85.44% |
| | | PPG + EEG | 80.98% | 73.95% |
| | | ECG + EEG | 92.71% | 90.02% |
| | | PPG + ECG + EEG | 87.88% | 84.37% |
| | Scenario 2 | PPG | 86.64% | 79.53% |
| | | ECG | 96.80% | 94.27% |
| | | EEG | 64.81% | 47.75% |
| | | PPG + ECG | 83.73% | 65.20% |
| | | PPG + EEG | 84.18% | 75.01% |
| | | ECG + EEG | 95.93% | 91.33% |
| | | PPG + ECG + EEG | 89.37% | 76.86% |
| | Scenario 3 | PPG | 95.22% | 78.55% |
| | | ECG | 97.79% | 91.37% |
| | | EEG | 86.86% | 46.81% |
| | | PPG + ECG | 97.79% | 91.62% |
| | | PPG + EEG | 94.80% | 77.17% |
| | | ECG + EEG | 98.78% | 95.39% |
| | | PPG + ECG + EEG | 98.29% | 92.88% |
| LRCN | Scenario 1 | PPG | 81.78% | 78.10% |
| | | ECG | 95.13% | 93.72% |
| | | EEG | 69.93% | 55.68% |
| | | PPG + ECG | 88.52% | 85.41% |
| | | PPG + EEG | 81.15% | 77.31% |
| | | ECG + EEG | 93.51% | 91.17% |
| | | PPG + ECG + EEG | 81.51% | 77.14% |
| | Scenario 2 | PPG | 82.94% | 72.06% |
| | | ECG | 96.46% | 93.90% |
| | | EEG | 73.35% | 54.19% |
| | | PPG + ECG | 85.21% | 74.37% |
| | | PPG + EEG | 83.31% | 73.50% |
| | | ECG + EEG | 97.96% | 96.67% |
| | | PPG + ECG + EEG | 87.84% | 75.98% |
| | Scenario 3 | PPG | 95.43% | 78.90% |
| | | ECG | 97.16% | 93.63% |
| | | EEG | 90.42% | 57.76% |
| | | PPG + ECG | 95.44% | 84.94% |
| | | PPG + EEG | 92.14% | 68.00% |
| | | ECG + EEG | 96.71% | 90.31% |
| | | PPG + ECG + EEG | 92.00% | 67.49% |
| Self-Supervised CNN | Scenario 1 | PPG | 92.06% | 90.09% |
| | | ECG | 95.06% | 93.01% |
| | | EEG | 63.35% | 28.10% |
| | | PPG + ECG | 88.22% | 82.41% |
| | | PPG + EEG | 91.08% | 89.11% |
| | | ECG + EEG | 87.62% | 80.42% |
| | | PPG + ECG + EEG | 90.05% | 87.98% |
| | Scenario 2 | PPG | 92.95% | 89.92% |
| | | ECG | 96.61% | 94.54% |
| | | EEG | 69.90% | 30.35% |
| | | PPG + ECG | 90.53% | 79.41% |
| | | PPG + EEG | 90.08% | 85.39% |
| | | ECG + EEG | 96.15% | 93.09% |
| | | PPG + ECG + EEG | 88.56% | 76.57% |
| | Scenario 3 | PPG | 95.48% | 78.54% |
| | | ECG | 98.50% | 94.66% |
| | | EEG | 89.95% | 37.93% |
| | | PPG + ECG | 95.14% | 77.24% |
| | | PPG + EEG | 95.80% | 82.97% |
| | | ECG + EEG | 97.06% | 85.73% |
| | | PPG + ECG + EEG | 94.89% | 77.83% |
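The modality combinations reported above (PPG + ECG, ECG + EEG, etc.) imply some form of signal fusion before classification. One common approach, sketched below purely as an assumption rather than a description of the authors' method, is to resample each signal to a common window length and stack the modalities along the channel axis of a single 1D input. The window length, sampling rate, and channel counts in the example are made up.

```python
# Illustrative channel-level fusion of physiological signals (not the paper's
# actual fusion strategy).
import numpy as np

def fuse_modalities(ppg: np.ndarray, ecg: np.ndarray, eeg: np.ndarray) -> np.ndarray:
    """Stack (channels, time) arrays from each modality along the channel axis.

    All inputs must already share the same segment length T.
    """
    assert ppg.shape[-1] == ecg.shape[-1] == eeg.shape[-1], "align segment lengths first"
    return np.concatenate([ppg, ecg, eeg], axis=0)     # shape: (C_ppg + C_ecg + C_eeg, T)

# Example with made-up dimensions: single-channel PPG, ECG, and EEG,
# each a 10-second window at 128 Hz.
T = 10 * 128
segment = fuse_modalities(np.zeros((1, T)), np.zeros((1, T)), np.zeros((1, T)))
print(segment.shape)                                   # (3, 1280)
```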
| Model Name | Sudoku Difficulty | Data | Accuracy | F1-Score |
|---|---|---|---|---|
| StressNeXt | Medium | PPG | 87.75% | 78.58% |
| | | ECG | 90.04% | 83.72% |
| | | EEG | 69.98% | 44.72% |
| | | PPG + ECG | 83.79% | 73.58% |
| | | PPG + EEG | 86.06% | 75.56% |
| | | ECG + EEG | 87.85% | 78.48% |
| | | PPG + ECG + EEG | 86.25% | 76.48% |
| | Hard | PPG | 85.36% | 73.37% |
| | | ECG | 89.41% | 77.91% |
| | | EEG | 67.59% | 43.92% |
| | | PPG + ECG | 84.96% | 70.63% |
| | | PPG + EEG | 81.80% | 67.53% |
| | | ECG + EEG | 90.05% | 79.82% |
| | | PPG + ECG + EEG | 84.52% | 69.55% |
| LRCN | Medium | PPG | 85.52% | 74.36% |
| | | ECG | 91.56% | 87.91% |
| | | EEG | 79.75% | 57.72% |
| | | PPG + ECG | 84.90% | 74.16% |
| | | PPG + EEG | 84.37% | 70.64% |
| | | ECG + EEG | 82.83% | 67.56% |
| | | PPG + ECG + EEG | 84.37% | 72.07% |
| | Hard | PPG | 82.58% | 68.27% |
| | | ECG | 92.00% | 83.55% |
| | | EEG | 77.14% | 58.98% |
| | | PPG + ECG | 86.98% | 75.84% |
| | | PPG + EEG | 81.55% | 66.90% |
| | | ECG + EEG | 87.74% | 75.85% |
| | | PPG + ECG + EEG | 87.10% | 75.75% |
| Self-Supervised CNN | Medium | PPG | 89.70% | 81.48% |
| | | ECG | 88.71% | 80.70% |
| | | EEG | 75.55% | 29.64% |
| | | PPG + ECG | 87.36% | 73.33% |
| | | PPG + EEG | 92.49% | 84.64% |
| | | ECG + EEG | 84.92% | 70.74% |
| | | PPG + ECG + EEG | 89.38% | 80.53% |
| | Hard | PPG | 91.29% | 83.10% |
| | | ECG | 89.74% | 81.06% |
| | | EEG | 73.34% | 28.20% |
| | | PPG + ECG | 89.95% | 80.51% |
| | | PPG + EEG | 84.93% | 70.77% |
| | | ECG + EEG | 90.63% | 82.80% |
| | | PPG + ECG + EEG | 89.81% | 81.14% |
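For readers who want to query results such as the table above programmatically, the pandas sketch below transcribes a handful of LRCN rows and selects the highest-accuracy modality combination per difficulty level. The DataFrame layout is an assumption for illustration, not part of the study's analysis code.

```python
# Query a few transcribed rows from the Sudoku-difficulty table (LRCN only).
import pandas as pd

rows = [
    ("LRCN", "Medium", "PPG", 85.52, 74.36),
    ("LRCN", "Medium", "ECG", 91.56, 87.91),
    ("LRCN", "Medium", "ECG + EEG", 82.83, 67.56),
    ("LRCN", "Hard", "PPG", 82.58, 68.27),
    ("LRCN", "Hard", "ECG", 92.00, 83.55),
    ("LRCN", "Hard", "ECG + EEG", 87.74, 75.85),
]
df = pd.DataFrame(rows, columns=["model", "difficulty", "data", "accuracy", "f1"])

# Highest-accuracy modality combination per (model, difficulty) group.
best = df.loc[df.groupby(["model", "difficulty"])["accuracy"].idxmax()]
print(best)
```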
| Model | Accuracy | F1-Score | Input Data | Scenarios | Number of Participants |
|---|---|---|---|---|---|
| Transformer [26] | 71.60% | 74.20% | Raw ECG | Participants write reports on two assigned topics and give a presentation on one of them (SWELL dataset) | 25 |
| Random Forest [27] | 78.80% | 88.80% | Extracted features of GSR and heart rate | Students perform multiple tasks, including a sing-a-song test, writing emails, a color-word test, a game, arithmetic questions, social conversation, eating, homework, and placing their hands in an ice bucket | 9 |
| AdaBoost DT (3-class classification) [17] | 80.34% | 72.51% | Extracted features of PPG, EDA, SKT | Participants read magazines, take the TSST, and watch amusing videos (WESAD dataset) | 17 |
| DeepER Net [28] | 83.90% | 81.00% | Extracted features of ECG and RSP | University students solve math tasks or take a color-word test | 18 |
| Artificial Neural Network (ANN) [29] | 84.32% | 78.71% | Extracted features of ACC, PPG, EDA, TEMP, RESP, EMG, and ECG | Participants read magazines, take the TSST, and watch amusing videos (WESAD dataset) | 17 |
| Deep ECGNet [30] | 87.39% | 73.96% | Extracted features of ECG | Students complete multiple tasks, including arithmetic problems, a color-word test, and an interview | 30 |
| CNN-LSTM Network [31] | 92.80% | 94.56% | Raw ECG, vehicle dynamic data, environmental parameters | Participants drive a simulator under different scenarios, including urban, highway, and city routes | 17 |
| Multi-layer Perceptron [24] | 93.64% | 92.44% | Raw PPG, EDA, SKT | Participants read magazines, take the TSST, and watch amusing videos (WESAD dataset) | 17 |
| SVM-RBF [25] | 96.25% | 96.00% | Extracted features of PPG, GSR, EEG | Participants prepare a talk and speak in front of a real audience | 40 |
| Deep 1D-CNN [12] | 97.48% | 96.82% | ECG, EDA, EMG, RESP, TEMP, ACC | Participants watch a series of videos | 15 |
| Proposed model (general) | 93.42% | 88.11% | ECG + EEG | Students solve Sudoku puzzles under different conditions, including a noisy environment, observation by another individual, and comforting stimuli | 30 |
| Proposed model (scenario 1) | 95.13% | 93.72% | | | |
| Proposed model (scenario 2) | 97.76% | 96.67% | | | |
| Proposed model (scenario 3) | 98.78% | 95.39% | | | |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).