Decoding Brain Signals from Rapid-Event EEG for Visual Analysis Using Deep Learning
Abstract
1. Introduction
2. Related Work
1. Usage of a block design during signal acquisition.
2. No preprocessing employed, i.e., usage of unfiltered data, resulting in training on noisy data.
3. Test data are sourced from the same block as the training data.
1. EEG data collected on a set of object classes/images identical to that utilized by Spampinato et al. [34];
2. Application of the same block design technique;
3. Adoption of similar preprocessing methods;
4. Utilization of test data sourced from the same block as the training data.
1. Determination of the optimal number of object classes to increase accuracy, as a low number of classes results in higher accuracy and vice versa;
2.
3.
4. Ensuring accurate labeling of the data, since incorrect or arbitrary labeling of events in block and rapid-event designs can produce artificial accuracy boosts, as reported in the analysis by Li et al. [33] (page 318, Section 2, point e);
5.
3. Materials and Methods
3.1. Dataset Description
3.2. Preprocessing and Feature Extraction
1. Raw EEG data are re-referenced to the mastoids to remove external noise and artifacts.
2. The data are band-pass filtered with a zero-phase FIR filter from the MNE library; the zero-phase design avoids phase shifts, and the filter's gradual transition bands attenuate frequency components below 14 Hz and above 71 Hz while limiting ringing artifacts in the signal.
3. A notch filter at 49–51 Hz is applied to remove 50 Hz power-line noise.
4. The data are then epoched around events, from −0.5 s to 2.5 s relative to each event. For a batch of files, the epoched data retrieved at this stage have a shape of 4,997,120 × 104: the 4,997,120 rows are time points and the 104 columns are sensor positions.
5. Events corresponding to 400 visual stimuli are extracted from the stimulus channel and assigned unique class labels for all 40 classes.
6. The data are then annotated using the unique labels and the epoch data (a code sketch of the full pipeline follows this list).
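The pipeline above can be approximated in MNE-Python roughly as follows. This is a minimal sketch: the file name, the mastoid reference channels ("EXG1"/"EXG2"), the "Status" stimulus channel, and the 1–40 event-ID mapping are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of the preprocessing pipeline described above, using MNE-Python.
# File name, reference/stimulus channel names, and the event-ID mapping are
# hypothetical placeholders, not taken from the paper.
import mne

# BioSemi ActiveTwo recordings are stored as BDF files.
raw = mne.io.read_raw_bdf("subject01.bdf", preload=True)

# 1. Re-reference to the mastoid electrodes (channel names assumed).
raw.set_eeg_reference(ref_channels=["EXG1", "EXG2"])

# 2. Zero-phase FIR band-pass filter, 14-71 Hz.
raw.filter(l_freq=14.0, h_freq=71.0, method="fir", phase="zero")

# 3. Notch filter around the 50 Hz power line (covers the 49-51 Hz band).
raw.notch_filter(freqs=50.0)

# 4.-5. Extract stimulus events and epoch from -0.5 s to 2.5 s around each one.
events = mne.find_events(raw, stim_channel="Status")   # usual BioSemi status channel
event_id = {f"class_{k}": k for k in range(1, 41)}     # 40 classes, labels assumed 1..40
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.5, tmax=2.5, baseline=None, preload=True)

# 6. Labeled arrays ready for classifier training.
X = epochs.get_data()    # shape: (n_epochs, n_channels, n_times)
y = epochs.events[:, 2]  # integer class label per epoch
```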
3.3. Proposed Classifiers
4. Experimental Results
4.1. Results with a No-Filtering Approach
4.2. Results with Filtered Data
5. Discussion
5.1. Effect of the Sensor Selection Strategy
5.2. Effect of Window Sizes on Signal Accuracy
5.3. Comparison
6. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Sánchez-Reyes, L.M.; Rodríguez-Reséndiz, J.; Avecilla-Ramírez, G.N.; García-Gomar, M.L. Novel algorithm for detection of cognitive dysfunction using neural networks. Biomed. Signal Process. Control 2024, 90, 105853.
- Sánchez-Reyes, L.M.; Rodríguez-Reséndiz, J.; Avecilla-Ramírez, G.N.; García-Gomar, M.L.; Robles-Ocampo, J.B. Impact of EEG parameters detecting dementia diseases: A systematic review. IEEE Access 2021, 9, 60–74.
- Shen, G.; Horikawa, T.; Majima, K.; Kamitani, Y. Deep image reconstruction from human brain activity. PLoS Comput. Biol. 2019, 15, e1006633.
- Das, K.; Giesbrecht, B.; Eckstein, M.P. Predicting variations of perceptual performance across individuals from neural activity using pattern classifiers. NeuroImage 2010, 51, 1425–1437.
- Kalafatovich, J.; Lee, M.; Lee, S.-W. Learning Spatiotemporal Graph Representations for Visual Perception Using EEG Signals. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 97.
- Kersten, D.; Mamassian, P.; Yuille, A. Object Perception as Bayesian Inference. Annu. Rev. Psychol. 2004, 55, 271–304.
- Katayama, O.; Stern, Y.; Habeck, C.; Coors, A.; Lee, S.; Harada, K.; Makino, K.; Tomida, K.; Morikawa, M.; Yamaguchi, R.; et al. Detection of neurophysiological markers of cognitive reserve: An EEG study. Front. Aging Neurosci. 2024, 16, 1401818.
- Rehman, A.; Khalili, Y.A. Neuroanatomy, Occipital Lobe. In StatPearls; StatPearls Publishing: Treasure Island, FL, USA, 2019.
- Holdaway, T. “Principles of Psychology PS200”, Chapter 18: The Brain; PressBooks: Montreal, QC, Canada, 2024.
- Cai, G.; Zhang, F.; Yang, B. Manifold Learning-Based Common Spatial Pattern for EEG Signal Classification. IEEE J. Biomed. Health Inform. 2024, 28, 1971–1981.
- Yang, K.; Hu, Y.; Zeng, Y.; Tong, L.; Gao, Y.; Pei, C.; Li, Z.; Yan, B. EEG Network Analysis of Depressive Emotion Interference Spatial Cognition Based on a Simulated Robotic Arm Docking Task. Brain Sci. 2024, 14, 44.
- Phukan, A.; Gupta, D. Deep feature extraction from EEG signals using Xception model for emotion classification. Multimed. Tools Appl. 2024, 83, 33445–33463.
- Du, X.; Meng, Y.; Qiu, S.; Lv, Y.; Liu, Q. EEG Emotion Recognition by Fusion of Multi-Scale Features. Brain Sci. 2023, 13, 1293.
- Wang, Y.; Zhang, B.; Di, L. Research Progress of EEG-Based Emotion Recognition: A Survey. ACM Comput. Surv. 2024, 56, 1–49.
- Krishnan, P.T.; Erramchetty, S.K.; Balusa, B.C. Advanced framework for epilepsy detection through image-based EEG signal analysis. Front. Hum. Neurosci. 2024, 18, 1336157.
- Su, K.-M.; Hairston, W.D.; Robbins, K. EEG-Annotate: Automated identification and labeling of events in continuous signals with applications to EEG. J. Neurosci. Methods 2018, 293, 359–374.
- Zhang, X.; Zhang, X.; Huang, Q.; Lv, Y.; Chen, F. A review of automated sleep stage scoring based on EEG signals. Biocybern. Biomed. Eng. 2024, 44, 651–673.
- Jamil, N.; Belkacem, A.N. Advancing Real-Time Remote Learning: A Novel Paradigm for Cognitive Enhancement Using EEG and Eye-Tracking Analytics. IEEE Access 2024, 12, 93116–93132.
- Koctúrová, M.; Juhár, J. A Novel Approach to EEG Speech Activity Detection with Visual Stimuli and Mobile BCI. Appl. Sci. 2021, 11, 674.
- Craik, A.; He, Y.; Contreras-Vidal, J.L. Deep learning for electroencephalogram (EEG) classification tasks: A review. J. Neural Eng. 2019, 16, 031001.
- Ruiz, S.; Lee, S.; Dalboni da Rocha, J.L.; Ramos-Murguialday, A.; Pasqualotto, E.; Soares, E.; García, E.; Fetz, E.; Birbaumer, N.; Sitaram, R. Motor Intentions Decoded from fMRI Signals. Brain Sci. 2024, 14, 643.
- Huettel, S.A. Event-related fMRI in cognition. NeuroImage 2012, 63, 1152–1156.
- Hahn, A.; Reed, M.B.; Vraka, C.; Godbersen, G.M.; Klug, S.; Komorowski, A.; Falb, P.; Nics, L.; Traub-Weidinger, T.; Hacker, M.; et al. High-temporal resolution functional PET/MRI reveals coupling between human metabolic and hemodynamic brain response. Eur. J. Nucl. Med. Mol. Imaging 2024, 51, 1310–1322.
- Chowdhury, E.; Mahadevappa, M.; Kumar, C.S. Identification of Finger Movement from ECoG Signal Using Machine Learning Model. In Proceedings of the IEEE 9th International Conference for Convergence in Technology (I2CT), Pune, India, 5–7 April 2024; pp. 1–6.
- Afnan, J.; Cai, Z.; Lina, J.M.; Abdallah, C.; Delaire, E.; Avigdor, T.; Ros, V.; Hedrich, T.; von Ellenrieder, N.; Kobayashi, E.; et al. EEG/MEG source imaging of deep brain activity within the maximum entropy on the mean framework: Simulations and validation in epilepsy. Hum. Brain Mapp. 2024, 45, e26720.
- Sharma, R.; Meena, H.K. Emerging Trends in EEG Signal Processing: A Systematic Review. SN Comput. Sci. 2024, 5, 415.
- Dash, D.; Wisler, A.; Ferrari, P.; Davenport, E.M.; Maldjian, J.; Wang, J. MEG Sensor Selection for Neural Speech Decoding. IEEE Access 2020, 8, 182320–182337.
- Sari-Sarraf, V.; Vakili, J.; Tabatabaei, S.M.; Golizadeh, A. The brain function promotion by modulating the power of beta and gamma waves subsequent to twelve weeks of time-pressure training in chess players. J. Appl. Health Stud. Sport Physiol. 2024.
- Simanova, I.; van Gerven, M.; Oostenveld, R.; Hagoort, P. Identifying Object Categories from Event-Related EEG: Toward Decoding of Conceptual Representations. PLoS ONE 2010, 5, e14465.
- Ashford, J.; Jones, J. Classification of EEG Signals Based on Image Representations of Statistical Features. In Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2020.
- Deng, X.; Wang, Z.; Liu, K.; Xiang, X. A GAN Model Encoded by CapsEEGNet for Visual EEG Encoding and Image Reproduction. J. Neurosci. Methods 2023, 384, 109747.
- Bharadwaj, H.M.; Wilbur, R.B.; Siskind, J.M. Still an Ineffective Method with Supertrials/ERPs—Comments on ‘Decoding Brain Representations by Multimodal Learning of Neural Activity and Visual Features’. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 14052.
- Li, R.; Johansen, J.S. The Perils and Pitfalls of Block Design for EEG Classification Experiments. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 316–333.
- Spampinato, C.; Palazzo, S. Deep Learning Human Mind for Automated Visual Classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017.
- Raza, H.; Rathee, D.; Zhou, S.M.; Cecotti, H.; Prasad, G. Covariate shift estimation based adaptive ensemble learning for handling non-stationarity in motor imagery related EEG-based brain–computer interface. Neurocomputing 2019, 343, 154–166.
- Bashivan, P.; Rish, I.; Yeasin, M.; Codella, N. Learning Representations from EEG with Deep Recurrent-Convolutional Neural Networks. In Proceedings of the International Conference on Learning Representations (ICLR), San Juan, Puerto Rico, 2–4 May 2016.
- Bird, J.; Jones, L.; Milsom, D.; Malekmohammadi, A. A study on mental state classification using EEG-based brain–machine interface. In Proceedings of the 9th International Conference on Intelligent Systems, Funchal, Portugal, 25–27 September 2018.
- Nuthakki, S.; Kumar, S.; Kulkarni, C.S.; Nuthakki, Y. Role of AI Enabled Smart Meters to Enhance Customer Satisfaction. Int. J. Comput. Sci. Mob. Comput. 2022, 11, 99–107.
- Rehman, M.; Ahmed, T. Optimized k-Nearest Neighbor Search with Range Query. Nucleus 2015, 52, 45–49.
- Wu, H.; Li, S.; Wu, D. Motor Imagery Classification for Asynchronous EEG-Based Brain–Computer Interfaces. IEEE Trans. Neural Syst. Rehabil. Eng. 2024, 32, 527–536.
- Pasanta, D.; Puts, N.A. Functional Spectroscopy. In Reference Module in Neuroscience and Biobehavioral Psychology; Elsevier: Amsterdam, The Netherlands, 2024.
- Padfield, N.; Zabalza, J.; Zhao, H.; Masero, V.; Ren, J. EEG-based brain–computer interfaces using motor imagery: Techniques and challenges. Sensors 2019, 19, 1423.
- Nuthakki, S.; Kolluru, V.K.; Nuthakki, Y.; Koganti, S. Integrating Predictive Analytics and Computational Statistics for Cardiovascular Health Decision-Making. Int. J. Innov. Res. Creat. Technol. 2023, 9, 1–12.
- Miladinovic, A.; Ajsevic, M.; Jarmolowska, J.; Marusic, U.; Colussi, M.; Silveri, M.; Battaglini, G. Effect of power feature covariance shift on BCI spatial-filtering techniques: A comparative study. Comput. Methods Programs Biomed. 2021, 198, 105808.
- Ahmed, H.; Wilbur, R.B.; Bharadwaj, H.M.; Siskind, J.M. Confounds in the data—Comments on ‘Decoding Brain Representations by Multimodal Learning of Neural Activity and Visual Features’. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 9217–9220.
- Palazzo, S.; Spampinato, C. Decoding Brain Representations by Multimodal Learning of Neural Activity and Visual Features. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 3833–3849.
- Zheng, X.; Chen, W. Ensemble Deep Learning for Automated Visual Classification Using EEG Signals. Pattern Recognit. 2019, 102, 107147.
- Fares, A.; Zahir, S.; Shedeed, H. Region Level Bi-directional Deep Learning Framework for EEG-based Image Classification. In Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine, Madrid, Spain, 3–6 December 2018.
- Fares, A.; Zahir, S.; Shedeed, H. EEG-based image classification via a region-level stacked bi-directional deep learning framework. In Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine, Madrid, Spain, 3–6 December 2018.
- Guo, W.; Xu, G.; Wang, Y. Brain visual image signal classification via hybrid dilation residual shrinkage network with spatio-temporal feature fusion. Signal Image Video Process. 2023, 17, 743–751.
- Abbasi, H.; Seyedarabi, H.; Razavi, S.N. A combinational deep learning approach for automated visual classification using EEG signals. Signal Image Video Process. 2024, 18, 2453–2464.
- Ahmed, H.; Wilbur, R.B.; Bharadwaj, H.M.; Siskind, J.M. Object Classification from Randomized EEG Trials. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; p. 3845.
- Kaneshiro, B.; Guimaraes, M.P.; Kim, H.S.; Norcia, A.M. A Representational Similarity Analysis of the Dynamics of Object Processing Using Single-Trial EEG Classification. PLoS ONE 2015, 10, e0135697.
- Gifford, A.T.; Dwivedi, K.; Roig, G.; Cichy, R.M. A Large and Rich EEG Dataset for Modeling Human Visual Object Recognition. NeuroImage 2022, 264, 119754.
- Vivancos, D.; Cuesta, F. MindBigData 2022: A Large Dataset of Brain Signals. arXiv 2022, arXiv:2212.14746.
- Cichy, R.M.; Pantazis, D. Multivariate Pattern Analysis of MEG and EEG: A Comparison of Representational Structure in Time and Space. NeuroImage 2017, 158, 441–454.
- Falciglia, S.; Betello, F.; Russo, S.; Napoli, C. Learning Visual Stimulus-Evoked EEG Manifold for Neural Images Classification. Neurocomputing 2024, 588, 127654.
- Bhalerao, S.V.; Pachori, R.B. Automated Classification of Cognitive Visual Objects Using Multivariate Swarm Sparse Decomposition from Multichannel EEG-MEG Signals. IEEE Trans. Hum.-Mach. Syst. 2024, 54, 455–464.
- Ahmadieh, H.; Gassemi, F.; Moradi, M.H. A Hybrid Deep Learning Framework for Automated Visual Classification Using EEG Signals. Neural Comput. Appl. 2023, 35, 20989–21005.
- Zhu, S.; Ye, Z.; Ai, Q. EEG-ImageNet: An Electroencephalogram Dataset and Benchmarks with Image Visual Stimuli of Multi-Granularity Labels. arXiv 2024, arXiv:2406.07151.
- Ye, Z.; Yao, L.; Zhang, Y.; Gustin, S. Self-Supervised Cross-Modal Visual Retrieval from Brain Activities. Pattern Recognit. 2024, 145, 109915.
- Li, R.; Johansen, J.S.; Ahmed, H.; Ilyevsky, T.V.; Wilbur, R.B.; Bharadwaj, H.M. Training on the Test Set? An Analysis of Spampinato et al. arXiv 2018, arXiv:1812.07697.
- Singh, A.K.; Krishnan, S. Trends in EEG signal feature extraction applications. Front. Artif. Intell. 2023, 6, 1072801.
- Ahmed, I.; Jahangir, M.; Iqbal, S.T.; Azhar, M.; Siddiqui, I. Classification of Brain Signals of Event-Related Potentials using Different Methods of Feature Extraction. Int. J. Sci. Eng. Res. 2017, 8, 680–686.
- Badr, Y.; Tariq, U.; Al-Shargie, F.; Babiloni, F.; Mughairbi, F.A. A Review on Evaluating Mental Stress by Deep Learning Using EEG Signals. Neural Comput. Appl. 2024, 36, 12629–12654.
- Ari, E.; Tacgin, E. NF-EEG: A Generalized CNN Model for Multi-Class EEG Motor Imagery Classification Without Signal Preprocessing for Brain–Computer Interfaces. Biomed. Signal Process. Control 2024, 92, 106081.
Ref # | Name of Dataset | Journal and Year | Stimulus | TSD | No. of Classes | No. of Images/Clips per Class | No. of Subjects | Device/No. of Channels | Sampling Rate |
---|---|---|---|---|---|---|---|---|---|
[52] | ImageNet D1 | Journal 2021 | Image | Rapid Event | 40 | 1000 | 1 | BioSemi ActiveTwo recorder, 104 | 4096 Hz |
[34] | ImageNet D2 | Journal 2017 | Image | Block Design | 40 | 50 | 6 | ActiCap, 128 | 1000 Hz |
[33] | ImageNet D3 | Journal 2021 | Image | Rapid Event | 40 | 50 | 6 | BioSemi ActiveTwo recorder, 104 | 4096 Hz |
[33] | ImageNet V1 | Journal 2021 | Video | Rapid Event | 12 | 32 | 6 | BioSemi ActiveTwo recorder, 104 | 4096 Hz |
[53] | Stanford Dataset D4 | Journal 2015 | Image | Rapid Event | 6 | 12 | 10 | EGI HCGSN, 128 | 1000 Hz |
[29] | MPI DB D5 | Journal 2010 | Image | Rapid Event | 3 | 4 | 4 | ActiCap System, 64 | 500 Hz |
[54] | Things D6 | Journal 2022 | Image | Rapid Event | 1854 | 10 | 10 | Easy Cap, 64 | 1000 Hz |
[31] | ImageNet D7 | Journal 2023 | Image | Rapid Event | 4 | 10 | 4 | ActiCHamp, 32 | 1000 Hz |
[55] | MNIST D8 | Journal 2024 | Image | Rapid Event | 11 | 116 | 1 | Emotiv EPOC, 14 | 128 Hz |
[56] | Human dataset D9 | Journal 2017 | Image | Rapid Event | 5 | 12 | 16 | Easycap, 74 | 1000 Hz |
[60] | EEG-ImageNet D10 | Journal 2024 | Image (coarse-grained) | Block Design | 40 | 50 | 16 | - | 1000 Hz |
[60] | EEG-ImageNet D11 | Journal 2024 | Image (fine-grained) | Block Design | 40 | 50 | 16 | - | 1000 Hz |
TSD: Temporal Stimulation Design.
Ref # | Year | Type | Dataset Utilized | Classes | TSD | Classifier | Accuracy |
---|---|---|---|---|---|---|---|
[60] | 2024 | Journal | EEG-ImageNet D11 | 40 | Block Design | SVM | 77.84% |
| | | | | | | MLP | 81.63% |
| | | | | | | EEGNet | 36.45% |
| | | | | | | RGNN | 70.57% |
[60] | 2024 | Journal | EEG-ImageNet D10 | 40 | Block Design | SVM | 50.57% |
| | | | | | | MLP | 53.39% |
| | | | | | | EEGNet | 30.30% |
| | | | | | | RGNN | 47.03% |
[61] | 2024 | Journal | ImageNet D1 | 40 | Rapid Event | EEGVis_CMR (from EEG to Image) | 17.9% |
[57] | 2024 | Journal | MNIST D8 | 11 | Rapid Event | RieManiSpectraNet | 55% |
[58] | 2024 | Journal | Human dataset D9 | 05 | Rapid Event | LDA | 68.75% |
[59] | 2023 | Journal | Stanford Dataset D4 | 06 | Rapid Event | LSTM | 55.55% |
| | | | | | | SVM | 66.67% |
[31] | 2023 | Journal | ImageNet D7 | 04 | Rapid Event | SVM | 36.22% |
| | | | | | | CNN | 64.49% |
| | | | | | | LSTM-CNN | 65.26% |
| | | | | | | EEGNet | 79.29% |
[54] | 2022 | Journal | Things D6 | 1854 | Rapid Event | AlexNet | 15.4% |
| | | | | | | ResNet-50 | 16.25% |
| | | | | | | CORnet | 21.05% |
| | | | | | | MoCo | 12.40% |
[50] | 2022 | Journal | ImageNet D2 | 40 | Block Design | SVM | 82.70% |
| | | | | | | RNN-based Model | 84.00% |
| | | | | | | Siamese Network | 93.70% |
| | | | | | | Bi-LSTM | 92.59% |
| | | | | | | HDRS-STF | 99.78% |
| | | | | | | BiLSTM+AttGW | 99.50% |
[5] | 2023 | Journal | Stanford Dataset D4 | 06 | Rapid Event | LDA | 40.52% |
| | | | | | | ShallowConvNet | 46.51% |
| | | | | | | EEGNet | 43.83% |
| | | | | | | LSTM | 38.06% |
| | | | | | | EEG-Conv Transformer | 52.33% |
| | | | | | | TSCNN | 54.28% |
| | | | Max Planck Institute Dataset [MPI DB] | 03 | Rapid Event | LDA | 76.11% |
| | | | | | | ShallowConvNet | 77.42% |
| | | | | | | EEGNet | 77.79% |
| | | | | | | LSTM | 60.61% |
| | | | | | | TSCNN | 84.40% |
[32] | 2023 | Journal | ImageNet D1 | 40 | Rapid Event | LSTM | 2.3% |
| | | | | | | k-NN | 2.1% |
| | | | | | | SVM | 3.0% |
| | | | | | | MLP | 2.8% |
| | | | | | | 1D CNN | 2.4% |
| | | | | | | EEGNet | 17.6% |
| | | | | | | SyncNet | 3.7% |
[45] | 2022 | Journal | ImageNet D2 | 40 | Block Design | LSTM | 2.7% |
| | | | | | | k-NN | 3.6% |
| | | | | | | SVM | 3.0% |
| | | | | | | MLP | 3.7% |
| | | | | | | 1D CNN | 3.3% |
| | | | | | | EEGNet | 2.5% |
| | | | | | | SyncNet | 3.8% |
| | | | | | | EEGChannelNet | 2.6% |
[33] | 2021 | Journal | ImageNet D3 | 40 | Rapid Event | LSTM | 2.9% |
| | | | | | | k-NN | 3.2% |
| | | | | | | SVM | 3.0% |
| | | | | | | MLP | 3.7% |
| | | | | | | 1D CNN | 3.3% |
[52] | 2021 | Conference | ImageNet D1 | 40 | Rapid Event | 1D CNN | 5.1% |
| | | | | | | LSTM | 2.2% |
| | | | | | | SVM | 5.0% |
| | | | | | | k-NN | 2.1% |
[62] | 2019 | Journal | ImageNet D2 | 40 | Block Design | LSTM | 63.1% |
| | | | | | | k-NN | 100% |
| | | | | | | SVM | 100% |
| | | | | | | MLP | 21.9% |
| | | | | | | 1D CNN | 85.9% |
| | | | ImageNet D3 | 40 | Rapid Event | LSTM | 0.7% |
| | | | | | | k-NN | 1.4% |
| | | | | | | SVM | 2.7% |
| | | | | | | MLP | 1.5% |
| | | | | | | 1D CNN | 2.1% |
[46] | 2020 | Journal | ImageNet D2 | 40 | Block Design | Inception v3 (from signals to images) | 94.4% |
[47] | 2019 | Journal | ImageNet D2 | 40 | Block Design | Proposed LSTM-B | 97.13% |
[48] | 2018 | Conference | ImageNet D2 | 40 | Block Design | Proposed Bidirectional LSTMs | 97.3% |
[49] | 2018 | Conference | ImageNet D2 | 40 | Block Design | Proposed Region-level bi-directional LSTM | 97.1% |
[34] | 2017 | Conference | ImageNet D2 | 40 | Block Design | GoogleNet | 92.6% |
| | | | | | | VGG | 80.0% |
| | | | | | | Proposed Method | 89.7% |
Parameter | Value |
---|---|
Device | BioSemi ActiveTwo recorder |
Number of Subjects | 1 |
Visual Stimuli | ILSVRC-2021 |
Total Classes | 40 |
Images per Class | 1000 |
Duration of Visual Stimuli | 2 s with 1 s blanking |
Sampling Frequency | 4096 Hz |
Data Resolution | 24 bits |
Temporal Stimulation Design | Rapid Event design |
Classifier | Parameters |
---|---|
KNN | k = 5 |
SVM | kernel = ‘poly’, C = 20, gamma = 1, probability = True, class_weight = ‘balanced’, random_state = 1 |
MLP | hidden_layers = 1500, max_iterations = 2000, random_state = 42 |
1D CNN | learning rate = 0.0005, batch size = 100, epochs = 200, optimizer = Adam, loss = sparse_categorical_crossentropy, metric = accuracy, no. of layers = 15, activations = ReLU, Softmax |
LSTM | learning rate = 0.005, batch size = 100, epochs = 200, optimizer = Adam, loss = sparse_categorical_crossentropy, metric = accuracy, no. of layers = 14, activations = ReLU, Softmax |
MCCFF Net-50 | learning rate = 0.005, batch size = 120, epochs = 150, optimizer = Adam, loss = sparse_categorical_crossentropy, metric = accuracy, no. of layers = 51, activations = ReLU, Sigmoid |
MCCFF VGG | learning rate = 0.001, batch size = 100, epochs = 200, optimizer = Adam, loss = sparse_categorical_crossentropy, metric = accuracy, no. of layers = 16, activations = ReLU, Sigmoid |
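Read directly off the table, the three classical baselines could be instantiated with scikit-learn as below. This is a sketch under the assumption that flattened epochs serve as feature vectors and that "hidden_layers = 1500" means a single hidden layer of 1500 units; it is not the authors' released code.

```python
# Classical baselines with the hyperparameters listed in the table above.
# Assumes X_train has shape (n_samples, n_features), i.e., flattened epochs.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

knn = KNeighborsClassifier(n_neighbors=5)                       # KNN row: k = 5
svm = SVC(kernel="poly", C=20, gamma=1, probability=True,       # SVM row
          class_weight="balanced", random_state=1)
mlp = MLPClassifier(hidden_layer_sizes=(1500,), max_iter=2000,  # MLP row (one hidden layer assumed)
                    random_state=42)

# Typical usage: clf.fit(X_train, y_train); clf.score(X_test, y_test)
```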
1D CNN Model | | | LSTM Model | | |
---|---|---|---|---|---|
Layer | Neural Units/Kernel Size | Activation | Layer | Neural Units | Activation |
Conv1D | 8 (3, 3) | ReLU | LSTM | 8 | ReLU |
Dropout | 0.1 | - | Batch Normalization | - | - |
Batch Normalization | - | - | MaxPooling1D | - | - |
MaxPooling1D | (4, 4) | - | LSTM | 16 | ReLU |
Conv1D | 16 (3, 3) | ReLU | Dropout | 0.2 | - |
Dropout | 0.2 | - | Batch Normalization | - | - |
Batch Normalization | - | - | MaxPooling1D | - | - |
MaxPooling1D | (4, 4) | - | LSTM | 32 | ReLU |
Conv1D | 32 (3, 3) | ReLU | Dropout | 0.4 | - |
Batch Normalization | - | - | Batch Normalization | - | - |
Flatten | - | - | MaxPooling1D | - | - |
Dense Layer | 16 | ReLU | Flatten | - | - |
Dropout | 0.4 | - | Dense Layer | 32 | ReLU |
Batch Normalization | - | - | Dense (Output Layer) | 41 | Softmax |
Dense (Output Layer) | 41 | Softmax | | | |
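The 1D CNN column of this table maps naturally onto a Keras Sequential model. The sketch below is an illustration, not the authors' implementation: the "(3, 3)" and "(4, 4)" entries are interpreted as a kernel size of 3 and a pool size of 4 (the layers are one-dimensional), the input shape is left as parameters, and the training settings are taken from the 1D CNN row of the classifier parameter table above.

```python
# Hypothetical Keras reconstruction of the 1D CNN column of the table above.
from tensorflow import keras
from tensorflow.keras import layers

def build_1d_cnn(n_times: int, n_channels: int, n_classes: int = 41) -> keras.Model:
    model = keras.Sequential([
        layers.Input(shape=(n_times, n_channels)),  # epochs as (time, channels)
        layers.Conv1D(8, 3, activation="relu"),
        layers.Dropout(0.1),
        layers.BatchNormalization(),
        layers.MaxPooling1D(4),
        layers.Conv1D(16, 3, activation="relu"),
        layers.Dropout(0.2),
        layers.BatchNormalization(),
        layers.MaxPooling1D(4),
        layers.Conv1D(32, 3, activation="relu"),
        layers.BatchNormalization(),
        layers.Flatten(),
        layers.Dense(16, activation="relu"),
        layers.Dropout(0.4),
        layers.BatchNormalization(),
        layers.Dense(n_classes, activation="softmax"),  # 41 output units, as listed
    ])
    # Training parameters from the 1D CNN row of the classifier table.
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.0005),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```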
Classifier | Precision (%) | Recall (%) | F1 Score (%) | Accuracy (%) |
---|---|---|---|---|
KNN | 2.49 | 4.96 | 2.82 | 4.96 |
SVM | 7.51 | 4.34 | 3.93 | 4.34 |
MLP | 14.31 | 3.72 | 2.85 | 3.72 |
Proposed Models | | | | |
LSTM | 21.86 | 10.97 | 9.32 | 10.97 |
1D CNN | 47.37 | 15.96 | 13.59 | 15.96 |
MCCFF Net-50 | 48.11 | 22.94 | 20.53 | 22.94 |
MCCFF VGG | 62.05 | 33.16 | 34.59 | 33.17 |
Classifier | Precision (%) | Recall (%) | F1 Score (%) | Accuracy (%) |
---|---|---|---|---|
KNN | 40.0 | 4.9 | 3.4 | 4.96 |
SVM | 5.1 | 4.34 | 4.1 | 4.34 |
MLP | 13.5 | 5.59 | 6.51 | 5.59 |
Proposed Models | | | | |
LSTM | 47.96 | 8.25 | 5.86 | 8.25 |
1D CNN | 52.42 | 13.0 | 11.07 | 12.99 |
MCCFF Net-50 | 44.34 | 13.5 | 8.73 | 13.50 |
MCCFF VGG | 54.47 | 14.57 | 4.31 | 14.57 |