Enhancing Real-Time Cursor Control with Motor Imagery and Deep Neural Networks for Brain–Computer Interfaces
Abstract
1. Introduction
- Signal Preprocessing Improvement: We have enhanced the signal preprocessing phase with the introduction of Four-Class Iterative Filtering (FCIF), which significantly improves the quality of brain signals. This leads to a notable increase in the accuracy of user intention classification, crucial for effective BCI operation.
- Advanced Feature Extraction: The utilization of the Four-Class Filter Bank Common Spatial Pattern (FCFBCSP) algorithm optimizes the extraction of discriminative spatial patterns from brain signals. This method is applied across four motor imagery classes—left-hand movement, right-hand movement, foot movement, and tongue movement—facilitating highly precise cursor control.
- Modified DNN Classifier: A modified Deep Neural Network (DNN) classifier has been developed and tailored specifically for the unique demands of motor-imagery-based BCIs. This classifier ensures high accuracy and minimal response time, which are crucial for real-time applications.
- Rigorous Empirical Testing: Our approach has undergone extensive empirical testing to rigorously evaluate the performance of the proposed BCI system. These tests provide deep insights into the system’s reliability, precision, and practical applicability, enhancing the credibility of our findings through a robust, evidence-based evaluation process.
2. Related Work
2.1. Technological Advancements and Neuroimaging Methods
2.2. Deep Learning in BCIs
2.3. Classification Models and Adaptation
2.4. Artifact Removal and Signal Integrity
2.5. Hyperparameter Optimization and System Performance
2.6. Future Directions and Ethical Considerations
3. EEG-Based Methodology for Motor Imagery BCI
3.1. Four-Class Iterative Filtering (FCIF)
3.1.1. Mathematical Foundations and Implementation
3.1.2. Optimization and Convergence
3.2. Four-Class Filter Bank Common Spatial Pattern (FCFBCSP) Algorithm
- Decomposition into Filter Banks: Initially, the FCFBCSP algorithm decomposes the preprocessed EEG data into multiple filter bank components, each corresponding to a specific frequency band. This decomposition facilitates the algorithm’s adaptation to the unique characteristics of different frequency ranges within the EEG spectrum, addressing the complex nature of brain signals.
- Application of CSP within Frequency Bands: The Common Spatial Pattern (CSP) algorithm is employed within each frequency band to derive spatial filters. These filters are tailored to maximize the variance of EEG signals associated with one motor imagery class while minimizing it for the others. Such selective variance enhancement is crucial for revealing the most discriminative spatial patterns for each motor imagery class.
- Feature Vector Construction: For each trial, feature vectors are constructed using the CSP components extracted from various frequency bands. These vectors capture the essential spatial patterns across the spectrum, forming a robust basis for subsequent classification tasks.
- Comprehensive Spatial Analysis: By capturing spatial patterns across multiple frequency bands, the FCFBCSP provides a comprehensive analysis of EEG data. This approach ensures that dominant features in both high- and low-frequency bands are identified, and the spatial patterns with the highest discriminative power are utilized for classifying the four motor imagery classes.
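The four steps above can be sketched in compact form. This is a minimal illustration, not the paper's tuned implementation: CSP is shown for two classes via a generalized eigendecomposition (the FCFBCSP would apply it one-vs-rest across the four motor imagery classes), and the band edges and component counts are placeholder choices.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh

def bandpass(data, low, high, fs, order=4):
    """Zero-phase band-pass filter along the last (time) axis."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, data, axis=-1)

def csp_filters(trials_a, trials_b, n_components=2):
    """Two-class CSP spatial filters via a generalized eigendecomposition.
    trials_*: (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)          # solves Ca v = w (Ca + Cb) v
    idx = np.argsort(vals)
    picks = np.r_[idx[:n_components], idx[-n_components:]]
    return vecs[:, picks].T                  # (2*n_components, n_channels)

def fbcsp_features(trials, band_filters, bands, fs):
    """Concatenate log-normalized variance features across all sub-bands."""
    feats = []
    for (low, high), W in zip(bands, band_filters):
        Z = np.array([W @ bandpass(t, low, high, fs) for t in trials])
        var = Z.var(axis=-1)
        feats.append(np.log(var / var.sum(axis=1, keepdims=True)))
    return np.concatenate(feats, axis=1)     # one feature vector per trial
```

The resulting feature vectors (one per trial, spanning all bands) feed the classifier described in Section 3.3.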
3.3. Modified DNN Classifier Architecture
- Input Layer: The input layer receives feature vectors extracted from the preprocessed EEG data, where the dimensionality is determined by the number of features in these vectors.
- Hidden Layers: The network comprises multiple hidden layers. Each layer contains a varying number of neurons, specifically designed to process the complex spatial patterns inherent in EEG data effectively. The depth and width of these layers are calibrated based on empirical performance metrics, such as accuracy and computational efficiency, ensuring optimal performance for the specific demands of motor imagery classification.
- Activation Functions: Rectified Linear Units (ReLUs) are employed in the hidden layers to facilitate the efficient learning of complex patterns in the data. The choice of ReLU over other functions is due to its ability to speed up training without affecting the generalization ability of the network.
- Output Layer: The output layer includes neurons corresponding to the motor imagery classes (four in this study). A softmax activation function is used to translate the raw outputs of the network into probabilities for each class, which is crucial for accurate multi-class classification.
- Learning Rate and Batch Size: A grid search was utilized to determine the optimal learning rate, testing values ranging from 0.001 to 0.01. Batch sizes of 16, 32, and 64 were evaluated, with 32 providing the best trade-off between training speed and model performance.
- Early Stopping and Dropout: Early stopping was implemented to prevent overfitting, ceasing training if the validation loss did not improve, typically ending training after 70–80 epochs out of a maximum set at 100. A dropout rate of 0.5 was used to further ensure the model generalizes well on unseen data.
- Regularization: L2 regularization was applied to the weights, with parameters fine-tuned through cross-validation to ensure the model remains generalizable across various datasets.
- Low-Latency Optimization: The architecture is specifically optimized for low-latency operations, a crucial feature for real-time cursor control and navigation. Specific architectural decisions, such as the minimization of layer depth where possible, help reduce computation time, ensuring rapid response rates necessary for real-time application.
- Pre-trained Models: Utilizing pre-trained models as a foundational starting point significantly speeds up the training process. These models are typically sourced from similar task domains and are adapted through further training on motor imagery-specific data, facilitating faster convergence and enhanced generalization capabilities.
- Batch Normalization: Batch normalization techniques are implemented across layers to standardize activations, thus accelerating training dynamics and enhancing the stability of the network’s learning process. This method is particularly effective in managing internal covariate shift, leading to improved training speeds and more robust performance.
- Loss Function: A categorical cross-entropy loss function is strategically chosen to train the network, aiming to systematically minimize classification errors. This choice is aligned with the multi-class nature of the output, providing a probabilistic framework that quantitatively assesses the classifier’s predictions against the true labels.
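The architectural choices listed above can be condensed into a small NumPy sketch of the forward pass: ReLU hidden layers, inverted dropout at the paper's rate of 0.5, a four-way softmax output, and categorical cross-entropy. The layer sizes here are illustrative assumptions, not the paper's calibrated configuration, and the training loop (grid search, early stopping, L2 regularization, batch normalization) is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))  # numerically stable
    return e / e.sum(axis=1, keepdims=True)

class MLP:
    """Forward pass only: input -> ReLU hidden layers -> 4-way softmax."""
    def __init__(self, sizes, dropout=0.5):
        # He initialization suits ReLU activations
        self.W = [rng.normal(0.0, np.sqrt(2.0 / m), (m, n))
                  for m, n in zip(sizes[:-1], sizes[1:])]
        self.b = [np.zeros(n) for n in sizes[1:]]
        self.dropout = dropout

    def forward(self, X, train=False):
        h = X
        for W, b in zip(self.W[:-1], self.b[:-1]):
            h = relu(h @ W + b)
            if train:  # inverted dropout, rate 0.5 as described above
                mask = rng.random(h.shape) >= self.dropout
                h = h * mask / (1.0 - self.dropout)
        return softmax(h @ self.W[-1] + self.b[-1])

def categorical_cross_entropy(probs, labels):
    """Mean negative log-likelihood of the true class."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
```

For example, `MLP([24, 64, 32, 4])` would map a 24-dimensional FBCSP feature vector through two hidden layers to the four class probabilities.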
3.4. Hardware and Software Components for EEG-Based Computer Cursor Control with Emotiv Insight
3.4.1. Hardware Components
- Model: Emotiv Insight EEG Headset.
- EEG Channels: The headset features five strategically positioned EEG channels (AF3, F7, F3, FC5, and T7), offering comprehensive scalp coverage essential for capturing a broad spectrum of neural signals associated with various motor imagery tasks.
- Sampling Rate: Operating at a high sampling rate of 128 Hz, the headset records dynamic brain activity with minimal delay, crucial for the effective translation of user intentions into cursor movements in real-time applications.
- EOG Channels: These are crucial for accurately monitoring eye movements and blinks, which can significantly contaminate EEG data. Effective capture of these signals allows the system to differentiate between cursor commands driven by brain activity and those inadvertently triggered by eye movements.
- Amplifiers: Built-in amplifiers boost the EEG signal quality, enabling more precise and reliable brain signal processing.
- Electrodes: The headset utilizes custom-designed dry EEG electrodes, ensuring optimal contact with the scalp and consistent signal acquisition, critical for maintaining high-quality data during extended use.
- Accelerometer/Gyroscope: A built-in three-axis accelerometer and gyroscope track head movements and orientation changes, providing additional control inputs that enhance the system’s responsiveness and user experience.
- Wireless Interface: The Bluetooth connection of the Emotiv Insight ensures efficient, reliable wireless communication, minimizing latency and maintaining continuous data flow between the headset and the computer system.
3.4.2. Software Components
- EEG Data Acquisition Software: EmotivPRO, provided by Emotiv, acts as the interface between the Emotiv Insight headset and the computer system, designed to handle high data throughput and ensure stable communication for real-time data capture.
- Connection to Emotiv Insight: EmotivPRO facilitates seamless real-time EEG data acquisition, critical for immediate cursor control response.
- Recording EEG Data: It supports continuous recording during experimental sessions, crucial for later analysis and model training.
- Basic Preprocessing: Initial data preprocessing tools such as filtering and artifact removal enhance the quality and usability of the EEG signals before more complex processing.
3.4.3. Advanced Signal Processing Techniques
3.4.4. Classification and Analysis
- Data Analysis Software: Comprehensive data analysis is performed using Python, enhanced by libraries like NumPy and Pandas, enabling sophisticated handling of large datasets for detailed statistical analysis and effective data management.
- Data Visualization: We utilize Matplotlib to generate clear and informative graphical representations of data, aiding in the interpretation of complex patterns and results, and providing insights into the effectiveness of the BCI system.
3.5. Experimental Protocol with Emotiv Insight
3.5.1. Preparation and Setup
3.5.2. Execution of Motor Imagery Tasks
- Task Details: Each task was explicitly explained with the aid of visual aids and verbal descriptions to ensure that participants could vividly imagine the movements without actual motion.
- Randomized Order: Tasks were assigned in a randomized order to each participant to eliminate any order effects that might affect the consistency and quality of the EEG data.
- Number of Trials: Each participant performed each motor imagery task 30 times, resulting in a total of 120 trials per participant. This substantial number of repetitions was essential for gathering robust EEG data and ensuring statistical significance in the subsequent analysis.
3.5.3. Monitoring and Recording
- Artifact Handling: Real-time monitoring allowed for immediate identification and tagging of contaminated data segments, which were later excluded or corrected in the preprocessing phase.
3.5.4. Data Integrity and Preprocessing
- Signal Cleaning Procedures: Advanced signal processing techniques were applied to ensure that only clean, artifact-free data were used in the analysis. This included filtering, artifact subtraction, and data segmentation based on task-specific cues.
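As a concrete illustration of the filtering and cue-based segmentation steps, the sketch below band-pass filters a continuous recording and cuts a fixed-length window after each task cue. The 8–30 Hz band and 2 s window are assumptions for illustration, not the study's exact settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # Emotiv Insight sampling rate (Hz)

def clean_and_segment(raw, cue_samples, window_s=2.0, low=8.0, high=30.0):
    """Band-pass the continuous recording, then cut a fixed-length window
    starting at each task cue. raw: (n_channels, n_samples)."""
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, raw, axis=-1)  # zero-phase filtering
    win = int(window_s * FS)
    epochs = [filtered[:, c:c + win]
              for c in cue_samples if c + win <= raw.shape[1]]
    return np.stack(epochs)  # (n_trials, n_channels, win)
```

Segments tagged as contaminated during real-time monitoring would simply be dropped from `cue_samples` before segmentation.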
3.6. Performance Metrics
4. Experimental Results Analysis
4.1. AUC Calculation and Importance
4.2. System Design and Experiment Setup
4.2.1. Optimization of Filter Bank for EEG Signal Processing
- Mu Band (8–12 Hz) and Beta Band (13–30 Hz): These frequency bands are crucial for motor imagery, linked to motor functions and active thinking processes.
- Filter Configuration: We employed a set number of filters for each band, optimizing the bandwidth to reduce overlap between adjacent filters, enhancing the signal’s clarity and usability for classification.
- Cross-validation Performance Evaluation: The effectiveness of different filter configurations was rigorously evaluated through cross-validation on the training dataset, focusing on metrics like classification accuracy to refine our approach.
- Final Tuning: The filter bank’s final settings were adjusted to optimize the subsequent classification stage, ensuring high discriminability between different motor imagery classes.
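A minimal sketch of such a filter bank follows, assuming an illustrative four-sub-band layout spanning the mu and beta ranges; the paper's cross-validated bandwidths and overlaps are not reproduced here.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 128  # Hz

# Illustrative sub-band layout over mu (8–12 Hz) and beta (13–30 Hz);
# the actual widths and overlaps were tuned by cross-validation.
BANDS = [(8.0, 12.0), (13.0, 18.0), (18.0, 24.0), (24.0, 30.0)]

def make_filter_bank(bands, fs=FS, order=4):
    """One band-pass filter (second-order sections) per sub-band."""
    return [butter(order, [lo / (fs / 2), hi / (fs / 2)],
                   btype="band", output="sos") for lo, hi in bands]

def apply_filter_bank(trial, bank):
    """trial: (n_channels, n_samples) -> (n_bands, n_channels, n_samples)."""
    return np.stack([sosfiltfilt(sos, trial, axis=-1) for sos in bank])
```

Each band's output is then passed to the CSP stage described in Section 3.2.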
4.2.2. Model Training and Validation Setup
4.3. Confusion Matrices Analysis
4.3.1. Left-Hand Imagery
4.3.2. Right-Hand Imagery
4.3.3. Foot Imagery
4.3.4. Tongue Imagery
4.3.5. Control Conditions
4.4. Hypothetical p-Values and Statistical Analysis
- The extremely low p-value (0.001) for Classification Accuracy robustly supports the rejection of the null hypothesis, indicating a statistically significant improvement in accuracy for motor imagery tasks compared to control conditions. This finding underscores the BCI system’s ability to effectively interpret and classify neural patterns associated with specific motor tasks.
- The p-value (0.015) associated with Response Time suggests a significant reduction in decision-making time for motor imagery tasks, enhancing the system’s usability in real-time applications. This improvement is critical for applications where timely and accurate response is paramount, such as in assistive technologies for individuals with mobility impairments.
- p-values for Specificity (0.002) and Sensitivity (0.009) confirm significant improvements in these metrics, indicating that the system is not only more likely to identify motor imagery when it occurs but also correctly ignores nontask-related brain activity. These improvements contribute to a more reliable and user-friendly BCI system.
- The significant results for False Positives (0.027) and False Negatives (0.003) demonstrate the system’s enhanced accuracy in recognizing true motor imagery tasks and reducing errors that could lead to unintentional actions, further validating the BCI’s effectiveness.
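The comparisons above follow standard one-sided two-sample test logic. A sketch with clearly synthetic accuracy samples (the study's actual measurements are not reproduced here) might look like:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

# Synthetic per-session accuracies, purely for illustration — the paper's
# p-values come from its own experimental data, not these numbers.
mi_acc = rng.normal(0.90, 0.03, size=20)       # motor imagery condition
control_acc = rng.normal(0.12, 0.03, size=20)  # control condition

# One-sided test matching H1: motor imagery accuracy exceeds control.
t_stat, p_value = ttest_ind(mi_acc, control_acc, alternative="greater")
reject_h0 = p_value < 0.05  # at the 0.05 significance level
```

With well-separated condition means, the null hypothesis of equal accuracy is rejected, mirroring the table's conclusion for classification accuracy.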
4.5. Visual Analysis of BCI System Performance
4.5.1. Accuracy and Response Time Trends
4.5.2. Cursor Path Analysis
4.5.3. Beta Rhythm Power Analysis
4.6. Discussion
4.6.1. Comparison with Existing Techniques and Methodological Evaluation
- Improved accuracy in both artifact removal and classification.
- Enhanced generalization capabilities across different subjects.
- Increased resilience to noise, ensuring cleaner and more reliable signals for classification.
- More efficient feature extraction from EEG signals, enhancing the discriminative power of the features.
- The computational complexity is higher, which may affect processing time and resource consumption, especially in large-scale or real-time applications.
- The performance has not been extensively validated on a wide range of diverse datasets.
- Optimal performance is heavily dependent on precise hyperparameter tuning, which can be resource-intensive.
4.6.2. BCI System Accuracy and Variations Across Motor Imagery Tasks
4.6.3. BCI System Efficiency in Real-Time Interaction and Response Time Analysis
5. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Weiss, J.M.; Gaunt, R.A.; Franklin, R.; Boninger, M.L.; Collinger, J.L. Demonstration of a Portable Intracortical Brain-Computer Interface. Brain-Comput. Interfaces 2019, 6, 106–117.
- Pan, K.; Li, L.; Zhang, L.; Li, S.; Yang, Z.; Guo, Y. A Noninvasive BCI System for 2D Cursor Control Using a Spectral-Temporal Long Short-Term Memory Network. Front. Comput. Neurosci. 2022, 16, 799019.
- Zhang, J.; Wang, M. A Survey on Robots Controlled by Motor Imagery Brain-Computer Interfaces. Cogn. Robot. 2021, 1, 12–24.
- Akuthota, S.; Rajkumar, K.; Ravichander, J. EEG based Motor Imagery BCI using Four Class Iterative Filtering & Four Class Filter Bank Common Spatial Pattern. In Proceedings of the International Conference on Advances in Electronics, Communication, Computing and Intelligent Information Systems (ICAECIS), Bangalore, India, 19–21 April 2023; pp. 429–434.
- Janapati, R.; Dalal, V.; Kumar, G.M.; Anuradha, P.; Shekar, P.V.R. Web Interface Applications Controllers used by Autonomous EEG-BCI Technologies. In AIP Conference Proceedings; AIP Publishing: Melville, NY, USA, 2022; Volume 2418.
- Janapati, R.; Dalal, V.; Sengupta, R. Advances in Experimental Paradigms for EEG-BCI. In Proceedings of the 2nd International Conference on Recent Trends in Machine Learning, IoT, Smart Cities and Applications: ICMISC 2021, Hyderabad, India, 28–29 March 2021; Lecture Notes in Networks and Systems; Springer: Berlin/Heidelberg, Germany, 2022; pp. 163–170.
- Janapati, R.; Dalal, V.; Sengupta, R.; Raja Shekar, P.V. Progression of EEG-BCI Classification Techniques: A Study. In Proceedings of Inventive Systems and Control, Coimbatore, India, 7–8 January 2021; Lecture Notes in Networks and Systems; Springer: Berlin/Heidelberg, Germany, 2021; pp. 161–170.
- Janapati, R.; Dalal, V.; Govardhan, N.; Gupta, R.S. Review on EEG-BCI Classification Techniques Advancements. IOP Conf. Ser. Mater. Sci. Eng. 2020, 981, 032019.
- Ramakrishnan, J.; Mavaluru, D.; Sakthivel, R.S.; Alqahtani, A.S.; Mubarakali, A.; Retnadhas, M. Brain-Computer Interface for Amyotrophic Lateral Sclerosis Patients using Deep Learning Network. Neural Comput. Appl. 2022, 34, 13439–13453.
- Mathesul, S.; Swain, D.; Satapathy, S.K.; Rambhad, A.; Acharya, B.; Gerogiannis, V.C.; Kanavos, A. COVID-19 Detection from Chest X-ray Images Based on Deep Learning Techniques. Algorithms 2023, 16, 494.
- Shriram, S.; Nagaraj, B.; Jaya, J.; Shankar, S.; Ajay, P. Deep Learning-Based Real-Time AI Virtual Mouse System Using Computer Vision to Avoid COVID-19 Spread. J. Healthc. Eng. 2021, 2021, 8133076.
- Teng, G.; He, Y.; Zhao, H.; Liu, D.; Xiao, J.; Ramkumar, S. Design and Development of Human Computer Interface Using Electrooculogram with Deep Learning. Artif. Intell. Med. 2020, 102, 101765.
- Stieger, J.R.; Engel, S.A.; Suma, D.; He, B. Benefits of Deep Learning Classification of Continuous Noninvasive Brain–Computer Interface Control. J. Neural Eng. 2021, 18, 046082.
- Schweihoff, J.F.; Loshakov, M.; Pavlova, I.; Kück, L.; Ewell, L.A.; Schwarz, M.K. DeepLabStream Enables Closed-Loop Behavioral Experiments using Deep Learning-based Markerless, Real-time Posture Detection. Commun. Biol. 2021, 4, 130.
- Alam, M.S.; Kwon, K.; Alam, M.A.; Abbass, M.Y.; Imtiaz, S.M.; Kim, N. Trajectory-Based Air-Writing Recognition Using Deep Neural Network and Depth Sensor. Sensors 2020, 20, 376.
- Tran, D.S.; Ho, N.H.; Yang, H.J.; Baek, E.T.; Kim, S.H.; Lee, G. Real-Time Hand Gesture Spotting and Recognition Using RGB-D Camera and 3D Convolutional Neural Network. Appl. Sci. 2020, 10, 722.
- Tiwari, S.; Goel, S.; Bhardwaj, A. MIDNN-A Classification Approach for the EEG based Motor Imagery Tasks using Deep Neural Network. Appl. Intell. 2022, 52, 4824–4843.
- Choi, J.W.; Park, J.; Huh, S.; Jo, S. Asynchronous Motor Imagery BCI and LiDAR-Based Shared Control System for Intuitive Wheelchair Navigation. IEEE Sens. J. 2023, 23, 16252–16263.
- Guerrero-Mendez, C.D.; Blanco-Díaz, C.F.; Ruiz-Olaya, A.F.; Lopez-Delis, A.; Jaramillo-Isaza, S.; Andrade, R.M.; Souza, A.F.D.; Delisle-Rodriguez, D.; Frizera-Neto, A.; Bastos-Filho, T.F. EEG Motor Imagery Classification using Deep Learning Approaches in Naïve BCI Users. Biomed. Phys. Eng. Express 2023, 9, 045029.
- Mousavi, M.; de Sa, V.R. Spatio-Temporal Analysis of Error-related Brain Activity in Active and Passive Brain–Computer Interfaces. Brain-Comput. Interfaces 2019, 6, 118–127.
- Al-Saegh, A.; Dawwd, S.A.; Abdul-Jabbar, J.M. Deep Learning for Motor Imagery EEG-based Classification: A Review. Biomed. Signal Process. Control 2021, 63, 102172.
- Altaheri, H.; Muhammad, G.; Alsulaiman, M.; Amin, S.U.; Altuwaijri, G.A.; Abdul, W.; Bencherif, M.A.; Faisal, M. Deep Learning Techniques for Classification of Electroencephalogram (EEG) Motor Imagery (MI) Signals: A Review. Neural Comput. Appl. 2023, 35, 14681–14722.
- Savvopoulos, A.; Kanavos, A.; Mylonas, P.; Sioutas, S. LSTM Accelerator for Convolutional Object Identification. Algorithms 2018, 11, 157.
- Sukkar, M.; Shukla, M.; Kumar, D.; Gerogiannis, V.C.; Kanavos, A.; Acharya, B. Enhancing Pedestrian Tracking in Autonomous Vehicles by Using Advanced Deep Learning Techniques. Information 2024, 15, 104.
- Chaddad, A.; Wu, Y.; Kateb, R.; Bouridane, A. Electroencephalography Signal Processing: A Comprehensive Review and Analysis of Methods and Techniques. Sensors 2023, 23, 6434.
- Daud, S.N.S.S.; Sudirman, R. Wavelet Based Filters for Artifact Elimination in Electroencephalography Signal: A Review. Ann. Biomed. Eng. 2022, 50, 1271–1291.
- Miah, O.; Habiba, U.; Kabir, F. ODL-BCI: Optimal Deep Learning Model for Brain-computer Interface to Classify Students Confusion via Hyperparameter Tuning. Brain Disord. 2024, 13, 100121.
- Škola, F.; Tinková, S.; Liarokapis, F. Progressive Training for Motor Imagery Brain-Computer Interfaces Using Gamification and Virtual Reality Embodiment. Front. Hum. Neurosci. 2019, 13, 329.
- Parashiva, P.K.; Vinod, A.P. Improving Direction Decoding Accuracy during Online Motor Imagery based Brain-Computer Interface using Error-related Potentials. Biomed. Signal Process. Control 2022, 74, 103515.
- Choi, J.W.; Huh, S.; Jo, S. Improving Performance in Motor Imagery BCI-based Control Applications via Virtually Embodied Feedback. Comput. Biol. Med. 2020, 127, 104079.
- Abiri, R.; Borhani, S.; Kilmarx, J.; Esterwood, C.; Jiang, Y.; Zhao, X. A Usability Study of Low-Cost Wireless Brain-Computer Interface for Cursor Control Using Online Linear Model. IEEE Trans. Hum.-Mach. Syst. 2020, 50, 287–297.
- Parikh, D.; George, K. Quadcopter Control in Three-Dimensional Space Using SSVEP and Motor Imagery-Based Brain-Computer Interface. In Proceedings of the 11th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, 4–7 November 2020; pp. 0782–0785.
- Guo, Y.; Wang, M.; Zheng, T.; Li, Y.; Wang, P.; Qin, X. NAO Robot Limb Control Method Based on Motor Imagery EEG. In Proceedings of the International Symposium on Computer, Consumer and Control (IS3C), Taichung City, Taiwan, 13–16 November 2020; pp. 521–524.
- Reyhani-Masoleh, B.; Chau, T. Navigating in Virtual Reality using Thought: The Development and Assessment of a Motor Imagery based Brain-Computer Interface. arXiv 2019, arXiv:1912.04828.
- Gao, C.; Xia, M.; Zhang, Z.; Han, Y.; Gu, Y. Improving the Brain-Computer Interface Learning Process with Gamification in Motor Imagery: A Review. In Gamification-Analysis, Design, Development and Ludification; IntechOpen: London, UK, 2022.
- Alchalabi, B.; Faubert, J. A Comparison between BCI Simulation and Neurofeedback for Forward/Backward Navigation in Virtual Reality. Comput. Intell. Neurosci. 2019, 2019, 2503431.
- Saichoo, T.; Boonbrahm, P.; Punsawad, Y. Investigating User Proficiency of Motor Imagery for EEG-Based BCI System to Control Simulated Wheelchair. Sensors 2022, 22, 9788.
- Dutt-Mazumder, A.; Huggins, J.E. Performance Comparison of a non-invasive P300-based BCI Mouse to a Head-Mouse for People with SCI. Brain-Comput. Interfaces 2020, 7, 1–10.
- Hossain, K.M.; Islam, M.A.; Hossain, S.; Nijholt, A.; Ahad, M.A.R. Status of deep learning for EEG-based brain–computer interface applications. Front. Comput. Neurosci. 2022, 16, 1006763.
Measurement/Condition | Left-Hand MI | Right-Hand MI | Foot MI | Tongue MI | Control Conditions |
---|---|---|---|---|---|
Classification Accuracy (%) | 90.5 | 88.2 | 87.9 | 89.8 | 12.1 |
Response Time (ms) | 650 | 670 | 680 | 655 | 1020 |
Specificity (%) | 91.4 | 92.0 | 91.2 | 91.7 | — |
Sensitivity (%) | 89.6 | 87.8 | 88.5 | 89.2 | — |
False Positives | 8 | 7 | 9 | 6 | 98 |
False Negatives | 10 | 12 | 11 | 8 | 105 |
Left-hand imagery:

|  | Model Positive | Model Negative |
|---|---|---|
| Real Positive | 88 | 10 |
| Real Negative | 8 | 94 |

Right-hand imagery:

|  | Model Positive | Model Negative |
|---|---|---|
| Real Positive | 86 | 12 |
| Real Negative | 9 | 93 |

Foot imagery:

|  | Model Positive | Model Negative |
|---|---|---|
| Real Positive | 87 | 11 |
| Real Negative | 10 | 92 |

Tongue imagery:

|  | Model Positive | Model Negative |
|---|---|---|
| Real Positive | 89 | 9 |
| Real Negative | 7 | 94 |

Control conditions:

|  | Model Positive | Model Negative |
|---|---|---|
| Real Positive | 14 | 2 |
| Real Negative | 96 | 6 |
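Taking the first of the confusion matrices above (which, by the order of Section 4.3, corresponds to left-hand imagery), the per-class metrics follow directly from the counts; they land close to the summary table's reported sensitivity (89.6%) and specificity (91.4%), with small differences attributable to rounding.

```python
# Counts from the first (left-hand imagery) confusion matrix above.
tp, fn = 88, 10   # real positives: predicted positive / predicted negative
fp, tn = 8, 94    # real negatives: predicted positive / predicted negative

sensitivity = tp / (tp + fn)                 # 88/98  ≈ 0.898
specificity = tn / (tn + fp)                 # 94/102 ≈ 0.922
precision = tp / (tp + fp)                   # 88/96  ≈ 0.917
accuracy = (tp + tn) / (tp + fn + fp + tn)   # 182/200 = 0.91
```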
Metric | Hypothesis (H0) | Hypothesis (H1) | Significance Level (α) | Hypothetical p-Value |
---|---|---|---|---|
Classification Accuracy | No significant difference in accuracy between control and motor imagery | Higher accuracy for motor imagery vs. control | 0.05 | 0.001 |
Response Time | Similar response times for control and motor imagery | Faster response times for motor imagery vs. control | 0.05 | 0.015 |
Specificity | No difference in specificity between conditions | Higher specificity under motor imagery conditions | 0.05 | 0.002 |
Sensitivity | Uniform sensitivity across all tasks | Improved sensitivity in motor imagery tasks | 0.05 | 0.009 |
False Positives | Equal rates of false positives in all conditions | Fewer false positives in motor imagery tasks | 0.05 | 0.027 |
False Negatives | No variation in false negatives between tasks | Reduced false negatives in motor imagery tasks | 0.05 | 0.003 |
Additional Metrics | Consistent performance across all metrics | Variability in performance by task | 0.05 | 0.008 |
Motor Imagery Task | F1-Score | Precision | Recall |
---|---|---|---|
Left-handed Gestures | 0.918 | 0.903 | 0.934 |
Right-handed Gestures | 0.887 | 0.901 | 0.874 |
Foot Movement | 0.865 | 0.842 | 0.890 |
Tongue Movement | 0.939 | 0.956 | 0.922 |
Paper | Accuracy (%) | Unique Contributions | Areas for Further Research |
---|---|---|---|
[31] | 80 | Demonstrated a positive relationship between cursor control and visualization ability, enhancing interface intuitiveness. | Explore individualized assistive BCIs tailored to user-specific neural patterns and preferences. |
[15] | Not mentioned | Evaluated the speed of suggested algorithms, focusing on enhancing real-time response capabilities. | Conduct a comparative analysis with current state-of-the-art BCI systems to benchmark speed and accuracy. |
[36] | 76 | Investigated the efficacy of motor imagery commands for neurofeedback learning, integrating VR to enhance training. | Further study on movement-related activation of the motor cortex in virtual reality settings. |
[18] | <50 | Focused on improving BCI performance through shared control systems, enhancing user autonomy. | Address the challenges of asynchronous BCIs, particularly in reducing error rates and improving system responsiveness. |
[30] | <50 | Enhanced neuronal activity recognition and pattern identification, incorporating advanced neural networks. | Enhance motor imagery effectiveness through virtual reality, exploring new training protocols. |
[35] | 74.35 | Highlighted the positive effects on functional networks and motor learning, potentially increasing BCI efficacy. | Implement a randomized control trial to assess the impact of gamification on motor learning and BCI integration. |
[19] | 80 | Utilized deep learning techniques to improve performance by 32%, focusing on computational efficiency and accuracy. | Enhance the usefulness, controllability, and dependability of robotic devices controlled by BCIs. |
[33] | 78.29 | Developed a novel classification algorithm for operating NAO robots, enhancing interface responsiveness. | Address the challenges of limited precision in robotic control through algorithm refinement and testing. |
[2] | 63.45 | Leveraged temporal characteristics related to error potentials P300, aiming to reduce reaction times. | Extend the system to complex tasks involving multidirectional movements and increase task variety. |
[29] | 64.9 | Improved direction decoding accuracy by 10%, employing advanced signal processing techniques. | Investigate EEG data with low signal-to-noise ratios to enhance decoding accuracy under nonideal conditions. |
[32] | 85 | Focused on improving human–machine interactions and optimized offline adjustments to enhance system adaptability. | Explore the integration of hybrid BCI controls and real-time system adjustments to improve usability and operability. |
[9] | Not mentioned | Emphasized user adaptability and enhanced feature engineering, tailoring the system to user needs. | Focus on user-centered design for real-world applications, improving adaptability and customization. |
[34] | 70 | Developed a three-class BCI system integrating VR to boost user engagement and system accuracy. | Combine VR and BCI technologies to enhance user training and engagement in complex tasks. |
[37] | 83.7 | Created a user-friendly BCI system using an EEG neuroheadset, enhancing accessibility for severely disabled individuals. | Develop protocols for individuals with severe disabilities to control electric wheelchairs and other assistive devices. |
[14] | Not mentioned | Focused on lower-speed robotic control and signal processing enhancements for robust operation. | Further enhance signal processing capabilities and develop faster, more accurate control systems for robotics. |
[11] | Not mentioned | Adopted a user-centered design approach, significantly improving the user experience in BCI interactions. | Study the long-term usability and individualized training needs to adapt BCIs to daily use. |
[28] | 75.84 | Reported high levels of user satisfaction following training, enhancing motivation and engagement. | Assess the long-term impacts of training methods on user experience and motor imagery BCI skills. |
[13] | Not mentioned | Introduced novel signal processing techniques and real-time user feedback mechanisms. | Enhance signal processing and system calibration techniques to improve user feedback and accuracy. |
[12] | Not mentioned | Explored new feature sets that improved classification accuracy, enhancing user interface intuitiveness. | Focus on personalized calibration and feature optimization to better accommodate individual differences. |
[17] | 82.48 | Utilized a Pattern Recognition Neural Network to improve adaptability and accuracy of BCIs. | Expand on adaptability features to enhance real-world application and user control in complex environments. |
[16] | Not mentioned | Developed a multimodal approach, combining different sensory inputs for a more robust BCI. | Generalize the multimodal approach to various user scenarios to enhance BCI robustness and reliability. |
Proposed Work | 88.2–90.5 | Demonstrated robustness across various user scenarios, providing a comprehensive evaluation approach. | Explore personalization options, long-term usability improvements, and enhanced user interfaces for broader application. |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Akuthota, S.; Janapati, R.C.; Kumar, K.R.; Gerogiannis, V.C.; Kanavos, A.; Acharya, B.; Grivokostopoulou, F.; Desai, U. Enhancing Real-Time Cursor Control with Motor Imagery and Deep Neural Networks for Brain–Computer Interfaces. Information 2024, 15, 702. https://doi.org/10.3390/info15110702