Next Article in Journal
Exploring Perspectives of Blockchain Technology and Traditional Centralized Technology in Organ Donation Management: A Comprehensive Review
Previous Article in Journal
An Efficient Deep Learning Framework for Optimized Event Forecasting
Previous Article in Special Issue
Cost-Effective Signcryption for Securing IoT: A Novel Signcryption Algorithm Based on Hyperelliptic Curves
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Enhancing Real-Time Cursor Control with Motor Imagery and Deep Neural Networks for Brain–Computer Interfaces

by
Srinath Akuthota
1,
Ravi Chander Janapati
1,
K. Raj Kumar
1,
Vassilis C. Gerogiannis
2,
Andreas Kanavos
3,*,
Biswaranjan Acharya
4,
Foteini Grivokostopoulou
5,* and
Usha Desai
6
1
Department of Electronics & Communication Engineering, SR University, Warangal 506009, India
2
Department of Digital Systems, University of Thessaly, 41500 Larissa, Greece
3
Department of Informatics, Ionian University, 49100 Corfu, Greece
4
Department of Computer Engineering AI, Marwadi University, Rajkot 360003, India
5
Computer Technology Institute and Press “Diophantus”, 26504 Patras, Greece
6
Department of Electronics & Communication Engineering, S.E.A. College of Engineering & Technology, Bengaluru 560049, India
*
Authors to whom correspondence should be addressed.
Information 2024, 15(11), 702; https://doi.org/10.3390/info15110702
Submission received: 29 August 2024 / Revised: 11 October 2024 / Accepted: 20 October 2024 / Published: 4 November 2024
(This article belongs to the Special Issue Intelligent Information Processing for Sensors and IoT Communications)

Abstract

:
This paper advances real-time cursor control for individuals with motor impairments through a novel brain–computer interface (BCI) system based solely on motor imagery. We introduce an enhanced deep neural network (DNN) classifier integrated with a Four-Class Iterative Filtering (FCIF) technique for efficient preprocessing of neural signals. The underlying approach is the Four-Class Filter Bank Common Spatial Pattern (FCFBCSP) and it utilizes a customized filter bank for robust feature extraction, thereby significantly improving signal quality and cursor control responsiveness. Extensive testing under varied conditions demonstrates that our system achieves an average classification accuracy of 89.1% and response times of 663 milliseconds, illustrating high precision in feature discrimination. Evaluations using metrics such as Recall, Precision, and F1-Score confirm the system’s effectiveness and accuracy in practical applications, making it a valuable tool for enhancing accessibility for individuals with motor disabilities.

Graphical Abstract

1. Introduction

The proliferation of digital technology has greatly increased the need for effective human–computer interaction (HCI). This is particularly crucial for individuals with impaired motor functions, for whom traditional input devices pose significant barriers. Brain–computer interfaces (BCIs) have emerged as a transformative solution, providing direct communication between the human brain and computer systems, and thus, bridging the accessibility gap [1]. Motor-imagery-based BCIs, which allow users to control computer cursors and navigate digital environments through cognitive processes alone, are at the forefront of this innovation [2].
However, despite their potential, motor-imagery-based BCIs still face significant technical challenges. These challenges include the reliable extraction of meaningful information from brain signals, accurate classification of user intentions, and achieving real-time responsiveness. These factors are crucial, as they significantly affect the overall effectiveness and user satisfaction of BCI systems [3].
Our study aims to significantly advance the field of motor-imagery-based BCIs by addressing these technical challenges through an innovative approach. This research is motivated by the need to improve the practical usability and responsiveness of BCIs, thereby enhancing the interaction quality for users, particularly those with severe motor impairments.
We introduce a novel integrated method combining advanced signal preprocessing and classification techniques. Our approach employs Four-Class Iterative Filtering (FCIF) for signal preprocessing [4], complemented by a modified deep neural network (DNN) for robust classification. This method leverages the Four-Class Filter Bank Common Spatial Pattern (FCFBCSP) algorithm, which is specifically designed for optimal feature extraction from motor imagery data [5].
This paper addresses key challenges in the field of Brain–Computer Interfaces (BCIs), particularly focusing on motor-imagery-based systems that enhance real-time cursor control. The development and refinement of such technologies are crucial for increasing the accessibility of computing for individuals with motor disabilities. Within this context, our research has made several significant advancements, which are summarized below.
The main contributions of our research are outlined as follows:
  • Signal Preprocessing Improvement: We have enhanced the signal preprocessing phase with the introduction of Four-Class Iterative Filtering (FCIF), which significantly improves the quality of brain signals. This leads to a notable increase in the accuracy of user intention classification, crucial for effective BCI operation.
  • Advanced Feature Extraction: The utilization of the Four-Class Filter Bank Common Spatial Pattern (FCFBCSP) algorithm optimizes the extraction of discriminative spatial patterns from brain signals. This method is applied across four motor imagery classes—left-hand movement, right-hand movement, foot movement, and tongue movement—facilitating highly precise cursor control.
  • Modified DNN Classifier: A modified Deep Neural Network (DNN) classifier has been developed and tailored specifically for the unique demands of motor-imagery-based BCIs. This classifier ensures high accuracy and minimal response time, which are crucial for real-time applications.
  • Rigorous Empirical Testing: Our approach has undergone extensive empirical testing to rigorously evaluate the performance of the proposed BCI system. These tests provide deep insights into the system’s reliability, precision, and practical applicability, enhancing the credibility of our findings through a robust, evidence-based evaluation process.
Through these advancements, our research addresses significant existing limitations of motor-imagery-based BCIs, opening new avenues for future research and development in the field. The subsequent sections will detail our methodology, present experimental findings, and discuss the implications of our results in the broader context of HCI and BCI development.
The rest of the paper is structured as follows. Section 2 provides a review of related work, setting the stage for our contributions by discussing existing methods and their limitations. Section 3 systematically unfolds our study’s methodology, providing an in-depth account of the processes involved in data acquisition, four-class iterative filtering, Four-Class Filter Bank Common Spatial Pattern (FCFBCSP) feature extraction, and the implementation of the enhanced Deep Neural Network (DNN) classifier. The methodology is elucidated with the aid of visual representations, including images and block diagrams, to enhance comprehension of the experimental design, data analysis, and results. Section 4 critically examines the implications of the obtained results, offering a comparative analysis with relevant findings from prior studies. This comparative discussion serves to highlight the contributions made by the proposed approach. Finally, Section 5 provides a succinct summary of the major findings, key contributions, and outlines recommendations for future research directions, serving as a synthesis of the study’s outcomes and emphasizing its significance and potential avenues for further exploration.

2. Related Work

The field of Brain–Computer Interfaces (BCIs) has experienced significant advancements due to improvements in technology, particularly in signal processing and neuroimaging. These developments have not only enhanced the accuracy and usability of BCIs but have also expanded their applications across various domains. This section reviews the key literature by categorizing it into themes that reflect the current state and future direction of BCI research.

2.1. Technological Advancements and Neuroimaging Methods

Advancements in signal processing methods have significantly improved the accuracy of EEG-based BCIs. These enhancements have diversified BCI applications, extending from basic spellers to complex interfaces such as web browsers and brain-controlled wheelchairs. Neuroimaging methods have enabled fine-grained control over external elements through nuanced interpretation of brain signals, making BCIs more prevalent and versatile [6,7,8].
EEG-based BCIs continue to incorporate advanced hardware for data recording, experimental paradigms, and comprehensive signal processing pipelines. These developments ensure that BCIs can handle complex tasks and operate in real-world environments, broadening their applicability and effectiveness in various fields [8].

2.2. Deep Learning in BCIs

Deep learning has revolutionized the field of BCIs, particularly in the development of algorithms that improve interaction capabilities for users with specific needs, such as ALS patients [9]. Additionally, the creation of real-time AI virtual mouse systems using deep learning and computer vision has addressed significant challenges, such as those posed by the COVID-19 pandemic [10,11].
Further contributions to the field include the integration of multimodal human–computer interfaces (HCIs). The potential of combining electrooculogram (EOG) data with deep learning techniques to enhance HCI performance has been highlighted [12]. The advantages of deep learning classification in continuous noninvasive BCIs, focusing on improved control mechanisms, have been explored [13]. Additionally, a technique for real-time posture identification in closed-loop behavioral studies, illustrating the innovative use of deep learning in BCIs, has been introduced [14].

2.3. Classification Models and Adaptation

Classification models based on deep learning, like CNNs and RNNs, have seen extensive use in BCI applications for motor imagery tasks. Trajectory-based air-writing recognition using depth sensors and deep neural networks has been explored, providing a novel interaction modality [15]. Real-time hand gesture detection and identification using RGB-D cameras and 3D convolutional neural networks has been demonstrated, expanding the utility of BCIs in gesture-based control [16]. Moreover, a deep-neural-network-based classification method for EEG-based motor imagery tasks has been proposed, significantly enhancing the performance of EEG-based BCIs [17]. An asynchronous motor imagery BCI and a LiDAR-based shared control system for intuitive wheelchair navigation have been developed, further demonstrating the practical applications of BCIs in mobility aids [18].
Improvements in user adaptation through EEG motor imagery classification using deep learning techniques, which aid in customizing BCIs to individual users, have been shown [19]. A noninvasive BCI system for 2D cursor control using a spectral–temporal long short-term memory network has been presented, adding to the repertoire of control strategies [2]. Notably, a novel covariance-based method that combines spatial and temporal aspects of feedback-related brain activity in response to BCI error has been proposed, showing how nuanced feedback can enhance BCI responsiveness [20].
The effectiveness of deep learning models such as CNNs and RNNs in classifying motor imagery tasks has been well documented. These models, however, frequently encounter challenges with generalization across different sessions and subjects [21,22,23,24]. This highlights a need for approaches that enhance both the robustness and accuracy of classification in varying conditions.

2.4. Artifact Removal and Signal Integrity

Recent advancements in EEG artifact removal have utilized wavelet transforms and Independent Component Analysis (ICA) to enhance signal clarity. Studies employing wavelet-based methods have demonstrated the potential to selectively remove noise while attempting to preserve the integrity of original EEG signals [25]. Similarly, techniques based on ICA have been applied to decompose EEG data into independent sources, isolating and removing components attributed to artifacts [26]. Despite their efficacy, these approaches are often challenged by inadvertent alterations to essential EEG features, which can potentially degrade the data’s utility for accurate classification. The Four-Class Iterative Filtering (FCIF) approach represents a development in this area, aiming to maintain high fidelity in the resultant EEG data while effectively reducing noise.

2.5. Hyperparameter Optimization and System Performance

The importance of hyperparameter tuning in optimizing BCI system performance has been increasingly recognized. Traditional methods often rely on arbitrary or manually intensive tuning processes that may not yield optimal performance. A structured approach using grid search and cross-validation has been suggested to systematically explore parameter space and enhance model consistency across different datasets [27]. This approach underlines a significant advancement toward improving the reliability and efficacy of BCI systems.

2.6. Future Directions and Ethical Considerations

Progressive training for motor imagery BCIs using gamification and virtual reality embodiment to enhance user engagement and training efficiency has been investigated [28]. The accuracy of direction decoding during online motor-imagery-based BCIs using error-related potentials has been aimed to increase [29]. Virtually embodied feedback to enhance performance in motor imagery BCI-based control applications has been focused on [30]. The usability of a low-cost wireless BCI for cursor control using an online linear model has been studied [31]. The use of motor-imagery-based BCIs and SSVEP for controlling quadcopters in three dimensions has been explored [32].
A limb control technique based on motor imagery EEG for the NAO robot has been created [33]. Mental navigation in virtual reality and development of a BCI based on motor imagery has been investigated [34]. A review on gamifying motor imagery to enhance the BCI learning process has been conducted [35]. BCI simulation and neurofeedback for forward and backward virtual reality navigation have been compared [36]. The user-friendliness of motor imagery for an EEG-based BCI system to operate a simulated wheelchair has been studied [37]. The feasibility of a novel noninvasive P300-based BCI mouse emulation device (MED), along with a commercial head-mouse among utilized by head-mouse users with cervical spinal cord injury (SCI), has been evaluated [38].
These studies collectively underscore the increasing diversity in BCI applications and methodologies. They highlight innovative designs for human–computer interfaces, real-time applications using deep learning, and multimodal sensing. The subsequent sections will detail our novel approach and its contribution to the field, particularly in enhancing the precision and usability of motor-imagery-based BCIs for real-time cursor control.

3. EEG-Based Methodology for Motor Imagery BCI

The data collection process in this study was meticulously executed, involving a cohort of 50 participants who were selected based on stringent inclusion criteria, including age, cognitive ability, and the absence of neurological disorders. In adherence to ethical standards, all participants were extensively briefed on the research objectives and provided their informed consent prior to participation.
Advanced neuroimaging technology was employed, utilizing a high-resolution electroencephalogram (EEG) device equipped with 32 electrodes arranged according to the international 10–20 system. This setup was used to record brain activity from each participant’s scalp at a sample rate of 128 Hz, optimal for capturing the neural dynamics associated with motor imagery tasks. This robust data collection framework forms the foundation of our study, ensuring high reliability and depth in the subsequent analysis.
Participants were asked to engage in imagining four distinct motor actions: tongue movement, right hand movement, left hand movement, and foot movement. Each task was designed to target specific regions of the motor cortex, with the activations observed in the EEG recordings. Seated comfortably in a controlled environment, participants performed the motor imagery tasks in a randomized order to prevent any bias. Each task was repeated 30 times to gather a sufficient number of trials for statistical reliability.
To minimize signal artifacts, participants were instructed to remain still, and any eye movements or blinks were monitored using electrooculography (EOG) channels. This monitoring was crucial for the subsequent removal of artifacts during the preprocessing phase.
In the preprocessing steps illustrated in Figure 1, various automated algorithms were utilized to identify and remove EEG segments contaminated by muscle artifacts, eye blinks, or other nonbrain-related activities. The EEG signals were bandpass-filtered within the 8 Hz to 30 Hz range to focus on the frequencies most relevant to motor imagery. Data were segmented into epochs centered around the onset of each motor imagery task, ensuring that only the relevant portions of the data were analyzed. Baseline correction was applied to each epoch to eliminate DC offset and standardize the data across all trials.
The final preprocessed EEG dataset, consisting of 120 trials for each of the four motor imagery tasks, formed the basis for the subsequent feature extraction and categorization steps. Feature extraction was performed using the Four-Class Filter Bank Common Spatial Pattern (FCFBCSP) algorithm, which discriminates between different motor imagery tasks by optimizing spatial filters for each class. This method significantly enhances the signal-to-noise ratio and extracts features that are most indicative of the underlying motor imagery.
Following feature extraction, a modified deep neural network (DNN) classifier was employed to categorize the EEG signals into the corresponding motor imagery tasks. The DNN was designed with multiple layers to handle the high dimensionality and variability of EEG data, ensuring robust and accurate classification. The neural network architecture was optimized through several iterations, focusing on minimizing classification error and improving real-time response capabilities.

3.1. Four-Class Iterative Filtering (FCIF)

The Four-Class Iterative Filtering (FCIF) technique is utilized during the preprocessing stage to enhance the accuracy and reliability of brain signals associated with motor imagery tasks [4]. FCIF is an advanced spatial filtering technique designed to emphasize relevant spatial patterns inherent to different motor imagery activities, thereby extracting valuable information from EEG data. This technique leverages principles from signal processing and linear algebra.

3.1.1. Mathematical Foundations and Implementation

The cornerstone of the FCIF method is the construction of spatial filters, which are specifically designed to enhance the distinction between the four predefined motor imagery classes: left-hand movement, right-hand movement, foot movement, and tongue movement. These filters are iteratively refined to maximize the variance between task-related components and minimize the variance of unrelated or noisy components, thereby enhancing signal clarity.
The initial step in this filtering process involves computing the covariance matrix for each motor imagery class, capturing the inter-channel relationships and emphasizing the spatial patterns associated with each type of motor imagery:
Cov ( X ) = 1 N i = 1 N ( X i μ ) ( X i μ ) T
where Cov ( X ) represents the covariance matrix, N is the number of samples, ∑ signifies the summation over all samples, X i represents individual EEG data samples, and μ is the mean of the data across all samples.
Following the computation of covariance matrices, spatial filters are iteratively updated to optimize the separation of class-specific signals. This update process is guided by a gradient descent algorithm aimed at minimizing the overlap between class-specific data distributions:
W new = W old + η J ( W old )
where W new denotes the updated spatial filter, W old is the previous iteration’s filter, η is the learning rate, and J ( W old ) represents the gradient of the cost function with respect to the filter.
Initially, spatial filters are estimated for each class using these covariance matrices, projecting the EEG data onto a space that augments class-related information. Through iterative refinement, the spatial filters are continuously optimized to enhance class distinction. This optimization involves updating the filters and recalculating the covariance matrices until convergence criteria are met, thus ensuring reduced noise and unrelated brain activity, and enhancing the EEG data’s signal-to-noise ratio.

3.1.2. Optimization and Convergence

The spatial filters are initially estimated using the covariance matrices mentioned and are refined through several iterations. Each iteration involves projecting the EEG data onto the new filter space, recalculating the covariance matrices, and updating the filters until convergence is achieved. This iterative refinement process not only enhances the selectivity and specificity of the filters but also effectively improves the signal-to-noise ratio of the EEG data.
This detailed focus on relevant brain areas and patterns significantly boosts FCIF’s ability to discern between different motor imagery tasks, thereby enhancing the subsequent classification accuracy.
Through its iterative enhancement of spatial filters, the FCIF technique proves indispensable in preprocessing EEG data for motor-imagery-based BCIs, setting a robust foundation for accurate and efficient signal classification, as depicted in Figure 2.

3.2. Four-Class Filter Bank Common Spatial Pattern (FCFBCSP) Algorithm

The Four-Class Filter Bank Common Spatial Pattern (FCFBCSP) algorithm represents an advanced feature extraction methodology employed in our study to enhance the discriminative capability of EEG signals derived from motor imagery tasks. Designed to identify and exploit spatial patterns that augment the separation between different motor imagery classes, the FCFBCSP aims to maximize the informativeness of the extracted features, thereby improving classification accuracy.
The application of the FCFBCSP algorithm involves several key steps structured to optimize feature extraction across various neural frequencies:
  • Decomposition into Filter Banks: Initially, the FCFBCSP algorithm decomposes the preprocessed EEG data into multiple filter bank components, each corresponding to a specific frequency band. This decomposition facilitates the algorithm’s adaptation to the unique characteristics of different frequency ranges within the EEG spectrum, addressing the complex nature of brain signals.
  • Application of CSP within Frequency Bands: The Common Spatial Pattern (CSP) algorithm is employed within each frequency band to derive spatial filters. These filters are tailored to maximize the variance of EEG signals associated with one motor imagery class while minimizing it for the others. Such selective variance enhancement is crucial for revealing the most discriminative spatial patterns for each motor imagery class.
  • Feature Vector Construction: For each trial, feature vectors are constructed using the CSP components extracted from various frequency bands. These vectors capture the essential spatial patterns across the spectrum, forming a robust basis for subsequent classification tasks.
  • Comprehensive Spatial Analysis: By capturing spatial patterns across multiple frequency bands, the FCFBCSP provides a comprehensive analysis of EEG data. This approach ensures that dominant features in both high- and low-frequency bands are identified, and the spatial patterns with the highest discriminative power are utilized for classifying the four motor imagery classes.
This structured methodology enhances the accuracy of feature extraction and significantly contributes to the reliability and efficiency of the classification process in our BCI system. The enhanced feature set derived through the FCFBCSP is instrumental in achieving superior performance in detecting and classifying motor imagery tasks, as depicted in Figure 3.

3.3. Modified DNN Classifier Architecture

An essential component of our proposed motor-imagery-based Brain–Computer Interface (BCI) system is the Modified Deep Neural Network (DNN) classifier. This section details the specific architecture of the DNN and the modifications implemented to optimize it for real-time cursor control and navigation in BCI applications [39].
As depicted in Figure 4, the architecture of the Modified DNN classifier is designed as a feed-forward neural network tailored for motor imagery tasks. The structure of this DNN includes several key elements:
  • Input Layer: The input layer receives feature vectors extracted from the prepro- cessed EEG data, where the dimensionality is determined by the number of features in these vectors.
  • Hidden Layers: The network comprises multiple hidden layers. Each layer contains a varying number of neurons, specifically designed to process the complex spatial patterns inherent in EEG data effectively. The depth and width of these layers are calibrated based on empirical performance metrics, such as accuracy and computational efficiency, ensuring optimal performance for the specific demands of motor imagery classification.
  • Activation Functions: Rectified Linear Units (ReLUs) are employed in the hidden layers to facilitate the efficient learning of complex patterns in the data. The choice of ReLU over other functions is due to its ability to speed up training without affecting the generalization ability of the network.
  • Output Layer: The output layer includes neurons corresponding to the motor imagery classes (four in this study). A SoftMax activation function is used to translate the raw outputs of the network into probabilities for each class, which is crucial for accurate multi-class classification.
Figure 4. Diagram showing the structure of the modified DNN classifier.
Figure 4. Diagram showing the structure of the modified DNN classifier.
Information 15 00702 g004
To optimize the performance of our DNN, we conducted a systematic hyperparameter tuning process:
  • Learning Rate and Batch Size: A grid search was utilized to determine the optimal learning rate, testing values ranging from 0.001 to 0.01. Batch sizes of 16, 32, and 64 were evaluated, with 32 providing the best trade-off between training speed and model performance.
  • Early Stopping and Dropout: Early stopping was implemented to prevent overfitting, ceasing training if the validation loss did not improve, typically ending training after 70–80 epochs out of a maximum set at 100. A dropout rate of 0.5 was used to further ensure the model generalizes well on unseen data.
  • Regularization: L2 regularization was applied to the weights, with parameters fine-tuned through cross-validation to ensure the model remains generalizable across various datasets.
These strategic enhancements in the training process are critical for developing a robust DNN capable of high performance in real-time BCI applications. The detailed architecture and thoughtful tuning of the model underscore our commitment to creating a highly accurate and efficient BCI system.
To ensure the DNN classifier meets the stringent requirements of real-time BCI operations, several critical modifications and optimizations have been integrated:
  • Low-Latency Optimization: The architecture is specifically optimized for low-latency operations, a crucial feature for real-time cursor control and navigation. Specific architectural decisions, such as the minimization of layer depth where possible, help reduce computation time, ensuring rapid response rates necessary for real-time application.
  • Pre-trained Models: Utilizing pre-trained models as a foundational starting point significantly speeds up the training process. These models are typically sourced from similar task domains and are adapted through further training on motor imagery-specific data, facilitating faster convergence and enhanced generalization capabilities.
  • Batch Normalization: Batch normalization techniques are implemented across layers to standardize activations, thus accelerating training dynamics and enhancing the stability of the network’s learning process. This method is particularly effective in managing internal covariate shift, leading to improved training speeds and more robust performance.
  • Loss Function: A categorical cross-entropy loss function is strategically chosen to train the network, aiming to systematically minimize classification errors. This choice is aligned with the multi-class nature of the output, providing a probabilistic framework that quantitatively assesses the classifier’s predictions against the true labels.
This tailored architecture not only addresses the unique challenges of processing EEG data for motor imagery tasks but also ensures high accuracy and responsiveness necessary for effective BCI operation. The diagram in Figure 4 illustrates the complete architectural layout, emphasizing the strategic modifications that empower the classifier to perform efficiently in real-time applications.

3.4. Hardware and Software Components for EEG-Based Computer Cursor Control with Emotiv Insight

The efficacy of EEG-based computer cursor control systems critically depends on the integrated performance of both hardware and software components. In our study, we employ the Emotiv Insight EEG headset coupled with a comprehensive suite of software tools. This combination is optimized for the efficient acquisition, preprocessing, analysis, and visualization of EEG data, ensuring that our system captures accurate brain signals and processes them effectively for real-time cursor control.

3.4.1. Hardware Components

EEG Headset: The Emotiv Insight EEG headset is central to our hardware setup, serving as the critical interface between the user’s brain activity and the computer system. Chosen for its robust specifications, the headset provides high-fidelity EEG signal capture essential for precise motor imagery task execution.
  • Model: Emotiv Insight EEG Headset.
  • EEG Channels: This features five strategically positioned EEG channels (AF3, F7, F3, FC5, and T7), offering comprehensive scalp coverage essential for capturing a broad spectrum of neural signals associated with various motor imagery tasks.
  • Sampling Rate: Operating at a high sampling rate of 128 Hz, the headset records dynamic brain activity with minimal delay, crucial for the effective translation of user intentions into cursor movements in real-time applications.
Electrooculography (EOG) Channels: Accuracy in distinguishing intentional cursor movements from involuntary eye movements is paramount. The Emotiv Insight includes two EOG channels to enhance system accuracy by filtering out potential noise and artifacts from eye movements.
  • EOG Channels: These are crucial for accurately monitoring eye movements and blinks, which can significantly contaminate EEG data. Effective capture of these signals allows the system to differentiate between cursor commands driven by brain activity and those inadvertently triggered by eye movements.
Amplifiers and Electrodes: The quality of EEG data capture is enhanced by the quality of signal amplification and electrode design integrated within the Emotiv Insight, ensuring detection and accurate amplification of even subtle neural signals.
  • Amplifiers: Built-in amplifiers boost the EEG signal quality, enabling more precise and reliable brain signal processing.
  • Electrodes: The headset utilizes custom-designed dry EEG electrodes, ensuring optimal contact with the scalp and consistent signal acquisition, critical for maintaining high-quality data during extended use.
Additional Sensors: Recognizing the importance of additional biometric data in enhancing cursor control, the Emotiv Insight is equipped with more sensors to provide a richer dataset for processing.
  • Accelerometer/Gyroscope: A built-in three-axis accelerometer and gyroscope track head movements and orientation changes, providing additional control inputs that enhance the system’s responsiveness and user experience.
Computer Interface: Seamless data transmission between the EEG headset and the computer is facilitated by the headset’s wireless capabilities, critical for real-time applications.
  • Wireless Interface: The Bluetooth connection of the Emotiv Insight ensures efficient, reliable wireless communication, minimizing latency and maintaining continuous data flow between the headset and the computer system.

3.4.2. Software Components

Software components play a crucial role in the acquisition, preprocessing, and analysis of EEG data, significantly impacting the performance and effectiveness of computer cursor control experiments. Our study utilizes a suite of specialized software tools, each serving distinct purposes in the workflow from data capture to analysis.
Data Acquisition and Preprocessing:
  • EEG Data Acquisition Software: EmotivPRO, provided by Emotiv, acts as the interface between the Emotiv Insight headset and the computer system, designed to handle high data throughput and ensure stable communication for real-time data capture.
    -
    Connection to Emotiv Insight: EmotivPRO facilitates seamless real-time EEG data acquisition, critical for immediate cursor control response.
    -
    Recording EEG Data: It supports continuous recording during experimental sessions, crucial for later analysis and model training.
    -
    Basic Preprocessing: Initial data preprocessing tools such as filtering and artifact removal enhance the quality and usability of the EEG signals before more complex processing.

3.4.3. Advanced Signal Processing Techniques

Four-Class Iterative Filtering (FCIF) Technique: Implemented using custom Python code tailored to the specific needs of our BCI system, FCIF fine-tunes the filtering process to enhance signal classification accuracy based on the unique characteristics of our EEG data.
Four-Class Filter Bank Common Spatial Pattern (FCFBCSP) Algorithm: Our custom implementation of the FCFBCSP algorithm using Python significantly boosts the discriminative ability of EEG signals by isolating and enhancing features critical for distinguishing between different motor imagery tasks.

3.4.4. Classification and Analysis

Modified Deep Neural Network (DNN) Classifier: Built using advanced deep learning frameworks such as TensorFlow or PyTorch, our Modified DNN Classifier accommodates efficient training and real-time inference capabilities, leveraging the latest optimizations and techniques in machine learning.
Data Analysis and Visualization:
  • Data Analysis Software: Comprehensive data analysis is performed using Python, enhanced by libraries like NumPy and Pandas, enabling sophisticated handling of large datasets for detailed statistical analysis and effective data management.
  • Data Visualization: We utilize Matplotlib to generate clear and informative graphical representations of data, aiding in the interpretation of complex patterns and results, and providing insights into the effectiveness of the BCI system.
The integration of these advanced software components forms a robust framework that supports all phases of our EEG-based computer cursor control experiments. By combining custom-developed algorithms with established software libraries, our approach achieves an optimal balance of flexibility and operational efficiency, essential for the success of the research objectives.

3.5. Experimental Protocol with Emotiv Insight

The success of EEG-based motor imagery studies hinges significantly on the participants’ ability to perform the tasks under controlled conditions, minimizing physical movements and other potential sources of noise. In our study, participants were meticulously briefed on the motor imagery exercises to ensure a clear understanding and strict compliance, crucial for obtaining clean and interpretable EEG data.

3.5.1. Preparation and Setup

Participants were seated in a quiet, controlled environment to reduce external distractions and potential artifacts, which is essential for capturing high-quality EEG data. The Emotiv Insight headset, equipped with electrodes positioned according to the manufacturer’s guidelines, was used to ensure accurate EEG data capture. Each participant underwent a familiarization session to become comfortable with the headset and fully understand the task requirements. This session also allowed participants to practice the motor imagery tasks, ensuring that they could perform them consistently during the actual data collection.

3.5.2. Execution of Motor Imagery Tasks

The motor imagery tasks involved imagining movements of the left hand, right hand, feet, and tongue. These tasks were thoroughly described to the participants, emphasizing the importance of internal visualization without physical execution:
  • Task Details: Each task was explicitly explained with the aid of visual aids and verbal descriptions to ensure that participants could vividly imagine the movements without actual motion.
  • Randomized Order: Tasks were assigned in a randomized order to each participant to eliminate any order effects that might affect the consistency and quality of the EEG data.
  • Number of Trials: Each participant performed each motor imagery task 30 times, resulting in a total of 120 trials per participant. This substantial number of repetitions was essential for gathering robust EEG data and ensuring statistical significance in the subsequent analysis.

3.5.3. Monitoring and Recording

EOG channels integrated within the Emotiv Insight headset actively monitored eye movements and blinks during the tasks. This monitoring was vital for accurately identifying and excluding artifacts related to eye movements from the EEG data, which significantly enhances data quality and reliability:
  • Artifact Handling: Real-time monitoring allowed for immediate identification and tagging of contaminated data segments, which were later excluded or corrected in the preprocessing phase.

3.5.4. Data Integrity and Preprocessing

During preprocessing, artifacts identified from eye movements and other nonbrain sources were meticulously removed. This step was crucial to maintaining the integrity of the EEG signals before they proceeded to the feature extraction and classification phases:
  • Signal Cleaning Procedures: Advanced signal processing techniques were applied to ensure that only clean, artifact-free data were used in the analysis. This included filtering, artifact subtraction, and data segmentation based on task-specific cues.
The comprehensive instructions and controlled settings were designed to maximize the reliability and relevance of the EEG data collected. This meticulous approach allowed for an accurate assessment of the participants’ motor imagery capabilities, providing a solid foundation for effective data analysis in the later stages of our research.
This detailed protocol underscores the rigorous standards applied to data collection in our study, aiming to ensure that the results are both reliable and significant in advancing the understanding and application of EEG-based brain–computer interfaces.

3.6. Performance Metrics

This subsection details the performance metrics used to evaluate the effectiveness of the proposed BCI system in distinguishing between motor imagery tasks and control conditions. A comprehensive overview of the key measurements for each performed task is presented in Table 1, including classification accuracy, response time, specificity, sensitivity, false positives, and false negatives, offering a quantitative assessment of the system’s performance.
Classification Accuracy measures the proportion of correctly classified instances, critical for evaluating the BCI system’s precision in interpreting user intentions from brain signals. High accuracy rates in motor imagery tasks compared to control conditions highlight the system’s effectiveness in specific task recognition.
Response Time quantifies the system’s latency in assigning class labels after task initiation. This metric is crucial for real-time applications, where timely response is paramount to user satisfaction and system usability.
Specificity and Sensitivity assess the BCI’s capability to differentiate between active motor imagery and rest (control conditions). Specificity measures how accurately the system identifies nontask states, crucial for preventing false activations during rest periods. Sensitivity indicates the effectiveness of the system in recognizing actual motor imagery tasks, essential for responsive control.
False Positives and False Negatives are indicators of potential classification errors. High False Positives could lead to unintended actions if the system misinterprets nontask activities as commands. Conversely, False Negatives, where genuine motor imagery tasks are overlooked, could result in missed commands, affecting the system’s reliability.
The data in Table 1 not only illustrate the system’s functional efficacy across various scenarios but also highlights areas where improvements are necessary. This comprehensive evaluation aids in understanding the operational capabilities of the BCI system and guides further development to enhance its accuracy and responsiveness.

4. Experimental Results Analysis

The experimental results presented here offer detailed insights into the performance of the proposed motor-imagery-based BCI system.

4.1. AUC Calculation and Importance

The Area Under the Curve (AUC) metric provides a robust measure of our classification model’s ability to distinguish between classes across all possible classification thresholds.
The Receiver Operating Characteristic (ROC) curves, illustrated in Figure 5, were generated for each class to compute the AUC. These curves plot the true positive rate (sensitivity) against the false positive rate (1-specificity) at various threshold settings, providing a clear visual representation of each class’s performance.
Each class’s AUC value was calculated, where an AUC of 1.0 indicates perfect classification ability, and an AUC of 0.5 suggests no discriminative ability. Our model achieved high AUC scores, indicating strong predictive performance.
AUC is particularly insightful for evaluating the ability of the classifier to separate different motor imagery classes effectively. It is crucial in our context, where the class distribution may be imbalanced, as it provides a more reliable evaluation than accuracy alone. This metric highlights how well the classifier can identify true positives while minimizing false positives across all decision thresholds.

4.2. System Design and Experiment Setup

4.2.1. Optimization of Filter Bank for EEG Signal Processing

The design of our filter bank was meticulously tailored for EEG signal processing in BCI applications, focusing on motor imagery tasks. Selection criteria for the filters targeted essential EEG frequency ranges:
  • Mu Band (8–12 Hz) and Beta Band (13–30 Hz): These frequency bands are crucial for motor imagery, linked to motor functions and active thinking processes.
Each band was isolated using bandpass filters with carefully chosen cutoff frequencies that maximize information extraction and minimize redundancy:
  • Filter Configuration: We employed a set number of filters for each band, optimizing the bandwidth to reduce overlap between adjacent filters, enhancing the signal’s clarity and usability for classification.
  • Cross-validation Performance Evaluation: The effectiveness of different filter configurations was rigorously evaluated through cross-validation on the training dataset, focusing on metrics like classification accuracy to refine our approach.
  • Final Tuning: The filter bank’s final settings were adjusted to optimize the subsequent classification stage, ensuring high discriminability between different motor imagery classes.
This strategic approach to filter bank optimization underscores our commitment to enhancing the accuracy and efficiency of our BCI system, ensuring it effectively processes and classifies EEG signals for real-time applications.

4.2.2. Model Training and Validation Setup

Division of Training, Validation, and Test Sets: To ensure robust model training and evaluation, we divided the dataset into 70% for training, 15% for validation, and 15% for testing. This split facilitates substantial training while retaining adequate data for validation and testing to mitigate overfitting.
Cross-Validation: To enhance model reliability, k-fold cross-validation (with k = 5) was performed on the training data. This method allowed us to validate the model’s performance across different data subsets, thereby reducing training bias and enhancing the robustness of our predictions.
Hyperparameter Selection: The hyperparameters of the model were systematically optimized using a grid search strategy coupled with cross-validation. Key hyperparameters tuned included the learning rate, batch size, number of layers, number of neurons per layer, and dropout rate. The set of hyperparameters that achieved the highest accuracy in cross-validation was selected for final model training, ensuring optimal performance.

4.3. Confusion Matrices Analysis

The confusion matrices provided from Table 2, Table 3, Table 4, Table 5 and Table 6 illustrate the model’s ability to accurately discriminate between positive (motor imagery) and negative (control conditions) instances across different tasks. High values in the diagonal elements of each matrix clearly demonstrate the system’s strong classification accuracy, affirming its reliability.

4.3.1. Left-Hand Imagery

The confusion matrix for left-hand imagery (Table 2) shows a high degree of accuracy and a commendable balance between true positive and true negative classifications, indicating reliable performance.
The high true positive and true negative rates suggest the system effectively recognizes and differentiates left-hand motor imagery from rest states, minimizing both false alarms and misses. This precision is crucial for applications requiring distinct and reliable left-hand commands, enhancing user control and interaction.

4.3.2. Right-Hand Imagery

Similarly, the confusion matrix for right-hand imagery (Table 3) demonstrates robust performance, effectively classifying right-hand motor imagery and control conditions.
This matrix highlights the system’s capability to accurately interpret right-hand imagery, crucial for user-specific BCI operations. The balanced numbers of false positives and negatives underscore the model’s efficiency in providing consistent and dependable outputs, key for seamless user experience in interactive applications.

4.3.3. Foot Imagery

The confusion matrix for foot imagery (Table 4) continues this trend, indicating a proficient capacity to differentiate between active motor imagery and rest periods.
The results from this matrix confirm the system’s effectiveness in recognizing foot imagery with high accuracy. The minimal number of false classifications (both positives and negatives) demonstrates the system’s robustness, enhancing its practical application in controlling devices through foot commands without the need for physical movements.

4.3.4. Tongue Imagery

The confusion matrix for tongue imagery (Table 5) showcases the system’s high accuracy in discerning tongue motor imagery tasks, further affirming the model’s effectiveness across diverse motor imagery classifications.
This matrix illustrates the system’s precision in identifying tongue imagery, a critical function for speech and language therapy applications. The high rates of correct classifications suggest the model’s potential in specialized therapeutic settings, offering new avenues for rehabilitation using BCI technologies.

4.3.5. Control Conditions

Finally, the confusion matrix for control conditions (Table 6) shows the system’s proficiency in identifying instances of inactivity, with a high true negative rate that underscores its sensitivity to nonactive states.
The strong performance in recognizing nonactive states as depicted in this matrix is essential for avoiding unintended actions, ensuring that the BCI system remains inactive until a clear user command is detected. This reliability is vital for safety and efficiency in user interactions with the BCI system.
These results collectively highlight the BCI system’s capability to accurately classify both motor imagery tasks and control conditions across a variety of scenarios, demonstrating its potential for broad application in assistive technologies and neurotherapeutic settings. This comprehensive analysis not only validates the system’s design but also guides future improvements and optimizations.

4.4. Hypothetical p-Values and Statistical Analysis

This subsection delves into the statistical evaluation of the performance metrics for the proposed BCI system, using hypothetical p-values to assess the significance of observed differences in system performance across various metrics. Table 7 presents these p-values, offering insights into the statistical robustness of the results obtained from the experimental tests.
The analysis of hypothetical p-values reveals significant findings:
  • The extremely low p-value (0.001) for Classification Accuracy robustly supports the rejection of the null hypothesis, indicating a statistically significant improvement in accuracy for motor imagery tasks compared to control conditions. This finding underscores the BCI system’s ability to effectively interpret and classify neural patterns associated with specific motor tasks.
  • The p-value (0.015) associated with Response Time suggests a significant reduction in decision-making time for motor imagery tasks, enhancing the system’s usability in real-time applications. This improvement is critical for applications where timely and accurate response is paramount, such as in assistive technologies for individuals with mobility impairments.
  • p-values for Specificity (0.002) and Sensitivity (0.009) confirm significant improvements in these metrics, indicating that the system is not only more likely to identify motor imagery when it occurs but also correctly ignores nontask-related brain activity. These improvements contribute to a more reliable and user-friendly BCI system.
  • The significant results for False Positives (0.027) and False Negatives (0.003) demonstrate the system’s enhanced accuracy in recognizing true motor imagery tasks and reducing errors that could lead to unintentional actions, further validating the BCI’s effectiveness.
These statistical results not only validate the effectiveness of the BCI system in discriminating between motor imagery and control conditions but also affirm the reliability and potential of the system for practical BCI applications. The robust statistical evidence provided here supports the system’s capability to function accurately under varying conditions, which is critical for user-centric implementations and future innovations in BCI technology.

4.5. Visual Analysis of BCI System Performance

The graphical representations detailed below offer a comprehensive visual assessment of the BCI system’s performance across various metrics. These visuals highlight the system’s accuracy, response time, and the distinctiveness of motor control pathways enabled by the system, providing a deeper understanding of its operational efficacy.

4.5.1. Accuracy and Response Time Trends

Figure 6 visually summarizes the trends in accuracy and response time across all trials, offering critical insights into the BCI system’s consistency and reliability over time. This visual analysis is crucial for evaluating the system’s capacity to maintain stable performance in real-time applications, an essential feature for practical BCI deployment.
The graph illustrates a consistent high accuracy level, while response times show a gradual decrease. This trend suggests that the BCI system not only maintains its efficacy in classifying motor imagery tasks as interactions progress but also becomes faster. Such improvements in speed, coupled with sustained accuracy, validate the effectiveness of the Modified DNN classifier and the FCFBCSP algorithm in extracting and processing relevant features for quick and accurate decision making.

4.5.2. Cursor Path Analysis

Figure 7 presents cursor movement paths during trials, illustrating the system’s ability to translate motor imagery into precise cursor trajectories accurately.
The blue curve represents the average cursor path derived from individual trials, highlighting typical movement patterns under controlled conditions. The dispersion around this curve reflects the variability in individual user performance, offering insights into how well the BCI system adapts to different users. The solid and dotted red curves depict exemplary trials, showcasing a range of possible cursor movements and the system’s responsiveness to varied user inputs. This variability is critical for understanding and improving user-specific interaction dynamics within BCI systems.

4.5.3. Beta Rhythm Power Analysis

Figure 8 illustrates the power distribution of beta rhythms extracted from the left and right brain regions, a crucial aspect in understanding the neurological underpinnings of motor imagery used in BCI.
This graph highlights the differential power of beta rhythms, where the y-axis represents power levels, and the x-axis indicates frequency. The distinct power levels between the two hemispheres underscore the neural engagement during motor imagery tasks, correlating directly with the participant’s ability to control cursor movements through thought alone.
These graphical representations, when combined with the quantitative and statistical analyses, provide a holistic evaluation of the BCI system’s capabilities. They illustrate not only the system’s precision in classifying and responding to motor imagery but also its adaptability to individual differences in brain activity patterns, underscoring its potential for practical applications in assistive technology and beyond.

4.6. Discussion

4.6.1. Comparison with Existing Techniques and Methodological Evaluation

This subsection compares the proposed methodology with existing techniques in BCI feature extraction and classification, highlighting the advancements our approach offers and discussing both the advantages and limitations.
Feature Extraction: Traditional methods like CSP and FBCSP are effective but often struggle with nonstationary noise in EEG signals. Our FC-FBCSP enhances feature extraction by using a refined set of filters tailored for multiple frequency bands, significantly improving performance in handling noise and artifacts.
Artifact Removal: Common artifact removal techniques like ICA and wavelet transforms can remove useful signal components or introduce distortions. In contrast, our FCIF method preserves essential signal components while effectively eliminating noise, showing superior accuracy in artifact removal.
Classification: Conventional classifiers like SVM and basic neural networks do not always generalize well across different subjects. Our Modified Deep Neural Network (MDNN), combined with FC-FBCSP, achieves higher accuracy and better generalization in motor imagery classification.
The proposed methodology offers several advantages over existing approaches:
  • Improved accuracy in both artifact removal and classification.
  • Enhanced generalization capabilities across different subjects.
  • Increased resilience to noise, ensuring cleaner and more reliable signals for classification.
  • More efficient feature extraction from EEG signals, enhancing the discriminative power of the features.
However, the methodology also presents certain limitations:
  • The computational complexity is higher, which may affect processing time and resource consumption, especially in large-scale or real-time applications.
  • The performance has not been extensively validated on a wide range of diverse datasets.
  • Optimal performance is heavily dependent on precise hyperparameter tuning, which can be resource-intensive.

4.6.2. BCI System Accuracy and Variations Across Motor Imagery Tasks

This discussion critically examines the performance of a cursor control BCI system, leveraging neuromuscular imaging to accurately discern user intentions through motor imagery tasks. A thorough evaluation of classification accuracy across different motor tasks provides vital insights into the system’s operational effectiveness and reveals subtle yet significant variations in its capability to interpret diverse types of motor intentions, as depicted in Figure 6.
The BCI system exhibits exemplary performance, achieving high accuracy rates across various motor imagery tasks, which confirms its reliability in translating mental commands into precise cursor movements. Specifically, the system demonstrates an impressive accuracy rate of 90.5% for left-hand movement tasks (Table 8), underscoring its proficiency in recognizing and executing user-generated commands for left-hand actions. This high level of accuracy ensures that the system can be reliably used in applications requiring fine motor control, such as virtual typing or nuanced artistic creation. Similarly, right-hand movement tasks are classified with an accuracy of 88.2%, illustrating the system’s robustness in handling neural signals associated with right-hand gestures, making it suitable for right-hand dominant users.
Although the accuracy for foot movement tasks is marginally lower at 87.9%, it effectively demonstrates the system’s capability to decode motor imagery related to foot actions, which could be pivotal in applications like driving or foot-operated controls in accessibility devices. The accuracy for tongue movement tasks stands notably high at 89.8%, highlighting the system’s versatility and sensitivity in processing signals from less commonly used motor imagery tasks, expanding its applicability in specialized therapeutic settings or communication aids for individuals with severe physical limitations.
Despite these high overall accuracy rates, variations in performance among different motor imagery tasks were noted. Hand-related tasks generally outperformed foot and tongue movements, potentially due to more distinct and familiar neural patterns associated with hand gestures compared to those generated by less frequently used limbs or muscles. This observation may indicate inherent differences in how the brain encodes and processes various motor actions, suggesting a potential area for further research to enhance the system’s adaptability and performance across a broader spectrum of motor tasks.
The specificity of the accuracy values for each motor imagery class, coupled with detailed performance metrics such as F1-scores, precision, and recall, illustrates the BCI system’s sophisticated approach to balancing sensitivity and specificity. This balanced performance profile makes the BCI system a promising tool for advanced applications, offering reliable, nuanced control that can adapt to the specific needs and conditions of its users.
This analysis not only highlights the technical excellence of the BCI system but also lays the groundwork for future enhancements aimed at boosting the system’s adaptability and precision, especially for motor actions that are currently less accurately represented. Enhancing these capabilities could lead to broader applications and improve user experience, particularly in complex environments where a wide range of intuitive motor controls is necessary.

4.6.3. BCI System Efficiency in Real-Time Interaction and Response Time Analysis

The proposed BCI system demonstrates strong adaptability across a range of motor imagery tasks, showcasing its versatility and broad applicability in real-time cursor control scenarios. The effectiveness of motor-imagery-based BCIs in enhancing user interactions with digital systems is reinforced by research findings that highlight both accuracy and efficient response times as key contributors to seamless and precise cursor control. This dual capability makes the BCI system especially valuable in the development of user-friendly and inclusive assistive technology solutions.
The importance of response time as a measure of a BCI system’s efficiency in real-time interactions cannot be overstated. It indicates how swiftly the system can interpret user intentions and translate them into cursor movements. According to our findings, response times average 655 milliseconds for tongue movement, 680 milliseconds for foot movement, 670 milliseconds for right-hand movement, and 650 milliseconds for left-hand movement. These figures reflect the system’s ability to provide effective real-time control, with slight variations among tasks illustrating the intrinsic differences in neural processing required for each type of motor imagery.
Furthermore, our analysis reveals a positive correlation between reduced response times and increased classification accuracy. Particularly, tasks that inherently require quicker response times, such as right-hand and tongue movements, show higher accuracy rates. This relationship emphasizes the system’s ability to maintain high performance under the demand for rapid processing, a crucial feature for applications where delayed responses could diminish functionality or user satisfaction.
Despite the system’s high overall accuracy, examination of confusion matrices for various motor tasks highlights some misclassification tendencies, particularly between similar tasks such as left- and right-hand movements. To address these issues, we are exploring advanced feature engineering and algorithm optimization strategies aimed at enhancing the specificity and robustness of the system.
Moreover, the BCI system excels not only in accuracy but also maintains a balanced performance across various metrics such as F1-score, precision, and recall. This balanced performance profile supports the system’s potential for widespread real-world application.
Looking ahead, the comprehensive performance analysis, including a detailed comparison of our system with other studies, is crucial for ongoing enhancements. This benchmarking process, as illustrated in Table 9, helps identify unique contributions and areas requiring further research, enabling continuous refinement of our BCI development approach.
This enhanced focus on efficiency and response time, combined with a rigorous evaluation of accuracy and performance metrics, underscores the BCI system’s robustness and readiness for practical deployment, particularly in fields requiring high precision and responsiveness.

5. Conclusions and Future Work

This study has successfully developed and validated an advanced approach that significantly enhances the efficiency of motor-imagery-based Brain–Computer Interface (BCI) technology. By utilizing a modified deep neural network (DNN) for classification and the Four-Class Filter Bank Common Spatial Pattern (FCFBCSP) for signal preprocessing, the proposed BCI system has adeptly addressed critical factors that influence classification effectiveness. Notably, the system has demonstrated impressive accuracy rates ranging from 88.2% to 90.5%, accompanied by real-time response times averaging between 650 and 680 milliseconds. These performance metrics not only highlight the system’s capacity for precise cursor control but also underscore its overall efficiency, evidenced through robust F1-Score, Precision, and Recall metrics.
The adoption of cutting-edge signal preprocessing and feature extraction techniques has been pivotal in enhancing the quality of brain signals, setting the stage for more effective and responsive BCIs. This advancement holds particular significance for individuals with motor disabilities, offering them enhanced capabilities to interact with digital systems seamlessly. The robust performance, efficient response times, and balanced evaluations of various metrics position this BCI system as a valuable tool for real-world applications, aligning perfectly with the goals of assistive technology and user-centered design principles.
However, there are limitations to this study that should be acknowledged to better understand its implications and areas for future improvement. First, the generalizability of our method may be limited as variations in EEG data across individuals and sessions can reduce its performance. Additionally, further testing is needed to evaluate the system’s performance under real-time constraints, particularly in dynamic and uncontrolled environments. Moreover, the effectiveness of our system is based on specific EEG configurations and might not translate as effectively with fewer or differently positioned electrodes.
A user-centered design approach will continue to be central to future developments, ensuring that BCI systems are tailor-made to meet the diverse preferences, needs, and scenarios encountered by users. As BCIs become more integrated into everyday activities, ethical considerations will also gain prominence. It is imperative to address user privacy, data security, and potential ethical dilemmas proactively. Future research should focus on developing comprehensive ethical frameworks and guidelines that will guide the responsible development and deployment of BCI technology.
Furthermore, we plan to incorporate comprehensive robustness analysis to ensure the reliability and effectiveness of our Brain–Computer Interface (BCI) system under varying conditions. This will include conducting perturbation tests where key parameters within the signal processing and classification modules are deliberately altered, such as adding noise to the EEG data or varying the neural network’s architecture and hyperparameters. Additionally, we will explore the effects of changing electrode configurations and the inclusion of different data features to assess their impact on system performance. Simulated disturbances will also be introduced to model potential real-world challenges, allowing us to evaluate the system’s resilience and identify necessary adjustments to enhance its stability and accuracy. These robustness tests are essential for developing a BCI system that is not only effective in controlled environments but also durable and reliable in diverse real-world applications.
In conclusion, the ongoing evolution of motor-imagery-based BCIs holds the promise of revolutionizing computer interaction, particularly for individuals with motor limitations. The path forward involves continued research and development efforts aimed at enhancing accuracy, refining response times, developing hybrid systems, adhering to user-centered designs, and addressing ethical considerations. These efforts will ensure the sustained impact and broader adoption of BCIs across various sectors, ultimately improving the quality of life for many individuals.

Author Contributions

Conceptualization, S.A., R.C.J., K.R.K., V.C.G., A.K., B.A., F.G. and U.D.; Methodology, S.A., R.C.J., K.R.K., V.C.G., A.K., B.A., F.G. and U.D.; Software, S.A., R.C.J., K.R.K., V.C.G., A.K., B.A., F.G. and U.D.; Validation, S.A., R.C.J., K.R.K., V.C.G., A.K., B.A., F.G. and U.D.; Data curation, S.A., R.C.J., K.R.K., V.C.G., A.K., B.A., F.G. and U.D.; Writing – original draft, S.A., R.C.J., K.R.K., V.C.G., A.K., B.A., F.G. and U.D.; Writing – review & editing, S.A., R.C.J., K.R.K., V.C.G., A.K., B.A., F.G. and U.D.; Project administration, S.A., R.C.J., K.R.K., V.C.G., A.K., B.A., F.G. and U.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Weiss, J.M.; Gaunt, R.A.; Franklin, R.; Boninger, M.L.; Collinger, J.L. Demonstration of a Portable Intracortical Brain-Computer Interface. Brain-Comput. Interfaces 2019, 6, 106–117. [Google Scholar] [CrossRef]
  2. Pan, K.; Li, L.; Zhang, L.; Li, S.; Yang, Z.; Guo, Y. A Noninvasive BCI System for 2D Cursor Control Using a Spectral-Temporal Long Short-Term Memory Network. Front. Comput. Neurosci. 2022, 16, 799019. [Google Scholar] [CrossRef]
  3. Zhang, J.; Wang, M. A Survey on Robots Controlled by Motor Imagery Brain-Computer Interfaces. Cogn. Robot. 2021, 1, 12–24. [Google Scholar]
  4. Akuthota, S.; Rajkumar, K.; Ravichander, J. EEG based Motor Imagery BCI using Four Class Iterative Filtering & Four Class Filter Bank Common Spatial Pattern. In Proceedings of the International Conference on Advances in Electronics, Communication, Computing and Intelligent Information Systems (ICAECIS), Bangalore, India, 19–21 April 2023; pp. 429–434. [Google Scholar]
  5. Janapati, R.; Dalal, V.; Kumar, G.M.; Anuradha, P.; Shekar, P.V.R. Web Interface Applications Controllers used by Autonomous EEG-BCI Technologies. In Proceedings of the AIP Conference Proceedings; AIP Publishing: Melville, NY, USA, 2022; Volume 2418. [Google Scholar]
  6. Janapati, R.; Dalal, V.; Sengupta, R. Advances in Experimental Paradigms for EEG-BCI. In Proceedings of the 2nd International Conference on Recent Trends in Machine Learning, IoT, Smart Cities and Applications: ICMISC 2021, Hyderabad, India, 28–29 March 2021; Lecture Notes in Networks and Systems. Springer: Berlin/Heidelberg, Germany, 2022; pp. 163–170. [Google Scholar]
  7. Janapati, R.; Dalal, V.; Sengupta, R.; Raja Shekar, P.V. Progression of EEG-BCI Classification Techniques: A Study. In Proceedings of the Inventive Systems and Control, Coimbatore, India, 7–8 January 2021; Lecture Notes in Networks and Systems. Springer: Berlin/Heidelberg, Germany, 2021; pp. 161–170. [Google Scholar]
  8. Janapati, R.; Dalal, V.; Govardhan, N.; Gupta, R.S. Review on EEG-BCI Classification Techniques Advancements. IOP Conf. Ser. Mater. Sci. Eng. 2020, 981, 032019. [Google Scholar] [CrossRef]
  9. Ramakrishnan, J.; Mavaluru, D.; Sakthivel, R.S.; Alqahtani, A.S.; Mubarakali, A.; Retnadhas, M. Brain-Computer Interface for Amyotrophic Lateral Sclerosis Patients using Deep Learning Network. Neural Comput. Appl. 2022, 34, 13439–13453. [Google Scholar] [CrossRef]
  10. Mathesul, S.; Swain, D.; Satapathy, S.K.; Rambhad, A.; Acharya, B.; Gerogiannis, V.C.; Kanavos, A. COVID-19 Detection from Chest X-ray Images Based on Deep Learning Techniques. Algorithms 2023, 16, 494. [Google Scholar] [CrossRef]
  11. Shriram, S.; Nagaraj, B.; Jaya, J.; Shankar, S.; Ajay, P. Deep Learning-Based Real-Time AI Virtual Mouse System Using Computer Vision to Avoid COVID-19 Spread. J. Healthc. Eng. 2021, 2021, 8133076. [Google Scholar] [CrossRef]
  12. Teng, G.; He, Y.; Zhao, H.; Liu, D.; Xiao, J.; Ramkumar, S. Design and Development of Human Computer Interface Using Electrooculogram with Deep Learning. Artif. Intell. Med. 2020, 102, 101765. [Google Scholar] [CrossRef]
  13. Stieger, J.R.; Engel, S.A.; Suma, D.; He, B. Benefits of Deep Learning Classification of Continuous Noninvasive Brain–Computer Interface Control. J. Neural Eng. 2021, 18, 046082. [Google Scholar]
  14. Schweihoff, J.F.; Loshakov, M.; Pavlova, I.; Kück, L.; Ewell, L.A.; Schwarz, M.K. DeepLabStream Enables Closed-Loop Behavioral Experiments using Deep Learning-based Markerless, real-time Posture Detection. Commun. Biol. 2021, 4, 130. [Google Scholar] [CrossRef]
  15. Alam, M.S.; Kwon, K.; Alam, M.A.; Abbass, M.Y.; Imtiaz, S.M.; Kim, N. Trajectory-Based Air-Writing Recognition Using Deep Neural Network and Depth Sensor. Sensors 2020, 20, 376. [Google Scholar] [CrossRef] [PubMed]
  16. Tran, D.S.; Ho, N.H.; Yang, H.J.; Baek, E.T.; Kim, S.H.; Lee, G. Real-Time Hand Gesture Spotting and Recognition Using RGB-D Camera and 3D Convolutional Neural Network. Appl. Sci. 2020, 10, 722. [Google Scholar] [CrossRef]
  17. Tiwari, S.; Goel, S.; Bhardwaj, A. MIDNN-A Classification Approach for the EEG based Motor Imagery Tasks using Deep Neural Network. Appl. Intell. 2022, 52, 4824–4843. [Google Scholar] [CrossRef]
  18. Choi, J.W.; Park, J.; Huh, S.; Jo, S. Asynchronous Motor Imagery BCI and LiDAR-Based Shared Control System for Intuitive Wheelchair Navigation. IEEE Sens. J. 2023, 23, 16252–16263. [Google Scholar] [CrossRef]
  19. Guerrero-Mendez, C.D.; Blanco-Díaz, C.F.; Ruiz-Olaya, A.F.; Lopez-Delis, A.; Jaramillo-Isaza, S.; Andrade, R.M.; Souza, A.F.D.; Delisle-Rodriguez, D.; Frizera-Neto, A.; Bastos-Filho, T.F. EEG Motor Imagery Classification using Deep Learning Approaches in Naïve BCI Users. Biomed. Phys. Eng. Express 2023, 9, 045029. [Google Scholar] [CrossRef]
  20. Mousavi, M.; de Sa, V.R. Spatio-Temporal Analysis of Error-related Brain Activity in Active and Passive Brain–Computer Interfaces. Brain-Comput. Interfaces 2019, 6, 118–127. [Google Scholar] [CrossRef] [PubMed]
  21. Al-Saegh, A.; Dawwd, S.A.; Abdul-Jabbar, J.M. Deep Learning for Motor Imagery EEG-based Classification: A Review. Biomed. Signal Process. Control 2021, 63, 102172. [Google Scholar] [CrossRef]
  22. Altaheri, H.; Muhammad, G.; Alsulaiman, M.; Amin, S.U.; Altuwaijri, G.A.; Abdul, W.; Bencherif, M.A.; Faisal, M. Deep Learning Techniques for Classification of Electroencephalogram (EEG) Motor Imagery (MI) Signals: A Review. Neural Comput. Appl. 2023, 35, 14681–14722. [Google Scholar] [CrossRef]
  23. Savvopoulos, A.; Kanavos, A.; Mylonas, P.; Sioutas, S. LSTM Accelerator for Convolutional Object Identification. Algorithms 2018, 11, 157. [Google Scholar] [CrossRef]
  24. Sukkar, M.; Shukla, M.; Kumar, D.; Gerogiannis, V.C.; Kanavos, A.; Acharya, B. Enhancing Pedestrian Tracking in Autonomous Vehicles by Using Advanced Deep Learning Techniques. Information 2024, 15, 104. [Google Scholar] [CrossRef]
  25. Chaddad, A.; Wu, Y.; Kateb, R.; Bouridane, A. Electroencephalography Signal Processing: A Comprehensive Review and Analysis of Methods and Techniques. Sensors 2023, 23, 6434. [Google Scholar] [CrossRef] [PubMed]
  26. Daud, S.N.S.S.; Sudirman, R. Wavelet Based Filters for Artifact Elimination in Electroencephalography Signal: A Review. Ann. Biomed. Eng. 2022, 50, 1271–1291. [Google Scholar] [CrossRef]
  27. Miah, O.; Habiba, U.; Kabir, F. ODL-BCI: Optimal Deep Learning Model for Brain-computer Interface to Classify Students Confusion via Hyperparameter Tuning. Brain Disord. 2024, 13, 100121. [Google Scholar] [CrossRef]
  28. Škola, F.; Tinková, S.; Liarokapis, F. Progressive Training for Motor Imagery Brain-Computer Interfaces Using Gamification and Virtual Reality Embodiment. Front. Hum. Neurosci. 2019, 13, 329. [Google Scholar] [CrossRef]
  29. Parashiva, P.K.; Vinod, A.P. Improving Direction Decoding Accuracy during Online Motor Imagery based Brain-Computer Interface using Error-related Potentials. Biomed. Signal Process. Control 2022, 74, 103515. [Google Scholar] [CrossRef]
  30. Choi, J.W.; Huh, S.; Jo, S. Improving Performance in Motor Imagery BCI-based Control Applications via Virtually Embodied Feedback. Comput. Biol. Med. 2020, 127, 104079. [Google Scholar] [CrossRef]
  31. Abiri, R.; Borhani, S.; Kilmarx, J.; Esterwood, C.; Jiang, Y.; Zhao, X. A Usability Study of Low-Cost Wireless Brain-Computer Interface for Cursor Control Using Online Linear Model. IEEE Trans. Hum.-Mach. Syst. 2020, 50, 287–297. [Google Scholar] [CrossRef] [PubMed]
  32. Parikh, D.; George, K. Quadcopter Control in Three-Dimensional Space Using SSVEP and Motor Imagery-Based Brain-Computer Interface. In Proceedings of the 11th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, 4–7 November 2020; pp. 0782–0785. [Google Scholar]
  33. Guo, Y.; Wang, M.; Zheng, T.; Li, Y.; Wang, P.; Qin, X. NAO Robot Limb Control Method Based on Motor Imagery EEG. In Proceedings of the International Symposium on Computer, Consumer and Control (IS3C), Taichung City, Taiwan, 13–16 November 2020; pp. 521–524. [Google Scholar]
  34. Reyhani-Masoleh, B.; Chau, T. Navigating in Virtual Reality using Thought: The Development and Assessment of a Motor Imagery based Brain-Computer Interface. arXiv 2019, arXiv:1912.04828. [Google Scholar]
  35. Gao, C.; Xia, M.; Zhang, Z.; Han, Y.; Gu, Y. Improving the Brain-Computer Interface Learning Process with Gamification in Motor Imagery: A Review. In Gamification-Analysis, Design, Development and Ludification; IntechOpen: London, UK, 2022. [Google Scholar]
  36. Alchalabi, B.; Faubert, J. A Comparison between BCI Simulation and Neurofeedback for Forward/Backward Navigation in Virtual Reality. Comput. Intell. Neurosci. 2019, 2019, 2503431. [Google Scholar] [CrossRef]
  37. Saichoo, T.; Boonbrahm, P.; Punsawad, Y. Investigating User Proficiency of Motor Imagery for EEG-Based BCI System to Control Simulated Wheelchair. Sensors 2022, 22, 9788. [Google Scholar] [CrossRef]
  38. Dutt-Mazumder, A.; Huggins, J.E. Performance Comparison of a non-invasive P300-based BCI Mouse to a Head-Mouse for People with SCI. Brain-Comput. Interfaces 2020, 7, 1–10. [Google Scholar] [CrossRef]
  39. Hossain, K.M.; Islam, M.A.; Hossain, S.; Nijholt, A.; Ahad, M.A.R. Status of deep learning for EEG-based brain–computer interface applications. Front. Comput. Neurosci. 2022, 16, 1006763. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Spatial filtering process using EEG data.
Figure 1. Spatial filtering process using EEG data.
Information 15 00702 g001
Figure 2. Flowchart depicting the iterative process of FCIF.
Figure 2. Flowchart depicting the iterative process of FCIF.
Information 15 00702 g002
Figure 3. Diagram illustrating the operation of the FCFBCSP algorithm using EEG data.
Figure 3. Diagram illustrating the operation of the FCFBCSP algorithm using EEG data.
Information 15 00702 g003
Figure 5. ROC curves for each class showing the trade-off between sensitivity and specificity at various threshold levels.
Figure 5. ROC curves for each class showing the trade-off between sensitivity and specificity at various threshold levels.
Information 15 00702 g005
Figure 6. Accuracy and response time trends over trials.
Figure 6. Accuracy and response time trends over trials.
Information 15 00702 g006
Figure 7. Average cursor path, trajectories, and individual traces.
Figure 7. Average cursor path, trajectories, and individual traces.
Information 15 00702 g007
Figure 8. Power distinction of two beta rhythm curves.
Figure 8. Power distinction of two beta rhythm curves.
Information 15 00702 g008
Table 1. Motor imagery tasks versus control conditions performance metrics.
Table 1. Motor imagery tasks versus control conditions performance metrics.
Measurement/ConditionLeft-Hand
MI
 Right-Hand 
MI
  Foot  
MI
 Tongue 
MI
 Control 
Conditions
Classification Accuracy (%)90.588.287.989.812.1
Response Time (ms)6506706806551020
Specificity (%)91.492.091.291.7
Sensitivity (%)89.687.888.589.2
False Positives879698
False Negatives1012118105
Table 2. Confusion matrix for left-hand imagery.
Table 2. Confusion matrix for left-hand imagery.
Model PositiveModel Negative
Real Positive8810
Real Negative894
Table 3. Confusion matrix for right-hand imagery.
Table 3. Confusion matrix for right-hand imagery.
Model PositiveModel Negative
Real Positive8612
Real Negative993
Table 4. Confusion matrix for foot imagery.
Table 4. Confusion matrix for foot imagery.
Model PositiveModel Negative
Real Positive8711
Real Negative1092
Table 5. Confusion matrix for tongue imagery.
Table 5. Confusion matrix for tongue imagery.
Model PositiveModel Negative
Real Positive899
Real Negative794
Table 6. Confusion matrix for control conditions.
Table 6. Confusion matrix for control conditions.
Model PositiveModel Negative
Real Positive142
Real Negative966
Table 7. Hypothetical p-values for performance metrics.
Table 7. Hypothetical p-values for performance metrics.
MetricHypothesis (H0)Hypothesis (H1)Significance
Level ( α )
Hypothetical
p-Value
Classification
Accuracy
No significant difference in accuracy between control and motor imageryHigher accuracy for motor imagery vs. control0.050.001
Response
Time
Similar response times for control and motor imageryFaster response times for motor imagery vs. control0.050.015
SpecificityNo difference in specificity between conditionsHigher specificity under motor imagery conditions0.050.002
SensitivityUniform sensitivity across all tasksImproved sensitivity in motor imagery tasks0.050.009
False
Positives
Equal rates of false positives in all conditionsFewer false positives in motor imagery tasks0.050.027
False
Negatives
No variation in false negatives between tasksReduced false negatives in motor imagery tasks0.050.003
Additional
Metrics
Consistent performance across all metricsVariability in performance by task0.050.008
Table 8. Specific metrics for each motor imagery class.
Table 8. Specific metrics for each motor imagery class.
Motor Imagery TaskF1-ScorePrecisionRecall
Left-handed Gestures0.9180.9030.934
Right-handed Gestures0.8870.9010.874
Foot Movement0.8650.8420.890
Tongue Movement0.9390.9560.922
Table 9. Comparison of BCI system studies.
Table 9. Comparison of BCI system studies.
PaperAccuracy (%)Unique ContributionsAreas for Further Research
[31]80Demonstrated a positive relationship between cursor control and visualization ability, enhancing interface intuitiveness.Explore individualized assistive BCIs tailored to user-specific neural patterns and preferences.
[15]Not mentionedEvaluated the speed of suggested algorithms, focusing on enhancing real-time response capabilities.Conduct a comparative analysis with current state-of-the-art BCI systems to benchmark speed and accuracy.
[36]76Investigated the efficacy of motor imagery commands for neurofeedback learning, integrating VR to enhance training.Further study on movement-related activation of the motor cortex in virtual reality settings.
[18]<50Focused on improving BCI performance through shared control systems, enhancing user autonomy.Address the challenges of asynchronous BCIs, particularly in reducing error rates and improving system responsiveness.
[30]<50Enhanced neuronal activity recognition and pattern identification, incorporating advanced neural networks.Enhance motor imagery effectiveness through virtual reality, exploring new training protocols.
[35]74.35Highlighted the positive effects on functional networks and motor learning, potentially increasing BCI efficacy.Implement a randomized control trial to assess the impact of gamification on motor learning and BCI integration.
[19]80Utilized deep learning techniques to improve performance by 32%, focusing on computational efficiency and accuracy.Enhance the usefulness, controllability, and dependability of robotic devices controlled by BCIs.
[33]78.29Developed a novel classification algorithm for operating NAO robots, enhancing interface responsiveness.Address the challenges of limited precision in robotic control through algorithm refinement and testing.
[2]63.45Leveraged temporal characteristics related to error potentials P300, aiming to reduce reaction times.Extend the system to complex tasks involving multidirectional movements and increase task variety.
[29]64.9Improved direction decoding accuracy by 10%, employing advanced signal processing techniques.Investigate EEG data with low signal-to-noise ratios to enhance decoding accuracy under nonideal conditions.
[32]85Focused on improving human–machine interactions and optimized offline adjustments to enhance system adaptability.Explore the integration of hybrid BCI controls and real-time system adjustments to improve usability and operability.
[9]Not mentionedEmphasized user adaptability and enhanced feature engineering, tailoring the system to user needs.Focus on user-centered design for real-world applications, improving adaptability and customization.
[34]70Developed a three-class BCI system integrating VR to boost user engagement and system accuracy.Combine VR and BCI technologies to enhance user training and engagement in complex tasks.
[37]83.7Created a user-friendly BCI system using an EEG neuroheadset, enhancing accessibility for severely disabled individuals.Develop protocols for individuals with severe disabilities to control electric wheelchairs and other assistive devices.
[14]Not mentionedFocused on lower-speed robotic control and signal processing enhancements for robust operation.Further enhance signal processing capabilities and develop faster, more accurate control systems for robotics.
[11]Not mentionedAdopted a user-centered design approach, significantly improving the user experience in BCI interactions.Study the long-term usability and individualized training needs to adapt BCIs to daily use.
[28]75.84Reported high levels of user satisfaction following training, enhancing motivation and engagement.Assess the long-term impacts of training methods on user experience and motor imagery BCI skills.
[13]Not mentionedIntroduced novel signal processing techniques and real-time user feedback mechanisms.Enhance signal processing and system calibration techniques to improve user feedback and accuracy.
[12]Not mentionedExplored new feature sets that improved classification accuracy, enhancing user interface intuitiveness.Focus on personalized calibration and feature optimization to better accommodate individual differences.
[17]82.48Utilized a Pattern Recognition Neural Network to improve adaptability and accuracy of BCIs.Expand on adaptability features to enhance real-world application and user control in complex environments.
[16]Not mentionedDeveloped a multimodal approach, combining different sensory inputs for a more robust BCI.Generalize the multimodal approach to various user scenarios to enhance BCI robustness and reliability.
Proposed
Work
88.2–90.5Demonstrated robustness across various user scenarios, providing a comprehensive evaluation approach.Explore personalization options, long-term usability improvements, and enhanced user interfaces for broader application.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Akuthota, S.; Janapati, R.C.; Kumar, K.R.; Gerogiannis, V.C.; Kanavos, A.; Acharya, B.; Grivokostopoulou, F.; Desai, U. Enhancing Real-Time Cursor Control with Motor Imagery and Deep Neural Networks for Brain–Computer Interfaces. Information 2024, 15, 702. https://doi.org/10.3390/info15110702

AMA Style

Akuthota S, Janapati RC, Kumar KR, Gerogiannis VC, Kanavos A, Acharya B, Grivokostopoulou F, Desai U. Enhancing Real-Time Cursor Control with Motor Imagery and Deep Neural Networks for Brain–Computer Interfaces. Information. 2024; 15(11):702. https://doi.org/10.3390/info15110702

Chicago/Turabian Style

Akuthota, Srinath, Ravi Chander Janapati, K. Raj Kumar, Vassilis C. Gerogiannis, Andreas Kanavos, Biswaranjan Acharya, Foteini Grivokostopoulou, and Usha Desai. 2024. "Enhancing Real-Time Cursor Control with Motor Imagery and Deep Neural Networks for Brain–Computer Interfaces" Information 15, no. 11: 702. https://doi.org/10.3390/info15110702

APA Style

Akuthota, S., Janapati, R. C., Kumar, K. R., Gerogiannis, V. C., Kanavos, A., Acharya, B., Grivokostopoulou, F., & Desai, U. (2024). Enhancing Real-Time Cursor Control with Motor Imagery and Deep Neural Networks for Brain–Computer Interfaces. Information, 15(11), 702. https://doi.org/10.3390/info15110702

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop