Article

Mind the Move: Developing a Brain-Computer Interface Game with Left-Right Motor Imagery †

by Georgios Prapas 1, Kosmas Glavas 1, Katerina D. Tzimourta 1, Alexandros T. Tzallas 2 and Markos G. Tsipouras 1,*
1 Department of Electrical and Computer Engineering, University of Western Macedonia, Campus ZEP Kozani, 50100 Kozani, Greece
2 Department of Informatics and Telecommunication, University of Ioannina, 47150 Arta, Greece
* Author to whom correspondence should be addressed.
† This paper is an extended version of our paper published in the 7th South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM 2022), Ioannina, Greece, 23–25 September 2022.
Information 2023, 14(7), 354; https://doi.org/10.3390/info14070354
Submission received: 30 March 2023 / Revised: 5 June 2023 / Accepted: 14 June 2023 / Published: 21 June 2023

Abstract
Brain-computer interfaces (BCIs) are becoming an increasingly popular technology, used in a variety of fields such as medicine, gaming, and lifestyle. This paper describes a 3D non-invasive BCI game that uses the Muse 2 EEG headband to acquire electroencephalogram (EEG) data and the OpenViBE platform to process the signals and classify them into three different mental states: left and right motor imagery and eye blink. The game was developed to assess user adjustment and improvement in a BCI environment after training. The classification algorithm used is a Multi-Layer Perceptron (MLP), with 96.94% accuracy. A total of 33 subjects participated in the experiment and successfully controlled an avatar using mental commands to collect coins. The online metrics employed for this BCI system are the average game score, the average number of coin clusters, and the average user improvement.

1. Introduction

A Brain-Computer Interface (BCI) is a direct link between a computer and the human brain [1]. It is the most recent advancement in Human-Computer Interface (HCI) technology. In contrast with typical input devices (mouse, keyboard, etc.), a BCI receives the waves generated by the brain at various locations on the human head and converts these signals into commands and actions that can control the computer. BCI applications are traditionally directed toward the medical field, mainly targeting patients with chronic neurological diseases [2].
BCI systems are classified as invasive or non-invasive according to where the sensors are placed. A system is considered an invasive BCI [3] if the array of electrodes used to collect brain signals is implanted inside the brain. On the other hand, when the electrodes are placed on top of the head, the system is non-invasive [4]. The first category has the benefit of having a higher-quality signal with fewer artifacts and noise. Patients with serious brain disabilities, such as blindness or paralysis, are the primary targets of invasive BCI since neurosurgery is a risky and costly operation. The non-invasive category [5] has many benefits, including portability, affordability, and safety of the method.
Electroencephalography (EEG) is the most frequently used non-invasive method [6]. EEG records the electrical activity of the brain as it occurs on the scalp’s surface [7]. EEG equipment can be either clinical or commercial. Clinical devices are utilized in medical applications and typically have 32 or 64 electrodes. Commercial devices are more affordable and widely available, are intended mainly for lifestyle applications, and have anywhere between 1 and 64 electrodes.
BCI has the potential to be integrated with 3D and Extended Reality (XR) software [8,9] to help people with neuro and motor rehabilitation. The BCI could be used to measure the patient’s brain activity, while the 3D/XR software could be used to provide the patient with immersive, interactive, and engaging experiences. For example, the patient could be guided through a series of exercises designed to help them improve their motor skills. The BCI could be used to measure the patient’s progress and provide feedback, allowing the patient to adjust the exercises accordingly. BCI has made a great advancement in the medical area with numerous applications, for example, controlling prosthetic limbs [10] or smart home systems [11]. As a result of this advancement, we are now seeing actions to deploy BCI outside of the medical arena. In the last decade, literature has shown that two out of five BCI application scenarios primarily focused on non-medical fields [12]. One of these fields is the gaming industry [13,14]. BCI in gaming can be used to create a more immersive gaming experience. It allows gamers to use their thoughts and feelings to control the game, instead of using a mouse, keyboard, or gamepad. For example, a gamer could use their thoughts to move objects in the game or control the speed of their character. BCI-enabled gaming could also allow people with disabilities to enjoy gaming in a more interactive way. There are several potential benefits of using BCI in gaming [15], including:
  • Increased immersion: one of the key advantages of using BCI in gaming is that it has the potential to increase immersion by allowing players to directly control game characters with their thoughts. This could lead to a more immersive and realistic gaming experience.
  • Increased interactivity: another benefit of using BCI in gaming is that it could allow for increased interactivity between players and game characters/environments. For example, if a player’s EEG signals indicated they are feeling scared, this could trigger a change in the game environment (e.g., an enemy appearing), which would then require the player to react accordingly (e.g., by fighting back).
  • Accessibility: another potential advantage of using BCI in gaming is that it could make games more accessible for people with disabilities who may not be able to use traditional input devices such as keyboards or controllers.
Despite the potential benefits, there are also challenges associated with using BCI in gaming [16]. One of the main challenges is that BCI technology is still in its infancy and significant technical challenges need to be overcome before it can be widely used. For example, current BCI systems often have high error rates, making them impractical for gaming use. In addition, they require users to undergo a period of training to learn how to control the device. This can take a considerable amount of time, making it difficult for users to become proficient in using the device. These systems are designed to work with specific games, making them difficult to use with other types of games. There is also a need to develop more user-friendly and affordable BCI systems [17].
This study presents an approach to create a BCI game that is controlled by mental commands as expressed by EEG signals. The Muse 2 headband is utilized to measure EEG signals from multiple players, and OpenViBE is employed to record EEG data, conduct offline signal processing and classification, and perform real-time data acquisition and online EEG analysis and classification. Finally, the 3D game is developed in the Unity software, in which players are required to collect coins through motor imagery actions, such as left and right movements. The aim of this work is to determine whether users can adjust and improve their performance in manipulating the BCI system through training.

2. Related Work

In recent years, several papers have been published that connect BCIs with games. The goal of these works was to develop games that can be controlled with mental commands. Six papers from the literature are presented in this section. Three different commercial EEG devices are used: the Emotiv EPOC+ [18,19,20], the Neurosky Mindwave [20,21,22] and the Muse 2 headband [23]. The Emotiv EPOC+ has the best spatial resolution since it has 14 electrodes, followed by the Muse 2 headband, which has four EEG channels, and the Neurosky Mindwave, with only one electrode. Although the Emotiv EPOC+ has the best spatial resolution, it is heavier and more difficult to wear. The Muse 2 headband is the easiest to use due to its design and light weight.
Hjørungdal et al. [18] designed a game-based learning framework for controlling brain-actuated wheelchairs. The framework consists of a computer game, brain-computer interface (BCI) hardware, and a wheelchair control system. To acquire the raw EEG data, the Emotiv EPOC+ was used. The game provides real-time feedback to the users, allowing them to learn to control the wheelchair through their brain activity. The BCI hardware measures brain activity to generate commands for the wheelchair. The wheelchair control system translates the BCI commands into actual wheelchair movement. The framework is evaluated through a user study on a group of people with severe motor disabilities. Five game levels were developed for more extensive user training. Three male subjects tested the framework and the results indicate that users can learn to control the wheelchair with a high degree of accuracy and speed.
Fiałek and Liarokapis [20] presented the results of a comparison between two commercial EEG devices for serious games and virtual environments. The first EEG device is the Emotiv EPOC+ with fourteen channels and the second is the NeuroSky MindWave Mobile with a single electrode. Both systems were evaluated in terms of their accuracy and usability. The usability was assessed through an online survey and the accuracy was evaluated through a series of experiments. Thirty-one participants tested the prototypes and the results showed that both BCIs had a good level of accuracy, with the Emotiv EPOC+ being slightly more accurate. The survey results showed that the NeuroSky MindWave Mobile was slightly easier to use than the Emotiv EPOC+. In conclusion, both BCIs are suitable for use in serious games and virtual environments, but the Emotiv EPOC+ is more accurate and the NeuroSky MindWave Mobile is slightly easier to use.
Alchalabi et al. [19] designed an EEG-based serious game for focus improvement. The game is designed to help individuals with Attention Deficit Hyperactivity Disorder (ADHD) improve their focus and attention skills. The game is based on EEG signals, which are collected via the EMOTIV EPOC+ headset with 14 EEG channels. The designed scenario consists of two different commands: neutral state and push. The signals are then analyzed to detect the user’s attention levels. The game provides an interactive, engaging and entertaining environment to motivate users to practice focus exercises that are tailored to their individual needs. The results of an initial study with four individuals with ADHD are presented and discussed. The users tested the game both with a keyboard and with an EEG device. The results show that the game was able to detect attention levels and improve focus in the participants when playing with mental commands.
Djamal et al. [21] presented a brain-computer interface (BCI) game controlling system using Fast Fourier Transform (FFT) and Learning Vector Quantization (LVQ) algorithms. The objective of the proposed system is to enable users to control a 3D game by interpreting brain signals. EEG signals are collected from users with a NeuroSky headset and then pre-processed to eliminate artifacts. The FFT algorithm is applied to extract the frequency components from the signals. The extracted components are then used to construct the feature vector for the LVQ algorithm. The LVQ algorithm is used to classify the user’s intention to control the game. The system is evaluated using real EEG data collected from ten users. The results obtained show that the proposed system is able to accurately detect the user’s intention and control the game with minimum user input. Furthermore, the proposed system is able to achieve a classification accuracy of up to 88%.
Joselli et al. [22] developed MindNinja, an action game based on EEG signals that allows players to interact with the game using their own brain activity. The game is designed to help players develop cognitive skills such as concentration and focus. The paper presents the concept and development of the game, as well as an evaluation of its effectiveness. It includes a discussion of the design principles behind the game, the use of EEG signals as the basis for game control, and the implementation of a reinforcement learning algorithm to evaluate the player’s performance. For acquiring the raw EEG signals, the Neurosky Mindwave mobile headset was used. To evaluate the game, eleven subjects participated in the experiment and the following metrics were considered: Player Score, Missed Cuts, Attention Level, Stress Level, Engagement Level and Evolution Attention. Results from a study of the game’s effectiveness are presented, showing that players improved their concentration and focus when playing the game.
Glavas et al. [23] developed Zombie Jumper, a 2D BCI game with two different levels of difficulty. To control the game, two mental commands were used: imagining moving forward and blinking. To record EEG data, they used the Muse 2 headband. The goal of the game was to avoid zombies by jumping over them. For signal processing, the OpenViBE software was employed, and Linear Discriminant Analysis was used as the classification algorithm. A total of 37 subjects tested level 1 and the 18 best subjects tested level 2. The aim of this study was to evaluate user adaptation in a BCI game environment. The evaluation metrics of this work were Average Scores and Improvement.
A major limitation identified in the literature is the small sample sizes and the limited number of repetitions per subject, which can compromise how reliably improvement in BCI control can be assessed. This study aims to address this issue by expanding the sample size and increasing the number of repetitions per subject to assess the potential for improved BCI control and better adjustment in BCI environments.

3. Materials

The hardware and software utilized are described in the next sections. The following components are either installed or connected via Bluetooth to a PC that serves as an acquisition server.

3.1. Muse 2 Headband

The Muse 2 headband [24] (Figure 1) is a Bluetooth-enabled headband that is designed to monitor brain activity and provide feedback in real-time. It is used for various types of experiments [23,25,26,27,28], including brain-computer interface (BCI) experiments. It measures electroencephalogram (EEG) signals from the scalp, providing information related to attention, relaxation, and meditation. The main reason for choosing the Muse 2 headband is its portability and affordability. In addition, it provides real-time feedback and is easily accessible. On the other hand, the Muse 2 headband suffers from some limitations. The major one is the limited electrode coverage since the device has four electrodes (TP9, AF7, AF8 and TP10). Noise interference is also a limitation because the Muse 2 uses dry electrodes, which can be more vulnerable to interference from external noise sources, such as movement or electrical devices.

3.2. BlueMuse

BlueMuse [29] is an open-source EEG brainwave monitoring software. It is designed to be used with EEG headsets, such as the Muse 2 headband, to stream data via Bluetooth connection and provide real-time brainwave data visualization and analysis. BlueMuse supports a variety of features such as real-time data streaming, frequency analysis, mental state monitoring, and integration with virtual reality applications. It can also automatically detect available EEGs within Bluetooth range and allows the simultaneous streaming of data from multiple Muse devices.

3.3. Lab Streaming Layer

Lab Streaming Layer (LSL) [30] is an open-source framework for streaming data between applications and devices in real-time. It provides a set of tools and libraries that enable applications to easily send and receive data streams over a local network or the Internet. It supports the capture of data from a variety of sources, including EEG and eye-tracking systems, motion tracking systems, and audio and video devices. LSL is designed to facilitate the integration of data streams into research applications and software. In this study, BlueMuse utilizes the Muse 2 headband through Bluetooth, setting up an LSL stream to transfer the EEG signals to OpenViBE’s acquisition server. This data is then recorded, processed, and classified by OpenViBE.
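Outside of OpenViBE, the same stream can be consumed with a few lines of code. The following Python sketch is an illustration only, assuming the pylsl package is installed and that BlueMuse advertises the Muse 2 data as a stream of type 'EEG'; the timeout value is an arbitrary choice.

```python
# Minimal sketch: resolve the EEG stream published by BlueMuse over LSL and
# pull raw samples. Assumes BlueMuse is already streaming.
from pylsl import StreamInlet, resolve_byprop

streams = resolve_byprop('type', 'EEG', timeout=10.0)
if not streams:
    raise RuntimeError("No EEG stream found; is BlueMuse streaming?")

inlet = StreamInlet(streams[0])
for _ in range(256):                      # roughly one second at 256 Hz
    sample, timestamp = inlet.pull_sample()
    print(timestamp, sample)              # one value per Muse 2 channel
```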

3.4. OpenViBE

OpenViBE is an open-source software platform for designing, testing, and using brain-computer interfaces (BCIs). It offers a complete range of signal processing algorithms, data visualization tools, and application programming interfaces (APIs) for developing custom BCI applications. OpenViBE is optimized for real-time processing and is suitable for a range of BCI applications such as motor imagery, event-related potentials, and steady-state visual evoked potentials.

4. Methods

In Figure 2, the flowcharts for both the offline and online processes of the proposed BCI system are provided. The methods used in the system are outlined in the following section.
OpenViBE’s acquisition server is employed to capture raw EEG data with a sampling frequency of 256 Hz. A scenario is then created to process the data and extract features for classification. After importing the raw data, a Chebyshev bandpass filter in the range of 8–40 Hz is applied to each EEG channel to minimize noise and artifacts. The continuous EEG recordings are then divided into shorter, consecutive time periods, or epochs. The purpose of EEG time epoching is to analyze the EEG signals in order to identify patterns or features that are related to specific behaviors or states. In this case, the epoching process divides the EEG signals into non-overlapping windows of 3 s each, resulting in the classifier producing a decision every 3 s. The duration of the epoch was determined through a series of iterative short experiments using a trial-and-error methodology to identify the optimal window length. The signal is then divided into five distinct frequency bands: Alpha waves (8–12 Hz), Beta 1 waves (12–20 Hz), Beta 2 waves (20–30 Hz), Gamma 1 waves (30–35 Hz) and Gamma 2 waves (35–40 Hz).
The energy of each band is calculated and the obtained features are used as the basis for the classifier. Figure 3 presents the offline scenario.
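In the study itself this chain is built from OpenViBE boxes. Purely as an illustration, an offline equivalent can be sketched in Python with NumPy and SciPy; the filter order and ripple below are assumptions (OpenViBE's exact filter settings are not reproduced here), and the array shapes are hypothetical.

```python
# Sketch of the described offline chain: Chebyshev band-pass 8-40 Hz,
# non-overlapping 3 s epochs, per-band filtering and log energy per channel.
# Assumes eeg is a NumPy array of shape (n_channels, n_samples) at 256 Hz.
import numpy as np
from scipy.signal import cheby1, sosfiltfilt

FS = 256
BANDS = {"alpha": (8, 12), "beta1": (12, 20), "beta2": (20, 30),
         "gamma1": (30, 35), "gamma2": (35, 40)}

def band_log_energy(epoch, low, high):
    """Log energy of one epoch (channels x samples) within one frequency band."""
    sos = cheby1(4, 0.5, [low, high], btype="bandpass", fs=FS, output="sos")
    band = sosfiltfilt(sos, epoch, axis=-1)
    return np.log(np.sum(band ** 2, axis=-1) + 1e-12)   # one value per channel

def extract_features(eeg):
    """Split a recording into 3 s epochs and build one feature vector per epoch."""
    sos = cheby1(4, 0.5, [8, 40], btype="bandpass", fs=FS, output="sos")
    eeg = sosfiltfilt(sos, eeg, axis=-1)                 # 8-40 Hz pre-filter
    win = 3 * FS
    epochs = [eeg[:, i:i + win] for i in range(0, eeg.shape[1] - win + 1, win)]
    return np.array([np.concatenate([band_log_energy(ep, lo, hi)
                                     for lo, hi in BANDS.values()])
                     for ep in epochs])                  # (n_epochs, 4 ch x 5 bands)
```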

4.1. Classification

A multi-layer perceptron (MLP) [31,32] is a type of artificial neural network that has multiple layers of neurons between the input and output layers. MLPs are typically composed of an input layer, one or more hidden layers, and an output layer. Each layer contains a set of neurons which are connected to the neurons in the previous layer. The input layer receives input data and the output layer produces output data. The hidden layers are used to extract features from the data and the weights and biases of the neurons are adjusted during the training process. The MLP is used for supervised learning tasks such as pattern recognition and classification. The main advantages of MLP are:
  • It is a powerful and flexible algorithm that can be used to classify EEG data with high accuracy.
  • It can be used to classify EEG data in a non-linear manner, allowing for more accurate classifications than linear methods.
  • It is also capable of dealing with large datasets and can be used to classify EEG data from multiple subjects.
  • It can also be used to perform unsupervised learning, which can help reduce the cost of labeling EEG data.
  • It is able to generalize well to new data and can be used to classify EEG data from different individuals.
  • It is computationally efficient and can be used to classify EEG data in real-time.
In this study, a two-layer MLP is employed to classify the three classes, with hyperbolic tangent as the activation function in the hidden layer and softmax function in the output layer; OpenViBE default configuration settings have been employed. Additionally, 10-fold cross-validation is used to obtain the classification results.
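The classifier itself runs inside OpenViBE; a rough scikit-learn counterpart is sketched below for illustration. The hidden-layer size, iteration limit and file names are assumptions, not the study's actual settings.

```python
# Sketch: an MLP with one tanh hidden layer (softmax output is implicit for
# multi-class problems in scikit-learn), evaluated with 10-fold cross-validation.
# features.npy / labels.npy are hypothetical files holding the epoch features
# of one subject and the corresponding class labels ('left', 'right', 'blink').
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X = np.load("features.npy")          # shape (n_epochs, 20): 4 channels x 5 bands
y = np.load("labels.npy")

clf = MLPClassifier(hidden_layer_sizes=(20,), activation="tanh", max_iter=2000)
scores = cross_val_score(clf, X, y, cv=10)
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
clf.fit(X, y)                        # final model, reused in the online sketch below
```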

4.2. Online Processing

After the offline processing and training of the classification algorithm have been completed, a new scenario is set up for online testing as it is presented in Figure 4. The parameters remain the same as before, with the exception of time segmentation, which is no longer necessary. The Acquisition client is employed to bring in the real-time signal produced by the Muse 2 headband. The classified instances are then sent through an LSL stream to be used in the BCI application.
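In this study the online loop runs inside OpenViBE, which forwards the classifier output to the game over LSL. The Python sketch below only outlines the same idea end to end, reusing the hypothetical extract_features and clf from the previous sketches; the marker-stream name is also an assumption.

```python
# Sketch of the online loop: buffer 3 s of raw EEG from the headband stream,
# classify the epoch, and push the predicted label to an LSL marker stream
# that the game can read. Stream names and the reuse of extract_features/clf
# from the earlier sketches are assumptions made for illustration.
import numpy as np
from pylsl import StreamInlet, StreamOutlet, StreamInfo, resolve_byprop

FS, EPOCH_S = 256, 3
inlet = StreamInlet(resolve_byprop('type', 'EEG', timeout=10.0)[0])
outlet = StreamOutlet(StreamInfo('BCI_Commands', 'Markers', 1,
                                 0, 'string', 'bci-game-commands'))

buffer = []
while True:
    sample, _ = inlet.pull_sample()
    buffer.append(sample)
    if len(buffer) >= FS * EPOCH_S:                      # one 3 s epoch collected
        epoch = np.array(buffer[:FS * EPOCH_S]).T        # channels x samples
        buffer = buffer[FS * EPOCH_S:]
        label = clf.predict(extract_features(epoch))[0]  # 'left', 'right' or 'blink'
        outlet.push_sample([str(label)])                 # consumed by the game
```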

4.3. Dataset

In order to assess the BCI experiment, 33 individuals (18 males and 15 females) between the ages of 21 and 45 were recruited. Every participant was in good health, had either normal or corrected-to-normal vision, and agreed to have their recordings used for the research by signing a formal consent form. They were all guided by an experienced researcher to sit in a chair and keep their movements to a minimum for three separate EEG recordings. For the first recording, the subjects were instructed to blink hard every 1 s for a period of 5 min. For the second recording, they were instructed to look left and imagine that their left hand was moving for 5 min. For the third recording, they were instructed to look right and imagine that their right hand was moving for 5 min. The recordings of the mental commands for MI were obtained in an asynchronous manner, where the subjects were asked to perform MI mental commands continuously for a period of 5 min without any instruction from the researchers during the recordings. During the recordings, they were placed in a quiet room to reduce noise from the outside. All 99 recordings were included in the offline processing, providing 97 feature vectors per class per participant as input for the classifier (Table 1). The three classes (left and right MI and blink) are balanced, since the three recordings last 5 min each and the same processing is applied to these signals.

4.4. Game Design

Unity is a cross-platform game engine developed by Unity Technologies. It is used to create video games for web, mobile, consoles, and other platforms. Unity is primarily used for 3D applications, although it also supports 2D development. Unity features a visual editor and powerful scripting system, allowing users to quickly build and prototype games. It also has a wide range of APIs, plugins, and assets to facilitate development. Unity can be used to create applications for OpenViBE, allowing developers to combine OpenViBE’s signal processing and visualization tools with Unity’s game engine for BCI applications.
For this research, a 3D video game (displayed in Figure 5) was created, with an avatar that moves continuously forward. The goal of the game is to accumulate coins that are placed in various positions on the platform, requiring the player to move left, move right, and jump. The platform has three lanes, each with coins on the ground or in the air. In total, the game includes 50 coins, which are divided into 17 clusters (16 clusters of 3 coins and 1 cluster of 2 coins).
This project replaces standard keyboard input with brain commands. The Lab Streaming Layer (LSL) library is used to enable the game to receive live data streams. The stream outlet component provided by the LSL library is used to send time series data on the lab network, which is then sampled and sent to the game. The samples received from the stream are divided into three categories based on the user’s command, as listed below (a minimal sketch of this mapping follows the list).
  • If the user is glancing right and imagining moving their right hand, the sample is put in the first category and the in-game avatar slides right.
  • If the user is glancing left and imagining moving their left hand, the sample is put in the second category and the in-game avatar slides left.
  • If the user blinks, the sample goes to the third category and the in-game avatar jumps.
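The in-game dispatch is implemented in a Unity C# script; the short Python outline below only illustrates the mapping from classifier output to avatar actions, with the stream name taken from the online sketch above and slide_left/slide_right/jump as hypothetical placeholders.

```python
# Illustration of the command-to-action mapping described above. The real
# handlers live in the Unity game; these functions are placeholders.
from pylsl import StreamInlet, resolve_byprop

def slide_left():  print("avatar slides left")
def slide_right(): print("avatar slides right")
def jump():        print("avatar jumps")

ACTIONS = {"right": slide_right, "left": slide_left, "blink": jump}

inlet = StreamInlet(resolve_byprop('name', 'BCI_Commands', timeout=10.0)[0])
while True:
    command, _ = inlet.pull_sample()          # one label per 3 s epoch
    ACTIONS.get(command[0], lambda: None)()   # ignore anything unexpected
```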

5. Results

Table 2 presents the classification results of EEG signals into three classes: Right Motor Imagery, Left Motor Imagery, and Blink. The Right Motor Imagery, Left Motor Imagery and Blink columns present the True Positive Rate (TPR) and Precision results of the respective commands, while the Overall column is the classification accuracy (Acc) for each subject. The average classification accuracy for all 33 subjects is 96.9%. The highest TPR is obtained for Left Motor Imagery (95.6% average for all subjects) and the lowest for Right Motor Imagery (95.4% average for all subjects). The results indicate that the proposed method is able to effectively classify the EEG signals into the three categories, with a high degree of accuracy.
Most of the participants (26) had not previously taken part in a BCI game experiment, while the remaining seven (Subjects 1–5, 32 and 33) had participated in previous BCI research [28]. To ensure that they were properly prepared, an experienced researcher briefed them on the protocol before the experiment started. After the recordings and the offline processing were completed, the subjects were instructed to perform the mental commands of left/right motor imagery for 90 s (45 s for left and 45 s for right movement) in order to evaluate the online classification results. This process was repeated after the completion of the testing phase for every subject, to examine whether users had improved after the training and testing process. For the training process, they were given ten practice trials to become familiar with the game environment. After they had become familiar with the game, they played ten more times to evaluate the BCI system. The total number of coins collected (averaged over the ten repetitions per subject) and the total number of coin clusters (averaged over the ten repetitions per subject) were used as evaluation metrics. A coin cluster is considered completed if at least one of its coins is collected. Table 3 presents the results of the online testing.
Table 3 displays the results of the BCI game for the 33 subjects. The two metrics used to evaluate the game performance are the Average Game Score and the Average Coin Clusters. The Average Game Score for all subjects is 27.6 (55.3%). The best performing participant is subject 5 with a 39.4 (78.8%) average score and the worst is subject 8 with 21 (42%). During the testing phase, it is observed that users can be classified into three groups depending on their performance and their ability to synchronize their mental commands in the game, as presented in Figure 6. The first group comprises ten individuals who experienced difficulty while playing the game; their score ranges from 40% to 49.9%. The primary challenge faced by players within this category is their inability to effectively manipulate the BCI. The second group consists of 14 subjects who could handle the game quite successfully. Their game score ranges from 50% to 59.9%. The last group includes nine users who could control the BCI with excellent precision. Their game score ranges from 60% to 69.9% (six subjects) and 70% to 78.8% (three subjects).
For the second online evaluation metric, the Average Coin Clusters is employed. The average number of clusters is 10.04 (59%), ranging from 7.8 (45.8%) to 13.8 (81.1%). Subject 15 had the lowest average and subject 5 the highest. These results are higher than the Average Game Score results because most of the subjects could not collect all three coins of a cluster due to synchronization issues.
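To make these two metrics concrete, the sketch below shows one way they could be computed from per-run logs; the data structures and the toy coin-to-cluster mapping are hypothetical, not the game's actual bookkeeping.

```python
# Sketch: average game score (coins per run) and average coin clusters
# (clusters with at least one collected coin per run), averaged over runs.
def game_metrics(runs, coin_to_cluster):
    """runs: list of sets of collected coin ids; coin_to_cluster: coin id -> cluster id."""
    avg_score = sum(len(run) for run in runs) / len(runs)
    avg_clusters = sum(len({coin_to_cluster[c] for c in run}) for run in runs) / len(runs)
    return avg_score, avg_clusters

# Toy layout matching the game's 50 coins in 17 clusters (last cluster has 2 coins).
coin_to_cluster = {i: i // 3 for i in range(50)}
runs = [{0, 1, 5, 9}, {0, 3, 4, 6, 10}]       # two hypothetical runs
print(game_metrics(runs, coin_to_cluster))    # (4.5, 3.5)
```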
Another online metric used to evaluate the results is User Improvement (Table 4). Every subject performed 15 motor imagery commands for each of the left and right movements before and after playing the game in order to monitor user adaptation to the BCI (60 trials for each subject). The results indicate an average 7.5% increase in the accuracy of the two mental commands. The average percentage of correctly classified left motor imagery commands is 73.73% before playing the game and 80.80% after, an average improvement of 7.1% across all subjects. For right motor imagery, the average percentage of correctly classified commands is 71.66% before playing the game and 79.73% after, an average improvement of 8.07%. These results are presented in Figure 7.
A Kolmogorov-Smirnov test was used to test for normality on the four data samples (Left MI before training, Left MI after training, Right MI before training and Right MI after training). The results are D(33) = 0.18, p = 0.197 for the first sample; D(33) = 0.18, p = 0.201 for the second; D(33) = 0.15, p = 0.385 for the third; and D(33) = 0.14, p = 0.479 for the fourth. The results indicate that the data are normally distributed in all samples.
Since the data are normally distributed, a paired samples t-test is performed to assess user improvement before and after testing the game. The left motor imagery results indicate a statistically significant difference between before (M = 11.1, SD = 3.1) and after (M = 12.1, SD = 2.6) playing the game, t(33) = 5.8, p < 0.001. The right motor imagery results also indicate a statistically significant difference between before (M = 10.8, SD = 2.9) and after (M = 12, SD = 2.3) playing the game, t(33) = 5.5, p < 0.001.
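These statistics can be reproduced with SciPy along the following lines; the file names are hypothetical, and standardizing before the Kolmogorov-Smirnov test is one common (Lilliefors-style) way to check normality when the population mean and variance are unknown.

```python
# Sketch: normality check per sample followed by a paired-samples t-test.
# Assumes each file holds one value per subject (correct commands out of 15).
import numpy as np
from scipy import stats

before = np.loadtxt("left_mi_before.txt")
after = np.loadtxt("left_mi_after.txt")

for name, x in (("before", before), ("after", after)):
    d, p = stats.kstest((x - x.mean()) / x.std(ddof=1), "norm")
    print(f"{name}: D = {d:.2f}, p = {p:.3f}")

t, p = stats.ttest_rel(before, after)         # paired t-test, before vs. after
print(f"t = {t:.2f}, p = {p:.4f}")
```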

6. Discussion

This article presents a 3D game that employs a non-invasive BCI. The game utilizes the Muse 2 EEG headband to gather EEG data, while the signals are processed and classified into three distinct classes (left and right motor imagery and blink) using the OpenViBE platform. The primary objective of the game is to evaluate the users’ adaptation and progress within the BCI environment after undergoing training. The classification algorithm employed is the MLP, which achieved an accuracy rate of 96.94%. The experiment involved 33 participants who effectively controlled an avatar by issuing mental commands in order to collect coins.
Our research aimed to explore the feasibility of commercial and low-cost wireless EEGs to create a reliable BCI system based on MI commands. The Muse 2 provides a user-friendly approach for studying motor imagery but it is important to acknowledge the limitations of the electrode positions employed in the system. The optimal electrode positions for capturing MI commands are over the motor cortex, specifically at the C3 and C4 locations in the 10–20 EEG system. These locations correspond to the primary motor cortex on the left and right sides of the brain, respectively. The sub-optimal electrode placement, lacking direct coverage of the motor cortex (such as C3 and C4 locations), may limit the spatial resolution and specificity of the recorded brain activity. As a result, the interpretation of neural signals related to motor imagery can be challenging, and the extracted features may not fully capture the intricacies of motor-related brain processes. To address this limitation, we employed a technique known as eye polarization, using left and right gaze. Through conducting a series of training sessions with multiple users, we aimed to evaluate the effectiveness and reliability of the resulting BCI system. Despite encountering certain limitations and challenges, our experimental results showed promising potential for the development of a reliable BCI game based on MI commands with a low-cost EEG headset.
While our BCI game showed promising results, it is essential to acknowledge its limitations. The game is designed around a three-second epoch: the avatar moves forward continuously and receives a turning or jumping command every 3 s based on the user’s mental command. The game maintains continuous movement without interruption or delay; however, to execute a turning or jumping action, the player must wait for the three-second interval. The avatar’s constant forward movement is implemented to ensure a captivating and engaging gameplay experience with no delays. Additionally, the placement of coins in our game is strategically designed to be collectible within the current analysis framework and time windows. Developing a BCI game therefore requires careful adjustment of the game parameters to match the requirements and constraints of the chosen EEG analysis approach. These considerations highlight the importance of adapting game designs to the capabilities and limitations of BCI systems, thus optimizing the user experience within the given constraints. Lastly, this BCI is not applicable to other games without adjustments.
Throughout the study, consistent stimuli were used during both the MI training phase and the data collection for classifier training. Participants engaged in asynchronous MI tasks, mentally simulating hand movement actions without physical execution. This approach ensured that all participants were exposed to identical mental tasks, eliminating potential confounding factors associated with varying stimuli. It is worth noting that Action Observation (AO), which involves observing motor actions performed by others (videos, games, real movement), activates similar neural pathways as MI. The main objective of AO is to visually perceive and process motor actions executed by someone else in order to activate relevant neural pathways and enhance motor cognition. While this study focused on MI training, future research could consider incorporating AO to explore potential synergistic effects on motor learning and performance. Combining both modalities may enhance neural activation and functional connectivity within the motor system, potentially leading to improved outcomes in motor skill acquisition and rehabilitation [33].
A comparison of this work to other papers from the literature is provided in Table 5.
Table 5 presents a comparison of various studies that have used EEG devices to measure brain activity and control games with mental commands. Although a direct comparison is not feasible, since each study has employed different subjects, devices, techniques and evaluation methods, several remarks can be drawn from Table 5.
Hjørungdal et al. [18] used three subjects, the Emotiv EPOC+ EEG device, four mental commands and the time to complete the task as the evaluation metric. Fiałek and Liarokapis [20] used two different EEG devices, the Emotiv EPOC+ and NeuroSky MindWave, to compare them for two mental commands on 31 subjects. To evaluate their experiment they employed a combination of average rating values, learnability, satisfaction, performance, and effort.
Alchalabi et al. [19] used four subjects, the Emotiv EPOC+ EEG device, two mental commands and average focus, average stress, average relaxation, average excitement, and average engagement as evaluation metrics. Djamal et al. [21] used 20 subjects, the NeuroSky MindWave EEG device, two mental commands and the average accuracy of training and testing data as the evaluation metric. Joselli et al. [22] used 11 subjects, the NeuroSky MindWave Mobile EEG device, one mental command and a combination of average player score, average missed cuts, average attention level, average stress level, average engagement level, and average evolution attention as evaluation metrics. Glavas et al. [23] used 38 subjects and the Muse 2 headband to acquire EEG data for two mental commands. To evaluate their work they employed Classification Accuracy, Avg Game Score 1, Avg Game Score 2 and Improvement as evaluation metrics.
In this work, 33 subjects participated to evaluate the BCI game. In the presented literature, the minimum number of subjects is three, the maximum is 38 and the average is 17.8 subjects; thus, this study includes one of the largest datasets presented in the literature. Furthermore, the number of mental commands in this study is three, while the majority of the papers in the literature investigate BCI systems with two mental commands, with the exception of [18], which includes four mental commands, albeit with a very limited number of participants. In addition, multiple repetitions were performed in this study, with the participants playing the game 20 times (ten for practice and the rest for evaluation). The combination of a large number of participants, three mental commands and multiple recordings for each subject (training, practice and evaluation) composes a very complex experimental landscape, which surpasses any previous attempt presented in the literature. Moreover, the evaluation metrics in this work are extensive (including Classification Accuracy, Game Score, Number of Clusters and User Improvement) in order to clearly assess the capabilities of the proposed system and capture its potential.

7. Conclusions

In this study, a Brain-Computer Interface game featuring three Motor Imagery commands is developed using Unity Engine for the 3D game and OpenViBE platform for BCI implementation. The objective of this game is to assess user adjustment and improvement in BCI control. Raw EEG data is recorded using the Muse 2 headband. A total of 33 participants took part in the experiment and successfully controlled an in-game avatar through mental commands to collect coins. The results indicate a great degree of precision in controlling the avatar once participants became familiar with the BCI environment. However, some participants struggled with the left motor imagery command. Despite this, the results hold promise for further research in BCI game development.
The BCI game utilized an MLP as its classification algorithm, with an accuracy of 96.94%. The experiment is evaluated through the use of online evaluation metrics, including the average game score, the average number of coin clusters collected, and the average user improvement. The results showed a noticeable improvement in the users’ performance between before and after training, demonstrating that subjects can learn to operate a BCI system.
In future research, the current BCI-based 3D MI game could be adapted as a training step in order to train users to control other types of robotic systems, such as robotic arms, wheelchairs, exoskeletons and drones. Furthermore, the comparison between different wearable EEG devices and their potential to effectively operate a BCI system should be addressed in future communications. The complexity of the BCI systems in terms of number of identified MI and mental state commands should also be clearly investigated, since up until now the majority of the related work includes just two mental commands. Building up the complexity of BCI systems must also be followed with the involvement of larger numbers of participants along with multiple recordings from each participant. Additionally, a training section will be incorporated into the game for the BCI system, ensuring that the stimuli used during both the training and online phases are consistent. As a result, evaluation analysis will be more reliable and it might be easier to notice significant effects and correlations. Finally, the current research could be extended to investigate the use of BCI technology in controlling augmented and virtual reality environments.

Author Contributions

Conceptualization, M.G.T.; methodology, G.P., K.G., K.D.T., A.T.T. and M.G.T.; software, G.P. and K.G.; validation, G.P. and K.G.; data curation, G.P. and K.G.; writing—original draft preparation, G.P. and K.G.; writing—review and editing, G.P., K.G., K.D.T., A.T.T. and M.G.T.; supervision, M.G.T.; project administration, M.G.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported from the project “AGROTOUR–New Technologies and Innovative Approaches to Agri-Food and Tourism to Boost Regional Excellence in Western Macedonia” (MIS 5047196), which is implemented under the Action “Reinforcement of the Research and Innovation Infrastructure”, funded by the Operational Programme “Competitiveness, Entrepreneurship and Innovation” (NSRF 2014-2020), and co-financed by Greece and the European Union (European Regional Development Fund). Additionally, this work has been co-financed by the European Union and Greek national funds through Operational Program Competitiveness, Entrepreneurship and Innovation under the call RESEARCH—CREATE—INNOVATE: “Intelli– WheelChair” (Project Code: T2EΔ K-02438).

Institutional Review Board Statement

The study was approved by the Research Ethics and Ethics Committee of the University of Western Macedonia (protocol code 246/2023 and date of approval 8 June 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data not available for this study.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BCI   Brain–computer interface
HCI   Human–computer interface
EEG   Electroencephalography
EOG   Electrooculography
LSL   Lab streaming layer
MLP   Multi-layer perceptron
MI    Motor Imagery
XR    Extended Reality

References

  1. Saha, S.; Mamun, K.A.; Ahmed, K.; Mostafa, R.; Naik, G.R.; Darvishi, S.; Khandoker, A.H.; Baumert, M. Progress in brain computer interface: Challenges and opportunities. Front. Syst. Neurosci. 2021, 15, 578875. [Google Scholar] [CrossRef] [PubMed]
  2. Wolpaw, J.R. Brain-computer interfaces (BCIs) for communication and control. In Proceedings of the 9th International ACM SIGACCESS Conference on Computers and Accessibility, Tempe, AZ, USA, 15–17 October 2007; pp. 1–2. [Google Scholar]
  3. Birbaumer, N. Breaking the silence: Brain–computer interfaces (BCI) for communication and motor control. Psychophysiology 2006, 43, 517–532. [Google Scholar] [CrossRef] [PubMed]
  4. Dornhege, G.; Millán, J.d.R.; Hinterberger, T.; McFarland, D.J.; Muller, K.R. Toward Brain-Computer Interfacing; Citeseer: State College, PA, USA, 2007; Volume 63. [Google Scholar]
  5. Kalagi, S.; Machado, J.; Carvalho, V.; Soares, F.; Matos, D. Brain computer interface systems using non-invasive electroencephalogram signal: A literature review. In Proceedings of the 2017 International Conference on Engineering, Technology and Innovation (ICE/ITMC), Madeira, Portugal, 27–29 June 2017; pp. 1578–1583. [Google Scholar]
  6. Abiri, R.; Borhani, S.; Sellers, E.W.; Jiang, Y.; Zhao, X. A comprehensive review of EEG-based brain–computer interface paradigms. J. Neural Eng. 2019, 16, 011001. [Google Scholar] [CrossRef] [PubMed]
  7. Teplan, M. Fundamentals of EEG measurement. Meas. Sci. Rev. 2002, 2, 1–11. [Google Scholar]
  8. Georgiev, D.D.; Georgieva, I.; Gong, Z.; Nanjappan, V.; Georgiev, G.V. Virtual reality for neurorehabilitation and cognitive enhancement. Brain Sci. 2021, 11, 221. [Google Scholar] [CrossRef]
  9. Wen, D.; Fan, Y.; Hsu, S.H.; Xu, J.; Zhou, Y.; Tao, J.; Lan, X.; Li, F. Combining brain–computer interface and virtual reality for rehabilitation in neurological diseases: A narrative review. Ann. Phys. Rehabil. Med. 2021, 64, 101404. [Google Scholar] [CrossRef]
  10. Robinson, N.; Mane, R.; Chouhan, T.; Guan, C. Emerging trends in BCI-robotics for motor control and rehabilitation. Curr. Opin. Biomed. Eng. 2021, 20, 100354. [Google Scholar] [CrossRef]
  11. Alrajhi, W.; Alaloola, D.; Albarqawi, A. Smart home: Toward daily use of BCI-based systems. In Proceedings of the 2017 International Conference on Informatics, Health & Technology (ICIHT), Riyadh, Saudi Arabia, 21–23 February 2017; pp. 1–5. [Google Scholar]
  12. Brunner, C.; Birbaumer, N.; Blankertz, B.; Guger, C.; Kübler, A.; Mattia, D.; Millán, J.D.R.; Miralles, F.; Nijholt, A.; Opisso, E.; et al. BNCI Horizon 2020: Towards a roadmap for the BCI community. Brain-Comput. Interfaces 2015, 2, 1–10. [Google Scholar] [CrossRef]
  13. Marshall, D.; Coyle, D.; Wilson, S.; Callaghan, M. Games, gameplay, and BCI: The state of the art. IEEE Trans. Comput. Intell. AI Games 2013, 5, 82–99. [Google Scholar] [CrossRef]
  14. Vasiljevic, G.A.M.; de Miranda, L.C. Brain–computer interface games based on consumer-grade EEG Devices: A systematic literature review. Int. J. Hum.–Comput. Interact. 2020, 36, 105–142. [Google Scholar] [CrossRef]
  15. Kerous, B.; Skola, F.; Liarokapis, F. EEG-based BCI and video games: A progress report. Virtual Real. 2018, 22, 119–135. [Google Scholar] [CrossRef]
  16. Plass-Oude Bos, D.; Reuderink, B.; van de Laar, B.; Gürkök, H.; Mühl, C.; Poel, M.; Nijholt, A.; Heylen, D. Brain-computer interfacing and games. Brain-Computer Interfaces: Applying our Minds to Human-Computer Interaction; Springer: Cham, Switzerland, 2010; pp. 149–178. [Google Scholar]
  17. Stamps, K.; Hamam, Y. Towards inexpensive BCI control for wheelchair navigation in the enabled environment–a hardware survey. In Proceedings of the Brain Informatics: International Conference, BI 2010, Toronto, ON, Canada, 28–30 August 2010; pp. 336–345. [Google Scholar]
  18. Hjørungdal, R.M.; Sanfilippo, F.; Osen, O.; Rutle, A.; Bye, R.T. A game-based learning framework for controlling brain-actuated wheelchairs. In Proceedings of the 30th European Conference on Modelling and Simulation, Regensburg, Germany, 31 May–3 June 2016. [Google Scholar]
  19. Alchalcabi, A.E.; Eddin, A.N.; Shirmohammadi, S. More attention, less deficit: Wearable EEG-based serious game for focus improvement. In Proceedings of the 2017 IEEE 5th international conference on serious games and applications for health (SeGAH), Perth, WA, Australia, 2–4 April 2017; pp. 1–8. [Google Scholar]
  20. Fiałek, S.; Liarokapis, F. Comparing Two Commercial Brain Computer Interfaces for Serious Games and Virtual Environments. In Emotion in Games; Springer: Cham, Switzerland, 2016; pp. 103–117. [Google Scholar]
  21. Djamal, E.C.; Abdullah, M.Y.; Renaldi, F. Brain computer interface game controlling using fast fourier transform and learning vector quantization. J. Telecommun. Electron. Comput. Eng. 2017, 9, 71–74. [Google Scholar]
  22. Joselli, M.; Binder, F.; Clua, E.; Soluri, E. Mindninja: Concept, Development and Evaluation of a Mind Action Game Based on EEGs. In Proceedings of the 2014 Brazilian Symposium on Computer Games and Digital Entertainment, Porto Alegre, Brazil, 12–14 November 2014; pp. 123–132. [Google Scholar] [CrossRef]
  23. Glavas, K.; Prapas, G.; Tzimourta, K.D.; Giannakeas, N.; Tsipouras, M.G. Evaluation of the User Adaptation in a BCI Game Environment. Appl. Sci. 2022, 12, 12722. [Google Scholar] [CrossRef]
  24. Interaxon’s Muse 2. Available online: https://choosemuse.com/muse-2/ (accessed on 1 June 2023).
  25. Garcia-Moreno, F.M.; Bermudez-Edo, M.; Rodríguez-Fórtiz, M.J.; Garrido, J.L. A CNN-LSTM deep Learning classifier for motor imagery EEG detection using a low-invasive and low-Cost BCI headband. In Proceedings of the 2020 16th International Conference on Intelligent Environments (IE), Madrid, Spain, 20–23 July 2020; pp. 84–91. [Google Scholar]
  26. Chaudhary, M.; Mukhopadhyay, S.; Litoiu, M.; Sergio, L.E.; Adams, M.S. Understanding brain dynamics for color perception using wearable eeg headband. arXiv 2020, arXiv:2008.07092. [Google Scholar]
  27. Pu, L.; Lion, K.M.; Todorovic, M.; Moyle, W. Portable EEG monitoring for older adults with dementia and chronic pain-A feasibility study. Geriatr. Nurs. 2021, 42, 124–128. [Google Scholar] [CrossRef] [PubMed]
  28. Prapas, G.; Glavas, K.; Tzallas, A.T.; Tzimourta, K.D.; Giannakeas, N.; Tsipouras, M.G. Motor Imagery Approach for BCI Game Development. In Proceedings of the 2022 7th South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM), Ioannina, Greece, 23–25 September 2022; pp. 1–5. [Google Scholar] [CrossRef]
  29. Kowaleski, J. BlueMuse. 2019. Available online: https://github.com/kowalej/BlueMuse (accessed on 10 October 2022).
  30. Kothe, C. Lab Streaming-Layer. 2018. Available online: https://github.com/sccn/labstreaminglayer (accessed on 8 May 2022).
  31. Marsland, S. Machine Learning: An Algorithmic Perspective; Chapman and Hall/CRC: Boca Raton, FL, USA, 2011. [Google Scholar]
  32. Raj, P.; Evangeline, P. The Digital Twin Paradigm for Smarter Systems and Environments: The Industry Use Cases; Academic Press: Cambridge, MA, USA, 2020. [Google Scholar]
  33. Miladinović, A.; Barbaro, A.; Valvason, E.; Ajčević, M.; Accardo, A.; Battaglini, P.P.; Jarmolowska, J. Combined and Singular Effects of Action Observation and Motor Imagery Paradigms on Resting-State Sensorimotor Rhythms. In Proceedings of the XV Mediterranean Conference on Medical and Biological Engineering and Computing—MEDICON 2019, Coimbra, Portugal, 26–28 September 2019; pp. 1129–1137. [Google Scholar]
Figure 1. The Muse 2 headband is presented (right image). A screenshot from one recording is shown. The four EEG channels of the Muse 2 headband and their waves are presented (left image).
Figure 2. This flowchart illustrates the two-step process of the proposed system. Initially, an offline processing phase is employed to train a classifier using EEG data. Following this, an online processing phase is used to take mental commands from the user, which are subsequently translated into in-game movement through the use of the trained classifier.
Figure 3. Offline processing scenario to train the classifier. The first step of the scenario involves importing EEG recordings using a CSV file reader. Three separate boxes are used to handle the three different EEG recordings (Left, Right and Blink) of the BCI system. The signals are then filtered to remove frequencies outside the range of 8–40 Hz. The filtered signals are epoched in time windows of 3 s and EEG waves are calculated (Alpha, Beta 1, Beta 2, Gamma 1, Gamma 2). In the next step, the energy of the signals is calculated and a feature vector of all frequency band energies is formulated in logarithmic scale. Finally, all feature vectors from all the different EEG recordings are used for the training of the classifier.
Figure 4. Online scenario for the proposed BCI system. The acquisition client connects with the LSL stream from BlueMuse on a specific port and the real-time data processing starts. With the channel selector, only the four EEG channels are included (TP9, TP10, AF7, AF8), and then the same process as in the offline scenario is applied. The signals are filtered, EEG waves and energy are calculated, and these features are fed into the classifier to classify the mental commands. Finally, the LSL stream is employed to transmit the classifier’s results to the game. This was accomplished through the implementation of the LSL stream box, which facilitated the communication of data between OpenViBE and the game.
Figure 5. Screenshot from the game-play. The avatar is moving on the platform depending on the user’s mental commands.
Figure 6. Histogram that presents the different groups of users depending on their performance while testing the BCI game.
Figure 7. The average user improvement for MI commands before and after playing the game.
Table 1. The three classes are fully balanced and the feature vectors fed to the classifier for each class per subject is 97. The three classes are left MI, right MI, and blink.

Feature Vectors for Class 1 per Subject | Feature Vectors for Class 2 per Subject | Feature Vectors for Class 3 per Subject
97 | 97 | 97
Table 2. Classification Results for the three classes. The obtained classification results in terms of True Positive Rate and Precision for each Motor Imagery command and Blink are presented for each subject. The last two columns provide the classification accuracy and ROC metric, respectively, for each subject.

Subject | Right Motor Imagery (TPR/Precision) (%) | Left Motor Imagery (TPR/Precision) (%) | Blink (TPR/Precision) (%) | Overall (Acc %) | ROC Area
1 | 76.6/93.4 | 94.5/79.7 | 100.0/100.0 | 90.3 | 0.946
2 | 97.2/100.0 | 100.0/96.7 | 100.0/100.0 | 99 | 0.997
3 | 99.3/100.0 | 100/98.7 | 100.0/100.0 | 99.7 | 1
4 | 95.2/89.8 | 89.7/94.6 | 100.0/100.0 | 94.9 | 0.993
5 | 99.3/98.7 | 98.6/99.4 | 100.0/100.0 | 99.3 | 0.997
6 | 84.1/85.0 | 85.5/84.6 | 100.0/100.0 | 89.8 | 0.968
7 | 84.8/98.5 | 99.3/84.4 | 99.3/99.0 | 94.4 | 0.993
8 | 100.0/99.0 | 99.3/98.7 | 100.0/100.0 | 99.7 | 1
9 | 95.2/93.5 | 94.6/95.0 | 99.3/100.0 | 96.5 | 0.997
10 | 99.3/97.7 | 97.5/98.8 | 100.0/100.0 | 99.1 | 0.998
11 | 78.6/94.8 | 96.2/81.3 | 100.0/100.0 | 91.1 | 0.982
12 | 96.6/98.4 | 97.9/100.0 | 100.0/100.0 | 98.1 | 0.999
13 | 91.7/77.5 | 73.8/90.4 | 99.3/99.7 | 88.2 | 0.935
14 | 97.9/91.3 | 91.0/97.5 | 100.0/100.0 | 96.3 | 0.997
15 | 99.0/100.0 | 100.0/99.0 | 100.0/100.0 | 99.6 | 1
16 | 100.0/98.0 | 97.9/100.0 | 100.0/100.0 | 99.3 | 0.997
17 | 91.7/100.0 | 100.0/92.4 | 100.0/100.0 | 97.2 | 0.997
18 | 93.8/92.9 | 92.8/93.8 | 100.0/100.0 | 95.5 | 0.994
19 | 100.0/100.0 | 100.0/100.0 | 100.0/100.0 | 100 | 1
20 | 97.9/96.9 | 96.6/97.9 | 100.0/100.0 | 98.1 | 0.994
21 | 95.2/98.5 | 99.3/93.8 | 99.3/100.0 | 97.9 | 0.997
22 | 100.0/95.2 | 95.2/100.0 | 100.0/100.0 | 98.3 | 0.994
23 | 97.2/94.6 | 95.2/96.4 | 100.0/100.0 | 97.4 | 0.998
24 | 99.3/98.7 | 100.0/99.0 | 98.6/100.0 | 99.3 | 1
25 | 96.6/100.0 | 100.0/96.3 | 99.3/100.0 | 98.5 | 0.994
26 | 99.3/99.3 | 100.0/99.0 | 99.3/100.0 | 99.5 | 1
27 | 99.0/100.0 | 100.0/99.0 | 100.0/100.0 | 99.6 | 1
28 | 100.0/83.6 | 80.4/100.0 | 100.0/100.0 | 93.4 | 0.982
29 | 99.0/96.0 | 96.0/98.8 | 100.0/100.0 | 98.3 | 1
30 | 99.0/95.0 | 94.8/99.8 | 100.0/100.0 | 97.9 | 0.993
31 | 95.9/96.9 | 96.9/95.9 | 100.0/100.0 | 97.6 | 0.998
32 | 91.8/93.7 | 93.8/91.9 | 100.0/100.0 | 95.2 | 0.993
33 | 97.9/99.0 | 99.0/98.0 | 100.0/100.0 | 98.9 | 0.996
Table 3. Game Results. The Average Game Score column represents the average number of coins collected during the ten repetitions and the respective percentage. The Average Coin Clusters column represents the average number of coin clusters in which the subject collected at least 1 coin.

Subject | Average Game Score | Average Coin Clusters
1 | 28.7 (57.4%) | 10.6 (62.3%)
2 | 36.8 (73.6%) | 13.2 (77.6%)
3 | 30.2 (60.4%) | 10.9 (64.1%)
4 | 28.6 (57.2%) | 10.4 (61.1%)
5 | 39.4 (78.8%) | 13.8 (81.1%)
6 | 29.6 (59.2%) | 11.0 (64.7%)
7 | 31.4 (62.8%) | 11.3 (66.4%)
8 | 21.0 (42.0%) | 8.2 (48.2%)
9 | 25.1 (50.2%) | 9.3 (54.7%)
10 | 27.6 (55.2%) | 10.1 (59.4%)
11 | 24.3 (48.6%) | 8.5 (50.0%)
12 | 25.5 (51.0%) | 9.5 (55.8%)
13 | 32.0 (64.0%) | 11.5 (67.6%)
14 | 35.0 (70.0%) | 12.5 (73.5%)
15 | 21.3 (42.6%) | 7.8 (45.8%)
16 | 26.4 (52.8%) | 9.5 (55.8%)
17 | 27.9 (55.8%) | 10.3 (60.5%)
18 | 25.8 (51.6%) | 9.2 (54.1%)
19 | 31.4 (62.8%) | 11.3 (66.4%)
20 | 27.6 (55.2%) | 10.0 (58.8%)
21 | 23.3 (46.6%) | 8.2 (48.2%)
22 | 27.9 (55.8%) | 10.4 (61.1%)
23 | 31.5 (63.0%) | 11.4 (67.0%)
24 | 25.3 (50.6%) | 9.0 (52.9%)
25 | 24.8 (49.6%) | 8.6 (50.5%)
26 | 24.2 (48.4%) | 8.2 (48.2%)
27 | 28.6 (57.2%) | 10.2 (60.0%)
28 | 22.3 (44.6%) | 8.4 (49.4%)
29 | 31.5 (63.0%) | 11.4 (67.0%)
30 | 22.6 (45.2%) | 8.8 (51.7%)
31 | 23.2 (46.4%) | 8.4 (49.4%)
32 | 28.4 (56.8%) | 10.8 (63.5%)
33 | 23.8 (47.6%) | 8.8 (51.7%)
Table 4. User Improvement. The table shows the number of correct Motor Imagery commands for 15 tries before and 15 tries after playing the game for each command. The results in the table present the True Positive Rate and Precision of each Motor Imagery command.

Subject | Left MI (TPR/Precision) (%) (Before Training) | Left MI (TPR/Precision) (%) (After Training) | Right MI (TPR/Precision) (%) (Before Training) | Right MI (TPR/Precision) (%) (After Training)
1 | 73.3/66.8 | 80.0/80.0 | 66.7/71.4 | 80.0/80.0
2 | 86.7/81.3 | 93.3/87.5 | 80.0/88.5 | 86.7/92.9
3 | 80.0/70.6 | 86.7/72.2 | 66.7/76.9 | 66.7/83.3
4 | 73.3/73.3 | 73.3/78.6 | 73.3/73.3 | 80.0/75.0
5 | 93.3/82.4 | 100.0/88.2 | 80.0/92.3 | 86.7/100
6 | 80.0/75.0 | 80.0/80.0 | 73.3/78.6 | 80.0/80.0
7 | 86.7/81.3 | 93.3/93.3 | 80.0/85.7 | 93.3/93.3
8 | 46.7/50.0 | 60.0/56.3 | 53.3/50.0 | 53.3/57.1
9 | 66.7/71.4 | 73.3/78.6 | 73.3/68.8 | 80.0/75.0
10 | 73.3/68.8 | 80.0/80.0 | 66.7/71.4 | 80.0/80.0
11 | 66.7/62.5 | 73.3/66.8 | 60.0/64.3 | 66.7/71.4
12 | 66.7/66.7 | 80.0/75.0 | 66.7/66.7 | 73.3/78.6
13 | 80.0/80.0 | 93.3/87.5 | 80.0/80.0 | 86.7/92.9
14 | 86.7/92.9 | 93.3/100 | 93.3/87.5 | 100.0/93.8
15 | 60.0/52.9 | 53.3/57.1 | 46.7/53.8 | 60.0/56.3
16 | 66.7/62.5 | 73.3/64.7 | 60.0/64.3 | 60.0/69.2
17 | 80.0/75.0 | 80.0/80.0 | 73.3/78.6 | 80.0/80.0
18 | 73.3/68.8 | 80.0/75.0 | 66.7/71.4 | 73.3/78.6
19 | 86.7/92.9 | 93.3/87.5 | 93.3/87.5 | 86.7/92.9
20 | 80.0/75.0 | 80.0/80.0 | 73.3/78.6 | 80.0/80.0
21 | 53.3/57.1 | 60.0/64.3 | 60.0/56.3 | 66.7/62.5
22 | 66.7/62.5 | 73.3/73.3 | 60.0/64.3 | 73.3/73.3
23 | 86.7/72.2 | 100.0/78.9 | 66.7/83.3 | 73.3/100
24 | 100.0/75.0 | 100.0/88.2 | 66.7/100 | 86.7/100
25 | 93.3/87.5 | 93.3/87.5 | 86.7/92.9 | 86.7/92.9
26 | 66.7/90.9 | 80.0/70.6 | 53.3/73.7 | 66.7/76.9
27 | 100.0/78.9 | 100.0/100.0 | 73.3/100.0 | 100.0/100.0
28 | 40.0/100 | 53.3/88.9 | 100.0/62.5 | 93.3/66.7
29 | 100.0/100.0 | 100.0/100.0 | 100.0/100.0 | 100.0/100.0
30 | 0.0/- | 20.0/100.0 | 100.0/50.0 | 100.0/55.6
31 | 100.0/50.0 | 100.0/60.0 | 0.0/- | 33.3/100.0
32 | 66.7/90.9 | 86.7/100.0 | 93.3/73.7 | 100.0/88.2
33 | 53.3/72.7 | 80.0/100 | 80.0/63.2 | 100.0/83.3
Table 5. Comparative study.

Authors | Subjects | EEG Device | Mental Commands | Reps per Subj | Experiment Duration (per Subj) | Evaluation Metrics
Hjørungdal et al. [18] | 3 | Emotiv EPOC+ | 4 | - | - | Time to Complete the Task
Fiałek and Liarokapis [20] | 31 | NeuroSky MindWave, Emotiv EPOC+ | 2 | - | - | Avg Rating Values, Learnability, Satisfaction, Performance, Effort
Alchalabi et al. [19] | 4 | Emotiv EPOC+ | 2 | 1 | - | Avg Focus (0.38), Avg Stress (0.49), Avg Relaxation (0.32), Avg Excitement (0.25), Avg Engagement (0.65)
Djamal et al. [21] | 20 | NeuroSky MindWave | 2 | 4 | - | Average Accuracy of Training and Testing Data
Joselli et al. [22] | 11 | NeuroSky MindWave | 1 | 5 | 10 min | Avg Player Score (163.36), Avg Missed Cuts (8), Avg Attention Level (74), Avg Stress Level (41.27), Avg Engagement Level (75.73), Avg Evolution Attention (1.70)
Glavas et al. [23] | 38 | Muse 2 Headband | 2 | 20 | 40 min | Classification Accuracy (98.75%), Avg Game Score 1 (52.70%), Avg Game Score 2 (70.35%), Improvement
This work | 33 | Muse 2 Headband | 3 | 20 | 55 min | Classification Accuracy (96.94%), Avg Game Score 27.6 (55.3%), Avg Number of Clusters 10.04 (59%), Avg Improvement 7.5%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
