Review

Impact of Virtual Reality on Brain–Computer Interface Performance in IoT Control—Review of Current State of Knowledge

Faculty of Computer Science, Kazimierz Wielki University, 85-064 Bydgoszcz, Poland
*
Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(22), 10541; https://doi.org/10.3390/app142210541
Submission received: 12 October 2024 / Revised: 8 November 2024 / Accepted: 13 November 2024 / Published: 15 November 2024
(This article belongs to the Special Issue IoT in Smart Cities and Homes, 2nd Edition)

Featured Application

Potential applications of this work relate to advanced hands-free control systems based on the interaction of BCI and VR.

Abstract

This article examines state-of-the-art research into the impact of virtual reality (VR) on brain–computer interface (BCI) performance, i.e., how the use of VR can affect brain activity and neural plasticity in ways that improve the performance of BCIs in IoT control, e.g., for smart home purposes. Integrating BCI with VR improves the performance of brain–computer interfaces in IoT control by providing immersive, adaptive training environments that increase signal accuracy and user control. VR offers real-time feedback and simulations that help users refine their interactions with smart home systems, making the interface more intuitive and responsive. This combination ultimately leads to greater independence, efficiency, and ease of use, especially for users with mobility issues, in managing IoT-connected devices. The integration of BCI and VR shows great potential for transformative applications ranging from neurorehabilitation and human–computer interaction to cognitive assessment and personalized therapeutic interventions for a variety of neurological and cognitive disorders. The literature review highlights the significant advances and multifaceted challenges in this rapidly evolving field. Particularly noteworthy is the emphasis on adaptive signal processing techniques, which are key to enhancing the overall control and immersion experienced by individuals in virtual environments. The value of multimodal integration, in which BCI technology is combined with complementary biosensors such as gaze tracking and motion capture, is also highlighted. The incorporation of advanced artificial intelligence (AI) techniques has the potential to transform the way we approach the diagnosis and treatment of neurodegenerative conditions.

1. Introduction

Integrating BCI with VR can significantly increase the efficiency of BCI in controlling IoT systems such as smart homes. This is particularly important in the aging societies of developed and developing countries, where automation of daily living activities, healthcare, and management within a smart home may be the only practical way to meet the needs of older adults with age-related cognitive limitations. By a smart home, we also mean technologies for use in, for example, nursing homes, senior condominiums, or even resorts and sanatoriums.
VR can provide immersive training environments, allowing users to practice controlling devices using brain signals, which improves the accuracy and speed of BCI commands. The real-time visual feedback provided by VR can help users understand how their brain signals interact with a system, leading to more intuitive and efficient control. VR environments can simulate different smart home scenarios, allowing users to control different devices in a low-risk virtual space before interacting with the devices in the real world. In addition, integrating VR with BCI can help overcome the limited interface and feedback challenges of traditional BCIs by visually presenting complex data in an accessible way. This increases user engagement and learning because VR can adapt to the user’s skill level, gradually increasing task complexity, which improves neural adaptation over time.
For users with disabilities or impaired mobility, VR offers a non-contact, immersive interface that allows them to control their smart homes more independently. Additionally, machine learning algorithms used in VR simulations can analyze brain patterns and optimize system responsiveness based on individual user preferences. As BCI technology improves with VR training, it has the potential to provide smoother, more accurate control of IoT devices by reducing misinterpretations of brain signals. Combining VR and BCI can also enable remote or multi-device control, where users seamlessly manage multiple IoT devices in a single virtual environment, thus increasing convenience and efficiency in smart homes.
Virtual reality has become an increasingly valuable technology with diverse applications, from neurorehabilitation for individuals recovering from stroke or traumatic brain injury to cognitive enhancement for older adults experiencing age-related cognitive decline [1]. The immersive and interactive nature of virtual environments has been shown to activate neural plasticity, potentially leading to improvements in cognitive functioning. As brain–computer interface technology continues to advance, the integration of virtual reality presents promising opportunities for enhancing BCI performance and user experience.
This review article examines the current research on the impact of virtual reality on brain–computer interface performance. Specifically, we will explore how the use of virtual reality can influence brain activity and neural plasticity in ways that may enhance the performance of brain–computer interfaces.
Virtual reality has emerged as a powerful tool in the field of neurorehabilitation, offering immersive and interactive environments that can drive neuroplasticity and facilitate recovery of physical and cognitive functions [1]. VR can create context-specific interactive scenarios that involve all our senses, stimulate the brain in a multisensory fashion, and increase motivation and fun with game-like environments. These properties of virtual reality make it a promising complement to brain–computer interface technology, which aims to provide assistive and restorative functions through direct communication between the brain and external devices.
The interaction between the human mind and virtual realities has been demonstrated to improve cognitive functions through the activation of neural plasticity [1,2,3]. Technologies employing virtual realities may be especially helpful for older adults suffering from cognitive decline and social isolation, as well as for assisting in the neurorehabilitation of patients with stroke or traumatic brain injury. Additionally, virtual reality may be an essential ingredient for the replacement of lost functions through an appropriate brain–computer interface that controls robotic devices.
The novelty and contribution of the article lie in highlighting that VR-based BCIs expand the possibilities for applications in rehabilitation, gaming, and education by offering more interactive and personalized experiences. This integration provides a more effective and engaging platform for non-invasive neurofeedback and cognitive control training. Comparison with previous studies in the discussion extends the current state of knowledge.
A key mechanism by which virtual reality may enhance brain–computer interface performance is through the induction of neuroplasticity. Neuroplasticity refers to the brain’s ability to reorganize neural pathways and synaptic connections in response to changes in behavior, environment, neural processes, thinking, and emotions. Virtual reality can activate neuroplasticity through its provision of multisensory stimulation, increased user engagement and motivation, and ability to adapt task difficulty to the user’s capabilities [2].
The immersive and interactive nature of virtual reality has been shown to stimulate neural activity across multiple sensory modalities, including visual, auditory, and somatosensory. This multimodal sensory stimulation can drive neuroplastic changes that ultimately enhance the brain’s ability to process and encode information. Additionally, the game-like and engaging nature of virtual reality can increase a user’s motivation and sustained attention—factors that are crucial for effective motor learning and skill acquisition.
Furthermore, virtual reality environments can be dynamically adjusted to match the user’s abilities and provide the right level of challenge, a key factor in promoting neuroplasticity. As the user progresses, the virtual environment can become more complex, requiring greater cognitive and motor skills. This gradual increase in task difficulty is an effective strategy for driving neural reorganization and performance improvements [2].
The use of virtual reality for cognitive rehabilitation is underpinned by several theoretical frameworks, including the concepts of embodied cognition, neural network theory, and social cognition. Embodied cognition posits that cognitive processes are grounded in the body’s interactions with the environment, suggesting that the immersive and interactive nature of virtual reality can engage sensorimotor systems to facilitate cognitive rehabilitation. Neural network theory emphasizes the brain’s ability to dynamically reconfigure its connections in response to experience, which aligns with the neuroplastic changes induced by virtual reality. Social cognition theory highlights the importance of social interactions and cues in shaping cognitive abilities, a factor that can be incorporated into virtual reality-based rehabilitation through collaborative or multiplayer scenarios.
While the integration of virtual reality with brain–computer interfaces remains a relatively new and underexplored field, the potential benefits of this combination may not be as clear-cut as suggested by preliminary evidence. Although a study [4] indicates that VR can enhance task performance and user focus during BCI interaction, this does not necessarily translate into improved overall BCI performance or training efficiency. In fact, the immersive and potentially distracting nature of virtual environments may introduce additional cognitive load and divert users from the core BCI tasks, potentially hindering their ability to effectively modulate the neural activity patterns required for optimal BCI control. Furthermore, the compatibility and integration of VR systems with BCI hardware and software can present technical challenges that may offset any potential usability advantages. Therefore, further research is needed to directly compare the efficacy of VR-BCI systems against traditional non-VR setups across a broader range of BCI applications and user populations to fully understand the impact of virtual reality on BCI performance.

1.1. EEG Signal Acquisition

The electroencephalographic signal represents the electrical potentials recorded by electrodes placed on the scalp, which fluctuate due to the synchronous activity of millions of neurons in the brain. These fluctuations in electrical potentials are caused by the ionic current flows associated with the neural processes, reflecting the underlying neuronal activity within the cerebral cortex [5].
The contact point of the electrode with the subject’s head picks up electrical signals that represent the collective activity of many neurons within the area of the brain being monitored. These signals are not limited to a single neuron but rather reflect the synchronized firing of numerous neurons involved in the neural processes, which are associated with a particular cognitive or motor task being performed by the subject.
An electroencephalograph is a system composed of multiple electronic components designed to acquire, amplify, and transmit the electrical signals that represent the brain’s neural activity. However, the conditions in which the EEG system operates can present challenges in accurately capturing these low-voltage, microvolt-range signals, even with amplification and signal processing techniques.
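As an illustration of the kind of digital conditioning typically applied to such low-voltage recordings after amplification, the minimal sketch below notch-filters mains interference and band-passes a synthetic single-channel trace. The sampling rate, cutoff frequencies, and 50 Hz mains assumption are illustrative choices, not parameters taken from any specific system discussed in this review.
```python
# Illustrative conditioning of a raw EEG trace after amplification:
# a 50 Hz notch filter for mains interference and a 1-40 Hz band-pass.
# The sampling rate, cutoffs, and mains frequency are assumptions for this sketch.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 250  # sampling rate in Hz (assumed)

def condition_eeg(raw, fs=FS, mains_hz=50.0, band=(1.0, 40.0)):
    """Apply a notch filter at the mains frequency and a Butterworth band-pass."""
    b_notch, a_notch = iirnotch(mains_hz, Q=30.0, fs=fs)
    notched = filtfilt(b_notch, a_notch, raw)
    b_band, a_band = butter(4, band, btype="bandpass", fs=fs)
    return filtfilt(b_band, a_band, notched)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    t = np.arange(5 * FS) / FS                       # 5 s of synthetic data
    raw = (20e-6 * np.sin(2 * np.pi * 10 * t)        # 10 Hz alpha-like component
           + 50e-6 * np.sin(2 * np.pi * 50 * t)      # mains interference
           + 5e-6 * rng.normal(size=t.size))         # broadband noise
    clean = condition_eeg(raw)
    print("Raw std (uV):  ", raw.std() * 1e6)
    print("Clean std (uV):", clean.std() * 1e6)
```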

1.2. Brain–Computer Interface Fundamentals

A brain–computer interface (BCI) is a direct communication pathway between the brain and an external device that allows the user to control or interact with the device using their neural activity. In the EEG-based case, the interface allows humans to communicate with a computer using only EEG signals recorded from the surface of the head; after proper calibration, the user, working with the device and dedicated software, is able to control a computer in a way similar to using a mouse or keyboard. The operation of a brain–computer interface begins with the user initiating a specific task or action. This can involve imagining movement, reaching a certain focus threshold, or engaging in meditation, depending on the characteristics the equipment is designed to detect. This is followed by the acquisition and preprocessing of brain signals, which includes removing artifacts from the data. Characteristic features are then extracted from the processed signals, and the interface translates the detected patterns of brain activity into commands that can be used to control a computer application or external device.
Slow Cortical Potentials, Sensorimotor Rhythms, and Event-Related Potentials are among the primary neural correlates that have been used to operate BCIs [6]. These neural signals reflect different aspects of brain activity and can be used to control various functions of a brain–computer interface. Slow Cortical Potentials, for example, are low-frequency shifts in the brain’s electrical activity that can be modulated by the user’s mental state and utilized for BCI control. Sensorimotor rhythms, which are oscillations in the mu and beta frequency bands over the sensorimotor cortex, can be used to detect motor imagery and intentions. Event-Related Potentials, on the other hand, are time-locked responses to specific sensory, cognitive, or motor events, which can be leveraged for BCI applications such as P300-based spellers. The selection and integration of these neural features depend on the specific requirements and design of the BCI system to optimize its performance and usability for the end-user.
The steady-state visual evoked potential (SSVEP) is yet another important and widely employed brain signal in BCIs; it represents the brain’s response to a visual stimulus flickering at a specific frequency [7]. SSVEPs have been extensively studied and utilized in various BCI applications due to their high signal-to-noise ratio, ease of detection, and ability to elicit robust responses in the brain. The SSVEP-based BCI approach allows users to control external devices or computer interfaces by focusing their visual attention on flickering visual stimuli, which induces corresponding frequency-specific responses in the brain’s electrical activity that can be detected and translated into control commands [7,8,9,10].
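To illustrate how an SSVEP-based interface can translate such frequency-specific responses into commands, the sketch below applies the widely used canonical correlation analysis (CCA) approach: the EEG epoch is correlated with sine/cosine reference signals at each candidate flicker frequency, and the best-matching frequency is taken as the user’s selection. The channel count, sampling rate, window length, and candidate frequencies are assumptions for the example and do not correspond to any particular system cited here.
```python
# Illustrative SSVEP decoding via canonical correlation analysis (CCA).
# Assumptions (not taken from the cited studies): 8 EEG channels, 250 Hz
# sampling rate, candidate flicker frequencies of 8, 10, 12 and 15 Hz.
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250                           # sampling rate in Hz (assumed)
FREQS = [8.0, 10.0, 12.0, 15.0]    # candidate stimulus frequencies (assumed)

def reference_signals(freq, n_samples, fs, n_harmonics=2):
    """Sine/cosine reference signals at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)   # shape: (n_samples, 2 * n_harmonics)

def classify_ssvep(eeg_epoch, fs=FS, freqs=FREQS):
    """Return the candidate frequency whose references correlate most with the EEG.

    eeg_epoch: array of shape (n_samples, n_channels).
    """
    scores = []
    for f in freqs:
        refs = reference_signals(f, eeg_epoch.shape[0], fs)
        cca = CCA(n_components=1)
        x_c, y_c = cca.fit_transform(eeg_epoch, refs)
        scores.append(abs(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1]))
    return freqs[int(np.argmax(scores))], scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_samples, n_channels = 2 * FS, 8          # a 2 s epoch (assumed window length)
    t = np.arange(n_samples) / FS
    # Synthetic epoch: a 12 Hz response buried in noise on every channel.
    epoch = rng.normal(0, 1, (n_samples, n_channels)) + \
        0.5 * np.sin(2 * np.pi * 12.0 * t)[:, None]
    detected, _ = classify_ssvep(epoch)
    print(f"Detected stimulation frequency: {detected} Hz")
```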

1.3. BCI Applications

The key medical applications of brain–computer interfaces include neurorehabilitation, assistive technology, diagnosis and monitoring, and treatment of neurological and psychiatric disorders (Table 1).
Brain–computer interfaces have also found applications in gaming and entertainment, which enable users to control virtual environments and gaming experiences using their brain activity.
  • Brain-Controlled Games:
    • Games that use brain activity to control characters or objects [30,31];
    • Neurofeedback games for relaxation and focus [31,32,33];
  • Virtual Reality and Augmented Reality Experiences:
    • Immersive and interactive experiences controlled by brain activity [33];
  • Music and Art Generation:
    • Creating music or art using brainwaves [34].
Brain–computer interfaces can enable hands-free communication and control of external devices.
  • Hands-Free Typing and Communication:
    • Spelling devices for people with severe motor impairments [35];
  • Brain-to-Brain Communication:
    • Research into direct communication between brains [36];
  • Control of External Devices:
    • Smart home control [20];
    • Vehicle control (e.g., cars and drones) [37,38,39].
Other applications of brain–computer interfaces include education and training, neuromarketing, security, and authentication.
  • Education and Training:
    • Enhancing attention and focus in students [40];
    • Training pilots and surgeons using simulations [41,42];
  • Neuromarketing:
    • Measuring consumer responses to products and advertisements [43];
  • Security and Authentication:
    • Brainwave-based lie detection [44];
    • Biometric authentication [45,46,47].

1.4. Virtual Reality Fundamentals

Virtual reality can be defined as an interactive, computer-generated environment that provides an immersive experience for the user [48]. This immersive experience is achieved through the use of specialized hardware and software, which work together to stimulate multiple human senses, including vision, audition, and sometimes even touch or proprioception. The article [49] primarily focuses on proposing electrophysiological measures as a method for evaluating the sense of presence in virtual environments. Physiological signals, such as electroencephalography, can provide objective insights into the user’s cognitive and emotional states, which are closely linked to the sense of presence experienced in VR.
The goal of virtual reality is to create immersive experiences that closely resemble real-world scenarios. To achieve the highest level of realism, specialized headsets have been designed to isolate the user from external stimuli and direct their focus solely on the virtual environment. Additionally, the digitally generated world must exhibit responsiveness, dynamically reacting in real-time to changes in the environment and user inputs.
The virtual reality market has expanded beyond just entertainment and has a growing range of applications and use cases. While virtual reality games and 360-degree films continue to provide immersive experiences, the technology has also been increasingly adopted in fields such as education, training, healthcare, and even manufacturing. Virtual reality allows users to interact with digital environments in a way that simulates real-world scenarios, enabling new forms of learning, skill development, and problem-solving across diverse domains. The ability to create highly realistic and responsive virtual environments has also opened up new opportunities for remote collaboration, virtual tourism, and therapeutic interventions. As the technology continues to evolve, the potential applications of virtual reality are expected to expand further, transforming how we engage with digital information and experience the world around us [50,51].
Using a virtual environment has a measurable impact on brain–computer interface use. This solution is more intuitive and requires less training than traditional BCI use. Therefore, it can be successfully used in designing and creating new visual concepts, including schematic drawings, sketches, modeling solids in space, and digital sculpting. This kind of immersion in virtual reality can initiate a whole series of applications that are operated intuitively using thought commands in the created virtual world, allowing users to perform tedious and repetitive tasks in a different, simpler, and more entertaining way. Such a solution would be intended not only for healthy people but also for people with motor disabilities, who would gain greater independence in this way [4].

2. Material and Methods

2.1. Data Set

This study examines the research landscape of VR-BCI systems by applying bibliometric methods to scientific publications in this area. Our approach involves formulating research questions to identify key areas: the evolution of research topics over time, geographical patterns of publications, and the most influential authors and articles. We also examine emerging topics that may have an impact on future research. Obtaining a comprehensive understanding of current research and trends in VR-BCI systems is essential, as it will enrich ongoing discussions and establish a solid foundation for future research through the interpretation of bibliometric data.

2.2. Methods

The bibliographic databases Web of Science (WoS), Scopus, and PubMed were used in this study, selected for their wide research coverage and rich data that support in-depth bibliometric analysis. To focus on relevant literature, we applied specific filters, narrowing the scope to original articles and reviews in English. After filtering, we manually reviewed each article to ensure that it complied with the criteria of our study, which determined the final sample size. Descriptive statistics were then used to analyze the main features of the data set, including authors and research groups, topic clusters, and emerging trends. This allowed us to map the evolution of key terminology and major research achievements in the field of VR-BCI systems. We identified temporal trends to monitor changes in research coverage over time and grouped publications into topic clusters, revealing relationships between different research areas. This process highlights relevant topics and research fields (Figure 1).
The study followed specific elements of the PRISMA 2020 guidelines for bibliographic reviews, focusing on aspects such as the rationale (item 3), objectives (item 4), eligibility criteria (item 5), information sources (item 6), search strategy (item 7), selection process (item 8), data collection process (item 9), synthesis methods (item 13a), synthesis results (item 20b), and discussion (item 23a). For bibliometric analysis, we used tools embedded in the Web of Science (WoS), Scopus, and PubMed databases, as well as the Biblioshiny tool from the Bibliometrix R package v.4.1.3 (GNU GPL). This methodology supports bibliometric and scientometric studies, often allowing for refined categorization by conceptual structures, research areas, authors, documents, and sources. The results are presented using graphs and tables that allow for flexible analysis and visualization options. Given the interdisciplinary scope and complexity of the topic, we have collected the most important results of the review later in this article.
To refine our search to suit our research objectives, we used advanced filtered queries, limiting results to English-language articles. In WoS, searches were performed using the “Subject” field (consisting of title, abstract, Keywords Plus, and other keywords); in Scopus, using article title, abstract, and keywords; and in PubMed, using manual keyword sets. The databases were searched for articles using keywords such as “virtual reality”, “brain–computer interface”, and “system” or similar. The selected publications were then further refined (see Figure 2) by manually reviewing articles and removing irrelevant items and duplicates, which resulted in our final sample size. The number of selected articles decreased from 98 (published 2005–2024) to 3.

2.3. Research on VR-BCI Systems

Research on BCI with VR employs a variety of methodologies, often combining several approaches (Figure 3). Common techniques used in this field include neuroimaging techniques, virtual reality (VR) systems, experimental paradigms, data analysis and classification, and user experience evaluation.
The methodologies used in VR-BCI systems, especially in neuroimaging, play a key role in determining the quality, applicability, and effectiveness of the systems in specific applications. EEG is one of the most commonly used methods due to its non-invasive nature, relatively low cost, and high temporal resolution, making it ideal for applications that require real-time processing of brain signals, such as responsive games or instantaneous feedback in VR training environments. However, EEG suffers from low spatial resolution, which limits its ability to capture complex spatial patterns of brain activity, making it less suitable for tasks requiring detailed brain mapping.
In contrast, functional near-infrared spectroscopy (fNIRS) offers higher spatial resolution by detecting changes in cerebral blood flow, which can provide more precise information on specific areas involved in tasks or emotional responses in VR. Despite this spatial advantage, fNIRS has lower temporal resolution compared to EEG because it measures hemodynamic responses, which change more slowly than electrical brain signals, making it more suitable for applications that do not require immediate feedback. EEG is also more sensitive to electrical noise and artifacts from physical movements, which can be a challenge in VR environments where users frequently interact and move around. To overcome this, some VR-BCI systems combine EEG with fNIRS to take advantage of both high temporal and spatial resolution, capturing richer brain data while minimizing the limitations of each individual modality.
Another factor is the practicality of wearing the equipment: EEG caps and fNIRS headsets can be cumbersome, especially in immersive VR setups, so advances in wireless and lightweight devices are key to increasing user comfort and realism. For highly interactive VR applications, the real-time capabilities of EEG make it preferable, while for applications focused on monitoring cognitive load, mental fatigue, or emotional states, fNIRS may offer more meaningful data. Other neuroimaging techniques, such as MEG (magnetoencephalography) or fMRI (functional magnetic resonance imaging), provide more detailed information but are less practical for VR-BCI systems due to their high cost, bulk, and the need for immobilization, which limits natural interaction.
As a result, the choice of neuroimaging method depends largely on the specific goals of the VR-BCI application, balancing the need for spatial and temporal accuracy, user comfort, and interaction fidelity. Practical guidelines for VR-BCI development often suggest choosing EEG for real-time interactive applications and fNIRS for detailed location-specific monitoring, especially where time is less of a concern. Methodological studies comparing results from different neuroimaging modalities in VR-BCI contexts can help establish clearer guidelines, ultimately optimizing system performance and user experience across applications.
Neuroimaging techniques include the following:
  • Electroencephalography [52,53] constitutes the most common method due to its non-invasiveness, portability, and affordability. Researchers analyze brainwave patterns such as the following:
    • Brain rhythms: Alpha waves (8–12 Hz)—elevated alpha activity can signal relaxation or lack of engagement, while decreased alpha can indicate increased attention or mental effort; Beta waves (13–30 Hz)—increased beta activity can be used to detect states of heightened attention or engagement;
    • P300 event-related potential—a characteristic brain response to a rare or unexpected stimulus (a positive deflection occurring about 300 ms after the stimulus), used for selection tasks;
    • Steady-state visually evoked potentials—elicited by flickering stimuli at specific frequencies, allowing for control by focusing attention;
  • Functional near-infrared spectroscopy (fNIRS) [54,55]—measures brain activity by detecting changes in blood oxygenation. Offers better spatial resolution than EEG but is more limited in terms of portability;
  • Functional magnetic resonance imaging (fMRI) [56]—provides high spatial resolution images of brain activity but is expensive, immobile, and not suitable for real-time BCI-VR applications.
VR systems include the following:
  • Head-mounted displays [3,52,53]—immersive VR experiences, often combined with head tracking for more natural interaction;
  • Desktop VR [57]—less immersive but more accessible and affordable;
  • CAVE systems—highly immersive, multi-projection VR environments, but expensive and less common.
Experimental paradigms include the following:
  • Motor imagery—participants imagine performing movements to control virtual objects or navigate environments [52];
  • Visual attention—participants control the VR environment by focusing their attention on specific stimuli [58];
  • Cognitive tasks—BCI systems can be used to assess cognitive function in VR environments, such as memory, attention, and decision-making [54].
Data analysis and classification include the following:
  • Signal processing—extracting relevant features from noisy brain signals [59];
    • Machine learning—training algorithms to classify brain patterns and translate them into control commands [59] (see the sketch after these lists);
  • Statistical analysis—evaluating the performance and usability of BCI-VR systems.
User experience evaluation includes the following:
  • Questionnaires—assessing subjective experiences like presence, embodiment, and usability [52,53];
  • Performance metrics—measuring task accuracy, completion time, and other objective indicators of system performance [4].
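To make the signal processing and machine learning items above more concrete, the following minimal sketch extracts alpha- and beta-band power with Welch’s method and feeds the features into a linear discriminant classifier. The sampling rate, channel count, band limits, and synthetic data are assumptions chosen purely for illustration; they do not reproduce any of the cited systems.
```python
# Illustrative EEG feature extraction and classification for a VR-BCI:
# alpha/beta band power (Welch PSD) fed into a linear discriminant classifier.
# Channel layout, band limits, and the synthetic data are assumptions for this sketch.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # sampling rate in Hz (assumed)

def band_power(channel, fs, band):
    """Mean power of a single-channel epoch within a frequency band (Hz)."""
    freqs, psd = welch(channel, fs=fs, nperseg=fs)
    lo, hi = band
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

def extract_features(epoch, fs=FS):
    """Alpha (8-12 Hz) and beta (13-30 Hz) power for each channel of an epoch.

    epoch: array of shape (n_channels, n_samples)."""
    feats = []
    for ch in epoch:
        feats.extend([band_power(ch, fs, (8, 12)), band_power(ch, fs, (13, 30))])
    return np.array(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic training set: 40 epochs, 4 channels, 2 s each, two mental states.
    X, y = [], []
    for label in (0, 1):
        for _ in range(20):
            epoch = rng.normal(0, 1, (4, 2 * FS))
            if label == 1:  # add extra beta activity to one class
                t = np.arange(2 * FS) / FS
                epoch += 0.8 * np.sin(2 * np.pi * 20 * t)
            X.append(extract_features(epoch))
            y.append(label)
    clf = LinearDiscriminantAnalysis().fit(np.array(X), np.array(y))
    print("Training accuracy:", clf.score(np.array(X), np.array(y)))
```
In a real VR-BCI pipeline, the same feature-then-classifier structure would be trained on labeled calibration epochs recorded from the user and evaluated with the performance metrics listed above.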
Ethical considerations are paramount in any research involving human participants. In the context of our studies, which focus on healthy users rather than patients, formal approval from an institutional review board or ethics committee was not sought. This decision aligns with ethical guidelines for research involving non-clinical populations and minimal-risk procedures. Participation in the experiment was entirely voluntary, with each participant providing informed consent after receiving a comprehensive explanation of the study’s objectives, procedures, and potential risks and benefits. Participants were explicitly informed of their right to withdraw from the study at any time without consequence. Data collected during the study was anonymized to protect participant privacy and confidentiality.

3. Results

The integration of virtual reality and brain–computer interfaces has been explored in various research studies. Brain–computer interfaces have the potential to serve as a powerful control mechanism for a variety of applications by allowing direct communication between the human brain and external devices. When integrated with virtual reality environments, BCIs can enable users to influence navigation, communication, and other assistive functions through intuitive, multisensory interactions that engage the user and enhance their motivation and engagement. The immersive and interactive nature of virtual reality can amplify the capabilities of BCIs, providing a rich, context-specific interface for users to control digital systems and environments using their neural activity.
This research builds upon our previous study published in [4], which explored the potential of virtual reality to enhance the usability and efficiency of brain–computer interfaces. Our team designed a custom computer system that enabled a participant to draw shapes on a computer screen using a BCI. The study employed a two-day within-subject design. On the first day, the participant completed drawing tasks without VR. On the second day, they performed the same tasks while immersed in a virtual environment using Esperanza EMV300 mobile VR goggles (Esperanza sp. j., Ożarów Mazowiecki, Poland). Our findings demonstrated a significant improvement in task performance when VR was integrated with the BCI. The average time taken to complete the drawing tasks decreased threefold in the VR condition compared to the non-VR condition. This suggests that VR immersion enhances concentration and facilitates more efficient BCI control. Furthermore, the study found that a task requiring high precision was notably easier for the participant to perform in the VR environment. These results, while based on a single-case study, highlight the potential of VR to improve the usability and effectiveness of BCI systems, particularly for tasks demanding focused attention and precise control.

3.1. Neurorehabilitation

Research has indicated that the use of virtual reality tools in cognitive rehabilitation programs can benefit individuals with mild cognitive impairment and Alzheimer’s disease. The interactive and immersive qualities of virtual reality have been found to support interventions aimed at addressing behavioral and psychological symptoms in this population [60].
Another review article discusses how virtual reality can be integrated with brain–computer interfaces to enhance neurorehabilitation for patients with motor and cognitive deficits. The integration of VR with BCI technology provides an immersive and interactive environment that can drive neuroplasticity and facilitate the recovery of physical and cognitive functions in patients undergoing neurorehabilitation. This integrated approach utilizes the benefits of both virtual reality and brain–computer interfaces to develop customized and engaging rehabilitation programs for individuals with motor and cognitive deficits [3].
The study described in “BCI–VR-Based Hand Soft Rehabilitation System with Its Applications in Hand Rehabilitation After Stroke” examines a novel hand rehabilitation system for stroke patients. The system combines a brain–computer interface that allows patient control, a soft hand rehabilitation glove that provides physical support, and a virtual reality environment for engaging in rehabilitation training. The researchers tested their system on 11 stroke patients and found that it was effective in improving hand motor function, muscle strength, and muscle tension. They suggest that this system is a promising new approach for hand rehabilitation after stroke [61].
The research presented in “Clinical Effects of Immersive Multimodal BCI-VR Training after Bilateral Neuromodulation with rTMS on Upper Limb Motor Recovery after Stroke. A Study Protocol for a Randomized Controlled Trial” investigates the effectiveness of a specific rehabilitation program for stroke patients. This program combines brain stimulation (rTMS) with immersive virtual reality training controlled by a brain–computer interface. The study aims to determine if this combined approach is more effective than rTMS paired with conventional motor imagery tasks in improving upper limb motor function after a stroke [62,63].
Haoqi Li discusses the potential of combining brain–computer interface technology with virtual reality for neurorehabilitation and human–computer interaction. While this integration shows promise, the article highlights challenges such as accurately decoding brain signals and the limitations of recognizable motion commands. The article emphasizes that integrating multiple BCI paradigms and incorporating biosensors like eye-tracking and motion capture can help overcome these obstacles. It also points to the significant role of Artificial Intelligence in improving neural activity analysis and decoding for BCI-VR applications [57].
The article “ChatGPT and BCI-VR: a new integrated diagnostic and therapeutic perspective for the accurate diagnosis and personalized treatment of mild cognitive impairment” presents a promising approach to addressing the challenges in diagnosing and treating mild cognitive impairment. The authors propose an innovative integration of brain–computer interface technology, virtual reality, and the advanced language model ChatGPT.
This integrated system aims to provide a more comprehensive and personalized solution for MCI assessment and intervention. By leveraging BCI and VR, the approach would create an interactive and immersive environment that can dynamically respond to the user’s cognitive abilities, allowing for a more accurate and nuanced evaluation of MCI symptoms. Furthermore, the integration of ChatGPT’s natural language processing and machine learning capabilities enables the system to analyze patient data in depth, identify specific cognitive deficits, and tailor the treatment plan accordingly.
This combined BCI-VR-ChatGPT framework holds significant potential to enhance the accuracy of MCI diagnosis, leading to earlier intervention and more effective, personalized treatment strategies. The article highlights the synergistic benefits of integrating these cutting-edge technologies, which could revolutionize the way clinicians approach the management of mild cognitive impairment and potentially lead to improved outcomes for patients [64,65,66].

3.2. IoT

The research examined a new brain–computer interface system that allows users to control a swarm of drones using a virtual reality environment. This system generates a virtual representation of the physical environment, enabling users to control the drones by focusing on different targets within the VR interface [64]. This system has the potential to revolutionize the way we control drones, making it more intuitive and efficient.
The study reported in the cited work outlines a brain–computer interface system that uses steady-state visually evoked potentials to control a virtual reality environment. The researchers designed a four-class three-dimensional (3D) paradigm that moves synchronously with a virtual car in the VR environment. They found that the system achieved a high average accuracy of 0.956 and a maximum information transfer rate of 41.033 bits/min. The study suggests that this hybrid BCI-VR system provides a promising approach for brain–computer interaction [8].
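For context, information transfer rates of this kind are commonly computed with the Wolpaw formula from the number of classes, the classification accuracy, and the time allotted per selection. The sketch below implements that formula; the 2.5 s selection window used in the example call is an assumed value for illustration, not one reported in the cited study.
```python
# Wolpaw information transfer rate (ITR) for a BCI, in bits per minute.
# The example selection time below is an assumption, not a value from the cited study.
import math

def itr_bits_per_minute(n_classes, accuracy, selection_time_s):
    """Wolpaw ITR: bits per selection scaled to bits per minute."""
    if accuracy >= 1.0:
        bits = math.log2(n_classes)
    else:
        bits = (math.log2(n_classes)
                + accuracy * math.log2(accuracy)
                + (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1)))
    return bits * (60.0 / selection_time_s)

# Four classes at 95.6% accuracy, assuming a 2.5 s selection window:
print(round(itr_bits_per_minute(4, 0.956, 2.5), 1))  # ~40 bits/min
```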
The research described in “Development and evaluation of BCI for operating VR flight simulator based on desktop VR equipment” focused on developing and evaluating a brain–computer interface system for controlling a virtual reality flight simulator. The researchers utilized a desktop VR setup to address common issues like dizziness and isolation that are often associated with head-mounted VR systems. By leveraging steady-state visually evoked potentials-based BCI technology, they designed a user interface that allowed participants to control a virtual aircraft within the VR environment. The team evaluated the performance of this hybrid BCI-VR system and found it to be a convenient and suitable option for VR-based flight simulations, paving the way for more immersive and intuitive flight training applications [65].
The editorial “Editorial: Brain–computer Interfaces and Augmented/Virtual Reality” discusses the growing use of augmented and virtual reality (AR/VR) in conjunction with brain–computer interfaces. It highlights how these technologies can create more immersive and effective experiences, particularly for neurorehabilitation. The authors explore various applications of BCI-VR systems, including motor rehabilitation, communication, and cognitive training [54].

3.3. Cognitive Enhancement

The study “Playing a P300-based BCI VR game leads to changes in cognitive functions of healthy adults” examines how playing a virtual reality game controlled by a brain–computer interface based on the P300 signal can impact the cognitive functions of healthy adults. The study found that participants who engaged with the VR game exhibited improvements in attention and memory, suggesting that the combination of VR and BCI can lead to cognitive enhancements [63].
The article “A dynamically optimized time-window length for SSVEP based hybrid BCI-VR system” explores how dynamically adjusting the time-window length used for processing brain signals can improve the performance of a brain–computer interface system designed for virtual reality applications. The researchers investigated an SSVEP-based hybrid BCI-VR system and found that dynamically optimizing the time-window length, rather than using a fixed length, resulted in both higher accuracy and faster information transfer rates. This dynamic optimization allows the system to adapt to the user’s brain activity in real-time, leading to enhanced control and immersion in the VR environment. The findings from this study suggest that incorporating adaptive signal processing techniques can significantly enhance the effectiveness of BCI-VR systems for various applications, such as neurorehabilitation and user interaction in virtual worlds [8].
The article “A comparative study of stereo-dependent SSVEP targets and their impact on VR-BCI performance” explores the integration of steady-state visual evoked potential brain–computer interfaces with virtual reality technology. The researchers compare different SSVEP paradigms to investigate how planar and stereoscopic visual stimuli influence the characteristics of SSVEP signals. Notably, they introduce a novel method called 3D-Blink, which utilizes opacity inversion instead of the traditional luminance modulation. This approach has the potential to improve user comfort and leverage depth perception in VR-based BCI applications. By evaluating the performance and usability of these stereo-dependent SSVEP targets, the researchers aim to advance the development of more immersive and effective BCI-VR systems for various applications, such as neurorehabilitation and human–computer interaction in virtual environments [9].
There is a complex interaction between the use of VR and the quality of EEG data in VR-EEG systems, especially in brain–computer interface (BCI) applications. VR headsets can sometimes interfere with the precise electrode placement required for high-quality EEG readings, as they can disturb scalp contact or apply pressure that affects signal acquisition. Consumer-grade EEG devices, such as the Emotiv headset, which are often chosen for their convenience and portability in VR environments, may lack the electrode precision necessary for tasks requiring highly accurate brainwave data, making some BCI applications less reliable. This is a “chicken and egg” situation: VR increases the immersiveness and practicality of the BCI experience, but at the same time, it can compromise the quality of EEG data, favoring user convenience and flexibility over precision. In response, researchers are exploring specialized VR-EEG integration methods and adaptive signal processing techniques to strike a balance between maintaining EEG quality and creating an immersive, effective VR-BCI environment. This topic has not been explored much and is worth further investigation. Our own research in this area, in which task execution times were shorter with VR than in experiments without VR, shows that selecting the right configuration of devices, software, and paradigms matters both for the time needed to adjust to the VR-BCI system and for the time needed to execute tasks with it.
The authors propose using BCI-VR to assess and train cognitive functions such as memory and attention through immersive VR tasks like virtual navigation and object recognition. They suggest that ChatGPT can personalize feedback mechanisms, potentially by adjusting task difficulty based on real-time BCI data and providing tailored encouragement to enhance user engagement. The article’s findings suggest a novel approach to MCI classification by leveraging ChatGPT’s ability to analyze diverse data sources. The authors propose that ChatGPT could integrate BCI data with patient medical history and subjective reports to enhance diagnostic accuracy. Additionally, ChatGPT’s pattern recognition capabilities might uncover subtle brain activity patterns indicative of MCI.
Figure 4 illustrates the dynamic interplay between the user, BCI system, and virtual environment. Our review highlights the potential of this closed-loop system to revolutionize fields like neurorehabilitation, cognitive assessment, and human–computer interaction. To provide a structured overview of the current state of knowledge in each key area, we have compiled a comprehensive table (Table 2) outlining the advancements, challenges, and future directions for each element of the BCI-VR system. However, realizing this potential requires overcoming challenges related to signal quality, user training, and ethical considerations. Addressing these limitations will pave the way for wider adoption of BCI-VR technology, unlocking its transformative power across diverse domains.

4. Discussion

Current literature on VR-BCI systems reveals notable gaps, particularly regarding their generalizability across diverse populations. Studies often focus on a narrow demographic—usually younger, tech-savvy individuals—while overlooking variations in age, cognitive ability, cultural background, and neurological conditions that could influence BCI performance and usability. Additionally, there is limited exploration of adaptive techniques in VR-BCI systems that could enhance personalized user experiences, such as tailoring interfaces to individual brain patterns or adjusting VR environments in real-time based on user fatigue or cognitive load. Research on the long-term effects of VR-BCI use is also sparse, especially regarding how prolonged exposure might impact mental health, cognitive function, or adaptation to real-world interactions. Ethical frameworks and guidelines for data handling, privacy, and user consent are underdeveloped, leaving critical questions unanswered in this rapidly advancing field. Lastly, interoperability between different VR-BCI systems and standardization of protocols remain underexplored, making it challenging to integrate findings across studies and develop universally applicable insights.

4.1. User Experience

The integration of brain–computer interfaces and virtual reality has opened up new avenues for enhancing user experiences in a wide range of applications. BCI technology allows users to control digital environments and interfaces using their brain activity, while VR provides an immersive, multisensory experience that can amplify the benefits of BCI-based interactions.
One key aspect of the integration between brain–computer interfaces and virtual reality is the ability to create highly immersive and interactive virtual environments that can dynamically respond to the user’s cognitive and motor abilities. These virtual worlds can adapt in real-time to the user’s brain activity and physical movements, allowing for a more natural and intuitive interaction. This enhanced level of responsiveness and personalization can lead to increased engagement, motivation, and a deeper sense of presence and control within the virtual environment, ultimately enhancing the overall user experience [67].
For example, users can employ their brain activity to control the movement and interaction of virtual objects or to navigate through complex virtual scenes and scenarios.
This enhanced sense of presence and agency within the virtual world can foster a profound connection and immersion for the user, leading to increased engagement, motivation, and overall satisfaction. The ability to directly interact with and manipulate the virtual environment using one’s own brain activity can instill a heightened feeling of control and empowerment, further amplifying the user’s sense of embodiment and investment in the virtual experience. This symbiotic relationship between the user’s cognitive inputs and the responsive, adaptive virtual world can cultivate a truly immersive and captivating experience where the user’s mental and physical faculties become seamlessly integrated with the digital realm. The result is a heightened state of flow, where the user becomes deeply absorbed and intrinsically motivated to explore, interact, and achieve their goals within the virtual environment [49,68].

4.2. Currently Available Commercial Products and Computational Methods Used in BCI

While the field of BCI and VR is rapidly evolving, commercially available solutions that fully integrate both technologies are still limited. Most products focus on either BCI or VR separately. However, several commercially available solutions and companies are making strides in combining BCI and VR. Neurorehabilitation and therapy include, e.g., the following solutions:
  • MindMaze—offers VR-based neurorehabilitation games controlled by EEG signals, targeting cognitive and motor skills;
  • Neofect—develops VR games and rehabilitation programs for stroke patients, incorporating sensors for movement tracking and feedback;
  • SyncThink—provides VR-based assessments and training programs for visual and vestibular function, utilizing eye-tracking technology.
Gaming and entertainment include, e.g., the following solutions:
  • Emotiv—offers EEG headsets and software development kits for integrating brainwave data into games and applications;
  • NextMind—developed a non-invasive neural interface that allows users to control digital experiences with their thoughts, but the company has shut down;
  • Neurosky—manufactures EEG-enabled headsets for consumer and research applications, including gaming and entertainment.
Research and development include, e.g., the following solutions:
  • OpenBCI—provides open-source hardware and software platforms for BCI research and development, enabling researchers to create custom BCI-VR applications;
  • g.tec—offers a range of BCI systems and software for research and clinical applications, including VR integration options.
VR hardware with potential for BCI integration includes, e.g., the following solutions:
  • Varjo—develops high-end VR headsets with advanced eye-tracking capabilities, which could be leveraged for BCI applications;
  • HP Reverb G2 Omnicept Edition—features integrated eye-tracking, heart rate, and facial expression sensors, offering the potential for developing more immersive and responsive BCI-VR experiences.
The field of brain–computer interface and virtual reality integration has immense potential, but current commercial applications remain relatively limited, reflecting the early developmental stage of this technology. Leading companies like MindMaze and Emotiv have pioneered commercially available BCI headsets, primarily targeting gaming and entertainment applications. The Technology Readiness Level scale, a widely used metric to assess technological maturity, ranges from basic principles to fully operational systems. MindMaze’s technology, for instance, has reached TRL 7, with its use in clinical rehabilitation settings. Conversely, Emotiv’s headsets, which focus on consumer-grade neurofeedback and accessibility, fall under TRL 6.
Several other companies are also emerging in this space, each with its own distinct focus and target applications. These include Neofect, which develops rehabilitation devices that integrate with VR; SyncThink, which focuses on eye-tracking and VR for neurological assessments; NextMind, which explores BCI control for digital experiences; Neurosky, which offers affordable EEG headsets for meditation and attention training; OpenBCI, which provides open-source hardware and software for BCI research and development; g.tec, which specializes in high-performance BCI systems for research and clinical use; Varjo, which develops high-end VR headsets with integrated eye-tracking; and HP, which offers the Reverb G2 Omnicept Edition, a VR headset with built-in eye-tracking and facial expression recognition.
However, these solutions often face limitations in terms of signal quality, user training requirements, and the range of applications they support. Consequently, the majority of BCI-VR solutions remain confined to research labs and clinical settings, indicating the need for further technological advancements, particularly in areas like miniaturization, cost reduction, and user-friendliness, to drive broader market adoption.
EEG signal processing for BCIs demands rapid execution, typically under 40 ms, to ensure real-time feedback. Raw EEG signals undergo preprocessing, including filtering and artifact removal. Artifacts, which are extraneous signals from sources like muscle activity or electrical interference, can be addressed through various methods, including manual identification, linear trend removal, and source separation techniques.
Spatial filtering is crucial for enhancing the signal-to-noise ratio and isolating relevant brain activity. The Laplacian filter, a commonly used spatial filter, highlights localized activity by referencing neighboring electrodes. Another approach, Common Average Referencing, subtracts the average signal across all electrodes from each individual electrode’s signal, aiming to reduce common noise. The Common Spatial Patterns method is particularly effective for motor imagery BCIs, as it identifies spatial filters that maximize the variance between different mental states, such as imagining left-hand versus right-hand movement.
Following preprocessing and spatial filtering, EEG signals are fed into classifiers for decoding user intent or recognizing brain states. Various machine learning algorithms are employed for this purpose, including linear discriminant analysis, support vector machines, and artificial neural networks. The selection and optimization of these computational methods are crucial for developing accurate, reliable, and robust BCI systems.
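As a concrete illustration of the spatial filtering step described above, the sketch below applies common average referencing and a simple neighbour-averaging Laplacian to a synthetic multichannel recording. The four-channel layout and the neighbourhood map are hypothetical and chosen only to keep the example small; a real montage would define neighbours from actual electrode positions.
```python
# Illustrative spatial filtering for EEG preprocessing:
# common average referencing (CAR) and a simple surface Laplacian.
# The 4-channel layout and neighbour map below are assumptions for this sketch.
import numpy as np

def common_average_reference(eeg):
    """Subtract the mean across channels from every channel.

    eeg: array of shape (n_channels, n_samples)."""
    return eeg - eeg.mean(axis=0, keepdims=True)

def laplacian_filter(eeg, neighbours):
    """Subtract the average of each channel's neighbours from that channel.

    neighbours: dict mapping channel index -> list of neighbouring channel indices."""
    filtered = np.empty_like(eeg)
    for ch in range(eeg.shape[0]):
        filtered[ch] = eeg[ch] - eeg[neighbours[ch]].mean(axis=0)
    return filtered

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    eeg = rng.normal(0, 10e-6, (4, 1000))      # 4 channels, 1000 samples (assumed)
    eeg += rng.normal(0, 50e-6, (1, 1000))     # shared noise picked up by all channels
    car = common_average_reference(eeg)
    # Hypothetical neighbourhood map for the 4 channels.
    neighbours = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
    lap = laplacian_filter(eeg, neighbours)
    print("Raw channel std:      ", eeg.std(axis=1))
    print("CAR channel std:      ", car.std(axis=1))
    print("Laplacian channel std:", lap.std(axis=1))
```
Both filters suppress activity common to all (or neighbouring) electrodes, which is why they improve the signal-to-noise ratio before features are passed to classifiers such as LDA, SVMs, or neural networks.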

4.3. Limitations and Directions of Further Studies

While the integration of brain–computer interfaces and virtual reality holds great promise, there are still several challenges and limitations that need to be addressed. One key challenge is the accuracy and reliability of BCI systems in decoding and interpreting brain signals, which can be influenced by factors such as individual variations, mental states, and environmental conditions. Continued research and advancements in signal processing, machine learning, and neural decoding algorithms are necessary to improve the robustness and effectiveness of BCI-VR systems. Another limitation is the potential for motion sickness and disorientation that can arise from prolonged immersion in virtual environments. To address this, researchers will need to explore innovative approaches that integrate additional sensory modalities, such as vestibular and proprioceptive cues, into the VR experience. Strategies like incorporating multimodal feedback, including haptic, auditory, and visual cues, can help create a more coherent and intuitive sensory experience, mitigating the risk of motion sickness and disorientation [69].
The high cost of high-quality BCI and VR equipment can be a significant barrier to accessibility, limiting their adoption by consumers and researchers. This financial constraint can prevent individuals, healthcare providers, and research institutions from accessing these cutting-edge technologies, thereby restricting the widespread implementation and potential benefits of BCI-VR systems. The lack of standardized protocols and interfaces for BCI-VR integration hinders the development of interoperable solutions, which are crucial for enabling seamless integration and widespread adoption of these technologies. The absence of such standards and compatibility can hamper the ability to create and deploy scalable, cross-platform BCI-VR applications, further limiting their accessibility and impact. The integration of BCI technology into virtual environments and personal devices also raises important ethical concerns that need to be carefully addressed, such as issues of privacy, data security, and potential risks associated with BCI technology, like unintended cognitive or neural impacts. Ensuring the responsible and ethical development and deployment of these systems is essential to protect the well-being of BCI users and maintain public trust in these transformative technologies.
Leveraging emerging technologies, such as augmented reality and wearable sensors, can further enhance the user experience and expand the applications of BCI-VR systems [70,71,72]. Future research should also focus on expanding the versatility and accessibility of BCI-VR systems. Adopting standardized protocols, improving user-friendly interfaces, and exploring cost-effective hardware solutions can broaden the reach and applicability of these integrated technologies [73,74,75,76,77,78,79,80].
VR-BCI systems face technical limitations that challenge their accuracy and reliability. EEG signals, commonly used in BCI, are highly susceptible to noise from muscle movements, environmental interference, and other external factors that can compromise the accuracy of brain signal interpretation. The VR environments themselves often lack the full sensory and perceptual fidelity of real-world experiences, making it difficult to simulate realistic interactions that feel natural to users. Latency is another issue, as both EEG data processing and VR rendering can introduce delays that disrupt the synchronous feedback required for smooth user interaction. The limited spatial resolution of EEG sensors also limits the granularity of brain activity that can be captured, making it difficult to distinguish between complex cognitive states or intentions. Prolonged use of VR and BCI can lead to user fatigue or discomfort, which further compromises the usability and overall performance of the system for long-term use.
The successful integration of brain–computer interfaces and virtual reality faces inherent technical hurdles. Electroencephalography, a non-invasive and relatively affordable approach, is susceptible to noise and artifacts, such as muscle activity and eye movements, which can obscure brain signals and impede accurate control. Additionally, the movement of electrodes in some commercially available BCI devices further exacerbates this issue, introducing another layer of variability to the already noisy EEG signal. Furthermore, while VR technology is rapidly advancing, it still falls short of perfectly replicating the complexity and fidelity of real-world environments. This discrepancy can limit the ecological validity of BCI-VR applications, particularly those intended for training or rehabilitation purposes, where the transfer of skills to real-world scenarios is crucial. To address these challenges, researchers are exploring innovative solutions to enhance the integration of BCIs and virtual reality. One promising approach involves the incorporation of multimodal sensing, where brain activity is combined with other physiological data, such as eye tracking and motion capture, to provide a more comprehensive and robust assessment of user interactions within the virtual environment. This synergistic integration of multiple data streams can help mitigate the impact of noise and artifacts, leading to more accurate decoding of user intent and improved overall system performance [57].
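The following toy example illustrates the feature-level fusion idea described above; the synthetic EEG band-power and gaze features, their dimensions, and the logistic-regression classifier are illustrative assumptions rather than a pipeline reported in [57].

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials = 200

# Hypothetical per-trial features: EEG band power (4 channels x 2 bands)
# and eye-tracking descriptors (gaze dwell time on target, fixation count).
eeg_features = rng.normal(size=(n_trials, 8))
gaze_features = rng.normal(size=(n_trials, 2))
labels = rng.integers(0, 2, size=n_trials)  # user intent: select / do not select

# Inject a weak class-dependent effect so the toy data carries some signal.
eeg_features[labels == 1, 0] += 1.0
gaze_features[labels == 1, 0] += 1.5

# Feature-level fusion: concatenate modalities, then classify jointly.
fused = np.hstack([eeg_features, gaze_features])
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

print("EEG only :", cross_val_score(clf, eeg_features, labels, cv=5).mean())
print("Fused    :", cross_val_score(clf, fused, labels, cv=5).mean())
```

In this simplified setting, the fused feature set typically yields a higher cross-validated score than EEG alone, which is the intuition behind combining brain activity with complementary biosensors to compensate for noisy individual channels.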
A crucial aspect demanding careful consideration in BCI-VR integration is the safeguarding of neural data and mitigation of potential cognitive risks. The sensitive nature of brainwave data necessitates robust cybersecurity measures to prevent unauthorized access, breaches, or malicious use. Furthermore, the long-term effects of prolonged BCI use on cognitive function remain an active area of investigation. Further research is essential to establish comprehensive neurosecurity standards, develop privacy-preserving data encryption methods, and investigate the potential for unintended cognitive or neural impacts arising from extended or improper BCI-VR interaction.
Specifically, researchers should explore effective techniques for anonymizing and encrypting neural data to protect user privacy. Studies should also examine the cumulative impact of repeated BCI-VR sessions on cognitive performance, attention, and neural plasticity over extended periods. Establishing clear guidelines and protocols for safe, ethically sound BCI-VR usage, and addressing these concerns proactively, will be paramount for the responsible advancement and widespread adoption of these transformative technologies [81,82,83,84,85].
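As a minimal sketch of such privacy-preserving handling, the snippet below strips a recording of direct identifiers via a salted one-way hash and encrypts it at rest. It assumes the third-party Python cryptography package and a deliberately simplified scheme; a production neurosecurity pipeline would additionally need key management, access auditing, and consent tracking.

```python
import hashlib
import json
from cryptography.fernet import Fernet  # third-party: pip install cryptography

def pseudonymize(user_id: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

def encrypt_session(eeg_samples, user_id, salt, key):
    """Package an EEG session under a pseudonymous ID and encrypt it at rest."""
    record = {
        "subject": pseudonymize(user_id, salt),
        "samples": list(eeg_samples),  # raw values only; no names or timestamps kept
    }
    return Fernet(key).encrypt(json.dumps(record).encode())

key = Fernet.generate_key()  # must be stored separately from the encrypted data
token = encrypt_session([1.2, -0.4, 0.7], "alice@example.com", salt="clinic-7", key=key)
restored = json.loads(Fernet(key).decrypt(token))
```

The salt, clinic identifier, and record layout are hypothetical; the point is that pseudonymization and symmetric encryption are cheap to apply at acquisition time, before neural data ever leaves the headset host.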
VR-BCI systems raise several ethical concerns, particularly around data privacy, as they collect sensitive neurological data that can reveal personal thoughts, emotions, or cognitive states, posing significant risks if mishandled or accessed without permission. Ensuring informed consent is challenging, especially given the complexity of VR-BCI technology, as users may not fully understand how their data will be used or what the potential long-term implications may be. The potential for the technology to influence cognitive states or perceptions raises concerns about unintended psychological effects, including the possibility of changing the user’s thought patterns or emotional states over time. There is also a risk of addiction or overreliance, as long-term use of VR-BCI systems can blur the lines between virtual and real experiences for users, potentially affecting their mental health and decision-making. Equitable access and potential biases in system design are additional concerns, as VR-BCI systems may reinforce existing inequalities if not made available to diverse populations. As technology advances, ethical guidelines and regulatory frameworks may struggle to keep up, creating risks of misuse or exploitation of VR-BCI capabilities in ways that are not yet fully understood.
Key considerations for successful BCI-VR integration:
  • Signal Quality: A primary challenge in BCI research is obtaining high-quality, reliable brain signals. Factors such as electrode placement, signal interference, and individual neurophysiological variations can significantly impact signal fidelity, which is crucial for accurate decoding and control. Advancements in sensor technologies, signal processing algorithms, and personalized calibration methods are necessary to improve signal quality and robustness (a minimal illustrative pipeline is sketched after this list) [86].
  • User Training: Effective control of BCI systems often requires extensive training and learning for users to develop the necessary cognitive skills and mental strategies. The ability to modulate specific brain activity patterns, such as steady-state visual evoked potentials or motor imagery, can be highly individual and needs to be cultivated through dedicated practice and feedback. Developing user-friendly training protocols and adaptive learning algorithms is crucial for enhancing BCI control and accessibility [87].
  • Ethical Considerations: The integration of BCI technology into virtual environments and personal devices raises important ethical concerns that require careful consideration. Issues of privacy, data security, and informed consent must be addressed to ensure the responsible and ethical development and deployment of these systems. Additionally, potential risks, such as unintended cognitive or neural impacts, should be thoroughly investigated and mitigated to protect the well-being of BCI users [88,89].
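The pipeline referenced in the first bullet is sketched below: a deliberately simple motor-imagery-style classifier built on synthetic data, using log band power in the mu band and linear discriminant analysis. The sampling rate, band limits, and data are illustrative assumptions rather than a reproduction of any cited system, but the structure (preprocessing, feature extraction, cross-validated classification) mirrors the components whose quality and calibration the bullets above identify as critical.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 250.0          # assumed sampling rate (illustrative)
MU_BAND = (8, 12)   # mu rhythm, typically attenuated during motor imagery

def band_power(epoch, fs=FS, band=MU_BAND):
    """Log of mean power spectral density per channel within a frequency band."""
    freqs, psd = welch(epoch, fs=fs, nperseg=int(fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.log(psd[:, mask].mean(axis=1))  # one feature per channel

rng = np.random.default_rng(1)
n_trials, n_channels, n_samples = 120, 4, int(2 * FS)  # 2-second epochs
epochs = rng.normal(size=(n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, size=n_trials)              # 0 = rest, 1 = imagined movement

# Inject a weak mu-band oscillation into channel 0 of the rest trials,
# mimicking the stronger idle rhythm expected when no movement is imagined.
t = np.arange(n_samples) / FS
epochs[labels == 0, 0, :] += 1.0 * np.sin(2 * np.pi * 10 * t)

features = np.array([band_power(ep) for ep in epochs])
score = cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=5).mean()
print(f"Cross-validated accuracy: {score:.2f}")
```

Even this toy version makes the training point tangible: accuracy depends on how reliably the user can modulate the mu rhythm, which is exactly the skill that feedback-driven practice is meant to cultivate.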
The inclusion of visual aids such as comparison tables, system architecture diagrams, and interaction diagrams can greatly simplify complex information in a VR-BCI system. Comparison tables can help to quickly distinguish between different system components, technologies, or approaches, highlighting their relative strengths and limitations side by side. This will enable users to understand complex trade-offs and decision points at a glance, reducing the cognitive load associated with the text. System architecture diagrams offer an overview of the VR-BCI structure, illustrating the relationships and data flow between hardware, software, and user input, helping to clarify how each part contributes to the functionality of the system. These diagrams make it easier for non-technical stakeholders to understand the structure and flow of the system, improving interdisciplinary communication. Interaction diagrams, in turn, provide a step-by-step visualization of user interactions with the system, which can simplify understanding of the user journey and potential bottlenecks or error points. By mapping user actions and system reactions, these diagrams facilitate a better understanding of system processes and user experience design issues. Together, these visual aids allow team members to engage more intuitively with intricate details, fostering faster understanding and more informed decision-making, and thus bridge gaps in understanding between different teams.
Despite these challenges, the future of commercially available BCI-VR solutions appears promising. As technology advances and costs decrease, we can expect to see more widespread adoption of these integrated systems in various fields, including healthcare, gaming, and education.

5. Conclusions

Integrating BCI with VR improves the performance of brain–computer interfaces in IoT control by providing immersive, adaptive training environments that increase signal accuracy and user control. VR offers real-time feedback and simulations that help users refine their interactions with smart home systems, making the interface more intuitive and responsive. This combination ultimately leads to greater independence, efficiency, and ease of use, especially for users with mobility issues, in managing IoT-connected devices.
The integration of brain–computer interfaces and virtual reality has shown immense potential for transforming a diverse array of applications, from neurorehabilitation and human–computer interaction to cognitive assessment and personalized therapeutic interventions for various neurological and cognitive disorders. The reviewed literature underscores the significant advancements and the multifaceted challenges in this rapidly evolving field.
Particularly noteworthy is the emphasis on the importance of adaptive signal processing techniques, which enable BCI-VR systems to dynamically optimize their performance by adjusting parameters like time-window length to better accommodate the user’s brain activity in real time. This adaptability is crucial for enhancing the overall control and immersion experienced by individuals within virtual environments.
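A minimal sketch of this idea follows, assuming an SSVEP paradigm with a single 12 Hz stimulus, synthetic EEG, and canonical correlation analysis from scikit-learn; the threshold, step size, and window bounds are illustrative assumptions rather than values taken from the reviewed studies. The analysis window is grown only until the detector is confident, so easy trials yield fast commands and hard trials receive more data.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250.0        # assumed sampling rate (illustrative)
TARGET_HZ = 12.0  # assumed SSVEP stimulus frequency (illustrative)

def cca_correlation(eeg_window, freq, fs=FS, harmonics=2):
    """Canonical correlation between an EEG window and sine/cosine references."""
    n = eeg_window.shape[1]
    t = np.arange(n) / fs
    refs = np.vstack([f(2 * np.pi * (h + 1) * freq * t)
                      for h in range(harmonics) for f in (np.sin, np.cos)]).T
    x_c, y_c = CCA(n_components=1).fit_transform(eeg_window.T, refs)
    return abs(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])

def detect_with_adaptive_window(eeg, freq=TARGET_HZ, fs=FS,
                                start_s=0.5, step_s=0.25, max_s=3.0, threshold=0.45):
    """Grow the analysis window until the CCA score crosses the confidence threshold."""
    length = start_s
    while length <= max_s:
        score = cca_correlation(eeg[:, : int(length * fs)], freq, fs)
        if score >= threshold:
            return length, score   # early decision: shorter window, faster command
        length += step_s
    return max_s, score            # fall back to the longest allowed window

# Synthetic 4-channel EEG containing a weak 12 Hz SSVEP component plus noise.
t = np.arange(0, 3.0, 1 / FS)
eeg = 0.5 * np.sin(2 * np.pi * TARGET_HZ * t) + np.random.randn(4, t.size)
print(detect_with_adaptive_window(eeg))
```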
Furthermore, the reviewed studies highlight the value of multimodal integration, where BCI technology is combined with complementary biosensors like eye-tracking and motion capture. This synergistic approach allows for more comprehensive and accurate assessments of cognitive and motor functions, ultimately leading to more personalized and effective therapeutic interventions.
Lastly, the incorporation of advanced artificial intelligence techniques, such as machine learning and natural language processing, holds immense promise for revolutionizing the way clinicians approach the diagnosis and management of conditions like mild cognitive impairment. The integration of AI-powered systems like ChatGPT can enable deeper analysis of patient data, more precise identification of cognitive deficits, and the development of tailored treatment strategies.
Overall, the seamless integration of BCIs and VR, coupled with the advancements in adaptive signal processing, multimodal integration, and AI-driven technologies, presents a transformative opportunity to redefine the landscape of neurorehabilitation, human–computer interaction, and personalized cognitive interventions. As this field continues to evolve, the potential for enhancing quality of life and improving clinical outcomes for individuals with neurological and cognitive disorders is truly remarkable.

Author Contributions

Conceptualization, A.P., I.R. and D.M.; methodology, A.P., I.R. and D.M.; software, A.P., I.R. and D.M.; validation, A.P., I.R. and D.M.; formal analysis, A.P., I.R. and D.M.; investigation, A.P., I.R. and D.M.; resources, A.P., I.R. and D.M.; data curation, A.P., I.R. and D.M.; writing—original draft preparation, A.P., I.R. and D.M.; writing—review and editing, A.P., I.R. and D.M.; supervision, I.R.; project administration, I.R.; funding acquisition, I.R. All authors have read and agreed to the published version of the manuscript.

Funding

The work presented in this paper has been financed under a grant to maintain the research potential of Kazimierz Wielki University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Georgiev, D.; Georgieva, I.; Gong, Z.; Nanjappan, V.; Georgiev, G.V. Virtual Reality for Neurorehabilitation and Cognitive Enhancement. Brain Sci. 2021, 11, 221. [Google Scholar] [CrossRef] [PubMed]
  2. Ku, J.; Kang, Y.J. Novel Virtual Reality Application in Field of Neurorehabilitation. Brain Neurorehabil. 2018, 11, e5. [Google Scholar] [CrossRef]
  3. Leeb, R.; Pérez-Marcos, D. Brain-computer interfaces and virtual reality for neurorehabilitation. Handb. Clin. Neurol. 2020, 168, 183–197. [Google Scholar] [CrossRef]
  4. Piszcz, A. BCI in VR: An immersive way to make the brain-computer interface more efficient. Stud. Mater. Inform. Stosow. 2021, 1, 5–10. [Google Scholar]
  5. Gu, W.; Bohan, Y.; Chang, R. Machine Learning-based EEG Applications and Markets. arXiv 2022, arXiv:2208.05144. [Google Scholar] [CrossRef]
  6. Zhang, Y. Mechanism Research and Application of Brain-computer Interface. In Proceedings of the ISAIMS 2020: 2020 International Symposium on Artificial Intelligence in Medical Sciences, Beijing, China, 11–13 September 2020; pp. 187–192. [Google Scholar] [CrossRef]
  7. Zhang, Y.; Xie, S.; Wang, H.; Zhang, Z. Data Analytics in Steady-State Visual Evoked Potential-Based Brain—Computer Interface: A Review. IEEE Sens. J. 2021, 21, 1124–1138. [Google Scholar] [CrossRef]
  8. Niu, L.; Bin, J.; Wang, J.K.S.; Zhan, G.; Jia, J.; Zhang, L.; Gan, Z.; Kang, X. Effect of 3D paradigm synchronous motion for SSVEP-based hybrid BCI-VR system. Med. Biol. Eng. Comput. 2023, 61, 2481–2495. [Google Scholar] [CrossRef]
  9. Liu, H.; Wang, Z.; Li, R.; Zhao, X.; Xu, T.; Zhou, T.; Hu, H. A comparative study of stereo-dependent SSVEP targets and their impact on VR-BCI performance. Front. Neurosci. 2024, 18, 1367932. [Google Scholar] [CrossRef]
  10. Niu, L.; Bin, J.; Wang, J.K.S.; Zhan, G.; Zhang, L.; Gan, Z.; Kang, X. A dynamically optimized time-window length for SSVEP based hybrid BCI-VR system. Biomed. Signal Process. Control 2023, 84, 104826. [Google Scholar] [CrossRef]
  11. Guger, C.; Millán, J.D.R.; Mattia, D.; Ushiba, J.; Soekadar, S.R.; Prabhakaran, V.; Mrachacz-Kersting, N.; Kamada, K.; Allison, B.Z. Brain-computer interfaces for stroke rehabilitation: Summary of the 2016 BCI Meeting in Asilomar. Brain-Comput. Interfaces 2018, 5, 41–57. [Google Scholar] [CrossRef]
  12. Elashmawi, W.H.; Ayman, A.; Antoun, M.; Mohamed, H.; Mohamed, S.E.; Amr, H.; Talaat, Y.; Ali, A. A Comprehensive Review on Brain—Computer Interface (BCI)-Based Machine and Deep Learning Algorithms for Stroke Rehabilitation. Appl. Sci. 2024, 14, 6347. [Google Scholar] [CrossRef]
  13. Huo, C.C.; Zheng, Y.; Lu, W.W.; Zhang, T.Y.; Wang, D.F.; Xu, D.S.; Li, Z.Y. Prospects for intelligent rehabilitation techniques to treat motor dysfunction. Neural. Regen. Res. 2021, 16, 264–269. [Google Scholar] [CrossRef] [PubMed]
  14. Cicerone, D.; Ley, K.; Smith, C.; Ellmo, W.; Mangel, H.; Nelson, P.T.; Chase, R.F.; Kalmar, K. Neuropsychological rehabilitation of mild traumatic brain injury. Brain Inj. 1996, 10, 277–286. [Google Scholar] [CrossRef] [PubMed]
  15. Lim, J.; Wang, P.T.; Sohn, W.; Serrano-Amenos, C.; Ibrahim, M.; Lin, D.; Thaploo, S.; Shaw, S.J.; Armacost, M.; Gong, H.; et al. Early feasibility of an embedded bi-directional brain-computer interface for ambulation. arXiv 2024, arXiv:2402.11776v1. [Google Scholar] [CrossRef]
  16. Mishra, S.; Priyanka, B. A Survey on Brain-Computer Interaction. arXiv 2022, arXiv:2201.00997v3. [Google Scholar] [CrossRef]
  17. Pawuś, D.; Paszkiel, S. BCI Wheelchair Control Using Expert System Classifying EEG Signals Based on Power Spectrum Estimation and Nervous Tics Detection. Appl. Sci. 2022, 12, 10385. [Google Scholar] [CrossRef]
  18. Jiang, L.; Luo, C.; Liao, Z.; Li, X.; Chen, Q.; Yuan, J.; Lu, K.; Zhang, D. SmartRolling: A human—machine interface for wheelchair control using EEG and smart sensing techniques. Inf. Process. Manag. 2023, 60, 103262. [Google Scholar] [CrossRef]
  19. Palumbo, A.; Gramigna, V.; Calabrese, B.; Ielpo, N. Motor-Imagery EEG-Based BCIs in Wheelchair Movement and Control: A Systematic Literature Review. Sensors 2021, 21, 6285. [Google Scholar] [CrossRef]
  20. Arachchige, M.D.J.; Nafea, M.; Nugroho, H. A hybrid EEG and head motion system for smart home control for disabled people. J. Ambient Intell. Humaniz. Comput. 2022, 14, 4023–4038. [Google Scholar] [CrossRef]
  21. Alalayah, K.M.; Senan, E.M.; Atlam, H.F.; Ahmed, I.A.; Shatnawi, H.S.A. Automatic and Early Detection of Parkinson’s Disease by Analyzing Acoustic Signals Using Classification Algorithms Based on Recursive Feature Elimination Method. Diagnostics 2023, 13, 1924. [Google Scholar] [CrossRef]
  22. Montazeri, S.M.; Nevalainen, P.; Stevenson, N.J.; Vanhatalo, S. Development of Sleep State Trend (SST), a bedside measure of neonatal sleep state fluctuations based on single EEG channels. Clin. Neurophysiol. 2022, 143, 75–83. [Google Scholar] [CrossRef]
  23. Bera, T.K. A Review on The Medical Applications of Electroencephalography (EEG). In Proceedings of the 2021 Seventh International Conference on Bio Signals, Images, and Instrumentation (ICBSII), Chennai, India, 25–27 March 2021; pp. 1–6. [Google Scholar] [CrossRef]
  24. Li, M.; Kuang, L.; Xu, S.; Sha, Z. Brain Tumor Detection Based on Multimodal Information Fusion and Convolutional Neural Network. IEEE Access 2019, 7, 180134–180146. [Google Scholar] [CrossRef]
  25. Craik, A.; González-España, J.J.; Alamir, A.; Edquilang, D.; Wong, S.; Rodríguez, L.S.; Feng, J.; Francisco, G.E.; Contreras-Vidal, J.L. Design and Validation of a Low-Cost Mobile EEG-Based Brain—Computer Interface. Sensors 2023, 23, 5930. [Google Scholar] [CrossRef] [PubMed]
  26. Valle, G.; Secerovic, N.K.; Eggemann, D.; Gorskij, O.; Pavlova, N.; Petrini, F.M.; Cvancara, P.; Stieglitz, T.; Musienko, A.; Bumbasirevic, M.; et al. Biomimetic computer-to-brain communication enhancing naturalistic touch sensations via peripheral nerve stimulation. Nat. Commun. 2024, 15, 1151. [Google Scholar] [CrossRef] [PubMed]
  27. Oh, S.H.; Park, J.W.; Cho, S.J. Effectiveness of the VR Cognitive Training for Symptom Relief in Patients with ADHD. J. Web Eng. 2022, 21, 767–788. [Google Scholar] [CrossRef]
  28. Delfan, N.; Shahsavari, M.R.; Hussain, S.; Damaševičius, R.; Acharya, U.R. A Hybrid Deep Spatio-Temporal Attention-Based Model for Parkinson’s Disease Diagnosis Using Resting State EEG Signals. arXiv 2023, arXiv:2308.07436. [Google Scholar] [CrossRef]
  29. Li, J.; De Ridder, D.; Adhia, D.; Hall, M.; Deng, J. Chronic pain detection from resting-state raw EEG signals using improved feature selection. arXiv 2023, arXiv:2306.15194. [Google Scholar] [CrossRef]
  30. Diya, S.Z.; Proma, R.A.; Rahman, I.I.; Islam, A.B.; Islam, M.N. Applying Brain-Computer Interface Technology for Evaluation of User Experience in Playing Games. In Proceedings of the 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), Cox’s Bazar, Bangladesh, 7–9 February 2019; pp. 1–6. [Google Scholar] [CrossRef]
  31. Hughes, A.; Jorda, S. Applications of Biological and Physiological Signals in Commercial Video Gaming and Game Research: A Review. Front. Comput. Sci. 2021, 3, 557608. [Google Scholar] [CrossRef]
  32. Lin, W.; Li, C.; Zhang, Y. A System of Emotion Recognition and Judgment and Its Application in Adaptive Interactive Game. Sensors 2023, 23, 3250. [Google Scholar] [CrossRef]
  33. Thomas, K.P.; Vinod, A.P. A study on the impact of Neurofeedback in EEG based attention-driven game. In Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; pp. 320–325. [Google Scholar] [CrossRef]
  34. Rincon, R.A.D. Generating Music and Generative Art from Brain activity. arXiv 2021, arXiv:2108.04316v2. [Google Scholar] [CrossRef]
  35. Maslova, O.; Komarova, Y.; Shusharina, N.; Kolsanov, A.; Zakharov, A.; Garina, E.; Pyatin, V. Non-invasive EEG-based BCI spellers from the beginning to today: A mini-review. Front. Hum. Neurosci. 2023, 17, 1216648. [Google Scholar] [CrossRef] [PubMed]
  36. Davis, K.R. Brain-Computer Interfaces: The Technology of Our Future. UC Merced Undergrad. Res. J. 2022, 14, 1–28. [Google Scholar] [CrossRef]
  37. Tang, L. Design of Intelligent Car Control Based on EEG Signal. J. Phys. Conf. Ser. 2022, 2365, 012046. [Google Scholar] [CrossRef]
  38. Kim, S.; Lee, S.; Kang, H.; Kim, S.; Ahn, M. P300 Brain–Computer Interface-Based Drone Control in Virtual and Augmented Reality. Sensors 2021, 21, 5765. [Google Scholar] [CrossRef] [PubMed]
  39. Abdulwahhab, A.H.; Myderrizi, A.; Mahmood, M.K. Drone Movement Control by Electroencephalography Signals Based on BCI System. Adv. Electr. Electron. Eng. 2022, 20, 216–224. [Google Scholar] [CrossRef]
  40. Chen, D.; Liu, K.; Guo, J.; Bi, L.; Xiang, J. Editorial: Brain-computer interface and its applications. Front Neurorobot. 2023, 17, 1140508. [Google Scholar] [CrossRef]
  41. van Weelden, E.; Alimardani, M.; Wiltshire, T.J.; Louwerse, M.M. Advancing the Adoption of Virtual Reality and Neurotechnology to Improve Flight Training. In Proceedings of the 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS), Magdeburg, Germany, 8–10 September 2021; pp. 1–4. [Google Scholar] [CrossRef]
  42. Yu, P.; Pan, J.; Wang, Z.; Shen, Y.; Li, J.; Hao, A.; Wang, H. Quantitative influence and performance analysis of virtual reality laparoscopic surgical training system. BMC Med. Educ. 2022, 22, 92. [Google Scholar] [CrossRef]
  43. Khondakar, M.F.K.; Sarowar, M.H.; Chowdhury, M.H.; Majumder, S.; Hossain, M.A.; Dewan, M.A.A.; Hossain, Q.D. A systematic review on EEG-based neuromarketing: Recent trends and analyzing techniques. Brain Inform. 2024, 11, 17. [Google Scholar] [CrossRef]
  44. Budaházi, Á. Limitations of Brain-based Lie Detection. Belügyi Szle. 2022, 70, 69–87. [Google Scholar] [CrossRef]
  45. Chaurasia, A.K.; Fallahi, M.; Strufe, T.; Terhörst, P.; Cabarcos, P.A. NeuroBench: An Open-Source Benchmark Framework for the Standardization of Methodology in Brainwave-based Authentication Research. arXiv 2024, arXiv:2402.08656v2. [Google Scholar] [CrossRef]
  46. Alzahab, N.A.; Iorio, A.; Baldi, M.; Scalise, L. Effect of Auditory Stimuli on Electroencephalography-based Authentication. In Proceedings of the 2022 IEEE International Conference on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), Rome, Italy, 26–28 October 2022; pp. 388–392. [Google Scholar] [CrossRef]
  47. Kopito, R.; Haruvi, A.; Brande-Eilat, N.; Kalev, S.; Kay, E.; Furman, D. Brain-based Authentication: Towards A Scalable, Commercial Grade Solution Using Noninvasive Brain Signals. bioRxiv 2021. [Google Scholar] [CrossRef]
  48. Schöne, B.; Kisker, J.; Lange, L.; Gruber, T.; Sylvester, S.; Osinsky, R. The reality of virtual reality. Front. Psychol. 2023, 14, 1093014. [Google Scholar] [CrossRef] [PubMed]
  49. Grassini, S.; Laumann, K.; Thorp, S.; de Martin Topranin, V. Using electrophysiological measures to evaluate the sense of presence in immersive virtual environments: An event-related potential study. Brain Behav. 2021, 11, 8. [Google Scholar] [CrossRef] [PubMed]
  50. Lee, D.; Ng, P.; Wut, T.M. Virtual Reality in Festivals: A Systematic Literature Review and Implications for Consumer Research. Emerg. Sci. J. 2022, 6, 1153–1166. [Google Scholar] [CrossRef]
  51. Bernal, L.; Pérez, S.M.Q.; Beltrán, E.T.M.; Pérez, G.M.; Celdrán, A.H. When Brain-Computer Interfaces Meet the Metaverse: Landscape, Demonstrator, Trends, Challenges, and Concerns. arXiv 2022, arXiv:2212.03169. [Google Scholar] [CrossRef]
  52. Juliano, J.M.; Spicer, R.P.; Vourvopoulos, A.; Lefebvre, S.; Jann, K.; Ard, T.; Santarnecchi, E.; Krum, D.M.; Liew, S.L. Embodiment Is Related to Better Performance on a Brain-Computer Interface in Immersive Virtual Reality: A Pilot Study. Sensors 2020, 20, 1204. [Google Scholar] [CrossRef]
  53. Saha, S.; Mamun, K.A.; Ahmed, K.; Mostafa, R.; Naik, G.R.; Darvishi, S.; Khandoker, A.H.; Baumert, M. Progress in brain computer interface: Challenges and opportunities. Front. Syst. Neurosci. 2021, 15, 578875. [Google Scholar] [CrossRef]
  54. Putze, F.; Vourvopoulos, A.; Lécuyer, A.; Krusienski, D.; Bermúdez i Badia, S.; Mullen, T.; Herff, C. Editorial: Brain-Computer Interfaces and Augmented/Virtual Reality. Front. Hum. Neurosci. 2020, 14, 144. [Google Scholar] [CrossRef]
  55. Pelc, M.; Mikołajewski, D.; Gorzelańczyk, E.J.; Wieczorek, A.; Racheniuk, H.; Sudoł, A.; Latifzadeh, K.; Leiva, L.A.; Kawala-Sterniuk, A. Pilot study on using Hybrid—Cascade filtering on brain signals for the control purposes. In Proceedings of the 2023 27th International Conference on Methods and Models in Automation and Robotics (MMAR), Międzyzdroje, Poland, 22–25 August 2023; pp. 251–255. [Google Scholar] [CrossRef]
  56. Roth, B.J. Can MRI Be Used as a Sensor to Record Neural Activity? Sensors 2023, 23, 1337. [Google Scholar] [CrossRef]
  57. Li, H. Multi-Technique Integration in Brain-Computer Interface-VR: Challenges and Opportunities. Highlights Sci. Eng. Technol. 2024, 85, 176–183. [Google Scholar] [CrossRef]
  58. Yang, L.; Van Hulle, M.M. Real-Time Navigation in Google Street View® Using a Motor Imagery-Based BCI. Sensors 2023, 23, 1704. [Google Scholar] [CrossRef]
  59. Nam, H.; Kim, J.-M.; Choi, W.; Bak, S.; Kam, T.-E. The effects of layer-wise relevance propagation-based feature selection for EEG classification: A comparative study on multiple datasets. Front. Hum. Neurosci. 2023, 17, 1205881. [Google Scholar] [CrossRef]
  60. Lin, C.X.; Lee, C.; Lally, D.; Coughlin, J.F. Impact of Virtual Reality (VR) Experience on Older Adults’ Well-Being. In Human Aspects of IT for the Aged Population. Applications in Health, Assistance, and Entertainment, Proceedings of the 4th International Conference, ITAP 2018, Held as Part of HCI International 2018, Las Vegas, NV, USA, 15–20 July 2018; Lecture Notes in Computer Science; Zhou, J., Salvendy, G., Eds.; Springer: Cham, Switzerland, 2018; Volume 10927. [Google Scholar] [CrossRef]
  61. Gao, N.; Chen, P.; Liang, L. BCI–VR-Based Hand Soft Rehabilitation System with Its Applications in Hand Rehabilitation After Stroke. Int. J. Precis. Eng. Manuf. 2023, 24, 1403–1424. [Google Scholar] [CrossRef]
  62. Sánchez-Cuesta, F.J.; Arroyo-Ferrer, A.; González-Zamorano, Y.; Vourvopoulos, A.; Badia, S.B.I.; Figuereido, P.; Serrano, J.I.; Romero, J.P. Clinical Effects of Immersive Multimodal BCI-VR Training after Bilateral Neuromodulation with rTMS on Upper Limb Motor Recovery after Stroke. A Study Protocol for a Randomized Controlled Trial. Medicina 2021, 57, 736. [Google Scholar] [CrossRef] [PubMed]
  63. Bulat, M.; Karpman, A.; Samokhina, A.; Panov, A. Playing a P300-based BCI VR game leads to changes in cognitive functions of healthy adults. bioRxiv 2020. [Google Scholar] [CrossRef]
  64. Deng, T.; Huo, Z.; Zhang, L.; Dong, Z.; Niu, L.; Kang, X.; Huang, X. A VR-based BCI interactive system for UAV swarm control. Biomed. Signal Process. Control 2023, 85, 104944. [Google Scholar] [CrossRef]
  65. Zhengdong, Z.; Zhang, L.; Wei, S.; Zhang, X.; Mao, L. Development and evaluation of BCI for operating VR flight simulator based on desktop VR equipment. Adv. Eng. Inform. 2022, 51, 101499. [Google Scholar] [CrossRef]
  66. Yao, Y.; Hasan, W.Z.W.; Jiao, W.; Dong, X.; Ramli, H.R.; Norsahperi, N.M.H.; Wen, D. ChatGPT and BCI-VR: A new integrated diagnostic and therapeutic perspective for the accurate diagnosis and personalized treatment of mild cognitive impairment. Front. Hum. Neurosci. 2024, 18, 1426055. [Google Scholar] [CrossRef]
  67. Jalil, N. Introduction to Intelligent User Interfaces (IUIs). In Software Usability; Castro, L.M., Cabrero, D., Heimgärtner, R., Eds.; IntechOpen: London, UK, 2022. [Google Scholar] [CrossRef]
  68. Kogler, W.; Wood, G.; Kober, S.E. Effects of electrical brain stimulation on brain indices and presence experience in immersive, interactive virtual reality. Virtual Real. 2022, 26, 1019–1029. [Google Scholar] [CrossRef]
  69. Chang, E.; Kim, H.T.; Yoo, B. Virtual Reality Sickness: A Review of Causes and Measurements. Int. J. Hum. Comput. Interact. 2020, 36, 1658–1682. [Google Scholar] [CrossRef]
  70. Kawala-Janik, A.; Bauer, W.; Al-Bakri, A.; Haddix, C.; Yuvaraj, R.; Cichon, K.; Podraza, W. Implementation of Low-Pass Fractional Filtering for the Purpose of Analysis of Electroencephalographic Signals. In Non-Integer Order Calculus and Its Applications, Proceedings of the 9th International Conference on Non-Integer Order Calculus and Its Applications, Łódź, Poland, 11–13 October 2017; Lecture Notes in Electrical Engineering; Ostalczyk, P., Sankowski, D., Nowakowski, J., Eds.; Springer: Cham, Switzerland, 2017; Volume 496. [Google Scholar] [CrossRef]
  71. Kawala-Sterniuk, A.; Pelc, M.; Martinek, R.; Wójcik, G.M. Editorial: Currents in biomedical signals processing—Methods and applications. Front. Neurosci. 2022, 16, 989400. [Google Scholar] [CrossRef]
  72. Schneider, P.; Wójcik, G.M.; Kawiak, A.; Kwasniewicz, L.; Wierzbicki, A. Modeling and Comparing Brain Processes in Message and Earned Source Credibility Evaluation. Front. Hum. Neurosci. 2022, 16, 808382. [Google Scholar] [CrossRef] [PubMed]
  73. Rojek, I.; Dostatni, E.; Mikołajewski, D.; Pawłowski, L.; Wegrzyn-Wolska, K. Modern approach to sustainable production in the context of Industry 4.0. Bull. Pol. Acad. Sci. Tech. Sci. 2022, 70, e143828. [Google Scholar] [CrossRef]
  74. Krajewski, D.; Oleksy, M.; Oliwa, R.; Bulanda, K.; Czech, K.; Mazur, D.; Masłowski, G. Methods for Enhancing the Electrical Properties of Epoxy Matrix Composites. Energies 2022, 15, 4562. [Google Scholar] [CrossRef]
  75. Rojek, I. Hybrid Neural Networks as Prediction Models. In Artificial Intelligence and Soft Computing, Lecture Notes in Artificial Intelligence; Rutkowski, L., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 88–95. [Google Scholar]
  76. Rojek, I. Neural networks as performance improvement models in intelligent CAPP systems. Control Cybern. 2010, 39, 55–68. [Google Scholar]
  77. Różanowski, K.; Sondej, T. Architecture Design of the High Integrated System-on-Chip for Biomedical Applications. In Proceedings of the 20th International Conference on Mixed Design of Integrated Circuits and Systems (MIXDES 2013), Gdynia, Poland, 20–22 June 2013; pp. 529–533. [Google Scholar]
  78. Rojek, I. Classifier Models in Intelligent CAPP Systems. In Man-Machine Interactions, Advances in Intelligent and Soft Computing; Cyran, K.A., Kozielski, S., Peters, J.F., Stańczyk, U., Wakulicz-Deja, A., Eds.; Springer-Verlag: Berlin/Heidelberg, Germany, 2009; pp. 311–319. [Google Scholar]
  79. Sawicki, K.; Piotrowski, Z. The proposal of IEEE 802.11 network access point authentication mechanism using a covert channel. In Proceedings of the MIKON 2012: 19th International Conference on Microwaves, Radar and Wireless Communications, Warsaw, Poland, 21–23 May 2012; pp. 656–659. [Google Scholar]
  80. Rojek, I.; Mikołajewski, D.; Dostatni, E.; Kopowski, J. Specificity of 3D Printing and AI-Based Optimization of Medical Devices Using the Example of a Group of Exoskeletons. Appl. Sci. 2023, 13, 1060. [Google Scholar] [CrossRef]
  81. Kapitonova, M.; Kellmeyer, P.; Vogt, S.; Ball, T. A Framework for Preserving Privacy and Cybersecurity in Brain-Computer Interfacing Applications. Front. Hum. Neurosci. 2020, 14, 566722. [Google Scholar] [CrossRef]
  82. Zeng, Y.; Sun, K.; Lu, E. Declaration on the Ethics of Brain—Computer Interfaces and Augment Intelligence. AI Ethics 2022, 48, 54–56. [Google Scholar] [CrossRef]
  83. Bernal, S.L.; Celdrán, A.H.; Pérez, G.M.; Barros, M.T.; Balasubramaniam, S. Security in Brain-Computer Interfaces: Current Vulnerabilities, Attacks, and Countermeasures. Sensors 2021, 21, 4142. [Google Scholar] [CrossRef]
  84. Yuste, R.; Goering, S.; Agüera y Arcas, B.; Bi, G.; Carmena, J.M.; Carter, A.; Fins, J.J.; Fries, P.; Illes, J.; Kellmeyer, P.; et al. Ethical Issues in Brain—Computer Interface Research. Nature 2017, 551, 159–162. [Google Scholar] [CrossRef]
  85. Dewil, S.; Kuptchik, S.; Liu, M.; Riva, G. The Cognitive Basis for Virtual Reality Rehabilitation of Upper-Extremity Motor Function After Neurotraumas. J. Multimodal User Interfaces 2023, 17, 105–120. [Google Scholar] [CrossRef]
  86. Knierim, M.T.; Bleichner, M.G.; Reali, P. A Systematic Comparison of High-End and Low-Cost EEG Amplifiers for Concealed, Around-the-Ear EEG Recordings. Sensors 2023, 23, 4559. [Google Scholar] [CrossRef] [PubMed]
  87. Peksa, J.; Mamchur, D. State-of-the-Art on Brain-Computer Interface Technology. Sensors 2023, 23, 6001. [Google Scholar] [CrossRef] [PubMed]
  88. Gordon, E.C.; Seth, A.K. Ethical considerations for the use of brain-computer interfaces for cognitive enhancement. PLoS Biol. 2024, 22, e3002899. [Google Scholar] [CrossRef]
  89. Mikołajewska, E.; Mikołajewski, D. Ethical considerations in the use of brain-computer interfaces. Open Med. 2013, 8, 720–724. [Google Scholar] [CrossRef]
Figure 1. Bibliometric analysis procedure.
Figure 2. A PRISMA flow diagram of the review process using selected PRISMA 2020 guidelines.
Figure 3. General architecture of BCI-based VR system for IoT/smart home control.
Figure 4. Information flow in a closed-loop VR-BCI system.
Table 1. Key BCI medical applications.

Area | Application
Neurorehabilitation | Stroke rehabilitation [11,12]; spinal cord injury rehabilitation [13]; traumatic brain injury rehabilitation [14]; cerebral palsy treatment [15]
Assistive Technology | Communication devices for people with locked-in syndrome [16]; wheelchair control [9,17,18,19]; prosthetic limb control [16]; environmental control (e.g., controlling lights, appliances) [20]
Diagnosis and Monitoring | Epilepsy detection and prediction [21]; sleep disorder diagnosis [22]; brain tumor detection [23,24]; monitoring consciousness in coma patients [25,26]
Treatment of Neurological and Psychiatric Disorders | Neurofeedback for ADHD, anxiety, and depression [27]; deep brain stimulation for Parkinson’s disease and essential tremor [28]; treatment of chronic pain [29]
Table 2. BCI-VR system components: current status, challenges, and future directions.

Area | State of the Art
BCI Paradigms | Motor Imagery: Users imagine movements to generate brain signals. Widely used but requires training and has limitations in accuracy and speed; SSVEP: Users focus on flickering stimuli at specific frequencies. High accuracy and speed but can be visually fatiguing; P300: Users focus on a target stimulus within a flashing array. Requires minimal training but has lower information transfer rates; Hybrid Paradigms: Combining multiple paradigms to leverage their strengths and mitigate weaknesses is gaining traction.
EEG Signal Acquisition | Electroencephalography: Non-invasive, portable, and affordable, making it dominant in BCI-VR research; High-Density EEG: Using more electrodes for improved spatial resolution and signal quality; Dry Electrodes: Enhancing user comfort and reducing setup time, though signal quality can be a concern.
Feature Extraction | Time-Frequency Analysis: Wavelet Transform and Short-Time Fourier Transform are commonly used to extract relevant features from EEG signals; Spatial Filtering: Techniques like Common Spatial Patterns are used to enhance signal-to-noise ratio and extract spatially relevant features; Deep Learning: Convolutional Neural Networks and Recurrent Neural Networks are increasingly used for automatic feature extraction and classification.
Classification | Machine Learning: Support Vector Machines and Linear Discriminant Analysis are popular for classifying EEG patterns; Deep Learning: Deep Neural Networks, particularly CNNs and RNNs, are showing promise in achieving higher accuracy; Transfer Learning: Utilizing pre-trained models to reduce training time and improve performance is an active area of research.
VR Simulation and Feedback | Realistic Environments: Creating immersive and engaging VR experiences to enhance user motivation and task performance; Adaptive Environments: Tailoring VR scenarios based on user performance and brain activity for personalized training and rehabilitation; Multimodal Feedback: Integrating visual, auditory, and haptic feedback to provide a richer and more intuitive user experience.
VR Control Signal | Discrete Control: Selecting objects or triggering events in the VR environment using specific brain patterns; Continuous Control: Navigating or manipulating objects in the VR environment using continuous brain activity modulation; Shared Control: Combining BCI control with traditional input methods or assistive technologies to enhance usability and performance.