Review

An Introductory Tutorial on Brain–Computer Interfaces and Their Applications

1 Dipartimento di Ingegneria dell’Informazione, Università Politecnica delle Marche, Via Brecce Bianche, I-60131 Ancona, Italy
2 Graduate School of Informatics, Kyoto University, Yoshidahonmachi 36-1, Sakyo-ku, Kyoto 606-8501, Japan
3 Department of Electrical and Electronic Engineering, Tokyo University of Agriculture and Technology, 2-24-16, Nakacho, Koganei-shi, Tokyo 184-8588, Japan
4 RIKEN Center for Brain Science, 2-1, Hirosawa, Wako-shi, Saitama 351-0198, Japan
5 RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
6 School of Computer Science and Technology, Hangzhou Dianzi University, Xiasha Higher Education Zone, Hangzhou 310018, China
* Author to whom correspondence should be addressed.
Electronics 2021, 10(5), 560; https://doi.org/10.3390/electronics10050560
Submission received: 18 January 2021 / Revised: 21 February 2021 / Accepted: 22 February 2021 / Published: 27 February 2021
(This article belongs to the Special Issue Technology and Applications of Brain-Computer Interfaces)

Abstract:
The prospect of interfacing minds with machines has long captured the human imagination. Recent advances in biomedical engineering, computer science, and neuroscience are making brain–computer interfaces a reality, paving the way to restoring and potentially augmenting human physical and mental capabilities. Brain–computer interfaces are being explored in domains as diverse as security, lie detection, alertness monitoring, gaming, education, art, and human cognition augmentation. The present tutorial aims to survey the principal features and challenges of brain–computer interfaces (such as reliable acquisition of brain signals, filtering and processing of the acquired brainwaves, ethical and legal issues related to brain–computer interface (BCI) technology, data privacy, and performance assessment) with special emphasis on biomedical engineering and automation engineering applications. The content of this paper is aimed at offering students, researchers, and practitioners a glimpse of the multifaceted world of brain–computer interfacing.

1. Introduction

Severe neurological and cognitive disorders, such as amyotrophic lateral sclerosis (ALS), brainstem stroke, and spinal cord injury, can destroy the pathways through which the brain communicates with and controls its external environment [1,2]. Severely affected patients may lose all voluntary muscle control, including eye movements, and may be completely locked into their bodies, unable to communicate by any means. Such a complex of severe conditions is sometimes referred to as locked-in syndrome to signify the inability to interact with, and to manifest any intent to, the external world. A potential solution for restoring function and overcoming motor impairments is to provide the brain with a new, nonmuscular communication and control channel, a direct brain–computer interface (BCI) or brain–machine interface, for conveying messages and commands to the external world [3].
Evidence shows that neuronal networks oscillate in a way that is functionally relevant [4,5,6]. In particular, mammalian cortical neurons form behavior-dependent oscillating networks of various sizes. Recent findings indicate that network oscillations temporally link neurons into assemblies and facilitate synaptic plasticity, mechanisms that cooperatively support temporal representation and long-term consolidation of information [4,7,8]. The essence of BCI technology, through sensors placed over the head, is to record such neuronal oscillations, which encode the brain activity, and to decipher the neuronal oscillation code.
The slow speeds, high error rate, susceptibility to artifacts, and complexity of early BCI systems have been challenges for implementing workable real-world systems [9]. Originally, the motivation for developing BCIs was to provide severely disabled individuals with a basic communication system. In recent years, advances in computing and biosensing technologies improved the outlook for BCI applications, making them promising not only as assistive technologies but also for mainstream applications [10].
While noninvasive techniques are the most widely used, both in applications devoted to regular consumers and in restoring the functionality of disabled subjects, invasive implants were initially developed through experiments with animals and were applied in the control of artificial prostheses. At the end of the 1990s, implants were applied to humans to accomplish simple tasks such as moving a screen cursor. The electroencephalogram (EEG), for instance, is a typical signal used as an input for BCI applications and refers to the electrical activity recorded through electrodes positioned on the scalp. Over the last decades, EEG-based BCI has become one of the most popular noninvasive techniques. The EEG measures the summed synchronous activity of neurons [11] that have the same spatial orientation after an external stimulus is produced. This technique has been used to register different types of neural activity, such as evoked responses (ERs), also known as evoked potentials (EPs) [12,13], or induced responses such as event-related potentials, event-related desynchronisations, and slow cortical potentials [13].
In recent years, the motivation for developing BCIs has been not only to provide an alternate communication channel for severely disabled people but also to use BCIs for communication and control in industrial environments and consumer applications [14]. It is also worth mentioning a recent technological development, closely related to BCI, known as a brain-to-brain interface (BBI). A BBI is a combination of the brain–computer interface and the computer–brain interface [15]. Brain-to-brain interfaces allow for direct transmission of brain activity in real time by coupling the brains of two individuals.
The last decade witnessed increasing interest in the use of BCI for games and entertainment applications [16], given the amount of meaningful information provided by BCI devices that is not easily obtainable through other input modalities. BCI signals can be utilized in this kind of application to collect data describing the cognitive states of a user, which has proved useful in adapting a game to the player’s emotional and cognitive conditions. Moreover, research in BCI can provide game developers with information that is relevant to controlling a gaming application or to developing hardware and software products to build an unobtrusive interface [17].
The use of mental states to trigger and control the external environment can be achieved via brain–computer interfaces to replace a lost function in persons with severe motor disabilities and no possibility of functional recovery (e.g., amyotrophic lateral sclerosis or brainstem stroke) [18]. In such instances, a BCI is used as a system that allows for direct communication between the brain and distant devices. In recent years, research efforts have been devoted to its use in smart environmental control systems, fast and smooth movement of robotic arm prototypes, as well as motion planning of autonomous or semiautonomous vehicles and robotic systems [19,20]. A BCI can be used in an active way, known as active BCI, with the user voluntarily modulating brain activity to generate a specific command to the surrounding environment, replacing or partially restoring lost or impaired muscular abilities. The alternative modality, the passive BCI (pBCI) [21,22,23], derives its outputs from arbitrary brain activity without the intention of specific voluntary control (i.e., it exploits implicit information on the user’s state). In fact, in systems based on pBCIs, the users do not try to control their brain activity [24]. pBCIs have been used in modern research on adaptive automation [25] and in augmented user evaluation [26]. A recent study demonstrated the higher resolution of neurophysiological measures in comparison to subjective ones and showed how the simultaneous employment of neurophysiological and behavioral measures could allow for a holistic assessment of operational tools [27]. Reliability is a desirable characteristic of BCI systems when they are used under nonexperimental operating conditions. The usability of BCI systems is hindered by the involved and frequent procedures required for configuration and calibration.
Such an obstruction to a smooth user experience may be mitigated by automated recalibration algorithms [28]. The range of possible EEG-based BCI applications is very broad, from very simple to complex, including generic cursor control applications, spelling devices [29], gaming [30,31], navigation in virtual reality [32], environmental control [33,34], and control of robotic devices [19,35]. To highlight the capabilities of EEG-based BCIs, some applications, with a focus on their use in control and automation systems, will be summarized.
A large body of studies on BCI has accumulated over the decades, both in terms of methodological research and of real-world applications. The aim of the present review paper is to provide readers with a comprehensive overview of the heterogeneous topics related to BCI by grouping its applications and techniques into two macro-themes. The first concerns low-level brain signal acquisition methods, and the second relates to the higher-level use of acquired brain signals to control artificial devices. Specifically, the literature on BCI has been collected and surveyed in this review with the aim of highlighting the following:
  • the main aspects of BCI systems related to signal acquisition and processing,
  • the characteristics of electrophysiological signals,
  • BCI applications to controlling robots and vehicles, assistive devices, and automation in general.
The present paper is organized as follows: Section 2 presents a review of the state-of-the-art in brain–computer interfacing. Section 3 discusses in detail electrophysiological recordings for brain–computer interfacing. Section 4 illustrates the main applications of BCIs in automation and control problems. Section 5 introduces current limitations and challenges of BCI technologies. Section 6 concludes the paper.
The content of the present tutorial paper aims to give readers a glimpse of the multifaceted and multidisciplinary world of brain–computer interfacing and allied topics.

2. BCI Systems: Interaction Modality and Signal Processing Technique

A BCI system typically requires the acquisition of brain signals, the processing of such signals through specifically designed algorithms, and their translation into commands to external devices [36]. Effective usage of a BCI device entails a closed loop of sensing, processing, and actuation. In the sensing process, bio-electric signals are sensed and digitized before being passed to a computer system. Signal acquisition may be realized through a number of technologies, ranging from noninvasive to invasive. In the processing phase, a computing platform interprets fluctuations in the signals through an understanding of the underlying neurophysiology in order to discern user intent from the changing signal. The final step is the actuation of such an intent, in which it is translated into specific commands for a computer or robotic system to execute. The user can then receive feedback, adjust his/her thoughts, and generate new, adapted signals for the BCI system to interpret.
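The sense–process–actuate loop described above can be sketched in code. The following is a minimal illustration, not an actual BCI implementation: all function names are hypothetical, the "signals" are random numbers standing in for digitized bio-electric data, and the decoder is a toy power threshold rather than a real neurophysiological model.

```python
import numpy as np

def sense(n_channels=8, n_samples=256, rng=None):
    """Stand-in for one digitized acquisition window of bio-electric signals."""
    rng = rng or np.random.default_rng(0)
    return rng.standard_normal((n_channels, n_samples))

def process(window):
    """Toy decoder: map mean signal power to one of two 'intents'."""
    power = np.mean(window ** 2)
    return "command_A" if power > 1.0 else "command_B"

def actuate(command):
    """Translate the decoded intent into a device command (here, a string)."""
    return f"device <- {command}"

# One pass around the closed loop; a real system would repeat this and
# feed the outcome back to the user as visual or auditory feedback.
window = sense()
feedback = actuate(process(window))
```

In a real system each stage would be far more elaborate (amplification and filtering in `sense`, feature extraction and classification in `process`), but the loop structure is the same.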

2.1. BCI Systems Classification

There exist several different methods that can be used to record brain activity [37], such as EEG on the scalp level, electrocorticography (ECoG), magnetoencephalography (MEG), positron emission tomography (PET), near-infrared spectroscopy (a technique able to detect changes in oxygenated and deoxygenated hemoglobin when the brain is activated and restored to its original state), or functional magnetic resonance imaging (fMRI) [38,39]. ECoG requires surgery to fit the electrodes onto the surface of the cortex, making it an invasive technique [40]. MEG [41,42], PET [43,44,45], and fMRI [46,47,48] require very expensive and cumbersome equipment and facilities. Careful development of a set of calibration samples and application of multivariate calibration techniques are essential for near-infrared spectroscopy analytic methods. In general, BCI systems are classified, with respect to the techniques used to pick up the signals, as invasive, when the brain signals are acquired through surgically implanted sensors, or as noninvasive or partially invasive.
Noninvasive sensors are easy to wear, but they produce poor signal resolution because the skull dampens signals, dispersing and blurring the electromagnetic waves created by the neurons. Even though brain waves can still be detected, it is more difficult to determine the area of the brain that created the recorded signals or the actions of individual neurons. EEG is the most studied noninvasive interface, mainly due to its fine temporal resolution, ease of use, portability, and low setup cost. A typical EEG recording setup is shown in Figure 1.
The brain is extremely complex (there are about 100 billion neurons in a human brain, and each neuron constantly sends and receives signals through an intricate network of connections). Assuming that all thoughts or actions are encoded in electric signals in the brain is a gross oversimplification, as there are chemical processes involved as well, which EEG is unable to pick up. Moreover, the recorded EEG signal is weak and prone to interference: EEG measures tiny voltage potentials, while the blinking eyelids of the subject can generate much stronger signals. Another substantial barrier to using EEG as a BCI is the extensive training required before users can work effectively with such technology.
ECoG measures the electrical activity of the brain from beneath the skull in a way similar to noninvasive electroencephalography, but the electrodes are embedded in a thin plastic pad that is placed directly above the cortex, beneath the dura mater. The survey in [49] overviews progress in the recent field of applied research known as micro-corticography (μECoG). Miniaturized implantable μECoG devices have the advantage of providing higher-density neural signal acquisition and stimulation capabilities in a minimally invasive implant. The increased spatial resolution of a μECoG array is useful for more specific diagnosis and treatment of neuronal diseases. In general, invasive devices to measure brain signals are based on electrodes implanted directly in the grey matter of the patient’s brain during neurosurgery. Because the electrodes lie in the grey matter, invasive devices produce the highest-quality signals of BCI devices, but they are prone to scar-tissue build-up, causing the signal to become weaker or even null as the body reacts to the foreign object in the brain. Partially invasive BCI devices are implanted inside the skull but rest outside the brain rather than within the grey matter. They produce better-resolution signals than noninvasive BCIs, in which the bone tissue of the cranium deflects and deforms signals, and they have a lower risk of forming scar tissue in the brain than fully invasive BCIs.
Invasive and partially invasive technologies remain limited to healthcare fields [50]. Medical-grade brain–computer interfaces are often used to assist people with damage to their cognitive or sensorimotor functions. Neuro-feedback is starting to be used by physical therapists for stroke patients to help visualize brain activity and to promote brain plasticity [51]. Such plasticity enables the nervous system to adapt to environmental pressures, physiologic changes, and experiences [52]. Because of the plasticity of the brain, undamaged parts of the brain can take over the functions disabled by the suffered injuries. Indeed, medical-grade BCI was originally designed for the rehabilitation of patients following paralysis or loss of limbs. In this way, the control of robotic arms or interaction with computerized devices could be achieved by conscious thought processes, whereby users imagine the movement of neuroprosthetic devices to perform complex tasks [53]. Brain–computer interfacing has also demonstrated profound benefits in correcting blindness through phosphene generation, thereby introducing a limited field of vision to previously sightless patients [54]. Studies have shown that patients with access to BCI technologies recover faster from serious mental and physical traumas than those who undergo traditional rehabilitation methods [55]. For this very reason, the use of BCI has also expanded (albeit tentatively) into the fields of Parkinson’s disease, Alzheimer’s disease, and dementia research [56,57].
A class of wireless BCI devices was designed as pill-sized chips of electrodes implanted on the cortex [58]. Such a small volume houses an entire signal processing system: a lithium-ion battery, ultralow-power integrated circuits for signal processing and conversion, wireless radio and infrared transmitters, and a copper coil for recharging. All the wireless and charging signals pass through an electromagnetically transparent sapphire window. Not all wireless BCI systems are integrated and fully implantable. Several proof-of-concept demonstrations have shown encouraging results, but barriers to clinical translation remain. In particular, intracortical prostheses must satisfy stringent power dissipation constraints so as not to damage the cortex [59]. Starting from the observation that approximately 20% of traumatic cervical spinal cord injuries result in tetraplegia, the authors of [60] developed a semi-invasive technique that uses an epidural wireless brain–machine interface to drive an exoskeleton. BCI technology can empower individuals to directly control electronic devices located in smart homes/offices, as well as associative robots, via their thoughts. This process requires efficient transmission of ECoG signals from electrodes implanted inside the brain to an external receiver located on the scalp. The contribution in [61] discusses efficient, low-complexity, and balanced BCI communication techniques to mitigate interference.

2.2. Elicitation of Brain Signals

Brain–computer interfaces were inherently conceived to acquire neural data and to exploit them to control different devices and appliances. Based on the way the neural data are elicited and used, BCI systems can be classified into four types [62]:
  • Active: In this instance, a BCI system acquires and translates neural data generated by users who are voluntarily engaged in predefined cognitive tasks for the purpose of “driving” the BCI.
  • Reactive: Such an instance makes use of neural data generated when users react to stimuli, often visual or tactile.
  • Passive: Such an instance refers to the case in which a BCI serves to acquire neural data generated when users are engaged in cognitively demanding tasks.
  • Hybrid: Such an instance is a mixture of active, reactive, and passive BCI and possibly a further data acquisition system.
Cognitive brain systems are amenable to conscious control, yielding better regulation of the magnitude and duration of localized brain activity. The signal generation, acquisition, processing, and commanding chain in a BCI system is illustrated in Figure 2. Likewise, the visual cortex is a focus of signal acquisition, since the electrical signals picked up from the visual cortex tend to synchronize with external visual stimuli hitting the retina.
One of the most significant obstacles to be overcome in using brain signals for control is the establishment of a valid method to extract event-related information from a real-time EEG [63]. Most BCIs rely on one of three types of mental activities, namely, motor imagery [64], P300 [65], and steady-state visually evoked potentials (SSVEPs) [66]. Some BCIs may utilize more than one such mental activity; hence, they are referred to as “hybrid BCIs” [67]. Once brain signal patterns are interpreted in relation to cognitive tasks, BCI systems can decode the user’s goals. By modulating such brain signals, patients can express their intent to the BCI system, and these brain signals can act as control signals in BCI units.
Some users experience significant difficulty in using BCI technologies: it is reported that approximately 15–30% of users cannot modulate their brain signals, which results in an inability to operate BCI systems [68]. Such users are called “BCI-illiterate” [69]. The sensorimotor EEG changes of the motor cortex during active movement, passive movement, and motor imagery are similar. The study in [70] showed that it is possible to use classifiers trained on data from passive and active hand movement to detect motor imagery. Hence, a physiotherapy session for a stroke patient could be used to obtain data to learn a classifier, and BCI-rehabilitation training could start immediately.
P300 is a further type of brain activity that can be detected by means of EEG recordings. P300 is a brainwave component that occurs after a stimulus that is deemed “important”. The existence of the P300 response may be verified by the standard “oddball paradigm”, which consists of the presentation of a deviant stimulus within a stream of standard stimuli; the deviant stimulus elicits the P300 component. In the EEG signal, P300 appears as a positive wave about 300 ms after stimulus onset and serves as a link between stimulus characteristics and attention. To record P300 traces, the electrodes are placed over the posterior scalp. Attention and working memory are considered the cognitive processes underlying P300 amplitude [71]. In fact, it has been suggested that P300 is a manifestation of a context-updating activity occurring whenever one’s model of the environment is revised. The study in [71] investigated the role of attentional and memory processes in controlling a P300-based BCI in people with amyotrophic lateral sclerosis.
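The oddball analysis described above rests on averaging many stimulus-locked EEG epochs so that the P300 deflection survives while uncorrelated noise cancels out. The sketch below illustrates this on synthetic data (the "P300" is simply a Gaussian bump placed at 300 ms; all amplitudes and counts are illustrative, not taken from the cited studies):

```python
import numpy as np

fs = 250                       # sampling rate (Hz), an illustrative value
t = np.arange(0, 0.6, 1 / fs)  # 600 ms epochs, time-locked to stimulus onset
rng = np.random.default_rng(1)

def make_epoch(is_target):
    """One synthetic epoch: noise, plus a positive bump near 300 ms for targets."""
    noise = rng.standard_normal(t.size)
    p300 = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2)) if is_target else 0.0
    return noise + p300

# Average 40 target (deviant) and 40 standard epochs.
targets = np.mean([make_epoch(True) for _ in range(40)], axis=0)
standards = np.mean([make_epoch(False) for _ in range(40)], axis=0)

# The averaged target response shows a positive deflection around 300 ms
# that the averaged standard response lacks.
window = (t > 0.25) & (t < 0.35)
p300_amplitude = targets[window].mean() - standards[window].mean()
```

Real P300 spellers apply exactly this target-versus-standard contrast, usually with per-trial classification rather than grand averages.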
The study in [72] investigated a BCI technology based on EEG responses to vibro-tactile stimuli around the waist. P300-BCIs based on tactile stimuli have the advantage of not taxing the visual or auditory system, hence being especially suitable for patients whose vision or eye movements are impaired. The contribution of [73] suggested a method for the extraction of discriminative features in electroencephalography evoked-potential latency. Based on offline results, evidence is presented indicating that a full surround sound auditory BCI paradigm has potential for an online application. The auditory spatial BCI concept is based on a directional audio stimuli delivery technique, which employs a loudspeaker array. The stimuli presented to the subjects vary in frequency and timbre. Such research resulted in a methodology for finding and optimizing evoked response latencies in the P300 range in order to classify the subject’s chosen targets (or the ignored non-targets).
The SSVEP paradigm operates by exposing the user to oscillating visual stimuli (e.g., flickering Light Emitting Diodes (LEDs) or phase-reversing checkerboards). An electrical activity corresponding to the frequency of such oscillation (and its multiples) can be measured from the occipital lobe of the brain. The user issues a command by choosing a stimulus (and therefore a frequency). Figure 3 shows an experimental setup where a patient gazes at a screen that presents eight different oscillating patterns.
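Because the occipital EEG follows the flicker frequency of the attended stimulus, SSVEP decoding can be as simple as finding which candidate frequency dominates the spectrum. The sketch below simulates this with a noisy sinusoid standing in for an occipital channel (the frequencies, duration, and noise level are illustrative assumptions):

```python
import numpy as np

fs = 256
t = np.arange(0, 4, 1 / fs)                # 4 s of simulated occipital EEG
stimulus_freqs = [8.0, 10.0, 12.0, 15.0]   # candidate flicker frequencies (Hz)
rng = np.random.default_rng(2)

# Simulated channel: the user attends the 12 Hz stimulus.
eeg = np.sin(2 * np.pi * 12.0 * t) + 0.5 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(eeg))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def band_power(f):
    """Spectral magnitude at the bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

# Decode the command: the candidate with the strongest fundamental wins.
decoded = max(stimulus_freqs, key=band_power)
```

Practical SSVEP decoders also examine harmonics of each candidate frequency, or use canonical correlation analysis, but the frequency-tagging principle is the one shown here.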
BCI also includes significant target systems outside clinical scopes. Based on visually evoked responses to flicker at different frequencies, SSVEP systems can be used to provide different inputs in control applications. Martišius et al. [74] described SSVEP as a means to successfully control computer devices and games, using it to build a human–computer interaction system for decision-making improvement. In particular, a model of traffic lights was designed as a case study. The experiments carried out on this model, which involved decision-making situations, showed that an SSVEP–BCI system can assist people in making correct decisions.

2.3. Synchronous and Asynchronous Interaction

A first aspect to consider when analysing BCI-based control techniques is the use of synchronous or asynchronous protocols. In synchronous protocols, the system indicates to the user the moments when he/she must engage in a cognitive process. The signals are then processed during a specific interval of time, and the decision is taken. Systems relying on this kind of protocol are slow. Synchronous protocols make the processing of acquired signals easier because the starting time is known and the differences with respect to the background can be measured [75]. On the other hand, asynchronous protocols [28,76] are more flexible because the user is not restricted in time and can think freely [77]. In synchronous protocols, the acquired signals are time-locked to externally paced cues repeated over time (the system controls the user), while in asynchronous protocols, the user can think of some mental task at any time (the user controls the system). Asynchronous BCIs [28,76] require extensive training (on the order of many weeks), their performance is user-dependent, and their accuracy is not as high as that of synchronous ones [78]. On the other hand, synchronous BCIs require minimal training and have stable performance and high accuracy. Asynchronous BCI is nevertheless more realistic and practical than a synchronous system in that BCI commands can be generated whenever the user wants [79].
Asynchronous BCI systems are more practicable than synchronous ones in real-world applications. A key challenge in asynchronous BCI design is to discriminate between intentional-control and non-intentional-control states. In [66], a two-stage asynchronous protocol for an SSVEP-based BCI was introduced. This visual-evoked-potential-based asynchronous BCI protocol was later extended to mixed frequency- and phase-coded visual stimuli [80]. Brain–computer interfacing systems can also use pseudo-random stimulation sequences on a screen (code-based BCI) [81]. Such a system can control a robotic device; in this case, the BCI controls may be overlaid on a video that shows the robot performing certain tasks.
Hybrid BCIs combine different input signals to provide more flexible and effective control. The combination of these input signals makes it possible to use a BCI system for a larger patient group and to make the system faster and more reliable. Hybrid BCIs can also combine one brain signal with a different type of input, such as an electrophysiological signal (e.g., the heart rate) or a signal from an external device such as an eye-tracking system. For instance, a BCI can be used as an additional control channel in a video game that already uses a game pad, or it can complement other bio-signals such as the electrocardiogram or blood pressure in an application monitoring a driver’s alertness [14]. The contribution in [67] describes BCIs whose functioning is based on the classification of two EEG patterns, namely, the event-related (de)synchronisation of sensorimotor rhythms and SSVEPs.

2.4. Preprocessing and Processing Techniques

The technical developments that assist research into EEG-based communication can be split into the development of signal processing algorithms, the development of classification algorithms, and the development of dynamic models. Spontaneous EEG rhythms can be separated from the background EEG using finite impulse response band-pass filters or fast Fourier transform algorithms. Other spontaneous EEG signals, for example, those associated with mental tasks such as mental arithmetic, geometric figure rotation, or visual counting, are better recognized using autoregressive features [82].
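Both feature routes mentioned above are straightforward to sketch: a band-pass filter isolates a rhythm of interest, and a least-squares autoregressive (AR) fit summarizes a signal segment as a short coefficient vector. The band edges, model order, and data below are illustrative choices, not values prescribed by the cited work:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250
# 4th-order Butterworth band-pass for the alpha band (8-13 Hz).
b, a = butter(4, [8, 13], btype="bandpass", fs=fs)

rng = np.random.default_rng(3)
eeg = rng.standard_normal(fs * 2)      # 2 s of raw (here, synthetic) signal
alpha = filtfilt(b, a, eeg)            # zero-phase band-pass filtering

def ar_features(x, order=6):
    """Least-squares AR(p) coefficients: predict x[t] from x[t-1..t-p]."""
    X = np.column_stack(
        [x[order - k - 1 : len(x) - k - 1] for k in range(order)]
    )
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

features = ar_features(alpha)          # a compact 6-value feature vector
```

In a BCI pipeline, such per-channel AR coefficients (or band powers) would then be concatenated and passed to the classifier.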
In [83], Butkevičiūtė evaluated the effects of movement artifacts during EEG recording, which can significantly affect the use of EEG for control.
EEG artifacts may significantly affect the accuracy of feature extraction and data classification in BCIs. For example, EEG artifacts derived from ocular and muscular activity are inevitable and unpredictable due to the physical conditions of the subject. Consequently, the removal of these artifacts is crucial for improving the robustness of BCI applications. Nowadays, different tools can be used to correct eye-blink artifacts in EEG signal acquisition: the most used methods are based on regression techniques and Independent Component Analysis (ICA) [84]. The choice of the most suitable method depends on the specific application and on the limitations of the method itself. In fact, regression-based methods require at least one Electrooculography (EOG) channel, and their performance is affected by the mutual contamination between EEG and EOG signals. With ICA, the EEG signals are projected onto an ICA embedding [85]. ICA-based methods require a larger number of electrodes and a greater computational effort than regression-based techniques. The contribution in [84] proposed a new regression-based method and compared it with three of the most used algorithms (Gratton, extended InfoMax, and SOBI) for eye-blink correction. The obtained results confirmed that ICA-based methods exhibit significantly different behaviors from regression-based methods, with a larger reduction in the alpha band of the power spectral density over the frontal electrodes. These results highlighted that the proposed algorithm can provide comparable performance in terms of blink correction without requiring EOG channels, a high number of electrodes, or a high computational effort, thus preserving EEG information from blink-free signal segments. More generally, blind source separation (BSS) is an effective and powerful tool for signal processing and artifact removal from electroencephalographic signals [86].
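The classical regression approach mentioned above can be reduced to two steps: estimate how strongly the EOG channel leaks into an EEG channel, then subtract the scaled EOG. The sketch below uses synthetic data and a single channel; a real pipeline (such as the Gratton method cited above) estimates the propagation coefficients on calibration segments and handles multiple channels:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
eog = np.zeros(n)
eog[200:260] = 80.0 * np.hanning(60)   # one large, smooth blink deflection
true_eeg = rng.standard_normal(n)      # the "clean" EEG we want to recover
contaminated = true_eeg + 0.3 * eog    # the blink leaks into the EEG channel

# Least-squares propagation coefficient (EOG -> EEG), then subtraction.
beta = np.dot(contaminated, eog) / np.dot(eog, eog)
corrected = contaminated - beta * eog
```

The limitation noted in the text is visible here: if the EOG channel itself contained EEG activity, that activity would be subtracted too, which is the mutual-contamination problem that motivates ICA-based alternatives.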
For high-throughput applications such as BCIs, cognitive neuroscience, or clinical neuromonitoring, it is of prime importance that blind source separation be performed effectively in real time. To improve the throughput of a BSS-based BCI in terms of speed, the parallelism that hardware provides may be exploited; reported results show that a hardware/software co-simulation environment greatly reduces computation time.
Once the features of acquired brain signals have been extracted by means of a signal processing algorithm, they can be classified using an intelligent/adaptive classifier. Typically, a choice among different classifiers is performed, although ways of combining predictions from several nonlinear regression techniques can be exploited. Some research endeavors have focused on the application of dynamic models such as the combined Hidden Markov Autoregressive model [87] to process and classify the acquired signals, while others have focused on the use of Hidden Markov Models and Kalman filters [88]. The contribution in [89] suggested eliminating redundancy in high-dimensional EEG signals and reducing the coupling among different classes of EEG signals by means of principal component analysis, and employing Linear Discriminant Analysis (LDA) to extract features that represent the raw signals; subsequently, a voting-based extreme learning machine (ELM) method was used to classify such features. The contribution in [90] proposed the use of indexes applied to BCI recordings, such as the largest Lyapunov exponent, the mutual information, the correlation dimension, and the minimum embedding dimension, as features for the classification of EEG signals. A multi-layer perceptron classifier and a Support-Vector Machine (SVM) classifier based on k-means clustering were used to accomplish classification. A support-vector-machine-based classification method applied to P300 data was also discussed in [91].
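To make the classification step concrete, here is a minimal two-class Fisher LDA, one of the classifiers mentioned above: project feature vectors onto the discriminant direction and threshold at the midpoint between class means. The feature vectors are synthetic stand-ins for extracted EEG features, and the dimensions and separation are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
class_a = rng.standard_normal((50, 4)) + 1.5   # features for class A trials
class_b = rng.standard_normal((50, 4)) - 1.5   # features for class B trials

# Fisher discriminant: w = pooled_covariance^{-1} (mu_A - mu_B).
mu_a, mu_b = class_a.mean(axis=0), class_b.mean(axis=0)
pooled_cov = 0.5 * (np.cov(class_a.T) + np.cov(class_b.T))
w = np.linalg.solve(pooled_cov, mu_a - mu_b)
threshold = 0.5 * (w @ mu_a + w @ mu_b)

def predict(x):
    """Classify a feature vector by its projection onto w."""
    return "A" if w @ x > threshold else "B"

# Training-set accuracy on the synthetic data.
accuracy = np.mean(
    [predict(x) == "A" for x in class_a] + [predict(x) == "B" for x in class_b]
)
```

LDA remains popular in BCI precisely because of this simplicity: with few trials and noisy features, its low-variance linear boundary often beats more flexible classifiers.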

2.5. Use of BCI in Medical and General Purpose Applications

An early application of BCI was to neural prosthetic implants, which showed several potential uses for recording neuronal activity and stimulating the central nervous system as well as the peripheral nervous system. The key goal of many neuroprosthetics is operation through closed-loop BCI systems (the loop runs from the measurement of brain activity, through classification of data and feedback to the subject, to the effect of feedback on brain activity [92]), with a channel for relaying tactile information. To be efficient, such systems must be equipped with neural interfaces that work in a consistent manner for as long as possible. In addition, such neuroprosthetic systems must be able to adapt the recording to changes in neuronal populations and to tolerate physical real-life environmental factors [93]. Visual prosthetic development has one of the highest priorities in the biomedical engineering field. Complete blindness from retinal degeneration arises from diseases such as age-related macular degeneration, which causes dystrophy of photoreceptor cells; from artery or vein occlusion; and from diabetic retinopathy. Functional vision can be achieved by converting images into binary pulses of electrical signals and by delivering them to the visual cortex. The sensations created are in the form of bright spots referred to as visual perception patterns. Current developments appear in the form of enhanced image processing algorithms and data transfer approaches, combined with emerging nanofabrication and conductive polymerization [94]. Likewise, cochlear implants use electrical signals that directly stimulate the sensory epithelium of the basilar membrane to produce auditory stimuli [95]. The highest levels of success were seen in subjects who had some sense of hearing during their critical developmental periods. Better resolution of sensory input is achieved by providing it to the cortex rather than the auditory nerve.
Such implants may be placed into the cochlear nerve/pons junction, when the auditory nerve has been damaged; into the cochlear nucleus; or into the inferior colliculus [96]. BCI could prove useful in psychology, where it might help in establishing the psychological state of patients, as described in [97]. A patient’s feelings are predictable by examining the electrical signals generated by the brain. Krishna et al. [97] proposed an emotion classification tool based on a generalized mixture model and obtained 89% classification precision in recognizing happiness, sadness, boredom, and neutral states in terms of valence. Further applications have been proposed in the biometric field, creating an EEG-based cryptographic authentication scheme [98] through EEG signals, obtaining important results based on the commitment scheme adopted from [99].
In recent years, the motivation for developing brain–computer interfaces has been not only to provide an alternate communication channel for severely disabled people but also to use BCIs for communication and control in industrial environments and consumer applications [14]. BCI is useful not only for communication but also to allow the mental states of the operator to trigger actions in the surroundings [18]. In such instances, BCI is used as a system that allows direct communication between the brain and distant devices. Over the last years, research efforts have been devoted to its use in smart environmental control systems, fast and smooth movement of robotic arm prototypes, as well as motion planning of autonomous or semi-autonomous vehicles. For instance, one study developed an SSVEP–BCI for controlling a toy vehicle [100]. Figure 4 shows an experimental setup for controlling a toy vehicle through SSVEP–BCI technology.
In particular, BCIs have been shown to achieve excellent performance in controlling robotic devices using only signals sensed from brain implants. Until now, however, BCIs successful in controlling robotic arms have relied on invasive brain implants. These implants require a substantial amount of medical and surgical expertise to be correctly installed and operated, not to mention the cost and potential risks to human subjects. A great challenge in BCI research is to develop less invasive or even totally noninvasive technologies that would allow paralyzed patients to control their environment or robotic limbs using their own thoughts. A noninvasive counterpart requiring less intervention that can provide high-quality control would thoroughly improve the integration of BCIs into clinical and home settings. Noninvasive neuroimaging and increased user engagement improve EEG-based neural decoding and facilitate real-time 2D robotic device control [101]. Once the basic mechanism of converting thoughts to computerized or robotic action is perfected, the potential uses for the technology will be almost limitless. Instead of a robotic hand, disabled users could have robotic braces attached to their own limbs, allowing them to move and directly interact with the environment. Signals could be sent to the appropriate motor control nerves in the hands, bypassing a damaged section of the spinal cord and allowing actual movement of the subject’s own hands [102]. Breakthroughs are required in areas of usability, hardware/software, and system integration, but for successful development, a BCI should also take user characteristics and acceptance into account.
Efforts have been focused on developing potential applications in multimedia communication and relaxation (such as immersive virtual reality control). Computer gaming, in particular, has benefited immensely from the commercialization of BCI technology, whereby users can act out first-person roles through thought processes. Indeed, using brain signals to control game play opens many possibilities beyond entertainment, since neurogaming has potentials in accelerating wellness, learning, and other cognitive functions. Major challenges must be tackled for BCIs to mature into an established communications medium for virtual-reality applications, which range from basic neuroscience studies to developing optimal peripherals and mental gamepads and more efficient brain-signal processing techniques [103].
An extended BCI technology is the so-called “collaborative BCI” [104]. In this method, the tasks are performed by multiple people rather than just one person, which makes the result more efficient. Brain–computer interfacing relies on the focused concentration of a subject for optimal performance, while, in a collaborative BCI, if one person loses concentration for any reason, the other subjects will compensate and produce the missing commands. Collaborative BCIs have found widespread applications in learning and communication, as well as in improving social, creative, and emotional skills. The authors of the paper [105] built a collaborative BCI and focused on the role of each subject in a study group, trying to answer the question of whether there are some subjects that would be better to remove or that are fundamental to enhancing group performance. The goal of the paper [106] was to introduce a new way to reduce the time needed to identify a message or command. Instead of relying on brain activity from one subject, the system proposed by the authors of [106] utilized brain activity from eight subjects performing a single trial. Hence, the system could rely on an average based on eight trials, which is more than sufficient for adequate classification, even though each subject contributed only one trial. The paper [107] presents a detailed review of BCI applications for training and rehabilitation of students with neurodevelopmental disorders.
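The benefit of pooling single trials from several subjects can be illustrated with a toy computation: assuming the same event-related response buried in independent noise across subjects, averaging N single trials reduces the noise level by roughly a factor of √N.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 250)
erp = np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))   # idealized P300-like deflection

# One single-trial epoch per subject: the same ERP plus independent noise
n_subjects, noise_std = 8, 2.0
trials = erp + rng.normal(scale=noise_std, size=(n_subjects, t.size))

single = trials[0]                  # one subject, one trial
pooled = trials.mean(axis=0)        # "collaborative" average over 8 subjects

err_single = np.sqrt(np.mean((single - erp) ** 2))
err_pooled = np.sqrt(np.mean((pooled - erp) ** 2))
# Averaging N independent trials shrinks the noise roughly by sqrt(N)
```

The residual error of the pooled estimate is close to noise_std/√8, which is why eight single trials from eight subjects can substitute for eight repetitions from one subject.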
Machine-learning and deep learning approaches based on the analysis of physiological data play a central role since they can provide a means to decode and characterize task-related brain states (i.e., reducing a multidimensional problem to a one-dimensional one) and to differentiate relevant brain signals from those that are not task-related. In this regard, researchers tested BCI systems in daily life applications, illustrating the development and effectiveness of this technique [18,108,109]. EEG-based passive BCI (pBCI) became a relevant tool for real-time analysis of brain activity since it can potentially provide information about the operator’s cognitive state without distracting the user from the main task and free of any conditioning from the subjective judgment of an observer or of the user itself [25]. Another growing area for BCIs is mental workload estimation, exploring the effects of mental workload and fatigue upon the P300 response (used for word-spelling BCIs) and the alpha-theta EEG bands. In this regard, there is currently a movement within the BCI community to integrate other signal types into “hybrid BCIs” [67] to increase the granularity of the monitored response.

2.6. Ethical and Legal Issues in BCI

Brain–computer interfacing research poses ethical and legal issues related to the question of mind-reading as well as mind-conditioning. BCI research and its translation to therapeutic intervention gave rise to significant ethical, legal, and social concerns, especially about person-hood, stigma, autonomy, privacy, safety, responsibility, and justice [110,111]. Talks of “brain hacking” and “brain phishing” triggered a debate among ethicists, security experts, and policymakers to prepare for a world of BCI products and services by predicting implications of BCI to liberty and autonomy.
Concerning the mind-reading issue, for example, the contribution in [112] discusses topics such as the representation of persons with communication impairments, dealing with technological complexity and moral responsibility in multidisciplinary teams, and managing expectations, ranging from an individual user to the general public. Furthermore, the contribution in [112] illustrates that, where treatment and research interests conflict, ethical concerns arise.
With reference to the mind-conditioning issue, the case of deep brain stimulation (DBS) is discussed in [113]. DBS is currently used to treat neurological disorders such as Parkinson’s disease, essential tremor, and dystonia and is explored as an experimental treatment for psychiatric disorders such as major depression and obsessive compulsive disorder. Fundamental ethical issues arise in DBS treatment and research, the most important of which are balancing risks and benefits and ensuring respect for the autonomous wish of the patient. This implies special attention to patient selection, psycho-social impact of treatment, and effects on the personal identity. Moreover, it implies a careful informed consent process in which unrealistic expectations of patients and their families are addressed and in which special attention is given to competence. A fundamental ethical challenge is to promote high-quality scientific research in the interest of future patients while at the same time safeguarding the rights and interests of vulnerable research subjects.

2.7. Data Security

Data security and privacy are important aspects of virtually every technological application related to human health. With the digitization of almost every aspect of our lives, privacy leakage in cyber space has become a pressing concern [114]. Considering that the data associated with BCI applications are highly sensitive, special attention should be given to keeping their integrity and security. Such a topic is treated in a number of recent scientific papers.
The objective of the paper [115] is to ensure security in a network supporting BCI applications that identify brain activities in real time. To achieve such a goal, the authors of this paper proposed a Radio Frequency Identification (RFID)-based system made of semi-active RFID tags placed on the patient’s scalp. Such tags transmit the collected brain activities wirelessly to a scanner controller, which consists of a mini-reader and a timer integrated together for every patient. Additionally, the paper proposed a novel prototype of an interface called the “BCI Identification System” to assist the patient in the identification process.
BCI data represent an individual’s brain activity at a given time. Like many other kinds of data, BCI data can be utilized for malicious purposes: a malicious BCI application (e.g., a game) could allow an attacker to phish an unsuspecting user enjoying a game and record the user’s brain activity. By analyzing the unlawfully collected data, the attacker could be able to infer private information and characteristics regarding the user without the user’s consent or awareness. The paper [114] demonstrates the ability to predict and infer meaningful personality traits and cognitive abilities by analyzing resting-state EEG recordings of an individual’s brain activity using a variety of machine learning methods.
Unfortunately, manufacturers of BCI devices focus on application development, without paying much attention to security and privacy-related issues. Indeed, an increasing number of attacks on BCI applications exposed the existence of such issues. For example, malicious developers of third-party applications could extract private information of users. In the paper [116], the authors focused on the security and privacy of BCI applications. In particular, they classified BCI applications into four usage scenarios: (1) neuromedical applications, (2) user authentication, (3) gaming and entertainment, and (4) smartphone-based applications. For each usage scenario, the authors discussed security and privacy issues and possible countermeasures.

2.8. Performance Metrics

Metrics represent a critical component of the overall BCI experience; hence, it is of prime importance to develop and evaluate BCI metrics. Recent research papers have investigated performance metrics in specific applications, as outlined below.
Affective brain–computer interfaces are a relatively new area of research in affective computing. The estimation of affective states could improve human–computer interactions as well as the care of people with severe disabilities [117]. The authors of such a paper reviewed articles published in the scientific literature and found that a significant number of articles did not consider the presence of class imbalance. To properly account for the effect of class imbalance, the authors of [117] suggest the use of balanced accuracy as a performance metric and its posterior distribution for computing credible intervals.
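The distinction matters in practice. A minimal example with hypothetical labels shows how plain accuracy rewards a degenerate classifier on imbalanced data, while balanced accuracy (the mean of the per-class recalls) reveals chance-level behavior.

```python
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score

# Imbalanced affective-state labels: 90 "neutral" vs 10 "target" trials
y_true = np.array([0] * 90 + [1] * 10)
y_pred = np.zeros(100, dtype=int)        # degenerate classifier: always "neutral"

plain = accuracy_score(y_true, y_pred)              # 0.90, deceptively good
balanced = balanced_accuracy_score(y_true, y_pred)  # 0.50, i.e., chance level
```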
The research work summarized in the paper [118] investigated the performance impact of time-delay data augmentation in motor-imagery classifiers for BCI applications. The considered strategy is an extension of the common spatial patterns method, which consists of accommodating available additional information about intra- and inter-electrode correlations into the information matrix employed by the conventional Common Spatial Patterns (CSP) method. Experiments based on EEG signals from motor-imagery datasets, in a context of differentiation between binary (left- and right-hand) movements, result in an overall classification improvement. The analyzed time-delay data-augmentation method improves the motor-imagery BCI classification accuracy with a predictable increase in computational complexity.
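For reference, the conventional CSP method mentioned above can be sketched as a generalized eigenvalue problem on the two class-covariance matrices. This is a minimal implementation on synthetic trials, not the augmented method of [118]; filters with extreme eigenvalues maximize the variance of one class relative to the other.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)

def csp_filters(X1, X2, n_pairs=1):
    """Common Spatial Patterns from two trial sets of shape (trials, channels, samples)."""
    def avg_cov(X):
        covs = [trial @ trial.T / np.trace(trial @ trial.T) for trial in X]
        return np.mean(covs, axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    # Generalized eigenproblem C1 w = lambda (C1 + C2) w, eigenvalues ascending
    vals, vecs = eigh(C1, C1 + C2)
    # Keep filters with the smallest and largest eigenvalues (most discriminative)
    idx = np.r_[:n_pairs, -n_pairs:0]
    return vecs[:, idx].T

# Toy trials: class 1 has extra variance on channel 0, class 2 on channel 2
n_tr, n_ch, n_s = 30, 4, 200
X1 = rng.normal(size=(n_tr, n_ch, n_s)); X1[:, 0] *= 3.0
X2 = rng.normal(size=(n_tr, n_ch, n_s)); X2[:, 2] *= 3.0
W = csp_filters(X1, X2)

# Log-variance of CSP-filtered trials is the classical MI-BCI feature
feat = np.log(np.var(W @ X1[0], axis=1))
```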
As BCIs become more prevalent, it is important to explore the array of human traits that may predict an individual’s performance when using BCI technologies. The exploration summarized in the paper [119] was based on collecting and analyzing the data of 51 participants. Such an experiment explored the correlations between performance and demographics. The measurements were correlated with the participants’ ability to control the BCI device and contributed to identifying correlations among a variety of traits, including brain hemispheric dominance and the Myers–Briggs personality-type indicator [120]. The preliminary result of such an experiment was that a combination of human traits is being turned on and off in these individuals while using BCI technologies, affecting their performance.

2.9. Further Readings in BCI Challenges and Current Issues

Every technical advancement in a multidisciplinary field such as BCI unavoidably paves the way to new progress as well as unprecedented challenges. The paper [121] attempted to present an all-encompassing review of brain–computer interfaces and the scientific advancements associated with them. The ultimate goal of such a general overview of the BCI technology was to underscore the applications, practical challenges, and opportunities associated with BCI technology.
We found the study in [122] to be very interesting and insightful. Such a study aimed to assess BCI users’ experiences, self-observations, and attitudes in their own right and looked for social and ethical implications. The authors conducted nine semi-structured interviews with medical BCI users. The outcome of such a study was that BCI users perceive themselves as active operators of a technology that offers them social participation and impacts their self-definition. Users understand that BCIs can contribute to retaining or regaining human capabilities and that BCI use contains elements that challenge common experiences, for example, when the technology proves to be in conflict with their affective side. The majority of users feel that the potential benefits of BCIs outweigh the risks because the BCI technology is considered to promote valuable qualities and capabilities. BCI users appreciate the opportunity to regain lost capabilities as well as to gain new ones.

3. Electrophysiological Recordings for Brain–Computer Interfacing

The present section discusses in detail typical EEG recording steps and responses that are widely used in BCI applications.

3.1. Electrode Locations in Electroencephalography

In EEG measurement, the electrodes are typically attached to the scalp by means of conductive pastes and special caps (if necessary), although EEG systems based on dry electrodes (which do not need pastes) have been developed. For BCI applications, the EEG signals are usually picked up by multiple electrodes [2,123,124]. Concerning EEG systems, research is currently being conducted toward producing dry sensors (i.e., without any conductive gel) and eventually a water-based technology instead of the classical gel-based technology, providing high signal quality and better comfort. As reported in [125], three different kinds of dry electrodes were compared against a wet electrode. Dry electrode technology showed excellent standards, comparable to wet electrodes in terms of signal spectra and mental-state classification. Moreover, the use of dry electrodes reduced the time taken to apply sensors, hence enhancing user comfort. In the case of multi-electrode measurement, the International 10-20 [126], Extended 10-20 [127], International 10-10 [128], and International 10-5 [129,130] methods have stood as the de facto standards for electrode arrangement.
In these systems, the locations on the head surface are described by relative distances between cranial landmarks. In the International 10-20 system, for example, the landmarks are the nasion, a point located between the eyes at the height of the eyes, and the inion, which is the most prominent projection of the occipital bone at the posterioinferior (lower rear) part of the human skull [126]. The line connecting these landmarks along the head surface is divided by marks into short segments of 10% and 20% of the whole length. Each pick-up point is defined as the intersection of the lines connecting these marks along the head surface. Typical electrode arrangements are shown in Figure 5.
EEG systems record the electrical potential difference between any two electrodes [126]. Referential recording is a method that reads the electrical potential difference between a target electrode and a reference electrode, where the reference is common to all target electrodes. Earlobes, which are considered electrically inactive, are widely used to place the reference electrodes. In contrast, bipolar recording reads the electrical potential difference between pairs of electrodes located on the head surface.
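The two recording schemes amount to simple array arithmetic; a sketch on synthetic data (the channel labels in the comments are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
eeg = rng.normal(size=(4, 1000))    # 4 scalp channels, raw amplifier output
a1 = rng.normal(size=1000)          # left-earlobe electrode
a2 = rng.normal(size=1000)          # right-earlobe electrode

# Referential recording: every channel against the linked-earlobe average
reference = (a1 + a2) / 2
referential = eeg - reference       # broadcasting subtracts the reference row-wise

# Bipolar recording: potential differences between adjacent scalp electrodes
bipolar = eeg[:-1] - eeg[1:]        # e.g., chains in the style Fp1-F3, F3-C3, ...
```

Note that the difference between any two referentially recorded channels is independent of the reference, which is why re-referencing can be applied offline.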
Electroencephalogram recording systems are generally more compact than the systems used in functional magnetic resonance imaging, near-infrared spectroscopy, and magnetoencephalography. Two further important aspects of brain signal acquisition are temporal and spatial resolution. Spatial resolution refers to the degree of accuracy with which brain activity can be located in space, while temporal resolution refers to the degree of accuracy, on a temporal scale, with which a functional imaging technique can describe a neural event [131].
The temporal resolution of electroencephalography is higher than the temporal resolution afforded by functional magnetic resonance and by near infrared spectroscopy [132]. However, since it is difficult to make electrodes smaller, the spatial resolution of electroencephalography is very low compared to the spatial resolution afforded by other devices. In addition, the high-frequency components of the electrical activity of the neurons are attenuated in the observed signals because the physiological barriers between the emitters and the receivers, such as the skull, act as lowpass filters. The signals picked up by the electrodes are contaminated by noise caused by poor contact of the electrodes with the skin, by muscle movements (electromyographic signals), and by eye movements (electrooculographic signals).

3.2. Event-Related Potentials

An event-related potential (ERP) is an electrophysiological response to an external or internal stimulus that can be observed as an electric potential variation in the EEG [133]. A widely used ERP in BCIs is the P300, which is a positive deflection in the EEG that occurs around 300 ms after a target stimulus has been presented. A target can be given as either a single stimulus or a combined set of visual, auditory, tactile, olfactory, or gustatory stimuli.
An example of the waveform of a P300 is shown in Figure 6. The signal labeled “Target” is the average signal observed in the period of 0–1 s after displaying a visual stimulus to which the subject attended. The signal labeled “Non Target” is the average signal observed in the period of 0–1 s after displaying a stimulus that is ignored by the subject. It can be easily recognized that there is a difference in the potential curves between “Targets” and “Non Targets” in the period of 0.2–0.4 s. Such a potential, elicited at around 300 ms, is precisely a P300 response.
Such an event-related potential is easily observed in EEG readouts through an experimental paradigm called the “oddball task” [134]. In the oddball paradigm, the subject is asked to react, by either counting or button-pressing, to target stimuli that are hidden as rare occurrences among a series of more common stimuli that often require no response. A well-known instance of a BCI that makes use of the oddball paradigm is the P300 speller. Such a speller aims to input Latin letters on the basis of the user’s will. A typical P300 speller consists of rows and columns of a two-dimensional matrix of Latin letters and symbols displayed on a screen, as illustrated in Figure 7. Each row and column flashes for a short time (typically 50–500 ms) in random order, with an interval between successive flashes of 500 to 1000 ms [135]. The user gazes at a target symbol on the screen. When the row or column including the target symbol flashes, the user counts the incidences of the flash. By detecting a P300 at the flash of a row and a column, the interface can identify the symbol that the user is gazing at.
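The selection logic of a P300 speller can be sketched as follows, assuming hypothetical classifier scores for each row and column flash (higher means more P300-like), averaged over repeated stimulation sequences:

```python
import numpy as np

# 6x6 symbol matrix of a typical P300 speller
matrix = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                   list("STUVWX"), list("YZ1234"), list("56789_")])

# Hypothetical P300-classifier scores for the six row and six column flashes
row_scores = np.array([0.1, 0.2, 0.9, 0.1, 0.2, 0.1])
col_scores = np.array([0.2, 0.1, 0.1, 0.8, 0.1, 0.2])

# The attended symbol lies at the intersection of the best row and best column
symbol = matrix[np.argmax(row_scores), np.argmax(col_scores)]  # 'P'
```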
Although the majority of P300-based BCIs use P300 responses to visual stimuli, visual stimuli are not suitable for patients whose vision or eye movement is impaired. For such users, alternative BCIs that use either auditory or tactile stimuli have been developed. Several giant leaps have been made in the BCI field in recent years from several points of view. For example, many works have been produced in terms of new BCI spellers. The Hex-O-Spell, a gaze-independent BCI speller that relies on imagined movement, was first explained in 2006 by Blankertz et al. [136] and presented in [76,137]. As reported in [138], the first variation of the Hex-O-Spell was used as an ERP P300 BCI system and was compared with other variations of the Hex-O-Spell utilizing ERP systems (Cake Speller and Center Speller) [139]. These two GUIs were developed to be compared with the Hex-O-Spell ERP in [138] for gaze-independent BCI spellers. In [138], the Hex-O-Spell was transformed into an ERP system to test whether ERP spellers could also be gaze-independent. The purpose was to check whether BCI spellers could replace eye-tracker speller systems. The aforementioned change in the design could improve the performance of the speller and could provide more useful control without the need for spatial attention. A typical implementation of auditory P300-based interfaces exploits spatially distributed auditory cues generated via multiple loudspeakers [140,141]. Tactile P300-based BCIs are also effective as an alternative to visual-stimuli-based interfaces [72]. In this kind of BCI, vibrating cues are given to users by multiple tactors.
The detection of a P300 response is not straightforward due to the very low signal-to-noise ratio of the observed EEG readouts; thus, it is necessary to record several trials for the same target symbol. Averaging over multiple trials as well as lowpass filtering are essential steps to ensure the recognition of P300 responses. Statistical approaches are used for the detection of a P300 response, such as linear discriminant analysis combined with dimensionality reduction (including downsampling and principal component analysis), as well as convolutional neural networks [142]. The stepwise linear discriminant analysis is widely used as an implementation of LDA. Moreover, the well-known SVM algorithm is applicable to classifying a P300 response.
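A typical preprocessing step before (SW)LDA classification, lowpass filtering plus downsampling, can be sketched as follows; the epoch is synthetic, and the sampling rate and decimation factor are illustrative:

```python
import numpy as np
from scipy.signal import decimate

rng = np.random.default_rng(5)
fs = 240                                 # sampling rate often used in P300 spellers
epoch = rng.normal(size=(8, 240))        # 8 channels x 1 s post-stimulus epoch

# decimate applies an anti-aliasing lowpass filter, then keeps every 8th sample,
# so a 240-sample epoch becomes 30 samples per channel
reduced = decimate(epoch, q=8, axis=1)

# Concatenated channels form the feature vector fed to (SW)LDA or an SVM
features = reduced.reshape(-1)
```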
Rapid serial visual presentation appears to be one of the most appropriate paradigms for patients using a P300-based brain–computer interface, since ocular movements are not required. However, the use of different locations for each stimulus may improve the overall performance. The paper [143] explored how spatial overlap between stimuli influences performance in a P300-based BCI. Significant differences in accuracy were found between the 0% overlap condition and all the other conditions, and between 33.3% and higher overlaps, namely 66.7% and 100%. Such results were explained by hypothesizing a modulation of the non-target stimulus amplitude caused by the overlapping factor.

3.3. Evoked Potentials

An evoked potential (EP) is an electrophysiological response of the nervous system to external stimuli. A typical stimulation that elicits evoked potentials can be visual, auditory, or tactile. By modulating stimulation patterns, it is possible to construct several kinds of external stimuli assigned to different commands.
The evoked potential corresponding to a flickering light falls in the category of SSVEPs. The visual flickering stimulus excites the retina, which elicits an electrical activity at the same frequency as the visual stimulus. When the subject gazes at a visual pattern flickering at a frequency in the range 3–70 Hz, an SSVEP is observed in an EEG signal recorded through electrodes placed in the proximity of the visual cortex [144]. Figure 8 shows the power spectra of the signals observed when a subject gazes at a visual pattern flickering at a frequency of 12 Hz and when the subject does not gaze at any pattern. When the subject gazes at the active stimulus, the power spectrum exhibits a peak at 12 Hz.
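This peak-based behavior can be reproduced on a synthetic signal; the sketch below assumes a weak 12 Hz SSVEP buried in broadband noise and locates the spectral peak with a discrete Fourier transform:

```python
import numpy as np

fs, dur, f_stim = 256, 4.0, 12.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(6)

# Synthetic occipital EEG: a weak 12 Hz SSVEP plus broadband noise
eeg = 0.8 * np.sin(2 * np.pi * f_stim * t) + rng.normal(scale=1.0, size=t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

peak_freq = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
```

With 4 s of data, the frequency resolution is 0.25 Hz, so the 12 Hz stimulation frequency falls exactly on a Fourier bin and the peak is recovered despite the low signal-to-noise ratio.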
Visual flickering stimuli can be generated by LEDs [145] or computer monitors [146]. LEDs have the advantage of affording arbitrary flickering patterns, with the drawback that the system requires special hardware, while computer monitors are controlled by software, but the flickering frequency is strongly dependent on their refresh rate. It is important to consider the safety and comfort of visual stimuli: modulated visual stimuli at certain frequencies can provoke epileptic seizures [147], while bright flashes and repetitive stimulation may impair the user’s vision and induce fatigue [148].
A BCI can utilize flickering visual stimuli that elicit SSVEPs to implement multiple command inputs. Such an interface presents multiple visual targets with different flickering patterns (typically with different frequencies within the range 3–70 Hz [148,149,150]) for a user to gaze at. By detecting the frequency of the recorded evoked potential, the interface determines the user-intended command. An example of the arrangement of multiple checkerboards on a monitor is illustrated in Figure 9, where six targets are displayed onscreen. Each target makes a monochrome inversion of the checkerboard at a different frequency, as illustrated in Figure 10. When the user gazes at the stimulus flickering at $F_1$ Hz, the steady-state evoked potential corresponding to the frequency $F_1$ Hz is elicited and measured in the observed EEG signal. By analyzing the EEG to recognize the frequency of the SSVEP response, a BCI is able to determine which target the user is gazing at. If a computer monitor is used for displaying the visual targets, the number of available flickering frequencies is limited by its refresh rate. Therefore, this type of BCI was unable to implement many commands, even though it was able to achieve a high input speed [124,132]. Recently, a stimulus setting using frequency and phase modulation [151,152,153] was devised to provide many commands.
A straightforward way to recognize the frequency of an SSVEP is to check the peak of the discrete Fourier transform of an EEG channel (i.e., the signal picked up by a single electrode). For multiple electrodes, a method based on canonical correlation analysis (CCA) [154,155] is also widely used for recognizing the frequency [156]. In this method, the canonical correlation is computed between the $M$-channel EEG, denoted by
$\mathbf{x}(t) = [x_1(t), x_2(t), \ldots, x_M(t)]^\top,$
and the reference Fourier series with fundamental frequency $f$, denoted by
$\mathbf{y}(t) = [\sin(2\pi f t), \cos(2\pi f t), \sin(2\pi \cdot 2f t), \cos(2\pi \cdot 2f t), \ldots, \sin(2\pi \cdot Lf t), \cos(2\pi \cdot Lf t)]^\top,$
where $L$ denotes the number of harmonics. The canonical correlation is calculated for all candidate frequencies, $f = f_1, \ldots, f_N$, where $N$ is the number of visual targets. The frequency that gives the maximum canonical correlation is recognized as the frequency of the target that the user is gazing at. This approach can be improved by applying a bank of bandpass filters [157,158]. Recent studies have pointed out that the nonuniform spectrum of the spontaneous (background) EEG can deteriorate the performance of frequency recognition algorithms and have proposed efficient methods that achieve frequency recognition while accounting for the effects of the background EEG [159,160,161].
In the CCA method for identifying the target frequency, the reference signal $\mathbf{y}(t)$ can be replaced by calibration signals, i.e., EEG signals for each target collected over multiple trials. There are several studies on data-driven methods for the CCA [80,162]. Recently, it has been reported [163,164,165] that recognition accuracy can be improved by utilizing the correlation between trials with respect to the same target. This method is called task-related component analysis [166], and it can be applied to detecting the target frequency as well as the target phase.
Readers may wonder whether this type of interface can be replaced by eye-tracking devices, as the latter are much simpler to implement. Indeed, the answer to such a question is mixed. For example, the comparative studies in [167,168] have suggested that, for small targets on the screen, SSVEP-BCIs achieve higher accuracy in command recognition than eye-tracking-based interfaces.
SSVEP-based BCIs suffer from a few limitations. The power of the SSVEP response is very weak when the flicker frequency is larger than 25 Hz [169]. Moreover, every SSVEP elicited by a stimulus with a certain frequency includes secondary harmonics. As mentioned earlier, a computer monitor can generate visual stimuli with frequencies limited by the refresh rate; hence, possible frequencies that could be used to form visual targets are strictly limited [146,170]. In order to overcome the limitation of using targets flickering with different frequencies (called “frequency modulation” [171]), several alternative stimuli have been proposed. Time-modulation uses the flash sequences of different targets that are mutually independent [172]. Code-modulation is another approach that uses pseudo-random sequences, which are typically the “m-sequences” [81]. A kind of shift-keying technique (typically used in digital communications) is also used to form flickering targets [170]. Another paradigm is waveform-modulation [173], where various types of periodic waveforms, such as rectangle, sinusoidal, and triangle waveforms, are used as stimuli. In addition, it has been experimentally verified that different frequencies as well as phase-shifts allocated to visual targets greatly increase the number of targets [161,174]. Surveys that illustrate several methods and paradigms for stimulating visually evoked potentials are found in [148,175,176].
Additionally, a type of SSVEP obtained by directing the user’s attention to the repetition of a short-term stimulus, such as a visual flicker, can be utilized. Repeated tactile [177] and auditory [178,179] stimuli can likewise evoke such potentials.

3.4. Event-Related Desynchronisation/Synchronization

Brain–computer interfaces that use imagination of muscle movements have been extensively studied. This type of interface recognizes brain activities around motor cortices associated with imagining movements of body parts such as hands, feet, and tongue [132]. This kind of interface is called “motor-imagery-based” (MI-BCI). The MI-BCI is based on the assumption that different motor imagery tasks, such as right-hand movement and left-hand movement, activate different brain areas. The imagined tasks are classified through an algorithmic analysis of the recorded EEG readouts and are chosen by the user to communicate different messages.
Neurophysiological studies suggest [180] that motor imagery tasks decrease the energy in certain frequency bands called mu (8–15 Hz) and beta (10–30 Hz) in the EEG signals observed through electrodes located on the (sensory)motor cortex [181]. The decrease/increase in energy is called event-related desynchronisation/synchronization [182,183]. There exist several methods to quantify event-related desynchronisation (ERD)/synchronization (ERS) [184]. A typical quantification is the relative power decrease or increase, defined as
$$\mathrm{ERD} = \frac{P_{\mathrm{event}} - P_{\mathrm{ref}}}{P_{\mathrm{ref}}},$$
where $P_{\mathrm{event}}$ denotes the power within the frequency band of interest in the period after the event and $P_{\mathrm{ref}}$ denotes the power within the same band during the preceding baseline (or reference) period.
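The ERD/ERS quantification above can be sketched as follows; the band edges, filter order, and window choices are illustrative assumptions, not taken from the cited studies:

```python
# Minimal ERD computation following the relative-power definition above;
# band edges, filter order, and window choices are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def band_power(x, fs, band):
    """Mean power of x within a frequency band (Butterworth + zero-phase filter)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.mean(filtfilt(b, a, x) ** 2)

def erd(x, fs, band, ref_slice, event_slice):
    """ERD = (P_event - P_ref) / P_ref; negative values mean desynchronisation."""
    p_ref = band_power(x[ref_slice], fs, band)
    p_event = band_power(x[event_slice], fs, band)
    return (p_event - p_ref) / p_ref

# Toy example: the mu-band amplitude halves at t = 2 s, i.e., power drops by ~75%.
fs = 250
t = np.arange(4 * fs) / fs
x = np.where(t < 2.0, 1.0, 0.5) * np.sin(2 * np.pi * 11 * t)
value = erd(x, fs, (8, 15), slice(0, 2 * fs), slice(2 * fs, 4 * fs))
print(round(value, 2))  # roughly -0.75
```

Averaging such values over many trials, as in Figure 11, yields the ERD/ERS time course.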
The area of the brain cortex where event-related desynchronisation/synchronization is observed is associated with the body part whose movement the subject imagines [183,185,186]. This phenomenon implies that it is possible to infer from the EEG which body part the subject imagines moving by detecting the location where event-related desynchronisation occurs. ERD is induced by motor imagery tasks performed by healthy subjects as well as paralyzed patients [187]. An example of event-related desynchronisation observed in the EEG is shown in Figure 11.
The EEG signals were recorded while a subject performed motor imagery tasks of the left and right hands [188]. After the recording, the EEG signals were bandpass-filtered with a pass-band of 8–30 Hz. Figure 11 illustrates the EEG power averaged over 100 trials of left- or right-hand motor imagery, normalized by a baseline power corresponding to signals measured before the motor imagery tasks. (Note that the ERD is defined as the power relative to the baseline power; therefore, if the current power is larger than the baseline power, the ERD can exceed 100%.) The symbol “0” on the horizontal axis marks the time when the subject started performing the tasks. The decrease in power while the subject performs the tasks can be clearly observed in Figure 11.
One of the merits of the MI-BCI over BCIs based on the perception of flickering stimuli is that no device is needed to display the stimuli. Moreover, it has recently been reported that the detection of motor imagery tasks, combined with feedback, is useful for the rehabilitation of patients who suffer from motor disorders caused by brain injuries [189,190,191,192]. The MI-BCI can be utilized in rehabilitation to recover motor functions as follows. One rehabilitation procedure for the recovery of motor functions is to have a subject perform movements of a disabled body part in response to a cue and to provide visual or physical feedback [189]. The coincidence of the subject’s intention to move and the corresponding feedback is supposed to promote plasticity of the brain.
In rehabilitation, it is therefore considered crucial to generate the feedback coincidentally with the intention that elicited it. In the general procedure illustrated above, the cue elicits the intention, which is detected from the EEG in order to deliver the feedback to the patient. When the motor-imagery BCI is used for rehabilitation, the feedback generation is controlled by the interface: the MI-BCI enables the rehabilitation system to detect the intention of a movement and to generate the feedback at the appropriate time. Some studies have suggested that rehabilitation based on MI-BCIs can promote brain plasticity more efficiently than conventional cue-based systems [189,190,191,192].
A further promising paradigm related to the ERD is passive movement (PM). Brain–computer interfaces exploiting passive movements are instances of passive BCIs. The passive movement is typically performed with a mechatronic finger rehabilitation device [193]. An early report on cortical activity during PM suggested that PMs consisting of brisk wrist extensions, performed with the help of a pulley system, resulted in significant ERD after the beginning of the movement, followed by ERS in the beta band [194]. A recent study reported that the classification accuracy calculated from EEG signals during passive and active hand movements did not differ significantly from the classification accuracy for detecting MI. It has also been reported that ERD is induced not only by MI but also by passive action observation (AO) or by a combination of MI and AO [195,196,197]. PM and AO cause less fatigue to users; therefore, they are very promising for rehabilitation purposes.
A well-known method for extracting brain activity that is used in MI-BCIs is based on the notion of common spatial pattern (CSP) [132,198,199]. The CSP consists of a set of spatial weight coefficients corresponding to the electrodes recording a multichannel EEG. These coefficients are determined from a measured EEG in such a way that the variances of the signal extracted by the spatial weights maximally differ between two tasks (e.g., left- and right-hand movement imagery). Specifically, the CSP is given as follows: let $C_1$ and $C_2$ be the spatial covariance matrices of a multichannel EEG recorded during two tasks (task 1 and task 2, respectively). The CSP for task 1 is given as the generalized eigenvector $\mathbf{w}$ corresponding to the maximum generalized eigenvalue $\lambda$ of the matrix pencil $(C_1, C_2)$:
$$C_1 \mathbf{w} = \lambda C_2 \mathbf{w}.$$
The generalized eigenvector associated with the minimum generalized eigenvalue is the CSP for task 2. It should be noted that the CSP can also be regarded as a spatial filter that projects the observed EEG signals onto a subspace to extract features, which are assigned to a class corresponding to the subject’s cerebral status. In addition, the CSP is strongly related to a frequency band. Even though the brain activity caused by a motor-related task is observed in the mu and beta bands, the bandwidth and the center frequency depend on the individual. To specify the most effective frequency band, several variants of the CSP have been proposed [200,201,202,203,204,205,206]. The underlying idea behind these methods is to incorporate finite impulse response filters, adapted to the measured EEG, into the process of finding the CSP. In the CSP methods, the covariance matrices $C_1$ and $C_2$ must be estimated from the observed EEG, and the quality of this estimation directly affects the accuracy of the classifier. A method for actively selecting the data from the observed dataset was also proposed [207].
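Under the definitions above, the CSP filters can be obtained with a standard generalized eigendecomposition. The following sketch (hypothetical helper names; trace-normalized covariances, a common but not universal choice) illustrates the idea on toy two-channel data:

```python
# Compact CSP sketch: the spatial filters are the extreme generalized
# eigenvectors of the pencil (C1, C2). Helper names are illustrative.
import numpy as np
from scipy.linalg import eigh

def covariance(trials):
    """Average trace-normalized spatial covariance over trials (n_ch, n_samples)."""
    covs = [X @ X.T / np.trace(X @ X.T) for X in trials]
    return np.mean(covs, axis=0)

def csp_filters(C1, C2):
    """Generalized eigenvectors of C1 w = lambda C2 w, sorted by eigenvalue.
    Solving the equivalent pencil (C1, C1 + C2) is numerically safer and
    preserves the eigenvector ordering."""
    eigvals, W = eigh(C1, C1 + C2)
    return W[:, np.argsort(eigvals)]

# Toy data: channel 0 carries task-1 variance, channel 1 carries task-2 variance.
rng = np.random.default_rng(1)
task1 = [np.diag([3.0, 1.0]) @ rng.standard_normal((2, 200)) for _ in range(20)]
task2 = [np.diag([1.0, 3.0]) @ rng.standard_normal((2, 200)) for _ in range(20)]
W = csp_filters(covariance(task1), covariance(task2))
w_task1 = W[:, -1]                  # filter maximizing task-1 variance
print(np.argmax(np.abs(w_task1)))   # the filter weights channel 0 most heavily
```

The log-variances of the filtered signals are then typically used as features for a linear classifier.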
While the CSP method is suitable for two-class classification, multiclass extensions of the CSP have been proposed, such as one-versus-the-rest CSP and simultaneous diagonalization [185]. Another approach [208] to the classification of motor-imagery EEG is to employ Riemannian geometry, where empirical covariances are classified directly on the Riemannian manifold [209] instead of finding CSPs. The basic idea is that, on the Riemannian manifold of symmetric positive-definite matrices, each point (covariance matrix) is projected onto a Euclidean tangent space, where standard classifiers such as linear discriminant analysis [208] and support vector machines [210] may be employed. The use of filter banks can enhance the classification performance [211]. A review paper on this approach has been published [212]. A major issue in this approach is how to choose the reference tangent space, which is defined as the one at the “central point” of the collection of spatial covariance matrices, computed with the help of specific numerical algorithms [213,214]. Some studies have investigated classification accuracies with respect to various types of central points [211,215]. To make it easier for the reader to identify the references in this section, Table 1 summarizes the works presented in each subsection.
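The tangent-space projection can be illustrated as follows. This is a simplified sketch: it fixes the reference point to the identity matrix instead of the Riemannian mean computed by the numerical algorithms cited above, and the function name `tangent_vector` is ours:

```python
# Tangent-space projection sketch for covariance-based classification.
# Simplification: the reference point is the identity, not the Riemannian
# mean of the training covariances; helper names are illustrative.
import numpy as np
from scipy.linalg import sqrtm, logm

def tangent_vector(C, C_ref):
    """Log-map the SPD matrix C onto the Euclidean tangent space at C_ref,
    returning the norm-preserving vectorized upper triangle."""
    iref = np.linalg.inv(sqrtm(C_ref))
    S = logm(iref @ C @ iref)                 # symmetric tangent-space matrix
    i, j = np.triu_indices_from(S)
    w = np.where(i == j, 1.0, np.sqrt(2.0))   # sqrt(2) keeps the Frobenius norm
    return np.real(S[i, j] * w)

# A diagonal covariance maps to the logs of its eigenvalues on the diagonal.
v = tangent_vector(np.diag([np.e, 1.0]), np.eye(2))
print(np.round(v, 3))  # [1. 0. 0.]
```

The resulting vectors live in a Euclidean space, so they can be fed directly to LDA or an SVM, as in the works cited above.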

4. Progress on Applications of Brain–Computer Interfaces to Control and Automation

Brain–computer interfacing is a real-time communication system that connects the brain and external devices. A BCI system can directly convert the information sent by the brain into commands that can drive external devices and can replace human limbs or phonation organs to achieve communication with the outside world and to control the external environment [115]. In other words, a BCI system can replace the normal peripheral nerve and muscle tissue to achieve communication between a human and a computer or between a human and the external environment [115].
The frequent use of EEG for BCI applications is due to several factors: it can work in most environments; it is simple and convenient to use in practice, because scalp-recording EEG equipment is lightweight, inexpensive, and easy to apply (it affords the most practical noninvasive access to brain activity); and it is characterized by a very high temporal resolution (a few milliseconds), which makes it attractive for real-time use [216,217]. The main disadvantages of EEG, on the other hand, are its poor spatial resolution (a few centimeters) and the damping of the signal by bone and skin tissue, which yields a very weak scalp-recorded EEG [218,219]. The reduced amplitude of the signal makes it susceptible to so-called artifacts, caused by other electrical activity (e.g., electromyographic activity of the muscles, electro-oculographic activity caused by eye movements, external electromagnetic sources such as power lines and electrical equipment, or movements of the cables). To reduce artifact effects and to improve the signal-to-noise ratio, most EEG electrodes require a conductive solution to be applied before usage.
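A typical first line of defense against such artifacts is frequency-domain filtering. The following sketch combines a mains notch filter with a bandpass; the cutoffs (1–40 Hz) and the 50 Hz mains frequency are illustrative assumptions, and real pipelines often add regression- or ICA-based removal of ocular and muscular activity:

```python
# Illustrative EEG preprocessing: a mains notch filter followed by a bandpass.
# Cutoffs and mains frequency are assumptions, not a prescribed pipeline.
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

def preprocess(x, fs, band=(1.0, 40.0), mains=50.0):
    """Zero-phase notch at the mains frequency, then bandpass to the EEG band."""
    b, a = iirnotch(mains, Q=30.0, fs=fs)
    x = filtfilt(b, a, x)
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

# Toy check: a 10 Hz "brain" rhythm survives, strong 50 Hz interference does not.
fs = 250
t = np.arange(4 * fs) / fs
x = np.sin(2 * np.pi * 10 * t) + 2.0 * np.sin(2 * np.pi * 50 * t)
y = preprocess(x, fs)
```

After filtering, the standard deviation of `y` is close to that of the clean 10 Hz component alone, while the 50 Hz interference is strongly attenuated.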
While BCI applications share the same goal (rapid and accurate communication and control), they differ widely in their inputs, feature extraction methods, translation algorithms, outputs, and operation protocols. Despite their limitations, BCI systems are quickly moving out of the laboratories and becoming practical systems useful for communication, control, and automation purposes. In recent years, BCIs have been validated in various noisy structured environments such as homes, hospitals, and exhibitions, and their direct application is gaining popularity with regular consumers [220]. In the last years, research efforts have addressed their use in smart environments and smart control systems, fast and smooth movement of robotic arm prototypes, motion planning of autonomous or semi-autonomous wheelchairs, and the control of orthoses and prostheses. A number of research endeavors have confirmed that devices such as wheelchairs or robot arms can already be controlled by a BCI [221].
The domain of brain–computer interfacing is broad and includes well-established applications such as controlling a cursor on a screen [222], selecting letters from a virtual keyboard [223,224], browsing the internet [225,226], and playing games [227]. BCIs are also being used in more sophisticated applications in which the brain controls robotic devices, including wheelchairs, orthoses, and prostheses [78]. BCI technologies are being applied to smart homes and smart living: in the work of [34], BCI users controlled TV channels, a digital door-lock system, and an electric light system in an unshielded environment. The world of BCI applications keeps expanding, and new fields are opening in communications, control, and automation, such as the control of unmanned vehicles [228], virtual reality applications in games [227], environmental control, and improvements in the brain control of robotic devices. To better illustrate the capabilities of EEG-based BCIs, applications related to control and automation systems are summarized in the next subsections.

4.1. Application to Unmanned Vehicles and Robotics

Recently, the authors of [228] gained extensive media attention for their demonstration of the potential of noninvasive EEG-based BCI systems in three-dimensional control of a quadcopter. Five subjects were trained to modulate their sensorimotor rhythms to control an AR drone navigating a physical space. Visual feedback was provided via a forward-facing camera placed on the hull of the drone. Brain activity was used to move the quadcopter through an obstacle course. The subjects were able to quickly pursue a series of foam ring targets by passing through them in a real-world environment. They obtained up to 90.5 % of all valid targets through the course, and the movement was performed in an accurate and continuous way. The performance of such a system was quantified by using metrics suitable for asynchronous BCI. The results provide an affordable framework for the development of multidimensional BCI control in telepresence robotics. The study showed that BCI can be effectively used to accomplish complex control in a three-dimensional space. Such an application can be beneficial both to people with severe disabilities as well as in industrial environments. In fact, the authors of [228] faced problems related to typical control applications where the BCI acts as a controller that moves a simple object in a structured environment. Such a study follows previous research endeavors: the works in [229,230] that showed the ability of users to control the flight of a virtual helicopter with 2D control, and the work of [231], that demonstrated 3D control by leveraging a motor imagery paradigm with intelligent control strategies.
Applications to different autonomous robots are under investigation. For example, the study of [232] proposed a new humanoid navigation system directly controlled through an asynchronous sensorimotor-rhythm-based BCI. The approach allows for flexible robotic motion control in unknown environments using camera vision. The proposed navigation system includes a posture-dependent control architecture and is comparable with previous mobile robot navigation systems based on an agent-based model.

4.2. Application to “Smart Home” and Virtual Reality

Applications of EEG-based brain–computer interfaces are emerging in “Smart Homes”. BCI technology can be used by disabled people to improve their independence and to maximize their residual capabilities at home. In the last years, novel BCI systems were developed to control home appliances. A prototypical three-wheel, small-sized robot for smart-home applications used to perform experiments is shown in Figure 12.
The aim of the study [233] was to improve the quality of life of disabled people through BCI control systems during daily-life activities such as opening/closing doors, switching lights on and off, controlling televisions, using mobile phones, sending messages to people in their community, and operating a video camera. To accomplish such goals, the authors of the study [233] proposed a real-time wireless EEG-based BCI system based on the commercial EMOTIV EPOC headset. EEG signals were acquired by the EMOTIV EPOC headset and transmitted through a Bluetooth module to a personal computer. The received EEG data were processed by the software provided by EMOTIV, and the results were transmitted to the embedded system to control the appliances through a Wi-Fi module. A dedicated graphical user interface (GUI) was developed to detect a key stroke and to convert it into a predefined command.
In the studies of [234,235], the authors proposed integrating the BCI technique with universal plug and play (UPnP) home networking for smart house applications. The proposed system can process EEG signals without transmitting them to back-end personal computers. This flexibility, together with the low power consumption and small volume of the wireless physiological signal acquisition and embedded signal processing modules, makes the technology suitable for various kinds of smart applications in daily life.
The study of [236] evaluated the performance of an EEG-based BCI system in controlling smart home applications with high accuracy and high reliability. In said study, a P300-based BCI system was connected to a virtual reality system that can be easily reconfigured and therefore constitutes a favorable testing environment for real smart homes for disabled people. The authors of [237] proposed an implementation of a BCI system for controlling wheelchairs and electric appliances in a smart house to assist the daily-life activities of its users. Tests performed by a subject achieved satisfactory results.
Virtual reality concerns human–computer interaction, where the signals extracted from the brain are used to interact with a computer. With advances in the interaction with computers, new applications have appeared: video games [227] and virtual reality developed with noninvasive techniques [238,239].

4.3. Application to Mobile Robotics and Interaction with Robotic Arms

The EEG signals of a subject can be recorded and processed appropriately in order to differentiate between several cognitive processes or “mental tasks”. BCI-based control systems use such mental activity to generate control commands for a device such as a robot arm or a wheelchair [132,240]. As previously said, BCIs are systems that can bypass conventional channels of communication (i.e., muscles and speech) to provide direct communication and control between the human brain and physical devices by translating different patterns of brain activity into commands in real time. This kind of control can be successfully applied to support people with motor disabilities, to improve their quality of life, to enhance residual abilities, or to replace lost functionality [78]. For example, with regard to individuals affected by neurological disabilities, the operation of an external robotic arm to facilitate handling activities could take advantage of these new communication modalities between humans and physical devices [22]. Some functions, such as selecting items on a screen by moving a cursor in a three-dimensional scene, are straightforward using BCI-based control [77,241]. However, a more sophisticated control strategy is required to accomplish control tasks at more complex levels, because most external effectors (mechanical prosthetics, motor robots, and wheelchairs) possess more degrees of freedom. Moreover, a major requirement of brain-controlled mobile robotic systems is high safety, since these mobile robots are used to transport disabled people [78]. In BCI-based control, EEG signals are translated into user intentions.
In synchronous protocols, P300- and SSVEP-based BCIs relying on external stimulation are usually adopted. In asynchronous protocols, interfaces based on event-related desynchronization and ERS, which are independent of external stimuli, are used. In fact, since asynchronous BCIs do not require any external stimulus, they appear more suitable and natural for brain-controlled mobile robots, where users need to focus their attention on driving the robot rather than on external stimuli.
Another aspect is related to the two different operational modes that can be adopted in brain-controlled mobile robots [78]. The first category is called “direct control by the BCI”, meaning that the BCI translates EEG signals into motion commands to control the robot directly. This method is computationally less complex and does not require additional intelligence. However, the overall performance of such brain-controlled mobile robots mainly depends on the performance of noninvasive BCIs, which are currently slow and uncertain [78]; in other words, the performance of the BCI system limits that of the robot. In the second category of brain-controlled robots, a shared control is developed, where a user (through the BCI) and an intelligent controller (such as an autonomous navigation system) share control over the robot. In this case, the performance of the robot also depends on its intelligence; thus, driving safety can be better ensured, and even the accuracy of inferring the user’s intention can be improved. This kind of approach is less demanding for the users, but their reduced effort translates into a higher computational cost, and the use of sensors (such as laser sensors) is often required.
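The shared-control idea can be caricatured in a few lines; this is purely illustrative logic (the command labels and the single obstacle flag are our simplifications, not any cited system's interface):

```python
# Caricature of shared control: the autonomous layer may veto the (slow,
# uncertain) BCI command when its sensors detect danger. Labels are illustrative.
def shared_control(bci_command, obstacle_ahead):
    """bci_command: 'left' | 'right' | 'forward'; obstacle_ahead: sensor flag."""
    if obstacle_ahead and bci_command == "forward":
        return "stop"  # the safety layer overrides the user's intention
    return bci_command

print(shared_control("forward", obstacle_ahead=True))  # stop
print(shared_control("left", obstacle_ahead=True))     # left
```

Real systems replace the boolean veto with probabilistic fusion of the decoded intention and the navigation planner's state.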

4.4. Application to Robotic Arms, Robotic Tele-Presence, and Electrical Prosthesis

Manipulator control requires more accuracy in reaching targets in space than the control of wheelchairs and other devices. Control of the movement of a cursor in a three-dimensional scene is the most significant pattern in BCI-based control studies [123,242]. EEG changes normally associated with left-hand, right-hand, or foot movement imagery can be used to control cursor movement [242].
Several research studies [240,243,244,245,246,247] presented applications aimed at the control of a robot or a robotic arm to assist people with severe disabilities in a variety of tasks in their daily life. In most cases, the focus of these papers is on the different methods adopted to classify the action that the robot arm has to perform based on the mental activity recorded by the BCI. In the contributions [240,243], a brain–computer interface is used to control a robot’s end-effector to follow a desired trajectory or to perform pick/place tasks. The authors use an asynchronous protocol and a new LDA-based classifier to differentiate between three mental tasks. In [243], in particular, the system uses radio-frequency identification (RFID) to automatically detect objects that are close to the robot. A simple visual interface with two choices, “move” and “pick/place”, allows the user to pick/place the objects or to move the robot. The same approach is adopted in the research framework described in [245], where the user has to concentrate his/her attention on the required option in order to execute the action visualized on the menu screen. In the work of [244], an interactive, noninvasive, synchronous BCI system is developed to control a manipulator with several degrees of freedom throughout a three-dimensional workspace.
Using a robot-assisted upper-limb rehabilitation system, in the work of [248], the patient’s intention is translated into direct control of the rehabilitation robot. The acquired signal is processed (through wavelet transform and LDA) to classify the pattern of left- and right-upper-limb motor imagery. Finally, a personal computer triggers the upper-limb rehabilitation robot to perform motor therapy and provides virtual feedback.
In the study of [249], the authors showed how BCI-based control of a robot moving in a user’s home can be successfully achieved after a training period. P300 is used in [247] to discern which object the robot should pick up and to which location the robot should take the object. The robot is equipped with a camera to frame objects. The user is instructed to attend to the image of the desired object while the border around each image is flashed in a random order. A similar procedure is used to select a destination location. From a communication viewpoint, the approach provides cues in a synchronous way. The research study [232] deals with a similar approach but with an asynchronous BCI-based direct-control system for humanoid robot navigation. The experimental procedures consist of offline training, online feedback testing, and real-time control sessions. Five healthy subjects controlled humanoid robot navigation to reach a target in an indoor maze using their EEG, based on real-time images obtained from a camera on the head of the robot.
Brain–computer interface-based control has also been adopted to manage hand or arm prostheses [1,250]. In such cases, patients underwent a training period, during which they learned to use their motor imagery. In particular, in the work described in [1], tetraplegic patients were trained to control the opening and closing of their paralyzed hand, by means of an orthosis, through an EEG recorded over the sensorimotor cortex.

4.5. Application to Wheelchair Control and Autonomous Vehicles

Power wheelchairs are traditionally operated by a joystick, with one or more switches changing the function that the joystick controls. Not all persons who could gain mobility from a powered wheelchair possess the cognitive and neuromuscular capacity needed to navigate a dynamic environment with a joystick. For these users, a “shared” control approach coupled with an alternative interface is indicated. In a traditional shared control system, the assistive technology assists the user in path navigation. Shared control systems can typically work in several modes that vary the assistance provided (i.e., the user’s autonomy) and rely on several movement algorithms. The authors of [251] suggest that shared control approaches can be classified in two ways: (1) mode changes triggered by the user via a button and (2) mode changes hard-coded to occur when specific conditions are detected.
Most of the current research on BCI-based wheelchair control concerns applications of synchronous protocols [252,253,254,255,256,257]. Although synchronous protocols have shown high accuracy and safety [253], their low response efficiency and inflexible path options can limit wheelchair control in real environments.
Minimization of user involvement is addressed by the work in [251] through a novel semi-autonomous navigation strategy. Instead of requiring user control commands at each step, the robot proposes actions (e.g., turning left or going forward) based on environmental information. The subject may reject the proposed action if he/she disagrees with it. Upon rejection, the robot takes a different decision based on the user’s intention. The system relies on the automatic detection of interesting navigational points and on a human–robot dialog aimed at inferring the user’s intended action.
The authors of the research work [252] used a discrete approach to the navigation problem, in which the environment is discretized into two regions (rectangles of 1 m², one to the left and the other to the right of the start position), and the user decides where to move next by imagining left or right limb movements. In [253,254], a P300-based (slow) BCI is used to select the destination from a list of predefined locations. While the wheelchair moves on virtual guiding paths ensuring smooth, safe, and predictable trajectories, the user can stop the wheelchair by means of a faster BCI; the system switches between the fast and slow BCIs depending on the state of the wheelchair. The paper [255] describes a brain-actuated wheelchair based on a synchronous P300 neurophysiological protocol integrated into a real-time graphical scenario builder, which incorporates advanced autonomous navigation capabilities (shared control). In the experiments, the task of the autonomous navigation system was to drive the vehicle to a given destination while avoiding obstacles (both static and dynamic) detected by a laser sensor. The goal location was provided by the user by means of the brain–computer interface.
The contributions of [256,257] describe a BCI based on SSVEPs to control the movement of an autonomous robotic wheelchair. The signals used in this work come from individuals who are visually stimulated. The stimuli are black-and-white checkerboards flickering at different frequencies.
Asynchronous protocols have been suggested for the BCI-based wheelchair control in [258,259,260]. The authors of [258] used beta oscillations in the EEG elicited by imagination of movements of a paralysed subject for a self-paced asynchronous BCI control. The subject, immersed in a virtual street populated with avatars, was asked to move among the avatars toward the end of the street, to stop by each avatar, and to talk to them. In the experiments described in [259], a human user makes path planning and fully controls a wheelchair except for automatic obstacle avoidance based on a laser range finder. In the experiments reported in [260], two human subjects were asked to mentally drive both a real and a simulated wheelchair from a starting point to a goal along a prespecified path.
Several recent papers describe BCI applications where wheelchair control is multidimensional. In fact, control commands from a single modality are not enough to meet the criteria of multidimensional control; the combination of different EEG signals can be adopted to issue multiple (simultaneous or sequential) control commands. The authors of [261,262] showed that hybrid EEG signals, such as SSVEP and motor imagery, can improve the classification accuracy of brain–computer interfaces. The authors of [263,264] adopted the combination of the P300 potential with MI or SSVEP to control a brain-actuated wheelchair; in this case, multidimensional control (direction and speed) is provided by multiple commands. In the paper of [265], the authors proposed a hybrid BCI system that combines MI and SSVEP to control the speed and direction of a wheelchair synchronously. In this system, the direction of the wheelchair was given by left- and right-hand imagery, while the idle state, without mental activities, was decoded to keep the wheelchair moving straight ahead. Synchronously, SSVEP signals induced by gazing at specific flashing buttons were used to accelerate or decelerate the wheelchair. To make it easier for the reader to identify the references in this section, Table 2 summarizes the papers about BCI applications presented in each subsection.
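The hybrid MI+SSVEP command mapping of this kind can be caricatured as follows; the labels, speed step, and function name are illustrative assumptions rather than any cited system's actual protocol:

```python
# Caricature of a hybrid control scheme: MI steers, SSVEP adjusts speed,
# and the idle state keeps the wheelchair on its current heading.
# All labels and the speed step are illustrative assumptions.
def hybrid_command(mi_label, ssvep_label, speed, step=0.1):
    """mi_label: 'left' | 'right' | 'idle'; ssvep_label: 'faster' | 'slower' | None."""
    direction = {"left": "turn_left", "right": "turn_right",
                 "idle": "straight"}[mi_label]
    if ssvep_label == "faster":
        speed = min(1.0, speed + step)
    elif ssvep_label == "slower":
        speed = max(0.0, speed - step)
    return direction, round(speed, 2)

print(hybrid_command("idle", "faster", 0.5))  # ('straight', 0.6)
print(hybrid_command("left", None, 0.6))      # ('turn_left', 0.6)
```

The point of the sketch is that the two decoders act on independent degrees of freedom, which is what makes simultaneous multidimensional control possible.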

5. Current Limitations and Challenges of the BCI Technologies

The present review paper is a collection of specific papers related to BCI technology for data capturing, methods for signal processing and information extraction, and BCI applications to control and automation. Some considerations about the limitations and challenges related to BCI usage and applications can then be inferred from an analysis of the surveyed papers. BCI development depends on close interdisciplinary cooperation between neuroscientists, engineers, psychologists, computer scientists, and rehabilitation specialists. It would benefit from the general acceptance and application of methods for evaluating translation algorithms, user training protocols, and other key aspects of BCI technology. General limitations of BCI technology include the following:
  • inaccuracy in classifying neural activity;
  • limited ability to read brain signals when the BCI is placed outside the skull;
  • in some cases, the need for invasive surgery;
  • ethical issues raised by the reading of people’s inner thoughts;
  • the bulky nature of current systems, which can make the user experience uncomfortable; and
  • the security of personal data not being guaranteed against attackers or intruders.
Other limitations relate to the methods used to record brain activity. Critical issues in EEG signal acquisition concern, for instance, the artifacts and outliers that can limit its usability, and the interpretability of the extracted features, which can be noise-affected owing to the low signal-to-noise ratio characterizing EEG signals [266].
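As one concrete (and deliberately simple) example of mitigating the low signal-to-noise ratio, a zero-phase band-pass filter restricted to the rhythm of interest is often applied before feature extraction, suppressing slow drifts (e.g. electrode or movement artifacts) and high-frequency noise. The cutoff frequencies and filter order below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_eeg(x, fs, lo=8.0, hi=30.0, order=4):
    """Zero-phase band-pass filter (here 8-30 Hz, roughly the mu/beta
    band exploited by motor-imagery BCIs). `filtfilt` runs the filter
    forward and backward so the EEG waveform is not phase-shifted.
    x: array of samples, last axis is time; fs: sampling rate in Hz."""
    nyq = fs / 2.0
    b, a = butter(order, [lo / nyq, hi / nyq], btype="band")
    return filtfilt(b, a, x, axis=-1)
```

A filter like this does not remove eye-blink or muscle artifacts whose energy overlaps the passband; those typically require the BSS/ICA-style methods discussed earlier in the paper.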
Depending on the method used to record brain activity (ECoG, fMRI, or PET), further kinds of limitations can be recognized. Concerning fMRI, the major limitations are the lack of image contrast (poor image contrast and weak image boundaries) and poorly characterized noise [267], while the main problems related to PET are its maintenance cost and setup burden. The main limitation of ECoG is its invasive nature, as it relies on a surgically implanted electrode grid. Motion artifacts, environmental noise, or eye movements can reduce the reliability of the acquired data and can limit the ability to extract relevant patterns. Moreover, the rapid variation of EEG signals over time and across sessions makes the parameters extracted from the EEG nonstationary. For instance, a change in mental state or in the level of attention can affect the EEG signal characteristics and increase their variability across experimental sessions. Owing to the chaotic behaviour of the neural system, the intrinsic nonlinear nature of the brain is better analysed by nonlinear dynamic methods than by linear ones. In BCI technology, some challenges related to the acquisition of EEG signals must also be taken into account. One such challenge concerns the identification of the optimal location for reference electrodes and the control of impedance when testing with high-density sponge electrode nets. A further relevant aspect of BCI use concerns the trade-off between the difficulty of interpreting brain signals and the amount of training needed for efficient operation of the interface [261]. Moreover, BCI systems record signals from several channels to maintain high spatial accuracy. This generally produces a large amount of data that increases with the number of channels, which can lead to the need for some kind of machine learning approach to extract relevant features [266].
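The last point can be sketched concretely: multichannel trials are first reduced to compact log-variance features (a standard step in motor-imagery pipelines, typically applied after CSP spatial filtering), and a classifier then operates on the reduced representation. The function names and the deliberately simple nearest-mean rule are illustrative choices, not a method taken from the surveyed papers:

```python
import numpy as np

def log_power_features(trials):
    """Reduce multichannel EEG trials of shape
    (n_trials, n_channels, n_samples) to per-channel log-variance
    features, shrinking each trial to n_channels numbers."""
    return np.log(np.var(trials, axis=-1) + 1e-12)

def nearest_mean_classifier(train_feats, train_labels, test_feats):
    """Minimal classifier: assign each test trial to the class whose
    mean feature vector is closest in Euclidean distance."""
    classes = np.unique(train_labels)
    means = np.stack([train_feats[train_labels == c].mean(axis=0)
                      for c in classes])
    d = np.linalg.norm(test_feats[:, None, :] - means[None, :, :], axis=-1)
    return classes[np.argmin(d, axis=1)]
```

Even this toy pipeline shows why feature extraction matters: a 64-channel trial of a few seconds contains thousands of raw samples, whereas the classifier above only ever sees one number per channel.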

6. Conclusions

Because of its nature, a BCI is conceived for continuous interaction between the brain and controlled devices, enabling external activity and the control of apparatus. The interface provides a direct communication pathway between the brain and the object to be controlled. By reading neuronal oscillations from an array of neurons and by using computer chips and programs to translate the signals into actions, a BCI can enable a person suffering from paralysis to write a book or to control a motorized wheelchair. Current BCIs require deliberate conscious thought, while future applications, such as prosthetic control, are likely to work effortlessly. One of the major challenges in developing BCI technologies has been the design of electrodes and surgical methods that are minimally invasive. In the traditional BCI model, the brain accepts an implanted mechanical device and controls it as a natural part of its representation of the body. Much current research is focused on the potential of noninvasive BCIs. Cognitive-computation-based systems use adaptive algorithms and pattern-matching techniques to facilitate communication; both the user and the software are expected to adapt and learn, making the process more efficient with practice.
Near-term applications of BCIs are primarily task-oriented and are targeted to avoid the most difficult obstacles in development. In the farther term, brain–computer interfaces will enable a broad range of task-oriented and opportunistic applications by leveraging pervasive technologies to sense and merge critical brain, behavioral, task, and environmental information [268].
The theoretical groundwork of the 1930s and 1940s and the technical advances in computing over the following decades provided the basis for dramatic increases in human efficiency. While computers continue to evolve, the interface between humans and computers has begun to present a serious impediment to the full realization of this potential payoff [241]. While machine learning approaches have led to tremendous advances in BCIs in recent years, there still exists a large variation in performance across subjects [269]. Understanding the reasons for this variation constitutes perhaps one of the most fundamental open questions in BCI research. Future research on the integration of cognitive computation and brain–computer interfacing is expected to address how direct communication between the brain and the computer can overcome this impediment by improving or augmenting conventional forms of human communication.

Author Contributions

Conceptualization, S.F., A.B., and T.T.; data curation, H.H. and T.T.; writing—original draft preparation, A.B., S.F., and F.V.; writing—review and editing, S.F., A.B., T.T., H.H., and F.V.; supervision, S.F. and T.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors wish to thank the anonymous reviewers for their valuable comments and suggestions as well as the editors of this Special Issue for the invitation to contribute.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
Acronym	Meaning
ALS	Amyotrophic lateral sclerosis
AO	Action observation
BBI	Brain-to-brain interface
BCI	Brain–computer interface
BMI	Brain–machine interface
BSS	Blind source separation
CCA	Canonical correlation analysis
CSP	Common spatial pattern
DBS	Deep brain stimulation
ECoG	Electrocorticogram
EEG	Electroencephalogram
ELM	Extreme learning machine
EOG	Electrooculogram
EP	Evoked potential
ER	Evoked response
ERD	Event-related desynchronisation
ERP	Event-related potential
ERS	Event-related synchronisation
fMRI	Functional Magnetic Resonance Imaging
GUI	Graphical user interface
ICA	Independent component analysis
InfoMax	Information maximisation
LDA	Linear discriminant analysis
LED	Light-emitting diode
MEG	Magnetoencephalogram
MI	Motor imagery
MRI	Magnetic resonance imaging
PET	Positron emission tomography
PM	Passive movement
RFID	Radio-frequency identification
SOBI	Second-order blind identification
SSVEP	Steady-state visually evoked potential
SVM	Support vector machine
UPnP	Universal plug and play

References

  1. Pfurtscheller, G.; Neuper, C.; Birbaumer, N. Human Brain-Computer Interface; CRC Press: Boca Raton, FL, USA, 2005; pp. 1–35. [Google Scholar]
  2. Wolpaw, J.R.; Birbaumer, N.; McFarland, D.J.; Pfurtscheller, G.; Vaughan, T.M. Brain-computer interfaces for communication and control. Clin. Neurophysiol. 2002, 113, 767–791. [Google Scholar] [CrossRef]
  3. Nicolelis, M. Beyond Boundaries; St. Martin’s Press: New York, NY, USA, 2012. [Google Scholar]
  4. Buzsáki, G.; Draguhn, A. Neuronal oscillations in cortical networks. Science 2004, 304, 1926–1929. [Google Scholar]
  5. Thut, G.; Schyns, P.G.; Gross, J. Entrainment of perceptually relevant brain oscillations by non-invasive rhythmic stimulation of the human brain. Front. Psychol. 2011, 2. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Cox, R.; van Driel, J.; de Boer, M.; Talamini, L.M. Slow Oscillations during Sleep Coordinate Interregional Communication in Cortical Networks. J. Neurosci. 2014, 34, 16890–16901. [Google Scholar] [CrossRef] [PubMed]
  7. Fontolan, L.; Krupa, M.; Hyafil, A.; Gutkin, B. Analytical insights on Theta-Gamma coupled neural oscillators. J. Math. Neurosci. 2013, 3. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Ritter, P.; Born, J.; Brecht, M.; Dinse, H.R.; Heinemann, U.; Pleger, B.; Schmitz, D.; Schreiber, S.; Villringer, A.; Kempter, R. State-dependencies of learning across brain scales. Front. Comput. Neurosci. 2015, 9. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Santhanam, G.; Ryu, S.I.; Yu, B.M.; Afshar, A.; Shenoy, K.V. A high-performance brain-computer interface. Nature 2006, 442, 195–198. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Jackson, M.M.; Mappus, R. Applications for brain-computer interfaces. In Brain-Computer Interfaces; Springer: London, UK, 2010; pp. 89–103. [Google Scholar] [CrossRef]
  11. Niedermeyer, E.; Da Silva, F. Electroencephalography: Basic Principles, Clinical Applications, and Related Fields; Lippincott Williams & Wilkins: Philadelphia, PA, USA, 2005. [Google Scholar]
  12. Sutter, E. The Brain Response Interface: Communication Through Visually-induced Electrical Brain Responses. J. Microcomput. Appl. 1992, 15, 31–45. [Google Scholar] [CrossRef]
  13. Becedas, J. Brain Machine Interfaces: Basis and Advances. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2012, 42, 825–836. [Google Scholar] [CrossRef]
  14. Erp, J.V.; Lotte, F.; Tangermann, M. Brain-computer interfaces: Beyond medical applications. Comput. IEEE Comput. Soc. 2012, 45, 26–34. [Google Scholar] [CrossRef] [Green Version]
  15. LaRocco, J.; Paeng, D.G. Optimizing Computer–Brain Interface Parameters for Non-invasive Brain-to-Brain Interface. Front. Neuroinform. 2020, 14. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Bos, D.P.; Reuderink, B.; van de Laar, B.; Gürkök, H.; Mühl, C.; Poel, M.; Nijholt, A.; Heylen, D. Brain-Computer Interfacing and Games. In Brain-Computer Interfaces—Applying our Minds to Human-Computer Interaction; Tan, D.S., Nijholt, A., Eds.; Human-Computer Interaction Series; Springer: London, UK, 2010; pp. 149–178. [Google Scholar] [CrossRef]
  17. Nijholt, A. BCI for games: A ‘state of the art’ survey. In Entertainment Computing—ICEC 2008, Proceedings of the 7th International Conference, Pittsburgh, PA, USA, 25–27 September 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 225–228. [Google Scholar]
  18. Aricò, P.; Borghini, G.; Di Flumeri, G.; Sciaraffa, N.; Babiloni, F. Passive BCI beyond the LAB: Current Trends and Future Directions. Physiol. Meas. 2018, 39. [Google Scholar] [CrossRef]
  19. Xu, B.; Li, W.; He, X.; Wei, Z.; Zhang, D.; Wu, C.; Song, A. Motor Imagery Based Continuous Teleoperation Robot Control with Tactile Feedback. Electronics 2020, 9, 174. [Google Scholar] [CrossRef] [Green Version]
  20. Korovesis, N.; Kandris, D.; Koulouras, G.; Alexandridis, A. Robot Motion Control via an EEG-Based Brain–Computer Interface by Using Neural Networks and Alpha Brainwaves. Electronics 2019, 8, 1387. [Google Scholar] [CrossRef] [Green Version]
  21. Blankertz, B.; Acqualagna, L.; Dähne, S.; Haufe, S.; Kraft, M.; Sturm, I.; Ušćumlic, M.; Wenzel, M.; Curio, G.; Müller, K. The Berlin Brain-Computer Interface: Progress Beyond Communication and Control. Front. Neurosci. 2016, 10. [Google Scholar] [CrossRef] [Green Version]
  22. Bamdad, M.; Zarshenas, H.; Auais, M.A. Application of BCI systems in neurorehabilitation: A scoping review. Disabil. Rehabil. Assist. Technol. 2014, 10, 355–364. [Google Scholar] [CrossRef] [PubMed]
  23. Vansteensel, M.; Jarosiewicz, B. Brain-computer interfaces for communication. Handb. Clin. Neurol. 2020, 168, 65–85. [Google Scholar] [CrossRef]
  24. Aricò, P.; Sciaraffa, N.; Babiloni, F. Brain–Computer Interfaces: Toward a Daily Life Employment. Brain Sci. 2020, 10, 157. [Google Scholar] [CrossRef] [Green Version]
  25. Di Flumeri, G.; De Crescenzio, F.; Berberian, B.; Ohneiser, O.; Kramer, J.; Aricò, P.; Borghini, G.; Babiloni, F.; Bagassi, S.; Piastra, S. Brain–Computer Interface-Based Adaptive Automation to Prevent Out-Of-The-Loop Phenomenon in Air Traffic Controllers Dealing With Highly Automated Systems. Front. Hum. Neurosci. 2019, 13. [Google Scholar] [CrossRef] [Green Version]
  26. Borghini, G.; Aricò, P.; Di Flumeri, G.; Sciaraffa, N.; Colosimo, A.; Herrero, M.T.; Bezerianos, A.; Thakor, N.V.; Babiloni, F. A new perspective for the training assessment: Machine learning-based neurometric for augmented user’s evaluation. Front. Hum. Neurosci. 2017, 11. [Google Scholar] [CrossRef]
  27. Aricò, P.; Reynal, M.; Di Flumeri, G.; Borghini, G.; Sciaraffa, N.; Imbert, J.P.; Hurter, C.; Terenzi, M.; Ferreira, A.; Pozzi, S.; et al. How neurophysiological measures can be used to enhance the evaluation of remote tower solutions. Front. Hum. Neurosci. 2019, 13. [Google Scholar] [CrossRef]
  28. Schettini, F.; Aloise, F.; Aricò, P.; Salinari, S.; Mattia, D.; Cincotti, F. Self-calibration algorithm in an asynchronous P300-based brain–computer interface. J. Neural Eng. 2014, 11, 035004. [Google Scholar] [CrossRef]
  29. Rezeika, A.; Benda, M.; Stawicki, P.; Gembler, F.; Saboor, A.; Volosyak, I. Brain–Computer Interface Spellers: A review. Brain Sci. 2018, 8, 57. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Diya, S.Z.; Prorna, R.A.; Rahman, I.; Islam, A.; Islam, M. Applying brain-computer interface technology for evaluation of user experience in playing games. In Proceedings of the 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), Cox’s Bazar, Bangladesh, 7–9 February 2019. [Google Scholar]
  31. Rosca, S.D.; Leba, M. Design of a brain-controlled video game based on a BCI system. MATEC Web Conf. 2019, 290, 01019. [Google Scholar] [CrossRef] [Green Version]
  32. Abbasi-Asl, R.; Keshavarzi, M.; Chan, D. Brain-computer interface in virtual reality. In Proceedings of the 9th International IEEE/EMBS Conference on Neural Engineering (NER), San Francisco, CA, USA, 20–23 March 2019; pp. 1220–1224. [Google Scholar]
  33. Zhong, S.; Liu, Y.; Yu, Y.; Tang, J.; Zhou, Z.; Hu, D. A dynamic user interface based BCI environmental control system. Int. J. Hum. Comput. Interact. 2020, 36, 55–66. [Google Scholar] [CrossRef]
  34. Kim, M.; Kim, M.K.; Hwang, M.; Kim, H.Y.; Cho, J.; Kim, S.P. Online Home Appliance Control Using EEG-Based Brain–Computer Interfaces. Electronics 2019, 8, 1101. [Google Scholar] [CrossRef] [Green Version]
  35. Yu, Y.; Garrison, H.; Battison, A.; Gabel, L. Control of a Quadcopter with Hybrid Brain-Computer Interface. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 2779–2784. [Google Scholar]
  36. Nicolas-Alonso, L.F.; Gomez-Gil, J. Brain computer interfaces, a review. Sensors 2012, 12, 1211–1279. [Google Scholar] [CrossRef] [PubMed]
  37. Hochberg, L.; Donoghue, J. Sensors for brain-computer interfaces. IEEE Eng. Med. Biol. Mag. 2006, 25, 32–38. [Google Scholar] [CrossRef] [PubMed]
  38. Birbaumer, N.; Murguialday, A.R.; Cohen, L. Brain-computer interface in paralysis. Curr. Opin. Neurol. 2008, 21, 634–638. [Google Scholar] [CrossRef] [PubMed]
  39. Sitaram, R.; Weiskopf, N.; Caria, A.; Veit, R.; Erb, M.; Birbaumer, N. fMRI brain-computer interfaces. IEEE Signal Process. Mag. 2008, 25, 95–106. [Google Scholar] [CrossRef]
  40. Leuthardt, E.C.; Miller, K.J.; Schalk, G.; Rao, R.P.N.; Ojemann, J.G. Electrocorticography-based brain computer interface—The Seattle experience. IEEE Trans. Neural Syst. Rehabil. Eng. 2006, 14, 194–198. [Google Scholar] [CrossRef]
  41. Singh, S.P. Magnetoencephalography: Basic principles. Ann. Indian Acad. Neurol. 2014, 17 (Suppl. 1), S107–S112. [Google Scholar] [CrossRef] [PubMed]
  42. Mandal, P.K.; Banerjee, A.; Tripathi, M.; Sharma, A. A Comprehensive Review of Magnetoencephalography (MEG) Studies for Brain Functionality in Healthy Aging and Alzheimer’s Disease (AD). Front. Comput. Neurosci. 2018, 12, 60. [Google Scholar] [CrossRef]
  43. Keppler, J.S.; Conti, P.S. A Cost Analysis of Positron Emission Tomography. Am. J. Roentgenol. 2001, 177, 31–40. [Google Scholar] [CrossRef]
  44. Vaquero, J.J.; Kinahan, P. Positron Emission Tomography: Current Challenges and Opportunities for Technological Advances in Clinical and Preclinical Imaging Systems. Annu. Rev. Biomed. Eng. 2015, 17, 385–414. [Google Scholar] [CrossRef] [Green Version]
  45. Queiroz, M.A.; de Galiza Barbosa, F.; Buchpiguel, C.A.; Cerri, G.G. Positron emission tomography/magnetic resonance imaging (PET/MRI): An update and initial experience at HC-FMUSP. Rev. Assoc. MéDica Bras. 2018, 64, 71–84. [Google Scholar] [CrossRef] [Green Version]
  46. Lu, W.; Dong, K.; Cui, D.; Jiao, Q.; Qiu, J. Quality assurance of human functional magnetic resonance imaging: A literature review. Quant. Imaging Med. Surg. 2019, 9, 1147. [Google Scholar] [CrossRef]
  47. Bielczyk, N.Z.; Uithol, S.; van Mourik, T.; Anderson, P.; Glennon, J.C.; Buitelaar, J.K. Disentangling causal webs in the brain using functional magnetic resonance imaging: A review of current approaches. Netw. Neurosci. 2019, 3, 237–273. [Google Scholar] [CrossRef]
  48. Silva, A.; Merkle, H. Hardware considerations for functional magnetic resonance imaging. Concepts Magn. Reson. Educ. J. 2003, 16, 35–49. [Google Scholar] [CrossRef]
  49. Shokoueinejad, M.; Park, D.W.; Jung, Y.; Brodnick, S.; Novello, J.; Dingle, A.; Swanson, K.; Baek, D.H.; Suminski, A.; Lake, W.; et al. Progress in the field of micro-electrocorticography. Micromachines 2019, 10, 62. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Kumar Mudgal, S.; Sharma, S.; Chaturvedi, J.; Sharma, A. Brain computer interface advancement in neurosciences: Applications and issues. Interdiscip. Neurosurg. 2020, 20, 100694. [Google Scholar] [CrossRef]
  51. Spychala, N.; Debener, S.; Bongartz, E.; Müller, H.; Thorne, J.; Philipsen, A.; Braun, N. Exploring Self-Paced Embodiable Neurofeedback for Post-stroke Motor Rehabilitation. Front. Hum. Neurosci. 2020, 13. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Pascual-Leone, A.; Amedi, A.; Fregni, F.; Merabet, L.B. The plastic human brain cortex. Annu. Rev. Neurosci. 2005, 28, 377–401. [Google Scholar] [CrossRef] [Green Version]
  53. Friehs, G.M.; Zerris, V.A.; Ojakangas, C.L.; Fellows, M.R.; Donoghue, J.P. Brain-machine and brain-computer interfaces. Stroke 2004, 35, 2702–2705. [Google Scholar] [CrossRef] [Green Version]
  54. Lee, S.; Fallegger, F.; Casse, B.; Fried, S. Implantable microcoils for intracortical magnetic stimulation. Sci. Adv. 2016, 2. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  55. Koch Fager, S.; Fried-Oken, M.; Jakobs, T.; Beukelman, D. New and emerging access technologies for adults with complex communication needs and severe motor impairments: State of the science. Augment. Altern. Commun. 2019, 35, 13–25. [Google Scholar] [CrossRef]
  56. Bočková, M.; Rektor, I. Impairment of brain functions in Parkinson’s disease reflected by alterations in neural connectivity in EEG studies: A viewpoint. Clin. Neurophysiol. 2019, 130, 239–247. [Google Scholar] [CrossRef] [PubMed]
  57. Borhani, S.; Abiri, R.; Jiang, Y.; Berger, T.; Zhao, X. Brain connectivity evaluation during selective attention using EEG-based brain-computer interface. Brain-Comput. Interfaces 2019, 6, 25–35. [Google Scholar] [CrossRef]
  58. Borton, D.A.; Yin, M.; Aceros, J.; Nurmikko, A. An implantable wireless neural interface for recording cortical circuit dynamics in moving primates. J. Neural Eng. 2013, 10, 026010. [Google Scholar] [CrossRef] [Green Version]
  59. Dethier, J.; Nuyujukian, P.; Ryu, S.; Shenoy, K.; Boahen, K. Design and validation of a real-time spiking-neural-network decoder for brain-machine interfaces. J. Neural Eng. 2013, 10, 036008. [Google Scholar] [CrossRef] [Green Version]
  60. Benabid, A.; Costecalde, T.; Eliseyev, A.; Charvet, G.; Verney, A.; Karakas, S.; Foerster, M.; Lambert, A.; Morinière, B.; Abroug, N.; et al. An exoskeleton controlled by an epidural wireless brain–machine interface in a tetraplegic patient: A proof-of-concept demonstration. Lancet Neurol. 2019, 18, 1112–1122. [Google Scholar] [CrossRef]
  61. Al Ajrawi, S.; Al-Hussaibi, W.; Rao, R.; Sarkar, M. Efficient balance technique for brain-computer interface applications based on I/Q down converter and time interleaved ADCs. Inform. Med. Unlocked 2020, 18, 100276. [Google Scholar] [CrossRef]
  62. Wahlstrom, K.; Fairweather, B.; Ashman, H. Privacy and brain-computer interfaces: Method and interim findings. ORBIT J. 2017, 1, 1–19. [Google Scholar] [CrossRef] [Green Version]
  63. Daly, J.J.; Wolpaw, J.R. Brain-computer interfaces in neurological rehabilitation. Lancet Neurol. 2008, 7, 1032–1043. [Google Scholar] [CrossRef]
  64. Thomas, K.P.; Guan, C.; Lau, C.T.; Vinod, A.P.; Ang, K.K. A new discriminative common spatial pattern method for motor imagery brain-computer interfaces. IEEE Trans. Biomed. Eng. 2009, 56, 2730–2733. [Google Scholar] [CrossRef]
  65. Lenhardt, A.; Kaper, M.; Ritter, H.J. An adaptive P300-based online brain-computer interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2008, 16, 121–130. [Google Scholar] [CrossRef] [PubMed]
  66. Xia, B.; Li, X.; Xie, H.; Yang, W.; Li, J.; He, L. Asynchronous brain-computer interface based on steady-state visual-evoked potential. Cogn. Comput. 2013, 5, 243–251. [Google Scholar] [CrossRef]
  67. Pfurtscheller, G.; Allison, B.Z.; Brunner, C.; Bauernfeind, G.; Escalante, T.S.; Scherer, R.; Zander, T.D.; Putz, G.M.; Neuper, C.; Birbaumer, N. The Hybrid BCI. Front. Neurosci. 2010, 4, 3. [Google Scholar] [CrossRef]
  68. Kwon, M.; Cho, H.; Won, K.; Ahn, M.; Jun, S. Use of both eyes-open and eyes-closed resting states may yield a more robust predictor of motor imagery BCI performance. Electronics 2020, 9, 690. [Google Scholar] [CrossRef]
  69. Ahn, M.; Cho, H.; Ahn, S.; Jun, S.C. High theta and low alpha powers may be indicative of BCI-illiteracy in motor imagery. PLoS ONE 2013, 8, e80886. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  70. Kaiser, V.; Kreilinger, A.; Müller-Putz, G.R.; Neuper, C. First steps toward a motor imagery based stroke BCI: New strategy to set up a classifier. Front. Neurosci. 2011, 5, 86. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  71. Riccio, A.; Simione, L.; Schettini, F.; Pizzimenti, A.; Inghilleri, M.; Belardinelli, M.O.; Mattia, D.; Cincotti, F. Attention and P300-based BCI performance in people with amyotrophic lateral sclerosis. Front. Hum. Neurosci. 2013, 7, 732. [Google Scholar] [CrossRef] [Green Version]
  72. Brouwer, A.M.; van Erp, J.B.F. A tactile P300 brain-computer interface. Front. Neurosci. 2010, 4, 19. [Google Scholar] [CrossRef] [Green Version]
  73. Cai, Z.; Makino, S.; Rutkowski, T.M. Brain evoked potential latencies optimization for spatial auditory brain-computer interface. Cogn. Comput. 2015, 7, 34–43. [Google Scholar] [CrossRef] [Green Version]
  74. Martišius, I.; Damasevicius, R. A prototype SSVEP based real time BCI gaming system. Comput. Intell. Neurosci. 2016, 2016, 3861425. [Google Scholar] [CrossRef] [Green Version]
  75. Pfurtscheller, G.; Neuper, C. Motor Imagery and Direct Brain-Computer Communication. Proc. IEEE 2001, 89, 1123–1134. [Google Scholar] [CrossRef]
  76. Aloise, F.; Aricò, P.; Schettini, F.; Salinari, S.; Mattia, D.; Cincotti, F. Asynchronous gaze-independent event-related potential-based brain-computer interface. Artif. Intell. Med. 2013, 59, 61–69. [Google Scholar] [CrossRef]
  77. Wolpaw, J.R.; Birbaumer, N.; Heetderks, W.J.; McFarland, D.J.; Peckham, H.P.; Schalk, G.; Donchin, E.; Quatrano, L.A.; Robinson, C.J.; Vaughan, T.M. Brain-Computer Interface Technology: A Review of the First International Meeting. IEEE Trans. Rehabil. Eng. 2000, 8, 222–226. [Google Scholar] [CrossRef] [Green Version]
  78. Luzheng, B.; Xin-An, F.; Yili, L. EEG-Based Brain-Controlled Mobile Robots: A Survey. IEEE Trans. Hum. Mach. Syst. 2013, 43, 161–176. [Google Scholar] [CrossRef]
  79. Han, C.H.; Müller, K.R.; Hwang, H.J. Brain-switches for asynchronous brain–computer interfaces: A systematic review. Electronics 2020, 9, 422. [Google Scholar] [CrossRef] [Green Version]
  80. Suefusa, K.; Tanaka, T. Asynchronous brain-computer interfacing based on mixed-coded visual stimuli. IEEE Trans. Biomed. Eng. 2018, 65, 2119–2129. [Google Scholar] [CrossRef]
  81. Bin, G.; Gao, X.; Wang, Y.; Li, Y.; Hong, B.; Gao, S. A high-speed BCI based on code modulation VEP. J. Neural Eng. 2011, 8, 025015. [Google Scholar] [CrossRef] [PubMed]
  82. Huan, N.J.; Palaniappan, R. Neural network classification of autoregressive features from electroencephalogram signals for brain-computer interface design. J. Neural Eng. 2004, 1, 142–150. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  83. Butkevičiūtė, E.; Bikulciene, L.; Sidekerskiene, T.; Blazauskas, T.; Maskeliunas, R.; Damasevicius, R.; Wei, W. Removal of movement artefact for mobile EEG analysis in sports exercises. IEEE Access 2019, 7, 7206–7217. [Google Scholar] [CrossRef]
  84. Di Flumeri, G.; Aricò, P.; Borghini, G.; Colosimo, A.; Babiloni, F. A new regression-based method for the eye blinks artifacts correction in the EEG signal, without using any EOG channel. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 3187–3190. [Google Scholar] [CrossRef]
  85. Jirayucharoensak, S.; Israsena, P. Automatic removal of EEG artifacts using ICA and Lifting Wavelet Transform. In Proceedings of the 2013 International Computer Science and Engineering Conference (ICSEC), Nakhonpathom, Thailand, 4–6 September 2013; pp. 136–139. [Google Scholar] [CrossRef]
  86. Zhang, X.; Vialatte, F.B.; Chen, C.; Rathi, A.; Dreyfus, G. Embedded implementation of second-order blind identification (SOBI) for real-time applications in neuroscience. Cogn. Comput. 2014, 7, 56–63. [Google Scholar] [CrossRef]
  87. Argunşah, A.; Çetin, M. A brain-computer interface algorithm based on Hidden Markov models and dimensionality reduction. In Proceedings of the 2010 IEEE 18th Signal Processing and Communications Applications Conference, Diyarbakir, Turkey, 22–24 April 2010; pp. 93–96. [Google Scholar]
  88. Li, J.; Chen, X.; Li, Z. Spike detection and spike sorting with a hidden Markov model improves offline decoding of motor cortical recordings. J. Neural Eng. 2019, 16. [Google Scholar] [CrossRef] [PubMed]
  89. Duan, L.; Zhong, H.; Miao, J.; Yang, Z.; Ma, W.; Zhang, X. A voting optimized strategy based on ELM for improving classification of motor imagery BCI data. Cogn. Comput. 2014, 6, 477–483. [Google Scholar] [CrossRef]
  90. Banitalebi, A.; Setarehdan, S.K.; Hossein-Zadeh, G.A. A technique based on chaos for brain computer interfacing. In Proceedings of the 14th International CSI Computer Conference, Tehran, Iran, 20–21 October 2009; pp. 464–469. [Google Scholar] [CrossRef] [Green Version]
  91. Thulasidas, M.; Guan, C.; Wu, J. Robust classification of EEG signal for brain-computer interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2006, 14, 24–29. [Google Scholar] [CrossRef]
  92. van Gerven, M.; Farquhar, J.; Schaefer, R.; Vlek, R.; Geuze, J.; Nijholt, A.; Ramsey, N.; Haselager, P.; Vuurpijl, L.; Gielen, S.; et al. The brain-computer interface cycle. J. Neural Eng. 2009, 6, 041001. [Google Scholar] [CrossRef]
  93. Ryu, S.; Shenoy, K. Human cortical prostheses: Lost in translation? Neurosurg. Focus 2009, 27, E5. [Google Scholar] [CrossRef] [PubMed]
  94. Bloch, E.; Luo, Y.; da Cruz, L. Advances in retinal prosthesis systems. Ther. Adv. Ophthalmol. 2019, 11. [Google Scholar] [CrossRef] [PubMed]
  95. Zeng, F.G. Trends in cochlear implants. Trends Amplif. 2004, 8, 1–34. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  96. Rothschild, R.M. Neuroengineering tools/applications for bidirectional interfaces, brain–computer interfaces, and neuroprosthetic implants—A review of recent progress. Front. Neuroeng. 2010, 3, 112. [Google Scholar] [CrossRef] [Green Version]
  97. Krishna, N.; Sekaran, K.; Annepu, V.; Vamsi, N.; Ghantasala, G.; Pethakamsetty, C.; Kadry, S.; Blazauskas, T.; Damasevicius, R. An Efficient Mixture Model Approach in Brain-Machine Interface Systems for Extracting the Psychological Status of Mentally Impaired Persons Using EEG Signals. IEEE Access 2019, 7, 77905–77914. [Google Scholar] [CrossRef]
  98. Damasevicius, R.; Maskeliunas, R.; Kazanavicius, E.; Woźniak, M. Combining Cryptography with EEG Biometrics. Comput. Intell. Neurosci. 2018, 2018, 1867548. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  99. Al-Saggaf, A. Secure method for combining cryptography with iris biometrics. J. Univers. Comput. Sci. 2018, 24, 341–356. [Google Scholar]
  100. Zhang, C.; Kimura, Y.; Higashi, H.; Tanaka, T. A simple platform of brain-controlled mobile robot and its implementation by SSVEP. In Proceedings of the 2012 International Joint Conference on Neural Networks (IJCNN), Brisbane, QLD, Australia, 10–15 June 2012; pp. 1–7. [Google Scholar]
  101. Edelman, B.; Meng, J.; Suma, D.; Zurn, C.; Nagarajan, E.; Baxter, B.; Cline, C.; He, B. Noninvasive neuroimaging enhances continuous neural tracking for robotic device control. Sci. Robot. 2019, 4. [Google Scholar] [CrossRef] [PubMed]
  102. Bockbrader, M. Upper limb sensorimotor restoration through brain–computer interface technology in tetraparesis. Curr. Opin. Biomed. Eng. 2019, 11, 85–101. [Google Scholar] [CrossRef]
  103. Lécuyer, A.; Lotte, F.; Reilly, R.B.; Leeb, R.; Hirose, M.; Slater, M. Brain–computer interfaces, virtual reality, and videogames. Computer 2008, 41, 66–72. [Google Scholar] [CrossRef] [Green Version]
  104. Yuan, P.; Wang, Y.; Gao, X.; Jung, T.P.; Gao, S. A Collaborative Brain-Computer Interface for Accelerating Human Decision Making. In Universal Access in Human-Computer Interaction. Design Methods, Tools, and Interaction Techniques for eInclusion; Lecture Notes in Computer Science; Stephanidis, C., Antona, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8009, pp. 672–681. [Google Scholar] [CrossRef]
  105. Bianchi, L.; Gambardella, F.; Liti, C.; Piccialli, V. Group study via collaborative BCI. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 272–276. [Google Scholar] [CrossRef]
  106. Kapeller, C.; Ortner, R.; Krausz, G.; Bruckner, M.; Allison, B.; Guger, C.; Edlinger, G. Toward Multi-brain Communication: Collaborative Spelling with a P300 BCI. In Foundations of Augmented Cognition. Advancing Human Performance and Decision-Making through Adaptive Systems, Proceedings of the AC 2014, Heraklion, Crete, Greece, 22–27 June 2014; Lecture Notes in Computer Science; Schmorrow, D.D., Fidopiastis, C.M., Eds.; Springer: Cham, Switzerland, 2014; Volume 8534, pp. 47–54. [Google Scholar] [CrossRef]
  107. Papanastasiou, G.; Drigas, A.; Skianis, C.; Lytras, M. Brain computer interface based applications for training and rehabilitation of students with neurodevelopmental disorders. A literature review. Heliyon 2020, 6, e04250. [Google Scholar] [CrossRef]
  108. Aricò, P.; Borghini, G.; Di Flumeri, G.; Sciaraffa, N.; Colosimo, A.; Babiloni, F. Passive BCI in operational environments: Insights, recent advances, and future trends. IEEE Trans. Biomed. Eng. 2017, 64, 1431–1436. [Google Scholar] [CrossRef] [PubMed]
  109. Aricò, P.; Borghini, G.; Di Flumeri, G.; Bonelli, S.; Golfetti, A.; Graziani, I. Human Factors and Neurophysiological Metrics in Air Traffic Control: A Critical Review. IEEE Rev. Biomed. Eng. 2017, 13. [Google Scholar] [CrossRef]
  110. Burwell, S.; Sample, M.; Racine, E. Ethical aspects of brain computer interfaces: A scoping review. BMC Med. Ethics 2017, 18, 60. [Google Scholar] [CrossRef] [PubMed]
  111. Klein, E.; Peters, B.; Higger, M. Ethical considerations in ending exploratory brain-computer interface research studies in locked-in syndrome. Camb. Q. Healthc. Ethics 2018, 27, 660–674. [Google Scholar] [CrossRef] [PubMed] [Green Version]
112. Vlek, R.J.; Steines, D.; Szibbo, D.; Kübler, A.; Schneider, M.J.; Haselager, P.; Nijboer, F. Ethical issues in brain-computer interface research, development, and dissemination. J. Neurol. Phys. Ther. 2012, 36, 94–99. [Google Scholar] [CrossRef]
  113. Schermer, M. Ethical issues in deep brain stimulation. Front. Integr. Neurosci. 2011, 5, 17. [Google Scholar] [CrossRef] [Green Version]
  114. Landau, O.; Cohen, A.; Gordon, S.; Nissim, N. Mind your privacy: Privacy leakage through BCI applications using machine learning methods. Knowl. Based Syst. 2020, 198, 105932. [Google Scholar] [CrossRef]
  115. Ajrawi, S.; Rao, R.; Sarkar, M. Cybersecurity in brain-computer interfaces: RFID-based design-theoretical framework. Inform. Med. Unlocked 2021, 22, 100489. [Google Scholar] [CrossRef]
  116. Li, Q.; Ding, D.; Conti, M. Brain-Computer Interface applications: Security and privacy challenges. In Proceedings of the 2015 IEEE Conference on Communications and Network Security (CNS), Florence, Italy, 28–30 September 2015; pp. 663–666. [Google Scholar] [CrossRef]
  117. Mowla, M.; Cano, R.; Dhuyvetter, K.; Thompson, D. Affective brain-computer interfaces: Choosing a meaningful performance measuring metric. Comput. Biol. Med. 2020, 126, 104001. [Google Scholar] [CrossRef] [PubMed]
  118. Gubert, P.; Costa, M.; Silva, C.; Trofino-Neto, A. The performance impact of data augmentation in CSP-based motor-imagery systems for BCI applications. Biomed. Signal Process. Control 2020, 62, 102152. [Google Scholar] [CrossRef]
  119. North, S.; Young, K.; Hamilton, M.; Kim, J.; Zhao, X.; North, M.; Cronnon, E. Brain-Computer Interface: An Experimental Analysis of Performance Measurement. In Proceedings of the IEEE SoutheastCon 2020, Raleigh, NC, USA, 28–29 March 2020; pp. 1–8. [Google Scholar] [CrossRef]
120. Myers Briggs, I.; McCaulley, M.; Quenk, N.; Hammer, A. MBTI Handbook: A Guide to the Development and Use of the Myers-Briggs Type Indicator, 3rd ed.; Consulting Psychologists Press: Sunnyvale, CA, USA, 1998. [Google Scholar]
  121. Yadav, D.; Yadav, S.; Veer, K. A comprehensive assessment of brain computer interfaces: Recent trends and challenges. J. Neurosci. Methods 2020, 346, 108918. [Google Scholar] [CrossRef] [PubMed]
  122. Kögel, J.; Jox, R.; Friedrich, O. What is it like to use a BCI?—Insights from an interview study with brain-computer interface users. BMC Med. Ethics 2020, 21, 2. [Google Scholar] [CrossRef] [PubMed]
123. Wolpaw, J.R.; McFarland, D.J. Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans. Proc. Natl. Acad. Sci. USA 2004, 101, 17849–17854. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  124. McFarland, D.J.; Wolpaw, J.R. Brain-computer interfaces for communication and control. Commun. ACM 2011, 54, 60–66. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  125. Di Flumeri, G.; Aricò, P.; Borghini, G.; Sciaraffa, N.; Di Florio, A.; Babiloni, F. The Dry Revolution: Evaluation of Three Different EEG Dry Electrode Types in Terms of Signal Spectral Features, Mental States Classification and Usability. Sensors 2019, 19, 1365. [Google Scholar] [CrossRef] [Green Version]
  126. Malmivuo, J.; Plonsey, R. Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields; Oxford University Press: Oxford, UK, 1995. [Google Scholar] [CrossRef]
  127. Nuwer, M.R. Recording electrode site nomenclature. J. Clin. Neurophysiol. 1987, 4, 121. [Google Scholar] [CrossRef]
  128. Suarez, E.; Viegas, M.D.; Adjouadi, M.; Barreto, A. Relating induced changes in EEG signals to orientation of visual stimuli using the ESI-256 machine. Biomed. Sci. Instrum. 2000, 36, 33. [Google Scholar]
  129. Oostenveld, R.; Praamstra, P. The five percent electrode system for high-resolution EEG and ERP measurements. Clin. Neurophysiol. 2001, 112, 713–719. [Google Scholar] [CrossRef]
  130. Jurcak, V.; Tsuzuki, D.; Dan, I. 10/20, 10/10, and 10/5 systems revisited: Their validity as relative head-surface-based positioning systems. NeuroImage 2007, 34, 1600–1611. [Google Scholar] [CrossRef]
  131. Crosson, B.; Ford, A.; McGregor, K.M.; Meinzer, M.; Cheshkov, S.; Li, X.; Walker-Batson, D.; Briggs, R.W. Functional imaging and related techniques: An introduction for rehabilitation researchers. J. Rehabil. Res. Dev. 2010, 47, vii–xxxiv. [Google Scholar] [CrossRef] [PubMed]
  132. Dornhege, G.; Millan, J.d.R.; Hinterberger, T.; McFarland, D.; Muller, K.R. (Eds.) Toward Brain-Computer Interfacing; A Bradford Book; The MIT Press: Cambridge, MA, USA, 2007. [Google Scholar]
  133. Luck, S.J. An Introduction to the Event-Related Potential Technique; The MIT Press: Cambridge, MA, USA, 2014. [Google Scholar]
  134. Picton, T.W. The P300 wave of the human event-related potential. J. Clin. Neurophysiol. 1992, 9, 456–479. [Google Scholar] [CrossRef]
  135. Sellers, E.W.; Donchin, E. A P300-based brain–computer interface: Initial tests by ALS patients. Clin. Neurophysiol. 2006, 117, 538–548. [Google Scholar] [CrossRef]
136. Blankertz, B.; Dornhege, G.; Krauledat, M.; Schröder, M.; Williamson, J.; Murray, S.R.; Müller, K.R. The Berlin Brain-Computer Interface presents the novel mental typewriter Hex-o-Spell. In Proceedings of the 3rd International Brain-Computer Interface Workshop and Training Course, Graz, Austria, 2006; pp. 108–109. Available online: http://www.dcs.gla.ac.uk/~rod/publications/BlaDorKraSchWilMurMue06.pdf (accessed on 25 February 2021).
137. Blankertz, B.; Krauledat, M.; Dornhege, G.; Williamson, J.; Murray, S.R.; Müller, K.R. A Note on Brain Actuated Spelling with the Berlin Brain-Computer Interface. Lect. Notes Comput. Sci. 2007, 4555, 759–768. [Google Scholar] [CrossRef] [Green Version]
  138. Treder, M.S.; Blankertz, B. Covert attention and visual speller design in an ERP-based brain-computer interface. Behav. Brain Funct. 2010, 6. [Google Scholar] [CrossRef] [Green Version]
  139. Treder, M.S.; Schmidt, N.M.; Blankertz, B. Gaze-independent brain-computer interfaces based on covert attention and feature attention. J. Neural Eng. 2011, 8, 066003. [Google Scholar] [CrossRef]
  140. Schreuder, M.; Blankertz, B.; Tangermann, M. A New Auditory Multi-Class Brain-Computer Interface Paradigm: Spatial Hearing as an Informative Cue. PLoS ONE 2010, 5, e9813. [Google Scholar] [CrossRef]
  141. Rader, S.; Holmes, J.; Golob, E. Auditory event-related potentials during a spatial working memory task. Clin. Neurophysiol. 2008, 119, 1176–1189. [Google Scholar] [CrossRef] [PubMed]
  142. Solon, A.; Lawhern, V.; Touryan, J.; McDaniel, J.; Ries, A.; Gordon, S. Decoding P300 variability using convolutional neural networks. Front. Hum. Neurosci. 2019, 13, 201. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  143. Fernández-Rodríguez, A.; Medina-Juliá, M.; Velasco-Álvarez, F.; Ron-Angevin, R. Effects of spatial stimulus overlap in a visual P300-based brain-computer interface. Neuroscience 2020, 431, 134–142. [Google Scholar] [CrossRef] [PubMed]
  144. Regan, D. Steady-state evoked potentials. J. Opt. Soc. Am. 1977, 67, 1475–1489. [Google Scholar] [CrossRef]
  145. Gao, X.; Xu, D.; Cheng, M.; Gao, S. A BCI-based environmental controller for the motion-disabled. IEEE Trans. Neural Syst. Rehabil. Eng. 2003, 11, 137–140. [Google Scholar]
  146. Middendorf, M.; McMillan, G.; Calhoun, G.; Jones, K.S. Brain-computer interfaces based on the steady-state visual-evoked response. IEEE Trans. Rehabil. Eng. 2000, 8, 211–214. [Google Scholar] [CrossRef] [Green Version]
  147. Fisher, R.S.; Harding, G.; Erba, G.; Barkley, G.L.; Wilkins, A. Photic- and Pattern-induced Seizures: A Review for the Epilepsy Foundation of America Working Group. Epilepsia 2005, 46, 1426–1441. [Google Scholar] [CrossRef] [PubMed]
  148. Zhu, D.; Bieger, J.; Molina, G.G.; Aarts, R.M. A survey of stimulation methods used in SSVEP-based BCIs. Comput. Intell. Neurosci. 2010, 10, 1–12. [Google Scholar] [CrossRef] [PubMed]
  149. Vialatte, F.B.; Maurice, M.; Dauwels, J.; Cichocki, A. Steady-state visually evoked potentials: Focus on essential paradigms and future perspectives. Prog. Neurobiol. 2010, 90, 418–438. [Google Scholar] [CrossRef] [PubMed]
  150. Wang, Y.; Gao, X.; Hong, B.; Jia, C.; Gao, S. Brain-computer interfaces based on visual evoked potentials. IEEE Eng. Med. Biol. Mag. 2008, 27, 64–71. [Google Scholar] [CrossRef]
  151. Zhang, Y.; Xu, P.; Cheng, K.; Yao, D. Multivariate synchronization index for frequency recognition of SSVEP-based brain-computer interface. J. Neurosci. Methods 2014, 221, 32–40. [Google Scholar] [CrossRef] [PubMed]
  152. Wang, Y.; Chen, X.; Gao, X.; Gao, S. A Benchmark Dataset for SSVEP-Based Brain-Computer Interfaces. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 1746–1752. [Google Scholar] [CrossRef] [PubMed]
153. Nakanishi, M.; Wang, Y.; Wang, Y.T.; Jung, T.P. Does frequency resolution affect the classification performance of steady-state visual evoked potentials? In Proceedings of the 2017 8th International IEEE/EMBS Conference on Neural Engineering (NER), Shanghai, China, 25–28 May 2017; pp. 341–344. [Google Scholar] [CrossRef]
  154. Hotelling, H. Relations between two sets of variates. Biometrika 1936, 28, 321–377. [Google Scholar] [CrossRef]
  155. Anderson, T.W. An Introduction to Multivariate Statistical Analysis; Wiley: New York, NY, USA, 1958; Volume 2. [Google Scholar]
  156. Lin, Z.; Zhang, C.; Wu, W.; Gao, X. Frequency recognition based on canonical correlation analysis for SSVEP-based BCIs. IEEE Trans. Biomed. Eng. 2006, 53, 2610–2614. [Google Scholar] [CrossRef]
  157. Chen, X.; Wang, Y.; Gao, S.; Jung, T.P.; Gao, X. Filter bank canonical correlation analysis for implementing a high-speed SSVEP-based brain-computer interface. J. Neural Eng. 2015, 12, 46008. [Google Scholar] [CrossRef] [PubMed]
  158. Rabiul Islam, M.; Khademul Islam Molla, M.; Nakanishi, M.; Tanaka, T. Unsupervised frequency-recognition method of SSVEPs using a filter bank implementation of binary subband CCA. J. Neural Eng. 2017, 14. [Google Scholar] [CrossRef] [PubMed]
  159. Tanaka, T.; Zhang, C.; Higashi, H. SSVEP frequency detection methods considering background EEG. In Proceedings of the 6th Joint International Conference on Soft Computing and Intelligent Systems (SCIS) and 13th International Symposium on Advanced Intelligent Systems (ISIS), Kobe, Japan, 20–24 November 2012; pp. 1138–1143. [Google Scholar]
  160. Wei, C.S.; Lin, Y.P.; Wang, Y.; Wang, Y.T.; Jung, T.P. Detection of steady-state visual-evoked potential using differential canonical correlation analysis. In Proceedings of the 6th International IEEE/EMBS Conference on Neural Engineering (NER), San Diego, CA, USA, 6–8 November 2013; pp. 57–60. [Google Scholar]
  161. Nakanishi, M.; Wang, Y.; Wang, Y.T.; Mitsukura, Y.; Jung, T.P. Enhancing unsupervised canonical correlation analysis-based frequency detection of SSVEPs by incorporating background EEG. In Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Chicago, IL, USA, 26–30 August 2014; pp. 3053–3056. [Google Scholar]
  162. Nakanishi, M.; Wang, Y.; Wang, Y.T.; Mitsukura, Y.; Jung, T.P. A high-speed brain speller using steady-state visual evoked potentials. Int. J. Neural Syst. 2014, 24, 1450019. [Google Scholar] [CrossRef] [PubMed]
  163. Nakanishi, M.; Wang, Y.; Chen, X.; Wang, Y.T.; Gao, X.; Jung, T.P. Enhancing detection of SSVEPs for a high-speed brain speller using task-related component analysis. IEEE Trans. Biomed. Eng. 2018, 65, 104–112. [Google Scholar] [CrossRef] [PubMed]
  164. Zhang, Y.; Yin, E.; Li, F.; Zhang, Y.; Tanaka, T.; Zhao, Q.; Cui, Y.; Xu, P.; Yao, D.; Guo, D. Two-Stage Frequency Recognition Method Based on Correlated Component Analysis for SSVEP-Based BCI. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 1314–1323. [Google Scholar] [CrossRef] [PubMed]
  165. Zhang, Y.; Guo, D.; Li, F.; Yin, E.; Zhang, Y.; Li, P.; Zhao, Q.; Tanaka, T.; Yao, D.; Xu, P. Correlated component analysis for enhancing the performance of SSVEP-based brain-computer interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 948–956. [Google Scholar] [CrossRef]
  166. Tanaka, H.; Katura, T.; Sato, H. Task-related component analysis for functional neuroimaging and application to near-infrared spectroscopy data. NeuroImage 2013, 64, 308–327. [Google Scholar] [CrossRef] [PubMed]
  167. Suefusa, K.; Tanaka, T. Visually stimulated brain-computer interfaces compete with eye tracking interfaces when using small targets. In Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Chicago, IL, USA, 26–30 August 2014; pp. 4005–4008. [Google Scholar] [CrossRef]
  168. Suefusa, K.; Tanaka, T. A comparison study of visually stimulated brain-computer and eye-tracking interfaces. J. Neural Eng. 2017, 14. [Google Scholar] [CrossRef]
  169. Bakardjian, H.; Tanaka, T.; Cichocki, A. Optimization of SSVEP brain responses with application to eight-command brain-computer interface. Neurosci. Lett. 2010, 469, 34–38. [Google Scholar] [CrossRef]
  170. Kimura, Y.; Tanaka, T.; Higashi, H.; Morikawa, N. SSVEP-based brain–computer interfaces using FSK-modulated visual stimuli. IEEE Trans. Biomed. Eng. 2013, 60, 2831–2838. [Google Scholar] [CrossRef] [PubMed]
  171. Bin, G.; Gao, X.; Wang, Y.; Hong, B.; Gao, S. VEP-based brain-computer interfaces: Time, frequency, and code modulations. IEEE Comput. Intell. Mag. 2009, 4, 22–26. [Google Scholar] [CrossRef]
  172. Guo, F.; Hong, B.; Gao, X.; Gao, S. A brain–computer interface using motion-onset visual evoked potential. J. Neural Eng. 2008, 5, 477–485. [Google Scholar] [CrossRef] [PubMed]
  173. Tanji, Y.; Nakanishi, M.; Suefusa, K.; Tanaka, T. Waveform-Based Multi-Stimulus Coding for Brain-Computer Interfaces Based on Steady-State Visual Evoked Potentials. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 821–825. [Google Scholar] [CrossRef]
  174. Chen, X.; Wang, Y.; Nakanishi, M.; Gao, X.; Jung, T.P.; Gao, S. High-speed spelling with a noninvasive brain-computer interface. Proc. Natl. Acad. Sci. USA 2015, 112, E6058–E6067. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  175. Norcia, A.M.; Appelbaum, L.G.; Ales, J.M.; Cottereau, B.R.; Rossion, B. The steady-state visual evoked potential in vision research: A review. J. Vis. 2015, 15, 4. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  176. İşcan, Z.; Nikulin, V.V. Steady state visual evoked potential (SSVEP) based brain-computer interface (BCI) performance under different perturbations. PLoS ONE 2018, 13, e0191673. [Google Scholar] [CrossRef] [Green Version]
  177. Muller-Putz, G.R.; Scherer, R.; Neuper, C.; Pfurtscheller, G. Steady-state somatosensory evoked potentials: Suitable brain signals for brain-computer interfaces? IEEE Trans. Neural Syst. Rehabil. Eng. 2006, 14, 30–37. [Google Scholar] [CrossRef]
  178. Goodin, D.S.; Squires, K.C.; Starr, A. Long latency event-related components of the auditory evoked potential in dementia. Brain J. Neurol. 1978, 101, 635–648. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  179. Higashi, H.; Rutkowski, T.M.; Washizawa, Y.; Cichocki, A.; Tanaka, T. EEG auditory steady state responses classification for the novel BCI. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Boston, MA, USA, 30 August–3 September 2011; pp. 4576–4579. [Google Scholar]
  180. Guillot, A.; Collet, C. The Neurophysiological Foundations of Mental and Motor Imagery; Oxford University Press: Oxford, UK, 2010. [Google Scholar]
  181. Pfurtscheller, G.; Brunner, C.; Schlögl, A.; Lopes da Silva, F. Mu rhythm (de) synchronization and EEG single-trial classification of different motor imagery tasks. Neuroimage 2006, 31, 153–159. [Google Scholar] [CrossRef] [PubMed]
  182. Sanei, S.; Chambers, J. EEG Signal Processing; Wiley-Interscience: Chichester, UK, 2007; Chapter 1. [Google Scholar]
  183. Pfurtscheller, G.; Neuper, C.; Flotzinger, D.; Pregenzer, M. EEG-based discrimination between imagination of right and left hand movement. Electroencephalogr. Clin. Neurophysiol. 1997, 103, 642–651. [Google Scholar] [CrossRef]
184. Pfurtscheller, G.; Lopes da Silva, F.H. Event-related EEG/MEG synchronization and desynchronization: Basic principles. Clin. Neurophysiol. 1999, 110, 1842–1857. [Google Scholar] [CrossRef]
  185. Dornhege, G.; Blankertz, B.; Curio, G.; Müller, K.R. Boosting bit rates in noninvasive EEG single-trial classifications by feature combination and multiclass paradigms. IEEE Trans. Biomed. Eng. 2004, 51, 993–1002. [Google Scholar] [CrossRef]
  186. Grosse-Wentrup, M.; Buss, M. Multiclass common spatial patterns and information theoretic feature extraction. IEEE Trans. Biomed. Eng. 2008, 55, 1991–2000. [Google Scholar] [CrossRef] [PubMed]
  187. Kübler, A.; Nijboer, F.; Mellinger, J.; Vaughan, T.M.; Pawelzik, H.; Schalk, G.; McFarland, D.J.; Birbaumer, N.; Wolpaw, J.R. Patients with ALS can use sensorimotor rhythms to operate a brain-computer interface. Neurology 2005, 64, 1775–1777. [Google Scholar] [CrossRef]
  188. Higashi, H.; Tanaka, T.; Funase, A. Classification of single trial EEG during imagined hand movement by rhythmic component extraction. In Proceedings of the 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Minneapolis, MN, USA, 3–6 September 2009; pp. 2482–2485. [Google Scholar] [CrossRef]
  189. Pfurtscheller, G.; Muller-Putz, G.R.; Scherer, R.; Neuper, C. Rehabilitation with brain-computer interface systems. Computer 2008, 41, 58–65. [Google Scholar] [CrossRef]
190. Page, S.J.; Levine, P.; Sisto, S.A.; Johnston, M.V. Mental practice combined with physical practice for upper-limb motor deficit in subacute stroke. Phys. Ther. 2001, 81, 1455–1462. [Google Scholar] [CrossRef] [PubMed]
  191. Sharma, N.; Pomeroy, V.M.; Baron, J.C. Motor imagery: A backdoor to the motor system after stroke? Stroke 2006, 37, 1941–1952. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  192. Várkuti, B.; Guan, C.; Pan, Y.; Phua, K.S.; Ang, K.K.; Kuah, C.W.K.; Chua, K.; Ang, B.T.; Birbaumer, N.; Sitaram, R. Resting state changes in functional connectivity correlate with movement recovery for BCI and robot-assisted upper-extremity training after stroke. Neurorehabilit. Neural Repair 2013, 27, 53–62. [Google Scholar] [CrossRef]
  193. Lambercy, O.; Dovat, L.; Gassert, R.; Burdet, E.; Teo, C.L.; Milner, T. A Haptic Knob for Rehabilitation of Hand Function. IEEE Trans. Neural Syst. Rehabil. Eng. 2007, 15, 356–366. [Google Scholar] [CrossRef]
  194. Alegre, M.; Labarga, A.; Gurtubay, I.G.; Iriarte, J.; Malanda, A.; Artieda, J. Beta electroencephalograph changes during passive movements: Sensory afferences contribute to beta event-related desynchronization in humans. Neurosci. Lett. 2002, 331, 29–32. [Google Scholar] [CrossRef]
  195. Cochin, S.; Barthelemy, C.; Roux, S.; Martineau, J. Observation and execution of movement: Similarities demonstrated by quantified electroencephalography. Eur. J. Neurosci. 1999, 11, 1839–1842. [Google Scholar] [CrossRef] [PubMed]
  196. Friesen, C.L.; Bardouille, T.; Neyedli, H.F.; Boe, S.G. Combined action observation and motor imagery neurofeedback for modulation of brain activity. Front. Hum. Neurosci. 2017, 10, 692. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  197. Nagai, H.; Tanaka, T. Action Observation of Own Hand Movement Enhances Event-Related Desynchronization. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 1407–1415. [Google Scholar] [CrossRef] [PubMed]
  198. Müller-Gerking, J.; Pfurtscheller, G.; Flyvbjerg, H. Designing optimal spatial filters for single-trial EEG classification in a movement task. Clin. Neurophysiol. 1999, 110, 787–798. [Google Scholar] [CrossRef]
  199. Ramoser, H.; Muller-Gerking, J.; Pfurtscheller, G. Optimal spatial filtering of single trial EEG during imagined hand movement. IEEE Trans. Rehabil. Eng. 2000, 8, 441–446. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  200. Lemm, S.; Blankertz, B.; Curio, G.; Müller, K.R. Spatio-spectral filters for improving the classification of single trial EEG. IEEE Trans. Biomed. Eng. 2005, 52, 1541–1548. [Google Scholar] [CrossRef]
  201. Dornhege, G.; Blankertz, B.; Krauledat, M.; Losch, F.; Curio, G.; Muller, K.R. Combined optimization of spatial and temporal filters for improving brain-computer interfacing. IEEE Trans. Biomed. Eng. 2006, 53, 2274–2281. [Google Scholar] [CrossRef] [PubMed]
  202. Tomioka, R.; Müller, K.R. A regularized discriminative framework for EEG analysis with application to brain–computer interface. NeuroImage 2010, 49, 415–432. [Google Scholar] [CrossRef]
  203. Wu, W.; Gao, X.; Hong, B.; Gao, S. Classifying single-trial EEG during motor imagery by iterative spatio-spectral patterns learning (ISSPL). IEEE Trans. Biomed. Eng. 2008, 55, 1733–1743. [Google Scholar] [CrossRef]
  204. Ang, K.K.; Chin, Z.Y.; Zhang, H.; Guan, C. Filter bank common spatial pattern (FBCSP) in brain-computer interface. In Proceedings of the IEEE World Congress on Computational Intelligence—IEEE International Joint Conference on Neural Networks, Hong Kong, China, 1–8 June 2008; pp. 2390–2397. [Google Scholar]
  205. Higashi, H.; Tanaka, T. Simultaneous design of FIR filter banks and spatial patterns for EEG signal classification. IEEE Trans. Biomed. Eng. 2013, 60, 1100–1110. [Google Scholar] [CrossRef]
  206. Higashi, H.; Tanaka, T. Common spatio-time-frequency patterns for motor imagery-based brain machine interfaces. Comput. Intell. Neurosci. 2013, 2013, 537218. [Google Scholar] [CrossRef] [Green Version]
  207. Tomida, N.; Tanaka, T.; Ono, S.; Yamagishi, M.; Higashi, H. Active Data Selection for Motor Imagery EEG Classification. IEEE Trans. Biomed. Eng. 2014, 62, 458–467. [Google Scholar] [CrossRef]
  208. Barachant, A.; Bonnet, S.; Congedo, M.; Jutten, C. Multiclass brain-computer interface classification by Riemannian geometry. IEEE Trans. Biomed. Eng. 2012, 59, 920–928. [Google Scholar] [CrossRef] [Green Version]
209. Uehara, T.; Tanaka, T.; Fiori, S. Robust averaging of covariance matrices for motor-imagery brain-computer interfacing by Riemannian geometry. In Advances in Cognitive Neurodynamics (V): Proceedings of the Fifth International Conference on Cognitive Neurodynamics, 2015; Springer: Singapore, 2016; pp. 347–353. [Google Scholar] [CrossRef]
  210. Barachant, A.; Bonnet, S.; Congedo, M.; Jutten, C. Classification of covariance matrices using a Riemannian-based kernel for BCI applications. Neurocomputing 2013, 112, 172–178. [Google Scholar] [CrossRef] [Green Version]
  211. Islam, M.R.; Tanaka, T.; Molla, M.K.I. Multiband tangent space mapping and feature selection for classification of EEG during motor imagery. J. Neural Eng. 2018, 15. [Google Scholar] [CrossRef] [PubMed]
  212. Yger, F.; Berar, M.; Lotte, F. Riemannian Approaches in Brain-Computer Interfaces: A Review. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 1753–1762. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  213. Fiori, S.; Tanaka, T. An algorithm to compute averages on matrix Lie groups. IEEE Trans. Signal Process. 2009, 57, 4734–4743. [Google Scholar] [CrossRef]
  214. Fiori, S. Learning the Fréchet mean over the manifold of symmetric positive-definite matrices. Cogn. Comput. 2009, 1, 279–291. [Google Scholar] [CrossRef] [Green Version]
  215. Uehara, T.; Sartori, M.; Tanaka, T.; Fiori, S. Robust Averaging of Covariances for EEG Recordings Classification in Motor Imagery Brain-Computer Interfaces. Neural Comput. 2017, 29, 1631–1666. [Google Scholar] [CrossRef] [PubMed]
  216. Graimann, B.; Pfurtscheller, G.; Allison, B. Brain-Computer Interfaces—Revolutionizing Human-Computer Interaction; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  217. Ladouce, S.; Donaldson, D.; Dudchenko, P.; Ietswaart, M. Understanding Minds in Real-World Environments: Toward a Mobile Cognition Approach. Front. Hum. Neurosci. 2017, 10. [Google Scholar] [CrossRef] [Green Version]
  218. Abiri, R.; Borhani, S.; Sellers, E.W.; Jiang, Y.; Zhao, X. A comprehensive review of EEG-based brain-computer interface paradigms. J. Neural Eng. 2019, 16. [Google Scholar] [CrossRef]
  219. Lazarou, I.; Nikolopoulos, S.; Petrantonakis, P.; Kompatsiaris, I.; Tsolaki, M. EEG-Based Brain–Computer Interfaces for Communication and Rehabilitation of People with Motor Impairment: A Novel Approach of the 21st Century. Front. Hum. Neurosci. 2018, 12, 14. [Google Scholar] [CrossRef] [PubMed] [Green Version]
220. Shih, J.; Krusienski, D.; Wolpaw, J. Brain-computer interfaces in medicine. Mayo Clin. Proc. 2012, 87, 268–279. [Google Scholar] [CrossRef] [Green Version]
  221. Huang, Q.; Zhang, Z.; Yu, T.; He, S.; Li, Y. An EEG-EOG-Based Hybrid Brain-Computer Interface: Application on Controlling an Integrated Wheelchair Robotic Arm System. Front. Neurosci. 2019, 13. [Google Scholar] [CrossRef] [PubMed] [Green Version]
222. Li, Y.; Wang, C.; Zhang, H.; Guan, C. An EEG-based BCI system for 2D cursor control. In Proceedings of the IEEE International Joint Conference on Neural Networks, IJCNN, and IEEE World Congress on Computational Intelligence, Hong Kong, China, 1–8 June 2008; pp. 2214–2219. [Google Scholar] [CrossRef]
  223. Donchin, E.; Spencer, K.M.; Wijesinghe, R. The mental prosthesis: Assessing the speed of a P300-based brain-computer interface. IEEE Trans. Rehabil. Eng. 2000, 8, 174–179. [Google Scholar] [CrossRef] [Green Version]
224. Hong, B.; Guo, F.; Liu, T.; Gao, X.; Gao, S. N200-speller using motion-onset visual response. Clin. Neurophysiol. 2009, 120, 1658–1666. [Google Scholar] [CrossRef] [PubMed]
  225. Karim, A.A.; Hinterberger, T.; Richter, J.; Mellinger, J.; Neumann, N.; Flor, H.; Kübler, A.; Birbaumer, N. Neural internet: Web surfing with brain potentials for the completely paralyzed. Neurorehabilit. Neural Repair 2006, 20, 508–515. [Google Scholar] [CrossRef]
226. Bensch, M.; Karim, A.A.; Mellinger, J.; Hinterberger, T.; Tangermann, M.; Bogdan, M.; Rosenstiel, W.; Birbaumer, N. Nessi: An EEG-Controlled Web Browser for Severely Paralyzed Patients. Comput. Intell. Neurosci. 2007, 7, 508–515. [Google Scholar] [CrossRef] [Green Version]
  227. Marshall, D.; Coyle, D.; Wilson, S.; Callaghan, M. Games, gameplay, and BCI: The state of the art. IEEE Trans. Comput. Intell. AI Games 2013, 5, 82–99. [Google Scholar] [CrossRef]
  228. LaFleur, K.; Cassady, K.; Doud, A.; Shades, K.; Rogin, E.; He, B. Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain–computer interface. J. Neural Eng. 2013, 10, 046003. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  229. Royer, A.; He, B. Goal selection versus process control in a brain-computer interface based on sensorimotor rhythms. J. Neural Eng. 2009, 6, 016005. [Google Scholar] [CrossRef] [PubMed]
  230. Royer, A.S.; Doud, A.J.; Rose, M.L.; He, B. EEG control of a virtual helicopter in 3-dimensional space using intelligent control strategies. IEEE Trans. Neural Syst. Rehabil. Eng. 2010, 18, 581–589. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  231. Doud, A.J.; Lucas, J.P.; Pisansky, M.T.; He, B. Continuous Three-Dimensional Control of a Virtual Helicopter Using a Motor Imagery Based Brain-Computer Interface. PLoS ONE 2011, 6, e26322. [Google Scholar] [CrossRef] [Green Version]
  232. Chae, Y.; Jeong, J.; Jo, S. Toward Brain-Actuated Humanoid Robots: Asynchronous Direct Control Using an EEG-Based BCI. IEEE Trans. Robot. 2012, 28, 1131–1144. [Google Scholar] [CrossRef]
  233. Alshabatat, A.; Vial, P.; Premaratne, P.; Tran, L. EEG-based brain-computer interface for automating home appliances. J. Comput. 2014, 9, 2159–2166. [Google Scholar] [CrossRef]
  234. Lin, C.T.; Lin, B.S.; Lin, F.C.; Chang, C.J. Brain Computer Interface-Based Smart Living Environmental Auto-Adjustment Control System in UPnP Home Networking. IEEE Syst. J. 2014, 8, 363–370. [Google Scholar] [CrossRef] [Green Version]
  235. Ou, C.Z.; Lin, B.S.; Chang, C.J.; Lin, C.T. Brain Computer Interface-based Smart Environmental Control System. In Proceedings of the Eighth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), Piraeus-Athens, Greece, 18–20 July 2012; pp. 281–284. [Google Scholar] [CrossRef]
  236. Edlinger, G.; Holzner, C.; Guger, C.; Groenegress, C.; Slater, M. Brain-computer interfaces for goal orientated control of a virtual smart home environment. In Proceedings of the 4th International IEEE/EMBS Conference on Neural Engineering, Antalya, Turkey, 29 April–2 May 2009; pp. 463–465. [Google Scholar] [CrossRef]
  237. Kanemura, A.; Morales, Y.; Kawanabe, M.; Morioka, H.; Kallakuri, N.; Ikeda, T.; Miyashita, T.; Hagita, N.; Ishii, S. A waypoint-based framework in brain-controlled smart home environments: Brain interfaces, domotics, and robotics integration. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, 3–7 November 2013; pp. 865–870. [Google Scholar] [CrossRef]
  238. Bayliss, J.; Ballard, D. A virtual reality testbed for brain-computer interface research. IEEE Trans. Rehabil. Eng. 2000, 8, 188–190. [Google Scholar] [CrossRef] [Green Version]
  239. Bayliss, J. Use of the evoked potential P3 component for control in a virtual apartment. IEEE Trans. Neural Syst. Rehabil. Eng. 2003, 11, 113–116. [Google Scholar] [CrossRef] [PubMed]
  240. Ianez, E.; Azorin, J.M.; Ubeda, A.; Ferrandez, J.M.; Fernandez, E. Mental tasks-based brain–robot interface. Robot. Auton. Syst. 2010, 58, 1238–1245. [Google Scholar] [CrossRef]
  241. Schalk, G. Brain-computer symbiosis. J. Neural Eng. 2008, 5, P1–P15. [Google Scholar] [CrossRef] [Green Version]
  242. McFarland, D.J.; Sarnacki, W.A.; Wolpaw, J.R. Electroencephalographic (EEG) control of three-dimensional movement. J. Neural Eng. 2010, 7, 036007. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  243. Ubeda, A.; Ianez, E.; Azorin, J.M. Shared control architecture based on RFID to control a robot arm using a spontaneous brain–machine interface. Robot. Auton. Syst. 2013, 61, 768–774. [Google Scholar] [CrossRef]
  244. Li, T.; Hong, J.; Zhang, J.; Guo, F. Brain-machine interface control of a manipulator using small-world neural network and shared control strategy. J. Neurosci. Methods 2014, 224, 26–38. [Google Scholar] [CrossRef] [PubMed]
  245. Sirvent Blasco, J.L.; Ianez, E.; Ubeda, A.; Azorin, J.M. Visual evoked potential-based brain-machine interface applications to assist disabled people. Expert Syst. Appl. 2012, 39, 7908–7918. [Google Scholar] [CrossRef]
  246. Hortal, E.; Planelles, D.; Costa, A.; Ianez, E.; Ubeda, A.; Azorin, J.; Fernandez, E. SVM-based Brain-Machine Interface for controlling a robot arm through four mental tasks. Neurocomputing 2015, 151, 116–121. [Google Scholar] [CrossRef]
  247. Bell, C.J.; Shenoy, P.; Chalodhorn, R.; Rao, R.P.N. Control of a humanoid robot by a noninvasive brain-computer interface in humans. J. Neural Eng. 2008, 5, 214–220. [Google Scholar] [CrossRef]
  248. Xu, B.; Peng, S.; Song, A.; Yang, R.; Pan, L. Robot-aided upper-limb rehabilitation based on motor imagery EEG. Int. J. Adv. Robot. Syst. 2011, 8, 88–97. [Google Scholar] [CrossRef]
  249. Leeb, R.; Perdikis, S.; Tonini, L.; Biasiucci, A.; Tavella, M.; Creatura, M.; Molina, A.; Al-Khodairy, A.; Carlson, T.; Millán, J. Transferring brain–computer interfaces beyond the laboratory: Successful application control for motor-disabled users. Artif. Intell. Med. 2013, 59, 121–132. [Google Scholar] [CrossRef] [Green Version]
  250. Müller-Putz, G.R.; Pfurtscheller, G. Control of an Electrical Prosthesis With an SSVEP-Based BCI. IEEE Trans. Biomed. Eng. 2008, 55, 361–364. [Google Scholar] [CrossRef]
  251. Perrin, X.; Chavarriaga, R.; Colas, F.; Siegwart, R.; Millán, J.d.R. Brain-coupled Interaction for Semi-autonomous Navigation of an Assistive Robot. Robot. Auton. Syst. 2010, 58, 1246–1255. [Google Scholar] [CrossRef] [Green Version]
  252. Tanaka, K.; Matsunaga, K.; Wang, H. Electroencephalogram-based control of an electric wheelchair. IEEE Trans. Robot. 2005, 21, 762–766. [Google Scholar] [CrossRef]
  253. Rebsamen, B.; Guan, C.; Zhang, H.; Wang, C.; Teo, C.; Ang, M. A brain controlled wheelchair to navigate in familiar environments. IEEE Trans. Neural Syst. Rehabil. Eng. 2010, 18, 590–598. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  254. Rebsamen, B.; Teo, C.; Zeng, Q.; Ang, M.; Burdet, E.; Guan, C.; Zhang, H.; Laugier, C. Controlling a Wheelchair Indoors Using Thought. IEEE Intell. Syst. 2007, 22, 18–24. [Google Scholar] [CrossRef]
  255. Iturrate, I.; Antelis, J.; Kubler, A.; Minguez, J. A noninvasive brain-actuated wheelchair based on a P300 neurophysiological protocol and automated navigation. IEEE Trans. Robot. 2009, 25, 614–627. [Google Scholar] [CrossRef] [Green Version]
  256. Muller, S.; Celeste, W.; Bastos-Filho, T.; Sarcinelli-Filho, M. Brain-computer interface based on visual evoked potentials to command autonomous robotic wheelchair. J. Med. Biol. Eng. 2010, 30, 407–415. [Google Scholar] [CrossRef]
  257. Diez, P.; Muller, S.; Mut, V.; Laciar, E.; Avila, E.; Bastos-Filho, T.; Sarcinelli-Filho, M. Commanding a robotic wheelchair with a high-frequency steady-state visual evoked potential based brain–computer interface. Med. Eng. Phys. 2013, 35, 1155–1164. [Google Scholar] [CrossRef]
  258. Leeb, R.; Friedman, D.; Müller-Putz, G.R.; Scherer, R.; Slater, M.; Pfurtscheller, G. Self-paced (asynchronous) BCI control of a wheelchair in virtual environments: A case study with a tetraplegic. Comput. Intell. Neurosci. 2007, 7, 7–12. [Google Scholar] [CrossRef] [Green Version]
  259. Tsui, C.; Gan, J.; Hu, H. A self-paced motor imagery based brain-computer interface for robotic wheelchair control. Clin. EEG Neurosci. 2011, 42, 225–229. [Google Scholar] [CrossRef]
  260. Galan, F.; Nuttin, M.; Lew, P.; Vanacker, G.; Philips, J. A brain-actuated wheelchair: Asynchronous and non-invasive brain-computer interfaces for continuous control of robots. Clin. Neurophysiol. 2008, 119, 2159–2169. [Google Scholar] [CrossRef] [Green Version]
  261. Allison, B.; Dunne, S.; Leeb, R.; Millán, J.d.R.; Nijholt, A. Towards Practical Brain–Computer Interfaces: Bridging the Gap from Research to Real-World Applications; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  262. Brunner, C.; Allison, B.; Krusienski, D.; Kaiser, V.; Muller-Putz, G.; Pfurtscheller, G. Improved signal processing approaches in an offline simulation of a hybrid brain-computer interface. J. Neurosci. Methods 2010, 188, 165–173. [Google Scholar] [CrossRef] [Green Version]
  263. Li, Y.; Pan, J.; Wang, F.; Yu, Z. A hybrid BCI system combining P300 and SSVEP and its application to wheelchair control. IEEE Trans. Biomed. Eng. 2013, 60, 3156–3166. [Google Scholar] [CrossRef]
  264. Long, J.; Li, Y.; Wang, H.; Yu, T.; Pan, J.; Li, F. A Hybrid Brain Computer Interface to Control the Direction and Speed of a Simulated or Real Wheelchair. IEEE Trans. Neural Syst. Rehabil. Eng. 2012, 20, 720–729. [Google Scholar] [CrossRef] [PubMed]
  265. Cao, L.; Li, J.; Ji, H.; Jiang, C. A hybrid brain computer interface system based on the neurophysiological protocol and brain-actuated switch for wheelchair control. J. Neurosci. Methods 2014, 229, 33–43. [Google Scholar] [CrossRef]
  266. Lotte, F.; Congedo, M.; Lécuyer, A.; Lamarche, F.; Arnaldi, B. A review of classification algorithms for EEG-based brain-computer interfaces. J. Neural Eng. 2007, 4, 1–13. [Google Scholar] [CrossRef] [PubMed]
  267. Balafar, M.; Ramli, A.; Saripan, M.; Mashohor, S. Review of brain MRI image segmentation methods. Artif. Intell. Rev. 2010, 33, 261–274. [Google Scholar] [CrossRef]
  268. Lance, B.J.; Kerick, S.E.; Ries, A.J.; Oie, K.S.; McDowell, K. Brain-computer interface technologies in the coming decades. Proc. IEEE 2012, 100, 1585–1599. [Google Scholar] [CrossRef]
  269. Grosse-Wentrup, M.; Schölkopf, B.; Hill, J. Causal influence of gamma oscillations on the sensorimotor rhythm. NeuroImage 2011, 56, 837–842. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The most widely used method for recording brain activity in brain–computer interfacing is EEG, as it is simple, noninvasive, portable, and cost-effective. Typical electroencephalogram recording setup: a cap carrying contact electrodes and wires (from the Department of Electrical and Electronic Engineering at the Tokyo University of Agriculture and Technology). The wires are connected to amplifiers, not shown in the figure, which also improve the quality of the acquired signals through filtering. Some amplifiers include analog-to-digital converters so that the brain signals can be acquired by (and stored on) a computer.
Figure 2. Basic brain–computer interface (BCI) schematic: how targeted brain oscillation signals (or brainwaves) originate from a visual stimulus or a cognitive process, and how they are acquired, processed, and translated into commands.
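The acquisition–processing–translation chain sketched in Figure 2 can be illustrated with a minimal toy pipeline. Everything below is illustrative (the synthetic signals, the band edges, and the two-command mapping are assumptions, not the paper's actual protocol); a real BCI would replace `acquire` with amplifier readout and `translate` with a trained classifier.

```python
import numpy as np

fs = 256                       # sampling rate (Hz), illustrative
t = np.arange(2 * fs) / fs     # a two-second window of "recording"
rng = np.random.default_rng(2)

def acquire(freq):
    """Stand-in for the EEG amplifier: a narrowband oscillation plus noise."""
    return np.sin(2 * np.pi * freq * t) + 0.5 * rng.normal(size=t.size)

def band_power(x, lo, hi):
    """Total power of x within the [lo, hi] Hz band, via the FFT."""
    f = np.fft.rfftfreq(x.size, 1.0 / fs)
    p = np.abs(np.fft.rfft(x)) ** 2
    return p[(f >= lo) & (f <= hi)].sum()

def translate(x):
    """Map the dominant rhythm to a device command (two-class toy example)."""
    alpha = band_power(x, 8, 13)    # e.g., relaxed state
    beta = band_power(x, 14, 30)    # e.g., engaged state
    return "STOP" if alpha > beta else "GO"
```

A 10 Hz oscillation falls in the alpha band and maps to one command, a 20 Hz oscillation to the other; the design point is that each stage (acquisition, filtering/feature extraction, translation) is a separable module, as in the schematic.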
Figure 3. Experimental setup of a steady-state visually evoked potential (SSVEP) recording: a subject gazes at a screen that presents eight different patterns, corresponding to eight different intended commands to the interface. Each pattern flickers at a different frequency: whenever the subject gazes at a particular pattern, neuronal oscillations arise in the visual cortex that lock to the frequency of the flickering pattern (from the Department of Electrical and Electronic Engineering at the Tokyo University of Agriculture and Technology).
Figure 4. An experimental setup for controlling a vehicle through a steady-state visually-evoked-potentials-based BCI (from the Department of Electrical and Electronic Engineering at the Tokyo University of Agriculture and Technology).
Figure 5. The electrode arrangement of the International 10-20 system and the International 10-10 system. The circles show the electrodes defined by the International 10-20 system; the circles and the crosses together show the electrodes defined by the International 10-10 system.
Figure 6. Examples of event-related potentials: P300 waveforms, averaged over 85 trials. "Target" is the signal observed when the subject attends to the displayed stimulus; "Non Target" is the signal observed when the subject ignores the stimulus.
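The trial averaging behind Figure 6 can be sketched as follows. The sampling rate, amplitudes, noise level, and Gaussian-bump ERP shape are illustrative stand-ins, not the actual recording parameters; only the trial count (85) comes from the figure.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 256                        # sampling rate (Hz), illustrative
n_trials = 85                   # number of trials averaged in Figure 6
t = np.arange(fs) / fs          # one-second epoch after each stimulus

# Synthetic single-trial epochs: a P300-like bump peaking near 300 ms,
# buried in zero-mean background noise (all amplitudes illustrative).
p300 = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
trials = p300 + rng.normal(0.0, 3.0, size=(n_trials, t.size))

# Ensemble averaging: the zero-mean noise shrinks roughly as 1/sqrt(N),
# while the stimulus-locked component survives, revealing the ERP.
erp = trials.mean(axis=0)
peak_latency = t[np.argmax(erp)]    # lands near 0.3 s
```

This is why ERP protocols repeat the stimulus many times: a component invisible in any single trial emerges cleanly from the average.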
Figure 7. Stimuli in the telephone-call brain–computer interface. The rows and columns flash in the order indicated by the arrows.
Figure 8. Example of an SSVEP: Power spectrum of the signal observed when a subject gazes at a visual pattern (flickering at 12 Hz) compared to the power spectrum of the signal observed when the subject does not receive any stimulus (idle).
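The spectral detection illustrated in Figure 8 can be sketched with a synthetic signal; the sampling rate, recording length, noise level, and candidate-frequency set are assumptions for illustration, with only the 12 Hz flicker taken from the figure.

```python
import numpy as np

fs = 256          # sampling rate (Hz), illustrative
duration = 4.0    # seconds of gazing data, illustrative
f_stim = 12.0     # flicker frequency of the gazed pattern (Hz), as in Figure 8

t = np.arange(int(fs * duration)) / fs
rng = np.random.default_rng(1)

# Synthetic occipital channel: a 12 Hz component phase-locked to the flicker
# plus broadband background activity.
eeg = np.sin(2 * np.pi * f_stim * t) + rng.normal(0.0, 1.0, t.size)

# Power spectrum via the FFT; the SSVEP shows up as a sharp peak at f_stim.
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
power = np.abs(np.fft.rfft(eeg)) ** 2

# Simple detector: pick the candidate flicker frequency with the largest power,
# i.e., decide which of several flickering patterns the subject is gazing at.
candidates = [8.0, 10.0, 12.0, 15.0]
detected = max(candidates, key=lambda f: power[np.argmin(np.abs(freqs - f))])
```

Comparing band power at each candidate frequency is the simplest SSVEP decoding rule; practical systems refine it (e.g., by also checking harmonics), but the principle of the idle-versus-stimulus spectral contrast shown in Figure 8 is the same.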
Figure 9. An instance of steady-state visually evoked potentials-based brain–computer interfacing: Each checkerboard performs monochrome inversion at a different frequency.
Figure 10. Monochrome inversion in a checkerboard.
Figure 11. Event-related desynchronisation elicited by the motor imagery tasks of moving the left hand (top) and the right hand (bottom) at electrodes CP3 and CP4.
Figure 12. A prototypical three-wheel, small-sized robot for smart-home applications used to perform experiments (from the Department of Information Engineering at the Università Politecnica delle Marche).
Table 1. Electrophysiological recordings in BCI.
Reference Paper | Electrode Locations in Electroencephalography | Event-Related Potentials | Evoked Potentials
[2]X
[123]X
[124]X
[125]X
[126]X
[127]X
[128]X
[129]X
[130]X
[131]X
[132]X
[133] X
[134] X
[135] X
[136] X
[137] X
[76] X
[138] X
[139] X
[140] X
[141] X
[72] X
[142] X
[144] X
[145] X
[146] X
[147] X
[148] X
[149] X
[150] X
[124] X
[151] X
[152] X
[153] X
[154] X
[155] X
[156] X
[157] X
[158] X
[159] X
[160] X
[161] X
[162] X
[80] X
[163] X
[164] X
[165] X
[166] X
[167] X
[168] X
[169] X
[146] X
[170] X
[171] X
[172] X
[81] X
[173] X
[174] X
[148] X
[175] X
[176] X
[177] X
[178] X
[179] X
Table 2. BCI applications to automation.
Reference Paper | Application to Unmanned Vehicles and Robotics | Application to Mobile Robotics and Interaction with Robotic Arms | Application to Robotic Arms, Robotic Tele-Presence and Electrical Prosthesis | Application to Wheelchair Control and Autonomous Vehicles
[115]X
[216]X
[217]X
[218]X
[219]X
[220]X
[221]X
[222]X
[223]X
[224]X
[225]X
[226]X
[227]X
[78]X
[34]X
[228]X
[229]X
[230]X
[231]X
[232]X
[132] X
[240] X
[78] X
[22] X
[77] X
[241] X
[123] X
[242] X
[240] X
[243] X
[244] X
[245] X
[246] X
[247] X
[248] X
[249] X
[1] X
[250] X
[251] X
[252] X
[253] X
[255] X
[256] X
[257] X
[258] X
[259] X
[261] X
[262] X
[263] X
[264] X
[265] X
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Bonci, A.; Fiori, S.; Higashi, H.; Tanaka, T.; Verdini, F. An Introductory Tutorial on Brain–Computer Interfaces and Their Applications. Electronics 2021, 10, 560. https://doi.org/10.3390/electronics10050560