Article

Simultaneous Decoding of Eccentricity and Direction Information for a Single-Flicker SSVEP BCI

1 Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
2 Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
3 State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100864, China
4 Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China
* Author to whom correspondence should be addressed.
Electronics 2019, 8(12), 1554; https://doi.org/10.3390/electronics8121554
Submission received: 11 October 2019 / Revised: 3 December 2019 / Accepted: 3 December 2019 / Published: 17 December 2019

Abstract
The feasibility of a steady-state visual evoked potential (SSVEP) brain–computer interface (BCI) with a single-flicker stimulus for multiple-target decoding has been demonstrated in a number of recent studies. These single-flicker BCIs have mainly employed direction information for encoding targets, i.e., different targets are placed at different spatial directions relative to the flicker stimulus. The present study explored whether visual eccentricity information can also be used to encode targets, with the purpose of increasing the number of targets in single-flicker BCIs. A total of 16 targets were encoded, placed at eight spatial directions and two eccentricities (2.5° and 5°) relative to a 12 Hz flicker stimulus. Whereas distinct SSVEP topographies were elicited when participants gazed at targets in different directions, targets at different eccentricities were mainly represented by different signal-to-noise ratios (SNRs). Using a canonical correlation analysis-based classification algorithm, simultaneous decoding of both direction and eccentricity information was achieved, with an offline 16-class accuracy of 66.8 ± 16.4% averaged over 12 participants and a best individual accuracy of 90.0%. Our results demonstrate a single-flicker BCI with a substantially increased number of targets, a step towards practical applications.

1. Introduction

Steady-state visual evoked potentials (SSVEPs), as one of the most widely used responses in electroencephalogram (EEG)-based brain–computer interfaces (BCIs), have received sustained attention [1,2,3,4,5,6,7]. When participants attend to a periodic visual stimulus, SSVEPs are elicited at the stimulation frequency and its harmonics [8]. Correspondingly, by encoding different targets with distinct frequencies, BCI systems can be realized via real-time frequency recognition of the recorded SSVEPs [3,9]. To date, frequency-coded SSVEP BCIs have achieved significant progress, featuring a relatively large number of simultaneously decodable targets and high communication speeds [5,6], and thereby showing potential for real-life applications such as letter typing.
When flicker stimuli are presented at different spatial locations in the visual field, distinct SSVEP responses are elicited [10]. This phenomenon, known as retinotopic mapping [11,12], has gained increasing interest in recent BCI studies. While pilot BCI studies mainly focused on designing visual spatial patterns to increase the number of possible BCI targets [13] or to enhance the signal-to-noise ratio (SNR) of SSVEPs [14], recent efforts have been devoted to directly decoding the spatial information embedded in SSVEP responses [15,16]. Unlike the traditional frequency-coded SSVEP BCI paradigm, in which SSVEP responses are modulated by targets flickering at different frequencies [3,9], it is feasible to design a spatially-coded SSVEP BCI by encoding targets at different spatial locations. Indeed, previous studies have demonstrated that overtly attending to targets at distinct spatial directions relative to a centrally displayed flicker stimulus can evoke separable SSVEP responses [15,16]. Moreover, the differences in responses are sufficient to support single-trial decoding of directions, enabling a dial [15] and a spatial navigation task [16] and suggesting the feasibility of a single-stimulus, multi-target SSVEP BCI. Compared with frequency-coded BCIs, in which multiple stimuli are required to encode multiple targets, this single-stimulus design considerably simplifies the stimulation setup and the user interface [17,18]. In addition, given that the stimulus always appears in the peripheral visual field, this single-flicker SSVEP BCI paradigm is expected to reduce the visual burden [16], making it a good candidate for practical applications.
However, previous spatially-coded SSVEP studies only utilized spatial directions to encode targets, and the resulting nine- or four-command designs limit the potential applications of spatially-coded BCIs compared with conventional frequency-coded SSVEP BCIs. For example, in a drone control task, previous designs are only sufficient to control the moving direction, whereas more commands, such as accelerating, stopping, and climbing, could be issued if more command channels were available. One way to extend the range of feasible application scenarios is to include visual eccentricity information to increase the number of targets. Indeed, SSVEP responses have been observed to decrease as the eccentricity of the stimulus from the fixation spot increases [19], providing neurophysiological evidence in support of eccentricity decoding from SSVEP responses. Joint decoding of eccentricity and direction information is expected to substantially increase the number of targets by making better use of the visual spatial information. Nevertheless, eccentricity information can contribute an additional encoding dimension only if the spatial patterns remain separable even at large eccentricities. Specifically, the weaker SSVEP responses at larger eccentricities may reduce the direction-classification accuracy, influencing BCI performance in a complex way. Although previous studies suggest relatively stable spatial patterns of visual motion-onset responses across eccentricities [17,18], efforts are still needed to evaluate how visual eccentricity modulates SSVEP responses and whether this modulation can contribute to decoding visual spatial information at a single-trial level.
In the present study, we evaluated the feasibility of a spatially-coded BCI that encodes targets with both eccentricity and direction information simultaneously. Eight directions (left, left-up, up, right-up, right, right-down, down, and left-down) and two eccentricities (2.5° and 5°) relative to one flicker stimulus were employed to encode 16 targets. During the experiment, participants were instructed to direct their overt attention to one of the targets while EEG was recorded. SSVEP responses modulated by the different visual directions and eccentricities were then analyzed, and the 16-target classification performance was evaluated offline. Our results suggest the feasibility of simultaneous decoding of visual eccentricity and direction information based on SSVEPs.

2. Methods

2.1. Participants

Twelve participants (five females, aged from 23 to 28 years, mean 24.8 years) with normal or corrected-to-normal vision took part in the experiment. All participants gave informed consent before the experiments and received financial compensation for their participation. The study was approved by the local Ethics Committee of the Department of Psychology, Tsinghua University.

2.2. Visual Stimulation

The visual stimulation in the experiment is illustrated in the top panel of Figure 1. An LCD computer monitor (144 Hz refresh rate, 1920 × 1080 pixel resolution, 23.6-inch, viewing distance of 50 cm) was used to present the stimulation. A white disk (radius = 2.5°) was displayed at the center of the screen (indicated as the gray disk in the top panel of Figure 1). During the experiment, the disk flickered at 12 Hz following the sampled sinusoidal stimulation method [20], forming a flicker stimulus to elicit SSVEPs. The stimulus lasted 4000 ms in total. A small red square (0.25° × 0.25°) appeared on the screen to indicate where participants should direct their overt attention. There were 16 possible targets arranged around the central disk at eight directions (left, left-up, up, right-up, right, right-down, down, and left-down) and two eccentricities (2.5° and 5°). Since a previous study observed a rapid drop of SSVEP responses when the stimulus was presented beyond 5° from the central fixation spot [19], 2.5° and 5° were chosen conservatively to evaluate the feasibility of eccentricity decoding; eccentricities larger than 5° will be explored in further studies.
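As a minimal sketch of the sampled sinusoidal stimulation method described above, the per-frame luminance of a 12 Hz flicker on a 144 Hz monitor can be computed as follows. The function name and the 0–1 luminance range are our own assumptions; the original experiment used MATLAB.

```python
import numpy as np

def sampled_sinusoid_luminance(freq_hz=12.0, refresh_hz=144.0, n_frames=576):
    """Per-frame luminance (0..1) of a sampled sinusoidal flicker.

    Frame i of a monitor refreshing at refresh_hz is displayed at time
    i / refresh_hz, so the luminance follows 0.5 * (1 + sin(2*pi*f*t)).
    """
    t = np.arange(n_frames) / refresh_hz
    return 0.5 * (1.0 + np.sin(2.0 * np.pi * freq_hz * t))

# 576 frames = 4000 ms at 144 Hz; 144/12 = 12 frames render one flicker cycle
lum = sampled_sinusoid_luminance()
```

Because 144 Hz is an integer multiple of 12 Hz, each flicker cycle is rendered by exactly 12 frames, so no frame-rate approximation of the stimulation frequency is needed.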

2.3. Experimental Procedure

The experiment included ten blocks in total. The duration of the inter-block intervals was controlled by the participants themselves, with a lower limit of 30 s set in the experimental program. In each block, 16 trials, one per attention target, were presented in a random order. As shown in the bottom panel of Figure 1, each trial began with a red square cueing the to-be-attended target for 1000 ms, followed by a 4000 ms flicker stimulus. Since the present study is an offline study, the red square highlighted the to-be-attended target for the whole flickering duration to help participants stay focused. While this highlighting strategy cannot be used in online experiments without pre-defined targets, the performance obtained with this offline design is believed to be a reasonable estimate for follow-up online studies, as a similar strategy was adopted in our previous 4-direction decoding studies [16,17]. The inter-trial interval varied from 1000 to 1500 ms, during which participants could blink or swallow. The Psychophysics Toolbox [20,21] based on MATLAB (The MathWorks, Natick, MA, USA) was employed to present the stimulation.
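The block structure above can be sketched as follows. The constants mirror the timings stated in the text, while the function names and the Python implementation are illustrative; the actual experiment was run with the Psychophysics Toolbox in MATLAB.

```python
import random

CUE_MS = 1000                 # red square cues the to-be-attended target
FLICKER_MS = 4000             # 12 Hz flicker stimulus duration
ITI_MS_RANGE = (1000, 1500)   # jittered inter-trial interval (blink/swallow)

def block_trial_sequence(n_targets=16, seed=None):
    """Randomized presentation order of the 16 targets within one block."""
    rng = random.Random(seed)
    order = list(range(n_targets))
    rng.shuffle(order)
    return order

def trial_timeline(rng=None):
    """Cue -> flicker -> jittered inter-trial interval, in milliseconds."""
    rng = rng or random.Random()
    return [("cue", CUE_MS), ("flicker", FLICKER_MS),
            ("iti", rng.randint(*ITI_MS_RANGE))]
```

Each of the ten blocks would draw a fresh `block_trial_sequence`, so every target appears exactly once per block.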

2.4. EEG Recordings

EEG was recorded continuously at a sampling rate of 1000 Hz with a SynAmps2 amplifier (Compumedics NeuroScan, Charlotte, NC, USA). Sixty-four electrodes were placed according to the international 10–20 system, with the reference at the vertex and the ground at AFz on the forehead. Electrode impedances were kept below 10 kΩ during the experiment, which was carried out in an electromagnetically shielded room.

2.5. Data Preprocessing

Continuous EEG data were first band-pass filtered between 1.5 and 80 Hz, and a 50 Hz notch filter was applied to remove line noise. Next, the EEG data were segmented into 4000 ms epochs following stimulus onset, resulting in 10 trials for each of the 16 attentional targets. Then, a set of 9 electrodes covering the parieto-occipital area (PO5/6/7/8, O1/2, Pz, POz, and Oz), where SSVEPs typically show maximal responses, was chosen for further analysis.
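The preprocessing pipeline described above might be sketched as follows, using SciPy's standard filtering routines. The filter order and the use of zero-phase (forward-backward) filtering are our own assumptions, as the text does not specify them.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 1000  # sampling rate in Hz

def preprocess(raw, onsets, chan_idx, trial_ms=4000):
    """Band-pass + notch filter continuous EEG, then cut stimulus-locked epochs.

    raw:      (n_channels, n_samples) continuous EEG
    onsets:   stimulus-onset sample indices
    chan_idx: indices of the 9 parieto-occipital channels
    """
    # zero-phase 4th-order Butterworth band-pass, 1.5-80 Hz
    b, a = butter(4, [1.5, 80.0], btype="bandpass", fs=FS)
    x = filtfilt(b, a, raw, axis=1)
    # 50 Hz notch to remove line noise
    bn, an = iirnotch(50.0, Q=30.0, fs=FS)
    x = filtfilt(bn, an, x, axis=1)
    n = int(trial_ms * FS / 1000)
    # (n_trials, 9, n_samples_per_trial) epochs after stimulus onset
    return np.stack([x[chan_idx, o:o + n] for o in onsets])
```

With the parameters of this study, each participant's data would yield 160 epochs of shape (9, 4000).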

2.6. SNR Evaluation

In order to quantitatively describe the SSVEP response strength when attending to targets at different directions and eccentricities, a recently proposed method [22], which evaluates the SSVEP SNR of multi-channel EEG data while considering multiple harmonics, was employed in the present study. Here, the stimulation frequency, as well as its second and third harmonics, was included in the SNR calculation and the subsequent BCI classification.
First, for each subject, the SSVEP signal was defined as the projection of the single-trial EEG data onto the subspace spanned by the stimulation frequency and its harmonics, while the noise was defined as the residual after the projection. The SNR, defined as the ratio between signal and noise, was calculated for each trial with Formula (1). Then, the single-trial SNRs were averaged for each attentional target as an index of the response strength. Details of the mathematical derivation can be found in [22].
$$\mathrm{SNR} = \frac{\mathrm{signal}}{\mathrm{noise}} = \frac{\operatorname{trace}\left(T\phi^{H}\phi T^{H}\right)}{\operatorname{trace}\left[T\left(I-\phi^{H}\phi\right)T^{H}\right]} \tag{1}$$
Here, T is the 9-channel EEG data, φ is the reference signal, and I is the identity matrix:
$$\phi = \begin{bmatrix} \sin(2\pi f_{\mathrm{stim}} t) \\ \cos(2\pi f_{\mathrm{stim}} t) \\ \sin(4\pi f_{\mathrm{stim}} t) \\ \cos(4\pi f_{\mathrm{stim}} t) \\ \sin(6\pi f_{\mathrm{stim}} t) \\ \cos(6\pi f_{\mathrm{stim}} t) \end{bmatrix} \tag{2}$$
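A possible implementation of Formula (1) is sketched below. We assume the sine/cosine reference is orthonormalized so that the product φᴴφ acts as a proper projection onto the harmonic subspace (the text does not state the normalization); the function name is our own.

```python
import numpy as np

def ssvep_snr_db(T, f_stim, fs, n_harmonics=3):
    """SNR (dB) of multi-channel data T (n_channels, n_samples): power in the
    harmonic subspace of f_stim divided by the residual power, Formula (1)."""
    n = T.shape[1]
    t = np.arange(n) / fs
    # sine/cosine reference at the stimulation frequency and its harmonics
    phi = np.vstack([g(2 * np.pi * h * f_stim * t)
                     for h in range(1, n_harmonics + 1)
                     for g in (np.sin, np.cos)])       # (2*H, n_samples)
    q, _ = np.linalg.qr(phi.T)        # orthonormal basis of the subspace
    total = np.sum(T * T)             # trace(T T^H)
    signal = np.sum((T @ q) ** 2)     # trace(T phi^H phi T^H)
    noise = total - signal            # trace(T (I - phi^H phi) T^H)
    return 10.0 * np.log10(signal / noise)
```

Projecting with the orthonormal basis `q` avoids forming the full n × n projection matrix, which for 4000-sample trials would be needlessly large.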
Finally, a two-way repeated-measures analysis of variance (RMANOVA) with two within-subject factors, direction (left, left-up, up, right-up, right, right-down, down, and left-down) and eccentricity (2.5° and 5°), was conducted to statistically assess their effects on the SSVEP SNR. p-values smaller than 0.05 after Greenhouse–Geisser correction were considered statistically significant. Statistical analyses were performed with SPSS (22.0.0, IBM, Armonk, NY, USA).

2.7. BCI Classification

In the offline performance evaluation, the single-trial 4000 ms EEG data were used for BCI classification without any manual artifact rejection, as in many previous BCI studies [3,4,5]. A canonical correlation analysis (CCA)-based classification algorithm [23] was employed to capture the distinct SSVEP patterns, as reported in [15,16]. All offline classifications were evaluated with a 10-fold cross-validation procedure.
First, in order to evaluate how directions and eccentricities contribute to the classification performance, an 8-direction classification at each eccentricity and a 2-eccentricity classification in each direction were conducted.
In the training phase, the K trials of EEG data recorded while the participant attended target location c were concatenated as X_c. Then, the reference signal Y was obtained by replicating φ (see Formula (2)) K times:
$$Y = [\,\phi\ \ \phi\ \cdots\ \phi\,] \tag{3}$$
Here, K is 9 for each target, as 90% of the EEG data were used as the training set. CCA was employed to find spatial filters Wx_c and Wy_c (c = 1, 2, …, N) that maximize the canonical correlations r_c = [ρ₁ ⋯ ρ_M] between X_c and the reference signal Y:
$$r_c = \max_{W_{x_c},\,W_{y_c}} \frac{E\left[W_{x_c}^{T} X_c\, Y^{T} W_{y_c}\right]}{\sqrt{E\left[W_{x_c}^{T} X_c X_c^{T} W_{x_c}\right]\, E\left[W_{y_c}^{T} Y Y^{T} W_{y_c}\right]}} \tag{4}$$
Here, N is the number of targets: N = 8 for the 8-direction classification and N = 2 for the 2-eccentricity classification. M is the number of canonical correlation coefficients and was set to 6, as reported in [15,16].
Then, for each trial in the training set, a 1 × (N × M) feature vector was composed by calculating the canonical correlations r_c for all N targets and concatenating them as [r₁ r₂ ⋯ r_N]; this vector was used to train a support vector machine (SVM) classifier with a linear kernel using the LIBSVM toolbox [24]. The regularization parameter of the linear kernel was selected via a grid search on the training data in each iteration of the cross-validation procedure.
In the testing phase, the EEG trial to be classified was filtered with Wx_c, and the correlation coefficients r_c with the corresponding reference signals Wy_c φ were computed (c = 1, 2, …, N). The concatenated correlation coefficients [r₁ r₂ ⋯ r_N] constituted the feature vector for the testing trial, which was then used to recognize the target with the trained classifier.
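The training and testing steps above can be sketched as follows. This is an illustrative QR/SVD-based CCA, not the authors' exact implementation; the helper names and the shape conventions are our own assumptions.

```python
import numpy as np

def cca_weights(X, Y, m=6):
    """CCA between X (p, n) and Y (q, n): returns the first m canonical
    correlations and the weight matrices Wx (p, m), Wy (q, m)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    Yc = Y - Y.mean(axis=1, keepdims=True)
    qx, rx = np.linalg.qr(Xc.T)          # orthonormal bases of the
    qy, ry = np.linalg.qr(Yc.T)          # row spaces of X and Y
    u, s, vt = np.linalg.svd(qx.T @ qy)  # singular values = canonical corrs
    wx = np.linalg.solve(rx, u[:, :m])
    wy = np.linalg.solve(ry, vt.T[:, :m])
    return s[:m], wx, wy

def trial_features(trial, filters):
    """Feature vector [r_1 ... r_N]: for each class c, correlate the trial
    projected by Wx_c with the reference projected by Wy_c."""
    feats = []
    for wx, wy, phi in filters:          # one (Wx_c, Wy_c, phi) per class
        a = wx.T @ (trial - trial.mean(axis=1, keepdims=True))
        b = wy.T @ (phi - phi.mean(axis=1, keepdims=True))
        feats.extend(np.corrcoef(a[k], b[k])[0, 1] for k in range(a.shape[0]))
    return np.asarray(feats)             # length N * M, fed to a linear SVM
```

In the training phase, `cca_weights` would be applied to the concatenated trials X_c of each target against the replicated reference Y; the resulting per-trial feature vectors then train the linear SVM.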
After decoding the directions and eccentricities separately, a 16-target classification, which decoded the visual eccentricity and direction information simultaneously, was conducted with the above-mentioned CCA method. Here, N = 16.
Finally, in order to evaluate how visual eccentricity information influences the joint classification of directions and eccentricities, three conditions were compared: individual filter, 2.5° filter, and 5° filter. In the individual filter condition, the spatial filters Wx_c and Wy_c (c = 1, 2, …, 16) were trained with data from their respective eccentricities, corresponding to the results in Table 1. In the 2.5° filter condition, all classification accuracies were calculated using spatial filters trained with data at an eccentricity of 2.5°, even for targets at an eccentricity of 5°. The 5° filter condition is defined analogously.

3. Results

As illustrated in Figure 2, a typical SSVEP response over occipital and parietal areas was found across conditions. When attending to targets at different directions and eccentricities, distinct SNR topographies were elicited, with a shift of the response over the parieto-occipital areas. Specifically, when participants attended to a target on the right side, the flicker stimulus appeared in their left visual field, leading to a right-dominant response; the opposite held for targets on the left side, indicating a contralateral response. In addition, the SSVEP spatial patterns remained similar across eccentricities of the flicker stimulus. The dissimilarities of the SSVEP topographies in the right and right-up conditions for this specific subject might be due to the relatively low signal-to-noise ratio at the eccentricity of 5°, leading to a failure to effectively capture the expected SSVEP activities.
Figure 3 shows the SNRs when attending to targets at different eccentricities and directions. At the eccentricity of 2.5°, the SNRs were −13.2 ± 3.00 dB, −12.7 ± 2.83 dB, −12.5 ± 2.99 dB, −12.3 ± 3.46 dB, −12.4 ± 3.27 dB, −13.2 ± 3.52 dB, −13.8 ± 3.37 dB, and −13.8 ± 3.51 dB for left, left-up, up, right-up, right, right-down, down, and left-down, respectively. At the eccentricity of 5°, the SNRs were −15.4 ± 3.53 dB, −14.8 ± 3.06 dB, −15.0 ± 3.11 dB, −14.6 ± 3.41 dB, −15.4 ± 3.23 dB, −16.0 ± 2.97 dB, −14.6 ± 3.57 dB, and −15.3 ± 3.80 dB for left, left-up, up, right-up, right, right-down, down, and left-down, respectively. In addition, baseline SNRs were calculated from EEG data recorded during rest periods, when no flicker stimulus was present. The average baseline across participants was −20.5 ± 2.25 dB. Even when attending to targets at the eccentricity of 5°, SNRs were still much higher than the baseline, suggesting a robust SSVEP response.
The RMANOVA showed a significant main effect of eccentricity on SSVEP SNRs (2.5° > 5°, F(1, 11) = 21.7, p = 0.001), indicating that SNRs decreased as eccentricity increased. No significant main effect of direction was found (F(7, 77) = 1.55, p = 0.214), nor was there a significant direction × eccentricity interaction (F(7, 77) = 1.72, p = 0.161).
The 8-direction classification accuracies at the eccentricities of 2.5° and 5° are shown in Figure 4. The accuracies were 75.5 ± 14.9% and 59.4 ± 15.0% at 2.5° and 5°, respectively. A paired t-test confirmed that the classification accuracy was significantly reduced for targets at the larger eccentricity (t(11) = 6.27, p < 0.001).
As reflected in Figure 5, the 2-eccentricity classification achieved accuracies of 89.6 ± 15.0%, 91.7 ± 10.7%, 89.6 ± 13.3%, 84.2 ± 11.0%, 91.7 ± 13.0%, 93.8 ± 9.38%, 87.9 ± 16.0%, and 90.4 ± 9.00% for left, left-up, up, right-up, right, right-down, down, and left-down, respectively. An RMANOVA found no significant main effect of direction (F(7, 77) = 1.40, p = 0.262).
The results so far demonstrate the feasibility of decoding directions and eccentricities separately. The 16-target classification results, which decoded directions and eccentricities at the same time, are summarized in Table 1. When using the 4-s data, the mean accuracy across participants was 66.8 ± 16.4%, well above the chance level of the 16-target classification problem (i.e., 6.25%). Note that individual differences were found in the classification accuracies, which ranged from 38.8% to 90.0%.
The accuracies obtained using different spatial filters are shown in Figure 6. They were 66.8 ± 15.7%, 62.3 ± 15.5%, and 61.0 ± 16.4% for the individual filter, the 2.5° filter, and the 5° filter, respectively, showing a decreasing trend across the three conditions. An RMANOVA found a significant main effect of filter type (F(2, 22) = 17.4, p < 0.001), and post-hoc tests with Bonferroni correction found a significant difference between the individual filter and the 2.5° filter conditions (p < 0.001). Furthermore, although accuracies in both conditions were numerically higher than those in the 5° filter condition (individual filter > 5° filter, p < 0.001; 2.5° filter > 5° filter, p = 0.956), it should be noted that the absolute accuracies are comparable.
Subsequently, we took a closer look at the classification results. First, the top panel of Figure 7 shows the confusion matrix for the 16-target classification (8 directions × 2 eccentricities) with individual filters. As shown, classification achieved better performance at the eccentricity of 2.5° than at 5°, and most misclassifications occurred between adjacent directions and eccentricities. Moreover, when using spatial filters trained with data at an eccentricity of 2.5° or 5°, similar but lower performance was obtained, as shown in the bottom panel of Figure 7. Note that regardless of the filter condition, classification at the eccentricity of 2.5° always outperformed that at 5°.
Finally, we explored the effect of data length on BCI performance. Data from the first N seconds (N = 2, 3, and 4) of each trial were used, keeping the number of trials the same across conditions. As shown in Figure 8, the accuracies were 48.6 ± 17.2%, 59.3 ± 15.5%, and 66.8 ± 15.7% for data lengths of 2 s, 3 s, and 4 s, respectively. Although accuracy decreased with shorter data, the 2-s data still provided accuracies well above chance level (t-test, t(11) = 7.74, p < 0.001).

4. Discussion

By encoding targets with visual direction and eccentricity information simultaneously, a single-stimulus 16-target SSVEP BCI was proposed in the present study. When participants attended to targets at different spatial directions and eccentricities relative to a single-flicker stimulus, distinct SNRs and spatial patterns of SSVEPs were elicited. For the first time, visual eccentricity was used as a classification label for SSVEP responses, and the classification results suggest that responses modulated by visual eccentricity can be recognized by a machine classifier at a single-trial level, implying the possibility of real-time eccentricity decoding. Moreover, the offline 16-target classification achieved an average accuracy of 66.8% and a best accuracy of 90.0%, suggesting the feasibility of decoding visual direction and eccentricity information at the same time with only one stimulus. By utilizing both kinds of information simultaneously, the proposed single-flicker BCI increases the number of targets to 16, which is by far the largest number reported for spatially-coded SSVEP BCIs. Unlike frequency-coded BCI paradigms, in which targets have to be bound to stimuli in advance to form commands, this spatially-coded single-stimulus design separates the stimulus from the targets, so targets can be placed more flexibly and the paradigm can be applied in scenarios where the number and locations of targets change. Moreover, instead of staring at the stimuli, participants only need to focus on the non-flickering targets, which fits daily interaction habits better. Together with augmented reality and computer vision technology, this paradigm is expected to achieve visual information decoding in a more natural way. For example, when users are walking on the street with Google Glass, the proposed BCI system could identify which store a user is looking at and feed back its discount information.
The present study also investigated how visual eccentricity information contributes to this spatially-coded paradigm. First, when attending to targets at the larger eccentricity, the reduced SNRs and decreased 8-direction classification accuracies indicated a weaker response. This decrease of SSVEP responses could serve as a contributing feature for eccentricity decoding, as supported by the 2-eccentricity classification accuracies ranging from 84.2% to 93.8% across the 8 directions. Furthermore, the 8-direction classification at 5° still achieved an accuracy of 59.4 ± 15.0%, much higher than chance level. More importantly, compared with classifications using spatial filters trained at the corresponding eccentricities, the 16-target classification accuracies, though significantly lower, remained comparable when using spatial filters trained on data at an eccentricity of 5°. These classification accuracies provide evidence of weaker yet stable spatial patterns across eccentricities. Taken together, our results suggest that the decreased SSVEP responses and the relatively stable spatial patterns provide the neural basis for the joint decoding of visual eccentricity and direction information, supporting the feasibility of visual eccentricity as an encoding dimension in spatially-coded BCIs. It should also be noted that the feasibility of transferring spatial filters across eccentricities bears the potential to reduce training time, since filters could be trained at a single eccentricity while targets at multiple eccentricities are used in online tasks.
We noticed individual differences in classification performance, which may be explained by the variation in SSVEP signal quality across subjects (see the standard errors in Figure 3). This phenomenon has been observed in previous SSVEP BCI studies as well [16,25,26]. Moreover, it should be noted that the stimulus in this study was not presented in the center of the visual field, leading to a smaller visual burden but relatively weak SSVEP responses [19,27]. As suggested by a previous high-frequency SSVEP study on BCI demographics, a relatively weaker response may result in larger individual differences [28]. Therefore, the present study provides extrafoveal evidence of individual differences in SSVEP responses, supplementing findings based on central-vision stimulation.
Instead of pursuing a boost in communication speed, we proposed a new paradigm to optimize the user interface for applications. However, while participants do not need to stare at the stimuli and can control the system with a low visual load, the evoked SSVEP responses are relatively weak, leading to lower performance compared with a conventional SSVEP paradigm. To achieve a practical online system, it is necessary to balance user-friendliness against system performance by improving the average accuracies. As reported in [29], the signal quality of the dataset influences the estimation of the covariance matrices in CCA-based methods. Therefore, due to the relatively weak SSVEP responses in the proposed paradigm, the obtained spatial filters may not be as effective as those from traditional SSVEP studies [30], where the stimuli are directly attended, and a relatively longer time is needed to make a reliable classification. The improvement of classification performance is thus expected to come from further optimization of the spatial filters. First, by constructing spatial filters that make the neural patterns evoked by the stimulus at different locations more distinguishable, using methods such as common spatial patterns [31] and DCPM [32], it may be possible to enhance recognition performance in the proposed paradigm. Furthermore, as an increased training sample size is expected to boost classification accuracy in CCA-based methods [30], using more training trials per direction or exploiting training data from other subjects may also improve the average accuracy.
As a first step toward evaluating the feasibility of eccentricity decoding in SSVEP responses, several issues remain to be discussed. First, only two eccentricities were included in the present study. As a next step, it would be worthwhile to evaluate whether more eccentricities can be decoded and whether the number of targets can be increased further in this paradigm. Moreover, as only one stimulation frequency was used, it should be tested whether our findings generalize to other frequencies and whether there is an optimal frequency for classification. Finally, since the proposed SSVEP BCI system demonstrated that one flicker stimulus is sufficient to encode 16 output channels, it should be studied whether incorporating multiple stimuli could further increase the number of targets and cover a larger visual field.

Author Contributions

Conceptualization, J.C., A.M., and D.Z.; data curation, J.C.; formal analysis, J.C.; funding acquisition, A.K.E. and D.Z.; methodology, A.M., Y.W., and D.Z.; software, J.C.; supervision, A.K.E., X.G., and D.Z.; validation, Y.W. and X.G.; visualization, J.C.; writing—original draft, J.C.; writing—review and editing, J.C., A.M., A.K.E., Y.W., X.G., and D.Z.

Funding

This work was supported by the National Natural Science Foundation of China (NSFC) and the German Research Foundation (DFG) in the project Crossmodal Learning, NSFC 61621136008/DFG TRR-169/C1/B1; the National Key Research and Development Plan under Grant 2016YFB1001200; the National Natural Science Foundation of China under Grants 61977041 and U1736220; and the National Social Science Foundation of China under Grant 17ZDA323.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Stimulus (top) and timing (bottom) of the experiment.
Figure 2. Topographies of the signal-to-noise ratio (SNR) of the SSVEP from a representative participant (sub 2). The inner circle represents the eccentricity of 2.5° and the outer circle the eccentricity of 5°. All SNRs were normalized into z-values, so that positive and negative values indicate SNRs above and below the mean level across electrodes, respectively.
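The topographies in Figure 2 rest on a per-electrode SNR estimate followed by z-normalization across electrodes. A minimal sketch on synthetic data, assuming a common SNR definition (FFT power at the stimulus frequency divided by the mean power of neighboring frequency bins); the exact definition used in the paper may differ:

```python
import numpy as np

def snr_at_freq(x, fs, f0, n_neighbors=5):
    """Per-channel SNR: power at f0 over mean power of neighboring bins (assumed definition)."""
    spec = np.abs(np.fft.rfft(x, axis=0)) ** 2     # power spectrum, shape (n_bins, n_channels)
    freqs = np.fft.rfftfreq(x.shape[0], 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f0)))         # bin closest to the stimulus frequency
    neighbors = list(range(k - n_neighbors, k)) + list(range(k + 1, k + 1 + n_neighbors))
    return spec[k] / spec[neighbors].mean(axis=0)

# synthetic 9-channel "EEG": only channel 4 carries a 12 Hz SSVEP-like component
rng = np.random.default_rng(1)
fs, n = 250, 1000                                  # 4 s of data at 250 Hz
t = np.arange(n) / fs
eeg = rng.standard_normal((n, 9))
eeg[:, 4] += 0.8 * np.sin(2 * np.pi * 12 * t)

snr = snr_at_freq(eeg, fs, 12.0)
z = (snr - snr.mean()) / snr.std()                 # z-normalize across electrodes
assert z[4] == z.max()                             # the SSVEP-carrying channel stands out
```

Plotting z over the electrode montage would yield a topography of the kind shown in Figure 2.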
Figure 3. Bar plot of SNRs when attending to targets at different eccentricities and directions. Error bars indicate standard errors.
Figure 4. The 8-direction classification accuracies at the eccentricities of 2.5° and 5°. The thick dark line indicates the average accuracy, and the thin lines indicate the accuracies of individual subjects. The black dashed line indicates the chance level of classification.
Figure 5. Box plot of the 2-eccentricity classification accuracy at each of the eight directions. The black dashed line indicates the chance level of classification.
Figure 6. Box plot of classification accuracies obtained with different spatial filters. "Individual filter" means the spatial filters were trained with data from the respective eccentricity; "2.5° filter" means all accuracies were calculated using spatial filters trained with data at an eccentricity of 2.5°, and analogously for the "5° filter". The black dashed line indicates the chance level of classification.
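The classification underlying these figures is based on canonical correlation analysis (CCA) between the multichannel SSVEP and sinusoidal references at the 12 Hz flicker frequency. A hedged sketch of the core CCA correlation feature on synthetic data, assuming standard sine–cosine references with harmonics; the function names and data below are illustrative, not the authors' implementation:

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    # canonical correlations are the singular values of Qx^T Qy
    return float(np.linalg.svd(Qx.T @ Qy, compute_uv=False).max())

def ssvep_reference(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine reference matrix at freq and its harmonics."""
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * h * freq * t))
        cols.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(cols)

# synthetic 8-channel "EEG" with a 12 Hz component on half of the channels
rng = np.random.default_rng(0)
fs, n = 250, 1000
t = np.arange(n) / fs
eeg = rng.standard_normal((n, 8))
eeg[:, :4] += 0.5 * np.sin(2 * np.pi * 12 * t)[:, None]

r12 = cca_max_corr(eeg, ssvep_reference(12.0, fs, n))
r10 = cca_max_corr(eeg, ssvep_reference(10.0, fs, n))
assert r12 > r10   # the reference at the true flicker frequency correlates best
```

Only the correlation feature is sketched here; in the study, spatial filters trained on data from each eccentricity are additionally compared, as described in the Figure 6 caption.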
Figure 7. Confusion matrices for the 16-target classification. L, LU, U, RU, R, RD, D, and LD are short for left, left-up, up, right-up, right, right-down, down, and left-down. Rows show true labels and columns show predicted labels. "2.5° filter" means the confusion matrix was calculated using spatial filters trained with data at an eccentricity of 2.5°, and analogously for the "5° filter"; "individual filter" means the spatial filters were trained with data from the respective eccentricity.
Figure 8. Classification accuracies as a function of data length. Error bars indicate standard errors. The black dashed line indicates the chance level of classification.
Table 1. Summary of the 16-target classification accuracy when using 4-s steady-state visual evoked potential (SSVEP) data.

Subject Id            Accuracy (%)
1                     58.1
2                     86.9
3                     90.0
4                     67.5
5                     59.4
6                     76.3
7                     80.6
8                     38.8
9                     51.3
10                    45.0
11                    70.6
12                    76.9
Average               66.8
Standard deviation    16.4
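As a quick consistency check, the average and standard deviation reported in Table 1 follow from the twelve per-subject accuracies (assuming the sample standard deviation, i.e., the n−1 form):

```python
from statistics import mean, stdev

# per-subject 16-target accuracies (%) from Table 1
acc = [58.1, 86.9, 90.0, 67.5, 59.4, 76.3, 80.6, 38.8, 51.3, 45.0, 70.6, 76.9]

avg = round(mean(acc), 1)
sd = round(stdev(acc), 1)   # sample (n-1) standard deviation
assert (avg, sd) == (66.8, 16.4)
```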

Share and Cite

MDPI and ACS Style

Chen, J.; Maye, A.; Engel, A.K.; Wang, Y.; Gao, X.; Zhang, D. Simultaneous Decoding of Eccentricity and Direction Information for a Single-Flicker SSVEP BCI. Electronics 2019, 8, 1554. https://doi.org/10.3390/electronics8121554

