1. Introduction
The recent development of low-cost electroencephalography (EEG) devices has promoted the creation of new Brain-Computer Interface (BCI) applications in different areas, such as home automation control or stress and fatigue monitoring, among others. We have shown in [1,2] a self-designed EEG prototype for the Internet of Things (IoT). This prototype allows us to classify eye states, i.e., open eyes and closed eyes, using a simple criterion based on the power of the alpha and beta bands. The frequency-domain components of these brain rhythms are computed using the Fourier Transform (FT) over non-overlapped windows. The main limitation of this approach is the significant delay required to detect changes in eye state.
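The criterion described above can be sketched as follows. This is an illustrative Python reimplementation (the authors used Matlab), and the band limits (alpha 8–13 Hz, beta 13–30 Hz) and the alpha-dominance rule are conventional assumptions, not taken verbatim from [1,2]:

```python
import numpy as np

def band_power(window, fs, band):
    """Power of `window` within the frequency `band` (Hz), via the FFT."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return spectrum[mask].sum()

def classify_windows(signal, fs, win_s):
    """Split `signal` into non-overlapping windows of `win_s` seconds and
    label each as closed eyes (alpha power dominates) or open eyes."""
    n = int(win_s * fs)
    labels = []
    for start in range(0, len(signal) - n + 1, n):
        w = signal[start:start + n]
        alpha = band_power(w, fs, (8.0, 13.0))   # assumed alpha band
        beta = band_power(w, fs, (13.0, 30.0))   # assumed beta band
        labels.append('CE' if alpha > beta else 'OE')
    return labels
```

Because each decision waits for a full non-overlapped window, the decision rate is bounded by the window length, which is the delay the present work addresses.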
In this paper, we study the use of the sliding FT applied to overlapped windows as a solution for minimising that delay. We test the performance of the proposed method using a self-designed prototype [2] and the consumer-grade OpenBCI device [3].
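The mechanism behind the delay reduction can be sketched in a few lines of Python (an illustration, not the authors' Matlab code): with overlapped windows the hop between consecutive windows shrinks, so the classifier can refresh its decision more often than once per window length.

```python
import numpy as np

def sliding_windows(signal, fs, win_s, overlap):
    """Yield (start_time_s, window) pairs over `signal` with the given
    fractional overlap. overlap = 0 reproduces non-overlapped windows;
    larger overlaps shorten the hop and hence the decision interval."""
    n = int(win_s * fs)
    hop = max(1, int(n * (1.0 - overlap)))
    for start in range(0, len(signal) - n + 1, hop):
        yield start / fs, signal[start:start + n]
```

For example, a 2 s window with 70% overlap yields a new window (and thus a new FT and a new decision) every 0.6 s, instead of every 2 s in the non-overlapped case.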
2. Materials and Methods
In our study, two different EEG devices have been used to capture user brain activity (see Figure 1). The consumer-grade OpenBCI platform [3] was configured with the Cyton board and the 8-channel Ultracortex Mark IV headset. An electrode located at the O2 position was used as the input channel, while the reference and ground were placed at the A2 and A1 positions, respectively. A sampling frequency of
has been used to capture the EEG signals. On the other hand, the self-designed prototype [2] employs a sampling frequency of
, an input channel located at O2, and ground and reference electrodes at the positions corresponding to the right mastoid and FP2, respectively.
The brain activity of 5 male subjects was recorded and analysed in each experiment. Their mean age was 25 years, and none of them suffered from any disease or pain during the recordings. All experiments were conducted in a quiet room, with the subject seated in a comfortable chair.
Recordings comprised two tasks: 20 s of open eyes (OE) and 20 s of closed eyes (CE), with 3 s of rest between the tasks. A total of 10 recordings were registered for each subject. During the OE tasks, the subjects were asked to look at a cross projected at the centre of a monitor placed directly in front of them.
The acquired signals were processed offline using Matlab. Our algorithm implements the method proposed in [2], replacing the standard FT with the sliding FT. During the training step, the threshold parameter is calculated for each subject using 3 randomly selected recordings. This value allows us to classify eye states in the remaining 7 test recordings, applying the decision rule detailed in [2]. The best window size for each subject is also determined.
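The per-subject training step can be sketched as follows. The actual decision rule is the one detailed in [2]; the midpoint threshold on the alpha/beta power ratio below is a hypothetical stand-in used only to illustrate the train-then-classify structure:

```python
import numpy as np

def train_threshold(ratios_oe, ratios_ce):
    """Per-subject threshold from training recordings: the midpoint between
    the mean alpha/beta power ratios of the OE and CE classes.
    (Illustrative rule; the paper's actual rule is given in [2].)"""
    return 0.5 * (np.mean(ratios_oe) + np.mean(ratios_ce))

def decide(ratio, threshold):
    """Label a window from its alpha/beta ratio: closed eyes raise alpha."""
    return 'CE' if ratio > threshold else 'OE'
```

The threshold is computed once per subject from the 3 training recordings and then applied unchanged to every sliding window of the 7 test recordings.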
3. Experimental Results
The performance of both EEG devices is also analysed in a realistic scenario. For this purpose, the test recordings were concatenated to compose eye tasks of different durations, thus simulating a continuous EEG recording with real eye-state changes. The classification is performed for each subject using the threshold parameters obtained from training.
Table 1 and Table 2 show the accuracy achieved by all the subjects for four durations of eye tasks, denoted by t, applying a 70% overlap for the sliding windows. The classification algorithm applies the window size best suited to each subject, as done in [2]. Therefore, the system delay is not the same for all subjects, as shown in Table 3 for both EEG devices.
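The subject-dependent delays reported in Table 3 follow directly from the per-subject window size: with a fixed 70% overlap, the interval between consecutive decisions equals the hop length. A minimal sketch of that relation (an illustration, not the paper's code):

```python
def decision_delay(win_s, overlap):
    """Seconds between consecutive classifier decisions: the hop length,
    i.e. the window size scaled by (1 - overlap)."""
    return win_s * (1.0 - overlap)
```

For instance, a subject assigned a 2 s window sees a new decision every 0.6 s at 70% overlap, while a subject assigned a longer window incurs a proportionally longer delay.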
4. Conclusions
The study presented in this paper shows that the sliding Fourier Transform allows us to reduce the delay between changes in eye state and the corresponding decision made by the classifier. The system performance is highly subject-dependent, but there are no significant deviations between the average performances of the two EEG devices. In addition, regardless of the hardware used to capture the brain signals, the classifier exhibits better accuracy for long-duration eye tasks.
Author Contributions
F.L. implemented the software used in this paper and wrote the paper; F.L. and D.I. performed the experiments and the data analysis; F.J.V.-A. developed the EEG hardware; P.M.C. and A.D. designed the experiments and led the research.
Funding
This work has been funded by the Xunta de Galicia and the European Social Fund of the European Union (Francisco Laport, code ED481A-2018/156).
Conflicts of Interest
The authors declare no conflict of interest.
References
- Laport, F.; Vazquez-Araujo, F.J.; Castro, P.M.; Dapena, A. Brain-Computer Interfaces for Internet of Things. Proceedings 2018, 2, 1179.
- Laport, F.; Vazquez-Araujo, F.J.; Castro, P.M.; Dapena, A. Hardware and Software for Integrating Brain–Computer Interface with Internet of Things. In International Work-Conference on the Interplay Between Natural and Artificial Computation; Springer: Cham, Switzerland, 2019; pp. 22–31.
- OpenBCI. Available online: http://openbci.com/ (accessed on 14 July 2019).
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).