Article

P300 Brain–Computer Interface-Based Drone Control in Virtual and Augmented Reality

School of Computer Science and Electrical Engineering, Handong Global University, Pohang 37554, Korea
* Author to whom correspondence should be addressed.
Sensors 2021, 21(17), 5765; https://doi.org/10.3390/s21175765
Submission received: 27 July 2021 / Revised: 19 August 2021 / Accepted: 24 August 2021 / Published: 27 August 2021
(This article belongs to the Section Vehicular Sensing)

Abstract

Since the emergence of head-mounted displays (HMDs), researchers have attempted to introduce virtual and augmented reality (VR, AR) into brain–computer interface (BCI) studies. However, studies that incorporate both AR and VR to compare performance in the two environments are lacking. Therefore, it is necessary to develop a BCI application that can be used in both VR and AR so that BCI performance can be compared in the two environments. In this study, we developed an open-source-based drone control application using P300-based BCI, which can be used in both VR and AR. Twenty healthy subjects participated in the experiment with this application. They were asked to control the drone in the two environments and filled out questionnaires before and after the experiment. We found no significant (p > 0.05) difference in online performance (classification accuracy and amplitude/latency of the P300 component) or user experience (satisfaction with time length, program, environment, interest, difficulty, immersion, and feeling of self-control) between VR and AR. This indicates that the P300 BCI paradigm is relatively reliable and may work well in various situations.

1. Introduction

The brain–computer interface (BCI) is a technology that allows for direct communication between humans and a computer through the user’s brain activity and mental states [1]. A BCI can improve quality of life by translating brain activity into “mental commands” that can be exploited in a wide range of applications, from assisting people with disabilities or re-education therapies to entertainment and video games [2].
P300 BCI is one of the most popular paradigms in the BCI field. The P300 is a positive event-related potential (ERP) that appears approximately 300 ms after a stimulus [3]. Farwell and Donchin introduced the modern P300-based BCI in 1988 as a means of communication for patients suffering from locked-in syndrome [4]. The P300 Speller is a typewriter that uses the P300-based BCI paradigm. It consists of a matrix of rows and columns containing alphabetic and numeric characters; flashes of the row and column containing the intended character are target stimuli, and the rest are non-target stimuli. The system detects the user’s intended character (target) from the P300 component elicited by the flashing rows and columns [4]: the ERPs over one or more repetitions are analyzed to pinpoint which item produced a P300, i.e., which item the user wants to select [5].
Many BCI studies in the past were based upon 2D visual stimulation, but the recent development of head-mounted displays (HMDs) has paved the way to the commercialization of combined BCI+VR (virtual reality) or BCI+AR (augmented reality) [5]. Indeed, HMDs provide a built-in structure that can support the embedding of EEG (electroencephalography) electrodes, which are necessary for BCI [5].
With the emergence of HMDs, some BCI studies have combined VR or AR using an HMD. Table 1 provides an overview of previous VR-BCI and AR-BCI studies, summarizing the application contents, BCI paradigm, and display type. The AR systems are classified according to the type used: video see-through (VST) and optical see-through (OST).
However, the development of BCI content in VR and AR environments has not been pursued actively because of the limited performance of BCI-based control, the two technologies’ inherent complexity, and the difficulty of designing effective UIs based upon BCI [2]. Furthermore, although there are BCI studies that include either VR or AR, no BCI study has been conducted in both AR and VR to compare the performance in the two environments. Therefore, it is necessary to develop a BCI application that can be used to compare performance in both VR and AR and to test their feasibility in real-world environments.
To do so, we developed a BCI drone control application in this study that can be used in both VR and AR. In VR, a virtual drone was controlled with virtual maps, while in AR, a real drone was controlled and real-time video could be viewed with a drone camera. Using this application, we conducted an experiment with 20 subjects and compared the differences in the VR and AR environments.
In general, AR using head-mounted displays (HMDs) includes optical see-through (OST) and video see-through (VST) [24]. An OST-HMD does not obstruct the subject’s front view and allows the user to observe the surrounding environment directly. A VST-HMD captures environmental information through a camera so that the subject can still obtain information about the surrounding environment. In this study, we use VR-assisted AR that floats virtual directional buttons above real-time video from a drone’s camera. Note that the AR we compare with VR in this study is video see-through (VST) AR.
To compare the performance in VR and AR effectively, we focused on one BCI paradigm (P300), two environments (VR and AR), and one task (mobile drone control).
We selected the P300 paradigm for our drone control system for the following reasons. First, drone control requires rapid interaction between the user and the drone: the direction in which the user wants to go must be issued quickly and with high accuracy. Second, compared to the motor imagery BCI paradigm, P300-based BCIs require less training time and achieve a higher accuracy and information transfer rate (amount of information sent per unit of time). Third, P300 allows for a larger number of possible commands [18,25], which can be used as the directional buttons (forward, up, down, right, right turn, left, and left turn) in the drone control system. Finally, the P300 paradigm provides an intuitive user interface (UI); what a user sees is what should be recognized [26,27,28,29], which is essential for controlling the drone easily. In the SSVEP paradigm, the visual stimulus flickers at a different frequency for each target button, which can cause performance to differ across target buttons [30,31]. In contrast, all target buttons in the P300 paradigm share the same inter-stimulus interval (ISI), and the ERPs corresponding to each target button are preprocessed with the same procedure. Therefore, P300 was selected as the BCI paradigm for the drone control system to ensure that all directional buttons behave similarly regardless of the order or frequency of the instructions, because all instructions to control the drone were generated randomly in our experiment.
The first objective of this paper was to introduce an open-source-based drone control application using P300 BCI in the AR and VR environments. This application can allow people who are physically challenged to travel outside their normal environment and increase their autonomy by controlling a drone with their brainwaves alone. In addition, it allows for more immersive experiences by combining BCI with VR and AR environments using an HMD.
The second objective of this paper was to study the feasibility of combining BCI with VR or AR technologies and to compare the VR and AR environments. We evaluated the two environments from two perspectives: online performance and user experience. We compared the classification accuracy and analyzed the differences in event-related potential (ERP) signals in each environment. We also compared user satisfaction in the two environments and examined accuracy according to environment preference.

2. Materials and Methods

2.1. Application Development

2.1.1. Development Environment

In this project, we used OpenViBE, Python, Unity 3D, and Tello for Unity, which together provide good portability, scalability, and online performance.
OpenViBE for overall integration: This is an open-source software platform specialized in integrating the various components of a brain–computer interface [32]. OpenViBE allows brain waves to be acquired, preprocessed, classified, and visualized in real time and specializes in designing and testing brain–computer interfaces. In this project, we implemented the OpenViBE Designer scenarios for the training session and online session. These components collect brain signals from an EEG device (DSI-VR300, Wearable Sensing Inc., San Diego, CA, USA) and the stimulation information from Unity, and save the data in a file.
Python for signal processing: Signal processing algorithms were implemented in Python. Brainwaves were processed with Python to detect the P300 signal and the target button that a user saw was predicted by a pre-constructed classifier.
Unity 3D for VR and AR application: Unity 3D is a cross-platform game engine that can be used to create three-dimensional, VR, and AR games, as well as simulations and other experiences [33]. In this project, it was used to display a visual panel that blinks randomly, virtual maps for drone movement, and video from a drone on the screen.
Tello for Unity to control the drone: This Unity API was used to control a DJI Tello drone in the AR mode [34]. It provides functions to move the drone in several directions, automatic take-off and landing functions, and hijack functions to control the drone with a keyboard in case of emergency [34]. It also allows users to view real-time video from the drone on screens in the HMD [34]. However, Tello for Unity controls the drone by sending it a constant stream of keyboard-driven commands. To address this, we made it possible for the drone to travel a set distance per single command by relaying that command continuously over a period of time.
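A minimal sketch of this relaying idea is shown below in Python rather than in Unity/C#: a predicted direction is mapped to a velocity command that is re-sent at a fixed rate for a short period and then cancelled. The UDP address and command strings follow the publicly documented Tello SDK; the specific direction-to-command mapping and the timing constants are illustrative assumptions, not the values used in our implementation.

```python
import socket
import time

# Sketch of "relay one command for a fixed period" so that a single BCI
# selection moves the drone a roughly fixed distance. Assumes the Tello
# SDK's UDP text-command interface.
TELLO_ADDR = ("192.168.10.1", 8889)          # default Tello address (assumption)

# Hypothetical mapping from predicted direction to an "rc" velocity command
# (rc <left/right> <forward/back> <up/down> <yaw>, each in -100..100).
RC_COMMANDS = {
    "forward":    "rc 0 50 0 0",
    "up":         "rc 0 0 50 0",
    "down":       "rc 0 0 -50 0",
    "left":       "rc -50 0 0 0",
    "right":      "rc 50 0 0 0",
    "left turn":  "rc 0 0 0 -50",
    "right turn": "rc 0 0 0 50",
}

def relay_direction(direction, duration_s=1.5, rate_hz=10.0):
    """Repeatedly send the velocity command, then stop (hover)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"command", TELLO_ADDR)      # enter SDK mode
    cmd = RC_COMMANDS[direction].encode()
    t_end = time.time() + duration_s
    while time.time() < t_end:
        sock.sendto(cmd, TELLO_ADDR)
        time.sleep(1.0 / rate_hz)
    sock.sendto(b"rc 0 0 0 0", TELLO_ADDR)   # cancel the motion
    sock.close()

# Example: move forward for ~1.5 s after the classifier predicts "forward".
# relay_direction("forward")
```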
All the source codes for the drone control system are available through the Github repository [35].

2.1.2. System of Drone Control Application

The drone control system’s environment and composition are shown in Figure 1. We implemented an application to control the drone in the AR and VR environments using Unity. In Unity, an interface of blinking buttons was implemented to provide visual stimulation. Unity relays the timing information of each flickering button to OpenViBE through the TCP/IP protocol. OpenViBE parses the timing information from Unity and the data from the DSI-VR300 into a single file. The timing information and EEG signal are processed in Python to predict the direction in which the user wants to move the drone. The predicted command is transferred to either a virtual or real drone; the real drone streams real-time video from its camera back to Unity through Wi-Fi.
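As a rough illustration of the marker path, the sketch below receives newline-delimited flash markers over TCP and pairs them with local timestamps. In the real system OpenViBE receives these markers, and the actual message format exchanged between Unity and OpenViBE is not reproduced here; the port number and the "<button id>\n" protocol are assumptions.

```python
import socket
import time

# Purely illustrative receiver for flash-timing markers sent over TCP.
def collect_markers(host="127.0.0.1", port=5678):
    """Yield (timestamp, button_id) pairs as flash events arrive."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, port))
    server.listen(1)
    conn, _ = server.accept()
    buffer = b""
    while True:
        data = conn.recv(1024)
        if not data:                       # sender closed the connection
            break
        buffer += data
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            yield time.time(), int(line.decode())  # timestamp + flashed button id

# Usage sketch:
# for ts, button in collect_markers():
#     print(f"button {button} flashed at {ts:.3f}")
```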

2.1.3. Signal Processing and Classification

The acquired signal was processed through a pre-defined pipeline, as shown in Figure 2. Each step is described below; a code sketch of the full pipeline follows the list.
  • Re-referencing: seven channels (Fz, Pz, Oz, P3, P4, PO7, and PO8) were re-referenced using a channel on the left ear.
  • Bandpass filtering: data were filtered to the frequency band of 0.1–30 Hz using the SciPy package with the 5th order Butterworth filter [36].
  • Epoching: data were segmented to 0–1000 ms epochs from each stimulus onset.
  • Baseline Correction: each epoch’s mean value was subtracted from the epoch. EEG signals are prone to amplitude shifts attributable to factors such as changes in impedance or noise, which can severely distort the analysis; baseline correction compensates for this random amplitude shift.
  • Consecutive Trial Averaging: to improve the signal-to-noise ratio (SNR), segmented epochs were averaged continuously by 20 epochs.
  • Resampling: Brain signals were digitized originally at a sampling rate of 300 Hz. To reduce the data size, signals were down-sampled to 100 Hz.
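The following Python sketch walks through the pipeline above in the listed order using NumPy/SciPy. Array shapes, variable names, and the exact grouping used for consecutive trial averaging are illustrative assumptions; only the parameter values (0.1–30 Hz, 5th-order Butterworth, 0–1000 ms epochs, 20-epoch averaging, 300 → 100 Hz) come from the description above.

```python
import numpy as np
from scipy.signal import butter, filtfilt, resample

FS = 300                     # original sampling rate (Hz)
CHANNELS = ["Fz", "Pz", "Oz", "P3", "P4", "PO7", "PO8"]

def preprocess(eeg, ref, onsets, n_avg=20):
    """Sketch of the pipeline in Figure 2.

    eeg    : (n_channels, n_samples) raw signals of the seven channels
    ref    : (n_samples,) left-ear reference channel
    onsets : sample indices of stimulus onsets for one button
    """
    # 1. Re-referencing to the left-ear channel
    eeg = eeg - ref[np.newaxis, :]

    # 2. Band-pass filtering, 0.1-30 Hz, 5th-order Butterworth (zero-phase)
    b, a = butter(5, [0.1, 30.0], btype="bandpass", fs=FS)
    eeg = filtfilt(b, a, eeg, axis=1)

    # 3. Epoching: 0-1000 ms after each stimulus onset
    n_samp = FS  # 1000 ms at 300 Hz
    epochs = np.stack([eeg[:, t:t + n_samp] for t in onsets])  # (n_epochs, ch, samp)

    # 4. Baseline correction: subtract each epoch's mean
    epochs -= epochs.mean(axis=2, keepdims=True)

    # 5. Consecutive trial averaging over n_avg epochs to raise the SNR
    avg = epochs[:n_avg].mean(axis=0)

    # 6. Down-sampling from 300 Hz to 100 Hz
    avg = resample(avg, 100, axis=1)
    return avg   # (n_channels, 100) feature matrix for one button
```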
The EEG data acquired during the training session were processed through the steps in Figure 2 and a classifier was constructed using these data. Then, this classifier was used in the following two online sessions. Linear discriminant analysis (LDA) was used as our classifier. LDA uses a linear hyperplane to separate data from two classes, assuming that data are distributed normally [37]. As its classification is highly efficient despite its simplicity, this classifier is used widely for BCI designs, particularly for P300-based BCI [38]. The EEG data from the online session were processed through the same steps in Figure 2. All outputs from the classifier across blinks were summed and the selection (direction button) with the highest value was chosen as the target direction.
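A minimal sketch of the training and selection logic is given below, assuming scikit-learn’s LDA implementation and a flattened feature vector per epoch; the paper specifies LDA and the score-summing rule but not the library or feature layout.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_classifier(X_train, y_train):
    """X_train: (n_epochs, n_features) flattened epochs; y_train: 1 = target flash."""
    lda = LinearDiscriminantAnalysis()
    lda.fit(X_train, y_train)
    return lda

def select_direction(lda, online_epochs):
    """online_epochs: dict {button_name: (n_blinks, n_features)} from one selection.
    The LDA scores of all blinks are summed per button and the button with the
    highest total is chosen as the target direction."""
    scores = {
        button: lda.decision_function(epochs).sum()
        for button, epochs in online_epochs.items()
    }
    return max(scores, key=scores.get)
```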

2.1.4. Game Scenario and Contents

The characteristics of the P300 component in an ERP may differ in appearance, latency, and amplitude depending upon users’ individual differences, age, and ultradian rhythms [39]. Thus, it is reasonable to collect data during a training session before running a real-time BCI application (e.g., a testing or online session). As with most BCI systems, the application developed consists of two phases: the training session and the online session. We designed a training scenario as shown in Figure 3a; the resulting classifier was then used to detect the target button in Figure 3b, which is the UI during the online (testing) session.
In the training session, the user looks at the target button according to the instructions on the top. In one selection of a training session, seven buttons blink 30 times with a fixed ISI in random sequence. “Blink” in this system refers to visual stimulation using a color that toggles between gray and yellow. A button toggles to yellow for 100 ms and re-toggles to gray for 100 ms to give one visual stimulation.
After the training session, online sessions are run to control the drone. The button interface for the online sessions is shown in Figure 3b. There are seven directional buttons (forward, up, down, right, right turn, left, and left turn). In one selection of the online session, the seven buttons blink 20 times with a fixed ISI in random sequence. The user looks at the button for the direction in which she/he wants to move the drone, and the application predicts which button the user is looking at and controls the drone accordingly.
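A simple sketch of how one selection’s flash sequence could be generated is shown below. The 200 ms per flash (100 ms yellow plus 100 ms gray) and the 7 buttons × 20 blinks structure follow the description above; the block-randomized shuffling is an assumption, since the exact randomization scheme is not detailed here.

```python
import random

BUTTONS = ["forward", "up", "down", "right", "right turn", "left", "left turn"]
ISI_MS = 200     # 100 ms highlighted (yellow) + 100 ms back to gray

def build_flash_sequence(n_blinks=20):
    """Return a list of (onset_ms, button) flash events for one selection."""
    order = []
    for _ in range(n_blinks):
        block = BUTTONS[:]          # each button flashes once per block
        random.shuffle(block)
        order.extend(block)
    return [(i * ISI_MS, button) for i, button in enumerate(order)]

# Example: 7 buttons x 20 blinks = 140 flashes, i.e. 28 s of stimulation.
# sequence = build_flash_sequence()
```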
The drone control application is divided into VR and AR content depending upon the environment. Figure 3c,d show parts of the VR application, with which a user controls a drone in the VR environment: the virtual drone (Figure 4a) allows the user to navigate a virtual map of his/her choice (park, desert, maze, or forest). Figure 3e,f show the AR environment, in which a real drone (DJI-Tello, Figure 4b and Figure 5b) is controlled in the real world and users can watch real-time video from the drone’s camera.

2.2. Experiment

2.2.1. Subjects

To compare the performance in VR and AR, 20 subjects (age: 20 to 27, ten males and ten females) participated in this experiment with the same equipment in the same place and performed one task (mobile drone control) in VR and AR sequentially. The Public Institutional Bioethics Committee designated by the Ministry of Health and Welfare approved the study (P01-201812-11-004). Before the experiment, all subjects were informed about the procedure and their right to stop the experiment if/when they felt uncomfortable or experienced 3D motion sickness. All subjects signed a written consent form prior to the experiment and were paid for their participation. None of them had participated in a BCI experiment before this study.
Each experiment took a total of 50 min. The experimental procedure is shown in Figure 6. Questionnaires were administered before and after the experiment. Training sessions were conducted to generate a classifier, followed by online sessions to control the drone. Half of the participants engaged in the AR online session after the VR online session and the other half engaged in the VR online session after the AR online session. To test their accuracy in the online sessions, the subjects were instructed to move the drone according to the instructions given at the top of the display. Each subject performed 30 selections in total (15 in VR and 15 in AR) across the two online sessions.
The EEG data were recorded with the DSI-VR300, which combines an HMD (HTC Vive) with an EEG headset (Figure 5a) [40]. The device has seven dry-electrode channels (Fz, Pz, Oz, P3, P4, PO7, and PO8) and a 300 Hz sampling frequency [40]. A DJI-Tello drone was used in the AR application (Figure 4b and Figure 5b) [41].

2.2.2. Questionnaire

Two questionnaires were administered, one pre-task and one post-task (Table 2). Before the experiment, the subjects filled out a pre-task questionnaire that includes questions about mental disease, experience with BCI, AR, or VR content, and 3D sickness. They were also asked about their condition, such as how long they had slept and whether they had consumed alcohol, coffee, or cigarettes within the previous 24 h. After the experiment, the subjects filled out a post-task questionnaire about their satisfaction with VR and AR to compare the two environments. They evaluated their satisfaction with the time length, program, environment, interest, difficulty, immersion, feeling of self-control of the drone, and 3D sickness. They also indicated their predicted accuracy from 1 to 10 as well as their preferred environment (VR or AR) and the reason for their preference.

2.2.3. Analysis and Statistical Tests

Primarily, we investigated the online performance, peak latency, and amplitude of the ERP component to compare the VR and AR environments. The scores and answers to the questionnaires were also examined to assess any differences between the two. We used the Wilcoxon signed-rank tests to analyze the data, as the Kolmogorov–Smirnov test indicated that most of the data did not follow a normal distribution. We set the significance level to 0.05.
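The procedure amounts to the following sketch (here with SciPy): test the paired VR–AR differences for normality and then apply the Wilcoxon signed-rank test at α = 0.05. The exact Kolmogorov–Smirnov variant (e.g., whether it was applied to each sample or to the paired differences) is not spelled out above, so the version shown is one reasonable reading.

```python
import numpy as np
from scipy import stats

ALPHA = 0.05

def compare_vr_ar(vr_values, ar_values):
    """Paired comparison of one measure (e.g., accuracy) between VR and AR."""
    vr, ar = np.asarray(vr_values, float), np.asarray(ar_values, float)

    # Normality check on the paired differences (assumed reading of the KS test)
    diffs = vr - ar
    _, p_norm = stats.kstest(diffs, "norm", args=(diffs.mean(), diffs.std(ddof=1)))
    normal = p_norm > ALPHA

    # Non-parametric paired test actually used for the comparisons
    _, p_value = stats.wilcoxon(vr, ar)
    return {"normal": normal, "wilcoxon_p": p_value, "significant": p_value < ALPHA}

# Example with the per-subject accuracies (percent) from Table 3:
# result = compare_vr_ar(acc_vr, acc_ar)
```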

3. Results

Before analyzing the experimental results, we inspected all subjects’ P300 waves across the 30 selections (15 each in VR and AR) one by one. In one selection, there were 20 epochs (or blinks) per button; in total, 140 epochs (20 epochs × 7 buttons) were required to predict one directional button. If the maximum amplitude of the ERP in an epoch exceeded 80 μV, which is beyond the conventional range of P300 amplitude [42,43], the epoch was considered poorly measured, and if such epochs exceeded 50% of the total epochs, the selection was considered noise-contaminated. If at least 10 of the 15 selections (66%) in either AR or VR were considered noise-contaminated, the subject was excluded from the analysis. As a result, three subjects (S07, S12, and S13) were excluded from the analysis because of heavy noise in their EEG signals.
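The rejection rules can be summarized by the short sketch below. The 80 μV, 50%, and 10-of-15 thresholds follow the text; treating the absolute maximum of an epoch as its “maximum amplitude” is an assumption.

```python
import numpy as np

def epoch_is_bad(epoch, amp_uv=80.0):
    """epoch: (n_channels, n_samples) in microvolts."""
    return np.abs(epoch).max() > amp_uv

def selection_is_noisy(epochs):
    """epochs: (140, n_channels, n_samples) = 20 blinks x 7 buttons."""
    n_bad = sum(epoch_is_bad(e) for e in epochs)
    return n_bad > 0.5 * len(epochs)          # more than half the epochs are bad

def subject_is_excluded(vr_selections, ar_selections, limit=10):
    """Each argument: list of 15 epoch arrays, one per selection."""
    for selections in (vr_selections, ar_selections):
        if sum(selection_is_noisy(s) for s in selections) >= limit:
            return True                       # too many noisy selections
    return False
```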
Performance in each environment, 3D motion sickness, and environment preference are shown in Table 3. We compared the VR and AR environments from two perspectives: online performance (classification accuracy and ERP signal) and user experience (satisfaction and preference) in Section 3.1 and Section 3.2.

3.1. Online Performance in AR and VR

3.1.1. Accuracy in VR and AR

The classification accuracy was calculated to evaluate the drone control system in the two environments. Figure 7 shows the subjects’ accuracy in VR and AR and the mean. On average, the subjects’ performance was 90.88% (VR) and 88.53% (AR), with an overall mean of 89.71%. Wilcoxon signed-rank tests were performed to test the statistical significance and showed that the classification accuracy did not differ significantly between the two environments (p > 0.05).

3.1.2. ERP in AR and VR

To compare the ERP signals in the two environments, we analyzed the latency and amplitude of the P300 peak. As noted above, the AR compared with VR in this study is VR-assisted and video see-through-based. Three hundred epochs (15 selections × 20 blinks) per environment were averaged for each subject; we identified the peak of the P300 component in the averaged ERP manually and calculated the peak’s amplitude and latency.
As shown in Table 4, the mean latency in VR and AR were 415.88 ± 22.77 ms and 411.76 ± 23.82 ms, respectively, and the mean amplitude in the two were 5.09 ± 4.79 µV and 6.22 ± 7.94 µV, respectively.
The Wilcoxon signed-rank tests were used to test the statistical significance of the ERP because the Kolmogorov–Smirnov test showed that the latency and amplitude dataset also did not follow a normal distribution. The Wilcoxon signed-rank test showed that the latency and amplitude of the P300 peak point did not differ significantly in the VR and AR environments (p > 0.05).
In the analysis above, each subject’s mean target ERP was compared between VR and AR. Next, we divided the targets into the seven directions (forward, up, down, right, right turn, left, and left turn) and analyzed the ERP signal for each direction.
Figure 8a shows the averaged ERP for the seven directions between 0 and 1000 ms after the stimulus. The ERP signals of the 17 subjects were averaged for each direction in VR and AR.
As observed, prominent peaks appeared at approximately 0.4 s for both VR and AR. To examine the difference in peak amplitude between the two environments, we extracted the mean amplitude from 0.3 to 0.5 s in each direction and conducted a statistical test (Wilcoxon rank-sum test). However, we found no statistically significant difference between the AR and VR peak amplitudes in any direction (p > 0.05, FDR corrected [44,45]).
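The per-direction comparison corresponds roughly to the following sketch: one rank-sum test per direction, followed by Benjamini–Hochberg FDR correction of the seven p-values [44,45]. The input layout and the hand-rolled BH routine are illustrative assumptions; any standard FDR implementation would serve.

```python
import numpy as np
from scipy import stats

def fdr_bh(p_values):
    """Benjamini-Hochberg adjusted p-values."""
    p = np.asarray(p_values, float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order] * m / (np.arange(m) + 1)
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]   # enforce monotonicity
    out = np.empty_like(p)
    out[order] = np.clip(adjusted, 0, 1)
    return out

def direction_tests(amp_vr, amp_ar):
    """amp_vr, amp_ar: dict {direction: per-subject mean amplitude, 0.3-0.5 s}."""
    directions = list(amp_vr)
    p_raw = [stats.ranksums(amp_vr[d], amp_ar[d]).pvalue for d in directions]
    p_adj = fdr_bh(p_raw)
    return dict(zip(directions, p_adj))      # FDR-corrected p-value per direction
```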
Furthermore, we pooled all trials from all subjects and calculated the direction-based accuracy over all subjects. Figure 8b presents the results. Overall, direction-based accuracies ranged from 75% to 95% for VR and from 80% to 100% for AR. However, no statistically significant difference was found between the two environments (p > 0.05).

3.2. User Experience in AR and VR

We also evaluated the users’ experience using the answers from the two questionnaires in Table 2, alongside any difference associated with VR or AR preference and 3D sickness.

3.2.1. Satisfaction

To compare the users’ satisfaction in VR and AR, seven items (time length, program, environment, interest, difficulty, immersion, and feeling of self-control) were evaluated. Each item was scored from one to five points for both VR and AR, with three points indicating a moderate or neutral response. The mean score and standard deviation for the seven items are shown in Figure 9. The Wilcoxon signed-rank test showed that the users’ experience on the seven items did not differ significantly between the VR and AR environments (p > 0.05).

3.2.2. Accuracy According to Preference

The mean accuracy did not differ significantly between VR and AR in Section 3.1.1. However, the accuracy differed slightly depending upon the subjects’ preferred environment. Figure 10 shows the mean accuracy of each group according to preference. The mean accuracy in the group that preferred VR (n = 7) was 93.1% in VR and 89.33% in AR, i.e., slightly higher in VR than in AR. Likewise, the group that preferred AR (n = 10) showed greater accuracy in AR (90.67%) than in VR (85.48%). However, the statistical test showed no significant difference in either group (VR preferred: p = 0.23336, AR preferred: p = 0.58091).

3.2.3. User’s Self-Predicted Performance

BCI users’ self-prediction of their accuracy is useful information for understanding BCI performance variation [46,47,48,49], and there is typically a high positive correlation between self-predicted BCI performance and actual classification accuracy [47]. Similarly, in this drone control experiment in VR and AR, the subjects were asked to predict their overall accuracy after the experiment. Figure 11 presents the predicted and actual accuracies averaged over the two environments. Interestingly, we also found that the self-predicted accuracy was correlated highly positively with the actual classification accuracy (r = 0.68, p < 0.05).
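For reference, the correlation reported here can be computed as in the sketch below, assuming Pearson’s r between the questionnaire’s 1–10 self-prediction and the actual accuracy; the rescaling to percent is only cosmetic and does not change r.

```python
from scipy import stats

def predicted_vs_actual(predicted_scores, actual_accuracy):
    """predicted_scores: 1-10 questionnaire answers; actual_accuracy: percent,
    averaged over VR and AR, in the same subject order."""
    predicted_pct = [10.0 * s for s in predicted_scores]   # 1-10 -> 10-100 %
    r, p = stats.pearsonr(predicted_pct, actual_accuracy)  # assumed Pearson's r
    return r, p
```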

4. Discussion

In this study, we demonstrated a P300 BCI drone control application for VR and AR environments and showed that the two environments did not differ significantly in performance (classification accuracy and ERP patterns) and user experience (time length, program, environment, interest, difficulty, immersion, and feeling of self-control).
We extended the conventional 2D-based BCI experiment to a 3D-based BCI experiment in both virtual reality (VR) and augmented reality (AR) using an HMD. A wearable HMD-based BCI is promising as it provides a more user-friendly BCI system for hands-free interaction with real and virtual objects in AR and VR environments [50]. These 3D environments can be highly interactive and can provide a suitable way to alleviate the limitations of BCI control, such as slow and error-prone interactions and limited degrees of freedom [51]. Beyond the typical BCI paradigm, we developed a drone control application that can be used in real situations. This application can improve the level of freedom and offer the possibility of an immersive scenario through induced illusions of an artificially perceived reality, which can be used not only in basic BCI studies but also in many other fields of application [52]. We also used dry rather than wet EEG electrodes because they are easier to use in real situations, which makes the system more user-friendly.
In BCI studies, the user’s experience is one of the most important aspects of the system’s feasibility. We compared the two environments using not only performance but also users’ experience in a real-world environment and all participants showed a positive response in both VR and AR. This confirms the feasibility of applications available in both VR and AR with respect to users’ satisfaction as well as performance.
This work encourages more exciting and promising new research topics to further develop the association between BCI and mixed reality, which uses both VR and AR. In addition to the P300 paradigm, other paradigms, such as motor imagery and SSVEP, can also be tested in a mobile drone control system in VR and AR for more practical applications.
Both VR and AR offer immersive and interactive effects but even when the same HMD is used, some elements such as UI and contents can lead to different user satisfactions. Thus, we investigated the participants’ preferred environment as well as their satisfaction.
Although the subjects expressed the same positive opinion about the two environments, they had their own preferred environment and corresponding reasons. The reasons why some preferred VR were “interest in a virtual environment” and “feeling of playing the game”, while the reasons why others preferred AR were “strong immersion due to the AR”, “controlling of real drone”, and “liveliness of real time video”.
Section 3.2.2 showed that classification accuracy was slightly higher in the subjects’ preferred environment. However, there was no statistically significant difference between VR and AR accuracy in either the group that preferred VR (n = 7) or the group that preferred AR (n = 10). In general, a small sample size can influence statistical tests’ significance [53] and a larger sample increases reliability and minimizes the effect of measurement error [54]. Thus, we expect that more data would help us draw a solid conclusion about the relation between environment preference and BCI performance. Furthermore, there is another possible scenario: subjects may prefer the environment in which they believe their accuracy is higher, i.e., perceived accuracy may affect environment preference. However, in this study, we surveyed the subjects’ environment preference after the experiment. One possible approach to check whether there is bias attributable to the experience (regarding accuracy) would be to provide the VR and AR experience without classification feedback and conduct a post-experimental survey. In the future, we will consider this when designing an experiment.
Now we discuss the potential limitations and issues with respect to user-friendly BCI applications. As in traditional ERP-based systems, this drone control application requires a training session before use. During that session, a classifier for the online session is constructed by applying a supervised algorithm to the user’s ERP signals [55]. However, throughout this process, the subject is instructed to focus on a specific direction button without any meaningful interaction. This calibration process is one of the major factors that limit current BCI systems’ progress [56]. Therefore, to achieve a practical BCI system, we need to develop a BCI application that requires no calibration.
Various studies have been conducted to minimize or reduce the calibration process. Lotte et al. proposed regularized canonical correlation analysis combined with a regularized linear discriminant analysis in 2009 [57], and in 2011, Rivet et al. proposed an adaptive training session to reduce the calibration time [58]. Recently, the convolutional neural network (CNN) has been proven to be useful in optimizing EEG classification [59,60] because of the model’s ability to capture local features, which should be sufficiently robust to data shifts [61]. An ideal calibration-free algorithm should follow one of two paths: (1) determine a set of generalized features that do not differ among subjects or (2) store various forms of P300 signals for comparison [56]. In the future, we will investigate various models (such as the CNN, random forest, and ensemble classifier) that fit our system and collect a large sample of ERP data in VR and AR. Ultimately, we will develop a user-friendly BCI drone control application that requires no training session.
Another aspect to improve in our experiment is to include an evaluation of the drone’s free flight driven by the user’s own intention. In this experiment, the subjects had to move the drone according to designated instructions so that their accuracy could be tested. If the result of a user’s P300 classification matched the specified instruction, the drone moved in that direction; otherwise, it did not. In fact, controlling the drone in the direction that the user wants it to go is one of the important points of this system. Therefore, in the future, we need to test the drone’s free-flight performance driven by users’ intention.
The last issues are related to the basic concept of this study. Many P300 BCI applications have been implemented in a desktop environment (2D monitor screen). Hence, a desktop-based BCI application may provide a different user experience. In this study, we only compared VR with AR. Thus, investigating any differences in user feedback across the three environments would provide insightful information that may be helpful in designing a new BCI application. Furthermore, different control paradigms (e.g., SSVEP or motor imagery) may provide different experiences as well. Considering that the methods used to issue commands vary somewhat across control paradigms, VR and AR should be tested with different BCI types as well. Finally, we recruited 20 subjects and used data from only 17. Although this is not a small sample, more data will help strengthen the results, particularly for group analysis (VR/AR preference, 3D sickness, etc.). In the future, we will consider these issues in further studies.

5. Conclusions

We developed a P300 BCI drone control application that works in both VR and AR. To examine the difference between and influence of the two environments, an online experiment was conducted with 20 subjects and their opinions were collected using a questionnaire. We found that the subjects controlled the drone well in both the VR and AR environments and expressed the same positive opinions about both. These results demonstrate that VR and AR did not differ significantly in performance (including ERP patterns) or user experience. Thus, we conclude that the P300 BCI paradigm is relatively stable and works well in various situations.

Author Contributions

Conceptualization, S.K. (Soram Kim), S.L., H.K., S.K. (Sion Kim) and M.A.; methodology, S.K. (Soram Kim), S.L., H.K., S.K. (Sion Kim) and M.A.; software, S.K. (Soram Kim), S.L., H.K. and S.K. (Sion Kim); validation, S.K. (Soram Kim), S.L., H.K., S.K. (Sion Kim) and M.A.; formal analysis, S.K. (Soram Kim), S.L., H.K. and S.K. (Sion Kim); investigation, S.K. (Soram Kim), S.L., H.K., S.K. (Sion Kim) and M.A.; resources, M.A.; data curation, S.K. (Soram Kim), S.L., H.K. and S.K. (Sion Kim); writing—original draft preparation, S.K. (Soram Kim) and M.A.; writing—review and editing, S.K. (Soram Kim) and M.A.; visualization, S.K. (Soram Kim) and M.A.; supervision, M.A.; project administration, M.A.; funding acquisition, M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Research Foundation of Korea (NRF) grant (numbers 2019R1F1A1058844 and 2021R1I1A3060828) and the National Program for Excellence in Software at the Handong Global University (number 2017-0-00130).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and was approved by the Public Institutional Bioethics Committee designated by the Ministry of Health and Welfare (P01-201812-11-004).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

We are grateful for the generous participation of our subjects in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wolpaw, J.; Wolpaw, E.W. Brain-Computer Interfaces: Principles and Practice; Oxford University Press: Oxford, UK, 2012; ISBN 978-0-19-992148-5. [Google Scholar]
  2. Si-Mohammed, H.; Petit, J.; Jeunet, C.; Argelaguet, F.; Spindler, F.; Évain, A.; Roussel, N.; Casiez, G.; Lecuyer, A. Towards BCI-Based Interfaces for Augmented Reality: Feasibility, Design and Evaluation. IEEE Trans. Vis. Comput. Graph. 2020, 26, 1608–1621. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Fazel-Rezai, R.; Allison, B.Z.; Guger, C.; Sellers, E.W.; Kleih, S.C.; Kübler, A. P300 Brain Computer Interface: Current Challenges and Emerging Trends. Front. Neuroeng. 2012, 5, 14. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Farwell, L.A.; Donchin, E. Talking off the Top of Your Head: Toward a Mental Prosthesis Utilizing Event-Related Brain Potentials. Electroencephalogr. Clin. Neurophysiol. 1988, 70, 510–523. [Google Scholar] [CrossRef]
  5. Cattan, G.; Andreev, A.; Visinoni, E. Recommendations for Integrating a P300-Based Brain–Computer Interface in Virtual Reality Environments for Gaming: An Update. Computers 2020, 9, 92. [Google Scholar] [CrossRef]
  6. Aamer, A.; Esawy, A.; Swelam, O.; Nabil, T.; Anwar, A.; Eldeib, A. BCI Integrated with VR for Rehabilitation. In Proceedings of the 2019 31st International Conference on Microelectronics (ICM), Cairo, Egypt, 15–18 December 2019; pp. 166–169. [Google Scholar]
  7. Mercado, J.; Espinosa-Curiel, I.; Escobedo, L.; Tentori, M. Developing and Evaluating a BCI Video Game for Neurofeedback Training: The Case of Autism. Multimed. Tools Appl. 2019, 78, 13675–13712. [Google Scholar] [CrossRef]
  8. McMahon, M.; Schukat, M. A Low-Cost, Open-Source, BCI-VR Prototype for Real-Time Signal Processing of EEG to Manipulate 3D VR Objects as a Form of Neurofeedback. In Proceedings of the 2018 29th Irish Signals and Systems Conference (ISSC), Belfast, UK, 21–22 June 2018; pp. 1–6. [Google Scholar]
  9. Rohani, D.A.; Puthusserypady, S. BCI inside a Virtual Reality Classroom: A Potential Training Tool for Attention. EPJ Nonlinear Biomed. Phys. 2015, 3, 12. [Google Scholar] [CrossRef] [Green Version]
  10. Ali, A.; Puthusserypady, S. A 3D Learning Playground for Potential Attention Training in ADHD: A Brain Computer Interface Approach. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 67–70. [Google Scholar]
  11. Arpaia, P.; De Benedetto, E.; Donato, N.; Duraccio, L.; Moccaldi, N. A Wearable SSVEP BCI for AR-Based, Real-Time Monitoring Applications. In Proceedings of the 2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Lausanne, Switzerland, 23–25 June 2021; pp. 1–6. [Google Scholar]
  12. Arpaia, P.; Duraccio, L.; Moccaldi, N.; Rossi, S. Wearable Brain–Computer Interface Instrumentation for Robot-Based Rehabilitation by Augmented Reality. IEEE Trans. Instrum. Meas. 2020, 69, 6362–6371. [Google Scholar] [CrossRef]
  13. Park, S.; Cha, H.; Kwon, J.; Kim, H.; Im, C. Development of an Online Home Appliance Control System Using Augmented Reality and an SSVEP-Based Brain-Computer Interface. In Proceedings of the 2020 8th International Winter Conference on Brain-Computer Interface (BCI), Gangwon, Korea, 26–28 February 2020; pp. 1–2. [Google Scholar]
  14. Wang, M.; Li, R.; Zhang, R.; Li, G.; Zhang, D. A Wearable SSVEP-Based BCI System for Quadcopter Control Using Head-Mounted Device. IEEE Access 2018, 6, 26789–26798. [Google Scholar] [CrossRef]
  15. Kerous, B.; Liarokapis, F. BrainChat—A Collaborative Augmented Reality Brain Interface for Message Communication. In Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), Nantes, France, 9–13 October 2017; pp. 279–283. [Google Scholar]
  16. Zeng, H.; Wang, Y.; Wu, C.; Song, A.; Liu, J.; Ji, P.; Xu, B.; Zhu, L.; Li, H.; Wen, P. Closed-Loop Hybrid Gaze Brain-Machine Interface Based Robotic Arm Control with Augmented Reality Feedback. Front. Neurorobot. 2017, 11, 60. [Google Scholar] [CrossRef] [Green Version]
  17. Tidoni, E.; Abu-Alqumsan, M.; Leonardis, D.; Kapeller, C.; Fusco, G.; Guger, C.; Hintermüller, C.; Peer, A.; Frisoli, A.; Tecchia, F.; et al. Local and Remote Cooperation With Virtual and Robotic Agents: A P300 BCI Study in Healthy and People Living with Spinal Cord Injury. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 1622–1632. [Google Scholar] [CrossRef]
  18. Borges, L.R.; Martins, F.R.; Naves, E.L.M.; Bastos, T.F.; Lucena, V.F. Multimodal System for Training at Distance in a Virtual or Augmented Reality Environment for Users of Electric-Powered Wheelchairs. IFAC Pap. 2016, 49, 156–160. [Google Scholar] [CrossRef]
  19. Bi, L.; Fan, X.; Luo, N.; Jie, K.; Li, Y.; Liu, Y. A Head-Up Display-Based P300 Brain–Computer Interface for Destination Selection. IEEE Trans. Intell. Transp. Syst. 2013, 14, 1996–2001. [Google Scholar] [CrossRef]
  20. Martens, N.; Jenke, R.; Abu-Alqumsan, M.; Kapeller, C.; Hintermüller, C.; Guger, C.; Peer, A.; Buss, M. Towards Robotic Re-Embodiment Using a Brain-and-Body-Computer Interface. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 5131–5132. [Google Scholar]
  21. Takano, K.; Hata, N.; Kansaku, K. Towards Intelligent Environments: An Augmented Reality–Brain–Machine Interface Operated with a See-Through Head-Mount Display. Front. Neurosci. 2011, 5, 60. [Google Scholar] [CrossRef] [Green Version]
  22. Faller, J.; Leeb, R.; Pfurtscheller, G.; Scherer, R. Avatar Navigation in Virtual and Augmented Reality Environments Using an SSVEP BCI. In Proceedings of the 1st International Conference on Apllied Bionics and Biomechanics (ICABB-2010), Venice, Italy, 14–16 October 2010; pp. 1–4. [Google Scholar]
  23. Lenhardt, A.; Ritter, H. An Augmented-Reality Based Brain-Computer Interface for Robot Control. In Neural Information Processing. Models and Applications; Wong, K.W., Mendis, B.S.U., Bouzerdoum, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 58–65. [Google Scholar]
  24. Ke, Y.; Liu, P.; An, X.; Song, X.; Ming, D. An Online SSVEP-BCI System in an Optical See-through Augmented Reality Environment. J. Neural Eng. 2020, 17, 016066. [Google Scholar] [CrossRef] [PubMed]
  25. Barrera, A. Advances in Robot Navigation; BoD–Books on Demand: Norderstedt, Germany, 2011; ISBN 978-953-307-346-0. [Google Scholar]
  26. Woo, S.; Lee, J.; Kim, H.; Chun, S.; Lee, D.; Gwon, D.; Ahn, M. An Open Source-Based BCI Application for Virtual World Tour and Its Usability Evaluation. Front. Hum. Neurosci. 2021, 18, 388. [Google Scholar] [CrossRef]
  27. Finke, A.; Lenhardt, A.; Ritter, H. The MindGame: A P300-Based Brain–Computer Interface Game. Neural Netw. 2009, 22, 1329–1333. [Google Scholar] [CrossRef]
  28. Winograd, M.R.; Rosenfeld, J.P. Mock Crime Application of the Complex Trial Protocol (CTP) P300-Based Concealed Information Test. Psychophysiol. 2011, 48, 155–161. [Google Scholar] [CrossRef]
  29. He, S.; Zhang, R.; Wang, Q.; Chen, Y.; Yang, T.; Feng, Z.; Zhang, Y.; Shao, M.; Li, Y. A P300-Based Threshold-Free Brain Switch and Its Application in Wheelchair Control. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 715–725. [Google Scholar] [CrossRef]
  30. Wu, Z.; Lai, Y.; Xia, Y.; Wu, D.; Yao, D. Stimulator Selection in SSVEP-Based BCI. Med. Eng. Phys. 2008, 30, 1079–1088. [Google Scholar] [CrossRef] [PubMed]
  31. Chabuda, A.; Durka, P.; Żygierewicz, J. High Frequency SSVEP-BCI With Hardware Stimuli Control and Phase-Synchronized Comb Filter. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 344–352. [Google Scholar] [CrossRef]
  32. Renard, Y.; Lotte, F.; Gibert, G.; Congedo, M.; Maby, E.; Delannoy, V.; Bertrand, O.; Lécuyer, A. OpenViBE: An Open-Source Software Platform to Design, Test, and Use Brain–Computer Interfaces in Real and Virtual Environments. Presence Teleoperators Virtual Environ. 2010, 19, 35–53. [Google Scholar] [CrossRef] [Green Version]
  33. Unity at 10: For Better—Or Worse—Game Development Has Never Been Easier|Ars Technica. Available online: https://arstechnica.com/gaming/2016/09/unity-at-10-for-better-or-worse-game-development-has-never-been-easier/ (accessed on 23 March 2021).
  34. Komori, A. Comoc/TelloForUnity. 2020. Available online: https://github.com/comoc/TelloForUnity (accessed on 26 March 2021).
  35. BCILab. AhnBCILab/BCI_Drone. 2021. Available online: https://github.com/AhnBCILab/BCI_Drone (accessed on 26 March 2021).
  36. Scipy. Signal. Butter—SciPy v1.6.1 Reference Guide. Available online: https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.butter.html (accessed on 22 March 2021).
  37. Lotte, F.; Congedo, M.; Lécuyer, A.; Lamarche, F.; Arnaldi, B.A. Review of Classification Algorithms for EEG-Based Brain–Computer Interfaces. J. Neural Eng. 2007, 4, R1–R13. [Google Scholar] [CrossRef]
  38. Krusienski, D.J.; Sellers, E.W.; Cabestaing, F.; Bayoudh, S.; McFarland, D.J.; Vaughan, T.M.; Wolpaw, J.R. A Comparison of Classification Techniques for the P300 Speller. J. Neural Eng. 2006, 3, 299–305. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Polich, J. On the Relationship between EEG and P300: Individual Differences, Aging, and Ultradian Rhythms. Int. J. Psychophysiol. 1997, 26, 299–317. [Google Scholar] [CrossRef]
  40. DSI VR300. Available online: https://wearablesensing.com/products/vr300/ (accessed on 29 March 2021).
  41. Tello. Available online: https://www.ryzerobotics.com/kr/tello (accessed on 29 March 2021).
  42. Carlson, S.R.; Katsanis, J.; Iacono, W.G.; Mertz, A.K. Substance Dependence and Externalizing Psychopathology in Adolescent Boys with Small, Average, or Large P300 Event-Related Potential Amplitude. Psychophysiology 1999, 36, 583–590. [Google Scholar] [CrossRef] [PubMed]
  43. Baykara, E.; Ruf, C.A.; Fioravanti, C.; Käthner, I.; Simon, N.; Kleih, S.C.; Kübler, A.; Halder, S. Effects of Training and Motivation on Auditory P300 Brain–Computer Interface Performance. Clin. Neurophysiol. 2016, 127, 379–387. [Google Scholar] [CrossRef]
  44. Benjamini, Y.; Hochberg, Y. Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. J. R. Stat. Soc. Ser. B Methodol. 1995, 57, 289–300. [Google Scholar] [CrossRef]
  45. Genovese, C.R.; Lazar, N.A.; Nichols, T. Thresholding of Statistical Maps in Functional Neuroimaging Using the False Discovery Rate. NeuroImage 2002, 15, 870–878. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  46. Won, K.; Kwon, M.; Jang, S.; Ahn, M.; Jun, S.C. P300 Speller Performance Predictor Based on RSVP Multi-Feature. Front. Hum. Neurosci. 2019, 13, 261. [Google Scholar] [CrossRef] [Green Version]
  47. Ahn, M.; Cho, H.; Ahn, S.; Jun, S.C. User’s Self-Prediction of Performance in Motor Imagery Brain–Computer Interface. Front. Hum. Neurosci. 2018, 12, 59. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  48. Halder, S.; Hammer, E.M.; Kleih, S.C.; Bogdan, M.; Rosenstiel, W.; Birbaumer, N.; Kübler, A. Prediction of Auditory and Visual P300 Brain-Computer Interface Aptitude. PLoS ONE 2013, 8, e53513. [Google Scholar] [CrossRef]
  49. Zhang, Y.; Xu, P.; Guo, D.; Yao, D. Prediction of SSVEP-Based BCI Performance by the Resting-State EEG Network. J. Neural Eng. 2013, 10, 066017. [Google Scholar] [CrossRef] [Green Version]
  50. Lin, C.-T.; Do, T.-T.N. Direct-Sense Brain–Computer Interfaces and Wearable Computers. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 298–312. [Google Scholar] [CrossRef]
  51. Lotte, F.; Faller, J.; Guger, C.; Renard, Y.; Pfurtscheller, G.; Lécuyer, A.; Leeb, R. Combining BCI with Virtual Reality: Towards New Applications and Improved BCI. In Towards Practical Brain-Computer Interfaces: Bridging the Gap from Research to Real-World Applications; Biological and Medical Physics, Biomedical Engineering; Allison, B.Z., Dunne, S., Leeb, R., Del R. Millán, J., Nijholt, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 197–220. ISBN 978-3-642-29746-5. [Google Scholar]
  52. Putze, F.; Vourvopoulos, A.; Lécuyer, A.; Krusienski, D.; Bermúdez i Badia, S.; Mullen, T.; Herff, C. Editorial: Brain-Computer Interfaces and Augmented/Virtual Reality. Front. Hum. Neurosci. 2020, 14, 144. [Google Scholar] [CrossRef]
  53. Blair, R.C.; Higgins, J.J. A Comparison of the Power of the Paired Samples Rank Transform Statistic to That of Wilcoxon’s Signed Ranks Statistic. J. Educ. Stat. 1985, 10, 368–383. [Google Scholar] [CrossRef]
  54. Kanyongo, G.; Brook, G.; Kyei-Blankson, L.; Gocmen, G. Reliability and Statistical Power: How Measurement Fallibility Affects Power and Required Sample Sizes for Several Parametric and Nonparametric Statistics. J. Mod. Appl. Stat. Methods 2007, 6, 9. [Google Scholar] [CrossRef]
  55. Lotte, F.; Bougrain, L.; Clerc, M. Electroencephalography (EEG)-Based Brain-Computer Interfaces. In Wiley Encyclopedia of Electrical and Electronics Engineering; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2015; p. 44. [Google Scholar] [CrossRef] [Green Version]
  56. Lee, J.; Won, K.; Kwon, M.; Jun, S.C.; Ahn, M. CNN with Large Data Achieves True Zero-Training in Online P300 Brain-Computer Interface. IEEE Access 2020, 8, 74385–74400. [Google Scholar] [CrossRef]
  57. Lotte, F.; Guan, C. An Efficient P300-Based Brain-Computer Interface with Minimal Calibration Time. In Proceedings of the Assistive Machine Learning for People with Disabilities symposium (NIPS’09 Symposium), Vancouver, BC, Canada, 10 December 2009. [Google Scholar]
  58. Rivet, B.; Cecotti, H.; Perrin, M.; Maby, E.; Mattout, J. Adaptive Training Session for a P300 Speller Brain–Computer Interface. J. Physiol. Paris 2011, 105, 123–129. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  59. Schirrmeister, R.T.; Springenberg, J.T.; Fiederer, L.D.J.; Glasstetter, M.; Eggensperger, K.; Tangermann, M.; Hutter, F.; Burgard, W.; Ball, T. Deep Learning with Convolutional Neural Networks for EEG Decoding and Visualization. Hum. Brain Mapp. 2017, 38, 5391–5420. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  60. Rashid, M.; Sulaiman, N.; PP Abdul Majeed, A.; Musa, R.M.; Ab. Nasir, A.F.; Bari, B.S.; Khatun, S. Current Status, Challenges, and Possible Solutions of EEG-Based Brain-Computer Interface: A Comprehensive Review. Front. Neurorobot. 2020, 14, 25. [Google Scholar] [CrossRef] [PubMed]
  61. Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A Compact Convolutional Neural Network for EEG-Based Brain–Computer Interfaces. J. Neural Eng. 2018, 15, 056013. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. The structure of the drone control system overall.
Figure 2. Signal processing procedure.
Figure 3. UI of the system developed. (a) Button interface for the training session. (b) Button interface for the online session. (c) VR content to control a virtual drone. Desert map is shown. (d) All four maps (park, desert, maze, and forest). (e) AR content to control a real drone. Button interface is shown. (f) Real video in the HMD streamed from the drone.
Figure 4. Images of a drone used in this system: (a) a drone used in the virtual environment and (b) a drone used (DJI-Tello) in the real environment.
Figure 5. Experimental view. (a) A subject wearing an HMD is operating the drone control system. (b) A real drone (red box) is being controlled by a subject (blue box) through the AR drone control system.
Figure 6. Experimental procedure. Half of the participants engaged in the online sessions in the order of AR followed by VR and the other half engaged in the order of VR followed by AR.
Figure 7. Mean accuracy and standard deviation are presented for VR (blue), AR (orange), and the mean of VR and AR (green). No significant difference was observed (p > 0.05).
Figure 8. (a) ERP signal (n = 17) averaged over seven directions (forward, up, down, right, right turn, left, and left turn) in VR (blue line) and AR (orange line) environments. Standard deviations are presented with areas. (b) All subjects’ direction-based accuracy.
Figure 9. Users’ satisfaction with VR and AR.
Figure 10. Accuracy according to the environment preferred.
Figure 11. Actual accuracy and users’ self-predicted accuracy. The circle size represents the number of subjects at that point. The dashed line is the y = x line and the solid red line is the regression line on the given points.
Table 1. Overview of previous VR-BCI and AR-BCI studies. Motor imagery (MI) and steady-state visual-evoked potential (SSVEP).

Environment | Application Contents | Study | BCI Paradigm | Display Type (AR Type)
VR | Post-stroke rehabilitation | Aamer et al. [6] | MI | HMD
VR | Attention training | Mercado et al. [7] | Neurofeedback | HMD
VR | BCI system | McMahon et al. [8] | MI | HMD
VR | Attention training | Rohani et al. [9] | P300 | HMD
VR | Attention training | Ali et al. [10] | SSVEP | HMD
AR | Real-time monitoring applications | Arpaia et al. [11] | SSVEP | HMD/OST
AR | Robot-based rehabilitation | Arpaia et al. [12] | SSVEP | HMD/OST
AR | Robot control | Si-Mohammed et al. [2] | SSVEP | HMD/OST
AR | Home appliance control | Park et al. [13] | SSVEP | HMD/OST
AR | Quadcopter control | Wang et al. [14] | SSVEP | HMD/VST
AR | Communication | Kerous et al. [15] | P300 | HMD/VST
AR | Robotic arm control | Zeng et al. [16] | MI | CS/VST
AR | Robot control | Tidoni et al. [17] | P300 | HMD/VST
AR | Wheelchair control | Borges et al. [18] | SSVEP | HMD/VST
AR | Feasibility study | Bi et al. [19] | P300 | Head-up display
AR | Robot control | Martens et al. [20] | P300, SSVEP | HMD/VST
AR | Light and TV control | Takano et al. [21] | P300 | HMD/OST
AR | Robot control | Faller et al. [22] | SSVEP | HMD/VST
AR | Robotic arm control | Lenhardt et al. [23] | P300 | HMD/VST
Table 2. Pre- and post-questionnaire items.

Questionnaire | Question | Answer Format
Pre | Do you have a mental disease? | Yes or No
Pre | Have you ever participated in a BCI experiment? | Yes or No
Pre | Have you ever experienced AR or VR contents? | Yes or No
Pre | Have you ever experienced 3D motion sickness? | Yes or No
Pre | Did you sleep well for more than 6 h? | Yes or No
Pre | Did you drink coffee within 24 h? | Yes or No
Pre | Did you drink alcohol within 24 h? | Yes or No
Pre | Did you smoke within 24 h? | Yes or No
Pre | Evaluate your physical condition. | 1 to 5 (good)
Pre | Evaluate your mental condition. | 1 to 5 (good)
Post (VR and AR) | Evaluate the playing time (time). | 1 to 5 (long)
Post (VR and AR) | Evaluate how you feel about this application (program). | 1 to 5 (excited)
Post (VR and AR) | Evaluate the comfort of surroundings (environment). | 1 to 5 (good)
Post (VR and AR) | Were you interested in the application (interest)? | 1 to 5 (interested)
Post (VR and AR) | Was the application difficult (difficulty)? | 1 to 5 (easy)
Post (VR and AR) | Evaluate the immersiveness of the application (immersion). | 1 to 5 (high)
Post (VR and AR) | Evaluate the ability to control the drone (control). | 1 to 5 (high)
Post (VR and AR) | Did you feel 3D motion sickness? | Yes or No
Post (VR and AR) | Please predict your performance. | 1 to 10 (high)
Post (VR and AR) | What do you prefer, VR or AR? | VR or AR
Table 3. Results of the experiment.

Subject No. | Accuracy VR (%) | Accuracy AR (%) | Accuracy Mean (%) | 3D Sickness VR | 3D Sickness AR | Preference
S1 | 90.00 | 60.00 | 75.00 | N | N | VR
S2 | 66.67 | 80.00 | 73.33 | Y | N | AR
S3 | 95.00 | 85.00 | 90.00 | N | N | VR
S4 | 80.00 | 73.33 | 76.67 | N | N | AR
S5 | 80.00 | 66.67 | 73.33 | N | Y | AR
S6 | 100.00 | 100.00 | 100.00 | N | N | AR
S8 | 100.00 | 100.00 | 100.00 | N | N | AR
S9 | 73.33 | 86.67 | 80.00 | N | N | AR
S10 | 73.33 | 93.33 | 83.33 | N | N | VR
S11 | 93.33 | 100.00 | 96.67 | N | N | VR
S14 | 100.00 | 73.33 | 86.67 | N | N | VR
S15 | 100.00 | 100.00 | 100.00 | N | N | AR
S16 | 100.00 | 93.33 | 96.67 | Y | Y | VR
S17 | 100.00 | 100.00 | 100.00 | N | N | AR
S18 | 100.00 | 93.33 | 96.67 | N | Y | VR
S19 | 93.33 | 100.00 | 96.67 | N | N | AR
S20 | 100.00 | 100.00 | 100.00 | N | N | AR
Table 4. Comparison of the ERP in VR and AR. Mean and standard deviation (SD) are presented at the bottom.

Subject No. | Latency VR (ms) | Latency AR (ms) | Amplitude VR (μV) | Amplitude AR (μV)
S1 | 410 | 420 | 5.65 | 6.05
S2 | 390 | 410 | 3.7 | 2.16
S3 | 390 | 380 | 2.72 | 4.97
S4 | 420 | 410 | 2.49 | 2.54
S5 | 440 | 430 | 9.05 | 5.28
S6 | 460 | 440 | 4.35 | 7.72
S8 | 420 | 410 | 3.8 | 4.74
S9 | 420 | 420 | 7.19 | 5.62
S10 | 400 | 400 | 2.25 | 2.94
S11 | 430 | 430 | 2.94 | 3.09
S14 | 370 | 340 | 4.48 | 9.01
S15 | 400 | 400 | 3.17 | 3.1
S16 | 420 | 430 | 1.47 | 2.24
S17 | 410 | 400 | 3.89 | 2.76
S18 | 450 | 430 | 3.59 | 2.86
S19 | 400 | 410 | 22.85 | 37.05
S20 | 440 | 440 | 2.88 | 3.53
Mean | 415.88 | 411.76 | 5.09 | 6.22
SD | 22.77 | 23.82 | 4.79 | 7.94
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
