BrainChat - A Collaborative Augmented Reality Brain Interface for Message Communication
Bojan Kerous* Fotis Liarokapis†

Human Computer Interaction Laboratory
Faculty of Informatics
Masaryk University

Brno, Czech Republic

ABSTRACT

This paper presents BrainChat, an augmented reality based multi-user concept for brain-computer interfaces. The goal is to provide seamless communication based only on thoughts. A working prototype is presented, which demonstrates two-person textual communication based on non-invasive brain-computer interfaces. Design choices are discussed and directions for future work are provided, considering the relevant research directions in Brain-Computer Interfaces based on Electroencephalography.

Index Terms: H.5.2 [User Interfaces]: Interaction styles—Graphical user interfaces (GUI)

1 INTRODUCTION

Brain-Computer Interfaces (BCIs) can be defined as an artificial interface between a person's brain and a computer that does not require physical movement to exercise control [1]. There are numerous brain imaging technologies, differing in degree of invasiveness, temporal and spatial resolution, and setup requirements. Of these, Electroencephalography (EEG) stands out as a relatively inexpensive alternative that is comparatively easy to set up and has good temporal resolution [2].

An EEG-based BCI requires a set of electrodes placed on the user's scalp, usually with a conductive gel or saline solution to improve electrical contact, although dry electrodes have been shown to be feasible as well. EEG-based BCIs have seen use in medical applications [3] (to facilitate neurological rehabilitation), in assistive technologies [4] (wheelchair navigation, computer interaction, robotics), and in games research [5].

In spite of significant strides, EEG-based devices still face many technical issues, such as heavy signal contamination from ambient electrical noise, user-induced artifacts (blinks, movement), and electrode insufficiencies. Additionally, the user is typically required to complete a classifier calibration session prior to the online session, and often a user training session as well, in order to learn to modulate their brain activity.

On the other hand, Augmented Reality (AR) is similarly a novel research area where technical advances lead to ever-evolving and innovative interaction designs. The latest head-worn devices (e.g. Google Glass, Microsoft HoloLens) focus on superimposing menu interfaces into the user's field of view, and users rely on gestures and voice recognition to control applications. However, this requires users to make a deliberate effort to control content and as a result poses limitations.

In this paper we present a proof of concept of an AR-based communication BCI system called BrainChat. The goal is to provide

*e-mail: [email protected]
†e-mail: liarokap@fi.muni.cz

communication based only on a person's brain activity. A working prototype is presented, which demonstrates two-person textual communication based on non-invasive brain-computer interfaces.

The rest of the paper is structured as follows. Section 2 describes BCI interaction modalities. Section 3 presents background work, whereas Section 4 explains our motivation. Section 5 illustrates the BrainChat interface, Sections 6 and 7 present discussion and future work respectively, and concluding remarks are given in Section 8.

2 BRAIN COMPUTER INTERFACE INTERACTION MODALITIES

To establish control using a BCI, interface designers are faced with the decision of which brain activity to leverage. Depending on the underlying modality, the BCI may require a stimulus to be presented (typically visually) in order for a brain response to be detected, or may instead require continuous mental effort for the same effect.

The extensively studied mechanisms that depend on stimulus presentation are Event-Related Potentials (such as the P300 potential), where the appearance of a target stimulus triggers a cognitive appraisal that results in a characteristic deflection in the EEG signal, and Evoked Potentials (such as the Steady-State Visual Evoked Potential), which represent natural responses to stable, repetitive stimuli [6].

On the other hand, Motor Imagery (MI) relies on the user's capacity to imagine movement of their limbs, which causes a change in power in the characteristic sensorimotor frequency bands. The overwhelming majority of BCI research tackles one of three staple mechanisms: the P300 (ERP), the Steady-State Visual Evoked Potential (SSVEP), and Motor Imagery (MI). The rationale for using the Event-Related Potential approach over the others is elaborated in Section 4.

A typical P300 spelling scenario features a grid of elements that flash in sets of rows and columns. When the row and the column containing the letter of interest flash, a P300 response is elicited, and the letter at their intersection is spelled. In the SSVEP paradigm, different icons corresponding to different commands (or letters) flash at distinct frequencies. When the user attends to a letter, a power increase at the corresponding frequency can be detected over the visual cortex, which triggers the appropriate command. The MI paradigm is not typically used to issue discrete commands, but to enable continuous, stimulus-independent control.
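To make this selection logic concrete, the sketch below (an illustrative simplification, not the classifier pipeline used in BrainChat; the 6x6 grid layout and the per-flash score values are assumptions) accumulates a classifier score for every row and column flash and spells the letter at the intersection of the best-scoring row and column.

```python
import numpy as np

# Hypothetical 6x6 speller grid: 26 letters plus digits.
GRID = np.array(list("ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890")).reshape(6, 6)

def spell_letter(flashes):
    """Infer one letter from a block of row/column flashes.

    `flashes` is an iterable of (kind, index, score) tuples, where kind is
    "row" or "col", index is the flashed row/column, and score is the P300
    classifier output for the post-flash epoch (e.g. an LDA decision value);
    higher means "more target-like".
    """
    row_scores, col_scores = np.zeros(6), np.zeros(6)
    for kind, index, score in flashes:
        (row_scores if kind == "row" else col_scores)[index] += score
    # The target letter sits at the intersection of the row and column
    # that accumulated the strongest P300 evidence across repetitions.
    return GRID[row_scores.argmax(), col_scores.argmax()]
```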

3 STATE-OF-THE-ART

The research community has taken on the challenge of integrating these innovative interfaces in several studies. An overview of EEG-based BCIs and their present and potential uses in virtual environments and games has recently been presented [7]. Moreover, the authors of [8] presented a table-sized display that highlighted objects placed on its surface. By intermittently flashing these highlights, a user equipped with a BCI device could elicit a P300 response by attending to the object of interest and thereby select it once the highlight under it flashed.


In another ERP (P300) example [9], an AR-based BCI for remote robot control was implemented, where BCI commands (buttons as stimuli) were overlaid on video acquired through the robot's camera and anchored to a marker, while movement was controlled in four directions. This was expanded in [10], where no significant difference was observed between an LCD and a head-mounted display condition. Another ERP study examined the role of the background and the number of markers [11]. In this instance, the camera faced a set of markers indicating the location of the stimuli, while the user was seated before a monitor. The authors experimented with stimuli presented with and without a background and found no difference between the two conditions. A different study [12] proposed an augmented reality P300-based BCI with the stimuli projected on a car windshield in a controlled indoor environment. The stimuli were presented as different driving destinations arranged in a 3x3 grid.

In one study [13], the suitability of different object-highlighting modes was examined with the SSVEP paradigm in an object selection task. The appearance of real-world objects was modified by masks (solid overlay, transparent overlay, inverted colors, and modulated brightness and background) and displayed on an LCD screen. The results suggest that the overlay icon should cover the whole of the object while maintaining a level of its visibility, and that the stimulus should be simple and bright.

A good example of using the MI paradigm alongside the ERP-based approach to establish robot control is seen in [14]. Here, MI was used to define a path according to the user's preferences, by imagining limb movements that corresponded to robot movement, while the P300 modality was used to select between these recorded paths.

4 MOTIVATION

The primary motivation for this research is to establish an experimental platform for multi-user cooperative/competitive scenarios. The benefit of this setup lies in a potentially more engaging system, which would aid both user recruitment and motivation. Our secondary motivation is to work towards an interface applicable to many different mobile and multimodal interaction use cases. This would yield valuable EEG data for offline analysis and indicate favourable design directions. However, given the possibility of using different BCI control strategies, it is important to examine the alternatives with respect to general performance considerations as well as user comfort.

4.1 Performance and reliability

From the perspective of interface performance and reliability, there are numerous intricacies to consider. The performance of the SSVEP paradigm was assessed in [15], where the authors conducted an experiment with 53 participants. They reported that 86.7% of the users were able to achieve 90 to 100% accuracy, with only 3.8% below this level. Compared to similar studies conducted with the MI and P300 paradigms, SSVEP provided the best performance in both classification accuracy and information transfer rate.

Performance of the MI paradigm is generally lower than that of the alternatives. This was examined in [16], where the performance of 99 users was assessed after six minutes of training. Only 6.2% were able to achieve 100% accuracy, while 93.3% achieved above 59%. On the other hand, ERP-based interaction leveraging the P300 presentation scheme was assessed in [17], where 72.8% of 81 users achieved 100% accuracy. These data suggest that ERP interaction is more consistent and capable of accommodating a greater number of potential users. Although SSVEP performed better than the other modalities in these studies, other factors influenced our rationale for leveraging ERP interaction instead.

4.2 User comfort

The objective of current research in both AR and BCI is to achieve an interface that does not inhibit the user's comfort or mobility. Although still severely limited in terms of user comfort, significant progress is being made towards making these innovative interfaces both ergonomically sound and usable across a wider range of human activity, including full-body motion. For EEG this is problematic, since motion artifacts are introduced that degrade the interaction or cause its total breakdown. Indeed, in most BCI studies the user is instructed to sit and keep still for the duration of the experiment. However, there have been some reports of encouraging strides towards a mobile interface. In these studies the ERP is the only interaction modality deemed feasible for mobile users. The P300 was recommended for this purpose as early as 2004 in [18], based on error rates of various modalities reported in the 2003 BCI competition.

To the best of our knowledge, the only study examining the effects of movement on SSVEP performance dates from 2013 [19]. In this offline study, the detectability of the SSVEP response was found to deteriorate with increased walking speed, and the authors suggested the possibility of using higher stimulus flicker frequencies in order to avoid frequency bands saturated with motion artifacts. The authors of [20] examined SSVEP stimulus presentation modalities with respect to display type and the visual characteristics of the stimuli. Although the best results were achieved with LED lights, since they enable higher flashing frequencies (above the perceivable range) than a typical monitor, current augmented reality displays do not provide a sufficiently high refresh rate. Furthermore, since the band of usable frequencies is reduced, the number of commands that can be made available is limited by the display modality.
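To illustrate why the display refresh rate constrains SSVEP design, the short calculation below (an illustrative aside, not part of the BrainChat implementation) lists the flicker frequencies a frame-locked display can render exactly when a stimulus is shown for n frames and hidden for n frames, i.e. f = refresh_rate / (2n); higher-refresh hardware clearly offers a larger set of usable targets.

```python
def renderable_flicker_frequencies(refresh_hz, f_min=6.0, f_max=30.0):
    """Flicker frequencies whose on/off cycle is a whole number of frames.

    A square-wave stimulus visible for n frames and hidden for n frames
    flickers at refresh_hz / (2 * n); only these values can be presented
    without timing jitter on a frame-locked display.
    """
    freqs = []
    n = 1
    while refresh_hz / (2 * n) >= f_min:
        f = refresh_hz / (2 * n)
        if f <= f_max:
            freqs.append(round(f, 2))
        n += 1
    return freqs

print(renderable_flicker_frequencies(60))   # [30.0, 15.0, 10.0, 7.5, 6.0]
print(renderable_flicker_frequencies(240))  # many more usable targets
```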

Evaluation of the P300, on the other hand, was reported in a fully mobile environment in [21]. Good across-subject and across-trial consistency of the P300 interaction modality was reported, with a moderate drop in performance between the sitting and walking conditions (both based on single-trial recordings). Although the authors conducted an offline study, the signal processing can be easily adapted to an online system.

MI is distinct from the alternatives in that it requires both user training and system calibration (usually conducted simultaneously). Additionally, users are required to keep still, since the same brain areas used for movement are leveraged to achieve control. This makes the MI modality inapplicable (at least in real time) to use cases without movement restrictions. In summary, the resilience of the ERP modality to movement artifacts makes it the most viable for free-moving users, as well as for integration with other standard peripherals or manual tasks. Another important consideration is the flexibility of stimulus presentation in the ERP modality. One example is the evolution of the P300 presentation scheme: the canonical presentation of P300 stimuli has changed significantly in recent years, with researchers paying more attention to movement, color, and scale changes and examining how these influence BCI performance. When the stimuli are modified in different ways, more ERP components can be triggered (sensitive to movement, scale, color, relevance), which benefits classification accuracy and provides more flexibility in user interface design than other staples of BCI interaction [22].

5 BRAINCHAT PIPELINE

The goal of the study was to examine communication between two people through an AR display using BCIs. For the case study, a video see-through HMD (HTC Vive) was used with the canonical P300 stimulus presentation scheme. Different AR configurations were tried, such as marker-based registration. However, this caused a number of problems, since the grid would disappear from the user's view and movement was restricted. In the end it was decided to overlay the grid of letters on the video obtained through the HTC Vive front-facing camera, in order to keep the prototype as simple as possible.

Figure 1: Two of the HMDs were connected to two computers that established communication through a local area network.

Figure 2: Architecture of BrainChat

The EEG signal was obtained using two acquisition devices: a NeuroElectrics Enobio 32 and a StarStim 8. A set of 8 electrodes (PO7, PO8, P3, P4, Pz, Cz, Oz, Fz) was placed on the scalp according to the 10/20 system, while the reference electrode was placed on the right earlobe. Signal processing and classification were conducted in OpenVibe, as it provides the most comprehensive device support as well as a simple node-based design process.

The calibration session was conducted by instructing the user to count the number of flashes of the target letter. Calibration consisted of 10 randomly selected letters. All rows and columns flashed in random order 12 times for each letter the user was instructed to spell, with a one-second delay between these 12 repetitions. The flash duration was set to 0.2 seconds, preceded and followed by a 0.1-second delay. The user was given a 3-second delay before the next target letter's block of flashes was initiated. In total, the training session took 10 minutes, including a 30-second pre-training delay. The trial recording was band-pass filtered with a fourth-order Butterworth filter (low cut-off 1 Hz, high cut-off 20 Hz, pass-band ripple 0.5 dB), further decimated by a factor of 4, and used as input to train both the spatial filter and the two-class Linear Discriminant Analysis (LDA) classifier.
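As an illustration of this training step, the sketch below approximates the OpenVibe processing chain with SciPy and scikit-learn; the sampling rate, array shapes, and marker handling are assumptions, and the xDAWN spatial filter used in the actual pipeline is omitted for brevity (it would be fitted on the same epochs before the LDA).

```python
import numpy as np
from scipy.signal import butter, lfilter, decimate
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 500        # assumed acquisition sampling rate (Hz)
DECIM = 4       # decimation factor, as in the paper
EPOCH_S = 0.6   # post-flash epoch length (seconds)

def preprocess(raw):
    """Band-pass 1-20 Hz with a 4th-order Butterworth filter, then decimate by 4.

    `raw` has shape (n_channels, n_samples)."""
    b, a = butter(4, [1.0, 20.0], btype="bandpass", fs=FS)
    return decimate(lfilter(b, a, raw, axis=1), DECIM, axis=1)

def cut_epochs(signal, flash_onsets):
    """Extract one 0.6 s epoch per flash onset (indices in decimated samples)."""
    n = int(EPOCH_S * FS / DECIM)
    return np.stack([signal[:, s:s + n] for s in flash_onsets])

def train_classifier(raw, flash_onsets, is_target):
    """Fit the two-class LDA on flattened post-flash epochs."""
    epochs = cut_epochs(preprocess(raw), flash_onsets)
    X = epochs.reshape(len(epochs), -1)           # (n_flashes, channels * samples)
    lda = LinearDiscriminantAnalysis()
    lda.fit(X, np.asarray(is_target, dtype=int))  # 1 = target letter flashed
    return lda
```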

The online session used the same stimulus timings and durations as the training session, while the users were instructed to spell freely. The signal was processed as before and passed through the xDAWN [23] spatial filter; it was then divided into 0.6-second epochs (defined by markers signifying the moment of each flash), averaged per epoch, and fed to the LDA classifier.
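Continuing the sketch above (and again leaving out the xDAWN projection, which in the real pipeline is applied to each epoch before classification), the decision value returned for each flash is what gets accumulated per row and column to pick the spelled letter, as outlined in Section 2.

```python
def score_flash(lda, signal, flash_onset):
    """LDA decision value ("target-likeness") for the epoch following one flash.

    `signal` is the filtered, decimated stream of shape (n_channels, n_samples);
    `flash_onset` is the flash marker position in decimated samples.
    """
    n = int(EPOCH_S * FS / DECIM)
    epoch = signal[:, flash_onset:flash_onset + n]
    return float(lda.decision_function(epoch.reshape(1, -1))[0])
```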

The Unity game engine was used for stimulus presentation and networking. In order to establish synchronized stimulus presentation in Unity, OpenVibe [24] was set up to broadcast stimulus presentation triggers through its built-in Virtual Reality Peripheral Network (VRPN) box, and these were routed to Unity through the external Unity Indie VRPN Adapter (UIVA) application.
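The role of this trigger chain can be pictured with a minimal stand-in; the UDP messages, port, and JSON format below are hypothetical and are not the OpenVibe, VRPN, or UIVA protocol - the point is only that the signal-processing side announces each upcoming flash so the on-screen stimuli and the EEG markers stay aligned.

```python
import json
import socket

TRIGGER_PORT = 7808  # hypothetical port; VRPN/UIVA use their own transport

def send_flash_trigger(sock, kind, index):
    """Announce an upcoming flash ("row"/"col" plus its index) to the renderer."""
    sock.sendto(json.dumps({"kind": kind, "index": index}).encode(),
                ("127.0.0.1", TRIGGER_PORT))

def receive_flash_trigger(sock):
    """Block until the next flash trigger arrives and return (kind, index)."""
    data, _ = sock.recvfrom(1024)
    msg = json.loads(data.decode())
    return msg["kind"], msg["index"]

# Stimulation scheduler side:
#   tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   send_flash_trigger(tx, "row", 3)
# Renderer side:
#   rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   rx.bind(("127.0.0.1", TRIGGER_PORT))
#   kind, index = receive_flash_trigger(rx)
```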

Figure 3: P300 letter grid overlaid on the live Vive camera feed

Unity was used to connect the two running instances (see Fig. 3). One side served as the host and the other connected to it as a client. Both host and client computers ran the same OpenVibe scenarios separately and concurrently (once the connection had been established). The messages were sent letter by letter and displayed to the conversing partner.

6 DISCUSSION

In this paper we have examined different interaction modalities for BCI interfaces. The SSVEP paradigm offers superior performance, but is more difficult to incorporate seamlessly into a scene, whether in VR or AR scenarios (considering its dependence on brightness, color, and refresh rate). The MI paradigm was ruled out given the priority of user comfort. Due to the limited resolution of the Vive camera, our implementation was not able to incorporate an AR marker-based system. Instead, we superimposed the letter grid over the real-time video of the user's surroundings and found that the video feed did not obstruct the spelling. In this prototype, the flashing of the letters was initiated locally (separately for each user), since the delay introduced by the network would have a detrimental effect on performance. Another limitation of our prototype lies in the fact that the communication was automatic, in the sense that letters, once spelled, would automatically become visible to the other user. For users to share the same stimuli (commands, buttons, objects) and interact with them in unison, this delay needs to be controlled with design changes such as turn-based interaction, or by assigning a different set of stimuli to each user.


7 IMPLICATIONS FOR FUTURE RESEARCH

Both EEG-based BCIs and AR systems are becoming cheaper and more mobile, which makes their combination an attractive prospect to examine, especially considering future real-life implementations. In this paper, we have chosen to implement an ERP-based speller as a first step towards examining real-world applications that would benefit from the fusion of these novel technologies in a multi-user context. Our priority will be, on the one hand, to examine the possibility of using BCIs in scenarios that do not restrict user movement and that enable collaboration and communication, and on the other, to find ways to extract useful information from users' ERP responses to different stimuli in the context of real-world use cases. We foresee that in the coming years these types of interfaces will completely change the way we communicate and cooperate. In the near future we plan to perform full-scale user testing to examine the effectiveness of the BCI interface. Furthermore, marker-based stimulus presentation will be a priority. We are aware of the issues with BCI illiteracy, so it is important to understand and document these limitations and work towards mitigating them by examining multimodal interaction schemes. Finally, we also plan to extend the collaboration to more than two persons.

8 CONCLUSION

In this paper we have showcased BrainChat, a novel BCI-AR system that can be used for remote communication between two or more persons. The work is in its early stages, but it demonstrates the concept of messaging based only on thoughts. We have given a detailed rationale for using the ERP modality, considering its flexibility with regard to environmental noise and user-induced motion artifacts. Given the wealth of cognitive information carried in the ERP responses of the brain, further inquiry into AR as a channel of stimulus presentation will give insight into the ways these novel technologies can augment the human experience in collaborative environments.

ACKNOWLEDGMENTS

The authors would like to thank Mr. Milan Dolezal and Mr. Filip Skola for their extensive help with integration and troubleshooting, and for providing invaluable input throughout the design process.

REFERENCES

[1] Bernhard Graimann, Brendan Allison, and Gert Pfurtscheller. Brain-computer interfaces: A gentle introduction. Brain-Computer Interfaces, pages 1–27, 2010.

[2] Tushar Kanti Bera. Noninvasive electromagnetic methods for brain monitoring: a technical review. In Brain-Computer Interfaces, pages 51–95. Springer, 2015.

[3] Janis J Daly and Jonathan R Wolpaw. Brain-computer interfaces in neurological rehabilitation. The Lancet Neurology, 7(11):1032–1043, 2008.

[4] J d R Millan, Rudiger Rupp, Gernot R Muller-Putz, Roderick Murray-Smith, Claudio Giugliemma, Michael Tangermann, Carmen Vidaurre, Febo Cincotti, Andrea Kubler, Robert Leeb, et al. Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges. Frontiers in Neuroscience, 4, 2010.

[5] Anatole Lecuyer, Fabien Lotte, Richard B Reilly, Robert Leeb, Michitaka Hirose, and Mel Slater. Brain-computer interfaces, virtual reality, and videogames. Computer, 41(10), 2008.

[6] Fabrizio Beverina, Giorgio Palmas, Stefano Silvoni, Francesco Piccione, Silvio Giove, et al. User adaptive BCIs: SSVEP and P300 based interfaces. PsychNology Journal, 1(4):331–354, 2003.

[7] Bojan Kerous and Fotis Liarokapis. Brain-computer interfaces - a survey on interactive virtual environments. In Games and Virtual Worlds for Serious Applications (VS-Games), 2016 8th International Conference on, pages 1–4. IEEE, 2016.

[8] Beste F Yuksel, Michael Donnerer, James Tompkin, and Anthony Steed. A novel brain-computer interface using a multi-touch surface. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 855–858. ACM, 2010.

[9] Kenji Kansaku, Naoki Hata, and Kouji Takano. My thoughts through a robot's eyes: An augmented reality-brain-machine interface. Neuroscience Research, 66(2):219–222, 2010.

[10] Kouji Takano, Naoki Hata, and Kenji Kansaku. Towards intelligent environments: an augmented reality-brain-machine interface operated with a see-through head-mount display. Frontiers in Neuroscience, 5, 2011.

[11] Kosuke Uno, Genzo Naito, Yohei Tobisa, Lui Yoshida, Yutaro Ogawa, Kiyoshi Kotani, and Yasuhiko Jimbo. Basic investigation of brain-computer interface combined with augmented reality and development of an improvement method using the nontarget object. Electronics and Communications in Japan, 98(8):9–15, 2015.

[12] Luzheng Bi, Xin-An Fan, Nini Luo, Ke Jie, Yun Li, and Yili Liu. A head-up display-based P300 brain-computer interface for destination selection. IEEE Transactions on Intelligent Transportation Systems, 14(4):1996–2001, 2013.

[13] Pierre Gergondet and Abderrahmane Kheddar. SSVEP stimuli design for object-centric BCI. Brain-Computer Interfaces, 2(1):11–28, 2015.

[14] Reinhold Scherer, Mike Chung, Johnathan Lyon, Willy Cheung, and Rajesh PN Rao. Interaction with virtual and augmented reality environments using non-invasive brain-computer interfacing. In 1st International Conference on Applied Bionics and Biomechanics (October 2010), 2010.

[15] Christoph Guger, Brendan Z Allison, Bernhard Großwindhager, Robert Pruckl, Christoph Hintermuller, Christoph Kapeller, Markus Bruckner, Gunther Krausz, and Gunter Edlinger. How many people could use an SSVEP BCI? Frontiers in Neuroscience, 6, 2012.

[16] C Guger, G Edlinger, W Harkam, I Niedermayer, and G Pfurtscheller. How many people are able to operate an EEG-based brain-computer interface (BCI)? IEEE Transactions on Neural Systems and Rehabilitation Engineering, 11(2):145–147, 2003.

[17] Christoph Guger, Shahab Daban, Eric Sellers, Clemens Holzner, Gunther Krausz, Roberta Carabalona, Furio Gramatica, and Guenter Edlinger. How many people are able to control a P300-based brain-computer interface (BCI)? Neuroscience Letters, 462(1):94–98, 2009.

[18] Karla Felix Navarro. Wearable, wireless brain computer interfaces in augmented reality environments. In Information Technology: Coding and Computing, 2004. Proceedings. ITCC 2004. International Conference on, volume 2, pages 643–647. IEEE, 2004.

[19] Yuan-Pin Lin, Yijun Wang, and Tzyy-Ping Jung. A mobile SSVEP-based brain-computer interface for freely moving humans: The robustness of canonical correlation analysis to motion artifacts. In Engineering in Medicine and Biology Society (EMBC), 2013 35th Annual International Conference of the IEEE, pages 1350–1353. IEEE, 2013.

[20] Danhua Zhu, Jordi Bieger, Gary Garcia Molina, and Ronald M Aarts. A survey of stimulation methods used in SSVEP-based BCIs. Computational Intelligence and Neuroscience, 2010:1, 2010.

[21] Stefan Debener, Falk Minow, Reiner Emkes, Katharina Gandras, and Maarten Vos. How about taking a low-cost, small, and wireless EEG for a walk? Psychophysiology, 49(11):1617–1621, 2012.

[22] Luigi Bianchi, Saber Sami, Arjan Hillebrand, Ian P Fawcett, Lucia Rita Quitadamo, and Stefano Seri. Which physiological components are more suitable for visual ERP based brain-computer interface? A preliminary MEG/EEG study. Brain Topography, 23(2):180–185, 2010.

[23] Bertrand Rivet, Antoine Souloumiac, Virginie Attina, and Guillaume Gibert. xDAWN algorithm to enhance evoked potentials: application to brain-computer interface. IEEE Transactions on Biomedical Engineering, 56(8):2035–2043, 2009.

[24] Yann Renard, Fabien Lotte, Guillaume Gibert, Marco Congedo, Emmanuel Maby, Vincent Delannoy, Olivier Bertrand, and Anatole Lecuyer. OpenViBE: an open-source software platform to design, test, and use brain-computer interfaces in real and virtual environments. Presence: Teleoperators and Virtual Environments, 19(1):35–53, 2010.