
Research Article
Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI

Piotr Stawicki, Felix Gembler, and Ivan Volosyak

Faculty of Technology and Bionics, Rhine-Waal University of Applied Sciences, 47533 Kleve, Germany

Correspondence should be addressed to Ivan Volosyak; ivan.volosyak@hochschule-rhein-waal.de

Received 31 March 2016; Revised 1 June 2016; Accepted 19 June 2016

Academic Editor: Justin Dauwels

Copyright © 2016 Piotr Stawicki et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Hindawi Publishing Corporation, Computational Intelligence and Neuroscience, Volume 2016, Article ID 4909685, 14 pages, http://dx.doi.org/10.1155/2016/4909685

Brain-computer interfaces represent a range of acknowledged technologies that translate brain activity into computer commands. The aim of our research is to develop and evaluate a BCI control application for certain assistive technologies that can be used for remote telepresence or remote driving. The communication channel to the target device is based on steady-state visual evoked potentials. In order to test the control application, a mobile robotic car (MRC) was introduced and a four-class BCI graphical user interface (with live video feedback and stimulation boxes on the same screen) for piloting the MRC was designed. For the purpose of evaluating a potential real-life scenario for such assistive technology, we present a study where 61 subjects steered the MRC through a predetermined route. All 61 subjects were able to control the MRC and finish the experiment (mean time 207.08 s, SD 50.25) with a mean (SD) accuracy and ITR of 93.03% (5.73) and 14.07 bits/min (4.44), respectively. The results show that our proposed SSVEP-based BCI control application is suitable for mobile robots with a shared-control approach. We also did not observe any negative influence of the simultaneous live video feedback and SSVEP stimulation on the performance of the BCI system.

1. Introduction

The idea of remotely operated vehicles without the necessity of manual supervision has concerned humankind since the end of the 19th century. The first public demonstration of a wireless submarine was performed by Nikola Tesla in 1898 [1]. The first automatically operated vehicles utilizing autonomous behavior were Elmer and Elsie, built by neurobiologist William Grey Walter in Bristol, England, in the late 1940s [2]. Since then, numerous areas of application for robot vehicles (military, environment, and healthcare) have emerged. In healthcare applications, robotic vehicles are primarily developed as assistive devices for severely disabled or paralyzed patients or patients suffering from a stroke [3]. For some people with severe motor impairments like amyotrophic lateral sclerosis (ALS) or spinal cord injury (SCI), traditional control interfaces, such as a 2-axis joystick, a keyboard, or a mouse, are not suitable, as they require precise movement control over the limbs, which in their case is not possible due to nerve cell degradation or spinal nerve damage. For these people, accurate control interfaces can be realized through modern technologies such as brain-computer interfaces (BCIs).

The term brain-computer interface (BCI) or brain-machine interface (BMI) represents a range of technologies developed intensively over the past 20 years. BCIs introduce additional communication pathways that translate brain signals directly into computer commands (without the use of muscles or nerves) and therefore provide people with minor to severe disabilities with a new opportunity to enhance their quality of life; for some people, BCIs offer a new way of communicating and, for others, a new way to control their environment (e.g., wheelchair, prosthesis) [4–6].

Service robots designed to assist paralyzed people are a popular research topic in the literature on BCIs [7]. Technical solutions for such robots are developed with different BCI paradigms. In the last decade, numerous robot control BCI applications were developed.

Ron-Angevin et al. used an SMR-based two-class BCI application to choose between four direction commands in order to control a robot [8]. Three out of four subjects were able to control their BCI robot. Tonin et al. developed a shared-control brain-machine interface based on two mental states for steering a telepresence robot located at a distance of 100 km [9].



All four subjects (two healthy and two with motor disabilities, myopathy) successfully controlled the robot with a similar performance. Huang et al. used an event-related desynchronization/synchronization of electroencephalogram signals to control a 2D constantly moving virtual wheelchair [10]. Four out of five healthy subjects reached mean target hit rates of 87.5% and 100% with motor imagery in two virtual games.

Several studies used the P300 paradigm (another commonly used BCI approach using the 300 ms component of an evoked potential). Chella et al. presented a Museum Robotic Guide GUI for a P300-based BCI developed at the San Camillo Institute [11]. The user can choose between two robots (Pioneer3 or PeopleBot), each located in a different place, to visit the museum. Escolano et al. reported a P300-based telepresence system that enables the user to be present in remote environments through a mobile robot using a shared-control strategy [12].

Reliable BCIs can also be realized with steady-state visual evoked potentials (SSVEPs). SSVEPs are the continuous brain responses elicited at the visual and parietal cortical areas under visual stimulation with a specific constant frequency [13]. In short, when looking at a light source that flickers with a constant frequency, the corresponding brain signals are measured with the EEG from the occipital cortex. As the specific stimulation frequencies are known a priori, the stimulus the user focuses his or her gaze on can be detected through classification algorithms that identify the stimulation frequency with the strongest amplitude, that is, through analyzing the signal-to-noise ratios.
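This detection principle can be illustrated with a few lines of code. The sketch below is a simplified, single-channel illustration of picking the attended frequency by spectral power (all names are ours; the system actually used in this study relies on the minimum energy combination method described in Section 2.2.4):

```python
import numpy as np

def detect_ssvep(eeg, fs, stim_freqs, n_harmonics=2):
    """Pick the stimulation frequency with the strongest spectral power.

    eeg        : 1-D EEG segment (single channel)
    fs         : sampling rate in Hz
    stim_freqs : candidate stimulation frequencies, known a priori
    """
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)     # frequency bins

    def band_power(f):
        # sum power in a narrow band around f and its harmonics
        return sum(spectrum[np.abs(freqs - k * f) < 0.2].sum()
                   for k in range(1, n_harmonics + 1))

    powers = {f: band_power(f) for f in stim_freqs}
    return max(powers, key=powers.get)                # attended frequency

# Example: 2 s of synthetic data with a 15 Hz response buried in noise
fs = 128
t = np.arange(2 * fs) / fs
eeg = np.sin(2 * np.pi * 15 * t) + np.random.randn(t.size)
print(detect_ssvep(eeg, fs, [13.0, 14.0, 15.0, 16.0]))
```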

Visual evoked potentials (VEPs), recorded in the 1960s by Regan [14] and further tested in the 1980s and 1990s, extended our knowledge about the human brain. For example, as early as 1979, Frank and Torres evaluated the use of visual evoked potentials in the diagnosis of "cortical blindness" in children [15]. VEPs can also support the diagnosis of multiple sclerosis [16].

At the beginning of the 21st century, the progress of computing power and technology allowed the extensive research and development of SSVEP and its use in BCI. A recent review of visual and auditory BCIs can be found in [17]. Various applications like spelling interfaces [18] and control applications for a prosthesis [19] or for navigation [20] can be implemented with the SSVEP paradigm. In a large demographic study with 86 subjects, Volosyak et al. presented a four-class SSVEP-based BCI with LED stimulation [21]. 84 subjects reached a mean (SD) accuracy and ITR of 92.26% (7.82) and 17.24 (6.99) bits/min, respectively, with stimulation at 13, 14, 15, and 16 Hz (medium frequency set). Only 56 subjects were able to operate the BCI at high frequencies (34, 36, 38, and 40 Hz), reaching a mean accuracy and ITR of 89.16% (9.29) and 12.10 (7.31) bits/min. Zhao et al. introduced a three-layer decision tree for humanoid robot BCI interaction [22]. The SSVEP stimulation was presented on a 17'' LCD monitor. Four boxes were flickering at 5, 6, 8, and 10 Hz, with an additional 17 Hz red LED on top of the monitor. Five young subjects tested the developed software platform. The mean performance accuracy achieved by four of them was approximately 88%. Guneysu and Levent Akin presented a four-class SSVEP-based BCI with LED stimulation at 7, 9, 11, and 15 Hz; an Emotiv EPOC headset was used for EEG data acquisition, and MATLAB software (FFT with Gaussian model) was used for dominant frequency extraction [23]. Three subjects achieved an average performance of approximately 68% at a distance of 22 cm from the LEDs. Holewa and Nawrocka presented an SSVEP-based BCI for Lego Mindstorm robot control with an Emotiv EPOC headset [24]. For visual stimulation, four LED-based panels with stimulation frequencies of 28, 30, 32, and 34 Hz were used. The panels were additionally covered with different color filters. Three subjects operated this setup with a mean accuracy and ITR of 73.75% and 11.36 bits/min. Kwak et al. presented an SSVEP-control system for the exoskeleton REX [25]. Five LEDs blinking at 9, 11, 13, 15, and 17 Hz were used to select between the exoskeleton commands forward, left, standing, right, and sitting, respectively. Three subjects achieved a mean accuracy of 99.16%. In an f-VEP based BCI, each target flashes at a different constant frequency, causing a periodic sequence of evoked responses with the same fundamental frequency as that of the flickered stimulus as well as its harmonics [26]; in a c-VEP BCI, usually a pseudorandom m-sequence, phase-shifted between targets, is flashing (an m-sequence has an autocorrelation function which is a very close approximation to a unit impulse function and is nearly orthogonal to its time lag sequence) [26]. Kapeller et al. evaluated a frequency-coded f-VEP system (8.57, 10, 12, and 15 Hz) and a code-modulated c-VEP system (63-bit pseudorandom m-sequence) as continuous robot steering inputs with video feedback of the robot movement [27]. Eleven healthy subjects achieved a maximum accuracy of 91.36% and 98.18% for f-VEP and c-VEP, respectively, in an online accuracy test run without zero class. Wang et al. presented a handmade remote control electrical car, which was driven using an SSVEP stimulation with blue LEDs (controlled via an ATmega8L microcontroller) [28]. Combining Average and Fast Fourier Transform calculations, they achieved an average accuracy of 85%, 91%, 94%, 98%, and 100% for the different trial lengths of 0.5, 1, 2, 4, and 8 seconds, respectively. The EEG signal was recorded from electrodes O1 and O2 of the international 10–20 system. Gergondet et al. used SSVEP stimulation with approximated frequencies (frequency coding, a method originally proposed by Wang et al. [29]) for steering a humanoid robot (HRP-2) [30]. Four stimulation arrows (6, 8, 9, and 10 Hz) were presented on a 17'' LCD screen with 60 Hz vertical refresh rate. Also presented in a background area of the screen was a live video stream from the robot's point of view. Their research focused on the impact of the background (dynamic compared to static). Diez et al. tested an SSVEP-driven Pioneer 3-DX mobile robot equipped with a video camera for live feedback and an obstacle avoidance ultrasonic sensor system [31]. The high frequency (37, 38, 39, and 40 Hz) visual stimulation was generated with an FPGA card; stimulation modules (LED-based) were placed on the outer edge of the monitor in a top, left, right, and bottom arrangement, and the camera image was displayed live on the monitor. BCI commands were classified using a sliding window FFT method. Seven participants navigated the robot to two different office locations. Some recent studies combine at least two of the BCI paradigms to realize a so-called hybrid BCI. Choi and Jo combined the SSVEP with event-related


desynchronization (ERD, typical for SMR-based BCIs) for the navigation/exploration of a humanoid robot, with an additional P300 BCI for interaction (object recognition decision) [32]. They also tested and evaluated the use of a low-cost BCI system. The ERD and SSVEP BCIs were two-class systems; the number of classes for the P300 BCI varied (2 to 4, depending on the detected objects). Five subjects used a low-cost Emotiv EPOC headset to perform the experiment. The overall accuracies achieved by the ERD and SSVEP protocols were 84.6% and 84.4%, respectively; the P300 protocol reached an accuracy of 91% for two objects and 89.5% for four objects through the robot's camera. The scored ITR value was 1.81 bits/min for the navigation/exploration task and 2.1 bits/min for the object recognition task. The authors reported that switching between SSVEP and ERD caused a drop in accuracy down to 73%. The robot's point of view was shown in the middle of the monitor (live feedback); the SSVEP stimulation was shown at the sides, and the P300 was shown directly in the live feedback of the robot's view. Tidoni et al. presented a study where nine subjects (located in Italy) remotely controlled a humanoid robot (HRP-2, located in Japan) [33]. Subjects tested synchronous/asynchronous audio-visual feedback, and it was evaluated whether a mirror reflection of the robot (in its field of view) improved the performance of an SSVEP-based BCI with video feedback. The main task was to navigate the robot and to move a bottle from one table to another. A six-class SSVEP-based BCI was used; the stimulation and the video feedback were presented on the same screen. The flashing frequencies were 6, 8, 9, 10, and 14 Hz, with an additional zero class when the subject did not gaze at any target. Due to the remote distance, the feedback action was visible to the subject after approximately 800 ms. Zhao et al. compared a four-class SSVEP-based BCI to a six-class P300-based BCI by evaluating achieved control accuracies of robot navigation tasks [34]. For both systems (SSVEP and P300), the modular platform Cerebot was used, and stimuli were presented on an LCD monitor. Seven subjects took part in the experiment and achieved a mean accuracy and ITR of 91.3% and 18.8 bits/min for the SSVEP approach; for the P300 approach, 90.3% and 24.7 bits/min were achieved.

Rutkowski et al. implemented a robot control application using tactile BCIs [35]. They tested two tactile stimulation configurations: a pin-pressure BCI (arranged in a 3 × 3 matrix) and a full body vibrotactile BCI (vibrotactile generators placed on subjects' backs, shoulders, arms, and legs). A humanoid NAO robot was navigated using six predefined commands. Five subjects could control the robot using the pin-pressure BCI, and three subjects successfully tested the full body vibrotactile BCI. The authors demonstrate the reliability of the tactile BCI paradigm for robot navigation.

All listed studies have to deal with the main engineering problems regarding BCI technologies: EEG-based BCIs are relatively slow and can sometimes be unreliable. Maximizing the speed of a BCI while still maintaining high accuracy leads to certain limitations in the number of targets and in system speed (target selection). Moreover, it is often reported that a certain number of subjects are not able to control BCIs (so-called BCI illiterates, e.g., [21, 36]). We chose the SSVEP paradigm as it is reported to be among the fastest and most reliable BCI strategies, with short or no training time needed [13]. In our recent study, after five years of research, we developed an SSVEP calibration software (SSVEP Wizard) for LCD monitors that determines SSVEP key parameters for each subject [37]. The selected stimulation frequencies are those with the highest SSVEP response in the visual cortex. We have shown that with optimized parameters the SSVEP BCI literacy rate can reach 100%. This calibration software is also used here to determine and optimize the user-specific SSVEP parameters. Some SSVEP-based applications described problems regarding video feedback. The research of Gergondet et al. focused on the impact of the background (dynamic compared to static) on the BCI performance; they reported that a dynamic background negatively affected the SSVEP stimulation [30]. Kapeller et al. noted a slight decrease in performance for an SSVEP-based BCI with background video and small targets [27]. In the approach presented by Diez et al., the stimulation was separated from the video feedback stream (outer LEDs), but no negative influence of the live feedback was reported [31].

SSVEP-based BCI systems for robot control usually use four or five control commands. A higher degree-of-freedom control is difficult to realize because multitarget BCIs usually have a lower accuracy. A countermeasure to the limitations in degrees of freedom can be a shared-control approach; that is, the robot can share some specific control commands with the human user, which allows for the faster execution of complex tasks. The so-called shared-control principle was initially explored for BCI-controlled wheelchairs and recently applied to BCI telepresence [12]. BCIs with only a few targets are more robust and therefore more suitable for control applications. There are already numerous existing BCI technologies that patients use at their homes [38, 39]. However, for the last decade, publications about SSVEP-based BCIs have been sparse: the PubMed database shows 4200 results for the search terms "brain-computer interface"/"brain-machine interface" but only 185 results for the term "SSVEP". With this paper, we would like to contribute to the research on SSVEP-based BCIs.

In this paper, we present a four-class SSVEP-based BCI control application with live video feedback. We also present a self-made, low-cost, semiautonomous mobile robotic car (MRC) with live video feedback controlled by a BCI application. Unlike the robotic cars developed in the above-mentioned research studies, not only the vehicle itself but also the camera's pan-tilt position was controlled via SSVEP. The main task of this BCI study was to maneuver the MRC through a predetermined route and to examine the surroundings (live scenario task). The BCI performance was tested with 61 subjects. BCI key parameters, for example, stimulation frequencies, were determined individually by the Wizard software. In this study, we addressed the following questions:

(i) Is it possible to use an SSVEP-based BCI stimulation and a background video on the same screen without the negative influence of the presented video reported in, for example, [27, 30]?


(ii) How does the video feedback affect BCI performance when simultaneously presented with the SSVEP-based BCI stimulation?

(iii) Is our self-made semiautonomous mobile robotic car suitable for future use as an assistive technology?

(iv) How did the subjects perform using the presented SSVEP-based BCI control application compared to results presented in, for example, [31]?

The paper is organized as follows: Section 2 describes the experimental setup and presents details about the experimental procedure and the hardware and software solutions. The results are presented in Section 3, followed by a discussion and conclusion in Sections 4 and 5, respectively.

2. Material and Methods

2.1. Subjects. Everyone who volunteered to participate in the study became a research subject after reading the subject information sheet. This study was carried out with written informed consent from all subjects in accordance with the Declaration of Helsinki. 61 subjects (30% female) with a mean (SD) age of 22.62 (5.00) years participated in the study, all healthy and all students or employees of the Rhine-Waal University of Applied Sciences in Kleve. The EEG recording took place in a normal PC-laboratory room (≈36 m²). The LCD screen was located in front of windows, and the luminance was kept at an acceptable level in order not to disturb the subjects. If necessary, outer blinds were pulled down during the day or light was turned on during the evening. None of the subjects had neurological or visual disorders. Spectacles were worn when appropriate. Subjects did not receive any financial reward for their participation in this study.

2.2. Hardware

2.2.1. Mobile Robotic Car (MRC). Four identically constructed mobile robotic cars (MRCs) were used in the experiment, all set up with the same hardware and software configurations. Four wheels were attached to a preassembled case (14 cm × 19 cm) with built-in DC motors. A rechargeable battery (6 × AA NiMH, 1900 mAh) was placed inside. Four light sensors (each with a corresponding red LED L-53SRC-F for luminescence, positioned side by side) were mounted on a prototype board at the front of the MRC. Two of those sensors were connected to the analog input pins of a BeagleBone Board Rev A6 at the center and two at the sides. The DC motors were controlled by an Atmega168-based unit. The camera, a Logitech C210 (V-U0019), was mounted at the front of the MRC on two servos (RB-421, RobotBase, China) simulating a pan-tilt head and connected to the BeagleBone USB port. The camera was controlled in five predefined orientations: looking down, forward, up, to the left, and to the right. The connection with the MRC was established through UDP via a HAMA N150 2in1-WLAN-Adapter connected with an Ethernet cable to the BeagleBone of the MRC and a USB WiFi adapter (D-Link DWL-G122) connected to the desktop computer. All the electronic components

Figure 1: Mobile robotic car (MRC). On top of the MRC: the BeagleBone Board Rev A6, the Atmega168-based motor drive shield, and a small HAMA WiFi/WLAN Access-Point. At the front: the red LEDs of the light sensors that allowed the MRC to follow the white line in autopilot mode, and the Logitech C210 camera mounted on 2 servos creating the pan-tilt head for 2-DoF (degree-of-freedom) control. The rechargeable battery is mounted inside the main case. MRC dimensions are 25 × 25 × 12 cm.

(boards and HAMA N150 Access-Point) were mounted on top of the MRC. The BeagleBone was equipped with an 8 GB SD card running Ubuntu ARM (Linux distribution version 12.04) with MJPG-Streamer installed (version 0.1, http://mjpg-streamer.sf.net) for video streaming. One of the used MRCs is shown in Figure 1.

2.2.2. Data Acquisition and Feature Extraction. The subjects were seated in front of a 24'' LCD screen (BenQ XL2420T or BenQ XL2411T, resolution 1920 × 1080 pixels, vertical refresh rate 120 Hz) at a distance of approximately 60 cm. The computer system, based on an Intel Core i7 3.40 GHz, was running Microsoft Windows 7 Enterprise. Standard passive Ag/AgCl electrodes were used to acquire the signals from the surface of the scalp. The ground electrode was placed over AFz, the reference electrode was placed over Cz, and eight signal electrodes were placed at predefined locations on the EEG cap, marked with Pz, PO3, PO4, O1, O2, Oz, O9, and O10 according to the international system of EEG electrode placement. Standard abrasive electrolytic gel was applied between the electrodes and the scalp to bring impedances below 5 kΩ. An EEG amplifier, gUSBamp (Guger Technologies, Graz, Austria), was utilized. The sampling frequency was set to 128 Hz. During the EEG signal acquisition, an analog bandpass filter between 2 and 30 Hz and a notch filter around 50 Hz were applied directly in the amplifier.
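For readers who want to replicate this preprocessing offline, an equivalent software filter chain could look as follows. This is a sketch only: the study applied these filters in the amplifier hardware, and the Butterworth order and notch Q below are our assumptions (the paper only states the passbands):

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

fs = 128.0  # sampling rate used in the study

# 4th-order Butterworth bandpass, 2-30 Hz (order is an assumption)
b_bp, a_bp = butter(4, [2.0, 30.0], btype="bandpass", fs=fs)

# Notch around 50 Hz to suppress power-line interference; largely
# redundant after the 30 Hz lowpass, but mirrors the hardware chain
b_n, a_n = iirnotch(w0=50.0, Q=30.0, fs=fs)

def preprocess(eeg):
    """Zero-phase bandpass + notch, applied per channel (samples x channels)."""
    out = filtfilt(b_bp, a_bp, eeg, axis=0)
    return filtfilt(b_n, a_n, out, axis=0)
```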

The minimum energy combination (MEC) method was originally introduced by Friman et al. in [40]. In this study, a refined version of the MEC, as presented by Volosyak in [18], was used. In the following, we present this method in a very short form.

2.2.3. SSVEP Signal Model. We can model the signal recorded at a given electrode i (against the reference electrode) at time t as follows:

y_i(t) = \sum_{k=1}^{N_h} \left( a_{ik} \sin 2\pi k f t + b_{ik} \cos 2\pi k f t \right) + E_{i,t}.   (1)

The first part is the SSVEP evoked response, consisting of sine and cosine functions of the frequency f and its harmonics k, with corresponding amplitudes a_{ik} and b_{ik}. The last part, E_{i,t}, represents the noise component of electrode i: other activities not related to the SSVEP evoked response, and artifacts. The N_t EEG signal samples of the i-th electrode can be written in vector form as y_i = X τ_i + E_i, where y_i = [y_i(1), ..., y_i(N_t)]^T is an N_t × 1 vector and X is the SSVEP model matrix of size N_t × 2N_h, containing the \sin(2\pi k f t) and \cos(2\pi k f t) pairs in its columns. The vector τ_i contains the corresponding amplitudes. Based on the assumption that the nuisance is stationary within the short time segment length, the signals from the N_y electrodes can be stored in matrix form Y = [y_1, ..., y_{N_y}].

2.2.4. Minimum Energy Combination. In order to cancel out the nuisance and noise and to boost the SSVEP response, channels are created as a weighted sum of the N_y electrodes: s = \sum_{i=1}^{N_y} w_i y_i = Yw. Several sets S of N_s channels can be created with different combinations of the original electrode signals, S = YW, where W is an N_y × N_s matrix that contains the different weight combinations. The structure of W is based on the minimum energy combination technique. In order to use this combination method for cancelling the nuisance signal, in a first step any potential SSVEP component needs to be removed from the signal:

\tilde{Y} = Y - X (X^T X)^{-1} X^T Y.

The next step is to find a weight vector w that minimizes the resulting energy of the combination of the electrodes' noise signals, that is, the optimization problem of minimizing the nuisance and noise component:

\min_{w} \| \tilde{Y} w \|^2 = \min_{w} w^T \tilde{Y}^T \tilde{Y} w.   (2)

The solution to the minimization problem is the eigenvector v_1 corresponding to the smallest eigenvalue of the symmetric matrix \tilde{Y}^T \tilde{Y}, and the energy of the resulting combination equals the smallest eigenvalue λ_1 of this matrix. In order to discard up to 90% of the nuisance signal, the total number of channels is selected by finding the smallest value for N_s that satisfies the following equation:

\frac{\sum_{i=1}^{N_s} λ_i}{\sum_{j=1}^{N_y} λ_j} > 0.1.   (3)
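The steps above translate almost directly into NumPy. The following sketch computes the spatial weights under the paper's notation; the eigenvector scaling follows the original MEC publication [40] and is an assumption here:

```python
import numpy as np

def mec_weights(Y, X):
    """Minimum energy combination: weights that minimize nuisance energy.

    Y : EEG segment, shape (N_t, N_y)   -- samples x electrodes
    X : SSVEP model matrix, shape (N_t, 2*N_h), sine/cosine columns
    Returns W of shape (N_y, N_s) so that S = Y @ W are the channels.
    """
    # Remove any potential SSVEP component: Y_tilde = Y - X (X^T X)^-1 X^T Y
    Y_tilde = Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]

    # Eigendecomposition of the symmetric matrix Y_tilde^T Y_tilde;
    # eigh returns eigenvalues in ascending order
    eigvals, eigvecs = np.linalg.eigh(Y_tilde.T @ Y_tilde)

    # Smallest N_s eigenvectors whose eigenvalues hold >10% of the total
    # energy (Eq. (3)), i.e., roughly 90% of the nuisance is discarded
    ratio = np.cumsum(eigvals) / eigvals.sum()
    N_s = int(np.searchsorted(ratio, 0.1, side="right")) + 1

    # Scale each weight vector by 1/sqrt(lambda) as in [40]; in practice,
    # near-zero eigenvalues would need regularization
    return eigvecs[:, :N_s] / np.sqrt(eigvals[:N_s])
```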

To detect the SSVEP response for a frequency, the power of that frequency and its N_h harmonics is estimated:

\hat{P} = \frac{1}{N_s N_h} \sum_{l=1}^{N_s} \sum_{k=1}^{N_h} \| X_k^T s_l \|^2.   (4)

To prevent overlapping between frequencies, we use N_h = 2 in the actual system implementation. The estimated power of the SSVEP stimulation frequencies and additional frequencies (N_f in total) is normalized into probabilities:

p_i = \frac{\hat{P}_i}{\sum_{j=1}^{N_f} \hat{P}_j}, \quad \text{with} \quad \sum_{i=1}^{N_f} p_i = 1,   (5)

where \hat{P}_i is the power estimation of the i-th signal, 1 ≤ i ≤ N_f.

Table 1: Overview of the time windows T_S of EEG data used for SSVEP classification. Eleven time segments between 812.5 ms (8 × 13 samples, or 8 blocks of EEG data) and 16,250 ms were used in this study.

Time [ms]    Blocks of EEG data
  812.5        8
 1015.6       10
 1523.4       15
 2031.3       20
 3046.9       30
 4062.5       40
 5078.1       50
 6093.8       60
 7109.4       70
 8125.0       80
16250.0      160

To increase the difference between the results, we apply a Softmax function:

p'_i = \frac{e^{\alpha p_i}}{\sum_{j=1}^{N_f} e^{\alpha p_j}}, \quad \text{with} \quad \sum_{i=1}^{N_f} p'_i = 1,   (6)

where α is set to 0.25. The output values of the Softmax function are between 0 and 1, and their sum is equal to 1. The classification output C_o for the i-th frequency is the result of three conditions: the probability p'_i of the i-th frequency is the highest, it surpasses a predefined border (p'_i ≥ β_i), and the detected frequency is one of the stimulation frequencies. After each classification, to prevent wrong classifications, the classifier output was rejected for the next 9 blocks (approximately 914 ms of EEG data at the sampling rate of 128 Hz), and the visual stimulation was paused for that time period. This classification method is the next iteration of the Bremen-BCI, as presented, for example, in [18]. An example of the recorded signal and the SNR can be seen in Figure 9.
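Putting (4)–(6) together, a classification step could be sketched as follows. This is illustrative code under the paper's notation; in the actual system, additional non-stimulation frequencies are scored as well, and a detection is only accepted if the winning frequency is one of the stimulation frequencies:

```python
import numpy as np

def classify_ssvep(S, X_per_freq, thresholds, alpha=0.25):
    """Eq. (4)-(6): power estimation, Softmax, and thresholded decision.

    S           : combined channels, shape (N_t, N_s)
    X_per_freq  : per-frequency model matrices [X_1, ..., X_{N_f}],
                  each (N_t, 2*N_h), stacking all harmonic sine/cosine pairs
    thresholds  : per-frequency borders beta_i
    Returns the index of the detected frequency, or None.
    """
    # Eq. (4): the Frobenius norm of X^T S sums over channels and harmonics
    P_hat = np.array([np.linalg.norm(X.T @ S) ** 2
                      / (S.shape[1] * (X.shape[1] // 2))
                      for X in X_per_freq])

    p = P_hat / P_hat.sum()          # Eq. (5): normalize to probabilities
    p_soft = np.exp(alpha * p)       # Eq. (6): Softmax with alpha = 0.25
    p_soft /= p_soft.sum()

    i = int(np.argmax(p_soft))
    if p_soft[i] >= thresholds[i]:   # must surpass the predefined border
        return i
    return None                      # no decision: extend the time window
```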

The SSVEP classification was performed online every 13 samples (ca. 100 ms) on the basis of the adaptive time segment length of the acquired EEG data introduced in [18]. If no classification was possible, the time segment length window was gradually extended to the next predefined window lengths (see Table 1 and Figure 2).
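In code form, this adaptive mechanism might look like the following loop (a sketch; `classify` stands for the thresholded decision outlined above, and the window list follows Table 1):

```python
WINDOWS = [8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 160]  # Table 1, in blocks
BLOCK = 13  # samples per block (~101.6 ms at 128 Hz)

def on_new_block(buffer, classify):
    """Called every 13 samples; tries increasingly long time windows."""
    for n_blocks in WINDOWS:
        n = n_blocks * BLOCK
        if len(buffer) < n:
            return None                 # longer windows need more data first
        result = classify(buffer[-n:])  # classify the most recent n samples
        if result is not None:
            return result               # command found; the GUI then rejects
    return None                         # output for 9 blocks (gaze shifting)
```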

2.3. Software

2.3.1. Graphical User Interface. A custom-made graphical user interface (robot-GUI) was designed to control the MRC. DirectX 9 was used to draw the images (buttons and a real-time picture from the actual camera view in the background) on the monitor, and the OpenCV library (version 2.4.10, http://www.opencv.org) was used for image acquisition. Each received frame was rendered into a background texture as soon as it was received (max. 30 Hz; the frame rate was limited by the used camera). Each received frame (640 × 480 pixels) was rescaled to 1280 × 960.


Figure 2: Changes in the time segment length after a performed classification, in case no distinct classification can be made and the actual time t allows for the extension to the next predefined value. After each classification (gray bars of 8, 10, 20, and 160 blocks), additional time for gaze shifting was included (black), in which the classifier output was rejected for 9 blocks.

Figure 3: A screenshot of the robot-GUI in drive mode. In this situation, the user had to select the "drive forward" command, which caused the robot to execute an autopilot program that uses the built-in light sensors to follow the white line.

Figure 4: Signs positioned at the turning points and oriented towards the camera showed the correct direction for following the track. Here, in the screenshot, the robot automatically stopped after reaching this turning point, and a sign instructed the user to select the command "turn left".

Due to the ratio of the streamed video frames and the resolution of the monitors used, a gray background in the form of a frame was implemented to prevent unnatural video scaling. The starting dimensions of the buttons were 146 × 512 pixels for left and right, 576 × 128 pixels for the top, and 536 × 96 pixels for the bottom buttons (see Figures 3 and 4). During runtime, the size of the buttons changed continuously, depending on the calculated probabilities of the i-th stimulus in (6). The robot-GUI had two modes: a drive mode and a camera mode. In drive mode, the subject was able to send the following commands to the MRC: "drive forward" (a discrete command using the line-sensors to follow the taped line on autopilot to the next direction change), "turn left", "turn right", and "switch to camera mode". In camera mode, the subject was able to send the following commands: "look left", "look right", "look up", and "switch to drive mode". While looking up, the button "look up" changed its content and functionality to "look forward". The control buttons were red in drive mode and yellow in camera mode to increase user friendliness. The aspect ratio of the video stream was 4:3, so the computer screen was not entirely filled out. If the command "drive forward" was selected by the user, an autonomous program was executed on the MRC. The MRC used the four line-sensors to follow the white line taped to the floor until it arrived at a turning point, where the robot stopped automatically. While driving, the MRC automatically made small corrections (see the sketch below): when the line was detected by one of the inner sensors, the rotation of the wheels on the same side was decreased by 50% until another sensor detected the line. Similarly, when the line was detected by one of the outer sensors, the wheels on the same side were stopped, so that the MRC could drive in a curve in order to get back on track. If one of the turning commands was selected, the MRC made an approximately 90-degree turn in the corresponding direction. In drive mode, the camera was facing slightly towards the ground (approximately 20 degrees down from the frontal view, the "look down" position). When switching to camera mode (represented by the word "CAMERA" at the bottom of the robot-GUI), the camera changed to the "look forward" position (switching back to drive mode would change the camera position to looking down again). All the EEG data and the received video stream from the MRC were anonymously stored.
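The line-following correction described above reduces to a small control rule per sensor reading. The sketch below uses a hypothetical sensor/motor API; only the 50%-slowdown and stop rules are taken from the text:

```python
def line_follow_step(sensors, motors):
    """One autopilot control step (hypothetical sensor/motor API).

    sensors : dict of booleans, True when the white line is detected:
              'inner_left', 'inner_right', 'outer_left', 'outer_right'
    motors  : object with set_speed(side, factor), where factor scales the
              nominal wheel speed (1.0 = full speed, 0.0 = stopped)
    """
    left, right = 1.0, 1.0
    # Inner sensor sees the line: wheels on the same side run at 50%
    if sensors['inner_left']:
        left = 0.5
    elif sensors['inner_right']:
        right = 0.5
    # Outer sensor sees the line: wheels on the same side stop, so the
    # MRC drives in a curve back onto the track
    if sensors['outer_left']:
        left = 0.0
    elif sensors['outer_right']:
        right = 0.0
    motors.set_speed('left', left)
    motors.set_speed('right', right)
```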

Figure 5: (a) shows the second step of the Wizard. The subject faced a circle consisting of small segments which flickered simultaneously with seven different frequencies; each of the seven tested frequencies was represented by the same number of segments. The segments representing one of the seven tested frequencies (b) were scattered in random order, forming the stimulation circle (a). After this multistimulation with thirteen or fourteen different frequencies (depending on the alpha test result), the EEG data were analyzed and ordered from the strongest to the lowest SSVEP response.

Figure 6: This figure shows the third step of the Wizard (a). The main frequency f_S is in the center (white circle stimulation), and the other three frequencies (from the four f_{S_i} with the strongest SSVEP response) are divided into small stimulation blocks and scattered around f_S, acting as "noise" in peripheral vision. This pattern is shown in (b). This was repeated for each f_{S_i}.

2.3.2. Wizard. The Wizard software is, generally speaking, a calibration program that autonomously determines BCI key parameters [37]. It is designed to select user-specific optimal stimulation frequencies for SSVEP-based BCIs on LCD screens. It is a development of the method presented in [41]. The Wizard chooses the four frequencies with the best SSVEP response out of a set of 14 possible stimulation frequencies f_i (ranging from 6 Hz to 20 Hz) that can be realized on a monitor screen with a vertical refresh rate of 120 Hz [42]. In the following, we provide a brief description.

Step 1 (alpha test). The subject was instructed by audio feedback to close his or her eyes. 15 frequencies in the alpha wave band (8.27, 8.57, 8.87, 8.93, 9.23, 9.53, 9.70, 10.0, 10.30, 10.61, 10.91, 11.21, 11.70, 12.0, and 12.30 Hz) were measured for 10 seconds during the closed-eyes period; if any of those 15 frequencies would interfere with a possible stimulation frequency f_i (Δf ≤ 0.15 Hz), this frequency f_i would be filtered out (see the sketch below).
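The exclusion rule of the alpha test reduces to a simple filter (a sketch with illustrative names):

```python
def filter_alpha_interference(candidates, alpha_peaks, delta=0.15):
    """Drop candidate stimulation frequencies too close to alpha activity.

    candidates  : possible stimulation frequencies f_i (the 6-20 Hz set)
    alpha_peaks : alpha-band frequencies showing strong eyes-closed power
    delta       : exclusion margin in Hz (0.15 Hz in the Wizard)
    """
    return [f for f in candidates
            if all(abs(f - a) > delta for a in alpha_peaks)]

# Example: a 10.0 Hz candidate is removed if 10.0 Hz alpha power was strong
print(filter_alpha_interference([6.0, 7.5, 10.0, 12.0], [10.0]))
```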

Step 2 (stimulation frequency selection). In order to avoid the influence of harmonic frequencies, the user had to look at two circles in sequence. Each circle contained seven stimulation frequencies, which were presented in pseudorandom order as small green circle segments (see Figure 5). This step took 20 seconds.

Step 3 (optimal classification thresholds and time segment length). In this step, the Wizard software chose the best starting time segment length T_S as well as the optimal classification thresholds β_i for the frequencies determined in the previous step. First, a white circle flickering at the frequency which had the highest SSVEP response in phase 2 was displayed. It was necessary to simulate the noise caused by peripheral vision when concentrating on the target object; for this purpose, the white circle was surrounded by a green ring containing segments which presented the remaining optimal frequencies (see Figure 6). The user was instructed by audio feedback to gaze at the white circle. The circle and the ring flickered for 10 seconds while EEG data were recorded. After a two-second break, the flickering continued: the white circle now flickered with the second-highest frequency from phase two, while the ring flickered with the remaining three frequencies. This procedure was repeated until the data for all four optimal frequencies were collected. The total recording time for phase three was 40 seconds, and based on these data, optimal thresholds and time segment length were determined. If one of the frequencies f_{S_i} did not reach a defined minimal threshold β_i, this frequency was replaced with the next best frequency from Step 2, and the procedure of this step was repeated.

Figure 7: Two subjects during the driving task. The robot-GUI is in camera mode. The subject in (a) sees a text message instructing him to select the button "look up". The subject in (b) is one step further and is facing the hidden secret number, which is attached underneath a drawer.

2.4. Procedure. In order to analyze the variability of the BCI performance, a pre- and postquestionnaire system (similar to [21]) was employed. After signing the consent form, each subject completed a brief prequestionnaire and was prepared for the EEG recording. Subjects were seated in a comfortable chair in front of the monitor, the electrode cap was placed, and electrode gel was applied. First, the subjects went through the steps of the Wizard, and BCI key parameters (stimulation frequencies, classification thresholds, and optimal time segment length) were determined. Then the experimenter started the robot-GUI (Figure 3). The control task was to drive the MRC through a predetermined route (Figure 8). If the robot-GUI software recognized a wrong command, for example, "turn right" instead of "turn left", this command was not sent, so that the MRC would not drive off the track (Figure 7). If the subject made a few mistakes at the beginning and was willing to start over again (this occurred with a small number of subjects), the experimenter moved the MRC back to the starting point and restarted the robot-GUI. In the very few cases where the wireless connection to the MRC was interrupted (e.g., the MRC was mechanically unable to execute the command that was correctly selected in the robot-GUI by the user), the experimenter manually repositioned the MRC to the next turning point.

At a certain turning point (represented by a blue square in Figure 8, when approximately 60% of the driving task was completed), the camera was facing a text message which instructed the subjects to look up (user actions are represented with circled numbers, e.g., 5 in this case).

Figure 8: Track for the mobile robotic car. Subjects had to steer the MRC through a 15-meter-long course with 10 turns in order to reach the finish line. At a certain turning point (marked with a blue square), the MRC's camera faced a text message that instructed the subject to switch to camera mode. The task was to examine the surroundings and to look for a hidden number which was attached underneath a drawer (otherwise not visible). One example of the optimal path is shown in Figure 9.

At this point, the user had to switch to camera mode and select the command "look up". Now the subject was able to see a hidden secret number, which was attached to the bottom of a drawer (located at a height of 50 cm, facing the floor, not normally visible to the people in the room). The numbers varied randomly between the subjects and were only accessible via the MRC camera. After seeing the number on the screen, the subject had to change back to drive mode and continue the driving task. When arriving at the finish line, the camera faced another text message, which again instructed the subjects to look up. After changing to camera mode and selecting "look up", the camera faced a sign displaying the word "finish", and the robot-GUI stopped automatically. All of the EEG data and performed commands were stored on the hard drive of the desktop PC for further evaluation. The values of ITR and accuracy were calculated automatically by the software. The ITR was calculated using the formula presented by Wolpaw et al. in [43]; the number of possible choices for classification was equal to four, the same for each mode (drive mode or camera mode). After finishing the experiment, the subject filled out the postquestionnaire.
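For reference, the Wolpaw formula [43] for N classes and accuracy P gives the bits per selection B below; scaling B by the number of issued commands per minute reproduces the per-subject values in Table 3 (e.g., subject 13: 100% accuracy gives B = 2 bits, and 26 commands in 155.49 s yield 2 · 60 · 26/155.49 ≈ 20.07 bits/min):

B = \log_2 N + P \log_2 P + (1 - P) \log_2 \frac{1 - P}{N - 1},
\qquad
\mathrm{ITR} = B \cdot \frac{60 \, C_n}{T} \ \text{[bits/min]},

where C_n is the total number of commands and T the total task time in seconds.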


Figure 9: This chart presents, exemplarily, the BCI performance for subject 59 (optimal path with 100% accuracy). The top row shows the raw EEG signal acquired from channel Oz (one of eight EEG electrodes). The remaining graphs show the SNR values during the whole experiment (the dashed line represents the threshold for each command) for the four commands forward/up, turn left, turn right, and camera/drive; the stimulation frequencies for those commands were 6.67, 7.50, 7.06, and 6.32 Hz, respectively. The x-axis, the same for all graphs, represents the total duration of the experiment (150 seconds). The markings above each graph represent the exact time of each classification; circled numbers below the x-axis correspond to the task actions.

3. Results

The results of the driving performance are shown in Table 3. All 61 subjects were able to control the BCI application and completed the entire driving task. The overall mean (SD) ITR of the driving performance was 14.07 (4.44) bits/min; the mean (SD) accuracy was 97.14% (3.73). Except for one, all subjects reached accuracies above 80%, 40 subjects reached accuracies above 90%, and nineteen of them even completed the steering task without any errors. The mean (SD) accuracy of each action (commands required to complete the task) was 96.14% (0.02). In the prequestionnaire, subjects answered questions regarding gender, the need for vision correction, tiredness, and BCI experience, as displayed in Table 2. Eighteen female subjects (30%) with an average (SD) age of 22.67 (4.30) years and 43 male subjects with an average (SD) age of 22.60 (5.32) years participated in the experiment. The influence of gender was also tested: the 18 female subjects achieved a mean (SD) accuracy and ITR of 93.24% (7.98) and 14.05 (5.73) bits/min, respectively, and the 43 male subjects achieved a mean (SD) accuracy and ITR of 92.95% (6.59) and 14.07 (5.08) bits/min, respectively.


Table 2: Questionnaire results. The table presents the answers collected from the prequestionnaire. The figures show the number of respondents or the mean (SD) value.

Age [years]: 22.62 (5.00)
Gender: male 43, female 18
Vision correction: yes 22, no 39
Tiredness: 1 (not tired) 18; 2 (little tired) 23; 3 (moderately tired) 17; 4 (tired) 3; 5 (very tired) 0
Length of sleep last night [hours]: 6.75 (1.22)
Experience with BCIs: yes 6, no 55

A one-way ANOVA showed no statistically significant difference in accuracy, F(1, 59) = 0.022, p = 0.883, or ITR, F(1, 59) = 0.0003, p = 0.987, between those two groups. The influence of vision correction on accuracy and ITR was also tested. A total of 22 subjects required vision correction in the form of glasses or contact lenses (as they stated in the questionnaire). They achieved a mean (SD) accuracy and ITR of 91.56% (6.36) and 12.78 (4.02) bits/min, respectively. These results were compared to subjects who did not require vision correction, who achieved a mean (SD) accuracy and ITR of 93.92% (5.35) and 14.84 (4.66) bits/min, respectively. A one-way ANOVA showed no statistically significant influence of vision correction on accuracy, F(1, 59) = 1.664, p = 0.202. There was also no significant effect of vision correction on ITR, F(1, 59) = 2.263, p = 0.138. An evaluation of tiredness was also carried out on the basis of the answers given in the prequestionnaire (scale from 1 to 5). Results of the prequestionnaire are shown in Table 2. A one-way ANOVA revealed no significant effect of tiredness on accuracy, F(4, 56) = 0.335, p = 0.853, or ITR, F(4, 56) = 0.067, p = 0.991.
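Such a test is a standard one-way ANOVA; with per-subject accuracies grouped by a questionnaire answer, it can be reproduced along these lines (a sketch with synthetic data standing in for the real per-subject values):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Synthetic per-subject accuracies for the two gender groups (18 female,
# 43 male); the real values come from Table 3
acc_female = rng.normal(93.24, 7.98, size=18)
acc_male = rng.normal(92.95, 6.59, size=43)

F, p = f_oneway(acc_female, acc_male)  # two groups: equivalent to a t-test
print(F, p)  # the paper reports F(1, 59) = 0.022, p = 0.883 on the real data
```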

4. Discussion

In the presented results, the overall accuracy was calculated for all commands in drive mode and in camera mode, without distinguishing between them, since four commands for SSVEP classification were always present. Even though different commands were sent, the command classification remained the same. A comparison of this study to other publications on steering a robot with an SSVEP-based BCI is presented in Table 4.

In order to test the influence of the video overlay, we compared the accuracy results with a demographic study [21], in which 86 subjects achieved a mean (SD) accuracy of 92.26% (7.82) while driving a small robot through a labyrinth with a four-class SSVEP-based BCI. An unpaired t-test showed no statistically significant difference, t(145) = −0.089, p = 0.397, compared to the present values. This shows that the video overlay did not negatively influence the BCI performance. Gergondet et al. listed three factors with respect to how live video feedback in the background can influence the SSVEP stimulation: distraction due to what is happening in the video, distraction due to what is happening around the user, and variation in luminosity below the stimuli shown to the user [30]. Kapeller et al. showed that there is a slight decrease in performance for an underlying video sequence due to a distractive environment and the small targets used [27]. Contrary to the findings reported by Gergondet et al. and Kapeller et al., the dynamic background did not noticeably affect the BCI performance (unpaired t-test result). We assume that this was due to the fact that stimulation frequencies and minimal time segment classification windows were adjusted individually and that relatively large stimulation boxes were used (visual angle).

The high accuracies of the SSVEP-based BCI system support the hypothesis that a BCI can be used to control a complex driving application. In comparison to the study by Diez et al. [31], the MRC self-driving behavior was limited to a preset number of turning points and a predefined path. Although the robot presented by Diez et al. could avoid obstacles by itself (limited algorithm reaction), it decreased its speed for trajectory adjustment and in some cases stopped to avoid collision, and further user commands were necessary. While Diez et al. reported a mean time of 233.20 seconds for the second destination (13.9-meter distance), in this study the achieved mean time was 207.08 seconds (15-meter distance). As pointed out by Diez et al., the overall time for completing the task depends not only on the robot but also on the subject's performance controlling the BCI system [31]. In our study, the high accuracies of the BCI system were achieved due to the key parameter adjustment carried out by the Wizard software.

The main difference between the robot used by Diez et al. and the MRC presented in this study is the extended control possibility of the interface, which also allows for controlling the camera. Both robots were designed for different purposes: Diez et al. mainly designed their robot for driving from point A to point B without any additional interaction with the environment, as they planned to use the system for wheelchair control in the future. By contrast, in our design, the MRC itself is the interaction device. Equipped with a display and a two-way audio system, it might serve as a remote telepresence in the future.

The SSVEP paradigm allows for a robust implementation of four driving commands. Compared to robots constructed for the P300 BCI approach, where the user faces large stimulation matrices spread across the entire screen, the number of simultaneously displayed targets here is only four, and hence the load on the visual channel is considerably lower in comparison. Therefore, the user might find it easier to concentrate on the driving task. On the other hand, the lower number of command options reduces freedom and speed. However, a shared-control paradigm allows for complex tasks to be executed with minimal effort.


Table 3: Results of the driving performance. Total time each subject required for steering the MRC through the route (15 meters and 10 turns, including camera movements). 26 correct commands (3 commands in camera mode and 23 commands in drive mode) were necessary to complete the task. Accuracy (Acc) is the number of correct commands (26) divided by the total number of commands (C_n). Mean values are given at the bottom of the table.

Subject   Time [s]   Acc [%]   ITR [bits/min]   C_n
1         305.60      86.67      7.20           30
2         283.46      89.66      8.32           29
3         284.07      81.25      6.80           32
4         280.11      81.25      6.90           32
5         252.59      81.25      7.65           32
6         190.02      92.86     13.40           28
7         322.06      96.30      8.62           27
8         310.07      70.27      4.66           37
9         158.13      92.86     16.10           28
10        371.82      89.66      6.35           29
11        241.72      86.67      9.10           30
12        151.33      92.86     16.83           28
13        155.49     100.00     20.07           26
14        210.54      96.30     13.18           27
15        208.41     100.00     14.97           26
16        241.62      83.87      8.52           31
17        241.72      86.67      9.10           30
18        145.13      96.30     19.12           27
19        167.48      96.30     16.57           27
20        220.80      92.86     11.53           28
21        194.19      96.30     14.29           27
22        431.54      92.86      5.90           28
23        163.52     100.00     19.08           26
24        171.84      86.67     12.80           30
25        248.12      83.87      8.30           31
26        131.93     100.00     23.65           26
27        161.38     100.00     19.33           26
28        184.75      96.30     15.02           27
29        144.22     100.00     21.63           26
30        167.48      92.86     15.20           28
31        216.63      92.86     11.75           28
32        355.16      81.25      5.44           32
33        244.06      86.67      9.01           30
34        152.14     100.00     20.51           26
35        195.61      96.30     14.18           27
36        239.48      83.87      8.60           31
37        166.36      89.66     14.18           29
38        172.45     100.00     18.09           26
39        218.97      83.87      9.40           31
40        165.65      89.66     14.25           29
41        146.66     100.00     21.27           26
42        186.27      89.66     12.67           29
43        221.31      92.86     11.51           28
44        259.29      86.67      8.48           30
45        157.02     100.00     19.87           26
46        147.88      96.30     18.76           27
47        187.18      96.30     14.82           27
48        145.54     100.00     21.44           26
49        160.57      96.30     17.28           27
50        204.95     100.00     15.22           26
51        228.11      96.30     12.16           27
52        160.37     100.00     19.46           26
53        236.64      81.25      8.17           32
54        148.89     100.00     20.95           26
55        150.11     100.00     20.78           26
56        139.85     100.00     22.31           26
57        147.16     100.00     21.20           26
58        242.73     100.00     12.85           26
59        150.01     100.00     20.80           26
60        164.63      96.30     16.85           27
61        178.85      96.30     15.51           27
Mean      207.08      93.03     14.07           28.11
SD         50.25       5.73      4.44            1.82
Min       131.93      70.27      4.66           26.00
Max       431.54     100.00     23.65           37.00

The command "drive forward" starts an autopilot program that guides the robot through a straight distance until it reaches a predefined stopping point.

While the MRC executed the autopilot program and followed the line for some time, subjects purposely ignored the flickering targets and concentrated on the video feedback instead. As these waiting periods were added to the actual classification times of each command that followed an autopilot command ("forward"), the ITR values are considerably low. From the researcher's point of view, a number of errors could have been avoided if some subjects had paid more attention to the live feedback. Especially in the task of reading the hidden number, subjects were trying to drive along the path without looking at the text message that appeared on the live feedback. After realizing that the robot was not turning, they looked at the video and saw the text message instructing them to look up. This is visible in the analysis of the accuracy of the robot's commands (see Figure 10). The lowest accuracy, 87.49%, was for the action "change view to the camera mode" (robot command 14); this may be caused by the fact that the subjects got used to the actions "turn left"/"turn right" and "drive forward", which they repeated six times from the starting point (see Figures 10 and 9). This could easily be avoided if the driving paths were designed with evenly distributed tasks (e.g., live scenario tasks such as reading a hidden number should occur more often).


Table 4: Comparison of BCI systems for robot navigation.

Publication                  Subjects   Classes   Literacy rate [%]   Mean accuracy [%]   Mean ITR [bits/min]   Robot type
Volosyak et al. [21]         86         4          98                 92.26               17.24                 E-puck
Zhao et al.^1 [34]           7          4         100                 90.3                24.7                  NAO
Kapeller et al.^2 [27]       11         4          91                 91.36               N/A                   E-puck
Diez et al.^3 [31]           7          4         100                 88.80*              N/A                   Pioneer 3-DX
Holewa and Nawrocka [24]     3          4         100                 73.75               11.36                 Lego Mindstorm
Choi and Jo^1 [32]           5          2         100                 84.4                11.40                 NAO
This study                   61         4         100                 93.03               14.07                 MRC

^1 Results for SSVEP paradigm. ^2 f-VEP results for 4 classes without zero class. ^3 Results for destination 1. * correct/(correct + wrong) commands.

Table 5: Distribution of the time windows T_S needed for selection of the first three commands "drive forward" and the last three commands "forward"/"look up" over all subjects.

Distribution [%] over the time windows T_S (in blocks of EEG data):

Command                      8       10      15      20      30      40      50      60 to 160
1st "drive forward"          6.56    18.03   18.03   22.95   11.48   6.56    8.20     8.20
2nd "drive forward"          8.20    22.95    8.20   19.67   11.48   8.20    4.92    16.39
3rd "drive forward"          9.84    13.11    8.20   19.67    6.56   9.84    8.20    24.59
Mean                         8.20    18.03   11.48   20.77    9.84   8.20    7.10    16.39
1st "forward"/"look up"      9.84    18.03   14.75   18.03    9.84   3.28    6.56    19.67
2nd "forward"/"look up"     11.48    16.39   16.39   14.75   13.11   9.84    1.64    16.39
3rd "forward"/"look up"     11.48    14.75   11.48   22.95    6.56   6.56    4.92    21.31
Mean                        10.93    16.39   14.21   18.58    9.84   6.56    4.37    19.13

[Figure 10: bar chart "Accuracy of each command of the robot's path"; x-axis: commands 1-26; y-axis: accuracy (%).]

Figure 10: This chart presents the accuracy of each command along the path (x-axis) and its selection accuracy relative to the previous correct selection, over all subjects.

No difference can be observed between the time needed for the selection of the command "drive forward" at the beginning of the experiment and at the end of the experiment (see Table 5); hence, neither fatigue nor practice seemed to influence performance.

The general literacy rate of 100% among test subjects indicates that SSVEP-based BCIs can be used by a larger population than previously assumed. This high literacy rate is due to the careful selection of individual SSVEP key parameters, which were determined by the presented calibration software.

4.1. Limitations. The SSVEP control application has some restrictions:

(i) The wireless connection was implemented with the UDP protocol, and some information might therefore not be received properly.

(ii) In order to maximize performance, accuracy, and user friendliness, the number of simultaneously displayed targets was restricted to four, which resulted in a considerably lower ITR.

(iii) It should also be noted that for long-term use, recalibration might be inevitable. Although the number of targets is limited to four, the SSVEP-based BCI presented here depends on the user's vision and control over his or her eye movements. It therefore has to compete with other approaches such as P300 and eye-tracking systems. Visual stimulation with low frequencies in particular is known to cause fatigue.

(iv) Moreover, subjects in this study may not be representative of the general population; they tended to be young, healthy men. Further tests with older and physically impaired persons are required.

5. Conclusions

In the present study, the SSVEP-based driving application (robot-GUI) proved to be successful and robust. The driving application with live video feedback was tested with 61 healthy subjects. All participants successfully navigated a mobile semiautonomous robot through a predetermined 15-meter-long route. BCI key parameters were determined by the Wizard software. No negative influence of the live video feedback on the performance was observed.


Such applications could be used to control assistive robots but might also be of interest for healthy users. In contrast to P300, the SSVEP stimulation has to be sufficiently strong (in terms of size or visual angle) to significantly influence the user. A common problem of all BCI studies, which also occurred here, is keeping the user motivated and focused on the task. The evaluation of the questionnaire shows that for the majority of subjects, the driving task with the SSVEP-based BCI was not fatiguing, even though lower stimulation frequencies, which usually cause more fatigue than higher frequencies, were used in the experiment. More specifically, frequencies between 6 and 12 Hz were determined in the calibration procedure, as they elicited the strongest SSVEP responses. Some subjects stated that, as they concentrated more on the camera video, they did not find the flickering annoying at all. Our results prove that SSVEP-based personalized adaptive BCI systems with a low target number can be used as an everyday assistive technology for remote robots. We have also shown that it is possible to design a four-target interface with extended capabilities such as camera and driving control. The control application presented here can be used for mobile robots as an assistive remote technology, as a game interface, for wheelchair control, or even for steering remotely controlled drones. This study should be considered a step towards resolving the problems of semiautonomous mobile robots controlled with SSVEP-based BCIs. In order to further improve this approach, the next steps should be an online parameter adaptation mechanism for improving accuracy through practice, a possibility of turning the stimulation on/off, and two-way video and audio communication for a truly remote telepresence.

Competing Interests

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interests.

Acknowledgments

The authors would like to thank Thomas Grunenberg from the Faculty of Technology and Bionics at Rhine-Waal University of Applied Sciences for building, programming, and maintaining the MRCs. The authors would also like to thank the student assistants Leonie Hillen, Francisco Nunez, Saad Platini, Anna Catharina Thoma, and Paul Fanen Wundu for their help with the study. This research was supported by the German Federal Ministry of Education and Research (BMBF) under Grants nos. 16SV6364 and 01DR14014 and the European Fund for Regional Development (EFRD, or EFRE in Germany) under Grant no. GE-1-1-047.

References

[1] M. Cheney and R. Uth, Tesla: Master of Lightning, Barnes & Noble, New York, NY, USA, 1999.

[2] W. Walter, The Living Brain, W. W. Norton & Company, New York, NY, USA, 1953.

[3] P. Flandorfer, "Population ageing and socially assistive robots for elderly persons: the importance of sociodemographic factors for user acceptance," International Journal of Population Research, vol. 2012, Article ID 829835, 13 pages, 2012.

[4] A. Nijholt, D. Tan, G. Pfurtscheller et al., "Brain-computer interfacing for intelligent systems," IEEE Intelligent Systems, vol. 23, no. 3, pp. 72–79, 2008.

[5] J. D. R. Millan, R. Rupp, G. R. Muller-Putz et al., "Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges," Frontiers in Neuroscience, vol. 4, article 161, 2010.

[6] C. Brunner, N. Birbaumer, B. Blankertz et al., "BNCI horizon 2020: towards a roadmap for the BCI community," Brain-Computer Interfaces, vol. 2, no. 1, pp. 1–10, 2015.

[7] R. Bogue, "Exoskeletons and robotic prosthetics: a review of recent developments," Industrial Robot, vol. 36, no. 5, pp. 421–427, 2009.

[8] R. Ron-Angevin, F. Velasco-Alvarez, S. Sancha-Ros, and L. da Silva-Sauer, "A two-class self-paced BCI to control a robot in four directions," IEEE International Conference on Rehabilitation Robotics (ICORR '11), vol. 2011, Article ID 5975486, 2011.

[9] L. Tonin, T. Carlson, R. Leeb, and J. D. R. Millan, "Brain-controlled telepresence robot by motor-disabled people," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4227–4230, IEEE, Boston, Mass, USA, September 2011.

[10] D. Huang, K. Qian, D.-Y. Fei, W. Jia, X. Chen, and O. Bai, "Electroencephalography (EEG)-based brain-computer interface (BCI): a 2-D virtual wheelchair control based on event-related desynchronization/synchronization and state control," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 20, no. 3, pp. 379–388, 2012.

[11] A. Chella, E. Pagello, E. Menegatti et al., "A BCI teleoperated museum robotic guide," in Proceedings of the International Conference on Complex, Intelligent and Software Intensive Systems (CISIS '09), pp. 783–788, IEEE, Fukuoka, Japan, March 2009.

[12] C. Escolano, J. M. Antelis, and J. Minguez, "A telepresence mobile robot controlled with a noninvasive brain-computer interface," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 3, pp. 793–804, 2012.

[13] F.-B. Vialatte, M. Maurice, J. Dauwels, and A. Cichocki, "Steady-state visually evoked potentials: focus on essential paradigms and future perspectives," Progress in Neurobiology, vol. 90, no. 4, pp. 418–438, 2010.

[14] D. Regan, "Some characteristics of average steady-state and transient responses evoked by modulated light," Electroencephalography and Clinical Neurophysiology, vol. 20, no. 3, pp. 238–248, 1966.

[15] Y. Frank and F. Torres, "Visual evoked potentials in the evaluation of 'cortical blindness' in children," Annals of Neurology, vol. 6, no. 2, pp. 126–129, 1979.

[16] W. Ian McDonald, A. Compston, G. Edan et al., "Recommended diagnostic criteria for multiple sclerosis: guidelines from the international panel on the diagnosis of multiple sclerosis," Annals of Neurology, vol. 50, no. 1, pp. 121–127, 2001.

[17] S. Gao, Y. Wang, X. Gao, and B. Hong, "Visual and auditory brain-computer interfaces," IEEE Transactions on Biomedical Engineering, vol. 61, no. 5, pp. 1436–1447, 2014.

[18] I. Volosyak, "SSVEP-based Bremen-BCI interface – boosting information transfer rates," Journal of Neural Engineering, vol. 8, no. 3, Article ID 036020, 2011.


[19] G. R. Muller-Putz and G. Pfurtscheller, "Control of an electrical prosthesis with an SSVEP-based BCI," IEEE Transactions on Biomedical Engineering, vol. 55, no. 1, pp. 361–364, 2008.

[20] P. Martinez, H. Bakardjian, and A. Cichocki, "Fully online multicommand brain-computer interface with visual neurofeedback using SSVEP paradigm," Computational Intelligence and Neuroscience, vol. 2007, Article ID 94561, 9 pages, 2007.

[21] I. Volosyak, D. Valbuena, T. Luth, T. Malechka, and A. Graser, "BCI demographics II: how many (and what kinds of) people can use a high-frequency SSVEP BCI?" IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 19, no. 3, pp. 232–239, 2011.

[22] J. Zhao, Q. Meng, W. Li, M. Li, and G. Chen, "SSVEP-based hierarchical architecture for control of a humanoid robot with mind," in Proceedings of the 11th World Congress on Intelligent Control and Automation (WCICA '14), pp. 2401–2406, Shenyang, China, July 2014.

[23] A. Guneysu and H. Levent Akin, "An SSVEP based BCI to control a humanoid robot by using portable EEG device," in Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '13), pp. 6905–6908, IEEE, Osaka, Japan, July 2013.

[24] K. Holewa and A. Nawrocka, "Emotiv EPOC neuroheadset in brain-computer interface," in Proceedings of the 15th International Carpathian Control Conference (ICCC '14), pp. 149–152, IEEE, Velke Karlovice, Czech Republic, May 2014.

[25] N.-S. Kwak, K.-R. Muller, and S.-W. Lee, "Toward exoskeleton control based on steady state visual evoked potentials," in Proceedings of the International Winter Workshop on Brain-Computer Interface (BCI '14), pp. 1–2, Gangwon, South Korea, February 2014.

[26] G. Bin, X. Gao, Y. Wang, B. Hong, and S. Gao, "VEP-based brain-computer interfaces: time, frequency, and code modulations," IEEE Computational Intelligence Magazine, vol. 4, no. 4, pp. 22–26, 2009.

[27] C. Kapeller, C. Hintermuller, M. Abu-Alqumsan, R. Pruckl, A. Peer, and C. Guger, "A BCI using VEP for continuous control of a mobile robot," in Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '13), pp. 5254–5257, IEEE, Osaka, Japan, July 2013.

[28] H. Wang, T. Li, and Z. Huang, "Remote control of an electrical car with SSVEP-based BCI," in Proceedings of the IEEE International Conference on Information Theory and Information Security (ICITIS '10), pp. 837–840, IEEE, Beijing, China, December 2010.

[29] Y. Wang, Y.-T. Wang, and T.-P. Jung, "Visual stimulus design for high-rate SSVEP BCI," Electronics Letters, vol. 46, no. 15, pp. 1057–1058, 2010.

[30] P. Gergondet, D. Petit, and A. Kheddar, "Steering a robot with a brain-computer interface: impact of video feedback on BCI performance," in Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN '12), pp. 271–276, Paris, France, September 2012.

[31] P. F. Diez, V. A. Mut, E. Laciar, and E. M. Avila Perona, "Mobile robot navigation with a self-paced brain-computer interface based on high-frequency SSVEP," Robotica, vol. 32, no. 5, pp. 695–709, 2014.

[32] B. Choi and S. Jo, "A low-cost EEG system-based hybrid brain-computer interface for humanoid robot navigation and recognition," PLoS ONE, vol. 8, no. 9, Article ID e74583, 2013.

[33] E. Tidoni, P. Gergondet, A. Kheddar, and S. M. Aglioti, "Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot," Frontiers in Neurorobotics, vol. 8, article 20, pp. 1–8, 2014.

[34] J. Zhao, W. Li, and M. Li, "Comparative study of SSVEP- and P300-based models for the telepresence control of humanoid robots," PLoS ONE, vol. 10, no. 11, Article ID e0142168, 2015.

[35] T. M. Rutkowski, K. Shimizu, T. Kodama, P. Jurica, and A. Cichocki, "Brain-robot interfaces using spatial tactile BCI paradigms," in Symbiotic Interaction, pp. 132–137, Springer, New York, NY, USA, 2015.

[36] C. Brunner, B. Z. Allison, D. J. Krusienski et al., "Improved signal processing approaches in an offline simulation of a hybrid brain-computer interface," Journal of Neuroscience Methods, vol. 188, no. 1, pp. 165–173, 2010.

[37] F. Gembler, P. Stawicki, and I. Volosyak, "Autonomous parameter adjustment for SSVEP-based BCIs with a novel BCI wizard," Frontiers in Neuroscience, vol. 9, article 474, pp. 1–12, 2015.

[38] E. W. Sellers, T. M. Vaughan, and J. R. Wolpaw, "A brain-computer interface for long-term independent home use," Amyotrophic Lateral Sclerosis, vol. 11, no. 5, pp. 449–455, 2010.

[39] E. M. Holz, L. Botrel, and A. Kubler, "Bridging gaps: long-term independent BCI home-use by a locked-in end-user," in Proceedings of the TOBI Workshop IV, Sion, Switzerland, 2013.

[40] O. Friman, I. Volosyak, and A. Graser, "Multiple channel detection of steady-state visual evoked potentials for brain-computer interfaces," IEEE Transactions on Biomedical Engineering, vol. 54, no. 4, pp. 742–750, 2007.

[41] I. Volosyak, T. Malechka, D. Valbuena, and A. Graser, "A novel calibration method for SSVEP based brain-computer interfaces," in Proceedings of the 18th European Signal Processing Conference (EUSIPCO '10), pp. 939–943, Aalborg, Denmark, August 2010.

[42] I. Volosyak, H. Cecotti, and A. Graser, "Optimal visual stimuli on LCD screens for SSVEP based brain-computer interfaces," in Proceedings of the 4th International IEEE/EMBS Conference on Neural Engineering (NER '09), pp. 447–450, IEEE, Antalya, Turkey, May 2009.

[43] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control," Clinical Neurophysiology, vol. 113, no. 6, pp. 767–791, 2002.



motor disabilities, myopathy) successfully controlled the robot with a similar performance. Huang et al. used an event-related desynchronization/synchronization of electroencephalogram signals to control a 2D constantly moving virtual wheelchair [10]. Four out of five healthy subjects reached mean target hit rates of 87.5% and 100% with motor imagery in two virtual games.

Several studies used the P300 paradigm (another commonly used BCI approach using the 300 ms component of an evoked potential). Chella et al. presented a Museum Robotic Guide GUI for a P300-based BCI developed at the San Camillo Institute [11]. The user can choose between two robots (Pioneer3 or PeopleBot), each located in a different place, to visit the museum. Escolano et al. reported a P300-based telepresence system that enables the user to be present in remote environments through a mobile robot using a shared-control strategy [12].

Reliable BCIs can also be realized with steady-state visual evoked potentials (SSVEPs). SSVEPs are the continuous brain responses elicited at the visual and parietal cortical areas under visual stimulation with a specific constant frequency [13]. In short, when looking at a light source that flickers with a constant frequency, the corresponding brain signals are measured with the EEG from the occipital cortex. As the specific stimulation frequencies are known a priori, the stimulus the user focuses his or her gaze on can be detected through classification algorithms that identify the stimulation frequency with the strongest amplitude, that is, through analyzing the signal-to-noise ratios.

Visual evoked potentials (VEPs), recorded in the 1960s by Regan [14] and further tested in the 1980s and 1990s, extended our knowledge about the human brain. For example, as early as 1979, Frank and Torres evaluated the use of visual evoked potentials in the diagnosis of "cortical blindness" in children [15]. VEPs can also support the diagnosis of multiple sclerosis [16].

At the beginning of the 21st century, the progress of computing power and technology allowed extensive research and development of SSVEP and its use in BCI. A recent review of visual and auditory BCIs can be found in [17]. Various applications like spelling interfaces [18] and control applications for a prosthesis [19] or for navigation [20] can be implemented with the SSVEP paradigm. In a large demographic study with 86 subjects, Volosyak et al. presented a four-class SSVEP-based BCI with LED stimulation [21]: 84 subjects reached a mean (SD) accuracy and ITR of 92.26% (7.82) and 17.24 (6.99) bits/min, respectively, with stimulation at 13, 14, 15, and 16 Hz (medium frequency set). Only 56 subjects were able to operate the BCI at high frequencies (34, 36, 38, and 40 Hz), reaching a mean accuracy and ITR of 89.16% (9.29) and 12.10 (7.31) bits/min. Zhao et al. introduced a three-layer decision tree for humanoid robot BCI interaction [22]. The SSVEP stimulation was presented on a 17″ LCD monitor. Four boxes were flickering at 5, 6, 8, and 10 Hz, with an additional 17 Hz red LED on top of the monitor. Five young subjects tested the developed software platform. The mean performance accuracy achieved by four of them was approximately 88%. Guneysu and Levent Akin presented a four-class SSVEP-based BCI with LED stimulation at 7, 9, 11, and 15 Hz; an Emotiv EPOC headset was used for EEG data acquisition, and MATLAB software (FFT with a Gaussian model) was used for dominant frequency extraction [23]. Three subjects achieved an average performance of approximately 68% at a distance of 22 cm from the LEDs. Holewa and Nawrocka presented an SSVEP-based BCI for Lego Mindstorm robot control with an Emotiv EPOC headset [24]. For visual stimulation, four LED-based panels with stimulation frequencies of 28, 30, 32, and 34 Hz were used. The panels were additionally covered with different color filters. Three subjects operated this setup with a mean accuracy and ITR of 73.75% and 11.36 bits/min. Kwak et al. presented an SSVEP control system for the exoskeleton REX [25]. Five LEDs blinking at 9, 11, 13, 15, and 17 Hz were used to select between the exoskeleton commands forward, left, standing, right, and sitting, respectively. Three subjects achieved a mean accuracy of 99.16%. In an f-VEP based BCI, each target flashes at a different constant frequency, causing a periodic sequence of evoked responses with the same fundamental frequency as that of the flickered stimulus as well as its harmonics [26]; in a c-VEP BCI, usually a pseudorandom m-sequence, phase-shifted between targets, is flashing (an m-sequence has an autocorrelation function which is a very close approximation to a unit impulse function and is nearly orthogonal to its time-lag sequence) [26]. Kapeller et al. evaluated a frequency-coded f-VEP (8.57, 10, 12, and 15 Hz) system and a code-modulated c-VEP (63-bit pseudorandom m-sequence) system as continuous robot steering input with video feedback of the robot movement [27]. Eleven healthy subjects achieved a maximum accuracy of 91.36% and 98.18% for f-VEP and c-VEP, respectively, in an online accuracy test run without zero class. Wang et al. presented a handmade remote-control electrical car which was driven using an SSVEP stimulation with blue LEDs (controlled via an ATmega8L microcontroller) [28]. Combining Average and Fast Fourier Transform calculations, they achieved average accuracies of 85%, 91%, 94%, 98%, and 100% for the different trial lengths of 0.5, 1, 2, 4, and 8 seconds, respectively. The EEG signal was recorded from electrodes O1 and O2 of the international 10-20 system. Gergondet et al. used SSVEP stimulation with approximated frequencies (a frequency coding method originally proposed by Wang et al. [29]) for steering a humanoid robot (HRP-2) [30]. Four stimulation arrows (6, 8, 9, and 10 Hz) were presented on a 17″ LCD screen with a 60 Hz vertical refresh rate. Also presented in a background area of the screen was a live video stream from the robot's point of view. Their research focused on the impact of the background (dynamic compared to static). Diez et al. tested an SSVEP-driven Pioneer 3-DX mobile robot equipped with a video camera for live feedback and an obstacle-avoidance ultrasonic sensor system [31]. The high-frequency (37, 38, 39, and 40 Hz) visual stimulation was generated with an FPGA card; stimulation modules (LED-based) were placed on the outer edge of the monitor in a top, left, right, and bottom arrangement, and the camera image was displayed live on the monitor. BCI commands were classified using a sliding-window FFT method. Seven participants navigated the robot to two different office locations. Some recent studies combine at least two of the BCI paradigms to realize a so-called hybrid BCI. Choi and Jo combined the SSVEP with event-related


desynchronization (ERD; a typical SMR-based BCI) for the navigation/exploration of a humanoid robot, with an additional P300 BCI for interaction (object recognition decision) [32]. They also tested and evaluated the use of a low-cost BCI system. The ERD and SSVEP BCIs were two-class systems; the number of classes for the P300 BCI varied (2 to 4, depending on the detected objects). Five subjects used a low-cost Emotiv EPOC headset to perform the experiment. The overall accuracies achieved with the ERD and SSVEP protocols were 84.6% and 84.4%, respectively; the P300 protocol reached an accuracy of 91% for two objects and 89.5% for four objects through the robot's camera. The scored ITR value for the navigation/exploration task was 18.1 bits/min and for the object recognition task 2.1 bits/min. The authors reported that switching between SSVEP and ERD caused a drop in accuracy down to 73%. The robot's point of view was shown in the middle of the monitor (live feedback); the SSVEP stimulation was shown at the sides, and the P300 was shown directly in the live feedback of the robot's view. Tidoni et al. presented a study where nine subjects (located in Italy) remotely controlled a humanoid robot (HRP-2, located in Japan) [33]. Subjects tested synchronous/asynchronous audio-visual feedback, and it was evaluated whether a mirror reflection of the robot (in its field of view) improved the performance of an SSVEP-based BCI with video feedback. The main task was to navigate the robot and to move a bottle from one table to another. A six-class SSVEP-based BCI was used; the stimulation and the video feedback were presented on the same screen. The flashing frequencies were 6, 8, 9, 10, and 14 Hz, with an additional zero class when the subject did not gaze at any target. Due to the remote distance, the feedback action was visible to the subject after approximately 800 ms. Zhao et al. compared a four-class SSVEP-based BCI to a six-class P300-based BCI by evaluating the achieved control accuracies of robot navigation tasks [34]. For both systems (SSVEP and P300), the modular platform Cerebot was used, and stimuli were presented on an LCD monitor. Seven subjects took part in the experiment and achieved a mean accuracy and ITR of 91.3% and 18.8 bits/min for the SSVEP approach; for the P300 approach, 90.3% and 24.7 bits/min were achieved.

Rutkowski et al. implemented a robot control application using tactile BCIs [35]. They tested two tactile stimulation configurations: a pin-pressure BCI (arranged in a 3 × 3 matrix) and a full-body vibrotactile BCI (vibrotactile generators placed on subjects' backs, shoulders, arms, and legs). A humanoid NAO robot was navigated using six predefined commands. Five subjects could control the robot using the pin-pressure BCI, and three subjects successfully tested the full-body vibrotactile BCI. The authors demonstrate the reliability of the tactile BCI paradigm for robot navigation.

All listed studies have to deal with the main engineering problems regarding BCI technologies: EEG-based BCIs are relatively slow and can sometimes be unreliable. Maximizing the speed of a BCI while still maintaining high accuracy leads to certain limitations in the number of targets and in system speed (target selection). Moreover, it is often reported that a certain number of subjects are not able to control BCIs (so-called BCI illiterates, e.g., [21, 36]). We chose the

SSVEP paradigm as it is reported to be among the fastest and most reliable BCI strategies, with short or no training time needed [13]. In our recent study, after five years of research, we developed an SSVEP calibration software (SSVEP Wizard) for LCD monitors that determines SSVEP key parameters for each subject [37]. The selected stimulation frequencies are those with the highest SSVEP response in the visual cortex. We have shown that with optimized parameters the SSVEP BCI literacy rate can reach 100%. This calibration software is also used here to determine and optimize the user-specific SSVEP parameters. Some SSVEP-based applications described problems regarding video feedback. The research of Gergondet et al. focused on the impact of the background (dynamic compared to static) on the BCI performance; they reported that a dynamic background negatively affected the SSVEP stimulation [30]. Kapeller et al. noted a slight decrease in performance for an SSVEP-based BCI with background video and small targets [27]. In the approach presented by Diez et al., the stimulation was separated from the video feedback stream (outer LEDs), and no negative influence of the live feedback was reported [31].

SSVEP-based BCI systems for robot control usually use four or five control commands. A higher degree-of-freedom control is difficult to realize because multitarget BCIs usually have a lower accuracy. A countermeasure to the limitations in degrees of freedom can be a shared-control approach; that is, the robot can share some specific control commands with the human user, which allows for the faster execution of complex tasks. The so-called shared-control principle was initially explored for BCI-controlled wheelchairs and recently applied to BCI telepresence [12]. BCIs with only a few targets are more robust and therefore more suitable for control applications. There are already numerous existing BCI technologies that patients use at their homes [38, 39]. However, for the last decade, publications about SSVEP-based BCIs have been sparse: the PubMed database shows 4200 results for the search terms "brain-computer interface"/"brain-machine interface" but only 185 results for the term "SSVEP". With this paper, we would like to contribute to the research of SSVEP-based BCIs.

In this paper we present a four-class SSVEP-based BCI control application with live video feedback. We also present a self-made, low-cost, semiautonomous mobile robotic car (MRC) with live video feedback, controlled by a BCI application. Unlike the robotic cars developed in the above-mentioned research studies, not only the vehicle itself but also the camera's pan-tilt position was controlled via SSVEP. The main task of this BCI study was to maneuver the MRC through a predetermined route and to examine the surroundings (live scenario task). The BCI performance was tested with 61 subjects. BCI key parameters, for example, stimulation frequencies, were determined individually by the Wizard software. In this study we tested the following hypotheses:

(i) Is it possible to use an SSVEP-based BCI stimulation and a background video on the same screen without the negative influence of the presented video, as reported in, for example, [27, 30]?


(ii) How does the video feedback affect BCI performance when simultaneously presented with the SSVEP-based BCI stimulation?

(iii) Is our self-made semiautonomous mobile robotic car suitable for future use as an assistive technology?

(iv) How did the subjects perform using the presented SSVEP-based BCI control application compared to results presented in, for example, [31]?

The paper is organized as follows: Section 2 describes the experimental setup and presents details about the experimental procedure and the hardware and software solutions. The results are presented in Section 3, followed by a discussion and conclusion in Sections 4 and 5, respectively.

2. Material and Methods

2.1. Subjects. Everyone who volunteered to participate in the study became a research subject after reading the subject information sheet. This study was carried out with written informed consent from all subjects, in accordance with the Declaration of Helsinki. 61 subjects (30% female) with a mean (SD) age of 22.62 (5.00) years participated in the study, all healthy and all students or employees of the Rhine-Waal University of Applied Sciences in Kleve. The EEG recording took place in a normal PC laboratory room (≈36 m²). The LCD screen was located in front of windows, and the luminance was kept at an acceptable level in order not to disturb the subjects. If necessary, the outer blinds were pulled down during the day, or the light was turned on during the evening. None of the subjects had neurological or visual disorders. Spectacles were worn when appropriate. Subjects did not receive any financial reward for their participation in this study.

2.2. Hardware

2.2.1. Mobile Robotic Car (MRC). Four identically constructed mobile robotic cars (MRCs) were used in the experiment, all set up with the same hardware and software configurations. Four wheels were attached to a preassembled case (14 cm × 19 cm) with built-in DC motors. A rechargeable battery (6× AA NiMH, 1900 mAh) was placed inside. Four light sensors (each with a corresponding red LED, L-53SRC-F, for luminescence, positioned side by side) were mounted on a prototype board at the front of the MRC. Two of those sensors were connected to the analog input pins of a BeagleBone Board Rev. A6 at the center and two at the sides. The DC motors were controlled by an Atmega168-based unit. The camera, a Logitech C210 (V-U0019), was mounted at the front of the MRC on two servos (RB-421, RobotBase, China) simulating a pan-tilt head and was connected to the BeagleBone USB port. The camera was controlled in five predefined orientations: looking down, forward, up, to the left, and to the right. The connection with the MRC was established through UDP via a HAMA N150 2in1-WLAN-ADAPTER connected with an Ethernet cable to the BeagleBone of the MRC and a USB WiFi adapter (D-Link DWL-G122) connected to the desktop computer. All the electronic components

Figure 1: Mobile robotic car (MRC). On top of the MRC: the BeagleBone Board Rev. A6, the Atmega168-based motor drive shield, and a small HAMA WiFi WLAN Access Point. At the front: the red LEDs of the light sensors that allowed the MRC to follow the white line in autopilot mode, and the Logitech C210 camera mounted on 2 servos, creating the pan-tilt head for 2-DoF (degree-of-freedom) control. The rechargeable battery is mounted inside the main case. MRC dimensions are 25 × 25 × 12 cm.

(boards and HAMA N150 Access Point) were mounted on top of the MRC. The BeagleBone was equipped with an 8 GB SD card running Ubuntu ARM (Linux distribution version 12.04) with MJPG-Streamer installed (version 0.1, http://mjpg-streamer.sf.net) for video streaming. One of the used MRCs is shown in Figure 1.
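As an illustration of this link, a minimal sketch of sending a control command over UDP is given below. The MRC's actual wire format, address, and command strings are not specified in the paper, so all names here are hypothetical.

```python
# Minimal sketch of the UDP command link to the MRC (illustrative only):
# the real firmware's command encoding, IP address, and port are unknown.
import socket

MRC_ADDRESS = ("192.168.2.101", 5005)  # hypothetical BeagleBone IP/port

def send_command(command: str) -> None:
    """Send one control command as a fire-and-forget UDP datagram.

    UDP gives no delivery guarantee, matching the paper's remark that
    some information might not be received properly.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(command.encode("ascii"), MRC_ADDRESS)

send_command("DRIVE_FORWARD")  # e.g., start the line-following autopilot
```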

2.2.2. Data Acquisition and Feature Extraction. The subjects were seated in front of a 24″ LCD screen (BenQ XL2420T or BenQ XL2411T, resolution 1920 × 1080 pixels, vertical refresh rate 120 Hz) at a distance of approximately 60 cm. The computer system, based on an Intel Core i7 3.40 GHz, was running Microsoft Windows 7 Enterprise. Standard passive Ag/AgCl electrodes were used to acquire the signals from the surface of the scalp. The ground electrode was placed over AFz, the reference electrode was placed over Cz, and eight signal electrodes were placed at predefined locations on the EEG cap marked with Pz, PO3, PO4, O1, O2, Oz, O9, and O10, according to the international 10-20 system of EEG electrode placement. Standard abrasive electrolytic gel was applied between the electrodes and the scalp to bring impedances below 5 kΩ. An EEG amplifier, g.USBamp (Guger Technologies, Graz, Austria), was utilized. The sampling frequency was set to 128 Hz. During the EEG signal acquisition, an analog bandpass filter between 2 and 30 Hz and a notch filter around 50 Hz were applied directly in the amplifier.

The minimum energy combination (MEC) method was originally introduced by Friman et al. in [40]. In this study, a refined version of the MEC, as presented by Volosyak in [18], was used. In the following, we present this method in a very short form.

2.2.3. SSVEP Signal Model. We can model the signal recorded at a given electrode i (against the reference electrode) at time t as follows:

  y_i(t) = \sum_{k=1}^{N_h} (a_{i,k} \sin 2\pi k f t + b_{i,k} \cos 2\pi k f t) + E_{i,t}.   (1)


The first part is the SSVEP evoked response, consisting of sine and cosine functions of the frequency f and its harmonics k with corresponding amplitudes a_{i,k} and b_{i,k}. The last part, E_{i,t}, represents the noise component of electrode i: other activities not related to the SSVEP evoked response, and artifacts. The N_t samples of the i-th EEG signal are stored in vector form, y_i = X\tau_i + E_i, where y_i = [y_i(1), ..., y_i(N_t)]^T is an N_t × 1 vector, X is the SSVEP model matrix of size N_t × 2N_h containing the \sin(2\pi k f t) and \cos(2\pi k f t) pairs in its columns, and the vector \tau_i contains the corresponding amplitudes. Based on the assumption that the nuisance is stationary within the short time segment length, the signals from the N_y electrodes can be stored in matrix form, Y = [y_1, ..., y_{N_y}].

2.2.4. Minimum Energy Combination. In order to cancel out the nuisance and noise and to boost the SSVEP response, channels are created as a weighted sum of the N_y electrodes: s = \sum_{i=1}^{N_y} w_i y_i = Yw. Several sets S of N_s channels can be created with different combinations of the original electrode signals, S = YW, where W is an N_y × N_s matrix that contains the different weight combinations. The structure of W is based on the minimum energy combination technique. In order to use this combination method for cancelling the nuisance signal, in a first step any potential SSVEP component needs to be removed from the signal: \tilde{Y} = Y - X(X^T X)^{-1} X^T Y. The next step is to find a weight vector \hat{w} that minimizes the resulting energy of the combination of the electrodes' noise signals. The last step is the optimization problem for minimizing the nuisance and noise component:

  \min_{\hat{w}} \|\tilde{Y}\hat{w}\|^2 = \min_{\hat{w}} \hat{w}^T \tilde{Y}^T \tilde{Y} \hat{w}.   (2)

The solution to the minimization problem is the eigenvector v_1 corresponding to the smallest eigenvalue of the symmetric matrix \tilde{Y}^T\tilde{Y}, and the energy of the resulting combination equals the smallest eigenvalue \lambda_1 of this matrix. In order to discard up to 90% of the nuisance signal, the total number of channels is selected by finding the smallest value for N_s that satisfies the following equation:

  \frac{\sum_{i=1}^{N_s} \lambda_i}{\sum_{j=1}^{N_y} \lambda_j} > 0.1.   (3)

To detect the SSVEP response for a frequency, the power of that frequency and its N_h harmonics is estimated:

  \hat{P} = \frac{1}{N_s N_h} \sum_{l=1}^{N_s} \sum_{k=1}^{N_h} \|X_k^T s_l\|^2.   (4)

To prevent overlapping between frequencies, we use N_h = 2 in the actual system implementation. The power estimation of the SSVEP stimulation frequencies and additional frequencies (N_f in total) is normalized into probabilities:

  p_i = \frac{\hat{P}_i}{\sum_{j=1}^{N_f} \hat{P}_j}, \quad \text{with} \quad \sum_{i=1}^{N_f} p_i = 1,   (5)

Table 1: Overview of the time windows T_S of EEG data used for SSVEP classification. Eleven time segments between 812.5 ms (8 × 13 samples, or 8 blocks of EEG data) and 16250 ms were used in this study.

Time [ms] | Blocks of EEG data
812.5 | 8
1015.6 | 10
1523.4 | 15
2031.3 | 20
3046.9 | 30
4062.5 | 40
5078.1 | 50
6093.8 | 60
7109.4 | 70
8125.0 | 80
16250.0 | 160

where \hat{P}_i is the power estimation of the i-th signal, 1 \le i \le N_f. To increase the difference between the results, we apply a Softmax function:

  p_i' = \frac{e^{\alpha p_i}}{\sum_{j=1}^{N_f} e^{\alpha p_j}}, \quad \text{with} \quad \sum_{i=1}^{N_f} p_i' = 1,   (6)

where \alpha is set to 0.25. The output values of the Softmax function are between 0 and 1, and their sum is equal to 1. The classification output C_o for the signal of the i-th frequency is the result of three conditions: the probability p_i' of the i-th frequency is the highest, it surpasses a predefined border (p_i' \ge \beta_i), and the detected frequency is one of the stimulation frequencies. After each classification, to prevent wrong classifications, the classifier output was rejected for the next 9 blocks (approximately 914 ms of EEG data at a sampling rate of 128 Hz), and the visual stimulation was paused for that time period. This classification method is the next iteration of the Bremen-BCI, as presented, for example, in [18]. An example of the recorded signal and the SNR can be seen in Figure 9.
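Putting the pieces together, a simplified sketch of the detection chain (power estimation (4), normalization (5), Softmax (6), and the three classification conditions) might look like the following. It reuses ssvep_model_matrix and mec_weights from the sketches above and is not the authors' implementation; for brevity one weight matrix is shared across frequencies, although the method computes it per frequency:

```python
import numpy as np

def classify(Y, freqs, W, thresholds, n_stim=4, alpha=0.25, fs=128.0):
    """freqs: stimulation frequencies first, then additional frequencies."""
    n_t = Y.shape[0]
    S = Y @ W                                    # N_s combined channels
    powers = []
    for f in freqs:
        X = ssvep_model_matrix(f, n_t, n_harmonics=2, fs=fs)
        proj = X.T @ S                           # (2*N_h) x N_s projections
        powers.append(np.mean(proj ** 2))        # proportional to (4)
    p = np.asarray(powers) / np.sum(powers)      # probabilities, cf. (5)
    e = np.exp(alpha * p)
    p_soft = e / np.sum(e)                       # Softmax, cf. (6)
    i = int(np.argmax(p_soft))
    # three conditions: highest probability, above its threshold beta_i,
    # and among the actual stimulation frequencies
    if p_soft[i] >= thresholds[i] and i < n_stim:
        return i                                 # detected command index
    return None                                  # no classification yet
```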

The SSVEP classification was performed online every 13 samples (ca. 100 ms) on the basis of the adaptive time segment length of the acquired EEG data introduced in [18]. If no classification was possible, the time segment length window was gradually extended to the next predefined window lengths (see Table 1 and Figure 2).
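A sketch of this adaptive windowing, using the block sizes from Table 1 and the classify helper from the previous sketch, with hypothetical state handling:

```python
BLOCK = 13                                  # samples per ~100 ms block at 128 Hz
WINDOW_BLOCKS = [8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 160]  # Table 1

def on_new_block(buffer, state, freqs, weights, thresholds):
    """Called after every 13 new samples; returns a command index or None.

    `state` is a dict initialized as {"level": 0, "reject": 0}.
    """
    if state["reject"] > 0:                 # gaze-shift pause after a hit
        state["reject"] -= 1
        return None
    n = WINDOW_BLOCKS[state["level"]] * BLOCK
    if len(buffer) < n:
        return None
    result = classify(buffer[-n:], freqs, weights, thresholds)
    if result is None:
        if state["level"] < len(WINDOW_BLOCKS) - 1:
            state["level"] += 1             # extend to the next window length
        return None
    state["level"] = 0                      # reset after a classification
    state["reject"] = 9                     # reject ~914 ms of output
    return result
```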

2.3. Software

2.3.1. Graphical User Interface. A custom-made graphical user interface (robot-GUI) was designed to control the MRC. DirectX 9 was used to draw the images (buttons and a real-time picture from the actual camera view in the background) on the monitor, and the OpenCV library (version 2.4.10, http://www.opencv.org) was used for image acquisition. Each received frame was rendered into a background texture as soon as it was received (max. 30 Hz; the frame rate was limited by the camera used). Each received frame (640 × 480 pixels) was rescaled to 1280 × 960 pixels.
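For illustration, grabbing the MJPG-Streamer feed can be sketched with OpenCV as below; the stream address is hypothetical, and "?action=stream" is the usual MJPG-Streamer endpoint, while the actual robot-GUI renders the frames via DirectX textures:

```python
import cv2

# Hypothetical address of the MRC's MJPG-Streamer feed:
cap = cv2.VideoCapture("http://192.168.2.101:8080/?action=stream")
while True:
    ok, frame = cap.read()                  # 640x480 frame from the car
    if not ok:
        break
    frame = cv2.resize(frame, (1280, 960))  # upscale as the GUI does
    cv2.imshow("MRC live feedback", frame)
    if cv2.waitKey(1) == 27:                # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```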


[Figure 2: timing diagram; x-axis: time (ms) from 0 to 24000, showing successive command classifications with window lengths of 8, 10, 15, 20, 80, and 160 blocks (812.5 ms up to 16250 ms).]

Figure 2: Changes in the time segment length after a performed classification, in case no distinct classification can be made and the actual time t allows for the extension to the next predefined value. After each classification (gray bars of 8, 10, 20, and 160 blocks), additional time for gaze shifting was included (black), in which the classifier output was rejected for 9 blocks.

Figure 3: A screenshot of the robot-GUI in drive mode. In this situation, the user had to select the "drive forward" command, which caused the robot to execute an autopilot program that uses the built-in light sensors to follow the white line.

Figure 4: Signs positioned at the turning points and facing the camera showed the correct direction for following the track. Here, in the screenshot, the robot automatically stopped after reaching this turning point, and a sign instructed the user to select the command "turn left".

Due to the ratio of the streamed video frames and the resolution of the monitors used, a gray background in the form of a frame was implemented to prevent unnatural video scaling. The starting dimensions of the buttons were 146 × 512 pixels for left and right, 576 × 128 pixels for the top, and 536 × 96 pixels for the bottom buttons (see Figures 3 and 4). During the runtime, the size of the buttons changed continuously, depending on the calculated probabilities of the i-th stimuli in (6). The robot-GUI had two modes: a drive mode and a camera mode. In drive mode, the subject was able to send the following commands to the MRC: "drive forward" (a discrete command using the line sensors to follow the taped line on autopilot to the next direction change), "turn left", "turn right", and "switch to camera mode". In camera mode, the subject was able to send the following commands: "look left", "look right", "look up", and "switch to drive mode". While looking up, the button "look up" changed its content and functionality to "look forward". The control buttons were red in drive mode and yellow in camera mode to increase user friendliness. The aspect ratio of the video stream was 4:3, so the computer screen was not entirely filled out. If the command "drive forward" was selected by the user, an autonomous program was executed on the MRC. The MRC used the four line sensors to follow the white line taped to the floor until it arrived at a turning point, where the robot stopped automatically. While driving, the MRC automatically made small corrections: when the line was detected by one of the inner sensors, the rotation of the wheels on the same side was decreased by 50% until another sensor detected the line; similarly, when the line was detected by one of the outer sensors, the wheels on the same side were stopped, so that the MRC could drive in a curve in order to get back on track. If one of the turning commands was selected, the MRC made an approximately 90-degree turn in the corresponding direction. In drive mode, the camera was facing slightly towards the ground (approximately 20 degrees down from the frontal view, the "look down" position). When switching to camera mode (represented by the word "CAMERA" at the bottom of the robot-GUI), the camera changed to the "look forward" position (switching back to drive mode changed the camera position back to looking down). All the EEG data and the received video stream from the MRC were stored anonymously.
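The correction rule described above fits in a few lines; the sketch below assumes a sensor order of outer-left, inner-left, inner-right, outer-right and a hypothetical wheel-speed interface (the real autopilot runs on the MRC's BeagleBone):

```python
def follow_line_step(sensors, base_speed=1.0):
    """One control step of the line-following autopilot (illustrative).

    sensors: booleans (outer_left, inner_left, inner_right, outer_right),
    True when the white line is detected under that sensor.
    """
    outer_l, inner_l, inner_r, outer_r = sensors
    left, right = base_speed, base_speed
    if inner_l:            # small correction: slow the same side by 50%
        left *= 0.5
    elif inner_r:
        right *= 0.5
    if outer_l:            # strong correction: stop the same side entirely
        left = 0.0
    elif outer_r:
        right = 0.0
    return left, right     # wheel-speed commands for left/right side
```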


Figure 5: (a) shows the second step of the Wizard. The subject faced a circle consisting of small segments which flickered simultaneously with seven different frequencies. Each of the seven tested frequencies was represented by the same number of segments. The segments representing one of the seven tested frequencies were scattered in random order (b), forming the stimulation circle (a). After this multistimulation with thirteen or fourteen different frequencies (depending on the alpha test result), the EEG data were analyzed and ordered from the strongest to the lowest SSVEP response.

Figure 6: This figure shows the third step of the Wizard (a). The main frequency f_S is in the center (white circle stimulation), and the other three frequencies (from the four f_{S,i} with the strongest SSVEP response) are divided into small stimulation blocks and scattered around f_S, acting as "noise" in peripheral vision. This pattern is shown in (b). This was repeated for each f_{S,i}.

2.3.2. Wizard. The Wizard software is, generally speaking, a calibration program that autonomously determines BCI key parameters [37]. It is designed to select user-specific optimal stimulation frequencies for SSVEP-based BCIs on LCD screens. It is a development of the method presented in [41]. The Wizard chooses the four frequencies with the best SSVEP response out of a set of 14 possible stimulation frequencies f_i (ranging from 6 Hz to 20 Hz) that can be realized on a monitor screen with a vertical refresh rate of 120 Hz [42]. In the following, we provide a brief description.

Step 1 (alpha test). The subject was instructed by audio feedback to close his or her eyes. 15 frequencies in the alpha wave band (8.27, 8.57, 8.87, 8.93, 9.23, 9.53, 9.70, 10.0, 10.30, 10.61, 10.91, 11.21, 11.70, 12.0, and 12.30 Hz) were measured for 10 seconds during the closed-eyes period, and if any of those 15 frequencies would interfere with a possible stimulation frequency f_i (Δf ≤ 0.15 Hz), this frequency f_i would be filtered out.
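A sketch of this exclusion rule; whether the filter uses all 15 alpha frequencies or only those with a prominent closed-eyes response is our assumption, so the parameter below is kept explicit:

```python
def filter_candidates(candidates, prominent_alpha, delta=0.15):
    """Drop candidate stimulation frequencies within delta Hz of a
    prominent alpha-band frequency (Step 1 of the Wizard, illustrative)."""
    return [f for f in candidates
            if all(abs(f - a) > delta for a in prominent_alpha)]

# e.g., with hypothetical values:
print(filter_candidates([6.0, 10.1, 12.0, 14.4], prominent_alpha=[10.0, 12.0]))
# -> [6.0, 14.4]
```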

Step 2 (stimulation frequency selection). In order to avoid the influence of harmonic frequencies, the user had to look at two circles in sequence. Each circle contained seven stimulation frequencies, which were presented in pseudorandom order as small green circle segments (see Figure 5). This step took 20 seconds.

Step 3 (optimal classification thresholds and time segment length). In this step, the Wizard software chose the best starting time segment length T_S as well as the optimal classification thresholds β_i for the frequencies determined in the previous step. First, a white circle flickering at the frequency which had the highest SSVEP response in phase 2 was displayed. It was necessary to simulate the noise caused by peripheral vision when concentrating on the target object; for this purpose, the white circle was surrounded by a green ring containing segments which presented the remaining optimal frequencies (see Figure 6). The user was instructed by audio feedback to gaze at the white circle.


Figure 7: Two subjects during the driving task. The robot-GUI is in camera mode. The subject in (a) sees a text message instructing him to select the button "look up". The subject in (b) is one step further and is facing the hidden secret number, which is attached underneath a drawer.

The circle and the ring flickered for 10 seconds while EEG data were recorded. After a two-second break, the flickering continued. The white circle now flickered with the second-highest frequency from phase two, while the ring flickered with the remaining three frequencies. This procedure was repeated until the data for all four optimal frequencies were collected. The total recording time for phase three was 40 seconds, and based on these data, the optimal thresholds and time segment length were determined. If one of the frequencies f_{S,i} did not reach a defined minimal threshold β_i, this frequency was replaced with the next best frequency from Step 2, and the procedure of this step was repeated.

2.4. Procedure. In order to analyze the BCI performance variety, a pre- and postquestionnaire system (similar to [21]) was employed. After signing the consent form, each subject completed a brief prequestionnaire and was prepared for the EEG recording. Subjects were seated in a comfortable chair in front of the monitor, the electrode cap was placed, and electrode gel was applied. First, the subjects went through the steps of the Wizard, and the BCI key parameters (stimulation frequencies, classification thresholds, and optimal time segment length) were determined. Then the experimenter started the robot-GUI (Figure 3). The control task was to drive the MRC through a predetermined route (Figure 8). If the robot-GUI software recognized a wrong command, for example, "turn right" instead of "turn left", this command was not sent, so that the MRC would not drive off the track (Figure 7). If the subject made a few mistakes at the beginning and was willing to start over again (this occurred with a small number of subjects), the experimenter moved the MRC back to the starting point and restarted the robot-GUI. In the very few cases where the wireless connection to the MRC was interrupted (e.g., the MRC was mechanically unable to execute the command that was correctly selected in the robot-GUI by the user), the experimenter manually repositioned the MRC to the next turning point.

At a certain turning point (represented by a blue square shown in Figure 8, when approximately 60% of the driving task was completed), the camera was facing a text message which instructed the subjects to look up (user action represented with a circled number, e.g., 5 in this case).

[Figure 8: schematic of the track with "Start" and "Finish" markers; straight segments of 1 m, 1.5 m, and 2 m connect the 10 turns.]

Figure 8: Track for the mobile robotic car. Subjects had to steer the MRC through a 15-meter-long course with 10 turns in order to reach the finish line. At a certain turning point (marked with a blue square), the MRC's camera faced a text message that instructed the subject to switch to camera mode. The task was to examine the surroundings and to look for a hidden number, which was attached underneath a drawer (otherwise not visible). One example of the optimal path is shown in Figure 9.

At this point, the user had to switch to camera mode and select the command "look up". Now the subject was able to see a hidden secret number, which was attached to the bottom of a drawer (located at a height of 50 cm, facing the floor, not normally visible to the people in the room). The numbers varied randomly between the subjects and were only accessible via the MRC camera. After seeing the number on the screen, the subject had to change back to drive mode and continue the driving task. When arriving at the finish line, the camera faced another text message, which again instructed the subjects to look up. After changing to camera mode and selecting "look up", the camera faced a sign displaying the word "finish", and the robot-GUI stopped automatically. All of the EEG data and performed commands were stored on the hard drive of the desktop PC for further evaluation. The values of ITR and accuracy were calculated automatically by the software. The ITR was calculated using the formula presented by Wolpaw et al. in [43]; the number of possible choices for classification was equal to four, and it was the same for each mode (drive mode or camera mode). After finishing the experiment, the subject filled out the postquestionnaire.
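For reference, the Wolpaw ITR formula [43] with N possible choices gives B = log2 N + P log2 P + (1 - P) log2((1 - P)/(N - 1)) bits per selection, scaled by the selection rate; a small sketch (reproducing subject 26 from Table 3):

```python
import math

def wolpaw_itr(p: float, n_selections: int, duration_s: float,
               n: int = 4) -> float:
    """ITR in bits/min for accuracy p with n possible choices [43]."""
    if p >= 1.0:
        bits = math.log2(n)            # avoid log2(0) for perfect accuracy
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * n_selections / (duration_s / 60.0)

# Subject 26 in Table 3: 26 commands, all correct, in 131.93 s:
print(round(wolpaw_itr(1.0, 26, 131.93), 2))   # -> 23.65 bits/min
```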


Figure 9: This chart presents, as an example, the BCI performance for subject 59 (optimal path with 100% accuracy). The top row shows the raw EEG signal acquired from channel Oz (one of eight EEG electrodes). The remaining graphs show the SNR values during the whole experiment (the dashed line represents the threshold for each command) for the four commands forward/up, turn left, turn right, and camera/drive; the stimulation frequencies for those commands were 6.67, 7.50, 7.06, and 6.32 Hz, respectively. The x-axis, the same for all graphs, represents the total duration of the experiment (150 seconds). The markings above each graph represent the exact time of each classification; circled numbers below the x-axis correspond to the task actions.

3. Results

The results of the driving performance are shown in Table 3. All 61 subjects were able to control the BCI application and completed the entire driving task. The overall mean (SD) ITR of the driving performance was 14.07 (4.44) bits/min. The mean (SD) accuracy was 97.14% (3.73). Except for one, all subjects reached accuracies above 80%; 40 subjects reached accuracies above 90%, and nineteen of them even completed the steering task without any errors. The mean (SD) accuracy of each action (commands required to complete the task) was 96.14% (0.02). In the prequestionnaire, subjects answered questions regarding gender, the need for vision correction, tiredness, and BCI experience, as displayed in Table 2. Eighteen female subjects (30%) with an average (SD) age of 22.67 (4.30) years and 43 male subjects with an average (SD) age of 22.60 (5.32) years participated in the experiment. The influence of gender was also tested: the 18 female subjects achieved a mean (SD) accuracy and ITR of 93.24% (7.98) and 14.05 (5.73) bits/min, respectively, and the 43 male subjects achieved a mean (SD) accuracy and ITR of 92.95% (6.59) and 14.07 (5.08) bits/min, respectively. A one-way ANOVA showed no


Table 2: Questionnaire results. The table presents the answers collected from the prequestionnaire. The figures show the number of respondents or the mean (SD) value.

Age [years]: 22.62 (5.00)
Gender: male 43, female 18
Vision correction: yes 22, no 39
Tiredness: 1 (not tired) 18, 2 (a little tired) 23, 3 (moderately tired) 17, 4 (tired) 3, 5 (very tired) 0
Length of sleep last night [hours]: 6.75 (1.22)
Experience with BCIs: yes 6, no 55

statistically significant difference in accuracy, F(1, 59) = 0.022, p = 0.883, or ITR, F(1, 59) = 0.0003, p = 0.987, between those two groups. The influence of vision correction on accuracy and ITR was also tested. A total of 22 subjects required vision correction in the form of glasses or contact lenses (as stated in the questionnaire). They achieved a mean (SD) accuracy and ITR of 91.56% (6.36) and 12.78 (4.02) bits/min, respectively. These results were compared to subjects who did not require vision correction, who achieved a mean (SD) accuracy and ITR of 93.92% (5.35) and 14.84 (4.66) bits/min, respectively. A one-way ANOVA showed no statistically significant influence of vision correction on accuracy, F(1, 59) = 1.664, p = 0.202. There was also no significant effect of vision correction on ITR, F(1, 59) = 2.263, p = 0.138. An evaluation of tiredness was also calculated on the basis of the answers given in the prequestionnaire (scale from 1 to 5). Results of the prequestionnaire are shown in Table 2. A one-way ANOVA revealed no significant effect of tiredness on accuracy, F(4, 56) = 0.335, p = 0.853, or ITR, F(4, 56) = 0.067, p = 0.991.
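Such one-way ANOVAs can be reproduced with standard tools; below is a sketch using SciPy on hypothetical per-subject arrays (the values are illustrative, not the study data):

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject accuracies for the two groups being compared:
acc_female = np.array([93.24, 95.10, 88.70])
acc_male = np.array([92.95, 90.20, 96.00])
f_stat, p_value = stats.f_oneway(acc_female, acc_male)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
# The paper reports F(1, 59) = 0.022, p = 0.883 for gender vs. accuracy.
```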

4. Discussion

In the results presented, the overall accuracy was calculated for all commands in drive mode and in camera mode, without distinguishing between them, since four commands for SSVEP classification were always present. Even though different commands were sent, the command classification remained the same. A comparison of this study to other publications on steering a robot with an SSVEP-based BCI is presented in Table 4.

In order to test the influence of the video overlay, we compared the accuracy results with a demographic study [21] in which 86 subjects achieved a mean (SD) accuracy of 92.26% (7.82) while driving a small robot through a labyrinth with a four-class SSVEP-based BCI. An unpaired t-test showed no statistically significant difference, t(145) = -0.089, p = 0.397, compared to the present values. This shows that the video overlay did not negatively influence the BCI performance. Gergondet et al. listed three factors with respect to how live video feedback in the background can influence the SSVEP stimulation: distraction due to what is happening in the video, distraction due to what is happening around the user, and variation in luminosity below the stimuli shown to the user [30]. Kapeller et al. showed that there is a slight decrease in performance for the underlying video sequence due to a distractive environment and the small targets used [27].

Contrary to the findings reported by Gergondet et al. and Kapeller et al., the dynamic background did not noticeably affect the BCI performance (unpaired t-test result). We assume that this was due to the fact that stimulation frequencies and minimal time segment classification windows were adjusted individually and that relatively large stimulation boxes were used (visual angle).

The high accuracies of the SSVEP-based BCI system support the hypothesis that a BCI can be used to control a complex driving application. In comparison to the study by Diez et al. [31], the MRC self-driving behavior was limited to a preset number of turning points and a predefined path. Although the robot presented by Diez et al. could avoid obstacles by itself (limited algorithm reaction), it decreased its speed for trajectory adjustment and in some cases stopped to avoid collision, and further user commands were necessary. While Diez et al. reported a mean time of 233.20 seconds for the second destination (13.9-meter distance), in this study the achieved mean time was 207.08 seconds (15-meter distance). As pointed out by Diez et al., the overall time for completing the task depends not only on the robot but also on the subject's performance controlling the BCI system [31]. In our study, the high accuracies of the BCI system were achieved due to the key parameter adjustment carried out by the Wizard software.

The main difference between the robot used by Diez et al. and the MRC presented in this study is the extended control possibility of the interface, which also allows for controlling the camera. Both robots were designed for different purposes: Diez et al. mainly designed their robot for driving from point A to point B without any additional interaction with the environment, as they planned to use the system for wheelchair control in the future. By contrast, in our design, the MRC itself is the interaction device. Equipped with a display and a two-way audio system, it might serve as a remote telepresence in the future.

The SSVEP paradigm allows for a robust implementation of four driving commands. Compared to robots constructed for the P300 BCI approach, where the user faces large stimulation matrices spread across the entire screen, the number of simultaneously displayed targets here is only four, and hence the load on the visual channel is considerably lower. Therefore, the user might find it easier to concentrate on the driving task. On the other hand, the smaller number of command options reduces freedom and speed. However, a shared-control paradigm allows complex tasks to be executed with minimal effort.


Table 3: Results of the driving performance. Total time each subject required for steering the MRC through the route (15 meters and 10 turns, including camera movements). 26 correct commands (3 commands in camera mode and 23 commands in drive mode) were necessary to complete the task. Accuracy (Acc) is the number of correct commands (26) divided by the total number of commands (Cn). Mean values are given at the bottom of the table.

Subject   Time [s]   Acc [%]   ITR [bits/min]   Cn
1         305.60     86.67      7.20            30
2         283.46     89.66      8.32            29
3         284.07     81.25      6.80            32
4         280.11     81.25      6.90            32
5         252.59     81.25      7.65            32
6         190.02     92.86     13.40            28
7         322.06     96.30      8.62            27
8         310.07     70.27      4.66            37
9         158.13     92.86     16.10            28
10        371.82     89.66      6.35            29
11        241.72     86.67      9.10            30
12        151.33     92.86     16.83            28
13        155.49    100.00     20.07            26
14        210.54     96.30     13.18            27
15        208.41    100.00     14.97            26
16        241.62     83.87      8.52            31
17        241.72     86.67      9.10            30
18        145.13     96.30     19.12            27
19        167.48     96.30     16.57            27
20        220.80     92.86     11.53            28
21        194.19     96.30     14.29            27
22        431.54     92.86      5.90            28
23        163.52    100.00     19.08            26
24        171.84     86.67     12.80            30
25        248.12     83.87      8.30            31
26        131.93    100.00     23.65            26
27        161.38    100.00     19.33            26
28        184.75     96.30     15.02            27
29        144.22    100.00     21.63            26
30        167.48     92.86     15.20            28
31        216.63     92.86     11.75            28
32        355.16     81.25      5.44            32
33        244.06     86.67      9.01            30
34        152.14    100.00     20.51            26
35        195.61     96.30     14.18            27
36        239.48     83.87      8.60            31
37        166.36     89.66     14.18            29
38        172.45    100.00     18.09            26
39        218.97     83.87      9.40            31
40        165.65     89.66     14.25            29
41        146.66    100.00     21.27            26
42        186.27     89.66     12.67            29

Table 3: Continued.

Subject   Time [s]   Acc [%]   ITR [bits/min]   Cn
43        221.31     92.86     11.51            28
44        259.29     86.67      8.48            30
45        157.02    100.00     19.87            26
46        147.88     96.30     18.76            27
47        187.18     96.30     14.82            27
48        145.54    100.00     21.44            26
49        160.57     96.30     17.28            27
50        204.95    100.00     15.22            26
51        228.11     96.30     12.16            27
52        160.37    100.00     19.46            26
53        236.64     81.25      8.17            32
54        148.89    100.00     20.95            26
55        150.11    100.00     20.78            26
56        139.85    100.00     22.31            26
57        147.16    100.00     21.20            26
58        242.73    100.00     12.85            26
59        150.01    100.00     20.80            26
60        164.63     96.30     16.85            27
61        178.85     96.30     15.51            27
Mean      207.08     93.03     14.07            28.11
SD         50.25      5.73      4.44             1.82
Min       131.93     70.27      4.66            26.00
Max       431.54    100.00     23.65            37.00

The command "drive forward" starts an autopilot program that guides the robot through a straight distance until it reaches a predefined stopping point.
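
The line-following correction used by this autopilot is described in the Materials and Methods section: when one of the inner light sensors detects the line, the wheels on the same side slow by 50%; when an outer sensor detects it, the same-side wheels stop so the MRC curves back onto the track. The sketch below illustrates that rule only; the sensor dictionary, helper names, and speed convention are our own hypothetical framing, not the MRC firmware.

```python
# Sketch of the MRC autopilot line-following correction rule.
BASE_SPEED = 1.0

def autopilot_step(sensors: dict) -> tuple:
    """Return (left, right) wheel speeds for one control step.

    sensors: which of the four light sensors currently detect the line,
    keyed 'outer_left', 'inner_left', 'inner_right', 'outer_right'.
    """
    left = right = BASE_SPEED
    # Inner sensor on the line: slow the same-side wheels by 50%.
    if sensors.get("inner_left"):
        left *= 0.5
    elif sensors.get("inner_right"):
        right *= 0.5
    # Outer sensor on the line: stop the same-side wheels so the
    # MRC drives in a curve back onto the track.
    if sensors.get("outer_left"):
        left = 0.0
    elif sensors.get("outer_right"):
        right = 0.0
    return left, right

print(autopilot_step({"inner_left": True}))   # -> (0.5, 1.0)
```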

While the MRC executed the autopilot program and followed the line for some time, subjects purposely ignored the flickering targets and concentrated on the video feedback instead. As these waiting periods were added to the actual classification times of each command that followed an autopilot command ("drive forward"), the ITR values are considerably low. From the researcher's point of view, a number of errors could have been avoided if some subjects had paid more attention to the live feedback. Especially in the task of reading the hidden number, subjects were trying to drive along the path without looking at the text message that appeared on the live feedback. Only after realizing that the robot was not turning did they look at the video and see the text message instructing them to look up. This is visible in the analysis of the accuracy of the robot's commands (see Figure 10). The lowest accuracy, 87.49%, was for the action "change view to camera mode" (robot command 14); this may be caused by the fact that the subjects got used to the actions "turn left", "turn right", and "drive forward", which they had repeated six times from the starting point (see Figures 10 and 9). This could easily be avoided if the driving paths were designed with evenly distributed tasks (e.g., live scenario tasks such as reading a hidden number should occur more often).
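
The ITR values in Table 3 are consistent with the standard Wolpaw definition of ITR [43]: with N = 4 targets, accuracy P, Cn issued commands, and total time T in seconds, the bits per selection are B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)), and ITR = B * Cn * 60 / T bits/min. The following sketch is our own illustration of that formula; as a check, it reproduces the tabulated values for subjects 1 and 13.

```python
import math

def wolpaw_itr(n_targets, accuracy, n_commands, total_time_s):
    """Information transfer rate in bits/min (Wolpaw et al. [43])."""
    p = accuracy
    bits = math.log2(n_targets)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) \
              + (1.0 - p) * math.log2((1.0 - p) / (n_targets - 1))
    # p == 1.0 yields exactly log2(N) bits per selection.
    selections_per_min = n_commands * 60.0 / total_time_s
    return bits * selections_per_min

# Subject 1 from Table 3: 305.60 s, 86.67% accuracy, Cn = 30
print(round(wolpaw_itr(4, 0.8667, 30, 305.60), 2))   # -> 7.2
# Subject 13: 155.49 s, 100% accuracy, Cn = 26
print(round(wolpaw_itr(4, 1.0, 26, 155.49), 2))      # -> 20.07
```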


Table 4: Comparison of BCI systems for robot navigation.

Publication                  Subjects   Classes   Literacy rate [%]   Mean accuracy [%]   Mean ITR [bits/min]   Robot type
Volosyak et al. [21]         86         4         98                  92.26               17.24                 E-puck
Zhao et al.¹ [34]            7          4         100                 90.3                 2.47                 NAO
Kapeller et al.² [27]        11         4         91                  91.36               N/A                   E-puck
Diez et al.³ [31]            7          4         100                 88.80*              N/A                   Pioneer 3-DX
Holewa and Nawrocka [24]     3          4         100                 73.75               11.36                 Lego Mindstorm
Choi and Jo¹ [32]            5          2         100                 84.4                11.40                 NAO
This study                   61         4         100                 93.03               14.07                 MRC

¹Results for SSVEP paradigm. ²f-VEP results for 4 classes without zero class. ³Results for destination 1. *Correct/(correct + wrong) commands.

Table 5: Distribution of the time blocks needed for the selection of the first three commands "drive forward" and of the last three commands "forward"/"look up", over all subjects.

Distribution [%] per time window TS (in EEG data blocks):

Command                        8       10      15      20      30      40      50      60 to 160
1st "drive forward"            6.56    18.03   18.03   22.95   11.48   6.56    8.20     8.20
2nd "drive forward"            8.20    22.95    8.20   19.67   11.48   8.20    4.92    16.39
3rd "drive forward"            9.84    13.11    8.20   19.67    6.56   9.84    8.20    24.59
Mean                           8.20    18.03   11.48   20.77    9.84   8.20    7.10    16.39
1st of the last three          9.84    18.03   14.75   18.03    9.84   3.28    6.56    19.67
2nd of the last three         11.48    16.39   16.39   14.75   13.11   9.84    1.64    16.39
3rd of the last three         11.48    14.75   11.48   22.95    6.56   6.56    4.92    21.31
Mean                          10.93    16.39   14.21   18.58    9.84   6.56    4.37    19.13
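
Each row of Table 5 is simply a percentage tally of how often a given command required each predefined time-window length. A sketch of such a tally is shown below; the helper and the sample data are hypothetical, and only the bin labels follow the columns of Table 5 (selections needing 60 blocks or more are pooled into one bin).

```python
from collections import Counter

# Time-window bins in EEG data blocks, as in the columns of Table 5.
BINS = [8, 10, 15, 20, 30, 40, 50, "60 to 160"]

def distribution(blocks_per_selection):
    """Percentage of selections falling into each time-window bin."""
    counts = Counter(b if b < 60 else "60 to 160"
                     for b in blocks_per_selection)
    n = len(blocks_per_selection)
    return {b: round(100.0 * counts.get(b, 0) / n, 2) for b in BINS}

# Hypothetical example: window lengths (in blocks) needed by one
# command across several subjects.
print(distribution([8, 10, 10, 20, 20, 20, 30, 160]))
```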

[Bar chart: accuracy of each command of the robot's path; x-axis: commands 1-26; y-axis: accuracy (%).]

Figure 10: This chart presents the accuracy of each command along the path (x-axis) and its selection accuracy relative to the previous correct selection, over all subjects.

No difference between the time needed for the selection of the command "drive forward" at the beginning of the experiment in comparison to the end of the experiment can be observed (see Table 5); hence, neither fatigue nor practice seemed to influence performance.

The general literacy rate among test subjects of 100% indicates that SSVEP-based BCIs can be used by a larger population than previously assumed. This high literacy is due to the careful selection of individual SSVEP key parameters, which were determined by the presented calibration software.

4.1. Limitations. The SSVEP control application has some restrictions:

(i) The wireless connection was implemented with the UDP protocol, and some information might therefore not be received properly.

(ii) In order to maximize performance, accuracy, and user friendliness, the number of simultaneously displayed targets was restricted to four, which resulted in a considerably lower ITR.

(iii) It should also be noted that for long-term use, recalibration might be inevitable. Although the number of targets is limited to four, the SSVEP-based BCI presented here depends on the user's vision and control over his or her eye movements. It therefore has to compete with other approaches such as P300 and eye-tracking systems. Visual stimulation with low frequencies in particular is known to cause fatigue.

(iv) Moreover, subjects in this study may not be representative of the general population; they tended to be young, healthy men. Further tests with older and physically impaired persons are required.

5. Conclusions

In the present study, the SSVEP-based driving application (robot-GUI) was proven to be successful and robust. The driving application with live video feedback was tested with 61 healthy subjects. All participants successfully navigated a mobile semiautonomous robot through a predetermined 15-meter-long route. BCI key parameters were determined by the Wizard software. No negative influence of the live video feedback on the performance was observed.


Such applications could be used to control assistive robots but might also be of interest for healthy subjects. In contrast to P300, the SSVEP stimulation has to be sufficiently strong (in terms of size or visual angle) to significantly influence the user. A common problem of all BCI studies, which also occurred here, is keeping the user motivated and focused on the task. The evaluation of the questionnaire shows that for the majority of subjects the driving task with the SSVEP-based BCI was not fatiguing, even though lower stimulation frequencies, which usually cause more fatigue than higher frequencies, were used in the experiment. More specifically, frequencies between 6 and 12 Hz were determined in the calibration procedure, as they elicited the strongest SSVEP responses. Some subjects stated that, as they concentrated more on the camera video, they did not find the flickering annoying at all. Our results prove that SSVEP-based personalized adaptive BCI systems with a low target number can be used as an everyday assistive technology for remote robots. We have also shown that it is possible to design a four-target interface with extended capabilities such as camera and driving control. The control application presented here can be used for mobile robots as an assistive remote technology, as a game interface, for wheelchair control, or even for steering remotely controlled drones. This study should be considered as a step towards resolving the problems of semiautonomous mobile robots controlled with SSVEP-based BCIs. In order to further improve this approach, the next steps should be an online parameter adaptation mechanism for improving the accuracy through practice, a possibility of turning the stimulation on/off, and two-way video and audio communication for a truly remote telepresence.

Competing Interests

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interests.

Acknowledgments

The authors would like to thank Thomas Grunenberg from the Faculty of Technology and Bionics at Rhine-Waal University of Applied Sciences for building, programming, and maintaining the MRCs. The authors would also like to thank the student assistants Leonie Hillen, Francisco Nunez, Saad Platini, Anna Catharina Thoma, and Paul Fanen Wundu for their help with the study. This research was supported by the German Federal Ministry of Education and Research (BMBF) under Grants nos. 16SV6364 and 01DR14014 and by the European Fund for Regional Development (EFRD, or EFRE in Germany) under Grant no. GE-1-1-047.

References

[1] M. Cheney and R. Uth, Tesla: Master of Lightning, Barnes & Noble, New York, NY, USA, 1999.
[2] W. Walter, The Living Brain, W. W. Norton & Company, New York, NY, USA, 1953.
[3] P. Flandorfer, "Population ageing and socially assistive robots for elderly persons: the importance of sociodemographic factors for user acceptance," International Journal of Population Research, vol. 2012, Article ID 829835, 13 pages, 2012.
[4] A. Nijholt, D. Tan, G. Pfurtscheller et al., "Brain-computer interfacing for intelligent systems," IEEE Intelligent Systems, vol. 23, no. 3, pp. 72-79, 2008.
[5] J. D. R. Millán, R. Rupp, G. R. Müller-Putz et al., "Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges," Frontiers in Neuroscience, vol. 4, article 161, 2010.
[6] C. Brunner, N. Birbaumer, B. Blankertz et al., "BNCI Horizon 2020: towards a roadmap for the BCI community," Brain-Computer Interfaces, vol. 2, no. 1, pp. 1-10, 2015.
[7] R. Bogue, "Exoskeletons and robotic prosthetics: a review of recent developments," Industrial Robot, vol. 36, no. 5, pp. 421-427, 2009.
[8] R. Ron-Angevin, F. Velasco-Alvarez, S. Sancha-Ros, and L. da Silva-Sauer, "A two-class self-paced BCI to control a robot in four directions," IEEE International Conference on Rehabilitation Robotics (ICORR '11), vol. 2011, Article ID 5975486, 2011.
[9] L. Tonin, T. Carlson, R. Leeb, and J. D. R. Millán, "Brain-controlled telepresence robot by motor-disabled people," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4227-4230, IEEE, Boston, Mass, USA, September 2011.
[10] D. Huang, K. Qian, D.-Y. Fei, W. Jia, X. Chen, and O. Bai, "Electroencephalography (EEG)-based brain-computer interface (BCI): a 2-D virtual wheelchair control based on event-related desynchronization/synchronization and state control," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 20, no. 3, pp. 379-388, 2012.
[11] A. Chella, E. Pagello, E. Menegatti et al., "A BCI teleoperated museum robotic guide," in Proceedings of the International Conference on Complex, Intelligent and Software Intensive Systems (CISIS '09), pp. 783-788, IEEE, Fukuoka, Japan, March 2009.
[12] C. Escolano, J. M. Antelis, and J. Minguez, "A telepresence mobile robot controlled with a noninvasive brain-computer interface," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 3, pp. 793-804, 2012.
[13] F.-B. Vialatte, M. Maurice, J. Dauwels, and A. Cichocki, "Steady-state visually evoked potentials: focus on essential paradigms and future perspectives," Progress in Neurobiology, vol. 90, no. 4, pp. 418-438, 2010.
[14] D. Regan, "Some characteristics of average steady-state and transient responses evoked by modulated light," Electroencephalography and Clinical Neurophysiology, vol. 20, no. 3, pp. 238-248, 1966.
[15] Y. Frank and F. Torres, "Visual evoked potentials in the evaluation of 'cortical blindness' in children," Annals of Neurology, vol. 6, no. 2, pp. 126-129, 1979.
[16] W. I. McDonald, A. Compston, G. Edan et al., "Recommended diagnostic criteria for multiple sclerosis: guidelines from the International Panel on the Diagnosis of Multiple Sclerosis," Annals of Neurology, vol. 50, no. 1, pp. 121-127, 2001.
[17] S. Gao, Y. Wang, X. Gao, and B. Hong, "Visual and auditory brain-computer interfaces," IEEE Transactions on Biomedical Engineering, vol. 61, no. 5, pp. 1436-1447, 2014.
[18] I. Volosyak, "SSVEP-based Bremen-BCI interface: boosting information transfer rates," Journal of Neural Engineering, vol. 8, no. 3, Article ID 036020, 2011.
[19] G. R. Müller-Putz and G. Pfurtscheller, "Control of an electrical prosthesis with an SSVEP-based BCI," IEEE Transactions on Biomedical Engineering, vol. 55, no. 1, pp. 361-364, 2008.
[20] P. Martinez, H. Bakardjian, and A. Cichocki, "Fully online multicommand brain-computer interface with visual neurofeedback using SSVEP paradigm," Computational Intelligence and Neuroscience, vol. 2007, Article ID 94561, 9 pages, 2007.
[21] I. Volosyak, D. Valbuena, T. Lüth, T. Malechka, and A. Gräser, "BCI demographics II: how many (and what kinds of) people can use a high-frequency SSVEP BCI?," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 19, no. 3, pp. 232-239, 2011.
[22] J. Zhao, Q. Meng, W. Li, M. Li, and G. Chen, "SSVEP-based hierarchical architecture for control of a humanoid robot with mind," in Proceedings of the 11th World Congress on Intelligent Control and Automation (WCICA '14), pp. 2401-2406, Shenyang, China, July 2014.
[23] A. Guneysu and H. Levent Akin, "An SSVEP based BCI to control a humanoid robot by using portable EEG device," in Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '13), pp. 6905-6908, IEEE, Osaka, Japan, July 2013.
[24] K. Holewa and A. Nawrocka, "Emotiv EPOC neuroheadset in brain-computer interface," in Proceedings of the 15th International Carpathian Control Conference (ICCC '14), pp. 149-152, IEEE, Velke Karlovice, Czech Republic, May 2014.
[25] N.-S. Kwak, K.-R. Müller, and S.-W. Lee, "Toward exoskeleton control based on steady state visual evoked potentials," in Proceedings of the International Winter Workshop on Brain-Computer Interface (BCI '14), pp. 1-2, Gangwon, South Korea, February 2014.
[26] G. Bin, X. Gao, Y. Wang, B. Hong, and S. Gao, "VEP-based brain-computer interfaces: time, frequency, and code modulations," IEEE Computational Intelligence Magazine, vol. 4, no. 4, pp. 22-26, 2009.
[27] C. Kapeller, C. Hintermüller, M. Abu-Alqumsan, R. Prückl, A. Peer, and C. Guger, "A BCI using VEP for continuous control of a mobile robot," in Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '13), pp. 5254-5257, IEEE, Osaka, Japan, July 2013.
[28] H. Wang, T. Li, and Z. Huang, "Remote control of an electrical car with SSVEP-based BCI," in Proceedings of the IEEE International Conference on Information Theory and Information Security (ICITIS '10), pp. 837-840, IEEE, Beijing, China, December 2010.
[29] Y. Wang, Y.-T. Wang, and T.-P. Jung, "Visual stimulus design for high-rate SSVEP BCI," Electronics Letters, vol. 46, no. 15, pp. 1057-1058, 2010.
[30] P. Gergondet, D. Petit, and A. Kheddar, "Steering a robot with a brain-computer interface: impact of video feedback on BCI performance," in Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN '12), pp. 271-276, Paris, France, September 2012.
[31] P. F. Diez, V. A. Mut, E. Laciar, and E. M. Avila Perona, "Mobile robot navigation with a self-paced brain-computer interface based on high-frequency SSVEP," Robotica, vol. 32, no. 5, pp. 695-709, 2014.
[32] B. Choi and S. Jo, "A low-cost EEG system-based hybrid brain-computer interface for humanoid robot navigation and recognition," PLoS ONE, vol. 8, no. 9, Article ID e74583, 2013.
[33] E. Tidoni, P. Gergondet, A. Kheddar, and S. M. Aglioti, "Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot," Frontiers in Neurorobotics, vol. 8, article 20, pp. 1-8, 2014.
[34] J. Zhao, W. Li, and M. Li, "Comparative study of SSVEP- and P300-based models for the telepresence control of humanoid robots," PLoS ONE, vol. 10, no. 11, Article ID e0142168, 2015.
[35] T. M. Rutkowski, K. Shimizu, T. Kodama, P. Jurica, and A. Cichocki, "Brain-robot interfaces using spatial tactile BCI paradigms," in Symbiotic Interaction, pp. 132-137, Springer, New York, NY, USA, 2015.
[36] C. Brunner, B. Z. Allison, D. J. Krusienski et al., "Improved signal processing approaches in an offline simulation of a hybrid brain-computer interface," Journal of Neuroscience Methods, vol. 188, no. 1, pp. 165-173, 2010.
[37] F. Gembler, P. Stawicki, and I. Volosyak, "Autonomous parameter adjustment for SSVEP-based BCIs with a novel BCI wizard," Frontiers in Neuroscience, vol. 9, article 474, pp. 1-12, 2015.
[38] E. W. Sellers, T. M. Vaughan, and J. R. Wolpaw, "A brain-computer interface for long-term independent home use," Amyotrophic Lateral Sclerosis, vol. 11, no. 5, pp. 449-455, 2010.
[39] E. M. Holz, L. Botrel, and A. Kübler, "Bridging gaps: long-term independent BCI home-use by a locked-in end-user," in Proceedings of the TOBI Workshop IV, Sion, Switzerland, 2013.
[40] O. Friman, I. Volosyak, and A. Gräser, "Multiple channel detection of steady-state visual evoked potentials for brain-computer interfaces," IEEE Transactions on Biomedical Engineering, vol. 54, no. 4, pp. 742-750, 2007.
[41] I. Volosyak, T. Malechka, D. Valbuena, and A. Gräser, "A novel calibration method for SSVEP based brain-computer interfaces," in Proceedings of the 18th European Signal Processing Conference (EUSIPCO '10), pp. 939-943, Aalborg, Denmark, August 2010.
[42] I. Volosyak, H. Cecotti, and A. Gräser, "Optimal visual stimuli on LCD screens for SSVEP based brain-computer interfaces," in Proceedings of the 4th International IEEE/EMBS Conference on Neural Engineering (NER '09), pp. 447-450, IEEE, Antalya, Turkey, May 2009.
[43] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control," Clinical Neurophysiology, vol. 113, no. 6, pp. 767-791, 2002.

Page 3: Driving a Semiautonomous Mobile Robotic Car Controlled by ... · Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI ... (frequencycoding,methodoriginallyproposed

Computational Intelligence and Neuroscience 3

desynchronization (ERD typical SMR-based BCI) for thenavigationexploration of a humanoid robot with additionalP300 BCI for interaction (object recognition decision) [32]They also tested and evaluated the use of a low-cost BCI sys-tem The ERD and SSVEP BCIs were two-class systems thenumber of classes for the P300 BCI varied (2 to 4 dependingon the detected objects) Five subjects used a low-cost Emo-tiv EPOC headset to perform the experiment The overallaccuracies achieved by the ERD and SSVEP protocols were846 and 844 respectively the P300 protocol reached anaccuracy of 91 for two objects and 895 for four objectsthrough the robotrsquos cameraThe scored ITR value for the nav-igationexploration task was 181 bitsmin and for the objectsrecognition task 21 bitsmin Authors reported that switchingbetween SSVEP and ERD caused a drop in accuracy downto 73 The robotrsquos point of view was shown in the middleof the monitor (live feedback) the SSVEP stimulation wasshown at the sides and the P300was showndirectly in the livefeedback of the robotrsquos view Tidoni et al presented a studywhere nine subjects (located in Italy) remotely controlleda humanoid robot (HRP-2 located in Japan) [33] Subjectstested a synchronous asynchronous audio-visual feedbackand evaluated whether a mirror reflection of the robot (inits field of view) improved the performance of an SSVEP-based BCIwith video feedbackThemain taskwas to navigatethe robot and to move a bottle from one table to anotherA six-class SSVEP-based BCI was used the stimulation andthe video feedback were presented on the same screen Theflashing frequencies were 6 8 9 10 and 14Hz with anadditional zero class when the subject did not gaze at anytarget Due to the remote distance the feedback action wasvisible to the subject after approximately 800ms Zhao et alcompared a four-class SSVEP-based BCI to a six-class P300-based BCI by evaluating achieved control accuracies of robotnavigation tasks [34] For both systems (SSVEP and P300)the modular platform Cerebot was used and stimuli werepresented on a LCD monitor Seven subjects took part inthe experiment and achieved a mean accuracy and ITR of913 and 188 bitsmin for the SSVEP approach for the P300approach 903 and 247 bitsmin were achieved

Rutkowski et al implemented a robot control applicationusing tactile BCIs [35] They tested two tactile stimula-tion configurations a pin-pressure BCI (arranged in a 3 times

3 matrix) and a full body vibrotactile BCI (vibrotactilegenerators placed on subjects backs shoulders arms andlegs) A humanoid NAO robot was navigated using sixpredefined commands Five subjects could control the robotusing the pin-pressure BCI and three subjects successfullytested the full body vibrotactile BCI The authors demon-strate the reliability of the tactile BCI paradigm for robotnavigation

All listed studies have to deal with the main engineeringproblems regarding BCI technologies EEG-based BCIs arerelatively slow and can sometimes be unreliable Maximizingthe speed of a BCI while still maintaining high accuracyleads to certain limitations in the number of targets and insystem speed (target selection) Moreover it is often reportedthat a certain number of subjects are not able to controlBCIs (so-called BCI illiterates eg [21 36]) We chose the

SSVEP paradigm as it is reported to be among the fastest andmost reliable BCI strategies with short or no training timeneeded [13] In our recent study after five years of researchwe developed an SSVEP calibration software (SSVEPWizard)for LCD monitors that determines SSVEP key parametersfor each subject [37] The selected stimulation frequenciesare those with the highest SSVEP response in the visualcortex We have shown that with optimized parameters theSSVEP BCI literacy rate can reach 100 This calibrationsoftware is also used here to determine and optimize the user-specific SSVEP parameters Some SSVEP-based applicationsdescribed problems regarding video feedbackThe research ofGergondet et al was focused on the impact of the background(dynamic compared to static) on the BCI performance Theyreported that a dynamic background negatively affected theSSVEP stimulation [30] Kapeller et al noted a slight decreasein performance for an SSVEP-based BCI with backgroundvideo and small targets [27] In the approach presented byDiez et al the stimulation was separated from the videofeedback stream (outer LEDs) but no negative influence ofthe live feedback was reported [31]

SSVEP-based BCI systems for robot control usually usefour or five control commands A higher degree-of-freedomcontrol is difficult to realize because multitarget BCIs usuallyhave a lower accuracy A countermeasure to the limitations indegrees of freedom can be a shared-control approach that isthe robot can share some specific control commands with thehuman user which allows for the faster execution of complextasks The so-called shared-control principle was initiallyexplored for BCI-controlled wheelchairs and recently appliedto BCI telepresence [12] BCIs with only a few targets aremore robust and therefore more suitable for control applica-tions There are already numerous existing BCI technologiesthat patients use at their homes [38 39] However forthe last decade publications about SSVEP-based BCIs havebeen sparse The PubMed database shows 4200 results forthe search term ldquoBrain-computer interfacerdquoldquoBrain-machineinterfacerdquo but only 185 results for the term ldquoSSVEPrdquoWith thispaper we would like to contribute to the research of SSVEP-based BCIs

In this paper we present a four-class SSVEP-based BCIcontrol application with live video feedback We also presenta self-made low-cost semiautonomous mobile robotic car(MRC) with live video feedback controlled by a BCI appli-cation Unlike the robotic cars developed in the above-mentioned research studies not only the vehicle itself butalso the camerarsquos pan-tilt-position was controlled via SSVEPThe main task of this BCI study was to maneuver theMRC through a predetermined route and to examine thesurroundings (live scenario task) The BCI performance wastested with 61 subjects BCI key parameters for examplestimulation frequencies were determined individually byWizard software In this study we tested the followinghypotheses

(i) Is it possible to use an SSVEP-based BCI stimulationand a background video on the same screen withoutthe negative influence of the presented video asreported in for example [27 30]

4 Computational Intelligence and Neuroscience

(ii) How does the video feedback affect BCI performancewhen simultaneously presented with the SSVEP-based BCI stimulation

(iii) Is our self-made semiautonomousmobile robotic carsuitable for future use as an assistive technology

(iv) How did the subjects perform using the presentedSSVEP-based BCI control application compared toresults presented in for example [31]

The paper is organized as follows Section 2 describes theexperimental setup and presents details about the experimen-tal procedure and the hardware and software solutions Theresults are presented in Section 3 followed by a discussionand conclusion in Sections 4 and 5 respectively

2 Material and Methods

21 Subjects Everyone who volunteered to participate in thestudy became a research subject after reading the subjectinformation sheet This study was carried out with writteninformed consent from all subjects in accordance with theDeclaration ofHelsinki 61 subjects (30 female)with amean(SD) age of 2262 (500) years participated in the study allhealthy and all students or employees of the Rhine-Waal Uni-versity of Applied Sciences in KleveThe EEG recording tookplace in a normal PC-laboratory room (asymp36m2) The LCDscreen was located in front of windows and the luminancewas kept at an acceptable level in order not to disturb thesubjects If necessary outer blinds were pulled down duringthe day or light was turned on during the evening None ofthe subjects had neurological or visual disorders Spectacleswere worn when appropriate Subjects did not receive anyfinancial reward for their participation in this study

22 Hardware

221 Mobile Robotic Car (MRC) Four identically con-structed mobile robotic cars (MRCs) were used in theexperiment all set up with the same hardware and softwareconfigurations Four wheels were attached to a preassembledcase (14 cm times 19 cm) with built-in DCmotors A rechargeablebattery (6x AA NiMH 1900mAh) was placed inside Fourlight sensors (eachwith a corresponding red LED L-53SRC-Ffor luminescence positioned side by side) were mounted on aprototype board at the front of theMRC Two of those sensorswere connected to the analog input pins of a BeagleBoneBoard Rev A6 at the center and two at the sides The DCmotors were controlled by an Atmega168-based unit Thecamera Logitech C210 (V-U0019) was mounted in frontof the MRC on two servos (RB-421 RobotBase China)simulating a pan-tilt head and connected to the BeagleBoneUSB port The camera was controlled in five predefinedorientations looking down forward up to the left and to therightThe connection with theMRCwas established throughUDP via a HAMA N150 2in1-WLAN-ADAPTER connectedwith an Ethernet cable to the BeagleBone of the MRCand a USB WiFi adapter (D-Link DWL-G122) connectedto the desktop computer All the electronic components

Figure 1 Mobile robotic car (MRC) On top of the MRC theBeagleBoneBoardRevA6 theAtmega168-basedmotor drive shieldand a small HAMAWiFiWLAN-Access-Point At the front the redLEDs from light sensors that allowed the MRC to follow the whiteline on autopilot mode and the Logitech C210 camera mounted on2 servos creating the pan-tilt head for 2-DoF (degree-of-freedom)control The rechargeable battery is mounted inside the main caseMRC dimensions are 25 times 25 times 12 cm

(boards and HAMA N150 Access-Point) were mounted ontop of the MRC The BeagleBone was equipped with an8GB SD-card running Ubuntu ARM (Linux distributionversion 1204) with MJPG-Streamer installed (version 01httpmjpg-streamersfnet) for video streaming One of theused MRCs is shown in Figure 1

222 Data Acquisition and Feature Extraction The subjectswere seated in front of a 2410158401015840 LCD screen (BenQ XL2420Tor BenQ XL2411T resolution 1920 times 1080 pixels verticalrefresh rate 120Hz) at a distance of approximately 60 cmThecomputer system based on an Intel Core i7 340GHz wasrunning Microsoft Windows 7 Enterprise Standard passiveAgAgCl electrodes were used to acquire the signals from thesurface of the scalp The ground electrode was placed over119860119865119885 the reference electrode was placed over 119862

119885 and eight

signal electrodes were placed at predefined locations on theEEG-cap marked with 119875

119885 1198751198743 1198751198744 1198741 1198742 119874119885 1198749 and

11987410

according to the international system of EEG electrodeplacement Standard abrasive electrolytic gel was appliedbetween the electrodes and the scalp to bring impedancesbelow 5 kΩ An EEG amplifier gUSBamp (Guger Technolo-gies Graz Austria) was utilizedThe sampling frequency wasset to 128Hz During the EEG signal acquisition an analogbandpass filter between 2 and 30Hz and a notch filter around50Hz were applied directly in the amplifier

The minimum energy combination (MEC) method wasoriginally introduced by Friman et al in [40] In this study arefined version of the MEC as presented by Volosyak in [18]was used In the following we present this method in a veryshort form

223 SSVEP SignalModel Wecanmodel the signal recordedat a given electrode 119894 (against reference electrode at time 119905) asfollows

119910119894(119905) =

119873ℎ

sum

119896=1

(119886119894119896sin 2120587119896119891119905 + 119887

119894119896cos 2120587119896119891119905) + 119864

119894119905 (1)

Computational Intelligence and Neuroscience 5

The first part is the SSVEP evoked response consisting of sineand cosine functions of the frequency 119891 and its harmonics119896 with corresponding amplitudes 119886

119894119896and 119887119894119896 The last part

119864119894119905represents the noise component of the electrode 119894 other

activities not related to the SSVEP evoked response andartifacts The 119894th EEG signals samples stored in 119873

119905are in

vector form 119910119894= 119883120591119894+ 119864119894 where 119910

119894= [119910119894(1) 119910

119894(119873119905)]119879

is 119873119905times 1 vector 119883 is the SSVEP model matrix of size

119873119905times 2119873ℎand contains the sin (2120587119896119891119905) and cos (2120587119896119891119905) pair

in its columns The 120591119894vector contains the corresponding

amplitudesThis is based on the assumption that the nuisanceis stationary within the short time segment length the signalsfrom the 119873

119910electrodes can be stored in a vector form 119884 =

[1199101 119910

119873119910]

224 Minimum Energy Combination In order to cancel outthe nuisance and noise and to boost the SSVEP responsechannels are created in a weighted sum of 119873

119910electrodes

119904 = sum119873119910

119894=1119908119894119910119894= 119884119908 Several sets 119878 of 119873

119904channels can be

created with a different combination of the original electrodesignals 119878 = 119884119882 where119882 is119873

119910times119873119904matrix that contains the

different weight combinations The structure of 119882 is basedon the minimum energy combination technique In orderto use this combination method for cancelling the nuisancesignal in a first step any potential SSVEP component needsto be removed from the signal = 119884 minus 119883(119883

119879119883)minus1119883119879119884

The next step is to find a weight vector that minimizes theresulting energy of the combination of the electrodesrsquo noisesignals The last step is the optimization problem forminimalizing the nuisance and noise component

min

10038171003817100381710038171003817

10038171003817100381710038171003817

2

= min

119879119879

(2)

The solution to the minimize problem is the smallest eigen-vector V

1(of the symmetricmatrix 119879) and the energy of the

resulting combination equals the smallest eigenvalue 1205821from

this matrix In order to discard up to 90 of the nuisancesignal the total number of channels is selected by finding thesmallest value for119873

119904that satisfies the following equation

sum119873119904

119894=1120582119894

sum119873119910

119895=1120582119895

gt 01 (3)

To detect the SSVEP response for a frequency the powerof that frequency and its harmonics119873

ℎis predicted

=1

119873119904119873ℎ

119873119904

sum

119897=1

119873ℎ

sum

119896=1

10038171003817100381710038171003817119883

T119896119904119897

10038171003817100381710038171003817

2

(4)

To prevent overlapping between frequencies we use 119873ℎ= 2

in the actual system implementation The estimation of thepower of the SSVEP stimulation frequencies and additionalfrequencies (119873

119891) is normalized into probabilities

119901119894=

119894

sum119895=119873119891

119895=1119895

with119894=119873119891

sum

119894=1

119901119894= 1 (5)

Table 1 Overview of the time windows 119879119878of EEG data used for

SSVEP classification Eleven time segments between 8125ms (8times13

samples or 8 blocks of EEG data) and 16250ms were used in thisstudy

Time [ms] Blocks of EEG data8125 810156 1015234 1520313 2030469 3040625 4050781 5060938 6071094 708125 8016250 160

where 119894is the power estimation of the 119894th signal 1 le 119894 le

119873119891 To increase the difference between the results we apply a

Softmax function

1199011015840

119894=

119890120572119901119894

sum119895=119873119891

119895=1119890120572119901119895

with119894=119873119891

sum

119894=1

1199011015840

119894= 1 (6)

where 120572 is set to 025 The output values of the Softmaxfunction are between 0 and 1 and their sum is equal to 1 Theclassification output119862

119900of the signal of 119894th frequency is a result

of three conditions the probability 1199011015840

119894of the 119894th frequency

is the highest it surpasses a predefined border 1199011015840

119894ge 120573119894 and

the detected frequency is one of the stimulation frequenciesAfter each classification to prevent wrong classificationsthe classifier output was rejected for the next 9 blocks(approximately 914ms of EEG data with the sampling rate of128Hz) and the visual stimulation was paused for that timeperiod This classification method is the next iteration of theBremen-BCI as presented for example in [18] An exampleof the recorded signal and the SNR can be seen in Figure 9

The SSVEP classification was performed online every13 samples (ca 100ms) on the basis of the adaptive timesegment length of the acquired EEG data introduced in[18] If no classification was possible the time segmentlengthwindowwas gradually extended to the next predefinedwindow lengths (see Table 1 and Figure 2)

23 Software

231 Graphical User Interface A custom-made graphicaluser interface (robot-GUI) was designed to control the MRCDirectX 9 was used to draw the images (buttons and areal-time picture from the actual camera view in back-ground) on themonitor and OpenCV library (version 2410httpwwwopencvorg) was used for image acquisitionEach received frame was rendered into background textureas soon as it was received (max 30Hz the frame rate waslimited by the used camera) Each received frame (640 times

480 pixels) was rescaled to 1280 times 960 Due to the ratioof the streamed video frames and the resolution of the

6 Computational Intelligence and Neuroscience

Time (ms)

8 bl

10 bl

160 blocks

8 bl8 bl8 bl

10 bl

8 bl8 bl

10 bl15 bl15 bl

15 blocks20 blocks

10 bl

8 bl8 bl

80 blocks

Com

man

d cla

ssifi

catio

n

20000 22000 24000

203125 ms

8125 ms

152344 ms

1015625 ms

8135 ms

0 2000 4000 6000 8000 10000 12000 14000 16000 18000

middot middot middot

middot middot middot

middot middot middot

middot middot middot

16250 ms

Figure 2 Changes in the time segment length after a performed classification in case no distinct classification can be made and the actualtime 119905 allows for the extension to the next predefined value After each classification (gray bars of 8 10 20 and 160 blocks) additional timefor gaze shifting was included (black) in which the classifier output was rejected for 9 blocks

Figure 3 A screenshot of the robot-GUI in drive mode In thissituation the user had to select the ldquodrive forwardrdquo commandwhichcaused the robot to execute an autopilot program that uses the built-in light sensors to follow the white line

Figure 4 Signs positioned at the turning points and positionedtowards the camera showed the correct direction for following thetrack Here in the screenshot the robot automatically stopped afterreaching this turning point and a sign instructed the user to selectthe command ldquoturn leftrdquo

monitors used a gray background in the form of a framewas implemented to prevent unnatural video scaling Thestarting dimensions of the buttons were 146 times 512 pixelsfor left and right 576 times 128 pixels for the top and 536 times

96 pixels for bottom buttons (see Figures 3 and 4) Duringthe runtime the size of the buttons changed continuously

depending on the calculated probabilities of the 119894th stimuliin (6) The robot-GUI had two modes a drive mode and acamera mode In drive mode the subject was able to sendthe following commands to the MRC ldquodrive forwardrdquo (adiscrete command using the line-sensors to follow the tapedline on autopilot to the next direction change) ldquoturn leftrdquoldquoturn rightrdquo and ldquoswitch to camera moderdquo In camera modethe subject was able to send the following commands ldquolookleftrdquo ldquolook rightrdquo ldquolook uprdquo and ldquoswitch to drive moderdquoWhile looking up the button ldquolook uprdquo changed the contentand functionality to ldquolook forwardrdquoThe control buttons werered in drive mode and yellow in camera mode to increaseuser friendliness The aspect ratio of the video stream was4 3 so the computer screen was not entirely filled out Ifthe command ldquodrive forwardrdquo was selected by the user anautonomous program was executed on the MRC The MRCused the four line-sensors to follow the white line taped tothe floor until it arrived at a turning point where the robotstopped automaticallyWhile driving theMRCautomaticallymade small corrections when the line was detected by oneof the inner sensors the rotation of the wheels on the sameside was decreased by 50 until another sensor detected theline Similarly when the line was detected by one of the outersensors the wheels on the same side were stopped so thatthe MRC could drive in a curve in order to go back ontrack If one of the turning commands was selected theMRCmade an approximately 90-degree turn in the correspondingdirection In drive mode the camera was facing slightly theground (approximately 20 degrees down from the frontalview for the ldquolook downrdquo position) When switching to cam-eramode (represented by the word ldquoCAMERArdquo at the bottomof the robot-GUI) the camera changed to the ldquolook forwardrdquoposition (switching back to drive mode would change thecamera position to looking down again) All the EEGdata andthe received video stream from the MRC were anonymouslystored

Computational Intelligence and Neuroscience 7

(a) (b)

Figure 5 (a) shows the second step of theWizardThe subject faced a circle consisting of small segments which flickered simultaneously withseven different frequencies Each of the seven tested frequencies was represented by the same amount of segmentsThe segments representingone of the seven tested frequencies were scattered in random order (b) forming the stimulation circle (a) After this multistimulation withthirteen or fourteen different frequencies depending on the alpha test result the EEG data were analyzed and ordered from the strongest tothe lowest SSVEP response

fS

(a) (b)

Figure 6 This figure shows the third step of the Wizard (a) The main frequency 119891119878is in the center (white circle stimulation) and the other

three frequencies (from four 119891119878119894with the strongest SSVEP response) are divided into small stimulation blocks and scattered around 119891

119878acting

as ldquonoiserdquo in peripheral vision This pattern is shown in (b) This was repeated for each 119891119878119894

232 Wizard The Wizard software is generally spokena calibration program that autonomously determines BCIkey parameters [37] It is designed to select user-specificoptimal stimulation frequencies for SSVEP-based BCIs onLCD screens It is a development of the method presentedin [41] The Wizard chooses four frequencies with the bestSSVEP response out of a set of 14 possible stimulationfrequencies 119891

119894(ranged from 6Hz to 20Hz) that can be

realized on a monitor screen with a vertical refresh rate of120Hz [42] In the following we provide a brief description

Step 1 (alpha test) The subject was instructed by an audiofeedback to close his or her eyes 15 frequencies in the alphawave band (827 857 887 893 923 953 970 100 10301061 1091 1121 1170 120 and 1230Hz) were measured for10 seconds during the closed eyes period and if any of those15 frequencies would interfere with a possible stimulationfrequency 119891

119894(Δ119891 le 015Hz) this frequency 119891

119894would be

filtered out

Step 2 (stimulation frequency selection) In order to avoid theinfluence of harmonic frequencies the user had to look at twocircles in sequence Each circle contained seven stimulationfrequencies which were presented in pseudorandom order assmall green circle segments (see Figure 5) This step took 20seconds

Step 3 (optimal classification thresholds and time segmentlength) In this step the Wizard software chose the beststarting time-length segment 119879

119878as well as the optimal

classification thresholds 120573119894for the frequencies determined

in the previous step First a white circle flickering at thefrequency which had the highest SSVEP response in phase2 was displayed It was necessary to simulate noise causedby peripheral vision when concentrating on the target objectFor this purpose the white circle was surrounded by a greenring containing segments which presented the remainingoptimal frequencies (see Figure 6) The user was instructedby an audio feedback to gaze at the white circle The circle

8 Computational Intelligence and Neuroscience

(a) (b)

Figure 7 Two subjects during the driving task The robotic-GUI is in camera mode The subject in (a) sees a text message instructing himto select the button ldquolook uprdquo The subject in (b) is one step further and is facing the hidden secret number which is attached underneath adrawer

and the ring flickered for 10 seconds while EEG data wererecorded After a two-second break the flickering continuedThe white circle now flickered with the second-highestfrequency from phase two while the ring flickered with theremaining three frequencies This procedure was repeateduntil the data for all four optimal frequencies were collectedTotal recording time for phase three was 40 seconds andbased on this data optimal thresholds and time segmentlength were determined If one of the frequencies 119891

119878119894did not

reach a defined minimal threshold 120573119894 this frequency was

replaced with the next best frequency from Step 2 and theprocedure of this step was repeated

24 Procedure In order to analyze the BCI performancevariety a pre- and postquestionnaire system (similar to [21])was employed After signing the consent form each subjectcompleted a brief prequestionnaire and was prepared for theEEG recording Subjects were seated in a comfortable chairin front of the monitor the electrode cap was placed andelectrode gel was applied First the subjects went throughthe steps of the Wizard and BCI key parameters (stimula-tion frequencies classification thresholds and optimal timesegment length) were determined Then the experimenterstarted the robot-GUI (Figure 3) The control task was todrive the MRC through a predetermined route (Figure 8) Ifthe robot-GUI software recognized a wrong command forexample ldquoturn rightrdquo instead of ldquoturn leftrdquo this commandwas not sent so that the MRC would not drive off the track(Figure 7) If the subjectmade a fewmistakes at the beginningand was willing to start over again (this occurred with asmall number of subjects) the experimentermoved theMRCback to the starting point and restarted the robot-GUI In thevery few cases where the wireless connection to the MRCwas interrupted (eg the MRC was mechanically unable toexecute the command thatwas correctly selected in the robot-GUI by the user) the experimenter manually repositionedthe MRC to the next turning point

At a certain turning point (represented by a blue squareshown in Figure 8 when approximately 60 of the drivingtask was completed) the camera was facing a text messagewhich instructed the subjects to look up (user action rep-resented with a circled number as eg 5 in this case)

Start

Finish

1 m

1 m

1 m

1 m

2 m

15 m15 m

15 m

15 m

15 m

15 m

Figure 8 Track for the mobile robotic car Subjects had to steerthe MRC through a 15-meter-long course with 10 turns in orderto reach the finish line At a certain turning point (marked with ablue square) the MRCs camera faced a text message that instructedthe subject to switch to camera mode The task was to examine thesurroundings and to look for a hidden number which was attachedunderneath a drawer (otherwise not visible) One example of theoptimal path is shown in Figure 9

At this point the user had to switch to camera mode andselect the command ldquolook uprdquo Now the subject was ableto see a hidden secret number which was attached to thebottom of a drawer (located at a height of 50 cm facingthe floor not normally visible to the people in the room)The numbers varied randomly between the subjects andwere only accessible via the MRC camera After seeing thenumber on the screen the subject had to change back todrive mode and continued the driving task When arriving atthe finish line the camera faced another text message whichagain instructed the subjects to look up After changing tocamera mode and selecting ldquolook uprdquo the camera faced asign displaying the word ldquofinishrdquo and the robot-GUI stoppedautomatically All of the EEG data and performed commandswere stored on the hard drive of the desktop PC for furtherevaluation The values of ITR and accuracies were calculatedautomatically by the software The ITR was calculated usingthe formula presented by Wolpaw et al in [43] the numberof possible choices for classification was equal to four itwas the same for each mode (drive mode or camera mode)After finishing the experiment the subject filled out thepostquestionnaire

Computational Intelligence and Neuroscience 9

Figure 9This chart presents exemplarily the BCI performance for subject 59 (optimal path with 100 accuracy)The top row shows the rawEEG signal acquired from channel119874

119911(one of eight EEG electrodes)The remaining graphs show the SNR values during the whole experiment

(dashed line represents the threshold for each command) for the four commands forwardup turn left turn right and cameradrive thestimulation frequencies for those commands were 667 750 706 and 632Hz respectively The 119909-axis the same for all graphs representsthe total duration of the experiment (150 seconds) The marking above each graph represents the exact time of each classification circlednumbers below the 119909-axis correspond to the task actions

3 Results

The results of the driving performance are shown in Table 3All 61 subjects were able to control the BCI application andcompleted the entire driving taskThe overall mean (SD) ITRof the driving performance was 1407 (444) bitsmin Themean (SD) accuracy was 9714 (373) Except for one allsubjects reached accuracies above 80 40 subjects reachedaccuracies above 90 and nineteen of them even completedthe steering task without any errorsThemean (SD) accuracyof each action (commands required to complete the task) was

9614 (002) In the prequestionnaire subjects answered thequestions regarding gender the need for vision correctiontiredness and BCI experience as displayed in Table 2Eighteen female subjects (30) with an average (SD) ageof 2267 (430) years and 43 male subjects with an average(SD) age of 2260 (532) years participated in the experimentThe influence of gender was also tested 18 female subjectsachieved a mean (SD) accuracy and ITR of 9324 (798)and 1405 (573) bitsmin respectively and 43 male achieveda mean (SD) accuracy and ITR of 9295 (659) and 1407(508) bitsmin respectively A one-way ANOVA showed no

10 Computational Intelligence and Neuroscience

Table 2 Questionnaire results The table presents the answers collected from the prequestionnaire The figures show the number of respon-dents or the mean (SD) value

Age Gender Visioncorrection Tiredness Length of sleep

last nightExperiencewith BCIs

Years Male Female Yes No 1 nottired

2 littletired

3 moderatelytired 4 tired 5 very

tired Hours Yes No

2262(500) 43 18 22 39 18 23 17 3 0 675 (122) 6 55

statistically significant difference in accuracy 119865(1 59) =

0022 119901 = 0883 and ITR 119865(1 59) = 00003 119901 = 0987between those two groupsThe influence of vision correctionon accuracy and ITR was also tested A total of 22 subjectsrequired vision correction in the form of glasses or contactlenses (as they stated in the questionnaire) They achieveda mean (SD) accuracy and ITR of 9156 (636) and 1278(402) bitsmin respectively These results were compared tosubjects who did not require vision correction and achieveda mean (SD) accuracy and ITR of 9392 (535) and 1484(466) bitsmin respectively A one-way ANOVA showedno statistically significant influence of vision correction onaccuracy 119865(1 59) = 1664 119901 = 0202 There was also nosignificant effect of vision correction on ITR119865(1 59) = 2263119901 = 0138 An evaluation of tiredness was also calculated onthe basis of the answers given in a prequestionnaire (scalefrom 1 to 5) Results of the prequestionnaire are shown inTable 2 A one-way ANOVA revealed no significant effect oftiredness on accuracy 119865(4 56) = 0335 119901 = 0853 and ITR119865(4 56) = 0067 119901 = 0991

4 Discussion

In the results presented the overall accuracy was calculatedfor all commands in drive mode and in camera modewithout distinguishing between them since four commandsfor SSVEP classification were always present Even thoughdifferent commands were sent the command classificationremained the same A comparison of this study to otherpublications on steering a robot with SSVEP-based BCI ispresented in Table 4

In order to test the influence of video overlay we com-pared the accuracy results with a demographic study [21] inwhich 86 subjects achieved a mean (SD) accuracy of 9226(782) while driving a small robot through a labyrinth witha four-class SSVEP-based BCI An unpaired 119905-test showed nostatistically significant difference 119905(145) = minus0089 119901 = 0397compared to the present values This shows that the videooverlay did not negatively influence the BCI performanceGergondet et al listed three factors with respect to howthe live video feedback in the background can influence theSSVEP stimulation distraction due to what is happening onthe video distraction due to what is happening around theuser and variation in luminosity below the stimuli shownto the user [30] Kapeller et al showed that there is a slightdecrease in performance for the underlying video sequencedue to a distractive environment and small targets used [27]

Contrary to the findings reported by Gergondet et al andKapeller et al the dynamic background did not noticeablyaffect the BCI performance (unpaired 119905-test result) Weassume that this was due to the fact that stimulation frequen-cies and minimal time segment classification windows wereadjusted individually and that relatively large stimulationboxes were used (visual angle)

The high accuracies of the SSVEP-based BCI systemsupport the hypothesis that BCI can be used to control acomplex driving application In comparison to the study byDiez et al [31] the MRC self-driving behavior was limitedto a preset number of turning points and a predefined pathAlthough the robot presented by Diez et al could avoidobstacles by itself (limited algorithm reaction) it decreased itsspeed for trajectory adjustment and in some cases stopped toavoid collision and further user commands were necessaryWhile Diez et al reported a mean time of 23320 seconds forthe second destination (139-meter distance) in this study theachieved mean time was 20708 seconds (15-meter distance)As pointed out by Diez et al the overall time for completingthe task depends not only on the robot but also on thesubjectrsquos performance controlling the BCI system [31] In ourstudy the high accuracies of the BCI system were achieveddue to the key parameter adjustment carried out by theWizard software

Themain difference between the robot used by Diez et aland the MRC presented in this study is the extended controlpossibility of the interface that also allows for controlling thecamera Both robots were designed for different purposesDiez et al mainly designed their robot for driving frompoint A to point B without any additional interaction withthe environment as they planned to use the system forwheelchair control in the future By contrast in our designthe MRC itself is the interaction device Equipped with adisplay and a two-way audio system it might serve as aremote telepresence in the future

The SSVEP paradigm allows for a robust implementationof four driving commands Compared to robots constructedfor the P300 BCI approach where the user faces largestimulation matrices spread across the entire screen thenumber of simultaneously displayed targets here is only fourand hence the load on the visual channel is considerably lowerin comparison Therefore the user might find it easier toconcentrate on the driving task On the other hand the lessernumber of command options reduces freedom and speedHowever a shared-control paradigm allows for complex tasksto be executed with minimal effort The command ldquodrive

Computational Intelligence and Neuroscience 11

Table 3: Results of the driving performance. Total time each subject required for steering the MRC through the route (15 meters and 10 turns, including camera movements). 26 correct commands (3 commands in camera mode and 23 commands in drive mode) were necessary to complete the task. Accuracy (Acc) is the number of correct commands (26) divided by the total number of commands (Cn). Mean values are given at the bottom of the table.

Subject   Time [s]   Acc [%]   ITR [bits/min]   Cn
1         305.60     86.67      7.20            30
2         283.46     89.66      8.32            29
3         284.07     81.25      6.80            32
4         280.11     81.25      6.90            32
5         252.59     81.25      7.65            32
6         190.02     92.86     13.40            28
7         322.06     96.30      8.62            27
8         310.07     70.27      4.66            37
9         158.13     92.86     16.10            28
10        371.82     89.66      6.35            29
11        241.72     86.67      9.10            30
12        151.33     92.86     16.83            28
13        155.49    100.00     20.07            26
14        210.54     96.30     13.18            27
15        208.41    100.00     14.97            26
16        241.62     83.87      8.52            31
17        241.72     86.67      9.10            30
18        145.13     96.30     19.12            27
19        167.48     96.30     16.57            27
20        220.80     92.86     11.53            28
21        194.19     96.30     14.29            27
22        431.54     92.86      5.90            28
23        163.52    100.00     19.08            26
24        171.84     86.67     12.80            30
25        248.12     83.87      8.30            31
26        131.93    100.00     23.65            26
27        161.38    100.00     19.33            26
28        184.75     96.30     15.02            27
29        144.22    100.00     21.63            26
30        167.48     92.86     15.20            28
31        216.63     92.86     11.75            28
32        355.16     81.25      5.44            32
33        244.06     86.67      9.01            30
34        152.14    100.00     20.51            26
35        195.61     96.30     14.18            27
36        239.48     83.87      8.60            31
37        166.36     89.66     14.18            29
38        172.45    100.00     18.09            26
39        218.97     83.87      9.40            31
40        165.65     89.66     14.25            29
41        146.66    100.00     21.27            26
42        186.27     89.66     12.67            29
43        221.31     92.86     11.51            28
44        259.29     86.67      8.48            30
45        157.02    100.00     19.87            26
46        147.88     96.30     18.76            27
47        187.18     96.30     14.82            27
48        145.54    100.00     21.44            26
49        160.57     96.30     17.28            27
50        204.95    100.00     15.22            26
51        228.11     96.30     12.16            27
52        160.37    100.00     19.46            26
53        236.64     81.25      8.17            32
54        148.89    100.00     20.95            26
55        150.11    100.00     20.78            26
56        139.85    100.00     22.31            26
57        147.16    100.00     21.20            26
58        242.73    100.00     12.85            26
59        150.01    100.00     20.80            26
60        164.63     96.30     16.85            27
61        178.85     96.30     15.51            27
Mean      207.08     93.03     14.07            28.11
SD         50.25      5.73      4.44             1.82
Min       131.93     70.27      4.66            26.00
Max       431.54    100.00     23.65            37.00
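For concreteness, the accuracy and ITR columns of Table 3 follow directly from the command counts and times; the ITR is computed with the Wolpaw formula [43] for N = 4 targets. The following minimal Python sketch (the function names are ours, for illustration only) reproduces the values for subject 1:

```python
import math

def bits_per_selection(p: float, n: int = 4) -> float:
    """Wolpaw bits per selection [43] for an n-class BCI with accuracy p."""
    if p >= 1.0:
        return math.log2(n)
    return (math.log2(n) + p * math.log2(p)
            + (1.0 - p) * math.log2((1.0 - p) / (n - 1)))

def itr_bits_per_min(correct: int, total: int, time_s: float, n: int = 4) -> float:
    """ITR = bits per selection times selections per minute."""
    return bits_per_selection(correct / total, n) * (total * 60.0 / time_s)

# Subject 1 from Table 3: 30 commands in 305.60 s, 26 of them correct.
print(round(26 / 30 * 100, 2))                     # accuracy: 86.67 %
print(round(itr_bits_per_min(26, 30, 305.60), 2))  # ITR: ~7.2 bits/min
```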

The command "drive forward" starts an autopilot program that guides the robot through a straight distance until it reaches a predefined stopping point.
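As a rough illustration of this shared-control design, the sketch below maps the four SSVEP classes onto actions in the two GUI modes described in this study. It is a simplified, hypothetical sketch, not the authors' implementation; all names are illustrative.

```python
# Illustrative four-class, two-mode shared-control mapping (hypothetical).
DRIVE_MODE = {
    1: "drive forward",          # hands control to the line-following autopilot
    2: "turn left",              # ~90-degree turn at a stopping point
    3: "turn right",
    4: "switch to camera mode",
}

CAMERA_MODE = {
    1: "look left",
    2: "look right",
    3: "look up",                # used, e.g., for the hidden-number task
    4: "switch to drive mode",
}

def execute(mode: str, ssvep_class: int) -> str:
    """Map one of the four classifier outputs to a robot action.
    'drive forward' engages the autopilot, which follows the taped line
    until the next predefined stopping point while the user waits."""
    commands = DRIVE_MODE if mode == "drive" else CAMERA_MODE
    return commands[ssvep_class]

print(execute("drive", 1))  # -> 'drive forward' (autopilot engaged)
```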

While the MRC executed the autopilot program and followed the line for some time, subjects purposely ignored the flickering targets and concentrated on the video feedback instead. As these waiting periods were added to the actual classification times of each command that followed an autopilot command ("drive forward"), the ITR values are considerably low. From the researcher's point of view, a number of errors could have been avoided if some subjects had paid more attention to the live feedback. Especially in the task of reading the hidden number, subjects were trying to drive along the path without looking at the text message that appeared on the live feedback. Only after realizing that the robot was not turning did they look at the video and see the text message instructing them to look up. This is visible in the analysis of the accuracy of the robot's commands (see Figure 10). The lowest accuracy, 87.49%, was observed for the action "change view to camera mode" (robot command 14); this may be caused by the fact that the subjects had become used to the actions "turn left"/"turn right" and "drive forward", which they had repeated six times from the starting point (see Figures 10 and 9). This could easily be avoided if the driving paths were designed with evenly distributed tasks (e.g., live scenario tasks such as reading a hidden number should occur more often).


Table 4: Comparison of BCI systems for robot navigation.

Publication                   Subjects  Classes  Literacy rate [%]  Mean accuracy [%]  Mean ITR [bits/min]  Robot type
Volosyak et al. [21]          86        4         98                92.26              17.24                E-puck
Zhao et al.¹ [34]              7        4        100                90.3                2.47                NAO
Kapeller et al.² [27]         11        4         91                91.36              N/A                  E-puck
Diez et al.³ [31]              7        4        100                88.80*             N/A                  Pioneer 3-DX
Holewa and Nawrocka [24]       3        4        100                73.75              11.36                Lego Mindstorm
Choi and Jo¹ [32]              5        2        100                84.4               11.40                NAO
This study                    61        4        100                93.03              14.07                MRC

¹ Results for SSVEP paradigm. ² f-VEP results for 4 classes without zero class. ³ Results for destination 1. * correct/(correct + wrong) commands.

Table 5: Distribution of the time blocks needed for selection of the first three commands ("drive forward") and the last three commands ("forward"/"look up") over all subjects.

Distribution [%] per time window T_S (in EEG data blocks)

Command                            8      10     15     20     30     40     50     60 to 160
First three "drive forward":
  1st                              6.56   18.03  18.03  22.95  11.48   6.56   8.20   8.20
  2nd                              8.20   22.95   8.20  19.67  11.48   8.20   4.92  16.39
  3rd                              9.84   13.11   8.20  19.67   6.56   9.84   8.20  24.59
  Mean                             8.20   18.03  11.48  20.77   9.84   8.20   7.10  16.39
Last three "forward"/"look up":
  1st                              9.84   18.03  14.75  18.03   9.84   3.28   6.56  19.67
  2nd                             11.48   16.39  16.39  14.75  13.11   9.84   1.64  16.39
  3rd                             11.48   14.75  11.48  22.95   6.56   6.56   4.92  21.31
  Mean                            10.93   16.39  14.21  18.58   9.84   6.56   4.37  19.13
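The time windows in Table 5 are counted in blocks of EEG data; in this study one block corresponds to 13 samples at the 128 Hz sampling rate (roughly 101.6 ms), so the shortest window of 8 blocks equals 812.5 ms. A small sketch of the conversion (names are illustrative):

```python
# Convert Table 5's time windows from EEG data blocks to seconds,
# with one block = 13 samples at the 128 Hz sampling rate used here.
SAMPLES_PER_BLOCK = 13
SAMPLING_RATE_HZ = 128

def blocks_to_seconds(blocks: int) -> float:
    return blocks * SAMPLES_PER_BLOCK / SAMPLING_RATE_HZ

for b in (8, 10, 15, 20, 30, 40, 50, 60, 160):
    print(f"{b:4d} blocks = {blocks_to_seconds(b):7.4f} s")
# e.g., 8 blocks = 0.8125 s and 160 blocks = 16.2500 s
```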

Figure 10: Accuracy of each command of the robot's path. The chart presents the accuracy of each command along the path (x-axis, commands 1–26) and its selection accuracy relative to the previous correct selection, over all subjects (y-axis, accuracy in %).

No difference can be observed between the time needed for the selection of the command "drive forward" at the beginning of the experiment and at its end (see Table 5); hence, neither fatigue nor practice seemed to influence performance.

The general literacy rate of 100% among test subjects indicates that SSVEP-based BCIs can be used by a larger population than previously assumed. This high literacy is due to the careful selection of individual SSVEP key parameters, which were determined by the presented calibration software.

4.1. Limitations. The SSVEP-control application has some restrictions:

(i) The wireless connection was implemented with the UDP protocol, and some information might therefore not be received properly.

(ii) In order to maximize performance, accuracy, and user friendliness, the number of simultaneously displayed targets was restricted to four, which resulted in a considerably lower ITR (see the short illustration after this list).

(iii) It should also be noted that for long-term use, recalibration might be inevitable. Although the number of targets is limited to four, the SSVEP-based BCI presented here depends on the user's vision and control over his or her eye movements. It therefore has to compete with other approaches such as P300 and eye-tracking systems. Visual stimulation with low frequencies in particular is known to cause fatigue.

(iv) Moreover, subjects in this study may not be representative of the general population; they tended to be young, healthy men. Further tests with older and physically impaired persons are required.
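Limitation (ii) can be put in numbers with the same Wolpaw formula used for Table 3: at perfect accuracy, each selection of an N-target interface carries at most log2(N) bits, so a four-target layout is capped at 2 bits per selection. A minimal illustration (target counts other than four are hypothetical here):

```python
import math

# Upper bound on bits per selection (perfect accuracy, P = 1)
# for different numbers of simultaneously displayed targets.
for n_targets in (4, 8, 16):
    print(n_targets, math.log2(n_targets))
# 4 -> 2.0 bits, 8 -> 3.0 bits, 16 -> 4.0 bits per selection
```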

5. Conclusions

In the present study, the SSVEP-based driving application (robot-GUI) was proven to be successful and robust. The driving application with live video feedback was tested with 61 healthy subjects. All participants successfully navigated a mobile semiautonomous robot through a predetermined 15-meter-long route. BCI key parameters were determined by the Wizard software. No negative influence of the live video feedback on the performance was observed. Such


applications could be used to control assistive robots but might also be of interest for healthy subjects. In contrast to P300, the SSVEP stimulation has to be sufficiently strong (in terms of size or visual angle) to significantly influence the user. A common problem of all BCI studies, which also occurred here, is keeping the user motivated and focused on the task. The evaluation of the questionnaire shows that for the majority of subjects the driving task with the SSVEP-based BCI was not fatiguing, even though lower stimulation frequencies, which usually cause more fatigue than higher frequencies, were used in the experiment. More specifically, frequencies between 6 and 12 Hz were determined in the calibration procedure, as they elicited the strongest SSVEP responses. Some subjects stated that, as they concentrated more on the camera video, they did not find the flickering annoying at all. Our results prove that SSVEP-based personalized adaptive BCI systems with a low target number can be used as an everyday assistive technology for remote robots. We have also shown that it is possible to design a four-target interface with extended capabilities such as camera and driving control. The control application presented here can be used for mobile robots as an assistive remote technology, as a game interface, for wheelchair control, or even for steering remotely controlled drones. This study should be considered as a step towards resolving the problems of semiautonomous mobile robots controlled with SSVEP-based BCIs. In order to further improve this approach, the next steps should be an online parameter adaptation mechanism for improving accuracy through practice, a possibility of turning the stimulation on/off, and two-way video and audio communication for a truly remote telepresence.

Competing Interests

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors would like to thank Thomas Grunenberg from the Faculty of Technology and Bionics at Rhine-Waal University of Applied Sciences for building, programming, and maintaining the MRCs. The authors would also like to thank the student assistants Leonie Hillen, Francisco Nunez, Saad Platini, Anna Catharina Thoma, and Paul Fanen Wundu for their help with the study. This research was supported by the German Federal Ministry of Education and Research (BMBF) under Grants nos. 16SV6364 and 01DR14014 and by the European Fund for Regional Development (EFRD, or EFRE in Germany) under Grant no. GE-1-1-047.

References

[1] M. Cheney and R. Uth, Tesla: Master of Lightning, Barnes & Noble, New York, NY, USA, 1999.
[2] W. Walter, The Living Brain, W. W. Norton & Company, New York, NY, USA, 1953.
[3] P. Flandorfer, "Population ageing and socially assistive robots for elderly persons: the importance of sociodemographic factors for user acceptance," International Journal of Population Research, vol. 2012, Article ID 829835, 13 pages, 2012.
[4] A. Nijholt, D. Tan, G. Pfurtscheller et al., "Brain-computer interfacing for intelligent systems," IEEE Intelligent Systems, vol. 23, no. 3, pp. 72–79, 2008.
[5] J. D. R. Millan, R. Rupp, G. R. Muller-Putz et al., "Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges," Frontiers in Neuroscience, vol. 4, article 161, 2010.
[6] C. Brunner, N. Birbaumer, B. Blankertz et al., "BNCI horizon 2020: towards a roadmap for the BCI community," Brain-Computer Interfaces, vol. 2, no. 1, pp. 1–10, 2015.
[7] R. Bogue, "Exoskeletons and robotic prosthetics: a review of recent developments," Industrial Robot, vol. 36, no. 5, pp. 421–427, 2009.
[8] R. Ron-Angevin, F. Velasco-Alvarez, S. Sancha-Ros, and L. da Silva-Sauer, "A two-class self-paced BCI to control a robot in four directions," IEEE International Conference on Rehabilitation Robotics (ICORR '11), vol. 2011, Article ID 5975486, 2011.
[9] L. Tonin, T. Carlson, R. Leeb, and J. D. R. Millan, "Brain-controlled telepresence robot by motor-disabled people," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4227–4230, IEEE, Boston, Mass, USA, September 2011.
[10] D. Huang, K. Qian, D.-Y. Fei, W. Jia, X. Chen, and O. Bai, "Electroencephalography (EEG)-based brain-computer interface (BCI): a 2-D virtual wheelchair control based on event-related desynchronization/synchronization and state control," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 20, no. 3, pp. 379–388, 2012.
[11] A. Chella, E. Pagello, E. Menegatti et al., "A BCI teleoperated museum robotic guide," in Proceedings of the International Conference on Complex, Intelligent and Software Intensive Systems (CISIS '09), pp. 783–788, IEEE, Fukuoka, Japan, March 2009.
[12] C. Escolano, J. M. Antelis, and J. Minguez, "A telepresence mobile robot controlled with a noninvasive brain-computer interface," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 3, pp. 793–804, 2012.
[13] F.-B. Vialatte, M. Maurice, J. Dauwels, and A. Cichocki, "Steady-state visually evoked potentials: focus on essential paradigms and future perspectives," Progress in Neurobiology, vol. 90, no. 4, pp. 418–438, 2010.
[14] D. Regan, "Some characteristics of average steady-state and transient responses evoked by modulated light," Electroencephalography and Clinical Neurophysiology, vol. 20, no. 3, pp. 238–248, 1966.
[15] Y. Frank and F. Torres, "Visual evoked potentials in the evaluation of 'cortical blindness' in children," Annals of Neurology, vol. 6, no. 2, pp. 126–129, 1979.
[16] W. Ian McDonald, A. Compston, G. Edan et al., "Recommended diagnostic criteria for multiple sclerosis: guidelines from the international panel on the diagnosis of multiple sclerosis," Annals of Neurology, vol. 50, no. 1, pp. 121–127, 2001.
[17] S. Gao, Y. Wang, X. Gao, and B. Hong, "Visual and auditory brain-computer interfaces," IEEE Transactions on Biomedical Engineering, vol. 61, no. 5, pp. 1436–1447, 2014.
[18] I. Volosyak, "SSVEP-based Bremen-BCI interface – boosting information transfer rates," Journal of Neural Engineering, vol. 8, no. 3, Article ID 036020, 2011.
[19] G. R. Muller-Putz and G. Pfurtscheller, "Control of an electrical prosthesis with an SSVEP-based BCI," IEEE Transactions on Biomedical Engineering, vol. 55, no. 1, pp. 361–364, 2008.
[20] P. Martinez, H. Bakardjian, and A. Cichocki, "Fully online multicommand brain-computer interface with visual neurofeedback using SSVEP paradigm," Computational Intelligence and Neuroscience, vol. 2007, Article ID 94561, 9 pages, 2007.
[21] I. Volosyak, D. Valbuena, T. Luth, T. Malechka, and A. Graser, "BCI demographics II: how many (and what kinds of) people can use a high-frequency SSVEP BCI?" IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 19, no. 3, pp. 232–239, 2011.
[22] J. Zhao, Q. Meng, W. Li, M. Li, and G. Chen, "SSVEP-based hierarchical architecture for control of a humanoid robot with mind," in Proceedings of the 11th World Congress on Intelligent Control and Automation (WCICA '14), pp. 2401–2406, Shenyang, China, July 2014.
[23] A. Guneysu and H. Levent Akin, "An SSVEP based BCI to control a humanoid robot by using portable EEG device," in Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '13), pp. 6905–6908, IEEE, Osaka, Japan, July 2013.
[24] K. Holewa and A. Nawrocka, "Emotiv EPOC neuroheadset in brain-computer interface," in Proceedings of the 15th International Carpathian Control Conference (ICCC '14), pp. 149–152, IEEE, Velke Karlovice, Czech Republic, May 2014.
[25] N.-S. Kwak, K.-R. Muller, and S.-W. Lee, "Toward exoskeleton control based on steady state visual evoked potentials," in Proceedings of the International Winter Workshop on Brain-Computer Interface (BCI '14), pp. 1–2, Gangwon, South Korea, February 2014.
[26] G. Bin, X. Gao, Y. Wang, B. Hong, and S. Gao, "VEP-based brain-computer interfaces: time, frequency, and code modulations," IEEE Computational Intelligence Magazine, vol. 4, no. 4, pp. 22–26, 2009.
[27] C. Kapeller, C. Hintermuller, M. Abu-Alqumsan, R. Pruckl, A. Peer, and C. Guger, "A BCI using VEP for continuous control of a mobile robot," in Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '13), pp. 5254–5257, IEEE, Osaka, Japan, July 2013.
[28] H. Wang, T. Li, and Z. Huang, "Remote control of an electrical car with SSVEP-based BCI," in Proceedings of the IEEE International Conference on Information Theory and Information Security (ICITIS '10), pp. 837–840, IEEE, Beijing, China, December 2010.
[29] Y. Wang, Y.-T. Wang, and T.-P. Jung, "Visual stimulus design for high-rate SSVEP BCI," Electronics Letters, vol. 46, no. 15, pp. 1057–1058, 2010.
[30] P. Gergondet, D. Petit, and A. Kheddar, "Steering a robot with a brain-computer interface: impact of video feedback on BCI performance," in Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN '12), pp. 271–276, Paris, France, September 2012.
[31] P. F. Diez, V. A. Mut, E. Laciar, and E. M. Avila Perona, "Mobile robot navigation with a self-paced brain-computer interface based on high-frequency SSVEP," Robotica, vol. 32, no. 5, pp. 695–709, 2014.
[32] B. Choi and S. Jo, "A low-cost EEG system-based hybrid brain-computer interface for humanoid robot navigation and recognition," PLoS ONE, vol. 8, no. 9, Article ID e74583, 2013.
[33] E. Tidoni, P. Gergondet, A. Kheddar, and S. M. Aglioti, "Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot," Frontiers in Neurorobotics, vol. 8, article 20, pp. 1–8, 2014.
[34] J. Zhao, W. Li, and M. Li, "Comparative study of SSVEP- and P300-based models for the telepresence control of humanoid robots," PLoS ONE, vol. 10, no. 11, Article ID e0142168, 2015.
[35] T. M. Rutkowski, K. Shimizu, T. Kodama, P. Jurica, and A. Cichocki, "Brain-robot interfaces using spatial tactile BCI paradigms," in Symbiotic Interaction, pp. 132–137, Springer, New York, NY, USA, 2015.
[36] C. Brunner, B. Z. Allison, D. J. Krusienski et al., "Improved signal processing approaches in an offline simulation of a hybrid brain-computer interface," Journal of Neuroscience Methods, vol. 188, no. 1, pp. 165–173, 2010.
[37] F. Gembler, P. Stawicki, and I. Volosyak, "Autonomous parameter adjustment for SSVEP-based BCIs with a novel BCI wizard," Frontiers in Neuroscience, vol. 9, article 474, pp. 1–12, 2015.
[38] E. W. Sellers, T. M. Vaughan, and J. R. Wolpaw, "A brain-computer interface for long-term independent home use," Amyotrophic Lateral Sclerosis, vol. 11, no. 5, pp. 449–455, 2010.
[39] E. M. Holz, L. Botrel, and A. Kubler, "Bridging gaps: long-term independent BCI home-use by a locked-in end-user," in Proceedings of the TOBI Workshop IV, Sion, Switzerland, 2013.
[40] O. Friman, I. Volosyak, and A. Graser, "Multiple channel detection of steady-state visual evoked potentials for brain-computer interfaces," IEEE Transactions on Biomedical Engineering, vol. 54, no. 4, pp. 742–750, 2007.
[41] I. Volosyak, T. Malechka, D. Valbuena, and A. Graser, "A novel calibration method for SSVEP based brain-computer interfaces," in Proceedings of the 18th European Signal Processing Conference (EUSIPCO '10), pp. 939–943, Aalborg, Denmark, August 2010.
[42] I. Volosyak, H. Cecotti, and A. Graser, "Optimal visual stimuli on LCD screens for SSVEP based brain-computer interfaces," in Proceedings of the 4th International IEEE/EMBS Conference on Neural Engineering (NER '09), pp. 447–450, IEEE, Antalya, Turkey, May 2009.
[43] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control," Clinical Neurophysiology, vol. 113, no. 6, pp. 767–791, 2002.


14 Computational Intelligence and Neuroscience

[19] G R Muller-Putz and G Pfurtscheller ldquoControl of an electricalprosthesis with an SSVEP-based BCIrdquo IEEE Transactions onBiomedical Engineering vol 55 no 1 pp 361ndash364 2008

[20] P Martinez H Bakardjian and A Cichocki ldquoFully onlinemulticommand brain-computer interface with visual neuro-feedback using SSVEP paradigmrdquo Computational Intelligenceand Neuroscience vol 2007 Article ID 94561 9 pages 2007

[21] I Volosyak D Valbuena T Luth T Malechka and A GraserldquoBCI demographics II how many (and What Kinds of) peoplecan use a high-frequency SSVEP BCIrdquo IEEE Transactions onNeural Systems and Rehabilitation Engineering vol 19 no 3 pp232ndash239 2011

[22] J Zhao Q Meng W Li M Li and G Chen ldquoSSVEP-basedhierarchical architecture for control of a humanoid robot withmindrdquo in Proceedings of the 11th World Congress on Intelli-gent Control and Automation (WCICA rsquo14) pp 2401ndash2406Shenyang China July 2014

[23] A Guneysu and H Levent Akin ldquoAn SSVEP based BCI tocontrol a humanoid robot by using portable EEG devicerdquo inProceedings of the 35th Annual International Conference of theIEEE Engineering in Medicine and Biology Society (EMBC rsquo13)pp 6905ndash6908 IEEE Osaka Japan July 2013

[24] K Holewa and A Nawrocka ldquoEmotiv EPOC neuroheadset inbrainmdashcomputer interfacerdquo in Proceedings of the 15th Interna-tional Carpathian Control Conference (ICCC rsquo14) pp 149ndash152IEEE Velke Karlovice Czech Republic May 2014

[25] N-S Kwak K-R Muller and S-W Lee ldquoToward exoskeletoncontrol based on steady state visual evoked potentialsrdquo inProceedings of the International Winter Workshop on Brain-Computer Interface (BCI rsquo14) pp 1ndash2 Gangwon South KoreaFebruary 2014

[26] G Bin X Gao Y Wang B Hong and S Gao ldquoVEP-basedbrain-computer interfaces time frequency and code modula-tionsrdquo IEEE Computational Intelligence Magazine vol 4 no 4pp 22ndash26 2009

[27] C Kapeller C Hintermuller M Abu-Alqumsan R Pruckl APeer and C Guger ldquoA BCI using VEP for continuous control ofa mobile robotrdquo in Proceedings of the 35th Annual InternationalConference of the IEEE Engineering in Medicine and BiologySociety (EMBC rsquo13) pp 5254ndash5257 IEEE Osaka Japan July2013

[28] H Wang T Li and Z Huang ldquoRemote control of an electricalcar with SSVEP-based BCIrdquo in Proceedings of the IEEE Interna-tional Conference on Information Theory and Information Secu-rity (ICITIS rsquo10) pp 837ndash840 IEEE Beijing China December2010

[29] Y Wang Y-T Wang and T-P Jung ldquoVisual stimulus designfor high-rate SSVEP BCIrdquo Electronics Letters vol 46 no 15 pp1057ndash1058 2010

[30] P Gergondet D Petit and A Kheddar ldquoSteering a robot witha brain-computer interface impact of video feedback on BCIperformancerdquo in Proceedings of the 21st IEEE InternationalSymposium on Robot and Human Interactive Communication(RO-MAN rsquo12) pp 271ndash276 Paris France September 2012

[31] P F Diez V A Mut E Laciar and E M Avila Perona ldquoMobilerobot navigation with a self-paced brain-computer interfacebased on high-frequency SSVEPrdquo Robotica vol 32 no 5 pp695ndash709 2014

[32] B Choi and S Jo ldquoA low-cost EEG system-based hybridbrain-computer interface for humanoid robot navigation andrecognitionrdquo PLoS ONE vol 8 no 9 Article ID e74583 2013

[33] E Tidoni P Gergondet A Kheddar and S M Aglioti ldquoAudio-visual feedback improves the BCI performance in the naviga-tional control of a humanoid robotrdquo Frontiers in Neuroroboticsvol 8 article 20 pp 1ndash8 2014

[34] J Zhao W Li and M Li ldquoComparative study of SSVEP- andP300-based models for the telepresence control of humanoidrobotsrdquo PLoS ONE vol 10 no 11 Article ID e0142168 2015

[35] T M Rutkowski K Shimizu T Kodama P Jurica and ACichocki ldquoBrain-robot interfaces using spatial tactile BCIparadigmsrdquo in Symbiotic Interaction pp 132ndash137 Springer NewYork NY USA 2015

[36] C Brunner B Z Allison D J Krusienski et al ldquoImprovedsignal processing approaches in an offline simulation of a hybridbrain-computer interfacerdquo Journal of NeuroscienceMethods vol188 no 1 pp 165ndash173 2010

[37] F Gembler P Stawicki and I Volosyak ldquoAutonomous parame-ter adjustment for SSVEP-based BCIs with a novel BCI wizardrdquoFrontiers in Neuroscience vol 9 article 474 pp 1ndash12 2015

[38] E W Sellers T M Vaughan and J R Wolpaw ldquoA brain-computer interface for long-term independent home userdquoAmyotrophic Lateral Sclerosis vol 11 no 5 pp 449ndash455 2010

[39] E M Holz L Botrel and A Kubler ldquoBridging gaps long-term independent BCI home-use by a locked-in end-userrdquo inProceedings of the TOBI Workshop IV Sion Switzerland 2013

[40] O Friman I Volosyak and A Graser ldquoMultiple channel detec-tion of steady-state visual evoked potentials for brain-computerinterfacesrdquo IEEE Transactions on Biomedical Engineering vol54 no 4 pp 742ndash750 2007

[41] I Volosyak T Malechka D Valbuena and A Graser ldquoAnovel calibration method for SSVEP based brain-computerinterfacesrdquo in Proceedings of the 18th European Signal ProcessingConference (EUSIPCO rsquo10) pp 939ndash943 Aalborg DenmarkAugust 2010

[42] I Volosyak H Cecotti and A Graser ldquoOptimal visual stimulion LCD screens for SSVEP based brain-computer interfacesrdquoin Proceedings of the 4th International IEEEEMBS Conferenceon Neural Engineering (NER rsquo09) pp 447ndash450 IEEE AntalyaTurkey May 2009

[43] J R Wolpaw N Birbaumer D J McFarland G Pfurtschellerand T M Vaughan ldquoBrain-computer interfaces for communi-cation and controlrdquo Clinical Neurophysiology vol 113 no 6 pp767ndash791 2002

Page 5: Driving a Semiautonomous Mobile Robotic Car Controlled by ... · Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI ... (frequencycoding,methodoriginallyproposed

Computational Intelligence and Neuroscience 5

The first part is the SSVEP evoked response, consisting of sine and cosine functions of the stimulation frequency $f$ and its harmonics $k$, with the corresponding amplitudes $a_{i,k}$ and $b_{i,k}$. The last part, $E_{i,t}$, represents the noise component of electrode $i$: all other activities not related to the SSVEP evoked response, as well as artifacts. The $N_t$ samples of the $i$th EEG signal are stored in vector form, $y_i = X\tau_i + E_i$, where $y_i = [y_i(1), \ldots, y_i(N_t)]^T$ is an $N_t \times 1$ vector, $X$ is the SSVEP model matrix of size $N_t \times 2N_h$, containing the $\sin(2\pi k f t)$ and $\cos(2\pi k f t)$ pairs in its columns, and the vector $\tau_i$ contains the corresponding amplitudes. Based on the assumption that the nuisance is stationary within the short time segment length, the signals from the $N_y$ electrodes can be stored in matrix form, $Y = [y_1, \ldots, y_{N_y}]$.

2.2.4. Minimum Energy Combination. In order to cancel out the nuisance and noise and to boost the SSVEP response, channels are created as a weighted sum of the $N_y$ electrode signals, $s = \sum_{i=1}^{N_y} w_i y_i = Yw$. Several sets $S$ of $N_s$ channels can be created with different combinations of the original electrode signals, $S = YW$, where $W$ is an $N_y \times N_s$ matrix that contains the different weight combinations. The structure of $W$ is based on the minimum energy combination technique. In order to use this combination method for cancelling the nuisance signal, in a first step any potential SSVEP component needs to be removed from the signal: $\tilde{Y} = Y - X(X^T X)^{-1} X^T Y$. The next step is to find a weight vector $\hat{w}$ that minimizes the resulting energy of the combination of the electrodes' noise signals. This leads to the optimization problem of minimizing the nuisance and noise component:

$$\min_{\hat{w}} \|\tilde{Y}\hat{w}\|^2 = \min_{\hat{w}} \hat{w}^T \tilde{Y}^T \tilde{Y} \hat{w}. \qquad (2)$$

The solution of this minimization problem is the smallest eigenvector $v_1$ of the symmetric matrix $\tilde{Y}^T\tilde{Y}$, and the energy of the resulting combination equals the smallest eigenvalue $\lambda_1$ of this matrix. In order to discard up to 90% of the nuisance signal, the total number of channels is selected by finding the smallest value of $N_s$ that satisfies the following equation:

$$\frac{\sum_{i=1}^{N_s} \lambda_i}{\sum_{j=1}^{N_y} \lambda_j} > 0.1. \qquad (3)$$

To detect the SSVEP response for a frequency, the power of that frequency and its $N_h$ harmonics is estimated:

$$\hat{P} = \frac{1}{N_s N_h} \sum_{l=1}^{N_s} \sum_{k=1}^{N_h} \left\| X_k^T s_l \right\|^2. \qquad (4)$$

To prevent overlapping between frequencies, we use $N_h = 2$ in the actual system implementation. The power estimates of the SSVEP stimulation frequencies and additional frequencies ($N_f$ frequencies in total) are normalized into probabilities:

$$p_i = \frac{\hat{P}_i}{\sum_{j=1}^{N_f} \hat{P}_j}, \quad \text{with } \sum_{i=1}^{N_f} p_i = 1, \qquad (5)$$

Table 1: Overview of the time windows $T_S$ of EEG data used for SSVEP classification. Eleven time segments between 812.5 ms (8 × 13 samples, or 8 blocks of EEG data) and 16,250 ms were used in this study.

Time [ms]   Blocks of EEG data
812.5       8
1015.6      10
1523.4      15
2031.3      20
3046.9      30
4062.5      40
5078.1      50
6093.8      60
7109.4      70
8125.0      80
16250.0     160

where $\hat{P}_i$ is the power estimate of the $i$th frequency, $1 \le i \le N_f$. To increase the difference between the results, we apply a Softmax function:

$$p_i' = \frac{e^{\alpha p_i}}{\sum_{j=1}^{N_f} e^{\alpha p_j}}, \quad \text{with } \sum_{i=1}^{N_f} p_i' = 1, \qquad (6)$$

where $\alpha$ is set to 0.25. The output values of the Softmax function are between 0 and 1, and their sum is equal to 1. The classification output $C_o$ for the signal of the $i$th frequency is the result of three conditions: the probability $p_i'$ of the $i$th frequency is the highest, it surpasses a predefined threshold ($p_i' \ge \beta_i$), and the detected frequency is one of the stimulation frequencies. After each classification, to prevent wrong classifications, the classifier output was rejected for the next 9 blocks (approximately 914 ms of EEG data at a sampling rate of 128 Hz), and the visual stimulation was paused for that time period. This classification method is the next iteration of the Bremen-BCI, as presented, for example, in [18]. An example of the recorded signal and the SNR can be seen in Figure 9.
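The detection pipeline above can be condensed into a short sketch. The following Python/NumPy code is illustrative only: the function names, the scalar threshold, and the buffer layout are our assumptions, not the authors' implementation (the actual system uses per-frequency thresholds $\beta_i$ and time windows chosen by the Wizard described in Section 2.3.2).

```python
# Minimal sketch of minimum energy combination (MEC) detection,
# following eqs. (2)-(6) above; names are illustrative.
import numpy as np

FS = 128                    # sampling rate [Hz]
N_H = 2                     # number of harmonics, as in the text

def model_matrix(f, n_samples):
    """SSVEP model matrix X (n_samples x 2*N_H) with sin/cos pairs."""
    t = np.arange(n_samples) / FS
    cols = []
    for k in range(1, N_H + 1):
        cols += [np.sin(2 * np.pi * k * f * t), np.cos(2 * np.pi * k * f * t)]
    return np.column_stack(cols)

def mec_power(Y, f):
    """Estimate the SSVEP power for frequency f; Y: samples x electrodes."""
    X = model_matrix(f, Y.shape[0])
    # Remove any potential SSVEP component: Y~ = Y - X (X^T X)^-1 X^T Y
    Y_t = Y - X @ np.linalg.solve(X.T @ X, X.T @ Y)
    lam, V = np.linalg.eigh(Y_t.T @ Y_t)      # ascending eigenvalues
    # Smallest N_s whose cumulative nuisance-energy ratio exceeds 0.1, eq. (3)
    n_s = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.1, side="right")) + 1
    S = Y @ V[:, :n_s]                        # weighted channel combinations
    power = 0.0                               # eq. (4)
    for l in range(n_s):
        for k in range(N_H):
            X_k = X[:, 2 * k:2 * k + 2]       # sin/cos pair of harmonic k+1
            power += np.sum((X_k.T @ S[:, l]) ** 2)
    return power / (n_s * N_H)

def classify(Y, freqs, alpha=0.25, beta=0.26):
    """Normalize powers to probabilities (5), apply the Softmax (6), and
    return the index of the detected frequency, or None. beta is an
    arbitrary placeholder; real thresholds beta_i come from the Wizard."""
    p_hat = np.array([mec_power(Y, f) for f in freqs])
    p = p_hat / p_hat.sum()                   # eq. (5)
    e = np.exp(alpha * p)
    p_soft = e / e.sum()                      # eq. (6)
    i = int(np.argmax(p_soft))
    # Third condition omitted: freqs is assumed to contain only the
    # four stimulation frequencies.
    return i if p_soft[i] >= beta else None
```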

The SSVEP classification was performed online every 13 samples (ca. 100 ms) on the basis of the adaptive time segment length of the acquired EEG data introduced in [18]. If no classification was possible, the time segment length (window) was gradually extended to the next predefined window lengths (see Table 1 and Figure 2).
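As a sketch of this adaptive mechanism (again with illustrative names, reusing the classify sketch above; the 9-block gaze-shift rejection after each output is omitted for brevity):

```python
# Adaptive time segment mechanism (Table 1, [18]): a classification is
# attempted every block of 13 samples (~100 ms); if no command is
# detected, the window grows to the next predefined length.
BLOCK = 13
WINDOWS = [8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 160]   # blocks, Table 1

class AdaptiveClassifier:
    def __init__(self, start_idx, freqs):
        self.start = start_idx       # Wizard-selected starting window T_S
        self.idx = start_idx
        self.freqs = freqs

    def on_new_block(self, buffer):
        """buffer: most recent EEG data, samples x electrodes."""
        n = WINDOWS[self.idx] * BLOCK
        if buffer.shape[0] < n:
            return None                                  # need more data
        cmd = classify(buffer[-n:, :], self.freqs)       # sketch above
        if cmd is None and self.idx < len(WINDOWS) - 1:
            self.idx += 1            # extend to the next predefined length
        elif cmd is not None:
            self.idx = self.start    # reset after a successful output
        return cmd
```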

2.3. Software

2.3.1. Graphical User Interface. A custom-made graphical user interface (robot-GUI) was designed to control the MRC. DirectX 9 was used to draw the images (buttons and a real-time picture from the actual camera view in the background) on the monitor, and the OpenCV library (version 2.4.10, http://www.opencv.org) was used for image acquisition. Each received frame was rendered into the background texture as soon as it was received (max. 30 Hz; the frame rate was limited by the camera used). Each received frame (640 × 480 pixels) was rescaled to 1280 × 960 pixels. Due to the ratio of the streamed video frames and the resolution of the monitors used, a gray background in the form of a frame was implemented to prevent unnatural video scaling.
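For illustration, the video path can be sketched with the modern Python/OpenCV API; the original used the C++ OpenCV 2.4.10 API together with DirectX rendering, so source index and window handling here are placeholders, not the authors' code.

```python
# Functional sketch of the frame acquisition and upscaling step.
import cv2

cap = cv2.VideoCapture(0)                 # placeholder for the MRC stream
while True:
    ok, frame = cap.read()                # 640 x 480 frames, up to 30 Hz
    if not ok:
        continue                          # dropped frame (UDP stream)
    # Upscale to 1280 x 960; the 4:3 aspect ratio is preserved, and a
    # gray frame around the video prevents unnatural scaling on screen.
    big = cv2.resize(frame, (1280, 960), interpolation=cv2.INTER_LINEAR)
    cv2.imshow("robot-GUI background", big)
    if cv2.waitKey(1) == 27:              # Esc to quit
        break
cap.release()
```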


Figure 2: Changes in the time segment length after a performed classification, in case no distinct classification can be made and the actual time t allows the extension to the next predefined value. After each classification (gray bars of 8, 10, 20, and 160 blocks), additional time for gaze shifting was included (black), during which the classifier output was rejected for 9 blocks.

Figure 3: A screenshot of the robot-GUI in drive mode. In this situation, the user had to select the "drive forward" command, which caused the robot to execute an autopilot program that uses the built-in light sensors to follow the white line.

Figure 4: Signs positioned at the turning points and oriented towards the camera showed the correct direction for following the track. Here, in the screenshot, the robot has automatically stopped after reaching a turning point, and a sign instructs the user to select the command "turn left".

The starting dimensions of the buttons were 146 × 512 pixels for the left and right buttons, 576 × 128 pixels for the top button, and 536 × 96 pixels for the bottom button (see Figures 3 and 4). During runtime, the size of the buttons changed continuously, depending on the calculated probabilities of the $i$th stimulus in (6). The robot-GUI had two modes: a drive mode and a camera mode. In drive mode, the subject was able to send the following commands to the MRC: "drive forward" (a discrete command using the line-sensors to follow the taped line on autopilot to the next direction change), "turn left", "turn right", and "switch to camera mode". In camera mode, the subject was able to send the following commands: "look left", "look right", "look up", and "switch to drive mode". While the camera was looking up, the button "look up" changed its content and functionality to "look forward". The control buttons were red in drive mode and yellow in camera mode to increase user friendliness. The aspect ratio of the video stream was 4:3, so the computer screen was not entirely filled out. If the command "drive forward" was selected by the user, an autonomous program was executed on the MRC. The MRC used the four line-sensors to follow the white line taped to the floor until it arrived at a turning point, where the robot stopped automatically. While driving, the MRC automatically made small corrections: when the line was detected by one of the inner sensors, the rotation of the wheels on the same side was decreased by 50% until another sensor detected the line. Similarly, when the line was detected by one of the outer sensors, the wheels on the same side were stopped, so that the MRC could drive in a curve in order to get back on track. If one of the turning commands was selected, the MRC made an approximately 90-degree turn in the corresponding direction. In drive mode, the camera was facing slightly towards the ground (approximately 20 degrees down from the frontal view, in the "look down" position). When switching to camera mode (represented by the word "CAMERA" at the bottom of the robot-GUI), the camera changed to the "look forward" position (switching back to drive mode changed the camera position to looking down again). All the EEG data and the received video stream from the MRC were stored anonymously.
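The line-keeping rule just described can be sketched as follows; the sensor ordering, function name, and speed units are assumptions made for illustration only.

```python
# Sketch of the MRC's autopilot correction rule: four line-sensors
# ordered [outer_left, inner_left, inner_right, outer_right]; returned
# speeds are fractions of the cruise speed for each wheel side.
def correct_course(sensors, base_speed=1.0):
    outer_left, inner_left, inner_right, outer_right = sensors
    left, right = base_speed, base_speed
    if inner_left:        # line drifting under the left inner sensor:
        left *= 0.5       # slow the same-side wheels by 50%
    elif inner_right:
        right *= 0.5
    if outer_left:        # strong drift: stop the same-side wheels so
        left = 0.0        # the MRC curves back onto the line
    elif outer_right:
        right = 0.0
    return left, right
```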


Figure 5: (a) shows the second step of the Wizard. The subject faced a circle consisting of small segments which flickered simultaneously with seven different frequencies. Each of the seven tested frequencies was represented by the same number of segments. The segments representing one of the seven tested frequencies (b) were scattered in random order, forming the stimulation circle (a). After this multistimulation with thirteen or fourteen different frequencies (depending on the alpha test result), the EEG data were analyzed and the frequencies ordered from the strongest to the weakest SSVEP response.


Figure 6: The third step of the Wizard (a). The main frequency $f_S$ is in the center (white circle stimulation), and the other three frequencies (of the four $f_{S_i}$ with the strongest SSVEP responses) are divided into small stimulation blocks and scattered around $f_S$, acting as "noise" in peripheral vision. This pattern is shown in (b). This was repeated for each $f_{S_i}$.

2.3.2. Wizard. The Wizard software is, generally speaking, a calibration program that autonomously determines the BCI key parameters [37]. It is designed to select user-specific optimal stimulation frequencies for SSVEP-based BCIs on LCD screens and is a further development of the method presented in [41]. The Wizard chooses the four frequencies with the strongest SSVEP responses out of a set of 14 possible stimulation frequencies $f_i$ (ranging from 6 Hz to 20 Hz) that can be realized on a monitor screen with a vertical refresh rate of 120 Hz [42]. In the following, we provide a brief description.
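The 14 candidate frequencies are not listed in this section. One plausible construction, consistent with [42] and with the stimulation frequencies reported later in Figure 9 (6.32, 6.67, 7.06, and 7.50 Hz, i.e., 120/19, 120/18, 120/17, and 120/16), is the set $f = 120/n$ for integer frame counts $n$. The sketch below is therefore an assumption, not the authors' published list.

```python
# Hypothetical reconstruction of 14 candidate frequencies on a 120 Hz
# screen: a stimulus period lasting an integer number n of frames
# gives f = 120 / n (cf. [42]).
refresh_hz = 120.0
candidates = [refresh_hz / n for n in range(6, 20)]  # n = 6 .. 19
print([round(f, 2) for f in candidates])
# [20.0, 17.14, 15.0, 13.33, 12.0, 10.91, 10.0, 9.23, 8.57, 8.0,
#  7.5, 7.06, 6.67, 6.32]
```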

Step 1 (alpha test). The subject was instructed by audio feedback to close his or her eyes. Fifteen frequencies in the alpha wave band (8.27, 8.57, 8.87, 8.93, 9.23, 9.53, 9.70, 10.00, 10.30, 10.61, 10.91, 11.21, 11.70, 12.00, and 12.30 Hz) were measured for 10 seconds during the closed-eyes period; if any of those 15 frequencies interfered with a possible stimulation frequency $f_i$ ($\Delta f \le 0.15$ Hz), that frequency $f_i$ was filtered out.
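The exclusion rule of Step 1 can be sketched as follows; whether a measured alpha peak counts as "present" is decided here by a power threshold, which is our assumption rather than a detail stated in the text.

```python
# Sketch of the alpha-band exclusion: candidate stimulation frequencies
# closer than 0.15 Hz to an active alpha peak are dropped.
ALPHA_PEAKS = [8.27, 8.57, 8.87, 8.93, 9.23, 9.53, 9.70, 10.00, 10.30,
               10.61, 10.91, 11.21, 11.70, 12.00, 12.30]  # from the text

def filter_candidates(candidates, alpha_power, threshold, df=0.15):
    """alpha_power: measured power at each ALPHA_PEAKS frequency during
    the eyes-closed recording; the power threshold is an assumption."""
    active = [a for a, p in zip(ALPHA_PEAKS, alpha_power) if p > threshold]
    return [f for f in candidates
            if all(abs(f - a) > df for a in active)]
```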

Step 2 (stimulation frequency selection). In order to avoid the influence of harmonic frequencies, the user had to look at two circles in sequence. Each circle contained seven stimulation frequencies, presented in pseudorandom order as small green circle segments (see Figure 5). This step took 20 seconds.
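How the two circles were composed is not spelled out; one plausible reading, and our assumption here, is that a frequency and its second harmonic (e.g., 7.5 and 15 Hz) were never placed in the same circle. A greedy split along these lines:

```python
# Hypothetical grouping of 14 candidates into two circles of seven such
# that no frequency shares a circle with its second harmonic.
def split_into_circles(freqs, size=7, tol=0.05):
    a, b = [], []
    for f in sorted(freqs, reverse=True):
        harmonic_clash = any(abs(f - 2 * g) < tol or abs(2 * f - g) < tol
                             for g in a)
        (b if harmonic_clash or len(a) >= size else a).append(f)
    return a, b

circle_a, circle_b = split_into_circles([120.0 / n for n in range(6, 20)])
```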

Step 3 (optimal classification thresholds and time segment length). In this step, the Wizard software chose the best starting time segment length $T_S$ as well as the optimal classification thresholds $\beta_i$ for the frequencies determined in the previous step. First, a white circle flickering at the frequency which had elicited the highest SSVEP response in phase 2 was displayed. It was necessary to simulate the noise caused by peripheral vision when concentrating on the target object. For this purpose, the white circle was surrounded by a green ring containing segments which presented the remaining optimal frequencies (see Figure 6). The user was instructed by audio feedback to gaze at the white circle.


Figure 7: Two subjects during the driving task. The robot-GUI is in camera mode. The subject in (a) sees a text message instructing him to select the button "look up". The subject in (b) is one step further and is facing the hidden secret number, which is attached underneath a drawer.

The circle and the ring flickered for 10 seconds while EEG data were recorded. After a two-second break, the flickering continued. The white circle now flickered with the second-highest frequency from phase two, while the ring flickered with the remaining three frequencies. This procedure was repeated until data for all four optimal frequencies had been collected. The total recording time for phase three was 40 seconds; on the basis of these data, the optimal thresholds and time segment length were determined. If one of the frequencies $f_{S_i}$ did not reach a defined minimal threshold $\beta_i$, this frequency was replaced with the next-best frequency from Step 2, and the procedure of this step was repeated.

2.4. Procedure. In order to analyze the variation in BCI performance, a pre- and postquestionnaire system (similar to [21]) was employed. After signing the consent form, each subject completed a brief prequestionnaire and was prepared for the EEG recording. Subjects were seated in a comfortable chair in front of the monitor, the electrode cap was placed, and electrode gel was applied. First, the subjects went through the steps of the Wizard, and the BCI key parameters (stimulation frequencies, classification thresholds, and optimal time segment length) were determined. Then the experimenter started the robot-GUI (Figure 3). The control task was to drive the MRC through a predetermined route (Figure 8). If the robot-GUI software recognized a wrong command, for example, "turn right" instead of "turn left", this command was not sent, so that the MRC would not drive off the track (Figure 7). If the subject made a few mistakes at the beginning and was willing to start over again (this occurred with a small number of subjects), the experimenter moved the MRC back to the starting point and restarted the robot-GUI. In the very few cases where the wireless connection to the MRC was interrupted (e.g., the MRC was mechanically unable to execute the command that was correctly selected in the robot-GUI by the user), the experimenter manually repositioned the MRC to the next turning point.

At a certain turning point (represented by a blue square in Figure 8, reached when approximately 60% of the driving task was completed), the camera faced a text message which instructed the subjects to look up (user actions are represented with circled numbers, e.g., number 5 in this case).


Figure 8: Track for the mobile robotic car. Subjects had to steer the MRC through a 15-meter-long course with 10 turns in order to reach the finish line. At a certain turning point (marked with a blue square), the MRC's camera faced a text message that instructed the subject to switch to camera mode. The task was to examine the surroundings and to look for a hidden number, which was attached underneath a drawer (otherwise not visible). One example of the optimal path is shown in Figure 9.

At this point, the user had to switch to camera mode and select the command "look up". Now the subject was able to see a hidden secret number, which was attached to the bottom of a drawer (located at a height of 50 cm, facing the floor, and not normally visible to the people in the room). The numbers varied randomly between the subjects and were only accessible via the MRC camera. After seeing the number on the screen, the subject had to change back to drive mode and continue the driving task. When arriving at the finish line, the camera faced another text message, which again instructed the subjects to look up. After changing to camera mode and selecting "look up", the camera faced a sign displaying the word "finish", and the robot-GUI stopped automatically. All of the EEG data and performed commands were stored on the hard drive of the desktop PC for further evaluation. The values of ITR and accuracy were calculated automatically by the software. The ITR was calculated using the formula presented by Wolpaw et al. in [43]; the number of possible choices for classification was equal to four, the same for each mode (drive mode or camera mode). After finishing the experiment, the subject filled out the postquestionnaire.
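The ITR computation can be made concrete with a short sketch; the helper below (our own naming) evaluates the Wolpaw formula and, fed with the values of subject 1 from Table 3 (30 commands in 305.60 s at 86.67% accuracy), reproduces the listed 7.20 bits/min.

```python
# Wolpaw ITR [43]: B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
# bits per selection, scaled by the selection rate; N = 4 targets here.
from math import log2

def wolpaw_itr(p, n_commands, total_seconds, n_targets=4):
    """Information transfer rate in bits/min (illustrative helper)."""
    if p >= 1.0:
        bits = log2(n_targets)               # perfect accuracy
    else:
        bits = (log2(n_targets) + p * log2(p)
                + (1 - p) * log2((1 - p) / (n_targets - 1)))
    return bits * n_commands * 60.0 / total_seconds

print(round(wolpaw_itr(0.8667, 30, 305.60), 2))  # 7.2, matching Table 3
```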


Figure 9: This chart presents, as an example, the BCI performance of subject 59 (optimal path with 100% accuracy). The top row shows the raw EEG signal acquired from channel $O_z$ (one of the eight EEG electrodes). The remaining graphs show the SNR values over the whole experiment (the dashed line represents the threshold for each command) for the four commands forward/up, turn left, turn right, and camera/drive; the stimulation frequencies for these commands were 6.67, 7.50, 7.06, and 6.32 Hz, respectively. The x-axis, the same for all graphs, represents the total duration of the experiment (150 seconds). The markings above each graph represent the exact time of each classification; circled numbers below the x-axis correspond to the task actions.

3. Results

The results of the driving performance are shown in Table 3. All 61 subjects were able to control the BCI application and completed the entire driving task. The overall mean (SD) ITR of the driving performance was 14.07 (4.44) bits/min. The mean (SD) accuracy was 93.03% (5.73). Except for one, all subjects reached accuracies above 80%; 40 subjects reached accuracies above 90%, and nineteen of them even completed the steering task without any errors. The mean (SD) accuracy of each action (commands required to complete the task) was 96.14% (0.02). In the prequestionnaire, subjects answered questions regarding gender, the need for vision correction, tiredness, and BCI experience, as displayed in Table 2. Eighteen female subjects (30%) with an average (SD) age of 22.67 (4.30) years and 43 male subjects with an average (SD) age of 22.60 (5.32) years participated in the experiment. The influence of gender was also tested: the 18 female subjects achieved a mean (SD) accuracy and ITR of 93.24% (7.98) and 14.05 (5.73) bits/min, respectively, and the 43 male subjects achieved a mean (SD) accuracy and ITR of 92.95% (6.59) and 14.07 (5.08) bits/min, respectively. A one-way ANOVA showed no


Table 2: Questionnaire results. The table presents the answers collected from the prequestionnaire. The figures show the number of respondents or the mean (SD) value.

Age [years]: 22.62 (5.00)
Gender: male 43, female 18
Vision correction: yes 22, no 39
Tiredness: 1 (not tired) 18, 2 (little tired) 23, 3 (moderately tired) 17, 4 (tired) 3, 5 (very tired) 0
Length of sleep last night [hours]: 6.75 (1.22)
Experience with BCIs: yes 6, no 55

statistically significant difference in accuracy, $F(1, 59) = 0.022$, $p = 0.883$, or in ITR, $F(1, 59) = 0.0003$, $p = 0.987$, between those two groups. The influence of vision correction on accuracy and ITR was also tested. A total of 22 subjects required vision correction in the form of glasses or contact lenses (as stated in the questionnaire). They achieved a mean (SD) accuracy and ITR of 91.56% (6.36) and 12.78 (4.02) bits/min, respectively. These results were compared to the subjects who did not require vision correction, who achieved a mean (SD) accuracy and ITR of 93.92% (5.35) and 14.84 (4.66) bits/min, respectively. A one-way ANOVA showed no statistically significant influence of vision correction on accuracy, $F(1, 59) = 1.664$, $p = 0.202$. There was also no significant effect of vision correction on ITR, $F(1, 59) = 2.263$, $p = 0.138$. An evaluation of tiredness was also carried out on the basis of the answers given in the prequestionnaire (scale from 1 to 5); the results are shown in Table 2. A one-way ANOVA revealed no significant effect of tiredness on accuracy, $F(4, 56) = 0.335$, $p = 0.853$, or on ITR, $F(4, 56) = 0.067$, $p = 0.991$.

4. Discussion

In the results presented, the overall accuracy was calculated for all commands in drive mode and in camera mode, without distinguishing between them, since four commands for SSVEP classification were always present. Even though different commands were sent, the command classification remained the same. A comparison of this study to other publications on steering a robot with an SSVEP-based BCI is presented in Table 4.

In order to test the influence of the video overlay, we compared the accuracy results with a demographic study [21], in which 86 subjects achieved a mean (SD) accuracy of 92.26% (7.82) while driving a small robot through a labyrinth with a four-class SSVEP-based BCI. An unpaired t-test showed no statistically significant difference, $t(145) = -0.089$, $p = 0.397$, compared to the present values. This shows that the video overlay did not negatively influence the BCI performance. Gergondet et al. listed three factors with respect to how live video feedback in the background can influence the SSVEP stimulation: distraction due to what is happening in the video, distraction due to what is happening around the user, and variation in luminosity below the stimuli shown to the user [30]. Kapeller et al. showed that there is a slight decrease in performance for an underlying video sequence, due to a distractive environment and the small targets used [27].

Contrary to the findings reported by Gergondet et al. and Kapeller et al., the dynamic background did not noticeably affect the BCI performance here (unpaired t-test result). We assume that this was because the stimulation frequencies and the minimal time segment classification windows were adjusted individually, and because relatively large stimulation boxes (in terms of visual angle) were used.

The high accuracies of the SSVEP-based BCI system support the hypothesis that a BCI can be used to control a complex driving application. In comparison to the study by Diez et al. [31], the MRC's self-driving behavior was limited to a preset number of turning points and a predefined path. Although the robot presented by Diez et al. could avoid obstacles by itself (a limited algorithmic reaction), it decreased its speed for trajectory adjustment, in some cases stopped to avoid collision, and further user commands were then necessary. While Diez et al. reported a mean time of 233.20 seconds for the second destination (13.9-meter distance), the mean time achieved in this study was 207.08 seconds (15-meter distance). As pointed out by Diez et al., the overall time for completing the task depends not only on the robot but also on the subject's performance in controlling the BCI system [31]. In our study, the high accuracies of the BCI system were achieved due to the key parameter adjustment carried out by the Wizard software.

The main difference between the robot used by Diez et al. and the MRC presented in this study is the extended control possibility of the interface, which also allows for controlling the camera. The two robots were designed for different purposes: Diez et al. mainly designed their robot for driving from point A to point B without any additional interaction with the environment, as they planned to use the system for wheelchair control in the future. By contrast, in our design the MRC itself is the interaction device. Equipped with a display and a two-way audio system, it might serve as a remote telepresence platform in the future.

The SSVEP paradigm allows for a robust implementation of four driving commands. Compared to robots constructed for the P300 BCI approach, where the user faces large stimulation matrices spread across the entire screen, the number of simultaneously displayed targets here is only four, and hence the load on the visual channel is considerably lower. The user might therefore find it easier to concentrate on the driving task. On the other hand, the smaller number of command options reduces freedom and speed. However, a shared-control paradigm allows complex tasks to be executed with minimal effort.


Table 3: Results of the driving performance: the total time each subject required for steering the MRC through the route (15 meters and 10 turns, including camera movements). 26 correct commands (3 commands in camera mode and 23 commands in drive mode) were necessary to complete the task. Accuracy (Acc) is the number of correct commands (26) divided by the total number of commands ($C_n$). Mean values are given at the bottom of the table.

Subject   Time [s]   Acc [%]   ITR [bits/min]   $C_n$
1         305.60     86.67     7.20             30
2         283.46     89.66     8.32             29
3         284.07     81.25     6.80             32
4         280.11     81.25     6.90             32
5         252.59     81.25     7.65             32
6         190.02     92.86     13.40            28
7         322.06     96.30     8.62             27
8         310.07     70.27     4.66             37
9         158.13     92.86     16.10            28
10        371.82     89.66     6.35             29
11        241.72     86.67     9.10             30
12        151.33     92.86     16.83            28
13        155.49     100.00    20.07            26
14        210.54     96.30     13.18            27
15        208.41     100.00    14.97            26
16        241.62     83.87     8.52             31
17        241.72     86.67     9.10             30
18        145.13     96.30     19.12            27
19        167.48     96.30     16.57            27
20        220.80     92.86     11.53            28
21        194.19     96.30     14.29            27
22        431.54     92.86     5.90             28
23        163.52     100.00    19.08            26
24        171.84     86.67     12.80            30
25        248.12     83.87     8.30             31
26        131.93     100.00    23.65            26
27        161.38     100.00    19.33            26
28        184.75     96.30     15.02            27
29        144.22     100.00    21.63            26
30        167.48     92.86     15.20            28
31        216.63     92.86     11.75            28
32        355.16     81.25     5.44             32
33        244.06     86.67     9.01             30
34        152.14     100.00    20.51            26
35        195.61     96.30     14.18            27
36        239.48     83.87     8.60             31
37        166.36     89.66     14.18            29
38        172.45     100.00    18.09            26
39        218.97     83.87     9.40             31
40        165.65     89.66     14.25            29
41        146.66     100.00    21.27            26
42        186.27     89.66     12.67            29
43        221.31     92.86     11.51            28
44        259.29     86.67     8.48             30
45        157.02     100.00    19.87            26
46        147.88     96.30     18.76            27
47        187.18     96.30     14.82            27
48        145.54     100.00    21.44            26
49        160.57     96.30     17.28            27
50        204.95     100.00    15.22            26
51        228.11     96.30     12.16            27
52        160.37     100.00    19.46            26
53        236.64     81.25     8.17             32
54        148.89     100.00    20.95            26
55        150.11     100.00    20.78            26
56        139.85     100.00    22.31            26
57        147.16     100.00    21.20            26
58        242.73     100.00    12.85            26
59        150.01     100.00    20.80            26
60        164.63     96.30     16.85            27
61        178.85     96.30     15.51            27
Mean      207.08     93.03     14.07            28.11
SD        50.25      5.73      4.44             1.82
Min       131.93     70.27     4.66             26.00
Max       431.54     100.00    23.65            37.00

The command "drive forward" starts an autopilot program that guides the robot along a straight stretch until it reaches a predefined stopping point.

While the MRC executed the autopilot program and followed the line for some time, subjects purposely ignored the flickering targets and concentrated on the video feedback instead. As these waiting periods were added to the actual classification times of each command that followed an autopilot ("forward") command, the ITR values are considerably low. From the researcher's point of view, a number of errors could have been avoided if some subjects had paid more attention to the live feedback. Especially in the task of reading the hidden number, subjects tried to drive along the path without looking at the text message that appeared on the live feedback; only after realizing that the robot was not turning did they look at the video and see the text message instructing them to look up. This is visible in the analysis of the accuracy of the robot's commands (see Figure 10). The lowest accuracy, 87.49%, was observed for the action "change view to the camera mode" (robot command 14); this may be caused by the fact that the subjects had become used to the actions "turn left"/"turn right" and "drive forward", which they had repeated six times from the starting point (see Figures 10 and 9). This could easily be avoided if driving paths were designed with evenly distributed tasks (e.g., live scenario tasks, such as reading a hidden number, should occur more often).


Table 4: Comparison of BCI systems for robot navigation.

Publication                 Subjects  Classes  Literacy rate [%]  Mean accuracy [%]  Mean ITR [bits/min]  Robot type
Volosyak et al. [21]        86        4        98                 92.26              17.24                E-puck
Zhao et al.¹ [34]           7         4        100                90.3               2.47                 NAO
Kapeller et al.² [27]       11        4        91                 91.36              N/A                  E-puck
Diez et al.³ [31]           7         4        100                88.80*             N/A                  Pioneer 3-DX
Holewa and Nawrocka [24]    3         4        100                73.75              11.36                Lego Mindstorm
Choi and Jo¹ [32]           5         2        100                84.4               11.40                NAO
This study                  61        4        100                93.03              14.07                MRC

¹Results for the SSVEP paradigm. ²f-VEP results for 4 classes without zero class. ³Results for destination 1. *Correct/(correct + wrong) commands.

Table 5: Distribution of the time windows $T_S$ of EEG data (in blocks) needed for the selection of the first three commands "drive forward" and the last three commands "forward"/"look up", over all subjects.

Distribution [%] per time window $T_S$ in blocks of EEG data:

Command                   8      10     15     20     30     40     50     60 to 160
1st "drive forward"       6.56   18.03  18.03  22.95  11.48  6.56   8.20   8.20
2nd "drive forward"       8.20   22.95  8.20   19.67  11.48  8.20   4.92   16.39
3rd "drive forward"       9.84   13.11  8.20   19.67  6.56   9.84   8.20   24.59
Mean                      8.20   18.03  11.48  20.77  9.84   8.20   7.10   16.39
1st "forward"/"look up"   9.84   18.03  14.75  18.03  9.84   3.28   6.56   19.67
2nd "forward"/"look up"   11.48  16.39  16.39  14.75  13.11  9.84   1.64   16.39
3rd "forward"/"look up"   11.48  14.75  11.48  22.95  6.56   6.56   4.92   21.31
Mean                      10.93  16.39  14.21  18.58  9.84   6.56   4.37   19.13

Figure 10: Accuracy of each command of the robot's path. The chart presents the accuracy of each command along the path (x-axis, commands 1-26) and its selection accuracy relative to the previous correct selection, over all subjects.

No difference was observed between the time needed for the selection of the command "drive forward" at the beginning of the experiment and at its end (see Table 5); hence, neither fatigue nor practice seemed to influence performance.

The general literacy rate of 100% among the test subjects indicates that SSVEP-based BCIs can be used by a larger population than previously assumed. This high literacy is due to the careful selection of individual SSVEP key parameters, which were determined by the presented calibration software.

4.1. Limitations. The SSVEP control application has some restrictions:

(i) The wireless connection was implemented with the UDP protocol, and some information might therefore not be received properly.

(ii) In order to maximize performance, accuracy, and user friendliness, the number of simultaneously displayed targets was restricted to four, which resulted in a considerably lower ITR.

(iii) It should also be noted that for long-term use, recalibration might be inevitable. Although the number of targets is limited to four, the SSVEP-based BCI presented here depends on the user's vision and control over his or her eye movements. It therefore has to compete with other approaches, such as P300 and eye-tracking systems. Visual stimulation with low frequencies in particular is known to cause fatigue.

(iv) Moreover, the subjects in this study may not be representative of the general population: they tended to be young, healthy men. Further tests with older and physically impaired persons are required.

5. Conclusions

In the present study, the SSVEP-based driving application (robot-GUI) proved to be successful and robust. The driving application with live video feedback was tested with 61 healthy subjects. All participants successfully navigated a mobile semiautonomous robot through a predetermined 15-meter-long route. BCI key parameters were determined by the Wizard software. No negative influence of the live video feedback on the performance was observed. Such


applications could be used to control assistive robots but might also be of interest for healthy subjects. In contrast to P300, the SSVEP stimulation has to be sufficiently strong (in terms of size or visual angle) to significantly influence the user. A common problem of all BCI studies, which also occurred here, is keeping the user motivated and focused on the task. The evaluation of the questionnaire shows that for the majority of subjects, the driving task with the SSVEP-based BCI was not fatiguing, even though lower stimulation frequencies, which usually cause more fatigue than higher frequencies, were used in the experiment. More specifically, frequencies between 6 and 12 Hz were determined in the calibration procedure, as they elicited the strongest SSVEP responses. Some subjects stated that, as they concentrated more on the camera video, they did not find the flickering annoying at all. Our results show that SSVEP-based personalized adaptive BCI systems with a low target number can be used as an everyday assistive technology for remote robots. We have also shown that it is possible to design a four-target interface with extended capabilities, such as camera and driving control. The control application presented here can be used for mobile robots as an assistive remote technology, as a game interface, for wheelchair control, or even for steering remotely controlled drones. This study should be considered a step towards resolving the problems of semiautonomous mobile robots controlled with SSVEP-based BCIs. In order to further improve this approach, the next steps should be an online parameter adaptation mechanism for improving accuracy through practice, the possibility of turning the stimulation on/off, and two-way video and audio communication for a truly remote telepresence.

Competing Interests

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors would like to thank Thomas Grunenberg from the Faculty of Technology and Bionics at Rhine-Waal University of Applied Sciences for building, programming, and maintaining the MRCs. The authors would also like to thank the student assistants Leonie Hillen, Francisco Nunez, Saad Platini, Anna Catharina Thoma, and Paul Fanen Wundu for their help with the study. This research was supported by the German Federal Ministry of Education and Research (BMBF) under Grants nos. 16SV6364 and 01DR14014 and by the European Fund for Regional Development (EFRD, or EFRE in Germany) under Grant no. GE-1-1-047.

References

[1] M. Cheney and R. Uth, Tesla: Master of Lightning, Barnes & Noble, New York, NY, USA, 1999.

[2] W. Walter, The Living Brain, W. W. Norton & Company, New York, NY, USA, 1953.

[3] P. Flandorfer, "Population ageing and socially assistive robots for elderly persons: the importance of sociodemographic factors for user acceptance," International Journal of Population Research, vol. 2012, Article ID 829835, 13 pages, 2012.

[4] A. Nijholt, D. Tan, G. Pfurtscheller et al., "Brain-computer interfacing for intelligent systems," IEEE Intelligent Systems, vol. 23, no. 3, pp. 72-79, 2008.

[5] J. d. R. Millán, R. Rupp, G. R. Müller-Putz et al., "Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges," Frontiers in Neuroscience, vol. 4, article 161, 2010.

[6] C. Brunner, N. Birbaumer, B. Blankertz et al., "BNCI Horizon 2020: towards a roadmap for the BCI community," Brain-Computer Interfaces, vol. 2, no. 1, pp. 1-10, 2015.

[7] R. Bogue, "Exoskeletons and robotic prosthetics: a review of recent developments," Industrial Robot, vol. 36, no. 5, pp. 421-427, 2009.

[8] R. Ron-Angevin, F. Velasco-Álvarez, S. Sancha-Ros, and L. da Silva-Sauer, "A two-class self-paced BCI to control a robot in four directions," IEEE International Conference on Rehabilitation Robotics (ICORR '11), vol. 2011, Article ID 5975486, 2011.

[9] L. Tonin, T. Carlson, R. Leeb, and J. d. R. Millán, "Brain-controlled telepresence robot by motor-disabled people," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4227-4230, IEEE, Boston, Mass, USA, September 2011.

[10] D. Huang, K. Qian, D.-Y. Fei, W. Jia, X. Chen, and O. Bai, "Electroencephalography (EEG)-based brain-computer interface (BCI): a 2-D virtual wheelchair control based on event-related desynchronization/synchronization and state control," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 20, no. 3, pp. 379-388, 2012.

[11] A. Chella, E. Pagello, E. Menegatti et al., "A BCI teleoperated museum robotic guide," in Proceedings of the International Conference on Complex, Intelligent and Software Intensive Systems (CISIS '09), pp. 783-788, IEEE, Fukuoka, Japan, March 2009.

[12] C. Escolano, J. M. Antelis, and J. Minguez, "A telepresence mobile robot controlled with a noninvasive brain-computer interface," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 3, pp. 793-804, 2012.

[13] F.-B. Vialatte, M. Maurice, J. Dauwels, and A. Cichocki, "Steady-state visually evoked potentials: focus on essential paradigms and future perspectives," Progress in Neurobiology, vol. 90, no. 4, pp. 418-438, 2010.

[14] D. Regan, "Some characteristics of average steady-state and transient responses evoked by modulated light," Electroencephalography and Clinical Neurophysiology, vol. 20, no. 3, pp. 238-248, 1966.

[15] Y. Frank and F. Torres, "Visual evoked potentials in the evaluation of 'cortical blindness' in children," Annals of Neurology, vol. 6, no. 2, pp. 126-129, 1979.

[16] W. Ian McDonald, A. Compston, G. Edan et al., "Recommended diagnostic criteria for multiple sclerosis: guidelines from the international panel on the diagnosis of multiple sclerosis," Annals of Neurology, vol. 50, no. 1, pp. 121-127, 2001.

[17] S. Gao, Y. Wang, X. Gao, and B. Hong, "Visual and auditory brain-computer interfaces," IEEE Transactions on Biomedical Engineering, vol. 61, no. 5, pp. 1436-1447, 2014.

[18] I. Volosyak, "SSVEP-based Bremen-BCI interface—boosting information transfer rates," Journal of Neural Engineering, vol. 8, no. 3, Article ID 036020, 2011.

[19] G. R. Müller-Putz and G. Pfurtscheller, "Control of an electrical prosthesis with an SSVEP-based BCI," IEEE Transactions on Biomedical Engineering, vol. 55, no. 1, pp. 361-364, 2008.

[20] P. Martinez, H. Bakardjian, and A. Cichocki, "Fully online multicommand brain-computer interface with visual neurofeedback using SSVEP paradigm," Computational Intelligence and Neuroscience, vol. 2007, Article ID 94561, 9 pages, 2007.

[21] I. Volosyak, D. Valbuena, T. Lüth, T. Malechka, and A. Gräser, "BCI demographics II: how many (and what kinds of) people can use a high-frequency SSVEP BCI?" IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 19, no. 3, pp. 232-239, 2011.

[22] J. Zhao, Q. Meng, W. Li, M. Li, and G. Chen, "SSVEP-based hierarchical architecture for control of a humanoid robot with mind," in Proceedings of the 11th World Congress on Intelligent Control and Automation (WCICA '14), pp. 2401-2406, Shenyang, China, July 2014.

[23] A. Güneysu and H. Levent Akin, "An SSVEP based BCI to control a humanoid robot by using portable EEG device," in Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '13), pp. 6905-6908, IEEE, Osaka, Japan, July 2013.

[24] K. Holewa and A. Nawrocka, "Emotiv EPOC neuroheadset in brain-computer interface," in Proceedings of the 15th International Carpathian Control Conference (ICCC '14), pp. 149-152, IEEE, Velke Karlovice, Czech Republic, May 2014.

[25] N.-S. Kwak, K.-R. Müller, and S.-W. Lee, "Toward exoskeleton control based on steady state visual evoked potentials," in Proceedings of the International Winter Workshop on Brain-Computer Interface (BCI '14), pp. 1-2, Gangwon, South Korea, February 2014.

[26] G. Bin, X. Gao, Y. Wang, B. Hong, and S. Gao, "VEP-based brain-computer interfaces: time, frequency, and code modulations," IEEE Computational Intelligence Magazine, vol. 4, no. 4, pp. 22-26, 2009.

[27] C. Kapeller, C. Hintermüller, M. Abu-Alqumsan, R. Prückl, A. Peer, and C. Guger, "A BCI using VEP for continuous control of a mobile robot," in Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '13), pp. 5254-5257, IEEE, Osaka, Japan, July 2013.

[28] H. Wang, T. Li, and Z. Huang, "Remote control of an electrical car with SSVEP-based BCI," in Proceedings of the IEEE International Conference on Information Theory and Information Security (ICITIS '10), pp. 837-840, IEEE, Beijing, China, December 2010.

[29] Y. Wang, Y.-T. Wang, and T.-P. Jung, "Visual stimulus design for high-rate SSVEP BCI," Electronics Letters, vol. 46, no. 15, pp. 1057-1058, 2010.

[30] P. Gergondet, D. Petit, and A. Kheddar, "Steering a robot with a brain-computer interface: impact of video feedback on BCI performance," in Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN '12), pp. 271-276, Paris, France, September 2012.

[31] P. F. Diez, V. A. Mut, E. Laciar, and E. M. Avila Perona, "Mobile robot navigation with a self-paced brain-computer interface based on high-frequency SSVEP," Robotica, vol. 32, no. 5, pp. 695-709, 2014.

[32] B. Choi and S. Jo, "A low-cost EEG system-based hybrid brain-computer interface for humanoid robot navigation and recognition," PLoS ONE, vol. 8, no. 9, Article ID e74583, 2013.

[33] E. Tidoni, P. Gergondet, A. Kheddar, and S. M. Aglioti, "Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot," Frontiers in Neurorobotics, vol. 8, article 20, pp. 1-8, 2014.

[34] J. Zhao, W. Li, and M. Li, "Comparative study of SSVEP- and P300-based models for the telepresence control of humanoid robots," PLoS ONE, vol. 10, no. 11, Article ID e0142168, 2015.

[35] T. M. Rutkowski, K. Shimizu, T. Kodama, P. Jurica, and A. Cichocki, "Brain-robot interfaces using spatial tactile BCI paradigms," in Symbiotic Interaction, pp. 132-137, Springer, New York, NY, USA, 2015.

[36] C. Brunner, B. Z. Allison, D. J. Krusienski et al., "Improved signal processing approaches in an offline simulation of a hybrid brain-computer interface," Journal of Neuroscience Methods, vol. 188, no. 1, pp. 165-173, 2010.

[37] F. Gembler, P. Stawicki, and I. Volosyak, "Autonomous parameter adjustment for SSVEP-based BCIs with a novel BCI wizard," Frontiers in Neuroscience, vol. 9, article 474, pp. 1-12, 2015.

[38] E. W. Sellers, T. M. Vaughan, and J. R. Wolpaw, "A brain-computer interface for long-term independent home use," Amyotrophic Lateral Sclerosis, vol. 11, no. 5, pp. 449-455, 2010.

[39] E. M. Holz, L. Botrel, and A. Kübler, "Bridging gaps: long-term independent BCI home-use by a locked-in end-user," in Proceedings of the TOBI Workshop IV, Sion, Switzerland, 2013.

[40] O. Friman, I. Volosyak, and A. Gräser, "Multiple channel detection of steady-state visual evoked potentials for brain-computer interfaces," IEEE Transactions on Biomedical Engineering, vol. 54, no. 4, pp. 742-750, 2007.

[41] I. Volosyak, T. Malechka, D. Valbuena, and A. Gräser, "A novel calibration method for SSVEP based brain-computer interfaces," in Proceedings of the 18th European Signal Processing Conference (EUSIPCO '10), pp. 939-943, Aalborg, Denmark, August 2010.

[42] I. Volosyak, H. Cecotti, and A. Gräser, "Optimal visual stimuli on LCD screens for SSVEP based brain-computer interfaces," in Proceedings of the 4th International IEEE/EMBS Conference on Neural Engineering (NER '09), pp. 447-450, IEEE, Antalya, Turkey, May 2009.

[43] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control," Clinical Neurophysiology, vol. 113, no. 6, pp. 767-791, 2002.

Page 6: Driving a Semiautonomous Mobile Robotic Car Controlled by ... · Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI ... (frequencycoding,methodoriginallyproposed

6 Computational Intelligence and Neuroscience

Time (ms)

8 bl

10 bl

160 blocks

8 bl8 bl8 bl

10 bl

8 bl8 bl

10 bl15 bl15 bl

15 blocks20 blocks

10 bl

8 bl8 bl

80 blocks

Com

man

d cla

ssifi

catio

n

20000 22000 24000

203125 ms

8125 ms

152344 ms

1015625 ms

8135 ms

0 2000 4000 6000 8000 10000 12000 14000 16000 18000

middot middot middot

middot middot middot

middot middot middot

middot middot middot

16250 ms

Figure 2 Changes in the time segment length after a performed classification in case no distinct classification can be made and the actualtime 119905 allows for the extension to the next predefined value After each classification (gray bars of 8 10 20 and 160 blocks) additional timefor gaze shifting was included (black) in which the classifier output was rejected for 9 blocks

Figure 3 A screenshot of the robot-GUI in drive mode In thissituation the user had to select the ldquodrive forwardrdquo commandwhichcaused the robot to execute an autopilot program that uses the built-in light sensors to follow the white line

Figure 4 Signs positioned at the turning points and positionedtowards the camera showed the correct direction for following thetrack Here in the screenshot the robot automatically stopped afterreaching this turning point and a sign instructed the user to selectthe command ldquoturn leftrdquo

monitors used a gray background in the form of a framewas implemented to prevent unnatural video scaling Thestarting dimensions of the buttons were 146 times 512 pixelsfor left and right 576 times 128 pixels for the top and 536 times

96 pixels for bottom buttons (see Figures 3 and 4) Duringthe runtime the size of the buttons changed continuously

depending on the calculated probabilities of the 119894th stimuliin (6) The robot-GUI had two modes a drive mode and acamera mode In drive mode the subject was able to sendthe following commands to the MRC ldquodrive forwardrdquo (adiscrete command using the line-sensors to follow the tapedline on autopilot to the next direction change) ldquoturn leftrdquoldquoturn rightrdquo and ldquoswitch to camera moderdquo In camera modethe subject was able to send the following commands ldquolookleftrdquo ldquolook rightrdquo ldquolook uprdquo and ldquoswitch to drive moderdquoWhile looking up the button ldquolook uprdquo changed the contentand functionality to ldquolook forwardrdquoThe control buttons werered in drive mode and yellow in camera mode to increaseuser friendliness The aspect ratio of the video stream was4 3 so the computer screen was not entirely filled out Ifthe command ldquodrive forwardrdquo was selected by the user anautonomous program was executed on the MRC The MRCused the four line-sensors to follow the white line taped tothe floor until it arrived at a turning point where the robotstopped automaticallyWhile driving theMRCautomaticallymade small corrections when the line was detected by oneof the inner sensors the rotation of the wheels on the sameside was decreased by 50 until another sensor detected theline Similarly when the line was detected by one of the outersensors the wheels on the same side were stopped so thatthe MRC could drive in a curve in order to go back ontrack If one of the turning commands was selected theMRCmade an approximately 90-degree turn in the correspondingdirection In drive mode the camera was facing slightly theground (approximately 20 degrees down from the frontalview for the ldquolook downrdquo position) When switching to cam-eramode (represented by the word ldquoCAMERArdquo at the bottomof the robot-GUI) the camera changed to the ldquolook forwardrdquoposition (switching back to drive mode would change thecamera position to looking down again) All the EEGdata andthe received video stream from the MRC were anonymouslystored


Figure 5: (a) shows the second step of the Wizard. The subject faced a circle consisting of small segments which flickered simultaneously with seven different frequencies. Each of the seven tested frequencies was represented by the same number of segments. The segments representing one of the seven tested frequencies (b) were scattered in random order, forming the stimulation circle (a). After this multistimulation with thirteen or fourteen different frequencies (depending on the alpha test result), the EEG data were analyzed and ordered from the strongest to the weakest SSVEP response.


Figure 6: This figure shows the third step of the Wizard. (a) The main frequency f_S is in the center (white circle stimulation), and the other three frequencies (from the four f_Si with the strongest SSVEP response) are divided into small stimulation blocks and scattered around f_S, acting as "noise" in peripheral vision. This pattern is shown in (b). This was repeated for each f_Si.

2.3.2. Wizard. The Wizard software is, generally speaking, a calibration program that autonomously determines BCI key parameters [37]. It is designed to select user-specific optimal stimulation frequencies for SSVEP-based BCIs on LCD screens and is a development of the method presented in [41]. The Wizard chooses the four frequencies with the best SSVEP response out of a set of 14 possible stimulation frequencies f_i (ranging from 6 Hz to 20 Hz) that can be realized on a monitor screen with a vertical refresh rate of 120 Hz [42]. In the following, we provide a brief description.

Step 1 (alpha test). The subject was instructed by audio feedback to close his or her eyes. 15 frequencies in the alpha wave band (8.27, 8.57, 8.87, 8.93, 9.23, 9.53, 9.70, 10.00, 10.30, 10.61, 10.91, 11.21, 11.70, 12.00, and 12.30 Hz) were measured for 10 seconds during the closed-eyes period; if any of those 15 frequencies interfered with a possible stimulation frequency f_i (Δf ≤ 0.15 Hz), this frequency f_i was filtered out.
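As an illustration of this filtering rule, a minimal sketch (our own; the function name and the representation of the alpha-band responses are assumptions) could look as follows:

```python
# Sketch of the Step 1 rule: drop every candidate stimulation frequency that lies
# within 0.15 Hz of an alpha-band frequency active during the closed-eyes recording.

ALPHA_BAND = [8.27, 8.57, 8.87, 8.93, 9.23, 9.53, 9.70, 10.00, 10.30,
              10.61, 10.91, 11.21, 11.70, 12.00, 12.30]  # Hz, from the text


def filter_candidates(candidates, active_alpha, delta=0.15):
    """candidates: possible stimulation frequencies f_i in Hz;
    active_alpha: alpha-band frequencies with a strong closed-eyes response."""
    return [f for f in candidates
            if all(abs(f - a) > delta for a in active_alpha)]


# Example: a strong response at 10.00 Hz removes a hypothetical 10.0 Hz candidate.
print(filter_candidates([6.67, 7.50, 10.00, 12.00], active_alpha=[10.00]))
```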

Step 2 (stimulation frequency selection). In order to avoid the influence of harmonic frequencies, the user had to look at two circles in sequence. Each circle contained seven stimulation frequencies, which were presented in pseudorandom order as small green circle segments (see Figure 5). This step took 20 seconds.

Step 3 (optimal classification thresholds and time segment length). In this step the Wizard software chose the best starting time segment length T_S as well as the optimal classification thresholds β_i for the frequencies determined in the previous step. First, a white circle flickering at the frequency which had the highest SSVEP response in phase 2 was displayed. It was necessary to simulate the noise caused by peripheral vision when concentrating on the target object. For this purpose the white circle was surrounded by a green ring containing segments which presented the remaining optimal frequencies (see Figure 6). The user was instructed by audio feedback to gaze at the white circle.


Figure 7: Two subjects during the driving task. The robot-GUI is in camera mode. The subject in (a) sees a text message instructing him to select the button "look up". The subject in (b) is one step further and is facing the hidden secret number, which is attached underneath a drawer.

The circle and the ring flickered for 10 seconds while EEG data were recorded. After a two-second break the flickering continued. The white circle now flickered with the second-highest frequency from phase two, while the ring flickered with the remaining three frequencies. This procedure was repeated until the data for all four optimal frequencies were collected. The total recording time for phase three was 40 seconds, and based on these data, the optimal thresholds and time segment length were determined. If one of the frequencies f_Si did not reach a defined minimal threshold β_i, this frequency was replaced with the next best frequency from Step 2, and the procedure of this step was repeated.

2.4. Procedure. In order to analyze the BCI performance variety, a pre- and postquestionnaire system (similar to [21]) was employed. After signing the consent form, each subject completed a brief prequestionnaire and was prepared for the EEG recording. Subjects were seated in a comfortable chair in front of the monitor, the electrode cap was placed, and electrode gel was applied. First, the subjects went through the steps of the Wizard, and the BCI key parameters (stimulation frequencies, classification thresholds, and optimal time segment length) were determined. Then the experimenter started the robot-GUI (Figure 3). The control task was to drive the MRC through a predetermined route (Figure 8). If the robot-GUI software recognized a wrong command, for example, "turn right" instead of "turn left", this command was not sent, so that the MRC would not drive off the track (Figure 7). If the subject made a few mistakes at the beginning and was willing to start over again (this occurred with a small number of subjects), the experimenter moved the MRC back to the starting point and restarted the robot-GUI. In the very few cases where the wireless connection to the MRC was interrupted (e.g., the MRC was mechanically unable to execute the command that was correctly selected in the robot-GUI by the user), the experimenter manually repositioned the MRC at the next turning point.

At a certain turning point (represented by a blue square in Figure 8, when approximately 60% of the driving task was completed), the camera was facing a text message which instructed the subjects to look up (the user action is represented with a circled number, e.g., 5 in this case).


Figure 8: Track for the mobile robotic car. Subjects had to steer the MRC through a 15-meter-long course with 10 turns in order to reach the finish line. At a certain turning point (marked with a blue square) the MRC's camera faced a text message that instructed the subject to switch to camera mode. The task was to examine the surroundings and to look for a hidden number which was attached underneath a drawer (otherwise not visible). One example of the optimal path is shown in Figure 9.

At this point the user had to switch to camera mode and select the command "look up". Now the subject was able to see a hidden secret number, which was attached to the bottom of a drawer (located at a height of 50 cm, facing the floor, and not normally visible to the people in the room). The numbers varied randomly between the subjects and were only accessible via the MRC camera. After seeing the number on the screen, the subject had to change back to drive mode and continue the driving task. When arriving at the finish line, the camera faced another text message, which again instructed the subjects to look up. After changing to camera mode and selecting "look up", the camera faced a sign displaying the word "finish", and the robot-GUI stopped automatically. All of the EEG data and performed commands were stored on the hard drive of the desktop PC for further evaluation. The values of ITR and accuracy were calculated automatically by the software. The ITR was calculated using the formula presented by Wolpaw et al. in [43]; the number of possible choices for classification was equal to four, the same for each mode (drive mode or camera mode). After finishing the experiment, the subject filled out the postquestionnaire.
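For reference, the Wolpaw formula [43] computes the number of bits per command from the number of classes N and the classification accuracy P and scales it by the command rate; with C_n commands issued in T seconds (notation as in Table 3) it reads:

```latex
B = \log_2 N + P \log_2 P + (1 - P)\log_2\!\left(\frac{1 - P}{N - 1}\right),
\qquad
\mathrm{ITR} = B \cdot \frac{60\,C_n}{T}\ \ [\text{bits/min}], \quad N = 4 .
```

As a consistency check against Table 3: subject 13, with P = 1 (so B = 2 bits) and C_n = 26 commands in 155.49 s, yields 2 · 60 · 26 / 155.49 ≈ 20.07 bits/min, matching the tabulated value.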


Figure 9: This chart presents, as an example, the BCI performance of subject 59 (optimal path with 100% accuracy). The top row shows the raw EEG signal acquired from channel O_z (one of eight EEG electrodes). The remaining graphs show the SNR values during the whole experiment (the dashed line represents the threshold for each command) for the four commands forward/up, turn left, turn right, and camera/drive; the stimulation frequencies for those commands were 6.67, 7.50, 7.06, and 6.32 Hz, respectively. The x-axis, the same for all graphs, represents the total duration of the experiment (150 seconds). The markings above each graph represent the exact time of each classification; circled numbers below the x-axis correspond to the task actions.

3. Results

The results of the driving performance are shown in Table 3. All 61 subjects were able to control the BCI application and completed the entire driving task. The overall mean (SD) ITR of the driving performance was 14.07 (4.44) bits/min. The mean (SD) accuracy was 93.03% (5.73). Except for one, all subjects reached accuracies above 80%; 40 subjects reached accuracies above 90%, and nineteen of them even completed the steering task without any errors. The mean (SD) accuracy of each action (commands required to complete the task) was 96.14% (0.02). In the prequestionnaire, subjects answered questions regarding gender, the need for vision correction, tiredness, and BCI experience, as displayed in Table 2. Eighteen female subjects (30%) with an average (SD) age of 22.67 (4.30) years and 43 male subjects with an average (SD) age of 22.60 (5.32) years participated in the experiment. The influence of gender was also tested: the 18 female subjects achieved a mean (SD) accuracy and ITR of 93.24% (7.98) and 14.05 (5.73) bits/min, respectively, and the 43 male subjects achieved a mean (SD) accuracy and ITR of 92.95% (6.59) and 14.07 (5.08) bits/min, respectively. A one-way ANOVA showed no


Table 2: Questionnaire results. The table presents the answers collected from the prequestionnaire. The figures show the number of respondents or the mean (SD) value.

Age [years]: 22.62 (5.00)
Gender: 43 male, 18 female
Vision correction: 22 yes, 39 no
Tiredness: 18 not tired (1), 23 a little tired (2), 17 moderately tired (3), 3 tired (4), 0 very tired (5)
Length of sleep last night [hours]: 6.75 (1.22)
Experience with BCIs: 6 yes, 55 no

statistically significant difference in accuracy, F(1, 59) = 0.022, p = 0.883, or ITR, F(1, 59) = 0.0003, p = 0.987, between those two groups. The influence of vision correction on accuracy and ITR was also tested. A total of 22 subjects required vision correction in the form of glasses or contact lenses (as they stated in the questionnaire). They achieved a mean (SD) accuracy and ITR of 91.56% (6.36) and 12.78 (4.02) bits/min, respectively. These results were compared to the subjects who did not require vision correction, who achieved a mean (SD) accuracy and ITR of 93.92% (5.35) and 14.84 (4.66) bits/min, respectively. A one-way ANOVA showed no statistically significant influence of vision correction on accuracy, F(1, 59) = 1.664, p = 0.202. There was also no significant effect of vision correction on ITR, F(1, 59) = 2.263, p = 0.138. An evaluation of tiredness was also carried out on the basis of the answers given in the prequestionnaire (scale from 1 to 5); the results are shown in Table 2. A one-way ANOVA revealed no significant effect of tiredness on accuracy, F(4, 56) = 0.335, p = 0.853, or ITR, F(4, 56) = 0.067, p = 0.991.
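One-way ANOVAs of this kind are straightforward to reproduce; a minimal sketch using SciPy (the arrays below are placeholders for illustration, not the study data) could be:

```python
# Sketch: one-way ANOVA comparing per-subject accuracies of two questionnaire
# groups (e.g., with vs. without vision correction). Placeholder data only.

from scipy.stats import f_oneway

acc_with_correction = [91.5, 86.7, 96.3, 89.7]       # placeholder values
acc_without_correction = [93.9, 100.0, 92.9, 96.3]   # placeholder values

f_stat, p_value = f_oneway(acc_with_correction, acc_without_correction)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```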

4. Discussion

In the results presented, the overall accuracy was calculated for all commands in drive mode and in camera mode, without distinguishing between them, since four commands for SSVEP classification were always present. Even though different commands were sent, the command classification remained the same. A comparison of this study to other publications on steering a robot with an SSVEP-based BCI is presented in Table 4.

In order to test the influence of the video overlay, we compared the accuracy results with a demographic study [21] in which 86 subjects achieved a mean (SD) accuracy of 92.26% (7.82) while driving a small robot through a labyrinth with a four-class SSVEP-based BCI. An unpaired t-test showed no statistically significant difference, t(145) = -0.089, p = 0.397, compared to the present values. This shows that the video overlay did not negatively influence the BCI performance. Gergondet et al. listed three factors with respect to how the live video feedback in the background can influence the SSVEP stimulation: distraction due to what is happening in the video, distraction due to what is happening around the user, and variation in luminosity below the stimuli shown to the user [30]. Kapeller et al. showed that there is a slight decrease in performance for the underlying video sequence due to a distractive environment and the small targets used [27].

Contrary to the findings reported by Gergondet et al. and Kapeller et al., the dynamic background did not noticeably affect the BCI performance (unpaired t-test result). We assume that this was due to the fact that stimulation frequencies and minimal time segment classification windows were adjusted individually and that relatively large stimulation boxes were used (in terms of visual angle).

The high accuracies of the SSVEP-based BCI system support the hypothesis that a BCI can be used to control a complex driving application. In comparison to the study by Diez et al. [31], the MRC's self-driving behavior was limited to a preset number of turning points and a predefined path. Although the robot presented by Diez et al. could avoid obstacles by itself (limited algorithm reaction), it decreased its speed for trajectory adjustment and in some cases stopped to avoid collision, and further user commands were necessary. While Diez et al. reported a mean time of 233.20 seconds for the second destination (13.9-meter distance), in this study the achieved mean time was 207.08 seconds (15-meter distance). As pointed out by Diez et al., the overall time for completing the task depends not only on the robot but also on the subject's performance in controlling the BCI system [31]. In our study, the high accuracies of the BCI system were achieved due to the key parameter adjustment carried out by the Wizard software.

The main difference between the robot used by Diez et al. and the MRC presented in this study is the extended control possibility of the interface, which also allows for controlling the camera. Both robots were designed for different purposes: Diez et al. mainly designed their robot for driving from point A to point B without any additional interaction with the environment, as they planned to use the system for wheelchair control in the future. By contrast, in our design, the MRC itself is the interaction device. Equipped with a display and a two-way audio system, it might serve as a remote telepresence in the future.

The SSVEP paradigm allows for a robust implementation of four driving commands. Compared to robots constructed for the P300 BCI approach, where the user faces large stimulation matrices spread across the entire screen, the number of simultaneously displayed targets here is only four, and hence the load on the visual channel is considerably lower. Therefore, the user might find it easier to concentrate on the driving task. On the other hand, the smaller number of command options reduces freedom and speed. However, a shared-control paradigm allows complex tasks to be executed with minimal effort.


Table 3: Results of the driving performance. Total time each subject required for steering the MRC through the route (15 meters and 10 turns, including camera movements). 26 correct commands (3 commands in camera mode and 23 commands in drive mode) were necessary to complete the task. Accuracy (Acc) is the number of correct commands (26) divided by the total number of commands (C_n). Mean values are given at the bottom of the table.

Subject | Time [s] | Acc [%] | ITR [bits/min] | C_n
1 | 305.60 | 86.67 | 7.20 | 30
2 | 283.46 | 89.66 | 8.32 | 29
3 | 284.07 | 81.25 | 6.80 | 32
4 | 280.11 | 81.25 | 6.90 | 32
5 | 252.59 | 81.25 | 7.65 | 32
6 | 190.02 | 92.86 | 13.40 | 28
7 | 322.06 | 96.30 | 8.62 | 27
8 | 310.07 | 70.27 | 4.66 | 37
9 | 158.13 | 92.86 | 16.10 | 28
10 | 371.82 | 89.66 | 6.35 | 29
11 | 241.72 | 86.67 | 9.10 | 30
12 | 151.33 | 92.86 | 16.83 | 28
13 | 155.49 | 100.00 | 20.07 | 26
14 | 210.54 | 96.30 | 13.18 | 27
15 | 208.41 | 100.00 | 14.97 | 26
16 | 241.62 | 83.87 | 8.52 | 31
17 | 241.72 | 86.67 | 9.10 | 30
18 | 145.13 | 96.30 | 19.12 | 27
19 | 167.48 | 96.30 | 16.57 | 27
20 | 220.80 | 92.86 | 11.53 | 28
21 | 194.19 | 96.30 | 14.29 | 27
22 | 431.54 | 92.86 | 5.90 | 28
23 | 163.52 | 100.00 | 19.08 | 26
24 | 171.84 | 86.67 | 12.80 | 30
25 | 248.12 | 83.87 | 8.30 | 31
26 | 131.93 | 100.00 | 23.65 | 26
27 | 161.38 | 100.00 | 19.33 | 26
28 | 184.75 | 96.30 | 15.02 | 27
29 | 144.22 | 100.00 | 21.63 | 26
30 | 167.48 | 92.86 | 15.20 | 28
31 | 216.63 | 92.86 | 11.75 | 28
32 | 355.16 | 81.25 | 5.44 | 32
33 | 244.06 | 86.67 | 9.01 | 30
34 | 152.14 | 100.00 | 20.51 | 26
35 | 195.61 | 96.30 | 14.18 | 27
36 | 239.48 | 83.87 | 8.60 | 31
37 | 166.36 | 89.66 | 14.18 | 29
38 | 172.45 | 100.00 | 18.09 | 26
39 | 218.97 | 83.87 | 9.40 | 31
40 | 165.65 | 89.66 | 14.25 | 29
41 | 146.66 | 100.00 | 21.27 | 26
42 | 186.27 | 89.66 | 12.67 | 29

Table 3: Continued.

Subject | Time [s] | Acc [%] | ITR [bits/min] | C_n
43 | 221.31 | 92.86 | 11.51 | 28
44 | 259.29 | 86.67 | 8.48 | 30
45 | 157.02 | 100.00 | 19.87 | 26
46 | 147.88 | 96.30 | 18.76 | 27
47 | 187.18 | 96.30 | 14.82 | 27
48 | 145.54 | 100.00 | 21.44 | 26
49 | 160.57 | 96.30 | 17.28 | 27
50 | 204.95 | 100.00 | 15.22 | 26
51 | 228.11 | 96.30 | 12.16 | 27
52 | 160.37 | 100.00 | 19.46 | 26
53 | 236.64 | 81.25 | 8.17 | 32
54 | 148.89 | 100.00 | 20.95 | 26
55 | 150.11 | 100.00 | 20.78 | 26
56 | 139.85 | 100.00 | 22.31 | 26
57 | 147.16 | 100.00 | 21.20 | 26
58 | 242.73 | 100.00 | 12.85 | 26
59 | 150.01 | 100.00 | 20.80 | 26
60 | 164.63 | 96.30 | 16.85 | 27
61 | 178.85 | 96.30 | 15.51 | 27
Mean | 207.08 | 93.03 | 14.07 | 28.11
SD | 50.25 | 5.73 | 4.44 | 1.82
Min | 131.93 | 70.27 | 4.66 | 26.00
Max | 431.54 | 100.00 | 23.65 | 37.00

The command "drive forward" starts an autopilot program that guides the robot along a straight stretch until it reaches a predefined stopping point.

While the MRC executed the autopilot program and followed the line for some time, subjects purposely ignored the flickering targets and concentrated on the video feedback instead. As these waiting periods were added to the actual classification times of each command that followed an autopilot command ("forward"), the ITR values are considerably low. From the researcher's point of view, a number of errors could have been avoided if some subjects had paid more attention to the live feedback. Especially in the task of reading the hidden number, subjects were trying to drive along the path without looking at the text message that appeared in the live feedback. Only after realizing that the robot was not turning did they look at the video and see the text message instructing them to look up. This is visible in the analysis of the accuracy of the robot's commands (see Figure 10). The lowest accuracy, 87.49%, was for the action "change view to the camera mode" (robot command 14); this may be caused by the fact that the subjects had become used to the actions "turn left"/"turn right" and "drive forward", which they had repeated six times from the starting point (see Figures 10 and 9). This could easily be avoided if the driving paths were designed with evenly distributed tasks (e.g., live scenario tasks such as reading a hidden number should occur more often).


Table 4: Comparison of BCI systems for robot navigation.

Publication | Subjects | Classes | Literacy rate [%] | Mean accuracy [%] | Mean ITR [bits/min] | Robot type
Volosyak et al. [21] | 86 | 4 | 98 | 92.26 | 17.24 | E-puck
Zhao et al.¹ [34] | 7 | 4 | 100 | 90.3 | 2.47 | NAO
Kapeller et al.² [27] | 11 | 4 | 91 | 91.36 | N/A | E-puck
Diez et al.³ [31] | 7 | 4 | 100 | 88.80* | N/A | Pioneer 3-DX
Holewa and Nawrocka [24] | 3 | 4 | 100 | 73.75 | 11.36 | Lego Mindstorm
Choi and Jo¹ [32] | 5 | 2 | 100 | 84.4 | 11.40 | NAO
This study | 61 | 4 | 100 | 93.03 | 14.07 | MRC
¹Results for SSVEP paradigm. ²f-VEP results for 4 classes without zero class. ³Results for destination 1. *Correct/(correct + wrong) commands.

Table 5: Distribution of the time blocks needed for the selection of the first three commands "drive forward" and the last three commands "forward"/"look up" over all subjects.

Distribution [%] per time window T_S in EEG data blocks:

Command | 8 | 10 | 15 | 20 | 30 | 40 | 50 | 60 to 160
1st "drive forward" | 6.56 | 18.03 | 18.03 | 22.95 | 11.48 | 6.56 | 8.20 | 8.20
2nd "drive forward" | 8.20 | 22.95 | 8.20 | 19.67 | 11.48 | 8.20 | 4.92 | 16.39
3rd "drive forward" | 9.84 | 13.11 | 8.20 | 19.67 | 6.56 | 9.84 | 8.20 | 24.59
Mean | 8.20 | 18.03 | 11.48 | 20.77 | 9.84 | 8.20 | 7.10 | 16.39
1st "forward"/"look up" | 9.84 | 18.03 | 14.75 | 18.03 | 9.84 | 3.28 | 6.56 | 19.67
2nd "forward"/"look up" | 11.48 | 16.39 | 16.39 | 14.75 | 13.11 | 9.84 | 1.64 | 16.39
3rd "forward"/"look up" | 11.48 | 14.75 | 11.48 | 22.95 | 6.56 | 6.56 | 4.92 | 21.31
Mean | 10.93 | 16.39 | 14.21 | 18.58 | 9.84 | 6.56 | 4.37 | 19.13


Figure 10: This chart presents the accuracy of each command along the path (x-axis), that is, the selection accuracy relative to the previous correct selection, over all subjects.

No difference can be observed between the time needed for the selection of the command "drive forward" at the beginning of the experiment and at the end of the experiment (see Table 5); hence, neither fatigue nor practice seemed to influence performance.

The general literacy rate of 100% among the test subjects indicates that SSVEP-based BCIs can be used by a larger population than previously assumed. This high literacy is due to the careful selection of individual SSVEP key parameters, which were determined by the presented calibration software.

4.1. Limitations. The SSVEP-control application has some restrictions:

(i) The wireless connection was implemented with the UDP protocol, and some information might therefore not be received properly (see the sketch after this list).

(ii) In order to maximize performance, accuracy, and user friendliness, the number of simultaneously displayed targets was restricted to four, which resulted in a considerably lower ITR.

(iii) It should also be noted that for long-term use, recalibration might be inevitable. Although the number of targets is limited to four, the SSVEP-based BCI presented here depends on the user's vision and control over his or her eye movements. It therefore has to compete with other approaches such as P300 and eye tracking systems. Visual stimulation with low frequencies in particular is known to cause fatigue.

(iv) Moreover, the subjects in this study may not be representative of the general population; they tended to be young, healthy men. Further tests with older and physically impaired persons are required.
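Regarding limitation (i): UDP is a fire-and-forget protocol, so a command datagram can be dropped without any error being raised on the sender side. A minimal sketch follows; the address, port, and payload format are invented for illustration and are not the actual MRC protocol.

```python
# Sending a command over UDP: no delivery guarantee, no retransmission.

import socket

MRC_ADDR = ("192.168.0.42", 5005)  # hypothetical robot address and port

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.sendto(b"TURN_LEFT", MRC_ADDR)  # may be silently lost in transit
```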

5. Conclusions

In the present study, the SSVEP-based driving application (robot-GUI) proved to be successful and robust. The driving application with live video feedback was tested with 61 healthy subjects. All participants successfully navigated a mobile semiautonomous robot through a predetermined 15-meter-long route. BCI key parameters were determined by the Wizard software. No negative influence of the live video feedback on the performance was observed. Such


applications could be used to control assistive robots but might also be of interest for healthy subjects. In contrast to P300, the SSVEP stimulation has to be sufficiently strong (in terms of size or visual angle) to significantly influence the user. A common problem of all BCI studies, which also occurred here, is keeping the user motivated and focused on the task. The evaluation of the questionnaire shows that for the majority of subjects the driving task with the SSVEP-based BCI was not fatiguing, even though lower stimulation frequencies, which usually cause more fatigue than higher frequencies, were used in the experiment. More specifically, frequencies between 6 and 12 Hz were determined in the calibration procedure, as they elicited the strongest SSVEP responses. Some subjects stated that, as they concentrated more on the camera video, they did not find the flickering annoying at all. Our results show that SSVEP-based personalized adaptive BCI systems with a low target number can be used as an everyday assistive technology for remote robots. We have also shown that it is possible to design a four-target interface with extended capabilities such as camera and driving control. The control application presented here can be used for mobile robots, as an assistive remote technology, as a game interface, for wheelchair control, or even for steering remotely controlled drones. This study should be considered a step towards resolving the problems of semiautonomous mobile robots controlled with SSVEP-based BCIs. In order to further improve this approach, the next steps should be an online parameter adaptation mechanism for improving accuracy through practice, a possibility of turning the stimulation on/off, and two-way video and audio communication for a truly remote telepresence.

Competing Interests

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interests.

Acknowledgments

The authors would like to thank Thomas Grunenberg from the Faculty of Technology and Bionics at Rhine-Waal University of Applied Sciences for building, programming, and maintaining the MRCs. The authors would also like to thank the student assistants Leonie Hillen, Francisco Nunez, Saad Platini, Anna Catharina Thoma, and Paul Fanen Wundu for their help with the study. This research was supported by the German Federal Ministry of Education and Research (BMBF) under Grants nos. 16SV6364 and 01DR14014 and by the European Fund for Regional Development (EFRD, or EFRE in Germany) under Grant no. GE-1-1-047.

References

[1] M. Cheney and R. Uth, Tesla: Master of Lightning, Barnes & Noble, New York, NY, USA, 1999.
[2] W. Walter, The Living Brain, W. W. Norton & Company, New York, NY, USA, 1953.
[3] P. Flandorfer, "Population ageing and socially assistive robots for elderly persons: the importance of sociodemographic factors for user acceptance," International Journal of Population Research, vol. 2012, Article ID 829835, 13 pages, 2012.
[4] A. Nijholt, D. Tan, G. Pfurtscheller et al., "Brain-computer interfacing for intelligent systems," IEEE Intelligent Systems, vol. 23, no. 3, pp. 72–79, 2008.
[5] J. d. R. Millán, R. Rupp, G. R. Müller-Putz et al., "Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges," Frontiers in Neuroscience, vol. 4, article 161, 2010.
[6] C. Brunner, N. Birbaumer, B. Blankertz et al., "BNCI Horizon 2020: towards a roadmap for the BCI community," Brain-Computer Interfaces, vol. 2, no. 1, pp. 1–10, 2015.
[7] R. Bogue, "Exoskeletons and robotic prosthetics: a review of recent developments," Industrial Robot, vol. 36, no. 5, pp. 421–427, 2009.
[8] R. Ron-Angevin, F. Velasco-Álvarez, S. Sancha-Ros, and L. da Silva-Sauer, "A two-class self-paced BCI to control a robot in four directions," IEEE International Conference on Rehabilitation Robotics (ICORR '11), vol. 2011, Article ID 5975486, 2011.
[9] L. Tonin, T. Carlson, R. Leeb, and J. d. R. Millán, "Brain-controlled telepresence robot by motor-disabled people," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4227–4230, IEEE, Boston, Mass, USA, September 2011.
[10] D. Huang, K. Qian, D.-Y. Fei, W. Jia, X. Chen, and O. Bai, "Electroencephalography (EEG)-based brain-computer interface (BCI): a 2-D virtual wheelchair control based on event-related desynchronization/synchronization and state control," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 20, no. 3, pp. 379–388, 2012.
[11] A. Chella, E. Pagello, E. Menegatti et al., "A BCI teleoperated museum robotic guide," in Proceedings of the International Conference on Complex, Intelligent and Software Intensive Systems (CISIS '09), pp. 783–788, IEEE, Fukuoka, Japan, March 2009.
[12] C. Escolano, J. M. Antelis, and J. Minguez, "A telepresence mobile robot controlled with a noninvasive brain-computer interface," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 3, pp. 793–804, 2012.
[13] F.-B. Vialatte, M. Maurice, J. Dauwels, and A. Cichocki, "Steady-state visually evoked potentials: focus on essential paradigms and future perspectives," Progress in Neurobiology, vol. 90, no. 4, pp. 418–438, 2010.
[14] D. Regan, "Some characteristics of average steady-state and transient responses evoked by modulated light," Electroencephalography and Clinical Neurophysiology, vol. 20, no. 3, pp. 238–248, 1966.
[15] Y. Frank and F. Torres, "Visual evoked potentials in the evaluation of 'cortical blindness' in children," Annals of Neurology, vol. 6, no. 2, pp. 126–129, 1979.
[16] W. Ian McDonald, A. Compston, G. Edan et al., "Recommended diagnostic criteria for multiple sclerosis: guidelines from the international panel on the diagnosis of multiple sclerosis," Annals of Neurology, vol. 50, no. 1, pp. 121–127, 2001.
[17] S. Gao, Y. Wang, X. Gao, and B. Hong, "Visual and auditory brain-computer interfaces," IEEE Transactions on Biomedical Engineering, vol. 61, no. 5, pp. 1436–1447, 2014.
[18] I. Volosyak, "SSVEP-based Bremen-BCI interface: boosting information transfer rates," Journal of Neural Engineering, vol. 8, no. 3, Article ID 036020, 2011.
[19] G. R. Müller-Putz and G. Pfurtscheller, "Control of an electrical prosthesis with an SSVEP-based BCI," IEEE Transactions on Biomedical Engineering, vol. 55, no. 1, pp. 361–364, 2008.
[20] P. Martinez, H. Bakardjian, and A. Cichocki, "Fully online multicommand brain-computer interface with visual neurofeedback using SSVEP paradigm," Computational Intelligence and Neuroscience, vol. 2007, Article ID 94561, 9 pages, 2007.
[21] I. Volosyak, D. Valbuena, T. Lüth, T. Malechka, and A. Gräser, "BCI demographics II: how many (and what kinds of) people can use a high-frequency SSVEP BCI?" IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 19, no. 3, pp. 232–239, 2011.
[22] J. Zhao, Q. Meng, W. Li, M. Li, and G. Chen, "SSVEP-based hierarchical architecture for control of a humanoid robot with mind," in Proceedings of the 11th World Congress on Intelligent Control and Automation (WCICA '14), pp. 2401–2406, Shenyang, China, July 2014.
[23] A. Güneysu and H. Levent Akin, "An SSVEP based BCI to control a humanoid robot by using portable EEG device," in Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '13), pp. 6905–6908, IEEE, Osaka, Japan, July 2013.
[24] K. Holewa and A. Nawrocka, "Emotiv EPOC neuroheadset in brain-computer interface," in Proceedings of the 15th International Carpathian Control Conference (ICCC '14), pp. 149–152, IEEE, Velke Karlovice, Czech Republic, May 2014.
[25] N.-S. Kwak, K.-R. Müller, and S.-W. Lee, "Toward exoskeleton control based on steady state visual evoked potentials," in Proceedings of the International Winter Workshop on Brain-Computer Interface (BCI '14), pp. 1–2, Gangwon, South Korea, February 2014.
[26] G. Bin, X. Gao, Y. Wang, B. Hong, and S. Gao, "VEP-based brain-computer interfaces: time, frequency, and code modulations," IEEE Computational Intelligence Magazine, vol. 4, no. 4, pp. 22–26, 2009.
[27] C. Kapeller, C. Hintermüller, M. Abu-Alqumsan, R. Prückl, A. Peer, and C. Guger, "A BCI using VEP for continuous control of a mobile robot," in Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '13), pp. 5254–5257, IEEE, Osaka, Japan, July 2013.
[28] H. Wang, T. Li, and Z. Huang, "Remote control of an electrical car with SSVEP-based BCI," in Proceedings of the IEEE International Conference on Information Theory and Information Security (ICITIS '10), pp. 837–840, IEEE, Beijing, China, December 2010.
[29] Y. Wang, Y.-T. Wang, and T.-P. Jung, "Visual stimulus design for high-rate SSVEP BCI," Electronics Letters, vol. 46, no. 15, pp. 1057–1058, 2010.
[30] P. Gergondet, D. Petit, and A. Kheddar, "Steering a robot with a brain-computer interface: impact of video feedback on BCI performance," in Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN '12), pp. 271–276, Paris, France, September 2012.
[31] P. F. Diez, V. A. Mut, E. Laciar, and E. M. Avila Perona, "Mobile robot navigation with a self-paced brain-computer interface based on high-frequency SSVEP," Robotica, vol. 32, no. 5, pp. 695–709, 2014.
[32] B. Choi and S. Jo, "A low-cost EEG system-based hybrid brain-computer interface for humanoid robot navigation and recognition," PLoS ONE, vol. 8, no. 9, Article ID e74583, 2013.
[33] E. Tidoni, P. Gergondet, A. Kheddar, and S. M. Aglioti, "Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot," Frontiers in Neurorobotics, vol. 8, article 20, pp. 1–8, 2014.
[34] J. Zhao, W. Li, and M. Li, "Comparative study of SSVEP- and P300-based models for the telepresence control of humanoid robots," PLoS ONE, vol. 10, no. 11, Article ID e0142168, 2015.
[35] T. M. Rutkowski, K. Shimizu, T. Kodama, P. Jurica, and A. Cichocki, "Brain-robot interfaces using spatial tactile BCI paradigms," in Symbiotic Interaction, pp. 132–137, Springer, New York, NY, USA, 2015.
[36] C. Brunner, B. Z. Allison, D. J. Krusienski et al., "Improved signal processing approaches in an offline simulation of a hybrid brain-computer interface," Journal of Neuroscience Methods, vol. 188, no. 1, pp. 165–173, 2010.
[37] F. Gembler, P. Stawicki, and I. Volosyak, "Autonomous parameter adjustment for SSVEP-based BCIs with a novel BCI wizard," Frontiers in Neuroscience, vol. 9, article 474, pp. 1–12, 2015.
[38] E. W. Sellers, T. M. Vaughan, and J. R. Wolpaw, "A brain-computer interface for long-term independent home use," Amyotrophic Lateral Sclerosis, vol. 11, no. 5, pp. 449–455, 2010.
[39] E. M. Holz, L. Botrel, and A. Kübler, "Bridging gaps: long-term independent BCI home-use by a locked-in end-user," in Proceedings of the TOBI Workshop IV, Sion, Switzerland, 2013.
[40] O. Friman, I. Volosyak, and A. Gräser, "Multiple channel detection of steady-state visual evoked potentials for brain-computer interfaces," IEEE Transactions on Biomedical Engineering, vol. 54, no. 4, pp. 742–750, 2007.
[41] I. Volosyak, T. Malechka, D. Valbuena, and A. Gräser, "A novel calibration method for SSVEP based brain-computer interfaces," in Proceedings of the 18th European Signal Processing Conference (EUSIPCO '10), pp. 939–943, Aalborg, Denmark, August 2010.
[42] I. Volosyak, H. Cecotti, and A. Gräser, "Optimal visual stimuli on LCD screens for SSVEP based brain-computer interfaces," in Proceedings of the 4th International IEEE/EMBS Conference on Neural Engineering (NER '09), pp. 447–450, IEEE, Antalya, Turkey, May 2009.
[43] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control," Clinical Neurophysiology, vol. 113, no. 6, pp. 767–791, 2002.

Page 7: Driving a Semiautonomous Mobile Robotic Car Controlled by ... · Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI ... (frequencycoding,methodoriginallyproposed

Computational Intelligence and Neuroscience 7

(a) (b)

Figure 5 (a) shows the second step of theWizardThe subject faced a circle consisting of small segments which flickered simultaneously withseven different frequencies Each of the seven tested frequencies was represented by the same amount of segmentsThe segments representingone of the seven tested frequencies were scattered in random order (b) forming the stimulation circle (a) After this multistimulation withthirteen or fourteen different frequencies depending on the alpha test result the EEG data were analyzed and ordered from the strongest tothe lowest SSVEP response

fS

(a) (b)

Figure 6 This figure shows the third step of the Wizard (a) The main frequency 119891119878is in the center (white circle stimulation) and the other

three frequencies (from four 119891119878119894with the strongest SSVEP response) are divided into small stimulation blocks and scattered around 119891

119878acting

as ldquonoiserdquo in peripheral vision This pattern is shown in (b) This was repeated for each 119891119878119894

232 Wizard The Wizard software is generally spokena calibration program that autonomously determines BCIkey parameters [37] It is designed to select user-specificoptimal stimulation frequencies for SSVEP-based BCIs onLCD screens It is a development of the method presentedin [41] The Wizard chooses four frequencies with the bestSSVEP response out of a set of 14 possible stimulationfrequencies 119891

119894(ranged from 6Hz to 20Hz) that can be

realized on a monitor screen with a vertical refresh rate of120Hz [42] In the following we provide a brief description

Step 1 (alpha test) The subject was instructed by an audiofeedback to close his or her eyes 15 frequencies in the alphawave band (827 857 887 893 923 953 970 100 10301061 1091 1121 1170 120 and 1230Hz) were measured for10 seconds during the closed eyes period and if any of those15 frequencies would interfere with a possible stimulationfrequency 119891

119894(Δ119891 le 015Hz) this frequency 119891

119894would be

filtered out

Step 2 (stimulation frequency selection) In order to avoid theinfluence of harmonic frequencies the user had to look at twocircles in sequence Each circle contained seven stimulationfrequencies which were presented in pseudorandom order assmall green circle segments (see Figure 5) This step took 20seconds

Step 3 (optimal classification thresholds and time segmentlength) In this step the Wizard software chose the beststarting time-length segment 119879

119878as well as the optimal

classification thresholds 120573119894for the frequencies determined

in the previous step First a white circle flickering at thefrequency which had the highest SSVEP response in phase2 was displayed It was necessary to simulate noise causedby peripheral vision when concentrating on the target objectFor this purpose the white circle was surrounded by a greenring containing segments which presented the remainingoptimal frequencies (see Figure 6) The user was instructedby an audio feedback to gaze at the white circle The circle

8 Computational Intelligence and Neuroscience

(a) (b)

Figure 7 Two subjects during the driving task The robotic-GUI is in camera mode The subject in (a) sees a text message instructing himto select the button ldquolook uprdquo The subject in (b) is one step further and is facing the hidden secret number which is attached underneath adrawer

and the ring flickered for 10 seconds while EEG data wererecorded After a two-second break the flickering continuedThe white circle now flickered with the second-highestfrequency from phase two while the ring flickered with theremaining three frequencies This procedure was repeateduntil the data for all four optimal frequencies were collectedTotal recording time for phase three was 40 seconds andbased on this data optimal thresholds and time segmentlength were determined If one of the frequencies 119891

119878119894did not

reach a defined minimal threshold 120573119894 this frequency was

replaced with the next best frequency from Step 2 and theprocedure of this step was repeated

24 Procedure In order to analyze the BCI performancevariety a pre- and postquestionnaire system (similar to [21])was employed After signing the consent form each subjectcompleted a brief prequestionnaire and was prepared for theEEG recording Subjects were seated in a comfortable chairin front of the monitor the electrode cap was placed andelectrode gel was applied First the subjects went throughthe steps of the Wizard and BCI key parameters (stimula-tion frequencies classification thresholds and optimal timesegment length) were determined Then the experimenterstarted the robot-GUI (Figure 3) The control task was todrive the MRC through a predetermined route (Figure 8) Ifthe robot-GUI software recognized a wrong command forexample ldquoturn rightrdquo instead of ldquoturn leftrdquo this commandwas not sent so that the MRC would not drive off the track(Figure 7) If the subjectmade a fewmistakes at the beginningand was willing to start over again (this occurred with asmall number of subjects) the experimentermoved theMRCback to the starting point and restarted the robot-GUI In thevery few cases where the wireless connection to the MRCwas interrupted (eg the MRC was mechanically unable toexecute the command thatwas correctly selected in the robot-GUI by the user) the experimenter manually repositionedthe MRC to the next turning point

At a certain turning point (represented by a blue squareshown in Figure 8 when approximately 60 of the drivingtask was completed) the camera was facing a text messagewhich instructed the subjects to look up (user action rep-resented with a circled number as eg 5 in this case)

Start

Finish

1 m

1 m

1 m

1 m

2 m

15 m15 m

15 m

15 m

15 m

15 m

Figure 8 Track for the mobile robotic car Subjects had to steerthe MRC through a 15-meter-long course with 10 turns in orderto reach the finish line At a certain turning point (marked with ablue square) the MRCs camera faced a text message that instructedthe subject to switch to camera mode The task was to examine thesurroundings and to look for a hidden number which was attachedunderneath a drawer (otherwise not visible) One example of theoptimal path is shown in Figure 9

At this point the user had to switch to camera mode andselect the command ldquolook uprdquo Now the subject was ableto see a hidden secret number which was attached to thebottom of a drawer (located at a height of 50 cm facingthe floor not normally visible to the people in the room)The numbers varied randomly between the subjects andwere only accessible via the MRC camera After seeing thenumber on the screen the subject had to change back todrive mode and continued the driving task When arriving atthe finish line the camera faced another text message whichagain instructed the subjects to look up After changing tocamera mode and selecting ldquolook uprdquo the camera faced asign displaying the word ldquofinishrdquo and the robot-GUI stoppedautomatically All of the EEG data and performed commandswere stored on the hard drive of the desktop PC for furtherevaluation The values of ITR and accuracies were calculatedautomatically by the software The ITR was calculated usingthe formula presented by Wolpaw et al in [43] the numberof possible choices for classification was equal to four itwas the same for each mode (drive mode or camera mode)After finishing the experiment the subject filled out thepostquestionnaire

Computational Intelligence and Neuroscience 9

Figure 9This chart presents exemplarily the BCI performance for subject 59 (optimal path with 100 accuracy)The top row shows the rawEEG signal acquired from channel119874

119911(one of eight EEG electrodes)The remaining graphs show the SNR values during the whole experiment

(dashed line represents the threshold for each command) for the four commands forwardup turn left turn right and cameradrive thestimulation frequencies for those commands were 667 750 706 and 632Hz respectively The 119909-axis the same for all graphs representsthe total duration of the experiment (150 seconds) The marking above each graph represents the exact time of each classification circlednumbers below the 119909-axis correspond to the task actions

3 Results

The results of the driving performance are shown in Table 3All 61 subjects were able to control the BCI application andcompleted the entire driving taskThe overall mean (SD) ITRof the driving performance was 1407 (444) bitsmin Themean (SD) accuracy was 9714 (373) Except for one allsubjects reached accuracies above 80 40 subjects reachedaccuracies above 90 and nineteen of them even completedthe steering task without any errorsThemean (SD) accuracyof each action (commands required to complete the task) was

9614 (002) In the prequestionnaire subjects answered thequestions regarding gender the need for vision correctiontiredness and BCI experience as displayed in Table 2Eighteen female subjects (30) with an average (SD) ageof 2267 (430) years and 43 male subjects with an average(SD) age of 2260 (532) years participated in the experimentThe influence of gender was also tested 18 female subjectsachieved a mean (SD) accuracy and ITR of 9324 (798)and 1405 (573) bitsmin respectively and 43 male achieveda mean (SD) accuracy and ITR of 9295 (659) and 1407(508) bitsmin respectively A one-way ANOVA showed no

10 Computational Intelligence and Neuroscience

Table 2 Questionnaire results The table presents the answers collected from the prequestionnaire The figures show the number of respon-dents or the mean (SD) value

Age Gender Visioncorrection Tiredness Length of sleep

last nightExperiencewith BCIs

Years Male Female Yes No 1 nottired

2 littletired

3 moderatelytired 4 tired 5 very

tired Hours Yes No

2262(500) 43 18 22 39 18 23 17 3 0 675 (122) 6 55

statistically significant difference in accuracy 119865(1 59) =

0022 119901 = 0883 and ITR 119865(1 59) = 00003 119901 = 0987between those two groupsThe influence of vision correctionon accuracy and ITR was also tested A total of 22 subjectsrequired vision correction in the form of glasses or contactlenses (as they stated in the questionnaire) They achieveda mean (SD) accuracy and ITR of 9156 (636) and 1278(402) bitsmin respectively These results were compared tosubjects who did not require vision correction and achieveda mean (SD) accuracy and ITR of 9392 (535) and 1484(466) bitsmin respectively A one-way ANOVA showedno statistically significant influence of vision correction onaccuracy 119865(1 59) = 1664 119901 = 0202 There was also nosignificant effect of vision correction on ITR119865(1 59) = 2263119901 = 0138 An evaluation of tiredness was also calculated onthe basis of the answers given in a prequestionnaire (scalefrom 1 to 5) Results of the prequestionnaire are shown inTable 2 A one-way ANOVA revealed no significant effect oftiredness on accuracy 119865(4 56) = 0335 119901 = 0853 and ITR119865(4 56) = 0067 119901 = 0991

4 Discussion

In the results presented the overall accuracy was calculatedfor all commands in drive mode and in camera modewithout distinguishing between them since four commandsfor SSVEP classification were always present Even thoughdifferent commands were sent the command classificationremained the same A comparison of this study to otherpublications on steering a robot with SSVEP-based BCI ispresented in Table 4

In order to test the influence of video overlay we com-pared the accuracy results with a demographic study [21] inwhich 86 subjects achieved a mean (SD) accuracy of 9226(782) while driving a small robot through a labyrinth witha four-class SSVEP-based BCI An unpaired 119905-test showed nostatistically significant difference 119905(145) = minus0089 119901 = 0397compared to the present values This shows that the videooverlay did not negatively influence the BCI performanceGergondet et al listed three factors with respect to howthe live video feedback in the background can influence theSSVEP stimulation distraction due to what is happening onthe video distraction due to what is happening around theuser and variation in luminosity below the stimuli shownto the user [30] Kapeller et al showed that there is a slightdecrease in performance for the underlying video sequencedue to a distractive environment and small targets used [27]

Contrary to the findings reported by Gergondet et al andKapeller et al the dynamic background did not noticeablyaffect the BCI performance (unpaired 119905-test result) Weassume that this was due to the fact that stimulation frequen-cies and minimal time segment classification windows wereadjusted individually and that relatively large stimulationboxes were used (visual angle)

The high accuracies of the SSVEP-based BCI systemsupport the hypothesis that BCI can be used to control acomplex driving application In comparison to the study byDiez et al [31] the MRC self-driving behavior was limitedto a preset number of turning points and a predefined pathAlthough the robot presented by Diez et al could avoidobstacles by itself (limited algorithm reaction) it decreased itsspeed for trajectory adjustment and in some cases stopped toavoid collision and further user commands were necessaryWhile Diez et al reported a mean time of 23320 seconds forthe second destination (139-meter distance) in this study theachieved mean time was 20708 seconds (15-meter distance)As pointed out by Diez et al the overall time for completingthe task depends not only on the robot but also on thesubjectrsquos performance controlling the BCI system [31] In ourstudy the high accuracies of the BCI system were achieveddue to the key parameter adjustment carried out by theWizard software

Themain difference between the robot used by Diez et aland the MRC presented in this study is the extended controlpossibility of the interface that also allows for controlling thecamera Both robots were designed for different purposesDiez et al mainly designed their robot for driving frompoint A to point B without any additional interaction withthe environment as they planned to use the system forwheelchair control in the future By contrast in our designthe MRC itself is the interaction device Equipped with adisplay and a two-way audio system it might serve as aremote telepresence in the future

The SSVEP paradigm allows for a robust implementationof four driving commands Compared to robots constructedfor the P300 BCI approach where the user faces largestimulation matrices spread across the entire screen thenumber of simultaneously displayed targets here is only fourand hence the load on the visual channel is considerably lowerin comparison Therefore the user might find it easier toconcentrate on the driving task On the other hand the lessernumber of command options reduces freedom and speedHowever a shared-control paradigm allows for complex tasksto be executed with minimal effort The command ldquodrive

Computational Intelligence and Neuroscience 11

Table 3 Results of the driving performance Total time each subjectrequired for steering the MRC through the route (15 meters and10 turns including camera movements) 26 correct commands (3commands in camera mode and 23 commands in drive mode) werenecessary to complete the task Accuracy (Acc) is the number ofcorrect commands (26) divided by the total number of commands(119862119899) Mean values are given at the bottom of the table

Subject Time Acc ITR119862119899[s] [] [bitsmin]

1 30560 8667 720 302 28346 8966 832 293 28407 8125 680 324 28011 8125 690 325 25259 8125 765 326 19002 9286 1340 287 32206 9630 862 278 31007 7027 466 379 15813 9286 1610 2810 37182 8966 635 2911 24172 8667 910 3012 15133 9286 1683 2813 15549 10000 2007 2614 21054 9630 1318 2715 20841 10000 1497 2616 24162 8387 852 3117 24172 8667 910 3018 14513 9630 1912 2719 16748 9630 1657 2720 22080 9286 1153 2821 19419 9630 1429 2722 43154 9286 590 2823 16352 10000 1908 2624 17184 8667 1280 3025 24812 8387 830 3126 13193 10000 2365 2627 16138 10000 1933 2628 18475 9630 1502 2729 14422 10000 2163 2630 16748 9286 1520 2831 21663 9286 1175 2832 35516 8125 544 3233 24406 8667 901 3034 15214 10000 2051 2635 19561 9630 1418 2736 23948 8387 860 3137 16636 8966 1418 2938 17245 10000 1809 2639 21897 8387 940 3140 16565 8966 1425 2941 14666 10000 2127 2642 18627 8966 1267 29

Table 3 Continued

Subject Time Acc ITR119862119899[s] [] [bitsmin]

43 22131 9286 1151 2844 25929 8667 848 3045 15702 10000 1987 2646 14788 9630 1876 2747 18718 9630 1482 2748 14554 10000 2144 2649 16057 9630 1728 2750 20495 10000 1522 2651 22811 9630 1216 2752 16037 10000 1946 2653 23664 8125 817 3254 14889 10000 2095 2655 15011 10000 2078 2656 13985 10000 2231 2657 14716 10000 2120 2658 24273 10000 1285 2659 15001 10000 2080 2660 16463 9630 1685 2761 17885 9630 1551 27Mean 20708 9303 1407 2811SD 5025 573 444 182Min 13193 7027 466 2600Max 43154 10000 2365 3700

forwardrdquo starts an autopilot program that guides the robotthrough a straight distance until it reaches a predefinedstopping point

While the MRC executed the autopilot program andfollowed the line for some time subjects purposely ignoredthe flickering targets and concentrated on the video feedbackinstead As these waiting periods were added up to theactual classification times to each command that followed anautopilot command (ldquoforwardrdquo) the ITR values are consid-erably low From the researcherrsquos point of view a numberof errors could have been avoided if some subjects wouldhave paid more attention to the live feedback Especially inthe task of reading the hidden number subjects were tryingto drive along the path without looking at the text messagethat appeared on the live feedback After realizing that therobot was not turning they looked at the video and saw thetext message instructing them to look up This is visible inthe analysis of the accuracy of the robots commands (seeFigure 10) The lowest accuracy of 8749 was for actionldquochange view to the camera moderdquo (robot command 14) thismay be caused by the fact that the subjects got used to theactions ldquoturn leftrdquoldquoturn rightrdquo and ldquodrive forwardrdquo whichthey repeated six times from the starting point (see Figures10 and 9) This could easily be avoided if the driving pathswere designedwith evenly distributed tasks (eg live scenariotasks such as reading a hidden number should occur moreoften)

12 Computational Intelligence and Neuroscience

Table 4 Comparison of BCI systems for robot navigation

Publication Subjects Classes Literacy rate [] Mean accuracy [] Mean ITR [bitsmin] Robot typeVolosyak et al [21] 86 4 98 9226 1724 E-puckZhao et al1 [34] 7 4 100 903 247 NAOKapeller et al2 [27] 11 4 91 9136 NA E-puckDiez et al3 [31] 7 4 100 8880lowast NA Pioneer 3-DXHolewa and Nawrocka [24] 3 4 100 7375 1136 Lego MindstormChoi and Jo1 [32] 5 2 100 844 1140 NAOThis study 61 4 100 9303 1407 MRC1Results for SSVEP paradigm2f-VEP results for 4 classes without zero class3Results for destination 1 lowastcorrect(correct + wrong) commands

Table 5 Distribution of the time blocks needed for selection of the first three commands ldquodrive forwardrdquo and the last three commandsldquoforwardrdquoldquolook uprdquo over all subjects

Distribution []Time windows 119879

119878in EEG data blocks

8 10 15 20 30 40 50 60 to 160A 656 1803 1803 2295 1148 656 820 820C 820 2295 820 1967 1148 820 492 1639E 984 1311 820 1967 656 984 820 2459Mean 820 1803 1148 2077 984 820 710 1639œ 984 1803 1475 1803 984 328 656 1967X 1148 1639 1639 1475 1311 984 164 1639Y 1148 1475 1148 2295 656 656 492 2131Mean 1093 1639 1421 1858 984 656 437 1913

Accuracy of each command of the robotrsquos path

1 2 3 4 5 6 7 8 9

Commands

10080604020

0

10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26

()

Figure 10This chart presents the accuracy of each command alongthe path (119909-axis) and its selection accuracy to previous correctselection over all subjects

No difference between the time needed for the selectionof the command ldquodrive forwardrdquo in the beginning of theexperiment in comparison to the end of the experiment canbe observed (see Table 5) hence neither fatigue nor practiceseemed to influence performance

The general literacy rate among test subjects of 100indicates that SSVEP-based BCIs can be used by a largerpopulation than previously assumedThis high literacy is dueto the careful selection of individual SSVEP key parameterswhichwere determined by the presented calibration software

41 Limitations The SSVEP-control application has somerestrictions

(i) The wireless connection was implemented with theUDP protocol and some information might thereforenot be received properly

(ii) In order to maximize performance accuracy and userfriendliness the number of simultaneously displayedtargets was restricted to four which resulted in aconsiderably lower ITR

(iii) It should also be noted that for long term use recal-ibration might be inevitable Although the numberof targets is limited to four the SSVEP-based BCIpresented here depends on the userrsquos vision andcontrol over his or her eye movements It thereforehas to compete with other approaches such as P300and other eye tracking systems Visual stimulationwith low frequencies in particular is known to causefatigue

(iv) Moreover subjects in this study may not be repre-sentative of the general population they tended tobe young healthy men Further tests with older andphysically impaired persons are required

5 Conclusions

In the present study the SSVEP-based driving application(robot-GUI) was proven to be successful and robust Thedriving application with live video feedback was tested with61 healthy subjects All participants successfully navigateda mobile semiautonomous robot through a predetermined15-meter-long route BCI key parameters were determinedby the Wizard software No negative influence of the livevideo feedback on the performance was observed Such

Computational Intelligence and Neuroscience 13

applications could be used to control assistive robots butmight also be of interest for healthy subjects In contrast toP300 the SSVEP stimulation has to be sufficiently strong (interms of size or visual angle) to significantly influence theuser A common problem of all BCI studies that also occurredhere is to keep the usermotivated and focused on the taskTheevaluation of the questionnaire shows that for the majorityof subjects the driving task with the SSVEP-based BCI wasnot fatiguing even though lower stimulation frequencieswhich usually cause more fatigue than higher frequencieswere used in the experiment More specifically frequenciesbetween 6 and 12Hz were determined in the calibrationprocedure as they elicited the strongest SSVEP responsesSome subjects stated that as they concentrated more on thecamera video they did not find the flickering annoying at allOur results prove that SSVEP-based personalized adaptiveBCI systems with a low target number can be used as aneveryday assistive technology for remote robotsWe have alsoshown that it is possible to design a four-target interface withextended capabilities such as camera and driving controlThe control application presented here can be used formobile robots as an assistive remote technology as a gameinterface forwheelchair control or even for steering remotelycontrolled drones This study should be considered as a steptowards resolving the problems of semiautonomous mobilerobots controlled with SSVEP-based BCIs In order to furtherimprove this approach the next steps should be an onlineparameter adaptationmechanism for improving the accuracythrough practicing a possibility of turning the stimulationonoff and a two-way video and audio communication fora truly remote telepresence

Competing Interests

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interests.

Acknowledgments

The authors would like to thank Thomas Grunenberg from the Faculty of Technology and Bionics at Rhine-Waal University of Applied Sciences for building, programming, and maintaining the MRCs. The authors would also like to thank the student assistants Leonie Hillen, Francisco Nunez, Saad Platini, Anna Catharina Thoma, and Paul Fanen Wundu for their help with the study. This research was supported by the German Federal Ministry of Education and Research (BMBF) under Grants nos. 16SV6364 and 01DR14014 and the European Fund for Regional Development (EFRD, or EFRE in Germany) under Grant no. GE-1-1-047.

References

[1] M. Cheney and R. Uth, Tesla: Master of Lightning, Barnes & Noble, New York, NY, USA, 1999.

[2] W. Walter, The Living Brain, W. W. Norton & Company, New York, NY, USA, 1953.

[3] P. Flandorfer, “Population ageing and socially assistive robots for elderly persons: the importance of sociodemographic factors for user acceptance,” International Journal of Population Research, vol. 2012, Article ID 829835, 13 pages, 2012.

[4] A. Nijholt, D. Tan, G. Pfurtscheller et al., “Brain-computer interfacing for intelligent systems,” IEEE Intelligent Systems, vol. 23, no. 3, pp. 72–79, 2008.

[5] J. D. R. Millan, R. Rupp, G. R. Muller-Putz et al., “Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges,” Frontiers in Neuroscience, vol. 4, article 161, 2010.

[6] C. Brunner, N. Birbaumer, B. Blankertz et al., “BNCI horizon 2020: towards a roadmap for the BCI community,” Brain-Computer Interfaces, vol. 2, no. 1, pp. 1–10, 2015.

[7] R. Bogue, “Exoskeletons and robotic prosthetics: a review of recent developments,” Industrial Robot, vol. 36, no. 5, pp. 421–427, 2009.

[8] R. Ron-Angevin, F. Velasco-Alvarez, S. Sancha-Ros, and L. da Silva-Sauer, “A two-class self-paced BCI to control a robot in four directions,” IEEE International Conference on Rehabilitation Robotics (ICORR ’11), vol. 2011, Article ID 5975486, 2011.

[9] L. Tonin, T. Carlson, R. Leeb, and J. D. R. Millan, “Brain-controlled telepresence robot by motor-disabled people,” in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4227–4230, IEEE, Boston, Mass, USA, September 2011.

[10] D. Huang, K. Qian, D.-Y. Fei, W. Jia, X. Chen, and O. Bai, “Electroencephalography (EEG)-based brain-computer interface (BCI): a 2-D virtual wheelchair control based on event-related desynchronization/synchronization and state control,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 20, no. 3, pp. 379–388, 2012.

[11] A. Chella, E. Pagello, E. Menegatti et al., “A BCI teleoperated museum robotic guide,” in Proceedings of the International Conference on Complex, Intelligent and Software Intensive Systems (CISIS ’09), pp. 783–788, IEEE, Fukuoka, Japan, March 2009.

[12] C. Escolano, J. M. Antelis, and J. Minguez, “A telepresence mobile robot controlled with a noninvasive brain-computer interface,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 3, pp. 793–804, 2012.

[13] F.-B. Vialatte, M. Maurice, J. Dauwels, and A. Cichocki, “Steady-state visually evoked potentials: focus on essential paradigms and future perspectives,” Progress in Neurobiology, vol. 90, no. 4, pp. 418–438, 2010.

[14] D. Regan, “Some characteristics of average steady-state and transient responses evoked by modulated light,” Electroencephalography and Clinical Neurophysiology, vol. 20, no. 3, pp. 238–248, 1966.

[15] Y. Frank and F. Torres, “Visual evoked potentials in the evaluation of ‘cortical blindness’ in children,” Annals of Neurology, vol. 6, no. 2, pp. 126–129, 1979.

[16] W. I. McDonald, A. Compston, G. Edan et al., “Recommended diagnostic criteria for multiple sclerosis: guidelines from the international panel on the diagnosis of multiple sclerosis,” Annals of Neurology, vol. 50, no. 1, pp. 121–127, 2001.

[17] S. Gao, Y. Wang, X. Gao, and B. Hong, “Visual and auditory brain-computer interfaces,” IEEE Transactions on Biomedical Engineering, vol. 61, no. 5, pp. 1436–1447, 2014.

[18] I. Volosyak, “SSVEP-based Bremen-BCI interface—boosting information transfer rates,” Journal of Neural Engineering, vol. 8, no. 3, Article ID 036020, 2011.

[19] G. R. Muller-Putz and G. Pfurtscheller, “Control of an electrical prosthesis with an SSVEP-based BCI,” IEEE Transactions on Biomedical Engineering, vol. 55, no. 1, pp. 361–364, 2008.

[20] P. Martinez, H. Bakardjian, and A. Cichocki, “Fully online multicommand brain-computer interface with visual neurofeedback using SSVEP paradigm,” Computational Intelligence and Neuroscience, vol. 2007, Article ID 94561, 9 pages, 2007.

[21] I. Volosyak, D. Valbuena, T. Luth, T. Malechka, and A. Graser, “BCI demographics II: how many (and what kinds of) people can use a high-frequency SSVEP BCI?” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 19, no. 3, pp. 232–239, 2011.

[22] J. Zhao, Q. Meng, W. Li, M. Li, and G. Chen, “SSVEP-based hierarchical architecture for control of a humanoid robot with mind,” in Proceedings of the 11th World Congress on Intelligent Control and Automation (WCICA ’14), pp. 2401–2406, Shenyang, China, July 2014.

[23] A. Guneysu and H. L. Akin, “An SSVEP based BCI to control a humanoid robot by using portable EEG device,” in Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC ’13), pp. 6905–6908, IEEE, Osaka, Japan, July 2013.

[24] K. Holewa and A. Nawrocka, “Emotiv EPOC neuroheadset in brain—computer interface,” in Proceedings of the 15th International Carpathian Control Conference (ICCC ’14), pp. 149–152, IEEE, Velke Karlovice, Czech Republic, May 2014.

[25] N.-S. Kwak, K.-R. Muller, and S.-W. Lee, “Toward exoskeleton control based on steady state visual evoked potentials,” in Proceedings of the International Winter Workshop on Brain-Computer Interface (BCI ’14), pp. 1–2, Gangwon, South Korea, February 2014.

[26] G. Bin, X. Gao, Y. Wang, B. Hong, and S. Gao, “VEP-based brain-computer interfaces: time, frequency, and code modulations,” IEEE Computational Intelligence Magazine, vol. 4, no. 4, pp. 22–26, 2009.

[27] C. Kapeller, C. Hintermuller, M. Abu-Alqumsan, R. Pruckl, A. Peer, and C. Guger, “A BCI using VEP for continuous control of a mobile robot,” in Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC ’13), pp. 5254–5257, IEEE, Osaka, Japan, July 2013.

[28] H. Wang, T. Li, and Z. Huang, “Remote control of an electrical car with SSVEP-based BCI,” in Proceedings of the IEEE International Conference on Information Theory and Information Security (ICITIS ’10), pp. 837–840, IEEE, Beijing, China, December 2010.

[29] Y. Wang, Y.-T. Wang, and T.-P. Jung, “Visual stimulus design for high-rate SSVEP BCI,” Electronics Letters, vol. 46, no. 15, pp. 1057–1058, 2010.

[30] P. Gergondet, D. Petit, and A. Kheddar, “Steering a robot with a brain-computer interface: impact of video feedback on BCI performance,” in Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN ’12), pp. 271–276, Paris, France, September 2012.

[31] P. F. Diez, V. A. Mut, E. Laciar, and E. M. Avila Perona, “Mobile robot navigation with a self-paced brain-computer interface based on high-frequency SSVEP,” Robotica, vol. 32, no. 5, pp. 695–709, 2014.

[32] B. Choi and S. Jo, “A low-cost EEG system-based hybrid brain-computer interface for humanoid robot navigation and recognition,” PLoS ONE, vol. 8, no. 9, Article ID e74583, 2013.

[33] E. Tidoni, P. Gergondet, A. Kheddar, and S. M. Aglioti, “Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot,” Frontiers in Neurorobotics, vol. 8, article 20, pp. 1–8, 2014.

[34] J. Zhao, W. Li, and M. Li, “Comparative study of SSVEP- and P300-based models for the telepresence control of humanoid robots,” PLoS ONE, vol. 10, no. 11, Article ID e0142168, 2015.

[35] T. M. Rutkowski, K. Shimizu, T. Kodama, P. Jurica, and A. Cichocki, “Brain-robot interfaces using spatial tactile BCI paradigms,” in Symbiotic Interaction, pp. 132–137, Springer, New York, NY, USA, 2015.

[36] C. Brunner, B. Z. Allison, D. J. Krusienski et al., “Improved signal processing approaches in an offline simulation of a hybrid brain-computer interface,” Journal of Neuroscience Methods, vol. 188, no. 1, pp. 165–173, 2010.

[37] F. Gembler, P. Stawicki, and I. Volosyak, “Autonomous parameter adjustment for SSVEP-based BCIs with a novel BCI wizard,” Frontiers in Neuroscience, vol. 9, article 474, pp. 1–12, 2015.

[38] E. W. Sellers, T. M. Vaughan, and J. R. Wolpaw, “A brain-computer interface for long-term independent home use,” Amyotrophic Lateral Sclerosis, vol. 11, no. 5, pp. 449–455, 2010.

[39] E. M. Holz, L. Botrel, and A. Kubler, “Bridging gaps: long-term independent BCI home-use by a locked-in end-user,” in Proceedings of the TOBI Workshop IV, Sion, Switzerland, 2013.

[40] O. Friman, I. Volosyak, and A. Graser, “Multiple channel detection of steady-state visual evoked potentials for brain-computer interfaces,” IEEE Transactions on Biomedical Engineering, vol. 54, no. 4, pp. 742–750, 2007.

[41] I. Volosyak, T. Malechka, D. Valbuena, and A. Graser, “A novel calibration method for SSVEP based brain-computer interfaces,” in Proceedings of the 18th European Signal Processing Conference (EUSIPCO ’10), pp. 939–943, Aalborg, Denmark, August 2010.

[42] I. Volosyak, H. Cecotti, and A. Graser, “Optimal visual stimuli on LCD screens for SSVEP based brain-computer interfaces,” in Proceedings of the 4th International IEEE/EMBS Conference on Neural Engineering (NER ’09), pp. 447–450, IEEE, Antalya, Turkey, May 2009.

[43] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, “Brain-computer interfaces for communication and control,” Clinical Neurophysiology, vol. 113, no. 6, pp. 767–791, 2002.


Computational Intelligence and Neuroscience 9

Figure 9This chart presents exemplarily the BCI performance for subject 59 (optimal path with 100 accuracy)The top row shows the rawEEG signal acquired from channel119874

119911(one of eight EEG electrodes)The remaining graphs show the SNR values during the whole experiment

(dashed line represents the threshold for each command) for the four commands forwardup turn left turn right and cameradrive thestimulation frequencies for those commands were 667 750 706 and 632Hz respectively The 119909-axis the same for all graphs representsthe total duration of the experiment (150 seconds) The marking above each graph represents the exact time of each classification circlednumbers below the 119909-axis correspond to the task actions

3 Results

The results of the driving performance are shown in Table 3All 61 subjects were able to control the BCI application andcompleted the entire driving taskThe overall mean (SD) ITRof the driving performance was 1407 (444) bitsmin Themean (SD) accuracy was 9714 (373) Except for one allsubjects reached accuracies above 80 40 subjects reachedaccuracies above 90 and nineteen of them even completedthe steering task without any errorsThemean (SD) accuracyof each action (commands required to complete the task) was

9614 (002) In the prequestionnaire subjects answered thequestions regarding gender the need for vision correctiontiredness and BCI experience as displayed in Table 2Eighteen female subjects (30) with an average (SD) ageof 2267 (430) years and 43 male subjects with an average(SD) age of 2260 (532) years participated in the experimentThe influence of gender was also tested 18 female subjectsachieved a mean (SD) accuracy and ITR of 9324 (798)and 1405 (573) bitsmin respectively and 43 male achieveda mean (SD) accuracy and ITR of 9295 (659) and 1407(508) bitsmin respectively A one-way ANOVA showed no

10 Computational Intelligence and Neuroscience

Table 2 Questionnaire results The table presents the answers collected from the prequestionnaire The figures show the number of respon-dents or the mean (SD) value

Age Gender Visioncorrection Tiredness Length of sleep

last nightExperiencewith BCIs

Years Male Female Yes No 1 nottired

2 littletired

3 moderatelytired 4 tired 5 very

tired Hours Yes No

2262(500) 43 18 22 39 18 23 17 3 0 675 (122) 6 55

statistically significant difference in accuracy 119865(1 59) =

0022 119901 = 0883 and ITR 119865(1 59) = 00003 119901 = 0987between those two groupsThe influence of vision correctionon accuracy and ITR was also tested A total of 22 subjectsrequired vision correction in the form of glasses or contactlenses (as they stated in the questionnaire) They achieveda mean (SD) accuracy and ITR of 9156 (636) and 1278(402) bitsmin respectively These results were compared tosubjects who did not require vision correction and achieveda mean (SD) accuracy and ITR of 9392 (535) and 1484(466) bitsmin respectively A one-way ANOVA showedno statistically significant influence of vision correction onaccuracy 119865(1 59) = 1664 119901 = 0202 There was also nosignificant effect of vision correction on ITR119865(1 59) = 2263119901 = 0138 An evaluation of tiredness was also calculated onthe basis of the answers given in a prequestionnaire (scalefrom 1 to 5) Results of the prequestionnaire are shown inTable 2 A one-way ANOVA revealed no significant effect oftiredness on accuracy 119865(4 56) = 0335 119901 = 0853 and ITR119865(4 56) = 0067 119901 = 0991

4 Discussion

In the results presented the overall accuracy was calculatedfor all commands in drive mode and in camera modewithout distinguishing between them since four commandsfor SSVEP classification were always present Even thoughdifferent commands were sent the command classificationremained the same A comparison of this study to otherpublications on steering a robot with SSVEP-based BCI ispresented in Table 4

In order to test the influence of video overlay we com-pared the accuracy results with a demographic study [21] inwhich 86 subjects achieved a mean (SD) accuracy of 9226(782) while driving a small robot through a labyrinth witha four-class SSVEP-based BCI An unpaired 119905-test showed nostatistically significant difference 119905(145) = minus0089 119901 = 0397compared to the present values This shows that the videooverlay did not negatively influence the BCI performanceGergondet et al listed three factors with respect to howthe live video feedback in the background can influence theSSVEP stimulation distraction due to what is happening onthe video distraction due to what is happening around theuser and variation in luminosity below the stimuli shownto the user [30] Kapeller et al showed that there is a slightdecrease in performance for the underlying video sequencedue to a distractive environment and small targets used [27]

Contrary to the findings reported by Gergondet et al andKapeller et al the dynamic background did not noticeablyaffect the BCI performance (unpaired 119905-test result) Weassume that this was due to the fact that stimulation frequen-cies and minimal time segment classification windows wereadjusted individually and that relatively large stimulationboxes were used (visual angle)

The high accuracies of the SSVEP-based BCI systemsupport the hypothesis that BCI can be used to control acomplex driving application In comparison to the study byDiez et al [31] the MRC self-driving behavior was limitedto a preset number of turning points and a predefined pathAlthough the robot presented by Diez et al could avoidobstacles by itself (limited algorithm reaction) it decreased itsspeed for trajectory adjustment and in some cases stopped toavoid collision and further user commands were necessaryWhile Diez et al reported a mean time of 23320 seconds forthe second destination (139-meter distance) in this study theachieved mean time was 20708 seconds (15-meter distance)As pointed out by Diez et al the overall time for completingthe task depends not only on the robot but also on thesubjectrsquos performance controlling the BCI system [31] In ourstudy the high accuracies of the BCI system were achieveddue to the key parameter adjustment carried out by theWizard software

Themain difference between the robot used by Diez et aland the MRC presented in this study is the extended controlpossibility of the interface that also allows for controlling thecamera Both robots were designed for different purposesDiez et al mainly designed their robot for driving frompoint A to point B without any additional interaction withthe environment as they planned to use the system forwheelchair control in the future By contrast in our designthe MRC itself is the interaction device Equipped with adisplay and a two-way audio system it might serve as aremote telepresence in the future

The SSVEP paradigm allows for a robust implementationof four driving commands Compared to robots constructedfor the P300 BCI approach where the user faces largestimulation matrices spread across the entire screen thenumber of simultaneously displayed targets here is only fourand hence the load on the visual channel is considerably lowerin comparison Therefore the user might find it easier toconcentrate on the driving task On the other hand the lessernumber of command options reduces freedom and speedHowever a shared-control paradigm allows for complex tasksto be executed with minimal effort The command ldquodrive

Computational Intelligence and Neuroscience 11

Table 3 Results of the driving performance Total time each subjectrequired for steering the MRC through the route (15 meters and10 turns including camera movements) 26 correct commands (3commands in camera mode and 23 commands in drive mode) werenecessary to complete the task Accuracy (Acc) is the number ofcorrect commands (26) divided by the total number of commands(119862119899) Mean values are given at the bottom of the table

Subject Time Acc ITR119862119899[s] [] [bitsmin]

1 30560 8667 720 302 28346 8966 832 293 28407 8125 680 324 28011 8125 690 325 25259 8125 765 326 19002 9286 1340 287 32206 9630 862 278 31007 7027 466 379 15813 9286 1610 2810 37182 8966 635 2911 24172 8667 910 3012 15133 9286 1683 2813 15549 10000 2007 2614 21054 9630 1318 2715 20841 10000 1497 2616 24162 8387 852 3117 24172 8667 910 3018 14513 9630 1912 2719 16748 9630 1657 2720 22080 9286 1153 2821 19419 9630 1429 2722 43154 9286 590 2823 16352 10000 1908 2624 17184 8667 1280 3025 24812 8387 830 3126 13193 10000 2365 2627 16138 10000 1933 2628 18475 9630 1502 2729 14422 10000 2163 2630 16748 9286 1520 2831 21663 9286 1175 2832 35516 8125 544 3233 24406 8667 901 3034 15214 10000 2051 2635 19561 9630 1418 2736 23948 8387 860 3137 16636 8966 1418 2938 17245 10000 1809 2639 21897 8387 940 3140 16565 8966 1425 2941 14666 10000 2127 2642 18627 8966 1267 29

Table 3 Continued

Subject Time Acc ITR119862119899[s] [] [bitsmin]

43 22131 9286 1151 2844 25929 8667 848 3045 15702 10000 1987 2646 14788 9630 1876 2747 18718 9630 1482 2748 14554 10000 2144 2649 16057 9630 1728 2750 20495 10000 1522 2651 22811 9630 1216 2752 16037 10000 1946 2653 23664 8125 817 3254 14889 10000 2095 2655 15011 10000 2078 2656 13985 10000 2231 2657 14716 10000 2120 2658 24273 10000 1285 2659 15001 10000 2080 2660 16463 9630 1685 2761 17885 9630 1551 27Mean 20708 9303 1407 2811SD 5025 573 444 182Min 13193 7027 466 2600Max 43154 10000 2365 3700

forwardrdquo starts an autopilot program that guides the robotthrough a straight distance until it reaches a predefinedstopping point

While the MRC executed the autopilot program andfollowed the line for some time subjects purposely ignoredthe flickering targets and concentrated on the video feedbackinstead As these waiting periods were added up to theactual classification times to each command that followed anautopilot command (ldquoforwardrdquo) the ITR values are consid-erably low From the researcherrsquos point of view a numberof errors could have been avoided if some subjects wouldhave paid more attention to the live feedback Especially inthe task of reading the hidden number subjects were tryingto drive along the path without looking at the text messagethat appeared on the live feedback After realizing that therobot was not turning they looked at the video and saw thetext message instructing them to look up This is visible inthe analysis of the accuracy of the robots commands (seeFigure 10) The lowest accuracy of 8749 was for actionldquochange view to the camera moderdquo (robot command 14) thismay be caused by the fact that the subjects got used to theactions ldquoturn leftrdquoldquoturn rightrdquo and ldquodrive forwardrdquo whichthey repeated six times from the starting point (see Figures10 and 9) This could easily be avoided if the driving pathswere designedwith evenly distributed tasks (eg live scenariotasks such as reading a hidden number should occur moreoften)

12 Computational Intelligence and Neuroscience

Table 4 Comparison of BCI systems for robot navigation

Publication Subjects Classes Literacy rate [] Mean accuracy [] Mean ITR [bitsmin] Robot typeVolosyak et al [21] 86 4 98 9226 1724 E-puckZhao et al1 [34] 7 4 100 903 247 NAOKapeller et al2 [27] 11 4 91 9136 NA E-puckDiez et al3 [31] 7 4 100 8880lowast NA Pioneer 3-DXHolewa and Nawrocka [24] 3 4 100 7375 1136 Lego MindstormChoi and Jo1 [32] 5 2 100 844 1140 NAOThis study 61 4 100 9303 1407 MRC1Results for SSVEP paradigm2f-VEP results for 4 classes without zero class3Results for destination 1 lowastcorrect(correct + wrong) commands

Table 5 Distribution of the time blocks needed for selection of the first three commands ldquodrive forwardrdquo and the last three commandsldquoforwardrdquoldquolook uprdquo over all subjects

Distribution []Time windows 119879

119878in EEG data blocks

8 10 15 20 30 40 50 60 to 160A 656 1803 1803 2295 1148 656 820 820C 820 2295 820 1967 1148 820 492 1639E 984 1311 820 1967 656 984 820 2459Mean 820 1803 1148 2077 984 820 710 1639œ 984 1803 1475 1803 984 328 656 1967X 1148 1639 1639 1475 1311 984 164 1639Y 1148 1475 1148 2295 656 656 492 2131Mean 1093 1639 1421 1858 984 656 437 1913

Accuracy of each command of the robotrsquos path

1 2 3 4 5 6 7 8 9

Commands

10080604020

0

10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26

()

Figure 10This chart presents the accuracy of each command alongthe path (119909-axis) and its selection accuracy to previous correctselection over all subjects

No difference between the time needed for the selectionof the command ldquodrive forwardrdquo in the beginning of theexperiment in comparison to the end of the experiment canbe observed (see Table 5) hence neither fatigue nor practiceseemed to influence performance

The general literacy rate among test subjects of 100indicates that SSVEP-based BCIs can be used by a largerpopulation than previously assumedThis high literacy is dueto the careful selection of individual SSVEP key parameterswhichwere determined by the presented calibration software

41 Limitations The SSVEP-control application has somerestrictions

(i) The wireless connection was implemented with theUDP protocol and some information might thereforenot be received properly

(ii) In order to maximize performance accuracy and userfriendliness the number of simultaneously displayedtargets was restricted to four which resulted in aconsiderably lower ITR

(iii) It should also be noted that for long term use recal-ibration might be inevitable Although the numberof targets is limited to four the SSVEP-based BCIpresented here depends on the userrsquos vision andcontrol over his or her eye movements It thereforehas to compete with other approaches such as P300and other eye tracking systems Visual stimulationwith low frequencies in particular is known to causefatigue

(iv) Moreover subjects in this study may not be repre-sentative of the general population they tended tobe young healthy men Further tests with older andphysically impaired persons are required

5 Conclusions

In the present study the SSVEP-based driving application(robot-GUI) was proven to be successful and robust Thedriving application with live video feedback was tested with61 healthy subjects All participants successfully navigateda mobile semiautonomous robot through a predetermined15-meter-long route BCI key parameters were determinedby the Wizard software No negative influence of the livevideo feedback on the performance was observed Such

Computational Intelligence and Neuroscience 13

applications could be used to control assistive robots butmight also be of interest for healthy subjects In contrast toP300 the SSVEP stimulation has to be sufficiently strong (interms of size or visual angle) to significantly influence theuser A common problem of all BCI studies that also occurredhere is to keep the usermotivated and focused on the taskTheevaluation of the questionnaire shows that for the majorityof subjects the driving task with the SSVEP-based BCI wasnot fatiguing even though lower stimulation frequencieswhich usually cause more fatigue than higher frequencieswere used in the experiment More specifically frequenciesbetween 6 and 12Hz were determined in the calibrationprocedure as they elicited the strongest SSVEP responsesSome subjects stated that as they concentrated more on thecamera video they did not find the flickering annoying at allOur results prove that SSVEP-based personalized adaptiveBCI systems with a low target number can be used as aneveryday assistive technology for remote robotsWe have alsoshown that it is possible to design a four-target interface withextended capabilities such as camera and driving controlThe control application presented here can be used formobile robots as an assistive remote technology as a gameinterface forwheelchair control or even for steering remotelycontrolled drones This study should be considered as a steptowards resolving the problems of semiautonomous mobilerobots controlled with SSVEP-based BCIs In order to furtherimprove this approach the next steps should be an onlineparameter adaptationmechanism for improving the accuracythrough practicing a possibility of turning the stimulationonoff and a two-way video and audio communication fora truly remote telepresence

Competing Interests

The authors declare that the research was conducted in theabsence of any commercial or financial relationships thatcould be construed as a potential conflict of interests

Acknowledgments

The authors would like to thank Thomas Grunenberg fromthe Faculty of Technology and Bionics at Rhine-Waal Uni-versity of Applied Sciences for building programming andmaintaining MRCs The authors would also like to thankthe student assistants Leonie Hillen Francisco Nunez SaadPlatini Anna Catharina Thoma and Paul Fanen Wundufor their help with the study This research was supportedby the German Federal Ministry of Education and Research(BMBF) under Grants nos 16SV6364 and 01DR14014 and theEuropean Fund for Regional Development (EFRD or EFREin Germany) under Grant no GE-1-1-047

References

[1] M Cheney and R Uth Tesla Master of Lightning Barnes ampNoble New York NY USA 1999

[2] W Walter The Living Brain W W Norton amp Company NewYork NY USA 1953

[3] P Flandorfer ldquoPopulation ageing and socially assistive robotsfor elderly persons the importance of sociodemographic fac-tors for user acceptancerdquo International Journal of PopulationResearch vol 2012 Article ID 829835 13 pages 2012

[4] A Nijholt D Tan G Pfurtscheller et al ldquoBrain-computerinterfacing for intelligent systemsrdquo IEEE Intelligent Systems vol23 no 3 pp 72ndash79 2008

[5] J D R Millan R Rupp G R Muller-Putz et al ldquoCombiningbrain-computer interfaces and assistive technologies state-of-the-art and challengesrdquo Frontiers in Neuroscience vol 4 article161 2010

[6] C Brunner N Birbaumer B Blankertz et al ldquoBNCI horizon2020 towards a roadmap for the BCI communityrdquo Brain-Computer Interfaces vol 2 no 1 pp 1ndash10 2015

[7] R Bogue ldquoExoskeletons and robotic prosthetics a review ofrecent developmentsrdquo Industrial Robot vol 36 no 5 pp 421ndash427 2009

[8] R Ron-Angevin F Velasco-Alvarez S Sancha-Ros and L daSilva-Sauer ldquoA two-class self-paced BCI to control a robot infour directionsrdquo IEEE International Conference on Rehabilita-tion Robotics (ICORR rsquo11) vol 2011 Article ID 5975486 2011

[9] L Tonin T Carlson R Leeb and J D R Millan ldquoBrain-controlled telepresence robot by motor-disabled peoplerdquo inProceedings of the Annual International Conference of the IEEEEngineering in Medicine and Biology Society pp 4227ndash4230IEEE Boston Mass USA September 2011

[10] D Huang K Qian D-Y Fei W Jia X Chen and O BaildquoElectroencephalography (EEG)-based brain-computer inter-face (BCI) a 2-D virtual wheelchair control based on event-related desynchronizationsynchronization and state controlrdquoIEEE Transactions on Neural Systems and Rehabilitation Engi-neering vol 20 no 3 pp 379ndash388 2012

[11] A Chella E Pagello E Menegatti et al ldquoA BCI teleoperatedmuseum robotic guiderdquo in Proceedings of the International Con-ference on Complex Intelligent and Software Intensive Systems(CISIS rsquo09) pp 783ndash788 IEEE Fukuoka Japan March 2009

[12] C Escolano J M Antelis and J Minguez ldquoA telepresencemobile robot controlled with a noninvasive brain-computerinterfacerdquo IEEE Transactions on Systems Man and CyberneticsPart B Cybernetics vol 42 no 3 pp 793ndash804 2012

[13] F-B VialatteMMaurice J Dauwels andA Cichocki ldquoSteady-state visually evoked potentials focus on essential paradigmsand future perspectivesrdquo Progress in Neurobiology vol 90 no4 pp 418ndash438 2010

[14] D Regan ldquoSome characteristics of average steady-state andtransient responses evoked by modulated lightrdquo Electroen-cephalography and Clinical Neurophysiology vol 20 no 3 pp238ndash248 1966

[15] Y Frank and F Torres ldquoVisual evoked potentials in the evalua-tion of lsquocortical blindnessrsquo in childrenrdquoAnnals of Neurology vol6 no 2 pp 126ndash129 1979

[16] W IanMcDonald A CompstonG Edan et al ldquoRecommendeddiagnostic criteria for multiple sclerosis guidelines from theinternational panel on the diagnosis of multiple sclerosisrdquoAnnals of Neurology vol 50 no 1 pp 121ndash127 2001

[17] S Gao Y Wang X Gao and B Hong ldquoVisual and auditorybrain-computer interfacesrdquo IEEE Transactions on BiomedicalEngineering vol 61 no 5 pp 1436ndash1447 2014

[18] I Volosyak ldquoSSVEP-based Bremen-BCI interfacemdashboostinginformation transfer ratesrdquo Journal of Neural Engineering vol8 no 3 Article ID 036020 2011

14 Computational Intelligence and Neuroscience

[19] G R Muller-Putz and G Pfurtscheller ldquoControl of an electricalprosthesis with an SSVEP-based BCIrdquo IEEE Transactions onBiomedical Engineering vol 55 no 1 pp 361ndash364 2008

[20] P Martinez H Bakardjian and A Cichocki ldquoFully onlinemulticommand brain-computer interface with visual neuro-feedback using SSVEP paradigmrdquo Computational Intelligenceand Neuroscience vol 2007 Article ID 94561 9 pages 2007

[21] I Volosyak D Valbuena T Luth T Malechka and A GraserldquoBCI demographics II how many (and What Kinds of) peoplecan use a high-frequency SSVEP BCIrdquo IEEE Transactions onNeural Systems and Rehabilitation Engineering vol 19 no 3 pp232ndash239 2011

[22] J Zhao Q Meng W Li M Li and G Chen ldquoSSVEP-basedhierarchical architecture for control of a humanoid robot withmindrdquo in Proceedings of the 11th World Congress on Intelli-gent Control and Automation (WCICA rsquo14) pp 2401ndash2406Shenyang China July 2014

[23] A Guneysu and H Levent Akin ldquoAn SSVEP based BCI tocontrol a humanoid robot by using portable EEG devicerdquo inProceedings of the 35th Annual International Conference of theIEEE Engineering in Medicine and Biology Society (EMBC rsquo13)pp 6905ndash6908 IEEE Osaka Japan July 2013

[24] K Holewa and A Nawrocka ldquoEmotiv EPOC neuroheadset inbrainmdashcomputer interfacerdquo in Proceedings of the 15th Interna-tional Carpathian Control Conference (ICCC rsquo14) pp 149ndash152IEEE Velke Karlovice Czech Republic May 2014

[25] N-S Kwak K-R Muller and S-W Lee ldquoToward exoskeletoncontrol based on steady state visual evoked potentialsrdquo inProceedings of the International Winter Workshop on Brain-Computer Interface (BCI rsquo14) pp 1ndash2 Gangwon South KoreaFebruary 2014

[26] G Bin X Gao Y Wang B Hong and S Gao ldquoVEP-basedbrain-computer interfaces time frequency and code modula-tionsrdquo IEEE Computational Intelligence Magazine vol 4 no 4pp 22ndash26 2009

[27] C Kapeller C Hintermuller M Abu-Alqumsan R Pruckl APeer and C Guger ldquoA BCI using VEP for continuous control ofa mobile robotrdquo in Proceedings of the 35th Annual InternationalConference of the IEEE Engineering in Medicine and BiologySociety (EMBC rsquo13) pp 5254ndash5257 IEEE Osaka Japan July2013

[28] H Wang T Li and Z Huang ldquoRemote control of an electricalcar with SSVEP-based BCIrdquo in Proceedings of the IEEE Interna-tional Conference on Information Theory and Information Secu-rity (ICITIS rsquo10) pp 837ndash840 IEEE Beijing China December2010

[29] Y Wang Y-T Wang and T-P Jung ldquoVisual stimulus designfor high-rate SSVEP BCIrdquo Electronics Letters vol 46 no 15 pp1057ndash1058 2010

[30] P Gergondet D Petit and A Kheddar ldquoSteering a robot witha brain-computer interface impact of video feedback on BCIperformancerdquo in Proceedings of the 21st IEEE InternationalSymposium on Robot and Human Interactive Communication(RO-MAN rsquo12) pp 271ndash276 Paris France September 2012

[31] P F Diez V A Mut E Laciar and E M Avila Perona ldquoMobilerobot navigation with a self-paced brain-computer interfacebased on high-frequency SSVEPrdquo Robotica vol 32 no 5 pp695ndash709 2014

[32] B Choi and S Jo ldquoA low-cost EEG system-based hybridbrain-computer interface for humanoid robot navigation andrecognitionrdquo PLoS ONE vol 8 no 9 Article ID e74583 2013

[33] E Tidoni P Gergondet A Kheddar and S M Aglioti ldquoAudio-visual feedback improves the BCI performance in the naviga-tional control of a humanoid robotrdquo Frontiers in Neuroroboticsvol 8 article 20 pp 1ndash8 2014

[34] J Zhao W Li and M Li ldquoComparative study of SSVEP- andP300-based models for the telepresence control of humanoidrobotsrdquo PLoS ONE vol 10 no 11 Article ID e0142168 2015

[35] T M Rutkowski K Shimizu T Kodama P Jurica and ACichocki ldquoBrain-robot interfaces using spatial tactile BCIparadigmsrdquo in Symbiotic Interaction pp 132ndash137 Springer NewYork NY USA 2015

[36] C Brunner B Z Allison D J Krusienski et al ldquoImprovedsignal processing approaches in an offline simulation of a hybridbrain-computer interfacerdquo Journal of NeuroscienceMethods vol188 no 1 pp 165ndash173 2010

[37] F Gembler P Stawicki and I Volosyak ldquoAutonomous parame-ter adjustment for SSVEP-based BCIs with a novel BCI wizardrdquoFrontiers in Neuroscience vol 9 article 474 pp 1ndash12 2015

[38] E W Sellers T M Vaughan and J R Wolpaw ldquoA brain-computer interface for long-term independent home userdquoAmyotrophic Lateral Sclerosis vol 11 no 5 pp 449ndash455 2010

[39] E M Holz L Botrel and A Kubler ldquoBridging gaps long-term independent BCI home-use by a locked-in end-userrdquo inProceedings of the TOBI Workshop IV Sion Switzerland 2013

[40] O Friman I Volosyak and A Graser ldquoMultiple channel detec-tion of steady-state visual evoked potentials for brain-computerinterfacesrdquo IEEE Transactions on Biomedical Engineering vol54 no 4 pp 742ndash750 2007

[41] I Volosyak T Malechka D Valbuena and A Graser ldquoAnovel calibration method for SSVEP based brain-computerinterfacesrdquo in Proceedings of the 18th European Signal ProcessingConference (EUSIPCO rsquo10) pp 939ndash943 Aalborg DenmarkAugust 2010

[42] I Volosyak H Cecotti and A Graser ldquoOptimal visual stimulion LCD screens for SSVEP based brain-computer interfacesrdquoin Proceedings of the 4th International IEEEEMBS Conferenceon Neural Engineering (NER rsquo09) pp 447ndash450 IEEE AntalyaTurkey May 2009

[43] J R Wolpaw N Birbaumer D J McFarland G Pfurtschellerand T M Vaughan ldquoBrain-computer interfaces for communi-cation and controlrdquo Clinical Neurophysiology vol 113 no 6 pp767ndash791 2002

Page 9: Driving a Semiautonomous Mobile Robotic Car Controlled by ... · Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI ... (frequencycoding,methodoriginallyproposed

Computational Intelligence and Neuroscience 9

Figure 9This chart presents exemplarily the BCI performance for subject 59 (optimal path with 100 accuracy)The top row shows the rawEEG signal acquired from channel119874

119911(one of eight EEG electrodes)The remaining graphs show the SNR values during the whole experiment

(dashed line represents the threshold for each command) for the four commands forwardup turn left turn right and cameradrive thestimulation frequencies for those commands were 667 750 706 and 632Hz respectively The 119909-axis the same for all graphs representsthe total duration of the experiment (150 seconds) The marking above each graph represents the exact time of each classification circlednumbers below the 119909-axis correspond to the task actions

3 Results

The results of the driving performance are shown in Table 3All 61 subjects were able to control the BCI application andcompleted the entire driving taskThe overall mean (SD) ITRof the driving performance was 1407 (444) bitsmin Themean (SD) accuracy was 9714 (373) Except for one allsubjects reached accuracies above 80 40 subjects reachedaccuracies above 90 and nineteen of them even completedthe steering task without any errorsThemean (SD) accuracyof each action (commands required to complete the task) was

9614 (002) In the prequestionnaire subjects answered thequestions regarding gender the need for vision correctiontiredness and BCI experience as displayed in Table 2Eighteen female subjects (30) with an average (SD) ageof 2267 (430) years and 43 male subjects with an average(SD) age of 2260 (532) years participated in the experimentThe influence of gender was also tested 18 female subjectsachieved a mean (SD) accuracy and ITR of 9324 (798)and 1405 (573) bitsmin respectively and 43 male achieveda mean (SD) accuracy and ITR of 9295 (659) and 1407(508) bitsmin respectively A one-way ANOVA showed no

10 Computational Intelligence and Neuroscience

Table 2 Questionnaire results The table presents the answers collected from the prequestionnaire The figures show the number of respon-dents or the mean (SD) value

Age Gender Visioncorrection Tiredness Length of sleep

last nightExperiencewith BCIs

Years Male Female Yes No 1 nottired

2 littletired

3 moderatelytired 4 tired 5 very

tired Hours Yes No

2262(500) 43 18 22 39 18 23 17 3 0 675 (122) 6 55

statistically significant difference in accuracy 119865(1 59) =

0022 119901 = 0883 and ITR 119865(1 59) = 00003 119901 = 0987between those two groupsThe influence of vision correctionon accuracy and ITR was also tested A total of 22 subjectsrequired vision correction in the form of glasses or contactlenses (as they stated in the questionnaire) They achieveda mean (SD) accuracy and ITR of 9156 (636) and 1278(402) bitsmin respectively These results were compared tosubjects who did not require vision correction and achieveda mean (SD) accuracy and ITR of 9392 (535) and 1484(466) bitsmin respectively A one-way ANOVA showedno statistically significant influence of vision correction onaccuracy 119865(1 59) = 1664 119901 = 0202 There was also nosignificant effect of vision correction on ITR119865(1 59) = 2263119901 = 0138 An evaluation of tiredness was also calculated onthe basis of the answers given in a prequestionnaire (scalefrom 1 to 5) Results of the prequestionnaire are shown inTable 2 A one-way ANOVA revealed no significant effect oftiredness on accuracy 119865(4 56) = 0335 119901 = 0853 and ITR119865(4 56) = 0067 119901 = 0991

4 Discussion

In the results presented the overall accuracy was calculatedfor all commands in drive mode and in camera modewithout distinguishing between them since four commandsfor SSVEP classification were always present Even thoughdifferent commands were sent the command classificationremained the same A comparison of this study to otherpublications on steering a robot with SSVEP-based BCI ispresented in Table 4

In order to test the influence of video overlay we com-pared the accuracy results with a demographic study [21] inwhich 86 subjects achieved a mean (SD) accuracy of 9226(782) while driving a small robot through a labyrinth witha four-class SSVEP-based BCI An unpaired 119905-test showed nostatistically significant difference 119905(145) = minus0089 119901 = 0397compared to the present values This shows that the videooverlay did not negatively influence the BCI performanceGergondet et al listed three factors with respect to howthe live video feedback in the background can influence theSSVEP stimulation distraction due to what is happening onthe video distraction due to what is happening around theuser and variation in luminosity below the stimuli shownto the user [30] Kapeller et al showed that there is a slightdecrease in performance for the underlying video sequencedue to a distractive environment and small targets used [27]

Contrary to the findings reported by Gergondet et al andKapeller et al the dynamic background did not noticeablyaffect the BCI performance (unpaired 119905-test result) Weassume that this was due to the fact that stimulation frequen-cies and minimal time segment classification windows wereadjusted individually and that relatively large stimulationboxes were used (visual angle)

The high accuracies of the SSVEP-based BCI systemsupport the hypothesis that BCI can be used to control acomplex driving application In comparison to the study byDiez et al [31] the MRC self-driving behavior was limitedto a preset number of turning points and a predefined pathAlthough the robot presented by Diez et al could avoidobstacles by itself (limited algorithm reaction) it decreased itsspeed for trajectory adjustment and in some cases stopped toavoid collision and further user commands were necessaryWhile Diez et al reported a mean time of 23320 seconds forthe second destination (139-meter distance) in this study theachieved mean time was 20708 seconds (15-meter distance)As pointed out by Diez et al the overall time for completingthe task depends not only on the robot but also on thesubjectrsquos performance controlling the BCI system [31] In ourstudy the high accuracies of the BCI system were achieveddue to the key parameter adjustment carried out by theWizard software

Themain difference between the robot used by Diez et aland the MRC presented in this study is the extended controlpossibility of the interface that also allows for controlling thecamera Both robots were designed for different purposesDiez et al mainly designed their robot for driving frompoint A to point B without any additional interaction withthe environment as they planned to use the system forwheelchair control in the future By contrast in our designthe MRC itself is the interaction device Equipped with adisplay and a two-way audio system it might serve as aremote telepresence in the future

The SSVEP paradigm allows for a robust implementationof four driving commands Compared to robots constructedfor the P300 BCI approach where the user faces largestimulation matrices spread across the entire screen thenumber of simultaneously displayed targets here is only fourand hence the load on the visual channel is considerably lowerin comparison Therefore the user might find it easier toconcentrate on the driving task On the other hand the lessernumber of command options reduces freedom and speedHowever a shared-control paradigm allows for complex tasksto be executed with minimal effort The command ldquodrive

Computational Intelligence and Neuroscience 11

Table 3 Results of the driving performance Total time each subjectrequired for steering the MRC through the route (15 meters and10 turns including camera movements) 26 correct commands (3commands in camera mode and 23 commands in drive mode) werenecessary to complete the task Accuracy (Acc) is the number ofcorrect commands (26) divided by the total number of commands(119862119899) Mean values are given at the bottom of the table

Subject Time Acc ITR119862119899[s] [] [bitsmin]

1 30560 8667 720 302 28346 8966 832 293 28407 8125 680 324 28011 8125 690 325 25259 8125 765 326 19002 9286 1340 287 32206 9630 862 278 31007 7027 466 379 15813 9286 1610 2810 37182 8966 635 2911 24172 8667 910 3012 15133 9286 1683 2813 15549 10000 2007 2614 21054 9630 1318 2715 20841 10000 1497 2616 24162 8387 852 3117 24172 8667 910 3018 14513 9630 1912 2719 16748 9630 1657 2720 22080 9286 1153 2821 19419 9630 1429 2722 43154 9286 590 2823 16352 10000 1908 2624 17184 8667 1280 3025 24812 8387 830 3126 13193 10000 2365 2627 16138 10000 1933 2628 18475 9630 1502 2729 14422 10000 2163 2630 16748 9286 1520 2831 21663 9286 1175 2832 35516 8125 544 3233 24406 8667 901 3034 15214 10000 2051 2635 19561 9630 1418 2736 23948 8387 860 3137 16636 8966 1418 2938 17245 10000 1809 2639 21897 8387 940 3140 16565 8966 1425 2941 14666 10000 2127 2642 18627 8966 1267 29

Table 3 Continued

Subject Time Acc ITR119862119899[s] [] [bitsmin]

43 22131 9286 1151 2844 25929 8667 848 3045 15702 10000 1987 2646 14788 9630 1876 2747 18718 9630 1482 2748 14554 10000 2144 2649 16057 9630 1728 2750 20495 10000 1522 2651 22811 9630 1216 2752 16037 10000 1946 2653 23664 8125 817 3254 14889 10000 2095 2655 15011 10000 2078 2656 13985 10000 2231 2657 14716 10000 2120 2658 24273 10000 1285 2659 15001 10000 2080 2660 16463 9630 1685 2761 17885 9630 1551 27Mean 20708 9303 1407 2811SD 5025 573 444 182Min 13193 7027 466 2600Max 43154 10000 2365 3700

forwardrdquo starts an autopilot program that guides the robotthrough a straight distance until it reaches a predefinedstopping point

While the MRC executed the autopilot program andfollowed the line for some time subjects purposely ignoredthe flickering targets and concentrated on the video feedbackinstead As these waiting periods were added up to theactual classification times to each command that followed anautopilot command (ldquoforwardrdquo) the ITR values are consid-erably low From the researcherrsquos point of view a numberof errors could have been avoided if some subjects wouldhave paid more attention to the live feedback Especially inthe task of reading the hidden number subjects were tryingto drive along the path without looking at the text messagethat appeared on the live feedback After realizing that therobot was not turning they looked at the video and saw thetext message instructing them to look up This is visible inthe analysis of the accuracy of the robots commands (seeFigure 10) The lowest accuracy of 8749 was for actionldquochange view to the camera moderdquo (robot command 14) thismay be caused by the fact that the subjects got used to theactions ldquoturn leftrdquoldquoturn rightrdquo and ldquodrive forwardrdquo whichthey repeated six times from the starting point (see Figures10 and 9) This could easily be avoided if the driving pathswere designedwith evenly distributed tasks (eg live scenariotasks such as reading a hidden number should occur moreoften)

12 Computational Intelligence and Neuroscience

Table 4 Comparison of BCI systems for robot navigation

Publication Subjects Classes Literacy rate [] Mean accuracy [] Mean ITR [bitsmin] Robot typeVolosyak et al [21] 86 4 98 9226 1724 E-puckZhao et al1 [34] 7 4 100 903 247 NAOKapeller et al2 [27] 11 4 91 9136 NA E-puckDiez et al3 [31] 7 4 100 8880lowast NA Pioneer 3-DXHolewa and Nawrocka [24] 3 4 100 7375 1136 Lego MindstormChoi and Jo1 [32] 5 2 100 844 1140 NAOThis study 61 4 100 9303 1407 MRC1Results for SSVEP paradigm2f-VEP results for 4 classes without zero class3Results for destination 1 lowastcorrect(correct + wrong) commands

Table 5 Distribution of the time blocks needed for selection of the first three commands ldquodrive forwardrdquo and the last three commandsldquoforwardrdquoldquolook uprdquo over all subjects

Distribution []Time windows 119879

119878in EEG data blocks

8 10 15 20 30 40 50 60 to 160A 656 1803 1803 2295 1148 656 820 820C 820 2295 820 1967 1148 820 492 1639E 984 1311 820 1967 656 984 820 2459Mean 820 1803 1148 2077 984 820 710 1639œ 984 1803 1475 1803 984 328 656 1967X 1148 1639 1639 1475 1311 984 164 1639Y 1148 1475 1148 2295 656 656 492 2131Mean 1093 1639 1421 1858 984 656 437 1913

Accuracy of each command of the robotrsquos path

1 2 3 4 5 6 7 8 9

Commands

10080604020

0

10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26

()

Figure 10This chart presents the accuracy of each command alongthe path (119909-axis) and its selection accuracy to previous correctselection over all subjects

No difference between the time needed for the selectionof the command ldquodrive forwardrdquo in the beginning of theexperiment in comparison to the end of the experiment canbe observed (see Table 5) hence neither fatigue nor practiceseemed to influence performance

The general literacy rate among test subjects of 100indicates that SSVEP-based BCIs can be used by a largerpopulation than previously assumedThis high literacy is dueto the careful selection of individual SSVEP key parameterswhichwere determined by the presented calibration software

41 Limitations The SSVEP-control application has somerestrictions

(i) The wireless connection was implemented with theUDP protocol and some information might thereforenot be received properly

(ii) In order to maximize performance accuracy and userfriendliness the number of simultaneously displayedtargets was restricted to four which resulted in aconsiderably lower ITR

(iii) It should also be noted that for long term use recal-ibration might be inevitable Although the numberof targets is limited to four the SSVEP-based BCIpresented here depends on the userrsquos vision andcontrol over his or her eye movements It thereforehas to compete with other approaches such as P300and other eye tracking systems Visual stimulationwith low frequencies in particular is known to causefatigue

(iv) Moreover subjects in this study may not be repre-sentative of the general population they tended tobe young healthy men Further tests with older andphysically impaired persons are required

5 Conclusions

In the present study the SSVEP-based driving application(robot-GUI) was proven to be successful and robust Thedriving application with live video feedback was tested with61 healthy subjects All participants successfully navigateda mobile semiautonomous robot through a predetermined15-meter-long route BCI key parameters were determinedby the Wizard software No negative influence of the livevideo feedback on the performance was observed Such

Computational Intelligence and Neuroscience 13

applications could be used to control assistive robots butmight also be of interest for healthy subjects In contrast toP300 the SSVEP stimulation has to be sufficiently strong (interms of size or visual angle) to significantly influence theuser A common problem of all BCI studies that also occurredhere is to keep the usermotivated and focused on the taskTheevaluation of the questionnaire shows that for the majorityof subjects the driving task with the SSVEP-based BCI wasnot fatiguing even though lower stimulation frequencieswhich usually cause more fatigue than higher frequencieswere used in the experiment More specifically frequenciesbetween 6 and 12Hz were determined in the calibrationprocedure as they elicited the strongest SSVEP responsesSome subjects stated that as they concentrated more on thecamera video they did not find the flickering annoying at allOur results prove that SSVEP-based personalized adaptiveBCI systems with a low target number can be used as aneveryday assistive technology for remote robotsWe have alsoshown that it is possible to design a four-target interface withextended capabilities such as camera and driving controlThe control application presented here can be used formobile robots as an assistive remote technology as a gameinterface forwheelchair control or even for steering remotelycontrolled drones This study should be considered as a steptowards resolving the problems of semiautonomous mobilerobots controlled with SSVEP-based BCIs In order to furtherimprove this approach the next steps should be an onlineparameter adaptationmechanism for improving the accuracythrough practicing a possibility of turning the stimulationonoff and a two-way video and audio communication fora truly remote telepresence

Competing Interests

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interests.

Acknowledgments

The authors would like to thank Thomas Grunenberg from the Faculty of Technology and Bionics at Rhine-Waal University of Applied Sciences for building, programming, and maintaining the MRCs. The authors would also like to thank the student assistants Leonie Hillen, Francisco Nunez, Saad Platini, Anna Catharina Thoma, and Paul Fanen Wundu for their help with the study. This research was supported by the German Federal Ministry of Education and Research (BMBF) under Grants nos. 16SV6364 and 01DR14014 and by the European Fund for Regional Development (EFRD, or EFRE in Germany) under Grant no. GE-1-1-047.

References

[1] M. Cheney and R. Uth, Tesla: Master of Lightning, Barnes & Noble, New York, NY, USA, 1999.
[2] W. Walter, The Living Brain, W. W. Norton & Company, New York, NY, USA, 1953.
[3] P. Flandorfer, "Population ageing and socially assistive robots for elderly persons: the importance of sociodemographic factors for user acceptance," International Journal of Population Research, vol. 2012, Article ID 829835, 13 pages, 2012.
[4] A. Nijholt, D. Tan, G. Pfurtscheller et al., "Brain-computer interfacing for intelligent systems," IEEE Intelligent Systems, vol. 23, no. 3, pp. 72–79, 2008.
[5] J. D. R. Millan, R. Rupp, G. R. Muller-Putz et al., "Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges," Frontiers in Neuroscience, vol. 4, article 161, 2010.
[6] C. Brunner, N. Birbaumer, B. Blankertz et al., "BNCI horizon 2020: towards a roadmap for the BCI community," Brain-Computer Interfaces, vol. 2, no. 1, pp. 1–10, 2015.
[7] R. Bogue, "Exoskeletons and robotic prosthetics: a review of recent developments," Industrial Robot, vol. 36, no. 5, pp. 421–427, 2009.
[8] R. Ron-Angevin, F. Velasco-Alvarez, S. Sancha-Ros, and L. da Silva-Sauer, "A two-class self-paced BCI to control a robot in four directions," in IEEE International Conference on Rehabilitation Robotics (ICORR '11), vol. 2011, Article ID 5975486, 2011.
[9] L. Tonin, T. Carlson, R. Leeb, and J. D. R. Millan, "Brain-controlled telepresence robot by motor-disabled people," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4227–4230, IEEE, Boston, Mass, USA, September 2011.
[10] D. Huang, K. Qian, D.-Y. Fei, W. Jia, X. Chen, and O. Bai, "Electroencephalography (EEG)-based brain-computer interface (BCI): a 2-D virtual wheelchair control based on event-related desynchronization/synchronization and state control," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 20, no. 3, pp. 379–388, 2012.
[11] A. Chella, E. Pagello, E. Menegatti et al., "A BCI teleoperated museum robotic guide," in Proceedings of the International Conference on Complex, Intelligent and Software Intensive Systems (CISIS '09), pp. 783–788, IEEE, Fukuoka, Japan, March 2009.
[12] C. Escolano, J. M. Antelis, and J. Minguez, "A telepresence mobile robot controlled with a noninvasive brain-computer interface," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 3, pp. 793–804, 2012.
[13] F.-B. Vialatte, M. Maurice, J. Dauwels, and A. Cichocki, "Steady-state visually evoked potentials: focus on essential paradigms and future perspectives," Progress in Neurobiology, vol. 90, no. 4, pp. 418–438, 2010.
[14] D. Regan, "Some characteristics of average steady-state and transient responses evoked by modulated light," Electroencephalography and Clinical Neurophysiology, vol. 20, no. 3, pp. 238–248, 1966.
[15] Y. Frank and F. Torres, "Visual evoked potentials in the evaluation of 'cortical blindness' in children," Annals of Neurology, vol. 6, no. 2, pp. 126–129, 1979.
[16] W. Ian McDonald, A. Compston, G. Edan et al., "Recommended diagnostic criteria for multiple sclerosis: guidelines from the international panel on the diagnosis of multiple sclerosis," Annals of Neurology, vol. 50, no. 1, pp. 121–127, 2001.
[17] S. Gao, Y. Wang, X. Gao, and B. Hong, "Visual and auditory brain-computer interfaces," IEEE Transactions on Biomedical Engineering, vol. 61, no. 5, pp. 1436–1447, 2014.
[18] I. Volosyak, "SSVEP-based Bremen-BCI interface – boosting information transfer rates," Journal of Neural Engineering, vol. 8, no. 3, Article ID 036020, 2011.


[19] G. R. Muller-Putz and G. Pfurtscheller, "Control of an electrical prosthesis with an SSVEP-based BCI," IEEE Transactions on Biomedical Engineering, vol. 55, no. 1, pp. 361–364, 2008.
[20] P. Martinez, H. Bakardjian, and A. Cichocki, "Fully online multicommand brain-computer interface with visual neurofeedback using SSVEP paradigm," Computational Intelligence and Neuroscience, vol. 2007, Article ID 94561, 9 pages, 2007.
[21] I. Volosyak, D. Valbuena, T. Luth, T. Malechka, and A. Graser, "BCI demographics II: how many (and what kinds of) people can use a high-frequency SSVEP BCI?" IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 19, no. 3, pp. 232–239, 2011.
[22] J. Zhao, Q. Meng, W. Li, M. Li, and G. Chen, "SSVEP-based hierarchical architecture for control of a humanoid robot with mind," in Proceedings of the 11th World Congress on Intelligent Control and Automation (WCICA '14), pp. 2401–2406, Shenyang, China, July 2014.
[23] A. Guneysu and H. Levent Akin, "An SSVEP based BCI to control a humanoid robot by using portable EEG device," in Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '13), pp. 6905–6908, IEEE, Osaka, Japan, July 2013.
[24] K. Holewa and A. Nawrocka, "Emotiv EPOC neuroheadset in brain-computer interface," in Proceedings of the 15th International Carpathian Control Conference (ICCC '14), pp. 149–152, IEEE, Velke Karlovice, Czech Republic, May 2014.
[25] N.-S. Kwak, K.-R. Muller, and S.-W. Lee, "Toward exoskeleton control based on steady state visual evoked potentials," in Proceedings of the International Winter Workshop on Brain-Computer Interface (BCI '14), pp. 1–2, Gangwon, South Korea, February 2014.
[26] G. Bin, X. Gao, Y. Wang, B. Hong, and S. Gao, "VEP-based brain-computer interfaces: time, frequency, and code modulations," IEEE Computational Intelligence Magazine, vol. 4, no. 4, pp. 22–26, 2009.
[27] C. Kapeller, C. Hintermuller, M. Abu-Alqumsan, R. Pruckl, A. Peer, and C. Guger, "A BCI using VEP for continuous control of a mobile robot," in Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '13), pp. 5254–5257, IEEE, Osaka, Japan, July 2013.
[28] H. Wang, T. Li, and Z. Huang, "Remote control of an electrical car with SSVEP-based BCI," in Proceedings of the IEEE International Conference on Information Theory and Information Security (ICITIS '10), pp. 837–840, IEEE, Beijing, China, December 2010.
[29] Y. Wang, Y.-T. Wang, and T.-P. Jung, "Visual stimulus design for high-rate SSVEP BCI," Electronics Letters, vol. 46, no. 15, pp. 1057–1058, 2010.
[30] P. Gergondet, D. Petit, and A. Kheddar, "Steering a robot with a brain-computer interface: impact of video feedback on BCI performance," in Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN '12), pp. 271–276, Paris, France, September 2012.
[31] P. F. Diez, V. A. Mut, E. Laciar, and E. M. Avila Perona, "Mobile robot navigation with a self-paced brain-computer interface based on high-frequency SSVEP," Robotica, vol. 32, no. 5, pp. 695–709, 2014.
[32] B. Choi and S. Jo, "A low-cost EEG system-based hybrid brain-computer interface for humanoid robot navigation and recognition," PLoS ONE, vol. 8, no. 9, Article ID e74583, 2013.
[33] E. Tidoni, P. Gergondet, A. Kheddar, and S. M. Aglioti, "Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot," Frontiers in Neurorobotics, vol. 8, article 20, pp. 1–8, 2014.
[34] J. Zhao, W. Li, and M. Li, "Comparative study of SSVEP- and P300-based models for the telepresence control of humanoid robots," PLoS ONE, vol. 10, no. 11, Article ID e0142168, 2015.
[35] T. M. Rutkowski, K. Shimizu, T. Kodama, P. Jurica, and A. Cichocki, "Brain-robot interfaces using spatial tactile BCI paradigms," in Symbiotic Interaction, pp. 132–137, Springer, New York, NY, USA, 2015.
[36] C. Brunner, B. Z. Allison, D. J. Krusienski et al., "Improved signal processing approaches in an offline simulation of a hybrid brain-computer interface," Journal of Neuroscience Methods, vol. 188, no. 1, pp. 165–173, 2010.
[37] F. Gembler, P. Stawicki, and I. Volosyak, "Autonomous parameter adjustment for SSVEP-based BCIs with a novel BCI wizard," Frontiers in Neuroscience, vol. 9, article 474, pp. 1–12, 2015.
[38] E. W. Sellers, T. M. Vaughan, and J. R. Wolpaw, "A brain-computer interface for long-term independent home use," Amyotrophic Lateral Sclerosis, vol. 11, no. 5, pp. 449–455, 2010.
[39] E. M. Holz, L. Botrel, and A. Kubler, "Bridging gaps: long-term independent BCI home-use by a locked-in end-user," in Proceedings of the TOBI Workshop IV, Sion, Switzerland, 2013.
[40] O. Friman, I. Volosyak, and A. Graser, "Multiple channel detection of steady-state visual evoked potentials for brain-computer interfaces," IEEE Transactions on Biomedical Engineering, vol. 54, no. 4, pp. 742–750, 2007.
[41] I. Volosyak, T. Malechka, D. Valbuena, and A. Graser, "A novel calibration method for SSVEP based brain-computer interfaces," in Proceedings of the 18th European Signal Processing Conference (EUSIPCO '10), pp. 939–943, Aalborg, Denmark, August 2010.
[42] I. Volosyak, H. Cecotti, and A. Graser, "Optimal visual stimuli on LCD screens for SSVEP based brain-computer interfaces," in Proceedings of the 4th International IEEE/EMBS Conference on Neural Engineering (NER '09), pp. 447–450, IEEE, Antalya, Turkey, May 2009.
[43] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control," Clinical Neurophysiology, vol. 113, no. 6, pp. 767–791, 2002.

Page 10: Driving a Semiautonomous Mobile Robotic Car Controlled by ... · Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI ... (frequencycoding,methodoriginallyproposed

10 Computational Intelligence and Neuroscience

Table 2 Questionnaire results The table presents the answers collected from the prequestionnaire The figures show the number of respon-dents or the mean (SD) value

Age Gender Visioncorrection Tiredness Length of sleep

last nightExperiencewith BCIs

Years Male Female Yes No 1 nottired

2 littletired

3 moderatelytired 4 tired 5 very

tired Hours Yes No

2262(500) 43 18 22 39 18 23 17 3 0 675 (122) 6 55

statistically significant difference in accuracy 119865(1 59) =

0022 119901 = 0883 and ITR 119865(1 59) = 00003 119901 = 0987between those two groupsThe influence of vision correctionon accuracy and ITR was also tested A total of 22 subjectsrequired vision correction in the form of glasses or contactlenses (as they stated in the questionnaire) They achieveda mean (SD) accuracy and ITR of 9156 (636) and 1278(402) bitsmin respectively These results were compared tosubjects who did not require vision correction and achieveda mean (SD) accuracy and ITR of 9392 (535) and 1484(466) bitsmin respectively A one-way ANOVA showedno statistically significant influence of vision correction onaccuracy 119865(1 59) = 1664 119901 = 0202 There was also nosignificant effect of vision correction on ITR119865(1 59) = 2263119901 = 0138 An evaluation of tiredness was also calculated onthe basis of the answers given in a prequestionnaire (scalefrom 1 to 5) Results of the prequestionnaire are shown inTable 2 A one-way ANOVA revealed no significant effect oftiredness on accuracy 119865(4 56) = 0335 119901 = 0853 and ITR119865(4 56) = 0067 119901 = 0991

4 Discussion

In the results presented the overall accuracy was calculatedfor all commands in drive mode and in camera modewithout distinguishing between them since four commandsfor SSVEP classification were always present Even thoughdifferent commands were sent the command classificationremained the same A comparison of this study to otherpublications on steering a robot with SSVEP-based BCI ispresented in Table 4

In order to test the influence of video overlay we com-pared the accuracy results with a demographic study [21] inwhich 86 subjects achieved a mean (SD) accuracy of 9226(782) while driving a small robot through a labyrinth witha four-class SSVEP-based BCI An unpaired 119905-test showed nostatistically significant difference 119905(145) = minus0089 119901 = 0397compared to the present values This shows that the videooverlay did not negatively influence the BCI performanceGergondet et al listed three factors with respect to howthe live video feedback in the background can influence theSSVEP stimulation distraction due to what is happening onthe video distraction due to what is happening around theuser and variation in luminosity below the stimuli shownto the user [30] Kapeller et al showed that there is a slightdecrease in performance for the underlying video sequencedue to a distractive environment and small targets used [27]

Contrary to the findings reported by Gergondet et al andKapeller et al the dynamic background did not noticeablyaffect the BCI performance (unpaired 119905-test result) Weassume that this was due to the fact that stimulation frequen-cies and minimal time segment classification windows wereadjusted individually and that relatively large stimulationboxes were used (visual angle)

The high accuracies of the SSVEP-based BCI systemsupport the hypothesis that BCI can be used to control acomplex driving application In comparison to the study byDiez et al [31] the MRC self-driving behavior was limitedto a preset number of turning points and a predefined pathAlthough the robot presented by Diez et al could avoidobstacles by itself (limited algorithm reaction) it decreased itsspeed for trajectory adjustment and in some cases stopped toavoid collision and further user commands were necessaryWhile Diez et al reported a mean time of 23320 seconds forthe second destination (139-meter distance) in this study theachieved mean time was 20708 seconds (15-meter distance)As pointed out by Diez et al the overall time for completingthe task depends not only on the robot but also on thesubjectrsquos performance controlling the BCI system [31] In ourstudy the high accuracies of the BCI system were achieveddue to the key parameter adjustment carried out by theWizard software

Themain difference between the robot used by Diez et aland the MRC presented in this study is the extended controlpossibility of the interface that also allows for controlling thecamera Both robots were designed for different purposesDiez et al mainly designed their robot for driving frompoint A to point B without any additional interaction withthe environment as they planned to use the system forwheelchair control in the future By contrast in our designthe MRC itself is the interaction device Equipped with adisplay and a two-way audio system it might serve as aremote telepresence in the future

The SSVEP paradigm allows for a robust implementationof four driving commands Compared to robots constructedfor the P300 BCI approach where the user faces largestimulation matrices spread across the entire screen thenumber of simultaneously displayed targets here is only fourand hence the load on the visual channel is considerably lowerin comparison Therefore the user might find it easier toconcentrate on the driving task On the other hand the lessernumber of command options reduces freedom and speedHowever a shared-control paradigm allows for complex tasksto be executed with minimal effort The command ldquodrive

Computational Intelligence and Neuroscience 11

Table 3 Results of the driving performance Total time each subjectrequired for steering the MRC through the route (15 meters and10 turns including camera movements) 26 correct commands (3commands in camera mode and 23 commands in drive mode) werenecessary to complete the task Accuracy (Acc) is the number ofcorrect commands (26) divided by the total number of commands(119862119899) Mean values are given at the bottom of the table

Subject Time Acc ITR119862119899[s] [] [bitsmin]

1 30560 8667 720 302 28346 8966 832 293 28407 8125 680 324 28011 8125 690 325 25259 8125 765 326 19002 9286 1340 287 32206 9630 862 278 31007 7027 466 379 15813 9286 1610 2810 37182 8966 635 2911 24172 8667 910 3012 15133 9286 1683 2813 15549 10000 2007 2614 21054 9630 1318 2715 20841 10000 1497 2616 24162 8387 852 3117 24172 8667 910 3018 14513 9630 1912 2719 16748 9630 1657 2720 22080 9286 1153 2821 19419 9630 1429 2722 43154 9286 590 2823 16352 10000 1908 2624 17184 8667 1280 3025 24812 8387 830 3126 13193 10000 2365 2627 16138 10000 1933 2628 18475 9630 1502 2729 14422 10000 2163 2630 16748 9286 1520 2831 21663 9286 1175 2832 35516 8125 544 3233 24406 8667 901 3034 15214 10000 2051 2635 19561 9630 1418 2736 23948 8387 860 3137 16636 8966 1418 2938 17245 10000 1809 2639 21897 8387 940 3140 16565 8966 1425 2941 14666 10000 2127 2642 18627 8966 1267 29

Table 3 Continued

Subject Time Acc ITR119862119899[s] [] [bitsmin]

43 22131 9286 1151 2844 25929 8667 848 3045 15702 10000 1987 2646 14788 9630 1876 2747 18718 9630 1482 2748 14554 10000 2144 2649 16057 9630 1728 2750 20495 10000 1522 2651 22811 9630 1216 2752 16037 10000 1946 2653 23664 8125 817 3254 14889 10000 2095 2655 15011 10000 2078 2656 13985 10000 2231 2657 14716 10000 2120 2658 24273 10000 1285 2659 15001 10000 2080 2660 16463 9630 1685 2761 17885 9630 1551 27Mean 20708 9303 1407 2811SD 5025 573 444 182Min 13193 7027 466 2600Max 43154 10000 2365 3700

forwardrdquo starts an autopilot program that guides the robotthrough a straight distance until it reaches a predefinedstopping point

While the MRC executed the autopilot program andfollowed the line for some time subjects purposely ignoredthe flickering targets and concentrated on the video feedbackinstead As these waiting periods were added up to theactual classification times to each command that followed anautopilot command (ldquoforwardrdquo) the ITR values are consid-erably low From the researcherrsquos point of view a numberof errors could have been avoided if some subjects wouldhave paid more attention to the live feedback Especially inthe task of reading the hidden number subjects were tryingto drive along the path without looking at the text messagethat appeared on the live feedback After realizing that therobot was not turning they looked at the video and saw thetext message instructing them to look up This is visible inthe analysis of the accuracy of the robots commands (seeFigure 10) The lowest accuracy of 8749 was for actionldquochange view to the camera moderdquo (robot command 14) thismay be caused by the fact that the subjects got used to theactions ldquoturn leftrdquoldquoturn rightrdquo and ldquodrive forwardrdquo whichthey repeated six times from the starting point (see Figures10 and 9) This could easily be avoided if the driving pathswere designedwith evenly distributed tasks (eg live scenariotasks such as reading a hidden number should occur moreoften)

12 Computational Intelligence and Neuroscience

Table 4 Comparison of BCI systems for robot navigation

Publication Subjects Classes Literacy rate [] Mean accuracy [] Mean ITR [bitsmin] Robot typeVolosyak et al [21] 86 4 98 9226 1724 E-puckZhao et al1 [34] 7 4 100 903 247 NAOKapeller et al2 [27] 11 4 91 9136 NA E-puckDiez et al3 [31] 7 4 100 8880lowast NA Pioneer 3-DXHolewa and Nawrocka [24] 3 4 100 7375 1136 Lego MindstormChoi and Jo1 [32] 5 2 100 844 1140 NAOThis study 61 4 100 9303 1407 MRC1Results for SSVEP paradigm2f-VEP results for 4 classes without zero class3Results for destination 1 lowastcorrect(correct + wrong) commands

Table 5 Distribution of the time blocks needed for selection of the first three commands ldquodrive forwardrdquo and the last three commandsldquoforwardrdquoldquolook uprdquo over all subjects

Distribution []Time windows 119879

119878in EEG data blocks

8 10 15 20 30 40 50 60 to 160A 656 1803 1803 2295 1148 656 820 820C 820 2295 820 1967 1148 820 492 1639E 984 1311 820 1967 656 984 820 2459Mean 820 1803 1148 2077 984 820 710 1639œ 984 1803 1475 1803 984 328 656 1967X 1148 1639 1639 1475 1311 984 164 1639Y 1148 1475 1148 2295 656 656 492 2131Mean 1093 1639 1421 1858 984 656 437 1913

Accuracy of each command of the robotrsquos path

1 2 3 4 5 6 7 8 9

Commands

10080604020

0

10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26

()

Figure 10This chart presents the accuracy of each command alongthe path (119909-axis) and its selection accuracy to previous correctselection over all subjects

No difference between the time needed for the selectionof the command ldquodrive forwardrdquo in the beginning of theexperiment in comparison to the end of the experiment canbe observed (see Table 5) hence neither fatigue nor practiceseemed to influence performance

The general literacy rate among test subjects of 100indicates that SSVEP-based BCIs can be used by a largerpopulation than previously assumedThis high literacy is dueto the careful selection of individual SSVEP key parameterswhichwere determined by the presented calibration software

41 Limitations The SSVEP-control application has somerestrictions

(i) The wireless connection was implemented with theUDP protocol and some information might thereforenot be received properly

(ii) In order to maximize performance accuracy and userfriendliness the number of simultaneously displayedtargets was restricted to four which resulted in aconsiderably lower ITR

(iii) It should also be noted that for long term use recal-ibration might be inevitable Although the numberof targets is limited to four the SSVEP-based BCIpresented here depends on the userrsquos vision andcontrol over his or her eye movements It thereforehas to compete with other approaches such as P300and other eye tracking systems Visual stimulationwith low frequencies in particular is known to causefatigue

(iv) Moreover subjects in this study may not be repre-sentative of the general population they tended tobe young healthy men Further tests with older andphysically impaired persons are required

5 Conclusions

In the present study the SSVEP-based driving application(robot-GUI) was proven to be successful and robust Thedriving application with live video feedback was tested with61 healthy subjects All participants successfully navigateda mobile semiautonomous robot through a predetermined15-meter-long route BCI key parameters were determinedby the Wizard software No negative influence of the livevideo feedback on the performance was observed Such

Computational Intelligence and Neuroscience 13

applications could be used to control assistive robots butmight also be of interest for healthy subjects In contrast toP300 the SSVEP stimulation has to be sufficiently strong (interms of size or visual angle) to significantly influence theuser A common problem of all BCI studies that also occurredhere is to keep the usermotivated and focused on the taskTheevaluation of the questionnaire shows that for the majorityof subjects the driving task with the SSVEP-based BCI wasnot fatiguing even though lower stimulation frequencieswhich usually cause more fatigue than higher frequencieswere used in the experiment More specifically frequenciesbetween 6 and 12Hz were determined in the calibrationprocedure as they elicited the strongest SSVEP responsesSome subjects stated that as they concentrated more on thecamera video they did not find the flickering annoying at allOur results prove that SSVEP-based personalized adaptiveBCI systems with a low target number can be used as aneveryday assistive technology for remote robotsWe have alsoshown that it is possible to design a four-target interface withextended capabilities such as camera and driving controlThe control application presented here can be used formobile robots as an assistive remote technology as a gameinterface forwheelchair control or even for steering remotelycontrolled drones This study should be considered as a steptowards resolving the problems of semiautonomous mobilerobots controlled with SSVEP-based BCIs In order to furtherimprove this approach the next steps should be an onlineparameter adaptationmechanism for improving the accuracythrough practicing a possibility of turning the stimulationonoff and a two-way video and audio communication fora truly remote telepresence

Competing Interests

The authors declare that the research was conducted in theabsence of any commercial or financial relationships thatcould be construed as a potential conflict of interests

Acknowledgments

The authors would like to thank Thomas Grunenberg fromthe Faculty of Technology and Bionics at Rhine-Waal Uni-versity of Applied Sciences for building programming andmaintaining MRCs The authors would also like to thankthe student assistants Leonie Hillen Francisco Nunez SaadPlatini Anna Catharina Thoma and Paul Fanen Wundufor their help with the study This research was supportedby the German Federal Ministry of Education and Research(BMBF) under Grants nos 16SV6364 and 01DR14014 and theEuropean Fund for Regional Development (EFRD or EFREin Germany) under Grant no GE-1-1-047

References

[1] M Cheney and R Uth Tesla Master of Lightning Barnes ampNoble New York NY USA 1999

[2] W Walter The Living Brain W W Norton amp Company NewYork NY USA 1953

[3] P Flandorfer ldquoPopulation ageing and socially assistive robotsfor elderly persons the importance of sociodemographic fac-tors for user acceptancerdquo International Journal of PopulationResearch vol 2012 Article ID 829835 13 pages 2012

[4] A Nijholt D Tan G Pfurtscheller et al ldquoBrain-computerinterfacing for intelligent systemsrdquo IEEE Intelligent Systems vol23 no 3 pp 72ndash79 2008

[5] J D R Millan R Rupp G R Muller-Putz et al ldquoCombiningbrain-computer interfaces and assistive technologies state-of-the-art and challengesrdquo Frontiers in Neuroscience vol 4 article161 2010

[6] C Brunner N Birbaumer B Blankertz et al ldquoBNCI horizon2020 towards a roadmap for the BCI communityrdquo Brain-Computer Interfaces vol 2 no 1 pp 1ndash10 2015

[7] R Bogue ldquoExoskeletons and robotic prosthetics a review ofrecent developmentsrdquo Industrial Robot vol 36 no 5 pp 421ndash427 2009

[8] R Ron-Angevin F Velasco-Alvarez S Sancha-Ros and L daSilva-Sauer ldquoA two-class self-paced BCI to control a robot infour directionsrdquo IEEE International Conference on Rehabilita-tion Robotics (ICORR rsquo11) vol 2011 Article ID 5975486 2011

[9] L Tonin T Carlson R Leeb and J D R Millan ldquoBrain-controlled telepresence robot by motor-disabled peoplerdquo inProceedings of the Annual International Conference of the IEEEEngineering in Medicine and Biology Society pp 4227ndash4230IEEE Boston Mass USA September 2011

[10] D Huang K Qian D-Y Fei W Jia X Chen and O BaildquoElectroencephalography (EEG)-based brain-computer inter-face (BCI) a 2-D virtual wheelchair control based on event-related desynchronizationsynchronization and state controlrdquoIEEE Transactions on Neural Systems and Rehabilitation Engi-neering vol 20 no 3 pp 379ndash388 2012

[11] A Chella E Pagello E Menegatti et al ldquoA BCI teleoperatedmuseum robotic guiderdquo in Proceedings of the International Con-ference on Complex Intelligent and Software Intensive Systems(CISIS rsquo09) pp 783ndash788 IEEE Fukuoka Japan March 2009

[12] C Escolano J M Antelis and J Minguez ldquoA telepresencemobile robot controlled with a noninvasive brain-computerinterfacerdquo IEEE Transactions on Systems Man and CyberneticsPart B Cybernetics vol 42 no 3 pp 793ndash804 2012

[13] F-B VialatteMMaurice J Dauwels andA Cichocki ldquoSteady-state visually evoked potentials focus on essential paradigmsand future perspectivesrdquo Progress in Neurobiology vol 90 no4 pp 418ndash438 2010

[14] D Regan ldquoSome characteristics of average steady-state andtransient responses evoked by modulated lightrdquo Electroen-cephalography and Clinical Neurophysiology vol 20 no 3 pp238ndash248 1966

[15] Y Frank and F Torres ldquoVisual evoked potentials in the evalua-tion of lsquocortical blindnessrsquo in childrenrdquoAnnals of Neurology vol6 no 2 pp 126ndash129 1979

[16] W IanMcDonald A CompstonG Edan et al ldquoRecommendeddiagnostic criteria for multiple sclerosis guidelines from theinternational panel on the diagnosis of multiple sclerosisrdquoAnnals of Neurology vol 50 no 1 pp 121ndash127 2001

[17] S Gao Y Wang X Gao and B Hong ldquoVisual and auditorybrain-computer interfacesrdquo IEEE Transactions on BiomedicalEngineering vol 61 no 5 pp 1436ndash1447 2014

[18] I Volosyak ldquoSSVEP-based Bremen-BCI interfacemdashboostinginformation transfer ratesrdquo Journal of Neural Engineering vol8 no 3 Article ID 036020 2011

14 Computational Intelligence and Neuroscience

[19] G R Muller-Putz and G Pfurtscheller ldquoControl of an electricalprosthesis with an SSVEP-based BCIrdquo IEEE Transactions onBiomedical Engineering vol 55 no 1 pp 361ndash364 2008

[20] P Martinez H Bakardjian and A Cichocki ldquoFully onlinemulticommand brain-computer interface with visual neuro-feedback using SSVEP paradigmrdquo Computational Intelligenceand Neuroscience vol 2007 Article ID 94561 9 pages 2007

[21] I Volosyak D Valbuena T Luth T Malechka and A GraserldquoBCI demographics II how many (and What Kinds of) peoplecan use a high-frequency SSVEP BCIrdquo IEEE Transactions onNeural Systems and Rehabilitation Engineering vol 19 no 3 pp232ndash239 2011

[22] J Zhao Q Meng W Li M Li and G Chen ldquoSSVEP-basedhierarchical architecture for control of a humanoid robot withmindrdquo in Proceedings of the 11th World Congress on Intelli-gent Control and Automation (WCICA rsquo14) pp 2401ndash2406Shenyang China July 2014

[23] A Guneysu and H Levent Akin ldquoAn SSVEP based BCI tocontrol a humanoid robot by using portable EEG devicerdquo inProceedings of the 35th Annual International Conference of theIEEE Engineering in Medicine and Biology Society (EMBC rsquo13)pp 6905ndash6908 IEEE Osaka Japan July 2013

[24] K Holewa and A Nawrocka ldquoEmotiv EPOC neuroheadset inbrainmdashcomputer interfacerdquo in Proceedings of the 15th Interna-tional Carpathian Control Conference (ICCC rsquo14) pp 149ndash152IEEE Velke Karlovice Czech Republic May 2014

[25] N-S Kwak K-R Muller and S-W Lee ldquoToward exoskeletoncontrol based on steady state visual evoked potentialsrdquo inProceedings of the International Winter Workshop on Brain-Computer Interface (BCI rsquo14) pp 1ndash2 Gangwon South KoreaFebruary 2014

[26] G Bin X Gao Y Wang B Hong and S Gao ldquoVEP-basedbrain-computer interfaces time frequency and code modula-tionsrdquo IEEE Computational Intelligence Magazine vol 4 no 4pp 22ndash26 2009

[27] C Kapeller C Hintermuller M Abu-Alqumsan R Pruckl APeer and C Guger ldquoA BCI using VEP for continuous control ofa mobile robotrdquo in Proceedings of the 35th Annual InternationalConference of the IEEE Engineering in Medicine and BiologySociety (EMBC rsquo13) pp 5254ndash5257 IEEE Osaka Japan July2013

[28] H Wang T Li and Z Huang ldquoRemote control of an electricalcar with SSVEP-based BCIrdquo in Proceedings of the IEEE Interna-tional Conference on Information Theory and Information Secu-rity (ICITIS rsquo10) pp 837ndash840 IEEE Beijing China December2010

[29] Y Wang Y-T Wang and T-P Jung ldquoVisual stimulus designfor high-rate SSVEP BCIrdquo Electronics Letters vol 46 no 15 pp1057ndash1058 2010

[30] P Gergondet D Petit and A Kheddar ldquoSteering a robot witha brain-computer interface impact of video feedback on BCIperformancerdquo in Proceedings of the 21st IEEE InternationalSymposium on Robot and Human Interactive Communication(RO-MAN rsquo12) pp 271ndash276 Paris France September 2012

[31] P F Diez V A Mut E Laciar and E M Avila Perona ldquoMobilerobot navigation with a self-paced brain-computer interfacebased on high-frequency SSVEPrdquo Robotica vol 32 no 5 pp695ndash709 2014

[32] B Choi and S Jo ldquoA low-cost EEG system-based hybridbrain-computer interface for humanoid robot navigation andrecognitionrdquo PLoS ONE vol 8 no 9 Article ID e74583 2013

[33] E Tidoni P Gergondet A Kheddar and S M Aglioti ldquoAudio-visual feedback improves the BCI performance in the naviga-tional control of a humanoid robotrdquo Frontiers in Neuroroboticsvol 8 article 20 pp 1ndash8 2014

[34] J Zhao W Li and M Li ldquoComparative study of SSVEP- andP300-based models for the telepresence control of humanoidrobotsrdquo PLoS ONE vol 10 no 11 Article ID e0142168 2015

[35] T M Rutkowski K Shimizu T Kodama P Jurica and ACichocki ldquoBrain-robot interfaces using spatial tactile BCIparadigmsrdquo in Symbiotic Interaction pp 132ndash137 Springer NewYork NY USA 2015

[36] C Brunner B Z Allison D J Krusienski et al ldquoImprovedsignal processing approaches in an offline simulation of a hybridbrain-computer interfacerdquo Journal of NeuroscienceMethods vol188 no 1 pp 165ndash173 2010

[37] F Gembler P Stawicki and I Volosyak ldquoAutonomous parame-ter adjustment for SSVEP-based BCIs with a novel BCI wizardrdquoFrontiers in Neuroscience vol 9 article 474 pp 1ndash12 2015

[38] E W Sellers T M Vaughan and J R Wolpaw ldquoA brain-computer interface for long-term independent home userdquoAmyotrophic Lateral Sclerosis vol 11 no 5 pp 449ndash455 2010

[39] E M Holz L Botrel and A Kubler ldquoBridging gaps long-term independent BCI home-use by a locked-in end-userrdquo inProceedings of the TOBI Workshop IV Sion Switzerland 2013

[40] O Friman I Volosyak and A Graser ldquoMultiple channel detec-tion of steady-state visual evoked potentials for brain-computerinterfacesrdquo IEEE Transactions on Biomedical Engineering vol54 no 4 pp 742ndash750 2007

[41] I Volosyak T Malechka D Valbuena and A Graser ldquoAnovel calibration method for SSVEP based brain-computerinterfacesrdquo in Proceedings of the 18th European Signal ProcessingConference (EUSIPCO rsquo10) pp 939ndash943 Aalborg DenmarkAugust 2010

[42] I Volosyak H Cecotti and A Graser ldquoOptimal visual stimulion LCD screens for SSVEP based brain-computer interfacesrdquoin Proceedings of the 4th International IEEEEMBS Conferenceon Neural Engineering (NER rsquo09) pp 447ndash450 IEEE AntalyaTurkey May 2009

[43] J R Wolpaw N Birbaumer D J McFarland G Pfurtschellerand T M Vaughan ldquoBrain-computer interfaces for communi-cation and controlrdquo Clinical Neurophysiology vol 113 no 6 pp767ndash791 2002

Page 11: Driving a Semiautonomous Mobile Robotic Car Controlled by ... · Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI ... (frequencycoding,methodoriginallyproposed

Computational Intelligence and Neuroscience 11

Table 3 Results of the driving performance Total time each subjectrequired for steering the MRC through the route (15 meters and10 turns including camera movements) 26 correct commands (3commands in camera mode and 23 commands in drive mode) werenecessary to complete the task Accuracy (Acc) is the number ofcorrect commands (26) divided by the total number of commands(119862119899) Mean values are given at the bottom of the table

Subject Time Acc ITR119862119899[s] [] [bitsmin]

1 30560 8667 720 302 28346 8966 832 293 28407 8125 680 324 28011 8125 690 325 25259 8125 765 326 19002 9286 1340 287 32206 9630 862 278 31007 7027 466 379 15813 9286 1610 2810 37182 8966 635 2911 24172 8667 910 3012 15133 9286 1683 2813 15549 10000 2007 2614 21054 9630 1318 2715 20841 10000 1497 2616 24162 8387 852 3117 24172 8667 910 3018 14513 9630 1912 2719 16748 9630 1657 2720 22080 9286 1153 2821 19419 9630 1429 2722 43154 9286 590 2823 16352 10000 1908 2624 17184 8667 1280 3025 24812 8387 830 3126 13193 10000 2365 2627 16138 10000 1933 2628 18475 9630 1502 2729 14422 10000 2163 2630 16748 9286 1520 2831 21663 9286 1175 2832 35516 8125 544 3233 24406 8667 901 3034 15214 10000 2051 2635 19561 9630 1418 2736 23948 8387 860 3137 16636 8966 1418 2938 17245 10000 1809 2639 21897 8387 940 3140 16565 8966 1425 2941 14666 10000 2127 2642 18627 8966 1267 29

Table 3 Continued

Subject Time Acc ITR119862119899[s] [] [bitsmin]

43 22131 9286 1151 2844 25929 8667 848 3045 15702 10000 1987 2646 14788 9630 1876 2747 18718 9630 1482 2748 14554 10000 2144 2649 16057 9630 1728 2750 20495 10000 1522 2651 22811 9630 1216 2752 16037 10000 1946 2653 23664 8125 817 3254 14889 10000 2095 2655 15011 10000 2078 2656 13985 10000 2231 2657 14716 10000 2120 2658 24273 10000 1285 2659 15001 10000 2080 2660 16463 9630 1685 2761 17885 9630 1551 27Mean 20708 9303 1407 2811SD 5025 573 444 182Min 13193 7027 466 2600Max 43154 10000 2365 3700

forwardrdquo starts an autopilot program that guides the robotthrough a straight distance until it reaches a predefinedstopping point

While the MRC executed the autopilot program andfollowed the line for some time subjects purposely ignoredthe flickering targets and concentrated on the video feedbackinstead As these waiting periods were added up to theactual classification times to each command that followed anautopilot command (ldquoforwardrdquo) the ITR values are consid-erably low From the researcherrsquos point of view a numberof errors could have been avoided if some subjects wouldhave paid more attention to the live feedback Especially inthe task of reading the hidden number subjects were tryingto drive along the path without looking at the text messagethat appeared on the live feedback After realizing that therobot was not turning they looked at the video and saw thetext message instructing them to look up This is visible inthe analysis of the accuracy of the robots commands (seeFigure 10) The lowest accuracy of 8749 was for actionldquochange view to the camera moderdquo (robot command 14) thismay be caused by the fact that the subjects got used to theactions ldquoturn leftrdquoldquoturn rightrdquo and ldquodrive forwardrdquo whichthey repeated six times from the starting point (see Figures10 and 9) This could easily be avoided if the driving pathswere designedwith evenly distributed tasks (eg live scenariotasks such as reading a hidden number should occur moreoften)

12 Computational Intelligence and Neuroscience

Table 4 Comparison of BCI systems for robot navigation

Publication Subjects Classes Literacy rate [] Mean accuracy [] Mean ITR [bitsmin] Robot typeVolosyak et al [21] 86 4 98 9226 1724 E-puckZhao et al1 [34] 7 4 100 903 247 NAOKapeller et al2 [27] 11 4 91 9136 NA E-puckDiez et al3 [31] 7 4 100 8880lowast NA Pioneer 3-DXHolewa and Nawrocka [24] 3 4 100 7375 1136 Lego MindstormChoi and Jo1 [32] 5 2 100 844 1140 NAOThis study 61 4 100 9303 1407 MRC1Results for SSVEP paradigm2f-VEP results for 4 classes without zero class3Results for destination 1 lowastcorrect(correct + wrong) commands

Table 5 Distribution of the time blocks needed for selection of the first three commands ldquodrive forwardrdquo and the last three commandsldquoforwardrdquoldquolook uprdquo over all subjects

Distribution []Time windows 119879

119878in EEG data blocks

8 10 15 20 30 40 50 60 to 160A 656 1803 1803 2295 1148 656 820 820C 820 2295 820 1967 1148 820 492 1639E 984 1311 820 1967 656 984 820 2459Mean 820 1803 1148 2077 984 820 710 1639œ 984 1803 1475 1803 984 328 656 1967X 1148 1639 1639 1475 1311 984 164 1639Y 1148 1475 1148 2295 656 656 492 2131Mean 1093 1639 1421 1858 984 656 437 1913

Accuracy of each command of the robotrsquos path

1 2 3 4 5 6 7 8 9

Commands

10080604020

0

10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26

()

Figure 10This chart presents the accuracy of each command alongthe path (119909-axis) and its selection accuracy to previous correctselection over all subjects

No difference between the time needed for the selectionof the command ldquodrive forwardrdquo in the beginning of theexperiment in comparison to the end of the experiment canbe observed (see Table 5) hence neither fatigue nor practiceseemed to influence performance

The general literacy rate among test subjects of 100indicates that SSVEP-based BCIs can be used by a largerpopulation than previously assumedThis high literacy is dueto the careful selection of individual SSVEP key parameterswhichwere determined by the presented calibration software

41 Limitations The SSVEP-control application has somerestrictions

(i) The wireless connection was implemented with theUDP protocol and some information might thereforenot be received properly

(ii) In order to maximize performance accuracy and userfriendliness the number of simultaneously displayedtargets was restricted to four which resulted in aconsiderably lower ITR

(iii) It should also be noted that for long term use recal-ibration might be inevitable Although the numberof targets is limited to four the SSVEP-based BCIpresented here depends on the userrsquos vision andcontrol over his or her eye movements It thereforehas to compete with other approaches such as P300and other eye tracking systems Visual stimulationwith low frequencies in particular is known to causefatigue

(iv) Moreover subjects in this study may not be repre-sentative of the general population they tended tobe young healthy men Further tests with older andphysically impaired persons are required

5 Conclusions

In the present study the SSVEP-based driving application(robot-GUI) was proven to be successful and robust Thedriving application with live video feedback was tested with61 healthy subjects All participants successfully navigateda mobile semiautonomous robot through a predetermined15-meter-long route BCI key parameters were determinedby the Wizard software No negative influence of the livevideo feedback on the performance was observed Such

Computational Intelligence and Neuroscience 13

applications could be used to control assistive robots butmight also be of interest for healthy subjects In contrast toP300 the SSVEP stimulation has to be sufficiently strong (interms of size or visual angle) to significantly influence theuser A common problem of all BCI studies that also occurredhere is to keep the usermotivated and focused on the taskTheevaluation of the questionnaire shows that for the majorityof subjects the driving task with the SSVEP-based BCI wasnot fatiguing even though lower stimulation frequencieswhich usually cause more fatigue than higher frequencieswere used in the experiment More specifically frequenciesbetween 6 and 12Hz were determined in the calibrationprocedure as they elicited the strongest SSVEP responsesSome subjects stated that as they concentrated more on thecamera video they did not find the flickering annoying at allOur results prove that SSVEP-based personalized adaptiveBCI systems with a low target number can be used as aneveryday assistive technology for remote robotsWe have alsoshown that it is possible to design a four-target interface withextended capabilities such as camera and driving controlThe control application presented here can be used formobile robots as an assistive remote technology as a gameinterface forwheelchair control or even for steering remotelycontrolled drones This study should be considered as a steptowards resolving the problems of semiautonomous mobilerobots controlled with SSVEP-based BCIs In order to furtherimprove this approach the next steps should be an onlineparameter adaptationmechanism for improving the accuracythrough practicing a possibility of turning the stimulationonoff and a two-way video and audio communication fora truly remote telepresence

Competing Interests

The authors declare that the research was conducted in theabsence of any commercial or financial relationships thatcould be construed as a potential conflict of interests

Acknowledgments

The authors would like to thank Thomas Grunenberg fromthe Faculty of Technology and Bionics at Rhine-Waal Uni-versity of Applied Sciences for building programming andmaintaining MRCs The authors would also like to thankthe student assistants Leonie Hillen Francisco Nunez SaadPlatini Anna Catharina Thoma and Paul Fanen Wundufor their help with the study This research was supportedby the German Federal Ministry of Education and Research(BMBF) under Grants nos 16SV6364 and 01DR14014 and theEuropean Fund for Regional Development (EFRD or EFREin Germany) under Grant no GE-1-1-047

References

[1] M Cheney and R Uth Tesla Master of Lightning Barnes ampNoble New York NY USA 1999

[2] W Walter The Living Brain W W Norton amp Company NewYork NY USA 1953

[3] P Flandorfer ldquoPopulation ageing and socially assistive robotsfor elderly persons the importance of sociodemographic fac-tors for user acceptancerdquo International Journal of PopulationResearch vol 2012 Article ID 829835 13 pages 2012

[4] A Nijholt D Tan G Pfurtscheller et al ldquoBrain-computerinterfacing for intelligent systemsrdquo IEEE Intelligent Systems vol23 no 3 pp 72ndash79 2008

[5] J D R Millan R Rupp G R Muller-Putz et al ldquoCombiningbrain-computer interfaces and assistive technologies state-of-the-art and challengesrdquo Frontiers in Neuroscience vol 4 article161 2010

[6] C Brunner N Birbaumer B Blankertz et al ldquoBNCI horizon2020 towards a roadmap for the BCI communityrdquo Brain-Computer Interfaces vol 2 no 1 pp 1ndash10 2015

[7] R Bogue ldquoExoskeletons and robotic prosthetics a review ofrecent developmentsrdquo Industrial Robot vol 36 no 5 pp 421ndash427 2009

[8] R Ron-Angevin F Velasco-Alvarez S Sancha-Ros and L daSilva-Sauer ldquoA two-class self-paced BCI to control a robot infour directionsrdquo IEEE International Conference on Rehabilita-tion Robotics (ICORR rsquo11) vol 2011 Article ID 5975486 2011

[9] L Tonin T Carlson R Leeb and J D R Millan ldquoBrain-controlled telepresence robot by motor-disabled peoplerdquo inProceedings of the Annual International Conference of the IEEEEngineering in Medicine and Biology Society pp 4227ndash4230IEEE Boston Mass USA September 2011

[10] D Huang K Qian D-Y Fei W Jia X Chen and O BaildquoElectroencephalography (EEG)-based brain-computer inter-face (BCI) a 2-D virtual wheelchair control based on event-related desynchronizationsynchronization and state controlrdquoIEEE Transactions on Neural Systems and Rehabilitation Engi-neering vol 20 no 3 pp 379ndash388 2012

[11] A Chella E Pagello E Menegatti et al ldquoA BCI teleoperatedmuseum robotic guiderdquo in Proceedings of the International Con-ference on Complex Intelligent and Software Intensive Systems(CISIS rsquo09) pp 783ndash788 IEEE Fukuoka Japan March 2009

[12] C Escolano J M Antelis and J Minguez ldquoA telepresencemobile robot controlled with a noninvasive brain-computerinterfacerdquo IEEE Transactions on Systems Man and CyberneticsPart B Cybernetics vol 42 no 3 pp 793ndash804 2012

[13] F-B VialatteMMaurice J Dauwels andA Cichocki ldquoSteady-state visually evoked potentials focus on essential paradigmsand future perspectivesrdquo Progress in Neurobiology vol 90 no4 pp 418ndash438 2010

[14] D Regan ldquoSome characteristics of average steady-state andtransient responses evoked by modulated lightrdquo Electroen-cephalography and Clinical Neurophysiology vol 20 no 3 pp238ndash248 1966

[15] Y Frank and F Torres ldquoVisual evoked potentials in the evalua-tion of lsquocortical blindnessrsquo in childrenrdquoAnnals of Neurology vol6 no 2 pp 126ndash129 1979

[16] W IanMcDonald A CompstonG Edan et al ldquoRecommendeddiagnostic criteria for multiple sclerosis guidelines from theinternational panel on the diagnosis of multiple sclerosisrdquoAnnals of Neurology vol 50 no 1 pp 121ndash127 2001

[17] S Gao Y Wang X Gao and B Hong ldquoVisual and auditorybrain-computer interfacesrdquo IEEE Transactions on BiomedicalEngineering vol 61 no 5 pp 1436ndash1447 2014

[18] I Volosyak ldquoSSVEP-based Bremen-BCI interfacemdashboostinginformation transfer ratesrdquo Journal of Neural Engineering vol8 no 3 Article ID 036020 2011

14 Computational Intelligence and Neuroscience

[19] G R Muller-Putz and G Pfurtscheller ldquoControl of an electricalprosthesis with an SSVEP-based BCIrdquo IEEE Transactions onBiomedical Engineering vol 55 no 1 pp 361ndash364 2008

[20] P Martinez H Bakardjian and A Cichocki ldquoFully onlinemulticommand brain-computer interface with visual neuro-feedback using SSVEP paradigmrdquo Computational Intelligenceand Neuroscience vol 2007 Article ID 94561 9 pages 2007

[21] I Volosyak D Valbuena T Luth T Malechka and A GraserldquoBCI demographics II how many (and What Kinds of) peoplecan use a high-frequency SSVEP BCIrdquo IEEE Transactions onNeural Systems and Rehabilitation Engineering vol 19 no 3 pp232ndash239 2011

[22] J Zhao Q Meng W Li M Li and G Chen ldquoSSVEP-basedhierarchical architecture for control of a humanoid robot withmindrdquo in Proceedings of the 11th World Congress on Intelli-gent Control and Automation (WCICA rsquo14) pp 2401ndash2406Shenyang China July 2014

[23] A Guneysu and H Levent Akin ldquoAn SSVEP based BCI tocontrol a humanoid robot by using portable EEG devicerdquo inProceedings of the 35th Annual International Conference of theIEEE Engineering in Medicine and Biology Society (EMBC rsquo13)pp 6905ndash6908 IEEE Osaka Japan July 2013

[24] K Holewa and A Nawrocka ldquoEmotiv EPOC neuroheadset inbrainmdashcomputer interfacerdquo in Proceedings of the 15th Interna-tional Carpathian Control Conference (ICCC rsquo14) pp 149ndash152IEEE Velke Karlovice Czech Republic May 2014

[25] N-S Kwak K-R Muller and S-W Lee ldquoToward exoskeletoncontrol based on steady state visual evoked potentialsrdquo inProceedings of the International Winter Workshop on Brain-Computer Interface (BCI rsquo14) pp 1ndash2 Gangwon South KoreaFebruary 2014

[26] G Bin X Gao Y Wang B Hong and S Gao ldquoVEP-basedbrain-computer interfaces time frequency and code modula-tionsrdquo IEEE Computational Intelligence Magazine vol 4 no 4pp 22ndash26 2009

[27] C Kapeller C Hintermuller M Abu-Alqumsan R Pruckl APeer and C Guger ldquoA BCI using VEP for continuous control ofa mobile robotrdquo in Proceedings of the 35th Annual InternationalConference of the IEEE Engineering in Medicine and BiologySociety (EMBC rsquo13) pp 5254ndash5257 IEEE Osaka Japan July2013

[28] H Wang T Li and Z Huang ldquoRemote control of an electricalcar with SSVEP-based BCIrdquo in Proceedings of the IEEE Interna-tional Conference on Information Theory and Information Secu-rity (ICITIS rsquo10) pp 837ndash840 IEEE Beijing China December2010

[29] Y Wang Y-T Wang and T-P Jung ldquoVisual stimulus designfor high-rate SSVEP BCIrdquo Electronics Letters vol 46 no 15 pp1057ndash1058 2010

[30] P Gergondet D Petit and A Kheddar ldquoSteering a robot witha brain-computer interface impact of video feedback on BCIperformancerdquo in Proceedings of the 21st IEEE InternationalSymposium on Robot and Human Interactive Communication(RO-MAN rsquo12) pp 271ndash276 Paris France September 2012

[31] P F Diez V A Mut E Laciar and E M Avila Perona ldquoMobilerobot navigation with a self-paced brain-computer interfacebased on high-frequency SSVEPrdquo Robotica vol 32 no 5 pp695ndash709 2014

[32] B Choi and S Jo ldquoA low-cost EEG system-based hybridbrain-computer interface for humanoid robot navigation andrecognitionrdquo PLoS ONE vol 8 no 9 Article ID e74583 2013

[33] E Tidoni P Gergondet A Kheddar and S M Aglioti ldquoAudio-visual feedback improves the BCI performance in the naviga-tional control of a humanoid robotrdquo Frontiers in Neuroroboticsvol 8 article 20 pp 1ndash8 2014

[34] J Zhao W Li and M Li ldquoComparative study of SSVEP- andP300-based models for the telepresence control of humanoidrobotsrdquo PLoS ONE vol 10 no 11 Article ID e0142168 2015

[35] T M Rutkowski K Shimizu T Kodama P Jurica and ACichocki ldquoBrain-robot interfaces using spatial tactile BCIparadigmsrdquo in Symbiotic Interaction pp 132ndash137 Springer NewYork NY USA 2015

[36] C Brunner B Z Allison D J Krusienski et al ldquoImprovedsignal processing approaches in an offline simulation of a hybridbrain-computer interfacerdquo Journal of NeuroscienceMethods vol188 no 1 pp 165ndash173 2010

[37] F Gembler P Stawicki and I Volosyak ldquoAutonomous parame-ter adjustment for SSVEP-based BCIs with a novel BCI wizardrdquoFrontiers in Neuroscience vol 9 article 474 pp 1ndash12 2015

[38] E W Sellers T M Vaughan and J R Wolpaw ldquoA brain-computer interface for long-term independent home userdquoAmyotrophic Lateral Sclerosis vol 11 no 5 pp 449ndash455 2010

[39] E M Holz L Botrel and A Kubler ldquoBridging gaps long-term independent BCI home-use by a locked-in end-userrdquo inProceedings of the TOBI Workshop IV Sion Switzerland 2013

[40] O Friman I Volosyak and A Graser ldquoMultiple channel detec-tion of steady-state visual evoked potentials for brain-computerinterfacesrdquo IEEE Transactions on Biomedical Engineering vol54 no 4 pp 742ndash750 2007

[41] I Volosyak T Malechka D Valbuena and A Graser ldquoAnovel calibration method for SSVEP based brain-computerinterfacesrdquo in Proceedings of the 18th European Signal ProcessingConference (EUSIPCO rsquo10) pp 939ndash943 Aalborg DenmarkAugust 2010

[42] I Volosyak H Cecotti and A Graser ldquoOptimal visual stimulion LCD screens for SSVEP based brain-computer interfacesrdquoin Proceedings of the 4th International IEEEEMBS Conferenceon Neural Engineering (NER rsquo09) pp 447ndash450 IEEE AntalyaTurkey May 2009

[43] J R Wolpaw N Birbaumer D J McFarland G Pfurtschellerand T M Vaughan ldquoBrain-computer interfaces for communi-cation and controlrdquo Clinical Neurophysiology vol 113 no 6 pp767ndash791 2002

Page 12: Driving a Semiautonomous Mobile Robotic Car Controlled by ... · Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI ... (frequencycoding,methodoriginallyproposed

12 Computational Intelligence and Neuroscience

Table 4 Comparison of BCI systems for robot navigation

Publication Subjects Classes Literacy rate [] Mean accuracy [] Mean ITR [bitsmin] Robot typeVolosyak et al [21] 86 4 98 9226 1724 E-puckZhao et al1 [34] 7 4 100 903 247 NAOKapeller et al2 [27] 11 4 91 9136 NA E-puckDiez et al3 [31] 7 4 100 8880lowast NA Pioneer 3-DXHolewa and Nawrocka [24] 3 4 100 7375 1136 Lego MindstormChoi and Jo1 [32] 5 2 100 844 1140 NAOThis study 61 4 100 9303 1407 MRC1Results for SSVEP paradigm2f-VEP results for 4 classes without zero class3Results for destination 1 lowastcorrect(correct + wrong) commands

Table 5 Distribution of the time blocks needed for selection of the first three commands ldquodrive forwardrdquo and the last three commandsldquoforwardrdquoldquolook uprdquo over all subjects

Distribution []Time windows 119879

119878in EEG data blocks

8 10 15 20 30 40 50 60 to 160A 656 1803 1803 2295 1148 656 820 820C 820 2295 820 1967 1148 820 492 1639E 984 1311 820 1967 656 984 820 2459Mean 820 1803 1148 2077 984 820 710 1639œ 984 1803 1475 1803 984 328 656 1967X 1148 1639 1639 1475 1311 984 164 1639Y 1148 1475 1148 2295 656 656 492 2131Mean 1093 1639 1421 1858 984 656 437 1913

Accuracy of each command of the robotrsquos path

1 2 3 4 5 6 7 8 9

Commands

10080604020

0

10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26

()

Figure 10This chart presents the accuracy of each command alongthe path (119909-axis) and its selection accuracy to previous correctselection over all subjects

No difference between the time needed for the selectionof the command ldquodrive forwardrdquo in the beginning of theexperiment in comparison to the end of the experiment canbe observed (see Table 5) hence neither fatigue nor practiceseemed to influence performance

The general literacy rate among test subjects of 100indicates that SSVEP-based BCIs can be used by a largerpopulation than previously assumedThis high literacy is dueto the careful selection of individual SSVEP key parameterswhichwere determined by the presented calibration software

41 Limitations The SSVEP-control application has somerestrictions

(i) The wireless connection was implemented with theUDP protocol and some information might thereforenot be received properly

(ii) In order to maximize performance accuracy and userfriendliness the number of simultaneously displayedtargets was restricted to four which resulted in aconsiderably lower ITR

(iii) It should also be noted that for long term use recal-ibration might be inevitable Although the numberof targets is limited to four the SSVEP-based BCIpresented here depends on the userrsquos vision andcontrol over his or her eye movements It thereforehas to compete with other approaches such as P300and other eye tracking systems Visual stimulationwith low frequencies in particular is known to causefatigue

(iv) Moreover subjects in this study may not be repre-sentative of the general population they tended tobe young healthy men Further tests with older andphysically impaired persons are required

5. Conclusions

In the present study, the SSVEP-based driving application (robot-GUI) proved successful and robust. The driving application with live video feedback was tested with 61 healthy subjects. All participants successfully navigated a mobile semiautonomous robot through a predetermined 15-meter-long route. BCI key parameters were determined by the Wizard software. No negative influence of the live video feedback on the performance was observed. Such applications could be used to control assistive robots but might also be of interest for healthy subjects. In contrast to P300, the SSVEP stimulation has to be sufficiently strong (in terms of size or visual angle) to significantly influence the user. A common problem of all BCI studies, which also occurred here, is keeping the user motivated and focused on the task. The evaluation of the questionnaire shows that for the majority of subjects the driving task with the SSVEP-based BCI was not fatiguing, even though lower stimulation frequencies, which usually cause more fatigue than higher frequencies, were used in the experiment. More specifically, frequencies between 6 and 12 Hz were determined in the calibration procedure, as they elicited the strongest SSVEP responses. Some subjects stated that, as they concentrated more on the camera video, they did not find the flickering annoying at all. Our results show that SSVEP-based personalized adaptive BCI systems with a low target number can be used as an everyday assistive technology for remote robots. We have also shown that it is possible to design a four-target interface with extended capabilities such as camera and driving control. The control application presented here can be used for mobile robots as an assistive remote technology, as a game interface, for wheelchair control, or even for steering remotely controlled drones. This study should be considered a step towards resolving the problems of semiautonomous mobile robots controlled with SSVEP-based BCIs. In order to further improve this approach, the next steps should be an online parameter adaptation mechanism for improving accuracy through practice, a possibility of turning the stimulation on/off, and two-way video and audio communication for truly remote telepresence.
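
As an illustration of such frequency calibration, candidate stimulation frequencies can be ranked by the strength of the SSVEP response they evoke, for example via band power at the stimulation frequency and its second harmonic. This power-spectrum criterion is only a stand-in for the Wizard's published procedure [37], and the recordings below are placeholder noise, not EEG data.

```python
import numpy as np
from scipy.signal import welch

def response_strength(eeg, fs, freq):
    """Band power near `freq` and its 2nd harmonic in a single-channel
    recording made while that frequency flickered (illustrative criterion)."""
    f, pxx = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 0.5 Hz resolution
    mask = (np.abs(f - freq) <= 0.25) | (np.abs(f - 2 * freq) <= 0.25)
    return pxx[mask].sum()

fs = 256.0
candidates = np.arange(6.0, 12.5, 0.5)  # the 6-12 Hz range used in this study
rng = np.random.default_rng(1)
recordings = {fq: rng.standard_normal(int(10 * fs)) for fq in candidates}  # placeholder

# Keep the four frequencies with the strongest evoked response.
best = sorted(candidates, key=lambda fq: response_strength(recordings[fq], fs, fq))[-4:]
print(sorted(best))
```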

Competing Interests

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interests.

Acknowledgments

The authors would like to thank Thomas Grunenberg from the Faculty of Technology and Bionics at Rhine-Waal University of Applied Sciences for building, programming, and maintaining the MRCs. The authors would also like to thank the student assistants Leonie Hillen, Francisco Nunez, Saad Platini, Anna Catharina Thoma, and Paul Fanen Wundu for their help with the study. This research was supported by the German Federal Ministry of Education and Research (BMBF) under Grants nos. 16SV6364 and 01DR14014 and by the European Fund for Regional Development (EFRD, or EFRE in Germany) under Grant no. GE-1-1-047.

References

[1] M. Cheney and R. Uth, Tesla: Master of Lightning, Barnes & Noble, New York, NY, USA, 1999.

[2] W. Walter, The Living Brain, W. W. Norton & Company, New York, NY, USA, 1953.

[3] P. Flandorfer, "Population ageing and socially assistive robots for elderly persons: the importance of sociodemographic factors for user acceptance," International Journal of Population Research, vol. 2012, Article ID 829835, 13 pages, 2012.

[4] A. Nijholt, D. Tan, G. Pfurtscheller et al., "Brain-computer interfacing for intelligent systems," IEEE Intelligent Systems, vol. 23, no. 3, pp. 72–79, 2008.

[5] J. D. R. Millan, R. Rupp, G. R. Muller-Putz et al., "Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges," Frontiers in Neuroscience, vol. 4, article 161, 2010.

[6] C. Brunner, N. Birbaumer, B. Blankertz et al., "BNCI horizon 2020: towards a roadmap for the BCI community," Brain-Computer Interfaces, vol. 2, no. 1, pp. 1–10, 2015.

[7] R. Bogue, "Exoskeletons and robotic prosthetics: a review of recent developments," Industrial Robot, vol. 36, no. 5, pp. 421–427, 2009.

[8] R. Ron-Angevin, F. Velasco-Alvarez, S. Sancha-Ros, and L. da Silva-Sauer, "A two-class self-paced BCI to control a robot in four directions," IEEE International Conference on Rehabilitation Robotics (ICORR '11), vol. 2011, Article ID 5975486, 2011.

[9] L. Tonin, T. Carlson, R. Leeb, and J. D. R. Millan, "Brain-controlled telepresence robot by motor-disabled people," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4227–4230, IEEE, Boston, Mass, USA, September 2011.

[10] D. Huang, K. Qian, D.-Y. Fei, W. Jia, X. Chen, and O. Bai, "Electroencephalography (EEG)-based brain-computer interface (BCI): a 2-D virtual wheelchair control based on event-related desynchronization/synchronization and state control," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 20, no. 3, pp. 379–388, 2012.

[11] A. Chella, E. Pagello, E. Menegatti et al., "A BCI teleoperated museum robotic guide," in Proceedings of the International Conference on Complex, Intelligent and Software Intensive Systems (CISIS '09), pp. 783–788, IEEE, Fukuoka, Japan, March 2009.

[12] C. Escolano, J. M. Antelis, and J. Minguez, "A telepresence mobile robot controlled with a noninvasive brain-computer interface," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 3, pp. 793–804, 2012.

[13] F.-B. Vialatte, M. Maurice, J. Dauwels, and A. Cichocki, "Steady-state visually evoked potentials: focus on essential paradigms and future perspectives," Progress in Neurobiology, vol. 90, no. 4, pp. 418–438, 2010.

[14] D. Regan, "Some characteristics of average steady-state and transient responses evoked by modulated light," Electroencephalography and Clinical Neurophysiology, vol. 20, no. 3, pp. 238–248, 1966.

[15] Y. Frank and F. Torres, "Visual evoked potentials in the evaluation of 'cortical blindness' in children," Annals of Neurology, vol. 6, no. 2, pp. 126–129, 1979.

[16] W. Ian McDonald, A. Compston, G. Edan et al., "Recommended diagnostic criteria for multiple sclerosis: guidelines from the international panel on the diagnosis of multiple sclerosis," Annals of Neurology, vol. 50, no. 1, pp. 121–127, 2001.

[17] S. Gao, Y. Wang, X. Gao, and B. Hong, "Visual and auditory brain-computer interfaces," IEEE Transactions on Biomedical Engineering, vol. 61, no. 5, pp. 1436–1447, 2014.

[18] I. Volosyak, "SSVEP-based Bremen-BCI interface: boosting information transfer rates," Journal of Neural Engineering, vol. 8, no. 3, Article ID 036020, 2011.


[19] G. R. Muller-Putz and G. Pfurtscheller, "Control of an electrical prosthesis with an SSVEP-based BCI," IEEE Transactions on Biomedical Engineering, vol. 55, no. 1, pp. 361–364, 2008.

[20] P. Martinez, H. Bakardjian, and A. Cichocki, "Fully online multicommand brain-computer interface with visual neurofeedback using SSVEP paradigm," Computational Intelligence and Neuroscience, vol. 2007, Article ID 94561, 9 pages, 2007.

[21] I. Volosyak, D. Valbuena, T. Luth, T. Malechka, and A. Graser, "BCI demographics II: how many (and what kinds of) people can use a high-frequency SSVEP BCI?" IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 19, no. 3, pp. 232–239, 2011.

[22] J. Zhao, Q. Meng, W. Li, M. Li, and G. Chen, "SSVEP-based hierarchical architecture for control of a humanoid robot with mind," in Proceedings of the 11th World Congress on Intelligent Control and Automation (WCICA '14), pp. 2401–2406, Shenyang, China, July 2014.

[23] A. Guneysu and H. Levent Akin, "An SSVEP based BCI to control a humanoid robot by using portable EEG device," in Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '13), pp. 6905–6908, IEEE, Osaka, Japan, July 2013.

[24] K. Holewa and A. Nawrocka, "Emotiv EPOC neuroheadset in brain-computer interface," in Proceedings of the 15th International Carpathian Control Conference (ICCC '14), pp. 149–152, IEEE, Velke Karlovice, Czech Republic, May 2014.

[25] N.-S. Kwak, K.-R. Muller, and S.-W. Lee, "Toward exoskeleton control based on steady state visual evoked potentials," in Proceedings of the International Winter Workshop on Brain-Computer Interface (BCI '14), pp. 1–2, Gangwon, South Korea, February 2014.

[26] G. Bin, X. Gao, Y. Wang, B. Hong, and S. Gao, "VEP-based brain-computer interfaces: time, frequency, and code modulations," IEEE Computational Intelligence Magazine, vol. 4, no. 4, pp. 22–26, 2009.

[27] C. Kapeller, C. Hintermuller, M. Abu-Alqumsan, R. Pruckl, A. Peer, and C. Guger, "A BCI using VEP for continuous control of a mobile robot," in Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '13), pp. 5254–5257, IEEE, Osaka, Japan, July 2013.

[28] H. Wang, T. Li, and Z. Huang, "Remote control of an electrical car with SSVEP-based BCI," in Proceedings of the IEEE International Conference on Information Theory and Information Security (ICITIS '10), pp. 837–840, IEEE, Beijing, China, December 2010.

[29] Y. Wang, Y.-T. Wang, and T.-P. Jung, "Visual stimulus design for high-rate SSVEP BCI," Electronics Letters, vol. 46, no. 15, pp. 1057–1058, 2010.

[30] P. Gergondet, D. Petit, and A. Kheddar, "Steering a robot with a brain-computer interface: impact of video feedback on BCI performance," in Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN '12), pp. 271–276, Paris, France, September 2012.

[31] P. F. Diez, V. A. Mut, E. Laciar, and E. M. Avila Perona, "Mobile robot navigation with a self-paced brain-computer interface based on high-frequency SSVEP," Robotica, vol. 32, no. 5, pp. 695–709, 2014.

[32] B. Choi and S. Jo, "A low-cost EEG system-based hybrid brain-computer interface for humanoid robot navigation and recognition," PLoS ONE, vol. 8, no. 9, Article ID e74583, 2013.

[33] E. Tidoni, P. Gergondet, A. Kheddar, and S. M. Aglioti, "Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot," Frontiers in Neurorobotics, vol. 8, article 20, pp. 1–8, 2014.

[34] J. Zhao, W. Li, and M. Li, "Comparative study of SSVEP- and P300-based models for the telepresence control of humanoid robots," PLoS ONE, vol. 10, no. 11, Article ID e0142168, 2015.

[35] T. M. Rutkowski, K. Shimizu, T. Kodama, P. Jurica, and A. Cichocki, "Brain-robot interfaces using spatial tactile BCI paradigms," in Symbiotic Interaction, pp. 132–137, Springer, New York, NY, USA, 2015.

[36] C. Brunner, B. Z. Allison, D. J. Krusienski et al., "Improved signal processing approaches in an offline simulation of a hybrid brain-computer interface," Journal of Neuroscience Methods, vol. 188, no. 1, pp. 165–173, 2010.

[37] F. Gembler, P. Stawicki, and I. Volosyak, "Autonomous parameter adjustment for SSVEP-based BCIs with a novel BCI wizard," Frontiers in Neuroscience, vol. 9, article 474, pp. 1–12, 2015.

[38] E. W. Sellers, T. M. Vaughan, and J. R. Wolpaw, "A brain-computer interface for long-term independent home use," Amyotrophic Lateral Sclerosis, vol. 11, no. 5, pp. 449–455, 2010.

[39] E. M. Holz, L. Botrel, and A. Kubler, "Bridging gaps: long-term independent BCI home-use by a locked-in end-user," in Proceedings of the TOBI Workshop IV, Sion, Switzerland, 2013.

[40] O. Friman, I. Volosyak, and A. Graser, "Multiple channel detection of steady-state visual evoked potentials for brain-computer interfaces," IEEE Transactions on Biomedical Engineering, vol. 54, no. 4, pp. 742–750, 2007.

[41] I. Volosyak, T. Malechka, D. Valbuena, and A. Graser, "A novel calibration method for SSVEP based brain-computer interfaces," in Proceedings of the 18th European Signal Processing Conference (EUSIPCO '10), pp. 939–943, Aalborg, Denmark, August 2010.

[42] I. Volosyak, H. Cecotti, and A. Graser, "Optimal visual stimuli on LCD screens for SSVEP based brain-computer interfaces," in Proceedings of the 4th International IEEE/EMBS Conference on Neural Engineering (NER '09), pp. 447–450, IEEE, Antalya, Turkey, May 2009.

[43] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control," Clinical Neurophysiology, vol. 113, no. 6, pp. 767–791, 2002.
