Exploration server on haptic devices


A New Auditory Multi-Class Brain-Computer Interface Paradigm: Spatial Hearing as an Informative Cue

Authors: Martijn Schreuder [Germany]; Benjamin Blankertz [Germany]; Michael Tangermann [Germany]

Source: PLoS ONE (2010)

RBID: PMC:2848564

Abstract

Most P300-based brain-computer interface (BCI) approaches use the visual modality for stimulation. For use with patients suffering from amyotrophic lateral sclerosis (ALS) this might not be the preferable choice because of sight deterioration. Moreover, using a modality different from the visual one minimizes interference with possible visual feedback. Therefore, a multi-class BCI paradigm is proposed that uses spatially distributed, auditory cues. Ten healthy subjects participated in an offline oddball task with the spatial location of the stimuli being a discriminating cue. Experiments were done in free field, with an individual speaker for each location. Different inter-stimulus intervals of 1000 ms, 300 ms and 175 ms were tested. With averaging over multiple repetitions, selection scores went over 90% for most conditions, i.e., in over 90% of the trials the correct location was selected. One subject reached a 100% correct score. Corresponding information transfer rates were high, up to an average score of 17.39 bits/minute for the 175 ms condition (best subject 25.20 bits/minute). When presenting the stimuli through a single speaker, thus effectively canceling the spatial properties of the cue, selection scores went down below 70% for most subjects. We conclude that the proposed spatial auditory paradigm is successful for healthy subjects and shows promising results that may lead to a fast BCI that solely relies on the auditory sense.


URL: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2848564
DOI: 10.1371/journal.pone.0009813
PubMed: 20368976
PubMed Central: 2848564


Affiliations: Machine Learning Department, Berlin Institute of Technology, Berlin, Germany; Intelligent Data Analysis Group, Fraunhofer FIRST, Berlin, Germany



The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">A New Auditory Multi-Class Brain-Computer Interface Paradigm: Spatial Hearing as an Informative Cue</title>
<author>
<name sortKey="Schreuder, Martijn" sort="Schreuder, Martijn" uniqKey="Schreuder M" first="Martijn" last="Schreuder">Martijn Schreuder</name>
<affiliation wicri:level="3">
<nlm:aff id="aff1">
<addr-line>Machine Learning Department, Berlin Institute of Technology, Berlin, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Machine Learning Department, Berlin Institute of Technology, Berlin</wicri:regionArea>
<placeName>
<region type="land" nuts="3">Berlin</region>
<settlement type="city">Berlin</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Blankertz, Benjamin" sort="Blankertz, Benjamin" uniqKey="Blankertz B" first="Benjamin" last="Blankertz">Benjamin Blankertz</name>
<affiliation wicri:level="3">
<nlm:aff id="aff1">
<addr-line>Machine Learning Department, Berlin Institute of Technology, Berlin, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Machine Learning Department, Berlin Institute of Technology, Berlin</wicri:regionArea>
<placeName>
<region type="land" nuts="3">Berlin</region>
<settlement type="city">Berlin</settlement>
</placeName>
</affiliation>
<affiliation wicri:level="3">
<nlm:aff id="aff2">
<addr-line>Intelligent Data Analysis Group, Fraunhofer FIRST, Berlin, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Intelligent Data Analysis Group, Fraunhofer FIRST, Berlin</wicri:regionArea>
<placeName>
<region type="land" nuts="3">Berlin</region>
<settlement type="city">Berlin</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Tangermann, Michael" sort="Tangermann, Michael" uniqKey="Tangermann M" first="Michael" last="Tangermann">Michael Tangermann</name>
<affiliation wicri:level="3">
<nlm:aff id="aff1">
<addr-line>Machine Learning Department, Berlin Institute of Technology, Berlin, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Machine Learning Department, Berlin Institute of Technology, Berlin</wicri:regionArea>
<placeName>
<region type="land" nuts="3">Berlin</region>
<settlement type="city">Berlin</settlement>
</placeName>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">20368976</idno>
<idno type="pmc">2848564</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2848564</idno>
<idno type="RBID">PMC:2848564</idno>
<idno type="doi">10.1371/journal.pone.0009813</idno>
<date when="2010">2010</date>
<idno type="wicri:Area/Pmc/Corpus">002382</idno>
<idno type="wicri:Area/Pmc/Curation">002382</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001F83</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">A New Auditory Multi-Class Brain-Computer Interface Paradigm: Spatial Hearing as an Informative Cue</title>
<author>
<name sortKey="Schreuder, Martijn" sort="Schreuder, Martijn" uniqKey="Schreuder M" first="Martijn" last="Schreuder">Martijn Schreuder</name>
<affiliation wicri:level="3">
<nlm:aff id="aff1">
<addr-line>Machine Learning Department, Berlin Institute of Technology, Berlin, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Machine Learning Department, Berlin Institute of Technology, Berlin</wicri:regionArea>
<placeName>
<region type="land" nuts="3">Berlin</region>
<settlement type="city">Berlin</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Blankertz, Benjamin" sort="Blankertz, Benjamin" uniqKey="Blankertz B" first="Benjamin" last="Blankertz">Benjamin Blankertz</name>
<affiliation wicri:level="3">
<nlm:aff id="aff1">
<addr-line>Machine Learning Department, Berlin Institute of Technology, Berlin, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Machine Learning Department, Berlin Institute of Technology, Berlin</wicri:regionArea>
<placeName>
<region type="land" nuts="3">Berlin</region>
<settlement type="city">Berlin</settlement>
</placeName>
</affiliation>
<affiliation wicri:level="3">
<nlm:aff id="aff2">
<addr-line>Intelligent Data Analysis Group, Fraunhofer FIRST, Berlin, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Intelligent Data Analysis Group, Fraunhofer FIRST, Berlin</wicri:regionArea>
<placeName>
<region type="land" nuts="3">Berlin</region>
<settlement type="city">Berlin</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Tangermann, Michael" sort="Tangermann, Michael" uniqKey="Tangermann M" first="Michael" last="Tangermann">Michael Tangermann</name>
<affiliation wicri:level="3">
<nlm:aff id="aff1">
<addr-line>Machine Learning Department, Berlin Institute of Technology, Berlin, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Machine Learning Department, Berlin Institute of Technology, Berlin</wicri:regionArea>
<placeName>
<region type="land" nuts="3">Berlin</region>
<settlement type="city">Berlin</settlement>
</placeName>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2010">2010</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Most P300-based brain-computer interface (BCI) approaches use the visual modality for stimulation. For use with patients suffering from amyotrophic lateral sclerosis (ALS) this might not be the preferable choice because of sight deterioration. Moreover, using a modality different from the visual one minimizes interference with possible visual feedback. Therefore, a multi-class BCI paradigm is proposed that uses spatially distributed, auditory cues. Ten healthy subjects participated in an offline oddball task with the spatial location of the stimuli being a discriminating cue. Experiments were done in free field, with an individual speaker for each location. Different inter-stimulus intervals of 1000 ms, 300 ms and 175 ms were tested. With averaging over multiple repetitions, selection scores went over 90% for most conditions, i.e., in over 90% of the trials the correct location was selected. One subject reached a 100% correct score. Corresponding information transfer rates were high, up to an average score of 17.39 bits/minute for the 175 ms condition (best subject 25.20 bits/minute). When presenting the stimuli through a single speaker, thus effectively canceling the spatial properties of the cue, selection scores went down below 70% for most subjects. We conclude that the proposed spatial auditory paradigm is successful for healthy subjects and shows promising results that may lead to a fast BCI that solely relies on the auditory sense.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Wolpaw, Jr" uniqKey="Wolpaw J">JR Wolpaw</name>
</author>
<author>
<name sortKey="Birbaumer, N" uniqKey="Birbaumer N">N Birbaumer</name>
</author>
<author>
<name sortKey="Mcfarland, Dj" uniqKey="Mcfarland D">DJ McFarland</name>
</author>
<author>
<name sortKey="Pfurtscheller, G" uniqKey="Pfurtscheller G">G Pfurtscheller</name>
</author>
<author>
<name sortKey="Vaughan, Tm" uniqKey="Vaughan T">TM Vaughan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hill, Nj" uniqKey="Hill N">NJ Hill</name>
</author>
<author>
<name sortKey="Lal, Tn" uniqKey="Lal T">TN Lal</name>
</author>
<author>
<name sortKey="Bierig, K" uniqKey="Bierig K">K Bierig</name>
</author>
<author>
<name sortKey="Birbaumer, N" uniqKey="Birbaumer N">N Birbaumer</name>
</author>
<author>
<name sortKey="Scholkopf, B" uniqKey="Scholkopf B">B Schölkopf</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nijboer, F" uniqKey="Nijboer F">F Nijboer</name>
</author>
<author>
<name sortKey="Furdea, A" uniqKey="Furdea A">A Furdea</name>
</author>
<author>
<name sortKey="Gunst, I" uniqKey="Gunst I">I Gunst</name>
</author>
<author>
<name sortKey="Mellinger, J" uniqKey="Mellinger J">J Mellinger</name>
</author>
<author>
<name sortKey="Mcfarland, Dj" uniqKey="Mcfarland D">DJ McFarland</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sellers, Ew" uniqKey="Sellers E">EW Sellers</name>
</author>
<author>
<name sortKey="Donchin, E" uniqKey="Donchin E">E Donchin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Furdea, A" uniqKey="Furdea A">A Furdea</name>
</author>
<author>
<name sortKey="Halder, S" uniqKey="Halder S">S Halder</name>
</author>
<author>
<name sortKey="Krusienski, Dj" uniqKey="Krusienski D">DJ Krusienski</name>
</author>
<author>
<name sortKey="Bross, D" uniqKey="Bross D">D Bross</name>
</author>
<author>
<name sortKey="Nijboer, F" uniqKey="Nijboer F">F Nijboer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Desain, P" uniqKey="Desain P">P Desain</name>
</author>
<author>
<name sortKey="Hupse, A" uniqKey="Hupse A">A Hupse</name>
</author>
<author>
<name sortKey="Kallenberg, M" uniqKey="Kallenberg M">M Kallenberg</name>
</author>
<author>
<name sortKey="De Kruif, B" uniqKey="De Kruif B">B de Kruif</name>
</author>
<author>
<name sortKey="Schaefer, R" uniqKey="Schaefer R">R Schaefer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kanoh, S" uniqKey="Kanoh S">S Kanoh</name>
</author>
<author>
<name sortKey="Ichiro Miyamoto, K" uniqKey="Ichiro Miyamoto K">K Ichiro Miyamoto</name>
</author>
<author>
<name sortKey="Yoshinobu, T" uniqKey="Yoshinobu T">T Yoshinobu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hinterberger, T" uniqKey="Hinterberger T">T Hinterberger</name>
</author>
<author>
<name sortKey="Neumann, N" uniqKey="Neumann N">N Neumann</name>
</author>
<author>
<name sortKey="Pham, M" uniqKey="Pham M">M Pham</name>
</author>
<author>
<name sortKey="Kubler, A" uniqKey="Kubler A">A Kübler</name>
</author>
<author>
<name sortKey="Grether, A" uniqKey="Grether A">A Grether</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Farquhar, J" uniqKey="Farquhar J">J Farquhar</name>
</author>
<author>
<name sortKey="Blankespoor, J" uniqKey="Blankespoor J">J Blankespoor</name>
</author>
<author>
<name sortKey="Vlek, R" uniqKey="Vlek R">R Vlek</name>
</author>
<author>
<name sortKey="Desain, I" uniqKey="Desain I">I Desain</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Klobassa, Ds" uniqKey="Klobassa D">DS Klobassa</name>
</author>
<author>
<name sortKey="Vaughan, Tm" uniqKey="Vaughan T">TM Vaughan</name>
</author>
<author>
<name sortKey="Brunner, P" uniqKey="Brunner P">P Brunner</name>
</author>
<author>
<name sortKey="Schwartz, Ne" uniqKey="Schwartz N">NE Schwartz</name>
</author>
<author>
<name sortKey="Wolpaw, Jr" uniqKey="Wolpaw J">JR Wolpaw</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Muller Putz, Gr" uniqKey="Muller Putz G">GR Müller-Putz</name>
</author>
<author>
<name sortKey="Scherer, R" uniqKey="Scherer R">R Scherer</name>
</author>
<author>
<name sortKey="Neuper, C" uniqKey="Neuper C">C Neuper</name>
</author>
<author>
<name sortKey="Pfurtscheller, G" uniqKey="Pfurtscheller G">G Pfurtscheller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cincotti, F" uniqKey="Cincotti F">F Cincotti</name>
</author>
<author>
<name sortKey="Kauhanen, L" uniqKey="Kauhanen L">L Kauhanen</name>
</author>
<author>
<name sortKey="Aloise, F" uniqKey="Aloise F">F Aloise</name>
</author>
<author>
<name sortKey="Palom Ki, T" uniqKey="Palom Ki T">T Palomäki</name>
</author>
<author>
<name sortKey="Caporusso, N" uniqKey="Caporusso N">N Caporusso</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chatterjee, A" uniqKey="Chatterjee A">A Chatterjee</name>
</author>
<author>
<name sortKey="Aggarwal, V" uniqKey="Aggarwal V">V Aggarwal</name>
</author>
<author>
<name sortKey="Ramos, A" uniqKey="Ramos A">A Ramos</name>
</author>
<author>
<name sortKey="Acharya, S" uniqKey="Acharya S">S Acharya</name>
</author>
<author>
<name sortKey="Thakor, Nv" uniqKey="Thakor N">NV Thakor</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brouwer, Am" uniqKey="Brouwer A">AM Brouwer</name>
</author>
<author>
<name sortKey="Van Erp, J" uniqKey="Van Erp J">J van Erp</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kubler, A" uniqKey="Kubler A">A Kübler</name>
</author>
<author>
<name sortKey="Neumann, N" uniqKey="Neumann N">N Neumann</name>
</author>
<author>
<name sortKey="Wilhelm, B" uniqKey="Wilhelm B">B Wilhelm</name>
</author>
<author>
<name sortKey="Hinterberger, T" uniqKey="Hinterberger T">T Hinterberger</name>
</author>
<author>
<name sortKey="Birbaumer, N" uniqKey="Birbaumer N">N Birbaumer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Picton, Tw" uniqKey="Picton T">TW Picton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Polich, J" uniqKey="Polich J">J Polich</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Conroy, M" uniqKey="Conroy M">M Conroy</name>
</author>
<author>
<name sortKey="Polich, J" uniqKey="Polich J">J Polich</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gonsalvez, Cl" uniqKey="Gonsalvez C">CL Gonsalvez</name>
</author>
<author>
<name sortKey="Polich, J" uniqKey="Polich J">J Polich</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Farwell, La" uniqKey="Farwell L">LA Farwell</name>
</author>
<author>
<name sortKey="Donchin, E" uniqKey="Donchin E">E Donchin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lenhardt, A" uniqKey="Lenhardt A">A Lenhardt</name>
</author>
<author>
<name sortKey="Kaper, M" uniqKey="Kaper M">M Kaper</name>
</author>
<author>
<name sortKey="Ritter, Hj" uniqKey="Ritter H">HJ Ritter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nijboer, F" uniqKey="Nijboer F">F Nijboer</name>
</author>
<author>
<name sortKey="Sellers, E" uniqKey="Sellers E">E Sellers</name>
</author>
<author>
<name sortKey="Mellinger, J" uniqKey="Mellinger J">J Mellinger</name>
</author>
<author>
<name sortKey="Jordan, M" uniqKey="Jordan M">M Jordan</name>
</author>
<author>
<name sortKey="Matuz, T" uniqKey="Matuz T">T Matuz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Treder, M" uniqKey="Treder M">M Treder</name>
</author>
<author>
<name sortKey="Venthur, B" uniqKey="Venthur B">B Venthur</name>
</author>
<author>
<name sortKey="Blankertz, B" uniqKey="Blankertz B">B Blankertz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bayliss, Jd" uniqKey="Bayliss J">JD Bayliss</name>
</author>
<author>
<name sortKey="Inverso, Sa" uniqKey="Inverso S">SA Inverso</name>
</author>
<author>
<name sortKey="Tentler, A" uniqKey="Tentler A">A Tentler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Piccione, F" uniqKey="Piccione F">F Piccione</name>
</author>
<author>
<name sortKey="Giorgi, F" uniqKey="Giorgi F">F Giorgi</name>
</author>
<author>
<name sortKey="Tonin, P" uniqKey="Tonin P">P Tonin</name>
</author>
<author>
<name sortKey="Priftis, K" uniqKey="Priftis K">K Priftis</name>
</author>
<author>
<name sortKey="Giove, S" uniqKey="Giove S">S Giove</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Middlebrooks, Jc" uniqKey="Middlebrooks J">JC Middlebrooks</name>
</author>
<author>
<name sortKey="Green, Dm" uniqKey="Green D">DM Green</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brungart, Ds" uniqKey="Brungart D">DS Brungart</name>
</author>
<author>
<name sortKey="Durlach, Ni" uniqKey="Durlach N">NI Durlach</name>
</author>
<author>
<name sortKey="Rabinowitz, Wm" uniqKey="Rabinowitz W">WM Rabinowitz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mondor, Ta" uniqKey="Mondor T">TA Mondor</name>
</author>
<author>
<name sortKey="Zatorre, Rj" uniqKey="Zatorre R">RJ Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Teder S Lej Rvi, Wa" uniqKey="Teder S Lej Rvi W">WA Teder-Sälejärvi</name>
</author>
<author>
<name sortKey="Hillyard, Sa" uniqKey="Hillyard S">SA Hillyard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sonnadara, Rr" uniqKey="Sonnadara R">RR Sonnadara</name>
</author>
<author>
<name sortKey="Alain, C" uniqKey="Alain C">C Alain</name>
</author>
<author>
<name sortKey="Trainor, Lj" uniqKey="Trainor L">LJ Trainor</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Deouell, Ly" uniqKey="Deouell L">LY Deouell</name>
</author>
<author>
<name sortKey="Parnes, A" uniqKey="Parnes A">A Parnes</name>
</author>
<author>
<name sortKey="Pickard, N" uniqKey="Pickard N">N Pickard</name>
</author>
<author>
<name sortKey="Knight, Rt" uniqKey="Knight R">RT Knight</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rader, S" uniqKey="Rader S">S Rader</name>
</author>
<author>
<name sortKey="Holmes, J" uniqKey="Holmes J">J Holmes</name>
</author>
<author>
<name sortKey="Golob, E" uniqKey="Golob E">E Golob</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brainard, Dh" uniqKey="Brainard D">DH Brainard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Green, Md" uniqKey="Green M">MD Green</name>
</author>
<author>
<name sortKey="Swets, Ja" uniqKey="Swets J">JA Swets</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fawcett, T" uniqKey="Fawcett T">T Fawcett</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Duda, Ro" uniqKey="Duda R">RO Duda</name>
</author>
<author>
<name sortKey="Hart, Pe" uniqKey="Hart P">PE Hart</name>
</author>
<author>
<name sortKey="Stork, Dg" uniqKey="Stork D">DG Stork</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Muller, Kr" uniqKey="Muller K">KR Müller</name>
</author>
<author>
<name sortKey="Krauledat, M" uniqKey="Krauledat M">M Krauledat</name>
</author>
<author>
<name sortKey="Dornhege, G" uniqKey="Dornhege G">G Dornhege</name>
</author>
<author>
<name sortKey="Curio, G" uniqKey="Curio G">G Curio</name>
</author>
<author>
<name sortKey="Blankertz, B" uniqKey="Blankertz B">B Blankertz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Student" uniqKey="Student">Student</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ledoit, O" uniqKey="Ledoit O">O Ledoit</name>
</author>
<author>
<name sortKey="Wolf, M" uniqKey="Wolf M">M Wolf</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wolpaw, Jr" uniqKey="Wolpaw J">JR Wolpaw</name>
</author>
<author>
<name sortKey="Birbaumer, N" uniqKey="Birbaumer N">N Birbaumer</name>
</author>
<author>
<name sortKey="Heetderks, Wj" uniqKey="Heetderks W">WJ Heetderks</name>
</author>
<author>
<name sortKey="Mcfarland, Dj" uniqKey="Mcfarland D">DJ McFarland</name>
</author>
<author>
<name sortKey="Peckham, Ph" uniqKey="Peckham P">PH Peckham</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Teder S Lej Rvi, Wa" uniqKey="Teder S Lej Rvi W">WA Teder-Sälejärvi</name>
</author>
<author>
<name sortKey="Hillyard, Sa" uniqKey="Hillyard S">SA Hillyard</name>
</author>
<author>
<name sortKey="Roder, B" uniqKey="Roder B">B Röder</name>
</author>
<author>
<name sortKey="Neville, Hj" uniqKey="Neville H">HJ Neville</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Polich, J" uniqKey="Polich J">J Polich</name>
</author>
<author>
<name sortKey="Criado, Jr" uniqKey="Criado J">JR Criado</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mertens, R" uniqKey="Mertens R">R Mertens</name>
</author>
<author>
<name sortKey="Polich, J" uniqKey="Polich J">J Polich</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Katayama, J" uniqKey="Katayama J">J Katayama</name>
</author>
<author>
<name sortKey="Polich, J" uniqKey="Polich J">J Polich</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Serby, H" uniqKey="Serby H">H Serby</name>
</author>
<author>
<name sortKey="Yom Tov, E" uniqKey="Yom Tov E">E Yom-Tov</name>
</author>
<author>
<name sortKey="Inbar, Gf" uniqKey="Inbar G">GF Inbar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tomiaka, R" uniqKey="Tomiaka R">R Tomiaka</name>
</author>
<author>
<name sortKey="Haufe, S" uniqKey="Haufe S">S Haufe</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">20368976</article-id>
<article-id pub-id-type="pmc">2848564</article-id>
<article-id pub-id-type="publisher-id">09-PONE-RA-10370R1</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0009813</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline">
<subject>Neuroscience/Cognitive Neuroscience</subject>
<subject>Neuroscience/Sensory Systems</subject>
<subject>Physiology/Sensory Systems</subject>
<subject>Neurological Disorders/Neuromuscular Diseases</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>A New Auditory Multi-Class Brain-Computer Interface Paradigm: Spatial Hearing as an Informative Cue</article-title>
<alt-title alt-title-type="running-head">A New Auditory BCI Paradigm</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Schreuder</surname>
<given-names>Martijn</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Blankertz</surname>
<given-names>Benjamin</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Tangermann</surname>
<given-names>Michael</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<label>1</label>
<addr-line>Machine Learning Department, Berlin Institute of Technology, Berlin, Germany</addr-line>
</aff>
<aff id="aff2">
<label>2</label>
<addr-line>Intelligent Data Analysis Group, Fraunhofer FIRST, Berlin, Germany</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Yan</surname>
<given-names>Jun</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">University of Calgary, Canada</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>martijn@cs.tu-berlin.de</email>
</corresp>
<fn fn-type="con">
<p>Conceived and designed the experiments: MS BB MT. Performed the experiments: MS. Analyzed the data: MS MT. Contributed reagents/materials/analysis tools: BB MT. Wrote the paper: MS BB MT.</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<year>2010</year>
</pub-date>
<pub-date pub-type="epub">
<day>1</day>
<month>4</month>
<year>2010</year>
</pub-date>
<volume>5</volume>
<issue>4</issue>
<elocation-id>e9813</elocation-id>
<history>
<date date-type="received">
<day>5</day>
<month>5</month>
<year>2009</year>
</date>
<date date-type="accepted">
<day>16</day>
<month>2</month>
<year>2010</year>
</date>
</history>
<permissions>
<copyright-statement>Schreuder et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</copyright-statement>
</permissions>
<abstract>
<p>Most P300-based brain-computer interface (BCI) approaches use the visual modality for stimulation. For use with patients suffering from amyotrophic lateral sclerosis (ALS) this might not be the preferable choice because of sight deterioration. Moreover, using a modality different from the visual one minimizes interference with possible visual feedback. Therefore, a multi-class BCI paradigm is proposed that uses spatially distributed, auditory cues. Ten healthy subjects participated in an offline oddball task with the spatial location of the stimuli being a discriminating cue. Experiments were done in free field, with an individual speaker for each location. Different inter-stimulus intervals of 1000 ms, 300 ms and 175 ms were tested. With averaging over multiple repetitions, selection scores went over 90% for most conditions, i.e., in over 90% of the trials the correct location was selected. One subject reached a 100% correct score. Corresponding information transfer rates were high, up to an average score of 17.39 bits/minute for the 175 ms condition (best subject 25.20 bits/minute). When presenting the stimuli through a single speaker, thus effectively canceling the spatial properties of the cue, selection scores went down below 70% for most subjects. We conclude that the proposed spatial auditory paradigm is successful for healthy subjects and shows promising results that may lead to a fast BCI that solely relies on the auditory sense.</p>
</abstract>
<counts>
<page-count count="14"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>Brain-computer interfaces (BCI) are a direct connection between the brain and a computer, without using any of the brain's natural output pathways
<xref ref-type="bibr" rid="pone.0009813-Wolpaw1">[1]</xref>
. Most BCI research is aimed at developing tools for patients with severe motor disabilities and paralysis, specifically patients suffering from amyotrophic lateral sclerosis (ALS). This group of potential users could particularly benefit from BCI technology, since the output pathways that are normally employed by the brain can no longer be used. Completely locked-in syndrome (CLIS) patients have lost all volitional control over their muscles, including the eye muscles, and are therefore out of reach for conventional augmentation devices based on rudimentary muscle control. A BCI might be one of the last options for communication for these patients.</p>
<p>BCI research over the last decades has explored a large variety of possible configurations for such a BCI. Among these are the choice of measurement method, physiological brain feature, analysis method, and modality of interaction. So far, the primary choice of interaction modality has been vision. Most current BCI systems rely to some extent on the ability of the subject to control the eyes. However, patients' inability to direct gaze, adjust focus or perform eye blinks may prove the use of the visual modality in BCI applications difficult. Therefore, other modalities are now being explored, such as audition
<xref ref-type="bibr" rid="pone.0009813-Hill1">[2]</xref>
<xref ref-type="bibr" rid="pone.0009813-Klobassa1">[10]</xref>
and touch
<xref ref-type="bibr" rid="pone.0009813-MllerPutz1">[11]</xref>
<xref ref-type="bibr" rid="pone.0009813-Brouwer1">[14]</xref>
in order to make BCI independent of vision. Moreover, when using such alternative methods for patients with residual vision, the visual modality could be used exclusively for feedback, thereby preventing interaction between feedback and stimulation.</p>
<p>Current auditory BCI systems mostly result in a binary decision. Binary decisions carry less information than multi-class decisions. Although for some tasks a multi-class BCI is the best choice, it is difficult to cope with multiple options in the auditory domain. In the current research we look for alternative ways of stimulus presentation that allow for a multi-class auditory BCI. We hypothesize that by adding spatial information to the cues, subjects will be able to discriminate a larger number of classes. If classification of the P300 deflection in response to this spatial information is possible, it introduces a new means of creating a truly auditory BCI. Such a setup would be flexible in the number of classes used and could potentially increase the speed of auditory BCI.</p>
<sec id="s1a">
<title>Auditory BCI</title>
<p>Hill et al.
<xref ref-type="bibr" rid="pone.0009813-Hill1">[2]</xref>
used event-related potentials (ERPs) that are triggered by auditory stimuli for a binary BCI. They presented two sequences of deviant (target) and standard (non-target) tones to the subject. Both ears received a sequence with a different inter-stimulus interval (ISI) at the same time. The subject's task was to focus on one of the streams by counting the number of targets in that stream. The time samples for left and right non-target tones were taken from the same four seconds of EEG, averaged and subsequently concatenated. Because of the different ISIs, ERPs in response to the left channel would average out over the right-channel samples and vice versa. This concatenated feature was used for classification. Although the classification rate varied widely between subjects, their results are promising for the use of auditory ERPs as a feature for BCI.</p>
<p>A similar approach was recently reported in
<xref ref-type="bibr" rid="pone.0009813-Kanoh1">[7]</xref>
. They used the human capacity to segregate audio streams to create a binary BCI. Two different oddball audio streams were presented to the subject's right ear. When the ISI of such streams is short, the subject naturally segregates them into independent streams. For classification, the ERPs to both streams were classified and the target stream was determined by voting over multiple presentations. Although it is a binary BCI, they argue that it could be extended by adding more streams, thus increasing the number of classes. Unfortunately, for their reported results they used all data for both training and testing, rather than using a cross-validation method.</p>
<p>Another attempt to create a BCI that is independent of vision used auditory feedback to inform subjects on their sensory motor rhythm (SMR)
<xref ref-type="bibr" rid="pone.0009813-Nijboer1">[3]</xref>
. By adjusting their SMR, subjects were able to make a binary choice. Although initial performance for most subjects was better with visual feedback, this difference decreased with learning. Thus, as they conclude, auditory feedback can effectively be used for a BCI based on SMR.</p>
<p>Similarly, Hinterberger et al.
<xref ref-type="bibr" rid="pone.0009813-Hinterberger1">[8]</xref>
used auditory feedback to inform subjects on their control of the slow cortical potential (SCP). Although two subjects reached the 70% accuracy score that is assumed to be minimal for useful BCI operation
<xref ref-type="bibr" rid="pone.0009813-Kbler1">[15]</xref>
, they generally performed worse than subjects with visual feedback. Furthermore, a BCI based on SCPs typically requires several sessions of training until an acceptable level of BCI control can be obtained.</p>
<p>Still other ways of using the auditory modality have been investigated, such as frequency tagging. When a high-frequency tone with a low-frequency envelope is presented to a subject, the frequency of the envelope has been found to resonate in the EEG signal. The extent of this resonance can be influenced to some degree by selective attention
<xref ref-type="bibr" rid="pone.0009813-Desain1">[6]</xref>
. This envelope could also be constructed as pseudo-random noise, which allows for the use of multiple streams
<xref ref-type="bibr" rid="pone.0009813-Farquhar1">[9]</xref>
.</p>
</sec>
<sec id="s1b">
<title>P300 response</title>
<p>The P300 feature of the human brain is a well-described positive deflection of the ongoing EEG signal
<xref ref-type="bibr" rid="pone.0009813-Picton1">[16]</xref>
,
<xref ref-type="bibr" rid="pone.0009813-Polich1">[17]</xref>
with a latency of 300+ ms to an event. In most people it is present without training in response to an attended rare event. The task that is generally used for eliciting a P300 wave is the oddball paradigm, where an attended target stimulus is infrequently presented between non-target stimuli. The attended stimulus elicits a P300 response in the brain, which generally has the largest amplitude at the midline Pz electrode and parietal regions
<xref ref-type="bibr" rid="pone.0009813-Conroy1">[18]</xref>
. The P300 was shown to be greater with larger target-to-target intervals
<xref ref-type="bibr" rid="pone.0009813-Gonsalvez1">[19]</xref>
. Stimulus order in an oddball paradigm should be random to prevent expectation of the target stimulus.</p>
<p>In the setting of BCI, the short latency of the P300 allows for fast communication speeds. Stimuli can even be presented at a pace faster than the actual timeline of the P300, thereby further increasing the efficiency. This has primarily been done in the visual P300 speller
<xref ref-type="bibr" rid="pone.0009813-Sellers1">[4]</xref>
,
<xref ref-type="bibr" rid="pone.0009813-Farwell1">[20]</xref>
<xref ref-type="bibr" rid="pone.0009813-Nijboer2">[22]</xref>
. Because the P300 response is elicited by an external stimulus, operation speed is dictated by the rate of presentation of these stimuli. This is referred to as synchronous operation mode.</p>
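For reference, the information transfer rates quoted in the abstract (bits/minute) are commonly derived from the Wolpaw definition of bits per selection combined with the selection duration. A minimal Python sketch, assuming this standard formula and a purely hypothetical selection time (the excerpt itself does not spell out the computation):

import math

def wolpaw_bits_per_selection(n_classes, accuracy):
    # Bits conveyed by one selection under the Wolpaw ITR definition.
    bits = math.log2(n_classes)
    if 0.0 < accuracy < 1.0:
        bits += accuracy * math.log2(accuracy)
        bits += (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1))
    return bits

# Hypothetical example: a 5-class selection at 90% accuracy taking
# 10 s per selection yields roughly 9.9 bits/minute.
print(wolpaw_bits_per_selection(5, 0.90) * 60 / 10.0)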
<p>Although it is well established that the P300 component only requires covert attention, it turns out that the performance of visual P300 BCIs degrades if the target stimulus is not overtly fixated
<xref ref-type="bibr" rid="pone.0009813-Treder1">[23]</xref>
. Overt fixation is not a relevant factor in the auditory domain, but the P300 was shown to be stronger for attended stimuli in auditory mode
<xref ref-type="bibr" rid="pone.0009813-Sellers1">[4]</xref>
. They showed in a four-stimulus oddball task that the P300 response is present when the target stimulus is presented visually, auditorily, or as a combination of both. Similarly, the P300 response was reported to be attention-dependent when tactile stimuli are used
<xref ref-type="bibr" rid="pone.0009813-Brouwer1">[14]</xref>
.</p>
<p>The visual P300 response has been used for BCI
<xref ref-type="bibr" rid="pone.0009813-Bayliss1">[24]</xref>
,
<xref ref-type="bibr" rid="pone.0009813-Piccione1">[25]</xref>
, in particular for creating a speller application
<xref ref-type="bibr" rid="pone.0009813-Sellers1">[4]</xref>
,
<xref ref-type="bibr" rid="pone.0009813-Farwell1">[20]</xref>
<xref ref-type="bibr" rid="pone.0009813-Nijboer2">[22]</xref>
. In the latter, a matrix of characters is presented and the rows and columns light up in random sequence. The subject attends to the character he/she wants to select by counting the number of illuminations. When the row or column containing the character lights up it elicits a P300 wave, which can be detected from the EEG. Thus, the row and column that give a P300 response define the character that is to be selected. Nijboer et al.
<xref ref-type="bibr" rid="pone.0009813-Nijboer2">[22]</xref>
showed that this paradigm can be successfully used by ALS patients.</p>
<p>A similar selection process has been devised for the auditory modality
<xref ref-type="bibr" rid="pone.0009813-Furdea1">[5]</xref>
. The matrix was still shown for reference purposes, but the columns and rows no longer flashed. Instead, they were marked by a spoken number that was presented to the subject. The subject no longer attended visually, but was instructed to attend to the spoken number that identified the character. They compared the performance with the visual speller. Although the visual speller had significantly better results, satisfactory results were found in the auditory condition as well, with performance reaching up to 100% for one subject. However, auditory stimulation with spoken numbers is time-consuming, and selection of a letter could take as long as 3.6 minutes when using multiple iterations.</p>
<p>In a recent publication
<xref ref-type="bibr" rid="pone.0009813-Klobassa1">[10]</xref>
the rows and columns were sequentially represented by six natural sounds. The subject was visually informed about which sound corresponded to a row or column, a mapping that most subjects could learn within two sessions. Subjects were divided into two groups: one group received only the auditory stimuli, whereas the second group received concurrent auditory and visual stimulation. For the second group, the number of trials with visual stimulation was gradually reduced. After 11 sessions, both groups received only auditory stimulation. Although accuracy in session one was lowest for the auditory-only group, their performance in the 11th session had increased to a level comparable to that of the combined stimulation group.</p>
<p>The oddball principle has also been used for action selection through spoken word stimuli
<xref ref-type="bibr" rid="pone.0009813-Sellers1">[4]</xref>
. In an oddball paradigm setup they showed that short spoken words lead to a P300 response when attended to. They used simple words (‘
<italic>YES</italic>
’, ‘
<italic>NO</italic>
’, ‘
<italic>PASS</italic>
’, ‘
<italic>END</italic>
’) as target/non-target combinations. Although this leads to a less distinct P300 than in the visual modality, it could be classified by averaging over subtrials.</p>
</sec>
<sec id="s1c">
<title>Spatial hearing</title>
<p>Localization of sounds in space is one of the processes that our brain performs without mental effort. For an extensive review of spatial hearing in humans, see
<xref ref-type="bibr" rid="pone.0009813-Middlebrooks1">[26]</xref>
. Several behavioral studies have shown the ability of human listeners to distinguish sounds in space
<xref ref-type="bibr" rid="pone.0009813-Brungart1">[27]</xref>
<xref ref-type="bibr" rid="pone.0009813-TederSlejrvi1">[29]</xref>
. Several of these studies further showed that when subjects focus on a particular direction, their attentional resources appear to be distributed in a gradient, with decreasing alertness when moving away from the attended direction
<xref ref-type="bibr" rid="pone.0009813-Mondor1">[28]</xref>
,
<xref ref-type="bibr" rid="pone.0009813-TederSlejrvi1">[29]</xref>
.</p>
<p>Although most oddball experiments employ a cue that differs in pitch, amplitude or duration of the stimulus sound, other properties of sound have been investigated. One such property is the spatial location of the stimulus. In their study,
<xref ref-type="bibr" rid="pone.0009813-TederSlejrvi1">[29]</xref>
essentially presented seven oddball paradigms to the subject. An array of seven speakers (with 9° between them) presented non-targets and targets from all directions in random order. Subjects were asked to attend left, front or right, and only targets coming from the attended direction reliably elicited a P300. In this case, the spatial location is not used to separate non-targets from targets but rather to separate different streams. A more recent study did use the spatial separation (albeit virtually, through stereo headphones) to set aside the frequent non-targets (0°, straight ahead) from the infrequent targets (±30° and ±90°)
<xref ref-type="bibr" rid="pone.0009813-Sonnadara1">[30]</xref>
. However, because the subjects were engaged in a passive listening task and meanwhile watched a movie, no P300 responses were elicited; rather, the focus was on the early mismatch negativity potential. It does show, however, that spatial location can be a cue-determining factor. A similar experiment was performed in free field with only 10° of spatial separation
<xref ref-type="bibr" rid="pone.0009813-Deouell1">[31]</xref>
.</p>
<p>An oddball paradigm purely based on spatial location has been used in
<xref ref-type="bibr" rid="pone.0009813-Rader1">[32]</xref>
, but merely as training for detecting stimuli from different locations in a later task. No behavioral or neurophysiological data are reported for this condition.</p>
</sec>
</sec>
<sec id="s2" sec-type="methods">
<title>Methods</title>
<sec id="s2a">
<title>Ethics statement</title>
<p>Procedures were positively evaluated by the Ethics Committee of the Charité University Hospital (number EA4/073/09). All subjects provided verbal informed consent, and the subsequent analysis and presentation of the data were anonymized.</p>
</sec>
<sec id="s2b">
<title>Participants</title>
<p>Two sets of experiments were performed. The first set (physiological experiments) included seven healthy volunteers (five male, mean age 29.1 years, range 25–34 years) and was used for validation of the setup and assessment of the physiological response. All subjects were volunteering group members and had some previous experience with BCI, mainly based on imagined-movement tasks. The second set (BCI experiments) included five healthy volunteers (three male, mean age 32.4 years, range 22–55 years), two of whom were paid subjects with no previous experience in BCI. They were compensated for their time with eight euros per hour. Of the other three volunteering group members, two had also participated in the first round.</p>
<p>Subjects reported to be free of neurological symptoms and to have normal hearing, although two subjects (VPip and VPig) reported having difficulty with spatial localization of sounds in natural situations and subject VPzq reported a high-pitched tinnitus in the right ear.</p>
</sec>
<sec id="s2c">
<title>Task, procedure, and design</title>
<p>Subjects sat in a comfortable chair, facing a screen with a fixation cross. They were surrounded by eight speakers at ear height. The speakers were spaced evenly, with a 45° angle between them, at approximately one meter distance from the subject's ears (see
<xref ref-type="fig" rid="pone-0009813-g001">Figure 1</xref>
). Speakers were calibrated to a common stimulus intensity of approximately 58 dB. At the start of each recording session, subjects were asked to judge the subjective equality of the loudness from all directions and to adjust it if necessary. The room was neither electromagnetically shielded, nor were any sound attenuation precautions taken. All experiments consisted of an auditory oddball task that varied to some degree. Before the experiments, subjects were asked to minimize eye movements and other muscle contractions during the experiment. Stimuli were generated in Matlab and presented using the PsychToolbox
<xref ref-type="bibr" rid="pone.0009813-Brainard1">[33]</xref>
. A multichannel, low-latency firewire soundcard from M-Audio (M-Audio Firewire 410) was used to individually control the low-budget, off-the-shelf computer speakers.</p>
<fig id="pone-0009813-g001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009813.g001</object-id>
<label>Figure 1</label>
<caption>
<title>The experimental setup.</title>
<p>For the physiological experiments, all eight speakers were used. For the BCI experiments, only the front semi-circle was used (speakers 1,2,3,7, and 8).</p>
</caption>
<graphic xlink:href="pone.0009813.g001"></graphic>
</fig>
<sec id="s2c1">
<title>Physiological experiments</title>
<p>First, experiments were performed to assess the physiological response to the setup. All eight speakers were used and the stimuli consisted of 75 ms bandpass filtered white noise (150–8000 Hz) with 3 ms rise and fall. The stimulus for all speakers was the same, making spatial location the only discriminating cue. Any one of the eight directions could be a target (probability 12.5%), leaving the others as non-targets (probability 87.5%). Therefore, this can be considered a classic oddball paradigm. The target direction was indicated prior to each block, both visually on the screen and by presenting the stimulus from that location.</p>
<p>In condition C1000, one trial consisted of 80 subtrials, ten for each individual location. We recorded 32 such trials, making a total of 2560 subtrials. The inter-stimulus interval (ISI) was set to one second with a latency jitter (mean 25 ms, SD 14.4 ms). Subjects were asked to mentally keep track of the number of target stimulations.</p>
<p>In order to get an indication of the subjects' recognition performance, a second condition (condition Cr) was introduced. Instead of mentally counting, subjects were asked to respond with a key press each time the target direction was stimulated. To allow for a response, the ISI was set to two seconds with the same latency jitter. Between 576 and 768 subtrials per subject were recorded. Blocks of both conditions were mixed to prevent time biases.</p>
<p>If necessary, an initial round of stimuli was given before recording to familiarize the subject with the stimuli. Presentation order was pseudo random with the restriction that all eight directions were stimulated in one block before continuing to the next block.</p>
</sec>
<sec id="s2c2">
<title>BCI experiments</title>
<p>For the BCI experiments, the paradigm was altered in several ways based on findings from the physiological experiments. First, the number of speakers was reduced to the frontal five to make the task easier. Thus, the target was presented with 20% probability and non-targets with 80% probability. It has been shown that this is rare enough to produce a P300 response
<xref ref-type="bibr" rid="pone.0009813-Sellers1">[4]</xref>
. All five speakers were given a unique, complex 40 ms stimulus, built from band-pass filtered white noise with a tone overlay (see
<xref ref-type="table" rid="pone-0009813-t001">Table 1</xref>
and
<xref ref-type="supplementary-material" rid="pone.0009813.s001">File S1</xref>
). The discriminating cues were now both the physical properties and the spatial location of the stimulus. Latency jitter was omitted. In order to explore the boundaries of the paradigm, three different conditions were tested.</p>
<table-wrap id="pone-0009813-t001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009813.t001</object-id>
<label>Table 1</label>
<caption>
<title>Cue properties in BCI experimental round.</title>
</caption>
<alternatives>
<graphic id="pone-0009813-t001-1" xlink:href="pone.0009813.t001"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1">Direction</td>
<td align="left" rowspan="1" colspan="1">Nr</td>
<td align="left" rowspan="1" colspan="1">Lower bound (Hz)</td>
<td align="left" rowspan="1" colspan="1">Upper bound (Hz)</td>
<td align="left" rowspan="1" colspan="1">Tone (Hz)</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Left</td>
<td align="left" rowspan="1" colspan="1">7</td>
<td align="left" rowspan="1" colspan="1">320</td>
<td align="left" rowspan="1" colspan="1">2500</td>
<td align="left" rowspan="1" colspan="1">440 (a)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Front-left</td>
<td align="left" rowspan="1" colspan="1">8</td>
<td align="left" rowspan="1" colspan="1">416</td>
<td align="left" rowspan="1" colspan="1">3250</td>
<td align="left" rowspan="1" colspan="1">494 (b)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Front</td>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">540</td>
<td align="left" rowspan="1" colspan="1">4225</td>
<td align="left" rowspan="1" colspan="1">554 (cis)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Front-right</td>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">703</td>
<td align="left" rowspan="1" colspan="1">5493</td>
<td align="left" rowspan="1" colspan="1">622 (dis)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Right</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">914</td>
<td align="left" rowspan="1" colspan="1">7140</td>
<td align="left" rowspan="1" colspan="1">699 (f)</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt101">
<label></label>
<p>
<italic>Nr</italic>
refers to the speakers labels given in
<xref ref-type="fig" rid="pone-0009813-g001">Figure 1</xref>
.
<italic>Lower</italic>
- and
<italic>Upper bound</italic>
are the boundary frequencies for the band pass filter that is applied to the white noise.
<italic>Tone</italic>
is the fundamental frequency of the tone overlay. Seven harmonics were used, with decaying amplitude. Tone frequencies are chosen to have a full note in between adjacent stimuli.</p>
</fn>
</table-wrap-foot>
</table-wrap>
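To illustrate how cues with the Table 1 properties could be synthesized, here is a minimal Python sketch for the front direction. The filter order, the 1/k harmonic amplitude decay, the equal-weight noise/tone mix and the ramp length are assumptions for illustration, not parameters taken from the paper:

import numpy as np
from scipy.signal import butter, lfilter

def make_cue(fs=44100, dur=0.040, band=(540.0, 4225.0), f0=554.0,
             n_harmonics=7, ramp=0.003):
    # Band-pass filtered white noise (Table 1 bounds for 'Front').
    t = np.arange(int(dur * fs)) / fs
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
    noise = lfilter(b, a, np.random.randn(t.size))
    # Tone overlay: fundamental plus seven harmonics with decaying
    # amplitude (the 1/k decay is an assumption).
    tone = np.sin(2 * np.pi * f0 * t)
    for k in range(2, n_harmonics + 2):
        tone += np.sin(2 * np.pi * k * f0 * t) / k
    # Equal-weight mix (assumption) and short linear ramps against clicks.
    cue = noise / np.abs(noise).max() + tone / np.abs(tone).max()
    n_ramp = int(ramp * fs)
    cue[:n_ramp] *= np.linspace(0, 1, n_ramp)
    cue[-n_ramp:] *= np.linspace(1, 0, n_ramp)
    return cue / np.abs(cue).max()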
<p>The first two conditions differed in their ISI (300 ms for condition C300, 175 ms for condition C175). A trial consisted of 75 subtrials, 15 for each location. For condition C300, we recorded 50 such trials, making a total of 3750 subtrials. For condition C175, 40 such trials were recorded, making a total of 3000 subtrials.</p>
<p>The third condition (C300s) also had a 300 ms ISI. However, all stimuli were now presented through a single speaker (front), thereby leaving the pitch properties of the stimulus as the only discriminating cue. Only 20 trials of 75 subtrials were recorded for this condition, making a total of 1500 subtrials. Blocks of the three conditions were mixed to prevent time biases.</p>
<p>Stimulus order now had the extra constraint that there were at least two other directions between presentations of the same direction, to prevent too much overlap of target time frames. If necessary, an initial round of stimuli was given before recording to familiarize the subject with the stimuli.</p>
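A minimal sketch of one way to generate such a constrained pseudo-random order; restarting on the rare dead end near the end of the sequence is an implementation choice, not something the paper specifies:

import random

def constrained_order(n_dirs=5, reps=15, min_gap=2):
    # At least `min_gap` other directions must separate two
    # presentations of the same direction.
    while True:
        counts = [reps] * n_dirs
        seq = []
        while len(seq) < n_dirs * reps:
            recent = seq[-min_gap:]
            allowed = [d for d in range(n_dirs) if counts[d] and d not in recent]
            if not allowed:          # dead end: start over
                break
            d = random.choice(allowed)
            seq.append(d)
            counts[d] -= 1
        if len(seq) == n_dirs * reps:
            return seq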
</sec>
</sec>
<sec id="s2d">
<title>Artifact rejection</title>
<p>For artifact rejection, a simple threshold method was used. The epoched data was first detrended to keep slow drifts from reaching the threshold. Then, subtrials with a deflection greater than 70 µV over the ocular channels, compared to baseline, were marked as artifacts. These subtrials were then rejected from the original data and excluded from further analysis. This method mainly excludes eye artifacts.</p>
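A minimal sketch of this rejection step, assuming epoched data in microvolts and using the first sample after detrending as the baseline reference (the paper's exact baseline handling is not spelled out in this excerpt):

import numpy as np
from scipy.signal import detrend

def keep_mask(epochs, eog_idx, thresh_uv=70.0):
    # epochs: (n_subtrials, n_channels, n_samples) in microvolts;
    # eog_idx: indices of the ocular channels.
    eog = detrend(epochs[:, eog_idx, :], axis=-1)     # remove slow drifts
    deflection = np.abs(eog - eog[..., :1]).max(axis=(-2, -1))
    return deflection < thresh_uv                     # True = keep subtrial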
</sec>
<sec id="s2e">
<title>Data acquisition</title>
<p>EEG was recorded monopolarly using a varying number of Ag/AgCl electrodes. Channels were referenced to the nose. Electrooculogram (EOG) was recorded with two bipolar channels over the eyes. The signals were amplified using a Brain Products 128-channel amplifier, sampled at 1 kHz and filtered by an analog bandpass filter between 0.1 and 250 Hz before being digitized and stored for offline analysis. Further analyses were done in Matlab (The Mathworks, Version 7.4).</p>
<p>For visual inspection, the raw data was low-pass filtered with an order-8 Chebyshev II filter (30 Hz pass frequency, 42 Hz stop frequency, 50 dB damping) to remove obvious 50 Hz artifacts from external sources. The filter was applied to the data both forward and backward to minimize phase shifts. After filtering, the data was downsampled to 100 Hz and epoched between -150 ms and 800 ms relative to stimulus onset, using the first 150 ms as baseline. Artifacts were disregarded by the simple method described above. P300 latencies and amplitudes were calculated on the 1000 Hz data directly, using the same filters as described above.</p>
<p>For classification purposes the same filter was used before downsampling to 100 Hz. However, the filter was applied causally (forward only) to ensure portability to the online setting, where no future samples are available. Data was epoched in the same way as described above. The same artifact rejection method was used.</p>
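A minimal sketch of the two filtering paths in Python/SciPy; mapping the stated design (order 8, 42 Hz stop frequency, 50 dB damping) onto scipy.signal.cheby2, and the epoch indices, are assumptions:

import numpy as np
from scipy.signal import cheby2, filtfilt, lfilter

fs = 1000.0
b, a = cheby2(8, 50, 42 / (fs / 2))    # order-8 Chebyshev II low-pass
x = np.random.randn(int(10 * fs))      # stand-in for one EEG channel
x_inspect = filtfilt(b, a, x)          # forward-backward: zero phase shift
x_online = lfilter(b, a, x)            # causal only: portable to online use
x100 = x_online[::10]                  # downsample 1 kHz -> 100 Hz
onset = 500                            # hypothetical stimulus onset (sample index)
epoch = x100[onset - 15 : onset + 80]  # -150 ms to 800 ms at 100 Hz
epoch = epoch - epoch[:15].mean()      # first 150 ms as baseline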
</sec>
<sec id="s2f">
<title>Analysis</title>
<p>We use a measure derived from the receiver operating characteristic (ROC,
<xref ref-type="bibr" rid="pone.0009813-Green1">[34]</xref>
) to quantify the separability of two one-dimensional distributions. While ROC curves and derived measures are often used to characterize the performance of classifiers
<xref ref-type="bibr" rid="pone.0009813-Fawcett1">[35]</xref>
, they can also be used to quantify the discriminability of feature distributions. The advantage over methods like the Fisher score
<xref ref-type="bibr" rid="pone.0009813-Duda1">[36]</xref>
,
<xref ref-type="bibr" rid="pone.0009813-Mller1">[37]</xref>
, Student's
<italic>t</italic>
-statistic
<xref ref-type="bibr" rid="pone.0009813-Mller1">[37]</xref>
,
<xref ref-type="bibr" rid="pone.0009813-Student1">[38]</xref>
or pointwise biserial correlation coefficient, is that it does not rely on the assumption that the distributions are Gaussian.</p>
<p>The ROC curve of perfectly mixed distributions is (approximately) the diagonal line (no-discrimination line), and the ROC curve of perfectly separated distributions is a right angle going from (0,0) either through (1,0) or through (0,1) to (1,1). As separability index, we use the signed area (as in the definite integral) between the ROC curve and the no-discrimination line multiplied by two, such that the range of this scoring is between −1 and 1. So, if all values of class 1 are strictly larger than the maximum value of class 2, the ROC-separability-index is 1; if all values of class 1 are smaller than the minimum value of class 2 the index is −1. Accordingly, this separability index is similar to the point biserial correlation coefficient, but does not rely on the assumption that the classes obey Gaussian distributions.</p>
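<p>Since this index equals twice the area under the ROC curve minus one, it can be computed from the Mann-Whitney rank statistic. The following Matlab sketch (function name assumed, ties ignored for brevity) illustrates this for one feature dimension.</p>
<preformat>
% Signed ROC separability index for one feature (sketch).
% feat: feature values, isTarget: logical class labels.
function r = rocIndex(feat, isTarget)
  feat = feat(:); isTarget = logical(isTarget(:));
  nT = sum(isTarget); nN = sum(~isTarget);
  [~, order] = sort(feat, 'ascend');
  ranks(order) = 1:numel(feat);                          % rank of each value
  auc = (sum(ranks(isTarget)) - nT*(nT+1)/2) / (nT*nN);  % Mann-Whitney AUC
  r = 2*auc - 1;                                         % range [-1, 1]
end
</preformat>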
<p>For condition C1000, grand averages were computed for the channels with the highest (P300, interval 300–650 ms) and lowest (N2, interval 100–300 ms) signed ROC value, as well as scalp topographies for the intervals where these peak ROC values were found. Furthermore, response times and errors from condition Cr were computed. For the BCI experiments grand averages were computed for the channel with the highest signed ROC values only. Scalp topographies were only computed for the C175 condition, again in the interval where the high peak ROC values were found.</p>
<p>Due to the temporal aspect of the P300 response, the EEG trace itself was used as a feature for classification. The 20 channels that accounted for most of the difference between the two classes were automatically selected within each fold of the cross-validation. For this, the ROC values were calculated for each channel and sample. The 10 channels with the highest positive ROC peak and the 10 channels with the lowest negative ROC peak were used. Data from these channels were decimated by averaging groups of five consecutive samples, effectively reducing the data to 16 post-baseline samples per channel. Samples from all 20 channels were then concatenated to form a 320-dimensional feature vector. The feature vectors of the training set were normalized to zero mean and unit variance for every dimension independently, and the normalization parameters were stored to normalize the subtrials of the test set.</p>
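<p>The following Matlab sketch summarizes this feature extraction step; the variable names (rocVal for a [time x channels] matrix of ROC values, epo for the [80 x channels x subtrials] epochs at 100 Hz) are assumptions, not the original code.</p>
<preformat>
[~, iPos] = sort(max(rocVal, [], 1), 'descend'); % channels by positive peak
[~, iNeg] = sort(min(rocVal, [], 1), 'ascend');  % channels by negative peak
chans = [iPos(1:10), iNeg(1:10)];                % 10 + 10 selected channels
X  = epo(:, chans, :);                           % [80 x 20 x nSub]
X  = squeeze(mean(reshape(X, 5, 16, 20, []), 1));% mean of 5 samples -> 16 bins
fv = reshape(X, 320, []);                        % 320-dim feature vectors
mu = mean(fv, 2); sd = std(fv, 0, 2);            % estimated on the training set
fv = (fv - repmat(mu, 1, size(fv, 2))) ./ repmat(sd, 1, size(fv, 2));
</preformat>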
<p>Classification was done using the Fisher Discriminant (FD) algorithm. Due to the dimensionality of the features (320 dimensions), some form of regularization was advisable. Here, a shrinkage method which counterbalances the systematic error in the calculation of the empirical covariance matrix was used
<xref ref-type="bibr" rid="pone.0009813-Ledoit1">[39]</xref>
. A ten-fold cross-validation was performed with ten chronologically sampled partitions; each partition served once as the test set, with the other nine partitions as training set.</p>
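<p>A minimal sketch of such a shrinkage-regularized Fisher Discriminant is given below. In [39] the shrinkage intensity is computed analytically; here gamma is simply passed as a parameter, and the covariance is pooled over both classes, so this is an illustration rather than the original implementation.</p>
<preformat>
% Shrinkage-regularized Fisher Discriminant (sketch).
% fv: [320 x nSub] features, y: logical labels (true = target).
function [w, b] = trainShrinkFD(fv, y, gamma)
  m1 = mean(fv(:, y), 2); m2 = mean(fv(:, ~y), 2);
  C  = cov(fv');                                % empirical covariance
  nu = trace(C) / size(fv, 1);                  % average eigenvalue
  C  = (1 - gamma)*C + gamma*nu*eye(size(C,1)); % shrink towards scaled identity
  w  = C \ (m2 - m1);                           % sign: targets come out negative
  b  = -w' * (m1 + m2) / 2;
end
% classifier output for a feature vector x is w'*x + b
</preformat>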
<p>Two types of classification scores can be distinguished: the classification score and the selection score
<xref ref-type="bibr" rid="pone.0009813-Furdea1">[5]</xref>
. Here, the classification score refers to the binary classification; it is defined as the percentage of subtrials that are correctly classified as target or non-target. The selection score (selection accuracy) denotes the percentage of trials in which the target direction is correctly designated.</p>
<p>Datasets from the BCI experiments contained four times as many non-target stimuli as targets. Although the classification task is essentially binary, this imbalance means that an accuracy of 80% could be obtained trivially, by assigning all subtrials to the non-target class. Therefore, the number of misclassified targets was checked as well.</p>
</sec>
<sec id="s2g">
<title>Multi class selection</title>
<p>After the cross validation, the classifier output was used to determine the outcome of the multi-class paradigm, i.e., to estimate the target direction. Taking a set of consecutive subtrials, one for each direction, the subtrial with the most negative classifier output was designated the target. One such set is referred to as an iteration.</p>
<p>To increase sensitivity, the outcomes of multiple subtrials for the same direction (within one trial) can be averaged. This decreases the influence of individual subtrials and makes the selection score more robust. One possibility is to average the raw subtrial time series for each direction and classify these as a single subtrial. Another option is to classify each original subtrial individually and average the classifier scores. We took the latter approach, as preliminary results showed better performance for this method.</p>
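<p>A Matlab sketch of this selection rule is given below (function and variable names are assumptions): out holds the classifier outputs of the subtrials of one trial, dirLabel the corresponding directions, and valid flags the subtrials that survived artifact rejection.</p>
<preformat>
% Multi-class selection by averaging classifier outputs (sketch).
function sel = selectDirection(out, dirLabel, valid, nDir)
  score = zeros(1, nDir);
  for d = 1:nDir
    idx = logical(valid(:) .* (dirLabel(:) == d)); % valid subtrials of d
    score(d) = mean(out(idx));    % average over the available iterations
  end
  [~, sel] = min(score);          % most negative mean output wins
end
</preformat>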
<p>Because artifacts were rejected, classifier scores for some subtrials were missing. As only the remaining valid subtrials were considered, the averaging for some directions was done over fewer than the stated number of iterations. This is a realistic approach for future online settings, where artifacts may occur at any time, even in patients. Various numbers of iterations were considered to evaluate their influence on the outcome.</p>
</sec>
<sec id="s2h">
<title>Information Transfer Rate</title>
<p>The amount of information carried by every selection can be quantified by the information-transfer rate
<xref ref-type="bibr" rid="pone.0009813-Wolpaw2">[40, ITR]</xref>
, defined as:
<disp-formula>
<graphic xlink:href="pone.0009813.e012"></graphic>
<label>(1)</label>
</disp-formula>
<disp-formula>
<graphic xlink:href="pone.0009813.e013"></graphic>
<label>(2)</label>
</disp-formula>
where
<italic>R</italic>
is the bits/selection and
<italic>B</italic>
the bits/minute.
<italic>N</italic>
is the number of classes,
<italic>P</italic>
the classifier accuracy and
<italic>V</italic>
is the classification speed in selections/minute. In our case, when using multiple iterations,
<italic>P</italic>
is the selection accuracy. From this it is clear that even though the selection accuracy may increase when using more iterations, the ITR may stay the same or even decrease, because each selection then takes more time, i.e., V decreases.</p>
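<p>Equations (1) and (2) correspond to the following Matlab sketch. As a plausibility check, with N = 5 classes, a selection accuracy of 0.975, and assuming each subtrial occupies one ISI, it reproduces the 20.60 bits/minute reported for subject VPkj at 7 iterations in condition C175 (Table 7b).</p>
<preformat>
% Wolpaw ITR (sketch). N: classes, P: selection accuracy,
% V: selections per minute.
function B = itr(N, P, V)
  % eps guards avoid NaN for P = 1 or P = 0
  R = log2(N) + P*log2(max(P, eps)) + (1-P)*log2(max((1-P)/(N-1), eps));
  B = R * V;                       % bits/minute; R is bits/selection
end
% e.g. itr(5, 0.975, 60/(7*5*0.175)) is approximately 20.60
</preformat>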
<p>Speed is not the only factor that determines the usability of a BCI; accuracy is equally important. Some applications may require high speed and can tolerate lower accuracy (for instance gaming), whereas others need an accuracy approaching 100% at the cost of speed (such as operating a wheelchair). A higher accuracy is generally obtained by using more trials, i.e., by increasing the number of iterations. In order to compare our system on both levels, we report two ITR measures. The first, ‘Max ITR 70%’, refers to the maximum ITR that can be obtained when taking into account only those numbers of iterations that result in a selection score of 70% or more. Although this is not necessarily the highest ITR, we do not regard selection scores below 70% as useful, because they would require a large number of error corrections and would not give the subject a sense of control. The second measure, ‘Max ITR 90%’, is based only on those numbers of iterations that result in a selection score of 90% or higher. In general, this means using more iterations and possibly a decrease of the ITR. However, the increased selection accuracy and sense of control may be favorable for some applications.</p>
<p>Rejected subtrials were still counted towards the selection time when calculating the ITR, to prevent an artificially optimistic estimate of
<italic>V</italic>
.</p>
</sec>
</sec>
<sec id="s3">
<title>Results</title>
<sec id="s3a">
<title>Artifact rejection</title>
<p>For subject VPiz, more than half of the subtrials had to be excluded because of artifacts. In condition C1000, on average about 20% of all subtrials were excluded from analysis (range 5.94%–58.48%), about twice the average rejection rate of the other conditions. A possible explanation is the long ISI: as the total trial length increases with a longer ISI, eye blinks may become unavoidable at some point. The number of rejected subtrials for all conditions can be found in
<xref ref-type="table" rid="pone-0009813-t002">Table 2</xref>
.</p>
<table-wrap id="pone-0009813-t002" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009813.t002</object-id>
<label>Table 2</label>
<caption>
<title>Rejection rates for all conditions.</title>
</caption>
<alternatives>
<graphic id="pone-0009813-t002-2" xlink:href="pone.0009813.t002"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1">Subject</td>
<td align="left" rowspan="1" colspan="1">C1000</td>
<td align="left" rowspan="1" colspan="1">C300</td>
<td align="left" rowspan="1" colspan="1">C175</td>
<td align="left" rowspan="1" colspan="1">C300s</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">VPiz</td>
<td align="left" rowspan="1" colspan="1">1497 (58.48)</td>
<td align="left" rowspan="1" colspan="1">- -</td>
<td align="left" rowspan="1" colspan="1">- -</td>
<td align="left" rowspan="1" colspan="1">- -</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPip</td>
<td align="left" rowspan="1" colspan="1">276 (10.78)</td>
<td align="left" rowspan="1" colspan="1">- -</td>
<td align="left" rowspan="1" colspan="1">- -</td>
<td align="left" rowspan="1" colspan="1">- -</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPig</td>
<td align="left" rowspan="1" colspan="1">152 (5.94)</td>
<td align="left" rowspan="1" colspan="1">- -</td>
<td align="left" rowspan="1" colspan="1">- -</td>
<td align="left" rowspan="1" colspan="1">- -</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPjf</td>
<td align="left" rowspan="1" colspan="1">624 (24.38)</td>
<td align="left" rowspan="1" colspan="1">- -</td>
<td align="left" rowspan="1" colspan="1">- -</td>
<td align="left" rowspan="1" colspan="1">- -</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPjb</td>
<td align="left" rowspan="1" colspan="1">525 (20.51)</td>
<td align="left" rowspan="1" colspan="1">- -</td>
<td align="left" rowspan="1" colspan="1">- -</td>
<td align="left" rowspan="1" colspan="1">- -</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPja</td>
<td align="left" rowspan="1" colspan="1">160 (6.25)</td>
<td align="left" rowspan="1" colspan="1">242 (6.45)</td>
<td align="left" rowspan="1" colspan="1">205 (6.83)</td>
<td align="left" rowspan="1" colspan="1">125 (8.33)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPzq</td>
<td align="left" rowspan="1" colspan="1">340 (13.28)</td>
<td align="left" rowspan="1" colspan="1">184 (4.91)</td>
<td align="left" rowspan="1" colspan="1">107 (3.57)</td>
<td align="left" rowspan="1" colspan="1">104 (6.93)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPkh</td>
<td align="left" rowspan="1" colspan="1">- -</td>
<td align="left" rowspan="1" colspan="1">1104 (29.44)</td>
<td align="left" rowspan="1" colspan="1">1037 (34.57)</td>
<td align="left" rowspan="1" colspan="1">302 (20.13)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPkj</td>
<td align="left" rowspan="1" colspan="1">- -</td>
<td align="left" rowspan="1" colspan="1">113 (3.01)</td>
<td align="left" rowspan="1" colspan="1">42 (1.40)</td>
<td align="left" rowspan="1" colspan="1">71 (4.73)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPjq</td>
<td align="left" rowspan="1" colspan="1">- -</td>
<td align="left" rowspan="1" colspan="1">211 (5.63)</td>
<td align="left" rowspan="1" colspan="1">87 (2.90)</td>
<td align="left" rowspan="1" colspan="1">42 (2.80)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Average</bold>
</td>
<td align="left" rowspan="1" colspan="1">510.6 (19.94)</td>
<td align="left" rowspan="1" colspan="1">370.8 (9.89)</td>
<td align="left" rowspan="1" colspan="1">295.6 (9.85)</td>
<td align="left" rowspan="1" colspan="1">128.8 (8.59)</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt102">
<label></label>
<p>Using the simple artifact rejection method explained above, between 1.40% and 58.48% of the subtrials were rejected as artifacts. The average rejection rate for condition C1000 is almost twice as high as that for the other conditions, possibly due to the longer ISI, which results in a longer overall trial; eye blinking may then be unavoidable.</p>
</fn>
</table-wrap-foot>
</table-wrap>
</sec>
<sec id="s3b">
<title>Physiological response</title>
<p>Averaged ERP responses and scalp topographies for all subjects in condition C1000 can be found in
<xref ref-type="fig" rid="pone-0009813-g002">Figure 2</xref>
and
<xref ref-type="fig" rid="pone-0009813-g003">Figure 3</xref>
, respectively. For the ERP plots, the channel with the highest positive ROC value between 300 and 650 ms post-stimulus is shown for each subject (
<xref ref-type="fig" rid="pone-0009813-g002">Figure 2</xref>
). Plots show a single target and non-target line; data from all directions is averaged together. The same is done for channels with the largest negative ROC value between 100 and 300 ms. Latency and amplitude of the P300 response can be found in
<xref ref-type="table" rid="pone-0009813-t003">Table 3</xref>
.</p>
<fig id="pone-0009813-g002" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009813.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Averaged positive waveforms (condition C1000).</title>
<p>Only the channel with the highest positive ROC value between 300 and 650 ms is presented here. The shaded interval indicates the area where this highest ROC value was found. Intervals were handpicked. Scalp topographies in
<xref ref-type="fig" rid="pone-0009813-g003">Figure 3</xref>
are taken from this interval. Horizontal black bars mark the time of stimulus presentation.</p>
</caption>
<graphic xlink:href="pone.0009813.g002"></graphic>
</fig>
<fig id="pone-0009813-g003" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009813.g003</object-id>
<label>Figure 3</label>
<caption>
<title>Scalp topographies for the P300 interval (condition C1000).</title>
<p>Scalp topographies indicate the average potential over the interval marked in
<xref ref-type="fig" rid="pone-0009813-g002">Figure 2</xref>
. ROC plots do not necessarily indicate the magnitude of the difference between the two curves, but rather the significance of that difference. For most subjects this is concentrated over the parietal area. Each row corresponds to a different subject. Note that not all subjects have the same number of electrodes available.</p>
</caption>
<graphic xlink:href="pone.0009813.g003"></graphic>
</fig>
<table-wrap id="pone-0009813-t003" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009813.t003</object-id>
<label>Table 3</label>
<caption>
<title>P300 waveform characteristics.</title>
</caption>
<alternatives>
<graphic id="pone-0009813-t003-3" xlink:href="pone.0009813.t003"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1">Subject</td>
<td align="left" rowspan="1" colspan="1">Peak latency (ms)</td>
<td align="left" rowspan="1" colspan="1">Peak amplitude (
<inline-formula>
<inline-graphic xlink:href="pone.0009813.e014.jpg" mimetype="image"></inline-graphic>
</inline-formula>
V)</td>
<td align="left" rowspan="1" colspan="1">Channel</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">VPja</td>
<td align="left" rowspan="1" colspan="1">385</td>
<td align="left" rowspan="1" colspan="1">11.56</td>
<td align="left" rowspan="1" colspan="1">Pz</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPiz</td>
<td align="left" rowspan="1" colspan="1">411</td>
<td align="left" rowspan="1" colspan="1">8.68</td>
<td align="left" rowspan="1" colspan="1">Pz</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPip</td>
<td align="left" rowspan="1" colspan="1">454</td>
<td align="left" rowspan="1" colspan="1">8.14</td>
<td align="left" rowspan="1" colspan="1">PO1</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPzq</td>
<td align="left" rowspan="1" colspan="1">418</td>
<td align="left" rowspan="1" colspan="1">12.48</td>
<td align="left" rowspan="1" colspan="1">PCP1</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPig</td>
<td align="left" rowspan="1" colspan="1">564</td>
<td align="left" rowspan="1" colspan="1">4.53</td>
<td align="left" rowspan="1" colspan="1">PO1</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPjf</td>
<td align="left" rowspan="1" colspan="1">415</td>
<td align="left" rowspan="1" colspan="1">10.02</td>
<td align="left" rowspan="1" colspan="1">CCP8</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPjb</td>
<td align="left" rowspan="1" colspan="1">459</td>
<td align="left" rowspan="1" colspan="1">12.02</td>
<td align="left" rowspan="1" colspan="1">PO2</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Average</bold>
</td>
<td align="left" rowspan="1" colspan="1">443.71</td>
<td align="left" rowspan="1" colspan="1">9.63</td>
<td align="left" rowspan="1" colspan="1">
<bold>-</bold>
</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt103">
<label></label>
<p>The peak is defined as the point with the maximum potential in the target class in the interval between 300 ms and 650 ms. Data are taken from the channel indicated. For every subject the channel with the highest positive
<italic>ROC</italic>
value within the time interval was chosen. This is not necessarily the channel with the largest peak, but the channel with the most significant difference between the responses to targets and non-targets.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>In condition C1000, all subjects but VPig and VPjf had a typical P300 response concentrated over the parietal areas, with an average latency of 425.4 ms. Although the channel with the highest ROC value was not necessarily directly over the vertex, the scalp topographies in
<xref ref-type="fig" rid="pone-0009813-g003">Figure 3</xref>
show that for these 5 subjects, the distribution of the positive deflection was concentrated around the
<italic>Pz</italic>
electrode. Subjects VPig and VPjf had exceptional scalp topographies. Subject VPjf showed a typical P300 response in the time series; however, the distribution of channels with high ROC values was lateralized to the right (see
<xref ref-type="fig" rid="pone-0009813-g003">Figure 3</xref>
, row 6). Subject VPig showed a slight P300 effect over the parietal area, with a relatively large latency (564 ms). Although the positive ROC value for this subject was very low and the response error was high (see
<xref ref-type="table" rid="pone-0009813-t004">Table 4</xref>
), selection scores were still over 90% (not presented here). This is possibly due to a large negative class difference found over the frontal electrodes (see
<xref ref-type="fig" rid="pone-0009813-g004">Figure 4</xref>
, row 5).</p>
<fig id="pone-0009813-g004" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009813.g004</object-id>
<label>Figure 4</label>
<caption>
<title>Scalp topographies for the negative deflections (condition C1000).</title>
<p>Scalp topographies indicate the average potential over the interval marked in
<xref ref-type="fig" rid="pone-0009813-g005">Figure 5</xref>
. ROC plots do not necessarily indicate the magnitude of the difference between the two curves, but rather the significance of that difference. For most subjects this is concentrated over the frontal and temporal areas. Each row corresponds to a different subject. Note that not all subjects have the same number of electrodes available.</p>
</caption>
<graphic xlink:href="pone.0009813.g004"></graphic>
</fig>
<table-wrap id="pone-0009813-t004" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009813.t004</object-id>
<label>Table 4</label>
<caption>
<title>Subject performances for the key response task.</title>
</caption>
<alternatives>
<graphic id="pone-0009813-t004-4" xlink:href="pone.0009813.t004"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1">Subject</td>
<td align="left" rowspan="1" colspan="1">RT [ms]</td>
<td align="left" rowspan="1" colspan="1">Hits</td>
<td align="left" rowspan="1" colspan="1">False alarms</td>
<td align="left" rowspan="1" colspan="1">Misses</td>
<td align="left" rowspan="1" colspan="1">Error</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">VPja</td>
<td align="left" rowspan="1" colspan="1">456 (128)</td>
<td align="left" rowspan="1" colspan="1">72</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">0</td>
<td align="left" rowspan="1" colspan="1">4%</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPiz</td>
<td align="left" rowspan="1" colspan="1">479 (148)</td>
<td align="left" rowspan="1" colspan="1">71</td>
<td align="left" rowspan="1" colspan="1">0</td>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">1.4%</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPip</td>
<td align="left" rowspan="1" colspan="1">507 (174)</td>
<td align="left" rowspan="1" colspan="1">72</td>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">0</td>
<td align="left" rowspan="1" colspan="1">1.4%</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPzq</td>
<td align="left" rowspan="1" colspan="1">360 (82)</td>
<td align="left" rowspan="1" colspan="1">71</td>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">7.8%</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPig</td>
<td align="left" rowspan="1" colspan="1">612 (219)</td>
<td align="left" rowspan="1" colspan="1">88</td>
<td align="left" rowspan="1" colspan="1">17</td>
<td align="left" rowspan="1" colspan="1">8</td>
<td align="left" rowspan="1" colspan="1">22.1%</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPjf</td>
<td align="left" rowspan="1" colspan="1">360 (131)</td>
<td align="left" rowspan="1" colspan="1">96</td>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">0</td>
<td align="left" rowspan="1" colspan="1">4%</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPjb</td>
<td align="left" rowspan="1" colspan="1">450 (113)</td>
<td align="left" rowspan="1" colspan="1">95</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">4%</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Average</bold>
</td>
<td align="left" rowspan="1" colspan="1">460.6 (142.1)</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">6.4%</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt104">
<label></label>
<p>Because the majority of stimuli are not targets, true negatives (no response to a non-target) are neither reported nor counted in the error score (see equation 3). The total number of targets equals the sum of hits and misses.
<italic>RT</italic>
is the average reaction time from correct responses, with standard deviation in parentheses.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<sec id="s3b1">
<title>Negative deflections</title>
<p>Attentional effort not only influences the positive P300 response, but has also been shown to alter the negative deflections prior to the P300 response
<xref ref-type="bibr" rid="pone.0009813-TederSlejrvi2">[41]</xref>
. Although distinct N1 and N2 components were not always found (see
<xref ref-type="fig" rid="pone-0009813-g005">Figure 5</xref>
), for most subjects there was a negative class difference over the frontal areas and the electrodes over the auditory cortex. Subject VPig, who had no clear P300 response, did show a pronounced attention-dependent negativity over both auditory cortices (see
<xref ref-type="fig" rid="pone-0009813-g004">Figure 4</xref>
, row 5). On the other hand, subject VPzq, who had a very typical P300 response with high ROC value (
<xref ref-type="fig" rid="pone-0009813-g003">Figure 3</xref>
, row 4), hardly showed any attentional influence on the negative peaks (
<xref ref-type="fig" rid="pone-0009813-g004">Figure 4</xref>
, row 4). Another remarkable observation is the localization of the negative ROC values for subject VPjf: whereas the attentional effect on the positive P300 response was localized over the right central area, the largest negative ROC values were found over the corresponding area on the opposite hemisphere.</p>
<fig id="pone-0009813-g005" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009813.g005</object-id>
<label>Figure 5</label>
<caption>
<title>Averaged negative waveforms (condition C1000).</title>
<p>Only the channel with the largest negative ROC value between 100 and 300 ms is presented here. The shaded interval indicates the area where this largest negative ROC value was found. Intervals were handpicked. Scalp topographies in
<xref ref-type="fig" rid="pone-0009813-g004">Figure 4</xref>
are taken from this interval. Horizontal black bars mark the time of stimulus presentation.</p>
</caption>
<graphic xlink:href="pone.0009813.g005"></graphic>
</fig>
</sec>
<sec id="s3b2">
<title>BCI experiments</title>
<p>For comparison, the ERP responses for all subjects and conditions of the second experimental round are presented in
<xref ref-type="fig" rid="pone-0009813-g006">Figure 6</xref>
. The P300 response is superimposed on the deflections that are rhythmically evoked by the stimulus itself. The rhythm of these evoked potentials is transiently disturbed by the positive deflection. In condition C175, negative deflections appear to miss a cycle (see
<xref ref-type="fig" rid="pone-0009813-g006">Figure 6</xref>
, column 2). In condition C300, the P300 response has more time to develop, which results in a positive-going potential (see
<xref ref-type="fig" rid="pone-0009813-g006">Figure 6</xref>
, column 1). In condition C300s, most subjects show no markedly different traces for targets and non-targets. Subject VPzq showed very pronounced positive deflections in all conditions (including C300s) and was also the best-scoring subject in most conditions of the BCI experiments. Scalp topographies from condition C175 (see
<xref ref-type="fig" rid="pone-0009813-g007">Figure 7</xref>
) are more diffuse and the class difference has shifted toward the frontal areas when compared to the longer ISI of condition C1000. This change is also visible when looking at the channels that were selected for feature extraction during the classification routine (
<xref ref-type="fig" rid="pone-0009813-g008">Figure 8</xref>
).</p>
<fig id="pone-0009813-g006" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009813.g006</object-id>
<label>Figure 6</label>
<caption>
<title>Averaged waveforms of all subjects and conditions from the second experimental round.</title>
<p>Only the channel with the highest ROC value between classes is presented here. All ERPs in the left column come from condition C300. The middle column represents condition C175 and images in the right column are taken from condition C300s. Every row represents a subject. The shaded area in condition C175 marks the high ROC interval that is used for scalp topographies in
<xref ref-type="fig" rid="pone-0009813-g007">Figure 7</xref>
. Horizontal black bars mark the time of stimulus presentation.</p>
</caption>
<graphic xlink:href="pone.0009813.g006"></graphic>
</fig>
<fig id="pone-0009813-g007" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009813.g007</object-id>
<label>Figure 7</label>
<caption>
<title>Scalp topographies for the P300 interval (condition C175).</title>
<p>Scalp topographies indicate the average potential over the interval marked in the second column of
<xref ref-type="fig" rid="pone-0009813-g006">Figure 6</xref>
. ROC plots do not necessarily indicate the magnitude of the difference between the two curves, but rather the significance of that difference. The area where the high ROC values are concentrated has shifted to the frontal area as compared to
<xref ref-type="fig" rid="pone-0009813-g003">Figure 3</xref>
. Each row corresponds to a different subject. Note that not all subjects have the same number of electrodes available.</p>
</caption>
<graphic xlink:href="pone.0009813.g007"></graphic>
</fig>
<fig id="pone-0009813-g008" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009813.g008</object-id>
<label>Figure 8</label>
<caption>
<title>Distribution of channels selected for classification in various conditions over all subjects.</title>
<p>Scalp topographies indicate how often each channel was selected during the cross-validation folds, averaged over all subjects of a particular condition. Negative values indicate channels selected for negative ROC values, positive values indicate channels selected for positive ROC values. Values have been normalized to the maximum possible number of occurrences (number of subjects x number of cross-validation folds). The frontal cross indicates the
<italic>Fz</italic>
channel, the posterior cross indicates the
<italic>Pz</italic>
channel. Channels with negative ROC values are consistently selected from the frontal regions. For condition C1000, the channels with positive ROC values are concentrated over the parietal and occipital areas, whereas for the faster conditions they are more diffuse.</p>
</caption>
<graphic xlink:href="pone.0009813.g008"></graphic>
</fig>
<p>Negative deflections for the second experimental round are not discussed here, as the ROC values were low.</p>
</sec>
</sec>
<sec id="s3c">
<title>Stimulus intensity</title>
<p>Before each session, subjects could adjust the speaker loudness for all directions to obtain subjectively equal stimuli. The majority of subjects reported the preset speaker loudness (calibrated at
<inline-formula>
<inline-graphic xlink:href="pone.0009813.e015.jpg" mimetype="image"></inline-graphic>
</inline-formula>
58 dB) to be perceptually equal. Therefore, only the three subjects that changed the loudness of at least one speaker are reported in
<xref ref-type="table" rid="pone-0009813-t005">Table 5</xref>
). Subjects VPig and VPzq (BCI) requested all speakers to be louder (about 3–5 dB) than initially set. In these cases the initialization was not used to balance the speaker loudness, but to change the overall loudness. The classification results for VPzq in the BCI experiments were higher than average, with scores reaching 100% in most conditions.</p>
<table-wrap id="pone-0009813-t005" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009813.t005</object-id>
<label>Table 5</label>
<caption>
<title>Speaker loudness.</title>
</caption>
<alternatives>
<graphic id="pone-0009813-t005-5" xlink:href="pone.0009813.t005"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1">Subject</td>
<td align="left" rowspan="1" colspan="1">Exp</td>
<td colspan="8" align="left" rowspan="1">Speaker location</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1">7</td>
<td align="left" rowspan="1" colspan="1">8</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">VPzq</td>
<td align="left" rowspan="1" colspan="1">Phys.</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">60.8</td>
<td align="left" rowspan="1" colspan="1">60.2</td>
<td align="left" rowspan="1" colspan="1">59.7</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">-</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPig</td>
<td align="left" rowspan="1" colspan="1">Phys.</td>
<td align="left" rowspan="1" colspan="1">61.4</td>
<td align="left" rowspan="1" colspan="1">61.1</td>
<td align="left" rowspan="1" colspan="1">61.2</td>
<td align="left" rowspan="1" colspan="1">60.1</td>
<td align="left" rowspan="1" colspan="1">60.2</td>
<td align="left" rowspan="1" colspan="1">60.1</td>
<td align="left" rowspan="1" colspan="1">61.2</td>
<td align="left" rowspan="1" colspan="1">60.4</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPkj</td>
<td align="left" rowspan="1" colspan="1">BCI</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">56.9</td>
<td align="left" rowspan="1" colspan="1">56.2</td>
<td align="left" rowspan="1" colspan="1">x</td>
<td align="left" rowspan="1" colspan="1">x</td>
<td align="left" rowspan="1" colspan="1">x</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">-</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPzq</td>
<td align="left" rowspan="1" colspan="1">BCI</td>
<td align="left" rowspan="1" colspan="1">63.4</td>
<td align="left" rowspan="1" colspan="1">63.8</td>
<td align="left" rowspan="1" colspan="1">62.9</td>
<td align="left" rowspan="1" colspan="1">x</td>
<td align="left" rowspan="1" colspan="1">x</td>
<td align="left" rowspan="1" colspan="1">x</td>
<td align="left" rowspan="1" colspan="1">63.7</td>
<td align="left" rowspan="1" colspan="1">63.2</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt105">
<label></label>
<p>Speakers were calibrated to equal loudness (
<inline-formula>
<inline-graphic xlink:href="pone.0009813.e016.jpg" mimetype="image"></inline-graphic>
</inline-formula>
58 dB). Before each session, the subject could adjust this loudness for individual speakers to have subjectively equal loudness. Most subjects did not make changes; only those that did are reported here. Subjects are grouped according to experimental rounds.
<italic>Exp</italic>
refers to the experimental round.
<italic>Speaker location</italic>
refers to the speaker labels given in
<xref ref-type="fig" rid="pone-0009813-g001">Figure 1</xref>
. -  =  unchanged, x  =  unavailable. All values are in dB.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>In the physiological experiments, VPzq adjusted only the three speakers in the back. Subject VPkj decreased the loudness of the right and front-right speakers. Such adjustments possibly compensate for subject-specific hearing differences between the ears, or counterbalance the perceptual damping of sound sources in the back.</p>
</sec>
<sec id="s3d">
<title>Key response result</title>
<p>The key-response task (condition Cr) was performed by the subjects in the physiological experiments. Performance results can be found in
<xref ref-type="table" rid="pone-0009813-t004">Table 4</xref>
. Error scores (in percentages) were calculated with equation 3. True negatives are excluded from this equation, because their large number would mask the error size.
<disp-formula>
<graphic xlink:href="pone.0009813.e017"></graphic>
<label>(3)</label>
</disp-formula>
</p>
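<p>As a worked example (the anonymous function below is ours, not the original analysis code), the error scores in Table 4 follow directly from equation 3.</p>
<preformat>
% Error score of equation 3 (sketch): true negatives are excluded.
err = @(hits, fa, misses) 100 * (fa + misses) / (hits + fa + misses);
% err(72, 3, 0) = 4.0 (subject VPja), err(88, 17, 8) = 22.1 (subject VPig)
</preformat>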
<p>Although no subject showed a perfect score, the error rate was under 10% for all subjects but VPig. Subject VPig, with an error rate of 22.1%, was one of two subjects who reported having difficulty with sound localization in natural settings. Subject VPip, who also reported this, had an excellent result with an error rate of 1.4%. Both received a practice round prior to the recordings and had a maximum selection score of over 90% in a preliminary classification test.</p>
<p>The grouped performances of subjects on different directions can be found in
<xref ref-type="table" rid="pone-0009813-t006">Table 6</xref>
; the corresponding confusions in
<xref ref-type="fig" rid="pone-0009813-g009">Figure 9</xref>
. The first observation is the confusion of the front speaker with the rear speaker: all eight false alarms on front trials are due to confusion with the rear speaker and, vice versa, the only false alarm on rear trials is a confusion with the front speaker. Several subjects also reported difficulty with this distinction. We therefore excluded the rear speakers from the BCI experiments.</p>
<fig id="pone-0009813-g009" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009813.g009</object-id>
<label>Figure 9</label>
<caption>
<title>Polar sensitivity plot for condition Cr.</title>
<p>The confusion matrices for condition Cr of all subjects are summed and represented as a sensitivity plot. The black line indicates the sensitivity at each speaker location. Direction confusion is represented by the green (neighboring direction) and red (other) arrows; the length of an arrow indicates the amount of error in that direction. Speaker 1 (front) and speaker 5 (back) are difficult to distinguish, as they are confused exclusively with each other. Direction labels correspond to those in
<xref ref-type="fig" rid="pone-0009813-g001">Figure 1</xref>
.</p>
</caption>
<graphic xlink:href="pone.0009813.g009"></graphic>
</fig>
<table-wrap id="pone-0009813-t006" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009813.t006</object-id>
<label>Table 6</label>
<caption>
<title>Averaged performance for different directions (condition C1000).</title>
</caption>
<alternatives>
<graphic id="pone-0009813-t006-6" xlink:href="pone.0009813.t006"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1">Direction</td>
<td align="left" rowspan="1" colspan="1">RT [ms]</td>
<td align="left" rowspan="1" colspan="1">Hits</td>
<td align="left" rowspan="1" colspan="1">False alarms</td>
<td align="left" rowspan="1" colspan="1">Misses</td>
<td align="left" rowspan="1" colspan="1">Error</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Front</td>
<td align="left" rowspan="1" colspan="1">483 (168)</td>
<td align="left" rowspan="1" colspan="1">67</td>
<td align="left" rowspan="1" colspan="1">8</td>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">11.8%</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Front-right</td>
<td align="left" rowspan="1" colspan="1">414 (121)</td>
<td align="left" rowspan="1" colspan="1">40</td>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">11.1%</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Right</td>
<td align="left" rowspan="1" colspan="1">431 (126)</td>
<td align="left" rowspan="1" colspan="1">84</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">0</td>
<td align="left" rowspan="1" colspan="1">3.5%</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Back-right</td>
<td align="left" rowspan="1" colspan="1">552 (211)</td>
<td align="left" rowspan="1" colspan="1">83</td>
<td align="left" rowspan="1" colspan="1">10</td>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">11.7%</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Back</td>
<td align="left" rowspan="1" colspan="1">518 (242)</td>
<td align="left" rowspan="1" colspan="1">57</td>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">6.6%</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Back-left</td>
<td align="left" rowspan="1" colspan="1">499 (184)</td>
<td align="left" rowspan="1" colspan="1">51</td>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">8.9%</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Left</td>
<td align="left" rowspan="1" colspan="1">408 (100)</td>
<td align="left" rowspan="1" colspan="1">108</td>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="left" rowspan="1" colspan="1">0</td>
<td align="left" rowspan="1" colspan="1">4.4%</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Front-left</td>
<td align="left" rowspan="1" colspan="1">403 (121)</td>
<td align="left" rowspan="1" colspan="1">75</td>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">2.6%</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Average</bold>
</td>
<td align="left" rowspan="1" colspan="1">463.5 (159.1)</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">7.6%</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt106">
<label></label>
<p>Reaction times (standard deviation in parentheses) and errors for all subjects were averaged according to direction of target stimulus. Types of errors made can be found in
<xref ref-type="fig" rid="pone-0009813-g009">Figure 9</xref>
). Due to random target assignment, not all directions were designated as target equally often. The longest reaction times are found for the three rear speakers and the front speaker.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>Looking at the distribution of false alarms relative to the target cue, 14 false alarms in total were made on cues directly neighboring the target and 19 on the remaining cues. When normalizing for the number of cues (two direct neighbors versus five others), the probability of a false alarm on a neighboring cue (1.22%) is almost twice as high as the probability of a false alarm on any of the other directions (0.66%). When the front and rear speakers are not taken into account, this difference increases (1.60% for neighboring cues versus 0.45% for the other cues).</p>
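<p>These probabilities can be reproduced from Table 6, assuming a total of 576 target cues (the sum of hits and misses over all directions), each with two direct neighbors and five other directions; this is a worked check, not part of the original analysis.</p>
<preformat>
pNeighbor = 100 * 14 / (576 * 2);  % = 1.22 percent
pOther    = 100 * 19 / (576 * 5);  % = 0.66 percent
</preformat>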
</sec>
<sec id="s3e">
<title>Channel selection</title>
<p>In every fold of the cross-validation, the best set of 20 channels is chosen based on the ROC values. The distribution of selected channels for the different experimental settings can be found in
<xref ref-type="fig" rid="pone-0009813-g008">Figure 8</xref>
. As can be seen, channels selected for their negative ROC values are consistently concentrated in the mid-frontal areas. The channels selected in condition C1000, during preliminary classification, for their positive ROC values are focally located over the parietal and occipital areas. This is consistent with the assumption that negative ROC values are associated with attentional differences of the early negative waves
<xref ref-type="bibr" rid="pone.0009813-TederSlejrvi2">[41]</xref>
and positive ROC values are associated with the P300 wave differences
<xref ref-type="bibr" rid="pone.0009813-Polich2">[42]</xref>
. It is also consistent with the ROC topographies in
<xref ref-type="fig" rid="pone-0009813-g003">Figure 3</xref>
and
<xref ref-type="fig" rid="pone-0009813-g004">4</xref>
. When the ISI is decreased in the BCI experiments, the negative channels are still concentrated around the
<italic>Fz</italic>
channel, whereas the distribution of the positive response becomes more diffuse.</p>
</sec>
<sec id="s3f">
<title>Classification</title>
<p>
<xref ref-type="table" rid="pone-0009813-t007">Tables 7a–c</xref>
give the classification and selection results for the BCI experiments. When using a single iteration to determine the target direction, all subjects scored a selection accuracy below 70% in all conditions. When using multiple iterations, the scores of most subjects rose quickly for conditions C300 and C175. For the control condition C300s, this increase was not observed for most subjects (see
<xref ref-type="fig" rid="pone-0009813-g010">Figure 10</xref>
).</p>
<fig id="pone-0009813-g010" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009813.g010</object-id>
<label>Figure 10</label>
<caption>
<title>Subject performances for all conditions.</title>
<p>The left column shows the selection scores and the right column the corresponding ITR, both plotted as a function of the number of iterations. Each row corresponds to a different condition. The horizontal line in the left column indicates the 70% threshold. Although we do not report on the classification of condition C1000, the figures are included here for comparison; they cover only 10 iterations, as no more stimuli were presented in the physiological experimental round. Accuracy scores for condition C1000 increase faster than those for the other conditions. However, because of the long ISI (1000 ms), the ITR is relatively low.</p>
</caption>
<graphic xlink:href="pone.0009813.g010"></graphic>
</fig>
<table-wrap id="pone-0009813-t007" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0009813.t007</object-id>
<label>Table 7</label>
<caption>
<title>Classification performance for all BCI conditions.</title>
</caption>
<alternatives>
<graphic id="pone-0009813-t007-7" xlink:href="pone.0009813.t007"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1">Subject</td>
<td align="left" rowspan="1" colspan="1">Classification [%]</td>
<td align="left" rowspan="1" colspan="1">Target score [%]</td>
<td align="left" rowspan="1" colspan="1">Selection [%]</td>
<td align="left" rowspan="1" colspan="1">70% Thresh.</td>
<td align="left" rowspan="1" colspan="1">Max. ITR 70%</td>
<td align="left" rowspan="1" colspan="1">Max. ITR 90%</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">VPja</td>
<td align="left" rowspan="1" colspan="1">69.81</td>
<td align="left" rowspan="1" colspan="1">62.12</td>
<td align="left" rowspan="1" colspan="1">90.00 (15)</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1">6.86 (7)</td>
<td align="left" rowspan="1" colspan="1">4.41 (15)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPkh</td>
<td align="left" rowspan="1" colspan="1">
<bold>74.27</bold>
</td>
<td align="left" rowspan="1" colspan="1">69.02</td>
<td align="left" rowspan="1" colspan="1">
<bold>90.00</bold>
(15)</td>
<td align="left" rowspan="1" colspan="1">8</td>
<td align="left" rowspan="1" colspan="1">4.78 (13)</td>
<td align="left" rowspan="1" colspan="1">4.41 (15)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPkj</td>
<td align="left" rowspan="1" colspan="1">73.19</td>
<td align="left" rowspan="1" colspan="1">68.08</td>
<td align="left" rowspan="1" colspan="1">94.00 (11)</td>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">9.60 (5)</td>
<td align="left" rowspan="1" colspan="1">6.81 (11)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPzq</td>
<td align="left" rowspan="1" colspan="1">78.74</td>
<td align="left" rowspan="1" colspan="1">74.89</td>
<td align="left" rowspan="1" colspan="1">
<bold>100.00</bold>
(12)</td>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">19.50 (2)</td>
<td align="left" rowspan="1" colspan="1">11.02 (6)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPjq</td>
<td align="left" rowspan="1" colspan="1">74.54</td>
<td align="left" rowspan="1" colspan="1">69.10</td>
<td align="left" rowspan="1" colspan="1">94.00 (12)</td>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="left" rowspan="1" colspan="1">7.25 (5)</td>
<td align="left" rowspan="1" colspan="1">6.25 (12)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Mean</bold>
</td>
<td align="left" rowspan="1" colspan="1">74.11</td>
<td align="left" rowspan="1" colspan="1">68.64</td>
<td align="left" rowspan="1" colspan="1">93.60 (13.0)</td>
<td align="left" rowspan="1" colspan="1">5.0</td>
<td align="left" rowspan="1" colspan="1">9.60 (6.4)</td>
<td align="left" rowspan="1" colspan="1">6.58 (11.8)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">a) C300</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPja</td>
<td align="left" rowspan="1" colspan="1">
<bold>72.20</bold>
</td>
<td align="left" rowspan="1" colspan="1">63.00</td>
<td align="left" rowspan="1" colspan="1">
<bold>92.50</bold>
(12)</td>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">14.41 (4)</td>
<td align="left" rowspan="1" colspan="1">10.21 (12)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPkh</td>
<td align="left" rowspan="1" colspan="1">68.83</td>
<td align="left" rowspan="1" colspan="1">61.39</td>
<td align="left" rowspan="1" colspan="1">82.50 (13)</td>
<td align="left" rowspan="1" colspan="1">10</td>
<td align="left" rowspan="1" colspan="1">6.87 (13)</td>
<td align="left" rowspan="1" colspan="1">- (-)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPkj</td>
<td align="left" rowspan="1" colspan="1">
<bold>77.48</bold>
</td>
<td align="left" rowspan="1" colspan="1">71.67</td>
<td align="left" rowspan="1" colspan="1">
<bold>97.50</bold>
(7)</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">25.20 (3)</td>
<td align="left" rowspan="1" colspan="1">20.60 (7)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPzq</td>
<td align="left" rowspan="1" colspan="1">
<bold>79.89</bold>
</td>
<td align="left" rowspan="1" colspan="1">75.22</td>
<td align="left" rowspan="1" colspan="1">
<bold>100.00</bold>
(12)</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">19.36 (5)</td>
<td align="left" rowspan="1" colspan="1">17.51 (7)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPjq</td>
<td align="left" rowspan="1" colspan="1">
<bold>75.66</bold>
</td>
<td align="left" rowspan="1" colspan="1">72.56</td>
<td align="left" rowspan="1" colspan="1">
<bold>97.50</bold>
(14)</td>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">21.10 (3)</td>
<td align="left" rowspan="1" colspan="1">15.32 (8)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Mean</bold>
</td>
<td align="left" rowspan="1" colspan="1">74.81</td>
<td align="left" rowspan="1" colspan="1">68.76</td>
<td align="left" rowspan="1" colspan="1">94.00 (11.6)</td>
<td align="left" rowspan="1" colspan="1">4.6</td>
<td align="left" rowspan="1" colspan="1">17.39 (5.6)</td>
<td align="left" rowspan="1" colspan="1">15.91 (8.5)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">b) C175</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPja</td>
<td align="left" rowspan="1" colspan="1">57.96</td>
<td align="left" rowspan="1" colspan="1">34.66</td>
<td align="left" rowspan="1" colspan="1">40.00 (11)</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">- (-)</td>
<td align="left" rowspan="1" colspan="1">- (-)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPkh</td>
<td align="left" rowspan="1" colspan="1">63.35</td>
<td align="left" rowspan="1" colspan="1">47.11</td>
<td align="left" rowspan="1" colspan="1">65.00 (13)</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">- (-)</td>
<td align="left" rowspan="1" colspan="1">- (-)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPkj</td>
<td align="left" rowspan="1" colspan="1">60.32</td>
<td align="left" rowspan="1" colspan="1">41.26</td>
<td align="left" rowspan="1" colspan="1">35.00 (3)</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">- (-)</td>
<td align="left" rowspan="1" colspan="1">- (-)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPzq</td>
<td align="left" rowspan="1" colspan="1">72.20</td>
<td align="left" rowspan="1" colspan="1">59.79</td>
<td align="left" rowspan="1" colspan="1">90.00 (15)</td>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="left" rowspan="1" colspan="1">5.60 (6)</td>
<td align="left" rowspan="1" colspan="1">4.41 (15)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VPjq</td>
<td align="left" rowspan="1" colspan="1">62.75</td>
<td align="left" rowspan="1" colspan="1">42.12</td>
<td align="left" rowspan="1" colspan="1">50.00 (12)</td>
<td align="left" rowspan="1" colspan="1">-</td>
<td align="left" rowspan="1" colspan="1">- (-)</td>
<td align="left" rowspan="1" colspan="1">- (-)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Mean</bold>
</td>
<td align="left" rowspan="1" colspan="1">63.32</td>
<td align="left" rowspan="1" colspan="1">44.99</td>
<td align="left" rowspan="1" colspan="1">56.00 (10.8)</td>
<td align="left" rowspan="1" colspan="1">6.0</td>
<td align="left" rowspan="1" colspan="1">5.60 (6.0)</td>
<td align="left" rowspan="1" colspan="1">4.41 (15.0)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">c) C300s</td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt107">
<label></label>
<p>For explanation of the various conditions see the
<xref ref-type="sec" rid="s2">
<italic>Methods</italic>
</xref>
section.
<italic>Classification (%)</italic>
refers to the binary classification score on the artifact-free dataset, i.e., the correct classification of individual subtrials.
<italic>Target score (%)</italic>
is the same, but considering only target subtrials. The difference between the two is possibly due to the unbalanced training set.
<italic>Max. selection</italic>
is the maximum selection score reached for each subject,
<italic>70% Thresh</italic>
. refers to the minimum number of iterations that need to be averaged to obtain a 70% selection score.
<italic>Max. ITR (70%)</italic>
is the maximum ITR reached when considering only those numbers of averages that resulted in a selection score of 70% or higher.
<italic>Max. ITR (90%)</italic>
is the equivalent for selection scores of 90% or higher. Numbers in parentheses in
<italic>Max. selection</italic>
,
<italic>Max. ITR (70%)</italic>
and
<italic>Max. ITR (90%)</italic>
indicate the number of averages needed for the result. Bold numbers indicate the best result per subject over the three conditions. For most subjects, ITR values are highest for condition C175. See
<xref ref-type="fig" rid="pone-0009813-g010">Figure 10</xref>
for results from more averaging steps and the corresponding ITR. See the
<italic>Analysis</italic>
subsection for definitions of the classification and selection scores.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>In condition C300 (see
<xref ref-type="table" rid="pone-0009813-t007">Table 7a</xref>
), four out of five subjects reached a selection score of 70% or higher after only six iterations; the fifth subject reached this threshold after eight iterations. One subject had a selection score of 100% when using 12 or more iterations, and all other subjects eventually scored 90% or higher. The average maximum selection score was 93.6%. Average maximum ITR scores were 9.60 and 6.58 bits/minute for the 70% and 90% constraints, respectively.</p>
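The selection procedure that these scores summarize is straightforward: the classifier outputs for each direction are averaged over the available iterations, and the direction with the most target-like average is selected. As a minimal sketch in Python (the variable layout is our illustration, not the authors' published code, and we assume larger classifier outputs mean "more target-like"):

import numpy as np

def selection_score(outputs, targets):
    # outputs: (trials, iterations, classes) classifier outputs per subtrial
    # targets: (trials,) index of the true target direction per trial
    mean_out = outputs.mean(axis=1)        # average over iterations
    selected = mean_out.argmax(axis=1)     # most target-like direction wins
    return (selected == targets).mean()    # fraction of correct selections

# toy example: 20 trials, 6 iterations, 5 directions
rng = np.random.default_rng(0)
targets = rng.integers(0, 5, size=20)
outputs = rng.normal(size=(20, 6, 5))
outputs[np.arange(20), :, targets] += 1.0  # targets elicit stronger responses
print(selection_score(outputs, targets))

Averaging over more iterations suppresses the single-subtrial noise, which is why the selection scores in Table 7 rise with the number of averages.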
<p>For condition C175 (see
<xref ref-type="table" rid="pone-0009813-t007">Table 7b</xref>
), four out of five subjects had a selection score of 70% or higher when using four iterations; subject VPkh reached this threshold only after ten iterations. Subject VPzq reached a 100% selection score when using 12 iterations. All but subject VPkh eventually reached a 90% selection score, with an average maximum score of 94.00%. Average maximum ITR scores were 17.39 and 15.91 bits/minute for the 70% and 90% constraints, respectively.</p>
<p>In condition C300s, both the classification- and selection scores were lower (see
<xref ref-type="table" rid="pone-0009813-t007">Table 7c</xref>
). Subject VPzq reached the 70% threshold after only six iterations and had a maximum score of 90% on this control condition; this subject reported singing regularly in a choir. For all other subjects, selection scores did not rise above 70%. Because condition C300s contained fewer subtrials (1500, versus 3500 and 3000 in the C300 and C175 conditions, respectively), it could be argued that the lower classification and selection scores are due to the smaller number of training samples. However, running conditions C300 and C175 with only 1500 subtrials resulted in scores similar to those reported here. Any difference between conditions is thus due to the information added by the spatial localization of the stimuli.</p>
<p>Cross-validation was also performed on the data from the physiological experiments. Although both the classification and selection scores were comparable or better, we do not report these results extensively here, as the long ISI makes the system intrinsically slow. For comparison,
<xref ref-type="fig" rid="pone-0009813-g010">Figure 10</xref>
does show these results. Note that the poor score for subject VPiz is due to the removal of over 50% of the trials.</p>
<p>For all conditions and subjects, the classification score of target stimuli is lower than the overall classification score (see column ‘Target score’ in
<xref ref-type="table" rid="pone-0009813-t007">Table 7</xref>
). The classifier favors a non-target decision, due to the class imbalance in the training set. Balancing the training set might increase the classification score, but this has not been applied here.</p>
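A simple remedy, not applied in the paper, would be to undersample the non-target subtrials before training so that both classes are equally represented; alternatively, the classes can be reweighted in the classifier's loss. A minimal sketch of the former, assuming a feature matrix X and binary labels y (1 = target, 0 = non-target):

import numpy as np

def balance_by_undersampling(X, y, seed=0):
    # randomly discard non-targets until both classes have equal size
    rng = np.random.default_rng(seed)
    idx_t = np.flatnonzero(y == 1)
    idx_nt = rng.choice(np.flatnonzero(y == 0), size=len(idx_t), replace=False)
    idx = np.concatenate([idx_t, idx_nt])
    rng.shuffle(idx)
    return X[idx], y[idx]

In a five-class oddball only about one subtrial in five is a target, so undersampling discards much of the data; reweighting is often the cheaper option.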
</sec>
</sec>
<sec id="s4">
<title>Discussion</title>
<p>We discuss here a new experimental paradigm for an auditory BCI. In contrast to most other auditory BCI setups, ours involves an intuitive multi-class paradigm in which the number of classes can readily be varied. So far, it has only been tested offline and on healthy subjects. The results show that all subjects were able to reach a selection score over 70% in the conditions with spatial cues (C300 and C175); in fact, all but one subject reached selection scores higher than 90%. For all but one subject, performance on the control task remained below the 70% threshold, showing that the spatial location adds vital information to the cue.</p>
<p>As can be seen in
<xref ref-type="fig" rid="pone-0009813-g010">Figure 10</xref>
, the increase in selection score is steepest for condition C1000. The P300 waveform in this condition has time to reach its peak and to recover to baseline to some extent, and is thus more typical. For the other conditions, more iterations were necessary to reach a selection score above 70%, i.e., the slope is less steep. Here, the P300 is no longer a clear and pronounced positive peak, as responses evoked by new stimuli disturb the potential buildup. ROC values were generally larger for the C1000 condition than for the faster conditions.</p>
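The overlap is easy to quantify: with a 175 ms ISI and a typical P300 analysis window of several hundred milliseconds, every extracted epoch contains the onsets of several subsequent stimuli. A sketch (the sampling rate, epoch length and synthetic signal are assumptions for illustration only):

import numpy as np

fs = 100          # sampling rate in Hz (assumed)
isi = 0.175       # inter-stimulus interval in seconds (condition C175)
epoch_len = 0.8   # epoch length in seconds, typical for P300 analysis

eeg = np.random.randn(60 * fs)        # one minute of synthetic single-channel EEG
onsets = np.arange(1.0, 50.0, isi)    # a stimulus every 175 ms

epochs = np.stack([eeg[int(t * fs):int(t * fs) + int(epoch_len * fs)]
                   for t in onsets])
print(epochs.shape, int(epoch_len / isi))  # ~4 later stimuli fall in each epoch

Each epoch thus superimposes the response to its own stimulus with the early responses to roughly four later ones, which explains the disturbed potential buildup.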
<p>For the BCI experiments, especially C175, the area with the highest ROC values had shifted to a more frontal position when compared to condition C1000. Generally, the latency of the P300 wave is shorter over frontal electrodes
<xref ref-type="bibr" rid="pone.0009813-Polich1">[17]</xref>
. Thus, the positive deflection starts developing over the frontal electrodes and then spreads posteriorly. One could therefore explain this shift as an interruption of the developing P300 by the potentials evoked by the presentation of the next stimulus. However, latency differences between frontal and parietal areas are in the range of milliseconds
<xref ref-type="bibr" rid="pone.0009813-Mertens1">[43]</xref>
, which makes this an unlikely explanation. A variant of the P300, the novelty P300 or P3a, generally starts more anteriorly than the classic P300
<xref ref-type="bibr" rid="pone.0009813-Katayama1">[44]</xref>
. The fact that it occurs after novel stimuli and habituates quickly, however, also makes it an unlikely candidate. Neurophysiological P300 research mostly uses longer ISIs to avoid overlapping responses to successive stimuli. In BCI research, where short ISIs are common practice, scalp topographies are not often reported. The reason for the frontal shift of the P300 therefore remains unclear.</p>
<p>One measure of the usefulness of a BCI is the ITR. It depends on the selection accuracy, the time necessary for a choice and the number of classes. Because of the long ISI in condition C1000, the ITR is inevitably low. However, with its pronounced P300 response, this condition served as a proof of concept: in an auditory BCI setup, the spatial properties of the cue by themselves can be enough to consistently elicit a classifiable P300 response.</p>
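For reference, the standard ITR definition for an N-class selection with accuracy P (Wolpaw et al. [40]) is B = log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1)) bits per selection, multiplied by the number of selections per minute. A sketch; the timing parameters in the example call are illustrative assumptions, not the paper's exact figures:

import math

def itr_bits_per_minute(n_classes, accuracy, seconds_per_selection):
    # Wolpaw ITR: bits per selection times selections per minute
    p, n = accuracy, n_classes
    if p >= 1.0:
        bits = math.log2(n)
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / seconds_per_selection

# e.g. 5 directions, 94% accuracy, 7 iterations of 5 subtrials at 175 ms ISI
print(itr_bits_per_minute(5, 0.94, 7 * 5 * 0.175))

The formula makes the trade-off explicit: a shorter ISI raises the selections-per-minute factor, the extra averaging it requires lowers that factor again, and accuracy enters through the bits-per-selection term.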
<p>With an ISI almost six times shorter in condition C175, the BCI remains fast overall even though extra iterations are needed. Moreover, some subjects reported that the faster ISI helped them focus on the task at hand. With successful classification in trials with an ISI as short as 175 ms, the maximum ITR reached an average of 17.39 bits/minute over five subjects (best subject 25.20 bits/minute), considering only selection scores of 70% or higher. Kanoh et al.
<xref ref-type="bibr" rid="pone.0009813-Kanoh1">[7]</xref>
reported an average ITR of around 5 bits/minute on their binary BCI, but only when they used all data for training and testing, thereby applying the classifier to data it had already seen. In another binary auditory setup
<xref ref-type="bibr" rid="pone.0009813-Hill1">[2]</xref>
, an ITR of between 4 and 7 bits/minute was reported. Our system owes its high ITR to its genuinely multi-class nature. Another multi-class auditory BCI has been reported before
<xref ref-type="bibr" rid="pone.0009813-Furdea1">[5]</xref>
. It used spoken numbers as stimuli for eliciting an ERP, and an average ITR of 1.48 bits/minute was reported for the online approach. This improved to 4.66 bits/minute when the individually optimal number of iterations was determined in an offline analysis. Recently,
<xref ref-type="bibr" rid="pone.0009813-Klobassa1">[10]</xref>
reported on their multi-class auditory BCI with a maximum online ITR of 5.64 bits/minute in the auditory-only condition. The average ITR remained relatively stable over the different sessions.</p>
<p>Visual P300 BCI systems are known for their fast operation and corresponding high ITR. In a recent online visual speller study
<xref ref-type="bibr" rid="pone.0009813-Lenhardt1">[21]</xref>
, average ITR values of 32.15 bits/minute were reported. For this, four subtrials were used, with an average classification score of over 80%. The maximum ITR for a single subject was as high as 92.32 bits/minute using two subtrials. It can be assumed that the average ITR will increase further when the optimal number of subtrials is determined for each subject individually. Even in the original application of the visual spelling system in 1988
<xref ref-type="bibr" rid="pone.0009813-Farwell1">[20]</xref>
, ITR values of 12.0 bits/minute (or 10.68 bits/minute according to equation 2) were reported. For a comparison of the ITRs of several BCI systems, see
<xref ref-type="bibr" rid="pone.0009813-Serby1">[45]</xref>
. It is thus clear that auditory BCI systems lag behind in performance. The setup proposed here takes a step toward closing this gap between visual and auditory performance.</p>
<p>The average ITR for condition C175 went down to 15.91 bits/minute (best subject 20.60 bits/minute) when only selection scores of 90% or higher were considered. Although this is a drop in ITR of about 9%, the score is still competitive with other auditory BCI systems while meeting a much stricter accuracy requirement. This high accuracy and the corresponding ITR encourage the further development of this paradigm, for instance by using a multi-class classifier
<xref ref-type="bibr" rid="pone.0009813-Tomiaka1">[46]</xref>
, instead of the binary classifier used here.</p>
<p>Performance in condition C300s was low, with only one out of five subjects crossing the 70% threshold. Possibly, performance in the control condition could be improved if the cues differed more in their physical properties, i.e., if the difference in pitch were larger or if natural sounds were used, as in
<xref ref-type="bibr" rid="pone.0009813-Klobassa1">[10]</xref>
. This would make distinguishing targets from non-targets easier, so that an auditory multi-class BCI could perhaps also be based on this single-speaker setup. However, remembering a pitch is rather difficult for some subjects, whereas recognizing a spatial direction is automatic. Subject VPzq did reach a selection score of 90% in condition C300s. As this subject reported singing regularly in a choir, it could be hypothesized that for him the task was easier to perform and therefore still elicited a P300 response.</p>
<p>Currently, the stimuli are presented in free field, i.e., with a dedicated speaker for every direction. Initial tests with stimulus presentation over in-ear headphones showed that accurately identifying the target direction was difficult. However, we believe that by using the complex cue from the BCI experiments and more advanced methods for creating virtual 3D audio, it will be possible to reduce the large setup to stereo earphones.</p>
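The simplest headphone rendering relies on interaural time and level differences (ITD/ILD); convincing front/back and elevation cues additionally require head-related transfer functions, which is what the more advanced methods mentioned above refer to. A crude ITD/ILD sketch, with the delay and gain constants being rough textbook-order approximations rather than measured values:

import numpy as np

def spatialize(signal, azimuth_deg, fs=44100):
    # crude ITD/ILD rendering of a mono signal; positive azimuth = right
    az = np.deg2rad(azimuth_deg)
    itd = 0.00066 * np.sin(az)                # up to ~660 us interaural delay
    shift = int(round(abs(itd) * fs))         # delay of the far ear, in samples
    gain_r = 10 ** (3.0 * np.sin(az) / 20)    # ~3 dB level difference at 90 deg
    gain_l = 1.0 / gain_r
    left = np.pad(signal, (shift if itd > 0 else 0, 0))[:len(signal)] * gain_l
    right = np.pad(signal, (shift if itd < 0 else 0, 0))[:len(signal)] * gain_r
    return np.stack([left, right], axis=1)    # stereo: (samples, 2)

Such a model cannot disambiguate front from rear (the so-called cone of confusion), which may be one reason why accurately identifying the target direction over in-ear headphones was difficult.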
<p>As shown by the polar sensitivity plot of the key-response task, there is a higher chance of confusing a target with one of its direct neighbors than with other stimuli. Possibly, these neighboring directions fall within the attentional gradient
<xref ref-type="bibr" rid="pone.0009813-Mondor1">[28]</xref>
,
<xref ref-type="bibr" rid="pone.0009813-TederSlejrvi1">[29]</xref>
. Also, more trials seem to be misclassified in the rear than in other directions. In
<xref ref-type="bibr" rid="pone.0009813-TederSlejrvi2">[41]</xref>
, it was shown that the spatial resolution of hearing is higher in the frontal region than toward the sides, although those experiments covered the front-right quadrant only. This suggests that a more informed placement of the speakers around the subject might improve the subjects' ability to distinguish the different cues.</p>
<p>A wide range of applications is possible as the directions can be mapped to any choice and the number of directions is flexible. One area in which our method might prove useful is auditory BCI based on spoken words
<xref ref-type="bibr" rid="pone.0009813-Furdea1">[5]</xref>
. A BCI with spoken-word input might prove an intuitive alternative to the somewhat unnatural tones. However, it introduces problems such as increased latency jitter in the P300 onset, which may hinder classification. Spoken words that contain spatial information might lead to a more pronounced response, because it is easier to focus on the direction. Also, there is no longer a need to hear a large part of the word before recognition can take place: recognition is then based on the spatial location, whereas the spoken word functions as a reminder of which cue is mapped to which direction. Similarly, the paradigm described in
<xref ref-type="bibr" rid="pone.0009813-Klobassa1">[10]</xref>
might benefit from adding spatial information to the cues.</p>
<p>However good the offline results are, they will need to be confirmed in an online setting. Preparations for an online study are currently under way.</p>
</sec>
<sec sec-type="supplementary-material" id="s5">
<title>Supporting Information</title>
<supplementary-material content-type="local-data" id="pone.0009813.s001">
<label>File S1</label>
<caption>
<p>Cues as used in the BCI experiments. Cues consist of bandpass filtered noise with a tone overlay. See
<xref ref-type="table" rid="pone-0009813-t001">Table 1</xref>
for their properties.</p>
<p>(0.02 MB ZIP)</p>
</caption>
<media xlink:href="pone.0009813.s001.zip" mimetype="application" mime-subtype="x-zip-compressed">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
</body>
<back>
<ack>
<p>The authors gratefully acknowledge Gabriel Curio, Vadim Nikulin and Klaus-Robert Müller for the fruitful discussions on the physiological and methodological aspects of this research.</p>
<p>This paper reflects only the authors' views, and funding agencies are not liable for any use that may be made of the information contained herein.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="pone.0009813-Wolpaw1">
<label>1</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wolpaw</surname>
<given-names>JR</given-names>
</name>
<name>
<surname>Birbaumer</surname>
<given-names>N</given-names>
</name>
<name>
<surname>McFarland</surname>
<given-names>DJ</given-names>
</name>
<name>
<surname>Pfurtscheller</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Vaughan</surname>
<given-names>TM</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Brain-computer interfaces for communication and control.</article-title>
<source>Clin Neurophysiol</source>
<volume>113</volume>
<fpage>767</fpage>
<lpage>791</lpage>
<pub-id pub-id-type="pmid">12048038</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Hill1">
<label>2</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hill</surname>
<given-names>NJ</given-names>
</name>
<name>
<surname>Lal</surname>
<given-names>TN</given-names>
</name>
<name>
<surname>Bierig</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Birbaumer</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Schölkopf</surname>
<given-names>B</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>An Auditory Paradigm for Brain-Computer Interfaces.</article-title>
<person-group person-group-type="editor">
<name>
<surname>Saul</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Weiss</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Bottou</surname>
<given-names>L</given-names>
</name>
</person-group>
<source>Adv Neural Inf Process Syst</source>
<volume>volume 17</volume>
<fpage>569</fpage>
<lpage>76</lpage>
<comment> MIT Press, Cambridge, MA, USA</comment>
</mixed-citation>
</ref>
<ref id="pone.0009813-Nijboer1">
<label>3</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nijboer</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Furdea</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Gunst</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Mellinger</surname>
<given-names>J</given-names>
</name>
<name>
<surname>McFarland</surname>
<given-names>DJ</given-names>
</name>
<etal></etal>
</person-group>
<year>2008</year>
<article-title>An auditory brain-computer interface (BCI).</article-title>
<source>J Neurosci Methods</source>
<volume>167</volume>
<fpage>43</fpage>
<lpage>50</lpage>
<pub-id pub-id-type="pmid">17399797</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Sellers1">
<label>4</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sellers</surname>
<given-names>EW</given-names>
</name>
<name>
<surname>Donchin</surname>
<given-names>E</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>A P300-based brain-computer interface: initial tests by ALS patients.</article-title>
<source>Clin Neurophysiol</source>
<volume>117</volume>
<fpage>538</fpage>
<lpage>548</lpage>
<pub-id pub-id-type="pmid">16461003</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Furdea1">
<label>5</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Furdea</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Halder</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Krusienski</surname>
<given-names>DJ</given-names>
</name>
<name>
<surname>Bross</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Nijboer</surname>
<given-names>F</given-names>
</name>
<etal></etal>
</person-group>
<year>2009</year>
<article-title>An auditory oddball (P300) spelling system for brain-computer interfaces.</article-title>
<source>Psychophysiology</source>
<volume>46</volume>
<fpage>617</fpage>
<lpage>625</lpage>
<pub-id pub-id-type="pmid">19170946</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Desain1">
<label>6</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Desain</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Hupse</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Kallenberg</surname>
<given-names>M</given-names>
</name>
<name>
<surname>de Kruif</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Schaefer</surname>
<given-names>R</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Brain-computer interfacing using selective attention and frequency-tagged stimuli.</article-title>
<source>Proceedings of the 3rd International Brain-Computer Interface Workshop & Training Course</source>
<fpage>98</fpage>
<lpage>99</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009813-Kanoh1">
<label>7</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kanoh</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Ichiro Miyamoto</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Yoshinobu</surname>
<given-names>T</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>A Brain-Computer Interface (BCI) System Based on Auditory Stream Segregation. In: Engineering in Medicine and Biology Society, 2008. EMBS 2008. 30th Annual International Conference of the IEEE.</article-title>
<source>NC MBE</source>
<fpage>642</fpage>
<lpage>645</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009813-Hinterberger1">
<label>8</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hinterberger</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Neumann</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Pham</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Kübler</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Grether</surname>
<given-names>A</given-names>
</name>
<etal></etal>
</person-group>
<year>2004</year>
<article-title>A multimodal brain-based feedback and communication system.</article-title>
<source>Exp Brain Res</source>
<volume>154</volume>
<fpage>521</fpage>
<lpage>526</lpage>
<pub-id pub-id-type="pmid">14648013</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Farquhar1">
<label>9</label>
<mixed-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Farquhar</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Blankespoor</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Vlek</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Desain</surname>
<given-names>I</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Towards a noise-tagging auditory BCI-paradigm.</article-title>
<fpage>50</fpage>
<lpage>55</lpage>
<comment>In: Proceedings of the 4th International Brain-Computer Interface Workshop and Training Course 2008</comment>
</mixed-citation>
</ref>
<ref id="pone.0009813-Klobassa1">
<label>10</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Klobassa</surname>
<given-names>DS</given-names>
</name>
<name>
<surname>Vaughan</surname>
<given-names>TM</given-names>
</name>
<name>
<surname>Brunner</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Schwartz</surname>
<given-names>NE</given-names>
</name>
<name>
<surname>Wolpaw</surname>
<given-names>JR</given-names>
</name>
<etal></etal>
</person-group>
<year>2009</year>
<article-title>Toward a high-throughput auditory P300-based brain-computer interface.</article-title>
<source>Clin Neurophysiol</source>
<volume>120</volume>
<fpage>1252</fpage>
<lpage>1261</lpage>
<pub-id pub-id-type="pmid">19574091</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-MllerPutz1">
<label>11</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Müller-Putz</surname>
<given-names>GR</given-names>
</name>
<name>
<surname>Scherer</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Neuper</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Pfurtscheller</surname>
<given-names>G</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Steady-state somatosensory evoked potentials: suitable brain signals for brain-computer interfaces?</article-title>
<source>IEEE Trans Neural Syst Rehabil Eng</source>
<volume>14</volume>
<fpage>30</fpage>
<lpage>37</lpage>
<pub-id pub-id-type="pmid">16562629</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Cincotti1">
<label>12</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cincotti</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Kauhanen</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Aloise</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Palomäki</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Caporusso</surname>
<given-names>N</given-names>
</name>
<etal></etal>
</person-group>
<year>2007</year>
<article-title>Vibrotactile Feedback for Brain-Computer Interface Operation.</article-title>
<source>Comput Intell Neurosci</source>
<volume>Volume 2007</volume>
<fpage>12 pages</fpage>
</mixed-citation>
</ref>
<ref id="pone.0009813-Chatterjee1">
<label>13</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chatterjee</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Aggarwal</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Ramos</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Acharya</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Thakor</surname>
<given-names>NV</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>A brain-computer interface with vibrotactile biofeedback for haptic information.</article-title>
<source>J Neuroeng Rehabil</source>
<volume>4</volume>
<fpage>40</fpage>
<pub-id pub-id-type="pmid">17941986</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Brouwer1">
<label>14</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brouwer</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>van Erp</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>A tactile P300 BCI and the optimal number of tactors: effects of target probability and discriminability. In: Proceedings of the 4th International Brain-Computer Interface Workshop and Training Course 2008.</article-title>
<source>TU-Graz</source>
<fpage>280</fpage>
<lpage>285</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009813-Kbler1">
<label>15</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kübler</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Neumann</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Wilhelm</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Hinterberger</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Birbaumer</surname>
<given-names>N</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Predictability of Brain-Computer Communication.</article-title>
<source>J Psychophysiol</source>
<volume>18</volume>
<fpage>121</fpage>
<lpage>129</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009813-Picton1">
<label>16</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Picton</surname>
<given-names>TW</given-names>
</name>
</person-group>
<year>1992</year>
<article-title>The P300 wave of the human event-related potential.</article-title>
<source>J Clin Neurophysiol</source>
<volume>9</volume>
<fpage>456</fpage>
<lpage>479</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009813-Polich1">
<label>17</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Polich</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Updating P300: an integrative theory of P3a and P3b.</article-title>
<source>Clin Neurophysiol</source>
<volume>118</volume>
<fpage>2128</fpage>
<lpage>2148</lpage>
<pub-id pub-id-type="pmid">17573239</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Conroy1">
<label>18</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Conroy</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Polich</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Normative Variation of P3a and P3b from a Large Sample.</article-title>
<source>J Psychophysiol</source>
<volume>21</volume>
<fpage>22</fpage>
<lpage>32</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009813-Gonsalvez1">
<label>19</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gonsalvez</surname>
<given-names>CL</given-names>
</name>
<name>
<surname>Polich</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>P300 amplitude is determined by target-to-target interval.</article-title>
<source>Psychophysiology</source>
<volume>39</volume>
<fpage>388</fpage>
<lpage>396</lpage>
<pub-id pub-id-type="pmid">12212658</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Farwell1">
<label>20</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Farwell</surname>
<given-names>LA</given-names>
</name>
<name>
<surname>Donchin</surname>
<given-names>E</given-names>
</name>
</person-group>
<year>1988</year>
<article-title>Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials.</article-title>
<source>Electroencephalogr Clin Neurophysiol</source>
<volume>70</volume>
<fpage>510</fpage>
<lpage>523</lpage>
<pub-id pub-id-type="pmid">2461285</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Lenhardt1">
<label>21</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lenhardt</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Kaper</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Ritter</surname>
<given-names>HJ</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>An adaptive P300-based online brain-computer interface.</article-title>
<source>IEEE Trans Neural Syst Rehabil Eng</source>
<volume>16</volume>
<fpage>121</fpage>
<lpage>130</lpage>
<pub-id pub-id-type="pmid">18403280</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Nijboer2">
<label>22</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nijboer</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Sellers</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Mellinger</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Jordan</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Matuz</surname>
<given-names>T</given-names>
</name>
<etal></etal>
</person-group>
<year>2008</year>
<article-title>A P300-based brain-computer interface for people with amyotrophic lateral sclerosis.</article-title>
<source>Clin Neurophysiol</source>
<volume>119</volume>
<fpage>1909</fpage>
<lpage>1916</lpage>
<pub-id pub-id-type="pmid">18571984</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Treder1">
<label>23</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Treder</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Venthur</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Blankertz</surname>
<given-names>B</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>(C)overt attention and P300-speller design.</article-title>
<source>Poster at the BBCI Workshop ‘Advances in Neurotechnology’, Berlin</source>
</mixed-citation>
</ref>
<ref id="pone.0009813-Bayliss1">
<label>24</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bayliss</surname>
<given-names>JD</given-names>
</name>
<name>
<surname>Inverso</surname>
<given-names>SA</given-names>
</name>
<name>
<surname>Tentler</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Changing the P300 brain computer interface.</article-title>
<source>Cyberpsychol Behav</source>
<volume>7</volume>
<fpage>694</fpage>
<lpage>704</lpage>
<pub-id pub-id-type="pmid">15687805</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Piccione1">
<label>25</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Piccione</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Giorgi</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Tonin</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Priftis</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Giove</surname>
<given-names>S</given-names>
</name>
<etal></etal>
</person-group>
<year>2006</year>
<article-title>P300-based brain computer interface: Reliability and performance in healthy and paralysed participants.</article-title>
<source>Clin Neurophysiol</source>
<volume>117</volume>
<fpage>531</fpage>
<lpage>537</lpage>
<pub-id pub-id-type="pmid">16458069</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Middlebrooks1">
<label>26</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Middlebrooks</surname>
<given-names>JC</given-names>
</name>
<name>
<surname>Green</surname>
<given-names>DM</given-names>
</name>
</person-group>
<year>1991</year>
<article-title>Sound localization by human listeners.</article-title>
<source>Annu Rev Psychol</source>
<volume>42</volume>
<fpage>135</fpage>
<lpage>159</lpage>
<pub-id pub-id-type="pmid">2018391</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Brungart1">
<label>27</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brungart</surname>
<given-names>DS</given-names>
</name>
<name>
<surname>Durlach</surname>
<given-names>NI</given-names>
</name>
<name>
<surname>Rabinowitz</surname>
<given-names>WM</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>Auditory localization of nearby sources. II. Localization of a broadband source.</article-title>
<source>J Acoust Soc Am</source>
<volume>106</volume>
<fpage>1956</fpage>
<lpage>1968</lpage>
<pub-id pub-id-type="pmid">10530020</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Mondor1">
<label>28</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mondor</surname>
<given-names>TA</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>RJ</given-names>
</name>
</person-group>
<year>1995</year>
<article-title>Shifting and focusing auditory spatial attention.</article-title>
<source>J Exp Psychol Hum Percept Perform</source>
<volume>21</volume>
<fpage>387</fpage>
<lpage>409</lpage>
<pub-id pub-id-type="pmid">7714479</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-TederSlejrvi1">
<label>29</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Teder-Sälejärvi</surname>
<given-names>WA</given-names>
</name>
<name>
<surname>Hillyard</surname>
<given-names>SA</given-names>
</name>
</person-group>
<year>1998</year>
<article-title>The gradient of spatial auditory attention in free field: an event-related potential study.</article-title>
<source>Percept Psychophys</source>
<volume>60</volume>
<fpage>1228</fpage>
<lpage>1242</lpage>
<pub-id pub-id-type="pmid">9821784</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Sonnadara1">
<label>30</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sonnadara</surname>
<given-names>RR</given-names>
</name>
<name>
<surname>Alain</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Trainor</surname>
<given-names>LJ</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Effects of spatial separation and stimulus probability on the event-related potentials elicited by occasional changes in sound location.</article-title>
<source>Brain Res</source>
<volume>1071</volume>
<fpage>175</fpage>
<lpage>185</lpage>
<pub-id pub-id-type="pmid">16406012</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Deouell1">
<label>31</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Deouell</surname>
<given-names>LY</given-names>
</name>
<name>
<surname>Parnes</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Pickard</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Knight</surname>
<given-names>RT</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Spatial location is accurately tracked by human auditory sensory memory: evidence from the mismatch negativity.</article-title>
<source>Eur J Neurosci</source>
<volume>24</volume>
<fpage>1488</fpage>
<lpage>1494</lpage>
<pub-id pub-id-type="pmid">16987229</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Rader1">
<label>32</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rader</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Holmes</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Golob</surname>
<given-names>E</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Auditory event-related potentials during a spatial working memory task.</article-title>
<source>Clin Neurophysiol</source>
<volume>119</volume>
<fpage>1176</fpage>
<lpage>1189</lpage>
<pub-id pub-id-type="pmid">18313978</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Brainard1">
<label>33</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brainard</surname>
<given-names>DH</given-names>
</name>
</person-group>
<year>1997</year>
<article-title>The Psychophysics Toolbox.</article-title>
<source>Spat Vis</source>
<volume>10</volume>
<fpage>433</fpage>
<lpage>436</lpage>
<pub-id pub-id-type="pmid">9176952</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Green1">
<label>34</label>
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Green</surname>
<given-names>MD</given-names>
</name>
<name>
<surname>Swets</surname>
<given-names>JA</given-names>
</name>
</person-group>
<year>1966</year>
<article-title>Signal detection theory and psychophysics.</article-title>
<publisher-loc>Huntington, NY</publisher-loc>
<publisher-name>Krieger</publisher-name>
</mixed-citation>
</ref>
<ref id="pone.0009813-Fawcett1">
<label>35</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fawcett</surname>
<given-names>T</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>An introduction to roc analysis.</article-title>
<source>Pattern Recognit Lett</source>
<volume>27</volume>
<fpage>861</fpage>
<lpage>874</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009813-Duda1">
<label>36</label>
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Duda</surname>
<given-names>RO</given-names>
</name>
<name>
<surname>Hart</surname>
<given-names>PE</given-names>
</name>
<name>
<surname>Stork</surname>
<given-names>DG</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>Pattern Classification.</article-title>
<publisher-name>Wiley & Sons, 2nd edition</publisher-name>
</mixed-citation>
</ref>
<ref id="pone.0009813-Mller1">
<label>37</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Müller</surname>
<given-names>KR</given-names>
</name>
<name>
<surname>Krauledat</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Dornhege</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Curio</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Blankertz</surname>
<given-names>B</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Machine learning techniques for brain-computer interfaces.</article-title>
<source>Biomed Tech</source>
<volume>49</volume>
<fpage>11</fpage>
<lpage>22</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009813-Student1">
<label>38</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Student</surname>
</name>
</person-group>
<year>1908</year>
<article-title>The probable error of a mean.</article-title>
<source>Biometrika</source>
<volume>6</volume>
<fpage>1</fpage>
<lpage>25</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009813-Ledoit1">
<label>39</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ledoit</surname>
<given-names>O</given-names>
</name>
<name>
<surname>Wolf</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>A well-conditioned estimator for large-dimensional covariance matrices.</article-title>
<source>J Multivariate Anal</source>
<volume>88</volume>
<fpage>365</fpage>
<lpage>411</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009813-Wolpaw2">
<label>40</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wolpaw</surname>
<given-names>JR</given-names>
</name>
<name>
<surname>Birbaumer</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Heetderks</surname>
<given-names>WJ</given-names>
</name>
<name>
<surname>McFarland</surname>
<given-names>DJ</given-names>
</name>
<name>
<surname>Peckham</surname>
<given-names>PH</given-names>
</name>
<etal></etal>
</person-group>
<year>2000</year>
<article-title>Brain-computer interface technology: a review of the first international meeting.</article-title>
<source>IEEE Trans Neural Syst Rehabil Eng</source>
<volume>8</volume>
<fpage>164</fpage>
<lpage>173</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009813-TederSlejrvi2">
<label>41</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Teder-Sälejärvi</surname>
<given-names>WA</given-names>
</name>
<name>
<surname>Hillyard</surname>
<given-names>SA</given-names>
</name>
<name>
<surname>Röder</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Neville</surname>
<given-names>HJ</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>Spatial attention to central and peripheral auditory stimuli as indexed by event-related potentials.</article-title>
<source>Cogn Brain Res</source>
<volume>8</volume>
<fpage>213</fpage>
<lpage>227</lpage>
</mixed-citation>
</ref>
<ref id="pone.0009813-Polich2">
<label>42</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Polich</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Criado</surname>
<given-names>JR</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Neuropsychology and neuropharmacology of P3a and P3b.</article-title>
<source>Int J Psychophysiol</source>
<volume>60</volume>
<fpage>172</fpage>
<lpage>185</lpage>
<pub-id pub-id-type="pmid">16510201</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Mertens1">
<label>43</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mertens</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Polich</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>1997</year>
<article-title>P300 from a single-stimulus paradigm: passive versus active tasks and stimulus modality.</article-title>
<source>Electroencephalogr Clin Neurophysiol</source>
<volume>104</volume>
<fpage>488</fpage>
<lpage>497</lpage>
<pub-id pub-id-type="pmid">9402891</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Katayama1">
<label>44</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Katayama</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Polich</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>1996</year>
<article-title>P300, probability, and the three-tone paradigm.</article-title>
<source>Electroencephalogr Clin Neurophysiol</source>
<volume>100</volume>
<fpage>555</fpage>
<lpage>562</lpage>
<pub-id pub-id-type="pmid">8980420</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Serby1">
<label>45</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Serby</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Yom-Tov</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Inbar</surname>
<given-names>GF</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>An improved P300-based brain-computer interface.</article-title>
<source>IEEE Trans Neural Syst Rehabil Eng</source>
<volume>13</volume>
<fpage>89</fpage>
<lpage>98</lpage>
<pub-id pub-id-type="pmid">15813410</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0009813-Tomiaka1">
<label>46</label>
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tomioka</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Haufe</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Combined classification and channel/basis selection with L1-L2 regularization with application to P300 speller system. In: Proceedings of the 4th International Brain-Computer Interface Workshop and Training Course 2008.</article-title>
<source>TU-Graz</source>
<fpage>232</fpage>
<lpage>237</lpage>
</mixed-citation>
</ref>
</ref-list>
<fn-group>
<fn fn-type="conflict">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="financial-disclosure">
<p>
<bold>Funding: </bold>
This work was partly supported by the European Information and Communication Technologies (ICT) Programme Project FP7-224631 and 216886, by grants of the Deutsche Forschungsgemeinschaft (DFG) (MU 987/3-1) and Bundesministerium für Bildung und Forschung (BMBF) (FKZ 01IB001A, 01GQ0850) and by the FP7-ICT Programme of the European Community, under the PASCAL2 Network of Excellence, ICT-216886. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</p>
</fn>
</fn-group>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>Allemagne</li>
</country>
<region>
<li>Berlin</li>
</region>
<settlement>
<li>Berlin</li>
</settlement>
</list>
<tree>
<country name="Allemagne">
<region name="Berlin">
<name sortKey="Schreuder, Martijn" sort="Schreuder, Martijn" uniqKey="Schreuder M" first="Martijn" last="Schreuder">Martijn Schreuder</name>
</region>
<name sortKey="Blankertz, Benjamin" sort="Blankertz, Benjamin" uniqKey="Blankertz B" first="Benjamin" last="Blankertz">Benjamin Blankertz</name>
<name sortKey="Blankertz, Benjamin" sort="Blankertz, Benjamin" uniqKey="Blankertz B" first="Benjamin" last="Blankertz">Benjamin Blankertz</name>
<name sortKey="Tangermann, Michael" sort="Tangermann, Michael" uniqKey="Tangermann M" first="Michael" last="Tangermann">Michael Tangermann</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001F83 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 001F83 | SxmlIndent | more

To put a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:2848564
   |texte=   A New Auditory Multi-Class Brain-Computer Interface Paradigm: Spatial Hearing as an Informative Cue
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:20368976" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024