Exploration server on haptic devices


A Gaze Independent Brain-Computer Interface Based on Visual Stimulation through Closed Eyelids

Internal identifier: 000792 (Pmc/Checkpoint); previous: 000791; next: 000793


Authors: Han-Jeong Hwang [Germany]; Valeria Y. Ferreria [Germany]; Daniel Ulrich [Germany]; Tayfun Kilic [Germany]; Xenofon Chatziliadis [Germany]; Benjamin Blankertz [Germany]; Matthias Treder [Germany, United Kingdom]

Source:

RBID: PMC:4625131

Abstract

A classical brain-computer interface (BCI) based on visual event-related potentials (ERPs) is of limited application value for paralyzed patients with severe oculomotor impairments. In this study, we introduce a novel gaze independent BCI paradigm that can potentially be used by such end-users because visual stimuli are administered on closed eyelids. The paradigm involved verbally presented questions with three possible answers. Online BCI experiments were conducted with twelve healthy subjects, in which they selected one option by attending to one of three different visual stimuli. It was confirmed that typical cognitive ERPs are evidently modulated by attention to a target stimulus in an eyes-closed and gaze independent condition, and can be classified with high accuracy during online operation (74.58% ± 17.85 s.d.; chance level 33.33%), demonstrating the effectiveness of the proposed novel visual ERP paradigm. Also, stimulus-specific eye movements observed during stimulation were verified to be reflex responses to light stimuli, and they did not contribute to classification. To the best of our knowledge, this study is the first to show the possibility of using a gaze independent visual ERP paradigm in an eyes-closed condition, thereby providing another communication option for severely locked-in patients suffering from complex ocular dysfunctions.


URL:
DOI: 10.1038/srep15890
PubMed: 26510583
PubMed Central: 4625131




The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">A Gaze Independent Brain-Computer Interface Based on Visual Stimulation through Closed Eyelids</title>
<author>
<name sortKey="Hwang, Han Jeong" sort="Hwang, Han Jeong" uniqKey="Hwang H" first="Han-Jeong" last="Hwang">Han-Jeong Hwang</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Machine Learning Group, Berlin Institute of Technology (TU Berlin)</institution>
, Marchstrasse 23, 10587 Berlin,
<country>Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Ferreria, Valeria Y" sort="Ferreria, Valeria Y" uniqKey="Ferreria V" first="Valeria Y." last="Ferreria">Valeria Y. Ferreria</name>
<affiliation wicri:level="1">
<nlm:aff id="a2">
<institution>Neurotechnology Group, Berlin Institute of Technology (TU Berlin)</institution>
, Marchstrasse 23, 10587 Berlin,
<country>Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Ulrich, Daniel" sort="Ulrich, Daniel" uniqKey="Ulrich D" first="Daniel" last="Ulrich">Daniel Ulrich</name>
<affiliation wicri:level="1">
<nlm:aff id="a2">
<institution>Neurotechnology Group, Berlin Institute of Technology (TU Berlin)</institution>
, Marchstrasse 23, 10587 Berlin,
<country>Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Kilic, Tayfun" sort="Kilic, Tayfun" uniqKey="Kilic T" first="Tayfun" last="Kilic">Tayfun Kilic</name>
<affiliation wicri:level="1">
<nlm:aff id="a2">
<institution>Neurotechnology Group, Berlin Institute of Technology (TU Berlin)</institution>
, Marchstrasse 23, 10587 Berlin,
<country>Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Chatziliadis, Xenofon" sort="Chatziliadis, Xenofon" uniqKey="Chatziliadis X" first="Xenofon" last="Chatziliadis">Xenofon Chatziliadis</name>
<affiliation wicri:level="1">
<nlm:aff id="a2">
<institution>Neurotechnology Group, Berlin Institute of Technology (TU Berlin)</institution>
, Marchstrasse 23, 10587 Berlin,
<country>Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Blankertz, Benjamin" sort="Blankertz, Benjamin" uniqKey="Blankertz B" first="Benjamin" last="Blankertz">Benjamin Blankertz</name>
<affiliation wicri:level="1">
<nlm:aff id="a2">
<institution>Neurotechnology Group, Berlin Institute of Technology (TU Berlin)</institution>
, Marchstrasse 23, 10587 Berlin,
<country>Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Treder, Matthias" sort="Treder, Matthias" uniqKey="Treder M" first="Matthias" last="Treder">Matthias Treder</name>
<affiliation wicri:level="1">
<nlm:aff id="a2">
<institution>Neurotechnology Group, Berlin Institute of Technology (TU Berlin)</institution>
, Marchstrasse 23, 10587 Berlin,
<country>Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="a3">
<institution>Behavioural & Clinical Neurosciences Institute, Department of Psychiatry, University of Cambridge</institution>
,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26510583</idno>
<idno type="pmc">4625131</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4625131</idno>
<idno type="RBID">PMC:4625131</idno>
<idno type="doi">10.1038/srep15890</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000641</idno>
<idno type="wicri:Area/Pmc/Curation">000641</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000792</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">A Gaze Independent Brain-Computer Interface Based on Visual Stimulation through Closed Eyelids</title>
<author>
<name sortKey="Hwang, Han Jeong" sort="Hwang, Han Jeong" uniqKey="Hwang H" first="Han-Jeong" last="Hwang">Han-Jeong Hwang</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Machine Learning Group, Berlin Institute of Technology (TU Berlin)</institution>
, Marchstrasse 23, 10587 Berlin,
<country>Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Ferreria, Valeria Y" sort="Ferreria, Valeria Y" uniqKey="Ferreria V" first="Valeria Y." last="Ferreria">Valeria Y. Ferreria</name>
<affiliation wicri:level="1">
<nlm:aff id="a2">
<institution>Neurotechnology Group, Berlin Institute of Technology (TU Berlin)</institution>
, Marchstrasse 23, 10587 Berlin,
<country>Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Ulrich, Daniel" sort="Ulrich, Daniel" uniqKey="Ulrich D" first="Daniel" last="Ulrich">Daniel Ulrich</name>
<affiliation wicri:level="1">
<nlm:aff id="a2">
<institution>Neurotechnology Group, Berlin Institute of Technology (TU Berlin)</institution>
, Marchstrasse 23, 10587 Berlin,
<country>Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Kilic, Tayfun" sort="Kilic, Tayfun" uniqKey="Kilic T" first="Tayfun" last="Kilic">Tayfun Kilic</name>
<affiliation wicri:level="1">
<nlm:aff id="a2">
<institution>Neurotechnology Group, Berlin Institute of Technology (TU Berlin)</institution>
, Marchstrasse 23, 10587 Berlin,
<country>Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Chatziliadis, Xenofon" sort="Chatziliadis, Xenofon" uniqKey="Chatziliadis X" first="Xenofon" last="Chatziliadis">Xenofon Chatziliadis</name>
<affiliation wicri:level="1">
<nlm:aff id="a2">
<institution>Neurotechnology Group, Berlin Institute of Technology (TU Berlin)</institution>
, Marchstrasse 23, 10587 Berlin,
<country>Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Blankertz, Benjamin" sort="Blankertz, Benjamin" uniqKey="Blankertz B" first="Benjamin" last="Blankertz">Benjamin Blankertz</name>
<affiliation wicri:level="1">
<nlm:aff id="a2">
<institution>Neurotechnology Group, Berlin Institute of Technology (TU Berlin)</institution>
, Marchstrasse 23, 10587 Berlin,
<country>Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Treder, Matthias" sort="Treder, Matthias" uniqKey="Treder M" first="Matthias" last="Treder">Matthias Treder</name>
<affiliation wicri:level="1">
<nlm:aff id="a2">
<institution>Neurotechnology Group, Berlin Institute of Technology (TU Berlin)</institution>
, Marchstrasse 23, 10587 Berlin,
<country>Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="a3">
<institution>Behavioural & Clinical Neurosciences Institute, Department of Psychiatry, University of Cambridge</institution>
,
<country>UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Scientific Reports</title>
<idno type="eISSN">2045-2322</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>A classical brain-computer interface (BCI) based on visual event-related potentials (ERPs) is of limited application value for paralyzed patients with severe oculomotor impairments. In this study, we introduce a novel gaze independent BCI paradigm that can potentially be used by such end-users because visual stimuli are administered on closed eyelids. The paradigm involved verbally presented questions with three possible answers. Online BCI experiments were conducted with twelve healthy subjects, in which they selected one option by attending to one of three different visual stimuli. It was confirmed that typical cognitive ERPs are evidently modulated by attention to a target stimulus in an eyes-closed and gaze independent condition, and can be classified with high accuracy during online operation (74.58% ± 17.85 s.d.; chance level 33.33%), demonstrating the effectiveness of the proposed novel visual ERP paradigm. Also, stimulus-specific eye movements observed during stimulation were verified to be reflex responses to light stimuli, and they did not contribute to classification. To the best of our knowledge, this study is the first to show the possibility of using a gaze independent visual ERP paradigm in an eyes-closed condition, thereby providing another communication option for severely locked-in patients suffering from complex ocular dysfunctions.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Hwang, H J" uniqKey="Hwang H">H.-J. Hwang</name>
</author>
<author>
<name sortKey="Kim, S" uniqKey="Kim S">S. Kim</name>
</author>
<author>
<name sortKey="Choi, S" uniqKey="Choi S">S. Choi</name>
</author>
<author>
<name sortKey="Im, C H" uniqKey="Im C">C.-H. Im</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="An, X W" uniqKey="An X">X. W. An</name>
</author>
<author>
<name sortKey="Hohne, J" uniqKey="Hohne J">J. Höhne</name>
</author>
<author>
<name sortKey="Ming, D" uniqKey="Ming D">D. Ming</name>
</author>
<author>
<name sortKey="Blankertz, B" uniqKey="Blankertz B">B. Blankertz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Belitski, A" uniqKey="Belitski A">A. Belitski</name>
</author>
<author>
<name sortKey="Farquhar, J" uniqKey="Farquhar J">J. Farquhar</name>
</author>
<author>
<name sortKey="Desain, P" uniqKey="Desain P">P. Desain</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Aloise, F" uniqKey="Aloise F">F. Aloise</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sellers, E W" uniqKey="Sellers E">E. W. Sellers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Treder, M S" uniqKey="Treder M">M. S. Treder</name>
</author>
<author>
<name sortKey="Schmidt, N M" uniqKey="Schmidt N">N. M. Schmidt</name>
</author>
<author>
<name sortKey="Blankertz, B" uniqKey="Blankertz B">B. Blankertz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hwang, H J" uniqKey="Hwang H">H.-J. Hwang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rebsamen, B" uniqKey="Rebsamen B">B. Rebsamen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Muller Putz, G R" uniqKey="Muller Putz G">G. R. Müller-Putz</name>
</author>
<author>
<name sortKey="Pfurtscheller, G" uniqKey="Pfurtscheller G">G. Pfurtscheller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wilson, J J" uniqKey="Wilson J">J. J. Wilson</name>
</author>
<author>
<name sortKey="Palaniappan, R" uniqKey="Palaniappan R">R. Palaniappan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lenhardt, A" uniqKey="Lenhardt A">A. Lenhardt</name>
</author>
<author>
<name sortKey="Kaper, M" uniqKey="Kaper M">M. Kaper</name>
</author>
<author>
<name sortKey="Ritter, H J" uniqKey="Ritter H">H. J. Ritter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Xu, M" uniqKey="Xu M">M. Xu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yin, E" uniqKey="Yin E">E. Yin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yin, E" uniqKey="Yin E">E. Yin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kushner, M J" uniqKey="Kushner M">M. J. Kushner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Abel, L A" uniqKey="Abel L">L. A. Abel</name>
</author>
<author>
<name sortKey="Gibson, K" uniqKey="Gibson K">K. Gibson</name>
</author>
<author>
<name sortKey="Williams, I M" uniqKey="Williams I">I. M. Williams</name>
</author>
<author>
<name sortKey="Li, C W" uniqKey="Li C">C. W. Li</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Averbuch Heller, L" uniqKey="Averbuch Heller L">L. Averbuch-Heller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pinto, S" uniqKey="Pinto S">S. Pinto</name>
</author>
<author>
<name sortKey="Carvalho, M" uniqKey="Carvalho M">M. Carvalho</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ushio, M" uniqKey="Ushio M">M. Ushio</name>
</author>
<author>
<name sortKey="Iwasaki, S" uniqKey="Iwasaki S">S. Iwasaki</name>
</author>
<author>
<name sortKey="Sugasawa, K" uniqKey="Sugasawa K">K. Sugasawa</name>
</author>
<author>
<name sortKey="Murofushi, T" uniqKey="Murofushi T">T. Murofushi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jacobs, L" uniqKey="Jacobs L">L. Jacobs</name>
</author>
<author>
<name sortKey="Bozian, D" uniqKey="Bozian D">D. Bozian</name>
</author>
<author>
<name sortKey="Heffner, R R" uniqKey="Heffner R">R. R. Heffner</name>
</author>
<author>
<name sortKey="Barron, S A" uniqKey="Barron S">S. A. Barron</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Donaghy, C" uniqKey="Donaghy C">C. Donaghy</name>
</author>
<author>
<name sortKey="Thurtell, M J" uniqKey="Thurtell M">M. J. Thurtell</name>
</author>
<author>
<name sortKey="Pioro, E P" uniqKey="Pioro E">E. P. Pioro</name>
</author>
<author>
<name sortKey="Gibson, J M" uniqKey="Gibson J">J. M. Gibson</name>
</author>
<author>
<name sortKey="Leigh, R J" uniqKey="Leigh R">R. J. Leigh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Okuda, B" uniqKey="Okuda B">B. Okuda</name>
</author>
<author>
<name sortKey="Yamamoto, T" uniqKey="Yamamoto T">T. Yamamoto</name>
</author>
<author>
<name sortKey="Yamasaki, M" uniqKey="Yamasaki M">M. Yamasaki</name>
</author>
<author>
<name sortKey="Maya, K" uniqKey="Maya K">K. Maya</name>
</author>
<author>
<name sortKey="Imai, T" uniqKey="Imai T">T. Imai</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hayashi, H" uniqKey="Hayashi H">H. Hayashi</name>
</author>
<author>
<name sortKey="Kato, S" uniqKey="Kato S">S. Kato</name>
</author>
<author>
<name sortKey="Kawada, T" uniqKey="Kawada T">T. Kawada</name>
</author>
<author>
<name sortKey="Tsubaki, T" uniqKey="Tsubaki T">T. Tsubaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Treder, M S" uniqKey="Treder M">M. S. Treder</name>
</author>
<author>
<name sortKey="Blankertz, B" uniqKey="Blankertz B">B. Blankertz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lesenfants, D" uniqKey="Lesenfants D">D. Lesenfants</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhang, D" uniqKey="Zhang D">D. Zhang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tonin, L" uniqKey="Tonin L">L. Tonin</name>
</author>
<author>
<name sortKey="Leeb, R" uniqKey="Leeb R">R. Leeb</name>
</author>
<author>
<name sortKey="Sobolewski, A" uniqKey="Sobolewski A">A. Sobolewski</name>
</author>
<author>
<name sortKey="Millan Jdel, R" uniqKey="Millan Jdel R">R. Millan Jdel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marchetti, M" uniqKey="Marchetti M">M. Marchetti</name>
</author>
<author>
<name sortKey="Piccione, F" uniqKey="Piccione F">F. Piccione</name>
</author>
<author>
<name sortKey="Silvoni, S" uniqKey="Silvoni S">S. Silvoni</name>
</author>
<author>
<name sortKey="Gamberini, L" uniqKey="Gamberini L">L. Gamberini</name>
</author>
<author>
<name sortKey="Priftis, K" uniqKey="Priftis K">K. Priftis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Aloise, F" uniqKey="Aloise F">F. Aloise</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Allison, B Z" uniqKey="Allison B">B. Z. Allison</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Acqualagna, L" uniqKey="Acqualagna L">L. Acqualagna</name>
</author>
<author>
<name sortKey="Blankertz, B" uniqKey="Blankertz B">B. Blankertz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lim, J H" uniqKey="Lim J">J.-H. Lim</name>
</author>
<author>
<name sortKey="Hwang, H J" uniqKey="Hwang H">H.-J. Hwang</name>
</author>
<author>
<name sortKey="Han, C H" uniqKey="Han C">C.-H. Han</name>
</author>
<author>
<name sortKey="Jung, K Y" uniqKey="Jung K">K.-Y. Jung</name>
</author>
<author>
<name sortKey="Im, C H" uniqKey="Im C">C.-H. Im</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blankertz, B" uniqKey="Blankertz B">B. Blankertz</name>
</author>
<author>
<name sortKey="Lemm, S" uniqKey="Lemm S">S. Lemm</name>
</author>
<author>
<name sortKey="Treder, M" uniqKey="Treder M">M. Treder</name>
</author>
<author>
<name sortKey="Haufe, S" uniqKey="Haufe S">S. Haufe</name>
</author>
<author>
<name sortKey="Muller, K R" uniqKey="Muller K">K. R. Müller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Winkler, I" uniqKey="Winkler I">I. Winkler</name>
</author>
<author>
<name sortKey="Haufe, S" uniqKey="Haufe S">S. Haufe</name>
</author>
<author>
<name sortKey="Tangermann, M" uniqKey="Tangermann M">M. Tangermann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Winkler, I" uniqKey="Winkler I">I. Winkler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brunner, P" uniqKey="Brunner P">P. Brunner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Frenzel, S" uniqKey="Frenzel S">S. Frenzel</name>
</author>
<author>
<name sortKey="Neubert, E" uniqKey="Neubert E">E. Neubert</name>
</author>
<author>
<name sortKey="Bandt, C" uniqKey="Bandt C">C. Bandt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shimoda, M" uniqKey="Shimoda M">M. Shimoda</name>
</author>
<author>
<name sortKey="Yokoyama, Y" uniqKey="Yokoyama Y">Y. Yokoyama</name>
</author>
<author>
<name sortKey="Okada, A" uniqKey="Okada A">A. Okada</name>
</author>
<author>
<name sortKey="Nakashima, K" uniqKey="Nakashima K">K. Nakashima</name>
</author>
<author>
<name sortKey="Takahashi, K" uniqKey="Takahashi K">K. Takahashi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cohen, B" uniqKey="Cohen B">B. Cohen</name>
</author>
<author>
<name sortKey="Caroscio, J" uniqKey="Caroscio J">J. Caroscio</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shaunak, S" uniqKey="Shaunak S">S. Shaunak</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gizzi, M" uniqKey="Gizzi M">M. Gizzi</name>
</author>
<author>
<name sortKey="Dirocco, A" uniqKey="Dirocco A">A. Dirocco</name>
</author>
<author>
<name sortKey="Sivak, M" uniqKey="Sivak M">M. Sivak</name>
</author>
<author>
<name sortKey="Cohen, B" uniqKey="Cohen B">B. Cohen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dirocco, A" uniqKey="Dirocco A">A. Dirocco</name>
</author>
<author>
<name sortKey="Gizzi, M" uniqKey="Gizzi M">M. Gizzi</name>
</author>
<author>
<name sortKey="Sivak, M" uniqKey="Sivak M">M. Sivak</name>
</author>
<author>
<name sortKey="Cohen, B" uniqKey="Cohen B">B. Cohen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Donaghy, C" uniqKey="Donaghy C">C. Donaghy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rivaud, S" uniqKey="Rivaud S">S. Rivaud</name>
</author>
<author>
<name sortKey="Muri, R M" uniqKey="Muri R">R. M. Muri</name>
</author>
<author>
<name sortKey="Gaymard, B" uniqKey="Gaymard B">B. Gaymard</name>
</author>
<author>
<name sortKey="Vermersch, A I" uniqKey="Vermersch A">A. I. Vermersch</name>
</author>
<author>
<name sortKey="Pierrotdeseilligny, C" uniqKey="Pierrotdeseilligny C">C. Pierrotdeseilligny</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schaeff, S" uniqKey="Schaeff S">S. Schaeff</name>
</author>
<author>
<name sortKey="Treder, M S" uniqKey="Treder M">M. S. Treder</name>
</author>
<author>
<name sortKey="Venthur, B" uniqKey="Venthur B">B. Venthur</name>
</author>
<author>
<name sortKey="Blankertz, B" uniqKey="Blankertz B">B. Blankertz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hohne, J" uniqKey="Hohne J">J. Höhne</name>
</author>
<author>
<name sortKey="Tangermann, M" uniqKey="Tangermann M">M. Tangermann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schreuder, M" uniqKey="Schreuder M">M. Schreuder</name>
</author>
<author>
<name sortKey="Blankertz, B" uniqKey="Blankertz B">B. Blankertz</name>
</author>
<author>
<name sortKey="Tangermann, M" uniqKey="Tangermann M">M. Tangermann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schreuder, M" uniqKey="Schreuder M">M. Schreuder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Simon, N" uniqKey="Simon N">N. Simon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hill, N J" uniqKey="Hill N">N. J. Hill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="K Thner, I" uniqKey="K Thner I">I. Käthner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brouwer, A M" uniqKey="Brouwer A">A.-M. Brouwer</name>
</author>
<author>
<name sortKey="Van Erp, J B F" uniqKey="Van Erp J">J. B. F. van Erp</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Der Waal, M" uniqKey="Van Der Waal M">M. van der Waal</name>
</author>
<author>
<name sortKey="Severens, M" uniqKey="Severens M">M. Severens</name>
</author>
<author>
<name sortKey="Geuze, J" uniqKey="Geuze J">J. Geuze</name>
</author>
<author>
<name sortKey="Desain, P" uniqKey="Desain P">P. Desain</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kaufmann, T" uniqKey="Kaufmann T">T. Kaufmann</name>
</author>
<author>
<name sortKey="Herweg, A" uniqKey="Herweg A">A. Herweg</name>
</author>
<author>
<name sortKey="Kubler, A" uniqKey="Kubler A">A. Kübler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Polich, J" uniqKey="Polich J">J. Polich</name>
</author>
<author>
<name sortKey="Ellerson, P C" uniqKey="Ellerson P">P. C. Ellerson</name>
</author>
<author>
<name sortKey="Cohen, J" uniqKey="Cohen J">J. Cohen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Comerchero, M D" uniqKey="Comerchero M">M. D. Comerchero</name>
</author>
<author>
<name sortKey="Polich, J" uniqKey="Polich J">J. Polich</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kaufmann, T" uniqKey="Kaufmann T">T. Kaufmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schreuder, M" uniqKey="Schreuder M">M. Schreuder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nijboer, F" uniqKey="Nijboer F">F. Nijboer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kubler, A" uniqKey="Kubler A">A. Kübler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brouwer, A M" uniqKey="Brouwer A">A.-M. Brouwer</name>
</author>
<author>
<name sortKey="Van Erp, J B F" uniqKey="Van Erp J">J. B. F. van Erp</name>
</author>
<author>
<name sortKey="Aloise, F" uniqKey="Aloise F">F. Aloise</name>
</author>
<author>
<name sortKey="Cincotti, F" uniqKey="Cincotti F">F. Cincotti</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Sci Rep</journal-id>
<journal-id journal-id-type="iso-abbrev">Sci Rep</journal-id>
<journal-title-group>
<journal-title>Scientific Reports</journal-title>
</journal-title-group>
<issn pub-type="epub">2045-2322</issn>
<publisher>
<publisher-name>Nature Publishing Group</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">26510583</article-id>
<article-id pub-id-type="pmc">4625131</article-id>
<article-id pub-id-type="pii">srep15890</article-id>
<article-id pub-id-type="doi">10.1038/srep15890</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>A Gaze Independent Brain-Computer Interface Based on Visual Stimulation through Closed Eyelids</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Hwang</surname>
<given-names>Han-Jeong</given-names>
</name>
<xref ref-type="corresp" rid="c1">a</xref>
<xref ref-type="aff" rid="a1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Ferreria</surname>
<given-names>Valeria Y.</given-names>
</name>
<xref ref-type="aff" rid="a2">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Ulrich</surname>
<given-names>Daniel</given-names>
</name>
<xref ref-type="aff" rid="a2">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kilic</surname>
<given-names>Tayfun</given-names>
</name>
<xref ref-type="aff" rid="a2">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Chatziliadis</surname>
<given-names>Xenofon</given-names>
</name>
<xref ref-type="aff" rid="a2">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Blankertz</surname>
<given-names>Benjamin</given-names>
</name>
<xref ref-type="aff" rid="a2">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Treder</surname>
<given-names>Matthias</given-names>
</name>
<xref ref-type="corresp" rid="c2">b</xref>
<xref ref-type="aff" rid="a2">2</xref>
<xref ref-type="aff" rid="a3">3</xref>
</contrib>
<aff id="a1">
<label>1</label>
<institution>Machine Learning Group, Berlin Institute of Technology (TU Berlin)</institution>
, Marchstrasse 23, 10587 Berlin,
<country>Germany</country>
</aff>
<aff id="a2">
<label>2</label>
<institution>Neurotechnology Group, Berlin Institute of Technology (TU Berlin)</institution>
, Marchstrasse 23, 10587 Berlin,
<country>Germany</country>
</aff>
<aff id="a3">
<label>3</label>
<institution>Behavioural & Clinical Neurosciences Institute, Department of Psychiatry, University of Cambridge</institution>
,
<country>UK</country>
</aff>
</contrib-group>
<author-notes>
<corresp id="c1">
<label>a</label>
<email>han-jeong.hwang@tu-berlin.de</email>
</corresp>
<corresp id="c2">
<label>b</label>
<email>mst40@cam.ac.uk</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>29</day>
<month>10</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="collection">
<year>2015</year>
</pub-date>
<volume>5</volume>
<elocation-id>15890</elocation-id>
<history>
<date date-type="received">
<day>29</day>
<month>06</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>05</day>
<month>10</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2015, Macmillan Publishers Limited</copyright-statement>
<copyright-year>2015</copyright-year>
<copyright-holder>Macmillan Publishers Limited</copyright-holder>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<pmc-comment>author-paid</pmc-comment>
<license-p>This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">http://creativecommons.org/licenses/by/4.0/</ext-link>
</license-p>
</license>
</permissions>
<abstract>
<p>A classical brain-computer interface (BCI) based on visual event-related potentials (ERPs) is of limited application value for paralyzed patients with severe oculomotor impairments. In this study, we introduce a novel gaze independent BCI paradigm that can potentially be used by such end-users because visual stimuli are administered on closed eyelids. The paradigm involved verbally presented questions with three possible answers. Online BCI experiments were conducted with twelve healthy subjects, in which they selected one option by attending to one of three different visual stimuli. It was confirmed that typical cognitive ERPs are evidently modulated by attention to a target stimulus in an eyes-closed and gaze independent condition, and can be classified with high accuracy during online operation (74.58% ± 17.85 s.d.; chance level 33.33%), demonstrating the effectiveness of the proposed novel visual ERP paradigm. Also, stimulus-specific eye movements observed during stimulation were verified to be reflex responses to light stimuli, and they did not contribute to classification. To the best of our knowledge, this study is the first to show the possibility of using a gaze independent visual ERP paradigm in an eyes-closed condition, thereby providing another communication option for severely locked-in patients suffering from complex ocular dysfunctions.</p>
</abstract>
</article-meta>
</front>
<body>
<p>A brain-computer interface (BCI) is a communication method that translates brain signals into commands for controlling external devices. It can thereby provide an alternative communication channel for severely paralyzed patients, such as amyotrophic lateral sclerosis (ALS) patients. To develop BCI systems based on event-related potentials (ERPs), various sensory modalities have been exploited
<xref ref-type="bibr" rid="b1">1</xref>
, i.e., vision, hearing, and somatic sensation, in which the user is asked to attend to one of the external stimuli, and the brain signals evoked by the different stimuli are discriminated and used as an input source for controlling BCI systems. In particular, vision-based BCI paradigms have been intensively studied
<xref ref-type="bibr" rid="b1">1</xref>
because they generally provide a more intuitive way of mapping stimuli to commands, along with higher communication performance. Moreover, they typically offer better classification performance in multi-class BCI systems compared to auditory and haptic BCI paradigms
<xref ref-type="bibr" rid="b2">2</xref>
<xref ref-type="bibr" rid="b3">3</xref>
<xref ref-type="bibr" rid="b4">4</xref>
<xref ref-type="bibr" rid="b5">5</xref>
.</p>
<p>Two widely used visual BCI paradigms are steady-state visual evoked potential (SSVEP) and visual ERPs
<xref ref-type="bibr" rid="b1">1</xref>
. SSVEP is a periodic brain response to a visual stimulus flickering at a certain frequency. A visual ERP is a brain response that is phase-locked to the presentation of visual stimuli, e.g., in an oddball paradigm. Both paradigms have been widely used in developing a variety of BCI applications, e.g., mental spellers
<xref ref-type="bibr" rid="b6">6</xref>
<xref ref-type="bibr" rid="b7">7</xref>
, wheelchair navigation
<xref ref-type="bibr" rid="b8">8</xref>
, prosthesis control
<xref ref-type="bibr" rid="b9">9</xref>
, and mouse cursor control
<xref ref-type="bibr" rid="b10">10</xref>
, and have demonstrated the promise of using BCI systems in daily-life situations. For example, mental spellers based on each paradigm have shown typing speeds of up to 10 letters/min
<xref ref-type="bibr" rid="b7">7</xref>
<xref ref-type="bibr" rid="b11">11</xref>
, and recent studies further improved the typing performance of BCI spellers by hybridizing the SSVEP and visual ERP paradigms
<xref ref-type="bibr" rid="b12">12</xref>
<xref ref-type="bibr" rid="b13">13</xref>
<xref ref-type="bibr" rid="b14">14</xref>
.</p>
<p>However, patients with neurodegenerative diseases, who are the main targets of BCI technology, gradually lose their motor functions, including a decline of visual functions in advanced stages of the disease
<xref ref-type="bibr" rid="b15">15</xref>
<xref ref-type="bibr" rid="b16">16</xref>
<xref ref-type="bibr" rid="b17">17</xref>
<xref ref-type="bibr" rid="b18">18</xref>
<xref ref-type="bibr" rid="b19">19</xref>
<xref ref-type="bibr" rid="b20">20</xref>
<xref ref-type="bibr" rid="b21">21</xref>
<xref ref-type="bibr" rid="b22">22</xref>
. Several types of oculomotor control dysfunction have been reported in these patients, e.g., gaze palsy
<xref ref-type="bibr" rid="b19">19</xref>
<xref ref-type="bibr" rid="b22">22</xref>
, slow saccades
<xref ref-type="bibr" rid="b17">17</xref>
<xref ref-type="bibr" rid="b19">19</xref>
<xref ref-type="bibr" rid="b22">22</xref>
, nystagmus
<xref ref-type="bibr" rid="b15">15</xref>
<xref ref-type="bibr" rid="b16">16</xref>
<xref ref-type="bibr" rid="b20">20</xref>
, and eyelid drooping (ptosis)
<xref ref-type="bibr" rid="b18">18</xref>
<xref ref-type="bibr" rid="b23">23</xref>
. Since conventional visual BCI systems require moderate eye movements to gaze at a target during stimulation, patients suffering from these symptoms cannot take full advantage of them. For those with impaired oculomotor function, gaze independent BCI paradigms were introduced, in which the subject covertly focuses on a target stimulus while gazing at the center of a screen, without eye movements
<xref ref-type="bibr" rid="b6">6</xref>
<xref ref-type="bibr" rid="b24">24</xref>
<xref ref-type="bibr" rid="b25">25</xref>
<xref ref-type="bibr" rid="b26">26</xref>
<xref ref-type="bibr" rid="b27">27</xref>
<xref ref-type="bibr" rid="b28">28</xref>
<xref ref-type="bibr" rid="b29">29</xref>
<xref ref-type="bibr" rid="b30">30</xref>
<xref ref-type="bibr" rid="b31">31</xref>
. Also, a recent study first showed the feasibility of an eyes-closed visual BCI paradigm based on SSVEP under an overt attention condition
<xref ref-type="bibr" rid="b32">32</xref>
.</p>
<p>However, existing gaze independent BCIs have limited application value if oculomotor impairments are severe. For instance, a classical gaze independent BCI system can be used by patients with gaze palsy, but not by those with involuntary eyelid drooping or slow blinking, because successful operation requires that the subject stably keep the eyes open to covertly recognize visual stimuli using peripheral vision. Also, patients with gaze palsy and slow saccades cannot successfully use the recently proposed eyes-closed visual BCI paradigm, which requires direct gaze at a target; that paradigm is thus mainly applicable to patients who suffer from ocular ptosis or low blink rates but retain at least mild gaze function
<xref ref-type="bibr" rid="b32">32</xref>
.</p>
<p>In the present study, we propose a novel gaze independent visual BCI paradigm based on ERPs that are modulated by visual stimulation through closed eyelids, so that it potentially applies to locked-in state (LIS) patients with complex oculomotor impairments and to completely locked-in (CLI) patients. To verify the feasibility of the proposed BCI paradigm, a visual stimulation system was implemented using a pair of glasses and four LEDs, with which online BCI experiments were conducted with twelve healthy subjects. In the online experiment, visual stimuli were presented to the subjects with eyes closed while wearing the glasses-based stimulation system, and they were asked to covertly attend to one of the stimuli, without directly gazing at a target, in order to answer given questions. Classification outputs were given to the subjects for each trial in real time. Further analyses of classification accuracy and ERPs were performed offline.</p>
<sec disp-level="1">
<title>Method</title>
<sec disp-level="2">
<title>Subjects</title>
<p>Twelve healthy subjects participated in this study (8 males and 4 females, aged 30.41 ± 3.39 (s.d.) years). Three had previous experience with BCI experiments, and the others were naïve with respect to BCIs. None had a history of neurological, psychiatric, or other severe disorders that might affect experimental outcomes. All subjects had normal or corrected-to-normal vision. The fundamental goal of this study and the detailed experimental procedures were explained to each subject, who then signed a consent form before the experiments. This study was approved by the Ethics Committee of the Institute of Psychology and Ergonomics, Technical University of Berlin (approval number SH_01_20150330), and all experiments were conducted in accordance with the Declaration of Helsinki.</p>
</sec>
<sec disp-level="2">
<title>EEG Data Recording</title>
<p>During the experiments, EEG signals were sampled at 1000 Hz using a multi-channel EEG acquisition system (BrainCap, Brain Products, Munich, Germany) with 63 scalp electrodes placed according to the international 10–10 system. The electrode locations were Fp1–2, AF3–4, AF7–8, Fz, F1–10, FT7–8, FCz, FC1–6, T7–8, Cz, C1–6, TP7–8, CPz, CP1–6, Pz, P1–10, POz, PO3–4, PO7–8, Oz, and O1–2. Two EOG channels were created by bipolarly referencing two pairs of electrodes (horizontal EOG channel: F9–F10; vertical EOG channel: (Fp1+Fp2)/2). The EEG signals were referenced to the left mastoid with a forehead ground. A hardware bandpass filter with cutoff frequencies of 0.016 and 250 Hz was applied before sampling. The impedance of all electrodes was kept below 20 kΩ.</p>
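<p>As an illustration of the bipolar EOG derivation described above, the following minimal Python sketch derives the two EOG channels from a raw signal array (the array layout and channel-name list are assumptions for illustration, not part of the original acquisition software):</p>
<preformat>
import numpy as np

def derive_eog(eeg, channels):
    """Derive bipolar EOG channels from raw EEG.
    eeg: (n_channels, n_samples) array; channels: electrode names in row order."""
    idx = {name: i for i, name in enumerate(channels)}
    # Horizontal EOG: F9 - F10 (a negative value = eye shift to the right).
    heog = eeg[idx["F9"]] - eeg[idx["F10"]]
    # Vertical EOG: mean of the two frontopolar electrodes, (Fp1 + Fp2) / 2.
    veog = (eeg[idx["Fp1"]] + eeg[idx["Fp2"]]) / 2.0
    return heog, veog
</preformat>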
</sec>
<sec disp-level="2">
<title>Visual Stimulation</title>
<p>To present visual stimuli in the eyes-closed condition, we constructed a visual stimulation system consisting of a pair of glasses, four LEDs, and an LED controller. An Arduino Leonardo board containing an ATmega32u4 microprocessor was used as the LED controller. As sketched in
<xref ref-type="fig" rid="f1">Fig. 1</xref>
, two LEDs, one emitting blue and one emitting red light, were attached to each side of the glasses; two holes were drilled in each lens, the LEDs inserted, and fixed in place with glue. In order to realize a 3-class BCI system, the two red LEDs placed in the middle were paired and flashed synchronously, while the blue LEDs on the left and right sides were flashed independently. The duration of a single flash was 100 ms, and the inter-stimulus interval (ISI) was set to 1200 ms. The ISI, relatively long compared to typical ERP studies, was determined empirically to help the subjects distinguish an upcoming stimulus from the current one by color, because a visual stimulus at such a short distance from the eyes (<5 cm) yielded longer afterimage durations than ordinary stimulation conditions. The luminous intensity of the LEDs was also selected empirically through preliminary experiments: 95% and 80% of the maximum luminous intensity for the red (500 mcd/20 mA) and blue (1000 mcd/20 mA) LEDs, respectively. With this stimulation setting, none of the subjects reported feeling uncomfortable or having difficulty focusing on a target stimulus.</p>
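<p>For illustration, a minimal Python stand-in for the controller's flash timing (the actual firmware ran on the Arduino; the gpio helper and pin names here are hypothetical):</p>
<preformat>
import time

FLASH_MS = 100   # duration of a single flash
ISI_MS = 1200    # inter-stimulus interval, long enough for afterimages to fade
LED_GROUPS = {"left": ["blue_left"],
              "middle": ["red_left", "red_right"],   # paired, flashed synchronously
              "right": ["blue_right"]}

def flash(group, gpio):
    """Flash one LED group: 100 ms on, then wait out the 1200 ms ISI."""
    for pin in LED_GROUPS[group]:
        gpio.on(pin)
    time.sleep(FLASH_MS / 1000)
    for pin in LED_GROUPS[group]:
        gpio.off(pin)
    time.sleep(ISI_MS / 1000)
</preformat>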
</sec>
<sec disp-level="2">
<title>Questions</title>
<p>In the experiments, to demonstrate the possibility of using our proposed BCI paradigm in real clinical situations, questions with three possible choices were automatically read out to the subjects using a realistic speech synthesizer. Subjects were asked to answer the questions by concentrating on one of the three different visual stimuli, e.g., “Which of these drinks is alcoholic?” a) coffee, b) lemonade, c) beer. The ‘a’, ‘b’, and ‘c’ options corresponded to the left blue LED, the right blue LED, and the pair of middle red LEDs, respectively. Different questions were used for each trial, and the order of the questions was randomized for each subject.</p>
</sec>
<sec disp-level="2">
<title>Experimental Procedures</title>
<p>Subjects sat in a comfortable chair and wore the LED-equipped glasses after EEG preparation. They were asked to avoid any body movements during the experiments and to keep their eyes shut during visual stimulation. This was continuously monitored, and subjects neither made considerable body movements nor opened their eyes during the experiment. The visual stimuli were first presented sequentially, without EEG recording, to check whether the subjects could recognize the position (left, middle, or right) and color (blue or red) of each LED stimulus. The stimuli were presented repeatedly while the subjects adjusted the position of the LED glasses until each stimulus could be perceived comfortably, which was generally achieved within several iterations. Each experiment consisted of one calibration session and three feedback sessions. In the calibration session, data were collected to construct a subject-specific classifier. Fifteen questions (trials) and their true answers were presented, and subjects had to covertly attend to the LEDs corresponding to the designated true answers. Subsequently, three feedback sessions with 20 questions each were conducted (a total of 60 questions). Subjects were prompted to choose answers to each question by themselves, focus on the self-selected LEDs during visual stimulation, and input their answers using a keyboard after each trial. The numbers 1–3 were used for answering the ‘a’, ‘b’, and ‘c’ options, respectively, for which the subjects kept their index, middle, and ring fingers on the keys 1–3 of the keyboard throughout the feedback sessions. A classifier output (‘a’, ‘b’, or ‘c’) was given acoustically as feedback right after the subject input his/her own answer, and online classification accuracy was calculated by comparing the classifier output with the subject’s answer. For visual stimulation, the three groups of LEDs were randomly illuminated eight times per trial in both calibration and feedback sessions (3 groups × 8 sequences = 24 flashes). Thus, the time required for one selection was 31.2 s (1300 ms × 24 flashes = 31.2 s). A break of about 5 minutes was given between sessions. Subjects reported that the experimental task was not too hard.</p>
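<p>The per-trial stimulus schedule and its duration can be sketched as follows (an assumption here is simple per-sequence shuffling; the text specifies only that the three groups were randomly illuminated eight times each):</p>
<preformat>
import random

GROUPS = ["left", "middle", "right"]
N_SEQUENCES = 8
SOA_MS = 100 + 1200   # flash duration + ISI = 1300 ms per flash

def trial_schedule():
    """Randomized order of 24 flashes: each group appears once per sequence."""
    order = []
    for _ in range(N_SEQUENCES):
        seq = GROUPS[:]
        random.shuffle(seq)
        order.extend(seq)
    return order

print(len(trial_schedule()) * SOA_MS / 1000)  # 24 flashes -> 31.2 s per selection
</preformat>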
</sec>
<sec disp-level="2">
<title>EEG Data Analysis</title>
<p>All online and offline data analyses were performed after downsampling to 100 Hz. No software filter was applied for online data analysis. A linear discriminant analysis (LDA) classifier with shrinkage of the covariance matrix was used for online classification during the feedback sessions
<xref ref-type="bibr" rid="b33">33</xref>
. To train the shrinkage LDA classifier, the calibration data were first epoched from −200 ms to 1000 ms relative to stimulus onset, and baseline correction was then performed using the 200 ms of data preceding the stimulus. Epochs and channels containing physiological artifacts (e.g., eye and muscle movements) were removed based on a variance criterion; about 4% of epochs and fewer than one channel were rejected on average. The five most discriminative temporal intervals were then selected using a heuristic search based on signed squared values of the point-biserial correlation coefficient (
<italic>sgn r</italic>
<sup>2</sup>
), and the channel-wise mean amplitudes in the selected time intervals were calculated as features. The shrinkage LDA was trained using the features, and applied to the data measured during the feedback sessions for online classification (see Blankertz
<italic>et al</italic>
.’s study
<xref ref-type="bibr" rid="b33">33</xref>
for the data processing pipeline in detail).</p>
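<p>A minimal Python sketch of this pipeline, assuming epoched and baseline-corrected data; scikit-learn's LDA with automatic (Ledoit-Wolf) covariance shrinkage stands in for the shrinkage LDA, and the interval boundaries are placeholders for the subject-specific heuristic search:</p>
<preformat>
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 100              # sampling rate after downsampling (Hz)
EPOCH_START_MS = -200
# Placeholder intervals (ms); the study selected the five most discriminative
# intervals per subject via a heuristic search on sgn r^2 values.
INTERVALS_MS = [(200, 280), (280, 360), (360, 440), (440, 560), (560, 700)]

def interval_features(epochs):
    """Channel-wise mean amplitude in each selected interval.
    epochs: (n_epochs, n_channels, n_samples), baseline-corrected."""
    feats = []
    for start, stop in INTERVALS_MS:
        a = (start - EPOCH_START_MS) * FS // 1000
        b = (stop - EPOCH_START_MS) * FS // 1000
        feats.append(epochs[:, :, a:b].mean(axis=2))
    return np.concatenate(feats, axis=1)   # (n_epochs, n_channels * n_intervals)

# Shrinkage LDA: least-squares solver with automatic covariance shrinkage.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
# clf.fit(interval_features(calib_epochs), calib_labels)   # target vs. non-target
</preformat>
<p>For the 3-class decision, a standard approach (not spelled out in the text) is to accumulate the binary target-vs-non-target scores over each LED group's eight flashes within a trial and select the group with the highest total.</p>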
<p>Three different offline analyses were performed: to investigate the ERPs elicited by the visual stimuli presented in the eyes-closed condition, to examine the contribution of eye movements to classification, and to verify classification performance after artifact removal. Before all offline data processing, the recorded EEG data were first lowpass filtered below 49 Hz using a Chebyshev filter with passband and stopband edges of 42 and 49 Hz to remove powerline interference. The first offline analysis investigated the ERPs, for which the data sets measured during the three feedback sessions were used (60 trials), and the first three steps of the method used in training the shrinkage LDA classifier were applied identically to these data sets (data epoching, baseline correction, and artifact rejection). Since the ERP analysis results showed stimulus-specific eye movements (see the
<xref ref-type="supplementary-material" rid="S1">Supplementary Figures S1</xref>
and
<xref ref-type="supplementary-material" rid="S1">S2</xref>
in advance), we performed a second offline analysis to investigate the contribution of eye movements to classification. The horizontal and vertical EOG channels were used to examine stimulus-specific eye movements, and offline classification accuracy was estimated with three different channel sets (all channels, six frontal channels (Fp1–2, AF3–4, AF7–8), and the remaining channels) to check the spatial and temporal distribution of discriminative information, especially for the frontal electrode set. The six frontal electrodes were assumed to contain the information most pertinent to eye movements, based on the ERP topographic maps shown in the
<xref ref-type="supplementary-material" rid="S1">Supplementary Figures S1</xref>
and
<xref ref-type="supplementary-material" rid="S1">S2</xref>
. For the latter analysis, a standard binary classification (target vs. non-target; chance level 50%)
<xref ref-type="bibr" rid="b33">33</xref>
was performed separately for each electrode set using all temporal features (yielding the spatial distribution of discriminative information), and for each time interval created by an 80 ms sliding window with 50% overlap (yielding the temporal distribution of discriminative information), in which the three electrode sets were also employed separately. In the third offline analysis, we calculated classification performance after removing all identifiable physiological artifacts, especially eye movements, to further check their impact on classification. To obtain cleaned EEG signals, the original EEG data were decomposed into neural and artifactual source components using independent component analysis (ICA), and the artifactual components were projected out. An artifactual independent component classification method called MARA (Multiple Artifact Rejection Algorithm)
<xref ref-type="bibr" rid="b34">34</xref>
<xref ref-type="bibr" rid="b35">35</xref>
was used to automatically select artifactual components. Using the artifact-free EEG data, online classification was simulated in an offline fashion with the identical method employed for online classification. In this study, the Matlab toolbox EEGLAB and its plug-in MARA were used to perform ICA and artifactual component classification, respectively (http://www.user.tu-berlin.de/irene.winkler/artifacts/).</p>
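<p>Two ingredients of these offline analyses, sketched under the stated assumptions (binary 0/1 labels and NumPy feature matrices):</p>
<preformat>
import numpy as np

def signed_r_squared(x, y):
    """Signed squared point-biserial correlation, computed per feature.
    x: (n_epochs, n_features); y: binary labels (0 = non-target, 1 = target)."""
    x0, x1 = x[y == 0], x[y == 1]
    n0, n1 = len(x0), len(x1)
    r = (x1.mean(0) - x0.mean(0)) * np.sqrt(n0 * n1) / (x.std(0) * (n0 + n1))
    return np.sign(r) * r**2

def sliding_windows(t_start=0, t_end=1000, width=80, overlap=0.5):
    """Time intervals (ms) for the temporal analysis: 80 ms windows, 50% overlap."""
    step = int(width * (1 - overlap))
    return [(t, t + width) for t in range(t_start, t_end - width + 1, step)]
</preformat>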
<p>For statistical analysis, two non-parametric tests, the Friedman test and the Wilcoxon signed-rank test, were used because the data did not follow a normal distribution, as confirmed by the Kolmogorov–Smirnov test. The Friedman and Wilcoxon signed-rank tests correspond to the parametric one-way repeated-measures ANOVA and the paired t-test, respectively. The significance level for the Friedman test was set to 0.05, and a Bonferroni-adjusted significance level was used for the Wilcoxon post-hoc analyses, i.e.,
<italic>p</italic>
 = 0.05/the number of post-hoc tests.</p>
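<p>This statistical procedure maps directly onto SciPy; the sketch below takes hypothetical per-subject accuracy arrays (paired samples) as input:</p>
<preformat>
from scipy.stats import friedmanchisquare, wilcoxon

def compare_conditions(acc_a, acc_b, acc_c, alpha=0.05):
    """Friedman omnibus test, then Bonferroni-corrected Wilcoxon post-hoc tests."""
    chi2, p = friedmanchisquare(acc_a, acc_b, acc_c)
    results = {"friedman": (chi2, p)}
    if p < alpha:   # proceed to pairwise post-hoc comparisons
        pairs = {"a-vs-b": (acc_a, acc_b),
                 "a-vs-c": (acc_a, acc_c),
                 "b-vs-c": (acc_b, acc_c)}
        adjusted = alpha / len(pairs)   # Bonferroni-adjusted significance level
        for name, (u, v) in pairs.items():
            _, p_pair = wilcoxon(u, v)
            results[name] = (p_pair, p_pair < adjusted)
    return results
</preformat>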
</sec>
</sec>
<sec disp-level="1">
<title>Results</title>
<sec disp-level="2">
<title>Online Classification and ERPs</title>
<p>
<xref ref-type="fig" rid="f2">Figure 2(a)</xref>
shows the online classification accuracies of each subject and their mean. All subjects achieved performance substantially higher than the chance accuracy of 33.33%, with a mean accuracy of 74.58% across subjects.
<xref ref-type="fig" rid="f2">Figure 2(b)</xref>
shows the confusion matrix of the online classification results. The mean correct recognition rates for left, middle, and right targets were 70.83%, 86.24%, and 66.66%, respectively, and the performance for middle targets is significantly higher than that for both left and right targets (Friedman χ
<sup>2</sup>
(2) = 8.54,
<italic>p </italic>
= 0.014; the Bonferroni post hoc analysis: middle > left = right, corrected
<italic>p</italic>
 < 0.01). It is also observed in
<xref ref-type="fig" rid="f2">Fig. 2(b)</xref>
that left and right targets are most often misclassified as the middle class (24.16% and 24.99% of left and right targets, respectively).</p>
<p>
<xref ref-type="fig" rid="f3">Figure 3</xref>
depicts the grand-average ERPs of the cleaned EEG signals (obtained after applying ICA) for target and non-target stimuli, as well as their differences in terms of the
<italic>sgn r</italic>
<sup>
<italic>2</italic>
</sup>
value, and
<xref ref-type="fig" rid="f4">Fig. 4</xref>
separately shows grand-average ERP topographical maps for each target. In these figures, typical P3 components are seen in both target and non-target conditions, but they are considerably larger for targets than for non-targets (see the
<xref ref-type="supplementary-material" rid="S1">Supplementary Figures S1</xref>
and
<xref ref-type="supplementary-material" rid="S1">S2</xref>
for the grand-average ERPs of the original EEG data before artifact rejection).</p>
</sec>
<sec disp-level="2">
<title>Eye Movements</title>
<p>Along with the P3 components, eye movements are also present, as evidenced by activity at frontal electrode sites (see
<xref ref-type="supplementary-material" rid="S1">Supplementary Figures S1</xref>
and
<xref ref-type="supplementary-material" rid="S1">S2</xref>
).
<xref ref-type="fig" rid="f5">Figure 5</xref>
shows the characteristics of the eye movements induced by each directional stimulus, with horizontal and vertical EOGs presented separately. The red, blue, and green lines represent the EOGs measured when the target was the left, middle, and right LED, respectively, for each stimulus. It is confirmed in
<xref ref-type="fig" rid="f5">Fig. 5(a)</xref>
that the subjects shifted their eyes to the side opposite a visual stimulus when the left and right stimuli were presented, irrespective of whether they were targets or non-targets (see the first and third rows in
<xref ref-type="fig" rid="f5">Fig. 5(a)</xref>
). Note that because the horizontal EOG was calculated by subtracting F10 from F9, a negative EOG value corresponds to a shift of the eyes to the right, and vice versa. Little horizontal movement is observed for the middle stimulus, but strong vertical eye movements appear, whereas the left and right stimuli induce little vertical movement (see around 200 ms in
<xref ref-type="fig" rid="f5">Fig. 5(b)</xref>
). The stimulus-specific eye movements are in line with those observed in the grand-average ERP maps of the original EEG data (
<xref ref-type="supplementary-material" rid="S1">Supplementary Figure S2</xref>
).</p>
</sec>
<sec disp-level="2">
<title>Offline Classification</title>
<p>The spatial and temporal distribution of discriminative information for the three electrode sets is presented in
<xref ref-type="fig" rid="f6">Fig. 6</xref>
. Significantly lower performance is consistently observed for the frontal electrode set (“Frontal”), in which eye movements are most strongly reflected, compared to the other two electrode sets (“All” and “Central-Occipital”). In particular, no considerable difference in classification accuracy is observed between the electrode set using all electrodes (“All”) and the central-occipital set (“Central-Occipital”). The performance differences between the three electrode sets are statistically confirmed for both the spatial (
<xref ref-type="fig" rid="f6">Fig. 6(a)</xref>
, Friedman χ
<sup>2</sup>
(2) = 32.08,
<italic>p </italic>
= 0.0007; the Bonferroni post hoc analysis: “All” = “Central-Occipital” > “Frontal”, corrected
<italic>p </italic>
< 0.01) and the temporal distribution (
<xref ref-type="fig" rid="f6">Fig. 6(b)</xref>
, Friedman χ
<sup>2</sup>
(2) ≥ 8.17,
<italic>p</italic> ≤ 0.0169; the Bonferroni post hoc analysis: “All” = “Central-Occipital” > “Frontal”, corrected
<italic>p</italic>
 < 0.05 for all time intervals except the first, second, third and fifth ones).</p>
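<p>A minimal sketch of the reported statistics, assuming a matrix of per-subject classification accuracies with one column per electrode set. The post hoc step is implemented here as pairwise Wilcoxon signed-rank tests with a Bonferroni-corrected alpha, which is one common choice; the authors' exact post hoc procedure is not detailed in this section.</p>
<preformat>
from itertools import combinations
from scipy.stats import friedmanchisquare, wilcoxon

def compare_electrode_sets(acc, labels=('All', 'Central-Occipital', 'Frontal'),
                           alpha=0.05):
    """acc: array of shape (n_subjects, 3), accuracies per electrode set."""
    chi2, p = friedmanchisquare(acc[:, 0], acc[:, 1], acc[:, 2])
    print('Friedman chi2(2) = %.2f, p = %.4f' % (chi2, p))
    pairs = list(combinations(range(acc.shape[1]), 2))
    alpha_corr = alpha / len(pairs)  # Bonferroni correction
    for i, j in pairs:
        _, p_ij = wilcoxon(acc[:, i], acc[:, j])
        verdict = 'significant' if p_ij < alpha_corr else 'n.s.'
        print('%s vs %s: p = %.4f (%s)' % (labels[i], labels[j], p_ij, verdict))
</preformat>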
<p>
<xref ref-type="fig" rid="f7">Figure 7</xref>
shows the comparison of classification accuracies obtained before and after ICA-based artifact rejection for each subject. The degree of performance change varies from one subject to another, but the average accuracy does not differ significantly (before ICA: 74.58% vs. after ICA: 75.83%,
<italic>p </italic>
= 0.74, Wilcoxon signed-rank test). The confusion matrix of the simulated classification results (after ICA) shows a trend similar to that of the original results (before ICA) in terms of the classification accuracy of each class and the false negatives of the left and right targets (not shown here).</p>
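<p>The before/after-ICA comparison can be reproduced along the following lines. The MNE-based cleaning step shown here (removing independent components correlated with an EOG reference channel) is a plausible stand-in for the automatic component classification cited by the authors, not their exact pipeline; channel names and parameters are assumptions.</p>
<preformat>
import mne
from scipy.stats import wilcoxon

def remove_ocular_components(raw, n_components=20, eog_proxy='F9'):
    """Hypothetical ICA cleaning: drop components correlated with EOG."""
    ica = mne.preprocessing.ICA(n_components=n_components, random_state=0)
    # High-pass filtering before fitting ICA is standard practice
    ica.fit(raw.copy().filter(l_freq=1.0, h_freq=None))
    bad_idx, _ = ica.find_bads_eog(raw, ch_name=eog_proxy)
    ica.exclude = bad_idx
    return ica.apply(raw.copy())

# Paired comparison of per-subject accuracies before vs. after cleaning,
# as reported in the text (p = 0.74):
# stat, p = wilcoxon(acc_before_ica, acc_after_ica)
</preformat>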
</sec>
</sec>
<sec disp-level="1">
<title>Discussion</title>
<p>Visual BCI paradigms have been intensively studied to realize practical BCI systems for paralyzed patients, but the performance of conventional visual BCI systems decreases significantly when users are not allowed to directly gaze at a target stimulus
<xref ref-type="bibr" rid="b24">24</xref>
<xref ref-type="bibr" rid="b36">36</xref>
<xref ref-type="bibr" rid="b37">37</xref>
. Even though variations of the classical paradigm have been introduced to overcome clinically relevant problems (e.g., gaze independent or eyes-closed paradigms), they also cannot be applied to severely locked-in patients with multiple visual dysfunctions because they have generally been developed for a single type of oculomotor impairment. In this study, we introduced a novel visual BCI paradigm that can be used in both gaze independent and eyes-closed conditions to encompass multiple oculomotor abnormalities, and demonstrated its feasibility with a high mean online performance of 74.58%.</p>
<p>Since our proposed BCI paradigm was intended for LIS patients suffering from complex ophthalmoplegia and for CLI patients, the characteristics of the eye movements induced during visual stimulation had to be investigated carefully. A series of analyses confirmed that the subjects tended to move their eyes away from a visual stimulus, toward the opposite direction (see
<xref ref-type="fig" rid="f5">Fig. 5</xref>
). Thus, the stimulus-specific eye movements can be explained as a reflex action protecting the eyes from a sudden light stimulus, analogous to the pupillary light reflex. Most importantly, because the subjects consistently showed the same reflex response to a given stimulus, irrespective of whether it was a target or non-target, eye movements did not contribute to classification performance (see
<xref ref-type="fig" rid="f6">Fig. 6</xref>
). This suggests that the class-discriminative ERPs reflect genuine attention-related neural processes. It has also been reported in the literature that ocular reflexes are generally weakened in patients with motor neuron disease (MND)
<xref ref-type="bibr" rid="b38">38</xref>
<xref ref-type="bibr" rid="b39">39</xref>
, but reflexive saccades similar to the eye movements observed in this study are relatively well preserved in those patients
<xref ref-type="bibr" rid="b40">40</xref>
<xref ref-type="bibr" rid="b41">41</xref>
<xref ref-type="bibr" rid="b42">42</xref>
<xref ref-type="bibr" rid="b43">43</xref>
. This is because the frontal eye field impairment that frequently occurs in MND patients leads to eye movement abnormalities but generally does not affect reflexive saccades
<xref ref-type="bibr" rid="b44">44</xref>
. This indicates that the potential target users of our proposed paradigm could also show reflexive eye movements similar to those observed in healthy subjects. Taking all results together, it can be expected that our paradigm might be useful for LIS patients suffering from multiple oculomotor impairments and for CLI patients with preserved covert attention, although this needs further evaluation in a clinical study.</p>
<p>In the ERP maps shown in
<xref ref-type="fig" rid="f3">Figs 3</xref>
and
<xref ref-type="fig" rid="f4">4</xref>
, visible P3 components were observed even when non-target stimuli were presented, but they were not as strong as those elicited by target stimuli. Similar P3 patterns were also observed in our previous BCI studies
<xref ref-type="bibr" rid="b6">6</xref>
<xref ref-type="bibr" rid="b45">45</xref>
when a center speller was employed, in which both target and non-target stimuli were presented in the fovea, similar to the paradigm proposed in this study. This phenomenon was explained by the fact that visual stimulation centered on the foveal region, which has the highest photoreceptor density, leads to greater involvement of neurons in visual processing, thereby producing visible P3 components even for non-target stimuli
<xref ref-type="bibr" rid="b6">6</xref>
<xref ref-type="bibr" rid="b45">45</xref>
. It seems reasonable to assume that the P3 components elicited by non-target stimuli in this study were generated by a similar mechanism.</p>
<p>Most of the misclassified left and right targets were assigned to the middle class, as shown in
<xref ref-type="fig" rid="f2">Fig. 2(b)</xref>
. This result can be explained by the mismatch in the number of LEDs used for each class, together with the aforementioned cortical magnification caused by fovea-centered stimulation. In this study, a pair of LEDs was used for the middle stimulus and presented to both eyes simultaneously, while a single LED was used for each of the left and right stimuli. As already discussed, relatively high P3 amplitudes were also seen for non-target stimuli due to the fovea-centered stimulation. Therefore, P3 amplitudes for the middle non-target stimulus, employing two LEDs, could be larger than those for the left and right non-target stimuli. This would reduce the difference in P3 amplitudes between targets and non-targets whenever the middle non-target stimulus is presented, thereby provoking misclassifications toward the middle class when the target is the left or right LED. In fact, this speculation is indirectly supported by
<xref ref-type="fig" rid="f8">Fig. 8</xref>
, which shows grand-average ERP maps obtained when the target is the left LED, with the ERPs elicited by the non-target right and middle LEDs illustrated separately. As expected, P3 amplitudes are stronger for the middle LEDs than for the right LED, and the P3 amplitude differences between targets and non-targets are considerably reduced when the non-target is the middle stimulus (see the second and third rows in
<xref ref-type="fig" rid="f8">Figs. 8(a) and (b))</xref>
. A similar trend is also observed when a target is the right LED (not shown here). This suggests that the number of LEDs and light intensity require careful balancing across stimulus conditions. In particular, the LED intensity should be more carefully elaborated before applying our paradigm to real target patients because a light stimulus might negatively affect patients’ eyes in long-term use.</p>
<p>An eyes-closed visual BCI paradigm was first introduced based on SSVEP
<xref ref-type="bibr" rid="b32">32</xref>
, where EEG patterns induced by attending to either the left or right visual stimulus were classified. The eyes-closed SSVEP paradigm showed good classification performance, ranging from 81.3% to 96% on average (chance level: 50%) depending on stimulation time, and the corresponding information transfer rates (ITR) were 9.09–10.62 bits/min. The classification accuracies cannot be directly compared with those of our proposed paradigm due to the different chance levels (50% vs. 33.33%), but the average ITR of the eyes-closed SSVEP paradigm is much higher than that of our paradigm (1.23 bits/min). However, it should be noted that the eyes-closed SSVEP paradigm requires accurate horizontal eye movements to focus on either the left or right stimulus, thereby limiting its application value for patients with severe oculomotor dysfunctions. In contrast, our proposed eyes-closed ERP paradigm can be used in a gaze independent condition, as demonstrated by our results (see
<xref ref-type="fig" rid="f7">Fig. 7</xref>
). Therefore, if a paralyzed patient retains moderate ocular function, the eyes-closed SSVEP paradigm would be a better option than our paradigm in terms of communication rate. If that is not the case, our proposed eyes-closed and gaze independent BCI paradigm could be the better choice for communication. Nevertheless, the relatively low ITR should be improved in future studies by optimizing various experimental variables, such as flash duration, ISI, and the number of visual stimuli, for practical use. Another way to increase the communication rate of the proposed ERP paradigm would be to take advantage of SSVEP features by incorporating the SSVEP paradigm into ours, as in previous hybrid BCI systems combining the SSVEP and visual ERP paradigms
<xref ref-type="bibr" rid="b12">12</xref>
<xref ref-type="bibr" rid="b13">13</xref>
<xref ref-type="bibr" rid="b14">14</xref>
.</p>
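<p>The information transfer rates quoted above follow the standard Wolpaw definition; the small helper below reproduces the approximately 1.23 bits/min figure from this paradigm's numbers (3 classes, 74.58% mean accuracy) under an assumed selection time of about 25 s. The selection time is an assumption for illustration, since the exact trial duration is given in the Methods rather than here.</p>
<preformat>
import math

def wolpaw_itr(n_classes, accuracy, seconds_per_selection):
    """Wolpaw information transfer rate in bits per minute."""
    n, p = n_classes, accuracy
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / seconds_per_selection

print(wolpaw_itr(3, 0.7458, 25))  # roughly 1.23 bits/min
</preformat>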
<p>Recently, significant advances have been made in the development of BCI systems based on non-visual sensory modalities such as auditory
<xref ref-type="bibr" rid="b46">46</xref>
<xref ref-type="bibr" rid="b47">47</xref>
<xref ref-type="bibr" rid="b48">48</xref>
<xref ref-type="bibr" rid="b49">49</xref>
<xref ref-type="bibr" rid="b50">50</xref>
<xref ref-type="bibr" rid="b51">51</xref>
and tactile
<xref ref-type="bibr" rid="b52">52</xref>
<xref ref-type="bibr" rid="b53">53</xref>
<xref ref-type="bibr" rid="b54">54</xref>
, and these could also be utilized for patients with poor ocular functions because they do not depend on oculomotor function. Some ERP studies compared different sensory modalities and consistently showed the superiority of a visual paradigm over auditory or tactile ones with respect to P3 amplitudes
<xref ref-type="bibr" rid="b55">55</xref>
<xref ref-type="bibr" rid="b56">56</xref>
and BCI performance
<xref ref-type="bibr" rid="b2">2</xref>
<xref ref-type="bibr" rid="b3">3</xref>
<xref ref-type="bibr" rid="b4">4</xref>
<xref ref-type="bibr" rid="b51">51</xref>
. However, this does not mean that a visual BCI paradigm is the best option for all end-users. Instead, which paradigm is feasible and practical depends highly on the individual patient’s state of disease, as demonstrated in several end-user studies conducted with different sensory BCI paradigms
<xref ref-type="bibr" rid="b57">57</xref>
<xref ref-type="bibr" rid="b58">58</xref>
<xref ref-type="bibr" rid="b59">59</xref>
<xref ref-type="bibr" rid="b60">60</xref>
. For example, in one case study a tactile modality showed better BCI performance than visual and auditory ones
<xref ref-type="bibr" rid="b57">57</xref>
, while another study reported better performance for a visual paradigm than for an auditory one
<xref ref-type="bibr" rid="b58">58</xref>
. Thus, a user-centered BCI paradigm should first be identified in practice before applying BCI technology to end-users. In this sense, our proposed BCI paradigm provides another option for patients suffering from multiple ocular impairments, alongside auditory and tactile paradigms. Furthermore, as some studies have already demonstrated the positive impact of multisensory stimulation paradigms on BCI performance (e.g., visual + auditory)
<xref ref-type="bibr" rid="b2">2</xref>
<xref ref-type="bibr" rid="b3">3</xref>
<xref ref-type="bibr" rid="b61">61</xref>
, our novel visual BCI paradigm could be used simultaneously with other sensory paradigms to improve the communication rate.</p>
</sec>
<sec disp-level="1">
<title>Additional Information</title>
<p>
<bold>How to cite this article</bold>
: Hwang, H.-J.
<italic>et al</italic>
. A Gaze Independent Brain-Computer Interface Based on Visual Stimulation through Closed Eyelids.
<italic>Sci. Rep</italic>
.
<bold>5</bold>
, 15890; doi: 10.1038/srep15890 (2015).</p>
</sec>
<sec sec-type="supplementary-material" id="S1">
<title>Supplementary Material</title>
<supplementary-material id="d33e29" content-type="local-data">
<caption>
<title>Supplementary Information</title>
</caption>
<media xlink:href="srep15890-s1.pdf"></media>
</supplementary-material>
</sec>
</body>
<back>
<ack>
<p>This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry (NRF-2014R1A6A3A03057524). Furthermore, the authors acknowledge financial support by the BMBF Grant Nos. 01GQ0850 and 16SV5839.</p>
</ack>
<ref-list>
<ref id="b1">
<mixed-citation publication-type="journal">
<name>
<surname>Hwang</surname>
<given-names>H.-J.</given-names>
</name>
,
<name>
<surname>Kim</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Choi</surname>
<given-names>S.</given-names>
</name>
&
<name>
<surname>Im</surname>
<given-names>C.-H.</given-names>
</name>
<article-title>EEG-based brain-computer interfaces: a thorough literature survey</article-title>
.
<source>Int J Hum-Comput Int</source>
<volume>29</volume>
,
<fpage>814</fpage>
<lpage>826</lpage>
(
<year>2013</year>
).</mixed-citation>
</ref>
<ref id="b2">
<mixed-citation publication-type="journal">
<name>
<surname>An</surname>
<given-names>X. W.</given-names>
</name>
,
<name>
<surname>Höhne</surname>
<given-names>J.</given-names>
</name>
,
<name>
<surname>Ming</surname>
<given-names>D.</given-names>
</name>
&
<name>
<surname>Blankertz</surname>
<given-names>B.</given-names>
</name>
<article-title>Exploring combinations of auditory and visual stimuli for gaze-independent brain-computer interfaces</article-title>
.
<source>Plos One</source>
<volume>9</volume>
,
<fpage>e111070</fpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">25350547</pub-id>
</mixed-citation>
</ref>
<ref id="b3">
<mixed-citation publication-type="journal">
<name>
<surname>Belitski</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Farquhar</surname>
<given-names>J.</given-names>
</name>
&
<name>
<surname>Desain</surname>
<given-names>P.</given-names>
</name>
<article-title>P300 audio-visual speller</article-title>
.
<source>J Neural Eng</source>
<volume>8</volume>
,
<fpage>025022</fpage>
(
<year>2011</year>
).
<pub-id pub-id-type="pmid">21436523</pub-id>
</mixed-citation>
</ref>
<ref id="b4">
<mixed-citation publication-type="journal">
<name>
<surname>Aloise</surname>
<given-names>F.</given-names>
</name>
<etal></etal>
.
<article-title>Multimodal stimulation for a P300-based BCI</article-title>
.
<source>Int J Bioelectromagn</source>
<volume>9</volume>
,
<fpage>128</fpage>
<lpage>130</lpage>
(
<year>2007</year>
).</mixed-citation>
</ref>
<ref id="b5">
<mixed-citation publication-type="journal">
<name>
<surname>Sellers</surname>
<given-names>E. W.</given-names>
</name>
<article-title>New horizons in brain-computer interface research</article-title>
.
<source>Clin Neurophysiol</source>
<volume>124</volume>
,
<fpage>2</fpage>
<lpage>4</lpage>
(
<year>2013</year>
).</mixed-citation>
</ref>
<ref id="b6">
<mixed-citation publication-type="journal">
<name>
<surname>Treder</surname>
<given-names>M. S.</given-names>
</name>
,
<name>
<surname>Schmidt</surname>
<given-names>N. M.</given-names>
</name>
&
<name>
<surname>Blankertz</surname>
<given-names>B.</given-names>
</name>
<article-title>Gaze-independent brain-computer interfaces based on covert attention and feature attention</article-title>
.
<source>J Neural Eng</source>
<volume>8</volume>
,
<fpage>066003</fpage>
(
<year>2011</year>
).
<pub-id pub-id-type="pmid">21975312</pub-id>
</mixed-citation>
</ref>
<ref id="b7">
<mixed-citation publication-type="journal">
<name>
<surname>Hwang</surname>
<given-names>H.-J.</given-names>
</name>
<etal></etal>
.
<article-title>Development of an SSVEP-based BCI spelling system adopting a QWERTY-style LED keyboard</article-title>
.
<source>J Neurosci Methods</source>
<volume>208</volume>
,
<fpage>59</fpage>
<lpage>65</lpage>
(
<year>2012</year>
).
<pub-id pub-id-type="pmid">22580222</pub-id>
</mixed-citation>
</ref>
<ref id="b8">
<mixed-citation publication-type="journal">
<name>
<surname>Rebsamen</surname>
<given-names>B.</given-names>
</name>
<etal></etal>
.
<article-title>A brain controlled wheelchair to navigate in familiar environments</article-title>
.
<source>IEEE Trans Neural Syst Rehabil Eng</source>
<volume>18</volume>
,
<fpage>590</fpage>
<lpage>598</lpage>
(
<year>2010</year>
).</mixed-citation>
</ref>
<ref id="b9">
<mixed-citation publication-type="journal">
<name>
<surname>Müller-Putz</surname>
<given-names>G. R.</given-names>
</name>
&
<name>
<surname>Pfurtscheller</surname>
<given-names>G.</given-names>
</name>
<article-title>Control of an electrical prosthesis with an SSVEP-based BCI</article-title>
.
<source>IEEE Trans Biomed Eng</source>
<volume>55</volume>
,
<fpage>361</fpage>
<lpage>364</lpage>
(
<year>2008</year>
).
<pub-id pub-id-type="pmid">18232384</pub-id>
</mixed-citation>
</ref>
<ref id="b10">
<mixed-citation publication-type="journal">
<name>
<surname>Wilson</surname>
<given-names>J. J.</given-names>
</name>
&
<name>
<surname>Palaniappan</surname>
<given-names>R.</given-names>
</name>
<article-title>Analogue mouse pointer control via an online steady state visual evoked potential (SSVEP) brain-computer interface</article-title>
.
<source>J Neural Eng</source>
<volume>8</volume>
,
<fpage>025026</fpage>
(
<year>2011</year>
).
<pub-id pub-id-type="pmid">21436532</pub-id>
</mixed-citation>
</ref>
<ref id="b11">
<mixed-citation publication-type="journal">
<name>
<surname>Lenhardt</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Kaper</surname>
<given-names>M.</given-names>
</name>
&
<name>
<surname>Ritter</surname>
<given-names>H. J.</given-names>
</name>
<article-title>An adaptive P300-based online brain computer interface</article-title>
.
<source>IEEE Trans Neural Syst Rehabil Eng</source>
<volume>16</volume>
,
<fpage>121</fpage>
<lpage>130</lpage>
(
<year>2008</year>
).
<pub-id pub-id-type="pmid">18403280</pub-id>
</mixed-citation>
</ref>
<ref id="b12">
<mixed-citation publication-type="journal">
<name>
<surname>Xu</surname>
<given-names>M.</given-names>
</name>
<etal></etal>
.
<article-title>A hybrid BCI speller paradigm combining P300 potential and the SSVEP blocking feature</article-title>
.
<source>J Neural Eng</source>
<volume>10</volume>
,
<fpage>026001</fpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">23369924</pub-id>
</mixed-citation>
</ref>
<ref id="b13">
<mixed-citation publication-type="journal">
<name>
<surname>Yin</surname>
<given-names>E.</given-names>
</name>
<etal></etal>
.
<article-title>A speedy hybrid BCI spelling approach combining P300 and SSVEP</article-title>
.
<source>IEEE Trans Biomed Eng</source>
<volume>61</volume>
,
<fpage>473</fpage>
<lpage>483</lpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">24058009</pub-id>
</mixed-citation>
</ref>
<ref id="b14">
<mixed-citation publication-type="journal">
<name>
<surname>Yin</surname>
<given-names>E.</given-names>
</name>
<etal></etal>
.
<article-title>A novel hybrid BCI speller based on the incorporation of SSVEP into the P300 paradigm</article-title>
.
<source>J Neural Eng</source>
<volume>10</volume>
,
<fpage>026012</fpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">23429035</pub-id>
</mixed-citation>
</ref>
<ref id="b15">
<mixed-citation publication-type="journal">
<name>
<surname>Kushner</surname>
<given-names>M. J.</given-names>
</name>
<etal></etal>
.
<article-title>Nystagmus in motor neuron disease - Clinicopathological study of 2 cases</article-title>
.
<source>Ann Neurol</source>
<volume>16</volume>
,
<fpage>71</fpage>
<lpage>77</lpage>
(
<year>1984</year>
).
<pub-id pub-id-type="pmid">6465863</pub-id>
</mixed-citation>
</ref>
<ref id="b16">
<mixed-citation publication-type="journal">
<name>
<surname>Abel</surname>
<given-names>L. A.</given-names>
</name>
,
<name>
<surname>Gibson</surname>
<given-names>K.</given-names>
</name>
,
<name>
<surname>Williams</surname>
<given-names>I. M.</given-names>
</name>
&
<name>
<surname>Li</surname>
<given-names>C. W.</given-names>
</name>
<article-title>Asymmetric smooth pursuit impairment and nystagmus in motor-neuron disease</article-title>
.
<source>Neuro-Ophthalmology</source>
<volume>12</volume>
,
<fpage>197</fpage>
<lpage>206</lpage>
(
<year>1992</year>
).</mixed-citation>
</ref>
<ref id="b17">
<mixed-citation publication-type="journal">
<name>
<surname>Averbuch-Heller</surname>
<given-names>L.</given-names>
</name>
<etal></etal>
.
<article-title>Slow vertical saccades in motor neuron disease: correlation of structure and function</article-title>
.
<source>Ann Neurol</source>
<volume>44</volume>
,
<fpage>641</fpage>
<lpage>648</lpage>
(
<year>1998</year>
).
<pub-id pub-id-type="pmid">9778263</pub-id>
</mixed-citation>
</ref>
<ref id="b18">
<mixed-citation publication-type="journal">
<name>
<surname>Pinto</surname>
<given-names>S.</given-names>
</name>
&
<name>
<surname>Carvalho</surname>
<given-names>M.</given-names>
</name>
<article-title>Amyotrophic lateral sclerosis patients and ocular ptosis</article-title>
.
<source>Clin Neurol Neurosurg</source>
<volume>110</volume>
,
<fpage>168</fpage>
<lpage>170</lpage>
(
<year>2008</year>
).</mixed-citation>
</ref>
<ref id="b19">
<mixed-citation publication-type="journal">
<name>
<surname>Ushio</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Iwasaki</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Sugasawa</surname>
<given-names>K.</given-names>
</name>
&
<name>
<surname>Murofushi</surname>
<given-names>T.</given-names>
</name>
<article-title>Atypical motor neuron disease with supranuclear vertical gaze palsy and slow saccades</article-title>
.
<source>Auris Nasus Larynx</source>
<volume>36</volume>
,
<fpage>85</fpage>
<lpage>87</lpage>
(
<year>2009</year>
).
<pub-id pub-id-type="pmid">18328655</pub-id>
</mixed-citation>
</ref>
<ref id="b20">
<mixed-citation publication-type="journal">
<name>
<surname>Jacobs</surname>
<given-names>L.</given-names>
</name>
,
<name>
<surname>Bozian</surname>
<given-names>D.</given-names>
</name>
,
<name>
<surname>Heffner</surname>
<given-names>R. R.</given-names>
</name>
&
<name>
<surname>Barron</surname>
<given-names>S. A.</given-names>
</name>
<article-title>An eye-movement disorder in amyotrophic lateral sclerosis</article-title>
.
<source>Neurology</source>
<volume>31</volume>
,
<fpage>1282</fpage>
<lpage>1287</lpage>
(
<year>1981</year>
).
<pub-id pub-id-type="pmid">7202138</pub-id>
</mixed-citation>
</ref>
<ref id="b21">
<mixed-citation publication-type="journal">
<name>
<surname>Donaghy</surname>
<given-names>C.</given-names>
</name>
,
<name>
<surname>Thurtell</surname>
<given-names>M. J.</given-names>
</name>
,
<name>
<surname>Pioro</surname>
<given-names>E. P.</given-names>
</name>
,
<name>
<surname>Gibson</surname>
<given-names>J. M.</given-names>
</name>
&
<name>
<surname>Leigh</surname>
<given-names>R. J.</given-names>
</name>
<article-title>Eye movements in amyotrophic lateral sclerosis and its mimics: a review with illustrative cases</article-title>
.
<source>J Neurol Neurosurg Psychiatry</source>
<volume>82</volume>
,
<fpage>110</fpage>
<lpage>116</lpage>
(
<year>2011</year>
).
<pub-id pub-id-type="pmid">21097546</pub-id>
</mixed-citation>
</ref>
<ref id="b22">
<mixed-citation publication-type="journal">
<name>
<surname>Okuda</surname>
<given-names>B.</given-names>
</name>
,
<name>
<surname>Yamamoto</surname>
<given-names>T.</given-names>
</name>
,
<name>
<surname>Yamasaki</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Maya</surname>
<given-names>K.</given-names>
</name>
&
<name>
<surname>Imai</surname>
<given-names>T.</given-names>
</name>
<article-title>Motor-neuron disease with slow eye-movements and vertical gaze palsy</article-title>
.
<source>Acta Neurol Scand</source>
<volume>85</volume>
,
<fpage>71</fpage>
<lpage>76</lpage>
(
<year>1992</year>
).
<pub-id pub-id-type="pmid">1546538</pub-id>
</mixed-citation>
</ref>
<ref id="b23">
<mixed-citation publication-type="journal">
<name>
<surname>Hayashi</surname>
<given-names>H.</given-names>
</name>
,
<name>
<surname>Kato</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Kawada</surname>
<given-names>T.</given-names>
</name>
&
<name>
<surname>Tsubaki</surname>
<given-names>T.</given-names>
</name>
<article-title>Amyotrophic lateral sclerosis: oculomotor function in patients in respirators</article-title>
.
<source>Neurology</source>
<volume>37</volume>
,
<fpage>1431</fpage>
<lpage>1432</lpage>
(
<year>1987</year>
).
<pub-id pub-id-type="pmid">3475594</pub-id>
</mixed-citation>
</ref>
<ref id="b24">
<mixed-citation publication-type="journal">
<name>
<surname>Treder</surname>
<given-names>M. S.</given-names>
</name>
&
<name>
<surname>Blankertz</surname>
<given-names>B.</given-names>
</name>
<article-title>(C)overt attention and visual speller design in an ERP-based brain-computer interface</article-title>
.
<source>Behav Brain Funct</source>
<volume>6</volume>
,
<fpage>28</fpage>
(
<year>2010</year>
).
<pub-id pub-id-type="pmid">20509913</pub-id>
</mixed-citation>
</ref>
<ref id="b25">
<mixed-citation publication-type="journal">
<name>
<surname>Lesenfants</surname>
<given-names>D.</given-names>
</name>
<etal></etal>
.
<article-title>An independent SSVEP-based brain-computer interface in locked-in syndrome</article-title>
.
<source>J Neural Eng</source>
<volume>11</volume>
,
<fpage>035002</fpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">24838215</pub-id>
</mixed-citation>
</ref>
<ref id="b26">
<mixed-citation publication-type="journal">
<name>
<surname>Zhang</surname>
<given-names>D.</given-names>
</name>
<etal></etal>
.
<article-title>An independent brain-computer interface using covert non-spatial visual selective attention</article-title>
.
<source>J Neural Eng</source>
<volume>7</volume>
,
<fpage>16010</fpage>
(
<year>2010</year>
).
<pub-id pub-id-type="pmid">20083864</pub-id>
</mixed-citation>
</ref>
<ref id="b27">
<mixed-citation publication-type="journal">
<name>
<surname>Tonin</surname>
<given-names>L.</given-names>
</name>
,
<name>
<surname>Leeb</surname>
<given-names>R.</given-names>
</name>
,
<name>
<surname>Sobolewski</surname>
<given-names>A.</given-names>
</name>
&
<name>
<surname>Millan Jdel</surname>
<given-names>R.</given-names>
</name>
<article-title>An online EEG BCI based on covert visuospatial attention in absence of exogenous stimulation</article-title>
.
<source>J Neural Eng</source>
<volume>10</volume>
,
<fpage>056007</fpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">23918205</pub-id>
</mixed-citation>
</ref>
<ref id="b28">
<mixed-citation publication-type="journal">
<name>
<surname>Marchetti</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Piccione</surname>
<given-names>F.</given-names>
</name>
,
<name>
<surname>Silvoni</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Gamberini</surname>
<given-names>L.</given-names>
</name>
&
<name>
<surname>Priftis</surname>
<given-names>K.</given-names>
</name>
<article-title>Covert visuospatial attention orienting in a brain-computer interface for amyotrophic lateral sclerosis patients</article-title>
.
<source>Neurorehabil Neural Repair</source>
<volume>27</volume>
,
<fpage>430</fpage>
<lpage>438</lpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">23353184</pub-id>
</mixed-citation>
</ref>
<ref id="b29">
<mixed-citation publication-type="journal">
<name>
<surname>Aloise</surname>
<given-names>F.</given-names>
</name>
<etal></etal>
.
<article-title>A covert attention P300-based brain-computer interface: Geospell</article-title>
.
<source>Ergonomics</source>
<volume>55</volume>
,
<fpage>538</fpage>
<lpage>551</lpage>
(
<year>2012</year>
).
<pub-id pub-id-type="pmid">22455372</pub-id>
</mixed-citation>
</ref>
<ref id="b30">
<mixed-citation publication-type="journal">
<name>
<surname>Allison</surname>
<given-names>B. Z.</given-names>
</name>
<etal></etal>
.
<article-title>Towards an independent brain-computer interface using steady state visual evoked potentials</article-title>
.
<source>Clin Neurophysiol</source>
<volume>119</volume>
,
<fpage>399</fpage>
<lpage>408</lpage>
(
<year>2008</year>
).
<pub-id pub-id-type="pmid">18077208</pub-id>
</mixed-citation>
</ref>
<ref id="b31">
<mixed-citation publication-type="journal">
<name>
<surname>Acqualagna</surname>
<given-names>L.</given-names>
</name>
&
<name>
<surname>Blankertz</surname>
<given-names>B.</given-names>
</name>
<article-title>Gaze-independent BCI-spelling using rapid serial visual presentation (RSVP)</article-title>
.
<source>Clin Neurophysiol</source>
<volume>124</volume>
,
<fpage>901</fpage>
<lpage>908</lpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">23466266</pub-id>
</mixed-citation>
</ref>
<ref id="b32">
<mixed-citation publication-type="journal">
<name>
<surname>Lim</surname>
<given-names>J.-H.</given-names>
</name>
,
<name>
<surname>Hwang</surname>
<given-names>H.-J.</given-names>
</name>
,
<name>
<surname>Han</surname>
<given-names>C.-H.</given-names>
</name>
,
<name>
<surname>Jung</surname>
<given-names>K.-Y.</given-names>
</name>
&
<name>
<surname>Im</surname>
<given-names>C.-H.</given-names>
</name>
<article-title>Classification of binary intentions for individuals with impaired oculomotor function: ‘eyes-closed’ SSVEP-based brain-computer interface (BCI)</article-title>
.
<source>J Neural Eng</source>
<volume>10</volume>
,
<fpage>026021</fpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">23528484</pub-id>
</mixed-citation>
</ref>
<ref id="b33">
<mixed-citation publication-type="journal">
<name>
<surname>Blankertz</surname>
<given-names>B.</given-names>
</name>
,
<name>
<surname>Lemm</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Treder</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Haufe</surname>
<given-names>S.</given-names>
</name>
&
<name>
<surname>Müller</surname>
<given-names>K. R.</given-names>
</name>
<article-title>Single-trial analysis and classification of ERP components - a tutorial</article-title>
.
<source>Neuroimage</source>
<volume>56</volume>
,
<fpage>814</fpage>
<lpage>825</lpage>
(
<year>2011</year>
).
<pub-id pub-id-type="pmid">20600976</pub-id>
</mixed-citation>
</ref>
<ref id="b34">
<mixed-citation publication-type="journal">
<name>
<surname>Winkler</surname>
<given-names>I.</given-names>
</name>
,
<name>
<surname>Haufe</surname>
<given-names>S.</given-names>
</name>
&
<name>
<surname>Tangermann</surname>
<given-names>M.</given-names>
</name>
<article-title>Automatic classification of artifactual ICA-components for artifact removal in EEG signals</article-title>
.
<source>Behav Brain Funct</source>
<volume>7</volume>
,
<fpage>30</fpage>
(
<year>2011</year>
).
<pub-id pub-id-type="pmid">21810266</pub-id>
</mixed-citation>
</ref>
<ref id="b35">
<mixed-citation publication-type="journal">
<name>
<surname>Winkler</surname>
<given-names>I.</given-names>
</name>
<etal></etal>
.
<article-title>Robust artifactual independent component classification for BCI practitioners</article-title>
.
<source>J Neural Eng</source>
<volume>11</volume>
,
<fpage>035013</fpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">24836294</pub-id>
</mixed-citation>
</ref>
<ref id="b36">
<mixed-citation publication-type="journal">
<name>
<surname>Brunner</surname>
<given-names>P.</given-names>
</name>
<etal></etal>
.
<article-title>Does the ‘P300’ speller depend on eye gaze?</article-title>
<source>J Neural Eng</source>
<volume>7</volume>
,
<fpage>056013</fpage>
(
<year>2010</year>
).
<pub-id pub-id-type="pmid">20858924</pub-id>
</mixed-citation>
</ref>
<ref id="b37">
<mixed-citation publication-type="journal">
<name>
<surname>Frenzel</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Neubert</surname>
<given-names>E.</given-names>
</name>
&
<name>
<surname>Bandt</surname>
<given-names>C.</given-names>
</name>
<article-title>Two communication lines in a 3 × 3 matrix speller</article-title>
.
<source>J Neural Eng</source>
<volume>8</volume>
,
<fpage>036021</fpage>
(
<year>2011</year>
).
<pub-id pub-id-type="pmid">21555846</pub-id>
</mixed-citation>
</ref>
<ref id="b38">
<mixed-citation publication-type="journal">
<name>
<surname>Shimoda</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Yokoyama</surname>
<given-names>Y.</given-names>
</name>
,
<name>
<surname>Okada</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Nakashima</surname>
<given-names>K.</given-names>
</name>
&
<name>
<surname>Takahashi</surname>
<given-names>K.</given-names>
</name>
<article-title>Electrically induced blink reflex and clinical blinking ability in patients with amyotrophic lateral sclerosis</article-title>
.
<source>Acta Neurol Scand</source>
<volume>92</volume>
,
<fpage>141</fpage>
<lpage>144</lpage>
(
<year>1995</year>
).
<pub-id pub-id-type="pmid">7484062</pub-id>
</mixed-citation>
</ref>
<ref id="b39">
<mixed-citation publication-type="journal">
<name>
<surname>Cohen</surname>
<given-names>B.</given-names>
</name>
&
<name>
<surname>Caroscio</surname>
<given-names>J.</given-names>
</name>
<article-title>Eye movements in amyotrophic lateral sclerosis</article-title>
.
<source>J Neural Transm Suppl</source>
<volume>19</volume>
,
<fpage>305</fpage>
<lpage>315</lpage>
(
<year>1983</year>
).
<pub-id pub-id-type="pmid">6583314</pub-id>
</mixed-citation>
</ref>
<ref id="b40">
<mixed-citation publication-type="journal">
<name>
<surname>Shaunak</surname>
<given-names>S.</given-names>
</name>
<etal></etal>
.
<article-title>Oculomotor function in amyotrophic lateral sclerosis: evidence for frontal impairment</article-title>
.
<source>Ann Neurol</source>
<volume>38</volume>
,
<fpage>38</fpage>
<lpage>44</lpage>
(
<year>1995</year>
).
<pub-id pub-id-type="pmid">7611722</pub-id>
</mixed-citation>
</ref>
<ref id="b41">
<mixed-citation publication-type="journal">
<name>
<surname>Gizzi</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Dirocco</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Sivak</surname>
<given-names>M.</given-names>
</name>
&
<name>
<surname>Cohen</surname>
<given-names>B.</given-names>
</name>
<article-title>Ocular motor function in motor neuron disease</article-title>
.
<source>Neurology</source>
<volume>42</volume>
,
<fpage>1037</fpage>
<lpage>1046</lpage>
(
<year>1992</year>
).
<pub-id pub-id-type="pmid">1579227</pub-id>
</mixed-citation>
</ref>
<ref id="b42">
<mixed-citation publication-type="journal">
<name>
<surname>Dirocco</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Gizzi</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Sivak</surname>
<given-names>M.</given-names>
</name>
&
<name>
<surname>Cohen</surname>
<given-names>B.</given-names>
</name>
<article-title>Oculomotor function in motor neuron disease</article-title>
.
<source>Neurology</source>
<volume>41</volume>
(suppl 1),
<fpage>203</fpage>
(Abstract) (
<year>1991</year>
).</mixed-citation>
</ref>
<ref id="b43">
<mixed-citation publication-type="journal">
<name>
<surname>Donaghy</surname>
<given-names>C.</given-names>
</name>
<etal></etal>
.
<article-title>Slow saccades in bulbar-onset motor neuron disease</article-title>
.
<source>J Neurol</source>
<volume>257</volume>
,
<fpage>1134</fpage>
<lpage>1140</lpage>
(
<year>2010</year>
).
<pub-id pub-id-type="pmid">20146069</pub-id>
</mixed-citation>
</ref>
<ref id="b44">
<mixed-citation publication-type="journal">
<name>
<surname>Rivaud</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Muri</surname>
<given-names>R. M.</given-names>
</name>
,
<name>
<surname>Gaymard</surname>
<given-names>B.</given-names>
</name>
,
<name>
<surname>Vermersch</surname>
<given-names>A. I.</given-names>
</name>
&
<name>
<surname>Pierrotdeseilligny</surname>
<given-names>C.</given-names>
</name>
<article-title>Eye-movement disorders after frontal eye field lesions in humans</article-title>
.
<source>Exp Brain Res</source>
<volume>102</volume>
,
<fpage>110</fpage>
<lpage>120</lpage>
(
<year>1994</year>
).
<pub-id pub-id-type="pmid">7895787</pub-id>
</mixed-citation>
</ref>
<ref id="b45">
<mixed-citation publication-type="journal">
<name>
<surname>Schaeff</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Treder</surname>
<given-names>M. S.</given-names>
</name>
,
<name>
<surname>Venthur</surname>
<given-names>B.</given-names>
</name>
&
<name>
<surname>Blankertz</surname>
<given-names>B.</given-names>
</name>
<article-title>Exploring motion VEPs for gaze-independent communication</article-title>
.
<source>J Neural Eng</source>
<volume>9</volume>
,
<fpage>045006</fpage>
(
<year>2012</year>
).
<pub-id pub-id-type="pmid">22832017</pub-id>
</mixed-citation>
</ref>
<ref id="b46">
<mixed-citation publication-type="journal">
<name>
<surname>Höhne</surname>
<given-names>J.</given-names>
</name>
&
<name>
<surname>Tangermann</surname>
<given-names>M.</given-names>
</name>
<article-title>Towards User-friendly spelling with an auditory brain-computer interface: the CharStreamer paradigm</article-title>
.
<source>Plos One</source>
<volume>9</volume>
,
<fpage>e102630</fpage>
(
<year>2014</year>
).</mixed-citation>
</ref>
<ref id="b47">
<mixed-citation publication-type="journal">
<name>
<surname>Schreuder</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Blankertz</surname>
<given-names>B.</given-names>
</name>
&
<name>
<surname>Tangermann</surname>
<given-names>M.</given-names>
</name>
<article-title>A new auditory multi-class brain-computer interface paradigm: spatial hearing as an informative cue</article-title>
.
<source>Plos One</source>
<volume>5</volume>
,
<fpage>e0009813</fpage>
(
<year>2010</year>
).</mixed-citation>
</ref>
<ref id="b48">
<mixed-citation publication-type="journal">
<name>
<surname>Schreuder</surname>
<given-names>M.</given-names>
</name>
<etal></etal>
.
<article-title>Optimizing event-related potential based brain-computer interfaces: a systematic evaluation of dynamic stopping methods</article-title>
.
<source>J Neural Eng</source>
<volume>10</volume>
,
<fpage>036025</fpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">23685458</pub-id>
</mixed-citation>
</ref>
<ref id="b49">
<mixed-citation publication-type="journal">
<name>
<surname>Simon</surname>
<given-names>N.</given-names>
</name>
<etal></etal>
.
<article-title>An auditory multiclass brain-computer interface with natural stimuli: usability evaluation with healthy participants and a motor impaired end user</article-title>
.
<source>Front Hum Neurosci</source>
<volume>8</volume>
,
<fpage>1039</fpage>
(
<year>2015</year>
).
<pub-id pub-id-type="pmid">25620924</pub-id>
</mixed-citation>
</ref>
<ref id="b50">
<mixed-citation publication-type="journal">
<name>
<surname>Hill</surname>
<given-names>N. J.</given-names>
</name>
<etal></etal>
.
<article-title>A practical, intuitive brain–computer interface for communicating ‘yes’ or ‘no’ by listening</article-title>
.
<source>J Neural Eng</source>
<volume>11</volume>
,
<fpage>035003</fpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">24838278</pub-id>
</mixed-citation>
</ref>
<ref id="b51">
<mixed-citation publication-type="journal">
<name>
<surname>Käthner</surname>
<given-names>I.</given-names>
</name>
<etal></etal>
.
<article-title>A portable auditory P300 brain–computer interface with directional cues</article-title>
.
<source>Clin Neurophysiol</source>
<volume>124</volume>
,
<fpage>327</fpage>
<lpage>338</lpage>
(
<year>2013</year>
).</mixed-citation>
</ref>
<ref id="b52">
<mixed-citation publication-type="journal">
<name>
<surname>Brouwer</surname>
<given-names>A.-M.</given-names>
</name>
&
<name>
<surname>van Erp</surname>
<given-names>J. B. F.</given-names>
</name>
<article-title>A tactile P300 brain-computer interface</article-title>
.
<source>Front Neurosci</source>
<volume>4</volume>
,
<fpage>19</fpage>
(
<year>2010</year>
).
<pub-id pub-id-type="pmid">20582261</pub-id>
</mixed-citation>
</ref>
<ref id="b53">
<mixed-citation publication-type="journal">
<name>
<surname>van der Waal</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Severens</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Geuze</surname>
<given-names>J.</given-names>
</name>
&
<name>
<surname>Desain</surname>
<given-names>P.</given-names>
</name>
<article-title>Introducing the tactile speller: an ERP-based brain-computer interface for communication</article-title>
.
<source>J Neural Eng</source>
<volume>9</volume>
,
<fpage>045002</fpage>
(
<year>2012</year>
).
<pub-id pub-id-type="pmid">22831906</pub-id>
</mixed-citation>
</ref>
<ref id="b54">
<mixed-citation publication-type="journal">
<name>
<surname>Kaufmann</surname>
<given-names>T.</given-names>
</name>
,
<name>
<surname>Herweg</surname>
<given-names>A.</given-names>
</name>
&
<name>
<surname>Kübler</surname>
<given-names>A.</given-names>
</name>
<article-title>Toward brain-computer interface based wheelchair control utilizing tactually-evoked event-related potentials</article-title>
.
<source>J Neuroeng Rehabil</source>
<volume>11</volume>
,
<fpage>7</fpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">24428900</pub-id>
</mixed-citation>
</ref>
<ref id="b55">
<mixed-citation publication-type="journal">
<name>
<surname>Polich</surname>
<given-names>J.</given-names>
</name>
,
<name>
<surname>Ellerson</surname>
<given-names>P. C.</given-names>
</name>
&
<name>
<surname>Cohen</surname>
<given-names>J.</given-names>
</name>
<article-title>P300, stimulus intensity, modality, and probability</article-title>
.
<source>Int J Psychophysiol</source>
<volume>23</volume>
,
<fpage>55</fpage>
<lpage>62</lpage>
(
<year>1996</year>
).
<pub-id pub-id-type="pmid">8880366</pub-id>
</mixed-citation>
</ref>
<ref id="b56">
<mixed-citation publication-type="journal">
<name>
<surname>Comerchero</surname>
<given-names>M. D.</given-names>
</name>
&
<name>
<surname>Polich</surname>
<given-names>J.</given-names>
</name>
<article-title>P3a and P3b from typical auditory and visual stimuli</article-title>
.
<source>Clin Neurophysiol</source>
<volume>110</volume>
,
<fpage>24</fpage>
<lpage>30</lpage>
(
<year>1999</year>
).</mixed-citation>
</ref>
<ref id="b57">
<mixed-citation publication-type="journal">
<name>
<surname>Kaufmann</surname>
<given-names>T.</given-names>
</name>
,
<name>
<surname>Holz</surname>
<given-names>E. M.</given-names>
</name>
&
<name>
<surname>Kübler</surname>
<given-names>A.</given-names>
</name>
<article-title>Comparison of tactile, auditory, and visual modality for brain-computer interface use: a case study with a patient in the locked-in state</article-title>
.
<source>Front Neurosci</source>
<volume>7</volume>
,
<fpage>129</fpage>
(
<year>2013</year>
)
<pub-id pub-id-type="pmid">23898236</pub-id>
</mixed-citation>
</ref>
<ref id="b58">
<mixed-citation publication-type="journal">
<name>
<surname>Schreuder</surname>
<given-names>M.</given-names>
</name>
<etal></etal>
.
<article-title>User-centered design in brain-computer interfaces-a case study</article-title>
.
<source>Artif Intell Med</source>
<volume>59</volume>
,
<fpage>71</fpage>
<lpage>80</lpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">24076341</pub-id>
</mixed-citation>
</ref>
<ref id="b59">
<mixed-citation publication-type="journal">
<name>
<surname>Nijboer</surname>
<given-names>F.</given-names>
</name>
<etal></etal>
.
<article-title>A P300-based brain-computer interface for people with amyotrophic lateral sclerosis</article-title>
.
<source>Clin Neurophysiol</source>
<volume>119</volume>
,
<fpage>1909</fpage>
<lpage>1916</lpage>
(
<year>2008</year>
).</mixed-citation>
</ref>
<ref id="b60">
<mixed-citation publication-type="journal">
<name>
<surname>Kübler</surname>
<given-names>A.</given-names>
</name>
<etal></etal>
.
<article-title>A brain-computer interface controlled auditory event-related potential (P300) spelling system for locked-in patients</article-title>
.
<source>Ann Ny Acad Sci</source>
<volume>1157</volume>
,
<fpage>90</fpage>
<lpage>100</lpage>
(
<year>2009</year>
).
<pub-id pub-id-type="pmid">19351359</pub-id>
</mixed-citation>
</ref>
<ref id="b61">
<mixed-citation publication-type="journal">
<name>
<surname>Brouwer</surname>
<given-names>A.-M.</given-names>
</name>
,
<name>
<surname>van Erp</surname>
<given-names>J. B. F.</given-names>
</name>
,
<name>
<surname>Aloise</surname>
<given-names>F.</given-names>
</name>
&
<name>
<surname>Cincotti</surname>
<given-names>F.</given-names>
</name>
<article-title>Tactile, visual, and bimodal P300s: could bimodal P300s boost BCI performance?</article-title>
<source>SRX Neuroscience 2010</source>
,
<fpage>967027</fpage>
(
<year>2010</year>
).</mixed-citation>
</ref>
</ref-list>
<fn-group>
<fn>
<p>
<bold>Author Contributions</bold>
M.T., H.J.H. and B.B. designed this study. D.U., T.K., X.C. and M.T. conducted preliminary experiments, and H.J.H., V.Y.F. and M.T. performed main experiments and data analyses. H.J.H. wrote the manuscript, and the other authors reviewed and approved the final manuscript.</p>
</fn>
</fn-group>
</back>
<floats-group>
<fig id="f1">
<label>Figure 1</label>
<caption>
<title>Schematic illustration of the developed visual stimulation system.</title>
<p>Two LEDs, one blue and one red, were attached 1 cm apart at the center of each lens of the glasses. The red LEDs, placed on the inner side of each lens, were paired, and each blue LED was employed separately, such that a 3-class BCI system was implemented. The duration of a single flash and the inter-stimulus interval (ISI) were set to 100 ms and 1200 ms, respectively.</p>
</caption>
<graphic xlink:href="srep15890-f1"></graphic>
</fig>
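<p>A minimal sketch of the stimulation schedule described in the caption above, assuming a randomized sequence in which each of the three stimulus classes flashes equally often; the number of repetitions per trial and the reading of the ISI as the onset-to-onset gap minus the flash are assumptions, since those details are given in the Methods rather than here.</p>
<preformat>
import random

FLASH_MS, ISI_MS = 100, 1200   # values from the caption
CLASSES = ['left', 'middle', 'right']

def stimulus_schedule(n_repetitions=10, seed=0):
    """Return a list of (onset_ms, stimulus_class) pairs for one trial."""
    rng = random.Random(seed)
    order = CLASSES * n_repetitions
    rng.shuffle(order)
    onset, schedule = 0, []
    for stim in order:
        schedule.append((onset, stim))
        onset += FLASH_MS + ISI_MS   # next flash after flash plus ISI
    return schedule
</preformat>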
<fig id="f2">
<label>Figure 2</label>
<caption>
<title>Online classification results.</title>
<p>(
<bold>a</bold>
) Classification accuracies of each subject, and their mean and standard deviation (chance level 33.33%). (
<bold>b</bold>
) Confusion matrix of online classification results. The mean classification accuracies of the left, middle, and right class are 70.83, 86.24, and 66.66%, respectively, and most false negatives of the left and right class are observed in the middle class (24.16% for the left class and 24.99% for the right class, respectively).</p>
</caption>
<graphic xlink:href="srep15890-f2"></graphic>
</fig>
<fig id="f3">
<label>Figure 3</label>
<caption>
<title>Grand-average ERPs for target and non-target stimuli and their differences in terms of the
<italic>sgn r</italic>
<sup>
<italic>2</italic>
</sup>
value along time after removing artifact components using ICA.</title>
<p>No significant eye movements are observed in the ERP maps, while P3 components are clearly seen. The topographic maps in the five columns correspond to the five time periods shaded in the top panel.</p>
</caption>
<graphic xlink:href="srep15890-f3"></graphic>
</fig>
<fig id="f4">
<label>Figure 4</label>
<caption>
<title>Class-specific grand-average ERPs for target and non-target stimuli and their differences in terms of the
<italic>sgn r</italic>
<sup>
<italic>2</italic>
</sup>
value after removing artifact components using ICA when a target is the (
<bold>a</bold>
) left, (
<bold>b</bold>
) middle, and (
<bold>c</bold>
) right stimulus, respectively.</title>
<p>P3 components shifted toward the direction of the visual stimulus are observed in the
<italic>sgn r</italic>
<sup>
<italic>2</italic>
</sup>
maps of the second time period for left and right targets. In each panel, the topographic maps in the three columns correspond to the three time periods shaded in the top panels; the time intervals were empirically selected to better show the target-specific ERP patterns.</p>
</caption>
<graphic xlink:href="srep15890-f4"></graphic>
</fig>
<fig id="f5">
<label>Figure 5</label>
<caption>
<p>(
<bold>a</bold>
) Horizontal and (
<bold>b</bold>
) vertical eye movements occurring during stimulation. The red, blue, and green lines indicate the eye movements produced when the target is the left, middle, and right stimulus, respectively, and the shaded areas around each line represent the standard errors of the respective EOG signals. Irrespective of whether the visual stimuli are targets or non-targets, eye movements directed away from the visual stimulus are confirmed in the horizontal EOGs when the left and right stimuli are presented, while dominant vertical EOGs are observed for middle stimuli.</p>
</caption>
<graphic xlink:href="srep15890-f5"></graphic>
</fig>
<fig id="f6">
<label>Figure 6</label>
<caption>
<p>(
<bold>a</bold>
) Spatial and (
<bold>b</bold>
) temporal distribution of discriminative information for the three electrode sets. The electrode sets “All”, “Frontal”, and “Central-Occipital” were constructed using all electrodes, the six frontal electrodes (Fp1–2, AF3–4, AF7–8), and the remaining electrodes excluding the frontal set, respectively. In general, the performance of the “Frontal” electrode set is significantly lower than that of the other electrode sets. The vertical bars represent the standard deviations of the classification accuracies for each condition.</p>
</caption>
<graphic xlink:href="srep15890-f6"></graphic>
</fig>
<fig id="f7">
<label>Figure 7</label>
<caption>
<title>Comparison of simulated classification accuracies attained before and after applying ICA.</title>
<p>The difference between the accuracies is not statistically significant (before ICA: 74.58% vs. after ICA: 75.83%; p = 0.74, Wilcoxon signed-rank test).</p>
</caption>
<graphic xlink:href="srep15890-f7"></graphic>
</fig>
<fig id="f8">
<label>Figure 8</label>
<caption>
<title>Grand-average ERPs for target and non-target stimuli and their differences in terms of the
<italic>sgn r</italic>
<sup>
<italic>2</italic>
</sup>
value after removing artifact components when a target is the left stimulus.</title>
<p>The ERP maps are separately presented when a non-target stimulus is (
<bold>a</bold>
) the right LED and (
<bold>b</bold>
) the paired middle LEDs. The P3 components elicited by the middle non-target LEDs are stronger than those elicited by the right non-target LED, and the differences in P3 components between targets and non-targets are therefore reduced when middle non-target stimuli are presented, compared to right non-target ones. In each panel, the topographic maps in the three columns correspond to the three time periods shaded in the top panels; the time intervals were empirically selected between 250 and 570 ms to better show the P3 components.</p>
</caption>
<graphic xlink:href="srep15890-f8"></graphic>
</fig>
</floats-group>
</pmc>
<affiliations>
<list>
<country>
<li>Allemagne</li>
<li>Royaume-Uni</li>
</country>
</list>
<tree>
<country name="Allemagne">
<noRegion>
<name sortKey="Hwang, Han Jeong" sort="Hwang, Han Jeong" uniqKey="Hwang H" first="Han-Jeong" last="Hwang">Han-Jeong Hwang</name>
</noRegion>
<name sortKey="Blankertz, Benjamin" sort="Blankertz, Benjamin" uniqKey="Blankertz B" first="Benjamin" last="Blankertz">Benjamin Blankertz</name>
<name sortKey="Chatziliadis, Xenofon" sort="Chatziliadis, Xenofon" uniqKey="Chatziliadis X" first="Xenofon" last="Chatziliadis">Xenofon Chatziliadis</name>
<name sortKey="Ferreria, Valeria Y" sort="Ferreria, Valeria Y" uniqKey="Ferreria V" first="Valeria Y." last="Ferreria">Valeria Y. Ferreria</name>
<name sortKey="Kilic, Tayfun" sort="Kilic, Tayfun" uniqKey="Kilic T" first="Tayfun" last="Kilic">Tayfun Kilic</name>
<name sortKey="Treder, Matthias" sort="Treder, Matthias" uniqKey="Treder M" first="Matthias" last="Treder">Matthias Treder</name>
<name sortKey="Ulrich, Daniel" sort="Ulrich, Daniel" uniqKey="Ulrich D" first="Daniel" last="Ulrich">Daniel Ulrich</name>
</country>
<country name="Royaume-Uni">
<noRegion>
<name sortKey="Treder, Matthias" sort="Treder, Matthias" uniqKey="Treder M" first="Matthias" last="Treder">Matthias Treder</name>
</noRegion>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000792 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 000792 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:4625131
   |texte=   A Gaze Independent Brain-Computer Interface Based on Visual Stimulation through Closed Eyelids
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:26510583" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024