Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Haptic, Virtual Interaction and Motor Imagery: Entertainment Tools and Psychophysiological Testing

Internal identifier: 000577 (Pmc/Curation); previous: 000576; next: 000578


Authors: Sara Invitto [Italy]; Chiara Faggiano; Silvia Sammarco; Valerio De Luca; Lucio T. De Paolis

Source :

RBID : PMC:4813969

Abstract

In this work, the perception of affordances was analysed in terms of cognitive neuroscience during an interactive experience in a virtual reality environment. In particular, we chose a virtual reality scenario based on the Leap Motion controller: this sensor device captures the movements of the user’s hand and fingers, which are reproduced on a computer screen by the appropriate software applications. For our experiment, we employed a sample of 10 subjects matched by age and sex, chosen from among university students. The subjects took part in a motor imagery training and in an immersive affordance condition (a virtual training with the Leap Motion and a haptic training with real objects). After each training session the subjects performed a recognition task, in order to investigate event-related potential (ERP) components. The results revealed significant differences in the attentional components during the Leap Motion training. During the Leap Motion session, latencies increased in the occipital lobes, which handle visual sensory processing; in contrast, latencies decreased in the frontal lobe, which is mainly activated for attention and action planning.


Url:
DOI: 10.3390/s16030394
PubMed: 26999151
PubMed Central: 4813969

Links toward previous steps (curation, corpus...)


Links to Exploration step

PMC:4813969

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Haptic, Virtual Interaction and Motor Imagery: Entertainment Tools and Psychophysiological Testing</title>
<author>
<name sortKey="Invitto, Sara" sort="Invitto, Sara" uniqKey="Invitto S" first="Sara" last="Invitto">Sara Invitto</name>
<affiliation wicri:level="1">
<nlm:aff id="af1-sensors-16-00394">Human Anatomy and Neuroscience Laboratory, Department of Biological and Environmental Science and Technologies, University of Salento, Campus Ecotekne, Via per Monteroni, Lecce 73100, Italy</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea>Human Anatomy and Neuroscience Laboratory, Department of Biological and Environmental Science and Technologies, University of Salento, Campus Ecotekne, Via per Monteroni, Lecce 73100</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Faggiano, Chiara" sort="Faggiano, Chiara" uniqKey="Faggiano C" first="Chiara" last="Faggiano">Chiara Faggiano</name>
<affiliation>
<nlm:aff id="af2-sensors-16-00394">University of Salento, Campus Ecotekne, Via per Monteroni, Lecce 73100, Italy;
<email>chiarafaggiano0@gmail.com</email>
(C.F.);
<email>silvia.sammarco@yahoo.it</email>
(S.S.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Sammarco, Silvia" sort="Sammarco, Silvia" uniqKey="Sammarco S" first="Silvia" last="Sammarco">Silvia Sammarco</name>
<affiliation>
<nlm:aff id="af2-sensors-16-00394">University of Salento, Campus Ecotekne, Via per Monteroni, Lecce 73100, Italy;
<email>chiarafaggiano0@gmail.com</email>
(C.F.);
<email>silvia.sammarco@yahoo.it</email>
(S.S.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="De Luca, Valerio" sort="De Luca, Valerio" uniqKey="De Luca V" first="Valerio" last="De Luca">Valerio De Luca</name>
<affiliation>
<nlm:aff id="af3-sensors-16-00394">Augmented and Virtual Reality Laboratory (AVR Lab), Department of Engineering for Innovation, University of Salento, Campus Ecotekne, Via per Monteroni, Lecce 73100, Italy;
<email>valerio.deluca@unisalento.it</email>
(V.D.L.);
<email>lucio.depaolis@unisalento.it</email>
(L.T.D.P.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="De Paolis, Lucio T" sort="De Paolis, Lucio T" uniqKey="De Paolis L" first="Lucio T." last="De Paolis">Lucio T. De Paolis</name>
<affiliation>
<nlm:aff id="af3-sensors-16-00394">Augmented and Virtual Reality Laboratory (AVR Lab), Department of Engineering for Innovation, University of Salento, Campus Ecotekne, Via per Monteroni, Lecce 73100, Italy;
<email>valerio.deluca@unisalento.it</email>
(V.D.L.);
<email>lucio.depaolis@unisalento.it</email>
(L.T.D.P.)</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26999151</idno>
<idno type="pmc">4813969</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4813969</idno>
<idno type="RBID">PMC:4813969</idno>
<idno type="doi">10.3390/s16030394</idno>
<date when="2016">2016</date>
<idno type="wicri:Area/Pmc/Corpus">000577</idno>
<idno type="wicri:Area/Pmc/Curation">000577</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Haptic, Virtual Interaction and Motor Imagery: Entertainment Tools and Psychophysiological Testing</title>
<author>
<name sortKey="Invitto, Sara" sort="Invitto, Sara" uniqKey="Invitto S" first="Sara" last="Invitto">Sara Invitto</name>
<affiliation wicri:level="1">
<nlm:aff id="af1-sensors-16-00394">Human Anatomy and Neuroscience Laboratory, Department of Biological and Environmental Science and Technologies, University of Salento, Campus Ecotekne, Via per Monteroni, Lecce 73100, Italy</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea>Human Anatomy and Neuroscience Laboratory, Department of Biological and Environmental Science and Technologies, University of Salento, Campus Ecotekne, Via per Monteroni, Lecce 73100</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Faggiano, Chiara" sort="Faggiano, Chiara" uniqKey="Faggiano C" first="Chiara" last="Faggiano">Chiara Faggiano</name>
<affiliation>
<nlm:aff id="af2-sensors-16-00394">University of Salento, Campus Ecotekne, Via per Monteroni, Lecce 73100, Italy;
<email>chiarafaggiano0@gmail.com</email>
(C.F.);
<email>silvia.sammarco@yahoo.it</email>
(S.S.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Sammarco, Silvia" sort="Sammarco, Silvia" uniqKey="Sammarco S" first="Silvia" last="Sammarco">Silvia Sammarco</name>
<affiliation>
<nlm:aff id="af2-sensors-16-00394">University of Salento, Campus Ecotekne, Via per Monteroni, Lecce 73100, Italy;
<email>chiarafaggiano0@gmail.com</email>
(C.F.);
<email>silvia.sammarco@yahoo.it</email>
(S.S.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="De Luca, Valerio" sort="De Luca, Valerio" uniqKey="De Luca V" first="Valerio" last="De Luca">Valerio De Luca</name>
<affiliation>
<nlm:aff id="af3-sensors-16-00394">Augmented and Virtual Reality Laboratory (AVR Lab), Department of Engineering for Innovation, University of Salento, Campus Ecotekne, Via per Monteroni, Lecce 73100, Italy;
<email>valerio.deluca@unisalento.it</email>
(V.D.L.);
<email>lucio.depaolis@unisalento.it</email>
(L.T.D.P.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="De Paolis, Lucio T" sort="De Paolis, Lucio T" uniqKey="De Paolis L" first="Lucio T." last="De Paolis">Lucio T. De Paolis</name>
<affiliation>
<nlm:aff id="af3-sensors-16-00394">Augmented and Virtual Reality Laboratory (AVR Lab), Department of Engineering for Innovation, University of Salento, Campus Ecotekne, Via per Monteroni, Lecce 73100, Italy;
<email>valerio.deluca@unisalento.it</email>
(V.D.L.);
<email>lucio.depaolis@unisalento.it</email>
(L.T.D.P.)</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Sensors (Basel, Switzerland)</title>
<idno type="eISSN">1424-8220</idno>
<imprint>
<date when="2016">2016</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>In this work, the perception of affordances was analysed in terms of cognitive neuroscience during an interactive experience in a virtual reality environment. In particular, we chose a virtual reality scenario based on the Leap Motion controller: this sensor device captures the movements of the user’s hand and fingers, which are reproduced on a computer screen by the appropriate software applications. For our experiment, we employed a sample of 10 subjects matched by age and sex, chosen from among university students. The subjects took part in a motor imagery training and in an immersive affordance condition (a virtual training with the Leap Motion and a haptic training with real objects). After each training session the subjects performed a recognition task, in order to investigate event-related potential (ERP) components. The results revealed significant differences in the attentional components during the Leap Motion training. During the Leap Motion session, latencies increased in the occipital lobes, which handle visual sensory processing; in contrast, latencies decreased in the frontal lobe, which is mainly activated for attention and action planning.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Bryan, J" uniqKey="Bryan J">J. Bryan</name>
</author>
<author>
<name sortKey="Vorderer, P E" uniqKey="Vorderer P">P.E. Vorderer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ricciardi, F" uniqKey="Ricciardi F">F. Ricciardi</name>
</author>
<author>
<name sortKey="De Paolis, L T" uniqKey="De Paolis L">L.T. De Paolis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Invitto, S" uniqKey="Invitto S">S. Invitto</name>
</author>
<author>
<name sortKey="Faggiano, C" uniqKey="Faggiano C">C. Faggiano</name>
</author>
<author>
<name sortKey="Sammarco, S" uniqKey="Sammarco S">S. Sammarco</name>
</author>
<author>
<name sortKey="De Luca, V" uniqKey="De Luca V">V. De Luca</name>
</author>
<author>
<name sortKey="De Paolis, L T" uniqKey="De Paolis L">L.T. De Paolis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mandryk, R L" uniqKey="Mandryk R">R.L. Mandryk</name>
</author>
<author>
<name sortKey="Inkpen, K M" uniqKey="Inkpen K">K.M. Inkpen</name>
</author>
<author>
<name sortKey="Calvert, T W" uniqKey="Calvert T">T.W. Calvert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bau, O" uniqKey="Bau O">O. Bau</name>
</author>
<author>
<name sortKey="Poupyrev, I" uniqKey="Poupyrev I">I. Poupyrev</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gibson, J J" uniqKey="Gibson J">J.J. Gibson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thill, S" uniqKey="Thill S">S. Thill</name>
</author>
<author>
<name sortKey="Caligiore, D" uniqKey="Caligiore D">D. Caligiore</name>
</author>
<author>
<name sortKey="Borghi, A M" uniqKey="Borghi A">A.M. Borghi</name>
</author>
<author>
<name sortKey="Ziemke, T" uniqKey="Ziemke T">T. Ziemke</name>
</author>
<author>
<name sortKey="Baldassarre, G" uniqKey="Baldassarre G">G. Baldassarre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jones, K S" uniqKey="Jones K">K.S. Jones</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chemero, A" uniqKey="Chemero A">A. Chemero</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Handy, T C" uniqKey="Handy T">T.C. Handy</name>
</author>
<author>
<name sortKey="Grafton, S T" uniqKey="Grafton S">S.T. Grafton</name>
</author>
<author>
<name sortKey="Shroff, N M" uniqKey="Shroff N">N.M. Shroff</name>
</author>
<author>
<name sortKey="Ketay, S" uniqKey="Ketay S">S. Ketay</name>
</author>
<author>
<name sortKey="Gazzaniga, M S" uniqKey="Gazzaniga M">M.S. Gazzaniga</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gibson, J J" uniqKey="Gibson J">J.J. Gibson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Apel, J" uniqKey="Apel J">J. Apel</name>
</author>
<author>
<name sortKey="Cangelosi, A" uniqKey="Cangelosi A">A. Cangelosi</name>
</author>
<author>
<name sortKey="Ellis, R" uniqKey="Ellis R">R. Ellis</name>
</author>
<author>
<name sortKey="Goslin, J" uniqKey="Goslin J">J. Goslin</name>
</author>
<author>
<name sortKey="Fischer, M" uniqKey="Fischer M">M. Fischer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Caligiore, D" uniqKey="Caligiore D">D. Caligiore</name>
</author>
<author>
<name sortKey="Borghi, A M" uniqKey="Borghi A">A.M. Borghi</name>
</author>
<author>
<name sortKey="Parisi, D" uniqKey="Parisi D">D. Parisi</name>
</author>
<author>
<name sortKey="Ellis, R" uniqKey="Ellis R">R. Ellis</name>
</author>
<author>
<name sortKey="Cangelosi, A" uniqKey="Cangelosi A">A. Cangelosi</name>
</author>
<author>
<name sortKey="Baldassarre, G" uniqKey="Baldassarre G">G. Baldassarre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gross, D C" uniqKey="Gross D">D.C. Gross</name>
</author>
<author>
<name sortKey="Stanney, K M" uniqKey="Stanney K">K.M. Stanney</name>
</author>
<author>
<name sortKey="Cohn, L J" uniqKey="Cohn L">L.J. Cohn</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lepecq, J C" uniqKey="Lepecq J">J.-C. Lepecq</name>
</author>
<author>
<name sortKey="Bringoux, L" uniqKey="Bringoux L">L. Bringoux</name>
</author>
<author>
<name sortKey="Pergandi, J M" uniqKey="Pergandi J">J.-M. Pergandi</name>
</author>
<author>
<name sortKey="Coyle, T" uniqKey="Coyle T">T. Coyle</name>
</author>
<author>
<name sortKey="Mestre, D" uniqKey="Mestre D">D. Mestre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Warren, W H" uniqKey="Warren W">W.H. Warren</name>
</author>
<author>
<name sortKey="Whang, S" uniqKey="Whang S">S. Whang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Luck, S J" uniqKey="Luck S">S.J. Luck</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cutmore, T R H" uniqKey="Cutmore T">T.R.H. Cutmore</name>
</author>
<author>
<name sortKey="Hine, T J" uniqKey="Hine T">T.J. Hine</name>
</author>
<author>
<name sortKey="Maberly, K J" uniqKey="Maberly K">K.J. Maberly</name>
</author>
<author>
<name sortKey="Langford, N M" uniqKey="Langford N">N.M. Langford</name>
</author>
<author>
<name sortKey="Hawgood, G" uniqKey="Hawgood G">G. Hawgood</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liu, B" uniqKey="Liu B">B. Liu</name>
</author>
<author>
<name sortKey="Wang, Z" uniqKey="Wang Z">Z. Wang</name>
</author>
<author>
<name sortKey="Song, G" uniqKey="Song G">G. Song</name>
</author>
<author>
<name sortKey="Wu, G" uniqKey="Wu G">G. Wu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Neumann, U" uniqKey="Neumann U">U. Neumann</name>
</author>
<author>
<name sortKey="Majoros, A" uniqKey="Majoros A">A. Majoros</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Garber, L" uniqKey="Garber L">L. Garber</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Giuroiu, M C" uniqKey="Giuroiu M">M.-C. Giuroiu</name>
</author>
<author>
<name sortKey="Marita, T" uniqKey="Marita T">T. Marita</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Takahashi, T" uniqKey="Takahashi T">T. Takahashi</name>
</author>
<author>
<name sortKey="Kishino, F" uniqKey="Kishino F">F. Kishino</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rehg, J M" uniqKey="Rehg J">J.M. Rehg</name>
</author>
<author>
<name sortKey="Kanade, T" uniqKey="Kanade T">T. Kanade</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ionescu, B" uniqKey="Ionescu B">B. Ionescu</name>
</author>
<author>
<name sortKey="Coquin, D" uniqKey="Coquin D">D. Coquin</name>
</author>
<author>
<name sortKey="Lambert, P" uniqKey="Lambert P">P. Lambert</name>
</author>
<author>
<name sortKey="Buzuloiu, V" uniqKey="Buzuloiu V">V. Buzuloiu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Joslin, C" uniqKey="Joslin C">C. Joslin</name>
</author>
<author>
<name sortKey="El Sawah, A" uniqKey="El Sawah A">A. El-Sawah</name>
</author>
<author>
<name sortKey="Chen, Q" uniqKey="Chen Q">Q. Chen</name>
</author>
<author>
<name sortKey="Georganas, N" uniqKey="Georganas N">N. Georganas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Xu, D" uniqKey="Xu D">D. Xu</name>
</author>
<author>
<name sortKey="Yao, W" uniqKey="Yao W">W. Yao</name>
</author>
<author>
<name sortKey="Zhang, Y" uniqKey="Zhang Y">Y. Zhang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wang, R Y" uniqKey="Wang R">R.Y. Wang</name>
</author>
<author>
<name sortKey="Popovic, J" uniqKey="Popovic J">J. Popovic</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bellarbi, A" uniqKey="Bellarbi A">A. Bellarbi</name>
</author>
<author>
<name sortKey="Benbelkacem, S" uniqKey="Benbelkacem S">S. Benbelkacem</name>
</author>
<author>
<name sortKey="Zenati Henda, N" uniqKey="Zenati Henda N">N. Zenati-Henda</name>
</author>
<author>
<name sortKey="Belhocine, M" uniqKey="Belhocine M">M. Belhocine</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fiala, M" uniqKey="Fiala M">M. Fiala</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Seichter, H" uniqKey="Seichter H">H. Seichter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Irawati, S" uniqKey="Irawati S">S. Irawati</name>
</author>
<author>
<name sortKey="Green, S" uniqKey="Green S">S. Green</name>
</author>
<author>
<name sortKey="Billinghurst, M" uniqKey="Billinghurst M">M. Billinghurst</name>
</author>
<author>
<name sortKey="Duenser, A" uniqKey="Duenser A">A. Duenser</name>
</author>
<author>
<name sortKey="Ko, H" uniqKey="Ko H">H. Ko</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Berlia, R" uniqKey="Berlia R">R. Berlia</name>
</author>
<author>
<name sortKey="Kandoi, S" uniqKey="Kandoi S">S. Kandoi</name>
</author>
<author>
<name sortKey="Dubey, S" uniqKey="Dubey S">S. Dubey</name>
</author>
<author>
<name sortKey="Pingali, T R" uniqKey="Pingali T">T.R. Pingali</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bellarbi, A" uniqKey="Bellarbi A">A. Bellarbi</name>
</author>
<author>
<name sortKey="Belghit, H" uniqKey="Belghit H">H. Belghit</name>
</author>
<author>
<name sortKey="Benbelkacem, S" uniqKey="Benbelkacem S">S. Benbelkacem</name>
</author>
<author>
<name sortKey="Zenati, N" uniqKey="Zenati N">N. Zenati</name>
</author>
<author>
<name sortKey="Belhocine, M" uniqKey="Belhocine M">M. Belhocine</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sandor, C" uniqKey="Sandor C">C. Sandor</name>
</author>
<author>
<name sortKey="Klinker, G" uniqKey="Klinker G">G. Klinker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bhame, V" uniqKey="Bhame V">V. Bhame</name>
</author>
<author>
<name sortKey="Sreemathy, R" uniqKey="Sreemathy R">R. Sreemathy</name>
</author>
<author>
<name sortKey="Dhumal, H" uniqKey="Dhumal H">H. Dhumal</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pu, Q" uniqKey="Pu Q">Q. Pu</name>
</author>
<author>
<name sortKey="Gupta, S" uniqKey="Gupta S">S. Gupta</name>
</author>
<author>
<name sortKey="Gollakota, S" uniqKey="Gollakota S">S. Gollakota</name>
</author>
<author>
<name sortKey="Patel, S" uniqKey="Patel S">S. Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pu, Q" uniqKey="Pu Q">Q. Pu</name>
</author>
<author>
<name sortKey="Gupta, S" uniqKey="Gupta S">S. Gupta</name>
</author>
<author>
<name sortKey="Gollakota, S" uniqKey="Gollakota S">S. Gollakota</name>
</author>
<author>
<name sortKey="Patel, S" uniqKey="Patel S">S. Patel</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Li, K F" uniqKey="Li K">K.F. Li</name>
</author>
<author>
<name sortKey="Sevcenco, A M" uniqKey="Sevcenco A">A.-M. Sevcenco</name>
</author>
<author>
<name sortKey="Cheng, L" uniqKey="Cheng L">L. Cheng</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sarbolandi, H" uniqKey="Sarbolandi H">H. Sarbolandi</name>
</author>
<author>
<name sortKey="Lefloch, D" uniqKey="Lefloch D">D. Lefloch</name>
</author>
<author>
<name sortKey="Kolb, A" uniqKey="Kolb A">A. Kolb</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marin, G" uniqKey="Marin G">G. Marin</name>
</author>
<author>
<name sortKey="Dominio, F" uniqKey="Dominio F">F. Dominio</name>
</author>
<author>
<name sortKey="Zanuttigh, P" uniqKey="Zanuttigh P">P. Zanuttigh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cook, H" uniqKey="Cook H">H. Cook</name>
</author>
<author>
<name sortKey="Nguyen, Q V" uniqKey="Nguyen Q">Q.V. Nguyen</name>
</author>
<author>
<name sortKey="Simoff, S" uniqKey="Simoff S">S. Simoff</name>
</author>
<author>
<name sortKey="Trescak, T" uniqKey="Trescak T">T. Trescak</name>
</author>
<author>
<name sortKey="Preston, D" uniqKey="Preston D">D. Preston</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Coles, M G H" uniqKey="Coles M">M.G.H. Coles</name>
</author>
<author>
<name sortKey="Rugg, M D" uniqKey="Rugg M">M.D. Rugg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Invitto, S" uniqKey="Invitto S">S. Invitto</name>
</author>
<author>
<name sortKey="Scalinci, G" uniqKey="Scalinci G">G. Scalinci</name>
</author>
<author>
<name sortKey="Mignozzi, A" uniqKey="Mignozzi A">A. Mignozzi</name>
</author>
<author>
<name sortKey="Faggiano, C" uniqKey="Faggiano C">C. Faggiano</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Delorme, A" uniqKey="Delorme A">A. Delorme</name>
</author>
<author>
<name sortKey="Miyakoshi, M" uniqKey="Miyakoshi M">M. Miyakoshi</name>
</author>
<author>
<name sortKey="Jung, T P" uniqKey="Jung T">T.P. Jung</name>
</author>
<author>
<name sortKey="Makeig, S" uniqKey="Makeig S">S. Makeig</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pascual Marqui, R D" uniqKey="Pascual Marqui R">R.D. Pascual-Marqui</name>
</author>
<author>
<name sortKey="Esslen, M" uniqKey="Esslen M">M. Esslen</name>
</author>
<author>
<name sortKey="Kochi, K" uniqKey="Kochi K">K. Kochi</name>
</author>
<author>
<name sortKey="Lehmann, D" uniqKey="Lehmann D">D. Lehmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Burgess, P W" uniqKey="Burgess P">P.W. Burgess</name>
</author>
<author>
<name sortKey="Dumontheil, I" uniqKey="Dumontheil I">I. Dumontheil</name>
</author>
<author>
<name sortKey="Gilbert, S J" uniqKey="Gilbert S">S.J. Gilbert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kassuba, T" uniqKey="Kassuba T">T. Kassuba</name>
</author>
<author>
<name sortKey="Menz, M M" uniqKey="Menz M">M.M. Menz</name>
</author>
<author>
<name sortKey="Roder, B" uniqKey="Roder B">B. Röder</name>
</author>
<author>
<name sortKey="Siebner, H R" uniqKey="Siebner H">H.R. Siebner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Amedi, A" uniqKey="Amedi A">A. Amedi</name>
</author>
<author>
<name sortKey="Malach, R" uniqKey="Malach R">R. Malach</name>
</author>
<author>
<name sortKey="Hendler, T" uniqKey="Hendler T">T. Hendler</name>
</author>
<author>
<name sortKey="Peled, S" uniqKey="Peled S">S. Peled</name>
</author>
<author>
<name sortKey="Zohary, E" uniqKey="Zohary E">E. Zohary</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ellis, R" uniqKey="Ellis R">R. Ellis</name>
</author>
<author>
<name sortKey="Tucker, M" uniqKey="Tucker M">M. Tucker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Foxe, J J" uniqKey="Foxe J">J.J. Foxe</name>
</author>
<author>
<name sortKey="Simpson, G V" uniqKey="Simpson G">G.V. Simpson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Coull, J T" uniqKey="Coull J">J.T. Coull</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kudo, N" uniqKey="Kudo N">N. Kudo</name>
</author>
<author>
<name sortKey="Nakagome, K" uniqKey="Nakagome K">K. Nakagome</name>
</author>
<author>
<name sortKey="Kasai, K" uniqKey="Kasai K">K. Kasai</name>
</author>
<author>
<name sortKey="Araki, T" uniqKey="Araki T">T. Araki</name>
</author>
<author>
<name sortKey="Fukuda, M" uniqKey="Fukuda M">M. Fukuda</name>
</author>
<author>
<name sortKey="Kato, N" uniqKey="Kato N">N. Kato</name>
</author>
<author>
<name sortKey="Iwanami, A" uniqKey="Iwanami A">A. Iwanami</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gratton, G" uniqKey="Gratton G">G. Gratton</name>
</author>
<author>
<name sortKey="Coles, M G" uniqKey="Coles M">M.G. Coles</name>
</author>
<author>
<name sortKey="Sirevaag, E J" uniqKey="Sirevaag E">E.J. Sirevaag</name>
</author>
<author>
<name sortKey="Eriksen, C W" uniqKey="Eriksen C">C.W. Eriksen</name>
</author>
<author>
<name sortKey="Donchin, E" uniqKey="Donchin E">E. Donchin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vainio, L" uniqKey="Vainio L">L. Vainio</name>
</author>
<author>
<name sortKey="Ala Salom Ki, H" uniqKey="Ala Salom Ki H">H. Ala-Salomäki</name>
</author>
<author>
<name sortKey="Huovilainen, T" uniqKey="Huovilainen T">T. Huovilainen</name>
</author>
<author>
<name sortKey="Nikkinen, H" uniqKey="Nikkinen H">H. Nikkinen</name>
</author>
<author>
<name sortKey="Salo, M" uniqKey="Salo M">M. Salo</name>
</author>
<author>
<name sortKey="V Liaho, J" uniqKey="V Liaho J">J. Väliaho</name>
</author>
<author>
<name sortKey="Paavilainen, P" uniqKey="Paavilainen P">P. Paavilainen</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Sensors (Basel)</journal-id>
<journal-id journal-id-type="iso-abbrev">Sensors (Basel)</journal-id>
<journal-id journal-id-type="publisher-id">sensors</journal-id>
<journal-title-group>
<journal-title>Sensors (Basel, Switzerland)</journal-title>
</journal-title-group>
<issn pub-type="epub">1424-8220</issn>
<publisher>
<publisher-name>MDPI</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">26999151</article-id>
<article-id pub-id-type="pmc">4813969</article-id>
<article-id pub-id-type="doi">10.3390/s16030394</article-id>
<article-id pub-id-type="publisher-id">sensors-16-00394</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Haptic, Virtual Interaction and Motor Imagery: Entertainment Tools and Psychophysiological Testing</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Invitto</surname>
<given-names>Sara</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-16-00394">1</xref>
<xref rid="c1-sensors-16-00394" ref-type="corresp">*</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Faggiano</surname>
<given-names>Chiara</given-names>
</name>
<xref ref-type="aff" rid="af2-sensors-16-00394">2</xref>
<xref ref-type="author-notes" rid="fn1-sensors-16-00394"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Sammarco</surname>
<given-names>Silvia</given-names>
</name>
<xref ref-type="aff" rid="af2-sensors-16-00394">2</xref>
<xref ref-type="author-notes" rid="fn1-sensors-16-00394"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>De Luca</surname>
<given-names>Valerio</given-names>
</name>
<xref ref-type="aff" rid="af3-sensors-16-00394">3</xref>
<xref ref-type="author-notes" rid="fn1-sensors-16-00394"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>De Paolis</surname>
<given-names>Lucio T.</given-names>
</name>
<xref ref-type="aff" rid="af3-sensors-16-00394">3</xref>
<xref ref-type="author-notes" rid="fn1-sensors-16-00394"></xref>
</contrib>
</contrib-group>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Lamberti</surname>
<given-names>Fabrizio</given-names>
</name>
<role>Academic Editor</role>
</contrib>
<contrib contrib-type="editor">
<name>
<surname>Sanna</surname>
<given-names>Andrea</given-names>
</name>
<role>Academic Editor</role>
</contrib>
<contrib contrib-type="editor">
<name>
<surname>Rokne</surname>
<given-names>Jon</given-names>
</name>
<role>Academic Editor</role>
</contrib>
</contrib-group>
<aff id="af1-sensors-16-00394">
<label>1</label>
Human Anatomy and Neuroscience Laboratory, Department of Biological and Environmental Science and Technologies, University of Salento, Campus Ecotekne, Via per Monteroni, Lecce 73100, Italy</aff>
<aff id="af2-sensors-16-00394">
<label>2</label>
University of Salento, Campus Ecotekne, Via per Monteroni, Lecce 73100, Italy;
<email>chiarafaggiano0@gmail.com</email>
(C.F.);
<email>silvia.sammarco@yahoo.it</email>
(S.S.)</aff>
<aff id="af3-sensors-16-00394">
<label>3</label>
Augmented and Virtual Reality Laboratory (AVR Lab), Department of Engineering for Innovation, University of Salento, Campus Ecotekne, Via per Monteroni, Lecce 73100, Italy;
<email>valerio.deluca@unisalento.it</email>
(V.D.L.);
<email>lucio.depaolis@unisalento.it</email>
(L.T.D.P.)</aff>
<author-notes>
<corresp id="c1-sensors-16-00394">
<label>*</label>
Correspondence:
<email>sara.invitto@unisalento.it</email>
; Tel.: +39-0832-298-618; Fax: +39-0832-298-626</corresp>
<fn id="fn1-sensors-16-00394">
<label></label>
<p>These authors contributed equally to this work.</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>18</day>
<month>3</month>
<year>2016</year>
</pub-date>
<pub-date pub-type="collection">
<month>3</month>
<year>2016</year>
</pub-date>
<volume>16</volume>
<issue>3</issue>
<elocation-id>394</elocation-id>
<history>
<date date-type="received">
<day>24</day>
<month>12</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>14</day>
<month>3</month>
<year>2016</year>
</date>
</history>
<permissions>
<copyright-statement>© 2016 by the authors; licensee MDPI, Basel, Switzerland.</copyright-statement>
<copyright-year>2016</copyright-year>
<license>
<license-p>
<pmc-comment>CREATIVE COMMONS</pmc-comment>
This article is an open access article distributed under the terms and conditions of the Creative Commons by Attribution (CC-BY) license (
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">http://creativecommons.org/licenses/by/4.0/</ext-link>
).</license-p>
</license>
</permissions>
<abstract>
<p>In this work, the perception of affordances was analysed in terms of cognitive neuroscience during an interactive experience in a virtual reality environment. In particular, we chose a virtual reality scenario based on the Leap Motion controller: this sensor device captures the movements of the user’s hand and fingers, which are reproduced on a computer screen by the appropriate software applications. For our experiment, we employed a sample of 10 subjects matched by age and sex, chosen from among university students. The subjects took part in a motor imagery training and in an immersive affordance condition (a virtual training with the Leap Motion and a haptic training with real objects). After each training session the subjects performed a recognition task, in order to investigate event-related potential (ERP) components. The results revealed significant differences in the attentional components during the Leap Motion training. During the Leap Motion session, latencies increased in the occipital lobes, which handle visual sensory processing; in contrast, latencies decreased in the frontal lobe, which is mainly activated for attention and action planning.</p>
</abstract>
<kwd-group>
<kwd>virtual training</kwd>
<kwd>event-related potentials</kwd>
<kwd>interactive entertainment</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="sec1-sensors-16-00394">
<title>1. Entertainment, Virtual Environment and Affordance</title>
<p>Entertainment is becoming an extremely strong field from an economic point of view, arousing interest among economists to the point that the modern age is being called the “entertainment age” [
<xref rid="B1-sensors-16-00394" ref-type="bibr">1</xref>
], from both the engineering and the psychological research points of view. Recently, in fact, some studies have developed a new field of research, called the “Psychology of Entertainment”, which considers playful processing and its relationship with learning, perception, emotions, and communication through an interdisciplinary approach that is also useful to marketing, communication sciences, and cognitive and clinical neuroscience (e.g., the development of serious therapeutic games [
<xref rid="B2-sensors-16-00394" ref-type="bibr">2</xref>
]).</p>
<p>Until now, this research has relied mainly on behavioral analysis (e.g., the study of role playing in learning levels and behavioral reaction times), but the need to investigate the effects of games through psychophysiological variables is becoming increasingly evident. This is also because evaluating entertainment technology is challenging: success is not defined in terms of productivity and performance, which are objective categorizations, but in terms of enjoyment and interaction, which are subjective categorizations, strongly related to both a perceptual and an emotional approach [
<xref rid="B3-sensors-16-00394" ref-type="bibr">3</xref>
,
<xref rid="B4-sensors-16-00394" ref-type="bibr">4</xref>
].</p>
<p>Around this theme, we are developing various studies in order to analyze and implement haptic interaction aspects; for example, an interface built by the Disney Group seeks to enrich an augmented reality product with tactile stimulation [
<xref rid="B5-sensors-16-00394" ref-type="bibr">5</xref>
].</p>
<p>To investigate these aspects, we aim to understand the properties of an interactive space and the activations that objects leading to action can produce; this analysis allows us to better understand the different possible uses of entertainment technology, as well as the ergonomic implementations we can improve (e.g., studying how the perceptual levels process a virtual product). To do this we introduce a concept that is extremely important for cognitive science: the concept of affordances. Gibson [
<xref rid="B6-sensors-16-00394" ref-type="bibr">6</xref>
] was the psychologist who introduced affordance theory: he highlighted that the dynamic pattern of the optic flow can reactively guide interaction with the environment. He introduced the term “affordance” to give an active meaning to the visual perception of the environment: according to his theory, such perception directly includes the potential actions that a perceiver can carry out, without activating high-level reasoning processes about object properties [
<xref rid="B7-sensors-16-00394" ref-type="bibr">7</xref>
]. The affordance is directly perceivable by the organism because there is information in the environment that uniquely specifies that affordance for this organism. In other words, Gibson’s affordances introduce the idea of the actor-environment mutuality: the actor and the environment make an inseparable pair. This idea was different from the contemporary view of the time that the meaning of objects was created internally with further “mental calculation” of the otherwise meaningless perceptual data. Indeed, Gibson’s work was focused on direct perception, a form of perception that does not require mediation or internal processing by an actor [
<xref rid="B8-sensors-16-00394" ref-type="bibr">8</xref>
,
<xref rid="B9-sensors-16-00394" ref-type="bibr">9</xref>
].</p>
<p>From a manipulation perspective, for instance, a person watching an object would also directly perceive the object’s “graspability”, “liftability”,
<italic>etc.</italic>
, as well as shapes and colors [
<xref rid="B10-sensors-16-00394" ref-type="bibr">10</xref>
].</p>
<p>To be graspable, an object must have opposite surfaces separated by a distance less than the span of the hand. In addition to the object itself, the embodiment of the observing agent (in particular its actuators) also conditions the object’s affordance. A bottle, for instance, affords grasping for human beings, whereas it might afford a biting action for dogs and yet another action for ants. Furthermore, the affordance is perceived visually: surface properties are seen relative to the surfaces of the observer’s own body, the self, and thereby carry meaning (a knee-high surface, for example, constitutes a seat). The size that makes an object graspable is specified in the optic array; if this is true, a haptic sensation of size does not have to become associated with the visual sensation of size in order for the affordance to be perceived. A five-inch cube, for example, can be grasped, but a ten-inch cube cannot [
<xref rid="B11-sensors-16-00394" ref-type="bibr">11</xref>
]. The affordance concept assumes that the resulting sensorimotor processing tends to trigger or prepare an action automatically, in a reactive pattern (even though this tendency can be significantly influenced by the agent’s context and goals).</p>
<p>Recently, the Gibsonian affordance concept has been expanded to also include the brain representations of affordances (
<italic>i.e.</italic>
, the possible sensorimotor interactions linked to objects; see for instance recent papers on computational models of affordances [
<xref rid="B12-sensors-16-00394" ref-type="bibr">12</xref>
,
<xref rid="B13-sensors-16-00394" ref-type="bibr">13</xref>
]).</p>
<p>These authors worked on memory span, involving objects with handles oriented to the left or the right side. Such representations include both the features involved in handling an object, such as its size and location, and the relation between the object and the agent’s body, such as the proximity or contact between the object and a distal effector (the hand).</p>
<p>Nowadays Gibson’s ecological framework is considered a valid functional approach for defining the level of realism of experience in the design of virtual environments [
<xref rid="B14-sensors-16-00394" ref-type="bibr">14</xref>
]. For instance, the perception of affordances could potentially improve the sensorimotor assessment of physical presence, which is the sensation of being physically located in a virtual world [
<xref rid="B15-sensors-16-00394" ref-type="bibr">15</xref>
]. In this context, Lepecq [
<xref rid="B15-sensors-16-00394" ref-type="bibr">15</xref>
] studied the behavior of subjects walking through virtual apertures of variable width. The analysis assumed that the sense of presence leads subjects to change their body orientation according to the aperture width during the walk: this suggested a significant similarity between the locomotor postural patterns of subjects walking through a virtual aperture and those of subjects walking through a real aperture [
<xref rid="B16-sensors-16-00394" ref-type="bibr">16</xref>
]. A behavioral transition from frontal walking to body rotation was observed in most subjects when the virtual aperture narrowed.</p>
<p>Finally, researchers proposed a conceptual model representing the arousal of affordances in virtual environments (VEs) through sensory-stimulus substitution. Such a model can provide VE designers with several useful hints for conceiving more ecologically valid environments [
<xref rid="B14-sensors-16-00394" ref-type="bibr">14</xref>
]. According to this literature, there is a need to integrate a comprehensive theory of perception into VE design. Theories of direct perception, in particular affordance theory, may prove particularly relevant to VE system design because affordance theory provides an explanation of the interaction of an organism with its environment [
<xref rid="B14-sensors-16-00394" ref-type="bibr">14</xref>
].</p>
<p>Some recent works analyzed the cognitive process in the human brain in terms of event-related potentials (ERP) during virtual reality experiences. An event-related potential measures the brain response to a specific sensory, perceptual, cognitive, or motor event [
<xref rid="B17-sensors-16-00394" ref-type="bibr">17</xref>
]. For these reasons, realistic and immersive environments provided by virtual reality technologies can be used to set up reliable experimental scenarios for analyzing the cognitive processing mechanisms of the human brain.</p>
<p>In this regard, some studies have already tried to shed light on factors that affect the efficient and rapid acquisition of knowledge using this technology and demonstrated that the right cerebral hemisphere appears to be more activated than the left during navigational learning in a VE. These results underlined the implications of the use of VEs for training purposes and may assist in linking the processes involved in navigation to a more general framework of visual-spatial processing and mental imagery [
<xref rid="B18-sensors-16-00394" ref-type="bibr">18</xref>
].</p>
<p>A recent experimental work [
<xref rid="B19-sensors-16-00394" ref-type="bibr">19</xref>
] focused on a virtual reality traffic environment to study cognitive processing in the human brain in terms of event-related potentials: for this purpose, traffic signs were shown during the experiment with correct and incorrect background colors. The results revealed that the human brain responds more quickly in the presence of simpler content and stronger color contrast between background and foreground. Indeed, along these lines, some researchers consider augmented reality a suitable instrument for achieving better adherence to correct procedures thanks to its ability to increase users’ motivation [
<xref rid="B20-sensors-16-00394" ref-type="bibr">20</xref>
].</p>
<p>This paper analyses the perception of affordances in a virtual environment in terms of psychophysiological and behavioral parameters, especially using event-related potential components, from the neuroscientific perspective on cognition. For our empirical study, we chose as the experimental scenario the interaction with a virtual reality application through the Leap Motion controller [
<xref rid="B21-sensors-16-00394" ref-type="bibr">21</xref>
], a small and low-cost gesture detection system [
<xref rid="B22-sensors-16-00394" ref-type="bibr">22</xref>
] designed for interactive entertainment. The software applications provided with this device reproduce the appropriate interactive experience for our study of the perception of affordance.</p>
<p>In accordance with our model, the adoption of new information and communication technologies within the education field is enabling the development of several interactive systems and software applications. Their technological novelty makes interaction with them even more interesting, compelling, and fascinating. Their adoption allows users to play, to be entertained, and to memorize learning objects more easily and with more involvement. These technologies have expanded the possibilities for experimentation within fields and situations that can be simulated by virtual environments.</p>
<sec id="sec1dot1-sensors-16-00394">
<title>1.1. Gestural Technologies</title>
<p>Gestures [
<xref rid="B22-sensors-16-00394" ref-type="bibr">22</xref>
] are new forms of communication based on the association of particular messages and meaningful commands with well-defined positions or movements of some parts of the human body. They typically deal with finger and hand movements, but some authors are also investigating the possibility of tracking head and eye movements and facial expressions [
<xref rid="B23-sensors-16-00394" ref-type="bibr">23</xref>
] by exploiting the proper face tracking API provided by devices’ vendors [
<xref rid="B24-sensors-16-00394" ref-type="bibr">24</xref>
].</p>
<p>Gestures represent a more natural form of human-computer interaction than traditional devices such as mouse and keyboard: indeed, they mostly derive from body language, which is part of the natural communication among people. In gesture-based environments, users can pay more attention to the output on the screen, since they do not have to look at the input devices anymore.</p>
<p>Moreover, gestures are a more practical alternative to voice-recognition systems, which require a long and complex training phase to adjust to the voice tone and the user’s diction.</p>
<p>In recent years, many devices and systems have been designed for gesture detection by engineers and software developers. However, hints provided by human-interaction experts should be taken into account in the design process to make gestural technologies more usable and consistent with natural forms of interaction. Moreover, the distinction between intentional control gestures and accidental movements is still an open problem.</p>
<p>In early interaction systems based on hand gestures, users had to wear electronic gloves [
<xref rid="B25-sensors-16-00394" ref-type="bibr">25</xref>
,
<xref rid="B26-sensors-16-00394" ref-type="bibr">26</xref>
] containing several sensors. The first systems could only detect static hand postures, sometimes coupled with position and orientation in the space [
<xref rid="B25-sensors-16-00394" ref-type="bibr">25</xref>
]. Then, in a later phase, some more advanced systems introduced dynamic gesture detection [
<xref rid="B27-sensors-16-00394" ref-type="bibr">27</xref>
], which also include movements, and finger tracking [
<xref rid="B28-sensors-16-00394" ref-type="bibr">28</xref>
].</p>
<p>An important drawback coming from the adoption of electronic gloves is related to the calibration of those systems, whose parameters need to be tuned according to hand geometry [
<xref rid="B29-sensors-16-00394" ref-type="bibr">29</xref>
].</p>
<p>Some other gesture detection systems [
<xref rid="B30-sensors-16-00394" ref-type="bibr">30</xref>
,
<xref rid="B31-sensors-16-00394" ref-type="bibr">31</xref>
] apply color segmentation on colored gloves worn by users or use vision-tracked fiducial markers [
<xref rid="B32-sensors-16-00394" ref-type="bibr">32</xref>
] placed on some handheld devices [
<xref rid="B33-sensors-16-00394" ref-type="bibr">33</xref>
] or even directly over hands and fingertips [
<xref rid="B34-sensors-16-00394" ref-type="bibr">34</xref>
].</p>
<p>Another wearable device is a wristband [
<xref rid="B35-sensors-16-00394" ref-type="bibr">35</xref>
] based on the electromyography technique, which retrieves signals from the electric potential of human muscles.</p>
<p>Wearable devices are often cumbersome and may limit hand movements due to the presence of sensors and wires. The consequent constraints on the degree of freedom of movements partially reduce the range of users’ gestures. Moreover, the general user experience may be negatively affected. Users often do not perceive this form of interaction as being as natural as that offered by vision-based systems.</p>
<p>Interactive tabletop devices can detect gestures by exploiting histogram methods [
<xref rid="B36-sensors-16-00394" ref-type="bibr">36</xref>
] or some more expensive touch-sensitive surfaces [
<xref rid="B37-sensors-16-00394" ref-type="bibr">37</xref>
], but they require the hand to keep a fixed posture.</p>
<p>The vision-based recognition system described in [
<xref rid="B38-sensors-16-00394" ref-type="bibr">38</xref>
] uses a simple webcam to acquire images of the user’s hand and recognize Indian signs (representing numbers from 0 to 9).</p>
<p>The main drawback of camera-based systems is related to their restricted interaction area, which is limited to a specific field of view near the device. Furthermore, lens or sensor issues, lighting problems, and objects in the scene foreground/background could significantly affect their accuracy and reliability [
<xref rid="B22-sensors-16-00394" ref-type="bibr">22</xref>
]. Some systems try to overcome these problems by exploiting electric fields [
<xref rid="B39-sensors-16-00394" ref-type="bibr">39</xref>
] or Wi-Fi signals [
<xref rid="B40-sensors-16-00394" ref-type="bibr">40</xref>
,
<xref rid="B41-sensors-16-00394" ref-type="bibr">41</xref>
,
<xref rid="B42-sensors-16-00394" ref-type="bibr">42</xref>
]: they recognize gestures by detecting the disturbance produced by body parts.</p>
<p>Nevertheless, nowadays the most common gestural technologies, introduced by videogame and entertainment companies (e.g., Microsoft, Nintendo, and Sony), rely either on handheld devices or on cameras performing motion tracking.</p>
<p>The Leap Motion controller [
<xref rid="B21-sensors-16-00394" ref-type="bibr">21</xref>
] is a small, easy-to-use, and low-cost device designed to capture the movements of human hands and fingers.
<xref ref-type="fig" rid="sensors-16-00394-f001">Figure 1</xref>
shows the use of the Leap Motion during the experiments.</p>
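<p>As a rough illustration of how an application consumes this kind of tracking output, the Python sketch below polls a stream of frames and maps palm and fingertip positions onto screen coordinates, which is essentially what the visualisation software does. It is only a schematic example: read_tracking_frame() and the frame fields are hypothetical stand-ins, not the actual Leap Motion SDK API.</p>

import time

def read_tracking_frame():
    """Hypothetical stand-in for a call into the hand-tracking SDK.
    Returns a dict with hands, each carrying palm and fingertip
    positions in millimetres in the sensor's coordinate system."""
    raise NotImplementedError  # replace with the real SDK call

def to_screen(point_mm, width=1920, height=1080, span_mm=250.0):
    """Map a point of the interaction volume onto pixel coordinates."""
    x, y, _z = point_mm
    px = int((x / span_mm + 0.5) * width)   # x = 0 maps to mid-screen
    py = int((1.0 - y / span_mm) * height)  # a higher hand appears higher on screen
    return max(0, min(width - 1, px)), max(0, min(height - 1, py))

def track_loop(duration_s=10.0, rate_hz=60.0):
    """Poll the sensor and print a cursor position for each detected hand."""
    t_end = time.time() + duration_s
    while time.time() < t_end:
        frame = read_tracking_frame()
        for hand in frame.get("hands", []):
            cursor = to_screen(hand["palm_position"])
            tips = [to_screen(f["tip_position"]) for f in hand["fingers"]]
            print(hand["id"], cursor, tips)
        time.sleep(1.0 / rate_hz)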
<p>The Sony PlayStation Move controller [
<xref rid="B43-sensors-16-00394" ref-type="bibr">43</xref>
] detects movements in the 3D space thanks to a multi-colored light source and a webcam.</p>
<p>The Nintendo Wii remote controller (Wiimote) [
<xref rid="B44-sensors-16-00394" ref-type="bibr">44</xref>
] is equipped with accelerometers, infrared detectors, and a LED sensor bar. The MotionPlus add-on can detect change in orientation along the three axes thanks to some inertial gyroscopes.</p>
<p>A comparison between Leap Motion and Wiimote in terms of accuracy and tracking reliability can be found in [
<xref rid="B45-sensors-16-00394" ref-type="bibr">45</xref>
]. The Wiimote is more reliable in movement detection than the Leap Motion controller, which is not able to detect twisting motion properly; on the other hand, the Leap Motion is able to detect finger gestures, such as circular movements and forward/downward movements. Furthermore, the Leap Motion controller allows a more natural form of interaction: users do not have to hold any particular object and, thus, can freely move their hands and fingers in the space.</p>
<p>Unlike Sony and Nintendo, Microsoft introduced Kinect [
<xref rid="B46-sensors-16-00394" ref-type="bibr">46</xref>
], a markerless webcam-based system that does not need any handheld device. In fact, two different Kinect versions were released, characterized by different architectures and performance [
<xref rid="B47-sensors-16-00394" ref-type="bibr">47</xref>
].</p>
<p>Among all the presented devices, we chose the Leap Motion controller for the experiments described in this paper mainly because of its usability: the user can move his or her hand freely within the interaction volume without needing to grasp or wear any device. Moreover, the Leap Motion may give a higher sense of realism thanks to more accurate gesture detection compared with other devices such as the Kinect. Indeed, although in combined use the detailed information in the Kinect’s full-depth map could overcome some limitations of the Leap Motion [
<xref rid="B48-sensors-16-00394" ref-type="bibr">48</xref>
], even the second Kinect version may fail in properly detecting hands and fingers, especially during abrupt movements [
<xref rid="B49-sensors-16-00394" ref-type="bibr">49</xref>
].</p>
</sec>
<sec id="sec1dot2-sensors-16-00394">
<title>1.2. EEG, Virtual Interaction and Motor Imagery</title>
<p>This approach is framed within a paradigm of cognitive neuroscience, specifically within the study of motor affordances. This paradigm is developed through the technique of electroencephalography (EEG) and, in particular, event-related potentials (ERPs). EEG is a procedure that measures the neural activity of the brain through electrodes placed on the scalp. The EEG reflects the ongoing activity of numerous neurons under the scalp, so it is very improbable to see a single-peak evoked response in a single trial. To obtain a clear wave elicited by the task, the trials must be repeated [
<xref rid="B50-sensors-16-00394" ref-type="bibr">50</xref>
].</p>
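<p>The need for repetition can be made concrete with a small simulation: a single epoch is dominated by ongoing background activity, but averaging time-locked epochs attenuates this noise roughly by the square root of the number of trials, letting the evoked wave emerge. The Python sketch below is purely illustrative and is not the analysis code used in the study; the waveform and noise level are invented.</p>

import numpy as np

rng = np.random.default_rng(0)
fs = 256                               # sampling rate (Hz), as in this study
t = np.arange(-0.1, 0.5, 1.0 / fs)     # epoch from -100 ms to 500 ms

# Synthetic evoked response: a positive deflection peaking near 300 ms.
evoked = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.04 ** 2))

def simulate_trial():
    """One epoch = the evoked response buried in ongoing EEG 'noise'."""
    return evoked + 20e-6 * rng.standard_normal(t.size)

for n_trials in (1, 10, 100):
    average = np.mean([simulate_trial() for _ in range(n_trials)], axis=0)
    residual = np.sqrt(np.mean((average - evoked) ** 2))
    print(f"{n_trials:4d} trials -> residual noise {residual * 1e6:.2f} microvolts")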
<p>Moreover, in psychophysiology, go/no-go tests are employed to measure subjects’ attentional and decisional processing. For example, a go/no-go task requires one to perform an action for certain stimuli (e.g., press a button when the stimulus is recognized) and to inhibit that action for a different set of stimuli (e.g., not press that same button when the stimulus is not recognized).</p>
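<p>The bookkeeping behind such a task is simple: each trial pairs a stimulus category (go or no-go) with the presence or absence of a button press, yielding hits, omissions, false alarms, and correct rejections, on which the behavioural and ERP analyses are built. The snippet below is a generic sketch of that logic, not the E-Prime procedure actually used in the experiment.</p>

from collections import Counter

def classify_trial(stimulus_is_go, response_given):
    """Label a single go/no-go trial by stimulus type and response."""
    if stimulus_is_go:
        return "hit" if response_given else "omission"
    return "false_alarm" if response_given else "correct_rejection"

# Example trial log: (stimulus_is_go, response_given)
trials = [(True, True), (True, False), (False, False), (False, True), (True, True)]
print(Counter(classify_trial(s, r) for s, r in trials))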
<p>Specifically, in relation to this perspective, we began to study, through a survey of psychophysiological entertainment interfaces, how such an interface can be differentiated from a real or imagery-based interface [
<xref rid="B3-sensors-16-00394" ref-type="bibr">3</xref>
], and to analyze the difference between real haptic manipulation of objects built with a 3D printer and augmented manipulation, as a function of learning styles [
<xref rid="B51-sensors-16-00394" ref-type="bibr">51</xref>
].</p>
<p>In a previous study [
<xref rid="B51-sensors-16-00394" ref-type="bibr">51</xref>
], Invitto,
<italic>et al.</italic>
investigated Advanced Distributed Learning (ADL) through an “augmented game”. ADL is a kind of learning mediated by new technologies; it also makes use of augmented reality, which takes place through processes of virtual manipulation.</p>
<p>The experimental research focused on an augmented reality product (Dune AURASMA) and on 3D objects printed from a 3D scan. We analyzed the possibilities of interaction with, and manipulation of, shapes in augmented reality and in a real context. The literature shows that there are different modules within the occipitotemporal cortex that receive both visual and somatosensory inputs, and it explains how these can be integrated in the learning process.</p>
<p>These cortical modules can be activated in the evaluation of various aspects of the surface properties of objects, such as 3D shape, as well as during visual and haptic movements. That work analyzed variations in ERP components during two different kinds of learning training: the same objects were manipulated either in augmented reality or in a condition of real haptic manipulation, and the variations due to different learning styles were investigated.</p>
<p>We considered four scales of learning style: visual verbal, visual non-verbal, kinesthetic, and analytical. The subjects performed a 5-min training consisting of haptic manipulation of 3D models, obtained through 3D modeling in Blender 2.74, and of manipulation in augmented reality, presented through Dune
<sup>®</sup>
Aurasma models. After each training session the subjects had to perform a recognition task on the same stimuli (presented in 2D) during an EEG recording. A general linear model was computed to investigate the research hypothesis. The results of this study highlighted an effect of learning styles on ERP components. Subjects with high scores on the visual non-verbal learning style showed higher amplitudes in the central, occipital, and parietal channels in the early ERP components (associated with attentional processing). Meanwhile, subjects with a visual verbal learning style generally presented higher amplitudes in the cognitive component. According to these results, we can conclude that learning styles are involved at the perceptual level during augmented training and that, depending on the prominent style, processing involves different ERP components and different brain areas. Learning style affects these variations more when training is delivered through augmented reality, where the visuomotor process is prevalent.</p>
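<p>For readers unfamiliar with this kind of analysis, the general linear model mentioned above amounts to regressing a component measure on the learning-style scores. The sketch below shows the idea with ordinary least squares on invented numbers; the variable names and data are hypothetical and do not reproduce the actual analysis.</p>

import numpy as np

rng = np.random.default_rng(1)
n_subjects = 10

# Hypothetical predictors: scores on the four learning-style scales
# (visual verbal, visual non-verbal, kinesthetic, analytical).
styles = rng.normal(size=(n_subjects, 4))
# Hypothetical dependent variable: e.g., an early-component amplitude (microvolts).
amplitude = rng.normal(size=n_subjects)

# Design matrix with an intercept column; coefficients estimated by least squares.
X = np.column_stack([np.ones(n_subjects), styles])
beta, _, _, _ = np.linalg.lstsq(X, amplitude, rcond=None)
predicted = X @ beta
r_squared = 1 - np.sum((amplitude - predicted) ** 2) / np.sum((amplitude - amplitude.mean()) ** 2)
print("intercept and style coefficients:", np.round(beta, 3))
print("R^2:", round(r_squared, 3))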
<p>In this work, with respect to our previous studies, we implemented and used the concept of &#8220;motor imagery&#8221; to convey the idea of a mental interaction built on sensorimotor mental imagery.</p>
<p>The aim of this paper is to investigate how tools designed for entertainment can be usefully employed in learning processes and interaction. Using electrophysiological techniques, in particular event-related potentials, we investigated cortical responses and attentional arousal during interaction with the Leap Motion game interface, compared with when a subject manipulated a real object with motor affordance or merely imagined doing so. This allows us to understand whether these new gaming tools, which seem to be a middle ground between the &#8220;acted&#8221; game and the &#8220;imagined&#8221; play, can be not only new playful instruments, but also means that enable and facilitate learning.</p>
</sec>
</sec>
<sec id="sec2-sensors-16-00394">
<title>2. Method</title>
<sec id="sec2dot1-sensors-16-00394" sec-type="subjects">
<title>2.1. Participants</title>
<p>Our sample was composed of 10 university students matched by age and sex (five men and five women). The recruited volunteers had normal hearing, normal or corrected-to-normal vision, and right manual dominance, and had no previous experience of EEG and cognitive tasks. The subjects performed the baseline condition (motor imagery training) and the immersive affordance conditions (a virtual training with Leap Motion and a haptic training). None of them had previously taken part in experiments like these. All participants gave written informed consent according to the Helsinki Declaration.</p>
</sec>
<sec id="sec2dot2-sensors-16-00394">
<title>2.2. Measurements and Stimuli</title>
<p>After reading the informed consent form and the task description, participants completed a questionnaire collecting demographic data.</p>
<p>Subjects had to perform three training sessions: a motor imagery training, a haptic object manipulation training, and a Leap Motion manipulation training. In the three training sessions, subjects were shown objects with grasping affordance: a cup, glasses, scissors, a pot handle, a computer mouse, a fork, and a pen.</p>
<p>After each training, the subject had to perform a go-no go recognition task during an EEG recording, with stimuli presented through E-Prime 2.0 (Psychology Software Tools, Inc., Sharpsburg, PA, USA).</p>
<sec id="sec2dot2dot1-sensors-16-00394">
<title>EEG</title>
<p>During the image presentation task we recorded a 16-channel EEG using a BrainAmp amplifier with the Brain Vision Recorder software. We considered the event-related potentials (ERPs) elicited by grasping objects.</p>
<p>During the computer-supported attention task, EEG was recorded from 16 surface electrodes belonging to the Brain Vision Recorder apparatus (Brain Products GmbH, D&#252;sseldorf, Germany). A further electrode was positioned above the right eyebrow for electro-oculogram (EOG) recording. ERP analysis was carried out with the Brain Vision Analyzer. The offline analysis window ranged from 100 ms pre-stimulus to 500 ms post-stimulus, with baseline correction over the &#8722;100&#8211;0 ms interval.</p>
<p>Thereafter, trials with blinks and eye movements were rejected on the basis of the horizontal electro-oculogram combined with an ICA component analysis. An artifact rejection criterion of 60 &#181;V was used at all other scalp sites to reject trials with excessive EMG or other transient noise. The sampling rate was 256 Hz. After transformation and re-segmentation of the data with the Brain Vision Analyzer, the artifact-free EEG segments corresponding to the affordance objects, marked by the motor response, were averaged in each case to extract the main waveforms: the N1 in the 120&#8211;175 ms time range, the P1 in the 175&#8211;250 ms time range, and the P3 component in the 260&#8211;400 ms time interval, according to the literature. We performed semi-automatic peak detection using the mean of the maximum area for the different ERP components.</p>
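<p>As an illustration of the offline pipeline described above (epoching from 100 ms pre-stimulus to 500 ms post-stimulus, baseline correction, 60 &#181;V artifact rejection, averaging, and peak detection within the component windows), the following NumPy sketch operates on synthetic single-channel data; the variable names and the simulated signal are hypothetical, and the actual analysis was carried out with the Brain Vision Analyzer.</p>
<preformat>
import numpy as np

fs = 256                                 # sampling rate (Hz), as in the recording
n_pre = int(0.100 * fs)                  # 100 ms pre-stimulus samples
n_post = int(0.500 * fs)                 # 500 ms post-stimulus samples

rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 5e-6, size=fs * 120)          # 2 min of synthetic EEG, in volts
events = np.arange(2 * fs, len(eeg) - fs, 3 * fs)   # hypothetical stimulus markers every 3 s

epochs = []
for onset in events:
    seg = eeg[onset - n_pre: onset + n_post].copy()
    seg -= seg[:n_pre].mean()                        # baseline correction (-100 to 0 ms)
    if np.abs(seg).max() > 60e-6:                    # reject epochs exceeding 60 microvolts
        continue
    epochs.append(seg)

erp = np.mean(epochs, axis=0)                        # averaged waveform (the ERP)
t = (np.arange(-n_pre, n_post) / fs) * 1000.0        # time axis in ms

def peak_latency(lo_ms, hi_ms):
    """Latency (ms) of the largest-amplitude sample inside a component window."""
    i0 = np.searchsorted(t, lo_ms)
    i1 = np.searchsorted(t, hi_ms, side="right")
    return t[i0 + np.argmax(np.abs(erp[i0:i1]))]

print("N1 latency:", peak_latency(120, 175), "ms")
print("P1 latency:", peak_latency(175, 250), "ms")
print("P3 latency:", peak_latency(260, 400), "ms")
</preformat>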
</sec>
</sec>
<sec id="sec2dot3-sensors-16-00394">
<title>2.3. Procedure and Task</title>
<p>Our experiment consisted of an analysis of affordance perception in VR through an electrophysiological variable (ERPs). The task in the baseline condition shows, in a pseudo-random way, images (2D pictures presented with the E-Prime 2.0 software) such as colored frames, non-graspable objects (e.g., table, chair), and graspable objects (e.g., glasses, cup, scissors, mouse, pen, and fork; see
<xref ref-type="fig" rid="sensors-16-00394-f002">Figure 2</xref>
). The grasping objects were presented on 20% of the trials.</p>
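<p>A minimal sketch of how such a pseudo-random sequence with a 20% proportion of graspable (target) stimuli could be generated is given below; the category labels and the trial count are illustrative placeholders, not the actual E-Prime stimulus lists.</p>
<preformat>
import random

def build_sequence(n_trials=200, target_ratio=0.20, seed=1):
    """Pseudo-random trial list with roughly 20% graspable (target) stimuli."""
    n_targets = int(n_trials * target_ratio)
    targets = ["grasping_object"] * n_targets
    fillers = ["colored_frame", "non_grasping_object"] * ((n_trials - n_targets + 1) // 2)
    sequence = targets + fillers[: n_trials - n_targets]
    random.Random(seed).shuffle(sequence)   # reproducible pseudo-random order
    return sequence

seq = build_sequence()
print(seq[:10], "target proportion:", seq.count("grasping_object") / len(seq))
</preformat>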
<p>Subjects were seated in a comfortable chair 100 cm from the display monitor; their midsagittal plane was aligned with the center of the screen and with the center of the Leap Motion. Subjects performed three different tasks during the experiment (
<xref ref-type="fig" rid="sensors-16-00394-f003">Figure 3</xref>
):
<list list-type="bullet">
<list-item>
<p>Task Training A, in which they were asked to imagine using the objects with grasping affordance (the objects were positioned in front of the subject, on the table);</p>
</list-item>
<list-item>
<p>Task Training B, in which they were asked to imagine using the objects while interacting with the Leap Motion Playground app (they had visual feedback of their hand motion on the screen);</p>
</list-item>
<list-item>
<p>Task Training C, in which they really used the grasping objects.</p>
</list-item>
</list>
</p>
<p>Each training task had a duration of 2 min.</p>
<p>The training sessions were not presented in a fixed sequence but followed a randomly alternating order for each subject.</p>
<p>After each training session, the subject had to perform an E-Prime experiment in which he had to recognize, among various objects and colored frames, the objects he had previously seen during the training. The triggers of affordance images were used for ERP segmentation analysis.</p>
<p>The recognition task images were selected through a repertoire of neutral images (colored squares on a light background), non-target images (animals, everyday objects), and target images (the same grasping objects used in the previous training session).</p>
<p>All stimuli had dimensions of 240 &#215; 210 pixels and were displayed centrally, at the same level of brightness, on a light grey background on the computer monitor. The task was administered via the E-Prime 2.0 software. The task paradigm was a go-no go presentation (
<xref ref-type="fig" rid="sensors-16-00394-f004">Figure 4</xref>
). Each recognition run lasted 600 s, with a stimulus duration of 2000 ms and an interstimulus interval of 1000 ms.</p>
<p>The participants were instructed to sit upright, with approximately 75 cm between the front edge of the chair and the base of the computer screen. The following instruction message was shown to each user: &#8220;Please click a button when you see an element which has been previously imagined or manipulated.&#8221;</p>
</sec>
</sec>
<sec id="sec3-sensors-16-00394">
<title>3. Statistical Analysis and Results</title>
<p>An ANOVA was performed to analyze the behavioral and electrophysiological data, using the three training conditions as the independent variable and reaction time (behavioral measure) and the ERP wave components (psychophysiological measures) as dependent variables.</p>
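<p>For readers who wish to reproduce this kind of analysis, a minimal one-way ANOVA sketch using SciPy is reported below; the reaction times are invented placeholders, not the values recorded in this study.</p>
<preformat>
from scipy import stats

# Hypothetical reaction times (ms) for the three training conditions (placeholders).
motor_imagery = [1050, 1180, 1120, 1090, 1230, 1010, 1160, 1075, 1140, 1065]
leap_motion   = [1290, 1320, 1210, 1270, 1350, 1190, 1240, 1310, 1230, 1280]
haptic        = [ 920,  960,  890,  940,  980,  900,  950,  910,  930,  960]

# One-way ANOVA with training condition as the between-condition factor.
f_value, p_value = stats.f_oneway(motor_imagery, leap_motion, haptic)
print(f"one-way ANOVA: F = {f_value:.3f}, p = {p_value:.3f}")
</preformat>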
<sec id="sec3dot1-sensors-16-00394">
<title>3.1. Behavioral Task Analysis</title>
<p>The effect of training condition on reaction time was significant (F = 4.009;
<italic>p</italic>
= 0.020);
<italic>post hoc</italic>
analysis shows a significant difference (
<italic>p</italic>
= 0.016) between condition 2 and condition 3 (Leap Motion training and real training), in the direction of a slower reaction time in the Leap Motion condition (mean of 1254 ms)
<italic>versus</italic>
haptic training (mean of 934 ms). There was no significant difference for the motor imagery condition (mean of 1112 ms).</p>
</sec>
<sec id="sec3dot2-sensors-16-00394">
<title>3.2. ERP Analysis</title>
<p>We performed a one-way ANOVA with the training condition as factor (1: motor imagery training; 2: Leap Motion training; 3: haptic training) and the latencies and amplitudes of the N1, P1, and P3 ERP components in the affordance condition as dependent variables.</p>
<p>Main results for N1 waves are reported in
<xref ref-type="table" rid="sensors-16-00394-t001">Table 1</xref>
: N1 shows significant values for O1 Latency (F = 5.373;
<italic>p</italic>
= 0.012), O2 Latency (F = 5.570;
<italic>p</italic>
= 0.010), and Fz Latency (F = 5.206;
<italic>p</italic>
= 0.013).</p>
<p>
<italic>Post hoc</italic>
analysis (Bonferroni test) shows, in O1, a significant difference between condition 1 (motor imagery training) and condition 2 (Leap Motion training) (
<italic>p</italic>
= 0.025) and a significant difference between condition 2 (Leap Motion training) and condition 3 (haptic training) (
<italic>p</italic>
= 0.030). This trend is highlighted in
<xref ref-type="fig" rid="sensors-16-00394-f005">Figure 5</xref>
. The session after the Leap Motion training presents a slower latency in the left occipital channel.</p>
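<p>The pairwise Bonferroni-corrected comparisons reported here can be sketched as follows; the latency values are hypothetical, and the correction is applied by multiplying each uncorrected p-value by the number of comparisons.</p>
<preformat>
from itertools import combinations
from scipy import stats

# Hypothetical N1 latencies (ms) at O1 for the three training conditions (placeholders).
conditions = {
    "motor_imagery": [138, 142, 135, 140, 137, 144, 139, 141, 136, 143],
    "leap_motion":   [155, 160, 152, 158, 154, 161, 157, 159, 153, 156],
    "haptic":        [140, 137, 143, 139, 141, 138, 142, 136, 144, 140],
}

pairs = list(combinations(conditions, 2))
for a, b in pairs:
    t_stat, p = stats.ttest_ind(conditions[a], conditions[b])
    p_bonf = min(p * len(pairs), 1.0)   # Bonferroni correction
    print(f"{a} vs {b}: t = {t_stat:.2f}, corrected p = {p_bonf:.4f}")
</preformat>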
<p>The same trend is present for the O2 latency, where the Bonferroni test indicates a significant difference between conditions 1 and 2 (
<italic>p</italic>
= 0.044) and 2 and 3 (
<italic>p</italic>
= 0.015) (
<xref ref-type="fig" rid="sensors-16-00394-f006">Figure 6</xref>
). The grand average is a method for comparing variability in ERPs across subjects and conditions [
<xref rid="B52-sensors-16-00394" ref-type="bibr">52</xref>
].</p>
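<p>A grand average is simply the mean of the subject-level averaged waveforms; the following sketch assumes a hypothetical array of per-subject ERPs (subjects &#215; time samples) for one channel and one condition.</p>
<preformat>
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_samples = 10, 153          # 10 subjects; 100 ms pre- to 500 ms post-stimulus at 256 Hz

# Hypothetical per-subject averaged ERPs for one channel and one condition (volts).
subject_erps = rng.normal(0.0, 2e-6, size=(n_subjects, n_samples))

grand_average = subject_erps.mean(axis=0)          # mean waveform across subjects
between_subject_sd = subject_erps.std(axis=0)      # variability across subjects at each sample

print(grand_average.shape, between_subject_sd.max())
</preformat>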
<p>For the Fz latency, the Bonferroni test shows a significant difference between conditions 1 and 2 (
<italic>p</italic>
= 0.019) and 2 and 3 (
<italic>p</italic>
= 0.053) as shown in
<xref ref-type="fig" rid="sensors-16-00394-f007">Figure 7</xref>
. These differences are in the direction of a faster latency in the Leap Motion condition. The comparison of trends across electrodes (grand averages) is shown in
<xref ref-type="fig" rid="sensors-16-00394-f008">Figure 8</xref>
.</p>
<p>The low-resolution brain electromagnetic tomography (LORETA [
<xref rid="B53-sensors-16-00394" ref-type="bibr">53</xref>
]) reconstructions are shown in
<xref ref-type="fig" rid="sensors-16-00394-f009">Figure 9</xref>
,
<xref ref-type="fig" rid="sensors-16-00394-f010">Figure 10</xref>
and
<xref ref-type="fig" rid="sensors-16-00394-f011">Figure 11</xref>
.</p>
<p>Main results for P1 waves are:
<list list-type="bullet">
<list-item>
<p>P1 shows significant value for F4 Latency (F = 4.334;
<italic>p</italic>
= 0.025);</p>
</list-item>
<list-item>
<p>
<italic>post hoc</italic>
analysis (Bonferroni test) shows a significant difference between condition 1 and condition 3 (
<italic>p</italic>
= 0.031);</p>
</list-item>
<list-item>
<p>no significant difference in P3.</p>
</list-item>
</list>
</p>
<p>The LORETA technique allows one to estimate the sources of the electrical activity in the brain from the scalp potentials recorded during EEG. In
<xref ref-type="fig" rid="sensors-16-00394-f009">Figure 9</xref>
there is a LORETA reconstruction of the N1 component after the motor imagery task. The reconstruction suggests a deeper cortical source in the frontal lobe, especially in Brodmann Area 10, which is involved in strategic memory processes and in executive function.</p>
<p>The LORETA reconstruction in
<xref ref-type="fig" rid="sensors-16-00394-f010">Figure 10</xref>
suggests a deeper cortical source in the frontal lobe, especially in Brodmann Area 8, which is involved when a subject has to manage uncertainty.</p>
<p>The LORETA reconstruction in
<xref ref-type="fig" rid="sensors-16-00394-f011">Figure 11</xref>
suggests a deeper cortical source in the temporal lobe, in particular in Brodmann Area 22, which is involved in language and prosody processing. These aspects are discussed in more detail in the next section, within the discussion of results and the conclusions.</p>
</sec>
</sec>
<sec id="sec4-sensors-16-00394">
<title>4. Discussion</title>
<p>The results of this work are sensitive to the training conditions, in particular to the virtual Leap Motion training. In the behavioral task we found a significantly slower reaction time in the Leap Motion condition than in the real condition. In the event-related potential results, we found significant values in the early components (N1 and P1).</p>
<p>For the N1 waves, we found a slower latency in the occipital channels and a faster latency in the frontal channels for the Leap Motion condition, while for P1 we found differences between the real training and motor imagery training sessions, with a shorter latency in the right frontal channel only for the motor imagery training condition. There were no effects on ERP amplitude and no significant results in the P3 component. Trying to relate the ERP results to the source areas highlighted by the LORETA technique, we can see how, after the different trainings, different brain sources are involved, even though the ERPs are always recorded on the same task: in our design, what changes is not the recognition test itself, but only the training that precedes it. In the motor imagery task, we see the involvement of Brodmann area 10, linked to executive functions. This part of the cortex is the rostral prefrontal cortex, and has been called the gateway of imagination and of executive processing [
<xref rid="B54-sensors-16-00394" ref-type="bibr">54</xref>
]. This is plausible if we consider that motor imagery is, in some ways, equivalent to action planning. It is interesting to see that, during the use of the Leap Motion, there is an activation of the dorsolateral prefrontal cortex, which is involved when the subject has to manage a state of uncertainty. From these results, which are also supported by the slower behavioral reaction times and the longer latencies in the event-related potentials, we can say that the interaction with augmented reality, in this case with the Leap Motion, did not directly facilitate the perceptual processes, but rather created a sort of &#8220;dissonance&#8221;, probably due to an incomplete integration of the sensory systems. Finally, the haptic interaction processes, with real objects in a real environment and using objects with grasping affordances, activate a process in the temporal area, where haptic interaction and visual object recognition are strongly related to multisensory processing [
<xref rid="B55-sensors-16-00394" ref-type="bibr">55</xref>
,
<xref rid="B56-sensors-16-00394" ref-type="bibr">56</xref>
].</p>
</sec>
<sec id="sec5-sensors-16-00394">
<title>5. Conclusions</title>
<p>In recent years, affordances have attracted considerable interest in the field of cognitive neuroscience. Starting from a more general idea that objects in the environment invite us to act [
<xref rid="B6-sensors-16-00394" ref-type="bibr">6</xref>
], cognitive researchers now investigate more specific components of the actions evoked by objects, as well as the neural correlates of the visual-motor associations formed during experience with them [
<xref rid="B57-sensors-16-00394" ref-type="bibr">57</xref>
].</p>
<p>In this work, we studied affordances by presenting individual objects with grasping affordance and asking the subjects to interact with them at different levels: concretely, in a haptic handling training; virtually, by using the Leap Motion; or mentally, in an imagination session based on motor imagery.</p>
<p>After each training session, the subject performed a recognition task on the stimuli during an EEG recording, which allowed us to examine how brain activations vary depending on the training. Among the ERP components, we chose those most sensitive to these conditions. N1 is an ERP component that occurs between 50 and 150 ms and, because of its early latency, has often been considered too early to be influenced by top-down influences from the prefrontal cortex.</p>
<p>Some studies show that sensory input is processed by the occipital cortex within 56 ms and that the information is communicated to the dorsolateral frontal cortex, where it arrives in about 80 ms [
<xref rid="B58-sensors-16-00394" ref-type="bibr">58</xref>
]. These higher-level areas create the repetition and arousal modulations upon the sensory area processing reflected in this component [
<xref rid="B59-sensors-16-00394" ref-type="bibr">59</xref>
]. Another top-down influence upon N1 was suggested to be efference copies from a person’s intended movements so that the stimulation deriving from them is not processed [
<xref rid="B60-sensors-16-00394" ref-type="bibr">60</xref>
]. Instead, P1 is very sensitive to attentional processing.</p>
<p>In our results, we found significant variations in the latencies of these two components but not in the P3 component. This may be because, in this experimental model, the sensorimotor and attentional processes are activated very early, through the motor training and through the visual affordance of the objects. In the behavioral results, we found a slower reaction time in the Leap Motion condition. This could be due to the virtual motion training, which can be considered a non-integrated, multilevel sensory training: it involves both imagined feedback and motor/visual feedback, without providing any tactile feedback. This may somehow slow down the processes of motor response.</p>
<p>According to our results, we can say that motion training applied to interactive entertainment (in this experiment, Leap Motion training was used as interactive entertainment) can significantly change the discrimination and retention of the presented stimulus. In our study, the motor imagery training and the haptic training sometimes follow the same trend in the results, whereas the Leap Motion session changes this trend. The early attentional components in the occipital lobe, which is devoted to visual sensory processing, show increased latencies when the subject uses the Leap Motion controller.</p>
<p>In contrast, latencies decrease in the frontal lobe, where the brain is involved in attentional arousal, sensorimotor action, and action planning rather than in visual perception. In this study we used graspable objects with motor affordance because they are easy to handle during haptic training and because they are cognitively interesting: their affordance is related to action planning. Moreover, affordance allows one to understand how the object can be moved and grasped in order to manipulate it.</p>
<p>However, in future experiments we will investigate grasping affordance with lateralized readiness potentials, which are a special form of readiness potential (a general pre-motor potential) [
<xref rid="B61-sensors-16-00394" ref-type="bibr">61</xref>
], very sensitive to affordances over a whole range of conditions, even when there is no intention to act or to prepare a manual motor response [
<xref rid="B62-sensors-16-00394" ref-type="bibr">62</xref>
]. In future experiments, we will use the Leap Motion controller and other game interfaces to analyze virtual interaction and the connections among pleasure, learning, and cortical response. All of these studies and results could be useful for implementing a technological and virtual product in a neurocognitive and ergonomic way, by employing a device developed for entertainment as a tool for psychophysiological testing.</p>
</sec>
</body>
<back>
<ack>
<title>Acknowledgments</title>
<p>This work has been supported by EDOC@Work Project 3.0&#8212;Education and Work in Cloud. We thank the illustrator Virginia Massari, who provided the design of the cup (
<xref ref-type="fig" rid="sensors-16-00394-f002">Figure 2</xref>
).</p>
</ack>
<notes>
<title>Author Contributions</title>
<p>The experimental work was conceived by the first author, Sara Invitto, who provided all the methodological framework, recorded the data, and performed the data analysis and the interpretation of results. All EEG instruments belong to the Human Anatomy and Neuroscience Laboratory. Chiara Faggiano and Silvia Sammarco collaborated on the ERP and EEG recordings. Valerio De Luca and Lucio T. De Paolis oversaw the gesture interaction section with the Leap Motion controller.</p>
</notes>
<notes>
<title>Conflicts of Interest</title>
<p>The authors declare no conflict of interest.</p>
</notes>
<ref-list>
<title>References</title>
<ref id="B1-sensors-16-00394">
<label>1.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Bryan</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Vorderer</surname>
<given-names>P.E.</given-names>
</name>
</person-group>
<source>Psychology of Entertainment</source>
<publisher-name>Routledge, Taylor & Francis group</publisher-name>
<publisher-loc>New York, NY, USA</publisher-loc>
<year>2006</year>
<fpage>457</fpage>
</element-citation>
</ref>
<ref id="B2-sensors-16-00394">
<label>2.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ricciardi</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>De Paolis</surname>
<given-names>L.T.</given-names>
</name>
</person-group>
<article-title>A Comprehensive Review of Serious Games in Health Professions</article-title>
<source>Int. J. Comput. Games Technol.</source>
<year>2014</year>
<volume>2014</volume>
<pub-id pub-id-type="doi">10.1155/2014/787968</pub-id>
</element-citation>
</ref>
<ref id="B3-sensors-16-00394">
<label>3.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Invitto</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Faggiano</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Sammarco</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>De Luca</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>De Paolis</surname>
<given-names>L.T.</given-names>
</name>
</person-group>
<article-title>Interactive entertainment, virtual motion training and brain ergonomy</article-title>
<source>Proceedings of the 7th International Conference on Intelligent Technologies for Interactive Entertainment (INTETAIN)</source>
<conf-loc>Turin, Italy</conf-loc>
<conf-date>10–12 June 2015</conf-date>
<fpage>88</fpage>
<lpage>94</lpage>
</element-citation>
</ref>
<ref id="B4-sensors-16-00394">
<label>4.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mandryk</surname>
<given-names>R.L.</given-names>
</name>
<name>
<surname>Inkpen</surname>
<given-names>K.M.</given-names>
</name>
<name>
<surname>Calvert</surname>
<given-names>T.W.</given-names>
</name>
</person-group>
<article-title>Using psychophysiological techniques to measure user experience with entertainment technologies</article-title>
<source>Behav. Inf. Technol.</source>
<year>2006</year>
<volume>25</volume>
<fpage>141</fpage>
<lpage>158</lpage>
<pub-id pub-id-type="doi">10.1080/01449290500331156</pub-id>
</element-citation>
</ref>
<ref id="B5-sensors-16-00394">
<label>5.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bau</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Poupyrev</surname>
<given-names>I.</given-names>
</name>
</person-group>
<article-title>REVEL: Tactile Feedback Technology for Augmented Reality</article-title>
<source>ACM Trans. Graph.</source>
<year>2012</year>
<volume>31</volume>
<fpage>1</fpage>
<lpage>11</lpage>
<pub-id pub-id-type="doi">10.1145/2185520.2185585</pub-id>
</element-citation>
</ref>
<ref id="B6-sensors-16-00394">
<label>6.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Gibson</surname>
<given-names>J.J.</given-names>
</name>
</person-group>
<article-title>The Theory of Affordances</article-title>
<source>Perceiving, Acting, and Knowing. Towards an Ecological Psychology</source>
<publisher-name>John Wiley & Sons Inc.</publisher-name>
<publisher-loc>Hoboken, NJ, USA</publisher-loc>
<year>1977</year>
<fpage>127</fpage>
<lpage>143</lpage>
</element-citation>
</ref>
<ref id="B7-sensors-16-00394">
<label>7.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thill</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Caligiore</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Borghi</surname>
<given-names>A.M.</given-names>
</name>
<name>
<surname>Ziemke</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Baldassarre</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Theories and computational models of affordance and mirror systems: An integrative review</article-title>
<source>Neurosci. Biobehav. Rev.</source>
<year>2013</year>
<volume>37</volume>
<fpage>491</fpage>
<lpage>521</lpage>
<pub-id pub-id-type="doi">10.1016/j.neubiorev.2013.01.012</pub-id>
<pub-id pub-id-type="pmid">23333761</pub-id>
</element-citation>
</ref>
<ref id="B8-sensors-16-00394">
<label>8.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jones</surname>
<given-names>K.S.</given-names>
</name>
</person-group>
<article-title>What Is an Affordance?</article-title>
<source>Ecol. Psychol.</source>
<year>2003</year>
<volume>15</volume>
<fpage>107</fpage>
<lpage>114</lpage>
<pub-id pub-id-type="doi">10.1207/S15326969ECO1502_1</pub-id>
</element-citation>
</ref>
<ref id="B9-sensors-16-00394">
<label>9.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chemero</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>An Outline of a Theory of Affordances</article-title>
<source>Ecol. Psychol.</source>
<year>2003</year>
<volume>15</volume>
<fpage>181</fpage>
<lpage>195</lpage>
<pub-id pub-id-type="doi">10.1207/S15326969ECO1502_5</pub-id>
</element-citation>
</ref>
<ref id="B10-sensors-16-00394">
<label>10.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Handy</surname>
<given-names>T.C.</given-names>
</name>
<name>
<surname>Grafton</surname>
<given-names>S.T.</given-names>
</name>
<name>
<surname>Shroff</surname>
<given-names>N.M.</given-names>
</name>
<name>
<surname>Ketay</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gazzaniga</surname>
<given-names>M.S.</given-names>
</name>
</person-group>
<article-title>Graspable objects grab attention when the potential for action is recognized</article-title>
<source>Nat. Neurosci.</source>
<year>2003</year>
<volume>6</volume>
<fpage>421</fpage>
<lpage>427</lpage>
<pub-id pub-id-type="doi">10.1038/nn1031</pub-id>
<pub-id pub-id-type="pmid">12640459</pub-id>
</element-citation>
</ref>
<ref id="B11-sensors-16-00394">
<label>11.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Gibson</surname>
<given-names>J.J.</given-names>
</name>
</person-group>
<source>The Ecological Approach to Visual Perception</source>
<publisher-name>Houghton Mifflin</publisher-name>
<publisher-loc>Boston, MA, USA</publisher-loc>
<year>1979</year>
</element-citation>
</ref>
<ref id="B12-sensors-16-00394">
<label>12.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Apel</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Cangelosi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ellis</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Goslin</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Fischer</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Object affordance influences instruction span</article-title>
<source>Exp. Brain Res.</source>
<year>2012</year>
<volume>223</volume>
<fpage>199</fpage>
<lpage>206</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-012-3251-0</pub-id>
<pub-id pub-id-type="pmid">22972449</pub-id>
</element-citation>
</ref>
<ref id="B13-sensors-16-00394">
<label>13.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Caligiore</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Borghi</surname>
<given-names>A.M.</given-names>
</name>
<name>
<surname>Parisi</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Ellis</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Cangelosi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Baldassarre</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>How affordances associated with a distractor object affect compatibility effects: A study with the computational model TRoPICALS</article-title>
<source>Psychol. Res.</source>
<year>2013</year>
<volume>77</volume>
<fpage>7</fpage>
<lpage>19</lpage>
<pub-id pub-id-type="doi">10.1007/s00426-012-0424-1</pub-id>
<pub-id pub-id-type="pmid">22327121</pub-id>
</element-citation>
</ref>
<ref id="B14-sensors-16-00394">
<label>14.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gross</surname>
<given-names>D.C.</given-names>
</name>
<name>
<surname>Stanney</surname>
<given-names>K.M.</given-names>
</name>
<name>
<surname>Cohn</surname>
<given-names>L.J.</given-names>
</name>
</person-group>
<article-title>Evoking affordances in virtual environments via sensory-stimuli substitution</article-title>
<source>Presence Teleoper. Virtual Environ.</source>
<year>2005</year>
<volume>14</volume>
<fpage>482</fpage>
<lpage>491</lpage>
<pub-id pub-id-type="doi">10.1162/105474605774785244</pub-id>
</element-citation>
</ref>
<ref id="B15-sensors-16-00394">
<label>15.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lepecq</surname>
<given-names>J.-C.</given-names>
</name>
<name>
<surname>Bringoux</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Pergandi</surname>
<given-names>J.-M.</given-names>
</name>
<name>
<surname>Coyle</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Mestre</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>Afforded Actions As a Behavioral Assessment of Physical Presence in Virtual Environments</article-title>
<source>Virtual Real.</source>
<year>2009</year>
<volume>13</volume>
<fpage>141</fpage>
<lpage>151</lpage>
<pub-id pub-id-type="doi">10.1007/s10055-009-0118-1</pub-id>
</element-citation>
</ref>
<ref id="B16-sensors-16-00394">
<label>16.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Warren</surname>
<given-names>W.H.</given-names>
<suffix>Jr.</suffix>
</name>
<name>
<surname>Whang</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Visual Guidance of Walking Through Apertures: Body-Scaled Information for Affordances</article-title>
<source>J. Exp. Psychol. Hum. Percept. Perform.</source>
<year>1987</year>
<volume>13</volume>
<fpage>371</fpage>
<lpage>383</lpage>
<pub-id pub-id-type="doi">10.1037/0096-1523.13.3.371</pub-id>
<pub-id pub-id-type="pmid">2958586</pub-id>
</element-citation>
</ref>
<ref id="B17-sensors-16-00394">
<label>17.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Luck</surname>
<given-names>S.J.</given-names>
</name>
</person-group>
<source>An Introduction to the Event-Related Potential Technique</source>
<edition>2nd ed.</edition>
<publisher-name>MIT Press</publisher-name>
<publisher-loc>Cambridge, MA, USA</publisher-loc>
<year>2014</year>
<fpage>1</fpage>
<lpage>50</lpage>
</element-citation>
</ref>
<ref id="B18-sensors-16-00394">
<label>18.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cutmore</surname>
<given-names>T.R.H.</given-names>
</name>
<name>
<surname>Hine</surname>
<given-names>T.J.</given-names>
</name>
<name>
<surname>Maberly</surname>
<given-names>K.J.</given-names>
</name>
<name>
<surname>Langford</surname>
<given-names>N.M.</given-names>
</name>
<name>
<surname>Hawgood</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Cognitive and gender factors influencing navigation in a virtual environment</article-title>
<source>Int. J. Hum. Comput. Stud.</source>
<year>2000</year>
<volume>53</volume>
<fpage>223</fpage>
<lpage>249</lpage>
<pub-id pub-id-type="doi">10.1006/ijhc.2000.0389</pub-id>
</element-citation>
</ref>
<ref id="B19-sensors-16-00394">
<label>19.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Song</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Cognitive processing of traffic signs in immersive virtual reality environment: An ERP study</article-title>
<source>Neurosci. Lett.</source>
<year>2010</year>
<volume>485</volume>
<fpage>43</fpage>
<lpage>48</lpage>
<pub-id pub-id-type="doi">10.1016/j.neulet.2010.08.059</pub-id>
<pub-id pub-id-type="pmid">20801188</pub-id>
</element-citation>
</ref>
<ref id="B20-sensors-16-00394">
<label>20.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Neumann</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Majoros</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Cognitive, performance, and systems issues for augmented reality applications in manufacturing and maintenance</article-title>
<source>Proceedings of the IEEE 1998 Virtual Reality Annual International Symposium</source>
<conf-loc>Atlanta, GA, USA</conf-loc>
<conf-date>14–18 March 1998</conf-date>
<fpage>4</fpage>
<lpage>11</lpage>
</element-citation>
</ref>
<ref id="B21-sensors-16-00394">
<label>21.</label>
<element-citation publication-type="webpage">
<article-title>The Leap Motion Controller</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="https://www.leapmotion.com">https://www.leapmotion.com</ext-link>
</comment>
<date-in-citation>(accessed on 26 December 2015)</date-in-citation>
</element-citation>
</ref>
<ref id="B22-sensors-16-00394">
<label>22.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Garber</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>Gestural Technology: Moving Interfaces in a New Direction [Technology News]</article-title>
<source>Computer</source>
<year>2013</year>
<volume>46</volume>
<fpage>22</fpage>
<lpage>25</lpage>
<pub-id pub-id-type="doi">10.1109/MC.2013.352</pub-id>
</element-citation>
</ref>
<ref id="B23-sensors-16-00394">
<label>23.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Giuroiu</surname>
<given-names>M.-C.</given-names>
</name>
<name>
<surname>Marita</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>Gesture recognition toolkit using a Kinect sensor</article-title>
<source>Proceedings of the 2015 IEEE International Conference on Intelligent Computer Communication and Processing (ICCP)</source>
<conf-loc>Cluj-Napoca, Romania</conf-loc>
<conf-date>3–5 September 2015</conf-date>
<fpage>317</fpage>
<lpage>324</lpage>
</element-citation>
</ref>
<ref id="B24-sensors-16-00394">
<label>24.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>Microsoft Developer Network</collab>
</person-group>
<article-title>Face Tracking</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="https://msdn.microsoft.com/en-us/library/dn782034.aspx">https://msdn.microsoft.com/en-us/library/dn782034.aspx</ext-link>
</comment>
<date-in-citation>(accessed on 26 December 2015)</date-in-citation>
</element-citation>
</ref>
<ref id="B25-sensors-16-00394">
<label>25.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Takahashi</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Kishino</surname>
<given-names>F.</given-names>
</name>
</person-group>
<article-title>Hand Gesture Coding Based on Experiments Using a Hand Gesture Interface Device</article-title>
<source>SIGCHI Bull.</source>
<year>1991</year>
<volume>23</volume>
<fpage>67</fpage>
<lpage>74</lpage>
<pub-id pub-id-type="doi">10.1145/122488.122499</pub-id>
</element-citation>
</ref>
<ref id="B26-sensors-16-00394">
<label>26.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Rehg</surname>
<given-names>J.M.</given-names>
</name>
<name>
<surname>Kanade</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>Visual Tracking of High DOF Articulated Structures: An Application to Human Hand Tracking</article-title>
<source>
<italic>Computer Vision—ECCV ’94</italic>
, Proceedings of Third European Conference on Computer Vision Stockholm</source>
<conf-loc>Sweden</conf-loc>
<conf-date>2–6 May 1994</conf-date>
<publisher-name>Springer-Verlag</publisher-name>
<publisher-loc>London, UK</publisher-loc>
<year>1994</year>
<fpage>35</fpage>
<lpage>46</lpage>
</element-citation>
</ref>
<ref id="B27-sensors-16-00394">
<label>27.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ionescu</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Coquin</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Lambert</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Buzuloiu</surname>
<given-names>V.</given-names>
</name>
</person-group>
<article-title>Dynamic Hand Gesture Recognition Using the Skeleton of the Hand</article-title>
<source>EURASIP J. Appl. Signal Process.</source>
<year>2005</year>
<volume>2005</volume>
<fpage>2101</fpage>
<lpage>2109</lpage>
<pub-id pub-id-type="doi">10.1155/ASP.2005.2101</pub-id>
</element-citation>
</ref>
<ref id="B28-sensors-16-00394">
<label>28.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Joslin</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>El-Sawah</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Georganas</surname>
<given-names>N.</given-names>
</name>
</person-group>
<article-title>Dynamic Gesture Recognition</article-title>
<source>Proceedings of the IEEE Instrumentation and Measurement Technology Conference</source>
<conf-loc>Ottawa, ON, Canada</conf-loc>
<conf-date>16–19 May 2005</conf-date>
<volume>Volume 3</volume>
<fpage>1706</fpage>
<lpage>1711</lpage>
</element-citation>
</ref>
<ref id="B29-sensors-16-00394">
<label>29.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Xu</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Yao</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Y.</given-names>
</name>
</person-group>
<article-title>Hand Gesture Interaction for Virtual Training of SPG</article-title>
<source>Proceedings of the 16th International Conference on Artificial Reality and Telexistence–Workshops</source>
<conf-loc>Hangzhou, China</conf-loc>
<conf-date>29 November–1 December 2006</conf-date>
<fpage>672</fpage>
<lpage>676</lpage>
</element-citation>
</ref>
<ref id="B30-sensors-16-00394">
<label>30.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>R.Y.</given-names>
</name>
<name>
<surname>Popovic</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Real-time Hand-tracking with a Color Glove</article-title>
<source>ACM Trans. Graphics</source>
<year>2009</year>
<volume>28</volume>
<pub-id pub-id-type="doi">10.1145/1531326.1531369</pub-id>
</element-citation>
</ref>
<ref id="B31-sensors-16-00394">
<label>31.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Bellarbi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Benbelkacem</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Zenati-Henda</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Belhocine</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Hand Gesture Interaction Using Color-Based Method For Tabletop Interfaces</article-title>
<source>Proceedings of the 7th International Symposium on Intelligent Signal Processing (WISP)</source>
<conf-loc>Floriana, Malta</conf-loc>
<conf-date>19–21 September 2011</conf-date>
<fpage>1</fpage>
<lpage>6</lpage>
</element-citation>
</ref>
<ref id="B32-sensors-16-00394">
<label>32.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Fiala</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>ARTag, a fiducial marker system using digital techniques</article-title>
<source>IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005</source>
<conf-loc>San Diego, CA, USA</conf-loc>
<conf-date>20–25 June 2005</conf-date>
<volume>Volume 2</volume>
<fpage>590</fpage>
<lpage>596</lpage>
</element-citation>
</ref>
<ref id="B33-sensors-16-00394">
<label>33.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Seichter</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Augmented Reality and Tangible User Interfaces in Collaborative Urban Design</article-title>
<source>CAAD Futures</source>
<publisher-name>Springer</publisher-name>
<publisher-loc>Sydney, Australia</publisher-loc>
<year>2007</year>
<fpage>3</fpage>
<lpage>16</lpage>
</element-citation>
</ref>
<ref id="B34-sensors-16-00394">
<label>34.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Irawati</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Green</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Billinghurst</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Duenser</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ko</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Move the Couch Where: Developing an Augmented Reality Multimodal Interface</article-title>
<source>Proceedings of the 5th IEEE and ACM International Symposium on Mixed and Augmented Reality, ISMAR ’06</source>
<conf-loc>Santa Barbara, CA, USA</conf-loc>
<conf-date>22–25 October 2006</conf-date>
<fpage>183</fpage>
<lpage>186</lpage>
</element-citation>
</ref>
<ref id="B35-sensors-16-00394">
<label>35.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Berlia</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Kandoi</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Dubey</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Pingali</surname>
<given-names>T.R.</given-names>
</name>
</person-group>
<article-title>Gesture based universal controller using EMG signals</article-title>
<source>Proceedings of the 2014 International Conference on Circuits, Communication, Control and Computing (I4C)</source>
<conf-loc>Bangalore, India</conf-loc>
<conf-date>21–22 November 2014</conf-date>
<fpage>165</fpage>
<lpage>168</lpage>
</element-citation>
</ref>
<ref id="B36-sensors-16-00394">
<label>36.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Bellarbi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Belghit</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Benbelkacem</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Zenati</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Belhocine</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Hand gesture recognition using contour based method for tabletop surfaces</article-title>
<source>Proceedings of the 10th IEEE International Conference on Networking, Sensing and Control (ICNSC)</source>
<conf-loc>Evry, France</conf-loc>
<conf-date>10–12 April 2013</conf-date>
<fpage>832</fpage>
<lpage>836</lpage>
</element-citation>
</ref>
<ref id="B37-sensors-16-00394">
<label>37.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sandor</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Klinker</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>A Rapid Prototyping Software Infrastructure for User Interfaces in Ubiquitous Augmented Reality</article-title>
<source>Pers. Ubiq. Comput.</source>
<year>2005</year>
<volume>9</volume>
<fpage>169</fpage>
<lpage>185</lpage>
<pub-id pub-id-type="doi">10.1007/s00779-004-0328-1</pub-id>
</element-citation>
</ref>
<ref id="B38-sensors-16-00394">
<label>38.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Bhame</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Sreemathy</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Dhumal</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Vision based hand gesture recognition using eccentric approach for human computer interaction</article-title>
<source>Proceedings of the 2014 International Conference on Advances in Computing, Communications and Informatics</source>
<conf-loc>New Delhi, India</conf-loc>
<conf-date>24–27 September 2014</conf-date>
<fpage>949</fpage>
<lpage>953</lpage>
</element-citation>
</ref>
<ref id="B39-sensors-16-00394">
<label>39.</label>
<element-citation publication-type="webpage">
<article-title>Microchip Technology’s GestIC</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.microchip.com/pagehandler/en_us/technology/gestic">http://www.microchip.com/pagehandler/en_us/technology/gestic</ext-link>
</comment>
<date-in-citation>(accessed on 26 December 2015)</date-in-citation>
</element-citation>
</ref>
<ref id="B40-sensors-16-00394">
<label>40.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Pu</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Gupta</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gollakota</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Patel</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Whole-home Gesture Recognition Using Wireless Signals</article-title>
<source>Proceedings of the 19th Annual International Conference on Mobile Computing & Networking</source>
<conf-loc>Miami, FL, USA</conf-loc>
<conf-date>30 September 2013</conf-date>
<fpage>27</fpage>
<lpage>38</lpage>
</element-citation>
</ref>
<ref id="B41-sensors-16-00394">
<label>41.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pu</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Gupta</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gollakota</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Patel</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Gesture Recognition Using Wireless Signals</article-title>
<source>GetMob. Mob. Comput. Commun.</source>
<year>2015</year>
<volume>18</volume>
<fpage>15</fpage>
<lpage>18</lpage>
<pub-id pub-id-type="doi">10.1145/2721914.2721919</pub-id>
</element-citation>
</ref>
<ref id="B42-sensors-16-00394">
<label>42.</label>
<element-citation publication-type="webpage">
<article-title>WiSee Homepage</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://wisee.cs.washington.edu/">http://wisee.cs.washington.edu/</ext-link>
</comment>
<date-in-citation>(accessed on 26 December 2015)</date-in-citation>
</element-citation>
</ref>
<ref id="B43-sensors-16-00394">
<label>43.</label>
<element-citation publication-type="webpage">
<article-title>Sony PlayStation</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.playstation.com">http://www.playstation.com</ext-link>
</comment>
<date-in-citation>(accessed on 26 December 2015)</date-in-citation>
</element-citation>
</ref>
<ref id="B44-sensors-16-00394">
<label>44.</label>
<element-citation publication-type="webpage">
<article-title>Nintendo Wii System</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://wii.com">http://wii.com</ext-link>
</comment>
<date-in-citation>(accessed on 26 December 2015)</date-in-citation>
</element-citation>
</ref>
<ref id="B45-sensors-16-00394">
<label>45.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>K.F.</given-names>
</name>
<name>
<surname>Sevcenco</surname>
<given-names>A.-M.</given-names>
</name>
<name>
<surname>Cheng</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>Impact of Sensor Sensitivity in Assistive Environment</article-title>
<source>Proceedings of the 2014 Ninth International Conference on Broadband and Wireless Computing, Communication and Applications (BWCCA)</source>
<conf-loc>Guangdong, China</conf-loc>
<conf-date>8–10 November 2014</conf-date>
<fpage>161</fpage>
<lpage>168</lpage>
</element-citation>
</ref>
<ref id="B46-sensors-16-00394">
<label>46.</label>
<element-citation publication-type="webpage">
<article-title>Microsoft Kinect</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://support.xbox.com/en-US/browse/xbox-one/kinect">http://support.xbox.com/en-US/browse/xbox-one/kinect</ext-link>
</comment>
<date-in-citation>(accessed on 26 December 2015)</date-in-citation>
</element-citation>
</ref>
<ref id="B47-sensors-16-00394">
<label>47.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sarbolandi</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Lefloch</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Kolb</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Kinect range sensing: Structured-light
<italic>versus</italic>
Time-of-Flight Kinect</article-title>
<source>Comput. Vis. Image Underst.</source>
<year>2015</year>
<volume>139</volume>
<fpage>1</fpage>
<lpage>20</lpage>
<pub-id pub-id-type="doi">10.1016/j.cviu.2015.05.006</pub-id>
</element-citation>
</ref>
<ref id="B48-sensors-16-00394">
<label>48.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Marin</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Dominio</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Zanuttigh</surname>
<given-names>P.</given-names>
</name>
</person-group>
<article-title>Hand gesture recognition with Leap Motion and Kinect devices</article-title>
<source>Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP)</source>
<conf-loc>Paris, France</conf-loc>
<conf-date>27–30 October 2014</conf-date>
<fpage>1565</fpage>
<lpage>1569</lpage>
</element-citation>
</ref>
<ref id="B49-sensors-16-00394">
<label>49.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Cook</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Nguyen</surname>
<given-names>Q.V.</given-names>
</name>
<name>
<surname>Simoff</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Trescak</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Preston</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>A Close-Range Gesture Interaction with Kinect</article-title>
<source>Proceedings of the 2015 Big Data Visual Analytics (BDVA)</source>
<conf-loc>Hobart, Australia</conf-loc>
<conf-date>22–25 September 2015</conf-date>
<fpage>1</fpage>
<lpage>8</lpage>
</element-citation>
</ref>
<ref id="B50-sensors-16-00394">
<label>50.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Coles</surname>
<given-names>M.G.H.</given-names>
</name>
<name>
<surname>Rugg</surname>
<given-names>M.D.</given-names>
</name>
</person-group>
<article-title>Event-related brain potentials: An introduction</article-title>
<source>Electrophysiol. Mind Event-relat. Brain Potentials Cognit.</source>
<year>1996</year>
<volume>1</volume>
<fpage>1</fpage>
<lpage>26</lpage>
</element-citation>
</ref>
<ref id="B51-sensors-16-00394">
<label>51.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Invitto</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Scalinci</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Mignozzi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Faggiano</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Advanced Distributed Learning and ERP: Interaction in Augmented Reality, haptic manipulation with 3D models and learning styles</article-title>
<source>Proceedings of the XXIII National Congress of the Italian Society of Psychophysiology</source>
<conf-loc>Lucca, Italy</conf-loc>
<conf-date>19–21 November 2015</conf-date>
</element-citation>
</ref>
<ref id="B52-sensors-16-00394">
<label>52.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Delorme</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Miyakoshi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Jung</surname>
<given-names>T.P.</given-names>
</name>
<name>
<surname>Makeig</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Grand average ERP-image plotting and statistics: A method for comparing variability in event-related single-trial EEG activities across subjects and conditions</article-title>
<source>J. Neurosci. Methods</source>
<year>2015</year>
<volume>250</volume>
<fpage>3</fpage>
<lpage>6</lpage>
<pub-id pub-id-type="doi">10.1016/j.jneumeth.2014.10.003</pub-id>
<pub-id pub-id-type="pmid">25447029</pub-id>
</element-citation>
</ref>
<ref id="B53-sensors-16-00394">
<label>53.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pascual-Marqui</surname>
<given-names>R.D.</given-names>
</name>
<name>
<surname>Esslen</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kochi</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Lehmann</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>Functional imaging with low-resolution brain electromagnetic tomography (LORETA): A review</article-title>
<source>Methods Find. Exp. Clin. Pharm.</source>
<year>2002</year>
<volume>24</volume>
<issue>Suppl. C</issue>
<fpage>91</fpage>
<lpage>95</lpage>
</element-citation>
</ref>
<ref id="B54-sensors-16-00394">
<label>54.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burgess</surname>
<given-names>P.W.</given-names>
</name>
<name>
<surname>Dumontheil</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Gilbert</surname>
<given-names>S.J.</given-names>
</name>
</person-group>
<article-title>The gateway hypothesis of rostral prefrontal cortex (area 10) function</article-title>
<source>Trends Cognit. Sci.</source>
<year>2007</year>
<volume>11</volume>
<fpage>290</fpage>
<lpage>298</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2007.05.004</pub-id>
<pub-id pub-id-type="pmid">17548231</pub-id>
</element-citation>
</ref>
<ref id="B55-sensors-16-00394">
<label>55.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kassuba</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Menz</surname>
<given-names>M.M.</given-names>
</name>
<name>
<surname>Röder</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Siebner</surname>
<given-names>H.R.</given-names>
</name>
</person-group>
<article-title>Multisensory interactions between auditory and haptic object recognition</article-title>
<source>Cereb. Cortex</source>
<year>2013</year>
<volume>23</volume>
<fpage>1097</fpage>
<lpage>1107</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/bhs076</pub-id>
<pub-id pub-id-type="pmid">22518017</pub-id>
</element-citation>
</ref>
<ref id="B56-sensors-16-00394">
<label>56.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Amedi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Malach</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Hendler</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Peled</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Zohary</surname>
<given-names>E.</given-names>
</name>
</person-group>
<article-title>Visuo-haptic object-related activation in the ventral visual pathway</article-title>
<source>Nat. Neurosci.</source>
<year>2001</year>
<volume>4</volume>
<fpage>324</fpage>
<lpage>330</lpage>
<pub-id pub-id-type="doi">10.1038/85201</pub-id>
<pub-id pub-id-type="pmid">11224551</pub-id>
</element-citation>
</ref>
<ref id="B57-sensors-16-00394">
<label>57.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ellis</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Tucker</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Micro-affordance: The potentiation of components of action by seen objects</article-title>
<source>Br. J. Psychol.</source>
<year>2000</year>
<volume>91</volume>
<fpage>451</fpage>
<lpage>471</lpage>
<pub-id pub-id-type="doi">10.1348/000712600161934</pub-id>
<pub-id pub-id-type="pmid">11104173</pub-id>
</element-citation>
</ref>
<ref id="B58-sensors-16-00394">
<label>58.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Foxe</surname>
<given-names>J.J.</given-names>
</name>
<name>
<surname>Simpson</surname>
<given-names>G.V.</given-names>
</name>
</person-group>
<article-title>Flow of activation from V1 to frontal cortex in humans</article-title>
<source>Exp. Brain Res.</source>
<year>2002</year>
<volume>142</volume>
<fpage>139</fpage>
<lpage>150</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-001-0906-7</pub-id>
<pub-id pub-id-type="pmid">11797091</pub-id>
</element-citation>
</ref>
<ref id="B59-sensors-16-00394">
<label>59.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Coull</surname>
<given-names>J.T.</given-names>
</name>
</person-group>
<article-title>Neural correlates of attention and arousal: Insights from electrophysiology, functional neuroimaging and psychopharmacology</article-title>
<source>Prog. Neurobiol.</source>
<year>1998</year>
<volume>55</volume>
<fpage>343</fpage>
<lpage>361</lpage>
<pub-id pub-id-type="doi">10.1016/S0301-0082(98)00011-2</pub-id>
<pub-id pub-id-type="pmid">9654384</pub-id>
</element-citation>
</ref>
<ref id="B60-sensors-16-00394">
<label>60.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kudo</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Nakagome</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Kasai</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Araki</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Fukuda</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kato</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Iwanami</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Effects of corollary discharge on event-related potentials during selective attention task in healthy men and women</article-title>
<source>Neurosci. Res.</source>
<year>2004</year>
<volume>48</volume>
<fpage>59</fpage>
<lpage>64</lpage>
<pub-id pub-id-type="doi">10.1016/j.neures.2003.09.008</pub-id>
<pub-id pub-id-type="pmid">14687881</pub-id>
</element-citation>
</ref>
<ref id="B61-sensors-16-00394">
<label>61.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gratton</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Coles</surname>
<given-names>M.G.</given-names>
</name>
<name>
<surname>Sirevaag</surname>
<given-names>E.J.</given-names>
</name>
<name>
<surname>Eriksen</surname>
<given-names>C.W.</given-names>
</name>
<name>
<surname>Donchin</surname>
<given-names>E.</given-names>
</name>
</person-group>
<article-title>Pre- and poststimulus activation of response channels: A psychophysiological analysis</article-title>
<source>J. Exp. Psychol. Hum. Percept. Perform.</source>
<year>1988</year>
<volume>14</volume>
<fpage>331</fpage>
<lpage>344</lpage>
<pub-id pub-id-type="doi">10.1037/0096-1523.14.3.331</pub-id>
<pub-id pub-id-type="pmid">2971764</pub-id>
</element-citation>
</ref>
<ref id="B62-sensors-16-00394">
<label>62.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vainio</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Ala-Salomäki</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Huovilainen</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Nikkinen</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Salo</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Väliaho</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Paavilainen</surname>
<given-names>P.</given-names>
</name>
</person-group>
<article-title>Mug handle affordance and automatic response inhibition: Behavioural and electrophysiological evidence</article-title>
<source>Q. J. Exp. Psychol.</source>
<year>2014</year>
<volume>67</volume>
<fpage>1697</fpage>
<lpage>1719</lpage>
<pub-id pub-id-type="doi">10.1080/17470218.2013.868007</pub-id>
<pub-id pub-id-type="pmid">24266417</pub-id>
</element-citation>
</ref>
</ref-list>
</back>
<floats-group>
<fig id="sensors-16-00394-f001" position="float">
<label>Figure 1</label>
<caption>
<p>The use of the Leap Motion controller during the experiments.</p>
</caption>
<graphic xlink:href="sensors-16-00394-g001"></graphic>
</fig>
<fig id="sensors-16-00394-f002" position="float">
<label>Figure 2</label>
<caption>
<p>Example of an object with grasping affordance.</p>
</caption>
<graphic xlink:href="sensors-16-00394-g002"></graphic>
</fig>
<fig id="sensors-16-00394-f003" position="float">
<label>Figure 3</label>
<caption>
<p>Subject during virtual training with Leap Motion.</p>
</caption>
<graphic xlink:href="sensors-16-00394-g003"></graphic>
</fig>
<fig id="sensors-16-00394-f004" position="float">
<label>Figure 4</label>
<caption>
<p>Behavioral task: go/no-go task.</p>
</caption>
<graphic xlink:href="sensors-16-00394-g004"></graphic>
</fig>
<fig id="sensors-16-00394-f005" position="float">
<label>Figure 5</label>
<caption>
<p>N1 ERP component latency in the left occipital lobe: condition 1 (motor imagery), condition 2 (Leap Motion), and condition 3 (haptic manipulation). In the Leap Motion condition the N1 latency is longer.</p>
</caption>
<graphic xlink:href="sensors-16-00394-g005"></graphic>
</fig>
<fig id="sensors-16-00394-f006" position="float">
<label>Figure 6</label>
<caption>
<p>Grand average-matching ERP: the black line represents the response to the task after motor imagery training; the red line represents the response to the task after Leap Motion training; and the blue line represents the response to the task after haptic training.</p>
</caption>
<graphic xlink:href="sensors-16-00394-g006"></graphic>
</fig>
<fig id="sensors-16-00394-f007" position="float">
<label>Figure 7</label>
<caption>
<p>N1 ERP component latency in the frontal lobe: condition 1 (motor imagery), condition 2 (Leap Motion), and condition 3 (haptic manipulation); in the Leap Motion condition the N1 latency is shorter.</p>
</caption>
<graphic xlink:href="sensors-16-00394-g007"></graphic>
</fig>
<fig id="sensors-16-00394-f008" position="float">
<label>Figure 8</label>
<caption>
<p>Grand average-matching ERP: the black line represents the response to the task after motor imagery training; the red line represents the response to the task after Leap Motion training; and the blue line represents the response to the task after haptic training. This comparison shows the ERP trend of the three conditions during the task across all recorded channels.</p>
</caption>
<graphic xlink:href="sensors-16-00394-g008"></graphic>
</fig>
<fig id="sensors-16-00394-f009" position="float">
<label>Figure 9</label>
<caption>
<p>LORETA reconstruction of the N1 component after the Motor Imagery session.</p>
</caption>
<graphic xlink:href="sensors-16-00394-g009"></graphic>
</fig>
<fig id="sensors-16-00394-f010" position="float">
<label>Figure 10</label>
<caption>
<p>LORETA reconstruction of the N1 component after the Leap Motion session.</p>
</caption>
<graphic xlink:href="sensors-16-00394-g010"></graphic>
</fig>
<fig id="sensors-16-00394-f011" position="float">
<label>Figure 11</label>
<caption>
<p>LORETA reconstruction of the N1 component after the haptic manipulation session.</p>
</caption>
<graphic xlink:href="sensors-16-00394-g011"></graphic>
</fig>
<table-wrap id="sensors-16-00394-t001" position="float">
<object-id pub-id-type="pii">sensors-16-00394-t001_Table 1</object-id>
<label>Table 1</label>
<caption>
<p>ANOVA results and post hoc analysis with mean latencies (ms).</p>
</caption>
<table frame="hsides" rules="groups">
<tbody>
<tr>
<td align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">
<bold>ERP N1</bold>
</td>
<td align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">
<bold>Motor Imagery Latency</bold>
</td>
<td align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">
<bold>Leap Motion Latency</bold>
</td>
<td align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">
<bold>Haptic Latency</bold>
</td>
<td align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">
<bold>F</bold>
</td>
<td align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">
<bold>
<italic>p</italic>
</bold>
</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Fz</td>
<td align="center" valign="middle" rowspan="1" colspan="1">113.30 *</td>
<td align="center" valign="middle" rowspan="1" colspan="1">58 *</td>
<td align="center" valign="middle" rowspan="1" colspan="1">107.75 *</td>
<td align="center" valign="middle" rowspan="1" colspan="1">5.206</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.013</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">O1</td>
<td align="center" valign="middle" rowspan="1" colspan="1">81.40 *</td>
<td align="center" valign="middle" rowspan="1" colspan="1">128 *</td>
<td align="center" valign="middle" rowspan="1" colspan="1">80.25 *</td>
<td align="center" valign="middle" rowspan="1" colspan="1">5.373</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.012</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">O2</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">89.40</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">137.56 *</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">77.75 *</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">5.570</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">0.010</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>ERP P1</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>Motor Imagery Latency</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>Leap Motion Latency</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>Haptic Latency</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>F</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>
<italic>p</italic>
</bold>
</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">F4</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100.20 *</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">122.50</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">148.41 *</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">4.334</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">0.025</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn>
<p>* indicates a significant value at α ≤ 0.05.</p>
</fn>
</table-wrap-foot>
</table-wrap>
</floats-group>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000577 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 000577 | SxmlIndent | more
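Both commands rely on the Dilib environment variables used above. As a minimal sketch, assuming a standard Dilib/Wicri installation (the WICRI_ROOT path below is hypothetical and must be adapted to your own tree), the variables can be set as follows before running either command:

# Hypothetical installation root: adjust to your own Dilib/Wicri tree
export WICRI_ROOT=/opt/wicri
# Root of the HapticV1 exploration area, as implied by the paths above
EXPLOR_AREA=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1
# Curation step used by the first command
EXPLOR_STEP=$EXPLOR_AREA/Data/Pmc/Curation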

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:4813969
   |texte=   Haptic, Virtual Interaction and Motor Imagery: Entertainment Tools and Psychophysiological Testing
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:26999151" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 
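The pipeline above selects this record through its PubMed identifier in the RBID index, extracts it from the curation base and hands it to NlmPubMed2Wicri to produce wiki pages for the HapticV1 area. As a minimal variant, assuming only the Dilib tools and options already shown on this page, the selected XML record can instead be indented and saved to a local file (the output filename is hypothetical):

# Select by PubMed id, extract the record, indent the XML and save it locally
HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:26999151" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | SxmlIndent > record-26999151.xml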

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024