Exploration server on haptic devices

Warning: this site is under development.
Warning: this site is generated automatically from raw corpora.
The information is therefore not validated.

Can you handle this? The impact of object affordances on how co-speech gestures are produced

Internal identifier: 000707 (Pmc/Checkpoint); previous: 000706; next: 000708

Authors: Ingrid Masson-Carro; Martijn Goudbeek; Emiel Krahmer

Source:

RBID: PMC:4867791

Abstract


Hand gestures are tightly coupled with speech and with action. Hence, recent accounts have emphasised the idea that simulations of spatio-motoric imagery underlie the production of co-speech gestures. In this study, we suggest that action simulations directly influence the iconic strategies used by speakers to translate aspects of their mental representations into gesture. Using a classic referential paradigm, we investigate how speakers respond gesturally to the affordances of objects, by comparing the effects of describing objects that afford action performance (such as tools) and those that do not, on gesture production. Our results suggest that affordances play a key role in determining the amount of representational (but not non-representational) gestures produced by speakers, and the techniques chosen to depict such objects. To our knowledge, this is the first study to systematically show a connection between object characteristics and representation techniques in spontaneous gesture production during the depiction of static referents.


Url:
DOI: 10.1080/23273798.2015.1108448
PubMed: 27226970
PubMed Central: 4867791



Links to Exploration step

PMC:4867791

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Can you handle this? The impact of object affordances on how co-speech gestures are produced</title>
<author>
<name sortKey="Masson Carro, Ingrid" sort="Masson Carro, Ingrid" uniqKey="Masson Carro I" first="Ingrid" last="Masson-Carro">Ingrid Masson-Carro</name>
<affiliation>
<nlm:aff id="AF1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Goudbeek, Martijn" sort="Goudbeek, Martijn" uniqKey="Goudbeek M" first="Martijn" last="Goudbeek">Martijn Goudbeek</name>
<affiliation>
<nlm:aff id="AF1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Krahmer, Emiel" sort="Krahmer, Emiel" uniqKey="Krahmer E" first="Emiel" last="Krahmer">Emiel Krahmer</name>
<affiliation>
<nlm:aff id="AF1"></nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">27226970</idno>
<idno type="pmc">4867791</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4867791</idno>
<idno type="RBID">PMC:4867791</idno>
<idno type="doi">10.1080/23273798.2015.1108448</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000525</idno>
<idno type="wicri:Area/Pmc/Curation">000525</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000707</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Can you handle this? The impact of object affordances on how co-speech gestures are produced</title>
<author>
<name sortKey="Masson Carro, Ingrid" sort="Masson Carro, Ingrid" uniqKey="Masson Carro I" first="Ingrid" last="Masson-Carro">Ingrid Masson-Carro</name>
<affiliation>
<nlm:aff id="AF1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Goudbeek, Martijn" sort="Goudbeek, Martijn" uniqKey="Goudbeek M" first="Martijn" last="Goudbeek">Martijn Goudbeek</name>
<affiliation>
<nlm:aff id="AF1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Krahmer, Emiel" sort="Krahmer, Emiel" uniqKey="Krahmer E" first="Emiel" last="Krahmer">Emiel Krahmer</name>
<affiliation>
<nlm:aff id="AF1"></nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Language, Cognition and Neuroscience</title>
<idno type="ISSN">2327-3798</idno>
<idno type="eISSN">2327-3801</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<title>ABSTRACT</title>
<p>Hand gestures are tightly coupled with speech and with action. Hence, recent accounts have emphasised the idea that simulations of spatio-motoric imagery underlie the production of co-speech gestures. In this study, we suggest that action simulations directly influence the iconic strategies used by speakers to translate aspects of their mental representations into gesture. Using a classic referential paradigm, we investigate how speakers respond gesturally to the affordances of objects, by comparing the effects of describing objects that afford action performance (such as tools) and those that do not, on gesture production. Our results suggest that affordances play a key role in determining the amount of representational (but not non-representational) gestures produced by speakers, and the techniques chosen to depict such objects. To our knowledge, this is the first study to systematically show a connection between object characteristics and representation techniques in spontaneous gesture production during the depiction of static referents.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Barr, D J" uniqKey="Barr D">D. J. Barr</name>
</author>
<author>
<name sortKey="Levy, R" uniqKey="Levy R">R. Levy</name>
</author>
<author>
<name sortKey="Scheepers, C" uniqKey="Scheepers C">C. Scheepers</name>
</author>
<author>
<name sortKey="Tily, H J" uniqKey="Tily H">H. J. Tily</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bartolo, A" uniqKey="Bartolo A">A. Bartolo</name>
</author>
<author>
<name sortKey="Cubelli, R" uniqKey="Cubelli R">R. Cubelli</name>
</author>
<author>
<name sortKey="Della Sala, S" uniqKey="Della Sala S">S. Della Sala</name>
</author>
<author>
<name sortKey="Drei, S" uniqKey="Drei S">S. Drei</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bavelas, J" uniqKey="Bavelas J">J. Bavelas</name>
</author>
<author>
<name sortKey="Healing, S" uniqKey="Healing S">S. Healing</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bavelas, J B" uniqKey="Bavelas J">J. B. Bavelas</name>
</author>
<author>
<name sortKey="Chovil, N" uniqKey="Chovil N">N. Chovil</name>
</author>
<author>
<name sortKey="Lawrie, D A" uniqKey="Lawrie D">D. A. Lawrie</name>
</author>
<author>
<name sortKey="Wade, A" uniqKey="Wade A">A. Wade</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bavelas, J B" uniqKey="Bavelas J">J. B. Bavelas</name>
</author>
<author>
<name sortKey="Gerwing, J" uniqKey="Gerwing J">J. Gerwing</name>
</author>
<author>
<name sortKey="Sutton, C" uniqKey="Sutton C">C. Sutton</name>
</author>
<author>
<name sortKey="Prevost, D" uniqKey="Prevost D">D. Prevost</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Beattie, G" uniqKey="Beattie G">G. Beattie</name>
</author>
<author>
<name sortKey="Shovelton, H" uniqKey="Shovelton H">H. Shovelton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bergman, K" uniqKey="Bergman K">K. Bergman</name>
</author>
<author>
<name sortKey="Kopp, S" uniqKey="Kopp S">S. Kopp</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bub, D N" uniqKey="Bub D">D. N. Bub</name>
</author>
<author>
<name sortKey="Masson, M E J" uniqKey="Masson M">M. E. J. Masson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bub, D N" uniqKey="Bub D">D. N. Bub</name>
</author>
<author>
<name sortKey="Masson, M E J" uniqKey="Masson M">M. E. J. Masson</name>
</author>
<author>
<name sortKey="Bukach, C M" uniqKey="Bukach C">C. M. Bukach</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bub, D N" uniqKey="Bub D">D. N. Bub</name>
</author>
<author>
<name sortKey="Masson, M E J" uniqKey="Masson M">M. E. J. Masson</name>
</author>
<author>
<name sortKey="Cree, G S" uniqKey="Cree G">G. S. Cree</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chu, M" uniqKey="Chu M">M. Chu</name>
</author>
<author>
<name sortKey="Kita, S" uniqKey="Kita S">S. Kita</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chu, M" uniqKey="Chu M">M. Chu</name>
</author>
<author>
<name sortKey="Kita, S" uniqKey="Kita S">S. Kita</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cook, S W" uniqKey="Cook S">S. W. Cook</name>
</author>
<author>
<name sortKey="Tanenhaus, M K" uniqKey="Tanenhaus M">M. K. Tanenhaus</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ellis, R" uniqKey="Ellis R">R. Ellis</name>
</author>
<author>
<name sortKey="Tucker, M" uniqKey="Tucker M">M. Tucker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Feyereisen, P" uniqKey="Feyereisen P">P. Feyereisen</name>
</author>
<author>
<name sortKey="Havard, I" uniqKey="Havard I">I. Havard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fischer, M" uniqKey="Fischer M">M. Fischer</name>
</author>
<author>
<name sortKey="Zwaan, R" uniqKey="Zwaan R">R. Zwaan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Galati, A" uniqKey="Galati A">A. Galati</name>
</author>
<author>
<name sortKey="Brennan, S" uniqKey="Brennan S">S. Brennan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gerlach, C" uniqKey="Gerlach C">C. Gerlach</name>
</author>
<author>
<name sortKey="Law, I" uniqKey="Law I">I. Law</name>
</author>
<author>
<name sortKey="Paulson, O B" uniqKey="Paulson O">O. B. Paulson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gerwing, J" uniqKey="Gerwing J">J. Gerwing</name>
</author>
<author>
<name sortKey="Bavelas, J" uniqKey="Bavelas J">J. Bavelas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gibson, J J" uniqKey="Gibson J">J. J. Gibson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Glenberg, A M" uniqKey="Glenberg A">A. M. Glenberg</name>
</author>
<author>
<name sortKey="Kaschak, M P" uniqKey="Kaschak M">M. P. Kaschak</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Glenberg, A M" uniqKey="Glenberg A">A. M. Glenberg</name>
</author>
<author>
<name sortKey="Robertson, D A" uniqKey="Robertson D">D. A. Robertson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Glover, S" uniqKey="Glover S">S. Glover</name>
</author>
<author>
<name sortKey="Rosenbaum, D A" uniqKey="Rosenbaum D">D. A. Rosenbaum</name>
</author>
<author>
<name sortKey="Graham, J" uniqKey="Graham J">J. Graham</name>
</author>
<author>
<name sortKey="Dixon, P" uniqKey="Dixon P">P. Dixon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goldin Meadow, S" uniqKey="Goldin Meadow S">S. Goldin-Meadow</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hadar, U" uniqKey="Hadar U">U. Hadar</name>
</author>
<author>
<name sortKey="Butterworth, B" uniqKey="Butterworth B">B. Butterworth</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Handy, T C" uniqKey="Handy T">T. C. Handy</name>
</author>
<author>
<name sortKey="Grafton, S T" uniqKey="Grafton S">S. T. Grafton</name>
</author>
<author>
<name sortKey="Shroff, N M" uniqKey="Shroff N">N. M. Shroff</name>
</author>
<author>
<name sortKey="Ketay, S" uniqKey="Ketay S">S. Ketay</name>
</author>
<author>
<name sortKey="Gazzaniga, M S" uniqKey="Gazzaniga M">M. S. Gazzaniga</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hauk, O" uniqKey="Hauk O">O. Hauk</name>
</author>
<author>
<name sortKey="Johnsrude, I" uniqKey="Johnsrude I">I. Johnsrude</name>
</author>
<author>
<name sortKey="Pulvermuller, F" uniqKey="Pulvermuller F">F. Pulvermüller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hoetjes, M" uniqKey="Hoetjes M">M. Hoetjes</name>
</author>
<author>
<name sortKey="Koolen, R" uniqKey="Koolen R">R. Koolen</name>
</author>
<author>
<name sortKey="Goudbeek, M" uniqKey="Goudbeek M">M. Goudbeek</name>
</author>
<author>
<name sortKey="Krahmer, E" uniqKey="Krahmer E">E. Krahmer</name>
</author>
<author>
<name sortKey="Swerts, M" uniqKey="Swerts M">M. Swerts</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hostetter, A B" uniqKey="Hostetter A">A. B. Hostetter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hostetter, A B" uniqKey="Hostetter A">A. B. Hostetter</name>
</author>
<author>
<name sortKey="Alibali, M W" uniqKey="Alibali M">M. W. Alibali</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hostetter, A B" uniqKey="Hostetter A">A. B. Hostetter</name>
</author>
<author>
<name sortKey="Alibali, M W" uniqKey="Alibali M">M. W. Alibali</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hostetter, A B" uniqKey="Hostetter A">A. B. Hostetter</name>
</author>
<author>
<name sortKey="Alibali, M W" uniqKey="Alibali M">M. W. Alibali</name>
</author>
<author>
<name sortKey="Bartholomew, A E" uniqKey="Bartholomew A">A. E. Bartholomew</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jaeger, T F" uniqKey="Jaeger T">T. F. Jaeger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kendon, A" uniqKey="Kendon A">A. Kendon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Krauss, R M" uniqKey="Krauss R">R. M. Krauss</name>
</author>
<author>
<name sortKey="Chen, Y" uniqKey="Chen Y">Y. Chen</name>
</author>
<author>
<name sortKey="Gottesman, R F" uniqKey="Gottesman R">R. F. Gottesman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lederman, S J" uniqKey="Lederman S">S. J. Lederman</name>
</author>
<author>
<name sortKey="Klatzky, R L" uniqKey="Klatzky R">R. L. Klatzky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Louwerse, M M" uniqKey="Louwerse M">M. M. Louwerse</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Louwerse, M M" uniqKey="Louwerse M">M. M. Louwerse</name>
</author>
<author>
<name sortKey="Jeuniaux, P" uniqKey="Jeuniaux P">P. Jeuniaux</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcneill, D" uniqKey="Mcneill D">D. McNeill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Muller, C" uniqKey="Muller C">C. Müller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Parrill, F" uniqKey="Parrill F">F. Parrill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Perniss, P" uniqKey="Perniss P">P. Perniss</name>
</author>
<author>
<name sortKey="Vigliocco, G" uniqKey="Vigliocco G">G. Vigliocco</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pine, K" uniqKey="Pine K">K. Pine</name>
</author>
<author>
<name sortKey="Gurney, D" uniqKey="Gurney D">D. Gurney</name>
</author>
<author>
<name sortKey="Fletcher, B" uniqKey="Fletcher B">B. Fletcher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Streeck, J" uniqKey="Streeck J">J. Streeck</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Streeck, J" uniqKey="Streeck J">J. Streeck</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tettamanti, M" uniqKey="Tettamanti M">M. Tettamanti</name>
</author>
<author>
<name sortKey="Buccino, G" uniqKey="Buccino G">G. Buccino</name>
</author>
<author>
<name sortKey="Saccuman, M C" uniqKey="Saccuman M">M. C. Saccuman</name>
</author>
<author>
<name sortKey="Gallese, V" uniqKey="Gallese V">V. Gallese</name>
</author>
<author>
<name sortKey="Danna, M" uniqKey="Danna M">M. Danna</name>
</author>
<author>
<name sortKey="Scifo, P" uniqKey="Scifo P">P. Scifo</name>
</author>
<author>
<name sortKey="Perani, D" uniqKey="Perani D">D. Perani</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tucker, M" uniqKey="Tucker M">M. Tucker</name>
</author>
<author>
<name sortKey="Ellis, R" uniqKey="Ellis R">R. Ellis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Nispen, K" uniqKey="Van Nispen K">K. Van Nispen</name>
</author>
<author>
<name sortKey="Van De Sandt Koenderman, M" uniqKey="Van De Sandt Koenderman M">M. van de Sandt-Koenderman</name>
</author>
<author>
<name sortKey="Mol, L" uniqKey="Mol L">L. Mol</name>
</author>
<author>
<name sortKey="Krahmer, E" uniqKey="Krahmer E">E. Krahmer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wittenburg, P" uniqKey="Wittenburg P">P. Wittenburg</name>
</author>
<author>
<name sortKey="Brugman, H" uniqKey="Brugman H">H. Brugman</name>
</author>
<author>
<name sortKey="Russel, A" uniqKey="Russel A">A. Russel</name>
</author>
<author>
<name sortKey="Klassmann, A" uniqKey="Klassmann A">A. Klassmann</name>
</author>
<author>
<name sortKey="Sloetjes, H" uniqKey="Sloetjes H">H. Sloetjes</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Lang Cogn Neurosci</journal-id>
<journal-id journal-id-type="iso-abbrev">Lang Cogn Neurosci</journal-id>
<journal-id journal-id-type="archive">PLCP</journal-id>
<journal-id journal-id-type="publisher-id">plcp21</journal-id>
<journal-title-group>
<journal-title>Language, Cognition and Neuroscience</journal-title>
</journal-title-group>
<issn pub-type="ppub">2327-3798</issn>
<issn pub-type="epub">2327-3801</issn>
<publisher>
<publisher-name>Routledge</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">27226970</article-id>
<article-id pub-id-type="pmc">4867791</article-id>
<article-id pub-id-type="publisher-id">1108448</article-id>
<article-id pub-id-type="doi">10.1080/23273798.2015.1108448</article-id>
<article-categories>
<subj-group subj-group-type="article-type">
<subject>Article</subject>
</subj-group>
<subj-group subj-group-type="heading">
<subject>Articles</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Can you handle this? The impact of object affordances on how co-speech gestures are produced</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Masson-Carro</surname>
<given-names>Ingrid</given-names>
</name>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
<xref ref-type="aff" rid="AF1">
<sup>a</sup>
</xref>
<xref ref-type="corresp" rid="cor2">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Goudbeek</surname>
<given-names>Martijn</given-names>
</name>
<xref ref-type="aff" rid="AF1">
<sup>a</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Krahmer</surname>
<given-names>Emiel</given-names>
</name>
<xref ref-type="aff" rid="AF1">
<sup>a</sup>
</xref>
</contrib>
<aff id="AF1">
<label>
<sup>a</sup>
</label>
<institution>
<named-content content-type="institution-name">Tilburg Centre for Cognition and Communication (TiCC), University of Tilburg</named-content>
</institution>
,
<named-content content-type="city">Tilburg</named-content>
,
<country>The Netherlands</country>
</aff>
</contrib-group>
<author-notes>
<corresp id="cor1">
<label>CONTACT </label>
Ingrid Masson-Carro  
<email xlink:href="i.massoncarro@tilburguniversity.edu">i.massoncarro@tilburguniversity.edu</email>
</corresp>
<corresp id="cor2">
<email xlink:href="i.massoncarro@uvt.nl">i.massoncarro@uvt.nl</email>
</corresp>
</author-notes>
<pub-date pub-type="ppub">
<day>15</day>
<month>3</month>
<year>2016</year>
<pmc-comment>string-date: April 2016</pmc-comment>
</pub-date>
<pub-date pub-type="epub">
<day>4</day>
<month>11</month>
<year>2015</year>
</pub-date>
<volume>31</volume>
<issue>3</issue>
<fpage seq="14">430</fpage>
<lpage>440</lpage>
<history>
<date date-type="received">
<day>8</day>
<month>5</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>6</day>
<month>10</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-statement>© 2015 The Author(s). Published by Taylor & Francis.</copyright-statement>
<copyright-year>2015</copyright-year>
<copyright-holder>The Author(s)</copyright-holder>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by-nc-nd/4.0/">
<license-p>This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License (
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by-nc-nd/4.0/">http://creativecommons.org/licenses/by-nc-nd/4.0/</ext-link>
), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited, and is not altered, transformed, or built upon in any way.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:type="simple" xlink:href="plcp-31-430.pdf"></self-uri>
<abstract>
<title>ABSTRACT</title>
<p>Hand gestures are tightly coupled with speech and with action. Hence, recent accounts have emphasised the idea that simulations of spatio-motoric imagery underlie the production of co-speech gestures. In this study, we suggest that action simulations directly influence the iconic strategies used by speakers to translate aspects of their mental representations into gesture. Using a classic referential paradigm, we investigate how speakers respond gesturally to the affordances of objects, by comparing the effects of describing objects that afford action performance (such as tools) and those that do not, on gesture production. Our results suggest that affordances play a key role in determining the amount of representational (but not non-representational) gestures produced by speakers, and the techniques chosen to depict such objects. To our knowledge, this is the first study to systematically show a connection between object characteristics and representation techniques in spontaneous gesture production during the depiction of static referents.</p>
</abstract>
<kwd-group kwd-group-type="author">
<title>KEYWORDS</title>
<kwd>Gesture</kwd>
<kwd>action</kwd>
<kwd>representation techniques</kwd>
<kwd>simulation</kwd>
<kwd>affordances</kwd>
</kwd-group>
<funding-group>
<award-group>
<funding-source>
<named-content content-type="funder-name">Nederlandse Organisatie voor Wetenschappelijk Onderzoek</named-content>
<named-content content-type="funder-identifier">10.13039/501100003246</named-content>
</funding-source>
<award-id>322-89-010</award-id>
</award-group>
<funding-statement>The research reported in this article was financially supported by The Netherlands Organisation for Scientific Research (NWO) [grant number 322-89-010].</funding-statement>
</funding-group>
<counts>
<fig-count count="3"></fig-count>
<table-count count="2"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="49"></ref-count>
<page-count count="11"></page-count>
</counts>
</article-meta>
</front>
<body>
<p>Hand gestures produced in conversation convey meaning that is co-expressive with the content of speech (McNeill,
<xref rid="CIT0041" ref-type="bibr">1992</xref>
). This is particularly true for imagistic or representational gestures (Kendon,
<xref rid="CIT0036" ref-type="bibr">2004</xref>
; McNeill,
<xref rid="CIT0041" ref-type="bibr">1992</xref>
), which depict aspects of the objects or scenes they refer to. For instance, when speaking about an eagle, we may spread our arms away from the body symbolising the wings of the eagle, whereas when referring to a house, we may use our index finger to trace an inverted “v”, symbolising its roof, and if we speak about our new piano, we may mime the action of playing the piano. These examples highlight how different referents may elicit the use of noticeably different gestural representation techniques such as drawing, imitating an action, etc. (Müller,
<xref rid="CIT0042" ref-type="bibr">1998</xref>
). Gestures occurring alongside speech are assumed to be spontaneous, i.e. produced without conscious awareness of the speaker (Goldin-Meadow,
<xref rid="CIT0025" ref-type="bibr">2003</xref>
; McNeill,
<xref rid="CIT0041" ref-type="bibr">1992</xref>
), and speakers seem to combine the use of these iconic strategies effortlessly (and successfully) when describing referents to an interlocutor. Identifying the factors that influence the choice and combination of representation techniques used by speakers to convey meaning is a central (but understudied) issue in gesture research, and one that may shed light on the nature of the conceptual representations that become active at the moment of speaking. Furthermore, speakers do not gesture about every idea they express in speech. While the amount of gestures produced by speakers is influenced by factors such as the communicative context (for instance, speakers often gesture to highlight information that is new for their addressees, Gerwing & Bavelas,
<xref rid="CIT0020" ref-type="bibr">2004</xref>
), it could be the case that certain features of objects are naturally more salient to speakers, and thus more likely to be gestured about. In this paper, we argue that the type of imagery that is activated upon perception of different object characteristics plays a role in determining (a) how frequently speakers gesture, and also (b) what manual techniques they may use in representing referents. Particularly, we focus on the effect of object affordances (i.e. action possibilities that objects allow for, Gibson,
<xref rid="CIT0021" ref-type="bibr">1986</xref>
) as a possible gesture predictor.</p>
<sec id="S002">
<title>Affordances, object recognition, and language production</title>
<p>Affordances (Gibson,
<xref rid="CIT0021" ref-type="bibr">1986</xref>
) have been defined as potential actions that objects and other entities allow for. For example, a handle affords gripping, just like a doorknob affords twisting or a button affords pressing. According to Gibson (
<xref rid="CIT0021" ref-type="bibr">1986</xref>
), humans are predisposed to pay attention to the affordances of objects. This attentional bias towards graspable or manipulable objects (see, e.g. Handy, Grafton, Shroff, Ketay, & Gazzaniga,
<xref rid="CIT0028" ref-type="bibr">2003</xref>
) has led researchers to study the role of action affordances as facilitators of object recognition and categorisation, mainly using neuroimaging techniques and visuomotor priming paradigms. These studies have revealed activation in the premotor areas of the brain (presumably involved in the planning of movement), when participants are presented with manipulable objects during the completion of categorisation tasks (e.g. Gerlach, Law, & Paulson,
<xref rid="CIT0019" ref-type="bibr">2002</xref>
), laterality effects in motor response to affordance perception (e.g. Tucker & Ellis,
<xref rid="CIT0049" ref-type="bibr">1998</xref>
), and handshape-affordance congruency effects (e.g. Bub, Masson, & Bukach,
<xref rid="CIT0009" ref-type="bibr">2003</xref>
; Ellis & Tucker,
<xref rid="CIT0015" ref-type="bibr">2000</xref>
). Most importantly, these experiments challenge the view that motor planning requires a conscious intention to act.</p>
<p>Object affordances have also been acknowledged to influence language comprehension (Glenberg & Robertson,
<xref rid="CIT0023" ref-type="bibr">2000</xref>
; for a review see Fischer & Zwaan,
<xref rid="CIT0017" ref-type="bibr">2008</xref>
). In an experiment in which participants had to make sensibility judgements (i.e. identifying whether a sentence is sensible or not), Glenberg and Kaschak (
<xref rid="CIT0022" ref-type="bibr">2002</xref>
) detected a compatibility effect between grammatical constructions and action understanding. Sentences such as “Andy delivered the pizza to you” were judged faster if the motion performed by the participant during the task (e.g. towards or away from body) would match the direction implied by the sentence. This facilitation effect suggests that processing language entails a certain degree of motor simulation (but note that other accounts have attributed these effects to linguistic, and not necessarily embodied, factors—see, for instance, Louwerse,
<xref rid="CIT0039" ref-type="bibr">2011</xref>
; or Louwerse & Jeuniaux,
<xref rid="CIT0040" ref-type="bibr">2010</xref>
, for further discussion). Strengthening these findings, several neuroimaging studies have shown that listening to sentences describing actions triggers the activation of the premotor brain areas related to the body parts involved in such actions (Hauk, Johnsrude, & Pulvermüller,
<xref rid="CIT0029" ref-type="bibr">2004</xref>
; Tettamanti et al.,
<xref rid="CIT0048" ref-type="bibr">2005</xref>
). Similarly, reading the names of objects that can be grasped (e.g. a grape) or manipulated (e.g. pliers) triggers simulations of grasping and of specific hand configurations (Bub, Masson, & Cree,
<xref rid="CIT0010" ref-type="bibr">2008</xref>
; Glover, Rosenbaum, Graham, & Dixon,
<xref rid="CIT0024" ref-type="bibr">2004</xref>
).</p>
<p>In sum, the finding that the processing of action-related visual stimuli and language can evoke appropriate motor responses is relevant for the field of gesture studies: it is conceivable that such affordance-evoked motor responses may be partly responsible for the production of co-speech representational gestures, as has been recently suggested by Hostetter and Alibali (
<xref rid="CIT0032" ref-type="bibr">2008</xref>
).</p>
</sec>
<sec id="S003">
<title>Affordances and gestures</title>
<p>Gesture and speech seem suited to convey different types of information (Beattie & Shovelton,
<xref rid="CIT0006" ref-type="bibr">2002</xref>
; Cook & Tanenhaus,
<xref rid="CIT0014" ref-type="bibr">2009</xref>
). Gestures occur often with content that is highly imageable (Hadar & Butterworth,
<xref rid="CIT0027" ref-type="bibr">1997</xref>
), and particularly so when speakers depict events that underlie spatial and motoric information (Chu & Kita,
<xref rid="CIT0012" ref-type="bibr">2008</xref>
; Feyereisen & Havard,
<xref rid="CIT0016" ref-type="bibr">1999</xref>
; Hostetter & Alibali,
<xref rid="CIT0032" ref-type="bibr">2008</xref>
). In these cases, gestures might help get across a meaning that is hard to encode linguistically but that is relatively easy to visualise. For example, Feyereisen and Havard (
<xref rid="CIT0016" ref-type="bibr">1999</xref>
) conducted a series of interviews where they asked specific questions to elicit the activation of motor imagery (e.g. could you explain how to change the wheel of a car or to repair the tire of a bicycle?), of visual imagery (e.g. could you describe your favourite painting or sculpture?), or of no imagery (e.g. do you think more women should go into politics?). They found that speakers produced the highest amount of gestures when speaking about information related to action, and the lowest amount of gestures when speaking about abstract topics that, in principle, did not evoke imagery directly. Indeed, gestures are often depictive of (one's own) motoric experiences, and we could say that the gestures we perform daily reveal something about how we have acquired knowledge (Streeck,
<xref rid="CIT0047" ref-type="bibr">2009</xref>
).</p>
<p>In light of findings such as the above, Hostetter and Alibali propose their Gestures as Simulated Action (GSA) framework (Hostetter & Alibali,
<xref rid="CIT0032" ref-type="bibr">2008</xref>
). This framework contends that the gestures that speakers produce stem from the perceptual and motor simulations that underlie thinking and speaking. According to the GSA, one of the chief factors that determine whether a gesture will be produced by a speaker is the strength of activation of the simulated action (p. 503). This rests on the assumption that different types of mental imagery can be organised along a continuum determined by the extent to which they are tied to action simulation. In practice, this implies that simulations of motor imagery (e.g. a person imagines herself performing an action) and of spatial imagery (e.g. a person imagines what an object will look like if perceived from a different angle) have a stronger action component than simulations of visual imagery (e.g. a person mentally visualises a famous painting in detail), and will culminate into higher representational gesture rates.</p>
<p>Two studies investigated the differences in gesture rate when speakers were induced to simulate motor and spatial imagery, as compared with a visual imagery control condition (Hostetter & Alibali,
<xref rid="CIT0033" ref-type="bibr">2010</xref>
; Hostetter, Alibali, & Bartholomew,
<xref rid="CIT0034" ref-type="bibr">2011</xref>
). Hostetter and Alibali (
<xref rid="CIT0033" ref-type="bibr">2010</xref>
) showed that speakers gestured more while describing visual patterns that they had manually constructed with matches than while describing patterns they had only viewed. In the second study, Hostetter et al. (
<xref rid="CIT0034" ref-type="bibr">2011</xref>
) presented speakers with sets of arrow patterns, and asked them to describe the patterns either in the position in which they were presented, or imagining them as they would appear if they were rotated. In this case, too, higher gesture rates were observed when speakers had to simulate rotation, as opposed to when they directly viewed the patterns. Thus, both studies supported the notion that co-speech gestures are produced more frequently following spatial or motoric simulations. Nevertheless, in both studies, speakers still gestured to a fair extent in the (no simulation) control conditions. The authors suggest that visual imagery may in some cases trigger a certain degree of action simulation. For example, in Hostetter and Alibali (
<xref rid="CIT0033" ref-type="bibr">2010</xref>
), participants might have simulated the action of arranging the matches by hand to form the visual patterns they attended to. Similarly, in Hostetter et al. (
<xref rid="CIT0034" ref-type="bibr">2011</xref>
), the stimuli consisted of arrows, which may thus have generated simulations of motion. Taking this into account, it becomes apparent that a clear-cut distinction cannot be made between types of mental imagery, with various types of imagery sometimes becoming simultaneously active.</p>
<p>The first question that we address in this paper relates to whether the perception of objects with a manual affordance (such as tools) will elicit simulations of object use and, hence, result in higher gesture rates. Typically, the perception of static scenes where no animate character or actor is involved should activate simulations of visual imagery, but the motor cognition literature has extensively shown that viewing objects with affordances may generate simulations of object manipulation and object use (e.g. Bub et al.,
<xref rid="CIT0009" ref-type="bibr">2003</xref>
; Ellis & Tucker,
<xref rid="CIT0015" ref-type="bibr">2000</xref>
; Glover et al.,
<xref rid="CIT0024" ref-type="bibr">2004</xref>
). A handful of recent studies have asked whether objects that afford action performance elicit higher gesture rates during description tasks similar to the experiment reported in the present study (Hostetter,
<xref rid="CIT0031" ref-type="bibr">2014</xref>
; Pine, Gurney, & Fletcher,
<xref rid="CIT0045" ref-type="bibr">2010</xref>
) but also during a mental rotation task and a subsequent motion depiction task (Chu & Kita,
<xref rid="CIT0013" ref-type="bibr">2015</xref>
). In an experiment designed to examine the intrapersonal function of gestures, Pine et al. (
<xref rid="CIT0045" ref-type="bibr">2010</xref>
) presented speakers with pictures of praxic (e.g. scissors, stapler) and non-praxic objects (e.g. fence, chicken), and measured their gesture rates while describing these objects to a listener under different visibility conditions. Their results showed that people produced more gestures in trials corresponding to praxic objects, regardless of whether they could directly see their addressee or not. Using a similar paradigm, Hostetter (
<xref rid="CIT0031" ref-type="bibr">2014</xref>
) asked speakers to describe a series of nouns, and found more gesturing accompanying the descriptions of the items that had been rated highest in a scale of manipulability, also regardless of visibility. Both studies conclude that the likelihood of producing representational gestures is co-determined by the semantic properties of the words they accompany—specifically, by the motoric component evoked by such words.</p>
<p>While these findings are suggestive, both studies have some limitations which we try to address in the current paper. First of all, in both studies, participants were not allowed to name the objects being described. It is likely that this type of instruction may have biased the speakers’ descriptions towards including information about the function of objects when possible, perhaps as the easiest communicative strategy to describe objects. This would make questionable the extent to which speakers gesture more about manipulable objects because of the action simulation that may underlie the representation of such objects, perhaps arguing in favour of an account where function is simply a more salient (and easier to gesturally depict) attribute, that leads to more successful identification.</p>
<p>Secondly, both studies provide no data about the occurrence of other non-representational gesture types (e.g. rhythmic gestures such as beats) in relation to manipulable objects. While it is true that both the study by Pine et al. (
<xref rid="CIT0045" ref-type="bibr">2010</xref>
) and the GSA (Hostetter,
<xref rid="CIT0031" ref-type="bibr">2014</xref>
; Hostetter & Alibali,
<xref rid="CIT0032" ref-type="bibr">2008</xref>
) are specific to representational gestures, it may be the case that the activation evoked by descriptions with a strong action component is not restricted to the production of representational gestures, but that it primes gesturing in general. This could support what we may term a general activation account, by means of which the motoric activation evoked by action-related language may lower the speaker's gesture threshold (Hostetter & Alibali,
<xref rid="CIT0032" ref-type="bibr">2008</xref>
, p. 503) enough to allow for other hand movements to be produced. However, whether both representational and non-representational gestures depend on the same threshold height is not specified in any gesture model to date, and remains to be investigated.</p>
<p>A recent study by Chu and Kita (
<xref rid="CIT0013" ref-type="bibr">2015</xref>
) extends previous research by suggesting that gestures may arise in response to action potential independently of the content of speech, as evidenced by the increase in the number of gestures both while solving a mental rotation task (“co-thought” gestures) and during depictions of motion events (co-speech gestures), where the affordance component of the object presented (in this case, mugs with handles) was task-irrelevant. Furthermore, their study featured a condition in which the affordances of the mugs were obscured, by presenting participants with mugs covered in spikes (minimising grasping potential). In both co-speech and co-thought conditions, participants were less likely to gesture about the mugs in the spiky condition, exposing a fine-grained sensitivity to the affordance of objects in speakers, even when these are task-irrelevant.</p>
<p>So far the few studies that have examined gesture production about objects that afford action performance have mostly looked at the frequency of gesturing. However, gesture rate may not be the only aspect of gesture production influenced by perceiving affordances. Here, we argue that the representation technique chosen to depict a referent (e.g. Kendon,
<xref rid="CIT0036" ref-type="bibr">2004</xref>
; Müller,
<xref rid="CIT0042" ref-type="bibr">1998</xref>
; Van Nispen, van de Sandt-Koenderman, Mol, & Krahmer,
<xref rid="CIT0050" ref-type="bibr">2014</xref>
; Streeck,
<xref rid="CIT0046" ref-type="bibr">2008</xref>
,
<xref rid="CIT0047" ref-type="bibr">2009</xref>
) might be susceptible to such influence too. If we think of representational gestures as being abstract materialisations of (selective) mental representations that are active at the moment of speaking, one can think that the techniques chosen to represent these images may reveal something about the nature and quality of the information being simulated by a speaker. Müller (
<xref rid="CIT0042" ref-type="bibr">1998</xref>
) recognises four main representation modes employed by speakers in the construction of meaning. These gestures are perceivably different, and imply varying degrees of abstraction with respect to the referent they represent. These modes include
<italic>imitation</italic>
, which is by and large the most common technique associated to first-person (enacting) gestures, and consists of miming actions associated to an object;
<italic>portrayal</italic>
, where the hand represents an object or character, for example the hand pretending to be a gun;
<italic>drawing</italic>
, where a speaker traces a contour, typically with an extended finger; and
<italic>moulding</italic>
, where the speaker moulds a shape in the air, as if palpating it. Very little is known about what drives the use of one technique over another and, in general, about what determines the physical form that representational gestures adopt (Bavelas, Gerwing, Sutton, & Prevost,
<xref rid="CIT0005" ref-type="bibr">2008</xref>
; Krauss, Chen, & Gottesman,
<xref rid="CIT0037" ref-type="bibr">2000</xref>
).</p>
<p>One factor known to influence gestural representation modes is action observation (e.g. seeing a character perform an action, Parrill,
<xref rid="CIT0043" ref-type="bibr">2010</xref>
) or action performance (e.g. Cook & Tanenhaus,
<xref rid="CIT0014" ref-type="bibr">2009</xref>
). For instance, Cook and Tanenhaus (
<xref rid="CIT0014" ref-type="bibr">2009</xref>
) had speakers solve the Tower of Hanoi problem and describe its solution to a listener. Solving this task consists of moving a stack of disks from one peg to another one, using an auxiliary middle peg. Half of the speakers performed the task with real disks, whereas the other half performed the task on the computer, by dragging the disks with the mouse. While no changes were observed in the speech and number of gestures in these two conditions, gestures were qualitatively different. When speakers had performed the actions with real disks, they were more likely to use grasping handshapes, i.e. imitating the action that they just performed. Speakers who solved the task on the computer tended to use drawing gestures, i.e. tracing the trajectory of the mouse on the screen. This suggests that the type of action simulation may have an impact on the particular representation techniques used by speakers. However, it could also be that these results stem from priming effects, whereby speakers simply “reproduced” the action they had just performed.</p>
<p>Chu and Kita (
<xref rid="CIT0013" ref-type="bibr">2015</xref>
) also suggest a connection between affordance and representation technique. Although their study only included one object type (mugs), their results show that speakers were more likely to use grasping gestures to solve the rotation task when the mugs were presented with a smooth surface (affordance enhanced) as opposed to when the mugs appeared covered in spikes (affordance obscured). Hence, both of these studies highlight the importance of investigating not only the number of gestures produced by speakers, but also the form these gestures take, if we are really to understand why we produce gestures at all—as has been emphasised by recent studies on gesture production (e.g. Bavelas & Healing,
<xref rid="CIT0003" ref-type="bibr">2013</xref>
; Galati & Brennan,
<xref rid="CIT0018" ref-type="bibr">2014</xref>
; Hoetjes, Koolen, Goudbeek, Krahmer, & Swerts,
<xref rid="CIT0030" ref-type="bibr">2015</xref>
). Limiting ourselves to annotating the number of gestures produced can be compared to doing speech studies in which only the number of words—but not the content of speech—is analysed.</p>
<sec id="S003-S2001">
<title>The present study</title>
<p>In sum, it seems that action simulation plays a role in eliciting gesture production, with recent studies suggesting that higher gesture rates may be evoked by visual inspection of objects that afford action performance, such as tools. Nevertheless, previous research has mainly focussed on analysing gesture rates; therefore, we have little knowledge of how object characteristics influence the strategies that gesturers employ in communicating about them.</p>
<p>The aim of this study is to assess the effects of perceiving objects with different (high and low) affordance degrees, on the production of speech-accompanying gestures during a communication task, focussing on the gestural techniques employed by speakers in the representation of objects. We predict that affordance will determine the number of gestures produced by speakers, with more gestures accompanying the descriptions of manipulable objects, in line with previous research (Chu & Kita,
<xref rid="CIT0013" ref-type="bibr">2015</xref>
; Hostetter,
<xref rid="CIT0031" ref-type="bibr">2014</xref>
; Pine et al.,
<xref rid="CIT0045" ref-type="bibr">2010</xref>
). Currently, the predictions made by the GSA (Hostetter & Alibali,
<xref rid="CIT0032" ref-type="bibr">2008</xref>
) are specific to representational gestures. In this study, we will also annotate the occurrence of non-representational gestures. On the one hand, it is conceivable, given that gestures are seen as outward manifestations of specific imagery simulations, that only the number of representational gestures is influenced by our condition. On the other hand, however, it could be possible that the activation evoked by action-related language primes the production of hand gestures in general, including non-representational types.</p>
<p>When we look specifically at the presentation of gestures, we expect that communicating about objects that afford actions will trigger more imitation gestures (e.g. where the speaker mimes the function associated with the object) than tracing or moulding gestures (e.g. where the speaker traces or “sculpts” an object's shape), given that the gestures should reflect the type of imagery being simulated at the moment of speaking. Conversely, we do not expect the occurrence of imitation gestures accompanying descriptions of objects that are non-manipulable (although they can occur—e.g. pretending to eat, when describing a table), but a predominance of moulding or tracing gestures.</p>
</sec>
</sec>
<sec id="S004">
<title>Method</title>
<sec id="S004-S2001">
<title>Participants</title>
<p>Eighty undergraduate students from Tilburg University (
<italic>M</italic>
 = 21; SD = 2; 50 female) took part in this experiment, in exchange for course credit points. All participants were native speakers of Dutch, and carried out the experimental task in pairs.</p>
</sec>
<sec id="S004-S2002">
<title>Material and apparatus</title>
<p>Our stimuli set was composed of pictures of 28 objects: 14 with a high-affordance degree (e.g. whisk), and 14 with a low-affordance degree (e.g. plant) (see
<xref rid="T0001" ref-type="table">Appendix 1</xref>
for the complete list of objects). We defined objects with a high-affordance degree simply as
<italic>manipulable</italic>
objects operated exclusively with the hands, whose operation may induce a change in the physical world. For instance, the use of a pair of scissors typically results in the division of a sheet of paper into smaller units. Conversely,
<italic>non-manipulable</italic>
objects could not be directly operated using the hands, and we minimised the possibility for any object in our dataset to induce motor simulation. For instance, if an object might contain handles or knobs, we either chose a visual instance of the object without such features, or the features were digitally erased from the picture.</p>
<p>To validate the stimuli, we conducted a pre-test where we asked questions about the objects to 25 Dutch-speaking naïve judges uninvolved in the actual experiment, using Crowdflower (an online crowdsourcing service;
<ext-link ext-link-type="uri" xlink:href="http://www.crowdflower.com/">http://www.crowdflower.com/</ext-link>
). In this questionnaire, participants were asked to name each object in Dutch (we later computed whether the name was correct, and assigned it either a 0—incorrect or 1—correct), and also rated the manipulability and the degree of perceived visual complexity of each object on a scale from 0 to 100 (0 being the least manipulable/complex and 100 the most). Our aim was to make sure that participants could name the objects correctly in Dutch, and that these objects were rated similarly in visual complexity, to ensure that the speakers’ gesturing rate would not be affected by anything other than our affordance manipulation.</p>
<p>The percentage of correctly named objects ranged between 90% and 100% for the selected items (
<italic>M</italic>
<sub>HIGH</sub>
 = 94.35, SD = 2.24,
<italic>M</italic>
<sub>LOW</sub>
 = 93.14, SD = 2.14), and fell below 35% for their perceived visual complexity (
<italic>M</italic>
<sub>HIGH</sub>
 = 29.01, SD = 2.46,
<italic>M</italic>
<sub>LOW</sub>
 = 26.74, SD = 2.39). Most importantly, the scores did not differ between the high- and low-affordance items for complexity (
<italic>t</italic>
(24) = 1.51,
<italic>p</italic>
 = .14). The manipulability ratings differed significantly between the high- and low-affordance groups, as intended (
<italic>M</italic>
<sub>HIGH</sub>
 = 74.47, SD = 11.96,
<italic>M</italic>
<sub>LOW</sub>
 = 41.4, SD = 21.42) (
<italic>t</italic>
(24) = 9.53,
<italic>p</italic>
 < .001).</p>
</sec>
<sec id="S004-S2003">
<title>Procedure</title>
<p>The experiment introduced participants to a fictive scenario in which participant A (the speaker) was relocating, but due to an injury could not go by himself to the department store to buy utensils and furniture. Participant B (the listener) would go in his place, but for this to be possible they would have to agree beforehand on the items to be purchased. Thus, the speaker's task was to briefly describe each of the items, in such a way that the listener would be able to visually identify them. The stimuli that the speaker would describe were displayed on a 13 in. laptop screen, placed on a table to the left side of the speaker. All picture items were compiled into a presentation document, where high- and low-affordance objects were mixed at random. Each object fully occupied the screen. Each object was preceded by a slide indicating the trial number (see
<xref rid="F0001" ref-type="fig">Figure 1</xref>
), to ease the coordination between the speaker and the listener's tasks. The listener was given a paper brochure, in which pictures of all objects appeared forming a grid, each item accompanied by a letter. Next to it, the listener was given an answer sheet with two columns: one with the trial numbers, and the other with blanks to fill in the letters corresponding to the items described. Thus, the listener's task was to identify each object in the brochure she was given, and annotate the letter corresponding to such object on her answer sheet.
<fig id="F0001" orientation="portrait" position="float">
<label>Figure 1. </label>
<caption>
<p>Example of the stimuli presentation as seen by the speaker. Each object is embedded in one slide, occupying it fully, always preceded by a slide presenting the item number.</p>
</caption>
<graphic xlink:href="plcp_a_1108448_f0001_c"></graphic>
</fig>
</p>
<p>Each pair received written instructions, and had the chance to do a practice round before the actual experiment began, with an item that was not part of the stimuli set. Speakers and listeners were allowed to speak freely, and had no restrictions with respect to the way they designed their descriptions—for example, naming the objects was not prohibited. A digital video camera was placed behind the listener, to record the speaker's speech and gestures.</p>
</sec>
<sec id="S004-S2004">
<title>Data analyses</title>
<p>We transcribed all words produced by the speakers (until the listener would write down her response) and annotated all gestures, using the multimodal annotation tool Elan (Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands,
<ext-link ext-link-type="uri" xlink:href="http://www.lat-mpi.eu/tools/elan">http://www.lat-mpi.eu/tools/elan</ext-link>
; Wittenburg, Brugman, Russel, Klassmann, & Sloetjes,
<xref rid="CIT0051" ref-type="bibr">2006</xref>
). We categorised gestures as representational and non-representational gestures. Representational gestures were defined as hand movements depicting information related to the semantic content of the ongoing speech. Examples of such gestures are tracing the contour of a house with the index finger, or repeatedly pushing down the air with the palm, simulating the bouncing of a basketball. The non-representational gestures mainly comprised rhythmic gestures used to emphasise words (beats—McNeill,
<xref rid="CIT0041" ref-type="bibr">1992</xref>
), and interactive or pragmatic gestures directed at the addressee (Bavelas, Chovil, Lawrie, & Wade,
<xref rid="CIT0004" ref-type="bibr">1992</xref>
; Kendon,
<xref rid="CIT0036" ref-type="bibr">2004</xref>
). We excluded from our annotation other non-verbal behaviours such as self-adaptors (e.g. fixing one's hair). Each gesture was annotated in its full length, from the preparation to the retraction phase (see McNeill,
<xref rid="CIT0041" ref-type="bibr">1992</xref>
). When a gesture stroke was immediately followed by a new gesture, we examined the fragment frame-by-frame, and set the partition at the exact moment where a change in hand shape, or movement type would take place.</p>
<p>Next, we annotated the techniques observed in the speakers’ gestures. Representation technique was coded only for representational gestures, assigning always one technique to each gesture. We took as our point of departure Müller's four representation modes—imitating, drawing, portraying, and moulding (Müller,
<xref rid="CIT0042" ref-type="bibr">1998</xref>
), and expanded the list, further sub-categorising some representation modes, based on the gestures we observed in our dataset after screening the first five videos, and adding an extra category: placing (see, e.g. Bergmann & Kopp,
<xref rid="CIT0007" ref-type="bibr">2009</xref>
). A detailed overview of the techniques annotated can be found in
<xref rid="T0002" ref-type="table">Appendix 2</xref>
. While it is true that some representation modes are often associated to specific handshapes (for example, moulding is oftentimes associated with flat handshapes, and tracing is often performed with a single stretched finger), our main criterion in coding these representation modes was to ask “how the hands are used symbolically” (Müller,
<xref rid="CIT0042" ref-type="bibr">1998</xref>
, p. 323).</p>
<p>To validate the reliability of the annotations, a second coder, naïve to the experimental conditions and hypotheses, performed gesture identification in 40 descriptions (produced by 8 different speakers), and judged the representation technique used in a sample of 60 gestures produced by 12 different speakers (5 gestures per speaker). In total, 146 gestures from 20 different speakers (9.8% of all annotated gestures) were analysed by the second coder. Cohen's
<italic>κ</italic>
revealed substantial agreement with respect to the number of gestures produced by speakers (
<italic>κ</italic>
 = .71,
<italic>p</italic>
 < .001), and an almost perfect agreement with respect to the representation techniques (
<italic>κ</italic>
 = .84,
<italic>p</italic>
 < .001).</p>
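<p>As an illustration of this agreement measure (not part of the original analysis), the following minimal Python sketch computes Cohen's kappa for two coders' technique labels; the label lists are invented placeholders and the scikit-learn function is assumed to be available.</p>
<preformat>
# Minimal sketch: Cohen's kappa for two coders' representation-technique labels.
# The labels below are invented placeholders, not data from this study.
from sklearn.metrics import cohen_kappa_score

coder1 = ["object_use", "moulding", "grip", "tracing", "moulding", "placing"]
coder2 = ["object_use", "moulding", "grip", "moulding", "moulding", "placing"]

kappa = cohen_kappa_score(coder1, coder2)
print(f"Cohen's kappa = {kappa:.2f}")
</preformat>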
</sec>
<sec id="S004-S2005">
<title>Design and statistical analyses</title>
<p>The effects of affordance on our dependent variables were assessed using linear mixed models for continuous variables (i.e. gesture rates), and logit mixed models for categorical variables (i.e. representation techniques) (see Jaeger,
<xref rid="CIT0035" ref-type="bibr">2008</xref>
). Mixed-effect models allow us to account for fixed as well as random effects in our data simultaneously, thereby optimising the generalisability of our results and eliminating the need to conduct separate F1 and F2 analyses. Thus, “affordance” (two levels: high and low) was the fixed factor in all of our analyses, and participants and items were included as random factors. In all cases, we started with a full random effects model (following the recommendation by Barr, Levy, Scheepers, & Tily,
<xref rid="CIT0001" ref-type="bibr">2013</xref>
). In case the model did not converge, we eliminated the random slopes with the lowest variance.
<italic>P</italic>
values were estimated using the Likelihood Ratio Test, contrasting, for each dependent variable, the fit of our (alternative) model with the fit of the null model.</p>
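<p>As a minimal sketch of this modelling strategy (not the authors' analysis script), the Python code below fits a simplified linear mixed model with statsmodels and derives a p value from a likelihood ratio test against the null model. The file and column names are hypothetical, and for brevity only a random intercept for participants is included, whereas the reported analyses used crossed random effects for participants and items.</p>
<preformat>
# Simplified sketch of the mixed-model / likelihood-ratio-test procedure.
# Hypothetical data frame with columns: rate, affordance, participant.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

df = pd.read_csv("gesture_rates.csv")  # hypothetical file name

# Fit with maximum likelihood (reml=False) so the two models can be compared.
null = smf.mixedlm("rate ~ 1", df, groups=df["participant"]).fit(reml=False)
full = smf.mixedlm("rate ~ affordance", df, groups=df["participant"]).fit(reml=False)

# Likelihood ratio test of the affordance effect (1 degree of freedom).
lr = 2 * (full.llf - null.llf)
p_value = chi2.sf(lr, df=1)
print(f"LR = {lr:.2f}, p = {p_value:.3f}")
</preformat>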
</sec>
</sec>
<sec id="S005">
<title>Results</title>
<p>The communication task elicited 1120 descriptions, 509 of which were accompanied by at least one gesture. A total of 1483 gestures were identified. Representational gestures accounted for 72% (1070) of the gestures annotated, the remaining 28% (413) consisting of non-representational gestures. Our first research question was concerned with whether perceiving objects that afford manual actions would result in the production of more gestures. We computed a
<italic>normalised</italic>
gesture rate measure, whereby the number of gestures produced per description is calculated relative to the number of words spoken (Gestures/Words × 100). Trials where no gestures were produced (null trials) were excluded, but in such a way that the ratio of descriptions for high and low manipulability objects for each speaker was preserved. Thus, we only excluded null trials for one condition if the same number of null trials could be excluded for the other condition, leading to the examination of gestures in 572 descriptions (286 per condition). We did this in order to reduce the variance in our dataset caused by the number of 0-gesture trials, without either losing data or compromising our results. We computed the gesture rate twice, first for representational gestures and then for non-representational gestures. The results show that
<italic>affordance</italic>
influenced the representational gesture rate, which was higher for high-affordance objects (
<italic>M</italic>
<sub>HIGH</sub>
 = 9.76, SD = 12.53) than for low-affordance objects (
<italic>M</italic>
<sub>LOW</sub>
 = 6.47, SD = 7.82) (
<italic>β</italic>
 = −2.91, SE = 1.17,
<italic>p</italic>
 = .004). However, we found no effects of
<italic>affordance</italic>
on the non-representational gesture rate, which did not differ between manipulable (
<italic>M</italic>
<sub>HIGH</sub>
 = 3.83, SD = 6.33) and non-manipulable objects (
<italic>M</italic>
<sub>LOW</sub>
 = 3.71, SD = 6.01) (
<italic>β</italic>
 = −.15, SE = 0.74,
<italic>p</italic>
 = .72).</p>
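<p>For clarity, the normalised rate described above amounts to a one-line computation per description; the sketch below uses invented counts purely for illustration.</p>
<preformat>
# Normalised gesture rate: gestures per 100 words for a single description.
def gesture_rate(n_gestures: int, n_words: int) -> float:
    return n_gestures / n_words * 100

# Invented example: 3 gestures over 25 words gives 12.0 gestures per 100 words.
print(gesture_rate(3, 25))
</preformat>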
<p>Given that gesture rate is also dependent on the number of words produced by a speaker, it could be the case that the number of words is also sensitive to
<italic>affordance</italic>
, which could in turn have influenced gesture rate. Hence, we computed the effects of
<italic>affordance</italic>
on the number of words uttered by speakers, and found no statistically supported differences between manipulable (
<italic>M</italic>
<sub>HIGH</sub>
 = 23.29, SD = 14.85) and non-manipulable objects (
<italic>M</italic>
<sub>LOW</sub>
 = 24.41, SD = 15.2) (
<italic>β</italic>
 = .58, SE = 2.78,
<italic>p</italic>
 = .1).</p>
<p>In summary, our results suggest that speakers do gesture more when faced with an object that they can manipulate with their hands, but this effect is restricted to the production of representational gestures (
<xref rid="F0002" ref-type="fig">Figure 2</xref>
).
<fig id="F0002" orientation="portrait" position="float">
<label>Figure 2. </label>
<caption>
<p>Gesture rates for non-representational gestures (left) and representational gestures (right). The bars represent the mean number of gestures per 100 words, and the error bars represent the 95% confidence intervals. **Significant at
<italic>p</italic>
 < .005.</p>
</caption>
<graphic xlink:href="plcp_a_1108448_f0002_b"></graphic>
</fig>
</p>
<sec id="S005-S2001">
<title>Analysis of representation techniques</title>
<p>Our results support the prediction that describing objects that afford manual action would elicit more gestures where the speaker pretended to execute the action associated with the object (
<italic>β</italic>
 = −4.46, SE = 0.93,
<italic>p</italic>
 < .001) (
<italic>M</italic>
<sub>HIGH</sub>
 = .39, SD = .48;
<italic>M</italic>
<sub>LOW</sub>
 = .02, SD = .15), or pretended to handle (grip) such object (
<italic>β</italic>
 = −3.34, SE = 1.17,
<italic>p</italic>
 < .001) (
<italic>M</italic>
<sub>HIGH</sub>
 = .16, SD = .37;
<italic>M</italic>
<sub>LOW</sub>
 = .02, SD = .14). In contrast, for objects in the low-affordance condition, speakers typically made use of moulding gestures in which the hands sculpted the object's shape (
<italic>β</italic>
 = 1.76, SE = 0.34,
<italic>p</italic>
 < .001) (
<italic>M</italic>
<sub>HIGH</sub>
 = .28, SD = .45;
<italic>M</italic>
<sub>LOW</sub>
 = .66, SD = .47), and of placing gestures where the hands expressed the spatial relation between different features of an object (
<italic>β</italic>
 = 3.007, SE = 1.04,
<italic>p</italic>
 < .001) (
<italic>M</italic>
<sub>HIGH</sub>
 = .02, SD = .14;
<italic>M</italic>
<sub>LOW</sub>
 = .13, SD = .33) (see
<xref rid="F0003" ref-type="fig">Figure 3</xref>
).
<fig id="F0003" orientation="portrait" position="float">
<label>Figure 3. </label>
<caption>
<p>Frequency of use of each representation technique (annotated only for representational gestures). The error bars represent the 95% confidence intervals. ***Significant at
<italic>p</italic>
 < .001.</p>
</caption>
<graphic xlink:href="plcp_a_1108448_f0003_b"></graphic>
</fig>
</p>
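<p>The technique outcomes reported above were analysed with logit mixed models. As a rough Python analogue (not the authors' implementation, which a frequentist logit mixed model such as lme4's glmer would match more closely), the sketch below uses statsmodels' Bayesian binomial mixed GLM with random intercepts for participants and items; the file and column names are hypothetical.</p>
<preformat>
# Rough analogue of a logit mixed model for one binary technique outcome
# (e.g. whether a gesture was coded as object use). Columns are hypothetical:
# object_use (0/1), affordance (high/low), participant, item.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("gesture_techniques.csv")  # hypothetical file name

model = BinomialBayesMixedGLM.from_formula(
    "object_use ~ affordance",              # fixed effect of affordance
    {"participant": "0 + C(participant)",   # random intercepts for speakers
     "item": "0 + C(item)"},                # random intercepts for objects
    data=df)
result = model.fit_vb()                     # variational Bayes estimation
print(result.summary())
</preformat>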
</sec>
</sec>
<sec id="S006">
<title>Discussion</title>
<p>The experiment reported in this paper was designed to examine the impact of a core property of objects, namely their degree of
<italic>action affordance</italic>
, on the production of co-speech gestures. Particularly, we sought to elucidate (a) whether perceiving objects that afford manual actions (without attending to explicit action demonstrations) sufficed to increase the production of (representational) gestures and (b) whether the action component intrinsic to these objects would be reflected in the representation techniques used to gesture.</p>
<p>Our analyses yielded a number of noteworthy results. First, our results suggest that merely describing objects with manual affordances (e.g. tools), as opposed to objects whose daily function is not primarily executed with the hand, is indeed enough to elevate the rate of co-speech gestures produced by speakers. This result, however, was only found for representational gestures. This is consistent both with previous research (Pine et al.,
<xref rid="CIT0045" ref-type="bibr">2010</xref>
), and with the
<italic>GSA</italic>
(Hostetter & Alibali,
<xref rid="CIT0032" ref-type="bibr">2008</xref>
), which specifically predicts more representational gestures accompanying stronger simulations of motor imagery. Currently, the GSA framework accounts solely for the production of representational gestures. Our study contributes to a possible instantiation of such a framework by showing that this effect does not extend to the production of non-imagistic gestures such as
<italic>pragmatic</italic>
gestures directed to the addressee (Kendon,
<xref rid="CIT0036" ref-type="bibr">2004</xref>
) and
<italic>beats</italic>
(McNeill,
<xref rid="CIT0041" ref-type="bibr">1992</xref>
)—which constituted most of the gestures annotated as non-representational in our study. The relationship between representational and non-representational gestures has been largely ignored in currently available gesture models, in terms of the mechanisms underlying the production of both gesture types, with nearly all accounts limiting their scope to representational gesture. This fact suggests that, although produced together in talk, both types of gestures may have their origin in different cognitive processes (Chu & Kita,
<xref rid="CIT0013" ref-type="bibr">2015</xref>
) and relate to imagistic and linguistic content in different ways. Our results emphasise this difference by showing that the activation caused by our stimuli was restricted to representational gestures, thereby suggesting that the response to the perception of affordances does not generate simple movement activation (going against what we earlier termed a “general activation” account), but that it seems to recruit motor responses that are specific to the features of the represented referents. The extent to which the production of affordance-congruent gestures is semantically mediated, or whether these gestures emerge from a more “direct” visual route to action is a question that requires further investigation.</p>
<p>Despite our finding that more gestures were produced while describing high-affordance objects, a large number of gestures was still produced while describing low-affordance items. We hypothesise that objects in the low-affordance category may have evoked action simulations as well, but of a different kind. For instance, many of these objects had large flat surfaces, which may have activated haptic (“touching”) simulations in the speaker (e.g. a ball-shaped lamp affords to be palpated and its structure affords to be moulded with both hands; a flat surface affords running our palms over it, etc.). This explanation is supported by the predominant use of moulding gestures (mainly associated with flat handshapes) in the description of low-affordance objects. In addition, we observed a tendency in speakers to represent the objects in the low-affordance condition following a piecemeal strategy. That is, whereas for high-affordance objects speakers could mime the performance of an action in one gesture, for low-affordance objects speakers tended to represent separately, in sequential gestures, the shape of different salient features of the object. For instance, it was common that a speaker would describe a shelving rack by first moulding its overall shape, then moulding the shape of one shelf (showing its horizontality, flatness, and size) and then producing several placing gestures, indicating the location of the remaining individual shelves with respect to one another. Such detailed descriptions occurred very often in our dataset, and they may partly be due to the fact that our speakers had to describe pictures of objects, rich in visual detail, and not verbal items (as in Hostetter,
<xref rid="CIT0031" ref-type="bibr">2014</xref>
). It is therefore likely that speakers will produce even fewer gestures accompanying the descriptions of non-manipulable objects when the targets are not presented visually. Further studies comparing the production of gestures in response to both types of stimulus presentation (written versus pictorial) should clarify this issue.</p>
<sec id="S006-S2001">
<title>Representation modes in gestural depiction</title>
<p>While the use of different techniques to gesturally represent concepts has been described in the literature (see, e.g. Kendon,
<xref rid="CIT0036" ref-type="bibr">2004</xref>
; Müller,
<xref rid="CIT0042" ref-type="bibr">1998</xref>
; Streeck,
<xref rid="CIT0046" ref-type="bibr">2008</xref>
,
<xref rid="CIT0047" ref-type="bibr">2009</xref>
; van Nispen et al.,
<xref rid="CIT0050" ref-type="bibr">2014</xref>
), it has received little scholarship thus far. If gestures stem from the imagery that underlies thought, we can conceive of representational hand gestures as visible materialisations of certain aspects of a speaker's mental pictures. In other words, there is a degree of isomorphism between mental representations and hand gestures, and therefore it is worthwhile investigating the iconic strategies that allow for the “transduction” of imagery into movement. In this study, we originally looked at four representation modes that are rooted in daily activity as well as in artistic expression: imitating, moulding, tracing, and portraying (Müller,
<xref rid="CIT0042" ref-type="bibr">1998</xref>
). Our results show that objects that afforded manual actions were mostly represented through imitating gestures (particularly, object use and gripping gestures), whereas low-affordance objects were mostly represented with moulding and placing gestures. The remaining categories did not reveal significant differences (e.g. tracing), mostly because of the low frequency with which they occurred (e.g. enacting, portraying).</p>
<p>In summary, it is likely that high-affordance objects evoked simulations of action in the speakers, and that this was manifested not only in the amount of gesturing, but also in the features of the referents that these gestures represented. The fact that most high-affordance objects were represented through imitating gestures (object use, grip) supports the notion that viewing objects triggers the activation of the motor processes associated with physically grasping those objects (e.g. Bub et al.,
<xref rid="CIT0009" ref-type="bibr">2003</xref>
; Ellis & Tucker,
<xref rid="CIT0015" ref-type="bibr">2000</xref>
). Conversely, it is likely that the moulding gestures accompanying low-affordance objects stemmed from simulations of touching, rooted in the everyday exploration of objects through
<italic>haptic perception</italic>
(Lederman & Klatzky,
<xref rid="CIT0038" ref-type="bibr">1987</xref>
). We have hypothesised about the connection between imitating and moulding gestures, and different types of simulated action. Nevertheless, we wonder about the cognitive origin of other representation modes, such as tracing or portraying. One noteworthy aspect of the gestural techniques we analysed is that they display different degrees of abstraction or schematicity (see, e.g. Perniss & Vigliocco,
<xref rid="CIT0044" ref-type="bibr">2014</xref>
). For instance, miming the use of an object is close to daily sensorimotor experience, and seems relatively “unfiltered” in comparison with drawing a contour, which implies the abstraction of a series of features into a shape, ultimately traced by the finger. It becomes apparent that these gestures also vary in terms of their cognitive complexity (e.g. Bartolo, Cubelli, Della Sala, & Drei,
<xref rid="CIT0002" ref-type="bibr">2003</xref>
), and it is therefore likely that different gestural techniques originate in different processes. Thus, future research should address how cognitive and communicative aspects constrain the use of representation techniques, which will, in our opinion, greatly inform the creation of more comprehensive co-speech gesture models.</p>
<p>In conclusion, this study showed that (action) affordances influence gestural behaviour, by determining both the amount of representational gestures produced by speakers, and the gestural techniques chosen to depict such objects. The present findings thus support and expand the assumptions of the GSA framework (Hostetter & Alibali,
<xref rid="CIT0032" ref-type="bibr">2008</xref>
) and are compatible with previous research in the field of motor cognition showing specific handshape-affordance congruency effects during visual and language tasks (e.g. Bub & Masson,
<xref rid="CIT0008" ref-type="bibr">2006</xref>
; Bub et al.,
<xref rid="CIT0009" ref-type="bibr">2003</xref>
; Tucker & Ellis,
<xref rid="CIT0049" ref-type="bibr">1998</xref>
). In addition, to our knowledge, this is the first study to have systematically shown a connection between object properties and gestural representation techniques during referential communication. The insight gained by looking at such techniques highlights the importance of adopting a more qualitative approach to gesture research, as a means to comprehend in depth the processes that give rise to gesture production.</p>
</sec>
</sec>
</body>
<back>
<ack>
<title>Acknowledgements</title>
<p>We would like to thank Diede Schots for assistance in transcribing the data, and our colleagues at the
<italic>Tilburg Centre for Cognition and Communication</italic>
(TiCC) for their valuable comments. Earlier versions of this study were presented at the 6th conference of the
<italic>International Society for Gesture Studies</italic>
(ISGS) (July 2014, San Diego, USA) and at the 7th annual conference on
<italic>Embodied and Situated Language Processing</italic>
(ESLP).</p>
</ack>
<sec id="S007">
<title>Disclosure statement</title>
<p>No potential conflict of interest was reported by the authors.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="CIT0001">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Barr</surname>
<given-names>D. J.</given-names>
</name>
<name>
<surname>Levy</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Scheepers</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Tily</surname>
<given-names>H. J.</given-names>
</name>
</person-group>
<year>2013</year>
<article-title>Random effects structure for confirmatory hypothesis testing: Keep it maximal</article-title>
<source>
<italic>Journal of Memory and Language</italic>
</source>
<fpage>255</fpage>
<lpage>278</lpage>
<pub-id pub-id-type="doi">10.1016/j.jml.2012.11.001</pub-id>
</element-citation>
</ref>
<ref id="CIT0002">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bartolo</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Cubelli</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Della Sala</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Drei</surname>
<given-names>S.</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Pantomimes are special gestures which rely on working memory</article-title>
<source>
<italic>Brain & Cognition</italic>
</source>
<fpage>483</fpage>
<lpage>494</lpage>
<pub-id pub-id-type="doi">10.1016/S0278-2626(03)00209-4</pub-id>
<pub-id pub-id-type="pmid">14642299</pub-id>
</element-citation>
</ref>
<ref id="CIT0003">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bavelas</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Healing</surname>
<given-names>S.</given-names>
</name>
</person-group>
<year>2013</year>
<article-title>Reconciling the effects of mutual visibility on gesturing: A review</article-title>
<source>
<italic>Gesture</italic>
</source>
<issue>1</issue>
<fpage>63</fpage>
<lpage>92</lpage>
<pub-id pub-id-type="doi">10.1075/gest.13.1.03bav</pub-id>
</element-citation>
</ref>
<ref id="CIT0004">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bavelas</surname>
<given-names>J. B.</given-names>
</name>
<name>
<surname>Chovil</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Lawrie</surname>
<given-names>D. A.</given-names>
</name>
<name>
<surname>Wade</surname>
<given-names>A.</given-names>
</name>
</person-group>
<year>1992</year>
<article-title>Interactive gestures</article-title>
<source>
<italic>Discourse Processes</italic>
</source>
<fpage>469</fpage>
<lpage>489</lpage>
<pub-id pub-id-type="doi">10.1080/01638539209544823</pub-id>
</element-citation>
</ref>
<ref id="CIT0005">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bavelas</surname>
<given-names>J. B.</given-names>
</name>
<name>
<surname>Gerwing</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Sutton</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Prevost</surname>
<given-names>D.</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Gesturing on the telephone: Independent effects of dialogue and visibility</article-title>
<source>
<italic>Journal of Memory and Language</italic>
</source>
<fpage>495</fpage>
<lpage>520</lpage>
<pub-id pub-id-type="doi">10.1016/j.jml.2007.02.004</pub-id>
</element-citation>
</ref>
<ref id="CIT0006">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Beattie</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Shovelton</surname>
<given-names>H.</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>What properties of talk are associated with the generation of spontaneous iconic hand gestures?</article-title>
<source>
<italic>British Journal of Psychology</italic>
</source>
<fpage>403</fpage>
<lpage>417</lpage>
<pub-id pub-id-type="doi">10.1348/014466602760344287</pub-id>
</element-citation>
</ref>
<ref id="CIT0007">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Bergmann</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Kopp</surname>
<given-names>S.</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Increasing expressiveness for virtual agents: Autonomous generation of speech and gesture</article-title>
<person-group person-group-type="editor">
<name>
<surname>Decker</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Sichman</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Sierra</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Castelfranchi</surname>
<given-names>C.</given-names>
</name>
</person-group>
<source>
<italic>Proceedings of the 8th international conference on autonomous agents and multiagent systems</italic>
</source>
<fpage>361</fpage>
<lpage>368</lpage>
<publisher-loc>
<named-content content-type="city">Ann Arbor</named-content>
,
<named-content content-type="state">MI</named-content>
</publisher-loc>
<publisher-name>IFAAMAS</publisher-name>
</element-citation>
</ref>
<ref id="CIT0008">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bub</surname>
<given-names>D. N.</given-names>
</name>
<name>
<surname>Masson</surname>
<given-names>M. E. J.</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Gestural knowledge evoked by objects as part of conceptual representations</article-title>
<source>
<italic>Aphasiology</italic>
</source>
<fpage>1112</fpage>
<lpage>1124</lpage>
<pub-id pub-id-type="doi">10.1080/02687030600741667</pub-id>
</element-citation>
</ref>
<ref id="CIT0009">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bub</surname>
<given-names>D. N.</given-names>
</name>
<name>
<surname>Masson</surname>
<given-names>M. E. J.</given-names>
</name>
<name>
<surname>Bukach</surname>
<given-names>C. M.</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Gesturing and naming: The use of functional knowledge in object identification</article-title>
<source>
<italic>Psychological Science</italic>
</source>
<fpage>467</fpage>
<lpage>472</lpage>
<pub-id pub-id-type="doi">10.1111/1467-9280.02455</pub-id>
<pub-id pub-id-type="pmid">12930478</pub-id>
</element-citation>
</ref>
<ref id="CIT0010">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bub</surname>
<given-names>D. N.</given-names>
</name>
<name>
<surname>Masson</surname>
<given-names>M. E. J.</given-names>
</name>
<name>
<surname>Cree</surname>
<given-names>G. S.</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Evocation of functional and volumetric gestural knowledge by objects and words</article-title>
<source>
<italic>Cognition</italic>
</source>
<fpage>27</fpage>
<lpage>58</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2006.12.010</pub-id>
<pub-id pub-id-type="pmid">17239839</pub-id>
</element-citation>
</ref>
<ref id="CIT0012">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chu</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kita</surname>
<given-names>S.</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Spontaneous gestures during mental rotation tasks: Insights into the microdevelopment of the motor strategy</article-title>
<source>
<italic>Journal of Experimental Psychology: General</italic>
</source>
<fpage>706</fpage>
<lpage>723</lpage>
<pub-id pub-id-type="doi">10.1037/a0013157</pub-id>
<pub-id pub-id-type="pmid">18999362</pub-id>
</element-citation>
</ref>
<ref id="CIT0013">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chu</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kita</surname>
<given-names>S.</given-names>
</name>
</person-group>
<year>2015</year>
<article-title>Co-thought and Co-speech gestures are generated by the same action generation process</article-title>
<source>
<italic>Journal of Experimental Psychology: Learning, Memory, and Cognition</italic>
</source>
<comment>Advance online publication</comment>
<pub-id pub-id-type="doi">10.1037/xlm0000168</pub-id>
</element-citation>
</ref>
<ref id="CIT0014">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cook</surname>
<given-names>S. W.</given-names>
</name>
<name>
<surname>Tanenhaus</surname>
<given-names>M. K.</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Embodied communication: Speakers’ gestures affect listeners’ actions</article-title>
<source>
<italic>Cognition</italic>
</source>
<fpage>98</fpage>
<lpage>104</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2009.06.006</pub-id>
<pub-id pub-id-type="pmid">19682672</pub-id>
</element-citation>
</ref>
<ref id="CIT0015">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ellis</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Tucker</surname>
<given-names>M.</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>Micro-affordance: The potentiation of components of action by seen objects</article-title>
<source>
<italic>British Journal of Psychology</italic>
</source>
<issue>4</issue>
<fpage>451</fpage>
<lpage>471</lpage>
<pub-id pub-id-type="doi">10.1348/000712600161934</pub-id>
<pub-id pub-id-type="pmid">11104173</pub-id>
</element-citation>
</ref>
<ref id="CIT0016">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Feyereisen</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Havard</surname>
<given-names>I.</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>Mental imagery and production of hand gestures while speaking in younger and older adults</article-title>
<source>
<italic>Journal of Nonverbal Behavior</italic>
</source>
<fpage>153</fpage>
<lpage>171</lpage>
<pub-id pub-id-type="doi">10.1023/A:1021487510204</pub-id>
</element-citation>
</ref>
<ref id="CIT0017">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fischer</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Zwaan</surname>
<given-names>R.</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Embodied language: A review of the role of the motor system in language comprehension</article-title>
<source>
<italic>The Quarterly Journal of Experimental Psychology</italic>
</source>
<issue>6</issue>
<fpage>825</fpage>
<lpage>850</lpage>
<pub-id pub-id-type="doi">10.1080/17470210701623605</pub-id>
<pub-id pub-id-type="pmid">18470815</pub-id>
</element-citation>
</ref>
<ref id="CIT0018">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Galati</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Brennan</surname>
<given-names>S.</given-names>
</name>
</person-group>
<year>2014</year>
<article-title>Speakers adapt gestures to addressees’ knowledge: Implications for models of co-speech gesture</article-title>
<source>
<italic>Language, Cognition and Neuroscience</italic>
</source>
<issue>4</issue>
<fpage>435</fpage>
<lpage>451</lpage>
<pub-id pub-id-type="doi">10.1080/01690965.2013.796397</pub-id>
</element-citation>
</ref>
<ref id="CIT0019">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gerlach</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Law</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Paulson</surname>
<given-names>O. B.</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>When action turns into words: Activation of motor-based knowledge during categorization of manipulable objects</article-title>
<source>
<italic>Journal of Cognitive Neuroscience</italic>
</source>
<fpage>1230</fpage>
<lpage>1239</lpage>
<pub-id pub-id-type="doi">10.1162/089892902760807221</pub-id>
<pub-id pub-id-type="pmid">12495528</pub-id>
</element-citation>
</ref>
<ref id="CIT0020">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gerwing</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Bavelas</surname>
<given-names>J.</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Linguistic influences on gesture's form</article-title>
<source>
<italic>Gesture</italic>
</source>
<fpage>157</fpage>
<lpage>195</lpage>
<pub-id pub-id-type="doi">10.1075/gest.4.2.04ger</pub-id>
</element-citation>
</ref>
<ref id="CIT0021">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Gibson</surname>
<given-names>J. J.</given-names>
</name>
</person-group>
<year>1986</year>
<source>
<italic>The ecological approach to visual perception</italic>
</source>
<publisher-loc>
<named-content content-type="city">New York</named-content>
</publisher-loc>
<publisher-name>Psychology Press</publisher-name>
<pub-id pub-id-type="doi">10.4324/9780203767764</pub-id>
</element-citation>
</ref>
<ref id="CIT0022">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Glenberg</surname>
<given-names>A. M.</given-names>
</name>
<name>
<surname>Kaschak</surname>
<given-names>M. P.</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Grounding language in action</article-title>
<source>
<italic>Psychonomic Bulletin & Review</italic>
</source>
<fpage>558</fpage>
<lpage>565</lpage>
<pub-id pub-id-type="doi">10.3758/BF03196313</pub-id>
<pub-id pub-id-type="pmid">12412897</pub-id>
</element-citation>
</ref>
<ref id="CIT0023">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Glenberg</surname>
<given-names>A. M.</given-names>
</name>
<name>
<surname>Robertson</surname>
<given-names>D. A.</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>Symbol grounding and meaning: A comparison of high-dimensional and embodied theories of meaning</article-title>
<source>
<italic>Journal of Memory and Language</italic>
</source>
<issue>3</issue>
<fpage>379</fpage>
<lpage>401</lpage>
<pub-id pub-id-type="doi">10.1006/jmla.2000.2714</pub-id>
</element-citation>
</ref>
<ref id="CIT0024">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Glover</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Rosenbaum</surname>
<given-names>D. A.</given-names>
</name>
<name>
<surname>Graham</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Dixon</surname>
<given-names>P.</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Grasping the meaning of words</article-title>
<source>
<italic>Experimental Brain Research</italic>
</source>
<fpage>103</fpage>
<lpage>108</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-003-1659-2</pub-id>
<pub-id pub-id-type="pmid">14578997</pub-id>
</element-citation>
</ref>
<ref id="CIT0025">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Goldin-Meadow</surname>
<given-names>S.</given-names>
</name>
</person-group>
<year>2003</year>
<source>
<italic>Hearing gesture: How our hands help us think</italic>
</source>
<publisher-loc>
<named-content content-type="city">Cambridge</named-content>
,
<named-content content-type="state">MA</named-content>
</publisher-loc>
<publisher-name>Harvard University Press</publisher-name>
</element-citation>
</ref>
<ref id="CIT0027">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hadar</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Butterworth</surname>
<given-names>B.</given-names>
</name>
</person-group>
<year>1997</year>
<article-title>Iconic gestures, imagery and word retrieval in speech</article-title>
<source>
<italic>Semiotica</italic>
</source>
<fpage>147</fpage>
<lpage>172</lpage>
<pub-id pub-id-type="doi">10.1515/semi.1997.115.1-2.147</pub-id>
</element-citation>
</ref>
<ref id="CIT0028">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Handy</surname>
<given-names>T. C.</given-names>
</name>
<name>
<surname>Grafton</surname>
<given-names>S. T.</given-names>
</name>
<name>
<surname>Shroff</surname>
<given-names>N. M.</given-names>
</name>
<name>
<surname>Ketay</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gazzaniga</surname>
<given-names>M. S.</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Graspable objects grab attention when the potential for action is recognised</article-title>
<source>
<italic>Nature Neuroscience</italic>
</source>
<fpage>421</fpage>
<lpage>427</lpage>
<pub-id pub-id-type="doi">10.1038/nn1031</pub-id>
<pub-id pub-id-type="pmid">12640459</pub-id>
</element-citation>
</ref>
<ref id="CIT0029">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hauk</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Johnsrude</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Pulvermüller</surname>
<given-names>F.</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Somatotopic representation of action words in human motor and premotor cortex</article-title>
<source>
<italic>Neuron</italic>
</source>
<fpage>301</fpage>
<lpage>307</lpage>
<pub-id pub-id-type="doi">10.1016/S0896-6273(03)00838-9</pub-id>
<pub-id pub-id-type="pmid">14741110</pub-id>
</element-citation>
</ref>
<ref id="CIT0030">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hoetjes</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Koolen</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Goudbeek</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Krahmer</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Swerts</surname>
<given-names>M.</given-names>
</name>
</person-group>
<year>2015</year>
<article-title>Reduction in gesture during the production of repeated references</article-title>
<source>
<italic>Journal of Memory and Language</italic>
</source>
<fpage>1</fpage>
<lpage>17</lpage>
<pub-id pub-id-type="doi">10.1016/j.jml.2014.10.004</pub-id>
</element-citation>
</ref>
<ref id="CIT0031">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hostetter</surname>
<given-names>A. B.</given-names>
</name>
</person-group>
<year>2014</year>
<article-title>Action attenuates the effect of visibility on gesture rates</article-title>
<source>
<italic>Cognitive Science</italic>
</source>
<issue>7</issue>
<fpage>1468</fpage>
<lpage>1481</lpage>
<pub-id pub-id-type="doi">10.1111/cogs.12113</pub-id>
<pub-id pub-id-type="pmid">24889881</pub-id>
</element-citation>
</ref>
<ref id="CIT0032">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hostetter</surname>
<given-names>A. B.</given-names>
</name>
<name>
<surname>Alibali</surname>
<given-names>M. W.</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Visible embodiment: Gestures as simulated action</article-title>
<source>
<italic>Psychonomic Bulletin and Review</italic>
</source>
<fpage>495</fpage>
<lpage>514</lpage>
<pub-id pub-id-type="doi">10.3758/PBR.15.3.495</pub-id>
<pub-id pub-id-type="pmid">18567247</pub-id>
</element-citation>
</ref>
<ref id="CIT0033">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hostetter</surname>
<given-names>A. B.</given-names>
</name>
<name>
<surname>Alibali</surname>
<given-names>M. W.</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Language, gesture, action! A test of the gesture as simulated action framework</article-title>
<source>
<italic>Journal of Memory and Language</italic>
</source>
<fpage>245</fpage>
<lpage>257</lpage>
<pub-id pub-id-type="doi">10.1016/j.jml.2010.04.003</pub-id>
</element-citation>
</ref>
<ref id="CIT0034">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Hostetter</surname>
<given-names>A. B.</given-names>
</name>
<name>
<surname>Alibali</surname>
<given-names>M. W.</given-names>
</name>
<name>
<surname>Bartholomew</surname>
<given-names>A. E.</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>Gesture during mental rotation</article-title>
<person-group person-group-type="editor">
<name>
<surname>Carlson</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Hoelscher</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Shipley</surname>
<given-names>T.</given-names>
</name>
</person-group>
<source>
<italic>Proceedings of the 33rd annual meeting of the cognitive science society</italic>
</source>
<fpage>1448</fpage>
<lpage>1454</lpage>
<publisher-loc>
<named-content content-type="city">Austin</named-content>
,
<named-content content-type="state">TX</named-content>
</publisher-loc>
<publisher-name>Cognitive Science Society</publisher-name>
</element-citation>
</ref>
<ref id="CIT0035">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jaeger</surname>
<given-names>T. F.</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Categorical data analysis: Away from ANOVAs (transformation or not) and towards logit mixed models</article-title>
<source>
<italic>Journal of Memory and Language</italic>
</source>
<issue>4</issue>
<fpage>434</fpage>
<lpage>446</lpage>
<pub-id pub-id-type="doi">10.1016/j.jml.2007.11.007</pub-id>
<pub-id pub-id-type="pmid">19884961</pub-id>
</element-citation>
</ref>
<ref id="CIT0036">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Kendon</surname>
<given-names>A.</given-names>
</name>
</person-group>
<year>2004</year>
<source>
<italic>Gesture. Visible action as utterance</italic>
</source>
<publisher-loc>
<named-content content-type="city">Cambridge</named-content>
</publisher-loc>
<publisher-name>Cambridge University Press</publisher-name>
<pub-id pub-id-type="doi">10.1017/cbo9780511807572</pub-id>
</element-citation>
</ref>
<ref id="CIT0037">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Krauss</surname>
<given-names>R. M.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Gottesman</surname>
<given-names>R. F.</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>Lexical gestures and lexical access: A process model</article-title>
<person-group person-group-type="editor">
<name>
<surname>McNeill</surname>
<given-names>D.</given-names>
</name>
</person-group>
<source>
<italic>Language and gesture</italic>
</source>
<fpage>261</fpage>
<lpage>283</lpage>
<publisher-loc>
<named-content content-type="city">New York</named-content>
,
<named-content content-type="state">NY</named-content>
</publisher-loc>
<publisher-name>Cambridge University Press</publisher-name>
<pub-id pub-id-type="doi">10.1017/cbo9780511620850.017</pub-id>
</element-citation>
</ref>
<ref id="CIT0038">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lederman</surname>
<given-names>S. J.</given-names>
</name>
<name>
<surname>Klatzky</surname>
<given-names>R. L.</given-names>
</name>
</person-group>
<year>1987</year>
<article-title>Hand movements: A window into haptic object recognition</article-title>
<source>
<italic>Cognitive Psychology</italic>
</source>
<fpage>342</fpage>
<lpage>368</lpage>
<pub-id pub-id-type="doi">10.1016/0010-0285(87)90008-9</pub-id>
<pub-id pub-id-type="pmid">3608405</pub-id>
</element-citation>
</ref>
<ref id="CIT0039">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Louwerse</surname>
<given-names>M. M.</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>Symbol interdependency in symbolic and embodied cognition</article-title>
<source>
<italic>Topics in Cognitive Science</italic>
</source>
<fpage>273</fpage>
<lpage>302</lpage>
<pub-id pub-id-type="doi">10.1111/j.1756-8765.2010.01106.x</pub-id>
<pub-id pub-id-type="pmid">25164297</pub-id>
</element-citation>
</ref>
<ref id="CIT0040">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Louwerse</surname>
<given-names>M. M.</given-names>
</name>
<name>
<surname>Jeuniaux</surname>
<given-names>P.</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>The linguistic and embodied nature of conceptual processing</article-title>
<source>
<italic>Cognition</italic>
</source>
<fpage>96</fpage>
<lpage>104</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2009.09.002</pub-id>
<pub-id pub-id-type="pmid">19818435</pub-id>
</element-citation>
</ref>
<ref id="CIT0041">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>McNeill</surname>
<given-names>D.</given-names>
</name>
</person-group>
<year>1992</year>
<source>
<italic>Hand and mind. What gestures reveal about thought</italic>
</source>
<publisher-loc>
<named-content content-type="city">Chicago</named-content>
</publisher-loc>
<publisher-name>University of Chicago Press</publisher-name>
</element-citation>
</ref>
<ref id="CIT0042">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Müller</surname>
<given-names>C.</given-names>
</name>
</person-group>
<year>1998</year>
<article-title>Iconicity and gesture</article-title>
<person-group person-group-type="editor">
<name>
<surname>Santi</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Guatiella</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Cave</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Konopczyncki</surname>
<given-names>G.</given-names>
</name>
</person-group>
<source>
<italic>Oralité et Gestualité</italic>
(pp. 321–328)</source>
<publisher-loc>
<named-content content-type="city">Montreal</named-content>
</publisher-loc>
<publisher-name>L'Harmattan</publisher-name>
</element-citation>
</ref>
<ref id="CIT0043">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Parrill</surname>
<given-names>F.</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Viewpoint in speech-gesture integration: Linguistic structure, discourse structure, and event structure</article-title>
<source>
<italic>Language and Cognitive Processes</italic>
</source>
<issue>5</issue>
<fpage>650</fpage>
<lpage>668</lpage>
<pub-id pub-id-type="doi">10.1080/01690960903424248</pub-id>
</element-citation>
</ref>
<ref id="CIT0044">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Perniss</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Vigliocco</surname>
<given-names>G.</given-names>
</name>
</person-group>
<year>2014</year>
<article-title>The bridge of iconicity: From a world of experience to the experience of language</article-title>
<source>
<italic>Philosophical Transactions of the Royal Society</italic>
</source>
<issue>1651</issue>
<fpage>1</fpage>
<lpage>13</lpage>
<pub-id pub-id-type="doi">10.1098/rstb.2014.0179</pub-id>
</element-citation>
</ref>
<ref id="CIT0045">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pine</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Gurney</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Fletcher</surname>
<given-names>B.</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>The semantic specificity hypothesis: When gestures do not depend upon the presence of a listener</article-title>
<source>
<italic>Journal of Nonverbal Behavior</italic>
</source>
<issue>3</issue>
<fpage>169</fpage>
<lpage>178</lpage>
<pub-id pub-id-type="doi">10.1007/s10919-010-0089-7</pub-id>
</element-citation>
</ref>
<ref id="CIT0046">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Streeck</surname>
<given-names>J.</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Depicting by gesture</article-title>
<source>
<italic>Gesture</italic>
</source>
<issue>3</issue>
<fpage>285</fpage>
<lpage>301</lpage>
<pub-id pub-id-type="doi">10.1075/gest.8.3.02str</pub-id>
</element-citation>
</ref>
<ref id="CIT0047">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Streeck</surname>
<given-names>J.</given-names>
</name>
</person-group>
<year>2009</year>
<source>
<italic>Gesturecraft: The manu-facture of meaning</italic>
</source>
<publisher-loc>
<named-content content-type="city">Amsterdam</named-content>
</publisher-loc>
<publisher-name>John Benjamins Publishing</publisher-name>
<pub-id pub-id-type="doi">10.1017/s0047404513000262</pub-id>
</element-citation>
</ref>
<ref id="CIT0048">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tettamanti</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Buccino</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Saccuman</surname>
<given-names>M. C.</given-names>
</name>
<name>
<surname>Gallese</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Danna</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Scifo</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Perani</surname>
<given-names>D.</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Listening to action-related sentences activates fronto-parietal motor circuits</article-title>
<source>
<italic>Journal of Cognitive Neuroscience</italic>
</source>
<fpage>273</fpage>
<lpage>281</lpage>
<pub-id pub-id-type="doi">10.1162/0898929053124965</pub-id>
<pub-id pub-id-type="pmid">15811239</pub-id>
</element-citation>
</ref>
<ref id="CIT0049">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tucker</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Ellis</surname>
<given-names>R.</given-names>
</name>
</person-group>
<year>1998</year>
<article-title>On the relations between seen objects and components of potential actions</article-title>
<source>
<italic>Journal of Experimental Psychology: Human Perception and Performance</italic>
</source>
<issue>3</issue>
<fpage>830</fpage>
<pub-id pub-id-type="doi">10.1037//0096-1523.24.3.830</pub-id>
<pub-id pub-id-type="pmid">9627419</pub-id>
</element-citation>
</ref>
<ref id="CIT0050">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Van Nispen</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>van de Sandt-Koenderman</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Mol</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Krahmer</surname>
<given-names>E.</given-names>
</name>
</person-group>
<year>2014</year>
<article-title>Pantomime Strategies: On regularities in how people translate mental representations into the gesture modality</article-title>
<person-group person-group-type="editor">
<name>
<surname>Bello</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Guarini</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>McShane</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Scassellati</surname>
<given-names>B.</given-names>
</name>
</person-group>
<source>
<italic>Proceedings of the 36th annual conference of the cognitive science society</italic>
</source>
<fpage>976</fpage>
<lpage>981</lpage>
<publisher-loc>
<named-content content-type="city">Austin</named-content>
,
<named-content content-type="state">TX</named-content>
</publisher-loc>
<publisher-name>Cognitive Science Society</publisher-name>
</element-citation>
</ref>
<ref id="CIT0051">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Wittenburg</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Brugman</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Russel</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Klassmann</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Sloetjes</surname>
<given-names>H.</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>ELAN: a professional framework for multimodality research</article-title>
<source>
<italic>Proceedings of LREC, 5th international conference on language resources and evaluation</italic>
</source>
<publisher-loc>
<named-content content-type="city">Paris</named-content>
</publisher-loc>
<publisher-name>ELRA</publisher-name>
</element-citation>
</ref>
</ref-list>
<app-group>
<app>
<title>Appendix 1. List of target items (note: in the experiment, items were presented visually).</title>
<sec id="S009">
<table-wrap id="T0001" orientation="portrait" position="anchor">
<pmc-comment>OASIS TABLE HERE</pmc-comment>
<table frame="hsides" rules="groups">
<colgroup>
<col width="1*"></col>
<col width="1*"></col>
</colgroup>
<thead valign="bottom">
<tr>
<th align="left">Manipulable objects</th>
<th align="center">Non-manipulable objects</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Pastry brush</td>
<td align="left">Stepladder</td>
</tr>
<tr>
<td align="left">Spatula</td>
<td align="left">Plant</td>
</tr>
<tr>
<td align="left">Knife</td>
<td align="left">Dining table</td>
</tr>
<tr>
<td align="left">Grater</td>
<td align="left">Flatware tray</td>
</tr>
<tr>
<td align="left">Whisk</td>
<td align="left">Ball lamp</td>
</tr>
<tr>
<td align="left">Hammer</td>
<td align="left">Wall shelf</td>
</tr>
<tr>
<td align="left">Garlic press</td>
<td align="left">Cart</td>
</tr>
<tr>
<td align="left">Rolling pin</td>
<td align="left">Hood</td>
</tr>
<tr>
<td align="left">Cook timer</td>
<td align="left">Sink</td>
</tr>
<tr>
<td align="left">Egg slicer</td>
<td align="left">Kitchen island</td>
</tr>
<tr>
<td align="left">Wine glass</td>
<td align="left">Desk</td>
</tr>
<tr>
<td align="left">Cheese slicer</td>
<td align="left">Clock</td>
</tr>
<tr>
<td align="left">Pitcher</td>
<td align="left">Lamp</td>
</tr>
<tr>
<td align="left">French press</td>
<td align="left">Stool</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</app>
</app-group>
<app-group>
<app>
<title>Appendix 2. Description and examples of the representation techniques annotated in the present study.</title>
<sec id="S010">
<table-wrap id="T0002" orientation="portrait" position="anchor">
<pmc-comment>OASIS TABLE HERE</pmc-comment>
<table frame="hsides" rules="groups">
<colgroup>
<col width="1*"></col>
<col width="1*"></col>
</colgroup>
<thead valign="bottom">
<tr>
<th align="left">Representation mode</th>
<th align="center">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Object use</td>
<td align="left">Represents a transitive action, whereby the actor simulates the performance of an object-directed action.
<break></break>
Example: the hand acts as if holding a pen, with both thumb and index fingertips pressed together, imitating the act of writing.</td>
</tr>
<tr>
<td align="left">Enactment</td>
<td align="left">Represents an intransitive action, whereby the actor simulates the performance of a non-object-directed action.
<break></break>
Example: the arms swing back and forth in alternated movements, simulating the motion of the upper body while running.</td>
</tr>
<tr>
<td align="left">Hand grip</td>
<td align="left">The hand acts as if it were grasping or holding an object, without carrying out any specific action.
<break></break>
Example: fingers close into a clenched fist, as if holding the handle of a tool.</td>
</tr>
<tr>
<td align="left">Moulding</td>
<td align="left">The hand acts as if it were palpating, or sculpting the surface of an object.
<break></break>
Example: a flat hand with the palm facing down moves along the horizontal axis, representing the “flatness” of an object's surface.</td>
</tr>
<tr>
<td align="left">Tracing</td>
<td align="left">The hand (typically using the index finger) draws a shape in the air, or traces the trajectory (to be) followed by an entity.
<break></break>
Example: tracing a big square with the tip of the finger, representing a quadratic object such as a window.</td>
</tr>
<tr>
<td align="left">Portraying</td>
<td align="left">The hand is used to portray an object (or character) in a holistic manner, as if it had become the object itself.
<break></break>
Example: with two fingers (index and middle) stretched out horizontally, and the others closed, the hand can portray a pair of scissors, and simulate the action of cutting through paper.</td>
</tr>
<tr>
<td align="left">Placing</td>
<td align="left">The hand anchors or places an entity within the gesture space, or explicitly expresses a spatial relation between two or more entities. Example: when describing a scene, a speaker might use his hand to indicate the location of the actors and objects portrayed.</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</app>
</app-group>
</back>
</pmc>
<affiliations>
<list></list>
<tree>
<noCountry>
<name sortKey="Goudbeek, Martijn" sort="Goudbeek, Martijn" uniqKey="Goudbeek M" first="Martijn" last="Goudbeek">Martijn Goudbeek</name>
<name sortKey="Krahmer, Emiel" sort="Krahmer, Emiel" uniqKey="Krahmer E" first="Emiel" last="Krahmer">Emiel Krahmer</name>
<name sortKey="Masson Carro, Ingrid" sort="Masson Carro, Ingrid" uniqKey="Masson Carro I" first="Ingrid" last="Masson-Carro">Ingrid Masson-Carro</name>
</noCountry>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000707 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 000707 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:4867791
   |texte=   Can you handle this? The impact of object affordances on how co-speech gestures are produced
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:27226970" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024