Serveur d'exploration sur les dispositifs haptiques


The sound of your lips: electrophysiological cross-modal interactions during hand-to-face and face-to-face speech perception

Internal identifier: 001F07 (Pmc/Curation); previous: 001F06; next: 001F08


Authors: Avril Treille; Coriandre Vilain; Marc Sato

Source:

RBID : PMC:4026678

Abstract

Recent magneto-encephalographic and electro-encephalographic studies provide evidence for cross-modal integration during audio-visual and audio-haptic speech perception, with speech gestures viewed or felt from manual tactile contact with the speaker’s face. Given the temporal precedence of the haptic and visual signals on the acoustic signal in these studies, the observed modulation of N1/P2 auditory evoked responses during bimodal compared to unimodal speech perception suggests that relevant and predictive visual and haptic cues may facilitate auditory speech processing. To further investigate this hypothesis, auditory evoked potentials were here compared during auditory-only, audio-visual and audio-haptic speech perception in live dyadic interactions between a listener and a speaker. In line with previous studies, auditory evoked potentials were attenuated and speeded up during both audio-haptic and audio-visual compared to auditory speech perception. Importantly, the observed latency and amplitude reduction did not significantly depend on the degree of visual and haptic recognition of the speech targets. Altogether, these results further demonstrate cross-modal interactions between the auditory, visual and haptic speech signals. Although they do not contradict the hypothesis that visual and haptic sensory inputs convey predictive information with respect to the incoming auditory speech input, these results suggest that, at least in live conversational interactions, systematic conclusions on sensory predictability in bimodal speech integration have to be taken with caution, with the extraction of predictive cues likely depending on the variability of the speech stimuli.


Url:
DOI: 10.3389/fpsyg.2014.00420
PubMed: 24860533
PubMed Central: 4026678


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">The sound of your lips: electrophysiological cross-modal interactions during hand-to-face and face-to-face speech perception</title>
<author>
<name sortKey="Treille, Avril" sort="Treille, Avril" uniqKey="Treille A" first="Avril" last="Treille">Avril Treille</name>
</author>
<author>
<name sortKey="Vilain, Coriandre" sort="Vilain, Coriandre" uniqKey="Vilain C" first="Coriandre" last="Vilain">Coriandre Vilain</name>
</author>
<author>
<name sortKey="Sato, Marc" sort="Sato, Marc" uniqKey="Sato M" first="Marc" last="Sato">Marc Sato</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">24860533</idno>
<idno type="pmc">4026678</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4026678</idno>
<idno type="RBID">PMC:4026678</idno>
<idno type="doi">10.3389/fpsyg.2014.00420</idno>
<date when="2014">2014</date>
<idno type="wicri:Area/Pmc/Corpus">001F07</idno>
<idno type="wicri:Area/Pmc/Curation">001F07</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">The sound of your lips: electrophysiological cross-modal interactions during hand-to-face and face-to-face speech perception</title>
<author>
<name sortKey="Treille, Avril" sort="Treille, Avril" uniqKey="Treille A" first="Avril" last="Treille">Avril Treille</name>
</author>
<author>
<name sortKey="Vilain, Coriandre" sort="Vilain, Coriandre" uniqKey="Vilain C" first="Coriandre" last="Vilain">Coriandre Vilain</name>
</author>
<author>
<name sortKey="Sato, Marc" sort="Sato, Marc" uniqKey="Sato M" first="Marc" last="Sato">Marc Sato</name>
</author>
</analytic>
<series>
<title level="j">Frontiers in Psychology</title>
<idno type="eISSN">1664-1078</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Recent magneto-encephalographic and electro-encephalographic studies provide evidence for cross-modal integration during audio-visual and audio-haptic speech perception, with speech gestures viewed or felt from manual tactile contact with the speaker’s face. Given the temporal precedence of the haptic and visual signals on the acoustic signal in these studies, the observed modulation of N1/P2 auditory evoked responses during bimodal compared to unimodal speech perception suggests that relevant and predictive visual and haptic cues may facilitate auditory speech processing. To further investigate this hypothesis, auditory evoked potentials were here compared during auditory-only, audio-visual and audio-haptic speech perception in live dyadic interactions between a listener and a speaker. In line with previous studies, auditory evoked potentials were attenuated and speeded up during both audio-haptic and audio-visual compared to auditory speech perception. Importantly, the observed latency and amplitude reduction did not significantly depend on the degree of visual and haptic recognition of the speech targets. Altogether, these results further demonstrate cross-modal interactions between the auditory, visual and haptic speech signals. Although they do not contradict the hypothesis that visual and haptic sensory inputs convey predictive information with respect to the incoming auditory speech input, these results suggest that, at least in live conversational interactions, systematic conclusions on sensory predictability in bimodal speech integration have to be taken with caution, with the extraction of predictive cues likely depending on the variability of the speech stimuli.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Alcorn, S" uniqKey="Alcorn S">S. Alcorn</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Arnal, L H" uniqKey="Arnal L">L. H. Arnal</name>
</author>
<author>
<name sortKey="Giraud, A L" uniqKey="Giraud A">A. L. Giraud</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Arnal, L H" uniqKey="Arnal L">L. H. Arnal</name>
</author>
<author>
<name sortKey="Morillon, B" uniqKey="Morillon B">B. Morillon</name>
</author>
<author>
<name sortKey="Kell, C A" uniqKey="Kell C">C. A. Kell</name>
</author>
<author>
<name sortKey="Giraud, A L" uniqKey="Giraud A">A. L. Giraud</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Baart, M" uniqKey="Baart M">M. Baart</name>
</author>
<author>
<name sortKey="Stekelenburg, J J" uniqKey="Stekelenburg J">J. J. Stekelenburg</name>
</author>
<author>
<name sortKey="Vroomen, J" uniqKey="Vroomen J">J. Vroomen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Benoit, C" uniqKey="Benoit C">C. Benoît</name>
</author>
<author>
<name sortKey="Mohamadi, T" uniqKey="Mohamadi T">T. Mohamadi</name>
</author>
<author>
<name sortKey="Kandel, S D" uniqKey="Kandel S">S. D. Kandel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Besle, J" uniqKey="Besle J">J. Besle</name>
</author>
<author>
<name sortKey="Fort, A" uniqKey="Fort A">A. Fort</name>
</author>
<author>
<name sortKey="Delpuech, C" uniqKey="Delpuech C">C. Delpuech</name>
</author>
<author>
<name sortKey="Giard, M H" uniqKey="Giard M">M. H. Giard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Boersma, P" uniqKey="Boersma P">P. Boersma</name>
</author>
<author>
<name sortKey="Weenink, D" uniqKey="Weenink D">D. Weenink</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Campbell, C S" uniqKey="Campbell C">C. S. Campbell</name>
</author>
<author>
<name sortKey="Massaro, D W" uniqKey="Massaro D">D. W. Massaro</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chandrasekaran, C" uniqKey="Chandrasekaran C">C. Chandrasekaran</name>
</author>
<author>
<name sortKey="Trubanova, A" uniqKey="Trubanova A">A. Trubanova</name>
</author>
<author>
<name sortKey="Stillittano, S" uniqKey="Stillittano S">S. Stillittano</name>
</author>
<author>
<name sortKey="Caplier, A" uniqKey="Caplier A">A. Caplier</name>
</author>
<author>
<name sortKey="Ghazanfar, A A" uniqKey="Ghazanfar A">A. A. Ghazanfar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Delorme, A" uniqKey="Delorme A">A. Delorme</name>
</author>
<author>
<name sortKey="Makeig, S" uniqKey="Makeig S">S. Makeig</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fowler, C" uniqKey="Fowler C">C. Fowler</name>
</author>
<author>
<name sortKey="Dekle, D" uniqKey="Dekle D">D. Dekle</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gick, B" uniqKey="Gick B">B. Gick</name>
</author>
<author>
<name sortKey="J Hannsd Ttir, K M" uniqKey="J Hannsd Ttir K">K. M. Jóhannsdóttir</name>
</author>
<author>
<name sortKey="Gibraiel, D" uniqKey="Gibraiel D">D Gibraiel</name>
</author>
<author>
<name sortKey="Muhlbauer, M" uniqKey="Muhlbauer M">M. Mühlbauer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grant, K" uniqKey="Grant K">K. Grant</name>
</author>
<author>
<name sortKey="Walden, B E" uniqKey="Walden B">B. E. Walden</name>
</author>
<author>
<name sortKey="Seitz, P F" uniqKey="Seitz P">P. F. Seitz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Green, K P" uniqKey="Green K">K. P. Green</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hertrich, I" uniqKey="Hertrich I">I. Hertrich</name>
</author>
<author>
<name sortKey="Mathiak, K" uniqKey="Mathiak K">K. Mathiak</name>
</author>
<author>
<name sortKey="Lutzenberger, W" uniqKey="Lutzenberger W">W. Lutzenberger</name>
</author>
<author>
<name sortKey="Menning, H" uniqKey="Menning H">H. Menning</name>
</author>
<author>
<name sortKey="Ackermann, H" uniqKey="Ackermann H">H. Ackermann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jones, J A" uniqKey="Jones J">J. A. Jones</name>
</author>
<author>
<name sortKey="Munhall, K G" uniqKey="Munhall K">K. G. Munhall</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Klucharev, V" uniqKey="Klucharev V">V. Klucharev</name>
</author>
<author>
<name sortKey="Mottonen, R" uniqKey="Mottonen R">R. Möttönen</name>
</author>
<author>
<name sortKey="Sams, M" uniqKey="Sams M">M. Sams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lebib, R" uniqKey="Lebib R">R. Lebib</name>
</author>
<author>
<name sortKey="Papo, D" uniqKey="Papo D">D. Papo</name>
</author>
<author>
<name sortKey="De Bode, S" uniqKey="De Bode S">S de Bode</name>
</author>
<author>
<name sortKey="Baudonniere, P M" uniqKey="Baudonniere P">P. M. Baudonnière</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcgurk, H" uniqKey="Mcgurk H">H. McGurk</name>
</author>
<author>
<name sortKey="Macdonald, J" uniqKey="Macdonald J">J. MacDonald</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="N T Nen, R" uniqKey="N T Nen R">R. Näätänen</name>
</author>
<author>
<name sortKey="Picton, T W" uniqKey="Picton T">T. W. Picton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Navarra, J" uniqKey="Navarra J">J. Navarra</name>
</author>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S. Soto-Faraco</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Norton, S J" uniqKey="Norton S">S. J. Norton</name>
</author>
<author>
<name sortKey="Schultz, M C" uniqKey="Schultz M">M. C. Schultz</name>
</author>
<author>
<name sortKey="Reed, C M" uniqKey="Reed C">C. M. Reed</name>
</author>
<author>
<name sortKey="Braida, L D" uniqKey="Braida L">L. D. Braida</name>
</author>
<author>
<name sortKey="Durlach, N I" uniqKey="Durlach N">N. I. Durlach</name>
</author>
<author>
<name sortKey="Rabinowitz, W M" uniqKey="Rabinowitz W">W. M. Rabinowitz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pilling, M" uniqKey="Pilling M">M. Pilling</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reisberg, D" uniqKey="Reisberg D">D. Reisberg</name>
</author>
<author>
<name sortKey="Mclean, J" uniqKey="Mclean J">J. McLean</name>
</author>
<author>
<name sortKey="Goldfield, A" uniqKey="Goldfield A">A. Goldfield</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sams, M" uniqKey="Sams M">M. Sams</name>
</author>
<author>
<name sortKey="Aulanko, R" uniqKey="Aulanko R">R. Aulanko</name>
</author>
<author>
<name sortKey="H M L Inen, M" uniqKey="H M L Inen M">M. Hämäläinen</name>
</author>
<author>
<name sortKey="Hari, R" uniqKey="Hari R">R. Hari</name>
</author>
<author>
<name sortKey="Lounasmaa, O V" uniqKey="Lounasmaa O">O. V. Lounasmaa</name>
</author>
<author>
<name sortKey="Lu, S T" uniqKey="Lu S">S. T. Lu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sato, M" uniqKey="Sato M">M. Sato</name>
</author>
<author>
<name sortKey="Cave, C" uniqKey="Cave C">C. Cavé</name>
</author>
<author>
<name sortKey="Menard, L" uniqKey="Menard L">L. Ménard</name>
</author>
<author>
<name sortKey="Brasseur, L" uniqKey="Brasseur L">L. Brasseur</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Scherg, M" uniqKey="Scherg M">M Scherg</name>
</author>
<author>
<name sortKey="Von Cramon, D" uniqKey="Von Cramon D">D. Von Cramon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schwartz, J L" uniqKey="Schwartz J">J. L. Schwartz</name>
</author>
<author>
<name sortKey="Berthommier, F" uniqKey="Berthommier F">F. Berthommier</name>
</author>
<author>
<name sortKey="Savariaux, C" uniqKey="Savariaux C">C. Savariaux</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schwartz, J L" uniqKey="Schwartz J">J. L. Schwartz</name>
</author>
<author>
<name sortKey="Menard, L" uniqKey="Menard L">L. Ménard</name>
</author>
<author>
<name sortKey="Basirat, A" uniqKey="Basirat A">A. Basirat</name>
</author>
<author>
<name sortKey="Sato, M" uniqKey="Sato M">M. Sato</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schwartz, J L" uniqKey="Schwartz J">J. L. Schwartz</name>
</author>
<author>
<name sortKey="Savariaux, C" uniqKey="Savariaux C">C. Savariaux</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stein, B E" uniqKey="Stein B">B. E. Stein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stein, B E" uniqKey="Stein B">B. E. Stein</name>
</author>
<author>
<name sortKey="Meredith, M A" uniqKey="Meredith M">M. A. Meredith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stekelenburg, J J" uniqKey="Stekelenburg J">J. J. Stekelenburg</name>
</author>
<author>
<name sortKey="Vroomen, J" uniqKey="Vroomen J">J. Vroomen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sumby, W H" uniqKey="Sumby W">W. H. Sumby</name>
</author>
<author>
<name sortKey="Pollack, I" uniqKey="Pollack I">I. Pollack</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Treille, A" uniqKey="Treille A">A. Treille</name>
</author>
<author>
<name sortKey="Cordeboeuf, C" uniqKey="Cordeboeuf C">C. Cordeboeuf</name>
</author>
<author>
<name sortKey="Vilain, C" uniqKey="Vilain C">C. Vilain</name>
</author>
<author>
<name sortKey="Sato, M" uniqKey="Sato M">M. Sato</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Wassenhove, V" uniqKey="Van Wassenhove V">V. van Wassenhove</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Wassenhove, V" uniqKey="Van Wassenhove V">V. van Wassenhove</name>
</author>
<author>
<name sortKey="Grant, K W" uniqKey="Grant K">K. W. Grant</name>
</author>
<author>
<name sortKey="Poeppel, D" uniqKey="Poeppel D">D. Poeppel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Wassenhove, V" uniqKey="Van Wassenhove V">V. van Wassenhove</name>
</author>
<author>
<name sortKey="Grant, K W" uniqKey="Grant K">K. W. Grant</name>
</author>
<author>
<name sortKey="Poeppel, D" uniqKey="Poeppel D">D. Poeppel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vroomen, J" uniqKey="Vroomen J">J. Vroomen</name>
</author>
<author>
<name sortKey="Stekelenburg, J J" uniqKey="Stekelenburg J">J. J. Stekelenburg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Winneke, A H" uniqKey="Winneke A">A. H. Winneke</name>
</author>
<author>
<name sortKey="Phillips, N A" uniqKey="Phillips N">N. A. Phillips</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Psychol</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Psychol</journal-id>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Psychology</journal-title>
</journal-title-group>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">24860533</article-id>
<article-id pub-id-type="pmc">4026678</article-id>
<article-id pub-id-type="doi">10.3389/fpsyg.2014.00420</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>The sound of your lips: electrophysiological cross-modal interactions during hand-to-face and face-to-face speech perception</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Treille</surname>
<given-names>Avril</given-names>
</name>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/115027"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Vilain</surname>
<given-names>Coriandre</given-names>
</name>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/143536"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Sato</surname>
<given-names>Marc</given-names>
</name>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/16852"></uri>
</contrib>
</contrib-group>
<aff>
<institution>CNRS, Département Parole and Cognition, Gipsa-Lab, UMR 5216, Grenoble Université</institution>
<country>Grenoble, France</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by:
<italic>Riikka Mottonen, University of Oxford, UK</italic>
</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by:
<italic>Joana Acha, Basque Centre on Cognition, Brain and Language, Spain; Takayuki Ito, Haskins Laboratories, USA</italic>
</p>
</fn>
<corresp id="fn001">*Correspondence:
<italic>Avril Treille, CNRS, Département Parole and Cognition, Gipsa-Lab, UMR 5216, Grenoble Université, 1180 Avenue Centrale, BP 25, 38040 Grenoble Cedex 9, France e-mail:
<email xlink:type="simple">avril.treille@gipsa-lab.inpg.fr</email>
</italic>
</corresp>
<fn fn-type="other" id="fn002">
<p>This article was submitted to Language Sciences, a section of the journal Frontiers in Psychology.</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>13</day>
<month>5</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>5</volume>
<elocation-id>420</elocation-id>
<history>
<date date-type="received">
<day>02</day>
<month>3</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>21</day>
<month>4</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2014 Treille, Vilain and Sato.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/">
<license-p> This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>Recent magneto-encephalographic and electro-encephalographic studies provide evidence for cross-modal integration during audio-visual and audio-haptic speech perception, with speech gestures viewed or felt from manual tactile contact with the speaker’s face. Given the temporal precedence of the haptic and visual signals on the acoustic signal in these studies, the observed modulation of N1/P2 auditory evoked responses during bimodal compared to unimodal speech perception suggests that relevant and predictive visual and haptic cues may facilitate auditory speech processing. To further investigate this hypothesis, auditory evoked potentials were here compared during auditory-only, audio-visual and audio-haptic speech perception in live dyadic interactions between a listener and a speaker. In line with previous studies, auditory evoked potentials were attenuated and speeded up during both audio-haptic and audio-visual compared to auditory speech perception. Importantly, the observed latency and amplitude reduction did not significantly depend on the degree of visual and haptic recognition of the speech targets. Altogether, these results further demonstrate cross-modal interactions between the auditory, visual and haptic speech signals. Although they do not contradict the hypothesis that visual and haptic sensory inputs convey predictive information with respect to the incoming auditory speech input, these results suggest that, at least in live conversational interactions, systematic conclusions on sensory predictability in bimodal speech integration have to be taken with caution, with the extraction of predictive cues likely depending on the variability of the speech stimuli.</p>
</abstract>
<kwd-group>
<kwd>audio-visual speech perception</kwd>
<kwd>audio-haptic speech perception</kwd>
<kwd>multisensory interactions</kwd>
<kwd>EEG</kwd>
<kwd>auditory evoked potentials</kwd>
</kwd-group>
<counts>
<fig-count count="3"></fig-count>
<table-count count="0"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="40"></ref-count>
<page-count count="8"></page-count>
<word-count count="0"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec>
<title>INTRODUCTION</title>
<p>How is information from different sensory modalities, such as sight, sound and touch, combined to form a single coherent percept? Central to adaptive behavior, multisensory integration occurs in everyday life when natural events in the physical world have to be integrated from different sensory sources. It is a highly complex process known to depend on the temporal, spatial and causal relationships between the sensory signals, to take place at different timescales in several subcortical and cortical structures, and to be mediated by both feedforward and backward neural projections. In addition to their coherence, the perceptual saliency and relevance of each sensory signal from the external environment, as well as their predictability and joint probability of occurring, also act on the integration process and on the representational format at which the sensory modalities interface (for reviews, see
<xref rid="B32" ref-type="bibr">Stein and Meredith, 1993</xref>
;
<xref rid="B31" ref-type="bibr">Stein, 2012</xref>
).</p>
<p>Audio-visual speech perception is a special case of multisensory processing that interfaces with the linguistic system. Although one can extract phonetic features from the acoustic signal alone, adding visual speech information from the speaker’s face is known to improve speech intelligibility in case of a degraded acoustic signal (
<xref rid="B34" ref-type="bibr">Sumby and Pollack, 1954</xref>
;
<xref rid="B5" ref-type="bibr">Benoît et al., 1994</xref>
;
<xref rid="B28" ref-type="bibr">Schwartz et al., 2004</xref>
), to facilitate the understanding of a semantically complex statement (
<xref rid="B24" ref-type="bibr">Reisberg et al., 1987</xref>
) or a foreign language (
<xref rid="B21" ref-type="bibr">Navarra and Soto-Faraco, 2005</xref>
), and to benefit hearing-impaired listeners (
<xref rid="B13" ref-type="bibr">Grant et al., 1998</xref>
). Conversely, in laboratory settings, adding incongruent visual speech information may interfere with auditory speech perception and even create an illusory percept (
<xref rid="B19" ref-type="bibr">McGurk and MacDonald, 1976</xref>
). Finally, as in other cases of bimodal integration, audio-visual speech integration depends on the perceptual saliency of both the auditory (
<xref rid="B14" ref-type="bibr">Green, 1998</xref>
) and visual (
<xref rid="B8" ref-type="bibr">Campbell and Massaro, 1997</xref>
) speech signals, as well as their spatial (
<xref rid="B16" ref-type="bibr">Jones and Munhall, 1997</xref>
) and temporal (
<xref rid="B37" ref-type="bibr">van Wassenhove et al., 2003</xref>
) relationships.</p>
<p>At the brain level, several magneto-encephalographic (MEG) and electro-encephalographic (EEG) studies demonstrate that visual speech input modulates auditory activity as early as 50–100 ms in the primary and secondary auditory cortices (
<xref rid="B25" ref-type="bibr">Sams et al., 1991</xref>
;
<xref rid="B17" ref-type="bibr">Klucharev et al., 2003</xref>
;
<xref rid="B18" ref-type="bibr">Lebib et al., 2003</xref>
;
<xref rid="B6" ref-type="bibr">Besle et al., 2004</xref>
;
<xref rid="B15" ref-type="bibr">Hertrich et al., 2007</xref>
;
<xref rid="B40" ref-type="bibr">Winneke and Phillips, 2011</xref>
). Importantly, it has been shown that both the latency and amplitude of auditory evoked responses (N1/P2, M100) are attenuated and speeded up during audio-visual compared to auditory-only speech perception (
<xref rid="B17" ref-type="bibr">Klucharev et al., 2003</xref>
;
<xref rid="B6" ref-type="bibr">Besle et al., 2004</xref>
;
<xref rid="B38" ref-type="bibr">van Wassenhove et al., 2005</xref>
;
<xref rid="B33" ref-type="bibr">Stekelenburg and Vroomen, 2007</xref>
;
<xref rid="B3" ref-type="bibr">Arnal et al., 2009</xref>
;
<xref rid="B23" ref-type="bibr">Pilling, 2010</xref>
;
<xref rid="B39" ref-type="bibr">Vroomen and Stekelenburg, 2010</xref>
;
<xref rid="B4" ref-type="bibr">Baart et al., 2014</xref>
;
<xref rid="B35" ref-type="bibr">Treille et al., 2014</xref>
). Moreover, N1/P2 latency facilitation also appears to be a direct function of the visemic information: the higher the visual recognition of the syllable, the greater the latency facilitation (
<xref rid="B38" ref-type="bibr">van Wassenhove et al., 2005</xref>
;
<xref rid="B3" ref-type="bibr">Arnal et al., 2009</xref>
). Since the visual speech signal preceded the acoustic speech signal by tens or hundreds of milliseconds in these studies, the observed speeding-up and amplitude suppression of auditory evoked potentials might both reflect non-speech-specific temporal (
<xref rid="B33" ref-type="bibr">Stekelenburg and Vroomen, 2007</xref>
;
<xref rid="B39" ref-type="bibr">Vroomen and Stekelenburg, 2010</xref>
) and phonetic (
<xref rid="B38" ref-type="bibr">van Wassenhove et al., 2005</xref>
;
<xref rid="B3" ref-type="bibr">Arnal et al., 2009</xref>
) visual predictions of the incoming auditory syllable (for recent discussions, see
<xref rid="B2" ref-type="bibr">Arnal and Giraud, 2012</xref>
;
<xref rid="B36" ref-type="bibr">van Wassenhove, 2013</xref>
;
<xref rid="B4" ref-type="bibr">Baart et al., 2014</xref>
).</p>
<p>Interestingly, speech can be perceived not only by the ear and by the eye but also by the hand, with orofacial speech gestures felt and monitored from manual tactile contact with the speaker’s face. Past studies on the Tadoma method provide evidence for successful communication abilities in trained deaf-blind individuals through the haptic modality (
<xref rid="B1" ref-type="bibr">Alcorn, 1932</xref>
;
<xref rid="B22" ref-type="bibr">Norton et al., 1977</xref>
). A few behavioral studies also demonstrate the influence of tactile information on auditory speech perception in untrained individuals without sensory impairment, especially in case of noisy or ambiguous acoustic signals (
<xref rid="B11" ref-type="bibr">Fowler and Dekle, 1991</xref>
;
<xref rid="B12" ref-type="bibr">Gick et al., 2008</xref>
;
<xref rid="B26" ref-type="bibr">Sato et al., 2010</xref>
). In a recent EEG study (
<xref rid="B35" ref-type="bibr">Treille et al., 2014</xref>
), electrophysiological evidence of cross-modal interactions was found during both audio-visual and audio-haptic speech perception, over the course of live dyadic interactions between a listener and a speaker. In this study, participants were seated at arm’s length from an experimenter and were instructed to manually categorize /pa/ or /ta/ syllables presented auditorily, visually and/or haptically. In line with the above-mentioned EEG/MEG studies, N1 auditory evoked responses were attenuated and speeded up during live audio-visual speech perception. Crucially, haptic information was also found to speed up auditory speech processing as early as 100 ms. Given the temporal precedence of the dynamic configurations of the articulators on the auditory signal, as attested in a behavioral control experiment, the observed audio-haptic interactions in the listener’s brain raise the possibility that the brain uses predictive temporal and/or phonetically relevant tactile information for auditory processing, even though extracting relevant speech information from the haptic modality is less natural. A clear limitation of this study, however, comes from the use of a simple two-alternative forced-choice identification task between /pa/ and /ta/ syllables and an insufficient number of trials per syllable for reliable EEG analyses.</p>
<p>To further explore whether perceivers might integrate tactile information in auditory speech perception as they do with visual information, the present study aimed at replicating the observed bimodal interactions during live face-to-face and hand-to-face speech perception (
<xref rid="B35" ref-type="bibr">Treille et al., 2014</xref>
). As observed in previous studies on audio-visual speech perception (
<xref rid="B38" ref-type="bibr">van Wassenhove et al., 2005</xref>
;
<xref rid="B3" ref-type="bibr">Arnal et al., 2009</xref>
), we also specifically tested whether modulation of N1/P2 auditory evoked potentials during both audio-visual and audio-haptic speech perception might depend on the degree to which the haptic and visual signals predict the incoming auditory speech target. To this aim, the experimental procedure was adapted from the Tadoma method and similar to that previously used by
<xref rid="B35" ref-type="bibr">Treille et al. (2014)</xref>
, except for the use of a three-alternative forced-choice identification task between /pa/, /ta/, and /ka/ syllables and a sufficient number of trials per syllable for reliable EEG analyses. A gradient of visual and haptic recognition between the three syllables was first attested in a behavioral experiment, a prerequisite for assessing visual and haptic predictability of the incoming auditory signal in a subsequent EEG experiment. In line with previous EEG studies on audio-visual speech integration (
<xref rid="B38" ref-type="bibr">van Wassenhove et al., 2005</xref>
;
<xref rid="B3" ref-type="bibr">Arnal et al., 2009</xref>
), we hypothesized that the higher the visual and haptic recognition of the syllable, the stronger the latency facilitation in the audio-visual and audio-haptic modalities.</p>
</sec>
<sec sec-type="materials|methods" id="s1">
<title>MATERIALS AND METHODS</title>
<sec>
<title>PARTICIPANTS</title>
<p>Sixteen healthy adults, native French speakers, participated in the study (eight females; mean age ± SD, 29 ± 8 years). All participants were right-handed, had normal or corrected-to-normal vision and reported no history of speaking, hearing or motor disorders. Written informed consent was obtained from all participants, and they were compensated for the time spent in the study. The study was approved by the Grenoble University Ethical Committee.</p>
</sec>
<sec>
<title>STIMULI</title>
<p>Based on a previous EEG study (
<xref rid="B38" ref-type="bibr">van Wassenhove et al., 2005</xref>
), /pa/, /ta/, and /ka/ syllables were selected in order to ensure precise acoustic onsets (thanks to the unvoiced bilabial /p/, alveolar /t/, and velar /k/ stop consonants), crucial for EEG analyses, and, importantly, to ensure a gradient of visual and haptic recognition between these syllables (the bilabial /p/ consonant notably being known to be more visually salient than the alveolar /t/ and velar /k/ consonants).</p>
</sec>
<sec>
<title>EXPERIMENTAL PROCEDURE</title>
<p>The study consisted of one behavioral experiment immediately followed by one EEG experiment. The behavioral experiment was performed in order to ensure a gradient of visual and haptic recognition of the /pa/, /ta/, and /ka/ syllables. Importantly, since individual syllable onsets of the experimenter’s productions were used as acoustical triggers for the EEG analyses, the visual-only and haptic-only modalities of presentation were not included in the EEG experiment. In both experiments, Presentation software (Neurobehavioral Systems, Albany, CA, USA) was used to control the visual stimuli for the experimenter and the audio stimuli (beep) for the participant, and to record key responses. In addition, all experimenter productions were recorded for off-line analyses in the EEG experiment.</p>
<sec>
<title>Behavioral experiment</title>
<p>In a first behavioral experiment, participants were individually tested in a sound-proof room and were seated at arm’s length from a female experimenter (see
<bold>Figure
<xref ref-type="fig" rid="F1">1A</xref>
</bold>
).</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>
<bold>(A)</bold>
Experimental design used in the audio-haptic (AH) modality. In the haptic (H) and AH modalities, participants were asked to keep their eyes closed with their right hand placed on the experimenter’s face and to categorize with their left hand each perceived syllable. In the auditory modality (A), participants were instructed to keep their eyes closed while, in the visual (V) and audio-visual modality (AV), they were asked to also look at the experimenter’s face. The behavioral experiment included A, V, H, AV, AH modalities while the EEG experiment only included A, AV, and AH modalities.
<bold>(B,C)</bold>
Mean percentage of correct identification for /pa/, /ta/, and /ka/ syllables in each modality of presentation in the
<bold>(B)</bold>
behavioral and
<bold>(C)</bold>
EEG experiments. Error bars represent standard errors of the mean.</p>
</caption>
<graphic xlink:href="fpsyg-05-00420-g001"></graphic>
</fig>
<p>They were told that they would be presented with /pa/, /ta/, or /ka/ syllables either auditorily, visually, audio-visually, haptically, or audio-haptically through hand-to-face contact. In the auditory modality (A), participants were instructed to keep their eyes closed and to listen to each syllable overtly produced by the experimenter. In the audio-visual modality (AV), they were asked to also look at the experimenter’s face. In the audio-haptic modality (AH), they were asked to keep their eyes closed with their right hand placed on the experimenter’s face (the thumb placed lightly and vertically against the experimenter’s lips and the other fingers placed horizontally along the jaw line in order to help distinguish lip and jaw movements). This experimental procedure was adapted from the Tadoma method and was similar to that previously used by
<xref rid="B35" ref-type="bibr">Treille et al. (2014)</xref>
. Finally, the visual-only (V) and haptic-only (H) modalities were similar to the AV and AH modalities except that the experimenter silently produced each syllable.</p>
<p>The experimenter faced the participant and a computer screen placed behind the participant. On each trial, the computer screen specified the syllable to be produced. To this aim, the syllable was printed three times on the computer screen at 1 Hz, with the last display serving as the visual go-signal to produce the syllable. The inter-trial interval was 3 s. The experimenter had previously practiced and learned to articulate each syllable in synchrony with the visual go-signal, starting from a neutral closed-mouth position and maintaining an even intonation, tempo and vocal intensity.</p>
<p>A three-alternative forced-choice identification task was used, with participants instructed to categorize each perceived syllable by pressing one of three keys corresponding to /pa/, /ta/, or /ka/ on a computer keyboard with their left hand. A brief single audio beep was delivered 600 ms after the visual go-signal (expected to occur in synchrony with the experimenter’s production), with participants told to produce their responses only after this audio go-signal. This procedure was used in order to dissociate sensory/perceptual responses from motor responses in the EEG data of the subsequent experiment. As a consequence, no reaction times were acquired and only response rates were considered in further analyses.</p>
<p>Every syllable (/pa/, /ta/, or /ka/) was presented 15 times in each modality (A, V, H, AV, AH) in a single randomized sequence for a total of 225 trials. The response key designation was counterbalanced across participants. Before the experiment, participants performed a few practice trials in all modalities. They received no instructions on how to interpret visual and haptic information, but they were asked to pay attention to both modalities during bimodal presentation.</p>
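The original experiment was driven by Presentation software; purely for illustration, a minimal Python sketch (with hypothetical names) of how such a randomized behavioral trial list could be built:

```python
import random

# Behavioral session: 3 syllables x 5 modalities x 15 repetitions = 225 trials,
# presented in a single randomized sequence (a sketch; the actual experiment
# was controlled by Presentation software).
SYLLABLES = ("pa", "ta", "ka")
MODALITIES = ("A", "V", "H", "AV", "AH")
N_REPETITIONS = 15

def build_behavioral_trial_list(seed=None):
    trials = [(syllable, modality)
              for syllable in SYLLABLES
              for modality in MODALITIES
              for _ in range(N_REPETITIONS)]
    random.Random(seed).shuffle(trials)
    return trials

trials = build_behavioral_trial_list(seed=1)
assert len(trials) == 225
```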
</sec>
<sec>
<title>EEG experiment</title>
<p>Because no reliable acoustical triggers could be obtained in the visual-only and haptic-only modalities, the EEG experiment only included three individual experimental sessions corresponding to the A, AV, and AH modalities of presentation. Apart from this difference and the number of trials, the experimental procedure was identical to that used in the behavioral experiment. In each session, every syllable (/pa/, /ta/, or /ka/) was presented 80 times in a randomized sequence for a total of 240 trials. The order of the modalities of presentation and the response key designation were fully counterbalanced across participants. Because the experimental procedure was quite taxing, each experimental session was split into two blocks of around 6 min each, allowing short breaks for both the experimenter and the participants.</p>
</sec>
</sec>
<sec>
<title>EEG ACQUISITION</title>
<p>In the EEG experiment, EEG data were continuously recorded from 64 scalp electrodes (Electro-Cap International, Inc., positioned according to the international 10–20 system) using the Biosemi ActiveTwo AD-box EEG system operating at a sampling rate of 256 Hz. Two additional electrodes served as reference (common mode sense [CMS] active electrode) and ground (driven right leg [DRL] passive electrode). One other external reference electrode was placed at the top of the nose. Electro-oculograms measuring horizontal (HEOG) and vertical (VEOG) eye movements were recorded using electrodes placed at the outer canthus of each eye as well as above and below the right eye. Before the experiment, the impedance of all electrodes was adjusted to obtain low offset voltages and a stable DC level.</p>
</sec>
<sec>
<title>DATA ANALYSES</title>
<sec>
<title>Behavioral analyses</title>
<p>In both the behavioral and EEG experiments, the proportion of correct responses was individually determined for each participant, each syllable and each modality. Two-way repeated-measures ANOVAs were performed on these data with the modality (A, V, H, AV, AH in the behavioral experiment; A, AV, AH in the EEG experiment) and the syllable (/pa/, /ta/, /ka/) as within-subjects variables.</p>
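For illustration only, a minimal sketch of such a two-way repeated-measures ANOVA in Python, using statsmodels' AnovaRM on synthetic accuracy data (the analysis software actually used by the authors is not specified here, and Greenhouse–Geisser correction and Newman–Keuls post hoc tests are not part of this sketch):

```python
import itertools
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic long-format data: one row per participant x modality x syllable,
# with the proportion of correct responses as the dependent variable.
rng = np.random.default_rng(0)
rows = [{"subject": s, "modality": m, "syllable": syl,
         "accuracy": rng.uniform(0.7, 1.0)}
        for s, m, syl in itertools.product(range(1, 17),               # 16 participants
                                           ["A", "V", "H", "AV", "AH"],
                                           ["pa", "ta", "ka"])]
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA with modality and syllable as within-subject factors.
res = AnovaRM(df, depvar="accuracy", subject="subject",
              within=["modality", "syllable"]).fit()
print(res.anova_table)
```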
</sec>
<sec>
<title>Acoustical analyses</title>
<p>In the EEG experiment, acoustical analyses were performed on the experimenter’s recorded syllables in order to determine the individual syllable onsets serving as acoustical triggers for the EEG analyses. All acoustical analyses were performed using Praat software (
<xref rid="B7" ref-type="bibr">Boersma and Weenink, 2013</xref>
). First, an automatic procedure based on an intensity and duration detection algorithm roughly identified each syllable’s onset in the A, AV, and AH modalities (11520 utterances). For all syllables, these onsets were then manually and precisely determined, based on waveform and spectrogram information related to the acoustic characteristics of unvoiced stop consonants. Omissions and wrong productions were identified and removed from the analyses (less than 1%).</p>
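A rough sketch of the kind of intensity-based onset detection described above, written in Python with NumPy (the actual analysis was done in Praat; the frame size, threshold and minimum duration below are illustrative assumptions):

```python
import numpy as np

def rough_syllable_onset(signal, sr, frame_ms=10, threshold_db=-35.0, min_dur_ms=50):
    """Return an approximate syllable onset time (in s), or None if not found.

    `signal` is a 1-D NumPy array. The short-term RMS level (in dB relative to
    the loudest frame) must exceed `threshold_db` for at least `min_dur_ms` to
    count as an onset; in practice such rough onsets would then be refined by
    hand from the waveform and spectrogram, as described above.
    """
    frame_len = max(1, int(sr * frame_ms / 1000))
    n_frames = len(signal) // frame_len
    rms = np.array([np.sqrt(np.mean(signal[i * frame_len:(i + 1) * frame_len] ** 2))
                    for i in range(n_frames)])
    level_db = 20 * np.log10(rms / (rms.max() + 1e-12) + 1e-12)
    above = level_db > threshold_db
    min_frames = max(1, min_dur_ms // frame_ms)
    for i in range(n_frames - min_frames + 1):
        if above[i:i + min_frames].all():
            return i * frame_len / sr
    return None
```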
</sec>
<sec>
<title>EEG analyses</title>
<p>EEG data were processed using the EEGLAB toolbox (
<xref rid="B10" ref-type="bibr">Delorme and Makeig, 2004</xref>
) running on Matlab (Mathworks, Natick, MA, USA). Since N1/P2 auditory evoked potentials have maximal response over central sites on the scalp (
<xref rid="B27" ref-type="bibr">Scherg and Von Cramon, 1986</xref>
;
<xref rid="B20" ref-type="bibr">Näätänen and Picton, 1987</xref>
), EEG data preprocessing and analyses were conducted on three central electrodes (C3, Cz, C4). These electrodes, covering left, middle, and right central sites, were also selected based on previous EEG studies on audio-visual speech perception (e.g.,
<xref rid="B17" ref-type="bibr">Klucharev et al., 2003</xref>
;
<xref rid="B6" ref-type="bibr">Besle et al., 2004</xref>
;
<xref rid="B23" ref-type="bibr">Pilling, 2010</xref>
;
<xref rid="B35" ref-type="bibr">Treille et al., 2014</xref>
). EEG data were first re-referenced off-line to the nose recording and band-pass filtered using a two-way least-squares FIR filter (1–20 Hz). Data were then segmented into epochs of 1000 ms (from -500 ms to +500 ms relative to the acoustic syllable onset, individually determined from the acoustical analyses), with the prestimulus baseline defined from -500 ms to -400 ms. Epochs with an amplitude change exceeding ±60 μV at any channel (including the HEOG and VEOG channels) were rejected (on average, less than 10%).</p>
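A minimal sketch of this preprocessing pipeline in Python with NumPy/SciPy, assuming the continuous recording is stored as a (channels x samples) array in microvolts; EEGLAB's two-way least-squares FIR filter is approximated here by a window-design FIR applied forward and backward, and the rejection criterion is interpreted as any sample exceeding ±60 μV:

```python
import numpy as np
from scipy.signal import firwin, filtfilt

SR = 256                    # sampling rate (Hz)
EPOCH = (-0.5, 0.5)         # epoch limits around the acoustic onset (s)
BASELINE = (-0.5, -0.4)     # prestimulus baseline (s)
REJECT_UV = 60.0            # rejection threshold (microvolts)

def preprocess(eeg_uv, nose_ref_uv, onsets_s, numtaps=257):
    """eeg_uv: (n_channels, n_samples) array; nose_ref_uv: (n_samples,) nose electrode;
    onsets_s: acoustic syllable onsets (s). Returns (n_kept_epochs, n_channels, n_times)."""
    # Re-reference to the nose recording, then band-pass filter 1-20 Hz
    # (applied forward and backward for zero phase).
    data = eeg_uv - nose_ref_uv
    taps = firwin(numtaps, [1.0, 20.0], pass_zero=False, fs=SR)
    data = filtfilt(taps, [1.0], data, axis=-1)

    # Epoch around each acoustic onset, baseline-correct, and reject artifacted epochs.
    start, stop = int(EPOCH[0] * SR), int(EPOCH[1] * SR)
    b0, b1 = int((BASELINE[0] - EPOCH[0]) * SR), int((BASELINE[1] - EPOCH[0]) * SR)
    kept = []
    for t in onsets_s:
        center = int(round(t * SR))
        epoch = data[:, center + start:center + stop].copy()
        epoch -= epoch[:, b0:b1].mean(axis=-1, keepdims=True)
        if np.abs(epoch).max() <= REJECT_UV:   # interpreted +/-60 microvolt criterion
            kept.append(epoch)
    return np.stack(kept)
```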
<p>For each participant and each modality, the peak latencies of the auditory N1 and P2 evoked responses were first determined on the EEG waveform averaged over all electrodes and syllables. For each syllable, two temporal windows were then defined around these peaks (±30 ms) in order to individually calculate N1 and P2 amplitude and latency on the related average waveform of the C3, Cz, and C4 electrodes. Two-way repeated-measures ANOVAs were then performed on N1 and P2 amplitude and latency with the modality (A, AV, AH) and the syllable (/pa/, /ta/, /ka/) as within-subjects variables.</p>
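Continuing the sketch above, the peak measurement step might look as follows in Python, assuming `erp` is a 1-D waveform already averaged over the C3, Cz and C4 electrodes (and, for the first step, over syllables); the search windows for the global peaks are illustrative assumptions:

```python
import numpy as np

SR = 256
TIMES = np.arange(-0.5, 0.5, 1.0 / SR)   # epoch time axis (s), matching the sketch above

def peak_in_window(erp, t_min, t_max, negative=True):
    """Latency (s) and amplitude of the most negative (N1-like) or most
    positive (P2-like) sample of `erp` within [t_min, t_max]."""
    idx = np.flatnonzero((TIMES >= t_min) & (TIMES <= t_max))
    i = idx[np.argmin(erp[idx]) if negative else np.argmax(erp[idx])]
    return TIMES[i], erp[i]

# Step 1 (per participant and modality): global N1/P2 peak latencies on the waveform
# averaged over all electrodes and syllables (hypothetical search windows).
# n1_lat, _ = peak_in_window(grand_erp, 0.05, 0.20, negative=True)
# p2_lat, _ = peak_in_window(grand_erp, 0.15, 0.30, negative=False)
# Step 2 (per syllable): amplitude and latency within +/-30 ms of those peaks,
# measured on the C3/Cz/C4 average waveform.
# n1_lat_pa, n1_amp_pa = peak_in_window(erp_pa, n1_lat - 0.03, n1_lat + 0.03, negative=True)
```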
<p>In order to confirm previous EEG/MEG studies demonstrating that P2 and M100 latency reductions in the audio-visual modality vary as a function of the visual recognition of the presented syllable (
<xref rid="B38" ref-type="bibr">van Wassenhove et al., 2005</xref>
;
<xref rid="B3" ref-type="bibr">Arnal et al., 2009</xref>
), additional Pearson’s correlation analyses were carried out. These correlation analyses were performed between the individual visual and haptic recognition scores of the three syllables in the behavioral experiment and the related latency facilitation and amplitude reduction observed in the AV and AH modalities in the EEG experiment (leading to 3 × 16 correlation points per measure and per modality). In addition to raw data, these analyses were also performed on individual
<italic>Z</italic>
-score normalized data, in order to take account of individual differences.</p>
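As an illustration, this correlation step could be sketched as follows in Python/SciPy; the inputs are hypothetical (n_participants x n_syllables) arrays, here 16 x 3, giving the 3 x 16 = 48 correlation points per measure and per modality mentioned above, and the within-participant Z-scoring is an assumption about the exact normalization used:

```python
import numpy as np
from scipy.stats import pearsonr, zscore

def recognition_vs_modulation(recognition, modulation, normalize=False):
    """Pearson correlation between per-syllable recognition scores (V or H,
    behavioral experiment) and the corresponding amplitude reduction or latency
    facilitation (AV or AH, EEG experiment).

    Both arguments are (n_participants, n_syllables) arrays, e.g., 16 x 3.
    """
    if normalize:
        # Z-score within each participant to take individual differences into account.
        recognition = zscore(recognition, axis=1)
        modulation = zscore(modulation, axis=1)
    r, p = pearsonr(recognition.ravel(), modulation.ravel())
    return r, p
```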
</sec>
</sec>
</sec>
<sec>
<title>RESULTS</title>
<p>For all the following analyses, the significance level was set at
<italic>p</italic>
= 0.05 and Greenhouse–Geisser corrected (for violation of the sphericity assumption) when appropriate. When required,
<italic>post hoc</italic>
analyses were conducted with Newman–Keuls tests.</p>
<sec>
<title>BEHAVIORAL ANALYSES</title>
<sec>
<title>Behavioral experiment (see Figure
<xref ref-type="fig" rid="F1">1B</xref>
)</title>
<p>Overall, the mean proportion of correct responses was 94%. The main effect of modality of presentation was significant [
<italic>F</italic>
(4,60) = 33.67,
<italic>p</italic>
< 0.001], with more correct responses in A, AV, and AH modalities than in V and H modalities (as shown by
<italic>post hoc</italic>
analyses, all
<italic>p</italic>
’s < 0.001). Significant differences were also observed between syllables [
<italic>F</italic>
(2,30) = 15.59,
<italic>p</italic>
< 0.001], with more correct responses for /pa/ than for /ta/ and /ka/ syllables (as shown by
<italic>post hoc</italic>
analyses, all
<italic>p</italic>
’s < 0.001). Finally, the interaction between the modality and the syllable was also reliable [
<italic>F</italic>
(8,120) = 7.39,
<italic>p</italic>
< 0.001]. While no significant differences were observed between syllables in A, AV, and AH modalities (with almost perfect identification for all syllables), more correct responses were observed for /pa/ than for /ta/ and /ka/ syllables in both V and H modalities (as shown by
<italic>post hoc</italic>
analyses, all
<italic>p</italic>
’s < 0.001). Altogether, these results thus demonstrate a near perfect identification of /pa/ in all modalities, but a lower accuracy for /ta/ and /ka/ syllables in V and H modalities.</p>
</sec>
<sec>
<title>EEG experiment (see Figure
<xref ref-type="fig" rid="F1">1C</xref>
)</title>
<p>In the EEG experiment, the mean proportion of correct responses was 99%. No significant effect of the modality [
<italic>F</italic>
(2,30) = 1.72], syllable [
<italic>F</italic>
(2,30) = 1.34] or interaction [
<italic>F</italic>
(4,60) = 0.90] was observed, with a near perfect identification of all syllables in A, AV, and AH modalities.</p>
</sec>
</sec>
<sec>
<title>EEG ANALYSES</title>
<sec>
<title>N1 amplitude (see Figures
<xref ref-type="fig" rid="F2">2</xref>
and
<xref ref-type="fig" rid="F3">3A</xref>
-left)</title>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>
<bold>Grand-average of auditory evoked potentials for /pa/, /ta/, and /ka/ syllables averaged over the left (C3), middle (Cz), and right (C4) central electrodes in the auditory, audio-visual, and audio-haptic modalities</bold>
.</p>
</caption>
<graphic xlink:href="fpsyg-05-00420-g002"></graphic>
</fig>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption>
<p>
<bold>Left.</bold>
Mean N1
<bold>(A)</bold>
and P2
<bold>(B)</bold>
amplitude and mean N1
<bold>(C)</bold>
and P2
<bold>(D)</bold>
latency for /pa/, /ta/, and /ka/ syllables averaged over left (C3), middle (Cz), and right (C4) central electrodes in the auditory (A), audio-visual (AV), and audio-haptic (AH) modalities. Error bars represent standard errors of the mean. * indicates a significant effect.
<bold>Right.</bold>
Correlation on raw data between the recognition scores observed in the visual-only and haptic-only modalities in the behavioral experiment (
<italic>x</italic>
-axis) and the amplitude reduction and latency facilitation observed in the audio-visual and audio-haptic modalities in the EEG experiment (
<italic>y</italic>
-axis). No correlation was significant.</p>
</caption>
<graphic xlink:href="fpsyg-05-00420-g003"></graphic>
</fig>
<p>The main effect of modality was significant [
<italic>F</italic>
(2,30) = 9.19,
<italic>p</italic>
< 0.001], with a reduced negative N1 amplitude observed in the AV and AH modalities as compared to the A modality (as shown by
<italic>post hoc</italic>
analyses,
<italic>p</italic>
< 0.001 and
<italic>p</italic>
< 0.02, respectively; on average, A: -5.3 μV, AV: -3.1 μV, AH: -4.1 μV). The interaction between the modality and the syllable was also found to be significant [
<italic>F</italic>
(4,60) = 7.23,
<italic>p</italic>
< 0.001]. While for /pa/ a significant amplitude reduction was observed in both AV and AH modalities as compared to the A modality, an amplitude reduction was only observed in the AV modality for /ta/ and /ka/ syllables (as shown by
<italic>post hoc</italic>
analyses, all
<italic>p</italic>
’s < 0.001, see
<bold>Figure
<xref ref-type="fig" rid="F3">3A</xref>
</bold>
-left). In sum, these results demonstrate a visually induced amplitude suppression for all syllables and, importantly, a haptically induced amplitude suppression, but only for the /pa/ syllable.</p>
</sec>
<sec>
<title>P2 amplitude (see Figures
<xref ref-type="fig" rid="F2">2</xref>
and
<xref ref-type="fig" rid="F3">3B</xref>
-left)</title>
<p>No significant effect of the modality [
<italic>F</italic>
(2,30) = 1.91], the syllable [
<italic>F</italic>
(2,30) = 1.09], or their interaction [
<italic>F</italic>
(4,60) = 1.58] was observed.</p>
</sec>
<sec>
<title>N1 latency (see Figures
<xref ref-type="fig" rid="F2">2</xref>
and
<xref ref-type="fig" rid="F3">3C</xref>
-left)</title>
<p>No significant effect of the modality [
<italic>F</italic>
(2,30) = 0.36], the syllable [
<italic>F</italic>
(2,30) = 3.13], or their interaction [
<italic>F</italic>
(4,60) = 1.78] was observed.</p>
</sec>
<sec>
<title>P2 latency (see Figures
<xref ref-type="fig" rid="F2">2</xref>
and
<xref ref-type="fig" rid="F3">3D</xref>
-left)</title>
<p>The main effect of syllable [
<italic>F</italic>
(2,30) = 4.54,
<italic>p</italic>
< 0.02] was reliable, with shorter P2 latencies observed for /pa/ and /ta/ syllables as compared to /ka/ (as shown by
<italic>post hoc</italic>
analyses, all
<italic>p</italic>
’s < 0.03; on average, /pa/: 210 ms, /ta/: 211 ms, /ka/: 217 ms). Crucially, the main effect of modality was significant [
<italic>F</italic>
(2,30) = 4.05,
<italic>p</italic>
< 0.03], with shorter latencies in AV and AH as compared to the A modality (as shown by
<italic>post hoc</italic>
analyses, all
<italic>p</italic>
’s < 0.05; on average, A: 223 ms, AV: 208 ms, AH: 207 ms). In sum, these results indicate faster processing of the P2 auditory evoked potential for /pa/ and /ta/ syllables. In addition, a latency facilitation was observed in both AV and AH modalities, irrespective of the presented syllable.</p>
</sec>
<sec>
<title>Correlation between perceptual recognition scores (see Figure
<xref ref-type="fig" rid="F3">3</xref>
-right)</title>
<p>For raw data, regardless of the modality, no significant correlation was observed, however, for N1 amplitude (AV:
<italic>r</italic>
= 0.09,
<italic>p</italic>
= 0.54; AH:
<italic>r</italic>
= 0.06,
<italic>p</italic>
= 0.70), P2 amplitude (AV:
<italic>r</italic>
= 0.25,
<italic>p</italic>
= 0.09; AH:
<italic>r</italic>
= -0.09,
<italic>p</italic>
= 0.53), N1 latency (AV:
<italic>r</italic>
= -0.06,
<italic>p</italic>
= 0.71; AH:
<italic>r</italic>
= 0.11,
<italic>p</italic>
= 0.45), and P2 latency (AV:
<italic>r</italic>
= 0.07,
<italic>p</italic>
= 0.66; AH:
<italic>r</italic>
= -0.01,
<italic>p</italic>
= 0.92). Additional correlation analyses on normalized data also failed to reveal any significant correlation for N1 and P2 amplitude (N1-AV:
<italic>r</italic>
= 0.01,
<italic>p</italic>
= 0.98; N1-AH:
<italic>r</italic>
= 0.18,
<italic>p</italic>
= 0.87; P2-AV:
<italic>r</italic>
= 0.21,
<italic>p</italic>
= 0.15; P2-AH:
<italic>r</italic>
= 0.02,
<italic>p</italic>
= 0.91) and latency (N1-AV:
<italic>r</italic>
= 0.01,
<italic>p</italic>
= 0.92; N1-AH:
<italic>r</italic>
= 0.12,
<italic>p</italic>
= 0.65; P2-AV:
<italic>r</italic>
= 0.06,
<italic>p</italic>
= 0.68; P2-AH:
<italic>r</italic>
= -0.02,
<italic>p</italic>
= 0.87).</p>
</sec>
</sec>
</sec>
<sec>
<title>DISCUSSION</title>
<p>Two main results emerge from the present study. First, in line with our previous results (
<xref rid="B35" ref-type="bibr">Treille et al., 2014</xref>
), a modulation of N1/P2 auditory evoked potentials was observed during live audio-visual and audio-haptic speech perception compared to auditory speech perception. However, contrary to two previous studies of audio-visual speech perception (
<xref rid="B38" ref-type="bibr">van Wassenhove et al., 2005</xref>
;
<xref rid="B3" ref-type="bibr">Arnal et al., 2009</xref>
), no significant correlation was observed between the latency facilitation observed in the bimodal conditions and the degree of visual and haptic recognition of the presented syllables.</p>
<p>Before discussing these results, it is important to consider one potential limitation of the present study. Classically, testing cross-modal interactions requires determining that the observed response in the bimodal condition differs from the sum of those observed in the unimodal conditions (e.g., AV ≠ A + V). However, the visual-only and haptic-only modalities were not tested here, due to the technical difficulty of obtaining temporally accurate and reliable triggers for EEG analyses. Notably, because of their temporal limitation and variability, visual and/or surface electromyographic recordings of the experimenter’s lip, jaw or tongue movements would not have allowed us to determine reliable triggers (especially in the case of lip stretching for /ta/ and /ka/ syllables). Regarding the possibility that the observed bimodal neural responses simply reflect a superposition of the unimodal signals, it should however be noted that auditory evoked potentials are rarely observed in the visual-only modality at central electrodes (
<xref rid="B6" ref-type="bibr">Besle et al., 2004</xref>
;
<xref rid="B38" ref-type="bibr">van Wassenhove et al., 2005</xref>
;
<xref rid="B23" ref-type="bibr">Pilling, 2010</xref>
). Furthermore, in our previous study and using the same experimental design, we obtained behavioral evidence for a strong temporal precedence of the haptic and visual signals on the acoustic signal (
<xref rid="B35" ref-type="bibr">Treille et al., 2014</xref>
). In our view, it is therefore unlikely that visual and haptic event-related potentials would arise at the same latency and at the same central electrodes as the N1 and P2 auditory evoked potentials. For these reasons, we here compared neural responses in each bimodal condition to the auditory-only condition (i.e., AV ≠ A and AH ≠ A), a testing procedure that has previously demonstrated latency facilitation and amplitude reduction of auditory evoked potentials in audio-visual compared to auditory-only speech perception (
<xref rid="B38" ref-type="bibr">van Wassenhove et al., 2005</xref>
;
<xref rid="B23" ref-type="bibr">Pilling, 2010</xref>
).</p>
<p>In spite of this limitation, the observed modulation of N1/P2 auditory evoked potentials in the audio-visual condition strongly suggests cross-modal speech interactions. It is first worth noting that, for each participant, the three syllables were randomly presented in each session in order to minimize repetition effects, and the order of the modality of presentation was fully counterbalanced across participants, so that possible overlapping modality effects are unlikely. In addition, auditory evoked responses were compared between modalities with the same number of trials and therefore similar possible habituation effects. Although our results appear globally consistent with previous EEG studies, some differences should however be mentioned. First, while the observed amplitude reduction was here confined to the N1 auditory evoked potential, as in our previous study (
<xref rid="B35" ref-type="bibr">Treille et al., 2014</xref>
; see also
<xref rid="B6" ref-type="bibr">Besle et al., 2004</xref>
), such a visually induced suppression has been previously observed for both N1 and P2 auditory components (
<xref rid="B17" ref-type="bibr">Klucharev et al., 2003</xref>
;
<xref rid="B38" ref-type="bibr">van Wassenhove et al., 2005</xref>
;
<xref rid="B33" ref-type="bibr">Stekelenburg and Vroomen, 2007</xref>
;
<xref rid="B23" ref-type="bibr">Pilling, 2010</xref>
;
<xref rid="B4" ref-type="bibr">Baart et al., 2014</xref>
) or only for the P2 component (
<xref rid="B4" ref-type="bibr">Baart et al., 2014</xref>
). Second, the observed P2 latency facilitation also contrasts with previous studies showing earlier latencies during audio-visual speech perception for both N1 and P2 peaks (
<xref rid="B38" ref-type="bibr">van Wassenhove et al., 2005</xref>
; see also
<xref rid="B23" ref-type="bibr">Pilling, 2010</xref>
, for a small but inconsistent effect) or only for the N1 peak (
<xref rid="B33" ref-type="bibr">Stekelenburg and Vroomen, 2007</xref>
;
<xref rid="B4" ref-type="bibr">Baart et al., 2014</xref>
;
<xref rid="B35" ref-type="bibr">Treille et al., 2014</xref>
). Given these differences, it can be hypothesized that the N1 and P2 components, as well as the latency facilitation and amplitude reduction effects, might reflect different aspects and/or stages of audio-visual speech integration. For instance,
<xref rid="B38" ref-type="bibr">van Wassenhove et al. (2005)</xref>
observed a visually induced suppression of both N1 and P2 components independently of the visual saliency of the speech stimuli, but a latency reduction of N1 and P2 peaks that depended on the degree of their visual predictability. On the basis of these results, they argue for two distinct integration stages: (1) a global bimodal perceptual stage, reflected in the amplitude reduction, independent of the featural content of the visual stimulus and possibly reflecting phase-coupling of the auditory and visual cortices, and (2) a featural phonetic stage, reflected in the latency facilitation and stronger for P2, in which articulator-specific and predictive visual information is taken into account in auditory phonetic processing (for further discussion, see
<xref rid="B36" ref-type="bibr">van Wassenhove, 2013</xref>
). In parallel,
<xref rid="B33" ref-type="bibr">Stekelenburg and Vroomen (2007)</xref>
,
<xref rid="B39" ref-type="bibr">Vroomen and Stekelenburg (2010)</xref>
, and
<xref rid="B4" ref-type="bibr">Baart et al. (2014)</xref>
also argue for a bimodal, non-speech-specific stage in audio-visual speech integration, but one thought to be reflected in the N1 latency facilitation and amplitude reduction. Consistent with this hypothesis, they observed an amplitude and a latency reduction of auditory-evoked N1 responses during audio-visual perception for both speech and non-speech actions, such as clapping hands (
<xref rid="B33" ref-type="bibr">Stekelenburg and Vroomen, 2007</xref>
), as well as for artificial audio-visual stimuli, such as two moving disks predicting a pure tone when colliding with a fixed rectangle (
<xref rid="B39" ref-type="bibr">Vroomen and Stekelenburg, 2010</xref>
). In addition, they provided evidence for a P2 amplitude reduction specifically dependent on the phonetic predictability of the visual speech input (
<xref rid="B4" ref-type="bibr">Baart et al., 2014</xref>
; see also
<xref rid="B39" ref-type="bibr">Vroomen and Stekelenburg, 2010</xref>
). Taken together, although the differences between the present and previous studies regarding N1 and/or P2 latency facilitation and/or amplitude reduction are still a matter of debate (
<xref rid="B38" ref-type="bibr">van Wassenhove et al., 2005</xref>
;
<xref rid="B4" ref-type="bibr">Baart et al., 2014</xref>
), these effects might both reflect multistage processes in audio-visual speech integration and partly derive from the specific experimental settings used in these studies.</p>
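For readers unfamiliar with how an "N1 amplitude reduction" or a "P2 latency facilitation" is operationalized, the sketch below extracts N1 and P2 peak latencies and amplitudes from a mock averaged ERP within conventional time windows (roughly 70-150 ms for N1 and 150-250 ms for P2). The waveform, sampling rate and window bounds are illustrative assumptions, not the parameters reported in the Methods of this study.

# Illustrative sketch (assumed windows): locate N1 and P2 peaks in an averaged ERP.
import numpy as np

fs = 500                                   # sampling rate in Hz (assumption)
times = np.arange(-0.1, 0.5, 1 / fs)       # epoch from -100 to +500 ms
erp = np.zeros_like(times)                 # stand-in for a real averaged ERP
erp -= 5 * np.exp(-((times - 0.11) ** 2) / (2 * 0.015 ** 2))  # mock N1 deflection
erp += 4 * np.exp(-((times - 0.20) ** 2) / (2 * 0.020 ** 2))  # mock P2 deflection

def peak_in_window(times, erp, t_min, t_max, polarity):
    # Return (latency in ms, amplitude) of the most extreme sample in the window.
    mask = (times >= t_min) & (times <= t_max)
    idx = np.argmin(erp[mask]) if polarity == "neg" else np.argmax(erp[mask])
    sel = np.where(mask)[0][idx]
    return times[sel] * 1000, erp[sel]

n1_lat, n1_amp = peak_in_window(times, erp, 0.07, 0.15, "neg")
p2_lat, p2_amp = peak_in_window(times, erp, 0.15, 0.25, "pos")
print(f"N1: {n1_lat:.0f} ms, {n1_amp:.1f} uV; P2: {p2_lat:.0f} ms, {p2_amp:.1f} uV")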
<p>Regarding this latter possibility, one interesting finding is that the observed latency and amplitude reduction in the EEG experiment, notably for the P2 component, did not significantly depend on the degree of visual recognition of the speech targets in the behavioral experiment. This contrasts with two previous studies reporting latency shifts of auditory evoked responses that varied directly as a function of the visemic information (
<xref rid="B38" ref-type="bibr">van Wassenhove et al., 2005</xref>
;
<xref rid="B3" ref-type="bibr">Arnal et al., 2009</xref>
). For instance,
<xref rid="B38" ref-type="bibr">van Wassenhove et al. (2005)</xref>
demonstrated a visually induced facilitation of the P2 auditory evoked potential that systematically varied with the visual-only recognition of the presented syllable (i.e., the more visually salient the syllable, the stronger the latency facilitation). While they observed P2 latency facilitations of around 25 ms, 16 ms, and 8 ms for /pa/, /ta/, and /ka/ syllables, respectively, we here observed latency facilitations of around 17 ms, 13 ms, and 15 ms for the same syllables. However, correlation scores likely depend on overall differences in recognition scores between syllables, which were larger in previous studies (
<xref rid="B38" ref-type="bibr">van Wassenhove et al., 2005</xref>
;
<xref rid="B3" ref-type="bibr">Arnal et al., 2009</xref>
). Furthermore, one important difference between our experimental setting and those used in these two studies is that audio-visual interactions were here tested during live face-to-face interactions between a speaker and a listener, with a unique token of the presented syllable in each trial. This natural stimulus variability contrasts with the limited number of tokens used to represent each syllable in the previous studies, which were repeatedly presented to the participants (i.e.,
<xref rid="B38" ref-type="bibr">van Wassenhove et al. (2005)</xref>
: one speaker, three syllables, one token per syllable and 100 trials per syllable and per modality;
<xref rid="B3" ref-type="bibr">Arnal et al. (2009)</xref>
: one speaker, five syllables, one token per syllable and 54 trials per syllable and per modality). Similarly, another experimental factor possibly impacting bimodal speech integration is the number of syllable types. In this respect, it is worth noting that we did observe a latency facilitation during live face-to-face speech perception in our previous study, using a similar experimental design, but only for the N1 component (
<xref rid="B35" ref-type="bibr">Treille et al., 2014</xref>
). In that study, however, a simple two-alternative forced-choice identification task between /pa/ and /ta/ syllables was used. It is therefore possible that the specific phonetic content of these two syllables was less perceptually dominant in that previous study, with a more global yes-no strategy applied in relation to the more salient bilabial movements for /pa/ as compared to /ta/ (for experimental designs using only two distinct speech stimuli, see also
<xref rid="B33" ref-type="bibr">Stekelenburg and Vroomen, 2007</xref>
;
<xref rid="B23" ref-type="bibr">Pilling, 2010</xref>
;
<xref rid="B39" ref-type="bibr">Vroomen and Stekelenburg, 2010</xref>
;
<xref rid="B4" ref-type="bibr">Baart et al., 2014</xref>
). Overall, given the significant P2 latency facilitation, our results do not contradict the hypothesis that visual inputs convey predictive information with respect to the incoming auditory speech input (for a discussion on the sensory predictability of audio-visual speech stimuli, see
<xref rid="B9" ref-type="bibr">Chandrasekaran et al., 2009</xref>
;
<xref rid="B30" ref-type="bibr">Schwartz and Savariaux, 2013</xref>
), nor the possibility that the visual predictability of the speech stimulus is reflected in auditory evoked responses. We simply argue that, in audio-visual speech perception, visual predictions about the incoming acoustic signal are likely constrained not only by the featural content of the visual stimuli but also by the experimental context and by the short-term memory traces and knowledge the listener has previously acquired about these stimuli.</p>
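To make the correlation analysis discussed above concrete, the short sketch below relates visual-only recognition scores to P2 latency facilitation across syllables, in the spirit of van Wassenhove et al. (2005). The latency values are those cited in the paragraph above, while the recognition proportions are invented for illustration; with only three syllables such a test has essentially no power, so in practice correlations of this kind would be computed over participants and syllables.

# Hedged sketch (partly made-up numbers): does P2 latency facilitation scale
# with visual-only recognition across syllables?
import numpy as np
from scipy import stats

syllables = ["pa", "ta", "ka"]
visual_recognition = np.array([0.95, 0.70, 0.55])        # hypothetical proportions correct
p2_latency_facilitation_ms = np.array([17.0, 13.0, 15.0])  # values reported in the text

r, p = stats.pearsonr(visual_recognition, p2_latency_facilitation_ms)
print(f"Pearson r = {r:.2f}, p = {p:.3f} (n = {len(syllables)} syllables)")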
<p>As in the audio-visual condition, the observed modulation of N1/P2 auditory evoked potentials during audio-haptic speech perception also clearly suggests cross-modal interactions between the auditory and haptic speech signals. In this bimodal condition, we also observed a latency facilitation of the P2 auditory evoked potential that did not vary with the degree of haptic recognition of the speech targets. In addition to this latency facilitation, an N1 amplitude reduction was observed, but only for the /pa/ syllable. As previously noted, this latter result fits well with the stronger haptic saliency of the bilabial movements involved in the /pa/ syllable (see
<xref rid="B35" ref-type="bibr">Treille et al., 2014</xref>
, for behavioral evidence) and with previous studies on audio-visual integration demonstrating that N1 suppression is strongly dependent on whether the visual signal reliably predicts the onset of the auditory event (
<xref rid="B33" ref-type="bibr">Stekelenburg and Vroomen, 2007</xref>
;
<xref rid="B39" ref-type="bibr">Vroomen and Stekelenburg, 2010</xref>
). As discussed previously, the fact that the P2 latency reduction was nevertheless observed for all syllables indirectly argues for distinct integration processes in the cortical speech processing hierarchy (
<xref rid="B38" ref-type="bibr">van Wassenhove et al., 2005</xref>
;
<xref rid="B33" ref-type="bibr">Stekelenburg and Vroomen, 2007</xref>
;
<xref rid="B39" ref-type="bibr">Vroomen and Stekelenburg, 2010</xref>
;
<xref rid="B4" ref-type="bibr">Baart et al., 2014</xref>
).</p>
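A follow-up of the kind implied here (an N1 amplitude reduction present for /pa/ but not for /ta/ or /ka/ in the audio-haptic condition) can be sketched as per-syllable paired comparisons between the auditory-only and audio-haptic conditions. The sketch below uses randomly generated stand-in values with a built-in /pa/-specific effect; it is not the authors' statistical procedure.

# Sketch under assumptions: per-syllable A vs. AH comparison of N1 amplitudes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 16
n1_amp = {  # hypothetical N1 peak amplitudes (microvolts), per condition and syllable
    "A":  {s: rng.normal(-6.0, 1.0, n_subjects) for s in ("pa", "ta", "ka")},
    "AH": {s: rng.normal(-6.0, 1.0, n_subjects) for s in ("pa", "ta", "ka")},
}
n1_amp["AH"]["pa"] += 1.2  # build in a /pa/-specific amplitude reduction for illustration

for syllable in ("pa", "ta", "ka"):
    t, p = stats.ttest_rel(n1_amp["A"][syllable], n1_amp["AH"][syllable])
    print(f"/{syllable}/: A vs. AH N1 amplitude, t = {t:.2f}, p = {p:.3f}")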
<p>Taken together, our results provide new evidence for audio-visual and audio-haptic speech interactions in live dyadic interactions (
<xref rid="B35" ref-type="bibr">Treille et al., 2014</xref>
). The fact that the modulation of N1/P2 auditory evoked potentials was quite similar in these two bimodal conditions, despite the less natural haptic modality, further emphasizes the multimodal nature of speech perception. As previously mentioned, apart from speech, multisensory integration of sight, sound and touch naturally occurs in everyday life. Although bimodal speech perception is a special case of multisensory processing that interfaces with the linguistic system, similar integration processes might have been used to extract temporally and/or phonetically relevant information from the visual and haptic speech signals which, together with the listener’s knowledge of speech production (for a review, see
<xref rid="B29" ref-type="bibr">Schwartz et al., 2012</xref>
), might have constrained the processing of the incoming auditory signal.</p>
</sec>
<sec>
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ref-list>
<title>REFERENCES</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alcorn</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>1932</year>
).
<article-title>The Tadoma method.</article-title>
<source>
<italic>Volta Rev.</italic>
</source>
<volume>34</volume>
<fpage>195</fpage>
<lpage>198</lpage>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Arnal</surname>
<given-names>L. H.</given-names>
</name>
<name>
<surname>Giraud</surname>
<given-names>A. L.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Cortical oscillations and sensory predictions.</article-title>
<source>
<italic>Trends Cogn. Sci.</italic>
</source>
<volume>16</volume>
<fpage>390</fpage>
<lpage>398</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2012.05.003</pub-id>
<pub-id pub-id-type="pmid">22682813</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Arnal</surname>
<given-names>L. H.</given-names>
</name>
<name>
<surname>Morillon</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Kell</surname>
<given-names>C. A.</given-names>
</name>
<name>
<surname>Giraud</surname>
<given-names>A. L.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Dual neural routing of visual facilitation in speech processing.</article-title>
<source>
<italic>J. Neurosci.</italic>
</source>
<volume>29</volume>
<fpage>13445</fpage>
<lpage>13453</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.3194-09.2009</pub-id>
<pub-id pub-id-type="pmid">19864557</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Baart</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Stekelenburg</surname>
<given-names>J. J.</given-names>
</name>
<name>
<surname>Vroomen</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Electrophysiological evidence for speech-specific audiovisual integration.</article-title>
<source>
<italic>Neuropsychologia</italic>
</source>
<volume>53</volume>
<fpage>115</fpage>
<lpage>121</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2013.11.011</pub-id>
<pub-id pub-id-type="pmid">24291340</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Benoît</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Mohamadi</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Kandel</surname>
<given-names>S. D.</given-names>
</name>
</person-group>
(
<year>1994</year>
).
<article-title>Effects of phonetic context on audio-visual intelligibility of French.</article-title>
<source>
<italic>J. Speech Hear. Res.</italic>
</source>
<volume>37</volume>
<fpage>1195</fpage>
<lpage>1203</lpage>
<pub-id pub-id-type="pmid">7823561</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Besle</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Fort</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Delpuech</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Giard</surname>
<given-names>M. H.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Bimodal speech: early suppressive visual effects in human auditory cortex.</article-title>
<source>
<italic>Eur. J. Neurosci.</italic>
</source>
<volume>20</volume>
<fpage>2225</fpage>
<lpage>2234</lpage>
<pub-id pub-id-type="doi">10.1111/j.1460-9568.2004.03670.x</pub-id>
<pub-id pub-id-type="pmid">15450102</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Boersma</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Weenink</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>
<italic>Praat: Doing Phonetics by Computer. Computer Program</italic>
, Version 5.3.42.</article-title>
<comment>Available at:
<ext-link ext-link-type="uri" xlink:href="http://www.praat.org/">http://www.praat.org/</ext-link>
[accessed March 2, 2013]</comment>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Campbell</surname>
<given-names>C. S.</given-names>
</name>
<name>
<surname>Massaro</surname>
<given-names>D. W.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>Perception of visible speech: influence of spatial quantization.</article-title>
<source>
<italic>Perception</italic>
</source>
<volume>26</volume>
<fpage>627</fpage>
<lpage>644</lpage>
<pub-id pub-id-type="doi">10.1068/p260627</pub-id>
<pub-id pub-id-type="pmid">9488886</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chandrasekaran</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Trubanova</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Stillittano</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Caplier</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ghazanfar</surname>
<given-names>A. A.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>The natural statistics of audiovisual speech.</article-title>
<source>
<italic>PLoS Comput. Biol.</italic>
</source>
<volume>5</volume>
:
<issue>e1000436</issue>
<pub-id pub-id-type="doi">10.1371/journal.pcbi.1000436</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Delorme</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Makeig</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis.</article-title>
<source>
<italic>J. Neurosci. Methods</italic>
</source>
<volume>134</volume>
<fpage>9</fpage>
<lpage>21</lpage>
<pub-id pub-id-type="doi">10.1016/j.jneumeth.2003.10.009</pub-id>
<pub-id pub-id-type="pmid">15102499</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fowler</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Dekle</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>1991</year>
).
<article-title>Listening with eye and hand: crossmodal contributions to speech perception.</article-title>
<source>
<italic>J. Exp. Psychol. Hum. Percept. Perform.</italic>
</source>
<volume>17</volume>
<fpage>816</fpage>
<lpage>828</lpage>
<pub-id pub-id-type="doi">10.1037/0096-1523.17.3.816</pub-id>
<pub-id pub-id-type="pmid">1834793</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gick</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Jóhannsdóttir</surname>
<given-names>K. M.</given-names>
</name>
<name>
<surname>Gibraiel</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Mühlbauer</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Tactile enhancement of auditory and visual speech perception in untrained perceivers.</article-title>
<source>
<italic>J. Acoust. Soc. Am.</italic>
</source>
<volume>123</volume>
<fpage>72</fpage>
<lpage>76</lpage>
<pub-id pub-id-type="doi">10.1121/1.2884349</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grant</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Walden</surname>
<given-names>B. E.</given-names>
</name>
<name>
<surname>Seitz</surname>
<given-names>P. F.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Auditory-visual speech recognition by hearing-impaired subjects: consonant recognition, sentence recognition, and auditory-visual integration.</article-title>
<source>
<italic>J. Acoust. Soc. Am.</italic>
</source>
<volume>103</volume>
<fpage>2677</fpage>
<lpage>2690</lpage>
<pub-id pub-id-type="doi">10.1121/1.422788</pub-id>
<pub-id pub-id-type="pmid">9604361</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Green</surname>
<given-names>K. P.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>“The use of auditory and visual information during phonetic processing: implications for theories of speech perception,” in</article-title>
<source>
<italic>Hearing by Eye, II. Perspectives and Directions in Research on Audiovisual Aspects of Language Processing</italic>
</source>
<role>eds</role>
<person-group person-group-type="editor">
<name>
<surname>Campbell</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Dodd</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Burnham</surname>
<given-names>D.</given-names>
</name>
</person-group>
<publisher-loc>(Hove:</publisher-loc>
<publisher-name>Psychology Press)</publisher-name>
<fpage>3</fpage>
<lpage>25</lpage>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hertrich</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Mathiak</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Lutzenberger</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Menning</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Ackermann</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Sequential audiovisual interactions during speech perception: a whole-head MEG study.</article-title>
<source>
<italic>Neuropsychologia</italic>
</source>
<volume>45</volume>
<fpage>1342</fpage>
<lpage>1354</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2006.09.019</pub-id>
<pub-id pub-id-type="pmid">17067640</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jones</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Munhall</surname>
<given-names>K. G.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>The effects of separating auditory and visual sources on audiovisual integration of speech.</article-title>
<source>
<italic>Can. Acoust.</italic>
</source>
<volume>25</volume>
<fpage>13</fpage>
<lpage>19</lpage>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Klucharev</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Möttönen</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Sams</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Electrophysiological indicators of phonetic and non-phonetic multisensory interactions during audiovisual speech perception.</article-title>
<source>
<italic>Brain Res. Cogn. Brain Res.</italic>
</source>
<volume>18</volume>
<fpage>65</fpage>
<lpage>75</lpage>
<pub-id pub-id-type="doi">10.1016/j.cogbrainres.2003.09.004</pub-id>
<pub-id pub-id-type="pmid">14659498</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lebib</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Papo</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>de Bode</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Baudonnière</surname>
<given-names>P. M.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Evidence of a visual-to-auditory cross-modal sensory gating phenomenon as reflected by the human P50 event-related brain potential modulation.</article-title>
<source>
<italic>Neurosci. Lett.</italic>
</source>
<volume>341</volume>
<fpage>185</fpage>
<lpage>188</lpage>
<pub-id pub-id-type="doi">10.1016/S0304-3940(03)00131-9</pub-id>
<pub-id pub-id-type="pmid">12697279</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McGurk</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>MacDonald</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1976</year>
).
<article-title>Hearing lips and seeing voices.</article-title>
<source>
<italic>Nature</italic>
</source>
<volume>264</volume>
<fpage>746</fpage>
<lpage>748</lpage>
<pub-id pub-id-type="doi">10.1038/264746a0</pub-id>
<pub-id pub-id-type="pmid">1012311</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Näätänen</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Picton</surname>
<given-names>T. W.</given-names>
</name>
</person-group>
(
<year>1987</year>
).
<article-title>The N1 wave of the human electric and magnetic response to sound: a review and an analysis of the component structure.</article-title>
<source>
<italic>Psychophysiology</italic>
</source>
<volume>24</volume>
<fpage>375</fpage>
<lpage>425</lpage>
<pub-id pub-id-type="doi">10.1111/j.1469-8986.1987.tb00311.x</pub-id>
<pub-id pub-id-type="pmid">3615753</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Navarra</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Soto-Faraco</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Hearing lips in a second language: visual articulatory information enables the perception of second language sounds.</article-title>
<source>
<italic>Psychol. Res.</italic>
</source>
<volume>71</volume>
<fpage>4</fpage>
<lpage>12</lpage>
<pub-id pub-id-type="doi">10.1007/s00426-005-0031-5</pub-id>
<pub-id pub-id-type="pmid">16362332</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Norton</surname>
<given-names>S. J.</given-names>
</name>
<name>
<surname>Schultz</surname>
<given-names>M. C.</given-names>
</name>
<name>
<surname>Reed</surname>
<given-names>C. M.</given-names>
</name>
<name>
<surname>Braida</surname>
<given-names>L. D.</given-names>
</name>
<name>
<surname>Durlach</surname>
<given-names>N. I.</given-names>
</name>
<name>
<surname>Rabinowitz</surname>
<given-names>W. M.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>1977</year>
).
<article-title>Analytic study of the Tadoma method: background and preliminary results.</article-title>
<source>
<italic>J. Speech Hear. Res.</italic>
</source>
<volume>20</volume>
<fpage>574</fpage>
<lpage>595</lpage>
<pub-id pub-id-type="pmid">904318</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pilling</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Auditory event-related potentials (ERPs) in audiovisual speech perception.</article-title>
<source>
<italic>J. Speech Lang. Hear. Res.</italic>
</source>
<volume>52</volume>
<fpage>1073</fpage>
<lpage>1081</lpage>
<pub-id pub-id-type="doi">10.1044/1092-4388(2009/07-0276)</pub-id>
<pub-id pub-id-type="pmid">19641083</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Reisberg</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>McLean</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Goldfield</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>1987</year>
).
<article-title>“Easy to hear but hard to understand: a lipreading advantage with intact auditory stimuli,” in</article-title>
<source>
<italic>Hearing by Eye: The Psychology of Lipreading</italic>
</source>
<role>eds</role>
<person-group person-group-type="editor">
<name>
<surname>Campbell</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Dodd</surname>
<given-names>B.</given-names>
</name>
</person-group>
<publisher-loc>(London:</publisher-loc>
<publisher-name>Lawrence Erlbaum Associates)</publisher-name>
<fpage>97</fpage>
<lpage>113</lpage>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sams</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Aulanko</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Hämäläinen</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hari</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Lounasmaa</surname>
<given-names>O. V.</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>S. T.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>1991</year>
).
<article-title>Seeing speech: visual information from lip movements modifies activity in the human auditory cortex.</article-title>
<source>
<italic>Neurosci. Lett.</italic>
</source>
<volume>127</volume>
<fpage>141</fpage>
<lpage>145</lpage>
<pub-id pub-id-type="doi">10.1016/0304-3940(91)90914-F</pub-id>
<pub-id pub-id-type="pmid">1881611</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sato</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Cavé</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Ménard</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Brasseur</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Auditory-tactile speech perception in congenitally blind and sighted adults.</article-title>
<source>
<italic>Neuropsychologia</italic>
</source>
<volume>48</volume>
<fpage>3683</fpage>
<lpage>3686</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2010.08.017</pub-id>
<pub-id pub-id-type="pmid">20736028</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Scherg</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Von Cramon</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>1986</year>
).
<article-title>Evoked dipole source potentials of the human auditory cortex.</article-title>
<source>
<italic>Electroencephalogr. Clin. Neurophysiol.</italic>
</source>
<volume>65</volume>
<fpage>344</fpage>
<lpage>360</lpage>
<pub-id pub-id-type="doi">10.1016/0168-5597(86)90014-6</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schwartz</surname>
<given-names>J. L.</given-names>
</name>
<name>
<surname>Berthommier</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Savariaux</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Seeing to hear better: evidence for early audio-visual interactions in speech identification.</article-title>
<source>
<italic>Cognition</italic>
</source>
<volume>93</volume>
<fpage>B69</fpage>
<lpage>B78</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2004.01.006</pub-id>
<pub-id pub-id-type="pmid">15147940</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schwartz</surname>
<given-names>J. L.</given-names>
</name>
<name>
<surname>Ménard</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Basirat</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Sato</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>The Perception for Action Control Theory (PACT): a perceptuo-motor theory of speech perception.</article-title>
<source>
<italic>J. Neurolinguistics</italic>
</source>
<volume>25</volume>
<fpage>336</fpage>
<lpage>354</lpage>
<pub-id pub-id-type="doi">10.1016/j.jneuroling.2009.12.004</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Schwartz</surname>
<given-names>J. L.</given-names>
</name>
<name>
<surname>Savariaux</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>“Data and simulations about audiovisual asynchrony and predictability in speech perception,” in</article-title>
<source>
<italic>Proceedings of the 12th International Conference on Auditory-Visual Speech Processing</italic>
</source>
<publisher-loc>Annecy, France</publisher-loc>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Stein</surname>
<given-names>B. E.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>
<italic>The New Handbook of Multisensory Processing</italic>
.</article-title>
<publisher-loc>Cambridge</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Stein</surname>
<given-names>B. E.</given-names>
</name>
<name>
<surname>Meredith</surname>
<given-names>M. A.</given-names>
</name>
</person-group>
(
<year>1993</year>
).
<article-title>
<italic>The Merging of the Senses</italic>
.</article-title>
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stekelenburg</surname>
<given-names>J. J.</given-names>
</name>
<name>
<surname>Vroomen</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Neural correlates of multisensory integration of ecologically valid audiovisual events.</article-title>
<source>
<italic>J. Cogn. Neurosci.</italic>
</source>
<volume>19</volume>
<fpage>1964</fpage>
<lpage>1973</lpage>
<pub-id pub-id-type="doi">10.1162/jocn.2007.19.12.1964</pub-id>
<pub-id pub-id-type="pmid">17892381</pub-id>
</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sumby</surname>
<given-names>W. H.</given-names>
</name>
<name>
<surname>Pollack</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>1954</year>
).
<article-title>Visual contribution to speech intelligibility in noise.</article-title>
<source>
<italic>J. Acoust. Soc. Am.</italic>
</source>
<volume>26</volume>
<fpage>212</fpage>
<lpage>215</lpage>
<pub-id pub-id-type="doi">10.1121/1.1907309</pub-id>
</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Treille</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Cordeboeuf</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Vilain</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Sato</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Haptic and visual information speed up the neural processing of auditory speech in live dyadic interactions.</article-title>
<source>
<italic>Neuropsychologia</italic>
</source>
<volume>57</volume>
<fpage>71</fpage>
<lpage>77</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2014.02.004</pub-id>
<pub-id pub-id-type="pmid">24530236</pub-id>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van Wassenhove</surname>
<given-names>V.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Speech through ears and eyes: interfacing the senses with the supramodal brain.</article-title>
<source>
<italic>Front. Psychol.</italic>
</source>
<volume>4</volume>
:
<issue>388</issue>
<pub-id pub-id-type="doi">10.3389/fpsyg.2013.00388</pub-id>
</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van Wassenhove</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Grant</surname>
<given-names>K. W.</given-names>
</name>
<name>
<surname>Poeppel</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Temporal window of integration in auditory-visual speech perception.</article-title>
<source>
<italic>Neuropsychologia</italic>
</source>
<volume>45</volume>
<fpage>598</fpage>
<lpage>607</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2006.01.001</pub-id>
<pub-id pub-id-type="pmid">16530232</pub-id>
</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van Wassenhove</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Grant</surname>
<given-names>K. W.</given-names>
</name>
<name>
<surname>Poeppel</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Visual speech speeds up the neural processing of auditory speech.</article-title>
<source>
<italic>Proc. Natl. Acad. Sci. U.S.A.</italic>
</source>
<volume>102</volume>
<fpage>1181</fpage>
<lpage>1186</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.0408949102</pub-id>
<pub-id pub-id-type="pmid">15647358</pub-id>
</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vroomen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Stekelenburg</surname>
<given-names>J. J.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Visual anticipatory information modulates multisensory interactions of artificial audiovisual stimuli.</article-title>
<source>
<italic>J. Cogn. Neurosci.</italic>
</source>
<volume>22</volume>
<fpage>1583</fpage>
<lpage>1596</lpage>
<pub-id pub-id-type="doi">10.1162/jocn.2009.21308</pub-id>
<pub-id pub-id-type="pmid">19583474</pub-id>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Winneke</surname>
<given-names>A. H.</given-names>
</name>
<name>
<surname>Phillips</surname>
<given-names>N. A.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Does audiovisual speech offer a fountain of youth for old ears? An event-related brain potential study of age differences in audiovisual speech perception.</article-title>
<source>
<italic>Psychol. Aging</italic>
</source>
<volume>26</volume>
<fpage>427</fpage>
<lpage>438</lpage>
<pub-id pub-id-type="doi">10.1037/a0021683</pub-id>
<pub-id pub-id-type="pmid">21443357</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001F07 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 001F07 | SxmlIndent | more

To link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:4026678
   |texte=   The sound of your lips: electrophysiological cross-modal interactions during hand-to-face and face-to-face speech perception
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:24860533" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024