Exploration server for haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information it contains has therefore not been validated.

The sound of your lips: electrophysiological cross-modal interactions during hand-to-face and face-to-face speech perception.

Internal identifier: 000632 (PubMed/Curation); previous: 000631; next: 000633

Authors: Avril Treille [France]; Coriandre Vilain [France]; Marc Sato [France]

Source: Frontiers in Psychology, 2014

RBID: pubmed:24860533

Abstract

Recent magneto-encephalographic and electro-encephalographic studies provide evidence for cross-modal integration during audio-visual and audio-haptic speech perception, with speech gestures viewed or felt from manual tactile contact with the speaker's face. Given the temporal precedence of the haptic and visual signals on the acoustic signal in these studies, the observed modulation of N1/P2 auditory evoked responses during bimodal compared to unimodal speech perception suggests that relevant and predictive visual and haptic cues may facilitate auditory speech processing. To further investigate this hypothesis, auditory evoked potentials were here compared during auditory-only, audio-visual and audio-haptic speech perception in live dyadic interactions between a listener and a speaker. In line with previous studies, auditory evoked potentials were attenuated and speeded up during both audio-haptic and audio-visual compared to auditory speech perception. Importantly, the observed latency and amplitude reduction did not significantly depend on the degree of visual and haptic recognition of the speech targets. Altogether, these results further demonstrate cross-modal interactions between the auditory, visual and haptic speech signals. Although they do not contradict the hypothesis that visual and haptic sensory inputs convey predictive information with respect to the incoming auditory speech input, these results suggest that, at least in live conversational interactions, systematic conclusions on sensory predictability in bimodal speech integration have to be taken with caution, with the extraction of predictive cues likely depending on the variability of the speech stimuli.

DOI: 10.3389/fpsyg.2014.00420
PubMed: 24860533

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">The sound of your lips: electrophysiological cross-modal interactions during hand-to-face and face-to-face speech perception.</title>
<author>
<name sortKey="Treille, Avril" sort="Treille, Avril" uniqKey="Treille A" first="Avril" last="Treille">Avril Treille</name>
<affiliation wicri:level="1">
<nlm:affiliation>CNRS, Département Parole and Cognition, Gipsa-Lab, UMR 5216, Grenoble Université Grenoble, France.</nlm:affiliation>
<country xml:lang="fr">France</country>
<wicri:regionArea>CNRS, Département Parole and Cognition, Gipsa-Lab, UMR 5216, Grenoble Université Grenoble</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Vilain, Coriandre" sort="Vilain, Coriandre" uniqKey="Vilain C" first="Coriandre" last="Vilain">Coriandre Vilain</name>
<affiliation wicri:level="1">
<nlm:affiliation>CNRS, Département Parole and Cognition, Gipsa-Lab, UMR 5216, Grenoble Université Grenoble, France.</nlm:affiliation>
<country xml:lang="fr">France</country>
<wicri:regionArea>CNRS, Département Parole and Cognition, Gipsa-Lab, UMR 5216, Grenoble Université Grenoble</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Sato, Marc" sort="Sato, Marc" uniqKey="Sato M" first="Marc" last="Sato">Marc Sato</name>
<affiliation wicri:level="1">
<nlm:affiliation>CNRS, Département Parole and Cognition, Gipsa-Lab, UMR 5216, Grenoble Université Grenoble, France.</nlm:affiliation>
<country xml:lang="fr">France</country>
<wicri:regionArea>CNRS, Département Parole and Cognition, Gipsa-Lab, UMR 5216, Grenoble Université Grenoble</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2014">2014</date>
<idno type="doi">10.3389/fpsyg.2014.00420</idno>
<idno type="RBID">pubmed:24860533</idno>
<idno type="pmid">24860533</idno>
<idno type="wicri:Area/PubMed/Corpus">000632</idno>
<idno type="wicri:Area/PubMed/Curation">000632</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">The sound of your lips: electrophysiological cross-modal interactions during hand-to-face and face-to-face speech perception.</title>
<author>
<name sortKey="Treille, Avril" sort="Treille, Avril" uniqKey="Treille A" first="Avril" last="Treille">Avril Treille</name>
<affiliation wicri:level="1">
<nlm:affiliation>CNRS, Département Parole and Cognition, Gipsa-Lab, UMR 5216, Grenoble Université Grenoble, France.</nlm:affiliation>
<country xml:lang="fr">France</country>
<wicri:regionArea>CNRS, Département Parole and Cognition, Gipsa-Lab, UMR 5216, Grenoble Université Grenoble</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Vilain, Coriandre" sort="Vilain, Coriandre" uniqKey="Vilain C" first="Coriandre" last="Vilain">Coriandre Vilain</name>
<affiliation wicri:level="1">
<nlm:affiliation>CNRS, Département Parole and Cognition, Gipsa-Lab, UMR 5216, Grenoble Université Grenoble, France.</nlm:affiliation>
<country xml:lang="fr">France</country>
<wicri:regionArea>CNRS, Département Parole and Cognition, Gipsa-Lab, UMR 5216, Grenoble Université Grenoble</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Sato, Marc" sort="Sato, Marc" uniqKey="Sato M" first="Marc" last="Sato">Marc Sato</name>
<affiliation wicri:level="1">
<nlm:affiliation>CNRS, Département Parole and Cognition, Gipsa-Lab, UMR 5216, Grenoble Université Grenoble, France.</nlm:affiliation>
<country xml:lang="fr">France</country>
<wicri:regionArea>CNRS, Département Parole and Cognition, Gipsa-Lab, UMR 5216, Grenoble Université Grenoble</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in psychology</title>
<idno type="eISSN">1664-1078</idno>
<imprint>
<date when="2014" type="published">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Recent magneto-encephalographic and electro-encephalographic studies provide evidence for cross-modal integration during audio-visual and audio-haptic speech perception, with speech gestures viewed or felt from manual tactile contact with the speaker's face. Given the temporal precedence of the haptic and visual signals on the acoustic signal in these studies, the observed modulation of N1/P2 auditory evoked responses during bimodal compared to unimodal speech perception suggest that relevant and predictive visual and haptic cues may facilitate auditory speech processing. To further investigate this hypothesis, auditory evoked potentials were here compared during auditory-only, audio-visual and audio-haptic speech perception in live dyadic interactions between a listener and a speaker. In line with previous studies, auditory evoked potentials were attenuated and speeded up during both audio-haptic and audio-visual compared to auditory speech perception. Importantly, the observed latency and amplitude reduction did not significantly depend on the degree of visual and haptic recognition of the speech targets. Altogether, these results further demonstrate cross-modal interactions between the auditory, visual and haptic speech signals. Although they do not contradict the hypothesis that visual and haptic sensory inputs convey predictive information with respect to the incoming auditory speech input, these results suggest that, at least in live conversational interactions, systematic conclusions on sensory predictability in bimodal speech integration have to be taken with caution, with the extraction of predictive cues likely depending on the variability of the speech stimuli.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Owner="NLM" Status="PubMed-not-MEDLINE">
<PMID Version="1">24860533</PMID>
<DateCreated>
<Year>2014</Year>
<Month>05</Month>
<Day>26</Day>
</DateCreated>
<DateCompleted>
<Year>2014</Year>
<Month>05</Month>
<Day>26</Day>
</DateCompleted>
<DateRevised>
<Year>2015</Year>
<Month>02</Month>
<Day>09</Day>
</DateRevised>
<Article PubModel="Electronic-eCollection">
<Journal>
<ISSN IssnType="Electronic">1664-1078</ISSN>
<JournalIssue CitedMedium="Print">
<Volume>5</Volume>
<PubDate>
<Year>2014</Year>
</PubDate>
</JournalIssue>
<Title>Frontiers in psychology</Title>
<ISOAbbreviation>Front Psychol</ISOAbbreviation>
</Journal>
<ArticleTitle>The sound of your lips: electrophysiological cross-modal interactions during hand-to-face and face-to-face speech perception.</ArticleTitle>
<Pagination>
<MedlinePgn>420</MedlinePgn>
</Pagination>
<ELocationID EIdType="doi" ValidYN="Y">10.3389/fpsyg.2014.00420</ELocationID>
<Abstract>
<AbstractText>Recent magneto-encephalographic and electro-encephalographic studies provide evidence for cross-modal integration during audio-visual and audio-haptic speech perception, with speech gestures viewed or felt from manual tactile contact with the speaker's face. Given the temporal precedence of the haptic and visual signals on the acoustic signal in these studies, the observed modulation of N1/P2 auditory evoked responses during bimodal compared to unimodal speech perception suggest that relevant and predictive visual and haptic cues may facilitate auditory speech processing. To further investigate this hypothesis, auditory evoked potentials were here compared during auditory-only, audio-visual and audio-haptic speech perception in live dyadic interactions between a listener and a speaker. In line with previous studies, auditory evoked potentials were attenuated and speeded up during both audio-haptic and audio-visual compared to auditory speech perception. Importantly, the observed latency and amplitude reduction did not significantly depend on the degree of visual and haptic recognition of the speech targets. Altogether, these results further demonstrate cross-modal interactions between the auditory, visual and haptic speech signals. Although they do not contradict the hypothesis that visual and haptic sensory inputs convey predictive information with respect to the incoming auditory speech input, these results suggest that, at least in live conversational interactions, systematic conclusions on sensory predictability in bimodal speech integration have to be taken with caution, with the extraction of predictive cues likely depending on the variability of the speech stimuli.</AbstractText>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>Treille</LastName>
<ForeName>Avril</ForeName>
<Initials>A</Initials>
<AffiliationInfo>
<Affiliation>CNRS, Département Parole and Cognition, Gipsa-Lab, UMR 5216, Grenoble Université Grenoble, France.</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Vilain</LastName>
<ForeName>Coriandre</ForeName>
<Initials>C</Initials>
<AffiliationInfo>
<Affiliation>CNRS, Département Parole and Cognition, Gipsa-Lab, UMR 5216, Grenoble Université Grenoble, France.</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Sato</LastName>
<ForeName>Marc</ForeName>
<Initials>M</Initials>
<AffiliationInfo>
<Affiliation>CNRS, Département Parole and Cognition, Gipsa-Lab, UMR 5216, Grenoble Université Grenoble, France.</Affiliation>
</AffiliationInfo>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList>
<PublicationType UI="D016428">Journal Article</PublicationType>
</PublicationTypeList>
<ArticleDate DateType="Electronic">
<Year>2014</Year>
<Month>05</Month>
<Day>13</Day>
</ArticleDate>
</Article>
<MedlineJournalInfo>
<Country>Switzerland</Country>
<MedlineTA>Front Psychol</MedlineTA>
<NlmUniqueID>101550902</NlmUniqueID>
<ISSNLinking>1664-1078</ISSNLinking>
</MedlineJournalInfo>
<OtherID Source="NLM">PMC4026678</OtherID>
<KeywordList Owner="NOTNLM">
<Keyword MajorTopicYN="N">EEG</Keyword>
<Keyword MajorTopicYN="N">audio-haptic speech perception</Keyword>
<Keyword MajorTopicYN="N">audio-visual speech perception</Keyword>
<Keyword MajorTopicYN="N">auditory evoked potentials</Keyword>
<Keyword MajorTopicYN="N">multisensory interactions</Keyword>
</KeywordList>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="ecollection">
<Year>2014</Year>
<Month></Month>
<Day></Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="received">
<Year>2014</Year>
<Month>3</Month>
<Day>02</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="accepted">
<Year>2014</Year>
<Month>4</Month>
<Day>21</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="epublish">
<Year>2014</Year>
<Month>5</Month>
<Day>13</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="entrez">
<Year>2014</Year>
<Month>5</Month>
<Day>27</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="pubmed">
<Year>2014</Year>
<Month>5</Month>
<Day>27</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2014</Year>
<Month>5</Month>
<Day>27</Day>
<Hour>6</Hour>
<Minute>1</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>epublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="doi">10.3389/fpsyg.2014.00420</ArticleId>
<ArticleId IdType="pubmed">24860533</ArticleId>
<ArticleId IdType="pmc">PMC4026678</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
</record>

To manipulate this document under Unix (Dilib)

# EXPLOR_STEP points at the PubMed/Curation step of the HapticV1 exploration area;
# HfdSelect then pulls record 000632 from the bibliographic base (biblio.hfd),
# SxmlIndent pretty-prints the XML and more pages the output.
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PubMed/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000632 | SxmlIndent | more

Or, equivalently, using the EXPLOR_AREA variable:

HfdSelect -h $EXPLOR_AREA/Data/PubMed/Curation/biblio.hfd -nk 000632 | SxmlIndent | more
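The same HfdSelect call can be combined with standard Unix text tools to pull a single field out of the record. The sketch below is only an illustration: it reuses the exact flags shown above and makes no further assumptions about Dilib, extracting the DOI element with grep and sed.

# Hypothetical example: extract the DOI of record 000632 from the XML shown above
HfdSelect -h $EXPLOR_AREA/Data/PubMed/Curation/biblio.hfd -nk 000632 \
       | grep -o '<idno type="doi">[^<]*</idno>' \
       | sed -e 's/<[^>]*>//g'
# should print: 10.3389/fpsyg.2014.00420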

To put a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PubMed
   |étape=   Curation
   |type=    RBID
   |clé=     pubmed:24860533
   |texte=   The sound of your lips: electrophysiological cross-modal interactions during hand-to-face and face-to-face speech perception.
}}

To generate wiki pages

# Look up the record by its RBID in the index, fetch the matching entry from
# the bibliographic base, then convert the PubMed record into wiki pages for
# the HapticV1 area.
HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Curation/RBID.i   -Sk "pubmed:24860533" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1
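Before generating the pages, it can help to confirm that the RBID is actually present in the index. The one-liner below reuses the HfdIndexSelect invocation from the pipeline above unchanged and simply counts its output lines; it assumes, since Dilib's output format is not documented here, that the tool prints one matching index entry per line.

# Hypothetical sanity check: a non-zero count means the RBID is indexed
HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Curation/RBID.i -Sk "pubmed:24860533" | wc -l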

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024