Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information is therefore not validated.

Speech comprehension aided by multiple modalities: behavioural and neural interactions

Internal identifier: 001594 (Pmc/Curation); previous: 001593; next: 001595

Authors: Carolyn McGettigan [United Kingdom]; Andrew Faulkner [United Kingdom]; Irene Altarelli [United Kingdom, France]; Jonas Obleser [Germany]; Harriet Baverstock; Sophie K. Scott [United Kingdom]

Source:

RBID : PMC:4050300

Abstract

Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources – e.g. voice, face, gesture, linguistic context – to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension.


URL:
DOI: 10.1016/j.neuropsychologia.2012.01.010
PubMed: 22266262
PubMed Central: 4050300
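
These identifiers can also be used to retrieve the record programmatically. A minimal sketch using the public NCBI E-utilities endpoints (curl assumed available; note that the record below states the publisher does not allow full-text XML downloads, so efetch returns front matter only):

# Fetch the PMC record (front matter only) for PMC4050300
curl -s "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=4050300" > PMC4050300.xml

# Fetch the PubMed summary for PMID 22266262
curl -s "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi?db=pubmed&id=22266262"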

Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:4050300

Curation

No country items

Harriet Baverstock
<affiliation>
<nlm:aff id="A5">School of Psychological Sciences, The University of Manchester, Coupland 1 building, Coupland Street, Oxford Road, Manchester, M13 9PL</nlm:aff>
<wicri:noCountry code="subfield">M13 9PL</wicri:noCountry>
</affiliation>
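
The curation step flagged this affiliation with wicri:noCountry, presumably because the address ends in a postcode ("M13 9PL") rather than a recognisable country name. A quick way to count such unresolved affiliations in a locally saved copy of the record (record.xml is a hypothetical filename):

# Count affiliations that the curation step could not assign to a country
grep -c '<wicri:noCountry' record.xml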

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Speech comprehension aided by multiple modalities: behavioural and neural interactions</title>
<author>
<name sortKey="Mcgettigan, Carolyn" sort="Mcgettigan, Carolyn" uniqKey="Mcgettigan C" first="Carolyn" last="Mcgettigan">Carolyn Mcgettigan</name>
<affiliation wicri:level="1">
<nlm:aff id="A1">Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Faulkner, Andrew" sort="Faulkner, Andrew" uniqKey="Faulkner A" first="Andrew" last="Faulkner">Andrew Faulkner</name>
<affiliation wicri:level="1">
<nlm:aff id="A2">Department of Speech, Hearing & Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Department of Speech, Hearing & Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Altarelli, Irene" sort="Altarelli, Irene" uniqKey="Altarelli I" first="Irene" last="Altarelli">Irene Altarelli</name>
<affiliation wicri:level="1">
<nlm:aff id="A2">Department of Speech, Hearing & Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Department of Speech, Hearing & Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="A3">Laboratoire de Sciences Cognitives et Psycholinguistique, Ecole Normale Supérieure, 29 rue d’Ulm,75005 Paris, France</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Laboratoire de Sciences Cognitives et Psycholinguistique, Ecole Normale Supérieure, 29 rue d’Ulm,75005 Paris</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Obleser, Jonas" sort="Obleser, Jonas" uniqKey="Obleser J" first="Jonas" last="Obleser">Jonas Obleser</name>
<affiliation wicri:level="1">
<nlm:aff id="A4">Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, 04103 Leipzig, Germany</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, 04103 Leipzig</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Baverstock, Harriet" sort="Baverstock, Harriet" uniqKey="Baverstock H" first="Harriet" last="Baverstock">Harriet Baverstock</name>
<affiliation>
<nlm:aff id="A5">School of Psychological Sciences, The University of Manchester, Coupland 1 building, Coupland Street, Oxford Road, Manchester, M13 9PL</nlm:aff>
<wicri:noCountry code="subfield">M13 9PL</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Scott, Sophie K" sort="Scott, Sophie K" uniqKey="Scott S" first="Sophie K." last="Scott">Sophie K. Scott</name>
<affiliation wicri:level="1">
<nlm:aff id="A1">Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">22266262</idno>
<idno type="pmc">4050300</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4050300</idno>
<idno type="RBID">PMC:4050300</idno>
<idno type="doi">10.1016/j.neuropsychologia.2012.01.010</idno>
<date when="2012">2012</date>
<idno type="wicri:Area/Pmc/Corpus">001594</idno>
<idno type="wicri:Area/Pmc/Curation">001594</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Speech comprehension aided by multiple modalities: behavioural and neural interactions</title>
<author>
<name sortKey="Mcgettigan, Carolyn" sort="Mcgettigan, Carolyn" uniqKey="Mcgettigan C" first="Carolyn" last="Mcgettigan">Carolyn Mcgettigan</name>
<affiliation wicri:level="1">
<nlm:aff id="A1">Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Faulkner, Andrew" sort="Faulkner, Andrew" uniqKey="Faulkner A" first="Andrew" last="Faulkner">Andrew Faulkner</name>
<affiliation wicri:level="1">
<nlm:aff id="A2">Department of Speech, Hearing & Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Department of Speech, Hearing & Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Altarelli, Irene" sort="Altarelli, Irene" uniqKey="Altarelli I" first="Irene" last="Altarelli">Irene Altarelli</name>
<affiliation wicri:level="1">
<nlm:aff id="A2">Department of Speech, Hearing & Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Department of Speech, Hearing & Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="A3">Laboratoire de Sciences Cognitives et Psycholinguistique, Ecole Normale Supérieure, 29 rue d’Ulm,75005 Paris, France</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Laboratoire de Sciences Cognitives et Psycholinguistique, Ecole Normale Supérieure, 29 rue d’Ulm,75005 Paris</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Obleser, Jonas" sort="Obleser, Jonas" uniqKey="Obleser J" first="Jonas" last="Obleser">Jonas Obleser</name>
<affiliation wicri:level="1">
<nlm:aff id="A4">Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, 04103 Leipzig, Germany</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, 04103 Leipzig</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Baverstock, Harriet" sort="Baverstock, Harriet" uniqKey="Baverstock H" first="Harriet" last="Baverstock">Harriet Baverstock</name>
<affiliation>
<nlm:aff id="A5">School of Psychological Sciences, The University of Manchester, Coupland 1 building, Coupland Street, Oxford Road, Manchester, M13 9PL</nlm:aff>
<wicri:noCountry code="subfield">M13 9PL</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Scott, Sophie K" sort="Scott, Sophie K" uniqKey="Scott S" first="Sophie K." last="Scott">Sophie K. Scott</name>
<affiliation wicri:level="1">
<nlm:aff id="A1">Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Neuropsychologia</title>
<idno type="ISSN">0028-3932</idno>
<idno type="eISSN">1873-3514</idno>
<imprint>
<date when="2012">2012</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p id="P1">Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources – e.g. voice, face, gesture, linguistic context – to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) were associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension.</p>
</div>
</front>
</TEI>
<pmc article-type="research-article">
<pmc-comment>The publisher of this article does not allow downloading of the full text in XML form.</pmc-comment>
<pmc-dir>properties manuscript</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-journal-id">0020713</journal-id>
<journal-id journal-id-type="pubmed-jr-id">6083</journal-id>
<journal-id journal-id-type="nlm-ta">Neuropsychologia</journal-id>
<journal-id journal-id-type="iso-abbrev">Neuropsychologia</journal-id>
<journal-title-group>
<journal-title>Neuropsychologia</journal-title>
</journal-title-group>
<issn pub-type="ppub">0028-3932</issn>
<issn pub-type="epub">1873-3514</issn>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">22266262</article-id>
<article-id pub-id-type="pmc">4050300</article-id>
<article-id pub-id-type="doi">10.1016/j.neuropsychologia.2012.01.010</article-id>
<article-id pub-id-type="manuscript">EMS48580</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Speech comprehension aided by multiple modalities: behavioural and neural interactions</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>McGettigan</surname>
<given-names>Carolyn</given-names>
</name>
<xref ref-type="aff" rid="A1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Faulkner</surname>
<given-names>Andrew</given-names>
</name>
<xref ref-type="aff" rid="A2">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Altarelli</surname>
<given-names>Irene</given-names>
</name>
<xref ref-type="aff" rid="A2">2</xref>
<xref ref-type="aff" rid="A3">3</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Obleser</surname>
<given-names>Jonas</given-names>
</name>
<xref ref-type="aff" rid="A4">4</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Baverstock</surname>
<given-names>Harriet</given-names>
</name>
<xref ref-type="aff" rid="A5">5</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Scott</surname>
<given-names>Sophie K.</given-names>
</name>
<xref ref-type="aff" rid="A1">1</xref>
</contrib>
</contrib-group>
<aff id="A1">
<label>1</label>
Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK</aff>
<aff id="A2">
<label>2</label>
Department of Speech, Hearing &amp; Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK</aff>
<aff id="A3">
<label>3</label>
Laboratoire de Sciences Cognitives et Psycholinguistique, Ecole Normale Supérieure, 29 rue d’Ulm,75005 Paris, France</aff>
<aff id="A4">
<label>4</label>
Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, 04103 Leipzig, Germany</aff>
<aff id="A5">
<label>5</label>
School of Psychological Sciences, The University of Manchester, Coupland 1 building, Coupland Street, Oxford Road, Manchester, M13 9PL</aff>
<author-notes>
<corresp id="CR1">
<bold>CORRESPONDING AUTHOR:</bold>
Carolyn McGettigan,
<email>c.mcgettigan@ucl.ac.uk</email>
, Phone: +44 (0) 20 7679 7529, Fax: +44 (0) 020 7813 2835, Address: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK</corresp>
</author-notes>
<pub-date pub-type="nihms-submitted">
<day>6</day>
<month>6</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="epub">
<day>17</day>
<month>1</month>
<year>2012</year>
</pub-date>
<pub-date pub-type="ppub">
<month>4</month>
<year>2012</year>
</pub-date>
<pub-date pub-type="pmc-release">
<day>10</day>
<month>6</month>
<year>2014</year>
</pub-date>
<volume>50</volume>
<issue>5</issue>
<fpage>762</fpage>
<lpage>776</lpage>
<pmc-comment>elocation-id from pubmed: 10.1016/j.neuropsychologia.2012.01.010</pmc-comment>
<abstract>
<p id="P1">Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources – e.g. voice, face, gesture, linguistic context – to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) were associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension.</p>
</abstract>
<kwd-group>
<kwd>speech</kwd>
<kwd>fMRI</kwd>
<kwd>auditory cortex</kwd>
<kwd>individual differences</kwd>
<kwd>noise-vocoding</kwd>
</kwd-group>
</article-meta>
</front>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
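# Select record 001594 from the curation step's bibliographic index and pretty-print the XML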
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001594 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 001594 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:4050300
   |texte=   Speech comprehension aided by multiple modalities: behavioural and neural interactions
}}

To generate wiki pages

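# Look up the record by its PubMed identifier in the RBID index, retrieve the
# full record from the curation step, and convert it into a wiki page for the HapticV1 area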
HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:22266262" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024