Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated by automated means from raw corpora.
The information has therefore not been validated.

An additive-factors design to disambiguate neuronal and areal convergence: measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI.

Internal identifier: 001159 (PubMed/Checkpoint); previous: 001158; next: 001160
Auteurs : Ryan A. Stevenson [États-Unis] ; Sunah Kim ; Thomas W. James

Source:

RBID : pubmed:19352638

English descriptors

Abstract

It can be shown empirically and theoretically that inferences based on established metrics used to assess multisensory integration with BOLD fMRI data, such as superadditivity, are dependent on the particular experimental situation. For example, the law of inverse effectiveness shows that the likelihood of finding superadditivity in a known multisensory region increases with decreasing stimulus discriminability. In this paper, we suggest that Sternberg's additive-factors design allows for an unbiased assessment of multisensory integration. Through the manipulation of signal-to-noise ratio as an additive factor, we have identified networks of cortical regions that show properties of audio-visual or visuo-haptic neuronal convergence. These networks contained previously identified multisensory regions and also many new regions, for example, the caudate nucleus for audio-visual integration, and the fusiform gyrus for visuo-haptic integration. A comparison of integrative networks across audio-visual and visuo-haptic conditions showed very little overlap, suggesting that neural mechanisms of integration are unique to particular sensory pairings. Our results provide evidence for the utility of the additive-factors approach by demonstrating its effectiveness across modality (vision, audition, and haptics), stimulus type (speech and non-speech), experimental design (blocked and event-related), method of analysis (SPM and ROI), and experimenter-chosen baseline. The additive-factors approach provides a method for investigating multisensory interactions that goes beyond what can be achieved with more established metric-based, subtraction-type methods.

DOI: 10.1007/s00221-009-1783-8
PubMed: 19352638


Affiliations:


Links toward previous steps (curation, corpus...)


Links to Exploration step

pubmed:19352638

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">An additive-factors design to disambiguate neuronal and areal convergence: measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI.</title>
<author>
<name sortKey="Stevenson, Ryan A" sort="Stevenson, Ryan A" uniqKey="Stevenson R" first="Ryan A" last="Stevenson">Ryan A. Stevenson</name>
<affiliation wicri:level="2">
<nlm:affiliation>Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA. stevenra@indiana.edu</nlm:affiliation>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405</wicri:regionArea>
<placeName>
<region type="state">Indiana</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Kim, Sunah" sort="Kim, Sunah" uniqKey="Kim S" first="Sunah" last="Kim">Sunah Kim</name>
</author>
<author>
<name sortKey="James, Thomas W" sort="James, Thomas W" uniqKey="James T" first="Thomas W" last="James">Thomas W. James</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2009">2009</date>
<idno type="doi">10.1007/s00221-009-1783-8</idno>
<idno type="RBID">pubmed:19352638</idno>
<idno type="pmid">19352638</idno>
<idno type="wicri:Area/PubMed/Corpus">001307</idno>
<idno type="wicri:Area/PubMed/Curation">001307</idno>
<idno type="wicri:Area/PubMed/Checkpoint">001159</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">An additive-factors design to disambiguate neuronal and areal convergence: measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI.</title>
<author>
<name sortKey="Stevenson, Ryan A" sort="Stevenson, Ryan A" uniqKey="Stevenson R" first="Ryan A" last="Stevenson">Ryan A. Stevenson</name>
<affiliation wicri:level="2">
<nlm:affiliation>Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA. stevenra@indiana.edu</nlm:affiliation>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405</wicri:regionArea>
<placeName>
<region type="state">Indiana</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Kim, Sunah" sort="Kim, Sunah" uniqKey="Kim S" first="Sunah" last="Kim">Sunah Kim</name>
</author>
<author>
<name sortKey="James, Thomas W" sort="James, Thomas W" uniqKey="James T" first="Thomas W" last="James">Thomas W. James</name>
</author>
</analytic>
<series>
<title level="j">Experimental brain research</title>
<idno type="eISSN">1432-1106</idno>
<imprint>
<date when="2009" type="published">2009</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Acoustic Stimulation</term>
<term>Adult</term>
<term>Auditory Perception (physiology)</term>
<term>Brain (blood supply)</term>
<term>Brain (physiology)</term>
<term>Brain Mapping (methods)</term>
<term>Cerebrovascular Circulation</term>
<term>Female</term>
<term>Humans</term>
<term>Magnetic Resonance Imaging (methods)</term>
<term>Male</term>
<term>Oxygen (blood)</term>
<term>Photic Stimulation</term>
<term>Physical Stimulation</term>
<term>Signal Processing, Computer-Assisted</term>
<term>Speech</term>
<term>Speech Perception (physiology)</term>
<term>Touch Perception (physiology)</term>
<term>Visual Perception (physiology)</term>
</keywords>
<keywords scheme="MESH" type="chemical" qualifier="blood" xml:lang="en">
<term>Oxygen</term>
</keywords>
<keywords scheme="MESH" qualifier="blood supply" xml:lang="en">
<term>Brain</term>
</keywords>
<keywords scheme="MESH" qualifier="methods" xml:lang="en">
<term>Brain Mapping</term>
<term>Magnetic Resonance Imaging</term>
</keywords>
<keywords scheme="MESH" qualifier="physiology" xml:lang="en">
<term>Auditory Perception</term>
<term>Brain</term>
<term>Speech Perception</term>
<term>Touch Perception</term>
<term>Visual Perception</term>
</keywords>
<keywords scheme="MESH" xml:lang="en">
<term>Acoustic Stimulation</term>
<term>Adult</term>
<term>Cerebrovascular Circulation</term>
<term>Female</term>
<term>Humans</term>
<term>Male</term>
<term>Photic Stimulation</term>
<term>Physical Stimulation</term>
<term>Signal Processing, Computer-Assisted</term>
<term>Speech</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">It can be shown empirically and theoretically that inferences based on established metrics used to assess multisensory integration with BOLD fMRI data, such as superadditivity, are dependent on the particular experimental situation. For example, the law of inverse effectiveness shows that the likelihood of finding superadditivity in a known multisensory region increases with decreasing stimulus discriminability. In this paper, we suggest that Sternberg's additive-factors design allows for an unbiased assessment of multisensory integration. Through the manipulation of signal-to-noise ratio as an additive factor, we have identified networks of cortical regions that show properties of audio-visual or visuo-haptic neuronal convergence. These networks contained previously identified multisensory regions and also many new regions, for example, the caudate nucleus for audio-visual integration, and the fusiform gyrus for visuo-haptic integration. A comparison of integrative networks across audio-visual and visuo-haptic conditions showed very little overlap, suggesting that neural mechanisms of integration are unique to particular sensory pairings. Our results provide evidence for the utility of the additive-factors approach by demonstrating its effectiveness across modality (vision, audition, and haptics), stimulus type (speech and non-speech), experimental design (blocked and event-related), method of analysis (SPM and ROI), and experimenter-chosen baseline. The additive-factors approach provides a method for investigating multisensory interactions that goes beyond what can be achieved with more established metric-based, subtraction-type methods.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Owner="NLM" Status="MEDLINE">
<PMID Version="1">19352638</PMID>
<DateCreated>
<Year>2009</Year>
<Month>08</Month>
<Day>27</Day>
</DateCreated>
<DateCompleted>
<Year>2009</Year>
<Month>11</Month>
<Day>02</Day>
</DateCompleted>
<DateRevised>
<Year>2013</Year>
<Month>12</Month>
<Day>13</Day>
</DateRevised>
<Article PubModel="Print-Electronic">
<Journal>
<ISSN IssnType="Electronic">1432-1106</ISSN>
<JournalIssue CitedMedium="Internet">
<Volume>198</Volume>
<Issue>2-3</Issue>
<PubDate>
<Year>2009</Year>
<Month>Sep</Month>
</PubDate>
</JournalIssue>
<Title>Experimental brain research</Title>
<ISOAbbreviation>Exp Brain Res</ISOAbbreviation>
</Journal>
<ArticleTitle>An additive-factors design to disambiguate neuronal and areal convergence: measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI.</ArticleTitle>
<Pagination>
<MedlinePgn>183-94</MedlinePgn>
</Pagination>
<ELocationID EIdType="doi" ValidYN="Y">10.1007/s00221-009-1783-8</ELocationID>
<Abstract>
<AbstractText>It can be shown empirically and theoretically that inferences based on established metrics used to assess multisensory integration with BOLD fMRI data, such as superadditivity, are dependent on the particular experimental situation. For example, the law of inverse effectiveness shows that the likelihood of finding superadditivity in a known multisensory region increases with decreasing stimulus discriminability. In this paper, we suggest that Sternberg's additive-factors design allows for an unbiased assessment of multisensory integration. Through the manipulation of signal-to-noise ratio as an additive factor, we have identified networks of cortical regions that show properties of audio-visual or visuo-haptic neuronal convergence. These networks contained previously identified multisensory regions and also many new regions, for example, the caudate nucleus for audio-visual integration, and the fusiform gyrus for visuo-haptic integration. A comparison of integrative networks across audio-visual and visuo-haptic conditions showed very little overlap, suggesting that neural mechanisms of integration are unique to particular sensory pairings. Our results provide evidence for the utility of the additive-factors approach by demonstrating its effectiveness across modality (vision, audition, and haptics), stimulus type (speech and non-speech), experimental design (blocked and event-related), method of analysis (SPM and ROI), and experimenter-chosen baseline. The additive-factors approach provides a method for investigating multisensory interactions that goes beyond what can be achieved with more established metric-based, subtraction-type methods.</AbstractText>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>Stevenson</LastName>
<ForeName>Ryan A</ForeName>
<Initials>RA</Initials>
<AffiliationInfo>
<Affiliation>Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA. stevenra@indiana.edu</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Kim</LastName>
<ForeName>Sunah</ForeName>
<Initials>S</Initials>
</Author>
<Author ValidYN="Y">
<LastName>James</LastName>
<ForeName>Thomas W</ForeName>
<Initials>TW</Initials>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList>
<PublicationType UI="D016428">Journal Article</PublicationType>
<PublicationType UI="D013485">Research Support, Non-U.S. Gov't</PublicationType>
</PublicationTypeList>
<ArticleDate DateType="Electronic">
<Year>2009</Year>
<Month>04</Month>
<Day>08</Day>
</ArticleDate>
</Article>
<MedlineJournalInfo>
<Country>Germany</Country>
<MedlineTA>Exp Brain Res</MedlineTA>
<NlmUniqueID>0043312</NlmUniqueID>
<ISSNLinking>0014-4819</ISSNLinking>
</MedlineJournalInfo>
<ChemicalList>
<Chemical>
<RegistryNumber>S88TT14065</RegistryNumber>
<NameOfSubstance UI="D010100">Oxygen</NameOfSubstance>
</Chemical>
</ChemicalList>
<CitationSubset>IM</CitationSubset>
<MeshHeadingList>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D000161">Acoustic Stimulation</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D000328">Adult</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D001307">Auditory Perception</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D001921">Brain</DescriptorName>
<QualifierName MajorTopicYN="N" UI="Q000098">blood supply</QualifierName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D001931">Brain Mapping</DescriptorName>
<QualifierName MajorTopicYN="N" UI="Q000379">methods</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D002560">Cerebrovascular Circulation</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D005260">Female</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D006801">Humans</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D008279">Magnetic Resonance Imaging</DescriptorName>
<QualifierName MajorTopicYN="N" UI="Q000379">methods</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D008297">Male</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D010100">Oxygen</DescriptorName>
<QualifierName MajorTopicYN="N" UI="Q000097">blood</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D010775">Photic Stimulation</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D010812">Physical Stimulation</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D012815">Signal Processing, Computer-Assisted</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D013060">Speech</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D013067">Speech Perception</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D055698">Touch Perception</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D014796">Visual Perception</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
</MeshHeadingList>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="received">
<Year>2008</Year>
<Month>9</Month>
<Day>30</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="accepted">
<Year>2009</Year>
<Month>3</Month>
<Day>20</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="aheadofprint">
<Year>2009</Year>
<Month>4</Month>
<Day>8</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="entrez">
<Year>2009</Year>
<Month>4</Month>
<Day>9</Day>
<Hour>9</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="pubmed">
<Year>2009</Year>
<Month>4</Month>
<Day>9</Day>
<Hour>9</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2009</Year>
<Month>11</Month>
<Day>3</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>ppublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="doi">10.1007/s00221-009-1783-8</ArticleId>
<ArticleId IdType="pubmed">19352638</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
<affiliations>
<list>
<country>
<li>États-Unis</li>
</country>
<region>
<li>Indiana</li>
</region>
</list>
<tree>
<noCountry>
<name sortKey="James, Thomas W" sort="James, Thomas W" uniqKey="James T" first="Thomas W" last="James">Thomas W. James</name>
<name sortKey="Kim, Sunah" sort="Kim, Sunah" uniqKey="Kim S" first="Sunah" last="Kim">Sunah Kim</name>
</noCountry>
<country name="États-Unis">
<region name="Indiana">
<name sortKey="Stevenson, Ryan A" sort="Stevenson, Ryan A" uniqKey="Stevenson R" first="Ryan A" last="Stevenson">Ryan A. Stevenson</name>
</region>
</country>
</tree>
</affiliations>
</record>
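As a minimal illustration of working with this record outside the Dilib toolchain, the bibliographic fields of its PubMed portion (PMID, DOI, article title) can be read with Python's standard-library XML parser. The snippet below embeds a small excerpt of the `<pubmed>` element shown above (the full `<record>` uses undeclared `wicri:`/`nlm:` prefixes, which a strict XML parser would reject, so only the prefix-free PubMed fragment is used here); the variable names are illustrative.

```python
import xml.etree.ElementTree as ET

# Minimal excerpt of the <pubmed> portion of the record; it carries no
# namespace prefixes, so the standard library parses it as-is.
RECORD = """<pubmed>
  <MedlineCitation Owner="NLM" Status="MEDLINE">
    <PMID Version="1">19352638</PMID>
    <Article PubModel="Print-Electronic">
      <ArticleTitle>An additive-factors design to disambiguate neuronal and areal convergence: measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI.</ArticleTitle>
      <ELocationID EIdType="doi" ValidYN="Y">10.1007/s00221-009-1783-8</ELocationID>
    </Article>
  </MedlineCitation>
</pubmed>"""

root = ET.fromstring(RECORD)

# findtext returns the text of the first matching element, or None.
pmid = root.findtext(".//PMID")
doi = root.findtext(".//ELocationID[@EIdType='doi']")
title = root.findtext(".//ArticleTitle")

print(pmid)   # 19352638
print(doi)    # 10.1007/s00221-009-1783-8
```

The same XPath-style queries extend to the MeSH headings (`.//DescriptorName`) or the history dates (`.//PubMedPubDate`) when more of the record is loaded.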

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PubMed/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001159 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PubMed/Checkpoint/biblio.hfd -nk 001159 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PubMed
   |étape=   Checkpoint
   |type=    RBID
   |clé=     pubmed:19352638
   |texte=   An additive-factors design to disambiguate neuronal and areal convergence: measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI.
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Checkpoint/RBID.i   -Sk "pubmed:19352638" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024