Exploration server on haptic devices

Please note: this site is under development!
Please note: this site is generated automatically from raw corpora.
The information it contains has therefore not been validated.

An additive-factors design to disambiguate neuronal and areal convergence: measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI

Internal identifier: 000858 (PascalFrancis/Corpus); previous: 000857; next: 000859


Authors: Ryan A. Stevenson; Sunah Kim; Thomas W. James

Source: Experimental Brain Research; ISSN 0014-4819; 2009; Vol. 198; No. 2-3; pp. 183-194

RBID: Francis:09-0430774

French descriptors: Intégration multisensorielle; Imagerie RMN; Noyau caudé; Intégration nerveuse; Parole; Méthode analyse; Perception haptique

English descriptors: Multisensory integration; Nuclear magnetic resonance imaging; Caudate nucleus; Neural integration; Speech; Analysis method; Haptic perception

Abstract

It can be shown empirically and theoretically that inferences based on established metrics used to assess multisensory integration with BOLD fMRI data, such as superadditivity, are dependent on the particular experimental situation. For example, the law of inverse effectiveness shows that the likelihood of finding superadditivity in a known multisensory region increases with decreasing stimulus discriminability. In this paper, we suggest that Sternberg's additive-factors design allows for an unbiased assessment of multisensory integration. Through the manipulation of signal-to-noise ratio as an additive factor, we have identified networks of cortical regions that show properties of audio-visual or visuo-haptic neuronal convergence. These networks contained previously identified multisensory regions and also many new regions, for example, the caudate nucleus for audio-visual integration, and the fusiform gyrus for visuo-haptic integration. A comparison of integrative networks across audio-visual and visuo-haptic conditions showed very little overlap, suggesting that neural mechanisms of integration are unique to particular sensory pairings. Our results provide evidence for the utility of the additive-factors approach by demonstrating its effectiveness across modality (vision, audition, and haptics), stimulus type (speech and non-speech), experimental design (blocked and event-related), method of analysis (SPM and ROI), and experimenter-chosen baseline. The additive-factors approach provides a method for investigating multisensory interactions that goes beyond what can be achieved with more established metric-based, subtraction-type methods.
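
The contrast between the superadditivity metric and the additive-factors criterion can be made concrete with a small numerical sketch. The Python snippet below is not taken from the paper: all beta values are hypothetical placeholders and the function names are invented for illustration. It compares the classic superadditivity measure for a single region of interest with one way of quantifying the SNR-by-pairing interaction that the additive-factors logic tests for; the interaction term works out to the change in superadditivity across SNR levels.

# Illustrative sketch only: hypothetical per-condition BOLD estimates (betas)
# for one region, by sensory pairing and signal-to-noise ratio (SNR) level.
betas = {
    ("A",  "high"): 0.40, ("A",  "low"): 0.25,   # audio only
    ("V",  "high"): 0.50, ("V",  "low"): 0.30,   # visual only
    ("AV", "high"): 0.95, ("AV", "low"): 0.75,   # audio-visual
}

def superadditivity(betas, snr):
    # Classic metric: multisensory response minus the sum of the unisensory responses.
    return betas[("AV", snr)] - (betas[("A", snr)] + betas[("V", snr)])

def snr_by_pairing_interaction(betas):
    # One way to quantify an SNR-by-pairing interaction: compare the SNR effect
    # on the multisensory condition with the summed SNR effects on the unisensory
    # conditions.  Algebraically this is the change in superadditivity across SNR
    # levels; a nonzero value means the SNR factor and the sensory pairing do not
    # combine additively, the additive-factors signature of neuronal (rather than
    # purely areal) convergence.
    snr_effect_av = betas[("AV", "high")] - betas[("AV", "low")]
    snr_effect_uni = (betas[("A", "high")] - betas[("A", "low")]) \
                   + (betas[("V", "high")] - betas[("V", "low")])
    return snr_effect_av - snr_effect_uni

for snr in ("high", "low"):
    print(f"superadditivity ({snr} SNR): {superadditivity(betas, snr):+.2f}")
print(f"SNR x pairing interaction: {snr_by_pairing_interaction(betas):+.2f}")

With these placeholder numbers, superadditivity is larger at low SNR than at high SNR (consistent with inverse effectiveness), and the interaction term is nonzero, which under the additive-factors logic would point to neuronal convergence.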

Record in standard format (ISO 2709)

See the documentation on the Inist Standard format. A small, illustrative parsing sketch follows the field listing below.

pA  
A01 01  1    @0 0014-4819
A02 01      @0 EXBRAP
A03   1    @0 Exp. brain res.
A05       @2 198
A06       @2 2-3
A08 01  1  ENG  @1 An additive-factors design to disambiguate neuronal and areal convergence: measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI
A09 01  1  ENG  @1 Crossmodal Processing
A11 01  1    @1 STEVENSON (Ryan A.)
A11 02  1    @1 KIM (Sunah)
A11 03  1    @1 JAMES (Thomas W.)
A12 01  1    @1 SPENCE (Charles) @9 ed.
A12 02  1    @1 SENKOWSKI (Daniel) @9 ed.
A12 03  1    @1 RÖDER (Brigitte) @9 ed.
A14 01      @1 Department of Psychological and Brain Sciences, Indiana University, 1101 East Tenth Street, Room 293 @2 Bloomington, IN 47405 @3 USA @Z 3 aut.
A14 02      @1 Program in Neuroscience, Indiana University @2 Bloomington @3 USA @Z 1 aut. @Z 2 aut. @Z 3 aut.
A14 03      @1 Cognitive Science Program, Indiana University @2 Bloomington @3 USA @Z 2 aut. @Z 3 aut.
A15 01      @1 Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University @2 OX1 3UD Oxford @3 GBR @Z 1 aut.
A15 02      @1 Department of Neurophysiology and pathophysiology, University Medical Center Hamburg-Eppendorf @2 20246 Hamburg @3 DEU @Z 2 aut.
A15 03      @1 Biological Psychology and Neuropsychology, University of Hamburg @2 20146 Hamburg @3 DEU @Z 3 aut.
A20       @1 183-194
A21       @1 2009
A23 01      @0 ENG
A43 01      @1 INIST @2 12535 @5 354000196196430060
A44       @0 0000 @1 © 2009 INIST-CNRS. All rights reserved.
A45       @0 1 p.1/4
A47 01  1    @0 09-0430774
A60       @1 P
A61       @0 A
A64 01  1    @0 Experimental brain research
A66 01      @0 DEU
C01 01    ENG  @0 It can be shown empirically and theoretically that inferences based on established metrics used to assess multisensory integration with BOLD fMRI data, such as superadditivity, are dependent on the particular experimental situation. For example, the law of inverse effectiveness shows that the likelihood of finding superadditivity in a known multisensory region increases with decreasing stimulus discriminability. In this paper, we suggest that Sternberg's additive-factors design allows for an unbiased assessment of multisensory integration. Through the manipulation of signal-to-noise ratio as an additive factor, we have identified networks of cortical regions that show properties of audio-visual or visuo-haptic neuronal convergence. These networks contained previously identified multisensory regions and also many new regions, for example, the caudate nucleus for audio-visual integration, and the fusiform gyrus for visuo-haptic integration. A comparison of integrative networks across audio-visual and visuo-haptic conditions showed very little overlap, suggesting that neural mechanisms of integration are unique to particular sensory pairings. Our results provide evidence for the utility of the additive-factors approach by demonstrating its effectiveness across modality (vision, audition, and haptics), stimulus type (speech and non-speech), experimental design (blocked and event-related), method of analysis (SPM and ROI), and experimenter-chosen baseline. The additive-factors approach provides a method for investigating multisensory interactions that goes beyond what can be achieved with more established metric-based, subtraction-type methods.
C02 01  X    @0 770B03D @1 II
C03 01  X  FRE  @0 Intégration multisensorielle @5 01
C03 01  X  ENG  @0 Multisensory integration @5 01
C03 01  X  SPA  @0 Integración multisensorial @5 01
C03 02  X  FRE  @0 Imagerie RMN @5 02
C03 02  X  ENG  @0 Nuclear magnetic resonance imaging @5 02
C03 02  X  SPA  @0 Imaginería RMN @5 02
C03 03  X  FRE  @0 Noyau caudé @5 03
C03 03  X  ENG  @0 Caudate nucleus @5 03
C03 03  X  SPA  @0 Núcleo caudado @5 03
C03 04  X  FRE  @0 Intégration nerveuse @5 04
C03 04  X  ENG  @0 Neural integration @5 04
C03 04  X  SPA  @0 Integración nerviosa @5 04
C03 05  X  FRE  @0 Parole @5 05
C03 05  X  ENG  @0 Speech @5 05
C03 05  X  SPA  @0 Habla @5 05
C03 06  X  FRE  @0 Méthode analyse @5 06
C03 06  X  ENG  @0 Analysis method @5 06
C03 06  X  SPA  @0 Método análisis @5 06
C03 07  X  FRE  @0 Perception haptique @4 CD @5 96
C03 07  X  ENG  @0 Haptic perception @4 CD @5 96
C07 01  X  FRE  @0 Encéphale @5 20
C07 01  X  ENG  @0 Encephalon @5 20
C07 01  X  SPA  @0 Encéfalo @5 20
C07 02  X  FRE  @0 Noyau gris central @5 21
C07 02  X  ENG  @0 Basal ganglion @5 21
C07 02  X  SPA  @0 Núcleo basal @5 21
C07 03  X  FRE  @0 Système nerveux central @5 22
C07 03  X  ENG  @0 Central nervous system @5 22
C07 03  X  SPA  @0 Sistema nervioso central @5 22
N21       @1 313
N44 01      @1 OTO
N82       @1 OTO
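
To make the field notation above easier to relate to the XML record further down the page, here is a rough parsing sketch written only from the two representations shown here (field tag, optional indicators and language code, then @-prefixed subfields mapping to s0, s1, ..., sZ). It is not an official Inist or Dilib tool, and the function name is hypothetical.

# Illustrative only: parse one line of the Inist standard-format listing above
# into the tag / indicators / subfield structure used by the XML further down
# (e.g. "A01 01  1    @0 0014-4819" -> fA01, i1="01", i2="1", s0="0014-4819").
def parse_inist_line(line):
    head, *subparts = line.split("@")
    tokens = head.split()
    field = {"tag": tokens[0], "subfields": []}
    # Optional pieces between the tag and the first subfield: indicator i1,
    # indicator i2 (a digit or "X"), and a three-letter language code.
    for token in tokens[1:]:
        if "i1" not in field and token.isdigit():
            field["i1"] = token
        elif "i2" not in field and (token.isdigit() or token == "X"):
            field["i2"] = token
        elif len(token) == 3 and token.isalpha():
            field["lang"] = token
    # Each "@<key> <value>" chunk becomes one subfield (s0, s1, ..., s9, sZ).
    for part in subparts:
        key, _, value = part.strip().partition(" ")
        field["subfields"].append(("s" + key, value.strip()))
    return field

print(parse_inist_line("A01 01  1    @0 0014-4819"))
print(parse_inist_line("C03 07  X  ENG  @0 Haptic perception @4 CD @5 96"))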

Inist format (server)

NO : FRANCIS 09-0430774 INIST
ET : An additive-factors design to disambiguate neuronal and areal convergence: measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI
AU : STEVENSON (Ryan A.); KIM (Sunah); JAMES (Thomas W.); SPENCE (Charles); SENKOWSKI (Daniel); RÖDER (Brigitte)
AF : Department of Psychological and Brain Sciences, Indiana University, 1101 East Tenth Street, Room 293/Bloomington, IN 47405/Etats-Unis (3 aut.); Program in Neuroscience, Indiana University/Bloomington/Etats-Unis (1 aut., 2 aut., 3 aut.); Cognitive Science Program, Indiana University/Bloomington/Etats-Unis (2 aut., 3 aut.); Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University/OX1 3UD Oxford/Royaume-Uni (1 aut.); Department of Neurophysiology and pathophysiology, University Medical Center Hamburg-Eppendorf/20246 Hamburg/Allemagne (2 aut.); Biological Psychology and Neuropsychology, University of Hamburg/20146 Hamburg/Allemagne (3 aut.)
DT : Publication en série; Niveau analytique
SO : Experimental brain research; ISSN 0014-4819; Coden EXBRAP; Allemagne; Da. 2009; Vol. 198; No. 2-3; Pp. 183-194; Bibl. 1 p.1/4
LA : Anglais
EA : It can be shown empirically and theoretically that inferences based on established metrics used to assess multisensory integration with BOLD fMRI data, such as superadditivity, are dependent on the particular experimental situation. For example, the law of inverse effectiveness shows that the likelihood of finding superadditivity in a known multisensory region increases with decreasing stimulus discriminability. In this paper, we suggest that Sternberg's additive-factors design allows for an unbiased assessment of multisensory integration. Through the manipulation of signal-to-noise ratio as an additive factor, we have identified networks of cortical regions that show properties of audio-visual or visuo-haptic neuronal convergence. These networks contained previously identified multisensory regions and also many new regions, for example, the caudate nucleus for audio-visual integration, and the fusiform gyrus for visuo-haptic integration. A comparison of integrative networks across audio-visual and visuo-haptic conditions showed very little overlap, suggesting that neural mechanisms of integration are unique to particular sensory pairings. Our results provide evidence for the utility of the additive-factors approach by demonstrating its effectiveness across modality (vision, audition, and haptics), stimulus type (speech and non-speech), experimental design (blocked and event-related), method of analysis (SPM and ROI), and experimenter-chosen baseline. The additive-factors approach provides a method for investigating multisensory interactions that goes beyond what can be achieved with more established metric-based, subtraction-type methods.
CC : 770B03D
FD : Intégration multisensorielle; Imagerie RMN; Noyau caudé; Intégration nerveuse; Parole; Méthode analyse; Perception haptique
FG : Encéphale; Noyau gris central; Système nerveux central
ED : Multisensory integration; Nuclear magnetic resonance imaging; Caudate nucleus; Neural integration; Speech; Analysis method; Haptic perception
EG : Encephalon; Basal ganglion; Central nervous system
SD : Integración multisensorial; Imaginería RMN; Núcleo caudado; Integración nerviosa; Habla; Método análisis
LO : INIST-12535.354000196196430060
ID : 09-0430774

Links to Exploration step

Francis:09-0430774

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en" level="a">An additive-factors design to disambiguate neuronal and areal convergence: measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI</title>
<author>
<name sortKey="Stevenson, Ryan A" sort="Stevenson, Ryan A" uniqKey="Stevenson R" first="Ryan A." last="Stevenson">Ryan A. Stevenson</name>
<affiliation>
<inist:fA14 i1="02">
<s1>Program in Neuroscience, Indiana University</s1>
<s2>Bloomington</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Kim, Sunah" sort="Kim, Sunah" uniqKey="Kim S" first="Sunah" last="Kim">Sunah Kim</name>
<affiliation>
<inist:fA14 i1="02">
<s1>Program in Neuroscience, Indiana University</s1>
<s2>Bloomington</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
<affiliation>
<inist:fA14 i1="03">
<s1>Cognitive Science Program, Indiana University</s1>
<s2>Bloomington</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="James, Thomas W" sort="James, Thomas W" uniqKey="James T" first="Thomas W." last="James">Thomas W. James</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Department of Psychological and Brain Sciences, Indiana University, 1101 East Tenth Street, Room 293</s1>
<s2>Bloomington, IN 47405</s2>
<s3>USA</s3>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
<affiliation>
<inist:fA14 i1="02">
<s1>Program in Neuroscience, Indiana University</s1>
<s2>Bloomington</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
<affiliation>
<inist:fA14 i1="03">
<s1>Cognitive Science Program, Indiana University</s1>
<s2>Bloomington</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">INIST</idno>
<idno type="inist">09-0430774</idno>
<date when="2009">2009</date>
<idno type="stanalyst">FRANCIS 09-0430774 INIST</idno>
<idno type="RBID">Francis:09-0430774</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000858</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a">An additive-factors design to disambiguate neuronal and areal convergence: measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI</title>
<author>
<name sortKey="Stevenson, Ryan A" sort="Stevenson, Ryan A" uniqKey="Stevenson R" first="Ryan A." last="Stevenson">Ryan A. Stevenson</name>
<affiliation>
<inist:fA14 i1="02">
<s1>Program in Neuroscience, Indiana University</s1>
<s2>Bloomington</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Kim, Sunah" sort="Kim, Sunah" uniqKey="Kim S" first="Sunah" last="Kim">Sunah Kim</name>
<affiliation>
<inist:fA14 i1="02">
<s1>Program in Neuroscience, Indiana University</s1>
<s2>Bloomington</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
<affiliation>
<inist:fA14 i1="03">
<s1>Cognitive Science Program, Indiana University</s1>
<s2>Bloomington</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="James, Thomas W" sort="James, Thomas W" uniqKey="James T" first="Thomas W." last="James">Thomas W. James</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Department of Psychological and Brain Sciences, Indiana University, 1101 East Tenth Street, Room 293</s1>
<s2>Bloomington, IN 47405</s2>
<s3>USA</s3>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
<affiliation>
<inist:fA14 i1="02">
<s1>Program in Neuroscience, Indiana University</s1>
<s2>Bloomington</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
<affiliation>
<inist:fA14 i1="03">
<s1>Cognitive Science Program, Indiana University</s1>
<s2>Bloomington</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</analytic>
<series>
<title level="j" type="main">Experimental brain research</title>
<title level="j" type="abbreviated">Exp. brain res.</title>
<idno type="ISSN">0014-4819</idno>
<imprint>
<date when="2009">2009</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<title level="j" type="main">Experimental brain research</title>
<title level="j" type="abbreviated">Exp. brain res.</title>
<idno type="ISSN">0014-4819</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Analysis method</term>
<term>Caudate nucleus</term>
<term>Haptic perception</term>
<term>Multisensory integration</term>
<term>Neural integration</term>
<term>Nuclear magnetic resonance imaging</term>
<term>Speech</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr">
<term>Intégration multisensorielle</term>
<term>Imagerie RMN</term>
<term>Noyau caudé</term>
<term>Intégration nerveuse</term>
<term>Parole</term>
<term>Méthode analyse</term>
<term>Perception haptique</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">It can be shown empirically and theoretically that inferences based on established metrics used to assess multisensory integration with BOLD fMRI data, such as superadditivity, are dependent on the particular experimental situation. For example, the law of inverse effectiveness shows that the likelihood of finding superadditivity in a known multisensory region increases with decreasing stimulus discriminability. In this paper, we suggest that Sternberg's additive-factors design allows for an unbiased assessment of multisensory integration. Through the manipulation of signal-to-noise ratio as an additive factor, we have identified networks of cortical regions that show properties of audio-visual or visuo-haptic neuronal convergence. These networks contained previously identified multisensory regions and also many new regions, for example, the caudate nucleus for audio-visual integration, and the fusiform gyrus for visuo-haptic integration. A comparison of integrative networks across audio-visual and visuo-haptic conditions showed very little overlap, suggesting that neural mechanisms of integration are unique to particular sensory pairings. Our results provide evidence for the utility of the additive-factors approach by demonstrating its effectiveness across modality (vision, audition, and haptics), stimulus type (speech and non-speech), experimental design (blocked and event-related), method of analysis (SPM and ROI), and experimenter-chosen baseline. The additive-factors approach provides a method for investigating multisensory interactions that goes beyond what can be achieved with more established metric-based, subtraction-type methods.</div>
</front>
</TEI>
<inist>
<standard h6="B">
<pA>
<fA01 i1="01" i2="1">
<s0>0014-4819</s0>
</fA01>
<fA02 i1="01">
<s0>EXBRAP</s0>
</fA02>
<fA03 i2="1">
<s0>Exp. brain res.</s0>
</fA03>
<fA05>
<s2>198</s2>
</fA05>
<fA06>
<s2>2-3</s2>
</fA06>
<fA08 i1="01" i2="1" l="ENG">
<s1>An additive-factors design to disambiguate neuronal and areal convergence: measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI</s1>
</fA08>
<fA09 i1="01" i2="1" l="ENG">
<s1>Crossmodal Processing</s1>
</fA09>
<fA11 i1="01" i2="1">
<s1>STEVENSON (Ryan A.)</s1>
</fA11>
<fA11 i1="02" i2="1">
<s1>KIM (Sunah)</s1>
</fA11>
<fA11 i1="03" i2="1">
<s1>JAMES (Thomas W.)</s1>
</fA11>
<fA12 i1="01" i2="1">
<s1>SPENCE (Charles)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="02" i2="1">
<s1>SENKOWSKI (Daniel)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="03" i2="1">
<s1>RÖDER (Brigitte)</s1>
<s9>ed.</s9>
</fA12>
<fA14 i1="01">
<s1>Department of Psychological and Brain Sciences, Indiana University, 1101 East Tenth Street, Room 293</s1>
<s2>Bloomington, IN 47405</s2>
<s3>USA</s3>
<sZ>3 aut.</sZ>
</fA14>
<fA14 i1="02">
<s1>Program in Neuroscience, Indiana University</s1>
<s2>Bloomington</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</fA14>
<fA14 i1="03">
<s1>Cognitive Science Program, Indiana University</s1>
<s2>Bloomington</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</fA14>
<fA15 i1="01">
<s1>Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University</s1>
<s2>OX1 3UD Oxford</s2>
<s3>GBR</s3>
<sZ>1 aut.</sZ>
</fA15>
<fA15 i1="02">
<s1>Department of Neurophysiology and pathophysiology, University Medical Center Hamburg-Eppendorf</s1>
<s2>20246 Hamburg</s2>
<s3>DEU</s3>
<sZ>2 aut.</sZ>
</fA15>
<fA15 i1="03">
<s1>Biological Psychology and Neuropsychology, University of Hamburg</s1>
<s2>20146 Hamburg</s2>
<s3>DEU</s3>
<sZ>3 aut.</sZ>
</fA15>
<fA20>
<s1>183-194</s1>
</fA20>
<fA21>
<s1>2009</s1>
</fA21>
<fA23 i1="01">
<s0>ENG</s0>
</fA23>
<fA43 i1="01">
<s1>INIST</s1>
<s2>12535</s2>
<s5>354000196196430060</s5>
</fA43>
<fA44>
<s0>0000</s0>
<s1>© 2009 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45>
<s0>1 p.1/4</s0>
</fA45>
<fA47 i1="01" i2="1">
<s0>09-0430774</s0>
</fA47>
<fA60>
<s1>P</s1>
</fA60>
<fA61>
<s0>A</s0>
</fA61>
<fA64 i1="01" i2="1">
<s0>Experimental brain research</s0>
</fA64>
<fA66 i1="01">
<s0>DEU</s0>
</fA66>
<fC01 i1="01" l="ENG">
<s0>It can be shown empirically and theoretically that inferences based on established metrics used to assess multisensory integration with BOLD fMRI data, such as superadditivity, are dependent on the particular experimental situation. For example, the law of inverse effectiveness shows that the likelihood of finding superadditivity in a known multisensory region increases with decreasing stimulus discriminability. In this paper, we suggest that Sternberg's additive-factors design allows for an unbiased assessment of multisensory integration. Through the manipulation of signal-to-noise ratio as an additive factor, we have identified networks of cortical regions that show properties of audio-visual or visuo-haptic neuronal convergence. These networks contained previously identified multisensory regions and also many new regions, for example, the caudate nucleus for audio-visual integration, and the fusiform gyrus for visuo-haptic integration. A comparison of integrative networks across audio-visual and visuo-haptic conditions showed very little overlap, suggesting that neural mechanisms of integration are unique to particular sensory pairings. Our results provide evidence for the utility of the additive-factors approach by demonstrating its effectiveness across modality (vision, audition, and haptics), stimulus type (speech and non-speech), experimental design (blocked and event-related), method of analysis (SPM and ROI), and experimenter-chosen baseline. The additive-factors approach provides a method for investigating multisensory interactions that goes beyond what can be achieved with more established metric-based, subtraction-type methods.</s0>
</fC01>
<fC02 i1="01" i2="X">
<s0>770B03D</s0>
<s1>II</s1>
</fC02>
<fC03 i1="01" i2="X" l="FRE">
<s0>Intégration multisensorielle</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG">
<s0>Multisensory integration</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA">
<s0>Integración multisensorial</s0>
<s5>01</s5>
</fC03>
<fC03 i1="02" i2="X" l="FRE">
<s0>Imagerie RMN</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="ENG">
<s0>Nuclear magnetic resonance imaging</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="SPA">
<s0>Imaginería RMN</s0>
<s5>02</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE">
<s0>Noyau caudé</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG">
<s0>Caudate nucleus</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA">
<s0>Núcleo caudado</s0>
<s5>03</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE">
<s0>Intégration nerveuse</s0>
<s5>04</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG">
<s0>Neural integration</s0>
<s5>04</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA">
<s0>Integración nerviosa</s0>
<s5>04</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE">
<s0>Parole</s0>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG">
<s0>Speech</s0>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA">
<s0>Habla</s0>
<s5>05</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE">
<s0>Méthode analyse</s0>
<s5>06</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG">
<s0>Analysis method</s0>
<s5>06</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA">
<s0>Método análisis</s0>
<s5>06</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE">
<s0>Perception haptique</s0>
<s4>CD</s4>
<s5>96</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG">
<s0>Haptic perception</s0>
<s4>CD</s4>
<s5>96</s5>
</fC03>
<fC07 i1="01" i2="X" l="FRE">
<s0>Encéphale</s0>
<s5>20</s5>
</fC07>
<fC07 i1="01" i2="X" l="ENG">
<s0>Encephalon</s0>
<s5>20</s5>
</fC07>
<fC07 i1="01" i2="X" l="SPA">
<s0>Encéfalo</s0>
<s5>20</s5>
</fC07>
<fC07 i1="02" i2="X" l="FRE">
<s0>Noyau gris central</s0>
<s5>21</s5>
</fC07>
<fC07 i1="02" i2="X" l="ENG">
<s0>Basal ganglion</s0>
<s5>21</s5>
</fC07>
<fC07 i1="02" i2="X" l="SPA">
<s0>Núcleo basal</s0>
<s5>21</s5>
</fC07>
<fC07 i1="03" i2="X" l="FRE">
<s0>Système nerveux central</s0>
<s5>22</s5>
</fC07>
<fC07 i1="03" i2="X" l="ENG">
<s0>Central nervous system</s0>
<s5>22</s5>
</fC07>
<fC07 i1="03" i2="X" l="SPA">
<s0>Sistema nervioso central</s0>
<s5>22</s5>
</fC07>
<fN21>
<s1>313</s1>
</fN21>
<fN44 i1="01">
<s1>OTO</s1>
</fN44>
<fN82>
<s1>OTO</s1>
</fN82>
</pA>
</standard>
<server>
<NO>FRANCIS 09-0430774 INIST</NO>
<ET>An additive-factors design to disambiguate neuronal and areal convergence: measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI</ET>
<AU>STEVENSON (Ryan A.); KIM (Sunah); JAMES (Thomas W.); SPENCE (Charles); SENKOWSKI (Daniel); RÖDER (Brigitte)</AU>
<AF>Department of Psychological and Brain Sciences, Indiana University, 1101 East Tenth Street, Room 293/Bloomington, IN 47405/Etats-Unis (3 aut.); Program in Neuroscience, Indiana University/Bloomington/Etats-Unis (1 aut., 2 aut., 3 aut.); Cognitive Science Program, Indiana University/Bloomington/Etats-Unis (2 aut., 3 aut.); Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University/OX1 3UD Oxford/Royaume-Uni (1 aut.); Department of Neurophysiology and pathophysiology, University Medical Center Hamburg-Eppendorf/20246 Hamburg/Allemagne (2 aut.); Biological Psychology and Neuropsychology, University of Hamburg/20146 Hamburg/Allemagne (3 aut.)</AF>
<DT>Publication en série; Niveau analytique</DT>
<SO>Experimental brain research; ISSN 0014-4819; Coden EXBRAP; Allemagne; Da. 2009; Vol. 198; No. 2-3; Pp. 183-194; Bibl. 1 p.1/4</SO>
<LA>Anglais</LA>
<EA>It can be shown empirically and theoretically that inferences based on established metrics used to assess multisensory integration with BOLD fMRI data, such as superadditivity, are dependent on the particular experimental situation. For example, the law of inverse effectiveness shows that the likelihood of finding superadditivity in a known multisensory region increases with decreasing stimulus discriminability. In this paper, we suggest that Sternberg's additive-factors design allows for an unbiased assessment of multisensory integration. Through the manipulation of signal-to-noise ratio as an additive factor, we have identified networks of cortical regions that show properties of audio-visual or visuo-haptic neuronal convergence. These networks contained previously identified multisensory regions and also many new regions, for example, the caudate nucleus for audio-visual integration, and the fusiform gyrus for visuo-haptic integration. A comparison of integrative networks across audio-visual and visuo-haptic conditions showed very little overlap, suggesting that neural mechanisms of integration are unique to particular sensory pairings. Our results provide evidence for the utility of the additive-factors approach by demonstrating its effectiveness across modality (vision, audition, and haptics), stimulus type (speech and non-speech), experimental design (blocked and event-related), method of analysis (SPM and ROI), and experimenter-chosen baseline. The additive-factors approach provides a method for investigating multisensory interactions that goes beyond what can be achieved with more established metric-based, subtraction-type methods.</EA>
<CC>770B03D</CC>
<FD>Intégration multisensorielle; Imagerie RMN; Noyau caudé; Intégration nerveuse; Parole; Méthode analyse; Perception haptique</FD>
<FG>Encéphale; Noyau gris central; Système nerveux central</FG>
<ED>Multisensory integration; Nuclear magnetic resonance imaging; Caudate nucleus; Neural integration; Speech; Analysis method; Haptic perception</ED>
<EG>Encephalon; Basal ganglion; Central nervous system</EG>
<SD>Integración multisensorial; Imaginería RMN; Núcleo caudado; Integración nerviosa; Habla; Método análisis</SD>
<LO>INIST-12535.354000196196430060</LO>
<ID>09-0430774</ID>
</server>
</inist>
</record>

To manipulate this document under Unix (Dilib)

# Extract this record (internal identifier 000858) from the corpus base and display it indented
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PascalFrancis/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000858 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Corpus/biblio.hfd -nk 000858 | SxmlIndent | more

To put a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PascalFrancis
   |étape=   Corpus
   |type=    RBID
   |clé=     Francis:09-0430774
   |texte=   An additive-factors design to disambiguate neuronal and areal convergence: measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI
}}

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024