Exploration server on haptic devices


The integration of motion and disparity cues to depth in dorsal visual cortex

Internal identifier: 002557 (Pmc/Curation); previous: 002556; next: 002558


Authors: Hiroshi Ban [United Kingdom, Japan]; Tim J. Preston [United Kingdom, United States]; Alan Meeson [United Kingdom]; Andrew E. Welchman [United Kingdom]

Source:

RBID : PMC:3378632

Abstract

Humans exploit a range of visual depth cues to estimate three-dimensional (3D) structure. For example, the slant of a nearby tabletop can be judged by combining information from binocular disparity, texture and perspective. Behavioral tests show humans combine cues near-optimally, a feat that could depend on: (i) discriminating the outputs from cue-specific mechanisms, or (ii) fusing signals into a common representation. While fusion is computationally attractive, it poses a significant challenge, requiring the integration of quantitatively different signals. We used functional magnetic resonance imaging (fMRI) to provide evidence that dorsal visual area V3B/KO meets this challenge. Specifically, we found that fMRI responses are more discriminable when two cues (binocular disparity and relative motion) concurrently signal depth, and that information provided by one cue is diagnostic of depth indicated by the other. This suggests a cortical node important when perceiving depth, and highlights computations based on fusion in the dorsal stream.


URL:
DOI: 10.1038/nn.3046
PubMed: 22327475
PubMed Central: 3378632

Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:3378632

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">The integration of motion and disparity cues to depth in dorsal visual cortex</title>
<author>
<name sortKey="Ban, Hiroshi" sort="Ban, Hiroshi" uniqKey="Ban H" first="Hiroshi" last="Ban">Hiroshi Ban</name>
<affiliation wicri:level="1">
<nlm:aff id="A1">School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="A2">Japan Society for the Promotion of Science, Tokyo 102-8472, Japan</nlm:aff>
<country xml:lang="fr">Japon</country>
<wicri:regionArea>Japan Society for the Promotion of Science, Tokyo 102-8472</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Preston, Tim J" sort="Preston, Tim J" uniqKey="Preston T" first="Tim J" last="Preston">Tim J. Preston</name>
<affiliation wicri:level="1">
<nlm:aff id="A1">School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="A3">Department of Psychology, University of California, Santa Barbara, Santa Barbara, CA 93106-9660, USA</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Psychology, University of California, Santa Barbara, Santa Barbara, CA 93106-9660</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Meeson, Alan" sort="Meeson, Alan" uniqKey="Meeson A" first="Alan" last="Meeson">Alan Meeson</name>
<affiliation wicri:level="1">
<nlm:aff id="A1">School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Welchman, Andrew E" sort="Welchman, Andrew E" uniqKey="Welchman A" first="Andrew E" last="Welchman">Andrew E. Welchman</name>
<affiliation wicri:level="1">
<nlm:aff id="A1">School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">22327475</idno>
<idno type="pmc">3378632</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3378632</idno>
<idno type="RBID">PMC:3378632</idno>
<idno type="doi">10.1038/nn.3046</idno>
<date when="2012">2012</date>
<idno type="wicri:Area/Pmc/Corpus">002557</idno>
<idno type="wicri:Area/Pmc/Curation">002557</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">The integration of motion and disparity cues to depth in dorsal visual cortex</title>
<author>
<name sortKey="Ban, Hiroshi" sort="Ban, Hiroshi" uniqKey="Ban H" first="Hiroshi" last="Ban">Hiroshi Ban</name>
<affiliation wicri:level="1">
<nlm:aff id="A1">School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="A2">Japan Society for the Promotion of Science, Tokyo 102-8472, Japan</nlm:aff>
<country xml:lang="fr">Japon</country>
<wicri:regionArea>Japan Society for the Promotion of Science, Tokyo 102-8472</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Preston, Tim J" sort="Preston, Tim J" uniqKey="Preston T" first="Tim J" last="Preston">Tim J. Preston</name>
<affiliation wicri:level="1">
<nlm:aff id="A1">School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="A3">Department of Psychology, University of California, Santa Barbara, Santa Barbara, CA 93106-9660, USA</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Psychology, University of California, Santa Barbara, Santa Barbara, CA 93106-9660</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Meeson, Alan" sort="Meeson, Alan" uniqKey="Meeson A" first="Alan" last="Meeson">Alan Meeson</name>
<affiliation wicri:level="1">
<nlm:aff id="A1">School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Welchman, Andrew E" sort="Welchman, Andrew E" uniqKey="Welchman A" first="Andrew E" last="Welchman">Andrew E. Welchman</name>
<affiliation wicri:level="1">
<nlm:aff id="A1">School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Nature neuroscience</title>
<idno type="ISSN">1097-6256</idno>
<idno type="eISSN">1546-1726</idno>
<imprint>
<date when="2012">2012</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p id="P1">Humans exploit a range of visual depth cues to estimate three-dimensional (3D) structure. For example, the slant of a nearby tabletop can be judged by combining information from binocular disparity, texture and perspective. Behavioral tests show humans combine cues near-optimally, a feat that could depend on: (i) discriminating the outputs from cue-specific mechanisms, or (ii) fusing signals into a common representation. While fusion is computationally attractive, it poses a significant challenge, requiring the integration of quantitatively different signals. We used functional magnetic resonance imaging (fMRI) to provide evidence that dorsal visual area V3B/KO meets this challenge. Specifically, we found that fMRI responses are more discriminable when two cues (binocular disparity and relative motion) concurrently signal depth, and that information provided by one cue is diagnostic of depth indicated by the other. This suggests a cortical node important when perceiving depth, and highlights computations based on fusion in the dorsal stream.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Dosher, Ba" uniqKey="Dosher B">BA Dosher</name>
</author>
<author>
<name sortKey="Sperling, G" uniqKey="Sperling G">G Sperling</name>
</author>
<author>
<name sortKey="Wurst, Sa" uniqKey="Wurst S">SA Wurst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Buelthoff, Hh" uniqKey="Buelthoff H">HH Buelthoff</name>
</author>
<author>
<name sortKey="Mallot, Ha" uniqKey="Mallot H">HA Mallot</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Landy, Ms" uniqKey="Landy M">MS Landy</name>
</author>
<author>
<name sortKey="Maloney, Lt" uniqKey="Maloney L">LT Maloney</name>
</author>
<author>
<name sortKey="Johnston, Eb" uniqKey="Johnston E">EB Johnston</name>
</author>
<author>
<name sortKey="Young, M" uniqKey="Young M">M Young</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Clark, Jj" uniqKey="Clark J">JJ Clark</name>
</author>
<author>
<name sortKey="Yuille, Al" uniqKey="Yuille A">AL Yuille</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knill, Dc" uniqKey="Knill D">DC Knill</name>
</author>
<author>
<name sortKey="Saunders, Ja" uniqKey="Saunders J">JA Saunders</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tsutsui, K" uniqKey="Tsutsui K">K Tsutsui</name>
</author>
<author>
<name sortKey="Sakata, H" uniqKey="Sakata H">H Sakata</name>
</author>
<author>
<name sortKey="Naganuma, T" uniqKey="Naganuma T">T Naganuma</name>
</author>
<author>
<name sortKey="Taira, M" uniqKey="Taira M">M Taira</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nadler, Jw" uniqKey="Nadler J">JW Nadler</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liu, Y" uniqKey="Liu Y">Y Liu</name>
</author>
<author>
<name sortKey="Vogels, R" uniqKey="Vogels R">R Vogels</name>
</author>
<author>
<name sortKey="Orban, Ga" uniqKey="Orban G">GA Orban</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC Deangelis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Morgan, Ml" uniqKey="Morgan M">ML Morgan</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC Deangelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rogers, B" uniqKey="Rogers B">B Rogers</name>
</author>
<author>
<name sortKey="Graham, M" uniqKey="Graham M">M Graham</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bradshaw, Mf" uniqKey="Bradshaw M">MF Bradshaw</name>
</author>
<author>
<name sortKey="Rogers, Bj" uniqKey="Rogers B">BJ Rogers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nawrot, M" uniqKey="Nawrot M">M Nawrot</name>
</author>
<author>
<name sortKey="Blake, R" uniqKey="Blake R">R Blake</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Poom, L" uniqKey="Poom L">L Poom</name>
</author>
<author>
<name sortKey="Borjesson, E" uniqKey="Borjesson E">E Borjesson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Domini, F" uniqKey="Domini F">F Domini</name>
</author>
<author>
<name sortKey="Caudek, C" uniqKey="Caudek C">C Caudek</name>
</author>
<author>
<name sortKey="Tassinari, H" uniqKey="Tassinari H">H Tassinari</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
<author>
<name sortKey="Newsome, Wt" uniqKey="Newsome W">WT Newsome</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Preston, Tj" uniqKey="Preston T">TJ Preston</name>
</author>
<author>
<name sortKey="Li, S" uniqKey="Li S">S Li</name>
</author>
<author>
<name sortKey="Kourtzi, Z" uniqKey="Kourtzi Z">Z Kourtzi</name>
</author>
<author>
<name sortKey="Welchman, Ae" uniqKey="Welchman A">AE Welchman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nandy, As" uniqKey="Nandy A">AS Nandy</name>
</author>
<author>
<name sortKey="Tjan, Bs" uniqKey="Tjan B">BS Tjan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hillis, Jm" uniqKey="Hillis J">JM Hillis</name>
</author>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
<author>
<name sortKey="Landy, Ms" uniqKey="Landy M">MS Landy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kording, Kp" uniqKey="Kording K">KP Kording</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Popple, Av" uniqKey="Popple A">AV Popple</name>
</author>
<author>
<name sortKey="Smallman, Hs" uniqKey="Smallman H">HS Smallman</name>
</author>
<author>
<name sortKey="Findlay, Jm" uniqKey="Findlay J">JM Findlay</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Preston, Tj" uniqKey="Preston T">TJ Preston</name>
</author>
<author>
<name sortKey="Kourtzi, Z" uniqKey="Kourtzi Z">Z Kourtzi</name>
</author>
<author>
<name sortKey="Welchman, Ae" uniqKey="Welchman A">AE Welchman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tyler, Cw" uniqKey="Tyler C">CW Tyler</name>
</author>
<author>
<name sortKey="Likova, Lt" uniqKey="Likova L">LT Likova</name>
</author>
<author>
<name sortKey="Kontsevich, Ll" uniqKey="Kontsevich L">LL Kontsevich</name>
</author>
<author>
<name sortKey="Wade, Ar" uniqKey="Wade A">AR Wade</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meredith, Ma" uniqKey="Meredith M">MA Meredith</name>
</author>
<author>
<name sortKey="Stein, Be" uniqKey="Stein B">BE Stein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Avillac, M" uniqKey="Avillac M">M Avillac</name>
</author>
<author>
<name sortKey="Hamed, S Ben" uniqKey="Hamed S">S. Ben Hamed</name>
</author>
<author>
<name sortKey="Duhamel, Jr" uniqKey="Duhamel J">JR Duhamel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stanford, Tr" uniqKey="Stanford T">TR Stanford</name>
</author>
<author>
<name sortKey="Quessy, S" uniqKey="Quessy S">S Quessy</name>
</author>
<author>
<name sortKey="Stein, Be" uniqKey="Stein B">BE Stein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ma, Wj" uniqKey="Ma W">WJ Ma</name>
</author>
<author>
<name sortKey="Beck, Jm" uniqKey="Beck J">JM Beck</name>
</author>
<author>
<name sortKey="Latham, Pe" uniqKey="Latham P">PE Latham</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Preston, Tj" uniqKey="Preston T">TJ Preston</name>
</author>
<author>
<name sortKey="Kourtzi, Z" uniqKey="Kourtzi Z">Z Kourtzi</name>
</author>
<author>
<name sortKey="Welchman, Ae" uniqKey="Welchman A">AE Welchman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Orban, Ga" uniqKey="Orban G">GA Orban</name>
</author>
<author>
<name sortKey="Janssen, P" uniqKey="Janssen P">P Janssen</name>
</author>
<author>
<name sortKey="Vogels, R" uniqKey="Vogels R">R Vogels</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Parker, Aj" uniqKey="Parker A">AJ Parker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Backus, Bt" uniqKey="Backus B">BT Backus</name>
</author>
<author>
<name sortKey="Fleet, Dj" uniqKey="Fleet D">DJ Fleet</name>
</author>
<author>
<name sortKey="Parker, Aj" uniqKey="Parker A">AJ Parker</name>
</author>
<author>
<name sortKey="Heeger, Dj" uniqKey="Heeger D">DJ Heeger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chandrasekaran, C" uniqKey="Chandrasekaran C">C Chandrasekaran</name>
</author>
<author>
<name sortKey="Canon, V" uniqKey="Canon V">V Canon</name>
</author>
<author>
<name sortKey="Dahmen, Jc" uniqKey="Dahmen J">JC Dahmen</name>
</author>
<author>
<name sortKey="Kourtzi, Z" uniqKey="Kourtzi Z">Z Kourtzi</name>
</author>
<author>
<name sortKey="Welchman, Ae" uniqKey="Welchman A">AE Welchman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Orban, Ga" uniqKey="Orban G">GA Orban</name>
</author>
<author>
<name sortKey="Sunaert, S" uniqKey="Sunaert S">S Sunaert</name>
</author>
<author>
<name sortKey="Todd, Jt" uniqKey="Todd J">JT Todd</name>
</author>
<author>
<name sortKey="Van Hecke, P" uniqKey="Van Hecke P">P Van Hecke</name>
</author>
<author>
<name sortKey="Marchal, G" uniqKey="Marchal G">G Marchal</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Murray, So" uniqKey="Murray S">SO Murray</name>
</author>
<author>
<name sortKey="Olshausen, Ba" uniqKey="Olshausen B">BA Olshausen</name>
</author>
<author>
<name sortKey="Woods, Dl" uniqKey="Woods D">DL Woods</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Paradis, Al" uniqKey="Paradis A">AL Paradis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sereno, Me" uniqKey="Sereno M">ME Sereno</name>
</author>
<author>
<name sortKey="Trinath, T" uniqKey="Trinath T">T Trinath</name>
</author>
<author>
<name sortKey="Augath, M" uniqKey="Augath M">M Augath</name>
</author>
<author>
<name sortKey="Logothetis, Nk" uniqKey="Logothetis N">NK Logothetis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Durand, Jb" uniqKey="Durand J">JB Durand</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peuskens, H" uniqKey="Peuskens H">H Peuskens</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Orban, Ga" uniqKey="Orban G">GA Orban</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shmuel, A" uniqKey="Shmuel A">A Shmuel</name>
</author>
<author>
<name sortKey="Chaimow, D" uniqKey="Chaimow D">D Chaimow</name>
</author>
<author>
<name sortKey="Raddatz, G" uniqKey="Raddatz G">G Raddatz</name>
</author>
<author>
<name sortKey="Ugurbil, K" uniqKey="Ugurbil K">K Ugurbil</name>
</author>
<author>
<name sortKey="Yacoub, E" uniqKey="Yacoub E">E Yacoub</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kriegeskorte, N" uniqKey="Kriegeskorte N">N Kriegeskorte</name>
</author>
<author>
<name sortKey="Cusack, R" uniqKey="Cusack R">R Cusack</name>
</author>
<author>
<name sortKey="Bandettini, P" uniqKey="Bandettini P">P Bandettini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Op De Beeck, Hp" uniqKey="Op De Beeck H">HP Op de Beeck</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Freeman, J" uniqKey="Freeman J">J Freeman</name>
</author>
<author>
<name sortKey="Brouwer, Gj" uniqKey="Brouwer G">GJ Brouwer</name>
</author>
<author>
<name sortKey="Heeger, Dj" uniqKey="Heeger D">DJ Heeger</name>
</author>
<author>
<name sortKey="Merriam, Ep" uniqKey="Merriam E">EP Merriam</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Welchman, Ae" uniqKey="Welchman A">AE Welchman</name>
</author>
<author>
<name sortKey="Deubelius, A" uniqKey="Deubelius A">A Deubelius</name>
</author>
<author>
<name sortKey="Conrad, V" uniqKey="Conrad V">V Conrad</name>
</author>
<author>
<name sortKey="Bulthoff, Hh" uniqKey="Bulthoff H">HH Bülthoff</name>
</author>
<author>
<name sortKey="Kourtzi, Z" uniqKey="Kourtzi Z">Z Kourtzi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tjan, Bs" uniqKey="Tjan B">BS Tjan</name>
</author>
<author>
<name sortKey="Lestou, V" uniqKey="Lestou V">V Lestou</name>
</author>
<author>
<name sortKey="Kourtzi, Z" uniqKey="Kourtzi Z">Z Kourtzi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dupont, P" uniqKey="Dupont P">P Dupont</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Serences, Jt" uniqKey="Serences J">JT Serences</name>
</author>
<author>
<name sortKey="Boynton, Gm" uniqKey="Boynton G">GM Boynton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="De Martino, F" uniqKey="De Martino F">F De Martino</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kamitani, Y" uniqKey="Kamitani Y">Y Kamitani</name>
</author>
<author>
<name sortKey="Tong, F" uniqKey="Tong F">F Tong</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<pmc-dir>properties manuscript</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-journal-id">9809671</journal-id>
<journal-id journal-id-type="pubmed-jr-id">21092</journal-id>
<journal-id journal-id-type="nlm-ta">Nat Neurosci</journal-id>
<journal-id journal-id-type="iso-abbrev">Nat. Neurosci.</journal-id>
<journal-title-group>
<journal-title>Nature neuroscience</journal-title>
</journal-title-group>
<issn pub-type="ppub">1097-6256</issn>
<issn pub-type="epub">1546-1726</issn>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">22327475</article-id>
<article-id pub-id-type="pmc">3378632</article-id>
<article-id pub-id-type="doi">10.1038/nn.3046</article-id>
<article-id pub-id-type="manuscript">UKMS40862</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>The integration of motion and disparity cues to depth in dorsal visual cortex</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Ban</surname>
<given-names>Hiroshi</given-names>
</name>
<xref ref-type="aff" rid="A1">1</xref>
<xref ref-type="aff" rid="A2">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Preston</surname>
<given-names>Tim J</given-names>
</name>
<xref ref-type="aff" rid="A1">1</xref>
<xref ref-type="aff" rid="A3">3</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Meeson</surname>
<given-names>Alan</given-names>
</name>
<xref ref-type="aff" rid="A1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Welchman</surname>
<given-names>Andrew E</given-names>
</name>
<xref ref-type="aff" rid="A1">1</xref>
<xref ref-type="corresp" rid="CR1">*</xref>
</contrib>
</contrib-group>
<aff id="A1">
<label>1</label>
School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK</aff>
<aff id="A2">
<label>2</label>
Japan Society for the Promotion of Science, Tokyo 102-8472, Japan</aff>
<aff id="A3">
<label>3</label>
Department of Psychology, University of California, Santa Barbara, Santa Barbara, CA 93106-9660, USA</aff>
<author-notes>
<corresp id="CR1">
<label>*</label>
Corresponding author </corresp>
<fn id="FN1">
<p id="P35">
<bold>Author contributions</bold>
HB collected data, programmed stimuli, performed the analysis, wrote the simulations and prepared the work for publication; TJP collected data, programmed stimuli and performed preliminary analysis; AM wrote SVM analysis tools; AEW originated and designed the study, performed and guided analysis, wrote the simulations, prepared the work for publication and wrote the paper.</p>
</fn>
</author-notes>
<pub-date pub-type="nihms-submitted">
<day>24</day>
<month>1</month>
<year>2012</year>
</pub-date>
<pub-date pub-type="epub">
<day>12</day>
<month>2</month>
<year>2012</year>
</pub-date>
<pub-date pub-type="pmc-release">
<day>01</day>
<month>10</month>
<year>2012</year>
</pub-date>
<volume>15</volume>
<issue>4</issue>
<fpage>636</fpage>
<lpage>643</lpage>
<permissions>
<license>
<license-p>Users may view, print, copy, download and text and data- mine the content in such documents, for the purposes of academic research, subject always to the full Conditions of use:
<uri xlink:type="simple" xlink:href="http://www.nature.com/authors/editorial_policies/license.html#terms">http://www.nature.com/authors/editorial_policies/license.html#terms</uri>
</license-p>
</license>
</permissions>
<abstract>
<p id="P1">Humans exploit a range of visual depth cues to estimate three-dimensional (3D) structure. For example, the slant of a nearby tabletop can be judged by combining information from binocular disparity, texture and perspective. Behavioral tests show humans combine cues near-optimally, a feat that could depend on: (i) discriminating the outputs from cue-specific mechanisms, or (ii) fusing signals into a common representation. While fusion is computationally attractive, it poses a significant challenge, requiring the integration of quantitatively different signals. We used functional magnetic resonance imaging (fMRI) to provide evidence that dorsal visual area V3B/KO meets this challenge. Specifically, we found that fMRI responses are more discriminable when two cues (binocular disparity and relative motion) concurrently signal depth, and that information provided by one cue is diagnostic of depth indicated by the other. This suggests a cortical node important when perceiving depth, and highlights computations based on fusion in the dorsal stream.</p>
</abstract>
<funding-group>
<award-group>
<funding-source country="United Kingdom">Wellcome Trust : </funding-source>
<award-id>095183 || WT</award-id>
</award-group>
<award-group>
<funding-source country="United Kingdom">Biotechnology and Biological Sciences Research Council : </funding-source>
<award-id>BB/C520620/1 || BB_</award-id>
</award-group>
</funding-group>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="S1">
<title>Introduction</title>
<p id="P2">To achieve robust estimates of depth, the brain combines information from different visual cues
<sup>
<xref ref-type="bibr" rid="R1">1</xref>
-
<xref ref-type="bibr" rid="R3">3</xref>
</sup>
. Computational work proposes this produces more reliable estimates
<sup>
<xref ref-type="bibr" rid="R4">4</xref>
</sup>
and behavioral tests show it improves discriminability
<sup>
<xref ref-type="bibr" rid="R5">5</xref>
,
<xref ref-type="bibr" rid="R6">6</xref>
</sup>
. However, our understanding of the neural basis of integration is underdeveloped. Electrophysiological recordings suggest locations where depth signals converge
<sup>
<xref ref-type="bibr" rid="R7">7</xref>
-
<xref ref-type="bibr" rid="R9">9</xref>
</sup>
. Nevertheless, comparing the responses evoked by individual cues (e.g. disparity, perspective or motion-defined depth) presented ‘alone’ does not imply fusion—response characteristics might be dominated by one cue, or show opposite tuning rather than integration
<sup>
<xref ref-type="bibr" rid="R10">10</xref>
,
<xref ref-type="bibr" rid="R11">11</xref>
</sup>
.</p>
<p id="P3">Here we used human fMRI to test for cortical areas that integrate cues, rather than containing convergent information (i.e. co-located, independent signals). To this end, we exploited two cues to which the brain is remarkably sensitive: horizontal binocular disparity and depth from relative motion
<sup>
<xref ref-type="bibr" rid="R12">12</xref>
</sup>
. Psychophysical evidence for interactions between them
<sup>
<xref ref-type="bibr" rid="R13">13</xref>
-
<xref ref-type="bibr" rid="R16">16</xref>
</sup>
suggests common stages of processing; thus these cues provide a useful pairing to test fusion.</p>
<p id="P4">To frame the problem of cue integration, consider a solid object (e.g. ballerina) whose depth is defined by both disparity and motion (
<xref ref-type="fig" rid="F1">Fig. 1a</xref>
). An estimate of depth could be derived from each cue (quasi-) independently, defining a bivariate likelihood estimate in motion-disparity space. Thereafter, a fusion mechanism would produce a univariate ‘depth’ estimate with lower variance
<sup>
<xref ref-type="bibr" rid="R3">3</xref>
,
<xref ref-type="bibr" rid="R4">4</xref>
</sup>
. To probe this process, it is customary to measure discrimination performance; for instance, asking observers to judge which of two shapes has greater depth (e.g.
<xref ref-type="fig" rid="F1">Fig. 1b</xref>
‘Margot’
<italic>vs</italic>
. ‘Darcy’). There are two computationally distinct ways of solving this task: independence
<italic>vs</italic>
. fusion. Under independence, an ideal observer would discriminate the two bivariate distributions (
<xref ref-type="fig" rid="F1">Fig. 1b</xref>
green and purple blobs) orthogonal to the optimal decision boundary. By so doing, the observer is more sensitive to differences between the shapes than if they judged only one cue. This improvement corresponds to the quadratic sum of the marginal discriminabilities (
<xref ref-type="fig" rid="F1">Fig. 1b</xref>
: Motion, Disparity bars), and has an intuitive geometrical interpretation: by Pythagoras’ theorem, the separation between shapes is greater along the diagonal than along the component dimensions.</p>
<p id="P5">The alternative possibility is an optimal fusion mechanism that combines the component dimensions into a single (‘depth’) dimension. This reduces variance, thereby improving discriminability (
<xref ref-type="fig" rid="F1">Fig. 1b</xref>
: Fusion bar). Disparity and motion typically signal the same structure, making the predictions of independence and fusion equivalent (
<xref ref-type="fig" rid="F1">Fig. 1b</xref>
). However, the alternatives are dissociated by manipulating the viewed shapes experimentally (
<xref ref-type="fig" rid="F1">Fig. 1c,d</xref>
), to effect different predictions for independence (
<xref ref-type="fig" rid="F1">Fig. 1e</xref>
) and fusion (
<xref ref-type="fig" rid="F1">Fig. 1f</xref>
).</p>
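
For congruent cues the fusion prediction coincides with quadratic summation: under optimal fusion, reliabilities (1/σ²) add, and with sensitivity S = 1/σ the fused sensitivity equals √(S_D² + S_M²). A hypothetical numerical check:

    import numpy as np

    s_d, s_m = 1.2, 0.8                      # hypothetical single-cue sensitivities
    var_d, var_m = 1 / s_d**2, 1 / s_m**2    # corresponding variances
    var_fused = 1 / (1 / var_d + 1 / var_m)  # optimal fusion: reliabilities add
    print(1 / np.sqrt(var_fused))            # ~1.44, equal to the quadratic sum
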
<p id="P6">Here we tested for cue integration at the levels of behavior and fMRI responses. We presented a central plane that was nearer or farther than its surround (
<xref ref-type="fig" rid="F2">Fig. 2a</xref>
). When viewing this stimulus, some neurons will respond to ‘near’ positions and others ‘far’
<sup>
<xref ref-type="bibr" rid="R17">17</xref>
</sup>
, producing a dissociable pattern of activity. fMRI measures this activity at the scale of neuronal populations; nevertheless multivoxel pattern analysis (MVPA) provides a sensitive tool to reveal depth selectivity in human cortex
<sup>
<xref ref-type="bibr" rid="R18">18</xref>
</sup>
. Here we decoded fMRI responses evoked when viewing ‘near’ or ‘far’ depths defined by binocular disparity, relative motion, and these signals in combination.</p>
<p id="P7">We developed three tests for integration. First, we assessed whether discrimination performance in combined cue settings exceeds quadratic summation. Our logic was that a fusion mechanism is compromised when ‘single’ cues are presented (
<xref ref-type="fig" rid="F1">Fig. 1c</xref>
). For example, a ‘single’ cue disparity stimulus contains motion information that the viewed surface is flat, depressing performance (contrast single cues in
<xref ref-type="fig" rid="F1">Fig. 1e
<italic>vs</italic>
. f</xref>
). Thus, if ‘single’ cue data are used to derive a prediction for the concurrent stimulus, measured performance will exceed quadratic summation. We used this test to establish a minimum bound for fusion, as considerations of fMRI signal generation and measurement (e.g. scanner noise) entail that this test cannot rule out independence (see Discussion). Second, we determined whether improved performance is specific to congruent cues (
<xref ref-type="fig" rid="F1">Fig. 1e
<italic>vs</italic>
. f</xref>
). An independence mechanism should be unaffected by incongruency (
<xref ref-type="fig" rid="F1">Fig. 1d</xref>
) as quadratic summation ignores the sign of differences. However, a fusion mechanism would be affected: a strict fusion mechanism would be insensitive, while a robust mechanism would revert to a single component. Third, motivated by psychophysical reports of cross-adaptation between cues
<sup>
<xref ref-type="bibr" rid="R13">13</xref>
-
<xref ref-type="bibr" rid="R15">15</xref>
</sup>
, we determined whether depth from one cue (e.g. disparity) is diagnostic of depth from the other (e.g. motion).</p>
<p id="P8">To foreshadow our findings, we found that decoding fMRI responses from area V3B/KO surpasses the minimum bound, was specific for consistent depth cues, and supported a transfer between cues. This suggests a region involved in representing depth from integrated cues, whose activity may underlie improved behavioral performance in multi-cue settings.</p>
</sec>
<sec sec-type="results" id="S2">
<title>Results</title>
<sec id="S3">
<title>Psychophysics</title>
<p id="P9">We presented participants with random dot patterns (
<xref ref-type="fig" rid="F2">Fig. 2b</xref>
) depicting depth from: (1) binocular disparity, (2) relative motion and (3) the combination of disparity and motion. To test for integration psychophysically, we presented two stimuli sequentially with a slight depth difference between them and participants decided which had the greater depth (i.e. which was nearer or farther depending on whether near or far stimuli were shown). Using a staircase procedure, we assessed observers’ sensitivity under four conditions by measuring just noticeable difference (j.n.d.) thresholds (
<xref ref-type="fig" rid="F2">Fig. 2c</xref>
). We found that observers were most sensitive when disparity and motion concurrently signaled depth differences, and least sensitive for motion-defined differences. Using performance in the ‘single’ cue (disparity; motion) conditions, we generated a quadratic summation prediction for the combined cue (disparity and motion) case. In line with the expectations of fusion, performance for congruent cues exceeded quadratic summation (F
<sub>1,6</sub>
=8.16;
<italic>p</italic>
=0.015). Moreover, when disparity and motion were incongruent, sensitivity was lower (F
<sub>1,6</sub>
=11.07;
<italic>p</italic>
=0.016) and comparable to performance in the ‘single cue’ disparity condition (F
<sub>1,6</sub>
<1;
<italic>p</italic>
=0.809). To quantify this effect, we calculated a psychophysical integration index (
<italic>ψ</italic>
):
<disp-formula id="FD1">
<label>(1)</label>
<mml:math display="block" id="M1" overflow="scroll">
<mml:mrow>
<mml:mi>ψ</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>M</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:msubsup>
<mml:mi>S</mml:mi>
<mml:mi>D</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>S</mml:mi>
<mml:mi>M</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:msqrt>
</mml:mfrac>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</disp-formula>
where
<italic>S
<sub>D+M</sub>
</italic>
is the observer’s sensitivity (1/j.n.d.) in the combined condition, and
<italic>S
<sub>D</sub>
and S
<sub>M</sub>
</italic>
correspond to sensitivity in the ‘single cue’ conditions (cf.
<sup>
<xref ref-type="bibr" rid="R19">19</xref>
</sup>
). A value of zero indicates the minimum bound for fusion (i.e. quadratic sum). Bootstrapping the index revealed that observers’ sensitivity exceeded the minimum bound for consistent (
<italic>p</italic>
<0.001) but not inconsistent (
<italic>p</italic>
=0.865) cue conditions. Additional tests (
<xref ref-type="supplementary-material" rid="SD1">Supplementary Fig 1 online</xref>
) provided further psychophysical evidence of cue integration.</p>
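
A minimal sketch of how the index in equation (1) and its bootstrap test might be computed; the per-observer sensitivities below are hypothetical stand-ins for the staircase-derived 1/j.n.d. values:

    import numpy as np

    rng = np.random.default_rng(0)

    def integration_index(s_dm, s_d, s_m):
        # Equation (1): psi = S_{D+M} / sqrt(S_D^2 + S_M^2) - 1
        return s_dm / np.sqrt(s_d**2 + s_m**2) - 1

    # Hypothetical sensitivities (columns: combined, disparity, motion)
    S = np.array([[1.9, 1.3, 0.9],
                  [2.1, 1.4, 1.0],
                  [1.8, 1.2, 0.8]])

    # Bootstrap over observers: resample rows, recompute the group index
    boot = np.array([integration_index(*S[rng.integers(0, len(S), len(S))].mean(axis=0))
                     for _ in range(10000)])
    print(boot.mean(), (boot <= 0).mean())  # index and one-sided p vs. zero
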
</sec>
<sec id="S4">
<title>fMRI quadratic summation</title>
<p id="P10">To examine the neural basis of disparity and motion integration, we measured fMRI responses in independently localized regions of interest (ROIs) (
<xref ref-type="fig" rid="F3">Fig. 3</xref>
). We then used multivariate pattern analysis (MVPA) to determine which areas contained fMRI signals that enabled a support vector machine (SVM) to discriminate reliably between targets presented closer or farther than the fixation plane.</p>
<p id="P11">Both disparity- and motion-defined depth were decoded reliably, and there was a clear interaction between conditions and areas (
<xref ref-type="fig" rid="F4">Fig. 4a</xref>
; F
<sub>7.1,135.1</sub>
=6.50;
<italic>p</italic>
<0.001). However, our principal interest was not in ‘single’ cue processing, or in contrasting overall prediction accuracies between areas (these are influenced by a range of non-neuronal factors). Rather, we were interested in relative performance under conditions in which disparity and motion concurrently signaled depth. Prediction accuracies for the concurrent stimulus were statistically higher than the component cue accuracies in areas V3A (F
<sub>2,38</sub>
=7.07;
<italic>p</italic>
=0.002) and V3B/KO (F
<sub>1.5,28.9</sub>
=14.35;
<italic>p<</italic>
0.001). To assess integration, we calculated the minimum bound prediction (red lines in
<xref ref-type="fig" rid="F4">Fig. 4a</xref>
) based on quadratic summation. We found that fMRI responses in V3B/KO supported decoding performance that exceeded the minimum bound (F
<sub>1,19</sub>
=4.99,
<italic>p</italic>
=.019), but not elsewhere. We quantified this effect across areas using an fMRI integration index (
<italic>ϕ</italic>
):
<disp-formula id="FD2">
<label>(2)</label>
<mml:math display="block" id="M2" overflow="scroll">
<mml:mrow>
<mml:mi>ϕ</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mo>′</mml:mo>
</mml:msubsup>
</mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:msubsup>
<mml:mi>d</mml:mi>
<mml:mi>D</mml:mi>
<mml:mo>′</mml:mo>
</mml:msubsup>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:msubsup>
<mml:mi>d</mml:mi>
<mml:mi>M</mml:mi>
<mml:mo>′</mml:mo>
</mml:msubsup>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mfrac>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</disp-formula>
where
<inline-formula>
<mml:math display="inline" id="M3" overflow="scroll">
<mml:mrow>
<mml:msubsup>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mo>′</mml:mo>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
is the
classifier’s performance in the congruent condition, and
<inline-formula>
<mml:math display="inline" id="M7" overflow="scroll">
<mml:mrow>
<mml:msubsup>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mi>D</mml:mi>
</mml:mrow>
<mml:mo>′</mml:mo>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math display="inline" id="M8" overflow="scroll">
<mml:mrow>
<mml:msubsup>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mo>′</mml:mo>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
are performance for ‘single’ cue conditions. The values of
<italic>ϕ</italic>
differed between areas (
<xref ref-type="fig" rid="F3">Fig. 3b</xref>
;
<italic>F</italic>
<sub>4.5,86.6</sub>
=3.14,
<italic>p</italic>
=0.014), with a value significantly above zero only in V3B/KO (
<xref ref-type="table" rid="T1">Table 1</xref>
). This suggests an area in which improved decoding performance may result from the fusion of disparity and motion (although this test cannot rule out independence).</p>
<p id="P12">A possible concern is that there is a gain change in the fMRI response when testing disparity and motion concurrently relative to ‘single’ cues, and this enhances decoding accuracy (e.g. in V3B/KO). However, fMRI signals in each ROI (
<xref ref-type="supplementary-material" rid="SD1">Supplementary Fig. 2a online</xref>
) showed no evidence for reliable differences in responsiveness between conditions (
<italic>F</italic>
<sub>2,38</sub>
=2.51,
<italic>p</italic>
=0.094). Another possibility is that fMRI noise is reduced when cues concurrently signal depth, supporting better decoding. To assess this possibility, we created a composite dataset by averaging raw fMRI responses from the ‘single’ cue conditions. However, prediction accuracies were lower for this composite dataset than for the concurrent condition in V3B/KO, indicating that a simple noise reduction did not explain the result (
<xref ref-type="supplementary-material" rid="SD1">Supplementary Fig. 2b online</xref>
;
<italic>F</italic>
<sub>4.9,93.8</sub>
=3.74,
<italic>p</italic>
=0.004).</p>
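
The composite control can be sketched as follows (hypothetical data): averaging the raw single-cue patterns reduces independent voxel noise by √2, so decoding the composite quantifies how much of any concurrent-cue advantage mere noise reduction could produce:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)

    labels = np.repeat([0, 1], 40)
    pref = rng.normal(0, 1, 200)                        # shared depth signal
    disparity = labels[:, None] * pref + rng.normal(0, 4, (80, 200))
    motion = labels[:, None] * pref + rng.normal(0, 4, (80, 200))

    # Composite: average raw responses across the two 'single' cue conditions
    composite = (disparity + motion) / 2                # noise sd: 4 -> 4/sqrt(2)
    print(cross_val_score(SVC(kernel="linear"), composite, labels, cv=8).mean())
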
</sec>
<sec id="S5">
<title>Congruent vs. incongruent cues</title>
<p id="P13">To provide a stronger test for integration, we manipulated both disparity and motion, but placed these cues in extreme conflict (i.e., an exaggerated conflict over our ‘single’ cue conditions). For each stimulus, one cue signaled ‘near’ and the other ‘far’ (
<xref ref-type="fig" rid="F1">Fig. 1d</xref>
). If depth from the two cues is independent, this manipulation should have no effect. (Note that the SVM distinguishes the stimulus classes that evoked voxel responses, thus an objectively correct answer exists for the classifier).</p>
<p id="P14">Consistent with the idea that V3B/KO fuses signals, discrimination performance was significantly lower when motion and disparity conflicted (
<xref ref-type="fig" rid="F5">Fig. 5a</xref>
;
<xref ref-type="table" rid="T1">Table 1</xref>
), with accuracy falling to the level of the ‘single’ cue components. There was a significant difference between congruent and incongruent conditions (
<italic>F</italic>
<sub>1,6</sub>
=7.49,
<italic>p</italic>
=0.034), but no significant difference between the incongruent condition and the ‘single’ cue disparity (
<italic>F</italic>
<sub>1,6</sub>
<1,
<italic>p</italic>
=0.62) or relative motion (
<italic>F</italic>
<sub>1,6</sub>
=1.13,
<italic>p</italic>
=0.33) conditions. This robust behavior in the face of extreme conflicts matches perception: conflicts are accommodated within bounds, but thereafter one component is ignored
<sup>
<xref ref-type="bibr" rid="R20">20</xref>
</sup>
. Our participants relied on disparity when perceiving the incongruent stimulus (
<xref ref-type="fig" rid="F2">Fig. 2c,d</xref>
). Other visual areas (notably V3v, V3d and V3A), also supported lower prediction accuracies for the incongruent cues (
<xref ref-type="fig" rid="F5">Fig. 5a</xref>
), although these differences were not statistically reliable (
<xref ref-type="table" rid="T1">Table 1</xref>
).</p>
</sec>
<sec id="S6">
<title>Transfer test</title>
<p id="P15">To obtain a further test for similarities in responses to the two cues, we asked whether depth information provided by one cue (e.g. disparity) is diagnostic of depth indicated by the other (e.g. motion). We performed a cross-cue transfer test whereby we trained a classifier to discriminate depth configurations using one cue, and tested the classifier’s predictions for data obtained when depth was indicated by the other cue.</p>
<p id="P16">To accompany this analysis, we employed a control condition that addressed differences in average velocity that arose from the relative motion stimuli. In particular, when we presented motion-defined depth, the classifier might have discriminated movement speed rather than depth position (this likely explains high accuracies for motion in early visual areas,
<xref ref-type="fig" rid="F4">Fig 4a</xref>
). To control for speed differences, we presented stimuli in which the central target region moved with a fast or slow velocity, but there was no moving background, meaning that participants had no impression of relative depth. We reasoned that an area showing a response specific to depth would show transfer between relative motion and disparity, but not between the motion control and disparity.</p>
<p id="P17">We observed a significant interaction between accuracy in the transfer tests across regions of interest (
<xref ref-type="fig" rid="F5">Fig. 5b</xref>
;
<italic>F</italic>
<sub>9,63</sub>
=3.88,
<italic>p</italic>
=0.001). In particular, higher responses for the depth transfer (disparity-relative motion) than the control (disparity-control) were significant in areas V4, V3d and V3B/KO (
<xref ref-type="table" rid="T2">Table 2</xref>
). To assess the relationship between transfer classification performance (
<inline-formula>
<mml:math display="inline" id="M4" overflow="scroll">
<mml:mrow>
<mml:msubsup>
<mml:mi>d</mml:mi>
<mml:mi>T</mml:mi>
<mml:mo>′</mml:mo>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
) and the mean performance for the component cues (i.e.
<inline-formula>
<mml:math display="inline" id="M5" overflow="scroll">
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mn>2</mml:mn>
</mml:mfrac>
<mml:mo>(</mml:mo>
<mml:msubsup>
<mml:mi>d</mml:mi>
<mml:mi>D</mml:mi>
<mml:mo>′</mml:mo>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>d</mml:mi>
<mml:mi>M</mml:mi>
<mml:mo>′</mml:mo>
</mml:msubsup>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
), we calculated a bootstrapped transfer index.
<disp-formula id="FD3">
<label>(3)</label>
<mml:math display="block" id="M6" overflow="scroll">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msubsup>
<mml:mi>d</mml:mi>
<mml:mi>T</mml:mi>
<mml:mo>′</mml:mo>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>d</mml:mi>
<mml:mi>D</mml:mi>
<mml:mo>′</mml:mo>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>d</mml:mi>
<mml:mi>M</mml:mi>
<mml:mo>′</mml:mo>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
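
A sketch of the cross-cue transfer test and equation (3); the simulated patterns share a common 'depth' axis across cues, which is the situation in which transfer should succeed (all values hypothetical):

    import numpy as np
    from scipy.stats import norm
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)

    labels = np.repeat([0, 1], 40)
    depth_axis = rng.normal(0, 1, 200)                  # shared across cues
    disparity = labels[:, None] * depth_axis + rng.normal(0, 3, (80, 200))
    motion = labels[:, None] * depth_axis + rng.normal(0, 3, (80, 200))

    # Train on disparity-defined depth, test on motion-defined depth
    clf = SVC(kernel="linear").fit(disparity, labels)
    acc_transfer = clf.score(motion, labels)

    dprime = lambda a: 2 * norm.ppf(np.clip(a, 0.501, 0.999))
    acc_d = cross_val_score(SVC(kernel="linear"), disparity, labels, cv=8).mean()
    acc_m = cross_val_score(SVC(kernel="linear"), motion, labels, cv=8).mean()

    # Equation (3): T = 2 d'_T / (d'_D + d'_M); values near 1 mean full transfer
    T = 2 * dprime(acc_transfer) / (dprime(acc_d) + dprime(acc_m))
    print(acc_transfer, T)
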
<p id="P18">This suggested that transfer test performance was most similar to within-cue decoding in area V3B/KO (
<xref ref-type="fig" rid="F5">Fig. 5c</xref>
). Specifically, transfer performance was around 80% of that obtained when training and testing on the same stimuli. To assess the amount of transfer that arises by chance, we conducted the transfer test on randomly permuted data (1000 tests per area). This baseline value (dotted horizontal lines in
<xref ref-type="fig" rid="F5">Fig 5c</xref>
) indicated that transfer between cues was significant in areas V3d and V3B/KO (
<xref ref-type="table" rid="T2">Table 2</xref>
). In conjunction with our previous findings, this suggests that responses in V3B/KO relate to a more generic representation of depth.</p>
</sec>
<sec id="S7">
<title>Decoding simulated populations</title>
<p id="P19">So far, we have considered two extreme scenarios: independence
<italic>vs</italic>
. fusion. However, there are computational and empirical reasons to believe that responses might lie between these poles. Computationally, it is attractive to estimate depth based on both (a) fusion and (b) independence, to determine whether or not cues should be integrated
<sup>
<xref ref-type="bibr" rid="R21">21</xref>
</sup>
. Empirically, it is unlikely we sampled voxels that respond only to fused signals as our region of interest localizers were standardized tests that do not target fusion. Thus, it is probable that some voxels (i.e. within V3B/KO) do not reflect integrated cues. To evaluate how a population mixture might affect decoding results, we used simulations to vary systematically the composition of the neuronal population. We decoded simulated voxels whose activity reflected neural maps based on (i) fused depth, (ii) interdigitated, independent maps for disparity and motion and (iii) a mixture of the two.</p>
<p id="P20">First, to characterize how different parameters affected these simulations, we tested a range of columnar arrangements for disparity and motion, different amounts of voxel and neuronal noise, and different relative reliabilities for the disparity and motion cues (
<xref ref-type="supplementary-material" rid="SD1">Supplementary Figs. 5</xref>
,
<xref ref-type="supplementary-material" rid="SD1">6 online</xref>
). We chose parameter values that matched our fMRI data as closely as possible (e.g., signal-to-noise ratio) and corresponded to published data (e.g. spatial period of disparity representations
<sup>
<xref ref-type="bibr" rid="R17">17</xref>
</sup>
). These simulations demonstrated the experimental logic, confirming that fused cues surpass quadratic summation (
<xref ref-type="supplementary-material" rid="SD1">Supplementary Fig. 5b online</xref>
), and independent representations are unaffected by large conflicts and do not support transfer (
<xref ref-type="supplementary-material" rid="SD1">Supplementary Fig. 6c online</xref>
). Second, we explored the composition of the neuronal population, comparing our simulation results to our empirical data (
<xref ref-type="fig" rid="F6">Fig. 6</xref>
). We found a close correspondence between the fMRI decoding data from V3B/KO and a simulated population in which 50-70% of the neuronal population fuses cues (50% for strict fusion, 70% for robust fusion, based on minimizing the χ
<sup>2</sup>
statistic).</p>
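
A much-simplified sketch of the population-mixture simulation: each voxel pools many units, a proportion p of which are 'fused' (more reliable because both cues drive them), and the resulting patterns are decoded as before. Every parameter here is illustrative, not the published simulation:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)

    def simulate_roi(p_fused, n_blocks=80, n_voxels=200, n_units=100):
        labels = np.repeat([-1.0, 1.0], n_blocks // 2)    # near / far
        X = np.zeros((n_blocks, n_voxels))
        n_fused = int(p_fused * n_units)
        for v in range(n_voxels):
            pref = rng.choice([-1.0, 1.0], n_units)       # unit depth tuning
            # fused units are less noisy (they pool two congruent cues)
            sd = np.where(np.arange(n_units) < n_fused, 1.0, 2.0)
            drive = labels[:, None] * pref[None, :]
            X[:, v] = (drive + rng.normal(0, sd, (n_blocks, n_units))).mean(axis=1)
        return X + rng.normal(0, 0.05, X.shape), (labels > 0).astype(int)

    for p in (0.0, 0.5, 1.0):                             # mixture proportions
        X, y = simulate_roi(p)
        print(p, cross_val_score(SVC(kernel="linear"), X, y, cv=8).mean())
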
</sec>
<sec id="S8">
<title>Control analyses</title>
<p id="P21">During scanning we took precautions to reduce the possibility of artifacts. First, we introduced a demanding task at fixation to ensure equivalent attentional allocation across conditions (
<xref ref-type="supplementary-material" rid="SD1">Supplementary Fig. 3 online</xref>
). Second, measurements of functional signal-to-noise ratio (fSNR) for each area (
<xref ref-type="supplementary-material" rid="SD1">Supplementary Fig. 2c online</xref>
) showed that differences in prediction accuracy related to stimulus-specific processing rather than the overall fMRI responsiveness. That is, fSNR was highest in the early visual areas rather than higher areas that showed fusion. Finally, eye movements are unlikely to account for our findings as we outline below.</p>
<p id="P22">First, while we could not measure eye vergence objectively in the scanner, the attentional task
<sup>
<xref ref-type="bibr" rid="R22">22</xref>
</sup>
showed that participants maintained vergence well (
<xref ref-type="supplementary-material" rid="SD1">Supplementary Fig. 3 online</xref>
) with no reliable differences between conditions. Second, our stimuli were designed to reduce vergence changes: a low spatial frequency pattern surrounded the stimuli, and participants used horizontal and vertical nonius lines to promote correct eye alignment. Together with previous control data using similar disparities
<sup>
<xref ref-type="bibr" rid="R23">23</xref>
</sup>
, this suggests vergence differences could not explain our results. Third, monocular eye movement recordings suggested little systematic difference between conditions (
<xref ref-type="supplementary-material" rid="SD1">Supplementary Fig. 4 online</xref>
). Moreover, we showed that an SVM could not discriminate near
<italic>vs</italic>
. far positions reliably based on eye position, suggesting patterns of eye movement did not contain systematic information about depth positions (
<xref ref-type="supplementary-material" rid="SD1">Supplementary Fig. 4 online</xref>
).</p>
</sec>
</sec>
<sec sec-type="discussion" id="S9">
<title>Discussion</title>
<p id="P23">Estimating 3D structure in a robust and reliable manner is a principle goal of the visual system. A computationally attractive means of achieving this goal is to fuse information provided from two or more signals, so that the composite is more precise than its constituents. Despite considerable interest in this topic, comparatively little is known about the cortical circuits involved. Here we demonstrate that visual area V3B/KO may be important in this process, and propose that fusion is an important computation performed by the dorsal visual stream.</p>
<p id="P24">First we showed that fMRI signals from area V3B/KO are more discriminable when two cues concurrently signal depth, and this improvement exceeds the minimum bound expected for fusion. Second, we showed that improved performance is specific to congruent cues: presenting highly inconsistent disparity and motion information did not improve discriminability. This follows the predictions of integration, and matched perceptual judgments, but is not expected if disparity and motion signals are co-located, but independent. A potential issue of concern is whether the discrimination of brain signals relates to depth
<italic>per se</italic>
, or less interesting low-level correlates (e.g. speed of movement). We showed that while information about relative motion is diagnostic of depth from disparity, these cross-cue transfer effects are not found between perceptually-flat motion and disparity-defined depth. These results suggest a potential neural locus for interactions between disparity and motion depth cues demonstrated in threshold
<sup>
<xref ref-type="bibr" rid="R13">13</xref>
</sup>
and suprathreshold psychophysical tasks
<sup>
<xref ref-type="bibr" rid="R14">14</xref>
,
<xref ref-type="bibr" rid="R15">15</xref>
</sup>
. More generally, they highlight V3B/KO as an area that may play an important role in integrating cues to estimate depth.</p>
<p id="P25">While our results point clearly to area V3B/KO, our different analyses (
<xref ref-type="fig" rid="F4">Fig. 4</xref>
: quadratic summation;
<xref ref-type="fig" rid="F5">Fig. 5</xref>
: congruent
<italic>vs</italic>
. incongruent, transfer test) suggested responses in other areas (i.e. V3, V3A) that, although not significant, might also relate to fusion. It is possible that our tests were not sufficiently sensitive to reveal fusion in these (or other) areas for which we have a null result; for instance, decoding accuracies for the motion condition were high in some areas, so responses in the congruent condition may have been near ceiling, limiting detection. However, an interesting alternative is that responses in these earlier areas represent an intermediate depth representation in which links between disparity and motion are not fully established. Previously it was suggested that the Kinetic Occipital (KO) area is specialized for depth structure
<sup>
<xref ref-type="bibr" rid="R24">24</xref>
</sup>
, and is functionally distinct from V3B. Using independent localizer scans, we do not find a reliable means of delineating V3B from KO. However, to check we were not mischaracterizing responses, we examined the spatial distribution of voxels chosen by the classifier. We found that chosen voxels were distributed throughout V3B/KO and did not cluster into subregions (
<xref ref-type="supplementary-material" rid="SD1">Supplementary Fig. 7 online</xref>
).</p>
<sec sec-type="results" id="S10">
<title>Relation between psychophysical and fMRI results</title>
<p id="P26">While results in V3B/KO are consistent with behavioral evidence for fusion, there is a difference in that sensitivity to the ‘single’ cues differs at the behavioral level (
<xref ref-type="fig" rid="F2">Fig. 2</xref>
) but not at the decoding level (
<xref ref-type="fig" rid="F3">Fig. 3</xref>
). From psychophysical results
<sup>
<xref ref-type="bibr" rid="R13">13</xref>
</sup>
, higher sensitivity to disparity-defined depth is expected. However, this would not necessarily translate to decoding differences. Specifically, our behavioral task measured increment thresholds (sensitivity to small depth differences) while fMRI stimuli were purposefully
<sup>
<xref ref-type="bibr" rid="R18">18</xref>
</sup>
suprathreshold (the difference between ‘near’ and ‘far’ stimuli was very apparent). Thus, while clear parallels can be drawn between tests for integration at the psychophysical and fMRI levels, necessary differences between the paradigms make it difficult to compare the magnitudes of the effects directly.</p>
<p id="P27">Further, multi-sensory integration effects for single unit recordings are reported to be highly non-linear near threshold
<sup>
<xref ref-type="bibr" rid="R25">25</xref>
</sup>
, but more additive or subadditive with suprathreshold stimuli
<sup>
<xref ref-type="bibr" rid="R11">11</xref>
,
<xref ref-type="bibr" rid="R26">26</xref>
,
<xref ref-type="bibr" rid="R27">27</xref>
</sup>
. Our use of suprathreshold stimuli makes it unsurprising that we did not observe significant changes in overall fMRI responses (
<xref ref-type="supplementary-material" rid="SD1">Supplementary Fig. 2 online</xref>
). Moreover, it is important to note that we have not attempted to ‘add’ and ‘subtract’ cues (e.g., our ‘single’ cue relative motion stimulus contained disparity information indicating that the viewed display was flat). Our manipulation purposefully changes the degree of conflict between cues, thereby establishing a minimum bound for fusion. While useful, testing against this bound alone cannot preclude independence. Specifically, fused cues should have reduced neuronal variability
<sup>
<xref ref-type="bibr" rid="R28">28</xref>
</sup>
; however, fMRI measures of this activity aggregate responses and are subject to additional noise (e.g. participant movement and scanner noise). Depending on the amount of noise, decoding of independent representations can surpass the minimum bound (
<xref ref-type="supplementary-material" rid="SD1">Supplementary Fig. 5 online</xref>
). The subsequent tests we develop (incongruent cues; transfer test) are therefore important in confirming the results.</p>
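<p>To see why noisy aggregate measurements complicate this test, the toy simulation below (ours; the voxel counts, signal strengths and noise levels are arbitrary assumptions) decodes co-located but independent disparity and motion signals and compares the combined-cue accuracy with the quadratic-summation bound computed from the measured single-cue accuracies:</p>
<preformat>
import numpy as np
from scipy.stats import norm
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
N_VOX, N_TRIALS = 120, 240
W_DISP = rng.normal(0, 1, N_VOX)  # fixed per-voxel bias for each cue
W_MOT = rng.normal(0, 1, N_VOX)

def decode(sig_disp, sig_mot, noise):
    # Independent, co-located signals: each cue drives its own weights.
    y = rng.integers(0, 2, N_TRIALS)
    s = np.where(y == 0, -1.0, 1.0)
    X = (np.outer(s * sig_disp, W_DISP) + np.outer(s * sig_mot, W_MOT)
         + rng.normal(0, noise, (N_TRIALS, N_VOX)))
    return cross_val_score(LinearSVC(dual=False), X, y, cv=5).mean()

def dprime(p):
    return 2.0 * norm.ppf(np.clip(p, 1e-6, 1 - 1e-6))

for noise in (2.0, 6.0):
    a_d, a_m = decode(0.15, 0.0, noise), decode(0.0, 0.15, noise)
    a_both = decode(0.15, 0.15, noise)
    bound = norm.cdf(np.hypot(dprime(a_d), dprime(a_m)) / 2.0)
    print(f"noise={noise}: combined={a_both:.2f}, bound={bound:.2f}")
</preformat>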
<p id="P28">Finally, we outlined two variants for the fusion of strongly conflicting cues: strict
<italic>vs</italic>
. robust (
<xref ref-type="fig" rid="F1">Fig. 1d</xref>
). Behaviorally, we found evidence for robust fusion: sensitivity in the incongruent cue condition matched the disparity condition (
<xref ref-type="fig" rid="F2">Fig. 2c</xref>
), and perceived depth relied on disparity (
<xref ref-type="fig" rid="F2">Fig. 2d</xref>
). This was compatible with fMRI results in V3B/KO (
<xref ref-type="fig" rid="F5">Fig. 5a</xref>
), where performance dropped to the level of ‘single’ cues. However, we developed a further test of robust fusion: if responses in V3B/KO reflect robust perception, the classifier’s predictions might reverse for incongruent stimuli. That is, if depth is decoded at the perceptual level, training the classifier on ‘near’ motion may predict a ‘near’ perceptual interpretation of the incongruent stimulus, even though motion signals ‘far’. We did not find a reversal of discrimination performance (
<xref ref-type="fig" rid="F6">Fig. 6c</xref>
); however, performance was considerably reduced, suggesting an attenuated response. While this result
<italic>per se</italic>
does not match robust fusion, intriguingly it is compatible with a population mechanism for robust perception. In particular, depth estimation can be understood as causal inference
<sup>
<xref ref-type="bibr" rid="R21">21</xref>
</sup>
in which the brain computes depth ‘both ways’ – i.e. there is a mixed population that contains units tuned to (a) independent cues and (b) fused cues. A readout mechanism then selects one of the competing interpretations, using the relative reliability of the fused
<italic>vs</italic>
. independent models. This idea is compatible with our simulations of a mixed population in V3B/KO (
<xref ref-type="fig" rid="F6">Fig. 6c</xref>
) and previous work that suggests V3B/KO plays an important role in selecting among competing depth interpretations
<sup>
<xref ref-type="bibr" rid="R29">29</xref>
</sup>
.</p>
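<p>The following sketch illustrates this readout idea (our toy formulation of causal inference, not the model fitted in the paper; the prior probability of a common cause and the uniform density for separate causes are assumed values):</p>
<preformat>
import numpy as np
from scipy.stats import norm

def readout(x_d, x_m, sd, sm, p_common=0.5):
    # Common-cause likelihood: with a broad prior over depth it depends
    # mainly on the discrepancy between the two cue estimates.
    like_c1 = norm.pdf(x_d - x_m, loc=0.0, scale=np.hypot(sd, sm))
    like_c2 = 1.0 / 10.0  # assumed: separate causes uniform over 10 units
    post_c1 = (p_common * like_c1
               / (p_common * like_c1 + (1 - p_common) * like_c2))
    fused = (x_d / sd**2 + x_m / sm**2) / (1 / sd**2 + 1 / sm**2)
    # Select the fused interpretation when a common cause is more
    # probable; otherwise fall back on the more reliable cue (disparity).
    return fused if post_c1 > 0.5 else x_d

print(readout(x_d=1.0, x_m=1.2, sd=0.5, sm=0.8))   # congruent -> fused
print(readout(x_d=1.0, x_m=-1.0, sd=0.5, sm=0.8))  # incongruent -> disparity
</preformat>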
</sec>
<sec id="S11">
<title>Cortical organization for depth processing</title>
<p id="P29">While there is comparatively little work on neural representation of depth from integrated visual cues, individual cues have been studied quite extensively. Responses to binocular disparity are observed through occipital, temporal and parietal cortices
<sup>
<xref ref-type="bibr" rid="R30">30</xref>
,
<xref ref-type="bibr" rid="R31">31</xref>
</sup>
and there are links between the perception of depth from disparity and fMRI responses in dorsal and ventral areas
<sup>
<xref ref-type="bibr" rid="R18">18</xref>
,
<xref ref-type="bibr" rid="R32">32</xref>
,
<xref ref-type="bibr" rid="R33">33</xref>
</sup>
. Similarly, responses to motion-defined depth have been observed in ventral, dorsal and parietal areas
<sup>
<xref ref-type="bibr" rid="R34">34</xref>
-
<xref ref-type="bibr" rid="R36">36</xref>
</sup>
. To link depth from disparity and motion, previous work has highlighted overlapping fMRI activations
<sup>
<xref ref-type="bibr" rid="R24">24</xref>
,
<xref ref-type="bibr" rid="R37">37</xref>
-
<xref ref-type="bibr" rid="R39">39</xref>
</sup>
. This suggests widespread cortical loci in which different cues converge; however, this does not imply the shared organizational structure that we demonstrate here.</p>
<p id="P30">Our tests of cue fusion reveal V3B/KO as the main cortical locus for depth cue integration. However, tests of motion parallax processing in the macaque highlighted area MT/V5
<sup>
<xref ref-type="bibr" rid="R8">8</xref>
</sup>
. Given well-established disparity selectivity in MT/V5
<sup>
<xref ref-type="bibr" rid="R17">17</xref>
</sup>
, this suggests a candidate for integrating depth cues. We observed discriminable fMRI responses for both disparity and relative motion in hMT+/V5 but did not obtain evidence for fusion. While it is possible this represents a species difference
<sup>
<xref ref-type="bibr" rid="R40">40</xref>
</sup>
, the difference may relate to different causes of motion. In particular, we simulated movement of a scene in front of a static observer, while previous work
<sup>
<xref ref-type="bibr" rid="R8">8</xref>
</sup>
moved the participant in a static scene. Thus, in our situation, there was no potential for vestibular signals to contribute to the estimation of ego movement by mediotemporal cortex
<sup>
<xref ref-type="bibr" rid="R10">10</xref>
,
<xref ref-type="bibr" rid="R11">11</xref>
</sup>
.</p>
<p id="P31">In interpreting our results it is important to consider that the multi voxel pattern analysis approach we use is generally understood to rely on weak biases in the responses of individual voxels that reflect a voxel’s sample of neuronal selectivities and vasculature (
<sup>
<xref ref-type="bibr" rid="R41">41</xref>
,
<xref ref-type="bibr" rid="R42">42</xref>
</sup>
; although see
<sup>
<xref ref-type="bibr" rid="R43">43</xref>
,
<xref ref-type="bibr" rid="R44">44</xref>
</sup>
). By definition, these signals reflect a population response, so our results cannot be taken to reveal fusion by single neurons. For instance, it is possible that depth is represented in parallel for (i) disparity and (ii) motion within area V3B/KO. However, if this is the case, these representations are not independent – they must share a common organizational structure to account for our findings that (a) prediction accuracy falls to single-component levels for incongruent stimuli and (b) training the classifier on one cue supports decoding of the other. It has been suggested that MVPA of stimulus orientation relies on univariate differences across the visual field
<sup>
<xref ref-type="bibr" rid="R44">44</xref>
</sup>
. Such spatial organization for disparity preferences has not been identified in the human or macaque brain; however, this is a matter for further investigation. Our previous study
<sup>
<xref ref-type="bibr" rid="R18">18</xref>
</sup>
and ongoing work have not provided evidence of a retinotopic organization for disparity.</p>
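<p>A minimal sketch of the cross-cue transfer logic follows (ours, with synthetic data; the transfer-index formulation is an assumption, since the exact definition is given in the online methods):</p>
<preformat>
import numpy as np
from sklearn.svm import LinearSVC

def cross_cue_transfer(X_train, y_train, X_test, y_test):
    # Train on one cue's trials, test on the other's; above-chance
    # accuracy implies a shared organizational structure across cues.
    return LinearSVC(dual=False).fit(X_train, y_train).score(X_test, y_test)

def transfer_index(within_acc, between_acc, chance=0.5):
    # 100% means between-cue accuracy matches within-cue accuracy.
    return 100.0 * (between_acc - chance) / (within_acc - chance)

# Synthetic demo: one shared 'depth' axis drives patterns for both cues
rng = np.random.default_rng(1)
axis = rng.normal(0, 1, 80)

def make_runs(n_trials):
    y = rng.integers(0, 2, n_trials)
    X = (np.outer(np.where(y == 0, -1.0, 1.0), axis)
         + rng.normal(0, 4, (n_trials, 80)))
    return X, y

X_disp, y_disp = make_runs(160)  # 'disparity' runs
X_mot, y_mot = make_runs(160)    # 'motion' runs
print(cross_cue_transfer(X_disp, y_disp, X_mot, y_mot))
</preformat>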
</sec>
<sec id="S12">
<title>Independence vs. fusion</title>
<p id="P32">Previously, we tested cue combination by relating psychophysical and fMRI responses
<sup>
<xref ref-type="bibr" rid="R45">45</xref>
</sup>
. This highlighted ventral cortex (LOC) in cue integration, which is not the main locus observed here. First, differences in stimuli may be responsible: we previously used slanted planes defined by disparity and perspective cues, so ventral areas may be more selective for ‘pictorial’ cues and/or for slanted surfaces rather than flat planes. Second, here we used a coarse task, while previously
<sup>
<xref ref-type="bibr" rid="R45">45</xref>
</sup>
a fine judgment was made that may require greater ventral involvement
<sup>
<xref ref-type="bibr" rid="R31">31</xref>
</sup>
. However, next we discuss the possibility that the different cortical loci (dorsal
<italic>vs</italic>
. ventral) point to different types of computation.</p>
<p id="P33">In the Introduction, we presented two scenarios for optimal judgments: fusion
<italic>vs</italic>
. independence.
<italic>Independence</italic>
increases the separation between classes (e.g. ‘near’, ‘far’) but does not reduce variance, while
<italic>fusion</italic>
reduces the variance of estimates, but leaves separation unchanged. We suggest these two modes of operation may be exploited for different types of task. If a body movement is required, the brain is best served by fusing the available information to obtain an estimate of the scene that is unbiased and has low variance. Such a representation would be particular to the viewing situation (i.e. highly specific), and variant under manipulations of individual cues. In contrast, recognition tasks are best served by maximizing the separation of objects in a high-dimensional feature space, while ignoring uninformative dimensions. Such a mechanism would support invariant performance by discarding irrelevant ‘nuisance’ scene parameters, and/or changes in the reliability of individual cues, yet may be highly uncertain about the particular structure of the scene
<sup>
<xref ref-type="bibr" rid="R46">46</xref>
</sup>
. To illustrate the distinction, consider a typical desktop scene. If the observer’s goal is to discriminate a telephone from a nearby book, the 3D location of items on the tabletop is uninformative and should be discounted from the judgment (i.e. the telephone’s features should be recognized while ignoring its location). In contrast, to pick up the telephone, the brain should incorporate all the information about its location available from the current view.</p>
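<p>In signal-detection terms the distinction can be written compactly (standard cue-combination algebra, sketched here rather than quoted from the paper's methods). Fusion forms a reliability-weighted estimate and shrinks its variance, whereas independence leaves each variance untouched and instead lengthens the separation between class means in the two-dimensional (disparity, motion) space:</p>
<disp-formula><tex-math>
\hat{s}_{\mathrm{fus}} = \frac{\sigma_m^{2}\,\hat{s}_{d} + \sigma_d^{2}\,\hat{s}_{m}}{\sigma_d^{2}+\sigma_m^{2}},
\qquad
\sigma_{\mathrm{fus}}^{2} = \frac{\sigma_d^{2}\,\sigma_m^{2}}{\sigma_d^{2}+\sigma_m^{2}},
\qquad
d'_{\mathrm{ind}} = \sqrt{{d'_d}^{2}+{d'_m}^{2}} = d'_{\mathrm{fus}}
</tex-math></disp-formula>
<p>The equality of the end-point discriminabilities is precisely why quadratic summation serves only as a minimum bound, and why the incongruent-cue and transfer tests are needed to separate the two schemes.</p>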
<p id="P34">Our previous tests of disparity processing
<sup>
<xref ref-type="bibr" rid="R18">18</xref>
</sup>
suggest differences between the visual pathways: dorsal areas appear selective for metric disparity (i.e. the precise location of a plane), while the ventral lateral occipital (LO) area represents depth configuration (i.e. whether the stimulus is ‘near’ or ‘far’, but not how near or how far). The current findings bolster this distinction by providing novel evidence for fusion in the dorsal pathway. We propose that such fusion provides the best metric information about the scene, specific to the current view.</p>
</sec>
</sec>
<sec sec-type="supplementary-material" id="SM">
<title>Supplementary Material</title>
<supplementary-material content-type="local-data" id="SD1">
<label>1</label>
<media xlink:href="NIHMS40862-supplement-1.pdf" orientation="portrait" xlink:type="simple" id="d35e1057" position="anchor" mimetype="application" mime-subtype="pdf"></media>
</supplementary-material>
<supplementary-material content-type="local-data" id="SD2">
<label>2</label>
<media xlink:href="NIHMS40862-supplement-onlinemethods.pdf" orientation="portrait" xlink:type="simple" id="d35e1061" position="anchor" mimetype="application" mime-subtype="pdf"></media>
</supplementary-material>
</sec>
</body>
<back>
<ack id="S13">
<title>Acknowledgments</title>
<p>We thank Bosco Tjan, Roland Fleming and Andrew Glennerster for valuable discussions on the project. We thank the referees for their thoughtful and intelligent critiques of the work. The work was supported by fellowships to AEW from the Wellcome Trust [095183/Z/10/Z] and Biotechnology and Biological Sciences Research Council [C520620] and to HB from the Japan Society for the Promotion of Science [H22,290].</p>
</ack>
<ref-list>
<title>References</title>
<ref id="R1">
<label>1</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dosher</surname>
<given-names>BA</given-names>
</name>
<name>
<surname>Sperling</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Wurst</surname>
<given-names>SA</given-names>
</name>
</person-group>
<article-title>Tradeoffs between stereopsis and proximity luminance covariance as determinants of perceived 3D structure</article-title>
<source>Vision Res</source>
<year>1986</year>
<volume>26</volume>
<fpage>973</fpage>
<lpage>990</lpage>
<pub-id pub-id-type="pmid">3750879</pub-id>
</element-citation>
</ref>
<ref id="R2">
<label>2</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Buelthoff</surname>
<given-names>HH</given-names>
</name>
<name>
<surname>Mallot</surname>
<given-names>HA</given-names>
</name>
</person-group>
<article-title>Integration of Depth Modules - Stereo and Shading</article-title>
<source>Journal of the Optical Society of America a-Optics Image Science and Vision</source>
<year>1988</year>
<volume>5</volume>
<fpage>1749</fpage>
<lpage>1758</lpage>
<pub-id pub-id-type="pmid">3204438</pub-id>
</element-citation>
</ref>
<ref id="R3">
<label>3</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Landy</surname>
<given-names>MS</given-names>
</name>
<name>
<surname>Maloney</surname>
<given-names>LT</given-names>
</name>
<name>
<surname>Johnston</surname>
<given-names>EB</given-names>
</name>
<name>
<surname>Young</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Measurement and Modeling of Depth Cue Combination - in Defense of Weak Fusion</article-title>
<source>Vision Research</source>
<year>1995</year>
<volume>35</volume>
<fpage>389</fpage>
<lpage>412</lpage>
<pub-id pub-id-type="pmid">7892735</pub-id>
</element-citation>
</ref>
<ref id="R4">
<label>4</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Clark</surname>
<given-names>JJ</given-names>
</name>
<name>
<surname>Yuille</surname>
<given-names>AL</given-names>
</name>
</person-group>
<source>Data fusion for sensory information processing systems</source>
<year>1990</year>
<publisher-name>Kluwer Academic</publisher-name>
</element-citation>
</ref>
<ref id="R5">
<label>5</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>
<source>Nature</source>
<year>2002</year>
<volume>415</volume>
<fpage>429</fpage>
<lpage>433</lpage>
<pub-id pub-id-type="pmid">11807554</pub-id>
</element-citation>
</ref>
<ref id="R6">
<label>6</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Knill</surname>
<given-names>DC</given-names>
</name>
<name>
<surname>Saunders</surname>
<given-names>JA</given-names>
</name>
</person-group>
<article-title>Do humans optimally integrate stereo and texture information for judgments of surface slant?</article-title>
<source>Vision Research</source>
<year>2003</year>
<volume>43</volume>
<fpage>2539</fpage>
<lpage>2558</lpage>
<pub-id pub-id-type="pmid">13129541</pub-id>
</element-citation>
</ref>
<ref id="R7">
<label>7</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tsutsui</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Sakata</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Naganuma</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Taira</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Neural correlates for perception of 3D surface orientation from texture gradient</article-title>
<source>Science</source>
<year>2002</year>
<volume>298</volume>
<fpage>409</fpage>
<lpage>412</lpage>
<pub-id pub-id-type="pmid">12376700</pub-id>
</element-citation>
</ref>
<ref id="R8">
<label>8</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nadler</surname>
<given-names>JW</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
</person-group>
<article-title>A neural representation of depth from motion parallax in macaque visual cortex</article-title>
<source>Nature</source>
<year>2008</year>
<volume>452</volume>
<fpage>642</fpage>
<lpage>U610</lpage>
<pub-id pub-id-type="pmid">18344979</pub-id>
</element-citation>
</ref>
<ref id="R9">
<label>9</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Vogels</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Orban</surname>
<given-names>GA</given-names>
</name>
</person-group>
<article-title>Convergence of depth from texture and depth from disparity in macaque inferior temporal cortex</article-title>
<source>J Neurosci</source>
<year>2004</year>
<volume>24</volume>
<fpage>3795</fpage>
<lpage>3800</lpage>
<pub-id pub-id-type="pmid">15084660</pub-id>
</element-citation>
</ref>
<ref id="R10">
<label>10</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
<name>
<surname>Deangelis</surname>
<given-names>GC</given-names>
</name>
</person-group>
<article-title>Neural correlates of multisensory cue integration in macaque MSTd</article-title>
<source>Nat Neurosci</source>
<year>2008</year>
<volume>11</volume>
<fpage>1201</fpage>
<lpage>1210</lpage>
<pub-id pub-id-type="pmid">18776893</pub-id>
</element-citation>
</ref>
<ref id="R11">
<label>11</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Morgan</surname>
<given-names>ML</given-names>
</name>
<name>
<surname>Deangelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<article-title>Multisensory integration in macaque visual cortex depends on cue reliability</article-title>
<source>Neuron</source>
<year>2008</year>
<volume>59</volume>
<fpage>662</fpage>
<lpage>673</lpage>
<pub-id pub-id-type="pmid">18760701</pub-id>
</element-citation>
</ref>
<ref id="R12">
<label>12</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rogers</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Graham</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Motion parallax as an independent cue for depth perception</article-title>
<source>Perception</source>
<year>1979</year>
<volume>8</volume>
<fpage>125</fpage>
<lpage>134</lpage>
<pub-id pub-id-type="pmid">471676</pub-id>
</element-citation>
</ref>
<ref id="R13">
<label>13</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bradshaw</surname>
<given-names>MF</given-names>
</name>
<name>
<surname>Rogers</surname>
<given-names>BJ</given-names>
</name>
</person-group>
<article-title>The interaction of binocular disparity and motion parallax in the computation of depth</article-title>
<source>Vision Res</source>
<year>1996</year>
<volume>36</volume>
<fpage>3457</fpage>
<lpage>3468</lpage>
<pub-id pub-id-type="pmid">8977012</pub-id>
</element-citation>
</ref>
<ref id="R14">
<label>14</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nawrot</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Blake</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Neural integration of information specifying structure from stereopsis and motion</article-title>
<source>Science</source>
<year>1989</year>
<volume>244</volume>
<fpage>716</fpage>
<lpage>718</lpage>
<pub-id pub-id-type="pmid">2717948</pub-id>
</element-citation>
</ref>
<ref id="R15">
<label>15</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Poom</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Borjesson</surname>
<given-names>E</given-names>
</name>
</person-group>
<article-title>Perceptual depth synthesis in the visual system as revealed by selective adaptation</article-title>
<source>J Exp Psychol Hum Percept Perform</source>
<year>1999</year>
<volume>25</volume>
<fpage>504</fpage>
<lpage>517</lpage>
<pub-id pub-id-type="pmid">10205863</pub-id>
</element-citation>
</ref>
<ref id="R16">
<label>16</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Domini</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Caudek</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Tassinari</surname>
<given-names>H</given-names>
</name>
</person-group>
<article-title>Stereo and motion information are not independently processed by the visual system</article-title>
<source>Vision Res</source>
<year>2006</year>
<volume>46</volume>
<fpage>1707</fpage>
<lpage>1723</lpage>
<pub-id pub-id-type="pmid">16412492</pub-id>
</element-citation>
</ref>
<ref id="R17">
<label>17</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Newsome</surname>
<given-names>WT</given-names>
</name>
</person-group>
<article-title>Organization of disparity-selective neurons in macaque area MT</article-title>
<source>J Neurosci</source>
<year>1999</year>
<volume>19</volume>
<fpage>1398</fpage>
<lpage>1415</lpage>
<pub-id pub-id-type="pmid">9952417</pub-id>
</element-citation>
</ref>
<ref id="R18">
<label>18</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Preston</surname>
<given-names>TJ</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Kourtzi</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Welchman</surname>
<given-names>AE</given-names>
</name>
</person-group>
<article-title>Multivoxel pattern selectivity for perceptually relevant binocular disparities in the human brain</article-title>
<source>J Neurosci</source>
<year>2008</year>
<volume>28</volume>
<fpage>11315</fpage>
<lpage>11327</lpage>
<pub-id pub-id-type="pmid">18971473</pub-id>
</element-citation>
</ref>
<ref id="R19">
<label>19</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nandy</surname>
<given-names>AS</given-names>
</name>
<name>
<surname>Tjan</surname>
<given-names>BS</given-names>
</name>
</person-group>
<article-title>Efficient integration across spatial frequencies for letter identification in foveal and peripheral vision</article-title>
<source>J Vis</source>
<year>2008</year>
<volume>8</volume>
<issue>3</issue>
<fpage>1</fpage>
<lpage>20</lpage>
<pub-id pub-id-type="pmid">19146333</pub-id>
</element-citation>
</ref>
<ref id="R20">
<label>20</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hillis</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
<name>
<surname>Landy</surname>
<given-names>MS</given-names>
</name>
</person-group>
<article-title>Combining sensory information: mandatory fusion within, but not between, senses</article-title>
<source>Science</source>
<year>2002</year>
<volume>298</volume>
<fpage>1627</fpage>
<lpage>1630</lpage>
<pub-id pub-id-type="pmid">12446912</pub-id>
</element-citation>
</ref>
<ref id="R21">
<label>21</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kording</surname>
<given-names>KP</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Causal Inference in Multisensory Perception</article-title>
<source>Plos One</source>
<year>2007</year>
<volume>2</volume>
</element-citation>
</ref>
<ref id="R22">
<label>22</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Popple</surname>
<given-names>AV</given-names>
</name>
<name>
<surname>Smallman</surname>
<given-names>HS</given-names>
</name>
<name>
<surname>Findlay</surname>
<given-names>JM</given-names>
</name>
</person-group>
<article-title>Spatial integration region for initial horizontal disparity vergence</article-title>
<source>Investigative Ophthalmology and Visual Science</source>
<year>1997</year>
<volume>38</volume>
<fpage>4225</fpage>
<lpage>4225</lpage>
</element-citation>
</ref>
<ref id="R23">
<label>23</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Preston</surname>
<given-names>TJ</given-names>
</name>
<name>
<surname>Kourtzi</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Welchman</surname>
<given-names>AE</given-names>
</name>
</person-group>
<article-title>Adaptive estimation of three-dimensional structure in the human brain</article-title>
<source>J Neurosci</source>
<year>2009</year>
<volume>29</volume>
<fpage>1688</fpage>
<lpage>1698</lpage>
<pub-id pub-id-type="pmid">19211876</pub-id>
</element-citation>
</ref>
<ref id="R24">
<label>24</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tyler</surname>
<given-names>CW</given-names>
</name>
<name>
<surname>Likova</surname>
<given-names>LT</given-names>
</name>
<name>
<surname>Kontsevich</surname>
<given-names>LL</given-names>
</name>
<name>
<surname>Wade</surname>
<given-names>AR</given-names>
</name>
</person-group>
<article-title>The specificity of cortical region KO to depth structure</article-title>
<source>Neuroimage</source>
<year>2006</year>
<volume>30</volume>
<fpage>228</fpage>
<lpage>238</lpage>
<pub-id pub-id-type="pmid">16356738</pub-id>
</element-citation>
</ref>
<ref id="R25">
<label>25</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Meredith</surname>
<given-names>MA</given-names>
</name>
<name>
<surname>Stein</surname>
<given-names>BE</given-names>
</name>
</person-group>
<article-title>Interactions among converging sensory inputs in the superior colliculus</article-title>
<source>Science</source>
<year>1983</year>
<volume>221</volume>
<fpage>389</fpage>
<lpage>391</lpage>
<pub-id pub-id-type="pmid">6867718</pub-id>
</element-citation>
</ref>
<ref id="R26">
<label>26</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Avillac</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Hamed</surname>
<given-names>S. Ben</given-names>
</name>
<name>
<surname>Duhamel</surname>
<given-names>JR</given-names>
</name>
</person-group>
<article-title>Multisensory integration in the ventral intraparietal area of the macaque monkey</article-title>
<source>J Neurosci</source>
<year>2007</year>
<volume>27</volume>
<fpage>1922</fpage>
<lpage>1932</lpage>
<pub-id pub-id-type="pmid">17314288</pub-id>
</element-citation>
</ref>
<ref id="R27">
<label>27</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stanford</surname>
<given-names>TR</given-names>
</name>
<name>
<surname>Quessy</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Stein</surname>
<given-names>BE</given-names>
</name>
</person-group>
<article-title>Evaluating the operations underlying multisensory integration in the cat superior colliculus</article-title>
<source>J Neurosci</source>
<year>2005</year>
<volume>25</volume>
<fpage>6499</fpage>
<lpage>6508</lpage>
<pub-id pub-id-type="pmid">16014711</pub-id>
</element-citation>
</ref>
<ref id="R28">
<label>28</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>WJ</given-names>
</name>
<name>
<surname>Beck</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Latham</surname>
<given-names>PE</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Bayesian inference with probabilistic population codes</article-title>
<source>Nat Neurosci</source>
<year>2006</year>
<volume>9</volume>
<fpage>1432</fpage>
<lpage>1438</lpage>
<pub-id pub-id-type="pmid">17057707</pub-id>
</element-citation>
</ref>
<ref id="R29">
<label>29</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Preston</surname>
<given-names>TJ</given-names>
</name>
<name>
<surname>Kourtzi</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Welchman</surname>
<given-names>AE</given-names>
</name>
</person-group>
<article-title>Adaptive estimation of three-dimensional structure in the human brain</article-title>
<source>Journal of Neuroscience</source>
<year>2009</year>
<volume>29</volume>
<fpage>1688</fpage>
<lpage>1698</lpage>
<pub-id pub-id-type="pmid">19211876</pub-id>
</element-citation>
</ref>
<ref id="R30">
<label>30</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Orban</surname>
<given-names>GA</given-names>
</name>
<name>
<surname>Janssen</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Vogels</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Extracting 3D structure from disparity</article-title>
<source>Trends Neurosci</source>
<year>2006</year>
<volume>29</volume>
<fpage>466</fpage>
<lpage>473</lpage>
<pub-id pub-id-type="pmid">16842865</pub-id>
</element-citation>
</ref>
<ref id="R31">
<label>31</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Parker</surname>
<given-names>AJ</given-names>
</name>
</person-group>
<article-title>Binocular depth perception and the cerebral cortex</article-title>
<source>Nat.Rev.Neurosci</source>
<year>2007</year>
<volume>8</volume>
<fpage>379</fpage>
<lpage>391</lpage>
<pub-id pub-id-type="pmid">17453018</pub-id>
</element-citation>
</ref>
<ref id="R32">
<label>32</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Backus</surname>
<given-names>BT</given-names>
</name>
<name>
<surname>Fleet</surname>
<given-names>DJ</given-names>
</name>
<name>
<surname>Parker</surname>
<given-names>AJ</given-names>
</name>
<name>
<surname>Heeger</surname>
<given-names>DJ</given-names>
</name>
</person-group>
<article-title>Human cortical activity correlates with stereoscopic depth perception</article-title>
<source>J Neurophysiol</source>
<year>2001</year>
<volume>86</volume>
<fpage>2054</fpage>
<lpage>2068</lpage>
<pub-id pub-id-type="pmid">11600661</pub-id>
</element-citation>
</ref>
<ref id="R33">
<label>33</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chandrasekaran</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Canon</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Dahmen</surname>
<given-names>JC</given-names>
</name>
<name>
<surname>Kourtzi</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Welchman</surname>
<given-names>AE</given-names>
</name>
</person-group>
<article-title>Neural correlates of disparity-defined shape discrimination in the human brain</article-title>
<source>Journal of Neurophysiology</source>
<year>2007</year>
<volume>97</volume>
<fpage>1553</fpage>
<lpage>1565</lpage>
<pub-id pub-id-type="pmid">17151220</pub-id>
</element-citation>
</ref>
<ref id="R34">
<label>34</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Orban</surname>
<given-names>GA</given-names>
</name>
<name>
<surname>Sunaert</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Todd</surname>
<given-names>JT</given-names>
</name>
<name>
<surname>Van Hecke</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Marchal</surname>
<given-names>G</given-names>
</name>
</person-group>
<article-title>Human cortical regions involved in extracting depth from motion</article-title>
<source>Neuron</source>
<year>1999</year>
<volume>24</volume>
<fpage>929</fpage>
<lpage>940</lpage>
<pub-id pub-id-type="pmid">10624956</pub-id>
</element-citation>
</ref>
<ref id="R35">
<label>35</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Murray</surname>
<given-names>SO</given-names>
</name>
<name>
<surname>Olshausen</surname>
<given-names>BA</given-names>
</name>
<name>
<surname>Woods</surname>
<given-names>DL</given-names>
</name>
</person-group>
<article-title>Processing shape, motion and three-dimensional shape-from-motion in the human cortex</article-title>
<source>Cerebral Cortex</source>
<year>2003</year>
<volume>13</volume>
<fpage>508</fpage>
<lpage>516</lpage>
<pub-id pub-id-type="pmid">12679297</pub-id>
</element-citation>
</ref>
<ref id="R36">
<label>36</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Paradis</surname>
<given-names>AL</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Visual perception of motion and 3-D structure from motion: an fMRI study</article-title>
<source>Cereb Cortex</source>
<year>2000</year>
<volume>10</volume>
<fpage>772</fpage>
<lpage>783</lpage>
<pub-id pub-id-type="pmid">10920049</pub-id>
</element-citation>
</ref>
<ref id="R37">
<label>37</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sereno</surname>
<given-names>ME</given-names>
</name>
<name>
<surname>Trinath</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Augath</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Logothetis</surname>
<given-names>NK</given-names>
</name>
</person-group>
<article-title>Three-dimensional shape representation in monkey cortex</article-title>
<source>Neuron</source>
<year>2002</year>
<volume>33</volume>
<fpage>635</fpage>
<lpage>652</lpage>
<pub-id pub-id-type="pmid">11856536</pub-id>
</element-citation>
</ref>
<ref id="R38">
<label>38</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Durand</surname>
<given-names>JB</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Anterior regions of monkey parietal cortex process visual 3D shape</article-title>
<source>Neuron</source>
<year>2007</year>
<volume>55</volume>
<fpage>493</fpage>
<lpage>505</lpage>
<pub-id pub-id-type="pmid">17678860</pub-id>
</element-citation>
</ref>
<ref id="R39">
<label>39</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peuskens</surname>
<given-names>H</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Attention to 3-D shape, 3-D motion, and texture in 3-D structure from motion displays</article-title>
<source>J Cogn Neurosci</source>
<year>2004</year>
<volume>16</volume>
<fpage>665</fpage>
<lpage>682</lpage>
<pub-id pub-id-type="pmid">15165355</pub-id>
</element-citation>
</ref>
<ref id="R40">
<label>40</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Orban</surname>
<given-names>GA</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Similarities and differences in motion processing between the human and macaque brain: evidence from fMRI</article-title>
<source>Neuropsychologia</source>
<year>2003</year>
<volume>41</volume>
<fpage>1757</fpage>
<lpage>1768</lpage>
<pub-id pub-id-type="pmid">14527539</pub-id>
</element-citation>
</ref>
<ref id="R41">
<label>41</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shmuel</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Chaimow</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Raddatz</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Ugurbil</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Yacoub</surname>
<given-names>E</given-names>
</name>
</person-group>
<article-title>Mechanisms underlying decoding at 7 T: ocular dominance columns, broad structures, and macroscopic blood vessels in V1 convey information on the stimulated eye</article-title>
<source>Neuroimage</source>
<year>2010</year>
<volume>49</volume>
<fpage>1957</fpage>
<lpage>1964</lpage>
<pub-id pub-id-type="pmid">19715765</pub-id>
</element-citation>
</ref>
<ref id="R42">
<label>42</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kriegeskorte</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Cusack</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Bandettini</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>How does an fMRI voxel sample the neuronal activity pattern: compact-kernel or complex spatiotemporal filter?</article-title>
<source>Neuroimage</source>
<year>2010</year>
<volume>49</volume>
<fpage>1965</fpage>
<lpage>1976</lpage>
<pub-id pub-id-type="pmid">19800408</pub-id>
</element-citation>
</ref>
<ref id="R43">
<label>43</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Op de Beeck</surname>
<given-names>HP</given-names>
</name>
</person-group>
<article-title>Against hyperacuity in brain reading: spatial smoothing does not hurt multivariate fMRI analyses?</article-title>
<source>Neuroimage</source>
<year>2009</year>
<volume>49</volume>
<fpage>1943</fpage>
<lpage>1948</lpage>
<pub-id pub-id-type="pmid">19285144</pub-id>
</element-citation>
</ref>
<ref id="R44">
<label>44</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Freeman</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Brouwer</surname>
<given-names>GJ</given-names>
</name>
<name>
<surname>Heeger</surname>
<given-names>DJ</given-names>
</name>
<name>
<surname>Merriam</surname>
<given-names>EP</given-names>
</name>
</person-group>
<article-title>Orientation decoding depends on maps, not columns</article-title>
<source>J Neurosci</source>
<year>2011</year>
<volume>31</volume>
<fpage>4792</fpage>
<lpage>4804</lpage>
<pub-id pub-id-type="pmid">21451017</pub-id>
</element-citation>
</ref>
<ref id="R45">
<label>45</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Welchman</surname>
<given-names>AE</given-names>
</name>
<name>
<surname>Deubelius</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Conrad</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>HH</given-names>
</name>
<name>
<surname>Kourtzi</surname>
<given-names>Z</given-names>
</name>
</person-group>
<article-title>3D shape perception from combined depth cues in human visual cortex</article-title>
<source>Nature Neuroscience</source>
<year>2005</year>
<volume>8</volume>
<fpage>820</fpage>
<lpage>827</lpage>
</element-citation>
</ref>
<ref id="R46">
<label>46</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tjan</surname>
<given-names>BS</given-names>
</name>
<name>
<surname>Lestou</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Kourtzi</surname>
<given-names>Z</given-names>
</name>
</person-group>
<article-title>Uncertainty and invariance in the human visual cortex</article-title>
<source>J Neurophysiol</source>
<year>2006</year>
<volume>96</volume>
<fpage>1556</fpage>
<lpage>1568</lpage>
<pub-id pub-id-type="pmid">16723410</pub-id>
</element-citation>
</ref>
<ref id="R47">
<label>47</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dupont</surname>
<given-names>P</given-names>
</name>
<etal></etal>
</person-group>
<article-title>The kinetic occipital region in human visual cortex</article-title>
<source>Cerebral Cortex</source>
<year>1997</year>
<volume>7</volume>
<fpage>283</fpage>
<lpage>292</lpage>
<pub-id pub-id-type="pmid">9143447</pub-id>
</element-citation>
</ref>
<ref id="R48">
<label>48</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Serences</surname>
<given-names>JT</given-names>
</name>
<name>
<surname>Boynton</surname>
<given-names>GM</given-names>
</name>
</person-group>
<article-title>The representation of behavioral choice for motion in human visual cortex</article-title>
<source>J Neurosci</source>
<year>2007</year>
<volume>27</volume>
<fpage>12893</fpage>
<lpage>12899</lpage>
<pub-id pub-id-type="pmid">18032662</pub-id>
</element-citation>
</ref>
<ref id="R49">
<label>49</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>De Martino</surname>
<given-names>F</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Combining multivariate voxel selection and support vector machines for mapping and classification of fMRI spatial patterns</article-title>
<source>Neuroimage</source>
<year>2008</year>
<volume>43</volume>
<fpage>44</fpage>
<lpage>58</lpage>
<pub-id pub-id-type="pmid">18672070</pub-id>
</element-citation>
</ref>
<ref id="R50">
<label>50</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kamitani</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Tong</surname>
<given-names>F</given-names>
</name>
</person-group>
<article-title>Decoding the visual and subjective contents of the human brain</article-title>
<source>Nature Neuroscience</source>
<year>2005</year>
<volume>8</volume>
<fpage>679</fpage>
<lpage>685</lpage>
</element-citation>
</ref>
</ref-list>
</back>
<floats-group>
<fig id="F1" orientation="portrait" position="float">
<label>Figure 1</label>
<caption>
<p>A. Cartoon of depth processing: depth of the ballerina figurine is estimated from disparity and motion, producing a bivariate Gaussian (3D plot with purple blob). Fusion combines disparity and motion using maximum likelihood estimation, producing a univariate ‘depth’ estimate.</p>
<p>B. Discriminating two shapes (‘Margot’ vs. ‘Darcy’) defined by bivariate Gaussians (purple and green blobs). We envisage four types of detector: ‘disparity’ and ‘motion’ respond to only one dimension (i.e. discrimination of the marginals); the ‘independent’ detector uses the optimal separating plane (grey line on the negative diagonal); the ‘fusion’ detector integrates cues.</p>
<p>C. ‘Single’ cue case: shapes differ in disparity but motion is the same. The optimal separating plane is now vertical (independent detector), while the fusion mechanism is compromised.</p>
<p>D. Incongruent cues: disparity and motion indicate opposite depths. Independent performance matches Fig 1b while fusion is illustrated for two scenarios: strict (detector is insensitive) and robust (dotted bar – performance reverts to one component).</p>
<p>E. Predicted measurements of independent units. Four types of stimuli are displayed: ‘disparity’ (Fig 1c), ‘motion’ (motion indicates a depth difference, disparity specifies the same depth), ‘Disparity+motion’ (Fig 1b), and ‘incongruent’ (Fig 1d).</p>
<p>F. Predicted measurements of fused units. Note that performance in the Motion and Disparity conditions is lower than in panel e.</p>
</caption>
<graphic xlink:href="ukmss-40862-f0001"></graphic>
</fig>
<fig id="F2" orientation="portrait" position="float">
<label>Figure 2</label>
<caption>
<p>A. Cartoon of the decoding approach. Participants view stimuli that depict ‘near’ or ‘far’ depths. These differentially excite neuronal populations within an area of cortex. fMRI measurements reduce the resolution. We characterize the sensitivity of the decoding algorithm in discriminating near and far stimuli.</p>
<p>B. Illustrations of disparity- and motion-defined depth stimuli. The top row provides stereograms to be viewed through red-green anaglyphs. The bottom row provides a cartoon of the relative motion stimuli: yellow arrow, speed of the target; blue arrow, speed of the background.</p>
<p>C. Behavioural tests of integration. Data show observers’ mean sensitivity (N=7) with the between-subjects SEM. The red horizontal line indicates the quadratic summation prediction. The adjacent plot shows the results as an integration index for the congruent and incongruent conditions. A value of zero indicates the minimum bound for fusion. Data are presented as notched distribution plots. The center of the ‘bow tie’ represents the median, the edges depict 68% confidence values, and the upper and lower error bars 95% confidence intervals.</p>
<p>D. The results of an experiment in which observers (N=4) reported whether the stimulus was near or far in the incongruent cue stimulus. Data are expressed as the percentage of trials on which reported depth matched depth from disparity.</p>
</caption>
<graphic xlink:href="ukmss-40862-f0002"></graphic>
</fig>
<fig id="F3" orientation="portrait" position="float">
<label>Figure 3</label>
<caption>
<p>Representative flatmaps showing the left and right visual regions of interest from one participant. The maps show the location of retinotopic areas, V3B/KO, the human motion complex (hMT+/V5) and the lateral occipital (LO) area. Regions were defined using independent localizers. Sulci are coded in darker gray than the gyri. Superimposed on the maps are the results of a group searchlight classifier analysis that moved iteratively throughout the entire volume of cortex measured, discriminating between ‘near’ and ‘far’ depth positions
<sup>
<xref ref-type="bibr" rid="R18">18</xref>
</sup>
. The colour code represents the
<italic>t</italic>
-value of the classification accuracies obtained. This analysis confirmed that we had not missed any important areas outside those localized independently.</p>
</caption>
<graphic xlink:href="ukmss-40862-f0003"></graphic>
</fig>
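<p>For readers unfamiliar with the searchlight procedure, the following toy version (ours; the sphere radius, voxel grid and synthetic data are assumptions) conveys the idea of decoding near vs. far within a small sphere centred on every voxel in turn:</p>
<preformat>
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def searchlight(data, labels, coords, radius=2.0):
    # For each voxel, decode from the sphere of voxels within `radius`
    # (in voxel units) and record a whole-volume map of accuracies.
    acc = np.zeros(len(coords))
    for i, centre in enumerate(coords):
        sphere = np.linalg.norm(coords - centre, axis=1) <= radius
        acc[i] = cross_val_score(LinearSVC(dual=False),
                                 data[:, sphere], labels, cv=5).mean()
    return acc

rng = np.random.default_rng(2)
coords = np.argwhere(np.ones((5, 5, 5))).astype(float)  # 125-voxel grid
labels = rng.integers(0, 2, 60)
data = rng.normal(0, 1, (60, 125))
data[:, :10] += np.where(labels == 0, -0.8, 0.8)[:, None]  # informative patch
print(searchlight(data, labels, coords).max())
</preformat>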
<fig id="F4" orientation="portrait" position="float">
<label>Figure 4</label>
<caption>
<p>A. Prediction accuracy for near
<italic>vs</italic>
. far discrimination in different regions of interest. The red lines illustrate the accuracy expected from the quadratic summation of discriminabilities for the ‘single’ cue conditions. Error bars depict the SEM.</p>
<p>B. Results as an integration index. A value of zero indicates the minimum bound for fusion (i.e. the prediction based on quadratic summation). Data are presented as notched distribution plots. The center of the ‘bow tie’ represents the median, the grey-shaded area depicts 68% confidence values, and the upper and lower error bars 95% confidence intervals.</p>
</caption>
<graphic xlink:href="ukmss-40862-f0004"></graphic>
</fig>
<fig id="F5" orientation="portrait" position="float">
<label>Figure 5</label>
<caption>
<p>A. Prediction accuracy for near
<italic>vs</italic>
. far classification when cues are congruent (
<xref ref-type="fig" rid="F1">Fig. 1b</xref>
) or incongruent (
<xref ref-type="fig" rid="F1">Fig. 1d</xref>
). Error bars show SEM. The dotted horizontal line at 0.5 corresponds to chance performance for this binary classification.</p>
<p>B. Prediction accuracy for the cross-cue transfer analysis. Two types of transfer are depicted: between motion and disparity (gray bars) and between disparity and a flat motion control stimulus (white bars). Classification accuracies are generally lower than for the standard SVM analysis (
<xref ref-type="fig" rid="F4">Fig. 4a</xref>
); this is not surprising given the considerable differences between the stimuli that evoked the training and test fMRI responses. Error bars show SEM.</p>
<p>C. Data shown as a transfer index. A value of 100% would indicate that prediction accuracies were equivalent for within- and between-cue testing. Distribution plots show the median, 68% and 95% confidence intervals. Dotted horizontal lines depict a bootstrapped chance baseline based on the upper 95
<sup>th</sup>
centile for transfer obtained with randomly permuted data.</p>
</caption>
<graphic xlink:href="ukmss-40862-f0005"></graphic>
</fig>
<fig id="F6" orientation="portrait" position="float">
<label>Figure 6</label>
<caption>
<p>A. fMRI decoding data from V3B/KO adjacent to the simulation results. Simulation results show decoding performance of a simulated population of voxels where the neuronal population contains different percentages of units tuned to individual vs. fused cues. The χ
<sup>2</sup>
statistic was used to identify the closest fit between empirical and simulated data from a range of population mixtures. Error bars depict SEM.</p>
<p>B. fMRI decoding data for the transfer tests adjacent to the simulation results. Error bars depict SEM.</p>
<p>C. Performance in a transfer test between data from the motion condition and the consistent and inconsistent cue conditions. Error bars depict SEM.</p>
</caption>
<graphic xlink:href="ukmss-40862-f0006"></graphic>
</fig>
<table-wrap id="T1" position="float" orientation="portrait">
<label>Table 1</label>
<caption>
<p>Probabilities associated with obtaining a value of zero for (i) the fMRI integration index, and (ii) the prediction accuracy difference between the congruent and incongruent stimulus conditions. Values are from a bootstrapped resampling of the individual participants’ data. Bold formatting indicates Bonferroni-corrected significance.</p>
</caption>
<table frame="box" rules="all">
<thead>
<tr>
<th align="center" valign="middle" rowspan="1" colspan="1"></th>
<th colspan="2" align="center" valign="middle" rowspan="1">
<italic>p</italic>
-value</th>
</tr>
<tr>
<th align="center" valign="middle" rowspan="1" colspan="1">Cortical area</th>
<th align="center" valign="middle" rowspan="1" colspan="1">Integration
<break></break>
index above zero</th>
<th align="center" valign="middle" rowspan="1" colspan="1">Congruent vs.
<break></break>
incongruent</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">V1</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.789</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.523</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">V2</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.799</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.419</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">V3v</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.150</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.079</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">V4</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.880</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.486</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">LO</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.838</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.262</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">V3d</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.733</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.203</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">V3A</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.265</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.148</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">V3B/KO</td>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>0.001</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>0.004</bold>
</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">V7</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.915</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.247</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">hMT+/V5</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.479</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.499</td>
</tr>
</tbody>
</table>
</table-wrap>
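<p>The bootstrapped test behind these probabilities can be sketched as follows (our reconstruction of the kind of resampling the caption describes; the exact recipe is in the online methods, and the example indices are invented):</p>
<preformat>
import numpy as np

def bootstrap_p_above_zero(values, n_boot=10_000, seed=0):
    # P(resampled mean <= 0), resampling participants with replacement.
    rng = np.random.default_rng(seed)
    means = [rng.choice(values, size=len(values), replace=True).mean()
             for _ in range(n_boot)]
    return float(np.mean(np.asarray(means) <= 0.0))

# Hypothetical per-participant integration indices for one area
print(bootstrap_p_above_zero(np.array([0.12, 0.08, 0.15, 0.05,
                                       0.11, 0.09, 0.14])))
</preformat>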
<table-wrap id="T2" position="float" orientation="portrait">
<label>Table 2</label>
<caption>
<p>Probabilities associated with (i) obtaining zero difference between decoding performance in the disparity-to-relative-motion and disparity-to-motion-control transfer tests, and (ii) obtaining zero difference between the transfer index in the disparity-to-relative-motion condition and random (shuffled) performance. These
<italic>p</italic>
-values are calculated using bootstrapped resampling with 10,000 samples. Bold formatting indicates Bonferroni-corrected significance.</p>
</caption>
<table frame="box" rules="all">
<thead>
<tr>
<th align="center" valign="middle" rowspan="1" colspan="1"></th>
<th colspan="2" align="center" valign="middle" rowspan="1">
<italic>p</italic>
-value</th>
</tr>
<tr>
<th align="center" valign="middle" rowspan="1" colspan="1">Cortical area</th>
<th align="center" valign="middle" rowspan="1" colspan="1">Difference between transfer
<break></break>
and control accuracies</th>
<th align="center" valign="middle" rowspan="1" colspan="1">Transfer index from chance</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">V1</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.273</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.279</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">V2</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.068</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.168</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">V3v</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.024</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.061</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">V4</td>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>0.002</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.102</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">LO</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.778</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.758</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">V3d</td>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>0.001</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold>0.002</bold>
</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">V3A</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.121</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.012</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">V3B/KO</td>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold><0.001</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">
<bold><0.001</bold>
</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">V7</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.590</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.141</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">hMT+/V5</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.815</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0.302</td>
</tr>
</tbody>
</table>
</table-wrap>
</floats-group>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002557 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 002557 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:3378632
   |texte=   The integration of motion and disparity cues to depth in dorsal visual cortex
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:22327475" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024