Exploration server on haptic devices

Note: this site is under development.
Note: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Vestibular Facilitation of Optic Flow Parsing

Internal identifier: 002118 (Ncbi/Merge); previous: 002117; next: 002119

Vestibular Facilitation of Optic Flow Parsing

Authors: Paul R. MacNeilage [Germany]; Zhou Zhang [United States]; Gregory C. DeAngelis [United States]; Dora E. Angelaki [United States]

Source:

RBID: PMC:3388053

Abstract

Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e. optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components.


URL:
DOI: 10.1371/journal.pone.0040264
PubMed: 22768345
PubMed Central: 3388053


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Vestibular Facilitation of Optic Flow Parsing</title>
<author>
<name sortKey="Macneilage, Paul R" sort="Macneilage, Paul R" uniqKey="Macneilage P" first="Paul R." last="Macneilage">Paul R. Macneilage</name>
<affiliation wicri:level="3">
<nlm:aff id="aff1">
<addr-line>Vertigo, Balance, and Oculomotor Research Center, University Hospital of Munich, Munich, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Vertigo, Balance, and Oculomotor Research Center, University Hospital of Munich, Munich</wicri:regionArea>
<placeName>
<region type="land" nuts="1">Bavière</region>
<region type="district" nuts="2">District de Haute-Bavière</region>
<settlement type="city">Munich</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Zhang, Zhou" sort="Zhang, Zhou" uniqKey="Zhang Z" first="Zhou" last="Zhang">Zhou Zhang</name>
<affiliation wicri:level="4">
<nlm:aff id="aff2">
<addr-line>Department of Biomedical Engineering, University of Southern California, Los Angeles, California, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Biomedical Engineering, University of Southern California, Los Angeles, California</wicri:regionArea>
<placeName>
<region type="state">Californie</region>
</placeName>
<orgName type="university">Université de Californie du Sud</orgName>
</affiliation>
</author>
<author>
<name sortKey="Deangelis, Gregory C" sort="Deangelis, Gregory C" uniqKey="Deangelis G" first="Gregory C." last="Deangelis">Gregory C. Deangelis</name>
<affiliation wicri:level="2">
<nlm:aff id="aff3">
<addr-line>Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York</wicri:regionArea>
<placeName>
<region type="state">État de New York</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Angelaki, Dora E" sort="Angelaki, Dora E" uniqKey="Angelaki D" first="Dora E." last="Angelaki">Dora E. Angelaki</name>
<affiliation wicri:level="2">
<nlm:aff id="aff4">
<addr-line>Department of Neuroscience, Baylor College of Medicine, Houston, Texas, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Neuroscience, Baylor College of Medicine, Houston, Texas</wicri:regionArea>
<placeName>
<region type="state">Texas</region>
</placeName>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">22768345</idno>
<idno type="pmc">3388053</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3388053</idno>
<idno type="RBID">PMC:3388053</idno>
<idno type="doi">10.1371/journal.pone.0040264</idno>
<date when="2012">2012</date>
<idno type="wicri:Area/Pmc/Corpus">002210</idno>
<idno type="wicri:Area/Pmc/Curation">002210</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001496</idno>
<idno type="wicri:Area/Ncbi/Merge">002118</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Vestibular Facilitation of Optic Flow Parsing</title>
<author>
<name sortKey="Macneilage, Paul R" sort="Macneilage, Paul R" uniqKey="Macneilage P" first="Paul R." last="Macneilage">Paul R. Macneilage</name>
<affiliation wicri:level="3">
<nlm:aff id="aff1">
<addr-line>Vertigo, Balance, and Oculomotor Research Center, University Hospital of Munich, Munich, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Vertigo, Balance, and Oculomotor Research Center, University Hospital of Munich, Munich</wicri:regionArea>
<placeName>
<region type="land" nuts="1">Bavière</region>
<region type="district" nuts="2">District de Haute-Bavière</region>
<settlement type="city">Munich</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Zhang, Zhou" sort="Zhang, Zhou" uniqKey="Zhang Z" first="Zhou" last="Zhang">Zhou Zhang</name>
<affiliation wicri:level="4">
<nlm:aff id="aff2">
<addr-line>Department of Biomedical Engineering, University of Southern California, Los Angeles, California, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Biomedical Engineering, University of Southern California, Los Angeles, California</wicri:regionArea>
<placeName>
<region type="state">Californie</region>
</placeName>
<orgName type="university">Université de Californie du Sud</orgName>
</affiliation>
</author>
<author>
<name sortKey="Deangelis, Gregory C" sort="Deangelis, Gregory C" uniqKey="Deangelis G" first="Gregory C." last="Deangelis">Gregory C. Deangelis</name>
<affiliation wicri:level="2">
<nlm:aff id="aff3">
<addr-line>Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York</wicri:regionArea>
<placeName>
<region type="state">État de New York</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Angelaki, Dora E" sort="Angelaki, Dora E" uniqKey="Angelaki D" first="Dora E." last="Angelaki">Dora E. Angelaki</name>
<affiliation wicri:level="2">
<nlm:aff id="aff4">
<addr-line>Department of Neuroscience, Baylor College of Medicine, Houston, Texas, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Neuroscience, Baylor College of Medicine, Houston, Texas</wicri:regionArea>
<placeName>
<region type="state">Texas</region>
</placeName>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2012">2012</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e. optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Warren, Wh" uniqKey="Warren W">WH Warren</name>
</author>
<author>
<name sortKey="Saunders, Ja" uniqKey="Saunders J">JA Saunders</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Royden, Cs" uniqKey="Royden C">CS Royden</name>
</author>
<author>
<name sortKey="Connors, Em" uniqKey="Connors E">EM Connors</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Royden, Cs" uniqKey="Royden C">CS Royden</name>
</author>
<author>
<name sortKey="Hildreth, Ec" uniqKey="Hildreth E">EC Hildreth</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rushton, Sk" uniqKey="Rushton S">SK Rushton</name>
</author>
<author>
<name sortKey="Bradshaw, Mf" uniqKey="Bradshaw M">MF Bradshaw</name>
</author>
<author>
<name sortKey="Warren, Pa" uniqKey="Warren P">PA Warren</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rushton, Sk" uniqKey="Rushton S">SK Rushton</name>
</author>
<author>
<name sortKey="Warren, Pa" uniqKey="Warren P">PA Warren</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Warren, Pa" uniqKey="Warren P">PA Warren</name>
</author>
<author>
<name sortKey="Rushton, Sk" uniqKey="Rushton S">SK Rushton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Warren, Pa" uniqKey="Warren P">PA Warren</name>
</author>
<author>
<name sortKey="Rushton, Sk" uniqKey="Rushton S">SK Rushton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Warren, Pa" uniqKey="Warren P">PA Warren</name>
</author>
<author>
<name sortKey="Rushton, Sk" uniqKey="Rushton S">SK Rushton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Warren, Pa" uniqKey="Warren P">PA Warren</name>
</author>
<author>
<name sortKey="Rushton, Sk" uniqKey="Rushton S">SK Rushton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fajen, Br" uniqKey="Fajen B">BR Fajen</name>
</author>
<author>
<name sortKey="Kim, Ng" uniqKey="Kim N">NG Kim</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mapstone, M" uniqKey="Mapstone M">M Mapstone</name>
</author>
<author>
<name sortKey="Duffy, Cj" uniqKey="Duffy C">CJ Duffy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gray, R" uniqKey="Gray R">R Gray</name>
</author>
<author>
<name sortKey="Macuga, K" uniqKey="Macuga K">K Macuga</name>
</author>
<author>
<name sortKey="Regan, D" uniqKey="Regan D">D Regan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Calabro, Fj" uniqKey="Calabro F">FJ Calabro</name>
</author>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S Soto-Faraco</name>
</author>
<author>
<name sortKey="Vaina, Lm" uniqKey="Vaina L">LM Vaina</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gogel, Wc" uniqKey="Gogel W">WC Gogel</name>
</author>
<author>
<name sortKey="Tietz, Jd" uniqKey="Tietz J">JD Tietz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dyde, Rt" uniqKey="Dyde R">RT Dyde</name>
</author>
<author>
<name sortKey="Harris, Lr" uniqKey="Harris L">LR Harris</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Butler, Js" uniqKey="Butler J">JS Butler</name>
</author>
<author>
<name sortKey="Smith, St" uniqKey="Smith S">ST Smith</name>
</author>
<author>
<name sortKey="Campos, Jl" uniqKey="Campos J">JL Campos</name>
</author>
<author>
<name sortKey="Bulthoff, Hh" uniqKey="Bulthoff H">HH Bulthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fetsch, Cr" uniqKey="Fetsch C">CR Fetsch</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC Deangelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fetsch, Cr" uniqKey="Fetsch C">CR Fetsch</name>
</author>
<author>
<name sortKey="Turner, Ah" uniqKey="Turner A">AH Turner</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC Deangelis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Crowell, Ja" uniqKey="Crowell J">JA Crowell</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Fetsch, Cr" uniqKey="Fetsch C">CR Fetsch</name>
</author>
<author>
<name sortKey="Adeyemo, B" uniqKey="Adeyemo B">B Adeyemo</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC Deangelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Macneilage, Pr" uniqKey="Macneilage P">PR MacNeilage</name>
</author>
<author>
<name sortKey="Zhang, Z" uniqKey="Zhang Z">Z Zhang</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhang, Z" uniqKey="Zhang Z">Z Zhang</name>
</author>
<author>
<name sortKey="Macneilage, Pr" uniqKey="Macneilage P">PR MacNeilage</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Macneilage, Pr" uniqKey="Macneilage P">PR MacNeilage</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chen, A" uniqKey="Chen A">A Chen</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC Deangelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chen, A" uniqKey="Chen A">A Chen</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fetsch, Cr" uniqKey="Fetsch C">CR Fetsch</name>
</author>
<author>
<name sortKey="Rajguru, Sm" uniqKey="Rajguru S">SM Rajguru</name>
</author>
<author>
<name sortKey="Karunaratne, A" uniqKey="Karunaratne A">A Karunaratne</name>
</author>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Watkins, Pv" uniqKey="Watkins P">PV Watkins</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Morgan, Ml" uniqKey="Morgan M">ML Morgan</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wichmann, Fa" uniqKey="Wichmann F">FA Wichmann</name>
</author>
<author>
<name sortKey="Hill, Nj" uniqKey="Hill N">NJ Hill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wichmann, Fa" uniqKey="Wichmann F">FA Wichmann</name>
</author>
<author>
<name sortKey="Hill, Nj" uniqKey="Hill N">NJ Hill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
<author>
<name sortKey="Hess, Bj" uniqKey="Hess B">BJ Hess</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
<author>
<name sortKey="Hess, Bj" uniqKey="Hess B">BJ Hess</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mchenry, Mq" uniqKey="Mchenry M">MQ McHenry</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Warren, Pa" uniqKey="Warren P">PA Warren</name>
</author>
<author>
<name sortKey="Rushton, Sk" uniqKey="Rushton S">SK Rushton</name>
</author>
<author>
<name sortKey="Foulkes, Aj" uniqKey="Foulkes A">AJ Foulkes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Duffy, Cj" uniqKey="Duffy C">CJ Duffy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schlack, A" uniqKey="Schlack A">A Schlack</name>
</author>
<author>
<name sortKey="Hoffmann, Kp" uniqKey="Hoffmann K">KP Hoffmann</name>
</author>
<author>
<name sortKey="Bremmer, F" uniqKey="Bremmer F">F Bremmer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Takahashi, K" uniqKey="Takahashi K">K Takahashi</name>
</author>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="May, Pj" uniqKey="May P">PJ May</name>
</author>
<author>
<name sortKey="Newlands, Sd" uniqKey="Newlands S">SD Newlands</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wallach, H" uniqKey="Wallach H">H Wallach</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kim, H" uniqKey="Kim H">H Kim</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">22768345</article-id>
<article-id pub-id-type="pmc">3388053</article-id>
<article-id pub-id-type="publisher-id">PONE-D-11-25886</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0040264</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Biology</subject>
<subj-group>
<subject>Anatomy and Physiology</subject>
<subj-group>
<subject>Neurological System</subject>
<subj-group>
<subject>Sensory Physiology</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group>
<subject>Computational Biology</subject>
<subj-group>
<subject>Computational Neuroscience</subject>
<subj-group>
<subject>Sensory Systems</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Computational Neuroscience</subject>
<subj-group>
<subject>Sensory Systems</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Psychophysics</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Sensory Systems</subject>
<subj-group>
<subject>Visual System</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Behavioral Neuroscience</subject>
<subject>Cognitive Neuroscience</subject>
</subj-group>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Vestibular Facilitation of Optic Flow Parsing</article-title>
<alt-title alt-title-type="running-head">Vestibular Facilitation of Optic Flow Parsing</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>MacNeilage</surname>
<given-names>Paul R.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Zhang</surname>
<given-names>Zhou</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>DeAngelis</surname>
<given-names>Gregory C.</given-names>
</name>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Angelaki</surname>
<given-names>Dora E.</given-names>
</name>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<label>1</label>
<addr-line>Vertigo, Balance, and Oculomotor Research Center, University Hospital of Munich, Munich, Germany</addr-line>
</aff>
<aff id="aff2">
<label>2</label>
<addr-line>Department of Biomedical Engineering, University of Southern California, Los Angeles, California, United States of America</addr-line>
</aff>
<aff id="aff3">
<label>3</label>
<addr-line>Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York, United States of America</addr-line>
</aff>
<aff id="aff4">
<label>4</label>
<addr-line>Department of Neuroscience, Baylor College of Medicine, Houston, Texas, United States of America</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Lappe</surname>
<given-names>Markus</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">University of Muenster, Germany</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>p.macneilage@gmail.com</email>
</corresp>
<fn fn-type="con">
<p>Conceived and designed the experiments: PRM ZZ DEA. Performed the experiments: PRM ZZ. Analyzed the data: PRM ZZ. Wrote the paper: PRM GCD DEA.</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<year>2012</year>
</pub-date>
<pub-date pub-type="epub">
<day>2</day>
<month>7</month>
<year>2012</year>
</pub-date>
<volume>7</volume>
<issue>7</issue>
<elocation-id>e40264</elocation-id>
<history>
<date date-type="received">
<day>22</day>
<month>12</month>
<year>2011</year>
</date>
<date date-type="accepted">
<day>4</day>
<month>6</month>
<year>2012</year>
</date>
</history>
<permissions>
<copyright-statement>MacNeilage et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</copyright-statement>
<copyright-year>2012</copyright-year>
</permissions>
<abstract>
<p>Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e. optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components.</p>
</abstract>
<counts>
<page-count count="8"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>Accurate and precise estimation of object motion during self-motion is important for survival, because moving organisms must often simultaneously monitor other moving agents, including predators, prey and potential mates. Self-motion relative to a stationary environment produces a globally consistent pattern of visual motion on the retina, whereas independently moving objects give rise to local motion signals that are inconsistent with the global pattern. Thus, estimating object motion during self-motion can potentially be achieved by comparing local retinal motion signals to the global flow pattern. Indeed, visual psychophysical studies in humans have shown that the brain parses retinal image motion into object and self-motion components based on global flow computations
<xref ref-type="bibr" rid="pone.0040264-Warren1">[1]</xref>
<xref ref-type="bibr" rid="pone.0040264-Warren5">[9]</xref>
. This body of research has focused on two related topics: 1) estimating heading (i.e., direction of self-translation) in the presence of moving objects
<xref ref-type="bibr" rid="pone.0040264-Warren1">[1]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Royden2">[3]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Fajen1">[10]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Mapstone1">[11]</xref>
, and 2) estimating object motion during self-motion
<xref ref-type="bibr" rid="pone.0040264-Royden1">[2]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Rushton1">[4]</xref>
<xref ref-type="bibr" rid="pone.0040264-Warren5">[9]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Gray1">[12]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Calabro1">[13]</xref>
.</p>
<p>These studies, however, have primarily focused on biases introduced by interactions between object motion and background motion due to self-translation, and have not generally considered how these interactions affect perceptual sensitivity. Furthermore, while some prior studies have investigated perception of object motion during real physical self-motion
<xref ref-type="bibr" rid="pone.0040264-Gogel1">[14]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Dyde1">[15]</xref>
, other studies that have focused on the specific question of optic flow parsing have largely ignored non-visual (e.g., vestibular and proprioceptive) cues that could help to disambiguate retinal image motion. In particular, vestibular sensory signals play a vital role in heading perception, leading to more precise heading estimates when both visual and vestibular cues are available
<xref ref-type="bibr" rid="pone.0040264-Butler1">[16]</xref>
<xref ref-type="bibr" rid="pone.0040264-Gu1">[19]</xref>
. Given these interactions between self-motion and object motion perception, as documented previously, we hypothesized that vestibular signals may also influence the precision with which subjects judge object motion during self-motion.</p>
<p>To test this hypothesis, we asked subjects to discriminate object motion during simulated self-motion in the presence and absence of scene-consistent vestibular stimulation. Our rationale is as follows: combined visual/vestibular stimulation leads to improved heading perception
<xref ref-type="bibr" rid="pone.0040264-Butler1">[16]</xref>
<xref ref-type="bibr" rid="pone.0040264-Gu1">[19]</xref>
and thus presumably improved flow estimation at the object location, and may therefore also lead to improved flow parsing ability and object motion discrimination. The vestibular contribution to heading perception depends on the relative reliability of visual and vestibular cues, so we hypothesized that the same should hold for flow-parsing and object motion discrimination. Relative reliability was manipulated by varying heading eccentricity (i.e., heading direction relative to straight ahead). Relative reliability of vestibular cues increases with eccentricity because visual heading discrimination thresholds increase more steeply with eccentricity than vestibular thresholds
<xref ref-type="bibr" rid="pone.0040264-Crowell1">[20]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Gu2">[21]</xref>
. Therefore we expected that improvement in object motion discrimination thresholds during the combined visual-vestibular stimulation would be more pronounced for eccentric rather than forward heading directions. Preliminary aspects of this work were presented in abstract form
<xref ref-type="bibr" rid="pone.0040264-MacNeilage1">[22]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Zhang1">[23]</xref>
.</p>
</sec>
<sec sec-type="methods" id="s2">
<title>Methods</title>
<sec id="s2a">
<title>Ethics Statement</title>
<p>Eight human subjects (3 female) participated in this study. Informed consent was obtained from all participants and all procedures were reviewed and approved by the human subjects committee of Washington University.</p>
</sec>
<sec id="s2b">
<title>Setup</title>
<p>Subjects were seated in a padded racing seat mounted on a 6-degree-of-freedom Moog© motion platform. A 3-chip DLP projector (Galaxy 6; Barco, Kortrijk, Belgium) was also mounted on the motion platform behind the subject and front-projected images onto a large (149×127 cm) projection screen via a mirror mounted above the subject’s head. The projection screen was located ∼70 cm in front of the eyes, thus allowing for a visual angle of ∼94°×84°. A 5-point harness held subjects’ bodies securely in place and a custom-fitted plastic mask secured the head against a cushioned head mount thereby holding head position fixed relative to the chair. Subjects were enclosed in a black aluminum superstructure, such that only the display screen was visible in the darkened room. Subjects also wore active stereo shutter glasses (CrystalEyes 3; RealD, Beverly Hills, CA), thereby restricting the field of view to ∼90°×70°. Eye position was recorded for both eyes at 600 Hz via a video-based eye-tracking system (ISCAN©) attached to the stereo glasses and subjects were instructed to look at a centrally-located, head-fixed target throughout each trial. Sounds from the platform were masked by playing white noise through headphones. Behavioral tasks and data acquisition were controlled by Matlab and responses were collected using a button box. Additional details specific to the human apparatus can be found in recent publications
<xref ref-type="bibr" rid="pone.0040264-Fetsch2">[18]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Gu2">[21]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-MacNeilage2">[24]</xref>
.</p>
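As a quick arithmetic check of the reported field of view, the visual angles follow from the screen size and viewing distance; the sketch below (Python, assuming the eye is centered on the 149×127 cm screen at 70 cm) reproduces the ∼94°×84° figure quoted above.

    import math

    # Screen dimensions (cm) and viewing distance (cm) as reported in the Setup section.
    width_cm, height_cm, dist_cm = 149.0, 127.0, 70.0

    def full_angle_deg(extent_cm, dist_cm):
        # Visual angle subtended by one screen dimension, eye centered on the screen.
        return 2.0 * math.degrees(math.atan((extent_cm / 2.0) / dist_cm))

    print(full_angle_deg(width_cm, dist_cm))   # ~93.7 deg horizontal
    print(full_angle_deg(height_cm, dist_cm))  # ~84.4 deg vertical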
</sec>
<sec id="s2c">
<title>Experimental Protocol: Main Experiment</title>
<p>The visual scene consisted of a 3-dimensional (3D) starfield composed of randomly placed triangles with base and height of 1 cm. The triangles filled a volume 170 cm wide × 170 cm tall × 100 cm deep and the 3D density of triangles was 0.001 triangles/cm
<sup>3</sup>
. With this density and viewing frustum, ∼1000 triangles were rendered on a given frame. The nearest and farthest rendered triangles subtended ∼3° and ∼0.6°, respectively. A spherical object (diameter of 10 cm, i.e., ∼8°) was rendered at the same depth as the screen, and located to the left of the fixation point, ∼27 cm (∼21°) away. The object was also composed of random triangles and the density of triangles within the volume of the object was the same as for the starfield, such that the object was distinguished only by its velocity relative to the background motion. Given the volume of the sphere and its density, ∼4 triangles were rendered within the sphere on a given video frame. Motion coherence of the starfield and object was set to 70% and the elements of the scene were limited-lifetime (1 sec). Note, reduced motion coherence was used to make the relative reliabilities of the visual and vestibular self-motion cues more equal
<xref ref-type="bibr" rid="pone.0040264-Fetsch1">[17]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Fetsch2">[18]</xref>
, and to allow comparison with heading discrimination data collected under the same conditions with a range of heading eccentricities
<xref ref-type="bibr" rid="pone.0040264-Gu2">[21]</xref>
. To prevent pop-out of the object relative to the background, object motion coherence matched coherence of the background star field.</p>
<p>Each trial simulated a 13 cm, 1 s translation of the subject relative to the starfield and object. The object was simultaneously displaced either upward or downward relative to the starfield and the subject’s task was to indicate whether the object moved upward or downward relative to the world (
<xref ref-type="fig" rid="pone-0040264-g001">Fig. 1A</xref>
). Note that we did not attempt to evaluate whether subjects made their judgments in world or screen coordinates. However, regardless of the coordinate frame of the judgment, subjects had to parse the optic flow field to perform the task. Thus, for this task, we do not suspect that the basic conclusions of the present study would change depending on the strategy used by the subjects.</p>
<fig id="pone-0040264-g001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0040264.g001</object-id>
<label>Figure 1</label>
<caption>
<title>Schematic of the experimental design.</title>
<p>A) Side-view illustrating the task with a heading of 0° (straight forward). The subject experiences self-motion and synchronized movement of the object (dashed circle) either up or down. The subject’s task is to indicate which direction the object moved in the world. B) Close up of the pattern of image motion on the display for heading = 60° and downward object motion in the world (from panel E). Variables
<italic>v
<sub>s</sub>
</italic>
and
<italic>v
<sub>o</sub>
</italic>
represent the independent components of image motion associated with the self-motion and object motion, respectively (horizontal and vertical white arrows). Note that the object motion component (
<italic>v
<sub>o</sub>
</italic>
) is equal in all examples shown here (C-F), but the angle of deviation (
<italic>d</italic>
) is not because the self-motion component (
<italic>v
<sub>s</sub>
</italic>
) depends on heading direction. (C)-(F) The experiment was conducted at four heading directions: 0°, 30°, 60°, and 90°. The optic flow associated with each heading direction (as displayed on the screen) is illustrated in each panel and each inset shows a top down view of the self-motion trajectory. As heading eccentricity increases, the focus of expansion (FOE) is displaced further from the center of the display. The resultant image motion associated with the object is also visible in these panels to the left of fixation.</p>
</caption>
<graphic xlink:href="pone.0040264.g001"></graphic>
</fig>
<p>The simulated self-motion and object motion followed synchronized Gaussian velocity profiles, such that the object could not be distinguished simply by having a different temporal profile of motion than the background. Given this velocity profile, the peak simulated visual and vestibular speed of self-motion was 30 cm/s and peak acceleration/deceleration was 1.13 m/s
<sup>2</sup>
. This dynamic stimulus was chosen because: (1) it is a smooth, transient, natural stimulus, (2) it evokes robust visual and vestibular responses in cortical multisensory neurons (e.g., areas MSTd and VIP; both visual and vestibular responses tend to reflect stimulus velocity more than acceleration
<xref ref-type="bibr" rid="pone.0040264-Chen1">[25]</xref>
<xref ref-type="bibr" rid="pone.0040264-Gu3">[28]</xref>
), (3) it results in near-optimal multisensory integration, both at the level of behavior
<xref ref-type="bibr" rid="pone.0040264-Fetsch1">[17]</xref>
<xref ref-type="bibr" rid="pone.0040264-Gu1">[19]</xref>
and at the level of single neurons
<xref ref-type="bibr" rid="pone.0040264-Fetsch1">[17]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Gu1">[19]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Morgan1">[29]</xref>
.</p>
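For illustration, the sketch below constructs such a Gaussian velocity profile with the reported 13 cm displacement and 30 cm/s peak velocity. The width parameter sigma is not stated in the text, so it is inferred from these two values, and the resulting peak acceleration is therefore only approximate rather than a reproduction of the reported 1.13 m/s².

    import numpy as np

    # Sketch of a Gaussian velocity profile: 13 cm displacement over 1 s.
    # sigma is chosen so that the peak velocity matches the reported 30 cm/s;
    # the peak acceleration then follows from sigma and need not match the paper exactly.
    duration, displacement, peak_vel = 1.0, 13.0, 30.0            # s, cm, cm/s
    sigma = displacement / (peak_vel * np.sqrt(2 * np.pi))        # area of a Gaussian = displacement

    t = np.linspace(0.0, duration, 1000)
    v = peak_vel * np.exp(-0.5 * ((t - duration / 2) / sigma) ** 2)  # velocity (cm/s)
    a = np.gradient(v, t)                                            # acceleration (cm/s^2)

    print(np.trapz(v, t))          # ~13 cm total displacement
    print(np.abs(a).max() / 100)   # peak acceleration in m/s^2 (~1.05 with this sigma)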
<p>Due to the independent object motion in the scene, the retinal image motion associated with the object deviated from that of the surrounding optic flow (
<xref ref-type="fig" rid="pone-0040264-g001">Fig. 1B</xref>
). Deviation angle was varied from trial to trial according to a staircase procedure. The staircase began at the largest deviation angle and possible deviation angles were +/− [80° 64° 48° 32° 16° 8° 4° 2° 1° 0.5° 0.25°]. The deviation angle was reduced 30% of the time after correct responses and was increased 80% of the time after incorrect responses. This staircase rule converges to the 73% point of the psychometric function. The deviation angle was positive (upward) on 50% of trials and negative (downward) on the other 50%.</p>
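The staircase rule can be made concrete with a short sketch; the step probabilities follow the description above, and the 73% convergence point falls out of the equilibrium condition noted in the comment.

    import random

    # Possible deviation angles (deg), from largest to smallest, as listed in the text.
    LEVELS = [80, 64, 48, 32, 16, 8, 4, 2, 1, 0.5, 0.25]

    def next_level(index, correct, rng=random):
        """Weighted staircase step: after a correct response the deviation is reduced
        with probability 0.3; after an incorrect response it is increased with
        probability 0.8. At equilibrium 0.3*p = 0.8*(1-p), i.e. p ~ 0.73."""
        if correct and rng.random() < 0.3:
            index = min(index + 1, len(LEVELS) - 1)   # smaller deviation (harder)
        elif not correct and rng.random() < 0.8:
            index = max(index - 1, 0)                 # larger deviation (easier)
        return index

    # Example: start at the largest deviation and simulate a few responses.
    idx = 0
    for correct in [True, True, False, True]:
        idx = next_level(idx, correct)
    print(LEVELS[idx])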
<p>The angle of deviation is given by
<inline-formula>
<inline-graphic xlink:href="pone.0040264.e001.jpg" mimetype="image"></inline-graphic>
</inline-formula>
where
<italic>v
<sub>s</sub>
</italic>
and
<italic>v
<sub>o</sub>
</italic>
are the independent velocity components (in screen coordinates) associated with self-motion and object motion, respectively (
<xref ref-type="fig" rid="pone-0040264-g001">Fig. 1B</xref>
). The self-motion component (
<italic>v
<sub>s</sub>
</italic>
) depended on heading angle but was constant for a given heading (peak velocity of 10.2°/s, 20.7°/s, 24.0°/s, and 20.8°/s for headings of 0°, 30°, 60°, and 90°, respectively). Deviation angle (d) for a given trial was specified by the staircase procedure. Object speed on the screen (
<italic>v
<sub>o</sub>
</italic>
) was therefore constrained to satisfy the above equation.</p>
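Because the published formula is rendered as an inline graphic in this record, the exact expression cannot be quoted here; the sketch below assumes the geometrically natural relation tan(d) = v_o / v_s implied by Fig. 1B, and shows how v_o would be constrained for a given heading and staircase level.

    import math

    def object_speed_on_screen(v_s_deg_per_s, deviation_deg):
        """Object image speed v_o needed to produce deviation angle d, assuming
        tan(d) = v_o / v_s (inferred from Fig. 1B; the original formula is an image)."""
        return v_s_deg_per_s * math.tan(math.radians(deviation_deg))

    # Example: peak self-motion component for the 60 deg heading (24.0 deg/s)
    # and an 8 deg deviation from the staircase.
    print(object_speed_on_screen(24.0, 8.0))  # ~3.4 deg/s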
<p>Four different heading directions were examined (0°, 30°, 60°, and 90° from straight ahead,
<xref ref-type="fig" rid="pone-0040264-g001">Fig. 1C-F</xref>
), with data for each heading angle collected in a separate block of trials. Trials for visual-only and combined (visual/vestibular) conditions were interleaved within a given block (200 trials/block, lasting ∼25 min). This made for a total of 8 stimulus conditions in the
<italic>Main Experiment</italic>
. At least 800 trials per condition per subject (6 subjects, S1-S6) were collected.</p>
</sec>
<sec id="s2d">
<title>Experimental Protocol: Eye-movement Control</title>
<p>Because no eye movement data were recorded initially, we repeated the visual-only and combined protocols in a second experiment for the lateral (90°) heading only, while recording eye movements. This was necessary to verify that subjects maintained fixation equally well during both visual-only and combined visual-vestibular trials. At least 500 trials per subject per condition were collected in 5 subjects (S4-S8) for the second experiment.</p>
</sec>
<sec id="s2e">
<title>Experimental Protocol: Retinal-speed Control</title>
<p>Finally, in a third experiment, observers were presented with visual-only trials, as described above, except that the simulated distance of translation was reduced to less than 13 cm (6.75, 5.56, and 6.13 cm for heading directions of 30°, 60° and 90°, respectively) in order to achieve the same retinal image speed (
<italic>v
<sub>s</sub>
</italic>
in
<xref ref-type="fig" rid="pone-0040264-g001">Fig. 1B</xref>
) at the eccentric location where the moving object was presented (
<italic>v
<sub>s</sub>
</italic>
equal to 10.2°/s for all headings). This control experiment was necessary to examine to what extent the observed dependence of object motion discrimination thresholds on heading direction was simply a result of changes in retinal speed. Because translation distance was fixed in the first experiment,
<italic>v
<sub>s</sub>
</italic>
increases with eccentricity, such that effects of heading eccentricity (i.e. flow-field geometry) and retinal speed are confounded. At least 600 trials per subject per condition were collected in 5 subjects (S4-S8) for the third experiment.</p>
</sec>
<sec id="s2f">
<title>Data Analysis</title>
<p>For each subject and each condition we plotted the proportion of ‘upward’ responses as a function of object deviation angle and a cumulative Gaussian function was fit to these data using psignifit software
<xref ref-type="bibr" rid="pone.0040264-Wichmann1">[30]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Wichmann2">[31]</xref>
. Threshold is given by the standard deviation of the fitted function. A two-factor repeated measures ANOVA was performed on threshold data from the Main Experiment to examine the effect of heading eccentricity (0°, 30°, 60°, 90°), the effect of condition (visual-only, combined), and their interaction. Data were further examined using paired t-tests. Threshold data from the Retinal-speed Control experiment were analyzed with a one-factor repeated measures ANOVA to examine the effect of heading eccentricity (0°, 30°, 60°, 90°) when retinal speed at the object location was matched across headings.</p>
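The study used the psignifit package; as a rough stand-in, the sketch below fits a cumulative Gaussian by maximum likelihood with scipy (without the lapse-rate parameters that psignifit also estimates) and returns the fitted standard deviation as the threshold.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def fit_cumulative_gaussian(deviation_deg, responded_up):
        """Fit P('up') = Phi((x - mu) / sigma) to binary responses by maximum
        likelihood; sigma (the SD of the fitted function) is the threshold."""
        x = np.asarray(deviation_deg, float)
        y = np.asarray(responded_up, float)

        def neg_log_lik(params):
            mu, log_sigma = params
            p = norm.cdf(x, loc=mu, scale=np.exp(log_sigma))
            p = np.clip(p, 1e-6, 1 - 1e-6)              # avoid log(0)
            return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

        res = minimize(neg_log_lik, x0=[0.0, np.log(10.0)], method="Nelder-Mead")
        mu, sigma = res.x[0], np.exp(res.x[1])
        return mu, sigma   # sigma = discrimination threshold (deg)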
<p>To analyze eye movement data, horizontal eye position traces were first smoothed by applying a boxcar filter and then differentiated to obtain eye velocity traces for both eyes. From these traces we calculated mean eye velocity during the stimulus presentation (1s) on each trial and then examined how psychophysical threshold changed as a function of mean eye velocity for each subject. Over the entire range of mean eye velocities, we used a sliding window 1°/s wide, and fit a psychometric function to all trials within that window, provided that a minimum of 150 trials were available in a given velocity window. Window position was increased from the minimum to the maximum mean velocity at 0.1°/s intervals, so that a different threshold was calculated for each window position (i.e., each mean eye velocity). A regression line was fit to the resulting data and the slope and significance of the regression were used to evaluate the influence of mean eye velocity on discrimination performance.</p>
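One possible implementation of the sliding-window analysis described above is sketched here; it reuses the fit_cumulative_gaussian helper from the previous sketch and the window width, step size, and trial-count values given in the text.

    import numpy as np
    from scipy.stats import linregress

    def threshold_vs_eye_velocity(mean_eye_vel, deviation_deg, responded_up,
                                  window=1.0, step=0.1, min_trials=150):
        """Fit a psychometric function to all trials whose mean eye velocity falls
        in a 1 deg/s window, stepping the window in 0.1 deg/s increments, then
        regress threshold on window center (slope and p-value evaluate the trend)."""
        v = np.asarray(mean_eye_vel, float)
        centers, thresholds = [], []
        for lo in np.arange(v.min(), v.max() - window + step, step):
            sel = (v >= lo) & (v < lo + window)
            if sel.sum() < min_trials:
                continue
            _, sigma = fit_cumulative_gaussian(np.asarray(deviation_deg)[sel],
                                               np.asarray(responded_up)[sel])
            centers.append(lo + window / 2)
            thresholds.append(sigma)
        return linregress(centers, thresholds)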
</sec>
</sec>
<sec id="s3">
<title>Results</title>
<p>In these experiments, optic flow simulated observer translation through a starfield, while simultaneously an object moved up or down in the world (
<xref ref-type="fig" rid="pone-0040264-g001">Fig. 1A</xref>
). The subject’s task was to indicate the object’s motion direction (up/down) in the world during trials in which self-motion was cued by either optic flow alone (visual-only condition) or optic flow combined with platform motion (combined condition). The object was transparent, composed of random dots with the same density as the starfield, and was distinguished from the starfield only by the relative velocity of its movement. Starfield and object velocity followed synchronized Gaussian velocity profiles. Object motion amplitude (i.e., total displacement), and thus angle of deviation of the object motion relative to the background (
<xref ref-type="fig" rid="pone-0040264-g001">Fig. 1B</xref>
), was varied from trial to trial using a staircase procedure. Subjects were instructed to maintain visual fixation on a central, head-fixed target to cancel reflexive eye movements. In each block of trials, the heading was fixed, but it differed across blocks such that data were collected separately for forward (0°), lateral (rightward, 90°) and intermediate (30° and 60°) directions (
<xref ref-type="fig" rid="pone-0040264-g001">Fig. 1</xref>
).</p>
<sec id="s3a">
<title>Main Experiment</title>
<p>Subject-by-subject thresholds for both the visual-only and combined conditions are displayed in
<xref ref-type="fig" rid="pone-0040264-g002">Fig. 2</xref>
(blue and red bars, respectively). For most subjects and most headings, combined thresholds are slightly lower than visual-only thresholds, and this effect was significant. Across all heading eccentricities, the mean object discrimination threshold is lower in the combined condition compared to the visual-only condition (p = 0.011; paired t-test), consistent with the hypothesis that vestibular cues facilitate optic flow parsing. A separate repeated measures ANOVA also revealed a significant main effect of stimulus condition on thresholds (combined vs. visual-only: F(1,5) = 7.40, p = 0.04).</p>
<fig id="pone-0040264-g002" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0040264.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Summary of discrimination thresholds.</title>
<p>Each panel shows the data from a different subject. Error bars represent 95% confidence intervals. Subjects S1-S6 participated in the main experiment, so visual-only (blue bars) and combined (red bars) thresholds were measured at all heading eccentricities. Subjects S4-S8 participated in the retinal speed (RS) control experiment (green bars). Note that subjects S7 and S8 were only tested with the 90° heading in the eye movement control experiment (lateral motion).</p>
</caption>
<graphic xlink:href="pone.0040264.g002"></graphic>
</fig>
<p>Closer examination of
<xref ref-type="fig" rid="pone-0040264-g002">Fig. 2</xref>
reveals that the improvement in object discrimination thresholds in the combined condition depends on heading eccentricity, and this effect was also significant (F(3,5) = 3.78, p = 0.03, interaction term of repeated measures ANOVA). This dependence of vestibular facilitation on heading eccentricity is further illustrated in
<xref ref-type="fig" rid="pone-0040264-g003">Fig. 3</xref>
, which plots the percentage decrease in object discrimination thresholds in the combined condition, relative to that in the visual-only condition, for subjects that participated in all conditions of the main experiment (S1-S6). For the forward (0°) heading, there was no significant improvement in object discrimination thresholds when vestibular cues were present (p = 0.58; paired t-test). In contrast, for headings 30°, 60°, and 90°, the improvement was either significant or approaching significance (p = 0.02, p = 0.12, p = 0.04, respectively; paired t-test). Pooling across all non-zero heading directions, the improvement was highly significant (p<0.001; paired t-test).</p>
<fig id="pone-0040264-g003" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0040264.g003</object-id>
<label>Figure 3</label>
<caption>
<title>Comparison of visual-only and combined thresholds.</title>
<p>Percent decrease in combined threshold relative to the visual-only threshold (computed as
<italic>(</italic>
<bold>
<italic>σ</italic>
</bold>
<italic>
<sub>v</sub>
- </italic>
<bold>
<italic>σ</italic>
</bold>
<italic>
<sub>c</sub>
)/</italic>
<bold>
<italic>σ</italic>
</bold>
<italic>
<sub>v</sub>
</italic>
; subjects S1-S6) for all four heading angles. The decrease in threshold depends on heading angle, with the smallest decrease for 0° heading and the largest decrease for 90° heading.</p>
</caption>
<graphic xlink:href="pone.0040264.g003"></graphic>
</fig>
<p>As shown in
<xref ref-type="fig" rid="pone-0040264-g003">Fig. 3</xref>
, vestibular facilitation was smallest for the 0° heading, greatest for the 90° heading, and moderate for the intermediate heading angles. The corresponding mean percentage decreases in the combined condition were −3.1%, 9.7%, 6.7%, and 17.0% for headings 0°, 30°, 60°, and 90°, respectively. While we do not expect vestibular facilitation to depend linearly on heading eccentricity, the data suggest a trend for vestibular facilitation to increase with heading eccentricity. Therefore, using the data presented in
<xref ref-type="fig" rid="pone-0040264-g003">Fig. 3</xref>
, we conducted a non-parametric (rank-based) correlation analysis in order to evaluate the significance of this trend. This revealed a significant positive correlation between heading eccentricity and percent decrease in combined threshold (p = 0.007, Spearman’s rho  = 0.53).</p>
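The rank-based trend test can be expressed compactly; the function below is a minimal sketch taking per-subject, per-heading values as inputs (no study data are reproduced here).

    from scipy.stats import spearmanr

    def eccentricity_trend(heading_deg, pct_decrease):
        """Non-parametric (rank-based) test of whether vestibular facilitation
        increases with heading eccentricity, as in the analysis described above.
        Inputs are per-subject, per-heading values, e.g. 6 subjects x 4 headings."""
        rho, p = spearmanr(heading_deg, pct_decrease)
        return rho, p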
</sec>
<sec id="s3b">
<title>Eye-movement Control</title>
<p>A potentially trivial explanation for this finding is that incomplete suppression of the translational vestibulo-ocular reflex (TVOR) improves nulling of retinal slip in the combined condition compared to the visual-only condition. In this scenario, a residual TVOR during combined stimulation would
<italic>physically</italic>
(rather than
<italic>computationally</italic>
through flow parsing) cancel more of the background motion on the retina, thus reducing the speed of the starfield motion and making it easier to discriminate the direction of object motion. Indeed, prior research has shown that the TVOR is more effective in canceling retinal slip during lateral than during forward movements
<xref ref-type="bibr" rid="pone.0040264-Angelaki1">[32]</xref>
<xref ref-type="bibr" rid="pone.0040264-McHenry1">[34]</xref>
, consistent with the improvement we observed during lateral self-motion. We therefore repeated the experiment for the lateral (90°) heading in a subset of subjects (S4-S8) while recording eye movements, in order to monitor fixation and identify differences in residual eye velocity between visual-only and combined conditions.</p>
<p>Distributions of mean eye velocity (for the left eye) are illustrated in
<xref ref-type="fig" rid="pone-0040264-g004">Fig. 4</xref>
, left column (blue: visual-only condition; red: combined condition). Because the self-motion direction was rightward in these experiments, an unsuppressed TVOR would elicit leftward (negative) eye velocities. All histograms peaked near zero with only one subject (S6) exhibiting mean eye velocity significantly different from zero (t-test, visual-only p<0.001, combined p = 0.01). Importantly, visual-only and combined histograms were largely overlapping; there was no significant difference in the distribution of eye velocity between combined and visual-only conditions, and this was true for all subjects (t-test, p>0.05). To further investigate the relationship between eye movements and object discrimination performance, we also examined how object discrimination thresholds changed as a function of mean eye velocity for each subject. To do this, we binned trials according to mean eye velocity and we fitted psychometric functions to behavioral data for each bin (see
<xref ref-type="sec" rid="s2">Methods</xref>
for details). If a residual TVOR facilitates object motion discrimination in the combined condition (red), there should be a positive correlation between mean eye velocity and discrimination performance (i.e., leftward (negative) eye velocity should be associated with lower thresholds).</p>
<fig id="pone-0040264-g004" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0040264.g004</object-id>
<label>Figure 4</label>
<caption>
<title>Summary of eye movement analysis.</title>
<p>Each row summarizes data from one subject. Only left eye (LE) velocities were used for these analyses; conducting the same analyses using right eye velocities yielded similar results. Left column shows histograms of mean eye velocities from all trials for both the Visual-only (blue) and Combined (red) conditions. Right column shows Visual-only (blue) and Combined (red) thresholds as a function of mean eye velocity, along with regression lines fit to these data (see text for details).</p>
</caption>
<graphic xlink:href="pone.0040264.g004"></graphic>
</fig>
<p>Only one subject (S6) exhibited a significant positive correlation between eye velocity and discrimination threshold in the combined condition (r = 0.85, p<0.001). However, visual-only and combined thresholds were virtually identical for this subject (
<xref ref-type="fig" rid="pone-0040264-g002">Fig. 2</xref>
, S6, Heading = 90°). On the other hand, subjects who exhibited the largest decrease in threshold for the combined relative to the visual-only condition (e.g. S5 or S7) showed a negative correlation for the combined condition in
<xref ref-type="fig" rid="pone-0040264-g004">Fig. 4</xref>
(larger leftward eye velocities were associated with
<italic>worse</italic>
discrimination performance; S5, r = −0.76, p = 0.001; S7, r = −0.82, p<0.001). Moreover, S7 showed a significant positive correlation between threshold and eye velocity for the visual-only condition (r = 0.90, p<0.001), suggesting that unsuppressed (perhaps optokinetic) eye movements led to improved performance in the visual-only but not in the combined condition. Yet this subject performed better in the combined that the visual-only condition, suggesting that these correlations cannot explain the behavioral results. Thus, in summary, we found no evidence that the improvement in object discrimination thresholds in the combined condition is due to a physical cancellation of the optic flow by unsuppressed, reflexive eye movements.</p>
</sec>
<sec id="s3c">
<title>Retinal-speed Control</title>
<p>The data from the visual-only and combined conditions of the Main Experiment (
<xref ref-type="fig" rid="pone-0040264-g002">Figs. 2</xref>
and
<xref ref-type="fig" rid="pone-0040264-g003">3</xref>
, S1-S6) show a significant (F(3,5) = 28.25, p<0.001) overall effect of heading direction: object discrimination thresholds were consistently greatest for the 0° heading. We hypothesized that this dependence was predominantly due to differences in the self-motion-related component of retinal speed at the object location (
<italic>v
<sub>s</sub>
</italic>
) across headings. Specifically, as heading direction is shifted from forward toward lateral, the expected retinal image motion due to self-motion at the location of the object (
<italic>v
<sub>s</sub>
</italic>
in
<xref ref-type="fig" rid="pone-0040264-g001">Fig. 1B</xref>
) increases. We therefore repeated the experiment for a subset of subjects while matching optic flow speed at the object location (
<italic>v
<sub>s</sub>
</italic>
) across heading directions. This was done by changing the amplitude of self-motion as a function of heading. With the self-motion component of retinal speed (
<italic>v
<sub>s</sub>
</italic>
) matched at the location of the object, any remaining effect of heading direction would suggest some dependence of flow-parsing on flow field geometry. In particular, for heading 0, the flow field is radial and there is considerable divergence at the location of the object motion (
<xref ref-type="fig" rid="pone-0040264-g001">Fig. 1C</xref>
). For heading 90, on the other hand, the flow field is laminar and divergence at the location of object motion is minimal (
<xref ref-type="fig" rid="pone-0040264-g001">Fig. 1F</xref>
).</p>
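The speed-matching manipulation can be made concrete with a standard pinhole optic-flow model. This is an illustrative assumption, not the stimulus code used in the experiment: for pure translation, the self-motion component of image velocity at the (eccentric) object location scales linearly with translation speed, so the amplitude of self-motion can be rescaled per heading to hold |v_s| constant.

# Illustrative sketch only (assumed pinhole-flow geometry, hypothetical names):
# self-motion component of retinal velocity v_s at an image point, and the
# translation speed that matches |v_s| across heading directions.
import numpy as np

def v_s(x, y, Z, heading_deg, speed, f=1.0):
    """Image-velocity magnitude at image point (x, y) (focal-length units)
    produced by a scene point at depth Z during translation at 'speed' along
    a heading in the horizontal plane (0 deg = forward, 90 deg = lateral)."""
    th = np.radians(heading_deg)
    Tx, Tz = speed * np.sin(th), speed * np.cos(th)   # no vertical translation
    u = (-f * Tx + x * Tz) / Z                        # horizontal image velocity
    v = (y * Tz) / Z                                  # vertical image velocity
    return np.hypot(u, v)

def matched_speed(x, y, Z, heading_deg, target_vs, f=1.0):
    # |v_s| is linear in speed, so scale the unit-speed flow to the target.
    # (Assumes an eccentric object location, where |v_s| is non-zero.)
    return target_vs / v_s(x, y, Z, heading_deg, 1.0, f)

For heading 0 the resulting flow at the object is the radial pattern of Fig. 1C, and for heading 90 it is the laminar pattern of Fig. 1F, with |v_s| equated in both cases.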
<p>Results from this experiment are illustrated by the green bars in
<xref ref-type="fig" rid="pone-0040264-g002">Fig. 2</xref>
(S4-S8). When the retinal speed of optic flow at the object location (
<italic>v
<sub>s</sub>
</italic>
) was matched across headings, there was no significant influence of heading direction on object discrimination thresholds (F(3,4) = 1.34, p = 0.31). Thus, the overall effect of heading eccentricity on discrimination thresholds in the first experiment appears to result primarily from associated changes in retinal speed. Prior research has demonstrated the dependence of flow parsing on global flow properties
<xref ref-type="bibr" rid="pone.0040264-Royden1">[2]</xref>
. However, within the limits of our investigation of this question, we found no evidence that flow parsing depended on the degree of divergence in the flow field at the location of the object motion.</p>
</sec>
</sec>
<sec id="s4">
<title>Discussion</title>
<p>Estimation of self-motion and object motion are reciprocal parts of the flow-parsing problem, so factors influencing estimation of self-motion may also influence observers’ ability to estimate object motion during self-motion. We examined the influence of vestibular stimulation and heading direction on observers’ ability to discriminate the direction of object motion in the world. Similar manipulations were shown previously to influence heading discrimination
<xref ref-type="bibr" rid="pone.0040264-Gu1">[19]</xref>
<xref ref-type="bibr" rid="pone.0040264-Gu2">[21]</xref>
, and here we have shown that they also influence object motion discrimination. We found that object discrimination thresholds during self-motion generally decreased when congruent vestibular stimulation accompanied background optic flow, suggesting that vestibular inputs can help parse retinal image motion into self-motion and object motion components.</p>
<sec id="s4a">
<title>Vestibular Facilitation of Optic Flow Parsing</title>
<p>Although the observed effect was small, this is not surprising considering the processes that are likely to be involved. We assume (at least) a two-stage process in which 1) the nervous system generates a multisensory estimate of self-motion, and 2) uses this estimate to recover object motion in the world by canceling the expected visual consequences of self-motion. Any facilitation due to vestibular stimuli will most likely act by reducing the variability of the multisensory estimate of self-motion described in stage one above. We have studied visual-vestibular heading estimation extensively
<xref ref-type="bibr" rid="pone.0040264-Fetsch1">[17]</xref>
<xref ref-type="bibr" rid="pone.0040264-Gu1">[19]</xref>
and have found that the standard predictions of the Maximum-likelihood Estimation (MLE) model of cue integration are upheld
<xref ref-type="bibr" rid="pone.0040264-Ernst1">[35]</xref>
. The predicted improvement in heading discrimination for the combined condition relative to visual-only is at most a factor of ∼√2 in threshold, and this maximum is reached when visual and vestibular heading estimates are approximately equally reliable.</p>
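For reference, the MLE prediction invoked here can be written explicitly; this is the standard cue-combination result of [35], not a new derivation:

\[
\sigma_{\mathrm{comb}} \;=\; \sqrt{\frac{\sigma_{\mathrm{vis}}^{2}\,\sigma_{\mathrm{vest}}^{2}}{\sigma_{\mathrm{vis}}^{2}+\sigma_{\mathrm{vest}}^{2}}},
\qquad
\sigma_{\mathrm{vis}}=\sigma_{\mathrm{vest}}=\sigma \;\Rightarrow\; \sigma_{\mathrm{comb}}=\frac{\sigma}{\sqrt{2}} .
\]

When one cue is much noisier than the other, \(\sigma_{\mathrm{comb}}\) approaches the better single-cue threshold and the predicted gain becomes negligible.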
<p>Over the range of headings investigated here, previous measurements indicate that the reliabilities of visual and vestibular heading estimates vary considerably
<xref ref-type="bibr" rid="pone.0040264-Gu2">[21]</xref>
. For discrimination around a straight-ahead (forward) heading reference, visual heading discrimination thresholds are much lower than vestibular thresholds. However, visual heading thresholds increase approximately 5-fold as the reference eccentricity increases toward lateral heading directions [
<xref ref-type="fig" rid="pone-0040264-g002">Fig. 2B</xref>
of 21]. Vestibular heading discrimination thresholds also increase with eccentricity of the reference heading, but only approximately 2-fold, for lateral as compared to forward heading directions [
<xref ref-type="fig" rid="pone-0040264-g002">Fig. 2A</xref>
of 21]. Vestibular heading thresholds were never lower than visual thresholds, but were approximately equal for the lateral (90°) heading eccentricity.</p>
<p>Consequently, it is reasonable to expect that vestibular cues are weighted more heavily for eccentric heading directions where their relative reliability is more comparable to that of visual heading cues. By this logic, we expect to see larger vestibular-facilitated decreases in object motion discrimination thresholds for eccentric rather than forward heading directions. Our results are consistent with this hypothesis. Subjects showed little or no improvement in object motion discrimination in the combined condition for forward heading (0°) and the largest improvement for lateral (90°) heading (
<xref ref-type="fig" rid="pone-0040264-g003">Fig. 3B</xref>
). Indeed, the maximum improvement predicted by the MLE model is a factor of ∼√2 (i.e., a threshold reduction of ∼29%), which is comparable to the largest improvements observed in our experiment (∼20–30%,
<xref ref-type="fig" rid="pone-0040264-g003">Fig. 3B</xref>
).</p>
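A worked example makes the heading dependence of the predicted gain explicit. The threshold values below are illustrative only, chosen merely to respect the ∼5-fold visual and ∼2-fold vestibular increases with eccentricity reported in [21]; they are not the measured thresholds of our subjects.

# Hedged worked example with illustrative threshold values (arbitrary units).
import numpy as np

def mle_threshold(sigma_vis, sigma_vest):
    # Standard MLE combined threshold [35].
    return np.sqrt(sigma_vis**2 * sigma_vest**2 / (sigma_vis**2 + sigma_vest**2))

# Forward (0 deg): vision far more reliable than vestibular -> little predicted gain.
vis0, vest0 = 1.0, 3.0
print(1 - mle_threshold(vis0, vest0) / vis0)     # ~0.05 (5% predicted reduction)

# Lateral (90 deg): visual threshold x5, vestibular x2 -> comparable reliabilities.
vis90, vest90 = 5.0, 6.0
print(1 - mle_threshold(vis90, vest90) / vis90)  # ~0.23 (23% reduction, near the observed 20-30%)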
<p>Note that direct extension of MLE cue-integration predictions to our object motion task requires some assumptions. First, the estimate of self-motion should be unbiased, or the bias should remain fairly constant for a given heading direction. Second, the operation that cancels the expected visual consequences of self-motion (described as stage two, above) should introduce little noise into the object motion estimate. If either of these assumptions is substantially violated, the expected improvement in performance in the combined condition will be reduced relative to the MLE prediction.</p>
<p>While the present results are suggestive, they do not prove conclusively that object motion perception depends directly on heading recovery. Recent work with visual-only stimuli has aimed to test the hypothesis that object motion estimates can be predicted directly from heading estimates in response to an illusory optic flow stimulus
<xref ref-type="bibr" rid="pone.0040264-Warren6">[36]</xref>
. Results of that study are inconsistent with predictions of the strict self-motion-cancellation hypothesis, suggesting that flow parsing does not necessarily depend on heading recovery. Clearly, further research is needed on this topic.</p>
<p>Importantly, an alternative explanation of our results based on a residual TVOR, which might cause a physical (rather than computational) reduction of background optic flow, is inconsistent with our data. Mean eye velocity was small on most trials and was similar for visual-only and combined conditions. We calculated object discrimination thresholds as a function of mean eye velocity and this analysis confirmed that the vestibular facilitation of object discriminability could not be attributed to reflexive eye movements. We suggest instead that vestibular self-motion signals contribute to optic flow parsing computations. Note, however, that a more complete understanding of the role of vestibular signals in flow parsing will require experiments that also measure biases in perceived object motion trajectory due to self-motion. Future studies should examine how vestibular signals modulate the ability of subjects to accurately judge the direction of object motion (relative to the world) in the presence of self-motion.</p>
</sec>
<sec id="s4b">
<title>Neurophysiological Implications</title>
<p>Given the above considerations, it is striking that we observed an overall decrease in thresholds in the combined condition. Although modest, the improvements in object motion discrimination thresholds that we have observed are likely to be functionally relevant. Moreover, it is possible that the same cortical areas with convergent optic flow and vestibular inputs (e.g., areas MSTd and VIP)
<xref ref-type="bibr" rid="pone.0040264-Chen1">[25]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Chen2">[26]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Gu3">[28]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Duffy1">[37]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Schlack1">[38]</xref>
, which have been implicated in mediating the improvement in heading discrimination thresholds
<xref ref-type="bibr" rid="pone.0040264-Fetsch1">[17]</xref>
<xref ref-type="bibr" rid="pone.0040264-Gu1">[19]</xref>
, also mediate improved object motion discrimination during simultaneous vestibular stimulation. Particularly relevant might be a group of cortical multisensory neurons with incongruent visual and vestibular preferences
<xref ref-type="bibr" rid="pone.0040264-Chen2">[26]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Gu3">[28]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Takahashi1">[39]</xref>
. These cells are sub-optimally stimulated when visual and vestibular signals are congruent, as during self-motion relative to a stationary visual environment in the absence of object motion. On the other hand, they are maximally stimulated by incongruent optic flow and vestibular signals
<xref ref-type="bibr" rid="pone.0040264-Gu3">[28]</xref>
,
<xref ref-type="bibr" rid="pone.0040264-Morgan1">[29]</xref>
, and are therefore ideally suited to signal instances when visual motion does not match the optic flow that might be expected based on vestibular input. This is precisely what occurs during independent object motion. As Wallach proposed
<xref ref-type="bibr" rid="pone.0040264-Wallach1">[40]</xref>
, the visual system could better estimate object motion during self-motion by ‘canceling’ the effects of self-motion, and it is possible that incongruent cells contribute to implementing this cancellation process, such that object motion may be estimated more precisely
<xref ref-type="bibr" rid="pone.0040264-Kim1">[41]</xref>
.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>We thank Babatunde Adeyemo and Jing Lin for technical assistance.</p>
</ack>
<fn-group>
<fn fn-type="conflict">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="financial-disclosure">
<p>
<bold>Funding: </bold>
The work was supported by the United States National Institutes of Health (NIH) R01 DC007620 (to DEA), an NIH Institutional National Research Service Award 5-T32-EY13360-07, a National Space Biomedical Research Institute fellowship PF-01103 (to PRM) through National Aeronautics and Space Administration 9-58, and a grant from the German Federal Ministry of Education and Research under the Grant code 01 EO 0901. Experiments were performed at Washington University Medical School, St Louis, MO. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</p>
</fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="pone.0040264-Warren1">
<label>1</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Warren</surname>
<given-names>WH</given-names>
</name>
<name>
<surname>Saunders</surname>
<given-names>JA</given-names>
</name>
</person-group>
<year>1995</year>
<article-title>Perceiving heading in the presence of moving objects.</article-title>
<source>Perception</source>
<volume>24</volume>
<fpage>315</fpage>
<lpage>331</lpage>
<pub-id pub-id-type="pmid">7617432</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Royden1">
<label>2</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Royden</surname>
<given-names>CS</given-names>
</name>
<name>
<surname>Connors</surname>
<given-names>EM</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>The detection of moving objects by moving observers.</article-title>
<source>Vision Res</source>
<volume>50</volume>
<fpage>1014</fpage>
<lpage>1024</lpage>
<pub-id pub-id-type="pmid">20304002</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Royden2">
<label>3</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Royden</surname>
<given-names>CS</given-names>
</name>
<name>
<surname>Hildreth</surname>
<given-names>EC</given-names>
</name>
</person-group>
<year>1996</year>
<article-title>Human heading judgments in the presence of moving objects.</article-title>
<source>Percept Psychophys</source>
<volume>58</volume>
<fpage>836</fpage>
<lpage>856</lpage>
<pub-id pub-id-type="pmid">8768180</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Rushton1">
<label>4</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rushton</surname>
<given-names>SK</given-names>
</name>
<name>
<surname>Bradshaw</surname>
<given-names>MF</given-names>
</name>
<name>
<surname>Warren</surname>
<given-names>PA</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>The pop out of scene-relative object movement against retinal motion due to self-movement.</article-title>
<source>Cognition</source>
<volume>105</volume>
<fpage>237</fpage>
<lpage>245</lpage>
<pub-id pub-id-type="pmid">17069787</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Rushton2">
<label>5</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rushton</surname>
<given-names>SK</given-names>
</name>
<name>
<surname>Warren</surname>
<given-names>PA</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Moving observers, relative retinal motion and the detection of object movement.</article-title>
<source>Curr Biol</source>
<volume>15</volume>
<fpage>R542</fpage>
<lpage>543</lpage>
<pub-id pub-id-type="pmid">16051158</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Warren2">
<label>6</label>
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Warren</surname>
<given-names>PA</given-names>
</name>
<name>
<surname>Rushton</surname>
<given-names>SK</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Perception of object trajectory: parsing retinal motion into self and object movement components.</article-title>
<source>J Vis 7: 2 1–11</source>
</element-citation>
</ref>
<ref id="pone.0040264-Warren3">
<label>7</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Warren</surname>
<given-names>PA</given-names>
</name>
<name>
<surname>Rushton</surname>
<given-names>SK</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Evidence for flow-parsing in radial flow displays.</article-title>
<source>Vision Res</source>
<volume>48</volume>
<fpage>655</fpage>
<lpage>663</lpage>
<pub-id pub-id-type="pmid">18243274</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Warren4">
<label>8</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Warren</surname>
<given-names>PA</given-names>
</name>
<name>
<surname>Rushton</surname>
<given-names>SK</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Optic flow processing for the assessment of object movement during ego movement.</article-title>
<source>Curr Biol</source>
<volume>19</volume>
<fpage>1555</fpage>
<lpage>1560</lpage>
<pub-id pub-id-type="pmid">19699091</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Warren5">
<label>9</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Warren</surname>
<given-names>PA</given-names>
</name>
<name>
<surname>Rushton</surname>
<given-names>SK</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Perception of scene-relative object movement: Optic flow parsing and the contribution of monocular depth cues.</article-title>
<source>Vision Res</source>
<volume>49</volume>
<fpage>1406</fpage>
<lpage>1419</lpage>
<pub-id pub-id-type="pmid">19480063</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Fajen1">
<label>10</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fajen</surname>
<given-names>BR</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>NG</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Perceiving curvilinear heading in the presence of moving objects.</article-title>
<source>J Exp Psychol Hum Percept Perform</source>
<volume>28</volume>
<fpage>1100</fpage>
<lpage>1119</lpage>
<pub-id pub-id-type="pmid">12421058</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Mapstone1">
<label>11</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mapstone</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Duffy</surname>
<given-names>CJ</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Approaching objects cause confusion in patients with Alzheimer’s disease regarding their direction of self-movement.</article-title>
<source>Brain</source>
<volume>133</volume>
<fpage>2690</fpage>
<lpage>2701</lpage>
<pub-id pub-id-type="pmid">20647265</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Gray1">
<label>12</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gray</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Macuga</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Regan</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Long range interactions between object-motion and self-motion in the perception of movement in depth.</article-title>
<source>Vision Res</source>
<volume>44</volume>
<fpage>179</fpage>
<lpage>195</lpage>
<pub-id pub-id-type="pmid">14637367</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Calabro1">
<label>13</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Calabro</surname>
<given-names>FJ</given-names>
</name>
<name>
<surname>Soto-Faraco</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Vaina</surname>
<given-names>LM</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>Acoustic facilitation of object movement detection during self-motion.</article-title>
<source>Proc Biol Sci</source>
<volume>278</volume>
<fpage>2840</fpage>
<lpage>2847</lpage>
<pub-id pub-id-type="pmid">21307050</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Gogel1">
<label>14</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gogel</surname>
<given-names>WC</given-names>
</name>
<name>
<surname>Tietz</surname>
<given-names>JD</given-names>
</name>
</person-group>
<year>1992</year>
<article-title>Determinants of the perception of sagittal motion.</article-title>
<source>Percept Psychophys</source>
<volume>52</volume>
<fpage>75</fpage>
<lpage>96</lpage>
<pub-id pub-id-type="pmid">1635859</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Dyde1">
<label>15</label>
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Dyde</surname>
<given-names>RT</given-names>
</name>
<name>
<surname>Harris</surname>
<given-names>LR</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>The influence of retinal and extra-retinal motion cues on perceived object motion during self-motion.</article-title>
<source>J Vis 8: 5 1–10</source>
</element-citation>
</ref>
<ref id="pone.0040264-Butler1">
<label>16</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Butler</surname>
<given-names>JS</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>ST</given-names>
</name>
<name>
<surname>Campos</surname>
<given-names>JL</given-names>
</name>
<name>
<surname>Bulthoff</surname>
<given-names>HH</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Bayesian integration of visual and vestibular signals for heading.</article-title>
<source>J Vis</source>
<volume>10</volume>
<fpage>23</fpage>
<pub-id pub-id-type="pmid">20884518</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Fetsch1">
<label>17</label>
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Fetsch</surname>
<given-names>CR</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Deangelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>Neural correlates of reliability-based cue weighting during multisensory integration.</article-title>
<source>Nat Neurosci</source>
</element-citation>
</ref>
<ref id="pone.0040264-Fetsch2">
<label>18</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fetsch</surname>
<given-names>CR</given-names>
</name>
<name>
<surname>Turner</surname>
<given-names>AH</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Dynamic reweighting of visual and vestibular cues during self-motion perception.</article-title>
<source>J Neurosci</source>
<volume>29</volume>
<fpage>15601</fpage>
<lpage>15612</lpage>
<pub-id pub-id-type="pmid">20007484</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Gu1">
<label>19</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
<name>
<surname>Deangelis</surname>
<given-names>GC</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Neural correlates of multisensory cue integration in macaque MSTd.</article-title>
<source>Nat Neurosci</source>
<volume>11</volume>
<fpage>1201</fpage>
<lpage>1210</lpage>
<pub-id pub-id-type="pmid">18776893</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Crowell1">
<label>20</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Crowell</surname>
<given-names>JA</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<year>1993</year>
<article-title>Perceiving heading with different retinal regions and types of optic flow.</article-title>
<source>Percept Psychophys</source>
<volume>53</volume>
<fpage>325</fpage>
<lpage>337</lpage>
<pub-id pub-id-type="pmid">8483696</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Gu2">
<label>21</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Fetsch</surname>
<given-names>CR</given-names>
</name>
<name>
<surname>Adeyemo</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Deangelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Decoding of MSTd Population Activity Accounts for Variations in the Precision of Heading Perception.</article-title>
<source>Neuron</source>
<volume>66</volume>
<fpage>596</fpage>
<lpage>609</lpage>
<pub-id pub-id-type="pmid">20510863</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-MacNeilage1">
<label>22</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>MacNeilage</surname>
<given-names>PR</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Vestibular facilitation of optic flow parsing.</article-title>
<source>J Vis Vision Sciences Society Abstract</source>
<volume>9</volume>
<fpage>701</fpage>
</element-citation>
</ref>
<ref id="pone.0040264-Zhang1">
<label>23</label>
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>Z</given-names>
</name>
<name>
<surname>MacNeilage</surname>
<given-names>PR</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Vestibular facilitation of visual motion segmentation; a role for incongruent visual-vestibular MSTd neurons?.</article-title>
<source>Society for Neuroscience Conference Abstract 857.14</source>
</element-citation>
</ref>
<ref id="pone.0040264-MacNeilage2">
<label>24</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>MacNeilage</surname>
<given-names>PR</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Vestibular heading discrimination and sensitivity to linear acceleration in head and world coordinates.</article-title>
<source>J Neurosci</source>
<volume>30</volume>
<fpage>9084</fpage>
<lpage>9094</lpage>
<pub-id pub-id-type="pmid">20610742</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Chen1">
<label>25</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Deangelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>Representation of vestibular and visual cues to self-motion in ventral intraparietal cortex.</article-title>
<source>J Neurosci</source>
<volume>31</volume>
<fpage>12036</fpage>
<lpage>12052</lpage>
<pub-id pub-id-type="pmid">21849564</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Chen2">
<label>26</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>A</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>A comparison of vestibular spatiotemporal tuning in macaque parietoinsular vestibular cortex, ventral intraparietal area, and medial superior temporal area.</article-title>
<source>J Neurosci</source>
<volume>31</volume>
<fpage>3082</fpage>
<lpage>3094</lpage>
<pub-id pub-id-type="pmid">21414929</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Fetsch3">
<label>27</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fetsch</surname>
<given-names>CR</given-names>
</name>
<name>
<surname>Rajguru</surname>
<given-names>SM</given-names>
</name>
<name>
<surname>Karunaratne</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
<etal></etal>
</person-group>
<year>2010</year>
<article-title>Spatiotemporal properties of vestibular responses in area MSTd.</article-title>
<source>J Neurophysiol</source>
<volume>104</volume>
<fpage>1506</fpage>
<lpage>1522</lpage>
<pub-id pub-id-type="pmid">20631212</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Gu3">
<label>28</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Watkins</surname>
<given-names>PV</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Visual and nonvisual contributions to three-dimensional heading selectivity in the medial superior temporal area.</article-title>
<source>J Neurosci</source>
<volume>26</volume>
<fpage>73</fpage>
<lpage>85</lpage>
<pub-id pub-id-type="pmid">16399674</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Morgan1">
<label>29</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Morgan</surname>
<given-names>ML</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Multisensory integration in macaque visual cortex depends on cue reliability.</article-title>
<source>Neuron</source>
<volume>59</volume>
<fpage>662</fpage>
<lpage>673</lpage>
<pub-id pub-id-type="pmid">18760701</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Wichmann1">
<label>30</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wichmann</surname>
<given-names>FA</given-names>
</name>
<name>
<surname>Hill</surname>
<given-names>NJ</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>The psychometric function: I. Fitting, sampling, and goodness of fit.</article-title>
<source>Percept Psychophys</source>
<volume>63</volume>
<fpage>1293</fpage>
<lpage>1313</lpage>
<pub-id pub-id-type="pmid">11800458</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Wichmann2">
<label>31</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wichmann</surname>
<given-names>FA</given-names>
</name>
<name>
<surname>Hill</surname>
<given-names>NJ</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>The psychometric function: II. Bootstrap-based confidence intervals and sampling.</article-title>
<source>Percept Psychophys</source>
<volume>63</volume>
<fpage>1314</fpage>
<lpage>1329</lpage>
<pub-id pub-id-type="pmid">11800459</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Angelaki1">
<label>32</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
<name>
<surname>Hess</surname>
<given-names>BJ</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>Direction of heading and vestibular control of binocular eye movements.</article-title>
<source>Vision Res</source>
<volume>41</volume>
<fpage>3215</fpage>
<lpage>3228</lpage>
<pub-id pub-id-type="pmid">11718768</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Angelaki2">
<label>33</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
<name>
<surname>Hess</surname>
<given-names>BJ</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Self-motion-induced eye movements: effects on visual acuity and navigation.</article-title>
<source>Nat Rev Neurosci</source>
<volume>6</volume>
<fpage>966</fpage>
<lpage>976</lpage>
<pub-id pub-id-type="pmid">16340956</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-McHenry1">
<label>34</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McHenry</surname>
<given-names>MQ</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>Primate translational vestibuloocular reflexes. II. Version and vergence responses to fore-aft motion.</article-title>
<source>J Neurophysiol</source>
<volume>83</volume>
<fpage>1648</fpage>
<lpage>1661</lpage>
<pub-id pub-id-type="pmid">10712486</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Ernst1">
<label>35</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion.</article-title>
<source>Nature</source>
<volume>415</volume>
<fpage>429</fpage>
<lpage>433</lpage>
<pub-id pub-id-type="pmid">11807554</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Warren6">
<label>36</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Warren</surname>
<given-names>PA</given-names>
</name>
<name>
<surname>Rushton</surname>
<given-names>SK</given-names>
</name>
<name>
<surname>Foulkes</surname>
<given-names>AJ</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>Does assessment of scene-relative object movement rely upon recovery of heading?.</article-title>
<source>J Vis</source>
<volume>11</volume>
<fpage>925</fpage>
</element-citation>
</ref>
<ref id="pone.0040264-Duffy1">
<label>37</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Duffy</surname>
<given-names>CJ</given-names>
</name>
</person-group>
<year>1998</year>
<article-title>MST neurons respond to optic flow and translational movement.</article-title>
<source>J Neurophysiol</source>
<volume>80</volume>
<fpage>1816</fpage>
<lpage>1827</lpage>
<pub-id pub-id-type="pmid">9772241</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Schlack1">
<label>38</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schlack</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Hoffmann</surname>
<given-names>KP</given-names>
</name>
<name>
<surname>Bremmer</surname>
<given-names>F</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Interaction of linear vestibular and visual stimulation in the macaque ventral intraparietal area (VIP).</article-title>
<source>Eur J Neurosci</source>
<volume>16</volume>
<fpage>1877</fpage>
<lpage>1886</lpage>
<pub-id pub-id-type="pmid">12453051</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Takahashi1">
<label>39</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Takahashi</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>May</surname>
<given-names>PJ</given-names>
</name>
<name>
<surname>Newlands</surname>
<given-names>SD</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
<etal></etal>
</person-group>
<year>2007</year>
<article-title>Multimodal coding of three-dimensional rotation and translation in area MSTd: comparison of visual and vestibular selectivity.</article-title>
<source>J Neurosci</source>
<volume>27</volume>
<fpage>9742</fpage>
<lpage>9756</lpage>
<pub-id pub-id-type="pmid">17804635</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Wallach1">
<label>40</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wallach</surname>
<given-names>H</given-names>
</name>
</person-group>
<year>1987</year>
<article-title>Perceiving a stable environment when one moves.</article-title>
<source>Annu Rev Psychol</source>
<volume>38</volume>
<fpage>1</fpage>
<lpage>27</lpage>
<pub-id pub-id-type="pmid">3548572</pub-id>
</element-citation>
</ref>
<ref id="pone.0040264-Kim1">
<label>41</label>
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Kim</surname>
<given-names>H</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Estimation of heading in the presence of moving objects: A functional role for ‘opposite’ cells in area MSTd?</article-title>
<source>Society for Neuroscience Conference Abstract 731.2</source>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>Allemagne</li>
<li>États-Unis</li>
</country>
<region>
<li>Bavière</li>
<li>Californie</li>
<li>District de Haute-Bavière</li>
<li>Texas</li>
<li>État de New York</li>
</region>
<settlement>
<li>Munich</li>
</settlement>
<orgName>
<li>Université de Californie du Sud</li>
</orgName>
</list>
<tree>
<country name="Allemagne">
<region name="Bavière">
<name sortKey="Macneilage, Paul R" sort="Macneilage, Paul R" uniqKey="Macneilage P" first="Paul R." last="Macneilage">Paul R. Macneilage</name>
</region>
</country>
<country name="États-Unis">
<region name="Californie">
<name sortKey="Zhang, Zhou" sort="Zhang, Zhou" uniqKey="Zhang Z" first="Zhou" last="Zhang">Zhou Zhang</name>
</region>
<name sortKey="Angelaki, Dora E" sort="Angelaki, Dora E" uniqKey="Angelaki D" first="Dora E." last="Angelaki">Dora E. Angelaki</name>
<name sortKey="Deangelis, Gregory C" sort="Deangelis, Gregory C" uniqKey="Deangelis G" first="Gregory C." last="Deangelis">Gregory C. Deangelis</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002118 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 002118 | SxmlIndent | more

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:3388053
   |texte=   Vestibular Facilitation of Optic Flow Parsing
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:22768345" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024